
### Can Tech Companies’ AI Content Standards Alone Prevent the Proliferation of Fake Content?

Meta, OpenAI and Google all announced new transparency and detection tools for AI content.

Government entities and AI service providers alike have introduced a wave of initiatives aimed at fortifying defenses against AI-generated misinformation.

Major AI companies recently launched new tools for labeling and detecting AI-generated content. Google announced it would join the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), a pivotal organization that sets technical standards for certifying the origins of digital content, including AI-generated content. Shortly after Google’s announcement, Meta outlined plans to label AI images sourced from external platforms, while OpenAI disclosed its intention to embed provenance metadata in images generated by ChatGPT and DALL-E via its API. Google also pledged support for a kind of “nutrition label” for AI content developed by C2PA and the Content Authenticity Initiative (CAI), which Adobe founded in 2019 and which received a significant boost in October with a substantial contribution from Adobe.
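
The C2PA approach works by embedding a cryptographically signed provenance manifest directly inside the media file. As a rough illustration of what that looks like at the byte level, here is a minimal Python sketch that scans a JPEG for the APP11 (JUMBF) marker segments in which C2PA manifests are typically carried. The file path is a placeholder, and this is only a heuristic byte scan under the assumption of standard C2PA JPEG embedding; real verification means parsing and cryptographically validating the manifest with a full C2PA library, not pattern-matching bytes.

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically check whether a JPEG appears to carry a C2PA manifest.

    C2PA embeds its manifest store in JUMBF boxes inside APP11 (0xFFEB)
    marker segments, so we walk the JPEG segment list and look for an
    APP11 payload containing the 'jumb' box type or the 'c2pa' label.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                          # lost marker sync; bail out
        marker = data[i + 1]
        if marker in (0x01, 0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            i += 2                         # standalone markers carry no length
            continue
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True                    # APP11 segment with JUMBF/C2PA bytes
        if marker == 0xDA:
            break                          # start of scan; metadata is over
        i += 2 + seg_len
    return False

print(has_c2pa_manifest("example.jpg"))    # placeholder path
```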

The updates are notable because they bring major distribution platforms into the standardization process. Wider platform-level participation could drive broader adoption of provenance standards and improve people’s ability to tell authentic content from fabricated content. Andy Parsons, senior director of CAI, said the industry’s information ecosystem needs a “snowball effect” to improve, emphasizing the importance of collaboration among corporations, academia, and governmental bodies.

Furthermore, these efforts push toward consistent implementation across platforms for content creation and distribution, in line with the C2PA standards that major AI model providers are adopting. Parsons noted that Adobe’s Firefly model was already compliant with C2PA standards when it launched last year.

Model providers aim to give users a way to determine the provenance and authenticity of generated content, Parsons told Digiday.
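
On the consumption side, checking those Content Credentials can be as simple as running the open-source c2patool CLI maintained under the CAI umbrella. The sketch below is a thin Python wrapper around it, assuming c2patool is installed on the system and prints the embedded manifest report as JSON when one is present (its default behavior at the time of writing); the function name, file path, and the `active_manifest` key are illustrative.

```python
import json
import subprocess

def read_content_credentials(image_path: str):
    """Return the C2PA manifest report for a file, or None if absent.

    Shells out to the open-source `c2patool` CLI, which prints the
    embedded manifest store as JSON when the file contains one.
    """
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest, or the file could not be read
    return json.loads(result.stdout)

report = read_content_credentials("generated_image.jpg")  # illustrative path
if report:
    print("Content Credentials found:", report.get("active_manifest"))
else:
    print("No provenance metadata embedded in this file.")
```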

Government bodies are also taking steps to combat AI-generated misinformation. After AI deepfake robocalls imitating President Joe Biden, the Federal Communications Commission banned AI-generated voices in robocalls earlier this month, deeming them illegal under the Telephone Consumer Protection Act. Concurrently, the White House announced an expanded AI partnership involving more than 200 entities from various sectors. Additionally, the European Commission is soliciting feedback on its Digital Services Act (DSA) guidelines for election integrity.

AI-powered political micro-targeting poses a tangible challenge, prompting several state legislatures to introduce new laws addressing AI in political advertising. Members of Congress have proposed regulations as well, but progress has been limited thus far. Recent research from Tech Policy Press indicates that large language models can facilitate the creation of precisely micro-targeted political ads on platforms like Facebook. Meanwhile, Meta’s independent Oversight Board recommended the company promptly reassess its policies on manipulated media, whether AI-generated or not.

Experts underscore the critical need to curb the dissemination of false information through social and search channels. Authenticating AI content plays a key role in fostering trust and transparency, though identifying AI deepfakes and text-based scams remains complex.

Josh Lawson, chair of the Aspen Institute’s AI and Democracy program, stresses the importance of combating misinformation created with AI. Good “hygiene” from the major platforms on the content-creation side matters, he says, but bad actors can still produce harmful content using open-source and jailbroken AI models.

Lawson argues that while generative AI expands the supply of content, its impact hinges on distribution: false information does the most damage when it reaches an audience. Preventing that spread, he says, is key to safeguarding democratic processes.

Amid concerns over online privacy, Meredith Whittaker, president of the private messaging app Signal, argues that the focus on deepfakes during election seasons diverts attention from more pervasive issues like targeted advertising. Whittaker suggests that this framing may benefit tech giants like Meta and Google, which have recently relaxed restrictions on political advertising.

However, the potency of a deepfake lies in its strategic distribution, Whittaker emphasizes.

### Artificial Intelligence Innovations and Developments

  • Google rebranded its Bard chatbot as Gemini, marking a significant milestone for its flagship large language model. Additionally, Google introduced new AI functionalities across various products and services.
  • Non-tech advertisers leveraged generative AI to craft standout strategies for the Big Game, while tech firms capitalized on Super Bowl LVIII’s broad viewership to showcase new AI features. Companies like Microsoft, Google, Samsung, CrowdStrike, and Etsy featured AI-driven advertisements during the Super Bowl.
  • Mixed reality applications powered by generative AI are on the horizon for the Apple Vision Pro headset. Early apps include Adobe Firefly, Wayfair, and ChatGPT.
  • A recent report from William Blair delves into the business implications of generative AI.
  • Advertising, social media, and cloud service providers continue to highlight generative AI in investor calls and earnings reports. Recent mentions include Omnicom, IPG, Snap Inc., Pinterest, Cognizant, and Confluent. However, Amazon’s cloud CEO cautioned that the hype surrounding generative AI could reach levels reminiscent of the dot-com bubble.