CNN
AI technologies like ChatGPT have gained widespread acceptance, prompting substantial investments from companies aiming to revolutionize our daily lives and work environments.
Despite this enthusiasm, concerns persist regarding potential biases and inaccuracies in AI responses. Generative AI tools, including ChatGPT, have faced allegations of copyright violations and have been misused to produce unauthorized intimate content.
Recent attention has focused on the rise of “deepfakes,” exemplified by AI-generated explicit images of Taylor Swift that circulated on social media, an episode that highlighted the harm mainstream artificial intelligence can enable.
In his 2024 State of the Union address, President Joe Biden urged Congress to enact legislation regulating artificial intelligence, proposing a prohibition on “AI voice impersonation and more.” He stressed the importance of harnessing AI’s benefits while guarding against its risks, warning of the threats the technology could pose to Americans if left unchecked.
This call to action followed a fraudulent robocall scheme mimicking the President’s voice, targeting thousands of voters in New Hampshire—an AI-driven effort to interfere in the election. Despite warnings from disinformation experts about AI’s electoral risks, few anticipate regulatory measures to curb the AI industry during a contentious election cycle.
Nevertheless, technology behemoths and AI firms continue to captivate consumers and enterprises with innovative features and functionalities.
Most recently, OpenAI, the developer of ChatGPT, introduced a new AI model named Sora, touting its capacity to generate “realistic” and “imaginative” 60-second videos from concise text prompts. Microsoft integrated its AI assistant, Copilot—powered by ChatGPT technology—into its suite of products, including Word, PowerPoint, Teams, and Outlook, widely used by businesses globally. Additionally, Google launched Gemini, an AI chatbot replacing the Google Assistant feature on specific Android devices.
Concerned Experts
AI experts, scholars, and legal professionals express reservations about the widespread adoption of AI in the absence of effective regulatory oversight. Hundreds of experts have signed a letter urging AI companies to enact policy revisions and commit to independent safety assessments and accountability.
The letter warns against replicating the missteps of social media platforms that have impeded research aimed at holding them accountable, resorting to legal threats or other tactics to suppress scrutiny. It cites instances where generative AI companies obstructed independent research, emphasizing the need to enable researchers to evaluate the safety, security, and reliability of AI systems for informed policymaking.
Suresh Venkatasubramanian, a computer scientist and professor at Brown University, shares concerns about the gap between the promise and the reality of AI advancements. He stresses the importance of companies delivering on their AI commitments while avoiding subpar outcomes. Venkatasubramanian, who previously served as an AI policy advisor at the White House, emphasizes the need for policymakers to establish clear industry guidelines.
Arvind Narayanan, a computer science professor at Princeton, echoes these sentiments, expressing apprehension that the rapid pace of AI progress is outstripping society’s ability to adapt. He proposes more substantial reforms, such as taxing AI companies to fund social welfare programs.
While generative AI offers clear advantages, users must recognize the current limitations and intricacies of these technologies.
AI Insights
When asked about their readiness for widespread adoption, ChatGPT and other generative AI tools respond with confidence while stressing that ongoing work is needed to address ethical, societal, and regulatory obstacles before integration can be responsible and beneficial.
Google’s Gemini AI tool, previously known as Bard, echoes similar sentiments cautiously, noting mixed signals regarding mass adoption and emphasizing the necessity for additional training to enhance productivity. It also acknowledges concerns about bias in training data impacting AI outputs and stresses responsible usage and accountability.
CNN’s Brian Fung contributed to this report.