Fake images showing Donald Trump supported by fabricated Black voters, middle-schoolers creating pornographic deepfakes of their female classmates, and Google's Gemini chatbot failing to accurately generate images of White people are among the recent mishaps logged on the AI Incident Database, a repository that tracks the ways AI technology malfunctions or produces harmful outcomes.
Launched as a project of the Partnership on AI, an organization dedicated to ensuring that AI benefits society, the AI Incident Database is now an independent nonprofit funded by Underwriters Laboratories. Founded in 1894, Underwriters Laboratories is the oldest and largest independent testing laboratory in the United States, assessing products ranging from furniture to computer peripherals. The database has documented more than 600 automation- and AI-related incidents so far.
Patrick Hall, an assistant professor at the George Washington University School of Business and a member of the AI Incident Database's board of directors, pointed to an enormous information gap between the people who build AI and the general public. The organization's goal, he said, is to distribute information that closes that gap and pushes for greater transparency across the AI landscape.
Modeled on programs such as MITRE's CVE Program, which discloses cybersecurity vulnerabilities, and the National Highway Traffic Safety Administration's database of vehicle crashes, the AI Incident Database records and analyzes AI failures so they are not repeated. Just as documenting plane crashes, train accidents, and security breaches helps us understand past mistakes, cataloging AI incidents is meant to keep the field from falling into the same pitfalls.
A core team of roughly ten people runs the website, aided by volunteers and contractors who review and publish AI-related incidents. Heather Frase, a senior fellow specializing in AI assessment at Georgetown's Center for Security and Emerging Technology and a director at the AI Incident Database, said the platform is distinctive for its focus on the real-world consequences of AI risks and harms, rather than merely cataloging software vulnerabilities and bugs.
The organization collects incidents from media coverage and monitors issues users report on Twitter. Before ChatGPT launched in November 2022, the database listed 250 unique incidents; the count has since passed 600. Tracking AI problems over time reveals notable trends and shows how the technology is affecting the real world today.
Hall said roughly half the reports in the database involve generative AI. Some are lighthearted, such as oddball Amazon products with automatically generated descriptions; others are far more serious, such as accidents in which autonomous vehicles injured people.
Hall lamented the "wild west" mentality of current AI development, with companies racing to innovate without fully understanding the societal consequences. The AI Incident Database aims to spotlight misuses of AI and their unintended consequences, giving developers and policymakers the information they need to improve their models and address the most critical risks.
Frase is most concerned about AI's potential to erode human rights and civil liberties, and she stressed that monitoring incidents is how we will know whether policies actually make AI safer over time. She expects physical harms to rise as technologies like generative robotics become more widespread, adding to the psychological and other intangible harms already associated with large language models.
The organization is seeking volunteers to help document incidents and broaden its reach. Frase emphasized that, despite the group's critical stance, its members are not opposed to AI; they advocate for its responsible and ethical use. Hall agreed, stressing that sustained, deliberate effort is needed to keep AI advancing in a safe and beneficial direction.