If 2023 was the year of AI hype, 2024 is the year trust in AI becomes a mission-critical imperative. Recent AI failures have exposed the inadequacy of a self-regulatory, ostensibly “pro-innovation” stance: it compromises safety without improving the quality of AI technology. In the absence of robust safeguards, we risk hurtling towards another AI winter. The EU’s groundbreaking AI Act and President Biden’s AI Executive Order are setting new standards with comprehensive regulations, and entities that prioritize speed over compliance may find themselves excluded from lucrative government contracts and research opportunities.
Neither AI nor AI-related risk is new. Experts have long cautioned about the tangible harms that poorly designed AI systems can inflict. The AI Incident Database documents incidents dating back to 1983, when a computer-generated false alarm nearly triggered a nuclear crisis, a stark reminder of the potential dangers. Best-selling books like Weapons of Math Destruction and documentaries like Coded Bias brought a spectrum of AI failures to light, making the discussion of AI risks mainstream well before ChatGPT gained widespread attention.
Over time, various concrete measures have been proposed to mitigate AI risks. The EU’s General Data Protection Regulation (GDPR), introduced in 2016, addressed issues like AI bias, biometrics, profiling, and algorithmic decision-making. Initiatives like the ACM Conference on Fairness, Accountability, and Transparency, held since 2018, and the publication of numerous AI frameworks and standards have contributed to this dialogue.
Despite these efforts, many organizations persist in prioritizing speed over caution. As a US privacy attorney highlighted in Forbes, businesses are deploying AI with a hope-for-the-best mentality, urged by legal counsel to adopt a wait-and-see approach while neglecting the ethical considerations and substantial risks involved. Building consumer trust is a challenging task, requiring transparency about how AI tools function and how personal data is handled.
Beyond regulatory concerns, companies now face consumer apprehension about both AI and privacy. Research by Pew indicates that a significant share of people do not trust companies to use AI responsibly in their products, and The Conversation notes a trend among Gen Z consumers shifting away from “smart” products to “dumb” ones, partly due to privacy concerns.
In 2023, the urgency of “AI Safety” surged to the forefront of policy discussions. The UK AI Safety Summit at Bletchley Park marked a significant milestone, culminating in the adoption of the Bletchley Declaration. This declaration emphasizes the identification of AI safety risks and the formulation of risk-based domestic policies, encompassing transparency measures, accountability mechanisms for developers, evaluation metrics, testing tools, and support for public sector capabilities and scientific research.
The sudden prominence of “AI Safety” can be attributed to the global popularity of generative AI services, as noted by Dr. Gabriela Zanfir-Fortuna of the Future of Privacy Forum. While the risks associated with AI have not substantially changed, the surge in AI usage has propelled these concerns to the forefront of public discourse. The privacy community has long grappled with these issues, with Data Protection Authorities informally regulating AI. The challenges we face are not entirely new, but rather a recurrence of existing problems with added complexities.
Global data privacy expert Debbie “the Data Diva” Reynolds warns that businesses lacking proficiency in privacy and cybersecurity may venture into risky AI projects driven by AI FOMO. The ease of launching AI projects quickly and the allure of AI advancements pose significant risks to businesses and society alike.
The AI Safety Summit underscored that both current and future AI harms are critical AI safety issues. Political figures including VP Kamala Harris and European Commission VP Věra Jourová rejected as a false dichotomy the choice between addressing known AI risks and the potential existential threats posed by “frontier AI.” They emphasized the need for comprehensive regulations to mitigate existing and anticipated AI risks, promoting responsible innovation and healthy competition while safeguarding privacy, equality, and fundamental rights.
In contrast to the proactive regulatory approaches adopted by the EU and the White House, UK PM Rishi Sunak advocates for a more light-touch, “pro-innovation” stance towards AI regulation. He argues that regulating frontier AI prematurely could impede progress before fully understanding the associated risks.
The argument that regulation stifles innovation is predicated on the assumption that innovation will ultimately lead to shared prosperity. However, Professor Renée Sieber and Ana Brandusescu challenge this notion, highlighting that shared prosperity necessitates equitable distribution of innovation benefits across society, rather than concentrating them within specific segments. They assert that rules and regulations are essential to ensure widespread benefits from AI and prevent wealth concentration within the AI industry.
The narrative of self-regulation as a facilitator of innovation is called into question by recent events involving OpenAI and LAION-5B. OpenAI, initially established to advance general-purpose AI for humanity’s benefit, succumbed to the pressures of the AI arms race, leading to the premature release of ChatGPT, subsequent data breaches, and misuse for fraudulent activities. The governance challenges within OpenAI underscore the complexities of self-regulation and raise doubts about entrusting revolutionary AI technologies to entities driven by profit motives.
Similarly, the LAION-5B project, aimed at democratizing machine learning, encountered significant privacy and risk-management failures. The indiscriminate web scraping behind LAION-5B introduced harmful content, including Child Sexual Abuse Material (CSAM), into the dataset, underscoring the consequences of immature privacy practices. The subsequent exposé revealed the inherent risks of large-scale datasets and the imperative of conducting thorough risk assessments and adhering to privacy regulations.
Moving forward, 2024 marks a critical juncture where trust in AI becomes paramount. The EU and the White House are aligned in their regulatory objectives, with the EU introducing the hard-law AI Act and the White House implementing a comprehensive Executive Order based on the Blueprint for an AI Bill of Rights. These regulations emphasize transparency, accountability, and continuous improvement in AI technologies, setting high standards for providers and potentially excluding non-compliant entities from lucrative opportunities.
In conclusion, fostering trust in AI demands concerted efforts, stringent regulations, and a genuine commitment to benefiting all of humanity. While the journey towards trustworthy AI may be a marathon, the imperative to avoid another AI winter underscores the mission-critical nature of prioritizing AI safety and ethical considerations in the development and deployment of AI technologies.