The two issues matter not only individually; they are deeply interconnected, writes Nicholas Piachaud.
Image: Linus Zoll & Google DeepMind / Better Images of AI / Generative Image models / CC-BY 4.0
Public policy debates about technology, whether at Davos or in legislatures around the world, are currently dominated by AI: its benefits, its drawbacks, and the need to regulate it.
This discourse matters. AI is the defining technology of our era, and it demands legislation to match: rules that safeguard individual rights, promote transparency, prevent tech giants from monopolizing the market, and hold AI companies accountable when things go wrong.
Amid the fervor over AI policy, however, another critical issue risks being overshadowed: privacy. While deliberations on the EU AI Act attract significant attention, the European Commission's ongoing consultation on the General Data Protection Regulation (GDPR) has taken a back seat. Yet since it took effect six years ago, the GDPR has been crucial in protecting the privacy of millions of people across the EU. The evaluation due in May 2024 could expose the GDPR to attempts at dilution by vested interests, and big tech companies can be expected to seize the opportunity to weaken its safeguards.
Both issues deserve equal attention, and their interdependence means they should be addressed together: robust data privacy rules are not just desirable but a prerequisite for effective AI governance. At the core of both lie intense debates about data.
Data lies at the heart of Large Language Models' dominance of the policy and media landscape. Access to vast troves of proprietary data confers a major competitive edge in training AI models, a dynamic that fuels a race to the bottom on privacy. Mozilla's *Privacy Not Included guide captures the trend succinctly: as AI is built into consumer products and services, companies collect more data (and share, sell, and leak more of it). This holds across domains, from mental health apps and reproductive health apps to cars and even children's toys. In 2022 alone, more than 1,800 data breaches were reported in the US, affecting some 422 million people. And major AI companies like Microsoft remain opaque about what data they use to train their models.
AI and online privacy converge in other ways too, from the need for consumer transparency and notice to the perils of deceptive design. So where do the strategic opportunities lie for putting privacy principles at the heart of AI regulation?
In the US, passing legislation like the previously proposed American Data Privacy and Protection Act (ADPPA), which Mozilla endorsed, would be a significant stride toward the baseline privacy guarantees that underpin responsible AI. Legal limits on data collection would deter companies from indiscriminately scraping data to maintain a competitive advantage.
Yet a federal privacy law has remained elusive for years. In its absence, interim measures are available: the Federal Trade Commission can advance its pivotal Commercial Surveillance and Data Security rulemaking, existing consumer protection and competition rules can be rigorously enforced, and individual states can follow California's lead with laws akin to the California Consumer Privacy Act. Notably, a proposed data privacy law in Maine has already unsettled major tech companies, a sign of potentially impactful privacy legislation.
Opportunities also abound across the Atlantic. In the EU, as the GDPR approaches its May 2024 review, it is vital to sustain enforcement pressure and protect the integrity of this landmark legislation. The GDPR's protection of personal data is the cornerstone of much of European digital policy, and any dilution of its provisions would have far-reaching consequences: the latest version of the AI Act, for instance, references the GDPR extensively. Meanwhile, European Data Protection Authorities are using the GDPR to crack down on intrusive data practices, including how OpenAI trained ChatGPT.
As these policy debates evolve and intersect, policymakers should recognize the pivotal role open source can play in fostering trustworthy AI without compromising privacy. Open-source tooling can accelerate privacy-preserving approaches to AI by enabling people to run models on their own devices, so confidential data never leaves them. Beyond that, open source unlocks further opportunities, including greater scrutiny of AI models and more competition in the market.
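To make the on-device point concrete, here is a minimal sketch of local inference using the open-source Hugging Face transformers library; the model name ("distilgpt2") is an illustrative small open model chosen for this example, not a recommendation. Once the weights are cached locally, prompts and outputs never have to leave the user's machine.

```python
# Minimal sketch: running a small open-weights model entirely on-device.
# Assumes `pip install transformers torch`; "distilgpt2" is illustrative only.
# After the initial weight download, inference happens locally and no prompt
# or output is sent to a remote service.
from transformers import pipeline

# Load the model onto the local machine (downloads weights once, then caches).
generator = pipeline("text-generation", model="distilgpt2")

# Confidential text is processed on the user's own hardware.
private_prompt = "Notes from my doctor's visit:"
output = generator(private_prompt, max_new_tokens=40, do_sample=False)
print(output[0]["generated_text"])
```

Local inference of this kind is exactly the privacy-preserving pattern that open models make possible and that closed, API-only systems rule out.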
Privacy is the bedrock of a robust AI policy framework. As we await new rules for AI systems, we must reinforce existing privacy norms, or, in the case of the US, finally turn them into law. In a world where AI systems increasingly shape decisions about us, data protection is non-negotiable.