California’s Privacy Protection Agency (CPPA) is gearing up for its next initiative: introducing safeguards for artificial intelligence (AI).
The state’s privacy regulator plays a crucial role in setting guidelines for tech behemoths, given the concentration of Big Tech (and Big AI) in the region. It has released draft regulations addressing the use of individuals’ data in automated decision-making technology (ADMT) – a category that increasingly overlaps with AI. The draft is touted as one of the most extensive sets of rules in this domain, drawing inspiration from the European Union’s stringent General Data Protection Regulation (GDPR). The CPPA aims to go further with more specific provisions designed to prevent tech giants from circumventing the rules.
The fundamental aspects of the proposed framework include the right to opt-out, mandatory pre-use notifications, and access rights. These provisions empower California residents to gain insights into how their data fuels automation and AI technologies.
The regulations extend to AI-based profiling, potentially impacting major U.S. adtech firms like Meta, which heavily rely on user tracking and profiling for targeted advertising. If enacted, these rules could compel businesses to allow Californians to reject commercial surveillance, particularly for behavioral advertising purposes. The draft also restricts exemptions that might apply in other scenarios, emphasizing the right to opt-out for consumers.
The CPPA’s approach to regulating ADMT is centered on a risk-based strategy, akin to the EU’s AI Act, which focuses on the risk assessment of AI applications. With the EU facing challenges in legislating AI practices, California could emerge as a key global authority on AI regulations.
While California’s AI rules primarily safeguard state residents, companies within its jurisdiction may voluntarily extend these protections to users in other states. However, the CPPA’s jurisdiction and enforcement are limited to California.
In line with the GDPR-inspired California Consumer Privacy Act (CCPA), which came into force in 2020, the proposed regulations seek to fortify consumer rights in the realm of automated decision-making technologies. By providing opt-out and access rights, the CPPA aims to ensure responsible AI usage while upholding privacy standards.
The draft regulations outline opt-out and access rights concerning businesses’ utilization of ADMT. Businesses must offer consumers the choice to opt-out of their data being leveraged for automated decision-making, with exceptions for security, fraud prevention, and consumer-requested services.
Moreover, businesses deploying ADMT must furnish pre-use notices to consumers, enabling them to make informed decisions about their data usage. The framework also mandates access rights, requiring businesses to disclose details about the technology’s output, decision-making processes, and the logic behind ADMT applications.
The proposed regulations set thresholds for decision-making scenarios involving consumers, employee profiling, and public space surveillance. The upcoming consultation will further debate the applicability of rules to behavioral advertising, profiling minors, and data processing for training ADMT.
The CPPA’s rulemaking process has commenced, with public consultations set to begin soon. A finalized regulation could arrive in the second half of next year, though compliance deadlines for affected companies may push the practical timeline to 2025. The draft defines ADMT as any system, software, or process that uses computation to make decisions based on personal information, including profiling.
California’s proactive stance on AI regulation underscores its commitment to privacy and innovation, setting a potential precedent for AI governance globally.