CHARLESTON, West Virginia – Yesterday, Senator Shelley Moore Capito (R-WV) joined Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), Ben Ray Luján (D-NM), and John Hickenlooper (D-CO), all members of the Senate Committee on Commerce, Science, and Transportation, in introducing the AI Research, Innovation, and Accountability Act of 2023. This bipartisan initiative, led by Senators Thune and Klobuchar, aims to establish a framework for promoting AI development while enhancing transparency, accountability, and safety in the use of high-impact AI applications. Senator Capito expressed her enthusiasm for collaborating with colleagues on this important bipartisan effort, emphasizing the need for clear accountability without impeding the progress of machine learning.
Key Provisions of the AI Research, Innovation, and Accountability Act of 2023 include:
- National Institute of Standards and Technology (NIST) Research: The bill mandates NIST to conduct research for setting standards that ensure the authenticity and provenance of online content, similar to the work of the Coalition for Content Provenance and Authenticity. This initiative aims to distinguish between human-generated and AI-generated content and standardize techniques for identifying emergent properties in AI systems.
- Artificial Intelligence Definitions: Introduces new definitions such as “generative,” “high impact,” and “critical impact” for AI systems, along with a distinction between “developer” and “deployer.”
- Generative AI Transparency: Requires major internet platforms to notify users when generative AI is used to create content they see, ensuring transparency, with potential enforcement by the U.S. Department of Commerce.
- NIST Recommendations to Agencies: Directs NIST to provide recommendations to agencies for implementing risk-based guardrails on high-impact AI systems, developed in collaboration with industry partners.
- AI Risk Management Framework: Mandates businesses using critical-impact AI to conduct comprehensive risk assessments and provide detailed reports on risk management to the Commerce Department.
- Critical-Impact AI Certification: Establishes a certification framework for critical-impact AI systems, with an advisory committee offering input on qualification standards.
- Submission of Certification Plan: Requires the Commerce Department to submit a five-year strategy for testing and certifying critical-impact AI systems to Congress and an expert committee.
- Critical-Impact AI Standards: Grants the Commerce Department authority to set testing, evaluation, validation, and verification (TEVV) standards for critical-impact AI, with deployers self-certifying compliance and the public providing feedback on the standards.
- Exclusions and Compliance: Allows for exemptions from certain standards based on Commerce Department approval and outlines compliance measures and enforcement actions.
- AI Consumer Education: Calls for the creation of a working group within the Commerce Department to develop consumer education initiatives for AI technologies.