
### Tech Companies Praise Introduction of the Federal AI Risk Management Act

U.S. Senator Jerry Moran

The Federal Artificial Intelligence Risk Management Act, recently introduced by U.S. Senators Jerry Moran (R-Kan.) and Mark Warner (D-Va.), seeks to establish guidelines for the federal government to mitigate the risks associated with artificial intelligence (AI) while still taking advantage of the technology. Corporations, universities, and foundations have expressed approval and support for the initiative.

Chandler C. Morse, Vice President of Public Policy at Workday, stated: “Workday embraces the introduction of the Federal AI Risk Management Framework Act, emphasizing our commitment to implementing the NIST AI risk management framework. By mandating that governmental agencies and AI vendors adopt the NIST Framework, this proposal aims to promote responsible AI practices. Leveraging the federal government’s procurement influence will provide essential guidance to the private sector and enhance public trust in AI. We commend Senators Moran and Warner for their leadership and urge Congress to support the bill’s enactment.”

Fred Humphries, Corporate Vice President of U.S. Government Affairs at Microsoft, said that “establishing a recognized risk management framework by the U.S. Government can harness the potential of AI and enhance its safe deployment. We look forward to collaborating with Senators Moran and Warner to expand this framework.”

Christopher Padilla, Vice President of Government and Regulatory Affairs at IBM, expressed appreciation for Senators Warner and Moran’s bipartisan efforts on the Federal Artificial Intelligence Risk Management Act of 2023, emphasizing its role in ensuring the trustworthiness and responsibility of AI used by U.S. federal agencies. This act sets forth clear guidelines and a practical approach for companies to follow as they deploy AI for societal benefit, leveraging the established NIST Risk Management Framework.

BSA | The Software Alliance highlighted that “the NIST Artificial Intelligence Risk Management Framework (RMF) serves as a comprehensive resource to help government and private sector entities identify and address AI-related risks.” The Federal Artificial Intelligence Risk Management Act underscores the growing recognition of the NIST AI RMF as a key tool for managing AI risks amid broader bipartisan discussions. By mandating adherence to established best practices, the bill aims to reduce the risks of deploying AI systems within the federal government.

Michael Clauser, Head of US Federal Affairs at Okta, affirmed Okta’s support for the Federal AI Risk Management Framework Act, commending Senators Warner and Moran for their bipartisan approach. The bill directs federal software vendors and government agencies to align AI development and deployment with the NIST AI Risk Management Framework (RMF), as called for in the recent Executive Order on Artificial Intelligence. The RMF serves as a valuable resource for governing, mapping, measuring, and managing the risks associated with AI models of varying impact.

Russell Harrison, Managing Director of IEEE-USA, expressed full support for the Federal Artificial Intelligence Risk Management Act of 2023. The NIST Risk Management Framework (RMF) safeguards the public from unintended AI risks while fostering the advancement of AI technologies for the benefit of society. By requiring organizations to follow standards such as those established by IEEE, the policy ensures public safety and progress. Clear guidelines and expected compliance promote transparency without impeding competitiveness.

Dr. Arvind Narayanan, a Professor of Computer Science at Princeton University, emphasized the complexity of AI evaluation and the challenges in government procurement of AI technologies. The Federal Artificial Intelligence Risk Management Act addresses this issue by enhancing expertise, analytical skills, and risk management to ensure responsible AI deployment in critical decision-making processes.

Yacine Jernite, Machine Learning & Society Lead at Hugging Face, emphasized the importance of responsible decision-making and stakeholder involvement in AI risk management. The Act is expected to facilitate knowledge sharing, resource creation, and best practices development, fostering responsible and collaborative AI technology advancements within the federal government.

Andrew Howell, Executive Director of the Business Cloud Coalition, voiced support for the Federal AI Risk Management Act of 2023, which would mandate adoption of the NIST AI risk management framework in federal AI procurement. The initiative aligns with the government’s commitment to trustworthy technology, promising greater reliability and security in AI systems through standardized risk management practices. The coalition views the bill as a pivotal step in reinforcing U.S. leadership in the responsible use and innovation of artificial intelligence.
