
### Enhancing Security: Over 200 American AI Companies Join President’s Partnership

The Artificial Intelligence Safety Institute Consortium, designed to fuel collaboration, can count …

Technological advances have a long history of transforming society for the better. Industry standards and political oversight, coupled with ethical guidelines, accountability, and reliable implementation, have fostered the growth of innovations ranging from the sewing machine to the automobile and the elevator.

Traditionally, regulatory frameworks have been established only after a technology's impact has been assessed. For example, seat belts were not mandatory in American automobiles until 1968. However, with growing concern about the broad implications and potential misuse of AI across work, politics, and daily life, governments worldwide are now striving for a more proactive approach to ensure responsible development and swift implementation of regulatory systems.

Recently, the United States announced the formation of the U.S. Artificial Intelligence Safety Institute Consortium (AISIC), comprising over 200 organizations, including academic institutions, leading AI companies, nonprofits, and other key stakeholders in the AI landscape. This consortium aims to promote the safe advancement and deployment of generative AI technologies, fostering collaboration between industry and government to adopt appropriate risk management strategies.

Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director, emphasized the need to comprehensively evaluate the capabilities, limitations, and impacts of AI technologies. The collaboration between businesses, academia, civil society, and government within the AISIC framework is intended to address the most pressing issues in AI development.

Gina Raimondo, the Commerce Secretary, underscored the importance of integrating AI safety efforts with real-world business dynamics, emphasizing the practical relevance of the institute’s work.

The consortium's more than 200 members include industry giants such as Adobe, Meta, Amazon, Apple, and Google, all committed to advancing AI technology responsibly. Financial institutions such as Bank of America, J.P. Morgan Chase, and Mastercard are also supporting the development of safe AI practices.

Nick Clegg, Meta’s President of Global Affairs, highlighted the necessity of balancing progress and responsibility in setting universal standards for trustworthy AI. Collaboration across sectors is crucial in achieving this goal.

Arvind Krishna, IBM’s Chairman and CEO, commended Secretary Raimondo and the administration for elevating AI safety to a national priority. IBM is dedicated to supporting the AI Safety Institute with its technological expertise to help ensure American artificial intelligence is used properly and trustworthily.

The AI Safety Institute’s role in promoting ethical and trustworthy AI usage is pivotal, as acknowledged by industry experts and thought leaders. The institute’s mandate includes creating guidelines for AI model assessment, facilitating standardization, and providing testing environments for AI systems, aligning with the U.S. government’s proactive approach to AI governance.

Despite the potential benefits of generative AI technology, challenges in ensuring AI safety persist. Kojin Oshiba, co-founder of Robust Intelligence, highlighted the distinction between traditional security measures and AI security, emphasizing the need for a nuanced understanding of AI-related risks and solutions.

By leveraging the collective expertise and perspectives of the diverse ecosystem supporting the AISIC, the U.S. can establish a robust framework for developing and deploying ethical AI technologies, paving the way for a more accountable and reliable AI landscape.

Last modified: February 9, 2024