
Crafting Safety, Security, and Trust: Small Federal Agency Sets AI Standards


BOSTON (AP) — Artificial intelligence, a technology rivaled only by nuclear fission in its potential impact, is poised to shape our future significantly. Ensuring the safety, security, trustworthiness, and social responsibility of AI systems is paramount.

The development of AI has been driven primarily by the private tech sector, which has resisted regulation given the stakes involved. With billions of dollars on the line, the Biden administration faces a significant challenge in establishing standards for AI safety.

To address this challenge, the administration has turned to the National Institute of Standards and Technology (NIST), a small federal agency known for defining standards in various fields, from atomic clocks to election security technology.

Leading NIST’s AI efforts is Elham Tabassi, the agency’s chief AI advisor. Tabassi, an Iranian-born expert in electrical engineering, played a key role in developing the AI Risk Management Framework, which laid the foundation for President Biden’s recent AI executive order. The framework highlighted risks such as bias and threats to privacy in AI systems.

In an interview edited for length and clarity, Tabassi discusses the importance of creating a shared vocabulary for AI to facilitate communication across diverse disciplines and ensure clarity in discussions.

She emphasizes the need for interdisciplinary collaboration involving computer scientists, engineers, attorneys, psychologists, and philosophers to address the socio-technical complexities of AI systems effectively.

Despite being a relatively small agency within the Commerce Department, NIST has a history of engaging with diverse communities to produce impactful outcomes. With a team of over a dozen experts dedicated to AI initiatives, NIST is expanding its efforts in this critical area.

While facing tight deadlines set by the executive order, Tabassi remains optimistic, citing the team’s expertise and commitment to delivering results. NIST has already initiated public working groups to develop guidelines for ensuring AI safety and trustworthiness.

As NIST navigates the evolving landscape of AI regulation and oversight, transparency and scientific independence remain core principles. The agency is exploring avenues for supporting research through grants and awards while maintaining its autonomy in decision-making.

Tabassi underscores the importance of integrating trustworthiness considerations into the design, development, and deployment of AI systems to mitigate risks effectively. Early intervention and ongoing monitoring are essential to address potential issues before they escalate.

Ensuring the responsible development and deployment of AI technologies requires a collaborative, proactive approach that accounts for the diverse contexts in which these systems operate. The evolving regulatory landscape underscores the need for continuous evaluation and adaptation to uphold safety, security, and ethical standards as AI advances.

Last modified: March 20, 2024