
Advocates Call for NIST to Enhance Transparency in Funding AI Research



Members of the U.S. House of Representatives have urged the National Institute of Standards and Technology (NIST) to clarify how it will allocate research funding through its new Artificial Intelligence Safety Institute. The agency is preparing to fund research initiatives aimed at promoting the safe and ethical development of artificial intelligence technologies, and lawmakers are calling for greater transparency in that process.

In a letter dated December 14 to NIST Director Laurie Locascio, Rep. Frank Lucas, R-Okla., who chairs the House Science, Space, and Technology Committee, highlighted the challenges NIST faces in the current landscape of AI safety research and emphasized the importance of transparency.

While efforts to pass comprehensive national AI legislation are still ongoing, NIST has taken a proactive role in safeguarding AI use within the federal government. President Joe Biden's executive order in October directed NIST to establish the Artificial Intelligence Safety Institute and to formulate guidelines for AI developers to conduct red-teaming tests, particularly for dual-use foundation models.

Under the Obama administration, NIST transitioned from a relatively obscure standards agency into a prominent player in cybersecurity. With the Biden administration's support, NIST now stands at the forefront of federal AI safety efforts and is garnering significant attention.

Lawmakers have raised concerns about NIST's plans to support AI research through the newly established AI Safety Institute. The letter pointed out that NIST has not sufficiently explained how it will award funding opportunities to academic institutions and private entities, and that details about the award process were not provided during a legislative staff briefing on December 11.

The lack of publicly available information on these funding operations, including the absence of any notices, announcements, or postings, has raised questions about the transparency of NIST's processes. The letter also flagged discrepancies between the information provided to businesses seeking collaborative research agreements and the stated application process for AISI-funded awards.

The letter underscored the need for NIST to prioritize scientific rigor and accountability when financing extramural research on AI safety.

This correspondence echoes the concerns expressed by AI and cybersecurity experts regarding the challenges NIST and federal agencies may encounter in implementing key components of the AI executive order.

On December 19, NIST issued a request for information soliciting suggestions on developing red-teaming and safety evaluation guidelines for AI developers. The guidance, mandated by the executive order, is intended to assist developers of advanced AI systems that could pose risks to national security or public safety.

Interested organizations can submit letters of interest to participate in a new consortium linked to the AI Safety Institute until January 15, 2024. The consortium aims to support the development and responsible use of safe and reliable AI by establishing proven, adaptable techniques and metrics.

The establishment of the AI Safety Institute has garnered praise from policymakers, who see NIST as a trailblazer in building a robust, scientifically grounded framework for AI trust and safety research.

Lawmakers expect NIST to uphold the same stringent standards of scientific excellence that govern the broader national research landscape when awarding funding for AI safety research.

NIST did not immediately respond to a request for comment from Data Security Media Group.
