On Monday, President Joe Biden issued an executive order on artificial intelligence (AI), establishing a framework for governing AI technology in the United States. The order aims to address a range of AI-related issues that have emerged over the past year, including requirements to test advanced AI models against weaponization risks, mandatory watermarking of AI-generated content, and measures to mitigate AI's potential impact on employment.
While the directive is specific to the US federal government, the country currently leads the global AI landscape, making any regulatory changes likely to have international implications. By stipulating that federal agencies can only engage with compliant companies, President Biden is leveraging the substantial sum of $694 billion in federal contracts to drive broader industry adherence to the new standards.
At roughly 20,000 words, the executive order emphasizes the need for stronger ethics and rigorous testing of AI systems. It mandates that government agencies conduct thorough, standardized evaluations of AI systems, covering ethical development, proper testing, and watermarking of AI-generated content. The order also calls for strengthening consumer protection laws around AI to address bias, discrimination, and privacy violations.
Moreover, the executive order sets clear timelines for both government agencies and tech firms to meet specific requirements, particularly focusing on generative AI technologies. It outlines detailed expectations for various deliverables, such as guidelines from the National Institute of Standards and Technology (NIST) on generative AI within 270 days and recommendations for detecting synthetic content within 240 days.
Additionally, the order mandates that AI companies demonstrate cybersecurity best practices, requiring continuous reporting on security measures for foundation models. While this move is deemed necessary for strengthening security, the added compliance burden may fall hardest on smaller AI firms, potentially reshaping competition in the industry.
A significant aspect of the executive order is its concern about AI's potential misuse in chemical, biological, radiological, and nuclear (CBRN) threats. While acknowledging AI's dual nature as both a tool for advancement and a potential risk factor, the order aims to compel companies working with AI tools to prevent inadvertent contributions to CBRN threats.
Furthermore, the directive places a strong focus on upskilling the workforce to navigate the evolving AI landscape and minimize job displacement. By investing in AI-related education, training, and research, and by attracting AI talent, the US government aims to equip employees across a wide range of roles with AI-related skills. This emphasis on upskilling reflects a recognition of AI's transformative impact on the workforce and the need for widespread education on AI tools.
In conclusion, the executive order signals a significant shift in AI governance, emphasizing stricter requirements for AI development, an increased focus on cybersecurity, and a broad push for workforce upskilling. While an executive order is not legislation and could be reversed by a future administration, it offers a glimpse into the evolving AI landscape and the changing dynamics of the global workforce, particularly in technology and AI-related careers.