
**Biden’s AI Executive Order: Advancing Security with a Dynamic Approach**

The EO on AI leans on established policy activities in cyber, but the government really has to focu…

Although the Biden administration’s executive order (EO) on artificial intelligence (AI) focuses on policy areas within the U.S. executive branch, its implications are far-reaching. Its directives will help shape not only domestic and international laws and regulations but also industry best practices.

Policymakers have recently placed strong emphasis on advances in AI, particularly generative AI, and calls from prominent industry figures to establish safeguards around artificial general intelligence (AGI) have drawn increased attention in Washington. Rather than a definitive conclusion, the EO should be seen as an initial and crucial step toward addressing AI policy comprehensively.

Drawing on our extensive experience with AI since the company’s founding in 2011, we aim to shed light on key issues at the intersection of technology, public policy, and cybersecurity.

The EO in Perspective

The EO, much like the technology it seeks to influence, is multifaceted. Its scope spans 13 key areas covering a wide array of operational and policy needs, ranging from security and law enforcement to consumer protection and the AI workforce. Notably, the intersection of AI and security receives significant attention and is elaborated upon in Section 4.

Before delving into specific security provisions, it is worth highlighting some observations about the EO’s overall scope and approach. The document strikes a delicate balance between acknowledging potential risks and advocating for innovation, experimentation, and potentially transformative technologies. While stakeholders may differ on where the right equilibrium lies in complex policy domains, several aspects of the document are promising.

Furthermore, the document designates agencies as “owners” of particular next steps across the EO’s domains, which facilitates stakeholder feedback and reduces the likelihood of errors or redundancies.

Additionally, the EO outlines avenues for soliciting stakeholder feedback and consultation, which will likely materialize as request for comment (RFC) opportunities issued by specific agencies. In numerous areas, the EO also establishes or proposes new advisory panels to gather structured stakeholder input on AI policy matters.

The EO also emphasizes expeditious progress on follow-on actions. By giving agencies 240-day deadlines rather than shorter, potentially burdensome timelines, it preserves meaningful engagement periods and improves the efficacy of RFC processes.

Lastly, the EO explicitly states: “As generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI.” This guidance aims to ensure that federal agencies explore beneficial applications of AI within their respective mission areas. By encouraging technological advancement both within and outside government, the EO fosters a climate conducive to innovation.

The EO’s Security Measures

Within the realm of security, the EO addresses several critical facets. Notably, the document recognizes agencies such as the National Institute of Standards and Technology (NIST), the Cybersecurity and Infrastructure Security Agency (CISA), and the Office of the National Cyber Director (ONCD) for their substantial expertise in applied cybersecurity.

One area of focus within the EO is mitigating risks associated with synthetic content, including AI-generated audio, imagery, and text. The actions outlined here are exploratory rather than prescriptive, so collaborative effort will be needed to devise effective solutions. With consequential decisions on the horizon, swift progress in this domain is imperative.

The EO’s directives underscore the importance of managing AI risks through established frameworks, some of which closely align with ongoing cybersecurity initiatives. By aligning with frameworks such as the AI Risk Management Framework (NIST AI 100-1), the Secure Software Development Framework, and the Blueprint for an AI Bill of Rights, the EO aims to mitigate risks inherent in deploying novel technologies while fostering cohesive approaches in areas where the distinctions between software, security, and AI blur.

Moreover, the document advocates for leveraging Sector Risk Management Agencies (SRMAs) to enhance preparedness within critical infrastructure sectors. It mandates a comprehensive review and evaluation of potential risks related to the use of AI in critical infrastructure, emphasizing the need to address vulnerabilities that could expose critical systems to failures, physical attacks, and cyber threats.

This directive marks a pivotal development in American AI policy, and it arrives at a crucial juncture. As highlighted in our testimony to the House Judiciary Committee, AI stands to improve security outcomes and is attracting increased interest from cybersecurity practitioners. Collaborative effort will be needed to ensure that defenders harness the potential benefits of AI while mitigating the risks posed by malicious actors leveraging AI systems.

By Robert Sheldon, Senior Director of Public Policy and Strategy at CrowdStrike, and Drew Bagley, Vice President and Counsel, Privacy and Cyber Policy at CrowdStrike.
