Rep. Don Beyer, vice chair of the Congressional AI Caucus and a member of the New Democrat Coalition’s AI working group, is optimistic that AI legislation can be signed into law in 2024.
Rep. Don Beyer, D-Va., arrives for a House Democratic caucus meeting at the U.S. Capitol on May 31, 2023, in Washington, D.C. (Photo by Anna Moneymaker/Getty Images)
One of the House’s most prominent voices on artificial intelligence, Rep. Don Beyer, D-Va., serves as vice chair of the bipartisan Congressional AI Caucus and sits on the New Democrat Coalition’s AI working group. He has been an active advocate for legislation to regulate the technology, including a recent proposal that would require federal agencies and their vendors to follow the AI risk management framework developed by the National Institute of Standards and Technology.
In addition to his legislative efforts, Beyer is pursuing a master’s degree in machine learning in his spare time, a measure of how seriously he is working to understand the technology.
In a recent interview with FedScoop, Beyer said he is optimistic that President Joe Biden will sign federal AI legislation this year. He acknowledged the skepticism: no major AI-focused bill has yet been finalized, and House Speaker Mike Johnson, R-La., has not directly endorsed one. Still, Beyer argued that the proposals on the table are gaining real traction.
He emphasized that the forthcoming legislation is bipartisan, and he cast it as a deliberate break from the hands-off approach Congress took to social media regulation in past decades. By addressing AI governance proactively, Beyer hopes to steer the country toward a more responsible posture on emerging technology.
In a detailed conversation, Beyer discussed various aspects of the House’s AI agenda for the year, including funding for the National Institute of Standards and Technology (NIST), potential risks associated with AI deployment, the role of Congress in AI governance, and the reasons behind his optimism for the future.
Editor’s note: The transcript has been edited for clarity and length.
FedScoop: Rep. Beyer, you have been actively involved in AI-related initiatives. How is your AI master’s program progressing alongside your legislative efforts?
Rep. Don Beyer: The program is going well. Balancing Monday and Wednesday classes with a Thursday morning lab has its challenges, especially with scheduling conflicts during hearings. The coursework, currently focused on object-oriented programming, is engaging and enjoyable. I look forward to delving deeper into the subject matter in the coming months.
FS: Let’s delve into the Federal Artificial Intelligence Risk Management Act that you recently introduced. What prompted this legislative proposal?
DB: The idea behind this legislation emerged roughly nine months ago as a pragmatic first step, given how complex and time-consuming it would be to set new standards for the private sector directly. The bill builds on the requirement in the President’s executive order that federal agencies adhere to the NIST risk management framework in AI-related contracts. By formalizing that requirement in law, we aim not only to ensure responsible AI use within the government but also to signal to the private sector the importance of adopting similar standards.
FS: Considering the Office of Management and Budget’s forthcoming guidelines for AI usage by federal agencies, how do you foresee the interaction with existing AI regulations and standards?
DB: NIST has long been regarded as the gold standard for setting technical benchmarks, and we anticipate convergence toward a cohesive set of effective standards in the AI domain. Challenges exist, particularly in the intricate supply chain behind AI systems, and addressing those complexities early is crucial. There is a parallel to computer science principles like inheritance, where existing code structures are reused and extended rather than rebuilt; in the same way, we must proactively manage these dependencies to make AI implementation go more smoothly.
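(For readers unfamiliar with the analogy Beyer references: inheritance is the object-oriented idea that a new class can reuse an existing class’s behavior and extend it rather than rebuilding it from scratch. The short Python sketch below is purely illustrative; the class names are hypothetical and are not drawn from any actual agency system or from the NIST framework.)

# Purely illustrative sketch of inheritance; class names are hypothetical.
class RiskAssessment:
    """A baseline review an agency might run on any software purchase."""

    def __init__(self, vendor: str):
        self.vendor = vendor

    def checks(self) -> list[str]:
        return ["security review", "privacy review"]


class AIRiskAssessment(RiskAssessment):
    """Inherits the baseline review and extends it for AI systems."""

    def checks(self) -> list[str]:
        # Reuse the existing checklist, then add AI-specific items.
        return super().checks() + ["bias audit", "model documentation"]


assessment = AIRiskAssessment(vendor="ExampleCo")
print(assessment.checks())
# ['security review', 'privacy review', 'bias audit', 'model documentation']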
FS: The idea of establishing a new regulatory agency for AI oversight has been discussed. What are your thoughts on this proposal?
DB: I see two sides to it. Given the global nature of AI governance, international coordination through entities like the United Nations may well be necessary. But creating a new federal agency raises concerns for me. AI requirements differ widely across government sectors, so I am cautious about endorsing additional bureaucratic structures. Leveraging existing frameworks, such as building AI oversight on the NIST guidelines, can effectively address vendor relationships within specific agencies.
FS: Critics argue that the current risk management framework may not adequately address civil rights, bias, and safety concerns in AI applications. How do you respond to those concerns?
DB: Recognizing the diverse capabilities and commitments of different agencies, I believe in fostering multiple initiatives to address AI challenges. By encouraging various approaches and learning from both successful and unsuccessful endeavors, we can iteratively enhance our understanding and application of AI standards.
FS: Looking ahead to 2024, what are the key priorities for the Congressional AI Caucus, and how do you envision shaping the AI legislative landscape?
DB: While I serve as a humble vice chair, the focus within the Congressional AI Caucus revolves around selecting and advancing a subset of the numerous AI bills already introduced. Our primary goal is to secure the enactment of several AI bills under President Biden’s administration this year, laying a solid foundation for future legislative endeavors based on practical AI experiences.
FS: As a member of the AI working group within the New Democrat Coalition, how do you perceive the coalition’s approach to AI governance compared to other Democratic factions?
DB: It is hard to draw sharp distinctions between the New Democrat Coalition’s stance on AI governance and that of other Democratic groups, particularly the Progressive Caucus; where they differ, it may be mostly in the level of ambition. And I see minimal disparity between the bipartisan Congressional AI Caucus and the New Democrat Coalition, which share a commitment to advancing AI legislation.
FS: Concerns about the potential misuse of generative AI tools in upcoming elections have been raised. How do you view this issue, and what measures are being considered to address such challenges?
DB: The apprehension surrounding the misuse of generative AI tools, especially in election contexts, is a valid concern echoed globally. While anticipating such challenges, our emphasis lies on fostering public awareness and skepticism towards manipulated content. By advocating for transparency measures, such as mandatory ad disclosures, we aim to mitigate the risks associated with AI-driven misinformation campaigns.
FS: Reflecting on the most promising and concerning aspects of AI technology, what excites you the most, and what do you perceive as the primary challenges moving forward?
DB: The transformative potential of AI in scientific realms, particularly in medical applications, stands out as a groundbreaking development. Innovations like AlphaFold, with its precision in protein folding, hold immense promise for accelerating drug discovery and advancing healthcare solutions. Conversely, the imminent challenge of job displacement underscores a pressing public policy and societal concern. Addressing these disruptions necessitates proactive strategies to navigate the evolving AI landscape, while also delving deeper into understanding and mitigating existential risks associated with AI advancements.
FS: Thank you for sharing your insights and perspectives on AI governance and legislation, Rep. Beyer.
Written by Rebecca Heilweil
Rebecca Heilweil is a technology reporter for FedScoop, where she covers topics including space, transportation, quantum computing, and disaster management.
Previously, she was a reporter at Recode/Vox and has contributed to publications such as Fortune, Slate, The Wall Street Journal, and the Philadelphia Inquirer.
You can reach her at rebecca.heilweil@fedscoop.com.