David Doermann, an artificial intelligence expert at the University at Buffalo, warned legislators about the dangers of deepfakes and deceptive advertising when he testified before Congress in 2019.
Returning to Capitol Hill, Doermann told lawmakers that the risks have only grown since then.
Testifying before the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation on November 8, Doermann reiterated the need for increased funding to combat the misuse of AI systems.
Doermann stressed that rapidly advancing technologies bring both benefits and harms, and that vigilance is needed as they progress. “Every year, we witness the dual nature of these advancements,” he remarked.
Recent incidents illustrate that dual nature: AI was used to generate nude images of high school students in New Jersey, yet it also enabled the creation of new Beatles music. Despite existing executive orders and guidance from government and corporate entities, Doermann said, the ongoing and future harms of such technologies demand urgent attention.
Doermann, a SUNY Empire Innovation Professor and former chair of the Department of Computer Science and Engineering, described the harms of artificial or manipulated online content, citing abuse, cyberbullying, and the non-consensual sharing of explicit material, all of which threaten individuals and national security.
Doermann advocated federal regulation of manipulated media that balances freedom of expression with protection against the malicious use of deepfakes. He also pointed to public awareness campaigns and digital literacy initiatives as ways to help people recognize fabricated content.
To address these challenges, Doermann urged Congress to collaborate with technology companies, which he said bear responsibility for creating and enforcing policies to detect and combat algorithmically generated content on their platforms. He also called for stronger protection and consent laws to prevent unauthorized use of individuals’ likeness and messaging in algorithmic content.
Given the complexity and pervasiveness of these challenges, Doermann said, continued research and development in AI and algorithmic systems is essential, along with dedicated funding for initiatives that combat the misuse of deepfakes.
Doermann’s full testimony before the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation is available on the subcommittee’s website.