
### The Risk Posed by Deepfakes: A Comprehensive Analysis

UB AI expert David Doermann warned lawmakers about the dangers of deepfakes and other synthetic media.

Deepfakes and other synthetic content on the internet pose a significant and growing threat, David Doermann, an artificial intelligence expert at UB, told lawmakers. In his latest testimony before Congress, he emphasized that the risk posed by these technologies has escalated sharply since his previous appearance in 2019.

In his November 8 testimony before the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation, Doermann reiterated the urgent need for increased funding to combat the misuse of these systems. He stressed the importance of recognizing both the positive advancements and the potential negative consequences of their rapid evolution.

Doermann cited recent examples, such as the creation of fake nude images of a high school student in New Jersey and the completion of a Beatles song using AI, to underscore the urgency of addressing the harm caused by these technologies. He expressed concerns about the inadequate pace of efforts to mitigate the ongoing damage and emphasized the need for stronger governmental actions and industry leadership.

As a SUNY Empire Innovation Professor and former head of the Department of Computer Science and Engineering, Doermann elaborated on the various harmful implications of manipulated or synthetic online content. Beyond issues like non-consensual imagery, cyberbullying, and intimidation, he warned of broader security risks, including the potential for impersonating government officials or military personnel, leading to misinformation and security threats.

Doermann advocated for national policies to regulate manipulated media as the underlying technology becomes more sophisticated. He urged lawmakers to strike a balance between protecting free speech and implementing measures to prevent the malicious use of deepfakes, and highlighted public awareness and digital literacy programs as crucial tools in combating the spread of fake content.

UB researchers are actively engaged in addressing these challenges with federal support, including initiatives like the DARPA Semantic Forensics program and the Center for Information Integrity. These efforts aim to develop tools that help individuals, particularly older adults and children, identify and counter online misinformation effectively.

Recognizing the multifaceted nature of these societal challenges, Doermann emphasized the necessity of collaboration between Congress and technology companies. He called on tech firms to take responsibility for implementing policies to detect and mitigate algorithmic manipulation on their platforms. He also deemed it essential to strengthen privacy laws and consent regulations to prevent unauthorized use of individuals' likenesses in fake content, alongside increased funding for anti-manipulation initiatives and continued research in artificial intelligence and synthetic media.

Last modified: February 7, 2024