Is the impact of artificial intelligence more likely to be negative than positive? The answer remains uncertain.
The latest episode of Globe Opinion’s Say More podcast raises concerns about potential AI interference in the 2024 election. Miles Taylor, a national security expert, has been investigating how malicious actors could use new AI technologies to manipulate voters.
Taylor, who served as chief of staff at the Department of Homeland Security during the Trump administration and is best known for the anonymous op-ed he wrote in the New York Times during that period, emphasizes the need to address the looming threat of AI-generated disinformation in order to protect the integrity of our democracy.
The spread of deceptive content has been amplified by the interconnectedness of social media and escalating partisanship in our politics. Artificial intelligence has made it dramatically easier to produce fakes that appear authentic, leading Taylor to argue that a flood of such content during the upcoming elections is not just possible but nearly certain.
The evolution of misinformation campaigns, from the 2016 election to subsequent episodes in other countries, underscores AI’s alarming potential to fabricate convincing false narratives with minimal effort and resources.
Determining who is responsible for detecting and countering these deceptive tactics becomes harder as AI tools reach a wider range of actors, from political operatives to foreign governments.
Taylor stresses that we are not adequately prepared to combat this growing threat and advocates for comprehensive information campaigns aimed at election officials at every level. He proposes red team exercises and closer collaboration between federal agencies and local governments as essential steps toward mitigating the risks of AI-generated disinformation.
While acknowledging the dual nature of AI as both a threat and a potential solution in identifying and combating deepfakes, the critical question remains: where should these AI detection tools be primarily deployed to maximize their effectiveness?
The discussion extends to the role of social media companies in regulating AI-generated content, though Taylor is more worried about smaller businesses and local governments, which are especially vulnerable to sophisticated deepfake technology.
The conversation also explores the broader societal consequences of widespread distrust in visual and audio information: a crisis of credibility in which even genuine recordings of egregious statements can be dismissed as fabricated, posing a significant challenge to the democratic process.
In light of these concerns, the convergence of AI-enabled disinformation with the political strategies of figures like Trump, long known for sowing doubt and discord, raises profound questions about the future of information integrity and democratic stability.
For further insights, listen to the full podcast episode at globe.com/saymore.
Rob Gavin can be reached at [email protected]. Shirley Leung is a Business columnist. She can be reached at [email protected].