While artificial intelligence (AI) holds the promise of ushering in positive societal transformations, recent research underscores the importance of understanding and mitigating the associated risks. Joe Burton, an academic at Lancaster University in the United Kingdom, contends that AI systems are more than mere tools deployed by national security agencies to combat harmful online behavior.
Burton argues that AI can inadvertently fuel polarization, extremism, and threats to democracy, thereby posing inherent risks to national security in its own right. These findings come from a recent study published in the journal Technology in Society.
Contrary to the prevailing narrative that portrays AI chiefly as a countermeasure against violent extremism, Burton argues that this narrative has a darker side. The research examines how AI has historically been portrayed in media and popular culture, and highlights contemporary instances where AI has contributed to radicalizing individuals, leading to politically motivated crimes.
Drawing parallels to popular film franchises such as The Terminator, which depict AI committing heinous acts with malevolent intent, Burton emphasizes how such portrayals have shaped public perception, instilling a fear that AI could trigger catastrophic events such as nuclear war or the annihilation of the species.
Burton notes that governments and security agencies seek to steer technological development in ways that mitigate these risks and harness AI's positive potential. Widespread skepticism towards machines, coupled with apprehension about their implications for natural, nuclear, or biological threats, underscores the need for proactive measures.
Pointing to the autonomy and advanced capabilities of drones used in conflicts such as the war in Ukraine, Burton underscores how swiftly AI is being integrated into military technologies, even as international forums like the UN continue to debate the regulation of autonomous weapons systems.
AI is also widely used in cybersecurity, particularly to counter disinformation and online psychological warfare. Its deployment for public health surveillance during the pandemic illustrates this dual nature: a powerful protective tool that nonetheless raises serious concerns about privacy and human rights.
The study scrutinizes the issues inherent in AI systems: how they are conceived, the data they rely on, how they operate, and their broader societal impacts. Burton stresses the imperative of comprehensively understanding and managing the risks associated with AI, acknowledging its transformative potential while advocating for responsible governance and oversight mechanisms.