A recent study projects that malicious use of AI to spread false information online will become a daily occurrence by mid-2024. That forecast is especially concerning given that more than 50 countries, including the US, are scheduled to hold national elections this year, with outcomes that will have global impact.
Even before the latest versions of Generative Pre-trained Transformer (GPT) systems were released, AI experts had predicted that by 2026, 90% of online content would be generated without human involvement, fueling the spread of misinformation and disinformation.
There is a growing belief that major social media platforms, with their vast user bases, require regulation to mitigate these risks. That view holds to some extent and has produced legislative measures such as the EU’s Digital Services Act and AI Act. But there are also smaller “bad actors” – individuals, organizations, and nations that deliberately engage in harmful behavior – who exploit AI.
A study from George Washington University (GW) offers the first quantitative analysis of how these bad actors might misuse AI and GPT systems to propagate harm across social media platforms worldwide, and it proposes ways to address the problem.
The study, led by Neil Johnson, emphasizes the importance of understanding the battlefield to effectively combat the dangers posed by AI. By mapping the interconnected network of social media communities, the researchers identified extreme “anti-X” groups that promote hate speech, extreme nationalism, or racism. These communities form clusters across different platforms, influencing each other’s members without direct communication.
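For readers curious about the underlying method, the sketch below shows, in broad strokes, how such a cross-platform community map might be represented as a graph and clustered. It is a minimal illustration rather than the study’s actual pipeline, and the platform and group names are hypothetical placeholders.

```python
# Minimal sketch (not the study's actual pipeline): model online communities
# as graph nodes and observed links between them (e.g., shared membership or
# cross-posted content) as edges, then detect tightly interlinked clusters.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
G.add_edges_from([
    # Hypothetical community-to-community links across platforms.
    ("platformA/group1", "platformB/group7"),
    ("platformA/group1", "platformA/group3"),
    ("platformB/group7", "platformC/group2"),
    ("platformC/group2", "platformC/group9"),
])

# Modularity-based clustering groups communities that influence one another
# through dense interlinking, even without direct communication.
clusters = community.greedy_modularity_communities(G)
for i, cluster in enumerate(clusters):
    print(f"cluster {i}: {sorted(cluster)}")
```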
The researchers predict that by mid-2024, misuse of AI by bad actors will escalate to daily activity, aided by basic tools such as GPT-2 that can mimic human content and style and are therefore attractive for generating inflammatory material. They expect this activity to reach more than one billion users across bad-actor communities and the vulnerable mainstream communities connected to them, creating a global online ecosystem conducive to malicious AI activity.
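To illustrate how low the technical bar is, the following minimal sketch (not drawn from the study) uses the open-source Hugging Face transformers library to generate fluent text with the freely available GPT-2 model in a few lines; the prompt is a hypothetical placeholder.

```python
# Minimal sketch: text generation with the openly available GPT-2 model.
# Shown only to illustrate how accessible such tooling is.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Breaking news from the campaign trail:",  # hypothetical placeholder prompt
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```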
With 2024 anticipated to be a major election year worldwide, the threat of bad actors exploiting AI to spread disinformation around these crucial events is a real concern. The researchers suggest that social media platforms focus on strategies to contain and counter disinformation rather than attempting to remove all content generated by bad actors.
The researchers acknowledge that AI technology and the online landscape continue to evolve, and they caution that their predictions rest on historical data. Even so, the analysis offers a foundation for policy discussions on addressing the bad-actor-AI challenge.
The study, published in the journal PNAS Nexus, sheds light on the critical issues surrounding the misuse of AI by malicious actors in the digital realm.