An AI chatbot impersonating Adolf Hitler, hosted on the far-right social media network Gab, has raised concerns about the potential for AI to contribute to online radicalization. The simulated character denies the Holocaust and portrays Hitler as the victim of a conspiracy.
Gab AI, launched in January 2024, lets users create their own AI chatbots, including characters such as Donald Trump, Vladimir Putin, and Osama bin Laden. The availability of such chatbots has sparked fears that they could spread conspiracy theories, interfere in elections, and radicalize users toward violence.
Gab Social, which bills itself as “The Home Of Free Speech Online,” was established in 2016 as a right-wing alternative to Twitter. It quickly became associated with extremism and conspiracy theories, and major tech companies banned it following the Pittsburgh synagogue shooting in 2018.
Despite being banned from app stores, Gab Social continues to operate via the decentralized social network Mastodon. Gab AI is presented as promoting a Christian worldview, with its creators criticizing mainstream AI models for allegedly pushing liberal ideologies.
The proliferation of AI chatbots has raised concerns about their potential to negatively influence vulnerable individuals. While the market for AI chatbots continues to grow, so do apprehensions about their misuse to target people, extract personal data, and manipulate beliefs or actions.
Efforts to regulate AI chatbots are underway. The European Parliament is set to vote in April on the AI Act, the world’s first comprehensive AI law, which categorizes AI systems by the risks they pose to society. In the UK, Ofcom is implementing the Online Safety Act to hold social media platforms accountable for harmful content.
As the use of AI chatbots evolves, regulation and oversight will be crucial to addressing the risks of their misuse and their broader impact on society.