In 1914, just before the outbreak of the First World War, HG Wells published a novel imagining an even more catastrophic conflict. The World Set Free envisions the development of atomic weapons long before the Manhattan Project: bombs small enough to carry in a handbag yet capable of devastating half a city. A global war culminates in atomic apocalypse, ultimately necessitating the establishment of a world government to restore peace.
Wells’ apprehensions extended beyond the hazards of new technology to encompass concerns about democracy. In his narrative, the world government is not a product of democratic consensus but rather an imposed benevolent dictatorship. King Egbert of England ominously remarks that “the governed will show their consent by silence,” reflecting Wells’ belief that the “common man” is inept in social and public matters, requiring an educated elite with a scientific mindset to safeguard democracy from its inherent flaws.
Fast forward a century, and artificial intelligence (AI) emerges as a technology evoking similar awe and trepidation. Discussions surrounding AI, from the corridors of Silicon Valley to the meetings at Davos, oscillate between the immense benefits it promises and the existential threat posed by superintelligent machines assuming dominion over humanity. Once again, democratic principles and societal control lie at the crux of the debate.
In 2015, journalist Steven Levy interviewed Elon Musk and Sam Altman shortly after they co-founded OpenAI, the organization later behind the ChatGPT chatbot that captivated the public. Alarmed by AI's implications, they and a cadre of Silicon Valley luminaries had established OpenAI as a non-profit venture dedicated to advancing the technology ethically, for the benefit of all humankind.
The discourse on AI's future diverges into a dichotomy: should there be many AIs, or only a few? Musk argued that the greater danger lies in a single malevolent actor, a "Dr Evil", gaining sole control of the technology, and therefore advocated broad access so that no one could wield such power unchecked.
However, the narrative takes a turn as the tech companies confront their own ethical dilemmas. Musk's legal action against OpenAI highlights the conflict between profit motives and public welfare, and exemplifies the industry's struggle to put humanity's interests above financial gain.
As AI evolves, the secrecy shrouding its development raises concerns about accountability and transparency. The tension between guarding against malicious exploitation and fostering open collaboration reveals the intricate power dynamics of the tech landscape.
Reflecting on Wells’ era of political upheaval and debates over extending suffrage, contemporary discourse grapples with the role of democracy amidst societal complexities. The tension between empowering the masses and entrusting critical decisions to an educated few mirrors historical and present-day dilemmas.
In navigating the AI discourse, it becomes evident that the real challenges lie not in existential threats but in societal implications. Issues such as algorithmic bias, surveillance, disinformation, and censorship make it urgent to address how technology reinforces existing inequalities and power structures.
The "Egbert manoeuvre", named for Wells' fictional king, symbolizes the inclination to shield certain technologies from democratic oversight on the pretext of guarding against malevolent forces. This posture not only perpetuates a culture of fear but also lets powerful entities evade accountability behind a veil of secrecy.