The Evolution of AI Perception: From Weird to Boring
By Adi Robertson, a senior editor at The Verge who focuses on VR, online platforms, and free expression, and has also covered video games and biohacking.
Back in 2018, a meme spread across the internet: scripts supposedly written by bots that had been fed vast amounts of source material, like Saw films or Olive Garden commercials. The results were distorted, nonsensical versions of the originals, full of absurd phrases like “lasagna wings with extra Italy” or “secret soup-filled mouths.” These scripts almost certainly weren’t actually AI-generated, but they captured a prevailing assumption of the time: that AI was synonymous with strangeness.
That was the heyday of AI as a source of the peculiar. Games like AI Dungeon, powered by OpenAI’s GPT-2 and GPT-3, promised sprawling narratives spun from even the most mundane prompts, like a chair. Early AI art tools, exemplified by Google’s Deep Dream, produced eerie, surreal images with a Giger-like quality. Janelle Shane’s AI Weirdness blog epitomized the trend, featuring AI-generated implausible nuclear warnings and inedible recipes. Saying something was “made by a bot” implied whimsical nonsense, a product of the models’ limitations and their status as novelties rather than practical tools.
The landscape of generative AI has since shifted dramatically. As Caroline Mimbs Nyce noted in The Atlantic, “AI” has come to signify mediocrity rather than eccentricity. Part of the reason is that the technology has genuinely improved, moving past the dream logic of early models to produce coherent, if clichéd, output. The AI-written short film Sunspring in 2016, for instance, gave its characters generic designations because the model couldn’t handle proper names, and its dialogue, while recognizably English, felt cryptic and disjointed. Contrast that with modern tools like Sudowrite, which use more advanced models to generate text that reads like conventional genre prose.
As generative AI tools have gained traction, they have also become more commercialized and, paradoxically, less captivating. Companies are bolting AI onto all manner of applications, often with underwhelming results. Much AI-generated content, especially low-quality spam, prioritizes clickbait over anything informative or engaging. AI image generators, meanwhile, have drifted from novel experiments toward generic stock imagery and troubling deepfakes, diluting much of their early artistic appeal.
Safety concerns have also led to stricter guardrails and more cautious training, curtailing the creative freedom once associated with these tools. When models like ChatGPT refuse to play along with imaginative scenarios, it signals a shift away from unconventional uses toward standardized, unexciting interactions. The pursuit of profitability has steered AI tools toward banality, trading uniqueness for mass appeal.
AI still has comedic potential, but the humor now tends to come from exaggerated, commercialized absurdity: a nonsensical product listing on Amazon, a sports-writing bot’s lifeless match summaries. The joke lies in humans misapplying or misusing AI rather than in the technology’s own strangeness. This awkward transitional phase may eventually give way to more refined applications that enhance human creativity rather than overshadowing it.
For now, AI teeters between novelty and mundanity, but there is still hope for a future in which it augments human creativity rather than diluting it. If AI tools are steered toward clever juxtapositions and unexpected remixes of information, genuinely engaging and original AI-generated content could make a comeback. Until then, AI-produced work that is truly indistinguishable from human-created work remains a distant prospect.