You know what I miss? The feeling of genuine astonishment at encountering a piece of art online. Just a few years ago (technology moves quickly), stumbling on a captivating artwork while casually scrolling your feed was a small delight. You could simply look at it and take it in.
These days, telling genuine artistic effort apart from the rest has become increasingly difficult. Pieces that seem rich in texture and detail at first glance often fall apart under closer inspection, and even when you do land on something truly exceptional, it takes real time to confirm it's worth your admiration. It's a bit like checking a mattress for bedbugs in a questionable hotel.
AI has turned us all into detectives. To avoid being fooled, you have to adopt a skeptical, critical mindset, and even when you aren't fooled, it's usually because you were being overly cautious. I touched on this briefly last year, albeit in the context of game analysis, but the sheer prevalence of "algorithmic look-alikes" across the web is profoundly disheartening.
The internet has always warranted skepticism; rumor and misinformation have long woven themselves into every corner of online content. But I find myself growing more guarded and cynical, putting up a mental barrier that asks, "Is this the handiwork of AI?" before I fully engage with anything.
AI's Problems Affect Everyone
Keeping the internet usable is going to take significant effort, and there's no guarantee we'll manage it, especially with more technological upheaval looming. AI's proponents like to tout its ability to speed up processes, nurture creativity, "democratize art," and so on. I'll grant that the technology can help in various ways, but right now AI is creating more problems than it solves, even for the people who are supposed to benefit from it.
Take AI-generated art, for example. By and large, it lacks broad appeal. While meme images and anime-style illustrations may garner some appreciation, most audiences feel shortchanged when confronted with computer-generated content. Although we haven’t reached the extreme backlash experienced by NFTs, the sentiment remains negative.
One prominent company, Wizards of the Coast, has run into this problem repeatedly in recent months. Last August, it had to drop an artist from one of its publications for using AI without disclosure. Then in January, Wizards faced backlash over a promotional image for Magic: The Gathering that it initially insisted was AI-free, only to later admit AI tools were involved.
The public outcry forced Wizards to backtrack and issue corrections, underscoring how inadequate its internal quality control was. Spotting the AI involvement in these cases was relatively straightforward: awkward anatomy, distorted proportions, details that made no logical sense. But those observations were made in hindsight, after meticulous scrutiny by a great many sharp-eyed people.
Evidently, the people responsible for approving the work lacked either the skill or the time to review it properly. Wizards of the Coast now has to worry about more than plagiarism detection; it also has to vouch for artistic integrity. That may mean paying not one artist but two: one to create the artwork and another to verify that it's genuine.
Which raises the question: is AI genuinely cost-effective for these companies in the long run? Unless we all become desensitized to AI-generated content, every discovery of AI involvement carries a real cost in reputation and goodwill.
Mitigating that risk takes substantial effort, either to conceal the AI involvement or to hire experts to filter such content out. At that point, why not simply pay a single artist properly to produce the work? I used to believe AI was only capable of generating obviously subpar art. Then OpenAI's new Sora model shattered that assumption.
A Sudden Change of Plans
I don't usually pull back the curtain on my process, but in this case it illustrates how quickly this technology can catch us off guard. The first draft of this article dealt only with 2D AI art and the painstaking business of scrutinizing tiny details like eyes and hands. Then I woke up, sat down to revise the draft, and found that everything had changed.
[Embedded post on imgur.com]
That is where we are after just a few years of AI development. Sora's creations still have a dreamlike quality, full of surreal inconsistencies reminiscent of an acid trip, but assuming that will stay true feels like sitting calmly in water as it slowly comes to a boil.
AI technology undoubtedly has its uses. But the prevailing doubt and fatigue will only intensify unless concrete action is taken, and with a pivotal election on the horizon in the United States, letting this technology run unchecked on the strength of tech enthusiasts' fantasies will hurt everyone.
OpenAI, for its part, reassures visitors on its website that it is working with domain experts to adversarially test the model in areas like misinformation, hateful content, and bias. That is a bit like consulting radiation experts after the bomb has already been built.
The Pandora's box of AI may be open, but we don't have to passively accept whatever falls out of it. The mere existence of a capability does not mandate its use. As my PC Gamer colleague Joshua Wolens astutely pointed out last year, there are plenty of avenues for action at both the personal and governmental level; after all, safety protocols are routine in all sorts of other sectors.
Regulation serves a vital purpose: it keeps our drinking water safe and our food edible. Technology shouldn't be adopted simply because it exists. We've had the means to obliterate life on Earth since 1945, and despite some tense moments, we've managed to avert catastrophe for nearly 80 years. Humanity deserves some credit.
Ideally, that's the perspective I try to hold onto. But my inner skeptic isn't surprised by where we've ended up. The flood of AI-generated content across the internet, and its corrosive effect on the usability of search engines, reflects a cultural preference for quantity over quality. You can hear it in the rhetoric of AI's most ardent supporters, who fantasize about TV series that never end or games with infinite side quests.
If anything, AI is an acceleration of a longstanding cultural problem, one that previously fell mostly on creatives. It's easy not to think about where processed food comes from when it tastes fine; it's much harder to ignore once you learn what's hiding in it.
Our pre-AI circumstances were far from ideal, but we are quickly going from merely bad to collectively worse. Artists, writers, and performers were already being exploited, and now even that poorly compensated work is disappearing before their eyes. Corporations may be reaping certain benefits for now, but many are running into problems that will hit their bottom line.
Sora's main beneficiaries are unscrupulous advertisers looking for video on the cheap, and that single advantage is far outweighed by the drawbacks. "Yes, the internet may be ruined. But for a fleeting moment, we generated a lot of value for corporations?" Surely we can aspire to more than that.
Until that changes, we are all watching the internet deteriorate in myriad subtle ways. Soon enough, you won't be able to watch a video without scanning it for morphing objects, multiplying appendages, or shifting landscapes, and the paranoia will keep ratcheting up until a breaking point is reached.