
The Creator Economy is weathering a perfect storm of piracy and AI-generated content.

From Taylor Swift deepfakes to systematic theft of copyrighted content, social platforms are being …

A little wind doesn't scare Brandon Martin. For more than a decade, the Emmy-winning filmmaker has captured some of the most striking footage of extreme weather events. His films have amassed more than 100 million views, his YouTube channel WX Chasing has over 77,000 subscribers, and his work is frequently featured in prominent news reports and features.

But one storm does worry him: a perfect storm of greed, automation, and platform indifference that threatens his livelihood, and the livelihood of nearly every creator hoping to sell their work on platforms like YouTube, Instagram, and Facebook.

In a phone interview earlier this month, Martin said the problem is "destroying my business, it's putting so much stress and anxiety into my head that I can't sleep, and I'm unable to stop thinking about it."

The plague tormenting the storm chaser? Shadowy operations running hundreds of fake accounts in dozens of languages have been stealing copyrighted footage and using generative AI to repackage it into social-media clickbait, at a scale that threatens to drown out human-generated content.

He claims that more than 60,000 pages on Facebook have stolen some of his footage. Some of those pages have thousands of followers; others have none. Either way, a creator simply cannot earn income when views are divided across 60,000 pages. Overexposure ruins the content, and there is no way to grow an audience.

Science YouTuber Kyle Hill shares Martin's concerns about the danger. Although he has so far benefited from his viewers' ability to tell the difference between high-quality content and auto-generated junk, the problem has spread far beyond the educational video market, and he worries the bad actors are gaining ground. Yesterday's flood of fake Taylor Swift images on Twitter was a more spectacular and horrifying example of the same toxic combination: generative AI tools in the hands of dishonest operators hijacking the scale and reach of social platforms for their own gain.

The core problem, he said, is speed: these con artists can generate and upload content faster than anyone can police it. Until the copyright holder or another party claims the content was stolen, YouTube and other platforms keep paying them ad revenue. "By creating literally dozens of channels that upload new videos every few hours, these actors can consistently make enough money to continue their operations before any one creator (like me) has the time to find and claim it."

In videos describing the issue, including this one, Hill and Martin cite specific instances of YouTube channels trafficking in AI-generated fake science content.

Copyright infringement has been a problem since the first content platforms launched. But the rise of generative AI for text, voice, imagery, and video has supercharged thieves' ability to churn out hundreds or thousands of videos, with AI-generated headlines and thumbnails engineered for views, frequently containing false, misleading, or outright incomprehensible content spun up quickly from Wikipedia entries or stray web scrapes.

Science YouTuber Kyle Hill, 2024.

"What I worry about in the short term is basically being drowned out by nonsense," Hill remarked. When viewers just want something quick to watch over lunch, there isn't enough time in the day to separate the good from the bad. It's a tried-and-true disinformation strategy: you don't have to lie, you just have to contaminate the well enough that no one cares.

Jevin West is an associate professor at the University of Washington's Information School and co-founder of the Center for an Informed Public, which studies how dangerous and false ideas spread through the modern world. "The real question is whether users care if there's a real person behind that content. And the data is ambiguous on that," he said, noting that the latest data do not yet show an increase in the rate at which misinformation spreads since generative AI tools went mainstream in 2022–2023. "The risk is that opportunists flood in when there are data gaps, such as during natural disasters or elections. It will probably get worse, in my opinion."

Not all AI-generated material is stolen, fake, or harmful. Human creators sometimes use the tools to improve their own content or raise production values. Hill argued that such videos could fairly compete if they used AI tools to produce genuinely new works, innovations, or breakthroughs. "However, these pirate videos are not that. They are text-to-speech Wikipedia summaries laid over content that has been taken from the likes of Netflix, the Weather Channel, and yours truly."

In response to these problems, YouTube recently updated its guidelines for "responsible AI innovation," promising viewers more information about content that includes AI-generated elements. In a November 2023 blog post, YouTube executives Jennifer Flannery O'Connor and Emily Moxley wrote: "Specifically, we'll require creators to disclose when they've created altered or synthetic content, including using AI tools. We'll give creators new options to choose from when they upload content to indicate whether it contains realistic altered or synthetic material." That could mean an AI-generated video depicting an event that never happened, or content showing someone saying or doing something they didn't actually do.

The new policy also includes a process for requesting the removal of content that falls short of decency standards or that depicts identifiable people without their consent. The post cautions, however, that "not all content will be removed from YouTube, and we'll consider a variety of factors when evaluating these requests."

That's not good enough for creators like Brandon Martin who have been victims of wholesale content theft. The platforms do have content-recognition tools that can automatically flag stolen footage and let rights holders act on it, he says, but most grant access only to major production houses and big music labels. Ordinary creators are stuck filing takedown requests one at a time, a process that is slow, unwieldy, and frequently ambiguous, so bad actors can collect most of their ad money and then delete or privatize their videos before they appear on the platform's radar. And as Monday's Taylor Swift fakes showed, harmful content spreads quickly while countermeasures take time, even when the victim is one of the biggest and most powerful celebrities in the world.

Worse, offenders are allowed to delete the evidence while an investigation is ongoing, he said. "There is no penalty." They are rewarded because they get away with it. YouTube could easily freeze all activity on a video as soon as a DMCA (Digital Millennium Copyright Act) claim is submitted. They could put a stop to it if they wanted to.

Faced with the platforms' lack of interest, Martin has taken matters into his own hands, setting up a company called ViralDRM to represent creators in legal and procedural actions and filing DMCA takedowns against offenders, including the news networks NewsNation, TV9 Bharatvarsh, and Zee News.
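Filing takedowns at this scale starts with simply finding the re-uploads. As a rough illustration (this is not ViralDRM's actual tooling), a creator could poll the public YouTube Data API v3 search endpoint for videos matching a clip's title and flag anything published by a channel other than their own; the API key, channel IDs, and query below are hypothetical placeholders.

```python
# Sketch: flag possible re-uploads of a creator's clip via the public
# YouTube Data API v3 search endpoint. Requires a (hypothetical) API key.
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def fetch_matches(query: str, api_key: str, max_results: int = 25) -> list[dict]:
    """Call the Data API search endpoint and return the raw result items."""
    params = urllib.parse.urlencode({
        "part": "snippet",
        "q": query,
        "type": "video",
        "maxResults": max_results,
        "key": api_key,
    })
    with urllib.request.urlopen(f"{SEARCH_URL}?{params}", timeout=10) as resp:
        return json.load(resp).get("items", [])

def filter_reuploads(items: list[dict], own_channel_id: str) -> list[dict]:
    """Keep only results published by channels other than the creator's own.
    Anything left over is a candidate for manual review and, if confirmed
    stolen, a DMCA takedown request."""
    return [
        {
            "videoId": it["id"]["videoId"],
            "channel": it["snippet"]["channelTitle"],
            "title": it["snippet"]["title"],
        }
        for it in items
        if it["snippet"]["channelId"] != own_channel_id
    ]
```

Even a sketch like this only surfaces candidates; each hit still needs a human to confirm the theft and file the claim, which is exactly the per-video labor Martin says bad actors exploit.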

Despite these efforts, there is no global consensus on how to address the issue, leaving legal and regulatory frameworks as blunt tools against such fast-moving operations.

Even for a company with more money than some nations, there is no easy fix, according to West. A first step would be pressure to watermark anything artificially generated, but that might only cover, say, 80 percent of the content, leaving a sizable portion that won't abide by those norms.

Kyle Hill thinks there are alternatives for creators and conscientious viewers. Most of the bigger creators he knows have direct ways for fans to support them, and that can be a huge help: Patreon, YouTube memberships, merchandise, and so on. But even though alternative media and revenue streams outside the ad-driven model have brought some benefits, he remains concerned that the disruption of our informational ecosystem will do more harm than good: more spam, misinformation, distrust, fanaticism, and divisiveness.

Last modified: April 19, 2024