
Is AI on the Verge of Replacing All Written Work? Not So Fast


Many people are beginning to wonder how far we might be from a textual singularity, a time when all text online will be assumed to be fake, given how quickly AI tools are improving and how much better they are getting at producing text that sounds human. One estimate holds that by 2026, some 90 percent of the web's content will be machine-generated. Certainly an overstatement, but by how much?

Canada's Artificial Intelligence and Data Act, now before Parliament, and the EU's recently passed AI Act aim to address these concerns by requiring OpenAI, Google, and other suppliers of generative AI to add watermarks: some distinguishing feature showing that a piece of content is AI-generated. In the United States, efforts to regulate AI are gathering momentum. One Chinese law even allows businesses to offer generated content only if it is tagged as such.

However, the new EU and Canadian laws that prescribe watermarking as a safeguard rest on the assumption that it is technically feasible. That may be true of images, which are dense in data and can carry patterns or AI tags that are difficult to detect with the naked eye. Text, however, is much harder to watermark reliably because it is not data-rich.

As others have noted, AI developers can embed patterns into text using punctuation, whitespace, or hidden characters. But these can be quickly stripped out with a little processing. They can also trigger false positives when human-written text accidentally matches the pattern. Developers could try to make the signature more extensive, but that would degrade the quality of the output.
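To see how fragile such schemes are, consider a minimal sketch in Python. The zero-width-character trick below is a hypothetical toy scheme, not any vendor's actual method; the point is that one line of post-processing defeats it.

```python
# Toy illustration: a zero-width-character "watermark" and how easily it is removed.
# This is a hypothetical scheme for illustration, not any real vendor's method.

ZWSP = "\u200b"  # zero-width space, invisible in most text renderers

def embed_watermark(text: str) -> str:
    """Insert a zero-width space after every sentence-ending period."""
    return text.replace(". ", "." + ZWSP + " ")

def detect_watermark(text: str) -> bool:
    """Flag text that contains the hidden character."""
    return ZWSP in text

def strip_watermark(text: str) -> str:
    """One line of 'processing' defeats the scheme."""
    return text.replace(ZWSP, "")

sample = embed_watermark("This looks human. It is not.")
print(detect_watermark(sample))                   # True
print(detect_watermark(strip_watermark(sample)))  # False
```

Anything subtler faces the same trade-off: the more invisible the mark, the easier it is to erase or to match by accident.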

This technical trouble suggests two possible futures for online text.

One school of AI skeptics dismisses concerns about automated text as overstated. Yes, much of the web may become a "desert" of machine-generated content. More of the content supply of the major platforms (Google, Facebook, Amazon) will consist of synthetic language. But we will keep getting better at identifying it, because it will always be possible to spot.

The idea is that automated text never improves to the point of being indistinguishable from human writing. Perhaps music or images will get there, but not text. No matter how sophisticated GPT-5 or 6 or any other model becomes, no matter how large or abundant the training set, the output will never be more than a remix of existing knowledge and information. It may grow more stylish and human-sounding, but never enough to achieve the true, unique texture that marks great writing.

More specifically, AI will never be able to produce text with the insight or perspective that draws you to an outlet offering in-depth analysis of an unfolding event.

The ChatGPT logo displayed on a laptop screen and a computer monitor in Athens, Greece, on April 24, 2024. Nikolas Kokovlis/NurPhoto/Getty Images

An opposing school of thought isn't so sure. Perhaps some news outlets won't ever be fully produced by AI. But there will come a time when chatbots can produce the better part of an article in even the most prestigious literary magazine, and you won't be able to tell. Such writing may retain a sprinkling of human touch, while in most other places text will be entirely synthetic.

Recent research on the nature of language models and cognition helps resolve this dispute in the skeptics' favor. There are compelling reasons to believe that the textual singularity will never occur. Chatbots won't be filling the pages of your favorite literary magazine.

The fear that generative AI will become so capable that it displaces most writing rests on a confusion between "computational" thinking and human thinking. Building on a growing body of research that seeks to temper our enthusiasm about the marvel that generative AI represents, two Oxford researchers recently published a paper that spells out the difference in the case of language models in particular.

Models create the illusion of intelligence through predictive algorithms. They work on large "training sets" of data, text that has already been parsed into patterns. When a bot generates text in response to a prompt, it is really producing a transformation of the text it was trained on. What looks like intelligence, the Oxford researchers argue, is largely an "epiphenomenon of the fact that the same thing can be stated, said, and represented in indefinite ways."
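A toy example makes the point concrete. The bigram model below is a drastic simplification of a modern language model, which predicts over vast vocabularies with deep networks rather than word-pair counts, but it shows the same basic move: each next word is sampled from patterns seen in training, so the output can only recombine what the training text already said.

```python
# A minimal sketch of prediction-driven text generation:
# a toy bigram model that can only recombine its training text.
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the dog sat on the rug"

# Count which word follows which in the training data.
next_words = defaultdict(list)
tokens = training_text.split()
for current, following in zip(tokens, tokens[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Sample each next word from the distribution seen in training."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g., "the cat sat on the rug": only ever a remix
```

Scale changes how convincing the recombination is, not the fact that it is recombination.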

Retrieval-augmented generation is the most popular fix for language models' hallucinations and knowledge gaps, the cases where an AI produces text that is wrong or nonsensical. Before Bing Chat, which runs on GPT-4, responds to your prompt, it checks factual claims by consulting the web. But this merely extends the cutoff of its training data to a few minutes ago. It leaves untouched the deeper issue that a chatbot can only repeat what has already been said.
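In outline, retrieval-augmented generation is a prompt-assembly step. The sketch below assumes a hypothetical search_web() retriever and a call_language_model() stub standing in for any chat-completion API; it is not a description of Bing Chat's internals.

```python
# A minimal sketch of retrieval-augmented generation (RAG).
# search_web() and call_language_model() are hypothetical stubs,
# not real APIs; they stand in for a retriever and a language model.

def search_web(query: str) -> list[str]:
    """Hypothetical retriever: return text snippets relevant to the query."""
    return ["<snippet about " + query + ">"]  # stub for illustration

def call_language_model(prompt: str) -> str:
    """Stub for any chat-completion API."""
    return "[model output grounded in the prompt above]"

def answer_with_rag(question: str) -> str:
    # 1. Retrieve fresh documents so the model isn't limited to its training cutoff.
    context = "\n".join(search_web(question))
    # 2. Prepend them to the prompt; the model paraphrases what it is handed.
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    # 3. Retrieval widens the pool of text to restate; it adds no new thinking.
    return call_language_model(prompt)

print(answer_with_rag("What happened today?"))
```

The design choice is visible in step 2: freshness comes from swapping newer source text into the prompt, not from any new capacity to reason about it.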

When we take a step back, we can see that language models may keep improving at imitating human language and may sound ever more convincing to us. But there is a fundamental staleness of thought that they will never overcome. They can only summarize, translate, reformulate. They can't offer a novel theory, a genuine insight, a paradigm shift.

The best policy response to fears of the web being overrun by bots is to cultivate a reputation for insight, authenticity, and truth. In this version of the future, AI is less a threat than an occasion to make our own goals and values clearer.
