Generative AI, in particular large language models (LLMs) like ChatGPT, is the most significant end product deployed since research in artificial intelligence began, at least in terms of its contribution to economic productivity.
How significant? McKinsey estimates that generative AI could add as much as $7.9 trillion annually to the global economy. That is equivalent to the contributions of Canada, Great Britain, Russia, and Austria—combined.
Despite its spectacular potential, however, generative AI isn’t without its shortcomings. In fact, you could argue that, for now, its arrival has made rather a mess of things. So, 10 months on from the release of GPT-4, let’s look at the top problems with generative AI, along with some ideas about how you might overcome them.
1. Accuracy
We’ve all heard about ChatGPT’s hallucination problem. LLMs are designed to serve up probabilistic answers from their training data, and they can be a little too eager to oblige. Rather than saying “I don’t know,” they are known to make things up, unleashing a Pandora’s box of problems, from brand damage to regulatory breaches.
Building topic and ethical guardrails into generative AI models can help. So can fortifying a model’s training with knowledge bases specific to your enterprise domains. For the foreseeable future, however, I believe enterprises need to get much better at engineering prompts for accuracy, and keep a human in the loop to double-check everything that LLMs produce.
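To make the prompt-engineering point concrete, here is a minimal sketch of an accuracy-oriented prompt, assuming the OpenAI Python SDK; the `search_knowledge_base` helper is a hypothetical stand-in for whatever retrieval layer your enterprise uses. The idea is to ground the model in your own documents and give it explicit permission to say “I don’t know.”

```python
# Minimal sketch: grounding a prompt in enterprise context and
# instructing the model to abstain rather than guess.
# Assumes the OpenAI Python SDK; search_knowledge_base() is a
# hypothetical stand-in for your own retrieval layer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_knowledge_base(question: str) -> str:
    """Hypothetical retrieval step: return passages from your
    enterprise knowledge base relevant to the question."""
    return "...retrieved passages from your domain documents..."

def grounded_answer(question: str) -> str:
    context = search_knowledge_base(question)
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # reduce creative variation for factual tasks
        messages=[
            {"role": "system", "content": (
                "Answer ONLY from the provided context. "
                "If the context does not contain the answer, "
                "reply exactly: I don't know."
            )},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("What is our refund policy for enterprise customers?"))
```

Even with guardrails like these, the output should still pass through human review before it reaches a customer.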
2. Bias
The problem of bias creeping into AI is old news (See article “How AI Can Go Terribly Wrong: 5 Biases That Create Failure”). But the rapid proliferation of generative AI has amplified this concern more than most people thought was possible. Now, instead of worrying about a little bias sneaking in here and there, enterprise leaders must worry about it taking over their corporate cultures entirely.
Does the ease of using programs like ChatGPT mean that our multiplicity of voices will be muffled in deference to a single viewpoint? Critics have charged that the software suffers from a “woke” political bias, that it perpetuates gender biases, and that it is inherently racially biased.
To make sure that generative AI does not perpetuate toxic viewpoints in your organization, your engineering team needs to stay close to this issue, working to instill your AI with your own corporate and human values.
3. Volume
We were already drowning in information before generative AI made it so easy to create new content: emails, ebooks, web pages, social media posts, and other works. Even the volume of job applications has jumped, fueled by the ability to quickly generate customized resumes and cover letters with AI. Managing the sheer volume of all this new information is hard.
How do you leverage the mountain of assets that your organization creates? How do you store all that information? How do you keep up with data analytics and attribution of marketing assets? How do you evaluate the merits of anything and anyone when content is all AI-generated?
To avoid confusion and employee burnout, you need to organize the right teams, technologies, and tactics to stay on top of this now, because the volume is only going to grow from here.
4. Cybersecurity
Generative AI has radically increased the ability of bad actors to mount novel cyber attacks. It can be used to analyze code for vulnerabilities and write malware to exploit them. It can be used to produce deepfake videos and voice clones for fraud and virtual kidnapping. It can write convincing emails in support of phishing attacks, and much more. In addition, code written with AI assistance may be more susceptible to hacking than human-produced code.
In this case, the best response is to fight fire with fire. AI can be used to analyze your own code for vulnerabilities, run ongoing penetration testing, and improve your defense models.
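As one illustration of that approach, here is a minimal sketch that asks an LLM to act as a first-pass security reviewer; it assumes the OpenAI Python SDK, and the vulnerable snippet is a contrived example. A human security engineer should verify anything it flags.

```python
# Minimal sketch: using an LLM as a first-pass code reviewer.
# Assumes the OpenAI Python SDK; findings should be verified by
# a human security engineer, not trusted blindly.
from openai import OpenAI

client = OpenAI()

SNIPPET = '''
def get_user(conn, username):
    # String interpolation into SQL -- a classic injection risk
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
'''

review = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": (
            "You are a security reviewer. List any vulnerabilities "
            "in the code, with severity and a suggested fix."
        )},
        {"role": "user", "content": SNIPPET},
    ],
)
print(review.choices[0].message.content)
```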
But do not forget the number one cybersecurity vulnerability in your organization: human beings. Generative AI can analyze logs of user activity in search of dangerous behavior, but your first line of defense should be to train your staff to be even more vigilant than before.
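To illustrate the log-analysis idea, here is a minimal sketch using scikit-learn’s IsolationForest to flag unusual user sessions; the features and numbers are illustrative assumptions, not a production detection model.

```python
# Minimal sketch: flagging anomalous user activity with an
# unsupervised model. The features (login hour, MB downloaded,
# failed logins) are illustrative assumptions; a real deployment
# would use your own log schema and tuning.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one user session: [login_hour, mb_downloaded, failed_logins]
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(10, 2, 500),    # logins cluster around business hours
    rng.normal(50, 15, 500),   # typical download volumes
    rng.poisson(0.2, 500),     # occasional failed logins
])
suspicious = np.array([[3, 900, 7]])  # 3 a.m., huge download, many failures

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for outliers, 1 for inliers
for session, label in zip(suspicious, model.predict(suspicious)):
    if label == -1:
        print(f"Flag for review: {session}")
```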
5. Intellectual Property Issues
Lawsuits by artists, writers, stock photo agencies, and others allege that their proprietary data and styles were used without permission to train generative AI software. Enterprises that use generative AI worry about being caught up in this debacle.
If generative AI produces ad campaign images for you that inadvertently infringe on someone else’s work, whose liability will that be? Meanwhile, who owns the assets that you create using generative AI, anyway? Is it you? The generative AI software companies? The AI itself? So far, the ruling is that works created by humans with AI assistance can be copyrighted, but the jury is still out on patents.
My advice is to make sure that you keep humans in the loop for all asset creation, and make sure your legal team continues to do its due diligence as the laws rapidly evolve.
6. Shadow AI
According to a Salesforce survey of 14,000 workers across 14 countries, half of the corporate employees who use generative AI tools do so without the approval of their organizations. It’s not going to be possible to put the genie back in the bottle on this one, so you’d best work out your generative AI governance policies and establish programs to teach staff what responsible use looks like.
You also need to speak to your IT leaders about what they are doing to discover and manage generative AI software tools on company devices.
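As a starting point, here is a minimal sketch of what such discovery might look like, assuming your web gateway produces simple "timestamp user domain path" log lines; the domain list and log format are placeholders to adapt to your own environment.

```python
# Minimal sketch: surfacing unsanctioned generative AI use from
# web proxy logs. The domain list and log format are illustrative
# assumptions; adapt both to your own gateway and policy.
from collections import Counter

GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def audit_proxy_log(lines):
    """Count requests per user to known generative AI domains.
    Expects lines like: 'timestamp user domain path'."""
    hits = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in GENAI_DOMAINS:
            hits[(parts[1], parts[2])] += 1
    return hits

sample_log = [
    "2024-01-15T09:12:01 alice chat.openai.com /c/abc123",
    "2024-01-15T09:14:22 bob claude.ai /chat",
    "2024-01-15T09:15:40 alice chat.openai.com /c/abc123",
]

for (user, domain), count in audit_proxy_log(sample_log).items():
    print(f"{user} accessed {domain} {count} time(s)")
```

A report like this is a conversation starter for governance, not a disciplinary tool; the goal is to channel demand toward sanctioned tools.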
There are still problems to be managed with generative AI. But manage them you must, because we are in the midst of a shift in how business is done unlike any before it.
Healthcare, banking, logistics, insurance, customer service, e-commerce: there is hardly an industry that is not suddenly exposed to lightning-fast disruption due to generative AI. The vulnerability to, and velocity of, disruption has never been greater. Enterprises that work out how to harness this technology effectively will create a flywheel effect that makes it extremely difficult for their laggard competitors to recover. AI needs to be a board-level priority this year. (See The AI Threat: Winner-Takes-All.)
If you care about how AI is determining the winners and losers in business, how you can leverage AI for the benefit of your organization, and how you can manage AI risk, I encourage you to stay tuned. I write and speak about how senior executives, board members, and other business leaders can use AI effectively. You can read past articles and be notified of new ones by clicking the “follow” button here.