Posted at 12:10 pm in AI, Discussions

### Refrain from Cliff Diving: AI Suggestions Are Just That – Suggestions

Can this technology ever be truly trustworthy? For starters, suggests one AI expert, ChatGPT should…

When a novel technology emerges, we are inevitably captivated by the excitement, particularly when it involves a profound subject like artificial intelligence.

Artificial intelligence has the capability to generate our exam papers, craft advertisements, and even produce films. Yet, there persists a prevailing notion that AI is not flawless, especially in those frustrating instances where it fabricates information.

Nonetheless, it is evident that companies such as Google and Microsoft are keen on integrating AI into various facets of society.

So how do we know, truly know, what improvements are still needed before we can trust AI?

In my quest for answers, I came across Ayanna Howard, an AI researcher and professor at the Ohio State University’s College of Engineering, who eloquently emphasized the importance of fairness.

Howard’s most poignant observation about the disparity between technologists and other disciplines was featured in the MIT Sloan Management Review.

She articulated a simple yet profound idea: “Technologists are not inherently social scientists or historians. We are drawn to this field out of passion, and due to our specialization in technology, we hold an optimistic view of it.”

However, as Howard astutely pointed out, this very optimism poses a significant challenge: “We excel in constructing pathways with others who can discern the positives we see and acknowledge the accompanying drawbacks.”

Nevertheless, the technologists shaping the future software landscape urgently need a deeper understanding of language and greater psychological acumen.

One crucial point, Howard argues, is that technology firms, particularly those building artificial intelligence and generative AI, must pair human emotional intelligence (EQ) with their products, giving users cues about when to question a tool's output. This will likely require regulation.

In the early days of the internet, we were left on our own to distinguish accuracy from exaggeration and outright falsehood.


While we are filled with anticipation, we are also cautiously feeling our way toward something like certainty.

People tend to trust a technological innovation as long as it appears to work, Howard notes. In one of her studies, individuals blindly followed a robot even in a hazardous situation, such as during a fire.

Howard proposes that AI models like ChatGPT should be transparent about their limitations.

This transparency does not absolve us of the need to remain vigilant, but it fosters a level of trust essential if AI is to be accepted rather than feared or imposed.

Howard is concerned that, at present, anyone can develop an AI product. “We have creators who lack expertise selling to overly trusting companies and consumers,” she remarked.


While her words may sound cautionary, they also signify a positive acknowledgment of the challenges inherent in introducing a potentially groundbreaking technology to the world and establishing its credibility.

Ultimately, for AI to live up to its lofty expectations, it must earn trust.

Last modified: March 30, 2024