
### Dealing with Malfunctioning AI Chatbots

AI ‘hallucinations’ present significant business risks, but new types of guardrails can keep them f…

According to a report by VentureBeat, 75% of the companies surveyed have already integrated ChatGPT and similar generative AI tools within just one year of their introduction. But as AI-powered chatbots proliferate, so do the risks posed by their occasional flaws, such as nonsensical or inaccurate responses, which can be difficult to filter out of the large language models (LLMs) on which these tools rely.

These issues are commonly referred to as “hallucinations” in AI terminology. When experimenting with generative AI prompts at home, a hallucination is a minor annoyance. In an enterprise deploying AI to serve a large customer and employee base, a single misstep can have serious repercussions.

In one notable case, a law firm faced sanctions after a judge found that a brief it filed contained fabricated quotes and citations to judicial opinions. The firm admitted it had not considered the possibility that the technology could generate fictitious content.

Hallucinations occur when the data used to train an LLM is subpar or insufficient. On most generative AI platforms, hallucination rates typically fall between 3% and 8%. Steven Smith, chief security architect at Freshworks, likened bots to living cells, constantly iterating and evolving in response to new data. The principle, he emphasized, is that the quality of a system's output is directly determined by the quality of its input data.

### Chatbot Errors

In complex, heavily regulated industries like healthcare and finance, a customer service chatbot that gives incorrect advice or information can cause confusion and real harm, ultimately undermining the very objective it was deployed to serve: customer satisfaction.

AI bugs can wreak havoc in IT organizations in various ways. A chatbot might route service tickets to the wrong team, misdiagnose issues, or disrupt workflows, and it can trigger problems that demand human intervention, such as data breaches or resource mismanagement.

Developers who use AI-generated code may inadvertently introduce security vulnerabilities or infringe on intellectual property contained in the model's training data. Subtle bugs or stability issues that a human reviewer would catch can also slip past automated checks.

Smith advises caution: software copilots are beneficial, but it is crucial to understand and validate their suggestions. Blindly implementing generated code without comprehension, much like copying from StackExchange, can lead to unforeseen consequences, which underscores the importance of thorough review and understanding.
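
As a concrete companion to Smith's point, one low-ceremony way to validate a copilot suggestion before adopting it is to pin down the behavior you expect in unit tests. This is a minimal sketch: the `slugify` helper below stands in for hypothetical generated code and is not from the article.

```python
import re
import unittest

def slugify(title: str) -> str:
    """Hypothetical copilot-suggested helper: turn a title into a URL slug."""
    # Lowercase, collapse runs of non-alphanumerics into single hyphens,
    # and strip any leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class TestSlugify(unittest.TestCase):
    """Encode the behavior we expect before trusting the suggestion."""

    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_edge_cases(self):
        # Empty and symbol-only inputs should not crash or emit stray hyphens.
        self.assertEqual(slugify(""), "")
        self.assertEqual(slugify("!!!"), "")

if __name__ == "__main__":
    unittest.main()
```

If the generated code fails one of these tests, or you cannot explain why it passes, that is the signal to slow down rather than ship it.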

### Mitigating Risks

Some organizations are proactively investing in risk reduction strategies. Experts recommend the following effective tactics:

  1. Content Filtering: Topic- or policy-based guardrails can block harmful or inappropriate output and keep the bot from responding on sensitive subjects. In customer service scenarios, a chatbot should also hand an inquiry off to a human operator whenever it cannot provide a precise answer (see the first sketch after this list).

  2. Continuous Improvement of Data Quality: IT teams should regularly assess the quality, relevance, and completeness of the data used to train LLMs. Routine review of training data guards against model degradation over time and helps maintain performance and accuracy (see the second sketch below).

  3. Security Guardrails: Limiting a chatbot's access to third-party apps and services reduces the risk that it produces false or misleading information. Sandboxing chatbots in this way is particularly crucial in industries where data security is paramount (see the third sketch below).
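
To make the first tactic concrete, here is a minimal sketch of a policy guardrail with human handoff. Everything in it is an assumption for illustration: the `ask_llm` stub, the keyword blocklist, and the confidence threshold stand in for whatever model call, policy engine, and scoring mechanism a real platform provides.

```python
from dataclasses import dataclass

# Illustrative blocklist; a real deployment would use a policy engine or a
# trained classifier rather than simple keyword matching.
BLOCKED_TOPICS = {"medical diagnosis", "legal advice", "investment advice"}
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for answering without a human

@dataclass
class BotReply:
    text: str
    confidence: float  # assumed self-reported answer score

def ask_llm(question: str) -> BotReply:
    """Stand-in for a real model call (hypothetical)."""
    return BotReply(text="(model answer would go here)", confidence=0.55)

def guarded_answer(question: str) -> str:
    # Policy guardrail: refuse blocked topics outright, before any model call.
    lowered = question.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that topic. Connecting you to a human agent."

    reply = ask_llm(question)
    # Escalation guardrail: hand off when the bot cannot answer precisely.
    if reply.confidence < CONFIDENCE_THRESHOLD:
        return "I'm not sure about this one. Transferring you to a human agent."
    return reply.text

print(guarded_answer("Can you give me legal advice on my contract?"))
print(guarded_answer("How do I reset my password?"))
```

The ordering matters: the policy check runs before the model is ever called, so blocked topics never reach the LLM at all.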
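
For the second tactic, here is a sketch of the kind of lightweight audit that catches empty, duplicate, and stale entries before they degrade the model. The record schema (`id`, `text`, `updated`) and the 12-month freshness cutoff are purely illustrative assumptions.

```python
from datetime import datetime, timedelta

# Illustrative knowledge-base records; field names are assumptions.
records = [
    {"id": 1, "text": "Reset your password from the account page.",
     "updated": datetime(2023, 1, 10)},
    {"id": 2, "text": "", "updated": datetime(2024, 1, 5)},
    {"id": 3, "text": "Reset your password from the account page.",
     "updated": datetime(2021, 6, 1)},
]

MAX_AGE = timedelta(days=365)  # assumed freshness cutoff

def audit(records, now):
    """Flag empty, stale, or duplicate records before they reach the model."""
    seen_texts = set()
    issues = []
    for rec in records:
        if not rec["text"].strip():
            issues.append((rec["id"], "empty text"))
        elif rec["text"] in seen_texts:
            issues.append((rec["id"], "duplicate of an earlier record"))
        if now - rec["updated"] > MAX_AGE:
            issues.append((rec["id"], "stale (older than 12 months)"))
        seen_texts.add(rec["text"])
    return issues

for rec_id, problem in audit(records, now=datetime(2024, 2, 7)):
    print(f"record {rec_id}: {problem}")
```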
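
And for the third tactic, a minimal deny-by-default sandbox: the bot may invoke only tools on a pre-approved allowlist. The tool names and registry here are invented for the example.

```python
# Illustrative allowlist: the bot may call only these pre-approved tools.
ALLOWED_TOOLS = {"kb_search", "order_status"}

def kb_search(query: str) -> str:
    return f"searching knowledge base for: {query}"

def order_status(order_id: str) -> str:
    return f"order {order_id} is in transit"

TOOL_REGISTRY = {"kb_search": kb_search, "order_status": order_status}

def call_tool(name: str, *args: str) -> str:
    """Gate every tool invocation through the allowlist."""
    if name not in ALLOWED_TOOLS:
        # Deny by default: anything outside the sandbox is refused.
        raise PermissionError(f"tool '{name}' is not sandbox-approved")
    return TOOL_REGISTRY[name](*args)

print(call_tool("order_status", "A-1042"))
# call_tool("send_email", "...") would raise PermissionError.
```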

Despite these challenges, ongoing research aims to tackle the problem. From developing larger models to enabling LLMs to fact-check their own output, efforts are underway to enhance accuracy and reliability.

Ultimately, Smith emphasizes, common sense is the best mitigation for AI-related risks: understand the capabilities and limitations of these systems, establish clear rules of engagement, and ensure those boundaries are respected.
