
### Leveraging SRI Research to Mitigate AI Hallucinations

New SRI research is testing a novel approach to reducing AI's propensity to hallucinate.

SRI International, formerly known as the Stanford Research Institute, is testing large language models (LLMs) at various levels of abstraction. The non-profit research institute has pioneered significant inventions, including the graphical user interface, early internet infrastructure, Apple's Siri, and the laser printer. Its most recent study introduces a strategy for addressing hallucinations that goes beyond current methods.

The study models how people learn: first mastering the basics, then discussing their interpretations in different contexts. While still in its early stages, the approach could shape how tools are developed and evaluated across the industry. The team is also exploring how AI optimized for specific processes could gradually refine a foundation model over time.

A team including Ajay Divakaran, Michael Cogswell, Pritish Sahu, Yunye Gong, and Karan Sikka is developing this method. They take a nuanced view when working to enhance Orion's reliability, particularly in addressing hallucinations. Divakaran challenges the notion that everything an LLM generates can be dismissed as hallucination, emphasizing that logical coherence and adherence to reality are what separate valid outputs from hallucinated ones.

The primary objective in de-hallucinating chatbots is to ensure they produce logically consistent, coherent content so that interactions flow like human conversation. One example is the team's work to improve interactions with an interviewing robot, where they found that certain LLMs tended to repeat questions, risking user disengagement.
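The repetition failure is simple to check for mechanically. Below is a minimal sketch of such a check, assuming plain string similarity as the duplicate test; the function names and threshold are illustrative, not SRI's implementation.

```python
from difflib import SequenceMatcher

def normalize(question: str) -> str:
    """Lowercase and strip punctuation so near-identical questions compare equal."""
    return "".join(ch for ch in question.lower() if ch.isalnum() or ch.isspace()).strip()

def is_repeat(candidate: str, history: list[str], threshold: float = 0.9) -> bool:
    """Flag a candidate question if it is nearly identical to one already asked."""
    cand = normalize(candidate)
    return any(
        SequenceMatcher(None, cand, normalize(prev)).ratio() >= threshold
        for prev in history
    )

history = ["What is your greatest strength?"]
print(is_repeat("What's your greatest strength?", history))  # True: near-duplicate
print(is_repeat("Why do you want this job?", history))       # False: new question
```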

Divakaran considers establishing trust with users essential, highlighting the importance of demonstrating an understanding of the environment and responding in a human-like way to foster genuine communication.

Automation plays a pivotal role in the process. Reinforcement learning from human feedback (RLHF) is used to mitigate hallucinations: model responses are scored against human judgments, enabling iterative retraining. By pairing AI systems proficient in reinforcement learning with abstract reasoning, the study aims to streamline this process while improving outcomes.
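To make the RLHF step concrete, here is a self-contained toy sketch of its reward-modeling core: fit a scalar reward to pairwise human preferences with the Bradley-Terry objective, then use that reward to rerank candidate responses. The bag-of-words features and the tiny dataset are stand-ins for a learned encoder and real annotations; this is a schematic, not SRI's code.

```python
import math

# Toy pairwise-preference data: (preferred response, rejected response).
# In RLHF, human annotators supply these judgments at scale.
preferences = [
    ("Paris is the capital of France.", "The capital of France is Lyon."),
    ("I don't know the answer to that.", "The moon is made of cheese."),
]

VOCAB = sorted({w for a, b in preferences for w in (a + " " + b).lower().split()})

def features(text):
    """Bag-of-words feature vector (a stand-in for a learned encoder)."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

weights = [0.0] * len(VOCAB)

def reward(text):
    """Scalar reward: dot product of the text's features and learned weights."""
    return sum(w * x for w, x in zip(weights, features(text)))

# Fit the reward model with the Bradley-Terry objective:
# maximize log sigmoid(reward(preferred) - reward(rejected)).
lr = 0.1
for _ in range(200):
    for good, bad in preferences:
        margin = reward(good) - reward(bad)
        grad_scale = 1.0 - 1.0 / (1.0 + math.exp(-margin))  # d/d margin of log-sigmoid
        for i, (fg, fb) in enumerate(zip(features(good), features(bad))):
            weights[i] += lr * grad_scale * (fg - fb)

# The fitted reward can then steer the LLM, e.g. by reranking sampled
# candidates (best-of-n) or as the reward signal for policy-gradient tuning.
candidates = ["The capital of France is Lyon.", "Paris is the capital of France."]
print(max(candidates, key=reward))
```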

The study also applies conceptual consistency checks to keep statistical models coherent, particularly in business contexts. The same methodology extends to visual question-answering (VQA) programs, improving the accuracy of image-based responses. The team wants AI systems to reason in a human-like way and to explain the rationale behind their outputs.
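One way such a consistency check could look for VQA: if a model answers "yes" to a specific question, it should also answer "yes" to any question the first one entails. The sketch below assumes a hypothetical `vqa_model` callable and hand-written entailment pairs; both are illustrative.

```python
# Hypothetical stand-in for a VQA model; in practice this would call an LVLM.
# Returns "yes"/"no" answers about an image.
def vqa_model(image, question):
    canned = {
        "Is there a dog in the image?": "yes",
        "Is there an animal in the image?": "no",  # contradicts the answer above
    }
    return canned.get(question, "no")

# Each entry encodes an entailment: answering `premise` with "yes" logically
# requires answering `implication` with "yes" as well.
ENTAILMENTS = [
    ("Is there a dog in the image?", "Is there an animal in the image?"),
]

def consistency_check(image):
    """Return the list of entailment pairs the model violates."""
    violations = []
    for premise, implication in ENTAILMENTS:
        if vqa_model(image, premise) == "yes" and vqa_model(image, implication) != "yes":
            violations.append((premise, implication))
    return violations

print(consistency_check(image=None))
# [('Is there a dog in the image?', 'Is there an animal in the image?')]
```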

Drawing inspiration from human learning research such as Benjamin Bloom's taxonomy, the study categorizes learning objectives by complexity: remembering, understanding, applying, analyzing, evaluating, and creating. Using this framework, the team aims to improve educational outcomes, particularly helping young children's comprehension through interactive picture books.
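As a rough illustration of how Bloom's levels could drive probing at increasing abstraction, the snippet below maps each level to a question template for a given story; the concrete wording is hypothetical, not taken from the SRI study.

```python
# Illustrative Bloom's-taxonomy probe templates, ordered from concrete recall
# up to open-ended creation.
BLOOM_PROBES = {
    "remember":   "What happens in the story '{title}'?",
    "understand": "Summarize the main idea of '{title}' in one sentence.",
    "apply":      "How would the lesson of '{title}' apply to a new situation?",
    "analyze":    "How do the characters' goals in '{title}' conflict?",
    "evaluate":   "Is the ending of '{title}' justified? Why or why not?",
    "create":     "Invent an alternative ending for '{title}'.",
}

def probes_for(title: str):
    """Generate one probe question per Bloom level for the given story."""
    return {level: template.format(title=title)
            for level, template in BLOOM_PROBES.items()}

for level, question in probes_for("The Tortoise and the Hare").items():
    print(f"{level:>10}: {question}")
```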

The team has also developed a story-graph model that captures relationships among entities, events, and locations across different levels of abstraction. An extension of traditional scene graphs, it helps automate the training of large vision-language models (LVLMs) such as DRESS, yielding more accurate responses and improved learning.
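A story graph can be sketched as typed nodes connected by relation edges, from which question-answer pairs are generated automatically. The node names, relation types, and templates below are illustrative assumptions, not the structure SRI uses for DRESS.

```python
# Minimal story-graph sketch: nodes are entities, events, and locations;
# edges are typed (subject, relation, object) triples.
story_graph = {
    "nodes": {
        "tortoise": "entity",
        "hare": "entity",
        "race": "event",
        "forest_road": "location",
    },
    "edges": [
        ("tortoise", "participates_in", "race"),
        ("hare", "participates_in", "race"),
        ("race", "located_at", "forest_road"),
        ("tortoise", "wins", "race"),
    ],
}

def generate_qa(graph):
    """Turn typed edges into question-answer pairs for LVLM training."""
    templates = {
        "participates_in": ("Who takes part in the {o}?", "the {s}"),
        "located_at": ("Where does the {s} take place?", "the {o}"),
        "wins": ("Who wins the {o}?", "the {s}"),
    }
    pairs = []
    for s, rel, o in graph["edges"]:
        if rel in templates:
            q, a = templates[rel]
            pairs.append((q.format(s=s, o=o), a.format(s=s, o=o)))
    return pairs

for q, a in generate_qa(story_graph):
    print(q, "->", a)
```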

In conclusion, the study underscores scalability and the practical application of its ideas, aiming to democratize access to advanced models and ensure their reliable use. Its focus on mitigating hallucinations, advancing multimodal AI, and innovating data structures for richer contextual understanding marks a significant step toward bridging the gap between AI technology and human cognition.
