ChatGPT and other generative AI tools draw on content from across the internet to generate text and other media in response to your questions. The same cautionary advice you give your children about not believing everything they read online applies to generative AI, too.
Some companies built social media platforms with the noble intention of fostering connections among people. While successful to some extent, social media has also fueled hate speech, violence, harassment and self-esteem problems among teenagers.
If the years of the social media era have shown anything, it is that new systems bring both advantages and disadvantages.
Today, the rapid advancement of technologies such as ChatGPT, Bing Chat, Bard and others gives us an opportunity to be more intentional from the start. These tools are forms of generative artificial intelligence (AI): they use deep learning algorithms to predict and produce content in response to a user's input or query.
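To make the "predict and produce" idea concrete, here is a minimal, hypothetical sketch in Python: a toy bigram model that generates text by repeatedly choosing the word most likely to follow the previous one. Real generative AI systems use deep neural networks trained on vastly more data, but the underlying predict-the-next-word principle is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus; real generative models train on billions of documents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt: str, length: int = 5) -> str:
    """Repeatedly append the most likely next word, given the last word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints "the cat sat on the cat" with this toy corpus
```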
If you have used such tools to plan a trip or decide what to have for dinner, you may have found the experience as engaging as exploring a new social media app.
At the same time, you may worry about this technology's impact on society, such as its implications for job security and the spread of misinformation. It is crucial to acknowledge both the immense potential and the risks that come with it.
To get ahead of AI's influence on our society, we at NIST are collaborating with the tech community to establish safeguards for many forms of AI, not just text- and image-generation applications.
Embracing AI Responsibly Can Transform Our World Positively
The emergence of generative AI may seem sudden, but at NIST we have been researching and thinking about AI for a long time. For more than two decades, my focus has been on machine learning with an emphasis on trustworthy AI: making sure that AI does not harm people or society.
To help tech companies think through the implications of what they develop or launch, NIST spent the past year working with the AI community to create a voluntary AI Risk Management Framework. Our goal is to shield people from the adverse effects of AI systems while enabling communities to reap their benefits.
While the technology is still evolving, we have engaged with tech firms, consumers, advocacy groups, legal experts, researchers, and various other specialists to evaluate the potential negative impacts of AI and how to address them effectively.
For NIST and the research community, it is a new endeavor to assess an AI system not only on its functionality but also on the impact it may have on individuals, communities and the world. This is known as a socio-technical approach.
We released the framework in January and are now developing protocols and testing methodologies to evaluate the trustworthiness of AI technologies.
Since then, we have called for collaborative efforts to develop fresh guidance on various aspects of generative AI. These include how companies should respond to incidents involving their AI systems, techniques for validating language models, and ways for the general public to verify the authenticity of online images and videos.
For instance, with deepfake images and videos becoming ever more prevalent, playing "cat and mouse" with each new fake seems futile. What if, instead of chasing them, we implemented authenticity markers that let people verify the legitimacy of the images and videos they encounter? This could work much like the verification badges some social media platforms use for notable accounts, blunting the impact of deepfakes by making authenticity easy to check.
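To illustrate how such an authenticity marker might work (an illustrative sketch, not an actual or proposed NIST scheme), the Python example below has a publisher sign a hash of an image with a private key; anyone holding the matching public key can later confirm that the image is unaltered and really came from that publisher.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature
import hashlib

# The publisher signs a digest of the image bytes at publication time.
publisher_key = ed25519.Ed25519PrivateKey.generate()
image_bytes = b"...raw image data..."  # placeholder for real image bytes
signature = publisher_key.sign(hashlib.sha256(image_bytes).digest())

# Anyone holding the publisher's public key can check authenticity later.
public_key = publisher_key.public_key()

def is_authentic(image: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(image).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(image_bytes, signature))         # True: untouched image
print(is_authentic(image_bytes + b"x", signature))  # False: altered image
```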
Evaluating the large language models these tools use to answer queries poses another significant challenge. How can we be sure a software company has rigorously tested its language model when all it offers is "Trust us, we have tested it"? Every platform should assess its language models against an industry-accepted standard of best practices. That would let governments trust a model's output and let the tech community collectively identify and address deficiencies in its language models.
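As a flavor of what testing against a shared standard could look like, here is a deliberately simplified, hypothetical Python harness: any vendor's model is plugged in behind the same interface and scored on the same published benchmark. Real evaluations of accuracy, bias and robustness are far more involved, but the principle of an agreed, reproducible test is the same.

```python
from typing import Callable

# A shared, published benchmark would replace this toy set of QA pairs.
BENCHMARK = [
    ("What is the capital of France?", "paris"),
    ("How many days are in a week?", "7"),
]

def evaluate(model: Callable[[str], str]) -> float:
    """Return the fraction of benchmark questions the model answers correctly."""
    correct = sum(
        expected in model(question).lower()
        for question, expected in BENCHMARK
    )
    return correct / len(BENCHMARK)

# Any vendor's model can be plugged in behind the same interface.
def dummy_model(prompt: str) -> str:
    return "Paris has 7 days"  # stand-in; a real model would generate an answer

print(f"accuracy: {evaluate(dummy_model):.0%}")  # accuracy: 100%
```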
These are the challenges we strive to overcome.
We are endeavoring to establish a comprehensive set of adaptable guidelines that can evolve alongside technological advancements, recognizing that this technology will progress faster than policy development.
Many tech companies developing AI products have voluntarily committed to following our guidelines as partners in this collaborative effort. They are enthusiastic collaborators because they have a vested interest in the reliability of their products and in how the public perceives them.
Ensuring the Reliability of AI Requires Evaluation
As Lord Kelvin observed, if you cannot measure something, you cannot improve it. That is how we at NIST are approaching the next phase of AI.
During my tenure at NIST, I developed a methodology for evaluating the quality of biometric images, which was later adopted as an international standard. As complex as that task was, it was fundamentally a technical challenge with technical solutions.
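For a taste of that kind of technical measurement (an illustrative stand-in, not the actual standard), the sketch below scores an image's sharpness using the variance of a discrete Laplacian, a common heuristic for focus quality in image-quality assessment.

```python
import numpy as np

def sharpness_score(image: np.ndarray) -> float:
    """Variance of a discrete Laplacian: higher means sharper edges."""
    lap = (
        -4 * image[1:-1, 1:-1]
        + image[:-2, 1:-1] + image[2:, 1:-1]
        + image[1:-1, :-2] + image[1:-1, 2:]
    )
    return float(lap.var())

rng = np.random.default_rng(0)
flat = np.ones((64, 64)) * 0.5   # featureless image: no edges at all
busy = rng.random((64, 64))      # noisy image: many abrupt transitions

print(sharpness_score(flat))  # 0.0
print(sharpness_score(busy))  # much larger
```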
I have come to realize that we cannot view AI systems through a technical or computational lens alone. Generative AI is a sophisticated fusion of data, mathematical algorithms, human elements and the environment in which they all interact. Consequently, we must weigh the potential positive and negative impacts, and the risks, that come with these systems.
Engineers may sometimes create solutions out of necessity, but not always; either way, we must think through the potential consequences of what we create. That has been our approach to studying AI so far, and we will continue to examine both its potential benefits and its drawbacks. Our focus extends beyond whether this AI resource merely works; we are equally interested in what it means for people. How can everyone benefit from it? How can we make the solution inclusive? These questions bring a deeply human dimension to our work.
Embracing the Opportunities Presented by AI
The aspect of AI that excites me the most is its potential to benefit communities and enhance our lives when employed for the collective good. By conscientiously considering the implications, we can position people at the core of these advancements.
Today, generative AI mostly serves as a source of amusement for crafting jokes or as a way to augment human knowledge. In the future, however, it could help medical professionals diagnose symptoms or offer practical assistance in people's daily routines.
There is no need to fear AI, now or in the future, though it is prudent to approach it with reasonable caution. My colleagues at NIST and I are striving to ensure that people benefit from this technology, rather than letting the technology dictate the outcomes.