With the launch of ChatGPT in November 2022, conceptual AI has been in existence for nearly a year, and businesses are still grappling with how to effectively integrate it into their operations to boost productivity and reduce expenses.
Conceptual AI is being utilized by individuals to streamline deal closures, provide prompt responses to recurring inquiries and tasks, comprehend user behaviors, and tailor highly personalized customer shopping experiences.
To optimize operations and foster customer loyalty by delivering the best possible experience, businesses across all sectors are racing to implement relational AI and AI assistants. However, as the usage of AI assistants continues to rise, it is evident that businesses must embrace a data-centric approach to ensure that these assistants are both effective and secure for workplace use.
Claire Cheng, a senior executive at Amazon’s AI division and a prominent figure in the world of AI CRM, emphasized the significance of prioritizing ethical and responsible AI in our conversation. Salesforce, the leading AI CRM globally, empowers businesses of all sizes and industries to engage with their customers through the integration of data, AI, CRM, and trust. According to Claire, the most critical investment a company can make to stay competitive is in ethical and responsible AI practices.
Gary Drenik: Could you elaborate on why the development of AI assistants should prioritize ethical and responsible AI practices?
Claire Cheng: Nearly three-quarters of consumers express concerns about the unethical use of AI as brands increasingly adopt it to enhance efficiency and meet customer demands. A recent study by Advanced Analytics & Insights revealed that 43.6% of users of tools like ChatGPT utilize them for research purposes. These AI assistants, which rely on large language models (LLMs), are trained on vast amounts of data that shape the information people seek. Without a data-centric approach to AI utilization, these extensive datasets are inevitably imbued with biases that may manifest in the model’s outputs.
Prosper – How You Utilize ChatGPT
Companies must place trust at the core of all AI-related initiatives. By enabling customers to leverage their trusted proprietary data within the Data Cloud to drive their AI outcomes and advocating for a data-first approach that aligns with ethical standards in enterprise applications, Salesforce underscores the importance of ethical and responsible AI. This approach instills confidence in customers that their AI-driven insights and actions are founded on principles of user privacy and data governance, paving the way for AI solutions that are not only powerful but also reliable and ethical.
Drenik: As we delve into this new era of AI-powered assistants, what are the primary social challenges faced by the data community?
Cheng: The most significant ethical challenge confronting the data community is educating data producers and consumers about the limitations and potential biases of AI when generating outputs from training data. Technologies such as the Einstein Trust Layer are essential in helping businesses mitigate risks and challenges by fostering trust in AI-generated content and predictions.
Over 60% of skilled professionals believe they lack the requisite skills to effectively and safely utilize AI, highlighting a skills gap that needs to be addressed. Failure to recognize these limitations early on and rectify AI models before deployment could reinforce and amplify biases as more sophisticated iterations are developed.
Drenik: How can companies ensure that their data is reliable, accurate, and secure?
Cheng: The quality of AI is directly tied to the quality of the data that fuels it. This is why Salesforce’s Data Cloud has garnered significant acclaim; it aids businesses in comprehending and unlocking user data, delivering real-time meaningful insights at scale.
Additionally, refining AI models, ensuring secure data access, and sourcing data from reputable channels are crucial steps. For example, Amazon employs a dual approach of anchoring in user data and continuous iterative enhancement based on customer feedback to enhance prediction quality and accuracy.
Active grounding, which involves retrieving relevant, factual, and up-to-date data to inform LLM generations, remains essential for producing highly relevant and accurate outputs. Validating and improving model accuracy based on customer feedback greatly assists organizations in enhancing AI assistants and final products.
Drenik: Can conceptual AI detect biases in AI assistants and data, or is human intervention necessary to ensure accuracy and trustworthiness?
Cheng: 73% of IT professionals express concerns about potential biases in relational AI. When evaluating AI designs for bias, it is crucial to conduct assessments in controlled environments. At Amazon, for instance, predictive AI analyzes historical data patterns to forecast future events. By developing statistical measures of bias and fairness and iteratively testing these techniques, bias-related issues can be identified and rectified before deployment.
Addressing bias in conceptual AI poses a greater challenge, necessitating innovative evaluation methods like adversarial testing. By intentionally pushing the boundaries of relational AI models and applications through adversarial testing, areas of vulnerability to bias, toxicity, or inaccuracy can be identified, leading to improved models.
While human intervention may not always be mandatory to maintain trust, early involvement is crucial in establishing trust with users and identifying and mitigating inputs that could lead to inaccurate, harmful, or biased outputs from AI models.
Drenik: Apart from trust, what are the privacy implications for companies leveraging relational AI, and how can they implement adequate safeguards to prevent inadvertent leakage of sensitive customer data?
Cheng: Privacy and trust are paramount concerns for companies utilizing conceptual AI. A vast majority of consumers (79%) expect enhanced protection of their personal information. Security is a core tenet of Salesforce’s Trusted AI Principles.
Organizations must take meticulous measures to safeguard personally identifiable information (PII) in training data and establish protocols to prevent data breaches. By testing outputs in a sandbox environment before deployment and ensuring data provenance and consent for data usage, organizations can prevent unauthorized data exposure. Obtaining consent, especially when utilizing data from open-source or user-provided sources, is crucial to maintaining data integrity.
Drenik: How can businesses ensure that IT teams utilizing conceptual AI solutions are adequately trained to identify data biases and develop AI assistants without compromising security?
Cheng: A Salesforce survey revealed that 66% of senior IT executives believe their staff lacks the necessary skills to effectively leverage generative AI. However, almost all respondents (99%) agree that businesses must take proactive steps to harness the technology effectively.
Establishing guidelines for staff handling AI assistants and reviewing the data used for training them is crucial. Salesforce, for instance, was the first to publish relational AI development principles. Once these guidelines are in place, investing in training programs to enhance employees’ understanding of reliable data sources and data security is essential.
At Amazon, as conceptual AI and AI assistants become more prevalent in daily operations, employees and customers alike can access Trailhead to enhance their AI skill sets.
Drenik: Claire, thank you for sharing your insights on AI assistants and underscoring the importance of ethical and responsible AI as this technology becomes more pervasive in the workplace.