Tucked among the employee counts and financial statements in Rubrik's recent IPO filing is a notable disclosure about how the company approaches generative AI and its risks: Rubrik has quietly set up a governance committee to oversee how artificial intelligence is used in its business.
According to the Form S-1, the new AI governance committee brings together managers from across Rubrik, including engineering, product development, legal, and information security. Together, these teams will evaluate the potential legal, security, and business risks of using generative AI tools and consider how to mitigate them, the filing says.
AI is not central to Rubrik's business. Its only AI product is a chatbot named Ruby, launched in November 2023, which is built on Microsoft and OpenAI APIs. But like many companies, Rubrik is planning for a future in which AI plays an increasingly important role in its operations, and establishing an AI governance committee could soon become the norm for businesses.
Escalating Regulatory Scrutiny
While some companies adopt AI best practices on their own initiative, others will be pushed into them by regulation such as the EU AI Act. Billed as "the world's first comprehensive AI law," the legislation, expected to come into force across the EU later this year, bans certain AI uses deemed to pose "unacceptable risk" and designates others as "high risk." It also sets out governance rules intended to curb risks such as bias and discrimination. This risk-based approach is likely to be widely adopted by companies looking for a structured path to AI adoption.
Eduardo Ustaran, a privacy and data protection lawyer at Hogan Lovells International LLP, expects the EU AI Act and its extensive obligations to increase the need for AI governance, and with it the need for committees. Such committees, he says, play a pivotal role in identifying and addressing risks proactively, ensuring compliance and limiting potential harm.
In a recent policy paper on the EU AI Act's implications for corporate governance, Katharina Miller, an ESG and compliance consultant, likewise recommends that companies establish AI governance committees as a compliance measure.
Legal Implications
Compliance is not merely a box-ticking exercise to appease regulators. The EU AI Act carries substantial penalties for non-compliance, as the law firm Norton Rose Fulbright has noted, and its reach extends beyond Europe's borders, potentially affecting companies operating globally, particularly given closer EU-U.S. collaboration on AI.
Regulation aside, AI tools can expose companies to legal risk on their own. According to the filing, Rubrik's AI governance committee evaluates a broad spectrum of risks, including confidential information, personal data protection, customer data handling, contractual obligations, intellectual property rights, transparency, output accuracy, and security.
Rubrik's caution on legal compliance may also reflect its own history, which includes past incidents such as data breaches, hacks, and intellectual property disputes.
Strategic Imperatives and Trust Building
Risk mitigation is not the only reason companies adopt AI. There are opportunities they want to seize, and their stakeholders expect them to. Despite known limitations such as "hallucination" in generative AI tools, companies are pressing ahead because of the potential benefits.
Navigating this landscape requires balance. Showcasing AI use can boost a company's valuation and market perception, but addressing the attendant risks matters just as much. AI governance committees are poised to become a cornerstone of building public trust in AI systems and their responsible deployment, as Adomas Siudika, privacy counsel at OneTrust, has emphasized.
For companies navigating this evolving landscape, establishing an AI governance committee is a proactive step toward compliance, risk mitigation, and trust with stakeholders.