Ambitious Workers Highlight Innovative AI Solutions While Disregarding Significant SaaS Security Concerns
Similar to the historical issue of SaaS shadow IT, the emergence of AI technology is posing a challenge for Chief Information Security Officers (CISOs) and cybersecurity teams.
Employees are surreptitiously incorporating AI tools into their workflows without much consideration for established IT and cybersecurity vetting processes. With ChatGPT’s rapid expansion to 100 million users in just 60 days post-launch, driven mainly by word of mouth rather than extensive sales and marketing efforts, the demand for AI tools from employees is on the rise.
Recent research indicates that some employees are enhancing their productivity by up to 40% through the use of generative AI. This places increasing pressure on CISOs and their teams to accelerate the adoption of AI, even if it means turning a blind eye to unauthorized usage of AI tools.
However, yielding to these demands can lead to severe risks of data leaks and breaches within SaaS platforms, especially when employees gravitate towards AI tools developed by small enterprises, individual entrepreneurs, and independent developers.
The Security Challenges Posed by Indie AI Startups Compared to Enterprise AI
The landscape of indie AI applications has expanded significantly, with thousands of offerings enticing employees with freemium models and product-led growth strategies. According to Joseph Thacker, a prominent offensive security engineer and AI expert, indie AI developers typically have fewer security resources, less legal oversight, and lower compliance standards compared to enterprise AI ventures.
Thacker categorizes the risks associated with indie AI tools as follows:
- Data Exposure: Generative AI tools, particularly those utilizing large language models (LLMs), often have access to a wide range of user inputs. Instances of leaked ChatGPT chat histories underscore the lack of robust security measures in place for most indie AI tools. These tools frequently retain user prompts for training and debugging purposes, leaving sensitive data vulnerable to potential breaches.
- Quality Concerns: LLMs are susceptible to generating inaccurate or nonsensical outputs, a phenomenon known as hallucinations. Organizations relying on LLMs for content creation without human oversight risk publishing misleading information. Ethical concerns have also been raised by various groups regarding the attribution of AI-generated content.
- Vulnerabilities: Smaller organizations behind indie AI tools are more prone to overlooking common product vulnerabilities. These tools may be susceptible to injection attacks and traditional security flaws like SSRF, IDOR, and XSS.
- Compliance Risks: Due to the absence of robust privacy policies and internal regulations, indie AI vendors face potential fines for non-compliance with data regulations such as SOX, ISO 27001, NIST CSF, NIST 800-53, and APRA CPS 234. Moreover, many indie AI vendors lack SOC 2 compliance.
In essence, indie AI vendors often fall short in adhering to the stringent frameworks and protocols essential for safeguarding critical SaaS data and systems. These risks are magnified when indie AI tools are integrated with enterprise SaaS platforms.
The Impact of Connecting Indie AI with Enterprise SaaS Applications on Productivity and Security
Employees experience or perceive notable efficiency gains by leveraging AI tools. Subsequently, they seek to enhance these gains by integrating AI with their daily-used SaaS systems like Google Workspace, Salesforce, or M365.
As indie AI tools rely heavily on organic growth strategies rather than conventional marketing tactics, these tools facilitate seamless connections with SaaS platforms within the product itself. An incident highlighted in a Hacker News article regarding security risks associated with generative AI exemplifies an employee using an AI scheduling assistant to optimize time management by analyzing task and meeting data from tools like Slack, corporate Gmail, and Google Drive.
Since AI tools commonly utilize OAuth access tokens to establish connections with SaaS platforms, the AI scheduling assistant gains continuous API-based access to Slack, Gmail, and Google Drive.
Employees frequently establish such AI-to-SaaS connections without fully assessing the associated risks, focusing more on the potential benefits. However, they may unknowingly expose sensitive organizational data by integrating subpar AI applications.
Figure 1: Illustration of an indie AI tool establishing an OAuth token connection with a major SaaS platform. Credit: AppOmni
These AI-to-SaaS connections inherit the user’s permission settings, posing a significant security threat due to the lax security measures typically followed by indie AI tools. Threat actors target such tools as entry points to access connected SaaS systems containing valuable company data.
Once unauthorized access is gained through this backdoor, threat actors can extract data without detection for extended periods. Unfortunately, such unauthorized activities often go unnoticed for weeks or even years, as evidenced by the January 2023 CircleCI data breach, where a significant time lapse occurred between data exfiltration and public disclosure.
Without robust SaaS security posture management (SSPM) tools to monitor unauthorized AI-to-SaaS connections and detect anomalies like extensive data downloads, organizations remain vulnerable to SaaS data breaches. SSPM plays a pivotal role in mitigating these risks but should not replace thorough review procedures and protocols.
Practical Steps to Mitigate Security Risks Associated with Indie AI Tools
To address the security challenges posed by indie AI tools, Thacker recommends that CISOs and cybersecurity teams focus on foundational measures to prepare their organizations effectively:
1. Prioritize Standard Due Diligence
Basic due diligence is crucial. Ensure that someone within your team or legal department thoroughly reviews the terms of service for any AI tools requested by employees. While this may not guarantee immunity from breaches or leaks, a comprehensive understanding of the terms can guide your legal response in case of service term violations by AI vendors.
2. Establish or Update Application and Data Policies
Clear application policies provide transparency and guidance within your organization. Implementing an “allow-list” for AI tools developed by established SaaS providers and categorizing others as “disallowed” can streamline decision-making. Alternatively, a data policy can outline permissible data inputs for AI tools. For instance, restricting the use of intellectual property in AI programs or sharing data between SaaS systems and AI applications.
3. Invest in Ongoing Employee Training and Education
The majority of employees engaging with indie AI tools do not have malicious intent. They often lack awareness of the risks associated with unsanctioned AI tool usage.
Regular training sessions can educate employees on the potential risks of data leaks and breaches associated with AI tools, as well as the implications of AI-to-SaaS connections. These sessions also serve as opportunities to reinforce organizational policies and review processes.
4. Conduct Thorough Vendor Assessments
During vendor assessments of indie AI tools, maintain the same level of scrutiny applied to established enterprise vendors. Evaluate their security measures and compliance with data privacy regulations. Address key considerations such as:
- Who has access to the AI tool? Are access permissions restricted to specific individuals or teams? Does the tool involve third parties or external models?
- What safeguards are in place to protect user inputs and outputs? Does the tool exhibit vulnerabilities like SSRF, IDOR, or XSS?
- Are there features within the AI tool that could potentially lead to security breaches or data leaks?
- How does the tool handle external inputs, and what measures are in place to prevent malicious injections?
AppOmni offers detailed CISO Guides on AI Security that delve deeper into vendor assessment questions and provide insights into the risks and opportunities associated with AI tools.
5. Foster Collaborative Relationships and Accessibility
CISOs, cybersecurity teams, and other stakeholders responsible for AI and SaaS security should position themselves as partners in navigating AI adoption with business leaders and employees. Building strong relationships, effective communication, and accessible guidelines are essential components of integrating security into business priorities.
By illustrating the financial and operational impact of AI-related security incidents, security teams can effectively convey the importance of cybersecurity to business units. Improved communication is a critical first step, but it should be complemented by adjustments in how security teams engage with the business.
Whether implementing application or data allow-lists, ensure that these guidelines are clearly articulated and readily available to employees. When individuals understand the permissible data inputs for AI tools or the approved vendors for AI applications, they are more likely to view the security team as enablers rather than obstacles. In cases where stakeholders request AI tools outside the established boundaries, initiate a dialogue to understand their objectives and explore suitable alternatives within the approved framework.
Creating an environment where the business perceives the security team as a valuable resource rather than a hindrance is key to safeguarding the SaaS ecosystem from potential risks posed by AI tools in the long run.
If you found this article informative, follow us on Twitter and LinkedIn for more exclusive content and updates.