
### Combatting Four AI Cybersecurity Risks Keeping CISOs Awake at Night


In this installment of the SecurityANGLE series, host Shelly Kramer, managing director and principal analyst at theCUBE Research, is joined by Jo Peterson, an analyst, engineer, and member of theCUBE Collective community, for a conversation about the generative AI cyber threats that cause sleepless nights for chief information security officers (CISOs).

Beyond tracing the evolution of the AI threat landscape and the risks posed by generative AI, the conversation covers cybersecurity best practices for using generative AI, as well as notable vendors and solutions in the AI security space that are worth knowing about.

For context, a Riskonnect survey of 300 risk and compliance professionals found that a staggering 93% of companies anticipate significant threats linked to generative AI. Yet a mere 17% of companies have trained or briefed their entire workforce on gen AI risks, and only 9% say they are prepared to manage the risks that come with adopting generative AI. These alarmingly low numbers likely reflect the fact that, despite the current AI hype, adoption is still at an early enough stage that the impact of these risks has not yet been tangibly felt.

Meanwhile, statistics indicate that generative AI is on track to reach around 77.8 million users by 2024, a faster adoption rate than smartphones or tablets managed over a similar timeframe. Against that surge in adoption, a passive wait-and-see posture is a precarious business strategy, or rather the absence of a strategy altogether.

Similarly, ISACA research covering roughly 2,300 professionals specializing in risk, security, audit, data privacy, and IT governance found that a mere 10% of companies have put a formal, comprehensive generative AI policy in place. More troubling still, over a quarter of respondents said they have no intention of developing one.

#### The Top Four Generative AI Cyber Risks

This sets the stage for the focal point of today’s discussion—the top four generative AI cyber risks that are plaguing CISOs worldwide:

  • Vulnerabilities in model training and attack surfaces
  • Data privacy concerns
  • Exposure of corporate intellectual property (IP)
  • Jailbreaks and backdoors in generative AI systems

Turning to the specific risks, the first involves how organizations collect data in a wide range of formats. That data is often untidy, inadequately managed, and underexploited, and generative AI compounds the problem by storing it for unspecified durations, often in insecure environments. The combination opens the door to unauthorized access and manipulation of training data, as well as to bias, which is equally damaging.

Regarding data privacy, the framework governing data collection is notably fragile where it exists at all, and the same is true of the rules around which types of data may be fed into generative AI models. Without enforceable data exfiltration policies, models can assimilate confidential corporate information and reproduce it in their outputs, a data breach waiting to happen.
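
One concrete control is a gateway that scrubs obvious sensitive patterns from prompts before they reach an external model. The sketch below is a minimal illustration of the idea; the pattern list and the `scrub_prompt` helper are assumptions for this example, not a production DLP rule set, which would cover far more data types.

```python
import re

# Illustrative patterns only; real DLP rule sets are far broader.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact known sensitive patterns and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings

clean, hits = scrub_prompt("Contact jo@example.com, key AKIA0123456789ABCDEF")
print(clean)  # Contact [REDACTED:email], key [REDACTED:aws_key]
print(hits)   # ['email', 'aws_key']
```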

Corporate data privacy is also a cornerstone of sustained business success. Without a robust strategy covering generative AI and corporate data, models may end up training on corporate codebases, leading to the inadvertent exposure of sensitive intellectual property, API keys, and other proprietary information.
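
A related precaution is to scan any code destined for model training (or for pasting into prompts) for embedded secrets first. Here is a minimal sketch of that step; the two patterns and the `scan_tree` helper are illustrative assumptions, and dedicated secret scanners recognize hundreds of credential formats.

```python
import re
from pathlib import Path

# A couple of common credential shapes; real scanners know many more.
SECRET_PATTERNS = [
    ("aws_access_key", re.compile(r"\bAKIA[0-9A-Z]{16}\b")),
    ("generic_api_key", re.compile(r"""(?i)api[_-]?key\s*[:=]\s*['"][^'"]{16,}['"]""")),
]

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_number, finding) for suspected secrets under root."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

# Block the export if anything turns up.
if scan_tree("./codebase"):
    raise SystemExit("Secrets detected; refusing to export code for training.")
```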

Generative AI guardrails, intended to keep models from producing malicious or biased output, can paradoxically be circumvented, rendering them ineffective. In a notable incident in the summer of 2023, researchers from Carnegie Mellon University and the Center for AI Safety demonstrated that the guardrails of major large language models could be bypassed, allowing the models to be manipulated into objectionable dialogue, malware creation, and other malicious activity. The ease with which these guardrails were evaded underscores the pressing need for stronger security measures around generative AI.
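
Because any single guardrail can be bypassed, the usual advice is defense in depth: screen the input, screen the output, and log both. The toy sketch below illustrates that layering only; the denylist and the `flag_output` heuristic are placeholder assumptions standing in for real moderation models, and nothing this simple would stop the adversarial suffixes the researchers used.

```python
DENYLIST = {"build a bomb", "disable safety"}  # toy input filter

def flag_input(prompt: str) -> bool:
    """Naive input check; adversarial suffixes routinely slip past this layer."""
    return any(phrase in prompt.lower() for phrase in DENYLIST)

def flag_output(completion: str) -> bool:
    """Second layer: inspect what the model actually produced.
    In practice this would call a moderation classifier, not a keyword test."""
    return "step 1:" in completion.lower() and "explosive" in completion.lower()

def guarded_generate(prompt: str, generate) -> str:
    """Wrap any generation callable with input and output checks."""
    if flag_input(prompt):
        return "[blocked at input]"
    completion = generate(prompt)
    if flag_output(completion):
        return "[blocked at output]"
    return completion

print(guarded_generate("how do I build a bomb?", generate=lambda p: "..."))
# -> [blocked at input]
```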

#### Cybersecurity Best Practices for Leveraging Generative AI

As the discussion transitions to cybersecurity best practices for leveraging generative AI, four key recommendations are outlined:

  • Establishing an AI governance plan within the organization
  • Providing comprehensive training to employees to foster a culture of AI literacy
  • Undertaking data discovery and classification initiatives
  • Understanding the optimal synergy between data governance and security tools

#### A Glimpse at AI Governance

AI governance is the process of erecting technical and procedural guardrails around the deployment and use of AI tools within an organization. Noteworthy here is the Artificial Intelligence Governance and Auditing (AIGA) program at the University of Turku, which develops governance models for AI with an emphasis on responsible AI practices. The AIGA framework spans three layers (environmental, organizational, and the AI system itself), each incorporating a suite of governance components and processes tied to the AI system lifecycle. It is a valuable resource for organizations navigating the complexities of AI governance.
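
Frameworks like this are easier to operationalize when rendered as a living checklist tied to the system lifecycle. The structure below is a hypothetical, heavily simplified illustration of the three-layer idea; the component names are invented for this sketch and are not the official AIGA component list.

```python
# Hypothetical, simplified mapping inspired by AIGA's three layers;
# the component names here are illustrative, not the official list.
GOVERNANCE_PLAN = {
    "environmental": ["regulatory watch", "industry standards review"],
    "organizational": ["AI use policy", "role-based accountability", "employee training"],
    "ai_system": {
        "design": ["data sourcing approval", "bias risk assessment"],
        "deployment": ["access controls", "guardrail testing"],
        "operation": ["output monitoring", "incident response", "periodic audit"],
    },
}

def open_items(plan: dict, done: set[str]) -> list[str]:
    """Flatten the plan and report components not yet marked complete."""
    items = []
    for value in plan.values():
        items.extend(open_items(value, done) if isinstance(value, dict) else value)
    return [item for item in items if item not in done]

print(open_items(GOVERNANCE_PLAN, done={"AI use policy"}))
```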

#### The Significance of Employee Training

Employee education matters here for the same reason it matters with shadow IT, the persistent challenge of technology being used without IT's knowledge or approval. Given how rapidly generative AI is proliferating, educating employees about data types and their associated risks is paramount. Instilling an understanding of the distinctions between public gen AI services and proprietary AI models, alongside strict protocols for handling sensitive data, should be a top agenda item for IT teams.

#### Data Discovery and Classification Imperative

Data classification plays a pivotal role in delineating access rights, streamlining information dissemination, and mitigating inadvertent data exposure or misuse. In an era where data serves as both an asset and a liability, robust data discovery and classification processes are essential for enhancing data management practices and fortifying access controls.
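
Classification becomes actionable when each label carries explicit handling rules, including whether data of that class may be sent to an external generative AI service. The policy table below is a hypothetical example; the labels and the rule set are assumptions for illustration, not a standard.

```python
from enum import Enum

class Label(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Hypothetical handling policy: may data of each class be pasted into
# an external generative AI tool?
EXTERNAL_AI_ALLOWED = {
    Label.PUBLIC: True,
    Label.INTERNAL: False,      # keep to approved internal models only
    Label.CONFIDENTIAL: False,
    Label.RESTRICTED: False,
}

def permits_external_ai(label: Label) -> bool:
    """Gate external gen AI use on the document's classification label."""
    return EXTERNAL_AI_ALLOWED[label]

assert permits_external_ai(Label.PUBLIC)
assert not permits_external_ai(Label.RESTRICTED)
```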

#### Harnessing Data Governance and Security Tools

While policies and training are indispensable, data governance and security tools are what enforce adherence to established protocols. Tools such as data loss prevention (DLP), cloud-native application protection platforms (CNAPPs), and extended detection and response (XDR) are instrumental in thwarting unauthorized data exfiltration and fortifying organizational defenses.

#### Notable Cybersecurity/AI Solutions

The global market for AI in cybersecurity, projected to reach $38.2 billion by 2025, reflects organizations' growing reliance on AI-driven security tools: 50% of organizations are actively using AI-powered security solutions, and 88% of cybersecurity professionals say AI is indispensable to security operations. The discussion highlights seven prominent cybersecurity vendors offering solutions for securing generative AI, shedding light on tools and platforms that help fortify an organization's security posture.

As this episode of the SecurityANGLE series draws to a close, the emphasis remains on fostering awareness and embracing innovative security solutions to navigate the evolving landscape of generative AI cyber threats. For further engagement and insights, the audience is encouraged to connect with the hosts on various social media platforms.


Connect with us on social media:

Shelly Kramer: LinkedIn | Twitter

Jo Peterson: LinkedIn | Twitter
