Researchers have identified nearly a dozen critical flaws in the infrastructure used by AI models, along with three high-severity and two medium-severity vulnerabilities. These issues could put businesses at risk as they race to adopt AI technology, and some of the flaws have yet to be fixed.
The affected platforms, which are used to host, deploy, and share large language models (LLMs) and other machine-learning (ML) systems, include H2O version 3, an open-source, Java-based machine-learning platform; ModelDB, a machine-learning management platform; and Ray, which is used for distributed training of ML models.
Protect AI, a machine-learning security company, disclosed the findings on November 16 through Huntr, its AI-specific bug-bounty program. After being notified, vendors and project maintainers were given a 45-day window to address the identified issues.
While several of the problems have been fixed, others remain unresolved; for those, Protect AI has proposed remediations through its consulting services.
Companies face growing risk from AI vulnerabilities, as attackers could exploit these weaknesses to manipulate AI models for their own ends, potentially gaining unauthorized access to sensitive information within the system. Sean Morgan, chief architect at Protect AI, emphasized the importance of defending networks against a range of intrusion methods, such as server compromise and credential theft from low-code AI services.
Exploitation of vulnerabilities in AI systems poses a tangible threat to businesses actively adopting the technology for diverse applications, such as financial institutions using AI for mortgage processing and anti-money-laundering checks.
Protect AI’s president, Daryan Dehghanpisheh, highlighted the importance of protecting intellectual property in AI and ML, citing the escalating risk of corporate espionage and unauthorized access to critical systems.
The security of AI infrastructure, often overlooked, is a critical concern in light of the rapid proliferation of AI technologies and services. The disclosure of AI-related vulnerabilities underscores the necessity of fortifying the tools and platforms supporting machine learning operations to mitigate potential risks.
Although bug hunting in the AI industry is still in its early stages, Protect AI’s Huntr program has been instrumental in soliciting vulnerability submissions from numerous researchers focused on a variety of machine-learning platforms. The evolving threat landscape is prompting a shift toward prioritizing security in AI and ML tools, as noted by industry experts such as Dustin Childs of Trend Micro’s Zero Day Initiative.