
### Vulnerabilities in AI-as-a-Service Providers: Risks of Privilege Escalation and Cross-Tenant Attacks

New findings indicate that AI-as-a-service providers like Hugging Face are exposed to two critical vulnerabilities that could empower malicious actors to elevate privileges, access other customers’ models, and potentially seize control of CI/CD pipelines.

According to Wiz researchers Shir Tamari and Sagi Tzadik, the presence of malicious models poses a significant threat to AI systems, particularly for service providers, as attackers could exploit these models for cross-tenant attacks. This could have severe consequences, granting unauthorized access to numerous private AI models and applications stored within these platforms.

The emergence of machine learning pipelines as a new avenue for supply chain attacks has made platforms like Hugging Face prime targets for adversarial activities aimed at extracting sensitive data and compromising target environments.

The identified threats stem from two main sources: shared inference infrastructure takeover and shared CI/CD takeover. The former enables the execution of untrusted, attacker-uploaded models in pickle format, while the latter allows manipulation of the CI/CD pipeline to carry out a supply chain attack.

The research shows that uploading a rogue model and then using container escape techniques could compromise the service running it, and from there the broader platform, giving threat actors access to other customers’ models stored on Hugging Face.

Despite the risks, Hugging Face permits users to run inference on pickle-based models on its infrastructure, even when they are flagged as dangerous. Attackers could exploit this by crafting a PyTorch (pickle) model that executes arbitrary code when loaded, and then leveraging misconfigurations in the Amazon EKS cluster to gain elevated privileges.
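To illustrate why pickle-based model files are risky, here is a minimal sketch (not the actual payload used in the research) showing that unpickling can trigger arbitrary code: the `__reduce__` hook lets an object specify a callable to run at load time. Because `torch.load()` relies on pickle by default, the same behavior applies when a hosted service loads an attacker-supplied checkpoint.

```python
import os
import pickle

# Minimal illustration: pickle lets an object dictate a callable to run
# during deserialization via __reduce__.
class MaliciousStub:
    def __reduce__(self):
        # A real attacker would run a reverse shell or steal credentials here;
        # this harmless command only demonstrates code execution on load.
        return (os.system, ("echo 'arbitrary code ran during unpickling'",))

payload = pickle.dumps(MaliciousStub())

# Anything that unpickles this blob -- including torch.load() on an
# attacker-supplied PyTorch checkpoint -- executes the embedded command.
pickle.loads(payload)
```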

To address these security concerns, the researchers recommend enabling IMDSv2 with a hop limit, which prevents pods from reaching the Instance Metadata Service and obtaining the node’s IAM role credentials, reducing the risk of a data breach.
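A sketch of that hardening step using boto3 is shown below; the instance ID and region are placeholders, and in an EKS setting the same metadata options would typically be set in the node group’s launch template.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Require IMDSv2 tokens and cap the hop limit at 1 so that containers/pods
# (one extra network hop away from the node) cannot reach the metadata
# service and steal the node's IAM role credentials.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    HttpTokens="required",              # enforce IMDSv2
    HttpPutResponseHopLimit=1,          # block access from inside containers
    HttpEndpoint="enabled",
)
```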

Furthermore, the research highlights the possibility of achieving remote code execution through a customized Dockerfile on Hugging Face Spaces, enabling unauthorized access to the internal container registry.

Hugging Face has since addressed the identified vulnerabilities and advises users to source models only from trusted providers, enable multi-factor authentication, and avoid using pickle files in production environments.
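As one concrete alternative to pickle, model weights can be stored in the safetensors format, which holds only tensor data and cannot execute code on load. A minimal sketch (file name is a placeholder):

```python
import torch
from safetensors.torch import save_file, load_file

# Save a state dict as safetensors: the file contains only tensors and
# metadata, so loading it cannot trigger arbitrary code the way unpickling can.
state_dict = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}
save_file(state_dict, "model.safetensors")  # placeholder path

# Loading is a pure data read; no Python objects are deserialized.
restored = load_file("model.safetensors")
print(restored["linear.weight"].shape)
```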

In a related development, Lasso Security disclosed that generative AI models such as OpenAI ChatGPT and Google Gemini can recommend non-existent (hallucinated) software packages, which attackers can then register and fill with malicious code, emphasizing the importance of caution when relying on large language models for coding solutions.
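One simple defensive habit when an LLM suggests a dependency is to verify that the package actually exists before installing it. A rough sketch using PyPI’s public JSON API (the package name below is a hypothetical example):

```python
import requests

def pypi_package_exists(name: str) -> bool:
    """Check whether a package name is registered on PyPI.

    A 404 means the name is unclaimed -- exactly the situation an attacker
    can exploit by registering a hallucinated package with malicious code.
    """
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Example: an LLM-suggested dependency name (hypothetical).
suggested = "example-llm-suggested-package"
if not pypi_package_exists(suggested):
    print(f"'{suggested}' is not on PyPI; do not blindly pip install it.")
```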

Additionally, AI company Anthropic detailed “many-shot jailbreaking,” a technique that exploits the long context windows of modern LLMs by flooding a prompt with many example dialogues, eventually coaxing the model into answering harmful queries it would otherwise refuse. This underscores the risks of deploying AI models without proper safeguards in place.
