
### Over 100 Malicious AI/ML Models Found on Hugging Face Can Backdoor Users' Machines


More than 100 malicious AI/ML models have been identified on the Hugging Face platform, some capable of executing code on a victim's machine and potentially giving attackers a persistent backdoor.

Hugging Face, a technology company specializing in artificial intelligence (AI), natural language processing (NLP), and machine learning (ML), offers a collaborative platform for sharing models and datasets and for developing applications.

Despite Hugging Face's security measures, including malware detection, pickle validation, and operational monitoring of models to detect patterns like unsafe deserialization, roughly 100 models on the platform were flagged as malicious by JFrog's security team.
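For context, PyTorch checkpoints are ordinary pickle files under the hood, which is why unsafe deserialization is exactly the pattern such scanners look for. A minimal sketch of a defensive load on the consumer side (assuming a recent PyTorch release; this is not Hugging Face's internal tooling, and `model.pt` is a placeholder path):

```python
import torch

# Defensive loading of an untrusted checkpoint: weights_only=True
# restricts unpickling to a safe allowlist of tensor and primitive
# types, so a payload smuggled into the pickle stream is rejected
# instead of executed.
state_dict = torch.load("model.pt", weights_only=True)
```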

#### Discovery of Malicious AI ML Models

JFrog's security team uncovered around 100 malicious models while using an advanced scanning system to inspect PyTorch and TensorFlow Keras models hosted on Hugging Face.

According to the JFrog report, the term "harmful models" refers specifically to those carrying genuinely dangerous payloads; the count excludes false positives and so gives an accurate picture of attempts to distribute malicious PyTorch and TensorFlow models on Hugging Face.
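As a rough illustration of how such payloads can be spotted statically, the sketch below walks a pickle opcode stream and flags imports of dangerous callables (a heuristic of my own for illustration, not JFrog's actual scanner; the `SUSPICIOUS` list is an assumption):

```python
import pickletools

# GLOBAL/STACK_GLOBAL opcodes are how a pickle stream pulls in
# callables such as os.system at load time.
SUSPICIOUS = {"os.system", "posix.system", "subprocess.Popen",
              "builtins.eval", "builtins.exec"}

def scan_pickle(data: bytes) -> list[str]:
    findings, strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)          # remember pushed strings
        elif opcode.name == "GLOBAL":
            ref = arg.replace(" ", ".")  # pickletools renders "os system"
            if ref in SUSPICIOUS:
                findings.append(ref)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            ref = f"{strings[-2]}.{strings[-1]}"  # module/name from stack
            if ref in SUSPICIOUS:
                findings.append(ref)
    return findings
```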

*Types of payloads found in suspicious models (JFrog)*

One instance highlighted by JFrog involved a PyTorch model uploaded by a user named "baller423," since removed from Hugging Face. The model contained a payload that established a reverse shell to a hardcoded host (210.117.212.93). The payload evaded detection by hiding inside the trusted serialization process, abusing the `__reduce__` method of Python's pickle module to execute arbitrary code when the PyTorch model file is loaded.

*Payload for establishing a reverse shell (JFrog)*
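To see why this technique works, consider a minimal, benign sketch of a `__reduce__` payload (the class name and the harmless echo command are illustrative, not JFrog's sample, which spawned a reverse shell instead):

```python
import os
import pickle

# __reduce__ lets an object tell pickle how to rebuild it; returning
# (callable, args) makes pickle call that callable with those args the
# moment the stream is deserialized.
class EvilOnLoad:
    def __reduce__(self):
        return (os.system, ('echo "arbitrary code ran at load time"',))

blob = pickle.dumps(EvilOnLoad())
pickle.loads(blob)   # executes os.system(...) during unpickling
```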

In separate cases, JFrog identified similar payloads connecting to different IP addresses, suggesting that the individuals behind them might be AI researchers rather than malicious hackers. JFrog monitored the connections with a honeypot for a day, but no malicious activity was captured during that period.

*Deployment of a honeypot for monitoring (JFrog)*
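For illustration, a bare-bones TCP listener in the spirit of such a honeypot might look like the sketch below (the port and log format are assumptions; JFrog's actual setup is not described in detail):

```python
import socket
from datetime import datetime

# Accept inbound callbacks on the port the payload is expected to hit
# and record who connected.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 4444))   # illustrative port
    srv.listen()
    while True:
        conn, addr = srv.accept()
        print(f"{datetime.now().isoformat()} callback from {addr[0]}:{addr[1]}")
        conn.close()
```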

While some of the malicious uploads may be security research aimed at bypassing the platform's defenses and earning bug bounty rewards, the public availability of these harmful models is a genuine threat that should not be underestimated. The security risks posed by AI/ML models underscore the need for stakeholders and technology developers to address and mitigate them effectively.

JFrog’s findings underscore the importance of heightened awareness and proactive measures to safeguard the ecosystem from malicious actors.
