AI Technologies

Hugging Face Vulnerabilities Highlight AI-as-a-Service Risks

Researchers Say Unauthorized Access to Private AI Models Can Enable Cross-Tenant Attacks

Security researchers have discovered two critical vulnerabilities in the Hugging Face AI platform that could have allowed attackers to gain unauthorized access to and manipulate customer data and models.


The Google- and Amazon-funded Hugging Face platform is designed to help developers seamlessly access and deploy AI models. Researchers at Wiz teamed up with Hugging Face to find and fix two significant risks in the AIaaS platform's infrastructure.

"If a malicious actor were to compromise Hugging Face's platform, they could potentially gain access to private AI models, datasets and critical applications, leading to widespread damage and potential supply chain risk," Wiz said in a report released last week.

The two distinct risks stem from compromise of the shared inference infrastructure and of the shared CI/CD system: executing untrusted models uploaded to the service in Pickle format, and manipulating the CI/CD pipeline to mount a supply chain attack.

Pickle is a Python module for serializing and deserializing Python objects. It converts a Python object into a byte stream that can be stored on disk, sent over a network or saved in a database; when the object is needed again, the byte stream is deserialized back into a Python object. Although the Python documentation explicitly warns that Pickle is insecure against maliciously constructed data, the format remains popular due to its simplicity and widespread use.
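That round trip can be illustrated in a few lines; the object and names here are arbitrary examples, not anything from the Hugging Face platform:

```python
import pickle

# Serialize a Python object into a byte stream suitable for
# writing to disk, sending over a network or storing in a database.
config = {"model": "bert-base", "epochs": 3}
blob = pickle.dumps(config)

# Later, deserialize the byte stream back into an equivalent object.
restored = pickle.loads(blob)
assert restored == config
```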

Malicious actors could craft Pickle-serialized models containing remote code execution payloads that potentially grant them escalated privileges and cross-tenant access to other customers' models, the Wiz researchers said.
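The mechanism behind such payloads is Pickle's `__reduce__` hook, which lets an object specify a callable that the *deserializer* will invoke at load time. A minimal sketch, with a harmless `eval` call standing in for an attacker's real payload:

```python
import os
import pickle

class Payload:
    # pickle consults __reduce__ when serializing; the tuple it returns
    # instructs whoever later deserializes the stream to call eval()
    # on an attacker-chosen string.
    def __reduce__(self):
        return (eval, ("__import__('os').getpid()",))

blob = pickle.dumps(Payload())

# The victim only has to load the "model" for the code to run.
result = pickle.loads(blob)
assert result == os.getpid()  # proof the string executed during deserialization
```

In a real attack the string would not fetch a process ID but spawn a shell or exfiltrate credentials from the inference environment.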

Attackers exploiting vulnerabilities in the CI/CD pipeline could inject malicious code into the build process. By compromising the CI/CD cluster, attackers could orchestrate supply chain attacks and potentially compromise the integrity of AI applications deployed on the platform.

The Wiz researchers demonstrated the exploitation of these vulnerabilities in a YouTube video in which they upload a specially crafted Pickle-based model to Hugging Face's platform. Abusing Pickle's insecure deserialization behavior, they executed remote code and gained access to the inference environment within Hugging Face's infrastructure. "It is relatively straightforward to craft a PyTorch (Pickle) model that will execute arbitrary code upon loading," Wiz said. Hugging Face is aware of the risk, but "because the community still uses PyTorch pickle, Hugging Face needs to support it," Wiz said.
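Platforms that must keep accepting Pickle can at least constrain what it may resolve. The Python documentation's suggested mitigation is to subclass `pickle.Unpickler` and restrict `find_class`; the allow-list below is purely illustrative:

```python
import io
import pickle

# Illustrative allow-list: only these (module, name) globals may be resolved.
ALLOWED = {("builtins", "list"), ("builtins", "dict")}

class RestrictedUnpickler(pickle.Unpickler):
    # find_class is consulted for every global the stream references;
    # refusing unexpected names blocks payloads that smuggle in
    # callables such as eval or os.system.
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain containers round-trip; a stream referencing eval raises UnpicklingError.
assert safe_loads(pickle.dumps({"a": 1})) == {"a": 1}
```

This reduces, but does not eliminate, the attack surface, which is why format-level alternatives such as Safetensors exist.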

Wiz's investigation found that its model operated within a pod in a cluster on Amazon Elastic Kubernetes Service, also known as EKS. The researchers exploited common misconfigurations to extract information that gave them the privileges needed to access secrets and move laterally to other tenants on the shared infrastructure.

The Wiz researchers also identified a weakness in Hugging Face Spaces, a hosting service for showcasing AI/ML applications and collaborative model development. Wiz found that an attacker could execute arbitrary code during application build time and inspect network connections from their machine. That examination revealed a connection to a shared container registry housing images belonging to other customers, which the attacker could manipulate.

Hugging Face said it has mitigated the risks found by Wiz and has implemented cloud security posture management, vulnerability scanning and annual penetration testing to identify and mitigate future risks to the platform.

Hugging Face advised users to replace inherently insecure Pickle files with Safetensors, a format the company devised for safely storing tensors.

The vulnerabilities disclosed at Hugging Face are the second set of flaws found in the AIaaS platform in the past four months. The company confirmed in December that it fixed critical API flaws that were reported by Lasso Security (see: API Flaws Put AI Models at Risk of Data Poisoning).

About the Author

Mihir Bagwe


Principal Correspondent, Global News Desk, ISMG

Bagwe previously worked at CISO magazine, reporting the latest cybersecurity news and trends and interviewing cybersecurity subject matter experts.

