Hugging Face Reports Unauthorized Access to AI Model Platform


Late Friday afternoon, a time window companies typically use for disclosing less favorable news, AI startup Hugging Face announced that its security team detected “unauthorized access” to Spaces earlier this week. Spaces is Hugging Face’s platform for creating, sharing, and hosting AI models and resources.

In a blog post, Hugging Face explained that the intrusion involved Spaces secrets, the private pieces of information that function as keys to access protected resources like accounts, tools, and development environments. It also mentioned having “suspicions” that some secrets might have been accessed by an unauthorized third party.

As a precaution, Hugging Face has revoked a number of the tokens stored in those secrets; these tokens are used to verify identity when accessing the platform. The company has emailed notices to users whose tokens were revoked and recommends that all users “refresh any key or token” and consider switching to fine-grained access tokens, which it says are more secure.
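For context on what rotation involves in practice: a Space typically reads its secrets as environment variables at runtime, so after a token is refreshed, the stored secret must be updated for the app to pick up the new value. Below is a minimal, hypothetical sketch of a startup check that fails loudly if the expected secret is missing; the variable name `HF_TOKEN` follows common convention but is an assumption here, not something specified in Hugging Face's post.

```python
import os

def get_hf_token(env_var: str = "HF_TOKEN") -> str:
    """Read an access token from the environment, the way a Space
    reads its secrets. Raise at startup if the token is absent so a
    revoked or un-rotated credential is caught early rather than
    causing opaque failures mid-run."""
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(
            f"{env_var} is not set; create a fresh fine-grained "
            "token and update the secret in your Space settings."
        )
    return token

# Simulate a rotated secret (placeholder value, not a real token)
os.environ["HF_TOKEN"] = "hf_example_rotated_token"
print(get_hf_token())
```

Checking for the secret once at startup, rather than at first use, makes a mass token revocation like this one surface as a clear configuration error instead of a confusing authentication failure deep inside the app.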

The number of users or applications potentially affected by the breach was not immediately clear.

“We are working with outside cybersecurity forensic specialists to investigate the issue and review our security policies and procedures. We have also reported this incident to law enforcement agencies and data protection authorities,” Hugging Face stated in the post. “We deeply regret the disruption this incident may have caused and understand the inconvenience it may have posed to you. We pledge to use this as an opportunity to strengthen the security of our entire infrastructure.”

In an emailed statement, a Hugging Face spokesperson told Truth Voices:

“We’ve been seeing a significant increase in cyberattacks in the past few months, likely due to our growing usage and the mainstreaming of AI. It’s technically challenging to determine how many Spaces secrets have been compromised.”

The possible breach of Spaces coincides with increased scrutiny over Hugging Face’s security practices. The platform, which hosts over one million models, datasets, and AI-powered apps, has faced several security challenges.

In April, researchers at cloud security firm Wiz identified a vulnerability—since fixed—that allowed attackers to execute arbitrary code during the build time of a Hugging Face-hosted app, enabling them to examine network connections from their machines. Earlier this year, security firm JFrog discovered that code uploaded to Hugging Face covertly installed backdoors and other malware on end-user machines. Security startup HiddenLayer also identified ways Hugging Face’s supposedly safer serialization format, Safetensors, could be exploited to create sabotaged AI models.

Hugging Face recently announced a partnership with Wiz to utilize the company’s vulnerability scanning and cloud environment configuration tools “with the goal of improving security across our platform and the AI/ML ecosystem at large.”

Kyle Wiggers
Kyle Wiggers is a senior reporter with a special interest in AI. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Brooklyn with his partner, a piano educator, and dabbles in piano himself.
