Hugging Face Reports Unauthorized Access to AI Platform
AI tool development company Hugging Face has reported unauthorized access to its Spaces platform. The breach, detected earlier this week, has raised concerns about the security of the AI and machine learning applications hosted there. Hugging Face Spaces is widely used to create, share, and discover AI apps.
Hugging Face disclosed that the breach may have exposed some Spaces secrets, the private values (such as API keys and access tokens) that Spaces applications use to reach protected resources. In response to the incident, the company swiftly revoked the tokens associated with the compromised secrets and notified affected users by email. It recommended that all users refresh their keys or tokens and switch to fine-grained access tokens for enhanced security.
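For readers wondering what that rotation looks like in practice, the following is a minimal, illustrative sketch, assuming the huggingface_hub Python library; the HF_TOKEN environment variable is a placeholder for a newly issued fine-grained token, not something prescribed by Hugging Face's advisory.

# Illustrative sketch: re-authenticating with a newly issued fine-grained token
# via the huggingface_hub library. HF_TOKEN is a placeholder; keep real tokens
# out of source control.
import os

from huggingface_hub import HfApi, login

new_token = os.environ["HF_TOKEN"]  # freshly issued fine-grained token (placeholder)

# Re-authenticate the local client so cached credentials no longer point at the
# revoked classic token.
login(token=new_token)

# Sanity check: confirm which account the new token authenticates as.
api = HfApi(token=new_token)
print(api.whoami()["name"])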
"Earlier this week our team detected unauthorized access to our Spaces platform, specifically related to Spaces secrets," Hugging Face stated in a blog post. "We have suspicions that a subset of Spaces' secrets could have been accessed without authorization." The company has enlisted external cybersecurity experts to investigate the breach and has reported the incident to law enforcement and data protection authorities.
Hugging Face has since implemented several significant improvements to its infrastructure. These measures include removing organization tokens to increase traceability and audit capabilities, implementing a key management service for Spaces secrets, and enhancing the system's ability to identify and invalidate leaked tokens proactively.
The company also plans to phase out "classic" read and write tokens in favor of fine-grained access tokens, which offer tighter control over access to AI models.
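As a hypothetical illustration of that tighter control, a fine-grained token can be limited to read access on specific repositories and passed explicitly when pulling a model, so a leaked credential cannot write to or administer other resources. The repository ID and filename below are placeholders, and the example again assumes the huggingface_hub Python library.

# Hypothetical example: downloading one model file with a read-only,
# fine-grained token scoped to that repository.
import os

from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="your-org/your-model",   # placeholder repository
    filename="model.safetensors",    # placeholder file
    token=os.environ["HF_TOKEN"],    # fine-grained, read-scoped token
)
print(local_path)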
This breach is the latest in a series of security challenges for Hugging Face. In April, cloud security firm Wiz disclosed a vulnerability that could allow attackers to execute arbitrary code and gain cross-tenant access to other customers' models. Earlier in the year, security firm JFrog found malicious models uploaded to the platform that could plant backdoors and other malware on users' machines.
Additionally, HiddenLayer identified flaws in Hugging Face's Safetensors conversion service that could be abused to hijack or sabotage AI models.
As the AI sector continues its rapid growth, AI-as-a-service (AIaaS) providers like Hugging Face increasingly find themselves in the crosshairs of cyber attackers. The company has committed to using this incident as an opportunity to strengthen its security measures and protect its growing user base from future threats.