
Fake OpenAI Repository on Hugging Face Delivered Infostealer Malware, Racking Up 244,000 Downloads

A malicious repository on the AI model-sharing platform Hugging Face impersonated an OpenAI project and distributed information-stealing malware to Windows users before the platform removed it, according to researchers at HiddenLayer, a company specializing in AI and machine learning security.

The repository, named Open-OSS/privacy-filter, reached the number-one position on Hugging Face's trending list and accumulated 244,000 downloads before being taken down. HiddenLayer researchers discovered the campaign on May 7, after noticing that the repository had closely copied the model card from OpenAI's legitimate Privacy Filter release.

"The repository had typosquatted OpenAI's legitimate Privacy Filter release, copied its model card nearly verbatim, and shipped a loader.py file that fetches and executes infostealer malware on Windows machines," the researchers explained.

The malicious loader.py script was designed to appear harmless, padding itself with fake AI-related code. In the background, however, it disabled SSL certificate verification, decoded a base64-encoded URL pointing to an external resource, and retrieved a JSON payload containing a PowerShell command, which it then executed in a hidden window.
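The loader pattern described above can be sketched, inertly, as follows. This is an illustration of the technique, not the actual malware: the URL is a harmless placeholder, the PowerShell command is a benign stand-in, and nothing is fetched or executed.

```python
import base64
import json

# Hypothetical stand-in for the encoded string the loader carried;
# "example.invalid" is a placeholder, not the real infrastructure.
ENCODED = base64.b64encode(b"https://example.invalid/payload.json").decode()


def decode_stage_url(blob: str) -> str:
    """Recover the second-stage URL the way the loader would: plain base64."""
    return base64.b64decode(blob).decode()


def build_hidden_invocation(payload: dict) -> list[str]:
    """Map a fetched JSON payload to a hidden PowerShell launch.

    Returned for inspection only; this sketch never runs the command.
    """
    return [
        "powershell.exe",
        "-WindowStyle", "Hidden",  # invisible window, as in the report
        "-Command", payload["cmd"],
    ]


stage_url = decode_stage_url(ENCODED)
argv = build_hidden_invocation(json.loads('{"cmd": "Get-Date"}'))
```

The pattern is trivial on its own, which is part of the point: a few lines of standard-library code buried in an otherwise plausible model loader are easy to miss in review.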

That command then downloaded a batch file called start.bat, which performed privilege escalation, downloaded a final payload named "sefirah," added it to Microsoft Defender's exclusion list, and executed it.

The final payload is a Rust-based infostealer targeting a broad range of sensitive data: cookies, saved passwords, encryption keys, and session tokens from Chromium- and Gecko-based browsers; Discord tokens and local databases; cryptocurrency wallet credentials and browser extensions; SSH, FTP, and VPN configuration files; system information; multi-monitor screenshots; and sensitive local files including wallet seeds and keys.

Stolen data is compressed and exfiltrated to a command-and-control server at recargapopular[.]com.

HiddenLayer noted that the malware includes extensive anti-analysis capabilities, such as checks for virtual machines, sandboxes, debuggers, and analysis tools — all aimed at evading automated detection systems.

The exact number of victims remains unclear. Researchers observed that the vast majority of the 667 accounts that had liked the malicious repository appeared to be auto-generated, and noted that the 244,000-download count may likewise have been artificially inflated.

Through their investigation, HiddenLayer also uncovered additional repositories using the same malicious loader infrastructure, as well as overlaps with a separate npm typosquatting campaign distributing the WinOS 4.0 implant.

Users who downloaded files from the repository are advised to reimage affected machines, rotate all stored credentials, replace cryptocurrency wallets and seed phrases, and invalidate all active browser sessions and tokens.

Hugging Face has faced prior incidents involving malicious models hosted on the platform, highlighting persistent challenges for AI model-sharing services in detecting supply chain attacks that exploit developer trust. The incident underscores the growing risk of threat actors targeting AI development workflows — where large file downloads from public repositories are a routine part of daily research and engineering.
