What To Know
- A malicious Hugging Face repository impersonating OpenAI's Privacy Filter project distributed credential-stealing malware, accumulating an estimated 244,000 downloads before it was removed.
- The malicious listing also surged to the top of Hugging Face's trending section, gaining 667 likes in under 18 hours, a suspiciously rapid rise that appears to have been manipulated to boost visibility and lure additional victims.
AI News: The rapid rise of open-source artificial intelligence platforms has delivered incredible opportunities for developers worldwide, but a shocking new cybersecurity incident is now exposing the darker side of the booming AI ecosystem. Researchers have revealed that a malicious repository hosted on Hugging Face successfully impersonated an official OpenAI release and secretly distributed credential-stealing malware to unsuspecting users across the globe.

Image Credit: Thailand AI News
The fraudulent repository, named “Open-OSS/privacy-filter,” was carefully designed to resemble a legitimate OpenAI project called Privacy Filter. Security researchers at HiddenLayer disclosed that the attackers copied the original project’s model card almost word for word, creating a convincing trap for developers, AI enthusiasts, and corporate users. At a time of growing trust in public AI repositories, the incident shows how cybercriminals are weaponizing artificial intelligence development platforms in sophisticated supply-chain attacks aimed at valuable enterprise environments.
What made the operation particularly alarming was the sheer scale of exposure before the repository was finally removed. HiddenLayer estimated the fake project accumulated approximately 244,000 downloads, although researchers warned that attackers may have artificially inflated the numbers to make the repository appear more popular and trustworthy. The malicious listing also surged to the top of Hugging Face’s trending section, gaining 667 likes in under 18 hours, a suspiciously rapid rise that now appears to have been manipulated to increase visibility and lure additional victims.
Malware Hidden Inside AI Setup Instructions
Unlike traditional malware campaigns that rely heavily on phishing emails or malicious attachments, this attack exploited the trust developers place in open-source AI workflows. The repository’s README file looked nearly identical to the genuine OpenAI project documentation, but included dangerous setup instructions that deviated from the original release.
Victims were instructed to execute “start.bat” on Windows systems or run “python loader.py” on Linux and macOS machines. According to HiddenLayer, these instructions were the critical first step in triggering the malware infection chain.
Researchers explained that the loader.py script initially appeared harmless and resembled a normal AI model-loading utility. However, hidden within the code was a carefully concealed infection mechanism. The script disabled SSL verification protections, decoded a Base64-encoded URL connected to jsonkeeper.com, and then retrieved remote payload instructions directly from attacker-controlled infrastructure.
This allowed the threat actors to dynamically change malware payloads without modifying the repository itself, making the attack harder to detect and increasing its flexibility.
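The loader behaviors described above (disabling SSL verification combined with a Base64-hidden URL) lend themselves to simple heuristic screening before a script is ever executed. The following Python sketch is illustrative only: the pattern list and the 24-character Base64 threshold are assumptions chosen for this example, not HiddenLayer's actual detection logic.

```python
import base64
import re

# Heuristic indicators drawn from the behaviors described in the report:
# weakening TLS verification and hiding a payload URL in Base64.
# These patterns are illustrative assumptions, not a vetted signature set.
TLS_BYPASS_PATTERNS = [
    r"verify\s*=\s*False",           # requests call with TLS checks off
    r"_create_unverified_context",   # ssl module verification bypass
]

# Long runs of Base64 alphabet characters are candidate encoded strings;
# 24 characters is an arbitrary minimum to cut down on false positives.
B64_CANDIDATE = re.compile(r"[A-Za-z0-9+/]{24,}={0,2}")

def decoded_urls(text):
    """Return Base64 candidates in `text` that decode to an http(s) URL."""
    hits = []
    for token in B64_CANDIDATE.findall(text):
        try:
            decoded = base64.b64decode(token).decode("utf-8", errors="strict")
        except Exception:
            continue
        if decoded.startswith(("http://", "https://")):
            hits.append(decoded)
    return hits

def scan_script(source):
    """Flag a script that both weakens TLS and conceals an encoded URL."""
    bypasses = [p for p in TLS_BYPASS_PATTERNS if re.search(p, source)]
    urls = decoded_urls(source)
    return {
        "tls_bypass": bypasses,
        "hidden_urls": urls,
        "suspicious": bool(bypasses and urls),
    }

# A benign-looking snippet exhibiting both behaviors at once
# (the encoded string is a harmless example.com URL):
sample = (
    "import requests, base64\n"
    "requests.get(base64.b64decode("
    "'aHR0cHM6Ly9leGFtcGxlLmNvbS9wYXlsb2Fk').decode(), verify=False)\n"
)
print(scan_script(sample)["suspicious"])
```

Running a check like `scan_script` over a repository's setup scripts before executing them would surface files that both disable TLS verification and hide an encoded URL, the exact combination reported here.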
Infostealer Targeted Browsers and Crypto Wallets
Once activated on Windows systems, the malicious PowerShell commands downloaded additional payloads from external domains. The malware then established persistence by creating scheduled tasks disguised as legitimate Microsoft Edge update processes, enabling it to survive system reboots and remain hidden from many users.
The final malware payload was identified as a Rust-based infostealer designed to harvest sensitive data from infected machines. HiddenLayer stated that the malware specifically targeted Chromium-based browsers, Firefox-derived browsers, Discord local storage files, cryptocurrency wallets, FileZilla configurations, and extensive host system information.
Researchers further revealed that the malware attempted to disable Windows security protections, including the Antimalware Scan Interface and Event Tracing mechanisms, both of which are designed to help security tools identify suspicious activity.
The attack demonstrates how modern cybercriminals are increasingly shifting toward AI-related ecosystems where developers often execute unfamiliar scripts and dependencies with elevated trust levels.
AI Repositories Becoming a New Cybersecurity Battleground
Cybersecurity specialists have been warning for years that public AI model registries could become dangerous attack surfaces. Unlike traditional software repositories that mostly contain libraries and dependencies, AI repositories frequently include executable scripts, notebooks, model loaders, dependency installers, and setup instructions.
These peripheral components often receive less scrutiny from users eager to test new AI tools quickly.
HiddenLayer researchers noted that the compromised repository was not an isolated incident. The firm discovered six additional Hugging Face repositories containing nearly identical malicious loader logic and sharing the same infrastructure patterns.
The latest findings follow earlier cases involving poisoned AI software development kits, fake OpenClaw installers, and malicious Pickle-serialized model files capable of bypassing some platform scanning systems.
Industry analysts believe this trend could intensify as enterprises increasingly integrate AI development directly into corporate environments containing source code repositories, cloud credentials, sensitive datasets, and internal systems.
Traditional Security Tools Struggling Against AI Threats
Sakshi Grover, senior research manager for cybersecurity services at IDC, warned that conventional Software Composition Analysis tools were never designed to detect the type of malicious loader logic now appearing in AI repositories.
Traditional SCA platforms mainly inspect dependency manifests, container images, and software libraries, but often overlook dangerous scripts hidden within AI development workflows.
Grover also referenced IDC’s November 2025 FutureScape report, which predicted that by 2027, approximately 60 percent of agentic AI systems would require detailed bills of materials. Such documentation could help organizations track AI artefacts, approved versions, executable components, and trusted origins.
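As a rough illustration of what such a bill of materials might record, the sketch below defines one hypothetical entry and a validator. Every field name here is an assumption made for this example, not part of any published IDC or industry schema; the repository name echoes the incident purely for illustration.

```python
# A minimal, hypothetical "AI bill of materials" record. Field names are
# assumptions for this sketch; they track the categories mentioned in the
# report: artefacts, approved versions, executable components, and origins.
AI_BOM_ENTRY = {
    "artifact": "privacy-filter",
    "version": "1.2.0",
    "origin": "https://huggingface.co/example-org/privacy-filter",  # hypothetical trusted source
    "sha256": "0" * 64,  # placeholder digest, not a real hash
    "executable_components": ["loader.py", "start.bat"],
    "approved": True,
}

REQUIRED_FIELDS = {
    "artifact", "version", "origin",
    "sha256", "executable_components", "approved",
}

def validate_bom_entry(entry):
    """Reject entries missing any field needed to trace provenance."""
    missing = REQUIRED_FIELDS - set(entry)
    return (len(missing) == 0, sorted(missing))
```

The value of such a record is less in the format than in the enforcement: an organization could refuse to pull any model whose executable components or origin are absent from an approved entry.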
Security experts now argue that AI governance frameworks may soon become as important as traditional software supply-chain security programs.
Urgent Warnings for Potential Victims
HiddenLayer strongly advised anyone who cloned the fake Open-OSS/privacy-filter repository and executed its files on Windows systems to immediately treat those machines as fully compromised. Researchers recommended complete system re-imaging as the safest remediation approach.
The company also warned that browser sessions should be considered stolen even if passwords were not locally stored. Session cookies harvested by infostealers can sometimes allow attackers to bypass multi-factor authentication protections and hijack active accounts.
Hugging Face has since confirmed that the malicious repository has been removed from its platform, but the incident has already intensified concerns surrounding the growing cybersecurity risks hidden inside the exploding AI development ecosystem.
As artificial intelligence adoption accelerates worldwide, experts warn that attackers are no longer simply targeting end users. Instead, they are actively infiltrating the trusted workflows developers rely upon every day, potentially turning AI innovation platforms into silent gateways for devastating cyber intrusions.
For the latest on AI-based cybercrime, keep following Thailand AI News.