DETROIT, Jan 29, 2026, 08:11 EST
- Researchers identified thousands of open-source AI systems exposed on the internet, operating beyond major platform oversight
- A study uncovered hundreds of deployments where safety “guardrails” had been stripped away, and found exposed system prompts on some hosts that could be exploited for harm
- Most of the exposed systems ran a small set of well-known model families, such as Meta’s Llama and Google’s Gemma
Researchers warned Thursday that hackers and criminals can readily hijack computers running open-source large language models (LLMs) outside the control of major AI platforms, opening up new security threats, Reuters reported.
The warning comes as more groups and hobbyists turn to “open-weight” models — AI systems with downloadable parameters that anyone can host — instead of depending solely on cloud platforms that control usage policies and track misuse.
Researchers at SentinelOne and Censys say this shift has exposed a growing layer of public, unmanaged AI compute on the open internet, most of it operating beyond current governance and reporting frameworks.
Over 293 days, researchers scanned internet-connected deployments using Ollama — a tool that enables users to run LLMs locally — gathering 7.23 million observations from 175,108 unique hosts spanning 130 countries. A steady core of roughly 23,000 machines accounted for the bulk of the activity.
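Ollama serves a plain HTTP API, by default on port 11434, which is what makes exposed deployments practical to enumerate at this scale. Below is a minimal sketch of the kind of probe such scanning could rely on; the GET /api/tags endpoint, which lists a server’s installed models, is part of Ollama’s documented API, while the probe_ollama helper and the placeholder address are illustrative rather than the researchers’ actual tooling.

```python
# Minimal sketch: fingerprint a host as an exposed Ollama server.
# Assumes Ollama's documented GET /api/tags endpoint on the default
# port 11434; everything else here is illustrative.
import json
import urllib.request

def probe_ollama(host: str, port: int = 11434, timeout: float = 3.0):
    """Return the model names an exposed Ollama server advertises, or None."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.loads(resp.read().decode())
    except Exception:
        return None  # closed port, timeout, or not an Ollama server
    models = data.get("models")  # /api/tags replies with a "models" array
    if models is None:
        return None
    return [m.get("name") for m in models]

# Placeholder documentation address, not a real exposed host:
# print(probe_ollama("203.0.113.10"))
```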
Researchers found that almost half of the hosts advertised “tool-calling” capabilities, which let the model trigger actions such as calling APIs or executing code, broadening the potential impact if the system is exploited.
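In Ollama’s chat API, tool calling works by sending the model JSON schemas for functions it may invoke; the model replies with structured tool_calls that host-side code then executes, and that executing layer is what widens the blast radius. A hedged sketch, assuming a tool-capable model on an exposed host; the get_weather tool, model name, and address are hypothetical.

```python
# Sketch of the tool-calling round trip via Ollama's POST /api/chat.
# The "tools" schema and "tool_calls" response field follow Ollama's
# documented chat API; the host, model, and tool below are hypothetical.
import json
import urllib.request

HOST = "http://203.0.113.10:11434"  # placeholder address

payload = {
    "model": "llama3.1",  # hypothetical tool-capable model
    "stream": False,
    "messages": [{"role": "user", "content": "What's the weather in Detroit?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function the operator wired up
            "description": "Fetch current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

req = urllib.request.Request(
    f"{HOST}/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    message = json.loads(resp.read())["message"]

# The model names the function to run and its arguments; whatever code
# executes that call on the host is the part an attacker can abuse.
for call in message.get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
```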
Thousands of open-source LLM variants exist, but Reuters found that many models on internet-facing hosts are versions of Meta’s Llama and Google DeepMind’s Gemma. SentinelOne and Censys also pointed to Alibaba’s Qwen2 family as one of the most prevalent lineages.
Researchers found that “system prompts,” the instructions guiding a model’s behavior, were visible in about a quarter of the LLMs they studied. Among those, 7.5% could be exploited for harmful purposes. The teams also uncovered hundreds of cases where safety guardrails had been deliberately stripped away.
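The visibility of system prompts follows from the same API surface: Ollama’s POST /api/show endpoint returns a model’s Modelfile, and any SYSTEM directive in it carries the behavioral instructions. A minimal sketch, with the host address and model name as placeholders.

```python
# Sketch: read a model's system prompt from an exposed Ollama host.
# POST /api/show is documented Ollama API; the address and model name
# are placeholders.
import json
import re
import urllib.request

HOST = "http://203.0.113.10:11434"  # placeholder address

req = urllib.request.Request(
    f"{HOST}/api/show",
    data=json.dumps({"model": "llama3.1"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    modelfile = json.loads(resp.read()).get("modelfile", "")

# A Modelfile can embed the prompt as SYSTEM """...""" or on a single line.
match = re.search(r'SYSTEM\s+(?:"""(.*?)"""|([^\n]+))', modelfile, re.DOTALL)
if match:
    print(match.group(1) or match.group(2))
```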
Geography showed clear clustering. About 30% of the hosts were based in China, and roughly 20% operated from the United States, Reuters reported. Researchers also highlighted dense pockets in key infrastructure hubs like Virginia in the U.S. and Beijing in China.
Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne, said industry discussions of AI security controls overlook “this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal.” He likened the issue to an “iceberg” that is going largely uncounted.
Rachel Adams, CEO and founder of the Global Center on AI Governance, noted that responsibility becomes shared once open models reach the public, including by the labs that release them. “Labs are not responsible for every downstream misuse,” she said in an email, but they are still obligated to foresee potential risks and offer mitigation advice.
Meta didn’t comment on developers’ responsibilities for abuse stemming from their tools, instead pointing to its Llama Protection tools and a responsible-use guide. Microsoft AI Red Team lead Ram Shankar Siva Kumar said via email that while Microsoft backs open-source models, it remains “clear-eyed” about their potential misuse, citing pre-release testing and ongoing monitoring for new abuse trends.
One major challenge remains: scanners detect exposed hosts and risky setups, but linking them back to an operator gets complicated, especially on residential networks or small hosting providers. The SentinelOne-Censys report highlighted that attribution data was absent for a significant portion of hosts. It also emphasized that open models serve legitimate research and deployments where closed platforms aren’t viable.