SAN JOSE, Calif., Jan 28, 2026, 09:27 PST
- Zscaler says red-team tests found critical flaws in every enterprise AI system it analyzed, with a median of 16 minutes to first failure
- The company says it saw 989.3 billion AI and machine-learning transactions in 2025, up 91% from 2024
- Zscaler rolled out an AI Security Suite as firms push chatbots and “agentic” tools deeper into business workflows
Cloud security firm Zscaler said on Tuesday its latest ThreatLabz report found most enterprise AI systems could be compromised in a median of 16 minutes, as AI and machine-learning (ML) activity on its platform rose 91% in 2025. AI has become "a primary vector for autonomous, machine-speed attacks," said Deepen Desai, Zscaler's EVP of cybersecurity.
The report lands as companies feed more sensitive data into chatbots, writing tools and coding assistants, often without a clean inventory of what models are running where. That gap matters because “agentic” AI — systems that can take actions with limited human input — can move fast, and it can fail fast, too.
Zscaler researchers said they ran red-team exercises — simulated attacks — in 25 corporate environments and saw AI systems hit their first major failure after a median of 16 minutes; 90% had failed within 90 minutes. The company also found corporate security policies blocked roughly 40% of attempted AI transactions.
The report described failures that ranged from biased or off-topic responses to privacy violations and failed URL checks. In 72% of environments, Zscaler’s first test uncovered a critical vulnerability.
On the usage side, Zscaler said it analyzed 989.3 billion AI/ML transactions on its Zero Trust Exchange in 2025, generated by about 9,000 organizations, and the number of applications driving those transactions quadrupled to more than 3,400. The United States accounted for about 38% of the activity, followed by India at 14% and Canada at 5%.
Finance and insurance made up 23% of AI traffic for a third straight year, Zscaler said, while manufacturing followed at 20%. Technology and education posted the fastest growth, with transactions up 202% and 184% year-on-year, it added.
Data flowing into AI tools surged too. Zscaler said transfers to AI/ML applications rose 93% to 18,033 terabytes and it counted 410 million data-loss-prevention (DLP) violations tied to ChatGPT, including attempted sharing of source code and medical records. DLP tools scan for sensitive data and can block it from leaving a company.
ChatGPT logged 115 billion enterprise transactions in 2025, while coding assistant Codeium recorded 42 billion, the report said. Zscaler also pointed to “embedded AI” features inside software-as-a-service (SaaS) products as a blind spot, naming Atlassian as a leading source of that embedded activity.
In a separate announcement on Tuesday, Zscaler rolled out an AI Security Suite it said will help firms inventory AI apps and models, control access and inspect prompts and data flows using a "zero trust" approach that assumes no user or device is trusted by default. "Traditional security approaches were not designed to secure AI," Chief Executive Jay Chaudhry said. Zeus Kerravala, principal analyst at ZK Research, said "AI traffic doesn't behave like traditional web traffic" and warned many firms are "flying blind" on visibility.
Zscaler competes with Palo Alto Networks, Cisco and Cloudflare in cloud security and secure-access offerings for large companies. It said it is building its AI controls to align with frameworks such as the U.S. NIST AI Risk Management Framework and the EU AI Act, and is integrating with services from OpenAI, Anthropic, AWS, Microsoft and Google.
Still, Zscaler’s numbers reflect traffic seen on its own platform and testing in a limited number of corporate environments; results could differ in other setups. The company also cautioned that benefits from new features depend on successful integration and customer adoption.
Zscaler shares were down about 1% at $217.30 in morning trading.
Zscaler urged organizations to keep testing their AI systems and apply consistent governance controls, arguing that AI platforms now sit on the critical path for corporate data.