Dubai, May 12, 2026, 01:04 (GST)
- Binance said AI tools helped prevent $10.53 billion in potential user losses and protect more than 5.4 million users from Q1 2025 through Q1 2026.
- The disclosure lands as Chainalysis estimates crypto scams stole $17 billion in 2025, with AI-enabled scams proving 4.5 times more profitable than traditional scams.
- Rival exchange KuCoin is also warning users about deepfakes, “pig butchering” investment cons and address-poisoning wallet tricks.
Binance said its artificial intelligence security systems blocked $10.53 billion in potential crypto fraud over 15 months, putting new numbers behind a fast-moving fight between exchanges and scammers using the same technology to scale attacks. The company said it intercepted 22.9 million scam and phishing attempts in the first quarter alone; phishing refers to fake messages or websites used to steal login details or funds.
The point is not only the size of the figure. It comes as crypto fraud shifts from crude giveaway posts and bad links to cheap, automated deception: deepfake videos, cloned voices, fake support desks and bots that can write malicious code or lure victims through chat apps.
That is why the timing matters. Chainalysis said impersonation scams grew more than 1,400% in 2025 and that scams with visible links to AI vendors extracted an average $3.2 million per operation, compared with $719,000 for those without such links.
Binance said it had deployed more than 24 AI initiatives and over 100 models by late 2025. It said AI now powers 57% of its fraud controls, helps screen fake payment proofs and suspicious peer-to-peer messages, and has cut card-fraud rates by 60% to 70% compared with industry benchmarks.
The exchange also pointed to Binance AI Pro, a product architecture that keeps funds managed by AI agents separate from main user accounts and blocks those agents from making withdrawals. Know-your-customer checks, or KYC identity screening, are also being tuned to catch deepfakes and synthetic identities, the firm said.
Paul Ugbede Godwin, a crypto analyst at EmageNewsDAO and MEXC, wrote in Tekedia that today’s cybercriminals “operate like multinational corporations,” exploiting decentralized finance protocols, blockchain bridges, exchanges and social-engineering campaigns. His warning is blunt: AI has lowered the cost of attacks and widened the pool of people who can run them.
Rivals are pushing similar user warnings, even if they have not reported the same scale of blocked funds. KuCoin’s recent safety guide singled out AI deepfakes, pig butchering — long grooming scams that steer victims into fake investment platforms — and address poisoning, where a thief plants a lookalike wallet address in a user’s transaction history.
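Address poisoning works because many wallet interfaces truncate long addresses, showing only the first and last few characters. A minimal sketch of the trick, in Python: the addresses and the `looks_similar` helper below are hypothetical illustrations, not any exchange's actual detection logic.

```python
def looks_similar(a: str, b: str, prefix: int = 6, suffix: int = 4) -> bool:
    """True if two distinct addresses match on the ends a truncated UI displays."""
    return a != b and a[:prefix] == b[:prefix] and a[-suffix:] == b[-suffix:]

# Hypothetical example addresses: same visible ends, different middle.
real   = "0x1a2b3cDEADBEEF00112233445566778899aF9c21"
poison = "0x1a2b3c9999999999999999999999999999aF9c21"

print(looks_similar(real, poison))  # True: a truncated UI renders both as 0x1a2b…9c21
```

A victim who copies the poisoned address from transaction history, glances at the familiar-looking ends, and hits send routes the funds to the attacker.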
But the numbers need a careful read. They are company-reported, and Intellectia noted the lack of an independent audit or detailed methodology behind Binance’s $10.53 billion claim; Binance itself said recovery is not always possible because blockchain transactions are often irreversible.
The credibility question is sharper for Binance than for some peers. The exchange and founder Changpeng Zhao pleaded guilty in a 2023 U.S. criminal resolution, and the Justice Department said illicit proceeds from ransomware, darknet markets and internet scams had moved through Binance in an attempt to avoid detection.
For the industry, the message is less tidy than a single fraud-prevention total. Exchanges are building automated defenses, while scammers are using automated tools to move faster, sound more human and hit victims off-platform. That gap may decide whether crypto’s next security fight is won in trading systems, in customer education, or in the minutes before a user clicks send.