Online Fraud and Abuse 2025: AI Is in the Driver’s Seat

Akamai Wave Blue

Nov 04, 2025

AI isn’t just accelerating online fraud and abuse: it’s supercharging them. A new generation of large language model (LLM) bots is complicating the application and API threat landscape by (potentially) automating attacks at massive scale.

In the past year alone, AI-powered bot traffic increased by 300%, making it harder to differentiate between benign and malicious activity. At the same time, the rise of fraud as a service (FaaS) in underground markets has dramatically lowered the barrier to entry for cybercriminals, making it easier for even novice actors to perpetrate fraudulent activities, from social engineering and phishing to identity fraud.

The latest Akamai research and analyses

We explore this critical issue in a new State of the Internet (SOTI) Fraud and Abuse Report 2025: Charting a Course Through AI’s Murky Waters.

Based on the latest Akamai research, the report provides an in-depth examination of the expanding fraud and abuse landscape and its impact on key industries and regions. The report also provides tips on how organizations can use AI to strengthen their defenses while maintaining regulatory compliance.

AI is hitting the gas on fraud and abuse

The SOTI report explores how the growing adoption of AI has introduced new opportunities for cybercriminals. Here are a few highlights:

  • AI bot traffic is exploding

  • Bot intent matters

  • AI bots are targeting key industries

  • AI bot activity varies by global region

AI bot traffic is exploding

AI bot traffic accounts for billions of daily requests across the Akamai network and is growing faster than general bot traffic. This magnifies the complexity of distinguishing between legitimate bots that promote business growth and malicious bot traffic that is associated with digital fraud and abuse. The business impacts include increased expenses, site performance degradation, and pollution of key metrics.

Bot intent matters

The SOTI report explores the different types of AI bots — from training bots and agent/assistant bots to search bots — and their functions. While legitimate bots are transparent in their intent, others are designed to evade detection.

Of particular concern are those designed to mimic human interactions to probe for weaknesses, and AI chatbots like FraudGPT and WormGPT that facilitate malicious acts, including phishing and other cyberattacks.
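One common way operators separate transparent crawlers from impostors that merely claim a well-known bot identity is forward-confirmed reverse DNS: look up the requesting IP’s hostname, check it against the domains the crawler operator publishes, and then resolve that hostname back to confirm it returns the same IP. The sketch below is a general illustration, not a technique from the report; the allowlisted domains are hypothetical examples.

```python
import socket

# Illustrative, NOT exhaustive: domains a real deployment would take
# from each crawler operator's published verification guidance.
KNOWN_CRAWLER_DOMAINS = (".googlebot.com", ".google.com", ".openai.com")

def hostname_in_allowlist(hostname: str) -> bool:
    # The PTR hostname must fall under a published crawler domain.
    return hostname.endswith(KNOWN_CRAWLER_DOMAINS)

def verify_crawler_ip(ip: str) -> bool:
    """Forward-confirmed reverse DNS check (requires network access)."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]  # reverse (PTR) lookup
    except OSError:
        return False
    if not hostname_in_allowlist(hostname):
        return False
    try:
        # Forward lookup: the claimed hostname must resolve back to the IP.
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False
    return ip in forward_ips
```

A bot that sends a crawler’s user agent string but fails this check is a strong candidate for the evasive category described above, since the user agent header alone is trivially spoofed.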

AI bots are targeting key industries

Commerce saw the most AI bot activity, with more than 25 billion bot requests during a two-month observation period. In the healthcare industry, more than 90% of AI bot traffic is attributed to scraping activities, mainly from search and training bots. Other industries with significant AI bot traffic include high technology and publishing.

AI bot activity varies by global region

Between July and August 2025, Akamai customers in North America experienced 54.9% of all AI bot activity, followed by EMEA (23.6%), APAC (20.2%), and LATAM (1.3%). Across regions, training bots accounted for the vast majority of AI bot traffic.

OWASP Top 10 list mapping

By focusing on key vulnerabilities, the report looks at fraud and abuse through the lens of the OWASP Top 10 lists. The report maps OWASP-related vulnerabilities to common areas linked to fraud and abuse to identify the most preventable types — valuable insight for enhancing protections.

Spotlight features

The SOTI report includes special guest columns authored by privacy and security experts that take a deeper dive into specific topics of interest.

Defensive strategies for financial services organizations

John “JD” Denning, CISO for the Financial Services Information Sharing and Analysis Center (FS-ISAC), emphasizes the importance of layered defenses, response playbooks, all-source threat intelligence, and a collaborative approach focused on collective defense.

Balancing security and regulatory compliance in AI defense strategies

James A. Casey, Vice President and Chief Privacy Officer at Akamai, examines the global AI compliance landscape; he offers best practices for adopting a flexible, risk-based governance model to satisfy emerging AI regulations while preserving the speed, scale, and precision required to defend against automated attacks.

Mitigating the threat

The SOTI report also recommends ways to effectively mitigate the threat posed by AI-driven fraud and abuse by combining technical controls with clear organizational policies and ongoing monitoring.

These practical tips include risk-based bot management and monitoring, AI-specific security controls, the use of established frameworks such as those developed by OWASP, and a comprehensive API security strategy that covers the entire API lifecycle.
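To make the risk-based idea concrete, here is a minimal scoring sketch in Python. The signals, weights, and thresholds are illustrative assumptions for this post, not Akamai product logic; a real deployment would tune them against its own traffic.

```python
from dataclasses import dataclass

@dataclass
class RequestSignals:
    # All fields are hypothetical per-client signals.
    requests_per_minute: float
    declared_bot: bool        # sends a transparent bot user agent
    verified_crawler: bool    # passed reverse-DNS / allowlist checks
    hits_sensitive_api: bool  # login, checkout, account endpoints

def risk_score(s: RequestSignals) -> int:
    """Toy additive risk score; weights are placeholder assumptions."""
    score = 0
    if s.requests_per_minute > 120:
        score += 40
    if s.declared_bot and not s.verified_crawler:
        score += 30  # claims a known bot identity but fails verification
    if s.hits_sensitive_api:
        score += 30
    return score

def decide(s: RequestSignals) -> str:
    """Map the score to a graduated response rather than a binary block."""
    if s.verified_crawler and not s.hits_sensitive_api:
        return "allow"       # legitimate, transparent crawler
    score = risk_score(s)
    if score >= 70:
        return "block"
    if score >= 40:
        return "challenge"   # e.g., CAPTCHA or proof-of-work
    return "allow"
```

The graduated allow/challenge/block outcome reflects the report’s point that bot intent matters: a verified crawler on public content is waved through, while an unverified, high-rate client probing sensitive endpoints is stopped.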

Outpacing AI

One thing is clear from our research: AI stands out as the single most significant driver of change in online fraud and abuse, transforming both attack and defense strategies.

Gaining a clear understanding of this rapidly evolving threat — and what you can do to reduce your risk — is a critical priority.

You can start by downloading the State of the Internet (SOTI) Fraud and Abuse Report 2025: Charting a Course Through AI’s Murky Waters.
