Executive summary
- Agentic AI is reshaping automation: Traditional bot management built on “good vs. bad” classification must evolve to understand bots’ intent and identity as AI agents increasingly interact on users’ behalf.
- New authentication standards are emerging: Protocols like Web Bot Authentication (Web Bot Auth), Know Your Agent (KYA), and Visa’s Trusted Agent Protocol (TAP) bring cryptographic verification to bot and agent interactions, improving trust and transparency.
- Monetization replaces blanket blocking: Instead of denying all AI bots, advanced strategies enable content licensing and fair value exchange through partnerships with platforms such as Skyfire and TollBit.
- Comprehensive protection is essential: The combination of bot detection, identity transparency, and payment protocols enables websites to distinguish between legitimate agents and evasive automation, laying the foundation for secure, agentic commerce.
Introduction
In the first blog post in this two-post series, I discussed how agentic AI is revolutionizing web interactions, changing how people shop, search, and consume content.
In this post, I’ll take you through how bot management is evolving in the age of AI agents, with new authentication standards, monetization models, and ways to manage AI-driven automation.
The state of bot detection
Today’s advanced bot management products are designed to detect two main types of bots:
“Good” or “verified” bots
“Bad” or “unverified” bots
“Good” or “verified” bots
“Good” or “verified” bots identify themselves in the user-agent header. The most common categories of these bots include web search engines, social media bots, online ad bots, SEO bots, and, more recently, AI bots from platforms such as OpenAI (ChatGPT), Perplexity, Google (Gemini), and others.
“Bad” or “unverified” bots
“Bad” or “unverified” bots are flagged by advanced bot detection methods. This category represents the largest volume of bot activity on the internet.
These “bad/unverified” bots are predominantly web scrapers, but this category also includes bots that are purpose-built to carry out various types of attacks, such as credential stuffing, account opening abuse, and other automated fraud scenarios. Without proper authentication mechanisms like the ones described later in this blog post, some of the emerging (“good”) AI agent/bot traffic may also fall into this category.
However, simply detecting, categorizing, and labeling bots as “good/verified” or “bad/unverified” is no longer enough. As agentic interaction becomes more mainstream, bot management products must evolve.
Effective products must also detect the bot’s intent. The shift to using agents to interact with websites means that the categorization of some of the activity currently identified as “bad/unverified” will need to be nuanced.
Detecting good bots has always been a game of guesswork that requires considerable maintenance. Yes, these bots provide us with the courtesy of identifying themselves in the HTTP headers, and, for most, we see the traffic originating from predictable IP addresses or networks.
The increase in legitimate automation on the internet, however, necessitates more effective identification methods for these known bots. Using at least two factors to identify this traffic continues to be important for accurate detection and to keep impersonators at bay.
Stronger bot identification and authentication
Today’s protocols bring cryptographic verification to bot and agent interactions, improving trust and transparency. These new protocols include:
Web Bot Authentication
The introduction of new standards, such as Web Bot Authentication (Web Bot Auth), is key to achieving strong and accurate identification of the so-called “known bots” now and in the future. The Web Bot Auth protocol is a lightweight authentication mechanism based on the HTTP Message Signatures standard (RFC 9421).
API tokens or signed credentials are issued to bots and tied to an identity and role. The server (in Akamai’s case, the edge servers deployed worldwide) will extract the relevant information from the HTTP headers, retrieve the public key, and validate the signature. It’s especially useful for web APIs, scraping with permission, and inter-service communication.
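To make the validation step concrete, here is a minimal sketch of that server-side check. Real Web Bot Auth deployments use asymmetric keys (e.g., Ed25519) retrieved from a bot operator's published key directory; this illustration substitutes HMAC-SHA256 with a shared key so it runs with the Python standard library alone, and the key IDs and registry are hypothetical.

```python
import base64
import hashlib
import hmac

# Hypothetical key registry standing in for a directory of bot public keys.
BOT_KEYS = {"example-bot-key-1": b"demo-secret"}

def signature_base(method, authority, path, params):
    """Build a simplified RFC 9421-style signature base string."""
    derived = {"@method": method, "@authority": authority, "@path": path}
    lines = [f'"{name}": {value}' for name, value in derived.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)

def verify_request(method, authority, path, key_id, signature_b64):
    """Validate a Web Bot Auth-style signature; False means 'not verified'."""
    key = BOT_KEYS.get(key_id)
    if key is None:
        return False  # unknown key: fall back to regular bot detection
    # Reconstruct the signed material from the request, then compare digests.
    params = f'("@method" "@authority" "@path");keyid="{key_id}"'
    base = signature_base(method, authority, path, params)
    expected = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(signature_b64))
```

A request whose signature fails this check is not automatically malicious; it simply gets no "verified bot" credit and is handled by ordinary bot detection.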
Web Bot Auth can work in conjunction with the Agent2Agent (A2A) Protocol and the Agent Payments Protocol (AP2), as well as the popular Model Context Protocol (MCP), which helps agents discover the services and tools available for a particular website.
The combination of these new standards is designed to help enable the emerging agentic commerce. Akamai is committed to supporting the new standards and collaborating with leading bot operators (Google, Microsoft, OpenAI, Perplexity, Amazon, Meta, Apple, and more) to promote their adoption.
Know Your Agent
Know Your Agent (KYA) builds upon the established Know Your Business (KYB) and Know Your Customer (KYC) models, providing a robust identification and verification process when an agent registers, which helps reduce the risks of fraud and abuse. Skyfire introduced KYAPay as an open protocol that can also help enable agentic commerce. KYAPay is an identity-linked payment protocol designed for interactions between AI agents and services (agent to agent and agent to service) in an autonomous, agentic ecosystem.
KYAPay aims to bind identity tightly (Who is acting? On whose behalf?) to payment intent or authorization, so that agents can transact without manual human steps each time. The KYA aspect of the protocol, which holds the identity, can be used in place of or in conjunction with Web Bot Auth.
It’s built to be compatible with existing infrastructure (APIs, OAuth flows) while also aligning with emerging standards in agent protocols (decentralized identifiers and verifiable credentials).
Once an agent is registered and approved, it receives an encrypted JSON Web Token (JWT). The agent is then expected to send the JWT with each request so that the server (again, in Akamai’s case, the edge servers deployed worldwide) can validate it and extract valuable information to help infer intent.
Trusted Agent Protocol
Building on the HTTP Message Signatures standard, Visa also recently announced the Trusted Agent Protocol (TAP), designed to enable agentic commerce. Just as with Web Bot Auth, TAP provides strong agent authentication, backed by Visa.
Like KYAPay, TAP also leverages JWTs to communicate the end user's identity and payment information to the merchant. Visa is working with other card issuers to establish a consistent methodology to support ecommerce transactions through AI agents.
Provide more in-depth visibility and facilitate decision-making
Adoption of Web Bot Auth, KYAPay, and TAP will take time and require investment from all parties to make them a reality. Bot management products must support and validate the AI agent’s credentials, whether they are provided as an HTTP Message Signature or a JWT.
Information included in the JWTs will provide more context about the bot interaction, help infer intent, categorize traffic more accurately, and help establish trust between all parties. This will provide website owners with more in-depth visibility and facilitate the decision on how AI traffic should be handled.
New response strategy: Monetize as an action
Agentic AI is emerging and growing rapidly — and all signs indicate that it is here to stay. Therefore, the “block all” strategy toward AI bots (that has been adopted by most publishers, for example) will not be sustainable and may be counterproductive in the future.
Akamai partners with the monetization platforms Skyfire and TollBit. When integrated with advanced bot detection methods, these platforms help provide a clear message to AI agents that access may not be free and encourage bot operators to register with the monetization service. The services can also facilitate content licensing arrangements before the content is served.
Monetization is often discussed in the context of media and publishers, but it could just as well apply to any organization that holds valuable information (such as scientific or financial data, product reviews, or consumer sentiment on various topics, products, or services) and deserves compensation for producing it.
Determining the optimal price for the content
The challenge with content monetization is determining the optimal price for the content. Monetization platforms enable content owners to set the price for their content. The current practice for publishing and other media organizations is to use the site CPM (cost per mille, or cost per thousand impressions) to reflect the typical revenue from online advertising and affiliate marketing when a user visits a page. But is this the right call?
A more dynamic model, based on content freshness, popularity, contribution to the generation of the answer, size, and a few other factors, should be considered. For example, a new and exclusive story that generates significant interest may require regular attention from the bot operator as the story unfolds, but it may have less value once the story concludes and the information has gone stale. For another type of content (e.g., financial, consumer sentiment, scientific), the price may depend on whether the data represents a commodity or is exclusive/rare/premium content.
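One way to reason about such a dynamic model is to start from the per-page CPM baseline and scale it by the factors listed above. The weights, decay rates, and field names below are purely illustrative assumptions for discussion, not a product API or a recommended price schedule.

```python
from dataclasses import dataclass

@dataclass
class ContentStats:
    age_days: float      # time since publication (freshness proxy)
    daily_views: float   # popularity proxy
    contribution: float  # 0..1 share of the generated answer
    size_kb: float       # payload size
    exclusive: bool      # rare/premium vs. commodity data

def price_per_access(stats, base_cpm_usd=15.0):
    """Scale the per-visit ad revenue (CPM / 1000) by content factors."""
    base = base_cpm_usd / 1000.0
    freshness = max(0.25, 1.0 / (1.0 + stats.age_days / 7.0))   # decays weekly
    popularity = min(3.0, 1.0 + stats.daily_views / 10_000.0)   # capped at 3x
    contribution = 0.5 + stats.contribution                     # 0.5x..1.5x
    size = min(2.0, 0.5 + stats.size_kb / 100.0)                # capped at 2x
    premium = 5.0 if stats.exclusive else 1.0                   # exclusivity
    return round(base * freshness * popularity * contribution * size * premium, 6)
```

Under this sketch, a breaking exclusive story prices well above a stale commodity page, which matches the intuition in the text: value decays as the story concludes, and rarity commands a premium.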
Broad adoption is key
Whether content monetization achieves broad adoption depends on cost. Everyone wants a fair price, but the definition of “fair” may differ for content owners, who want to maximize their income, and AI platforms, which want to minimize their costs.
If no common ground can be found and everyone sets their own price, the cost variability may not be acceptable to the AI platform, which may limit their incentive to comply with content licensing arrangements.
End-user identity transparency and intent
AI agents act on behalf of end users. One of the challenges for a site owner is the lack of transparency regarding the user identity and intent. A direct visit to a website is always an opportunity to generate leads for a business. When the interaction comes from AI agents, that opportunity is lost.
If the AI agent could consistently share information about the end user while collecting information from the site, this would allow website owners to reach out to end users and attract them back to their site for subsequent visits. Figure 1 illustrates a conceptual approach to using the KYA protocol to address the transparency challenge.
To transact with the web server protected by the bot manager, the AI bot would first need to register with the KYA system.
Upon successful validation of the AI bot identity, an encrypted JWT is returned. The AI agent is expected to send the JWT with each request to the protected site. The bot manager will then run the detection, validate the JWT, and use the embedded information to more accurately classify the traffic and infer the intent.
Upon successful validation (and if the bot manager’s policy allows it), access to the web server is granted.
The KYA token has the potential to convey more specific information about the request's context, the intended use of the data, and even the end user who is initiating the request, thereby resolving transparency issues among all parties.
KYA can be applied to all types of sites and use cases. Visa’s TAP can help achieve similar transparency outcomes but is more applicable to ecommerce sites.
Good intent vs. bad intent
AI agents are not designed to engage in malicious activities, and they typically have built-in protections to prevent abuse. AI platforms can complement the guardrails in their models with firewall-for-AI products that help assess prompts and block responses that may leak sensitive information. Bot management products can also be used to detect bots that attempt to interact with the agent.
However, the capabilities of AI agents are growing so rapidly that there is no guarantee someone with malicious intent won't trick an agent into misbehaving and abusing a website whose content or resources are used to answer a prompt or complete a task.
Therefore, a bot management product adapted to AI agents must also protect the websites the agent interacts with. For example, an agent searching for a product, adding it to the cart, logging in to the site, and checking out the product is considered an interaction with good intent.
However, an agent tricked into attempting to purchase multiple products during a sales event and logging in using various accounts and profiles should be flagged. This caution is to help prevent scalpers from exploiting the trust a bot manager system has in an AI agent once it’s been identified and authenticated through protocols like Web Bot Auth, KYA, or TAP.
Achieving selective mitigation
The existing capabilities of Akamai Account Protector can help mitigate traffic from misbehaving AI agents when they interact with transaction endpoints. However, to achieve selective mitigation — mitigating only the traffic from the user that causes the agent to misbehave, rather than all traffic from the agent — it will be necessary to identify the different users behind the agent, which is where the KYA or TAP can be helpful.
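The per-user mitigation idea can be sketched as a small decision routine: key the tracked behavior on the (agent, end user) pair taken from a KYA- or TAP-style token, and deny only the pair that crosses a behavioral threshold. The claim names (`agent_id`, `on_behalf_of`) and the accounts-per-user threshold are illustrative assumptions, not a description of Akamai Account Protector's internals.

```python
from collections import defaultdict

class SelectiveMitigator:
    """Block the abusive user behind an agent, not the whole agent."""

    def __init__(self, max_accounts_per_user=3):
        self.max_accounts = max_accounts_per_user
        self.accounts_seen = defaultdict(set)  # (agent, user) -> login accounts
        self.blocked = set()                   # (agent, user) pairs denied

    def decide(self, claims, login_account):
        key = (claims["agent_id"], claims["on_behalf_of"])
        if key in self.blocked:
            return "deny"
        self.accounts_seen[key].add(login_account)
        if len(self.accounts_seen[key]) > self.max_accounts:
            # One user driving the agent across many accounts (e.g., a
            # scalper): flag only that user's traffic going forward.
            self.blocked.add(key)
            return "deny"
        return "allow"
```

Because the key includes the end-user identity, other shoppers using the same authenticated agent continue to be served normally while the abusive user's traffic is mitigated.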
Figure 2 illustrates the protections upstream (in front of the AI agents) and downstream (in front of the website) to efficiently prevent AI agent abuse.
Apply a comprehensive bot management strategy
It is not enough to focus on one type of bot. AI agents from well-known platforms are easily detectable and manageable through a bot management solution or via the robots.txt file. If an AI bot is not allowed to collect data, it will, in most cases, honor this directive.
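For reference, the robots.txt check a well-behaved crawler performs can be reproduced with Python's standard-library parser. The user-agent token and the rules below are illustrative.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that bars one AI crawler from the articles section.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /articles/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant bot consults the parser before fetching and skips
# disallowed paths; a non-compliant scraper simply ignores the file.
articles_ok = parser.can_fetch("ExampleAIBot", "https://example.com/articles/story")
about_ok = parser.can_fetch("ExampleAIBot", "https://example.com/about")
```

The key limitation, as the next paragraphs note, is that robots.txt is purely voluntary: it constrains only the bots that choose to honor it.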
However, there is no guarantee that the company supporting the AI bot will not attempt to obtain data for training its models through alternative means, such as outsourcing data collection to a web scraping platform.
As described in Unpacking the Complex Dynamic Behind AI Models’ Data Needs, web scraping platforms:
Are more complex
Do not adhere to robots.txt directives
Employ strategies to blend in with legitimate traffic
Commonly use advanced methods to evade bot detection
These platforms require an equally advanced solution to keep up with their evolution and detect their activity. Therefore, adopting a comprehensive bot detection strategy is key to preventing data leakage and to protecting any monetization strategy that is implemented.
Blocking AI bots is not sufficient
Akamai compared the volume of scraping platforms detected via advanced detection methods with the volume observed directly in the AI bots category. Figure 3 presents a case study of a subset of publishers, where the volume of web scraping activity detected by advanced bot detection methods was 8x that of AI bots.
The significant difference can partly be explained by the strict robots.txt directive applied to AI bots.
However, it also suggests that publishers who aim to prevent AI platforms from collecting their data by blocking AI bots are not sufficiently protected. A similar or higher multiplier value between web scraping and AI bot activity can generally be observed with ecommerce sites.
Conclusion
AI platforms and agentic traffic are this decade's internet disruptors — and they are here to stay. The new interaction through agentic AI is showing strong adoption, and all industries must adapt. New protocols continually emerge to facilitate agent interaction and deliver on the promise of agentic commerce.
Authentication protocols such as Web Bot Auth, KYA, and Visa’s TAP must be standardized and adopted to bring stronger authentication and transparency to agentic interactions on the internet.
Bot management solutions must be able to detect agents and accurately infer their intent.
Ecommerce sites must optimize for AI.
Publishers and media must adapt their revenue models to recoup losses from lower ad revenue.
Although not mentioned specifically in this blog post, agentic traffic is also significantly impacting the online ad industry revenue models as users shift toward AI platforms that, so far, offer an ad-free experience.
Things are evolving rapidly, and the AI bombshell can feel a bit scary at times. But with big changes come opportunities to redefine things. From a bot management standpoint, these changes represent an opportunity for stronger cooperation among the major internet actors to redefine how bots are identified and to facilitate their smoother integration into the internet ecosystem.
Learn more
To learn more about Akamai’s agentic strategy, including our bot & abuse protection solutions, contact an expert.