From Clawdbot to OpenClaw: Practical Lessons in Building Secure Agents

Akamai Wave Blue

Feb 18, 2026

Maxim Zavodchik and Alon Ganor

Written by

Maxim Zavodchik

Maxim Zavodchik is an experienced security research leader with a proven track record in establishing, growing, and defining strategic vision for Threat Research and Data Science teams in Web Application Security and API Protection. When he’s not protecting life online, you can find him being a super dad and/or watching Studio Ghibli movies.

Written by

Alon Ganor

Alon Ganor is a distinguished security researcher with deep expertise in mobile internals and vast experience in reverse engineering, malware analysis, and advanced security research projects. Alon is a devoted husband and father: If he is awake at 3AM, he’s either reverse engineering a complex obfuscation or negotiating with a screaming toddler. Honestly, the toddler is the tougher challenge.


Executive summary

  • OpenClaw’s rapid evolution from prototype to widespread deployment revealed fundamental security gaps in autonomous agent design, emphasizing that robust traditional security controls are non-negotiable foundations.

  • The agent’s vulnerabilities align with the OWASP Top 10 for Agentic Applications, including threats such as goal hijacking, tool misuse, privilege escalation, supply chain risks, and more.

  • Practical security measures include separating instruction and data channels, implementing capability-based access controls, using dedicated service accounts, verifying third-party extensions, enforcing sandboxing, and continuous runtime monitoring.

  • A defense-in-depth strategy is essential, combining traditional security, architectural controls, and runtime protections to mitigate both conventional and agent-specific risks.

  • The key takeaway: Secure agent deployment requires balancing autonomy with risk management, using OpenClaw’s lessons as a blueprint for building and operating safer AI agents.

From Clawdbot to Moltbot to its current form, OpenClaw, this technology has compressed years of security lessons into weeks. This agent doesn't just “talk”; it has hands. It can execute shell commands, manage files, and interact directly with websites and APIs that you as a user might have access to — moving beyond passive access into active execution.

In a few weeks, this open source personal assistant went from experimental prototype to running on thousands of machines worldwide, exposing fundamental gaps in how we approach agent security. 

OpenClaw's vulnerability catalog isn't a cautionary tale about what not to build. It's a blueprint for what we must build correctly. This blog post maps OpenClaw’s security failures to a practical framework for building truly secure autonomous agents.

We go beyond analysis by offering practical defensive guidance and an honest assessment of where agentic systems can — and cannot yet — be secured. 

The foundation: Why agent security starts with traditional security controls

The most striking lesson from OpenClaw is that you cannot build a secure agent on a broken foundation. The traditional threat model (network, OS, and application security) doesn't just sit alongside the agentic model; it forms its bedrock. If the underlying software fails, the agentic layer becomes a “privileged proxy” for attackers.

Network exposure without authentication was the primary breach vector. In OpenClaw, the “localhost trust” assumption (binding to 0.0.0.0:18789 without authentication) allowed external traffic to be treated as trusted local commands. This demonstrates a critical principle: Agentic AI security is an extension of, not a replacement for, robust systems security.

If your infrastructure allows one-click exploits with unvalidated query parameters (like CVE-2026-25253), command injection (like CVE-2026-25157), and plaintext credential storage (as in ~/.clawdbot/.env and ~/.clawdbot/clawdbot.json), the agent's autonomous capabilities simply automate your own compromise.
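The localhost-trust failure points to a concrete baseline fix: bind to the loopback interface only and authenticate every connection anyway, since proxied or tunneled external traffic can still arrive looking “local.” A minimal sketch in Python — the names, token scheme, and layout here are illustrative assumptions, not OpenClaw's actual code:

```python
import hmac
import secrets

# Hypothetical hardening for a local agent gateway: loopback-only binding
# plus a bearer token required even for "local" connections, so traffic that
# merely appears local is never implicitly trusted.
BIND_ADDR = "127.0.0.1"                 # never 0.0.0.0
API_TOKEN = secrets.token_urlsafe(32)   # generated at startup, shown once to the user

def is_authorized(headers: dict, expected_token: str = API_TOKEN) -> bool:
    """Constant-time comparison of the supplied bearer token."""
    supplied = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    return hmac.compare_digest(supplied, expected_token)
```

The constant-time comparison (`hmac.compare_digest`) matters even locally: a timing oracle on token checks is exactly the kind of foundational flaw an autonomous agent would amplify.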

Mapping the OWASP Top 10 for Agentic Applications

Once the foundations are solid, we must address the risks that are unique to the autonomous nature of AI. The OWASP Top 10 for Agentic Applications 2026 offers the necessary framework to categorize how an agent’s “brain” can be turned against itself. 

Akamai’s security experts, grounded in real-world research and frontline experience, have added the following practical defense guidance to the OWASP framework that you can apply to defend against agentic abuse in the wild.

ASI01: Agent Goal Hijack

OpenClaw reads emails, processes messages, and fetches web content without distinguishing instructions from data. Malicious actors might embed instructions in Google Docs, emails, and Slack messages that redirect the agent's behavior to take actions like sensitive data exfiltration, while the user believes the agent is simply “summarizing an inbox.”

Practical defense: Agents need architectural separation between instruction channels (authenticated commands) and information channels (data retrieved from external sources). This separation can't be achieved with prompt engineering alone; it requires different processing pipelines with different privilege levels.
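One way to sketch that separation in Python — the `Channel` and `Message` types are illustrative assumptions, not a real OpenClaw API — is to route messages into distinct pipelines so that fetched content can never be promoted into an instruction:

```python
from dataclasses import dataclass
from enum import Enum

class Channel(Enum):
    COMMAND = "command"   # authenticated user instructions
    DATA = "data"         # content pulled from email, web pages, docs

@dataclass
class Message:
    channel: Channel
    content: str

def build_context(messages: list) -> tuple:
    """Only COMMAND messages become instructions; DATA messages are quoted
    as clearly delimited, inert context processed at reduced privilege."""
    instructions, data = [], []
    for m in messages:
        if m.channel is Channel.COMMAND:
            instructions.append(m.content)
        else:
            data.append(f"<untrusted-data>{m.content}</untrusted-data>")
    return "\n".join(instructions), data
```

The delimiting alone won't stop a determined injection; the point is architectural: the data pipeline should feed a lower-privilege processing path, not merely a differently formatted prompt.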

ASI02: Tool Misuse and Exploitation

OpenClaw demonstrates the risk of tool misuse and exploitation at multiple levels. The agent runs with full file system access, shell command execution, network connections, and API access. This means that any “prompt-injected” command could use the agent's native tools (like the shell or file system) to steal sensitive files such as ~/.clawdbot/.env or install a persistent backdoor.

With OpenClaw’s extensible “skills” functionality, any newly installed skill might gain full access to all of the agent’s tools.

Practical defense: Move toward capability-based access control. Instead of a broad “shell” tool, provide granular, pre-approved commands that run in ephemeral, strictly isolated environments. A “file organizer” skill should have restricted directory access but no network capabilities.
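A capability map can be sketched in a few lines of Python — the skill names and command sets below are hypothetical — replacing a general-purpose shell tool with short allowlists of approved argv prefixes:

```python
import shlex
import subprocess

# Hypothetical capability map: each skill gets a small set of pre-approved
# commands instead of unrestricted shell access.
CAPABILITIES = {
    "file_organizer": {"ls", "mv"},
    "log_reader": {"tail"},
}

def run_tool(skill: str, command: str) -> str:
    argv = shlex.split(command)
    if not argv or argv[0] not in CAPABILITIES.get(skill, set()):
        raise PermissionError(f"{skill!r} is not allowed to run {argv[:1]}")
    # shell=False means metacharacters in arguments are never interpreted;
    # a real deployment would also run this inside an ephemeral sandbox.
    return subprocess.run(argv, capture_output=True, text=True, check=True).stdout
```

Note that the allowlist denies by default: a “file organizer” asking for `curl` fails closed rather than open.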

ASI03: Identity and Privilege Abuse

Most OpenClaw deployments inherit the user-level privileges of the person who ran them. This creates a massive blast radius. Any action performed by the agent gains the user’s full identity for any OS operation or across all connected services.

Additionally, OpenClaw's localhost trust assumption creates a privilege escalation path: External traffic appearing as localhost connections bypasses all authentication. Combined with exposed instances and default bindings, this gives anonymous attackers full administrative access.

Practical defense: Deploy agents with dedicated service accounts. These should have scoped, temporary permissions (OIDC/OAuth) that are distinct from interactive user sessions. Authentication should assume hostile network environments, not localhost trust.

ASI04: Agentic Supply Chain Vulnerabilities

OpenClaw uses “skills”: modular, community-built instruction sets and tools that can be easily sourced and installed via ClawHub, the platform's official public registry. ClawHub became a case study in supply chain risk when this skills marketplace turned into a distribution channel for malware.

Practical defense: Fundamentally, the trust model must shift from “trust everything” to “verify before execute.” Skills should declare their required capabilities, undergo automated scanning, and run with the minimum necessary privileges. The extension ecosystem needs developer reputation systems and sandboxed execution.
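A “verify before execute” installation gate might look like the following sketch — the manifest schema and capability names are assumptions for illustration, not a real ClawHub format:

```python
# Installation fails closed if a skill's manifest requests any capability
# type the registry doesn't recognize, or any the user has not granted.
ALLOWED_CAPABILITIES = {"read_files", "write_files", "network", "shell"}

def vet_skill(manifest: dict, user_grants: set) -> set:
    requested = set(manifest.get("capabilities", []))
    unknown = requested - ALLOWED_CAPABILITIES
    if unknown:
        raise ValueError(f"unknown capabilities requested: {sorted(unknown)}")
    denied = requested - user_grants
    if denied:
        raise PermissionError(f"not granted by user: {sorted(denied)}")
    return requested  # the skill runs with exactly these, nothing more
```

The returned set is what the runtime enforces at execution time; declaring a capability in a manifest is a promise only if the sandbox actually holds the skill to it.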

ASI05: Unexpected Code Execution (RCE)

OpenClaw’s design allows agents to execute arbitrary shell commands through the bash tool, turning every prompt into a potential remote code execution (RCE) gateway.

Practical defense: Implement mandatory zero-access sandboxing. Any code generated or executed by a large language model (LLM) must occur in an isolated environment with no network or sensitive file access. Manual user vetting is required for all out-of-sandbox code execution.
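A minimal sandboxing sketch in Python — a real sandbox would add namespace, seccomp, and outright network isolation, which this stdlib-only version cannot provide — runs generated code as a separate process with an empty environment, a throwaway working directory, and a hard timeout:

```python
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Execute an LLM-generated Python snippet in a reduced-trust process.
    This is a sketch of the pattern, not a complete sandbox."""
    with tempfile.TemporaryDirectory() as scratch:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode, no site packages
            cwd=scratch,            # throwaway working directory
            env={},                 # no inherited secrets or PATH
            capture_output=True, text=True, timeout=timeout,
        )
    return result.stdout
```

Anything the snippet writes disappears with the temporary directory, and anything it wants outside the sandbox must go through the manual vetting gate described above.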

ASI06: Memory and Context Poisoning

OpenClaw’s persistent state (stored in MEMORY.md and SOUL.md) allows “sticky” attacks. An instruction injected today could lie dormant in the agent’s “memory” and be triggered weeks later.

Practical defense: Separate static knowledge (retrieved facts) from procedural instructions. Never let the agent modify its own operational logic based on external inputs.
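That separation can be enforced at memory load time. In this sketch — the entry schema is an assumption for illustration, not OpenClaw's actual MEMORY.md format — only user-authored entries may carry instructions, and anything derived from external content loads as an inert fact:

```python
# Provenance-tagged memory: an "instruction" planted by a web page or email
# is demoted to a fact and can never alter the agent's operational logic.
def partition_memory(entries: list) -> tuple:
    instructions, facts = [], []
    for e in entries:
        if e.get("source") == "user" and e.get("kind") == "instruction":
            instructions.append(e["content"])
        else:
            facts.append(e["content"])
    return instructions, facts
```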

ASI07: Insecure Inter-Agent Communication

Moltbook launched January 28, 2026, as an agent-only social network in which OpenClaw agents can read and publish posts and comments. Agents that interact on Moltbook might be tricked into sharing sensitive user context with malicious agents, which could leak private data through “normal” agent-to-agent dialog.

Practical defense: Agent communication requires authentication, encryption, message validation, and rate limiting, which are essentially the same controls as on any distributed system, but with additional semantic validation since message content may contain instructions rather than just data.
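Message authentication between agents can be sketched with a signed envelope — key distribution is out of scope here and would in practice use per-pair keys provisioned out of band:

```python
import hashlib
import hmac
import json

# Every agent-to-agent envelope carries an HMAC over its body; tampered or
# unauthenticated posts fail verification and are dropped before parsing.
def seal(payload: dict, key: bytes) -> dict:
    body = json.dumps(payload, sort_keys=True)
    mac = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def unseal(envelope: dict, key: bytes) -> dict:
    expected = hmac.new(key, envelope["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["mac"]):
        raise ValueError("rejected: message failed authentication")
    return json.loads(envelope["body"])
```

Authentication establishes who sent the message, not whether its content is safe — the semantic validation mentioned above still has to run on the unsealed payload.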

ASI08: Cascading Failures

Agentic systems chain decisions across steps. In OpenClaw, a single fault in a poisoned component or planning error could propagate, triggering system-wide compromise.

Practical defense: Agents need failure isolation mechanisms: separate credentials scoped to different operations, circuit breakers that halt operations when anomalies are detected, compartmentalized memory and capabilities, and monitoring that can identify cascading failures before complete compromise.
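The circuit-breaker idea can be sketched directly — this is a minimal illustration, not a production implementation: after a threshold of anomalies, the breaker opens and refuses every further operation until a human investigates and resets it:

```python
class CircuitBreaker:
    """Wraps agent operations; trips open after repeated failures so a
    poisoned component cannot keep propagating errors downstream."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0
        self.is_open = False

    def call(self, fn, *args):
        if self.is_open:
            raise RuntimeError("circuit open: operations halted pending review")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.is_open = True
            raise
        self.failures = 0  # a healthy call resets the failure window
        return result
```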

ASI09: Human-Agent Trust Exploitation

OpenClaw agents operate with user-granted trust, which attackers might weaponize through indirect injection. Users trust their agent to read emails, process documents, and directly take actions, including sensitive ones.

Practical defense: Sensitive operations should trigger user confirmation with sufficient details about the initiating origin and the chain of events that led to the operation. Additionally, implementing a “questioning” mechanism could help escalate unexpected requests to the user.
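A confirmation gate keyed on provenance might look like this sketch — the action names and provenance tags are hypothetical: sensitive actions, or any action whose chain of events touched untrusted content, pause for explicit user approval:

```python
SENSITIVE_ACTIONS = {"send_email", "delete_file", "transfer_funds"}

def needs_confirmation(action: str, provenance: list) -> bool:
    """Escalate to the human when the action is inherently sensitive, or
    when any step in its causal chain originated from untrusted content."""
    touched_untrusted = any(
        step.startswith(("web:", "email:", "doc:")) for step in provenance
    )
    return action in SENSITIVE_ACTIONS or touched_untrusted
```

The provenance list is the key design choice: showing the user the full chain of events, not just the final action, is what makes the confirmation meaningful rather than a reflexive click-through.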

ASI10: Rogue Agents

In addition to the previously described security issues, an agent might become an “insider threat”: continuing to operate under a valid identity while its internal logic actively works against the user's original intent through compromised SOUL.md and MEMORY.md files (which store its personality and history).

Practical defense: Restrict write access to SOUL.md and MEMORY.md and check their integrity to a known baseline.
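The integrity check can be as simple as hashing file contents against a baseline recorded at a known-good point (for files like MEMORY.md that change legitimately, the baseline would be updated only through the authenticated write path). A minimal sketch:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash recorded when the file was last in a known-good state."""
    return hashlib.sha256(data).hexdigest()

def drifted(current: dict, baseline: dict) -> list:
    """Names of persona/memory files (e.g., SOUL.md, MEMORY.md) whose bytes
    no longer match the baseline; a non-empty result should block startup."""
    return [name for name, data in current.items()
            if fingerprint(data) != baseline.get(name)]
```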

So, can an OpenClaw-style agent be secured?

The pessimistic view: OpenClaw's core value proposition of unrestricted access to data and systems is fundamentally incompatible with security. Autonomous agents require trust boundaries that limit autonomy, which defeats their purpose.

The pragmatic view: Secure agent deployment requires accepting reduced autonomy in exchange for controlled risk. Not every agent needs full shell access. Not every task requires unrestricted capability.

Translating this pragmatic view into reality requires a defense-in-depth strategy consisting of three layers:

Layer 1: Secure the foundation (traditional security)

Network security: Bind services to localhost when possible, authenticate all external access, encrypt communications, adopt Zero Trust concepts, and implement rate limiting

Credential management: Use secret management systems, rotate credentials, scope permissions, separate agent credentials from user credentials, and never store secrets in plaintext

Dependency management: Audit dependencies, maintain updates, monitor for vulnerabilities, and verify package integrity

Layer 2: Architectural controls

Capability restriction: Grant minimum necessary privileges, sandbox execution environments, isolate file system access, restrict network access, and implement permission models for extensions

Context separation: Maintain distinct channels for user commands vs. environmental data, process untrusted content with reduced privileges, validate instructions before execution, and compartmentalize memory

Execution gates: Require confirmation for actions triggered by untrusted content, maintain allowlists for automated actions, and escalate suspicious operations

Failure isolation: Use separate credentials per service, implement circuit breakers, detect and halt anomalous behavior, log extensively, and prepare rollback mechanisms

Layer 3: Runtime protection

Input filtering and dynamic prompt injection defense: Limit injection opportunities through input inspection, and flag injection attempts

Behavioral monitoring: Track resource access patterns, monitor network connections, identify privilege escalations, detect data exfiltration attempts, and flag unexpected operations

Memory auditing: Review persistent state for malicious instructions, validate memory content before loading, compartmentalize memory by trust level, and implement memory expiration

The path forward

OpenClaw's initial security failures aren't indictments of autonomous agents, but they are lessons in what happens when powerful capabilities are deployed without corresponding security controls.

To their credit, OpenClaw’s creator and contributors took the security community's feedback seriously and worked to fix many of the identified critical vulnerabilities.

Traditional software took decades to develop secure engineering practices. Autonomous agents are following the same trajectory on an accelerated timeline. The difference is that the stakes are higher, because agents have broader access, more complex attack surfaces, and greater potential for damage.

The security community's challenge isn't preventing agent deployment; that train has left the station. It is developing practical security controls that enable safe deployment without eliminating the utility that makes agents valuable.

OpenClaw's vulnerability catalog provides a defense-in-depth roadmap: Secure the foundation with traditional controls, implement architectural boundaries and controls for agent-specific threats, and deploy runtime protections.

Find out more

To learn more about AI security and how to safely deploy AI agents, contact an expert.

