For the full technical analysis, please visit the Eye Research website.
AI assistants with deep system access are rapidly moving into production environments. Tools like OpenClaw connect to email, cloud platforms and local machines, and can take action on behalf of users. That level of autonomy changes the security model.
In our review of OpenClaw, we identified a vulnerability in the way the application handled WebSocket connections. User-controlled HTTP headers, including the Origin and User-Agent fields, were written directly into application logs without sanitisation. We confirmed that nearly 15 KB of data could be injected through these headers.
On its own, logging untrusted input is a known weakness. In the context of an AI agent that may read and interpret its own logs, it creates a different type of exposure.
The issue has been responsibly disclosed and patched by the OpenClaw maintainers before publication of this blog.
When injected data becomes reasoning context
OpenClaw is designed to help troubleshoot issues by analysing its own logs. During testing, we confirmed that externally supplied header values were stored verbatim. If an OpenClaw instance is reachable from the internet, an attacker could inject structured content into those logs through a crafted request.
If the agent later reads those logs as part of its debugging process, the injected content becomes part of the model’s reasoning context.
Below is an example from our controlled sandbox environment where OpenClaw detected the injected content in its logs and raised a prompt injection alert.

This is not a traditional remote code execution vulnerability. We were not able to directly execute arbitrary commands. During our proof-of-concept testing, OpenClaw’s guardrails detected the injection attempt and refused to act on it.
However, the injection surface itself existed. With a large allowed payload size and no sanitisation at the logging layer, the model could be exposed to attacker-controlled contextual input. The practical impact depends on how logs are consumed and how robust downstream safeguards are.
We therefore classify this as an indirect prompt injection risk. It attempts to influence decision-making rather than bypassing technical controls.
A broader design consideration
AI agents differ from traditional software in one important way: they treat contextual data as input for reasoning.
Logs and system output are no longer just diagnostic artifacts. In AI-driven systems, they can influence behaviour. When untrusted data enters those channels without control, the boundary between external influence and internal reasoning becomes less clear.
The OpenClaw vulnerability has been resolved. The structural lesson remains relevant for any organisation deploying AI agents with operational access.
In earlier research, including our Prompt Injection for Good initiative, we explored how contextual input can shape model behaviour. The OpenClaw case illustrates the same principle from a defensive perspective: if contextual injection is possible, it must be anticipated, tested and controlled.
Moving forward
AI assistants connected to production systems become part of the attack surface. Vulnerabilities or design oversights can have operational consequences beyond the AI tool itself.
This is not about one specific product. It is about visibility and control.
In practice, organisations need:
- Visibility into internet-exposed services
- Continuous monitoring for abnormal behaviour
- Detection of exploitation attempts, including injection and abuse patterns
- The ability to respond quickly when systems behave unexpectedly
At Eye Security, we approach this through continuous 24/7 monitoring, detection and response. Whether exposure originates from legacy infrastructure, cloud services or emerging AI systems, the objective is the same: identify risk early, detect malicious activity in real time and contain it before it escalates.
As AI agents become embedded in daily operations, ensuring they are covered by ongoing monitoring and response capabilities is no longer optional.