Researchers report that the open-source agent’s ability to execute commands and connect to external systems could increase the attack surface if not properly controlled.
Trend Micro researchers identified multiple security risks associated with Openclaw, an open-source AI agent that autonomously performs tasks by interacting with external tools and systems.
In a report published on its website, Trend Micro described how Openclaw integrates large language models with capabilities that allow it to execute commands, retrieve data, and interact with web services. The researchers said these features could enable automated reconnaissance, credential harvesting, and scripted exploitation if the agent is improperly configured or intentionally misused.
According to the report, Openclaw can be connected to external application programming interfaces (APIs), databases, and system shells. Trend Micro said that when guardrails are limited or absent, such integrations may allow the agent to access sensitive information or perform actions beyond a user’s intent. The company noted that combining language-model reasoning with tool execution increases the potential impact of compromised or malicious prompts.
Trend Micro stated that organizations deploying agentic AI systems should implement strict access controls, logging, and monitoring. The report also emphasized the need to restrict system permissions and validate outputs before execution to reduce the risk of automated misuse.
The company said its findings were based on technical testing of Openclaw’s publicly available code and documentation. The report did not attribute the risks to any specific organization but focused on the broader security considerations associated with agent-based AI systems.
The Trend Micro findings coincide with China’s Ministry of Public Security warning that some open-source AI agents, including Openclaw, may pose cybersecurity and data protection risks if deployed without sufficient safeguards, according to a public notice issued this month.