Why security experts say you should beware of OpenClaw.
 ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­  
Thursday, February 12, 2026
OpenClaw is the bad boy of AI agents. Here’s why security experts say you should beware


Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: The wild side of OpenClaw…Anthropic’s new $20 million super PAC counters OpenAI…OpenAI releases its first model designed for super-fast output…Anthropic will cover electricity price increases from its AI data centers…Isomorphic Labs says it has unlocked a new biological frontier beyond AlphaFold.

OpenClaw has spent the past few weeks showing just how reckless AI agents can get — and attracting a devoted following in the process.

The free, open-source autonomous artificial intelligence agent, developed by Peter Steinberger and originally known as ClawdBot, takes the chatbots we know and love — like ChatGPT and Claude — and gives them the tools and autonomy to interact directly with your computer and others across the internet. Think sending emails, reading your messages, ordering tickets for a concert, making restaurant reservations, and much more — presumably while you sit back and eat bonbons.

The problem with giving OpenClaw extraordinary power to do cool things? Not surprisingly, it’s the fact that it also gives it plenty of opportunity to do things it shouldn’t, including leaking data, executing unintended commands, or being quietly hijacked by attackers, either through malware or through so-called “prompt injection” attacks. (Where someone includes malicious instructions for the AI agent in data that an AI agent might use.)

The excitement about OpenClaw, say two cybersecurity experts I spoke to this week, is that it has no restrictions, basically giving users largely unfettered power to customize it however they want.

“The only rule is that it has no rules,” said Ben Seri, cofounder and CTO at Zafran Security, which specializes in providing threat exposure management to enterprise companies. “That’s part of the game.” But that game can turn into a security nightmare, since rules and boundaries are at the heart of keeping hackers and leaks at bay.

Classic security concerns
The security concerns are pretty classic ones, said Colin Shea-Blymyer, a research fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Permission misconfigurations — who or what is allowed to do what — mean humans could accidentally give OpenClaw more authority than they realize, and attackers can take advantage.

For example, in OpenClaw, much of the risk comes from what developers call “skills,” which are essentially apps or plugins the AI agent can use to take actions — like accessing files, browsing the web, or running commands. The difference is that, unlike a normal app, OpenClaw decides on its own when to use these skills and how to chain them together, meaning a small permission mistake can quickly snowball into something far more serious.

“Imagine using it to access the reservation page for a restaurant and it also having access to your calendar with all sorts of personal information,” he said. “Or what if it’s malware and it finds the wrong page and installs a virus?”

OpenClaw does have security pages in its documentation and is trying to keep users alert and aware, Shea-Blymyer said. But the security issues remain complex technical problems that most average users are unlikely to fully understand. And while OpenClaw’s developers may work hard to fix vulnerabilities, they can’t easily solve the underlying issue of the agent being able to act on its own — which is what makes the system so compelling in the first place.

“That’s the fundamental tension in these kinds of systems,” he said. “The more access you give them, the more fun and interesting they’re going to be — but also the more dangerous.”

Enterprise companies will be slow to adopt
Zafran Security’s Seri admitted that there is little chance of squashing user curiosity when it comes to a system like OpenClaw, though he emphasized that enterprise companies will be much slower to adopt such an uncontrollable, insecure system. For the average user, he said, they should experiment as though they were working in a chemistry lab with a highly explosive material.

Shea-Blymyer pointed out that it’s a positive thing that OpenClaw is happening first at the hobbyist level. “We will learn a lot about the ecosystem before anybody tries it at an enterprise level,” he said. “AI systems can fail in ways we can’t even imagine,” he explained. “[OpenClaw] could give us a lot of info about why different LLMs behave the way they do and about newer security concerns.”

But while OpenClaw may be a hobbyist experiment today, security experts see it as a preview of the kinds of autonomous systems enterprises will eventually feel pressure to deploy.

For now, unless someone wants to be the subject of security research, the average user might want to stay away from OpenClaw, said Shea-Blymyer. Otherwise, don’t be surprised if your personal AI agent assistant wanders into very unfriendly territory.

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI
AI IN THE NEWS

Anthropic's new $20 million super PAC counters OpenAI. According to the New York Times, Anthropic has pledged $20 million to a super PAC operation designed to back candidates who favor stronger AI safety and regulation, setting up a direct clash ahead of the midterm elections. The funding will flow through the dark-money nonprofit Public First Action and allied PACs, in opposition to Leading the Future, a super PAC backed by primarily by OpenAI president and cofounder Greg Brockman and venture firm  Andreessen Horowitz. While Anthropic avoided naming OpenAI directly, it warned that “vast resources” are being deployed to oppose AI safety efforts, highlighting a deepening divide within the AI industry over how tightly powerful models should be regulated — and signaling that the battle over AI governance is now playing out not just in labs and boardrooms, but at the ballot box.

Mustafa Suleyman plots AI ‘self-sufficiency’ as Microsoft loosens OpenAI ties. The Financial Times reported that Microsoft is pushing toward what its AI chief Mustafa Suleyman calls “true self-sufficiency” in artificial intelligence, accelerating efforts to build its own frontier foundation models and reduce long-term reliance on OpenAI, even as it remains one of the startup’s largest backers. In an interview, Suleyman said the shift follows a restructuring of Microsoft’s relationship with OpenAI last October, which preserved access to OpenAI’s most advanced models through 2032 but also gave the ChatGPT maker more freedom to seek new investors and partners — potentially turning it into a competitor. Microsoft is now investing heavily in gigawatt-scale compute, data pipelines, and elite AI research teams, with plans to launch its own in-house models later this year, aimed squarely at automating white-collar work and capturing more of the enterprise market with what Suleyman calls “professional-grade AGI.” 

OpenAI releases its first model designed for super-fast output. OpenAI has released a research preview of GPT-5.3-Codex-Spark, the first tangible product of its partnership with Cerebras, using the chipmaker’s wafer-scale AI hardware to deliver ultra-low-latency, real-time coding in Codex. The smaller model, a streamlined version of GPT-5.3-Codex, is optimized for speed rather than maximum capability, generating responses up to 15× faster so developers can make targeted edits, reshape logic, and iterate interactively without waiting for long runs to complete. Available initially as a research preview to ChatGPT Pro users and a small set of API partners, the release signals OpenAI’s growing focus on interaction speed as AI agents take on more autonomous, long-running tasks — with real-time coding emerging as an early test case for what faster inference can unlock.

Anthropic will cover electricity price increases from its AI data centers. Following a similar announcement by OpenAI last month, Anthropic announced yesterday that as it expands AI data centers in the U.S., it will take responsibility for any increases in electricity costs that might otherwise be passed on to consumers, pledging to pay for all grid connection and upgrade costs, bring new power generation online to match demand, and work with utilities and experts to estimate and cover any price effects; it also plans to invest in power-usage reduction and grid optimization technologies, suppo