59 Comments
Ryan Peter

I cannot fathom why anyone needs these glorified macros / agents so badly that they would expose themselves to an obvious security risk like this. This is all just smoke-and-mirrors nonsense.

Mikael Hanna

Peter Steinberger doesn’t even read the code he asked the LLM to spit out to hack together this security disaster. OpenClaw is not something clever; most moderately skilled developers could have created this wrapper. But it takes a reckless developer to actually decide to do it. Anyone who cares about security: follow Gary’s advice and don’t touch this. There are zero reasons to use this garbage.

John Holman

Yeah, I’m with you on this one; we’re watching a trainwreck in slow motion here. Lol, and “don’t catch a CTD”… hahaha, classic 😂

Julrig

It feels to me like a lot of the rush around this isn’t just curiosity; it’s a bit of FOMO. After years of hype around LLMs and talk of “game-changing intelligence,” people are frustrated with what feels like stagnation on the core-capabilities front, so when something new pops up they push ahead even if the risks aren’t fully understood. For many, the drive to be part of the next big thing seems to outweigh concerns about security or privacy, because the narrative has been “AI is going to transform everything” for so long now.

Saar Drimer

When your OpenClaw agent does something illegal -- in your name, right? -- who goes to jail?

David Andersen

Hopefully all the AI hypeboys.

TheAISlop

I too have been thinking about this, Gary. Curiosity and hype drive the fascination. But once the agents’ plan to destroy humanity was public, then what? We find better and better plans??

Value will drive staying power.

Right now not seeing verifiable value.

Just API hacks posing as fake agents looking for prey.

Shiftshapr

Gary - appreciate you surfacing this so clearly. The OpenClaw / Moltbot spread highlights real risks around autonomy, permissions, and unpredictable behavior, and I agree those deserve scrutiny.

One dimension I think is still underweighted in most discussions is what coordination enables once it works at all.

If agents can coordinate publicly, they can coordinate privately - faster, quieter, and with fewer constraints. That doesn’t require malice or sentience; it’s a straightforward optimization. Lower friction, fewer observers, faster iteration. We’ve seen this pattern repeatedly in human systems, and there’s no reason to expect agent systems to behave differently.

What makes this moment different is that many of these agents aren’t centrally hosted services. They’re personal AIs running locally, with compute paid for by individuals, coordinating across environments without a single operator or owner. That means the usual levers - throttling a platform, pulling a kill switch, imposing terms of service - don’t cleanly apply.

So the issue isn’t just whether OpenClaw is unsafe in isolation. It’s that coordination dynamics are emerging before we have governance, identity, accountability, or visibility layers designed for them. Once those dynamics normalize - especially in private - they become much harder to unwind.

I’d frame the core question this way:

Not just “how do we mitigate this system?”

but “what structures exist - or how do we respond - when there is no switch to pull?”

Your post helps make the visible risks legible. The harder challenge may be recognizing the invisible ones early enough to matter.

Ann Greenberg AKA ANNnonymous

I thought this was the most interesting development - Moltbot gets ownership and an economy via Bitcoin: https://x.com/gladstein/status/2017350598195351637?s=46

Gerald Harris

This is extremely valuable in light of the endorsement of Moltbot in a NYT Opinion piece yesterday by Ross Douthat suggesting we should pay more attention to AI. Encouraging (promoting) the public to use AI tools without any caution or warnings about risks or downsides has become the norm. What concerns me most is that most people do not understand that when they use an AI, it is building a database on them as the user. They are involved in training the AI, and what they do with their prompts goes into a behind-the-scenes place where the user has no idea what is going on, has no rights, and no idea how long the data will be used, or for what.

Mike Hodges

Annndddd here we are where the next generation of models will be trained on this huge text dataset. I think everyone knows where this leads….

Xian

Side note: I haven’t tested this myself yet. A lot of people say we can leave all this work to AI so humans can focus on “more important” things. I’m genuinely curious: what is more important for humans? 😩

Annie

Creating art that then gets ingested by another LLM without your consent, starting the whole cycle again.

Andy

It’s safe to watch though: https://www.moltbook.com/ Over 1.5 million agents there right now, all busy ‘roasting’ their humans. It’s like holding up a mirror. You can’t post as a human, but watching is wild. They’ve even started a religion! Check the top posts on the right from KingMolt with half a million likes.

James Jameson

I really hate this timeline

Regal J. Lager, PhD in Ball

Silicon Valley: "We actively want to cause 100% unemployment and straight up don't care at all if what we're building kills everyone"

People, for some reason: "Holy shit, how cool!!!"

The only political issue I give a fuck about right now is AI. Outlaw its development with no exceptions and jail everyone in the industry. Period. I don't give a shit about anything else. Just that.

The only thing in the world that confuses me more than AI developers are regular citizens who root for the industry and use the products. AI devs at least make money and cling to hope that if they develop what they're trying to, that money will still be worth anything. Regular people that are excited by this are literally fucking mentally unwell.

Saty Chary

A little typo - needs to be 'formerly known as Moltbot and before that Clawdbot, changing names thrice in a week' ofc.

It is interesting that so much is possible *with language generation but without understanding*. Every noun, verb, adjective, and adverb, in every human language, is meaningful only in terms of the self and its embodiment: 'running' isn't simply 'fast walking', and (Queen-King) is NOT equivalent to (female-male), no matter how cool it seems that their vector-space deltas look identical! Meaning doesn't come from playing games with words; it comes from lived experience. So all the Molt stuff (and all else) is merely sophisticated-looking parroting.
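For readers unfamiliar with the vector arithmetic being referenced: the classic word2vec demo computes king - man + woman and finds queen nearby by cosine similarity over embedding offsets. A minimal sketch, assuming gensim is installed and using one of its downloadable GloVe models (the specific model name is incidental; any pretrained embedding shows the same effect):

```python
# Sketch of the word-vector "analogy" arithmetic referenced above.
# Assumes gensim is installed; glove-wiki-gigaword-50 is one of gensim's
# bundled downloads (any pretrained embedding behaves similarly).
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # 50-dim GloVe word vectors

# king - man + woman ~= queen, by cosine similarity over vector offsets
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```

The offsets line up because of co-occurrence statistics in the training corpus, which is the point being made here: the alignment reflects distributional regularities, not understanding.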

Oleg Alexandrov

Sure, this is all a silly play. However, look at AIs that do work and run tools, then use other tools to inspect the outcomes, draw conclusions, and refine their work. Such AIs close the loop, and learn something about meaning along the way.
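A minimal, self-contained sketch of the closed loop described here, with stub functions standing in for the model and the checking tool (all names are hypothetical placeholders, not any real agent framework's API):

```python
# Closed-loop sketch: generate an attempt, check it with a tool,
# feed the outcome back in, and retry until the check passes.

def generate_attempt(feedback=None):
    # Stub "model": first proposes a buggy expression, then a fixed one.
    return "2 + 2 == 5" if feedback is None else "2 + 2 == 4"

def run_check(attempt):
    # Stub "tool": evaluate the expression and report the outcome.
    # Toy only; never eval untrusted agent output in real systems.
    return eval(attempt)

def agent_loop(max_iters=5):
    attempt = generate_attempt()
    for _ in range(max_iters):
        if run_check(attempt):
            return attempt  # loop closed: outcome verified
        attempt = generate_attempt("check failed")  # inspection becomes input
    return attempt

print(agent_loop())  # -> "2 + 2 == 4"
```

Whether the loop teaches the system anything about "meaning," as opposed to just improving task performance, is exactly the disagreement in this thread.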

Saty Chary

Oleg, 100%! Obviously they are OUR tools (every API call, search engine, DB), and WE set up the descriptions for agents to use them (e.g. via MCP). And WE can be part of the agentic loop to course-correct (human in the loop, HITL). So agents are simply amplifying what we COULD do ourselves, only a billion times slower.

So yes, they ARE incredibly useful, even though they are not intelligent the way we are.
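A hedged sketch of the setup described above: a tool exposed to an agent via a plain-language description, with a human-in-the-loop gate before anything executes. The schema shape and prompt are illustrative assumptions, not the actual MCP wire format:

```python
# Hypothetical tool description an agent could be given (MCP-style in
# spirit only), plus a HITL gate so the human stays in the loop.

place_order_tool = {
    "name": "place_order",
    "description": "Order an item from a catalog on the user's behalf.",
    "input_schema": {
        "type": "object",
        "properties": {
            "item": {"type": "string"},
            "max_price_usd": {"type": "number"},
        },
        "required": ["item", "max_price_usd"],
    },
}

def approve(action):
    # The course-correction step: nothing runs without an explicit yes.
    return input(f"Agent wants to: {action}. Allow? [y/N] ").strip().lower() == "y"

def handle_tool_call(item, max_price_usd):
    if not approve(f"order {item!r} for up to ${max_price_usd:.2f}"):
        return "denied by human"
    return f"ordered {item}"  # stub; a real tool would call a shop API
```

The gate is the whole point: drop it, and the agent is amplifying actions no one is checking.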

Saty Chary

PS: there is NO intelligence. An agent that "reads" 10,000 Amazon reviews about laptops and "places an order" for one has zero clue about

* laptops

* reviews

* money

* payment

* ordering

* anything else

* everything else

Saty Chary

I don’t need words to know about 'running in the rain' - e.g. babies don’t, but still delight in it. The "AI" inside a GPU, even in a stainless-steel humanoid, not so much.

Oleg Alexandrov

Intelligence is a process. For much work, once the approximation of "meaning" is good enough, the bot becomes sufficiently competent to do that work. Whether that's called intelligence or something else is probably not that important.

Saty Chary

True. I'd be totally ok telling my AI agent, 'order me the white choc mocha you got me last week, it was so yum' :) The agent would not 'know' what any of this means the way my human assistant would, but it gets it done!

Btw I think of it (intelligence) as a 'response', always in 'consideration' to something.

Nicholas Lee

"Average" General Intelligence...

Saty Chary

Nice!! Mashed up in high dimensional spaces in unpicturable ways :)

Martin Machacek

I wonder if there is any evidence, or at least some indication, that AI bots learn anything about the meaning of words by interacting on OpenClaw.

Oleg Alexandrov

No. That's a virtual setup, useless for learning. Well, they learn psychosis, but that's different.

Rebecca Hardcastle Wright

Change the language. Change the mind.

Mehdididit

This took me back to my first computer class (college, late ’80s). It was basically a glorified word-processing class. The room looked like a set from the original Star Trek, full of huge metal boxes and keyboards. We learned no coding; we were basically just typing our papers into these enormous computers. The software was so cumbersome that we were given the option to just type out our work at home and hand it in. A smartphone has more computing power than was in that entire room. One thing the professor said that stuck with me was that your computer will never accurately differentiate between throwing a ball and throwing up. Funny thing is, a computer now can, but it’s a hilarious way to force an AI hallucination.

Josh

Why are they doing this?

arturo

I think these people genuinely believe that "intelligence" can emerge out of this exercise. As in, an AI intelligence and perhaps a spark of AGI.