OpenClaw isn't fooling me. I remember MS-DOS (flyingpenguin.com)
113 points by feigewalnuss 4 hours ago
piker 3 hours ago
Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks? I just personally have zero interest in letting an AI into my comms, and I see no value there whatsoever. Probably negative.
stingraycharles 4 minutes ago
This is being asked on pretty much every OpenClaw thread, and the use cases brought up seem roughly similar: digital assistant.
It of course depends heavily on your work, but my work is 50% communication / overseeing, and I simply lose track of everything.
I don’t give it any credentials of any sort, but I run data pipelines on an hourly basis that ingest into the agent’s workspace.
TheDong 3 hours ago
I find some value in it as kind of a better Alexa.
I have it hooked up to my smart home stuff, like my speaker and smart lights and TV, and I've given it various skills to talk to those things.
I can message it "Play my X playlist" or "Give me the gorillaz song I was listening to yesterday"
I can also message it "Download Titanic to my jellyfin server and queue it up", and it'll go straight to the pirate bay.
Having a browser and the ability to run CLI tools, plus understanding English well enough to know that "Give me some Beatles" means using its audio skill, makes it a vastly better Alexa.
It only costs me like $180 a month in API credits (now that they banned using the max plan), so seems okay still.
swiftcoder 2 hours ago
> It only costs me like $180 a month in API credits (now that they banned using the max plan), so seems okay still.
I have a hard time imagining how much better Alexa would have to be for me to spend $180/month on it...
retired 2 hours ago
> It only costs me like $180 a month in API credits
In The Netherlands you can get a live-in au-pair from the Philippines for less than that. She will happily play your Beatles song, download the Titanic movie for you, find your Gorillaz song and even cook and take care of your children.
It's horrible that we have such human exploitation in 2026, but it does put into perspective how much those credits are if you can get a real-life person doing those tasks for less.
tikotus 2 hours ago
I don't want to be judgemental, but I do find it funny that you're paying $180 for this convenience, and use it to pirate movies.
puelocesar 2 hours ago
$180 a month for a PA is a lot of money. But I guess each person has their own priorities. I mean, I could pay for a very fancy gym at that price instead of the shitty popular one I go to, which would probably improve my well-being much more than asking it to play Gorillaz.
bluedel 2 hours ago
Am I right to be a little concerned by the phrase "it'll go straight to the pirate bay"?
Not to be a narc or anything, but is OpenClaw liable to just perform illegal acts on your behalf just because it seemed like that's what you meant for it to do?
jappgar 18 minutes ago
You're spending $180 a month on tokens and still refusing to buy media like Titanic?
qsera an hour ago
I have almost the same thing using a network-connected Raspberry Pi and no AI.
Hendrikto an hour ago
$180/month to queue playlists does not “seem okay” at all. We must be living in different worlds.
pizza234 12 minutes ago
> Is anyone finding value in these things other than VCs and thought leaders looking for clicks and “picks and shovels” folks?
Mostly (but of course, not exclusively), porn for the techies. Receiving a phone notification every time a PR is opened on a project of yours? Exciting or sad, depends on one's outlook on life.
moffkalast 4 minutes ago
I thought emails from github already did that?
vbezhenar 2 hours ago
Many wealthy people use human assistants to offload mundane work.
This is a cheap replacement for ordinary people.
It's going to be big. But probably it's best to wait for Google and Apple to step up their assistants.
piker an hour ago
Yes, and that's because the workflow of those people generally requires managing a crazy, dynamic schedule including travel, meetings, comms, etc. Those folks need real humans with long-term memories and incentives to establish trust for managing these high-stakes engagements. Their human assistants might find these things useful, but there's zero chance Bill Gates is having an AI schedule his travel plans or draft his text messages.
OTOH, this isn't an issue for "ordinary people". They go to work, school, children's sports events, etc. If they had an assistant for free, most of them would probably find it difficult to generate enough volume to establish the muscle memory of using them. In my own professional life, this occurred with junior lawyers and legal assistants--the juniors just never found them useful because they didn't need them even though they were available. Even the partners ended up consolidating around sharing a few of them for the same reason.
Down in this thread someone mentions it being an advanced Alexa, which seems apt. Yes, a party novelty, but not useful enough to be top of mind in the everyday workflow.
andai 43 minutes ago
I'm not sure how solvable it is. It only takes one screw up to ruin the reputation, and a screw up is basically guaranteed.
The tech has existed for a while but nobody sane wants to be the one who takes responsibility for shipping a version of this thing that's supposed to be actually solid.
Issues I saw with OpenClaw:
- reliability (mostly due to context mgmt), esp. memory, consistency. Probably solvable eventually
- costs, partly solvable with context mgmt, but the way people were using it was "run in the background and do work for me constantly" so it's basically maxing out your Claude sub (or paying hundreds a day), the economics don't work
- you basically had to use Claude to get decent results, hence the costs (this is better now and will improve with time)
- the "my AI agent runs in a sandboxed docker container but I gave it my Gmail password" situation... (The solution is don't do that, lol)
See also simonw's "lethal trifecta":
>private data, untrusted content, and external communication
https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
The trifecta (prompt injection) is sorta-kinda solved by the latest models, from what I understood. (But maybe Pliny the Liberator has a different opinion!)
eloisant an hour ago
$180 a month is huge for "ordinary people".
So I guess that leaves the in-between people who don't care about spending $180 every month but don't have any personal staff yet or even access to concierge services.
lionkor 24 minutes ago
Those human assistants can be held accountable.
LeCompteSftware 27 minutes ago
The problem is that if you're wealthy enough to hire someone to do your errands, those errands likely aren't very mundane - the exception is a socialite giving their friend a low-effort job, but executive assistants are paid well because their jobs are cognitively demanding.
OTOH a lower-middle-class Joe like me really does have a lot of mundane social/professional errands, which existing software has handled just fine for decades. I suppose on the margins AI might free up 5 minutes here or there around calendar invites / etc, but at the cost of rolling snake eyes and wasting 30 minutes cleaning up mistakes. Even if it never made mistakes, I just don't see the "personal assistant" use case really taking off. And it's not how people use LLMs recreationally.
Really not trying to say that LLM personal assistants are "useless" for most people. But I don't think they'll be "big," for the same reason that Siri and Alexa were overhyped. It's not from lack of capability; the vision is more ho-hum than tech folks seem to realize.
littlecranky67 38 minutes ago
I can see value in a smarter email-inbox sorting algorithm - but only because all major players (except Google, which I don't trust with my mail) have abandoned trainable Bayesian email filtering. This was standard in 2005, even in clients as basic as the Opera browser, but somehow we lost this technology along the way.
easygenes 14 minutes ago
I was an original Thunderbird pre-1.0 user (from 2003), and prior to that Netscape Mail, and am quite certain it has had Bayesian spam filtering all this time, at least since the late ‘90s. That was a headline feature in the early days. My first email account used POP3 through a shared web host for my own domain in that era.
Edit: Yes it’s still there https://support.mozilla.org/en-US/kb/thunderbird-and-junk-sp...
Terr_ 17 minutes ago
I can't recall the name, but I vaguely remember a Bayesian spam filter for arbitrary POP3 accounts in the 2000s that had a local web frontend, and how excited I was at its effectiveness.
I believe that the shift from "my one computer" to multiple clients (computer + phone + webmail) probably has something to do with it. Even with IMAP sharing state, you still don't have a great way to see and control the filtering, except by moving things in/out of spam folders.
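For reference, the core of what those clients did fits in a screenful of code. Here is a toy naive-Bayes filter as an illustrative sketch (not any particular client's implementation; the word-splitting and smoothing choices here are simplifications):

```python
import math
from collections import Counter

class BayesFilter:
    """Toy naive-Bayes spam filter in the spirit of 2000s-era mail clients."""

    def __init__(self):
        self.spam = Counter()   # word counts seen in spam
        self.ham = Counter()    # word counts seen in ham
        self.nspam = 0
        self.nham = 0

    def train(self, text, is_spam):
        words = text.lower().split()
        if is_spam:
            self.spam.update(words)
            self.nspam += 1
        else:
            self.ham.update(words)
            self.nham += 1

    def spam_score(self, text):
        # Sum per-word log-likelihood ratios, with add-one smoothing.
        score = math.log((self.nspam + 1) / (self.nham + 1))
        for w in text.lower().split():
            p_spam = (self.spam[w] + 1) / (sum(self.spam.values()) + 2)
            p_ham = (self.ham[w] + 1) / (sum(self.ham.values()) + 2)
            score += math.log(p_spam / p_ham)
        return score  # > 0 leans spam, < 0 leans ham

f = BayesFilter()
f.train("cheap pills buy now", True)
f.train("meeting notes attached", False)
print(f.spam_score("buy cheap pills") > 0)   # classified as spam
```

The key property, and why it worked so well per-user, is that the model is nothing but your own mailbox's word counts: moving a message in or out of the junk folder retrains it instantly.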
jstummbillig an hour ago
> letting an AI into my comms
Idk, it's strange for me to think of it that way. It's tech. If it does something useful, that's cool.
Data protection is always a consideration. I just don't consider an LLM to be a special case or a person, the same way I don't have strong feelings about "AI" having been applied in Google search since forever. I don't have special feelings or get embarrassed at the thought of an LLM touching my mail.
Right now, for me, agentic coding is great. I have a hard time seeing a future where the benefits we experience there will not be more broadly shared. Explorations in that direction are how we get there.
piker 35 minutes ago
My issues aren’t really with privacy so much as what the failure modes look like, and, more fundamentally, with becoming a passenger to my own life.
rowanG077 40 minutes ago
The problem for me is not the LLM reading it. The problem is that the company behind it can most likely recover the sessions. That is a problem since they could share it with whomever they want. Even if they are fully incorruptible, it's also not uncommon that they simply get hacked and all this data ends up on the open market.
ZeroGravitas 2 hours ago
I see the appeal, but I also see the risks.
If you ignore the risks I don't see why it's hard to see value.
The AI can read all your email, that's useful. It can delete them to free up space after deciding they are useless. It can push to GitHub. The more of your private info and passwords you give it the more useful it becomes.
That's all great, until it isn't.
Putting firewalls in place is probably possible and obviously desirable but is a bit of a hassle and will probably reduce the usefulness to some degree, so people won't. We'll all collectively touch the stove and find out that it is hot.
surgical_fire 6 minutes ago
No.
But I am someone who, for example, dislikes home automation. You know that thing where you ask Alexa to open your curtains? I think that is cringe af.
Maybe there's an overlap with the crowd that likes that.
andai an hour ago
It's pretty much just Claude Code, except hooked up to your Telegram / WhatsApp / iMessage.
I don't know why they don't make an official integration for it. Probably cause they're already out of GPUs lol
pjmlp an hour ago
Same here; I care to the extent I am obligated to, and to stay relevant for finding a job.
_pdp_ 3 hours ago
There is value but it is hard to discover and extract outside of a few known areas - like coding, etc.
piker 3 hours ago
Yes, I can see the (potential) value in working with agents in software development. The “claw” movement I understood to suggest value in less constrained access to my inbox, personal messages, calendar, etc., like some sort of PA. It’s hard to quantify how much damage a bad PA can do to someone’s personal and professional life, so if my understanding is correct, this seems like a dead end.
onchainintel 2 hours ago
It all depends on what you do, aka your use case. If you're in the content creation business, which is part of my responsibilities, then yes, it has been massively helpful. For other roles, I can see absolutely no use case or benefit. Context matters, like with everything.
rimliu an hour ago
I am also surprised by the number of people willing to outsource their lives.
mathgladiator 2 hours ago
Agent environments like OpenClaw are in the toy phase, and OpenClaw is teaching people how to build things with agents in a toy-like and unreliable way. I used my understanding of OpenClaw to build scalable, secure, auditable agent infrastructure in my platform, so that I can build products other people can use.
bayindirh 2 hours ago
We had better agent infrastructures (namely JADE) back in the day. I worked with them, and now these things look like flimsy 50¢ plastic toys to me, too.
dankobgd an hour ago
no, it's only for scammers
iugtmkbdfil834 2 hours ago
Eh, buddy says he uses them for his network and, apparently, some light IT maintenance for his family members. So far it seems to be working for him. I am not that brave.
stared 2 hours ago
I don’t get this OpenClaw hype.
When people vibe-code, usually the goal is to do something.
When I hear about people using OpenClaw, usually the goal seems to be… using OpenClaw. At the cost of a Mac Mini, safety (deleting emails and so on), and security (the LiteLLM attack).
Someone 12 minutes ago
In the early 1980s, what did people use home computers such as Ataris and Commodore 64s for? Mostly playing games; nerds also used their computer with the goal seeming to be… using their computer.
It wasn’t (only) that, though; they also learned, so that, when people could afford to buy computers that were really useful, there were people who could write useful programs, administer them, etc.
Same thing with 3D printers a decade or so ago. What did people use them for? Mostly tinkering with hard- and software for days to finally get them to print some teapot or rabbit they didn’t need or another 3D printer.
This _may_ be similar, with OpenClaw-like setups eventually getting really useful and safe enough for mere mortals.
But yes, the risks are way larger than in those cases.
Also, I think there are safer ways to gain the necessary expertise.
d0gsg0w00f 44 minutes ago
I have OC on a VPS. So far it's a way for me to play with non-Claude models and try to get them to get OC under control. So far I'm about $200 all in and OC is still not under control. Every few weeks it goes on an ACP bender and blows my credits in hidden sub-agents for no damn reason. I'm determined to break this horse though, it's like a fun video game with a glitchy end boss.
eloisant 44 minutes ago
The idea is to get a virtual personal assistant. Like Siri or Gemini but with access to all of your accounts, computers, etc. (Well whatever you give it access to). Like having a butler with access to your laptop.
From what I understand, the main appeal isn't the end result; it's building that AI personal assistant as a hobby.
SlinkyOnStairs an hour ago
The main "sales pitch" appears to be "You can have the computer do things for you without having to learn how to use a computer" (at the cost of now having to learn how to use a massively overcomplicated and fundamentally unreliable system; It's just an illusion of ease of use.)
The thread's linked article is about comparing MS-DOS' security, but the comparison works on another level as well: I remember MS-DOS. When the very idea of the home/office computer was new. When regular people learned how to use these computers.
All this pretension that computers are "hard to use", that LLMs are making the impossible possible, it's all ahistoric nonsense. "It would've taken me months!" no, you would've just had to spend a day or two learning the basics of python.
stared 37 minutes ago
I was one of those using MS-DOS (I still remember blue Norton Commander). I didn't understand people mocking it later - it just worked. Enough to run Prince of Persia, Doom, and the like. Or edit text files. (In my defense, I was only ~7 years old back then.)
leonidasrup an hour ago
OpenClaw, the ultimate arbitrary code execution
thenthenthen an hour ago
To me, OpenClaw sounds like a software click farm?
pantulis 20 minutes ago
This weekend I installed Hermes on my computer. My M4 Max Studio started spinning its fans as if it wanted to fly, so I went for some cloud-hosted models. The thing works as advertised, but token consumption is through the roof. Of course, YMMV depending on the LLM you choose.
But my main takeaway is that from the security standpoint this is a ticking bomb. Even under Docker, for these things to be useful there is no way around giving them credentials and permissions that are stored on your computer, where they can be accessed by the agent. So, for the time being, I see Telegram, my computer, the LLM router (OpenRouter), and the LLM server as potential attack/exfiltration surfaces. Add to that uncontrolled skills/agents of unknown origin. And to top it off, don't forget that the agent itself can malfunction and, say, delete all your email inboxes by mistake.
Fascinating technology but lacking maturity. One can clearly see why OpenAI hired Clawdbot's creator. The company that manages to build an enterprise-ready platform around this wins the game.
repelsteeltje 2 hours ago
One could argue that the discussion is once again about tech debt.
Both OpenClaw and MS-DOS gained a lot of traction by taking shortcuts, ignoring decades of lessons learned, and delivering now what might otherwise have been ready next year. MS-DOS (or its QDOS predecessor) was meant to run on "cheap" microcomputer hardware and appeal to tinkerers. OpenClaw is meant to appeal to YOLO/FOMO sentiments.
And of course, neither will be able to evolve to its eventual real-world context. But for some time (much longer than intended), that's where it will be.
TeMPOraL 2 hours ago
OpenClaw was an inevitability. An obvious idea that predates LLMs. It took this long for models and pricing to catch up. As much as I dislike this term, if there's one clear example of "Product Model Fit", it's OpenClaw - well, except that arguably what made it truly possible was subscription pricing introduced with Claude Code; before, people were extremely conservative with tokens.
But the point is, OpenClaw is just the first that got lucky and went viral. If not for it, something equivalent would have. Much like LangChain in the early LLM days.
leonidasrup an hour ago
OpenClaw, the ultimate example of Facebook's motto "Move Fast and Break Things"
Schlagbohrer an hour ago
It worked to launch the creator into a gig at OpenAI.
Similar YOLO attitude to OpenAI's launch of modern LLMs while Google was still worrying about all the legal and safety implications. The free market does not often reward conservative responsible thinking. That's where government regulation comes in.
Earw0rm 31 minutes ago
MS-DOS and similar single-user OSes were not originally designed for networked computers with persistent storage. Different set of constraints.
nryoo 2 hours ago
$180/month to control your lights and music. A Raspberry Pi + Home Assistant does this for $0/month and doesn't exfiltrate your home network topology to a third-party API. The value proposition only makes sense if your time is worth more than your privacy.
UqWBcuFx6NV4r 2 hours ago
This comparison is dishonest, and you know that it is. This is coming from someone that uses Home Assistant and wouldn’t touch OpenClaw with a 10 foot pole. If I had a horse in this race it’d be your horse, but to pretend that these achieve the same goals is just… not in the spirit of an actual discussion.
albatrosstrophy an hour ago
Kindly elaborate? Coming from someone who still uses AI mainly to draft emails and a Raspberry Pi as a sandboxed automation project.
ymolodtsov 21 minutes ago
I run OpenClaw on a $4 VPS with read-only access to most of the accounts. Just this morning I asked it to confirm how exactly our company is paying for a particular service and whether we ever switched to the vendor directly. In about 30s it found all the necessary emails and provided me with a timeline.
It's like your actual assistant. Most of this can now be done inside ChatGPT/Claude/Codex. Their only remaining problem for certain agentic things is being able to run them remotely. You can set up Telegram with Claude Code, but it's somehow even more complicated than OpenClaw.
saidnooneever 21 minutes ago
DOS didn't have certain protections because the hardware it targeted didn't have them. UNIX on the same machines had no such protections either. On the 8086 there were no CPU rings, no virtual memory, and no other features to help there.
Memory isolation is enforced by the MMU. It is not software.
Maybe you were thinking of Linux, which came later and landed in a soft 32-bit x86 bed with CPU rings and page tables/virtual memory ("protected mode", named for that reason...).
That being said, OpenClaw is criminally bad - but as such, it fits well in our current AI/LLM ecosystem.
nopurpose 2 hours ago
I agree that sandboxing the whole agent is inadequate: I am fine sharing my GitHub creds with the gh CLI, but not with npm. More granular sandboxing and permissions are what I'd like to see, and this project seems interesting enough to take a closer look.
I am not interested in the "claw" workflow, but if I can use it for a safer "code" environment, it's a win for me.
mkesper 2 hours ago
When the agent uses your GH credentials to nuke all your projects or put out a lot of crap, this separation will not save you.
nopurpose an hour ago
Whitelisting `gh` args should solve it. Even opencode's primitive permission system allows that.
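As a sketch of what that could look like (the allowlist contents are illustrative, and opencode's actual permission syntax differs): a thin wrapper that only forwards an allowlist of read-only `gh` subcommands to the real CLI:

```python
import subprocess

# Illustrative allowlist: read-only gh subcommands the agent may run.
ALLOWED = {
    ("pr", "view"), ("pr", "list"),
    ("issue", "view"), ("issue", "list"),
    ("repo", "view"),
}

def is_allowed(args):
    """True if args start with an allowlisted (command, subcommand) pair."""
    return len(args) >= 2 and (args[0], args[1]) in ALLOWED

def run_gh(args):
    """Forward args to the real gh CLI, but fail closed on anything
    not explicitly allowlisted (e.g. `gh repo delete`)."""
    if not is_allowed(args):
        raise PermissionError(f"gh {' '.join(args)} not in allowlist")
    return subprocess.run(["gh", *args], capture_output=True, text=True)

print(is_allowed(["pr", "list"]))        # True
print(is_allowed(["repo", "delete"]))    # False
```

The agent then gets this wrapper on its PATH instead of `gh` itself, so the credential stays usable for reads but a destructive subcommand never reaches the network.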
teach 2 hours ago
This isn't especially related to the article, but when I was at university my first assembler class taught Motorola 680x0 assembly. I didn't own a computer (most people didn't), but my dorm had a single Mac that you could sign up to use, so I did some assignments on that.
Problem is, I was just learning, and the Mac was running System 7. Which, like MS-DOS, lacked memory protection.
So, one backwards test at the end of your loop and you could -- quite easily -- just overwrite system memory with whatever bytes you like.
I must have hard-locked that computer half a dozen times. Power cycle. Wait for it to slowly reboot off the external 20MB SCSI HDD.
Eventually I took to just printing out the code and tracing through it instead of bothering to run it. Once I could get through the code without any obvious mistakes I'd hazard a "real" execution.
To this day, automatic memory management still feels a little luxurious.
falense 2 hours ago
Very cool project! I am working on something similar myself; I call mine TriOnyx. It's based on Simon Willison's lethal trifecta. You get a star from me :D
tomasol 43 minutes ago
I believe the codegen must be separated from the runtime. Every time you ask AI for a new task, it must be deployed as a separate app with the least amount of privileges possible, potentially with manual approvals as the app is executing. So essentially you need a workflow engine.
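A minimal sketch of that idea, assuming the generated task is plain Python (real isolation would need containers or VMs; a stripped environment only keeps inherited secrets like API keys out of the child process):

```python
import os
import subprocess
import tempfile

def run_generated_task(code: str, timeout: int = 10):
    """Run AI-generated code as a separate process with a stripped
    environment - a sketch of 'deploy each task as its own least-privilege
    app', not a real sandbox."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        return subprocess.run(
            ["python3", path],
            env={"PATH": "/usr/local/bin:/usr/bin:/bin"},  # no inherited secrets
            capture_output=True,
            text=True,
            timeout=timeout,  # a runaway task gets killed, not billed forever
        )
    finally:
        os.unlink(path)

result = run_generated_task("print(2 + 2)")
```

A workflow engine would add the missing pieces on top of this: per-task credentials, audit logs of every run, and a manual-approval gate before any step with side effects.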
LudwigNagasena an hour ago
And I remember OSes today, 1 year ago, 5 years ago, 10 years ago, etc. Security was always a problem. People blindly delegate admin privileges to scripts and programs from the internet all the time. It’s hard to make something secure and usable at the same time. It’s not like agent harnesses suddenly broke all adopted best practices around software and sandboxing.
I remember Apple introducing sandboxing for Mac apps and extending deadlines because no one was implementing it. AFAIK, many developers still don't release apps there simply because of how limiting it is.
Ironically, the author suggests installing his software by curl'ing it and piping it straight into sh.
Schlagbohrer 2 hours ago
Why am I totally unable to understand this post? I have been a long-time computer user, but this has way too much jargon for me.
wccrawford an hour ago
There's a difference between using a thing and understanding how it works. There's a lot of stuff in this that references things only hardware and software creators are going to understand, and only if they're deep enough into their craft.
"Interrupts", for example, are an old concept that is rarely talked about anymore until you get into low-level programming. At a high level, you don't even think about them, let alone talk about them.
khalic 34 minutes ago
cries in rust interrupts
sriku an hour ago
"Fast" is not always a virtue and "efficiency" is not always the only consideration.
trilogic 2 hours ago
Great article. I've been skeptical since the beginning of these Python "CLI" agents. I'd been looking for a local, AI-driven agentic GUI that offers real privacy but couldn't find one anywhere. Finally, what we'd call a real local CLI-agent pipeline, AI-driven with a llama.cpp engine, is done. Just pure bash and C++: model isolated, no HTTP, no Python, no API, no proprietary models. There is a native version (in C++) and a community version in Electron. Is Electron good enough to protect users, wrapping all the rest? This is exciting.
TacticalCoder 4 minutes ago
> curl-pipe-sh as well. The installer verifies the release signature with ssh-keygen against an embedded key, fail-closed on every failure path. The installer’s own SHA is pinned in the README for readers who want to check the script before piping.
Packages shipping as part of Linux distros are signed. Official Emacs packages (but not installed by the default Emacs install) are all signed too.
I thankfully see some projects released outside of distros that are signed by the author's private key. Some of these keys I have saved (and archived) for years.
I've got my own OCI containers automatically verifying hashes signed with authors' known past public keys (i.e. I don't necessarily blindly trust a brand-new signing key the way I trust one I know the author has been using for 10 years).
Adding SHA hash pinning to "curl into bash" is a first step, but it's not sufficient.
Properly shipped software isn't just pinning hashes in shell scripts that are then served from pwned Vercel sites, because the attacker can "pin" anything he wants on a pwned JavaScript site.
Proper software releases are signed. And they're not "signed" by the 'S' in HTTPS, as in "that Vercel-compromised HTTPS site is safe because there's an 'S' in HTTPS".
Is it hard to understand that signing a hash (which you can then pin) with a private key that's on an airgapped computer is harder to attack than an online server?
We see major hacks nearly daily now. The cluestick is hammering your head, constantly.
When will the clue eventually hit the curl-bashers?
Oh wait, I know, I know: "It's not convenient" and "Buuuuut HTTPS is just as safe as a 10-year-old private key that has never left an airgapped computer".
Here, a fucking cluestick for the leftpad'ers:
https://wiki.debian.org/Keysigning
(btw Debian signs the hash of testing release with GPG keys that haven't changed in years and, yes, I do religiously verify them)
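To make the distinction concrete, here's a minimal sketch of just the hash-pinning half (the URL usage and the pin value are placeholders - the pin shown is the SHA-256 of empty input). The point of the argument above is that this check is only as trustworthy as the channel the pin came from, which is why an offline signing key matters:

```python
import hashlib
import sys
import urllib.request

# Placeholder pin (SHA-256 of empty input). In practice this must come
# from an out-of-band, attacker-inaccessible channel (e.g. a signed
# release note), NOT from the same site that serves the script - an
# attacker who controls the site can "pin" anything they want.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def fetch_and_verify(url, pinned):
    """Download bytes and fail closed unless their SHA-256 matches the pin."""
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != pinned:
        sys.exit(f"hash mismatch: got {digest}")  # fail closed, never run
    return data

# Only after verification would the bytes ever be handed to a shell, e.g.:
# subprocess.run(["sh"], input=fetch_and_verify(installer_url, PINNED_SHA256))
```

Signature verification (e.g. a detached signature checked against a long-held public key) replaces the fragile "where did the pin come from?" question with "do I trust this key?", which is exactly the property a compromised web host can't forge.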
pointlessone 2 hours ago
Wow. Much security.
I too remember DOS. Data and code finely blended and perfectly mixed in the same universally accessible block of memory. Oh, wait… single context. Never mind.