The future of everything is lies, I guess: Where do we go from here? (aphyr.com)
374 points by aphyr 6 hours ago
lukev 6 hours ago
This is a must-read series of articles, and I think Kyle is very much correct.
The comparison to the adoption of automobiles is apt, and something I've thought about before as well. Just because a technology can be useful doesn't mean it will have positive effects on society.
That said, I'm more open to using LLMs in constrained scenarios, in cases where they're an appropriate tool for the job and the downsides can be reasonably mitigated. The equivalent position in 1920 would not be telling individuals "don't ever drive a car," but rather extrapolating critically about the negative social and environmental effects (many of which were predictable) and preventing the worst outcomes via policy.
But this requires understanding the actual limits and possibilities of the technology. In my opinion, it's important for technologists who actually see the downsides to stay aware and involved, and even be experts and leaders in the field. I want to be in a position to say "no" to the worst excesses of AI, from a position of credible authority.
baal80spam 5 hours ago
> Just because a technology can be useful doesn't mean it will have positive effects on society.
You say it in a way that makes it sound like automobiles don't have a positive effect. I don't agree - they have some negative effects, but overall they're a vast net positive for everyone.
armonster 5 hours ago
Their negative effects are much more vast, subtle, and cultural. You could say many of the broad and widespread mental health issues we have in the US are the result of automobiles leading to suburbanization and thus the isolation of people. It has created an expensive barrier to entry for existing in society and added a ton of friction to doing anything and everything, especially with people. That's not even getting into the climate effects.
The upsides of automobiles generally all exist outside of the 'personal automobile', e.g. logistics. These upsides and downsides don't need to coexist. We could reap the benefits without needing to suffer for it, but here we are.
I've always lived in walkable cities. I don't own a car and with pollution, congestion, accident risk, pavement obstruction, etc. other people's cars unequivocally make my life worse.
We can argue about whether this is a good trade off, but the claim that cars make everyone's life better is straightforwardly false.
throwway120385 5 hours ago
They have a net positive effect for every owner, except that they seem to facilitate and encourage ways of living that require automobile ownership as a condition of adulthood in most places. So I'm not entirely sure they're a vast net positive in every value system. In yours, yes, but not in mine.
MisterTea 5 hours ago
> I don't agree - they have some negative effects
The problem is we are numb to it. 40,000+ people are killed in car accidents every year in the USA alone. Wars are started over oil and accepted by the people so they can keep paying less at the pump. Microplastics enter the environment each day, along with particulate matter from brakes and exhaust. Speaking of exhaust: global warming. Even going electric just shifts the problems, as we need to dig up lithium, the new oil. We still have to drill for oil for plastics, and for metal refining, recycling, and fabrication.
alnwlsn 5 hours ago
I think it's most obvious in hindsight; it was probably a long time (some decades) before cars were understood to have much of a negative effect at all. Nobody* thought much about air pollution (even adding lead to the gasoline) or climate effects, or what would happen once cities were built up enough to be dependent on cars, or what happens when gas or cars get expensive.
All they saw was that trips that once took a day could now be done in an hour and produced no manure, which meant you could suddenly reach many more places. What's not to like? A Model T was cheap, and you didn't even need to worry about insurance or having a driver's license. Surely nobody would drive so carelessly as to crash.
*well, not technically nobody, but nobody important.
spprashant 4 hours ago
The positive effects were immediate and measurable. The negative effects are delayed, and hard to quantify without all the advances in climate research since then. If everyone in 1920 had known that 100 years later there would be a climate crisis to reckon with, perhaps a few things would have changed along the way.
Today we have a much better understanding of the world, so we have the means to think ahead about the negative effects of LLMs and course-correct if needed.
Miraste an hour ago
It's not at all clear whether automobiles were a net positive. They are more or less solely responsible for climate change (even emissions not directly from motor vehicles wouldn't be possible without them), which may prove to be the worst mistake in the history of technology.
rdiddly 3 hours ago
The benefits accrue to the owners of the vehicles. The negative effects are externalized onto everybody else.
mason_mpls 3 hours ago
one trip to Amsterdam will show you how bad our use of cars has been for us
archagon 2 hours ago
I'd say commercial automobiles probably have a net positive effect. (Though their impact on pollution and climate change can't be discounted.) But daily life in walkable and public transitable European cities is so, so much nicer and healthier than in most American cities. I'd trade ubiquitous personal automobiles for that in a heartbeat.
intended 4 hours ago
No - as a society we cannot say that it's a "vast net" positive. The externalities that harm the commons are not accounted for.
We (or lobbyists) resist having carbon costs included in the prices we pay at the pump.
Edit: More transportation is good; I am not throwing the baby out with the bathwater, just that our accounting for costs makes things look better than they are.
kraquepype 5 hours ago
I'd argue cars are a net negative for everyone. The article goes over this pretty well.
The automobile was a revolutionary tool, but I think it has been overprescribed as a solution for the problem of transportation.
The grips of capitalism and consumerism have allowed for automobiles to become a requirement for living nearly everywhere in America except for the densest of areas.
I love cars, I enjoy working on them, driving them, the way they look, the way they sound and feel. They do offer a freedom that is unparalleled, and offer many benefits to those who truly need those guarantees.
Ultimately, to me they are a symbol of toxic individualism. I would be happy if we could move on from them as a society.
Glemllksdf an hour ago
I think he is too pessimistic. A tool is a tool, and if AI progresses without hitting a ceiling, I can see a potential future of a society that explores space.
Musk's SpaceX keynote was ridiculous, don't get me wrong, but we will be able to see AI progress over the next 5 years, which will give us some kind of gut feeling for where the journey can go.
AI also solves another problem: compute. It was clear that we want some kind of compute, but it's like with 4K; we've had 4K for ages now, yet it is not the default resolution on all displays sold. We stopped pushing the boundaries because the investment isn't there. People don't bother too much with it.
With AI, the richest companies and people want to see what happens, which pushes the envelope a lot faster and pushes us to find solutions.
This AI compute based on ML/neural networks can also be used for physics simulation, protein folding, and everything else.
Stopping technology is not an option and not a solution. Education is. We need to educate people.
ForHackernews 5 hours ago
All blocked in the UK, sadly.
throwanem 4 hours ago
He's gay, and being gay online contravenes the UK Online Safety Act. Complain to your legislators.
yubblegum 5 hours ago
I fear that, outside of cataclysmic global warfare or some sort of Butlerian jihad (which amounts to the same thing), this genie is not going back into the bottle.
This tech is 100% aligned with the goals of the 0.001% that own and control it. Almost all of the negatives cited by Kyle and the like-minded (such as myself) are in fact positives for them, in the context of massive population reduction to eliminate "useless eaters" and of technological control over the "NPCs" of the world who remain, since they will likely be programmed by their peered AI, which will do the thinking for them.
So what to do entirely depends on whether you feel we are responsible to the future generations or not. If the answer is no, then what to do is scoped to the personal concerns. If yes, we need a revolution and it needs to be global.
ernst_klim 5 hours ago
> to eliminate "useless eaters"
It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin to saying that a random number generator can make decisions and replace people.
It's just an automation tool, and just like every automation tool before it, it will create more jobs than it destroys. All the CEOs' talk about labor replacement is a fuss, a pile of lies to justify layoffs and a worsening financial situation.
[1] https://www.pcmag.com/news/meta-security-researchers-opencla...
MarcelOlsz 4 hours ago
People have this misconception that first it was one way, and then <tech was released>, and they'll wake up and suddenly it's another. It's a slow creep. 10 years ago there were 5 of us on a team, each responsible for something specific. Now I can do all of that. Teams and companies will downsize. How do you see AI creating more jobs? (I need some hope right now lol)
MisterTea 3 hours ago
> It can't. It can't even deal with emails without randomly deleting your email folder [1].
And early cars were expensive, dangerous, highly unreliable, uncomfortable, belched foul exhaust, and required knowledge of how to drive AND maintain them. We are far, far from that scenario these days.
the_af 3 hours ago
> It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin of saying that random number generator can make decisions and can replace people.
I don't think the comment you're replying to is saying that an evil AI bot will kill people. They are saying something along the lines of: mass job loss doesn't bother the AI companies because in the AI-powered future they envision, population reduction is a positive side effect.
geremiiah 5 hours ago
> This tech is 100% aligned with the goals of the 0.001% that own and control it
If AI is smart enough to replace the 99.999% it's also smart enough to replace the 0.001%.
layer8 5 hours ago
That fact doesn’t prevent the 0.001% from continuing to control it.
acdha 5 hours ago
Yes, but that isn't the question as long as those wealthy people control most of the system: companies aren't going to lose executives; they'll shed the jobs they don't respect. Someone wealthy does not need to accept a bad deal to avoid sleeping on the street. It's everyone who isn't insulated who has to actually compete for work.
yubblegum 5 hours ago
I have given this serious thought over the years. I even have an unfinished novel exactly around that topic.
Energy. The key is controlling their access to energy.
archagon 2 hours ago
The 0.001% has a controlling stake in AI, so they're in the clear.
The 99.999% needs to assert their controlling stake in the technology. I don't know what this looks like. Maybe ubiquitous unionizing, coupled with a fully public and openly-trained LLM.
worace 4 hours ago
IMO this is a common trap. Certainly there's no boundary of cognitive capability that separates capitalist elites from those below them in terms of an AI's ability to outperform them.
But that doesn't really matter when we talk about "replacement" because these people don't "do" they simply "own".
They're not concerned about being outpaced at some skill they perform in exchange for money...they just need the productive output of their capital invested in servers/models/etc to go up.
bauerd 5 hours ago
No, because the technology will be used against you.
repelsteeltje 5 hours ago
I'm tempted to (bitterly) point out that feeling responsible for future generations was already off the table decades ago when we decided to ignore our ecological footprints.
mrdependable 3 hours ago
It would be difficult, but not necessarily THAT difficult. With enough pushback from the public, AI would start getting regulated in meaningful ways. The problem is too many people love it, and see no problem with it. Because the momentum and money is on their side, it feels like it is impossible. Maybe things will turn out fine and we will just live in a similar but more depressing future, but if the pro-AI crowd gets bit and changes sides that could be a turning point.
tim333 3 hours ago
The article skips the potential upsides of an AI future - like curing diseases, abundance, merge-type immortality. I'm keen myself, and have nothing to do with the goals of the 0.001%, really. I think future generations will like the above and look back on now the way we look back at medieval dentistry.
nradov 2 hours ago
I have nothing against AI as a technology but the notion of it "curing diseases" is so silly. The limiting factors are largely in fundamental biology research and then human clinical trials. There is no plausible way that LLMs will make those activities 10× faster or cheaper. Hard work still has to be done in the messy real world outside of computers.
mrdependable 3 hours ago
Those upsides are currently just a fantasy and ignore the very real current downsides. They also do not in any way rely on AI to become a reality.
underlipton 5 hours ago
Gonna beat this drum till it breaks:
General strike and bank runs.
Not to collapse the economic system, but to present a credible threat of collapsing the economic system which AI development, as these elites and their platforms know it, relies on. When they're freaking out, we call for negotiations. This only works if people with "secure" livelihoods not just participate, but drive the effort. Getting paid six figures or more in a layoff-proof position? Cool, you need to be the first person walking out the door on May 1st (or whenever this happens), and the first person at the bank counter requesting your max withdrawal.
nradov 5 hours ago
You're free to take a vacation or quit working if you want to. Go ahead.
As for bank runs, no one cares. The big banks no longer need retail customer deposits as a source of capital for fractional reserve lending. Modern bank funding mechanisms are more sophisticated than that.
yubblegum 5 hours ago
Geopolitical realities and considerations require that the effort be synchronized and global. Assume great power X's society revolts, decides to rein in the financial and technological barons and lords, and does away with such things. Meanwhile, great powers Y, Z, etc. are not doing this, and one day the people of X will wake up to the AI drone swarms of those powers taking them over; they're back to square one, and now not even a great power.
Collective humanity needs to think this matter through and take global action. This is the only way I fear, short of natural calamities (act of God) that unplugs humanity from advanced tech for a few generations again.
Ifkaluva 4 hours ago
> layoff-proof position
What? I don’t know anybody who has a layoff-proof position.
jakejoyner 9 minutes ago
I think it is really easy for us to be dogmatic when talking about the future, since believing we know what is going to happen quells our fears. In reality, I think no one knows what is going to happen with AI. We are at a turning point in human history, and it is easy to blame Anthropic's engineers and tell them to quit their jobs, but the reality is that they are probably in the same position you are. There is no one true solution. We do not know if this is going to be analogous to automobiles - we don't know anything. I think it is courteous to think about these things before telling people to quit their jobs.
dang 3 hours ago
Here are the articles in this series that got significant HN discussion (in chronological order for a change):
ML promises to be profoundly weird* - https://news.ycombinator.com/item?id=47689648 - April 2026 (602 comments)
The Future of Everything Is Lies, I Guess: Part 3 – Culture - https://news.ycombinator.com/item?id=47703528 - April 2026 (106 comments)
The future of everything is lies, I guess – Part 5: Annoyances - https://news.ycombinator.com/item?id=47730981 - April 2026 (169 comments)
The Future of Everything Is Lies, I Guess: Safety - https://news.ycombinator.com/item?id=47754379 - April 2026 (180 comments)
The future of everything is lies, I guess: Work - https://news.ycombinator.com/item?id=47766550 - April 2026 (217 comments)
The Future of Everything Is Lies, I Guess: New Jobs - https://news.ycombinator.com/item?id=47778758 - April 2026 (178 comments)
* (That first title was different because of https://news.ycombinator.com/item?id=47695064 - as you can see, I gave up.)
p.s. Normally we downweight subsequent articles in a series because avoiding repetition of any kind is the main thing that keeps HN interesting. But we made an exception in this case. Please don't draw conclusions from that since we'll probably get less series-ey, not more, after this! Better to bundle into one longer article.
aphyr 3 hours ago
If you enjoyed reading these and would like more: very few folks read sections 2, 4, or 6. They might be up your alley:
2. Dynamics - https://aphyr.com/posts/412-the-future-of-everything-is-lies...
4. Information Ecology - https://aphyr.com/posts/414-the-future-of-everything-is-lies...
6. Psychological Hazards - https://aphyr.com/posts/416-the-future-of-everything-is-lies...
GeoAtreides 2 hours ago
Why would a series of articles imply repetition?
Let's presume there's a series on re-making the antikythera mechanism:
1. Metallurgy: finding, mining and smelting the ore
2. Building the tools (files, molds, etc)
3. Designing the mechanism
4. Making the parts (gears, bearings, etc)
Am I wrong, or is there no repetition, except maybe in the title and calling it a series? Why reject parts 2, 3, and 4?
dang 27 minutes ago
The overall topic is the same, even in the hypothetical sequence you mention. Keep in mind that even if an article series is strictly partitioned into distinct parts, the discussion threads mostly won't be - all the different aspects will blend together, which means the threads will be more like "the same soup over and over" than "one about metallurgy, one about design, etc."
(Edit: I just noticed that strbean already made this point in the sibling comment!)
Also: usually the splitting into a series is somewhat artificial. In the worst cases, people try to make the segments be like TV episodes with cliffhangers, to push you to the next bit. That's a poor fit for HN. But even when they don't, to get the full "meal" you still have to go through all the parts. Few people do that, and the threads as a whole never do. This makes it less interesting and satisfying.
But there can be exceptions, and (ironically?) featuring an occasional exception mixes things up and so reduces repetitiveness! The trouble is that once people see one exception, they immediately expect/want others, pushing things back into a repetitive sequence and making the site less interesting again. It's a bit like telling the same joke twice in a row—the interest is all in the first telling.
strbean an hour ago
Guess: there is likely some repetition in articles in a series, but there is a ton in the discussion here, and that is what HN wants to avoid. Discussion on a link that bundles together the parts of a series helps avoid excessive rehashing in the comment sections.
AdamH12113 5 hours ago
This reminds me a bit of the ending of In the Beginning Was the Command Line:
> The people who brought us this operating system would have to provide templates and wizards, giving us a few default lives that we could use as starting places for designing our own. Chances are that these default lives would actually look pretty damn good to most people, good enough, anyway, that they'd be reluctant to tear them open and mess around with them for fear of making them worse. So after a few releases the software would begin to look even simpler: you would boot it up and it would present you with a dialog box with a single large button in the middle labeled: LIVE. Once you had clicked that button, your life would begin. If anything got out of whack, or failed to meet your expectations, you could complain about it to Microsoft's Customer Support Department. If you got a flack on the line, he or she would tell you that your life was actually fine, that there was not a thing wrong with it, and in any event it would be a lot better after the next upgrade was rolled out. But if you persisted, and identified yourself as Advanced, you might get through to an actual engineer.
> What would the engineer say, after you had explained your problem, and enumerated all of the dissatisfactions in your life? He would probably tell you that life is a very hard and complicated thing; that no interface can change that; that anyone who believes otherwise is a sucker; and that if you don't like having choices made for you, you should start making your own.
grvdrm 5 hours ago
Two years ago, I was enjoying a drink with my wife, her friend, a very senior female VC partner, and another friend.
Somehow we talked AI in some depth, and the VC at one point said (about AI): “I don’t know what our kids are going to do for work. I don’t know what jobs there will be to do.”
That same VC invests in AI companies and by what I heard about her, has done phenomenally well.
I think about that exchange all the time. Worried about your own kids but acting against their interests. It unsettled me, and Kyle’s excellent articles brought that back to a boiling point in my mind.
Edit: are->our
shimman 3 minutes ago
I really hope they increase taxes and stop letting VC firms gamble with pension funds. These people shouldn't have their current jobs already, and you're telling me they're also dictating how technology is being shaped in the country as well?
Unai 21 minutes ago
On the other hand, shouldn't it be the objective of humanity to not HAVE to work for the most basic survival and to fit into society?
Not that we're in any way on that path, of course, with the people making the working machines also accumulating all the wealth. But still, there's something intrinsically good about automation, even when the system is not suited for it.
furyofantares an hour ago
There's plenty of things you can be simultaneously worried and optimistic about, and I find this is constantly true of parenting.
I will encourage my kid to gain independence, but of course I'm worried about it! The fact that there is uncertainty in her independence and that I can imagine bad outcomes does not mean I'm working against her interest by encouraging it.
"I don't know what jobs there will be to do" is a statement of uncertainty, and, given how you are relaying it, there must have been fear there as well. But it doesn't seem like it's a statement that the world will be worse. You can be fearful and hopeful at the same time, and fear tends to be the stronger of the two, and come out more strongly, again especially in parenting I find, even if you find the hopeful outcomes more likely.
denismenace 3 hours ago
> Worried about your own kids but acting against their interests.
Ridiculous. You're not acting against their interests by amassing wealth from a technology that will happen with or without you.
steve_adams_86 9 minutes ago
But what if people putting their energy into ensuring society adapts to the technology safely and positively would be better than focusing on finding ways to capitalize on whatever happens to occur instead?
I'm not saying one person can do that alone, but if we collectively believe we should focus on capitalization instead, then there's no one present to influence a more constructive, pro-social, sustainable course for society.
So I don't think it's ridiculous to think it's acting against their interests. Money won't get your kids very far if the thing that made you wealthy also pulled the rug from under them. There needs to be more of a strategy than capital.
wrs 3 hours ago
Assuming “phenomenally well” means what it says, the conversation would have suddenly gotten a lot more real if she had said that more precisely: “I don’t know what your kids are going to do for work.”
fnimick 3 hours ago
Yeah. Her kids will be fine with generational wealth. Everyone else's - not so much.
This is the problem in a nutshell - people are happy to do things they know are harmful for personal profit.
grvdrm an hour ago
Totally. And yes you got it.
nradov 2 hours ago
At the beginning of the industrial revolution we didn't know what people would do for work but we eventually figured it out. Human demands are effectively infinite so there will always be work for other humans to satisfy those demands. The transition period may be disruptive.
grvdrm an hour ago
I agree. Her statement in pure literal terms is quite negative whereas the reality may be quite different. Predictions aren’t certainties.
nothinkjustai 4 hours ago
VC’s aren’t exactly known for being both wise and intelligent.
grvdrm 4 hours ago
Perhaps, but it's the concept/contrast presented that stuck with me more than the persona. That said, that VC isn't alone among the many other capital allocators.
throwanem 4 hours ago
And people wonder why I'm doing all I can to ensure that world will never, ever again even pretend to try to find a place for me.
grvdrm 4 hours ago
Correct! Mobile typo - sorry!
andyjohnson0 9 minutes ago
From here in the UK the site just says:
Unavailable Due to the UK Online Safety Act [...] Now might be a good time to call your representatives.
So I fired up a VPN, and it appears to be a personal blog. About AI risks.
The geo-block is kind of a shame, as there appears to be nothing about the site that makes it subject to the OSA. Ah well...
oxag3n 2 hours ago
The "Stop" part should have been expanded.
AI doesn't get most of its value from someone just using it. Here's my personal take on what we should stop doing, starting with the most impactful:
* Cut off the low-entropy sources. This includes open source, articles (yes, ones like the above will feed the machine), and thoughtful feedback (the kind that generates "you are absolutely right" BS).
* Cheer the slop on. After some time fighting slop in my circles, I found it's counter-productive: it wastes my resources while (sometimes) contributing to slop creators. A few months ago it started as a joke, because I thought the problem was too obvious, but instead the slop creator launched a CRM-like app for a local office with client-side authentication and an in-memory (no persistence) backend store. He was rewarded for it at the local meeting. The more stories we have like this, the better.
* Use AI to reply to, review, or interact with slop in any way. Make it an AI-only exchange by prompting without adding any useful information. One example was an email, pages and pages of generated text, asking me to collect some data and send it back. The prompt was "You are {X} and got this email, write a reply".
airza 6 hours ago
I agree with the general sentiment that the structure of society is going to change, but I don't know what the satisfying solution is. It's hard to imagine not participating will work, or even be financially viable for me, for long.
wedemmoez 6 hours ago
I agree. I'm the AI luddite on my team of red team security engineers, but I'm still using it in very limited use cases. As much as I disagree with how the guardrails around AI are being handled, I still need to use it to stay relevant in my field and not get canned.
hootz 6 hours ago
I'm already adding "Agentic Workflows" as a skill in my LinkedIn profile. Cringed hard at that, but oh well...
miltonlost 6 hours ago
I'm using Claude but then refuse to do much cleaning up of what it spews. I'm leaving that for the PR reviewers who love AI and wading through slop. If they want slop, I'll give them the slop they want.
chungusamongus 6 hours ago
That's exactly it. This person does not understand the coercive competition of the market. If you don't use new tech, you are going to be undercut by people who do. And every HR dept is going to expect you to have experience with AI even if the department that's hiring doesn't really use it. If the author's supposed solution to the problem has negative personal consequences, why would you do it? To be nice?
throwanem 5 hours ago
No. I'm doing it because I care more whether I can live with myself than whether I impress people with the name of who I work for. Hence much of my recent comment history here, for example. I don't want any of these people getting the idea they should want me to work with them, either. I do want my name on every industry blacklist I can possibly get it on. Those will eventually be revealed - remember Franklin's dictum, fellas! That shit always comes out in the end - and I look forward to that day with pleased and eager anticipation.
At the moment I'm more looking at menial work for one of the local universities. Money is money, and my needs are small; the work is honest, I still should have a decade or so of physical labor left in me, and it carries the perk of free tuition for the degree I never had time for. I would have the time and energy to write, perhaps, even! And, however badly the people in charge are running things lately, the world will always need someone good at cleaning a toilet. (And I am already pretty good at cleaning a toilet!)
miltonlost 6 hours ago
Because I don't like the feeling my conscience gives me by doing something I think is evil and bad. Some people have moral lines that they won't cross when finding jobs.
If my competitors are filling their flour with sawdust, guess I got to just do the same?
abricq 5 hours ago
> ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call
Imagine starting university now... I can't imagine having learned what I did at engineering school if it weren't for all the time lost on projects and on errors. And I can't really believe I would have had the mental strength to not use LLMs on course projects (or side projects) with deadlines and exams coming, while also wanting to be with friends and enjoy those years of life.
brotchie 2 hours ago
Yeah, I think about this a lot.
Those days of grinding on some grad school maths homework until insight.
Figuring out how to configure and recompile the Linux kernel to get a sound card driver working, hitting roadblocks, eventually succeeding.
Without AI on a gnarly problem: grind grind grind, try different thing, some things work, some things don't, step back, try another approach, hit a wall, try again.
This effort is a feature, not a bug, it's how you experientially acquire skills and understanding. e.g. Linux kernel: learnt about Makefiles, learnt about GCC flags, improved shell skills, etc.
With AI on a gnarly problem: It does this all for you! So no experiential learning.
I would NOT have had the mental strength in college / grad school to resist. Which would have robbed me of all the skill acquisition that now lets me use AI more effectively. The scaffolding of hard skill acquisition means you have more context to be able to ask AI the right questions, and what you learn from the AI can be bound more easily to your existing knowledge.
ericmcer an hour ago
That is part of why I am not... too worried as an engineer?
Like years of manually studying, fixing and reviewing code is experience that only pre ~2020 devs will have.
The intuitive/tacit knowledge that lets you look at code and "feel" that something is off with it cannot really be gained when using Claude Code, it takes just 1000s of hours of tinkering.
It will suck if the job shifts to reviewing and owning whatever an LLM spits out, but I don't really know how effective new juniors are going to be.
ethan_smith 3 hours ago
This is the part that worries me most. It's not really about individual discipline - it's that anyone who chooses to struggle through problems the hard way is now at a measurable disadvantage against peers who don't. The incentive structure actively punishes the behavior that produces deeper understanding.
skyberrys 5 hours ago
The reasons laid out in this article are why it's so important to share how we are using AI and what we are getting in return. I've been trying to contribute towards a positive outcome for AI by tracking how well the big AI companies are doing at being used to solve humanitarian problems. I can't really do most of the suggestions in the article; they seem like a way to slow progress. I don't want to slow AI progress, I want the technology we already have to be deployed for useful and helpful things.
catapart 6 hours ago
the epilogue is what speaks to me most. all of the work I've done with llms takes that same kind of approach. I never link them to a git repo and I only ever ask them to make specific, well-formatted changes so that I can pick up where they left off. my general feelings are that LLMs make the bullshit I hate doing a lot easier - project setup, integrated theming, prepare/package resources for installability/portability, basic dependency preparation (vite for js/ts, ui libs for c#, stuff like that), ui layout scaffolding (main panel, menu panel, theme variables), auto-update fetch and execute loops, etc...
and while I know they can do the nitty gritty ui work fine, I feel like I can work just as fast, or faster, on UI without them than I can with them. with them it's a lot of "no, not that, you changed too much/too little/the wrong thing", but without them I just execute because it's a domain I'm familiar with.
So my general idea of them is that they are "90% machines". Great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it.
of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win. So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought about fiddly perfect text-matching, the better? can I use these things and still be able to use them well (direct them on architecture/structure) if I keep using them and lose grounded concepts of what the underlying work is? good questions, as far as I'm concerned.
egonschiele 6 hours ago
I've been thinking about this a lot recently, and I don't know if it is possible to stop. I've been thinking the most impactful thing would be to create open-source tools to make it easier to build agents on top of open-source models. We have a few open-source models now, maybe not as good as Gemini, but if the agent were sufficiently good, could that compensate?
I think that would democratize some of the power. Then again, I haven't been super impressed with humanity lately and wonder if that sort of democratization of power would actually be a good thing. Over the last few years, I've come to realize that a lot of people want to watch the world burn, way more than I had imagined. It is much easier to destroy than to build. If we make it easier for people to build agents, is that a net positive overall?
thushar10 2 minutes ago
Pareto almost never goes away. Democratization usually improves the baseline (rights, resources, time) but it rarely flattens power distribution. Even with open-source models, power will likely tilt toward those with the most compute or the best feedback loops. So, considering the imbalance inevitable, the discussion should be about ensuring the new 'baseline' for humanity is actually net positive.
miltonlost 6 hours ago
> If we make it easier for people to build agents, is that a net positive overall?
If we make it easier for people to drive and have cars, isn't that a net positive? If we make it easier for X, isn't that better? No, not necessarily, that's the entire point of this series of essays. Friction is good in some cases! You can't learn without friction. You can't have sex without friction.
willrshansen 6 hours ago
If there's too many lies, "source or gtfo" becomes more important
ipython 6 hours ago
you would have to trust that the person listening to the lies would know the difference, and that's the rub...
jbxntuehineoh 5 hours ago
that's the neat part, the source is also going to be bullshit slop!
engeljohnb 5 hours ago
Therefore, you can dismiss whatever claim is being made. That's the reason to ask for the source: so you can judge whether it's reliable.
ori_b 5 hours ago
Some people like roasting marshmallows. Others think that setting the house on fire may have downsides.
gmuslera 5 hours ago
The epilogue looked weak to me. The previous sections explored why it is essentially wrong to use current LLM technology (the answers can be wrong, or not even wrong) and why it has to be that way. The epilogue focuses more on (our) obsolescence in a paradigm shift toward widespread LLM use, and not on whether they do their work right or wrong.
And that should be the core. There is a new, emergent technology: should we throw everything away and embrace it, or are there structural reasons why it's something to be taken with big warning labels? Avoiding LLMs because they do their work too well may be a global-system approach, but decision makers optimize locally for their own budget/productivity/profit. If there are perceived risks because they are not perfect, that is another thing.
Jeff_Brown 5 hours ago
As a consequentialist who shares the author's concerns, I feel fine (ethically) using AI without advancing it. Foregoing opportunities meaningful to yourself for deontological reasons when it won't have any impact on society is pointless.
chungus_amongus 18 minutes ago
"carbon emissions" sneed
camgunz 3 hours ago
We should consider how we came to be so powerless. The cringe "people gave their lives for that flag" line is actually true, and we're trading it away for what? Not having to get out of our gaming chairs?
The reason you can't beat index funds is the people who build the market built a system that benefits them and them alone; the index fund is the pitchfork dividend (what you pay to avoid getting pitchforked). The reason you can't get your congressperson on the line is (mostly) they built a system where the only way to influence them is to enrich them; voting is the pitchfork dividend.
The way to build a society that runs on reality is to build it by whatever means possible, then defend it by any means necessary. The only societies that matter are the ones that survive.
I want to build it. I don't wanna build a fuckin crypto app, a stupid ass agent harness, or yet another insipid analytics platform. I want to build a society that furthers the liberation of humankind from the vicissitudes of nature, the predation of tyranny and the corruption of greed. I believe it is possible, and I want to prove it out.
srinathkrishna 4 hours ago
I couldn't help but resonate with a lot of what Kyle says here.
If we haven't already, we will soon lose the ability to tell whether AI is helping humans (an overwhelming majority of them, not a handful), considering how we are steaming ahead on this path!
heroicmailman 3 hours ago
> And if I’m wrong, we can always build it later.
That's the rub: if we build it later, our economy crashes in the meantime.
nfornowledge 5 hours ago
Rudolph built his engine, Henry built his car, Popular Mechanics published it. 2000 biofueling stations across the nation. All made illegal by special interests months before the article was published. Information didn't move fast enough to let the editors know that innovation was illegal.
plumbees 4 hours ago
I'm genuinely trying to understand this comment. Can you /explain
OgsyedIE 4 hours ago
It's an oblique reference to the career outcomes of Rudolf Diesel, the 19th-century inventor after whom several things are named.
poszlem 6 hours ago
From the article: "I’ve thought about this a lot over the last few years, and I think the best response is to stop. ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis."
"What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking-there's the real danger" - Frank Herbert, God Emperor of Dune
TeMPOraL 6 hours ago
> "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking-there's the real danger" - Frank Herbert, God Emperor of Dune
I always preferred this take:
“Civilization advances by extending the number of important operations which we can perform without thinking of them.” ― Alfred North Whitehead
It's both opposite and complementary to your Frank Herbert quote.
delecti 5 hours ago
I think it's important that we recognize and understand how those operations are being done, and ignorance of the complexity of all the parts of our lives leads to the death of expertise. People who would learn a lot just from reading the course description of a 100 level class in a field are assuming their lack of knowledge means there's no complexity there.
> “There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'” ― Isaac Asimov
The easier society makes it to be unaware of the complexity of everything around us, the easier it becomes to assume everything is actually as simple as their surface-level understanding.
ori_b 5 hours ago
It's very clear to me that many people have achieved peak civilization -- no evidence of thought remains.
notpachet 6 hours ago
I guess it hinges on your definition of "civilization".
gdulli 6 hours ago
Also Frank Herbert: "Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
chungusamongus 5 hours ago
I mean, people are talking about the butlerian jihad without any sense of irony or subtext. Dune is literally a feudal hellscape that takes place in the wake of that event. It didn’t make things better. Lmao
wmeredith 6 hours ago
> ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand
On one hand I intuitively think this is correct, on the other hand these very concerns about technology have been around since the invention of... writing.
Here is an excerpt of Socrates speaking on the written word, as recorded in Plato's dialogue Phaedrus - "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom"
miltonlost 5 hours ago
And you know, Socrates was right. We did lose our memory with writing! How many phone numbers do you remember now that you have a phonebook in your phone? Humans will lose skills due to LLMs. That's just obvious on its face, by the fact that if you don't do a skill regularly, you will lose it (or lose the ability to do it as well as you once did).
mcguire 26 minutes ago
Out of curiosity, what if the "can be useful" part is Gell-Mann Amnesia?
voidUpdate 6 hours ago
> "Unavailable Due to the UK Online Safety Act. Now might be a good time to call your representatives."
Having the "call your representatives" link be to your website as well isn't particularly helpful... I already can't get to it
Leynos 6 hours ago
OP should link to https://www.writetothem.com/
s1mn 5 hours ago
I love it when Americans take the moral high ground
drstewart 3 hours ago
Funny that's how I feel when Europeans lecture about wars and imperialism
snackerblues 4 hours ago
Surprised you can even read this thread without a 'acking loicence
zshn25 5 hours ago
The comparison to automobiles changing streets is thrown around a lot. But I feel AI is fundamentally different. It is not a technological change like the internet which brought us huge amounts of opportunities in so many different directions. AI’s goal is to automate (in other words, replace) us.
merb an hour ago
What doomsayers or tech bros never really understand: you can't be rich without an economy. Which basically means that if 90% of people lose their jobs and their homes, the system will collapse by itself, even the stuff that rich people need.
AI will basically either enrich our lives like the loom did, or it will outright kill the current economic system of the world (which might end poverty altogether), or it will start a big collapse where people suffer at the beginning but which still has a positive outcome in the end.
Humankind has always found a solution in the past, and it will in the future too.
matusp 4 hours ago
Despite all the AI hype, I wonder how much of it exists only in a tech bubble full of terminally online folks. Unless you spend a significant part of your day online, most of the AI risks mentioned in this series are probably negligible. The most affected demographic is computer nerds who grew up enjoying a utopian Web that is now turning dark.
ericmcer an hour ago
Seriously try saying "LLM" to anyone else.
There is a class next door to my office. An old woman is teaching ~20 people how to be insurance agents with a slide show. It seems like a two week course with a certificate at the end.
They don't seem worried that the slideshow could be pasted into an LLM's context window and outperform all of them on the test in 5 seconds, and are diligently taking notes.
analog8374 5 hours ago
We've recreated pre-enlightenment intellectual culture. Authority and logical consistency matter. Reality doesn't.
jimt1234 3 hours ago
One of the "lies" that concerns me is AI-generated music and its deterioration of the personal connection between musician and listener. As MCA from the Beastie Boys said, "If you can feel what I feel then it's a musical masterpiece." The listener feels a connection to the musician (and other people) with sad songs because everyone has felt sad, or with love songs because everyone has fallen in love, and so on. The listener can still get a feeling from AI-gen'ed music, but is it the same? What is the connection? Or, has that "connection" between musician and listener always been bullshit? That is, has it always been just about music triggering your brain to make you feel a certain way, and the source of that feeling really isn't what people care about - just give me a feeling?
dfxm12 6 hours ago
The idea that Claude might be able to help you change the color of your led lighting as a legitimate counter to things like a less usable world wide web, worse government services, the loss of human ability, etc. is excellent parody.
Sharlin 5 hours ago
It's way too real, that's just how humans tend to work. Short-term personal benefit almost always outweighs long-term societal cost.
catapart 6 hours ago
completely fair, and I agree. but let's talk 6 months/a year down the line - when a local LLM will be able to offer what claude code does, only slower and with a smaller context window. then do you whip out the local llm to handle the project, or is it still objectionable?
lionkor 5 hours ago
It's already YEARS down the line from when this was promised, we can't keep saying "but in a couple more quarters it'll all be different!".
Mezzie 5 hours ago
I read that as an example of how we're seduced into using things - we start small because surely this one small thing won't hurt. And then it becomes one more thing. And one more. It'll start with him using it to change the color of his lights and 5 years from now AI will be embedded in his life.
It's the first step on the road to hell.
cm2012 6 hours ago
This article is a good example of how ideology can lead people down irrational paths.
throw4847285 5 hours ago
A statement that can be reversed onto the speaker without effort is meaningless. It has no content. It just means, "I am rational and you are not." Ok, then.
MrBuddyCasino 5 hours ago
The Industrial Revolution - the greatest thing ever to happen - required the British govt to deploy more troops against Luddites than they had fighting Napoleon at the same time.
Damaging machinery was made a capital offense and they had dozens of executions, hundreds of deportations.
At every stage, the steady progress of civilization is fragile and in danger of being suffocated. Its opponents cloak themselves in moral righteousness, call themselves Luddites, the green party, or AI safety rationalists. It's all the same corrosive thing underneath.
throw4847285 5 hours ago
This kind of black and white moral thinking is corrosive to one's intelligence. You're allowed to talk about who benefits from massive society change and who suffers. You are allowed to talk about the ways that technology is implemented and how that leads to pros and cons. An attitude of "if we ever stop moving forward and think then the evil bad people win" is deeply anti-intellectual.
MrBuddyCasino 4 hours ago
The Thoughtful Centrist has entered the chat. You are hereby sentenced to an infinite loop discussion with Eliezer Yudkowsky.
Gooblebrai 3 hours ago
> The Industrial Revolution - the greatest thing ever to happen - required the British govt to deploy more troops against Luddites than they had fighting Napoleon at the same time
Source of this claim?
MrBuddyCasino 2 hours ago
E.P. Thompson, "The Making of the English Working Class".
It is admittedly a specific, cherry-picked point in time at which this was true, but useful to illustrate the issue.
nipponese 6 hours ago
The conclusion was the takeaway. Everyone is getting bumped up a skill notch, not just bozo liars.
SilverBirch 6 hours ago
Frankly I think it's kind of childish to just put up a massive UK-wide block on your website. "Call your representatives", ok dude, can I give you a list of things I want to change about your country's policies?
dminik 6 hours ago
I don't think you can. The comments section of the page is also behind the block for you, no?
drstewart 3 hours ago
> ok dude, can I give you a list of things I want to change about your country's policies?
of course, non Americans never comment on American policies
yanis_t 5 hours ago
I read a couple of articles in the series and I still couldn't get what point the author is trying to make. Reads like, "let me give you 100 arguments why I think this is bad".
Do LLMs lie? Of course not, they are just programs. Do they make mistakes or get the facts wrong? Of course they do, no more often than a human does. So what is the point of the article? Why is my future particularly bad now because of LLMs?
bauerd 5 hours ago
The argument isn't that LLMs are bad because they can hallucinate. Author (clearly) argues that LLM use has negative cognitive effects on their users and on society as a whole. Plus, the technology would wipe out a large, large number of jobs.
lionkor 5 hours ago
How can you argue they don't lie, as if they have any idea of correct vs wrong? There is no brain there. When statistics overwhelmingly say "yes" is the correct answer to something, it will say "yes" -- completely independent of whether that's the correct answer.
chungusamongus 6 hours ago
Complaining about AI slop is starting to become its own kind of slop. There isn't anything novel in this little essay. It might as well have been written by AI because I've seen this type of dude complain about this exact type of thing countless times at this point, and none of them have a solution other than empty moralizing or call your representative or whatever. None of that’s going to work. Fortune, Gizmodo, The Verge, Ars Technica, etc. all circulate the same negative headlines and none of them have a solution, and their writers are probably going to be totally replaced by AI so what difference does it make? They're just capitalizing on the negative sentiment and they have no intention to come up with a solution. At that point it's just complaining and I'm sick of it.
alehlopeh 6 hours ago
If you’re not an AI yourself it’s weird how you’re so offended by this stuff.
chungusamongus 2 hours ago
An AI wouldn't get offended. It would sycophantically agree.
zabzonk 6 hours ago
Spotting a problem is relatively easy. Coming up with a solution, not so much. But it is still worth pointing out that there is a problem.
chungusamongus 5 hours ago
I mean, it has been exhaustively discussed at this stage. Everyone who cares knows all of this stuff already.
The solution is obviously some form of socialism but a lot of tech people are blinkered libertarians who refuse to put two and two together.
TheEaterOfSouls 5 hours ago
Agreed, and I think if you asked most people in the developed world, they'd say the invention of automobiles has been a net positive (to say the least) despite all the very real negatives. Stopped reading the article after that. It seems like the people expressing these sentiments are a loud minority, and I know from having spent way too much time online that if LLMs didn't exist in their current form, they'd be angry about something else. Then again, maybe I'm just out of touch. It's a distinct possibility.
Ifkaluva 5 hours ago
I don’t think this is the right take.
To take the car analogy: it matters how we use the car.
The car in itself can be used to save time and energy that would otherwise be used to walk to places. That extra time and energy can be used well, or poorly.
- It can be squandered by having a longer commute that defeats the point
- Alternatively, it can be wasted by sitting on a couch consuming Netflix or TikTok
- Alternatively, it can be used productively, by playing team sports with friends, or chasing your kids through the park, or building a chicken coop in your back yard
It’s all about wise usage. Yes it can be used as a way to destroy your own body and waste your time and attention, but also it can be used as a tool to deploy your resources better, for example in physical activities that are fun and social rather than required drudgery.
I think it’s the same for LLMs. Managers and executives have always delegated the engineering work, and even researching and writing reports. It matters whether we find places to continue to challenge and deploy our cognition, or completely settle back, delegate everything to the LLM and scroll TikTok while it works.
advisedwang 3 hours ago
OK but the car DID have the effects that Kyle described. The fact that you have to imagine a world where people collectively made some other "wiser" decision about how to use cars perfectly demonstrates that those decisions don't happen. In some cases it's because other choices seemed rational, sometimes because people are irrational, and sometimes because of the prisoner's-dilemma-like situation where multiple people making the rational individual choice results in an irrational choice for all of society.
Kyle's recommendation to stop/slow using AI is phrased as another individual choice, but given that lesson I think it's appropriate to interpret it as a collective choice: collective through regulation, collective resistance, etc.
netcan 4 hours ago
"The Medium is the Message" applies... or some analogy to that idea.
Yes, individuals have choices. But in a collective, dynamics occur and those dynamics can't usually be overcome by individuals.
Social media could be used differently, but the way it exists IRL is determined by the nature of the medium, the economic structure, and other things outside of individuals' control.
layer8 5 hours ago
While I agree in principle, I don’t know how much faith is warranted in humans using it wisely in practice.
Ifkaluva 5 hours ago
I agree with you that the majority of people will use it to feed their attention and energy to the attention economy. Meta will be more profitable than ever, as will TikTok, Netflix, YouTube
But the majority have always chosen the path of least resistance. This is not new! Socrates’ famous exhortation is “the unexamined life is not worth living”. People were living mindlessly on autopilot before TikTok.
I think if you want to give a call to action, as this piece does, the right call to action is “think carefully about how you can make a good use of your time and energy, now that the default path has changed.” I know it’s not as simple or emotionally powerful as “go down kicking and screaming, stick it to the man”, but as a rule of thumb, the less fiercely emotional path is usually the right one.
pixl97 5 hours ago
I have a lot of faith they will use it unwisely.