Coding after coders: The end of computer programming as we know it? (nytimes.com)
192 points by angst 2 days ago
neonate a day ago
Other gift link: https://www.nytimes.com/2026/03/12/magazine/ai-coding-progra...
dsQTbR7Y5mRHnZv 12 hours ago
> in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.
I've always hated solving puzzles with my deterministic toolbox, learning along the way and producing something of value at the end.
Glad that's finally over so I can focus on the soulful art of micromanaging chatbots with markdown instead.
barnabee 4 hours ago
I like designing data, algorithms, and systems. I like picking the right tools for the job. I like making architectural and user interface (CLI, configuration format, GUI, whatever) decisions.
Actually typing code is pretty dull. To the extent that I rarely do it full time (basically only when prototyping or making very simple scripts etc.), even though I love making things.
So for me, personally, LLMs are great. I'm making more software (and hardware) than ever, mostly just to scratch an itch.
Those people that really love it should be fine. Hobbies aren't supposed to make you money anyway.
I don't have much interest in maintaining the existence of software development/engineering (or anything else) as a profession if it turns out it's not necessary. Not that I think that's really what's happening. Software engineering will continue as a profession. Many developers have been doing barely useful glue work (often as a result of bad/overcomplicated abstractions and tooling in the first place, IMO) and perhaps that won't be needed, but plenty more engineers will continue to design and build things just more effectively and with better tools.
galactus an hour ago
I think reducing what LLMs do to « typing » is misleading. If it were just typing, you could simply use speech-to-text. But LLMs do far more than that: they shape the code itself. And I think we lose something when we delegate that work to LLMs.
jodleif 2 minutes ago
staplers 2 hours ago
The assembly line has been mass producing ready-made products for over 100 years and yet product quality, material stability, aesthetic trends, and function design still dominate the purchasing decisions of the general public.
Being tapped into fickle human preference and a changing utility landscape will be necessary for a long time still. It may get faster and easier to build, but tastemakers and craftsmen still hold heavy sway over markets that can mass-produce vanilla products.
Ygg2 20 minutes ago
IanCal 8 hours ago
To read it in a kinder way: I can focus on a complex logic problem, a flow, an architecture, or a micro-optimisation. I can have an LLM set up the test harnesses.
I improved test speed, which was fun; I had an LLM write a nice analysis front end for the test timing, which would have taken time but just wasn't interesting or hard.
Ask yourself if there are tasks you have to do which you would rather just have done? You’d install a package if it existed or hand off the work to a junior if that process was easy enough, that kind of thing. Those are places you could probably use an LLM.
relativeadv 4 hours ago
But you don't actually do any of that, do you? Instead, you get tired and lazy and attempt to have the LLM solve those hard problems for you too. You just don't tell others about it.
DennisP 3 hours ago
bluefirebrand 4 hours ago
> Ask yourself if there are tasks you have to do which you would rather just have done?
Yeah. My laundry, my dishes, my cooking...
You know. Chores.
Not my software, I actually enjoy building that
DennisP 4 hours ago
apsurd 3 hours ago
arcxi 4 hours ago
amidst this whole AI craze it's illuminating to learn how many programmers secretly hated programming all along
doug_durham an hour ago
I don't know if it's "hate" so much as "a means to an end". I love learning new languages, and coding. But it was always a means to an end. The dopamine hit always came from seeing the project compile and do something.
m0llusk an hour ago
sph 2 hours ago
It’s clear at this point that the term programmer was used to refer to two very different types of people.
RobRivera 2 hours ago
Yea! Back to my amazing Pax Americana of friendly neighbors, high trust in my authorities, and cheerful joyous days in peace and harmony with my fellow man, complete with gumdrop smiles and firm faith in my institutions. A truly brave new world.
GorbachevyChase 3 hours ago
You can always code by hand as a hobby.
If someone is paying you for your work results, whether you find it interesting or fun is orthogonal. I get the sense from the comment section here that there’s a perception that writing programs is an exceptional profession where developer happiness is an end unto itself, and that everyone doing it deserves to be a millionaire in the process. It just comes across as child-like thinking. I don’t think many of us spend time wondering if the welder enjoys the torch, or if a cheaper shop weld is robbing the human welder of the satisfaction of a field weld. And we don’t spill so much ink wondering if digital spreadsheets are a moral good or not because perhaps they robbed the accountant of the satisfaction of holding a beautiful quill dipped expertly in carefully selected ink. You’re lucky if you enjoy your job; I think most of us find a way to learn to enjoy our work, or at least tolerate it.
I just wish all the moaning would end. Code generation is not new, and the fact that the state of the art can now translate high-level instructions into a program at least as well as the bottom 10% of programmers is a huge win for humanity. Work that could be trivially automated, but isn't only because of the scarcity of programming knowledge, is going to start disappearing. I think the value creation will be tremendous, and I think it will take years for it to penetrate existing workflows and for us to recognize the value.
notpachet 2 hours ago
> at least as well as the bottom 10% of programmers
I don't think this is the flex you think it is... in my experience, the bottom 10% of programmers are actively harmful and should never be allowed near your codebase.
GoblinSlayer 9 hours ago
Why not, normies love to talk with the computer.
rikroots 7 hours ago
Well, I do seem to spend a fair amount of my developer time swearing at my laptop screen. And then there's the time I spend just prior to writing code, staring at the wall while I figure out what sort of code I want to write - if I can repackage that wall-staring time as "time spent consulting with AI about approaches and architecture decisions", I'm sure my engineering manager will think more kindly of me ...
GoblinSlayer 4 hours ago
caseyf 9 hours ago
+1024. what the FUCK, Anil. We solved coding-is-for-everyone by throwing up our hands. please crush my body under the heaviest layer of abstraction yet and have the llm read my eulogy because who could possibly know me better than the code I spend all day talking to as if it were a human
someprick 5 hours ago
Lurked for >10y here. Created an account just to say, "+1 well said."
crocodile10203 5 hours ago
I mean if > 30% of my work is drudgery, I have failed already.
rjh29 9 hours ago
The two types of coder argument seems strong to me. Coders who love the art of programming (optimisation for the sake of it, beautiful designs, data structures...) and builders. The former are in for a rough time. The latter are massively enabled and no longer have to worry about smashing together libs by hand to make crud apps.
ori_b 4 hours ago
Doordash has also enabled home cooks; they no longer have to worry about smashing together ingredients by hand to make dinner. They just prompt the app to make them the food they want.
Doordash is the future of home cooking.
falkensmaize 2 hours ago
Thinking carefully about the details of implementation MATTERS. Even with crud apps. Getting something “built” fast isn’t and should not be the only consideration.
I can go to a junkyard and assemble the parts to build a car. It may run, but for a thousand tiny reasons it will be worse than a car built by a team of designers and engineers who have thought carefully about every aspect of its construction.
crocodile10203 5 hours ago
"bvilders" right now its mostly people who want to build a substandard app and shill it everywhere.
ThrowawayR2 an hour ago
Imagine if your operating system or compiler were written by the sort of person that thinks "Coders who love the art of programming ... are in for a rough time."
swader999 7 hours ago
Yes, this is the state of it. But just wait a few months, maybe years: the builders aren't safe either. In a matter of time it just won't be cost-effective to let humans build.
DennisP 6 hours ago
beepbooptheory 5 hours ago
comrade1234 a day ago
Having an AI is like having a dedicated assistant or junior programmer that sometimes has senior-level insights. I use it for tedious tasks where I don't care about the code - like today, when I used it to generate a static web page that let me experiment with the spring-ai chatbot code I was writing. Basic stuff. But yesterday it was able to track down the cause of a very obscure bug having to do with a pom.xml loading two versions of the same library - in my experience I've spent a full day on that type of bug, and Claude was able to figure it out from the exception in just minutes.
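For anyone who hits the same class of bug by hand: Maven's `mvn dependency:tree -Dverbose` will show where each duplicate version is pulled in, and the usual manual fix is an exclusion in the pom. A minimal sketch - the artifact coordinates here are made up for illustration, not from the comment above:

```xml
<!-- Hypothetical coordinates: keep one explicit copy of duplicated-lib
     and exclude the second copy pulled in transitively by consumer-lib. -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>consumer-lib</artifactId>
  <version>2.1.0</version>
  <exclusions>
    <exclusion>
      <groupId>com.example</groupId>
      <artifactId>duplicated-lib</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```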
But when I've used AI to generate new code for features I care about and will need to maintain it's never gotten it right. I can do it myself in less code and cleaner. It reminds me of code in the 2000s that you would get from your team in India - lots of unnecessary code copy-pasted from other projects/customers (I remember getting code for an Audi project that had method names related to McDonalds)
I think, though, that the day is coming where I can trust the code it produces, and at that point I'll just be writing specs. It's not there yet though.
zazibar 9 hours ago
> I think, though, that the day is coming where I can trust the code it produces, and at that point I'll just be writing specs. It's not there yet though.
Must be nice to still have that choice. At the company I work for they've just announced they're cancelling all subscriptions to JetBrains, Visual Studio, Windsurf, etc. and forcing every engineer to use Claude Code as a cost-saving measure. We've been told we should be writing prompts for Claude instead of working in IDEs now.
sobjornstad 5 hours ago
This is completely insane, and that's coming from someone who does 95% of edits in Claude Code now.
qudat 5 hours ago
That’s going to give you all a ton of job security in a year when we realize that prompt first yields terrible results for maintainability.
kjkjadksj 2 hours ago
the_real_cher 9 hours ago
Thats insane!
mchaver 5 hours ago
I wonder how much cost savings there are in the long term when token prices go up, the average developer's ability to code has atrophied, and the company code bases have turned into illegible slop. I will continue to use LLMs cautiously while working hard to maintain my ability to code in my off time.
daveguy 11 minutes ago
kubb 6 hours ago
Thoughts and prayers.
swader999 7 hours ago
I didn't renew Jet Brains this month. Been a loyal customer and would have quit jobs from 2008 onwards without it.
bredren 2 hours ago
DennisP 3 hours ago
deadbabe 5 hours ago
I hope they are prepared to pay the $500/month per head when subsidies expire.
delecti 2 hours ago
mekael 2 hours ago
gedy 2 hours ago
GoblinSlayer 9 hours ago
Isn't Visual Studio a one time purchase?
gedy 5 hours ago
Honestly while I know everyone needs a job, just speed run all this crap and let the companies learn from making a big unmaintainable ball of mud. Don't make the bad situation work by putting in your good skills to fix things behind the scenes, after hours, etc.
falkensmaize 2 hours ago
DennisP 6 hours ago
Even if you're using Claude, canceling the IDEs might be poor strategy. Steve Yegge points out in his book that the indexing and refactoring tools in IDEs are helpful to AIs as well. He mentions JetBrains in particular as working well with AI. Your company's IDE savings could be offset by higher token costs.
DennisP 3 hours ago
thangalin 12 hours ago
Syntax highlighting rules: initially vibe-coded 40 languages and formats in about 10 minutes. What surprised me is when it switched the design from a class to the far more elegant single line of code:
return \file_exists( $file ) ? require $file : [];
* https://repo.autonoma.ca/repo/treetrek/blob/HEAD/render/High...
The rules files:
* https://repo.autonoma.ca/repo/treetrek/tree/HEAD/render/rule...
DennisP 6 hours ago
I'm halfway through Steve Yegge's book Vibe Coding. Yegge was quoted in the article:
> “We’re talking 10 to 20 — to even 100 — times as productive as I’ve ever been in my career,” Steve Yegge, a veteran coder who built his own tool for running swarms of coding agents
That tool has been pretty popular. It was a couple hundred thousand lines of code and he wrote it in a couple months. His book is about using AI to write major new projects and get them reliable and production-ready, with clean, readable code.
It's basically a big dose of solid software engineering practices, along with enough practice to get a feel for when the AI is screwing up. He said it takes about a year to get really good at it.
(Yegge, fwiw, was a lead dev at Amazon and Google, and a well-known blogger since the early 2000s.)
triyambakam 13 hours ago
This is the take when you haven't really tried driving these tools with much practice
nickjj 6 hours ago
I don't think it's that
Here's an example from Gemini with some Lua code:
label = key:gsub("on%-", ""):gsub("%-", " "):gsub("(%a)([%w_']*)", function(f, r)
return f:upper() .. r:lower()
end)
if label:find("Click") then
label = label:gsub("(%a+)%s+(%a+)", "%2 %1")
elseif label:find("Scroll") then
label = label:gsub("(%a+)%s+(%a+)", "%2 %1")
end
I don't know Lua too well (which is why I used AI) but I know programming well enough to know this logic is ridiculous. It was to help convert "on-click-right" into "Right Click".
The first bit of code to extract out the words is really convoluted and hard to reason about.
Then look at the code in each condition. It's identical. That's already really bad.
Finally, "Click" and "Scroll" are the only 2 conditions that can ever happen and the AI knew this because I explained this in an earlier prompt. So really all of that code isn't necessary at all. None of it.
What I ended up doing was creating a simple map and looked up the key which had an associated value to it. No conditions or swapping logic needed and way easier to maintain. No AI used, I just looked at the Lua docs on how to create a map in Lua.
This is what the above code translated to:
local on_event_map = {
["on-click"] = "Left Click",
["on-click-right"] = "Right Click",
["on-click-middle"] = "Middle Click",
["on-click-backward"] = "Backward Click",
["on-click-forward"] = "Forward Click",
["on-scroll-up"] = "Scroll Up",
["on-scroll-down"] = "Scroll Down",
}
label = on_event_map[key]
IMO the above is a lot clearer on what's happening and super easy to modify if another thing were added later, even if the key's format were different. Now imagine this: imagine coding a whole app or a non-trivial script where the first section of code was used. You'd have thousands upon thousands of lines of gross, brittle code that's a nightmare to follow and maintain.
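For readers less familiar with Lua, the same table-lookup pattern is a one-liner in most languages. A hypothetical Python sketch of the mapping above, with a fallback so unknown keys surface instead of crashing (the fallback is my addition, not part of the original comment):

```python
# Map raw event keys directly to display labels; no string surgery needed.
ON_EVENT_MAP = {
    "on-click": "Left Click",
    "on-click-right": "Right Click",
    "on-click-middle": "Middle Click",
    "on-click-backward": "Backward Click",
    "on-click-forward": "Forward Click",
    "on-scroll-up": "Scroll Up",
    "on-scroll-down": "Scroll Down",
}

def label_for(key: str) -> str:
    # Fall back to the raw key so an unknown event stays visible in the UI.
    return ON_EVENT_MAP.get(key, key)

print(label_for("on-click-right"))  # -> Right Click
print(label_for("on-double-tap"))   # -> on-double-tap (unknown key passes through)
```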
andoando 13 hours ago
For any non-professional work, it's there for me.
Wire up an authentication system with SSO: done. Set up websockets, stream audio from the mic, transcribe with ElevenLabs: done.
Shit that would take me hours takes literally 5 mins.
leptons 12 hours ago
All that stuff would take me about 5 minutes without AI. Those are things with 10,000 examples all over the web. AI is good at distilling the known solutions. But anything even slightly out of the ordinary, it fails miserably. I'd much rather write that code myself instead of spend an hour convincing an AI to do it for me.
andoando 12 hours ago
andoando 11 hours ago
coldtea 10 hours ago
mexicocitinluez 6 hours ago
kansface 12 hours ago
I've generated 250KLoC this week, absolutely no changes in deps or any other shenanigans. I'm not even really trying to optimize my output. I work on plans/proposals with 2 or 3 agents simultaneously in Cursor while one does work, sometimes parallelized. I can't do that in less code and cleaner. I can't do it at all. Don't wait too long.
zahlman 7 hours ago
> I can't do that in less code and cleaner. I can't do it at all.
Can't do what, precisely?
habinero 12 hours ago
> I've generated 250KLoC this week
It's horrifying, all right, but not in the way you think lol. If you don't understand why this isn't a brag, then my job is very safe.
coldtea 10 hours ago
sevenzero 8 hours ago
slopinthebag 11 hours ago
LOL that's it? I generated over 5 million lines of code this week. You need to step it up or you're gonna be left behind.
zahlman 7 hours ago
kubb 6 hours ago
MadxX79 12 hours ago
Your developers were so preoccupied with whether or not they could, they didn't stop to think if they should (add 250kloc)
bot403 11 hours ago
dorfsmay 4 hours ago
For me, the biggest shift is people who don't care about local AI. The idea that you can no longer code without paying a tax to one of the billion-dollar-backed companies doesn't sit well.
FuckButtons an hour ago
I don’t understand why more people aren’t focused on how to get the benefits of ai but on your own machine. If the last 20 years of software transitioning off of our desktops and into the cloud has taught us anything, it’s that letting corporate entities run the software you rely on end to end gives you: worse software with more bugs, surveillance and subscriptions. Why on earth would you want that for everything you do.
whiplash451 a minute ago
Local AI is what people want/need, but centralized AI is where the investors' money is flowing, because a walled garden has always been easier to turn into a cash printer.
glaslong 2 hours ago
The marginal differences in quality seem pretty meaningful right now, enough to make Claude wildly dominant, but some of the locally runnable models like Qwen feel only a few months behind the leaders.
I'm betting the generational gains level off and smaller local models close the gap somewhat. Then harnesses will generally matter more than models, and proprietary harnesses will not offer much more than optimization for specific models. All while SaaS prices ratchet up, pushing folks toward local and OSS - or at least local vs. a plethora of hosted competition, same as cloud vs. on-prem.
hacker_homie 3 hours ago
Yeah, look at the price of Netflix. Do you think, starting at $200, it's going to stay anywhere close to that?
dorfsmay an hour ago
The price does not matter, even if it were free. If you need to be logged in to an external service to be able to code, it's just not the same any more - and I'm thinking of basic technology here, but the political/dystopian ramifications are crazy.
duskdozer an hour ago
bikelang 13 hours ago
If coding truly becomes effortless - and by extension a product becomes near free to produce - then I find it quite odd that the executive class thinks their businesses won't be completely upended by a raging sea of competition.
glaslong 2 hours ago
They're going to face a fast and ruthless test of whether their product sense and their ability to attract and trap customers were actually skill or lucky positioning, as competition explodes from every direction, including from within their customer and user bases.
EagnaIonat 9 hours ago
All run-of-the-mill software is gone or on borrowed time. Why pay a subscription for a product when I can get Claude to build it for me?
Before, I was building tools; now I am building full applications in less time than the tools used to take.
What will be around for a while is where you need an expert in the loop to drive the AI. For example enterprise applications. You simply can't hand that off to an AI at this point.
AstroBen 5 hours ago
Can we see the applications?
EagnaIonat 3 hours ago
xienze 4 hours ago
dominotw 6 hours ago
no one wants your vibecoded full applications though. not sure why you are building them.
EagnaIonat 6 hours ago
some_random 5 hours ago
No, that's exactly what at least some of them think and it's why the market has been so volatile lately.
bot403 11 hours ago
The markets seem to agree with you and are pricing accordingly.
postsantum 13 hours ago
The struggle will completely shift to how to get traffic
mirsadm 12 hours ago
That is already the struggle. There is too much stuff already.
GoblinSlayer 5 hours ago
d4rkp4ttern 7 hours ago
I see lots of discussion about humans no longer writing code, but the elephant in the room is the rapid extinction of human review of AI-made code. I expect this will lead to a massive hangover. In the meantime we try to mitigate it by ensuring the structure of the code remains AI-friendly. I also expect some new types of tools to emerge that will help with this “cognitive debt”.
kusokurae 5 hours ago
My impression is that people who think LLMs will completely replace reviewing or writing code have never really worked on anything safety-critical. I'm not looking forward to the next wave of pacemaker glitches.
kjkjadksj 2 hours ago
You act like we live in a world where companies are held sufficiently liable.
olsondv 6 hours ago
That is why I do not use the multi-agent team technique. My code generation has atrophied, but my code review skills have only gotten stronger, for both human and AI code. If I handed over both, it would hurt my employability and definitely lead to that hangover.
allreduce 11 hours ago
I'm starting to find the naive techno-optimism here annoying. If you don't have capital or can't do something else, you will be homeless.
gf000 9 hours ago
Well, there is so much lower-hanging fruit that LLMs can actually replace before they get to developers -- basically every middle manager, and a significant chunk of all white-collar jobs.
I'm not convinced software developers will be replaced - probably fewer will be needed and the exact work will be transformed a bit, but an expert human still has to be in the loop; otherwise all you get is a bunch of nonsense.
Nonetheless, it may very well transform society and we will have to adapt to it.
allreduce 9 hours ago
Not all software development will be automated immediately. But I've noticed that many skills I've built are worth less with every model release.
Having a lot of specifics about a programming environment memorized, for example, used to be the difference between building something in a few hours and in a week, but is now pretty unimportant. Same with being able to do some quick data wrangling on the command line. LLMs are also good at quickly parsing a lot of code, or even a binary format, and explaining how it works. That used to be a skill. Knowing a toolbox of technologies to reach for is needed less. Et cetera.
They haven't come for the meat of what makes a good engineer yet. For example, the systems-level interfacing with external needs and solving those pragmatically is still hard. But the tide is rising.
samiv 8 hours ago
flux3125 3 hours ago
> probably fewer will be needed and the exact work will be transformed a bit
My guess is the opposite: they'll throw 5–10x more work at developers and expect 10x more output, while the marginal cost is basically just a Claude subscription per dev.
olsondv 6 hours ago
I don’t see middle managers taking the initial brunt unless they truly are just pushing papers around. At companies of sufficient size, they provide a layer of separation between the C-suite and the grunts. To me, certain low-performing grunts will be the first out. Then a team reorg to rebalance. Then some middle managers will be out, as the remaining ones can each handle multiple teams.
hnthrow0287345 7 hours ago
>I'm not convinced software developers will be replaced
Most of us will probably need to shift to security. While you can probably build AI specifically to make things more secure, that implies it could also attack things as well, so it ends up being a cat-and-mouse game that adjusts to what options are available.
j-a-a-p 5 hours ago
Agree. Marketing, finance, legal - it's already having a huge impact on junior positions.
GeoAtreides 5 hours ago
you know, natural attrition is still attrition.
butILoveLife 8 hours ago
Yep. I own a software shop and yesterday was when I realized that I'm no longer going to be a 1%er doing this.
dominotw 6 hours ago
what happened yesterday?
ipnon 10 hours ago
But you pay only $200/month for the productivity of what used to cost the monthly salary of 10 software engineers. Doesn't this democratize software construction?
allreduce 9 hours ago
It commoditizes software construction.
The resources to learn how to construct software are already free. However learning requires effort, which made learning to build software an opportunity to climb the ladder and build a better life through skill. This is democratization.
Now the skill needed to build software is starting to approach zero. However as you say you can throw money at an AI corporation to get some amount of software built. So the differentiator is capital, which can buy software rather cheaply. The dependency on skill is lessened greatly and software is becoming worthless, so another avenue to escape poverty through skill closes.
kjkjadksj 2 hours ago
Did ms word usher in a surge of novel writing? Not really apparent.
yubainu 11 hours ago
In the near future, a "good programmer" might not be defined by someone who can write bug-free, clear code, but rather by someone who can prompt for code that works consistently within the context of AI. If that happens, I'll have to find a different job.
shinycode 6 hours ago
That’s the exact definition our CEO gave of our job this week. That’s how he sees it, and how he expects us to work now. I feel some anxiety because that’s too much, too fast. We went from « we need to fix every single bug we encounter » to « it doesn’t matter if there are bugs as long as we ship a feature fast ».
olsondv 6 hours ago
At least at my company, we have never really cared how it gets done, even before AI. It just has to work (ideally bug-free and maintainable) by the deadline. If you can keep up with shorter deadlines, more power to you. It’s basically a modern John Henry vs the steam drill.
flux3125 3 hours ago
> You can’t just tell an agent, Build me the code for a successful start-up. The agents work best when they’re being asked to perform one step at a time
That's also true for humans. If you sit down with an LLM and take the time to understand the problem you're trying to solve, it can guide you through it step by step perfectly well. Even a non-technical person could build surprisingly solid software if, instead of immediately asking for new shiny features, they first ask questions, explore trade-offs, and get the model's opinion on design decisions.
LLMs are powerful tools in the hands of people who know they don't know everything. But in the hands of people who think they always know the best way, they can be much less useful (I'd say even dangerous)
GorbachevyChase 2 hours ago
I appreciate this sober take. If you hired a remote developer and the only thing you said to that person was “build a program that does this. Make no mistakes” would you expect that to be successful? Are you certain you would get what you wanted?
AstroBen 2 hours ago
Any competent developer there is going to push back and get the needed information out of you.
LLMs don't know when you're under-specifying the problem.
anonzzzies 2 hours ago
I dunno; I can finally focus on writing the logic I wanted to write all along, and my upbringing in formal verification finally makes sense, as I can spend my time on it instead of figuring out what garbage updates - which I will never, ever need - my friends added to the framework, language, or IDE I happen to use (I cannot use it in my work, but SBCL is one of the things that does not grow tumors in software).
suheilaaita 8 hours ago
I'm from an accounting/finance background and spent about 10 years in Big4. I was always into tech, but never software development because writing code (as I thought) takes years to master, and I had already chosen accounting.
Fast forward to 2024 when I saw Cursor (the IDE coding agent tool). I immediately felt like this was going to be the way for someone like me.
Back then, it was brutal. I'd fight with the models for 15 prompts just to get a website working without errors on localhost, let alone QA it. None of the plan modes or orchestration features existed. I had to hack around context engineering, memories, all that stuff. Things broke constantly. 10 failures for 1 success. But it was fun. To top it all off, most of the terminology sounded like science fiction, but it got better in time. I basically used AI itself to hack my way into understanding how things worked.
Fast forward again (only ~2 years later). The AI not only builds the app, it builds the website, the marketing, full documentation, GIFs, videos, content, screen recordings. It even hosts it online (literally controls the browser and configures everything). Letting the agent control the browser and the tooling around that is really, genuinely, just mad science fiction type magic stuff. It's unbelievable how often these models get something mostly right.
The reality though is that it still takes time. Time to understand what works well and what works better. Which agent is good for building apps, which one is good for frontend design, which one is good for research. Which tools are free, paid, credit-based, API-based. It all matters if you want to control costs and just get better outputs.
Do you use Gemini for a website skeleton? Claude for code? Grok for research? Gemini Deep Search? ChatGPT Search? Both? When do you use plan mode vs just prompting? Is GPT-5.x better here or Claude Opus? Or maybe Gemini actually is.
My point is: while anyone can start prompting an agent, it still takes a lot of trial and error to develop intuition about how to use them well. And even then everything you learn is probably outdated today because the space changes constantly.
I'm sure there are people using AI 100× better than I am. But it's still insane that someone with no coding background can build production-grade things that actually work.
The one-person company feels inevitable.
I'm curious how software engineers think about this today. Are you still writing most of your code manually?
butILoveLife 8 hours ago
> it still takes a lot of trial and error to develop intuition about how to use them well.
I used to think so. Then a customer made their own replacement for $600/mo software in 2 days. The guy was a marketer by training. I don't exaggerate. I saw it do the exact same things.
suheilaaita 6 hours ago
It's true. We're also at the point where the models and the orchestration around them are so good that any beginner to those tools who knows how to use a computer can build working apps. Interesting times.
I was pointing out that practice helps with the speed and the scope of capabilities. Building a personal prototype is a different ballgame than building a production solution that others will use.
butILoveLife 6 hours ago
shimman 2 hours ago
Can you say what kind of software the customer replaced?
shinycode 6 hours ago
It’s true there’s some magic effect to Claude Code’s work. But still, often it’s not exactly the same infra and scaling as production grade. For a customer, though, I guess that’s perfect: they have a means to make their own tools instead of relying on platforms to build them.
suheilaaita 6 hours ago
bwhiting2356 3 hours ago
> Pushing code that fails pytest is unacceptable and embarrassing.
CI is for preventing regressions. Agents.md is for avoiding wasted CI cycles.
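A concrete, hypothetical example of the kind of AGENTS.md rule being described here - telling the agent to run the suite locally so CI never sees a commit that is already known to be red (the wording and file layout are illustrative, not from the comment above):

```markdown
<!-- hypothetical AGENTS.md excerpt -->
## Testing
- Run `pytest -q` and fix every failure before committing.
- Never push a commit whose tests you have not run locally.
- If a test is flaky, note it in the PR description rather than retrying until green.
```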
bryanrasmussen 13 hours ago
how many times in the history of computer programming has there been an end to computer programming as we know it, successfully, and how many times predicted?
I can think of one that succeeded, offhand, although you could probably convince me there was more than one.
the principal phrase being "as we know it", since that implies a large-scale change to how it works, but it continues afterwards, altered.
mech422 13 hours ago
Off the top of my head, I can think of the following during my career:
1. COBOL (we actually did still use it back in the 80s)
2. AI back in the 80s (Dr. Dobb's was all concerned about it ...)
3. RAD
4. No-Code
5. Off-shoring
6. Web 2.0
7. Web 3.0
8. possibly the ADA/provably correct push depending on your area of programming
TBH - I think the AIs are nice tools, but they've got a long way to go before it's the 'end of computer programming as we know it'.
edit: formatting
bryanrasmussen 8 hours ago
OK, those are all ones that didn't change programming as we know it, but some came closer than others right?
I definitely considered some of those in my list of failed revolutions.
My one completely successful revolution is moving from punch card programming.
mech422 2 hours ago
kuboble 12 hours ago
Stack Overflow (and the internet in general) changed programming as we (at least some of us) knew it.
When I was learning programming I had no internet, no books outside the library, and nobody to ask, for days.
I remember vividly having spent days trying to figure out how to use the stdlib qsort, and not being able to.
joefourier 4 hours ago
mech422 12 hours ago
fweimer 11 hours ago
COBOL certainly had a lasting impact, but only for some application domains. The rest didn't seem to be particularly successful or impactful. Maybe RAD if you consider office application macros and end user report generation in it. (Spreadsheets extended programming to non-programmers and had a long-lasting impact, but I wouldn't call them RAD.)
mech422 2 hours ago
ralferoo 8 hours ago
FWIW I worked for a company from 2002 to 2006 that still had quite a large COBOL team even then. Some of the team members were also in their 20s and they'd been hired and trained up in COBOL.
mech422 2 hours ago
rzmmm 6 hours ago
Someone compared LLMs in the 2020s to GUIs in the 80s and 90s. Graphical interfaces didn't replace text interfaces; they just became an addition to them.
fweimer 11 hours ago
What's the one successful one? Visicalc?
bryanrasmussen 8 hours ago
I would say the one that definitely changed programming was moving from the punch card era. A lot of these others that people are mentioning I don't think really changed programming, they just looked like they were going to.
__mharrison__ 5 hours ago
I wasn't around when we moved to that stack from assembly. I didn't experience the mourning then.
Most folks I hang out with are infatuated with turning tokens into code. They are generally very senior 15+ years of experience.
Most folks I hang out with also experience existential dread for juniors and those coming up in the field, who won't necessarily have the battle scars to orchestrate systems that will work in the real world.
Was talking with one fellow yesterday (at an AI meetup) who says he has 6 folks under him, but that he could now run the team with just two of them and the others are basically a time suck.
fixxation92 a day ago
Conversations of the future...
"Can you believe that Dad actually used to have to go into an office and type code all day long, MAUALLY??! Line by line, with no advice from AI, he had to think all by himself!"
aleph_minus_one a day ago
> "Can you believe that Dad actually used to have to go into an office and type code all day long, MANUALLY??! Line by line, with no advice from AI, he had to think all by himself!"
Grumpy old man: "That's exactly why our generation was so much smarter than today's whippersnappers: we were thinking from morning to night the whole long day."
duskdozer 7 hours ago
>What's ~~a computer~~ thinking?
bitwize 12 hours ago
This was literally part of the premise of The Jetsons. George's job was to press a single button while the computer RUDI did all the work.
The difference is, Jetsons wasn't a dystopia (unlike the current timeline), so when Mr. Spacely fired George, RUDI would take his side and refuse to work until George was re-hired.
bot403 11 hours ago
I had to run Jenkins to build my code. In the snow. And uphill on git pull and deploy.
allenu 13 hours ago
I was thinking about that recently. Maybe decades from now people will look at things like the Linux kernel or Doom and be shocked that mere humans were able to program large codebases by hand.
johnisgood 7 hours ago
Hmm, can you think of anything that we could do decades ago but cannot do now, today?
allenu 2 hours ago
c0_0p_ 6 hours ago
iamflimflam1 11 hours ago
It must have been:
Aliens
Atlanteans
Time travellers
A hoax
…
ares623 14 hours ago
More likely:
"Dad, I've sent out 1000 applications and haven't had a call back. I can't take it anymore. Has it always been like this?"
The Dad: It's not my fault!
cineticdaffodil 2 hours ago
Revenge of the writers and software managers: the wishful hoping, by those made redundant, for hurt upon those they blame for having been made redundant.
jazz9k 2 days ago
Because they are still making the same salary. In 5 years, when their job is eliminated, and they can't find work, they will regret their decision.
siva7 21 hours ago
we had no choice. if i don't do it someone else will..
chrisra a day ago
Their decision to... use AI for coding?
lelanthran a day ago
Well, their position on AI.
By their own accounts they are just pressing enter.
ripe 2 days ago
> it could also be that these software jobs won’t pay as well as in the past, because, of course, the jobs aren’t as hard as they used to be. Acquiring the skills isn’t as challenging.
This sounds like the opposite of what the article said earlier: newbies aren’t able to get as much use out of these coding agents as the more experienced programmers do.
kittikitti a day ago
This article is ragebaiting people and it's an embarrassing piece from the NYT.
0xcafefood 3 hours ago
NYT has it out for digital advertisers, who directly compete with them. I do sense some schadenfreude here that the tech nerds who work at these places might be in trouble.
"Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code."
To copywriters at the NYT: LLMs are far better at stringing together natural-language prose than at producing large amounts of valid software. Get ready to supervise LLMs all day if you're not already.
logicchains 2 hours ago
youknownothing 4 hours ago
I once suggested a drinking game: take a shot every time someone says "X is dead". I was told to f** off because I'd kill half of humanity.
COBOL is dead. Java is dead. Programming is dead. AI is dead (yes, some people are already claiming this: https://hexa.club/@phooky/116087924952627103)
I must be the kid from The Sixth Sense because I keep seeing all these allegedly dead guys around me.
lelanthran a day ago
This is a very one-sided article, unashamedly so.
Where are the references to the decline in quality and the embarrassing outages at Amazon, Microsoft, etc.?
dboreham a day ago
Everything you read is in service of someone's business model.
gnz11 8 hours ago
What’s your point? Journalists have jobs?
0xcafefood 3 hours ago
negromcnig 2 hours ago
esafak a day ago
Do we know that it decreased the quality, or introduced more opportunities for bugs by simply increasing the velocity? If every commit has a fixed probability of having a bug, you'll run into more bugs in a week by going faster.
lelanthran 13 hours ago
> Do we know that it decreased the quality, or introduced more opportunities for bugs by simply increasing the velocity?
That's an easy question to answer - you can look at outages per feature released.
You may be instead looking at outages per loc written.
leptons 12 hours ago
AI is constantly trying to introduce bugs into my code. I've started disabling it when I know exactly where I'm going with the code, because the AI is often a lot more confused than I am about where the code is going.
pydry a day ago
Do we know it increased the velocity and didn't just churn out more slop?
Even before AI the limiting factor on all of the teams I ever worked on was bad decisions, not how much time it took to write code. There seem to be more of those these days.
htx80nerd a day ago
You have to hold the AI's hand to get even simple vanilla JS done correctly, or framework code that is well documented all over the net. I love AI and use it for programming a lot, but the limitations are real.
xtracto 15 hours ago
The other day I (well, the AI) wrote a Rust app to merge two huge tables (GBs of data) by discovering columns with data in common, based on text distance (Levenshtein and Dice). It worked beautifully.
And I have NEVER written a line of Rust.
I don't understand the naysayers; to me the state of gen AI is like the Simpsons quote: "worst day so far". Look where we are within 5 years of the first real GPT/LLM. The next 5 years are going to be crazy exciting.
The "programmer" position will become a "builder". When we've got LLMs that generate Opus-quality text at 100x speed (think ASIC-based models), things will get crazy.
dannersy 9 hours ago
Because if you don't know the language or problem space, there are footguns in there that you can't find; you won't know what to look for. Only when you try to actually use this in a production environment will the issues become evident. At that point, you'll have to either know how to read and diagnose the code, or keep prompting till you fix it, which may introduce another footgun that you didn't know that you didn't know.
This is what gets me. The tools can be powerful, but my job has become a thankless effort in pointing out people's ignorance. Time and again, people prompt something in a language or problem space they don't understand; it "works", and then it hits a snag because the AI muddled over a very important detail. Then we're back to the drawing board, because that snag turned out to be an architectural blunder that didn't scale past "it worked in my very controlled, perfect circumstances, test run."
It is getting really frustrating seeing this happen on repeat. Instead of people realizing they need to get their hands dirty, they just keep prompting more and more slop, making my job more tedious. I am basically at the point where I'm looking for new avenues for work.
I say let the industry run rampant with these tools. I suspect I'll be getting a lot of job offers a few years from now as everything falls apart and their $10k-a-day prompting fixes one bug while causing multiple regressions elsewhere. I hope you're all keeping your skills sharp for the energy crisis.
psyklic 8 hours ago
npinsker 14 hours ago
Human minds are built to find patterns, and you should be careful not to assume the rate of improvement will continue forever based on nothing but a pattern.
throwawaytea 14 hours ago
fauchletenerum 14 hours ago
zeroonetwothree 5 hours ago
The less you know about a domain/language the better AI seems to be :)
fastforwardius 10 hours ago
I seem to remember doing it in SQL (EDIT_DISTANCE) 20-ish years ago. While I wouldn't say it worked beautifully, I also didn't need to write a single line of Rust :) and no more than 2 lines of SQL were needed.
jqbd 6 hours ago
sjeiuhvdiidi 13 hours ago
Let me explain the naysayers: they know "programmer" has always meant "builder", and just because search is better and you can copy and paste faster doesn't mean you've built anything.
First thing people need to realize is that no proprietary code is in those databases, and using AI will ultimately just get you regurgitated things people don't really care about. Use it all you want; you won't be able to do anything interesting. They aren't giving you valuable things for free. Anything of value will still take time and knowledge. The marketing hype is to reduce wages and prevent competition. Go for it.
snozolli 4 hours ago
> The next 5 years are going to be crazy exciting.
I don't want exciting. I want a stable, well-paying job that allows me to put food on the table, raise a family with a sense of security and hope, and have free time.
sp00chy a day ago
That is exactly my experience with Claude Code, too. It can create a lot of stuff impressively, but with LOTS more code than necessary. It’s not really effective in the end. I have more than 35 years of coding experience and always dig into the newest stuff. Quality-wise it’s still not more than junior-dev stuff, even with the latest models, sorry. And I know how to talk to these machines.
TuxSH a day ago
I don't have as many years of professional experience as you do, but IMO code pissing is one of the areas LLMs and "agentic tools" shine the least.
In both personal projects and $dayjob tasks, the highest time-saving AI tasks were:
- "review this feature branch" (containing hand-written commits)
- "trace how this repo and repo located at ~/foobar use {stuff} and how they interact with each other, make a Mermaid diagram"
- "reverse engineer the attached 50MiB+ unstripped ELF program, trace all calls to filesystem functions; make a table with filepath, caller function, overview of what caller does" (the table is then copy-pasted to Confluence)
- basic YAML CRUD
Also while Anthropic has more market share in B2B, their model seems optimized for frontend, design, and literary work rather than rigorous work; I find it to be the opposite with their main competitor.
Claude writes code rife with safety issues/vulns all the time, or at least more than other models.
iamflimflam1 11 hours ago
Try the new /simplify command.
jcranmer a day ago
I must say, I do love how this comment has provoked such varying responses.
My own observations about using AI to write code is that it changes my position from that of an author to a reviewer. And I find code review to be a much more exhausting task than writing code in the first place, especially when you have to work out how and why the AI-generated code is structured the way it is.
thegrim33 17 hours ago
There's a very wide range of programming tasks of differing difficulty that people are using / trying to use it for, and a very wide range of intelligence amongst the people that are using / trying to use it, and who are evaluating its results. Hence, different people have very different takes.
seanmcdirmid 14 hours ago
> especially when you have to work out how and why the AI-generated code is structured the way it is.
You could just ask it? Or you don’t trust the AI to answer you honestly?
chmod775 12 hours ago
tayo42 13 hours ago
You're always reviewing code, though. Either a teammate's PR, or maybe your own code from 3 months ago, or some legacy thing.
christophilus 6 hours ago
wek a day ago
This is not my experience either. If you put the work in upfront to plan the feature, write the test cases, and then loop until they pass... you can build a lot of high quality software quickly. The difference between a junior engineer using it and a great architect using it is significant. I think of it as an amplifier.
grey-area 5 hours ago
I’m amazed at how many great architects and experts on AI we now have.
andrekandre 10 hours ago
> If you put the work in upfront to plan the feature, write the test cases, and then loop until they pass...
it can be exhausting and time consuming front-loading things so deeply though; sometimes i feel like i would have been faster cutting all that out and doing it myself because in the doing you discover a lot of missing context (in the spec) anyways...
bluefirebrand 14 hours ago
This honestly reads to me like "if you spend a lot of time doing tedious monotonous shit you can save a lot of time on the interesting stuff"
I have no interest being a "great architect" if architects don't actually build anything
hrimfaxi 3 hours ago
Mars008 14 hours ago
> The difference between a junior engineer using it and a great architect using it is significant
Yes, juniors are trying to use AI with the minimum input. That alone says a lot.
seanmcdirmid a day ago
Not in my experience. But then again, lots of programmers are limited in how they use AI to write code. Those limitations are definitely real.
keeganpoppen a day ago
that's just not even remotely my experience. and i am ~20k hours into my programming career. ai makes most things so much faster that it is hard to justify ever doing large classes of things yourself (as much as this hurts my aesthetic sensibilities, it simply is what it is).
lumost a day ago
Part of this depends on if you care that the AI wrote the code "your way." I've been in shops with rather exotic and specific style guides and standards which the AI would not or will not conform to.
igor47 13 hours ago
localhost 21 hours ago
leptons 12 hours ago
I've never seen a human estimate their "programming career" in kilohours. Is that supposed to look more impressive than years? So, you've been programming only about 7 years? I guess I'm at about "170 kilohours".
ralferoo 8 hours ago
kennywinker 10 hours ago
moezd a day ago
AI-assisted code can't even stick to the API documentation, especially if the data structures are inconsistent and have evolved over time. You'll see Claude literally pulling function after function from thin air, desperately trying to fulfill your complicated business logic, and even when it's complete, it doesn't look neat at all. Yes, it will have test coverage, but one more feature request will probably break the camel's back. And if you raise that PR to the rest of your team, good luck trying to summarise it all to your colleagues.
However if you just have an easy project, or a greenfield project, or don't care about who's going to maintain that stuff in 6 months, sure, go all in with AI.
ccosky 18 hours ago
I definitely wonder if the people going all-in on AI harnessing are working on greenfield projects, because it seems overwhelming to try to get that set up on a brownfield codebase where the patterns aren't consistent and the code quality is mixed.
tayo42 13 hours ago
So just iterate on it? Your complaint is that the model isn't one shotting the problem and reading your mind about style. It's like any coding workflow, make it work, then make it nice.
moezd 13 hours ago
GalaxyNova a day ago
Not what I've experienced
neversupervised 13 hours ago
It’s crazy how some people feel the AI and others don’t. But one group is wrong. It’s a matter of time before everyone feels the AI.
fudfomo 20 hours ago
Most of this thread is debating whether models are good or bad at writing code... however, I think a more important question is what we feed the AI with because that dramatically determines the quality of the output.
When your agent explores your codebase trying to understand what to build, it reads schema files, existing routes, UI components, etc.: easily 50-100k tokens of implementation detail. It's basically reverse-engineering intent from code. With that level of ambiguous input, no wonder the results feel like junior work.
When you hand it a structured spec instead including data model, API contracts, architecture constraints etc., the agent gets 3-5x less context at much higher signal density. Instead of guessing from what was built it knows exactly what to build. Code quality improves significantly.
I've measured this across ~47 features in a production codebase, with a median ratio of 4x less context with specs vs. random agent code exploration. For UI-heavy features it's 8-25x. The agent reads 2-3 focused markdown files instead of grepping through hundreds of KB of components.
To pick up @wek's point about planning from above: devs who get great results from agentic development aren't better prompt engineers... they're better architects. They write the spec before the code, which is what good engineering always was... AI just made the payoff for that discipline 10x more visible.
lagrange77 a day ago
It's really time that mainstream media picks up on 'agentic coding' and the implications of writing software becoming a commodity.
I'm an engineer (not only software) by heart, but after seeing what Opus 4.6 based agents are capable of and especially the rate of improvement, i think the direction is clear.
thrawa8387336 a day ago
I like 4.6 and agents based on it but can only qualify it as moderately useful.
IntrepidPig 21 hours ago
> “The reason that tech generally — and coders in particular — see L.L.M.s differently than everyone else is that in the creative disciplines, L.L.M.s take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, L.L.M.s take away the drudgery and leave the human, soulful parts to you.”
This doesn’t really make sense to me. GenAI ostensibly removes the drudgery from other creative endeavors too. You don’t need to make every painstaking brushstroke anymore; you can get to your intended final product faster than ever. I think it's a common misunderstanding that the drudgery is separable from the soulful part; really, it isn't.
Also, I think GenAI in coding actually has the exact same failure modes as GenAI in painting, music, art, writing, etc. The output lacks depth, it lacks context, and it lacks an understanding of its own purpose. For most people, it’s much easier to intuitively see those shortcomings of GenAI manifest in traditional creative mediums, just because they come more naturally to us. For coding, I suspect the same shortcomings apply, they just aren’t as clear.
I mean, at the end of the day if writing code is just to get something that works, then sure, let’s blitz away with LLMs and not bother to understand what we’re doing or why we do it anymore. Maybe I’m naive in thinking that coding has creative value that we’re now throwing away, possibly forever.
steve-atx-7600 5 hours ago
Maybe they mean more soulful like a fellow who blacksmiths his own tools and metal fasteners before constructing something. I’d personally think this person was a badass, but until WWIII it’s impractical and seems arbitrary, because why stop there: get more soulful and mine your own ore too.
CrzyLngPwd 10 hours ago
Visual Basic was the end of programming as we knew it...until it wasn't.
heikkilevanto 10 hours ago
And before that, COBOL was supposed to allow computer users to write in almost plain English without even knowing the machine instruction set.
It did change the programming landscape, but there was still a huge need for this new kind of programmer.
Nevermark 10 hours ago
The psycho-engineering of model prompts does feel very Philip K. Dick.
If your base prompt informs the model that it is a human software developer in a Severance-style situation, it gets even closer.
igor47 12 hours ago
I'm not normally a fan of the NYT but this wasn't too bad. It passed the Gell-Mann test, and is clearly written by someone who knows the field well, even though the selection of quotes skews towards outliers -- I think Yegge, for instance, is pretty far out of the mainstream in his views on LLMs, whether ahead or sideways.
As a result a lot of the responses here are either quibbles or cope disguised as personal anecdotes. I'm pretty worried about the impact of the LLMs too, but if you're not getting use out of them while coding, I really do think the problem is you.
Since people always want examples, I'll link to a PR in my current hobby project, which Claude code helped me complete in days instead of weeks. https://github.com/igor47/csheet/pull/68 Though this PR creates a bunch of tables, routes, services -- it's not just greenfield CRUD work. We're figuring out how to model a complicated domain (the rules to DnD 5e, including the 2014 and the 2024 revisions of those rules), integrating with existing code, thinking through complex integrations including with LLMs at run time. Claude is writing almost all the code, I'm just steering
whoisstan 7 hours ago
I feel the urge to tell the LLM to rewrite the article for a software developer audience, but I don't; those kinds of passages are hard to overcome:
'Salva opened up his code editor — essentially a word processor for writing code — to show me what it’s like to work alongside Gemini, Google’s L.L.M. '
And what's up with L.L.M, A.I., C.L.I. :)
moregrist 5 hours ago
> And what's up with L.L.M, A.I., C.L.I. :)
It’s probably N.Y.T. style requirements; a lot of style guides (eg: Chicago Manual of Style, Strunk & White, etc) have a standard form for abbreviations and acronyms. A paper like N.Y.T. does too and probably still employs copy editors who ensure that every article conforms to it.
daveguy 2 hours ago
> "...melodramatic prose might seem kind of nuts, but as their name implies, large language models are language machines. “Embarrassing” probably imparted a sense of urgency.
> “If you say, This is a national security imperative, you need to write this test, there is a sense of just raising the stakes,” Ebert said.
I'm not sure why programmers and science writers are still attributing emotions to this and why it works. Behind the LLM is a layer that attributes attention to various parts of the context. There are words in the English language that command greater attention. There is no emotion or internal motivation on the part of the LLM. If you use charged words you get charged attention. Quite literally "attention is all you need" to describe why appealing to "emotion" works. It's a first order approximation for attention.
zjp a day ago
There is no such thing as "after coders": https://zjpea.substack.com/p/embarrassingly-solved-problems
This excerpt:
>A.I. had become so good at writing code that Ebert, initially cautious, began letting it do more and more. Now Claude Code does the bulk of it.
is a little overstated. I think the brownfield section has things exactly backwards. Claude Code benefits enormously from large, established codebases, and it’s basically free riding on the years of human work that went into those codebases. I prodded Claude to add SNFG depictions to the molecular modeling program I work on. It couldn’t have come up with the whole program on its own and if I tried it would produce a different, maybe worse architecture than our atomic library, and then its design choices for molecules might constrain its ability to solve the problem as elegantly as it did. Even then, it needed a coworker to tell me that it had used the incorrect data structure and needed to switch to something that could, when selected, stand in for the atoms it represented.
Also this:
>But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose.
Isn’t really true. It’s the free-riding problem again. The thing about an ESP is that the LLM has the advantage of either a blank canvas (if you’re using one to vibe code a startup), or at least the fact that several possibilities converge on one output, but, genuinely, not all of those realities include good coding architecture. Models can make mistakes, and without a human in the loop those mistakes can render a codebase unmaintainable. It’s a balance. That’s why I don’t let Claude stamp himself to my commits even if he assisted or even did all the work. Who cares if Claude wrote it? I’m the one taking responsibility for it. The article presents Greenfield as good for a startup, and it might be, but only for the early, fast, funding rounds, when you have to get an MVP out right now. That’s an unstable foundation they will have to go back and fix for regulatory or maintenance reasons, and I think that’s the better understanding of the situation than framing Aayush’s experience as a user error.
Even so, “weirdly jazzed about their new powers” is an understatement. Every team including ours has decades of programmer-years of tasks in the backlog, what’s not to love about something you can set to pet peeves for free and then see if the reality matches the ideal? git reset --hard if you don't like what it does, and if you do all the better. The Cuisy thing with the script for the printer is a perfect application of LLMs, a one-off that doesn’t have to be maintained.
Also, the whole framing is weirdly self limiting. The architectural taste that LLMs are, again, free riding off of, is hard won by doing the work more senior engineers are giving to LLMs instead of juniors. We’re setting ourselves up for a serious coordinated action problem as a profession. The article gestures at this a couple times
The thing about threatening LLMs is pretty funny too but something in me wants to fall back to Kant's position that what you do to anything you do to yourself.
htx80nerd a day ago
I spent ~6hrs with Claude trying to fix a web worker bug in a small JS code base Claude made. In the end it failed and I ran out of credits. Claude kept wanting to rip out huge blocks of code and replace entire functions. We never got any closer to a solution. The Claude hype is unreal. My 'on the ground' experience has been vastly different.
kuboble a day ago
Yes, you can get a project with claude to a state of unrecoverable garbage. But with a little experience you can learn what it's good at and this happens less and less.
zjp a day ago
That isn't my experience. My code and bug tracker are public, so I have the privilege of being able to paste URLs to tickets into Claude Code with the prompt "what the fuck?" and it usually comes up with something workable on its own.
movpasd a day ago
Regarding LLM's performances on brownfield projects, I thought of Naur's "Programming as Theory Building". He explains an example of a compiler project that is taken over by a team without guidance from the original developers:
> "at [the] later stage the original powerful structure was still visible, but made entirely ineffective by amorphous additions of many different kinds"
Maybe a way of phrasing it is that accumulating a lot of "code quality capital" gives you a lot more leverage over technical debt, but eventually it does catch up.
nenadg 11 hours ago
Sensationalism. Give it a couple of months.
0xbadcafebee 12 hours ago
Back in the day, programming was done on punch cards. In 20 years, that's how kids will see typing out lines of program code by hand.
znort_ 12 hours ago
different things. adding levels of abstraction is not the same as having a statistical model generate abstractions for you.
you can still call it spec-programming but if you don't audit your generated code then you're simply doing it wrong; you just don't realize that yet because you've been getting away with it until now.
Revanche1367 12 hours ago
At the rate things have been going, that is likely to happen in 20 days rather than 20 years.
somewhereoutth 7 hours ago
I have a suspicion that for a task (or to make an artifact) of a given complexity, there is a minimum level of human engagement required to complete it successfully - and that human engagement cannot be substituted for anything else. However, the actual human engagement for a task is not bounded above - efficiency is often less (much less?) than 100%.
So tools (like AI) can move us closer to the 100% efficiency (or indeed further away if they are bad tools!) but there will always be the residual human engagement required - but perhaps moved to different activities (e.g. reviewing instead of writing).
Probably very effective teams/individuals were already close to 100% efficiency, so AI won't make much difference to them.
holoduke 8 hours ago
The best developers are the ones using AI to its fullest. Mediocre devs will become useless, since even a PO could become one. But one who understands architecture, software, code, and AI will be expensive to hire. I know plenty of them. I worry for the ones not willing to adopt AI.
DGAP 5 hours ago
Lots of cope here. Highly paid white collar jobs are going to disappear.
xenadu02 a day ago
It's an accelerator. A great tool if used well. But just like all the innovations before it that were going to replace programmers it simply won't.
I used Claude just the other day to write unit test coverage for a tricky system that handles resolving updates into a consistent view of the world and handles record resurrection/deletion. It wrote great test coverage because it parsed my headerdoc and code comments that went into great detail about the expected behavior. The hard part of that implementation was the prose I wrote and the thinking required to come up with it. The actual lines of code were already a small part of the problem space. So yeah Claude saved me a day or two of monotonously writing up test cases. That's great.
Of course Claude also spat out some absolute garbage code using reflection to poke at internal properties because the access level didn't allow the test to poke at the things it wanted to poke at, along with some methods that were calling themselves in infinite recursion. Oh and a bunch of lines that didn't even compile.
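For readers who haven't hit this failure mode, a hypothetical sketch of the anti-pattern described above (the commenter's code base is presumably Swift, given the headerdoc mention, but Python shows the same shape; all names here are invented):

```python
class Record:
    """Toy stand-in for a record that supports deletion/resurrection."""

    def __init__(self):
        self._deleted = False  # internal state, not part of the public API

    def delete(self):
        self._deleted = True

    def is_deleted(self):
        return self._deleted


# What the model produced: when the access level blocks it, it reaches
# past the public API via reflection, coupling the test to internals.
def test_delete_via_reflection():
    r = Record()
    r.delete()
    assert getattr(r, "_deleted") is True


# What a reviewer would want: exercise only the public API, so the test
# survives refactoring of the internal representation.
def test_delete_via_api():
    r = Record()
    r.delete()
    assert r.is_deleted()
```

Both tests pass today; only the second one still passes after an internal rename, which is the "technically correct but unreasoned" distinction the commenter is pointing at.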
The thing is about those errors: most of them were a fundamental inability to reason. They were technically correct in a sense. I can see how a model that learned from other code written by humans would learn those patterns and apply them. In some contexts they would be best-practice or even required. But the model can't reason. It has no executive function.
I think that is part of what makes these models both amazingly capable and incredibly stupid at the same time.
CollinEMac a day ago
>but like most of their peers now, they only rarely write code.
Citation needed. Are most developers "rarely" writing code?
jcranmer a day ago
I'd expect that probably less than 10% of my time is spent actually writing code, and not because of AI, but because enough of it is spent analyzing failures, reading documents, participating in meetings, putting together presentations, answering questions, reading code, etc. And even when I have a nice, uninterrupted coding session, I still spend a decent fraction of that time thinking through the design of how I want the change rather than actually writing the code to effect that change.
habinero 11 hours ago
Yeah, actually writing code is a surprisingly small part of the job.
thrawa8387336 a day ago
And was true before AI
dboreham a day ago
In my direct experience this is mostly true.
sjeiuhvdiidi 14 hours ago
It's all nonsense. It's just better search; the intelligence is not artificial. They are trying to convince everyone that they don't need to pay programmers. That's all it is. It'll work on the ignorant, who'll take less money to make sure it works and fix the bugs, which is mostly what they were being paid for anyway. They just want to devalue the work of the people they are reliant on. Nothing new.
neversupervised 13 hours ago
I think you're a bit behind on your world view. Just because it's inconvenient to you that non-coders can now code doesn't make it untrue.
mdavid626 13 hours ago
No, they can’t.
It has nothing to do with inconvenience.
I really like that laymen now make these statements; they know better than people who have worked in the industry for decades.
gist 21 hours ago
For one thing, comments here appear to address quality and issues as they stand today, not going forward. Quality will change quicker than anyone expects. I wonder how many people on HN remember when the first Mac came out with MacPaint, and then PageMaker or Quark. That didn't evolve anywhere near as quickly as AI appears to be evolving.
Also, I'm not seeing anyone consider that what a programmer calls quality and what "gets the job done" (as mentioned in the article) are different things in any business. (An example from typesetting: the original laser printers were only 300 dpi, but after a short period 1200 dpi printers arrived and were "good enough" for camera-ready copy.)
kittikitti a day ago
Another trash article from the New York Times, who financially benefit from this type of content because of their ongoing litigation against OpenAI. I think the assumption that developers don't code is wrong. Most software engineers don't even want to code; they are opportunists looking to make money. I have yet to experience this cliff of coding. These people aren't asking hard enough questions. I have a bunch of things I want AI to build that it completely fails on.
The article could have been written from a very different perspective. Instead, the "journalists" likely interviewed a few insiders from Big Tech and generalized. They don't get it. They never will.
Before the advent of ChatGPT, maybe 2 in 100 people could code. I was actually hoping AI would increase programming literacy but it didn't, it became even more rare. Many journalists could have come at it from this perspective, but instead painted doom and gloom for coders and computer programming.
The New York Times should look in the mirror. With the advent of the iPad, most experts agreed that they would go out of business because a majority of their revenue came from print media. Look what happened.
Understand this, most professional software and IT engineers hate coding. It was a flex to say you no longer code professionally before ChatGPT. It's still a flex now. But it's corrupt journalism when there is a clear conflict of interest because the NYT is suing the hell out of AI companies.
hn_acc1 a day ago
Agreed - just like the Fortune article talking about (Edit: Morgan Stanley, not GS) saying "the AI revolution is coming next year, and will decimate tons of industries, and no one is ready for it". They quote Altman and Musk. Gee - what did you expect from those two snake-oil salesmen?
novemberYankee7 19 hours ago
Also the fact that NYT gives all their devs licenses to Cursor and Claude
deflator 2 days ago
What is a coder? Someone who is handed the full specs and sits down and just types code? I have never met such a person. The most annoying part of SWE is everyone who isn't an SWE has inane ideas about what we do.
pjmlp a day ago
Never worked on offshoring projects? That is exactly what the sweatshop coders do.
Tade0 21 hours ago
No we don't.
For one, I never saw a "full spec" (if such a thing even exists) back in my days of making 8k. Annually.
recursivedoubts a day ago
I think that the current AI tooling is a much bigger threat to offshore sweatshops than to domestic programmers.
Why deal with language barriers, time shifts, etc. when a small team of good developers can be so much more productive, allegedly?
theshackleford a day ago
> The most annoying part of SWE is everyone who isn't an SWE has inane ideas about what we do.
I’ve tended to hold the same opinion of what the average SWE thinks everyone else does.
fraywing a day ago
I keep getting stuck on the liability problem of this supposed "new world". If we take this as far as it goes: AI agent societies that design, architect, and maintain the entire stack E2E with little to no oversight. What happens when rogue AIs do bad things? Who is responsible? You have to have fireable senior engineers who understand deep fundamentals to make sure things aren't going awry, right? /s
suzzer99 14 hours ago
Check out the movie Brazil, if you haven't seen it already. Incredibly far ahead of its time.
ramesh31 2 days ago
Because we love tech? I'm absolutely terrified about the future of employment in this field, but I wouldn't give up this insane leap of science fiction technology for anything.
bigstrat2003 a day ago
I love tech - tech that actually works well. The current tech we have for AI does not, so I'm not excited about it.
hn_acc1 a day ago
A really good pattern-matching engine is an "insane leap of science fiction"? It saves me a bit of typing here and there with some good pattern matching. Trying to get it to do anything more than a few lines gives me gibberish, or an infinite loop of "Oh, you're right, I need to do X, not Y", over and over - and that's Opus 4.5 or whatever the recent one is.
Would you give it access to your bank account, your 401k, trust it to sell your house, etc? I sure wouldn't.
ramesh31 3 hours ago
>A really good pattern-matching engine is an "insane leap of science fiction"?
Yes, literally. The ship computer voice interface in Star Trek was complete science fiction until 2022. Now its ability to understand speech and respond seems quaint in comparison to current AI.
kittikitti a day ago
"One such test for Python code, called a pytest"
The brain rot is such that the author couldn't even come up with "unit test".
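For context on the confusion: pytest is a test framework, not a kind of test; "unit test" is what the article presumably meant. A minimal example (the function names are illustrative):

```python
def add(a, b):
    """Trivial function under test."""
    return a + b


def test_add():
    # pytest discovers functions named test_* in files named test_*.py
    # and runs them; a bare assert is the whole assertion API.
    assert add(2, 3) == 5
```

Running `pytest` in the containing directory collects and executes `test_add`, so "a pytest" conflates the runner with the tests it runs.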
mkehrt a day ago
Why would you expect a reporter to magically know what a "unit test" is? Sounds like a simple miscommunication with one of his sources. Not perfect but not "brain rot".