Every layer of review makes you 10x slower (apenwarr.ca)

472 points by greyface- 16 hours ago

onion2k 14 hours ago

But you can’t just not review things!

Actually you can. If you shift the reviews far to the left, and call them code design sessions instead, and you raise problems in dailies, and you pair program through the gnarly bits, then 90% of what people think a review should find goes away. The expectation that you'll discover bugs and architecture and design problems doesn't exist if you've already agreed with the team what you're going to build. The remaining 10% of things like var naming, whitespace, and patterns can be checked with a linter instead of a person, as the sketch below shows. If you can get the team to that level you can stop doing code reviews.
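
For that remaining 10%, even house conventions can be automated. A toy sketch of the kind of naming check a linter can own (Python stdlib, invented rule; a real team would just configure an off-the-shelf linter):

  # naming_check.py - toy lint rule: flag function names that aren't snake_case.
  import ast
  import re
  import sys

  SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

  def check(path: str) -> int:
      tree = ast.parse(open(path).read(), filename=path)
      errors = 0
      for node in ast.walk(tree):
          if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name):
              print(f"{path}:{node.lineno}: function '{node.name}' is not snake_case")
              errors += 1
      return errors

  if __name__ == "__main__":
      sys.exit(1 if any(check(p) for p in sys.argv[1:]) else 0)

Wire that kind of check into CI and the naming argument never reaches a human reviewer.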

You also need to build a team that you can trust to write the code you agreed you'd write, but if your reviews are there to check someone has done their job well enough then you have bigger problems.

alkonaut 11 hours ago

This falls for the famous "hours of planning can save minutes of coding" trap. Architecture can't (all) be planned out on a whiteboard; it's the response to difficulties you only discover as you try to implement.

If you can agree on what to build and how to build it, and it then turns out that the plan actually works, then you are better than me. That hasn't happened in my 20 years of software development. Most of what's planned falls down within the first few hours of implementation.

Iterative architecture meetings will be necessary. But that falls into the pit of weekly meetings.

ventana 2 hours ago

That's actually one thing that always prevented me from following the standard pathway of "write a design document first, get it approved, then execute" during my years at Google.

I cannot write a realistic, non-hand-wavy design document without having a proof of concept working, because even if I try, I will need to convince myself that this part and this part and that part will work, and the only way to do that is to write actual code. And then you pretty much have the code ready, so why bother writing a design doc?

Some of my best (in terms of perf consequences) design documents were either completely trivial from a code-complexity point of view, so that I did not actually need to write the code to see the system working, or were written after I already had a quick-and-dirty implementation working.

nyrikki 2 hours ago

eyelidlessness 5 hours ago

It’s a muscle you can exercise, and doing so helps you learn what to focus on so it’ll be successful. IME a very successful approach is to focus on interfaces, especially at critical boundaries (critical for your use case first, then critical for your existing design/architecture).

Doing this often settles the design direction in a stable way early on. More than that, it often reveals a lot of the harder questions you’ll need to answer: domain constraints and usage expectations.

Putting this kind of work upfront can save an enormous amount of time and energy by precluding implementation work on the wrong things, and ruling out problematic approaches for both the problem at hand as well as a project’s longer term goals.
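
To make "focus on interfaces" concrete, a minimal sketch (hypothetical domain; Python's typing.Protocol used just as one way to pin a boundary down):

  # Interface-first: define the critical boundary before any implementation.
  from typing import Protocol

  class RateProvider(Protocol):
      """Boundary where billing logic meets external FX data."""

      def rate(self, base: str, quote: str) -> float:
          """Return the current base -> quote exchange rate."""
          ...

Implementations (live API client, cache, test stub) can then be built and swapped independently, and the hard questions (staleness, missing pairs, precision) surface while the design is still cheap to change.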

sodapopcan 7 hours ago

Pair programming 100% of the time also works. It's unfortunately widely unpopular, but it works.

alkonaut 6 hours ago

jakevoytko 6 hours ago

iampims 6 hours ago

lostdog 3 hours ago

hallway_monitor 6 hours ago

dotancohen 2 hours ago

  > Most of what's planned falls down within the first few hours of implementation.
Planning is priceless. But plans are worthless.

apexalpha 6 hours ago

This might be true for tech companies, but the tech department I'm in at a large government org could absolutely architect away >95% of the 'problems' we are fixing at the end of the SDLC.

2OEH8eoCRo0 9 hours ago

I've worked waterfall (defense) and while I hated it at the time, I'd rather go back to it. Today we move much faster but often build the wrong thing, or rewrite and refactor things multiple times. In waterfall we moved glacially, but what we built stuck. Also, with so much up-front planning, the code practically writes itself. I'm not convinced there are any real velocity gains in agile when factoring in all the fiddling, rewrites, and refactoring.

> Most of what's planned falls down within the first few hours of implementation.

Not my experience at all. We know what computers are capable of.

steveBK123 9 hours ago

datsci_est_2015 7 hours ago

orthoxerox 8 hours ago

zingar 9 hours ago

Hendrikto 6 hours ago

alkonaut 7 hours ago

goalieca 7 hours ago

AIorNot 9 hours ago

“Everyone has a plan until they get punched in the mouth" - Mike Tyson

loire280 14 hours ago

I've seen engineers I respect abandon this way of working as a team for the productivity promise of conjuring PRs with a coding agent. It blows away years of trust so quickly when you realize they stopped reviewing their own output.

overfeed 13 hours ago

Perhaps due to a FOMO outbreak[1], upper management everywhere has demanded AI-powered productivity gains, and based on LoC/PR metrics, it looks like they are getting them.

1. The longer I work in this industry, the clearer it becomes that CxOs aren't great at projecting/planning, and default to copy-cat, herd behaviors when uncertain.

serial_dev 10 hours ago

tripledry 12 hours ago

onion2k 14 hours ago

Putting too much trust in an agent is definitely a problem, but I have to admit I've written about a dozen little apps in the past year without bothering to look at the code and they've all worked really well. They're all just toys and utilities I've needed and I've not put them into a production system, but I would if I had to.

Agents are getting really good, and if you're used to planning and designing up front you can get a ton of value from them. The main problem with them that I see today is people having that level of trust without giving the agent the context necessary to do a good job. Accepting a zero-shotted service to do something important into your production codebase is still a step too far, but it's an increasingly small step.

camillomiller 13 hours ago

denkmoon 10 hours ago

I’m so disappointed to see the slip in quality by colleagues I think are better than that. People who used to post great PRs are now posting stuff with random unrelated changes, little structs and helpers all over the place that we already have in common modules etc :’(

bluefirebrand 5 hours ago

nvardakas 11 hours ago

This is the part that doesn't get talked about enough. Code review was never just about catching bugs; it was how teams built shared understanding of the codebase. When someone skips reviewing their own AI-generated PR, they're not just shipping unreviewed code, they're opting out of knowing what's in their own system. The trust problem isn't really about the AI output quality, it's about whether the person submitting it can answer questions about it six months from now.

psychoslave an hour ago

>The expectation that you'll discover bugs and architecture and design problems doesn't exist if you've already agreed with the team what you're going to build.

This is like saying there won't be any surprises on the road if you've already set the destination. Most of the time, though, you're given only a vague description of the kind of place you want to reach, not a precise target point. And you don't necessarily start with a map, not even an outdated one. Also, geological forces reshape the landscape at least as fast as you're able to move.

Certhas 9 hours ago

That's partly the point of the article, except the article acknowledges that this is organizationally hard:

> You get things like the famous Toyota Production System where they eliminated the QA phase entirely.

> [This] approach to manufacturing didn’t have any magic bullets. Alas, you can’t just follow his ten-step process and immediately get higher quality engineering. The secret is, you have to get your engineers to engineer higher quality into the whole system, from top to bottom, repeatedly. Continuously.

> The basis of [this system] is trust. Trust among individuals that your boss Really Truly Actually wants to know about every defect, and wants you to stop the line when you find one. Trust among managers that executives were serious about quality. Trust among executives that individuals, given a system that can work and has the right incentives, will produce quality work and spot their own defects, and push the stop button when they need to push it.

> I think we’re going to be stuck with these systems pipeline problems for a long time. Review pipelines — layers of QA — don’t work. Instead, they make you slower while hiding root causes. Hiding causes makes them harder to fix.

roncesvalles 11 hours ago

>shift the reviews far to the left, and call them code design sessions instead, and you raise problems in dailies, and you pair program through the gnarly bits

hell in one sentence

hedora 3 hours ago

I have seen the future, and it is a robotic boot pushing a human neck to the left.

srean 8 hours ago

Bean counters do not like pair programming.

If we hired two programmers, the goal was to produce twice the LOC per week. Now we are producing far less than our weekly target. Does not meet expectations.

wcfrobert an hour ago

Master planning has never worked for my side projects unless I am building an exact replica of what I've done in the past. The most important decisions are made while I'm deep in the codebase and have a better understanding of the tradeoffs.

I think that's why startups have such an edge over big companies. They can just build and iterate while the big company gets caught up in month-long review processes.

Swizec 14 hours ago

> You also need to build a team that you can trust to write the code you agreed you'd write

I tell every hire new and old “Hey do your thing, we trust you. Btw we have your phone number. Thanks”

Works like a charm. People even go out of their way to write tests for things that are hard to verify manually. And they verify manually what’s hard to write tests for.

The other side of this is building safety nets. Takes ~10min to revert a bad deploy.
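
A sketch of one such safety net: keep the last few release artifacts around and repoint a "current" symlink (hypothetical layout; most deploy tools have their own rollback, this just shows why it can be fast):

  # rollback.py - revert by atomically repointing "current" at the prior release.
  # Assumes releases/ holds timestamp-named dirs that sort oldest -> newest.
  import os
  import sys

  RELEASES_DIR = "releases"

  def rollback() -> None:
      releases = sorted(os.listdir(RELEASES_DIR))
      if len(releases) < 2:
          sys.exit("nothing to roll back to")
      previous = os.path.join(RELEASES_DIR, releases[-2])
      os.symlink(previous, "current.tmp")   # build the new link...
      os.replace("current.tmp", "current")  # ...then swap it in atomically
      print(f"current -> {previous}")

  if __name__ == "__main__":
      rollback()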

pdhborges 11 hours ago

> The other side of this is building safety nets. Takes ~10min to revert a bad deploy.

Does it? Reverting a bad deploy is not only about running the previous version.

Did you mess up data? Did you take actions on third-party services that need to be reverted? Did it have legal repercussions?

Swizec 5 hours ago

DrewADesign 6 hours ago

herbstein 8 hours ago

> I tell every hire new and old “Hey do your thing, we trust you. Btw we have your phone number. Thanks”

That's cool. Expect to pay me for the availability outside work hours. And extra when I'm actually called

Swizec 6 hours ago

namanyayg 11 hours ago

How does the phone number help?

chaboud 10 hours ago

swiftcoder 9 hours ago

gabriel-uribe 11 hours ago

rendaw 6 hours ago

I've seen this mentioned a couple times lately, so I want to say I don't believe pair programming can serve in place of code review.

Code review benefits from someone coming in fresh, making assumptions and challenging those by looking at the code and documentation. With pair programming, you both take the same logical paths to the end result and I've seen this lead to missing things.

riffraff 14 hours ago

This is also the premise of pair programming/extreme programming: if code review is useful, we should do it all the time.

roncesvalles 10 hours ago

Anyone who talks about pair programming has either never done it or just started doing it last week.

interroboink 10 hours ago

rimunroe 6 hours ago

nicoburns 10 hours ago

orwin 10 hours ago

ap99 10 hours ago

Unless you're covering 100% of edge/corner cases during planning (including roughly how they're handled) then there is still value in code reviews.

You conveniently brushed this under the rug of pair programming, but of the handful of companies I've worked at, only one tried it, and only as an experiment, which in the end failed because no one really wanted to work that way.

I think this "don't review" attitude is dangerous and only acceptable for hobby projects.

zingar 9 hours ago

Reviews are vital for 80% of the programmers I work with but I happily trust the other 20% to manage risk, know when merging is safe without review, and know how to identify and fix problems quickly. With or without pairing. The flip side is that if the programmer and the reviewer are both in the 80% then the review doesn’t decrease the risk (it may even increase it).

chrisweekly 6 hours ago

"If you can get the team to that level you can stop doing code reviews."

IMHO / IME (over 20y in dev) reviewing PRs still has value as a sanity check and a guard against (slippery slope) hasty changes that might not have received all of the prior checks you mentioned. A bit of well-justified friction w/ ROI, along the lines of "slow is smooth, smooth is fast".

totetsu 14 hours ago

This seems to be the core of the problem with trying to leave things to autonomous agents... The response to Amazon's agents deleting prod was to implement review stages

https://blog.barrack.ai/amazon-ai-agents-deleting-production...

chaiyihein 3 hours ago

Actually you don't need reviews if you have a realistic enough simulation test environment that is fully instrumentable by the AI agent. If you can simulate production almost exactly and it works there, there's no need for code review.

To move to the hyperspeed timescale you need reliable models of verification in the digital realm, fully accessible by AI.

ramon156 13 hours ago

I'm at a company that does no reviews, and I'm a mid-level dev. The tools we make are not interesting at all, so it's probably the best position I could ask for. I occasionally have time to explore improvements, tools, and side projects (don't tell my boss about that last one).

ozim 12 hours ago

Then you spend all your budget on code design sessions and have nothing to show to the customer.

froh 13 hours ago

yes!

and it also works for me when working with AI. That produces much better results, too, when I first do a design session really discussing what to build, then a planning session laying out the steps to build it ("reviewability" world wonder), and then the instruction to stop when things get gnarly and work with the hooman.

does anyone here have a good system prompt for that self-observation: "I might be stuck, I'm kinda sorta looping, let's talk with the hooman!"?

jayd16 5 hours ago

Linting isn't going to catch most malicious implementation patterns. You still need to sniff test what was written.

agumonkey 7 hours ago

Anybody have ideas on how to avoid childish resistance? Any time something like this pops up, people discuss it into oblivion and teams stay in their old habits.

thrwaway55 11 hours ago

Okay but Claude is a fucking moron.

QuiEgo 4 hours ago

> your reviews are there to check someone has done their job well enough then you have bigger problems

Welcome to working with real people. They go off the rails and ignore everything you’ve agreed to during design because they get lazy or feel schedule pressure and cut corners all the time.

Sideline: I feel like AI obeys the spec better than engineers sometimes, sigh.

Asooka 3 hours ago

Well, we can't not review things, because the workflow demands we review things. So we hacked the process: for big changes we begin by asking the people who will be impacted (a no-code review), then we do a pre-review of a rough implementation, and finally the formal review takes a fraction of the time.

anal_reactor 14 hours ago

I never review PRs, I always rubber-stamp them, unless they come from a certified idiot:

1. I don't care because the company at large fails to value quality engineering.

2. 90% of PR comments are arguments about variable names.

3. The other 10% are mistakes that have very limited blast radius.

It's just that, unless my coworker is a complete moron, most likely whatever they came up with is at least in an acceptable state, in which case there's no point delaying the project.

Regarding knowledge share, it's complete fiction. Unless you actually make changes to some code, there's zero chance you'll understand how it works.

_kidlike 12 hours ago

I'm very surprised by these comments...

I regularly review code that is way more complicated than it should be.

The last few days I was going back and forth on reviews of a function that originally had a cyclomatic complexity of 23. Eventually I got it down to 8, but I had to call the author into a pair programming session and show him how the complexity could be reduced.
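
To make that kind of drop concrete, the usual moves are guard clauses and flattening nested branches. An invented before/after (not the actual function under review):

  # Before: nesting, every branch adds complexity.
  def ship_cost(order):
      if order is not None:
          if order["weight"] > 0:
              if order["express"]:
                  return 10 + order["weight"] * 2
              else:
                  return 5 + order["weight"]
          else:
              raise ValueError("weight must be positive")
      else:
          raise ValueError("no order")

  # After: guard clauses; each path reads straight down.
  def ship_cost_flat(order):
      if order is None:
          raise ValueError("no order")
      if order["weight"] <= 0:
          raise ValueError("weight must be positive")
      base, rate = (10, 2) if order["express"] else (5, 1)
      return base + order["weight"] * rate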

servo_sausage 12 hours ago

zzrrt 6 hours ago

recursivecaveat 14 hours ago

Do people really argue about variable names? Most review comments I see are fairly trivial, but almost always not very subjective (leftover debug log, please add a comment here, etc.). Maybe it helps that many of our seniors are from a team where we had no auto-formatter or style guide at all for quite a while. I think everyone should experience that a random mix of `){` and `) {` does not really impact you in any way beyond the mild irking of a crooked painting or something. There's a difference between aesthetically bothersome and actually harmful. Not to say that you shouldn't run a formatter, but just for some perspective.

jffhn 12 hours ago

alemanek 4 hours ago

anal_reactor 13 hours ago

swiftcoder 9 hours ago

> 2. 90% of PR comments are arguments about variable names.

This sort of comment is meaningless noise that people add to PRs to pad their management-facing code review stats. If this is going on in your shop, your senior engineers have failed to set a suitable engineering culture.

If you are one of the seniors, schedule a one-on-one with your manager, and tell them in no uncertain terms that code review stats are off-limits for performance reviews, because it's causing perverse incentives that fuck up the workflow.

anal_reactor 8 hours ago

g947o 7 hours ago

That seems a lot about the company and the culture rather than about how code review is supposed to work.

I have been involved in enough code reviews both in a corporate environment and in open source projects to know this is an outlier. When code review is done well, both the author and reviewer learn from the experience.

worldsayshi 12 hours ago

People always make mistakes, like forgetting to include a change. The point of PRs for me is to weed out costly mistakes. Automated tests should hopefully catch most of them, though.

Fargren 10 hours ago

devmor 14 hours ago

I used to do this! I can’t anymore, not with the advent of AI coding agents.

My trust in my colleagues is gone, I have no reason to believe they wrote the code they asked me to put my approval on, and so I certainly don’t want to be on a postmortem being asked why I approved the change.

Perhaps if I worked in a different industry I would feel like you do, but payments is a scary place to cause downtime.

eudamoniac 4 hours ago

As far as I'm concerned if I approved the PR I'm equally responsible for it as the author is. I never make nitpick comments and I still have to point out meaningful mistakes in around 30% of reviews. The percentage has only risen with AI slop.

hinkley 12 hours ago

These systems make it more efficient to remove the actively toxic members from your team. Belligerence can be passive-aggressively "handled" by additional layers, but at considerable time and emotional-labor cost to people who could be getting more work done without having to coddle untalented assholes.

layer8 11 hours ago

Sounds like there was a bad hiring process.

hinkley 13 minutes ago

usefulcat an hour ago

DeathArrow 11 hours ago

The issue is that every review adds a lot of delay. Won't a lot of alignment and pair programming be time-expensive too?

onion2k 3 hours ago

> Won't a lot of alignment and pair programming be time-expensive too?

The question is really "Will up-front design and pair programming cost more than not doing up-front design and pair programming?".

In my experience, somewhat counter-intuitively, alignment and pairing is cheaper because you get to the right answer a bit 'slower' but without needing the time spent reworking things. If rework is doubling the time it takes to deliver something (which is not an extreme example, and in some orgs would be incredibly conservative) then spending 1.5 times the estimate putting in good design and pair programming time is still waaaay cheaper.

rendall 13 hours ago

Yes. This is the way. Declarative design contracts are the answer to A.I. coders. A team declares what they want, agents code it together with human supervision. Then code review is just answering the question "is the code conformant with the design contract?"

But. The design contract needs review, which takes time.

jauntywundrkind 14 hours ago

I wonder what delayed continuous release would be like. Trust folks to merge semi-responsibly, but have a two-week delay before actually shipping, to give yourself some time to find and fix issues.

Perhaps it's kind of a pain to inject fixes, since you have to rebase the outstanding work. But I kind of like this idea of the org having responsibility to do whatever review it wants, without making every person have to corral all the cats to get all the check marks. Make it the org's challenge instead.
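
The shipping rule itself is simple to sketch: deploy the newest commit that has aged past the embargo (assumes a linear main branch and git on PATH):

  # delayed_release.py - pick the newest commit that is at least 14 days old.
  import subprocess
  import time

  EMBARGO_SECONDS = 14 * 24 * 3600

  def release_candidate() -> str | None:
      cutoff = time.time() - EMBARGO_SECONDS
      # %H = commit hash, %ct = committer date as a unix timestamp
      log = subprocess.run(
          ["git", "log", "--format=%H %ct", "main"],
          capture_output=True, text=True, check=True,
      ).stdout
      for line in log.splitlines():
          sha, ts = line.split()
          if int(ts) <= cutoff:
              return sha  # newest commit already past the embargo
      return None

  print(release_candidate())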

teeray 6 hours ago

Code reviews are a volunteer’s dilemma. Nobody is showered with accolades for putting “reviewed a bunch of PRs” on their performance review, by comparison with “shipped a bunch of features.” The two go hand-in-hand, but rewards follow marks of authorship, despite how much reviewers influence what actually lands in production.

Consequently, people tend to become invested in reviewing work only once it’s blocking their work. Usually, that’s work that they need to do in the future that depends on your changes. However, that can also be work they’re doing concurrently that now has a bunch of merge conflicts because your change landed first. The latter reviewers, unfortunately, won’t have an opinion until it’s too late.

Fortunately, code is fairly malleable. These “reviewers” can submit their own changes. If your process has a bias towards merging sooner, you may merge suboptimal changes. However, it will converge on a better solution more quickly than if your changes live in a vacuum for months on a feature branch passing through the gauntlet of a Byzantine review and CI process.

surajrmal 6 hours ago

Or the reviewer feels responsible for the output of the person they are reviewing, or for the code being modified. For instance, a lead gets credit for the output of the team. Also, wanting to catch bugs in review before they make your on-call painful can be a large motivation.

bee_rider 2 hours ago

It’s weird that the two tasks that most programmers would agree are most important (reviewing code and deleting code) are not heavily rewarded.

cherry_tree 2 hours ago

Unfortunately for programmers, programmers aren’t doing the rewarding

ndriscoll 5 hours ago

I've always encouraged everyone more junior to review everything regardless of who signs off, and even if you don't understand what's going on/why something was done in a particular way, to not be shy to leave comments asking for clarification. Reviewing others' work is a fantastic way to learn. At a lower level, do it selfishly.

If you're aiming for a higher level, you also need to review work. If you're leading a team or above (or want to be), I assume you'll be doing a lot of reviewing of code, design docs, etc. If you're judged on the effectiveness of the team, reviews are maybe not an explicit part of some ladder doc, but they're going to be part of boosting that effectiveness.

thot_experiment 15 hours ago

Valve is one of the only companies that appears to understand this, as well as that individual productivity is almost always limited by communication bandwidth, and that communication burden grows roughly quadratically (n(n-1)/2 pairwise channels) while headcount grows linearly [or with some derated exponent, since the tree/mesh doesn't need to be fully connected].
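
The back-of-the-envelope arithmetic for the fully connected case:

  # Pairwise communication channels in a fully connected team.
  for n in (2, 5, 10, 20, 50):
      print(f"{n:>3} people -> {n * (n - 1) // 2:>4} channels")
  # 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190, 50 -> 1225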

MrBuddyCasino 11 hours ago

The first one to realise this was Jeff Bezos, afaik. One would think the others have wised up in the meantime, but no.

trymas 9 hours ago

> The first one to realise this was Jeff Bezos, afaik

I am not aware about the details - can you elaborate?

pkos98 8 hours ago

lelanthran 15 hours ago

I wonder where the author worked that PRs are addressed in 5 hours. IME it's measured in units of days, not hours.

I agree with him anyway: if every dev felt comfortable hitting a stop button to fix a bug then reviewing might not be needed.

The reality is that any individual dev will get dinged for not meeting a release objective.

usr1106 13 hours ago

I worked at a company where reviews took days. The CTO complained a lot about the speed, but we had decent code quality.

Now I work at a company where reviews take minutes. We have 5 lines of technical debt per 3 lines of code written. We spend months working on complicated bugs that have made it to production.

titanomachy 10 hours ago

My last FAANG team had a soft 4-hour review SLA, but if it was a complicated change then that might just mean someone acknowledging it and committing to reviewing it by a certain date/time. IIRC, if someone requested a review and you hadn't gotten to it by around the 3-hour mark you'd get an automated chat message "so-and-so has been waiting a while for your review".
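
The nudge bot is easy to imagine; a rough sketch with stand-in data (the real internal tool and its APIs are unknown to me):

  # review_nudge.py - ping reviewers whose queue is approaching the review SLA.
  # PENDING_REVIEWS would come from your code-review system; notify() would
  # post to chat. Both are stand-ins here.
  import time

  SLA_SECONDS = 4 * 3600
  NUDGE_AT = 3 * 3600  # warn an hour before the SLA is breached

  PENDING_REVIEWS = [
      # (reviewer, pr_id, requested_at_unix_ts) - fake sample data
      ("alice", 101, time.time() - 3.2 * 3600),
      ("bob", 102, time.time() - 0.5 * 3600),
  ]

  def notify(reviewer: str, pr_id: int, waited: float) -> None:
      print(f"@{reviewer}: PR {pr_id} has been waiting {waited / 3600:.1f}h "
            f"(SLA is {SLA_SECONDS / 3600:.0f}h)")

  for reviewer, pr_id, requested_at in PENDING_REVIEWS:
      waited = time.time() - requested_at
      if waited >= NUDGE_AT:
          notify(reviewer, pr_id, waited)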

Everyone was very highly paid, managers measured everything (including code review turnaround), and they frequently fired bottom performers. So, tradeoffs.

duskdozer 9 hours ago

That sounds horrible. I don't know how people stand to work in those conditions.

titanomachy 9 hours ago

Jensson 9 hours ago

IshKebab 8 hours ago

jannyfer 15 hours ago

At the bottom of the page it says he is CEO of Tailscale.

ivanjermakov 11 hours ago

I've yet to see a project where reviews are handled seriously. Both business and developers couldn't care less.

eterm 11 hours ago

I worked somewhere that actually had a great way to deal with this. It only works in small teams though.

We had a "support rota", i.e. one day a week you'd be essentially excused from doing product delivery.

Instead, you were the dev who dealt with bug triage, any code reviews, questions about the product, etc.

Any spare time was spent looking for bugs in the backlog to further investigate / squash.

Then when you were done with your support day you were back to sprint work.

This meant there was no ambiguity of who to ask for code review, and limited / eliminated siloing of skills since everyone had to be able to review anyone else's work.

That obviously doesn't scale to large teams, but it worked wonders for a small team.

krilcebre 7 hours ago

I have, and in each sprint we always had tickets for reviewing the implementation, which could take anywhere from an hour to 2 days.

The code quality was much better than in my current workplace where the reviews are done in minutes, although the software was also orders of magnitude more complex.

mcdeltat 11 hours ago

Bonus points: reviews are not taken seriously in the legitimate sense, but a facade of seriousness consisting of picky complaints is put forth to reinforce hierarchy and gatekeeping

devmor 14 hours ago

I’ve worked on teams like you describe and it’s been terrible. My current team’s SDLC is more along the 5-hour line - if someone hasn’t reviewed your code by the end of today, you bring it up in standup and have someone commit to doing it.

yason 10 hours ago

One thing that often gets dismissed is the value/effort ratio of reviews.

A review must be useful and the time spent on reviewing, re-editing, and re-reviewing must improve the quality enough to warrant the time spent on it. Even long and strict reviews are worth it if they actually produce near bugless code.

In reality, that's rarely the case. Too often, reviewing descends into the rabbithole of various minutiae, and the time spent reaching the mutual compromise between what the programmer wants to ship and what the reviewer can agree to pass is not worth the effort. The time would be better spent on something else if the process doesn't yield substantial quality. Iterating a review over and over to hone it into one interpretation of perfection will only bump the change into the next 10x bracket in the wall-clock timeline mentioned in this article.

In the adage of "first make it work, then make it correct, and then make it fast," a review only needs to require that the change reaches the first step: in other words, to prevent breaking something, or the development going in an obviously wrong direction from the start. If the change works, maybe with caveats but still works, then all is generally fine enough that the change can be improved in follow-up commits. For this, the review doesn't need to go into thorough detail: a few comments to point the change in the right direction are often enough. That kind of review is a very efficient use of time.

Overall, in most cases a review should be a very short part of the development process. Most of the time should be spent programming and not in review churn. A review serves as a quick check-point that things are still going the right way but it shouldn't dictate the exact path that should be used in order to get there.

swiftcoder 10 hours ago

> The job of a code reviewer isn't to review code. It's to figure out how to obsolete their code review comment, that whole class of comment, in all future cases, until you don't need their reviews at all anymore

Amen brother

kkl 2 hours ago

> The job of a code reviewer isn't to review code. It's to figure out how to obsolete their code review comment, that whole class of comment, in all future cases, until you don't need their reviews at all anymore.

Making entire classes of issues effectively impossible is definitely the ideal outcome. But this feels much more complicated when you consider that trust doesn't always extend beyond the company's walls, and you cannot always ignore that fact because the negative outcomes can be external to the company.

What if I, a trusted engineer, run `npm update` at the wrong time and malware makes its way into production and user data is stolen? A mistake to learn from, for sure, but a post-mortem is too late for those users.

I'm certainly not advocating for relying on human checks everywhere, but reasoning about where you crank the trust knob can get very complicated or costly. Occasionally a trustworthy human reviewer can be part of a very reasonable control.

pu_pe 11 hours ago

Nice piece, and rings true. I also think startups and smaller organizations will be able to capture better value out of AI because they simply don't have all those approval layers.

beepbooptheory 6 hours ago

I do not think you have comprehended the blog.

> you can’t overcome latency with brute force

Curious what rang true to you if not the main point?

jerf 4 hours ago

The approval tree grows logarithmically as the size of the company grows. A startup can win initially because they may have zero or one level to get to production. That's part of how they manage to get inside the OODA loop of much bigger companies.

The flip side of that, and why the software world is not a complex network of millions of tiny startups but in fact has quite a few companies where log(organization) >= 2, is that there are a lot of tasks that are just larger than a startup, and the log of the minimum size organization that can do the job becomes 2 or 3 or 4.

There is certainly at least the possibility that AI can really enhance those startups even faster, but it also means that they'll get to the point of needing more layers more quickly, too. Since AI can help much, much more with coding than it can with the other layers (not that it can't help, but at the moment I don't think there's anybody else in the world getting the advantages from AI that programmers are getting), it can also shrink the amount of time that startups can stay in the log(organization)=1 range.

(Pardon the sloppy "log(organization)" notation. It should not be taken too literally.)
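
For a rough feel of the numbers, assuming a fixed span of control s, approval depth is on the order of log_s(headcount):

  # Approval layers vs. headcount, assuming a span of control of 8.
  import math

  for headcount in (5, 50, 500, 5000, 50000):
      layers = max(1, math.ceil(math.log(headcount, 8)))
      print(f"{headcount:>6} people -> ~{layers} approval layer(s)")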

alkonaut 11 hours ago

I think this makes an assumption early on which is that things are serialized, when usually they are not.

If I complete a bugfix every 30 minutes, and submit them all for review, then I really don't care whether the review completes 5 hours later. By that time I have fixed 10 more bugs!

Sure, getting review feedback 5 hours later will force me to context switch back to 10 bugs ago and try to remember what that was about, and that might mean spending a few more minutes than necessary. But that time was going to be spent _anyway_ on that bug, even if the review had happened instantly.

The key to keeping speed up in slow async communication is just working on N things at the same time.

trigvi 11 hours ago

Excellent article. Based on personal experience, if you build cutting edge stuff then you need great engineers and reviewers.

But for anything else, you just need an individual (not a team) who's okay (not great) at multiple things (architecting, coding, communicating, keeping costs down, testing their stuff). Let them build and operate something from start to finish without reviews. Judge it by how well their product works.

sltr 6 hours ago

About a year ago I shared this on /r/AskProgramming:

"...a Pull Request is a delivery. It's like UPS standing at your door with a package. You think, "Nice, the feature, bugfix, etc. has arrived!" And because it's a delivery, it's also an inspection. A Code Review. Like a freight delivery with a manifest and signoff. So you have to be able to conduct the inspection: to understand what you're receiving and evaluate whether it's acceptable as-is. Like signing for a package, once you approve, the code is yours and your team's to keep."

The metaphor has limits. IRL I sign immediately and resolve issues post-hoc with customer service. The UPS guy is not going to stand on my porch while I check if there's actually a bootable MacBook in the box. The vast majority of the time, there's no issue. If that were the same with code, teams could adopt a similar "trust now and defer verification" approach.

The article has a section on Modularity but never defines it. I wrote a post a few weeks ago on modularity and LLMs which does provide a definition. [1].

[1] https://www.slater.dev/2026/02/relieve-your-context-anxiety-...

drob518 5 hours ago

Overall, this is pretty accurate. Of course, it’s a range at every level, say 5x-15x. Large companies trend toward 15x and startups toward 5x, which is why startups out-execute large companies. Also, they just skip some levels of review because, for instance, the CEO is sitting in a code review meeting. But yea, the average is close.

tptacek 15 hours ago

Not before coding agents nor after coding agents has any PR taken me 5 hours to review. Is the delay here coordination/communication issues, the "Mythical Mammoth" stuff? I could buy that.

Aurornis 15 hours ago

The article is referring to the total time including delays. It isn’t saying that PR review literally takes 5 hours of work. It’s saying you have to wait about half a day for someone else to review it.

yxhuvud 12 hours ago

Which is a thing that depends very much on team culture. In my team it is perhaps 15 minutes for smaller fixes to get signoff. There is a virtuous feedback loop here: smaller PRs get faster reviews, but also mean more frequent PRs, which give more frequent occasions to check whether there is something new to review.

workmandan 10 hours ago

christophilus 10 hours ago

abtinf 15 hours ago

The PR won’t take 5 hours of work, but it could easily sit that long waiting for another engineer willing to context-switch away from their own heads-down work.

paulmooreparks 15 hours ago

Exactly. Even if I hammer the erstwhile reviewer with Teams/Slack messages to get it moved to the top of the queue and finished before the 5 hours are up, then all the other reviews get pushed down. It averages out, and the review market corrects.

bsjshshsb 13 hours ago

Exactly. Can you get a lawyer on the phone right now, or do you wait ~5 hours? How about a doctor appointment? Or a vet appointment? Or a mechanic visit?

Needing full human attention on a complex task from a pro who can only look at your thing has a wait time. It's worse when there are only 2 or 3 such people in the world you can ask!

nixon_why69 15 hours ago

The article specified wall-clock time. One-day turnaround is pretty typical if it's not urgent enough to demand immediate review; lots of people review incoming PRs as a morning activity.

sevenseacat 11 hours ago

I've had PRs that take me five hours to review. If your one PR is an entire feature that touches the database, the UI, and an API, and I have to do the QA on every part of it because as soon as I give the thumbs up it goes out the door to clients? Then it's gonna take a while, and I'm probably going to find a few critical issues, and then the loop starts again.

lelanthran 15 hours ago

Some devs interrupt what they are doing when they see a PR in a Slack notification, most don't.

Most devs set aside some time at most twice a day for PRs. That's 5 hours at least.

Some PRs come in at the end of the day and will only get looked at the next day. That's more than 5 hours.

IME it's rare to see a PR get reviewed in under 5 hours.

CBLT 14 hours ago

I use a PR notifier chrome extension, so I have a badge on the toolbar whenever a PR is waiting on me. I get to them in typically <2 minutes during work hours because I tab over to chrome whenever AI is thinking. Sometimes I even get to browse HN if not enough PRs are coming and not too many parallel work sessions.

riffraff 14 hours ago

But there's more than one person that can review a PR.

If you work in a team of 5 people, and each one only reviews things twice a day, that's still less than 5 hours any way you slice it.

ukuina 13 hours ago

> "Mythical Mammoth"

Most excellent.

furryrain 7 hours ago

Man moth?

squirrellous 12 hours ago

One pattern I've seen is that a team with a decently complex codebase will have 2-3 senior people who have all of the necessary context and expertise to review PRs in that codebase. They also assign projects to other team members. All other team members submit PRs to them for review. Their review queue builds up easily and average review turnaround balloons.

Not saying this is a good situation, but it's quite easy to run into it.

dominicrose 10 hours ago

Managers are expected to say that we should be productive, yet they're responsible for the framework that slows everyone down, and it's quite clear that they're perfectly fine with this framework. I'm not saying it's good or bad, because it's complicated.

mablopoule 9 hours ago

A few years ago there was a thread about "How Complex Systems Fail" here on HN[1], and one aspect of it (rule 9) is about how individuals have to balance security and productivity, and are judged differently depending on the context (especially being judged after the fact on the security aspect, while being judged beforehand on the productivity aspect).

The linked page in the thread is short and quite enlightening, but here is the relevant passage:

  > Rule 9: Human operators have dual roles: as producers & as defenders against failure.

  > The system practitioners operate the system in order to produce its desired product and also work to forestall accidents. This dynamic quality of system operation, the balancing of demands for production against the possibility of incipient failure is unavoidable. Outsiders rarely acknowledge the duality of this role. In non-accident filled times, the production role is emphasized. After accidents, the defense against failure role is emphasized. At either time, the outsider’s view misapprehends the operator’s constant, simultaneous engagement with both roles.
[1] https://news.ycombinator.com/item?id=32895812

abtinf 15 hours ago

I find this to be true for expensive approvals as well.

If I can approve something without review, it’s instant. If it requires only immediate manager, it takes a day. Second level takes at least ten days. Third level trivially takes at least a quarter (at least two if approaching the end of the fiscal year). And the largest proposals I’ve pushed through at large companies, going up through the CEO, take over a year.

TheChelsUK 10 hours ago

That’s because most teams are doing engineering wrong.

The handover to a peer for review is a falsehood. PRs were designed for open source projects to gatekeep public contributors.

Teams should be doing trunk-based development, group/mob programming and one piece flow.

Speed is only one measure and AI is pushing this further to an extreme with the volume of change and more code.

The quality aspect is missing here.

Speed without quality is a fallacy and it will haunt us.

Don’t focus on speed alone and the need to always be busy picking up the next item; focus on quality and throughput, keeping work in progress to a minimum (1). Deliver meaningful, reasoned changes as a team, together.

ChrisMarshallNY 10 hours ago

Communication overhead is the #1 schedule killer, in my experience.

Whenever we have to talk/write about our work, it slows things down. Code reviews, design reviews, status updates, etc. all impact progress.

In many cases, they are vital, and can’t be eliminated, but they can be streamlined. People get really hung up on tools and development dogma, but I've found that there’s no substitute for having experienced, trained, invested, technically-competent people involved. The more they already know, the less we have to communicate.

That’s a big reason I prefer small meetings. I think limiting participants to the directly-involved technical members is really important. I also don’t like regularly-scheduled meetings (like standups). Every meeting should be ad hoc, in my opinion.

Of course, I spent a majority of my career, at a Japanese company, where meetings are a currency, so fewer meetings is sort of my Shangri-La.

I’m currently working on a rewrite of an app that I originally worked on, for nearly four years. It’s been out for two years, and has been fairly successful. During that time, we have done a lot of incremental improvements. It’s time for a 2.0 rewrite.

I’ve been working on it for a couple of months, with LLM assistance, and the speed has been astounding. I’m probably halfway through it already. But I have also been working primarily alone, on the backend and model. The design and requirements are stable and well-established. I know pretty much exactly what needs to be done. Much of my time is spent testing LLM output and prompting rework. I’m the “review slowdown,” but the results would be disastrous if I didn’t do it.

It’s a very modular design, with loosely-coupled, well-tested and documented components, allowing me to concentrate on the “sharp end.” I’ve worked this way for decades, and it’s a proven technique.

Once I start working on the GUI, I guarantee that the brakes will start smoking. All because of the need for non-technical stakeholder team involvement. They have to be involved, and their involvement will make a huge difference (like a Graphic UX Designer), but it will still slow things down. I have developed ways to streamline the process, though, like using TestFlight, way earlier than most teams.

lukaslalinsky 12 hours ago

Reviewing things is fast and smooth if things are small. If you have all the involved parties stay in the loop, review happens in real time. Review is only problematic if you split the do and review steps. The same applies to AI coding: you can choose to pair program with it, and then it's actually helpful, or you can have it generate 10k lines of code you have no way of reviewing. You just need people to understand that switching context is what kills productivity. If more things are happening at the same time and your memory is limited, the time spent on load/save makes it slower than just doing one thing at a time and staying in the loop.

rafaelmn 12 hours ago

Honestly, if I'm just following what a single LLM is doing, I'm arguably slower than doing it myself, so I'd say that approach isn't very useful for me.

I prefer to review the plan (this is more to flush out my assumptions about where something fits in the codebase and to verify I communicated my intent correctly).

I'll loosely monitor the process if it's a longer one, then I review the artifacts. This way I can be doing 2-3 things in parallel: using other agents, doing meetings/prod investigation/making coffee/etc.

cm2012 6 hours ago

This is very true in marketing and advertising as well. A campaign where the channel manager can just test ads within a general framework will do ten times better than a campaign that has to go through review processes.

laughing_mann 7 hours ago

https://vekthos.com/papers/cognitive-sight-theory.pdf

Solution: Feed this paper to the llm and ask it to solve your problem. Then contact me with your experience. XD

wei03288 10 hours ago

The 10x estimate tracks — I've seen it too. The underlying mechanism is queuing theory: each approval step is a single-server queue with high variance inter-arrival times, so average wait explodes non-linearly. AI makes the coding step ~10x faster but doesn't touch the approval queue. The orgs winning right now are the ones treating async review latency as a first-class engineering metric, same way they treat p99 latency for services.
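
You can see the blowup in a ten-line simulation: an M/M/1 queue's wait follows W_q = rho/(mu - lambda), so it explodes as reviewer utilization rho approaches 1 (toy model, not calibrated to any real team):

  # mm1.py - average queueing wait as reviewer utilization rises.
  import random

  def avg_wait(arrival_rate: float, service_rate: float, n: int = 200_000) -> float:
      t = done = total_wait = 0.0
      for _ in range(n):
          t += random.expovariate(arrival_rate)   # next PR arrives
          start = max(t, done)                    # waits if the reviewer is busy
          done = start + random.expovariate(service_rate)
          total_wait += start - t
      return total_wait / n

  # Reviewer handles 1 PR/hour on average; utilization equals the arrival rate.
  for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
      print(f"utilization {rho:.2f}: avg wait ~{avg_wait(rho, 1.0):.1f}h")
  # Theory says rho/(1-rho): 1h, 4h, 9h, 19h, 99h - not linear in load.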

gebalamariusz 8 hours ago

Well, this all makes sense for application code, but not necessarily for infrastructure changes. Imagine a failed Terraform merge that deletes the production database or opens inbound traffic to 0.0.0.0/0; you can't undo that in 10 minutes. In my opinion, you need to pay attention to the narrow scope specific to a given project.

furryrain 8 hours ago

Try to imagine a deployment/CI system where that isn't possible. That's what the post is asking.

* Maybe you don't have privileges to delete the database

* Maybe your CI environments are actually high fidelity, and will fail when there is no DB

* Maybe destructive actions require further review

* Maybe your service isn't exposed to the public internet, and exposing to 0.0.0.0/0 isn't a problem.

* Maybe we engineer our systems to have trivial instant undo, and deleting a DB triggers an undo

Our tooling is kind of crappy. There's a lot we can do.
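
The "destructive actions require further review" gate, for instance, is a small script once the plan is machine-readable. Terraform can emit a plan as JSON (terraform show -json plan.out), and CI can refuse to auto-apply deletions (a sketch; the approval policy around it is up to you):

  # plan_gate.py - block auto-apply when a Terraform plan deletes anything.
  # Usage: python plan_gate.py plan.json  (output of `terraform show -json`)
  import json
  import sys

  plan = json.load(open(sys.argv[1]))
  deletions = [
      change["address"]
      for change in plan.get("resource_changes", [])
      if "delete" in change["change"]["actions"]
  ]

  if deletions:
      print("plan contains deletions, human sign-off required:")
      for address in deletions:
          print(f"  - {address}")
      sys.exit(1)
  print("no destructive changes, ok to auto-apply")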

presentation 11 hours ago

I broadly agree with this; it really is all about trust. It’s just that, as a company scales, it’s hard to make sure everybody on the team remains trustworthy. It isn’t just about personality and culture, it’s also about people actually having the skill, motivation, and track record of doing good work efficiently. Maybe AI‘s greatest value will be to allow teams to stay small, which reduces the difficulty of maintaining trust.

kkl 3 hours ago

It’s also the case that someone you trust can make an honest mistake or, for example, get their laptop stolen and their credentials compromised. I do trust my team, and want that to be the foundation of our relationship, but I also recognize that humans are fallible and that guardrails (eg code review) are beneficial.

rainmaking 10 hours ago

That's exactly why I think vibecoding uniquely benefits solo and small team founders. For anything bigger, work is not the bottleneck, it's someone's lack of imagination.

https://capocasa.dev/the-golden-age-of-those-who-can-pull-it...

ap99 9 hours ago

Yes, there's more red tape the larger you get, but there are also working products that stop making you money when they're broken.

See the recent Amazon outages caused by vibe/slop/move-fast coding practices with little review.

superlopuh 12 hours ago

In my experience, a culture where teammates prioritise review times (both by checking for updates in GH a few times a day, and by splitting changes aggressively into smaller patches) is reflected in much faster overall progress. It's definitely a culture thing; there's nothing technically or organisationally difficult about implementing it, it just requires people to consider team velocity more important than personal velocity.

threatofrain 12 hours ago

Let's say a teammate is writing code to do geometric projection of streets and roads onto live video. Another teammate is writing code to do automated drone pursuit of cars. Let's say I'm over here writing auth code, making sure I'm modeling all the branches which might occur in some order.

To what degree do we expect intellectual peerage from someone just glancing at this problem because of a PR? I would expect that to be a proper intellectual peer of someone studying the problem, you'd basically have to double your efforts.

pm215 11 hours ago

If the team is that small and working on things that are that disparate, then it is also very vulnerable to one of those people leaving, at which point there's a whole part of the project that nobody on the team has a good understanding of.

Having somebody else devote enough time to being up to speed enough to do code review on an area is also an investment in resilience so the team isn't suddenly in huge difficulty if the lone expert in that area leaves. It's still a problem, but at least you have one other person who's been looking at the code and talking about it with the now-departed expert, instead of nobody.

servo_sausage 11 hours ago

This is an unusually low overlap per topic; it probably needs a different structure than traditional PRs to get the best chance of benefiting from more eyes: higher-scope planning, or something like longer but intermittent pair programming.

Generally, if the reviewer is not familiar with the content, asynchronous line-by-line reviews are of limited value.

riffraff 14 hours ago

> Code a simple bug fix 30 minutes

> Get it code reviewed by the peer next to you 300 minutes → 5 hours → half a day

If it takes 5 hours for a peer to review a simple bugfix, your operation is dysfunctional.

thi2 13 hours ago

It's rare that devs are on standby, waiting for a PR to review. Usually they are working on their own PR, are in meetings, or have focus time.

We've talked a lot about the costs of context switches, so it's reasonable to finish your work before switching to the review.

ge96 4 hours ago

Hehe, I'm waiting right now. It should have been reviewed yesterday, but I'm like, alright, I'll just chill then.

habinero 13 hours ago

People are busy, and small bugfixes are usually not that critical. If you make everyone drop everything to review everything, that is much more dysfunctional.

karel-3d 12 hours ago

nobody will immediately jump on your code review

riffraff 9 hours ago

Sure, but five hours is a lot of time, and a small fix takes little time to review.

So, 1 hour? Sure. Two hours? OK. But five hours means you only look at your teammates' code once a day.

That's fine for a process where you work on something for a week and then come back for reviews, but then it's silly to complain about the overhead.

Nijikokun an hour ago

> only >> recenlty << started happening

can't believe I was baited into reading this slop

/jk

good post actually, and a fair point

I do think many people will argue that you can just not review things though.

codemog 13 hours ago

This reads like a scattered mind with a few good gems, a few assumptions that are incorrect but baked into the author’s world view, and loose coherence tying it all together. I see a lot of myself in it.

I’ll cover one of them: layers of management or bureaucracy do not reduce risk. They create inaction, which gives the appearance of reducing risk, until some startup comes along and gobbles up your lunch. Upper management knows it’s all bullshit, and the game-theoretic play is to say no to things, because you’re not held accountable if you say no. So they say no and milk the money printer until the company stagnates and dies. Then they repeat at another company (usually with a new title and a promotion).

zingar 8 hours ago

This is a profound point, but is review really the problem, or is it the handoff that crosses boundaries (me to others, our team to another team, our org to outside our org)?

p0w3n3d 15 hours ago

Meanwhile there are people who, as we speak, say that AI will do review and all we need to do is to provide quality gates...

duskdozer 13 hours ago

AI reviews? Sounds like a waste of tokens!

afc 11 hours ago

Waiting a few days for design review is a pain that is easy to avoid: all we need is to be ready to spend a few months building a potentially useless system.

frandroid 4 hours ago

For all the people talking about 5-hour PR review delays... This reminds me of teams that rotate the "fire extinguisher/emergency bug fixer" duty every day/week/sprint to a different developer. One could rotate a dedicated "first review" duty the same way. That developer would be in charge of rapidly starting PR reviews as their priority, with the option to request other reviewers if necessary. Spreading the duty around would also make people respectful of the reviewer, because anyone who sends unreviewed slop to the reviewer can expect to be sent slop on their own rotation.

simianwords 12 hours ago

I don’t agree that AI can’t fix this. It is too easy to dismiss.

With AI, my task in review is to check the high-level design choices and forget about reviewing the low-level details. It’s much simpler.

jbrozena22 15 hours ago

I think the problem is the shape of review processes. People higher up in the corporate food chain are needed to give approval on things. These people also have to manage enormous teams with their own complexities. Getting on their schedule is difficult, and giving you a decision isn't their top priority, slowing down time to market for everything.

So we will need to extract the decision making responsibility from people management and let the Decision maker be exclusively focused on reviewing inputs, approving or rejecting. Under an SLA.

My hypothesis is that the future of work in tech will be a series of these input/output queue reviewers. It's going to be really boring I think. Probably like how it's boring being a factory robot monitor.

nottorp 7 hours ago

Are we starting to need a BuSab for programming?

janpmz 10 hours ago

A lot of this goes away when the person who builds also decides what to build.

nananana9 10 hours ago

That's great, but if I hire a random person from this thread and let them decide, chances are they would build an agent orchestrator.

orwin 10 hours ago

> Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself.

That's me. I'm the mad reviewer. Each time I ranted against AI on this site, it was after reviewing sloppy code.

Yes, Claude Opus is better on average than my juniors/new hires. But it will make the same mistakes twice. I _need_ you to fucking review your own generated code and catch the obvious issues before you submit it to me. Please.

halo 12 hours ago

In my experience, good mature organisations have clear review processes to ensure quality, improve collaboration and reduce errors and risk. This is regardless of field. It does slow you down - not 10x - but the benefits outweigh the downsides in the long run.

The worst places I’ve worked have a pattern where someone senior drives a major change without any oversight, review or understanding causing multiple ongoing issues. This problem then gets dumped onto more junior colleagues, at which point it becomes harder and more time consuming to fix (“technical debt”). The senior role then boasts about their successful agile delivery to their superiors who don’t have visibility of the issues, much to the eye-rolls of all the people dealing with the constant problems.

DeathArrow 11 hours ago

I totally agree with his ideas, but somehow he seems to just be stating the obvious: startups move faster than big orgs, you can solve a problem by dividing it into smaller problems (when possible), and AI experimentation is cheap.

usr1106 13 hours ago

What makes me slower at the moment is the AI slop my team lead posts into reviews. I have to spend time arguing why it's not a valid comment.

sublinear 15 hours ago

As they say: an hour of planning saves ten hours of doing.

You don't need so much code or maintenance work if you get better requirements upfront. I'd much rather implement things at the last minute knowing what I'm doing than cave in to the usual incompetent middle manager demands of "starting now to show progress". There's your actual problem.

hrmtst93837 10 hours ago

If an hour of planning always saved ten hours of work, software schedules would be a whiteboard exercise.

Instead everyone wants perfect foresight, but systems are full of surprises you only find by building and the cost of pushing uncertainty into docs is that the docs rot because nobody updates them. Most "progress theater" starts as CYA for management but hardens into process once the org is too scared to change anything after the owners move on.

lmm 15 hours ago

> As they say: an hour of planning saves ten hours of doing.

In software it's the opposite, in my experience.

> You don't need so much code or maintenance work if you get better requirements upfront.

Sure, and if you could wave a magic wand and get rid of all your bugs that would cut down on maintenance work too. But in the real world, with the requirements we get, what do we do?

JoshTriplett 14 hours ago

> In software it's the opposite, in my experience.

That's been my experience as well: ten hours of doing will definitely save you an hour of planning.

If you aren't getting requirements from elsewhere, at least document the set of requirements you think you're working towards, and post them for review. You sometimes get new useful requirements very fast if you post "wrong" ones.

seer 13 hours ago

camillomiller 13 hours ago

>> Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.

This seems to check out, and it's the reason why I can't reconcile the industry's claims about worker replacement with reality. I still wonder when a reckoning will come, though. Seems long overdue in the current environment.

wiseowise 9 hours ago

> I still wonder when a reckoning will come, though. Seems long overdue in the current environment.

Never. Not until 1-10 person teams start disrupting enterprises (legacy banks, payment systems, consultancies).

“Why?”, you ask. Because it’s a house of cards. If engineers become redundant, then we don’t need teams. If we don’t need teams, then we don’t need team leads/PMs/POs and others, and if we don’t need middle management, then we don’t need VPs and others. All of those layers will eventually catch up to what’s going on and kill any productivity gains via bureaucracy.

steve_taylor 11 hours ago

I don't agree with this take in the article. One person with Claude Code can replace a team of devs. It resolves many issues, such as the tension between devs wanting to focus and devs wanting their peers to put aside their own tasks to review pull requests. Claude generates the code and the human reviews it. There's no delay in the back-and-forth, unlike in a team of humans. There's no ego, and there's no context-switching fatigue. Given that code review is a bottleneck, it's feasible for one person to do it all themselves. And Claude can certainly generate working code at least 10x faster than any dev.

wiseowise 9 hours ago

You’re talking from an idealistic requirements -> input -> programming -> output point of view. That’s not how the world operates. Egos are “important”; politics and bureaucracy are essential parts of organizations. LLMs don’t change that, and without changing that there’s no chance at all. Previously coding was maybe 0.1 of the bottleneck; now it’s 0.07.

ferguess_k 6 hours ago

These are just made-up numbers. In our team, PR review always takes 1 minute -- we never review, just approve, and let production do the reviewing. /s

PunchyHamster 11 hours ago

> I know what you're thinking. Come on, 10x? That’s a lot. It’s unfathomable. Surely we’re exaggerating.

See this little-known trick! You can be up to 9x more efficient if you code something else while you wait for review.

> AI

projectile vomits

Fuck engineering, let's work on methods to make the artificial idiot more efficient!

wiseowise 8 hours ago

> See this rarely known trick! You can be up to 9x more efficient if you code something else when you wait for review

Context switching alone would kill any productivity gains from this. And I’m not even touching on conflicting MRs and interdependencies yet.

nfw2 11 hours ago

from article:

1. Whoa, I produced this prototype so fast! I have super powers!

2. This prototype is getting buggy. I’ll tell the AI to fix the bugs.

3. Hmm, every change now causes as many new bugs as it fixes.

4. Aha! But if I have an AI agent also review the code, it can find its own bugs!

5. Wait, why am I personally passing data back and forth between agents

6. I need an agent framework

7. I can have my agent write an agent framework!

8. Return to step 1

The author seems to imply this is recursive when it isn't. When you have an effective agent framework, you can ship more high-quality code quickly.

nananana9 10 hours ago

I've been begging left and right, and I've yet to see a single example of this agent-written high-quality quickly-shipped code.

nfw2 10 minutes ago

What do you mean exactly? Are you asking random people to share their company's code with you?

munksbeer 6 hours ago

There are examples littered around threads on HN. What happens is when people provide the examples, the goalposts get moved. So people have stopped bothering to reply to these demands.

wiseowise 8 hours ago

OpenClaw! You just need to slightly change the definition of “good code”. The point of code is to ultimately bring money. The guy got hired by OpenAI and who gives a shit what happens to the “project” next. Mission accomplished.

duskdozer 9 hours ago

I'm guessing a lot of the high-x productivity boost is from a cycle of generating lots of code, having bug reports detected or hallucinated from that code, and then generating even more code to close out those reports, and so on

simonw 15 hours ago

This is one of the reasons I'm so interested in sandboxing. A great way to reduce the need for review is to have ways of running code that limit the blast radius if the code is bad. Running code in a sandbox can mean that the worst that can happen is a bad output as opposed to a memory leak, security hole or worse.
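
As a rough illustration of the idea (a generic sketch, not simonw's actual setup): on Unix you can at least cap CPU time and memory for an untrusted script before running it, so a runaway or hostile change produces a failed run instead of taking the host with it. A minimal version using only the standard library:

  import resource
  import subprocess
  import sys

  def run_sandboxed(script: str, cpu_seconds: int = 5):
      """Run an untrusted Python script with CPU and memory caps.

      A blast-radius reducer, not a real security boundary; proper
      sandboxes add filesystem and network isolation on top of this.
      """
      def limits():
          # Runs in the child process just before exec.
          resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
          resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MiB

      return subprocess.run(
          [sys.executable, script],
          preexec_fn=limits,
          capture_output=True,
          text=True,
          timeout=cpu_seconds + 1,  # wall-clock backstop
      )

The worst case then degrades from "arbitrary damage" to "the run fails or the output is wrong", which is the property being argued for.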

MeetingsBrowser 14 hours ago

Isn’t “bad output” already the worst case? Pre-LLMs, correct output was table stakes.

You expect your calculator to always give correct answers, your bank to always transfer your money correctly, and so on.

swiftcoder 9 hours ago

> Isn’t “bad output” already worst case?

Worst case in a modern agentic scenario is more like "drained your bank account to buy bitcoin and then deleted your harddrive along with the private key"

> Pre-LLMs correct output was table stakes

We're only just getting to the point where we have languages and tooling that can reliably prevent segfaults. Correctness isn't even on the table, outside of a few (mostly academic) contexts

simonw 3 hours ago

MeetingsBrowser 6 hours ago

simonw 7 hours ago

I've seen plenty of decision makers act on bad output from human employees in the past. The company usually survives.

KnuthIsGod 15 hours ago

And what if the bad output leads to a decision maker making a bad decision that takes down your company or kills your relative?

riffraff 14 hours ago

The sandbox in question was to absorb shrapnel from explosions, clearly

markbao 15 hours ago

If you save 3 hours building something with agentic engineering and that PR sits in review for the same 30 hours or whatever it would have spent sitting in review if you handwrote it, you’re still saving 3 hours building that thing.

So in that extra time, you can now stack more PRs that still have a 30 hour review time and have more overall throughput (good lord, we better get used to doing more code review)

This doesn’t work if you spend 3 minutes prompting and 27 minutes cleaning up code that would have taken 30 minutes to write anyway, as the article details, but that’s a different failure case imo

lelanthran 14 hours ago

> So in that extra time, you can now stack more PRs that still have a 30 hour review time and have more overall throughput

Hang on, you think that a queue that drains at a rate of $X/hour can be filled at a rate of 10x$X/hour?

No, it cannot: it doesn't matter how fast you fill a queue with a constant drain rate; sooner or later you hit the bounds of the queue, or the items coming off it are too stale to matter.

In this case, filling a queue at a rate of 20 items per hour (one every 3 minutes) while it drains at a rate of 1 item every 5 hours means that after a single 8-hour day you have queued 8x20 = 160 PRs while fewer than 2 have been reviewed.

IOW, after a single day the time-to-review for the newest PR is roughly 158x5 ≈ 790 hours. After the second day your PRs are looking at over 1,500 hours.
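
The queueing arithmetic is easy to sanity-check with a toy model using the rates above (one PR authored every 3 minutes over an 8-hour day, one review finished every 5 hours); the numbers assume nothing drains outside working hours:

  def last_pr_wait_hours(days, fill_per_hour=20.0, hours_per_day=8.0,
                         hours_per_review=5.0):
      """Hours the newest PR waits after `days` working days of this regime."""
      produced = fill_per_hour * hours_per_day * days
      drained = (hours_per_day * days) / hours_per_review
      backlog = produced - drained
      return backlog * hours_per_review  # last PR waits behind the whole backlog

  print(last_pr_wait_hours(1))  # ~792 hours after day one
  print(last_pr_wait_hours(2))  # ~1584 hours after day two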

zmmmmm 14 hours ago

This is the fundamental issue currently in my situation with AI code generation.

There are some strategies that help: a lot of the AI directives need to go towards making the code actually easy to review. Much of it sits around clarity and granularity (code should be committed primarily in reviewable chunks - units of work that make sense for review) rather than whatever you would have done previously when code production was the bottleneck. Similarly, AI use needs to be weighted not just towards more tests, but towards tests that concretely and clearly answer the questions that come up in review (what happens on this boundary condition? what if that variable is null? etc.) - see the sketch below. Finally, changes need to be stratified along lines of risk rather than code modularity or other dimensions. That is, if a change is evidently risk-free (in the sense of "even if this IS broken, it doesn't matter"), it should be able to be rapidly approved/merged. Only things where it actually matters if they're wrong should be blocked.
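
For the point about tests answering review questions, a minimal illustration (the function and the cases are invented): name each test after the question a reviewer would otherwise have to ask, so review becomes skimming the test list rather than re-deriving behaviour.

  # Hypothetical function under review: clamp a discount into [0, 100].
  def clamp_discount(pct):
      if pct is None:
          return 0.0
      return min(max(pct, 0.0), 100.0)

  # Run with pytest; each test name answers a reviewer's question.
  def test_what_happens_at_the_upper_boundary():
      assert clamp_discount(100.0) == 100.0

  def test_what_happens_just_past_the_upper_boundary():
      assert clamp_discount(100.1) == 100.0

  def test_what_happens_if_the_value_is_null():
      assert clamp_discount(None) == 0.0

  def test_negative_input_clamps_to_zero():
      assert clamp_discount(-5.0) == 0.0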

I have a feeling there are whole areas of software engineering where best practices are just operating on inertia and need to be reformulated now that the underlying cost dynamics have fundamentally shifted.

balamatom 13 hours ago

balamatom 13 hours ago

You are considering a good-faith environment where GP cares about throughput of the queue.

I think GP is thinking in terms of being incentivized by their environment to demonstrate an image of high personal throughput.

In a dysfunctional organization one is forced to overpromise and underdeliver, which the AI facilitates.

josephg 15 hours ago

If your team's bottleneck is code review by senior engineers, adding more low quality PRs to the review backlog will not improve your productivity. It'll just overwhelm and annoy everyone who's gotta read that stuff.

Generally if your job is acting as an expensive frontend for senior engineers to interact with claude code, well, speaking as a senior engineer I'd rather just use claude code directly.

eru 14 hours ago

Linting, compiler warnings and automated tests have helped a lot with the grunt work of code review in the past.

We can use AI these days to add another layer.
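
A plain version of that layering, sketched as a pre-review gate (ruff and pytest here are stand-ins for whichever linter and test runner a team actually uses; an AI review pass would slot in as one more command in the list):

  import subprocess
  import sys

  # Mechanical checks a PR must pass before a human sees it.
  CHECKS = [
      ["ruff", "check", "."],  # lint: naming, unused imports, style
      ["pytest", "-q"],        # automated tests
  ]

  def gate():
      for cmd in CHECKS:
          if subprocess.run(cmd).returncode != 0:
              print("blocked before human review:", " ".join(cmd), "failed")
              return 1
      print("mechanical checks passed; ready for human review")
      return 0

  if __name__ == "__main__":
      sys.exit(gate())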

CuriouslyC 15 hours ago

Except that when you have 10 PRs out, it takes longer for people to get to them, so you end up backlogged.

zmmmmm 14 hours ago

And when the PR you never even read, because the AI wrote it, gets bounced back to you with an obscure question 13 days later... you're not going to be well positioned to respond to that.