At the end you use `git bisect` (kevin3010.github.io)
197 points by _spaceatom 19 hours ago
bikelang 18 hours ago
Git bisect was an extremely powerful tool when I worked in a big-ball-of-mud codebase that had no test coverage and terrible abstractions which made it impossible to write meaningful tests in the first place. In that codebase it was far easier to find a bug by finding the commit it was introduced in - simply because it was impossible to reason through the codebase otherwise.
In any high quality codebase I’ve worked in, git bisect has been totally unnecessary. It doesn’t matter which commit the bug was introduced in when it’s simple to test the components of your code in isolation and you have useful observability to instruct you on where to look and what inputs to test with.
This has been my experience working on backend web services - YMMV wildly in different domains.
kccqzy 17 hours ago
Git bisect is never unnecessary. Even when you can easily test the components and find the bug that way, a bisect allows you to understand why the bug was introduced. This is wonderful in all places where there is a culture of writing long and comprehensive commit messages. You get to understand why the bug occurred from a previous commit message and you will write about that context in your bug fix commit message. And this becomes positive reinforcement. The better the commit messages are, the more useful it is to use git bisect or git blame to find the relevant commit messages.
kemayo 17 hours ago
Yeah, bisect is really handy because often a bug will have been introduced as a side-effect of a change made to support something else, and if you don't know what new usage was introduced you're relatively likely to break that in the course of fixing the bug.
You can avoid it via the "just look at every usage of this function and hold the entire codebase in your head" method, of course, but looking at the commit seems a bit simpler.
diegocg 17 hours ago
There are certainly other use cases. git bisect was enormously useful when it was introduced in order to find Linux kernel regressions. In these cases you might not even be able to have tests (e.g. a driver needs to be tested against real hardware - hardware that the developer who introduced the bug could not have had), and as a user you don't have a clue about the code. Before git bisect, you had to report the bug and hope that some dev would help you via email, perhaps by providing some patch with print-debug statements to gather information. With git bisect, all of a sudden a normal user was able to bisect the kernel by himself and point to the concrete commit (and dev) that broke things. That, plus a fine-grained commit history, entirely changed how bugs were found and fixed.
bsder 13 hours ago
> With git bisect, all of a sudden a normal user was able to bisect the kernel by himself and point to the concrete commit (and dev) that broke things.
Huh. Thanks for pointing that out. I definitely would never have thought about the use case of "Only the end user has specific hardware which can pinpoint the bug."
Volundr 17 hours ago
If all you care about is fixing the bug, this is probably often true. Certainly bisect is not part of my daily workflow. Sometimes, though, you also need to know how long a bug has been in place, e.g. to track down which records may have been incorrectly processed.
Edit: tracking down where something was introduced can also be extremely helpful for "is this a bug or a feature" type investigations, of which I have done many. Blame is generally the first tool for this, but over the course of years the blame can get obscured.
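When the blame does get muddied by reformatting or refactoring commits, a few flags can cut through some of the noise (paths and revisions here are placeholders):
git blame -w path/to/file                            # ignore whitespace-only changes
git blame -C path/to/file                            # follow lines moved or copied from other files
git blame --ignore-rev REFORMAT_SHA path/to/file     # skip a known noisy commit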
jayknight 16 hours ago
Yes to both of these. In a healthcare setting, some bugs leave data that needs to be reviewed and/or corrected after it is identified and fixed.
And also a fair number of bugs filed can be traced back to someone asking for it to work that way.
hinkley 15 hours ago
See also Two Devs Breaking Each Other’s Features.
I got to hear about a particularly irate customer during a formative time of my life and decided that understanding why weird bugs got into the code was necessary to prevent regressions that harm customer trust in the company. We took too long to fix a bug and then reintroduced it within a month, because the fix broke another feature and someone tried to put it back.
aidenn0 5 hours ago
What about finding the commit a bug was fixed in?
Example use #1: Customer using a 6-year-old version of the software wants to know if upgrading to a 4-year-old version of the software will solve their problem.
Example use #2: The part of code that was likely previously causing the problem has been significantly reworked; was the bugfix intentional or accidental? If the latter, is the rework prone to similar bugs?
geon 5 hours ago
https://www.reddit.com/r/talesfromtechsupport/s/K2xme9A0MQ
I once bisected to find a bug in a 6-month-old commit. An off-by-one error in some array processing. I fixed the bug there to confirm. But on main, the relevant code didn’t even exist any more. It had been completely refactored away.
I ended up rebasing the entire 6 months’ worth of commits onto the bugfix, propagating the fix throughout the refactoring.
Then a diff against main showed 3 lines changed in seemingly unrelated parts of the code, together triggering the bug. I would never have found them without bisect and rebase.
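The mechanics were roughly like this (the refs and branch names here are made up):
git checkout -b bugfix FIRST_BAD_COMMIT        # the 6-month-old commit bisect found
# ...fix the off-by-one and commit it on the bugfix branch
git branch fixed-history main                  # a scratch copy of today's main
git rebase --onto bugfix FIRST_BAD_COMMIT fixed-history   # replay 6 months of commits onto the fix
git diff main fixed-history                    # only the fix, propagated through the refactors, remains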
sfvisser 18 hours ago
Even if you can reason through a code base a bisect can still be much quicker.
Instead of understanding the code you only need to understand the bug. Much easier!
foresto 17 hours ago
I find git bisect indispensable when tracking down weird kernel bugs.
Nursie 10 hours ago
That was my first thought when reading this.
It sounds like the author doesn't understand the codebase. If you're brute-forcing bug detection by bisecting commits to figure out where the issue is, something's already failed. In most cases you should have logs/traces/whatever that give you the info you need to figure out exactly where the problem is.
thfuran 8 hours ago
Every system is imperfectly understood.
funnymunny 13 hours ago
I used git bisect in anger for the first time recently and it felt like magic.
Background: We had two functions in the codebase with identical names and nearly identical implementations, the latter having a subtle bug. Somehow both were imported into a particular python script, but the correct one had always overshadowed the incorrect one - that is, until an unrelated effort to apply code formatting standards to the codebase “fixed” the shadowing problem by removing the import of the correct function. Not exactly mind bending - but, we had looked at the change a few times over in GitHub while debugging and couldn’t find a problem with it - not until we knew for sure that was the commit causing the problem did we find the bug.
f311a 18 hours ago
I've used bisect a few times in my life. Most of the time, I already know which files or functions might have introduced a bug.
Looking at the history of specific files or functions usually gives a quick idea. In modern Git, you can search the history of a specific function.
git log -L :func_name:path/to/a/file.c
You need to have a proper .gitattributes file, though.
nielsole 16 hours ago
Alternatively, if you do not have that set up, `git log -S` helps you find commits whose diffs contain a specific string.
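For example (the function name is hypothetical):
git log -S do_the_thing --oneline    # commits that add or remove that string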
adastra22 17 hours ago
I use git bisect literally every day. We are clearly different people :)
hinkley 15 hours ago
I don’t use it for myself often, but I use it fairly often when someone has to escalate a problem to me. And how you work when the shit hits the fan says a lot about you overall, IMO.
_1tan 18 hours ago
Can you elaborate on the dependent .gitattributes file? Where can I find more information on the necessary content? Sounds super useful!
f311a 18 hours ago
You need to specify the diff format, so that Git can correctly identify and parse function bodies.
*.py diff=python
PaulDavisThe1st 15 hours ago
I use this often, but it is sadly weak when used on C++ code that includes polymorphic methods/functions:
/* snip */
void
Object::do_the_thing (int)
{
}
void
Object::do_the_thing (float)
{
}
/* snip*/
AFAICT, git log will never be able to be told to review the history of the second version.
thombles 11 hours ago
One place bisect shines is when a flaky test snuck in due to some race condition but you can’t figure out what. If you have to run a test 100000 times to be convinced the bug isn’t present, this can be pretty slow. Bisecting makes it practical to narrow in on the faulty commit, and with the right script you can just leave it running in the background for an hour.
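A sketch of such a script, with a made-up test command and iteration count:
#!/bin/sh
# rerun the flaky test many times; any failure marks this commit bad
for i in $(seq 1 500); do
    ./run_test || exit 1    # any non-zero exit (other than 125) = bad commit
done
exit 0                      # survived every run = good commit
Then: git bisect run ./check_flaky.sh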
kragen 8 hours ago
We really would benefit from a Bayesian binary search for this purpose, so you can get by with only running the test 1000 times in most cases.
paulbjensen 15 hours ago
I recently used git bisect to help find the root cause of a bug in a fun little jam of mine (a music player/recorder written in Svelte - https://lets-make-sweet-music.com).
My scenario with the project was:
- no unit/E2E tests
- no error occurring, either from Sentry tracking or in the developer tools console
- many git commits to check through, as GitHub's dependabot alerts had been busy in the meantime
I would say git bisect was a lifesaver - I managed to trace the error to my attempt to replace a file in the app with the library I had extracted from it (http://github.com/anephenix/event-emitter).
It turns out that the file had implemented a feature that I hadn't ported to the library (to be able to attach multiple event names to call the same function).
I think the other thing that helps is to keep git commits small, so that when you do discover the commit that breaks the app, you can easily find the root cause among the small number of files/code that changed.
Where it becomes more complex is when the root cause of the error requires evaluating not just one component that can change (in my case a frontend SPA), but also other components like the backend API, as well as the data in the database.
eru 10 hours ago
> People rant about having to learn algorithmic questions for interviews. I get it — interview system is broken, but you ought to learn binary search at least.
Well, the example of git bisect tells you that you should know of the concept of binary search, but it's not a good argument for having to learn how to implement binary search.
Also just about any language worth using has binary search in the standard library (or as a third party library) these days. That's saner than writing your own, because getting all the corner cases right is tricky (and writing tests so they stay right, even when people make small changes to the code over time).
Arcuru 10 hours ago
Unfortunately I can't find the reference now, but I remember reading that even though binary search was first described in the 1940s, the first bug-free implementation wasn't published until the 1960s.
The most problematic line that most people seem to miss is in the calculation of the midpoint index. Using `mid = (low + high) / 2` has an overflow bug if you're not using infinite precision, but there are several other potential problems even in the simplest algorithm.
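The usual fix computes the midpoint without the sum. In Python the integers can't overflow, so this is only illustrative, but the same expression is the standard repair in fixed-width languages:
mid = (low + high) // 2        # can overflow where low + high exceeds the integer max
mid = low + (high - low) // 2  # the intermediate (high - low) always fits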
kragen 8 hours ago
The overflow bug wasn't fixed until the 21st century; the comment you remember reading dates from before it was discovered.
To be fair, in most computing environments, either indices don't overflow (Smalltalk, most Lisps) or arrays can never be big enough for the addition of two valid array indices to overflow, unless they are arrays of characters, which it would be sort of stupid to binary search. It only became a significant problem with LP64 and 64-bit Java.
eru 8 hours ago
Agreed.
> Using `mid = (low + high) / 2` has an overflow bug if you're not using infinite precision, but there are several other potential problems even in the simplest algorithm.
Well, if you are doing binary search on eg items you actually hold in memory (or even disk) storage somewhere, like items in a sorted array (or git commits), then these days with 64 bit integers the overflow isn't a problem: there's just not enough storage to get anywhere close to overflow territory.
A back of the envelope calculation estimates that we as humanity have produced enough memory and disk storage in total that we'd need around 75 bits to address each byte independently. But for a single calculation on a single computer 63 bits are probably enough for the foreseeable future. (I didn't go all the way to 64 bits, because you need a bit of headroom, so you don't run into the overflow issues.)
runeblaze 4 hours ago
My personal mantra (that I myself cannot uphold 100%) is that every dev should at least do the exercise of implementing binary search from scratch in a language with arbitrary-precision integers (e.g., Python) once in a while. It is the best exercise in invariant-based thinking, useful for software correctness at large
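A minimal version of the exercise, in plain Python (the names are arbitrary):
def bsearch(xs, target):
    # invariant: if target is in xs, its index lies in [lo, hi)
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2   # safe here: Python ints are arbitrary precision
        if xs[mid] < target:
            lo = mid + 1       # target, if present, is right of mid
        else:
            hi = mid           # target, if present, is at mid or left of it
    return lo if lo < len(xs) and xs[lo] == target else -1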
eru an hour ago
Yes, it's a simple enough algorithm to be a good basic exercise - most people come up with binary search on their own spontaneously when looking a word up in a dictionary.
Property-based testing is really useful for finding corner cases in your binary search. See eg https://fsharpforfunandprofit.com/series/property-based-test... for one introduction.
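For instance, with the hypothesis library and a bsearch like the one sketched upthread (the names are mine, not from any article):
from hypothesis import given, strategies as st

@given(st.lists(st.integers()), st.integers())
def test_bsearch_matches_membership(xs, target):
    xs.sort()                    # binary search requires sorted input
    i = bsearch(xs, target)
    if target in xs:
        assert xs[i] == target   # must land on an occurrence
    else:
        assert i == -1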
aag 8 hours ago
Make sure you know about exit code 125 for your test script. You can use it in those terrible cases where the test can't tell, one way or another, whether the failure you seek happened - for example, when there is an unrelated build problem.
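A minimal shape for such a script (the build and test commands are placeholders):
#!/bin/sh
make || exit 125    # 125 tells bisect to skip a commit it can't judge
./test_for_bug      # exit 0 = good; any other code from 1-127 (except 125) = bad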
I wrote a short post on this:
lucasoshiro 17 hours ago
Git has some really good tools for searching code and debugging. A few years ago I wrote a blog post about them, including bisect, log -L, log -S and blame. You can see it and the discussion here: https://news.ycombinator.com/item?id=39877637
tarwich 11 hours ago
When I learned about git bisect I thought it was a little uppity - something I would never use in a practical scenario, working on large codebases. However, sometimes a bug pops up and we don't know when it started. We use git bisect not to place blame on a person, but to figure out the last point where the bug wasn't there, so we know what code introduced it. Yes, clean code helps. Sometimes git bisect is really nice to have.
dpflan 19 hours ago
`git-bisect` is legit if you have to do archaeological digging through the history. Though there is the open question of how git commit history is maintained: squash-and-merge vs. retaining all history. With squash-and-merge you're looking at the merged pull-request, versus with full history you can find the true code-level inflection point.
echelon 18 hours ago
Can someone explain why anyone would want non-squashed PRs?
For the 5% of engineers that diligently split each PR into nice semantic changes, I suppose that's nice. But the vast majority of engineers don't do this. Individual commits in a PR are testing and iteration. You don't want to read through that.
Unless, of course, you're asking the engineer to squash on their end before making the PR. But what's the value in that ceremony?
Each PR being squashed to 1 commit is nice and easy to reason about. If you truly care about making more semantic history, split the work into multiple PRs.
For that matter, why merge? Rebase it on top. It's so much cleaner. It's atomic and hermetic.
rectang 18 hours ago
Crafting a PR as an easily-consumed, logical sequence of commits is particularly useful in open source.
1. It makes review much easier, which is both important because core maintainer effort is the most precious resource in open source, and because it increases the likelihood that your PR will be accepted.
2. It makes it easier for people to use the history for analysis, which is especially important when you may not be able to speak directly to the original author.
These reasons also apply in commercial environments of course, but to a lesser extent.
For me, organizing my PRs this way is second nature and only nominal effort, because I'm extremely comfortable with Git, including the following idiom which serves as a more powerful form of `git commit --amend`:
git add -p                             # stage only the hunks that belong in the earlier commit
git commit --fixup COMMIT_ID           # record a fixup! commit targeting COMMIT_ID
git stash                              # set aside whatever is left unstaged
git rebase -i --autosquash COMMIT_ID~  # fold the fixup into COMMIT_ID
An additional benefit is that this methodology doesn't work well for huge changesets, so it discourages the anti-pattern of long-lived topic branches. :)
> For that matter, why merge? Rebase it on top.
Yes, that works for me although it might not work for people who aren't going to the same lengths to craft a logical history. I have no interest in preserving my original WIP commits — my goal is to create something that is easy to review.
BUT... the PR should ultimately be merged with a merge commit. Then when you have a bug you can run `git bisect` on merges only, which is good enough.
borntyping 18 hours ago
> Can someone explain why anyone would want non-squashed PRs?
>
> For the 5% of engineers that diligently split each PR into nice semantic changes, I suppose that's nice. But the vast majority of engineers don't do this.
I think cause and effect are the other way around here. You write and keep work-in-progress commits without caring about their contents when the history will be discarded and the team will only look at the pull request as a single unit, and you write tidy, distinct commits when the history will be kept and individual commits will be reviewed.
I've done both, and getting everyone to do commits properly is much nicer, though GitHub and similar tools don't really support or encourage it. If you work with repository history a lot (for example, you have important repositories that aren't frequently committed to, or maintain many different versions of the project) it's invaluable. Most projects don't really care about the history—only the latest changes—and work with pull-requests, which is why they tend to use the squashed pull request approach.
adastra22 17 hours ago
> For the 5% of engineers that diligently split each PR into nice semantic changes, I suppose that's nice. But the vast majority of engineers don't do this.
Here's a simple reason: at my company, if you don't do this, you get fired.
This is basic engineering hygiene.
bentcorner 18 hours ago
> If you truly care about making more semantic history, split the work into multiple PRs.
This exactly - if your commit history for a PR is interesting enough to split apart, then the original PR was too large and should have been split up to begin with.
This is also a team culture thing - people won't make "clean" commits into a PR if they know people aren't going to be bisecting into them and trying to build. OTOH, having people spend time prepping good commits is potentially time wasted if nobody ever looks at the PR commit history aside from the PR reviewers.
embedding-shape 18 hours ago
> For that matter, why merge? Rebase it on top. It's so much cleaner. It's atomic and hermetic.
With an explicit merge, you keep two histories, yet mostly care about the "main" one. With rebase, you're effectively forgetting there ever was a separate history, and chose to rewrite the history when "effectively merging" (rebasing).
There's value in both; it mostly seems to come down to human preference. As long as the people that will be working with it agree, I personally don't care either way which one, provided it's consistently applied.
pizza234 18 hours ago
> Each PR being squashed to 1 commit is nice and easy to reason about. If you truly care about making more semantic history, split the work into multiple PRs.
I don't argue with your point (even if I am obsessive about commit separation), but one needs to keep in mind that the reverse also applies: on the other end of the spectrum, there are devs who create kitchen-sink PRs which include, for example, refactorings, which make squashed PRs harder to reason about.
koolba 18 hours ago
> Can someone explain why anyone would want non-squashed PRs?
So you can differentiate the plumbing from the porcelain.
If all the helpers, function expansions, typo corrections, and general renamings are isolated, what remains is the pure additional functional changes on its own. It makes reviewing changes much easier.
SatvikBeri 18 hours ago
Making human-readable commit history is not that hard with a little practice. It's one of the big benefits of tools like magit or jj. My team started doing it a few weeks ago, and it's made reviewing PRs substantially easier.
mikeocool 18 hours ago
If you've ever worked with stacked PRs and the top one gets squashed and merged, it often becomes a nightmare to rebase the rest of the PRs to bring them up to date.
mkleczek 17 hours ago
git merge --no-ff                # always record a merge commit for the PR
git log --first-parent           # read main as a sequence of PR merges
git bisect start --first-parent  # bisect over those merges only
The above gives you clean PR history in the main branch while retaining detailed work history in (merged) feature branches.
I really don't understand why I would squash when I have git merge --no-ff at my disposal...
kragen 8 hours ago
I would definitely neither accept a pull request where the individual commits were testing and iteration, nor a pull request where hundreds of lines of changes are in a single commit. (New code, maybe.) It's not about ceremony; it's about knowing what changed and why.
anonymars 17 hours ago
Hoo boy is it fun to figure out where things went wrong when the real commit history was thrown away to make it look prettier. Especially a mistake from a merge conflict.
nixpulvis 18 hours ago
I'd take fully squashed PRs over endless "fix thing" and "updated wip"... but if you work in a way that leaves a couple meaningful commits, that's even better. Sometimes I end up in this state naturally by having a feature branch, which I work on in sub branches, each being squashed into a single final commit. Or when the bulk of the logic is on one commit, but then a test case or two are added later, or a configuration needs changing.
I like merge commits because they preserve the process of the review.
lucasoshiro 17 hours ago
I wrote a text about that:
https://lucasoshiro.github.io/posts-en/2024-04-08-please_don...
CogitoCogito 18 hours ago
I have always been very careful with git histories and often rewrite/squash them before final review/merge. Often my rewritten histories have nothing to do with the original history and commits are logically/intuitively separated and individually testable.
That said, very few people seem to be like me. Most people have no concept of what a clear commit history is. I think it's kind of similar to how most people are terrible written communicators. Few people have any clue how to express themselves clearly. The easiest way to deal with people like this is to just have them squash their PRs. This way you can at least enforce some sanity at review and then the final commit should enforce some standards.
I agree on rebasing instead of straight merging, but even that's too complicated for most people.
vjerancrnjak 18 hours ago
You can just inspect merge commits, you can also just bisect over merge commits.
Splitting work into multiple PRs is unnecessary ritual.
I have never reasoned about git history or paid attention to most commit messages or found any of it useful compared to the contents of the change.
When I used git bisect with success it was on unknown projects. Known projects are easy to debug.
nothrabannosir 18 hours ago
Because github doesn't support stacked diffs, basically.
T_T
leptons 18 hours ago
I manage a team of developers, and I don't think any of us squash commits, and I really don't care. It's been working fine for 8 years at this job.
We keep our git use extremely simple, we don't spend much time even thinking about git. The most we do with git is commit, push, and merge (and stash is useful too). Never need to rebase or get any deeper into git. Doing anything complicated with git is wasting development time. Squashing commits isn't useful to us at all. We have too much forward velocity to worry that some commit isn't squashed. If a bug does come up, we move forward and fix it, the git history doesn't really figure into our process much, if at all.
stefan_ 18 hours ago
There is no free lunch: the people that can't be bothered to make atomic semantic commits are the same people that will ruin your bisect with a commit that doesn't build or has some other unrelated run failure. People that don't care can't be fixed by tools.
The advice around PRs rings hollow; after all, they were invented by the very people that don't care - which is why they show all changes by default and hide the commits away, with commit messages buried five clicks deep. And because this profession is now filled with people that don't care, add the whole JIRA ticket and fix-version rigmarole on top - all kinds of things that show up in some PM's report but not in my console when I'm fixing an issue that requires history.
formerly_proven 19 hours ago
> with full history you can find the true code-level inflection point.
"typo fix"
inopinatus 18 hours ago
“tues lunch wip”
rf15 18 hours ago
Honestly, after 20 years in the field: optimising the workflow for when you can already reliably reproduce the bug seems misapplied because that's the part that already takes the least amount of time and effort for most projects.
nixpulvis 18 hours ago
Just because you can reproduce it doesn't mean you know what is causing it. Running a bisect to find which commit introduced it will reduce the area you need to search for the cause.
SoftTalker 18 hours ago
I can think of only a couple of cases over 20+ years where I had to bisect the commit history to find a bug. By far the normal case is that I can isolate it to a function or a query or a class pretty quickly. But most of my experience is with projects where I know the code quite well.
hinkley 15 hours ago
I would add to nixpulvis’s comments that git history may also help you find a repro case, especially if you’ve only found a half-assed repro case that is overly broad.
Before you find even that, your fire-drill strategy is very, very important. Is there enough detail in the incident channel and our CD system for coworkers to put their dev sandbox in the same state as production? Is there enough of a clue about what is happening for them to run speculative tests in parallel? Is the data architecture clean enough that your experiments don’t change the outcome of mine? Onboarding docs and deployment process docs, if they are tight, reduce the Amdahl’s Law effect as it applies to figuring out what the bug is and where it is - which in this context is also Brooks’s Law.
zeroonetwothree 10 hours ago
Eh, not always. If you work in a big codebase with 1000s of devs then it can be quite tricky to find the cause of some bug when it’s in some random library someone changed for a different reason.
utopiah 17 hours ago
I agree with the post.
I also think that typically, if you have to resort to bisect, you are probably in a bad place. You should have found the bug earlier, so if you do not even know where the bug came from:
- your test coverage isn't sufficient
- your tests are probably not actually testing what you believe they do
- your architecture is complex, too complex for you
To be clear though I do include myself in this abstract "you".
imiric 15 hours ago
I mean, sure—in a perfect world bugs would be caught by tests before they're even deployed to production.
But few of us have the privilege of working on such codebases, and with people who have that kind of discipline and quality standards.
In reality, most codebases have statement coverage that rarely exceeds 50%, if coverage is tracked at all; tests are brittle, flaky, difficult to maintain, and likely have bugs themselves; and architecture is an afterthought for a system that grew organically under deadline pressure, where refactors are seen as a waste of time.
So given that, bisect can be very useful. Yet in practice it likely won't, since usually the same teams that would benefit from it, don't have the discipline to maintain a clean history with atomic commits, which is crucial for bisect to work. If the result is a 2000-line commit, you still have to dig through the code to find the root cause.
gegtik 10 hours ago
git bisect gets interesting when API signatures change over a history - when this does happen, I find myself writing version-checking facades to invoke the "same" code in whatever way is legal
kfarr 19 hours ago
Wow and here I was doing this manually all these years.
inamberclad 17 hours ago
'git bisect run' is probably one of the most important software tools ever.
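For anyone who hasn't tried it, a typical session looks something like this (the tag and script names are placeholders):
git bisect start
git bisect bad HEAD        # current commit exhibits the bug
git bisect good v2.1.0     # last version known to work
git bisect run ./test.sh   # git checks out midpoints and runs the script for you
git bisect reset           # return to where you started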
anthomtb 18 hours ago
Binary searching your commit history and using version control software to automate the process just seems so...obvious?
I get that author learned a new-to-him technique and is excited to share with the world. But to this dev, with a rapidly greying beard, the article has the vibe of "Hey bro! You're not gonna believe this. But I just learned the Pope is catholic."
Espressosaurus 17 hours ago
Seriously.
Binary search is one of the first things you learn in algorithms, and in a well-managed branch the commit tree is already a sorted straight line, so it's just obvious as hell, whether or not you use your VCS to run the bisect or you do it by hand yourself.
"Hey guys, check it out! Water is wet!"
PaulDavisThe1st 15 hours ago
ObXKCD: https://xkcd.com/1053/
I mean, do you really not know this XKCD?
lloydatkinson 19 hours ago
I’ve used bisect a couple of times but really it’s a workaround for having a poor process. Automatic unit tests, CI/CD, should have caught it first.
It’s still very satisfying to watch run though, especially if you write a script that it can run automatically (based on the existing code) to determine if it’s a good or bad commit.
nixpulvis 19 hours ago
It's not a workaround. In this case it seems like it, but in general you cannot always rely on your existing tests covering everything. The test you run in the bisect is often updated to catch something new which is reported. The process is often:
1. Start with working code
2. Introduce bug
3. Identify bug
4. Write a regression test
5. Bisect with new test
In many cases you can skip the bisect because the description of the bug makes it clear where the issue is, but not always.
Izkata 19 hours ago
Important addendum to 4 that can throw someone their first time - Put the new test in a new file and don't commit it to the repo yet. You don't want it to disappear or conflict with old versions of the test file when bisect checks old commits.
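Concretely, something like this (the paths and refs are made up):
cp test_regression.py /tmp/            # keep the new test outside the work tree
git bisect start HEAD KNOWN_GOOD_REF
git bisect run python /tmp/test_regression.py   # non-zero exit (e.g. a failed assert) = bad
git bisect reset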
masklinn 18 hours ago
> Automatic unit tests, CI/CD, should have caught it first.
Tests can't prove the absence of bugs, only their presence. Bugs or regressions will slip through.
Bisect is for when that happens and the cause is not obvious.
slang800 19 hours ago
Sometimes you notice a problem that your unit tests didn't cover and want to figure out where it was introduced. That's where git bisect shines.
You can go back and efficiently run a new test across old commits.
tmoertel 18 hours ago
I don't think it's that simple. For example: Show me the unit tests and CI/CD scripts you would write to prove your code is free from security holes.
Yet, once you've identified a hole, you can write a script to test for it, run `git bisect` to identify what commit introduced the hole, and then triage the possible fallout.
lucasoshiro 17 hours ago
Ideally, we should write bug-free code, but we can't. There are some tools to help avoid bugs, and tests are one of them. Those tools reduce bugs, but they don't eliminate them. Bisect doesn't replace tests; it only helps find where the bugs are happening. After finding and fixing a bug, it's a good idea to write a test covering it.
To sum up: bisect and tests are not in opposite sides, they complement each other
trenchpilgrim 19 hours ago
"We write unit tests so our code doesn't have bugs."
"What if the unit tests have bugs?"
monitron 18 hours ago
> the OG tool `git`
This phrase immediately turned the rest of my hair gray. I'm old enough to still think of Git as the "new" version control system, having survived CVS and Subversion before it.
c0brac0bra 18 hours ago
But did you survive rcs?
kragen 8 hours ago
Worse: PVCS!
0x20cowboy 18 hours ago
Or visual source safe
smcameron 18 hours ago
and sccs
rco8786 17 hours ago
I still remember dragging my team kicking and screaming away from Subversion. Which, to be fair, was fine. I think GitHub’s rise was really what won it for git vs subversion. The others though, good riddance.
huflungdung 19 hours ago
I hardly think binary search is an unknown algorithm even by beginner standards for someone from a completely different field
trenchpilgrim 18 hours ago
I know a lot of professional, highly paid SWEs and DevOps people who never went to college or had any formal CS or math education beyond high school math. I have a friend who figured out complexity analysis by himself on the job trying to fix up some shitty legacy code. Guy never got past Algebra in school.
(If you're about to ask how they can get jobs while new grads can't - by being able to work on really fucking terrible legacy code and live in flyover states away from big cities.)
rr808 18 hours ago
Surely everyone has a CI pipeline that wont allow merges with failing tests?
jmount 18 hours ago
This is the case where you introduce the test after the failure.
ervine 18 hours ago
More than one assumption in that sentence, ha!
trenchpilgrim 18 hours ago
Including "code is delivered in a way that involves merges"
thealistra 17 hours ago
But most CIs allow flaky tests :)