Hypothesis, Antithesis, Synthesis (antithesis.com)
131 points by alpaylan 4 hours ago
sunshowers an hour ago
Hi David, congratulations on the release! I'm excited to play around with Hypothesis's bitstream-based shrinking. As you're aware, prop_flat_map is a pain to deal with, and I'd love to replace some of my proptest-based tests with Hegel.
I spent a little time looking at Hegel last week and it wasn't quite clear to me how I'd go about having something like a canonical generator for a type (similar to proptest's Arbitrary). I've found that to be very helpful while generating large structures to test something like serialization roundtripping against — in particular, the test-strategy library has derive macros that work very well for business logic types with, say, 10-15 enum variants each of which may have 0-10 subfields. I'm curious if that is supported today, or if you have plans to support this kind of composition in the future.
edit: oh I completely missed the macro to derive DefaultGenerator! Whoops
tybug an hour ago
Yep, `#[derive(DefaultGenerator)]` and `generators::default<T>()` are the right tools here.
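Roughly the shape I'd expect for the serialization case - the derive and `generators::default` are the real pieces; everything else below (the module paths, the `hegel::check` entry point) is a made-up placeholder for illustration, so check the docs/examples for the actual wiring:

    use hegel::{generators, DefaultGenerator}; // module paths approximate

    // A "business logic" enum with a derived canonical generator, as in the
    // test-strategy workflow described above.
    #[derive(Debug, Clone, PartialEq, serde::Serialize, serde::Deserialize, DefaultGenerator)]
    enum Event {
        Ping,
        Login { user_id: u64, name: String },
        Batch(Vec<u32>),
    }

    // The property: encoding then decoding must give back the same value.
    fn roundtrips(event: &Event) {
        let encoded = serde_json::to_vec(event).unwrap();
        let decoded: Event = serde_json::from_slice(&encoded).unwrap();
        assert_eq!(*event, decoded);
    }

    #[test]
    fn event_roundtrip() {
        // `check` here is a stand-in for the real hegel-rust test entry point.
        hegel::check(generators::default::<Event>(), |event| roundtrips(&event));
    }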
This is one of the areas we've dogfooded the least, so we'd definitely be happy to get feedback on any sharp corners here!
I think `from_type` is one of Hypothesis's most powerful and ergonomic strategies, and that while we probably can't get quite to that level in rust, we can still get something that's pretty great.
sunshowers an hour ago
Thank you! I have some particularly annoying proptest-based tests that I'll try porting over to Hegel soon. (Thanks for writing the Claude skill to do this.)
pron 4 hours ago
> property-based testing is going to be a huge part of how we make AI-agent-based software development not go terribly.
There's no doubt, I think, testing will remain important and possibly become more important with more AI use, and so better testing is helpful, PBT included. But the problem remains verifying that the tests actually test what they're supposed to. Mutation tests can allow agents to get good coverage with little human intervention, and PBT can make tests better and more readable. But still, people have to read them and understand them, and I suspect that many people who claim to generate thousands of LOC per day don't.
And even if the tests were great and people carefully reviewed them, that's not enough to make sure things don't go terribly wrong. Anthropic's C compiler experiment didn't fail because of bad testing. Not only were the tests good, it took humans years to write the tests by hand, and the agents still failed to converge.
I think good tests are a necessary condition for AI not generating terrible software, but we're clearly not yet at a point where they're a sufficient one. So "a huge part" - possibly, but there are other huge parts still missing.
zoogeny an hour ago
> It took humans years to write the tests by hand, and the agents still failed to converge.
I think there is some hazard in assuming that what agents fail at today they will continue to fail on in the future.
What I mean is, if we take the optimistic view of agents continuing to improve on their current trajectory for another year or two, then it is worthwhile considering what tools and infrastructure we will need for them. Companies that start building that now, for the future they assume is coming, are going to be better positioned than people who wake up to a new reality in two years.
js8 3 hours ago
> There's no doubt, I think, testing will remain important and possibly become more important with more AI use, and so better testing is helpful, PBT included.
Given Curry-Howard isomorphism, couldn't we ask AI to directly prove the property of the binary executable under the assumption of the HW model, instead of running PBTs?
By no means do I want to dismiss PBTs - but it seems that this could be both faster and more reliable.
skybrian 2 hours ago
Proofs are a form of static analysis. Static analysis can find interesting bugs, but how a system behaves isn't purely a property of source code. It won't tell you whether the code will run acceptably in a given environment.
For example, if memory use isn't modelled, it won't tell you how big the input can be before the system runs out of memory. Similarly, if your database isn't modelled then you need to test with a real database. Web apps need to test with a real web browser sometimes, rather than a simplified model of one. Databases and web browsers are too complicated to build a full-fidelity mathematical model for.
When testing with real systems there's often the issue that the user's system is different from the one you use to test. You can test with recent versions of Chrome and Firefox, etc, which helps a lot, but what about extensions?
Nothing covers everything, but property tests and fuzzers actually run the code in some test environment. That's going to find different issues than proofs will.
Groxx 2 hours ago
And how do you know if it has proven the property you want, instead of something that's just complicated looking but evaluates to true?
groby_b 35 minutes ago
> Given Curry-Howard isomorphism, couldn't we ask AI to directly prove the property of the binary executable under the assumption of the HW model, instead of running PBTs?
Yes, in principle. Given unlimited time and a plentiful supply of unicorns.
Otherwise, no. It is well beyond the state of the art in formal proofs for the general case, and it doesn't become possible just because we "ask AI".
And unless you provide a formal specification of the entire set of behavior, it's still not much better than PBT -- the program is still free to do whatever the heck it wants that doesn't violate the properties formally specified.
DRMacIver 3 hours ago
> But the problem remains verifying that the tests actually test what they're supposed to.
Definitely. It's a lot harder to fake this with PBT than with example-based testing, but you can still write bad property-based tests and agents are pretty good at doing so.
I have generally found that agents with property-based tests are much better at not lying to themselves about it than agents with just example-based testing, but I still spend a lot of time yelling at Claude.
> So "a huge part" - possibly, but there are other huge parts still missing.
No argument here. We're not claiming to solve agentic coding. We're just testing people doing testing things, and we think that good testing tools are extra important in an agentic world.
pron 3 hours ago
> We're not claiming to solve agentic coding. We're just testing people doing testing things, and we think that good testing tools are extra important in an agentic world.
Yeah, I know. Just an opportunity to talk about some of the delusions we're hearing from the "CEO class". Keep up the good work!
ngruhn 3 hours ago
> I have generally found that agents with property-based tests are much better at not lying to themselves
I've also observed the cheating increase. I recently tried to do a specific optimization on a big, complex function. I wrote a PBT that checks that the original function returns the same values as the optimized function on all inputs. I also tracked the runtime to confirm that performance improved. Then I let Claude loose. The PBT was great at spotting edge cases, but eventually Claude always started cheating: it modified the test, it modified the original function, it implemented other (easier) optimizations, ...
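For reference, the property itself was tiny - roughly this shape (shown here as a Rust/proptest sketch with placeholder functions; not my actual code):

    use proptest::prelude::*;

    // Placeholders: `original` stands in for the big complex function,
    // `optimized` for the rewrite the agent is asked to produce.
    fn original(xs: &[i64]) -> i64 { xs.iter().copied().filter(|x| x % 3 == 0).sum() }
    fn optimized(xs: &[i64]) -> i64 { xs.iter().copied().filter(|x| x % 3 == 0).sum() }

    proptest! {
        // Differential property: the optimized version must agree with the
        // original on every generated input.
        #[test]
        fn optimized_matches_original(xs in proptest::collection::vec(any::<i64>(), 0..1000)) {
            prop_assert_eq!(original(&xs), optimized(&xs));
        }
    }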
tybug 3 hours ago
I actually think there's another angle here where PBT helps, which wasn't explored in the blog post.
That angle is legibility. How do you know your AI-written slop software is doing the right thing? One would normally read all the code. Bad news: that's not much less labor-intensive than not using AI at all.
But, if one has comprehensive property-based tests, they can instead read only the property-based tests to convince themselves the software is doing the right thing.
By analogy: one doesn't need to see the machine-checked proof to know the claim is correct. One only needs to check the theorem statement is saying the right thing.
pron 3 hours ago
Right, I said that property-based tests are easier to read, and that's good. But people still have to actually read them. Also, because they still work best at the "unit" level, to understand them the people reading them need to know how all the units are connected (e.g. a single person cannot review even the PBTs required for 10KLOC per day [1]).
My point isn't so much about PBT, but about how we don't yet know just how much agents help write real software (and how to get the most help from them).
[1]: I'm only using that number because Garry Tan, CEO of YC, claimed to generate 10K lines of text per day that he believes to be working code, and developers who work with AI agents know it can't all be.
shrubby 29 minutes ago
In here just for the Hegel joke.
DRMacIver 4 hours ago
Post author here btw, happy to take questions, whether they're about Hegel in particular, property-based testing in general, or some variant on "WTF do you mean you wrote rust bindings to a python library?"
Chinjut 3 hours ago
You mention in the post that there are design differences between Hegel/Hypothesis and QuickCheck, partly due to attitude differences between Python/non-Haskell programmers and Haskell programmers. As someone coming from the Haskell world (though by no means considering Haskell a perfect language), could you expand on what kinds of differences these are?
DRMacIver 3 hours ago
So I think a short list of big API differences are something like:
* Hypothesis/Hegel are very much focused on using test assertions rather than a single property that can be true or false. This naturally drives a style that is much more like "normal" testing, but also has the advantage that you can distinguish between different types of failing test. We don't go too hard on this, but both Hegel and Hypothesis will report multiple distinct failures if your test can fail in multiple ways.
* Hegelothesis's data generation, and how it interacts with testing, is much more flexible and basically fully imperative. You can generate whatever data you like wherever in your test you like, freely interleaving data generation and test execution.
* QuickCheck is very much type-first, with explicit generators as an afterthought. I think this is mostly a mistake even in Haskell, but in languages where "just wrap your thing in a newtype and define a custom implementation for it" will get you a "did you just tell me to go fuck myself?" response, it's a nonstarter. Hygel is generator-first: you can get the default generator for a type if you want, but it's mostly a convenience function, with the assumption that you're going to want a real generator specification at some point soon.
From an implementation point of view, and what enables the big conveniences, Hypothesis has a uniform underlying representation of test cases and does all its operations on them. This means you get:
* Test caching (if you rerun a failing test, it will immediately fail in the same way with the previously shrunk example)
* Validity guarantees on shrinking (your shrunk test case will always be one your generators could have produced. It's a huge footgun in QuickCheck that you can shrink to an invalid test case)
* Automatically improving the quality of your generators, never having to write your own shrinkers, and a whole bunch of other quality of life improvements that the universal representation lets us implement once and users don't have to care about.
The validity thing in particular is a huge pain point for a lot of users of PBT, and is what drove a lot of the core Hypothesis model to make sure that this problem could never happen.
The test caching is because I personally hated rerunning tests and not knowing whether it was just a coincidence that they were passing this time or that the test case had changed.
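If it helps, here's a tiny self-contained toy of that idea (emphatically not the real Hypothesis/Hegel internals or API, just an illustration): every test case is a pure function of a recorded sequence of choices, generation and execution can interleave freely, replaying the bytes replays the exact failure, and shrunk bytes still flow through the generator, so they can only ever produce values the generator could have made:

    // Toy model of the uniform underlying representation -- illustration only.
    struct Choices<'a> {
        bytes: &'a [u8],
        pos: usize,
    }

    impl<'a> Choices<'a> {
        fn draw_u8(&mut self) -> u8 {
            // Past the end of the recorded choices we keep returning 0, so a
            // shorter (shrunk) byte sequence still maps to some valid test case.
            let b = *self.bytes.get(self.pos).unwrap_or(&0);
            self.pos += 1;
            b
        }
    }

    // A test in this style: drawing data and exercising the code under test
    // are freely interleaved, and the generation logic itself guarantees we
    // never pop an empty Vec -- a guarantee that survives shrinking, because
    // shrunk bytes go through exactly the same logic.
    fn stack_never_underflows(c: &mut Choices<'_>) {
        let ops = c.draw_u8() % 16;
        let mut stack: Vec<u8> = Vec::new();
        for _ in 0..ops {
            if stack.is_empty() || c.draw_u8() % 2 == 0 {
                stack.push(c.draw_u8());
            } else {
                stack.pop();
            }
            assert!(stack.len() <= ops as usize);
        }
    }

    fn main() {
        // "Replay": rerunning the recorded bytes reproduces the same test case.
        let recorded = [5, 0, 42, 1, 0, 7, 1];
        stack_never_underflows(&mut Choices { bytes: &recorded, pos: 0 });

        // "Shrinking": a prefix of the bytes is still something the generator
        // could have produced, so shrunk examples are always valid.
        stack_never_underflows(&mut Choices { bytes: &recorded[..3], pos: 0 });
    }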
anentropic 3 hours ago
TBH reading the first few words of that section I was definitely expecting it to continue "so we used Claude to rewrite Hypothesis in Rust..." so that was quite a surprise!
DRMacIver 3 hours ago
It's on the agenda! We definitely want to rewrite the Hegel core server in rust, but not as much as we wanted to get it working well first.
My personal hope is that we can port most of the Hypothesis test suite to hegel-rust, then point Claude at all the relevant code and tell it to write us a hegel-core in rust with that as its test harness. Liam thinks this isn't going to work, I think it's like... 90% likely to get us close enough to working that we can carry it over the finish line. It's not a small project though. There are a lot of fiddly bits in Hypothesis, and the last time I tried to get Claude to port it to Rust the result was better than I expected but still not good enough to use.
mullr 2 hours ago
Why would I use this over the existing Proptest library in Rust?
DRMacIver 2 hours ago
Answered this over here: https://news.ycombinator.com/item?id=47506274
skybrian 2 hours ago
It isn't used by anyone besides me, but I wrote a property-testing library for Deno [1] that has a form of "sometimes" assertions (inspired by Antithesis) and uses "internal shrinking" (inspired by Hypothesis).
But it's still a "blind" fuzzer and it would be nice to write one that gets feedback from code coverage somehow. Instead, you have to run code coverage yourself and figure out how to change test data generation to improve it.
chriswarbo 3 minutes ago
> But it's still a "blind" fuzzer and it would be nice to write one that gets feedback from code coverage somehow
There have been simplistic attempts at this, e.g. instead of performing 100 tests, just keep going as long as coverage increases.
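i.e. something like the sketch below, where `covered_branches` is a stand-in for whatever counter the coverage instrumentation actually exposes (hypothetical):

    // Keep generating tests while coverage keeps improving; stop once 100
    // consecutive tests fail to reach any new branch.
    fn run_until_coverage_plateaus(
        mut run_one_random_test: impl FnMut(),
        covered_branches: impl Fn() -> usize,
    ) {
        let mut best = covered_branches();
        let mut stale = 0;
        while stale < 100 {
            run_one_random_test();
            let now = covered_branches();
            if now > best {
                best = now;
                stale = 0;
            } else {
                stale += 1;
            }
        }
    }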
The Choice Gradient Sampling algorithm from https://arxiv.org/pdf/2203.00652 feels like a nice way to steer generators in a more nuanced way. That paper uses it to avoid discards when rejection-sampling; but I have a feeling it could be repurposed to "reward" based on new coverage instead/as-well.
lwhsiao 3 hours ago
DRMacIver, can you comment on how this fits into the existing property-based testing ecosystems for various languages? E.g., if I use proptest in Rust, why would/should I switch to Hegel?
DRMacIver 2 hours ago
The short answer to how it fits into existing ecosystems is... in competition I suppose. We've got a lot of respect for the people working on these libraries, but we think the Hypothesis-based approach is better than the various approaches people have adopted. I don't love that the natural languages for us to start with are ones where there are already pretty good property-based testing libraries whose toes we're stepping on, but it ended up being the right choice because those are the languages people care about writing correct software in, and also the ones we most want the tools in ourselves!
I think right now, if you're a happy proptest user, it's probably not clear that you should switch to Hegel. I'd love to hear about people trying, but I can't, hand on heart, say that it's clearly the correct thing for you to do given its early state, even though I believe it will eventually be.
But roughly the things that I think are clearly better about the Hegel approach and why it might be worth trying Hegel if you're starting greenfield are:
* Much better generator language than proptest (I really dislike proptest's choices here. This is partly personal aesthetic preferences, but I do think the explicitly constructed generators work better as an approach and I think this has been borne out in Hypothesis). Hegel has a lot of flexible tooling for generating the data you want.
* Hegel gets you great shrinking out of the box which always respects the validity requirements of your data. If you've written a generator to always ensure something is true, that should also be true of your shrunk data. This is... only kind of true in proptest, at best. It hasn't got quite as many footguns in this space as original QuickCheck and its purely type-based shrinking, but you will often end up having to make a choice between shrinking that produces good results and shrinking that you're sure will give you valid data.
* Hegel's test replay is much better than seed saving. If you have a failing test and you rerun it, it will almost immediately fail again in exactly the same way. With approaches that don't use the Hypothesis model, the best you can hope for is to save a random seed, then rerun shrinking from that failing example, which is a lot slower.
There are probably a bunch of other quality of life improvements, but these are the things that have stood out to me when I've used proptest, and are in general the big contrast between the Hypothesis model and the more classic QuickCheck-derived ones.
tybug 4 hours ago
As possibly the one community on earth where it's actually better to post the code than the blog post: TL;DR this is a universal property-based testing protocol (https://github.com/hegeldev/hegel-core) and family of libraries (https://github.com/hegeldev/hegel-rust, more to come later).
I've talked with lots of people in the PBT world who have always seen something like this as the end goal of the PBT ecosystem. It seemed like a thing that would happen eventually, someone just had to do it. I'm super excited to actually be doing it and bringing great PBT to every and any language.
It doesn't hurt that this is coming right as great PBT in every language is suddenly a lot more important thanks to AI code!
rdevilla 3 hours ago
This is the first time in my HN membership where I was excited to read about the dialectic, only to be disappointed upon finding out the article is about Rust.
PBT is for sure the future - which is apparently now? 10 years ago when I was talking about QuickCheck [0] all the JS and Ruby programmers in my city just looked at me like I had two heads.
[0] https://github.com/ryandv/chesskell/blob/master/test/Test/Ch...
DRMacIver 3 hours ago
TBF PBT has been the present in Python for a while now.
10 years ago might have been a little early (Hypothesis 1.0 came out 11 years ago this coming Thursday), but we had pretty wide adoption by year two and it's only been growing. It's just that the other languages have all lagged behind.
It's by no means universally adopted, but it's not a weird rare thing that nobody has heard of.
hugeBirb 4 hours ago
Not that it matters at this point, but the Hegelian dialectic is not thesis, antithesis and synthesis. It's usually attributed to Hegel, but as I understand it he actually pushed back on this mechanical view of it all, and his views on these transitory states were much more nuanced.
jjgreen 4 hours ago
"Not that it matters ...", What? Of course it matters! I only come to HN for extended arguments on the meaning of the Dialectic.
AndrewKemendo 3 hours ago
I gave you one in a sibling ;)
DRMacIver 4 hours ago
Conversation with Will (Antithesis CEO) a couple months ago, heavily paraphrased:
Will: "Apparently Hegel actually hated the whole Hegelian dialectic and it's falsely attributed to him."
Me: "Oh, hm. But the name is funny and I'm attached to it now. How much of a problem is that?"
Will: "Well someone will definitely complain about it on hacker news."
Me: "That's true. Is that a problem?"
Will: "No, probably not."
(Which is to say: You're entirely right. But we thought the name was funny so we kept it. Sorry for the philosophical inaccuracy)
wwilson 4 hours ago
If I had been wearing my fiendish CEO hat at the time, I might have even said something like: "somebody pointing this out will be a great way to jumpstart discussion in the comments."
One of the evilest tricks in marketing to developers is to ensure your post contains one small inaccuracy so somebody gets nerdsniped... not that I have ever done that.
dfabulich 3 hours ago
If that's not motivation enough for you to rename it, well, TypeScript already has a static type checker called Hegel. https://hegel.js.org/ (It's a stronger type system than TypeScript.)
cmrdporcupine 2 hours ago
I think it's more that Hegel was fine with "dialectics" but that the antithesis/synthesis stuff is not actually what's going on in his dialectic. It's a bit of a popular misconception about the role of negation and "movement" in Hegel.
I believe (unless my memory is broken) they get into this a bunch in Ep 15 of my favourite podcast "What's Left Of Philosophy": https://podcasts.apple.com/gb/podcast/15-what-is-dialectics-...
Also if you're not being complained about on HN, are you even really nerd-ing?
sigbottle 3 hours ago
From what I understand, it's a proof technique (other techniques include Kant's Transcendental Deduction or Descartes's pure doubt) that requires generating new conceptual thoughts via internal contradiction and showing that you necessarily move from one category to the next.
The necessity thing is the big thing - why unfold in this way and not some other way. Because the premises with which you set up your argument can lead to extreme distortions, even if you think you're being "charitable" or whatever. Descartes introduced mind-body dualism with the method of pure doubt, which at first glance seems like a legitimate angle of attack.
Unfortunately that's about as nuanced as I know. Importantly, this excludes a wide swath of "any conflict that ends in a resolution validates Hegel" sophistry.
viccis 3 hours ago
>other techniques include Kant's Transcendental Deduction or Descartes's pure doubt
This is not quite accurate. Kant says very explicitly in the (rarely studied) Transcendental Doctrine of Method (Ch 1 Section 4, A789/B817) that this kind of proof method (he calls it "apagogic") is unsuitable to transcendental proofs.
You might be thinking of the much more well-studied Antinomies of Pure Reason, in which he uses this kind of proof negatively (which is to say, to circumscribe the limits of reason) as part of his proof against the way the metaphysical arguments from philosophers of his time (which he called "dogmatic" use of reason) about the nature of the cosmos were posed.
The method he used in his Deduction is a transcendental argument, which is typically expressed using two things, X and Y. X is problematic (can be true but not necessarily so), and Y is dependent on X. So then if Y is true, then X must necessarily be true as well.
zero0529 2 hours ago
I remember first learning about Hegel when playing Fallout NV. Caesar made it seem so simple.
biggestlou 2 hours ago
This is 100% true and a major pet peeve of mine.
AndrewKemendo 3 hours ago
Eh… it’s always worth keeping in mind the time period and what was going on with the tooling for mathematics and science at the time.
Statistics wasn't really mature enough yet to be applied to, let's say, political economy (a.k.a. economics), which is what Hegel was working in.
JB Say (1) was the leading mind in statistics at the time but wasn't as popular in political circles (notably, Proudhon used Say's work as epistemology, versus Hegel and Marx).
I've been in serious philosophy courses where they take the dialectic literally and it is the epistemological source of reasoning, so it's not gone.
This is especially true in how Marx expanded it into dialectical materialism - he got stuck on the process as the right epistemological approach, and Marxists still love the dialectic and its Hegelian roots (Zizek is the biggest one here).
The dialectic eventually fell due to robust numerical methods; it is a degenerate version of the sampling Markov process, which is really the best in class for epistemological grounding.
Someone posted this here years ago and I always thought it was a good visual: https://observablehq.com/@mikaelau/complete-system-of-philos...
sigbottle 3 hours ago
I thought the dialectic was just a proof methodology, and that the modern political angles you might hear from, say, a YouTube video essay on Hegel were because of a very careful narrative from some French dude (and I guess Marx with his dialectical materialism). I mean, I agree with many perspectives from 20th century continental philosophy, but it has to be agreed that they refactored Hegel for their own purposes, no?
seamossfet 2 hours ago
Oh my god, the rust developers are writing tests with Hegelian dialects.