What Category Theory Teaches Us About DataFrames (mchav.github.io)
153 points by mchav 5 days ago
rich_sasha 9 hours ago
The article starts well, on trying to condense pandas' gazillion inconsistent and continuously-deprecated functions with tens of keyword arguments into a small, condensed set of composable operations - but it lost me after that.
The more interesting nugget for me is the project they mention, Modin (https://modin.readthedocs.io/en/latest/index.html), which apparently went to the effort of analysing common pandas usage and compressed the API into a mere handful of operations. Which sounds great!
Sadly, the purpose seems rather to have been to then recreate the full pandas API, only running much faster, backed by things like Ray and Dask. So it's the same API, just much faster.
To me it's a shame. Pandas is clearly quite ergonomic for various exploratory interactive analyses, but the API is, imo, awful. The speed is usually not a concern for me - slow operations often seem to be avoidable, and my data tends to fit in (a lot of) RAM.
I can't see that their condensed API is public-facing and usable, though.
sweezyjeezy 5 hours ago
The pandas API is awful, but it's kind of interesting why. It was started as a financial time series manipulation library ('panels') in a hedge fund and a lot of the quirks come from that. For example the unique obsession with the 'index' - functions seemingly randomly returning dataframes with column data as the index, or having to write index=False every single time you write to disk, or it appending the index to the Series numpy data leading to incredibly confusing bugs. That comes from the assumption that there is almost always a meaningful index (timestamps).
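A minimal sketch of that index behaviour, with toy data (column names made up for illustration): groupby silently moves the grouping column into the index, and to_csv writes the index out as an extra column unless you say otherwise.

```python
import pandas as pd

# Toy data standing in for a real table (illustrative only).
df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})

# groupby moves the grouping column into the index by default:
summed = df.groupby("key").sum()
print(summed.index.name)   # "key" -- the column became the index

# as_index=False (or .reset_index()) keeps it as a column:
tidy = df.groupby("key", as_index=False).sum()
print(list(tidy.columns))  # ['key', 'val']

# And without index=False, to_csv writes the index as a leading column:
csv_text = tidy.to_csv(index=False)
```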
bbkane 6 hours ago
Check out polars- I find it much more intuitive than pandas as it looks closer to SQL (and I learned SQL first). Maybe you'll feel the same way!
rich_sasha 4 hours ago
I've looked at Polars. My sense is that Pandas is an interactive data analysis library poorly suited to production uses, and Polars is the other way around. Seemed quite verbose for example. Sometimes doing `series["2026"]` is exactly the right thing to type.
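For context, that kind of lookup is pandas' partial string indexing on a DatetimeIndex; a bare year string slices out the whole year. Sketched here with made-up data, using `.loc` as the explicit spelling of the same idea:

```python
import pandas as pd

# A daily series spanning two years (synthetic data for illustration).
idx = pd.date_range("2025-01-01", "2026-12-31", freq="D")
series = pd.Series(range(len(idx)), index=idx)

# Partial string indexing: a year string selects that entire year.
year_2026 = series.loc["2026"]
print(len(year_2026))   # 365 -- every day of 2026
```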
Lyngbakr 4 hours ago
Agreed — I much prefer polars, too. IIRC the latest major version of pandas even introduced some polars-style syntax.
toxik 41 minutes ago
Pandas and so on exist for the same reason Django's ORM and SQLAlchemy do: people do not want to string-interpolate to talk to their database. SQL is great for DBAs, and absolutely sucks for programmers. Microsoft was really onto something with LINQ, in my opinion.
few 8 hours ago
I feel like, one or two decades ago, all the rage was about rewriting programs into just two primitives: map and reduce.
For example filter can be expressed as:
from functools import reduce

is_even = lambda x: x % 2 == 0
mapped = map(lambda x: [x] if is_even(x) else [], data)
filtered = reduce(lambda x, y: x + y, mapped, [])
But then the world moved on from it because it was too rigid.
mrlongroots 5 hours ago
MapReduce is nice but it doesn't, by itself, help you reason about pushdowns for one. Parquet, for example, can pushdown select/project/filter, and that's lost if you have MapReduce. And a reduce is just a shuffle + map, not very different from a distributed join. MapReduce as an escape hatch over what is fundamentally still relational algebra may be a good intuition.
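A toy sketch of why pushdown wants relational structure rather than opaque functions (all names here are made up; real Parquet readers do this with row-group statistics): when the engine can see the filter and projection as data, it can skip whole row groups; a black-box MapReduce function can't be inspected that way.

```python
# Two "row groups" with min/max statistics, as a Parquet file would have.
row_groups = [
    {"min_x": 0,  "max_x": 9,  "rows": [{"x": i, "y": i * 2} for i in range(10)]},
    {"min_x": 10, "max_x": 19, "rows": [{"x": i, "y": i * 2} for i in range(10, 20)]},
]

def scan(groups, predicate_min, columns):
    """Filter pushdown: prune row groups whose statistics rule them out."""
    for g in groups:
        if g["max_x"] < predicate_min:   # whole group skipped via metadata
            continue
        for row in g["rows"]:
            if row["x"] >= predicate_min:
                # Projection pushdown: materialize only requested columns.
                yield {c: row[c] for c in columns}

result = list(scan(row_groups, predicate_min=15, columns=["y"]))
print(len(result))   # 5 rows; the first row group was never read
```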
mememememememo 7 hours ago
Performance aside, it seems you could do most, maybe all, of the ops with those three. I say three because your sneaky plus is a union operation. So map, reduce and union.
But you are also allowing arbitrary code expressions. So it is less lego-like.
bjourne 4 hours ago
Reductions are painful because they specify a sequence of ordered operations. Runtime is O(N), where N is the sequence length, regardless of the amount of hardware. So you want to work at a higher level where you can exploit commutativity and independence of some (or even most) of the operations.
toxik an hour ago
You can reduce in parallel. That was the whole point of MapReduce. For example, the sum a+b+c+d+e+f+g+h can be found by first computing ab, cd, ef, gh; then those results (ab)(cd), (ef)(gh); then the final result (abcd)(efgh). That's just three steps to compute seven sums.
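That pairwise scheme can be sketched in a few lines (a toy illustration, not a real parallel runtime): each pass combines adjacent pairs, so an associative operation finishes in O(log n) passes, and every combine within a pass is independent and could run in parallel.

```python
from operator import add

def tree_reduce(op, xs):
    """Tree-shaped reduction: O(log n) passes for an associative op,
    versus the O(n) chain of an ordinary left fold."""
    xs = list(xs)
    while len(xs) > 1:
        # Combine adjacent pairs; an odd element is carried through.
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

# Eight items, three passes -- the abcdefgh example above.
print(tree_reduce(add, list("abcdefgh")))   # 'abcdefgh'
```

Note that only associativity is needed for this shape; commutativity buys further reordering freedom on top.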
ux266478 an hour ago
You're right it's primarily a runtime + compiler + language issue. I really don't understand why people tried to force functional programming in environments without decent algebraic reasoning mechanisms.
Modern graph reducers have inherent confluence and aren't reliant on explicit commutation. They can do everything parallel and out of order (until they have to talk to some extrinsic thing like getting input or spitting out output), including arbitrary side-effectual mutation. We really live in the future.
heavenlyblue 21 minutes ago
Reduce is massively parallel for commutative operations
pavodive 6 hours ago
When I started reading about pandas' complexity and the smaller set of operations needed, I couldn't help but think of the simplicity of R's data.table.
Granted, it's got more than 15 functions, but its simplicity seems to me very similar to what the author presented in the end.
Lyngbakr 4 hours ago
Back when I used to use Stackoverflow, someone would always come along with a data.table solution when I asked a question about dplyr. The terse syntax seemed so foreign compared to the obvious verb syntax of dplyr. But then I learned data.table and I've never looked back. It's a superb tool!
hermitcrab 5 hours ago
>a dataframe is a tuple (A, R, C, D): an array of data A, row labels R, column labels C, and a vector of column domains D.
What is 'a vector of column domains D'? A description of how the data A maps to columns?
throw_await 4 hours ago
I think "domain" here is like the datatype
getnormality 7 hours ago
Hmm. Folks trying to discover the elegant core of data frame manipulation by studying... pandas usage patterns. When R's dplyr solved this over a decade ago, mostly by respecting SQL and following its lead.
The pandas API feels like someone desperately needed a wheel and had never heard of a wheel, so they made a heptagon, and now millions of people are riding on heptagon wheels. Because it's locked in now, everyone uses heptagon wheels, what can you do? And now a category theorist comes along, studies the heptagon, and says hey look, you could get by on a hexagon. Maybe even a square or a triangle. That would be simpler!
No. Stop. Data frames are not fundamentally different from database tables [1]. There's no reason to invent a completely new API for them. You'll get within 10% of optimal just by porting SQL to your language. Which dplyr does, and then closes most of the remaining optimality gap by going beyond SQL's limitations.
You found a small core of operations that generates everything? Great. Also, did you know Brainfuck is Turing-complete? Nobody cares. Not all "complete" systems are created equal. A great DSL is not just about getting down to a small number of operations. It's about getting down to meaningful operations that are grammatically composable. The relational algebra that inspired SQL already nailed this. Build on SQL. Don't make up your own thing.
Like, what is "drop duplicates"? What are duplicates? Why would anyone need to drop them? That's a pandas-brained operation. You want the distinct keys defined by a select set of key columns, like SQL and dplyr provide.
Who needs a separate select and rename? Select is already using names, so why not do your name management there? One flexible select function can do it all. Again, like both SQL and dplyr.
Who needs a separate difference operation? There's already a type of join, the anti-join, that gets that done more concisely and flexibly, and without adding a new primitive, just a variation on the concept of a join. Again, like both SQL and dplyr.
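For what it's worth, even pandas can express the anti-join as a join variation rather than a separate difference primitive, via merge's indicator flag (toy frames, made-up column names):

```python
import pandas as pd

left = pd.DataFrame({"k": [1, 2, 3], "v": ["a", "b", "c"]})
right = pd.DataFrame({"k": [2, 3]})

# Anti-join: rows of `left` whose key has no match in `right`.
# indicator=True tags each row's provenance; keep the left-only ones.
merged = left.merge(right, on="k", how="left", indicator=True)
anti = merged[merged["_merge"] == "left_only"].drop(columns="_merge")
print(list(anti["k"]))   # [1] -- only the unmatched row survives
```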
Props to pandas for helping so many people who have no choice but to do tabular data analysis in Python, but the pandas API is not the right foundation for anything, not even a better version of pandas.
[1] No, row labels and transposition are not a good enough reason to regard them as different. They are both just structures that support pivoting, which is vastly more useful, and again, implemented by both R and many popular dialects of SQL.
DangitBobby 6 hours ago
I guess I have pandas brain because I definitely want to drop duplicates. 100% of the time I'm worried about duplicates, and 99% of the time the only thing I want to do with duplicates is drop them. When you've got 19 columns it's _really fucking annoying_ if the tool you're using doesn't have an obvious way to say `select distinct on () from my_shit`. A close second, at say 98% of the time, is wanting to get a count of duplicates as a sanity check, because I know to expect a certain amount of them. Pandas makes that easy too, in a way SQL makes really fucking annoying. There are a lot of parts of pandas that made me stop using it long ago, but first-class duplicates handling is not among them.
And the API is vastly superior to SQL in some respects from a user perspective, despite being all over the place in others. Dataframe select/filtering, e.g. `df = df[df.duplicated(keep='last')]`, is simple, expressive, obvious, and doesn't result in bleeding fingers. The main problem is that the rest of the language around it (all the indentation, newlines, loops, functions and so on) can be too terse or too dense, and much harder to read than SQL.
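Concretely, both things described above are one-liners in pandas (toy frame, hypothetical column names): count the duplicates on a chosen key set as a sanity check, then keep one row per key.

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 2, 2, 2, 3], "v": list("abcdef")})

# Sanity check: how many rows are duplicates on the key column?
n_dupes = df.duplicated(subset=["id"]).sum()
print(n_dupes)              # 3 extra rows beyond the first of each id

# Keyed "select distinct on": keep one row per id (here, the last).
deduped = df.drop_duplicates(subset=["id"], keep="last")
print(list(deduped["v"]))   # ['b', 'e', 'f']
```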
gregw2 5 hours ago
You articulate your case well, thank you!
I always warn people (particularly junior people) though that blindly dropping duplicates is a dangerous habit because it helps you and others in your organization ignore the causes of bad data quickly without getting them fixed at the source. Over time, that breeds a lot of complexity and inefficiency. And it can easily mask flaws in one's own logic or understanding of the data and its properties.
getnormality 6 hours ago
Duplicates in source data are almost always a sign of bad data modeling, or of analysts and engineers disregarding a good data model. But I agree that this ubiquitous antipattern that nobody should be doing can still be usefully made concise. There should be a select distinct * operation.
And FWIW I personally hate writing raw SQL. But the problem with the API is not the data operations available, it's the syntax and lack of composability. It's English rather than ALGOL/C-style. Variables and functions, to the extent they exist at all, are second-class, making abstraction high-friction.
mr_toad an hour ago
> just by porting SQL to your language
You make it sound like writing an SQL parser and query engine is a trivial task. Have you ever looked at the implementation of a query engine to see what’s actually involved? You can’t just ‘build on SQL’, you have to build a substantial library of functions to build SQL on top of.
getnormality 4 hours ago
On reflection I think it's possible I may have missed the potential positive value of the post a bit. Maybe analyzing pandas gets you down to a set of data frame primitives that is helpful to build any API. Maybe the API you start with doesn't matter. I don't know. When somebody works hard to make something original, you should try to see the value in it, even if the approach is not one you would expect to be helpful.
I stand by my warnings against using pandas as a foundation for thinking about tabular data manipulation APIs, but maybe the work has value regardless.
fn-mote 7 hours ago
Amen.
The author takes the 4 operations below and discusses some 3-operation thing from category theory. Not worth it, and not as clear as dplyr.
> But I kept looking at the relational operators in that table (PROJECTION, RENAME, GROUPBY, JOIN) and thinking: these feel related. They all change the schema of the dataframe. Is there a deeper relationship?
doug_durham 3 hours ago
SQL only works on well defined data sets that obey relational calculus rules. Pandas is a power tool for dealing with data as you find it. Without Pandas you are stuck with tools like Excel.
jiehong 8 hours ago
Dups of a few days ago:
kiviuq 3 hours ago
there is also ZIO Prelude and ZIO schema...
jmount 2 hours ago
I like this sort of study, but it really misses the point not to give more credit for some of the observations and designs to Codd and others.
jeremyscanvic 6 hours ago
It's very insightful how they explain the difference between dataframes and SQL tables / standard relational structures!
hermitcrab 5 hours ago
I guess this article is an interesting exercise from a pure maths point of view. But, as someone developing a drag-and-drop data wrangling tool, the important thing is creating a set of composable operations/primitives that are meaningful and useful to your end user. We have ended up with 73 distinct transforms in Easy Data Transform. Sure, they overlap to an extent, but I feel they are at the right semantic level for our users, who are not category theorists.
mrlongroots 5 hours ago
Algebras are also nice for implementations. If you can decompose a domain into a few algebraic primitives you can write nice SIMD/CUDA kernels for those primitives.
To your point, I wonder if the 73 distinct transforms are just different defaults/usability wrappers over these. And you may also get into situations where kernels can be fused together, or other batching constraints enable optimizations that nice algebraic primitives don't capture. But that's just systems: theory is useful in helping rethink API bloat and keeping us all honest.
hermitcrab 4 hours ago
They are effectively high-level wrappers over the most primitive operations. High enough level that they can be used from a GUI, rather than code.
It is a balance. Too few transforms and they become too low level for my users. Too many and you struggle to find the transform you want.
tikhonj 3 hours ago
You can have both: you start with a small, mathematically inspired algebraic core, then you express the higher-level more user-friendly operations in terms of the algebraic core.
As long as your core primitives are well designed (easier said than done!), this accomplishes two things: it makes your implementation simpler, and it helps guide and constrain your user-facing design. This latter aspect is a bit unintuitive (why would you want more constraints to work around?), but I've seen it lead to much better interface designs in multiple projects. By forcing yourself to express user-level affordances in terms of a small conceptual core, you end up with a user design that is more internally consistent and composable.
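A tiny sketch of the layering (all names invented for illustration): a minimal core of relational-ish primitives over lists of dicts, with a user-facing `distinct` defined purely in terms of the core rather than as a new primitive.

```python
# Core primitives (the small algebraic layer).
def filter_rows(rows, pred):
    return [r for r in rows if pred(r)]

def project(rows, cols):
    return [{c: r[c] for c in cols} for r in rows]

# User-facing wrapper, expressed via the core: project to the key
# columns, then keep the first occurrence of each key.
def distinct(rows, cols):
    seen, out = set(), []
    for r in project(rows, cols):
        key = tuple(r[c] for c in cols)
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

rows = [{"k": 1, "v": "a"}, {"k": 1, "v": "b"}, {"k": 2, "v": "c"}]
print(distinct(rows, ["k"]))   # [{'k': 1}, {'k': 2}]
```

The constraint pays off exactly as described: because `distinct` is defined through `project`, it automatically composes with anything else built on the same core.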
jimbokun 3 hours ago
For one thing it gives users of your library fewer concepts to learn.
whattheheckheck 4 hours ago
Have you heard of the book Mathematics for Big Data?
He says himself the ideas are more important than the software package
hermitcrab 4 hours ago
D4M seems to be a library, not a book. Or am I missing something?