A decade of Docker containers (cacm.acm.org)
195 points by zacwest 6 hours ago
pixelmonkey 2 hours ago
The math of “a decade” seemed wrong to me, since I remembered Docker debuting in 2013 at PyCon US Santa Clara.
Then I found an HN comment I wrote a few years ago that confirmed this:
“[...] I remember that day pretty clearly because in the same lightning talk session, Solomon Hykes introduced the Python community to docker, while still working on dotCloud. This is what I think might have been the earliest public and recorded tech talk on the subject:”
YouTube link: https://youtu.be/1vui-LupKJI?t=1579
Note: starts at t=1579, which is 26:19.
Just being pedantic though. That’s about 13 years ago. The lightning talk is fun as a bit of computing history.
(Edit: as I was digging through the paper, they do cite this YouTube presentation, or a copy of it anyway, in the footnotes. And they refer to a 2013 release. Perhaps there was a multi-year delay between the paper being submitted to ACM with this title and it being published. Again, just being pedantic!)
bmitch3020 5 hours ago
I've seen countless attempts to replace "docker build" and the Dockerfile. They often want to give tighter control over the build, sometimes binding tightly to a package manager. But the Dockerfile has endured because of its flexibility. Starting from a known filesystem/distribution, copying some files in, and then running arbitrary commands within that filesystem mirrors so nicely what operations teams have been doing for a long time. And as ugly as that flexibility is, I think it will remain the dominant solution for quite a while longer.
kccqzy an hour ago
> But the Dockerfile has continued because of its flexibility.
The flip side is that the world still hasn't settled on a language-neutral build tool that works for all languages. Therefore we resort to running arbitrary commands to invoke language-specific package managers. In an alternate timeline where everyone uses Nix or Bazel or some such, docker build would be laughed out of the room.
muvlon 7 minutes ago
As a Nix evangelist, I have to say: Nix is really not capable of replacing language-specific package managers.
> running arbitrary commands to invoke language-specific package managers.
This is exactly what we do in Nix. You see this everywhere in nixpkgs.
What sets Nix apart from Docker is not that it works at a finer granularity, i.e. source-file level, but that it has real hermeticity and thus reliable caching. That is, we also run arbitrary commands, but they don't get to talk to the internet and thus don't get to e.g. `apt update`.
In a Dockerfile, you can `apt update` all you want, and this makes the build layer cache a very leaky abstraction. This is merely an annoyance when working on an individual container build but would be a complete dealbreaker at linux-distro-scale, which is what Nix operates at.
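To make the leak concrete, a hypothetical two-instruction sketch of the pattern (image and package chosen only as examples):

```dockerfile
FROM debian:bookworm
# The bits this layer produces depend on the state of the Debian
# mirrors at build time, but the layer-cache key only covers the
# instruction text. A rebuild months later silently reuses the
# stale cached layer instead of fetching current package lists.
RUN apt-get update && apt-get install -y curl
```

Nothing in the cache key captures what `apt-get update` actually fetched, which is exactly the leaky abstraction described above.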
brightball 35 minutes ago
Reminds me of the “Electric cars in reverse” video where the guy envisions a world where all vehicles are electric and tries to make the argument for gas engines.
__MatrixMan__ 3 hours ago
There are some hurdles preventing that flow from achieving reproducible builds. As the bad guys get more sophisticated, it's going to become more and more important that one party can say "we trust this build hash" and a separate party to say "us too".
That's not going to work if both parties get different hashes when they build the image, which won't happen as long as file modification timestamps (and other such hazards) are part of what gets hashed.
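A quick sketch of the timestamp hazard with plain GNU tar (not Docker itself, but the same principle applies to image layers): archives of identical content only hash identically if mtimes, ordering, and ownership are pinned.

```shell
# Build the same content twice with different mtimes; normalizing
# timestamps, entry order, and ownership makes the archives byte-identical.
mkdir -p /tmp/repro_demo
echo 'hello' > /tmp/repro_demo/file
tar --format=gnu --sort=name --mtime='@0' --owner=0 --group=0 --numeric-owner \
    -cf /tmp/repro_a.tar -C /tmp/repro_demo .
touch /tmp/repro_demo/file   # bump the mtime; file content is unchanged
tar --format=gnu --sort=name --mtime='@0' --owner=0 --group=0 --numeric-owner \
    -cf /tmp/repro_b.tar -C /tmp/repro_demo .
sha256sum /tmp/repro_a.tar /tmp/repro_b.tar
```

Drop the `--mtime`/`--sort`/ownership flags and the two hashes diverge, which is the "both parties get different hashes" failure mode.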
miladyincontrol 4 hours ago
The lack of Docker-registry-like solutions really does seem to be the chokepoint for many alternatives.
Personally I love using mkosi, and while it has all the composability and deployment options I'd care for, it's clear not everyone wants to build starting from only a blank set of OS templates.
whateveracct 3 hours ago
Nix is exceptionally good at making docker containers.
stabbles an hour ago
Does Nix do one layer per dependency? Does it run into >=128 layers issues?
In Spack [1] we do one layer per package; it's appealing, but I never checked if besides the layer limit it's actually bad for performance when doing filesystem operations.
Spivak 3 hours ago
Yes but then you're committed to using Nix which doesn't work so well the moment you need some software not packaged by Nix.
Want to throw a requirements.txt in there? No no, why would you even ask that? Meanwhile docker says yeah sure just run pip install, why should I care?
mikepurvis 3 hours ago
Especially if you use nix2container to take control over the layer construction and caching.
zbentley 5 hours ago
> the Dockerfile has continued because of its flexibility
I wish we had standardized on something other than shell commands, though. Puppet or terraform or something more declarative would have been such a better alternative to “everyone cargo cults ‘RUN apt-get upgrade’ onto the top of their dockerfiles”.
Like, the layer/stage/caching behavior is fine. I just wish the actual execution parts had been standardized using something at a higher level of abstraction than shell.
cpuguy83 5 minutes ago
Give https://github.com/project-dalec/dalec a look. It is more declarative. Has explicit abstractions for packages, caching, language level integrations, hermetic builds, source packages, system packages, and minimal containers.
It's a BuildKit frontend, so you still use "docker build".
bheadmaster 4 hours ago
> Puppet or terraform or something more declarative would have been such a better alternative
Until you need to do something that isn't covered with its DSL, and you extend it with an external command execution declaration... At which point people will just write bash scripts anyway and use your declarative language as a glorified exec.
avsm 4 hours ago
Docker broke out the build layer into a separate component called BuildKit (see HN discussion recently https://news.ycombinator.com/item?id=47166264).
However, Dockerfiles are so popular because they run shell commands and permit 'socially' extending someone else's shell commands; tacking commands onto the end of someone else's shell script is a natural process. /bin/sh is unreasonably effective at doing anything you need to a filesystem, and if the shell exposes a feature, it has probably been used in a Dockerfile somewhere.
Every other solution, especially the declarative ones, tends to come up short when _layering_ images quickly and easily. However, I agree they're good if you control the entire declarative spec.
mort96 an hour ago
Dockerfile has the flexibility to do what you want though, no? Use a base image with terraform or puppet or opentofu or whatever pre-installed, then your Dockerfile can just run the right command to apply some declarative config file from the build context.
And if you want something weird that's not supported by your particular tool of choice, you have the escape hatch of running arbitrary commands in the Dockerfile.
What more do you want?
mihaelm 4 hours ago
I'd say LLB is the "standard"; Dockerfile is just one of the human-friendly frontends, but you can always make one yourself or use an alternative. For example, Dagger uses BuildKit directly for building its containers instead of going through a Dockerfile.
harrall 2 hours ago
Declarative methods existed before Docker for years and they never caught on.
They sounded nice on paper but the work they replaced was somehow more annoying.
I moved over to Docker when it came out because it used shell.
toast0 3 hours ago
Oof, not terraform please. If you use foreach and friends, dependency calculations are broken, because dependency happens before dynamic rules are processed.
I'd get much better results if I used something else to do the foreach and gave terraform only static rules.
esseph 3 hours ago
The more you try and abstract from the OS, the more problems you're going to run into.
phplovesong 3 hours ago
You can pretty much replace "docker build" with "go build".
But as long as people want to use scripting languages (like PHP, Python, etc.), I guess Docker is the necessary evil.
garganzol 2 hours ago
Go is just one language, while Dockerfile gives you access to the whole universe with myriads of tools and options from early 1970s and up to the future. I don't know how you can compare or even "replace" Docker with Go; they belong to different categories.
well_ackshually an hour ago
>You can pretty much replace "docker build" with "go build".
I'll tell that to my CI runner, how easy is it for Go to download the Android SDK and to run Gradle? Can I also `go sonarqube` and `go run-my-pullrequest-verifications` ? Or are you also going to tell me that I can replace that with a shitty set of github actions ?
I'll also tell Microsoft they should update the C# definition to mark it down as a scripting language. And to actually give up on the whole language, why would they do anything when they could tell every developer to write if err != nil instead
Just because you have an extremely narrow view of the field doesn't mean it's the only thing that matters.
osigurdson 3 hours ago
In some situations yes, in others no. For instance, if you want to control memory or CPU, using a container makes sense (unless you want to use cgroups directly). Also, if running Kubernetes, a container is needed.
yunwal 2 hours ago
> You can pretty much replace "docker build" with "go build".
Interesting. How does go build my python app?
aobdev 3 hours ago
Wasn’t this the same argument for .jar files?
speedgoose 3 hours ago
It doesn't sound like Golang is going to dominate and replace everything else, so Docker is there to stay.
tzs 2 hours ago
I've not done serious networking stuff for over two decades, and never in as complex an environment as that in the article, so the networking part of the article went pretty much over my head.
What I want to do when running a Docker container on Mac is to be able to have the container have an IP address separate from the Mac's IP address that applications on the Mac see. No port mapping: if the container has a web server on port 80 I want to access it at container_ip:80, not 127.0.0.1:2000 or something that gets mapped to container port 80.
On Linux I'd just use Docker bridged networking, and I believe that would work, but on Mac that just bridges to the Linux VM running under the hypervisor rather than to the Mac.
Is there some officially recommended and supported way to do this?
For a while I did it by running WireGuard on the Linux VM to tunnel between that and the Mac, with forwarding enabled on the Linux VM [1]. That worked great for quite a while, but then stopped and I could not figure out why. Then it worked again. Then it stopped.
I then switched to this [2] which also uses WireGuard but in a much more automated fashion. It worked for quite a while, but also then had some problems with Docker updates sometimes breaking it.
It would be great if Docker on Mac came with something like this built in.
djs55 an hour ago
(co-author of the article and Docker engineer here) I think WireGuard is a good foundation to build this kind of feature. Perhaps try the Tailscale extension for Docker Desktop which should take care of all the setup for you, see https://hub.docker.com/extensions/tailscale/docker-extension
BTW are you trying to avoid port mapping because ports are dynamic and not known in advance? If so you could try running the container with --net=host and in Docker Desktop Settings navigate to Resources / Network and Enable Host Networking. This will automatically set up tunnels when applications listen on a port in the container.
Thanks for the links, I'll dig into those!
netrem 14 minutes ago
With ML and AI now being pushed into everything, images have ballooned in size. Just having torch as a dependency is some multiple gigabytes. I miss the times of aiming for 30MB images.
Have others found this to be the case? Perhaps we're doing something wrong.
Joe_Cool 5 minutes ago
I have an immutable Alpine Linux running from an ISO that includes a few docker containers (mostly ruby and php). All in about 750MB.
mrbluecoat 5 hours ago
> Docker repurposed SLIRP, a 1990s dial-up tool originally for Palm Pilots, to avoid triggering corporate firewall restrictions by translating container network traffic through host system calls instead of network bridging.
Genuinely fascinating and clever solution!
mmh0000 3 hours ago
Until recently, Podman used slirp4netns [1] for its container networking. About two years ago, they switched over to Pasta [2][3], which works quite a bit differently.
[1] https://github.com/rootless-containers/slirp4netns
[2] https://blog.podman.io/2024/03/podman-5-0-breaking-changes-i...
[3] https://passt.top/passt/about/#pasta-pack-a-subtle-tap-abstr...
redhanuman 5 hours ago
Repurposing a Palm Pilot dial-up tool to sneak container traffic past enterprise firewalls is unhinged, and yet it worked. The best infrastructure hacks are never clever in the moment; they are just desperate. The cleverness only shows up after someone else has to maintain it.
avsm 5 hours ago
VPNKit (the SLIRP component) has been remarkably bug free over the years, and hasn't been much of a burden overall.
There was another component that we didn't have room to cover in the article that has been very stable (for filesystem sharing between the container and the host) that has been endlessly criticised for being slow, but has never corrupted anyone's data! It's interesting that many users preferred potential-dataloss-but-speed using asynchronous IO, but only on desktop environments. I think Docker did the right thing by erring on the side of safety by default.
Normal_gaussian 5 hours ago
Exactly. "so I hung the radiator out the window" vibes.
toast0 3 hours ago
I don't think SLIRP was originally for Palm Pilots, given it was released two years before.
SLIRP was useful when you had a dial-up shell and they wouldn't give you SLIP or PPP, or it would cost extra. SLIRP is just a userspace program that uses the socket APIs, so as long as you could run your own programs and make connections to arbitrary destinations, you could make a dial script to connect your computer up like you had a real PPP account. No incoming connections though (afaik), so you weren't really a peer on the internet; a foreshadowing of ubiquitous NAT/CGNAT perhaps.
avsm 3 hours ago
> I don't think SLIRP was originally for Palm Pilots, given it was released two years before.
That's a mistake indeed; "popularised by" might have been better. Before my beloved PalmPilot arrived one Christmas, I was only using SLIRP to ninja Netscape and MUD sessions onto a dialup connection, which wasn't a very mainstream use.
talkvoix 5 hours ago
A full decade since we took the 'it works on my machine' excuse and turned it into the industry standard architecture ('then we'll just ship your machine to production').
avsm 5 hours ago
(coauthor of the article here)
Well, before Docker I used to work on Xen and that possible future of massive block devices assembled using Vagrant and Packer has thankfully been avoided...
One thing that's hard to capture in the article -- but that permeated the early Dockercons -- is the (positive) disruption Docker had in how IT shops were run. Before that going to production was a giant effort, and 'shipping your filesystem' quickly was such a change in how people approached their work. We had so many people come up to us grateful that they could suddenly build services more quickly and get them into the hands of users without having to seek permission slips signed in triplicate.
We're seeing another seismic cultural shift now with coding agents, but I think Docker had a similar impact back then, and it was a really fun community spirit. Less so today with the giant hyperscalers all dominating, sadly, but I'll keep my fond memories :-)
throwawaypath 4 hours ago
>massive block devices assembled using Vagrant and Packer has thankfully been avoided...
Funny comment considering lightweight/micro-VMs built with tools like Packer are what some in the industry are moving towards.
talkvoix 5 hours ago
Great point about coding agents! Back then, Docker gave us 'it works on my machine, let's ship the machine'. Now, AI agents are giving us 'I have no idea how this works, let's ship the prompt'. The early Docker community spirit really was legendary though—before every hyperscaler wrapped it in 7 layers of proprietary managed services. Thanks for the memories and the write-up!
syncsynchalt 3 hours ago
I see this take a lot but I'd argue what Docker did was to entice everyone to capture their build into a repeatable process (via a Dockerfile).
"Ship your machine to production" isn't so bad when you have a ten-line script to recreate the machine at the push of a button.
lioeters 2 hours ago
Exactly my feeling. Docker is "works on this machine" with an executable recipe to build the machine and the application. Newer better solutions like OCI-compliant tools will gradually replace Docker, but the paradigm shift has provided a lot of lasting value.
hwhshs 10 minutes ago
In 2002 I used to think: why can't they package a website? These .doc installation instructions are insane! What a waste of someone's time.
I sort of had the problem in mind. Docker is the answer. Not clever enough to have invented it.
If I did, I would probably have invented Octopus Deploy, as I was a Microsoft/.NET guy.
chuckadams 5 hours ago
It's the ultimate in static linking. Perhaps a question that should be asked is why that approach is so compelling?
blackcatsec 4 hours ago
I question that as well, it's also why Go is extremely popular. Could it just be a pendulum swing back towards static linking?
Wonder when some enterprising OSS dev will rebrand dynamic linking in the future...
redhanuman 5 hours ago
The real trick was making "ship your machine" sound like best practice. Ten years later we're doing the same thing with AI: "it works in my notebook" just became "containerize the notebook and call it a pipeline". The abstraction always wins because fixing the actual problem is just too hard.
zbentley 5 hours ago
> fixing the actual problem is just too hard.
I think it’s laziness, not difficulty. That’s not meant to be snide or glib: I think gaining expertise in how to package and deploy non-containerized applications isn’t difficult or unattainable for most engineers; rather, it’s tedious and specialized work to gain that expertise, and Docker allowed much of the field to skip doing it.
That’s not good or bad per se, but I do think it’s different from “pre-container deployment was hard”. Pre-container deployment was neglected and not widely recognized as a specialty that needed to be cultivated, so most shops sucked at it. That’s not the same as “hard”.
Bratmon 2 hours ago
I mean, walking through a door is easier than tearing down a wall, walking through it, and rebuilding the wall. That doesn't mean the latter is a good idea.
goodpoint 5 hours ago
...while completely forgetting about security
curt15 4 hours ago
>'then we'll just ship your machine to production'
Minus the kernel of course. What is one to do for workloads requiring special kernel features or modules?
avsm 3 hours ago
Those are global to the machine; generally not an issue and seccomp rules can filter out undesirable syscalls to other containers. But GPU kernel/userspace driver matching has been a huge headache; see https://cacm.acm.org/research/a-decade-of-docker-containers/... in the article for how the CDI is (sort of) helping standardise this.
Skywalker13 3 hours ago
Oh, thank you... I'm not alone... I'm so tired of seeing crappy containers with pseudo service management handled by Dockerfiles, used instead of proper and serious packaging like that of many venerable Linux distributions.
forrestthewoods 5 hours ago
Linux user space is an abject disaster of a design. So so so bad. Docker should not need to exist. Running computer programs need not be so difficult.
esafak 5 hours ago
Who does it right?
avsm 5 hours ago
An extremely random fact I noticed when writing the companion article [1] to this (an OCaml experience report):
"Docker, Guix and NixOS (stable) all had their first releases
during 2013, making that a bumper year for packaging aficionados."
Now we get coding agent updates every week, but has there been a similar year since 2013 where multiple great projects all came out at the same time?
esseph 3 hours ago
TBH I feel as if only docker belongs in that list. Guix and nix have users, sure, but not remotely like docker.
NewJazz 2 hours ago
Yeah they are way better than docker for packaging
zacwest 6 hours ago
The historic information in here was really interesting, and a great example of an article rapidly expanding in scope and detail. How they combatted corporate IT “security” software by pretending to be a VPN is quite unexpected.
the__alchemist 5 hours ago
I'm optimistic we will succeed in efforts to simplify Linux application/dependency compatibility instead of relying on abstractions that work around them.
mihaelm 5 hours ago
Maybe if you only look at it through the lens of building an app/service, but containers offer so much more than that. By standardizing their delivery through registries and management through runtimes, a lot of operational headaches just go away when using a container orchestrator. Not to mention better utilization of hardware since containers are more lightweight than VMs.
Hackbraten 3 hours ago
> Not to mention better utilization of hardware
When compared to a VM, yes. But shipping a separate userspace for each small app is still bloat. You can reuse software packages and runtime environments across apps. From an I/O, storage, and memory utilization point of view, it feels baffling to me that containers are so popular.
the__alchemist 5 hours ago
Hah, indeed that's my perspective. I'm used to being able to compile a program, distribute the executable, and have it "just work" across Windows, Linux, and macOS (with appropriate compile targets set).
__MatrixMan__ 5 hours ago
Agreed.
I've recently switched from docker compose to process compose and it's super nice not to have to map ports or mount volumes. What I actually needed from docker had to do less with containers and more with images, and nix solves that problem better without getting in the way at runtime.
onei 4 hours ago
Assuming I've found the right process-compose [1], it struck me as having much overlap with the features of systemd. Or at least, I would tend to reach for systemd if I wanted something to run arbitrary processes. Is there something additional/better that process-compose does for you?
Bratmon 2 hours ago
I'm curious why. To me "We updated our library to change some things in a way that's an improvement on net but only mostly backwards compatible" seems like an extremely common instinct in software development. But in an environment where people are doing that all the time, the only way to reliably deploy software is to completely freeze all your direct and indirect dependencies at an exact version. And Docker is way better at handling that than traditional Linux package managers are.
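As a sketch of that freeze-everything pattern (the image digest and package versions here are hypothetical):

```dockerfile
# Pin the base image by digest, not just by tag, and pin every
# dependency to an exact version, so a rebuild next year resolves
# to the same bits despite upstream "mostly compatible" updates.
FROM python:3.12-slim@sha256:<digest>
COPY requirements.txt .          # exact pins only, e.g. requests==2.31.0
RUN pip install --no-cache-dir -r requirements.txt
```

The digest pin matters as much as the version pins: a tag like `3.12-slim` is mutable and gets repointed at rebuilt images over time.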
Why do you think other tools will make a comeback?
Joker_vD 5 hours ago
I am also optimistic we will succeed in efforts to properly annotate the data on the Internet with useful and accurate meta-data and achieve the semantic web vision instead of relying on search engines and LLMs.
brtkwr 3 hours ago
I realise Apple containers haven't quite taken off as expected, but their omission from the article stands out. Nice that it mentions alternative approaches like podman and Kata, though.
avsm 3 hours ago
> but omission from the article stands out.
(article author here)
Apple containers are basically the same as how Docker for Mac works; I wrote about it here: https://anil.recoil.org/notes/apple-containerisation
Unfortunately Apple managed to omit the feature we all want that only they can implement: namespaces for native macOS!
Instead we got yet another embedded-Linux-VM which (imo) didn't really add much to the container ecosystem except a bunch of nice Swift libraries (such as the ext2 parsing library, which is very handy).
tsoukiory 7 minutes ago
I don't speak English.
rando1234 an hour ago
Didn't Vagrant/Vagrantfiles precede Docker? Unclear why that would be the key to its success if so.
phplovesong 3 hours ago
We have shipped unikernels for the last decade. Zero security issues so far. I highly recommend looking into the unikernel space for a Docker alternative. MirageOS is a good start.
avsm 3 hours ago
cool! What services have you shipped as unikernels? Docker doesn't have to be an alternative; it can help with the build/run pipeline for them too: https://www.youtube.com/watch?v=CkfXHBb-M4A (Dockercon 2015!)
politelemon 4 hours ago
Somewhere along the line they started prioritising Docker Desktop over Docker. It's a bit jarring to see new features come to Desktop before they come to Linux, such as the new sandbox features.
Is there any insight into this? I would have thought the opposite, where developers on the platform that made Docker succeed get first preview of features.
krapht 3 hours ago
Paying customers use docker desktop.
arikrahman 4 hours ago
I'm hoping the next decade introduces more declarative workflows with Nix, working with Docker to that end.
INTPenis 5 hours ago
I thought it was 2014 when it launched? The article says the command line interface hasn't changed since 2013.
avsm 5 hours ago
We first submitted the article to the CACM a while ago. The review process takes some time and "Twelve years of Docker containers" didn't have quite the same vibe.
heraldgeezer 2 hours ago
I still haven't learned it; being in IT, it's so embarrassing. Yes, I know about the 2-3h YouTube tutorials, but just...
1970-01-01 2 hours ago
I now wonder if we'll end up switching it all back to VMs so the LLMs have enough room to grow and adapt.
skybrian 2 hours ago
Maybe, but the install will often be done using a Dockerfile.
callamdelaney 2 hours ago
The fact that docker still, in 2026, will completely overwrite iptables rules silently to expose containers to external requests is, frankly, fucking stupid.
netrem 18 minutes ago
Indeed. I've had even experienced sysadmins be surprised that their ufw setup will be ignored.
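For anyone bitten by this: Docker's daemon config does let you opt out of its iptables management entirely (sketch below), though you then have to write the NAT/forwarding rules yourself; the lighter fix is publishing ports on loopback only, e.g. `docker run -p 127.0.0.1:8080:80 …`.

```json
{
  "iptables": false
}
```

That goes in `/etc/docker/daemon.json` and takes effect on daemon restart; without your own rules in place, container networking will break, so treat it as an advanced escape hatch rather than a default.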
brcmthrowaway 4 hours ago
I dont use Dockerfile. Am i slumming it?
vvpan 3 hours ago
Probably? How do you deploy?
rglover 2 hours ago
Just pull a tarball from a signed URL, install deps, and run from systemd. Rolls out in 30 seconds, remarkably stable. Initial bootstrap of deps/paths is maybe 5 minutes.
user3939382 4 hours ago
It solves an obvious practical problem. And on one hand the practical where-we're-at-now is all that matters; that's a legitimate perspective.
There's another one, at least IMHO: this entire stack, from the bottom up, is designed wrong, and every day we as a society continue marching down this path we're just accumulating more technical debt. Pretty much every time you find the solution to be "ok, so we'll wrap the whole thing and then…", something is deeply wrong and you're borrowing from the future a debt that must come due. Energy is not free. We tend to treat compute like it is.
Maybe I’m in a big club but I have a vision for a radically different architecture that fixes all of this and I wish that got 1/2 the attention these bandaids did. Plan 9 is an example of the theme if not the particular set of solutions I’m referring to.
forrestthewoods 2 hours ago
I am so thoroughly convinced that Docker is a hacky-but-functional solution to an utterly failed userspace design.
Linux user space decided to try to share dependencies. Docker obliterates this design goal by shipping dependencies, but stuffing them into the filesystem as if they were shared.
If you’re going to do this then a far far far simpler solution is to just link statically or ship dependencies adjacent to the binary. (Aka what windows does). Replicating a faux “shared” filesystem is a gross hack.
This is a distinctly Linux problem. Windows software doesn’t typically have this issue. Because programs ship their dependencies and then work.
Docker is one way to ship dependencies. So it’s not the worst solution in the world. But I swear it’s a bad solution. My blood boils with righteous fury anytime anyone on my team mentions they have a 15 minute docker build step. And don’t you damn dare say the fix to Docker being slow is to add more layers of complexity with hierarchical Docker images ohmygodiswear. Running a computer program does not have to be hard I promise!!
ahnick an hour ago
Okay, so what's the best solution? What's even just a better solution than Docker? I mean really truly lay out all the details here or link to a blog post that describes in excruciating detail how they shipped a web application and maintained it for years and was less work than Docker containers. Just saying "a far far simpler solution is to just link statically or ship dependencies adjacent to the binary" is ignoring huge swaths of the SDLC. Anyone can cast stones, very few can actually implement a better solution. Bring the receipts.
forrestthewoods 5 minutes ago
The first half of my career was spent shipping video games. There is no such thing as shipping a game in Docker, not even on Linux. You depend on a minimum version of glibc and then ship your damn dependencies.
The more recent half of my career has been more focused on ML and now robotics. Python ML is an absolute clusterfuck. It is close to getting resolved with uv and Pixi. The trick there is to include your damn dependencies… via symlink to a shared cache.
Any program or pipeline that relies on whatever arbitrary ass version of Python is installed on the system can die in a fire.
That’s mostly about deploying. We can also talk about build systems.
The one true build system path is a monorepo that contains your damn dependencies. Anything else is wrong and evil.
I'm also spicy and think that if your build system can't cross-compile then it sucks. It's trivial to cross-compile for Windows from Linux because Windows doesn't suck (in this regard). It's almost impossible to cross-compile to Linux from Windows because Linux userspace is a bad, broken, failed design. However, Andrew Kelley is a patron saint and Zig makes it feasible.
Use a monorepo, pretend the system environment doesn’t exist, link statically/ship adjacent so/dll.
Docker clearly addresses a real problem (that Linux userspace has failed). But Docker is a bad hack. The concept of trying to share libraries at the system level has objectively failed. The correct thing to do is to not do that, and don’t fake a system to do it.
Windows may suck for a lot of reasons. But boy howdy is it a whole lot more reliable than Linux at running computer programs.