Hi! Thanks for offering an AMA here. I don't have a specific question, but I am interested in hearing about the general story of what it was like developing Docker, what the experience was like trying to build a business around it, and what you're up to these days in post-Docker life. Thanks in advance!
I also recently discovered a trove of my old presentations, retracing my early obsession with the same problem, and my repeated failed attempts to get people to care. I shared some of them in a talk a few weeks ago: https://www.youtube.com/watch?v=huRfsLMK5sA
Imitation is the sincerest form of flattery! Obviously there was demand for an alternative to Docker that was native to the Red Hat platform. We couldn't offer that (although we tried in the early days) so it made sense that they would.
In the early days we tried very hard to accommodate their needs, for example by implementing support for devicemapper as an alternative to aufs. I remember spending many hours in their Boston office whiteboarding solutions. But we soon realized our priorities were fundamentally at odds: they cared most about platform lock-in, and we cared most about platform independence.

There was also a cultural issue: when Red Hat contributes to open source, it's always from a position of strength. If a project is important to them, they need merge authority - they simply don't know how to meaningfully contribute to an upstream project when they're not in charge. Because of the diverging design priorities, they never earned true merge rights on the repo: they had to argue for their pull requests like everyone else, and input from maintainers was not optional.

Many pull requests were never merged because of fundamental design issues, like breaking compatibility with non-Red Hat platforms. Others because of subjective architecture disagreements. They really didn't like that, which led to all sorts of drama and bad behavior. In the process I lost respect for a company I once admired.
I also think they made a mistake marketing podman as a drop-in replacement for Docker. This promise of compatibility limited their design freedom and I'm sure caused the maintainers a lot of headaches - compatibility is hard!
Ultimately the true priority of podman - native integration with the Red Hat platform - makes it impossible for it to overtake Docker. I'm sure some of the podman authors would like to jettison that constraint, but I don't think that's structurally possible. Red Hat will never invest in a project that doesn't contribute to their platform lock-in. Back when RH was a dominant platform, that was a strength. Nowadays it is a hindrance.
There was probably a lot going on behind closed doors, but from the outside, it appeared that RedHat was trying to improve the security and technical details of containers, while Docker was just refusing pull requests and not playing nice. This eventually drove RedHat to make their own implementation (i.e. Podman), so it was a self-created enemy, not necessarily one that was inevitable.
I'm definitely not a fan of RedHat's moves since being acquired, but at least from the outside, this looked like Docker being arrogant and problematic and not a "RedHat problem".
I am painfully aware of that narrative. All I can say is that it is a false narrative, deliberately pushed by Red Hat for competitive reasons. There was a deliberate decision to spend marketing dollars making Docker look bad (specifically less secure), at a time when we were competing directly in the datacenter market.
Ask yourself: how many open source projects reject PRs every day because of design disagreements? That's just how open source works. Why did you hear about that specific case of PRs getting rejected, and why do you associate it with vague concepts like "arrogance" and "insecurity"? That's because a marketing team engineered a narrative, then spent money to deploy that narrative - via blog posts, social media posts, talks at conferences, analyst briefings, partner briefings, sales pitches, and so on. This investment was justified by the business imperative of countering what was perceived to be an existential threat to Red Hat's core business.
It opened my eyes to the reality of big business in tech: many of the "vibes" and beliefs held by the software engineering community are engineered by marketing. If you have enough money to spend, you can get software engineers to believe almost anything. It is a depressing realization that I am still grappling with.
The most damning example I can give you: we once rejected a PR because it broke compatibility with other platforms. Red Hat went ahead and merged it in their downstream RPM package. So, Fedora and RHEL users who thought they were installing Docker, were in fact installing an unauthorized modified version of it. Later, a security vulnerability was discovered in their modified version only, but advertised as a vulnerability in Docker - imagine our confusion, looking for a vulnerability in code that we had not shipped. Then Red Hat used this specific vulnerability, which only existed in their modified version, in their marketing material attacking Docker as "insecure". That was an eye-opening moment for me...
If it is pure marketing, I wonder why Docker couldn't play the same game and be better at it.
E.g. for your most damning example: if Docker had published this story, blogged about it, and made noise in places like HN, it's exactly the kind of thing the press would love: RH breaks Docker security while claiming to be more secure! The Emperor has no clothes! If you take security seriously, accept no fake substitutes!
In any case I meant it in an informal software engineering sense: it's bad form for a packager to distribute upstream software under its original name, with substantial modifications beyond what users would expect distro packagers to make - backporting, build rules, etc.
For such a downstream change to introduce security vulnerabilities is a major fuckup. To actively blame upstream for said vulnerability, while competing with them in the market, is unethical.
We heard the feedback that we should pick a lane between CI and AI agents. We're refocusing on CI.
We're making Dagger faster and simpler to adopt.
We're also building a complete CI stack that is native to Dagger. The end-to-end integration allows us to do very magical things that traditional CI products cannot match.
We're looking for beta testers! Email me at solomon@dagger.io
Dagger has been a godsend in helping me cope with the unending misery that is GitHub Actions. A big thanks to you and the whole team at Dagger for making this possible.
Thank you for the kind words! I'd love to show you a demo of the new features we're working on, and get your thoughts. Want to DM me on the Dagger discord server? Or email me at solomon@dagger.io
Exactly. The LLM primitives will remain - we were careful to never compromise the modular, lego-like design of the system. But now we have clarity on the primary use case.
Thanks for giving us another chance! Come say hi on our discord, if you ever want to ask questions or discuss your use case. We have a friendly group of CI nerds who love to help.
Yes, I am the co-founder of Docker and also of Dagger. The other two co-founders of Dagger, Sam Alba and Andrea Luzzardi, were early employees of Docker.
The companies themselves are not related beyond that.
Only listen to your users and customers, ignore everyone else.
Don't hire an external CEO unless you're ready to leave. Hiring a CEO will not fix the loneliness of not having a co-founder.
Having haters is part of success. Accept it, and try not to let it get to you.
Don't partner with Red Hat. They are competitors even though they're not honest about it.
Not everyone hates you even though it may seem that way on hacker news and twitter. People actually appreciate your work and it will get better. Keep going.
'Bridge' was, and still is, an established networking term for joining two broadcast domains into one. Why the hell did you decide to name your NAT'ed network layer a 'bridge'?
As far as I know, Docker uses the term "bridge" in the standard way, to designate the use of Linux bridge interfaces (basically virtual ethernet switches) to interconnect containers. Containers connect to each other via a layer 2 bridge, not NAT.
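To make that concrete, here is a rough sketch of what that looks like with the standard Docker CLI (the network and container names are made up for illustration):

```console
$ docker network create --driver bridge demo-net   # creates a Linux bridge interface on the host
$ docker run -d --name web --network demo-net nginx
$ docker run --rm --network demo-net busybox ping -c 1 web
```

Traffic between the two containers traverses the Linux bridge at layer 2, with no address translation; NAT (masquerading) only comes into play for traffic leaving the bridge toward external networks.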
It makes as much sense as calling all the car roads in the world 'bridges'. They interconnect areas via a physical connection, not some fifth-dimension magic, after all.
It's even more egregious with 'ipvlan' and 'macvlan' drivers:
> ipvlan Connect containers to external VLANs.
Duh, that's a 'routed network', and nobody cares whether it's on a separate VLAN or not.
> macvlan Containers appear as devices on the host's network.
Which reminds me that BuildKit does not support specifying a network, which is crazy given that you can configure the daemon to not attach one by default.
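For context, the daemon-side setting being referred to is the documented `bridge` option in the daemon config (sketch of a typical `/etc/docker/daemon.json`):

```json
{
  "bridge": "none"
}
```

With this set, dockerd does not create the default `docker0` bridge, so nothing gets attached to a network unless one is specified explicitly.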
Yes, of course. I was also an avid user of vserver and openvz on Linux, back when they required patching the kernel, and lxc didn't exist yet.
When we open sourced Docker, we had considerable experience running openvz in production, as well as migrating to lxc - a miserable experience in the early days because the paint was still so fresh. To my knowledge we were the very first production deployment of managed databases and multi-tenant application servers on lxc, back in 2010.
It's a common misconception that Docker was a naive reinvention of, or a thin wrapper around, pre-existing technology like Solaris zones or lxc. In reality that is not the case. Those technologies were always intended as alternative forms of virtualization: a new way to slice up a machine. Docker was the first to use container and copy-on-write tech for the purpose of packaging and distributing applications, rather than provisioning machines. Before Docker, nobody would ever consider running a Linux container or Solaris zone on top of a VM: that would be nonsensical because they were considered to be at the same layer of the stack. Sun invented a lot of things, but they did not invent everything :)
AMA.