"The problem is that Docker the technology became so successful that Docker the company struggled to monetize it. When your core product becomes commoditized and open source, you need to find new ways to add value."
No, everything was already open source, and others had done it before too; they just made it in a way that a lot of "normal" users could start with it. Then they waited too long, and others created better products or their own.
"Docker Swarm was Docker’s attempt to compete with Kubernetes in the orchestration space."
No, it was never intended like that. That some people built infra/businesses around it is something completely different, but swarm was never intended to be a kubernetes contender.
"If you’re giving away your security features for free, what are you selling?"
This is what is actually going to cost them their business. I'm extremely grateful for what they have done for us, but they didn't give themselves a chance.
Their behaviour has been more akin to a non-profit.
Great for us, not so great for them in the long run.
It didn't help them that they rejected the traditionally successful way of monetizing open source software, which is selling support contracts to large corporate users.
Corporate customers didn't like the security implications of the Docker daemon running as root; they wanted better sandboxing and management (cgroups v2), wanted to be able to run their own internal registries, didn't want Docker fighting with systemd, etc.
Docker was not interested (in the early years) in adopting cgroups v2 or daemonless/rootless operation, and they wanted everyone to pay to use Docker Hub on the public internet rather than running their own internal registries, so docker-cli didn't support alternate registries for a long, long time. And it seemed like they disliked systemd for "ideological" reasons, to the extent that they didn't make much effort to resolve the problems that would crop up between docker and systemd.
Because Docker didn't want to build the product that corporate customers wanted to use, and didn't accept patches when Red Hat tried to implement those features themselves, eventually Red Hat just went out and built Podman, Quay, and the entire ecosystem of tooling that those corporate customers wanted (and sold it to them). That was a bit of an own goal.
Absolutely none of this is true.
Docker had support contracts (Docker EE... and trying to remember, docker-cs before that naming pivot?).
Corporate customers do not care about any of the things you mentioned. I mean, maybe some, but in general no. That's not what corps think about.
There was never "no interest" at Docker in cgv2 or rootless.
Never.
cgv2 early on was not usable. It lacked so much of the functionality that v1 had.
It also didn't buy much, particularly because most Docker users aren't manually managing cgroups themselves.
Docker literally sold a private registry product. It was the first thing Docker built and sold (and no, it was not late, it was very early on).
For the record, cpuguy83 was in the trenches at docker circa 2013; it was like him and a handful of other people working on docker when it went viral. He has an extreme insider's perspective, and I'd trust what he says.
I mean you can say that, but on the topic of rootless, regardless of "interest" at Docker, they did nothing about it. I was at Red Hat at the time, a PM in the BU that created podman, and Docker's intransigence on rootless was probably the core issue that led to podman's creation.
I've really appreciated RH's work both on podman/buildah and in the supporting infrastructure like the kernel that enables nesting, like using buildah to build an image inside a containerized CI runner.
That said, I've been really surprised to not see more first class CI support for a repo supplying its own Dockerfile and being like "stage 1 is to rebuild the container", "stage two is a bunch of parallel tests running in instances of the container". In modern Dockerfiles it's pretty easy to avoid manual cache-busting by keying everything to a package manager lockfile, so it's annoying that the default CI paradigm is still "separate job somewhere that rebuilds a static base container on a timer".
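To make the lockfile-keyed pattern concrete, here is a minimal sketch (assuming a Python app with requirements.txt as the lockfile, purely for illustration): only the lockfile is copied before the dependency install, so that layer stays cached until the lockfile itself changes.
cat > Dockerfile <<'EOF'
# copy only the lockfile first so the dependency layer is cached
# until the lockfile itself changes
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# code changes only invalidate the layers from here down
COPY . .
EOF
podman build -t myapp .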
Yeah, I've moved on from there, but I agree. There wasn't a lot of focus on the CI side of things beyond the stuff that ArgoCD was doing, and Shipwright (which isn't really CI/CD focused but did some stuff around the actual build process, but really suffered a failure to launch).
My sense is that a lot of the container CI space just kind of assumes that every run starts from nothing or a generic upstream-supplied "stack:version" container and installs everything every time. And that's fine if your app is relatively small and the dependency footprint is, say, <1GB.
But if that's not the case (robotics, ML, gamedev, etc.) or especially if you're dealing with a slow, non-parallel package manager like apt, that upfront dependency install starts to take up non-trivial time, which is particularly galling for a step that container tools are so well equipped to cache away.
I know depot helps a bunch with this by at least optimizing caching during build and ensuring the registry has high locality to the runner that will consume the image.
I’d be surprised if the systemd thing was not also true.
I think it's quite likely Docker did not have a good handle on the "needs" of the enterprise space. That is Red Hat's bread and butter; are you saying they developed all of that for no reason?
I don't feel like RedHat had to do anything to sell support contracts in this case, because that was already their business.
All they had to do was say they'll include container support as part of their contracts.
What they did do, AIUI based on feedback in the oss docker repos, is that those contracts stipulated that you must run RHEL in the container and the host, and use systemd in the container, in order to be "in support".
So that's kind of a self-feeding thing.
> I don't feel like RedHat had to do anything to sell support contracts in this case, because that was already their business. All they had to do was say they'll include container support as part of their contracts.
Correct. Maybe starting with RHEL7, Red Hat took the stance that "containers are Linux". Supporting Docker in RHEL7 was built in as soon as we added it to the ‘rhel-7-server-extras-rpms’ repo. The containers were supported as "customer workloads", while the docker daemon and cli were supported as part of the OS.
> What they did do, AIUI based on feedback in the oss docker repos, is that those contracts stipulated that you must run RHEL in the container and the host, and use systemd in the container, in order to be "in support". So that's kind of a self-feeding thing.
Not quite right. RHEL containers (and now UBI containers) are only supported when they run on RHEL OS hosts or RHEL CoreOS hosts as part of an OpenShift cluster. systemd did not work (well?) in containers for a while and has not ever been a requirement. There are several reasons for this RHEL-containers-on-RHEL/RHCOS requirement. For one, RHEL/UBI containers inherit their subscription information from their host. This is much like how RHEL VMs can inherit their subscription if you have virtualization host-based subscriptions. If containers weren't tied to their host, then by convention, each container would need to subscribe to Red Hat on instantiation and would consume a Red Hat subscription instance.
I was an early container adopter at a large RHEL shop and they absolutely required us to use their forked version of docker for the daemon, and RHEL-based images with systemd.
This was mostly so containers could register with systems manager and count against our allowed systems.
We ignored them because it was so bad and buggy. This is when I switched to CoreOS for containerized workloads.
When Docker was new I had a really bad ADSL connection (2Mbps) and couldn't ever stack up a containerized system properly because Dockerhub would time out.
I did large downloads all the time; I used to download 25GB games for my game consoles, for instance. I just had to schedule them and use tools that could resume downloads.
If I'd had a local docker hub I might have used docker, but because I didn't, it was dead to me.
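For what it's worth, running a local registry nowadays is a one-liner; a rough sketch with podman, purely illustrative:
podman run -d -p 5000:5000 --name registry docker.io/library/registry:2
podman pull docker.io/hello-world
podman tag docker.io/hello-world localhost:5000/hello-world
podman push --tls-verify=false localhost:5000/hello-world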
OCI container runtimes like runc are "container runtimes" in the sense of the OCI runtime spec [2].
Basically, docker started out using lxc, but wanted a Go-native option and wrote runc. If you look at [0] you can see how it actually instantiates the container. Here is a random blog that describes it fairly well [1].
crun is the podman-related project written in C, which is more efficient than the Go-based runc.
You can try this even as the user nobody 65534:65534, but you may need to make some dirs, or set envs.
Here is an example pulling an image with podman to make it easier, but you could just make an OCI spec bundle and run it:
mkdir hello
cd hello
podman pull docker.io/hello-world
# export the created container's root filesystem to a tarball and unpack it
podman export $(podman create hello-world) > hello-world.tar
mkdir rootfs
tar -C rootfs -xf hello-world.tar
# generate a rootless OCI spec (config.json), then point its process args at the image's /hello binary
runc spec --rootless
sed -i 's;"sh";"/hello";' config.json
runc run container1
# expected output:
Hello from Docker!
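Since crun implements the same OCI runtime spec, the same bundle should run under it unchanged (assuming crun is installed), e.g.:
crun run container2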
runc doesn't support any form of constraint, like a bounding set on seccomp, selinux, apparmor, etc., but it will apply profiles you pass it.
Basically it fails open, and with the current state of apparmor and selinux it is trivial to bypass the minimal userns restrictions they place.
Historically, before rootless containers this was less of an issue, because you had to be a privileged user to launch a container. But with the holes in the LSMs, no ability to set administrative bounding sets, and the reality that none of the defaults constrain risky kernel functionality like vsock, openat2 etc... there are a million ways to break netns isolation etc...
Originally the docker project wanted to keep all the complexity of mutating LSM rules etc. in containerd, and they also fought even basic controls like letting an admin disable the `--privileged` flag at the daemon level.
Unfortunately due to momentum, opinions, and friction in general, that means that now those container runtimes have no restrictions on callers, and cannot set reasonable defaults.
Thus now we have to resort to teaching every person who launches a container to be perfect and disable everything, which they never do.
If you run a k8s cluster with nodes on VMs, try this for example. If it doesn't error out, any pod can talk to any other pod on the node, with a protocol you aren't logging and which has limited ability to be logged anyway. (That is, if your k8s nodes are running systemd v256+ and you aren't using containerd, which blocked vsock; cri-o, podman, etc. don't, at least as of a couple of weeks ago.)
socat - VSOCK-LISTEN:3000
You can also play around with other af_families, as IPX, AppleTalk, etc. are all available by default, or see if you can use openat2 on some file in /proc to break out.
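For what it's worth, one way to close just the vsock hole from the caller side, assuming your runtime honors a custom seccomp profile, is to deny socket() for the AF_VSOCK family (40). A minimal sketch, not a vetted hardening profile:
cat > no-vsock.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["socket"],
      "action": "SCMP_ACT_ERRNO",
      "args": [
        { "index": 0, "value": 40, "op": "SCMP_CMP_EQ" }
      ]
    }
  ]
}
EOF
# any workload run with this profile now gets EPERM when it opens an AF_VSOCK socket
podman run --rm --security-opt seccomp=no-vsock.json docker.io/library/alpine true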
I can't help but see a parallel with some of the entertainment franchises in recent years (Star Wars, etc.) -- where a company seems to be allergic to taking money by giving people what they want, and instead insists on telling people what they should want and blaming them when they don't.
Yes; it's really notable that corporates and other support companies (e.g. Red Hat) don't want to start down the path of NIH, and will go to significant effort to avoid it. However, once they have done it, it is very hard to make them come back.
> No, it was never intended like that. That some people built infra/businesses around it is something completely different, but swarm was never intended to be a kubernetes contender.
That would be news to the then Docker CTO, who reached out to my boss to try to get me in trouble, because I was tweeting away about [cloud company] and investing heavily in Kubernetes. The cognitive dissonance Docker had about Swarm was emblematic of the missteps they took during that era where Mesos, Kube and Swarm all looked like they could be The Winner.
Only because, with Google open sourcing Kubernetes, it was a decision between still being able to play the game or being left completely out; helping with OCI was a survival decision.
As proven later when Kubernetes became container runtime agnostic.
I think what Docker should have done is charge for Docker Desktop from the start... even $5/mo/user as a discounted rate for non-open-source usage... and similarly for container storage, have had a commercial offering for private containers from very early on.
The former felt like a rug pull when they did it later, and the latter should have been obvious from the start. But it wasn't there in the beginning, too many alternatives from every cloud provider popped up to fill that gap, and by then it was too late.
There were a lot of cool ideas, and I think early on they were more focused on the cool ideas and less on how to make it a successful, long-lived business that didn't rely on VC funding and an exit strategy in order to succeed.
I have to agree. Of all the per-seat subs that my employer has, Docker Desktop is the one whose value is most easily provable. I tend to agree that making Docker Desktop a commercial product way back then would probably have been good. The only hurdle would be figuring out enough of a 'free tier' to get developers into it, get them addicted, and have them demand a license, but not so much that everyone just uses the "free tier" or "personal" edition indefinitely - which I suspect many, many companies' developers do to this day with Docker Desktop, with their employers' tacit consent.
This "free to start using" move is best exemplified by Slack, which ended up taking over many companies guerrilla-style. They did a pretty good job of pivoting companies to paying, too.
> No, everything was already open source, and others had done it before too; they just made it in a way that a lot of "normal" users could start with it. Then they waited too long, and others created better products or their own.
Yes. It was a helpful UI abstraction for people uncomfortable with lower level tinkering. I think the big "innovations" were 1) the file format and 2) the (free!) registry hosting. This drove a lot of community adoption because it was so easy to share stuff and it was based on open source.
And while Docker the company isn't the behemoth the VCs might have wanted, those contributions live on. Even if I'm using a totally different tool to run things, I'm writing a Dockerfile, and the artifacts are likely stored in something that acts basically the same as Docker Hub.
Arrogance was what actually killed them. They picked fights with Google and RedHat, and then showing up at conferences with shirts that said "we don't accept pull requests" tipped the scales, so RedHat and Google both went their own way and their technology got pushed out of two of their biggest channels.
(I don't know of any T-shirts saying "we don't accept pull requests". That sounds made up. We very much did accept pull requests... a great many of them).
For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.
I'm so surprised to see that folks rediscover Unison in 2026 :) It is a unique piece of software which has been around for 20 years or so. The two way sync is great but also a bit scary since it can wipe files.
I mean, isn't that just about what happened to Docker?
They wrote a really nice wrapper around cgroups/ns/tarball hosting and then struggled to monetize it because a large portion of their users are exactly the kind of people who could set up a curlftpfs document cloud.
> but swarm was never intended to be a kubernetes contender.
Your comment is accurate for the original Swarm project, but a bit misleading regarding Swarm mode (released later on and integrated into docker).
I have worked on the original Swarm project and Swarmkit (on the distributed store/raft backend), and the latter was intended to compete with Kubernetes.
It was certainly an ambitious and borderline delusional strategy (considering the competition), but the goal was to offer a streamlined and integrated experience so that users wouldn't move away from Docker and use Swarm mode instead of Kubernetes (with a simple API, secured by default, just docker to install, no etcd or external key value metadata store required).
You can only go so far with a team of 10 people versus the hundreds scattered across Google/RedHat/IBM/Amazon, etc. There were so many evangelists and tech influencers/speakers rooting for Kubernetes already that reversing that trend was extremely difficult, even after initiating sort of a revolution in how developers deployed their apps with docker. The narrative that cluster orchestration was Google's territory (since they designed Borg, which was used at massive scale) was too entrenched to be challenged.
Swarm failed for many reasons (it was released too soon, with a buggy experience and in an incomplete state, lacking a lot of the features k8s had, but also too late in terms of timing relative to k8s adoption). However, the goal for "Docker Swarm mode" was to compete with Kubernetes.
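For anyone who never tried it, the streamlined experience looked roughly like this; a generic illustration, not taken from any docs:
docker swarm init
docker service create --name web --replicas 3 -p 8080:80 nginx
docker service ls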
I love Kubernetes, but it's still a big leap from docker-compose to k8s, and swarm filled that niche admirably. I'm still in that niche -- k8s is overkill for every one of my projects -- but k3s is pretty lightweight, easy to install, and there's a lot of great tooling for k8s I can use with it. Still wish there were something as simple as "docker-compose plus a couple bits" that was swarm mode -- I'm drowning in YAML files!
Thanks for chiming in, I was questioning that assertion myself.
I think the problem was giving up on swarm TBH. At some point it was clear k8s would be dominant, but there was still room for that streamlined and integrated experience.
I joined them after they were clearly in decline and half of the office was empty. Contrary to some of the comments here, there were enterprise products (Docker EE, private registry, orchestration) and a very large sales team.
There were also a lot of talented, well-paid engineers working on open source side projects with no business value. It just wasn't a very well-run company. You can't take on half a billion dollars in VC just to sell small enterprise support contracts.
> No, everything was already open source, and others had done it before too; they just made it in a way that a lot of "normal" users could start with it. Then they waited too long, and others created better products or their own.
They made a unique contribution which was significant in its own way. It doesn't matter in the end that others tried to do related things before and failed to get traction. They could have made Docker more restrictive to make money, and they didn't. Open source is hard to actually make money with, unfortunately for those of us who enjoy it.