
I somehow feel compelled to point out that this idea of union/overlay FS layers has nothing to do with containers per se. On the other hand, it is critical to why containers got popular, since it is what makes the whole thing efficient both in terms of hardware resources and developer time.
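For the curious, overlay mounts are usable entirely on their own. Here's a toy sketch of the merged-view semantics using plain directories (no root or kernel overlayfs needed for the emulation; the real mount command is in a comment):

```shell
# Toy model of union-mount semantics using plain directories.
# A real overlay mount (root required) would be:
#   mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged
mkdir -p lower upper merged
echo "from lower" > lower/a.txt
echo "from lower" > lower/b.txt
echo "from upper" > upper/b.txt       # the upper layer shadows the lower one
# Emulate the merged view: copy the lower layer first, then the upper on top.
cp -r lower/. merged/
cp -r upper/. merged/
cat merged/a.txt    # prints "from lower" (file exists only in the lower layer)
cat merged/b.txt    # prints "from upper" (shadowed by the upper layer)
```

The point being: none of this involves a container runtime, just directories stacked on top of each other.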


Yeah, the title should be "unionfs: a kernel feature having nothing whatever to do with containers, and some ways to use it", but I guess that's too long :-) . The problem is that there's no central marketing department for Linux that can even tell us what "containers" means. There are lots of people who think they are "using containers" who don't use this style of mount, and lots of people using this style of mount who don't consider themselves container users.


They really don't, and it was funny during that period when you'd see Dockerfiles with all the commands chained into a single invocation to avoid "bloating" the resulting image with unnecessary intermediate products that ended up deleted.
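The pattern looked something like this (an illustrative Dockerfile fragment; the specific packages and paths are made up for the example). Since each RUN produces its own layer, files deleted in a later RUN still occupy space in the earlier layer's tarball, so everything gets crammed into one step:

```dockerfile
# One RUN = one layer, so temporary files must be created and deleted
# in the same step; splitting this into separate RUNs would leave the
# build tools and apt lists baked into intermediate layers forever.
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential \
    && make -C /src && make -C /src install \
    && apt-get purge -y build-essential \
    && rm -rf /var/lib/apt/lists/*
```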

Maybe it's out there and I've just missed it, but I really wish there were richer ways to build up a container FS than just the usual layers approach with sets of invocations to get from one layer to the next, especially when it's common to see invocations that mean totally different things depending on when they're run (e.g. "apt update"), and then special cache-busting steps like printing the current time.

I know part of this is just using a better package manager like Nix, but I feel like on the Docker side you could do interesting stuff like independently running steps A, B, and C against some base image X, and then creating a container whose FS is X+A+B+C, even though the original source containers were X+A, X+B, and X+C.
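Interestingly, the kernel side can already express this kind of composition: overlayfs accepts multiple lower directories, so X+A+B+C is a single mount if you have the individual layers on disk. A toy sketch with plain directories (the real mount command is in a comment; no root needed for the emulation):

```shell
# Independent layers A, B, C built against base X, then composed.
# With kernel overlayfs this would be one mount (leftmost lowerdir
# wins on conflicting paths):
#   mount -t overlay overlay -o lowerdir=C:B:A:X composed
mkdir -p layerX layerA layerB layerC composed
echo "base"  > layerX/base.txt
echo "via A" > layerA/a.txt
echo "via B" > layerB/b.txt
echo "via C" > layerC/c.txt
# Emulated composition via ordered copies:
for layer in layerX layerA layerB layerC; do cp -r "$layer"/. composed/; done
ls composed    # all four files are present in the merged view
```

The Docker-side plumbing to build and cache A, B, and C independently is the missing piece, not the filesystem.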


Not sure if you're aware since you already mentioned Nix, but Nixpkgs has a nice function to export "graph-layered" Docker images[0]. There is some overhead in the conversion, but the rest of the build can be parallelized by Nix as usual.

[0]: https://grahamc.com/blog/nix-and-layered-docker-images


Fantastic, this is exactly what I was looking for, thanks for the pointer!

It looks like the layer limitation comes from the underlying union filesystem stuff and not from anything inherent in Docker itself. I wonder if it would be possible to build a new filesystem driver for Docker that could serve up an infinite ecosystem of immutable nix packages without having to actually unpack them. Whether such a thing would actually be of value would probably depend a lot on the use case, but I could imagine a CI type scenario where you wanted to have a shared binary cache and individual nodes able to come up very fast without having to unpack the pkgs and prepare a full rootfs each time.


That's already possible to some extent by combining multi-stage builds and the experimental BuildKit engine, which can run independent steps concurrently.
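A minimal multi-stage sketch (image and stage names are just placeholders): with BuildKit enabled (e.g. DOCKER_BUILDKIT=1), stepA and stepB have no dependency on each other and can build concurrently, and the final stage picks out only the artifacts it needs:

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:stable-slim AS base

FROM base AS stepA
RUN echo "built A" > /a.txt

FROM base AS stepB
RUN echo "built B" > /b.txt

# The final image pulls artifacts from both independent stages.
FROM base
COPY --from=stepA /a.txt /
COPY --from=stepB /b.txt /
```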


I’m still researching, but I got the impression that Buildah from Red Hat can do this.


As I understand it, containers are just a set of concepts and kernel features put together to provide an abstraction that's not that different from virtual machines for common use cases.


It is a way to create a reasonably isolated environment without the VM overhead. On Linux it is usually called a container and implemented in terms of clone() and cgroups; BSDs have jails, Solaris has zones, and on Plan 9 it is a trivial concept. What is special about the Linux and Plan 9 cases is that the implementation is built from generic mechanisms that are not strictly tied to this use case.
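Those building blocks are visible on any Linux box with no container runtime involved: every process already sits inside a set of namespaces and is accounted to a cgroup, and tools like unshare(1) just create fresh ones.

```shell
# Every process already has namespaces; containers just create new ones
# (via clone() flags like CLONE_NEWPID, or the unshare/setns syscalls).
ls -l /proc/self/ns/
# ...and every process is already tracked by the cgroup hierarchy:
head -3 /proc/self/cgroup
```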

As a side note: I’m somewhat bewildered by the docker-controlled-by-kubelet-on-systemd architectures as there is huge overlap in what these layers do and how they are implemented.


I'm not an avid container user, but I think at least part of the popularity for certain dev/ops/CI use cases is how easy it is to get stuff in and out of them using bind mounts and the like. For example, having a run environment that's totally different from a build environment. The only way to do this with conventional VMs is to access them over some in-band protocol like SSH (or to have to re-build the run rootfs and reboot that machine, which is typically extremely slow).


In common usage it seems "containers" is synonymous with "what Docker does". Meanwhile, "Docker" itself has blurred in meaning, since various other things have been called Docker, such as the non-native product being called "Native Docker", or whatever "Docker Enterprise" is (impossible to tell from the landing page description).


True, you can use other stacking filesystems with Docker (I believe it had/has a ZFS driver at one point?). The examples she shows in the comic are just about the filesystem and leave out the Docker pieces, so I'm wondering if this is just one part of a series.



