It’s astonishing to me that people would argue that Docker is somehow the simpler solution. Some would happily maintain a container registry, constantly worry about the order of operations in their Dockerfile, create a build pipeline in a format incompatible with non-Docker use cases, and spend time understanding how Docker's bridge interacts with multiple network controllers, rather than learn how their operating system works.
There's docker and there's docker. Docker as a local container builder/runner is simple. Docker as a deployment / networking / scheduling service is not. Not everyone is using Docker for more than "run this script in an environment I want".
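For that simple case it really is a one-liner (the image tag and script name here are just placeholders):

$ docker run --rm -v "$PWD":/work -w /work python:3.12-slim python script.py

No registry, no orchestration; the container is thrown away as soon as the script exits.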
The issue with curl|bash is not just the security (you can run curl|bash as an unprivileged user).
It's also the fact that it generally doesn't do anything to integrate with your system: it just pulls all its dependencies into a folder and never updates them afterward.
I'm astonished too. The section on use cases doesn't help to justify why we need all this complexity. Maybe they are trying to compensate for the limitations of the Unix permission system (having only user, group, and world is lacking in a lot of situations by today's standards).
The other solution is to implement a totally different mechanism (ACLs plus something else, capabilities), but then you lose backward compatibility.
> having user, group and world is lacking in a lot of situations by today standards
A process has supplemental group membership. If you were only limited to the above, then, yes, you would need to share resources through world or utilize a special broker if you wanted multiple privilege domains for a service.
But with supplemental groups you can be as fine-grained as you want. Just create a group for each domain with a unique membership set and make that a supplemental group for each user with access to that domain. When you want to share a file, just chown and chmod appropriately.
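A rough sketch of what that looks like with standard Linux userland tools (the group, user, and path names are made up):

$ sudo groupadd projdata                        # one group per privilege domain
$ sudo usermod -aG projdata alice               # grant alice access via a supplemental group
$ sudo usermod -aG projdata reports-svc         # ...and the service account that needs it
$ sudo chgrp projdata /srv/projdata/report.db   # share one file with that domain
$ sudo chmod 0660 /srv/projdata/report.db       # group can read and write; world gets nothing

Anything more fine-grained is just another group with a different membership set.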
To see how this works in practice, look at BSDAuth (OpenBSD's alternative to PAM). Rather than using dynamically loadable modules for authentication methods--which effectively means that any program that wants to use authentication services needs root permissions--BSDAuth uses setuid and setgid helpers in /usr/libexec/auth.
$ ls -lhd /usr/libexec/auth/
drwxr-x--- 2 root auth 512B Mar 24 13:23 /usr/libexec/auth/
$ doas ls -lh /usr/libexec/auth/
total 372
-r-xr-sr-x 4 root _token 21.3K Mar 24 13:23 login_activ
-r-sr-xr-x 1 root auth 9.0K Mar 24 13:23 login_chpass
-r-xr-sr-x 4 root _token 21.3K Mar 24 13:23 login_crypto
-r-sr-xr-x 1 root auth 17.2K Mar 24 13:23 login_lchpass
-r-sr-xr-x 1 root auth 9.0K Mar 24 13:23 login_passwd
-r-xr-sr-x 1 root _radius 17.1K Mar 24 13:23 login_radius
-r-xr-xr-x 1 root auth 9.0K Mar 24 13:23 login_reject
-r-xr-sr-x 1 root auth 9.0K Mar 24 13:23 login_skey
-r-xr-sr-x 4 root _token 21.3K Mar 24 13:23 login_snk
-r-xr-sr-x 4 root _token 21.3K Mar 24 13:23 login_token
-r-xr-sr-x 1 root auth 21.0K Mar 24 13:23 login_yubikey
In this scheme, for any user whom you want to grant permission to test basic system authentication, you add the auth group to their supplemental groups. Et voilà, they can now use the framework, but without root permissions. For mechanisms that you may want to grant separately, you use a separate group (e.g. _radius).
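Concretely, granting that access is just group membership: add the login name to the auth line in /etc/group, or use usermod(8). The GID and user name below are illustrative:

$ grep '^auth:' /etc/group
auth:*:11:alice

That user can now exercise the login_* helpers through the framework without ever being root.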
Like many bad ideas, Docker basically won because you can use it to isolate and package pre-existing, broken software, and because it's easier to find examples of how to [abuse] Docker for sharing files across security domains than of how to use supplemental groups to do the same.
I just recalled an interesting difference between BSD and SysV (including Linux[1]) regarding group ownership.
On BSD when you create a file the group owner is set to the group owner of the directory. From the BSD open(2) manual page:
When a new file is created it is given the group of the
directory which contains it.
But on Linux the [default] behavior is to set the group owner to the creating process' effective group. From the Linux open(2) man page:
The group ownership (group ID) is set either to the
effective group ID of the process or to the group ID of the
parent directory (depending on file system type and
mount options, and the mode of the parent directory, see
the mount options bsdgroups and sysvgroups described in
mount(8)).
The BSD behavior is more convenient when it comes to using shared supplemental groups, because you can often invoke existing programs with a umask of 0002 or 0007 and the files they create (without subsequently modifying permissions) will usually have the desired permissions--specifically, writable by any process whose supplemental groups include the group of the directory they're located in. For example, with BSD semantics Git technically wouldn't have needed a special core.sharedRepository setting. But because of SysV semantics on Linux, Git had to be modified to explicitly read the group of the parent directory and explicitly change the group owner of newly created files.
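On Linux you can opt into the BSD-style behavior per directory by setting the setgid bit on the directory, which is the usual trick for shared project trees (the group and path below are made up):

$ sudo chgrp projdata /srv/shared
$ sudo chmod 2775 /srv/shared        # setgid on a directory: new files inherit its group
$ umask 0002                         # new files are group-writable by default
$ touch /srv/shared/scratch.txt
$ ls -l /srv/shared/scratch.txt
-rw-rw-r-- 1 alice projdata 0 Mar 24 13:23 /srv/shared/scratch.txt

Without the setgid bit (or the bsdgroups mount option mentioned above) the file would instead get alice's effective group.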
To me, one of the most valuable parts of Docker is almost never mentioned. The fact that a Dockerfile exactly describes the steps necessary to get from a fresh OS image to a working instance of whatever you're containerizing is one of its main selling points.
To me that's what other technologies like rkt seem to miss or at least choose not to focus on.
They describe how to do it in an impure, non-reproducible way.
In that respect they're not much different from Nix expressions, Homebrew formulas, and the like.
Unlike Nix expressions (or RPM spec files, or various other technologies), the Dockerfile format never concerned itself with clearly defining the hashes of inputs and dependencies, reproducibility, working without network access on build farms, and so on.
Sure, it ended up being popular and usable, but at the cost of taking the art of describing how software is installed back in time.
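To make that concrete, here's the sort of Dockerfile I mean (the base image tag, packages, and URL are made up for illustration); every step depends on network state at build time, so two builds of the same file can produce different images:

# "latest" is a floating tag, not a content hash
FROM debian:latest
# package versions are whatever the mirrors serve on build day
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates
# pulls an unpinned script over the network
RUN curl -fsSL https://example.com/install.sh | sh

A Nix expression or an RPM spec file at least gives you a designated place to pin sources and their hashes; nothing in the format above even asks for one.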
It's not a coincidence that most Docker containers are built from Debian- or Red Hat-derived images. It's because those distributions have proper packaging systems which are consistently applied, which is what allows them to build and maintain tens of thousands of projects in a reusable format. The sets of "correct" and "usable" overlap substantially; certainly much more so than "incorrect" and "usable".
> Anecdotally, I've tried to learn Nix for several years.
Out of interest, what did you find difficult? I ask because I spent last weekend setting up NixOS, and the only things I found difficult were discoverability and documentation (I spent a lot of time just reading through the source code).
Edit: Thinking about it, it's very possible that this is just a case of me not knowing what I don't know, since I'm not currently trying to do anything particularly complex.
Not sure if this is exactly what you meant by discoverability, but this page helps with at least seeing what configuration options are available: https://nixos.org/nixos/options.html# -- no other distro has anything similar as far as I know.
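If you're already running NixOS, the same information is available offline as well (the option name below is just an example):

$ man configuration.nix                    # all options, with descriptions and defaults
$ nixos-option services.openssh.enable     # inspect one option as evaluated from your configuration.nix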
The language is not too bad, but guidance on how it should be used is the missing part: something like a cookbook or a properly populated Stack Overflow site.
When I was on Linux, I liked to distro-hop a few times a year. Whenever I came back to NixOS, I felt that I had to relearn everything, and that there was nothing to grasp onto. I always take that as a bad smell.