this is something I hate about the "unix design philosophy". when you start digging deeper, you realize that "everything is a file" is true, except for network connections, and A/V devices, and input peripherals, and really the only things that actually are files are files. everything else is allllmost a file, except in one little weird way, or many big big ways...
That's not a problem with the philosophy, it's a problem with its implementation. And that is caused by the myriad people and entities that have been responsible for the chaotic evolution of the various parts that make up modern Unix systems.
Design by effectively nobody can be at least as bad as design by committee.
As others pointed out, Plan 9 is much closer to a proper realization of the everything is a file design philosophy. Unsurprising, since it was grown entirely within Bell Labs under the supervision of the original Unix team. It's unfortunate it hasn't reached a point of being ready for mass adoption.
> As others pointed out, Plan 9 is much closer to a proper realization of the everything is a file design philosophy. Unsurprising, since it was grown entirely within Bell Labs under the supervision of the original Unix team. It's unfortunate it hasn't reached a point of being ready for mass adoption.
I don't think it's unfortunate, having used it for a month straight I'm not convinced at all it's the right thing, and in fact, is exactly the opposite direction from where I think we should be going (as far away from the filesystem as possible).
My technical argument is that the filesystem is too complicated an abstraction for questionable benefit. Yes, sockets are really cool as files, but outside of the unix shell that isn't any more useful.
I suspect that Plan 9 could be a good dev environment, but is not a great deployment platform—it just doesn't offer enough better than developed *NIXes to port code over.
I don't entirely agree with that characterization. I believe Plan 9 offers fundamental technical advantages that would interest a small but significant and sustainable population of users and developers, and that it's not the inertia of other platforms it can't overcome, but itself. Its user experience is utter crap: a combination of poor design (or the lack of it) and the annoying idiosyncrasies of some of its developers.
It's a research platform, and no real effort has been made to turn it into something more practical. If someone were to pick it up and turn it into something people wouldn't hate using, I think we'd see a community on par with NetBSD or OpenBSD.
Agree completely; the problem is that a clever-sounding slogan like "everything is a file" is so appealing to people that they want to believe it even if it's not actually true. I've run into this before: https://news.ycombinator.com/item?id=2397039
Then there's the whole mess of pseudo-terminals.
If you are like me, you might wonder at first why pseudo-terminals are necessary. Why not just fork a shell and communicate with it over stdin/stdout? Well, the bad news there is that UNIX special-cases terminals. They're not just full-duplex byte streams; they also have to support some special system calls for things like setting the baud rate. And you might think that a change of window size would be delivered to the client program as an escape sequence. But no, it's delivered as a signal (and incidentally, sent from the terminal emulator as an ioctl()).

Of course, you can't send ioctls over a network, so sending a resize from a telnet client to a telnet server is done in a totally different way (http://www.faqs.org/rfcs/rfc1073.html).
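The signal-plus-ioctl dance is easy to see for yourself. Here's a minimal sketch in Python on a Unix system (the variable names are mine): create a pty pair, set the window size with TIOCSWINSZ the way a terminal emulator does on resize, and read it back with TIOCGWINSZ. No escape sequence ever appears in the byte stream — the size lives in the kernel's tty driver and is only reachable via ioctl().

```python
import fcntl, os, pty, struct, termios

# Create a pty pair; the "window size" lives in the kernel tty driver,
# not in the data stream.
master_fd, slave_fd = pty.openpty()

# Set a 24x80 window on the pty (what a terminal emulator does on resize).
winsz = struct.pack("HHHH", 24, 80, 0, 0)
fcntl.ioctl(master_fd, termios.TIOCSWINSZ, winsz)

# The other side reads it back with another ioctl -- nothing was ever
# written into the byte stream itself.
raw = fcntl.ioctl(slave_fd, termios.TIOCGWINSZ, b"\x00" * 8)
rows, cols, _, _ = struct.unpack("HHHH", raw)
print(rows, cols)  # 24 80

os.close(master_fd)
os.close(slave_fd)
```

(In a real terminal the kernel would also deliver SIGWINCH to the foreground process group at this point.)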
so I think that stating "everything is a file" is fine in the abstract, but we have a lot of evidence that implementing a system where everything actually is a file is just too hard. If the philosophy is good, why do its adherents produce so much that is crap?
Well, for the PTY example above, the reason is backward compatibility. At some point we need to decide to shed the limitations that backward compatibility (with, e.g., bash) brings and clean up the implementation.
I think it's interesting more in the sense of understanding what Unix is today. It doesn't really read as an argument against a Plan9-like realization of everything-is-a-file.
I'd summarize those posts in two main points:
1) Linux isn't a research project.
2) Bringing a Plan 9-like realization of everything-is-a-file into modern Unix really just creates an even uglier chimera than Unix already is.
For a simpler and weirder example, consider the TCP accepting socket. I learned socket programming in the early '90s and accepting sockets were just the way things worked; I never questioned them. A few years ago I had the occasion to teach someone socket programming, and accepting sockets were a giant W-T-F for me as I tried to explain how things worked.
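For anyone who hasn't hit that WTF: the accepting socket is a socket you never read or write; its only job is to mint new sockets. A hedged sketch in Python (the names are mine, not any standard API):

```python
import socket, threading

# The listening ("accepting") socket never carries data itself.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
server.listen(1)
port = server.getsockname()[1]

def client():
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"hello")

t = threading.Thread(target=client)
t.start()

# accept() returns a brand-new socket (a new fd) for the conversation.
conn, addr = server.accept()
is_new_fd = conn.fileno() != server.fileno()

data = b""
while len(data) < 5:            # loop: TCP may deliver in pieces
    chunk = conn.recv(5 - len(data))
    if not chunk:
        break
    data += chunk

print(is_new_fd, data)  # True b'hello'
conn.close()
server.close()
t.join()
```

The weird part, pedagogically, is that bind/listen/accept all operate on a "socket" that behaves nothing like the sockets data actually flows through.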
I don't think "everything is a file" was ever supposed to mean "everything works just like a disk file", or even "everything has a name in the filesystem". It really meant "every kernel-managed resource is accessed through a file descriptor". Of course, even this formulation isn't always true (network interfaces and processes are the classic exceptions), but it's true in a lot more cases. In this sense, "file descriptors" are really just the UNIX name for what Windows calls a "handle".
This is still quite a good paradigm to follow - the semantics for waiting on, duplicating, closing and sending file descriptors to other processes are generally well-defined and well understood. For example, Linux exposes resources like timers, signals and namespaces through file descriptors.
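Those uniform semantics are the whole point: the same wait/duplicate/close calls work on any descriptor. A small illustration in Python using a plain pipe (variable names are mine; the same calls would work unchanged on a socket, or on a Linux timerfd/signalfd):

```python
import os, select

r, w = os.pipe()

# Nothing written yet, so the read end is not ready.
ready_before, _, _ = select.select([r], [], [], 0)

os.write(w, b"ping")
ready_after, _, _ = select.select([r], [], [], 0)

# dup() works on any fd; the duplicate refers to the same pipe.
r2 = os.dup(r)
dup_data = os.read(r2, 4)

print(ready_before, r in ready_after, dup_data)  # [] True b'ping'

for fd in (r, r2, w):
    os.close(fd)
```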
Look into Plan 9's take on the file metaphor. I'm not suggesting they got everything right, but the concept really can be made more general than plain old Unix makes it seem.
Well, you're right, not everything is a file. We can't treat a socket just like a file because the network is less reliable/fast/whatever than the file system. You can't treat a pipe just like a file because it needs someone reading it if you want to write to it. And there are special files accepting weird ioctls because they have weird capabilities that need to be exposed, etc.
But still, all these do have things in common: mainly, they can be read from and written to, and the 'file' is an abstraction that represents just that. To me this is very much like an object-oriented concept: polymorphism applied to OS resources. If several resources have part of their interfaces in common, then exposing it as a higher-level abstraction can help simplify programs, which then don't have to worry about type-specific details if they don't need to.
Anyway, even if nothing really is a file, I often find it useful to think of everything as one, as long as I don't forget that it actually isn't once the abstraction stops being valid. And I'd guess it also helps in designing and implementing the system itself.
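That polymorphism is concrete: one helper that takes "any fd" works on a disk file, a pipe, and a socket alike. A sketch in Python (the helper and names are my own illustration, not a standard API):

```python
import os, socket, tempfile

def send_bytes(fd, data):
    # One write() syscall, whatever the descriptor refers to.
    os.write(fd, data)

# A real file...
f = tempfile.TemporaryFile()
send_bytes(f.fileno(), b"hi file")

# ...a pipe...
r, w = os.pipe()
send_bytes(w, b"hi pipe")

# ...and a socket, all through the same interface.
a, b = socket.socketpair()
send_bytes(a.fileno(), b"hi sock")

f.seek(0)
file_data = f.read()
pipe_data = os.read(r, 7)
sock_data = b.recv(7, socket.MSG_WAITALL)
print(file_data, pipe_data, sock_data)  # b'hi file' b'hi pipe' b'hi sock'
```

The type-specific details (seek offsets, backpressure, connection state) only matter when you step outside the shared read/write interface.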
I thought 'everything is a file' mostly referred to the fact that you can read/write every file descriptor, regardless of whether it points to a socket, a pipe, etc., or an actual file.
Before that, you had completely different syscalls depending on the device you were using.