The thing is, many aspects of X11 are hardcoded into the protocol.
For example, there is only one coordinate space in the X11 world, meaning 1 logical pixel always corresponds to N "real" pixels (typically N=1).
But nowadays we have this thing called HiDPI, where 1 pixel can mean 1.5 real pixels on one screen (150% scaling), 2 real pixels on the second (200% scaling), and 1 real pixel on the third (100% scaling). Mixed DPI means we need multiple coordinate systems, but adding that to the protocol would break existing X11 programs, and if you force your way through the problem anyway, what you end up with is essentially Wayland.
Existing X11-compatible systems "fix" this by:
1. assuming N = scale factor, thus upscaling all programs. Everything becomes a blurry mess (see XWayland).
2. assuming N = 1 and letting programs upscale themselves. The problem is that older apps (e.g. GTK2) don't understand HiDPI, so you get a GUI for ants.
> But nowadays we have this thing called HiDPI, where 1 pixel can mean 1.5 real pixels on one screen (150% scaling), 2 real pixels on the second (200% scaling), and 1 real pixel on the third (100% scaling).
This is the wrong way to do scaling, and even Wayland got it wrong and has been very slowly fixing it. What you actually want is to use real pixels everywhere and just tell the client "here's your NxM window size, and it will be displayed at scale X", letting the client do the intelligent thing. A browser, a 3D game, or a PDF reader will render directly at the target resolution and scale with no fuzziness, and those are the use cases most people care about. GUI toolkits were stuck at integer scaling for a long time, hence the strange workaround of rendering at a higher resolution and then scaling down. All of this could work in X11 too (just always use native resolution for everything), but Wayland is much better anyway and works great these days.
> What you actually want to do is just use real pixels everywhere and just tell the client "here's your NxM window size and it will be displayed in a X scale"
This is already how the Wayland fractional scaling protocol works. You have a 100x60 window and Wayland tells you the scale is 1.5. You give Wayland a 150x90 framebuffer and call it a day.
The giant problem with X11 is that it is still making the assumption that everything is one giant screen -- even if you have multiple monitors and whatnot. Even the act of sending a scaling event like Wayland does is a challenge on X11.
Wayland has gotten it right in its latest protocols, but until very recently it only did integer scaling and scaled down for fractional factors. Scaling the buffer in fake coordinates is also needlessly complex: just start with the 150x90 buffer to begin with, and the scale becomes a pure UI hint. All of that would work fine in X11 precisely because there's no need for fake coordinates; all scaling is the client's job, based on a DPI or scale hint.
Wouldn't that mean every application has to re-invent (or include) some form of scaling? Isn't it better that the display manager handles that for you?
The app knows how to render its UI; the display manager can only upscale or downscale what the app has provided. You get more performance and better quality if the app renders at the needed resolution in the first place than if the display manager scales its output up or down afterwards.
The assumptions that X11 uses are incompatible with what personal computing turned into. Drivers, security, the networking part... All of it is just not how things work these days.
It is a major project and hard to push through, but it's there already, with most of the work complete by now.
And as a programmer I must say that the underlying Wayland libraries are vastly better than X11's ever were.
Maybe we should consider starting over again in that case. Wayland increasingly appears to be supported by a sunk cost fallacy itself - the A11Y issues are not fixable in the base architecture, which means that every Wayland DM implementation needs to implement them again, which in turn means lots of different APIs, incompatibilities and so on.
Go ahead then, show us how to make a good screen protocol that is as secure as Wayland, as flexible, and that supports every type of screen, from cellphones to VR to desktops. Go ahead, I'm waiting. Show these Wayland normies (X11 too, they're the same people) how to make a protocol.
No, they decided working on X11 would not rake in enough consulting money. So they engineered a completely non-working solution called Wayland that is broken by design and takes years and many consulting hours to fix.
> "broken by design" is an extraordinary claim and requires extraordinary evidence
There are many technical aspects that make Wayland broken by design (like vsync forced on by default, forced double buffering, a fucked-up event loop for single-threaded applications, or severely lacking functionality for things like window positioning or screen sharing). But the biggest problem is the design philosophy: Wayland makes life extremely easy for gatekeeping "Protocol Designers" and extremely hard for application developers.
> un-sandboxable
Not true. The quick and dirty way would be using Xephyr. Besides that, access-control hooks like XACE have been present and standardized in the X11 protocol for many years. Application developers just choose not to use them. So if X11 is not secure enough for you, blame GNOME and KDE, not X11.
> like default forced vsync, forced double buffering,
A few things:
1. Vsync-by-default is the norm. X11 was the outlier.
2. Wayland does triple buffering, not double buffering.
> a fucked up event-loop for single threaded applications
I dunno, I wrote Wayland applications and I did not notice any peculiarities w.r.t. the event loop, at least in comparison with other platforms like Win32.
You need to expand a little more.
> window positioning
I suggest reading up on the GitLab MR for the in-development window-positioning protocol. The basic TL;DR is that window positioning has certain implications for tiling window managers and other unusual desktop use cases, e.g. VR.
> screen sharing
I just shared my screen this morning.
> Not true. The quick and dirty way would be using Xephyr.
...So what you are saying is that you'd need a separate server running. Thanks for telling me that X11 is unsandboxable.
Nobody wants to keep working on X11, its design is fundamentally mismatched with the needs of a modern graphics stack and it contains a ton of legacy code and layers that make development difficult.
Check the lwn.net archives for some articles that explain this in detail.
The X11 DRI3 buffer swap mechanism is identical to the one used in Wayland. The fact that Wayland still has worse performance across the board (especially in latency metrics) might be an indicator that Wayland is fundamentally mismatched with the needs of a modern graphics stack.
You are basing your conclusions on a single blog post. Measurements from other people show that X11 wins in some cases and Wayland wins in others; there isn't a clear winner.
> X11 wins in some cases, Wayland wins in other cases
Even according to your blog post, uncomposited X11 wins in all cases (or is tied within the one-millisecond margin of error). It especially wipes the floor with the alternatives under immediate rendering.
Uncomposited anything is madness in 2024. You have the VRAM; use it and save battery life / power, which is arguably far more important. Being tear-free is a very nice bonus on top of the power savings, too.
(And no, composition is not just for 3D cube effects or anything like that. Although it certainly enables them.)
macOS has moved on, Windows has moved on, Linux should do the same.
I see your approach is quite radical. But I think the real issue here is that we expect the developers currently working on Wayland to work faster.
I've managed to remain a bystander in this debate, as I've been using X exclusively, but I do hope that one day Wayland gets all the functionality and performance of X, and that people stop getting frustrated by it.
Yes, if Red Hat wanted to, it wouldn't be "nobody". Like you say, if Red Hat wanted to, they could hire those people to work on X11. Conversely, if "Red Hat doesn't want to work on X11", then they wouldn't hire those developers, and now nobody wants to work on X11.
Right, so the problem isn't "nobody wants to work on X11", it's that "Red Hat management doesn't want to hire anybody to work on X11", and phrasing it as the former is extremely dishonest.
Why is Red Hat the one who has to pay for development? Why not any other Linux contributor? Ubuntu, Google, AMD. They don't want to hire anybody to work on X11 either. Why not you? That's why it's "everybody doesn't want to pay for development", which is shortened to "nobody wants to".
Actually, Ubuntu also paid people, but X11 is so fucked that it was never enough. That's why Google never used it, and why they adopted Wayland in ChromeOS.
Have you ever worked on a project where, after several years, it became clear that the initial assumptions were wrong and the current design is untenable? Like users asking for simple features that you know could have been implemented in just a few lines of code had the design been done differently, but that now require weeks or months of clunky hacks? Because I have, and in some cases the initial decisions were mine. I had to admit I was wrong and do what was logical to make things easier for both developers and users.
Are we talking about Wayland or X11 here? Because like the article mentioned, Wayland is now 15 years old and still sucks. Sure sounds like the Wayland design is untenable.
What's stopping you from using (and even developing) X11? It's an open competition.
Wayland might reach feature parity in 2, 5, 10, or 20 years, and I'm fine waiting, because while X11 can kinda work well now, its archaic design with 10 levels of hacks will likely make it extremely hard to support new needs (it already struggles with some existing use cases).
The real problem was that Wayland was extremely bare-bones initially and needed many extensions to approach the usefulness of X11 for desktop use: it basically just supported dispatching input and blitting rectangles. And initially, getting extensions standardized was like pulling teeth. These days, "obvious" extensions land in a matter of months (about one DE release cycle), not to mention that most already exist by now.
Maybe because no one bothered to improve it for over a decade? ;)
The Wayland developers have limited manpower. Obviously it would take a long time for them to get Wayland to reach critical mass without support from the community.