I got myself a NUC. It's been worth it: it's tiny, has 16 GB of memory, and is at 504 days of uptime.
I have servers for running VMs and containers but I felt like it would be nice to have this one as a separate device. It's also easy to plug in radio devices.
To me, this is the first time Wayland feels like it's not a waste of time. The display server does not need to have the complexity of window management on top of the surface management. I certainly share the author's sentiment:
> Although, I do not know for sure why the original Wayland authors chose to combine the window manager and Wayland compositor, I assume it was simply the path of least resistance.
I'm not sure it was least resistance per se (as a social phenomenon) so much as simply an easier problem to tackle. Or maybe the author means the same thing.
(That, and the remote access story needs to be fixed. It just works in X11. Last time I tried it with a system that had a 90-degree display orientation, my input was 90 degrees off from the real one. Now, this is of course just a bug, but I have a strong feeling that the architecture Wayland has been built on makes these kinds of bugs much easier to create than in X11.)
X11 has some tricky, impossible-to-fix (within the confines of the existing protocol) issues because of the separation between the X server and the window manager. Things like (IIRC) initial window placement and whatnot, where ideally the window manager would make choices about things before the process continues, but the reality of distributed systems makes everything hard. Combining some things into an integrated process fixes that, but comes with other issues.
There were probably other ways to fix those issues, but it would still be a fair amount of churn.
> impossible to fix (within the confines of the existing protocol)
X11's extension mechanism can be used - and has been - to enable backwards-incompatible protocol changes. E.g. BigRequest changes the length and format of every single protocol request.
Very few client libraries are only capable of speaking "the existing" protocol if you take that to mean the original unextended X11 protocol.
Adding an X11 extension that when enabled cleans up a lot of cruft would not have been a problem.
> where ideally the window manager would make choices about things before the process continues
Nothing stops you from introducing an extension that, when enabled, requires the client to wait for a new notification type before continuing, or re-defines behaviour. That said, using my own custom window manager, I don't know what you mean here. My WM does decide the initial window placement and size, and it's the client's damn problem if it can't handle a resize before I allow the window to be mapped.
The X protocol is crusty in places, but it is very flexible. People haven't fixed these things; instead they invoked compatibility hindrances that weren't real, given that their actual response was to invent an entirely new protocol with no compatibility at all.
Too late to edit, but one minor self-correction: BigRequest changes the allowed length and format of every single protocol request. For small requests they are the same, but if length is set to 0, an extra 4 octets are inserted to allow encoding a larger packet length.
I think it's quite ironic that everybody nowadays complains about Wayland and the "good old days" of X. Back in the day, everybody and their dog complained about X being "archaic", "slow", "takes 20 operations to draw a line", etc. XComposite and XRender were just hacks. Everybody hated on X and anything else was considered better.
On a tangent, also very ironic that X (the successor of Twitter) has the exact same logo as X (the window system). It's like Elon Musk just Googled for the first X logo that came along and appropriated that and nobody seems to notice or care.
I think most smaller Wayland compositors use a library (wlroots, Smithay) for most (?) of the compositing. If using a library provides a few extra options while still allowing shared code, it feels like the API boundary was put in the right place.
When you hit the 90-degree-off bug, was that a bug in the compositor or in wlroots?
Remote access on X11 is a mess and I won't miss it, at least on Wayland everyone is funneled through EGL or Vulkan and there's a reasonable path to layering remote access on top of that.
X11 remote access has worked really well for me. And the best part is that it works even when the client machine has no graphical subsystem installed. I can launch GUI applications remotely with a non-privileged account and they show up on my machine as if they were native.
Wayland can use RDP and some other remote desktop protocols, but that is not what I want: I want a window, not a desktop. There is Waypipe now; I've heard it works fine these days, but I am still doing "ssh -X", because it just works.
The problem with Wayland is that it is very much "batteries not included". To all the things that worked well in X11, the response has been "it can be done, our protocol is very flexible, ask the guys writing the compositor", not "that's how it's done". The result: Wayland is 18 years old and only starting to work well, with some pain points still remaining, and display forwarding is one of them.
It is funny you mention a "reasonable path", by the way, as that is exactly the problem: I don't want a "reasonable path", I want it to work, and after 18 years I think that is a reasonable expectation. To their credit, it seems we are getting there: Waypipe, and now window managers; we may finally have feature parity.
This is also where I'm at. I don't care what protocol or whatever is running underneath, I just want things to work, and Wayland doesn't do that. It has lately been better: previously I would try Wayland and run into problems within minutes, while recent attempts have given me hours without a problem. And as an end user I don't want to care that the problems aren't with Wayland itself but with a particular compositor/WM implementation or whatever. I want it to work, but it's only in the last year or so that basic functionality like screenshots has become reliable.
What gets me is how old Wayland is. It's now older than Linux itself was when Wayland started. It started in the era of the 2.6 kernel series, when most software was still 32-bit, systemd didn't exist, the Motorola Razr was more common than iPhones, native desktop applications were still the norm, Node.js didn't yet exist, and Google Chrome was a completely new beta browser. Wayland is now reaching feature parity and some kind of "it works out of the box, usually" state when it's from a completely different era of computing.
The nearest point of comparison is perhaps systemd, another Linux project that is very large in scope, complicated, critical, and must interface well with lots of pre-existing software. Four years after Poettering's "Rethinking PID 1" post that introduced systemd, it was enabled and in use on many distros. The conservative Debian adopted it within five years. Now it's clearly been a major success, but Wayland has been perhaps the slowest serious software project to reach maturity.
You have some weird memory, screenshots have been a solved issue for something like 6 or 7 years at the very least, if not a decade. I remember taking screenshots on Wayland during the Covid era, for instance.
Wayland experiences seem to vary wildly. It was most certainly not working fine for me six years ago. Well, six years ago I don't think I even got as far as trying screenshots; I'd run into basic window placement or rendering issues that made the system unusable.
But say a couple years ago, I definitely had screenshot issues. Sometimes it just wouldn't capture a screenshot. Or I could only capture one monitor and not the other. Or I had graphical artifacts while drawing the snipping rectangle. Or the screenshot would be taken fine and fail to copy to the clipboard.
I'm well aware people's experiences are very different based on their setup and the implementations used but for me, last year was the first time I could do some work on Wayland without running into major issues, at least until I got to the part where I'd normally use ssh -X.
You have always been able to do ssh -X from a Wayland session to a remote X client as long as XWayland is running locally.
And Waypipe has been solving the need to run a remote app on a remote Wayland system. It actually performs better than X forwarding, too. With ssh -X you also need to remember obscure environment variables (looking at you, Qt) to avoid unusable blank windows in some apps.
And I want (wanted) both. And X11 cannot redirect the whole display server unless you start your session with Xnest or some other semi-standard middleware (NX?).
It is very convenient sometimes to access your locked session on the big desktop from a small laptop, do something, and later go to the big desktop physically, unlock the "local" terminal, and continue with all the same programs and windows without starting a new session. This scenario was not supported very well by X11, unfortunately.
I'll take a reasonable path over no path and just hoping VirtualGL or something will be enough and forgoing color management entirely. I understand that some use cases work better or only in X, but I also see the roadmap for Wayland and it looks like it will solve problems that I care about. While I know a little bit about graphics and GUIs, the people building all this infrastructure know much more and it seems likely that their judgement on how to solve these problems is on average better than people who haven't been working at that layer for a few years.
> I'm not sure it was least resistance per se (as a social phenomenon) so much as simply an easier problem to tackle. Or maybe the author means the same thing.
Or maybe it’s desktop environments pulling the ladder up behind them.
How is it an accessibility issue? HTML allows things like little gif files. I've done this myself when I wrote text that contained Egyptian hieroglyphs. It works just fine!
Then use words. Or tooltips (HTML supports that). I use tooltips on my web pages to support accessibility for screen readers. Unicode should not be attempting to badly reinvent HTML.
And why do we not anymore make use of it, but instead implemented separate JSON loading functionality in JavaScript? Can you think of any reasons beyond performance?
I actually gave it some thought. I had written the actual reason first, but I realized that the person I was responding to must know this, yet keeps arguing that eval is just fine.
I would say they are arguing that in bad faith, so I wanted to enter a dialogue where they are either forced to agree, or more likely, not respond at all.
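For anyone following along, the unstated reason is security rather than performance: eval executes arbitrary code, while a dedicated JSON parser only accepts data. Python has the same eval-versus-parser split as JavaScript, so a minimal sketch of the hazard (the payload string is made up for illustration):

```python
import json

# A well-formed JSON document parses cleanly with the dedicated parser.
safe = '{"user": "alice", "admin": false}'
print(json.loads(safe))  # {'user': 'alice', 'admin': False}

# But eval() runs arbitrary code, so a malicious "data" string becomes
# code execution; a dedicated parser simply rejects it.
malicious = "__import__('os').getcwd()"  # could just as well be rm -rf
print(eval(malicious))   # executes real code -- the whole problem with eval

try:
    json.loads(malicious)
except json.JSONDecodeError:
    print("json.loads refuses to execute anything")
```

The same reasoning is why JSON.parse exists in JavaScript instead of everyone eval'ing response bodies.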
I've found it nice to have the terminal emulator match text with a regexp and, upon a click, convert it to an external action. For example, I can click a Python traceback in the terminal and have Emacs jump to that exact line in the code, or click a JIRA issue id and go to the web page.

I wonder, though, if this is a popular feature. Tilix is under minimal maintenance at the moment, so alternatives would be good to have.
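The matching side of that feature can be sketched in Python (the regex and the emacsclient invocation are illustrative, not how Tilix actually configures it):

```python
import re

# Python tracebacks contain frame lines such as:
#   File "/home/me/proj/app.py", line 42, in main
TRACEBACK_RE = re.compile(r'File "(?P<path>[^"]+)", line (?P<line>\d+)')

def parse_frame(text: str):
    """Return (path, line) if the clicked text looks like a traceback frame."""
    m = TRACEBACK_RE.search(text)
    return (m["path"], int(m["line"])) if m else None

# A terminal emulator would feed the clicked text here and then run e.g.
# ["emacsclient", "-n", f"+{line}", path] to jump to the exact spot.
print(parse_frame('  File "/home/me/proj/app.py", line 42, in main'))
```

The same shape works for JIRA ids: a regex like `[A-Z]+-\d+` mapped to a URL template.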
Regarding the JIRA link example, are you using Hyperbole buttons for this, or some other way? I'd like to do it without Hyperbole; it's a nice package and all, but the 'buttons' are the only one (of the many features) I would use from it.
Wow, thank you, that'll work perfectly! Of course bug-reference-mode is built-in, I had no clue (I need to stop being surprised by Emacs' built-in functionalities).
As I understand it, a big part of produced clothing just goes straight to waste to begin with. If everything was created on-demand, it would minimize that kind of waste.
> As I understand it, a big part of produced clothing just goes straight to waste to begin with.
My niece runs a business that relies on the way we discard clothes. She buys clothes from suppliers in India who source them from the bales of discarded clothes sent to them from Europe. Her suppliers have in effect sorted through the mountain of discards to find the ones that have sufficient value to sell back to us. She specifically buys clothes that have 'vintage' appeal (think tailored jackets rather than hoodies) and sells them primarily to students in a northern English city. Her business has done well enough to move from market stalls to a dedicated high street store and she is just branching out into 'vintage' kids clothes.
That would be great, a lot of clothes are made at sizes that don't sell very well and which get discounted, then discarded if they don't sell.
However, made on demand will likely cost more, plus you can't fit items first. Unless they make items for fitting which you can then order to have manufactured.
But yeah the main thing is that on-demand can never compete with mass production even if a big part of the mass produced stuff is discarded.
> on-demand can never compete with mass production even if a big part of the mass produced stuff is discarded.
This is definitely not universally true. E.g. photos are very cheaply printed on demand. Even on-demand books are printed at reasonable prices. Sure, mass production is cheaper (both for books and pictures), but the value difference of the individual product is high enough to bridge the price gap.
For clothing, this area has seen little exploration. TFA covers production at niche scale. If you mass-produced the looms to reduce the capital expense and leaned heavily into customer value, e.g. individual fittings via 3D scans, as my sister comment proposes, or even just letting me customize my sweater with motif, color choice, garment type, etc., this could radically change the cost-to-value ratio. The company that published TFA sells extremely bland apparel in a shop that looks just like any mass-produced clothing shop and leaves all of the customer value of custom production on the table.
Last but not least: this "3D knitting" seems to need only a fraction of the labor of traditionally sewn clothes. If textile production didn't default to underpaid labor under precarious working conditions in low-income countries, it would probably already be cheaper.
From 3d printed clothing, the obvious next step should be to have your phone take a 3d scan of you, and send it to the clothing designer to print it to your actual body size and shape. We could have truly unique sizing (none of this S/M/L/XL stuff)!
Edge case: people who are in the process of changing their body size/shape. Growing children, people losing weight, people gaining weight (they're out there), will all occasionally want to buy for where their body is going to be in the future, not where it is now. How to accommodate them?
I'm sure models predicting how their body changes (based on various parameters and previous scans of the particular person and other people) could be built, allowing to optimize for longest time period of "decent fit" at the cost of "perfect fit now".
Yes, and people have been chasing that Grail for decades. It's always right around the corner. (Despite what another poster said, it IS being pursued commercially. And unobtainable so far.)
The top-level README gives a bit better idea. Armed with that the explanation might sound a bit more understandable.
I'm not familiar with the project (or Clojure), but let me try to explain!
> Mycelium structures applications as directed graphs of pure data transformations.
There is a graph that describes how the data flows in the system. `fn(x) -> x + 1` in a hypothetical language would be a node that takes in a value and outputs a value. The graph would then arrange that function to be called as a result of a previous node computing the parameter x for it.
> Each node (cell) has explicit input/output schemas.
Input and output of a node must comply with a defined schema, which I presume is checked at runtime, as Clojure is a dynamically typed language. So functions (aka nodes) have input and output types, and presumably they should try to be pure. My guess is there should be nodes dedicated to side effects.
> Cells are developed and tested in complete isolation, then composed into workflows that are validated at compile time.
Sounds like they are pure functions. Workflows are validated at compile time, even if the nodes themselves are in Clojure.
> Routing between cells is determined by dispatch predicates defined at the workflow level — handlers compute data, the graph decides where it goes.
When the graph is built, you don't just travel all outgoing edges from a node to the next; you can place predicates on those edges. The aforementioned nodes do not have these predicates, so I suppose the predicates would be their own small pure-ish functions with the same input data as a node would get, but their output value is only a boolean.
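As a rough illustration of the shape I'm describing (in Python rather than Clojure, and emphatically not Mycelium's actual API), nodes can be plain pure functions with declared schemas, and routing can live entirely in predicate-carrying edges:

```python
# Hypothetical sketch: schema-checked cells, predicate-routed edges.
# All names here are invented for illustration.

def increment(x: int) -> int:          # a "cell": pure, schema int -> int
    return x + 1

def report_small(x: int) -> str:
    return f"small: {x}"

def report_big(x: int) -> str:
    return f"big: {x}"

# Edges carry predicates; the graph, not the cell, decides where data goes.
graph = {
    increment: [
        (lambda v: v < 10, report_small),
        (lambda v: v >= 10, report_big),
    ],
    report_small: [],
    report_big: [],
}

def run(node, value):
    out = node(value)
    # crude runtime "output schema" check via the annotation
    assert isinstance(out, node.__annotations__["return"])
    for predicate, nxt in graph[node]:
        if predicate(out):
            return run(nxt, out)
    return out

print(run(increment, 3))   # small: 4
print(run(increment, 42))  # big: 43
```

Note that `increment` knows nothing about the reporting cells; swapping the routing only touches the `graph` table.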
> to define "The Law of the Graph," providing a high-integrity environment where humans architect and AI agents implement.
Well, beats me. I don't know what "The Law of the Graph" is, and the Internet doesn't seem to know either. I suppose it's trying to say that from the processing graph you can see that, given input data at the ingress of the graph, you have high confidence you will get the expected data at the final egress.
I do think these kinds of guardrails can be beneficial for AI agents developing code. I feel that for that application some additional level of redundancy can improve code quality, even if the guards are generated by the AI to begin with.
That's mostly correct; one small correction is that cells don't have to be pure. They just have to focus on doing a single task with some hard boundaries.
And what I meant with the law of the graph was simply that the graph defines the actual business logic, and then each cell is a context free component that can be plugged into it. I guess I was just trying to be clever there.
The key benefit I'm finding is that cells can be reasoned about in isolation because they know nothing about one another. You don't get the implicit coupling that's embedded in the call graph of normal programs.
My approach is to use inversion of control where the cell gets some context and resources like a db connection, does some work, and produces the result. That gets passed on to the graph layer which inspects the result, and decides what cell to call next.
With this approach you can develop and tests these cells as if they were completely independent programs. The context stays bounded, and the agent doesn't need to know anything about the rest of the application when working on it.
The cells also become reusable, since you arrange them like Lego pieces, and snap them together into different configurations as needed.
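A minimal sketch of that inversion of control, in Python for illustration (the real project is in Clojure, and all names here are invented):

```python
# Each cell receives a context with its resources and returns plain data;
# it never calls another cell. The graph layer sequences them.

def fetch_user(ctx, data):
    # ctx["db"] stands in for an injected resource such as a DB connection.
    return {"user": ctx["db"].get(data["user_id"])}

def greet(ctx, data):
    return {"message": f"hello, {data['user']}"}

def run_workflow(ctx, cells, data):
    """The graph layer: call each cell in turn, passing results onward.
    (A real graph would inspect each result to pick the next cell.)"""
    for cell in cells:
        data = cell(ctx, data)
    return data

# In a test, resources are trivially faked, since cells know nothing of
# one another or of the workflow they run in:
fake_db = {1: "alice"}
result = run_workflow({"db": fake_db}, [fetch_user, greet], {"user_id": 1})
print(result)  # {'message': 'hello, alice'}
```

The Lego-piece quality falls out of the signature: any cell taking `(ctx, data)` and returning data can be snapped into a different workflow list.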
That's not lower power, is it? E.g. RuuviTags can run 3 years or longer while sending sensor data 2.5 times per second, with a single CR2477 (3V 1000mAh). A single AA alkaline battery has 1.5V and 2100-2700 mAh (https://batteryskills.com/aa-battery-comparison-chart/ , somehow this data was difficult to find so I'll add this link :)).
Bluetooth is lower energy than WiFi, but in your scenario the energy used for the radio is quite low anyway.
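Back-of-the-envelope, those RuuviTag figures imply a very small average current draw (rough arithmetic only; this ignores battery self-discharge and voltage-cutoff effects):

```python
# CR2477: ~1000 mAh at 3 V; claimed lifetime: ~3 years of broadcasts.
capacity_mah = 1000
hours = 3 * 365 * 24          # ~26,280 hours in three years

avg_current_ma = capacity_mah / hours
print(f"average draw ≈ {avg_current_ma * 1000:.0f} µA")  # ≈ 38 µA
```

A few tens of microamperes on average is indeed a target that a stock ESP32 dev board has a hard time hitting.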
There definitely are lower-powered options; I mostly meant it as an example that, as a hobbyist, an ESP32 - possibly even on a standard dev board! - could easily be good enough for your use case.
I never did a formal study to see how much of that power use was standby vs. power-on usage, how much of the standby usage was the ESP32 vs. the board/voltage regulators/pulldowns, how much of the power on usage was radio vs. e.g. all the crypto (we're doing asymmetric crypto for the TLS handshakes on batteries here, that isn't going to be cheap!) etc.
I just slapped it together and found it good enough to not care further.
Actually I've read claims that ESP32-C6s are pretty decent battery-consumption-wise. So much so that I bought a few, hoping to make at least a doorbell out of one. Alas, I don't have a device to measure microamperes, so I guess I'll just see how long they fare.
You can use Ohm's law - let it draw power through a 10k resistor and put your multimeter across the resistor. Every 0.01 V is 1 µA. This also means that if you're powering it with 3.3 V and it browns out at 3.0 V, you'll only be able to draw 30 µA before browning out.
You can use a different resistor according to the power draw and how sensitive your volt meter is.
You'll probably need to power it up with the resistor shorted, and only remove the short once it's in sleep mode, to measure the current.
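The arithmetic behind those numbers, for reference (assuming the 10 kΩ shunt and the 3.3 V supply / 3.0 V brown-out figures above):

```python
R = 10_000  # ohms (10k shunt resistor)

# V = I * R, so every 0.01 V across the resistor is 1 µA of current:
print(0.01 / R * 1e6, "µA")  # 1.0 µA

# With a 3.3 V supply and a 3.0 V brown-out threshold, the resistor may
# only drop 0.3 V, which caps the measurable current:
print(round((3.3 - 3.0) / R * 1e6), "µA")  # 30 µA
```

Dropping to, say, a 1k resistor raises that ceiling to 300 µA at the cost of 0.1 V per 10 µA of resolution on the meter.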
While that one might not be of much use, yours might be :). Actually I just have a small midikbd next to my computer, so maybe I could find use for this.
I did think of one potential use for midihidi: using MIDI as an input device when running a tracker in an Amiga emulator.