On my machine it was noticeable. I seriously tried it, but went back because I could notice a small end-to-end latency between keypress and action. But I'm also a 240 Hz user.
Where are you measuring the keypress from? The nerve signal to your finger muscles? Or the moment the keycap hits bottom? What if the switch closes before the cap hits bottom? Then we're getting a latency figure that looks better than it really is.
I've had a keyboard like that, and with it xterm (and nothing else) felt like it was displaying the characters even slightly before I had pressed them. It was a weird sensation (but a good one).
Yes, I know this feeling; it's like typing on air. The Windows Terminal has this same feeling. Eight years ago I opened this issue https://github.com/microsoft/terminal/issues/327 and the creators of the tool explained how they achieve it.
xterm in X11 has this feeling; ghostty does not. It's like being stuck in mud, and it's not just ghostty: all the GPU-accelerated terminals I've tried on Linux have this muddy feel. It's interesting, because moving windows around feels really smooth (much smoother than on X11).
I wish this topic were investigated in more depth, because inputting text is an important part of a terminal. If anyone wants to experience this on Wayland, try booting straight into a tty instead of your desktop environment, then type. xterm in X11 and the Windows Terminal feel like this.
Nerve signals, yes. I just try them side by side, usually running vim in both terminals and judging how it feels. If you can feel a difference, the latency is bad.
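For what it's worth, one slice of that end-to-end chain can be measured objectively rather than by feel. The sketch below (my own, not from this thread) times the round trip of a byte written to a pty master until the tty layer echoes it back, with a `cat` process holding the slave side open. It captures only the kernel pty/scheduling portion of the latency, not input-device or rendering delay, which is what most of this thread is about.

```python
# Hypothetical sketch: measure pty round-trip latency on a Unix system.
# This is NOT full keypress-to-pixels latency -- only the kernel pty slice.
import os
import pty
import statistics
import time

def pty_roundtrip_ms(samples=50):
    """Median milliseconds for a byte to be echoed back by the tty layer."""
    pid, fd = pty.fork()
    if pid == 0:
        # Child: hold the slave end of the pty open.
        os.execvp("cat", ["cat"])
    times = []
    try:
        for _ in range(samples):
            t0 = time.perf_counter()
            os.write(fd, b"x")      # "keypress" enters the pty
            os.read(fd, 1)          # wait for the echoed byte
            times.append((time.perf_counter() - t0) * 1000)
    finally:
        os.close(fd)
        os.waitpid(pid, 0)
    return statistics.median(times)

if __name__ == "__main__":
    print(f"median pty round-trip: {pty_roundtrip_ms():.3f} ms")
```

Tools like Typometer take the more honest approach of timing from synthetic key event to actual screen change, which is closer to what people perceive.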
I find Richard Werner's take on money one of the most grounded. He has done a lot of work to track how it moves through the pipes, and a lot of easily findable communication around the subject. He's the same guy who is said to have invented QE.
Even if they do (often not the case), this will be far from exhaustive, and likely won't reflect the structure of the application very well. Vision-based testing is often combined with accessibility-based testing.
If people want to study this, perhaps it makes more sense to do what we used to: leave the "labels" of relativity out of the training set and see if the model comes up with them on its own.
This inspired me to generate a blog post as well. It's quite provocative. I don't feel like submitting it as a new thread, since people don't like LLM-generated content, but here it is: https://telegra.ph/The-Testimony-of-the-Mirror-02-12
> since people don't like LLM generated content, but here it is
Perhaps you could have made that comma a period and stopped there, instead of continuing to share a link to content you already said people won't like?