On a more practical note, why would you want to have seconds on the clock? I find that most of the time I don't want to know the time to a precision of better than 5 minutes. That may just be me, but when someone asks the time and I read it off my phone as 3:56, it really bothers me, and sometimes I convert that to 5-to-4 just to avoid what in most cases is arbitrary extra precision. Sure there are times when minutes count, and there are occasions when I want to time something to the second, but from a UI perspective those are not common use cases that warrant screen real estate for extra digits, especially back then.
It depends on how often you find yourself timing things at that level of fidelity. For me it's quite often, and having seconds always displayed really helps out, especially when developing software and checking things like compile times, load times, etc.
If you find yourself measuring compile times frequently, I think you'll be much happier with a command that actually times the operation than trying to eyeball it from the status bar. I can't imagine staring at a compilation and being unable to look away lest I miss the finish.
Well it's more out of curiosity and I don't specifically measure compile times often. I develop in a variety of domains and for some things a fuzzy count of how many seconds something takes is useful. If keeping track of things with finer grained tools is required, that's fine. Whichever is the most productive for the given problem.
As a MacOS user for seven years I can't say I've ever had that issue. Certainly not enough to need the seconds to be an indicator. I've had my system completely lock up maybe four times and those times my mouse refusing to move was enough of an indicator.
Heck, right now it's been two and a half months since I've rebooted... and probably a year since I've rebooted because it froze.
I'm not an Apple fanboy either. In fact my next computer will probably be Linux. But I have to admit the last three Macs I have owned have been great computers.
You must not work in an organization that insists on installing security software for you. If I have >2w uptime on my machine I generally have to restart for some reason or other. It's like people have forgotten that computers can and should have more uptime than that.
A faulty update from one of our security software vendors caused a once-per-hour kernel panic and forced reboot for a day or two. It was really nice.
(Obviously none of this is the fault of OSX or Apple but I would kill for 2.5m of uptime on my work laptop).
I feel you there. It got to the point at one gig where engineering folks--who were given root on their preconfigured laptops--often spent their first day figuring out how to kill and extricate Sophos from their machines. For developers who weren't seasoned jerks, it could be a real challenge until somebody took pity on them and helped them find single-user mode, yank out the kexts, and manually delete everything. (The happy .pkg BOMs helped out, too!)
Ordinarily I wouldn't do this and just punch out on a gig (because seriously, it's that annoying and that effectively-useless), but we were a recently-acquired startup subsidiary, so we were also our own IT shop--the CTO actively approved.
Is it still possible to do this kind of thing on the most recent macOS? I recall the issue with the 'git' binary not being modifiable, even with sudo, unless you mounted the macOS filesystem under Linux and made the changes from there.
It was literally there from day one (from day minus-a-bunch if you count the beta releases) and the hyperbolic inanity that led normal people to the misconception you had was never necessary in the first place.
I must admit that I am very curious why "git and 64 other files in /usr/bin are all the same size (18176 bytes on my machine)", why dtruss and strings fail, and what behavior changes when you turn rootless off.
I switch between two external displays at work then go home and plug the computer into a single external display.
Worst case, sometimes the layout of my windows gets messed up and I have to move them around to get back where I want them to be.
The only major issue I've had is sometimes Bluetooth gets confused and it takes me a few minutes to get my trackpad connected when I switch between home and work. (By a few I mean almost half an hour :( )
My Bluetooth mouse has about a 50 percent chance of being recognized when my laptop is coming out of sleep.
My workaround has been to just open the Bluetooth panel via Alfred, and so long as Bluetooth has not totally shit the bed (Bluetooth has turned itself off and the Turn On button does nothing), my mouse is connected in less than 30 seconds, often under 10.
> Worst case, sometimes the layout of my windows gets messed up and I have to move them around to get back where I want them to be.
Hyperswitch combined with SizeUp works wonders in this case (you use Hyperswitch to directly target individual windows and SizeUp to properly full-screen them).
Why Apple still doesn't have proper keyboard-only management for windows (Cmd-Tab only lets you select the foreground app, not its individual windows, and what Apple calls "fullscreen" is an abomination) is way beyond my understanding.
> Apple still doesn't have proper keyboard-only management for windows
False. Cmd-~ will switch windows within an app. Control-<Left> and Control-<Right> will move left and right between spaces (including fullscreen apps). I use these hundreds of times a day.
I do wish we had more robust features, like keyboard-based window placement/resize. As far as I can tell, third-party tools are the only way to enable that. But I find the above perfectly adequate for 90% of what I do.
Yet what I as a user want is to switch to just the next window in the stack, regardless of application, a la the Alt+Tab that Windows has had since Windows 95.
(I usually have multiple browser, terminal, and/or editor windows open, but I'm usually only using one of each at a time. Dragging the entire app to the foreground almost invariably means that some window that I'm not using and don't care about ends up on top of a window from a different app that I do care about. That is, me indicating that I care about a particular window of an app doesn't imply that I care about all of them, but OS X thinks it does.)
I use two external monitors on a pre-Touch Bar MBP and disconnect multiple times a week and never have issues, other than my Windows VM getting confused about which display is high-DPI.
I used to have a lot of problems with my iMac, but that was more a function of having extremely low hard disk space and apps that don't behave well when that happens.
I returned to the Mac 7 years ago, and work on one every day (development, Xcode compiles, run web server, WebStorm, business stuff like numbers/pages, etc). Not sure I ever remember having a kernel panic.
Cursor movement works for that. Or if the system freezes in a way that allows the hardware cursor to keep working, then move the cursor over something that would normally react to it.
I mean, come on. It's a clock! Surely seconds are relevant.
I understand that not everyone might want to see them, but this isn't a great argument for leaving off a feature that is surely useful in other circumstances. The article is interesting, but I don't think the sort of technical explanations Chen gets into should be accepted as valid reasons to leave out a feature. They're interesting in context, but we should all be able to recognize that showing seconds is a good feature for a clock. Even the team in question would have left the feature in if it had performed OK at the time, which indicates they agreed with its utility.
Some years ago, I set the clock of my Linux desktop (Ubuntu? GNOME? I don't remember the specifics) to display a "fuzzy time". It would tell the time like that: a quarter to eleven... half past two... And it was very good; instead of worrying whether I was being productive by checking the time twice every minute, I would just let it go.
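I don't know how that applet implemented it, but the mapping is simple enough to sketch. Everything below (the phrase table and the five-minute rounding) is just my assumption of how such a fuzzy clock might work:

```cpp
#include <array>
#include <iostream>
#include <string>

// Minimal "fuzzy time" sketch: round to the nearest five minutes and
// phrase it the way a person would ("a quarter to eleven").
std::string fuzzy_time(int hour, int minute) {
    static const std::array<std::string, 13> phrases = {
        "%1 o'clock",        "five past %1",      "ten past %1",
        "a quarter past %1", "twenty past %1",    "twenty-five past %1",
        "half past %1",      "twenty-five to %2", "twenty to %2",
        "a quarter to %2",   "ten to %2",         "five to %2",
        "%2 o'clock"};
    static const std::array<std::string, 12> names = {
        "twelve", "one",   "two",   "three", "four", "five",
        "six",    "seven", "eight", "nine",  "ten",  "eleven"};

    int slot = (minute + 2) / 5;  // nearest five-minute slot, 0..12
    std::string s = phrases[slot];

    // %1 means "this hour", %2 means "the next hour".
    auto pos = s.find('%');
    s.replace(pos, 2, names[(s[pos + 1] == '1' ? hour : hour + 1) % 12]);
    return s;
}

int main() {
    std::cout << fuzzy_time(10, 44) << "\n";  // a quarter to eleven
    std::cout << fuzzy_time(14, 31) << "\n";  // half past two
}
```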
You must be toying with us. Please, tell me it's a joke. I literally can't find any reason to time the start of a phone call down to the second. What's wrong with starting it just when the minutes counter changes?
It might not be the memory concern it once was, but today it might very well be a power concern. Modern laptop display pipelines can save power if they don't have to redraw any of the screen: constantly updating status bar animations break that.[1]
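That is also the reason a minutes-only clock can be so cheap: it only needs to wake up and redraw when the displayed value changes. A minimal sketch of that idea (not any particular OS's implementation), sleeping until the next minute boundary instead of ticking every second:

```cpp
#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

// Sketch: redraw only when the displayed value changes, by sleeping
// until the next minute boundary rather than waking once a second.
// (Illustrative only; a real widget would use the platform's coalescing
// timers instead of a dedicated loop, and this one runs forever.)
int main() {
    using namespace std::chrono;
    for (;;) {
        auto now = system_clock::now();
        std::time_t t = system_clock::to_time_t(now);
        char buf[6];
        std::strftime(buf, sizeof buf, "%H:%M", std::localtime(&t));
        std::cout << "redraw clock: " << buf << "\n";

        // One wakeup per minute instead of sixty.
        auto next_minute = std::chrono::ceil<minutes>(now + seconds(1));
        std::this_thread::sleep_until(next_minute);
    }
}
```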
Neither Chrome nor Firefox supports HLS on the desktop, despite providing support on mobile (since Android 3.0, I believe). They have categorically refused to add desktop support; if you have any complaints, I would take it up with them.
I can't find any reference for them supporting it on mobile, and it doesn't work in either on my phone (Android 7.0, for what it's worth). In fact, in Chrome it even fails to display the "not supported" message.
Besides that, I would rather complain that Apple is using an obscure format than complain that Mozilla and Google don't support an obscure format.
EDIT: sorry, it seems mobile Chrome is supposed to work, but it just... doesn't. Still don't see any reference for Firefox, though.
We are in a different era now, where desktop apps are written in JS. Performance is implicitly thrown out of the window(s) when compared to software written for control over every bit of CPU and memory usage.
But the JS ecosystem learns things! They implement tree shaking and other stuff to improve performance!
Obviously real programmers learned all this decades ago - same for the "NoSQL" followers who after years of struggling with piles of cr.p finally (re)discover what a true ACID-capable database is...
I'm just waiting for the JS world to rediscover threads and proper multitasking, but given the track record I don't expect that in the next decade.
Seriously, the JS world is a hellhole of disasters, many of them easily preventable if the people acting as "evangelists" actually had a clue about what they're talking about. And to top it off, a JS-only implementation will never be nearly as fast as well-written C or even C++ code, no matter how hard "evangelists" push it.
You'll probably get down-voted for your choice of language, but I agree.
I actually feel what's going to happen is the JS community will start using WebASM and other very low-level mechanisms to improve the speed of slow software components and, given enough time, will eventually just have entire binary applications being delivered through the browser (automatically installed, without asking you).
We'll go from `yum install software-suite` to `https://www.software-suite.com/` and the results will be the same. We'll have gone full circle.
I agree with you. Actually I think that at this stage, the browser should be viewed as a sandbox that can fetch and execute arbitrary binaries; then we can finally get past all the hacks that Web tech (js/dom/html/css/templates) is using to approximate that and actually build it for that express purpose, which should lead to better security (since we aren't pretending about what's happening -- it's untrusted code execution) and a better UX overall.
I don't think the developers of any of the main browser's JS engines pretend they're not going to be executing hostile code. In fact I struggle to think of better embeddable software sandboxes than V8, SpiderMonkey, etc., especially now that they support WebAssembly.
We need to just skip all this JS and other web standards nonsense and go straight to writing Golang, Rust, C, whatever, that is delivered (as a binary) through the browser and into a nice little sandbox that exposes system-level APIs (notifications, temp file storage, everything we can already kinda do in JS now) so we can get on with a faster, better web.
(Kind of makes me think we have this now: Docker/containers. Hmm... hit an HTTPS endpoint and a Docker container is downloaded and the application is launched...)
This isn't a JS issue. It's an optimization bug in the CSS rendering.
Holy crap the superiority complex is just gushing out of you. Sometimes it's just incredible seeing how much shit "real programmers" will throw at the "fake programmers."
The Electron platform is popular because it's easy to work with when building portable GUIs. It's especially easy for webdevs who use the same languages all the time. They can accessibly hack on the editor, create plugins, etc. Just like Java devs use Eclipse and IntelliJ, C# devs use Visual Studio, and Obj-C / Swift devs use Xcode.
If it's so easy to build portable GUIs the "right way", why are Slack, WhatsApp, Google, game producers, hardware vendors, browsers, etc all using this technology?
Also, I don't understand why you keep complaining about all of these Electron apps. Do you work on desktop GUIs yourself? Have you built GUIs that hook into mainly CLI programs like many of these Electron apps?
But C++ GUI toolkits are a disaster. While any kid can put together a reasonable-looking JS app, the default options in toolkits like Qt are junk. Common features like centering a QComboBox aren't there. Quickly, you're going to have to custom-build every widget. Then you need to hire engineers to manually press all the buttons. These engineers won't be too good at art stuff, so guess what, now you need to hire graphic designers, many of whom aren't familiar with Qt. Your best bet is to pay a consulting company tons of money to make a GUI, which you will struggle to keep updated. Even with unlimited cash, the lead time for this is unjustifiable.
Do customers really want to trade performance for actual money? Are people still browsing the fatweb?
From what I remember centering in Qt is possible and easy.
Why would you need designers to know Qt? They will show you a picture and you implement it. Hopefully they don't want elliptic windows with elliptic buttons, animated shiny/metallic colors, and animated shadows; you can get that, but you may need to create custom widgets.
Don't even compare a GUI listbox or similar widget, which can handle 10,000 items because it is smart enough to create and paint only the visible ones, with an HTML DOM UL/OL, which can't handle that many items, so you have to implement the smart list yourself.
Same with data grids: for HTML you probably need to buy some code that has part of the functionality present in normal GUI toolkits.
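For what it's worth, a minimal Qt sketch of the listbox point; the 100,000-item count is arbitrary, and the only thing it shows is that the view creates and paints visible rows only, with no extra work in the application:

```cpp
#include <QApplication>
#include <QListView>
#include <QStringListModel>

// QListView asks the model only for the rows currently in the viewport,
// so a 100,000-entry list stays responsive without a hand-rolled
// "virtual list" layer.
int main(int argc, char** argv) {
    QApplication app(argc, argv);

    QStringList items;
    items.reserve(100000);
    for (int i = 0; i < 100000; ++i)
        items << QStringLiteral("Item %1").arg(i);

    QStringListModel model(items);
    QListView view;
    view.setModel(&model);
    view.setUniformItemSizes(true);  // lets the view skip per-row measuring
    view.show();

    return app.exec();
}
```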
> Common features like centering a QComboBox aren't there
Well, browsers didn't have proper vertical centering for UI elements for ages, too. Not to mention it's between a PITA and impossible to do "simple" things like styling a file upload button (only works via pseudo CSS on Chrome) or cross-browser styling of a scrollbar (usually people tend to handroll JS stuff, which is expectedly slow and unintuitive).
> Your best bet is to pay a consulting company tons of money to make a GUI, which you will struggle to keep updated.
It's the same in the Web sphere, with the added difference that clients won't simply accept "you cannot style a file upload button/input element cross-browser"; they will usually answer with something along the lines of "it's the Web and HTML5 after all!!!". You will always need specialized engineers, designers, and UX designers for a well-working app, no matter whether native or web.
(But yes, I agree with you that C++ GUI toolkits are a disaster, especially when cross-platform! And especially the build tooling; cough, autotools.)
What saddens me is that instead of improving the web software development process with tried and tested methodologies, the inverse happened, and the web and its terrible software ecosystems and perpetrators are somehow seen as the "cool kids" of software development. When the observer pattern is seen as the holy grail of modern web development, you know we are in trouble.
The observer pattern isn't sold to everyone... The thing is, Web browsers have been nearly as diverse as OSes both historically and recently... But, you get a mostly compatible UI and runtime engine that's installed on pretty much every personal computing device out there. There's a lot to be said for that.
As to the "cool kids", they exist in every corner of things that get done by people. I think WebASM will open up to higher-level tools once it's widely available, and signalling between the UI layer and WebASM code becomes easier to deal with.
> But, you get a mostly compatible UI and runtime engine that's installed on pretty much every personal computing device out there. There's a lot to be said for that.
Only "lowest common denominator" comes to mind.
But to the point of this thread, what bothers me is the transcendence of web tech outside of browsers, where a technology known historically for terrible practices with a very low barrier to entry is now considered the "assembly" of all these platforms, such as mobile and desktop, and the developers, that until yesterday could only make a web page that loads 20MB of dependencies to show a few paragraphs of text, now call themselves "mobile developers" or "full-stack developers".
Well, I take a bit of offense at the last statement... if only because JavaScript is just as valid a server-side language for development as any other scripting language, if not more so. The engines themselves tend to be more optimized than the alternatives, and combined with NPM/CJS modules it's a much nicer experience than many of the alternatives.
As to being able to target mobile/desktop, why not? I mean with Cordova and Electron you can target 4+ platforms with minimal code variance. And in most cases the performance is good enough, until you need more. And React-Native can go even further towards native/compiled language performance, with slightly more variance.
Yes, there is more memory and cpu use than alternatives, but there's also a real cost in developer time, and time to launch. In many cases it's the difference between having platform X, or not... the alternative is nothing, not something better in most cases.
"Tried and tested" cannot be sold for money (as it's already known).
So in order to create demand for consultants, the ecosystem repeats history and willfully messes up... and we all know where the consultantocracy finally ends up; just go looking at Enterprise FizzBuzz.
Assuming this is Qt Widgets we're talking about, this issue can be sidestepped entirely by just going with native style controls with tastefully placed brand accents. The need for each app to have a unique UI theme unto itself is questionable at best.
This is a good article, but keep in mind the timeframe it writes about. Not only was it 22 years ago, it's writing about systems at the bare minimum system requirements. (Personally speaking, I remember speccing out a previous-gen Windows 3.1 machine with 8MB of RAM in '92... and this is covering 4MB systems in 1995. By that time, 8-16MB would've been common.)
Another story from the time is that I went to CompUSA (as a 'spectator') on the night of the Windows 95 release. (It was a midnight release and the stores were open late.) There was a line out the door that snaked around through the store past the Windows 95 boxes, the Plus Pack boxes, MS Office, and then the memory upgrade desk...
I worked there at the time. That line didn't go away; it lasted for days. It was reminiscent of The Empire Strikes Back. 4MB SIMMs at that time were more expensive than 4GB sticks today. People had 16MB of RAM, but that was mostly reserved for gamers and programmers. Or Macs at graphic design shops.
The Windows 95 hype train was truly something to behold. The Empire State Building lit up in Windows colors, Friends cast members appearing in a (truly lame) promotional video... Windows 95 was the release that put the phrase "operating system" on the common man's lips -- for better or worse.
I'd never heard of this "Friends cast members" video, so I did a bit of YouTubing... I have no words (nearly an hour long, and I think there's a part two): https://www.youtube.com/watch?v=kGYcNcFhctc
The 90s were really great in a lot of ways. But as you've shown, they were really awful in a few others.
I wonder if there's a parallel universe out there where Windows 95 never happened and Microsoft went out of business somehow. That would be the universe where the 90s were a truly wondrous age.
Heh, I still recall having to use a special bootloader that would load a BIOS extension, or some such, because the new HDD I got (so that I could run Win95 and still have room for anything else) was otherwise not compatible with my 486.
Even given that, I can't help thinking computing was less complicated back then.
I worked on a visual programming language in the late 80s. It would compile its programs, but the compiler was really slow; it took minutes for small programs.
We worked on optimizing it but couldn't find much to optimize. Then the founder took out the code that updated the progress bar as it compiled, and compiles finished 10x faster.
No cache on a Plus. It was probably just updating the progress bar too frequently.
The 68k in the Plus is an 8MHz 32-bit processor with a 16-bit data bus, 4 clock cycles per bus access if there are no wait states, and most instructions are more than one word long. There's not a whole lot of time to do useful work!
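One common alternative to ripping the progress bar out entirely is to throttle it so it redraws at a human-visible rate instead of on every unit of work. A rough sketch; compile_unit() and the 100 ms interval are made up for illustration:

```cpp
#include <chrono>
#include <cstdio>

// Repaint the progress display only when enough time has passed,
// not after every compiled item. draw_progress() stands in for
// whatever slow UI call the real program made.
void draw_progress(int done, int total) {
    std::printf("\rcompiling %d/%d", done, total);
    std::fflush(stdout);
}

void compile_all(int total_units) {
    using clock = std::chrono::steady_clock;
    auto last_draw = clock::now() - std::chrono::hours(1);  // force first draw
    const auto interval = std::chrono::milliseconds(100);   // ~10 redraws/sec

    for (int i = 0; i < total_units; ++i) {
        // compile_unit(i);  // the actual work, omitted in this sketch

        auto now = clock::now();
        if (now - last_draw >= interval || i + 1 == total_units) {
            draw_progress(i + 1, total_units);
            last_draw = now;
        }
    }
    std::printf("\n");
}

int main() { compile_all(5000); }
```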
And on current systems, updating every second would decrease battery life by preventing your system from sleeping. Modern systems can easily sleep for more than a second at a time, even with the screen on (and not updating).
I also, personally, can't stand to have distractions like that on my screen away from where I want to focus.
I had completely forgotten about the Time Machine icon animating before. Interesting. Odd choice, perhaps, because I imagine most people have the hard drive at their desk, where wall power is easily available. There is a setting that enables backing up on battery power, but I think the default value is disabled.
In Windows 7 they've finally given up and the UI lets you easily scroll up and down through months after clicking on the clock. (Edit: sorry I meant Windows 10)
I would have thought it would be useless, because our eyes notice movement in the periphery and it distracts you into looking at it. I would be annoyed by some animation going on that I don't care about.
On Linux, there is a very practical reason: to reduce the number of wakeups per second, and save power.
You may think it is far-fetched, but try a minimalist XFCE or LXDE desktop, run powertop updating every 5 seconds (so that screen refreshes from powertop don't skew your measurements), and see for yourself.
For that same reason, you may want to disable the blinking cursor, and conky, and other gizmos that blink things on the screen.
I mean, the article describes why it was left out in versions of Windows from an era that was much more memory-constrained. Even in 2003, Windows 95 was as old as Android 2.2 is now. I configured my taskbar to display seconds and can't say Cinnamon is hogging that much more memory or CPU because of it.
From the screenshot, someone not familiar with strftime would be lost; but most people would be able to figure out the invented format. Also, it looked like users got to choose from a drop down of suggestions -- the "format string" was probably only for users to read, not actually parsed.
> Also, it looked like users got to choose from a drop down of suggestions -- the "format string" was probably only for users to read, not actually parsed.
Actually it is possible to write your own format string, since those are comboboxes:
Also, I would agree that the Windows UI is better since it provides a short description of the format letters; but they could've just as easily done that for the commonly-used strftime() specifiers too, with perhaps a link that opens a popup with all of them. The live preview at the top also helps.
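For reference, the commonly-used strftime() specifiers being discussed look like this in code (the exact letters an applet exposes may of course differ):

```cpp
#include <cstdio>
#include <ctime>

// A few of the strftime() specifiers a clock applet typically exposes:
//   %H hour (00-23)   %I hour (01-12)   %M minute   %S second
//   %p AM/PM          %a abbreviated weekday name
int main() {
    std::time_t t = std::time(nullptr);
    char buf[64];

    std::strftime(buf, sizeof buf, "%a %H:%M:%S", std::localtime(&t));
    std::printf("%s\n", buf);  // e.g. "Tue 17:28:15"

    std::strftime(buf, sizeof buf, "%I:%M %p", std::localtime(&t));
    std::printf("%s\n", buf);  // e.g. "05:28 PM"
}
```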
I always have the seconds on (Ubuntu) and it's a good feature. I leave work every day at 17:28:15 sharp, otherwise I will miss my train (London DLR) and yes, seconds matter in this case.
If that isn't satire, how are you using the on-screen clock to do that?
Do you start staring at it at around 17:20 (or even earlier?) and wait for it to hit 17:28:15, or is your internal clock so good that you only have to start staring at it at 17:28:00?
Either way, why spend time staring at an on-screen clock if you also could spend that time walking to the train station, or even walking on the platform?
I would find some tool that allows me to set daily alarms at around 17:15 (as an early warning to finish whatever I am doing) and 17:28:15 (as a sign to leave now), so that I wouldn't have to waste time staring at that clock.
It's not that hard once you're used to it. I just glance at the clock several times towards the end of work to know exactly when to wind things down, and stand up from my chair at exactly 17:28:15. The optimal time was determined by observing elevator business patterns, crowd thickness downstairs, and the delay margins of the train arriving at my stop.
If it's just for benchmarks, why not make it a preference (defaulted off), so benchmarks can be run with the default settings (off) just fine, but people can turn it on if they want to?
Displaying seconds is a valuable feature, so it's not a matter of having unnecessary code. As for being slow, the ability to display seconds was removed because of the minimum system requirements for Windows. Anyone with a machine that has more than the bare minimum can probably spare the kilobytes necessary to draw seconds.