> seems to compare the Firefox version number with 65 ... I’ve no idea what the purpose of this is
I previously worked on JS infra at Google. Google uses a lot of shared JS libraries that date back over a decade, and those libraries have accumulated lots of workarounds for browsers that nobody cares about anymore. (One that seems especially crazy from today's perspective is a separate implementation of event propagation, I believe because it dates back to browsers that only implemented one half of event bubbling/capture!)
It's very difficult to remove an old workaround because
1. it's hard to be completely sure nobody actually depends on the workaround (especially given the wide variety of apps and environments Google supports -- Firefox aside, Google JS runs on a lot of abandonware TVs), and
2. it's hard to prioritize doing such work, because it's low value (a few fewer bytes of JS) and carries nonzero risk (see point 1), without meaningfully moving any metrics you care about.
In all, how to eliminate accumulated cruft like this is a fascinating problem to me, in that I can't see how it ever gets done. And it's not a Google thing. Even the newer, cooler startup I now work at has similar "work around old Safari bug, not sure if it's safe to remove" codepaths that I can imagine will stick around forever, for similar reasons.
The longer I work in software, the more I see parallels to DNA. Yeah, sure, this parasitic chunk of blueprint seems like it doesn't do anything, but it's not harming survival of the species, and hey, maybe it will serve some important purpose to the propagation of the larger supporting organism somewhere in the long long tail of probability.
I've also adopted this truism, to help steer me away from the premature refactorings I'm prone to: clean code will either be discarded, or it will live long enough to get accidentally mutated into a mess that nobody wants to touch.
My biologist friends on Twitter were all abuzz about a "Vault Organelle"[0] the other day. Knocking out this organelle doesn't seem to do anything, but everyone seems to agree it must have some corner-case function to still be around.
The vault is not a new discovery, but since it doesn't exist in yeast or fruit flies it has somehow not gotten a lot of attention...
Hah! I had the same thought. I proposed that after Y2K we shift all the old COBOL programmers to decoding the human genome, as 30-year-old code bases are the closest thing we have.
In a few years I should make the same proposal but for all the Enterprise Java developers.
Python 3 was an attempt to remove cruft but it was drawn out and somewhat painful.
Apple on the other hand has managed to reinvent itself successfully several times by controlling its vertical tightly, and each time it does it there’s a natural shedding of cruft. I remember every single architectural move — 68k to PPC to Intel and now to ARM, executed impeccably each time. Moving a developer ecosystem with the times is truly one of the harder things but Apple seems to have managed to pull it off each time.
> Apple on the other hand has managed to reinvent itself successfully several times by controlling its vertical tightly, and each time it does it there’s a natural shedding of cruft. I remember every single architectural move — 68k to PPC to Intel and now to ARM, executed impeccably each time.
Wasn't there a recent Apple zero day due to some 56K modem code that wasn't coded correctly and had stuck around until now?
NeXT was, but Carbon and its remnants were not produced at NeXT. Carbon is long gone, but those remnants still exist in iTunes/the various apps split off from it.
And being multiplatform doesn’t suggest the absence of platform-specific code. I’ve hypothesized recently (to some chagrin) that Apple probably still maintains a skunkworks version of macOS for PPC, as insurance. It would be silly if they didn’t, given the history. So, probably yeah there’s a bunch of PPC code in macOS, but I’d bet it’s generally quite identifiable.
I don’t know why there would be a “the” contingency, but we know they’re also actively hiring RISC-V engineers. It’s definitely not obvious the field is as limited as that.
It’s not strong on embedded, mobile, or desktop, and it’s been displaced by x86 and now ARM in supercomputing, cloud, and enterprise. If it’s not dead, it’s on life support.
ARM wasn’t strong on many of the platforms it’s now running and will be soon. Apple has historically backed weak hardware platforms both to a fault and to astonishing success. Part of the way they did that was maintaining cross ISA builds internally for platforms no one would bet on.
I bet you there isn’t. It would have to be emulated, which would be too easy to spot. And there’s also no need. They’ve been using a much higher level tool chain for decades. There’s plenty of legacy code, sure, but no PPC. Rest assured.
Maybe we’re not talking about the same thing. You’re saying there’s PPC code running on current macOS. If it’s running on recent hardware, it’s running emulated, since Apple hasn’t shipped a PPC machine in more than a decade.
I’m saying the exact opposite: that there’s likely PPC source code in macOS still maintained just in case. I really doubt all of the Carbon remnants are ISA specific, the point of bringing that up was that macOS’s roots are not entirely NeXT and things that still exist are based on APIs largely from classic Mac OS.
I really doubt they are maintaining a PPC fork. It's not a trivial effort and it would be hard to justify the investment and even harder to motivate the talent needed.
ISA specific code is restricted to kernel and drivers. What's left of Carbon has been through 3 transitions (PPC to x86, x86 to x64 only, x64 to ARM). It's ISA clean all right.
In case of web apps/sites you can easily find truly dead code by inserting instrumentation/reporting in all the old workarounds and collecting it over some reasonable period of time.
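A minimal sketch of what that instrumentation could look like, assuming a hypothetical reporting endpoint (`/telemetry/dead-code` is made up, as are the helper names); `navigator.sendBeacon` is the standard API for fire-and-forget reporting:

```js
// Hypothetical helper: record that a legacy workaround actually ran.
function reportWorkaroundHit(id) {
  if (navigator.sendBeacon) {
    // sendBeacon is async and survives page unload.
    navigator.sendBeacon('/telemetry/dead-code', id);
  }
}

// Wrap the suspect branch instead of deleting it outright.
if (isAncientFirefox()) {          // the workaround's original condition (hypothetical)
  reportWorkaroundHit('firefox-lt-65-fix');
  applyLegacyFirefoxFix();         // the original workaround, unchanged (hypothetical)
}
```

After a reasonable collection window with zero reports, the branch becomes a much safer deletion candidate.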
Nature is experimenting all the time, though. Damage to DNA is random, and stable and mutable parts of the DNA have an equal chance of being impacted. Benign mutations accumulate, and non-viable ones don't persist for very long. Also, accidents during cell division can cause parts of the genome to be lost.
Sometimes more extreme changes happen during cell division when genomes are duplicated. Retroviruses that manage to reach the germ line can also have a huge impact. In these cases, the new genetic material can take over entirely new functions.
Human evolution is pretty slow because of a generation time of 20 to 30 years, but in more short-lived species such as fruit flies much more interesting things can be observed. Some of these indeed remind me of software development.
> In all, how to eliminate accumulated cruft like this is a fascinating problem to me, in that I can't see how it ever gets done. And it's not a Google thing. Even the newer, cooler startup I now work at has similar "work around old Safari bug, not sure if it's safe to remove" codepaths that I can imagine will stick around forever, for similar reasons.
I think this is because it's not purely a technology problem. Especially if you have enterprise customers, the real question you are asking is: by removing support for these specific browsers, are you breaking someone's important workflow in a way that lacks viable IT workarounds? Because if you are, the backpressure via business channels will be what forces you to keep the old cruft in the code.
In a previous job, I had to explicitly keep tabs on certain customers' IT policies w.r.t. browsers because that would ultimately inform our browser support matrix, and because it's enterprise, the actual browser versions could lag by 5 years or more. And when a single enterprise user is stuck on IE9 but the account is worth tens of thousands of dollars to your nascent startup, starting a fight with their IT department is one of the last things you want to risk customer goodwill on.
That's why I was goddamn ecstatic about Microsoft's move to Edge, because it meant historically stuck-on-Trident businesses now had a path to supporting more up-to-date browser tech on a much faster cadence.
> One that seems especially crazy from today's perspective is a separate implementation of event propagation, I believe because it dates back to browsers that only implemented one half of event bubbling/capture!
My oh my, that takes me back to the days of quirks mode and early versions of Internet Explorer. I made a good living through college helping design and ad agencies backport stuff to IE5/6/7 and became intimately familiar with the lack of event bubbling support and fan favorites like:
1. IE displaying a blank page with no errors whenever a CSS file of more than 4kb was loaded
2. Resolving CSS rendering issues between IE5/6/7 due to differing rendering strategies for the CSS box model
3. Debugging JavaScript in IE back when dev tools didn't exist.
For all people harp on the current state of the web, we have come a long, long way.
> and carries nonzero risk (see point 1), without meaningfully moving any metrics you care about.
> In all, how to eliminate accumulated cruft like this is a fascinating problem to me, in that I can't see how it ever gets done.
Ironically, the other side, which is Chrome, can be quite blasé about this. See the recent alert/confirm/prompt kerfuffle: https://dev.to/richharris/stay-alert-d
> One that seems especially crazy from today's perspective is a separate implementation of event propagation, I believe because it dates back to browsers that only implemented one half of event bubbling/capture!
MSIE only supported bubbling; Netscape 4 only supported capturing. The DOM Level 2 model (which combined both) started getting supported around IE5 / NS6 / Mozilla, though support would remain spotty for a while, especially on the IE side.
Microsoft's event model also worked off of a global (`window.event`) for the event information rather than a callback parameter, which was fun.
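For anyone who missed that era, this is roughly the shim everyone carried around to bridge the two models (the function name is illustrative; `attachEvent`, `window.event`, and `srcElement` are the real IE-era APIs):

```js
// Attach a handler under either event model of the era.
function addEvent(el, type, handler) {
  if (el.addEventListener) {
    // DOM Level 2: the event arrives as a callback parameter.
    el.addEventListener(type, handler, false); // false = bubbling phase
  } else if (el.attachEvent) {
    // IE model: 'on'-prefixed type, event data on the window.event
    // global, and no currentTarget, so we close over `el` ourselves.
    el.attachEvent('on' + type, function () {
      var e = window.event;
      e.target = e.target || e.srcElement; // normalize the target
      handler.call(el, e);                 // approximate currentTarget via `this`
    });
  }
}
```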
And there was no "current target" passed to the callback (or available on `window.event`), which meant you had to keep a handle on the event target from your callback. That caused a cycle between the DOM and JavaScript, which created a memory leak by default: IE's DOM was a COM API, while JScript was its own separate non-COM runtime. By creating a cycle between the two, the JS handle would keep a COM refcount > 1 on the page's DOM, the event target would (through the event handler) keep a foreign JScript handle, and then neither could be collected.
And because IE's COM instance was per-process, the leak would live until you closed IE itself: every reload of the page would create a new document in COM and an event handler in JScript, and leak the entire thing. You had to explicitly break the cycle by hand by detaching your events during e.g. onunload.
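The classic shape of the leak, and the hand-rolled cleanup it forced, looked something like this (simplified sketch; the element ID is made up):

```js
// The cycle: the DOM node references the handler (COM -> JScript),
// and the handler's closure references the node (JScript -> COM).
var button = document.getElementById('save');
button.onclick = function () {
  button.style.color = 'red'; // closure keeps `button` alive
};

// The fix: break the cycle by hand before the page goes away.
window.onunload = function () {
  button.onclick = null;
  button = null;
};
```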
In fact there was a tool called "drip" whose sole purpose was to open a page in an HTML control, then loop reloading the page and counting COM objects. If the number increased, you had a leak (almost certainly due to an undetached handler), and now… you had to find where it was.
edit: and the state of tooling at the time was… dismal is too kind a word. This was before Firebug and console APIs, so your tools were either popping `alert` for debugging (this was also before JSON support in browsers, so serializing objects for printing was neat) or installing a debugger, and the debuggers were universally shit and only handled JavaScript debugging:
Mozilla had Venkman, which was dog-slow and had a weirdly busy UI.
Microsoft had either the Script Debugger or Visual Studio's debugger. SD was extremely brittle and would crash if you looked at it funny, while VS was heavyweight and expensive. SD was also completely unable to debug scripts at the toplevel (of JS files or <script> tags); it only worked inside functions. I don't think VS supported that either, but I don't remember as clearly, so it might have. The best part was that you could still set breakpoints at the toplevel; they just wouldn't ever trigger.
I had totally forgotten all the time I used to invest back in the day in working around those IE quirks. At one point I pulled together stats and figured out we spent a quarter of our frontend development budget on IE compatibility. Thanks for the memories.
It’s kind of believable. Not to sound all old, but before we had preprocessors and build tools, we memorized all the quirks and hacks. It was common to spend a lot of effort on compat, but most of it went to weird edge cases that weren’t well known, or to being ambitious about moving the web forward before it was reliable to do so.
This isn't really a "state of the web these days" complaint; it's what happens any time you ship cross-platform code. Look at any reasonably mature C/C++ codebase and you'll see plenty of '#ifdef __linux__...#ifdef __OpenBSD__...'.
Having complete control over the environment your code runs in is the exception, not the rule, though the modern trend for SaaS might make you think that backend code looks cleaner than the frontend.
As a person who enjoys doing these kinds of cleanups, I find that disappointing. But when I think of the larger business perspective, these cleanups are difficult to justify, in that their value is hard to quantify.
Overall the net effect is a kind of death by a million cuts, but each of those cuts is individually a decent amount of work to clean up and fix, and doing so doesn't itself move any needles.
My latest perspective on this is that the only renewing force is also the one found in nature: occasionally you need to burn the whole forest down, or have the organism die, so that you can start again afresh. In a business setting this means using a new stack in a new app.
The trouble with a lot of this stuff is that it happened in the distant past when the end state wasn’t clear; hindsight says you should have labelled the hack with the conditions that required it, but that wasn’t at all obvious then: you often didn’t know what might change to make your hack unnecessary or counterproductive.
Nowadays, browser development is fairly principled so that you can express things like that—or better still, polyfill—but in the days of long ago you didn’t know who or what was going to win, and where browsers deviated it wasn’t a matter of comparing behaviour to spec and saying “this one is wrong” (which is normally how things will go these days), but rather… well, they’d probably keep on being different for a while, but maybe at some point one would cave and change to be more like the other, or maybe they’d change to something else altogether.
And because many or most situations were like this, people never got into the habit of annotating such things even in cases where it was possible (e.g. once addEventListener had won, any attachEvent usage you kept because you supported old IE should have been marked as unnecessary once IE9 was the baseline). An example of what that could have looked like is sketched below.
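Something as small as this would have made the eventual cleanup mechanical (the condition and names here are just examples of the kind of expiry note being described):

```js
// HACK: attachEvent fallback for IE < 9.
// Remove once the support baseline reaches IE9 (addEventListener everywhere).
function bindClick(el, onClick) {
  if (el.addEventListener) {
    el.addEventListener('click', onClick, false);
  } else {
    el.attachEvent('onclick', onClick);
  }
}
```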
They were the days of the wild west when frontend web development was far more ad hoc than principled.
> Put a big comment on it that says "HACK: Remove when X is no longer the case"
You know, every now and then I come across some comment like that from six years ago that is clearly no longer applicable, and I go and remove the hack... it feels really good. But I don't think anyone will go around looking for these trying to get rid of hacks, as there's ALWAYS something more important to do. The hack will probably remain there, mostly harmless, for decades to come :D So why would we spend time on that, other than by happenstance like I've just described?!
I mean, it's really satisfying to do, and if the originator did make the thing easy to remove, then it doesn't take much time away from other priorities.
Reminds me of a previous job where some legacy code had checks for Netscape 4. The code was written in 2003 and was still running in 2016, maybe 2017, but not any more.
That's a symptom of companies spending all their money on building new features without also spending money on regression testing.
If you build a feature to work around a third party, then surely you have that third party on hand to build automated tests against, to ensure that something didn't break and/or is still needed? If not, then you're not writing new features; you're writing hacks. And you are perpetuating the same problem onto other people.
Not always true. Especially for something running on client devices. You can't expect an app maker to own every possible device their code is going to run on.
Now maybe Google could do something like that, but 99% of people couldn't.
There's also, in this case, a question of if anyone is still using the device/browser/whatever. It sounds like they know removing the workaround will in fact break that use case, they just don't know if they should care or not.
The third party in this case might be Grandpa Joe. You can't exactly ask him to use the beta version of X and see if it's broken. Or check whether he's still on Firefox vOld.ancient.
It used to be that when someone got a fancy new device, like an iPhone, you just said: "If you want me to fix the issues, send me a device."
Now a lot can be emulated. I can, for example, start the Xcode simulator and pick the device someone has issues with. Or I can run Windows XP in VirtualBox, emulate a Samsung smartwatch, or run the Android emulator. So a lot of devices can be simulated or emulated, and you can try removing a line of code and re-run the test suite on all the virtual devices.
Lots of old code for obsolete devices is only one end of the problem! The other end is making sure all those old devices still work when you make changes and add new features! There's no point keeping an old fix if the app will crash on that device anyway.
The pile of high-quality regression testing Google Search has is miles above any other product I have worked on; but the number of search result pages that can possibly be generated and the number of supported browsers conspire to make the pile of possible breakages even higher.
Even so, I felt better about removing code in that JavaScript codebase than just about any other I have contributed to, and frequently did. (My total line count in JavaScript was negative for a while).
The problem was much less the difficulty of it and much more that the impact wasn't high enough to justify the time you needed to spend detecting these cases and proving their safety. The low-hanging fruit of stuff that already passed all the regression tests had often already been scooped up, and the remaining piles of cruft all had one edge case covered by some test, or were hard to trigger and confirm fixed manually.
The most likely case is that Firefox <65 still doesn't work (browsers almost always only fix bugs in new versions), and the question is whether or not it's still worth supporting the portion of traffic on pre-2019 Firefox.
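For context, a guess at the shape of the check being discussed; the exact logic is hypothetical, but version-gating via UA sniffing typically looks like this:

```js
// Hypothetical reconstruction of a "Firefox < 65" guard.
var match = navigator.userAgent.match(/Firefox\/(\d+)/);
var firefoxMajor = match ? parseInt(match[1], 10) : null;

if (firefoxMajor !== null && firefoxMajor < 65) {
  applyLegacyWorkaround(); // hypothetical helper for the old fix
}
```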
While I agree in theory, for a long time there was no easy way to test frontend code across multiple browsers. The effort was simply too high for anyone.
Nowadays front-end testing is doable but still far from being a pleasant experience. I did it at some jobs; at others we just didn't think the benefits were worth the hassle.
Now that I'm building a small business my backend is very well tested while the front-end doesn't have tests.
This is partly because of the setup cost, and partly because the development experience is so bad that I can see our small company just throwing everything out of the window, keeping the HTML structure, and rebuilding the thin UI with some better technology along the way (the current stack is React + Next + Tailwind; we've been doing React for 5+ years).
Interesting! Could you say a bit about what you think might come next? Or from a different angle, the pain points you have that make you think a rewrite in a new stack is preferable?
I don't think we're waiting for some new concept that's missing. I've been hoping for someone to maintain a popular JS library implementing real functional reactive programming and arrows (like https://hackage.haskell.org/package/auto), but I can live without it.
I'm just waiting for something polished, with a small, simple codebase (not React with fibers; yes to something like solidjs or preact), with widespread types support (not TypeScript and the quest for implementing types for every dependency), ideally not creating huge bundles (I like solidjs / svelte), with a core solution to manage state (I like Elm), ideally supporting CSS encapsulation and semantic CSS (I like CSS modules, MaintainableCSS), and mainstream enough that I can hire people to work with it without having to become a teacher.
I think Elm got 90% there, but it failed hard on the community side.
I'm thinking of moving to a Rust framework (e.g. seed-rs) next, as soon as they get popular enough and after checking whether the Wasm bundle sizes make sense.