For a long time, when training programmers, I'd explain that bugs come about when we write code. Code left alone "does not rust".
In the last decade I've swung the other way. While the code does indeed "not rust", it does decay.
Primarily this is because software is now more integrated. Integration means communication, and there are changes happening all the time. SSLv3 and TLSv1 went away. XML became JSON. And don't get me started on SaaS APIs that have a "new version" every couple of years. And yes, the elephant in the room: security.
Unfortunately lots of developers in my age group believe in "if it's not broken, don't fix it". They are the hardest cohort to convince that "you should update things even when it isn't urgent; updating when it's urgent is no fun at all."
There's no habit as hard to break as a best practice that's turned bad.
I think I'm uniquely qualified to speak authoritatively on this subject, as I regularly work in code bases dating back to the mid-1950s (nearly 70 years old, if you're counting).
Code left alone absolutely does rust. Everything around the code changes. You can wax philosophical about how "technically the code didn't rust, the world did", but at the end of the day that old code has aged into a position where it no longer functions optimally, or even correctly.
Just to tease with an example: the original machine some of this code was written for had only a few kilobytes of memory, half of which was taken up by the OS and compiler. To fit the problem you were working on into memory, they effectively wrote a custom swap allocator that looked and felt like in-memory arrays but was actually backed by tape storage. Fast forward to today, where we have a single compute node with 128 physical CPU cores and a terabyte of memory: the code still diligently uses disk storage to minimize RAM consumption, yet runs like dog shit on a single thread anyway. Not to mention all the integers used to store pointers have had to be widened over the years as the machines went from 48-bit words to 32-bit words to 64-bit words.
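To make the pattern concrete, here is a minimal Python sketch of the kind of disk-backed "array" abstraction described above. It's purely illustrative (the class name and layout are my own invention, not the original code), but it shows how something can look like an in-memory array while every access quietly hits storage:

    # Illustrative only: an array-like object that keeps its elements in a
    # temporary file instead of RAM, in the spirit of the old swap allocator.
    import struct
    import tempfile

    class DiskArray:
        """Fixed-size array of 64-bit signed integers backed by a file."""

        ITEM = struct.Struct("<q")  # 8 bytes per slot

        def __init__(self, length):
            self._file = tempfile.TemporaryFile()
            self._length = length
            self._file.write(b"\x00" * self.ITEM.size * length)

        def __len__(self):
            return self._length

        def __getitem__(self, index):
            self._file.seek(index * self.ITEM.size)
            return self.ITEM.unpack(self._file.read(self.ITEM.size))[0]

        def __setitem__(self, index, value):
            self._file.seek(index * self.ITEM.size)
            self._file.write(self.ITEM.pack(value))

    # Looks and feels like a list, but every access is a seek on storage.
    a = DiskArray(1000)
    a[42] = 7
    print(a[42])  # 7

On today's hardware the honest fix is usually to delete the indirection and just hold the data in memory.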
Integration means communication, and there are changes happening all the time. SSLv3 and TLSv1 went away.
The problem is a lack of modularity and standardised interfaces where it matters (and modularity where it doesn't, causing additional useless complexity), and incentives that reward planned/forced obsolescence.
I’d say interfaces are part of the problem. People are happy to change interfaces when it makes the implementation of a component easier. I’ve seen interfaces change without notice and this imposes maintenance overhead.
I’d say that complexity is the other part of the problem. As developers, we like to organize functionality into packages and version them based on the type of changes we make. We like layers of abstraction that contain classes of problems so we can work on them independently. But all software is a list of instructions, and all the ways those instructions can interact cannot be captured by a semantic version number or a unit test. Subtle changes can break functionality in surprising ways.
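As a contrived illustration (the function names below are made up, not from any real library): a "patch" release that only changes an implementation detail can still break callers in ways no version number or signature reveals.

    # v1.2.3: returns a list
    def load_records_v123(path):
        with open(path) as f:
            return [line.strip() for line in f]

    # v1.2.4: "just a memory optimization", returns a generator instead
    def load_records_v124(path):
        with open(path) as f:
            for line in f:
                yield line.strip()

    # A caller written against v1.2.3:
    #   records = load_records("data.txt")
    #   print(len(records))        # TypeError with v1.2.4: generators have no len()
    #   for r in records: ...      # and a generator is exhausted after one pass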
One of my favorite forms of testing is offline E2E testing, and Python and Go are very adept at it, which is nice. The idea is that you use the exposed APIs (read: anything meant for consumption; it could be REST, but it could also be important packages) just as a user would use them. The nature of this testing is that it captures the discrete contracts you're mentioning by exercising the API itself, and enforces them.
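A minimal sketch of what I mean, in Python (the tiny "inventory" API below is hypothetical and stands in for whatever you actually ship): the test drives only the public surface, exactly as a user would, and nothing in it touches the network, so it runs offline.

    import unittest

    class InMemoryStore:
        """Stand-in backend so the whole test runs offline."""
        def __init__(self):
            self._items = []
        def save(self, item):
            self._items.append(item)
        def all(self):
            return list(self._items)

    class InventoryClient:
        """The exposed API that users (and this E2E test) consume."""
        def __init__(self, store):
            self._store = store
        def add_item(self, name, qty):
            self._store.save({"name": name, "qty": qty})
        def list_items(self):
            return self._store.all()

    class OfflineEndToEndTest(unittest.TestCase):
        def test_add_then_list_round_trip(self):
            # Only the public API is used; the assertion pins down the
            # contract a real consumer depends on.
            client = InventoryClient(InMemoryStore())
            client.add_item("widget", qty=3)
            self.assertEqual(client.list_items(), [{"name": "widget", "qty": 3}])

    if __name__ == "__main__":
        unittest.main()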
There's a discussion to be had about the relative volume of E2E vs unit tests, but that isn't this discussion.
I fear good ol’ fashioned information hiding is being forgotten in modern software design. Systems are increasingly complicated, and developers are too busy just trying to get all their code to fit together to worry about the difference between an interface and encapsulation.
Some of the modern ecosystems have gone entirely bonkers (think nodejs/npm): hundreds or thousands of dependencies for the simplest of things, basically an unmanageable "supply chain".
Sure, we can talk about what a good approach to updates and dependency hygiene looks like, whether packages should "freeze" their dependencies, and how breaking changes should (or shouldn't) be communicated through version numbers, but we've also seen the rise of the non-NIH crowd that can't be bothered to implement 5 lines of code.
If you commit to updating whenever there's a newer version of any dep you are using, you commit yourself to a world of pain and fighting even more bugs.
I actually believe LTS mirrors of public repositories (PyPI, NPM, ...) could be a decent way to earn some money in today's dependency-heavy world.
That's really a separate issue. Even if all of your code is first party and you've been crazy enough to write your own TLS library, XML parser, etc., all the things he said still apply, because most code lives in an ecosystem of other systems.
He was advocating for continually updating whenever the environment changes. Dependencies are a natural part of that environment, and I am highlighting how even doing just that is troublesome. With any mildly complex project, you would simply be spending all your time doing dependency updates.
I think we need to look for a better balance of backwards compatibility in the tools we use (both external systems and external libraries) and to understand the cost of importing almost-trivial dependencies, and I believe there might even be an opportunity for someone to start a business there ("I'll backport security fixes for your 10 dependencies for you").
On the other side of the coin, if you freeze your dependencies and commit to not updating whenever there's a newer version of any dep you are using, you commit yourself to continuously (and rapidly!) evaluating all the security advisories for all those dependencies, to see if there are any bugs you have to mitigate before your system gets exploited.
You can't simply choose to never update any dependencies - the only question is how you decide when and which updates get made, or whether you delegate that decision to others.
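Whichever policy you pick, the first step is knowing when your environment has drifted from what you decided on. A rough Python sketch, assuming a hypothetical hand-maintained pin list (real projects would read this from a lock or requirements file and pair it with an advisory feed):

    from importlib import metadata

    # Hypothetical pin list; in practice this comes from a lock file.
    PINNED = {
        "requests": "2.31.0",
        "urllib3": "2.0.7",
    }

    def check_pins(pins):
        """Report packages whose installed version differs from the pin."""
        drift = {}
        for name, wanted in pins.items():
            try:
                installed = metadata.version(name)
            except metadata.PackageNotFoundError:
                installed = None
            if installed != wanted:
                drift[name] = (wanted, installed)
        return drift

    if __name__ == "__main__":
        for name, (wanted, installed) in check_pins(PINNED).items():
            print(f"{name}: pinned {wanted}, installed {installed}")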
Yeah, I don't think that's an answer either, which is why I talked about an LTS (Long Term Support) approach for your core dependencies.
E.g. all packages in the Ubuntu LTS "main" archive, or in Red Hat releases, get supported for 5-10 years with security fixes while maintaining backwards compatibility.
However, even Canonical has realized people will accept breakage, so "main" has been dwindling over time to reduce their costs. That also applies to snaps and flatpaks — no guarantees about them at all.
This overacceptance of external dependencies and compatibility-breaking API changes all started with the shift from software-as-product to software-as-a-service. Honestly it feels like the resulting churn and busywork is on some level a deliberate ploy to create more job security for software devs.
I don't think it's a deliberate ploy: it's a micro-optimization that's localised to the team/company building the API/product, and it improves efficiency at that level.
You get to build and maintain only a single version of your product, forgetting about maintaining old versions. This means pushing backwards-compatibility challenges onto your customers (e.g. other developers, in the case of an API); the fact that customers have accepted that tells us the optimization is working, though I am very much not a fan.
Someone already mentioned that your ECU doesn't really need to talk to other systems (besides perhaps via the OBD-II port, using a well-established, industry-standard protocol backed by regulation).
And I'll also add that at no point are you ever going to need to flash the ECU with updated firmware.
So while it's a cute example, it isn't really a practical one. Typically when people complain about "code rusting", it's in the context of a codebase that needs to adapt to the world around it, whether that's communicating with other systems or even just accepting new features within its own little fief.
If you had an appreciation for emissions technology and policy, you would be able to see how it has swung past the point of its intended purpose. People are dismantling new engines and finding all kinds of issues that will severely limit the useful life of the vehicle and/or cost the consumer a ridiculous amount of money in repair or replacement costs.
The lengths to which manufacturers have gone to "hit the numbers" have resulted in these major deficiencies. This has all played out over the last decade. Prior to that, a lot of the emissions standards had a positive effect (in most cases).
The automotive manufacturers and the regulators are colluding without enough transparency and it has corrupted the process.
Most likely not, and that's a good thing, because nobody has yet found a way to limit emissions without seriously impacting performance (especially responsiveness and low-rev behavior). Damn whoever invented drive-by-wire!