Your point stands, but FWIW, this is actually GlobalSign GMO[0], one of the largest TLS certificate authorities (CAs), so they certainly have a vested interest in making sure OpenSSL is secure. (GlobalSign also partnered with Cloudflare for TLS certificates[1])
I don't think the post is inaccurate or the authors untrustworthy, but I don't think it's a good idea to rely on their blog to get OpenSSL alerts, especially when there is an official, high signal-to-noise, alternative. If someone reads this HN submission and wants to make sure they get alerted about the next critical vulnerability, they should subscribe.
It’s not practical to subscribe to security feeds for every OSS project. Keeping in touch with the tech community is a valid alternative, in combination with patching best practices.
Ubuntu 22.04 & RHEL 9 are the major distros impacted. Docker images built on ubuntu:latest will also be impacted. The latest releases of Alpine/Debian/AL2 are not impacted; they use the 1.1.x lineage.
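If you want to check whether a given host is on the affected lineage, here's a rough sketch of parsing `openssl version` output (assuming, per the pre-announcement, that 3.0.x before the fixed 3.0.7 is the affected range):

```python
import re

def is_affected(version_string: str) -> bool:
    """True if an `openssl version` banner falls in the affected 3.0.0-3.0.6 range."""
    m = re.search(r"OpenSSL (\d+)\.(\d+)\.(\d+)", version_string)
    if not m:
        raise ValueError(f"unrecognized version string: {version_string!r}")
    major, minor, patch = map(int, m.groups())
    # Only the 3.0.x lineage before the fixed 3.0.7 release is affected;
    # the 1.1.1 lineage is not (though 1.1.1s also lands on Tuesday).
    return (major, minor) == (3, 0) and patch < 7

print(is_affected("OpenSSL 3.0.2 15 Mar 2022"))   # True
print(is_affected("OpenSSL 1.1.1s  1 Nov 2022"))  # False
```

On a live host you'd feed it the actual `openssl version` output, e.g. via subprocess.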
Amazon Linux 2022 Preview (the upcoming successor to Amazon Linux 2) is also using OpenSSL 3. Had to shut down one host I have using that for an ongoing project until Amazon makes another RC package update.
That was, as it turns out, the only OpenSSL 3 host I'm presently dealing with.
>If you’re using version 1.1.1, this vulnerability doesn’t affect you, but there is a 1.1.1 update coming on Tuesday as well, version 1.1.1s, which you’re still going to need to update to anyway so you might as well schedule some time on Tuesday, too.
> And by widely leveraged, I mean almost completely ubiquitous, if you’re using HTTPS, chances are you’re using OpenSSL. Almost everyone is.
This is probably a bit of an exaggeration. There are quite a few other SSL implementations that actually are also "widely leveraged"[1]. In particular, LibreSSL was forked and cleaned up after Heartbleed. Google uses BoringSSL. GnuTLS is widely used and unrelated to OpenSSL.
Yep, we've been using rustls for most of a year with no issues. Memory safety go brrr :) Would encourage others to take a look at it. Ring is a good base that's been audited, and the lower attack surface from having none of the old OpenSSL bloat code makes a difference.
Ring, unfortunately, has quite toxic project leadership with a history of making hostile decisions towards their contributors and userbase ( see https://github.com/briansmith/ring/issues/774 for one example ). Something to be aware of if you're considering building with it.
That looks reasonable to me. The author is blunt about how he does things and why he does them, while being polite.
Don't know what 'yanking' a crate means specifically, but that seems like an ecosystem problem; in Java for example, maven dependencies are supposed to be immutable and the largest distributor (mvnrepository) doesn't allow updating a package.
Yanking crates makes them entirely unavailable for downstream users unless they've already locked the yanked version locally. This breaks dependencies and unlocked builds. Yanking crates is a last resort measure (a fundamentally broken release or a release for which you've issued a security advisory) and not something which should be done trivially regardless of whether or not it breaks all of your users for reasons related only to one's own conflicts of interest (i.e. You won't support it because the person asking you doesn't have a support contract). The author is most certainly entitled to be blunt about their support policy. They are not entitled to disregard community conventions when using community provided package hosting services.
Note that the ring maintainer has long since stopped yanking releases.
More importantly, it seems ring has recently hit a long dry spell of getting no new commits at all. There has been some light maintenance work recently, but outside contributions haven't had a credible path into the main branch for a long while now.
The old yanking policy was extra work I did with the intent to help people. It was unfortunate that Cargo had that bug, but also I should have been much more diplomatic in how I dealt with it.
I've just returned from a long break and I do have a concrete plan to catch up on the backlog. I have concrete plans for making it easier for people to get their PRs merged, making ring portable to all platforms, and eliminating all the remaining bits of C code in the next two quarters.
Feel free to reach out privately if you want to talk: brian@briansmith.org.
That's awesome. Thanks heaps, Brian. I really appreciate your re-commitment. Apropos, Re: making ring portable to all platforms: IBM have been graciously maintaining an up-to-date patchset for Ring for years now, and there's an outstanding PR here you may not have seen, since they filed it in 2020... https://github.com/briansmith/ring/pull/1057
I wouldn't call it toxic per se. It's a very business oriented approach.
Using ring without a support contract is clearly a terrible decision. It's not a library any open source project or other library should depend on. Doing so will break your builds and the builds of others that use your library.
Generally you are right that this yanking policy isn't great, there is better tooling around to address security vulnerabilities than cargo yank by the author of the library. But it seems to me that it has been reconsidered. ring hasn't yanked versions of its library for a while, outside of one very recent yanking of an alpha release.
There are also other recent improvements. ring used to require the latest rustc very quickly; nowadays it has an MSRV older than six months. It also used to be impossible to link multiple versions of ring into one binary, due to the native dependencies ring uses. This issue has thankfully been addressed as well.
It looks like the author of the ring library tried to protect their user base from security vulnerabilities, but was hit by a bug(?) in cargo, which changed yanked libraries from a warning to an error. Anyway, no promises until API version 1.0.
I don't really get the reasoning (if there's a security bug, surely you know you've fixed it?). I fully support taking down versions that have known vulnerabilities, like the broken OpenSSL crate mentioned in that discussion.
The bug in cargo made the issues worse, but not by much. Pulling crates/packages from the repository will break the build for almost everyone, because updating your dependencies to the latest version, especially with an API that is not backwards compatible, takes time. The author suggests bumping versions until the application compiles again, and that approach can easily take an afternoon if one of your dependencies hasn't updated yet.
It's the equivalent of the curl team deciding to pull all non-modified distributions of curl more than two versions old, or Angular/React/Svelte/Whatever doing the same, because they don't know if those versions are vulnerable or not.
There would be chaos as suddenly Linux distributions could no longer be built, reproducible builds would probably fail in the forked copies, most web applications would fail to start.
Nobody is asking the author to support older versions, or to not break the API between versions. The API is unstable, it's not had a 1.0 release and it probably never will. All people ask is leaving the old versions up so other projects can still build.
Since the project is open source, I suppose it's possible for someone else to create a ring-unpulled crate that's just the ring crate with the old versions still up. Should be doable with nothing more than a bash script running on a server somewhere.
Security is a tradeoff between usability and safety. In this case, the author of ring received a suggestion from Rust security group to yank old, unsupported versions to be on the safe side, which created usability problems with ring. Security-minded people are OK with that, while security-ignorant people are not.
Availability is an important part of the CIA triad. A dependency that is on the verge of being pulled at any time is a security issue in itself.
Not being able to build a fixed release for a vulnerability you discovered in your own code has a bigger impact than a theoretical vulnerability that results in... not getting feature and API updates?
The code author can release his code in whatever way he wants and he can take down all but the very latest version of his package if he wants. It just makes his package unusable as a dependency for any real-world applications.
That's a rather charitable interpretation of events. Events which didn't need to happen. Events which happened for dubious reasons and which depended on dubious reasoning.
Oh no. Just because they donate their time and effort the community can't criticise them; even if how they work and interact with the community hurts and embarrasses the community... it was all free after all and the community added nothing to it! /s
That's really not how it works when you work with others and use others' resources and time. Nobody's entitled to their effort, but they're also not immune to criticism for that effort if they hold it out for others. If they expect immunity because it's 'free', they're just wasting people's time and preventing the development of a solution that doesn't.
Is GnuTLS widely used? People bring it up, but I've never actually seen it in a codebase (and I deal with a lot of x509 + PKCS code). I've seen more wolfSSL and mbedTLS than GnuTLS.
From looking at my currently installed packages, there are quite a few that depend on gnutls: ffmpeg, gnupg, libcups, vlc, and wget look like the ones that would be most well known. It's possible that they only use it as an option, but normally I'd expect it to only be an optional dependency if that were the case. I haven't looked at any of their codebases though, so no idea what they use it for!
How is wolfSSL/SSH? I have come across them a couple of times, but not having anyone/anything "famous" behind them, compared to GnuTLS or mbedTLS, always made me somewhat wary.
I went through that process when I first heard the announcement. The fixes have been applied to master, which is tagged for a release. You can search issues by severity tag, and it becomes pretty obvious which of the few issues is related to the problem (one of the contributors flat out states a change must be merged for a major security fix). I went looking at PRs and came across a buffer overflow. I stopped at this point; you are welcome to reverse engineer the changes and create the exploit. I moved on to more interesting problems.
Edit: once upon a time I went to a google container security conference and the kubernetes vulnerability disclosure process was described. I noticed there is at least 12-18 hours from patching a vulnerability before binaries are generated and the public notice is made. More than enough time to identify, exploit, and 0day into the wild
This is a stupid question but is the patch not being released until Nov 1, or is the security patch already in the Ubuntu updates and they're just not publicly releasing the vuln until Nov 1?
I don’t think I was clear in my original post. The patch is in master, but the latest release, 3.0.7, has not been tagged yet, and so releases have not been drafted. The patches may be released early to large or popular organizations, but I’m not sure of OpenSSL's critical patch process.
The idea/hope/point of the embargo is not to keep the issue secret forever, but to keep it secret long enough that fixes can be released and most people can update when/before the details become widely known.
I wouldn't be surprised if there's a "proper announcement" of the actual issue shortly after patches are widely available.
> If you’re using version 1.1.1, this vulnerability doesn’t affect you
AFAIK, LibreSSL forked even before that - when OpenSSL was version 1.0 or 0.9 even. So likely not affected - unless a similar issue appeared there after the fork.
Parallel forks sometimes keep incorporating quite a lot of changes from each other, in the *BSD fork tradition. I'd also guess that LibreSSL is not affected but it's not a foregone conclusion.
In the previous OpenSSH vs OpenSSL 3 bug it went like this:
> The issue has been identified in OpenSSL version 3.0.4, which was released on June 21, 2022, and impacts x64 systems with the AVX-512 instruction set. OpenSSL 1.1.1 as well as OpenSSL forks BoringSSL and LibreSSL are not affected.
(https://thehackernews.com/2022/06/openssh-to-release-securit...)
What puzzles me is that I am using libssl.so.10 and libssl3.so at the same time. libssl3.so belongs to the nss package and not to the openssl package. Am I affected?
The libssl3.so shared object from NSS just has a similar name. This is very confusing, but NSS has been around for a long time, before OpenSSL became the de-facto standard (sort-of), and certainly long before OpenSSL 3, so now we're "stuck" with this confusion.
It is very unlikely that this affects OpenSSH regardless. Only the cryptographic primitives are used from OpenSSL, and none of the complexity of the SSL functions. The cryptographic functions themselves are small and extremely well tested.
OpenSSH (or commonly used variants thereof?) supports X.509 certificates, would they really reimplement that can of worms instead of using already linked libssl functions? Especially since on OpenSSH's home platform libssl is LibreSSL which they consider safer than OpenSSL.
If you're using distro packages, their own "packaging policy" should offer some level of assurance via policy about static linking (since as far as I know, all major distros dynamically link, to make this kind of bugfix easier).
If you're talking non distro packages (proprietary, or anything built manually from PPA or equivalent, or binaries dumped inside containers), this won't help.
Sure, but there might also be binaries outside the distro which link things statically, because it makes distribution easier.
This is one of the "benefits" of go, where afaik many things are linked statically.
I'm currently playing around with grepping some function-names to find out if something uses libssl, and then check if ldd to see if libssl is loaded dynamically or not.
> Sure, but there might also be binaries outside the distro which link things statically, because it makes distribution easier.
Then you'd use ldd to print the shared objects (shared libraries) required by each program or shared object: find … | xargs ldd.
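That find/ldd pipeline can be wrapped up in a small script. A sketch (dynamic linking only; statically linked copies won't show up here):

```python
import os
import subprocess
from pathlib import Path

def linked_libssl(ldd_output: str) -> list[str]:
    """Extract libssl shared-object names from ldd's output lines."""
    found = []
    for line in ldd_output.splitlines():
        name = line.strip().split(" ")[0]
        if name.startswith("libssl.so"):
            found.append(name)
    return found

def scan(directory: str) -> dict[str, list[str]]:
    """Map each executable under `directory` to the libssl objects it links."""
    results = {}
    for path in Path(directory).rglob("*"):
        if not (path.is_file() and os.access(path, os.X_OK)):
            continue
        proc = subprocess.run(["ldd", str(path)],
                              capture_output=True, text=True)
        libs = linked_libssl(proc.stdout)
        if libs:
            results[str(path)] = libs
    return results
```

Something like `scan("/usr/sbin")` then tells you, per binary, whether the hit is libssl.so.3 (OpenSSL 3) or libssl.so.1.1.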
> This is one of the "benefits" of go, where afaik many things are linked statically.
The static linking makes the situation worse: with dynamic linking you update one package and then restart any currently running processes. There are even helpful utilities for doing the latter.
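One way such utilities work can be sketched by scanning /proc/<pid>/maps for mappings of a libssl whose file on disk has since been replaced; the kernel marks those "(deleted)". Names here are illustrative:

```python
from pathlib import Path

def maps_show_stale_libssl(maps_text: str) -> bool:
    """True if a /proc/<pid>/maps dump still maps a deleted libssl,
    i.e. the process loaded the library before it was updated on disk."""
    return any("libssl" in line and "(deleted)" in line
               for line in maps_text.splitlines())

def pids_needing_restart() -> list[int]:
    """PIDs of processes still running against a replaced libssl."""
    pids = []
    for maps in Path("/proc").glob("[0-9]*/maps"):
        try:
            text = maps.read_text()
        except (PermissionError, FileNotFoundError):
            continue  # not our process, or it exited mid-scan
        if maps_show_stale_libssl(text):
            pids.append(int(maps.parent.name))
    return pids
```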
Indeed - this is one of the positives of Go, as all the dependencies get linked statically, giving nice portable "single binary" solutions.
Grepping function names seems a reasonable approach, as long as you're not trying to detect something that is obfuscating its use of libssl (i.e. by mangling strings together).
It appears that if you strip a binary, any definitive information about the statically linked libraries is lost, beyond function names and argument combinations. I believe there are tools in IDA and similar which can match functions based on their input argument types and names. That might help you match to a rough version of the upstream library, if parameters changed.
Unsure why people keep referring to static linking as a positive in this thread? Downstream consumers of statically-linked binaries have no practical way to scan their systems for known-vulnerable versions of libraries, that seems a profoundly negative consequence to me.
In Go you can use "go version -m my-go-binary" and get a list of dependencies and other build information. For example (may wrap a bit ugly on HN):
[~]% go version -m =godoc
/home/belta/bin/godoc: go1.18.3
path golang.org/x/tools/cmd/godoc
mod golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
dep github.com/yuin/goldmark v1.4.13 h1:fVcFKWvrslecOb/tg+Cc05dkeYx540o0FuFt3nUVDoE=
dep golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
dep golang.org/x/net v0.0.0-20220722155237-a158d28d115b h1:PxfKdU9lEEDYjdIzOtC4qFWgkU2rGHdKlKowJSMN9h0=
dep golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f h1:v4INt8xihDGvnrfjMDVXGxw9wrfxYyCjk0KbXjhR55s=
build -compiler=gc
build CGO_ENABLED=0
build GOARCH=amd64
build GOOS=linux
build GOAMD64=v1
So scanning your binaries, if you want, is fairly easy.
Other than that, this is the "Great Static vs. Dynamic Linking Debate", which has been done quite a few times. I have little desire to repeat it, but both approaches have their advantages and downsides, and with good tooling (like the above) I think many downsides of static linking can be managed quite well.
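For bulk scanning, the `go version -m` output above parses easily. A sketch, with the field layout assumed from that example:

```python
def parse_go_version_m(output: str) -> list[tuple[str, str]]:
    """Collect (module, version) pairs from `go version -m` output,
    covering both the main module ("mod") and dependencies ("dep")."""
    modules = []
    for line in output.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[0] in ("mod", "dep"):
            modules.append((fields[1], fields[2]))
    return modules

sample = (
    "/home/belta/bin/godoc: go1.18.3\n"
    "\tpath\tgolang.org/x/tools/cmd/godoc\n"
    "\tmod\tgolang.org/x/tools\tv0.1.12\th1:VveCTK38A2rkS8Zq...\n"
    "\tdep\tgithub.com/yuin/goldmark\tv1.4.13\th1:fVcFKWvrslecOb...\n"
)
print(parse_go_version_m(sample))
# [('golang.org/x/tools', 'v0.1.12'), ('github.com/yuin/goldmark', 'v1.4.13')]
```

From there you can match module versions against whatever advisory feed you follow.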
That's a great solution, actually. I'm not familiar with go, but I thought it compiled to native code and did not require a virtual machine at runtime. So is this metadata enforced/inserted by the compiler into the binary, or does it require specific build tooling that may or may not be available on end-user systems?
The compiler adds it in the .go.buildinfo section; something like "objdump -sj .go.buildinfo" should also display it, kind of, but it needs a bit more parsing to display well. You need some tooling, but it's not horribly complex, and I think some kind of tooling would be unavoidable anyway.
I think there's a lot of potential for better tooling here in the form of language support, package manager support, etc.
Unfortunately that shows the Go libraries involved, but not the C libraries those Go libraries depend on (right?). So that might tell me that a Go program includes 'crypto', but not whether or not that 'crypto' linked in OpenSSL 1.1.1 or OpenSSL 3.x.
So the static linking dependency analysis problem remains.
cgo stuff gets dynamically linked by default, although you can link it statically (usually); the OpenSSL bindings for Go seem to support it, but almost no one uses these[1].
You're right that these don't show up, but in practice it's mostly a non-issue, as cgo isn't used that often, and purely statically linked cgo even less so; when it is, it's often in the form of e.g. go-sqlite3, where the SQLite version is tied to the module version.
The problem is that "static linking" means "grab whatever is in /usr/lib/libfoo.a, whatever that may be". I'm not sure if there is a good way to solve this in a generic way that works everywhere, outside of keeping track of sums.
Indeed, there are certainly good reasons to statically link programs, but this type of security issue is the complete opposite; it's a known drawback that you may accept in exchange for the other benefits of static linking.
Search the binaries for strings/messages that are unique to libssl. It's the only way (and it still doesn't give a conclusive answer) if the provenance of the binary is unknown.
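That string search works because OpenSSL embeds its version banner, which usually survives static linking (unless deliberately stripped or obfuscated). A sketch over raw file contents, with a fabricated blob standing in for a real binary:

```python
import re

def openssl_versions_in(blob: bytes) -> list[str]:
    """Find embedded OpenSSL version banners in raw binary contents."""
    return [m.decode() for m in re.findall(rb"OpenSSL \d+\.\d+\.\d+[a-z]?", blob)]

# Fabricated "binary" with an embedded banner, for illustration only.
fake_binary = b"\x7fELF\x02\x01...OpenSSL 3.0.5 5 Jul 2022...\x00\x00"
print(openssl_versions_in(fake_binary))  # ['OpenSSL 3.0.5']
```

In practice you'd run this over every file you can't otherwise account for, and treat a hit as a lead to investigate, not a conclusive answer.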
Clearly some people prefer to know in advance, to make sure they're prepared to patch critical servers on that day, and perhaps even take some offline until then.
It's useless because I don't know if I need to care. Vulnerabilities in openssl are nothing new so as far as I know this is just par for the course, and I get nothing out of it as of yet.
So what do you propose is the alternative? Not tell you about the vulnerability at all until a patch is released? Publish all the details about the vulnerability before a patch is available?
"be prepared to update affected systems at $point_in_time" seems actionable to me. You for some reason thinking that such a warning doesn't warrant taking the recommended action doesn't mean it isn't actionable, it means you choose to ignore it.
An update is nothing special that I have to be prepared for. I do it all the time across all my systems, and it's largely automated. If a single update is such a burden that you must prepare days in advance for then perhaps there's room to improve the processes.
ok, so the actionable thing is "make a note to check for and run updates on Nov 1, even though it might be a public holiday for you" and you're done. actionable != lots of effort, but I still appreciate a warning if I'm supposed to work on a holiday.
And yes, plenty places are not at the point where this is a "press button and done" activity, even if it should be. (e.g. pretty much everyone who is buying any kind of "appliance" and isn't just running open-source stuff now knows to go check with vendors)
"I'm good at DevOps and everyone else should be too", feels tangential, and isn't going to help you when your banking session gets compromised because your bank wasn't prepared to roll this out through any expedited process versus their regulatory compliant, slow process.
(As an example/thought experiment. I make no claims about the vulnerability at hand.)
I don't know anything about the update processes banks use. I would hope they wouldn't have to jump through hoops to apply a security update. Didn't they learn this already?
What do you expect banks and other regulated industries do? YOLO patch whatever and whenever?
I don't work in a regulated industry where it's required, but we do similar with a proper change control process and there's not a single individual that's authorised to perform changes without oversight, (even if that oversight from senior leadership comes retrospectively).
I spent a few minutes checking my cmdb for openssl3, and have allocated 30 minutes on Tuesday to upgrade the few machines that have openssl3.
When corporate infosec starts to panic, probably about Thursday based on the jndi issue, I'll be able to point them to our log which shows how it was handled.
No, I'm not convinced that the vulnerability is something I need to care about, because there's no details about it. I can make that determination when I have details. I am well aware of that project's history. I see no information given that would imply this to be anything more special than a regular update for me, for which the process I have already streamlined. I understand the practice of not giving the details until there's a patch and I'm OK with that, but there's now been over 10 submissions to HN about it with over 150 combined comments and all we know is that an update is coming. I'm not buying into the hype.
This is most likely, but another possibility is that it was coordinated to happen on a bank holiday so systems can be updated with less impact from service outages.
companies that abuse on call duty for planned maintenance suck. If it's something predictable or plannable, it's not on call. Hire people to work that day.
Are there TLS implementations in Ada that have a track record? I don't think that remark was aimed at the language, only pointing out that writing a new TLS library in a safer language is not exactly a trivial expenditure of resources.
That's one possible reading of the situation, certainly. Another, however, is that openssl is uniquely poor quite separately from the language it happens to be implemented in. Unfortunately TLS libraries are thin enough on the ground that we can't really pick out patterns. It's like saying that microkernels are bad because HURD has serious issues; it could be true, but comparing by the well-known problem child isn't necessarily a good way to tell.
> Another, however, is that openssl is uniquely poor quite separately from the language it happens to be implemented in.
There are well-known, proven solutions to the memory management problems that affect OpenSSL (a C library) on a regular basis, and there are languages that implement those solutions. Among them, Rust, Ada and others.
While it is likely that the issue affecting OpenSSL is memory safety related, how do you know for sure that it is, and that Rust would have prevented it without a performance impact? While a language which prevents certain memory errors by design is going to be safe by design, it's entirely possible this bug is in code so hot that it has been repeatedly optimized further and further, and that an equivalent implementation in Rust would require an unsafe block. Now, it is all very well to respond to this by saying "safety should never preclude performance", but in that case, why would you write it in Rust when there are even safer languages such as SPARK? Moreover, if your secure solution is too slow, nobody will want to use it.
I do agree that OpenSSL is a pile of trash, but an overwhelming amount of the cruft in OpenSSL is not C related.
The F-35 JSF project used C++, had a draconian coding standard, and likely used all of the tools you mentioned. It was still plagued with defects that cost the taxpayer billions of dollars.
Genuine question, should I prefer rustls or openssl for security in rust.
A reasonable number of crates let you switch between them with just a cfg flag, and when using them the only difference that makes it way to the user is security (and maybe performance), but I'm not sure which I should consider to be more secure?
At this point we consider rustls about as secure, or a little more. Ring is a high-quality set of primitives that's basically some carefully ported BoringSSL code, and both it and rustls have been around for a good while and have plenty of contributors. Tbh, we like the lack of old legacy cruft (where most OpenSSL bugs get found) as much as the memory-safe language; ditching all that ancient code we don't need really cuts down the attack surface.
The start has happened, but it's happening unevenly with different users. Lots of TLS servers (in reverse proxies but also in backend apps and even Apache httpd[1]) are in memory safe languages.
For example Go, while not totally memory safe, has shipped their Go-implemented crypto/tls library for a long time. (And also had some crypto correctness bugs - reminding us that memory safety is "necessary but not sufficient" for a TLS implementation)
Yep, this is interesting, I wonder what we can deduce about the nature of the bug from the fact that 2 separate implementations, one that's mostly memory safe, are impacted. (Of course Go didn't announce it's about the same thing, so it might be random, or might be some security research that found different bugs)
Might it be a crypto bug, or logic bug (eg in x.509)? Is there code that's used by both OpenSSL and Go (eg assembly implementations of algorithms both imported or modeled after a reference)?
In the linked article, GlobalSign says they don't know what exactly the vulnerability is. I imagine the public root CAs would be informed if this were an x.509-related bug. But then again, GlobalSign is a pretty shitty CA to begin with, so I wouldn't be surprised if they were intentionally not informed.
I'm not sure why CAs would be invited to the embargo, they're in the business of signing certs and while they do process untrusted certs so do zillions of other cert using folks.
Just speculation; for an x.509/web-of-trust related vulnerability, I expect the CAs to be a prominent target. There are hundreds of them, and I'm pretty sure at least a few of them use OpenSSL somewhere in their certificate issuing process. Just to avoid DigiNotar-like fiascos revoking certificates en masse, it probably makes sense to give a head start to CAs.
They don't have GC so they either make programs difficult to write (Rust) which hinders delivering secure replacements, or have use-after-free security problems (Zig) [1].
Use a GC when you can, it's the biggest programming productivity and quality improvement in PLT of the last 60+ years.
Rust doesn't have GC, but it has very good automatic memory management. GC or memory management doesn't make programs immune to buffer overflows, which is the most common security vulnerability these days, while use-after-free is at 4th place.
What do you mean by automatic memory management here?
(I misspoke a bit with "Rust doesn't have GC", it does have opt in basic GC in the form of ref counting, but it's not used much because a headline feature of Rust is code without GC and I guess libs with interfaces requiring GC would be considered uncool)
I'm not a fan of Go, but it is memory safe (with a very minor exception[1]). Zig isn't (it will likely end up safer than C, but it will be “modern C++”-safe, not memory safe).
[1]: there can be memory safety issues in the presence of data races, but this has never been proven exploitable, doesn't cause the compiler to completely miscompile, and is very rare in practice, so it's not comparable to memory-unsafe languages.
(And also people shouldn't take "nobody developed an exploit for this vulnerability yet" as any kind of strong evidence, attacks techniques always get better, never worse, over time etc - crypto algorithm people have it right when they start bracing for impact quite early after signs of a theoretical break)
Features such as sum types (enums) that you can pattern match on. Or generics (well now Go got them as well, for a reason).
Maybe it doesn't immediately sound as if this is related to security, but it is. If it is hard to model your data and hard to work with it, then people will go the "easy and fast" path.
Think Java: for each type you have to create a new file. Even with modern tooling that is still annoying. So people often shortcut and just use "String". Now you have "String password" and "String userid", and you can mix them up and print the password by accident. Artificial example, I know, but I hope it explains what I mean in general.
> Think Java: for each type you have to create a new file.
That's untrue, Java has inner classes, and they can even be public. A "public static" inner class is nearly indistinguishable from a normal top-level class (the only real difference is that its name in the bytecode has a $ character separating the names, that is, its name in the bytecode ends up being something like "org.example.Outer$Inner").
Well, you still have to define them inside another class then, and have to find one that makes sense. They also have a reference to the outer class, which might not be desirable.
That being said, it maybe makes it slightly better, but I hope you agree that this is still very much a suboptimal solution, and probably comes from a time when searching filenames was the best way to navigate code in the absence of modern IDEs.
We have no indication that memory safety is even involved in this bug. For all we know, it could be a timing vulnerability that allows factoring key material, data being copied from the wrong object, or a protocol flow bug that lets the attacker bypass validation. You can create a vulnerability by adding || where you meant && in any programming language, even Rust without unsafe code.
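The ||-versus-&& point in a toy validation check (names hypothetical; any language, memory safe or not, admits this bug):

```python
def cert_ok_buggy(sig_valid: bool, not_expired: bool) -> bool:
    # Bug: `or` where `and` was intended -- an expired certificate
    # with a valid signature is wrongly accepted.
    return sig_valid or not_expired

def cert_ok_fixed(sig_valid: bool, not_expired: bool) -> bool:
    return sig_valid and not_expired

# Expired cert, valid signature: the buggy check lets it through.
print(cert_ok_buggy(True, False), cert_ok_fixed(True, False))  # True False
```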
It's hilarious how HNers on the "pro memory safety" side of the fence have this moronic attitude that memory disclosures are likely to happen in memory-safe languages. It's simply false. You may be right in 1% of cases, but no more.
I want it to be never. Controversial opinion: as long as those who use "security" to oppress us have the upper hand, code written in "unsafe languages" will always leave a path to freedom from the authoritarian dystopia of corporations and governments who will seek to increase their control over our lives. We've already seen the battle start at DRM, jailbreaking/rooting, etc. IMHO the periodic but not-too-often occurrence of vulnerabilities like this, just like a nonzero amount of (cyber)crime, is a justifiable cost that we must continue to tolerate and pay for the sake of our freedom.
Software inhibiting user freedom (like drm) often gets broken using buffer overflows, string parsing mistakes, ... (as seen on many game consoles). Broken (drm) software allows for more user-freedom.
If this software were using Rust, it would be much harder to break than is currently the case.
And tbh, I have to concur (somewhat). I have lost little-to-nothing due to software exploits, but have gained significantly. For example, reading epubs with my kindle or loading homebrew on game consoles.
The idiom for this is "cutting off your nose to spite your face."
You should do good things consistently, not bad things to offset worse things. If DRM is a serious problem in your life, put your money and time where your opinions are and avoid hardware and software products that enforce it rather than mandating insecurity for everyone else.
It's nearly impossible to get AAA games (and most AA games) without some form of DRM, with some notable exceptions; the same goes for high(er)-budget media productions.
The Kindle was already 8 years old when I got it; isn't it better to reuse it with more current software? The same goes for router hardware that gets exploited to flash OpenWRT.
It's very hard to get a modern Smartphone (with acceptable cameras, battery life, performance and software availability) with manufacturer-intended root access.
While I agree that people should adopt Rust (and other approaches) for its security properties, it's not hard to see how that may lead to exploits getting rarer and, in turn, to more categories of devices & content that can't reasonably be used in a "free" way (even one not intended by the manufacturer), thus making it much harder to have control over the devices you own (without becoming some kind of luddite).
I empathize with this position: there are a lot of people out there who are discovering that they don't really own the content they've paid for, because they're tied to electronic ecosystems they have no control over.
That being said: I don't think the world is necessarily a worse place if (1) everybody's devices are more secure, and (2) consumers as a whole are disincentivized from buying into ecosystems that fundamentally don't respect their rights. At the risk of sounding like the luddite you mentioned: maybe we really could use a little separation between technology and literally every other domain of our lives.
I see the same attitude from people insisting on using an "open" Android-based phone that Google uses to spy on them mercilessly, while eschewing Apple because they are so "authoritarian" and sneaky. The logic often stated is that Apple can't be trusted because they're considering the option of maybe starting an ad business.
Those who use them will, and have already been doing so even without memory-safe languages, one notable example being that company named after a fruit; but for a long time, there was always a way out.
The metaphor I like to use is "giving them better nooses to put around our necks."
...and I suppose you could argue that guns don't kill people either...?
And much more effectively, and at much larger scale.
The same security flaw that lets you jailbreak a phone could also allow a hostile entity to say "we don't need you to unlock your phone/laptop, we'll just seize it and break into it using known security vulnerabilities".
Buy devices that you control. Don't try to make other people's devices less secure because you want to break into your own.
I was gonna say - who's more likely to benefit from memory corruption bugs: the general populace, or the trillion-dollar military-intelligence complex?
> If the Go issues were distinct I’d imagine they’d choose a different day to disclose/release?
I think it's just a funny coincidence. That's going based on what I know about the OpenSSL one; I don't know anything about the Go one. We'll find out!
True, but outside the kernel Windows has enough infrastructure running in .NET code.
Additionally, even if C++ is unsafe, it is still better than plain old C, from which it has been the migration path for kernel code since Vista. Nowadays there are even template libraries, like WIL, that can be used in the kernel and in drivers.
Finally, the Microsoft security guidelines are:
1 - use managed languages if one can afford it
2 - use Rust
3 - use C++, alongside SAL and Core Guidelines checkers
This isn't a problem with unsafe languages. This is primarily a problem with OpenSSL, but also with the entire structure around SSL, which does too much, meaning the potential attack surface is too large.
Wasn't there a fork the last time they fucked up badly with a security issue anyway?
Until we finally get liability widespread across the industry and not only in domains where human lives are at risk, just like in every other industry.
When the bottom line gets affected, all companies will start caring about security.
Things are thankfully moving in that direction; the US security bill already calls out that one needs to think about delivering software written in C and C++. It's only a matter of time until one needs some kind of clearance to deliver software written in unsafe languages to government agencies.
when will browsers learn and just replace ssl/tls with one line of code to verify that the public key portion of the URL matches the session established by the website? C is only a hundredth of the problem here (and that hundredth is still big). It's ironic that this news was brought to us by a scammer's corporate blog
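The idea resembles self-certifying names (like Tor .onion addresses, which embed a hash of the service's key). A minimal sketch of the check, with made-up names and plain hex fingerprints rather than any real handshake API:

```rust
// Hypothetical sketch of the "key in the URL" idea; names and the hex
// fingerprint scheme are mine, not a real browser or TLS API.
// The URL carries the expected public-key fingerprint, and the client
// refuses the session unless the key presented in the handshake matches.
fn key_matches_url(url_fingerprint: &str, session_pubkey_hex: &str) -> bool {
    // In practice this would compare hashes in constant time; a plain
    // case-insensitive hex comparison is shown for brevity.
    url_fingerprint.eq_ignore_ascii_case(session_pubkey_hex)
}

fn main() {
    // e.g. https://<fingerprint>.example/ -- fingerprint taken from the URL:
    let from_url = "ab12cd34";
    let from_session = "AB12CD34"; // key actually presented by the server
    println!("key matches: {}", key_matches_url(from_url, from_session));
}
```

Of course this only moves the problem: something still has to parse the handshake, derive the fingerprint, and distribute trustworthy URLs, which is where the attack surface comes back.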
> when will browsers learn and just replace ssl/tls with one line of code to verify that the public key portion of the URL matches the session established by the website?
I have done security releases before (not in OpenSSL), and the first rule there, stated in the flashiest text possible, is: don't push upstream. In OpenSSL's case, they might share the fix with other major OS vendors beforehand (because a lot of software statically links OpenSSL), but there is always a secure channel in place to make sure the patches/commits are not leaked.
In the unfortunate event that the commits were pushed to a public repository, the most sensible thing to do is to just release the tagged release with the security announcement anyway.
The relevant commits and pull requests are confidential and per OpenSSL's normal operating procedures are only available on an embargoed private fork to embargo participants.
I don't know a lot about C or the internals of OpenSSL, but going by the commit message, does this mean we should disable TLSv1.3 until we've had a chance to patch OpenSSL?
Edit: Actually, reading through the code a few times, maybe TLSv1.2 should be disabled?
I really wish we had some way to protect ourselves until the patch is widely available.
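For what it's worth, if someone did want a stopgap along these lines (and other comments dispute whether disabling a protocol version would even help, since the vulnerability details aren't public yet), OpenSSL can cap the negotiated protocol version system-wide via its configuration file. This is a hypothetical sketch using the conventional section names from a default openssl.cnf; it only matters if the bug really is TLSv1.3-specific, which is not confirmed, and the real patch should be applied as soon as it lands:

```ini
# openssl.cnf -- speculative stopgap: cap all apps using the system
# OpenSSL at TLSv1.2. Remove once the patched release is installed.
openssl_conf = default_conf

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
MaxProtocol = TLSv1.2
```

Note this only affects applications that use the system-wide config and don't set their own protocol bounds; statically linked software is unaffected.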
See the other comments for why the parent is wrong.
> I really wish we had some way to protect ourselves until the patch is widely available.
I would hope/expect that the OpenSSL project has no indication that this vulnerability is used in the wild. And that is probably why they preferred announcing a patch date instead of releasing a fix right away. (But I don’t know their policies, so this is just speculation.)
That would mean that you don’t really need to do anything you shouldn’t have already been doing prior to this announcement to protect yourself until the patch is out.
Unless the vulnerability is easy to find — in which case we’d already hear about exploitation attempts, so I don’t think it is — worrying about this is as useful as worrying about the other critical yet-to-be-found vulnerabilities in the software you use (which most certainly exist).
https://mta.openssl.org/mailman/listinfo/openssl-announce