>How do you know that flag does anything except attempt to hide sending of telemetry?
I don't, but neither does anyone who runs VSCodium: they too run a random binary from the internet without any idea whether that binary was in fact compiled from the source code provided, and I suspect nobody who runs it has read that code either.
This is classical security theater where people will run binaries from basically anonymous people on the internet and claim this is more trustworthy than running something provided by Microsoft.
I'm sure we're in a minority, but some of us do actually build things like this from source (common on Gentoo, Arch, Guix, and NixOS) and give the source changes a quick cursory glance on every upgrade/rebuild. For a flag like this I might dive into the code of the version I'm running and take a look at what's going on.
So with a sample size of one I can tell you that "nobody" is false.
Even without doing that, just the fact that I can know it's built from a known tag off master on a public, high-interaction git repo makes it a completely different story from downloading some arbitrary binary.
> This is classical security theater where people will run binaries from basically anonymous people on the internet and claim this is more trustworthy than running something provided by Microsoft.
With a large enough group of "anonymous people" [0] inspecting the code, the chance of a security hole, intentional or otherwise, goes down [1]. Notice that this is NOT a guarantee by any means -- it's a chance. [2]
Contrast that with a blob of binary code under an EULA stating you aren't allowed to inspect it. There are obviously non-malicious reasons for doing that, but it doesn't (and shouldn't) inspire trust. So some people don't trust it. They are not irrational for doing so.
In terms of probability, I would put my money on Microsoft being better overall than the median set of developers at writing code with fewer technical bugs. However, I would also bet that they are more likely to intentionally collect more telemetry than they let on, and/or to misrepresent what toggles and settings actually change.
Whether I actually read (or even can read) a single line of the code doesn't change any of that. Just the fact that someone can view your code has a large effect on how you write it [3].
We can talk all day about whether specifically VSCodium meets some threshold of actual reviewers/auditors, but that's not the point.
[0]: There are established lines of trust via things like comment history, other projects, and even other commits. FOSS devs aren't (always) just purely anonymous.
[1]: "Many eyes make all bugs shallow"
[2]: This is also not claiming that "all FOSS is created equal" or that "projects with thousands of contributors are magically more secure".
[3]: And yes, of course that could just mean they obfuscate it more. But that still takes more time and effort, reducing the number of cases and increasing the chance of detection.
I don't though, so this effort is somewhat beneficial, despite the fact that it's really lacking in some features.
Though that is quite telling, to be perfectly honest. Some components can't be replicated without proprietary elements, which indicates that there are a lot more binary blobs than normal.
I’ll admit to not looking at the code, as I do not currently use vscode.