Also, Facebook has a history of releasing the source code for something, making a huge splash, and then essentially doing nothing after three months (i.e., after someone gets their performance review).
They use the code internally but fail to make sure it's usable externally. This is doubly the case for anything infrastructure-related.
Buck2: Was released. Could never be built correctly without the latest nightly of Rust, and even then it was fragile outside Meta's build architecture.
Sapling: Had a whole bunch of excitement and backing when it was announced. Has been essentially dead three months after release.
I used to work in Meta infra. I know the MO, and I have a hard time trusting it.
Astral's use case is external, so it has a better chance of actually being supported.
We know we can't just ask for trust upfront. Instead, we want to earn it by showing up consistently and following through on our commitments. So, take us for a spin and see how we do over time. We're excited to prove ourselves!
Sorry, I didn't mean it's literally dead, but it really hasn't gotten much feature support. Things like LFS support got deprioritized just because the internal team asking for it wanted a different feature instead.
Both are EXTREMELY active, but only for the needs of Meta, not for the community.
Adoption outside of Meta is nearly non-existent because of this.
Look at something like Jujutsu instead of Sapling and you can see a lot more community support and a lot more focus on features that everyone needs (still no LFS support, but not because Google didn't need it).
I guess I don't consider a larger number of commits to be actively supporting the community. Community use comes second, and the open-sourcing is just a one-time boost to recruiting and PR.
When I was there (which was a while ago), almost every decision was based around PSC (Performance Summary Cycle), and it's easy to justify a good rating for open-sourcing a large project. It's much harder to justify making sure it's well supported for the community's use cases.
I am very concerned that an "own assessment" of what constitutes a DoS means source code is expected to be hosted only on a large platform or by a large corporation, which is another way of saying "the little guys don't matter".
Self-hosting of source code should remain an option, and the proxy should be there to reduce the traffic load, not amplify or artificially increase it, regardless of the "level of DoS".
One thing Drew is asking for is respecting robots.txt, allowing each operator to determine what a reasonable level is for them, rather than applying a GitHub-scale bias to it.
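A minimal sketch of what operator-side crawl policy in robots.txt could look like; the paths are hypothetical, and note that Crawl-delay and wildcard Disallow patterns are non-standard extensions honored only by some crawlers:

```
# Hypothetical robots.txt for a small self-hosted forge
User-agent: *
# Ask crawlers to wait 10s between requests (non-standard, not universally honored)
Crawl-delay: 10
# Skip expensive per-line history pages (wildcards are a non-standard extension)
Disallow: /repo/*/blame/
Disallow: /repo/*/log/
```

The point is exactly Drew's: the operator, not the crawler, gets to declare what load is reasonable, and a well-behaved proxy or bot should honor that declaration.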
> This doesn't define for whom it must be backwards compatible.

Breaking changes are not all created equal. Semver is a pessimistic measure: you bump the major version if a change could break at least one user, in theory. In practice, most "breaking" changes do not break most users.
I think this means API breakage, usually resulting in packages that won't even compile.
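To make the "pessimistic measure" point concrete, here's a small hypothetical sketch (the `parse` function and its `strict` flag are invented for illustration): a change that is semver-major because it *could* break someone, even though callers who never used the removed parameter keep working.

```python
# Hypothetical library, "before" the breaking change (v1.x)
class V1:
    @staticmethod
    def parse(data, strict=False):
        return data.strip()

# "After" the breaking change (v2.0): the `strict` keyword was removed.
# Semver says bump the major version, but only the subset of callers
# that actually passed `strict=` will break.
class V2:
    @staticmethod
    def parse(data):
        return data.strip()

# Callers that never used `strict` are unaffected by the bump:
assert V1.parse("  x  ") == V2.parse("  x  ") == "x"

# Callers that did use it break loudly:
try:
    V2.parse("  x  ", strict=True)
    broke = False
except TypeError:
    broke = True
assert broke
```

In a compiled ecosystem the second caller simply stops compiling, which is the "won't even compile" case above; in Python it fails at runtime instead.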
Depending on how you look at it, some platforms are already running Linux.
The QFX5100 runs a Linux KVM hypervisor with the traditional FreeBSD-based Junos running as a guest. The same goes for the new x86-based SRX (SRX1500), and more platforms are moving to this model.
The x86-based SRX is the most interesting: the forwarding daemons run on the hypervisor OS, and the Junos-based VM is just for management and protocols. It's not hard to see that management and protocols could easily be ported.
There are more details that are locked behind an NDA.