You seem to be implying that these goals were optional, but I don’t understand how #2 cross-lang interop could ever have been optional. Isn’t running non-JS languages the entire point of WebAssembly?
Given that, do you really think goal #1 (non-Web APIs) added much additional delay on top of the delay already necessitated by goal #2?
> but I don’t understand how #2 cross-lang interop could ever have been optional
This problem hasn't been solved outside the web either (at least not to the satisfaction of Rust fanboys who expect that they can tunnel their high level stdlib types directly to other languages - while conveniently ignoring that other languages have completely different semantics and very little overlap with the Rust stdlib).
At the core, the component model is basically an attempt to tunnel high level types to other languages, but with a strictly Rust-centric view of the world (the selection of 'primitive types' is essentially a random collection of Rust stdlib types).
The cross-language type-mapping problem is where every interop approach eventually runs aground. The component model's challenge is the same one that hit every bridge technology before it: whose type system is "canonical"?
.NET's Common Type System was supposed to be the neutral ground for dozens of languages. In practice, it had strong C# biases — try using unsigned integers from VB or F#'s discriminated unions from C#. The CLR "primitive types" were just as much a random collection as the WIT primitives are being described here.
The practical lesson from two decades of cross-runtime integration: stop trying to tunnel high-level types. The approaches that survive in production define a minimal shared surface (essentially: scalars, byte buffers, and handles) and let each side do its own marshaling. It's less elegant but it doesn't break every time one side's stdlib evolves.
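To make "minimal shared surface" concrete, here is a rough Rust sketch of the idea (illustrative only — not any real ABI or library; all names are hypothetical): only scalars, byte buffers, and opaque integer handles cross the boundary, and each side marshals its own high-level types to and from that surface.

```rust
// Illustrative sketch of a minimal cross-language surface: only i64 scalars,
// byte buffers, and opaque u64 handles cross the boundary. No strings, enums,
// or generics are tunneled through. All names here are hypothetical.

use std::collections::HashMap;

struct HandleTable {
    next: u64,
    objects: HashMap<u64, Vec<u8>>, // host-side state hidden behind handles
}

impl HandleTable {
    fn new() -> Self {
        HandleTable { next: 1, objects: HashMap::new() }
    }

    // "Constructor" exposed across the boundary: takes a byte buffer,
    // returns an opaque handle. The caller never sees the host-side type.
    fn create(&mut self, payload: &[u8]) -> u64 {
        let h = self.next;
        self.next += 1;
        self.objects.insert(h, payload.to_vec());
        h
    }

    // Scalar-returning accessor; errors are scalar codes, not rich types.
    fn len(&self, handle: u64) -> i64 {
        self.objects.get(&handle).map_or(-1, |v| v.len() as i64)
    }

    // Marshaling stays on each side: the caller serializes its string to
    // UTF-8 bytes before crossing; the host deserializes only if it chooses.
    fn create_from_str(&mut self, s: &str) -> u64 {
        self.create(s.as_bytes())
    }
}

fn main() {
    let mut table = HandleTable::new();
    let h = table.create_from_str("hello");
    assert_eq!(table.len(h), 5);
    assert_eq!(table.len(999), -1); // unknown handle: scalar error code
    println!("handle={} len={}", h, table.len(h));
}
```

The point of the sketch: when one side's stdlib changes its string or collection types, nothing on this surface breaks, because those types never crossed the boundary in the first place.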
WASM's linear memory model actually gets this right at the low level — the problem is everyone wants the convenience layer on top, and that's where the type-system politics start.
Except you are missing that the CLR has a type system designed specifically for cross-language interop, and that it was taken into account in the design of WinRT as well.
> The CLS was designed to be large enough to include the language constructs that are commonly needed by developers, yet small enough that most languages are able to support it. Any language construct that makes it impossible to quickly confirm the type safety of code was excluded from the CLS so that all languages that can work with CLS can produce verifiable code if they choose to do so.
The WIT types don’t seem random or Rust-centric to me; they’re basic types common to every major current-generation language — not just Rust but also Swift, Kotlin, even Zig. It’s true that languages with type designs from the 90s can’t take full advantage of WIT types, but WIT does seem perfectly capable of representing types from older languages. That seems like the only sensible design to me: older languages are supported, but that support needn’t burden interop between modern languages.
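For concreteness, the WIT shapes in question (records, variants, options, results) have direct counterparts in all of these languages. A rough hand-written Rust rendering (illustrative only — this is not generated bindings, and the WIT snippets in the comments are just sketches):

```rust
// Illustrative Rust analogues of WIT's basic shapes. Swift and Kotlin have
// direct equivalents: structs/data classes, enums/sealed classes, Optional,
// and Result. The WIT-style declarations in comments are sketches, not a spec.

// WIT-style: record point { x: s32, y: s32 }
#[derive(Debug, PartialEq)]
struct Point { x: i32, y: i32 }

// WIT-style: variant shape { circle(f64), rect(point) }
#[derive(Debug, PartialEq)]
enum Shape {
    Circle(f64),
    Rect(Point),
}

// WIT-style: area: func(s: shape) -> result<f64, string>
fn area(s: &Shape) -> Result<f64, String> {
    match s {
        Shape::Circle(r) if *r >= 0.0 => Ok(std::f64::consts::PI * r * r),
        Shape::Circle(_) => Err("negative radius".to_string()),
        Shape::Rect(p) => Ok((p.x as f64) * (p.y as f64)),
    }
}

fn main() {
    assert_eq!(area(&Shape::Rect(Point { x: 3, y: 4 })), Ok(12.0));
    assert!(area(&Shape::Circle(-1.0)).is_err());
}
```

A 90s-era language without sum types can still consume `Shape` through a tag-plus-fields encoding; it just can't express the variant natively, which is the asymmetry described above.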
Does it bother anyone else when an article is so clearly written by an LLM? Other than being 3x longer than it needs to be the content is fine as far as I can tell, but I find the voice it’s written in extremely irritating.
I think it’s specifically the resemblance to the clickbaity writing style that Twitter threads and LinkedIn and Facebook influencer posts are written in, presumably optimized for engagement/social media virality. I’m not totally sure what I want instead; I’m pretty sure I’ve seen the same tactics used in writing I admired, but probably much more sparingly.
What is it that makes tptacek’s writing or Cloudflare’s blog etc so much more readable by comparison? Is it just variety? Maybe these tactics should be reserved for intro paragraphs (of the article but also of individual sections/chapters might be fine too) to motivate you to read on, whereas the meat of the article (or section) should have more substance and less clickbaiting hooks?
Specifically there’s a lot of clickbaity constructions like: “setup: payoff” or “sentence fragment, similar fragment, maybe another similar fragment”.
This paragraph has both:
> The symptom is familiar: a stream that occasionally "locks up" briefly before catching up, jitter in audio or video, or a latency spike that appears to come from nowhere, a "hang" in the application when it gets blocked waiting for a packet. It comes from a single packet forcing the entire pipeline to pause. The underlying network recovered quickly; TCP's ordering guarantee is what made it visible.
So does this!
> WireGuard's protocol is a fundamentally different design point. It's stateless — there's no connection to establish upfront, no session to track, and no certificate authority in the picture. Two keys, a compact handshake, and you're encrypting. And unlike TLS, WireGuard's cryptographic choices are fixed: Noise_IKpsk2 for key exchange, ChaCha20-Poly1305 for authenticated encryption. There's nothing to misconfigure.
Are you pretending you didn’t even have an LLM help you reword it before publishing? Because that would be an obvious lie. If you were to propose a sufficiently trustworthy way to prove one way or another, I’d bet $1,000 on it.
In theory they try to get people hired for their competence rather than their network. A widely-cited anecdotal example of this reportedly working well is the Rooney Rule: https://www.espn.com/nfl/playoffs06/news/story?id=2750645
This thread also has a lot of anecdotal examples of failure modes of 'diverse slate' rules, though, such as people who have already decided who to hire still interviewing women candidates just to appease the rule, thus wasting everyone's time.
key features implemented in a multi-process architecture, using plenty of modules written in C++ and Rust
Which is exactly the point—the UI is written in HTML/CSS, not the native platform language, and the high-performance modules are written in C++ and Rust, also not the native platform language.
There are no second chances in the court of public opinion, no punishment severe enough, no act of restitution sincere enough.
This just isn't true. Look into how Dan Harmon gave a genuine apology and accounting for his wrongdoing and was forgiven. Can you point to Sabatini doing anything that even arguably rises to that level of contrition?
Should a lack of contrition prevent Sabatini from ever working again? Should he be held in contempt of the court of public opinion indefinitely?
I fully understand criticism of his personal manner but I have not seen much criticism of his actual work. Why not allow a position where he is not managing career paths?
Edit: He lost his positions; he suffered consequences. When are the consequences enough? Is there ever a point where they are?
If learning that Cloudflare took action against literal, self-identified Nazis—who praise Hitler, deny the Holocaust, and drove a car into a crowd and killed a woman—made you worried that Cloudflare might take action against you, you're really telling on yourself.
I don't condone that stuff at all. The big risk comes from enterprises with five- or six-digit headcounts, where you can't properly vet every partner, employee, or contractor. What if a situation like Yandex happens and you find out your code has obscene comments on the backend? What if you have a sponsor that wasn't properly vetted? What if you wind up in a catastrophic PR situation, say a BP-oil-spill-scale event? Or what if someone goes onto a public comment form and posts that obscene stuff and you don't realize it?
Enterprises are massive machines that move EXTREMELY slowly, so the risk of not catching something in time is real. And since Cloudflare has now done this once, they could be pressured into doing it again in the future. Would the media or Twitter defend a poor oil and gas company whose source code contained obscene things planted by a malicious developer, or would they push for Cloudflare to remove them?
Even if the odds of this are very low, I would be doing a disservice to my clients and enterprises if I didn't advise them of a possible risk that could cripple a company. The risk can be eliminated simply by using another vendor — essentially for free — so compared to even a small chance of losing hundreds of millions or billions of dollars, it's an easy choice to make.
Corporations are risk averse, and this cuts both ways. Activists exploit it to pull advertisers away from people they don't like. But they also have to understand that services which introduce risk get avoided like the plague.
I strongly disagree with the analogy between CDNs and ISPs. ISPs operate on the user-side, they have no business filtering what the user sees. CDNs operate on the server-side, they have the power and responsibility to decide who they want to do business with, and to not provide services to harmful customers—I'm sure we agree ISPs shouldn't provide services to harmful customers either (spam, malware, phishing, etc).