Monopoly isn't the only thing that allows you to charge large margins.
API inference access is naturally much more costly to provide than the Chat UI or Claude Code, as there is a lot more load to handle at lower latency. In the products they can smooth over load spikes by handling some of the requests more slowly (which the majority of users in a background Code session won't even notice).
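That load-smoothing idea can be sketched as a simple priority queue: latency-sensitive interactive requests jump ahead, while background requests absorb the spike by waiting. This is purely illustrative (the `LoadSmoother` class and priority labels are made up here, not any provider's actual scheduler):

```python
import heapq

INTERACTIVE, BACKGROUND = 0, 1  # lower number = higher priority

class LoadSmoother:
    """Toy scheduler: serve interactive requests before background ones."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker keeps FIFO order within a priority

    def submit(self, request_id, priority):
        heapq.heappush(self._queue, (priority, self._counter, request_id))
        self._counter += 1

    def next_request(self):
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[2]

smoother = LoadSmoother()
smoother.submit("bg-1", BACKGROUND)
smoother.submit("chat-1", INTERACTIVE)
smoother.submit("bg-2", BACKGROUND)

order = [smoother.next_request() for _ in range(3)]
print(order)  # the interactive request is served before earlier background ones
```

Under load, the background requests simply sit in the queue a bit longer, which is exactly the kind of delay a user running an overnight Code session never notices.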
You can also open all of those documents with LibreOffice. My point is that LibreOffice's support for Office documents is fine and not at all a problem, because the Office formats have not meaningfully changed over time and LibreOffice has supported them for all of that time.
The Commission has itself committed to reducing the reliance on exactly such standards. You are arguing for a position that they themselves aren't even taking, as they recognize the importance of the problem.
By your logic, it would have been fine for Apple to stick with the Lightning port for charging because USB->Lightning charging ports are widely available so "it's not a problem at all".
No, because Lightning is not an open standard. OOXML has been an open standard for nearly two decades.
And a big issue for me is that this blog post hurts LibreOffice, because the largest reason enterprises won't touch it is that it is perceived as incompatible even when compatibility works just fine.
If success and good marketing were the focus, LibreOffice would happily promote that it will take any file format it gets, because it is robust, interoperable software that doesn't actually mind handling XLSX!
I think you overestimate how niche it really is. In the browser it's an essential part of many creative/productivity tools, e.g. Figma or Miro. On the backend it's regularly used as a sandboxing mechanism or plugin system, e.g. in Istio, Helm, or OPA (so it has a generally high prominence in the CNCF ecosystem).
There are far more niche web standards with far less usage that have stuck around for a long time (e.g. the recent debate around the removal of XSLT).
The problem is that it is lagging behind enough that it is falling out of the support window for a lot of libraries.
Imagine someone releases RustPy tomorrow, which supports Python 2.7. Is it maintained? Technically, yes - it is just lagging behind a few releases. Should tooling give a big fat warning about it being essentially unusable if you try to use it with the 2026 Python ecosystem? Also yes.
3.11 still has two years of active security patches and has most of the modern Python ecosystem on tap. That is a whole different ballgame from stuff stuck in the pre-split 2.x world.
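The "support window" warning described above boils down to a version comparison. A minimal sketch (this is not how pip or any real tool actually implements it; `supports` and `MIN_PYTHON` are hypothetical names):

```python
import sys

def supports(min_version, current=None):
    """Return True if the interpreter meets a library's minimum Python version."""
    current = current or sys.version_info[:2]
    return current >= min_version

# A hypothetical library that dropped support for anything below 3.12
MIN_PYTHON = (3, 12)

# Simulate running on 3.11: the check fails, so tooling could warn loudly
if not supports(MIN_PYTHON, current=(3, 11)):
    print("warning: Python 3.11 is outside this library's support window")
```

Real tools express this via the `Requires-Python` metadata field and proper version specifiers rather than bare tuples, but the underlying check is the same shape.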
Yes they did, but the social bump that was there shortly after release has significantly calmed down already.
It did rekindle my love for the game, but most outposts are empty, even in the international districts, so I think it's hard to get hooked on it for new joiners.
EDIT (as I can't edit the original comment anymore): The America - English districts are very lively, and it seems like everyone in Europe is also now using those.
> Tech companies are in the business of nurturing teams knowledgeable in things
It pains the anti-capitalist fibers in my body to say this, but no, they are not. At most, the value is in organizational knowledge and existing assets (= source code, documentation), so that people with the least knowledge possible can make changes. In software companies in general, technical excellence and knowledge are not strongly correlated with economic success as long as you clear a certain bar (and that bar is not that high). In hardware/engineering companies, by comparison, they are much more correlated.
In the concrete example of a legacy codebase we have here, there is even less value in trying to build up knowledge in the company, as it has already been decided that the system is to be discarded anyway.
I might still be naive about the industry, but if you don't know how the legacy codebase works, you might either delegate the change to someone else in the company who does, or, if there is no one left, use this opportunity to become the person who knows at least something about it.
In Azure Foundry, they list GPT 5.2's retirement as "No earlier than 2027-05-12" (it might leave OpenAI's normal API earlier than that). I'm pretty certain that Gemini 3, which isn't even in GA yet, will be retired earlier than that.
That's true in theory, but not in practice. In practice, every inference provider handles errors (guardrails, rate limits) somewhat differently and with different quirks, some of which only surface in production usage, and Google is one of the worst offenders in that regard.
How does this compare to solutions like e.g. Clara[0] that have been around for a decade?
A lot of similar solutions came up in the early chatbot era, when Facebook published Duckling and it became trivial to parse dates from natural language. I also looked into building such a product at the time, but ultimately found it hard to find an entry point into the market: most people who actually need something like this have secretaries (who will also handle a lot of other things around the meeting), and most other people with a less severe form of the problem rarely want to actually pay for such a product.
Great question! We have a lot of friends in the b2c space. What Vela is designed for is the subset of scheduling where nothing in the market works, specifically for businesses. Think a staffing firm coordinating across candidates, clients, recruiters, and client development to schedule interviews/meetings. Or another one doing 1,000+ interviews a week, wrangling across phone, SMS, and email. These are scenarios where companies tried every tool out there and eventually just did it themselves because tools couldn't meet their customers where they are and didn't handle the workflows/behaviors of their industries.