Xfinity has a massive market share and is often the only game in town, or one of only two. The alternatives are often just smaller resellers of the exact same infrastructure under a different company name. There's also satellite Internet, but it's an order of magnitude slower, has a low data cap, and costs at least as much as the others.
Sometimes, the situation is so bad that people start their own ISP rather than suffer the exorbitant prices and lackluster support:
https://arstechnica.com/information-technology/2021/01/jared...
Agreed. You haven’t really won until it stops being noteworthy and “oh look X is using Blender!!”
Nobody talks about how Linux dominates the server space anymore. Nobody talks about how “git is winning” or getting “battle tested”. These are mundane and banal facts.
I don’t believe the same has happened to Blender yet.
That is not a very big studio or very big production. Blender falls over in the pipeline department: it has a constantly changing API that doesn’t allow for the extensibility needed to get a major project out the door. Just the fact that only a Python API is provided is enough for most people who have worked on massive scenes with massive amounts of data to consider it a non-starter.
Saying Evangelion isn't big is like saying Minions are some irrelevant little flick. Evangelion is quite possibly the biggest series in Japan for 3 decades running. You won't find a person who has not seen it to some extent. Evangelion goods are sold everywhere at all times. You really cannot escape it. For the biggest series in Japan to use Blender is a huge sign to the rest of the industry in one of the most risk-averse countries that yes, it's good enough.
A relevant opportunity may not occur again, so here is a great video by Red Bard on whether it's possible to live entirely off of Evangelion merchandise: https://www.youtube.com/watch?v=_0Qr9rztRw4
The same way Star Wars was still running in between the original series and the prequels. It had an active fan base and lots of side content that was constantly being produced.
I'm sure "major project" is a subjective label, but Flow made headlines earlier this year with an Academy Award (Best Animated Feature) and Golden Globe (Best Animated Feature Film)
Flow is good filmmaking expressed through low-tech production, which is totally valid, but its doing a lot with a little isn't going to stop Disney from one-upping themselves with the next Zootopia movie. Blender needs to handle that angle too if it's going to become a catch-all solution for all kinds of production.
For sure, it was made by a small team and rendered on a single computer using the Eevee renderer (the fast, partially rasterization-based one). It's a major project, just not an enormously huge, bleeding-edge one. Here's hoping Blender can keep on rolling toward those types of capabilities.
Not disagreeing that usage in large productions is something Blender isn't really designed for, but I don't think it's due to a lack of Python API features (if a studio wants something specific, it could just maintain an internal fork) or the ever-changing Python API surface (versions aren't upgraded during a production anyway).
VFX studios have been using Python APIs for twenty+ years, backed by C. They were one of the first industries to use it. That's where I learned it, around the turn of the century.
3.0+1.0 was the highest-grossing box office release in Japan that year and has a worldwide fanbase. The original series + End of Evangelion are considered by many critics and fans to sit among the best anime series of all time, and the Rebuild movies were absolutely huge.
Personally, I think they pale in comparison to the original series and lose a lot of what makes Eva special and interesting to begin with, so I'd kinda love to dump on them a bit, but... it's about as big of a production as it gets in the anime industry. They're of course nowhere near Pixar level or similar, but it is clearly an example of Blender being battle tested by a serious studio on a serious project.
> constantly changing API that doesn’t allow for the extensibility
You pick a (stable) version, and use that API. It doesn't change if you don't. If it truly is a _major_ project, then constantly "upgrading" to the latest release is a big no-no (or should be)!
And these "most people" who are scared of a Python API? Weak! It should have been a low level C API! ;-)
> And these "most people" who are scared of a Python API? Weak! It should have been a low level C API! ;-)
I wouldn't frame it as "scared". The issue is that at a certain scene scale Python becomes the performance bottleneck if that's all you can use.
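To make that concrete, here's a minimal sketch contrasting a per-vertex Python loop with the batched `foreach_get` accessor, which pushes the data copy down to the C side. It assumes it runs inside Blender (so `bpy` is available) with a mesh object active; the actual speedup depends on the scene, so treat it as illustrative rather than a benchmark:

```python
# Illustrative only: per-element Python iteration vs. one bulk transfer.
# Assumes this runs inside Blender and the active object is a mesh.
import time
import numpy as np
import bpy

mesh = bpy.context.active_object.data

# Slow path: a Python-level attribute access per vertex
# (millions of iterations on a film-scale asset).
start = time.perf_counter()
total = [0.0, 0.0, 0.0]
for v in mesh.vertices:
    total[0] += v.co.x
    total[1] += v.co.y
    total[2] += v.co.z
print("python loop:", time.perf_counter() - start)

# Faster path: one C-side bulk copy into a flat buffer, then vectorized math.
start = time.perf_counter()
coords = np.empty(len(mesh.vertices) * 3, dtype=np.float32)
mesh.vertices.foreach_get("co", coords)
total = coords.reshape(-1, 3).sum(axis=0)
print("foreach_get + numpy:", time.perf_counter() - start)
```

Even with tricks like this, you're still bounded by whatever the API chooses to expose in bulk, and that's the ceiling people run into.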
> You pick a (stable) version, and use that API. It doesn't change if you don't. If it truly is a _major_ project, then constantly "upgrading" to the latest release is a big no-no (or should be)!
This is fine if you only ever have one show in production. Most non-boutique studios have multiple shows being worked on in tandem, be it internal productions or contract bids that require interfacing with other studios. These separate productions can have any given permutation of DCC and plugin versions, all of which the internal pipeline and production engineering teams have to support simultaneously. Apps that provide a stable C/C++ SDK and Python interface across versions are significantly more amenable to these kinds of environments as the core studio hub app, rather than being ancillary, task-specific tools.
If you had multiple shows in production, I would expect that standards be set to use the same platforms and versions across the board.
If the company is more than a boutique shop, I would expect them to have a somewhat competent CTO to manage this kind of problem - one that isn't specific to Blender, even!
Also, if the company is more than a boutique shop, I would hope it would be at a level and budget that the Python performance bottlenecks would be well addressed with competent internal pipeline and production engineering teams.
But then again, if the company is more than a boutique shop, they would just pay for the Maya licensing. :-)
Small timers, boutique shops, and humble folks like me just try to get by with the tools we can afford.
On a related note, though: I built a Blender plugin with version 2.93 and recently learned it still works fine on Blender 4. The "constantly changing API" isn't the beast some claim it is.
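For reference, the surface my plugin leans on is basically the standard add-on skeleton, which hasn't moved across those releases. A minimal sketch (the operator name and body here are made up for illustration):

```python
# Minimal Blender add-on skeleton; this registration surface has stayed
# stable from 2.93 through 4.x. Operator name/body are hypothetical.
import bpy

bl_info = {
    "name": "Hello Mesh",
    "blender": (2, 93, 0),   # minimum supported version
    "category": "Object",
}

class OBJECT_OT_hello_mesh(bpy.types.Operator):
    """Report the vertex count of the active mesh."""
    bl_idname = "object.hello_mesh"
    bl_label = "Hello Mesh"

    def execute(self, context):
        obj = context.active_object
        if obj is None or obj.type != 'MESH':
            self.report({'WARNING'}, "No active mesh")
            return {'CANCELLED'}
        self.report({'INFO'}, f"{len(obj.data.vertices)} vertices")
        return {'FINISHED'}

def register():
    bpy.utils.register_class(OBJECT_OT_hello_mesh)

def unregister():
    bpy.utils.unregister_class(OBJECT_OT_hello_mesh)
```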
> If you had multiple shows in production, I would expect that standards be set to use the same platforms and versions across the board.
Considering productions span years, not months, artists would never get to use newer tools if studios operated that way. And it really only works if shows share similar end dates, which is not the reality we live in. Productions can start and end at any point in another show's schedule, and newer tools can offer features that upcoming productions can take advantage of. Each show will freeze its stack, of course, but a studio could be juggling multiple stacks simultaneously, each with its own dependency variants (see the VFX Reference Platform).
> Also, if the company is more than a boutique shop, I would hope it would be at a level and budget that the Python performance bottlenecks would be well addressed with competent internal pipeline and production engineering teams.
That would be the ideal, something that can be difficult to achieve in practice. You'll find small teams of quality engineers overwhelmed with the sheer volume of work, and other larger teams with less experience who don't have enough senior folks to guide them. The industry is far from perfect, but it does generally work.
> But then again, if the company is more than a boutique shop, they would just pay for the Maya licensing. :-)
And back to reality XD
That being said, a number of studios have been reducing their Autodesk spend over the past few years, because the way the M&E division is run is honestly a sick joke: it's a free several-hundred-million-a-year revenue earner, but Autodesk foists the CAD business operations onto it and the products suffer. Houdini's getting really close, but if another all-in-one can cover effectively everything in a way that each team sees as better, you'll start to see migrations ramp up. Realistically this comes down to the rigging and animation departments more than any other. Maya will never go away completely, though, as it'll still be needed for referring to and opening older projects from productions that used it, beyond just converting assets to a different format. USD is pretty much that intermediary anyway; it's the training and migration effort that becomes the final roadblock.
There's a recent book called "Plunder: Private Equity's Plan to Pillage America", and seeing this news makes me want to revisit it. The author outlines the usual tactics used by private equity firms to turn a functioning business into their own short-term profit factory, often driving the business into bankruptcy in the process. EA already has a reputation as a semi-broken company, but things can probably get a lot worse.
One method they use is the consolidation of a bunch of small, related businesses. For example, PE firms buy out all of the local veterinary offices in a tri-city area, cut costs, lay off the most qualified vets and replace them with less-qualified ones, increase prices for services, and operate a local monopoly.
Clearly, that particular tactic is much harder to pull on the massive oligopoly that is the gaming industry, but it was the one that stuck with me from the book. There are more baffling ones, like selling off all the company's real estate, making the company rent it back at a much higher rate than its current mortgages (which may already have been paid off), and then funneling revenues out of the company via "consulting fees" paid to themselves and their friends for this bad advice.
The book is a little bit repetitive, and some of the tactics are beyond my grasp, but I'm excited to make a personal bingo card of them and see which ones get used on EA as they drive it into the ground.
I read this book, and it's OK, but both repetitive and biased. Surprisingly, more "textbook"-oriented works on PE are hard to find; Wiley had one, but it's about 10 years old. The term "PE" also covers a lot of different models, from the worst '80s-style LBO "greed is good" ones to honest invest, advise, and stay-out-of-the-way funds. These modern huge deals almost always seem closer to the former.

Having now been very close to two PE buy-outs and a big VC funding event, I think I actually prefer VC. Everyone is very transparent and open about what they are trying to do: pour rocket fuel on a fire and get rich. PE wants it all: big annual cashflow, cuts to all mid/long-term costs (R&D, investment, etc.), and a juicy multiplier on the sale when they flip it to the next PE fund.

Most recently I was at a 15-year-old company that was doing 25% YoY ARR growth. Amazing, right? Well, no: we were not covering our debt servicing from the most recent PE purchase, so we had to cut everywhere and had a hiring freeze. You couldn't get any support to build for the next decade, because funds don't last that long and you don't want to be selling a company in the middle of a project that isn't generating revenue this year. It all makes me mad, sad, and very tired.
EA is the perfect candidate for private equity to destroy. There are zombie companies that need to be eviscerated, digested, and then excreted back into the world. The megacompanies of the video game industry are the result of a broken market, and this is "nature healing itself." The free market will course-correct, and private equity is a perfectly acceptable vehicle for something like EA.
We don't need to guess here, though. PIF and Silver Lake have a pretty solid track record of over-investing in companies. Affinity Partners seems to be a shell company for the Trump family, so I don't see them being active.
Mozilla still uses Mercurial for Firefox development [0]. They're in the process of moving to git and GitHub [1]. I don't know the status of the move, but I contributed a bit to Firefox's build system via Mercurial, and I am glad to see they're moving to git since it's the industry standard. Mercurial felt so alien when I was using it. Git has its own share of UX problems, but finding support for Mercurial was much harder for me, precisely because nobody really uses it anymore.
According to the top comment, hg is synced from git (rather than the other way around), and they basically finished the migration AFAIK, with some compatibility/legacy hg stuff for the time being.
Will there be a follow-up toolkit for Artificial General Intelligence called Agitator?
Dumb jokes aside, I took a look at your GitHub page, and this is exactly what I've been looking for when I do local LLM work. Cogitator seems like a nice, pythonic approach vs. using the raw `ollama run` command, esp. given the focus on chain of thought. I think I'll start using this tool. Nice work!
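For contrast, here's roughly what my by-hand workflow looks like with the `ollama` Python client. This is just a sketch of manual chain-of-thought prompting; the model name and prompt are placeholders, and none of this is Cogitator's API:

```python
# Manual CoT prompting via the ollama Python client (not Cogitator).
# Model name and question are placeholders.
import ollama

question = "A train leaves at 3pm traveling 60 mph. How far has it gone by 5pm?"
response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system",
         "content": "Think step by step, then give a final answer."},
        {"role": "user", "content": question},
    ],
)
print(response["message"]["content"])
```

Having the step-by-step scaffolding handled by a library instead of hand-rolled prompts like this is exactly the appeal.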
Yes, this is correct. Google pays Mozilla hundreds of millions of dollars annually to be the default search engine. This makes up the vast majority of Mozilla Corporation's revenue. It's somewhere in the ballpark of 85% of all their annual revenue last I heard.
They've tried hard in recent years to get out from under Google by diversifying into other areas. For example, they have a VPN service that is a wrapper around Mullvad, and they've made some privacy tools that you can pay to use, also largely wrappers around other companies' tools.
I was an employee of Mozilla Corporation and saw first-hand the effort they were making. In my opinion, it's been a pretty abysmal failure so far. Pulling Google funding would effectively hamstring Mozilla Corp.
I upvoted this post and would not have commented if not for yours, so maybe my reason for (almost) abstaining is similar to that of other people who upvoted. What follows might be a ramble and is just my anecdotal experience on HN:
I've been an HN user for years, and I've found it somewhat hard to comment on anything related to economics or non-technical / pop-cultural topics. Many HN users are experts in their technical fields, and they seem to think this automatically translates to expertise in political science, sociology, psychology, and all the other fields of endeavor where we can't just point to source code to justify our positions. I mostly find that HN commenters are a thoughtful bunch. But, there's a small, noisy group of armchair experts waiting to swoop in and correct your grammar or disagree on some technicality over social issues like this.
Since HN is tech-focused, even posting something not directly related to technology can get your post flagged and taken down as irrelevant. So in a way, discussing these things is disincentivized by the site's purpose. I get that WaPo is an online platform and therefore in-scope, but it's "scarier" to comment on because it's more social than technical.
In part, the act of being a thoughtful commenter also means steering well clear of any flame wars (that aren't related to NixOS, Rust, or LLMs). It's like jazz in that it's about the notes you don't play: it's the comments you don't make that foster a good online experience.
This is also a US-specific article, and lots of Americans are overwhelmed by the onslaught of post-election political news; so this might be a cultural thing, in that people are not commenting as much because they're dealing with a big inbox of emotions to sort through.
I still take active interest in political posts like this and personally think Bezos is a modern-day robber baron in the new Gilded Age. But, I'll seldom say so, opting instead to upvote so others can see the post and then move on silently.
It's exciting to see an OpenWRT router where compatibility is guaranteed! I've been running OpenWRT at home for years, and whenever it comes time to upgrade, it's always a deep dive into their Table of Hardware [1]. Many of the newest routers with an absurd number of antennas that you might see at big-box stores like Costco have incompatible chipsets, so usually I have to buy something a bit older.
Most recently I bought a couple of Belkin AX3200 routers because they support WiFi6 and are only about $50 USD. The annoying part is that they're a Walmart exclusive, but they have worked flawlessly so far. Still, I'd rather have the new, officially-endorsed one.
BTW, none of the links to online stores on the OpenWrt pages currently work; everything goes 404 for me.
The only working online store that agrees to sell the device to me is an AliExpress shop I found via shopping.google.com, and its list price is $116. (Now marvel at Walmart's pricing power.)
I've been living with the same router/modem combo (Fritzbox) for around 11 years, and I don't plan to replace it before I get FTTH. (Otherwise I would have.)
Nothing against OpenWRT (I have used it in the past), but I doubt I would have switched routers more often if I were still using it...
I wanted to leave this comment, but now I’m going to have to leave a helpful correction to your comment instead: prescribed is closer to “forced”, or “made the rule”, than to “recommended”. :)
Once I saw that the headline image was AI-generated, I skimmed the first paragraph and didn't find a lot of meaning in it. The dearth of content combined with an AI image made me suspect that the article itself might be AI-generated.
As a litmus test, I decided to check whether the word "delve" appeared in the text. According to an article I read in The Guardian[1], this word is more likely to appear in AI-generated responses to prompts. Sure enough, "delve" was right there in the second paragraph.
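If you want to automate the same quick check, here's a throwaway sketch; the URL is a placeholder, and this is a rough heuristic, not a detector:

```python
# Quick-and-dirty "delve" litmus test for a suspect article.
# URL is a placeholder; a heuristic, not an AI detector.
import re
import urllib.request

url = "https://example.com/suspect-article"  # placeholder
html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping
# Matches delve/delves/delved (not "delving"); adjust as needed.
hits = len(re.findall(r"\bdelve[sd]?\b", text, flags=re.IGNORECASE))
print(f"'delve' appears {hits} time(s)")
```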
Of course, these two things combined aren't exactly a "smoking gun" proving that the whole thing is AI blog-spam, but I would bet it is (as first mentioned in another comment here). It's pretty wild to be living in a time where we have to be so wary of an entire article being prompt-engineered into existence by a lazy "author" eager for clicks.