Hacker News | nekitamo's comments

100% this. Ant was bad in many ways, but at least it was lightning fast. Gradle is just tragic.



I'd like to mention the web framework I'm using these days, Jooby:

https://jooby.io/

I've found it quite satisfying compared to the other "new" ones.

As for the original topic, I just want to echo what others have said: I am happiest in Java when writing it as if it were Golang code. That, and the first-class runtime, performance, and deep ecosystem make it a great choice in 2026.


I one hundred percent support the above. Jooby is a great, performant framework, simple to use with tons of flexibility and features. Super happy with it!


I've participated in some corporate shit-shows in my day, but man I don't think I've ever seen one burn cash this fast.

Another thought: they say the software you ship reflects your org chart ("you ship your org chart"). Given how far Meta has slipped in the AI race over the last year, their org-wide dysfunction is starting to seriously harm them, from financials to execution to talent. They need to get their act together, starting from the top.

I'm not a fan of Meta, but I'm a big fan of Llama. It was the first notable open weights model, and paved the way for all the others. Just for that I want to say: I'm rooting for you guys. Hope an amazing Llama 5 release comes after all this pain and churn.


These sorts of people do not care about any of us.

Meta is building a more powerful version of Llama that will likely no longer be open-weight, moving to a closed model instead [0].

You're more likely going to be using Deepseek v4 or Deepseek R2 as an open weight model than Llama 5 at this point.

[0] https://www.digitimes.com/news/a20251211PD206/meta-llama-dev...


Which comparable US dedicated server providers do you prefer?


I tend to mostly use dedicated servers from Hetzner for my own projects and for my clients' projects. Whenever they explicitly want US servers, I tend to go with Vultr's dedicated servers, which have been serving us well for many years.


OVH has dedicated servers in the USA and Canada.


I've read several reports from customers saying that their customer service is really bad. Difficult to know with online reviews of course. Does anyone have positive stories to share? I am looking at Australian hosts specifically and Hetzner doesn't have any data centers here.


We use them heavily for test boxes and running experiments. Standard off-the-shelf machines are provisioned almost instantly, and we've never had any problems.

More custom stuff (eg 100Gb/s NICs) takes a bit longer, but they've always been super responsive and quick to sort out any issues!

The price / performance you get from something like their AX162 is just crazy, although unfortunately with the whole RAM / NVMe shortage the setup fee has gone up quite a lot.


Been using them in production for years, never disappointed.

What you should be aware of is their new exploration of S3 storage. I mean, the S3 works and everything, but it's still too early: the servers are kind of slow and sometimes fail to upload/download, and they are still tuning the storage architecture. The API key management is kind of primitive (although much more headache-free than configuring AWS), and the online file browser is lacking.

But for VPS servers, they are battle-tested veterans.


In my tests, Qwen3.5-35B-A3B is better, there is no comparison. Better tool calling and reasoning than Qwen3-Coder-Next for Html/Js coding tasks of medium size. Beware the quants and llama.cpp settings, they matter a lot and you have to try out a bunch of different quants to find one with acceptable settings, depending on your hardware.


Thank you. The difference was quite noticeable today.


Because then they get some form of control over Anthropic. Solely through the act of using it, they claim some form of ownership over it.


Getting banned from Gemini while attempting to improve Gemini is the most Googley thing ever :D Imagine letting your automated "trust and safety" systems run amok so that they ban the top 0.01% of your users with no recourse. Google really knows how to score an own-goal.


I really don't understand what in his usage pattern would have triggered that obviously automated ban. Can somebody let me know what they think is adversarial enough to be considered 'hacking' or similar by a bot?


Google is dealing with a wave of abuse over its Antigravity IDE, with 'account switching' tools designed to use a ton (20+) of free or pro accounts, giving the user essentially unlimited usage. I'm guessing they've deployed some rather aggressive countermeasures to stop this, including banning clients that seem to be accessing "private" APIs outside of a Google product.


The same as the distribution of companies which are profitable over time and grow steadily, vs the others which clumsily flail around to somehow stay alive. To the winner go the spoils, and the winners will be a tiny fraction of companies, same as it ever was.

A way I look at it is that all net wealth creation in public companies has come from just 4% of businesses:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2900447

https://www.reddit.com/r/investing/comments/rk4udc/only_4_of...

It'll be similar with software companies. 4% of them will hit on a unique cultural and organizational track which will let them thrive, probably using AI in one form or another. The other 96% will be lucky to stay alive.

Same as it ever was.


This is a common problem for people trying to run the GPT-oss models themselves. Reposting my comment here:

GPT-oss-120B was also completely failing for me, until someone on reddit pointed out that you need to pass back in the reasoning tokens when generating a response. One way to do this is described here:

https://openrouter.ai/docs/guides/best-practices/reasoning-t...

Once I did that it started functioning extremely well, and it's the main model I use for my homemade agents.

Many LLM libraries/services/frontends don't pass these reasoning tokens back to the model correctly, which is why people complain about this model so much. It also highlights the importance of rolling these things yourself and understanding what's going on under the hood, because there are so many broken implementations floating around.
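To make the failure mode concrete, here's a minimal Python sketch of what "passing the reasoning tokens back in" means. It assumes an OpenAI-compatible chat API where the assistant reply carries a `reasoning` field alongside `content` (the field name follows the OpenRouter convention linked above; other servers may name it differently):

```python
def append_assistant_turn(history, response_message):
    """Copy the assistant reply back into the chat history,
    *including* its reasoning field, so the model sees its own
    chain of thought on the next turn."""
    turn = {
        "role": "assistant",
        "content": response_message.get("content", ""),
    }
    # The crucial, easy-to-drop part: many clients strip this field
    # out before the next request, which breaks GPT-oss badly.
    if "reasoning" in response_message:
        turn["reasoning"] = response_message["reasoning"]
    history.append(turn)
    return history

# Illustrative round trip (the reply dict stands in for a real API response):
history = [{"role": "user", "content": "What is 2 + 2?"}]
reply = {"role": "assistant", "content": "4", "reasoning": "2 + 2 = 4."}
append_assistant_turn(history, reply)
```

Broken clients effectively do the same thing minus the `reasoning` key, which is why the model looks dumb through them but works fine when you build the request yourself.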


I used it with OpenAI's Codex, which had official support for it, and it was still ass. (Maybe they screwed up this part too? Haha)

