Hacker News

And?

My experience still counts for something, and the example with those 700 VMs is something I haven't seen just once.



Having huge sprawling swarms of VMs is, for some teams, a problem to be solved, not a fact of life to be designed around.


Sorry, I'm not getting your point.

If I understand it right: the VMs were not there because people needed VMs; they were there because people needed compute.

We moved everything to k8s and we were able to do this because k8s can


The point is to deliver a small set of applications, not to come up with the most horizontally scalable possible deployment fabric.


I ran 1000+ VMs on a self-developed orchestration mechanism for many years and it was trivial. This isn't a hard problem to solve, though many of the solutions will end up looking similar to some of the decisions made for K8S. E.g. pre-K8S we ran with an overlay network like K8S, with service discovery like K8S, and with an ingress based on Nginx like many K8S installs. There's certainly a reason why K8S looks the way it does, but K8S also has to be generic, whereas you can often reasonably make other choices when you know your specific workload.
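The service-discovery piece of such a custom setup can be surprisingly small when you know your workload. A minimal sketch, assuming an in-memory registry with round-robin resolution (the `ServiceRegistry` class and its method names are illustrative, not the commenter's actual system):

```python
import itertools

class ServiceRegistry:
    """Minimal in-memory service discovery: services register host:port
    endpoints, and clients resolve a service name to one endpoint
    round-robin."""

    def __init__(self):
        self._endpoints = {}  # service name -> list of "host:port" strings
        self._cursors = {}    # service name -> round-robin iterator

    def register(self, service, endpoint):
        eps = self._endpoints.setdefault(service, [])
        if endpoint not in eps:
            eps.append(endpoint)
            # Rebuild the cursor so new endpoints join the rotation.
            self._cursors[service] = itertools.cycle(eps)

    def resolve(self, service):
        if service not in self._cursors:
            raise KeyError(f"no endpoints registered for {service!r}")
        return next(self._cursors[service])

registry = ServiceRegistry()
registry.register("api", "10.0.0.1:8080")
registry.register("api", "10.0.0.2:8080")
print(registry.resolve("api"))  # alternates between the two endpoints
```

A production version would add health checks and endpoint expiry, and the resolved endpoints would typically feed an Nginx upstream block, but the core lookup logic doesn't need more than this.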


And you don't think k8s made your life much easier?

For me it's now much more about proper platform engineering and giving teams more flexibility again, knowing that the k8s platform is significantly more stable than anything I have seen before.


No, I don't, for that setup. Trying to deploy K8S across what was a hybrid deployment spanning four countries and on-prem, colo, managed services, and VMs would've been far more effort than our custom system was (and the hardware complexity was dictated by cost; running on AWS would've bankrupted that company).


[flagged]


I'm not bragging.

I'm not a 'bro', and 'cringe'? This is not TikTok.

It gives context.

Don't you have anything to add to the discussion?



