
No, the argument is that it's insane to pay developers north of $100k per year plus benefits and then not invest $4k every three years to buy them marginally faster hardware.

And I don't think that argument is particularly convincing. Typing this from my 3.5 year old work MacBook Pro.



> marginally faster hardware

Not sure where your experience comes from. It obviously also depends on what exactly you do with your computer. Going from a Mid 2015 MacBook Pro to a Mid 2018 MacBook Pro gave a 50% improvement on compute workloads according to Geekbench [1].

I work on largish (win/mac/linux/ios/android) C++ code bases, and compile and test-run times are definitely an issue. Switching to newer computers every few years definitely boosted productivity for me personally (and for my co-workers as well). We mostly saw compile times halve with each generation jump (3 years), through a combination of IO, memory, and CPU performance increases.

Not sure what marginally faster hardware means exactly for you, but for us it's definitely been significant, not marginal.

YMMV, but once you do the math (saving 10 minutes/day at $200/h over 200 days comes to more than $6,000 per year), it becomes pretty hard to argue economically against investing in faster tooling of some sort.
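For what it's worth, here's the back-of-envelope math spelled out (the 10 min/day, $200/h, and 200 days/year figures are the assumptions from above, not measurements):

```python
# Value of 10 minutes saved per developer per day,
# using the rough figures from the comment above.
minutes_saved_per_day = 10
hourly_rate = 200       # USD, fully loaded
working_days = 200      # per year

yearly_savings = minutes_saved_per_day / 60 * hourly_rate * working_days
print(f"${yearly_savings:,.0f} per year")  # comfortably above a ~$4k machine amortized over 3 years
```

Even if you halve every one of those numbers, the savings still cover the hardware.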

Typing this from a 2.5-year-old MacBook Pro.

[1] https://browser.geekbench.com/mac-benchmarks


If you're dealing with something intensive then upgrading makes a lot of sense. If you've got large compilation times in your pipeline or if you're doing machine learning and need to throw loads of hardware at a problem I totally get that. I'm sure there are plenty of other situations that justify this too.

But if you're like me where most of that happens off on the build machines there is very little impact in upgrading your hardware.

A 50% improvement on compute workloads probably wouldn't be noticeable on the setup I run. Outside of compiling I don't think I push a single core much above 30%.

I guess it really comes down to what you're doing.


> Mid 2015 Macbook Pro to Mid 2018 Macbook Pro got a 50% improvement for compute workloads according to Geekbench

If this is enough to make a huge difference, then you should be running the workload on a low-end or better server instead of a MacBook. You'll get much more performance for a fraction of the cost, and you won't have to pay for things you don't need, like a new battery, screen, etc., that come attached to the CPU you actually want.

> I work on largish (win/mac/linux/ios/android) C++ code bases and compile and test run times are definitely an issue. Switching to newer computers every few years definitely boosted productivity for me personally (and for my co-workers as well). We mostly saw that compile times halved for each generation jump (3 years) as a combination of IO, memory and CPU performance increases.

Have you exhausted all the other avenues there? Do you have distributed builds? Is everything componentized? Do you compile to RAM disks?

For that matter, why a MacBook? Why not a high-end gaming laptop with more CPU, RAM, and GPU resources?


YMMV indeed. Most devs aren't working with large C++ codebases with slow compile times and making $400k working only 40 weeks a year.



