
> AI is also a problem because disposable code is what you would assign to junior programmers in order for them to learn.

It's also giving PHBs the ability to hand ill-conceived ideas to a magic robot, receive "code" they can't understand, and throw it into production, all while firing what real developers they had on staff.



I expect many of those companies to fail in the 3mo-2y timeline, so in many ways I welcome PHBs to embrace their full stupidity. Same for the people who funded them.

I do feel semi-sorry for anyone who paid for the services of those companies, though. Maybe something good will arise from that too, in the end; for example, it'd be nice if US society taught more critical reading skills to its members.

The interesting game for the non-PHBs among us is figuring out if/how we can use LLMs in less risky ways, and what all is possible there. For example, I'd love to see work put into LLMs helping with formal correctness of software; there's a hard backstop there where either the proof checks or it doesn't. Code changes needed to enable less-painful proofs would hopefully be largely refactorings, where reviews should be easier, and it might even work out to fuzz test that the old and new implementations return matching output for the same input. Or similarly, an LLM-powered test coverage improver that only writes new tests (old-school/branch-based/mutation-based; there's plenty of room there).
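The differential fuzzing idea above can be sketched in a few lines: feed the same random inputs to both the old and the refactored implementation and flag any divergence. This is a minimal illustration with made-up function names (`sum_old`, `sum_new`), not anything from a real codebase; a production setup would use a property-testing library like Hypothesis instead of raw `random`.

```python
import random

# Hypothetical pair: an original implementation and an LLM-produced
# refactoring of it. The names and bodies here are illustrative only.
def sum_old(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_new(xs):
    # refactored version whose behavior we want to check
    return sum(xs)

def differential_fuzz(old, new, trials=1000, seed=0):
    """Run both implementations on the same random inputs and
    raise on the first input where their outputs diverge."""
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(0, 50)
        xs = [rng.randint(-10**6, 10**6) for _ in range(n)]
        assert old(xs) == new(xs), f"divergence on input {xs!r}"
    return trials  # number of cases that matched

differential_fuzz(sum_old, sum_new)
```

The backstop here is weaker than a proof (it only samples the input space), but it's cheap, and any failing input it finds is a concrete, reviewable counterexample.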
