Hacker News

It's worse than that. It's ultimately a military technology. The end-game here is to use it offensively and/or defensively against other countries. Whoever establishes dominance first wins. And so you have to push adoption, so that it gets tested and can be iterated on. But this isn't about making money (they are losing it like crazy!). This is end-of-the-world shit, about whoever will be left standing once all the dominoes fall -- if they ever fall (let's hope they don't!)

But it's tacitly understood we need to develop this as soon as we can, as fast as we can, before those other guys do. It's a literal arms race.



Yeah, if you consider a military-grade AI/LLM with access to all military info sources, able to analyze them all much quicker than a human… there’s no way this isn’t already either in progress or in use today.

Probably only a matter of time until there’s a Snowden-esque leak saying AI is responsible for drone assassinations against targets selected by AI itself.


>Yeah, if you consider a military-grade AI/LLM with access to all military info sources, able to analyze them all much quicker than a human… there’s no way this isn’t already either in progress or in use today.

Still wouldn't mean much. Wars are won on capacity, logistics (the practical side, not the ability to calculate them), terrain and other advantages, and, when it comes to boots on the ground, courage, knowledge of the place, local population support, etc. Not "analyzing info sources" at scale, which is mostly a racket that pretends to be important.


Ok, I didn’t say anything about what you said though. I said it’s definitely either in progress or already implemented.


And I didn't refute anything about what you said. I said "Still wouldn't mean much."


This 100%. We're in the middle of an AI Manhattan Project and if "we" give up or slow down, another company or country will get AGI before "us" and there's no coming back after that. If there's a chance AGI is possible, it doesn't make sense to let someone else take the lead no matter how dangerous it could be.


The better analogy would be https://en.wikipedia.org/wiki/Project_Stargate

"If there's a chance psychic powers are real..."


One often forgets this.


With all the wackiness around AI, is this some Mutually Assured Delusion doctrine?


>The end-game here is to use it offensively and / or defensively against other countries.

Against other countries? The biggest endgame is controlling one's own population. That has always been the biggest problem/desire of elites, not war with other countries.



