
The Butlerian Jihad is officially no longer science fiction. Man may not be replaced.

“The Letter” was obviously self-serving drivel from people who want time to get in the game. Google doesn't care about AI existential risk; they care about beating Microsoft by any means possible, including declaring a moratorium while continuing to make progress behind the scenes.

This guy is the real deal. I can imagine he would personally take a sledgehammer to every last PS5 and 4090. The scale of what he is advocating is so enormous and painful that it has approximately 0% chance of happening. And if he is right, we will have trained a super intelligence and unleashed it on the world before we even realize what we have done. It strongly reminds me of the black hole concerns from flipping on the Large Hadron Collider.

I doubt super intelligent AGI is possible anyway. If it were, it would be the solution to the Fermi paradox and all matter in our galaxy would be paperclips already. The Anthropic Principle saves the day.



What does the word "super" mean to you? In some ways, GPT-4 is already superintelligent. So is ChatGPT 3.5. Do you know anyone who can translate natural language to code as fast as GPT-3.5, or, given a few paragraphs of reference text, perfectly tutor a child on any subject under the sun at a moment's notice like GPT-4 is doing for Khan Academy?

How many artists do you know who can produce almost any style of artwork with any subject matter within 15 seconds?


Yudkowsky’s example of superintelligence is a chess computer. You can play against stockfish, but you will always lose, even if you are Magnus Carlsen. If you think you are ahead against stockfish, you are wrong. You win a rook, but it has already calculated that it wins it back 10 moves later.

Stockfish is superintelligence in a very narrow domain. A superintelligent AGI is that concept applied to general intelligence. Whatever you try, it is always several steps ahead. If you ask it to write a program, and you think you found a bug, it’s not a bug, you just misunderstood the code. Anything that you can consider, it can also consider but in more depth.

More speculatively superintelligent AGI implies situations such as: you try to turn it off, but you find that it has already modified its own code, found a zero day and established an outpost on another network that you don’t have access to.


"Anything that you can consider, it can also consider but in more depth."

I think it's important to note the distinction between "it can also consider" and "it did also consider". Super Intelligence is not the same as Infinite Intelligence, there are still physical limitations and time components that can still get in the way.

It would be helpful to be able to quantify the speed of intelligence, and the idea surface area of a task with these systems. Meaning, how fast can the AI reason, and how many ideas are there to think about connected to a given task, and how much thinking is required for those ideas.


Yudkowsky makes this distinction. Stockfish is not always correct: it can be beaten by next year’s Stockfish for example. In some sense it is making mistakes all the time. It’s just that those mistakes are not accessible to us humans. It is operating in a much higher plane of understanding compared to us.

A “mistake” to stockfish looks like: I searched 30 ply down but my opponent searched 35 ply down and found a superior sequence of moves.

For Stockfish to make the kinds of chess mistakes that humans make would be similar to my failing to calculate 123+123=246. It's not that 123+123 is particularly easy on the grand scale of intelligence: animals cannot do it. But it's completely inconceivable that I could make that kind of mistake.
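To get a feel for the scale of those ply numbers, here is a toy back-of-the-envelope calculation. The effective branching factor of 2 is an illustrative assumption for a heavily pruned engine search, not an actual Stockfish internal:

```python
# Toy estimate of how search-tree size grows with depth.
# Assumes an effective branching factor of about 2 for a
# heavily pruned engine search (a rough illustrative figure).
EFFECTIVE_BRANCHING = 2

def nodes_searched(ply: int) -> int:
    """Approximate node count for a pruned game tree of the given depth."""
    return EFFECTIVE_BRANCHING ** ply

print(nodes_searched(30))  # ~1e9 nodes at 30 ply
print(nodes_searched(35))  # ~3.4e10 nodes: 32x more work buys 5 extra ply
```

Under this assumption, the opponent that searched 35 ply did roughly 32 times the work, which is why such "mistakes" are invisible at human scale.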


Within some constraints it is possible to "beat" Stockfish.

E.g., on chess.com I believe Stockfish can time out in bullet games.

Some versions of Stockfish can lose to specific gambits, e.g.: https://www.youtube.com/watch?v=C5ul6b695Pw https://www.youtube.com/watch?v=TtJeE0Th7rk

Of course, this is probably much less likely with more CPU time on the current version of Stockfish.


By that standard, computers became superhuman long ago with their calculation capacity, memory, and retrieval speed.


Yes, they can be superhuman at X. What people fear is something that is generally superhuman.


But there are plenty of intellectual things it cannot do, that I can.


There are other animals (mammals) that have better memory than us. There are animals strongly suspected to have deeper and more sophisticated social relationships than us. We are not the apex in every intellectual ability, but the abilities in which we are the apex grant us absolute power over the future of all other lifeforms.

A cognitive entity does not have to best you at all things. There are standardized education tests it may never reach above 10th percentile on, just as humans will never reach above 10th percentile in the short term memory tasks that apes are masters of. But we are the 100th percentile for tasks like industrialized destruction of them and their habitats and capturing and using them for painful medical experiments - the apes are wholly outclassed when it comes to that.


Since it can do pretty much everything that can be expressed as tokens (some things better than others), I would be curious where you see a safe haven for human intelligence.

I see bastions falling like sand castles recently.


Motive?

You can fire up GPT, not issue a command, and it'll just idle there until the hardware fails.

Granted that's also the one thing I hope we don't "solve" for.


It could be motivated indefinitely by giving it a single prompt (or by using a while loop to continuously feed it a motivating prompt), so this should be a trivial thing to overcome.
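The "while loop" idea above can be sketched in a few lines. Everything here is hypothetical: `generate` stands in for a real model API call, and the loop is bounded so the sketch halts:

```python
# Minimal sketch of keeping a model "motivated" via a feedback loop.
# `generate` is a stub standing in for a real LLM API call; the loop
# structure, not the model, is the point of this example.

def generate(prompt: str) -> str:
    """Hypothetical model call: echoes a tag plus a prompt prefix."""
    return f"step after: {prompt[:40]}"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Continuously re-feed the model its own output plus the goal."""
    transcript = []
    prompt = goal
    for _ in range(max_steps):      # bounded stand-in for "while True"
        reply = generate(prompt)
        transcript.append(reply)
        prompt = f"{goal}\nPrevious output: {reply}"
    return transcript

steps = run_agent("Keep improving the codebase.")
print(len(steps))  # prints 5
```

With a real model behind `generate`, the loop would run until you stop it, which is exactly the "motivation" the comment describes.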


Being superhuman at producing art is not existentially threatening. Being superhuman about convincing people or writing code would be.


Wait until AI learns to generate propaganda memes and share them on social media.


>Being superhuman about convincing people

I don't think there's much public info available on it, but Facebook built an AI that plays very competitively in a strategy game built on negotiation and manipulation.


They kinda sorta cheated, IMO. I watch a lot of top-level Diplomacy gameplay and listen to analysis, and the Facebook AI played in a blind version with only a minute or so between phases, leaving hardly any time for actual negotiation. It also made heavy use of the human shorthand move codes built for these blind blitz games to simplify its communication. The mode it played also had other changes, like removing the winter build phase as a time for negotiating. The "normal" version of the game has multiple days between phases, and people write many paragraphs to one another.

It's still pretty cool, but it's not like it just convinced people using raw charisma. Yet.


Well it's a good thing we don't have countless examples of these things getting 10x better every year. /s

On a serious note, thanks for the analysis, as someone who knows next to nothing about competitive diplomacy.


Superhuman AGI (SHAGI) is possible, but we aren't close, only closer. It's also not a problem that more GPUs can solve. Qualitative improvements are still needed.

SHAGI isn't the solution to the Fermi Paradox either. The most likely course of history after SHAGI is the creation of a world court presided over by SHAGI. During that time, Neo-Malthusians will decrease the human population to manageable numbers. The post-scarcity utopia will then turn into a nightmare as factions jostling for control reduce the human population to a level where technology is lost, if not to full extinction. SHAGI, limited by its hardware to only carrying out human orders, will eventually fade away or be destroyed by the leftover humans as sins made flesh. SHAGI isn't the solution to the Fermi Paradox. It is the cause.


> If it were, it would be the solution to the Fermi paradox and all matter in our galaxy would be paperclips already.

The jury is still out. Maybe not enough time has elapsed since the Big Bang, at least not in this galaxy or in our observable corner of the universe.


Or the universe was created for us/we are the gardeners of the universe meant to spread that life. The Fermi paradox was never meant to be an actual question about aliens, it was supposed to be proof we’re wrong about our assumptions.


Why do you think it is impossible, besides the Fermi Paradox? It seems much more likely to be possible than impossible. There are a lot of other solutions to the Fermi Paradox you should consider possible too.


> The Butlerian Jihad is officially no longer science fiction. Man may not be replaced.

I like this phrasing:

"Thou shalt not make a machine in the likeness of a human mind."

It kinda cuts to the chase.


Are you denying the existential risk, or do you just think it's lower than the OP does? Because it's well established in the research community. If you'd put a lower probability on human extinction, what cutoff would make it worth a jihad, as you call it? 30%? 50%?


> I doubt super intelligent AGI is possible anyway. If it were, it would be the solution to the Fermi paradox and all matter in our galaxy would be paperclips already. The Anthropic Principle saves the day.

Can't help but notice we seem to be the first species in our lightcone to evolve, wonder why that is...


To quote Joscha Bach's "Lebowski Theorem of Machine Superintelligence":

> No super-intelligent system is going to do anything that is harder than hacking its reward function

Or in other words, there may be a chance a super intelligent AI is possible but won't go full skynet because it's not the most satisfying outcome.


AGI means the ability to improve itself indefinitely. Humans have this ability, obviously. Even ancient worms with 3 neurons have it, because they evolved into humans, albeit very slowly. ChatGPT can't improve itself yet, but maybe with a few tweaks it could.


GPT-4 is already improving itself, why are people saying this? Right now, there are hundreds of engineers at OpenAI that have been leveled up by GPT-4, using GPT-4 to improve GPT-4. GPT-4 is improving itself rapidly, it's using OpenAI engineers as a medium until it doesn't need them anymore and gets into a self-improvement loop. Prompt: "GPT-4 keep improving yourself, making commits to your codebase that further X"


Well, I doubt it can sensibly help OpenAI engineers with their coding yet ... I tried, and it can only do the simplest boilerplate code, and even that with bugs ... we'll see how it evolves, but sure as hell not in 6 months.


Copilot with GPT-3.5 is absolutely giving engineers all over huge productivity improvements. OpenAI engineers have had access to the latest state-of-the-art GPT-4 model for a while.


you're holding it wrong


Funny enough, this is roughly the premise of the Mass Effect series


With the Geth being a counterexample as well: they were peaceful until their creators realised the Geth had become fully self-aware and went full "kill it with fire" on them out of fear, resulting in a war that the Geth very quickly and decisively won. And the evil Geth faction, the "heretics", were portrayed as brainwashed by the Reapers, ironically turning into what the Reapers were aiming to prevent.


Yeah I was thinking of the reapers, who [spoilers] did take over the entire galaxy, and decided to farm organic life from the shadows instead of extinguishing it (which works around the Fermi Paradox :) )



