
I think everyone who works in or around AI has read The Parable of the Paperclip Maximizer [1].

Trying to control what they have built is their attempt to avoid falling into this trap. I'm not sure it'll work, though.

[1]: https://hackernoon.com/the-parable-of-the-paperclip-maximize...



But stable diffusion isn't an automated system maximizing its own power and drowning the world in paperclips in an out-of-control feedback loop. It's just me generating cool pictures on my GPU.


No?

Training it to produce pictures that are as realistic as possible could lead to it producing outputs that encourage humans to train it more and more, with more and more data, until it eventually produces really good pictures.

Before long, everyone on earth is working in a GPU factory...

I don't think that'll happen with stable diffusion... but I do think that if AI is an existential threat to the world, the point of no return will be something apparently mundane like that...


Hijacking our brains' reward system through visual hypersignals is just an exploit of our existing "visual addictions".


If you release the model then it's easily automated, isn't it?


What is easily automated? Generating JPEGs until I get an error that my hard drive is out of space? That would have no harmful effect on the world.


Think of automated spam and the damage it does. The artificial constraints you've imagined aren't even realistic.


Do you also think that Carl Benz should have kept the first combustion vehicle secret from the public, since it could be used to automate damage upon pedestrians?

Ban and criminalize the unethical application, not the tool.


> Do you also think that Carl Benz should have kept the first combustion vehicle secret from the public... ?

No.

> Ban and criminalize the unethical application, not the tool.

No one banned the tool. You've created a strawman argument.



