
I'm sorry, I'm not following. Are you saying publishing novel research about strong AI is analogous to releasing a virus, or to not taking a full course of antibiotics?


No, not quite. I strongly suggest you familiarize yourself with the gain-of-function bioethics literature and recent debates, to get a better sense of what I'm trying to convey.


Why don't you just summarize your actual point or at least provide further guidance? You literally posted a link without any further clarification about its relevance.

As it stands, you're not giving me any incentive to "strongly reconsider" my position.


[flagged]


I sound uncharitable because you posted a (from my perspective, seemingly random) link to a research article in a different field and implored me to change my thinking, without any other commentary. That's not a substantive addition to the conversation, because on my end I have no idea whether to take your link seriously or how much time and effort to invest in learning about it.

After reading it for two minutes, I don't see how to draw a productive insight about artificial intelligence from what seems to be an article about mutations. I offered my sincere first thought about what you might have meant and asked for further clarification, and you shot that down without offering any clarification of your own.

Now after I've twice asked you for clarification and you haven't provided it, you're telling me you're wasting your time. Do you see how this is unhelpful? It's borderline Kafkaesque.


He's probably suggesting that in virology it's generally frowned upon to publish research on how to, for instance, make smallpox airborne and as virulent as the flu. So it could be wise to leave out the exact details of how to make an AI recursively self-improving when publishing. If you've spent time in research, you'll know that many groups already do this, leaving out small but critical details that make replication difficult or impossible. This happens across fields, not just in CS/AI.


*She -- but yeah, as a first-order approximation, that's in the vicinity of it. Thanks!


OK, so this would lead to restricting initial access to strong AI to a few well-funded corporate and government groups.

Only very briefly, though, because it's way easier to leak code than to obtain the source material and tools to weaponize a deadly virus.


The exact details of how to make a thermonuclear weapon are classified. Do you think that's done improperly?


Are you comparing building a nuclear bomb to running a piece of code?


Well, it depends on the piece of code, doesn't it? How many people could actually run Google's search engine on their own hardware, assuming they had access to the code?


See my response above.


We are talking about an actual AGI here, right? That certainly has the potential to do damage comparable to nuclear weapons.


> Only very briefly, though, because it's way easier to leak code than to obtain the source material and tools to weaponize a deadly virus.

True, but perhaps AGI would require substantial, non-trivial supercomputing resources as well...


Let's say you need 1,000 p3.16xlarge instances to run it (that's 8,000 V100 GPUs); that works out to roughly $25,000/hour. So the launch price is within reach of most US programmers. After launch, a true AGI can probably find a way to stay "alive".
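
Quick back-of-the-envelope check of that figure, as a Python sketch (the ~$24.48/hour on-demand rate for p3.16xlarge and 8 V100s per instance are my assumptions, not figures from the parent):

    # Rough cost of renting 1,000 p3.16xlarge instances.
    # Assumed: 8 V100 GPUs per instance, ~$24.48/hour on-demand list price.
    instances = 1000
    gpus_per_instance = 8
    price_per_instance_hour = 24.48  # USD/hour, assumed on-demand rate

    total_gpus = instances * gpus_per_instance            # 8,000 V100s
    cost_per_hour = instances * price_per_instance_hour   # ~$24,480/hour
    cost_per_day = cost_per_hour * 24                     # ~$587,520/day

    print(f"{total_gpus} GPUs, ~${cost_per_hour:,.0f}/hour, ~${cost_per_day:,.0f}/day")

A full day at that rate is well beyond an individual budget, which is why only the "launch price" (an hour or two of runtime) is arguably within reach.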



