
I think computer scientists/programmers (and other intellectuals dealing only with ideas) strongly overvalue access to knowledge.

I'm almost certain that I can give you components and instructions on how to build a nuclear bomb and the most likely thing that would happen is you'd die of radiation poisoning.

Most people have trouble assembling IKEA furniture; give them a hallucination-prone LLM and they are more likely to mustard-gas themselves than synthesize LSD.

People with the necessary skills can probably get access to the information in other ways - I doubt an LLM would be an enabler here.



A teenager named David Hahn attempted just that and nearly exposed his whole neighbourhood to radiation poisoning.


Wow, never heard about that. Interesting.

For the curious: https://en.wikipedia.org/wiki/David_Hahn


What a shame. That boy lacked proper support and guidance.


Yeah, sad to see he died of a drug overdose at 39.


>I'm almost certain that I can give you components and instructions on how to build a nuclear bomb and the most likely thing that would happen is you'd die of radiation poisoning.

An LLM doesn't just provide instructions -- you can ask it for clarification as you're working. (E.g. "I'm on step 4 and I ran into problem X, what should I do?")

This isn't black and white. Perhaps given a Wikihow-type article on how to build a bomb, 10% succeed and 90% die of radiation poisoning. And with the help of an LLM, 20% succeed and 80% die of radiation poisoning. Thus the success rate has increased by a factor of 2.

We're very lucky that terrorists are not typically the brightest bulbs in the box. LLMs could change that.


I would say that if you don't know what you're doing, an LLM makes the chance of success on nontrivial tasks more like 1% - especially for multi-step processes, where it just doubles down on hallucinations.


The Anarchist Cookbook - anyone have a link?

THE ISSUE ISN'T ACCESS TO KNOWLEDGE! And alignment isn't the main issue.

The main issue is SWARMS OF BOTS running permissionlessly, wreaking havoc at scale - being superhuman at ~30 different things, all the time. Not that one of them might say something racist.


I'm not saying that LLM bots won't be a huge problem for the internet. I'm just commenting on the issues raised by OP.

Thing is, there will be bad actors with the resources to create their own LLMs, so I don't think "regulation" is going to do much in the long term. It certainly raises the barrier to deployment, but the scale of the problem will eventually be the same, since the tech lets one actor scale their attack easily.

Limiting access also limits the use of tech in developing solutions.


No, we don't. Knowledge is power. Lack of it causes misery and empires to fall.


Knowledge is power, true, but even more powerful and rare is tacit knowledge: the vast collection of minor steps that no one bothers to communicate, the things locked in the heads of the greybeards of every field that keep civilizations running.

That's why simply reading instructions and gaining knowledge is only the first step of what could be a long journey.


More than anything, technology can make it easier to disseminate that knowledge. Yet another reason why we shouldn't understate the importance of knowledge.


LLMs impart knowledge without understanding. See the classic parable of Bouvard and Pécuchet.


There are different kinds of knowledge - the LLM kind (mostly textbook knowledge) isn't as valuable as a lot of people assume.


The problem with AI won't be forbidden knowledge but mass misinformation.



