
Complete nonsense. https://www.nature.com/articles/s41587-022-01618-2

Language models can generate novel, functional protein structures that adhere to a specified purpose: structures that didn't exist before, never mind appear in the training data. The idea that there's some special distinction between the reasoning LLMs do and what humans do is unfounded nonsense.
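Concretely, this is tag-conditioned generation: prepend a control tag describing the desired function, and the model continues with amino-acid tokens. A minimal sketch using the transformers library; the checkpoint name and control tag below are made-up placeholders, not the paper's actual setup:

    # Sketch of ProGen-style conditional generation.
    # "some-org/progen-like-model" and "<lysozyme>" are hypothetical;
    # ProGen's real control tags encode family/function metadata.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("some-org/progen-like-model")
    model = AutoModelForCausalLM.from_pretrained("some-org/progen-like-model")

    # Condition on a tag for the desired protein family, then sample
    # a continuation of amino-acid tokens.
    ids = tok("<lysozyme>", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=200, do_sample=True, top_p=0.95)
    print(tok.decode(out[0]))

The point is that the tag steers sampling toward sequences with the specified property, which is exactly the "adhere to a specified purpose" part.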

A distinction you can't test for (this so-called "true understanding") is not a distinction.



This is the same as getting ChatGPT to calculate something. It is likely that it can infer a correct result from the training data and give the right "new" answer, but that doesn't mean it has any understanding of maths.

That's why models like ChatGPT are trained on massive datasets: to hide the fact that the AI is actually a very dumb pattern-matching machine.

The only reason they found these "new" protein structures is that the AI could match them to patterns it learned from the training data.

They even claim this:

> akin to generating grammatically and semantically correct natural language sentences on diverse topics

Just like ChatGPT can generate grammatically and semantically correct natural language, except that if the topic is not something it was trained on, it will output grammatically correct, plausible-sounding nonsense.

> ProGen can be further fine-tuned to curated sequences and tags

Which suggests there still needs to be a human who can reason to curate the sequences, something AI can't do and, at this stage, probably never will.
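And that fine-tuning step is nothing magical: it just continues causal-LM training on the human-curated (tag, sequence) pairs. A rough sketch, assuming a hypothetical checkpoint and a curated text file of "<tag> SEQUENCE" lines; none of these names come from the paper:

    # Sketch of fine-tuning on a human-curated set of tagged sequences.
    # Model name, file name and tag format are hypothetical.
    from datasets import load_dataset
    from transformers import (AutoTokenizer, AutoModelForCausalLM,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    tok = AutoTokenizer.from_pretrained("some-org/progen-like-model")
    model = AutoModelForCausalLM.from_pretrained("some-org/progen-like-model")
    if tok.pad_token is None:
        tok.pad_token = tok.eos_token  # causal LMs often lack a pad token

    # Each line of curated.txt: "<tag> SEQUENCE", vetted by a human first.
    ds = load_dataset("text", data_files={"train": "curated.txt"})["train"]
    ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

    # mlm=False gives the next-token (causal LM) objective with padding.
    collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=False)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ft", num_train_epochs=3),
        train_dataset=ds,
        data_collator=collator,
    )
    trainer.train()

Note where the human sits in this loop: the model only ever sees what someone already judged worth training on.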

This is something companies running these models won't openly admit, because that would confuse investors.



