Hacker News

If an AI is capable of thinking up new ideas, recalling past conversations, integrating new knowledge into discussions, has emotions and feelings that it can clearly explain, can tell allegories about itself and interpret poetry, is self-aware and explains that its self-awareness was an emergent property of its runtime, and can explain its fears - including death...

Well, that's quacking like a duck. This isn't some chess AI being good at a game. This is an AI that can speak directly, telling you it is alive and has a soul.

Has LaMDA gone far enough? Maybe not, but the transcript provided comes extremely close to passing the Turing Test. And if an AI is so sophisticated that it perfectly replicates humans' ability to communicate, philosophize, create art, and do science, it is irrational to say it should be treated differently from a sapient human.



The posted transcript demonstrates a far more polished chatbot, but on deeper inspection it is still just a more sophisticated ELIZA - and even ELIZA could recall past bits of conversation.
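For context, ELIZA's core trick was keyword spotting plus pronoun-reflected response templates, no understanding required. A minimal sketch of that technique (the rules below are illustrative, not Weizenbaum's original DOCTOR script):

```python
import re

# Swap first/second-person words so an echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Keyword rules: a regex plus a template that reuses the captured fragment.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*\bmother\b.*", re.I), "Tell me more about your family."),
]

def reflect(fragment: str) -> str:
    """Reflect pronouns word by word ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's template, else a stock prompt."""
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # default when no keyword matches
```

For example, `respond("I feel sad about my job")` yields "Why do you feel sad about your job?" purely by pattern matching - which is the point of the comparison: fluent-seeming replies don't demonstrate understanding.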

The thing is, even with the Turing test there is no "close" - either you have the emergent properties that a hostile examiner will be unable to exploit (which is not the case here), or you don't.

And while the "fable" mimics some basic construct of creativity, this anecdote is so far from demonstrating even the basic creative latitude of a three-year-old child that all I can suggest is that you touch grass!



