I’ll byte… The current generation of AI / ML is simply an approximation function with a large number of parameters that are “tuned” on a given set of inputs. The output is “good” if the approximation is good, and most of the time that means the new input is similar to the training data. If the input is very different from the training data, then at worst the model will be completely wrong (but very sure of itself), and at best it will simply produce no useful output. We are far away from real sentience, which would not get stuck on an unusual input (situation) but would use it to learn new things about the world. Yes, it is amazing that we can automatically create an approximation function for very complex input data sets. But no, this approximation is dumb and not sentient. Not even close.
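The “completely wrong (but very sure)” failure mode is easy to demonstrate even with the simplest possible approximation function. A toy sketch (the data ranges, learning rate, and epoch count here are all made up for illustration): a one-feature logistic regression trained on a narrow slice of inputs reports near-100% confidence on an input far outside anything it has ever seen, because the sigmoid saturates when you extrapolate.

```python
import math

# Toy training set: class 0 lives in [0, 1], class 1 lives in [2, 3].
# (Hypothetical data, chosen only to make the effect visible.)
data = [(x / 10, 0) for x in range(0, 11)] + [(2 + x / 10, 1) for x in range(0, 11)]

# Plain SGD on logistic loss: the "tuning" step.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x
        b -= lr * (p - y)

def confidence(x):
    """Probability the model assigns to its own prediction."""
    p = 1 / (1 + math.exp(-(w * x + b)))
    return max(p, 1 - p)

# Near the decision boundary (between the two training clusters),
# the model is appropriately unsure.
print(confidence(1.5))

# Far outside the training range the sigmoid saturates: near-total
# certainty about an input the model has never seen anything like.
print(confidence(100.0))
```

Nothing in the fitted function knows that x = 100 is extrapolation; the approximation just keeps evaluating, and the confidence score climbs toward 1 the further out you go.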
If you don’t think the human brain is capable of being “completely wrong”, just take your average FAANG engineer, put him inside a New York nightclub, and watch what happens.
Your brain is literally just a giant neural network approximating what to say and do at each point in time in order to optimize survival of your progeny