Asking for information about non-public individuals, including myself. RAG-assisted GPT-4 easily provides such information. The GPT2 output is consistent with a good model without RAG (it tries to speculate, but ultimately says it doesn't have such information). I liked that it doesn't try to hallucinate things.
You can ask it its knowledge cutoff and it will respond November 2023.
It has no idea about the big events from the beginning of 2024, like the earthquake in Japan.
> You can ask it its knowledge cutoff and it will respond November 2023
It probably just repeated something based on common AI cutoff dates. LLMs don't have a sense of self or a thought process; they don't know more about themselves than the text they are given about themselves, and even then they are likely to default to some common text from the internet.
There is no OpenAI GPT-4 model with a November 2023 knowledge cutoff.
You can also test its knowledge, like I did, to validate that it doesn't know anything past November 2023.
I think it's prompted with a bunch of context information (like, "you are a helpful, harmless, honest AI assistant created by OpenAI, your knowledge cutoff date is ..., please answer the user's questions").
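To illustrate what I mean, here is a minimal sketch using the OpenAI Python SDK of how that kind of context gets injected. The system prompt wording is hypothetical (OpenAI's real one isn't public), and the model name is just an example:

```python
# Minimal sketch: a chatbot's "self-knowledge" typically comes from a system
# message injected before the conversation, not from the model introspecting.
# The system prompt text below is made up -- OpenAI's actual prompt is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful, harmless, honest AI assistant created by OpenAI. "
    "Your knowledge cutoff date is November 2023. "
    "Please answer the user's questions."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # example model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # injected before the user types anything
        {"role": "user", "content": "What is your knowledge cutoff?"},
    ],
)
print(response.choices[0].message.content)
# The answer simply reflects what the system message said.
```

Ask the same question without that system message and you'll get whatever the model defaults to from training, which is exactly the disagreement in this thread.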
If you really think it is just saying whatever it read on the web, how do you explain that not all LLM chatbots claim to be ChatGPT?
Engineering is happening; it's not just a raw model of web text connected directly to the user.