Evaluating the veracity and relevance of everything it says. Reflecting on what it's given me and determining whether it meets my objectives. And the topics I use it for are thought-demanding!
If you are using it for marketing copy, that’s one thing. I’m using it to think through some very hard topics — and my kid is trying to learn how photosynthesis works atm.
As I understand it, these models respond at the same sort of level as the prompt: write like a kid and you get a simple reply; write like a PhD and you get a fancy one.
"Autocomplete on steroids" has always felt to me like it needlessly diminished how good it is, but in this aspect I'd say it's an important and relevant part of the behaviour.
The issue I'm talking about isn't prompting, but a limitation of the models and algorithms beneath that layer. Prompting only exists because of the chat fine-tuning applied in the later stages of training.
I feel like I spend more time trying to coax it into staying focused than anything else. Not where I want to spend my time and effort, tbh.