> Cursor is a nervous intern! They don’t want to admit they don’t know; we need to help by providing context.
With an intern, I give them some search terms and let them go learn. I don't have to do the searching for them. It's actually more important to help them learn how to evaluate the different results. It's not even that they "don't want to admit they don't know" (which is anthropomorphization); they are not designed or trained to ask clarifying questions. The chat-based interaction is an "afterthought" (a round of fine-tuning after initial training).
The big issues I see in this paradigm are:
(1) you have to know a lot of things already to do this
(2) if we automate all the low hanging fruit, how will we develop humans to the level of understanding to do this
(3) with a human, I can delegate; with an AI, I have to handhold. As much as people want to call it "pair programming", it is often more like having to teach, except it never truly learns, so I never get my lost time back
I think there are some updates I can make to this to give guidance around point (1), that you have to know a lot of things already. You can use AI assistance to help learn more. But ultimately, I don't believe we are at the point where the AI makes you "smarter"; but it _can_ make you more productive.
I disagree with (3) based on my experience. That feeling happens when I am not providing enough context. I rarely have experiences where I step back to provide more context and still end up in a dumb loop. Highly recommend providing lots of context + breaking down the problem more.
I would love to dig deeper into an example you have where you feel you "never get your time back". Because in general I am saving a lot of time from how much less typing I have to do.
I'd rather spend my time writing the code myself because it is enjoyable, than spend the same amount of time gathering context and verbosely explaining it in text to an AI
> Highly recommend providing lots of context + breaking down the problem more.
I don't have to do this with a human; I can delegate and know they will do it on their own. Also, the AI will make the same mistake if I come back a week later, whereas a human learns online. This is at the core of the lost time, and I don't think AIs are close to this level without an advancement that takes us beyond transformers
Interesting! I find I very often have to do this with humans. But yes, the repeated mistakes are a real thing. This is where ensuring I output context along the way is helpful. Although it's maybe annoying to "re-refer" to the context, I find that it helps the AI make the same mistakes less often