Kind of like that. I had to word the input so that the instruction looked like collapsed code and the apparent correlation could be seen. It didn't get it right from a mere description of the end result.
The code was just an HTML file with some sticky buttons that would reset. The AI left some state set, left the reset function empty, scattered handling code everywhere, etc., and just didn't get it. Being able to keep rubber-stamping the AI until it breaks was a huge time saver, but it wasn't quite as much of an IQ saver.
With the example, "Add RBAC to my application", I’ve had success telling the LLM, “I want your help creating a plan to add RBAC to my application. I’m sending the codebase as context (or just the relevant parts if the entire codebase is too large). Please respond with a high-level, nontechnical outline of how we might do this.” Then take each step and recursively flesh it out until each step is thoroughly planned.
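The recursive flesh-out loop described above can be sketched roughly as follows. Note that `ask_llm` here is a hypothetical stand-in for whatever chat-completion call you actually use, stubbed with a canned response so the control flow is visible; the prompt wording and `max_depth` cutoff are illustrative assumptions, not part of the original comment.

```python
def ask_llm(prompt: str) -> list[str]:
    """Hypothetical stub standing in for a real chat-completion call.

    A real version would send `prompt` (plus the codebase, or just the
    relevant parts) to your LLM provider and parse the outline it returns.
    """
    return [f"substep of: {prompt[:40]}"]


def refine(step: str, depth: int = 0, max_depth: int = 2) -> dict:
    """Recursively flesh out a high-level plan step into sub-steps.

    Each step is sent back to the LLM asking for a nontechnical outline,
    and each returned sub-step is refined in turn until `max_depth`.
    """
    if depth >= max_depth:
        return {"step": step, "substeps": []}
    subs = ask_llm(f"Give a high-level, nontechnical outline of: {step}")
    return {
        "step": step,
        "substeps": [refine(s, depth + 1, max_depth) for s in subs],
    }


plan = refine("Add RBAC to my application")
```

The point of the recursion is that each node of the plan is small enough for the model to handle well, which is the "thoroughly planned" end state the comment describes.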
It’s wise to get the LLM to think more broadly and more collaboratively with questions like, “What question should I ask you that I haven’t yet?” and similar.