> it appears to me to be really hard to guard against
I don't want to sound glib, but one could simply not let an LLM execute arbitrary code without reviewing it first, or only let it execute code inside an isolated environment designed to run untrusted code
the idea of letting an LLM execute code it's dreamt up, with no oversight, in an environment you care about, is absolutely bananas to me
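For concreteness, here's a rough sketch of what that isolation could look like, assuming Docker is available. The function name and resource limits are illustrative, and this is a starting point rather than a hardened sandbox:

```python
import os
import subprocess
import tempfile

def run_untrusted(code: str, timeout: int = 30) -> subprocess.CompletedProcess:
    """Execute LLM-generated Python inside a throwaway container:
    no network, read-only filesystems, capped memory and CPU."""
    with tempfile.TemporaryDirectory() as workdir:
        script = os.path.join(workdir, "snippet.py")
        with open(script, "w") as f:
            f.write(code)
        return subprocess.run(
            [
                "docker", "run", "--rm",
                "--network", "none",       # no calling home, no exfiltration
                "--read-only",             # immutable root filesystem
                "--memory", "256m",        # cap memory...
                "--cpus", "0.5",           # ...and CPU
                "--cap-drop", "ALL",       # drop all Linux capabilities
                "-v", f"{workdir}:/work:ro",
                "python:3.12-slim",
                "python", "/work/snippet.py",
            ],
            capture_output=True, text=True, timeout=timeout,
        )
```

Even something this crude means the worst a hallucinated `rm -rf` or sneaky network call can do is trash a container that was about to be deleted anyway.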
> if a skilled human has to check everything it does then "AI" becomes worthless
Well, perhaps not worthless, but certainly not "a trillion-dollar revolution that will let me fire 90% of my workforce and then execute my Perfect Rich Guy Visionary Ideas without any more pesky back-talk."
That said, the "worth" it brings to shareholders will likely be a downgrade for everybody else, both workers and consumers, because:
> The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: “Look, you fire 9/10s of your radiologists [...] and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.
> “And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”
> This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies [calls] an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.
> Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.
> The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.
I guess it resonates with me because it strikes at my own justification for my work automating things, as I'm not mercenary or deluded enough to enjoy the idea of putting people out of work or removing the fun parts. I want to make tools that empower individuals, like how I felt the PC of the 1990s was going to give people more autonomy and more (effective, desirable) choices... as opposed to, say, the dystopian 1984 Telescreen.
Right. This feels more and more like extraction: abuse, and the theft of people's empowerment, funneled up to the top. It's plain to see, and people are too afraid and feel too powerless to do anything.
Or so they think.
And I think of the saying that all capitalist systems eventually turn into socialist ones or get replaced by dictators. Is this really the history of humanity, over and over? I can't help but hope for more.
The "really fast" part is the challenge, though. If we assume that in the pre-LLM world there were just enough mid/senior engineers to review the junior engineers' code, then in the LLM world, where we can produce 10x the code, what was once possible is no longer possible unless we also 10x the mid/senior review capacity...
I do feel like I can review 2-3x as much with a quicker context-switching loop. Picking back up and following what a junior engineer did a couple of weeks after we discussed the scope of work is hard.
It could be as useful as a junior dev. You probably shouldn't let a junior dev run arbitrary commands in production without some sort of oversight or rails, either.
Even as a more experienced dev, I like having a second pair of eyes on critical commands...
I think a nice compromise would be to restrict agentic coding workflows to cloud containers and a web interface. Bootstrap a project and new functional foundations locally using traditional autocomplete/chat methods (which you'll want to do anyway, to avoid a foundation of StackOverflow-derived slop), then implement additional features using the cloud agents. Don't commit any secrets to SCM, and curate the tools these agents can use. This way your dev laptops stay firmly in human control (with IDEs freed up for actual coding) while LLMs are safely leveraged. Win-win.
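To make "curate the tools" concrete, here's a toy sketch of an allowlist gate. The tool names, the `approve` function, and the blocked paths are all hypothetical; a real deployment would enforce this server-side in whatever agent framework you're using:

```python
# Hypothetical tool gate: every tool call the agent proposes
# passes through an allowlist before anything executes.
ALLOWED_TOOLS = {"read_file", "write_file", "run_tests", "grep"}
BLOCKED_PATHS = (".env", ".git/config", "secrets/")

def approve(tool: str, args: dict) -> bool:
    if tool not in ALLOWED_TOOLS:
        return False
    # Keep the agent away from anything that smells like credentials.
    target = str(args.get("path", ""))
    return not any(p in target for p in BLOCKED_PATHS)

approve("run_tests", {"path": "tests/"})    # True
approve("shell_exec", {"cmd": "curl ..."})  # False: not on the allowlist
approve("read_file", {"path": ".env"})      # False: looks like a secret
```

A denylist alone won't cut it; starting from "nothing is allowed" and adding tools one at a time is the whole point.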
You could literally ask the LLM to obfuscate it and I bet it would do a pretty good job. Good luck parsing 1,000 lines of code manually to identify an exploit that you’re not even specifically looking for.