"Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable."
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."
It's not that process innovations are lacking, it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
> Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
More subtly, what is an education? What is care? As you point out, the LLMs are (or probably will become) perfectly good at the measurable parts of those services; but I think the residual edge of “good” education/care is more than just the other human’s co-opted attention.
How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,” where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunches—on totalities of perception-filtered-through-experience that they can’t fully put into words?
I’m reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another human’s attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that don’t fully overlap (fine, I’ll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human element…
> How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,”
For me it was a website with tutorials on how to make Flash games. It literally launched my career and improved my whole family's quality of life by an order of magnitude.
I am usually a naysayer about AI, but I admit that current LLMs could easily have replicated the whole website.
I love, love that. And if even one of my weird little side projects—including the ones I build with AI-powered tools—connects with a young person like that, I’ll be satisfied.
To me it’s not the “how” so much as the “what,” though.
I can only speak to my own experience with that sort of thing, but how much of what moved you was the invisible authorial hand behind the tutorials—deciding what's fun to write about, and how to talk about it in a way that clicked with a young you?
I guess, what’s the difference between that website, the official docs for the language in question, the formal spec for the language, the .h files themselves that mechanically define the engine that compiles the language, a big pile of examples of working code in the language…
For that matter what’s the difference between what’s fun to do in the language and what’s boring?
I would grant that LLM tech would probably shine at the grunt work part of "please translate these docs into a grade-school-pitched, engaging, example-driven tutorial website; make it dinosaur themed." But equally it could pitch it for a billion other audiences, and most will not bear fruit without guidance and refinement. And LLM frontends are already samey; what would distinguish one to your young eyes? Knowing (or finding out) what's worth doing is the tricky part, and that's hard to separate from the humans on the receiving end.
I think about when TikTok made it "easy" to recut songs and memes, and to do basic compositing effects. The "how" required specialized software and serious skill for a long time, then suddenly it didn't. But when it comes to the "what," there are still people who are good at using the tools and people who are bad; people who make good content and people who make bad content... and the difference seems to cleave along the normal human lines: innate talent + practice + persistence.
As the old saw goes, contemporary art: “But I could do that!” “Yeah, but you didn’t.”
Alternatively, as the fox said, “C’est le temps que tu as perdu pour ta rose qui fait ta rose si importante”…
After reading your comment I take back my last sentence. I don't think an LLM would have been able to create that website, because what an LLM would have created would have been an uninspiring husk of tutorials. The website had a certain personality to it, with the choice of games he would make and the "interesting" problems he would demonstrate and give solutions to.
The supply/demand picture here is more complicated than it looks.
If AI displaces human educators, yes, the supply of human educators shrinks -- but we can't assume which direction demand for them will go.
We've seen this pattern before: as recorded music became free, live performance got more expensive, and therefore much less accessible than it used to be.
What's likely to happen is that "worse" (read: AI) education will become much cheaper, while "better" (read: in-person) education that involves human connection-driven benefits will become much less accessible compared to what it is today.
Most people may consider it a win. It's certainly not a world I'm looking forward to.
Important follow-up to my comment: as fewer people do X -- live music, medicine, education, you name it -- fewer talented people do it as well.
Fields need a large base of participants to produce great ones. This is exactly why software has been so extraordinary over the past 30 years: an unusual concentration of gifted minds from across humankind committed themselves to it.
In my view, the Bach, Rachmaninoff, and Cole Porter equivalents today probably aren't writing symphonies. They've decided to write code for a living. Which is why any Great American Songbook made today won't hold a candle to the one from the 1950s.
Disagree, we do have the Bachs and Rachmaninoffs today: John Williams, Jerry Goldsmith, Bear McCreary, Yuki Kajiura, Hans Zimmer, and probably a slew I'm not even aware of.
We're in the greatest era of symphonies IMO, it's just that they're hiding in surprising places: movies, TV shows, games, etc.
I don't think we can know whether or not this is the case in our own lifetimes, because we are so immersed in popular culture that we can't be objective about it. Enough of our historical great composers weren't venerated until after their deaths, and to describe composers as "hiding" within the most popular media of our era is a great disservice to the many composers that don't have the fame, connections and reputation to be hired to write for these.
I would also point out that composing for a medium like a game or a movie places a great deal of constraints upon the composer, in terms of theme, cost of instrumentation, duration and most importantly: what is safe and palatable for an executive to approve of.
And AI is stuck in the past. As we prepare to launch a new product… people using AI won’t know about it for months or years, potentially. This will make startups have to seed the planet with text so an AI learns about it, not to mention normal SEO and other shit. I’m sure it is only a matter of time before you can pay to inject your product into the models so it knows about it faster, but incumbent companies will pay more to make sure they don’t.
> I’m sure it is only a matter of time before you can pay to inject your product into the models so it knows about it faster, but incumbent companies will pay more to make sure they don’t.
You have just discovered the fully enshittified version of the business model AI companies hope to reach.
> Is the value in the outcome of receiving medical advice and care, and becoming educated,
Absorbing information doesn't make you "educated". Learning how to employ knowledge with accountability and trust with beings in the real world is what's important, and a machine can't teach you how to do that.
> or is the value just in the co-opting of another human being's attention?
Why is it "co-opting" if it involves a mutually consenting exchange?
Neither does traditional, human-delivered education; those are things you learn in your first jobs in the real world, regardless of how you were educated.
Even if you have perfect medical information and advice through an LLM, can you perform surgery on yourself? Can you prescribe yourself whatever medication you think you need?
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?
It's interesting that you assume there's value in being educated in this hypothetical world of complete passive consumption.
The world you're describing is one where the entire economic value of humanity is in reminding the AI to put out the food bowl and refill the water dish at the appropriate time.
The interesting thing here is less about what people aspire to, and more about the lack of imagination and thought when considering the world they want to create.
It would be funny if the sleepwalkers weren't trying so hard to drag humanity along.
The premise of your argument is that "the outcome" can be separated from the process. This is true enough for manufacturing bricks: I don't much care what process was used to create a brick if it has a certain compressive strength, mass, etc.
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
> But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished
How is that Baumol's argument? How is 'outcome' vs 'process' relevant to his argument at all?
'Cost disease' is just the foundational observation that the cost of output from industries with stagnant productivity will rise, because the workers in those industries could be more productive (and so better paid) in other industries, shrinking the relative workforce in the stagnant industry.
If you want to make the output from a stagnant industry available to a broader spectrum of the population then you have to improve the productivity of that industry.
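A toy back-of-the-envelope sketch of that mechanism (all numbers are made up, just to illustrate the shape of the effect):

```python
# Toy illustration of Baumol's cost disease (hypothetical numbers).
# Wages economy-wide track productivity in the "progressive" sector,
# so unit costs in the "stagnant" sector rise even though nothing
# about that sector's own process has changed.

wage = 1.0               # common wage level, normalized
prog_productivity = 1.0  # output per worker, progressive sector
stag_productivity = 1.0  # output per worker, stagnant sector (fixed)

for year in range(30):
    prog_productivity *= 1.03  # 3% annual productivity growth
    wage *= 1.03               # wages follow the progressive sector

prog_unit_cost = wage / prog_productivity  # stays flat
stag_unit_cost = wage / stag_productivity  # grows with the wage

print(round(prog_unit_cost, 2))  # ~1.0: progressive output no dearer
print(round(stag_unit_cost, 2))  # ~2.43: stagnant output ~2.4x the cost
```

Same teacher, same classroom, same hour of instruction, but after 30 years it costs ~2.4x as much relative to manufactured goods, purely because everyone else got more productive.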
I think he means that when you go to watch the symphony orchestra, you are going to watch a bunch of people sitting with their instruments, manually playing them.
There is no way to separate this process from the product of the process.
You're not buying the sound of the music. You can just stream that. As far as that is the product, it has already been automated and scaled so millions of people can hear it at once, whenever they feel like it.
You're buying the sound AND the people sitting in their formal clothes manually drawing bows across violin strings, with painstaking accuracy developed through years of practice.
You couldn't make a robot do it, for example. You could maybe make a robot play a violin, but that again isn't what the product is.
The product is tied to an expectation of what it is that does not allow for it to be done more effectively.
By contrast manufacturing processes are not tied to this expectation. If I buy a loaf of bread, I don't care whether the wheat was manually harvested or harvested by a huge machine.
The musical performance example is just one example. The general problem of services being resistant to increased productivity, however, is not restricted to this somewhat unique case. That's why I pointed to medical advice and education: when I need a medical consult or personalized tutoring, I don't specifically care if I have to lock down irreplaceable moments of another human being's life in order to receive them.
It's misguided to focus on one special case of the cost disease problem where a human by definition must provide the service, when most of the time this is not the case.
It's very true for healthcare (especially mental healthcare) and education today as well, because for most people, the choice isn't LLM vs. human attention - it's LLM vs. no access at all.
It's not inherently unsolvable, it's just nearly impossible to solve because the alternative solutions tend to have "overhaul the global economy and human nature" as their prerequisite.
Pretty much all of humanity's tough problems, the ones we can't seem to make much progress on despite centuries or more of trying, are at their core coordination problems. We only really make progress on those when we can sidestep explicit coordination in some way. Every other kind of problem is usually amenable to technology, and we've solved most of them already, largely in the last 200 years.
> the value just in the co-opting of another human being's attention?
That's a weird way of describing it.
A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.
That may well be true if you need to be persuaded to exercise and eat right.
OTOH, if you don't need to be persuaded and just want information on how best to go about doing it, then I think it makes little difference where the information comes from as long as it's of reasonable quality.
The specific example was indeed a poor one since we have extensive data on that, and even high-touch non-surgical interventions involving hours per week from multiple specialists (read: incredibly expensive) with very-willing participants have proven a lot less effective than one might hope (somewhat effective! But only moderately so, which ain't enough given the price tag). Docs saying "eat better and exercise" at an annual check-up has basically no effect whatsoever.
Turning dozens to hundreds of decisions per week for which the correct decision must be made in nearly every case, into a single decision per week for which the correct choice must be made, has proven wildly more effective than any of that (I mean glp-1 agonists).
It also seems like the value of quality tutoring that doesn't primarily function as social/class signaling goes down as tools capable of automating high quality intellectual work are more widely available.
It depends on the outcome again: is the value of tutoring the social class elevation, or is it in the outcome of becoming more skilled and knowledgeable?
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
I feel like the history of lobster might be relevant here: at some point, will having a flawed, forgetful human being give you medical advice be something for poor people?
That's reasonable, but don't feel like you're safe letting the humans rest on their laurels. Human medical errors kill thousands upon thousands every year.