> The term agent has been used by AI researchers without a formal definition [1]
> [1] In traditional AI, agents are defined entities that perceive and act upon their environment, but that definition is less useful in the LLM era — even a thermostat would qualify as an agent under that definition.
I'm a huge believer in the power of agents, but this kind of complete ignorance of the history of AI gets frustrating. This statement betrays a gross misunderstanding of how simple agents have been viewed.
If you're serious about agents then Minsky's The Society of Mind should be on your desk. From the opening chapter:
> We want to explain intelligence as a combination of simpler things. This means that we must be sure to check, at every step, that none of our agents is, itself, intelligent... Accordingly, whenever we find that an agent has to do anything complicated, we'll replace it with a subsociety of agents that do simpler things.
Instead this write-up completely ignores the logic of one of the seminal works on this topic (and it's okay to disagree with Minsky, I sure do, but you need to at least acknowledge it) and jumps straight to assuming that the future of agents must be immensely complex.
Automatic thermostats existed in the early days of research on agents, and the key to a thermostat being an agent is its ability to communicate with other agents automatically and collectively perform complex actions.
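To make that concrete, here's a minimal sketch of that classic view. All the names (Message, Thermostat, LoadCoordinator, the shared "bus" list) are hypothetical, invented for illustration, not anyone's real design:

```python
# A minimal sketch: each agent is trivially simple on its own; the interesting
# behaviour emerges from agents communicating. All names here are made up.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    topic: str
    value: float

class Thermostat:
    """Perceives a temperature, acts on a setpoint, and reports to peers."""
    def __init__(self, name: str, setpoint: float, bus: list):
        self.name = name
        self.setpoint = setpoint
        self.bus = bus  # a plain list standing in for an agent messaging platform

    def step(self, observed_temp: float) -> None:
        calling_for_heat = observed_temp < self.setpoint  # the "act" part
        self.bus.append(Message(self.name, "heating", float(calling_for_heat)))

class LoadCoordinator:
    """Another simple agent: it only reads peers' messages and decides
    something no single thermostat could, e.g. whether to shed load."""
    def should_shed_load(self, bus: list) -> bool:
        votes = [m.value for m in bus if m.topic == "heating"]
        return bool(votes) and sum(votes) > len(votes) / 2

bus: list = []
zones = [Thermostat(f"zone{i}", setpoint=20.0, bus=bus) for i in range(3)]
for zone, temp in zip(zones, [18.0, 19.5, 21.0]):
    zone.step(temp)

print(LoadCoordinator().should_shed_load(bus))  # True: 2 of 3 zones want heat
```

No piece of this is remotely intelligent on its own, which is exactly Minsky's point.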
You can agree or disagree, as you say, with SoM (Marvin was my undergrad advisor, so my opinions are...complex), but that book can in many ways be considered a summation of decades of research in the field. To reinforce your point: even if you disagree with modeling cognition as a set of competing/cooperating "agents", there's enormous ahistoricism in the paper. Which, TBF, is quite common in CS.
The thermostat was also put to excellent use in an exploration of agency by Daniel Dennett, in his famous "The Intentional Stance".
(I accidentally typed "Intensional Stance", which really would be a fascinating book to read!)
I worked on agent-based systems >20 years ago, including large research projects and standardisation/interoperability work.
Ultimately, that effort failed, but I don’t see any awareness of that considerable volume of work reflected in today’s use of the word “agent”. If nothing else, there was a lot of work on use-cases and human factors.
It’s just a bit disheartening to know that so much work, by hundreds of researchers (at least), over 10+ years, has simply slipped into irrelevance.
As someone opined on HN the other day, operations research (aka data science), the annals of human process mapping (aka process automation), and control theory (outside ME/hardware) all suffer similarly.
A field retitles itself, and suddenly no one is aware of the still-applicable research from before the name change.
Which is probably a broader way of saying that modern courses no longer teach surveys of previous material.