Claude basically does this now (including deciding when to use subagents, tools, and agent teams). I built a similar thing a month ago and saw the writing on the wall.
This is the interesting constraint though. Have you found going deeper than 2 levels actually helps? Every time I've let agents spawn recursively, the quality at depth 3+ drops off a cliff. Not because the model is dumber; the accumulated context drift means the deep child is solving a slightly different problem than the one the root intended. Curious whether your tree structure has a natural depth limit in practice or if it scales.
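One mitigation I've tried for the drift problem: cap the depth and pass the root goal verbatim to every child instead of only its parent's paraphrase. A minimal sketch, assuming hypothetical `solve`/`decompose` helpers (not any real agent framework's API):

```python
# Hypothetical sketch: depth-limited recursive task decomposition.
# Names (solve, decompose, MAX_DEPTH) are illustrative, not a real agent API.

MAX_DEPTH = 2  # past this, drift tends to outweigh the benefit of splitting

def decompose(task: str) -> list[str]:
    # Stand-in for an LLM call that splits a task; returns [] when atomic.
    if " and " in task:
        return [t.strip() for t in task.split(" and ")]
    return []

def solve(task: str, root_goal: str, depth: int = 0) -> str:
    # At the depth limit, answer directly instead of spawning children.
    if depth >= MAX_DEPTH:
        return f"answer({task})"
    subtasks = decompose(task)
    if not subtasks:
        return f"answer({task})"
    # Each child carries the ROOT goal verbatim, not its parent's paraphrase,
    # so depth-N children still optimize for the original intent.
    parts = [solve(f"{sub} [root goal: {root_goal}]", root_goal, depth + 1)
             for sub in subtasks]
    return " + ".join(parts)
```

The verbatim root goal doesn't eliminate drift, but it anchors the deepest children to the original intent rather than a chain of lossy restatements.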
Are you sure? In Opencode they can, but it's hard to track them then (say, if you want to steer them) -- you have to click through. It would be nice to have a dynamic execution graph alongside the text.
Yeah exactly. I noticed Claude starting to do exactly this a month ago too. It recursively breaks problems down while letting you either change direction at each level or keep going. This is where Claude jumped up to being legitimately better at solving real-world problems than a substantial number of developers. I can only assume the other AI companies will copy the approach shortly.