There are some pretty interesting innovations coming down from deep packet inspection providers. In one case they're caching the data that users are serving via BitTorrent and re-serving it to subsequent requesters, basically giving users credit for sharing without saturating their upstreams and causing network problems.
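A rough sketch of how that caching step could work, purely as my own illustration (the class and method names are invented, not any vendor's actual implementation): the middlebox records each BitTorrent piece by its SHA-1 hash the first time it crosses the subscriber's upstream, then answers later requests for the same piece from the cache.

```python
import hashlib

class PieceCache:
    """Hypothetical content-addressed cache a DPI middlebox might keep."""

    def __init__(self):
        self._store = {}  # SHA-1 hex digest -> piece bytes

    def observe_upload(self, piece):
        """Record a piece as it passes through on the upstream path."""
        digest = hashlib.sha1(piece).hexdigest()
        self._store.setdefault(digest, piece)
        return digest

    def serve(self, digest):
        """Answer a later request from the cache, or None on a miss."""
        return self._store.get(digest)

# First requester pulls the piece across the seeder's upstream; the
# middlebox records it in passing.
cache = PieceCache()
d = cache.observe_upload(b"example piece data")
# A second requester for the same piece is answered from the cache,
# so the seeder's upstream link is never touched again.
assert cache.serve(d) == b"example piece data"
```

The user still "gets credit" in the swarm because the piece is served under their identity, but the bytes come from the operator's box instead of their DSL/cable upstream.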
It wouldn't surprise me if these products can do the same thing for downstream traffic as well.
Everything old is new again; it's like the 1990s proxy servers brought into the 2000s-2010s.
In this case, the "old" stuff you're talking about never really got "old". There's fundamentally not much of a difference between a proxy server and an edge overlay network fed with traffic redirection; in both cases, you're getting content from a middlebox.
Sure, conceptually. I didn't know of any big ISPs using proxies in the last 5-7 years, though, until the Comcast DPI fiasco happened.
I could well be out of the loop and not aware of any huge Squid or comparable installations, but I believed them to be archaic until recently because of how much faster commercial/service-provider WAN interfaces (DS-x, OC-x) were growing relative to consumer interfaces (DSL, cable). Of course FTTH (large bandwidth to each node) and wireless data (shared upstream + carrier optimizations for signal quality instead of bandwidth + physics) may turn this on its head.