I'm personally fairly irked by the massive shift towards "CDN all the things!"; I have NoScript blocking third-party assets, and more sites break for me every day.
Rather than CDNs, there should be a SHA or MD5 hash sent with every asset, like an ETag, so that things need not live on a specific domain to be pulled from cache.
NoScript doesn't do general-purpose blocking of resources from third-party domains; it only blocks scripts and objects like Flash. So if a site won't load when NoScript is active, it's because the site isn't delivering any viewable content in the initial response, just scripts which then go on to request the actual content. Such sites are obviously not optimizing for latency, and are also accessibility nightmares.
As for the cookie concerns: first-time visitors don't have any cookies to send. Regular users have a cached copy of your stylesheet. Someone who hasn't been to your site in months probably won't have a cached stylesheet, and if you're worried about the performance impact of cookies for them, consider whether that cost is worth whatever benefit you get from setting long-lived cookies for users who didn't come back soon after their previous visit.
Site 1 sends <script src="/foo/jquery.js" hash="SHA3:12345...">. Your browser hasn't cached this file, so it downloads jquery.js, verifies the hash, and caches the contents. Site 2 sends <script src="/bar/jquery-2.5.js" hash="SHA3:12345...">. The browser finds that the hash matches the cached jquery.js and loads that instead of downloading the script again.
The scripts could still be served from CDNs, but it wouldn't have to be the same one to get a cache hit. Popular libraries like jQuery would have so many hits that a CDN might not even be worth the effort. Actually, the concept is so simple that it's surprising it hasn't already been implemented, unless there's a security issue I'm not seeing.
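Roughly, the lookup could work like the sketch below. This is purely illustrative: the hash attribute, the digest-keyed cache, and the loadScript helper are all hypothetical, and SHA-256 stands in for the SHA3 prefix in the example, since that's what browsers expose today.

    // Hypothetical hash-addressed script cache: any prior download with the
    // same digest satisfies the request, regardless of which domain served it.
    const hashCache = new Map<string, string>(); // digest -> script source text

    async function loadScript(src: string, expectedHash: string): Promise<string> {
      const cached = hashCache.get(expectedHash);
      if (cached !== undefined) return cached; // cache hit: no network request

      // Cache miss: download, verify the digest, store under the digest.
      const body = await (await fetch(src)).arrayBuffer();
      const digestBytes = await crypto.subtle.digest("SHA-256", body);
      const digest = Array.from(new Uint8Array(digestBytes))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
      if (digest !== expectedHash) throw new Error(`hash mismatch for ${src}`);

      const text = new TextDecoder().decode(body);
      hashCache.set(digest, text);
      return text;
    }

Under that scheme, Site 1's /foo/jquery.js and Site 2's /bar/jquery-2.5.js resolve to the same cache entry as long as their digests match.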
I don't know if it's a big issue, but if you know the SHA3 of files on other websites, you could use that to tell where the user has been. For example, you add a file that you know is specific to Facebook: <script src="/js/somefakefile.js" hash="SHA3:FACEBOOKHASH...">. If a user doesn't download /js/somefakefile.js, you know they have visited Facebook at some point.
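A sketch of what that probe might look like under the proposed scheme — the decoy path, the FACEBOOKHASH value, and the hash attribute itself are all hypothetical:

    // History-sniffing probe against a hash-keyed cache. The attacker's page
    // references a decoy URL with a digest known to belong to a file that only
    // Facebook serves; a request for the decoy means it wasn't already cached.
    import { createServer } from "node:http";

    const FACEBOOK_HASH = "SHA3:FACEBOOKHASH..."; // placeholder digest

    createServer((req, res) => {
      if (req.url === "/js/somefakefile.js") {
        // Only reached on a cache miss, i.e. the visitor likely hasn't been to Facebook.
        console.log("decoy fetched: probably no Facebook visit");
        res.end("// decoy");
        return;
      }
      res.setHeader("Content-Type", "text/html");
      res.end(`<script src="/js/somefakefile.js" hash="${FACEBOOK_HASH}"></script>`);
    }).listen(8080);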
You can already do this with a timing attack. Use JS to add a script tag pointing at http://facebook.com/js/somefakefile.js to the page and time how long it takes to load. If it's in cache, it will be much faster.
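Something like this rough sketch: the URL just follows the example above (in practice you'd probe a real asset the target site actually serves), and the threshold would need tuning per connection, so the signal is noisy.

    // Timing probe: load a resource the target site serves and compare the
    // load time against a threshold; a cached copy returns much faster.
    function probeCached(url: string, thresholdMs = 20): Promise<boolean> {
      return new Promise((resolve) => {
        const start = performance.now();
        const script = document.createElement("script");
        const done = () => resolve(performance.now() - start < thresholdMs);
        script.onload = done;
        script.onerror = done; // errors also resolve faster from cache
        script.src = url;
        document.head.appendChild(script);
      });
    }

    probeCached("http://facebook.com/js/somefakefile.js").then((hit) =>
      console.log(hit ? "fast: probably cached" : "slow: probably not cached"));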
There's an emerging standard for specifying a hash for a resource, and potentially loading it from one of several locations, so long as it matches the given hash: http://www.w3.org/TR/SRI/
That provides integrity checking without encryption. It also helps with caching: rather than relying on expiration times, caches can key on the hash. If you already have a copy of jQuery in cache and it matches the hash, it doesn't matter where it came from.
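For example, an SRI integrity value is just a base64-encoded SHA-256/384/512 digest of the file, prefixed with the algorithm name. A quick way to generate the tag (the file name and CDN URL here are only examples):

    // Compute an SRI value (sha384-<base64 digest>) and print the <script> tag.
    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";

    const file = "jquery.min.js"; // local copy of the script you intend to load
    const digest = createHash("sha384").update(readFileSync(file)).digest("base64");

    console.log(
      `<script src="https://example-cdn.com/jquery.min.js" ` +
        `integrity="sha384-${digest}" crossorigin="anonymous"></script>`
    );

The crossorigin="anonymous" attribute is needed when the integrity-checked resource comes from another origin.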
That's a non-problem if you use a cryptographically sound hashing algorithm. If someone can find collisions in SHA-3, they can do a lot more damage than XSS.
Edit: just reread the GP and saw that they said the content should be able to come from a cache of a different domain. You're right, a hash collision there would be bad (as others have mentioned, you'd want something with a low chance of deliberately constructed collisions).
How does this introduce a new XSS issue? It's no different from the current system. And in this case the hash isn't being used to verify the content; it's there for cache invalidation.
I guess someone could poison your local cache somehow, and maybe that could be a problem with shared CDNs. But there are other mechanisms to make sure you're delivered the right content in the first instance.
I think the main issues are actually browser support and having to deal with it in your own framework/code. That's the point where people start saying: you know what, the existing system works well enough, I'll just crank up the expiry on my existing headers and stick a cache buster in the URL when it needs to change.
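That pattern is easy enough to script. A rough sketch, with made-up paths: hash the asset, put the hash in the filename, and serve it with a far-future Cache-Control.

    // Emit a content-hashed copy of an asset for cache busting; the name only
    // changes when the content changes, so the URL can be cached "forever".
    import { createHash } from "node:crypto";
    import { copyFileSync, readFileSync } from "node:fs";

    const src = "static/app.js"; // example asset path
    const hash = createHash("sha256").update(readFileSync(src)).digest("hex").slice(0, 8);
    const busted = src.replace(/\.js$/, `.${hash}.js`); // e.g. static/app.1a2b3c4d.js

    copyFileSync(src, busted);
    console.log(`serve ${busted} with Cache-Control: public, max-age=31536000, immutable`);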
Though you could say this about any system that uses hashing. If SHA-256 or SHA-512 is used, the likelihood of collisions with other functioning code, especially code that would yield a useful XSS, seems vanishingly small.
No. This is bad advice, do not do this; it comes up every time there's a problem with a hash algorithm. A single hash designed for the equivalent number of bits (e.g. SHA-384) will be substantially more secure than SHA-256+MD5.
The point is that you don't want to stand on only one leg when a flaw in that leg may be discovered in the future.
Admittedly MD5 was a poor example from me in that light...
A more realistic mix might be SHA-256+RIPEMD-160+Whirlpool.
Or, in crypto-speak: Concatenating outputs from multiple hash functions provides collision resistance as good as the strongest of the algorithms included in the concatenated result. [1]
Even a very serious attack against SHA-512 would be unlikely to render it as weak as RIPEMD-160 (compare even e.g. MD4, which is pretty thoroughly broken but the attacks still take compute time). If you have the bits, it's better to spend them on a longer hash rather than more hashes.
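For what it's worth, the size comparison being argued about is easy to see with Node's built-in crypto; this only illustrates output lengths, not actual collision resistance:

    // SHA-256 + MD5 concatenated is 384 bits, the same width as SHA-384; the
    // argument above is that the single purpose-built hash is the better way
    // to spend those bits.
    import { createHash } from "node:crypto";

    const data = Buffer.from("console.log('hello');");

    const sha256 = createHash("sha256").update(data).digest("hex"); // 256 bits
    const md5 = createHash("md5").update(data).digest("hex");       // 128 bits
    const sha384 = createHash("sha384").update(data).digest("hex"); // 384 bits

    console.log("sha256+md5:", sha256 + md5, `(${(sha256.length + md5.length) * 4} bits)`);
    console.log("sha384:    ", sha384, `(${sha384.length * 4} bits)`);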
EDIT: those downvoting, care to state your case?