Doesn't that depend on what you mean by "shave ms loading a page"?
If you're optimizing for time to first render, or time to visually complete, then you need to render the page using as little logic as possible - sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed.
If you want to speed up time to first input or time to interactive you need to actually build a working page using user data, and that's often fastest on the backend because you reduce network calls, which are the slowest bit. I'd argue most users actually prefer that, but it depends on the app. Something like a CRUD SaaS app is probably best rendered server side, but something like Figma is best off sending a much more static page and then fetching the user's design data from the frontend.
The idea that there's one solution that will work for everything is wrong, mainly because what you optimise for is a subjective choice.
And that's before you even get to Dev experience, team topology, Conway's law, etc that all have huge impacts on tech choices.
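To make the tradeoff concrete, here's a back-of-envelope timing model of the two approaches. All the numbers are assumptions, not measurements; plug in your own latencies and re-run:

```javascript
// Toy model: SSR vs skeleton-then-hydrate. Numbers are assumed, not measured.
const rtt = 100;    // client<->server round trip, ms (assumed)
const dbFetch = 20; // server<->DB fetch, ms (assumed: same datacenter, so cheap)
const render = 30;  // client-side render cost per pass, ms (assumed)

// SSR: one round trip, but the server blocks on the DB before responding.
const ssr = {
  firstPaint: rtt + dbFetch + render,
  interactive: rtt + dbFetch + render,
};

// Skeleton + hydrate: paint the shell first, then pay a second full
// round trip for the data API plus a second render pass.
const skeleton = {
  firstPaint: rtt + render,
  interactive: rtt + render + rtt + dbFetch + render,
};

console.log(ssr, skeleton);
```

Under these (made-up) numbers the skeleton paints sooner but the SSR page is interactive sooner, because the extra client round trip costs far more than the server-side DB fetch it replaced. That's the whole disagreement in one arithmetic problem: which metric you feed these numbers into decides the "right" architecture.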
> sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed
This is often repeated, but my own experience is the opposite: when I see a bunch of skeleton loaders on a page, I generally expect to be in for a bad experience, because the site is probably going to be slow and janky and cause problems. And the more of the site is being skeleton-loaded, the more my spirits worsen.
My guess is that FCP has fallen victim to Goodhart's Law: more sites are optimising for FCP (which means _something_ needs to be on the screen ASAP, even if it's useless) without optimising the actual user experience. That means delaying real rendering and adding more round-trips so that content can be loaded later rather than up-front. It produces sites with worse experiences (more loading, more complexity), even though the metric says the experience should be improving.
It also breaks a bunch of optimizations that browsers have implemented over the years. Compare how back/forward history buttons work on reddit vs server side rendered pages.
It is possible to get those features back, in fairness... but it often requires more work than if you'd just let the browser handle things properly in the first place.
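For anyone who hasn't had to rebuild this by hand: the browser maintains the history stack for you on normal navigations, and an SPA that intercepts links has to reimplement the same semantics itself (real code would also call `history.pushState()` and listen for `popstate`; the names below are mine, not any framework's). A toy model of just the bookkeeping:

```javascript
// Toy model of the history stack an SPA must manage once it hijacks
// navigation. Simplified sketch, not a drop-in router.
class SpaHistory {
  constructor(initialUrl) {
    this.stack = [initialUrl];
    this.index = 0;
  }
  navigate(url) {
    // A fresh navigation discards any "forward" entries, as a browser does.
    this.stack = this.stack.slice(0, this.index + 1);
    this.stack.push(url);
    this.index++;
  }
  back() {
    if (this.index > 0) this.index--;
    return this.stack[this.index];
  }
  forward() {
    if (this.index < this.stack.length - 1) this.index++;
    return this.stack[this.index];
  }
  get current() { return this.stack[this.index]; }
}

const h = new SpaHistory("/inbox");
h.navigate("/thread/42");
h.navigate("/settings");
h.back();              // "/thread/42"
h.back();              // "/inbox"
h.forward();           // "/thread/42"
h.navigate("/search"); // the forward entry "/settings" is discarded
```

And this sketch still skips scroll restoration, the bfcache, and form state, all of which the browser handles for free on server-rendered pages.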
Seems like 95% of businesses are not willing to pay the web dev who created the problem in the first place to also fix the problem, and instead want more features released last week.
The number of websites needlessly forced into being SPAs without working navigation like back and forth buttons is appalling.
I think it's more that the bounce rate is improving. People may recall a worse experience later, but more will stick around for that experience if they see something happen sooner.
> If you're optimizing for time to first render, or time to visually complete, then you need to render the page using as little logic as possible - sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed.
I think that OP's point is that these optimization strategies completely miss the elephant in the room: the multi-MB payloads themselves create the problem. Shaving a few ms here and there with added complexity, while ignoring the cost of shipping and handling those payloads, doesn't seem to be an effective way to tackle it.
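The transfer-time arithmetic makes the point. Bandwidth figures below are assumptions (roughly devtools' "fast 3G" throttle versus a decent home connection), not measurements:

```javascript
// Seconds to transfer a payload: size in MB * 8 bits/byte / link Mbps.
// Ignores latency, TCP slow start, and parse time, so these are floors.
const transferSeconds = (megabytes, megabitsPerSecond) =>
  (megabytes * 8) / megabitsPerSecond;

console.log(transferSeconds(3, 1.6));   // 3 MB on ~1.6 Mbps: ~15 s
console.log(transferSeconds(3, 50));    // 3 MB on 50 Mbps: ~0.48 s
console.log(transferSeconds(0.2, 1.6)); // 200 kB on ~1.6 Mbps: ~1 s
```

No amount of skeleton-loading cleverness claws back the 14 seconds between a 3 MB bundle and a 200 kB page on a slow link.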
> speed up time to first input or time to interactive you need to actually build a working page using user data, and that's often fastest on the backend because you reduce network calls which are the slowest bit.
It’s only the fastest way to get the loading skeleton onto the page, not the content.
My personal experience with basically any site that has to go through this 2-stage loading exercise is that:
- content may or may not load properly.
- I will probably be waiting well over 30 seconds for the actually-useful-content.
- when it does all load, it _will_ be laggy and glitchy. Navigation won’t work properly. The site may self-initiate a reload, button clicks are…50/50 success rate for “did it register, or is it just heinously slow”.
I’d honestly give up a lot of fanciness just to have “sites that work _reasonably_” back.
30s is probably an exaggeration even for most bad websites, unless you are on a really poor connection. But I agree with the rest of it.
Often it isn't even a 2-stage thing but an n-stage thing that happens there.