This is a great article; it walks through a bunch of the complications of measuring web performance.
Summary:
1. Use performance.now(), which has 0.1ms resolution and is monotonic, instead of new Date() which has 1ms resolution and can decrease if machine time changes
2. Check for document.hidden and the 'visibilitychange' event
3. Use event.timeStamp for start time (when measuring responsiveness to an event)
4. Use the parameter that requestAnimationFrame() passes to callbacks to include time running framework code
5. You can nest requestAnimationFrame() to include Layout and Paint time, but that limits precision to 16ms so they didn't
6. Instead of percentiles, count % of events under a target like 100ms, it's more user-centric: instead of "how fast/slow is my app" it tells you "how often do users experience slowness"
7. Chart the counts at multiple targets in a percent stacked area chart
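Points 3–5 can be sketched roughly like this (a browser-only sketch; the handler wiring and names like `handleClick` are my own, not the article's code):

```javascript
// Measure an interaction from the input event to the frame after layout/paint.
const samples = [];

// Pure helper so the bookkeeping is easy to unit-test outside a browser.
function recordSample(samples, name, durationMs) {
  samples.push({ name, durationMs });
  return samples;
}

function handleClick(event) {
  const start = event.timeStamp; // point 3: when the user actually clicked
  // ... synchronous app/framework work happens here ...
  requestAnimationFrame(() => {
    // point 4: fires just before the next frame is rendered,
    // so framework/app code run before it is included
    requestAnimationFrame((frameStart) => {
      // point 5: the nested rAF fires after the previous frame's layout and
      // paint, at the cost of frame-granularity (~16ms) precision
      recordSample(samples, 'click-to-paint', frameStart - start);
    });
  });
}
```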
One weird thing to me about items 5 and 6—why not both? Why not have a chart including and a chart excluding Layout and Paint time, so you know if you're doing bad CSS or something that's introducing an expensive repaint? Why not have a stacked line chart with percentiles alongside the percent stacked area chart?
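On the "why not both" question: if you keep the raw sample durations, both views are derivable from the same data. A minimal sketch (helper names are my own):

```javascript
// Given raw durations in ms, compute both a percentile and the fraction of
// samples under a target — the two views discussed above.
function percentile(durations, p) {
  const sorted = [...durations].sort((a, b) => a - b);
  // Nearest-rank percentile: smallest value with at least p% of samples at or below it.
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

function fractionUnder(durations, targetMs) {
  return durations.filter((d) => d < targetMs).length / durations.length;
}

const durations = [20, 40, 60, 80, 120, 250, 400, 90, 30, 70];
console.log(percentile(durations, 90));     // slowest-decile boundary
console.log(fractionUnder(durations, 100)); // share of "fast" interactions
```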
> performance.now(), which has 0.1ms resolution and is monotonic
Not so much anymore.
MDN says: “The timestamp is not actually high-resolution. To mitigate security threats such as Spectre, browsers currently round the results to varying degrees. (Firefox started rounding to 1 millisecond in Firefox 60.) Some browsers may also slightly randomize the timestamp. The precision may improve again in future releases; browser developers are still investigating these timing attacks and how best to mitigate them.”
The article mentions “new Date()” as the alternative to the performance API. Unless you’re supporting IE8 you should prefer Date.now(), which saves you the object instantiation if you’re just reading out ms.
Also, performance.now (and the date methods) have their precision hamstrung in some browsers for privacy and Spectre mitigation.
Of course, for perf use the dedicated APIs. Just don’t expect microsecond resolution to be accurate.
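To illustrate the difference (runnable in any modern JS engine; the actual resolution varies by browser and mitigation settings):

```javascript
// Date.now() reads wall-clock time: cheap, ~1ms resolution, and it can jump
// backwards if the system clock is adjusted (NTP sync, manual change).
const wall = Date.now();

// performance.now() reads a monotonic clock relative to page/process start.
// Its resolution is browser-dependent: clamped (and sometimes jittered) as a
// Spectre mitigation, so don't assume microseconds.
const t0 = performance.now();
const t1 = performance.now();
console.log(t1 >= t0); // monotonic: a later reading never decreases
console.log(typeof wall); // a plain number, no Date object allocated
```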
I’m curious how they do competitive analysis. They want to be faster than Gmail, but how do they measure that? I tried to take a high-speed video of Gmail navigating from thread list to thread, but it was not conclusive. Gmail collects a ton of client-side timing data and I wonder if anyone has ever just intercepted that.
It is unlikely that performance can be your long-term selling point when you do not own the full stack. They still have to interact with Gmail through an API, while the Gmail app itself can process everything server-side. I think there is a place for $50/month email, but you need to own the entire stack, not just build a fancy email client.
Email is beautifully asynchronous so the time spent interacting with the underlying API is not important unless it’s horrifically sabotaged for third party clients.
The visibility stuff is good. They forgot to mention that when the tab is hidden, the browser throttles or stops firing requestAnimationFrame callbacks, because it's good for battery life.
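A common pattern for handling this (my own sketch; the sample shape and helper names are assumptions, not the article's code) is to flag any in-flight measurement that overlaps a hidden period and drop those samples:

```javascript
// While a tab is hidden, rAF callbacks are throttled or paused, so durations
// measured across a hidden period are meaningless and should be discarded.
let hiddenDuringMeasurement = false;

function installVisibilityGuard() {
  // Browser-only wiring; guarded so the pure logic below runs anywhere.
  if (typeof document !== 'undefined') {
    document.addEventListener('visibilitychange', () => {
      if (document.hidden) hiddenDuringMeasurement = true;
    });
  }
}

// Pure helper: keep only samples that were never interrupted by a hidden tab.
function keepVisibleSamples(samples) {
  return samples.filter((s) => !s.wasHidden);
}
```

Each finished measurement would record the current value of `hiddenDuringMeasurement` as its `wasHidden` flag and then reset it.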
Also, he lost me at "don't measure layout and paint". That's like the #1 reason why UI is slow: layout and paint are very compute-intensive tasks. You should definitely measure both, but you can get huge gains by optimizing how many DOM nodes the browser has to deal with, so that layout and paint are efficient.
V8 won’t bat an eye at running a filter-map operation over a million items in an array. Rendering a million things in a browser is a different ball game.
I’ve spent many years of my life optimizing UI perf. I have to give the Chrome devtools team massive kudos for building a great performance analyzing tool.
The article suggests not instrumenting layout and paint, though it does mention the double rAF option.
Be careful skipping this! It is trivially easy to regress layout perf with CSS property changes and if you’re not instrumenting layout you might not catch it. I’ve personally fixed 300ms P90 regressions from thrashing layout on a button click.
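The classic regression here is interleaving style writes with layout reads, which forces a synchronous reflow on every iteration. A toy model of that behavior (a fake element that counts forced reflows; the real DOM behaves analogously, but this class is purely illustrative):

```javascript
// Toy element: any layout read (offsetHeight) after a pending style write
// forces a synchronous "reflow", mimicking what real browsers do.
class FakeElement {
  constructor() {
    this.reflows = 0;
    this.dirty = false;
    this.height = 100;
  }
  setWidth(px) {          // style write invalidates layout
    this.width = px;
    this.dirty = true;
  }
  get offsetHeight() {    // layout read
    if (this.dirty) {
      this.reflows++;     // forced synchronous reflow
      this.dirty = false;
    }
    return this.height;
  }
}

// Bad: write then read inside the loop — one forced reflow per element.
function thrash(elements) {
  for (const el of elements) {
    el.setWidth(200);
    const h = el.offsetHeight; // forces layout every iteration
  }
}

// Better: batch all reads, then all writes — no forced reflows at all here.
function batched(elements) {
  const heights = elements.map((el) => el.offsetHeight); // reads first
  elements.forEach((el) => el.setWidth(200));            // then writes
}
```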
Like the article in general, but it would be even better if basic conditions like the CPU governor, fixed clock frequencies and so on were mentioned. This is especially important for comparable results ;-)
Am I missing something obvious, or couldn't they have calculated "percentage less than 100ms" if they were able to calculate percentiles? Rather than collecting just the 100ms figure?
Breaking news
Computer Scientists over at Stanford have discovered that web apps are inherently slow as fuck. Studies show that if your software projects are performance critical, you should definitely avoid this many layers of GUI complexity and memory consumption.
I think React Native is a great solution for small developers because it reduces the manpower required for cross-platform compatibility.
To answer your question, it depends on the type of response time you're asking about. If you mean simple button-click event responses, then yes, native app development is superior. But like I stated, if you want your app to be truly write-once-run-anywhere, JS web apps are the way to go... these days at least.
That being said, I only had one go at React and another brief stint with Android app development. I found them both to be complete messes. Maybe it's because I'm so used to C# WinForms from my last job, but I haven't found a more comfortable experience.
Something I encourage teams to do is keep in mind “when to stop” optimizing. Otherwise you can spend forever trying to squeeze out another tiny improvement that, at the end of the day, is never going to have any kind of return on investment.
Might as well use this downvoted thread for my little hobby horse as well: I was done with Fastmail when I realized they recycle email addresses if you stop paying.
They should either lock access to the account until you buy a new subscription (which seems obvious, just hold the account hostage), or lock the account forever if they don't want to store/recv messages for inactive accounts.
This is also true of every personal domain used for email. While it's not great from Fastmail, it's no less secure in that respect than a custom domain.