The chart is updated bi-weekly, but the only data point that can change is the one for the current year. The first few days or weeks of a new year are therefore less accurate than the end of a year.
Still, both previous years were already notably up after the first month. This year is different, which is perhaps notable regardless. Either the release schedule is skewed forward, the hardware is genuinely stagnating, or the benchmark is hitting some other bottleneck unrelated to the CPU.
They have more than twice as many samples in their Feb 10 update this year (47810) as they did last year on Feb 14 (22761). Sample size has shown some year-on-year growth, but nowhere near doubling (quick check at the end of this comment).
That suggests this month has been an outlier with a strangely large number of samples already, which could all be related: maybe their software started being bundled with a specific OEM, or was featured by a popular content creator, or whatever.
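Putting a rough number on "more than twice" (sample counts from above):

    # Sample counts from the two early-February updates mentioned above.
    feb_2025 = 47810
    feb_2024 = 22761
    print(feb_2025 / feb_2024)  # ~2.10, i.e. slightly more than double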
As a sibling notes, less accurate just means less accurate, not necessarily skewed upward. There is simply less data to draw conclusions from than there will be later, so any medium-sized effect that adds 20k extra samples moves the numbers more now than it would later.
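A toy illustration of that size effect (all numbers made up; the 48k roughly matches the current sample count mentioned upthread):

    # Hypothetical: a 20k batch of older, slower machines gets added once
    # early in the year and once late in the year.
    batch_n, batch_mean = 20_000, 2000

    def blended(n, mean):
        # Weighted average of the existing samples plus the new batch.
        return (n * mean + batch_n * batch_mean) / (n + batch_n)

    print(blended(48_000, 2500))   # early year: ~2353, a ~6% drop
    print(blended(430_000, 2500))  # full year:  ~2478, under a 1% drop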
I had the same thought. The usual causes of such a decline, like a CPU architecture shift prioritizing energy efficiency over raw performance or limitations in manufacturing processes, wouldn't produce such a steep drop. So I would guess it's either a change of benchmarking methodology or, as you said, weaker CPUs in general being measured.
So if I understand right, more old hardware makes it into the sample now than before, increasing the unreliability of early data? That makes sense I guess.
Not increasing the unreliability—it's unreliable no matter what this early in the year—but it's possible that more old hardware made it in this past month than in previous early years, which would explain why the number went down this time.
Or just have the last data point include everything from the full 12-month period before (as the title "year on year" would suggest), and maybe even put it in the correct place on the x-axis (e.g. for today, Feb 12, 2025, about 11.8% of the full-year gap width from the (Dec 31) 2024 point).
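That placement is easy to compute (a sketch; assumes the x-axis is linear in time):

    from datetime import date

    # Fraction of the gap between the Dec 31, 2024 and Dec 31, 2025 ticks
    # that has elapsed as of Feb 12, 2025.
    elapsed = (date(2025, 2, 12) - date(2024, 12, 31)).days   # 43
    gap     = (date(2025, 12, 31) - date(2024, 12, 31)).days  # 365
    print(elapsed / gap)  # ~0.118, i.e. the ~11.8% mentioned above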
variance.
Have you noticed that the mainstream public-level discussion of almost any topic never progresses farther than a point estimate? Variance implies nuance, and nuance is annoying to those who'd rather just paint a story. Variance isn't even that much nuance, because it is also just a point estimate for the variability of a distribution, not even its shape. (Quick illustration at the end of this comment.)
Public discourse is stuck at the mean point estimate, and as such, is constantly misled.
This is all an analogy, but feels very true.
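To make the "not even its shape" point concrete with toy numbers, here are two samples with identical mean and variance but very different shapes:

    import statistics

    # Same mean (0) and same population variance (1), different shapes:
    # one split evenly at +/-1, one mostly 0 with a pair of outliers.
    two_point = [-1, -1, -1, -1, 1, 1, 1, 1]
    outliers  = [-2, 0, 0, 0, 0, 0, 0, 2]

    for sample in (two_point, outliers):
        print(statistics.mean(sample), statistics.pvariance(sample))
        # mean 0, variance 1 for both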
> Have you noticed that the mainstream public-level discussion of almost any topic never progresses farther than a point estimate? [...] Variance isn't even that much nuance, because it is also just a point estimate for the variability of a distribution, not even its shape.
Next time you're doing something that isn't a professional STEM job, see how far you can get through your day without adding or multiplying.
Unless you're totting up your score in a board game or something of that ilk, you'll be amazed at how accommodating our society is to people who can't add or multiply.
Sure, when you're in the supermarket you can add up your purchases as you shop, if you want to. But if you don't want to, you can just buy about the same things every week for about the same price. Or keep a rough total of the big items. Or you can put things back once you see the total at the till. Or you can 'click and collect' to know exactly what the bill will be.
You don't see mainstream discussion of variance because 90% of the population don't know WTF a variance is.
Sometimes even an average is too much to ask. One of my pet peeves is articles about "number changed". Averages and significance never enter the discussion.
Worst are the ones where the number is a count of how many times some random event happened. In that case there's a decent chance the difference is less than twice the square root of the count, in which case, assuming a Poisson distribution, you know the difference is insignificant.
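A quick version of that rule of thumb (illustrative numbers, not from any particular article):

    from math import sqrt

    # For a Poisson-ish count, the standard deviation is roughly sqrt(count),
    # so a change smaller than ~2*sqrt(count) is within ordinary noise.
    def plausibly_just_noise(before, after):
        return abs(after - before) < 2 * sqrt(before)

    print(plausibly_just_noise(400, 430))  # True:  |30| < 2*20
    print(plausibly_just_noise(400, 450))  # False: |50| > 2*20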
It seems like only about 4 bits of bandwidth can be reserved for this, and variance didn't make the cut. That's actually already generous, since sometimes only 1 bit barely makes it through: whether something is "good" or "bad".
This reminds me of the presidential election, where most credible institutions talked about 50/50 odds and were subsequently criticized after Trump's clear victory in terms of electoral votes.
Few people bothered to look at the probability distributions the forecasters published, which showed decent probabilities for landslide wins in either direction.
We still know the current data point has about three times the standard error of the previous point, but I agree it's hard to say anything useful without knowing the within-point variance.
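A sketch of where "about three times" comes from, assuming a roughly constant submission rate through the year (so the current point has ~11.8% of its eventual samples):

    from math import sqrt

    # Standard error scales as 1/sqrt(n); with ~11.8% of the eventual
    # samples, the early point's SE is inflated by 1/sqrt(0.118).
    fraction_of_year = 43 / 365
    print(1 / sqrt(fraction_of_year))  # ~2.91, i.e. roughly 3x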