There is research (the DORA / Accelerate State of DevOps reports) that makes a good case that throughput (e.g. number of pull requests) contributes positively to company performance. More precisely, the DORA metric is deployment frequency.
In my org they count both the number of pull requests and the number of comments you add to reviews. Easily gamed, but that's the performance metric they use to compare every engineer now.
> With some napkin math assuming a similar distribution today, that would mean on average each engineer ships at least 1 change to production every 3 days.
This is the important metric. It means there is very little divergence between what’s being worked on and what’s in production. The smaller the difference, the quicker you deliver value to users and the less risky it is to deploy.
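To make that napkin math concrete, here's a minimal sketch of the arithmetic; every figure in it is a hypothetical placeholder chosen to land on ~3 days, not a number from the report:

```python
# Napkin math behind "1 change to production every 3 days".
# All figures below are assumed for illustration only.
deploys_per_week = 1000   # assumed org-wide production deploys per week
engineers = 600           # assumed engineering headcount
working_days = 5          # working days per week

per_engineer_per_day = deploys_per_week / engineers / working_days
days_between_changes = 1 / per_engineer_per_day

print(f"~1 change to production every {days_between_changes:.1f} days per engineer")
# ~1 change to production every 3.0 days per engineer
```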
Isn't it more like a BS counter that keeps incrementing, indicative of churn but not reliably of anything else?
It's one of the lowest-effort, most easily gamed metrics, and it can be skewed to show whatever the person reporting it wants it to show.