Analytics are usually based on aggregating discrete events, which form irregular time series. Metrics and sensor data are usually regular time series: series where samples are taken at regular intervals of time, like once every 10 seconds.
When it comes to querying regular time series, you don't have a huge number of points to aggregate across a single series, whereas in analytics you can have millions of points in the single series that you're looking at.
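A minimal Python sketch of the distinction (the timestamps, values, and event names are made up for illustration):

```python
# Regular time series (a metric): one sample every 10 seconds for an hour.
regular = [(t, 0.5) for t in range(0, 3600, 10)]  # hypothetical CPU load

# Irregular time series (analytics): discrete events at arbitrary timestamps.
irregular = [(3, "click"), (17, "view"), (17, "click"), (905, "signup")]

# Aggregating an hour of the regular series touches only 360 points...
print(len(regular))  # 360

# ...while an analytics series can hold millions of events in the same window,
# so the per-series aggregation cost scales very differently.
avg = sum(value for _, value in regular) / len(regular)
print(avg)  # 0.5
```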
Then there are other types of queries that you need in analytics that don't make sense for metrics, like getting sessions and calculating funnels.
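To make the session/funnel point concrete, here's a hypothetical sketch: it groups one user's events into sessions using a 30-minute inactivity gap (the gap threshold, event names, and timestamps are all assumptions, not anything InfluxDB or Mixpanel specific), then counts sessions completing a two-step funnel:

```python
SESSION_GAP = 30 * 60  # seconds of inactivity that starts a new session

# (timestamp, event_name) for one user, assumed sorted by timestamp
events = [
    (0, "view"), (60, "signup"),
    (4000, "view"),            # more than 30 min later: new session
    (4100, "view"), (4200, "signup"),
]

def sessionize(events, gap=SESSION_GAP):
    """Split a sorted event stream into sessions on inactivity gaps."""
    sessions, current = [], []
    for ts, name in events:
        if current and ts - current[-1][0] > gap:
            sessions.append(current)
            current = []
        current.append((ts, name))
    if current:
        sessions.append(current)
    return sessions

def funnel_completed(session, steps=("view", "signup")):
    """True if the session contains the funnel steps in order."""
    names = iter(name for _, name in session)
    return all(step in names for step in steps)

sessions = sessionize(events)
print(len(sessions))                               # 2
print(sum(funnel_completed(s) for s in sessions))  # 2
```

This kind of stateful, order-dependent grouping is awkward to express as a rollup over fixed time windows, which is why it's a poor fit for a metrics-style query model.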
InfluxDB is still useful for analytics today, it's just that in some cases it's more basic and crude compared to what you can do with tools like Mixpanel.
Is it possible that a regular time series could have better read performance (particularly in aggregations) than an irregular one, due to its determinism versus the irregular one's randomness - or is that irrelevant to the underlying implementation?