You are a long way from the practical limits. I have used SQLite databases with billions of rows that could still retrieve hundreds of arbitrary rows in less than a second. We used it to store time-series performance data and generate charts. We had tens of thousands of these databases (not all with so many rows) and they performed their job admirably.
The one thing that SQLite cannot handle performantly is deleting large numbers of rows (millions) - so don't plan on deleting any data from your tables once they get that big. From a cursory examination of the code, it appeared to me that B-tree rebalancing was happening after each row was deleted, which makes big deletes very expensive. We got around this problem by sharding our data into a new table in a new database file for each week, then attaching all of the databases needed for a given query. When we wanted to delete data, we just deleted the database file for the corresponding shard. Obviously that only worked for our particular time-series data.
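A rough sketch of that sharding scheme, using Python's sqlite3 module - the table name, file naming, and schema here are made up for illustration, not the actual system:

```python
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
weeks = ["2024_w01", "2024_w02"]  # hypothetical shard names

# One database file per week; appends go to the current week's file.
for i, week in enumerate(weeks):
    con = sqlite3.connect(os.path.join(workdir, f"metrics_{week}.db"))
    con.execute("CREATE TABLE samples (ts INTEGER, value REAL)")
    con.executemany(
        "INSERT INTO samples VALUES (?, ?)",
        [(i * 1000 + n, float(n)) for n in range(3)],
    )
    con.commit()
    con.close()

# To query a time range, ATTACH every shard it spans and UNION ALL
# across them from a single connection.
con = sqlite3.connect(":memory:")
for week in weeks:
    con.execute(
        f"ATTACH DATABASE ? AS shard_{week}",
        (os.path.join(workdir, f"metrics_{week}.db"),),
    )
query = " UNION ALL ".join(
    f"SELECT ts, value FROM shard_{w}.samples" for w in weeks
)
rows = con.execute(query + " ORDER BY ts").fetchall()
con.close()

# "Deleting" a week of data is just removing its file --
# no row-by-row DELETE, no B-tree rebalancing.
os.remove(os.path.join(workdir, "metrics_2024_w01.db"))
```

Note that SQLite caps the number of attached databases (10 by default, up to 125 at compile time), so a query spanning many weeks may need to be split into batches.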
Anyway, the bottom line is that SQLite is more scalable and has better performance than people give it credit for.