In high-quality RDBMSes like Oracle and SQL Server, there is not a whole lot you can do to tune your queries; the majority of your bottlenecks are likely to be in how you are using your data. If you have inefficiencies, you either have a poorly designed schema or out-of-date statistics/indexes.
The story is slightly different for open source RDBMSes, whose inferior cost-based optimizers and multi-version concurrency control (MVCC) implementations fail on a wider range of corner cases, such as:
select count(*) from Billing_And_Accounts_Receivable.Transaction
which is slow in PostgreSQL for reasons I'd rather not take the time to explain here.
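For the curious: the usual explanation is that PostgreSQL's MVCC keeps tuple visibility information in the heap rather than in indexes, so an exact count(*) must scan every live row. When an estimate is good enough, the planner's own statistics can be read in constant time instead. A sketch, assuming the table exists under that name and has been ANALYZEd recently:

```sql
-- Exact count: forced to visit the whole table under MVCC.
select count(*) from Billing_And_Accounts_Receivable.Transaction;

-- Cheap alternative: the planner's row estimate, maintained by
-- ANALYZE/autovacuum and read from the system catalog in constant time.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE oid = 'billing_and_accounts_receivable.transaction'::regclass;
```

The estimate can drift between ANALYZE runs, so this is a trade of accuracy for speed, not a drop-in replacement.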
"there is not a whole lot you can do to tune your queries"
What does this even mean? For any query that relies on data from multiple tables, yes, there are opportunities to tune the query, to build different kinds of indexes, and to de-normalize the data and remove the join completely. This is independent of your RDBMS vendor.
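To make those three options concrete, here is a sketch against a hypothetical pair of tables, orders and customers (the table and column names are invented for illustration):

```sql
-- The query to tune: fetch each order with its customer's name.
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id;

-- Option 1: a composite index covering the join key and the selected
-- column, so the engine can satisfy more of the query from the index.
CREATE INDEX idx_orders_customer ON orders (customer_id, total);

-- Option 2: de-normalize to remove the join entirely, trading extra
-- storage and update-time cost for read speed.
ALTER TABLE orders ADD COLUMN customer_name text;
UPDATE orders o
SET customer_name = c.name
FROM customers c
WHERE c.id = o.customer_id;
```

After de-normalizing, the original query becomes a single-table read, but every change to a customer's name now has to be propagated into orders.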
The statement that Oracle/SQL Server are 'high quality', and the implication that open source RDBMSes are not, is flame-bait. All RDBMSes that I have experience with (including Oracle and SQL Server, so caveat emptor) will hit a performance wall when an index becomes invalid and the engine has to table-scan. Indexes can become invalid automatically as the amount of data in the tables changes, and typically this will happen at 3am when you are on call.
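A sketch of the usual 3am fix, in PostgreSQL syntax with a hypothetical orders table (Oracle and SQL Server have their own equivalents, e.g. DBMS_STATS and UPDATE STATISTICS):

```sql
-- When row counts shift dramatically, the planner's statistics go stale
-- and it may choose a sequential scan even though a useful index exists.
ANALYZE orders;  -- recompute statistics for this one table

-- Confirm the planner is back to using the index rather than scanning:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```

Scheduling statistics maintenance ahead of large data loads is the usual way to keep this from paging you in the first place.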
There is also the hardware the files are stored on, and the price/performance balancing act of splitting tables onto separate spindles, using SSDs for semi-hot tables, cramming in more RAM, upgrading the network connection between db and app servers, and so on. Every real-world application needs constant tweaking, from query to hardware, as bottlenecks appear in different layers. Again, these issues are almost entirely independent of the RDBMS (apart from SQL Server, which only runs on Windows, so the deployment platform is limited to what Windows Server supports).
Nonrelational datastores can hit performance walls too, and the fact that some of them are newer adds the 'fun' that their performance characteristics are not completely known.
Commercial DBMSes have way better customer support than open source DBMSes. Not only do they better understand the performance characteristics of their clients, but they have dedicated teams whose job it is to isolate corner cases where the DBMS is not optimized for a particular environment.
My statement -- there is not a whole lot you can do to tune your queries -- is about rewriting the queries themselves, such as adding join hints. Changing RAID configurations, organizing the data differently on disk, etc. have no bearing on the quality of the SQL an ORM generates. So what does? Well, 20 years ago many DBMSes required you to compile stored procedures in order for the cost-based optimizer to cache anything. Today, SQL Server 2008 R2 not only supports fine-grained plan caching and lets you adjust the size of that cache, but also lets you remove from the cache any plan you dislike, and force a poorly performing query to use a specific plan. This is one of the ideas behind the LINQ Re-Motion project for .NET.
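A T-SQL sketch of the SQL Server features described above; the plan handle and statement text below are placeholders, not real values:

```sql
-- Find cached plans for statements touching a table of interest.
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%Transaction%';

-- Evict a single disliked plan from the cache by its handle
-- (the handle here is a placeholder).
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);

-- Pin a poorly performing query to a hint via a plan guide; @stmt must
-- match the submitted statement text exactly (placeholder shown).
EXEC sp_create_plan_guide
    @name = N'ForceLoopJoin',
    @stmt = N'SELECT ... FROM ...',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (LOOP JOIN)';
```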
I am really arguing about where to put the effort, and really NOT disagreeing with you. We are talking past each other.
I personally believe, as I wrote in the author's blog comments, that an ORM based on an algebraic model would likely be better than the big and irregular ORM APIs we have today.