Forget the speed of the database. What happens when you need to transmit the returned data across a network? Would you rather transmit 32 bytes per row, or 10kB?
Of course, that comes down to the scaling problem. The application I'm currently working on can't use ORMs (though we have rolled a few of our own read-only ones), because of 1) the size of the data we're going through, and 2) the lack of support for splitting reads and read/writes to different connections.
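To make (2) concrete, here's a minimal sketch of the kind of routing we end up doing by hand; the host names, credentials, and the get_conn helper are placeholders, not our actual code:

    import MySQLdb

    # Writes go to the primary; read-only work can be pointed at a replica.
    PRIMARY = dict(host="db-primary", user="app", passwd="secret", db="app")
    REPLICA = dict(host="db-replica", user="app", passwd="secret", db="app")

    def get_conn(readonly=False):
        # Pick the endpoint based on whether the caller only needs to read.
        return MySQLdb.connect(**(REPLICA if readonly else PRIMARY))

    conn = get_conn(readonly=True)
    cur = conn.cursor()
    cur.execute("SELECT id, name FROM users WHERE id = %s", (42,))
    print(cur.fetchone())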
(1) is definitely a problem I've seen. SQLAlchemy, for example, takes forever to initialize objects representing even a single row. That's fine if you have fewer than 100 tables with fewer than 100 rows each and almost never join, but start scaling and you run into issues.
"forever to initialize a single row" is obviously some pretty severe hyperbole, but the slowness of loading rows into fully formed objects is largely a product of C python's sluggishness in creating new objects. This also has nothing to do with how many rows are in the table, only how many rows you happen to be fetching over the wire for a particular query.
Performance is also something SQLAlchemy is obsessive about, so the speed of loading objects in SQLA has increased by orders of magnitude over the course of 0.4, 0.5, 0.6, 0.7. If your opinions are based on earlier versions, your impressions may be out of date.
Sorry to be vague. In my benchmarks it takes around 30 ms to initialize an instance of a model as the result of a query, using Python 2.6 and SQLAlchemy 0.6. This is in contrast to a raw MySQLdb query returning rows as dicts, which takes 2-4 ms.
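For what it's worth, the comparison was along these lines; the table, credentials, and row count here are made up, and the point is ORM instances vs. plain dicts, not the exact numbers:

    import time

    import MySQLdb
    import MySQLdb.cursors
    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String(64))

    # ORM path: every row comes back as a fully mapped User instance.
    engine = create_engine("mysql://app:secret@localhost/app")
    session = sessionmaker(bind=engine)()
    start = time.time()
    orm_rows = session.query(User).limit(1000).all()
    print("ORM: %.1f ms" % ((time.time() - start) * 1000))

    # Raw path: every row is just a dict built by the C driver.
    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret",
                           db="app", cursorclass=MySQLdb.cursors.DictCursor)
    cur = conn.cursor()
    start = time.time()
    cur.execute("SELECT id, name FROM users LIMIT 1000")
    raw_rows = cur.fetchall()
    print("raw: %.1f ms" % ((time.time() - start) * 1000))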
MySQLdb is written in C. So you're comparing the speed of raw C code to pure Python (which runs on top of that C code). The vast difference in speed between pure C and Python has little to do with SQLAlchemy, which I can assure you generates those objects as fast as possible while still maintaining proper behavior.
Not really sure what you mean. MySQLdb, i.e. this project: http://mysql-python.sourceforge.net/FAQ.html ? When you ask this library to execute a query and return rows, the network communication with the MySQL server is done in C code, i.e. with _mysql. It doesn't make sense to say "_mysql is twice as fast as MySQLdb" - "_mysql" is an intrinsic component of it. The Python portion of MySQLdb does very little.
Here is from the FAQ included with the project:
"MySQLdb is a thin Python wrapper around ``_mysql`` which makes it
compatible with the Python DB API interface (version 2). In reality,
a fair amount of the code which implements the API is in ``_mysql``
for the sake of efficiency."
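To see that layering for yourself, you can drop down to _mysql directly; it's the low-level interface the DB-API cursor is built on (the connection parameters and the users table below are made up):

    import _mysql  # the C extension module that MySQLdb wraps

    conn = _mysql.connect(host="localhost", user="app", passwd="secret", db="app")
    conn.query("SELECT id, name FROM users LIMIT 5")
    result = conn.store_result()
    # maxrows=0 means "fetch everything"; rows come back as tuples of strings
    print(result.fetch_row(maxrows=0))

The DB-API cursor in MySQLdb is essentially a thin Python layer over those calls.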
In fact, if you do want a MySQLdb that's pure Python, there's an entirely separate project for that called pymysql: http://code.google.com/p/pymysql/. This library will be a lot slower than MySQLdb.
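If you want to try it, pymysql is meant to be usable as a drop-in replacement; something along these lines should work (connection details made up):

    import pymysql
    pymysql.install_as_MySQLdb()  # make "import MySQLdb" resolve to pymysql

    import MySQLdb  # now actually pymysql under the hood
    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="app")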