I'm making a cross-platform application that stores its data in SQLite files. These files are quite big: 10,000,000 rows in a table.
When I open the file I run a query to count the number of rows in the table.
My code, using the sqlite3 module:
# con is a sqlite3 connection object
cur = con.cursor()
cur.execute("SELECT COUNT(rowid) FROM snp")
row_count = cur.fetchone()[0]
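For completeness, here is a self-contained version of what I'm running, with the timing I used. The in-memory database and the small number of rows are just stand-ins so the snippet runs anywhere; my real "snp" table sits in a file and has 10,000,000 rows.

```python
import sqlite3
import time

# Stand-in for my real database file: an in-memory database with a
# small "snp" table (the real table has 10,000,000 rows on disk).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE snp (name TEXT)")
con.executemany("INSERT INTO snp (name) VALUES (?)",
                [("rs%d" % i,) for i in range(1000)])
con.commit()

# This is the query that is slow on the Windows machines.
cur = con.cursor()
start = time.time()
cur.execute("SELECT COUNT(rowid) FROM snp")
n_rows = cur.fetchone()[0]
elapsed = time.time() - start
print(n_rows)  # 1000 here; 10,000,000 in the real file
```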
Execute time for a table with 10,000,000 rows:
- Kubuntu, 4 GB RAM, dual-core: 8s
- Windows XP 32-bit, 2 GB RAM, dual-core: 15m32s
- Windows 7 64-bit, 4 GB RAM, i5: 13m15s
During execution hardly any memory or CPU is used, so there is no obvious bottleneck there.
When I run the same query through the sqlite3.exe shell it is as fast as on the Linux machine, so I think the Python side is to blame for the bad performance.
It's the cur.execute line that takes so long to execute.
Does anyone know why there is such a big difference between Windows and Linux? And more importantly, how can I improve the performance so it runs as fast as on the Linux machine? It's unusable right now.
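Edit: one workaround I'm considering (not a real fix) is SELECT MAX(rowid), which comes back instantly because SQLite can answer it from the table's b-tree without scanning. But as far as I understand, it only equals the true row count if rows were never deleted, which this small sketch demonstrates:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for my real database file
con.execute("CREATE TABLE snp (name TEXT)")
con.executemany("INSERT INTO snp (name) VALUES (?)",
                [("rs%d" % i,) for i in range(5)])

cur = con.cursor()
# MAX(rowid) is answered from the b-tree without a full table scan.
cur.execute("SELECT MAX(rowid) FROM snp")
max_rowid = cur.fetchone()[0]    # 5

# After a delete it no longer matches the real row count.
con.execute("DELETE FROM snp WHERE rowid = 3")
cur.execute("SELECT MAX(rowid) FROM snp")
max_after = cur.fetchone()[0]    # still 5
cur.execute("SELECT COUNT(rowid) FROM snp")
count_after = cur.fetchone()[0]  # 4
print(max_rowid, max_after, count_after)
```

So it only helps if the application never deletes rows, which I can't guarantee.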
I'm using Python 2.7.5, freshly installed today on all the machines.