Use the same replacement character for sqlite3 and psycopg2 in Python?

I have a painted-myself-into-a-corner question that hopefully has a sensible solution I'm overlooking. I had a Python project using sqlite3, which I like a lot and use all the time, and I wanted to try to also support running it on postgres, in case scaling becomes an issue.
Some initial research suggested that there isn't really a single de facto Python database abstraction layer (hopefully I didn't get this wrong), but psycopg2 fortunately has a very similar structure and methods to sqlite3, and I was able to get away with adding only a couple of helper functions and switch cases to my existing code to let it support both database libraries with the same queries.
The only exception, unbelievably enough, is the replacement character for variables: sqlite3 needs ? and psycopg2 needs %s. For all I know, these are inherent to SQLite and Postgres themselves.
This means that a function like this:
cur.execute("INSERT INTO repositories (repository_url, repository_name, repository_type, repository_thumbnail, last_crawl_timestamp, item_url_pattern) VALUES (%s,%s,%s,%s,%s,%s)", (repository_url, repository_name, repository_type, repository_thumbnail, time.time(), item_url_pattern))
will only work for Postgres, and if I change the %s's to ?'s, it'll only work for SQLite. This defies any kind of elegant solution -- I don't really want to rig up some kind of string replacement to construct my queries, as that'll get dumb pretty quickly -- and mostly I'm just astonished that this has turned out to be my blocker.
Any thoughts?

The API in use by both implementations is the Python Database API Specification v2.0, documented in PEP 249. The module-level global paramstyle tells you what style of parameter marker a particular implementation expects; the possible values and their meanings are documented in the PEP.
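For instance, since each driver declares its style, a tiny helper keyed off paramstyle can pick the right marker at runtime. A minimal sketch, assuming you only ever target sqlite3 and psycopg2 (the placeholder() helper is made up for illustration):
import sqlite3
# import psycopg2  # when running against Postgres

def placeholder(module):
    # qmark -> '?'; format/pyformat -> '%s'; extend the mapping if you add drivers
    return {"qmark": "?", "format": "%s", "pyformat": "%s"}[module.paramstyle]

db = sqlite3  # or psycopg2, chosen by your existing switch
ph = placeholder(db)
query = "INSERT INTO repositories (repository_url, repository_name) VALUES ({0}, {0})".format(ph)
The same query string then works against either driver, because the only thing that differs is the marker itself.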

Python prepared statement security

I took a slight peek behind the curtain of the MySQLdb Python driver, and to my horror I saw that it was simply escaping the parameters and putting them directly into the query string. I realize that escaping inputs should be fine in most cases, but coming from PHP, I have seen bugs where, given certain database character sets and versions of the MySQL driver, SQL injection was still possible.
This question had some incredibly detailed responses regarding the edge cases of string escaping in PHP, and has led me to the belief that prepared statements should be used whenever possible.
So then my questions are: Are there any known cases where the MySQLdb driver has been successfully exploited due to this? When a query needs to be run in a loop, say in the case of an incremental DB migration script, will this degrade performance? Are my concerns regarding escaped input fundamentally flawed?
I can't point to any known exploit cases, but I can say that yes, this is terrible.
The Python project calling itself MySQLdb is no longer maintained. It's been piling up unresolved GitHub issues since 2014, and just quickly looking at the source code, I can find more bugs not yet reported -- for example, it uses a regex to parse queries in executemany, leading it to mishandle any query containing the string " values()" where values isn't the keyword.
Instead of MySQLdb, you should be using MySQL's Connector/Python. That is still maintained, and it's hopefully less terrible than MySQLdb. (Hopefully. I didn't check that one.)
...prepared statements should be used whenever possible.
Yes. That's the best advice. If the prepared statement system is broken, there will be klaxons blaring from the rooftops and everyone in the Python world will pounce on the problem to fix it. If there's a mistake in your own code that doesn't use prepared statements, you're on your own.
What you're seeing in the driver is probably prepared statement emulation; that is, the driver is responsible for inserting data into the placeholders and forwarding the final, composed statement to the server. This is done for various reasons, some historical, some to do with compatibility.
Drivers are generally given a lot of serious scrutiny, as they're the foundation of most systems. If there is a security bug in there, a lot of people are going to be impacted by it; the stakes are very high.
The difference between using prepared statements with placeholder values and your own interpolated code is massive, even if behind the scenes the same thing happens. This is because the driver, by design, always escapes your data. Your code might not: you may omit the escaping on one value, and then you have a catastrophic hole.
Use placeholder values like your life depends on it, because it very well might. You do not want to wake up to a phone call or email one day saying your site got hacked and now your database is floating around on the internet.
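To make that concrete, here is a sketch of the two styles (cursor is any DB-API cursor; the users table and name value are made up):
# Dangerous: manual interpolation -- omit the escaping once and you have an injection hole.
cursor.execute("SELECT * FROM users WHERE name = '%s'" % name)
# Safe: hand the value to the driver, which always escapes (or truly binds) it.
cursor.execute("SELECT * FROM users WHERE name = %s", (name,))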
Using prepared statements is faster than concatenating a query. The database can precompile the statement, so only the parameters will be changed when iterating in a loop.
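For the loop case, executemany is the idiomatic way to let the driver reuse one statement for many rows. A sketch (the users table and an already-open cursor are assumed):
rows = [("alice",), ("bob",), ("carol",)]
# One statement, many parameter sets; the driver or server can reuse the compiled plan.
cursor.executemany("INSERT INTO users (name) VALUES (%s)", rows)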

Why aren't cursors optional in mysqlclient?

I'm quite new to Python and Flask, and while working through the examples I couldn't help noticing cursors. Before this I programmed in PHP, where I never needed cursors. So I got to wondering: what are cursors, and why are they used so much in these code examples?
But no matter where I turned, I saw no clear verdict and lots of warnings:
Wikipedia: "Fetching a row from the cursor may result in a network round trip each time", and "Cursors allocate resources on the server, such as locks, packages, processes, and temporary storage."
StackOverflow: See the answer by AndreasT.
The Island of Misfit Cursors: "A good developer is never reluctant to use a tool only because it's often misused by others."
And to top it all, I learned that MySQL does NOT support cursors!
It looks like the only code in the mysqlclient library that doesn't use cursors is the _mysql module, and the author repeatedly warns not to use it, for compatibility reasons: "If you want to write applications which are portable across databases, use MySQLdb, and avoid using this module directly."
Well, I hope I have explained and supported my dilemma sufficiently well. Here are two big questions troubling me:
Since MySQL doesn't support cursors, what's the whole point of building the entire thing on a Cursor class hierarchy?
Why aren't cursors optional in mysqlclient?
You are confusing database-engine-level cursors with Python DB-API cursors. The latter exist only at the Python code level and are not necessarily tied to database-level ones.
At the Python level, cursors are a way to encapsulate a query and its results. This abstraction allows a simple, usable, common API across different vendors. Whether the actual implementation for a given vendor relies on database-level cursors or not is a totally different problem.
To make a long story short: there are two distinct concepts here:
database (server) cursors, a feature that exists in some but not all SQL engines
DB-API (client) cursors (as defined in PEP 249), which are used to execute a query and eventually fetch its results
DB-API cursors are so named because they bear some conceptual similarity to database cursors, but they are technically unrelated.
As to why mysqlclient works this way, the reason is simple: it implements PEP 249, the community-defined API for Python SQL database clients.
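In practice, a DB-API cursor is just the object you execute and fetch through, whether or not the server has cursors at all. A minimal sketch with mysqlclient (connection details are placeholders):
import MySQLdb  # the module name mysqlclient installs

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="test")
cur = conn.cursor()  # a client-side PEP 249 cursor, not a server-side resource
cur.execute("SELECT id, name FROM users WHERE id = %s", (42,))
print(cur.fetchone())
cur.close()
conn.close()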

Which databases are best for a PyQt application?

I want to create an application for Windows. I need to use a database; which would be best for a PyQt application?
For example:
SQLAlchemy
MySQL
etc.
I would use SQLite every time unless performance became an obvious big problem.
It comes with Python
You don't need to worry about installing it on a target machine or having an existing installation which might clash (including a potential port clash - SQLite doesn't use a port)
It's fairly small (doesn't increase the installed size too much)
Then, a much less obvious choice that I would very much consider making: adding Django to the mix. Django's model system could make for much simpler management, depending on the type of data you're working with. Also, in the case where I've considered it (I just haven't got to that stage of development yet) it means I can reuse the models I've got on the server and a good bit of code from there too.
Obviously in this case you could need to be careful about what you expose; business-critical processing stuff that you don't want to share, potential security holes in server code which you've helpfully provided the code for, etc.
SQLite is fine for a single user.
If you are going over a network to talk to a central database, then you need a database with a decent Python library.
Take a serious look at MySQL if you need/want SQL.
Otherwise, there is CouchDB in the NoSQL camp, which is great if you are storing documents and can express searches as map/reduce functions. It's poor for ad hoc queries.
If you want a relational database, I'd recommend SQLAlchemy, as you then get a choice of back ends as well as an ORM. By default go with SQLite, as per the other recommendations here.
If you don't need a relational database, take a look at ZODB. It's an awesome Python-only object-oriented database.
I guess it's totally up to you, but personally I use SQLite because it's easy to use and the syntax is amazingly simple, whereas MySQL suits complex apps and has options for performance tuning. In the end, it comes down to what your app requires.

Pros and cons of using sqlite3 vs custom table implementation

I noticed that a significant part of my (pure Python) code deals with tables. Of course, I have class Table which supports the basic functionality, but I end up adding more and more features to it, such as queries, validation, sorting, indexing, etc.
I am starting to wonder whether it's a good idea to remove my class Table and refactor the code to use a regular relational database that I would instantiate in-memory.
Here's my thinking so far:
Performance of queries and indexing would improve, but communication between Python code and a separate database process might be less efficient than calls between Python functions. I assume that would be too much overhead, so I would have to go with SQLite, which comes with Python and lives in the same process. I hope this means it's a pure performance gain (at the cost of SQLite's non-standard SQL dialect and limited feature set).
With SQL, I will get a lot more powerful features than I would ever want to code myself. Seems like a clear advantage (even with sqlite).
I won't need to debug my own implementation of tables, but debugging mistakes in SQL is hard, since I can't set breakpoints or easily print out intermediate state. I don't know how to judge the overall impact on my code's reliability and debugging time.
The code will be easier to read, since instead of calling my own custom methods I would write SQL (everyone who needs to maintain this code knows SQL). However, the Python code that deals with the database might be uglier and more complex than the code that uses the pure-Python class Table. Again, I don't know which is better on balance.
Any corrections to the above, or anything else I should think about?
SQLite does not run in a separate process. So you don't actually have any extra overhead from IPC. But IPC overhead isn't that big, anyway, especially over e.g., UNIX sockets. If you need multiple writers (more than one process/thread writing to the database simultaneously), the locking overhead is probably worse, and MySQL or PostgreSQL would perform better, especially if running on the same machine. The basic SQL supported by all three of these databases is the same, so benchmarking isn't that painful.
You generally don't have to do the same type of debugging on SQL statements as you do on your own implementation. SQLite works, and is fairly well debugged already. It is very unlikely that you'll ever have to debug "OK, that row exists, why doesn't the database find it?" and track down a bug in index updating. Debugging SQL is completely different than procedural code, and really only ever happens for pretty complicated queries.
As for debugging your code, you can fairly easily centralize your SQL calls and add tracing to log the queries you run, the results you get back, etc. The Python SQLite interface may already have this (I'm not sure; I normally use Perl). It'll probably be easiest to just make your existing Table class a wrapper around SQLite.
I would strongly recommend not reinventing the wheel. SQLite will have far fewer bugs, and save you a bunch of time. (You may also want to look into Firefox's fairly recent switch to using SQLite to store history, etc., I think they got some pretty significant speedups from doing so.)
Also, SQLite's well-optimized C implementation is probably quite a bit faster than any pure Python implementation.
You could try to make a SQLite wrapper with the same interface as your class Table, so that you keep your code clean and get SQLite's performance.
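A minimal sketch of that idea, with deliberately simplified schema handling (the Table name and methods here are illustrative, not your actual API):
import sqlite3

class Table:
    def __init__(self, name, columns):
        self.name = name
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE %s (%s)" % (name, ", ".join(columns)))

    def insert(self, *values):
        marks = ", ".join("?" for _ in values)
        # Only the values are parameterized; the table name comes from your own code.
        self.conn.execute("INSERT INTO %s VALUES (%s)" % (self.name, marks), values)

    def query(self, where, params=()):
        return self.conn.execute(
            "SELECT * FROM %s WHERE %s" % (self.name, where), params
        ).fetchall()

# Usage:
t = Table("repos", ["id INTEGER", "url TEXT"])
t.insert(1, "https://example.com")
print(t.query("id = ?", (1,)))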
If you're doing database work, use a database; if you're not, then don't. Since you're using tables, it sounds like you are. I'd recommend using an ORM to make it more Pythonic. SQLAlchemy is the most flexible (though it's not strictly just an ORM).

Does Python support MySQL prepared statements?

I worked on a PHP project earlier where prepared statements made the SELECT queries 20% faster.
I'm wondering whether this works in Python. I can't seem to find anything that specifically says it does or does not.
Most languages provide a way to do generic parameterized statements, and Python is no different. When a parameterized query is used, databases that support preparing statements will do so automatically.
In Python, a parameterized query looks like this:
cursor.execute("SELECT * FROM tablename WHERE fieldname = %s", [value])
The specific style of parameterization may differ depending on your driver; you can import your db module and then print yourmodule.paramstyle.
From PEP-249:
paramstyle
String constant stating the type of parameter marker formatting expected by the interface. Possible values are:
'qmark'     Question mark style, e.g. '...WHERE name=?'
'numeric'   Numeric, positional style, e.g. '...WHERE name=:1'
'named'     Named style, e.g. '...WHERE name=:name'
'format'    ANSI C printf format codes, e.g. '...WHERE name=%s'
'pyformat'  Python extended format codes, e.g. '...WHERE name=%(name)s'
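So the check is a one-liner per driver:
import sqlite3
print(sqlite3.paramstyle)   # 'qmark'    -> use ?

import psycopg2             # assuming psycopg2 is installed
print(psycopg2.paramstyle)  # 'pyformat' -> use %s or %(name)s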
Direct answer: no, it doesn't.
joshperry's answer is a good explanation of what it does instead.
From eugene y's answer to a similar question:
Check the MySQLdb package comments:
"Parameterization" is done in MySQLdb by escaping strings and then blindly interpolating them into the query, instead of using the MYSQL_STMT API. As a result, unicode strings have to go through two intermediate representations (encoded string, escaped encoded string) before they're received by the database.
So the answer is: No, it doesn't.
After a quick look through the execute() method of the Cursor object in the MySQLdb package (a kind of de facto package for integrating with MySQL, I guess), it seems that (at least by default) it only does string interpolation and quoting, not actual parametrized queries:
if args is not None:
    query = query % db.literal(args)
If this isn't string interpolation, then what is?
In the case of executemany, it actually tries to execute the insert/replace as a single statement, as opposed to executing it in a loop. That's about it; no magic there, it seems. At least not in its default behaviour.
EDIT: Oh, I've just realized that the modulo operator could be overridden, but I felt like cheating and grepped the source. I didn't find an overridden __mod__ anywhere, though.
For people just trying to figure this out: YES, you can use prepared statements with Python and MySQL. Just use MySQL Connector/Python from MySQL itself and instantiate the right cursor:
https://dev.mysql.com/doc/connector-python/en/index.html
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursorprepared.html
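A minimal sketch of such a prepared cursor (connection details are placeholders):
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="test")
cur = conn.cursor(prepared=True)  # yields a MySQLCursorPrepared
# The statement is prepared server-side on the first execute and reused afterwards.
cur.execute("SELECT name FROM users WHERE id = %s", (42,))
print(cur.fetchone())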
Using the SQL interface as suggested by Amit can work if you're only concerned about performance. However, you then lose the protection against SQL injection that native Python support for prepared statements could bring. Python 3 has modules that provide prepared statement support for PostgreSQL. For MySQL, "oursql" seems to provide true prepared statement support (not faked as in the other modules).
Not directly related, but this answer to another question at SO includes the syntax details of 'templated' queries. I'd say that the auto-escaping would be their most important feature...
As for performance, note the method executemany on cursor objects. It bundles up a number of queries and executes them all in one go, which does lead to better performance.
There is a solution!
You can use them if you put them into a stored procedure on the server and call it like this from Python:
cursor.callproc(procedure_name, args)
Here is a nice little tutorial on stored procedures in MySQL and Python:
http://www.mysqltutorial.org/calling-mysql-stored-procedures-python/
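As a sketch, assuming a stored procedure get_user(IN uid INT) already exists on the server and conn is a MySQL Connector/Python connection:
cur = conn.cursor()
cur.callproc("get_user", (42,))
# Connector/Python returns the procedure's result sets via stored_results().
for result in cur.stored_results():
    print(result.fetchall())
cur.close()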
