I am new to SQLAlchemy.
I have a local PostgreSQL server, and I want to use SQLAlchemy to create a database.
I have the following code:
connection = engine.connect()
connection.execute(
    text("CREATE DATABASE :database_name").bindparams(bindparam('database_name', quote=False)),
    database_name="test_db"
)
But this unfortunately single-quotes the database name parameter, which does not work in Postgres. The logs from SQLAlchemy:
[SQL: CREATE DATABASE %(database_name)s]
[parameters: {'database_name': 'test_db'}]
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "'test_db'" LINE 1: CREATE DATABASE 'test_db'
In the Postgres logs, it executes the following statement, which is invalid because of the single quotes. A valid one would have double quotes:
CREATE DATABASE 'test_db'
Is there a way for the bind parameter not to be quoted in the resulting statement? I do not want to do the parameter quoting and string construction myself, as I think this abstraction should be handled by SQLAlchemy - in case I change my underlying database engine, for example - and this looks to be the mechanism SQLAlchemy promotes to avoid SQL injection too.
The same question applies to other Postgres statements, like creating a user with a password or granting privileges to an existing user, which all need Postgres-specific quoting.
You cannot have parameters in statements other than SELECT, INSERT, UPDATE or DELETE, and even there they can only stand in for values, never for identifiers such as a database name.
You'll have to construct the CREATE DATABASE statement as a string containing the database name. Something like
from psycopg2 import sql

# Note: CREATE DATABASE cannot run inside a transaction block,
# so the connection must be in autocommit mode first:
# conn.autocommit = True
cursor.execute(
    sql.SQL("CREATE DATABASE {}").format(sql.Identifier('test_db'))
)
We use Sybase as the main transactional DB and SQLite as the in-memory DB for integration tests.
The issue I am facing is the conflicting behavior of the two implementations.
I need to execute a query similar to
select dbo.get_name(id), id from some_table
This runs perfectly fine in Sybase (I understand the importance of the schema prefix for user-defined functions). However, SQLite throws an error: sqlite3.OperationalError: near "("
I tried adding dbo as a schema while creating the SQLite connections, but no luck.
The whole implementation is in Python.
You could make dbo a group, and make all your users members of that group. Then you could avoid using the schema prefix at all.
Or, you could have a SQLite database named dbo where you put the function get_name.
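If dropping the prefix is an option, note that SQLite has no schema-qualified functions at all; a Python-side function can instead be registered on the connection with the stdlib sqlite3 module and then called without any prefix. A minimal sketch, assuming a hypothetical get_name implementation standing in for the Sybase dbo.get_name UDF:

```python
import sqlite3

# Hypothetical stand-in for the Sybase dbo.get_name() user-defined function.
def get_name(row_id):
    return f"name-{row_id}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (id INTEGER)")
conn.execute("INSERT INTO some_table VALUES (1)")

# Register the Python function on the connection; SQL can then call it
# with no schema prefix.
conn.create_function("get_name", 1, get_name)

rows = conn.execute("SELECT get_name(id), id FROM some_table").fetchall()
print(rows)  # [('name-1', 1)]
```

This only helps the test side, of course - the Sybase queries would still need to work without the dbo. prefix, as suggested above.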
I needed to perform an UPDATE JOIN query, but Django has no built-in support for that, so I wrote a raw SQL query. Now I need to build the WHERE clause of that query dynamically, so I would like to know how to reuse Django's SQL compiler.
I could take the Query object from Model.objects.filter(...).query, then generate the raw SQL of the WHERE clause with query.where.as_sql(query.get_compiler(using='default'), None), but the tables in my raw SQL are aliased, so the lookup fields will be wrong.
Sorry to ask here, but I cannot find much reference material in pymysql's security guide about how to prevent SQL injection.
When I develop in PHP I know to use MySQL prepared statements (also called parameterized queries, or stmt), but I cannot find a reference for this in pymysql.
Simple code using pymysql looks like:
sqls = "select id from tables where name=%s"
attack = "jason' and 1=1"
cursor.execute(sqls, attack)
How do I know whether this will prevent an SQL injection attack? If it does, how does pymysql prevent it? Does cursor.execute already use prepared statements by default?
Python MySQL drivers do not use real query parameters. In Python, the argument (the variable attack in your example) is interpolated into the SQL string before the SQL is sent to the database server.
This is not the same as using a query parameter. In a real parameterized query, the SQL string is sent to the database server with the parameter placeholder intact.
But the Python driver does properly escape the argument as it interpolates, which protects against SQL injection.
I can prove it when I turn on the query log:
mysql> SET GLOBAL general_log=ON;
And tail the log while I run the Python script:
$ tail -f /usr/local/var/mysql/bkarwin.log
...
180802 8:50:47 14 Connect root@localhost on test
14 Query SET @@session.autocommit = OFF
14 Query select id from tables where name='jason\' and 1=1'
14 Quit
You can see that the query has had the value interpolated into it, and the embedded quote character is preceded by a backslash, which prevents it from becoming an SQL injection vector.
I'm actually testing MySQL's Connector/Python, but pymysql does the same thing.
I disagree with this design decision for the Python connectors to avoid using real query parameters (real parameters work by sending the SQL query to the database with parameter placeholders intact, and sending the values for those parameters separately). The risk is that programmers will think that any string interpolation of parameters into the query string works the same as it does when you let the driver do it.
Example of SQL injection vulnerability:
attack = "jason' and '1'='1"
sqls = "select id from tables where name='%s'" % attack
cursor.execute(sqls)
The log shows this has resulted in SQL injection:
180802 8:59:30 16 Connect root@localhost on test
16 Query SET @@session.autocommit = OFF
16 Query select id from tables where name='jason' and '1'='1'
16 Quit
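The difference is easy to demonstrate end-to-end with Python's stdlib sqlite3 driver (which, unlike the MySQL drivers discussed above, does bind parameters in the library rather than interpolating them). A sketch, using a hypothetical users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'jason')")
# A row whose name is literally the attack string:
conn.execute("INSERT INTO users VALUES (2, ?)", ("jason' and '1'='1",))

attack = "jason' and '1'='1"

# Parameterized: the attack string is treated purely as data.
safe = conn.execute(
    "SELECT id FROM users WHERE name = ?", (attack,)
).fetchall()

# String interpolation: the embedded quotes break out of the literal
# and change the query logic.
unsafe = conn.execute(
    "SELECT id FROM users WHERE name = '%s'" % attack
).fetchall()

print(safe)    # [(2,)] - matched only the row whose name IS the attack string
print(unsafe)  # [(1,)] - the injected "and '1'='1'" predicate matched 'jason'
```

The same distinction holds with pymysql: pass values as the second argument to cursor.execute and let the driver escape them; never interpolate them into the SQL string yourself.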
I'm new to SQLAlchemy and have been trying to figure this out for days!
I have some python code which is executing the following line:
mdb_session.query(PendingConfig).filter(PendingConfig.id == config.id).delete()
It deletes all rows in a table called PendingConfig whose id equals the given config.id.
I want to log the underlying SQL query that SQLAlchemy generates, but I don't know how to do that, since delete() returns just an integer equal to the number of rows deleted.
I tried setting up a logger, but that had its own issues, as I explained in this post.
Need help on this!
If you really want to get the SQL that was actually run by the MySQL server, then you can enable the MySQL query log or slow query log, and read it from the database server.
See https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html
The MySQL Server doesn't know anything about Python, it just knows that a client sent it a query to execute.
If it's a parameterized query, it will contain ? placeholders in the SQL text, but SQLAlchemy doesn't do placeholders as far as I know. It always interpolates parameter values into the SQL query before sending it to MySQL.
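If the goal is only to see the statement with its values inlined, SQLAlchemy can also render a statement client-side via literal_binds, without touching the server at all. A minimal sketch, using a hypothetical pending_config table standing in for the PendingConfig model's table:

```python
from sqlalchemy import Column, Integer, MetaData, Table, delete

# Hypothetical table definition standing in for PendingConfig.__table__.
metadata = MetaData()
pending_config = Table("pending_config", metadata, Column("id", Integer))

stmt = delete(pending_config).where(pending_config.c.id == 42)

# literal_binds inlines the bound values into the rendered SQL string.
sql = str(stmt.compile(compile_kwargs={"literal_binds": True}))
print(sql)  # DELETE FROM pending_config WHERE pending_config.id = 42
```

Note that literal_binds only handles simple value types, and the rendered string is for logging and debugging - the actual statement sent to the server may still differ per dialect.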
I wrote code that creates and executes SQL query statements using pymysql in Python.
If I run the SQL statement generated by the code directly against the DB, it works normally.
However, if I execute the generated SQL statement in code with cursor.execute(sql), the DB ends up containing no data.
When I connect, I also pass the local_infile=True option. How should I write this code?