Python - automating MySQL query: passing parameter

The code below is working fine, but I'm looking to make the MySQL-related parts cleaner and more efficient.
The first case is a function that receives a parameter and returns the customerID from the MySQL db:
def clean_table(self, customerName):
    getCustomerIDMySQL = """SELECT customerID
                            FROM customer
                            WHERE customerName = %s;"""
    self.cursorMySQL.execute(getCustomerIDMySQL, (customerName,))
    for getID_row in self.cursorMySQL:
        customerID = getID_row[0]
    return customerID
In the case where we know beforehand that the query returns exactly one row, how can I get that value without the "for" loop?
For the second case, the function has the table name ('customer') hard-coded in it...
def clean_tableCustomer(self):
    cleanTableQuery = """TRUNCATE TABLE customer;"""
    self.cursorMySQL.execute(cleanTableQuery)
    setIndexQuery = """ALTER TABLE customer AUTO_INCREMENT = 1;"""
    self.cursorMySQL.execute(setIndexQuery)
How can the table name be passed in as a function parameter instead? Here is how I tried to do it:
def clean_table(self, tableName):
    cleanTableQuery = """TRUNCATE TABLE %s;"""
    self.cursorMySQL.execute(cleanTableQuery, (tableName,))
    setIndexQuery = """ALTER TABLE %s AUTO_INCREMENT = 1;"""
    self.cursorMySQL.execute(setIndexQuery, (tableName,))
But this time MySQL raised an error.
All comments and suggestions are highly appreciated.

For the first case (simple, but it will raise a TypeError when there is no row, because fetchone() returns None):
customerID = self.cursorMySQL.fetchone()[0]
A more robust approach is to implement a new method on the cursor class:
def autofetch_value(self, sql, args=None):
    """ Return a single value from a single row, or None if there is no row. """
    self.execute(sql, args)
    returned_val = None
    row = self.fetchone()
    if row is not None:
        returned_val = row[0]
    return returned_val
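As a quick illustration (assuming self.cursorMySQL is created from a cursor subclass that defines autofetch_value), the original function then reduces to a single call:
def clean_table(self, customerName):
    getCustomerIDMySQL = """SELECT customerID
                            FROM customer
                            WHERE customerName = %s;"""
    # Returns the single customerID, or None when no row matches.
    return self.cursorMySQL.autofetch_value(getCustomerIDMySQL, (customerName,))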
For the second case:
def clean_table(self, tableName):
    cleanTableQuery = """TRUNCATE TABLE %s;""" % (tableName,)
    self.cursorMySQL.execute(cleanTableQuery)
    setIndexQuery = """ALTER TABLE %s AUTO_INCREMENT = 1;""" % (tableName,)
    self.cursorMySQL.execute(setIndexQuery)
Make sure you sanitize tableName yourself, since the cursor won't do it for you.
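Since nothing sanitizes tableName automatically, one possible approach (just a sketch; the set of allowed names below is assumed for illustration) is to check it against a whitelist before formatting it into the SQL:
# Sketch only: the allowed table names are an assumption for illustration.
ALLOWED_TABLES = {"customer", "orders"}

def clean_table(self, tableName):
    if tableName not in ALLOWED_TABLES:
        raise ValueError("refusing to clean unexpected table: %r" % tableName)
    # tableName is now known to be one of the literals above, not arbitrary input.
    self.cursorMySQL.execute("TRUNCATE TABLE %s;" % tableName)
    self.cursorMySQL.execute("ALTER TABLE %s AUTO_INCREMENT = 1;" % tableName)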

Unfortunately, you cannot parametrize the name of a table (see this post). You will have to use Python string operations to do what you are attempting here.
Hope this helps; it took me a while to figure this out when I ran into the same issue.

Related

Using COUNT(*) OVER() in current query with SQLAlchemy over PostgreSQL

In a prototype application that uses Python and SQLAlchemy with a PostgreSQL database I have the following schema (excerpt):
class Guest(Base):
    __tablename__ = 'guest'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    surname = Column(String(50))
    email = Column(String(255))
    [..]
    deleted = Column(Date, default=None)
I want to build a query, using SQLAlchemy, that retrieves the list of guests, to be displayed in the back-office.
To implement pagination I will be using LIMIT and OFFSET, and also COUNT(*) OVER() to get the total number of records as part of the same query (not with a separate query).
An example of the SQL query could be:
SELECT id, name, surname, email,
COUNT(*) OVER() AS total
FROM guest
WHERE (deleted IS NULL)
ORDER BY id ASC
LIMIT 50
OFFSET 0
If I were to build the query using SQLAlchemy, I could do something like:
query = session.query(Guest)
query = query.filter(Guest.deleted == None)
query = query.order_by(Guest.id.asc())
query = query.offset(0)
query = query.limit(50)
result = query.all()
And if I wanted to count all the rows in the guests table, I could do something like this:
from sqlalchemy import func
query = session.query(func.count(Guest.id))
query = query.filter(Guest.deleted == None)
result = query.scalar()
Now the question I am asking is how to execute one single query, using SQLAlchemy, similar to the one above, that kills two birds with one stone (returns the first 50 rows and the count of the total rows to build the pagination links, all in one query).
The interesting bit is the use of window functions in PostgreSQL, which allow the behaviour described above and save you from having to run two separate queries.
Is it possible?
Thanks in advance.
So I could not find any examples in the SQLAlchemy documentation, but I found these functions:
count()
over()
label()
And I managed to combine them to produce exactly the result I was looking for:
from sqlalchemy import func
query = session.query(Guest, func.count(Guest.id).over().label('total'))
query = query.filter(Guest.deleted == None)
query = query.order_by(Guest.id.asc())
query = query.offset(0)
query = query.limit(50)
result = query.all()
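Each element of result is then a (Guest, total) pair, so (as a small illustration, not part of the original answer) the guests and the overall count can be separated like this:
# Every row carries the same window-function total.
guests = [guest for guest, total in result]
total_count = result[0][1] if result else 0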
Cheers!
P.S. I also found this question on Stack Overflow, which was unanswered.

use row as variable with python and sql

I am trying to update some values in a database. The user can specify the column that should be changed, but the input from the user is a string. When I pass this to the MySQL connector in Python it gives an error because of the apostrophes. The code I have so far is:
import mysql.connector

conn = mysql.connector.connect(user=dbUser, password=dbPasswd, host=dbHost, database=dbName)
cursor = conn.cursor()
cursor.execute("""UPDATE Search SET %s = %s WHERE searchID = %s""", ('maxPrice', 300, 10,))
I get this error
mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''maxPrice' = 300 WHERE searchID = 10' at line 1
How do I get rid of the apostrophes? I think they are what is causing the problem.
As noted, you can't prepare a statement with a field (column) name as a parameter.
Perhaps the safest way is to allow only those fields that are expected, e.g.
#!/usr/bin/python
import os
import mysql.connector

conn = mysql.connector.connect(user=os.environ.get('USER'),
                               host='localhost',
                               database='sandbox',
                               unix_socket='/var/run/mysqld/mysqld.sock')
cur = conn.cursor(dictionary=True)

# Fetch the actual column names of the table so user input can be validated against them.
query = """SELECT column_name
           FROM information_schema.columns
           WHERE table_schema = DATABASE()
           AND table_name = 'Search'
        """
cur.execute(query)
fields = [x['column_name'] for x in cur.fetchall()]

user_input = ['maxPrice', 300, 10]
# Only build the statement if the requested field really exists.
if user_input[0] in fields:
    cur.execute("""UPDATE Search SET {0} = {1} WHERE id = {1}""".format(user_input[0], '%s'),
                tuple(user_input[1:]))
    print(cur.statement)
Prints:
UPDATE Search SET maxPrice = 300 WHERE id = 10
Where:
mysql> show create table Search\G
*************************** 1. row ***************************
       Table: Search
Create Table: CREATE TABLE `Search` (
  `id` int(11) DEFAULT NULL,
  `maxPrice` float DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
A column name is not a parameter. Put the column name maxPrice directly into your SQL.
cursor.execute("""UPDATE Search SET maxPrice = %s WHERE searchID = %s""", (300, 10))
If you want to use the same code with different column names, you would have to modify the string itself.
sql = "UPDATE Search SET {} = %s WHERE searchID = %s".format('maxPrice')
cursor.execute(sql, (300,10))
But bear in mind that this is not safe from injection the way parameters are, so make sure your column name is not a user-input string or anything like that.
You cannot do it like that. You need to place the column name in the string before you call cursor.execute; column names cannot be substituted as parameters by cursor.execute.
Something like this would work:
sql = "UPDATE Search SET {} = %s WHERE searchID = %s".format('maxPrice')
cursor.execute(sql, (300, 10,))
You cannot dynamically bind object (e.g., column) names, only values. If that's the logic you're trying to achieve, you'd have to resort to string manipulation/formatting (with all the risks of SQL-injection attacks that come with it). E.g.:
sql = """UPDATE Search SET {} = %s WHERE searchID = %s""".format('maxPrice')
cursor.execute(sql, (300, 10,))
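If the column name really does come from user input, another option besides the information_schema lookup shown in the first answer (sketched here with assumed names) is to map user-facing field names to the real column names, so only known columns can ever reach the formatted SQL:
# Sketch only: the mapping below is assumed for illustration.
FIELD_TO_COLUMN = {'max_price': 'maxPrice'}

def update_search_field(cursor, field, value, search_id):
    column = FIELD_TO_COLUMN[field]                # KeyError for anything unexpected
    sql = "UPDATE Search SET {} = %s WHERE searchID = %s".format(column)
    cursor.execute(sql, (value, search_id))        # values still go through parameters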

Performing an SQL query for each item in a tuple

I am new to Python and am hoping someone can help me figure out how to perform an SQL query on each item in a tuple using Python.
I have a SQL Express server that contains a number of databases for a badge reader system. What I am trying to do is pull the user id's that have scanned into a particular reader, then use those id's to get the actual user names.
Currently, I am able to run the query that pulls the user id's, and I can run a query on the other table using just one id. What I want to be able to do, and can't seem to figure out, is run that second query on every user id in the tuple created by the first query. Below is the code for the two functions I am currently using.
def get_id():
    global cardholder
    global cur
    cur.execute("SELECT user_id FROM db.table WHERE badgereaderid = 'badgereader1'")
    cardholder = []
    rows = cur.fetchall()
    for row in rows:
        if row == None:
            break
        cardholder.append(row[0])
    print(cardholder)

def get_name():
    global cardholder
    global user
    global cur
    cur.execute("SELECT FirstName, LastName FROM db.table WHERE user_id= '%s'" % cardholder)
    while 1:
        row = cur.fetchone()
        if row == None:
            break
        user = row[0] + row[1]
Two possible options
Repeated queries in Python
for user_id in cardholder:
    cur.execute("SELECT FirstName, LastName FROM db.table WHERE user_id= '%s'" % user_id)
But why not just pull all the data in the first query?
cur.execute("SELECT a.user_id, b.FirstName, b.LastName FROM db.table1 a left join bd.table2 b on a.user_id = b.user_id WHERE a.badgereaderid = 'badgereader1'")
or, use triple quotes to allow multi-line strings and make the SQL command easier to understand
cur.execute("""SELECT
a.user_id,
b.FirstName,
b.LastName
FROM db.table1 a
left join db.table2 b
on a.user_id = b.user_id
WHERE a.badgereaderid = 'badgereader1'""")
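As a quick sketch (not part of the original answer), the rows from that joined query can then be collected into a single dict keyed by user id:
# Build {user_id: "FirstName LastName"}; ids with no match in table2 come back
# with None names because of the left join.
users = {}
for user_id, first_name, last_name in cur.fetchall():
    users[user_id] = "{} {}".format(first_name, last_name)
print(users)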
A good practice in Python is to define the data collections outside the function if you intend to use them later on in your code.
Try this code:
cardholder_names = []

# Pass the cardholder as a param to the function
def get_name(cardholder):
    # cur is already defined as a global, no need to declare it twice
    cur.execute("SELECT FirstName, LastName FROM db.table WHERE user_id='{0}'".format(cardholder))
    return cur.fetchone()

# Now use the for loop to iterate over all the cardholders
for holder in cardholder:
    cardholder_name = get_name(holder)
    cardholder_names.append({"name": cardholder_name[0], "surname": cardholder_name[1]})

Python Sqlite3 - how to work with a very long WHERE IN() clause

[Using Python3.x]
The basic idea is that I have to run a first query to pull a long list of IDs (text, about a million of them) and use those IDs in an IN() clause of a WHERE statement in another query. I'm using Python string formatting to build the query, and it works well if the number of IDs is small, say 100k, but gives me an error (pyodbc.Error: ('08S01', '[08S01] [MySQL][ODBC 5.2(a) Driver][mysqld-5.5.31-MariaDB-log]MySQL server has gone away (2006) (SQLExecDirectW)')) when the set really is about a million IDs long.
I tried to read up on it a bit and think it might have something to do with the default(?) limits set by SQLite. I am also wondering whether I'm approaching this the right way at all.
Here's my code:
Step 1: Getting the IDs
def get_device_ids(con_str, query, tb_name):
    local_con = lite.connect('temp.db')
    local_cur = local_con.cursor()
    local_cur.execute("DROP TABLE IF EXISTS {};".format(tb_name))
    local_cur.execute("CREATE TABLE {} (id TEXT PRIMARY KEY, \
                       lang TEXT, first_date DATETIME);".format(tb_name))
    data = create_external_con(con_str, query)
    device_id_set = set()
    with local_con:
        for row in data:
            device_id_set.update([row[0]])
            local_cur.execute("INSERT INTO srv(id, lang, \
                               first_date) VALUES (?,?,?);", (row))
        lid = local_cur.lastrowid
        print("Number of rows inserted into SRV: {}".format(lid))
    return device_id_set
Step 2: Generating the query with 'dynamic' IN() clause
def gen_queries(ids):
    ids_list = str(', '.join("'" + id_ + "'" for id_ in ids))
    query = """
            SELECT e.id,
                   e.field2,
                   e.field3
            FROM table e
            WHERE e.id IN ({})
            """.format(ids_list)
    return query
Step 3: Using that query in another INSERT query
This is where things go wrong
def get_data(con_str, query, tb_name):
    local_con = lite.connect('temp.db')
    local_cur = local_con.cursor()
    local_cur.execute("DROP TABLE IF EXISTS {};".format(tb_name))
    local_cur.execute("CREATE TABLE {} (id TEXT, field1 INTEGER, \
                       field2 TEXT, field3 TEXT, field4 INTEGER, \
                       PRIMARY KEY(id, field1));".format(tb_name))
    data = create_external_con(con_str, query)  # <== THIS IS WHERE THAT QUERY IS INSERTED
    device_id_set = set()
    with local_con:
        for row in data:
            device_id_set.update(row[1])
            local_cur.execute("INSERT INTO table2(id, field1, field2, field3, \
                               field4) VALUES (?,?,?,?,?);", (row))
        lid = local_cur.lastrowid
        print("Number of rows inserted into table2: {}".format(lid))
Any help is very much appreciated!
Edit
This is probably the right solution to my problem; however, when I try to use "SET SESSION max_allowed_packet=104857600" I get the error: SESSION variable 'max_allowed_packet' is read-only. Use SET GLOBAL to assign the value (1621). Then, when I try to change SESSION to GLOBAL, I get an access denied message.
Insert the IDs into a (temporary) table in the same database, and then use:
... WHERE e.ID IN (SELECT ID FROM TempTable)
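A minimal sketch of that idea with sqlite3, reusing the device_id_set built in step 1; the table and column names in the SELECT are placeholders, and in the question's setup the staging table would have to live in the same (external) database that holds table e:
import sqlite3

con = sqlite3.connect('temp.db')
cur = con.cursor()

# Stage the IDs in a table instead of inlining a million-item IN() list.
cur.execute("CREATE TEMP TABLE IF NOT EXISTS temp_ids (id TEXT PRIMARY KEY)")
cur.executemany("INSERT OR IGNORE INTO temp_ids (id) VALUES (?)",
                ((device_id,) for device_id in device_id_set))

cur.execute("""SELECT e.id, e.field2, e.field3
               FROM some_table e
               WHERE e.id IN (SELECT id FROM temp_ids)""")
rows = cur.fetchall()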

In python, changing MySQL query based on function variables

I'd like to be able to add a restriction to the query if user_id != None ... for example:
"AND user_id = 5"
but I am not sure how to add this to the function below.
Thank you.
def get(id, user_id=None):
    query = """SELECT *
               FROM USERS
               WHERE text LIKE %s AND
                     id = %s
            """
    values = (search_text, id)
    results = DB.get(query, values)
This way I can call:
get(5)
get(5, 103524234) (includes the user_id restriction)
def get(id, user_id=None):
    query = """SELECT *
               FROM USERS
               WHERE text LIKE %s AND
                     id = %s
            """
    values = [search_text, id]
    if user_id is not None:
        query += ' AND user_id = %s'
        values.append(user_id)
    results = DB.get(query, values)
As you can see, the main difference with respect to your original code is the small if block in the middle, which enriches the query string and values when needed. I also made values a list rather than a tuple, so it can be extended with the more natural append rather than with
values += (user_id,)
which is arguably less readable; however, you can use it if you want to keep values a tuple for some other reason.
Edit: the OP now clarifies in a comment (!) that his original query has a trailing LIMIT clause. In this case I would suggest a different approach, such as:
query_pieces = ["""SELECT *
FROM USERS
WHERE text LIKE %s AND
id = %s
""", "LIMIT 5"]
values = [search_text, id]
if user_id is not None:
query_pieces.insert(1, ' AND user_id = %s')
values.append(user_id)
query = ' '.join(query_pieces)
results = DB.get(query, values)
You could do it in other ways, but keeping a list of query pieces in the proper order, enriching it as you go (e.g. by insert), and joining it with some whitespace at the end, is a pretty general and usable approach.
What's wrong with something like:
def get(id, user_id=None):
    query = "SELECT * FROM USERS WHERE text LIKE %s"
    if user_id != None:
        query = query + " AND id = %s" % (user_id)
    :
    :
That syntax may not be perfect, I haven't done Python for a while - I'm just trying to get the basic idea across. This defaults to the None case and only adds the extra restriction if you give a real user ID.
You could build the SQL query using a list of conditions:
def get(id, user_id=None):
    query = """SELECT *
               FROM USERS
               WHERE
            """
    values = [search_text, id]
    conditions = [
        'text LIKE %s',
        'id = %s']
    if user_id is not None:
        conditions.append('user_id = %s')
        values.append(user_id)
    query += ' AND '.join(conditions) + ' LIMIT 1'
    results = DB.get(query, values)
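For illustration, up to whitespace, the two calls from the question would build the following statements with this version (assuming search_text is defined in the enclosing scope, as in the original code):
# get(5)            -> "SELECT * FROM USERS WHERE text LIKE %s AND id = %s LIMIT 1"
#                      with values == [search_text, 5]
# get(5, 103524234) -> "SELECT * FROM USERS WHERE text LIKE %s AND id = %s AND user_id = %s LIMIT 1"
#                      with values == [search_text, 5, 103524234]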
