I want to increment the count field and then select the resulting count in one atomic operation.
The code is:
sql = ("update user_cout set count = count+1 "
"where username=%s")
cursor = connection.cursor()
cursor.execute(sql, (username, ))
cursor.execute("select count from user_count "
"where username=%s", (username, ))
Is there a recommended way to do this?
You need to perform the UPDATE and the SELECT within a transaction (i.e. statements that are not automatically committed):
connection.autocommit(0)
One can then execute the existing statements and explicitly commit them to the database:
connection.commit()
Note that to achieve atomicity, the user_count table must use a transactional storage engine (such as InnoDB).
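Putting that together, a minimal sketch (assuming a MySQLdb/pymysql-style connection object called connection, and the user_count table and username value from the question) could look like this:

# Sketch only: assumes `connection`, `username`, and an InnoDB user_count table.
connection.autocommit(False)   # the UPDATE and SELECT will share one transaction

cursor = connection.cursor()
cursor.execute("update user_count set count = count+1 "
               "where username=%s", (username, ))
cursor.execute("select count from user_count "
               "where username=%s", (username, ))
new_count = cursor.fetchone()[0]

connection.commit()            # make the increment visible to other sessions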
@linbo, you could create a function/procedure in your database (see the MySQL documentation on stored routines for how to do so) and then run just one call in your code:
sql = ("SELECT MY_CUSTOM_FUNC(%s)")
cursor = connection.cursor()
cursor.execute(sql, (username, ))
There are many advantages of using this approach:
You centralize your logic; it can be called from many places in your application;
Less code;
MySQL executes the stored function call as part of a single statement, so the increment and the read are atomic from the application's point of view.
I hope it helps!
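As a rough sketch of what such a function could look like (the name MY_CUSTOM_FUNC, its body, and the column size are hypothetical, and creating stored functions needs the appropriate privileges):

create_func = """
CREATE FUNCTION MY_CUSTOM_FUNC(p_username VARCHAR(64))
RETURNS INT
MODIFIES SQL DATA
BEGIN
    UPDATE user_count SET count = count + 1 WHERE username = p_username;
    RETURN (SELECT count FROM user_count WHERE username = p_username);
END
"""
cursor = connection.cursor()
cursor.execute(create_func)   # run once, e.g. as part of a migration

# Afterwards the application code is reduced to a single call:
cursor.execute("SELECT MY_CUSTOM_FUNC(%s)", (username, ))
new_count = cursor.fetchone()[0]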
You can do it like that:
cursor.execute("UPDATE tbl_name SET pos = #next_pos := pos + 1 WHERE some_id = %s; SELECT #next_pos;", (12345, ))
cursor.nextset()
result = cursor.fetchone()
Then result[0] will contain the incremented value.
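Note that sending two statements in a single execute() call is usually only accepted when the connection was opened with the multi-statements client flag; with pymysql, for example, that looks roughly like this (connection details are placeholders):

import pymysql
from pymysql.constants import CLIENT

connection = pymysql.connect(host="localhost", user="app", password="secret",
                             database="mydb",
                             client_flag=CLIENT.MULTI_STATEMENTS)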
Related
I'm trying to write a function that will add records to PostgreSQL tables. Given a table users and a record [username1, 21, Man], I would like to get the string
"""INSERT INTO users VALUES(
'username1',
'21',
'Man
)
"""
This is what I currently have. It doesn't work, and I'm wondering what a good way to fix it would be.
def add_record(table, lst_of_attributes):
    ln = lst_of_attributes.length
    """
    INSERT INTO {} VALUES(
        '{}', #has to be repeated (ln-1) times
        {}
    )
    """.format(table, *lst_of_attributes)
Something like this should work:
def add_record(table, lst_of_attributes):
    sqlstring = f"INSERT INTO {table} VALUES("
    for val in lst_of_attributes:
        sqlstring += f"'{val}',"
    sqlstring = sqlstring[:-1]
    sqlstring += ")"
    return sqlstring
table = "users"
lst_of_attributes = ['username1','21','Man']
resultstring = add_record(table, lst_of_attributes)
print(resultstring)
result:
"INSERT INTO users VALUES('username1','21','Man')"
However:
As others pointed out, there are already existing libraries for this kind of operation, which are probably safer and easier. Another way to go for simple additions to a table would be stored procedures.
Trying to manually create your own sql strings from variables like this opens yourself up to sql-injection attacks and other bugs.
Always use a library like psycopg to handle it.
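For example, a parameterized insert with psycopg2 might look like this (the connection string and the users column names are assumptions for illustration):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
with conn, conn.cursor() as cur:
    cur.execute("INSERT INTO users (username, age, gender) VALUES (%s, %s, %s)",
                ("username1", 21, "Man"))
# the `with conn:` block commits on success, so the row is persisted here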
I am trying to create a program where a user can enter an operator (i.e. <> or =) and then a number to query a database with pymysql. I have tried a number of different ways of doing this, but have been unsuccessful so far. I have two documents, with display being one, and the other document importing display.
Document 1
def get_pop(op, pop):
    if not conn:
        connect()
    query = "SELECT * FROM city WHERE Population %s %s"
    with conn:
        cursor = conn.cursor()
        cursor.execute(query, (op, pop))
        x = cursor.fetchall()
        return x
Document 2
def city():
    op = input("Enter < > or =: ")
    population = input("Enter population: ")
    pop = display.get_pop(op, population)
    for p in pop:
        print(p)
I am getting the following error.
pymysql.err.ProgrammingError: (1064,......
Please help, thanks.
You can't do this. Parameterization works for values only, not for operators, table names, or column names. You'll need to format the operator into the string. Don't confuse the %s placeholder here with Python string formatting; MySQL is awkward in that it uses %s for binding parameters, which clashes with regular Python string formatting.
The MySQL %s placeholder escapes the user input to protect against SQL injection. In this case, I set up a basic test to check that the operator submitted by the user is in a list of accepted operators.
def get_pop(op, pop):
    query = "SELECT * FROM city WHERE Population {} %s"  # add a placeholder for format
    with conn:  # where does this come from?
        cursor = conn.cursor()
        if op in ['=', '!=']:
            cursor.execute(query.format(op), (pop,))
            x = cursor.fetchall()
            return x
You'll want to come up with some reasonable return value in the case that if op in ['=', '!='] is not True but that depends entirely on how you want this to behave.
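One option, sketched under the same assumptions as the snippet above, is to validate against an allowlist up front and raise (or return an empty list) for anything else:

ALLOWED_OPS = ('<', '>', '=', '!=', '<>')

def get_pop(op, pop):
    if op not in ALLOWED_OPS:
        raise ValueError("unsupported operator: {!r}".format(op))
    query = "SELECT * FROM city WHERE Population {} %s".format(op)
    with conn:
        cursor = conn.cursor()
        cursor.execute(query, (pop,))
        return cursor.fetchall()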
After checking that op indeed contains either "<>" or "=" and that pop indeed contains a number you could try:
query = "SELECT * FROM city WHERE Population " + op + " %s";
Beware of SQL injection.
Then
cursor.execute(query, (pop,))
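A minimal version of those checks (the error handling here is just one possible choice) could be:

if op not in ("<>", "="):
    raise ValueError("operator must be <> or =")
if not pop.isdigit():          # assumes pop arrives as a string from input()
    raise ValueError("population must be a whole number")
# ...then build and execute the query exactly as above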
I'm trying to loop over the results of a MySQL query, but I can't get the variable to work. What am I doing wrong? The loop starts at the for statement below.
cur = db.cursor()
query = '''
    Select user_id, solution_id
    From user_concepts
    Where user_id IN
        (Select user_id FROM fields);
    '''
cur.execute(query)
numrows = cur.rowcount

for i in xrange(0, numrows):
    row = cur.fetchone()
    # find all item_oid where task_id = solution_id for first gallery and sort by influence.
    cur.execute('''
        SELECT task_id, item_oid, influence
        FROM solution_oids
        WHERE task_id = row[%d]
        ORDER BY influence DESC;
        ''', (i))
    cur.fetchall()
error message:
File "james_test.py", line 114, in ''', (i))
File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 187, in execute
query = query % tuple([db.literal(item) for item in args])
TypeError: 'int' object is not iterable
cur.execute expects a tuple or dict for its parameters, but you passed (i), which is an int, not a tuple. To make it a tuple, add a comma: (i,).
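Applied to the snippet in the question, that also means binding the actual value from the fetched row instead of writing row[%d] inside the SQL text, roughly:

row = cur.fetchone()
cur.execute('''
    SELECT task_id, item_oid, influence
    FROM solution_oids
    WHERE task_id = %s
    ORDER BY influence DESC;
    ''', (row[1],))    # row[1] is solution_id; the trailing comma makes this a tuple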
Here's how I would do this. You may not need to declare two cursors, but it won't hurt anything; sometimes a second cursor is necessary because there could be a conflict. Notice how I demonstrate two different methods for looping over the cursor data: one with fetchall and one by iterating the cursor directly (a third method, fetchone, is not shown). A dictionary cursor is really convenient, but sometimes you may want a standard non-dict cursor where values are retrieved only by their position in the row tuple. Also note the trailing comma in the parameter list when you have only one parameter, because execute expects a tuple; with more than one parameter the parentheses already form a tuple, so no trailing comma is needed.
cursor1 = db.cursor(MySQLdb.cursors.DictCursor)  # a DictCursor lets you access columns by name
cursor2 = db.cursor(MySQLdb.cursors.DictCursor)

cursor1.execute("""
    Select user_id, solution_id
    From user_concepts
    Where user_id IN (Select user_id FROM fields);
    """)

for row in cursor1.fetchall():
    user_id = row["user_id"]
    solution_id = row["solution_id"]
    cursor2.execute("""
        SELECT task_id, item_oid, influence
        FROM solution_oids
        WHERE task_id = %s
        ORDER BY influence DESC;
        """, (solution_id,))
    for data in cursor2:
        task_id = data["task_id"]
        item_oid = data["item_oid"]
        influence = data["influence"]
Maybe try this:
a = '''this is the {try_}. try'''
i = 1
b = a.format(try_=i)
print b
You could even do:
data = {'try_':i}
b = a.format(**data)
sources:
python's ".format" function
Python string formatting: % vs. .format
I am trying to get the number of rows returned from an sqlite3 query in Python, but it seems the feature isn't available.
Think of PHP's mysqli_num_rows() in MySQL.
I devised a workaround, but it is awkward. Assume a class executes SQL and gives me the results:
# Query execution returning a result
data = sql.sqlExec("select * from user")

# Run another query just for row counting -- not a very good workaround.
# I cast dataCopy to a list and take its length, because I noticed that as soon
# as I perform any action on the data, it becomes empty. This is also not great
# because someone else could perform another transaction on the database in the
# meantime.
dataCopy = sql.sqlExec("select * from user")

if len(list(dataCopy)):
    for m in data:
        print("Name = {}, Password = {}".format(m["username"], m["password"]))
else:
    print("Query returned nothing")
Is there a function or property that can do this without the extra work?
Normally, cursor.rowcount would give you the number of results of a query.
However, for SQLite, that property is often set to -1 due to the nature of how SQLite produces results. Short of a COUNT() query first you often won't know the number of results returned.
This is because SQLite produces rows as it finds them in the database, and won't itself know how many rows are produced until the end of the database is reached.
From the documentation of cursor.rowcount:
Although the Cursor class of the sqlite3 module implements this attribute, the database engine’s own support for the determination of “rows affected”/”rows selected” is quirky.
For executemany() statements, the number of modifications are summed up into rowcount.
As required by the Python DB API Spec, the rowcount attribute “is -1 in case no executeXX() has been performed on the cursor or the rowcount of the last operation is not determinable by the interface”. This includes SELECT statements because we cannot determine the number of rows a query produced until all rows were fetched.
Emphasis mine.
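A quick way to see this behaviour for yourself (using an in-memory database as a stand-in) is:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (username TEXT, password TEXT)")
conn.execute("INSERT INTO user VALUES ('alice', 'secret')")

cur = conn.execute("SELECT * FROM user")
print(cur.rowcount)          # -1: sqlite3 does not know the count up front
print(len(cur.fetchall()))   # 1: only after fetching do you know how many rows came back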
For your specific query, you can add a sub-select to add a column:
data = sql.sqlExec("select (select count() from user) as count, * from user")
This is not all that efficient for large tables, however.
If all you need is one row, use cursor.fetchone() instead:
cursor.execute('SELECT * FROM user WHERE userid=?', (userid,))
row = cursor.fetchone()
if row is None:
    raise ValueError('No such user found')
result = "Name = {}, Password = {}".format(row["username"], row["password"])
import sqlite3
conn = sqlite3.connect("path/to/db")
cursor = conn.cursor()
cursor.execute("select * from user")
results = cursor.fetchall()
print len(results)
len(results) is just what you want
Use the following:
dataCopy = sql.sqlExec("select count(*) from user")
values = dataCopy.fetchone()
print values[0]
When you just want an estimate beforehand, then simply use COUNT():
n_estimate = cursor.execute("SELECT COUNT() FROM user").fetchone()[0]
To get the exact number before fetching, use a locked "Read transaction", during which the table won't be changed from outside, like this:
cursor.execute("BEGIN") # start transaction
n = cursor.execute("SELECT COUNT() FROM user").fetchone()[0]
# if n > big: be_prepared()
allrows=cursor.execute("SELECT * FROM user").fetchall()
cursor.connection.commit() # end transaction
assert n == len(allrows)
Note: a normal SELECT also locks, but only until it has been completely fetched, the cursor closes, or commit() / END or other actions implicitly end the transaction.
I've found the SELECT statement with COUNT() to be slow on a very large DB. Moreover, using fetchall() can be very memory-intensive.
Unless you explicitly design your database so that it does not have a rowid, you can always try a quick solution
cur.execute("SELECT max(rowid) from Table")
n = cur.fetchone()[0]
This gives the highest rowid, which matches the number of rows as long as no rows have ever been deleted from the table.
I did it like this:
cursor.execute("select count(*) from my_table")
results = cursor.fetchone()
print(results[0])
this code worked for me:
import sqlite3
con = sqlite3.connect(your_db_file)
cursor = con.cursor()
result = cursor.execute("select count(*) from your_table").fetchall()  # returns a list of tuples
num_of_rows = result[0][0]
A simple alternative approach here is to use fetchall to pull a column into a python list, then count the length of the list. I don't know if this is pythonic or especially efficient but it seems to work:
rowlist = []
c.execute("SELECT {rowid} from {whichTable}".format(rowid="rowid", whichTable=whichTable))
rowlist = c.fetchall()
rowlistcount = len(rowlist)
print(rowlistcount)
The following script works:
def say():
    global s                                    # make s a global declaration
    vt = sqlite3.connect('kur_kel.db')          # connect to the db file
    bilgi = vt.cursor()
    bilgi.execute('select count(*) from kuke')  # execute the SQL command
    say_01 = bilgi.fetchone()                   # fetch one row from the executed query
    print(say_01[0])                            # print the first item of the tuple
    s = say_01[0]                               # assign the query result to the variable
    bilgi.close()                               # close the cursor
    vt.close()                                  # close the db file
I'm getting a weird error when inserting some data from a Python script into MySQL. It's basically caused by one of the variables I am inserting being blank. I take it that MySQL does not like blank variables, but is there something else I can change it to so that it works with my insert statement?
I can successfully use an IF statement to turn it into 0 if it's blank, but that may mess up some of the data analytics I plan to do in MySQL later. Is there a way to convert it to NULL or something, so that MySQL accepts it but doesn't add anything?
When using mysqldb and cursor.execute(), pass the value None, not "NULL":
value = None
cursor.execute("INSERT INTO table (`column1`) VALUES (%s)", (value,))
Found the answer here
If col1 is a char column and col2 is an int column, a trick could be:
sql = "insert into table (col1, col2) values (%s, %s)" % ("'{}'".format(val1) if val1 else "NULL", val2 if val2 else "NULL")
You do not need to add quotes around %s; the value can be processed before it is passed into the SQL string.
This method works when executing SQL through a SQLAlchemy session, for example session.execute(text(sql)).
PS: the SQL has not been tested yet.
Quick note about using parameters in SQL statements with Python. See the RealPython article on this topic - Preventing SQL Injection Attacks With Python. Here's another good article from TowardsDataScience.com - A Simple Approach To Templated SQL Queries In Python. These helped me with same None/NULL issue.
Also, I found that if I put "NULL" (without quotes) directly into the INSERT query in VALUES, it was interpreted appropriately in the SQL Server DB. The translation problem only exists if needing to conditionally add NULL or a value via string interpolation.
Examples:
cursor.execute("SELECT admin FROM users WHERE username = %s'", (username, ));
cursor.execute("SELECT admin FROM users WHERE username = %(username)s", {'username': username});
UPDATE: This StackOverflow discussion is more in line with what I'm trying to do and may help someone else.
Example:
import pypyodbc
myData = [
(1, 'foo'),
(2, None),
(3, 'bar'),
]
connStr = """
DSN=myDb_SQLEXPRESS;
"""
cnxn = pypyodbc.connect(connStr)
crsr = cnxn.cursor()
sql = """
INSERT INTO myTable VALUES (?, ?)
"""
for dataRow in myData:
    print(dataRow)
    crsr.execute(sql, dataRow)
cnxn.commit()
crsr.close()
cnxn.close()
Based on the above answers, I wrote a wrapper function for my use case; you can adapt it to your needs.
def sanitizeData(value):
    if value in ('', None):
        return "NULL"
    # This handles the case where the value already contains a single quote (e.g. O'Brien).
    # Doubling the quote is how SQL escapes single quotes.
    if type(value) is str:
        return "'{}'".format(value.replace("'", "''"))
    return value
Now build the SQL query like so:
"INSERT INTO %s (Name, Email) VALUES (%s, %s)" % (table_name, sanitizeData(actual_name), sanitizeData(actual_email))
Why not set the variable equal to some string like 'no price' and then filter this out later when you want to do math on the numbers?
filter(lambda x: x != 'no price', list_of_data_from_database)
Do a quick check for blank, and if it is, set it equal to NULL:
if not variable_to_insert:
    variable_to_insert = "NULL"
...then make sure that the inserted variable is not in quotes for the insert statement, like:
insert = "INSERT INTO table (var) VALUES (%s)" % (variable_to_insert)
...
not like:
insert = "INSERT INTO table (var) VALUES ('%s')" % (variable_to_insert)
...