I'm trying to loop over a MySQL query, but I can't get the variable to work. What am I doing wrong? The loop starts at line 10.
cur = db.cursor()
query = '''
Select user_id, solution_id
From user_concepts
Where user_id IN
(Select user_id FROM fields);
'''
cur.execute(query)
numrows = cur.rowcount
for i in xrange(0, numrows):
    row = cur.fetchone()
    # find all item_oid where task_id = solution_id for the first gallery and sort by influence.
    cur.execute('''
        SELECT task_id, item_oid, influence
        FROM solution_oids
        WHERE task_id = row[%d]
        ORDER BY influence DESC;
        ''', (i))
    cur.fetchall()
Error message:
File "james_test.py", line 114, in ''', (i))
File "/usr/lib64/python2.7/site-packages/MySQLdb/cursors.py", line 187, in execute
query = query % tuple([db.literal(item) for item in args])
TypeError: 'int' object is not iterable
cur.execute() expects a tuple or dict for its params, but you gave it (i), which is an int, not a tuple. To make it a tuple, add a comma: (i,).
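For example, a minimal sketch of the corrected inner call (assuming, as the second query's WHERE clause suggests, that you mean to bind the solution_id from the fetched row rather than the loop index i):
row = cur.fetchone()              # e.g. (user_id, solution_id)
cur.execute('''
    SELECT task_id, item_oid, influence
    FROM solution_oids
    WHERE task_id = %s
    ORDER BY influence DESC;
    ''', (row[1],))               # trailing comma: a one-element tuple
results = cur.fetchall()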
Here's how I would do this. You may not need to declare two cursors, but it won't hurt anything; sometimes a second cursor is necessary because there could be a conflict. Notice that I demonstrate two different methods for looping over the cursor data: one with fetchall() and one by looping over the cursor itself. A third method could use fetchone(); a sketch of it follows the code below. Using a dictionary cursor is really nice, but sometimes you may want a standard non-dict cursor, where values are retrieved only by their position in the row tuple. Also note the need for a trailing comma in the parameter list when you have only one parameter, because execute() expects a tuple. With more than one parameter you won't need the trailing comma, since multiple parameters already form a tuple.
cursor1 = db.cursor(MySQLdb.cursors.DictCursor) # a dictcursor enables a named hash
cursor2 = db.cursor(MySQLdb.cursors.DictCursor) # a dictcursor enables a named hash
cursor1.execute("""
    Select user_id, solution_id
    From user_concepts
    Where user_id IN (Select user_id FROM fields);
    """)
for row in cursor1.fetchall():
    user_id = row["user_id"]
    solution_id = row["solution_id"]

    cursor2.execute("""
        SELECT task_id, item_oid, influence
        FROM solution_oids
        WHERE task_id = %s
        ORDER BY influence DESC;
        """, (solution_id,))
    for data in cursor2:
        task_id = data["task_id"]
        item_oid = data["item_oid"]
        influence = data["influence"]
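For completeness, here is a sketch of the third method mentioned above, looping with fetchone() (same cursor1 query and DictCursor as before):
cursor1.execute("""
    Select user_id, solution_id
    From user_concepts
    Where user_id IN (Select user_id FROM fields);
    """)

row = cursor1.fetchone()
while row is not None:                # fetchone() returns None once the rows run out
    user_id = row["user_id"]
    solution_id = row["solution_id"]
    # ... run the second query with (solution_id,) as shown above ...
    row = cursor1.fetchone()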
Maybe try this:
a = '''this is the {try_}. try'''
i= 1
b = a.format(try_=i)
print b
You could even do:
data = {'try_':i}
b = a.format(**data)
sources:
python's ".format" function
Python string formatting: % vs. .format
Is there an elegant way of getting a single result from an SQLite SELECT query when using Python?
for example:
conn = sqlite3.connect('db_path.db')
cursor=conn.cursor()
cursor.execute("SELECT MAX(value) FROM table")
for row in cursor:
    for elem in row:
        maxVal = elem
Is there a way to avoid those nested for loops and get the value directly? I've tried
maxVal = cursor[0][0]
without any success.
I think you're looking for Cursor.fetchone():
cursor.fetchone()[0]
Or you could write a wrapper function that, given SQL, returns a scalar result:
def get_scalar_result(conn, sql):
    cursor = conn.cursor()
    cursor.execute(sql)
    return cursor.fetchone()[0]
I apologize for the possibly less than syntactically correct Python above, but I hope you get the idea.
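It could be called, for example, like this (reusing conn and the query from the question):
max_val = get_scalar_result(conn, "SELECT MAX(value) FROM table")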
Be careful: the accepted answer might cause a TypeError!
From the fetchone() documentation:
Fetches the next row of a query result set, returning a single sequence, or None when no more data is available.
So for some SQL queries, cursor.fetchone()[0] can turn into None[0], which raises a TypeError exception.
A better way to get the first row, or None, is:
first_row = next(cursor, [None])[0]
If the query returns no rows, next falls back to the default value [None], and taking the first element of that list does not raise an exception.
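A quick sketch of the edge case (using an in-memory sqlite3 database with a hypothetical empty table t):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (value INTEGER)")   # table left empty on purpose

cursor = conn.execute("SELECT value FROM t")
row = cursor.fetchone()                          # None: nothing to fetch
# row[0] here would raise TypeError: 'NoneType' object is not subscriptable

cursor = conn.execute("SELECT value FROM t")
first_row = next(cursor, [None])[0]              # falls back to the [None] default
print(first_row)                                 # None, no exception raised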
If you're not using pysqlite, which has the built-in cursor.fetchone(), you can limit the query to a single row in SQL:
cursor.execute("select value from table order by value desc limit 1")
Sequence unpacking can be used to extract the scalar value from the result tuple.
By iterating over the cursor (or cursor.fetchall()) if there are multiple rows:
for result, in cursor:
    print(result)
Or using cursor.fetchone if there is a single row in the resultset:
result, = cursor.fetchone()
print(result)
In both cases the trailing comma after result unpacks the element from the single-element tuple. This is the same as the more commonly seen
a, b = (1, 2)
except the tuples only have one element:
a, = (1,)
select count(*) from ... group by ... returns no row at all when nothing matches, so fetchone() yields None instead of 0,
and fetchone()[0] would then lead to an exception.
Therefore
def get_scalar_from_sql(sqlcur, sqlcmd):
    # select count(*) from ... group by ... may return no row, so fetchone() can be None
    sqlcur.execute(sqlcmd)
    scalar = 0
    tuple_or_None = sqlcur.fetchone()
    if tuple_or_None is not None:
        (scalar,) = tuple_or_None
    return scalar
Or you can try:
cursor.execute("SELECT * FROM table where name='martin'")
I want to check if the ID exists already but I get this error:
Not all parameters were used in the SQL statement
Code:
Id = "TEST"
sql = """SELECT * FROM musics WHERE Id = %s"""
dbc.execute(sql, Id)
row = cursor.rowcount
if row == 0:
    # NOT EXIST
The second argument to execute() should be a sequence of values, one for each placeholder token %s in the query.
You did pass a sequence, but not in the way you intended. Strings are sequences, so you actually passed a sequence of four values: 'T', 'E', 'S', 'T'. That is too many values, because the query has only one placeholder token.
Pass the string as a one-element tuple, like so:
args = ("TEST",)
sql = """SELECT * FROM musics WHERE Id = %s"""
dbc.execute(sql, args)
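Putting it together with the existence check from the question (a sketch that uses fetchone() instead of rowcount, since rowcount is not reliably populated for SELECTs on every driver; dbc is the cursor from the question):
args = ("TEST",)
sql = """SELECT * FROM musics WHERE Id = %s"""
dbc.execute(sql, args)

row = dbc.fetchone()
if row is None:
    print("Id does not exist yet")
else:
    print("Id already exists")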
I'm having trouble with a simple SQL statement using the IN operator through pymssql.
Here is a sample :
import pymssql
conn = pymssql.connect(server='myserver', database='mydb')
cursor = conn.cursor()
req = "SELECT * FROM t1 where id in (%s)"
cursor.execute(req, tuple(range(1,10)))
res = cursor.fetchall()
Surprisingly, only the first id is returned, and I can't figure out why.
Has anyone encountered the same behavior?
You're trying to pass nine ID values to the query and you only have one placeholder. You can get nine placeholders by doing this:
ids = range(1,10)
placeholders = ','.join('%s' for i in ids)
req = "SELECT * FROM t1 where id in ({})".format(placeholders)
cursor.execute(req, ids)
res = cursor.fetchall()
As an aside, you don't necessarily need a tuple here. A list will work fine.
It looks like you are only passing SELECT * FROM t1 where id in (1). You call execute() with the tuple, but the string has only one placeholder. To pass all values, call execute like this:
cursor.execute(req, (tuple(range(1,10)),))
This will pass the tuple as first argument to the string to format.
EDIT: Regarding the execute()/executemany() question: if executemany() returns the last id instead of the first, it seems that executemany() runs the query once per value in the sequence, and the last run then returns the last id.
I am trying to get the number of rows returned from an sqlite3 database in Python, but it seems the feature isn't available.
Think of PHP's mysqli_num_rows() in MySQL.
I did devise a workaround, but it is awkward. Assume a class executes the SQL and gives me the results:
# Query execution returning a result
data = sql.sqlExec("select * from user")

# Run another query for row-count checking; not a very good workaround
dataCopy = sql.sqlExec("select * from user")

# Cast dataCopy to a list and get its length. I did this because I noticed that as soon
# as I perform any action on the data, it becomes null.
# This is not too good, as someone else could perform another transaction on the database
# in the nick of time.
if len(list(dataCopy)):
    for m in data:
        print("Name = {}, Password = {}".format(m["username"], m["password"]))
else:
    print("Query returned nothing")
Is there a function or property that can do this without the stress?
Normally, cursor.rowcount would give you the number of results of a query.
However, for SQLite that property is often set to -1 due to the nature of how SQLite produces results. Short of running a COUNT() query first, you often won't know the number of results returned.
This is because SQLite produces rows as it finds them in the database, and won't itself know how many rows are produced until the end of the database is reached.
From the documentation of cursor.rowcount:
Although the Cursor class of the sqlite3 module implements this attribute, the database engine’s own support for the determination of “rows affected”/”rows selected” is quirky.
For executemany() statements, the number of modifications are summed up into rowcount.
As required by the Python DB API Spec, the rowcount attribute “is -1 in case no executeXX() has been performed on the cursor or the rowcount of the last operation is not determinable by the interface”. This includes SELECT statements because we cannot determine the number of rows a query produced until all rows were fetched.
Emphasis mine.
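A quick illustration (a minimal sketch using an in-memory database and made-up rows):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (username TEXT, password TEXT)")
conn.executemany("INSERT INTO user VALUES (?, ?)",
                 [("alice", "a"), ("bob", "b")])

cur = conn.execute("SELECT * FROM user")
print(cur.rowcount)          # -1: not determinable for SELECT statements
print(len(cur.fetchall()))   # 2: known only after fetching the rows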
For your specific query, you can add a sub-select to add a column:
data = sql.sqlExec("select (select count() from user) as count, * from user")
This is not all that efficient for large tables, however.
If all you need is one row, use cursor.fetchone() instead:
cursor.execute('SELECT * FROM user WHERE userid=?', (userid,))
row = cursor.fetchone()
if row is None:
    raise ValueError('No such user found')

result = "Name = {}, Password = {}".format(row["username"], row["password"])
import sqlite3

conn = sqlite3.connect('path/to/db')
cursor = conn.cursor()
cursor.execute("select * from user")
results = cursor.fetchall()
print(len(results))
len(results) is just what you want
Use the following:
dataCopy = sql.sqlExec("select count(*) from user")
values = dataCopy.fetchone()
print(values[0])
When you just want an estimate beforehand, then simply use COUNT():
n_estimate = cursor.execute("SELECT COUNT() FROM user").fetchone()[0]
To get the exact number before fetching, use a locked "Read transaction", during which the table won't be changed from outside, like this:
cursor.execute("BEGIN") # start transaction
n = cursor.execute("SELECT COUNT() FROM user").fetchone()[0]
# if n > big: be_prepared()
allrows=cursor.execute("SELECT * FROM user").fetchall()
cursor.connection.commit() # end transaction
assert n == len(allrows)
Note: a normal SELECT also locks, but only until it has been completely fetched, the cursor closes, or commit()/END or other actions implicitly end the transaction.
I've found the SELECT statement with count() to be slow on a very large DB. Moreover, using fetchall() can be very memory-intensive.
Unless you explicitly design your database so that it does not have a rowid, you can always try a quick solution:
cur.execute("SELECT max(rowid) from Table")
n = cur.fetchone()[0]
This will tell you how many rows your table has, assuming no rows have been deleted.
I did it like this:
cursor.execute("select count(*) from my_table")
results = cursor.fetchone()
print(results[0])
This code worked for me:
import sqlite3
con = sqlite3.connect(your_db_file)
cursor = con.cursor()
result = cursor.execute("select count(*) from your_table").fetchall()  # returns a list of tuples
num_of_rows = result[0][0]
A simple alternative approach here is to use fetchall to pull a column into a Python list, then count the length of the list. I don't know if this is pythonic or especially efficient, but it seems to work:
rowlist = []
c.execute("SELECT {rowid} from {whichTable}".format(rowid="rowid", whichTable=whichTable))
rowlist = c.fetchall()
rowlistcount = len(rowlist)
print(rowlistcount)
The following script works:
def say():
    global s                                    # make s a global
    vt = sqlite3.connect('kur_kel.db')          # connect to the db file
    bilgi = vt.cursor()
    bilgi.execute('select count(*) from kuke')  # execute the sql command
    say_01 = bilgi.fetchone()                   # fetch one row from the executed sql
    print(say_01[0])                            # print the tuple's first item
    s = say_01[0]                               # assign the variable to the sql query result
    bilgi.close()                               # close the cursor
    vt.close()                                  # close the db file
I need to perform a raw query:
cursor.execute("select id, name from people")
results = cursor.fetchall()
How do I convert this so that I can use it in a Django template:
{% for person in results %}
{{person.name}}
{% endfor %}
Normally, I'd use the model:
results = people.objects.raw("select id, name from people")
That works regardless of how many other models/tables I use in the query.
However, that method requires that I include the primary id for the people model. I cannot do that this time, because the SQL is actually a GROUP BY query and cannot contain the id.
I definitely want to use raw sql, not some other way of doing the equivalent of "group by".
This worked. It converts the tuple of tuples into a list of dictionaries, taking the field names from cursor.description. It could be made into a little function, and there's probably some smart lambda or comprehension trick that could make it shorter (a sketch of one follows the code below).
cursor = connection.cursor()
cursor.execute(my_select)
results = cursor.fetchall()
x = cursor.description
resultsList = []
for r in results:
    i = 0
    d = {}
    while i < len(x):
        d[x[i][0]] = r[i]
        i = i + 1
    resultsList.append(d)
return render_to_response(my_template, {"results": resultsList})
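For instance, a shorter sketch of the same idea (same my_select and my_template as above), using zip() instead of the manual index:
cursor = connection.cursor()
cursor.execute(my_select)

# cursor.description gives the column names; zip() pairs them with each row's values
columns = [col[0] for col in cursor.description]
resultsList = [dict(zip(columns, row)) for row in cursor.fetchall()]

return render_to_response(my_template, {"results": resultsList})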
If you insist on performing a raw SQL query with a MySQLdb cursor, create a dictionary cursor (DictCursor), so that column values can be accessed by name rather than by position.
cursor.close()
cursor = conn.cursor(MySQLdb.cursors.DictCursor)
cursor.execute("SELECT id, name FROM people")
results = cursor.fetchall()
for row in results:
    print "%s, %s" % (row["id"], row["name"])
With a DictCursor you don't need to do anything else: just pass the results to the template and use them the same way you would a Django queryset.
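For example, a minimal sketch of the view side (assuming the same render_to_response style and my_template name from the earlier answer):
cursor = conn.cursor(MySQLdb.cursors.DictCursor)
cursor.execute("SELECT id, name FROM people")
results = cursor.fetchall()   # a sequence of dicts, e.g. {'id': 1, 'name': 'Ann'}

# In the template, dictionary keys resolve just like attribute lookups:
# {% for person in results %} {{ person.name }} {% endfor %}
return render_to_response(my_template, {"results": results})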