cursor.fetchall() vs list(cursor) in Python

Both methods return a list of the rows returned by the query. Did I miss something here, or do they really have identical usages?
Any differences performance-wise?

If you are using the default cursor, a MySQLdb.cursors.Cursor, the entire result set will be stored on the client side (i.e. in a Python list) by the time cursor.execute() completes.
Therefore, even if you use
for row in cursor:
you will not be getting any reduction in memory footprint. The entire result set has already been stored in a list (See self._rows in MySQLdb/cursors.py).
However, if you use an SSCursor or SSDictCursor:
import MySQLdb
import MySQLdb.cursors as cursors
conn = MySQLdb.connect(..., cursorclass=cursors.SSCursor)
then the result set is stored on the server (mysqld). Now you can write
cursor = conn.cursor()
cursor.execute('SELECT * FROM HUGETABLE')
for row in cursor:
    print(row)
and the rows will be fetched one by one from the server, so Python does not have to build a huge list of tuples first, saving memory.
Otherwise, as others have already stated, cursor.fetchall() and list(cursor) are essentially the same.

cursor.fetchall() and list(cursor) are essentially the same. The alternative is not to retrieve a list at all, and instead just loop over the bare cursor object:
for result in cursor:
This can be more efficient if the result set is large, as it doesn't have to fetch the entire result set and keep it all in memory; it can just incrementally get each item (or fetch them in smaller batches).

list(cursor) works because a cursor is an iterable; you can also use cursor in a loop:
for row in cursor:
# ...
A good database adapter implementation will fetch rows in batches from the server, saving on the memory footprint required as it will not need to hold the full result set in memory. cursor.fetchall() has to return the full list instead.
There is little point in using list(cursor) over cursor.fetchall(); the end effect is then indeed the same, but you wasted an opportunity to stream results instead.
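As a quick illustration that all three spellings see the same rows, here is a self-contained sketch using the standard-library sqlite3 module purely for demonstration (any DB-API adapter behaves the same way at this level; the table and data are made up):
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE t (id INTEGER, name TEXT)')
cur.executemany('INSERT INTO t VALUES (?, ?)', [(1, 'a'), (2, 'b')])

cur.execute('SELECT id, name FROM t ORDER BY id')
assert cur.fetchall() == [(1, 'a'), (2, 'b')]   # builds the whole list at once

cur.execute('SELECT id, name FROM t ORDER BY id')
assert list(cur) == [(1, 'a'), (2, 'b')]        # same rows, still builds a list

cur.execute('SELECT id, name FROM t ORDER BY id')
for row in cur:                                 # streams rows one at a time
    print(row)

conn.close()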

A (MySQLdb/PyMySQL-specific) difference worth noting when using a DictCursor is that list(cursor) will always give you a list, while cursor.fetchall() gives you a list unless the result set is empty, in which case it gives you an empty tuple. This was the case in MySQLdb and remains the case in the newer PyMySQL, where it will not be fixed for backwards-compatibility reasons. While this isn't a violation of the Python Database API Specification, it's still surprising and can easily lead to a type error caused by wrongly assuming that the result is a list rather than just a sequence.
Given the above, I suggest always favouring list(cursor) over cursor.fetchall(), to avoid ever getting caught out by a mysterious type error in the edge case where your result set is empty.
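A minimal sketch of that edge case, assuming PyMySQL with a DictCursor (the connection details and table name are placeholders):
import pymysql
import pymysql.cursors

conn = pymysql.connect(host='localhost', user='user', password='secret',
                       db='test', cursorclass=pymysql.cursors.DictCursor)
cur = conn.cursor()

cur.execute('SELECT * FROM some_table WHERE 1 = 0')  # matches no rows
print(cur.fetchall())  # () -- an empty tuple, per the behaviour described above

cur.execute('SELECT * FROM some_table WHERE 1 = 0')
print(list(cur))       # [] -- always a list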

You could use a list comprehension to pull one item out of each result tuple into a flat list:
import mysql.connector

conn = mysql.connector.connect()
cursor = conn.cursor()
sql = "SELECT column_name FROM db.table_name;"
cursor.execute(sql)
results = cursor.fetchall()
# take the first item of each tuple in your results
item_0_in_result = [row[0] for row in results]
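If you don't need the full results list for anything else, a minor variation (not specific to any one driver) is to run the comprehension directly over the cursor, which avoids holding the intermediate list of tuples:
cursor.execute(sql)
column_values = [row[0] for row in cursor]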

Inputting a Python list into Teradata SQL

I'm having a problem executing this SQL statement with a Python list injection. I'm new to Teradata SQL, and I'm not sure if this is the appropriate syntax for injecting a list into the WHERE clause.
conn = teradatasql.connect(host='PROD', user='1234', password='1234', logmech='LDAP')
l = ["Comp-EN Routing", "Comp-COLLABORATION"]
l2 = ["PEO", "TEP"]
l3 = ["TCV"]
crsr = conn.cursor()
query = """SELECT SOURCE_ORDER_NUMBER
FROM DL_.BV_DETAIL
WHERE (LEVEL_1 IN ? AND LEVEL_2 IN ?) or LEVEL_3 IN ?"""
crsr.executemany(query, [l,l2,l3])
conn.autocommit = True
I keep getting this error:
[Version 17.0.0.2] [Session 308831600] [Teradata Database] [Error 3939] There is a mismatch between the number of parameters specified and the number of parameters required.
Late to answer this, but if I found the question, someone else will in the future too.
executemany in teradatasql requires the second parameter to be a "sequence of sequences". The most common type of sequence we generally use in Python is a list. Essentially you need a list whose elements are themselves lists.
In your case this may look like:
myListOfLists=[['level1valueA','level1valueA','level3valueA'],['level1valueB','level1valueB','level3valueB']]
Your SQL statement will be executed twice, once for each list in your list.
In your case, though, I suspect you want to match any combination of the values stored in your three lists, which is an entirely different ball of wax and will take some creativity: either generate a list of lists with all possible combinations and submit it to executemany, or construct a SQL statement that takes multiple comma-delimited lists of values, forms the cartesian product, and tests for hits (see the sketch below).
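A minimal sketch of that second approach, reusing the table and lists from the question and assuming teradatasql's standard qmark (?) parameter binding; one placeholder is generated per value and the query is run once with a plain execute:
def placeholders(values):
    return ", ".join("?" * len(values))

l = ["Comp-EN Routing", "Comp-COLLABORATION"]
l2 = ["PEO", "TEP"]
l3 = ["TCV"]

query = """SELECT SOURCE_ORDER_NUMBER
FROM DL_.BV_DETAIL
WHERE (LEVEL_1 IN ({}) AND LEVEL_2 IN ({})) OR LEVEL_3 IN ({})""".format(
    placeholders(l), placeholders(l2), placeholders(l3))

crsr.execute(query, l + l2 + l3)
rows = crsr.fetchall()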
I want to add something regarding SELECT statements and the executemany method: to retrieve all the records returned by your query, you need to keep calling .nextset() followed by .fetchall() until .nextset() returns False. The first .fetchall() gives you only the first result set (for the first list of parameters specified).
...
with teradatasql.connect(connectionstring) as conn:
    with conn.cursor() as cur:
        cur.executemany("SELECT COL1 FROM THEDATABASE.THETABLE WHERE COL1 = ?;", [['A'], ['B']])
        result = cur.fetchall()  # will bring you only rows matching 'A'
        if cur.nextset():
            result2 = cur.fetchall()  # results for 'B'
...

Using the python MySQLDB SScursor with nested queries

The typical MySQLdb library query can use a lot of memory and perform poorly in Python when a large result set is generated. For example:
cursor.execute("SELECT id, name FROM `table`")
for i in xrange(cursor.rowcount):
    id, name = cursor.fetchone()
    print id, name
There is an optional cursor class that will fetch just one row at a time, really speeding up the script and cutting its memory footprint a lot.
import MySQLdb
import MySQLdb.cursors
conn = MySQLdb.connect(user="user", passwd="password", db="dbname",
                       cursorclass=MySQLdb.cursors.SSCursor)
cur = conn.cursor()
cur.execute("SELECT id, name FROM users")
row = cur.fetchone()
while row is not None:
    doSomething()
    row = cur.fetchone()
cur.close()
conn.close()
But I can't find anything about using SSCursor with nested queries. If this is the definition of doSomething():
def doSomething():
    cur2 = conn.cursor()
    cur2.execute('select id, x, y from table2')
    rows = cur2.fetchall()
    for row in rows:
        doSomethingElse(row)
    cur2.close()
then the script throws the following error:
_mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now")
It sounds as if SSCursor is not compatible with nested queries. Is that true? If so, that's too bad, because the main loop seems to run too slowly with the standard cursor.
This problem is discussed a bit in the MySQLdb User's Guide, under the heading of the threadsafety attribute (emphasis mine):
The MySQL protocol can not handle multiple threads using the same
connection at once. Some earlier versions of MySQLdb utilized locking
to achieve a threadsafety of 2. While this is not terribly hard to
accomplish using the standard Cursor class (which uses
mysql_store_result()), it is complicated by SSCursor (which uses
mysql_use_result()); with the latter you must ensure all the rows have
been read before another query can be executed.
The documentation for the MySQL C API function mysql_use_result() gives more information about your error message:
When using mysql_use_result(), you must execute mysql_fetch_row()
until a NULL value is returned, otherwise, the unfetched rows are
returned as part of the result set for your next query. The C API
gives the error Commands out of sync; you can't run this command now
if you forget to do this!
In other words, you must completely fetch the result set from any unbuffered cursor (i.e., one that uses mysql_use_result() instead of mysql_store_result() - with MySQLdb, that means SSCursor and SSDictCursor) before you can execute another statement over the same connection.
In your situation, the most direct solution would be to open a second connection to use while iterating over the result set of the unbuffered query. (It wouldn't work to simply get a buffered cursor from the same connection; you'd still have to advance past the unbuffered result set before using the buffered cursor.)
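A minimal sketch of that two-connection approach (the connection arguments are placeholders, and the WHERE clause on the inner query is only illustrative; the point is that the nested queries run on a separate connection from the unbuffered outer cursor):
import MySQLdb
import MySQLdb.cursors

# Connection 1: unbuffered (server-side) cursor for the big outer query
conn_outer = MySQLdb.connect(user="user", passwd="password", db="dbname",
                             cursorclass=MySQLdb.cursors.SSCursor)
# Connection 2: ordinary buffered cursors for the small nested queries
conn_inner = MySQLdb.connect(user="user", passwd="password", db="dbname")

outer = conn_outer.cursor()
outer.execute("SELECT id, name FROM users")
for user_id, name in outer:
    inner = conn_inner.cursor()
    inner.execute("SELECT id, x, y FROM table2 WHERE id = %s", (user_id,))
    for row in inner.fetchall():
        doSomethingElse(row)
    inner.close()
outer.close()

conn_outer.close()
conn_inner.close()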
If your workflow is something like "loop through a big result set, executing N little queries for each row," consider looking into MySQL's stored procedures as an alternative to nesting cursors from different connections. You can still use MySQLdb to call the procedure and get the results, though you'll definitely want to read the documentation of MySQLdb's callproc() method since it doesn't conform to Python's database API specs when retrieving procedure outputs.
A second alternative is to stick to buffered cursors, but split up your query into batches. That's what I ended up doing for a project last year where I needed to loop through a set of millions of rows, parse some of the data with an in-house module, and perform some INSERT and UPDATE queries after processing each row. The general idea looks something like this:
QUERY = r"SELECT id, name FROM `table` WHERE id BETWEEN %s and %s;"
BATCH_SIZE = 5000
i = 0
while True:
    cursor.execute(QUERY, (i + 1, i + BATCH_SIZE))
    result = cursor.fetchall()

    # If there's no possibility of a gap as large as BATCH_SIZE in your table ids,
    # you can test to break out of the loop like this (otherwise, adjust accordingly):
    if not result:
        break

    for row in result:
        doSomething()

    i += BATCH_SIZE
One other thing I would note about your example code is that you can iterate directly over a cursor in MySQLdb instead of calling fetchone() explicitly over xrange(cursor.rowcount). This is especially important when using an unbuffered cursor, because the rowcount attribute is undefined and will give a very unexpected result (see: Python MysqlDB using cursor.rowcount with SSDictCursor returning wrong count).
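For example, the loop from the question can be written without touching rowcount at all; the behaviour is the same, just in the idiomatic form:
cur.execute("SELECT id, name FROM users")
for id, name in cur:
    print id, name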

python postgres can I fetchall() 1 million rows?

I am using the psycopg2 module in Python to read from a Postgres database, and I need to do some operation on all rows in a column of a table that has more than 1 million rows.
I would like to know: would cur.fetchall() fail or cause my server to go down? (My RAM might not be big enough to hold all that data.)
q="SELECT names from myTable;"
cur.execute(q)
rows=cur.fetchall()
for row in rows:
doSomething(row)
What is the smarter way to do this?
The solution Burhan pointed out reduces the memory usage for large datasets by only fetching single rows:
row = cursor.fetchone()
However, I noticed a significant slowdown when fetching rows one by one. I access an external database over an internet connection, which might be a reason for it.
Having a server-side cursor and fetching rows in batches proved to be the most performant solution. You can change the SQL statements (as in alecxe's answer), but there is also a pure Python approach using a feature provided by psycopg2:
cursor = conn.cursor('name_of_the_new_server_side_cursor')
cursor.execute(""" SELECT * FROM table LIMIT 1000000 """)
while True:
    rows = cursor.fetchmany(5000)
    if not rows:
        break
    for row in rows:
        # do something with row
        pass
You can find more about server-side cursors in the psycopg2 wiki.
Consider using a server-side cursor:
When a database query is executed, the Psycopg cursor usually fetches
all the records returned by the backend, transferring them to the
client process. If the query returned a huge amount of data, a
proportionally large amount of memory will be allocated by the client.
If the dataset is too large to be practically handled on the client
side, it is possible to create a server side cursor. Using this kind
of cursor it is possible to transfer to the client only a controlled
amount of data, so that a large dataset can be examined without
keeping it entirely in memory.
Here's an example:
cursor.execute("DECLARE super_cursor BINARY CURSOR FOR SELECT names FROM myTable")
while True:
    cursor.execute("FETCH 1000 FROM super_cursor")
    rows = cursor.fetchall()

    if not rows:
        break

    for row in rows:
        doSomething(row)
fetchall() fetches up to the arraysize limit, so to prevent a massive hit on your database you can either fetch rows in manageable batches or simply step through the cursor till it's exhausted:
row = cur.fetchone()
while row:
    # do something with row
    row = cur.fetchone()
Here is the code to use for a simple server-side cursor with the speed of fetchmany.
The principle is to use a named cursor in psycopg2 and give it a good itersize so that it loads many rows at once, like fetchmany would, while keeping the single for rec in cursor loop that does an implicit fetchone().
With this code I run queries over 150 million rows from a multi-billion-row table within 1 hour and about 200 MB of RAM.
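The answer doesn't include the code itself, so here is a minimal sketch of the pattern it describes (the connection string, table, and process() are placeholders; itersize controls how many rows psycopg2 pulls from the server per network round trip):
import psycopg2

conn = psycopg2.connect(connection_string)             # placeholder connection string
with conn.cursor(name='my_named_cursor') as cursor:    # giving it a name makes it server-side
    cursor.itersize = 20000                            # rows fetched per round trip
    cursor.execute("SELECT id, payload FROM big_table")
    for rec in cursor:                                 # batches are fetched transparently
        process(rec)                                   # placeholder per-row processing
conn.close()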
EDIT: using fetchmany (along with fetchone() and fetchall()), even with a row limit (arraysize), will still send the entire result set, keeping it client-side (stored in the underlying C library, I think libpq) for any additional fetchmany() calls, etc. Without using a named cursor (which would require an open transaction), you have to resort to using LIMIT in the SQL with an ORDER BY, then analyzing the results and augmenting the next query with WHERE (ordered_val = %(last_seen_val)s AND primary_key > %(last_seen_pk)s) OR ordered_val > %(last_seen_val)s.
This is misleading for the library to say the least, and there should be a blurb in the documentation about this. I don't know why it's not there.
Not sure a named cursor is a good fit without having a need to scroll forward/backward interactively? I could be wrong here.
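A minimal sketch of the LIMIT / ORDER BY pagination described in the EDIT above (sometimes called keyset pagination); the table, columns, and batch size are placeholders, and doSomething() is the processing from the question:
BATCH = 5000
last_val, last_pk = None, None
while True:
    if last_pk is None:
        cur.execute(
            "SELECT ordered_val, pk, payload FROM big_table "
            "ORDER BY ordered_val, pk LIMIT %s", (BATCH,))
    else:
        cur.execute(
            "SELECT ordered_val, pk, payload FROM big_table "
            "WHERE (ordered_val = %(val)s AND pk > %(pk)s) OR ordered_val > %(val)s "
            "ORDER BY ordered_val, pk LIMIT %(batch)s",
            {'val': last_val, 'pk': last_pk, 'batch': BATCH})
    rows = cur.fetchall()
    if not rows:
        break
    for ordered_val, pk, payload in rows:
        doSomething(payload)
    last_val, last_pk = rows[-1][0], rows[-1][1]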
The fetchmany loop is tedious but I think it's the best solution here. To make life easier, you can use the following:
from functools import partial
from itertools import chain

# chain.from_iterable is available since Python 2.6
from_iterable = chain.from_iterable

# util function
def run_and_iterate(curs, sql, parms=None, chunksize=1000):
    if parms is None:
        curs.execute(sql)
    else:
        curs.execute(sql, parms)
    chunks_until_empty = iter(partial(curs.fetchmany, chunksize), [])
    return from_iterable(chunks_until_empty)

# example scenario
for row in run_and_iterate(cur, 'select * from waffles_table where num_waffles > %s', (10,)):
    print 'lots of waffles: %s' % (row,)
As I was reading comments and answers I thought I should clarify something about fetchone and Server-side cursors for future readers.
With normal (client-side) cursors, Psycopg fetches all the records returned by the backend and transfers them to the client process. The whole result set is buffered in the client's memory. This happens as soon as you execute a query like curs.execute('SELECT * FROM ...').
This question also confirms that.
All the fetch* methods are there for accessing this stored data.
Q: So how can fetchone help us memory-wise?
A: It fetches only one record from the stored data and creates a single Python object to hand to your Python code, while fetchall fetches and creates n Python objects from this data and hands them all to you in one chunk.
So if your table has 1,000,000 records, this is what's going on in memory:
curs.execute --> whole 1,000,000 result set + fetchone --> 1 Python object
curs.execute --> whole 1,000,000 result set + fetchall --> 1,000,000 Python objects
Of course fetchone helps, but we still have the whole result set in memory. This is where server-side cursors come into play:
PostgreSQL also has its own concept of cursor (sometimes also called
portal). When a database cursor is created, the query is not
necessarily completely processed: the server might be able to produce
results only as they are needed. Only the results requested are
transmitted to the client: if the query result is very large but the
client only needs the first few records it is possible to transmit
only them.
...
their interface is the same, but behind the scene they
send commands to control the state of the cursor on the server (for
instance when fetching new records or when moving using scroll()).
So you won't get the whole result set in one chunk.
The drawback:
The downside is that the server needs to keep track of the partially
processed results, so it uses more memory and resources on the server.

Python db-api: fetchone vs fetchmany vs fetchall

I just had a discussion today with some coworkers about python's db-api fetchone vs fetchmany vs fetchall.
I'm sure the use case for each of these is dependent on the implementation of the db-api that I'm using, but in general what are the use cases for fetchone vs fetchmany vs fetchall?
In other words, are the following equivalent? Or is one of them preferred over the others? And if so, in which situations?
cursor.execute("SELECT id, name FROM `table`")
for i in xrange(cursor.rowcount):
id, name = cursor.fetchone()
print id, name
cursor.execute("SELECT id, name FROM `table`")
result = cursor.fetchmany()
while result:
for id, name in result:
print id, name
result = cursor.fetchmany()
cursor.execute("SELECT id, name FROM `table`")
for id, name in cursor.fetchall():
print id, name
As per the official psycopg2 documentation:
fetchone()
Fetch the next row of a query result set, returning a single tuple, or None when no more data is available:
>>> cur.execute("SELECT * FROM test WHERE id = %s", (3,))
>>> cur.fetchone()
(3, 42, 'bar')
A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.
fetchmany([size=cursor.arraysize])
Fetch the next set of rows of a query result, returning a list of tuples. An empty list is returned when no more rows are available.
The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor’s arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned:
>>> cur.execute("SELECT * FROM test;")
>>> cur.fetchmany(2)
[(1, 100, "abc'def"), (2, None, 'dada')]
>>> cur.fetchmany(2)
[(3, 42, 'bar')]
>>> cur.fetchmany(2)
[]
A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.
Note there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the arraysize attribute. If the size parameter is used, then it is best for it to retain the same value from one fetchmany() call to the next.
fetchall()
Fetch all (remaining) rows of a query result, returning them as a list of tuples. An empty list is returned if there is no more record to fetch.
>>> cur.execute("SELECT * FROM test;")
>>> cur.fetchall()
[(1, 100, "abc'def"), (2, None, 'dada'), (3, 42, 'bar')]
A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.
I think it indeed depends on the implementation, but you can get an idea of the differences by looking into the MySQLdb sources. Depending on the options, MySQLdb's fetch* methods keep the current set of rows in memory or on the server side, so fetchmany vs fetchone gives some flexibility over what to keep in (Python's) memory and what to keep on the database server side.
PEP 249 does not give much detail, so I guess this is to optimize things depending on the database while exact semantics are implementation-defined.
These are implementation-specific.
fetchall
Will get all the results of the query at once. This works better when the result set is small; if the result set is large, fetchall may fail.
Will use the most memory.
Can cause issues if the query is executed over a network.
fetchmany
fetchmany will get only the required number of results. You can yield the results and process them incrementally. A simple snippet of a fetchmany-based loop:
while True:
    results = cursor.fetchmany(arraysize)
    if not results:
        break
    for result in results:
        yield result
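Since yield only works inside a function, the snippet above is presumably meant to live in a generator. A minimal sketch of how it might be wrapped and used (cursor and arraysize are whatever you already have in scope):
def iter_rows(cursor, arraysize=1000):
    # yield rows from the cursor in batches of arraysize
    while True:
        results = cursor.fetchmany(arraysize)
        if not results:
            break
        for result in results:
            yield result

cursor.execute("SELECT id, name FROM `table`")
for id, name in iter_rows(cursor):
    print id, name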
