What is the most efficient way to loop through the cursor object in Pymongo?
Currently, this is what I'm doing:
list(my_db.my_collection.find())
This converts the cursor to a list object so that I can iterate over each element. It works fine when the find() query returns a small amount of data. However, when I scale the DB to return 10 million documents, the cursor-to-list conversion takes forever. Instead of converting the DB result (cursor) to a list, I tried converting the cursor to a DataFrame as below:
pd.DataFrame(my_db.my_collection.find())
which didn't give me any performance improvement.
What is the most efficient way to loop through a cursor object in Python?
I haven't used pymongo to date.
But one thing I can say for sure: if you're fetching a huge amount of data by doing
list(my_db.my_collection.find())
then you should use a generator instead.
Using list here increases memory usage significantly and can raise a MemoryError once the data no longer fits in the available memory.
def get_data():
    for doc in my_db.my_collection.find():
        yield doc
Approaches like this avoid holding the whole result set in memory at once.
The cursor object pymongo gives you already loads documents lazily; there is no need to do anything else.
for doc in my_db.my_collection.find():
    # process doc
    pass
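If the documents are large you can additionally tune how many are fetched per network round trip; a minimal sketch using the cursor's batch_size() (the value 1000 is just an example):

for doc in my_db.my_collection.find().batch_size(1000):
    # only the current batch of documents is held client-side
    pass  # process doc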
The method find() returns a Cursor, which you can iterate over directly:
for match in my_db.my_collection.find():
    # do something
    pass
Related
I currently have a for loop which is finding and storing combinations in a list. The number of possible combinations is very large and I need to be able to access the combos.
Can I use an empty relational DB like SQLite to store my list on disk instead of using list = []?
Essentially, what I am asking is whether there is a DB equivalent of list = [] that I can use to store the combinations generated by my script.
Edit:
SQLite is not a must. Any database will work if it can accomplish my task.
Here is the exact function that is causing me so much trouble. Maybe there is a better solution in general.
Idea - Could I insert the list into the database on each loop and then empty the list? Basically, create a list on each loop, send that list to PostgreSQL, and then empty the list in Python to keep the RAM usage down?
from itertools import combinations

def permute(set1, set2):
    set1_combos = list(combinations(set1, 2))
    set2_combos = list(combinations(set2, 8))
    full_sets = []
    for i in set1_combos:
        for j in set2_combos:
            full_sets.append(i + j)
    return full_sets
Ok, a few ideas
My first thought was: why do you explode the combinations objects into lists at all? But of course, with two nested for loops, the iterator in the inner loop would be consumed at the first iteration of the outer loop if it were not converted to a list.
However, you don't need to explode both objects: you can explode just the smaller one. For instance, if both sets have 50 elements, the combinations of 2 elements number 1225, with a memory size (if the items are integers) of about 120 bytes each, i.e. roughly 147 KB, while the combinations of 8 elements number 5.36e+08, with a memory size of about 336 bytes each, i.e. roughly 180 GB. So the first thing is: keep the larger combo set as a combinations object and iterate over it in the outer loop (see the sketch below). By the way, this will also be noticeably faster.
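To make that concrete, here is a rough sketch of the reworked function (permute_lazy is a made-up name): only the smaller 2-element combo set is materialised, while the huge 8-element combo set stays a lazy combinations object consumed in the outer loop:

from itertools import combinations

def permute_lazy(set1, set2):
    # small enough to keep in RAM and reuse on every outer iteration
    set1_combos = list(combinations(set1, 2))
    # the huge combo set is never materialised; it is consumed lazily
    for big_combo in combinations(set2, 8):
        for small_combo in set1_combos:
            yield small_combo + big_combo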
Now the database part. I assume a relational DBMS, be it SQLite or anything.
You want to create a table with a single column defined. Each row of your table will contain one final combination. Instead of appending each combination to a list, you will insert it into the table.
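A minimal sketch of that idea using the standard-library sqlite3 module; the file name, the table name, storing each tuple as text via repr(), and the permute_lazy generator from the sketch above are all assumptions, not your actual names:

import sqlite3

conn = sqlite3.connect('combos.db')
conn.execute('CREATE TABLE IF NOT EXISTS combos (combo TEXT)')

for combo in permute_lazy(set1, set2):  # permute_lazy: the generator sketched above
    conn.execute('INSERT INTO combos (combo) VALUES (?)', (repr(combo),))

conn.commit()
conn.close()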
Now the question is, how do you need to access the data you created? Do you just need to iterate over the final combos sequentially, or do you need to query them, for instance finding all the combos which contain one specific value?
In the latter case, you'll want to define your column as the Primary Key so your queries will be efficient; otherwise, you will save space on disk by using an auto-incrementing integer as the PK (SQLite will create it for you if you don't explicitly define a PK, as will a few other DBMSs).
One final note: the insert phase may be painfully slow if you don't take some specific measures; check this very interesting SO post for details. In short, with a few optimizations they were able to go from 85 to over 96,000 inserts per second.
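For illustration, a hedged sketch of the kind of measures that post describes, applied to SQLite: batch the inserts with executemany() inside a single transaction, with PRAGMA settings that trade durability for speed. File, table, and generator names are the same assumptions as above:

import sqlite3
from itertools import islice

conn = sqlite3.connect('combos.db')
conn.execute('PRAGMA synchronous = OFF')      # faster, but unsafe on power loss
conn.execute('PRAGMA journal_mode = MEMORY')
cur = conn.cursor()

def batched(iterable, size=10000):
    # yield lists of up to `size` items from any iterable
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

for batch in batched(permute_lazy(set1, set2)):
    cur.executemany('INSERT INTO combos (combo) VALUES (?)',
                    [(repr(c),) for c in batch])

conn.commit()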
EDIT: iterating over the saved data
Once we have the data in the DB, iterating over them could be as simple as:
mycursor.execute('SELECT * FROM <table> WHERE <conditions>')
for combo in mycursor.fetchall():
    print(combo)  # or do what you need
But if your conditions don't filter away most of the rows, you will hit the same memory issue we started with. A first step could be using fetchmany() or even fetchone() instead of fetchall(), but you may still have a problem with the size of the query result set.
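For example, the fetchmany() variant could look roughly like this, pulling rows from the open cursor in fixed-size batches (1000 is just an example):

mycursor.execute('SELECT * FROM <table> WHERE <conditions>')
while True:
    rows = mycursor.fetchmany(1000)
    if not rows:
        break
    for combo in rows:
        print(combo)  # or do what you need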
So you will probably need to read from the DB a chunk of data at a time, exploiting the LIMIT and OFFSET parameters in your SELECT. The final result may be something like:
chunk_size = 1000  # or whatever number fits your case
offset = 0
while True:
    mycursor.execute(f'SELECT * FROM <table> WHERE <conditions> '
                     f'ORDER BY <primarykey> LIMIT {chunk_size} OFFSET {offset}')
    rows = mycursor.fetchall()
    if not rows:
        break
    for combo in rows:
        print(combo)  # or do what you need
    offset += chunk_size
Note that you will usually need the ORDER BY clause to ensure rows are returned as you expect them, and not in a random manner.
I don't believe SQLite has a built-in array data type. Other DBMSs, such as PostgreSQL, do.
For SQLite, a good recommendation by another user on this site for storing an array can be found here: How to store array in one column in Sqlite3?
Another solution can be found here: https://sqlite.org/forum/info/99a33767e8a07e59
In either case, yes it is possible to have a DBMS like SQLite store an array (list) type. However, it may require a little setup depending on the DBMS.
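For instance, one common workaround (an illustration, not necessarily the linked solutions) is to serialise the Python list to JSON text on insert and decode it when reading; the table and column names here are made up:

import json
import sqlite3

conn = sqlite3.connect('combos.db')
conn.execute('CREATE TABLE IF NOT EXISTS combos (combo TEXT)')
conn.execute('INSERT INTO combos (combo) VALUES (?)', (json.dumps([1, 2, 3]),))
conn.commit()

row = conn.execute('SELECT combo FROM combos').fetchone()
restored = json.loads(row[0])  # back to a Python list: [1, 2, 3]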
Edit: If you're having memory issues, have you thought about storing your data as a string and accessing the portions of the string you need when you need it?
I am trying to get the results of a SQLAlchemy query. I know that if I loop over the query I can put the results in a list (like below), but this seems inefficient for a large set of results and looks ugly when the result is a single number (as below). Is there a more direct and/or efficient way to return query results?
mylist = []
for item in session.query(func.max(mytable.id)):
    mylist.append(item)
Looping through the result, as you do, is correct. You can also use all() to get the list of sequences (rows). It may be more efficient not to store the data in a list, to fetch smaller result sets, and/or to process each row immediately. You could also use a server-side cursor if your DBMS supports it; see the sketch below.
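For example, a rough sketch of the batched approach with the ORM's yield_per(), which fetches rows in chunks; whether rows are truly streamed server-side depends on the backend and driver:

for row in session.query(mytable).yield_per(1000):
    handle(row)  # handle() is a placeholder for your own per-row processing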
When only one row with one field is fetched, you can use first() and take the first element of the returned sequence. Code-wise, this is probably the most efficient:
maxid_mytable = session.query(func.max(mytable.id)).first()[0]
This is the equivalent of your original loop:
mylist = session.query(func.max(mytable.id)).all()
I am using pymongo to insert and retrieve data from MongoDB. These two operations may be performed simultaneously. The question is when I do rows = db.<collection>.find() in pymongo, every time rows.count() returns a different response (since insertion of data is also being done at the same time). Is there some way I can limit MongoDB to return only the rows that were present when I executed the find() statement? I tried adding snapshot=True to find() but the problem persists.
db.<collection>.find().count() will make an additional (runCommand count) call to MongoDB anyway.
How about simply taking the length of your cursor, like this:
rows = db.<collection>.find()
print(len(list(rows)))
Note that you can't just use len(rows), since a pymongo Cursor does not support len().
Hope that helps.
I am working on a Trac plugin...
To retrieve my data I create a cursor object and get the result table like this:
db = self.env.get_db_cnx()
cursor = db.cursor()
cursor.execute("SELECT...")
Now the result is being used in 3 different functions. My problem is that the cursor is emptied while I loop through it the first time (as described here: http://packages.python.org/psycopg2/cursor.html).
I then tried to copy the cursor object, but this failed too. The copy(cursor) function seems to have problems with a big dataset, and deepcopy(cursor) fails anyway (according to this bug: http://bugs.python.org/issue1515).
How can I solve this issue?
Storing the values from any finite iterable is simple:
results = list(cursor)
Iterate over the iterable and store the results in a list. This list can be iterated over as many times as necessary.
You don't need a copy of the cursor, just a copy of the results of the query.
For this specific case, you should do what 9000 suggests in his comment -- use the cursor's built-in functionality to get the results as a list, which should be as fast as, or faster than, manually calling list().
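In other words, something as simple as the following, with the results list then passed to, or shared by, the three functions:

cursor.execute("SELECT...")
results = cursor.fetchall()  # a plain list of rows; safe to iterate any number of times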
If you want to avoid looping through the data an extra time you could try wrapping it in a generator:
def lazy_execute(sql, cursor=cursor):
    results = []
    cursor.execute(sql)

    def fetch():
        if results:
            # subsequent calls replay the cached rows
            for r in results:
                yield r
        else:
            # the first call reads from the cursor and caches each row
            for r in cursor:
                results.append(r)
                yield r
    return fetch
This essentially creates a list as you need it, but lets you call the same function everywhere, safely. You would then use this like so:
results = lazy_execute(my_sql)
for r in results():
    pass  # do something with r
This is almost certainly over-engineered premature optimization, though it does have the advantage that the same name means the same thing in every case, as opposed to generating a new list and then having two different names for the same data.
If I were going to argue for using this, I would use that same-names argument, unless the data set were pretty huge; but if it's huge enough to matter, there's a good chance you don't want to store it all in memory anyway.
Also it's completely untested.
I am currently selecting a large list of rows from a database using pyodbc. The result is then copied to a large list, and then I am trying to iterate over the list. Before I abandon Python and try to create this in C#, I wanted to know whether there was something I was doing wrong.
clientItemsCursor.execute("Select ids from largetable where year =?", year)
allIDRows = clientItemsCursor.fetchall()  # takes maybe 8 seconds
for clientItemRow in allIDRows:
    aID = str(clientItemRow[0])
    # Do something with str -- removed because I was trying to determine what was slow
    count = count + 1
Some more information:
The for loop is currently running at about 5 loops per second, and that seems insanely slow to me.
The total rows selected is ~489,000.
The machine it's running on has lots of RAM and CPU. It seems to use only one or two cores, and RAM usage is 1.72 GB of 4 GB.
Can anyone tell me what's wrong? Do scripts just run this slowly?
Thanks
This should not be slow with Python native lists - but maybe the ODBC driver is returning a "lazy" object that tries to be smart but just gets slow. Try just doing
allIDRows = list(clientItemsCursor.fetchall())
in your code and post further benchmarks.
(Python lists can get slow if you start inserting things in the middle, but just iterating over a large list should be fast.)
It's probably slow because you load all the results into memory first and then iterate over the list. Try iterating over the cursor instead.
And no, scripts shouldn't be that slow.
clientItemsCursor.execute("Select ids from largetable where year =?", year)
for clientItemrow in clientItemsCursor:
    aID = str(clientItemrow[0])
    count = count + 1
More investigation is needed here... consider the following script:
bigList = range(500000)
doSomething = ""
count = 0
arrayList = [[x] for x in bigList]  # takes a few seconds
for x in arrayList:
    doSomething += str(x[0])
    count += 1
This is pretty much the same as your script, minus the database stuff, and takes a few seconds to run on my not-terribly-fast machine.
When you connect to your database directly (I mean, at an SQL prompt), how many seconds does this query take?
When query ends, you get a message like this:
NNNNN rows in set (0.01 sec)
So, if that time is already large and your query is slow even when run natively, you may have to create an index on that table.
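For example, a hedged sketch of creating such an index from Python; the index name is made up, and the exact DDL depends on your DBMS:

clientItemsCursor.execute("CREATE INDEX idx_largetable_year ON largetable (year)")
clientItemsCursor.commit()  # pyodbc's Cursor.commit() commits the underlying connection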
This is slow because you are
Getting all the results
Allocating memory and assigning the values to that memory to create the list allIDRows
Iterating over that list and counting.
If execute gives you back a cursor, then use the cursor to its advantage and start counting as you get rows back, saving the time and memory spent building the list.
clientItemsCursor.execute("Select ids from largetable where year =?", year)
for clientItemrow in clientItemsCursor:
    count += 1
Other hints:
Create an index on year.
Use select count(*) from ... to get the count for the year; this will probably be optimised by the DB (see the sketch after this list).
Remove the aID line if it's not needed; it converts the first item of the row to a string even though it's not used.
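A sketch of that count(*) hint, reusing the cursor and parameter style from the question:

clientItemsCursor.execute("Select count(*) from largetable where year =?", year)
count = clientItemsCursor.fetchone()[0]  # the database does the counting, not Python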