I would like to retrieve a random set of X entries from my postgres database using sqlalchemy. My first approach was this:
random_set_of_Xrows = models.Table.query.filter(something).order_by(func.random()).limit(X).all()
Since my table is quite big, this command takes about 1 second, and I was wondering how to optimise it. I guess the order_by function requires looking at all rows, so I figured using offset instead might make it faster. However, I can't quite see how to avoid the row count entirely.
Here is an approach using offset:
rowCount = db.session.query(func.count(models.Table.id)).filter(something).scalar()
random_set_of_Xrows = models.Table.query.offset(func.floor(func.random()*rowCount)).limit(X).all()
which, however, is not faster, with most of the time spent getting rowCount.
Any ideas on how to make this faster?
cheers
carl
EDIT: As suggested below, I added a column with a random value to the table and used it to extract the rows like this:
random_set_of_Xrows = models.Table.query.filter(something).order_by(models.Table.random_value).limit(X).all()
I ignored the offset part, since it doesn't matter to me if two calls return the same results; I just need a random set of rows.
I've optimized this before by adding an indexed column r that gets a random value automatically when a row is created. Then, when you need a random set of rows, just SELECT * FROM table ORDER BY r LIMIT 10 OFFSET some_random_value. You can run a script that updates your schema and backfills this column for your existing rows. You'll add a slight performance hit to writes with this approach, but if this is functionality you need persistently, it should be a fair trade-off.
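A rough sketch of what that column could look like in SQLAlchemy terms (a plain declarative model for illustration; the column name matches the edit above, everything else is an assumption):

from sqlalchemy import Column, Float, Integer, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Table(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True)
    # ... other columns ...
    # indexed random value, filled in by the database on insert
    random_value = Column(Float, index=True, server_default=func.random())

# Backfill the column for existing rows once, e.g.:
#   UPDATE my_table SET random_value = random();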
I'm making a Python app with mongoengine, where I have a MongoDB database of n users and each user holds n daily records. I have a list of n new records per user that I want to add to my DB.
I want to check whether a record for a certain date already exists for a user before adding a new record to that user.
What I found in the docs is to iterate through every embedded document in the list to check for duplicate fields, but that's an O(n^2) algorithm and took 5 solid seconds for 300 records, which is too long. Below is an abbreviated version of the code.
There has to be a better way to query, right? I tried accessing something like user.records.date but that throws a not-found error.
import mongoengine

# snippet here is abbreviated and does not run
# zone of interest is in conditional_insert()

class EmbeddedRecord(mongoengine.EmbeddedDocument):
    date = mongoengine.DateField(required=True)
    # contents = ...

class User(mongoengine.Document):
    # meta{}
    # account details
    records = mongoengine.EmbeddedDocumentListField(EmbeddedRecord)

def conditional_insert(user, new_record):
    # the docs tell me to iterate through every record in the user
    # there has to be a better way
    for r in user.records:
        if str(new_record.date) == str(r.date):  # I had to compare as strings
            # because python kept converting the datetime objects to str
            return
    # if no record with a duplicate date is found, insert the new record
    save_record(user, new_record)

def save_record(user, record): pass

if __name__ == "__main__":
    lst_to_insert = []  # list of (user, record_to_insert)
    for obj in lst_to_insert:  # O(n)
        conditional_insert(obj[0], obj[1])  # O(n)
    # and I have n lst_to_insert, so in reality I'm currently at O(n^3)
Hi everyone (and future me, who will probably search for the same question 10 years later).
I optimized the code using the idea of a search tree: instead of putting all records in a single list on User, I broke it down by year and month.
class EmbeddedRecord(mongoengine.EmbeddedDocument):
    date = mongoengine.DateField(required=True)
    # contents = ...

class Month(mongoengine.EmbeddedDocument):
    daily_records = mongoengine.EmbeddedDocumentListField(EmbeddedRecord)

class Year(mongoengine.EmbeddedDocument):
    monthly_records = mongoengine.EmbeddedDocumentListField(Month)

class User(mongoengine.Document):
    # meta{}
    # account details
    yearly_records = mongoengine.EmbeddedDocumentListField(Year)
Because it's MongoDB, I can later partition by decades, heck even centuries, but by that point I don't think this code will still be relevant.
I then group the data to insert by month into separate pandas DataFrames and feed each DataFrame separately. The data flow thus looks like:
0) get the monthly df
1) loop through the years until we get the right one (let's say 10 steps, I don't think my program will live that long)
2) loop through the months until we get the right one (12 steps)
3) for each record in the df, loop through each daily record in the month to check for duplicates
The algorithm to insert with the duplicate check is still O(n^2), but since there are at most 31 records at the last step, the code is much faster (see the sketch below). I tested 2000 duplicate records and it ran in under a second (I didn't actually time it, but as long as it feels instant it doesn't matter much for my use case).
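For concreteness, steps 1-3 might look roughly like the sketch below. It assumes Year and Month also carry small integer fields (year_no / month_no) that the abbreviated schema above omits; the helper names are purely illustrative.

def find_month(user, year_no, month_no):
    # steps 1 and 2: walk the small embedded lists down to the right month
    for y in user.yearly_records:
        if y.year_no == year_no:
            for m in y.monthly_records:
                if m.month_no == month_no:
                    return m
    return None

def insert_if_new(month, new_record):
    # step 3: at most ~31 daily records, so a linear scan is cheap
    for r in month.daily_records:
        if str(r.date) == str(new_record.date):
            return
    month.daily_records.append(new_record)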
Mongo cannot conveniently offer you suitable indexes here, very sad.
You frequently iterate over user.records. If you can afford to allocate the memory for 300 users, just iterate once and throw them into a set, which offers O(1) constant-time lookup and RAM speed rather than network latency.
When you save a user, also make note of it with cache.add((user_id, str(new_record.date))).
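A minimal sketch of that set-based check, assuming an iterable users of already-loaded User documents and keying on (user.id, date string); everything here is illustrative rather than a drop-in replacement.

# build the cache once per batch: a single pass over the users you already loaded
cache = set()
for user in users:
    for r in user.records:
        cache.add((user.id, str(r.date)))

def conditional_insert(user, new_record):
    key = (user.id, str(new_record.date))
    if key in cache:        # O(1) membership test, no extra round trip to Mongo
        return
    save_record(user, new_record)
    cache.add(key)          # remember it so later duplicates are skipped too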
EDIT
If you can't afford the memory for all those (user_id, date) tuples, then sure, a relational database JOIN is fair; it's just an out-of-core merge of sorted records.
I have had good results with using sqlalchemy to hit local SQLite (memory- or file-backed), or heavier databases like Postgres or MariaDB. Bear in mind that relational databases offer lovely ACID guarantees, but you're paying for those guarantees, and in this application it doesn't sound like you need such properties.
Something as simple as /usr/bin/sort could do an out-of-core ordering operation that puts all of a user's current records right next to his historic records, letting you filter them appropriately. Sleepycat is not an RDBMS, but its B-tree does offer external sorting, sufficient for the problem at hand. (And yes, one can do transactions with Sleepycat, but again, this problem just needs some pretty pedestrian reporting.)
Bench it and see. Without profiling data for a specific workload, it's pretty hard to tell whether any extra complexity would be worth it. Identify the true memory or CPU bottleneck, and focus just on that.
You don't necessarily need ordering, as hashing would suffice, given enough core. Send those tuples to a redis cache, and make it redis's problem to store them somewhere.
I currently have a for loop which is finding and storing combinations in a list. The number of possible combinations is very large and I need to be able to access the combos.
Can I use an empty relational DB like SQLite to store my list on disk instead of using list = []?
Essentially what I am asking is whether there is a DB equivalent to list = [] that I can use to store the combinations generated via my script.
Edit:
SQLite is not a must. Any DB will work if it can accomplish my task.
Here is the exact function that is causing me so much trouble. Maybe there is a better solution in general.
Idea: could I insert the list into the database on each loop and then empty the list? Basically, create a list on each loop, send that list to PostgreSQL, and then empty the list in Python to keep the RAM usage down?
from itertools import combinations

def permute(set1, set2):
    set1_combos = list(combinations(set1, 2))
    set2_combos = list(combinations(set2, 8))
    full_sets = []
    for i in set1_combos:
        for j in set2_combos:
            full_sets.append(i + j)
    return full_sets
OK, a few ideas.
My first thought was: why do you explode the combinations objects into lists? But of course, since we have two nested for loops, the iterator in the inner loop would be consumed at the first iteration of the outer loop if it were not converted to a list.
However, you don't need to explode both objects: you can explode just the smaller one. For instance, if both our sets are made of 50 elements, the combinations of 2 elements number 1225, with a memory size (if the items are integers) of about 120 bytes each, i.e. 147 KB, while the combinations of 8 elements number about 5.36e+08, with a memory size of about 336 bytes each, i.e. 180 GB. So the first thing is: keep the larger combo set as a combinations object and iterate over it in the outer loop. By the way, this will also be noticeably faster.
Now the database part. I assume a relational DBMS, be it SQLite or anything else.
You want to create a table with a single column. Each row of your table will contain one final combination. Instead of appending each combination to a list, you will insert it into the table.
Now the question is: how do you need to access the data you created? Do you just need to iterate over the final combos sequentially, or do you need to query them, for instance finding all the combos which contain one specific value?
In the latter case, you'll want to define your column as the primary key, so your queries will be efficient; otherwise, you will save space on disk by using an auto-incrementing integer as the PK (SQLite will create it for you if you don't explicitly define a PK, as will a few other DBMSs).
One final note: the insert phase may be painfully slow if you don't take some specific measures: check this very interesting SO post for details. In short, with a few optimizations they were able to go from 85 to over 96,000 inserts per second.
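A minimal sketch along those lines, using the sqlite3 module: keep the larger combinations object lazy in the outer loop, explode only the smaller one, and insert each final combination into a single-column table instead of appending to a list. The function, table, and file names are illustrative.

import sqlite3
from itertools import combinations

def permute_to_db(set1, set2, db_path='combos.db'):
    conn = sqlite3.connect(db_path)
    conn.execute('CREATE TABLE IF NOT EXISTS combos (combo TEXT)')
    small = list(combinations(set1, 2))        # the cheap side, kept in RAM
    for j in combinations(set2, 8):            # the huge side, never materialized
        with conn:                             # one transaction per chunk of inserts
            conn.executemany(
                'INSERT INTO combos (combo) VALUES (?)',
                ((repr(i + j),) for i in small),
            )
    conn.close()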
EDIT: iterating over the saved data
Once we have the data in the DB, iterating over it could be as simple as:
mycursor.execute('SELECT * FROM <table> WHERE <conditions>')
for combo in mycursor.fetchall():
    print(combo)  # or do what you need
But if your conditions don't filter away most of the rows, you will run into the same memory issue we started with. A first step could be using fetchmany() or even fetchone() instead of fetchall(), but you may still have a problem with the size of the query result set.
So you will probably need to read a chunk of data at a time from the DB, exploiting the LIMIT and OFFSET clauses of your SELECT. The final result may be something like:
chunk_size = 1000  # or whatever number fits your case
offset = 0
while True:
    mycursor.execute(
        f'SELECT * FROM <table> WHERE <conditions> '
        f'ORDER BY <primarykey> LIMIT {chunk_size} OFFSET {offset}'
    )
    rows = mycursor.fetchall()
    if not rows:
        break
    for combo in rows:
        print(combo)  # or do what you need
    offset += chunk_size
Note that you will usually need the ORDER BY clause to ensure rows are returned in the order you expect, and not in an arbitrary one.
I don't believe SQLite has a built-in array data type. Other DBMSs, such as PostgreSQL, do.
For SQLite, a good recommendation by another user on this site for getting an array into SQLite can be found here: How to store array in one column in Sqlite3?
Another solution can be found here: https://sqlite.org/forum/info/99a33767e8a07e59
In either case, yes, it is possible to have a DBMS like SQLite store an array (list) type. However, it may require a little setup depending on the DBMS.
Edit: If you're having memory issues, have you thought about storing your data as a string and accessing the portions of the string you need when you need them?
I have generated a giant SQLite database and need to get some data out of it. I wrote a script to do so, and profiling led to the unfortunate conclusion that the write process would take approx. 3 days with the current setup. I wrote the script to be as simple as possible to make it as fast as possible.
I am wondering if you have any tricks to speed up the whole process. The database has a unique index, but the columns I am querying don't (because of duplicate rows in those columns).
Would it make sense to use any multiprocessing Python library here?
The script would be like this:
import sqlite3

def write_from_query(db_name, table_name, condition, content_column, out_file):
    '''
    Writes contents from a SQLite database column to an output file.

    Keyword arguments:
    db_name (str): Path of the .sqlite database file.
    table_name (str): Name of the target table in the SQLite file.
    condition (str): Condition for querying the SQLite database table.
    content_column (str): Name of the column that contains the content for the output file.
    out_file (str): Path of the output file that will be written.
    '''
    # Connecting to the database file
    conn = sqlite3.connect(db_name)
    c = conn.cursor()

    # Querying the database and writing the output file
    c.execute('SELECT {} FROM {} WHERE {}'.format(content_column, table_name, condition))
    with open(out_file, 'w') as outf:
        for row in c:
            outf.write(row[0])

    # Closing the connection to the database
    conn.close()

if __name__ == '__main__':
    write_from_query(
        db_name='my_db.sqlite',
        table_name='my_table',
        condition='variable1=1 AND variable2<=5 AND variable3="Zinc_Plus"',
        content_column='variable4',
        out_file='sqlite_out.txt',
    )
Link to this script on GitHub
Thanks for your help, I am looking forward to your suggestions!
EDIT:
more information about the database:
I assume that you are running the write_from_query function for a huge number of queries.
If so, the problem is the missing indices on your filter criteria.
This results in the following: for each query you execute, sqlite will loop through the whole 50GB of data and check whether your conditions hold true. That is VERY inefficient.
The easiest way out would be to slap indices on your columns.
An alternative would be to formulate fewer queries that each cover multiple of your cases, and then loop over that data again to split it into different files. How well this can be done, however, depends on how your data is structured.
I'm not sure about multiprocessing/threading; sqlite is not really made for concurrency, but I guess it could work out since you only read data...
Either you dump the content and filter in your own program, or you add indices to all columns you use in your conditions.
Adding indices to all the columns will take a long, long time.
But for many different queries there is no alternative.
No, multiprocessing will probably not help. An SSD might, or 64 GiB of RAM. But neither is needed with indices; queries are fast on normal disks too.
In conclusion, you created a database without creating indices for the columns you want to query. With 8 million rows this won't work.
Whilst the process of actually writing this data to a file will take a while, I would expect it to be more like minutes than days, e.g. at a 50 MB/s sequential write speed, 15 GB works out at around 5 minutes.
I suspect that the issue is with the queries / lack of indexes. I would suggest trying to build composite indexes based on the combinations of columns that you need to filter on. As you will see from the documentation here, you can actually add as many columns as you want to an index.
Just to make you aware: adding indexes will slow down inserts/updates to your database, as every time it now needs to find the appropriate place in the relevant indexes to add data as well as appending data to the end of the tables, but this is probably your only option to speed up the queries.
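As a hedged sketch, building such a composite index on the filter columns from the example call above could look like this (the index name is illustrative, and building it on a big database is a one-off cost):

import sqlite3

conn = sqlite3.connect('my_db.sqlite')
# composite index covering the three columns used in the WHERE clause
conn.execute(
    'CREATE INDEX IF NOT EXISTS idx_filter '
    'ON my_table (variable1, variable2, variable3)'
)
conn.commit()
conn.close()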
I will look at the unique indices! But meanwhile I stumbled upon something else... Sorry for writing my own answer to my question here, but I thought it is better for the organization...
I was thinking that the .fetchmany() command could also speed up the whole process, but I find the sqlite3 documentation on this a little bit brief... Would something like
with open(out_file, 'w') as outf:
    c.execute('SELECT * ...')
    results = c.fetchmany(10000)
    while results:
        for row in results:
            outf.write(row[0])
        results = c.fetchmany(10000)
make sense?
I'm writing a small program that queries results from a database (a single table). I'm using Python 3.3, sqlalchemy, and a PostgreSQL database.
result = db_session.query(Data).all()
progress = 0
for row in result:
    update_progress_bar(progress, len(result))
    do_something_with_data(row)
    progress += 1
The variable 'result' will contain a few thousand rows, and processing the data takes some time. This is why I introduced a simple progress bar to give an idea of how much time it will take.
The problem is that 30% of the total time is spent querying the database (the first line). So when I start the program I get a big delay before my progress bar starts moving. In addition, I don't need to keep all the results in memory; I can process them separately.
Is there any way to modify the above program to get rows one by one until all rows are received, without loading everything into memory? In addition, I want to monitor the progress of querying and processing the data.
You need to just loop over the query without calling .all(), and call .yield_per() to set a batch size:
for row in db_session.query(Data).yield_per(10):
    do_something_with_data(row)
.all() indeed turns the whole result set into a list first, causing a delay if the result set is large. Iterating over the query directly after setting .yield_per() instead fetches results as needed, provided the database API supports it.
If you want to know up front how many rows will be returned, call .count() first:
result = db_session.query(Data)
count = result.count()
progress = 0
for row in result.yield_per(10):
    update_progress_bar(progress, count)
    do_something_with_data(row)
    progress += 1
.count() asks the database to give us an item count first.
Your database could still be pre-caching the result rows, leading to a start-up delay even when using .yield_per(). In that case you'll need to use a windowed query to break your query up into blocks based on the range of values in one of the columns. Whether or not this will work depends on your exact table layout.
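A hedged sketch of such a windowed query, assuming Data has an indexed integer primary key Data.id; the window size is illustrative.

from sqlalchemy import func

window = 1000
max_id = db_session.query(func.max(Data.id)).scalar() or 0
for start in range(0, max_id + 1, window):
    # each block is a small, index-friendly range query
    block = (db_session.query(Data)
             .filter(Data.id >= start, Data.id < start + window)
             .order_by(Data.id))
    for row in block:
        do_something_with_data(row)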
I'm using Django and PostgreSQL, but I'm not absolutely tied to the Django ORM if there's a better way to do this with raw SQL or database specific operations.
I've got a model that needs sequential ordering. Lookup operations will generally retrieve the entire list in order. The most common operation on this data is to move a row to the bottom of a list, with a subset of the intervening items bubbling up to replace the previous item like this:
(operation on A, with subset B, C, E)
A -> B
B -> C
C -> E
D -> D
E -> A
Notice how D does not move.
In general, the subset of items will not be more than about 50 items, but the base list may grow to tens of thousands of entries.
The most obvious way of implementing this is with a simple integer order field. This seems suboptimal. It requires the compromise of making the position ordering column non-unique, where non-uniqueness is only required for the duration of a modification operation. To see this, imagine the minimal operation using A with subset B:
oldpos = B.pos
B.pos = A.pos
A.pos = oldpos
Even though you've stored the position, at the second line you've violated the uniqueness constraint. Additionally, this method makes atomicity problematic - your read operation has to happen before the write, during which time your records could change. Django's default transaction handling documentation doesn't address this, though I know it should be possible in the SQL using the "REPEATABLE READ" level of transaction locking.
I'm looking for alternate data structures that suit this use pattern more closely. I've looked at this question for ideas.
One proposal there is the Dewey decimal style solution, which makes insert operations occur numerically between existing values, so inserting A between B and C results in:
A=1 -> B=2
B=2 -> A=2.5
C=3 -> C=3
This solves the column uniqueness problem, but introduces the issue that the column must be a float of a specified number of decimals. Either I over-estimate and store way more data than I need, or the system becomes limited by whatever arbitrary decimal length I impose. Furthermore, I don't expect usage to be evenly distributed over the database - some keys are going to be moved far more often than others, making this solution hit the limit sooner. I could solve this problem by periodically renumbering the database, but it seems that a good data structure should avoid needing this.
Another structure I've considered is the linked list (and variants). This has the advantage of making modification straightforward, but I'm not certain of its properties with respect to SQL - ordering such a list in the SQL query seems like it would be painful, and extracting a non-sequential subset of the list has terrible retrieval properties.
Beyond this, there are B-Trees, various Binary Trees, and so on. What do you recommend for this data structure? Is there a standard data structure for this solution in SQL? Is the initial idea of going with sequential integers really going to have scaling issues, or am I seeing problems where there are none?
Preferred solutions:
A linked list would be the usual way to achieve this. A query to return the items in order is trivial in Oracle, but I'm not sure how you would do it in PostgreSQL.
Another option would be to implement this using the ltree module for PostgreSQL.
Less graceful (and write-heavy) solution:
Start a transaction. "select for update" within scope for row-level locks. Move the target record to position 0, update the target's succeeding records by +1 where their position is higher than the target's original position (or vice versa), and then update the target to the new position - a single additional write over what would be needed without a unique constraint. Commit :D
Simple (yet still write-heavy) solution if you can wait for PostgreSQL 8.5 (an alpha is available) :)
Wrap it in a transaction, select for update in scope, and use a deferred constraint (PostgreSQL 8.5 has support for deferred unique constraints, like Oracle).
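A rough sketch of that variant, assuming a modern Django setup and an illustrative item table with a pos column (the DDL is PostgreSQL 9.0+ syntax; all names here are assumptions):

# Run once, e.g. in a migration:
#   ALTER TABLE item ADD CONSTRAINT item_pos_uniq
#       UNIQUE (pos) DEFERRABLE INITIALLY DEFERRED;
from django.db import connection, transaction

def reorder(pairs):  # pairs = [(item_id, new_pos), ...]
    with transaction.atomic():
        with connection.cursor() as cur:
            # row-level locks on everything we are about to move
            cur.execute('SELECT id FROM item WHERE id = ANY(%s) FOR UPDATE',
                        [[item_id for item_id, _ in pairs]])
            for item_id, new_pos in pairs:
                cur.execute('UPDATE item SET pos = %s WHERE id = %s',
                            [new_pos, item_id])
    # the deferred unique constraint is only checked here, at commit time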
A temp table and a transaction should maintain atomicity and the unique constraint on sort order. Restating the problem, you want to go from:
A 10  ->  B 10
B 25  ->  C 25
C 26  ->  E 26
E 34  ->  A 34
Where there can be any number of items in between each row. So, first you read in the records and create a list [['A',10],['B',25],['C',26],['E',34]]. Through some pythonic magic (sketched further below) you shift the identifiers around and insert them into a temp table:
create temporary table reorder (
    id varchar(20),      -- whatever
    sort_order integer,
    primary key (id)
);
Now for the update:
update XYZ
set sort_order = (select sort_order from reorder where reorder.id = XYZ.id)
where id in (select id from reorder);
I'm only assuming pgsql can handle that query. If it can, it will be atomic.
Optionally, create table REORDER as a permanent table and the transaction will ensure that attempts to reorder the same record twice will be serialized.
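For concreteness, the identifier shuffle mentioned above might look like this (values taken from the example rows; purely illustrative):

rows = [('A', 10), ('B', 25), ('C', 26), ('E', 34)]   # as read from the table, in order
ids = [r[0] for r in rows]
orders = [r[1] for r in rows]
# rotate the ids by one so each takes the next row's slot (the last wraps to the first)
reordered = list(zip(ids[1:] + ids[:1], orders))
# -> [('B', 10), ('C', 25), ('E', 26), ('A', 34)], ready to insert into reorder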
EDIT: There are some transaction issues. You might need to implement both of my ideas. If two processes both want to update item B (for example) there can be issues. So, assume all order values are even:
1. Begin transaction.
2. Increment all the orders being used by 1. This puts row-level write locks on all the rows you are going to update.
3. Select the data you just updated. If any sort_order fields are even, some other process has added a record that matches your criteria. You can either abort the transaction and restart, or just drop the record and finish the operation using only the records that were updated in step 2. The "right" thing to do depends on what you need this code to accomplish.
4. Fill your temporary reorder table as above, using the proper even sort_orders.
5. Update the main table as above.
6. Drop the temporary table.
7. Commit the transaction.
Step 2 ensures that if two lists overlap, only the first one will have access to the row in question until the transaction completes:
update XYZ set sort_order = sort_order + 1
where -- whatever your select criteria are
select * from XYZ
where -- same select criteria
order by sort_order
Alternatively, you can add a control field to the table to get the same effect, and then you don't need to play with the sort_order field. The benefit of using the sort_order field is that indexing by a BIT field or a LOCK_BY_USERID field that is usually null tends to have poor performance, since the index is meaningless 99% of the time. SQL engines don't like indexes that spend most of their time empty.
It seems to me that your real problem is the need to lock a table for the duration of a transaction. I don't immediately see a good way to solve this problem in a single operation, hence the need for locking.
So the question is whether you can do this in a "Django way" as opposed to using straight SQL. Searching "django lock table" turned up some interesting links, including this snippet; there are many others that implement similar behavior.
A straight SQL linked-list style solution can be found in this stack overflow post, it appeared logical and succinct to me, but again it's two operations.
I'm very curious to hear how this turns out and what your final solution is, be sure to keep us updated!
Why not use a simple character field of some length, like a max of 16 (or 255), initially?
Start by labeling things aaa through zzz (that should be 17576 entries). (You could also add in 0-9, the uppercase letters, and symbols as an optimization.)
As items are added, they can go to the end, up to the maximum you allow for the additional 'end' entries (zzza, zzzaa, zzzaaa, zzzaab, zzzaac, zzzaad, etc.)
This should be reasonably simple to program, and it's very similar to the Dewey Decimal system.
Yes, you will need to rebalance it occasionally, but that should be a simple operation. The simplest approach is two passes: pass 1 would be to set the new ordering tag to '0' (or any character earlier than the first character) followed by the new tag of the appropriate length, and pass 2 would be to remove the '0' from the front.
Obviously, you could do the same thing with floats and rebalance regularly; this is just a variation on that. The one advantage is that most databases will allow you to set a ridiculously large maximum size for the character field, large enough to make it very, very, very unlikely that you would run out of digits to do the ordering, and also to make it unlikely that you would ever have to modify the schema, while not wasting a lot of space.
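As a tiny illustrative check of the initial labelling (aaa through zzz) described above:

from itertools import product
from string import ascii_lowercase

labels = [''.join(t) for t in product(ascii_lowercase, repeat=3)]
assert len(labels) == 26 ** 3   # 17576 initial slots, matching the estimate above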
You can solve the renumbering issue by making the order column an integer that is always even. When you move the data, you change the order field to the new sort value + 1 and then do a quick update to convert all the odd order fields to even:
update your_table
set sort_order = sort_order & ~1   -- clear the low bit; Postgres syntax, Oracle would use bitand()
where sort_order <> sort_order & ~1;
Thus you can keep the uniqueness of sort_order as a constraint.
EDIT: Okay, looking at the question again, I've started a new answer.