SQLAlchemy mapped bulk update - make it safer and faster? - python

I'm using Postgres 9.2 and SQLAlchemy. Currently, this is my code to update the rankings of my Things in my database:
lock_things = session.query(Thing).\
    filter(Thing.group_id == 4).\
    with_for_update().all()

tups = RankThings(lock_things)  # return sorted tuple (<numeric>, <primary key Thing id>)

rank = 1
for prediction, id in tups:
    thing = session.query(Thing).\
        filter(Thing.group_id == 4).\
        filter(Thing.id == id).one()
    thing.rank = rank
    rank += 1

session.commit()
However, this seems slow. It's also something I want to be atomic, which is why I use the with_for_update() syntax.
I feel like there must be a way to "zip" up the query and do the update in one pass.
How can I make this faster and done all in one query?
EDIT: I think I need to create a temp table to join and make a fast update, see:
https://stackoverflow.com/a/20224370/712997
http://tapoueh.org/blog/2013/03/15-batch-update
Any ideas how to do this in SqlAlchemy?

Generally speaking, with such operations you aim for two things:
1. Do not execute a query inside a loop
2. Reduce the number of queries required by performing computations on the SQL side
Additionally, you might want to merge some of the queries you have, if possible.
Let's start with 2), because this is very specific and often not easily possible. Generally, the fastest operation here would be to write a single query that returns the rank. There are two options with this:
The first option is that the query is quick to run, so you just execute it whenever you need the ranking. This would be the very simple case, something like this:
SELECT
    thing.*,
    (POINTS_QUERY) AS score
FROM thing
ORDER BY score DESC
In this case, this will give you an ordered list of things by some artificial score (e.g. if you build some kind of competition). The POINTS_QUERY would be something that uses a specific thing in a subquery to determine its score, e.g. aggregate the points of all the tasks it has solved.
In SQLAlchemy, this would look like this:
from sqlalchemy import func, desc

score = session.query(func.sum(Task.points)).filter(Task.thing_id == Thing.id).correlate(Thing).label("score")
thing_ranking = session.query(Thing, score).order_by(desc("score"))
This is slightly more advanced usage of SQLAlchemy: we construct a subquery that returns a scalar value labelled score. With correlate we tell it that Thing will come from the outer query (this is important).
So that was the case where you run a single query that gives you a ranking (the ranks are determined by the index in the list and depend on your ranking strategy). If you can achieve this, it is the best case.
The other option is that the query itself is expensive and you want the values cached. This means you can either use the solution above and cache the values outside of the database (e.g. in a dict or using a caching library), or you compute them as above but update a database field (like Thing.rank). Again, the query from above gives us the ranking. Additionally, I assume the simplest kind of ranking: the index denotes the rank:
for rank, (thing, score) in enumerate(thing_ranking):
    thing.rank = rank
Notice how I derive the rank from the index using enumerate. Additionally, I take advantage of the fact that, since I just queried thing, I already have it in the session, so there is no need for an extra query. This might be your solution right here, but read on for some additional info.
Using the last idea from above, we can now tackle 1): get the query outside the loop. I noticed that you pass a list of things to a sorting function that only seems to return IDs. Why? If you can change it, make it return the things as a whole.
However, it might be that you cannot change this function, so let's consider what we do if we can't. We already have a list of all relevant things, and we get a sorted list of their IDs. So why not build a dict as a lookup from ID -> Thing?
things_dict = {thing.id: thing for thing in lock_things}
We can use this dict instead of querying inside the loop:
for prediction, id in tups:
    thing = things_dict[id]
However, it may be possible (for some reason I missed in your example) that not all IDs were returned previously. In that case (or in general) you can take advantage of a similar mapping SQLAlchemy keeps itself: You can ask it for a primary key and it will not query the database if it already has it:
for prediction, id in tups:
    thing = session.query(Thing).get(id)
So that way we have reduced the problem and only execute queries for objects we don't already have.
One last thing: What if we don't have most of the things? Then I didn't solve your problem, I just replaced the query. In that case, you will have to create a new query that fetches all the elements you need. In general this depends on the source of the IDs and how they are determined, but you could always go the least efficient way (which is still way faster than inside-loop queries): Using SQL's IN:
all_things = session.query(Thing).filter(Thing.group_id == 4).filter(Thing.id.in_([id for _, id in tups])).all()
This constructs a query that filters with the IN keyword. However, with a large list of things this is terribly inefficient, so if you are in this case it is most likely better to construct some more efficient way in SQL to determine whether an ID is one you want.
Summary
This was a long text, so to sum up:
Perform computations in SQL as much as possible, if you can write them efficiently there
Use SQLAlchemy's awesomeness to your advantage, e.g. create subqueries
Try to never execute queries inside a loop
Create some mappings for yourself (or use SQLAlchemy's own identity map to your advantage)
Do it the pythonic way: Keep it simple, keep it explicit.
One final thought: If your queries get really complex and you fear you will lose control over the queries executed by the ORM, drop it and use the Core instead. It is almost as awesome as the ORM and gives you huge amounts of control over the queries, since you build them yourself. With it you can construct almost any SQL query you can think of, and I am certain the batch updates you mentioned are also possible there (if you see that my queries above lead to many UPDATE statements, you might want to use the Core).
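For example, a Core-style batched update of the ranks might look roughly like the sketch below. This is only a sketch against the schema implied by your question: tups is assumed to be the sorted (prediction, id) list from your code, ranks start at 1 as in your loop, and the bind parameter names thing_id/new_rank are made up for the example.

from sqlalchemy import bindparam

# Sketch: push all rank updates through one executemany()-style call
# instead of one ORM UPDATE per row.
stmt = Thing.__table__.update().\
    where(Thing.__table__.c.id == bindparam("thing_id")).\
    values(rank=bindparam("new_rank"))

params = [{"thing_id": thing_id, "new_rank": rank}
          for rank, (prediction, thing_id) in enumerate(tups, start=1)]

session.execute(stmt, params)  # executed as a DBAPI executemany()
session.commit()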

Related

Which has optimal performance for generating a randomized list: `random.shuffle(ids)` or `.order_by("?")`?

I need to generate a randomized list of 50 items to send to the front-end for a landing page display. The landing page already loads much too slowly, so any optimization would be wonderful!
Given the pre-existing performance issues and the large size of this table, I'm wondering which implementation is better practice, or if the difference is negligible:
Option A:
unit_ids = list(units.values_list('id', flat=True).distinct())
random.shuffle(unit_ids)
unit_ids = unit_ids[:50]
Option B:
list(units.values_list('id', flat=True).order_by("?")[:50])
My concern is that, according to the Django docs, order_by('?') "may be expensive and slow":
https://docs.djangoproject.com/en/dev/ref/models/querysets/#django.db.models.query.QuerySet.order_by
We are using a MySQL db. I've tried searching for more info about implementation, but I'm not seeing anything more specific than what's in the docs. Help!
Option B should be faster in most cases, since a database engine is usually faster than code in Python.
In option A, you are retrieving some ids (by my guess, all of the ids) and then shuffling them in Python. According to you, the table is large, which makes it a bad idea to do this in Python. Also, you are only getting the ids, which means that if you need the actual data, you have to make another query.
With all that explained, you should still try both and see which one is faster, because they depend on different variables. Just time them both, see which one works faster for you, and then go with that.
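If you do want to measure it, a rough sketch of timing both options could look like the following; it assumes units is the queryset from the question and that only the ids are needed.

import random
import time

def option_a(units):
    # Pull every distinct id, shuffle client-side, keep 50.
    unit_ids = list(units.values_list('id', flat=True).distinct())
    random.shuffle(unit_ids)
    return unit_ids[:50]

def option_b(units):
    # Let MySQL do the random ordering.
    return list(units.values_list('id', flat=True).order_by('?')[:50])

for fn in (option_a, option_b):
    start = time.perf_counter()
    fn(units)
    print(fn.__name__, time.perf_counter() - start)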
Tradeoffs:
Shoveling large amounts of data to the client (TEXT columns; all the rows; etc)
Whether the table is so big that fetching N random rows is likely to hit the disk N times.
My first choice would be simply:
SELECT * FROM t ORDER BY RAND() LIMIT 50;
My second choice would be to use "lazy loading" (not unlike your random.shuffle, but better because it does not need a second round-trip):
SELECT t.*
FROM ( SELECT id FROM t ORDER BY RAND() LIMIT 50 ) AS r
JOIN t USING(id)
If that is not "fast enough", then first find out whether the subquery is the slowdown or the outer query.
If the inner query is the problem, then see http://mysql.rjweb.org/doc.php/random
If the outer query is the problem, you are doomed. It is already optimal (assuming PRIMARY KEY(id)).

What's the difference between index and internal ID in neo4j?

I'm setting up my database and sometimes I'll need to use an ID. At first, I added an ID as a property to my nodes of interest, but realized I could also just use neo4j's internal id. Then I stumbled upon CREATE INDEX ON :label(something) and was wondering exactly what this would do. I thought an index and the internal id would be the same thing?
This might be a stupid question, but since I'm kind of a beginner in databases, I may be missing some of these concepts.
Also, I've been reading about which kind of database to use (MySQL, MongoDB or neo4j) and decided on neo4j, since my data pretty much follows a graph structure (it will be used to build metabolic models: connections genes -> proteins -> reactions -> compounds).
In SQL the syntax just seemed too complex as I had to go around several tables to make simple connections that neo4j accomplishes quite easily...
From what I understand, MongoDB stores data independently, and since my data is connected, it doesn't really seem to fit the data structure.
But again, since my knowledge on this subject is limited, perhaps I'm not making the right choice?
Thanks in advance.
Graph DBs are ideal for connected data like this; they are a more natural fit for both storing and querying than relational DBs or document stores.
As far as indexes and ids go, here's the index section of the docs, but the gist of it is that this has to do with how Neo4j looks up starting nodes. Neo4j only uses indexes for finding these starting nodes (though in 3.5, when we do an index lookup like this, if you have ORDER BY on the indexed property it will also use the index to improve the performance of the ordering).
Here is what Neo4j will attempt to use, depending on availability, from fastest to slowest:
Lookup by internal ID - This is always quick, however we don't recommend preserving these internal ids outside the context of a query. The reason for that is that when graph elements are deleted, their ids become eligible for reuse. If you preserve the internal ids outside of Neo4j, and perform a lookup with them later, there is a chance that whatever you expected it to reference could have been deleted, and may point at nothing, or may point at some new node with completely different data.
Lookup by index - This is where you would want to use CREATE INDEX ON (or add a unique constraint, if that makes sense for your model). When you use a MATCH or MERGE with the label and property (or properties) associated with the index, this is a fast and direct lookup of the node(s) you want.
Lookup by label scan - If you perform a MATCH with a label present in the pattern, but no means to use an index (either no index present for the label/property combination, or only a label is present but no property), then a label scan will be performed, and every node of the given label will be matched to and filtered. This becomes more expensive as more nodes with those labels are added.
All nodes scan - If you do not supply any label in your MATCH pattern, then every node in your db will be scanned and filtered. This is very expensive as your db grows.
You can EXPLAIN or PROFILE a query to see its query plan, which will show you which means of lookup are used to find the starting nodes, and the rest of the operations for executing the query.
Once a starting node or nodes are found, then Neo4j uses relationship traversal and filtering to expand and find all paths matching your desired pattern.

MySQLdb SELECT with executemany() equivalents

So it seems well discussed and documented that executemany() is intended for operations like INSERTs and UPDATEs. Calling cursor.executemany(...) on a list of dictionaries and then reading cursor.fetchall() seems to only return the items from a single execute.
To achieve the same result, the general solution seems to be to formulate a single query with the select items, like SELECT * FROM books WHERE title IN ('booka', 'bookb', 'bookc', ...), but it seems this would run into issues at scale, say hundreds of thousands of items. (Please correct me if I'm wrong.)
The most obvious solution from there may be to batch the queries such that the IN predicates are a reasonable length, but that seems like a lot of hand-holding to make sure each packet is of a reasonable size.
Are there other features in MySQLdb that I'm overlooking that would return a list of results, given a dictionary of input values?
This gets particularly interesting when I'm trying to match against sets of tuples, e.g. (book, author), not just single values.
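For reference, the batching approach described above might look roughly like the sketch below (it only covers single values, not tuples); the books/title names come from the example in the question and batch_size is an arbitrary guess.

def select_in_batches(cursor, titles, batch_size=500):
    # Run the IN query in fixed-size chunks and accumulate the rows,
    # instead of building one giant IN list.
    rows = []
    for i in range(0, len(titles), batch_size):
        chunk = titles[i:i + batch_size]
        placeholders = ", ".join(["%s"] * len(chunk))
        cursor.execute(
            "SELECT * FROM books WHERE title IN (%s)" % placeholders,
            chunk,
        )
        rows.extend(cursor.fetchall())
    return rows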

Override SQL Alchemy ORM filter condition

How can I undo a filter condition on an SQLAlchemy query? E.g.
q = Model.query.filter(Model.age > 25)
q = q.remove_filter(Model.age) # what is this called?
In addition to other suggestions, the simple solution (whether it's actually simple depends on your exact situation) can be to just not apply the filters you don't need.
Most probably you have some complex code that conditionally applies filters, and then you have a case where you want to undo some of them.
In this case, instead of applying all the filters to the query directly, you could collect the filters you need into some collection, for example a dictionary or a list.
Then you remove the filters you don't need from the collection, and only then apply the collected filters to the query object.
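A minimal sketch of that idea, assuming the same Model as in the question (the dictionary keys and the second Model.name filter are made up for illustration):

# Collect filters first instead of applying them immediately.
filters = {
    "age": Model.age > 25,
    "name": Model.name == "Alice",  # hypothetical second filter
}

del filters["age"]  # "remove" a filter before it ever reaches the query

q = Model.query.filter(*filters.values())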
I don't believe that feature currently exists. That being said, you can view (and delete) the where clauses (filters) by directly accessing the q.whereclause.clauses list of BinaryExpression objects.
If that's your only filter, something like this would work:
q.whereclause.clauses.pop()
But it gets a little trickier (and hackier) if you want to remove a specific filter out of many.

Most efficient way to get one random row from Oracle

I have a scenario where I have to obfuscate data (= scramble it for testing purposes, so it is not possible to see the real data; there is no need to unscramble/unobfuscate it) in a database. There are several tables that reference the address_table. I cannot obfuscate the address_table, so I figured I would simply replace the references in those tables with other, random address_table IDs. The address_table contains 6M+ records. So I would create a temp table with all the address IDs and then, when needed, call some function to get a random one from there. So I could possibly generate a random value and take that row like:
SELECT * FROM (
    SELECT id, ROWNUM rn FROM myTempTable
) WHERE rn = x;
where x is some random value generated by dbms_random. Now, although this is what I need, it does not perform anywhere near as well as I expect.
The other thing I have tried is calling sample(); this (at least on a small table) performs a bit better, but it is still not good enough.
I know there are several threads on this matter, like this one, or this one on MySQL, but they do not directly answer it in terms of performance.
Also, I am not limited to using PL/SQL. I know very little PL/SQL; how is it in terms of performance? I mean, it is just another process in the DB server's processing queue; perhaps I could get better performance by doing the processing (generating the update scripts, populating the randoms, etc.) on the client side using something like Python, even considering network latency? Does anybody have any experience with this?
Use the SAMPLE clause:
select * from myTempTable SAMPLE(10);
This will return approximately 10% of the rows.
If you just want to hide the real data, why not take care of that in the SELECT part of the query? Instead of querying:
select column_name from table;
you could select
select scrambling_function(column_name) from table;
scrambling_function can be whatever you like.
There is not a good way that I am aware of to sample randomly using SQL. The sample function available in some SQL dialects does not produce a sufficiently random sample. The best way is to export the full data set and use random software to determine the indexes of the rows to be included in your final selection. Or, if you have a simple number index (1, 2, 3...n) and know how many rows you need to select from, you could upload a list of indexes to include and query against that. Try random.org for random number generation; their API is located at http://www.random.org/clients/http/.
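As a rough illustration of that approach, picking the row numbers on the client side could look like the sketch below; it uses Python's random module instead of the random.org service, and the row count and sample size are placeholders.

import random

total_rows = 6000000   # placeholder: however many numbered rows you have
sample_size = 1        # how many random rows you need

# Pick the row numbers client-side; these can then be loaded into a temp
# table (or an IN list) and joined against the numbered rows in the database.
chosen = random.sample(range(1, total_rows + 1), sample_size)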
