Properly refresh an SQLAlchemy session to view externally updated data - python

After trying everything suggested here, I still can't get SQLAlchemy to display the correct results!
I've used various combinations of Nick's answer, session.commit(), flush() and expire_all(), restarted MySQL, even restarted the entire freaking server, and I still get old results from SQLAlchemy...why????
The most infuriating thing about this whole issue is that I can see from any other application, or even from a direct connection.execute() call, that the updated data is there. I just can't get it to display on the webpage!
BTW this is in a Pyramid app, not Flask, but since Pyramid is 99% Flask it shouldn't make a difference, right?
MTIA for any help on this, it's driving me nuts!!
PS: I tried to add this as an answer to the linked question, but it was deleted for not being a valid answer. So for future reference, if I just want to add something to an existing question without having to post an entirely new one, how would I go about that?
EDIT: My apologies zvone, here is my code:
from sqlalchemy import or_
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
session = DBSession()

# Join items to their tags so both columns can be searched.
query = session.query(Item).join(Item.tagged)
filters = []
for term in searchTerms:
    subterms = term.split(' ')
    for subterm in subterms:
        filters.append(Item.itemTitle.like('%' + subterm + '%'))
        filters.append(Tag.tagName.like('%' + subterm + '%'))
# An item matches if any single filter matches.
query = query.filter(or_(*filters))
matchedItems = query.all()
And to make some more sense out of it, here's the context:
I'm building a basic CMS where users can upload and download items of any type (text files, images, etc.).
The whole idea of this page is to allow the user to search for items that have been tagged with certain expressions. Tags are entered in the search field as a comma-delimited string of search phrases, e.g. "movies, books, photos, search term with spaces". This string is split into its component phrases to create searchTerms, a Python list of all the terms entered into the field.
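For illustration, searchTerms is built from the raw search string roughly like this (a minimal sketch; the actual parsing code isn't shown here):

raw = "movies, books, photos, search term with spaces"
searchTerms = [phrase.strip() for phrase in raw.split(',')]
# searchTerms == ['movies', 'books', 'photos', 'search term with spaces']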
You can see in the code where I'm iterating through searchTerms, splitting phrases into separate words and adding query filters for each word.
The problem arises when searching for "big, theory". I know for certain that 3 users on the production site have posted Big Bang Theory episodes, but after migrating these DB records to my dev server, I only get one search result (the old amount).
Many thanks again for the help! :D

Related

Flask relationship model multiple get or create if not exist

I just started with Flask and SQLAlchemy in Flask.
So I have a many-to-many relationship using the example here http://docs.sqlalchemy.org/en/latest/orm/tutorial.html
If you scroll down to the part about Keywords and Tags, that is what I am working on.
So far I am able to insert new Keywords related to my Post using append(), which I know is wrong. What happens is that the next time a non-unique keyword occurs in a blog post, it throws a conflict error about the Keyword (since keywords are supposed to be unique).
I know the right way is something else, I just don't know what. I have seen an example of
get_or_create(keyword), which basically filters by keyword and then adds it if not found. However I believe this will also scale poorly as the data grows (several queries on every save, each for a single insert). I love the way SQLAlchemy does multiple inserts automatically; I want to keep that but avoid this duplicate key issue.
Edit: found the solution. The SQLAlchemy docs guide you towards the error, but the explanation is in there. I have added the answer.
OK, after hours of trial and error I found the solution, plus some things I was doing wrong.
This is how SQLAlchemy works: the answer is merge().
Make a LIST of tags as Tag models; it doesn't matter if they already exist, as long as your primary key is the name or something unique.
tags = [Tag('a1'), Tag('a2')]
Say you already have Tag 'a1' in the DB, but we don't really care. All we want is to insert the related data if it does not exist, which is what SQLAlchemy's merge does.
Now you make a Post with the LIST of ALL the tags we made. Even if there is only one tag, it still goes in a list.
Therefore:
new_post = Post('a great new post', post_tags=tags)
db.session.merge(new_post)
db.session.commit()
I have used Flask syntax but the idea is the same. Just make sure you are not creating the model OUTSIDE the session; most likely you won't be.
This was actually simple, but nowhere in the SQLAlchemy docs is this example mentioned. They use append(), which is only appropriate for creating new Tags when you know you are not making duplicates.
Hope it helps.
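A minimal, self-contained sketch of this merge() pattern with Flask-SQLAlchemy, assuming a Tag model whose name is the primary key (the model and table names here are illustrative, not the original poster's exact code):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
db = SQLAlchemy(app)

post_tag = db.Table(
    'post_tag',
    db.Column('post_id', db.Integer, db.ForeignKey('post.id')),
    db.Column('tag_name', db.String(80), db.ForeignKey('tag.name')),
)

class Tag(db.Model):
    # The tag name itself is the primary key, so merge() can match
    # an incoming Tag('a1') against an existing row.
    name = db.Column(db.String(80), primary_key=True)

    def __init__(self, name):
        self.name = name

class Post(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(200))
    tags = db.relationship('Tag', secondary=post_tag)

    def __init__(self, title, post_tags=None):
        self.title = title
        self.tags = post_tags or []

with app.app_context():
    db.create_all()
    db.session.add(Tag('a1'))              # 'a1' already exists in the DB
    db.session.commit()

    tags = [Tag('a1'), Tag('a2')]          # duplicates don't matter
    new_post = Post('a great new post', post_tags=tags)
    db.session.merge(new_post)             # cascades merge() to the tags
    db.session.commit()

Because merge() matches objects by primary key, the existing 'a1' row is reused and only 'a2' is inserted; no unique-constraint error is raised.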

How to use High Replication Datastore

Okay, I have watched the video and read the articles in the App Engine documentation (including Using the High Replication Datastore). However, I am still completely confused about the practical usage of it. I understand the benefits (from the video) and they sound great, but what I am lacking is a few practical examples. There are plenty of master/slave examples on the web, but very little illustrating (with proper documentation) the high replication datastore. The guestbook code example used in the Using the High Replication Datastore article illustrates the ancestor key by adding new functionality that the previous guestbook example does not have (it seems you can switch guestbooks). This just adds to the confusion.
I often use djangoforms on GAE and I was wondering if someone can help me translate all these queries into high replication datastore compatible queries (let's forget for a moment the discussion that not all queries necessarily need to be high replication datastore compatible queries and focus on the example itself).
UPDATE: by high replication datastore compatible queries I mean queries that always return the latest data, not potentially stale data. Using entity groups seems to be the way to go here, but as mentioned before, I don't have many practical code examples of how to do this, so that is what I am looking for!
The queries in this article are as follows. The main recurring query is:
query = db.GqlQuery("SELECT * FROM Item ORDER BY name")
which we will translate to:
query = Item.all().order('name')  # datastore request
Validating the form happens like this:
data = ItemForm(data=self.request.POST)
if data.is_valid():
    # Save the data, and redirect to the view page
    entity = data.save(commit=False)
    entity.added_by = users.get_current_user()
    entity.put()  # datastore request
And getting the latest entry from the datastore to populate a form happens like this:
id = int(self.request.get('id'))
item = Item.get(db.Key.from_path('Item', id))  # datastore request
data = ItemForm(data=self.request.POST, instance=item)
So what do I/we need to do to make all these datastore requests compatible with the high replication datastore?
One last thing that is also not clear to me: does using ancestor keys have any impact on the model in the datastore? For example, in the guestbook code example they use:
def guestbook_key(guestbook_name=None):
    return db.Key.from_path('Guestbook', guestbook_name or 'default_guestbook')
However, 'Guestbook' does not exist in the model, so how can you use db.Key.from_path on it, and why does it work? Does it change how data is stored in the datastore in a way I need to take into account when retrieving the data (e.g. does it add another field I should exclude from display when using djangoforms)?
Like I said before, this is confusing me a lot and your help is greatly appreciated!
I'm not sure why you think you need to change your queries at all. The documentation that you link to clearly states:
The back end changes, but the datastore API does not change at all. You'll use the same programming interfaces no matter which datastore you're using.
The point of that page is just to say that queries may be out of sync if you don't use entity groups. Your final code snippet is just an example of that - the key built from the string 'Guestbook' is exactly an ancestor key. I don't understand why you think it needs to exist in the model. Once again, this is unchanged from the non-HR datastore - it has always been the case that keys are built up from paths, which can consist of arbitrary strings. You probably need to reread the documentation on entity groups and keys.
The changes to use the HRD are not in how queries are made, but in what guarantees are made about what data you get back. The example you give:
query = db.GqlQuery("SELECT * FROM Item ORDER BY name")
will work in the HRD as well. The catch (basically) is that this kind of query (using either this syntax, or the Item.all() form) can return objects slightly out-of-date. This is probably not a big deal with the guestbook.
Note that if you're getting an object by key directly, it will never be out-of-date. It's only for queries that you can see this issue. You can avoid this problem with queries by placing all the entities that need to be consistent in a single entity group. Note that this limits the rate at which you can write to the entity group.
In answer to your follow-up question, 'Guestbook' is the kind of the ancestor entity.
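To make that concrete, here is a minimal sketch of an ancestor query with the old db API, modeled loosely on the docs' guestbook example (the Greeting model and names are illustrative):

from google.appengine.ext import db

class Greeting(db.Model):
    content = db.StringProperty()

def guestbook_key(guestbook_name=None):
    # All greetings stored under this key form one entity group.
    return db.Key.from_path('Guestbook', guestbook_name or 'default_guestbook')

# Writing with parent= places the entity in the group.
Greeting(parent=guestbook_key(), content='hello').put()

# An ancestor query is strongly consistent on the HRD, so it sees
# the write above immediately; a plain Greeting.all() might not.
greetings = Greeting.all().ancestor(guestbook_key()).fetch(10)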

Google Apps Engine Datastore Search

I was wondering if there is any way to search the datastore for an entry. I have a bunch of entries for songs (title, artist, rating), but I'm not sure how to search through them by both song title and artist. We take in a search term and are looking for all entries that "match." But we are lost :( Any help is much appreciated!
We are using Python.
Edit 1: the current code is useless (it's an exact-match search), but it might help you see the issue:
query = song.gql("SELECT * FROM song WHERE title = searchTerm OR artist = searchTerm")
The song data you work with sounds like a rather static data set (primarily inserts, no or few updates). In that case there is a GAE technique called Relation Index Entity (RIE) which is an efficient way to implement keyword-based search.
But some preparation work is required. Briefly:
- Build a special RIE entity where you place all searchable keywords from each song (one-to-one relationship).
- The RIE stores them in a StringListProperty, which supports searches like keywords = 'SearchTerm' (this matches if any of the values in the keywords list equals 'SearchTerm').
- An AND condition works immediately by adding multiple filters as above.
- An OR condition needs more work, implemented as an in-memory merge of AND-only queries.
You can find details on the solution workflow and code samples in my blog post Relation Index Entities with Python for Google Datastore.
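A minimal sketch of the RIE pattern described above (the model and property names are illustrative, not taken from the blog post):

from google.appengine.ext import db

class Song(db.Model):
    title = db.StringProperty()
    artist = db.StringProperty()
    rating = db.IntegerProperty()

class SongIndex(db.Model):
    # Created as a child entity of its Song; holds every searchable keyword.
    keywords = db.StringListProperty()

def index_song(song):
    keywords = song.title.lower().split() + song.artist.lower().split()
    SongIndex(parent=song, keywords=keywords).put()

def search_and(term1, term2):
    # Both filters must match some value in the keywords list (AND).
    q = SongIndex.all(keys_only=True)
    q.filter('keywords =', term1.lower())
    q.filter('keywords =', term2.lower())
    # Fetch only the index keys, then load the parent Songs.
    return db.get([key.parent() for key in q.fetch(100)])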
http://www.billkatz.com/2009/6/Simple-Full-Text-Search-for-App-Engine

Appengine (python) returns empty for valid queries

EDIT: Figured it out. For whatever reason the field in the index was called strWord instead of wordStr; I didn't notice because of the similarity. The file was auto-generated, so I must have named the field that in a previous development version.
I've got an app with around half a million 'records', each of which only stores three fields. I'd like to look up records by a string field with a query, but I'm running into problems. If I visit the console page, manually view a record and save it (without making changes), it shows up in a query:
SELECT * FROM wordEntry WHERE wordStr = 'SomeString'
If I don't do this, I get 'no results'. Does App Engine need time to update? If so, how much?
(I was also having trouble batch deleting and modifying data, but I was able to break the problem up into smaller chunks.)
When this has happened to me it's because I've been using a TextField, which cannot be queried (but confusingly just gets ignored). Try switching to StringField.
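To illustrate the pitfall (assuming TextField/StringField here refer to db.TextProperty and db.StringProperty in the old db API; the model is a guess at the poster's):

from google.appengine.ext import db

class WordEntry(db.Model):
    wordStr = db.StringProperty()  # indexed, so equality filters work
    body = db.TextProperty()       # NOT indexed; filters on it silently match nothing

WordEntry(wordStr='SomeString', body='some long text').put()

# Matches only because wordStr is a StringProperty:
results = db.GqlQuery("SELECT * FROM WordEntry WHERE wordStr = :1",
                      'SomeString').fetch(10)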

Reverse Search Best Practices?

I'm making an app that has a need for reverse searches. By this, I mean that users of the app will enter search parameters and save them; then, when any new objects get entered onto the system, if they match the existing search parameters that a user has saved, a notification will be sent, etc.
I am having a hard time finding solutions for this type of problem.
I am using Django and thinking of building the searches and pickling them using Q objects as outlined here: http://www.djangozen.com/blog/the-power-of-q
The way I see it, when a new object is entered into the database, I will have to load every single saved query from the db and somehow run it against this one new object to see if it would match that search query... This doesn't seem ideal - has anyone tackled such a problem before?
At the database level, many databases offer 'triggers'.
Another approach is to have timed jobs that periodically fetch all items from the database that have a last-modified date since the last run; then these get filtered and alerts issued. You can perhaps put some of the filtering into the query statement in the database. However, this is a bit trickier if notifications need to be sent if items get deleted.
You can also put triggers manually into the code that submits data to the database, which is perhaps more flexible and certainly doesn't rely on specific features of the database.
A nice way for the triggers and the alerts to communicate is through message queues - queues such as RabbitMQ and other AMQP implementations will scale with your site.
The amount of effort you use to solve this problem is directly related to the number of stored queries you are dealing with.
Over 20 years ago we handled stored queries by treating them as minidocs and indexing them based on all of the must-have and may-have terms. A new doc's term list was used as a sort of query against this "database of queries", and that built a list of possibly interesting searches to run; then only those searches were run against the new docs. This may sound convoluted, but when there are more than a few stored queries (say anywhere from 10,000 to 1,000,000 or more) and you have a complex query language that supports a hybrid of Boolean and similarity-based searching, it substantially reduced the number we had to execute as full-on queries -- often no more than 10 or 15 queries.
One thing that helped was that we were in control of the horizontal and the vertical of the whole thing. We used our query parser to build a parse tree and that was used to build the list of must/may have terms we indexed the query under. We warned the customer away from using certain types of wildcards in the stored queries because it could cause an explosion in the number of queries selected.
Update for comment:
Short answer: I don't know for sure.
Longer answer: We were dealing with a custom-built text search engine, and part of its query syntax allowed slicing the doc collection in certain ways very efficiently, with special emphasis on date_added. We played a lot of games because we were ingesting 4-10 million new docs a day and running them against up to 1,000,000+ stored queries on DEC Alphas with 64MB of main memory. (This was in the late '80s/early '90s.)
I'm guessing that filtering on something equivalent to date_added could be used in combination with the date of the last time you ran your queries, or maybe the highest id at last query run time. If you need to re-run the queries against a modified record, you could use its id as part of the query.
For me to get any more specific, you're going to have to get a lot more specific about exactly what problem you are trying to solve and the scale of the solution you are trying to accomplish.
If you stored the type(s) of object(s) involved in each stored search as a generic relation, you could add a post-save signal to all involved objects. When the signal fires, it looks up only the searches that involve its object type and runs those. That probably will still run into scaling issues if you have a ton of writes to the db and a lot of saved searches, but it would be a straightforward Django approach.
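A rough sketch of that signal-based approach, assuming a hypothetical SavedSearch model that stores a pickled Q object and the ContentType it applies to (the field and method names are illustrative assumptions):

import pickle

from django.contrib.contenttypes.models import ContentType
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import SavedSearch  # hypothetical model

@receiver(post_save)
def run_saved_searches(sender, instance, created, **kwargs):
    if not created or sender is SavedSearch:
        return
    ct = ContentType.objects.get_for_model(sender)
    # Load only the searches saved for this model type.
    for search in SavedSearch.objects.filter(content_type=ct):
        q = pickle.loads(search.pickled_query)  # hypothetical field
        # Re-run the saved query, restricted to just the new row.
        if sender.objects.filter(q, pk=instance.pk).exists():
            search.notify(instance)  # hypothetical notification hook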
