I am implementing a web application based on Google App Engine, relying on ndb. I am facing a strange problem when trying to update one of the many entities in my db. When I try to update one of its properties (e.g. a string property) I get
"/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 1715, in _validate (value,)) BadValueError: Expected string, got (u'a test',)
The same code works when I create a new entity of the same kind.
I know someone will soon ask me to add the relevant code. It would be impractical to copy all the JavaScript and Python code I developed here. I am interested to know whether this behavior is known to occur under certain conditions that I can check for.
UPDATE
Consider that the value with which I am trying to update the property is a POST parameter (i.e. self.request.get('parameter')).
It looks to me like you are trying to assign a tuple to a model's ndb.StringProperty().
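A common way that happens is a stray trailing comma after the assignment, which silently wraps the string in a one-element tuple. A minimal sketch of that failure mode (the model and handler names below are invented for illustration, not taken from your code):

# Hypothetical sketch -- model/handler names are made up for illustration.
from google.appengine.ext import ndb
import webapp2

class Item(ndb.Model):
    title = ndb.StringProperty()

class UpdateHandler(webapp2.RequestHandler):
    def post(self):
        item = Item.get_by_id(int(self.request.get('id')))
        # A trailing comma wraps the value in a one-element tuple, so this line
        # fails StringProperty validation with:
        #   BadValueError: Expected string, got (u'a test',)
        #   item.title = self.request.get('parameter'),
        # Without the comma the plain string is assigned and the update succeeds:
        item.title = self.request.get('parameter')
        item.put()

It may be worth diffing the update path against the create path for a stray comma, or for a spot where the value gets wrapped in a list or tuple on the way in.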
I'm trying to create a model in SQLAlchemy, but I'm having a hard time figuring out the best way. I currently have a class called returns, to which I want to add an additional attribute: the state the return is in. For example, a return can be expected, received or processed. However, in the Flask application I want to show the user a nice string; for example, processed should become "Waiting for reimbursement".
The problem, however, is that I don't want to store these strings in the database, since I might change them in the future or add statuses. Therefore I want some kind of translation between the value saved in the DB and the display string. I have tried solving this with Enums, but I could not attach the display strings to them. I would like something like the following to return either the 'key' or the 'value', where only the key is saved in the database.
return.status.key
return.status.value
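To make the idea concrete, something along these lines is what I am after (all names below are placeholders, not working SQLAlchemy code):

# Placeholder sketch of the behaviour I want -- not real SQLAlchemy code.
class Status(object):
    def __init__(self, key, value):
        self.key = key      # short code stored in the database, e.g. "processed"
        self.value = value  # display string shown in Flask, e.g. "Waiting for reimbursement"

STATUSES = {
    "expected": Status("expected", "Return expected"),
    "received": Status("received", "Return received"),
    "processed": Status("processed", "Waiting for reimbursement"),
}

# On a return instance, only the key would be persisted:
#   return.status.key    -> "processed" (saved in the DB)
#   return.status.value  -> "Waiting for reimbursement" (shown to the user)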
I have tried looking for a solution but was not able to find anything that seems to fit.
What is the best practice for these kinds of requirements?
I am new to Python and Pyramid, and I am trying to figure out a way to print out some object values that I am using in a view callable, to get a better idea of how things are working. More specifically, I want to see what is coming out of a SQLAlchemy query.
DBSession.query(User).filter(User.name.like('%'+request.matchdict['search']+'%'))
I need to take that query and then look up what Office a user belongs to via the office_id attribute that is part of the User object. I was thinking of looping through the users that come back from that query and doing another query to look up the office information (in the offices table). I need to build a dictionary that includes some User information and some Office information, then return it to the browser as JSON.
Is there a way I can experiment with different attempts at this and view my output without having to rely on the browser? I am more of a front-end developer, so when I am writing JavaScript I just view my output using console.log(output).
console.log(output) is to JavaScript
as
????? is to Python (specifically pyramid view callable)
Hope the question is not dumb, just trying to learn. I appreciate anyone's help.
This is a good reason to experiment with pshell, Pyramid's interactive Python interpreter. From within pshell you can tinker with things on the command line and see what they will do before adding them to your application.
http://docs.pylonsproject.org/projects/pyramid/en/1.4-branch/narr/commandline.html#the-interactive-shell
Of course, you can always use "print" and things will show up in the console. SQLAlchemy also has the sqlalchemy.echo ini option that you can turn on to see all queries. And finally, it sounds like you just need to do a join but maybe aren't familiar with how to write complex database queries, so I'd suggest you look into that before resorting to writing separate queries. Likely a single query can return what you need.
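For instance, assuming an Office model whose id is what User.office_id points at (the column names below are guesses at your schema), a single joined query could look roughly like this:

# Sketch only. Assumes the DBSession, User and request from your view, plus an
# Office model whose id column is what User.office_id references (a guess at your schema).
results = (
    DBSession.query(User, Office)
    .join(Office, User.office_id == Office.id)
    .filter(User.name.like('%' + request.matchdict['search'] + '%'))
    .all()
)

payload = [
    {'user_name': user.name, 'office_name': office.name}
    for user, office in results
]

# Returning payload from a view configured with renderer='json' sends it to the
# browser as JSON; print(payload) or log.debug(payload) shows it in the server console.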
I'm creating a game mod for Counter-Strike in Python, and it's basically all done. The only thing left is to code a REAL database, and I don't have any experience with sqlite, so I need quite a lot of help.
I have a Player class with the attribute self.steamid, which is unique for every Counter-Strike player (received from the game engine), and self.entity, which holds an Entity for the player. The Entity class (a self-made Python class) has lots and lots more attributes, such as level and name, and loads of methods.
What would be the best way to implement a database? First of all, how can I save instances of Player, with another instance of Entity as its attribute, into a database properly?
Also, I will need to get that user's data every time he connects to the game server (I have a player_connect event), so how would I retrieve the data back?
All the tutorials I found only covered saving strings or integers, but nothing about whole instances. Will I have to save every attribute of every instance (an Entity instance has a few more instances as its attributes, and all of them have huge numbers of attributes...), or is there a faster, easier way?
Also, it's going to be a locally saved database, so I can't really use any language other than SQL.
You need an ORM. Either you roll your own (which I never suggest), or you use one that already exists. Probably the two most popular in Python are SQLAlchemy and the ORM bundled with Django.
SQL databases typically can hold only fundamental data types. You can use SQLAlchemy if you want to map your models so that their attributes are automatically mapped to SQL types - but it would require a lot of study and trial and error with SQLite on your part.
I think you are not entirely correct when you say "it has to be SQL" - if you are running Python code, you can save whatever format you like.
However, Python allows you to serialize your instance data to a string, which can be persisted in a database.
So you can create a varchar(65535) field in the SQL table, along with an ID field (which could be the player ID you mentioned, for example), and persist to it the value returned by:
import pickle
value = pickle.dumps(my_instance)
When retrieving the value you do the reverse:
my_instance = pickle.loads(value)
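A minimal sketch of how that could look with the sqlite3 module from the standard library (the table and column names are just examples; a BLOB column is used here since pickle.dumps returns bytes on Python 3):

# Sketch only: persisting pickled Player instances in SQLite (names are examples).
import pickle
import sqlite3

conn = sqlite3.connect("players.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS players (steamid TEXT PRIMARY KEY, data BLOB)"
)

def save_player(player):
    # pickle.dumps() returns bytes, which fit naturally in a BLOB column.
    blob = pickle.dumps(player)
    conn.execute(
        "INSERT OR REPLACE INTO players (steamid, data) VALUES (?, ?)",
        (player.steamid, blob),
    )
    conn.commit()

def load_player(steamid):
    # Called from the player_connect event to rebuild the full Player object.
    row = conn.execute(
        "SELECT data FROM players WHERE steamid = ?", (steamid,)
    ).fetchone()
    return pickle.loads(row[0]) if row else None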
I just started with Flask and SQLAlchemy in Flask.
So I have a many-to-many relationship using the example here http://docs.sqlalchemy.org/en/latest/orm/tutorial.html
If you scroll down to the part about Keywords and tags, that is what I am working on.
So far I am able to insert new Keywords related to my Post, and I am using append(), which I know is wrong. What happens is that the next time a non-unique keyword occurs in a blog post, it throws an error about a conflict with Keyword (since keywords are supposed to be unique).
I know the right way is something else; I just don't know what. I have seen an example of
get_or_create(keyword), which basically filters by keyword and then adds it if it is not found. However, I believe that as the data size grows this will also be wrong (several queries on every save instead of a single insert). I love the way SQLAlchemy does multiple inserts automatically. I want to keep that but avoid this duplicate key issue.
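For reference, the get_or_create pattern I mean is roughly this (using the Keyword model from that tutorial; sketch only):

# Sketch of the get_or_create pattern I am referring to.
def get_or_create_keyword(session, word):
    keyword = session.query(Keyword).filter_by(keyword=word).first()
    if keyword is None:
        keyword = Keyword(word)
        session.add(keyword)
    # One extra SELECT (and possibly an INSERT) per tag, on every save.
    return keyword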
Edit: I found the solution. The SQLAlchemy docs steer you towards the error, but the explanation is in there. I have added the answer below.
OK, after hours of trial and error I found the solution, plus some things I was doing wrong.
This is how SQLAlchemy works; the answer is merge.
Make a LIST of tags as Tag models; it doesn't matter if they already exist, as long as your primary key is the name or something unique.
tags = [Tag('a1'), Tag('a2')]
Say you have Tag a1 already in the DB, but we don't really care. All we want is to insert only if the related data does not exist, which is what SQLAlchemy does with merge.
Now you make a Post with the LIST of ALL the tags we made. Even if there is only one tag, it is still a list.
Therefore:
new_post = Post('a great new post', post_tags=tags)
db.session.merge(new_post)
db.session.commit()
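For context, the models I am working with look roughly like this (simplified; the setup lines and column names are just an example, not exactly my app):

# Simplified sketch of the models, Flask-SQLAlchemy style (names are examples).
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///blog.db'
db = SQLAlchemy(app)

post_tag_table = db.Table(
    'post_tag',
    db.Column('post_id', db.Integer, db.ForeignKey('post.id')),
    db.Column('tag_name', db.String(50), db.ForeignKey('tag.name')),
)

class Tag(db.Model):
    # The tag name itself is the primary key, so merge() can match an
    # incoming Tag('a1') against the row that already exists.
    name = db.Column(db.String(50), primary_key=True)

    def __init__(self, name):
        self.name = name

class Post(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(200))
    post_tags = db.relationship('Tag', secondary=post_tag_table, backref='posts')

    def __init__(self, title, post_tags=None):
        self.title = title
        self.post_tags = post_tags or []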
I have used Flask-SQLAlchemy syntax, but the idea is the same. Just make sure you are not creating the model instances outside the session; most likely you won't be doing that anyway.
This was actually simple, but nowhere in the SQLAlchemy docs is this example mentioned. They use append(), which is only right for creating new Tags when you know you are not making duplicates.
Hope it helps.
I'm making an app that needs reverse searches. By this, I mean that users of the app will enter search parameters and save them; then, when any new objects get entered into the system, if they match the existing search parameters that a user has saved, a notification will be sent, etc.
I am having a hard time finding solutions for this type of problem.
I am using Django and thinking of building the searches and pickling them using Q objects as outlined here: http://www.djangozen.com/blog/the-power-of-q
The way I see it, when a new object is entered into the database, I will have to load every single saved query from the db and somehow run it against this one new object to see if it would match that search query... This doesn't seem ideal - has anyone tackled such a problem before?
At the database level, many databases offer 'triggers'.
Another approach is to have timed jobs that periodically fetch all items from the database that have a last-modified date later than the last run; these then get filtered and alerts issued. You can perhaps push some of the filtering into the query statement in the database. However, this is a bit trickier if notifications also need to be sent when items get deleted.
You can also put triggers manually into the code that submits data to the database, which is perhaps more flexible and certainly doesn't rely on specific features of the database.
A nice way for the triggers and the alerts to communicate is through message queues - queues such as RabbitMQ and other AMQP implementations will scale with your site.
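For illustration, pushing a "new item" event onto a RabbitMQ queue from the code that saves the data could look roughly like this (using the pika client; the queue name and payload are only examples, and a separate worker would consume the messages and run the saved searches):

# Sketch only: publish a "new item saved" event to RabbitMQ with pika.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='new_items', durable=True)

def publish_new_item(item_id, model_name):
    # A worker process consumes these messages and matches the item
    # against the stored searches, sending notifications as needed.
    channel.basic_publish(
        exchange='',
        routing_key='new_items',
        body=json.dumps({'id': item_id, 'model': model_name}),
    )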
The amount of effort you use to solve this problem is directly related to the number of stored queries you are dealing with.
Over 20 years ago we handled stored queries by treating them as minidocs and indexing them based on all of the must-have and may-have terms. A new doc's term list was used as a sort of query against this "database of queries" and that built a list of possibly interesting searches to run, and then only those searches were run against the new docs. This may sound convoluted, but when there are more than a few stored queries (say anywhere from 10,000 to 1,000,000 or more) and you have a complex query language that supports a hybrid of Boolean and similarity-based searching, it substantially reduced the number we had to execute as full-on queries -- often no more than 10 or 15 queries.
One thing that helped was that we were in control of the horizontal and the vertical of the whole thing. We used our query parser to build a parse tree and that was used to build the list of must/may have terms we indexed the query under. We warned the customer away from using certain types of wildcards in the stored queries because it could cause an explosion in the number of queries selected.
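In modern Python, the core of that "database of queries" trick might be sketched like this (purely illustrative, not the original system):

# Illustrative sketch of indexing stored queries by their terms.
from collections import defaultdict

queries_by_term = defaultdict(set)   # term -> ids of stored queries that use it

def index_query(query_id, must_have_terms):
    for term in must_have_terms:
        queries_by_term[term].add(query_id)

def candidate_queries(new_doc_terms):
    # Only queries sharing at least one indexed term with the new document
    # need to be executed in full against it.
    candidates = set()
    for term in new_doc_terms:
        candidates |= queries_by_term[term]
    return candidates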
Update for comment:
Short answer: I don't know for sure.
Longer answer: We were dealing with a custom-built text search engine, and part of its query syntax allowed slicing the doc collection in certain ways very efficiently, with special emphasis on date_added. We played a lot of games because we were ingesting 4-10,000,000 new docs a day and running them against up to 1,000,000+ stored queries on DEC Alphas with 64MB of main memory. (This was in the late 80's/early 90's.)
I'm guessing that filtering on something equivalent to date_added could be used in combination with the date of the last time you ran your queries, or maybe the highest id at the last query run time. If you need to re-run the queries against a modified record, you could use its id as part of the query.
For me to get any more specific, you're going to have to get a lot more specific about exactly what problem you are trying to solve and the scale of the solution you are trying to accomplish.
If you stored the type(s) of object(s) involved in each stored search as a generic relation, you could add a post-save signal to all involved objects. When the signal fires, it looks up only the searches that involve its object type and runs those. That probably will still run into scaling issues if you have a ton of writes to the db and a lot of saved searches, but it would be a straightforward Django approach.
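A rough sketch of that signal-based approach (SavedSearch, its fields, and notify_user are hypothetical names standing in for however you store the pickled Q objects and send notifications):

# Sketch only -- SavedSearch, pickled_q, user and notify_user are hypothetical.
import pickle

from django.contrib.contenttypes.models import ContentType
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save)
def check_saved_searches(sender, instance, created, **kwargs):
    content_type = ContentType.objects.get_for_model(sender)
    # Only consider searches stored against this object's type.
    for search in SavedSearch.objects.filter(content_type=content_type):
        q = pickle.loads(search.pickled_q)          # the saved, pickled Q object
        if sender.objects.filter(q, pk=instance.pk).exists():
            notify_user(search.user, instance)      # however notifications get sent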