Imagine you have the following situation:
    for i in xrange(100000):
        account = Account()
        account.foo = i
        account.save()
Obviously, the 100,000 INSERT statements executed by Django are going to take some time. It would be nicer to be able to combine all those INSERTs into one big INSERT. Here's the kind of thing I'm hoping I can do:
    inserts = []
    for i in xrange(100000):
        account = Account()
        account.foo = i
        inserts.append(account.insert_sql)
    sql = 'INSERT INTO whatever... ' + ', '.join(inserts)
Is there a way to do this using QuerySet, without manually generating all those INSERT statements?
As shown in this related question, one can use @transaction.commit_manually to combine all the .save() operations into a single commit and greatly improve performance:
    from django.db import transaction

    @transaction.commit_manually
    def your_view(request):
        try:
            for i in xrange(100000):
                account = Account()
                account.foo = i
                account.save()
        except:
            transaction.rollback()
        else:
            transaction.commit()
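For what it's worth, the same rollback-on-error, commit-on-success pattern can be written more compactly with commit_on_success, which the same Django versions provide (a sketch, not taken from the linked question):

    from django.db import transaction

    @transaction.commit_on_success
    def your_view(request):
        # Commits once if the function returns normally,
        # rolls back if it raises.
        for i in xrange(100000):
            account = Account()
            account.foo = i
            account.save()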
Alternatively, if you're feeling adventurous, have a look at this snippet, which implements a manager for bulk inserting. Note that it works only with MySQL and hasn't been updated in a while, so it's hard to tell whether it will play nicely with newer versions of Django.
You could use raw SQL.
Either via Account.objects.raw() or using a django.db.connection object.
This might not be an option if you want to maintain database agnosticism.
http://docs.djangoproject.com/en/dev/topics/db/sql/
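For instance, a minimal sketch using the raw connection and the DB-API's executemany (the table and column names are assumptions; check what your schema actually generates):

    from django.db import connection, transaction

    cursor = connection.cursor()
    # One parameterized INSERT, executed for every row in a single batch.
    cursor.executemany(
        "INSERT INTO myapp_account (foo) VALUES (%s)",  # assumed table/column names
        [(i,) for i in xrange(100000)],
    )
    transaction.commit_unless_managed()  # needed outside managed transactions on older Django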
If what you're doing is a one-time setup, perhaps using a fixture would be better.
I have a couple of models that I want to update at the same time. First I get their data from the DB with a simple:
    s = Store.get(Store.id == store_id)
    new_book = Book.get(Book.id == data['book_id'])
    old_book = Book.get(Book.id == s.books.id)
The actual schema is irrelevant here. Then I do some updates to these models and at the end I save all three of them with:
    s.save()
    new_book.save()
    old_book.save()
The function that handles these operations uses the @db.atomic() decorator, so the writes are bunched into a single transaction. The problem is: what if, between the point where I get() the data from the DB and the point where I save the modified data, another process has already changed something in these rows? Is there a way to execute those writes (the .save() calls) only if the underlying DB rows have not been changed? I could read their last_changed value, but again, is there a way to check and update at the same time, and simply raise an exception if the data has changed?
It turns out there is a solution for this in the official peewee docs, called Optimistic Locking.
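The gist is a version column that every update both checks and increments. A minimal sketch of the idea, assuming a version IntegerField on the model (the field, helper, and exception names here are illustrative, not the docs' exact code):

    from peewee import IntegerField, Model

    class ConflictDetected(Exception):
        pass

    class Book(Model):  # assumes the model is bound to a database via Meta
        # ... your real fields ...
        version = IntegerField(default=1)

    def save_optimistic(book, **changes):
        """Apply `changes` only if nobody bumped `version` since we read the row."""
        rows = (Book
                .update(version=Book.version + 1, **changes)
                .where((Book.id == book.id) &
                       (Book.version == book.version))
                .execute())
        if rows == 0:
            # Another process updated the row after our get(); bail out.
            raise ConflictDetected('Book %s was modified concurrently' % book.id)
        book.version += 1

The check and the update happen in the same UPDATE statement, so there is no window between reading last_changed and writing.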
If I want to check for the existence of an object and retrieve it if it exists, which of the following methods is faster? More idiomatic? And why? If neither of the two examples I list, how else would one go about doing this?
    if Object.objects.filter(**kwargs).exists():
        my_object = Object.objects.get(**kwargs)

Or:

    my_object = Object.objects.filter(**kwargs)
    if my_object:
        my_object = my_object[0]
If relevant, I care about MySQL and Postgres for this.
Why not do this in a try/except block to avoid the multiple queries / the query-then-if?
    try:
        obj = Object.objects.get(**kwargs)
    except Object.DoesNotExist:
        pass
Just add your else logic under the except.
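For instance, a sketch of the full pattern (the fallback behaviour here is just an illustration):

    try:
        obj = Object.objects.get(**kwargs)
    except Object.DoesNotExist:
        obj = None  # or create a default, log, redirect, etc.

    if obj is not None:
        pass  # exactly one matching row exists; use it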
Django provides a pretty good overview of exists() in its documentation.
Using your first example, it will run the query twice; according to the documentation:
    If some_queryset has not yet been evaluated, but you know that it will be
    at some point, then using some_queryset.exists() will do more overall work
    (one query for the existence check plus an extra one to later retrieve the
    results) than simply using bool(some_queryset), which retrieves the results
    and then checks if any were returned.
So if you're going to be using the object after checking for existence, the docs suggest skipping exists() and just forcing evaluation once:

    my_object = Object.objects.filter(**kwargs)
    if my_object:  # evaluates the queryset once and caches the results
        my_object = my_object[0]
I've got this long queryset statement in a view:
    contributions = user_profile.contributions_chosen.all()\
        .filter(payed=False).filter(belongs_to=concert)\
        .filter(contribution_def__left__gt=0)\
        .filter(contribution_def__type_of='ticket')
That I use in my template:

    context['contributions'] = contributions
And later in that view I make changes (add or remove a record) to the contributions_chosen table, and if I want context['contributions'] updated I have to re-query the database with the same lengthy query:
    contributions = user_profile.contributions_chosen.all()\
        .filter(payed=False).filter(belongs_to=concert)\
        .filter(contribution_def__left__gt=0)\
        .filter(contribution_def__type_of='ticket')
And then update my context again:

    context['contributions'] = contributions
So I was wondering if there's any way I can avoid repeating myself and re-evaluate contributions so that it actually reflects the real data in the database. Ideally I would modify the queryset contributions and its values would be updated, with the database reflecting these changes at the same time, but I don't know how to do this.
UPDATE:
This is what I do between the two context['contributions'] = contributions assignments.
I add a new contribution object to contributions_chosen (this is an m2m relation):

    contribution = Contribution.objects.create(kwarg=something, kwarg2=somethingelse)
    user_profile.contributions_chosen.add(contribution)
    contribution.save()
    user_profile.save()
And in some cases I delete a contribution object:

    contribution = user_profile.contributions_chosen.get(id=1)
    user_profile.contributions_chosen.get(id=request.POST['con
    contribution.delete()
As you can see, I'm modifying the contributions_chosen table, so I have to reissue the query and update the context.

What am I doing wrong?
UPDATE:

After seeing your comments about evaluation, I realize I do evaluate the queryset: I call len(contributions) between the two context['contributions'] assignments, and that seems to be the problem. I'll just move it after the database operations and that's it, thanks guys.
UPDATE: It seems you have not evaluated the queryset contributions, so there is no need to worry about updating it; it still hasn't fetched data from the DB.

Can you post the code between the two context['contributions'] = contributions lines? Normally, before you evaluate the queryset contributions (for example by iterating over it or calling its __len__()), it doesn't contain anything read from the DB, hence you don't have to update its content.
To re-evaluate a queryset, you could:

    # make a clone
    contributions = contributions._clone()
    # or use any operation that returns a clone, for example
    contributions = contributions.filter()
    # or clear the result cache so the next evaluation hits the DB again
    contributions._result_cache = None
    # you could even add new items to contributions._result_cache directly,
    # but that could cause unexpected behavior if you're not careful
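If you'd rather stay on the public API instead of the underscore-prefixed internals, a clone can also be had via all(), which on an existing queryset returns a fresh copy:

    # .all() on a queryset returns a clone, so this re-runs the query when evaluated
    contributions = contributions.all()
    context['contributions'] = contributions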
I don't know how you can avoid re-evaluating the query, but one way to save some repeated statements in your code would be to put all those filters in a dict and pass them as keyword arguments:
    query_args = dict(
        payed=False,
        belongs_to=concert,
        contribution_def__left__gt=0,
        contribution_def__type_of='ticket',
    )
and then
    contributions = user_profile.contributions_chosen.filter(**query_args)
This just removes some repeated code but doesn't solve the repeated query. If you need to change the args, just handle query_args as a normal Python dict; it is one, after all :)
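For example, a small illustrative tweak before re-filtering (the replacement value here is made up):

    # drop one constraint and change another, then re-run the query
    query_args.pop('belongs_to', None)
    query_args['contribution_def__type_of'] = 'donation'  # hypothetical value
    contributions = user_profile.contributions_chosen.filter(**query_args)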
I'm using just SQLAlchemy Core, and I cannot get the SQL to include my WHERE clauses. I would like this very generic update code to work on all my tables. The intent is that this is part of a generic insert/update function that works for every table; doing it this way allows for extremely brief test code and simple CLI utilities that can pass all args and options without the complexity of separate sub-commands for each table.

It'll take a few more tweaks to get it there, but it should be doing the updates just fine by now. However, while SQLAlchemy refers to generative queries, it doesn't distinguish between selects and updates. I've reviewed the SQLAlchemy documentation, Essential SQLAlchemy, Stack Overflow, and several source-code repositories, and have found nothing.
    u = self._table.update()
    non_key_kw = {}
    for column in self._table.c:
        if column.name in self._table.primary_key:
            u.where(self._table.c[column.name] == kw[column.name])
        else:
            col_name = column.name
            non_key_kw[column.name] = kw[column.name]

    print u
    result = u.execute(kw)
Which fails: it doesn't seem to recognize the where clause:

    UPDATE struct SET year=?, month=?, day=?, distance=?, speed=?, slope=?, temp=?
    FAIL
And I can't find any examples of building up an update in this way. Any recommendations?
the "where()" method is generative in that it returns a new Update() object. The old one is not modified:
u = u.where(...)
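Applied to the loop above, that means reassigning u on each iteration (a sketch mirroring the question's names):

    u = self._table.update()
    non_key_kw = {}
    for column in self._table.c:
        if column.name in self._table.primary_key:
            # where() returns a new Update object; keep the returned one
            u = u.where(self._table.c[column.name] == kw[column.name])
        else:
            non_key_kw[column.name] = kw[column.name]

    result = u.execute(kw)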
In my App Engine project I need to use a certain filter as a base, then apply various extra filters to the end and retrieve the different result sets separately, e.g.:

    base_query = MyModel.all().filter('mainfilter', 123)
Then I need to use the results of the various subqueries separately:

    subquery1 = base_query.filter('subfilter1', 'xyz')
    # Do something with subquery1 results here

    subquery2 = base_query.filter('subfilter2', 'abc')
    # Do something with subquery2 results here
Unfortunately filter() affects the state of the base_query Query instance rather than just returning a modified version. Is there any way to duplicate the Query object and use it as a base? Is there perhaps a standard Python way of duplicating an object that could be used?
The extra filters are actually applied dynamically by different forms within a wizard, which use the 'running total' of the query in their branch to decide whether to ask further questions.
Obviously I could pass around a rudimentary stack of filter criteria, but I'd rather use the Query itself if possible, as it adds simplicity and elegance to the solution.
There's no officially approved (i.e., not likely to break) way to do this. Simply creating the query afresh from the parameters when you need it is your best option.
As Nick said, you'd better create the query again, but you can still avoid repeating yourself. A good way to do that would be like this:
    # inside a request handler
    def create_base_query():
        return MyModel.all().filter('mainfilter', 123)

    subquery1 = create_base_query().filter('subfilter1', 'xyz')
    # Do something with subquery1 results here

    subquery2 = create_base_query().filter('subfilter2', 'abc')
    # Do something with subquery2 results here