I am building an app where each user is assigned a task from a tasks table. To assign a task, we mark an existing entry as deleted (via a flag) and then add an entry to that table recording the person responsible for the task.
The issue is that the query prioritizes older entries over newer ones, so if multiple users try to get a task at the same time, there is a chance they will read the same entry and be assigned the same task. Is there a simple way around this?
My first inclination was to create a singleton class that handles job distribution, but I am pretty sure that such issues can be handled by Django directly. What should I try?
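One hedged possibility: run the claim inside a transaction and lock the candidate row with select_for_update(), so concurrent requests block until the first assignment commits. The Task model and its fields below are invented names standing in for the actual schema, not code from the question:

from django.db import transaction

def claim_task(user):
    # Sketch only: Task, deleted, assignee and created_at are assumed names.
    with transaction.atomic():
        task = (Task.objects
                .select_for_update()
                .filter(deleted=False, assignee__isnull=True)
                .order_by('created_at')
                .first())
        if task is None:
            return None  # nothing left to claim
        task.deleted = True  # flag the original entry as deleted
        task.save()
        # add the new entry that records who is responsible
        return Task.objects.create(description=task.description,
                                   assignee=user)

A second request for the same row waits on the row lock and, once it resumes, no longer sees the row as claimable; depending on the database you may still want a retry loop around the claim.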
Related
Is there any way to maintain a variable that is accessible and mutable across processes?
Example
User A makes a request to a view called make_foo, and the operation within that view takes time. We want a flag variable making_foo = True that is visible to User B when they make a request, and to any other user or service within the Django app, and we want to be able to set it back to False when the operation is done.
Don't take the example too seriously; I know about task queues. What I am trying to understand is the idea of having a shared mutable variable across processes without the need to use a database.
Is there any best practice to achieve that?
One thing you need to be aware of is that when your Django server is running in production, there is not just one Django process: there will be several worker processes running at the same time.
If you want to share data between processes, even internally, you will need some kind of database to do so, whether that's with SQLite3 or Redis (which I recommend for stuff like this).
I won't go into the details because others have covered them before, but Redis is an in-memory key-value database (unlike Django's model layer, Redis is essentially a giant dictionary). Redis is fast, and most of its operations are atomic, which means you are unlikely to encounter race conditions.
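As a minimal sketch of the shared flag with redis-py (the key name and do_slow_work() are made up for the example):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# inside make_foo: raise the flag while the slow operation runs
r.set('making_foo', 1)
try:
    do_slow_work()  # stand-in for the long-running operation
finally:
    r.delete('making_foo')  # always lower the flag, even on errors

# any other process (User B's request, another service, ...) sees it
if r.get('making_foo'):
    print('foo is currently being made')

Since every worker process talks to the same Redis server, they all see the same value, which is exactly what a module-level Python variable cannot give you here.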
Let's say that I have a Django web application with two users. My web application has a global variable that exists on the server (a Pandas DataFrame created from data in an external SQL database).
Let's say that a user makes an update request to that DataFrame and it is now being updated. While the DataFrame is being updated, the other user makes a get request for it. Is there a way to 'lock' the DataFrame until user 1 is finished with it, and then complete the request made by user 2?
EDIT:
So the order of events should be:
User 1 makes an update request, the DataFrame is locked, User 2 makes a get request, the DataFrame finishes updating, the DataFrame is unlocked, User 2 gets his/her response.
Lines of code would be appreciated!
Ehm... Django is not a server. It ships with a single-threaded development server, but that should not be used for anything beyond development, and maybe not even for that. Django applications are deployed using WSGI. The WSGI server running your app is likely to start several separate worker processes, and it will kill and restart those workers according to the rules in its configuration.
This means that you cannot rely on multiple requests hitting the same process. The Django app lifecycle runs from receiving a request to returning a response; anything that is not explicitly made persistent between those two events should be considered gone.
So, when one of your users updates a global variable, this variable only exists in the one process this user randomly accessed. The second user might or might not hit the same process and therefore might or might not get the same copy of the variable. More than that, the process will sooner or later be killed by the WSGI server and all the updates will be gone.
What I am getting at is that you might want to rethink your architecture before you bother with the atomic update problems.
Don't share in-memory objects if you're going to mutate them. Concurrency is super hard to get right, and premature optimization is evil. Give each user their own view of the data and share data only via the database (using transactions to make your updates atomic). Keep a counter in your database and increment it every time you make an update; make a transaction fail if that number has changed since the data was read (because somebody else has mutated it).
Also, don't make important architectural decisions when tired! :)
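A minimal sketch of the counter idea in Django, with an invented SharedData model (this is plain optimistic locking, not code from the answer):

from django.db import models, transaction

class SharedData(models.Model):
    payload = models.TextField()
    version = models.IntegerField(default=0)  # bumped on every update

def update_shared_data(pk, new_payload):
    with transaction.atomic():
        current = SharedData.objects.get(pk=pk)
        # the update only matches if nobody bumped the version meanwhile
        updated = SharedData.objects.filter(
            pk=pk, version=current.version,
        ).update(payload=new_payload, version=current.version + 1)
        if not updated:
            raise RuntimeError('Concurrent modification; re-read and retry')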
I often have models that are a local copy of some remote resource, which needs to be periodically kept in sync.
from google.appengine.api.taskqueue import Task

Task(
    url="/keep_in_sync",
    params={'entity_id': entity_id},
    name="sync-%s" % entity_id,
    countdown=3600
).add()
Inside keep_in_sync any changes are saved to the model and a new task is scheduled to happen again later.
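For concreteness, a hypothetical sketch of the handler behind /keep_in_sync (SyncedEntity and refresh_from_remote() are invented names; the timestamp suffix is there because a named task cannot be re-added while the old name is still tombstoned):

import time

from google.appengine.api import taskqueue
from google.appengine.ext import webapp

class KeepInSyncHandler(webapp.RequestHandler):
    def post(self):
        entity_id = self.request.get('entity_id')
        entity = SyncedEntity.get_by_key_name(entity_id)  # assumed model
        entity.refresh_from_remote()  # assumed: pull the remote resource
        entity.put()
        # schedule the next sync; a fresh name avoids TombstonedTaskError
        taskqueue.Task(
            url='/keep_in_sync',
            params={'entity_id': entity_id},
            name='sync-%s-%d' % (entity_id, int(time.time())),
            countdown=3600
        ).add()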
Now, while superficially this seems like a nice solution, in practice you might start to worry whether all the necessary tasks have really been added. Maybe you have entities representing the level of food pellets inside your hamster cages, so that an automated email can be sent to your housekeeper to feed them. But then a few weeks later, when you come back from your holiday, you find several of your hamsters starving.
It then starts to seem like a good idea to write a script that goes through each entity and makes sure the proper task really is in the queue for it. But neither the Task nor the Queue class has any method for checking whether a task exists.
Can you save the hamsters and come up with a nicer way to make sure that a method really is being called periodically for each entity?
Update
It seems that if you want to be really sure that tasks are scheduled, you need to keep track of your own tasks, as Nick Johnson suggests. I'm not ready to let go of the convenient task queue, so for the time being I will just tolerate the uncertainty of being unable to check whether tasks are really scheduled.
Instead of enqueueing a task per entity, handle multiple entities in a single task. This can be triggered by a daily cron job, for instance, which fans out to multiple tasks. As well as ensuring you execute your code for each entity, you can also take advantage of asynchronous URLFetch to synchronize with the external resource more efficiently, and batch puts and gets from the datastore to make the updates more efficient.
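A rough sketch of that pattern, with invented names (SyncedEntity, refresh_from_remote(), the /sync_batch URL): a daily cron hit enqueues the first task, and each task processes one batch and chains the next.

from google.appengine.api import taskqueue
from google.appengine.ext import db, webapp

BATCH_SIZE = 50

class SyncBatchHandler(webapp.RequestHandler):
    def post(self):
        query = SyncedEntity.all()  # assumed model
        cursor = self.request.get('cursor')
        if cursor:
            query.with_cursor(cursor)
        entities = query.fetch(BATCH_SIZE)
        for entity in entities:
            entity.refresh_from_remote()  # assumed sync logic
        db.put(entities)  # one batch put instead of many single puts
        if len(entities) == BATCH_SIZE:
            # more entities remain: chain a task for the next batch
            taskqueue.add(url='/sync_batch',
                          params={'cursor': query.cursor()})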
You'll get an exception (TaskAlreadyExistsError) if a task with the same name is already in the queue. So don't worry: just add all of them to the queue, and remember to catch the exception.
You can find the full list of exceptions here: http://code.google.com/intl/en/appengine/docs/python/taskqueue/exceptions.html
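In code, reusing the Task call from the question, that looks roughly like this (TombstonedTaskError is also worth catching, since a named task that ran recently cannot be re-added under the same name):

from google.appengine.api import taskqueue

try:
    taskqueue.Task(
        url="/keep_in_sync",
        params={'entity_id': entity_id},
        name="sync-%s" % entity_id,
        countdown=3600
    ).add()
except (taskqueue.TaskAlreadyExistsError, taskqueue.TombstonedTaskError):
    pass  # already queued (or ran recently); nothing to do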
How can I modify the behaviour of Django models to automatically lock and unlock database records when they are selected, and make this behaviour transparent to the programmer? I already know how to lock and unlock a record (database record locking), but I'd like to know where this code should be placed in the Django models. It should work for all(), filter(), exclude() and other queries, and unlocking should be invoked on save() and also for queries in which we don't save anything.
UPDATE: This application has several threads that can be run by two or more servers simultaneously, and I want to ensure that no record from the common database is processed by more than one thread. The threads search for certain records, then send some data through sockets, and finally update those records. In other words, besides the Django website there are standalone server processes.
You'll want to obtain the lock during the model's __init__(); this can be implemented fairly simply through inheritance. The model's Manager will have a QuerySet instantiated for that model, and your lock will be obtained as soon as the retrieved model is added to the query's _result_cache[] or when the query's iterator() is called. Of course, you'll have to make sure you forgo the lock contention if the model doesn't have an associated pk yet.
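A hypothetical sketch of that inheritance idea; acquire_record_lock() and release_record_lock() stand in for the record-locking code the question says already exists:

from django.db import models

class LockingModel(models.Model):
    class Meta:
        abstract = True

    def __init__(self, *args, **kwargs):
        super(LockingModel, self).__init__(*args, **kwargs)
        # instances materialized by a QuerySet arrive with a pk;
        # brand-new, unsaved instances don't, so skip the lock for them
        if self.pk is not None:
            acquire_record_lock(self._meta.db_table, self.pk)  # your code

    def save(self, *args, **kwargs):
        super(LockingModel, self).save(*args, **kwargs)
        release_record_lock(self._meta.db_table, self.pk)  # your code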
If you don't want to do DB-level locking, you can look at Django-cachebot. It handles model invalidation, but the most important thing is that it can keep model records in a common store for your threads, so you can implement a model-locking state and have that pushed to the store and queried by other threads.
I also have the urge to give you big ups for an odd question. I wouldn't try this at home, so I sincerely hope you know what you're doing!
I'm developing software using the Google App Engine.
I have some questions about the optimal design for the following issue: I need to create and save snapshots of some entities at regular intervals.
In the conventional relational db world, I would create db jobs which would insert new summary records.
For example, a job would insert a record for every active user that would contain his current score to the "userrank" table, say, every hour.
I'd like to know what's the best method to achieve this in Google App Engine. I know that there is the Cron service, but does it allow us to execute jobs which will insert/update thousands of records?
I think you'll find that snapshotting every user's state every hour isn't something that will scale well, no matter the framework. A more conventional environment will disguise this by letting you run longer tasks, but you'll still reach the point where it's not practical to take a snapshot of every user's data, every hour.
My suggestion would be this: add a 'last snapshot' field, and override the put() method of your model (assuming you're using Python; the same is possible in Java, but I don't know the syntax) so that whenever you update a record, it checks whether it has been more than an hour since the last snapshot and, if so, creates and writes a snapshot record.
In order to prevent concurrent updates creating two identical snapshots, you'll want to give the snapshots a key name derived from the time at which the snapshot was taken. That way, if two concurrent updates try to write a snapshot, one will harmlessly overwrite the other.
To get the snapshot for a given hour, simply query for the oldest snapshot newer than the requested period. As an added bonus, since inactive records aren't snapshotted, you're saving a lot of space, too.
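A hedged sketch of that suggestion with the old db API; UserScore and ScoreSnapshot are invented names, so adapt the fields to your schema:

import datetime

from google.appengine.ext import db

SNAPSHOT_INTERVAL = datetime.timedelta(hours=1)

class ScoreSnapshot(db.Model):
    user_id = db.StringProperty()
    score = db.IntegerProperty()
    taken_at = db.DateTimeProperty()

class UserScore(db.Model):
    user_id = db.StringProperty()
    score = db.IntegerProperty(default=0)
    last_snapshot = db.DateTimeProperty()

    def put(self, **kwargs):
        now = datetime.datetime.utcnow()
        if (self.last_snapshot is None
                or now - self.last_snapshot >= SNAPSHOT_INTERVAL):
            # key name derived from the hour: two concurrent updates
            # write the same entity, so one harmlessly overwrites the other
            ScoreSnapshot(
                key_name='%s-%s' % (self.user_id, now.strftime('%Y%m%d%H')),
                user_id=self.user_id,
                score=self.score,
                taken_at=now
            ).put()
            self.last_snapshot = now
        return db.Model.put(self, **kwargs)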
Have you considered using the remote API instead? This way you could get a shell to your datastore and avoid the timeouts. The Mapper class they demonstrate in that link is quite useful, and I've used it successfully to do batch operations on ~1500 objects.
That said, cron should work fine too. You do have a limit on the time of each individual request so you can't just chew through them all at once, but you can use redirection to loop over as many users as you want, processing one user at a time. There should be an example of this in the docs somewhere if you need help with this approach.
I would use a combination of Cron jobs and a looping url fetch method detailed here: http://stage.vambenepe.com/archives/549. In this way you can catch your timeouts and begin another request.
To summarize the article: the cron job calls your initial process, you catch the timeout error, and you call the process again, masked as a second URL. You have to ping-pong between two URLs to keep App Engine from thinking you are in an accidental loop. You also need to be careful not to loop infinitely: make sure your updating loop has an end state, since running forever would put you over your quotas pretty quickly.
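Roughly, the ping-pong looks like this; the handler names, URLs and process_batch() are all invented, and the grace period after DeadlineExceededError is short, so treat this strictly as a sketch of the article's idea:

from google.appengine.api import urlfetch
from google.appengine.ext import webapp
from google.appengine.runtime import DeadlineExceededError

class UpdateLoopA(webapp.RequestHandler):  # UpdateLoopB mirrors this
    def get(self):
        start_key = self.request.get('start')
        try:
            next_key = process_batch(start_key)  # assumed batch worker
        except DeadlineExceededError:
            next_key = start_key  # redo this batch on the next hop
        if next_key:  # an empty value is the loop's end state
            # hit the *other* URL so App Engine doesn't flag a loop
            urlfetch.fetch('http://yourapp.example.com/update_loop_b'
                           '?start=%s' % next_key)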