I am writing a simple live-chat app using Django. I keep data about chat sessions in a static variable on my Chat class. Locally this works fine.
I have deployed a test version of the app on Heroku, but Heroku is a cloud platform: class variables are not shared between different threads or processes, so there is no synchronization between them.
So I decided to use memcached. But I can't find out whether django.core.cache allows searching for a key in the cache, or iterating through the entire cache to check values. What is the best way to solve this problem?
Memcached only allows you to get/set entries by their keys; you can't iterate over entries to check something. But if your cache keys are sequential (sess1, sess2, etc.), you can check for their existence in a loop:
from django.core.cache import cache

for i in range(1000):
    sess = cache.get('sess%s' % i)  # returns None when the key is absent
    # some logic
But anyway, it seems like a bad design decision. I don't have enough information about what you're doing, but I'd guess some sort of persistent storage (like a database) would work nicely. You can also consider http://redis.io/, which has more features than memcached but is still very fast.
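For illustration, here's a minimal sketch of what that could look like with redis-py; all the names here (save_session, chat:sess:*, chat:sessions) are made up for the example, not taken from your code:

import redis

r = redis.StrictRedis(host='localhost', port=6379)

def save_session(session_id, data):
    # data is a flat dict describing one chat session
    r.hmset('chat:sess:%s' % session_id, data)
    r.sadd('chat:sessions', session_id)  # track ids so sessions can be listed

def iter_sessions():
    # enumerate every stored session -- the operation memcached can't do
    for session_id in r.smembers('chat:sessions'):
        yield r.hgetall('chat:sess:%s' % session_id)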
When using sessions, Flask requires a secret key. In every example I've seen, the secret key is somehow generated and then stored either in source code or in a configuration file.
What is the reason to store it permanently? Why not simply generate it when the application starts?
import os
app.secret_key = os.urandom(50)
The secret key is used to sign the session cookie. If you had to restart your application and regenerated the key, all existing sessions would be invalidated. That's probably not what you want (or at least, not the right way to go about invalidating sessions). A similar case can be made for anything else that relies on the secret key, such as the tokens generated by itsdangerous to build password-reset URLs, for example.
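Here's a minimal sketch of that failure mode with itsdangerous (the serializer choice and the payload are just for illustration):

from itsdangerous import URLSafeTimedSerializer, BadSignature

# token minted while the app ran with its first key
s1 = URLSafeTimedSerializer('key-before-restart')
token = s1.dumps({'user_id': 42})

# after a restart that regenerated the secret, the token no longer verifies
s2 = URLSafeTimedSerializer('key-after-restart')
try:
    s2.loads(token)
except BadSignature:
    print('every outstanding reset link is now dead')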
The application might need to be restarted because of a crash, or because the server rebooted, or because you are pushing a bug fix or new feature, or because the server you're using spawns new processes, etc. So you can't rely on the server being up forever.
The standard practice is to have some throwaway key committed to the repo (so that there's something there for dev machines) and then to set the real key in the local config when deploying. This way, the key isn't leaked and doesn't need to be regenerated.
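For example (a sketch; reading SECRET_KEY from the environment is one common convention, not the only way to do it):

import os
from flask import Flask

app = Flask(__name__)
# throwaway default so dev machines work out of the box; deployments
# override it via the environment (or a config file kept out of the repo)
app.secret_key = os.environ.get('SECRET_KEY', 'dev-only-throwaway-key')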
There's also the case of running secondary systems that depend on the app context, such as Celery for running background tasks, or multiple load balanced instances of the application. If each running instance of the application has different settings, they may not work together correctly in some cases.
Assume a Flask application that lets a client build an object server-side through a number of wizard-like steps.
I would like to create an initial object server-side and build it up step by step from the client-side input, keeping the object 'alive' throughout the whole build process. A unique id will be associated with each new object / wizard run.
Since the Flask application is served through WSGI on Apache, requests can go through multiple instances of the Flask application / multiple threads.
How do I keep this object alive server-side, or in other words, how do I keep some kind of global state?
I'd like to keep the object in memory, not serialize/deserialize it to/from disk. No cookies either.
Edit:
I'm aware of the flask.g object, but since it lives on a per-request basis, it's not a valid solution.
Perhaps it is possible to use some kind of cache layer, e.g.:
from werkzeug.contrib.cache import SimpleCache
cache = SimpleCache()
Is this a valid solution? Does this layer live across multiple app instances?
You're looking for sessions.
You said you don't want to use cookies, but did you mean you didn't want to store the data in a cookie, or are you avoiding cookies entirely? For the former case, take a look at server-side sessions, e.g. Flask-KVSession:
Instead of storing data on the client, only a securely generated ID is stored on the client, while the actual session data resides on the server.
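Wiring it up looks roughly like this (a sketch following the Flask-KVSession docs, with Redis as the backing store):

import redis
from flask import Flask
from simplekv.memory.redisstore import RedisStore
from flask_kvsession import KVSessionExtension

store = RedisStore(redis.StrictRedis())

app = Flask(__name__)
app.secret_key = 'change-me'  # still needed to sign the session id cookie

# replaces Flask's cookie session with a server-side one; every app
# instance pointing at the same Redis sees the same session data
KVSessionExtension(store, app)

Each wizard step can then read and mutate something like session['wizard_state'], keyed by your unique wizard id, and whichever worker handles the request will see the same data.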
I'm using NDB and Python on Google App Engine. What is the proper way to update a property on multiple entities with the same value? The NDB equivalent of:
UPDATE notifications SET read = true WHERE user_id = 123;
The use case: I have fan-out notifications, and a specific user wants to mark all of their notifications as read (potentially hundreds). I know I could use get_async and put_async to fetch each unread notification and set it as read, but I'm worried about the latency created by serializing/deserializing potentially hundreds of entities.
Any advice is greatly appreciated.
You can call a function for each entity with the map() method of Query. For best performance, don't forget the _async variants.
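A rough sketch of that approach (the Notification model here is a stand-in for your own):

from google.appengine.ext import ndb

class Notification(ndb.Model):  # hypothetical model for the example
    user_id = ndb.IntegerProperty()
    read = ndb.BooleanProperty(default=False)

@ndb.tasklet
def mark_read(notification):
    notification.read = True
    yield notification.put_async()

query = Notification.query(Notification.user_id == 123,
                           Notification.read == False)
query.map(mark_read)  # the tasklet lets the puts run concurrently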
But one of the most useful services of GAE is Task Queues, which fit cases like this especially well. If you combine query cursors and the deferred library, you can easily process any number of entities.
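A sketch of the cursor + deferred combination, reusing the hypothetical Notification model from above:

from google.appengine.ext import deferred, ndb

def mark_all_read(user_id, cursor=None):
    query = Notification.query(Notification.user_id == user_id,
                               Notification.read == False)
    batch, next_cursor, more = query.fetch_page(100, start_cursor=cursor)
    for n in batch:
        n.read = True
    ndb.put_multi(batch)
    if more:
        # chain the next batch as a new task; the cursor survives pickling
        deferred.defer(mark_all_read, user_id, next_cursor)

# kick it off from a request handler:
deferred.defer(mark_all_read, 123)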
I want to create a Django application with some logged-in users. On the other side, since I want some real-time capabilities, I want to use an Express.js application.
Now, the problem is, I don't want unauthenticated users to access the Express.js application's data. So I have to share a session store between the Express.js and Django applications.
I thought using Redis would be a good idea, since volatile keys are a perfect fit for this, and I already use Redis for another part of the application.
In the Express.js application, I'd have this kind of code:
[...]
this.sessionStore = new RedisStore();
this.use(express.session({
    // private signing key
    secret: 'keyboard cat', // I'm worried about this for session sharing
    store: this.sessionStore,
    cookie: {
        maxAge: 1800000
    }
}));
[...]
On the Django side, I'd think of using the django-redis-sessions app.
So, is this a good idea? Won't there be any problems? I'm especially unsure about the secret key: will both apps actually share the same sessions?
You will have to write a custom session store for either Express or Django. Django, by default (as well as with django-redis-sessions), stores sessions as pickled Python objects, while Express stores sessions as JSON strings. Express, with connect-redis, stores sessions under the key sess:sessionId in Redis, while Django (not totally sure about this) seems to store them under the key sessionId. You might be able to use django-redis-sessions as a base and override encode, decode, _get_session_key, _set_session_key, and perhaps a few others. You would also have to make sure that cookies are stored and signed in the same way.
Obviously, it will be way harder to create a session store for Express that can pickle and unpickle Python objects.
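As a starting point for the Django side, something like this might work (an untested sketch; it assumes django-redis-sessions exposes encode/decode as described above and honors a SESSION_REDIS_PREFIX setting):

import json
from redis_sessions.session import SessionStore as RedisSessionStore

class SessionStore(RedisSessionStore):
    # store sessions as JSON instead of pickled Python, to match Express
    def encode(self, session_dict):
        return json.dumps(session_dict)

    def decode(self, session_data):
        try:
            return json.loads(session_data)
        except ValueError:
            return {}

# settings.py (sketch):
# SESSION_ENGINE = 'myapp.json_session'   # wherever the class above lives
# SESSION_REDIS_PREFIX = 'sess'           # keys become sess:<id>, like connect-redis

That still leaves the cookies themselves: express-session signs the session id cookie, so the Django side would also need to produce and accept cookies in that signed format.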