I've heard of redis-cache but how exactly does it work? Is it used as a layer between django and my rdbms, by caching the rdbms queries somehow?
Or is it supposed to be used directly as the database? I doubt that, since that GitHub page doesn't cover any login details or setup; it just tells you to set some config property.
This Python module for Redis has a clear usage example in the readme: http://github.com/andymccurdy/redis-py
Redis is designed to be a RAM cache. It supports basic GET and SET of keys plus the storage of collections such as hashes (dictionaries), lists, and sets. You can cache RDBMS queries by storing their output in Redis; the goal is to speed up your Django site. Don't start using Redis or any other cache until you need the speed - don't prematurely optimize.
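For illustration, here's a minimal sketch of that pattern with redis-py; the `Article` model, the key name, and the 10-minute TTL are all made up for the example:

```python
import json

import redis

from myapp.models import Article  # hypothetical Django model

r = redis.Redis(host='localhost', port=6379, db=0)

def get_popular_articles():
    # Try the cache first.
    cached = r.get('popular_articles')
    if cached is not None:
        return json.loads(cached)
    # Cache miss: run the RDBMS query via the ORM, then store the result.
    articles = list(Article.objects.order_by('-views').values('id', 'title')[:10])
    r.set('popular_articles', json.dumps(articles), ex=600)  # keep for 10 minutes
    return articles
```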
Just because Redis stores things in-memory does not mean that it is meant to be a cache. I have seen people using it as a persistent store for data.
The fact that it can be used as a cache is a hint that it works well as high-performance storage. If your Redis system goes down, though, you might lose data that has not yet been written back to disk. There are ways to mitigate such dangers, e.g. a hot-standby replica.
If your data is 'mission-critical', say you run a bank or a shop, Redis might not be the best pick for you. But if you're writing a high-traffic game with persistent live data or some social-interaction feature, and you judge the probability of data loss to be acceptable, then Redis might be worth a look.
Anyway, the point remains, yes, Redis can be used as a database.
Redis is basically an 'in memory' KV store with loads of bells and whistles. It is extremely flexible. You can use it as a temporary store, like a cache, or a permanent store, like a database (with caveats as mentioned in other answers).
When combined with Django the best/most common use case for Redis is probably to cache 'responses' and sessions.
There's a backend here https://github.com/sebleier/django-redis-cache/ and excellent documentation in the Django docs here: https://docs.djangoproject.com/en/1.3/topics/cache/ .
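If you go with django-redis-cache, the settings look roughly like this (a sketch; the host/port and the cache-backed session engine are assumptions for a simple single-server setup):

```python
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': 'localhost:6379',
    },
}

# Optional: store sessions in the same cache.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
```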
I've recently started using https://github.com/erussell/django-redis-status to monitor my cache - works a charm. (Configure maxmemory on redis or the results aren't so very useful).
You can also use Redis as a queue for distributed tasks in your Django app. You can use it as a message broker for Celery or Python RQ.
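As a rough sketch of the queue idea with Python RQ (the `send_report` task and connection details are invented for the example):

```python
import redis
from rq import Queue

from myapp.tasks import send_report  # hypothetical task function

# For Celery you'd instead point the broker at Redis,
# e.g. CELERY_BROKER_URL = 'redis://localhost:6379/0' in settings.py.
q = Queue(connection=redis.Redis(host='localhost', port=6379))
q.enqueue(send_report, user_id=42)  # an rq worker process picks this up later
```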
Redis as a Primary database
Yes, you can use the Redis key-value store as a primary database.
Redis does not only store key-value pairs; it also supports different data structures such as:
List
Set
Sorted set
Hashes
Bitmaps
Hyperloglogs
Redis Data types Official doc
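A quick redis-py sketch of a few of those structures (the key names and values are arbitrary):

```python
import redis

r = redis.Redis()

# Hash: store an object's fields under one key.
r.hset('user:1000', 'name', 'alice')
r.hset('user:1000', 'score', 10)

# List: push onto a simple activity feed.
r.lpush('user:1000:feed', 'logged in', 'posted a photo')

# Set: track unique visitors for a day.
r.sadd('visitors:2021-01-01', 'user:1000', 'user:1001')

# Sorted set: a leaderboard ordered by score.
r.zadd('leaderboard', {'alice': 10, 'bob': 25})
print(r.zrevrange('leaderboard', 0, 2, withscores=True))
```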
Redis is an in-memory key-value store, so be aware that if the Redis server fails, your data can be lost.
Redis can also persist data to disk; check the official doc:
Redis Persistence Official doc
Redis as a Cache
Yes, Redis sits between Django and the RDBMS.
How it works
given a URL, try finding that page in the cache
if the page is in the cache:
    return the cached page
else:
    generate the page
    save the generated page in the cache (for next time)
    return the generated page
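In Django code that pattern might look roughly like this with the low-level cache API (the `generate_page` helper and the 5-minute timeout are placeholders):

```python
from django.core.cache import cache
from django.http import HttpResponse

def cached_page(request):
    html = cache.get('expensive_page')
    if html is None:
        html = generate_page()                     # hypothetical: expensive to build
        cache.set('expensive_page', html, 60 * 5)  # save it for next time (5 minutes)
    return HttpResponse(html)
```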
Django’s cache framework Official Doc
How to use Redis with Django
We can use the Redis Python client redis-py in a Django application.
Redis python client redis-py Github
We can use django-redis as the Django cache backend.
django-redis builds on redis-py and adds extra features for Django applications.
Django-redis doc Github
Other libraries also exist.
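A typical django-redis cache configuration looks something like this (the host, port, and database number are assumptions):

```python
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        },
    },
}
```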
Redis use cases and data types
Some use cases
Session cache
Real time analytics
Web caching
Leaderboards
Top Redis Use Cases by Core Data structure types
Big Tech companies using Redis
Twitter, GitHub, Weibo, Pinterest, Snapchat, Craigslist, Digg, StackOverflow, Flickr
Related
Suppose I need to build a web application where each client will be simulating their trading strategy using historical stock data. The data will be provided by a 3rd party vendor over the internet: for example, fetching historical data for a single stock based on stock ticker through HTTP call. Also, I am planning to use Django as a back-end framework.
Here is my question:
I would like to be able to prefetch and cache the data on the server side, so that each client's request would not need to make the HTTP call again, but could get the data from a shared resource. I guess storing it in a database, like SQL, could be one solution. However, is there a way to use memory shared between clients in Django on the backend side? Any pointer or suggestion would be very helpful. Thanks.
This sounds like a fine thing to store in a shared cache, like memcache or redis (or, yes, even an SQL database-backed cache).
You should read https://docs.djangoproject.com/en/dev/topics/cache/; this can explain how you can store the result of your HTTP call under a cache key and then retrieve it. The caching works the same regardless of what backend (memcache, redis, local memory, SQL DB) you use, so you can test this out with the local-memory cache or DB cache and, if you like it, move to a better solution like memcache.
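For the stock-data case, a sketch of that idea could look like the following; the vendor URL, the use of `requests`, and the one-hour timeout are assumptions, not any particular vendor's API:

```python
import requests
from django.core.cache import cache

def get_history(ticker):
    key = 'history:%s' % ticker
    data = cache.get(key)
    if data is None:
        # Only one HTTP call per ticker per hour; every client shares the cached result.
        resp = requests.get('https://vendor.example.com/history',
                            params={'symbol': ticker})
        data = resp.json()
        cache.set(key, data, 60 * 60)
    return data
```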
There are many caching strategies you could use here, but rather than storing the data in a SQL database, a nice place to start would be something like Memcached: https://docs.djangoproject.com/en/dev/topics/cache/#memcached. Without more information I can't get more specific than that.
I want to operate on cache values using the Redis cache system. Redis has many good operations of its own, but Django's cache module doesn't support them. I know that to use the Redis methods I can import redis, create a client, and call them directly, but that way I have to create a client and connect to the Redis server on every request. Is this an efficient way to use Redis in Django under a large volume of requests?
Other people have faced the same issue and have created Redis cache backends for Django so you can just import one of those, eg
https://django-redis.readthedocs.org/en/latest/
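django-redis also exposes its pooled redis-py client, so you can call native Redis commands without opening a new connection per request. A sketch (key names are arbitrary):

```python
from django_redis import get_redis_connection

con = get_redis_connection('default')  # pooled client for the 'default' cache
con.incr('page_hits')
con.zadd('leaderboard', {'alice': 10})
```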
I was wondering what is the 'best' way of passing data between views. Is it better to create invisible fields and pass it using POST or should I encode it in my URLS? Or is there a better/easier way of doing this? Sorry if this question is stupid, I'm pretty new to web programming :)
Thanks
There are different ways to pass data between views. Actually this is not much different than the problem of passing data between two different scripts, and of course some concepts of inter-process communication come in as well. Some things that come to mind are:
GET request - first request hits view1 -> view1 sends data to the browser -> browser redirects to view2
POST request - (as you suggested) Same flow as above but is suitable when more data is involved
Django session variables - this is the simplest to implement (see the sketch after this list)
Client-side cookies - can be used, but there are limitations on how much data can be stored.
Shared memory at web server level - tricky but can be done.
REST APIs - if you can have a stand-alone server, that server can expose REST APIs to invoke views.
Message queues - again, if a stand-alone server is possible, message queues would work too, i.e. the first view (API) takes requests and pushes them onto a queue, and some other process pops messages off and hits your second view (another API). This decouples the first and second view APIs and can manage load better.
Cache - maybe a cache like memcached can act as a mediator. But if one is going this route, it's better to use Django sessions as they hide a whole lot of implementation details; if scale is a concern, memcached or redis are good options.
Persistent storage - store data in some persistent storage mechanism like mysql. This decouples your request taking part (probably a client facing API) from processing part by having a DB in the middle.
NoSQL storage - if write rates are on the order of hundreds of thousands per second, MySQL performance becomes a bottleneck (there are ways around this by tweaking the MySQL config, but it's not easy). Then NoSQL DBs could be an alternative, e.g. DynamoDB, Redis, HBase, etc.
Stream Processing - like Storm or AWS Kinesis could be an option if your use-case is real-time computation. In fact you could use AWS Lambda in the middle as a server-less compute module which would read off and call your second view API.
Write data to a file - then the next view can read from that file (real ugly). This should probably never be done; it's listed here only as something to avoid.
Can't think of any more. Will update if I get any. Hope this helps in some way.
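A sketch of the session option from the list above (view names, URL, and keys are made up):

```python
# views.py
from django.http import HttpResponseRedirect
from django.shortcuts import render

def step_one(request):
    # Stash the data in the session, then redirect.
    request.session['draft'] = {'name': 'example', 'amount': 42}
    return HttpResponseRedirect('/step-two/')

def step_two(request):
    # Read it back in the next request; pop() removes it once consumed.
    draft = request.session.pop('draft', None)
    return render(request, 'step_two.html', {'draft': draft})
```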
I'm building a website that doesn't require a database because a REST API "is the database". (Except you don't want to be putting site-specific things in there, since the API is used mostly by mobile clients.)
However there's a few things that normally would be put in a database, for example the "jobs" page. You have master list view, and the detail views for each job, and it should be easy to add new job entries. (not necessarily via a CMS, but that would be awesome)
e.g. example.com/careers/ and example.com/careers/77/
I could just hardcode this stuff in templates, but that's not DRY - you have to update the master template and the detail template every time.
What do you guys think? Maybe a YAML file? Or any better ideas?
Thx
Why not still keep it in a database? Your remote REST store is all well and funky, but if you've got local data, there's nothing (unless there's a spec saying so) to stop you storing some stuff in a local db. Doesn't have to be anything very glamorous - could be sqlite, or you could have some fun with redis, etc.
You could use MemcacheDB via the Django cache interface.
For example:
Set the cache backend as memcached in your django settings, but install/use memcachedb instead.
Django can't tell the difference between the two because they provide the same interface (at least last time I checked).
MemcacheDB is persistent, safe for multithreaded Django servers, and won't lose data during server restarts, but it's just a key-value store, not a complete database.
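The settings side looks exactly like a normal memcached setup; you just point it at the MemcacheDB server instead (the port below is MemcacheDB's usual default - adjust it to your install):

```python
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:21201',  # MemcacheDB listening here instead of memcached
    },
}
```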
Some alternatives built into the Python standard library are listed in the Data Persistence chapter of the documentation. Still another is JSON.
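If you'd rather avoid a database entirely, one small sketch of the flat-file idea: keep the jobs in a JSON (or YAML) file and have both the list and detail views read from it (the file name and fields are made up):

```python
# views.py
import json

from django.http import Http404
from django.shortcuts import render

def load_jobs():
    with open('jobs.json') as f:  # e.g. [{"id": 77, "title": "Backend dev"}, ...]
        return json.load(f)

def job_list(request):
    return render(request, 'careers/list.html', {'jobs': load_jobs()})

def job_detail(request, job_id):
    jobs = {job['id']: job for job in load_jobs()}
    job = jobs.get(int(job_id))
    if job is None:
        raise Http404
    return render(request, 'careers/detail.html', {'job': job})
```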
I'm writing a simple app with AppEngine, using Python. After a successful insert by a user and redirect, I'd like to display a flash confirmation message on the next page.
What's the best way to keep state between one request and the next? Or is this not possible because AppEngine is distributed? I guess the underlying question is whether AppEngine provides a persistent session object.
Thanks
Hannes
No session support is included in App Engine itself, but you can add your own session support.
GAE Utilities is one library made specifically for this; a more heavyweight alternative is to use django sessions through App Engine Patch.
The ways to reliably keep state between requests are memcache, the datastore, or going through the user (cookies or POST/GET).
You can use the runtime cache too, but it is very unreliable: you don't know whether a request will end up in the same runtime instance, and the runtime can drop its entire cache whenever it feels like it.
I really wouldn't use the runtime cache except for very specific situations. For example, I use it to cache the serialization of objects to JSON, since that is pretty slow, and if the cache is gone I can regenerate the result easily.
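A sketch of the memcache approach for a one-shot flash message (the per-user key scheme is just for illustration; in practice you'd key on whatever identifies the session):

```python
from google.appengine.api import memcache

def set_flash(user_id, message):
    # The message only needs to survive the redirect, so a short expiry is fine.
    memcache.set('flash:%s' % user_id, message, time=60)

def pop_flash(user_id):
    key = 'flash:%s' % user_id
    message = memcache.get(key)
    if message is not None:
        memcache.delete(key)  # show it once, then forget it
    return message
```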