I have a database that stores rows generated every second. I would like the website to update whenever new content arrives in the database, without refreshing the page: new rows should simply be appended to the current page. What is the best approach to this?
You should use AJAX requests, with either of two techniques:
Periodically have the page request updates (and update the table with the result in JavaScript; the backend remains plain Django); or
Use long-polling, a technique commonly known as "comet", to keep a connection open with the server and receive a server event when there is an update. The backend for this can be a bit tricky in frameworks designed around the request/response pattern, but you can find leads on how to do it in Python here.
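For the first technique, here is a minimal sketch of the Django side, assuming a hypothetical Row model with a text field; the page polls this view with the highest id it has already rendered:

```python
# views.py -- sketch only; Row and its "text" field are hypothetical
from django.http import JsonResponse

from .models import Row

def new_rows(request):
    """Return rows added since the id the client last saw."""
    since = int(request.GET.get("since", 0))
    rows = Row.objects.filter(id__gt=since).order_by("id")
    return JsonResponse({
        "rows": [{"id": r.id, "text": r.text} for r in rows],
    })
```

The JavaScript on the page calls this endpoint every few seconds and appends whatever comes back to the table.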
I am working on a project that uses Django and Django REST Framework. In one of the views there's a method F() that does the following:
Fetches data from the database (read operation)
Sends a create (POST) request to a 3rd party API. (although not local, this is a write operation and this is where a race condition might take place)
Returns JSON data
I'd like F() to be atomic; in other words, if the server receives multiple simultaneous requests for this view, it should handle them one at a time and not allow multiple threads to execute this block of code concurrently. How can this be achieved? I have read that Django provides transaction.atomic(), but that only guarantees atomicity of database transactions; what I need is atomicity for a whole block of code, regardless of whether it accesses the database.
The concept you are looking for is a "mutex" or a "lock". This article may guide you in the right direction https://lincolnloop.com/blog/distributed-locking-django/
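For example, one simple lock in Django can be built on the cache, assuming a shared cache backend such as Redis or Memcached (the key name and helper functions below are illustrative, not from the question or the article):

```python
# A rough sketch of a cache-based lock; not from the linked article.
import time
from contextlib import contextmanager

from django.core.cache import cache

@contextmanager
def simple_lock(key, timeout=30):
    # cache.add is atomic: it succeeds only if the key does not exist yet
    while not cache.add(key, "locked", timeout):
        time.sleep(0.1)  # another request holds the lock; wait and retry
    try:
        yield
    finally:
        cache.delete(key)

# Usage inside the view:
# with simple_lock("f-view-lock"):
#     data = read_from_db()            # hypothetical helpers
#     call_third_party_api(data)
```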
I'm looking for a way to scan a website and instantly detect an update, without having to refresh the page. So when a new post is pushed to the webpage, I'd like to be notified immediately. Is there a way to do that without constantly refreshing the page?
Cheers
What browser are you using? Chrome has an auto-refresh extension; try a Google search for it. It's very easy to set up. It's more of a timed refresh that you can program, but it works for situations like the one you're describing.
Without knowing a bit more about your task, it's hard to give you a clear answer. Typically you would set up some kind of API to determine if data has been updated, rather than scraping the contents of a website directly. See if an API exists, or if you could create one for your purpose.
Using an API
Write a script that calls the API every minute or so (or more often if necessary). Every time you call the API, save the result. Then compare the previous result to the new result - if they're different then the data has been updated.
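A rough sketch of such a script in Python (the URL and interval are placeholders):

```python
# Poll an API and report when its response changes.
import time

import requests

API_URL = "https://example.com/api/posts"  # placeholder endpoint

previous = None
while True:
    current = requests.get(API_URL, timeout=10).json()
    if previous is not None and current != previous:
        print("Data has been updated!")
    previous = current
    time.sleep(60)  # poll once a minute
```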
Scraping a Website
If you do have to scrape a website, this is possible. If you execute an HTTP GET request against a webpage, the response will contain the page's HTML, which you can parse and traverse to determine its contents. As in the API example, you can write a script that executes the HTTP request every minute or so, saves the state, and compares it to the previous state. There are numerous libraries out there to help perform HTTP requests and traverse the DOM, but without knowing your tech stack I can't really recommend anything.
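If Python fits your stack, the same loop for a raw webpage might look like this (the URL is a placeholder; hashing the response body is a cheap way to detect any change, and you would parse the HTML instead if you only care about part of the page):

```python
# Detect changes to a webpage by hashing its HTML.
import hashlib
import time

import requests

PAGE_URL = "https://example.com/blog"  # placeholder

previous_hash = None
while True:
    body = requests.get(PAGE_URL, timeout=10).text
    current_hash = hashlib.sha256(body.encode("utf-8")).hexdigest()
    if previous_hash is not None and current_hash != previous_hash:
        print("The page has changed!")
    previous_hash = current_hash
    time.sleep(60)
```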
I have a RethinkDB database. New data arrives in it every five minutes.
I want to create a website to inspect this data flow from RethinkDB in real time.
That is, while someone is viewing the webpage, the data shown should update automatically without refreshing the page.
I know there are several ways to make a page real-time, such as Django Channels or websockets. However, Django's ORM does not support RethinkDB.
Sorry, I am a layman at building websites and may express things inaccurately.
Can someone give me a keyword or hint?
If you make your question more specific, the community here will be able to offer you better support.
However, here is a general solution to your problem.
You will need to do two things:
1. Create a backend API that allows you to:
   - Check if new data has been added to the database
   - Fetch new data via a REST API request
2. Make frontend AJAX requests to this API that:
   - Fetch the data
   - Periodically (every 30 seconds or so) check if there is new data
   - Fetch the data again if new data is detected
To do this with Django as the backend, I would recommend using Django REST Framework to create your API.
This API should have two endpoints (sketched below):
- A list view of your data
- An endpoint returning the id and timestamp of the last datapoint
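A rough sketch of those two endpoints with Django REST Framework (the Datapoint model and its fields are hypothetical):

```python
# api.py -- sketch only; Datapoint, "timestamp" and "value" are hypothetical
from rest_framework import generics, serializers
from rest_framework.response import Response
from rest_framework.views import APIView

from .models import Datapoint

class DatapointSerializer(serializers.ModelSerializer):
    class Meta:
        model = Datapoint
        fields = ["id", "timestamp", "value"]

class DatapointList(generics.ListAPIView):
    """Endpoint 1: the list of your data."""
    queryset = Datapoint.objects.order_by("-timestamp")
    serializer_class = DatapointSerializer

class LatestDatapoint(APIView):
    """Endpoint 2: id and timestamp of the newest datapoint."""
    def get(self, request):
        latest = Datapoint.objects.latest("timestamp")
        return Response({"id": latest.id, "timestamp": latest.timestamp})
```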
Next you will have to create a frontend that uses JavaScript to make requests to these endpoints. When you fetch data, store the id and timestamp of the most recent datapoint, and use these to check whether there is new data.
I would recommend a JavaScript framework such as Angular or React, but depending on your needs these may be overkill.
EDIT:
Now that you have updated your question to be more specific, here is my advice. It sounds like your number one priority is RethinkDB and real-time data. Django is not well suited to this because it is not compatible with RethinkDB, although real-time support in Django has come a long way with Django Channels.
It sounds like you are early on in your project and have little to no Django codebase. I would recommend using Horizon along with RethinkDB. Horizon is a JavaScript backend built for real-time data from RethinkDB.
As a non-webdev, I'd really appreciate it if you could point me to the 'correct' way to do this.
I built an application that populates and updates a database periodically (SQLite, to be precise). I store leaderboards in my database and would like to display them on a webpage.
As my leaderboards can change all the time, I need the webpage to be dynamic, either on a time-based trigger or whenever the database changes.
I was about to use JavaScript for that, but realized that my database is hosted on my server, so client-side JS might not work on its own.
Any ideas on that?
As my app is built in Python, I'd prefer to avoid PHP solutions and use more 'trendy' technologies (JS, Ruby, Python, whatever).
Thanks!
OK, given the keywords I have now, here is an almost exact duplicate:
Notify user on database change? JavaScript/AJAX
This is what AJAX is for. You write JavaScript on the front end that uses AJAX to call your server-side code, which returns the database content; the JavaScript then updates your HTML when it receives the response.
You need to use JavaScript on the client side, not on the server. Your JavaScript code will make asynchronous calls (using AJAX) to check whether the DB has changed, and update the website accordingly.
You can either use JavaScript to poll the server side periodically, or use JavaScript to create a websocket connection (perhaps using a library like SockJS or Socket.IO) through which the server can actually push data to the client when it changes. I do this on a number of projects using Tornado's websocket support on the server side.
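For the websocket route, a bare-bones Tornado sketch (the route and handler names are illustrative):

```python
# Push updates to connected browsers with Tornado websockets.
import tornado.ioloop
import tornado.web
import tornado.websocket

clients = set()

class UpdatesHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        clients.add(self)

    def on_close(self):
        clients.discard(self)

def broadcast(message):
    """Call this (from the IOLoop) whenever the database changes."""
    for client in clients:
        client.write_message(message)

if __name__ == "__main__":
    app = tornado.web.Application([(r"/updates", UpdatesHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```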
I am working on a social-network type of application on App Engine, and would like to send multiple images to the client based on a single get request. In particular, when a client loads a page, they should see all images that are associated with their account.
I am using python on the server side, and would like to use Javascript/JQuery on the client side to decode/display the received images.
The difficulty is that I would like to perform only a single query on the server side (i.e. query for all images associated with a single user) and send all of the resulting images to the client as a single unit, which would then be broken up into the individual images. Ideally I would like to use something similar to JSON, but while JSON allows multiple "objects" to be sent in one response, it does not appear to support sending multiple images (or binary files).
Is there another way that I should be looking at this problem, or perhaps a different technology that I should be considering that might allow me to send multiple images to the client, in response to a single get request?
Thank you and Kind Regards
Alexander
The App Engine part isn't much of a problem (as long as the number of images and their total size don't exceed GAE's limits), but the user's browser is unlikely to know what to do to receive multiple payloads per GET request -- that's just not how the web works. I guess you could concatenate all the blobs/bytestreams (together with the metadata needed for the client to reconstruct them) and send that as one payload (still separate from the HTML / CSS / JavaScript you're also sending), as long as you can cajole JavaScript into splitting the megablob back into the individual images (but for that part you should open a separate question and tag it JavaScript, as Python has little to do with it, and GAE nothing at all).
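One concrete way to realize the "concatenate with metadata" idea, if you do go down that road, is to base64-encode each image into a single JSON payload (the field names here are illustrative, and note that base64 inflates the size by roughly a third):

```python
# Pack several images into one JSON payload via base64.
import base64
import json

def pack_images(images):
    """images: list of (name, raw_bytes) pairs."""
    return json.dumps({
        "images": [
            {"name": name, "data": base64.b64encode(data).decode("ascii")}
            for name, data in images
        ]
    })
```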
I would instead suggest accepting the fact that the browser (presumably via AJAX, as you mention in the tags) will send multiple requests, just as it does for every other webpage on the WWW, and focusing on optimizing the serving side -- the requests will be very close together in time, so you should use memcache to hold the not-yet-sent images and avoid multiple fetch-from-storage operations in your GAE app.
As an improvement to Alex's answer, there's no need to use memcache: Simply do a keys-only query to get a list of keys of images you want to send to the client, then use db.get() to fetch the image corresponding to the required key for each image request. This requires roughly the same amount of effort as a single regular query.
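A sketch of that pattern with the (legacy) App Engine db API; the Image model and user_id below are placeholders, not from the question:

```python
# Sketch only: one keys-only query, then per-image gets by key.
from google.appengine.ext import db

class Image(db.Model):  # hypothetical model
    owner = db.StringProperty()
    data = db.BlobProperty()

user_id = "some-user"  # placeholder; in reality the logged-in user

# One cheap keys-only query when the page is rendered:
keys = Image.all(keys_only=True).filter("owner =", user_id).fetch(100)

# Embed the keys in the page markup; the handler behind each <img> src
# then does a direct get by the key parsed from the request path:
image = db.get(keys[0])
```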
Trying to send all of the images in one request means that you will be fighting very hard against some of the fundamental assumptions of the web and browser technology. If you don't have a really, really compelling reason to do this, you should consider delivering one image per request. That already works now, no sweat, no effort, no wheels reinvented.
I can't think of a sensible way to do what you ask, but I can tell you that you are asking for pain in trying to implement the solution that you are describing.
Send the client URLs for all the images in one hit, and deal with it on the client. That fits with the design of the protocol, and still lets you only make one query. The client might, if you're lucky, be able to stream those back in its next request, but the neat thing is that it'll work (eventually) even if it can't reuse the connection for some reason (usually a busted proxy in the way).
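If you went the App Engine route, a one-query endpoint returning all the image URLs in one hit might look like this with webapp2 (the route is illustrative, and Image is the same hypothetical model as in the sketch above):

```python
# Sketch: one query, one JSON response listing per-image URLs (webapp2).
import json

import webapp2
from google.appengine.ext import db

class ImageListHandler(webapp2.RequestHandler):
    def get(self):
        user_id = self.request.get("user")  # placeholder; use real auth
        # Image: the hypothetical model from the sketch above
        keys = Image.all(keys_only=True).filter("owner =", user_id).fetch(100)
        self.response.headers["Content-Type"] = "application/json"
        # str(key) yields the websafe encoding of a db.Key
        self.response.write(json.dumps(
            {"images": ["/image/%s" % key for key in keys]}))
```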