I'm writing a web application with Django that exports data from a MySQL database to a CSV file. I can get the data, but the export takes a long time to complete. I don't want to wait for it to finish; I want to move on to the next activity while the data is still being fetched from the database in the background.
My setup : Python 3.6.5, Django 2.1
Thank you for helping me.
Celery is a Python package that will help you perform asynchronous tasks in Django. You can refer to First Steps With Celery for getting started. Also, I have covered a few setup issues and their solutions in this post.
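As a rough sketch, the export could be handed to a Celery task like the one below; the model name Record and the output path are placeholders, so adapt them to your own schema:

    # tasks.py -- a minimal sketch; Record and the output path are placeholders
    import csv

    from celery import shared_task

    from myapp.models import Record


    @shared_task
    def export_to_csv(path="/tmp/export.csv"):
        # Stream rows from MySQL and write them to the CSV file
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "name", "created_at"])
            for row in Record.objects.values_list("id", "name", "created_at").iterator():
                writer.writerow(row)
        return path

The view then calls export_to_csv.delay() and returns immediately, so the user can carry on while the worker finishes writing the file.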
My problem is that I am building a customized management system with Django (3.1) and Python (3.7.9), in which I pull data from a third-party tool. The tool does not send webhooks for all of the data I need for visualization and analysis.
The webhook only gives me bits of information, and I have to perform a GET request to their API to fetch the remaining details if they are not already in my database. They require a successful webhook response within 5 seconds, otherwise a retry is triggered.
If I make the GET request inside the webhook handler, the 5-second limit is exceeded. The solutions I came up with are Django middleware or Django Triggers, and I am a bit confused about which would be the better fit for my problem.
Note: I cannot lower the Django version, as I have to use async functions.
This would be a good use case for a task scheduler like Celery.
Django-triggers is an interface to the Celery scheduler, so it might be a good fit.
Keep in mind that Celery has to be run as a separate process alongside Django.
Another popular task scheduler is rq-scheduler.
This offers a simple implementation using Redis as a message queue. Note that load-balanced/multi-instance applications are not easily set up with RQ.
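For the webhook case above, a rough sketch with Celery could look like this; the task name, API URL, and field names are placeholders, not your real setup:

    # tasks.py -- hypothetical names; adjust the URL and persistence to your models
    import requests
    from celery import shared_task

    @shared_task
    def fetch_details(item_id):
        # The slow GET request runs in the worker, outside the webhook request cycle
        response = requests.get(f"https://third-party.example.com/api/items/{item_id}", timeout=10)
        response.raise_for_status()
        data = response.json()
        # persist data to your models here

    # views.py
    from django.http import JsonResponse
    from .tasks import fetch_details

    def webhook(request):
        item_id = request.POST.get("id")
        fetch_details.delay(item_id)  # enqueue and return well within the 5-second window
        return JsonResponse({"status": "accepted"})

The webhook view acknowledges immediately, and the worker fetches and stores the missing details afterwards.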
I need some direction as to how to achieve the following functionality using Django.
I want my application to enable multiple users to submit jobs to make calls to an API.
Each user job will require multiple API calls and will store the results in a db or a file.
Each user should be able to submit multiple jobs.
In case of a failure, such as the network being blocked or the API not returning results, I want the application to pause for a while and then resume completing that job.
Basically, I want the application to pick up from where it left off.
Any ideas on how I could implement this, or any technologies such as Celery I should be looking at? Even a suggestion of an open-source project where I can learn how to do this would be a great help.
You can do this with RabbitMQ and Celery.
This post might be helpful.
https://medium.com/@ffreitasalves/executing-time-consuming-tasks-asynchronously-with-django-and-celery-8578eebab356
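As a rough sketch of the retry behaviour you describe, a Celery task can pause and resume a failed API call on its own; the task name and API URL below are made up for illustration:

    # tasks.py -- a sketch only; the task name and API URL are placeholders
    import requests
    from celery import shared_task

    @shared_task(bind=True, max_retries=5, default_retry_delay=60)
    def fetch_page(self, job_id, page):
        try:
            response = requests.get(
                "https://api.example.com/data",
                params={"job": job_id, "page": page},
                timeout=10,
            )
            response.raise_for_status()
        except requests.RequestException as exc:
            # Network blocked or API error: wait a minute, then retry this exact call
            raise self.retry(exc=exc)
        # store response.json() in the database or a file here

Each user job enqueues one fetch_page call per API request, and failed calls are retried automatically, so the job effectively picks up where it left off.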
I've been searching and iterating a lot recently, trying to figure out how to show a real-time progress bar on the webpage for programs running on the backend server.
Right now I'm using Django to build the website, since I need to run Python programs on the server. Currently, when the user clicks the "submit" button, the Python program (written in views.py) takes about one minute to produce the result, so having a progress bar would really help here.
Hope I provided enough detail about my question and thank you in advance for any one who can help.
I think you can create a background task and use AJAX to poll its current status and present it to the user.
Take a look at the celery project: http://www.celeryproject.org/.
It may help you manage your background tasks and works great with django!
Hope it helps...
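As a rough sketch of that idea (the task and view names are invented for illustration), the task can publish its progress through Celery's task state, and a small view can expose it for the AJAX poll:

    # tasks.py -- illustrative only
    import time
    from celery import shared_task

    @shared_task(bind=True)
    def long_job(self, items):
        total = len(items)
        for i, item in enumerate(items, start=1):
            time.sleep(1)  # stand-in for the real one-minute computation
            # publish progress so the frontend can read it while the task runs
            self.update_state(state="PROGRESS", meta={"current": i, "total": total})
        return {"current": total, "total": total}

    # views.py
    from celery.result import AsyncResult
    from django.http import JsonResponse

    def job_progress(request, task_id):
        result = AsyncResult(task_id)
        return JsonResponse({"state": result.state, "info": result.info or {}})

The page polls job_progress every second or two via AJAX and sets the bar width from current/total.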
I have a basic Django web application running on Heroku. I would like to add a spider to crawl some sites (e.g. with Scrapy) on a schedule (e.g. via APScheduler) and load some Django database tables with the collected data.
Does anybody know of documentation or examples of the basics needed to achieve this kind of integration? I'm finding it very hard to figure out.
I have not used Scrapy at all, but I'm actually working with APScheduler and it's very simple to use. So my first guess would be to use a BackgroundScheduler (inside your Django app) and add a job to it that would execute a callable "spider" periodically.
The thing here is how could you embed a Scrapy project inside your Django app so you can access one of its "spiders" and effectively use it as a callable in your scheduled job.
I may not be helping much, but I'm just trying to give you some kickstart orientation. I'm pretty sure that if you carefully read Scrapy's documentation you'll find your way.
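For instance, a minimal sketch (the spider name, project path, and interval are placeholders) that starts a BackgroundScheduler and shells out to scrapy crawl, so the crawler runs in its own process:

    # scheduler.py -- a sketch; "myspider" and the paths are placeholders
    import subprocess
    from apscheduler.schedulers.background import BackgroundScheduler

    def run_spider():
        # Run the Scrapy project in a separate process to avoid Twisted reactor issues
        subprocess.run(["scrapy", "crawl", "myspider"], cwd="/path/to/scrapy_project")

    def start():
        scheduler = BackgroundScheduler()
        scheduler.add_job(run_spider, "interval", hours=6)
        scheduler.start()

You could call start() from an AppConfig.ready() method so the scheduler comes up with Django, and have the spider's item pipeline write into the same tables your Django models use.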
Best.
Hello :) My task is to build a web interface for a Python program with some massive calculations, which should be executed on a proper server, with the result sent to the browser once it is computed. My question is: is Django the right framework for that purpose? I tried to find out where Django executes scripts, but I haven't found a satisfying answer, so I hope to find one here.
Thank you for any attention.
Django is a server-side framework: the code in your views runs on the server, and only the rendered response is sent to the browser, so it fits your purpose.
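A minimal sketch of that (heavy_calculation is just a stand-in for the real program):

    # views.py -- heavy_calculation is a placeholder for the real computation
    from django.http import JsonResponse

    def heavy_calculation(n):
        # stand-in for the massive server-side computation
        return sum(i * i for i in range(n))

    def calculate(request):
        result = heavy_calculation(10_000_000)  # executed on the server, not in the browser
        return JsonResponse({"result": result})

If the calculation takes very long, you would typically move it into a background task (as in the answers above) and deliver the result to the browser when it completes.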