Scheduling a regular event: Cron/Cron alternatives (including Celery) - python

Something I've been interested in is running a certain set of actions at regular time intervals. Obviously, this is a task for cron, right?
Unfortunately, the Internet seems to be in a bit of disagreement there.
Let me elaborate a little about my setup. First, my development environment is in Windows, while my production environment is hosted on Webfaction (Linux). There is no real cron on Windows, right? Also, I use Django! And what's suggested for Django?
Celery of course! Unfortunately, setting up Celery has been more or less a literal nightmare for me - please see Error message 'No handlers could be found for logger “multiprocessing”' using Celery. And this is only ONE of the problems I've had with Celery. Others include a socket error which, as far as I can tell, I'm the only one ever to have run into.
Don't get me wrong, Celery seems REALLY cool. Unfortunately, there seems to be a lack of support, and some odd limitations built into its preferred broker, RabbitMQ. No matter how cool a program is, if it doesn't work, well, it doesn't work!
That's where I hope all of you can come in. I'd like to know about cron or a cron-equivalent, which can be set up similarly (preferably identically) in both a Windows and a Linux environment.
(I've been struggling with Celery for about two weeks now and unfortunately I think it's time to toss in the towel and give up on it, at least for now.)

I had the same problem, and held off trying to solve it with Celery (too complicated) or cron (external to the application), and ended up finding Advanced Python Scheduler. I've only just started using it, but it seems reasonably mature and stable, has decent documentation, and will take a number of scheduling formats (e.g. cron style).
From the documentation, here is how to run a function at a specific interval:
from apscheduler.scheduler import Scheduler

sched = Scheduler()
sched.start()  # starts the scheduler in a background thread

def hello_world():
    print "hello world"

# run hello_world every 10 seconds (this is the older APScheduler 2.x API)
sched.add_interval_job(hello_world, seconds=10)
This is non-blocking, and I run something pretty much identical by simply importing the module from my urls.py. Hope this helps.

A simple, non-Celery way to approach things would be to create custom django-admin commands to perform your asynchronous or scheduled tasks.
Then, on Windows, you use the at command to schedule these tasks. On Linux, you use cron.
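A minimal custom management command might look something like this (the app and command names are just placeholders):

# myapp/management/commands/refresh_data.py  (hypothetical app and command names)
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Run the work that needs to happen on a schedule."

    def handle(self, *args, **options):
        # put the actual task logic here
        self.stdout.write("Refreshing data...")

You would then schedule python manage.py refresh_data with cron on Linux (for example, a crontab line like */30 * * * * /path/to/env/bin/python /path/to/manage.py refresh_data) and with the at command on Windows.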
I'd also strongly recommend ditching Windows if you can for a development environment. Your life will be so much better on Linux or even Mac OSX. Re-purpose a spare or old machine with Ubuntu for example, or run Ubuntu in a VM on your Windows box.

https://github.com/andybak/django-cron
Triggered by a single cron task but all the scheduling and configuration is done in Python.

Django Chronograph is a great alternative. You only need to set up one cron job and then do everything from the Django admin; you can schedule your Django management commands there.

Related

How to schedule a job with Flask on Windows?

It seems that flask-crontab cannot run on my machine because there is no such thing as a Windows cron.
This is the error I get: ModuleNotFoundError: No module named 'fcntl'
If this is the case, how can I write a scheduled job in a Flask application?
Specifically, I need to be able to deploy the application (so it won't be running on Windows once it goes into production), but in the meantime I can't debug if I can't test the cron job on my own computer.
Any help is appreciated, e.g. pointing me to documentation that is useful, suggesting other extensions, etc. Thanks!
Pete
I would recommend you start using Celery.
Celery is a great library for job scheduling, whether you want a job to run periodically or on demand.
Follow this guide.
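As a rough sketch of what that looks like, assuming a broker such as Redis is available (the module name, broker URL, and schedule below are illustrative):

# tasks.py -- minimal Celery app; names and broker URL are placeholders
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def sync_job():
    print("running the scheduled job")

# call sync_job.delay() from application code for an on-request task,
# or let celery beat trigger it periodically:
app.conf.beat_schedule = {
    "sync-every-10-minutes": {
        "task": "tasks.sync_job",
        "schedule": 600.0,  # seconds
    },
}

You would run a worker with celery -A tasks worker and the periodic scheduler with celery -A tasks beat.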
You can use Apache Airflow, which is written in Python. Its documentation is here.

Using celery 4 without Redis

My site is written with Django. I need to run some tasks in the background of a container (I'm using EC2).
Recently I've been researching Celery, but it requires Redis or another queue server to run. That means I cannot use Celery, because I'm not allowed to install anything else.
Question: Can I set up Celery standalone? If yes, how do I do it? If not, is there an alternative that can run standalone?
The answer is no: you cannot use Celery without a broker (Redis, RabbitMQ, or any other from the list of supported brokers).
I am not aware of a service that does both (queue management AND an execution environment for your tasks). The best services follow the UNIX philosophy: "do one thing, and do it well". The service you describe would have to do two different, non-trivial things, and that is probably why such a service does not exist (at least not in the Python world).

Deploying multiple Gunicorn applications. Best process manager for easy one-click setup?

I am a Python programmer, and server administration has always been a bit hard for me to get into. I always read tutorials and, in practice, just repeated the steps each time I set up a new project. I always used uWSGI, but realized that Gunicorn is easier for simple projects like mine.
Recently, I successfully set up my first single gunicorn application with the help of this article: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
But what should I do if I want to launch another app with Gunicorn? Should I just make another systemd service file, like myproject.service? I'm looking for a convenient 'one-click' setup, so I can easily transfer my project to another machine or add more Gunicorn applications without much configuration. Or maybe I should use another process manager, like Supervisor? What is the best solution for a newbie like me?
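For reference, I assume a second service would just mirror the unit file from that tutorial; a minimal sketch, with all project names and paths being made up:

# /etc/systemd/system/otherproject.service  (hypothetical project name and paths)
[Unit]
Description=gunicorn daemon for otherproject
After=network.target

[Service]
User=myuser
Group=www-data
WorkingDirectory=/home/myuser/otherproject
ExecStart=/home/myuser/otherproject/venv/bin/gunicorn --workers 3 --bind unix:/home/myuser/otherproject/otherproject.sock otherproject.wsgi:application

[Install]
WantedBy=multi-user.target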
Sorry if my question's too dumb, but I'm really trying.
Thank you!

What is a robust way to execute long-running tasks/batches under Django?

I have a Django app that is intended to be run on Virtualbox VMs on LANs. The basic user will be a savvy IT end-user, not a sysadmin.
Part of that app's job is to connect to external databases on the LAN, run some python batches against those databases and save the results in its local db. The user can then explore the systems using Django pages.
Run time for the batches isn't all that long, but it runs to minutes, potentially tens of minutes, not seconds. Runs are infrequent; I think you could go days without needing a refresh.
This is not Celery's normal use case of long tasks that eventually push their results back into the web UI via AJAX and/or polling. It is more similar to a dev's occasional use of django-admin commands, but this time intended for an end user.
The user should be able to initiate a run of one or several of those batches when they want in order to refresh the calculations of a given external database (the target db is a parameter to the batch).
Until the batches are done for a given db, the app really isn't useable. You can access its pages, but many functions won't be available.
It is very important, from a support point of view, that the batches remain easily runnable at all times. Dropping to the VM's shell over SSH would probably require frequent handholding, which wouldn't be good; it is best if they can be launched from the Django web pages.
What I currently have:
Each batch is in its own script.
I can run it on the command line (via if __name__ == "__main__":).
The batches are also hooked up as celery tasks and work fine that way.
Given the way I have written them, it would be relatively easy for me to allow running them from subprocess calls in Python. I haven't really looked into it, but I suppose I could make them into django-admin commands as well.
The batches already have their own rudimentary status checks. For example, they can look at the calculated data and tell whether they have been run and display that in Django pages without needing to look at celery task status backends.
The batches themselves are relatively robust and I can make them more so. This is about their launch mechanism.
What's not so great:
In my Mac dev environment I find the celery/celerycam/rabbitmq stack to be somewhat unstable. It seems as if sometimes the rabbitmq daemon balloons in CPU/RAM use and then needs to be terminated. That mightily confuses the celery processes, and I find I have to kill -9 various tasks and relaunch them manually. Sometimes celery still works but celerycam doesn't, so there are no task updates. Some of these issues may be OSX-specific or may be due to the DEBUG flag being switched on for now, which celery warns about.
So then I need to run the batches on the command line, which is what I was trying to avoid, until the whole celery stack has been reset.
This might be acceptable on a normal website, with an admin watching over it. But I can't have that happen on a remote VM to which only the user has access.
Given that these are somewhat fire-and-forget batches, I am wondering if celery isn't overkill at this point.
Some options I have thought about:
writing a cleanup shell/Python script to restart rabbitmq/celery/celerycam and generally make the stack more robust, i.e. whatever is required to make Celery and company more stable. I've already used psutil to figure out whether the rabbit/celery processes are running and to display their status in Django.
Running the batches via subprocess instead and avoiding Celery. What about django-admin commands here? Does that make a difference? They would still need to be launched from the web pages (a rough sketch of this option follows this list).
an alternative task/process manager to Celery with less capability but also fewer moving parts?
not using subprocess but relying on Python's multiprocessing module? To be honest, I have no idea how that compares to launches via subprocess.
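As a rough sketch of the subprocess option mentioned above (the view, command name, and flag are hypothetical), a page-triggered launch could be as simple as:

# views.py -- hypothetical names; assumes the working directory is the project root
import subprocess
import sys

from django.http import HttpResponse

def run_batch(request):
    # start the batch as a separate process so the request returns immediately;
    # Popen does not wait for the child process to finish
    subprocess.Popen([sys.executable, "manage.py", "refresh_external_db",
                      "--database", request.GET.get("db", "default")])
    return HttpResponse("Batch started")

django.core.management.call_command would avoid the extra process, but it runs in-process and would block the request for the whole run, which for batches of tens of minutes seems worse.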
Environment:
nginx, WSGI, Ubuntu on VirtualBox, Chef to build the VMs.
I'm not sure how your Celery configuration makes it unstable, but it sounds like Celery is still the best fit for your problem. I'm using Redis as the queue system and, in my experience, it works better than RabbitMQ. Maybe you can try it and see if it improves things.
Otherwise, just use cron as the driver for your periodic tasks. Let it run your script periodically to update the database; your UI components can then poll the database with no conflict.

Pyramid (waitress) behind IIS randomly stops working when using APScheduler

I'm running a Pyramid application on Windows using Waitress as the app server and IIS as the Web Server (proxy). When I run the application, it works for a (seemingly) random amount of time before it just stops. It can go for days, even weeks at a time and then just stop, leaving IIS to throw a 502 error. When it stops, there's no way to restart it short of restarting Windows.
It's a small application which uses APScheduler to hit a couple of APIs to sync inventory between eBay/Amazon. I'm not entirely sure what is causing this problem, as there is no error shown in the logs. I had an older version of the application running (without APScheduler) and I didn't have this issue, so I'm assuming it has to do with APScheduler.
Has anybody else experienced this?
First let me say that I have not used APScheduler myself yet, and have next to no knowledge about running Python servers (or anything else, really) on Windows and IIS.
So I can only guess here, but it seems clear that your problem is related to APScheduler in some way. I could imagine that something goes wrong in a thread that APScheduler uses for your background tasks, and the hanging thread brings down your whole application, due to the GIL (global interpreter lock) in Python. This could happen, for example, when your thread(s) run into some kind of race condition.
Maybe processing starts before the previous iteration has finished, or you get a really big backlog, and that leads to problems when processing starts.
Anyway, I think task queues are much better suited for background processing in web applications, because they run separately and out of the context of your web server.
You can schedule a task as soon as it's triggered by some user action, and it will be processed as soon as a worker is available, and not be deferred until a certain point in time.
I would recommend giving Celery a try, but there are also other solutions available, many based on Redis.
Celery is really powerful, and has advanced features like periodic tasks and crontab style schedules - so you can probably use it to do what you are doing using APScheduler now.
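For example, a crontab-style entry in the beat schedule looks roughly like this (the app, broker URL, and task name are just placeholders):

from celery import Celery
from celery.schedules import crontab

app = Celery("myapp", broker="redis://localhost:6379/0")  # illustrative broker URL

app.conf.beat_schedule = {
    "sync-inventory-nightly": {
        "task": "tasks.sync_inventory",          # hypothetical task name
        "schedule": crontab(hour=3, minute=30),  # every day at 03:30
    },
}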
This looked very helpful for setting up Celery under Windows: http://mrtn.me/blog/2012/07/04/django-on-windows-run-celery-as-a-windows-service/
This may also prove helpful:
How to create Celery Windows Service?
Note: I haven't tried any of these myself, since I use Linux if I have a choice.
It's probably also possible to get APScheduler to work correctly, but I think it will be much easier to use Celery, because if a problem occurs in a worker you will be able to debug it much more easily than a problem occurring in a background thread. Celery can also be configured to automatically send you an email in case of an error.
