How to write a script in Python that outputs if celery is running on a machine (Ubuntu)?
My use case: I have a simple Python file with some tasks. I'm not using Django or Flask. I use supervisor to run the task queue. For example,
tasks.py
from celery import Celery, task

app = Celery('tasks')

@app.task()
def add_together(a, b):
    return a + b
Supervisor:
[program:celery_worker]
directory = /var/app/
command=celery -A tasks worker --loglevel=info
This all works. I now want to have a page which checks whether the celery/supervisor process is running, i.e. something like the following, maybe using Flask, so I can host the page and return a 200 status that lets me load balance.
For example...
check_status.py
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def status_check():
    # check supervisor is running
    if supervisor:
        return render_template('up.html')
    else:
        return render_template('down.html')

if __name__ == '__main__':
    app.run()
Update 09/2020: Jérôme updated this answer for Celery 4.3 here: https://stackoverflow.com/a/57628025/1159735
You can run the celery status command via code by importing the celery.bin.celery package:
import celery
import celery.bin.base
import celery.bin.celery
import celery.platforms

app = celery.Celery('tasks', broker='redis://')

status = celery.bin.celery.CeleryCommand.commands['status']()
status.app = status.get_app()

def celery_is_up():
    try:
        status.run()
        return True
    except celery.bin.base.Error as e:
        if e.status == celery.platforms.EX_UNAVAILABLE:
            return False
        raise e

if __name__ == '__main__':
    if celery_is_up():
        print('Celery up!')
    else:
        print('Celery not responding...')
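To tie this back to the check_status.py idea in the question, here is a rough sketch (the route and status codes are my own choice, not from the answer) that exposes the check as an HTTP endpoint for a load balancer, assuming it lives in the same module as the celery_is_up() helper above:

from flask import Flask

status_app = Flask(__name__)

@status_app.route('/')
def status_check():
    # Return 200 when a worker responds and 503 otherwise,
    # so a load balancer can take the node out of rotation.
    if celery_is_up():
        return 'Celery up!', 200
    return 'Celery not responding...', 503

if __name__ == '__main__':
    status_app.run()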
How about using subprocess? I'm not sure if it is a good idea:
>>> import subprocess
>>> output = subprocess.check_output('ps aux'.split())
>>> 'supervisord' in output
True
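A small variation of that idea as a helper function; note that under Python 3, check_output returns bytes, so decode before searching (the process names below are just examples):

import subprocess

def is_process_running(name):
    # 'ps aux' output is bytes under Python 3, so decode it before searching.
    output = subprocess.check_output(['ps', 'aux']).decode()
    return name in output

# e.g. is_process_running('supervisord') or is_process_running('celery')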
You can parse the process state from the output of supervisorctl status:
import subprocess

def is_celery_worker_running():
    ctl_output = subprocess.check_output('supervisorctl status celery_worker'.split()).strip()
    if ctl_output == 'unix:///var/run/supervisor.sock no such file':
        # supervisord not running
        return False
    elif ctl_output == 'No such process celery_worker':
        return False
    else:
        state = ctl_output.split()[1]
        return state == 'RUNNING'
Inspired by @vgel's answer, using Celery 4.3.0.
import celery
import celery.bin.base
import celery.bin.control
import celery.platforms

# Importing Celery app from my own application
from my_app.celery import app as celery_app

def celery_running():
    """Test Celery server is available

    Inspired by https://stackoverflow.com/a/33545849
    """
    status = celery.bin.control.status(celery_app)
    try:
        status.run()
        return True
    except celery.bin.base.Error as exc:
        if exc.status == celery.platforms.EX_UNAVAILABLE:
            return False
        raise

if __name__ == '__main__':
    if celery_running():
        print('Celery up!')
    else:
        print('Celery not responding...')
A sparse web user interface comes with supervisor; maybe you could use that. It can be enabled in the supervisor config. The key to look for is [inet_http_server].
You could even look at the source code of that piece to get ideas for implementing your own.
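Supervisor also serves an XML-RPC API through that same [inet_http_server] port, which you can query from Python. A minimal sketch, assuming the HTTP server is configured on 127.0.0.1:9001 with no authentication and the program is named celery_worker as in the supervisor config above:

from xmlrpc.client import ServerProxy

def celery_worker_state():
    # Supervisor's XML-RPC API is served under the 'supervisor' namespace at /RPC2.
    server = ServerProxy('http://127.0.0.1:9001/RPC2')
    info = server.supervisor.getProcessInfo('celery_worker')
    return info['statename']  # e.g. 'RUNNING', 'STOPPED', 'FATAL'

if __name__ == '__main__':
    print(celery_worker_state() == 'RUNNING')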
This isn't applicable for celery, but for anyone that ended up here to see if supervisord is running, check to see if the pidfile defined for supervisord in your supervisord.conf configuration file exists. If so, it's running; if not, it isn't. The default pidfile is /tmp/supervisord.pid, which is what I used below.
import os
import sys

if os.path.isfile("/tmp/supervisord.pid"):
    print "supervisord is running."
    sys.exit()
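To guard against a stale pidfile, a possible refinement of the same idea (signal 0 does not actually send a signal; it only checks that the process exists):

import os

def supervisord_is_running(pidfile="/tmp/supervisord.pid"):
    if not os.path.isfile(pidfile):
        return False
    with open(pidfile) as f:
        pid = int(f.read().strip())
    try:
        os.kill(pid, 0)  # raises OSError if no process with this pid exists
    except OSError:
        return False
    return True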
In my experience, I'd also set a message to track whether each task completed or not, so that the queue is responsible for retrying tasks.
Related
I have an issue with Python print output not being logged to the uwsgi console.
So I run my application from the console with:
uwsgi --http :9090 --wsgi-file wsgi.py --master -p 4
My wsgi.py file contains:
from assets_generator import app as application

if __name__ == "__main__":
    app.run()
and my app looks like this (inside asset_generator.py):
from flask import Flask, render_template

app = Flask(__name__)
app.config.from_envvar('CONFIG')

from uwsgidecorators import thread
from worker import Worker

@thread
def _start_worker(item):
    worker = Worker(item=item)
    worker.run()

@app.route("/post-asset", methods=['GET', 'POST'])
def post_asset():
    from flask import request
    _start_worker(request.values)
    return "OK", 200
The worker's run method calls a convert method:
class Worker(object):

    def __init__(self, item):
        super(Worker, self).__init__()
        self.item = item

    def run(self):
        with app.app_context():
            # prepare stuff for conversion, fill urls etc....
            details = self.convert(
                name=self.item.get('name'),
                source_url=self.item.get('source_url'),
                conversion_format=self.item.get('format'),
                default_options=default_options
            )
and the convert method calls a URL:
def convert(self, name, source_url, conversion_format, default_options):
    try:
        print "before requests " + source_url  # THIS PRINT WORKS
        r = requests.get(source_url)
        print "after requests"  # THIS ONE DOESN'T
        # do other stuff, prints don't work
    except Exception as e:
        print " Error"
        raise e
    finally:
        print "finally"  # DOESN'T PRINT
        if zip_extract_path:
            shutil.rmtree(zip_extract_path)
        print "before returning None"  # DOESN'T PRINT
        return None
My problem is that I can see the first print in the uwsgi console logs, but the second one never happens, and none of the other prints after this request call ever appear.
I have manually tested doing
r = requests.get(source_url)
with the right URL from the machine where this uwsgi application is run, and the request actually succeeds and returns OK.
I am a bit confused about why my prints stop working. If anyone has insight on this, it would be greatly appreciated.
For some reason, after trying to print a non-existent variable and setting --py-autoreload 1 in the uwsgi config, my logs are now displayed. I don't understand why, though.
You should pass flush=True to the print call, for example:
print("Hello", flush=True)
or use
import sys
sys.stdout.flush()
The reason is that you did not reload your code. Although you change and save your file, your script keeps running (it did not restart and reload the code). If you add autoreload, uwsgi will check whether your files have changed and restart the server when it detects that you modified the code. py-autoreload=N means it will check your code every N second(s). Read more: https://serverfault.com/questions/411362/how-do-i-make-uwsgi-restart-when-a-python-script-is-modified/411363
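With the command from the question, that would be, for example: uwsgi --http :9090 --wsgi-file wsgi.py --master -p 4 --py-autoreload 1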
I have the following script which just boots up a web server serving a dynamically created website. In order to get dynamic data the script opens a file to read the data.
My concern is how I can catch the CTRL-C command that kills the Python script, so I can close the files before the script's thread is killed.
I tried the following couple of things, but neither works:
from flask import Flask, render_template
import time

# Initialize the Flask application
app = Flask(__name__)

fileNames = {}
fileDesc = {}
for idx in range(1, 4):
    fileNames["name{}".format(idx)] = "./name" + str(idx) + ".txt"
    fileDesc["name{}".format(idx)] = open(fileNames["name{}".format(idx)], 'r')

try:
    @app.route('/')
    def index():
        # code for reading data from files
        return render_template('index.html', var1=var1)

    @app.errorhandler(Exception)
    def all_exception_handler(error):
        print("Closing")
        for key, value in fileDesc.items():
            value.close()
        print("Files closed")

    if __name__ == '__main__':
        app.run(
            host="192.168.0.166",
            port=int("8080"),
            debug=True
        )
except KeyboardInterrupt:
    print("Closing")
    for key, value in fileDesc.items():
        value.close()
    print("Files closed")
Thanks in advance.
I am struggling with the same thing in my project. Something that did work for me was using signal to capture CTRL-C.
import sys
import signal

def handler(signal, frame):
    print('CTRL-C pressed!')
    sys.exit(0)

signal.signal(signal.SIGINT, handler)
signal.pause()
When this piece of code is put in the script that runs the Flask app, CTRL-C can be captured. As of now, you have to press CTRL-C twice before the handler is executed, though. I'll investigate further and edit the answer if I find something new.
Edit 1
Okay I've done some more research and came up with some other methods, as the above is quite hack 'n slash.
In production, clean-up code such as closing databases or files is done via the @app.teardown_appcontext decorator. See this part of the tutorial.
When using the simple server, you can shut it down via exposing the werkzeug shutdown function. See this post.
Edit 2
I've tested the Werkzeug shutdown function, and it also works together with the teardown_appcontext functions. So I suggest writing your teardown functions using the decorator and writing a simple function that just shuts down the Werkzeug server. That way production and development code are the same.
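A minimal sketch of that combination (the shutdown hook is the one the linked post describes; it only exists on the built-in Werkzeug development server, and newer Werkzeug releases have removed it):

from flask import Flask, request

app = Flask(__name__)

@app.teardown_appcontext
def teardown(error=None):
    # Clean-up code (closing files, database connections, ...) goes here.
    print("Tearing down app context")

@app.route('/shutdown', methods=['POST'])
def shutdown():
    shutdown_server = request.environ.get('werkzeug.server.shutdown')
    if shutdown_server is None:
        raise RuntimeError('Not running the Werkzeug development server')
    shutdown_server()
    return 'Shutting down...'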
Use atexit to handle this, from: https://stackoverflow.com/a/30739397/5782985
import atexit

# defining function to run on shutdown
def close_running_threads():
    for thread in the_threads:
        thread.join()
    print "Threads complete, ready to finish"

# Register the function to be called on exit
atexit.register(close_running_threads)

# start your process
app.run()
I'm new to celery. All of the examples I've seen start a celery worker from the command line. e.g:
$ celery -A proj worker -l info
I'm starting a project on elastic beanstalk and thought it would be nice to have the worker be a subprocess of my web app. I tried using multiprocessing and it seems to work. I'm wondering if this is a good idea, or if there might be some disadvantages.
import celery
import multiprocessing

class WorkerProcess(multiprocessing.Process):
    def __init__(self):
        super().__init__(name='celery_worker_process')

    def run(self):
        argv = [
            'worker',
            '--loglevel=WARNING',
            '--hostname=local',
        ]
        app.worker_main(argv)

def start_celery():
    global worker_process
    worker_process = WorkerProcess()
    worker_process.start()

def stop_celery():
    global worker_process
    if worker_process:
        worker_process.terminate()
        worker_process = None

worker_name = 'celery@local'
worker_process = None

app = celery.Celery()
app.config_from_object('celery_app.celeryconfig')
Seems like a good option, definitely not the only option but a good one :)
One thing you might want to look into (you might already be doing this) is linking the autoscaling to the size of your Celery queue, so you only scale up when the queue is growing.
Effectively Celery does something similar internally of course, so there's not a lot of difference. The only snag I can think of is the handling of external resources (database connections for example), that might be a problem but is completely dependent on what you are doing with Celery.
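A rough sketch of reading the queue depth for that purpose, assuming an AMQP broker such as RabbitMQ and the default queue name 'celery' (other brokers expose queue length differently):

def get_queue_length(celery_app, queue_name='celery'):
    # A passive declare does not create or modify the queue; it just returns
    # its current state, including the number of waiting messages.
    with celery_app.connection_or_acquire() as conn:
        return conn.default_channel.queue_declare(
            queue=queue_name, passive=True).message_count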
If anyone is interested, I did get this working on Elastic Beanstalk with a pre-configured AMI server running Python 3.4. I had a lot of problems with the Docker based server running Debian Jessie. Something to do with port remapping, maybe. Docker is kind of a black box, and I've found it very hard to work with and debug. Fortunately, the good folks at AWS just added a non-docker Python 3.4 option on April 8, 2015.
I did a lot of searching to get this deployed and working. I saw lots of questions without answers. So here's my very simple deployed python 3.4/flask/celery process.
Celery you can just pip install. You'll need to install rabbitmq from a configuration file with a config command or container_command. I'm using a script in my uploaded project zip, so a container_command is necessary to use the script (regular eb config command takes place before the project is installed).
[yourapproot]/.ebextensions/05_install_rabbitmq.config:
container_commands:
  01RunScript:
    command: bash ./init_scripts/app_setup.sh
[yourapproot]/init_scripts/app_setup.sh:
#!/usr/bin/env bash
# Download and install Erlang
yum install erlang
# Download the latest RabbitMQ package using wget:
wget http://www.rabbitmq.com/releases/rabbitmq-server/v3.5.1/rabbitmq-server-3.5.1-1.noarch.rpm
# Install rabbit
rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
yum -y install rabbitmq-server-3.5.1-1.noarch.rpm
# Start server
/sbin/service rabbitmq-server start
I'm doing a flask app, so I start up the workers before the first request:
@app.before_first_request
def before_first_request():
    task_mgr.start_celery()
The task_mgr creates the celery app object (which I call celery, since the flask app object is app). The -Ofair option is pretty key here, for a simple task manager. There is all kinds of strange behavior with task prefetch. This should maybe be the default?
task_mgr/task_mgr.py:
import celery as celery_module
import multiprocessing

class WorkerProcess(multiprocessing.Process):
    def __init__(self):
        super().__init__(name='celery_worker_process')

    def run(self):
        argv = [
            'worker',
            '--loglevel=WARNING',
            '--hostname=local',
            '-Ofair',
        ]
        celery.worker_main(argv)

def start_celery():
    global worker_process
    multiprocessing.set_start_method('fork')  # 'spawn' seems to work also
    worker_process = WorkerProcess()
    worker_process.start()

def stop_celery():
    global worker_process
    if worker_process:
        worker_process.terminate()
        worker_process = None

worker_name = 'celery@local'
worker_process = None

celery = celery_module.Celery()
celery.config_from_object('task_mgr.celery_config')
My config is pretty simple so far:
task_mgr/celery_config.py:
BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'amqp://'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json' # 'pickle' warning: can't use datetime in json
CELERY_RESULT_SERIALIZER = 'json' # 'pickle' warning: can't use datetime in json
CELERY_TASK_RESULT_EXPIRES = 18000 # Results hang around for 5 hours
CELERYD_CONCURRENCY = 4
Then you can put tasks wherever you need them:
from task_mgr.task_mgr import celery
import time

@celery.task(bind=True)
def error_task(self):
    self.update_state(state='RUNNING')
    time.sleep(10)
    raise KeyError('im an error')

@celery.task(bind=True)
def long_task(self):
    self.update_state(state='RUNNING')
    time.sleep(20)
    return 'long task finished'

@celery.task(bind=True)
def task_with_status(self, wait):
    self.update_state(state='RUNNING')
    for i in range(5):
        time.sleep(wait)
        self.update_state(
            state='PROGRESS',
            meta={
                'current': i + 1,
                'total': 5,
                'status': 'progress',
                'host': self.request.hostname,
            }
        )
        time.sleep(wait)
    return 'finished with wait = ' + str(wait)
I also keep a task queue to hold the async results so I can monitor the tasks:
task_queue = []

def queue_task(task, *args):
    async_result = task.apply_async(args)
    task_queue.append(
        {
            'task_name': task.__name__,
            'task_args': args,
            'async_result': async_result
        }
    )
    return async_result

def get_tasks_info():
    tasks = []
    for task in task_queue:
        task_name = task['task_name']
        task_args = task['task_args']
        async_result = task['async_result']
        task_id = async_result.id
        task_state = async_result.state
        task_result_info = async_result.info
        task_result = async_result.result
        tasks.append(
            {
                'task_name': task_name,
                'task_args': task_args,
                'task_id': task_id,
                'task_state': task_state,
                'task_result.info': task_result_info,
                'task_result': task_result,
            }
        )
    return tasks
And of course, start the tasks where you need to:
from webapp.app import app
from flask import url_for, render_template, redirect
from webapp import tasks
from task_mgr import task_mgr

@app.route('/start_all_tasks')
def start_all_tasks():
    task_mgr.queue_task(tasks.long_task)
    task_mgr.queue_task(tasks.error_task)
    for i in range(1, 9):
        task_mgr.queue_task(tasks.task_with_status, i * 2)
    return redirect(url_for('task_status'))

@app.route('/task_status')
def task_status():
    current_tasks = task_mgr.get_tasks_info()
    return render_template(
        'parse/task_status.html',
        tasks=current_tasks
    )
And that's about it. Let me know if you need any help, though my celery knowledge is still fairly limited.
I have a Flask app where I'd like to execute some code the first time the app is run, not on the automatic reloads triggered by debug mode. Is there any way of detecting when a reload is triggered so that I can do this?
To give an example, I might want to open a web browser every time I run the app from sublime text, but not when I subsequently edit the files, like so:
import webbrowser
if __name__ == '__main__':
webbrowser.open('http://localhost:5000')
app.run(host='localhost', port=5000, debug=True)
You can set an environment variable.
import os

if 'WERKZEUG_LOADED' in os.environ:
    print 'Reloading...'
else:
    print 'Starting...'
    os.environ['WERKZEUG_LOADED'] = 'TRUE'
I still don't know how to persist a reference that survives the reloading, though.
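A related trick: when the reloader is active, Werkzeug itself sets the WERKZEUG_RUN_MAIN environment variable in the child process that actually serves requests, so you can tell that process apart from the reloader's monitoring process without setting your own variable:

import os

if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
    print('This is the serving process spawned by the reloader.')
else:
    print('This is the initial (monitoring) process.')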
What about using Flask-Script to kick off a process before you start your server? Something like this (cribbed from their documentation and edited slightly):
# run_devserver.py
import webbrowser

from flask.ext.script import Manager
from myapp import app

manager = Manager(app)

if __name__ == "__main__":
    webbrowser.open('http://localhost:5000')
    manager.run(host='localhost', port=5000, debug=True)
I have a Flask app where it's not really practical to change the DEBUG flag or disable reloading, and the app is spun up in a more complex way than just flask run.
@osa's solution didn't work for me with flask debug on, because it doesn't have enough finesse to pick out the werkzeug watcher process from the worker process that gets reloaded.
I have this code in my main package's __init__.py (the package that defines the flask app). This code is run by another small module which has from <the_package_name> import app followed by app.run(debug=True, host='0.0.0.0', port=5000). Therefore this code is executed before the app starts.
import logging
import os

import ptvsd

logger = logging.getLogger(__name__)

my_pid = os.getpid()
if os.environ.get('PPID') == str(os.getppid()):
    logger.debug('Reloading...')
    logger.debug(f"Current process ID: {my_pid}")
    try:
        port = 5678
        ptvsd.enable_attach(address=('0.0.0.0', port))
        logger.debug(f'========================== PTVSD waiting on port {port} ==========================')
        # ptvsd.wait_for_attach()  # Not necessary for my app; YMMV
    except Exception as ex:
        logger.debug(f'PTVSD raised {ex}')
else:
    logger.debug('Starting...')
    os.environ['PPID'] = str(my_pid)
    logger.debug(f"First process ID: {my_pid}")
NB: note the difference between os.getpid() and os.getppid() (the latter gets the parent process's ID).
I can attach at any point and it works great, even if the app has reloaded already before I attach. I can detach and re-attach. The debugger survives a reload.
I'm using Celery to manage asynchronous tasks. Occasionally, however, the celery process goes down which causes none of the tasks to get executed. I would like to be able to check the status of celery and make sure everything is working fine, and if I detect any problems display an error message to the user. From the Celery Worker documentation it looks like I might be able to use ping or inspect for this, but ping feels hacky and it's not clear exactly how inspect is meant to be used (if inspect().registered() is empty?).
Any guidance on this would be appreciated. Basically what I'm looking for is a method like so:
def celery_is_alive():
    from celery.task.control import inspect
    return bool(inspect().registered())  # is this right??
EDIT: It doesn't even look like registered() is available on celery 2.3.3 (even though the 2.1 docs list it). Maybe ping is the right answer.
EDIT: Ping also doesn't appear to do what I thought it would do, so still not sure the answer here.
Here's the code I've been using. celery.task.control.Inspect.stats() returns a dict containing lots of details about the currently available workers, None if there are no workers running, or raises an IOError if it can't connect to the message broker. I'm using RabbitMQ - it's possible that other messaging systems might behave slightly differently. This worked in Celery 2.3.x and 2.4.x; I'm not sure how far back it goes.
def get_celery_worker_status():
    ERROR_KEY = "ERROR"
    try:
        from celery.task.control import inspect
        insp = inspect()
        d = insp.stats()
        if not d:
            d = {ERROR_KEY: 'No running Celery workers were found.'}
    except IOError as e:
        from errno import errorcode
        msg = "Error connecting to the backend: " + str(e)
        if len(e.args) > 0 and errorcode.get(e.args[0]) == 'ECONNREFUSED':
            msg += ' Check that the RabbitMQ server is running.'
        d = {ERROR_KEY: msg}
    except ImportError as e:
        d = {ERROR_KEY: str(e)}
    return d
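A hypothetical usage sketch (the 'ERROR' key is the one defined above; on success, stats() returns a dict keyed by worker name):

status = get_celery_worker_status()
if 'ERROR' in status:
    print('Celery problem: {}'.format(status['ERROR']))
else:
    print('Workers online: {}'.format(', '.join(status.keys())))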
From the documentation of celery 4.2:
from your_celery_app import app

def get_celery_worker_status():
    i = app.control.inspect()
    availability = i.ping()
    stats = i.stats()
    registered_tasks = i.registered()
    active_tasks = i.active()
    scheduled_tasks = i.scheduled()
    result = {
        'availability': availability,
        'stats': stats,
        'registered_tasks': registered_tasks,
        'active_tasks': active_tasks,
        'scheduled_tasks': scheduled_tasks
    }
    return result
Of course, you could/should improve the code with error handling...
To check the same thing from the command line, in case celery is running as a daemon:
Activate the virtualenv and go to the dir where the 'app' is.
Now run: celery -A [app_name] status
It will show whether celery is up or not, plus the number of nodes online.
Source:
http://michal.karzynski.pl/blog/2014/05/18/setting-up-an-asynchronous-task-queue-for-django-using-celery-redis/
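If you'd rather run that same check from Python, a sketch along these lines (my assumption: the status subcommand exits non-zero when no nodes reply, which matches the EX_UNAVAILABLE handling in the answers above):

import subprocess

def celery_status_ok(app_name):
    # True if `celery -A <app_name> status` reports at least one node online.
    result = subprocess.run(
        ['celery', '-A', app_name, 'status'],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    return result.returncode == 0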
The following worked for me:
import socket
from kombu import Connection
celery_broker_url = "amqp://localhost"
try:
    conn = Connection(celery_broker_url)
    conn.ensure_connection(max_retries=3)
except socket.error:
    raise RuntimeError("Failed to connect to RabbitMQ instance at {}".format(celery_broker_url))
One method to test if any worker is responding is to send out a 'ping' broadcast and return with a successful result on the first response.
from .celery import app  # the celery 'app' created in your project

def is_celery_working():
    result = app.control.broadcast('ping', reply=True, limit=1)
    return bool(result)  # True if at least one result
This broadcasts a 'ping' and will wait up to one second for responses. As soon as the first response comes in, it will return a result. If you want a False result faster, you can add a timeout argument to reduce how long it waits before giving up.
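For example, to fail faster (timeout is a standard argument of broadcast; the half-second value is just an illustration):

# Wait at most half a second for the first reply before concluding no worker is up.
result = app.control.broadcast('ping', reply=True, limit=1, timeout=0.5)
is_up = bool(result)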
I found an elegant solution:
from .celery import app
try:
    app.broker_connection().ensure_connection(max_retries=3)
except Exception as ex:
    raise RuntimeError("Failed to connect to celery broker, {}".format(str(ex)))
You can use the ping method to check whether any worker (or a specific worker) is alive or not: https://docs.celeryproject.org/en/latest/_modules/celery/app/control.html#Control.ping
celery_app.control.ping()
You can also test in your terminal by running the following command:
celery -A proj_name worker -l INFO
You can then review the output every time your celery worker runs.
The script below worked for me.
import logging

# Import the celery app from the project
from application_package import app as celery_app

logger = logging.getLogger(__name__)

def get_celery_worker_status():
    insp = celery_app.control.inspect()
    nodes = insp.stats()
    if not nodes:
        raise Exception("celery is not running.")
    logger.error("celery workers are: {}".format(nodes))
    return nodes
Run celery status to get the status.
When celery is running,
(venv) ubuntu@server1:~/project-dir$ celery status
-> celery@server1: OK
1 node online.
When no celery worker is running, you get the below information displayed in terminal.
(venv) ubuntu@server1:~/project-dir$ celery status
Error: No nodes replied within time constraint