How to revoke a task while it is running - Python

When I send a task and then try to revoke it:
app=Celery()
app.control.revoke(task.id)
#or
app.control.revoke(task.id, terminate=True)
I get this error:
[2019-09-05 05:27:50,110: ERROR/MainProcess] pidbox command error: NotImplementedError("<class 'celery.concurrency.gevent.TaskPool'> does not implement kill_job",)
I'm using gevent.
celery -A MyApp worker -l info -P gevent
What's wrong?

The gevent concurrency pool does not allow killing jobs. The prefork pool does allow it, since terminating a task is as simple as killing the worker process that is running it; the same goes for threading.
There is an issue about this with a proposed solution - https://github.com/celery/celery/issues/4019 - but nobody has made a PR yet.
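If terminating running tasks is a hard requirement, here is a minimal sketch of the prefork-based approach described above (the import path and task name are assumptions, not from the question):
# A minimal sketch, assuming the worker runs with the default prefork pool
# (celery -A MyApp worker -l info) and that the Celery app is importable
# from MyApp.celery; terminate=True only works on pools that can kill jobs.
from MyApp.celery import app  # assumed import path

# "MyApp.tasks.long_task" is a hypothetical task name for illustration.
result = app.send_task("MyApp.tasks.long_task")

# Revoke the task; with the prefork pool, terminate=True kills the child
# process currently executing it (SIGTERM is the default signal).
app.control.revoke(result.id, terminate=True, signal="SIGTERM")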

Related

Received unregistered task of type 'vubon' - getting an unregistered-task error even though all the tasks are listed when Celery starts. Why?

Below is the command used to run Celery, which gives me the output shown in the image. As you can see in the image, it reports a KeyError for 'vubon', but I don't have any task named "vubon".
Could the issue be the import structure, or something else?
celery -A patronpay worker -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler

Detect and Initiate celery worker in Python Code

Normally I run the following in the terminal to start the worker process:
celery -A myapp worker --loglevel=info
What I want to achieve now is, from Python code, to check whether a worker process has already been started, and to run this command (again from Python code) only if it has not.
How can I achieve that?
There is no need for that, as Celery gives you a standard way to do it:
--pidfile PIDFILE   Optional file used to store the process pid. The program won't start if this file already exists and the pid is still alive.
So simply change how you start your worker to something like celery -A myapp worker --loglevel=info --pidfile celery1.pid
If you open another terminal and run the command above, the worker will not start because the PID file already exists.
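If you still want to do the check from Python, here is a minimal sketch (the import path is an assumption; the pidfile name matches the command above):
# A minimal sketch, assuming the Celery app is importable from myapp.celery.
# It pings running workers and starts one via subprocess only if none replies;
# the --pidfile flag keeps a second accidental start from doing anything.
import subprocess
from myapp.celery import app  # assumed import path

def ensure_worker_running():
    # inspect().ping() returns a dict of replies, or None if no worker answered
    replies = app.control.inspect(timeout=2.0).ping()
    if not replies:
        subprocess.Popen([
            "celery", "-A", "myapp", "worker",
            "--loglevel=info", "--pidfile", "celery1.pid",
        ])

ensure_worker_running()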

celery: daemonic processes are not allowed to have children (with concurrent.futures)

In Python 3.6 I am trying to create processes (via concurrent.futures / multiprocessing) in a Celery task (Celery 3.1.17), but it gives the error:
daemonic processes are not allowed to have children
Code:
import concurrent.futures

def mp():
    # pdfocr and json are defined elsewhere in the task module
    with concurrent.futures.ProcessPoolExecutor(max_workers=90) as executor:
        data = executor.map(pdfocr, [no for no in list(json.keys())])
Start the worker with the -P threads argument:
celery worker -P threads
The threads pool has been officially supported since Celery 4.4.0.
See: https://github.com/celery/celery/issues/4525#issuecomment-566503932
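For context, here is a minimal sketch of a task that can use ProcessPoolExecutor once the worker runs with the threads pool (the module name, broker URL, and pdfocr body are assumptions):
# A minimal sketch, assuming a module named "myapp" and a worker started with
# "celery -A myapp worker -P threads". With the threads pool the task runs in
# a thread of the main worker process, which is not a daemonic child process,
# so it may spawn child processes via ProcessPoolExecutor.
import concurrent.futures
from celery import Celery

app = Celery("myapp", broker="redis://localhost:6379/0")  # broker URL is an assumption

def pdfocr(page_no):
    # placeholder for the real OCR work
    return page_no

@app.task
def mp(page_numbers):
    with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
        return list(executor.map(pdfocr, page_numbers))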

app-celery: ERROR (no such file)

I've followed the tutorial on how to set up Celery on my Django production server using supervisor.
I've done this successfully; however, when I try to start the worker with sudo supervisorctl start app-celery, it returns:
app-celery: ERROR (no such file)
Here is my config in the folder /etc/supervisor/conf.d (app-celery.conf):
[program:app-celery]
command=/home/app/bin/celery worker -A draft1 --loglevel=INFO
directory=/home/app/draft1
numprocs=1
stdout_logfile=/var/log/supervisor/celery.log
stderr_logfile=/var/log/supervisor/celery.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
Any idea what the problem is?
I had the same issue. Adding the following resolved it for me:
environment=DJANGO_SETTINGS_MODULE="my_proj.settings"
I'm not sure why this is necessary. It's not listed in the documentation I've seen, and running the raw command either inside or outside of the virtual environment seems to be fine. Nevertheless, celery now starts and restarts without issue for me.
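For reference, the reason this variable usually matters is the project's celery.py; here is a minimal sketch assuming the project package is named draft1, matching the command in the config above:
# A minimal sketch of a typical draft1/celery.py, assuming the project package
# is named "draft1" as in the supervisor command above. When the worker is
# launched by supervisor, DJANGO_SETTINGS_MODULE may not be inherited from a
# shell profile, so it must be set here or via the [program] environment= line.
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "draft1.settings")

app = Celery("draft1")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()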

Why does supervisor keep switching the celery worker from RUNNING to STARTING all the time?

Background
The system is CentOS 7, which ships with Python 2.x, and has 1 GB of memory and a single core.
I installed Python 3.x, and I can run Python 3 code with the python3 command.
The django-celery project runs in a Python 3.x virtualenv, and I have already set it up with nginx, uwsgi, and MariaDB. At least, I think so, since no errors have happened.
I am trying to use supervisor to control the django-celery worker, like below:
command=env/bin/python project/manage.py celeryd -l INFO -n worker_%(process_num)s
numprocs=4
process_name=projects_worker_%(process_num)s
stdout_logfile=logfile.log
stderr_logfile=logfile_err.log
I have also configured celery events and celery beat; that part works fine with no errors. The error comes from the worker part.
When I keep numprocs greater than 1, the workers run at first, and when I run supervisorctl status, all of them are RUNNING.
But when I run the same command a few more times, some processes have changed to STARTING.
Trying this repeatedly, I found that the workers' status keeps changing from RUNNING to STARTING and then back from STARTING to RUNNING, without stopping.
When I check supervisor's logfile at tmp/supervisor.log, it shows lines like:
exit status 1; not expected
entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
'project_worker_0' with pid 2284
Maybe this shows why the workers change status all the time.
What's more, when I change numprocs to 1, the worker can still fail. The worker's log shows:
stale pidfile exists. Removing it
But I did not point the worker at any pidfile path, and I only found the pidfiles for celery events and celery beat under the / path, with no pidfile for the worker. I also tried find / -name *.pid to look for a pidfile for the worker or celeryd, but none exists.
Question
First, I want to deploy the project, so is there any other way to deploy the Celery part of django-celery from inside the virtualenv?
If anyone can tell me why this phenomenon happens, I would prefer to keep using supervisor to deploy the Celery part. Can anyone help me with this?
PS
Any of your thoughts may be helpful to me, best wishes!
Finally, I solved this problem last night.
About the reason
I had the project running successfully on a Windows 10 system, but did not re-check it when I moved the project to CentOS 7. The command env/bin/python project/manage.py celeryd could not run successfully, so supervisor kept starting a process that failed soon after.
Why did the command fail? I had pip-installed all the packages needed, but it showed the error below:
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
I searched some blog posts about this error and got the answer:
export C_FORCE_ROOT='true'  # in the CentOS environment
Steps to solve it (after meeting an error like this):
Add export C_FORCE_ROOT='true' to the CentOS environment file and source it.
Check that the command env/bin/python project/manage.py celeryd now runs successfully.
Restart supervisord. Attention: supervisorctl reload is not enough, as it only reloads the .conf file, not the environment. Kill the supervisord process and start it again with supervisord -c xx.conf (ps aux | grep supervisord, then kill -9 process_number; be careful).
Some links about this: a blog post (in Chinese) about the same error when running celeryd fails.
