Skip logging Celery results - python

I have a small web app written in Python using Flask. Some of my endpoints require long execution times (~60 s or more). My solution is to return a task ID instantly while starting a Celery task in the background.
Everything works fine as it is. I have redirected Celery's logging to a file and that works great. The result the task returns is a huge data structure that is later processed and potentially returned to the end user. However, I have a small issue with the logging of results: when Celery finishes a task it also logs the task's result, which in my case is the huge data structure mentioned above. This makes the log file harder to read and unnecessarily big.
Is it possible to log only that the task finished, its state, and the time it took?
Something like this:
[2017-02-06 15:12:01,286: INFO/PoolWorker-6] Task <task_name> succeeded in 60s
Not like this:
[2017-02-06 15:12:01,286: INFO/PoolWorker-6] Task <task_name> succeeded in 60s <very long string, potentially thousands of rows>

You can raise the Celery worker's log level above INFO with:
celery ... --loglevel ERROR
See the docs for more.
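If you still want the INFO-level "Task ... succeeded" line but without the huge payload, a different approach (a minimal sketch, not from the answer above, assuming the success line is emitted by the celery.app.trace logger) is to attach a logging filter that truncates overly long messages:

import logging

class TruncateResultFilter(logging.Filter):
    MAX_LEN = 200  # hypothetical cutoff, adjust to taste

    def filter(self, record):
        # shorten any message longer than MAX_LEN so huge task results
        # never reach the log file
        msg = record.getMessage()
        if len(msg) > self.MAX_LEN:
            record.msg = msg[:self.MAX_LEN] + " ... [truncated]"
            record.args = ()
        return True

logging.getLogger("celery.app.trace").addFilter(TruncateResultFilter())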


How to Inspect the Queue Processing a Celery Task

I'm currently leveraging Celery for periodic tasks. I am new to Celery. I have two workers running two different queues: one for slow background jobs and one for jobs users queue up in the application.
I am monitoring my tasks on Datadog because it's an easy way to confirm my workers are running appropriately.
What I want to do is after each task completes, record which queue the task was completed on.
from celery.signals import after_task_publish
from datadog import statsd

@after_task_publish.connect
def on_task_publish(sender=None, headers=None, body=None, **kwargs):
    statsd.increment("celery.on_task_publish.start.increment")
    # sender is the task's name; look the task object up in the registry
    task = celery.tasks.get(sender)
    queue_name = task.queue
    statsd.increment("celery.on_task_publish.increment", tags=[f"{queue_name}:{task}"])
The function above is something I implemented after researching the Celery docs and some Stack Overflow posts, but it's not working as intended: I get the first statsd increment, but the remaining code does not execute.
I am wondering if there is a simpler way to inspect, inside or after each task completes, which queue processed the task.
Since your question asks whether there is a way to inspect inside/after each task completes, I'm assuming you haven't tried Celery's result backend. You could check out this feature, which is provided by Celery itself: the Celery result backend (task result backend).
It is very useful for storing results of your celery tasks.
Read through this => https://docs.celeryproject.org/en/stable/userguide/configuration.html#task-result-backend-settings
Once you get an idea of how to set up this result backend, search for the result_extended key (in the same link) to be able to add queue names to your task return values.
A number of options are available; you can set up these results to go to any of these:
SQL DB / NoSQL DB / S3 / Azure / Elasticsearch / etc.
I have made use of this result backend feature with Elasticsearch to store my task results.
It is just a matter of adding a few configurations in the settings.py file as per your requirements, and it worked really well for my application. I have a weekly cron that clears only the successful task results, since we don't need them anymore, which leaves only the failed results visible.
These were the main keys for my requirement: task_track_started and task_acks_late, along with result_backend.
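A minimal sketch of what those settings could look like (the key names are from the configuration docs linked above; the Elasticsearch URL is a placeholder):

# Celery configuration, e.g. in settings.py or celeryconfig.py
result_backend = "elasticsearch://localhost:9200/celery/task-results"  # placeholder URL
result_extended = True       # also store name, args, kwargs, worker, queue, ...
task_track_started = True    # record the STARTED state
task_acks_late = True        # acknowledge only after the task finishes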

Perpetual tasks in Celery?

I'm building a django app where I use a camera to capture images, analyze them, store metadata and results of the analysis in a database, and finally present the data to users.
I'm considering using Celery to handle the background process of capturing images and then processing them:
from time import sleep
from celery import Celery

app = Celery('myapp')

@app.task
def capture_and_process_images(camera):
    while True:
        image = camera.get_image()
        process_image(image)  # note: called directly here, not dispatched with .delay()
        sleep(5000)

@app.task
def process_image(image):
    # do some calculations
    # django orm calls
    # etc...
    ...
The first task will run perpetually, while the second should take ~20 seconds, so there will be multiple images being processed at once.
I haven't found any examples online of using Celery in this way, so I'm not sure if this is bad practice or not.
Can/should Celery be used to handle perpetually running tasks?
Thank you.
Running perpetual tasks in Celery is done in practice. Take a look at daemonization, which essentially runs a permanent process without user interaction, so I wouldn't say there is anything wrong with running your task permanently in your case.
Having a Celery task run infinitely does not seem like a good idea to me.
If you are going to capture images at set intervals, I would suggest using a cron-like schedule that grabs an image every 5 seconds and launches a Celery task to process it; see the sketch below.
Note also that it is a best practice to avoid synchronous subtasks in Celery; see the docs for more details.
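For illustration, a minimal sketch of that idea using Celery beat as the cron-like scheduler (the task names, the get_camera() helper and the 5-second interval are assumptions):

from celery import Celery

app = Celery('myapp')

app.conf.beat_schedule = {
    'capture-image-every-5-seconds': {
        'task': 'tasks.capture_image',   # adjust to the real module path
        'schedule': 5.0,                 # seconds
    },
}

@app.task(name='tasks.capture_image')
def capture_image():
    camera = get_camera()            # hypothetical helper that returns the camera
    image = camera.get_image()
    # hand off to a worker instead of calling inline; the argument must be
    # serializable, e.g. a file path rather than a raw image object
    process_image.delay(image)

@app.task
def process_image(image):
    ...  # calculations, Django ORM calls, etc.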

Retrieving result from celery worker constantly

I have a web app in which I am trying to use Celery to load background tasks from a database. I am currently loading the database upon request, but would like to load the tasks on an hourly interval and have them work in the background. I am using Flask and am coding in Python. I have Redis running as well.
So far, using Celery, I have gotten the worker to process the task and the beat to send tasks to the worker on an interval. But I want to retrieve the results (a dataframe or query) from the worker, and if the result is not ready it should load the worker's previous result.
Any ideas on how to do this?
Edit
I am retrieving the results from a database using SQLAlchemy and rendering them in a webpage. My homepage has various links, each leading to a different graph, which I want to be loaded in the background so the user does not have to sit through long loading times.
The Celery task is executed by a worker, and its result is stored in the Celery backend.
If I understand you correctly, you have a few options:
Ignore the result of the graph-loading task and store whatever you need, as a side effect of the task, in your database. When needed, query for the most recent result in that database. If the DB is Redis, you may find ZADD and ZRANGE suitable. This way you'll get the new result if available, or the previous one if not.
You can look up the result of a task if you provide its id. You can do this when you want to find out the status, something like (where celery is the Celery app): result = celery.AsyncResult(<the task id>)
Use a callback to update further when a new result is ready.
Let a background thread wait for the AsyncResult, or native_join, which is supported with Redis, and update accordingly (not recommended)
I personally used option #1 in similar cases (using MongoDB) and found it to be very maintainable and flexible; a rough sketch of that approach follows below. But possibly, due to the nature of your UI, option #3 will be more suitable for your needs.
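A minimal sketch of option #1 using Redis sorted sets (the key names, the JSON payload, and the helper names are assumptions for illustration):

import json
import time

import redis

r = redis.Redis()

def store_result(graph_name, rows):
    # called from inside the Celery task: keep results ordered by timestamp
    payload = json.dumps({"ts": time.time(), "rows": rows})
    r.zadd(f"results:{graph_name}", {payload: time.time()})

def latest_result(graph_name):
    # called from the Flask view: newest entry if one exists, otherwise None
    newest = r.zrevrange(f"results:{graph_name}", 0, 0)
    return json.loads(newest[0]) if newest else None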

Task queue for deferred tasks in GAE with python

I'm sorry if this question has in fact been asked before. I've searched around quite a bit and found pieces of information here and there but nothing that completely helps me.
I am building an app on Google App Engine in Python that lets a user upload a file, which is then processed by a piece of Python code, and the resulting processed file gets sent back to the user in an email.
At first I used a deferred task for this, which worked great. Over time I've come to realize that since the processing can take more than the 10 minutes I have before I hit the DeadlineExceededError, I need to be more clever.
I therefore started to look into task queues, wanting to make a queue that processes the file in chunks, and then piece everything together at the end.
My present code for making the single deferred task look like this:
_ = deferred.defer(transform_function, filename, from, to, email)
so that the transform_function code gets the values of filename, from, to and email and sets off to do the processing.
Could someone please enlighten me as to how I turn this into a linear chain of tasks that are acted on one after the other? I have read all the Google App Engine documentation I can think of, but unfortunately it isn't written in enough detail in terms of actual pieces of code.
I see references to things like:
taskqueue.add(url='/worker', params={'key': key})
but since I don't have a url for my task, but rather a transform_function() implemented elsewhere, I don't see how this applies to me…
Many thanks!
You can just keep calling deferred to run your task when you get to the end of each phase.
Other queues just allow you to control the scheduling and rate, but work the same.
I track the elapsed time in the task, and when I get near the end of the processing window the code stops what it is doing and calls defer for the next task in the chain, or continues where it left off, depending on whether it's a discrete set of steps or one continuous chunk of work. This was all written back when tasks could only run for 60 seconds.
However, the problem you will face (it doesn't matter if it's a normal task queue or deferred) is that each stage could fail for some reason and then be re-run, so each phase must be idempotent.
For long-running chained tasks, I construct an entity in the datastore that holds the description of the work to be done and tracks the processing state for the job; then you can just keep re-running the same task until completion. On completion it marks the job as complete.
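A minimal sketch of that chained-deferred pattern (the time budget and the has_more_rows(), process_chunk() and send_result_email() helpers are all assumptions; each chunk must be idempotent):

import time
from google.appengine.ext import deferred

TIME_BUDGET = 9 * 60  # stay safely under the 10-minute task deadline

def transform_in_chunks(filename, row, to, email):
    started = time.time()
    while has_more_rows(filename, row):              # hypothetical helper
        process_chunk(filename, row)                 # hypothetical, idempotent step
        row += 1
        if time.time() - started > TIME_BUDGET:
            # out of time: defer the next link in the chain, resuming at this row
            deferred.defer(transform_in_chunks, filename, row, to, email)
            return
    send_result_email(filename, to, email)           # hypothetical helper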
To avoid the 10-minute timeout you can direct the request to a backend or a B-type module using the "_target" param.
BTW, is there any reason you need to process the chunks sequentially? If all you need is some notification upon completion of all chunks (so you can "piece everything together at the end"), you can implement it in various ways. For example, each deferred task for a chunk can decrement a shared datastore counter (read the state, decrement and update, all in the same transaction) that was initialized with the number of chunks; if the datastore update was successful and the counter has reached zero, you can proceed with combining all the pieces together. An alternative to using deferred that would simplify the suggested workflow is pipelines (https://code.google.com/p/appengine-pipeline/wiki/GettingStarted).
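For illustration, a rough sketch of that shared-counter fan-in idea using ndb (the JobCounter model and the finish_job() helper are assumptions):

from google.appengine.ext import ndb

class JobCounter(ndb.Model):
    remaining = ndb.IntegerProperty()   # initialized with the number of chunks

@ndb.transactional
def mark_chunk_done(job_key):
    # read, decrement and write the counter in a single transaction
    counter = job_key.get()
    counter.remaining -= 1
    counter.put()
    return counter.remaining

def on_chunk_finished(job_key):
    if mark_chunk_done(job_key) == 0:
        finish_job(job_key)             # hypothetical: piece everything together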

Status of Python Celery tasks

I'm wondering what kind of options there are for monitoring celery tasks from a browser, after they have been deployed to a worker?
My current application stack is a flask app running inside twisted, using celery to run dozens to thousands of small background tasks (updating metadata in a repository, creating image derivatives, etc.) I'm envisioning using ajax long-polling to monitor the status of the celery tasks initiated by the user. I'm using redis for the backend broker and results.
I see celery has some command line ways to monitor tasks, or flower for a web dashboard. But if I wanted to see more detailed status from a particular task sent to celery, would it make more sense for that task to print / write to a log file, then long-poll that file for changes from the flask front-end?
At this point a user can say, "update these 10,000 items", the tasks are sent to celery, and the front-end very quickly says, "job sent!". And the tasks do complete. But I'd like to have the user navigate to "/status" and see the status of those 10,000 small jobs - even a scrolling log file would probably work.
Any suggestions would be greatly appreciated. Took a lot of head scratching to make it this far sketching things out, but I'm spinning my wheels figuring out exactly WHAT to long-poll from the user front-end.
Try Jobtastic, which extends Celery.
From project description:
Jobtastic gives you goodies like:
Easy progress estimation/reporting
Job status feedback
Helper methods for gracefully handling a dead task broker (delay_or_eager and delay_or_fail)
Super-easy result caching
Thundering herd avoidance
Integration with a celery jQuery plugin for easy client-side progress display
Memory leak detection in a task run
Jobtastic was a great idea, but not quite what worked for us. In the end we decided to create an incrementing job number (stored in Redis alongside the results and broker), push all the Celery task IDs associated with that job number into a Python object, then pickle it and store that in Redis. We can then use that later to see if the entire "job" is complete, or what its status is. For our purposes it works just lovely.
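A minimal sketch of that approach (the Redis key names and helper functions are assumptions for illustration):

import pickle

import redis
from celery.result import AsyncResult

r = redis.Redis()

def start_job(task_ids):
    # incrementing job number, with the pickled task IDs stored alongside it
    job_id = r.incr("job:counter")
    r.set(f"job:{job_id}:tasks", pickle.dumps(task_ids))
    return job_id

def job_status(celery_app, job_id):
    # report how many of the job's tasks have finished so far
    task_ids = pickle.loads(r.get(f"job:{job_id}:tasks"))
    results = [AsyncResult(tid, app=celery_app) for tid in task_ids]
    done = sum(1 for res in results if res.ready())
    return {"done": done, "total": len(results), "complete": done == len(results)}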
