Python Celery: Updating the state of an AsyncResult

After a parent task succeeds, I wish to update the task's state depending on some of its child tasks' results.
However:
1/ I cannot find a way to retrieve the actual task instance based on its id, only its AsyncResult:
def level5_success(task_id):
    result = app.AsyncResult(task_id)
    # Set the parent task state (does not work)
    app.AsyncResult(task_id).update_state(state='HOWAREYOUDOING')
2/ I cannot find a way to update the state of an AsyncResult; only the task itself can do it, using update_state:
def on_level4_success(sender, *args, **kwargs):
    sender.update_state(state='HOWAREYOUDOING')
Any idea?

It feels like you are operating outside the bounds of what Celery is designed to do. Coordination of work, state, and process is supposed to be done with the Celery canvas, not by monkeying around with Celery internals. Even if you manage to get it to work, I doubt that state hacking is part of the contract Celery intends to keep with its API; it is entirely possible that your work will be broken by future changes to Celery.
What are you trying to do that you cannot do with groups, chords and chains?
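For instance, here is a minimal sketch (broker/backend URLs and the task bodies are placeholders, not from the question) of how a chord lets the "parent" react to its children's results without touching task state:
from celery import Celery, chord

app = Celery('tasks', broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def child_task(n):
    # Stand-in for the real child work (e.g. an HTTP fetch).
    return n * 2

@app.task
def parent_callback(results):
    # Runs only after every child has finished; the "parent outcome"
    # is simply whatever this callback computes from the results.
    return sum(results)

# Run the children in parallel, then the callback with their results.
result = chord(child_task.s(i) for i in range(5))(parent_callback.s())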

Related

How can I change the key names in celery

After finishing a task, Celery writes the result to Redis. However, the key of the entry is determined by Celery itself (I think it is the task id). I want to give the entries specific names, because that is how other services know where to find them.
In my code there is one producer that creates the tasks and many workers.
Right now I call tasks_name.get() after all tasks have completed and create new Redis entries with my naming convention.
But that seems unnecessary and slow; Celery should just use my convention.
I am thinking about renaming the keys, but I don't know if that is feasible, or how I would receive the ids from the producer.
Maybe a custom RedisBackend class would work, but that seems too deep.
You can give a task a custom ID using the task_id parameter when submitting it. It is supported by the apply and apply_async methods of a Celery task and the send_task method of the Celery class.
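For example, a minimal sketch (the task body and the naming convention are placeholders) of assigning your own id so the result lands under a predictable key in the Redis backend:
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def fetch(url):
    return len(url)   # stand-in for the real work

# The result is stored by the Redis backend under a key derived from
# this id (by default "celery-task-meta-report:42").
result = fetch.apply_async(args=('http://example.com',),
                           task_id='report:42')
print(result.id)      # 'report:42'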

How to create a shared counter in Celery?

Is there a way to have a shared counter (shared between workers) in Celery? I am also open to other ideas on how to solve my problem, but would like to stick to Celery. Here is my problem:
I have a task that is dependent on an index passed to it. These tasks can pass or fail, but I need to reach a target number of passed tasks. If a job fails, it should kick off a new job with the next available index.
I can of course do this with a function that tracks the active jobs and initiates the new ones, but if there were something built in, that would be great.
You can use the task_failure Celery signal:
from celery.signals import task_failure

@task_failure.connect
def fail_task_handler(sender=None, body=None, **kwargs):
    print('a task has failed')
    # start a new task or do something else
More at http://celery.readthedocs.org/en/latest/userguide/signals.html#task-failure
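A hedged sketch of how such a handler could drive the "next available index" logic; the counter key, the process_index task, and the Redis counter are assumptions added for illustration, not part of the question:
import redis
from celery import Celery
from celery.signals import task_failure

app = Celery('tasks', broker='redis://localhost:6379/0')
r = redis.Redis()

@app.task
def process_index(index):
    ...   # the real work for one index, which may pass or fail

@task_failure.connect
def retry_with_next_index(sender=None, **kwargs):
    # React only to failures of our indexed task.
    if sender is not None and sender.name == process_index.name:
        next_index = r.incr('next-available-index')   # shared, atomic counter
        process_index.delay(next_index)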

reuse results for celery tasks

Is there any common solution to store and reuse Celery task results without executing the tasks again? I have many HTTP fetch tasks in my metasearch project and wish to reduce the number of redundant HTTP requests (they can take a long time and return the same results) by storing the result of the first one and returning it without actually fetching. It would also be very useful not to start a new fetch task when an identical one is already in progress. Instead of running a new job, the app should return the AsyncResult (by an id that is unique and generated from the task call arguments) of the already pending task.
Looks like I need to define new apply_async (Celery.send_task) behavior for tasks with the same task_id:
if a task with the given task_id hasn't started yet, start it
if a task with the given task_id has already started, return AsyncResult(task_id) without actually running the task
the @task decorator should accept a new ttl kwarg to determine the cache time (only for the Redis backend?)
Looks like the simplest answer is to store your results in a cache (like a database) and first ask the cache for the result, otherwise fire the HTTP request.
I don't think there's anything specific to Celery that can do this for you.
Edit:
To handle the fact that the tasks may be sent at the same time, an additional thing would be to build a lock for the Celery task (see the Celery Task Lock recipe).
In your case you want to give the lock a name containing the task name and the URL. And you can use whatever system you want for the cache, as long as it is visible to all your workers (Redis in your case?).
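A hedged sketch of that idea (the cache keys, TTL, lock timeout, and task body are assumptions): check Redis for a cached result first, and use a short-lived lock so only one worker fetches a given URL at a time:
import hashlib
import redis
import requests
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')
cache = redis.Redis()

@app.task(bind=True, max_retries=None)
def fetch(self, url, ttl=600):
    digest = hashlib.sha1(url.encode()).hexdigest()
    cache_key = 'fetch-cache-' + digest
    lock_key = 'fetch-lock-' + digest

    cached = cache.get(cache_key)
    if cached is not None:
        return cached.decode()          # reuse the stored result

    # nx=True sets the lock only if no other worker already holds it.
    if not cache.set(lock_key, self.request.id, nx=True, ex=60):
        # Someone else is fetching the same URL; try again shortly.
        raise self.retry(countdown=5)

    try:
        body = requests.get(url).text
        cache.set(cache_key, body, ex=ttl)   # cache the result for ttl seconds
        return body
    finally:
        cache.delete(lock_key)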

Python / rq - monitoring worker status

If this is an idiotic question, I apologize and will go hide my head in shame, but:
I'm using rq to queue jobs in Python. I want it to work like this:
Job A starts. Job A grabs data via web API and stores it.
Job A runs.
Job A completes.
Upon completion of A, job B starts. Job B checks each record stored by job A and adds some additional response data.
Upon completion of job B, user gets a happy e-mail saying their report's ready.
My code so far:
from redis import Redis
from rq import Queue, Worker, use_connection

redis_conn = Redis()
use_connection(redis_conn)
q = Queue('normal', connection=redis_conn)  # this is terrible, I know - fixing later
w = Worker(q)
job = q.enqueue(getlinksmod.lsGet, theURL, total, domainid)
w.work()
I assumed my best solution was to have 2 workers, one for job A and one for B. The job B worker could monitor job A and, when job A was done, get started on job B.
What I can't figure out for the life of me is how to get one worker to monitor the status of another. I can grab the job ID from job A with job.id, and the worker name with w.name, but I haven't the foggiest idea how to pass any of that information to the other worker.
Or, is there a much simpler way to do this that I'm totally missing?
Update, January 2015: this pull request has now been merged, and the parameter was renamed to depends_on, i.e.:
second_job = q.enqueue(email_customer, depends_on=first_job)
The original post is left intact below for people running older versions:
I have submitted a pull request (https://github.com/nvie/rq/pull/207) to handle job dependencies in RQ. When this pull request gets merged in, you'll be able to do:
def generate_report():
    pass

def email_customer():
    pass

first_job = q.enqueue(generate_report)
second_job = q.enqueue(email_customer, after=first_job)
# In the second enqueue call, job is created,
# but only moved into queue after first_job finishes
For now, I suggest writing a wrapper function to sequentially run your jobs. For example:
def generate_report():
    pass

def email_customer():
    pass

def generate_report_and_email():
    generate_report()
    email_customer()  # You can also enqueue this function, if you really want to

# Somewhere else
q.enqueue(generate_report_and_email)
From this page of the rq docs, it looks like each job object has a result attribute, accessible as job.result, which you can check. If the job hasn't finished, it will be None, but if you ensure that your job returns some value (even just "Done"), then you can have your other worker check the result of the first job and begin working only when job.result has a value, meaning the first worker has completed.
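A hedged sketch of that polling approach (the placeholder argument values and the process_records function are assumptions; getlinksmod.lsGet comes from the question):
import time
from redis import Redis
from rq import Queue

import getlinksmod   # the module from the question

def process_records():
    pass   # hypothetical "job B": enrich the records job A stored

theURL, total, domainid = 'http://example.com', 10, 1   # placeholder arguments

q = Queue('normal', connection=Redis())
job_a = q.enqueue(getlinksmod.lsGet, theURL, total, domainid)

# Poll until job A has stored a return value, then enqueue job B.
while job_a.result is None:
    time.sleep(5)

job_b = q.enqueue(process_records)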
You are probably too deep into your project to switch, but if not, take a look at Twisted. http://twistedmatrix.com/trac/ I am using it right now for a project that hits APIs, scrapes web content, etc. It runs multiple jobs in parallel, as well as organizing certain jobs in order, so Job B doesn't execute until Job A is done.
This is the best tutorial for learning Twisted if you want to attempt it: http://krondo.com/?page_id=1327
Combine the things that job A and job B do into one function, and then use e.g. multiprocessing.Pool (its map_async method) to farm that out over different processes.
I'm not familiar with rq, but multiprocessing is a part of the standard library. By default it uses as many processes as your CPU has cores, which in my experience is usually enough to saturate the machine.
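A hedged sketch of that approach (the record-building and enrichment logic are stand-ins for what jobs A and B actually do):
from multiprocessing import Pool

def fetch_and_enrich(record_id):
    # "Job A" work: fetch the record, then "job B" work: enrich it.
    record = {'id': record_id}          # stand-in for the web API call
    record['extra'] = record_id * 2     # stand-in for the added response data
    return record

if __name__ == '__main__':
    with Pool() as pool:                # one process per CPU core by default
        async_result = pool.map_async(fetch_and_enrich, range(100))
        print(async_result.get())       # blocks until every record is done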

Periodical tasks for each entity

I often have models that are a local copy of some remote resource, which needs to be periodically kept in sync.
from google.appengine.api.taskqueue import Task

Task(
    url="/keep_in_sync",
    params={'entity_id': entity_id},
    name="sync-%s" % entity_id,
    countdown=3600
).add()
Inside keep_in_sync any changes are saved to the model and a new task is scheduled to happen again later.
Now, while superficially this seems like a nice solution, in practice you might become worried about whether all the necessary tasks have really been added. Maybe you have entities representing the level of food pellets inside your hamster cages, so that an automated email can be sent to your housekeeper to feed them. But then, a few weeks later when you come back from your holiday, you find several of your hamsters starving.
It then starts to seem like a good idea to write a script that goes through each entity and makes sure the proper task really is in the queue for it. But neither the Task nor the Queue class has any method for checking whether a task exists.
Can you save the hamsters and come up with a nicer way to make sure that a method really is being periodically called for each entity?
Update
It seems that if you want to be really sure that tasks are scheduled, you need to keep track of your own tasks, as Nick Johnson suggests. I'm not ready to let go of the convenient task queue, so for the time being I will just tolerate the uncertainty of being unable to check whether tasks are really scheduled.
Instead of enqueueing a task per entity, handle multiple entities in a single task. This can be triggered by a daily cron job, for instance, which fans out to multiple tasks. As well as ensuring you execute your code for each entity, you can also take advantage of asynchronous URLFetch to synchronize with the external resource more efficiently, and batch puts and gets from the datastore to make the updates more efficient.
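A hedged sketch of that fan-out pattern on App Engine (the handler paths, batch size, and the LocalCopy model are assumptions): a cron-triggered handler walks the entities with a cursor, syncs a batch, batch-puts, and re-enqueues itself until every entity has been visited:
from google.appengine.api import taskqueue
from google.appengine.ext import db
import webapp2

class LocalCopy(db.Model):              # stand-in for your synced entity
    value = db.StringProperty()

class SyncAllHandler(webapp2.RequestHandler):
    def get(self):                      # hit once a day by cron.yaml
        taskqueue.add(url='/sync_batch')

class SyncBatchHandler(webapp2.RequestHandler):
    def post(self):
        q = LocalCopy.all()
        cursor = self.request.get('cursor')
        if cursor:
            q.with_cursor(cursor)
        entities = q.fetch(100)         # one batch of entities per task
        if not entities:
            return                      # done: every entity has been visited
        for entity in entities:
            entity.value = 'synced'     # stand-in for the real remote sync
        db.put(entities)                # one batched datastore write
        taskqueue.add(url='/sync_batch', params={'cursor': q.cursor()})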
You'll get an exception (TaskAlreadyExistsError) if there is already such a task in the queue (same url and same params). So don't worry: just add all of them to the queue, and remember to catch the exceptions.
You can find the full list of exceptions here: http://code.google.com/intl/en/appengine/docs/python/taskqueue/exceptions.html
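For example, a hedged sketch of re-adding every entity's named task and swallowing the duplicates, mirroring the Task call from the question (how you enumerate entity ids is up to you; all_entity_ids is a placeholder):
from google.appengine.api import taskqueue

for entity_id in all_entity_ids:
    try:
        taskqueue.Task(
            url="/keep_in_sync",
            params={'entity_id': entity_id},
            name="sync-%s" % entity_id,
            countdown=3600,
        ).add()
    except (taskqueue.TaskAlreadyExistsError, taskqueue.TombstonedTaskError):
        pass   # already queued, or the name was used very recently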
