Execute code after some time in django - python

I have code which deletes an API token when executed. Now I want it to execute after some time, let's say two weeks. Any ideas or directions on how to implement this?
My code:
authtoken = models.UserApiToken.objects.get(api_token=token)
authtoken.delete()
This is inside a function and executed when a request is made.

There are two main ways to get this done:
Make it a custom management command, and trigger it through crontab.
Use Celery: make it a Celery task, and use Celery Beat to trigger the job after two weeks.
I would recommend Celery, as it gives you better control over your task queues and jobs.
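For the immediate question (deleting a token two weeks after a request), a Celery task scheduled with an ETA is the most direct route. A minimal sketch, assuming a configured Celery app; the task name and import path are illustrative:

from datetime import timedelta

from celery import shared_task
from django.utils import timezone

from myapp import models  # hypothetical app module

@shared_task
def delete_api_token(token):
    # filter().delete() is a no-op if the token is already gone
    models.UserApiToken.objects.filter(api_token=token).delete()

# In the request handler, schedule the deletion for two weeks from now:
# delete_api_token.apply_async(args=[token], eta=timezone.now() + timedelta(weeks=2))

Keep in mind that a two-week ETA leaves the message sitting in the broker for the whole period, which is one reason a cron-driven management command or a Beat schedule is often preferred for long delays.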

Related

Django celery redis remove a specific periodic task from queue

There is a specific periodic task that needs to be removed from the message queue. I am using Redis and Celery here.
tasks.py
@periodic_task(run_every=crontab(minute='*/6'))
def task_abcd():
    """
    some operations here
    """
There are other periodic tasks in the project as well, but I need this specific task to stop from now on.
As explained in this answer, will the following code work?
@periodic_task(run_every=crontab(minute='*/6'))
def task_abcd():
    pass
In this example the periodic task schedule is defined directly in code, meaning it is hard-coded and cannot be altered dynamically without a code change and an app re-deploy.
The provided code, with the task logic deleted or with a simple return at the beginning, will "work", but it is not an answer to the question: the task will still run, there just is no code that runs with it.
Also, it is recommended NOT to use @periodic_task, as its docstring states:
"""Deprecated decorator, please use :setting:`beat_schedule`."""
First, change the method from a @periodic_task to a regular Celery @task; and because you are using Django, it is better to go straight for @shared_task:
from celery import shared_task

@shared_task
def task_abcd():
    ...
Now this is just an ordinary Celery task, which needs to be called explicitly, or it can be run periodically if added to the Celery Beat schedule.
For production, and if you are using multiple workers, it is not recommended to run the Celery worker with embedded Beat (-B); run a separate instance of the Celery Beat scheduler instead.
The schedule can be specified in celery.py or in the Django project settings (settings.py), as in the sketch below.
This is still not very dynamic, as the app needs to be reloaded for the settings to be re-read.
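For example, a hard-coded Beat schedule in settings.py might look like this (a sketch: the dotted task path is assumed, and the CELERY_ prefix assumes the usual namespace='CELERY' configuration):

from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    "task-abcd-every-6-minutes": {
        "task": "myapp.tasks.task_abcd",  # hypothetical dotted path
        "schedule": crontab(minute="*/6"),
    },
}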
Then, for full flexibility, use the Database Scheduler (from django-celery-beat), which allows creating schedules dynamically: which tasks need to be run, when, and with what arguments. It even provides nice Django admin views for administration!
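Switching Beat to the database-backed scheduler looks roughly like this, assuming django-celery-beat is installed and migrated:

# settings.py
CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"

# or pass it on the command line when starting Beat:
# celery -A proj beat --scheduler django_celery_beat.schedulers:DatabaseScheduler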
That code will work, but I'd go for something that doesn't force you to update your code every time you need to disable or enable the task.
What you could do is use a configurable variable whose value could come from an admin panel, a configuration file, or whatever you want, and return before your code runs if the task is in disabled mode.
For instance:
@periodic_task(run_every=crontab(minute='*/6'))
def task_abcd():
    config = load_config_for_task_abcd()
    if not config.is_enabled:
        return
    # some operations here
In this way, even if your task is scheduled, its operations won't be executed.
If you simply want to remove the periodic task, have you tried removing the function and then restarting your Celery service? You can restart your Redis service as well as your Django server for good measure.
Make sure that the function you removed is not referenced anywhere else.

How to run a function periodically in django/celery without having a worker (or using Schedule)?

I would like to run my function do_periodic_server_error_checking every ten minutes, but I do not wish to:
have it run on a worker - I'd like it to run on the server itself (as far as I can see, there is no way to get apply_async, delay, or @periodic_task to run on the server)
use Schedule - (I'm not sure why people recommend a busy-waiting approach for a Django server - maybe I'm missing something about how Schedule operates)
My best idea so far is to schedule a cron job to make an HTTP request to the server, but I'm wondering if there's an easier way from within Django itself?
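For illustration, the cron-plus-HTTP idea from the question could look like this; the view, URL, and schedule shown are hypothetical:

# views.py -- a plain Django view for a cron job to hit every ten minutes
from django.http import HttpResponse

def run_error_check(request):
    do_periodic_server_error_checking()  # the function from the question
    return HttpResponse("ok")

# crontab entry on the server:
# */10 * * * * curl -s http://localhost:8000/run-error-check/ > /dev/null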

Running python bot in django background

I've got a Django project with a simple form to take users' details. I want a Python bot running in the background, constantly checking the Django database for any changes. Is Celery the right tool for this job? Any other solutions? Thank you.
I don't think Celery is really what you want here - Celery is primarily for moving tasks that don't need to be dealt with in the same process to a separate worker, such as sending registration emails.
For this situation I'd be inclined to use Django's signals to trigger the required functionality whenever the appropriate changes are made to the database. For instance, if it needed to be triggered when a particular type of object was created, such as a new user, then you might use the post_save signal of the user model.
The bot would be in a separate process, but it's not too hard to communicate between processes using Redis. Just have the signal publish a message to Redis, and have the bot listen for that message and carry out the required action on that event.
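A minimal sketch of that pattern, assuming redis-py is installed and that newly created User rows are the change of interest:

# signals.py -- publish a message whenever a new user is saved
import json

import redis
from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver

r = redis.Redis()

@receiver(post_save, sender=User)
def notify_bot(sender, instance, created, **kwargs):
    if created:
        r.publish("user_events", json.dumps({"id": instance.pk}))

The bot process then subscribes and reacts:

# bot.py -- listen for events published by the Django process
import redis

pubsub = redis.Redis().pubsub()
pubsub.subscribe("user_events")
for message in pubsub.listen():
    if message["type"] == "message":
        ...  # carry out the required action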
I don't have the details of your needs, but there are a few ways to achieve such things:
The constant-checking approach:
A crontab entry which launches your Python script every minute.
Like you said, you could use Celery Beat to achieve what a crontab would do, within your Python environment.
The "on change" approach:
Probably the best: if you have control of the Django project, you could have your script run on form validation/save! For this, you can add a Celery task, run the Python script, use Django signals...

Web2py scheduler - Best practices to rerun task continuously and to add task at startup

I want to add a task to the queue at app startup; currently I am adding a scheduler.queue_task(...) call to the main db.py file. This is not ideal, as I had to define the task function in this file.
I also want the task to repeat every 2 minutes continuously.
I would like to know the best practices for this.
As stated in the web2py documentation, to rerun a task continuously, you just have to specify it at task-queuing time:
scheduler.queue_task(your_function,
                     pargs=your_args,
                     timeout=120,     # just in case
                     period=120,      # as you want to run it every 2 minutes
                     immediate=True,  # starts task ASAP
                     repeats=0)       # just does the infinite repeat magic
To queue it at startup, you might want to use the web2py cron feature in this simple way:
@reboot root *your_controller/your_function_that_calls_queue_task
Do not forget to enable this feature (-Y; more details in the doc).
There seems to be no real mechanism for this within web2py.
There are a few hacks one could use to continuously repeat tasks or schedule them at startup, but as far as I can see the web2py scheduler needs a lot of work.
The best option is to just abandon this web2py feature and use Celery or similar for advanced usage.

Can we force run the jobs in APScheduler job store?

Using APScheduler version 3.0.3. Services in my application internally use APScheduler to schedule and run jobs. I also created a wrapper class around the actual APScheduler (just a façade, which helps in unit tests). For unit testing these services, I can mock this wrapper class. But I have a situation where I would really like APScheduler to run the job (during a test). Is there any way to force-run the job?
There is no default trigger to launch a job immediately; to achieve this, you can get the current time and set a DateTrigger on the job, like this:

from apscheduler.triggers.date import DateTrigger

my_job.modify(trigger=DateTrigger(run_date=datetime.datetime.now()))
This way you will "force" the job to run, but you have to make sure to insert the job again into the scheduler. Another option is to just create a new job that runs the same function, using the add_job function:
sched.add_job(func=your_function,  # pass the function itself; do not call it here
              trigger=DateTrigger(run_date=datetime.datetime.now()))

This way you don't have to do any additional steps.
Another approach: you can write the logic of your job in a separate function, so you will be able to call that function from your scheduled job as well as from anywhere else, such as directly in a test. I would say this is a more explicit way to do what you want. For instance:
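A quick sketch of that idea; the function name and schedule are placeholders:

from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

# Keep the job body in a plain function...
def check_health():
    ...  # the actual logic

# ...schedule it as usual...
scheduler.add_job(check_health, "interval", minutes=10)

# ...and, in a test, call the function directly, bypassing the scheduler:
def test_check_health():
    check_health()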
Two alternatives:
Change the job's next_run_time (not verified personally, but this reportedly works: APScheduler how to trigger job now):

from datetime import datetime, timedelta

job = self.scheduler.get_job(job_id=job_id)
job.modify(next_run_time=datetime.now() + timedelta(seconds=5))
Add a run-once job to trigger it (verified):

self.scheduler.add_job(func=job.run,
                       executor=executor,
                       trigger="date",
                       max_instances=1,
                       run_date=datetime.now() + timedelta(seconds=5))
