I need to execute a scheduled task: I want to delete a table's records every 30 days, and I want the task to run automatically when our server starts (after executing python manage.py runserver).
So I installed the schedule package:

pip install schedule
My scheduler function:

def set_timeschedule():
    HeartBeatLog.objects.all().delete()

Then I didn't understand where to call the function, so I called this in urls.py:

if date.today().day == 28:
    schedule.every().day.at("16:36").do(set_timeschedule)

(The time is hard-coded for testing; I actually need the job to run at 30-day intervals.)

Here HeartBeatLog is my model, and I can't import it in manage.py or urls.py. In any case, it's not working.

Is this the proper way of doing this task?
How can I get the exact date?
Where should I put this code so the data is deleted when the server starts?
Is it possible to kill the schedule after one execution?
You can set it up as follows:

import time
from datetime import date
import schedule

def set_timeschedule():
    if date.today().day == 28:
        HeartBeatLog.objects.all().delete()

schedule.every().day.at("16:36").do(set_timeschedule)  # Sets up a job

while True:
    schedule.run_pending()  # Checks for pending jobs and runs them
    time.sleep(900)         # Sleep for 900 s

You need to call run_pending() in a loop for the scheduled jobs to actually run.
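To tie this into Django so the purge starts together with python manage.py runserver, one option is to run the loop in a daemon thread, started for example from an app's AppConfig.ready(). Below is a minimal stdlib-only sketch rather than the schedule library's API: the HeartBeatLog purge is stubbed out with a print, and the 30-day interval and hourly poll are assumed values.

```python
import threading
import time

PURGE_INTERVAL = 30 * 86400  # 30 days, in seconds (assumed requirement)

def is_due(last_run, now, interval=PURGE_INTERVAL):
    """True once `interval` seconds have passed since `last_run`."""
    return now - last_run >= interval

def purge_heartbeat_logs():
    # In the real project this would be: HeartBeatLog.objects.all().delete()
    print("purging HeartBeatLog rows")

def scheduler_loop(poll_seconds=3600, run_once=False):
    """Poll every `poll_seconds`; purge when the interval has elapsed."""
    last_run = time.time()
    while True:
        now = time.time()
        if is_due(last_run, now):
            purge_heartbeat_logs()
            if run_once:
                return  # stop after one execution, per the last question above
            last_run = now
        time.sleep(poll_seconds)

# Started from AppConfig.ready(), so it launches with the server;
# daemon=True makes the thread die when the server process stops.
# threading.Thread(target=scheduler_loop, daemon=True).start()
```

Note that under runserver's auto-reloader, ready() can fire more than once, so you may want to guard against starting the thread twice.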
I'm using supervisord to keep a Telegram bot (written in Python) running on a server, and I use the datetime library to get the current date for an operation that depends on it.
from datetime import date
today = date.today()
The problem: I noticed that, while the process is running, Python always returns the same date, so my bot returns the same output every day instead of a different one.
To work around this, I had to log into the server, stop supervisor, kill the process, and manually execute the Python script to restart the bot with the current date.
I thought about using a crontab to run supervisorctl restart all once per day, but when I ran that command the Python process didn't stop. Even after I killed the process and ran that command, the output still returned yesterday's date; I had to manually run python3 myfile.py to refresh it. Is there a way I can refresh Python's date.today() without killing the process, or a way to kill and restart the Python process so it picks up the current date?
The current code:
import os
import locale
from datetime import date
from telegram.ext import Updater, CommandHandler

def get_today_meditation(update, context):
    chat_id = update.message.chat_id
    today = date.today()
    print(today)
    [ ... ]

def main():
    key_api = os.environ.get('PYTHON_API_BREVIARIO_KEY')
    locale.setlocale(locale.LC_ALL, "pt_BR.UTF-8")
    updater = Updater(key_api, use_context=True)
    dispatcher = updater.dispatcher
    dispatcher.add_handler(CommandHandler('meditacaodehoje', get_today_meditation))
    updater.start_polling()
    updater.idle()
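One likely cause of the stale date, assuming the deployed code differs slightly from the snippet above: if date.today() is ever captured at module level, the value is frozen at import time for the whole lifetime of the process that supervisord keeps alive. A small sketch of the pitfall (the function names are mine):

```python
from datetime import date

# Evaluated ONCE, at import time: this value never changes while the
# long-running process (kept alive by supervisord) is running.
STARTUP_DAY = date.today()

def stale_today():
    return STARTUP_DAY      # same value on every call, forever

def fresh_today():
    return date.today()     # re-evaluated on every call
```

Calling date.today() inside the handler, as get_today_meditation already does, avoids having to restart the process at all.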
I have worked on a Python project and created a .exe file using PyInstaller. Now I need to run this executable on every machine (desktop/laptop) in my company. I am looking for a scheduling solution where I can schedule the .exe file to run every 2 hours, and only on specific days.
Can someone point me to a scheduling tool, or any other way to schedule the executable on every machine?
Things to consider: a solution that requires no (or as little as possible) software to be installed on the machines. The reason I created the .exe with PyInstaller is that it doesn't require Python to be installed everywhere.
I would do something like the code below.
Note that in the example I have set a sleep of one second; this means that every second the function checks whether it is the right time to execute the job.
If your execution-time precision is in minutes, you can sleep for 60 seconds instead.
import time

# the time and date when you want to execute your job
datetime_of_exe = 'Mon Dec 14 12:00:00 2020'

# this is the job I want to do
def job():
    print('I am working now!')

# check whether it is the right datetime to execute the job
def timeToExecute(datetime):
    return time.ctime() == datetime

# this loop runs all day and exits once the job is done
while True:
    if timeToExecute(datetime_of_exe):
        job()
        exit()
    time.sleep(1)
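One fragile detail in the snippet above: time.ctime() must equal the target string during the exact second it is true, so any drift past that second means the job never fires. A sketch that parses the same string format and checks "now or later" instead (the function name is mine):

```python
import time

def time_to_execute(target, now=None):
    """True once `now` has reached `target`, where `target` uses the
    same 'Mon Dec 14 12:00:00 2020' format that time.ctime() emits."""
    if now is None:
        now = time.time()
    # strptime's default format matches ctime output; mktime converts
    # the parsed local time to a comparable epoch timestamp.
    return now >= time.mktime(time.strptime(target))
```

With this check the job still runs even if the loop wakes up a few seconds late; pair it with a flag or exit() so it only runs once.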
I have a scheduler_project.py script file.
code:
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

def func_1():
    pass  # updating file-1

def func_2():
    pass  # updating file-2

scheduler.add_job(func_1, 'cron', hour='10-23', minute='0-58', second=20)
scheduler.add_job(func_2, 'cron', hour='1')
scheduler.start()
When I run, (on Windows machine)
E:\> python scheduler_project.py
E:\> # there is no error
In the log (I have added logging at debug level to the code above), it says the job has been added and will start in (some x seconds), but it never starts.
In Task Manager, the command-prompt process appears for a second and then disappears.
And my files are not getting updated either, which follows from the above.
What's happening? What is the right way to execute this scheduler script?
BackgroundScheduler was created to run alongside other code, so after starting the scheduler you are expected to run the rest of your program.
If you don't have any other work to do, then you have to use a loop to keep the process alive.
The documentation links to examples on GitHub, and one of the examples uses:

import time

while True:
    time.sleep(2)
My objective is to schedule an Azure Batch Task to run every 5 minutes from the moment it has been added, and I use the Python SDK to create/manage my Azure resources. I tried creating a Job-Schedule and it automatically created a new Job under the specified Pool.
job_spec = batch.models.JobSpecification(
    pool_info=batch.models.PoolInformation(pool_id=pool_id)
)
schedule = batch.models.Schedule(
    start_window=datetime.timedelta(hours=1),
    recurrence_interval=datetime.timedelta(minutes=5)
)
setup = batch.models.JobScheduleAddParameter(
    id='python_test_schedule',
    schedule=schedule,
    job_specification=job_spec
)
batch_client.job_schedule.add(setup)
What I then did was add a task to this new job. But the task seems to run only once, as soon as it is added (like a normal task). Is there something more I need to do to make the task run recurrently? There doesn't seem to be much documentation or many examples of JobSchedule either.
Thank you! Any help is appreciated.
You are correct in that a JobSchedule will create a new job at the specified time interval. Additionally, you cannot have a task "re-run" every 5 minutes once it has completed. You could do either:
Have one task that runs a loop, performing the same action every 5 minutes.
Use a Job Manager to add a new task (that does the same thing) every 5 minutes.
I would probably recommend the 2nd option, as it has a little more flexibility to monitor the progress of the tasks and job and take actions accordingly.
An example client which creates the job might look a bit like this:
job_manager = models.JobManagerTask(
    id='job_manager',
    command_line="/bin/bash -c 'python ./job_manager.py'",
    environment_settings=[
        models.EnvironmentSetting(name='AZ_BATCH_KEY', value=AZ_BATCH_KEY)],
    resource_files=[
        models.ResourceFile(blob_sas="https://url/to/job_manager.py", file_name="job_manager.py")],
    authentication_token_settings=models.AuthenticationTokenSettings(
        access=[models.AccessScope.job]),
    kill_job_on_completion=True,  # This will mark the job as complete once the Job Manager has finished.
    run_exclusive=False)  # Whether the Job Manager needs a dedicated VM - depends on the other tasks running on the VM.

new_job = models.JobAddParameter(
    id='my_job',
    job_manager_task=job_manager,
    pool_info=models.PoolInformation(pool_id='my_pool'))

batch_client.job.add(new_job)
Now we need a script to run as the Job Manager on the compute node. In this case I will use Python, so you will need to add a StartTask to your pool (or a JobPreparationTask to the job) to install the azure-batch Python package.
Additionally the Job Manager Task will need to be able to authenticate against the Batch API. There are two methods of doing this depending on the scope of activities that the Job Manager will perform. If you only need to add tasks, then you can use the authentication_token_settings attribute, which will add an AAD token environment variable to the Job Manager task with permissions to ONLY access the current job. If you need permission to do other things, like alter the pool, or start new jobs, you can pass an account key via environment variable. Both options are shown above.
The script you run on the Job Manager task could look something like this:
import os
import time

from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
from azure.batch import models

# Batch account credentials
AZ_BATCH_ACCOUNT = os.environ['AZ_BATCH_ACCOUNT_NAME']
AZ_BATCH_KEY = os.environ['AZ_BATCH_KEY']
AZ_BATCH_ENDPOINT = os.environ['AZ_BATCH_ENDPOINT']
AZ_JOB = os.environ['AZ_BATCH_JOB_ID']  # ID of the job this Job Manager belongs to

# If you're using authentication_token_settings for authentication,
# you can use the AAD token in the environment variable AZ_BATCH_AUTHENTICATION_TOKEN instead.

def main():
    # Batch client
    creds = SharedKeyCredentials(AZ_BATCH_ACCOUNT, AZ_BATCH_KEY)
    batch_client = BatchServiceClient(creds, base_url=AZ_BATCH_ENDPOINT)

    # Set up the conditions under which your Job Manager will continue to add tasks here.
    # It could be a timeout, a max number of tasks, or you could monitor tasks and act on task status.
    condition = True
    task_id = 0
    task_params = {
        "command_line": "/bin/bash -c 'echo hello world'",
        # Any other task parameters go here.
    }

    while condition:
        new_task = models.TaskAddParameter(id=str(task_id), **task_params)  # task IDs must be strings
        batch_client.task.add(AZ_JOB, new_task)
        task_id += 1

        # Perform any additional logic here - for example:
        # - Check the status of the tasks, e.g. stdout, exit code etc.
        # - Process any output files for the tasks
        # - Delete any completed tasks
        # - Error handling for tasks that have failed
        time.sleep(300)  # Wait for 5 minutes (300 seconds)

    # The Job Manager task has completed - it will now exit and the job will be marked as complete.

if __name__ == '__main__':
    main()
job_spec = batchmodels.JobSpecification(
    pool_info=pool_info,
    job_manager_task=batchmodels.JobManagerTask(
        id="JobManagerTask",
        # specify the command that needs to run recurrently
        command_line="/bin/bash -c \"python3 task.py\""
    ))

Add the task that you want to run recurrently as a JobManagerTask inside the JobSpecification, as shown above. The Job Manager task will then run in every job the schedule creates, i.e. recurrently.
I need to write a Python script that autostarts on boot and is executed every 5 minutes on a Raspberry Pi. How can this be done? In particular, how can I avoid the script locking up the CPU in an infinite loop while waiting for the 5 minutes to be over?
You can easily use cron for this task (scheduling a Python script to run). ;)
How to set up cron
I assume you already have cron installed; if not, install one (vixie-cron, for example).
Create a new file /etc/cron.d/<any-name>.cron with the following content:

# run script every 5 minutes
*/5 * * * * myuser python /path/to/script.py

# run script after system (re)boot
@reboot myuser python /path/to/script.py

where myuser is the user to run the script as (for security reasons it shouldn't be root, if possible). If this doesn't work, try appending the content to /etc/crontab instead.
You might want to redirect the script's stdout/stderr to a file so you can check that everything works fine. This is the same as in a shell: just add something like >>/var/log/<any-name>-info.log 2>>/var/log/<any-name>-error.log after the script path.
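Putting the redirection together with the entries above, the complete cron file might look like this (the log file names are placeholders):

```
# /etc/cron.d/myscript.cron
*/5 * * * * myuser python /path/to/script.py >>/var/log/myscript-info.log 2>>/var/log/myscript-error.log
@reboot myuser python /path/to/script.py >>/var/log/myscript-info.log 2>>/var/log/myscript-error.log
```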
Use schedule
Wrap the script in a function:

import schedule
import time

def func():
    print("this is python")

schedule.every(5).minutes.do(func)

while True:
    schedule.run_pending()
    time.sleep(1)
You can use time.sleep:

import time

abort = False  # set from elsewhere to stop the loop
count = -1
while not abort:
    count = (count + 1) % 100
    if count == 0:
        print('hello world!')  # fires every 100th iteration, i.e. every 300 s
    time.sleep(3)
This assumes each run of your code takes less than 5 minutes; the execution time of each run need not be constant:

import time

while True:
    t = time.time()

    # your code goes here
    # ................
    # ........

    t = time.time() - t
    time.sleep(max(0, 300 - t))  # never pass a negative value to sleep
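A variant of the same idea that never sleeps a negative amount and does not drift over many iterations, since it anchors each run to a fixed schedule rather than to when the previous run finished. The helper names and the iterations parameter are mine, added so the loop can terminate:

```python
import time

INTERVAL = 300  # target seconds between runs

def seconds_until(next_run, now):
    """Time left until next_run; clamped at zero in case the job overran."""
    return max(0.0, next_run - now)

def run_periodically(job, interval=INTERVAL, iterations=None):
    """Call job() every `interval` seconds, compensating for job runtime."""
    next_run = time.monotonic()
    done = 0
    while iterations is None or done < iterations:
        job()
        done += 1
        next_run += interval
        time.sleep(seconds_until(next_run, time.monotonic()))
        if next_run < time.monotonic():
            next_run = time.monotonic()  # the job overran: rebase instead of bursting
```

time.monotonic() is used instead of time.time() so the schedule is unaffected by system clock adjustments.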