How to schedule a job at a particular time as well as at an interval? - python

Using Python's schedule module, how can I start a job at a particular time and then have it repeat at regular intervals?
Suppose I want to schedule a task every 4 hours starting from 09:00 am.
schedule.every(4).hours.at("09:00").do(task) # This doesn't work
How can I achieve the above?

You can convert the inner schedule (every 4 hours) into a separate function that is called by the main schedule (fixed time). The inner schedule function is the one that calls your job function.
Example -
import schedule
import time

def job():
    print("I am working")  # your job function

def schedule_every_four_hours():
    job()  # run the job immediately the first time
    schedule.every(4).hours.do(job)
    return schedule.CancelJob  # cancel the daily trigger so it only fires once

schedule.every().day.at("09:00").do(schedule_every_four_hours)

while True:
    schedule.run_pending()
    time.sleep(1)
If you would like to cancel the scheduled job based on your requirements, see the schedule documentation on cancelling jobs.

The above solution will not work as-is if there are multiple schedules, because schedule.CancelJob can end up cancelling the other scheduled jobs in the pipeline; it is better to tag each job and use schedule.clear(tag).
import schedule
from datetime import datetime
import time

def task():
    print('I am here...', datetime.now())

def schedule_every_four_hours(clear):
    if clear == 'clear':
        schedule.every(2).seconds.do(task).tag('mytask1')  # first recurring job
    else:
        schedule.every(5).seconds.do(task).tag('mytask2')  # second recurring job
    print(clear)
    schedule.clear(clear)  # remove only the daily trigger that carries this tag

# note: this quick demo assumes the current minute is not close to the end of the hour
now = datetime.now()
times = str(now.hour) + ":" + str(now.minute + 1)
times1 = str(now.hour) + ":" + str(now.minute + 3)

schedule.every().day.at(times).do(schedule_every_four_hours, 'clear').tag('clear')
schedule.every().day.at(times1).do(schedule_every_four_hours, 'clear1').tag('clear1')

while True:
    schedule.run_pending()
    time.sleep(1)

Just as an add-on, because I was searching for a solution that would:
Start at a specific time
Have an interval
Exit at a specific time
import schedule
import time
from datetime import datetime as dt

def job():
    now = dt.now()
    dt_string = now.strftime('%Y-%m-%d %H:%M:%S')
    print("I am working", dt_string)  # your job function

def schedule_every_two_minutes():
    job()  # run the job immediately the first time
    schedule.every(2).minutes.until("09:46").do(job)  # repeat until 09:46
    print('every 2 minutes')
    return schedule.CancelJob

schedule.every().day.at("09:29").do(schedule_every_two_minutes)

while True:
    schedule.run_pending()
    time.sleep(1)

Related

python schedule running every 5 minutes adds a few seconds of delay

I am running the code below as an example, where the function gets data, cleans it, and shows the result every five minutes.
import schedule
import time

def job():
    print("I'm working...")

schedule.every(5).minutes.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
The problem I have now is that when the function runs, it takes a few seconds to do everything. For example, if the code runs at 9:00 am, it takes 2-5 seconds to complete the task. Because of this, the next run happens at around 9:05:05.
Is there a solution that can help me run the function every 5 minutes even when it takes some time to complete the tasks in the function? I want the function to run exactly at 9:00 am, 9:05 am, 9:10 am, and so on.
Run another thread, as mentioned in the schedule docs: https://schedule.readthedocs.io/en/stable/
Schedule does not account for the time it takes for the job function to execute. To guarantee a stable execution schedule you need to move long-running jobs off the main-thread (where the scheduler runs). See Parallel execution for a sample implementation.
and, once again, copying from the documentation:
import threading
import time
import schedule

def job():
    print("I'm running on thread %s" % threading.current_thread())

def run_threaded(job_func):
    job_thread = threading.Thread(target=job_func)
    job_thread.start()

schedule.every(10).seconds.do(run_threaded, job)
schedule.every(10).seconds.do(run_threaded, job)
schedule.every(10).seconds.do(run_threaded, job)
schedule.every(10).seconds.do(run_threaded, job)
schedule.every(10).seconds.do(run_threaded, job)

while 1:
    schedule.run_pending()
    time.sleep(1)
I have already searched and seen that many people have problems because schedule doesn't account for execution time, and the workaround for this is parallel execution: here
Even after trying this, I still have the problem, and even missed one call at 2023-02-15 21:03:11:
Current Time = 2023-02-15 21:03:09.996591 <Thread(Thread-196 (test), started 123145329221632)>
Current Time = 2023-02-15 21:03:10.999913 <Thread(Thread-197 (test), started 123145329221632)>
Current Time = 2023-02-15 21:03:12.000702 <Thread(Thread-198 (test), started 123145329221632)>
Current Time = 2023-02-15 21:03:13.002731 <Thread(Thread-199 (test), started 123145329221632)>
Can someone help me with this? I would appreciate it so much.
My code:
import schedule
import time
from datetime import datetime, timedelta
import threading

def test():
    current_time = datetime.now()
    print("Current Time =", current_time, threading.current_thread())

def run_threaded(job_func):
    job_thread = threading.Thread(target=job_func)
    job_thread.start()

schedule.every(1).seconds.do(run_threaded, test)

if __name__ == "__main__":
    while True:
        schedule.run_pending()
        time.sleep(1)
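For what it's worth, a hedged sketch of a thread-free alternative: instead of sleeping a fixed second, sleep until the next 5-minute boundary of the wall clock, similar to the clock-locked loop shown further down this page. The interval constant and job body are illustrative, not from the original answers, and a run is skipped if the job takes longer than the interval.
import time
from datetime import datetime

INTERVAL = 300  # 5 minutes

def job():
    print("I'm working...", datetime.now())

while True:
    # sleep until the next multiple of INTERVAL on the wall clock,
    # so runs land at 9:00:00, 9:05:00, ... regardless of how long job() takes
    time.sleep(INTERVAL - (time.time() % INTERVAL))
    job()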

python apscheduler does nothing

I made the following, but it doesn't print the time.
from apscheduler.schedulers.background import BackgroundScheduler
from datetime import datetime

def tick():
    print('Tick! The time is: %s' % datetime.now())

scheduler = BackgroundScheduler()
scheduler.add_job(tick, 'interval', seconds=3)
print('starting')
scheduler.start()
print('stopped')
This is because your program exits before the interval has elapsed; it needs to be kept alive at least until the first interval fires. Consider using the following:
while True:
    time.sleep(2)  # or any other thread activity here
or some other form of activity to keep your main thread alive. Or just print out the time without this scheduling, if that's what you really need.
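A hedged, self-contained version of that suggestion (the shutdown handling is a common pattern from the APScheduler user guide, not part of the original answer):
from apscheduler.schedulers.background import BackgroundScheduler
from datetime import datetime
import time

def tick():
    print('Tick! The time is: %s' % datetime.now())

scheduler = BackgroundScheduler()
scheduler.add_job(tick, 'interval', seconds=3)
scheduler.start()

try:
    # keep the main thread alive so the background scheduler can keep firing
    while True:
        time.sleep(2)
except (KeyboardInterrupt, SystemExit):
    scheduler.shutdown()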

python schedule module - not executing

According to the documentation the following sample code should execute once, but it does not when including the variable currentTime. Can you tell me how to pass the variable to the schedule call properly?
import schedule
import time
from datetime import datetime

currentTime = datetime.now().strftime('%H:%M')

def job_that_executes_once():
    # Do some work that only needs to happen once...
    print('Only Once')
    return schedule.CancelJob

schedule.every().day.at('22:30').do(job_that_executes_once)

while True:
    schedule.run_pending()
    time.sleep(1)
    print('Not executed')
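As a hedged note, since no answer is shown here: schedule.every().day.at() accepts an 'HH:MM' string, so the variable can be passed directly; but if that time has already passed today, schedule waits until the following day, which can make the job look like it never executes. A minimal sketch (the one-minute offset is illustrative):
import schedule
import time
from datetime import datetime, timedelta

# pick a time that has not yet passed today, e.g. one minute from now
runTime = (datetime.now() + timedelta(minutes=1)).strftime('%H:%M')

def job_that_executes_once():
    print('Only Once')
    return schedule.CancelJob

schedule.every().day.at(runTime).do(job_that_executes_once)

while True:
    schedule.run_pending()
    time.sleep(1)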

I need a major schedule file, and inside it I also need to run a sub-schedule file. How can I do that? I tried crontab to schedule the major file

I am not able to schedule the sub-schedule file (which should run only once).
30 1 * * * python3 majorfile.py
The major file is scripted like this:
import datetime
import time
import schedule

if today == some_specific_date:  # pseudo-code: the sub file should only run on one specific date
    def run1():
        exec(open('sub_file.py').read())
        return schedule.CancelJob
    schedule.every().day.at("05:30").do(run1)
(this job needs to run only once)
It is the second part that I am facing the issue with (this part also involves situations like concurrent events). It is not running. Can anyone help me with this?
Error: the pending jobs must be checked in a loop.
Code:
import datetime
from time import sleep
import schedule

def run1():
    exec(open('sub_file.py').read())
    return schedule.CancelJob

schedule.every().day.at("05:30").do(run1)

while True:
    schedule.run_pending()
    sleep(1)
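A hedged sketch of one way to handle the "only on a specific date" requirement: let the daily schedule fire every day, check the date inside the job, and only then run the sub file. Running it with subprocess instead of exec is an assumption here, as is the example date.
import datetime
import subprocess
import time
import schedule

TARGET_DATE = datetime.date(2023, 1, 30)  # hypothetical "some specific date"

def run1():
    if datetime.date.today() == TARGET_DATE:
        subprocess.run(['python3', 'sub_file.py'])  # run the sub-schedule file in its own interpreter
        return schedule.CancelJob  # run only once

schedule.every().day.at("05:30").do(run1)

while True:
    schedule.run_pending()
    time.sleep(1)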

How can I make a task run every hour on the dot? [duplicate]

I want to repeatedly execute a function in Python every 60 seconds forever (just like an NSTimer in Objective C or setTimeout in JS). This code will run as a daemon and is effectively like calling the python script every minute using a cron, but without requiring that to be set up by the user.
In this question about a cron implemented in Python, the solution appears to effectively just sleep() for x seconds. I don't need such advanced functionality, so perhaps something like this would work:
while True:
    # Code executed here
    time.sleep(60)
Are there any foreseeable problems with this code?
If your program doesn't have an event loop already, use the sched module, which implements a general purpose event scheduler.
import sched, time

def do_something(scheduler):
    # schedule the next call first
    scheduler.enter(60, 1, do_something, (scheduler,))
    print("Doing stuff...")
    # then do your stuff

my_scheduler = sched.scheduler(time.time, time.sleep)
my_scheduler.enter(60, 1, do_something, (my_scheduler,))
my_scheduler.run()
If you're already using an event loop library like asyncio, trio, tkinter, PyQt5, gobject, kivy, and many others - just schedule the task using your existing event loop library's methods, instead.
Lock your time loop to the system clock like this:
import time

starttime = time.time()
while True:
    print("tick")
    time.sleep(60.0 - ((time.time() - starttime) % 60.0))
If you want a non-blocking way to execute your function periodically, instead of a blocking infinite loop I'd use a threaded timer. This way your code can keep running and perform other tasks and still have your function called every n seconds. I use this technique a lot for printing progress info on long, CPU/Disk/Network intensive tasks.
Here's the code I've posted in a similar question, with start() and stop() control:
from threading import Timer

class RepeatedTimer(object):
    def __init__(self, interval, function, *args, **kwargs):
        self._timer = None
        self.interval = interval
        self.function = function
        self.args = args
        self.kwargs = kwargs
        self.is_running = False
        self.start()

    def _run(self):
        self.is_running = False
        self.start()
        self.function(*self.args, **self.kwargs)

    def start(self):
        if not self.is_running:
            self._timer = Timer(self.interval, self._run)
            self._timer.start()
            self.is_running = True

    def stop(self):
        self._timer.cancel()
        self.is_running = False
Usage:
from time import sleep

def hello(name):
    print("Hello %s!" % name)

print("starting...")
rt = RepeatedTimer(1, hello, "World")  # it auto-starts, no need of rt.start()
try:
    sleep(5)  # your long-running job goes here...
finally:
    rt.stop()  # better in a try/finally block to make sure the program ends!
Features:
Standard library only, no external dependencies
start() and stop() are safe to call multiple times even if the timer has already started/stopped
function to be called can have positional and named arguments
You can change interval at any time; it will take effect after the next run. The same goes for args, kwargs, and even function (see the short sketch below).
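For that last point, a minimal usage sketch, reusing rt and hello from the example above (the values are illustrative, not from the original answer):
rt.interval = 5           # takes effect after the next run
rt.args = ("Python",)     # subsequent calls become hello("Python")
rt.function = hello       # the callable itself can be swapped the same way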
You might want to consider Twisted, which is a Python networking library that implements the Reactor Pattern.
from twisted.internet import task, reactor

timeout = 60.0  # Sixty seconds

def doWork():
    # do work here
    pass

l = task.LoopingCall(doWork)
l.start(timeout)  # call every sixty seconds
reactor.run()
While "while True: sleep(60)" will probably work Twisted probably already implements many of the features that you will eventually need (daemonization, logging or exception handling as pointed out by bobince) and will probably be a more robust solution
Here's an update to the code from MestreLion that avoids drifting over time.
The RepeatedTimer class here calls the given function every "interval" seconds as requested by the OP; the schedule doesn't depend on how long the function takes to execute. I like this solution since it doesn't have external library dependencies; this is just pure Python.
import threading
import time

class RepeatedTimer(object):
    def __init__(self, interval, function, *args, **kwargs):
        self._timer = None
        self.interval = interval
        self.function = function
        self.args = args
        self.kwargs = kwargs
        self.is_running = False
        self.next_call = time.time()
        self.start()

    def _run(self):
        self.is_running = False
        self.start()
        self.function(*self.args, **self.kwargs)

    def start(self):
        if not self.is_running:
            self.next_call += self.interval
            self._timer = threading.Timer(self.next_call - time.time(), self._run)
            self._timer.start()
            self.is_running = True

    def stop(self):
        self._timer.cancel()
        self.is_running = False
Sample usage (copied from MestreLion's answer):
from time import sleep

def hello(name):
    print("Hello %s!" % name)

print("starting...")
rt = RepeatedTimer(1, hello, "World")  # it auto-starts, no need of rt.start()
try:
    sleep(5)  # your long-running job goes here...
finally:
    rt.stop()  # better in a try/finally block to make sure the program ends!
import time, traceback

def every(delay, task):
    next_time = time.time() + delay
    while True:
        time.sleep(max(0, next_time - time.time()))
        try:
            task()
        except Exception:
            traceback.print_exc()
            # in production code you might want to have this instead of course:
            # logger.exception("Problem while executing repetitive task.")
        # skip tasks if we are behind schedule:
        next_time += (time.time() - next_time) // delay * delay + delay

def foo():
    print("foo", time.time())

every(5, foo)
If you want to do this without blocking your remaining code, you can use this to let it run in its own thread:
import threading
threading.Thread(target=lambda: every(5, foo)).start()
This solution combines several features rarely found combined in the other solutions:
Exception handling: As far as possible at this level, exceptions are handled properly, i.e. they get logged for debugging purposes without aborting the program.
No chaining: The common chain-like implementation (for scheduling the next event) found in many answers is brittle in that if anything goes wrong within the scheduling mechanism (threading.Timer or whatever), the chain terminates. No further executions happen then, even if the cause of the problem has already been fixed. A simple loop and waiting with a simple sleep() is much more robust in comparison.
No drift: My solution keeps exact track of the times it is supposed to run at. There is no drift depending on the execution time (as in many other solutions).
Skipping: My solution will skip tasks if one execution took too much time (e.g. do X every five seconds, but X took 6 seconds). This is the standard cron behavior (and for a good reason). Many other solutions then simply execute the task several times in a row without any delay. For most cases (e.g. cleanup tasks) this is not desired. If it is desired, simply use next_time += delay instead.
The easiest way, I believe, is:
import time

def executeSomething():
    # code here
    time.sleep(60)

while True:
    executeSomething()
This way your code is executed, then it waits 60 seconds, then it executes again, waits, executes, and so on...
No need to complicate things :D
I ended up using the schedule module. The API is nice.
import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every(5).to(10).minutes.do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)
schedule.every().minute.at(":17").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
An alternative, flexible solution is APScheduler.
pip install apscheduler
from apscheduler.schedulers.blocking import BlockingScheduler

def print_t():
    pass

sched = BlockingScheduler()
sched.add_job(print_t, 'interval', seconds=60)  # run print_t every 60 seconds
sched.start()
Also, APScheduler provides several schedulers, as follows.
BlockingScheduler: use when the scheduler is the only thing running in your process
BackgroundScheduler: use when you’re not using any of the frameworks below, and want the scheduler to run in the background inside your application
AsyncIOScheduler: use if your application uses the asyncio module
GeventScheduler: use if your application uses gevent
TornadoScheduler: use if you’re building a Tornado application
TwistedScheduler: use if you’re building a Twisted application
QtScheduler: use if you’re building a Qt application
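Since the question title asks for "every hour on the dot", a hedged sketch using APScheduler's cron trigger (the job body is illustrative):
from apscheduler.schedulers.blocking import BlockingScheduler

def hourly_job():
    print("running at the top of the hour")

sched = BlockingScheduler()
sched.add_job(hourly_job, 'cron', minute=0)  # fires at HH:00 every hour
sched.start()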
I faced a similar problem some time back. Maybe http://cronus.readthedocs.org might help?
For v0.2, the following snippet works
import cronus.beat as beat

beat.set_rate(2)  # run twice per second
while beat.true():
    # do some time consuming work here
    beat.sleep()  # total loop duration would be 0.5 sec
The main difference between that and cron is that an exception will kill the daemon for good. You might want to wrap with an exception catcher and logger.
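A minimal sketch of such a wrapper, reusing the cronus beat loop from above (the logging setup is an illustrative assumption, not part of the original answer):
import logging
import cronus.beat as beat

logging.basicConfig(level=logging.INFO)
beat.set_rate(2)  # run twice per second

while beat.true():
    try:
        # do some time consuming work here
        pass
    except Exception:
        logging.exception("Periodic task failed; continuing with the next beat")
    beat.sleep()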
If drift is not a concern
import threading, time

def print_every_n_seconds(n=2):
    while True:
        print(time.ctime())
        time.sleep(n)

thread = threading.Thread(target=print_every_n_seconds, daemon=True)
thread.start()
Which asynchronously outputs.
#Tue Oct 16 17:29:40 2018
#Tue Oct 16 17:29:42 2018
#Tue Oct 16 17:29:44 2018
If the task being run takes an appreciable amount of time, the interval becomes 2 seconds plus the task time, so if you need precise scheduling this is not for you.
Note that the daemon=True flag means this thread won't block the app from shutting down. For example, I had an issue where pytest would hang indefinitely after running tests, waiting for this thread to cease.
Simply use
import time

while True:
    print("this will run after every 30 sec")
    # Your code here
    time.sleep(30)
One possible answer:
import time

t = time.time()
while True:
    if time.time() - t > 10:
        # run your task here
        t = time.time()
    # note: this loop busy-waits; a short time.sleep() here would reduce CPU usage
I use Tkinter after() method, which doesn't "steal the game" (like the sched module that was presented earlier), i.e. it allows other things to run in parallel:
import tkinter

def do_something1():
    global n1
    n1 += 1
    if n1 == 6:  # (Optional condition)
        print("* do_something1() is done *")
        return
    # Do your stuff here
    # ...
    print("do_something1() " + str(n1))
    tk.after(1000, do_something1)

def do_something2():
    global n2
    n2 += 1
    if n2 == 6:  # (Optional condition)
        print("* do_something2() is done *")
        return
    # Do your stuff here
    # ...
    print("do_something2() " + str(n2))
    tk.after(500, do_something2)

tk = tkinter.Tk()
n1 = 0
n2 = 0
do_something1()
do_something2()
tk.mainloop()
do_something1() and do_something2() can run in parallel and at whatever interval speed. Here, the second one will be executed twice as fast. Note also that I have used a simple counter as a condition to terminate either function. You can use whatever other condition you like, or none, if you want a function to run until the program terminates (e.g. a clock).
Here's an adapted version of the code from MestreLion.
In addition to the original function, this code:
1) adds first_interval, used to fire the timer after a specific initial delay (the caller needs to calculate first_interval and pass it in)
2) solves a race condition in the original code. In the original code, if the control thread fails to cancel the running timer ("Stop the timer, and cancel the execution of the timer's action. This will only work if the timer is still in its waiting stage." quoted from https://docs.python.org/2/library/threading.html), the timer will run endlessly.
import syslog
import traceback
from threading import Timer

# log_print is assumed to be the author's own logging helper
class RepeatedTimer(object):
    def __init__(self, first_interval, interval, func, *args, **kwargs):
        self.timer = None
        self.first_interval = first_interval
        self.interval = interval
        self.func = func
        self.args = args
        self.kwargs = kwargs
        self.running = False
        self.is_started = False

    def first_start(self):
        try:
            # no race condition here because only the control thread calls this method;
            # if already started it will not start again
            if not self.is_started:
                self.is_started = True
                self.timer = Timer(self.first_interval, self.run)
                self.running = True
                self.timer.start()
        except Exception as e:
            log_print(syslog.LOG_ERR, "timer first_start failed %s %s" % (e, traceback.format_exc()))
            raise

    def run(self):
        # if not stopped, start again
        if self.running:
            self.timer = Timer(self.interval, self.run)
            self.timer.start()
        self.func(*self.args, **self.kwargs)

    def stop(self):
        # cancelling the current timer is OK even if the cancel fails;
        # stopping an already stopped timer is also harmless
        if self.timer:
            self.timer.cancel()
        self.running = False
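A hedged usage sketch for this class (tick and the chosen intervals are illustrative, not part of the original answer):
import time
from datetime import datetime

def tick():
    print("tick", datetime.now())

# fire once after 2 seconds, then repeat every 5 seconds
rt = RepeatedTimer(first_interval=2, interval=5, func=tick)
rt.first_start()

time.sleep(20)  # the main program does its own work here
rt.stop()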
Here is another solution without using any extra libraries.
import time

def delay_until(condition_fn, interval_in_sec, timeout_in_sec):
    """Delay using a boolean callable function.

    `condition_fn` is invoked every `interval_in_sec` until `timeout_in_sec`.
    It can break early if the condition is met.

    Args:
        condition_fn - a callable boolean function
        interval_in_sec - wait time between calls to `condition_fn`
        timeout_in_sec - maximum time to run

    Returns: None
    """
    start = last_call = time.time()
    while time.time() - start < timeout_in_sec:
        if (time.time() - last_call) > interval_in_sec:
            if condition_fn() is True:
                break
            last_call = time.time()
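A hedged usage sketch (server_is_ready is a hypothetical stand-in condition, not from the original answer):
import random

def server_is_ready():
    # hypothetical condition: succeeds roughly one time in five
    return random.random() < 0.2

# poll the condition every 2 seconds, giving up after 30 seconds
delay_until(server_is_ready, interval_in_sec=2, timeout_in_sec=30)
print("done waiting")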
I use this to cause 60 events per hour with most events occurring at the same number of seconds after the whole minute:
import math
import time
import random

TICK = 60          # one minute tick size
TICK_TIMING = 59   # execute on the 59th second of the tick
TICK_MINIMUM = 30  # minimum catch-up tick size when lagging

info = {}  # shared timing state

def set_timing():
    now = time.time()
    elapsed = now - info['begin']
    minutes = math.floor(elapsed / TICK)
    tick_elapsed = now - info['completion_time']
    if (info['tick'] + 1) > minutes:
        wait = max(0, (TICK_TIMING - (time.time() % TICK)))
        print('standard wait: %.2f' % wait)
        time.sleep(wait)
    elif tick_elapsed < TICK_MINIMUM:
        wait = TICK_MINIMUM - tick_elapsed
        print('minimum wait: %.2f' % wait)
        time.sleep(wait)
    else:
        print('skip set_timing(); no wait')
    drift = ((time.time() - info['begin']) - info['tick'] * TICK -
             TICK_TIMING + info['begin'] % TICK)
    print('drift: %.6f' % drift)

info['tick'] = 0
info['begin'] = time.time()
info['completion_time'] = info['begin'] - TICK

while 1:
    set_timing()
    print('hello world')
    # random real world event
    time.sleep(random.random() * TICK_MINIMUM)
    info['tick'] += 1
    info['completion_time'] = time.time()
Depending upon actual conditions you might get ticks of length:
60,60,62,58,60,60,120,30,30,60,60,60,60,60...etc.
but at the end of 60 minutes you'll have 60 ticks, and most of them will occur at the correct offset to the minute you prefer.
On my system I get a typical drift of < 1/20th of a second until the need for correction arises.
The advantage of this method is its handling of clock drift, which can cause issues if you're doing things like appending one item per tick and expecting 60 items appended per hour. Failure to account for drift can cause secondary indicators like moving averages to consider data too deep into the past, resulting in faulty output.
e.g., Display current local time
import datetime
import glib
import logger  # assumed to be the author's own logging helper module

def get_local_time():
    current_time = datetime.datetime.now().strftime("%H:%M")
    logger.info("get_local_time(): %s", current_time)
    return str(current_time)

def display_local_time():
    logger.info("Current time is: %s", get_local_time())
    return True

# call every minute
glib.timeout_add(60*1000, display_local_time)
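On current systems the old static glib bindings are usually replaced by PyGObject; a hedged sketch of the equivalent with gi.repository (the print-based callback is illustrative):
from datetime import datetime
from gi.repository import GLib

def display_local_time():
    print("Current time is:", datetime.now().strftime("%H:%M"))
    return True  # returning True keeps the timeout repeating

GLib.timeout_add(60 * 1000, display_local_time)  # call every minute
GLib.MainLoop().run()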
timed-count can do that to high precision (i.e. < 1 ms) as it's synchronized to the system clock. It won't drift over time and isn't affected by the length of the code execution time (provided that's less than the interval period of course).
A simple, blocking example:
from timed_count import timed_count

for count in timed_count(60):
    # Execute code here exactly every 60 seconds
    ...
You could easily make it non-blocking by running it in a thread:
from threading import Thread
from timed_count import timed_count

def periodic():
    for count in timed_count(60):
        # Execute code here exactly every 60 seconds
        ...

thread = Thread(target=periodic)
thread.start()
"""Track the number of times it prints."""
import threading

count = 0

def printit():
    threading.Timer(timeInterval, printit).start()
    print("Hello, World!")
    global count
    count = count + 1
    print(count)

if __name__ == "__main__":
    timeInterval = int(input('Enter Time in Seconds:'))
    printit()
I think it depends on what you want to do, and your question didn't specify lots of details.
In my case I want to do an expensive operation in one of my already multithreaded processes. So I have the leader process check the time and have only it do the expensive operation (checkpointing a deep learning model). To do this I increase a counter to make sure 5, then 10, then 15 seconds have passed, saving every 5 seconds (or use modular arithmetic with math.floor):
import argparse
import math
import time

def print_every_5_seconds_have_passed_exit_eventually():
    """
    https://stackoverflow.com/questions/3393612/run-certain-code-every-n-seconds
    https://stackoverflow.com/questions/474528/what-is-the-best-way-to-repeatedly-execute-a-function-every-x-seconds
    :return:
    """
    opts = argparse.Namespace(start=time.time())
    next_time_to_print = 0
    while True:
        current_time_passed = time.time() - opts.start
        if current_time_passed >= next_time_to_print:
            next_time_to_print += 5
            print(f'worked and {current_time_passed=}')
            print(f'{current_time_passed % 5=}')
            print(f'{math.floor(current_time_passed % 5) == 0}')
Output:
starting __main__ at __init__
worked and current_time_passed=0.0001709461212158203
current_time_passed % 5=0.0001709461212158203
True
worked and current_time_passed=5.0
current_time_passed % 5=0.0
True
worked and current_time_passed=10.0
current_time_passed % 5=0.0
True
worked and current_time_passed=15.0
current_time_passed % 5=0.0
True
To me, the check in the if statement is what I need. Having threads and schedulers in my already complicated multiprocessing, multi-GPU code is not a complexity I want to add if I can avoid it, and it seems I can. Checking the worker id makes it easy to ensure only one process is doing this.
Note I used the True print statements to make sure the modular arithmetic trick worked, since checking for an exact time is obviously not going to work! But to my pleasant surprise, the floor did the trick.
