Python Threading Timer - activate function every X seconds - python

Is there any simple way to activate the thread to fire up the function every X sec, to display some data?

import threading

def send_data():
    data = "Message from client"
    socket.sendall(data.encode())

write_thread = threading.Thread(target=send_data)
write_thread.start()

You could try the ischedule module - it provides very straightforward syntax for scheduling any given function.
Here's an example straight from the GitHub page:

from ischedule import run_loop, schedule

@schedule(interval=0.1)
def task():
    print("Performing a task")

run_loop(return_after=1)

The return_after param in run_loop() is an optional timeout.
Also, in case you're unfamiliar, the @ syntax is a Python decorator.

A simple way would be this:

import time

while True:
    task()
    time.sleep(1)

Related

Python Run Program Every X seconds? [duplicate]

I want to repeatedly execute a function in Python every 60 seconds forever (just like an NSTimer in Objective C or setTimeout in JS). This code will run as a daemon and is effectively like calling the python script every minute using a cron, but without requiring that to be set up by the user.
In this question about a cron implemented in Python, the solution appears to effectively just sleep() for x seconds. I don't need such advanced functionality, so perhaps something like this would work:

import time

while True:
    # Code executed here
    time.sleep(60)
Are there any foreseeable problems with this code?
If your program doesn't have an event loop already, use the sched module, which implements a general purpose event scheduler.

import sched, time

def do_something(scheduler):
    # schedule the next call first
    scheduler.enter(60, 1, do_something, (scheduler,))
    print("Doing stuff...")
    # then do your stuff

my_scheduler = sched.scheduler(time.time, time.sleep)
my_scheduler.enter(60, 1, do_something, (my_scheduler,))
my_scheduler.run()
If you're already using an event loop library like asyncio, trio, tkinter, PyQt5, gobject, kivy, and many others - just schedule the task using your existing event loop library's methods, instead.
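For example, with plain asyncio, a minimal sketch of that idea (the names here are illustrative) could look like this:

import asyncio

async def periodic(interval, func):
    # run func every `interval` seconds inside the event loop
    while True:
        await asyncio.sleep(interval)
        func()

async def main():
    task = asyncio.create_task(periodic(60, lambda: print("Doing stuff...")))
    await asyncio.sleep(180)  # the rest of your program runs here
    task.cancel()

asyncio.run(main())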
Lock your time loop to the system clock like this:
import time

starttime = time.time()
while True:
    print("tick")
    time.sleep(60.0 - ((time.time() - starttime) % 60.0))
If you want a non-blocking way to execute your function periodically, instead of a blocking infinite loop I'd use a threaded timer. This way your code can keep running and perform other tasks and still have your function called every n seconds. I use this technique a lot for printing progress info on long, CPU/Disk/Network intensive tasks.
Here's the code I've posted in a similar question, with start() and stop() control:
from threading import Timer

class RepeatedTimer(object):
    def __init__(self, interval, function, *args, **kwargs):
        self._timer = None
        self.interval = interval
        self.function = function
        self.args = args
        self.kwargs = kwargs
        self.is_running = False
        self.start()

    def _run(self):
        self.is_running = False
        self.start()
        self.function(*self.args, **self.kwargs)

    def start(self):
        if not self.is_running:
            self._timer = Timer(self.interval, self._run)
            self._timer.start()
            self.is_running = True

    def stop(self):
        self._timer.cancel()
        self.is_running = False
Usage:
from time import sleep

def hello(name):
    print("Hello %s!" % name)

print("starting...")
rt = RepeatedTimer(1, hello, "World")  # it auto-starts, no need of rt.start()
try:
    sleep(5)  # your long-running job goes here...
finally:
    rt.stop()  # better in a try/finally block to make sure the program ends!
Features:
Standard library only, no external dependencies
start() and stop() are safe to call multiple times even if the timer has already started/stopped
function to be called can have positional and named arguments
You can change interval anytime; it will take effect after the next run. The same goes for args, kwargs and even function (see the sketch below)!
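For example, a short sketch based on the class above (goodbye is a hypothetical replacement function):

def goodbye(name):
    print("Goodbye %s!" % name)

rt = RepeatedTimer(1, hello, "World")
sleep(3)
rt.interval = 5          # takes effect after the next run
rt.args = ("Python",)    # so do args and kwargs...
rt.function = goodbye    # ...and even the function itself
sleep(15)
rt.stop()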
You might want to consider Twisted which is a Python networking library that implements the Reactor Pattern.
from twisted.internet import task, reactor

timeout = 60.0  # Sixty seconds

def doWork():
    # do work here
    pass

l = task.LoopingCall(doWork)
l.start(timeout)  # call every sixty seconds
reactor.run()
While "while True: sleep(60)" will probably work Twisted probably already implements many of the features that you will eventually need (daemonization, logging or exception handling as pointed out by bobince) and will probably be a more robust solution
Here's an update to the code from MestreLion that avoids drifting over time.
The RepeatedTimer class here calls the given function every "interval" seconds as requested by the OP; the schedule doesn't depend on how long the function takes to execute. I like this solution since it doesn't have external library dependencies; this is just pure Python.
import threading
import time

class RepeatedTimer(object):
    def __init__(self, interval, function, *args, **kwargs):
        self._timer = None
        self.interval = interval
        self.function = function
        self.args = args
        self.kwargs = kwargs
        self.is_running = False
        self.next_call = time.time()
        self.start()

    def _run(self):
        self.is_running = False
        self.start()
        self.function(*self.args, **self.kwargs)

    def start(self):
        if not self.is_running:
            self.next_call += self.interval
            self._timer = threading.Timer(self.next_call - time.time(), self._run)
            self._timer.start()
            self.is_running = True

    def stop(self):
        self._timer.cancel()
        self.is_running = False
Sample usage (copied from MestreLion's answer):
from time import sleep

def hello(name):
    print("Hello %s!" % name)

print("starting...")
rt = RepeatedTimer(1, hello, "World")  # it auto-starts, no need of rt.start()
try:
    sleep(5)  # your long-running job goes here...
finally:
    rt.stop()  # better in a try/finally block to make sure the program ends!
import time, traceback

def every(delay, task):
    next_time = time.time() + delay
    while True:
        time.sleep(max(0, next_time - time.time()))
        try:
            task()
        except Exception:
            traceback.print_exc()
            # in production code you might want to have this instead of course:
            # logger.exception("Problem while executing repetitive task.")
        # skip tasks if we are behind schedule:
        next_time += (time.time() - next_time) // delay * delay + delay

def foo():
    print("foo", time.time())

every(5, foo)
If you want to do this without blocking your remaining code, you can use this to let it run in its own thread:
import threading
threading.Thread(target=lambda: every(5, foo)).start()
This solution combines several features that are rarely found together in the other solutions:
Exception handling: As far as possible on this level, exceptions are handled properly, i.e. they get logged for debugging purposes without aborting our program.
No chaining: The common chain-like implementation (for scheduling the next event) you find in many answers is brittle in that if anything goes wrong within the scheduling mechanism (threading.Timer or whatever), this will terminate the chain. No further executions will happen then, even if the reason for the problem is already fixed. A simple loop and waiting with a simple sleep() is much more robust in comparison.
No drift: My solution keeps exact track of the times it is supposed to run at. There is no drift depending on the execution time (as in many other solutions).
Skipping: My solution will skip tasks if one execution took too much time (e.g. do X every five seconds, but X took 6 seconds). This is the standard cron behavior (and for a good reason). Many other solutions then simply execute the task several times in a row without any delay. For most cases (e.g. cleanup tasks) this is not desired. If it is desired, simply use next_time += delay instead.
The easiest way, I believe, is:

import time

def executeSomething():
    # code here
    time.sleep(60)

while True:
    executeSomething()

This way your code is executed, then it waits 60 seconds, then it executes again, waits, executes, etc...
No need to complicate things :D
I ended up using the schedule module. The API is nice.
import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every(5).to(10).minutes.do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)
schedule.every().minute.at(":17").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
A flexible alternative is APScheduler.

pip install apscheduler

from apscheduler.schedulers.blocking import BlockingScheduler

def print_t():
    pass

sched = BlockingScheduler()
sched.add_job(print_t, 'interval', seconds=60)  # runs print_t every 60 seconds
sched.start()

Also, APScheduler provides several other schedulers, as follows:
BlockingScheduler: use when the scheduler is the only thing running in your process
BackgroundScheduler: use when you’re not using any of the frameworks below, and want the scheduler to run in the background inside your application
AsyncIOScheduler: use if your application uses the asyncio module
GeventScheduler: use if your application uses gevent
TornadoScheduler: use if you’re building a Tornado application
TwistedScheduler: use if you’re building a Twisted application
QtScheduler: use if you’re building a Qt application
I faced a similar problem some time back. Maybe http://cronus.readthedocs.org might help?
For v0.2, the following snippet works:

import cronus.beat as beat

beat.set_rate(2)  # run twice per second
while beat.true():
    # do some time consuming work here
    beat.sleep()  # total loop duration would be 0.5 sec
The main difference between that and cron is that an exception will kill the daemon for good. You might want to wrap with an exception catcher and logger.
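For example, a rough sketch of such a wrapper, reusing the beat API from the snippet above (do_work is a hypothetical task function):

import logging
import cronus.beat as beat

logger = logging.getLogger(__name__)

def do_work():
    pass  # hypothetical time-consuming task

beat.set_rate(2)  # run twice per second
while beat.true():
    try:
        do_work()
    except Exception:
        # log the failure but keep the loop (and the daemon) alive
        logger.exception("Problem while executing repetitive task.")
    beat.sleep()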
If drift is not a concern:

import threading, time

def print_every_n_seconds(n=2):
    while True:
        print(time.ctime())
        time.sleep(n)

thread = threading.Thread(target=print_every_n_seconds, daemon=True)
thread.start()

This outputs asynchronously:
#Tue Oct 16 17:29:40 2018
#Tue Oct 16 17:29:42 2018
#Tue Oct 16 17:29:44 2018
If the task being run takes an appreciable amount of time, then the interval becomes 2 seconds + task time, so if you need precise scheduling then this is not for you (but see the variation below).
Note the daemon=True flag means this thread won't block the app from shutting down. For example, I had an issue where pytest would hang indefinitely after running tests, waiting for this thread to cease.
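If you do need the ticks to stay close to n seconds, a small variation of the same daemon thread can subtract the task's own runtime from the sleep (the clock-locking idea shown earlier):

import threading, time

def print_every_n_seconds(n=2):
    start = time.time()
    while True:
        print(time.ctime())
        # sleep only for the remainder of the interval
        time.sleep(n - ((time.time() - start) % n))

threading.Thread(target=print_every_n_seconds, daemon=True).start()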
Simply use:

import time

while True:
    print("this will run after every 30 sec")
    # Your code here
    time.sleep(30)
One possible answer:
import time

t = time.time()
while True:
    if time.time() - t > 10:
        # run your task here
        t = time.time()
I use the Tkinter after() method, which doesn't "steal the game" (like the sched module that was presented earlier), i.e. it allows other things to run in parallel:

import tkinter

def do_something1():
    global n1
    n1 += 1
    if n1 == 6:  # (Optional condition)
        print("* do_something1() is done *")
        return
    # Do your stuff here
    # ...
    print("do_something1() " + str(n1))
    tk.after(1000, do_something1)

def do_something2():
    global n2
    n2 += 1
    if n2 == 6:  # (Optional condition)
        print("* do_something2() is done *")
        return
    # Do your stuff here
    # ...
    print("do_something2() " + str(n2))
    tk.after(500, do_something2)

tk = tkinter.Tk()
n1 = 0; n2 = 0
do_something1()
do_something2()
tk.mainloop()

do_something1() and do_something2() can run in parallel and at whatever interval speed. Here, the second one will be executed twice as fast. Note also that I have used a simple counter as a condition to terminate either function. You can use whatever other condition you like, or none if you want a function to run until the program terminates (e.g. a clock).
Here's an adapted version of the code from MestreLion.
In addition to the original function, this code:
1) adds first_interval, used to fire the timer at a specific time (the caller needs to calculate first_interval and pass it in)
2) solves a race condition in the original code. In the original code, if the control thread failed to cancel the running timer ("Stop the timer, and cancel the execution of the timer's action. This will only work if the timer is still in its waiting stage." quoted from https://docs.python.org/2/library/threading.html), the timer would run endlessly.
from threading import Timer
import syslog
import traceback

class RepeatedTimer(object):
    def __init__(self, first_interval, interval, func, *args, **kwargs):
        self.timer = None
        self.first_interval = first_interval
        self.interval = interval
        self.func = func
        self.args = args
        self.kwargs = kwargs
        self.running = False
        self.is_started = False

    def first_start(self):
        try:
            # no race condition here because only the control thread will call this method
            # if already started, it will not start again
            if not self.is_started:
                self.is_started = True
                self.timer = Timer(self.first_interval, self.run)
                self.running = True
                self.timer.start()
        except Exception as e:
            # log_print is the author's own logging helper
            log_print(syslog.LOG_ERR, "timer first_start failed %s %s" % (e, traceback.format_exc()))
            raise

    def run(self):
        # if not stopped, start again
        if self.running:
            self.timer = Timer(self.interval, self.run)
            self.timer.start()
        self.func(*self.args, **self.kwargs)

    def stop(self):
        # cancelling the current timer is OK even if cancellation fails
        # stopping an already-stopped timer is also harmless
        if self.timer:
            self.timer.cancel()
        self.running = False
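Hypothetical usage of this variant (not part of the original answer): fire the first call after 5 seconds, then every 10 seconds:

import time

def poll():
    print("polling...")

rt = RepeatedTimer(first_interval=5, interval=10, func=poll)
rt.first_start()
try:
    time.sleep(60)  # your long-running work goes here
finally:
    rt.stop()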
Here is another solution without using any extra libraries.

import time

def delay_until(condition_fn, interval_in_sec, timeout_in_sec):
    """Delay using a boolean callable function.

    `condition_fn` is invoked every `interval_in_sec` until `timeout_in_sec`.
    It can break early if the condition is met.

    Args:
        condition_fn - a callable boolean function
        interval_in_sec - wait time between calling `condition_fn`
        timeout_in_sec - maximum time to run

    Returns: None
    """
    start = last_call = time.time()
    while time.time() - start < timeout_in_sec:
        if (time.time() - last_call) > interval_in_sec:
            if condition_fn() is True:
                break
            last_call = time.time()
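Hypothetical usage, polling a readiness check every 2 seconds for at most 60 seconds (server_is_ready stands in for your own condition):

def server_is_ready():
    # replace with your own boolean check
    return False

delay_until(server_is_ready, interval_in_sec=2, timeout_in_sec=60)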
I use this to cause 60 events per hour with most events occurring at the same number of seconds after the whole minute:
import math
import time
import random

TICK = 60  # one minute tick size
TICK_TIMING = 59  # execute on 59th second of the tick
TICK_MINIMUM = 30  # minimum catch up tick size when lagging

def set_timing():
    now = time.time()
    elapsed = now - info['begin']
    minutes = math.floor(elapsed / TICK)
    tick_elapsed = now - info['completion_time']
    if (info['tick'] + 1) > minutes:
        wait = max(0, (TICK_TIMING - (time.time() % TICK)))
        print('standard wait: %.2f' % wait)
        time.sleep(wait)
    elif tick_elapsed < TICK_MINIMUM:
        wait = TICK_MINIMUM - tick_elapsed
        print('minimum wait: %.2f' % wait)
        time.sleep(wait)
    else:
        print('skip set_timing(); no wait')
    drift = ((time.time() - info['begin']) - info['tick'] * TICK -
             TICK_TIMING + info['begin'] % TICK)
    print('drift: %.6f' % drift)

info = dict()
info['tick'] = 0
info['begin'] = time.time()
info['completion_time'] = info['begin'] - TICK

while 1:
    set_timing()
    print('hello world')
    # random real world event
    time.sleep(random.random() * TICK_MINIMUM)
    info['tick'] += 1
    info['completion_time'] = time.time()
Depending upon actual conditions you might get ticks of length:
60,60,62,58,60,60,120,30,30,60,60,60,60,60...etc.
but at the end of 60 minutes you'll have 60 ticks; and most of them will occur at the correct offset to the minute you prefer.
On my system I get typical drift of < 1/20th of a second until need for correction arises.
The advantage of this method is resolution of clock drift; which can cause issues if you're doing things like appending one item per tick and you expect 60 items appended per hour. Failure to account for drift can cause secondary indications like moving averages to consider data too deep into the past resulting in faulty output.
e.g., Display current local time
import datetime
import glib
import logger

def get_local_time():
    current_time = datetime.datetime.now().strftime("%H:%M")
    logger.info("get_local_time(): %s", current_time)
    return str(current_time)

def display_local_time():
    logger.info("Current time is: %s", get_local_time())
    return True

# call every minute
glib.timeout_add(60 * 1000, display_local_time)
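Note that the callback must keep returning True to stay scheduled, and a running GLib main loop is required for the timeout to fire. A rough equivalent with the modern PyGObject bindings (assuming the gi package is installed):

from gi.repository import GLib

def display_local_time():
    print("tick")
    return True  # returning False (or None) cancels the timeout

GLib.timeout_add(60 * 1000, display_local_time)  # call every minute
GLib.MainLoop().run()  # timeouts only fire while a main loop is running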
timed-count can do that to high precision (i.e. < 1 ms) as it's synchronized to the system clock. It won't drift over time and isn't affected by the length of the code execution time (provided that's less than the interval period of course).
A simple, blocking example:
from timed_count import timed_count

for count in timed_count(60):
    # Execute code here exactly every 60 seconds
    ...
You could easily make it non-blocking by running it in a thread:
from threading import Thread
from timed_count import timed_count

def periodic():
    for count in timed_count(60):
        # Execute code here exactly every 60 seconds
        ...

thread = Thread(target=periodic)
thread.start()
''' tracking the number of times it prints '''
import threading

count = 0

def printit():
    threading.Timer(timeInterval, printit).start()
    print("Hello, World!")
    global count
    count = count + 1
    print(count)

if __name__ == "__main__":
    timeInterval = int(input('Enter Time in Seconds:'))
    printit()
I think it depends what you want to do and your question didn't specify lots of details.
For me, I want to do an expensive operation in one of my already multithreaded processes. So I have that leader process check the time, and only it does the expensive op (checkpointing a deep learning model). To do this, I increase a counter to make sure 5, then 10, then 15 seconds have passed, to save every 5 seconds (or use modular arithmetic with math.floor):
import argparse
import math
import time

def print_every_5_seconds_have_passed_exit_eventually():
    """
    https://stackoverflow.com/questions/3393612/run-certain-code-every-n-seconds
    https://stackoverflow.com/questions/474528/what-is-the-best-way-to-repeatedly-execute-a-function-every-x-seconds
    :return:
    """
    opts = argparse.Namespace(start=time.time())
    next_time_to_print = 0
    while True:
        current_time_passed = time.time() - opts.start
        if current_time_passed >= next_time_to_print:
            next_time_to_print += 5
            print(f'worked and {current_time_passed=}')
            print(f'{current_time_passed % 5=}')
            print(f'{math.floor(current_time_passed % 5) == 0}')
starting __main__ at __init__
worked and current_time_passed=0.0001709461212158203
current_time_passed % 5=0.0001709461212158203
True
worked and current_time_passed=5.0
current_time_passed % 5=0.0
True
worked and current_time_passed=10.0
current_time_passed % 5=0.0
True
worked and current_time_passed=15.0
current_time_passed % 5=0.0
True
To me, the check in the if statement is what I need. Having threads or schedulers in my already complicated multiprocessing, multi-GPU code is not a complexity I want to add if I can avoid it, and it seems I can. Checking the worker id is easy, to make sure only one process is doing this.
Note I used the True print statements to really make sure the modular arithmetic trick worked, since checking for an exact time is obviously not going to work! But to my pleasant surprise, the floor did the trick.

Python - If nothing happens for 1 minute, proceed code

I am writing a script which sends a serial message over a websocket to a device. When I want to start the device I write:
def start(ws):
    """
    Function to send the start command
    """
    print("start")
    command = dict()
    command["commandId"] = 601
    command["id"] = 54321
    command["params"] = {}
    send_command(ws, command)
Every 5 hours or so the device restarts; during the restart, my start function's request does not run and my code stops completely.
My question is, is there a way to tell python: "If nothing has happened for 1 minute, try again"
It's not clear exactly what ws is or how you set it up; but you want to add a timeout to the socket.
https://websockets.readthedocs.io/en/stable/api.html#websockets.client.connect has a timeout keyword; refer to the documentation for details about what it does.
If this is not the websocket library you are using, please update your question with details.
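If you end up needing a library-agnostic fallback, one sketch (standard library only, reusing send_command from the question) is to run each attempt in a worker thread and retry when it doesn't finish within a minute:

import threading

def send_with_retry(ws, command, timeout=60):
    # keep retrying until one attempt finishes within `timeout` seconds
    while True:
        worker = threading.Thread(target=send_command, args=(ws, command), daemon=True)
        worker.start()
        worker.join(timeout)
        if not worker.is_alive():
            return  # the command completed in time
        # nothing happened for `timeout` seconds: abandon the stuck attempt and try again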
You can use sleep from the time module:

import time

time.sleep(60)  # waits for 1 minute

Also, do consider multithreading for sleep:

import threading
import time

def print_hello():
    for i in range(4):
        time.sleep(0.5)
        print("Hello")

def print_hi():
    for i in range(4):
        time.sleep(0.7)
        print("Hi")

t1 = threading.Thread(target=print_hello)
t2 = threading.Thread(target=print_hi)
t1.start()
t2.start()

The above program has two threads, which use time.sleep(0.5) and time.sleep(0.7) to suspend execution for 0.5 and 0.7 seconds respectively.

How do I run a task in the background with a delay?

I have the following code:
import time

def wait10seconds():
    for i in range(10):
        time.sleep(1)
    return 'Counted to 10!'

print(wait10seconds())
print('test')
Now my question is how do you make print('test') run before the function wait10seconds() is executed without exchanging the 2 lines.
I want the output to be the following:
test
Counted to 10!
Anyone know how to fix this?
You can use threads for this, like:
from threading import Thread
my_thread = Thread(target=wait10seconds) # Create a new thread that exec the function
my_thread.start() # start it
print('test') # print the test
my_thread.join() # wait for the function to end
You can use a Timer. Taken from the Python docs page:
from threading import Timer

def hello():
    print("hello, world")

t = Timer(30.0, hello)
t.start()  # after 30 seconds, "hello, world" will be printed
If you are using Python 3.5+, you can use asyncio:

import asyncio

async def wait10seconds():
    for i in range(10):
        await asyncio.sleep(1)
    return 'Counted to 10!'

print(asyncio.run(wait10seconds()))
asyncio.run is new to Python 3.7; for Python 3.5 and 3.6 you won't be able to use asyncio.run, but you can achieve the same thing by working with the event loop directly, as sketched below.
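A sketch of the pre-3.7 equivalent:

import asyncio

loop = asyncio.get_event_loop()
try:
    print(loop.run_until_complete(wait10seconds()))
finally:
    loop.close()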

Make this code non blocking

I'm using the VSphere API, here are the lines that I'm dealing with:
task = vm.PowerOff()
while task.info.state not in [vim.TaskInfo.State.success, vim.TaskInfo.State.error]:
    time.sleep(1)
    log.info("task {} is running".format(task))
log.info("task {} is done".format(task))
The problem here is that this blocks the execution completely while the task is not finished. I would like the logging part to be run "in parallel", so I can start other tasks.
I thought about creating a function that would accept a task as a parameter and poll the info.state attribute just like now, but how do I make this non-blocking?
EDIT: I'm using Python 2.7
You could use asyncio and create an event loop. You can use asyncio.async() to create an asynchronous task that won't block the event loop execution.
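A rough sketch of that idea on Python 3 (where asyncio.async() has since been renamed asyncio.ensure_future()); it assumes the same vm, vim and log objects as the question:

import asyncio

async def wait_for_task(task):
    while task.info.state not in [vim.TaskInfo.State.success, vim.TaskInfo.State.error]:
        await asyncio.sleep(1)  # yields to the event loop instead of blocking it
        log.info("task {} is running".format(task))
    log.info("task {} is done".format(task))

loop = asyncio.get_event_loop()
loop.create_task(wait_for_task(vm.PowerOff()))  # or asyncio.ensure_future(...)
loop.run_forever()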
Here is an example of using the threading module:

import threading
import time

class VMShutdownThread(threading.Thread):
    def __init__(self, vm):
        threading.Thread.__init__(self)  # required so the Thread machinery is set up
        self.vm = vm

    def run(self):
        task = self.vm.PowerOff()
        while task.info.state not in [vim.TaskInfo.State.success, vim.TaskInfo.State.error]:
            time.sleep(1)
            log.info("task {} is running".format(task))
        log.info("task {} is done".format(task))

vm_shutdown_thread = VMShutdownThread(vm)
vm_shutdown_thread.start()
If you create a logger, you can configure it to print the thread name.
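For example, a minimal logging setup that includes the thread name in every line:

import logging

logging.basicConfig(
    format="%(asctime)s [%(threadName)s] %(levelname)s: %(message)s",
    level=logging.INFO,
)
log = logging.getLogger(__name__)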

Returning value from thread in python without blocking main thread

I have got an XMLRPC server, and a client runs some functions on the server and gets the returned value. If the function executes quickly then everything is fine, but I have got a function that reads from a file and returns some value to the user. Reading takes about a minute (there is some complicated stuff), and when one client runs this function on the server, the server is not able to respond to other users until the function is done.
I would like to create a new thread that will read this file and return the value for the user. Is it possible somehow?
Are there any good solutions/patterns to avoid blocking the server when one client runs a long function?
Yes, it is possible, this way (the methods below are excerpted from a class, hence the self parameters):

import threading
from functools import partial

# starting the thread
def start_thread(self):
    threading.Thread(target=self.new_thread, args=()).start()

# the thread where you run your logic
def new_thread(self, *args):
    # call the function you want to retrieve data from
    value_returned = partial(self.retrieved_data_func, arg0)

# the function that returns
def retrieved_data_func(self):
    arg0 = 0
    return arg0
Yes, using the threading module you can spawn new threads. See the documentation. An example would be this:
import threading
import time

def main():
    print("main: 1")
    thread = threading.Thread(target=threaded_function)
    thread.start()
    time.sleep(1)
    print("main: 3")
    time.sleep(6)
    print("main: 5")

def threaded_function():
    print("thread: 2")
    time.sleep(4)
    print("thread: 4")

main()
This code uses time.sleep to simulate that an action takes a certain amount of time. The output should look like this:
main: 1
thread: 2
main: 3
thread: 4
main: 5
