Using threading.Timer to run code at regular time intervals in Python

I am using the following approach to run certain code on Raspberry Pi Zero at regular time intervals, independent from the main thread:
from datetime import datetime
from threading import Timer

class Session:
    def __init__(self):
        self.refresh = None

    def useful_method(self, param):
        print(param)

    def refresh_token(self):
        print('%s Refreshing the token' % datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f'))

    def set_token_refresh(self, period_seconds=None):
        # Establish regular token refresh
        if self.refresh:
            self.refresh.cancel()
            self.refresh = None
        if period_seconds:
            def refresh():
                self.refresh_token()
                # Reschedule ourselves for the next period
                self.refresh = Timer(period_seconds, refresh)
                self.refresh.start()
            self.refresh = Timer(period_seconds, refresh)
            self.refresh.start()

s = Session()
s.set_token_refresh(3)  # Every 3 seconds
# Do other things with the Session object
This lets me leave the session object alone and have the token updates carried out in the background, while I use the object for other things, knowing that the token refreshing is taken care of.
For this demo the period is 3 seconds; in the production code it is 180 seconds. The problem is that the code runs fine most of the time, but occasionally one call to refresh_token() happens much later than 180 seconds after the previous one (1100 and 1828 seconds later in the instances I am investigating). The subsequent calls are then spaced by the correct period again. Unfortunately, if the token is refreshed too late, the session is no longer valid. This happens while the code is idle, and there does not seem to be anything special about the moment it happens: the system date/time is not changed or anything like that.
It's not clear how to debug this, as the problem cannot be provoked on demand. Are there any known issues with using threading.Timer reliably?
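Not a root-cause answer, but one way to gather evidence while waiting for the problem to recur is to have the callback log how late it fired relative to a monotonic clock, which is immune to wall-clock adjustments. A minimal sketch (the logging format is purely illustrative):

import time
from threading import Timer

def start_refresh(period_seconds, refresh_token):
    deadline = time.monotonic() + period_seconds

    def fire():
        lateness = time.monotonic() - deadline  # positive means the timer fired late
        print('timer fired %+.3f s relative to schedule' % lateness)
        refresh_token()
        start_refresh(period_seconds, refresh_token)  # schedule the next run

    t = Timer(period_seconds, fire)
    t.daemon = True  # don't block interpreter exit on a pending timer
    t.start()
    return t

If the logged lateness jumps by hundreds of seconds while time.monotonic() stays consistent, that would point at the thread not being scheduled, rather than at a clock change.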

Related

Python socket io emit event every 1 second

I'm creating an HMI web application which visualises real-time data from a simulation.
This means I'm updating the data every 0.25/0.5 seconds. I decided to use socket.io.
So, after a connection, I want my server to emit data every so often. I thought the best option would be something similar to setInterval in JavaScript. However, in Python this has not proved easy. I tried a lot of options from Stack Overflow, e.g.:
Python Equivalent of setInterval()?
But most of them caused errors. Here are some of the methods I tried.
@socketio.on('signalList')
def start_polling_signals():
    global poll_signals
    poll_signals = threading.Timer(1000, start_polling_signals())
    poll_signals.start()
    list_signals_v2()
    print('polling')

@socketio.on('stopSignalList')
def stop_polling_signals():
    global poll_signals
    poll_signals.cancel()
    poll_signals = None
This causes "maximum recursion depth exceeded".
@socketio.on('signalList')
def start_polling_signals():
    starttime = time.time()
    while True:
        list_signals_v2()
        time.sleep(1 - ((time.time() - starttime) % 1))
This blocks the other socket events from working, and I have no idea how to stop the polling.
Any ideas how to do this in an optimal way? Bear in mind that I need to be able to start and stop the interval.
You should start a background task using
start_background_task(target, *args, **kwargs)
This is the correct way to initialise something like what you want to do. Pass as target a function that sends data at the interval you want.
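As a rough sketch of that approach, assuming Flask-SocketIO and reusing the list_signals_v2 function and event names from the question (the Event-based stop mechanism is an assumption, not part of the library API):

import threading
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

poll_thread = None
stop_event = threading.Event()

def poll_signals():
    # Runs as the background task; exits when the stop event is set
    while not stop_event.is_set():
        list_signals_v2()
        socketio.sleep(1)  # cooperative sleep keeps other socket events responsive

@socketio.on('signalList')
def start_polling_signals():
    global poll_thread
    if poll_thread is None:
        stop_event.clear()
        poll_thread = socketio.start_background_task(poll_signals)

@socketio.on('stopSignalList')
def stop_polling_signals():
    global poll_thread
    stop_event.set()
    poll_thread = None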

Is asyncio.loop.time() comparable with datetime.datetime.now() and how?

I'm hoping to use an asyncio.loop to set callbacks at specific times. My problem is that I need to schedule these based on datetime.datetime objects (UTC) but asyncio.loop.call_at() uses an internal reference time.
A quick test on Python 3.7.3 running on Ubuntu shows that asyncio.loop.time() reports the system uptime. For the conversion, my first thought is to naively store a reference time and use it later:
from asyncio import new_event_loop
from datetime import datetime, timedelta

_loop = new_event_loop()
_loop_base_time = datetime.utcnow() - timedelta(seconds=_loop.time())

def schedule_at(when, callback, *args):
    _loop.call_at((when - _loop_base_time).total_seconds(), callback, *args)
However, it's not clear whether this offset (datetime.utcnow() - timedelta(seconds=loop.time())) is stable. I have no idea whether system uptime drifts in comparison to UTC, for example where the system clock is modified (e.g. through NTP updates).
Bearing in mind that this is monitoring software which will potentially run for months at a time, small drifts might be very significant. I should note that I've seen systems lose minutes per day without an NTP daemon, and one-off NTP updates can shift times by many minutes in a short space of time. Since I don't know if the two are kept in sync, it's unclear how much I need to be concerned.
Note: I am aware of python's issue with scheduling events more than 24 hours in the future. I will get round this by storing distant future events in a list and polling for up-coming events every 12 hours, scheduling them only when they are < 24 hours in the future.
Is it possible to reliably convert from datetime.datetime to asyncio loop times, or are the two time systems incomparable? If they are comparable, is there anything special I need to do to ensure my calculations are correct?
You could compute the difference in seconds using the same time framework as the one you're using for scheduling, then use loop.call_later with the computed delay:
def schedule_at(when, callback, *args):
    delay = (when - datetime.utcnow()).total_seconds()
    _loop.call_later(delay, callback, *args)
This works around the question of whether the difference between the loop's time and utcnow is stable; it only needs to be stable between the time of scheduling the task and the time of its execution (which, according to your notes, should be less than 12 hours).
For example: if the event loop's internal clock drifted 1 second per hour relative to utcnow (a deliberately extreme example), you would be off by at most 12 seconds per task, but you would not accumulate this error over months of runtime. Compared with the approach of using a fixed reference, this gives a better guarantee.
An alternative approach would be to not rely on the loop's internal clock at all. You can run a task in the background and periodically check whether any callback should be executed.
This method's inaccuracy corresponds to the time you wait between checks, but I don't think it's critical considering any other possible inaccuracies (like Python GC's stop-the-world pauses, for example).
On the plus side, you aren't limited by the 24-hour limit.
This code shows the main idea:
import asyncio
import datetime

class Timer:
    def __init__(self):
        self._callbacks = set()
        self._task = None

    def schedule_at(self, when, callback):
        self._callbacks.add((when, callback,))
        if self._task is None:
            self._task = asyncio.create_task(self._checker())

    async def _checker(self):
        while True:
            await asyncio.sleep(0.01)
            self._exec_callbacks()

    def _exec_callbacks(self):
        ready_to_exec = self._get_ready_to_exec()
        self._callbacks -= ready_to_exec
        for _, callback in ready_to_exec:
            callback()

    def _get_ready_to_exec(self):
        now = datetime.datetime.utcnow()
        return {
            (when, callback,)
            for (when, callback,)
            in self._callbacks
            if when <= now
        }

async def main():
    now = datetime.datetime.utcnow()
    s1_after = now + datetime.timedelta(seconds=1)
    s3_after = now + datetime.timedelta(seconds=3)
    s5_after = now + datetime.timedelta(seconds=5)
    timer = Timer()
    timer.schedule_at(s1_after, lambda: print('Hey!'))
    timer.schedule_at(s3_after, lambda: print('Hey!'))
    timer.schedule_at(s5_after, lambda: print('Hey!'))
    await asyncio.sleep(6)

if __name__ == '__main__':
    asyncio.run(main())

Python: Executing a function at a future time with continuous postponement?

I have a Django web app which embedded systems use to upload data at regular intervals, currently every 2 minutes, to the server, where Django just pops it into a database.
I'd like to create an alert system whereby, if no data is uploaded from the remote system within some time period, say 10 minutes for example, I raise an alarm on the server, via email or something.
In other programming languages/environments I'd create a 10-minute timer to execute a function when it expires, but restart the timer every time data is uploaded. That way the timer would hopefully never expire and the expiry function would never get called.
I might well have missed something obvious, but this just does not seem possible in Python. Have I missed something?
At present it looks like I need an external daemon monitoring the database :-(
You could use the time module for this:
import time

def didEventHappen():
    # insert appropriate logic here to check
    # for what you want to check for every 10 minutes
    value = True  # this is just a placeholder so the code runs
    return value

def notifyServer():
    print("Hello server, the event happened")

start = time.monotonic()  # time.clock() was removed in Python 3.8
delay = 10 * 60  # 10 minutes, converted to seconds
while True:
    interval = time.monotonic() - start
    eventHappened = False
    if interval >= delay:
        eventHappened = didEventHappen()
        start = time.monotonic()  # reset the timer
        if eventHappened:
            notifyServer()
        else:
            print("event did not happen")
    time.sleep(1)  # avoid spinning at full speed between checks
Alternatively, you could use the sched module.
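As a rough sketch of that alternative, reusing the didEventHappen and notifyServer functions from above (sched is in the standard library; the 10-minute delay is the question's):

import sched
import time

scheduler = sched.scheduler(time.monotonic, time.sleep)
delay = 10 * 60  # 10 minutes

def periodic_check():
    if didEventHappen():
        notifyServer()
    # re-schedule this check for another run in 10 minutes
    scheduler.enter(delay, 1, periodic_check)

scheduler.enter(delay, 1, periodic_check)
scheduler.run()  # blocks, running scheduled events as they come due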

How do I run a Python program to periodically update an existing pandas DataFrame?

I am creating panel data by importing from a database's API using a function called instance which generates a pd.DataFrame column of 200 dict objects, each containing the values for the same variables (e.g. "Number of comments" and "Number of views") corresponding to one of the 200 members of the panel.
This data is constantly being updated in real time and the database does not store its data. In other words, if one wants to keep track of how the data progresses over time, one must manually call the function instance every desired period (e.g. every hour).
I am wondering how I would go about writing a program to passively run my instance function every hour, appending the result to every other hour's execution. For this purpose, I have found the threading module of potential interest, particularly its Timer class, but I have had difficulty applying it effectively. This is what I have come up with:
def instance_log(year, month, day, loglength):
    start = datetime.datetime.now()
    log = instance(year, month, day)
    t = threading.Timer(60, log.join(instance(year, month, day)))
    t.start()
    if datetime.datetime.now() > start + datetime.timedelta(hours=loglength):
        t.cancel()
    return(log)
I tried running this program for loglength=1 (i.e. update the log DataFrame every minute for an hour), but it failed. Any help diagnosing what I did wrong or suggesting an alternate means of achieving what I'd want would be greatly appreciated.
By the way, to avoid confusion, I should clarify the inputs year, month, and day are used to identify the 200 panel members so that I use the same panelists for each iteration of instance.
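For what it's worth, one likely problem in the snippet above: threading.Timer expects a callable (plus optional arguments), but log.join(instance(year, month, day)) is evaluated immediately at scheduling time, so the join runs once and its result, a DataFrame, is passed as the "function". A hedged sketch of the difference, using the question's own log and instance names:

# Evaluated immediately; the resulting DataFrame is passed as the callback:
t = threading.Timer(60, log.join(instance(year, month, day)))

# Deferred until the timer fires, e.g. via a lambda:
t = threading.Timer(60, lambda: log.join(instance(year, month, day)))

Note that DataFrame.join returns a new frame, so the deferred result would still need to be stored somewhere for the log to actually grow.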
Without knowing too much about your Instance API (assuming it's a class), this is how I would do it:
#!/usr/bin/env python
from __future__ import print_function

from circuits import Event, Component, Timer

class Instance(object):
    """My Instance Object"""

class App(Component):
    def init(self, instance):
        self.instance = instance
        # Create a scheduled event every hour
        Timer(60 * 60, Event.create("log_instance"), persist=True).register(self)

    def log_instance(self, year, month, day, loglength):
        """Event Handler for scheduled log_instance Event"""
        log = self.instance(year, month, day)
        print(log)  # Do something with log

instance = Instance()  # create instance?
App(instance).run()
This doesn't use Python's threading library but provides a reusable and composable event-driven structure that you can extend using the circuits framework. (caveat: I'm the author of this framework/library and am biased towards Event-Driven approaches!).
NB: This is untested code as I'm not familiar with your exact requirements or your Instance's API (nor have you really shown that in the question).

Fast and Precise Python Repeating Timer

I need to send repeating messages from a list quickly and precisely. One list needs to send the messages every 100ms, with a +/- 10ms window. I tried using the code below, but the problem is that the timer waits the 100ms, and then all the computation needs to be done, making the timer fall out of the acceptable window.
Simply decreasing the wait is a messy and unreliable hack. There is a Lock around the message loop in case the list gets edited during the loop.
Thoughts on how to get Python to send messages consistently around every 100 ms? Thanks.
from threading import Timer
from threading import Lock

class RepeatingTimer(object):
    def __init__(self, interval, function, *args, **kwargs):
        super(RepeatingTimer, self).__init__()
        self.args = args
        self.kwargs = kwargs
        self.function = function
        self.interval = interval
        self.start()

    def start(self):
        self.callback()

    def stop(self):
        self.interval = False

    def callback(self):
        if self.interval:
            self.function(*self.args, **self.kwargs)
            Timer(self.interval, self.callback).start()

def loop(messageList):
    listLock.acquire()
    for m in messageList:
        writeFunction(m)
    listLock.release()

MESSAGE_LIST = []  # Imagine this is populated with the messages
listLock = Lock()

rt = RepeatingTimer(0.1, loop, MESSAGE_LIST)
# Do other stuff after this
I do understand that writeFunction will cause some delay, but not more than the 10 ms allowed. I essentially need to call the function every 100 ms for each message. The message list is small, usually less than elements.
The next challenge is to have this work with every 10ms, +/-1ms :P
Yes, the simple waiting is messy and there are better alternatives.
First off, you need a high-precision timer in Python. There are a few alternatives and depending on your OS, you might want to choose the most accurate one.
Second, you must be aware of the basics of preemptive multitasking and understand that there is no high-precision sleep function; its actual resolution will differ from OS to OS too. For example, on Windows the minimal sleep interval might be around 10-13 ms.
And third, remember that it's always possible to wait for a very accurate interval of time (assuming you have a high-resolution timer), but with a trade-off of high CPU load. The technique is called busy waiting:
while True:
    # Compare with >=, never == on a float clock; time.clock() was removed in Python 3.8
    if time.perf_counter() >= deadline:
        break
So, the actual solution is to create a hybrid timer. It will use the regular sleep function to wait the main bulk of the interval, and then it'll start probing the high-precision timer in the loop, while doing the sleep(0) trick. Sleep(0) will (depending on the platform) wait the least possible amount of time, releasing the rest of the remaining time slice to other processes and switching the CPU context. Here is a relevant discussion.
The idea is thoroughly described in the Ryan Geiss's Timing in Win32 article. It's in C and for Windows API, but the basic principles apply here as well.
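As a rough sketch of that hybrid approach (the margin value is an assumption to tune per platform):

import time

def hybrid_wait(deadline, margin=0.02):
    """Wait until deadline (a time.perf_counter() value) with sub-sleep precision."""
    # Sleep the bulk of the interval cheaply...
    remaining = deadline - time.perf_counter()
    if remaining > margin:
        time.sleep(remaining - margin)
    # ...then busy-wait the final stretch, yielding the time slice on each pass
    while time.perf_counter() < deadline:
        time.sleep(0)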
Store the start time. Send the message. Get the end time. Calculate timeTaken=end-start. Convert to FP seconds. Sleep(0.1-timeTaken). Loop back.
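A minimal sketch of that compensating loop, reusing MESSAGE_LIST and writeFunction from the question, with one variation: tracking an absolute deadline so per-iteration errors don't accumulate.

import time

period = 0.1  # 100 ms
next_deadline = time.perf_counter() + period
while True:
    for m in MESSAGE_LIST:
        writeFunction(m)  # do the actual work
    next_deadline += period
    delay = next_deadline - time.perf_counter()
    if delay > 0:
        time.sleep(delay)  # sleep only what is left of the period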
try this:
#!/usr/bin/python
import time  # required for time.time()
from threading import Timer

def hello(start, interval, count):
    ticks = time.time()
    # Subtract the accumulated drift from the next interval to stay on schedule
    t = Timer(interval - (ticks - start - count * interval), hello, [start, interval, count + 1])
    t.start()
    print("Number of ticks since 12:00am, January 1, 1970:", ticks, " #", count)

dt = 1.25  # interval in sec
t = Timer(dt, hello, [round(time.time()), dt, 0])  # start over at full second, round only for testing here
t.start()
