Python does something every 5 minutes - python

I need to check data on an API. The API is refreshed with new data every 5 minutes (10:00, 10:05, 10:10, etc.).
I don't want to use time.sleep(300) because I want my script to do something at 10:05:03, then 10:10:03, etc., and not 5 minutes after the script started (maybe it started at 10:12).
How can I build this?
Thanks y'all.

UPDATE:
Just wanted to remove the possibility of a recursion error, so I have rewritten the code:
from threading import Thread
from time import sleep
import datetime

def check_api():
    # ... your code here ...
    pass

def schedule_api():
    while datetime.datetime.now().minute % 5 != 0:
        sleep(1)
    check_api()
    while True:
        sleep(300)
        check_api()

thread = Thread(target=schedule_api)
thread.start()
Also, if you want your thread to quit when the main program exits, you can set daemon to True on the thread like:
thread.daemon = True
But this does not enforce a clean termination of the thread, so you could also try the approach below:
# ...
RUNNING = True
# ...

thread = Thread(target=schedule_api)
thread.start()
# ...

def main():
    # ... all main code ...
    pass

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        # schedule_api's loops should check RUNNING and exit when it becomes False
        RUNNING = False
You can use the following code:
import threading

def check_api():
    pass

timer_thread = threading.Timer(300, check_api)
timer_thread.start()
# call timer_thread.cancel() when you need it to stop
This will call your check_api function after 5 minutes and will not block your main code's execution.
As mentioned by @scotyy3785, the above code will only run once, but I realize what you want and have written the code for it:
from threading import Thread
from time import sleep
import datetime

def check_api():
    # ... your code here ...
    pass

def caller(callback_func, first=True):
    if first:
        while not datetime.datetime.now().minute % 5 == 0:
            sleep(1)
    callback_func()
    sleep(300)
    caller(callback_func, False)

thread = Thread(target=caller, args=(check_api,))
thread.start()
# you'll have to handle the still running thread on exit
The above code will call check_api at minutes like 00, 05, 10, 15...

Check the time regularly in a loop and do something at certain minute marks:
import time

# returns the next 5 minute mark
# e.g. at minute 2 return 5
def get_next_time():
    minute = time.localtime().tm_min
    result = 5 - (minute % 5) + minute
    if result == 60:
        result = 0
    return result

next_run = get_next_time()

while True:
    now = time.localtime()
    # at minute 0, 5, 10... + 3 seconds:
    if next_run == now.tm_min and now.tm_sec >= 3:
        print("checking api")
        next_run = get_next_time()
    time.sleep(1)
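A variation on that idea (a sketch of my own, not part of the original answers): instead of polling every second, compute how long to sleep until the next 5-minute mark plus the 3-second offset mentioned in the question. The helper name sleep_until_next_mark is made up.
import time
import datetime

def sleep_until_next_mark(interval_minutes=5, offset_seconds=3):
    now = datetime.datetime.now()
    # seconds elapsed since the start of the current 5-minute interval
    elapsed = (now.minute % interval_minutes) * 60 + now.second + now.microsecond / 1e6
    wait = offset_seconds - elapsed
    if wait <= 0:
        wait += interval_minutes * 60
    time.sleep(wait)

while True:
    sleep_until_next_mark()
    print("checking api")  # call check_api() here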

Related

How to run Threads at the same time and stop one of them without stopping other Thread?

I'm developing a reminder app. I'm asking this question for the 4th time. My issue is, I have 2 Threads. I'm using these Threads as reminders.
When the reminder date comes, the Thread stops.
Here's an example:
import datetime
from threading import Thread
# Current date & time: 12:30, 8/21/2020
current = datetime.datetime.now()
# First reminder: 12:35, 8/21/2020
a = datetime.datetime(2020, 8, 21, 12, 35)
# Second reminder: 12:40, 8/21/2020
b = datetime.datetime(2020, 8, 21, 12, 40)
Let's say I created 2 Threads. One Thread is waiting for a and the other one is waiting for b.
Everything is working nicely. Those Threads will wait until the reminder date comes. And then they will stop automatically using a flag.
BUT when I attempt to stop Thread a, the program stops Thread b too.
How to prevent this? Here's my full code:
import threading
from datetime import datetime, timedelta
import time

class Reminder:
    # Target function
    def createThread(self, check):
        # Specific date (10 secs later from current date)
        b = datetime.now() + timedelta(seconds = 10)
        # Set flag (in this case it's 'e')
        global e
        e = check # gets value from parameter
        while True:
            # Current time
            a = datetime.now()
            # If user wants to stop Thread and e equals True, break
            if e == True:
                print("** REMINDER STOPPED ** -> Stopped.\n")
                break
            # If current time and set time equals, break
            else:
                if a >= b:
                    print("** REMINDER NOTIFICATION ** -> Worked.")
                    break

    def exec(self):
        # Global Thread name
        global t
        # Set thread, sleep 1 seconds and stop the thread
        t = threading.Thread(target = self.createThread, args = [False])
        # Start thread
        t.start()
        # Wait
        time.sleep(1)

    def stop(self):
        global e
        e = True # Stop thread

# This while statement checks program is still running
while True:
    # Input
    se = input("\nYup?")
    print(se)
    # Call class
    r = Reminder()
    # If input equals 'e', run the Thread
    if se == 'e':
        r.exec()
    # If input equals 'd', stop the Thread
    if se == 'd':
        r.stop()
I want to set Threads by their name and delete them by their name.
For example, when I type delete a, the program should stop Thread a, but Thread b should keep running.
Thanks.
Instead of one shared flag for all threads, you need a flag per thread.
The simplest way to do that is to have a set or dict containing the thread objects or their thread ids.
If you want to retrieve and stop threads by name, a dictionary is likely your best bet. You can spin up a thread and associate it with a string.
spool = dict()
# add new thread
spool["first"] = threading.Thread(target = self.createThread, args = [False])
spool["first"].start()
# retrieve the thread and wait for it to finish (join() only waits, it does not force-stop the thread)
thread = spool["first"]
thread.join()
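To stop one specific reminder without touching the others, a per-thread flag can be kept alongside each thread. Here is a minimal sketch of my own (the names stop_flags, reminder_worker, add_reminder and stop_reminder are hypothetical, and threading.Event is used as the flag):
import threading
from datetime import datetime, timedelta

stop_flags = {}  # one Event per named reminder

def reminder_worker(name, when, stop_event):
    # wait until either the reminder time arrives or the flag is set
    while not stop_event.is_set():
        if datetime.now() >= when:
            print(f"** REMINDER NOTIFICATION ** -> {name}")
            return
        stop_event.wait(0.5)  # poll twice a second
    print(f"** REMINDER STOPPED ** -> {name}")

def add_reminder(name, when):
    stop_flags[name] = threading.Event()
    t = threading.Thread(target=reminder_worker, args=(name, when, stop_flags[name]))
    t.start()

def stop_reminder(name):
    stop_flags[name].set()

# usage: stopping "a" does not affect "b"
add_reminder("a", datetime.now() + timedelta(seconds=10))
add_reminder("b", datetime.now() + timedelta(seconds=20))
stop_reminder("a")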
That being said, it would likely be both simpler and more efficient to simply store an ordered list of the reminders, and have a single thread iterating over them until one matches the current time or whatever reminder conditional you want to have.
class Reminder:
    def __init__(self, time, msg):
        self.time = time
        self.msg = msg

def daemon(reminders):
    while True:
        for reminder in reminders:
            if reminder.time <= datetime.now():
                # print message or other actions ...
                pass
        time.sleep(1)  # avoid a busy loop

reminders = []
d = threading.Thread(target = daemon, args = [reminders])
# add more reminders and other program actions
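A hypothetical usage of that sketch (the reminder times and messages here are made up):
from datetime import datetime, timedelta

reminders.append(Reminder(datetime.now() + timedelta(minutes=5), "First reminder"))
reminders.append(Reminder(datetime.now() + timedelta(minutes=10), "Second reminder"))
d.start()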
You could also simply use threading.Timer to execute a function after a given number of seconds. See the example below, modified from the docs:
from threading import Timer

def hello():
    print("hello, world")

year = 60 * 60 * 24 * 365
t = Timer(year, hello)
t.start()  # after 1 year, "hello, world" will be printed

Use timeout to return if function has not finished

I have the following scenario:
res = []

def longfunc(arg):
    # function runs arg number of steps
    # each step can take 500 ms to 2 seconds to complete
    # longfunc keeps adding the result of each step into the list res
    ...

def getResult(arg, timeout):
    # should call longfunc()
    # if longfunc() has not provided a result by timeout milliseconds then return None
    # if there is a partial result in res by timeout milliseconds then return res
    # if longfunc() ends before timeout milliseconds then return the complete result of longfunc, i.e. the res list
    ...

result = getResult(2, 500)
I am thinking of using multiprocessing.Process() to put longfunc() in a separate process, then start another thread to sleep for timeout milliseconds. I can't figure out how to get result from both of them in the main thread and decide which one came first. Any suggestions on this approach or other approaches are appreciated.
You can use time.perf_counter and your code would look like this:
import time
from time import sleep

ProcessTime = time.perf_counter  # this returns nearly 0 the first time you call it if Python version <= 3.6
ProcessTime()

def longfunc(arg, timeout):
    start = ProcessTime()
    while True:
        # Do anything
        delta = start + timeout - ProcessTime()
        if delta > 0:
            sleep(1)
        else:
            return  # Error or False
You can change the while loop to a for loop and, for each task, check the timeout.
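For example, a minimal sketch of that for-loop variant (my own illustration; tasks and do_task are hypothetical placeholders for the real work, and timeout is taken in seconds):
import time

def longfunc(tasks, timeout):
    start = time.perf_counter()
    results = []
    for task in tasks:
        # stop and return whatever is done once the deadline has passed
        if time.perf_counter() - start > timeout:
            return results  # partial results
        results.append(do_task(task))  # do_task stands in for one step of work
    return results  # complete results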
If you are using multiprocessing then you can simply call p.join(timeout=5), where p is a Process.
Here is a simple example:
import time
from itertools import count
from multiprocessing import Process

def inc_forever():
    print('Starting function inc_forever()...')
    while True:
        time.sleep(1)
        print(next(counter))

def return_zero():
    print('Starting function return_zero()...')
    return 0

if __name__ == '__main__':
    # counter is an infinite iterator
    counter = count(0)
    p1 = Process(target=inc_forever, name='Process_inc_forever')
    p2 = Process(target=return_zero, name='Process_return_zero')
    p1.start()
    p2.start()
    p1.join(timeout=5)
    p2.join(timeout=5)
    p1.terminate()
    p2.terminate()
    if p1.exitcode is None:
        print(f'Oops, {p1} timed out!')
    if p2.exitcode == 0:
        print(f'{p2} is lucky and finishes in 5 seconds!')
I think it may help you

How to timeout a long running program using rxpython?

Say I have a long running python function that looks something like this?
import random
import time
from rx import Observable

def intns(x):
    y = random.randint(5,10)
    print(y)
    print('begin')
    time.sleep(y)
    print('end')
    return x
I want to be able to set a timeout of 1000ms.
So I'm doing something like creating an observable and mapping it through the above intense calculation.
a = Observable.repeat(1).map(lambda x: intns(x))
Now for each value emitted, if it takes more than 1000ms I want to end the observable, as soon as I reach 1000ms using on_error or on_completed
a.timeout(1000).subscribe(lambda x: print(x), lambda x: print(x))
The above statement does time out and calls on_error, but it goes on to finish the intense calculation and only then returns to the next statements. Is there a better way of doing this?
The last statement prints the following
8 # no of seconds to sleep
begin # begins sleeping, trying to emit the first value
Timeout # operation times out, and calls on_error
end # thread waits till the function ends
The idea is that if a particular function times out, I want to be able to continue with my program and ignore the result.
I was wondering: if the intns function were run on a separate thread, I guess the main thread would continue execution after the timeout, but I still want to stop computing the intns function on that thread, or kill it somehow.
The following is a class that can be used as a context manager with timeout(). If the block inside it runs for longer than the specified time, a TimeoutError is raised.
import signal

class timeout:
    # Default value is 1 second (1000ms); note that signal.SIGALRM is Unix-only
    def __init__(self, seconds=1, error_message='Timeout'):
        self.seconds = seconds
        self.error_message = error_message
    def handle_timeout(self, signum, frame):
        raise TimeoutError(self.error_message)
    def __enter__(self):
        signal.signal(signal.SIGALRM, self.handle_timeout)
        signal.alarm(self.seconds)
    def __exit__(self, type, value, traceback):
        signal.alarm(0)

# example usage
with timeout():
    # infinite while loop so the timeout is reached
    while True:
        pass
If I'm understanding your function, here's what your implementation would look like:
def intns(x):
    y = random.randint(5,10)
    print(y)
    print('begin')
    with timeout():
        time.sleep(y)
    print('end')
    return x
You can do this partially using threading.
Although there's no specific way to kill a thread in Python, you can implement a method to flag the thread to end.
This won't work if the thread is waiting on other resources (in your case, you simulated long-running code with a random wait).
See also
Is there any way to kill a Thread in Python?
This way it works:
import random
import time
import threading
import os

def intns(x):
    y = random.randint(5,10)
    print(y)
    print('begin')
    time.sleep(y)
    print('end')
    return x

thr = threading.Thread(target=intns, args=([10]), kwargs={})
thr.start()
st = time.perf_counter()  # time.clock() was removed in Python 3.8
while thr.is_alive():
    if time.perf_counter() - st > 9:
        os._exit(0)
Here's an example with a timeout:
import random
import time
import threading

_timeout = 0

def intns(loops=1):
    print('begin')
    processing = 0
    for i in range(loops):
        y = random.randint(5,10)
        time.sleep(y)
        if _timeout == 1:
            print('timedout end')
            return
        print('keep processing')
    return

# this will timeout
timeout_seconds = 10
loops = 10

# this will complete
#timeout_seconds = 30.0
#loops = 1

thr = threading.Thread(target=intns, args=([loops]), kwargs={})
thr.start()
st = time.perf_counter()  # time.clock() was removed in Python 3.8

while thr.is_alive():
    if time.perf_counter() - st > timeout_seconds:
        _timeout = 1
thr.join()

if _timeout == 0:
    print("completed")
else:
    print("timed-out")
You can use time.sleep() and make a while loop that checks the elapsed time with a clock function (time.clock() was removed in Python 3.8, so use time.monotonic() or time.perf_counter() there).
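A minimal sketch of that idea (my own illustration; run_with_deadline and work_step are made-up names, and time.monotonic() is used since time.clock() is gone in Python 3.8+):
import time

def run_with_deadline(work_step, timeout_seconds):
    # repeatedly call work_step() until the deadline passes
    start = time.monotonic()
    while time.monotonic() - start < timeout_seconds:
        work_step()
        time.sleep(1)  # avoid busy-waiting between steps

# usage: stop calling the step function after roughly 10 seconds
run_with_deadline(lambda: print("working..."), 10)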

Is it possible to execute function every x seconds in python, when it is performing pool.map?

I am running pool.map on a big data array and I want to print a report in the console every minute.
Is it possible? As I understand it, Python is a synchronous language; it can't do this like Node.js can.
Perhaps it can be done with threading... or how?
finished = 0

def make_job():
    sleep(1)
    global finished
    finished += 1

# I want to call this function every minute
def display_status():
    print 'finished: ' + finished

def main():
    data = [...]
    pool = ThreadPool(45)
    results = pool.map(make_job, data)
    pool.close()
    pool.join()
You can use a permanent threaded timer, like those from this question: Python threading.timer - repeat function every 'n' seconds
from threading import Timer, Event

class perpetualTimer(object):
    # give it a cycle time (t) and a callback (hFunction)
    def __init__(self, t, hFunction):
        self.t = t
        self.stop = Event()
        self.hFunction = hFunction
        self.thread = Timer(self.t, self.handle_function)

    def handle_function(self):
        self.hFunction()
        self.thread = Timer(self.t, self.handle_function)
        if not self.stop.is_set():
            self.thread.start()

    def start(self):
        self.stop.clear()
        self.thread.start()

    def cancel(self):
        self.stop.set()
        self.thread.cancel()
Basically this is just a wrapper for a Timer object that creates a new Timer object every time your desired function is called. Don't expect millisecond accuracy (or even close) from this, but for your purposes it should be ideal.
Using this your example would become:
finished = 0

def make_job():
    sleep(1)
    global finished
    finished += 1

def display_status():
    print 'finished: ' + finished

def main():
    data = [...]
    pool = ThreadPool(45)
    # set up the monitor to run the function every minute
    monitor = perpetualTimer(60, display_status)
    monitor.start()
    results = pool.map(make_job, data)
    pool.close()
    pool.join()
    monitor.cancel()
EDIT:
A cleaner solution may be (thanks to comments below):
from threading import Event, Thread

class RepeatTimer(Thread):
    def __init__(self, t, callback, event):
        Thread.__init__(self)
        self.stop = event
        self.wait_time = t
        self.callback = callback
        self.daemon = True

    def run(self):
        while not self.stop.wait(self.wait_time):
            self.callback()
Then in your code:
def main():
    data = [...]
    pool = ThreadPool(45)
    stop_flag = Event()
    RepeatTimer(60, display_status, stop_flag).start()
    results = pool.map(make_job, data)
    pool.close()
    pool.join()
    stop_flag.set()
One way to do this is to use the main thread as the monitoring one. Something like the code below should work:
def main():
    data = [...]
    results = []
    step = 0
    pool = ThreadPool(16)
    pool.map_async(make_job, data, callback=results.extend)
    pool.close()
    while True:
        if results:
            break
        step += 1
        sleep(1)
        if step % 60 == 0:
            print "status update" + ...
I've used .map_async() instead of .map() as the latter is synchronous. Also, you will probably need to replace results.extend with something more efficient. And finally, due to the GIL, the speed improvement may be much smaller than expected.
BTW, it is a little bit funny that you wrote that Python is synchronous in a question that asks about a ThreadPool ;).
Consider using the time module. The time.time() function returns the current UNIX time.
For example, calling time.time() right now returns 1410384038.967499. One second later, it will return 1410384039.967499.
The way I would do this would be to use a while loop in place of results = pool.map(...), and on every iteration run a check like this:
import time

last_time = time.time()
while (...):
    new_time = time.time()
    if new_time > last_time + 60:
        print "status update" + ...
        last_time = new_time
    # (your computation here)
So that will check if (at least) a minute has elapsed since your last status update. It should print a status update approximately every sixty seconds.
Sorry that this is an incomplete answer, but I hope this helps or gives you some useful ideas.
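To make it concrete, here is a rough sketch of my own that combines this time.time() check with a thread pool; it assumes map_async is acceptable and uses placeholder make_job and display_status functions:
import time
from multiprocessing.pool import ThreadPool

def make_job(item):
    # placeholder for the real work on one item
    time.sleep(1)

def display_status(done):
    print("status update: done" if done else "status update: still working")

if __name__ == "__main__":
    data = range(200)
    pool = ThreadPool(45)
    async_result = pool.map_async(make_job, data)
    pool.close()

    last_time = time.time()
    while not async_result.ready():  # poll instead of blocking on pool.map
        if time.time() > last_time + 60:
            display_status(False)
            last_time = time.time()
        time.sleep(1)
    pool.join()
    display_status(True)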

How to stop a function at a specific time and continue with next function in python?

I have a code:
function_1()
function_2()
Normally, function_1() takes 10 hours to end.
But I want function_1() to run for 2 hours, and after 2 hours function_1 must return and the program must continue with function_2(). It shouldn't wait for function_1() to complete. Is there a way to do this in Python?
What makes functions in Python able to interrupt their execution and resume later is the "yield" statement -- your function then works as a generator object. You call next() on this object to have it start, or continue after the last yield.
import time

def function_1():
    start_time = time.time()
    while True:
        # do long stuff
        running_time = time.time() - start_time
        if running_time > 2 * 60 * 60:  # 2 hours
            yield  # <partial results can be yielded here, if you want>
            start_time = time.time()

runner = function_1()
while True:
    try:
        next(runner)  # runner.next() in Python 2
    except StopIteration:
        # function_1 has got to the end
        break
    # do other stuff
If you don't mind leaving function_1 running:
from threading import Thread
import time
Thread(target=function_1).start()
time.sleep(60*60*2)
Thread(target=function_2).start()
You can try the gevent module: start the function in a greenlet and kill that greenlet after some time.
Here is an example:
import gevent

# function which you can't modify
def func1(some_arg):
    # do something
    pass

def func2():
    # do something
    pass

if __name__ == '__main__':
    g = gevent.Greenlet(func1, 'Some Argument in func1')
    g.start()
    gevent.sleep(60*60*2)
    g.kill()
    # call the rest of the functions
    func2()
from multiprocessing import Process

p1 = Process(target=function_1)
p1.start()
p1.join(60*60*2)
if p1.is_alive():
    p1.terminate()
function_2()
I hope this helps. I just tested this using the following code:
import time
from multiprocessing import Process

def f1():
    print 0
    time.sleep(10000)
    print 1

def f2():
    print 2

p1 = Process(target=f1)
p1.start()
p1.join(6)
if p1.is_alive():
    p1.terminate()
f2()
Output is as expected:
0
2
You can time the execution using the datetime module. Probably your optimizer function has a loop somewhere. Inside the loop you can test how much time has passed since you started the function.
import datetime

def function_1():
    t_end = datetime.datetime.now() + datetime.timedelta(hours=2)
    while not converged:
        # do your thing
        if datetime.datetime.now() > t_end:
            return
