Python: sleep some code, not all

I have a situation where, at some point in my code, I want to trigger a number of timers. The code will keep running, but at some point these functions will fire and remove an item from a given list, similar (though not exactly) to the code below. The problem is that I want these functions to wait a certain amount of time, and the only way I know how is to use sleep, but that stops all of the code, when I need the first function to keep running. So how can I set a function aside without making everything wait for it? If the answer involves threading, please know that I have very little experience with it and like explanations with pictures and small words.
from time import sleep
from datetime import datetime

def func():
    x = 1
    for i in range(20):
        if i % 4 == 0:
            func2()
            print("START", datetime.now())
            x += 1
        else:
            print("continue")

def func2():
    print("go")
    sleep(10)
    print("func 2--------------------------------------", datetime.now())

func()

You need to use threading. http://docs.python.org/2/library/threading.html
You can start functions in their own threads.
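For the scenario in the question, threading.Timer is probably the simplest tool: it runs a function once, after a delay, in its own thread, while the rest of your code keeps going. A minimal sketch (the list contents and the 10-second delay are placeholders standing in for the question's setup):
import threading

items = ["a", "b", "c"]

def remove_later(item, delay):
    # schedule item to be removed from items after delay seconds,
    # without blocking the caller
    def remove():
        if item in items:
            items.remove(item)
            print("removed", item)
    threading.Timer(delay, remove).start()

remove_later("b", 10)
print("main code keeps running immediately")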

I used a background function; it keeps running in the background even while the main code moves on to something else.
You need to import threading, and also time in order to use time.sleep():
import threading
import time
I had a function where I wanted some code to sleep in the background; here is an example:
# This is the one that will sleep, but since you passed it via args
# on the Thread, it will not make mainFunction sleep.
def backgroundFunction(obj):
    time.sleep(120)
    # updates the Food to 5 in 2 minutes
    obj["Food"] = 5

def mainFunction():
    obj = {"Food": 4, "Water": 3}
    # Make sure there is a comma in args: (obj,) makes it a one-element tuple.
    t1 = threading.Thread(target=backgroundFunction, args=(obj,))
    t1.start()

mainFunction()
If you write t1 = threading.Thread(target=backgroundFunction(obj)) instead, the function is called immediately and its return value is passed as the target, so nothing runs in the background; don't use that form unless you want mainFunction to sleep as well.

Depending on the situation, another option might be an event queue based system. That avoids threads, so it can be simpler.
The idea is that instead of using sleep(20), you calculate when the event should fire, using datetime.now() + timedelta(seconds=20). You then put that in a sorted list.
Regularly, perhaps each time through the main loop of your program, you check the first element in the list; if the time has passed, you remove it and call the relevant function.
To add an event:
pending_events.append((datetime.now() + timedelta(seconds=20), e))
pending_events.sort()
Then, as part of your main loop:
for ...:  # your main loop
    # handle timed events:
    while pending_events and pending_events[0][0] < datetime.now():
        # the pending_events check avoids an IndexError on an empty list
        the_time, e = pending_events.pop(0)
        handle_event(e, the_time)
    ...  # rest of your main loop
This relies on your main loop regularly calling the event-handling code, and on the event-handling code not taking much time to handle the event. Depending on what the main loop and the events are doing, this may come naturally or it may be some effort or it may rule out this method...
Notes:
You only need to check the first element in the list, because the list is sorted in time order; checking the first element checks the earliest one and you don't need to check the others until that one has passed.
Instead of a sorted list, you can use a heapq, which is more complicated but faster; in practice, you'd need a lot of pending events to notice any difference.
If the event is to be "every 20s" rather than "after 20s", use the_time + timedelta(seconds=20) to schedule each subsequent event; that way, the delay in getting to and processing the event won't be added.
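For completeness, a sketch of the heapq variant mentioned in the notes (the counter is a tie-breaker so two events with the same fire time are never compared directly; handle_event is the same placeholder as above):
import heapq
import itertools
from datetime import datetime, timedelta

pending_events = []
counter = itertools.count()

def schedule(e, seconds):
    heapq.heappush(pending_events,
                   (datetime.now() + timedelta(seconds=seconds), next(counter), e))

def run_due_events():
    # call this regularly from the main loop
    while pending_events and pending_events[0][0] <= datetime.now():
        the_time, _, e = heapq.heappop(pending_events)
        handle_event(e, the_time)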

Related

Round robin scheduler - call functions periodically at appropriate times

I have a list of functions (1..N), and function i needs to be called every X_i seconds (X_i would be large, such as 1000+ s). Each X_i doesn't have to be unique, i.e. it is possible that X_i == X_j.
Provided I generate a list of (function_i, X_i), how can I simply execute these functions at their appropriate times in the future and sleep between calls? I have used APScheduler before, but it runs tasks in parallel and I need the functions to be run one after the other.
I can write my own iterator which returns the current function that needs to be executed and blocks until the next one, but I'd rather use a library if one exists.
EDIT: N is about 200 at the moment.
threading module
The threading module lets you start a new thread, which will not be affected by other threads' sleep calls. This requires N threads, so if N is extremely large, let me know and I will try to think of an alternative solution.
You can create N threads and set each one on a timed loop, like so:
import threading, time

def looper(function, delay):  # Creates a function that will loop the given function
    def inner():  # Will keep looping once invoked
        while True:
            function()  # Call the function; you can optionally add args
            time.sleep(delay)  # Swap this line and the one before it to wait before running rather than after
    return inner  # The function that should be called to start the loop is returned

def start(functions, delays):  # Call this with the two lists to start the loops
    for function, delay in zip(functions, delays):  # Goes through the respective pairs
        thread = threading.Thread(target=looper(function, delay))  # This thread will run the looper
        thread.start()

start([lambda: print("hi"), lambda: print("bye")], [0.2, 0.3])
You can try it online here; just hit run, and then hit run again when you want to kill it (thanks to @DennisMitchell for the online interpreter).

How does asyncio.sleep work with negative values?

I decided to implement sleep sort (https://rosettacode.org/wiki/Sorting_algorithms/Sleep_sort) using Python's asyncio when I made a strange discovery: it works with negative values (and returns immediately with 0)!
Here is the code (you can run it here https://repl.it/DYTZ):
import asyncio
import random

async def sleepy(value):
    return await asyncio.sleep(value, result=value)

async def main(input_values):
    result = []
    for sleeper in asyncio.as_completed(map(sleepy, input_values)):
        result.append(await sleeper)
    print(result)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    input_values = list(range(-5, 6))
    random.shuffle(input_values)
    loop.run_until_complete(main(input_values))
The code takes 5 seconds to execute, as expected, but the result is always [0, -5, -4, -3, -2, -1, 1, 2, 3, 4, 5]. I can understand 0 returning immediately, but how are the negative values coming back in the right order?
Well, looking at the source:
delay == 0 is special-cased to return immediately, it doesn't even try to sleep.
Non-zero delay calls events.get_event_loop(). Since there are no calls to events.set_event_loop_policy(policy) in asyncio.tasks, it would seem to fall back on the default unless it's already been set somewhere else, and the default is asyncio.DefaultEventLoopPolicy.
This is not defined in events.py, because it's different on Windows from on UNIX.
Either way, sleep calls loop.create_future(). That's defined a few inheritances back, over in base_events.BaseEventLoop. It's just a simple call to the Future() constructor, no significant logic.
From the instance of Future it delegates back to the loop, as follows:
future._loop.call_later(delay,
                        futures._set_result_unless_cancelled,
                        future, result)
That one is also in BaseEventLoop, and still doesn't directly handle the delay number: it calls self.call_at, adding the current time to the delay.
call_at schedules and returns an events.TimerHandle, and the callback is to tell the Future it's done. The return value is only relevant if the task is to be cancelled, which it is automatically at the end for cleanup. The scheduling is the important bit.
_scheduled is sorted via heapq - everything goes on there in sorted order, and timers sort by their _when. This is key.
Every time it checks, it strips out all cancelled scheduled things, then runs all remaining scheduled callbacks, in order, until it hits one that's not ready.
TL;DR:
Sleeping with asyncio for a negative duration schedules tasks to be "ready" in the past. This means that they go to the top of the list of scheduled tasks, and are run as soon as the event loop checks. Effectively, 0 comes first because it doesn't even schedule, but everything else registers to the scheduler as "running late" and is handled immediately in order of how late it is.
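If you want to see this directly, here is a small demo in the spirit of the question's code (the commented output assumes the same ordering the question reports):
import asyncio

async def demo():
    tasks = [asyncio.sleep(d, result=d) for d in (-1, -3, 0, -2)]
    print([await t for t in asyncio.as_completed(tasks)])
    # expected: [0, -3, -2, -1] -- 0 skips scheduling entirely, the rest
    # complete in order of how far in the past their deadline falls

asyncio.run(demo())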
If you take a look at the asyncio source, sleep special cases 0 and returns immediately.
if delay == 0:
    yield
    return result
If you continue through the source, you'll see that any other value gets passed through to the event loop's call_later method. Looking at how call_later is implemented for the default loop (BaseEventLoop), you'll see that call_later passes a time to call_at.
self.call_at(self.time() + delay, callback, *args)
The reason the values are returned in order is that the times computed from negative delays occur before those computed from positive delays.

Python time.sleep indefinitely

In Python's time module, there is a sleep() function, where you can make Python wait x seconds before resuming the program. Is there a way to do this indefinitely until a condition is met? For example:
while True:
    time.sleep()
    if x:
        break
    time.unsleep()
I am trying to make a pause function for my PyGame program. Any help is appreciated.
Something like this:
while not x: time.sleep(0.1)
will wait until x is true, sleeping a tenth of a second between checks. This is usually short enough for your script to seem to react instantly (in human terms) when x becomes true. You could use 0.01 instead if this is not quick enough. In my experience, today's computers are fast enough that checking a simple condition even every hundredth of a second doesn't really make a dent in CPU usage.
Of course, x should be something that can actually change, e.g. a function call.
Your code in the question implies that you want some other thread to resume your program. In that case you could use resumed = threading.Event(). You could create it in one thread and pass it into another:
resumed.wait()  # blocks here until resumed.set() is called from another thread
Call resumed.set() to resume this code immediately.
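Put together, a self-contained sketch of that pattern might look like this (the two-second pause stands in for whatever eventually resumes you):
import threading
import time

resumed = threading.Event()

def worker():
    print("paused, waiting...")
    resumed.wait()             # blocks until resumed.set() is called
    print("resumed!")

threading.Thread(target=worker).start()
time.sleep(2)                  # do other things while the worker waits
resumed.set()                  # wakes the worker immediately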
I am trying to make a pause function for my PyGame program. Any help is appreciated.
Use pygame.time. Typically, you have the main loop where you update the state of the game, and at the end of the loop you call clock.tick(60) # 60 fps. It is enough to use a paused flag in this case to skip the updating.
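A minimal sketch of that idea (assuming, for illustration, that the P key toggles the pause):
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
clock = pygame.time.Clock()
paused = False
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key == pygame.K_p:
            paused = not paused      # toggle pause
    if not paused:
        pass                         # update the game state here
    pygame.display.flip()
    clock.tick(60)                   # 60 fps either way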
You could spin off a thread as follows:
import time
from threading import Thread

prepare_to_stop = 0

def in_background_thread():
    while not prepare_to_stop:
        # Program code here
        print(prepare_to_stop)
        time.sleep(0.1)

try:
    th = Thread(target=in_background_thread)
    th.start()
    print("\nProgram will shut down after current operation is complete.\n")
    time.sleep(10**8)
except KeyboardInterrupt:
    prepare_to_stop = 1
    print("Program shutting down...")

Break the function after certain time

In Python, for a toy example:
for x in range(0, 3):
    # Call function A(x)
I want to continue the for loop if function A takes more than five seconds by skipping it so I won't get stuck or waste time.
By doing some search, I realized a subprocess or thread may help, but I have no idea how to implement it here.
I think creating a new process may be overkill. If you're on Mac or a Unix-based system, you should be able to use signal.SIGALRM to forcibly time out functions that take too long. This will work on functions that are idling for network or other issues that you absolutely can't handle by modifying your function. I have an example of using it in this answer:
Option for SSH to timeout after a short time? ClientAlive & ConnectTimeout don't seem to do what I need them to do
Editing my answer in here, though I'm not sure I'm supposed to do that:
import signal

class TimeoutException(Exception):   # Custom exception class
    pass

def timeout_handler(signum, frame):  # Custom signal handler
    raise TimeoutException

# Change the behavior of SIGALRM
signal.signal(signal.SIGALRM, timeout_handler)

for i in range(3):
    # Start the timer. Once 5 seconds are over, a SIGALRM signal is sent.
    signal.alarm(5)
    # This try/except loop ensures that
    # you'll catch TimeoutException when it's sent.
    try:
        A(i)  # Whatever your function that might hang
    except TimeoutException:
        continue  # continue the for loop if function A takes more than 5 seconds
    else:
        # Reset the alarm
        signal.alarm(0)
This basically sets a timer for 5 seconds, then tries to execute your code. If it fails to complete before time runs out, a SIGALRM is sent, which we catch and turn into a TimeoutException. That forces you to the except block, where your program can continue.
Maybe someone will find this decorator useful, based on TheSoundDefense's answer:
import time
import signal

class TimeoutException(Exception):   # Custom exception class
    pass

def break_after(seconds=2):
    def timeout_handler(signum, frame):  # Custom signal handler
        raise TimeoutException

    def function(function):
        def wrapper(*args, **kwargs):
            signal.signal(signal.SIGALRM, timeout_handler)
            signal.alarm(seconds)
            try:
                res = function(*args, **kwargs)
                signal.alarm(0)  # Clear alarm
                return res
            except TimeoutException:
                print('Oops, timeout: %s sec reached.' % seconds,
                      function.__name__, args, kwargs)
        return wrapper
    return function
Test:
@break_after(3)
def test(a, b, c):
    return time.sleep(10)

>>> test(1, 2, 3)
Oops, timeout: 3 sec reached. test (1, 2, 3) {}
If you can break your work up and check every so often, that's almost always the best solution. But sometimes that's not possible—e.g., maybe you're reading a file off a slow file share that every once in a while just hangs for 30 seconds. To deal with that internally, you'd have to restructure your whole program around an async I/O loop.
If you don't need to be cross-platform, you can use signals on *nix (including Mac and Linux), APCs on Windows, etc. But if you need to be cross-platform, that doesn't work.
So, if you really need to do it concurrently, you can, and sometimes you have to. In that case, you probably want to use a process for this, not a thread. You can't really kill a thread safely, but you can kill a process, and it can be as safe as you want it to be. Also, if the thread is taking 5+ seconds because it's CPU-bound, you don't want to fight with it over the GIL.
There are two basic options here.
First, you can put the code in another script and run it with subprocess:
subprocess.check_call([sys.executable, 'other_script.py', arg, other_arg],
                      timeout=5)
Since this is going through normal child-process channels, the only communication you can use is some argv strings, a success/failure return value (actually a small integer, but that's not much better), and optionally a chunk of text going in and a chunk of text coming out.
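For example, a sketch of the text-in/text-out channel using subprocess.run (other_script.py is the hypothetical child script from above, assumed to read stdin and write stdout):
import subprocess
import sys

proc = subprocess.run([sys.executable, 'other_script.py', 'some-arg'],
                      input='text going in', capture_output=True,
                      text=True, timeout=5)
print(proc.returncode, proc.stdout)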
Alternatively, you can use multiprocessing to spawn a thread-like child process:
p = multiprocessing.Process(target=func, args=args)
p.start()
p.join(5)
if p.is_alive():
    p.terminate()
As you can see, this is a little more complicated, but it's better in a few ways:
You can pass arbitrary Python objects (at least anything that can be pickled) rather than just strings.
Instead of having to put the target code in a completely independent script, you can leave it as a function in the same script.
It's more flexible—e.g., if you later need to, say, pass progress updates, it's very easy to add a queue in either or both directions.
The big problem with any kind of parallelism is sharing mutable data—e.g., having a background task update a global dictionary as part of its work (which your comments say you're trying to do). With threads, you can sort of get away with it, but race conditions can lead to corrupted data, so you have to be very careful with locking. With child processes, you can't get away with it at all. (Yes, you can use shared memory, as Sharing state between processes explains, but this is limited to simple types like numbers, fixed arrays, and types you know how to define as C structures, and it just gets you back to the same problems as threads.)
Ideally, you arrange things so you don't need to share any data while the process is running—you pass in a dict as a parameter and get a dict back as a result. This is usually pretty easy to arrange when you have a previously-synchronous function that you want to put in the background.
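A sketch of that dict-in, dict-out arrangement with a timeout (work is a hypothetical stand-in for your real function):
import multiprocessing

def work(d):
    # takes a dict in, returns a new dict out; no shared state while running
    return {k: v * 2 for k, v in d.items()}

if __name__ == '__main__':
    with multiprocessing.Pool(processes=1) as pool:
        result = pool.apply_async(work, ({'a': 1, 'b': 2},)).get(timeout=5)
    print(result)  # {'a': 2, 'b': 4}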
But what if, say, a partial result is better than no result? In that case, the simplest solution is to pass the results over a queue. You can do this with an explicit queue, as explained in Exchanging objects between processes, but there's an easier way.
If you can break the monolithic process into separate tasks, one for each value (or group of values) you wanted to stick in the dictionary, you can schedule them on a Pool—or, even better, a concurrent.futures.Executor. (If you're on Python 2.x or 3.1, see the backport futures on PyPI.)
Let's say your slow function looked like this:
def spam():
    global d
    for meat in get_all_meats():
        count = get_meat_count(meat)
        d[meat] = d.get(meat, 0) + count
Instead, you'd do this:
import concurrent.futures

def spam_one(meat):
    count = get_meat_count(meat)
    return meat, count

with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
    results = executor.map(spam_one, get_all_meats(), timeout=5)
    for (meat, count) in results:
        d[meat] = d.get(meat, 0) + count
As many results as you get within 5 seconds get added to the dict; if that isn't all of them, the rest are abandoned, and a TimeoutError is raised (which you can handle however you want—log it, do some quick fallback code, whatever).
And if the tasks really are independent (as they are in my stupid little example, but of course they may not be in your real code, at least not without a major redesign), you can parallelize the work for free just by removing that max_workers=1. Then, if you run it on an 8-core machine, it'll kick off 8 workers and give them each 1/8th of the work to do, and things will get done faster. (Usually not 8x as fast, but often 3-6x as fast, which is still pretty nice.)
This seems like a better idea (sorry, I am not sure of the Python names of things yet):
import signal

def signal_handler(signum, frame):
    raise Exception("Timeout!")

signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(3)  # Three seconds
try:
    for x in range(0, 3):
        A(x)  # Call function A(x)
except Exception:
    print("Timeout!")
signal.alarm(0)  # Reset
The comments are correct in that you should check inside the function. Here is a potential solution. Note that an asynchronous function (using a thread, for example) is different from this solution. This one is synchronous, which means it will still run in series.
import time

def someFunction():
    start = time.time()
    while time.time() - start < 5:
        pass  # do your normal work here, checking the clock each pass

for x in range(0, 3):
    someFunction()

How to execute a function asynchronously every 60 seconds in Python?

I want to execute a function every 60 seconds in Python, but I don't want to be blocked in the meantime.
How can I do it asynchronously?
import threading
import time

def f():
    print("hello world")
    threading.Timer(3, f).start()

if __name__ == '__main__':
    f()
    time.sleep(20)
With this code, the function f is executed every 3 seconds within the 20-second time.sleep(20).
At the end it gives an error, and I think that is because the threading.Timer has not been cancelled.
How can I cancel it?
You could try the threading.Timer class: http://docs.python.org/library/threading.html#timer-objects.
import threading

def f(f_stop):
    # do something here ...
    if not f_stop.is_set():
        # call f() again in 60 seconds
        threading.Timer(60, f, [f_stop]).start()

f_stop = threading.Event()
# start calling f now and every 60 sec thereafter
f(f_stop)

# stop the thread when needed
# f_stop.set()
The simplest way is to create a background thread that runs something every 60 seconds. A trivial implementation is:
import time
from threading import Thread

class BackgroundTimer(Thread):
    def run(self):
        while True:
            time.sleep(60)
            # do something

# ... SNIP ...
# Inside your main thread
# ... SNIP ...
timer = BackgroundTimer()
timer.start()
Obviously, if the "do something" takes a long time, then you'll need to accommodate for it in your sleep statement. But, 60 seconds serves as a good approximation.
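If the work might take a noticeable fraction of the interval, one way to accommodate it is to sleep only for whatever time remains (a sketch; do_something is a placeholder for your periodic task):
import time
from threading import Thread

class CompensatingTimer(Thread):
    def run(self):
        while True:
            start = time.time()
            do_something()  # placeholder for the periodic work
            # sleep only for the time left in the 60-second slot
            time.sleep(max(0, 60 - (time.time() - start)))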
I googled around and found the Python circuits framework, which makes it possible to wait for a particular event.
The .callEvent(self, event, *channels) method of circuits contains a fire and suspend-until-response functionality, the documentation says:
Fire the given event to the specified channels and suspend execution until it has been dispatched. This method may only be invoked as argument to a yield on the top execution level of a handler (e.g. "yield self.callEvent(event)"). It effectively creates and returns a generator that will be invoked by the main loop until the event has been dispatched (see :func:circuits.core.handlers.handler).
I hope you find it as useful as I do :)
It depends on what you actually want to do in the meantime. Threads are the most general and least preferred way of doing it; you should be aware of the issues with threading when you use it: not all (non-Python) code allows access from multiple threads simultaneously, communication between threads should be done using thread-safe data structures like Queue.Queue, you won't be able to interrupt the thread from outside it, and terminating the program while the thread is still running can lead to a hung interpreter or spurious tracebacks.
Often there's an easier way. If you're doing this in a GUI program, use the GUI library's timer or event functionality. All GUIs have this. Likewise, if you're using another event system, like Twisted or another server-process model, you should be able to hook into the main event loop to cause it to call your function regularly. The non-threading approaches do cause your program to be blocked while the function is pending, but not between function calls.
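For instance, with tkinter (in the standard library) the event loop's after() method schedules a callback without blocking the GUI; a minimal sketch:
import tkinter as tk

def tick():
    print("hello world")        # the periodic work goes here
    root.after(60_000, tick)    # reschedule itself in 60,000 ms

root = tk.Tk()
root.after(60_000, tick)        # first call in 60 seconds
root.mainloop()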
Why don't you create a dedicated thread in which you put a simple sleeping loop:
#!/usr/bin/env python
import time

while True:
    # Your code here
    time.sleep(60)
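To actually put that loop in a dedicated thread, something like the following works (daemon=True is an assumption here, so the thread won't keep the program alive on exit):
import threading
import time

def loop():
    while True:
        # Your code here
        time.sleep(60)

threading.Thread(target=loop, daemon=True).start()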
I think the right way to run a thread repeatedly is the following:
import threading
import time

def f():
    print("hello world")  # your code here
    myThread.run()

if __name__ == '__main__':
    myThread = threading.Timer(3, f)  # timer is set to 3 seconds
    myThread.start()
    time.sleep(10)  # it can be a loop or other time-consuming code here
    if myThread.is_alive():
        myThread.cancel()
With this code, the function f is executed every 3 seconds within the 10-second time.sleep(10). At the end, the running thread is cancelled.
If you want to invoke the method "on the clock" (e.g. every hour on the hour), you can integrate the following idea with whichever threading mechanism you choose:
import time

def wait(n):
    '''Wait until the next increment of n seconds'''
    x = time.time()
    time.sleep(n - (x % n))
    print(time.asctime())
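For example, to run something at the top of every hour (do_hourly_job is a hypothetical placeholder):
while True:
    wait(3600)          # returns at the next whole hour
    do_hourly_job()     # hypothetical periodic task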
[snip. removed non async version]
For async, you would use trio. I recommend trio to everyone who asks about async Python; it is much easier to work with, especially sockets. With sockets, I have a nursery with one read and one write function, and the write function writes data from a deque where it is placed by the read function, waiting to be sent. The following app works by calling trio.run(function, parameters) and then opening a nursery where the program's functions loop, with an await trio.sleep(60) between each pass to give the rest of the app a chance to run. This runs the program in a single process, but your machine can handle 1500 TCP connections instead of just 255 with the non-async method.
I have not yet mastered the cancellation statements, but I put in move_on_after(70), which means the code will wait 10 seconds longer than the 60-second sleep before moving on to the next loop.
import trio

async def execTimer():
    '''This function gets executed in a nursery simultaneously with the rest of the program'''
    while True:
        with trio.move_on_after(70):  # give up on this pass after 70 seconds
            await trio.sleep(60)
            print('60 Second Loop')

async def doTheRest():
    # start_soon needs an async function, so the print is wrapped here
    print('do the rest of the program simultaneously')

async def OneTime_OneMinute():
    '''This function gets run by trio.run to start the entire program'''
    async with trio.open_nursery() as nursery:
        nursery.start_soon(execTimer)
        nursery.start_soon(doTheRest)

def start():
    '''You may have only one trio.run in the entire application'''
    trio.run(OneTime_OneMinute)

if __name__ == '__main__':
    start()
This will run any number of functions simultaneously in the nursery. You can use any of the cancellable statements as checkpoints where the rest of the program gets to continue running. All trio statements are checkpoints, so use them a lot. I did not test this app, so if there are any questions, just ask.
As you can see, trio is the champion of easy-to-use functionality. It is based on using functions instead of objects, but you can use objects if you wish.
Read more at: https://trio.readthedocs.io/en/stable/reference-core.html
