My application runs in a loop, and sometimes it needs to call an LED flash function from that loop. I did it roughly like this:
def led_red_flash(flashcount):
    logging.debug('Starting')
    for l in range(0, flashcount):
        GPIO.output(16, GPIO.HIGH)
        time.sleep(0.1)
        GPIO.output(16, GPIO.LOW)
        time.sleep(0.1)
    logging.debug('Stopping')
while True:
    <do some stuff here>
    t = threading.Thread(name='led_red_flash', target=led_red_flash(100))
    t.start()
This works, but is there a way to put all the threading stuff inside the led_red_flash function itself? As my script gets more complex, the extra boilerplate will make it harder to read. So something like this:
while True:
    <do some stuff here>
    led_red_flash(100)
The above is a very simplified version of the loop I am running. In the actual script it would not be possible for multiple instances of led_red_flash to run at the same time, so that is not an issue.
You can create a wrapper function:
def _led_red_flash(flashcount):
    logging.debug('Starting')
    for l in range(0, flashcount):
        GPIO.output(16, GPIO.HIGH)
        time.sleep(0.1)
        GPIO.output(16, GPIO.LOW)
        time.sleep(0.1)
    logging.debug('Stopping')

def led_red_flash(flashcount):
    t = threading.Thread(name='led_red_flash', target=_led_red_flash, args=(flashcount,))
    t.start()
    return t
BTW, your original code didn't execute led_red_flash in a separate thread; it just called led_red_flash directly (led_red_flash(100)).
You should pass the function itself, not the return value of a function call (see threading.Thread). Change:
threading.Thread(name='led_red_flash', target=led_red_flash(100))
to
threading.Thread(name='led_red_flash', target=led_red_flash, args=(100,))
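If you ever need to wait for a flash to finish, the wrapper's return value comes in handy; a minimal sketch of that usage:

t = led_red_flash(100)
# ... do other work in the main loop ...
t.join()  # block until the flashing is done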
I have a program executed in a subprocess. This program runs forever, reads a line from its stdin, processes it, and outputs a result on stdout. I have encapsulated it as follows:
class BrainProcess:
    def __init__(self, filepath):
        # starting the program in a subprocess
        self._process = asyncio.run(self.create_process(filepath))

        # check if the program could not be executed
        if self._process.returncode is not None:
            raise BrainException(f"Could not start process {filepath}")

    @staticmethod
    async def create_process(filepath):
        process = await sp.create_subprocess_exec(
            filepath, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
        return process

    # destructor function
    def __del__(self):
        self._process.kill()  # kill the program, since it never stops
        # waiting for the program to terminate
        # self._process.wait() is asynchronous, so I use asyncio.run() to execute it
        asyncio.run(self._process.wait())

    async def _send(self, msg):
        b = bytes(msg + '\n', "utf-8")
        self._process.stdin.write(b)
        await self._process.stdin.drain()

    async def _readline(self):
        return await self._process.stdout.readline()

    def send_start_cmd(self, size):
        asyncio.run(self._send(f"START {size}"))
        line = asyncio.run(self._readline())
        print(line)
        return line
From my understanding, asyncio.run() is used to run asynchronous code in a synchronous context. That is why I use it on the following lines:
# in __init__
self._process = asyncio.run(self.create_process(filepath))
# in send_start_cmd
asyncio.run(self._send(f"START {size}"))
# ...
line = asyncio.run(self._readline())
# in __del__
asyncio.run(self._process.wait())
The first line seems to work properly (the process is created correctly), but the others throw exceptions that look like: got Future <Future pending> attached to a different loop.
Code:
brain = BrainProcess("./test")
res = brain.send_start_cmd(20)
print(res)
So my questions are:

What do these errors mean?
How do I fix them?
Did I use asyncio.run() correctly?
Is there a better way to encapsulate the process to send and retrieve data to/from it without making my whole application use async/await?
asyncio.run is meant to be used for running a body of async code, and producing a well-defined result. The most typical example is running the whole program:
async def main():
    # your application here

if __name__ == '__main__':
    asyncio.run(main())
Of course, asyncio.run is not limited to that usage; it is perfectly possible to call it multiple times, but it will create a fresh event loop each time. This means you won't be able to share async-specific objects (such as futures, or objects that refer to them) between invocations - which is precisely what you tried to do. If you want to completely hide the fact that you're using async, why use asyncio.subprocess in the first place - wouldn't the regular subprocess module do just as well?
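The error itself is easy to reproduce in isolation; this toy example (not from the question) creates a future under one asyncio.run and awaits it under a second:

import asyncio

async def make_future():
    # the future is bound to the loop created by the first asyncio.run()
    return asyncio.get_running_loop().create_future()

async def await_it(fut):
    return await fut

fut = asyncio.run(make_future())  # loop #1
asyncio.run(await_it(fut))        # loop #2: RuntimeError ... attached to a different loop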
The simplest fix is to avoid asyncio.run and just stick to the same event loop. For example:
_loop = asyncio.get_event_loop()

class BrainProcess:
    def __init__(self, filepath):
        # starting the program in a subprocess
        self._process = _loop.run_until_complete(self.create_process(filepath))
        ...
    ...
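The other methods then go through the same shared loop; a sketch of how they might look (untested, continuing the class above with the question's names):

    def send_start_cmd(self, size):
        _loop.run_until_complete(self._send(f"START {size}"))
        line = _loop.run_until_complete(self._readline())
        print(line)
        return line

    def __del__(self):
        self._process.kill()
        _loop.run_until_complete(self._process.wait())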
Is there a better way to encapsulate the process to send and retrieve data to/from it without making my whole application use async / await ?
The idea is precisely for the whole application to use async/await, otherwise you won't be able to take advantage of asyncio - e.g. you won't be able to parallelize your async code.
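For illustration, a fully async variant might look roughly like this (a sketch only; the classmethod factory stands in for __init__, which cannot await, and _send/_readline stay as in the question):

import asyncio
from asyncio import subprocess as sp

class BrainProcess:
    @classmethod
    async def create(cls, filepath):
        self = cls()
        self._process = await sp.create_subprocess_exec(
            filepath, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
        return self

    async def send_start_cmd(self, size):
        await self._send(f"START {size}")
        return await self._readline()

    # _send and _readline unchanged from the question

async def main():
    brain = await BrainProcess.create("./test")
    res = await brain.send_start_cmd(20)
    print(res)

asyncio.run(main())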
I have a few classes that look more or less like this:
import threading
import time

class Foo():
    def __init__(self, interval, callbacks):
        self.thread = threading.Thread(target=self.loop)
        self.interval = interval
        self.thread_stop = threading.Event()
        self.callbacks = callbacks

    def loop(self):
        while not self.thread_stop.is_set():
            # do some stuff...
            for callback in self.callbacks:
                callback()
            time.sleep(self.interval)

    def start(self):
        self.thread.start()

    def kill(self):
        self.thread_stop.set()
Which I am using from my main thread like this:
interval = someinterval
callbacks = [some callbacks]

f = Foo(interval, callbacks)
try:
    f.start()
except KeyboardInterrupt:
    f.kill()
    raise
I would like a KeyboardInterrupt to kill the thread after all the callbacks have completed, but before the loop repeats. Currently the interrupts are ignored and I have to resort to killing the terminal process the program is running in.
I saw the idea of using threading.Event in this post, but it appears I'm using it incorrectly, and that's making this project a real hassle to work on.
I don't know if it may be relevant, but the callbacks I'm passing access data from the Internet and make heavy use of the retrying decorator to deal with unreliable connections.
EDIT
After everyone's help, the loop now looks like this inside Foo:
def thread_loop(self):
    while not self.thread_stop.is_set():
        # do some stuff
        # call the callbacks
        self.thread_stop.wait(self.interval)
This is kind of a solution, although it isn't ideal. This code runs on PythonAnywhere, where the account is priced by CPU time. I'll have to see how much the constant waking and sleeping of threads uses over the course of a day, but it at least solves the main issue.
I think your problem is that you have a try/except block around f.start(), but f.start() returns immediately, so you aren't going to catch a KeyboardInterrupt after the thread has started.
You could try adding a while-loop at the bottom of your program like this:
f.start()
try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    f.kill()
    raise
This isn't exactly the most elegant solution, but it should work.
Thanks to @shx2 and @jazzpi for putting together the two separate pieces of the puzzle. So the final code is:
import threading
import time

class Foo():
    def __init__(self, interval, callbacks):
        self.thread = threading.Thread(target=self.loop)
        self.interval = interval
        self.thread_stop = threading.Event()
        self.callbacks = callbacks

    def loop(self):
        while not self.thread_stop.is_set():
            # do some stuff...
            for callback in self.callbacks:
                callback()
            self.thread_stop.wait(self.interval)

    def start(self):
        self.thread.start()

    def kill(self):
        self.thread_stop.set()
And then in main:
interval = someinterval
callbacks = [some, callbacks]

f = Foo(interval, callbacks)
f.start()
try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    f.kill()
    raise
@jazzpi's answer correctly addresses the issue you're having in the main thread.
As to the sleep in the thread's loop, you can simply replace the call to sleep with a call to self.thread_stop.wait(self.interval).
This way, your thread wakes up as soon as the stop event is set, or after waiting (i.e. sleeping) for self.interval seconds. (Event docs)
I have written a piece of code for scraping in Python. I have a list of URLs which need to be scraped, but after a while the script gets lost while reading web pages in the loop. So I need to set a fixed time, after which the script should abandon the current page and move on to the next one.
Below is the sample code.
def main():
    if <some condition>:
        list_of_links = ['http://link1.com', 'http://link2.com', 'http://link3.com']
        for link in list_of_links:
            process(link)

def process(link):
    <some code to read web page>
    return page_read
The script gets lost inside the process() method, which is called inside the for loop again and again. I want the for loop to skip to the next link if process() takes more than a minute to read the web page.
The script probably gets lost because the remote server doesn't respond at all, or responds too slowly. You can set a default timeout on sockets to avoid this behaviour of the process function, at the very beginning of the main function:
import socket

def main():
    socket.setdefaulttimeout(3.0)
    # process urls
    if ......
The above means that if no response arrives within 3 seconds, the pending socket operation terminates and raises a timeout exception, so
try:
    process()
except:
    pass
will work.
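Putting it together, the loop might look like this (a sketch; it assumes process() lets the timeout propagate - some libraries wrap it in their own exception, e.g. urllib's URLError):

import socket

socket.setdefaulttimeout(3.0)  # give up on unresponsive servers after 3 seconds

list_of_links = ['http://link1.com', 'http://link2.com', 'http://link3.com']
for link in list_of_links:
    try:
        process(link)
    except socket.timeout:
        continue  # this server is too slow - move on to the next link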
You probably can use a timer. It depends on the code inside your process function.
If your main and process functions are methods of a class, then:
from threading import Timer

class MyClass:
    def __init__(self):
        self.stop_thread = False

    def main(self):
        if <some condition>:
            list_of_links = ['http://link1.com', 'http://link2.com', 'http://link3.com']
            for link in list_of_links:
                self.process(link)

    def set_stop(self):
        self.stop_thread = True

    def process(self, link):
        self.stop_thread = False  # reset the flag before each link
        t = Timer(60.0, self.set_stop)
        t.start()
        # I don't know your code here
        # If you use some kind of loop it could be:
        while True:
            # Do something..
            if self.stop_thread:
                break
        # Or:
        if self.stop_thread:
            return
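Note that the Timer fires even if process() finishes well before the 60 seconds are up, which would set the flag while the next link is being read. Cancelling it on a normal exit avoids that; a small variant (my addition, not part of the answer above):

    def process(self, link):
        self.stop_thread = False
        t = Timer(60.0, self.set_stop)
        t.start()
        try:
            pass  # the real page-reading work goes here
        finally:
            t.cancel()  # don't let a stale timer fire after a normal exit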
Suppose you are working with some bodgy piece of code which you can't trust; is there a way to run it safely without losing control of your script?
An example might be a function which only works some of the time and might fail randomly/spectacularly. How could you retry until it works? I tried some hacks using the threading module but had trouble killing a hung thread neatly.
#!/usr/bin/env python
import os
import sys
import random

def unreliable_code():
    def ok():
        return "it worked!!"
    def fail():
        return "it didn't work"
    def crash():
        1/0
    def hang():
        while True:
            pass
    def bye():
        os._exit(0)
    return random.choice([ok, fail, crash, hang, bye])()

result = None
while result != "it worked!!":
    # ???
To be safe against exceptions, use try/except (but I guess you know that).
To be safe against hanging code (an endless loop), the only way I know is to run the code in another process. That child process can be killed from the parent process in case it does not terminate soon enough.
To be safe against nasty code (doing things it shall not do), have a look at http://pypi.python.org/pypi/RestrictedPython .
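A minimal sketch of the run-in-a-child-process approach (run_guarded is a hypothetical helper name, and the timeout value is arbitrary; func and its result must be picklable):

import multiprocessing
import queue

def _call_and_report(func, q):
    # runs in the child; if func hangs or exits, nothing is ever reported
    q.put(func())

def run_guarded(func, timeout=5.0):
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=_call_and_report, args=(func, q))
    p.start()
    p.join(timeout)       # wait up to `timeout` seconds for the child
    if p.is_alive():
        p.terminate()     # the parent kills the hung child
        p.join()
    try:
        return q.get_nowait()
    except queue.Empty:   # crashed, hung, or exited without a result
        return None

With unreliable_code from the question, the retry loop then becomes: while result != "it worked!!": result = run_guarded(unreliable_code).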
You can try running it in a sandbox.
In your real application, can you switch to multiprocessing? Because it seems that what you're asking for could be done with multiprocessing + threading.Timer + try/except.
Take a look at this:
from multiprocessing import Process, Queue, queues
from threading import Timer

class SafeProcess(Process):
    def __init__(self, queue, *args, **kwargs):
        self.queue = queue
        super().__init__(*args, **kwargs)

    def run(self):
        print('Running')
        try:
            result = self._target(*self._args, **self._kwargs)
            self.queue.put_nowait(result)
        except:
            print('Exception')

result = None
while result != 'it worked!!':
    q = Queue()
    p = SafeProcess(q, target=unreliable_code)
    p.start()
    t = Timer(1, p.terminate)  # in case it should hang
    t.start()
    p.join()
    t.cancel()
    try:
        result = q.get_nowait()
    except queues.Empty:
        print('Empty')
    print(result)
That in one (lucky) case gave me:
Running
Empty
None
Running
it worked!!
In your code sample you have a 4-out-of-5 chance of getting something other than the correct result, so you might also spawn a pool or something to improve your chances of getting one.
I have a threaded python application with a long-running mainloop in the background thread. This background mainloop is actually a call to pyglet.app.run(), which drives a GUI window and also can be configured to call other code periodically. I need a do_stuff(duration) function to be called at will from the main thread to trigger an animation in the GUI, wait for the animation to stop, and then return. The actual animation must be done in the background thread because the GUI library can't handle being driven by separate threads.
I believe I need to do something like this:
import threading

class StuffDoer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.max_n_times = 0
        self.total_n_times = 0
        self.paused_ev = threading.Event()

    def run(self):
        # this part is outside of my control
        while True:
            self._do_stuff()
            # do other stuff

    def _do_stuff(self):
        # this part is under my control
        if self.paused_ev.is_set():
            if self.max_n_times > self.total_n_times:
                self.paused_ev.clear()
        else:
            if self.total_n_times >= self.max_n_times:
                self.paused_ev.set()
        if not self.paused_ev.is_set():
            # do stuff that must execute in the background thread
            self.total_n_times += 1

sd = StuffDoer()
sd.start()

def do_stuff(n_times):
    sd.max_n_times += n_times
    sd.paused_ev.wait_for_clear()  # wait_for_clear() does not exist
    sd.paused_ev.wait()
    assert (sd.total_n_times == sd.max_n_times)
EDIT: use max_n_times instead of stop_time to clarify why Thread.join(duration) won't do the trick.
From the documentation for threading.Event:
wait([timeout])

Block until the internal flag is true. If the internal flag is true on entry, return immediately. Otherwise, block until another thread calls set() to set the flag to true, or until the optional timeout occurs.
I've found I can get the behavior I'm looking for if I have a pair of events, paused_ev and not_paused_ev, and use not_paused_ev.wait(). I could almost just use Thread.join(duration), except it needs to only return precisely when the background thread actually registers that the time is up. Is there some other synchronization object or other strategy I should be using instead?
I'd also be open to arguments that I'm approaching this whole thing the wrong way, provided they're good arguments.
Hoping I get some revision or additional info from my comment, but I'm kind of wondering if you're not overworking things by subclassing Thread. You can do things like this:
from threading import Thread

class MyWorker(object):
    def __init__(self):
        t = Thread(target=self._do_work, name="Worker Owned Thread")
        t.daemon = True
        t.start()

    def _do_work(self):
        while True:
            # Something going on here, forever if necessary. This thread
            # will go away if the other non-daemon threads terminate, possibly
            # raising an exception depending on this function's body.
            pass
I find this makes more sense when the method you want to run is more appropriately a member function of some other class than it would be as the run method of a Thread subclass. Additionally, this saves you from having to encapsulate a bunch of business logic inside a Thread. All IMO, of course.
It appears that your GUI animation thread is spin-locking in its while True loop. This can be prevented using thread-safe queues. Based on my reading of your question, this approach would be functionally equivalent and more efficient.
I'm omitting some details of your code above which would not change. I'm also assuming here that the run() method which you do not control uses the self.stop_time value to do its work; otherwise there is no need for a threadsafe queue.
import time
from Queue import Queue
from threading import Event

class StuffDoer:
    def __init__(self, inq, ready):
        self.inq = inq
        self.ready = ready

    def _do_stuff(self):
        self.ready.set()
        self.stop_time = self.inq.get()

GUIqueue = Queue()
control = Event()
sd = StuffDoer(GUIqueue, control)

def do_stuff(duration):
    control.clear()
    GUIqueue.put(time.time() + duration)
    control.wait()
I ended up using a Queue similar to what @wberry suggested, making use of Queue.task_done and Queue.join:
import Queue
import threading

class StuffDoer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.setDaemon(True)
        self.max_n_times = 0
        self.total_n_times = 0
        self.do_queue = Queue.Queue()

    def run(self):
        # this part is outside of my control
        while True:
            self._do_stuff()
            # do other stuff

    def _do_stuff(self):
        # this part is under my control
        if self.total_n_times >= self.max_n_times:
            try:
                self.max_n_times += self.do_queue.get(block=False)
            except Queue.Empty:
                pass
        if self.max_n_times > self.total_n_times:
            # do stuff that must execute in the background thread
            self.total_n_times += 1
            if self.total_n_times >= self.max_n_times:
                self.do_queue.task_done()

sd = StuffDoer()
sd.start()

def do_stuff(n_times):
    sd.do_queue.put(n_times)
    sd.do_queue.join()
    assert (sd.total_n_times == sd.max_n_times)
I made a solution based on @g.d.d.c's advice for this question. Here is my code:
threads = []

# initializing aux thread(s) in the main thread ...
t = threading.Thread(target=ThreadF, args=(...))
# t.setDaemon(True)  # I'm not sure whether it's really needed
t.start()
threads.append(t.ident)

# Block the main thread while any aux thread is still alive
while [thread for thread in threading.enumerate() if thread.ident in threads]:
    time.sleep(10)
Also, you can use Thread.join to block the main thread - it is a better way.
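A sketch of the join-based version (ThreadF as in the snippet above; the argument passing is omitted here):

workers = []
for _ in range(3):  # however many aux threads you need
    t = threading.Thread(target=ThreadF)
    t.start()
    workers.append(t)

# Block the main thread until every aux thread has finished
for t in workers:
    t.join()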