I am using an event-based system built on the new Python 3.5 coroutines and await. I register events, and these events are called by the system:
@event
async def handleevent(args):
    # handle the event
I need to initialize some classes to handle the work (time consuming) and then call instance methods, which are also time consuming (they actually use selenium to browse certain sites).
Ideally I would want something like the following code:
# supposedly since this is multiprocessing this is a different driver per process
driver = None

def init():
    # do the heavy initialization here
    global driver
    driver = webdriver.Chrome()

def longworkmethod():
    ## need to return some data
    return driver.dolongwork()

class Drivers:
    """ A class to handle async and multiprocessing"""
    def __init__(self, numberOfDrivers):
        self.pool = multiprocessing.Pool(processes=numberOfDrivers, initializer=init)

    async def dowork(self, args):
        return self.pool.apply_async(longworkmethod, args=args)
### my main python class
drivers = Drivers(5)

@event
async def handleevent(args):
    await drivers.dowork(args)

@event
async def quit(args):
    ## do cleanup on drivers
    sys.exit(0)
This code doesn't work, but I have tried many different ways and none seem to be able to do what I want.
It doesn't have to be this exact form, but how do I go about mixing the await and coroutines with a program that needs multiprocessing?
While there is nothing, technically speaking, that would limit you from mixing asyncio and multiprocessing, I would suggest avoiding it. It's going to add a lot of complexity, as you'll end up needing an event loop per thread, and passing information back and forth will be tricky. Just use one or the other.
asyncio supplies functions for running tasks in another thread, such as AbstractEventLoop.run_in_executor. Take a look at these answers:
https://stackoverflow.com/a/33025287/66349 (calling selenium within a coroutine)
https://stackoverflow.com/a/28492261/66349
Alternatively, you could just use multiprocessing, since selenium has a blocking (non-asyncio) interface; however, it sounds like some of your code is already using asyncio, so maybe stick with the above.
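For example, here is a minimal sketch of the run_in_executor approach; the thread-local Chrome driver, pool size, and example URL are illustrative assumptions, not taken from the question:
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

# One shared executor; each worker thread lazily creates its own driver,
# since a WebDriver instance should not be shared across threads.
executor = ThreadPoolExecutor(max_workers=5)
_local = threading.local()

def long_work(args):
    # Blocking selenium calls run on an executor thread, not on the event loop
    if not hasattr(_local, "driver"):
        _local.driver = webdriver.Chrome()
    _local.driver.get("https://example.com")  # placeholder for the real browsing
    return _local.driver.title

async def handleevent(args):
    loop = asyncio.get_event_loop()
    # Offload the blocking work so the event loop stays responsive
    return await loop.run_in_executor(executor, long_work, args)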
My ThreadPoolExecutor/gen.coroutine (Tornado v4.x) solution to circumvent blocking the webserver is not working anymore with Tornado version 6.x.
A while back I started to develop an online browser game using a Tornado webserver (v4.x) and websockets. Whenever user input is expected, the game would send the question to the client and wait for the response. Back then I used gen.coroutine and a ThreadPoolExecutor to make this task non-blocking. Now that I have started refactoring the game, it is not working with Tornado v6.x and the task is blocking the server again. I searched for possible solutions, but so far I have been unable to get it working again. It is not clear to me how to change my existing code to be non-blocking again.
server.py:
class PlayerWebSocket(tornado.websocket.WebSocketHandler):
    executor = ThreadPoolExecutor(max_workers=15)

    @run_on_executor
    def on_message(self, message):
        params = message.split(':')
        self.player.callbacks[int(params[0])] = params[1]

if __name__ == '__main__':
    application = Application()
    application.listen(9999)
    tornado.ioloop.IOLoop.instance().start()
player.py:
@gen.coroutine
def send(self, message):
    self.socket.write_message(message)

def create_choice(self, id, choices):
    d = {}
    d['id'] = id
    d['choices'] = choices
    self.choice[d['id']] = d
    self.send('update', self)
    while not d['id'] in self.callbacks:
        pass
    del self.choice[d['id']]
    return self.callbacks[d['id']]
Whenever a choice is to be made, the create_choice function creates a dict with a list (choices) and an id and stores it in the players self.callbacks. After that it just stays in the while loop until the websocket.on_message function puts the received answer (which looks like this: id:Choice_id, so for example 1:12838732) into the callbacks dict.
The WebSocketHandler.write_message method is not thread-safe, so it can only be called from the IOLoop's thread, and not from a ThreadPoolExecutor (This has always been true, but sometimes it might have seemed to work anyway).
The simplest way to fix this code is to save IOLoop.current() in a global variable from the main thread (the current() function accesses a thread-local variable, so you can't call it from the thread pool) and use ioloop.add_callback(self.socket.write_message, message) (and remove @gen.coroutine from send - it doesn't do any good to make functions coroutines if they contain no yield expressions).
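A condensed, self-contained sketch of that fix; the handler body and the /ws route here are stand-ins, not the asker's player logic:
import tornado.ioloop
import tornado.web
import tornado.websocket
from concurrent.futures import ThreadPoolExecutor
from tornado.concurrent import run_on_executor

main_ioloop = None  # saved from the main thread at startup

class PlayerWebSocket(tornado.websocket.WebSocketHandler):
    executor = ThreadPoolExecutor(max_workers=15)

    @run_on_executor
    def on_message(self, message):
        # runs on an executor thread: do the blocking work here ...
        reply = message.upper()
        # ... but schedule the actual write on the IOLoop's own thread
        main_ioloop.add_callback(self.write_message, reply)

if __name__ == '__main__':
    main_ioloop = tornado.ioloop.IOLoop.current()
    app = tornado.web.Application([(r'/ws', PlayerWebSocket)])
    app.listen(9999)
    main_ioloop.start()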
I'm looking for a pythonic asyncio "pattern" for a construct that appears quite often in my programs.
A worker task performs some operations usually consisting of several steps. The details of those operations are controlled by commands sent from a controlling function to the worker task. There are sleeps between individual steps and the worker is able to accept new commands only during these sleeps. A new command should wake up the worker task from sleep immediately.
The commands represent a desired target state. I'm using a Queue for communication. However, there can be only one target, that's why the commands do not build a real queue, but the last one replaces all previous ones. The queue has one item at most.
Currently I'm using another async library, but I want to switch to standard asyncio. An example:
# warning: not asyncio code; not real code
cmd_queue = Queue()

async def worker():
    cmd = 'INIT'
    while cmd != 'STOP':
        ... do_something1 sync or async ...
        newcmd = await cmd_queue.get(timeout=SLEEPTIME1, timeout_value=None)
        if newcmd is not None:
            cmd = newcmd
            continue
        ... do_something2 sync or async ...
        newcmd = await cmd_queue.get(timeout=SLEEPTIME2, timeout_value=None)
        if newcmd is not None:
            cmd = newcmd
            continue

def controller():
    ...
    if newcmd:
        cmd_queue.clear()      # replaces a waiting command
        cmd_queue.put(newcmd)  # put_nowait() in asyncio
    ...
I could rewrite that form of queue.get to asyncio code:
try:
    cmd = await asyncio.wait_for(cmd_queue.get(), timeout=SLEEPTIME)
    continue  # or process otherwise
except asyncio.TimeoutError:
    pass
but I think maybe there is a simpler solution. OTOH, if you have asyncio experience and think a Queue with a timeout is the way to go, that would help me too.
I tried to search, but could not find proper keywords for my problem (same holds for the question title).
There is certainly nothing wrong with your timeout implementation. Methods on asyncio synchronization primitives intentionally don't support explicit timeout arguments, leaving it to the caller to use cancellation or wait_for to time out when needed.
As for the single-element queue, I would consider replacing it with a Future, which is not only designed to hold a single value, but is also very lightweight in asyncio, given that it is the basic abstraction used to build almost everything else.
Instead of wait_for(cmd_queue.get(), ...), you'd write wait_for(cmd_future, ...), and instead of cmd_queue.put(value), you'd write cmd_future.set_result(value). The only important difference is that a future is one-shot, so after getting an item, you need to assign a new future to cmd_future.
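A minimal sketch of that substitution, loosely following the question's pseudocode; SLEEPTIME1 and the call_later demo are placeholders, and the corner case of a second command arriving before the worker resumes is only handled crudely:
import asyncio

SLEEPTIME1 = 5  # placeholder

async def worker(loop):
    global cmd_future
    cmd = 'INIT'
    while cmd != 'STOP':
        # ... do_something1 sync or async ...
        try:
            cmd = await asyncio.wait_for(cmd_future, timeout=SLEEPTIME1)
            cmd_future = loop.create_future()  # futures are one-shot: re-arm
            continue
        except asyncio.TimeoutError:
            # wait_for cancels the future on timeout, so re-arm it here as well
            cmd_future = loop.create_future()
        # ... do_something2 sync or async ...

def controller(loop, newcmd):
    global cmd_future
    if cmd_future.done():
        cmd_future = loop.create_future()  # drop an unconsumed command
    cmd_future.set_result(newcmd)          # wakes the worker immediately

loop = asyncio.new_event_loop()
cmd_future = loop.create_future()
loop.call_later(2, controller, loop, 'STOP')  # demo: stop the worker after ~2 s
loop.run_until_complete(worker(loop))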
There are a lot of tutorials etc. on Python and asynchronous coding techniques, but I am having difficulty filtering through the results to find what I need. I am new to Python, so that doesn't help.
Setup
I currently have two objects that look sort of like this (please excuse my python formatting):
class Alphabet(parent):
    def __init__(self, item):
        self.item = item

    def style_alphabet(self, callback):
        # this method presumably takes a very long time, and fills out some properties
        # of the Alphabet object
        callback()

class myobj(another_parent):
    def __init__(self):
        self.alphabets = []
        self.refresh()

    def foo(self):
        for item in ['a', 'b', 'c']:
            letters = Alphabet(item)
            self.alphabets.append(letters)
        self.screen_refresh()
        for item in self.alphabets:
            # this is the code that I want to run asynchronously. Typically, my efforts
            # all involve passing item.style_alphabet to the async object / method
            # and either calling start() here or in Alphabet
            item.style_alphabet(self.screen_refresh)

    def refresh(self):
        self.foo()
        # redraw screen, using the refreshed alphabets
        redraw_screen()

    def screen_refresh(self):
        # a lighter version of refresh()
        redraw_screen()
The idea is that the main thread initially draws the screen with incomplete Alphabet objects, then fills out the Alphabet objects, updating the screen as they complete.
I've tried a number of implementations of threading.Thread, Queue.Queue, and even futures, and for some reason they either haven't worked or they have blocked the main thread, so that the initial draw doesn't take place.
A few of the async methods I've attempted:
class Async(threading.Thread):
    def __init__(self, f, cb):
        threading.Thread.__init__(self)
        self.f = f
        self.cb = cb

    def run(self):
        self.f()
        self.cb()

def run_as_thread(f):
    # When I tried this method, I assigned the callback to a property of "Alphabet"
    thr = threading.Thread(target=f)
    thr.start()

def run_async(f, cb):
    pool = Pool(processes=1)
    result = pool.apply_async(func=f, args=args, callback=cb)
I ended up writing a thread pool to deal with this use pattern. Try creating a queue and handing a reference off to all the worker threads. Add task objects to the queue from the main thread. Worker threads pull objects from the queue and invoke the functions. Add an event to each task to be signaled on the worker thread at task completion. Keep a list of task objects on the main thread and use polling to see if the UI needs an update. One can get fancy and add a pointer to a callback function on the task objects if needed.
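A stripped-down sketch of that pattern; the Task class, worker function, and placeholder work below are invented for illustration, not taken from the linked recipe:
import queue
import threading

class Task:
    def __init__(self, func, callback=None):
        self.func = func
        self.callback = callback       # optional pointer to a callback function
        self.done = threading.Event()  # signaled on the worker thread at completion

def worker(task_queue):
    while True:
        task = task_queue.get()
        task.func()                    # invoke the function on the worker thread
        if task.callback:
            task.callback()
        task.done.set()
        task_queue.task_done()

task_queue = queue.Queue()
for _ in range(4):                     # hand the queue reference to all worker threads
    threading.Thread(target=worker, args=(task_queue,), daemon=True).start()

# Main thread: add task objects, keep a list, and poll to see if the UI needs an update
tasks = [Task(lambda: print("styling...")) for _ in range(3)]
for t in tasks:
    task_queue.put(t)
finished = [t for t in tasks if t.done.is_set()]  # poll periodically from the UI loop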
My solution was inspired by what I found on Google: http://code.activestate.com/recipes/577187-python-thread-pool/
I kept improving on that design to add features and give the threading, multiprocessing, and parallel python modules a consistent interface. My implementation is at:
https://github.com/nornir/nornir-pools
Docs:
http://nornir.github.io/packages/nornir_pools.html
If you are new to Python and not familiar with the GIL I suggest doing a search for Python threading and the global interpreter lock (GIL). It isn’t a happy story. Generally I find I need to use the multiprocessing module to get decent performance.
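If multiprocessing does turn out to be necessary, a minimal example looks something like this (the work function is just a stand-in):
from multiprocessing import Pool

def style_one(item):
    # CPU-heavy work runs in a separate process, so the GIL is not a bottleneck
    return item.upper()

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        results = pool.map(style_one, ['a', 'b', 'c'])
    print(results)  # ['A', 'B', 'C']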
Hope some of this helps.
I have a project that I'm writing in Python that will be sending hardware (Phidgets) commands. Because I'll be interfacing with more than one hardware component, I need to have more than one loop running concurrently.
I've researched the Python multiprocessing module, but it turns out that the hardware can only be controlled by one process at a time, so all my loops need to run in the same process.
As of right now, I've been able to accomplish my task with a Tk() loop, but without actually using any of the GUI tools. For example:
from tkinter import Tk

class hardwareCommand:
    def __init__(self):
        # Define Tk object
        self.root = Tk()
        # open the hardware, set up self. variables, call the other functions
        self.hardwareLoop()
        self.UDPListenLoop()
        self.eventListenLoop()
        # start the Tk loop
        self.root.mainloop()

    def hardwareLoop(self):
        # Timed processing with the hardware
        setHardwareState(self.state)
        self.root.after(100, self.hardwareLoop)

    def UDPListenLoop(self):
        # Listen for commands from UDP, call appropriate functions
        self.state = updateState(self.state)
        self.root.after(2000, self.UDPListenLoop)

    def eventListenLoop(self, event=None):
        if event == importantEvent:
            self.state = updateState(self.event.state)
        self.root.after(2000, self.eventListenLoop)

hardwareCommand()
So basically, the only reason for defining the Tk() loop is so that I can call the root.after() command within those functions that need to be concurrently looped.
This works, but is there a better / more pythonic way of doing it? I'm also wondering if this method causes unnecessary computational overhead (I'm not a computer science guy).
Thanks!
The multiprocessing module is geared towards having multiple separate processes. Although you can use Tk's event loop, that is unnecessary if you don't have a Tk-based GUI, so if you just want multiple tasks to execute in the same process you can use the threading module. With it you can create specific classes which encapsulate a separate thread of execution, so you can have many "loops" executing simultaneously in the background. Think of something like this:
from threading import Thread

class hardwareTasks(Thread):
    def hardwareSpecificFunction(self):
        """
        Example hardware specific task
        """
        #do something useful
        return

    def run(self):
        """
        Loop running hardware tasks
        """
        while True:
            #do something
            self.hardwareSpecificFunction()

class eventListen(Thread):
    def eventHandlingSpecificFunction(self):
        """
        Example event handling specific task
        """
        #do something useful
        return

    def run(self):
        """
        Loop treating events
        """
        while True:
            #do something
            self.eventHandlingSpecificFunction()

if __name__ == '__main__':
    # Instantiate specific classes
    hw_tasks = hardwareTasks()
    event_tasks = eventListen()

    # This will start each specific loop in the background (the 'run' method)
    hw_tasks.start()
    event_tasks.start()

    while True:
        #do something (main loop)
        pass
You should check this article to get more familiar with the threading module. Its documentation is a good read too, so you can explore its full potential.
I want to execute a function every 60 seconds on Python but I don't want to be blocked meanwhile.
How can I do it asynchronously?
import threading
import time

def f():
    print("hello world")
    threading.Timer(3, f).start()

if __name__ == '__main__':
    f()
    time.sleep(20)
With this code, the function f is executed every 3 seconds during the 20-second time.sleep(20).
At the end it gives an error, and I think it is because the threading.Timer has not been canceled.
How can I cancel it?
You could try the threading.Timer class: http://docs.python.org/library/threading.html#timer-objects.
import threading

def f(f_stop):
    # do something here ...
    if not f_stop.is_set():
        # call f() again in 60 seconds
        threading.Timer(60, f, [f_stop]).start()

f_stop = threading.Event()
# start calling f now and every 60 sec thereafter
f(f_stop)

# stop the thread when needed
# f_stop.set()
The simplest way is to create a background thread that runs something every 60 seconds. A trivial implementation is:
import time
from threading import Thread

class BackgroundTimer(Thread):
    def run(self):
        while 1:
            time.sleep(60)
            # do something

# ... SNIP ...
# Inside your main thread
# ... SNIP ...

timer = BackgroundTimer()
timer.start()
Obviously, if the "do something" takes a long time, then you'll need to accommodate for it in your sleep statement. But, 60 seconds serves as a good approximation.
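If you do need to accommodate for it, one option is to time the work and sleep only for the remainder of the interval; a rough sketch:
import time
from threading import Thread

class BackgroundTimer(Thread):
    INTERVAL = 60

    def run(self):
        while True:
            started = time.time()
            # do something
            elapsed = time.time() - started
            # sleep only for whatever is left of the 60-second interval
            time.sleep(max(0, self.INTERVAL - elapsed))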
I googled around and found the Python circuits framework, which makes it possible to wait for a particular event.
The .callEvent(self, event, *channels) method of circuits contains a fire and suspend-until-response functionality, the documentation says:
Fire the given event to the specified channels and suspend execution
until it has been dispatched. This method may only be invoked as
argument to a yield on the top execution level of a handler (e.g.
"yield self.callEvent(event)"). It effectively creates and returns
a generator that will be invoked by the main loop until the event has
been dispatched (see :func:circuits.core.handlers.handler).
I hope you find it as useful as I do :)
It depends on what you actually want to do in the mean time. Threads are the most general and least preferred way of doing it; you should be aware of the issues with threading when you use it: not all (non-Python) code allows access from multiple threads simultaneously, communication between threads should be done using thread-safe datastructures like Queue.Queue, you won't be able to interrupt the thread from outside it, and terminating the program while the thread is still running can lead to a hung interpreter or spurious tracebacks.
Often there's an easier way. If you're doing this in a GUI program, use the GUI library's timer or event functionality. All GUIs have this. Likewise, if you're using another event system, like Twisted or another server-process model, you should be able to hook into the main event loop to cause it to call your function regularly. The non-threading approaches do cause your program to be blocked while the function is pending, but not between function calls.
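For example, with Tkinter the timer functionality is the after() method; a minimal sketch (the 60-second interval and the print are placeholders):
import tkinter as tk

def every_60_seconds():
    print("periodic work")               # your function here
    root.after(60000, every_60_seconds)  # reschedule in 60 000 ms

root = tk.Tk()
root.after(60000, every_60_seconds)
root.mainloop()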
Why don't you create a dedicated thread in which you put a simple sleeping loop:
#!/usr/bin/env python
import time

while True:
    # Your code here
    time.sleep(60)
I think the right way to run a thread repeatedly is the following:
import threading
import time

def f():
    print("hello world")  # your code here
    myThread.run()

if __name__ == '__main__':
    myThread = threading.Timer(3, f)  # timer is set to 3 seconds
    myThread.start()
    time.sleep(10)  # it can be a loop or other time-consuming code here
    if myThread.is_alive():
        myThread.cancel()
With this code, the function f is executed every 3 seconds during the 10-second time.sleep(10). At the end, the running thread is canceled.
If you want to invoke the method "on the clock" (e.g. every hour on the hour), you can integrate the following idea with whichever threading mechanism you choose:
import time

def wait(n):
    '''Wait until the next increment of n seconds'''
    x = time.time()
    time.sleep(n - (x % n))
    print(time.asctime())
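A short usage sketch, assuming the wait() helper above and a hypothetical do_work() standing in for the periodic task:
while True:
    wait(60)   # returns at the top of each minute
    do_work()  # hypothetical periodic task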
[snip. removed non async version]
To do this asynchronously you could use trio. I recommend trio to everyone who asks about async Python. It is much easier to work with, especially sockets. With sockets, I have a nursery with one read and one write function; the write function writes data from a deque where it is placed by the read function, waiting to be sent. The following app works by using trio.run(function, parameters) and then opening a nursery where the program's functions run in loops, with an await trio.sleep(60) between each loop to give the rest of the app a chance to run. This runs the program in a single process, but your machine can handle 1500 TCP connections instead of just 255 with the non-async method.
I have not yet mastered the cancellation statements, but I put in move_on_after(70), which means the code will wait up to 10 seconds longer than the 60-second sleep before moving on to the next loop.
import trio

async def execTimer():
    '''This function gets executed in a nursery simultaneously with the rest of the program'''
    while True:
        with trio.move_on_after(70):
            await trio.sleep(60)
            print('60 Second Loop')

async def OneTime_OneMinute():
    '''This function gets run by trio.run to start the entire program'''
    async with trio.open_nursery() as nursery:
        nursery.start_soon(execTimer)
        nursery.start_soon(print, 'do the rest of the program simultaneously')

def start():
    '''You may have only one trio.run in the entire application'''
    trio.run(OneTime_OneMinute)

if __name__ == '__main__':
    start()
This will run any number of functions simultaneously in the nursery. You can use any of the cancellable statements for checkpoints where the rest of the program gets to continue running. All trio statements are checkpoints, so use them a lot. I did not test this app, so if there are any questions just ask.
As you can see trio is the champion of easy-to-use functionality. It is based on using functions instead of objects but you can use objects if you wish.
Read more at:
[1]: https://trio.readthedocs.io/en/stable/reference-core.html