I am using Python websockets 4.0.1 on Ubuntu. I want to have 2 websocket servers running. I was able to get this to "kind of work" by creating 2 threads and independent event loops for each one. By "kind of work", I mean both websockets work and are responsive for about 30 seconds and then one of them stops. I have to restart the process to get them both to work again. If I only run one or the other of these 2 threads, the single websocket works forever.
What am I doing wrong and how can I have 2 websockets work forever with asyncio?
# Start VL WebSocket Task
class vlWebSocketTask(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        # Main while loop
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        while True:
            try:
                print("Starting VL WebSocket Server...")
                startVLServer = websockets.serve(vlWebsocketServer, '192.168.1.3', 8777)
                asyncio.get_event_loop().run_until_complete(startVLServer)
                asyncio.get_event_loop().run_forever()
            except Exception as ex:
                print(ex)
                time.sleep(5)

# Start IR WebSocket Task
class irWebSocketTask(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        while True:
            try:
                print("Starting IR WebSocket Server...")
                startIRServer = websockets.serve(irWebsocketServer, '192.168.1.3', 8555)
                asyncio.get_event_loop().run_until_complete(startIRServer)
                asyncio.get_event_loop().run_forever()
            except Exception as ex:
                print(ex)
                time.sleep(5)
# Initialize VL WebSocket Task
#VLWebSocketTask = vlWebSocketTask()
#VLWebSocketTask.start()
# Initialize IR WebSocket Task
IRWebSocketTask = irWebSocketTask()
IRWebSocketTask.start()
You don't need threads to run multiple asyncio tasks - allowing multiple agents to share the same event loop is the strong suit of asyncio. You should be able to replace both thread-based classes with code like this:
loop = asyncio.new_event_loop()
loop.run_until_complete(websockets.serve(vlWebsocketServer, '192.168.1.3', 8777))
loop.run_until_complete(websockets.serve(irWebsocketServer, '192.168.1.3', 8555))
loop.run_forever()
While it is not exactly wrong to mix threads and asyncio, doing so correctly requires care not to mix up the separate asyncio instances. The safe way to use threads for asyncio is with loop.run_in_executor(), which runs synchronous code in a separate thread without blocking the event loop, while returning an object awaitable from the loop.
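For illustration, a minimal sketch of that pattern (blocking_io is a hypothetical stand-in for any synchronous call):

import asyncio
import time

def blocking_io():
    # Hypothetical stand-in for any synchronous, blocking call.
    time.sleep(1)
    return 42

async def use_executor(loop):
    # The blocking call runs in the default thread pool executor;
    # the event loop stays free to run other tasks meanwhile.
    result = await loop.run_in_executor(None, blocking_io)
    print(result)

loop = asyncio.get_event_loop()
loop.run_until_complete(use_executor(loop))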
Note: the two-server snippet at the top of this answer was written prior to the advent of asyncio.run() and manually spins the event loop. In Python 3.7 and later one would probably write something like:
async def main():
    server1 = await websockets.serve(vlWebsocketServer, '192.168.1.3', 8777)
    server2 = await websockets.serve(irWebsocketServer, '192.168.1.3', 8555)
    await asyncio.gather(server1.wait_closed(), server2.wait_closed())

asyncio.run(main())
Related
Goal: To cancel pending tasks safely across all threads. Then safely end all the asyncio loops across all threads.
Code explanation:
I opened two threads, one for the server to run and the other for background processing. Each thread has its own separate asyncio loop.
Desired Function:
When I receive a message called onClose from the client, I want to immediately shut down all processing safely across all the threads. The desired behavior belongs in the func_websocket_connection() function, after print('Running On Close Function').
Techniques Tried:
Of course I tried os._exit(0) to abruptly stop everything. It accomplishes what I want but I also know it is not safe and can corrupt processing data. I also tried
print('Running On Close Function')
loop = asyncio.get_event_loop()
loop.stop()
which also works but I get Task was destroyed but it is pending!
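For reference, a common way to avoid that warning on Python 3.7+ is to cancel the pending tasks and wait for them before stopping the loop; a minimal sketch:

import asyncio

async def shutdown(loop):
    # Cancel every task except this one, let the cancellations be
    # processed, then stop the loop; this avoids the
    # "Task was destroyed but it is pending!" warning.
    tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    for task in tasks:
        task.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    loop.stop()

# From another thread, hand the coroutine to the loop thread-safely:
# asyncio.run_coroutine_threadsafe(shutdown(loop), loop)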
Non-Server Code:
import asyncio
import websockets
from threading import Thread
from time import sleep

#=============================================
# Open Websocket Server:
def func_run_server():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    asyncio.ensure_future(func_websocket_connection())
    loop.run_forever()

# Background Processing Function 1:
async def func_websocket_connection():
    for i in range(100):
        await asyncio.sleep(0.5)
        print('Run Step Server 0')
        # Some If Statement
        if i == 10:
            print('Running On Close Function')

#=============================================
# Open Background Processing:
def func_run_background():
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    future_run_step0 = asyncio.ensure_future(func_run_step0())
    future_run_step1 = asyncio.ensure_future(func_run_step1())
    loop.run_until_complete(future_run_step0)
    loop.run_until_complete(future_run_step1)

# Background Processing Function 1:
async def func_run_step0():
    await asyncio.sleep(5.0)
    print('Run Step 0')

# Background Processing Function 2:
async def func_run_step1():
    await asyncio.sleep(5.0)
    print('Run Step 1')

#================================================================================
# Running two separate threads
Thread(target=func_run_server).start()
Thread(target=func_run_background).start()
I'm trying to combine multiprocessing with asyncio. The program has two main components - one which streams/generates content, and another that consumes it.
What I want to do is to create multiple processes in order to exploit multiple CPU cores - one for the stream listener/generator, another for the consumer, and a simple one to shut down everything when the consumer has stopped.
My approach so far has been to create the processes, and start them. Each such process creates an async task. Once all processes have started, I run the asyncio tasks. What I have so far (stripped down) is:
def consume_task(loop, consumer):
    loop.create_task(consume_queue(consumer))

def stream_task(loop, listener, consumer):
    loop.create_task(create_stream(listener, consumer))

def shutdown_task(loop, listener):
    loop.create_task(shutdown(consumer))

async def shutdown(consumer):
    print("Shutdown task created")
    while not consumer.is_stopped():
        print("No activity")
        await asyncio.sleep(5)
    print("Shutdown initiated")
    loop.stop()

async def create_stream(listener, consumer):
    stream = Stream(auth, listener)
    print("Stream created")
    stream.filter(track=KEYWORDS, is_async=True)
    await asyncio.sleep(EVENT_DURATION)
    print("Stream finished")
    consumer.stop()

async def consume_queue(consumer):
    await consumer.run()

loop = asyncio.get_event_loop()

p_stream = Process(target=stream_task, args=(loop, listener, consumer))
p_consumer = Process(target=consume_task, args=(loop, consumer))
p_shutdown = Process(target=shutdown_task, args=(loop, consumer))

p_stream.start()
p_consumer.start()
p_shutdown.start()

loop.run_forever()
loop.close()
The problem is that everything hangs (or does it block?) - no tasks are actually running. My solution was to change the first three functions to:
def consume_task(loop, consumer):
    loop.create_task(consume_queue(consumer))
    loop.run_forever()

def stream_task(loop, listener, consumer):
    loop.create_task(create_stream(listener, consumer))
    loop.run_forever()

def shutdown_task(loop, listener):
    loop.create_task(shutdown(consumer))
    loop.run_forever()
This does actually run. However, the consumer and the listener objects are not able to communicate. As a simple example, when the create_stream function calls consumer.stop(), the consumer does not stop. Even when I change a consumer class variable, the changes are not made - case in point, the shared queue remains empty. This is how I am creating the instances:
queue = Queue()
consumer = PrintConsumer(queue)
listener = QueuedListener(queue, max_time=EVENT_DURATION)
Please note that if I do not use processes, but only asyncio tasks, everything works as expected, so I do not think it's a reference issue:
loop = asyncio.get_event_loop()
stream_task(loop, listener, consumer)
consume_task(loop, consumer)
shutdown_task(loop, listener)
loop.run_forever()
loop.close()
Is it because they are running in different processes? How should I go about fixing this issue, please?
Found the problem! Multiprocessing creates copies of instances. The solution is to use a Manager, which holds the shared instances in a server process and hands out proxies to them.
EDIT [11/2/2020]:
import asyncio
from multiprocessing import Process, Manager

"""
These two functions will be created as separate processes.
"""
def task1(loop, shared_list):
    output = loop.run_until_complete(asyncio.gather(async1(shared_list)))

def task2(loop, shared_list):
    output = loop.run_until_complete(asyncio.gather(async2(shared_list)))

"""
These two functions will be called (in different processes) asynchronously.
"""
async def async1(shared_list):
    pass

async def async2(shared_list):
    pass

"""
Create the manager; Manager() already starts its server process.
From this manager, also create a list that is shared by functions in
different processes.
"""
manager = Manager()
shared_list = manager.list()

loop = asyncio.get_event_loop()  # the event loop

"""
Create two processes.
"""
process1 = Process(target=task1, args=(loop, shared_list))
process2 = Process(target=task2, args=(loop, shared_list))

"""
Start the two processes and wait for them to finish.
"""
process1.start()
process2.start()
process1.join()
process2.join()

"""
Clean up
"""
loop.close()
manager.shutdown()
I'm trying to understand whether it is possible to start an asyncio.Server instance while the event loop is already running via the run_forever method (from a separate thread, of course).
As I understand, the server can be started either by loop.run_until_complete(asyncio.start_server(...)) or by
await asyncio.start_server(...), if the loop is already running.
The first way is not acceptable for me, since the loop is already running via the run_forever method. But I also can't use the await expression, since I'm calling from outside the "loop area" (e.g. from the main method, which can't be marked as async, right?).
def loop_thread(loop):
    asyncio.set_event_loop(loop)
    try:
        loop.run_forever()
    finally:
        loop.close()
        print("loop closed")

class SchedulerTestManager:
    def __init__(self):
        ...
        self.loop = asyncio.get_event_loop()
        self.servers_loop_thread = threading.Thread(
            target=loop_thread, args=(self.loop, ))
        ...

    def start_test(self):
        self.servers_loop_thread.start()
        return self.servers_loop_thread

    def add_router(self, router):
        r = self.endpoint.add_router(router)
        host = router.ConnectionParameters.Host
        port = router.ConnectionParameters.Port
        srv = TcpServer(host, port)
        server_coro = asyncio.start_server(
            self.handle_connection, self.host, self.port)
        # does not work since add_router is not async
        # self.server = await server_coro
        # does not work, since the loop is already running
        # self.server = self.loop.run_until_complete(server_coro)
        return r

def main():
    st_manager = SchedulerTestManager()
    thread = st_manager.start_test()
    router = st_manager.add_router(router)
Of course, the simplest solution is to add all routers (servers) before starting the test (running the loop). But I want to try to implement it so that a router can be added while a test is already running. I thought the loop.call_soon (call_soon_threadsafe) methods could help me, but it seems they can't schedule a coroutine, just a plain function.
I hope my explanation is not too confusing. Thanks in advance!
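For reference, Python 3.5.1 and later also provide asyncio.run_coroutine_threadsafe, which submits a coroutine to a loop running in another thread; a minimal sketch (handler stands for whatever connection callback you would pass to start_server):

import asyncio

def add_server_from_another_thread(loop, handler, host, port):
    # Thread-safe: submit the coroutine to the running loop and block
    # until the server has actually been created.
    future = asyncio.run_coroutine_threadsafe(
        asyncio.start_server(handler, host, port), loop)
    return future.result()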
For communicating between an event loop running in one thread and good old threaded code running in another thread, you might use the janus library.
It's a queue with two interfaces: an async one and a thread-safe sync one.
This is a usage example:
import asyncio
import janus

loop = asyncio.get_event_loop()
queue = janus.Queue(loop=loop)

def threaded(sync_q):
    for i in range(100):
        sync_q.put(i)
    sync_q.join()

@asyncio.coroutine
def async_coro(async_q):
    for i in range(100):
        val = yield from async_q.get()
        assert val == i
        async_q.task_done()

fut = loop.run_in_executor(None, threaded, queue.sync_q)
loop.run_until_complete(async_coro(queue.async_q))
loop.run_until_complete(fut)
You could create a task that waits for new messages from the queue in a loop and starts a new server on each request, as in the sketch below. The other thread can then push a message into the queue to ask for a new server.
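A minimal sketch of that idea, in the same old-style coroutine syntax, assuming the handle_connection callback from the question and (host, port) tuples pushed by the other thread:

@asyncio.coroutine
def server_spawner(async_q):
    # Wait for (host, port) requests and start a new server for each.
    while True:
        host, port = yield from async_q.get()
        yield from asyncio.start_server(handle_connection, host, port)
        async_q.task_done()

# From the non-asyncio thread:
# queue.sync_q.put(('127.0.0.1', 8889))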
This is the basic TCP server from the asyncio tutorial:
import asyncio

class EchoServerClientProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        message = data.decode()
        print('Data received: {!r}'.format(message))
        print('Send: {!r}'.format(message))
        self.transport.write(data)
        print('Close the client socket')
        self.transport.close()

loop = asyncio.get_event_loop()
# Each client connection will create a new protocol instance
coro = loop.create_server(EchoServerClientProtocol, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)

# Serve requests until CTRL+c is pressed
print('Serving on {}'.format(server.sockets[0].getsockname()))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass

# Close the server
server.close()
loop.run_until_complete(server.wait_closed())
loop.close()
Like all the other examples I found, it uses the blocking loop.run_forever().
How do I start the listening server and do something else in the meantime?
I have tried moving the server startup into a function and starting that function with asyncio.async(), but with no success.
What am I missing here?
You can schedule several concurrent asyncio tasks before calling loop.run_forever().
@asyncio.coroutine
def other_task_coroutine():
    pass  # do something

start_tcp_server_task = loop.create_task(loop.create_server(
    EchoServerClientProtocol, '127.0.0.1', 8888))
other_task = loop.create_task(other_task_coroutine())

loop.run_forever()
When you call loop.create_task(loop.create_server()) or loop.create_task(other_task_coroutine()), nothing is actually executed: a coroutine object is created and wrapped in a task (consider a task to be a shell and the coroutine an instance of the code that will be executed in the task). The tasks are scheduled on the loop when created.
The loop will execute start_tcp_server_task first (as it's scheduled first) until a blocking IO event is pending or the passive socket is ready to listen for incoming connections.
You can see asyncio as a non-preemptible scheduler running on one CPU: once the first task interrupts itself or is done, the second task will be executed. Hence, when one task is executed, the other one has to wait until the running task finishes or yields (or "awaits" with Python 3.5). "yielding" (yield from client.read()) or "awaiting" (await client.read()) means that the task gives back the hand to the loop's scheduler, until client.read() can be executed (data is available on the socket).
Once the task gave back the control to the loop, it can schedule the other pending tasks, process incoming events and schedule the tasks which were waiting for those events. Once there is nothing left to do, the loop will perform the only blocking call of the process: sleep until the kernel notifies it that events are ready to be processed.
In this context, you must understand that when using asyncio, everything running in the process must run asynchronously so the loop can do its work: any blocking call, multiprocessing objects included, will stall the loop.
Note that asyncio.async(coroutine(), loop=loop) is equivalent to loop.create_task(coroutine()).
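To make the interleaving concrete, a minimal self-contained sketch of two tasks sharing one loop:

import asyncio

@asyncio.coroutine
def ticker(name):
    # Each "yield from" hands control back to the scheduler, which
    # lets the other task run in the meantime.
    for i in range(3):
        print(name, i)
        yield from asyncio.sleep(0.1)

loop = asyncio.get_event_loop()
loop.create_task(ticker('a'))
loop.create_task(ticker('b'))
loop.run_until_complete(asyncio.sleep(1))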
Additionally, you can consider running blocking work in an executor. For example:
coro = loop.create_server(EchoServerClientProtocol, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)

async def execute(loop):
    await loop.run_in_executor(None, your_func_here, *args)

asyncio.async(execute(loop))
loop.run_forever()
The executor will run whatever function you pass it in a thread pool, which won't block your server.
I have a tkinter based GUI program running in Python 3.4.1. I have several threads running in the program to get JSON data from various urls. I am wanting to add some WebSocket functionality to be able to allow program to act as a server and allow several clients to connect to it over a WebSocket and exchange other JSON data.
I am attempting to use the Autobahn|Python WebSocket server for asyncio.
I first tried to run the asyncio event loop in a separate thread under the GUI program. However, every attempt gives 'AssertionError: There is no current event loop in thread 'Thread-1'.
I then tried spawning a process with the standard library multiprocessing package that ran the asyncio event loop in another Process. When I try this I don't get any exception but the WebSocket server doesn't start either.
Is it even possible to run an asyncio event loop in a subprocess from another Python program?
Is there even a way to integrate an asyncio event loop into a currently multithreaded/tkinter program?
UPDATE
Below is the actual code I am trying to run for an initial test.
from autobahn.asyncio.websocket import WebSocketServerProtocol
from autobahn.asyncio.websocket import WebSocketServerFactory
import asyncio
from multiprocessing import Process

class MyServerProtocol(WebSocketServerProtocol):
    def onConnect(self, request):
        print("Client connecting: {0}".format(request.peer))

    def onOpen(self):
        print("WebSocket connection open.")

    def onMessage(self, payload, isBinary):
        if isBinary:
            print("Binary message received: {0} bytes".format(len(payload)))
        else:
            print("Text message received: {0}".format(payload.decode('utf8')))
        ## echo back message verbatim
        self.sendMessage(payload, isBinary)

    def onClose(self, wasClean, code, reason):
        print("WebSocket connection closed: {0}".format(reason))

def start_server():
    factory = WebSocketServerFactory("ws://10.241.142.27:6900", debug=False)
    factory.protocol = MyServerProtocol
    loop = asyncio.get_event_loop()
    coro = loop.create_server(factory, '10.241.142.27', 6900)
    server = loop.run_until_complete(coro)
    loop.run_forever()
    server.close()
    loop.close()

websocket_server_process = Process(target=start_server)
websocket_server_process.start()
Most of it is straight from the Autobahn|Python example code for asyncio. If I try to run it as a Process, it doesn't do anything: no client can connect to it, and if I run netstat -a there is no port 6900 in use. If I just call start_server() in the main program, it creates the WebSocket server.
First, you're getting AssertionError: There is no current event loop in thread 'Thread-1' because asyncio requires each thread in your program to have its own event loop, but it will only automatically create an event loop for you in the main thread. So if you call asyncio.get_event_loop once in the main thread, it will automatically create a loop object and set it as the default for you, but if you call it again in a child thread, you'll get that error. Instead, you need to explicitly create/set the event loop when the thread starts:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
Once you've done that, you should be able to use get_event_loop() in that specific thread.
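Putting that together, a minimal sketch of a thread that owns its own loop (run_my_loop is a hypothetical name):

import asyncio
import threading

def run_my_loop():
    # Each thread creates and sets its own event loop before using
    # any asyncio machinery.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    # ... create servers/tasks on this loop here ...
    loop.run_forever()

threading.Thread(target=run_my_loop, daemon=True).start()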
It is possible to start an asyncio event loop in a subprocess started via multiprocessing:
import asyncio
from multiprocessing import Process

@asyncio.coroutine
def coro():
    print("hi")

def worker():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(coro())

if __name__ == "__main__":
    p = Process(target=worker)
    p.start()
    p.join()
Output:
hi
The only caveat is that if you start an event loop in the parent process as well as the child, you need to explicitly create/set a new event loop in the child if you're on a Unix platform (due to a bug in Python). It should work fine on Windows, or if you use the 'spawn' multiprocessing context.
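In that case, the worker from the example above would create its own loop explicitly:

def worker():
    # Create a fresh loop in the child instead of relying on loop
    # state inherited across fork().
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(coro())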
I think it should be possible to start an asyncio event loop in a background thread (or process) of your Tkinter application and have both the tkinter and asyncio event loop run side-by-side. You'll only run into issues if you try to update the GUI from the background thread/process.
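A minimal sketch of that arrangement (the widget names and demo message are made up): the asyncio loop lives in a daemon thread and hands results to tkinter through a thread-safe queue polled with root.after, so widgets are only ever touched from the main thread.

import asyncio
import queue
import threading
import tkinter as tk

gui_queue = queue.Queue()

def asyncio_thread():
    # The background thread owns its own loop; anything meant for the
    # GUI goes through gui_queue rather than touching widgets directly.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.call_later(1, gui_queue.put, 'hello from asyncio')
    loop.run_forever()

root = tk.Tk()
label = tk.Label(root, text='waiting')
label.pack()

def poll_queue():
    # Runs in the tkinter mainloop, so it is safe to update widgets here.
    try:
        label['text'] = gui_queue.get_nowait()
    except queue.Empty:
        pass
    root.after(100, poll_queue)

threading.Thread(target=asyncio_thread, daemon=True).start()
poll_queue()
root.mainloop()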
The answer by @dano might be correct, but it creates a new process, which is unnecessary in most situations.
I found this question on Google because I had the same issue myself. I had written an application where I wanted a websocket API to run off the main thread, and that caused this issue.
I found my alternative solution by simply reading about event loops in the Python documentation, where the asyncio.new_event_loop and asyncio.set_event_loop functions solved the issue.
I didn't use Autobahn but the PyPI websockets library; here's my solution:
import websockets
import asyncio
import threading

class WebSocket(threading.Thread):
    @asyncio.coroutine
    def handler(self, websocket, path):
        name = yield from websocket.recv()
        print("< {}".format(name))
        greeting = "Hello {}!".format(name)
        yield from websocket.send(greeting)
        print("> {}".format(greeting))

    def run(self):
        start_server = websockets.serve(self.handler, '127.0.0.1', 9091)
        eventloop = asyncio.new_event_loop()
        asyncio.set_event_loop(eventloop)
        eventloop.run_until_complete(start_server)
        eventloop.run_forever()

if __name__ == "__main__":
    ws = WebSocket()
    ws.start()
"Is there even a way to integrate an asyncio event loop into a currently multithreaded/tkinter program?"
Yes, run your tkinter program with an asyncio event loop. Proof of concept.
'''Proof of concept integrating asyncio and tk loops.

Terry Jan Reedy

Run with 'python -i' or from IDLE editor to keep tk window alive.
'''
import asyncio
import datetime as dt
import tkinter as tk

loop = asyncio.get_event_loop()
root = tk.Tk()

# Combine 2 event loop examples from BaseEventLoop doc.
# Add button to prove that gui remains responsive between time updates.
# Print statements are only for testing.

def flipbg(widget, color):
    bg = widget['bg']
    print('click', bg, loop.time())
    widget['bg'] = color if bg == 'white' else 'white'

hello = tk.Label(root)
flipper = tk.Button(root, text='Change hello background', bg='yellow',
                    command=lambda: flipbg(hello, 'red'))
time = tk.Label(root)
hello.pack()
flipper.pack()
time.pack()

def hello_world(loop):
    hello['text'] = 'Hello World'
loop.call_soon(hello_world, loop)

def display_date(end_time, loop):
    print(dt.datetime.now())
    time['text'] = dt.datetime.now()
    if (loop.time() + 1.0) < end_time:
        loop.call_later(1, display_date, end_time, loop)
    else:
        loop.stop()

end_time = loop.time() + 10.1
loop.call_soon(display_date, end_time, loop)

# Replace root.mainloop with these 4 lines.
def tk_update():
    root.update()
    loop.call_soon(tk_update)  # or loop.call_later(delay, tk_update)

# Initialize loop before each run_forever or run_until_complete call
tk_update()
loop.run_forever()
I have experimentally run IDLE with those 4 extra lines, with a slowdown only noticeable when syntax highlighting 1000s of lines.