This is probably more a question about threading than about my websocket.
I'm using SimpleWebSocketServer from GitHub (https://github.com/dpallot/simple-websocket-server).
The example works fine:
from SimpleWebSocketServer import SimpleWebSocketServer, WebSocket

class SimpleEcho(WebSocket):
    def handleMessage(self):
        # echo message back to client
        self.sendMessage(self.data)

    def handleConnected(self):
        print self.address, 'connected'

    def handleClose(self):
        print self.address, 'closed'

server = SimpleWebSocketServer('', 8000, SimpleEcho)
server.serveforever()
The server is running; I can connect and send messages.
Now I try to run it as a thread with these classes.
This one is supposed to create many threads, including the one for the WebSocket server:
from websockethread import WebSocketThread

class startManyThreads:
    def __init__(self):
        self.thread1 = WebSocketThread()
        self.thread1.start()

if __name__ == "__main__":
    startManyThreads = startManyThreads()
This class should run as my thread:
import threading
from SimpleWebSocketServer import SimpleWebSocketServer
from webSocketServer import WebSocketServer

class WebSocketThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        server = SimpleWebSocketServer('', 8000, WebSocketServer)
        server.serveforever()
And this is the "customized" echo example:
from SimpleWebSocketServer import SimpleWebSocketServer, WebSocket

class SimpleEcho(WebSocket):
    def handleMessage(self):
        # echo message back to client
        self.sendMessage(self.data)

    def handleConnected(self):
        print self.address, 'connected'

    def handleClose(self):
        print self.address, 'closed'
I have also tried to derive this: class SimpleEcho(WebSocket, threading.Thread):
Any ideas what I'm doing wrong?
Thanks a lot in advance!
Edit:
The result when I run the plain SimpleEcho example is that I get a prompt, can connect via websocket.html (provided on GitHub), and can send and receive messages.
The result when I put it in a thread (any of the 3 ways I tried) is the same behaviour, except that when I try to connect from websocket.html I get "error: undefined". I checked with nmap, and the server seems to be running and listening on port 8000.
Edit 2: Derived a new class from SimpleWebSocketServer
import threading
from SimpleWebSocketServer import SimpleWebSocketServer

class ThreadSimpleWebSocketThread(threading.Thread, SimpleWebSocketServer):
    def __init__(self, serversocket):
        threading.Thread.__init__(self)
        self.serversocket = serversocket

    def serveforever(self):
        SimpleWebSocketServer.serversocket = self.serversocket
        SimpleWebSocketServer.selectInterval = 0.1
        SimpleWebSocketServer.listeners = [self.serversocket]
        super(ThreadSimpleWebSocketThread, self).serveforever()

    def run(self):
        self.serveforever()
The main problem seems to be where you're starting the server. Thread.__init__() runs in the thread of the caller (here, the main thread), not in the new WebSocketThread. Starting the server needs to happen in the Thread.run() method:
class WebSocketThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        server = SimpleWebSocketServer('', 8000, WebSocketServer)
        server.serveforever()
The code inside run() actually runs inside the thread.
Note that because of the Global Interpreter Lock, threads won't improve performance much, and you'll probably need multiprocessing. However, if you just want to offload the I/O waiting, this should work fine.
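For completeness, here is a minimal, self-contained sketch of the run()-based approach (assuming the SimpleWebSocketServer package from the linked repo; the idle loop at the end is only there to keep the main thread alive so the daemon thread keeps serving):
import threading
import time
from SimpleWebSocketServer import SimpleWebSocketServer, WebSocket

class SimpleEcho(WebSocket):
    def handleMessage(self):
        # echo the message back to the client
        self.sendMessage(self.data)

class WebSocketThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True

    def run(self):
        # The server is created and served here, i.e. inside the new thread.
        server = SimpleWebSocketServer('', 8000, SimpleEcho)
        server.serveforever()

if __name__ == '__main__':
    WebSocketThread().start()
    while True:          # keep the main thread alive
        time.sleep(1)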
Edit: From looking at this GitHub project, and rethinking what you're trying to do, this isn't trivial. You'll have to override SimpleWebSocketServer.serveforever() and change it to hand each accepted socket off to its own thread (see here).
Related
I have a simple Python backend using Falcon and websockets. If a client makes a call to an endpoint (e.g., to submit data), all other connected clients are notified via their respective websocket connections, i.e., the backend broadcasts to all currently connected clients. In general, this works just fine. Here's the minimal script for the Falcon app:
import falcon
from db.dbmanager import DBManager
from ws.wsserver import WebSocketServer
from api.resources.liveqa import DemoResource
dbm = DBManager() # PostgreSQL connection pool; works fine with multiple workers
wss = WebSocketServer() # Works only with 1 worker
app = falcon.App()
demo_resource = DemoResource(dbm, wss)
app.add_route('/api/v1/demo', demo_resource)
And here is the code for the websockets server, which I instantiate and pass to the resource class:
import json
import asyncio
import websockets
import threading

class WebSocketServer:
    def __init__(self):
        self.clients = {}
        self.start_server()

    async def handler(self, ws, path):
        session_id = path.split('/')[-1]
        if session_id in self.clients:
            self.clients[session_id].add(ws)
        else:
            self.clients[session_id] = {ws}
        try:
            async for msg in ws:
                pass  # The clients are not supposed to send anything
        except websockets.ConnectionClosedError:
            pass
        finally:
            self.clients[session_id].remove(ws)

    async def send(self, client, msg):
        await client.send(msg)

    def broadcast(self, session_id, msg):
        if session_id not in self.clients:
            return
        for client in self.clients[session_id]:
            try:
                asyncio.run(self.send(client, json.dumps(msg)))
            except:
                pass

    def start_server(self):
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        start_server = websockets.serve(self.handler, host='111.111.111.111', port=5555)
        asyncio.get_event_loop().run_until_complete(start_server)
        threading.Thread(target=asyncio.get_event_loop().run_forever).start()
I use Gunicorn as the server for the backend, and it works if I use just 1 worker. However, if I try --workers 2, I get an error that port 5555 is already in use. I guess this makes sense, as each worker is trying to create a WebSocketServer instance using the same IP/port pair.
What is the best / cleanest / most Pythonic way to address this? I assume that I have to ensure that only one WebSocketServer instance is created. But how?
On a side note, I assume that a DBManager instance gets created for each worker as well. While that doesn't throw an error, since there can be multiple connection pools, I guess ensuring a single DBManager instance is also preferable.
First of all, even running with one worker is potentially problematic, because Gunicorn is primarily a pre-forking server, and forking a process with threads is, in general, unsafe and may lead to unpredictable results.
One way to solve this is to use Gunicorn's server hooks to only start a thread (in this case a WebSocket server) in one of the workers, and only do that after forking. For instance,
import logging
import os
import threading

import falcon
import gunicorn.app.base

logging.basicConfig(
    format='%(asctime)s [%(levelname)s] %(message)s', level=logging.INFO)

class HelloWorld:
    def on_get(self, req, resp):
        resp.media = {'message': 'Hello, World!'}

def do_something(fork_nr):
    pid = os.getpid()
    logging.info(f'in a thread, {pid=}')
    if fork_nr == 1:
        logging.info('we could start a WebSocket server...')
    else:
        logging.info('not the first worker, not starting any servers')

class HybridApplication(gunicorn.app.base.BaseApplication):
    forks = 0

    @classmethod
    def pre_fork(cls, server, worker):
        logging.info(f'about to fork a new worker #{cls.forks}')
        cls.forks += 1

    @classmethod
    def post_fork(cls, server, worker):
        thread = threading.Thread(
            target=do_something, args=(cls.forks,), daemon=True)
        thread.start()

    def __init__(self):
        self.options = {
            'bind': '127.0.0.1:8000',
            'pre_fork': self.pre_fork,
            'post_fork': self.post_fork,
            'workers': 4,
        }
        self.application = falcon.App()
        self.application.add_route('/hello', HelloWorld())
        super().__init__()

    def load_config(self):
        config = {key: value for key, value in self.options.items()
                  if key in self.cfg.settings and value is not None}
        for key, value in config.items():
            self.cfg.set(key.lower(), value)

    def load(self):
        return self.application

if __name__ == '__main__':
    HybridApplication().run()
This simplistic prototype is not infallible, as we should also handle server reloads, the worker getting killed, etc. Speaking of which, you should probably use a worker type other than sync for potentially long-running requests, or set a long timeout, because otherwise the worker can get killed, taking the WebSocket thread with it. Specifying a number of threads should automatically change your worker type to gthread.
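For illustration, a hedged sketch of how the options dict above could be extended ('threads' and 'timeout' are standard Gunicorn settings; the values here are arbitrary):
        self.options = {
            'bind': '127.0.0.1:8000',
            'pre_fork': self.pre_fork,
            'post_fork': self.post_fork,
            'workers': 4,
            'threads': 8,     # a thread count > 1 switches the worker class to gthread
            'timeout': 120,   # more headroom before a busy worker is killed
        }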
Note that here I implemented a custom Gunicorn application, but you could achieve the same effect by specifying hooks via a configuration file.
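A minimal sketch of the configuration-file variant (start_ws_server is a hypothetical placeholder; pre_fork and post_fork are Gunicorn's documented server hooks):
# gunicorn.conf.py -- run with: gunicorn test:app -c gunicorn.conf.py
import threading

workers = 4
forks = 0

def pre_fork(server, worker):
    # Runs in the master just before each fork.
    global forks
    forks += 1

def post_fork(server, worker):
    # Runs in the worker; it sees the counter value inherited at fork time,
    # so only the first forked worker starts the background thread.
    if forks == 1:
        threading.Thread(target=start_ws_server, daemon=True).start()

def start_ws_server():
    ...  # hypothetical placeholder: start the WebSocket server here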
Another option is to use the ASGI flavour of Falcon, and implement even the WebSocket part inside your app:
import asyncio
import logging

import falcon.asgi

logging.basicConfig(
    format='%(asctime)s [%(levelname)s] %(message)s', level=logging.INFO)

class HelloWorld:
    async def on_get(self, req, resp):
        resp.media = {'message': 'Hello, World!'}

    async def on_websocket(self, req, ws):
        await ws.accept()
        logging.info(f'WS accepted {req.path=}')
        try:
            while True:
                await ws.send_media({'message': 'hi'})
                await asyncio.sleep(10)
        finally:
            logging.info(f'WS disconnected {req.path=}')

app = falcon.asgi.App()
app.add_route('/hello', HelloWorld())
Note that Gunicorn itself does not "speak" ASGI, so you would either need to use an ASGI app server, or use Gunicorn as a process manager for Uvicorn workers.
For instance, assuming your file is called test.py, you could run Uvicorn directly as:
pip install uvicorn[standard]
uvicorn test:app
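Or, going the Gunicorn-as-process-manager route mentioned above (uvicorn.workers.UvicornWorker ships with Uvicorn):
pip install gunicorn uvicorn[standard]
gunicorn test:app --workers 4 --worker-class uvicorn.workers.UvicornWorker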
However, if you went the ASGI route, you would need to implement your responders as coroutine functions (async def on_get(...) etc), or run your synchronous DB code in a threadpool executor.
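As a rough sketch of the second option (the resource and its save_submission call are hypothetical stand-ins for your own DB code), a blocking call can be pushed onto a worker thread with asyncio.to_thread so it does not stall the event loop:
import asyncio

class DemoResource:
    def __init__(self, dbm, wss):
        self._dbm = dbm
        self._wss = wss

    async def on_post(self, req, resp):
        data = await req.get_media()
        # Run the blocking DB call in the default threadpool executor
        # (hypothetical save_submission method; asyncio.to_thread needs Python 3.9+).
        await asyncio.to_thread(self._dbm.save_submission, data)
        resp.media = {'status': 'ok'}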
I am trying to create a web server in Python which can be started and stopped using a tkinter GUI. In tkinter I have a button which calls start() and a button which calls stop(). Initially everything works fine: the server starts when I click the start button, and it also stops when I click the stop button. When I try to restart the server using the start button, I get a runtime error:
RuntimeError: threads can only be started once
I believe it has something to do with the fact that I already create the thread in my __init__, and I cannot figure out how to get this to work.
I have read through the threading docs multiple times, but I am struggling to understand it entirely. Any assistance would be greatly appreciated.
Thank you!
import threading
import socketserver
import http.server
import os

class WebServer(object):
    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.handler = http.server.SimpleHTTPRequestHandler
        self.server = socketserver.TCPServer((self.host, self.port), self.handler)
        socketserver.TCPServer.allow_reuse_address = True
        self.server_thread = threading.Thread(target=self.server.serve_forever, name="Server_Thread")
        self.server_thread.setDaemon(True)

    def start(self):
        web_dir = os.path.join(os.path.dirname(__file__), 'www')
        os.chdir(web_dir)
        self.server_thread.start()

    def stop(self):
        os.chdir('..')
        self.server.shutdown()
        self.server.server_close()
As the Python documentation states, the start() method of a Thread object can only be called once.
In your case, you can create a new instance of the Thread object in the start method:
def start(self):
    web_dir = os.path.join(os.path.dirname(__file__), 'www')
    os.chdir(web_dir)
    self.server_thread = threading.Thread(target=self.server.serve_forever, name="Server_Thread")
    self.server_thread.start()
In addition, you may also clear the reference to the thread in the stop method:
self.server_thread = None
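Putting it together, a hedged sketch (note that stop() also calls server_close(), which closes the listening socket, so the TCPServer itself would have to be recreated as well, not just the thread):
def start(self):
    web_dir = os.path.join(os.path.dirname(__file__), 'www')
    os.chdir(web_dir)
    # Recreate the server as well, because stop() closed its listening socket.
    self.server = socketserver.TCPServer((self.host, self.port), self.handler)
    self.server_thread = threading.Thread(
        target=self.server.serve_forever, name="Server_Thread", daemon=True)
    self.server_thread.start()

def stop(self):
    os.chdir('..')
    self.server.shutdown()
    self.server.server_close()
    self.server_thread = None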
I need to create an independent class that provides an RPC server running in its own Python process, using WAMP and Autobahn.
Following some guides found here, I managed to create this server in these classes, saved in the file rpcwampserver.py:
from twisted.internet.defer import inlineCallbacks
from autobahn.twisted.wamp import ApplicationSession, ApplicationRunner
import multiprocessing
from twisted.internet import reactor

class Manager(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        print("session ready")

        def test():
            return u'hello!'

        try:
            yield self.register(test, u'rpc.test')
            print("procedure registered")
        except Exception as e:
            print("could not register procedure: {0}".format(e))

class RPCWampServer:
    def __init__(self):
        self._url = u'ws://localhost:8080/ws'
        self.realm = u'realm'
        self.runner = ApplicationRunner(url=self._url, realm=self.realm,
                                        #debug=True, debug_wamp=True, debug_app=True
                                        )

    def start(self):
        self.runner.run(Manager, start_reactor=False)

class RPC_Wamp_Server:
    def __init__(self):
        server = RPCWampServer()
        server.start()
        multi = multiprocessing.Process(target=reactor.run, args=())
        multi.start()
If RPC_Wamp_Server is imported directly into a file next to rpcwampserver.py, the code works fine:
from rpcwampserver import RPC_Wamp_Server
c=RPC_Wamp_Server()
BUT
If I use this class inside a package located in a different path, the reactor doesn't work:
from mymodules.wamp.rpcwampserver import RPC_Wamp_Server
c=RPC_Wamp_Server()
The class RPC_Wamp_Server is found, but the initialization of the Manager seems to be skipped.
Enabling the debug options (commented out in the code), I see:
Starting factory [...]
and after a while
Stopped factory [...]
EDIT:
Using 127.0.0.1 instead of localhost solved the problem
Today I was working on creating some unit tests for my application: a websocket client.
In the real world, the ws server is an embedded PC on the home network.
Now, for my unit tests, I'd like to create a fake ws server and use it to test the client.
Can you suggest a plug-and-play ws server that I can start inside my unit test setup and use for testing?
I tried to use the Autobahn ws server, but it is not plug-and-play. It should work, but I'm not able to handle it correctly in a separate thread.
My goal is to test the client, not to develop a dummy server.
Can you help me with something easy and ready-to-use?
Thanks in advance,
Salvo
Here is the minimal code I wrote in order to avoid blocking on serve_forever().
I used ws4py as the websocket library.
from wsgiref.simple_server import make_server
from ws4py.websocket import WebSocket
from ws4py.server.wsgirefserver import WSGIServer, WebSocketWSGIRequestHandler
from ws4py.server.wsgiutils import WebSocketWSGIApplication
import threading

class TestWebSocket(WebSocket):
    def received_message(self, message):
        self.send("+OK", False)

class TestServer:
    def __init__(self, hostname='127.0.0.1', port=8080):
        self.server = make_server(hostname,
                                  port,
                                  server_class=WSGIServer,
                                  handler_class=WebSocketWSGIRequestHandler,
                                  app=WebSocketWSGIApplication(handler_cls=TestWebSocket))
        self.server.initialize_websockets_manager()
        self.thread = threading.Thread(target=self.server.serve_forever)
        self.thread.start()
        print("Server started for {}:{}".format(hostname, str(port)))

    def shutdown(self):
        self.server.shutdown()
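A hedged sketch of how this could be used from a unittest fixture (the port number and test body are placeholders):
import unittest

class ClientTest(unittest.TestCase):
    def setUp(self):
        self.ws_server = TestServer(port=8081)

    def tearDown(self):
        self.ws_server.shutdown()

    def test_client_receives_ok(self):
        # connect the websocket client under test to ws://127.0.0.1:8081/ here
        pass

if __name__ == '__main__':
    unittest.main()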
I am creating a robot which is going to be driven by commands received over a TCP connection. Therefore, I will have a robot class with methods (e.g. sense(), drive(), ...) and a class for the TCP connection.
To establish the TCP connection, I looked at examples from Twisted. On the client side, I have written a client.py script for connection handling:
from twisted.internet import reactor, protocol
import random
from eventhook import EventHook
import common
#from Common.socketdataobjects import response

# a client protocol
class EchoClient(protocol.Protocol):
    """Once connected, send a message, then print the result."""

    def connectionMade(self):
        self.transport.write("hello, world!")
        # the server should be notified that the connection to the robot has been established,
        # along with the robot state (position)
        # eventConnectionEstablishedHook.fire()

    def dataReceived(self, data):
        print "Server said:", data
        self.transport.write("Hello %s" % str(random.randint(1, 10)))
        '''
        serverMessage = common.deserializeJson(data)
        command = serverMessage.command
        arguments = serverMessage.arguments
        # here we get, for example, command = "DRIVE"
        # and arguments = {motor1Speed: 50, motor2Speed: 40}
        Instead of the above response, which is used for testing purposes,
        the commands should be extracted from the data and, according to the command,
        the method on the Robot instance should be called.
        When the command execution finishes, self.transport.write() should be called
        to notify the server that the command execution finished.
        '''

    def connectionLost(self, reason):
        print "connection lost"

class EchoFactory(protocol.ClientFactory):
    protocol = EchoClient

    def clientConnectionFailed(self, connector, reason):
        print "Connection failed - goodbye!"
        reactor.stop()

    def clientConnectionLost(self, connector, reason):
        print "Connection lost - goodbye!"
        reactor.stop()

# this connects the protocol to a server running on port 8000
def initializeEventHandlers(connectionEstablishedHook):
    global connection
    connection.established = 0
    global eventConnectionEstablishedHook
    eventConnectionEstablishedHook = connectionEstablishedHook

def main():
    f = EchoFactory()
    reactor.connectTCP("localhost", 8000, f)
    reactor.run()

# this only runs if the module was *not* imported
if __name__ == '__main__':
    main()
Besides this script, I have a robot class:
class Robot(object):
    def __init__(self):
        self.position = (0, 0)

    def drive(self, speedMotor1, speedMotor2, driveTime):
        updateMotor1State(speedMotor1)
        updateMotor2State(speedMotor2)
        time.sleep(driveTime)
        # when the execution has finished, the finish status should be sent to the client
        # in order to inform the server
        return "Finished"

    def sense(self):
        # logic to get the data from the environment
        pass
What I would like to do is receive the data (commands) from the TCP connection and then call the corresponding method on the Robot instance. Some procedures might take longer (e.g. driving), so I tried to use events, but I haven't figured out the appropriate way to communicate between the TCP client and the robot using events:
if __name__ == '__main__':
    robotController = Robot()
    eventController = Controller()
    connectionEstablishedHook = EventHook()
    client.initializeEventHandlers(connectionEstablishedHook)
    eventController.connection = connectionEstablishedHook
    client.main()
I tried to create a ClientMainProgram script, where I wanted to create an instance of the robot and an instance of the TCP client and implement the communication between them using events.
Previously I managed to implement event handling using Michael Foord's events pattern in a simpler example. I would be very thankful if anyone could provide a solution to this question or any similar example that might help solve this problem.
Events are easily represented using regular Python function calls.
For example, if your protocol looks like this:
from twisted.internet.protocol import Protocol

class RobotController(Protocol):
    def __init__(self, robot):
        self.robot = robot

    def dataReceived(self, data):
        for byte in data:
            self.commandReceived(byte)

    def commandReceived(self, command):
        if command == "\x00":
            # drive:
            self.robot.drive()
        elif command == "\x01":
            # sense:
            self.robot.sense()
        ...
(The specifics of the protocol used in this example are somewhat incidental. I picked this protocol because it's very simple and has almost no parsing logic. For your real application I suggest you use twisted.protocols.amp.)
Then all you need to do is make sure the robot attribute is properly initialized. You can do this easily using the somewhat newer endpoint APIs that can often replace use of factories:
from sys import argv

from twisted.internet.endpoints import clientFromString, connectProtocol
from twisted.internet.task import react

def main(reactor, description):
    robot = ...
    endpoint = clientFromString(reactor, description)
    connecting = connectProtocol(endpoint, RobotController(robot))
    def connected(controller):
        ...
    connecting.addCallback(connected)
    return connecting

react(main, argv[1:])
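To give a flavour of the twisted.protocols.amp suggestion above, a hedged sketch (the Drive command and its arguments are made up for illustration):
from twisted.protocols.amp import AMP, Command, Integer, Unicode

class Drive(Command):
    arguments = [('motor1', Integer()), ('motor2', Integer())]
    response = [('status', Unicode())]

class RobotAMP(AMP):
    def __init__(self, robot):
        AMP.__init__(self)
        self.robot = robot

    @Drive.responder
    def drive(self, motor1, motor2):
        # AMP handles the wire format; we just call into the robot and reply.
        self.robot.drive(motor1, motor2, driveTime=1)
        return {'status': u'Finished'}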