What can shut down a websocket connection? - python

I use websocket_server in order to provide a one way (server to client) websocket connection.
I have several threads on the server which query an API at given intervals (while True: ... time.sleep(60)) and then call server.send_message() to update the client. All of this works fine.
From time to time, without any particular reason, I get a crash:
Exception in thread Thread-3:
Traceback (most recent call last):
File "C:\Python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "D:/Dropbox/dev/domotique/webserver.py", line 266, in calendar
server.send_message(client, json.dumps({"calendar": events}))
File "C:\Python35\lib\site-packages\websocket_server\websocket_server.py", line 71, in send_message
self._unicast_(client, msg)
File "C:\Python35\lib\site-packages\websocket_server\websocket_server.py", line 119, in _unicast_
to_client['handler'].send_message(msg)
File "C:\Python35\lib\site-packages\websocket_server\websocket_server.py", line 194, in send_message
self.send_text(message)
File "C:\Python35\lib\site-packages\websocket_server\websocket_server.py", line 240, in send_text
self.request.send(header + payload)
BrokenPipeError: [WinError 10058] A request to send or receive data was disallowed because the socket had already been shut down in that direction with a previous shutdown call
There is no shutdown call in my code. What else can shut a websocket down?

The WebSocket client can ask the server to close the connection (or directly close it). From the library's code:
if not b1:
    logger.info("Client closed connection.")
    self.keep_alive = 0
    return
if opcode == CLOSE_CONN:
    logger.info("Client asked to close connection.")
    self.keep_alive = 0
    return
You could check self.keep_alive to know if the socket is still open.
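Since a client can go away between polls, another practical guard (not from the library's documentation, just a sketch) is to wrap each send in a try/except so a dead client doesn't kill the polling thread:
# Sketch: tolerate clients that have already shut down their side of the socket.
def safe_send(server, client, payload):
    try:
        server.send_message(client, payload)
    except BrokenPipeError:
        # The client's socket was shut down; skip this update and carry on.
        print("Client %s disconnected, skipping update" % client.get('id'))

# In the polling thread, instead of calling server.send_message() directly:
# safe_send(server, client, json.dumps({"calendar": events}))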

Related

How to restart a python script when it crashes but doesn't die?

I have a Python script that needs to run continuously on a Linux server; it connects to a WebSocket and retrieves a realtime feed of data.
After a dozen hours or so, the script crashes but doesn't exit: it displays an error message about losing the connection to the server, and I have to manually Ctrl+C and run it again.
I am looking for a way to automatically (and instantly) restart the Python script once it crashes. How can one achieve this?
Edit: this is the message I get when the script crashes
Server disconnected.
disconnect handler error
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/engineio/client.py", line 496, in _trigger_event
return self.handlers[event](*args)
File "/usr/local/lib/python3.8/dist-packages/socketio/client.py", line 632, in _handle_eio_disconnect
self._trigger_event('disconnect', namespace=n)
File "/usr/local/lib/python3.8/dist-packages/socketio/client.py", line 550, in _trigger_event
return self.handlers[namespace][event](*args)
File "python_script.py", line 29, in on_disconnect
quit()
File "/usr/lib/python3.8/_sitebuiltins.py", line 26, in __call__
raise SystemExit(code)
SystemExit: None
Here is the on_disconnect handler code:
@socketio_client.on('disconnect', namespace='/streaming')
def on_disconnect():
    print('Server disconnected.')
    quit()
I think I may have to call an external script to start a new instance of the Python script.
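One way to do that, sketched below, is a tiny supervisor loop that relaunches the script whenever the process exits; the script name and interpreter path are placeholders:
import subprocess
import time

# Hypothetical wrapper: keeps python_script.py running, restarting it on exit.
while True:
    exit_code = subprocess.call(["python3", "python_script.py"])  # blocks until the script terminates
    print("python_script.py exited with code %s, restarting..." % exit_code)
    time.sleep(1)  # brief pause so a crash loop doesn't spin the CPU
A process manager such as systemd (Restart=always) or supervisord can do the same job without extra code, but only if the script actually exits when it fails.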

Python Correct Way to Use asyncio streams to open a connection, send and receive multiple transmissions, then close connection gracefully

I am asking where either my thought process or my code is incorrect with respect to using asyncio client streams to send data and receive responses from a server. When I call the method that disconnects the client, an exception is thrown. I am learning Python asyncio and ran into exceptions while testing how to close the client connection. I am trying to 1) create a client connection to a server, 2) leave the client connection open so that it can be used across multiple send/receive cycles, and 3) close the client connection gracefully when complete.
This is the class that contains the asyncio methods to create the stream writer.
class hl7_client_sender:
    SB = b'\x1B'
    EB = b'\x1C'
    CR = b'\x0D'

    def __init__(self, address, port, timeout=-1, retry=3.0):
        self._resend = 0
        self._timeout = timeout
        self._retry = retry
        # self._reader, self._writer = await asyncio.open_connection(address, port)
        self._address = address
        self._port = port
        self._writer = None
        self._reader = None

    async def connect(self):
        self._reader, self._writer = await asyncio.open_connection(self._address, self._port)

    async def disconnect(self):
        await self._writer.wait_closed()
And this is the code in my driver where the exception occurs during the call to disconnect:
# test send and respond
import asyncio
import string
import unicodedata
import simple_hl7_client
import time

## open a connection, sleep a few seconds, then close ##
myclient = simple_hl7_client.hl7_client_sender('192.168.226.128', 54321)
asyncio.run(myclient.connect())
time.sleep(3)
asyncio.run(myclient.disconnect())
The exception occurs during the call to asyncio.run(myclient.disconnect()).
This is the exception:
Traceback (most recent call last):
File ".\test_simple_hl7_client.py", line 11, in <module>
asyncio.run(myclient.disconnect())
File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\runners.py", line 43, in run
return loop.run_until_complete(main)
File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\base_events.py", line 583, in run_until_complete
return future.result()
File "D:\data\FromOldPC\code\ASYNCIOTESTING\simple_hl7_client.py", line 23, in disconnect
self._writer.close()
File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\streams.py", line 317, in close
return self._transport.close()
File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\selector_events.py", line 663, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\base_events.py", line 687, in call_soon
self._check_closed()
File "C:\Users\billg\AppData\Local\Programs\Python\Python37\lib\asyncio\base_events.py", line 479, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
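For what it's worth, each asyncio.run() call creates its own event loop and closes it when the coroutine returns, so the writer opened inside the first run() belongs to a loop that is already closed by the time disconnect() runs. Below is a minimal sketch that keeps the whole lifecycle inside one loop; the async main() wrapper is an assumption, and it presumes disconnect() closes the writer before awaiting wait_closed():
import asyncio
import simple_hl7_client

async def main():
    myclient = simple_hl7_client.hl7_client_sender('192.168.226.128', 54321)
    await myclient.connect()
    await asyncio.sleep(3)        # the connection stays open for further send/receive cycles
    await myclient.disconnect()   # assumed to call self._writer.close() then await self._writer.wait_closed()

asyncio.run(main())               # one event loop for connect, use, and disconnect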

Flask/HTTP - tell the client that the request needs some more time to complete

I have a web service (REST) where one request might take up to 30 seconds to return an answer (lots of calculation). There is a risk that, during the calculation, the client's web browser aborts(?) the existing connection and retries. Here is the console output on the server side:
Exception happened during processing of request from ('127.0.0.1', 53209)
Traceback (most recent call last):
File "C:\Users\tmx\Anaconda2\lib\SocketServer.py", line 290, in _handle_request_noblock
self.process_request(request, client_address)
File "C:\Users\tmx\Anaconda2\lib\SocketServer.py", line 318, in process_request
self.finish_request(request, client_address)
File "C:\Users\tmx\Anaconda2\lib\SocketServer.py", line 331, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Users\tmx\Anaconda2\lib\SocketServer.py", line 654, in __init__
self.finish()
File "C:\Users\tmx\Anaconda2\lib\SocketServer.py", line 713, in finish
self.wfile.close()
File "C:\Users\tmx\Anaconda2\lib\socket.py", line 283, in close
self.flush()
File "C:\Users\tmx\Anaconda2\lib\socket.py", line 307, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 10053] An established connection was aborted by the software in your host machine
One option I thought of is to somehow notify the client that "I'm alive, but the request still needs some more time", or to somehow set the timeout on the server side. What are the possibilities?
It's difficult to run code in Flask after you've already returned some data. Your options are to either use something like a task queue (see Celery), or to yield your response in multiple parts.
Views in Flask can return strings, but they can also return iterables that contain strings. So you could return "abc", ["abc"], or a generator that will yield "abc". If you do your processing between yields, data will get sent to the client while the request is still running.
Take a look at the following example:
from time import sleep
from flask import Flask, Response

app = Flask(__name__)

def generator_that_does_the_calculation():
    sleep(1)
    yield "I'm alive, but I need some time\n"
    sleep(1)
    yield "Still alive here\n"
    sleep(1)
    yield "Done\n"

@app.route('/calculate')
def calculate():
    # Flask streams each yielded chunk to the client as it is produced.
    return Response(generator_that_does_the_calculation())
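On the client side the chunks can be read as they arrive rather than waiting for the whole response; here is a quick sketch using the requests library (the URL is a placeholder, and whether chunks arrive immediately also depends on the WSGI server's buffering):
import requests

# Hypothetical client for the /calculate endpoint above.
resp = requests.get("http://localhost:5000/calculate", stream=True)
for line in resp.iter_lines():
    # Each string yielded by the generator shows up as soon as the server sends it.
    print(line.decode())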

python xmlrpc timeout error

I am using xmlrpc to contact a local server. On the client side, the following socket timeout error sometimes happens, and it's not a consistent error.
Why is it happening? What could be the reason for the socket timeout?
<class 'socket.timeout'>: timed out
args = ('timed out',)
errno = None
filename = None
message = 'timed out'
strerror = None
The traceback on the server side is as follows:
Exception happened during processing of request from ('127.0.0.1', 34855)
Traceback (most recent call last):
File "/usr/lib/python2.4/SocketServer.py", line 222, in handle_request
self.process_request(request, client_address)
File "/usr/lib/python2.4/SocketServer.py", line 241, in process_request
self.finish_request(request, client_address)
File "/usr/lib/python2.4/SocketServer.py", line 254, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python2.4/SocketServer.py", line 521, in __init__
self.handle()
File "/usr/lib/python2.4/BaseHTTPServer.py", line 314, in handle
self.handle_one_request()
File "/usr/lib/python2.4/BaseHTTPServer.py", line 308, in handle_one_request
method()
File "/usr/lib/python2.4/SimpleXMLRPCServer.py", line 441, in do_POST
self.send_response(200)
File "/usr/lib/python2.4/BaseHTTPServer.py", line 367, in send_response
self.send_header('Server', self.version_string())
File "/usr/lib/python2.4/BaseHTTPServer.py", line 373, in send_header
self.wfile.write("%s: %s\r\n" % (keyword, value))
File "/usr/lib/python2.4/socket.py", line 256, in write
self.flush()
File "/usr/lib/python2.4/socket.py", line 243, in flush
self._sock.sendall(buffer)
error: (32, 'Broken pipe')
I killed the server and restarted it. It's working fine now.
What could be the reason?
My machine's RAM was filled up by a process last night and came back to normal this morning.
Could this error be because of some swapping of processes?
Looks like the client socket is timing out while waiting for the server to respond. Is it possible that your server sometimes takes a long time to respond? Also, if the server is causing the machine to go into swap, that would slow it down, making a timeout possible.
If I remember right, a socket timeout is not set by xmlrpc in Python. Are you calling socket.setdefaulttimeout somewhere in your code?
If it is expected that your server will occasionally take a long time, then you could set a higher timeout value using the call above.
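For example, a rough sketch of raising the default socket timeout on the client before building the proxy (the 120-second value, the URL, and the method name are placeholders):
import socket
import xmlrpclib

socket.setdefaulttimeout(120)  # affects sockets created after this point
proxy = xmlrpclib.ServerProxy('http://localhost:8000')
result = proxy.some_method()   # hypothetical call to the local server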
HTH

tornado IOError "Stream is closed" on request finish()

I'm using tornado 2.0, and occasionally when I call self.finish() to end an asynchronous request, I'll get an IOError with the message "Stream is closed". It looks as though this happens when the client ends a request (i.e. by navigating to another page) prior to the server calling finish(). Is this expected behavior and something my code just needs to handle? I found this bug from a year ago that suggests this is NOT something client code should be handling: https://github.com/facebook/tornado/issues/81. Is this indicative of a bug in my code, and if so, what are the likely causes?
Stacktrace:
Traceback (most recent call last):
File "my_code.py", line 260, in my_method
self.finish()
File "/usr/lib/python2.6/site-packages/tornado/web.py", line 634, in finish
self.request.finish()
File "/usr/lib/python2.6/site-packages/tornado/httpserver.py", line 555, in finish
self.connection.finish()
File "/usr/lib/python2.6/site-packages/tornado/httpserver.py", line 349, in finish
self._finish_request()
File "/usr/lib/python2.6/site-packages/tornado/httpserver.py", line 372, in _finish_request
self.stream.read_until(b("\r\n\r\n"), self._header_callback)
File "/usr/lib/python2.6/site-packages/tornado/iostream.py", line 137, in read_until
self._check_closed()
File "/usr/lib/python2.6/site-packages/tornado/iostream.py", line 403, in _check_closed
raise IOError("Stream is closed")
IOError: Stream is closed
self.finish() is called to end the asynchronous request, and some functions such as self.render() will call self.finish() for you.
If you call self.finish() after the connection is already closed, it will cause this error.
So you can check whether you call any function that finishes the connection before self.finish(), or you can do it like this:
if not self._finished:
    # if the request has already been finished, finish() won't be called again
    self.finish()
else:
    pass
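Another defensive option is to treat the IOError itself as the signal that the client has already gone away; this is only a sketch, with an illustrative handler name and log message:
import logging
import tornado.web

class MyHandler(tornado.web.RequestHandler):
    def my_method(self):
        try:
            self.finish()
        except IOError:
            # The client closed the connection before finish(); nothing left to send.
            logging.warning("Client closed connection before finish()")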
