Python SocketServer

How can I call shutdown() in a SocketServer after receiving a certain message ("exit")? As far as I know, the call to serve_forever() will block the server.
Thanks!

Use the source, Luke!
Excerpt from SocketServer.py:
def serve_forever(self, poll_interval=0.5):
    """Handle one request at a time until shutdown.

    Polls for shutdown every poll_interval seconds. Ignores
    self.timeout. If you need to do periodic tasks, do them in
    another thread.
    """
    self.__is_shut_down.clear()
    try:
        while not self.__shutdown_request:
            # XXX: Consider using another file descriptor or
            # connecting to the socket to wake this up instead of
            # polling. Polling reduces our responsiveness to a
            # shutdown request and wastes cpu at all other times.
            r, w, e = select.select([self], [], [], poll_interval)
            if self in r:
                self._handle_request_noblock()
    finally:
        self.__shutdown_request = False
        self.__is_shut_down.set()

def shutdown(self):
    """Stops the serve_forever loop.

    Blocks until the loop has finished. This must be called while
    serve_forever() is running in another thread, or it will
    deadlock.
    """
    self.__shutdown_request = True
    self.__is_shut_down.wait()

No, serve_forever() checks a flag at regular intervals (every 0.5 seconds by default). Calling shutdown() raises this flag and causes serve_forever() to exit.
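To answer the question directly: because shutdown() blocks until serve_forever() has returned, it must not be called from the thread running the loop. A minimal sketch (handler class and address are made up for illustration) that triggers shutdown from a helper thread when a client sends "exit":

import threading
import SocketServer  # the module is named socketserver in Python 3

class ExitHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024).strip()
        if data == b"exit":
            # shutdown() waits for serve_forever() to finish and would
            # deadlock if called from this thread, so run it from a
            # short-lived helper thread instead.
            threading.Thread(target=self.server.shutdown).start()

server = SocketServer.TCPServer(("localhost", 9999), ExitHandler)
server.serve_forever()  # returns once a client sends "exit"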


Raise an exception in parent thread

I have an issue where a method call is blocking and not releasing. Unfortunately, the bug behind it isn't exactly solvable, so the workaround for the moment is to build in a timeout.
I've tried to do this by registering a timer that raises an exception to break out of the blocked call. However, the exception is raised in the timer thread, not the main thread.
It looks like this right now:
from threading import Timer

def timeoutSocket():
    raise InterruptedError

socketDeadlockDetector = Timer(DEADLOCK_TIMEOUT, timeoutSocket)
socketDeadlockDetector.start()

# receive and unpack data
try:
    packet = server.receive()
except InterruptedError:
    print("Interrupted socket receive, continuing")
    continue  # (this snippet runs inside a loop)

socketDeadlockDetector.cancel()
server.receive() is the method that is blocking when it shouldn't. However, when I run this, the socketDeadlockDetector thread interrupts itself, without affecting the original thread.
Is there a way to pass this exception up to the parent?
Timer creates a thread to run the function. Raising an exception there does you no good, because that's not the thread that needs interrupting. When you hit the timeout, you need to cancel whatever is blocking in the other thread. In this case it's a socket, so killing the socket should do it.
import socket
import struct

def timeoutSocket():
    # enable linger with timeout 0 to send RST (reset) on close
    server.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                      struct.pack('ii', 1, 0))
    server.close()
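Wired into the original snippet, the timer now aborts the blocked call instead of raising in its own thread. A sketch using the asker's DEADLOCK_TIMEOUT and server.receive() names; the exact exception raised depends on what server.receive() wraps, but operations on a closed socket typically surface as OSError:

from threading import Timer

socketDeadlockDetector = Timer(DEADLOCK_TIMEOUT, timeoutSocket)
socketDeadlockDetector.start()
try:
    # fails fast once timeoutSocket() closes the socket
    packet = server.receive()
except OSError:
    print("Socket receive timed out, continuing")
finally:
    socketDeadlockDetector.cancel()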

How to detect write failure in asyncio?

As a simple example, consider the network equivalent of /dev/zero, below. (Or more realistically, just a web server sending a large file.)
If a client disconnects early, you get a barrage of log messages:
WARNING:asyncio:socket.send() raised exception.
But I'm not finding any way to catch said exception. The hypothetical server continues reading gigabytes from disk and sending them to a dead socket, with no effort on the client's part, and you've got yourself a DoS attack.
The only thing I've found from the docs is to yield from a read, with an empty string indicating closure. But that's no good here because a normal client isn't going to send anything, blocking the write loop.
What's the right way to detect failed writes, or be notified that the TCP connection has been closed, with the streams API or otherwise?
Code:
from asyncio import *
import logging

@coroutine
def client_handler(reader, writer):
    while True:
        writer.write(bytes(1))
        yield from writer.drain()

logging.basicConfig(level=logging.INFO)
loop = get_event_loop()
coro = start_server(client_handler, '', 12345)
server = loop.run_until_complete(coro)
loop.run_forever()
I did some digging into the asyncio source to expand on dano's answer on why the exceptions aren't being raised without explicitly passing control to the event loop. Here's what I've found.
Calling yield from writer.drain() hands control over to the StreamWriter.drain coroutine. This coroutine checks for and raises any exception that the StreamReaderProtocol set on the StreamReader. But since we passed control to drain, the protocol hasn't had a chance to set the exception yet. drain then hands control over to the FlowControlMixin._drain_helper coroutine, which returns immediately because certain flags haven't been set yet, and control ends up back with the coroutine that called yield from writer.drain().
And so we have gone full circle without giving control to the event loop, which would let it run other coroutines and bubble the exception up to writer.drain().
Yielding before a drain() gives the transport/protocol a chance to set the appropriate flags and exceptions.
Here's a mock up of what's going on, with all the nested calls collapsed:
import asyncio as aio

def set_exception(ctx, exc):
    ctx["exc"] = exc

@aio.coroutine
def drain(ctx):
    if ctx["exc"] is not None:
        raise ctx["exc"]
    return

@aio.coroutine
def client_handler(ctx):
    i = 0
    while True:
        i += 1
        print("write", i)
        # yield  # Uncommenting this allows the loop.call_later call to be scheduled.
        yield from drain(ctx)

CTX = {"exc": None}
loop = aio.get_event_loop()
# Set the exception in 5 seconds
loop.call_later(5, set_exception, CTX, Exception("connection lost"))
loop.run_until_complete(client_handler(CTX))
loop.close()
This should probably be fixed upstream in the streams API by the asyncio developers.
This is a little bit strange, but you can actually allow an exception to reach the client_handler coroutine by forcing it to yield control to the event loop for one iteration:
import asyncio
import logging

@asyncio.coroutine
def client_handler(reader, writer):
    while True:
        writer.write(bytes(1))
        yield  # Yield to the event loop
        yield from writer.drain()

logging.basicConfig(level=logging.INFO)
loop = asyncio.get_event_loop()
coro = asyncio.start_server(client_handler, '', 12345)
server = loop.run_until_complete(coro)
loop.run_forever()
If I do that, I get this output when I kill the client connection:
ERROR:asyncio:Task exception was never retrieved
future: <Task finished coro=<client_handler() done, defined at aio.py:4> exception=ConnectionResetError(104, 'Connection reset by peer')>
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/tasks.py", line 238, in _step
    result = next(coro)
  File "aio.py", line 9, in client_handler
    yield from writer.drain()
  File "/usr/lib/python3.4/asyncio/streams.py", line 301, in drain
    raise exc
  File "/usr/lib/python3.4/asyncio/selector_events.py", line 700, in write
    n = self._sock.send(data)
ConnectionResetError: [Errno 104] Connection reset by peer
I'm really not quite sure why you need to explicitly let the event loop take control for the exception to get through; I don't have time at the moment to dig into it. I assume some bit needs to get flipped to indicate the connection dropped, and calling yield from writer.drain() (which can short-circuit going through the event loop) in a loop prevents that from happening, but I'm not sure. If I get a chance to investigate, I'll update the answer with that info.
The stream based API doesn't have a callback you can specify for when the connection is closed. But the Protocol API does, so use it instead: https://docs.python.org/3/library/asyncio-protocol.html#connection-callbacks
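A minimal sketch of that Protocol-based approach (class name and port are made up for illustration): connection_lost fires as soon as the peer goes away, so the server can stop writing immediately:

import asyncio

class ZeroProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        # start feeding data here, e.g. transport.write(...)

    def connection_lost(self, exc):
        # Called the moment the TCP connection drops; stop sending
        # instead of pumping bytes into a dead socket.
        print("client disconnected:", exc)

loop = asyncio.get_event_loop()
server = loop.run_until_complete(loop.create_server(ZeroProtocol, '', 12345))
loop.run_forever()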

python asyncio run event loop once?

I am trying to understand the asyncio library, specifically its use with sockets. I have written some code in an attempt to gain understanding.
I wanted to run sender and receiver sockets asynchronously. I got to the point where all the data is sent except the last item, but then I have to run the loop one more time. Looking into how to do this, I found this link from Stack Overflow, which I implemented below -- but what is going on here? Is there a better/more sane way to do this than calling stop followed by run_forever?
The documentation for stop() in the event loop is:
Stop running the event loop.
Every callback scheduled before stop() is called will run. Callbacks scheduled after stop() is called will not run. However, those callbacks will run if run_forever() is called again later.
And run_forever()'s documentation is:
Run until stop() is called.
Questions:
why in the world is run_forever the only way to run_once? This doesn't even make sense
Is there a better way to do this?
Does my code look like a reasonable way to program with the asyncio library?
Is there a better way to add tasks to the event loop besides asyncio.async()? loop.create_task gives an error on my Linux system.
https://gist.github.com/cloudformdesign/b30e0860497f19bd6596
The stop(); run_forever() trick works because of how stop is implemented:
def stop(self):
    """Stop running the event loop.

    Every callback scheduled before stop() is called will run.
    Callback scheduled after stop() is called won't. However,
    those callbacks will run if run() is called again later.
    """
    self.call_soon(_raise_stop_error)

def _raise_stop_error(*args):
    raise _StopError
So, next time the event loop runs and executes pending callbacks, it's going to call _raise_stop_error, which raises _StopError. The run_forever loop will break only on that specific exception:
def run_forever(self):
    """Run until stop() is called."""
    if self._running:
        raise RuntimeError('Event loop is running.')
    self._running = True
    try:
        while True:
            try:
                self._run_once()
            except _StopError:
                break
    finally:
        self._running = False
So, by scheduling a stop() and then calling run_forever, you end up running one iteration of the event loop, then stopping once it hits the _raise_stop_error callback. You may have also noticed that _run_once is defined and called by run_forever. You could call that directly, but it can block if there aren't any callbacks ready to run, which may not be desirable. I don't think there's a cleaner way to do this currently; that answer was provided by Andrew Svetlov, who is an asyncio contributor, so he would probably know if there were a better option. :)
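In other words, the trick collapses into a two-line helper (a sketch; it relies on stop() going through call_soon, as shown above):

def run_once(loop):
    # stop() schedules _raise_stop_error via call_soon, so run_forever()
    # first runs the callbacks that are already pending, then breaks out.
    loop.stop()
    loop.run_forever()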
In general, your code looks reasonable, though I think that you shouldn't be using this run_once approach to begin with. It's not deterministic; if you had a longer list or a slower system, it might require more than two extra iterations to print everything. Instead, you should just send a sentinel that tells the receiver to shut down, and then wait on both the send and receive coroutines to finish:
import sys
import time
import socket
import asyncio

addr = ('127.0.0.1', 1064)
SENTINEL = b"_DONE_"

# ... (This stuff is the same)

@asyncio.coroutine
def sending(addr, dataiter):
    loop = asyncio.get_event_loop()
    for d in dataiter:
        print("Sending:", d)
        sock = socket.socket()
        yield from send_close(loop, sock, addr, str(d).encode())
    # Send a sentinel
    sock = socket.socket()
    yield from send_close(loop, sock, addr, SENTINEL)

@asyncio.coroutine
def receiving(addr):
    loop = asyncio.get_event_loop()
    sock = socket.socket()
    try:
        sock.setblocking(False)
        sock.bind(addr)
        sock.listen(5)
        while True:
            data = yield from accept_recv(loop, sock)
            if data == SENTINEL:  # Got a sentinel
                return
            print("Received:", data)
    finally:
        sock.close()

def main():
    loop = asyncio.get_event_loop()
    # add these items to the event loop
    recv = asyncio.async(receiving(addr), loop=loop)
    send = asyncio.async(sending(addr, range(10)), loop=loop)
    loop.run_until_complete(asyncio.wait([recv, send]))

main()
Finally, asyncio.async is the right way to add tasks to the event loop. create_task was added in Python 3.4.2, so if you have an earlier version it won't exist.
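If you need to support both, a small compatibility shim is possible (a sketch reusing the receiving coroutine and addr from the example above; create_task appeared in Python 3.4.2, and asyncio.async was the spelling before that):

loop = asyncio.get_event_loop()
if hasattr(loop, "create_task"):        # Python 3.4.2 and later
    task = loop.create_task(receiving(addr))
else:                                   # earlier 3.4.x releases
    task = asyncio.async(receiving(addr), loop=loop)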

Stopping socketserver.ThreadingMixIn in python

I'm extending socketserver.ThreadingMixIn in Python 3.4 to build my own threaded server, keeping the original callbacks overridden only for logging purposes.
The activation and creation are very simple and follow the Python documentation. The problem I'm having is stopping that server with server.shutdown(): it freezes and doesn't exit. I need a way to shut down the server without using Ctrl-C, because it will also involve some GUI.
The basic server:
class ServerBasic(socketserver.ThreadingMixIn, socketserver.TCPServer):
    logging.basicConfig(level=logging.DEBUG, format='%(name)s: %(message)s',)

    def __init__(self, log_name, server_address, handler_class=ThreadedRequestHandler):
        self.logger = logging.getLogger(log_name)
        self.logger.debug('__init__')
        socketserver.TCPServer.__init__(self, server_address, handler_class)
        return

    def server_activate(self):
        self.logger.debug('server_activate')
        socketserver.TCPServer.server_activate(self)
        return

    def serve_forever(self):
        self.logger.debug('waiting for request')
        self.logger.info('Handling requests, press <Ctrl-C> to quit')
        while True:
            self.handle_request()
        return
The extending class:
class ManagerServer(ServerBasic):
    def __init__(self, log_name, handler_class=T_ManagerRequestHandler):
        self.tup_socket = (ipAddress, WELCOME_PORT)  # tuple of the address and port
        self.log_name = log_name
        return ServerBasic.__init__(self, log_name, self.tup_socket, handler_class=handler_class)
And here is how it is all created and run:
o_serverManager = ManagerServer('Manager_Server', T_ManagerRequestHandler)
t_managerServer = threading.Thread(target=o_serverManager.serve_forever)
t_managerServer.daemon = True
t_managerServer.start()
sleep(15)
o_serverManager.shutdown()
After the shutdown command the program is stuck.
In your override of the serve_forever method, you have removed the condition that breaks the while loop on a shutdown request. The original method looks like:
def serve_forever(self, poll_interval=0.5):
    """Handle one request at a time until shutdown.

    Polls for shutdown every poll_interval seconds. Ignores
    self.timeout. If you need to do periodic tasks, do them in
    another thread.
    """
    self.__is_shut_down.clear()
    try:
        while not self.__shutdown_request:
            r, w, e = _eintr_retry(select.select, [self], [], [],
                                   poll_interval)
            if self in r:
                self._handle_request_noblock()
    finally:
        self.__shutdown_request = False
        self.__is_shut_down.set()
You need to implement a similar mechanism in your override: check for a shutdown request and take the appropriate action. This also requires that your handler is non-blocking.
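Since the __shutdown_request flag is name-mangled and private to the base class, the simplest fix is arguably to keep the logging but delegate the loop itself. A minimal sketch against the ServerBasic class from the question:

def serve_forever(self, poll_interval=0.5):
    self.logger.debug('waiting for request')
    self.logger.info('Handling requests, call shutdown() to quit')
    # The stock BaseServer loop polls the shutdown flag that
    # shutdown() sets, so delegating restores a clean shutdown.
    socketserver.TCPServer.serve_forever(self, poll_interval)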

How to create global error handler in a multi-threaded python application

I am developing a multi-threaded application in Python. I have the following scenario:
There are 2-3 producer threads which communicate with the DB, fetch data in large chunks, and fill up a queue.
There is an intermediate worker which breaks the large chunks fetched by the producer threads into smaller ones and fills up another queue.
There are 5 consumer threads which consume the queue created by the intermediate worker thread.
Data source objects are accessed by the producer threads through their API. These data sources are completely separate, so the producers understand only the presence or absence of the data given out by a data source object.
I create threads of these three types and make the main thread wait for their completion by calling join() on them.
Now for such a setup I want a common error handler which senses the failure of any thread or any exception and decides what to do. For example, if I press Ctrl+C after I start my application, the main thread dies but the producer and consumer threads continue to run. I would like the entire application to shut down once Ctrl+C is pressed. Similarly, if a DB error occurs in a data source module, the producer thread should be notified of it.
This is what I have done so far:
I have created a class ThreadManager; its object is passed to all threads. I have written an error handler method and assigned it to sys.excepthook. This handler should catch exceptions and errors, and then call methods of the ThreadManager class to control the running threads. Here is a snippet:
class Producer(threading.Thread):
    ....
    def produce(self):
        data = dataSource.getData()

class DataSource:
    ....
    def getData(self):
        raise Exception("critical")

def customHandler(exceptionType, value, stackTrace):
    print "In custom handler"

sys.excepthook = customHandler
Now when a thread of the Producer class calls getData() of the DataSource class, an exception is thrown. But this exception is never caught by my customHandler method.
What am I missing? Also, what other strategy can I apply in such a scenario? Please help. Thank you for having enough patience to read all this :)
What you need is a decorator. In essence you are modifying your original function by putting it inside a try-except:
import os

def exception_decorator(func):
    def _function(*args):
        try:
            result = func(*args)
        except:
            print('*** ESC default handler ***')
            os._exit(1)
        return result
    return _function
If your thread function is called myfunc, then you add the following line above your function definition
@exception_decorator
def myfunc():
    pass
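A hedged usage sketch (the thread wiring here is my illustration, not the answerer's), assuming the exception_decorator defined above is in scope:

import threading

@exception_decorator
def worker():
    raise Exception("critical")  # stands in for the DB failure

t = threading.Thread(target=worker)
t.start()
t.join()
# The decorator's except clause runs inside the worker thread and calls
# os._exit(1), so a failure in any wrapped thread takes the whole
# application down.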
Can't you just catch "KeyboardInterrupt" when pressing Ctrl+C and do:
for thread in threading.enumerate():
    thread._Thread__stop()
    thread._Thread__delete()
while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
And if each threaded class has a self.alive flag, you could theoretically set thread.alive = False and have it stop gracefully:
for thread in threading.enumerate():
    thread.alive = False
    time.sleep(5)  # Grace period
    thread._Thread__stop()
    thread._Thread__delete()
while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
example:
import os
from threading import *
from time import sleep
class worker(Thread):
def __init__(self):
self.alive = True
Thread.__init__(self)
self.start()
def run(self):
while self.alive:
sleep(0.1)
runner = worker()
try:
raw_input('Press ctrl+c!')
except:
pass
for thread in enumerate():
thread.alive = False
sleep(1)
try:
thread._Thread__stop()
thread._Thread__delete()
except:
pass
# There will always be 1 thread alive and that's the __main__ thread.
while len(enumerate()) > 1:
sleep(1)
os._exit(0)
Try going about it by changing the internal system exception handler?
import sys

origExcepthook = sys.excepthook

def uberexcept(exctype, value, traceback):
    if exctype == KeyboardInterrupt:
        print "Gracefully shutting down all the threads"
        # enumerate() thingie here.
    else:
        origExcepthook(exctype, value, traceback)

sys.excepthook = uberexcept
