Parent server fork needs to exit in a timely fashion - Python

I have this server draft that forks a new child for each new client connection. Then, depending on the client's command, the child server does some work inside the function handler(connection).
In the meantime, I want to be able to stop the parent server, but first let the parent wait for all working children.
The question is: where should I place this signal call to handle the Ctrl+C keyboard interrupt?
signal.signal(signal.SIGINT, signal_handler)
import os
import signal
import socket
import sys

# HOST, PORT and socksize are defined elsewhere in the real script

children_list = []

conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conn.bind((HOST, PORT))
conn.listen(5)
print("Listening on TCP port %s" % PORT)

def reaper(pids):
    # reap any children that have already exited, without blocking
    while children_list:
        pid, stat = os.waitpid(0, os.WNOHANG)
        if not pid:
            break
        pids.remove(pid)

def handler(connection):
    cmd = connection.recv(socksize)

def signal_handler(signal, frame):
    print("You pressed Ctrl+C!")
    sys.exit(0)

def accept():
    while 1:
        global connection
        connection, address = conn.accept()
        print("welcome new client!")
        reaper(children_list)
        pid = os.fork()
        if pid:  # parent
            children_list.append(pid)
            connection.close()
        else:  # child
            handler(connection)

accept()

Anywhere you want, as long as it's after you define signal_handler and before you call accept.
I'm not sure that signal handler is actually what you want. What you're actually doing will, on most platforms, exit immediately, just as you're asking it to, causing the children to be reparented to init or launchd or similar. But you want to actually wait for all children. So, you can't call exit.
In POSIX, you're not allowed to call waitpid inside a signal handler. Python signal handlers aren't real signal handlers, so this might not be relevant—but I don't think that fact is guaranteed anywhere. If you want to take the risk, you could try waiting for all of the children right there in signal_handler before calling exit and see if it works on your platform.
However, the standard POSIX way to do it is to have the signal handler set some global flag and return. The main program will then wake up from accept or whatever other blocking call it's waiting on with an EINTR error, at which point it can check the flag—if true, time to wait on the children and quit; otherwise, go back into the loop and accept again.
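For illustration, here is a minimal sketch of that flag-and-EINTR pattern, reusing conn and children_list from the question. The shutdown_requested name and the errno handling are additions of mine, not a drop-in fix:

import errno

shutdown_requested = False

def signal_handler(signum, frame):
    # only record the request; do the real work back in the main loop
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGINT, signal_handler)

while not shutdown_requested:
    try:
        connection, address = conn.accept()
    except socket.error as e:
        if e.errno == errno.EINTR:
            continue  # accept was interrupted by the signal; re-check the flag
        raise
    # ... reap, fork and handle the connection as before ...

# flag set: wait for all children (blocking), then exit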
But all of this shouldn't be necessary anyway. The default SIGINT handler should just raise a KeyboardInterrupt, which means that—if you don't install a custom signal handler—all you really need to do is catch that exception. In other words, just replace the last line with this:
try:
    accept()
except KeyboardInterrupt:  # or maybe BaseException
    # wait for children, blocking instead of WNOHANG of course
    sys.exit(0)
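For completeness, a rough sketch of what that blocking wait might look like, reusing children_list from the question (reap_all is a helper name introduced here, not part of the original code):

def reap_all(pids):
    # block until every forked child has exited
    for pid in list(pids):
        os.waitpid(pid, 0)
        pids.remove(pid)

try:
    accept()
except KeyboardInterrupt:
    reap_all(children_list)
    sys.exit(0)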

Related

Raise an exception in parent thread

I have an issue where a method call is blocking and never returning. Unfortunately, the underlying bug isn't something I can fix right now, so the workaround at the moment is to build in a timeout.
I've tried to do this by registering a timer and having it raise an exception to break out of the blocked call. However, that raises the exception in the timer thread, not the main thread.
It looks like this right now:
from threading import Timer

def timeoutSocket():
    raise InterruptedError

socketDeadlockDetector = Timer(DEADLOCK_TIMEOUT, timeoutSocket)
socketDeadlockDetector.start()

# receive and unpack data
try:
    packet = server.receive()
except InterruptedError:
    print("Interrupted socket receive, continuing")
    continue

socketDeadlockDetector.cancel()
server.receive() is the method that is blocking when it shouldn't. However, when I run this, the socketDeadlockDetector thread interrupts itself, without affecting the original thread.
Is there a way to pass this exception up to the parent?
Timer creates a thread to run the function. It doesn't do you any good to raise an exception there, because that's not the thread that needs interrupting. When you hit the timeout, you need to cancel whatever is blocking in the other thread. In this case it's a socket, so killing the socket should do.
import socket
import struct

def timeoutSocket():
    # enable linger with timeout 0 to send RESET on close
    server.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
    server.close()
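For reference, a hedged sketch of how the timer and this callback might fit together, keeping DEADLOCK_TIMEOUT and server from the question; the exact exception that server.receive() raises once the socket is killed depends on your socket wrapper:

socketDeadlockDetector = Timer(DEADLOCK_TIMEOUT, timeoutSocket)
socketDeadlockDetector.start()
try:
    packet = server.receive()  # now aborts once the watchdog closes the socket
except OSError:  # or socket.error / whatever your receive() raises on a dead socket
    print("Receive aborted by watchdog timer")
finally:
    socketDeadlockDetector.cancel()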

Threads and twisted cannot be stopped by Ctrl+C

I installed the signal handlers in the main method, but when I pressed Ctrl+C while the process was running, it wasn't stopped:

exceptions.SystemExit: 0
^CKilled by user
Unhandled Error

EventTrigger and MemoryInfo are classes that inherit from threading.Thread, and HttpStreamClient is a class that inherits from twisted.reactor.
How can I kill my process with Ctrl+C? Thanks.
Code
def signal_handler(*args):
    print("Killed by user")
    # teardown()
    sys.exit(0)

def install_signal():
    for sig in (SIGABRT, SIGILL, SIGINT, SIGSEGV, SIGTERM):
        signal(sig, signal_handler)

def main():
    try:
        global cgi, config
        install_signal()
        config = Config().read_file(sys.argv[1])[0]
        init_export_folder()
        setup_logging()
        threads = [
            EventTrigger(config),
            MemoryInfo(config),
        ]
        for thr in threads:
            thr.setDaemon(True)
            thr.start()
        HttpStreamClient(config).run()
        for thr in threads:
            thr.join()
    except BaseException as e:
        traceback.print_exc(file=sys.stdout)
        raise e
I think your problem might be the forceful way in which you are terminating the process.
While using twisted, you should call reactor.stop() to get the initial run call to stop blocking.
Change your signal_handler to shut down the reactor:
def signal_handler(*args):
    print("Killed by user")
    reactor.stop()
Your threads could still keep the process alive. Thread.join doesn't forcefully stop a thread, which in general is never really a good idea. If EventTrigger or MemoryInfo are still running the thr.join will block. You will need a mechanism to stop threads. Maybe take a look here.
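For example, one common mechanism (a sketch of mine, not your actual EventTrigger/MemoryInfo code) is to have each worker thread poll a threading.Event that the signal handler sets:

import threading

stop_event = threading.Event()

class EventTrigger(threading.Thread):  # sketch only; your real class differs
    def run(self):
        while not stop_event.is_set():
            do_some_work()        # hypothetical unit of work
            stop_event.wait(1.0)  # sleep, but wake up early on shutdown

def signal_handler(*args):
    print("Killed by user")
    stop_event.set()   # ask the worker threads to finish their loops
    reactor.stop()     # unblock the twisted reactor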
sys.exit() raises a Python exception; I'm pretty sure raising an exception in a signal handler does not do much. Either call reactor.stop() as Alex says or use os._exit(0). Be aware that using os._exit(0) will terminate the process without further ado.

Can't catch SIGINT in multithreaded program

I've seen many topics about this particular problem, but I still can't figure out why I'm not catching a SIGINT in my main thread.
Here is my code:
def connect(self, retry=100):
    tries = retry
    logging.info('connecting to %s' % self.path)
    while True:
        try:
            self.sp = serial.Serial(self.path, 115200)
            self.pileMessage = pilemessage.Pilemessage()
            self.pileData = pilemessage.Pilemessage()
            self.reception = reception.Reception(self.sp, self.pileMessage, self.pileData)
            self.reception.start()
            self.collisionlistener = collisionListener.CollisionListener(self)
            self.message = messageThread.Message(self.pileMessage, self.collisionlistener)
            self.datastreaminglistener = dataStreamingListener.DataStreamingListener(self)
            self.datastreaming = dataStreaming.Data(self.pileData, self.datastreaminglistener)
            return
        except serial.serialutil.SerialException:
            logging.info('retrying')
            if not retry:
                raise SpheroError('failed to connect after %d tries' % (tries - retry))
            retry -= 1

def disconnect(self):
    self.reception.stop()
    self.message.stop()
    self.datastreaming.stop()
    while not self.pileData.isEmpty():
        self.pileData.pop()
    self.datastreaminglistener.remove()
    while not self.pileMessage.isEmpty():
        self.pileMessage.pop()
    self.collisionlistener.remove()
    self.sp.close()
if __name__ == '__main__':
    import time
    try:
        logging.getLogger().setLevel(logging.DEBUG)
        s = Sphero("/dev/rfcomm0")
        s.connect()
        s.set_motion_timeout(65525)
        s.set_rgb(0, 255, 0)
        s.set_back_led_output(255)
        s.configure_locator(0, 0)
    except KeyboardInterrupt:
        s.disconnect()
In the main function I call connect(), which launches threads over which I don't have direct control.
When I launch this script I would like to be able to stop it by hitting Ctrl+C, which should call the disconnect() function to stop all the other threads.
In the code I provided it doesn't work, because there is no thread left in the main function. I already tried putting all the instructions from main() in a thread with a while loop, without success.
Is there a simple way to solve my problem?
Thanks
Your indentation is messed up, but there's enough to go on.
Your main thread isn't catching SIGINT because it's not alive. There is nothing that stops your main thread from continuing past the try block, seeing no more code, and closing up shop.
I am not familiar with Sphero. I just attempted to google its docs and was linked to a bunch of 404 pages, so I'll tell you what you would normally do in a threaded environment - join your threads to the main thread so that the main thread can't finish execution before the worker threads.
for t in my_thread_list:
    t.join()  # main thread can't get past here until all the threads finish
If your Sphero object doesn't provide join-like functionality, you could hack something in that blocks, e.g.
raw_input('Press Enter to disconnect')
s.disconnect()
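One extra caveat if you go the join route: on Python 2 a plain join() with no timeout can keep the main thread from ever seeing Ctrl+C, so a common workaround (sketched here with the my_thread_list name from above) is to join in short slices:

try:
    while any(t.is_alive() for t in my_thread_list):
        for t in my_thread_list:
            t.join(0.5)  # short timeout so KeyboardInterrupt is noticed promptly
except KeyboardInterrupt:
    s.disconnect()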

How to properly handle system shutdown (and SIGTERM) in Python so the process can finish its job?

Basic need: I have a Python daemon that calls another program through os.system. I want to handle system shutdown or SIGTERM properly, so that the called program can return before the daemon exits.
What I've already tried: an approach using signal:
import signal, time

def handler(signum=None, frame=None):
    print 'Signal handler called with signal', signum
    time.sleep(3)
    # here check if process is done
    print 'Wait done'

signal.signal(signal.SIGTERM, handler)

while True:
    time.sleep(6)
The use of time.sleep doesn't seem to work and the second print is never called.
I've read a few words about atexit.register(handler) instead of signal.signal(signal.SIGTERM, handler), but nothing is called on kill.
Your code almost works; you just forgot to exit after cleaning up.
We often need to catch various other signals such as INT, HUP and QUIT, but not so much with daemons.
import sys, signal, time

def handler(signum=None, frame=None):
    print 'Signal handler called with signal', signum
    time.sleep(1)  # here check if process is done
    print 'Wait done'
    sys.exit(0)

for sig in [signal.SIGTERM, signal.SIGINT, signal.SIGHUP, signal.SIGQUIT]:
    signal.signal(sig, handler)

while True:
    time.sleep(6)
On many systems, ordinary processes don't have much time to clean up during shutdown. To be safe, you could write an init.d script to stop your daemon and wait for it.
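If the thing you need to wait for is the program launched by the daemon, one hedged alternative to os.system is subprocess.Popen, which gives the handler a concrete child to wait on ('some-program' below is a placeholder for your real command):

import signal, subprocess, sys, time

child = subprocess.Popen(['some-program'])  # instead of os.system('some-program')

def handler(signum=None, frame=None):
    print 'Signal handler called with signal', signum
    if child.poll() is None:  # still running?
        child.wait()          # let the called program finish
    print 'Wait done'
    sys.exit(0)

signal.signal(signal.SIGTERM, handler)

while True:
    time.sleep(6)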

Dbus/GLib Main Loop, Background Thread

I'm starting out with DBus and event-driven programming in general. The service that I'm trying to create consists of three parts, but two of them are really "server" things:
1) The actual DBus server talks to a remote website over HTTPS, manages sessions, and conveys info the clients.
2) The other part of the service calls a keep alive page every 2 minutes to keep the session active on the external website
3) The clients make calls to the service to retrieve info from the service.
I found some simple example programs. I'm trying to adapt them to prototype #1 and #2. Rather than building separate programs for both, I thought I could run them in a single, two-threaded process.
The problem that I'm seeing is that I call time.sleep(X) in my keep alive thread. The thread goes to sleep, but won't ever wake up. I think that the GIL isn't released by the GLib main loop.
Here's my thread code:
class Keepalive(threading.Thread):
    def __init__(self, interval=60):
        super(Keepalive, self).__init__()
        self.interval = interval
        bus = dbus.SessionBus()
        self.remote = bus.get_object("com.example.SampleService", "/SomeObject")

    def run(self):
        while True:
            print('sleep %i' % self.interval)
            time.sleep(self.interval)
            print('sleep done')
            reply_status = self.remote.keepalive()
            if reply_status:
                print('Keepalive: Success')
            else:
                print('Keepalive: Failure')
From the print statements, I know that the sleep starts, but I never see "sleep done."
Here is the main code:
if __name__ == '__main__':
    try:
        dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
        session_bus = dbus.SessionBus()
        name = dbus.service.BusName("com.example.SampleService", session_bus)
        object = SomeObject(session_bus, '/SomeObject')
        mainloop = gobject.MainLoop()
        ka = Keepalive(15)
        ka.start()
        print('Begin main loop')
        mainloop.run()
    except Exception as e:
        print(e)
    finally:
        ka.join()
Some other observations:
I see the "begin main loop" message, so I know it's getting control. Then, I see "sleep %i," and after that, nothing.
If I ^C, then I see "sleep done." After ~20 seconds, I get an exception from self.run() that the remote application didn't respond:
DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
What's the best way to run my keep alive code within the server?
Thanks,
You have to explicitly enable multithreading when using gobject by calling gobject.threads_init(). See the PyGTK FAQ for background info.
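Concretely, that is a single call near the top of your main block, before the Keepalive thread or the main loop are started (sketch):

import gobject

gobject.threads_init()  # enable GLib thread support before any threads run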
Besides that, for the purpose you're describing, glib timeouts seem to be a better fit. Use them as follows:
# Enable timer
self.timer = gobject.timeout_add(time_in_ms, self.remote.keepalive)
# Disable timer
gobject.source_remove(self.timer)
This calls the keepalive function every time_in_ms (milli)seconds. Further details, again, can be found at the PyGTK reference.
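One caveat: a glib timeout callback is only re-scheduled if it returns True, so passing self.remote.keepalive directly will stop firing if it ever returns a falsy value. A small wrapper (keepalive_cb is a name introduced here, not part of your code) avoids that:

def keepalive_cb():
    reply_status = remote.keepalive()  # 'remote' is the dbus proxy from the question
    print('Keepalive: %s' % ('Success' if reply_status else 'Failure'))
    return True  # returning True keeps the timeout firing; False/None cancels it

timer_id = gobject.timeout_add(15 * 1000, keepalive_cb)  # every 15 seconds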
