I'm writing a server program in Python that uses the following workflow:
1) Start a daemon
2) Start a server socket and listen for incoming connections
3) When an incoming socket is accepted successfully, fork a new process to handle the connection, closing the client socket in the child and the server socket in the daemon.
When I register a signal handler for SIGCHLD in the daemon process to reap child processes (regardless of the content of the handler) and run the server, the daemon crashes when it receives SIGCHLD. I can't for the life of me figure out why, because for whatever reason logging to syslog won't work for me, and I have no other way of debugging this. I'm using PyCharm and it has no way to debug forked processes. How can I debug this problem? What could be causing the program to fail on invocation of the SIGCHLD handler?
I'm using Python 3.4 on Mac OS X 10.8.
As it turns out, I was using an incorrect signature for my signal handling function: I was using def my_handler() instead of def my_handler(signum, frame), as per this thread.
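For reference, a minimal sketch of a handler with the correct signature, which also reaps exited children without blocking (the handler body is illustrative, not the original poster's code):

import os
import signal

def my_handler(signum, frame):
    # reap every child that has already exited, without blocking
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except OSError:
            break          # no children left to wait for
        if pid == 0:
            break          # children exist, but none have exited yet

signal.signal(signal.SIGCHLD, my_handler)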
I test a Java application with QF-Test. I need to prove that the HMI is stopped at shutdown.
In QF-Test, I created a Jython procedure which tries to send a socket message to the HMI; if it can't, that means the HMI is stopped and the test passes. Here is the Jython script:
import threading
import time

rc.setLocal("returnValue", False)
for i in range(50):
    time.sleep(0.5)
    try:
        # here we try to send a socket message to the HMI
        rc.toSUT("client", vars)
    except:
        # there was an exception trying to send, so the HMI is shut down: test OK
        rc.setLocal("returnValue", True)
        break
It seems that the QF-Test javaagent used to connect my Java program to QF-Test prevents my application from being fully killed. Do you have an idea how to prove that my HMI is killed in a QF-Test procedure?
You should avoid any communication with the SUT while it is shutting down. QF-Test tries to stop the application gracefully if you record a sequence of the steps a user would perform; there are also dedicated nodes for this. Additionally, you may try to kill the SUT client. For an example of such a construct, look at the procedure startStop.terminate in the demo suite delivered with QF-Test under <qftest_install_dir>\demo\carconfig\carconfig_en.qft.
If the problem persists you should write to QF-Test support, since additional details may be required, and stackoverflow.com is not suitable for such communication.
Disclaimer: I am a QF-Test employee.
I have a Python program which spawns several other Python programs as subprocesses. One of these subprocesses is supposed to open and bind a ZMQ publisher socket, such that other subprocesses can subscribe to it.
I cannot give guarantees about which tcp ports will be available, so when I bind to a random port in the subprocess, my main program will not know what to tell the other subprocesses.
Is there a way to bind the socket in the main process and then somehow pass the socket to my subprocess? Or is there some other way to preregister the socket, or a standard way to pass the port information from the subprocess back to my main process (stdout and stderr are already used for other data)?
Just checking for a free port in the main process and passing that to the subprocess is not really optimal, because this could still fail if the port is claimed by something else in the meantime. Also, since my program should work on Unix and Windows, I cannot really use ipc sockets, which would otherwise solve my problem.
The simplest approach is to create a pool-of-ports manager (and rather avoid attempts to share or pass ZeroMQ sockets to or among other processes).
One may create a persistent, a-priori known access point, e.g. a tcp://A.B.C.D:8765 transport-class .bind() endpoint, exposed to all client processes as a port-assignment service. Client processes .connect() to it, handshake in whatever manner is needed to prove identity/credentials/purpose, and .recv() one actually free messaging/signalling port number, guaranteed system-wide not to be in use at that moment and until it is returned to the port manager. The rotating pool of ports is centrally managed under your code's control, whereas all the sockets are still created locally in the distributed processes/threads that .connect() or .bind() to the pool-manager-announced port#, so the ZeroMQ socket objects themselves are never shared, and ought not be, consistent with ZeroMQ advice.
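A minimal sketch of such a port-assignment service, assuming pyzmq; the tcp://*:8765 endpoint, the LEASE/RELEASE protocol, and the port range are made-up placeholders:

import zmq

PORT_POOL = list(range(56000, 56010))   # assumed pool of free ports

def run_port_manager():
    ctx = zmq.Context.instance()
    rep = ctx.socket(zmq.REP)
    rep.bind("tcp://*:8765")             # a-priori known access point
    while True:
        request = rep.recv_string()
        if request == "LEASE" and PORT_POOL:
            rep.send_string(str(PORT_POOL.pop()))
        elif request.startswith("RELEASE "):
            PORT_POOL.append(int(request.split()[1]))
            rep.send_string("OK")
        else:
            rep.send_string("ERROR")

def lease_port():
    # called by a subprocess before it binds its PUB socket
    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.connect("tcp://127.0.0.1:8765")
    req.send_string("LEASE")
    return int(req.recv_string())

Each subprocess leases a port from this service before binding its PUB socket and releases it when done, so the main process never has to guess which ports are free.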
Let's say we have a server application written in Python.
Let's also say that this main server process forked two more processes at startup.
The server awaits its clients, and when one connects, it decides which of the two forked processes it should pass the client's socket to.
I do not want to fork a process each time a client connects; I want a fixed number of servers, but one main server that receives a connection and then passes it to the server that deals with the specific work the client asked for.
This should provide DoS-attack protection, job separation, etc.
Is there any trick to pass a Python object between already-started Python programs?
Some shared memory or something like that?
Would pickling the socket object and pushing it through IPC work?
Would pickling the socket object and pushing it through IPC work?
No. Inside that object is a file descriptor or handle to the kernel socket. It's just a number that the process uses to identify the socket when making system calls.
If you pickle that Python socket object and send it to another process, that process will be using a handle for a socket it didn't open. Or worse, that handle may refer to a different open file.
The most efficient way to handle this (on Linux) is like this:
Master process opens listening socket (e.g. TCP port 80)
Master process forks N children who all inherit that open socket
They all call accept() and block, waiting for a new connection
When a new client connects, the kernel will select one of the processes with a handle to that socket to accept the connection; the others will continue to wait
This way, you let the kernel handle the load balancing.
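A minimal pre-fork sketch of that pattern (the port and worker count are arbitrary):

import os
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))
server.listen(128)

NUM_WORKERS = 4
for _ in range(NUM_WORKERS):
    if os.fork() == 0:
        # child: inherits the listening socket and competes in accept()
        while True:
            conn, addr = server.accept()
            conn.sendall(("handled by pid %d\n" % os.getpid()).encode())
            conn.close()

# parent: just wait for the workers
for _ in range(NUM_WORKERS):
    os.wait()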
If you don't want this behavior, there is a way (in UNIX) to pass an open socket to another process. Again, this is more than just the handle; the kernel effectively copies the open socket to your process's open file list. This mechanism is known as SCM_RIGHTS, and you can see an example (in C) here:
http://man7.org/tlpi/code/online/dist/sockets/scm_rights_send.c.html
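For completeness, this is roughly what SCM_RIGHTS fd passing looks like in Python over a Unix domain socket, using sendmsg/recvmsg (Python 3.9+ also wraps this as socket.send_fds/recv_fds); a sketch, not a drop-in solution:

import array
import socket

def send_fd(unix_sock, fd):
    # one byte of real payload plus the descriptor as SCM_RIGHTS ancillary data
    unix_sock.sendmsg(
        [b"F"],
        [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd]))],
    )

def recv_fd(unix_sock):
    fds = array.array("i")
    msg, ancdata, flags, addr = unix_sock.recvmsg(1, socket.CMSG_LEN(fds.itemsize))
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data[:fds.itemsize])
            return fds[0]
    raise RuntimeError("no file descriptor received")

The receiving process can then wrap the raw descriptor, e.g. with socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM).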
Otherwise, your master process will need to effectively proxy the connection to the child processes, reducing the efficiency of the system.
I'm creating a Twisted TCP server that needs to make a subprocess command-line call and relay the results to the client while it is still connected. But the subprocess needs to continue running until it is done, even after the client disconnects.
Is it possible to do this? And if so, please point me in the right direction. It's all new to me.
Thanks in advance!
There's nothing in Twisted's child-process support that will automatically kill the child process when any particular TCP client disconnects. The behavior you're asking about is basically the default behavior.
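A rough sketch of that pattern (the command, port, and names here are placeholders, not the poster's code): a ProcessProtocol relays the subprocess output to the TCP client while it is connected and simply stops relaying when the client goes away; nothing ever kills the child, so it runs to completion:

from twisted.internet import protocol, reactor

class RelayProcess(protocol.ProcessProtocol):
    def __init__(self, client):
        self.client = client

    def outReceived(self, data):
        # forward subprocess stdout while the client is still connected
        if self.client.client_connected:
            self.client.transport.write(data)

class CommandRelay(protocol.Protocol):
    def connectionMade(self):
        self.client_connected = True
        # "some_long_command" is a placeholder for the real command line
        reactor.spawnProcess(RelayProcess(self), "/bin/sh",
                             ["sh", "-c", "some_long_command"])

    def connectionLost(self, reason):
        self.client_connected = False   # subprocess is left alone and keeps running

factory = protocol.Factory()
factory.protocol = CommandRelay
reactor.listenTCP(9000, factory)
reactor.run()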
This may or may not be a coding issue. It may also be an xinetd daemon issue; I do not know.
I have a Python script which is triggered from a Linux server running xinetd. Xinetd has been set up to only allow one instance, as I only want one machine to be able to connect to the service, which is therefore also limited by IP.
Currently, when the client connects to xinetd the service works correctly and the script begins sending its output to the client machine. However, when the client disconnects (e.g. due to a reboot), the process is still alive on the server, and this blocks the client from connecting again once it has finished rebooting.
Q: How can I detect in Python that the client has disconnected? Perhaps I can test whether stdout is no longer being read by the client (and then exit the script), or is there a much easier way in xinetd to have the child process killed when the client disconnects?
(I'm using python 2.4.3 on RHEL5 linux - solutions for 2.4 are needed, but 3.1 solutions would be useful to know also.)
Add a signal handler for SIGHUP. (x)inetd sends this upon the socket disconnecting.
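A minimal sketch of that, assuming the script just needs to exit cleanly when the client goes away:

import signal
import sys

def on_hup(signum, frame):
    # (x)inetd signals the child when the client side of the socket disconnects
    sys.exit(0)

signal.signal(signal.SIGHUP, on_hup)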
Monitor the signals sent to your process. Maybe your script isn't responding to the SIGHUP sent by xinetd; monitor the signal and let the script exit.
You don't seem to get a SIGHUP, but you do get a SIGPIPE, at least so long as you are attempting any IO on the connection. If the application spends long periods of time not doing any IO, then you could just start a thread reading stdin to ensure you get the SIGPIPE as soon as the disconnection occurs. This was good enough for my application but then I didn't use any pipes other than the ones xinetd gave me.
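As a sketch of that watchdog idea (Python 2.4 compatible): a daemon thread keeps reading stdin and exits the process as soon as it sees EOF from the disconnected client:

import os
import sys
import threading

def watch_stdin():
    while True:
        if not sys.stdin.read(1):
            # EOF: the peer closed the connection, so do not linger
            os._exit(0)

watcher = threading.Thread(target=watch_stdin)
watcher.setDaemon(True)   # do not keep the process alive for this thread
watcher.start()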
I've seen several places on the net where people talk about the SIGHUP getting sent on client disconnection, so I've written an inetd python script to test out a couple of servers (one inetd and another xinetd), so you could use that to check on the signals getting sent. It just logs what it finds to /var/log/test.log. Perhaps it will be useful.
#!/usr/bin/python
import os, signal, sys

skip = ["SIGKILL", "SIG_DFL", "SIGSTOP", "SIG_IGN", "SIGCLD", "SIGCHLD"]

name_map = {}
identifiers = [i for i in dir(signal) if i.startswith("SIG") and not i in skip]
for i in identifiers:
    name_map[getattr(signal, i)] = i

def handler(num, frame):
    signame = name_map[num]
    os.system("echo handled %s >> /var/log/test.log" % signame)

if __name__ == "__main__":
    for id, name in name_map.iteritems():
        signal.signal(id, handler)
    while True:
        print sys.stdin.readline()
        sys.stdout.flush()