Moving a bound ZMQ socket to another process - python

I have a Python program which spawns several other Python programs as subprocesses. One of these subprocesses is supposed to open and bind a ZMQ publisher socket, such that other subprocesses can subscribe to it.
I cannot give guarantees about which tcp ports will be available, so when I bind to a random port in the subprocess, my main program will not know what to tell the other subprocesses.
Is there a way to bind the socket in the main process and then somehow pass the socket to my subprocess? Or is there some other way to preregister the socket, or a standard way to pass the port information from the subprocess back to my main process (stdout and stderr are already used for other data)?
Just checking for a free port in the main process and passing that to the subprocess is not really optimal, because another process could still claim the port in the meantime. Also, since my program should work on Unix and Windows, I cannot really use ipc sockets, which would otherwise solve my problem.

The simplest approach is to create the logic for a pool-of-ports manager (and rather avoid attempts to share / pass ZeroMQ sockets to / among other processes).
One may create a persistent, a-priori known, tcp://A.B.C.D:8765-transport-class based .bind() access point, exposed to all client processes as a port-assignment service. Client processes .connect() to it, handshake in whatever manner is needed to prove an identity/credentials/purpose, and .recv() in a coordinated manner one actually free messaging/signalling-service port number, which is system-wide guaranteed not to be in use at that very moment and until it is returned to the port manager. The rotating pool of ports is centrally managed, under your code's control, whereas all the sockets are created locally in the distributed process(es)/thread(s) that .connect() / .bind() to the pool-manager-announced port number; the sockets themselves thus still remain, and ought to remain, consistent with ZeroMQ advice, unshared.
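A minimal sketch of such a port-assignment service using pyzmq (the address, the JSON message shape, and the function names are illustrative assumptions, not part of the question):

import zmq

POOL_ADDR = "tcp://127.0.0.1:8765"    # the a-priori known access point (assumption)

def run_port_manager(port_pool):
    ctx = zmq.Context.instance()
    rep = ctx.socket(zmq.REP)
    rep.bind(POOL_ADDR)
    free = list(port_pool)            # the centrally managed rotating pool
    while True:
        msg = rep.recv_json()         # e.g. {"op": "lease"} or {"op": "return", "port": 6001}
        if msg["op"] == "lease" and free:
            rep.send_json({"port": free.pop(0)})   # leased: guaranteed free until returned
        elif msg["op"] == "return":
            free.append(msg["port"])
            rep.send_json({"ok": True})
        else:
            rep.send_json({"error": "pool exhausted"})

A subprocess would then lease a port before binding its own PUB socket:

req = zmq.Context.instance().socket(zmq.REQ)
req.connect(POOL_ADDR)
req.send_json({"op": "lease"})
port = req.recv_json()["port"]        # bind the real PUB socket to this port

The REQ/REP pattern serializes lease requests, so two subprocesses cannot be handed the same port.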

Related

Local machine interprocess communication with multiple independent processes (1 server, n clients)

I would like to have a server process (preferably Python) that accepts simple messages and multiple clients (again, preferably Python) that connect to the server and send messages to it. The server and clients will only ever be running on the same local machine and the OS is Linux based. The server will be automatically started by the OS and the clients started later independent of the server. I strongly want to avoid installing a whole separate messaging framework/server to do this. The messages will be simple strings such as "kick" or even just a single byte representing the message type. It also needs to know when a connection is made and lost.
From these requirements, I think named pipes would be a feasible solution, with a new instance of that pipe created for each client connection. However, when I search for examples, all of the ones I have come across deal with processes that are spawned from the same parent process rather than started independently, which means they can pass a parent reference to the child.
Windows seems to allow multiple instances of a named pipe (one for each client connection), but I'm unsure whether this is possible on a Linux-based OS.
Please could someone point me in the right direction, preferably with a basic example, even if it's just pseudo-code.
I've looked at the multiprocessing module in Python, but this seems to be oriented around the server and client sharing the same process or having one spawn the other.
Edit
Possibly important: the host device is not guaranteed to have networking capabilities (embedded device).
I've used zeromq for this sort of thing before. It's a relatively lightweight library that exposes exactly this functionality.
Otherwise, you could implement it yourself by binding a socket in the server process and having clients connect to it. This works fine for Unix domain sockets: just pass AF_UNIX when creating the socket, e.g.:
import socket

# bind a Unix domain socket at a well-known filesystem path
# (remove any stale /tmp/srv file from a previous run before binding)
with socket.socket(socket.AF_UNIX) as s:
    s.bind('/tmp/srv')
    s.listen(1)
    (c, addr) = s.accept()    # blocks until a client connects
    with c:
        c.send(b"hello world")
for the server, and:
with socket.socket(socket.AF_UNIX) as c:
    c.connect('/tmp/srv')
    print(c.recv(8192))
for the client.
Writing a protocol around this is more involved, which is where things like zmq really help, since you can easily push JSON messages around.
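For comparison, a minimal sketch of the same exchange over pyzmq's ipc transport, which needs no networking stack (the endpoint path and JSON shape are assumptions for illustration):

import zmq

ctx = zmq.Context.instance()

# server process
rep = ctx.socket(zmq.REP)
rep.bind("ipc:///tmp/srv.ipc")       # ipc:// endpoints work on Linux without networking
print(rep.recv_json())               # e.g. {"cmd": "kick"}
rep.send_json({"ok": True})

and in the client process:

import zmq

req = zmq.Context.instance().socket(zmq.REQ)
req.connect("ipc:///tmp/srv.ipc")
req.send_json({"cmd": "kick"})
print(req.recv_json())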

How to cancel a blocking thread caused by input() in Python?

I'm starting to learn more about TCP protocols in Python and I've been having some trouble with blocking threads inside clients.
Ideally, my application would work like this: I have different clients with thread functions, each one of them containing an input function in order to receive a specific command to send to the server (for example 'X'). When the 'X' is tapped in ONE client, the server receives it and sends a message to all the other clients informing that the program will continue and releasing them from their input functions - almost like cancelling them.
The problem lies in the fact that the input functions block the clients from leaving the loop. I've tried making the input threads daemon threads, but they still block until something is typed anyway, which is unfortunately the only workaround I've found so far.
I would like to use socket and the select module for the connections without being tied to any particular OS, so no msvcrt (which only works on Windows) and no select on stdin (which is only available on Unix-based OSes).
Any help would be greatly appreciated!
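For concreteness, a minimal sketch of the kind of client loop described, where input() blocks its thread (host, port, and protocol are hypothetical):

import socket
import threading

def command_loop(sock):
    while True:
        cmd = input("command> ")     # blocks this thread until the user types something
        sock.sendall(cmd.encode())
        if cmd == "X":
            break                    # other clients remain stuck inside their own input()

s = socket.create_connection(("127.0.0.1", 9000))   # hypothetical server
threading.Thread(target=command_loop, args=(s,), daemon=True).start()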

Passing Python object to another Python process

Let's say we have a server application written in Python.
Let's also say that this main server process forks two more processes at startup.
The server awaits its clients, and when one connects, it decides which of the two forked processes it should pass the client's socket to.
I do not want to fork a process each time a client comes; I want to have a fixed number of servers, but one main server that receives a connection, then passes it to the server that deals with the specific work the client asked for.
This is meant to provide DoS attack protection, job separation, etc.
Is there any trick to pass a Python object between already-started Python programs?
Some shared memory or something like that?
Would pickling the socket object and pushing it through IPC work?
"Would pickling the socket object and pushing it through IPC work?"
No. Inside that object is a file descriptor or handle to the kernel socket. It's just a number that the process uses to identify the socket when making system calls.
If you pickle that Python socket object and send it to another process, that process will be using a handle for a socket it didn't open. Or worse, that handle may refer to a different open file.
The most efficient way to handle this (on Linux) is like this:
Master process opens listening socket (e.g. TCP port 80)
Master process forks N children who all inherit that open socket
They all call accept() and block, waiting for a new connection
When a new client connects, the kernel will select one of the processes with a handle to that socket to accept the connection; the others will continue to wait
This way, you let the kernel handle the load balancing.
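A minimal Python sketch of this pattern (Unix-only; the port and the worker count are illustrative):

import os
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))     # e.g. port 8080 (placeholder)
listener.listen(128)

for _ in range(4):                   # fork N = 4 children; each inherits the open socket
    if os.fork() == 0:               # in the child
        while True:
            conn, addr = listener.accept()   # the kernel wakes exactly one blocked child
            conn.sendall(b"handled by pid %d\n" % os.getpid())
            conn.close()

while True:
    os.wait()                        # parent just reaps children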
If you don't want this behavior, there is a way (in UNIX) to pass an open socket to another process. Again, this is more than just the handle; the kernel effectively copies the open socket into your process's open file list. This mechanism is known as SCM_RIGHTS, and you can see an example (in C) here:
http://man7.org/tlpi/code/online/dist/sockets/scm_rights_send.c.html
Otherwise, your master process will need to effectively proxy the connection to the child processes, reducing the efficiency of the system.
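For what it's worth, Python 3.9+ exposes SCM_RIGHTS directly on Unix via socket.send_fds() and socket.recv_fds(); a minimal sketch (the socketpair setup and function names are illustrative):

import socket

# a connected Unix-domain pair; in practice created before fork()/spawn
parent_end, child_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

def hand_over(client_sock):
    # SCM_RIGHTS under the hood: the kernel installs a duplicate of the
    # descriptor into the receiving process's open file table
    socket.send_fds(parent_end, [b"take"], [client_sock.fileno()])

def take_over():
    msg, fds, flags, addr = socket.recv_fds(child_end, 1024, 1)
    return socket.socket(fileno=fds[0])   # wrap the received fd in a socket object again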

PyZmq ensure connect() after bind()

Trying to establish communication between two Python processes, I've come to use pyzmq. Since the communication is simple enough, I'm using the zmq.PAIR messaging pattern with a tcp socket. Basically one process binds on an address and the other one connects to the same address. However, both operations happen at startup, and since I cannot control the order in which the processes start, I often encounter the case in which connect() is called before bind(), which leads to a failure to establish communication.
Is there a way to know a socket is not yet ready to be connected to ?
What are the strategies to employ in such situations in order to obtain a safe connection ?
Put some sleep before connecting, so bind will run first and connect will continue after waiting for some time.
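A minimal sketch of that suggestion (the address and the delay are placeholders; this is a crude workaround, not a guarantee):

import time
import zmq

ctx = zmq.Context.instance()
pair = ctx.socket(zmq.PAIR)
time.sleep(2.0)                       # crude: hope the binding side is up by now
pair.connect("tcp://127.0.0.1:5555")  # placeholder address
pair.send(b"hello")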

How to abruptly disconnect a socket without closing it appropriately

I have a Python test program for testing features of another software component, let's call the latter the component under test (COT).
The Python test program is connected to the COT via a persistent TCP connection.
The Python program is using the Python socket API for this.
Now in order to simulate a failure of the physical link, I'd like to have the Python program shut the socket down, but without disconnecting appropriately.
I.e. I don't want anything to be sent on the TCP channel any more, including any TCP SYN/ACK/FIN. I just want the socket to go silent. It must not respond to the remote packets any more.
This is not as easy as it seems, since calling close() on a socket will send a TCP FIN packet to the remote end (a graceful disconnect).
So how can I kill the socket without sending any packets out?
I cannot shut down the Python program itself, because it needs to maintain other connections to other components.
For information, the socket runs in a separate thread. So I thought of abruptly killing the thread, but this is also not so easy. (Is there any way to kill a Thread?)
Any ideas?
You can't do that from a userland process, since the in-kernel network stack still holds resources and state related to the given TCP connection. Even if you kill your whole process, the kernel is going to send a FIN to the other side, since it knows which file descriptors your process had and will try to clean them up properly.
One way to get around this is to engage firewall software (on local or intermediate machine). Call a script that tells the firewall to drop all packets from/to given IP and port (that of course would need appropriate administrative privileges).
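For example, one might shell out to iptables on Linux (root privileges required; the chain names are standard, but the peer address and port are placeholders):

import subprocess

def drop_link(remote_ip, remote_port):
    # silently discard everything to/from the peer, so not even a FIN or RST
    # can leave the local stack
    for chain, direction, portflag in (("OUTPUT", "-d", "--dport"),
                                       ("INPUT", "-s", "--sport")):
        subprocess.run(["iptables", "-A", chain, direction, remote_ip,
                        "-p", "tcp", portflag, str(remote_port), "-j", "DROP"],
                       check=True)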
Contrary to Nikolai's answer, there is indeed a way to reset the connection from userland such that an RST is sent and pending data discarded, rather than a FIN after all the pending data. However as it is more abused than used, I won't publish it here. And I don't know whether it can be done from Python. Setting one of the three possible SO_LINGER configurations and closing will do it. I won't say more than that, and I will say that this technique should only be used for the purpose outlined in the question.
