Ending a process when a TCP connection is closed - Python

I am developing a client-server application where, whenever a new client connects, the server spawns a new process using the multiprocessing module. The process's target function takes the connected socket and does I/O on it. The problem I have is: once the TCP connection between the client and that server process is closed, how/where do I put the .join() call to end the child process? Also, do I need to do any waitpid-style cleanup in the parent process, like in C?
Server code:
from socket import *
from multiprocessing import Process

BUFFER_SIZE = 1024   # assumed value; not shown in the original snippet

def new_client(conn_socket):
    while True:
        message = conn_socket.recv(BUFFER_SIZE)
        conn_socket.send(message)
        #just echo the message
        #how to check to see if the TCP connection is still alive?
        #put the .join() here??

def main():
    #create the socket
    server_socket = socket(AF_INET, SOCK_STREAM)
    #bind the socket to the local ip address on a specific port and listen
    server_port = 12000
    server_socket.bind(('', server_port))
    server_socket.listen(1)
    #enter a loop to accept client connections
    while True:
        connection_socket, client_address = server_socket.accept()
        #create a new process with the new connection_socket
        new_process = Process(target=new_client, args=(connection_socket,))
        new_process.start()
        #put the .join() here or what??

if __name__ == '__main__':
    main()
Also, for this setup, would it be more beneficial to use threads (e.g. the threading module) or to stay with processes? The server code is being developed for heavy usage on a server with "average" specs; how would I optimize this setup?

You need to check the return value of recv. In Python, recv returns a bytes object: if it is empty (b''), the peer closed the connection cleanly. (Unlike in C, you will never see a negative return value; errors raise an exception instead.)
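In your new_client, that check might look like this (a minimal sketch based on the loop from the question):

def new_client(conn_socket):
    while True:
        message = conn_socket.recv(BUFFER_SIZE)
        if not message:
            # empty result: the client closed the connection
            break
        conn_socket.send(message)
    conn_socket.close()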
And the join call should be in the process that creates the sub-process. Be careful, though: join without an argument blocks the calling process until the sub-process is done. Put the processes in a list and, at regular intervals, call join with a small timeout.
Edit: Simplest is to add, at the end of the infinite accept loop, a pass over the list of processes, checking each with is_alive(). If a process is no longer alive, call join() on it and remove it from the list.
Something like:
all_processes = []

while True:
    connection_socket, client_address = server_socket.accept()
    #create a new process with the new connection_socket
    new_process = Process(target=new_client, args=(connection_socket,))
    new_process.start()
    # Add process to our list
    all_processes.append(new_process)
    # Join all dead processes
    for proc in all_processes:
        if not proc.is_alive():
            proc.join()
    # And remove them from the list
    all_processes = [proc for proc in all_processes if proc.is_alive()]
Note that purging of old processes only happens when a new connection arrives, so it can be delayed for a long time if connections are infrequent. You could make the listening socket non-blocking and use e.g. select with a timeout to know whether there are new connections, so the purging happens at regular intervals even when nobody connects.
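A rough sketch of that variant (the one-second timeout is just an example value):

import select

all_processes = []
server_socket.setblocking(False)

while True:
    # wait up to 1 second for a new connection
    readable, _, _ = select.select([server_socket], [], [], 1.0)
    if readable:
        connection_socket, client_address = server_socket.accept()
        new_process = Process(target=new_client, args=(connection_socket,))
        new_process.start()
        all_processes.append(new_process)
    # purge finished processes on every pass, even without new connections
    for proc in all_processes:
        if not proc.is_alive():
            proc.join()
    all_processes = [proc for proc in all_processes if proc.is_alive()]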

Related

How to fix multiprocessing echo server to handle multiple clients

I want to create a multiprocessing echo server. I am currently using telnet as my client to send messages to my echo server. Currently I can handle one telnet request and it echoes the response. I initially thought I should initialize the pid whenever I create a socket. Is that correct?
How do I allow several clients to connect to my server using multiprocessing?
#!/usr/bin/env python
import socket
import os
from multiprocessing import Process

def create_socket():
    # Create socket
    sockfd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Port for socket and host
    PORT = 8002
    HOST = 'localhost'
    # Bind the socket to host and port
    sockfd.bind((HOST, PORT))
    # Become a server socket
    sockfd.listen(5)
    start_socket(sockfd)

def start_socket(sockfd):
    while True:
        # Establish and accept connections with the client
        (clientsocket, address) = sockfd.accept()
        # Get the process id.
        process_id = os.getpid()
        print("Process id:", process_id)
        print("Got connection from", address)
        # Receive message from the client
        message = clientsocket.recv(2024)
        print("Server received: " + message.decode('utf-8'))
        reply = ("Server output: " + message.decode('utf-8'))
        if not message:
            print("Client has been disconnected.....")
            break
        # Display messages.
        clientsocket.sendall(str.encode(reply))
        # Close the connection with the client
        clientsocket.close()

if __name__ == '__main__':
    process = Process(target=create_socket)
    process.start()
It's probably a good idea to understand which system calls block and which do not. listen, for example, does not block, while accept does. So basically, you created one process through Process(..) that blocks at accept and, once a connection is made, handles that connection.
Your code should have a structure something like the following (pseudo code):
def handle_connection(accepted_socket):
    # do whatever you want with the socket
    pass

def server():
    # Create socket and listen on it.
    sock = socket.socket(....)
    sock.bind((HOST, PORT))
    sock.listen(5)
    while True:
        new_client, client_address = sock.accept()  # blocks here.
        # unblocked
        client_process = Process(target=handle_connection, args=(new_client,))
        client_process.start()
I must also mention that, while this is a good way to understand how things can be done, it is not a good idea to start a new process for every connection.
The initial part of setting up the server, binding, listening etc (your create_socket) should be in the master process.
Once you accept and get a socket, you should spawn off a separate process to take care of that connection. In other words, your start_socket should be spawned off in a separate process and should loop forever.
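Applied to the question's code, that restructuring might look roughly like this (reusing the question's imports and constants; the details are illustrative, not a definitive implementation):

def handle_client(clientsocket, address):
    # runs in its own process, one per accepted connection
    print("Process id:", os.getpid(), "handling", address)
    while True:
        message = clientsocket.recv(2024)
        if not message:
            print("Client has been disconnected.....")
            break
        clientsocket.sendall(str.encode("Server output: " + message.decode('utf-8')))
    clientsocket.close()

def create_socket():
    # master process: bind, listen and accept
    sockfd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sockfd.bind(('localhost', 8002))
    sockfd.listen(5)
    while True:
        clientsocket, address = sockfd.accept()
        Process(target=handle_client, args=(clientsocket, address)).start()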

Multithreaded TCP socket

I'm trying to create a threaded TCP socket server that can handle multiple socket request at a time.
To test it, I launch several threads on the client side to see if my server can handle them. The first message is printed successfully, but I get a [Errno 32] Broken pipe for the others.
I don't know how to avoid it.
import threading
import socketserver
import graphitesend

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        if data != "":
            print(data)

class ThreadedTCPServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

    def __init__(self, host, port):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), ThreadedTCPRequestHandler)

    def stop(self):
        self.server_close()
        self.shutdown()

    def start(self):
        threading.Thread(target=self._on_started).start()

    def _on_started(self):
        self.serve_forever()

def client(g):
    g.send("test", 1)

if __name__ == "__main__":
    HOST, PORT = "localhost", 2003
    server = ThreadedTCPServer(HOST, PORT)
    server.start()
    g = graphitesend.init(graphite_server=HOST, graphite_port=PORT)
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    server.stop()
It's a little bit difficult to determine what exactly you're expecting to happen, but I think the proximate cause is that you aren't giving your clients time to run before killing the server.
When you construct a Thread object and call its start method, you're creating a thread, and getting it ready to run. It will then be placed on the "runnable" task queue on your system, but it will be competing with your main thread and all your other threads (and indeed all other tasks on the same machine) for CPU time.
Your multiple threads (main plus others) are also likely being serialized by the python interpreter's GIL (Global Interpreter Lock -- assuming you're using the "standard" CPython) which means they may not have even gotten "out of the gate" yet.
But then you're shutting down the server with server_close() before they've had a chance to send anything. That's consistent with the "Broken Pipe" error: your remaining clients are attempting to write to a socket that has been closed by the "remote" end.
You should collect the thread objects as you create them and put them in a list (so that you can reference them later). When you're finished creating and starting all of them, then go back through the list and call the .join method on each thread object. This will ensure that the thread has had a chance to finish. Only then should you shut down the server. Something like this:
threads = []
for n in range(7):
    th = threading.Thread(target=client, args=(g,))
    th.start()
    threads.append(th)

# All threads created. Wait for them to finish.
for th in threads:
    th.join()

server.stop()
One other thing to note is that all of your clients are sharing the same single connection to send to the server, so that your server will never create more than one thread: as far as it's concerned, there is only a single client. You should probably move the graphitesend.init into the client function if you actually want separate connections for each client.
(Disclaimer: I know nothing about graphitesend except what I could glean in a 15 second glance at the first result in google; I'm assuming it's basically just a wrapper around a TCP connection.)
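With that caveat, a minimal sketch of the suggested change, reusing only the graphitesend.init call already shown in the question (each client thread opens its own connection):

def client():
    # each client creates its own connection instead of sharing one
    g = graphitesend.init(graphite_server=HOST, graphite_port=PORT)
    g.send("test", 1)

threads = []
for n in range(7):
    th = threading.Thread(target=client)
    th.start()
    threads.append(th)
for th in threads:
    th.join()
server.stop()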

Implementing a single thread server/daemon (Python)

I am developing a server (daemon).
The server has one "worker thread". The worker thread runs a queue of commands. When the queue is empty, the worker thread is paused (but does not exit, because it should preserve certain state in memory). To keep exactly one copy of that state in memory, I need exactly one worker thread (not several and not zero) running at all times.
Requests are added to the end of this queue when a client connects to a Unix socket and sends a command.
After the command is issued, it is added to the worker thread's command queue. Once it has been added to the queue, the server replies with something like "OK". There should not be a long pause between the server receiving a command and its "OK" reply. However, running the commands in the queue may take some time.
The main "work" of the worker thread is split into small (taking relatively little time) chunks. Between chunks, the worker thread inspects ("eats" and empties) the queue and continues to work based on the data extracted from the queue.
How to implement this server/daemon in Python?
This is sample code with internet sockets, easily replaced with Unix domain sockets. It takes whatever you write to the socket, passes it as a "command" to the worker, and responds OK as soon as it has queued the command. The single worker simulates a lengthy task with a sleep loop (ten one-second sleeps in this example). You can queue as many tasks as you want, receive OK immediately, and roughly every ten seconds your worker prints a command from the queue.
import queue
import threading
import socket
from time import sleep

class worker(threading.Thread):
    def __init__(self, q):
        super(worker, self).__init__()
        self.qu = q

    def run(self):
        while True:
            new_task = self.qu.get(True)
            print(new_task)
            i = 0
            while i < 10:
                print("working ...")
                sleep(1)
                i += 1
                # between chunks, check the queue without blocking
                try:
                    another_task = self.qu.get(False)
                    print(another_task)
                except queue.Empty:
                    pass

task_queue = queue.Queue()
w = worker(task_queue)
w.daemon = True
w.start()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('localhost', 4200))
sock.listen(1)
try:
    while True:
        conn, addr = sock.accept()
        data = conn.recv(32)
        task_queue.put(data)
        conn.sendall(b"OK")
        conn.close()
except:
    sock.close()
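A quick way to exercise it is an illustrative test client like this (not part of the original answer):

import socket

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(('localhost', 4200))
c.sendall(b"do_something")
print(c.recv(32))   # should print b'OK' almost immediately
c.close()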

Identify Thread in Python

I have a Python socket server running, along with socket clients.
Now, for example say there are 3 clients connected to same server. Please find below the code of the server.
#!/usr/bin/python
# This is the server.py file

import socket     # Import socket module
import threading

serversocket = socket.socket()      # Create a socket object
host = socket.gethostname()         # Get local machine name
port = 1234                         # Reserve a port for your service.
serversocket.bind((host, port))     # Bind to the port
serversocket.listen(5)
print("Bound the port ", port, " on machine: ", host, ", and ready to accept connections.\n")

def clientThread(connection):
    while True:
        data = connection.recv(1024)
        if not data:
            break
        connection.send(b"Thanks")
    connection.close()

def sendMessage(connection, message):
    connection.send(message)

while 1:
    connection, address = serversocket.accept()
    threading.Thread(target=clientThread, args=(connection,)).start()

serversocket.close()
Now, I need to call sendMessage for a particular client, say out of clients A,B and C, send it to B. In this case, how do I identify the thread and call that function?
You can use Queues and multiple threads per connection to solve this problem.
Basic outline:
Each client connection spawns two threads - one to monitor client input and another which monitors a Queue. Items placed on the queue will be sent to the client. Each client connection will have its own output queue.
You'll also need a global dictionary to map a client name to their output queue.
To send a message to a particular client, find the client's output queue and add the message to it.
You'll also need a way to shutdown the output thread for a client. A common approach is to use a sentinel value (like None) on the queue to inform the output thread to exit its processing loop. When the client's input thread detects EOF it can place the sentinel value on the client's output queue and eventually the output thread will shut itself down.
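A rough sketch of that design follows. Names like clients, output_thread, and the client-naming scheme are placeholders I am assuming, not part of the original code:

import socket
import threading
import queue

clients = {}            # maps a client name to its output queue
clients_lock = threading.Lock()

def output_thread(connection, out_q):
    # sends everything placed on this client's queue; None is the shutdown sentinel
    while True:
        message = out_q.get()
        if message is None:
            break
        connection.send(message)
    connection.close()

def input_thread(connection, name, out_q):
    while True:
        data = connection.recv(1024)
        if not data:                      # EOF: the client disconnected
            break
        out_q.put(b"Thanks")
    out_q.put(None)                       # tell the output thread to exit
    with clients_lock:
        del clients[name]

def sendMessage(name, message):
    # deliver a message to one particular client, e.g. sendMessage("client-1", b"hello")
    with clients_lock:
        out_q = clients.get(name)
    if out_q is not None:
        out_q.put(message)

def serve(serversocket):
    counter = 0
    while True:
        connection, address = serversocket.accept()
        name = "client-%d" % counter      # placeholder naming scheme
        counter += 1
        out_q = queue.Queue()
        with clients_lock:
            clients[name] = out_q
        threading.Thread(target=output_thread, args=(connection, out_q)).start()
        threading.Thread(target=input_thread, args=(connection, name, out_q)).start()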

New client right after accept function call

Suppose that I have the following code:
import socket
import threading

listener = socket.socket()
listener.bind(('0.0.0.0', 59535))
listener.listen()   # default backlog
while True:
    conn, addr = listener.accept()
    worker_thread = threading.Thread(target=client_handler, args=(conn, addr))
    worker_thread.start()
What will happen if a new client tries to connect to our listener socket while we're creating the worker thread? Will it wait for the next accept call, or will it just be rejected? If it waits, how many clients can be in that queue simultaneously by default (yes, I know I can set it via the listen function)?
There is a listen queue in the kernel, so the kernel will deal with new clients while your user-space part does something else. If the kernel's listen queue is full, no more clients will be accepted; that is, their connect will fail.
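For instance (the backlog value here is just illustrative):

listener.listen(5)   # kernel queues up to roughly 5 not-yet-accepted connections
# while the main thread is busy creating the worker thread, new clients sit
# in this queue; accept() picks them up on the next loop iteration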
