Inheritance of file descriptors - Python 3.4

The documentation at https://docs.python.org/3.5/library/os.html#fd-inheritance on "Inheritance of file descriptors" says:
"On UNIX, non-inheritable file descriptors are closed in child processes at the execution of a new program, other file descriptors are inherited."
The socket documentation also says that "the newly created socket is non-inheritable."
I have just tested it with the following code:
import socket, os

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('localhost', 9999))
sock.listen(512)
sock.settimeout(1)
print("Socket inheritable?: {}".format(sock.get_inheritable()))

pid = os.fork()
if not pid:  # child process
    print(sock)
else:
    pass
Calling sock.get_inheritable() returns False, which means the socket is not inheritable.
But the child process seems to have inherited the socket descriptor anyway.
Am I missing something?
Why is that so?
Thanks
Update:
Here is "server.py", which waits on accept() in the child process:
import socket, os, time

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('localhost', 9999))
sock.listen(512)
sock.setblocking(True)
print("Socket inheritable?: {}".format(sock.get_inheritable()))

pid = os.fork()
if not pid:  # child process
    sock, addr = sock.accept()
    data = sock.recv(100)
    print(data.decode())
else:
    while True:
        time.sleep(1)
"client.py" sends "Hello" to the socket:
import socket

msg = "Hello".encode()
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost', 9999))
s.setblocking(True)
s.send(msg)
s.close()
After starting "server.py" and running "client.py", I see the "Hello" message printed in the terminal of "server.py".

Non-inheritable means that the descriptor is closed when the child executes a new program (exec), exactly as your quote says; a bare os.fork() without an exec duplicates every descriptor, so the child still holds a working copy. It also does not mean that the reference to the socket object in Python disappears: printing sock in the child only shows the object, not whether its descriptor is usable.
The more meaningful test would be to read from the socket in both branches of the conditional, or to exec a new program in the child and check the descriptor there.
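A minimal sketch of that second test; embedding the fd number into the -c script is just for illustration:

import os
import socket
import sys

sock = socket.socket()          # non-inheritable by default since Python 3.4
fd = sock.fileno()

pid = os.fork()
if pid == 0:
    os.fstat(fd)                # succeeds: fork() alone copies the descriptor
    # exec() is where non-inheritable descriptors actually get closed:
    code = ("import os\n"
            "try:\n"
            "    os.fstat(%d)\n"
            "    print('fd survived exec')\n"
            "except OSError:\n"
            "    print('fd closed by exec')\n" % fd)
    os.execv(sys.executable, [sys.executable, "-c", code])
else:
    os.waitpid(pid, 0)          # the child prints: fd closed by exec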

Related

Server process started by multiprocessing.Manager() makes piped socket not closed immediately

I have the following code: the server accepts a network connection and passes it to a child process via Manager().Queue():
import os
import socket
import time
from multiprocessing import Manager, Process

q = Manager().Queue()

class Server:
    def run(self, host, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((host, port))
        s.listen(1)
        print('parent', os.getpid())
        while True:
            c, _ = s.accept()
            q.put(c)
            c.close()

def handle_request():
    print('child', os.getpid())
    while True:
        c = q.get()
        time.sleep(1)
        print(c.recv(4))
        c.close()

Process(target=handle_request, args=()).start()
Server().run('127.0.0.1', 10000)
close() doesn't work as expected. I think it is because the Manager's server process still has a reference to that socket; lsof -i confirmed it. How can I deal with this? I found no way to close the socket in the Manager process; shutdown() could do the trick, but that is not what I want.
Interesting problem.
I am not sure if this is of any help, but I found your code somewhat odd at first, as sending socket objects to another process through a Manager().Queue() does not sound like something that is supported. It may be, but sending a file descriptor to another process needs a couple of hoops. I changed your code a bit to do it as I would do it: basically reducing and reconstructing the handles.
from multiprocessing import Manager, Process
from multiprocessing.reduction import reduce_handle, rebuild_handle
import socket
import os
from time import sleep

q = Manager().Queue()

class Server:
    def run(self, host, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((host, port))
        s.listen(1)
        print('parent', os.getpid())
        while True:
            c, _ = s.accept()
            foo = reduce_handle(c.fileno())
            q.put(foo)
            c.close()

def handle_request():
    print('child', os.getpid())
    while True:
        bar = q.get()
        sleep(1)
        barbar = rebuild_handle(bar)
        c = socket.fromfd(barbar, socket.AF_INET, socket.SOCK_STREAM)
        print(c.recv(4))
        c.shutdown(socket.SHUT_RDWR)

Process(target=handle_request, args=()).start()
Server().run('127.0.0.1', 10000)
This does not leave any sockets behind in CLOSE_WAIT, at least when I ran it, and it works as I would expect it to.
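For what it's worth, reduce_handle and rebuild_handle are Python 2 internals and are gone in Python 3. On Python 3 the analogous plumbing is send_handle/recv_handle over a Pipe; below is a hedged sketch of the same idea. The port and the 4-byte read mirror the question; everything else is an assumption, not a definitive implementation:

import os
import socket
from multiprocessing import Pipe, Process
from multiprocessing.reduction import recv_handle, send_handle

def handle_request(reader):
    print('child', os.getpid())
    while True:
        fd = recv_handle(reader)       # receive the duplicated descriptor
        c = socket.socket(fileno=fd)   # adopt it (no extra dup, unlike fromfd)
        print(c.recv(4))
        c.close()                      # last copy closes -> the peer sees EOF

if __name__ == '__main__':
    reader, writer = Pipe(duplex=False)
    p = Process(target=handle_request, args=(reader,))
    p.start()
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('127.0.0.1', 10000))
    s.listen(1)
    print('parent', os.getpid())
    while True:
        c, _ = s.accept()
        send_handle(writer, c.fileno(), p.pid)  # dup the fd into the child
        c.close()                               # drop the parent's copy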

Python socket allow only one connection, but disconnect on new, not refuse

In Python, you can define the maximum number of socket connections via the parameter of the listen() function... for example:
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind((socket.gethostname(), 80))
serversocket.listen(1)  # allow only 1 connection
But the problem is that when a second client wants to connect, the connection is refused. I would instead like to disconnect the old user and connect the new one. Could anybody help me with that?
Probably an answer:
I am posting it in the question since it is a probable answer (I didn't have time to check it):
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind((socket.gethostname(), 80))
serversocket.listen(10)  # allow 10 pending connections, but disconnect the previous one later
someone_connected = None
while True:
    (clientsocket, address) = serversocket.accept()
    if someone_connected:
        someone_connected.close()
    someone_connected = clientsocket
I am not sure that I fully understand your question, but I think the following example can meet your requirement: the server disconnects the old user and serves the new one.
The server side:
#!/usr/bin/env python
import socket
import multiprocessing

HOST = '127.0.0.1'
PORT = 50007

# you can do your real stuff in handler
def handler(conn, addr):
    try:
        print 'processing...'
        while 1:
            data = conn.recv(1024)
            if not data:
                break
            print data
            conn.sendall(data)
        conn.close()
        print 'processing done'
    except:
        pass

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((HOST, PORT))
s.listen(5)
processes = []
while True:
    conn, addr = s.accept()
    print conn, addr
    [p.terminate() for p in processes]  # to disconnect the old connection
    # start a process for the newer connection and save it for the next kill
    p = multiprocessing.Process(target=handler, args=(conn, addr))
    processes = [p]
    p.start()
    newest_conn = conn  # this is the newest connection object, if you need it
For testing, the client side:
#!/usr/bin/env python
import socket
import time
import multiprocessing

HOST = '127.0.0.1'
PORT = 50007

def client():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    time.sleep(0.1)
    try:
        for n in range(20):
            s.send(str(n))
            data = s.recv(1024)
            print data
            time.sleep(0.5)
        s.send('')
        s.close()
    except:
        pass

if __name__ == "__main__":
    for i in range(5):
        print 'user %i connect' % i
        p = multiprocessing.Process(target=client)
        p.start()  # simulate a new user connecting
        time.sleep(3)
Try it :-)
You have a wrong assumption built into your question: the single argument to socket listen() is not the "number of connections", but the backlog, i.e. the number of pending, not-yet-accepted client connections the kernel holds for you for a while.
Your problem then seems to be that you have accepted one connection and are reading/writing to it in a loop, never calling accept() again. The kernel holds the request for any new client connection for some timeout, then notifies the client that the server is not accepting it.
You want to look into select() functionality, as suggested in the comments; a sketch follows.
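A minimal sketch of that approach using Python 3's selectors module. The port (80 needs root, so 8000 is used here), the echo behavior, and the "newest client wins" policy are assumptions taken from the question, not a definitive implementation:

import selectors
import socket

sel = selectors.DefaultSelector()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 8000))
server.listen(5)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

current = None                       # the single client we serve at a time

while True:
    for key, _ in sel.select():
        if key.fileobj is server:
            conn, addr = server.accept()
            if current is not None:
                sel.unregister(current)
                current.close()      # disconnect the old user
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
            current = conn
        elif key.fileobj is current:  # skip events for a socket we just closed
            data = current.recv(1024)
            if data:
                current.sendall(data)  # echo, as a placeholder for real work
            else:
                sel.unregister(current)
                current.close()
                current = None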

Threading UDP data with Python

I'm trying to implement threading for a UDP socket.
I want to be able to wait for clients to send me data in one thread, and wait for the first data in another.
import threading
import socket

class Broker():
    def __init__(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(('127.0.0.1', 4242))
        self.clients_list = []

    def talkToClient(self, ip):
        self.sock.sendto("ok", ip)

    def listen_clients(self):
        while True:
            msg, client = self.sock.recvfrom(1024)
            t = threading.Thread(None, self.talkToClient, None, (client,), None)

b = Broker()
b.listen_clients()
And my client:
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
sock.sendto("connection", ('127.0.0.1', 4242))
while True:
    msg, b = sock.recvfrom(1024)
    print msg
The problem is that my client never receives "ok".
Your main problem is that you are not starting the thread you have created:
t.start()
should do it. Please make sure you are using four spaces for indentation as well.
I didn't see the error at first myself, but once I added some logging statements it was pretty obvious. The code ended up looking like this:
import threading
import socket
import logging

class Broker():
    def __init__(self):
        logging.info('Initializing Broker')
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(('127.0.0.1', 4242))
        self.clients_list = []

    def talkToClient(self, ip):
        logging.info("Sending 'ok' to %s", ip)
        self.sock.sendto("ok", ip)

    def listen_clients(self):
        while True:
            msg, client = self.sock.recvfrom(1024)
            logging.info('Received data from client %s: %s', client, msg)
            t = threading.Thread(target=self.talkToClient, args=(client,))
            t.start()

if __name__ == '__main__':
    # Make sure all log messages show up
    logging.getLogger().setLevel(logging.DEBUG)
    b = Broker()
    b.listen_clients()
I'm afraid you will run into other problems because of your threaded solution, however. Most Python modules are not thread-safe by default, and unfortunately this is true for the socket module as well. I'm pretty sure that eventually your socket's internal state will be corrupted, since you are reading in one thread and writing in another, or potentially in many others, as you spawn a new thread for each client.
If you look at multi-threaded socket code examples in Python, a socket is usually owned and used by only one thread. With TCP, the key is to not reuse the listening socket for clients, but to let socket.accept() create a new socket for each client once it has connected; with UDP, which has no accept(), one equivalent is to give each reply its own short-lived socket, as sketched below.
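A hedged sketch of that per-reply variant of talkToClient for the Broker above; only the listening thread then ever touches self.sock. Note that replies come from a different source port, which this client tolerates because its recvfrom() does not filter by peer:

    def talkToClient(self, ip):
        # per-reply socket, so the shared self.sock stays with the listener
        reply = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            reply.sendto(b"ok", ip)  # bytes, in case this runs on Python 3
        finally:
            reply.close()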

Why does socket close when sending message over socket?

Why does my connection stop after 3 print statements if I have the conn.send() line? If that line is commented out, the connection stays open indefinitely. It is hitting the exception for some reason, but I don't know why, and I am inexperienced with Python.
server.py:
import socket
import time

PORT = 1234

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', PORT))
s.listen(1)
print("Server started on port %u" % PORT)
try:
    while True:
        (conn, addr) = s.accept()
        conn.setblocking(0)
        print("Client connected: %s:%d" % addr)
        while True:
            print "hey"
            conn.send("random")
            time.sleep(1)
except:
    s.close()
    print "exception"
client.py:
import socket                  # Import socket module

s = socket.socket()            # Create a socket object
host = socket.gethostname()    # Get local machine name
port = 1234                    # Reserve a port for your service.
s.connect((host, port))
print s.recv(1024)
s.close()
The reason for this behavior is that the socket is actually closed after the client's first recv(); your server just doesn't realize it until the third attempt at send(). Calling close() in your client didn't change anything, because previously the program was terminating anyway, which closed the socket. This thread explains why this is very likely what's happening. To test this theory, you could have the client sleep (or select) and issue more recv() calls before closing the socket.
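A hedged sketch of that test client; the loop count and delay are arbitrary. With it, the server's later send() calls have a live peer and no exception is hit until the client finally closes:

import socket
import time

s = socket.socket()
s.connect((socket.gethostname(), 1234))
for _ in range(10):
    data = s.recv(1024)     # keep reading instead of exiting right away
    print(repr(data))
    time.sleep(1)
s.close()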

Can socket objects be shared with Python's multiprocessing? socket.close() does not seem to be working

I'm writing a server which uses a multiprocessing.Process for each client. socket.accept() is called in the parent process and the connection object is given as an argument to the Process.
The problem is that when calling socket.close(), the socket does not seem to close. The client's recv() should return immediately after close() has been called on the server. This is the case when using threading.Thread or when handling requests in the main thread, but with multiprocessing the client's recv() seems to hang forever.
Some sources indicate that socket objects should be shared as handles with multiprocessing.Pipe and multiprocessing.reduction, but it does not seem to make a difference.
EDIT: I am using Python 2.7.4 on 64-bit Linux.
Below is a sample implementation demonstrating the issue.
server.py
import socket
from multiprocessing import Process
#from threading import Thread as Process

s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', 5001))
s.listen(5)

def process(s):
    print "accepted"
    s.close()
    print "closed"

while True:
    print "accepting"
    c, _ = s.accept()
    p = Process(target=process, args=(c,))
    p.start()
    print "started process"
print "started process"
client.py
import socket

s = socket.socket()
s.connect(('', 5001))
print "connected"
buf = s.recv(1024)
print "buf: '" + buf + "'"
s.close()
The problem is that the socket is not closed in the parent process. fork() gives the child its own copy of the descriptor, and the connection only signals EOF to the peer once every copy has been closed; the parent's copy stays open, which causes the symptom you are observing.
Immediately after forking off the child process to handle the connection, you should close the parent process' copy of the socket, like so:
while True:
    print "accepting"
    c, _ = s.accept()
    p = Process(target=process, args=(c,))
    p.start()
    print "started process"
    c.close()
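The rule is general for any descriptor handed to a child: a reader only sees EOF once all duplicated write ends are closed. A minimal sketch with a plain pipe, assuming the fork start method (the Linux default):

import os
from multiprocessing import Process

def child(w):
    os.close(w)               # the child closes *its* copy right away

if __name__ == '__main__':
    r, w = os.pipe()
    p = Process(target=child, args=(w,))  # fork duplicates both descriptors
    p.start()
    p.join()                  # the child has exited; its copies are gone
    os.close(w)               # only after the parent drops its copy too...
    print(os.read(r, 1024))   # ...does the reader see EOF: prints b''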
