Paramiko SSH Tunnel Shutdown Issue - python

I'm working on a Python script to query a few remote databases over an established SSH tunnel every so often. I'm fairly familiar with the paramiko library, so that was my route of choice. I'd prefer to keep this entirely in Python, so I can use paramiko to deal with key issues as well as use Python to start, control, and shut down the SSH tunnels.
There have been a few related questions around here about this topic, but most of them seemed incomplete in their answers. My solution below is hacked together from the solutions I've found so far.
Now for the problem: I'm able to create the first tunnel quite easily (in a separate thread) and do my DB/Python stuff, but when attempting to close the tunnel, localhost won't release the local port I bound to. Below, I've included my source and the relevant netstat data through each step of the process.
#!/usr/bin/python
import select
import SocketServer
import sys
import paramiko
from threading import Thread
import time
class ForwardServer(SocketServer.ThreadingTCPServer):
    daemon_threads = True
    allow_reuse_address = True

class Handler(SocketServer.BaseRequestHandler):
    def handle(self):
        # save the peer name up front; calling getpeername() after the
        # request socket is closed would raise
        peername = self.request.getpeername()
        try:
            chan = self.ssh_transport.open_channel('direct-tcpip', (self.chain_host, self.chain_port), peername)
        except Exception, e:
            print('Incoming request to %s:%d failed: %s' % (self.chain_host, self.chain_port, repr(e)))
            return
        if chan is None:
            print('Incoming request to %s:%d was rejected by the SSH server.' % (self.chain_host, self.chain_port))
            return
        print('Connected! Tunnel open %r -> %r -> %r' % (peername, chan.getpeername(), (self.chain_host, self.chain_port)))
        while True:
            r, w, x = select.select([self.request, chan], [], [])
            if self.request in r:
                data = self.request.recv(1024)
                if len(data) == 0:
                    break
                chan.send(data)
            if chan in r:
                data = chan.recv(1024)
                if len(data) == 0:
                    break
                self.request.send(data)
        chan.close()
        self.request.close()
        print('Tunnel closed from %r' % (peername,))

class DBTunnel():
    def __init__(self, ip):
        self.c = paramiko.SSHClient()
        self.c.load_system_host_keys()
        self.c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.c.connect(ip, username='someuser')
        self.trans = self.c.get_transport()

    def startTunnel(self):
        class SubHandler(Handler):
            chain_host = '127.0.0.1'
            chain_port = 5432
            ssh_transport = self.c.get_transport()

        def ThreadTunnel():
            global t
            t = ForwardServer(('', 3333), SubHandler)
            t.serve_forever()

        Thread(target=ThreadTunnel).start()

    def stopTunnel(self):
        t.shutdown()
        self.trans.close()
        self.c.close()
Although I will end up using a stopTunnel() type method, I realize that code isn't entirely correct; it's more an experiment in trying to get the tunnel to shut down properly and test my results.
When I first create the DBTunnel object and call startTunnel(), netstat yields the following:
tcp4 0 0 *.3333 *.* LISTEN
tcp4 0 0 MYIP.36316 REMOTE_HOST.22 ESTABLISHED
tcp4 0 0 127.0.0.1.5432 *.* LISTEN
Once I call stopTunnel(), or even delete the DBTunnel object itself, I'm left with this connection until I exit Python altogether, at which point what I assume to be the garbage collector takes care of it:
tcp4 0 0 *.3333 *.* LISTEN
It would be nice to figure out why this open socket is hanging around independent of the DBTunnel object, and how to close it properly from within my script. If I try to bind a different connection to a different IP using the same local port before completely exiting Python (TIME_WAIT is not the issue), then I get the infamous bind err 48, address in use. Thanks in advance :)

It appears the SocketServer's shutdown method isn't properly shutting down/closing the socket. With the changes below, I retain access to the SocketServer object and access its socket directly to close it. Note that socket.close() works in my case, but others might be interested in socket.shutdown() followed by socket.close() if other resources are accessing that socket.
[Ref: socket.shutdown vs socket.close]
def ThreadTunnel():
    self.t = ForwardServer(('127.0.0.1', 3333), SubHandler)
    self.t.serve_forever()

Thread(target=ThreadTunnel).start()

def stopTunnel(self):
    self.t.shutdown()
    self.trans.close()
    self.c.close()
    self.t.socket.close()
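For those who want the shutdown-then-close variant mentioned above, a hedged sketch against the same self.t ForwardServer (assumes the socket module is imported; shutting down a listening socket can raise on some platforms, hence the try/except):

def stopTunnel(self):
    self.t.shutdown()
    self.trans.close()
    self.c.close()
    # signal both directions first in case other resources still hold
    # the listening socket, then release the file descriptor
    try:
        self.t.socket.shutdown(socket.SHUT_RDWR)
    except socket.error:
        pass  # the listener may have no connection to shut down
    self.t.socket.close()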

Note that you don't have to do the SubHandler hack shown in the demo code. That comment in the demo is wrong: handlers do have access to their server's data. Inside a handler you can use self.server.instance_data.
If you use the following code, then in your Handler you would use:
self.server.chain_host
self.server.chain_port
self.server.ssh_transport
class ForwardServer(SocketServer.ThreadingTCPServer):
    daemon_threads = True
    allow_reuse_address = True

    def __init__(self, connection, handler, chain_host, chain_port, ssh_transport):
        SocketServer.ThreadingTCPServer.__init__(self, connection, handler)
        self.chain_host = chain_host
        self.chain_port = chain_port
        self.ssh_transport = ssh_transport
...
server = ForwardServer(('', local_port), Handler,
                       remote_host, remote_port, transport)
server.serve_forever()
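A hedged sketch of the matching Handler side, using the server attributes set above (the channel-forwarding body is elided; it is the same as in the demo's Handler.handle()):

class Handler(SocketServer.BaseRequestHandler):
    def handle(self):
        # instance data set on the server in __init__ above is reachable
        # from the handler through self.server
        chan = self.server.ssh_transport.open_channel(
            'direct-tcpip',
            (self.server.chain_host, self.server.chain_port),
            self.request.getpeername())
        # ... shuttle data between chan and self.request as in the demo ...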

You may want to add some synchronization between the spawned thread and the caller so that you don't try to use the tunnel before it is ready. Something like:
from threading import Event

def startTunnel(self):
    class SubHandler(Handler):
        chain_host = '127.0.0.1'
        chain_port = 5432
        ssh_transport = self.c.get_transport()

    mysignal = Event()
    mysignal.clear()

    def ThreadTunnel():
        global t
        t = ForwardServer(('', 3333), SubHandler)
        mysignal.set()
        t.serve_forever()

    Thread(target=ThreadTunnel).start()
    mysignal.wait()

You can also try sshtunnel. It has two ways to close a tunnel: .stop() if you want to wait until the end of all active connections, or .stop(force=True) to close all active connections immediately.
If you don't want to use it, you can check the source code for this logic here: https://github.com/pahaz/sshtunnel/blob/090a1c1/sshtunnel.py#L1423-L1456
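A minimal sketch of the sshtunnel approach, mirroring the question's setup (hostname, username, and ports are the question's placeholders):

from sshtunnel import SSHTunnelForwarder

server = SSHTunnelForwarder(
    'REMOTE_HOST',
    ssh_username='someuser',
    remote_bind_address=('127.0.0.1', 5432),
    local_bind_address=('127.0.0.1', 3333),
)
server.start()
# ... query the database through 127.0.0.1:3333 ...
server.stop()             # waits for active connections to finish
# server.stop(force=True) # or close them immediately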

Related

Paramiko: nest ssh session to another machine while preserving paramiko functionality (ProxyJump)

I'm trying to use paramiko to bounce an SSH session via netcat:
MyLocalMachine ----||----> MiddleMachine --(netcat)--> AnotherMachine
 ('localhost')  (firewall)   ('1.1.1.1')                ('2.2.2.2')
- There is no direct connection from MyLocalMachine to AnotherMachine.
- The SSH server on MiddleMachine will not accept any attempts to open a direct-tcpip channel connected to AnotherMachine.
- I can't use SSH keys. I can only connect via a given username and password.
- I can't use sshpass.
- I can't use PExpect.
- I want to connect automatically.
- I want to preserve all of paramiko's functionality.
I can achieve this partially using the following code:
cli = paramiko.SSHClient()
cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
proxy = paramiko.ProxyCommand('ssh user@1.1.1.1 nc 2.2.2.2 22')
cli.connect(hostname='2.2.2.2', username='user', password='pass', sock=proxy)
The thing is that, because ProxyCommand uses subprocess.Popen to run the given command, it asks me to give the password "ad hoc", from user input (it also requires the OS on MyLocalMachine to have ssh installed, which isn't always the case).
Since ProxyCommand's methods (recv, send) are simple bindings to the appropriate Popen methods, I was wondering if it would be possible to trick the paramiko client into using another client's session as the proxy?
Update 15.05.18: added the missing code (copy-paste gods haven't been favorable to me).
TL;DR: I managed to do it using a simple exec_command call and a class that pretends to be a sock.
To summarize:
- This solution does not use any port other than 22. If you can manually connect to the machine by nesting ssh clients, it will work. It requires no port forwarding and no configuration changes.
- It works without prompting for a password (everything is automatic).
- It nests ssh sessions while preserving paramiko functionality.
- You can nest sessions as many times as you want.
- It requires netcat (nc) installed on the proxy host, although anything that can provide basic netcat functionality (moving data between a socket and stdin/stdout) will work.
So, here be the solution:
The masquerader
The following code defines a class that can be used in place of paramiko.ProxyCommand. It supplies all the methods that a standard socket object does. The init method of this class takes the 3-tuple that exec_command() normally returns:
Note: It was tested extensively by me, but you shouldn't take anything for granted. It is a hack.
import paramiko
import time
import socket
from select import select

class ParaProxy(paramiko.proxy.ProxyCommand):
    def __init__(self, stdin, stdout, stderr):
        self.stdin = stdin
        self.stdout = stdout
        self.stderr = stderr
        self.timeout = None
        self.channel = stdin.channel

    def send(self, content):
        try:
            self.stdin.write(content)
        except IOError as exc:
            raise socket.error("Error: {}".format(exc))
        return len(content)

    def recv(self, size):
        try:
            buffer = b''
            start = time.time()
            while len(buffer) < size:
                select_timeout = self._calculate_remaining_time(start)
                ready, _, _ = select([self.stdout.channel], [], [],
                                     select_timeout)
                if ready and self.stdout.channel is ready[0]:
                    buffer += self.stdout.read(size - len(buffer))
        except socket.timeout:
            if not buffer:
                raise
        except IOError as e:
            return ""
        return buffer

    def _calculate_remaining_time(self, start):
        if self.timeout is not None:
            elapsed = time.time() - start
            if elapsed >= self.timeout:
                raise socket.timeout()
            return self.timeout - elapsed
        return None

    def close(self):
        self.stdin.close()
        self.stdout.close()
        self.stderr.close()
        self.channel.close()
The usage
The following shows how I used the above class to solve my problem:
# Connecting to MiddleMachine and executing netcat
mid_cli = paramiko.SSHClient()
mid_cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
mid_cli.connect(hostname='1.1.1.1', username='user', password='pass')
io_tupple = mid_cli.exec_command('nc 2.2.2.2 22')
# Instantiate the 'masquerader' class
proxy = ParaProxy(*io_tupple)
# Connecting to AnotherMachine and executing... anything...
end_cli = paramiko.SSHClient()
end_cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
end_cli.connect(hostname='2.2.2.2', username='user', password='pass', sock=proxy)
end_cli.exec_command('echo THANK GOD FINALLY')
Et voilà.
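Since sessions can be nested as many times as you want, here is a hedged sketch of adding a third hop (the host '3.3.3.3' is hypothetical; nc must also be installed on AnotherMachine):

# repeat the trick from AnotherMachine to reach a third machine
io_tupple2 = end_cli.exec_command('nc 3.3.3.3 22')
proxy2 = ParaProxy(*io_tupple2)
far_cli = paramiko.SSHClient()
far_cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
far_cli.connect(hostname='3.3.3.3', username='user', password='pass', sock=proxy2)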
Better to post this as a proposed answer; you can do the following.
The code is not tested and won't work as-is, as it is very incomplete. I would recommend checking this tutorial for reference: http://www.revsys.com/writings/quicktips/ssh-tunnel.html
From the middle machine:
ssh -f user@anothermachine -L 2000:localhost:22 -N
From localmachine:
paramiko.connect(middlemachine, 2000)
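A hedged sketch of what that pseudocode might look like with the actual paramiko API (hostnames and credentials are placeholders; note that -L binds to the middle machine's loopback by default, so the forward may need an explicit bind address such as -L 0.0.0.0:2000:localhost:22 to be reachable from the local machine):

import paramiko

# port 2000 on MiddleMachine is assumed to be forwarded to AnotherMachine:22
cli = paramiko.SSHClient()
cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
cli.connect(hostname='1.1.1.1', port=2000, username='user', password='pass')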

python multithreading server

I am new to network programming and Python.
I am trying to figure out how to run different jobs on the server side.
For example, I want one function to create connections for incoming clients, while at the same time I can still do some administration work from the terminal.
My code is below, but it doesn't work:
Edited: "it doesn't work" means it gets stuck in the init_conn() function.
Like:
starting up on localhost port 8887
Thread: 0 Connected with 127.0.0.1:48080
# waiting
I am looking into the SocketServer framework but don't know how that works.
from thread import *
import socket
import sys

def init_conn():
    thread_count = 0
    # Create a TCP/IP socket
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Bind the socket to the port
    server_address = ('localhost', 8887)
    print >>sys.stderr, 'starting up on %s port %s' % server_address
    sock.bind(server_address)
    # Listen for incoming connections
    sock.listen(10)
    # now keep talking with the client
    while 1:
        # wait to accept a connection - blocking call
        conn, addr = sock.accept()
        print 'Thread: ' + str(thread_count) + ' Connected with ' + addr[0] + ':' + str(addr[1])
        # start_new_thread takes a function name as 1st argument,
        # and the tuple of arguments to the function as 2nd
        start_new_thread(clientthread, (conn,))
        thread_count += 1
    sock.close()

def clientthread(conn):
    # receive data from client and send back
    pass

def console():
    print 'this is console'
    option = raw_input('-v view clients')
    if option == 'v':
        print 'you press v'

def main():
    start_new_thread( init_conn(),() )
    start_new_thread( console(),() )

if __name__ == "__main__":
    main()
Your problem is probably that you start the program, it sometimes prints "this is console", and then it ends.
The first bug is that you call the methods instead of passing their handles to start_new_thread. It must be:
start_new_thread(init_conn, ())
i.e. no () after the function name.
The program still doesn't do much because start_new_thread() apparently starts a thread and then waits for it to stop. The documentation is pretty unclear. It's better to use the newer threading module; see http://pymotw.com/2/threading/
def main():
    t = threading.Thread(target=init_conn)
    t.daemon = True
    t.start()
    console()
so the code will run until console() ends.
I suggest to split the server and the command line tool. Create a client which accepts commands from the command line and sends them to the server. That way, you can start the console from anywhere and you can keep the code for the two separate.
Seeing that you're new to python, have you tried taking a look at the threading module that comes with the standard library?
import threading
... #rest of your code

while conditions == True:
    i = threading.Thread(target=init_conn)
    c = threading.Thread(target=console)
    i.start()
    c.start()
Can't say I've done too much network programming with python, so I don't really have much to say on that front, but at least this should get you started with adding multithreading to your project.
Using SocketServer you may implement a client/server system. The documentation gives small examples which may be useful for you. Here is an extended example from there:
server.py :
import SocketServer
import os
import logging

FORMAT = '[%(asctime)-15s] %(message)s'
logging.basicConfig(format=FORMAT, level=logging.DEBUG)

class MyServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    # By setting this we allow the server to re-bind to the address by
    # setting SO_REUSEADDR, meaning you don't have to wait for
    # timeouts when you kill the server and the sockets don't get
    # closed down correctly.
    allow_reuse_address = True
    request_queue_size = 10

    def __init__(self, port):
        self.host = os.uname()[1]
        self.port = port
        SocketServer.TCPServer.__init__(self, (self.host, self.port), MyTCPHandler)
        logging.info("Server has been started on {h}:{p}".format(h=self.host, p=self.port))

class MyTCPHandler(SocketServer.BaseRequestHandler):
    """
    The RequestHandler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """
    def handle(self):
        # self.request is the TCP socket connected to the client
        # max length is here 1024 chars
        self.data = self.request.recv(1024).strip()
        logging.info("received: {d}".format(d=self.data))
        # here you may execute different functions according to the
        # request string
        # here: just send back the same data, but upper-cased
        self.request.sendall(self.data.upper())

PORT = 8887

if __name__ == "__main__":
    # Create the server, binding to localhost on port 8887
    #server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler)
    server = MyServer(PORT)
    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()
client.py
import socket
import sys
import logging

FORMAT = '[%(asctime)-15s] %(message)s'
logging.basicConfig(format=FORMAT, level=logging.DEBUG)

HOST, PORT = "workstation04", 8887
logging.info("connect to server {h}:{p}".format(h=HOST, p=PORT))

# read command line
data = " ".join(sys.argv[1:])

# Create a socket (SOCK_STREAM means a TCP socket)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Connect to server and send data
    sock.connect((HOST, PORT))
    sock.sendall(data + "\n")
    # Receive data from the server and shut down
    received = sock.recv(1024)
finally:
    sock.close()

logging.info("Sent: {}".format(data))
logging.info("Received: {}".format(received))
The output looks something like:
server side:
> python server.py
[2015-05-28 11:17:49,263] Server has been started on disasterarea:8887
[2015-05-28 11:17:50,972] received: my message
client side:
[2015-05-28 11:17:50,971] connect to server disasterarea:8887
[2015-05-28 11:17:50,972] Sent: my message
[2015-05-28 11:17:50,972] Received: MY MESSAGE
You can run several clients (from different consoles) in parallel. You may implement a request processor on the server side which processes the incoming requests and executes certain functions.
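A hedged sketch of such a request processor, extending the MyTCPHandler from server.py above (the command names and handlers are made up for illustration):

class DispatchingTCPHandler(SocketServer.BaseRequestHandler):
    # map request keywords to functions; both commands here are illustrative
    COMMANDS = {
        'upper': lambda arg: arg.upper(),
        'reverse': lambda arg: arg[::-1],
    }

    def handle(self):
        data = self.request.recv(1024).strip()
        name, _, arg = data.partition(' ')
        func = self.COMMANDS.get(name)
        if func is None:
            self.request.sendall('unknown command: %s' % name)
        else:
            self.request.sendall(func(arg))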
Alternatively, you may use the Python module ParallelPython, which executes Python code locally on a multicore system or on a cluster. Check its http examples.
I had to force pip to install this module:
pip install --allow-external pp --allow-unverified pp pp

How can I write a socket server in a different thread from my main program (using gevent)?

I'm developing a Flask/gevent WSGIserver webserver that needs to communicate (in the background) with a hardware device over two sockets using XML.
One socket is initiated by the client (my application) and I can send XML commands to the device. The device answers on a different port and sends back information that my application has to confirm. So my application has to listen to this second port.
Up until now I have issued a command, opened the second port as a server, waited for a response from the device and closed the second port.
The problem is that it's possible that the device sends multiple responses that I have to confirm. So my solution was to keep the port open and keep responding to incoming requests. However, in the end the device is done sending requests, and my application is still listening (I don't know when the device is done), thereby blocking everything else.
This seemed like a perfect use case for a thread, so that my application launches a listening server in a separate thread. Because I'm already using gevent as a WSGI server for Flask, I can use the greenlets.
The problem is, I have looked for a good example of such a thing, but all I can find is examples of multi-threading handlers for a single socket server. I don't need to handle a lot of connections on the socket server, but I need it launched in a separate thread so it can listen for and handle incoming messages while my main program can keep sending messages.
The second problem I'm running into is that in the server, I need to use some methods from my "main" class. Being relatively new to Python I'm unsure how to structure it in a way to make that possible.
class Device(object):
    def __init__(self, ...):
        self.clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def _connect_to_device(self):
        print "OPEN CONNECTION TO DEVICE"
        try:
            self.clientsocket.connect((self.ip, 5100))
        except socket.error as e:
            pass

    def _disconnect_from_device(self):
        print "CLOSE CONNECTION TO DEVICE"
        self.clientsocket.close()

    def deviceaction1(self, ...):
        # the data that is sent is an XML document that depends on
        # the parameters of this method.
        self._connect_to_device()
        self._send_data(XMLdoc)
        self._wait_for_response()
        return True

    def _send_data(self, data):
        print "SEND:"
        print(data)
        self.clientsocket.send(data)

    def _wait_for_response(self):
        print "WAITING FOR REQUESTS FROM DEVICE (CHANNEL 1)"
        self.serversocket.bind(('10.0.0.16', 5102))
        self.serversocket.listen(5) # listen for answer, maximum 5 connections
        connection, address = self.serversocket.accept()
        # the data is of a specific length I can calculate
        data = connection.recv(1024)
        if len(data) > 0:
            self._process_response(data)
        self.serversocket.close()

    def _process_response(self, data):
        print "RECEIVED:"
        print(data)
        # here is some code that processes the incoming data and
        # responds to the device
        # this may or may not result in more incoming data

if __name__ == '__main__':
    machine = Device(ip="10.0.0.240")
    machine.deviceaction1(...)
This is, globally (I left out sensitive information), what I'm doing now. As you can see, everything is sequential.
If anyone can provide an example of a listening server in a separate thread (preferably using greenlets) and a way to communicate from the listening server back to the spawning thread, it would be of great help.
Thanks.
EDIT:
After trying several methods, I decided to use Python's default select() to solve this problem. This worked, so my question regarding the use of threads is no longer relevant. Thanks to the people who provided input for their time and effort.
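For reference, a minimal sketch of the greenlet approach originally asked about, assuming gevent is installed (the handler body is illustrative, and the address is modeled on _wait_for_response above):

import gevent
from gevent.server import StreamServer

def handle(sock, address):
    # process one incoming connection from the device
    data = sock.recv(1024)
    if data:
        sock.sendall('confirmed\n')

# listen on the response port in a greenlet while the main
# program keeps sending commands on the client socket
server = StreamServer(('10.0.0.16', 5102), handle)
gevent.spawn(server.serve_forever)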
Hope it can provide some help. In the example class, if we call the tenMessageSender function, it fires up an async thread without blocking the main loop, and _zmqBasedListener then listens on a separate port for as long as that thread is alive. Whatever messages our tenMessageSender function sends will be received by the client, which responds back to _zmqBasedListener.
Server Side
import threading
import zmq
import sys

class Example:
    def __init__(self):
        self.context = zmq.Context()
        self.publisher = self.context.socket(zmq.PUB)
        self.publisher.bind('tcp://127.0.0.1:9997')
        self.subscriber = self.context.socket(zmq.SUB)
        self.thread = threading.Thread(target=self._zmqBasedListener)

    def _zmqBasedListener(self):
        self.subscriber.connect('tcp://127.0.0.1:9998')
        self.subscriber.setsockopt(zmq.SUBSCRIBE, "some_key")
        while True:
            message = self.subscriber.recv()
            print message
        sys.exit()

    def tenMessageSender(self):
        self._decideListener()
        for message in range(10):
            self.publisher.send("testid : %d: I am a task" % message)

    def _decideListener(self):
        if not self.thread.is_alive():
            print "STARTING THREAD"
            self.thread.start()
Client
import zmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9997')
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:9998')
subscriber.setsockopt(zmq.SUBSCRIBE, "testid")
count = 0

print "Listener"
while True:
    message = subscriber.recv()
    print message
    publisher.send('some_key : Message received %d' % count)
    count += 1
Instead of a thread you can also use a greenlet, etc., as sketched below.
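A hedged sketch of that greenlet variant, assuming gevent and pyzmq's zmq.green compatibility module are available:

import gevent
import zmq.green as zmq  # green sockets cooperate with gevent's hub

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9998')
subscriber.setsockopt(zmq.SUBSCRIBE, "some_key")

def listener():
    while True:
        print subscriber.recv()

# spawn the listener as a greenlet instead of a threading.Thread
gevent.spawn(listener)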

How to handle TCP connection events in order to call methods within another class?

I am creating a robot which is going to be driven by commands received over a TCP connection. Therefore, I will have a robot class with methods (e.g. sense(), drive()...) and a class for the TCP connection.
To establish the TCP connection, I looked at examples from twisted. On the client side, I have written a client.py script for connection handling:
from twisted.internet import reactor, protocol
import random
from eventhook import EventHook
import common
#from Common.socketdataobjects import response

# a client protocol
class EchoClient(protocol.Protocol):
    """Once connected, send a message, then print the result."""
    def connectionMade(self):
        self.transport.write("hello, world!")
        #the server should be notified that the connection to the robot has been established
        #along with robot state (position)
        #eventConnectionEstablishedHook.fire()

    def dataReceived(self, data):
        print "Server said:", data
        self.transport.write("Hello %s" % str(random.randint(1,10)))
        '''
        serverMessage = common.deserializeJson(data)
        command = serverMessage.command
        arguments = serverMessage.arguments
        #here we get for example command = "DRIVE"
        #arguments = {motor1Speed: 50, motor2Speed: 40}

        instead of above response, used for testing purposes,
        the commands should be extracted from the data and according to the command,
        the method in Robot instance should be called.
        When the command execution finishes, the self.transport.write() method should be called
        to notify the server that the command execution finished
        '''

    def connectionLost(self, reason):
        print "connection lost"

class EchoFactory(protocol.ClientFactory):
    protocol = EchoClient

    def clientConnectionFailed(self, connector, reason):
        print "Connection failed - goodbye!"
        reactor.stop()

    def clientConnectionLost(self, connector, reason):
        print "Connection lost - goodbye!"
        reactor.stop()

# this connects the protocol to a server running on port 8000
def initializeEventHandlers(connectionEstablishedHook):
    global connection
    connection.established = 0
    global eventConnectionEstablishedHook
    eventConnectionEstablishedHook = connectionEstablishedHook

def main():
    f = EchoFactory()
    reactor.connectTCP("localhost", 8000, f)
    reactor.run()

# this only runs if the module was *not* imported
if __name__ == '__main__':
    main()
Besides this script, I have a robot class:

class Robot(object):
    def __init__(self):
        self.position = (0, 0)

    def drive(self, speedMotor1, speedMotor2, driveTime):
        updateMotor1State(speedMotor1)
        updateMotor2State(speedMotor2)
        time.sleep(driveTime)
        #when the execution finishes, the finish status should be sent to the client in order to inform the server
        return "Finished"

    def sense(self):
        #logic to get the data from the environment
        pass
What I would like to do is receive the data (commands) from the TCP connection and then call the corresponding method on the Robot instance. Some procedures might take longer (e.g. driving), so I tried to use events, but I haven't figured out the appropriate way to communicate between the TCP client and the robot using events:
if __name__ == '__main__':
    robotController = Robot()
    eventController = Controller()
    connectionEstablishedHook = EventHook()
    client.initializeEventHandlers(connectionEstablishedHook)
    eventController.connection = connectionEstablishedHook
    client.main()
I tried to create a ClientMainProgram script, where I wanted to create an instance of the robot, an instance of the TCP client, and implement the communication between them using events.
Previously I managed to implement event handling using Michael Foord's events pattern on a simpler example. I would be very thankful if anyone could provide a solution to this question or any similar example which might help to solve this problem.
Events are easily represented using regular Python function calls.
For example, if your protocol looks like this:
from twisted.internet.protocol import Protocol

class RobotController(Protocol):
    def __init__(self, robot):
        self.robot = robot

    def dataReceived(self, data):
        for byte in data:
            self.commandReceived(byte)

    def commandReceived(self, command):
        if command == "\x00":
            # drive:
            self.robot.drive()
        elif command == "\x01":
            # sense:
            self.robot.sense()
        ...
(The specifics of the protocol used in this example are somewhat incidental. I picked this protocol because it's very simple and has almost no parsing logic. For your real application I suggest you use twisted.protocols.amp.)
Then all you need to do is make sure the robot attribute is properly initialized. You can do this easily using the somewhat newer endpoint APIs that can often replace use of factories:
from sys import argv

from twisted.internet.endpoints import clientFromString, connectProtocol
from twisted.internet.task import react

def main(reactor, description):
    robot = ...
    endpoint = clientFromString(reactor, description)
    connecting = connectProtocol(endpoint, RobotController(robot))
    def connected(controller):
        ...
    connecting.addCallback(connected)
    return connecting

react(main, argv[1:])
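Since twisted.protocols.amp is recommended above, here is a hedged sketch of what a command-based robot protocol might look like with it (the command and field names are invented to match the question's Robot class):

from twisted.protocols.amp import AMP, Command, Integer, Unicode

class Drive(Command):
    # hypothetical command: motor speeds and duration in, status out
    arguments = [('speedMotor1', Integer()),
                 ('speedMotor2', Integer()),
                 ('driveTime', Integer())]
    response = [('status', Unicode())]

class RobotAMP(AMP):
    def __init__(self, robot):
        AMP.__init__(self)
        self.robot = robot

    @Drive.responder
    def drive(self, speedMotor1, speedMotor2, driveTime):
        self.robot.drive(speedMotor1, speedMotor2, driveTime)
        return {'status': u'Finished'}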

python socket + multiprocessing

I'm currently working on a websocket implementation that allows multiprocessing over the same listening socket.
I'm able to achieve amazing performance with 4 processes on a quad-core machine.
When I go higher, like 8 processes, after 4 requests epoll.poll no longer fires any events. Interestingly, I tried running the same program with 2 listeners on 2 different ports. With 4 processes per listener, it blocks after 2 requests per socket. With 2 processes per listener, it all goes fine.
Any thoughts?
main.py (extract)
#create the WSServer
wsserver = WSServer(s.bind_ip, s.bind_port, s.max_connections)
# specify on how many process we'll run
wsserver.num_process = s.num_process
Process(target=wsserver.run,args=()).start()
wsserver.py (extract)
def serve_forever_epoll(wsserver):
    log(current_process())
    epoll = select.epoll()
    epoll.register(wsserver.socket.fileno(), select.EPOLLIN)
    try:
        client_map = {}
        while wsserver.run:
            events = epoll.poll(1)
            for fileno, event in events:
                if fileno == wsserver.socket.fileno():
                    channel, details = wsserver.socket.accept()
                    channel.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
                    aclient = wsclient.WSClient(channel, wsserver, process_server.client_manager)
                    client_map[channel.fileno()] = aclient
                    epoll.register(channel.fileno(), select.EPOLLIN)
                    log('Accepting client on %s' % current_process())
                    aclient.do_handshake()
                elif event & select.EPOLLIN:
                    aclient = client_map[fileno]
                    threading.Thread(target=aclient.interact).start()
    except Exception, e:
        log(e)
    finally:
        epoll.unregister(wsserver.socket.fileno())
        epoll.close()
        wsserver.socket.close()

class WSServer():
    def __init__(self, address, port, connections):
        self.address = address
        self.port = port
        self.connections = connections
        self.onopen = onopen
        self.onclose = onclose
        log('server init')
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        #self.socket.setblocking(0)
        self.socket.bind((self.address, int(self.port)))
        self.socket.listen(self.connections)

    def run(self, *args):
        multiprocessing.log_to_stderr(logging.DEBUG)
        log("Run server")
        try:
            log("Starting Server")
            self.run = True
            serve_forever = serve_forever_epoll
            for i in range(self.num_process - 1):
                log('Starting Process')
                Process(target=serve_forever, args=(self,)).start()
            serve_forever(self)
        except Exception as e:
            log("Exception-- %s " % e)
            pass
OK, so this weird case was finally traced to another module I was using.
I am using Pyro4 as a manager for keeping track of which process holds which client. This greatly simplifies the IPC and also lets me do some client filtering based on user_data.
The problem was that the Pyro4 daemon was running on the MainProcess but not on the main thread!...
As long as I had fewer than 4 processes, all was OK (don't ask me why).
Moving Pyro to the main process's main-thread event loop, it worked perfectly!
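A hedged sketch of the fix described here, assuming Pyro4's standard Daemon API (client_manager, serve_forever_epoll, and wsserver are the names from the code above):

import Pyro4
from multiprocessing import Process

# keep the Pyro4 daemon's request loop on the main thread of the main
# process, and push the epoll workers into child processes instead
daemon = Pyro4.Daemon()
uri = daemon.register(client_manager)
for _ in range(wsserver.num_process):
    Process(target=serve_forever_epoll, args=(wsserver,)).start()
daemon.requestLoop()  # blocks on the main thread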
So now I'm able to run 8, 16 or 32 processes on the same listening port, as well as spawn a new configuration to replicate it or expose a new endpoint for the websocket server!
Thanks for your contributions, and sorry for your time...
