Suppress output of subprocess - python

I want to use the subprocess module to control some processes spawned via ssh.
By searching and testing I found that this works:
import subprocess
import os
import time

node = 'guest@localhost'
my_cmd = ['sleep', '1000']
devnull = open(os.devnull, 'wb')
cmd = ['ssh', '-t', '-t', node] + my_cmd
p = subprocess.Popen(cmd, stderr=devnull, stdout=devnull)
while True:
    time.sleep(1)
    print 'Normal output'
The -t -t option I provide allows me to terminate the remote process instead of just the ssh command. But it also scrambles my program's output: newlines are no longer effective, so everything runs together into one long, hard-to-read string.
How can I keep ssh from affecting the formatting of my Python program's output?
Sample output:
guest:~$ python2 test.py
Normal output
Normal output
Normal output
Normal output
Normal output
Normal output
Normal output
(First ctrl-c)
Normal output
Normal output
Normal output
(Second ctrl-c)
^CTraceback (most recent call last):
File "test.py", line 13, in <module>
time.sleep(1)
KeyboardInterrupt

OK, the output is now clear. I do not know exactly why, but the command ssh -t -t puts the local terminal in raw mode. It makes sense anyway, because -t is intended to let you use curses programs (such as vi) directly on the remote, and in that case no conversion should be done, not even the simple \n -> \r\n translation that makes a plain newline leave the cursor in the first column. I could not find a reference for this in the ssh documentation, though.
It (-t -t) allows you to kill the remote process because raw mode lets the Ctrl+C reach the remote instead of being processed by the local tty driver.
IMHO, this is a design smell: you rely on one side effect of the pty allocation to pass a Ctrl+C to the remote, and you suffer from another side effect, which is raw mode on the local terminal. You should instead take over standard input (stdin=subprocess.PIPE) and explicitly send a chr(3) when you see the interrupt character on the local keyboard, or install a signal handler for SIGINT that does it.
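For example, here is a minimal sketch of that signal-handler variant. It keeps -t -t so the remote still gets a pty, but hands ssh a pipe for stdin, so the local terminal is never put in raw mode:
import signal
import subprocess

p = subprocess.Popen(['ssh', '-t', '-t', 'guest@localhost', 'sleep', '1000'],
                     stdin=subprocess.PIPE)

def forward_sigint(signum, frame):
    # chr(3) is ETX; the remote pty turns it into a SIGINT
    # for the remote process
    p.stdin.write(chr(3))
    p.stdin.flush()

signal.signal(signal.SIGINT, forward_sigint)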
Alternatively, as a workaround, you can simply run something like os.system("stty opost -igncr") (or better, its subprocess equivalent) after starting the remote command, to reset the local terminal to an acceptable mode.
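The subprocess equivalent is a one-liner, since stty acts on the terminal attached to the process's standard input:
import subprocess

# Re-enable output post-processing on the controlling terminal
subprocess.call(['stty', 'opost', '-igncr'])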

Related

How to disable stdout buffer when running shell

I am using Python to call a shell script with:
import subprocess

def run_command(cmd):
    print "Start to run: " + cmd
    run = subprocess.Popen(cmd, shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        line = run.stdout.readline().decode()[:-1]
        if line == '' and run.poll() is not None:
            break
        print line  # print the log from shell
    recode = run.returncode
    if recode != 0:
        raise Exception("Error occurs!")
    print "End to run: " + cmd
Then I run
run_command("sh /home/tome/a.sh")
I notice that the console output from a.sh is not shown in real time; it looks like stdout is buffered, and the output is only printed once the buffer is full.
How can I disable the stdout buffering for my script a.sh?
Thanks!
The buffering in question would largely be a problem on the script's side, not the Python side; while Python would buffer the reads, it wouldn't block unless the buffer was emptied and there was nothing available to read.
So really, you need to disable buffering in the script itself. Prefixing your commands with stdbuf -oL (or -o0 for completely unbuffered, though line buffering should cover you since you read by line as well) should help in the cases where the programs don't adjust their own buffering internally.
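For example, with the run_command() helper from the question (stdbuf comes with GNU coreutils, so this assumes a Linux-like system):
# Line-buffer the script's stdout so each line reaches the pipe
# as soon as it is printed
run_command("stdbuf -oL sh /home/tome/a.sh")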
If you're seeing this behavior only by looking at Python's output, be aware that Python itself can buffer output as well. You can disable this by passing -u when running Python, by setting the environment variable PYTHONUNBUFFERED=1 before running it, or, from within a script, by manually calling sys.stdout.flush() after any writes (direct, or implicit via print) to stdout. On Python 3, print takes a flush argument to force a flush after printing, but since you're on Python 2.x, that's not an option.
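For instance, on Python 2:
import sys

print "some progress message"  # may sit in Python's own stdout buffer
sys.stdout.flush()             # push it out to the console immediately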

SSH to remote server - and write results to local server

So I want to get some info from a remote appliance that I query from my local server, and instead of having the results go to my local screen, I want to write them to a local file. I can see examples using paramiko, but I am having issues installing it for Python 3, which is what I prefer to use, so I am trying subprocess. The unique thing is that this remote appliance accepts only a limited set of commands; I literally have to run a 'show' command on it, so there is no file to SCP, hence the reason I did not use SCP.
This will write it to my screen, but that does not do me much good :(
xfer = subprocess.Popen(["ssh", "user@mysystem.com", "show my_secret_file"],
                        stderr=subprocess.PIPE)
errdata = xfer.communicate()[1]
Is this possible?
Assuming your appliance writes its output to stdout, that output will be returned by prog.communicate(), as long as you asked for stdout in Popen().
You can then save the returned stdout to a file using the standard file IO functions.
In other words, here's how it would work:
import subprocess

# Call subprocess and save stdout and stderr
prog = subprocess.Popen(["ssh", "user@mysystem.com", "show my_secret_file"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
#                       ^ Add this bit
out, err = prog.communicate()

# Do your error handling here...
# ...

# Now write to file
writefile = open("Put your file name here", "w")
writefile.write(out.decode("utf-8"))
writefile.close()
Note that the above assumes the output is text; out comes back as bytes, hence the decode before writing. If the output is binary data, skip the decoding and open the file in binary mode instead.
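For example, to keep the raw bytes and skip the decoding (the file name here is a placeholder):
# Open in binary mode and write the captured stdout as-is
with open("my_output.bin", "wb") as writefile:
    writefile.write(out)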

python subprocess.Popen stdin.write

I'm new to Python and would like to open a Windows cmd prompt, start a process, leave the process running, and then issue commands to that same running process.
The commands will change, so I can't just include them in the cmdline variable below. Also, the process takes 10-15 seconds to start, so I don't want to waste time restarting the process for every command; I just want to start the process once and run quick commands as needed in the same process.
I was hoping to use subprocess.Popen to make this work, though I am open to better methods. Note that my process to run is not cmd; I'm just using it as an example.
import subprocess
cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi") #would like this to be written to the cmd prompt
print cmd.stdout.readline() #would like to see 'hi' readback
cmd.stdin.write("echo hi again") #would like this to be written to the cmd prompt
print cmd.stdout.readline() #would like to see 'hi again' readback
The results aren't what I expect. It seems as though the stdin.write commands aren't actually getting in, and the readline freezes up with nothing to read.
I have tried popen.communicate() instead of write/readline, but it kills the process. I have also tried setting bufsize in the Popen line, but that didn't make much difference.
Your comments suggest that you are confusing command-line arguments with input via stdin. Namely, the fact that the system-console.exe program accepts a script=filename parameter does not imply that you can send it the same string as a command via stdin; e.g., the python executable accepts -c "print(1)" as command-line arguments, but it is a SyntaxError if you type the same thing as a command in the Python shell.
Therefore, the first step is to use the correct syntax. Suppose the system-console.exe accepts a filename by itself:
#!/usr/bin/env python3
import time
from subprocess import Popen, PIPE

with Popen(r'C:\full\path\to\system-console.exe -cli -',
           stdin=PIPE, bufsize=1, universal_newlines=True) as shell:
    for _ in range(10):
        print('capture.tcl', file=shell.stdin, flush=True)
        time.sleep(5)
Note: if you've redirected more than one stream (e.g., both stdin and stdout) then you should read/write both streams concurrently (e.g., using multiple threads); otherwise it is very easy to deadlock your program.
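Here is a minimal sketch of that threading approach (the command name is a placeholder):
import threading
from subprocess import Popen, PIPE

def drain(pipe):
    # Keep consuming the child's stdout so its pipe never fills up
    # while the main thread writes to stdin
    for line in iter(pipe.readline, ''):
        print(line, end='')

proc = Popen(['some-command'], stdin=PIPE, stdout=PIPE,
             universal_newlines=True)
reader = threading.Thread(target=drain, args=(proc.stdout,), daemon=True)
reader.start()
proc.stdin.write('first command\n')
proc.stdin.flush()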
Related:
Q: Why not just use a pipe (popen())? -- mandatory reading for the Unix environment, but it may also apply to some programs on Windows
subprocess readline hangs waiting for EOF -- a code example of how to pass multiple inputs and read multiple outputs using the subprocess and pexpect modules.
The second and following steps might have to deal with buffering issues on the side of the child process (out of your hands on Windows), with whether system-console allows its stdin/stdout to be redirected or works with a console directly, and with character encoding issues (how the various commands in the pipeline encode text).
Here is some code that I tested and that works on Windows 10, Quartus Prime 15.1 and Python 3.5:
import subprocess

class altera_system_console:
    def __init__(self):
        sc_path = r'C:\altera_lite\15.1\quartus\sopc_builder\bin\system-console.exe --cli --disable_readline'
        self.console = subprocess.Popen(sc_path, stdin=subprocess.PIPE,
                                        stdout=subprocess.PIPE)

    def read_output(self):
        # Read one byte at a time until the Tcl prompt '% ' appears
        rtn = ""
        loop = True
        i = 0
        match = '% '
        while loop:
            out = self.console.stdout.read1(1)
            if bytes(match[i], 'utf-8') == out:
                i = i + 1
                if i == len(match):
                    loop = False
            else:
                # A partial prompt match failed: keep the characters we
                # consumed while matching, then start over
                rtn = rtn + match[:i] + out.decode('utf-8')
                i = 0
        return rtn

    def cmd(self, cmd_string):
        self.console.stdin.write(bytes(cmd_string + '\n', 'utf-8'))
        self.console.stdin.flush()

c = altera_system_console()
print(c.read_output())
c.cmd('set jtag_master [lindex [get_service_paths master] 0]')
print(c.read_output())
c.cmd('open_service master $jtag_master')
print(c.read_output())
c.cmd('master_write_8 $jtag_master 0x00 0xFF')
print(c.read_output())
You need to use iter if you want to see the output in real time:
import subprocess

cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi\n")        # would like this to be written to the cmd prompt
for line in iter(cmd.stdout.readline, ""):
    print line
cmd.stdin.write("echo hi again\n")  # would like this to be written to the cmd prompt
Not sure exactly what you are trying to do, but if you want to input certain data when you get certain output, then I would recommend using pexpect.

run command in interactively opened bash with python

I use this script to run an interactive VLC shell:
import os
import sys

if len(sys.argv) > 1:
    tmp = os.popen('vlc -I rc --novideo --noaudio --rc-fake-tty -q udp://@1.2.3.4:1234').read()
else:
    print "Error: no input"
Now, in this opened shell, I would like to run the 'info' command. How can I do this?
If in bash I type
vlc -I rc --novideo --noaudio --rc-fake-tty -q udp://@1.2.3.4:1234
it shows this:
VLC media player 2.0.8 Twoflower (revision 2.0.8a-0-g68cf50b)
VLC media player 2.0.8 Twoflower
Command Line Interface initialized. Type `help' for help.
>
and it waits for a command.
This can be done with pure pipes, but it's going to be hard. And even harder if you use os.popen() instead of using subprocess.
The right way to script an interactive program is to use a higher-level library that's designed to make it easy, like pexpect. Then you just write something like:
import pexpect
child = pexpect.spawn('vlc -I rc --novideo --noaudio --rc-fake-tty -q udp://@1.2.3.4:1234')
child.expect('>')
child.sendline('info')
response = child.before
However, a much better solution is to not run VLC in interactive mode; just run it in batch mode and pass it commands. Going out of your way to have it treat your input as a TTY just so you can try to figure out how to act like a human at a TTY is making things harder for no good reason.
Or, even better, use libVLC instead. As you can see from that link, there are Python bindings for it.
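For instance, a minimal sketch with the python-vlc bindings (assuming pip install python-vlc); it plays the stream directly through libVLC instead of driving the rc interface over a pipe:
import vlc

player = vlc.MediaPlayer('udp://@1.2.3.4:1234')
player.play()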
If you really want to do it interactively, and you want to do it manually over pipes, you will have to be very careful. If you don't mind just deadlocking on any unexpected results, you can do something like this:
import subprocess

child = subprocess.Popen(['vlc', '-I', 'rc', '--novideo', '--noaudio',
                          '--rc-fake-tty', '-q', 'udp://@1.2.3.4:1234'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)

def split_on_prompts():
    rbuf = ''
    while True:
        newbuf = child.stdout.read()
        rbuf += newbuf
        out, prompt, rest = rbuf.partition('\n>')
        if prompt:
            yield out
            rbuf = rest
        if not newbuf:
            yield rest
            return

output = split_on_prompts()
banner = next(output)
child.stdin.write('info\n')
response = next(output)
# etc.
As you can see, this is a lot less fun.
And if you insist on using os.popen instead, even though it's deprecated and even more painful to use: you obviously can't write to the pipe if you open it in the default 'r' mode, just like any other file-like object, and of course tacking .read() on the end means you don't even have the popen object anymore; you just stored the first buffer it gave you and then leaked the handle. If you change that to open in 'r+' mode, if that works on your platform, and you store the popen object itself, you can use it similarly to the subprocess.Popen object above, using child.write and child.read instead of child.stdin.write and child.stdout.read.

Displaying output of shell commands with shared environments

Is there any way to display the output of a shell command in Python, as the command runs?
I have the following code to send commands to a specific shell (in this case, /bin/tcsh):
import subprocess
import select

cmd = subprocess.Popen(['/bin/tcsh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
poll = select.poll()
poll.register(cmd.stdout.fileno(), select.POLLIN)

# The list "commands" holds a list of shell commands
for command in commands:
    cmd.stdin.write(command)
    # Must include this to ensure data is passed to child process
    cmd.stdin.flush()
    ready = poll.poll()
    if ready:
        result = cmd.stdout.readline()
        print result
Also, I got the code above from this thread, but I am not sure I understand how the polling mechanism works.
What exactly is registered above?
Why do I need the variable ready if I don't pass any timeout to poll.poll()?
Yes, it is entirely possible to display the output of a shell command as the command runs. There are two requirements:
1) The command must flush its output.
Many programs buffer their output differently according to whether the output is connected to a terminal, a pipe, or a file. If connected to a pipe, they might write their output in much bigger chunks, much less often. For each program that you execute, consult its documentation. Some versions of /bin/cat, for example, have the -u switch for unbuffered output.
2) You must read it piecemeal, and not all at once.
Your program must be structured to read one piece at a time from the output stream. This means that you ought not do any of these, which each read the entire stream in one go:
cmd.stdout.read()
for i in cmd.stdout:
cmd.stdout.readlines()
But instead, you could do one of these:
while not_dead_yet:
    line = cmd.stdout.readline()

for line in iter(cmd.stdout.readline, b''):
    pass
Now, for your three specific questions:
Is there any way to display the output of a shell command in Python, as the command runs?
Yes, but only if the command you are running outputs as it runs and doesn't save it up for the end.
What exactly is registered above?
The file descriptor which, when read, makes available the output of the subprocess.
Why do I need the variable ready if I don't pass any timeout to poll.poll()?
You don't. You also don't need the poll(). It is possible, if your commands list is fairly large, that you might need to poll() both the stdin and stdout streams to avoid a deadlock. But if your commands list is fairly modest (less than 5 Kbytes), then you will be OK just writing it all at the beginning.
Here is one possible solution:
#! /usr/bin/python
import subprocess
import select

# Critical: all of this must fit inside ONE pipe() buffer
commands = ['echo Start\n', 'date\n', 'sleep 10\n', 'date\n', 'exit\n']

cmd = subprocess.Popen(['/bin/tcsh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# The list "commands" holds a list of shell commands
for command in commands:
    cmd.stdin.write(command)
    # Must include this to ensure data is passed to child process
    cmd.stdin.flush()

for line in iter(cmd.stdout.readline, b''):
    print line
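And, for command lists too large to fit in one pipe buffer, a sketch of the poll-both-streams variant mentioned above. It assumes the shell's output stays line-oriented, since readline() is still used to consume it:
import subprocess
import select

commands = ['echo Start\n', 'date\n', 'sleep 10\n', 'date\n', 'exit\n']

cmd = subprocess.Popen(['/bin/tcsh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
poller = select.poll()
poller.register(cmd.stdout.fileno(), select.POLLIN)
poller.register(cmd.stdin.fileno(), select.POLLOUT)

pending = list(commands)
while True:
    for fd, event in poller.poll():
        if fd == cmd.stdout.fileno():
            line = cmd.stdout.readline()
            if not line:
                # EOF: the shell has exited
                raise SystemExit
            print line,
        elif pending:
            # stdin has room: feed it the next command
            cmd.stdin.write(pending.pop(0))
            cmd.stdin.flush()
            if not pending:
                # Nothing left to write; stop watching stdin
                poller.unregister(cmd.stdin.fileno())
                cmd.stdin.close()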
