Python raw_input doesn't work after using subprocess module - python

I'm using the subprocess module to invoke plink and run some commands on a remote server. This works as expected, but after a successful call to subprocess.check_call or subprocess.check_output the raw_input method seems to block forever and doesn't accept input at the command line.
I've reduced it to this simple example:
import subprocess

def execute(command):
    return subprocess.check_call('plink.exe -ssh ' + USER + '@' + HOST + ' -pw ' + PASSWD + ' ' + command)

input = raw_input('Enter some text: ')
print('You entered: ' + input)

execute('echo "Hello, World"')

# I see the following prompt, but it's not accepting input
input = raw_input('Enter some more text: ')
print('You entered: ' + input)
I see the same results with subprocess.check_call and subprocess.check_output. If I replace the final raw_input call with a direct read from stdin (sys.stdin.read(10)) the program does accept input.
This is Python 2.7 on Windows 7 x64. Any ideas what I'm doing wrong?
Edit: If I change execute to call something other than plink it seems to work okay.
def execute(command):
    return subprocess.check_call('cmd.exe /C ' + command)
This suggests that plink might be the problem. However, I can run multiple plink commands directly in a console window without issue.

I was able to resolve this by attaching stdin to devnull:
import os

def execute(command):
    return subprocess.check_call('plink.exe -ssh ' + USER + '@' + HOST + ' -pw ' + PASSWD + ' ' + command, stdin=open(os.devnull))
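On Python 3.3+ the same idea can be written with subprocess.DEVNULL, which avoids keeping an os.devnull file handle open in the parent. A sketch, where USER, HOST, PASSWD and the plink command line are placeholders from the question:

```python
import subprocess
import sys

def execute(argv):
    # subprocess.DEVNULL detaches the child's stdin, so a misbehaving
    # child cannot leave the console it shares with the parent in a
    # bad state.
    return subprocess.check_call(argv, stdin=subprocess.DEVNULL)

# The real call would look something like this (passing a list instead
# of a concatenated string also sidesteps quoting problems):
#   execute(['plink.exe', '-ssh', USER + '@' + HOST, '-pw', PASSWD, command])

# Harmless stand-in child so the sketch can be exercised anywhere:
rc = execute([sys.executable, '-c', 'pass'])
```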

Related

Python ffmpeg subprocess never exits on Linux, works on Windows

I wonder if someone can help explain what is happening?
I run 2 subprocesses, 1 for ffprobe and 1 for ffmpeg.
popen = subprocess.Popen(ffprobecmd, stderr=subprocess.PIPE, shell=True)
And
popen = subprocess.Popen(ffmpegcmd, shell=True, stdout=subprocess.PIPE)
On both Windows and Linux the ffprobe command fires, finishes and gets removed from taskmanager/htop. But only on Windows does the same happen to ffmpeg. On Linux the command remains in htop...
Can anyone explain what is going on, if it matters and how I can stop it from happening please?
EDIT: Here are the commands...
ffprobecmd = 'ffprobe' + \
    ' -user_agent "' + request.headers['User-Agent'] + '"' + \
    ' -headers "Referer: ' + request.headers['Referer'] + '"' + \
    ' -timeout "5000000"' + \
    ' -v error -select_streams v -show_entries stream=height -of default=nw=1:nk=1' + \
    ' -i "' + request.url + '"'
and
ffmpegcmd = 'ffmpeg' + \
    ' -re' + \
    ' -user_agent "' + r.headers['User-Agent'] + '"' + \
    ' -headers "Referer: ' + r.headers['Referer'] + '"' + \
    ' -timeout "10"' + \
    ' -i "' + r.url + '"' + \
    ' -c copy' + \
    ' -f mpegts' + \
    ' pipe:'
EDIT: Here is a example that behaves as described...
import flask
from flask import Response
import subprocess

app = flask.Flask(__name__)

@app.route('/', methods=['GET'])
def go():
    def stream(ffmpegcmd):
        popen = subprocess.Popen(ffmpegcmd, stdout=subprocess.PIPE, shell=True)
        try:
            for stdout_line in iter(popen.stdout.readline, ""):
                yield stdout_line
        except GeneratorExit:
            raise
    url = "https://bitdash-a.akamaihd.net/content/MI201109210084_1/m3u8s/f08e80da-bf1d-4e3d-8899-f0f6155f6efa.m3u8"
    ffmpegcmd = 'ffmpeg' + \
        ' -re' + \
        ' -timeout "10"' + \
        ' -i "' + url + '"' + \
        ' -c copy' + \
        ' -f mpegts' + \
        ' pipe:'
    return Response(stream(ffmpegcmd))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
You get the extra sh process because of shell=True, and your copies of ffmpeg are allowed to attach to the original terminal's stdin because you aren't overriding that file handle. To fix both of those issues (and some security bugs as well), switch to shell=False and set stdin=subprocess.DEVNULL. To stop zombies from potentially being left behind, note the finally: block below, which calls popen.poll() to see whether the child has exited and popen.terminate() to tell it to exit if it hasn't:
#!/usr/bin/env python
import flask
from flask import Response
import subprocess

app = flask.Flask(__name__)

@app.route('/', methods=['GET'])
def go():
    def stream(ffmpegcmd):
        popen = subprocess.Popen(ffmpegcmd, stdin=subprocess.DEVNULL, stdout=subprocess.PIPE)
        try:
            # NOTE: consider reading fixed-size blocks (4kb at least) at a time
            # instead of parsing binary streams into "lines".
            for stdout_line in iter(popen.stdout.readline, b""):
                yield stdout_line
        finally:
            if popen.poll() is None:
                popen.terminate()
                popen.wait()  # yes, this can cause things to actually block
    url = "https://bitdash-a.akamaihd.net/content/MI201109210084_1/m3u8s/f08e80da-bf1d-4e3d-8899-f0f6155f6efa.m3u8"
    ffmpegcmd = [
        'ffmpeg',
        '-re',
        '-timeout', '10',
        '-i', url,
        '-c', 'copy',
        '-f', 'mpegts',
        'pipe:'
    ]
    return Response(stream(ffmpegcmd))

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000)
Mind, it's not appropriate to be parsing a binary stream as a series of lines at all. It would be much more appropriate to use blocks (and to change your response headers so the browser knows to parse the content as a video).
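A rough sketch of what that block-oriented reading could look like (stream_blocks and the 64 KiB block size are illustrative choices, not part of the answer above):

```python
import subprocess
import sys

def stream_blocks(argv, block_size=64 * 1024):
    # Read the child's binary stdout in fixed-size chunks instead of
    # pretending an MPEG-TS stream has "lines".
    popen = subprocess.Popen(argv, stdin=subprocess.DEVNULL,
                             stdout=subprocess.PIPE)
    try:
        # read() returns b'' at EOF, which ends the loop.
        for block in iter(lambda: popen.stdout.read(block_size), b''):
            yield block
    finally:
        if popen.poll() is None:
            popen.terminate()
        popen.wait()

# Stand-in child that writes 100000 bytes, in place of ffmpeg:
data = b''.join(stream_blocks(
    [sys.executable, '-c', "import sys; sys.stdout.write('x' * 100000)"]))
```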
What type is the ffmpegcmd variable? Is it a string or a list/sequence?
Note that Windows and Linux/POSIX behave differently with the shell=True parameter enabled or disabled. It matters whether ffmpegcmd is a string or a list.
Direct excerpt from the documentation:
On POSIX with shell=True, the shell defaults to /bin/sh. If args is a
string, the string specifies the command to execute through the shell.
This means that the string must be formatted exactly as it would be
when typed at the shell prompt. This includes, for example, quoting or
backslash escaping filenames with spaces in them. If args is a
sequence, the first item specifies the command string, and any
additional items will be treated as additional arguments to the shell
itself. That is to say, Popen does the equivalent of:
Popen(['/bin/sh', '-c', args[0], args[1], ...])
On Windows with shell=True, the COMSPEC environment variable specifies
the default shell. The only time you need to specify shell=True on
Windows is when the command you wish to execute is built into the
shell (e.g. dir or copy). You do not need shell=True to run a batch
file or console-based executable.
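A small demonstration of that POSIX behavior, using echo as a stand-in command:

```python
import subprocess

# On POSIX, a *list* combined with shell=True passes only the first
# element as the shell's command string; the remaining elements become
# arguments to /bin/sh itself, which is almost never what you want:
out = subprocess.check_output(['echo first', 'echo second'], shell=True)
# 'echo second' ended up as $0 of the shell, so only "first" is printed.

# A plain string is handed to the shell as one complete command line:
out2 = subprocess.check_output('echo first && echo second', shell=True)
```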

Executing a shell script using python's paramiko library

I am trying to execute a shell script with command line arguments using python's paramiko library and the code is as shown below.
import paramiko

ip = input("Enter the ip address of the machine: ")
mac = input("Enter the mac address of the machine: ")
model = input("Enter the model of the box(moto/wb): ")
spec = input("Enter the spec of the box(A/B/C/CI/D/E): ")

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect('hostname', username='xxxx', password='yyyy')

stdin, stdout, stderr = ssh_client.exec_command('ls -l')
for line in iter(stdout.readline, ""):
    print(line, end="")

stdin, stdout, stderr = ssh_client.exec_command('./name.sh'+ ip+ model + spec+ mac)
for line in iter(stdout.readline, ""):
    print(line, end="")
print('finished.')
I am not getting the output of the second exec_command call. Instead it jumps directly to "finished.". How can I get the output of the command execution?
You are not getting output because there is none. Your command is not valid, so the shell fails. The only output is sent to stderr, which you do not print.
Details:
stdin, stdout, stderr = ssh_client.exec_command('./name.sh'+ ip+ model + spec+ mac)
Assuming the user put these values:
ip: 111.222.333.444
model: wb
spec: C
mac: 28:d2:33:e6:4e:73
Then your command is:
./name.sh111.222.333.444wbC28:d2:33:e6:4e:73
Everything is appended with no spaces in between. Simple fix:
stdin, stdout, stderr = ssh_client.exec_command('./name.sh "' + ip + '" "' + model + '" "' + spec + '" "' + mac + '"')
Your command will now be:
./name.sh "111.222.333.444" "wb" "C" "28:d2:33:e6:4e:73"
I put the " around the values to ensure it will work even if the user puts spaces in the values.
Or store your four variables in a list and use the join() function (https://www.geeksforgeeks.org/join-function-python/) to build your command.
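For example, a possible sketch using shlex.quote (Python 3) instead of hand-written double quotes, so shell metacharacters in the input are escaped as well (build_command is an illustrative helper, not from the answer):

```python
import shlex

def build_command(script, *args):
    # shlex.quote escapes spaces and shell metacharacters, so user
    # input can neither break the command line nor inject into it.
    return ' '.join([script] + [shlex.quote(str(a)) for a in args])

cmd = build_command('./name.sh', '111.222.333.444', 'wb', 'C',
                    '28:d2:33:e6:4e:73')
# -> ./name.sh 111.222.333.444 wb C 28:d2:33:e6:4e:73
```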

How to avoid displaying errors caused after running subprocess.call

When I run subprocess.call in Python, if the invoked command produces error messages, I would like not to display them to the user.
So for instance,
for i in range(len(official_links)):
    if(subprocess.call('pacman ' + '-Qi ' + official_links[i].replace('https://www.archlinux.org/packages/?q=', ''), shell=True, stdout=subprocess.PIPE) == 0):
        print(official_links[i].replace('https://www.archlinux.org/packages/?q=', '') + ' installed')
    else:
        print(official_links[i].replace('https://www.archlinux.org/packages/?q=', '') + ' not installed')
The command pacman -Qi packagename checks whether packagename is already installed. When I run my script, if the package is installed, I get no extra messages from the shell, only what I print. But if the package does not exist and an error occurs, both the error and my print get printed on the screen.
Is there a way to avoid printing command errors too?
Thanks.
Redirect the stderr as well:
if(subprocess.call('pacman ' + '-Qi ' + official_links[i].replace('https://www.archlinux.org/packages/?q=', ''),shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) == 0):
That's where the error is displayed.
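If you don't need the output at all, subprocess.DEVNULL (Python 3.3+) discards it without buffering it anywhere; with PIPE the output just sits unread in the pipe, which is fine only while it fits in the OS pipe buffer. A sketch with a stand-in command in place of pacman:

```python
import subprocess
import sys

# The stand-in child writes to stderr; DEVNULL throws both streams
# away, so nothing reaches the user's terminal.
rc = subprocess.call(
    [sys.executable, '-c', 'import sys; sys.stderr.write("some error")'],
    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
```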

Is there any difference in using subprocess.check_output() in Windows and OS X?

I want to use subprocess.check_output(cmd, shell=True) to execute cmd in Windows. It turns out that there is no output after executing this statement, but it works in OS X. I want to know if there is some problem when using shell=True. Here's my original source.
paper_name = sheet[location].value
name = '"' + paper_name + '"'
cmd = py + options + name + ' -t'
out_str = subprocess.check_output(cmd,shell=True)
pdb.set_trace()
#a = out_str.split('\n')
fp_str = to_str(out_str)
a = fp_str.split('\n')
cmd is like below
cmd

Python subprocess not returning

I want to call a Python script from Jenkins and have it build my app, FTP it to the target, and run it.
I am trying to build and the subprocess command fails. I have tried this with both subprocess.call() and subprocess.Popen(), with the same result.
When I evaluate shellCommand and run it from the command line, the build succeeds.
Note that I have 3 shell commands: 1) remove work directory, 2) create a fresh, empty, work directory, then 3) build. The first two commands return from subprocess, but the third hangs (although it completes when run from the command line).
What am I doing wrong? Or, what alternatives do I have for executing that command?
# +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
def ExecuteShellCommandAndGetReturnCode(arguments, shellCommand):
    try:
        process = subprocess.call(shellCommand, shell=True, stdout=subprocess.PIPE)
        #process.wait()
        return process #.returncode
    except KeyboardInterrupt, e: # Ctrl-C
        raise e
    except SystemExit, e: # sys.exit()
        raise e
    except Exception, e:
        print 'Exception while executing shell command : ' + shellCommand
        print str(e)
        traceback.print_exc()
        os._exit(1)

# +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
def BuildApplication(arguments):
    # See http://gnuarmeclipse.github.io/advanced/headless-builds/
    jenkinsWorkspaceDirectory = arguments.eclipseworkspace + '/jenkins'

    shellCommand = 'rm -r ' + jenkinsWorkspaceDirectory
    ExecuteShellCommandAndGetReturnCode(arguments, shellCommand)

    shellCommand = 'mkdir ' + jenkinsWorkspaceDirectory
    if not ExecuteShellCommandAndGetReturnCode(arguments, shellCommand) == 0:
        print "Error making Jenkins work directory in Eclipse workspace : " + jenkinsWorkspaceDirectory
        return False

    application = 'org.eclipse.cdt.managedbuilder.core.headlessbuild'
    shellCommand = 'eclipse -nosplash -application ' + application + ' -import ' + arguments.buildRoot + '/../Project/ -build myAppApp/TargetRelease -cleanBuild myAppApp/TargetRelease -data ' + jenkinsWorkspaceDirectory + ' -D DO_APPTEST'
    if not ExecuteShellCommandAndGetReturnCode(arguments, shellCommand) == 0:
        print "Error in build"
        return False
I Googled further and found this page, which, at 1.2 says
One way of gaining access to the output of the executed command would
be to use PIPE in the arguments stdout or stderr, but the child
process will block if it generates enough output to a pipe to fill up
the OS pipe buffer as the pipes are not being read from.
Sure enough, when I deleted the , stdout=subprocess.PIPE from the code above, it worked as expected.
As I only want the exit code from the subprocess, the above code is enough for me. Read the linked page if you want the output of the command.
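If you ever do need the output as well as the exit code, Popen.communicate() drains the pipes while waiting, so the child can never block on a full OS pipe buffer. A sketch with a stand-in command:

```python
import subprocess
import sys

# The stand-in child writes more than a typical 64 KiB pipe buffer;
# with call(..., stdout=PIPE) it would hang, but communicate() keeps
# reading the pipe while it waits for the child to exit.
proc = subprocess.Popen(
    [sys.executable, '-c', "print('x' * 200000)"],
    stdout=subprocess.PIPE)
out, _ = proc.communicate()
rc = proc.returncode
```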
