I'm trying to run these commands from a Python script:
def raw(path_avd_py, path_avd, snp_name, out_file):
    if OS == 'Windows':
        cmd_raw = (f"wsl.exe -e sh -c 'python3 {path_avd_py} -a {path_avd} "
                   f"-s {snp_name} -o {out_file}'")
    else:
        cmd_raw = f'python3 {path_avd_py} -a {path_avd} -s {snp_name} -o {out_file}'
    subprocess.Popen(cmd_raw, shell=True)
    time.sleep(25)
    return None
def idiffer(i_path, raw_1, raw_2, path, state):
    if OS == 'Windows':
        cmd_idiff = f"wsl.exe -e sh -c 'python3 {i_path} {raw_1} {raw_2}'"
    [...]
    file = os.path.join(path, f'{state}.idiff')
    with open(file, 'w') as f:
        subprocess.Popen(cmd_idiff, stdout=f, text=True)
If I execute cmd_raw with subprocess.run from a Python shell (PowerShell), things work. If I try running it via the script, this exception occurs, with either shell:
-e sh: avdecrypt-master\avdecrypt.py: 1: Syntax error: Unterminated quoted string
-e bash: avdecrypt-master\avdecrypt.py: -c: line 0: unexpected EOF while looking for matching `''
avdecrypt-master\avdecrypt.py: -c: line 1: syntax error: unexpected end of file
I already tried os.system and subprocess.run([list]); no change.
Thanks for the help!
For those who have a similar question, I found a solution, which is working for me:
Apparently the whole script invocation with its argv has to be passed as one single quoted argument, and it can then be executed via run (important in my case, because the process has to be terminated). This leads to a form like:
cmd = ['wsl.exe', '-e', 'bash', '-c', '-a foo -b bar [...]']
subprocess.run(cmd, shell=True)
The shlex library helps here, splitting the string the way subprocess needs it:
cmd_finished = shlex.split(cmd)
https://docs.python.org/3/library/shlex.html
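A rough sketch of that form (the inner command string here is hypothetical; the key point is that the whole script invocation stays one single argument after bash -c):

import shlex
import subprocess

# Hypothetical inner command; in the real script it is built from
# path_avd_py, path_avd, snp_name and out_file.
inner = "python3 avdecrypt.py -a avd_dir -s snap_1 -o out.raw"

# Everything after 'bash -c' stays one single list element:
cmd = ['wsl.exe', '-e', 'bash', '-c', inner]
subprocess.run(cmd)  # a list command does not need shell=True

# shlex.split helps when you start from one command string:
print(shlex.split("python3 avdecrypt.py -a 'my avd' -o out.raw"))
# -> ['python3', 'avdecrypt.py', '-a', 'my avd', '-o', 'out.raw']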
Related
I have written C code that converts one file format to another. The program takes one command-line argument: filestem.
I run it like this: ./executable_file filestem > outputfile
and my desired output ends up in outputfile.
Now I want to run that executable from a Python script.
I am trying:
import subprocess
import sys
filestem = sys.argv[1]
subprocess.run(['/home/dev/executable_file', filestem , 'outputfile'])
But it is unable to create the outputfile. I think something extra is needed to handle the > redirection, but I can't figure out what. Please help.
subprocess.run has an optional stdout argument; you can give it a file handle, so in your case something like
import subprocess
import sys
filestem = sys.argv[1]
with open('outputfile', 'wb') as f:
    subprocess.run(['/home/dev/executable_file', filestem], stdout=f)
should work. I don't have the ability to test it, so please run it and report whether it works as intended.
You have several options:
NOTE - Tested in CentOS 7, using Python 2.7
1. Try pexpect:
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import sys
import pexpect

filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
command_output, exitstatus = pexpect.run("/usr/bin/bash -c '{0}'".format(cmd), withexitstatus=True)
if exitstatus == 0:
print(command_output)
else:
print("Houston, we've had a problem.")
2. Run subprocess with shell=True (not recommended):
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import sys
import subprocess
filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
result = subprocess.check_output(cmd, shell=True)  # or subprocess.call(cmd, shell=True)
print(result)
It works, but python.org frowns upon this, due to the chance of a shell injection: see "Security Considerations" in the subprocess documentation.
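If the only shell feature needed is the >> redirection, a hedged alternative is to skip the shell entirely and redirect via stdout=, reusing the ls -lh example and the outputfile name from above:

import subprocess
import sys

filestem = sys.argv[1]
# Append the ls -lh output to outputfile without invoking a shell
with open('outputfile', 'ab') as out:
    subprocess.check_call(['ls', '-lh', filestem], stdout=out)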
3. If you must use subprocess, run each command separately and take the STDOUT of the previous command and pipe it into the STDIN of the next command:
p1 = subprocess.Popen(cmd1, stdout=PIPE)
p2 = subprocess.Popen(cmd2, stdin=p1.stdout, stdout=PIPE)
stdout_data, stderr_data = p2.communicate()
etc...
Good luck with your code!
I'm having a problem with my subprocess command; I'd like to grep out the lines that match "Online".
def run_command(command):
    p = subprocess.Popen(command, shell=False,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')
command = 'mosquitto_sub -u example -P example -t ITT/# -v | grep "Online" '.split()
for line in run_command(command):
    print(line)
But I get this error:
Error: Unknown option '|'.
Use 'mosquitto_sub --help' to see usage.
But when running with linux shell
user#server64:~/Pythoniscriptid$ mosquitto_sub -u example -P example -t ITT/# -v | grep "Online"
ITT/C5/link Online
ITT/IoT/tester55/link Online
ITT/ESP32/TEST/link Online
I also tried shell=True, but with no success, because I get another error saying it doesn't recognize the topic ITT/#:
Error: You must specify a topic to subscribe to.
Use 'mosquitto_sub --help' to see usage.
The "possible duplicate" didn't help me at all, so I think I'm having a different problem. I tried changing the code to this, but I'm not getting any output:
def run_command(command, command2):
    p1 = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    p2 = subprocess.Popen(command2, stdin=p1.stdout, stdout=subprocess.PIPE)
    return iter(p2.stdout.readline, '')
command = 'mosquitto_sub -u example -P example -t ITT/# -v'.split()
command2 = 'grep Online'.split()
#subprocess.getoutput(command)
for line in run_command(command, command2):
    print(line)
When you split the text, the list will look like
['mosquitto_sub', ..., 'ITT/#', '-v', '|', 'grep', '"Online"']
When you pass this list to subprocess.Popen, a literal '|' will be one of the arguments to mosquitto_sub.
If you use shell=True, you must escape any special characters like # in the command, for instance with double quotes:
import subprocess
command = 'echo -e "ITT/#\\ni am Online\\nbar Online\\nbaz" | grep "Online" '
p = subprocess.Popen(
    command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
    print(line)
Alternatively, connect the pipes as you wrote, but make sure to iterate until b'', not u'':
import subprocess
def run_command(command, command2):
    p1 = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    p2 = subprocess.Popen(command2, stdin=p1.stdout, stdout=subprocess.PIPE)
    return iter(p2.stdout.readline, b'')
command = ['echo', '-e', 'ITT/#\\ni am Online\\nbar Online\\nbaz']
command2 = 'grep Online'.split()
for line in run_command(command, command2):
    print(line)
I am trying to run a command-line call through a Python script. The script triggers the .exe, but it throws an error: System.IO.IOException: The handle is invalid.
Here is my code:
import os , sys , os.path
from subprocess import call
import subprocess, shlex
def execute(cmd):
    """
    Purpose : To execute a command and return exit status
    """
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (result, error) = process.communicate()
    rc = process.wait()
    if rc != 0:
        print "Error: failed to execute command:", cmd
        print error
    return result
found_alf = r"C:\AniteSAS\ResultData\20170515\Run01\1733200515.alf"
filter_alvf = r"C:\Users\sshaique\Desktop\ALF\AniteLogFilter.alvf"
command = str(r'ALVConsole.exe -e -t -i ' + '\"'+found_alf+'\"' + ' --ffile ' + '\"'+filter_alvf+'\"')
print command
os.chdir('C:\Program Files\Anite\LogViewer\ALV2')
print os.getcwd()
print "This process detail: \n", execute(command)
Output is as follows :
ALVConsole.exe -e -t -i "C:\AniteSAS\ResultData\20170515\Run01\1733200515.alf" --ffile "C:\Users\sshaique\Desktop\ALF\AniteLogFilter.alvf"
C:\Program Files\Anite\LogViewer\ALV2
This process detail:
Error: failed to execute command: ALVConsole.exe -e -t -i "C:\AniteSAS\ResultData\20170515\Run01\1733200515.alf" --ffile "C:\Users\sshaique\Desktop\ALF\AniteLogFilter.alvf"
Unhandled Exception: System.IO.IOException: The handle is invalid.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.Console.GetBufferInfo(Boolean throwOnNoConsole, Boolean& succeeded)
at ALV.Console.CommandLineParametersHandler.ConsoleWriteLine(String message, Boolean isError)
at ALV.Console.CommandLineParametersHandler.InvokeActions()
at ALV.Console.Program.Main(String[] args)
When I copy the command line argument from the above output and run manually from cmd it works fine.
ALVConsole.exe -e -t -i "C:\AniteSAS\ResultData\20170515\Run01\1733200515.alf" --ffile "C:\Users\sshaique\Desktop\ALF\AniteLogFilter.alvf"
I am using Windows 7 and Python 2.7.13. Please suggest how to overcome this issue.
EDIT:
I have also tried to pass the command as a list s, as in the code below, but the issue remains the same.
command = str(r'ALVConsole.exe -e --csv -i ' + '\"'+found_alf+'\"' + ' --ffile ' + '\"'+filter_alvf+'\"')
s=shlex.split(command)
print s
print "This process detail: \n", execute(s)
Based on your error messages I think that this problem is with ALVConsole.exe, not your Python script.
When you redirect the output, ALVConsole.exe tries to do something to the console (like setting cursor position, or getting the size of the terminal) but fails like this.
Is there a flag to ALVConsole.exe that modifies the output to a machine-readable version? I wasn't able to find the documentation for this program.
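If that is the cause, one untested workaround sketch is to let ALVConsole.exe keep the real console (no PIPE redirection) and only check the exit status, reusing the paths from the question:

import subprocess

found_alf = r"C:\AniteSAS\ResultData\20170515\Run01\1733200515.alf"
filter_alvf = r"C:\Users\sshaique\Desktop\ALF\AniteLogFilter.alvf"

cmd = ['ALVConsole.exe', '-e', '-t', '-i', found_alf, '--ffile', filter_alvf]
# No stdout/stderr pipes: the child inherits the console window, so its
# console-buffer calls should not fail with an invalid handle.
rc = subprocess.call(cmd, cwd=r'C:\Program Files\Anite\LogViewer\ALV2')
print(rc)  # non-zero means ALVConsole reported a failure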
When I run a command through subprocess, I get exit status 1 without my print output or the error being raised.
Here is my code:
def generate_model(self):
    if not ((self.username == None) or (self.password == None) or (self.database == None)):
        cmd = "python -m pwiz -e %s -H %s -u %s -P %s %s > %s" % (self.engine, self.host, self.username, self.password, self.database, self.database + ".py")
        print subprocess.check_call(cmd)
    else:
        raise ValueError
The command asks for an input once the terminal is opened. After that it closes with exit status 1.
When I run the same command directly in the command prompt, it works fine.
subprocess.check_call() does not run the shell by default, and therefore the redirection operator > won't work. To redirect stdout, pass the stdout parameter instead:
with open(filename, 'wb', 0) as file:
    check_call([sys.executable, '-m', 'pwiz', '-e', ...], stdout=file)
Related: Python subprocess.check_output(args) fails, while args executed via Windows command line work OK.
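A fuller, untested sketch of how generate_model could look with that change, keeping the attribute names and pwiz flags from the question:

import sys
from subprocess import check_call

def generate_model(self):
    if None in (self.username, self.password, self.database):
        raise ValueError("username, password and database must be set")
    outfile = self.database + ".py"
    # Redirect pwiz's stdout into the model file instead of using ">"
    with open(outfile, 'wb', 0) as f:
        check_call([sys.executable, '-m', 'pwiz',
                    '-e', self.engine, '-H', self.host,
                    '-u', self.username, '-P', self.password,
                    self.database],
                   stdout=f)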
My python script needs to invoke a program, detect if it failed (eg, result != 0) and send the output of the program to both stdout like normal plus a log file.
My default shell is bash. I'm using Python 2.7.9
To send output to both stdout and a file I'd normally use tee:
result = subprocess.call('some_program --an-option | tee -a ' + logfile , shell=True)
However, the pipe in bash will return true even if the first command fails, so this approach fails to detect if the command fails.
If I try to use set -o pipefail in the command (so that the result will indicate if the first command fails) like this:
result = subprocess.call('set -o pipefail && some_program --an_option | tee -a ' + logfile , shell=True)
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Is there a way in python to invoke a command, send the output to both the normal stdout console and a logfile, and still detect if the command failed?
Note: we have to continue sending some_program's output to stdout since stdout is being sent to a websocket.
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Pass executable='/bin/bash' otherwise /bin/sh is used.
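For example (a sketch using the command from the question; logfile is a placeholder here):

import subprocess

logfile = 'some_program.log'  # placeholder path
result = subprocess.call(
    'set -o pipefail && some_program --an-option | tee -a ' + logfile,
    shell=True, executable='/bin/bash')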
You could implement tee in pure Python:
#!/usr/bin/env python2
import sys
from subprocess import Popen, PIPE

chunk_size = 1 << 13
p = Popen(["some_program", "--an-option"], stdout=PIPE, bufsize=1)
with p.stdout, open('logfile', 'ab') as logfile:
    for chunk in iter(lambda: p.stdout.read(chunk_size), b''):
        sys.stdout.write(chunk)
        logfile.write(chunk)
if p.wait() != 0:
    raise RuntimeError("some_program failed")
My preference would be to send stdout to a pipe, and then read the pipe in the Python code. The Python code can write to stdout, a file, etc. as required. It would also let you set shell=False, since setting it to True is a potential security issue, as mentioned in the documentation.
However, the pipe in bash will return true even if the first command
fails, so this approach fails to detect if the command fails.
That is not true.
But I think you mean: the exit status of 'some_program --an-option | tee -a ' + logfile is always 0 even if some part of the pipeline fails.
Well, chaining multiple commands (with && or ||) or connecting them via pipes makes the returned exit status unreliable.
Regardless, in the command some_program --an-option | tee -a logfile, the logfile is not written to if some_program fails, so you don't need to worry about the exit code.
Anyway, the best way to build a pipe with subprocess is to create Popen objects and handle their stdout and stdin yourself:
import subprocess as sp

STATUS_OK = 0
logfile = '/tmp/test.log'
commands = {
    'main'   : 'ls /home',
    'pipe_to': 'tee -a ' + logfile
}

process = sp.Popen(commands['main'], shell=True, stdout=sp.PIPE)

# explicitly wait until the command terminates and its exit status is set
process.wait()

if process.returncode == STATUS_OK:
    stdoutdata = process.communicate()[0]
    # pipe the last command's output to the "tee" command
    sp.Popen(commands['pipe_to'], stdin=sp.PIPE, shell=True).communicate(stdoutdata)
else:
    # do something when the command fails ('ls /hom', for instance, would fail)
    pass
That is it!
In the last Popen we invoke Popen.communicate() to send the output from the ls command to the tee command's STDIN.
In the Python docs there's a small section called Replacing shell pipeline; you may want to take a look.
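For reference, a minimal sketch of that Replacing shell pipeline pattern applied to the some_program | tee case from the question (untested; logfile is a placeholder):

from subprocess import Popen, PIPE

logfile = 'some_program.log'  # placeholder path
p1 = Popen(['some_program', '--an-option'], stdout=PIPE)
# tee inherits our stdout, so the output still reaches the console
p2 = Popen(['tee', '-a', logfile], stdin=p1.stdout)
p1.stdout.close()  # let p1 receive SIGPIPE if tee exits early
p2.wait()
if p1.wait() != 0:
    raise RuntimeError('some_program failed')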