I want to run an executable that would normally be run directly on the command line, but ultimately from a Python script.
After reading through answers here and multiple Google results, I used subprocess.Popen and achieved some limited success.
>>> import subprocess
>>> import sys
>>> exe_path = sys.argv[1]
>>> dir_path_in = sys.argv[2]
>>> dir_path_out = sys.argv[3]
>>> subprocess.Popen([exe_path])
It then displays
<subprocess.Popen object at 0x021B7B30>
Followed by
>>>usage: <path to exe> [options] <dir_path> <dir_path_out>
But if I then type what I would normally enter on the command line when running the exe directly, it returns:
>>>SyntaxError: invalid token
I have tested the same command directly on the command line with the exe and it works fine, just not via Python.
I have had a look through Stack Overflow and the closest comparison I found was here: How to handle an executable requiring interactive responses?
Ultimately the "usage" prompt will not even be required in the end, as the declared sys.argv values will provide all the information the executable needs to run automatically.
subprocess.call() achieved the desired result: declare the sys.argv variables, combine them into a single sequence, and pass that sequence to subprocess.call(). This worked better than shlex.split(), which I tried first but which struggled with Windows paths even with the '\' escaped.
import subprocess
import sys

exe_path = sys.argv[1]
dir_path_in = sys.argv[2]
dir_path_out = sys.argv[3]

command = exe_path, dir_path_in, dir_path_out
p = subprocess.call(command)
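For reference, a hypothetical invocation of that wrapper (the script name and paths below are placeholders, not from the question) would look like:

python run_exe.py C:\tools\mytool.exe C:\data\input C:\data\output

Inside the script, command then ends up as ('C:\\tools\\mytool.exe', 'C:\\data\\input', 'C:\\data\\output'), which subprocess.call() runs without any extra quoting or shlex.split() step.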
Related
I'm learning Python and I'm trying to use subprocess.run on Windows to pass a script to an external program, using a command line that looks like:
program -r script
At the moment, program is a path supplied by user input (and it contains spaces) to allow for different versions, and script is a path that points to a temporary script file. E.g.:
program = C:\Program files\folder prog\folder\program.exe
script = C:\User\path\to script\script.txt
I tried the following, but it requires manually adding escape characters, and only the program argument works.
program = "\"C:\\Program Files\\folder prog\\folder\\program.exe\""
script = "\"C:\\User\\path\\to script\\script.txt\""
process = subprocess.run([program, '-r', script],
                         stdout=subprocess.PIPE,
                         universal_newlines=True)
process
process.stdout
How do I use subprocess correctly to run this command with the correct escaping for the program and script paths?
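For what it's worth, a minimal sketch of the usual fix: keep the paths as plain strings (raw string literals avoid doubling the backslashes) and pass them as separate list items, so subprocess adds whatever quoting Windows needs; no manual escape characters are required:

import subprocess

# Raw strings: no doubled backslashes and no embedded quotes needed
program = r"C:\Program Files\folder prog\folder\program.exe"
script = r"C:\User\path\to script\script.txt"

process = subprocess.run([program, '-r', script],
                         stdout=subprocess.PIPE,
                         universal_newlines=True)
print(process.stdout)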
I've been using the following shell command to read the image off a scanner named scanner_name and save it in a file named file_name:
scanimage -d <scanner_name> --resolution=300 --format=tiff --mode=Color 2>&1 > <file_name>
This has worked fine for my purposes.
I'm now trying to embed this in a Python script. What I need is to save the scanned image, as before, into a file and also capture any standard output (say, error messages) to a string.
I've tried
scan_result = os.system('scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name))
But when I run this in a loop (with different scanners), there is an unreasonably long lag between scans and the images aren't saved until the next scan starts (the file is created as an empty file and is not filled until the next scanning command). All this with scan_result = 0, i.e. indicating no error.
The subprocess method run() has been suggested to me, and I have tried
with open(file_name, 'w') as scanfile:
    input_params = '-d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name)
    scan_result = subprocess.run(["scanimage", input_params], stdout=scanfile, shell=True)
but this saved the image in some kind of an unreadable file format
Any ideas as to what may be going wrong? Or what else I can try that will allow me to both save the file and check the success status?
subprocess.run() is definitely preferred over os.system() but neither of them as such provides support for running multiple jobs in parallel. You will need to use something like Python's multiprocessing library to run several tasks in parallel (or painfully reimplement it yourself on top of the basic subprocess.Popen() API).
You also have a basic misunderstanding about how to run subprocess.run(). You can pass in either a string and shell=True or a list of tokens and shell=False (or no shell keyword at all; False is the default).
with_shell = subprocess.run(
    "scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} ".format(
        scanner, file_name), shell=True)

with open(file_name, "wb") as write_handle:
    no_shell = subprocess.run([
        "scanimage", "-d", scanner, "--resolution=300", "--format=tiff",
        "--mode=Color"], stdout=write_handle)
You'll notice that the latter does not support redirection (because that's a shell feature) but this is reasonably easy to implement in Python. (I took out the redirection of standard error -- you really want error messages to remain on stderr!)
If you have a larger working Python program this should not be awfully hard to integrate with a multiprocessing.Pool(). If this is a small isolated program, I would suggest you peel off the Python layer entirely and go with something like xargs or GNU parallel to run a capped number of parallel subprocesses.
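For instance, a rough sketch of the multiprocessing.Pool idea, assuming scanners is a list of SANE device names and the output filenames are simply derived from them (both the device names and the naming scheme below are made up for illustration):

import subprocess
from multiprocessing import Pool

def scan_one(scanner):
    # Hypothetical naming scheme; adapt to your own file_name logic
    file_name = scanner.replace(":", "_") + ".tiff"
    with open(file_name, "wb") as write_handle:
        result = subprocess.run(
            ["scanimage", "-d", scanner, "--resolution=300",
             "--format=tiff", "--mode=Color"],
            stdout=write_handle)
    return scanner, result.returncode

if __name__ == "__main__":
    scanners = ["net:host1:dev0", "net:host2:dev0"]  # placeholder device names
    with Pool(processes=4) as pool:
        for scanner, status in pool.map(scan_one, scanners):
            print(scanner, "finished with exit status", status)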
I suspect the issue is that you're opening the output file and then running subprocess.run() inside that with block. This isn't necessary: the end result is that you open the file via Python, then have the command open the file again via the OS, and then close it via Python.
JUST run the subprocess, and let the scanimage ... 2>&1 > filename command create the file (just as it would if you ran scanimage at the command line directly).
I think subprocess.check_output() is now the preferred method of capturing the output.
I.e.
from subprocess import check_output

# Command must be a list, with all parameters as separate list items.
# "2>&1 > file" is shell syntax, so it can't be passed as a list item;
# capture stdout here and write it to the file from Python instead.
command = ['scanimage',
           '-d{}'.format(scanner),
           '--resolution=300',
           '--format=tiff',
           '--mode=Color']
scan_result = check_output(command)
with open(file_name, 'wb') as f:
    f.write(scan_result)
However (with both run and check_output), using shell=True is a big security risk, especially if the input_params come into the Python script from an external source. People can pass in unwanted commands and have them run in the shell with the permissions of the script.
Sometimes shell=True is necessary for the OS command to run properly; in that case the better recommendation is to use an actual Python module to interface with the scanner, rather than having Python pass an OS command to the OS.
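As a concrete illustration of that injection risk (the device name below is deliberately malicious and purely hypothetical):

import subprocess

# Imagine this value arrives from an external source
scanner = "epson; rm -rf ~"   # hypothetical attacker-controlled input

# With shell=True, the whole line is handed to the shell, so the part
# after ";" would run as a second command. (Deliberately commented out.)
# subprocess.run("scanimage -d {}".format(scanner), shell=True)

# With a list and no shell, the same value is just passed to scanimage
# as an ordinary argument and cannot spawn another command.
subprocess.run(["scanimage", "-d", scanner])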
I am trying to create a Python script that runs a Perl script from the Mac terminal. The popular 3D printer slicing engine, Slic3r, has a command line interface, which is written in Perl. I want to write a Python script to automate some processes, since Python is the language I know best. If I type the commands I want to use directly into the terminal, everything works as it should; however, if I try to use Python's subprocess, it works for some commands but not others.
For example, if I use my script to fetch the Slic3r version using the syntax outlined in the docs, it works correctly. This script works:
import os
import subprocess
import sys

os.system("cd Slic3r")
perl = "perl"
perl_script = "/Users/path/to/Slic3r/slic3r.pl"
params = "--version"
pl_script = subprocess.Popen([perl, perl_script, params], stdout=sys.stdout)
pl_script.communicate()
print 'done'
This returns:
1.3.0-dev
done
If I use a command such as --info (see the Slic3r docs under repairing models for more detail) with the same script, I get:
In:
import os
import subprocess
import sys

os.system("cd Slic3r")
perl = "perl"
perl_script = "/Users/path/to/Slic3r/slic3r.pl"
params = "--info /Users/path/to/Desktop/STL_Files/GEAR.stl"
pl_script = subprocess.Popen([perl, perl_script, params], stdout=sys.stdout)
pl_script.communicate()
print 'done'
Out:
Unknown option: info /Users/path/to/Desktop/STL_Files/GEAR.stl
Slic3r 1.3.0-dev is a STL-to-GCODE translator for RepRap 3D printers
written by Alessandro Ranellucci <aar@cpan.org> - http://slic3r.org/
Usage: slic3r.pl [ OPTIONS ] [ file.stl ] [ file2.stl ] ...
From what I have researched, I suspect there is some issue with the whitespace in a string being used as an argument. I had never used subprocess until attempting this project, so a simple syntax error is likely.
I know that the Slic3r syntax is correct because it works perfectly if I type it directly into the terminal. Can anybody see what I am doing wrong?
subprocess.Popen accepts args as its first parameter. This can be either a string with the complete command (including parameters); note that on macOS/Linux this string form only works together with shell=True:
args = "perl /Users/path/to/Slic3r/slic3r.pl --info /Users/path/to/Desktop/STL_Files/GEAR.stl"
pl_script = subprocess.Popen(args, stdout=sys.stdout, shell=True)
or a list consisting of the actual command and all its parameters (the actual command in your case is perl):
args = ["perl",
"/Users/path/to/Slic3r/slic3r.pl",
"--info",
"/Users/path/to/Desktop/STL_Files/GEAR.stl"]
pl_script = subprocess.Popen(args, stdout=sys.stdout)
The latter is preferred because it bypasses the shell and directly executes perl. From the docs:
args should be a sequence of program arguments or else a single
string. By default, the program to execute is the first item in args
if args is a sequence. If args is a string, the interpretation is
platform-dependent and described below. See the shell and executable
arguments for additional differences from the default behavior. Unless
otherwise stated, it is recommended to pass args as a sequence.
(emphasis mine)
The args list may of course be built with Python's standard list operations:
base_args = ["perl",
"/Users/path/to/Slic3r/slic3r.pl"]
options = ["--info",
"/Users/path/to/Desktop/STL_Files/GEAR.stl"]
args = base_args + options
args.append("--verbose")
pl_script = subprocess.Popen(args, stdout=sys.stdout)
Sidenote: You wrote os.system("cd Slic3r"). This will open a shell, change the directory in that shell, and then exit. Your Python script will still operate in the original working directory. To change it, use os.chdir("Slic3r") instead. (See here.)
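On that note, if the Perl script genuinely needs to run from inside the Slic3r directory, Popen also accepts a cwd argument, which keeps the change local to the child process. A brief sketch, assuming a Slic3r directory exists relative to where the Python script runs:

import subprocess
import sys

args = ["perl", "/Users/path/to/Slic3r/slic3r.pl", "--version"]

# The child runs with Slic3r as its working directory; the Python
# script's own working directory is left untouched.
pl_script = subprocess.Popen(args, stdout=sys.stdout, cwd="Slic3r")
pl_script.communicate()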
You can also use shlex to break down complex arguments, especially on Mac or Unix.
More information here:
https://docs.python.org/2/library/shlex.html#shlex.split
e.g.
import shlex, subprocess

args = "perl /Users/path/to/Slic3r/slic3r.pl --info /Users/path/to/Desktop/STL_Files/GEAR.stl"

# Using shlex to break down the arguments
mac_arg = shlex.split(args)
# shlex.split will return all the arguments in a list
output
['perl', '/Users/path/to/Slic3r/slic3r.pl', '--info', '/Users/path/to/Desktop/STL_Files/GEAR.stl']
This can then be used with Popen:
p1 = subprocess.Popen(mac_arg)
shlex's main advantage is that you don't need to worry about the commands; it will always split them in a manner accepted by Popen.
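For example, quoted paths containing spaces come back as single list items (the paths below are made up for illustration):

import shlex

args = 'perl "/Users/my name/Slic3r/slic3r.pl" --info "/Users/my name/Desktop/GEAR model.stl"'
print(shlex.split(args))
# ['perl', '/Users/my name/Slic3r/slic3r.pl', '--info', '/Users/my name/Desktop/GEAR model.stl']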
I'm having some problems with Python's subprocess.check_output.
output = subprocess.check_output(args)
where my args is:
args = "C:\\DO\\bin\\Config.exe --ChCfg7 --LFE -b1152000 C:\\DO\\PCM\\1.wav C:\\DO\\PCM\\2.wav C:\\DO\\PCM\\3.wav C:\\DO\\PCM\\4.wav C:\\DO\\PCM\\5.wav C:\\DO\\PCM\6.wav --ModeBCast -oC:\\DO\\OUT\\outfile > C:\\DO\\OUT\\log.txt
This works when executed from the standard Windows command line, but doesn't work when executed via Python's subprocess.check_output. In the cmd case both the output file and log.txt are produced; the Python script produces an output file with size 0 and no log.txt at all.
output = subprocess.check_output(args, shell=True)
Run this with shell=True.
Use a list of args and redirect the output to a file:
import subprocess

args = ['C:/DO/bin/Config.exe', '--ChCfg7', '--LFE', '-b1152000',
        'C:/DO/PCM/1.wav', 'C:/DO/PCM/2.wav', 'C:/DO/PCM/3.wav',
        'C:/DO/PCM/4.wav', 'C:/DO/PCM/5.wav', 'C:/DO/PCM/6.wav',
        '--ModeBCast', '-oC:/DO/OUT/outfile']

with open("C:/DO/OUT/log.txt", "w") as f:
    subprocess.check_call(args, stdout=f)
You can use shell=True, but for security reasons it is generally not a good idea, and the same result can quite easily be achieved with the code above, simply redirecting the output to the file.
> is a shell redirection operator. Either run the command in a shell or (better), as @Padraic Cunningham suggested, emulate it in Python:
#!/usr/bin/env python
import subprocess

args = r"C:\DO\bin\Config.exe --ChCfg7 --LFE -b1152000".split()
args += [r'C:\DO\PCM\%d.wav' % i for i in range(1, 7)]
args += ["--ModeBCast", r"-oC:\DO\OUT\outfile"]

with open(r"C:\DO\OUT\log.txt", "wb", 0) as output_file:
    subprocess.check_call(args, stdout=output_file)
The code uses raw string literals for Windows paths to avoid escaping backslashes.
There is usually no point to use shell=True on Windows unless you want to run a built-in command such as dir. If args is not constructed using input from an external source then security considerations do not apply. shell=True starts additional process (%COMSPEC%) and it changes how the executable is searched and it changes what characters should be escaped (what characters are metacharacters) — do not use shell=True unless necessary.
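For completeness, a small sketch of the built-in case mentioned above, where shell=True is genuinely needed because dir only exists inside cmd.exe rather than as a standalone executable:

import subprocess

# "dir" is a cmd.exe built-in, so the shell is required here
listing = subprocess.check_output("dir C:\\DO\\OUT", shell=True)
print(listing.decode(errors="replace"))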
I'm trying to get the filename that's given on the command line. For example:
python3 ritwc.py < DarkAndStormyNight.txt
I'm trying to get DarkAndStormyNight.txt
When I try fileinput.filename() I get back the same as sys.stdin. Is this possible? I'm not looking for sys.argv[0], which returns the current script name.
Thanks!
In general it is not possible to obtain the filename in a platform-agnostic way. The other answers cover sensible alternatives like passing the name on the command-line.
On Linux, and some related systems, you can obtain the name of the file through the following trick:
import os
print(os.readlink('/proc/self/fd/0'))
/proc/ is a special filesystem on Linux that gives information about processes on the machine. self means the current running process (the one that opens the file). fd is a directory containing symbolic links for each open file descriptor in the process. 0 is the file descriptor number for stdin.
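A small sketch of how one might guard that trick, since it only helps when stdin is actually a regular file on a Linux system (pipes and terminals will not yield a useful name):

import os
import stat
import sys

if sys.platform.startswith("linux") and not sys.stdin.isatty():
    if stat.S_ISREG(os.fstat(0).st_mode):   # stdin is a real file, not a pipe
        print(os.readlink('/proc/self/fd/0'))
    else:
        print("stdin is redirected, but not from a regular file")
else:
    print("stdin is not a redirected file")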
You can use ArgumentParser, which automatically gives you an interface to command line arguments, and even provides help, etc.
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument('fname', metavar='FILE', help='file to process')
args = parser.parse_args()

with open(args.fname) as f:
    pass  # do stuff with f
Now you call python3 ritwc.py DarkAndStormyNight.txt. If you call python3 ritwc.py with no argument, it'll give an error saying it expected an argument for FILE. You can also call python3 ritwc.py -h and it will explain that a file to process is required.
PS: here's a great intro on how to use it: http://docs.python.org/3.3/howto/argparse.html
In fact, since it seems that Python cannot see the filename when stdin is redirected from the console, you have an alternative:
Call your program like this:
python3 ritwc.py -i your_file.txt
and then add the following code to redirect stdin from inside Python, so that you have access to the filename through the variable filename_in:
import sys

flag = 0
for arg in sys.argv:
    if flag:
        filename_in = arg
        break
    if arg == "-i":
        flag = 1

sys.stdin = open(filename_in, 'r')
# the rest of your code...
If now you use the command:
print(sys.stdin.name)
you get your filename; however, when you run the same print after redirecting stdin from the console, you get the result <stdin>, which is evidence that Python can't see the filename that way.
I don't think it's possible. As far as your Python script is concerned, it's writing to stdout. The fact that you are capturing what is written to stdout and writing it to a file in your shell has nothing to do with the Python script.