File not found error when launching a subprocess containing piped commands - python

I need to run the command date | grep -o -w '"+tz+"'' | wc -w from Python on my localhost. I am using the subprocess module's check_output method, since I need to capture the output.
However, it throws an error:
Traceback (most recent call last):
File "test.py", line 47, in <module>
check_timezone()
File "test.py", line 40, in check_timezone
count = subprocess.check_output(command)
File "/usr/lib/python2.7/subprocess.py", line 537, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory

You have to add shell=True to execute a shell command. Without it, check_output tries to find an executable called date | grep -o -w '"+tz+"'' | wc -w, and it cannot find it. (No idea why you removed the essential information from the error message.)
See the difference between:
>>> subprocess.check_output('date | grep 1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/subprocess.py", line 603, in check_output
with Popen(*popenargs, stdout=PIPE, **kwargs) as process:
File "/usr/lib/python3.4/subprocess.py", line 848, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.4/subprocess.py", line 1446, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'date | grep 1'
And:
>>> subprocess.check_output('date | grep 1', shell=True)
b'gio 19 giu 2014, 14.15.35, CEST\n'
Read the documentation about the Frequently Used Arguments for more information about the shell argument and how it changes the interpretation of the other arguments.
Note that you should try to avoid using shell=True, since spawning a shell can be a security hazard (even if you do not execute untrusted input, attacks like Shellshock can still be performed!).
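To make the hazard concrete, here is a small sketch (the filename is a made-up example of untrusted input, not from the question) showing how shell=True lets input smuggle in an extra command, while the list form passes the same string through as a single harmless argument:

```python
import subprocess

# Hypothetical untrusted input containing shell metacharacters
filename = "notes.txt; echo INJECTED"

# With shell=True, the shell parses the semicolon and runs a second command
unsafe = subprocess.run('echo ' + filename, shell=True,
                        stdout=subprocess.PIPE).stdout
print(unsafe)   # INJECTED appears in the output

# With a list and the default shell=False, the whole string is one argument
safe = subprocess.run(['echo', filename],
                      stdout=subprocess.PIPE).stdout
print(safe)     # the metacharacters are inert
```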
The documentation for the subprocess module has a little section about replacing the shell pipeline.
You can do so by spawning the two processes in python and use subprocess.PIPE:
date_proc = subprocess.Popen(['date'], stdout=subprocess.PIPE)
grep_proc = subprocess.Popen(['grep', '1'], stdin=date_proc.stdout, stdout=subprocess.PIPE)
date_proc.stdout.close()  # allow date to receive SIGPIPE if grep exits early
output = grep_proc.communicate()[0]
You can write some simple wrapper function to easily define pipelines:
import subprocess
from shlex import split
from collections import namedtuple
from functools import reduce

proc_output = namedtuple('proc_output', 'stdout stderr')

def pipeline(starter_command, *commands):
    if not commands:
        try:
            starter_command, *commands = starter_command.split('|')
        except AttributeError:
            pass
    starter_command = _parse(starter_command)
    starter = subprocess.Popen(starter_command, stdout=subprocess.PIPE)
    last_proc = reduce(_create_pipe, map(_parse, commands), starter)
    return proc_output(*last_proc.communicate())

def _create_pipe(previous, command):
    proc = subprocess.Popen(command, stdin=previous.stdout, stdout=subprocess.PIPE)
    previous.stdout.close()
    return proc

def _parse(cmd):
    try:
        return split(cmd)
    except Exception:
        return cmd
With this in place you can write pipeline('date | grep 1'), pipeline('date', 'grep 1'), or pipeline(['date'], ['grep', '1']).

The most common cause of FileNotFound with subprocess, in my experience, is the use of spaces in your command. If you have just a single command (not a pipeline, and no redirection, wildcards, etc), use a list instead.
# Wrong, even with a valid command string
subprocess.run(['grep -o -w "+tz+"'])
# Fixed: each argument is its own list element
subprocess.run(["grep", "-o", "-w", '"+tz+"'])
This change results in no more FileNotFound errors, and is a nice solution if you got here searching for that exception with a simpler command.
If you need a pipeline or other shell features, the simple fix is to add shell=True:
subprocess.run(
    '''date | grep -o -w '"+tz+"'' | wc -w''',
    shell=True)
However, if you are using Python 3.5 or greater, try this approach:
import subprocess

a = subprocess.run(["date"], stdout=subprocess.PIPE)
print(a.stdout.decode('utf-8'))

b = subprocess.run(["grep", "-o", "-w", '"+tz+"'],
                   input=a.stdout, stdout=subprocess.PIPE)
print(b.stdout.decode('utf-8'))

c = subprocess.run(["wc", "-w"],
                   input=b.stdout, stdout=subprocess.PIPE)
print(c.stdout.decode('utf-8'))
You should see how one command's output becomes another's input, just like using a shell pipe, but you can easily debug each step of the process in Python. Using subprocess.run is recommended for Python 3.5+, but it is not available in prior versions.

The FileNotFoundError happens because - in the absence of shell=True - Python tries to find an executable whose file name is the entire string you are passing in. You need to add shell=True to get the shell to parse and execute the string, or figure out how to rearticulate this command line to avoid requiring a shell.
As an aside, the shell programming here is decidedly weird. On any normal system, date will absolutely never output "+tz+" and so the rest of the processing is moot.
Further, using wc -w to count the number of output words from grep is unusual. The much more common use case (if you can't simply use grep -c to count the number of matching lines) would be to use wc -l to count lines of output from grep.
Anyway, if you can, you want to avoid shell=True; if the intent here is to test the date command, you should probably replace the rest of the shell script with native Python code.
Pros:
The person trying to understand the program only needs to understand Python, not shell script.
The script will have fewer external dependencies (here, date) and will not require a Unix-like platform.
Cons:
Reimplementing standard Unix tools in Python is tiresome and sometimes rather verbose.
With that out of the way, if the intent is simply to count how many times "+tz+" occurs in the output from date, try
p = subprocess.run(['date'],
                   capture_output=True, text=True,
                   check=True)
result = len(p.stdout.split('"+tz+"')) - 1
The keyword argument text=True requires Python 3.7; for compatibility back to earlier Python versions, try the (misnomer) legacy synonym universal_newlines=True. For really old Python versions, maybe fall back to subprocess.check_output().
If you really need the semantics of the -w option of grep, you need to check if the characters adjacent to the match are not alphabetic, and exclude those which are. I'm leaving that as an exercise, and in fact would assume that the original shell script implementation here was not actually correct. (Maybe try re.split(r'(?<=^|\W)"\+tz\+"(?=\W|$)', p.stdout).)
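For illustration, a rough approximation of the -w semantics in pure Python might look like this (the sample string is made up; the lookarounds treat letters, digits, and underscore as word constituents, roughly as grep does):

```python
import re

sample = 'foo "+tz+" bar x"+tz+"y "+tz+"'

# A match counts only if it is not flanked by word-constituent
# characters, loosely mimicking grep -o -w '"+tz+"'
matches = re.findall(r'(?<!\w)"\+tz\+"(?!\w)', sample)
print(len(matches))  # the x"+tz+"y occurrence is rejected
```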
In more trivial cases (single command, no pipes, wildcards, redirection, shell builtins, etc) you can use Python's shlex.split() to parse a command into a correctly quoted list of arguments. For example,
>>> import shlex
>>> shlex.split(r'''one "two three" four\ five 'six seven' eight"'"nine'"'ten''')
['one', 'two three', 'four five', 'six seven', 'eight\'nine"ten']
Notice how the regular string split() is completely unsuitable here; it simply splits on every whitespace character, and doesn't support any sort of quoting or escaping. (But notice also how shlex.split() boneheadedly just returns a list of tokens from the original input:
>>> shlex.split('''date | grep -o -w '"+tz+"' | wc -w''')
['date', '|', 'grep', '-o', '-w', '"+tz+"', '|', 'wc', '-w']
(Even more parenthetically, this isn't exactly the original input, which had a superfluous extra single quote after '"+tz+"'.)
This is in fact passing | and grep etc as arguments to date, not implementing a shell pipeline! You still have to understand what you are doing.)

The question already has answers above, but in case those solutions do not work for you: check the path itself, and check that all the environment variables the process needs to locate the executable are set.

What worked for me on Python 3.8.10 (inspired by the solution by @mightypile here: https://stackoverflow.com/a/49986004/12361522) was to remove the splitting of parameters, and I had to enable the shell, too:
this:
c = subprocess.run(["wc -w"], input=b.stdout, stdout=subprocess.PIPE, shell=True)
instead of:
c = subprocess.run(["wc", "-w"], input=b.stdout, stdout=subprocess.PIPE)
If anyone wants to try my solution (at least for v3.8.10), here is mine:
I have a directory with multiple files of at least 2 file types (.jpg and others). I needed to count a specific file type (.jpg), not all files in the directory, via one pipe:
ls *.jpg | wc -l
So eventually I got it working like this:
import subprocess

proc1 = subprocess.run(["ls *.jpg"], stdout=subprocess.PIPE, shell=True)
proc2 = subprocess.run(['wc -l'], input=proc1.stdout, stdout=subprocess.PIPE, shell=True)
print(proc2.stdout.decode())
It would not work with splits:
["ls", "*.jpg"] - that would make ls ignore the *.jpg constraint
['wc', '-l'] - that would return a correct count, but with all 3 outputs, not just the one I was after
None of it would work without the shell enabled via shell=True.
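As a side note, the same count can be had without any subprocess at all; a minimal shell-free sketch using glob (my suggestion, not part of the answer above):

```python
import glob

# Count .jpg files in the current directory, like `ls *.jpg | wc -l`
count = len(glob.glob('*.jpg'))
print(count)
```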

I had this error too, and what worked for me was setting the line endings of the .sh file that I was calling with subprocess to Unix (LF) instead of Windows (CRLF).
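The reason this matters: a shebang line ending in a stray \r makes the kernel look for an interpreter literally named /bin/sh\r, which produces exactly this "No such file or directory" error. A quick way to check a file (a small helper of my own, not from the answer above):

```python
def has_crlf(path):
    """Return True if the first line of the file ends in CRLF (\r\n)."""
    with open(path, 'rb') as f:
        return f.readline().endswith(b'\r\n')
```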

Related

Python subprocess.run not able to handle large argument string

I need to invoke a powershell script and capture the output as generated from it.
Since I need to capture the output, I chose to use subprocess.run()
Powershell invocation
powershell DeleteResults -resultscsv '1111,2222,3333,4444'
Python(Python 3.5.2 :: Anaconda 4.1.1 (64-bit)) code
command = "powershell DeleteResults -resultscsv '{}'".format(resultscsv)
output = subprocess.run(command, stdout=subprocess.PIPE).stdout.decode('utf-8')
All goes fine as long as the length of command is less than approximately 33K characters.
However, subprocess.run() throws an error when the length exceeds 33K.
(There is no issue on the PowerShell side, as it works perfectly fine when invoked directly.)
ERROR: [WinError 206] The filename or extension is too long
Traceback (most recent call last):
File "D:\path\to\python\wrapper.py", line 154, in <module>
output = subprocess.run(command, stdout=subprocess.PIPE).stdout.decode('utf-8')
File "D:\Anaconda3\lib\subprocess.py", line 693, in run
with Popen(*popenargs, **kwargs) as process:
File "D:\Anaconda3\lib\subprocess.py", line 947, in __init__
restore_signals, start_new_session)
File "D:\Anaconda3\lib\subprocess.py", line 1224, in _execute_child
startupinfo)
Any pointer would be a great help.
Not sure if relevant - the python script is invoked via Control-M on a windows environment.
--Edit--
Adding this section to add more details in response to answer by Alex.
We don't own the ps script DeleteResults. So, we can't modify it. We just consume it.
As it is done today:
The resultscsv (80K chars) is stored in a results.ini file.
A small piece of inline PowerShell code parses the .ini file and then invokes DeleteResults. Note: there is a powershell call nested inside the outer powershell invocation (below).
This approach works perfectly fine even with more than 80K chars.
However, we don't want the inline ini parser to be part of the invocation - it looks ugly.
So, the idea is to write a Python wrapper which will parse the .ini file and invoke the PowerShell:
powershell -NoLogo -NonInteractive -Command "Get-Content 'results.ini' | foreach-object -begin {$h=@{}} -process { $k = [regex]::split($_,'='); if(($k[0].compareTo(' ') -ne 0) -and ($k[0].startswith('[') -ne $True)) {$h.Add($k[0], $k[1]) }}; powershell DeleteResults -resultscsv $h.Get_Item('resultscsv')"
So, I am wondering why the above PowerShell one-liner does not hit the character length limit. Is it that the line powershell DeleteResults -resultscsv $h.Get_Item('resultscsv') is NOT actually expanded inline, thereby not hitting the limit?
There is a command-line string limitation; its value depends on the OS version.
It is not possible to pass large data through command-line arguments. Pass a filename instead.
Documentation and workaround: https://support.microsoft.com/en-us/help/830473/command-prompt-cmd-exe-command-line-string-limitation
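A sketch of that workaround (all names are illustrative; the consuming script would have to be adapted, or wrapped, to read the file itself): write the large payload to a temporary file and pass only its short path on the command line.

```python
import tempfile

# A large payload that would overflow the ~33K command-line limit
resultscsv = ','.join(str(i) for i in range(20000))

# Write it to a file and pass only the (short) filename to the child
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write(resultscsv)
    payload_path = f.name

# Hypothetical invocation of a wrapper script that reads the file:
# subprocess.run(['powershell', 'DeleteResultsFromFile', '-path', payload_path])
print(len(resultscsv), payload_path)
```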

subprocess check_output OSError: [Errno 2] No such file or directory

Below is example code:
from subprocess import check_output

list1 = ['df', 'df -h']
for x in list1:
    output = check_output([x])
Getting the below error for the 'df -h' value in list1.
File "/usr/lib64/python2.7/subprocess.py", line 568, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
What is the best method to read Linux command output in Python 2.7?
You should provide check_output arguments as a list.
This works:
from subprocess import check_output

list1 = ['df', 'df -h']
for x in list1:
    output = check_output(x.split())
I recommend delegator, written by kennethreitz (https://github.com/kennethreitz/delegator.py). With his package you can simply do the following, and both the API and the output are cleaner:
import delegator

cmds = ['df', 'df -h']
for cmd in cmds:
    p = delegator.run(cmd)
    print(p.out)
There are a few options in this situation, for ways of passing a cmd and args:
# a list broken into individual parts, can be passed with `shell=False`
['cmd', 'arg1', 'arg2', ... ]

# a string with just a `cmd`, can be passed with `shell=False`
'cmd'

# a string with a `cmd` and `args`;
# can only be passed to subprocess functions with `shell=True`
'cmd arg1 arg2 ...'
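A quick demonstration of the three forms, using echo as a stand-in command (assuming a Unix-like system):

```python
import subprocess

# 1. list of cmd and args, shell=False (the default)
r1 = subprocess.run(['echo', 'hi'], stdout=subprocess.PIPE)

# 2. a bare cmd string with no args also works without a shell
r2 = subprocess.run('echo', stdout=subprocess.PIPE)

# 3. a full command string requires shell=True
r3 = subprocess.run('echo hi', shell=True, stdout=subprocess.PIPE)

print(r1.stdout, r2.stdout, r3.stdout)
```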
Just to follow up on marii's answer. The subprocess docs on python.org have more info on why you may want to pick one of these options.
args is required for all calls and should be a string, or a sequence
of program arguments. Providing a sequence of arguments is generally
preferred, as it allows the module to take care of any required
escaping and quoting of arguments (e.g. to permit spaces in file
names). If passing a single string, either shell must be True (see
below) or else the string must simply name the program to be executed
without specifying any arguments.
(emphasis added)
While adding shell=True would be OK for this, it's recommended to avoid it: changing 'df -h' to ['df', '-h'] isn't very difficult, and only using the shell when you really need to is a good habit to get into. As the docs also add, against a red background no less:
Warning.
Executing shell commands that incorporate unsanitized input from an
untrusted source makes a program vulnerable to shell injection, a
serious security flaw which can result in arbitrary command execution.
For this reason, the use of shell=True is strongly discouraged in
cases where the command string is constructed from external input

Unable to capture result of ls -la with subprocess.Popen

I am trying to capture the output when I execute a custom command using Popen:
import subprocess

def exec_command():
    command = "ls -la"  # will be replaced by my custom command
    result = subprocess.Popen(command, stdout=subprocess.PIPE).communicate()[0]
    print(result)

exec_command()
I get an OSError with following stacktrace:
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Please let me know what I would need to use.
Note: The stacktrace shows the code was executed in Python 2.7, but I got the same error running with Python 2.6
When running without shell=True (which you are doing, correctly; shell=True is dangerous), you should pass your command as a sequence of the command and the arguments, not a single string. Fixed code is:
def exec_command():
    command = ["ls", "-la"]  # list of command and arguments
    ... rest of code unchanged ...
If you had user input involved for some of the arguments, you'd just insert it into the list:
def exec_command(somedirfromuser):
    command = ["ls", "-la", somedirfromuser]
Note: If your commands are sufficiently simple, I'd recommend avoiding subprocess entirely. os.listdir and os.stat (or on Python 3.5+, os.scandir alone) can get you the same info as ls -la in a more programmatically usable form without the need to parse it, and likely faster than launching an external process and communicating with it via a pipe.
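For instance, a minimal sketch of gathering ls -la-style fields (name, size, modification time) with os.scandir instead of a subprocess:

```python
import os

def listing(path='.'):
    """Return (name, size, mtime) per entry, like a bare-bones `ls -la`."""
    result = []
    for entry in os.scandir(path):
        st = entry.stat(follow_symlinks=False)
        result.append((entry.name, st.st_size, st.st_mtime))
    return sorted(result)

for name, size, mtime in listing():
    print(name, size, mtime)
```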

Getting console output of a Perl script through Python

There are a variety of posts and resources explaining how to use Python to get output of an outside call. I am familiar with using these--I've used Python to get output of jars and exec several times, when it was not realistic or economical to re-implement the functionality of that jar/exec inside Python itself.
I am trying to call a Perl script via Python's subprocess module, but I have had no success with this particular Perl script. I carefully followed the answers here, Call Perl script from Python, but had no results.
I was able to get the output of this test Perl script from this question/answer: How to call a Perl script from Python, piping input to it?
#!/usr/bin/perl
use strict;
use warnings;
my $name = shift;
print "Hello $name!\n";
Using this block of Python code:
import subprocess
var = "world"
args_test = ['perl', 'perl/test.prl', var]
pipe = subprocess.Popen(args_test, stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
However, if I swap out the arguments and the Perl script with the one I need output from, I get no output at all.
args = ['perl', 'perl/my-script.prl', '-a', 'perl/file-a.txt',
'-t', 'perl/file-t.txt', 'input.txt']
which runs correctly when entered on the command line, e.g.
>perl perl/my-script.prl -a perl/file-a.txt -t perl/file-t.txt input.txt
but this produces no output when called via subprocess:
pipe = subprocess.Popen(args, stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
I've done another sanity check as well. This correctly outputs the help message of Perl as a string:
import subprocess
pipe = subprocess.Popen(['perl', '-h'], stdout=subprocess.PIPE)
out, err = pipe.communicate()
print out, err
As shown here:
>>> ================================ RESTART ================================
>>>
Usage: perl [switches] [--] [programfile] [arguments]
-0[octal] specify record separator (\0, if no argument)
-a autosplit mode with -n or -p (splits $_ into @F)
-C[number/list] enables the listed Unicode features
-c check syntax only (runs BEGIN and CHECK blocks)
-d[:debugger] run program under debugger
-D[number/list] set debugging flags (argument is a bit mask or alphabets)
-e program one line of program (several -e's allowed, omit programfile)
-f don't do $sitelib/sitecustomize.pl at startup
-F/pattern/ split() pattern for -a switch (//'s are optional)
-i[extension] edit <> files in place (makes backup if extension supplied)
-Idirectory specify @INC/#include directory (several -I's allowed)
-l[octal] enable line ending processing, specifies line terminator
-[mM][-]module execute "use/no module..." before executing program
-n assume "while (<>) { ... }" loop around program
-p assume loop like -n but print line also, like sed
-P run program through C preprocessor before compilation
-s enable rudimentary parsing for switches after programfile
-S look for programfile using PATH environment variable
-t enable tainting warnings
-T enable tainting checks
-u dump core after parsing program
-U allow unsafe operations
-v print version, subversion (includes VERY IMPORTANT perl info)
-V[:variable] print configuration summary (or a single Config.pm variable)
-w enable many useful warnings (RECOMMENDED)
-W enable all warnings
-x[directory] strip off text before #!perl line and perhaps cd to directory
-X disable all warnings
None
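No answer is shown here, but note that `print out, err` prints None for err because stderr was never captured. Since the script produces no stdout at all, a sensible first diagnostic (my suggestion) is to capture stderr and the exit code too; the child may be failing silently. A self-contained sketch, using a deliberately failing Python child as a stand-in for the Perl script:

```python
import subprocess
import sys

# Stand-in for the silent script: a child that writes only to stderr
pipe = subprocess.Popen(
    [sys.executable, '-c',
     'import sys; sys.stderr.write("boom\\n"); sys.exit(3)'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = pipe.communicate()
print(out, err, pipe.returncode)  # stderr and return code reveal the failure
```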

python subprocess call failing while same command line call works fine

I am trying to replace a command line call by a Python script using subprocess:
path_to_executable = r'c:\UK\app\Debug\lll.exe'
x = subprocess.call([path_to_executable, args])
args is a string that looks like this:
-unemp Base -rate Base -scen_name Base -print_progress 0 -rate_date 1 -hpa Base -dealpath C:\data\ -nthread 4 -deallist C:\config\all.txt -outdir c:\outdir\Base
The call works when run from the command line, but fails with the same arguments in subprocess, with the following error:
FileIOException(Unable to open directory C:/.../hist.csv)
(The csv file is present - but it's a file, not a directory.)
My questions:
1. How could it be that it works through the command line but not subprocess?
2. Why might it be trying to open a csv file as a directory, when it's not doing the same thing on the command line?
Maybe subprocess is not able to locate the file/directory. Are you sure the file is present and that the path to the file does not contain any special characters (e.g. ~/)?
Otherwise, try using the argument shell=True.
from the subprocess doc:
subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False)
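Separately from the shell=True suggestion: note that [path_to_executable, args] passes the entire args string as a single argument to the executable, which may well explain the odd behavior. Splitting it into separate items, e.g. with shlex.split, is usually what's wanted (caveat: shlex in POSIX mode mangles Windows backslash paths, so posix=False or manual splitting may be needed there; the string below is a shortened, backslash-free subset of the question's arguments):

```python
import shlex

# Illustrative subset of the question's argument string
args = '-unemp Base -rate Base -nthread 4'
argv = shlex.split(args)
print(argv)  # each flag and value becomes its own list element
```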
