I want to create a shell pipeline using call. For example, I want to get the number of lines containing 123.
The shell command would be:
grep "123" myfile | wc -l > sum.txt
But "123" is a variable so I want to use python:
A= ["123","1234","12345"]
for i in A:
call(["grep", i,"| wc >> sum.txt"])
This code does not work!
call only calls one executable. What you want is for that executable to be a shell (e.g. bash), which parses the command line; the shell is also responsible for handling pipes. You can do this with the shell=True option, which is off by default.
When you give it a list, as you do, you are calling that executable with those arguments. A pipe is not an argument, and grep does not know how to pipe, nor how to invoke wc.
You can do
call(["grep '%s' myfile | wc >> sum.txt" % i, shell=True)
If you are using the pipe character you need shell=True; you can pass i each time using str.format:
call('grep "{}" myfile | wc >> sum.txt'.format(i), shell=True)
You can also do it without shell=True, using Python to open the file instead of shell redirection:
from subprocess import Popen, PIPE, check_call
p = Popen(["grep","myfile" i], stdout=PIPE)
with open('sum.txt', "a") as f:
check_call(["wc"], stdin=p.stdout,stdout=f)
Also note that > and >> are not the same, so the mode you open the file in ("w" or "a") depends on which one you actually want to replicate.
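Putting the pieces together for the loop in the question, here is a minimal sketch (assuming myfile exists; sum.txt is opened in append mode here to replicate >>):

from subprocess import Popen, PIPE, check_call

A = ["123", "1234", "12345"]
with open('sum.txt', "a") as f:          # append mode replicates >>
    for i in A:
        grep = Popen(["grep", i, "myfile"], stdout=PIPE)
        check_call(["wc", "-l"], stdin=grep.stdout, stdout=f)
        grep.stdout.close()
        grep.wait()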
Related
import os
val = os.popen("ls | grep a").read()
Let's say I want to check whether a given directory has any file with a in its name. If the directory doesn't have such a file, val is empty; otherwise val is assigned the output of the command.
Are there any cases where val could still contain something even though the output is empty? Here there are no files with a, but could val still have some value? Are there cases where the output looks empty when run in a terminal, but val still contains something (e.g. whitespace)?
Is this an effective approach to use in general? (I am not really trying to check for files with certain names; this is just an example.)
Are there any better ways of doing such a thing?
I'd recommend using Python 3's subprocess, where you can use the check parameter. Then your command will raise an error if it does not succeed:
import subprocess
proc = subprocess.run(["ls | grep a"], shell=True, check=True, stdout=subprocess.PIPE)
# proc = subprocess.run(["ls | grep a"], shell=True, check=True, capture_output=True) # starting python3.7
print(proc.stdout)
But as @JohnKugelman suggested, in this case you'd be better off using glob:
import glob
files_with_a = glob.glob("*a*")
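A quick usage sketch: glob.glob returns a plain list, so a truthiness test is enough to decide whether any matching file exists:

if files_with_a:
    print("matching files:", files_with_a)
else:
    print("no file with 'a' in its name")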
If you really do go for the approach of running an OS command from Python, I'd recommend using the subprocess package:
import subprocess
command = "ls | grep a"
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
process.wait()
print(process.returncode)
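On the whitespace question: for ls | grep a, a no-match result really is the empty string, but other commands may emit a bare newline or spaces, so stripping before testing is safer. A small sketch (check=True is deliberately left out here because grep exits non-zero when it finds nothing):

proc = subprocess.run("ls | grep a", shell=True, stdout=subprocess.PIPE)
out = proc.stdout.decode()
if out.strip():          # ignore whitespace/newlines when deciding "empty"
    print("matches:", out)
else:
    print("no match, grep exit code:", proc.returncode)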
I want to run this command using subprocess.call:
ls -l folder | wc -l
My code in the Python file is:
subprocess.call(["ls","-l","folder","|","wc","-l"])
I got an error message like this:
ls: cannot access |: No such file or directory
ls: cannot access wc: No such file or directory
It seems the | wc part can't be handled by subprocess.call.
How can I fix it?
Try out the shell option, using a string as the first parameter:
subprocess.call("ls -l folder | wc -l",shell=True)
Although this works, note that using shell=True is not recommended since it can introduce a security issue through shell injection.
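If part of the command string comes from a variable, one common mitigation is to quote it before building the string. A sketch, assuming Python 3's shlex.quote and a hypothetical folder variable:

import shlex
import subprocess

folder = "some folder; echo injected"    # hypothetical untrusted value
subprocess.call("ls -l {} | wc -l".format(shlex.quote(folder)), shell=True)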
You can set up a command pipeline by connecting one process's stdout to another's stdin. In your example, errors and the final output are written to the screen, so I didn't try to redirect them. This is generally preferable to something like communicate because, instead of waiting for one program to complete before starting another (and incurring the expense of moving the data into the parent), they run in parallel.
import subprocess
p1 = subprocess.Popen(["ls","-l"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc","-l"], stdin=p1.stdout)
# close the pipe in the parent; it's still open in the children
p1.stdout.close()
p2.wait()
p1.wait()
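If you want the count back in Python rather than printed to the screen, give the last process a PIPE as well and read it with communicate (a variant of the code above):

p1 = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc", "-l"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()
out, _ = p2.communicate()
p1.wait()
print(int(out))          # wc -l output parses cleanly as an integer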
You'll need to implement the piping logic yourself to make it work properly.
import subprocess

def piped_call(prog1, prog2):
    # call() only returns an exit status, so use Popen with pipes instead
    p1 = subprocess.Popen(prog1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p1.communicate()
    if err:
        print(err)
        return None
    # feed the first program's output to the second program's stdin
    p2 = subprocess.Popen(prog2, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    return p2.communicate(out)
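Called, for example, like this (it returns the (stdout, stderr) tuple from the second process, or None on error):

result = piped_call(["ls", "-l", "folder"], ["wc", "-l"])
if result:
    print(result[0])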
You could try using subprocess.PIPE, assuming you wanted to avoid using subprocess.call(..., shell=True).
import subprocess
# Run 'ls', sending output to a PIPE (shell equiv.: ls -l | ... )
ls = subprocess.Popen('ls -l folder'.split(),
                      stdout=subprocess.PIPE)
# Read output from 'ls' as input to 'wc' (shell equiv.: ... | wc -l)
wc = subprocess.Popen('wc -l'.split(),
                      stdin=ls.stdout,
                      stdout=subprocess.PIPE)
# Trap stdout and stderr from 'wc'
out, err = wc.communicate()
if err:
    print(err.strip())
if out:
    print(out.strip())
For Python 3, keep in mind that the communicate() method used here returns a bytes object instead of a string. In this case you will need to convert the output to a string using decode():
if err:
    print(err.strip().decode())
if out:
    print(out.strip().decode())
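Alternatively, you can ask Popen to decode for you by passing universal_newlines=True (spelled text=True from Python 3.7 on), so communicate() returns str directly; a sketch of the second Popen from above:

wc = subprocess.Popen('wc -l'.split(),
                      stdin=ls.stdout,
                      stdout=subprocess.PIPE,
                      universal_newlines=True)
out, err = wc.communicate()   # out is already a str here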
I want to execute the bash command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
in Python using Popen. The code does not raise any exception, but no change is made to jruby.log after execution. The Python code is shown below.
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output= process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n'
I also printed out process.pid and then checked the pid using ps -ef | grep pid. The result shows that the process has finished.
Just pass a file object if you want to append the output to a file; you cannot use shell redirection unless you set shell=True:
command = ['/bin/echo', '</verbosegc>']
with open('/tmp/jruby.log', "a") as f:
    subprocess.check_call(command, stdout=f, stderr=subprocess.STDOUT)
The first argument to subprocess.Popen is the array ['/bin/echo', '</verbosegc>', '>>', '/tmp/jruby.log']. When the first argument to subprocess.Popen is an array, it does not launch a shell to run the command, and the shell is what's responsible for interpreting >> /tmp/jruby.log to mean "write output to jruby.log".
In order to make the >> redirection work in this command, you'll need to pass command directly to subprocess.Popen() without splitting it into a list, and set shell=True. You'll also need to quote the first argument (or else the shell will interpret the "<" and ">" characters in ways you don't want):
command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
Consider the following:
command = [ 'printf "%s\n" "$1" >>"$2"',  # shell script to execute
            '',                           # $0 in shell
            '</verbosegc>',               # $1
            '/tmp/jruby.log' ]            # $2
subprocess.Popen(command, shell=True)
The first argument is a shell script referring to $1 and $2, which are in turn passed as separate arguments. Keeping data separate from code, rather than trying to substitute the former into the latter, is a precaution against shell injection (think of this as an analog to SQL injection).
Of course, don't actually do anything like this in Python -- the native primitives for file IO are far more appropriate.
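For reference, the pure-Python equivalent of the original echo ... >> command needs no subprocess at all:

with open('/tmp/jruby.log', 'a') as f:   # 'a' matches the >> append
    f.write('</verbosegc>\n')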
Have you tried without splitting the command and using shell=True? My usual format is:
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
output = process.stdout.read() # or .readlines()
Is there a "nice" way to iterate over the output of a shell command?
I'm looking for the python equivalent for something like:
ls | while read file; do
    echo $file
done
Note that 'ls' is only an example of a shell command which returns its result in multiple lines, and of course 'echo' is just a stand-in for "do something with it".
I know of these alternatives: Calling an external command in Python, but I don't know which one to use or whether there is a "nicer" solution to this. (In fact, "nicer" is the main focus of this question.)
This is for replacing some bash scripts with python.
You can open a pipe (see the docs):
import os
with os.popen('ls') as pipe:
    for line in pipe:
        print(line.strip())
As noted in the documentation, this syntax is deprecated and has been replaced by the more complicated subprocess.Popen:
from subprocess import Popen, PIPE
pipe = Popen('ls', shell=True, stdout=PIPE)
for line in pipe.stdout:
    print(line.strip())
check_output will give you back a string you can parse. call will simply reuse your existing stdout/stderr/stdin and just return the process's exit code.
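For example, a minimal sketch of the check_output route (on Python 3 it returns bytes, hence the decode):

from subprocess import check_output

for line in check_output(['ls']).decode().splitlines():
    print(line)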
Currently, I am using the following command to do this
$ python scriptName.py <filePath
This command uses "<" to feed the file to the script's stdin.
It works fine; I can use sys.stdin.read to get the file data.
But what if I want to pass the file data as a string?
I don't want to pass the file path via the "<" operator.
Is there any way I can pass a string as stdin to a Python script?
Thanks,
Kamal
The way I read your question, you currently have some file abc.txt with content
Input to my program
And you execute it this way:
python scriptName.py <abc.txt
Now you no longer want to go by way of this file, and instead want to type the input as part of the command, while still reading from stdin. Working on the Windows command line you may do it like this:
echo Input to my program | python scriptName.py
while on Linux/Mac you'd better quote it to avoid shell expansion:
echo "Input to my program" | python scriptName.py
This only works for single-line input on Windows (AFAIK), while on Linux (and probably Mac) you can use the -e switch to insert newlines:
echo -e "first line\nsecond line" | python scriptName.py
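On the Python side nothing changes; the script keeps reading stdin exactly as it did with the < redirection. A minimal sketch of what scriptName.py might look like (the real script is not shown in the question):

import sys

data = sys.stdin.read()
print("received:", data)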
There is raw_input, which you can use to make the program prompt for input, and you can send in a string. And yes, it is mentioned in the first few pages of the tutorial at http://www.python.org.
>>> x = raw_input()
Something # you type
>>> x
'Something'
And sending input via <, the shell redirection operator, is a feature of the shell, not of Python.
I could be wrong, but the way I read the OP's question, he may currently be calling an OS command to run a shell script inside his Python script, then using the < operator to pass a file's contents into that shell script, with the < and the filename hard-coded.
What he really desires to do is a more dynamic approach where he can pass a string defined in Python to this shell script.
If this is the case, the method I would suggest is this:
import subprocess

script_child = subprocess.Popen(['/path/to/script/myScript.sh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = script_child.communicate("String to pass to the script.")
print "Stdout: ", stdout
print "Stderr: ", stderr
Alternatively, you can pass arguments to the script in the initial Popen like so:
script_child = subprocess.Popen(['/path/to/script/myScript.sh', '-v', 'value', '-fs'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)