I am using Ubuntu and I am running a command using the subprocess module. I am trying to find the maximum number of days a password can be used.
import subprocess
pass_max = subprocess.check_output('grep PASS_MAX_DAYS /etc/login.defs')
print(pass_max)
After running this code, I receive the error no such file or directory. How can I find the maximum number of days a password can be used?
check_output expects the command as a list:
subprocess.check_output(['grep', 'PASS_MAX_DAYS', '/etc/login.defs'])
An alternative is to pass shell=True, taking the security considerations into account:
subprocess.check_output('grep PASS_MAX_DAYS /etc/login.defs', shell=True)
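If you also want the number of days itself rather than the raw grep output, a minimal follow-up sketch (assuming the usual /etc/login.defs layout, where commented description lines start with #) could look like this:
import subprocess
out = subprocess.check_output(['grep', 'PASS_MAX_DAYS', '/etc/login.defs'])
for line in out.decode().splitlines():
    # skip the commented description lines and keep the actual setting
    if line.startswith('PASS_MAX_DAYS'):
        max_days = int(line.split()[1])
        print(max_days)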
grep PASS_MAX_DAYS /etc/login.defs is being interpreted as a single executable name, which can't be found. Use a list to pass an executable together with its arguments.
subprocess.check_output(['grep', 'PASS_MAX_DAYS', '/etc/login.defs'])
The argument of the check_output function has to be a list, so just add split() at the end of your command string:
import subprocess
pass_max = subprocess.check_output('grep PASS_MAX_DAYS /etc/login.defs'.split())
print(pass_max)
That should work
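One caveat worth noting: a plain .split() breaks if any argument contains spaces or shell-style quotes. shlex.split handles that kind of quoting (illustrative sketch):
import shlex
import subprocess
# shlex.split turns the string into ['grep', 'PASS_MAX_DAYS', '/etc/login.defs']
cmd = 'grep "PASS_MAX_DAYS" /etc/login.defs'
pass_max = subprocess.check_output(shlex.split(cmd))
print(pass_max)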
Try this:
shell_output
By using the above, you should be able to work around the issues that otherwise come up with the subprocess module.
Usage -
print(shell_output("your shell command here"))
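The definition of shell_output itself is not included in the answer; a minimal sketch of what such a helper might look like, assuming it simply wraps subprocess.check_output with shell=True (a hypothetical implementation, not from the original answer):
import subprocess
def shell_output(cmd):
    # Hypothetical helper: run a shell command string and return its decoded output.
    # shell=True carries the usual injection caveats if cmd contains untrusted input.
    return subprocess.check_output(cmd, shell=True).decode()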
My perl script is at path:
a/perl/perlScript.pl
my python script is at path:
a/python/pythonScript.py
pythonScript.py reads an argument from stdin and returns its result on stdout. From perlScript.pl, I want to run pythonScript.py with the argument hi on stdin, and save the result in a variable. This is what I tried:
my $ret = `../python/pythonScript.py < hi`;
but I got the following error:
The system cannot find the path specified.
Can you explain the path can't be found?
The qx operator (backticks) starts a shell (sh), in which prog < input syntax expects a file named input from which it will read lines and feed them to the program prog. But you want the python script to receive on its STDIN the string hi instead, not lines of a file named hi.
One way is to do exactly that: my $ret = qx(echo "hi" | python_script).
But I'd suggest considering a module for this. Here is a simple example with IPC::Run3:
use warnings;
use strict;
use feature 'say';
use IPC::Run3;
my @cmd = ('program', 'arg1', 'arg2');
my $in = "hi";
run3 \@cmd, \$in, \my $out;
say "script's stdout: $out";
The program is the path to your script if it is executable, or perhaps python script.py. This is run via system, so the output is obtained once the script completes, which is consistent with the attempt in the question. See the module's documentation for details of its operation.
This module is intended to be simple while it can "satisfy 99% of the need for using system, qx, and open3 [...]". For far more power and control, see IPC::Run.
You're getting this error because you're using shell redirection instead of just passing an argument
../python/pythonScript.py < hi
tells your shell to read input from a file called hi in the current directory, rather than using it as an argument. What you mean to do is
my $ret = `../python/pythonScript.py hi`;
Which correctly executes your python script with the hi argument, and returns the result to the variable $ret.
Some of the other answers assume that hi must be passed as a command-line parameter to the Python script, but the asker says it comes from stdin.
Thus:
my $ret = `echo "hi" | ../python/pythonScript.py`;
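On the Python side, reading that value from stdin might look like this (a minimal sketch, not the asker's actual script):
import sys
data = sys.stdin.read().strip()  # receives "hi" from the pipe
print("got: " + data)            # anything printed here ends up in $ret on the Perl side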
To launch your external script you can do
system "python ../python/pythonScript.py hi";
and then in your python script
import sys

def yourFct(a):
    ...

if __name__ == "__main__":
    yourFct(sys.argv[1])
You can find more information on the Python part here.
I was trying to use subprocess calls to perform a copy operation (code below):
import subprocess
pr1 = subprocess.call(['cp','-r','./testdir1/*','./testdir2/'], shell = True)
and I got an error saying:
cp: missing file operand
Try `cp --help' for more information.
When I try with shell=False, I get
cp: cannot stat `./testdir1/*': No such file or directory
How do I get around this problem?
I'm using RedHat Linux GNOME Deskop version 2.16.0 and bash shell and Python 2.6
P.S. I read the question posted in Problems with issuing cp command with Popen in Python, and it suggested using the shell=True option, which is not working for me as I mentioned :(
When using shell=True, pass a string, not a list to subprocess.call:
subprocess.call('cp -r ./testdir1/* ./testdir2/', shell=True)
The docs say:
On Unix with shell=True, the shell defaults to /bin/sh. If args is a
string, the string specifies the command to execute through the shell.
This means that the string must be formatted exactly as it would be
when typed at the shell prompt. This includes, for example, quoting or
backslash escaping filenames with spaces in them. If args is a
sequence, the first item specifies the command string, and any
additional items will be treated as additional arguments to the shell
itself.
So (on Unix), when a list is passed to subprocess.Popen (or subprocess.call), the first element of the list is interpreted as the command, all the other elements in the list are interpreted as arguments for the shell. Since in your case you do not need to pass arguments to the shell, you can just pass a string as the first argument.
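A quick, illustrative way to see this behaviour (the extra list items become the shell's positional parameters, not arguments to cp):
import subprocess
# Runs: /bin/sh -c 'echo "only the first item runs: $0 $1"' zero one
subprocess.call(['echo "only the first item runs: $0 $1"', 'zero', 'one'], shell=True)
# Prints: only the first item runs: zero one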
This is an old thread now, but I was just having the same problem.
The problem you were having with this call:
subprocess.call(['cp','-r','./testdir1/*','./testdir2/'], shell = False)
was that each of the parameters after the first one is quoted. So the command is effectively run like this:
cp '-r' './testdir1/*' './testdir2/'
The problem with that is the wildcard character (*). With no shell to expand it, cp looks for a file with the literal name '*' in the testdir1 directory, which of course is not there.
The solution is to make the call as in the selected answer, using the shell=True option and none of the parameters quoted.
I know that the option of shell=True may be tempting, but it is generally inadvisable due to security issues (shell injection). Instead, you can use a combination of the subprocess and glob modules.
For Python 3.5 or higher:
import subprocess
import glob
subprocess.run(['cp', '-r'] + glob.glob('./testdir1/*') + ['./testdir2/'])
For Python 3.4 or lower:
import subprocess
import glob
subprocess.call(['cp', '-r'] + glob.glob('./testdir1/*') + ['./testdir2/'])
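One thing worth guarding against (an illustrative addition, not part of the original answer): glob.glob returns an empty list when testdir1 has no entries, in which case cp would again fail with a missing-operand error, so you may want to check first:
import subprocess
import glob
files = glob.glob('./testdir1/*')
if files:
    subprocess.call(['cp', '-r'] + files + ['./testdir2/'])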
I'm trying to get the filename that's given on the command line. For example:
python3 ritwc.py < DarkAndStormyNight.txt
I'm trying to get DarkAndStormyNight.txt
When I try fileinput.filename() I get the same result as with sys.stdin. Is this possible? I'm not looking for sys.argv[0], which returns the current script name.
Thanks!
In general it is not possible to obtain the filename in a platform-agnostic way. The other answers cover sensible alternatives like passing the name on the command-line.
On Linux, and some related systems, you can obtain the name of the file through the following trick:
import os
print(os.readlink('/proc/self/fd/0'))
/proc/ is a special filesystem on Linux that gives information about processes on the machine. self means the current running process (the one that opens the file). fd is a directory containing symbolic links for each open file descriptor in the process. 0 is the file descriptor number for stdin.
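A slightly fuller sketch (Linux-only, and assuming you also want to handle the case where nothing was redirected) could combine this with a TTY check:
import os
import sys
if sys.stdin.isatty():
    print("stdin is a terminal; no file was redirected")
else:
    # Resolve the symlink for file descriptor 0 (stdin) to the underlying file.
    print(os.readlink('/proc/self/fd/0'))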
You can use ArgumentParser, which automatically gives you an interface to command-line arguments, and even provides help, etc.:
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument('fname', metavar='FILE', help='file to process')
args = parser.parse_args()
with open(args.fname) as f:
    ...  # do stuff with f
Now you call python3 ritwc.py DarkAndStormyNight.txt. If you call python3 ritwc.py with no argument, it'll give an error saying it expected an argument for FILE. You can also call python3 ritwc.py -h and it will explain that a file to process is required.
PS: here's a great intro to how to use it: http://docs.python.org/3.3/howto/argparse.html
In fact, since it seems that Python cannot see the filename when stdin is redirected from the console, you have an alternative:
Call your program like this:
python3 ritwc.py -i your_file.txt
and then add the following code to redirect stdin from inside Python, so that you have access to the filename through the variable filename_in:
import sys

flag = 0
for arg in sys.argv:
    if flag:
        filename_in = arg
        break
    if arg == "-i":
        flag = 1

sys.stdin = open(filename_in, 'r')
# the rest of your code...
If now you use the command:
print(sys.stdin.name)
you get your filename; however, when you run the same print after redirecting stdin from the console, you get <stdin>, which is evidence that Python can't see the filename that way.
I don't think it's possible. As far as your Python script is concerned, it's simply reading from stdin. The fact that your shell redirects a file's contents into stdin has nothing to do with the Python script.
I am using Fedora 17 xfce and I am programming in Python 2.7.3. Fedora uses a package manager called yum. I have a python script that searches for packages like this:
import os
package = raw_input("Enter package name to search: ")
os.system("yum list " + package)
So I want Python to check whether the words No matching packages to list appear in the output of this command. I checked a similar question and tried some of the methods there, but the string contained only the first line of the output.
Thanks in advance
os.system will not return any of the output. The question you linked to has the right answer. If you only got the first line of the output, maybe you were trying to read it line by line?
The right way to get the entire output is this:
import subprocess

package = raw_input("...")
p = subprocess.Popen(["yum", "list", package], stdout=subprocess.PIPE)
# communicate() waits for the process to exit and returns the complete output
out, err = p.communicate()
full_output = out
You would want to use the subprocess module for that, since os.system() simply returns the exit code of a command:
from subprocess import check_output
out = check_output(['yum', 'list', raw_input('package name')])
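Building on that, the check the question actually asks for might look like this (an illustrative sketch: yum exits non-zero and may print the message on stderr when nothing matches, hence the try/except and stderr=STDOUT; the exact message text can vary between yum versions):
from subprocess import check_output, CalledProcessError, STDOUT
package = raw_input('package name: ')
try:
    out = check_output(['yum', 'list', package], stderr=STDOUT)
except CalledProcessError as e:
    out = e.output  # check_output raises on a non-zero exit; the output is still available here
if 'No matching' in out:
    print('No matching packages to list')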
You could also use Yum's API directly to search packages:
from yum import YumBase
base = YumBase()
for package, name in base.searchGenerator(['name'], ['python']):
    print(package.name, package.version)
How do you specify which directory the Python subprocess module uses to execute commands? I want to run a C program and process its output in Python in near real time.
I want to execute this command (it runs the makefile for my C program):
make run_pci
In:
/home/ecorbett/hello_world_pthread
I also want to process the output from the C program in Python. I'm new to Python so example code would be greatly appreciated!
Thanks,
Edward
Use the cwd argument to Popen, call, check_call or check_output. E.g.
subprocess.check_output(["make", "run_pci"],
cwd="/home/ecorbett/hello_world_pthread")
Edit: ok, now I understand what you mean by "near realtime". Try
p = subprocess.Popen(["make", "run_pci"],
stdout=subprocess.PIPE,
cwd="/home/ecorbett/hello_world_pthread")
for ln in p.stdout:
# do processing
Read the documentation. You can specify it with the cwd argument, otherwise it uses the current directory of the parent (i.e., the script running the subprocess).