I was trying to write a python script like this:
import sys
print sys.argv[1]
print sys.argv[2]
Let's call it arg.py and run it on the command line:
python arg.py one two
it printed: one two.
Everything was fine.
Then I wanted it to be handy, so I put arg.py in my $PATH and gave it execute permission, so that wherever I am I can simply type arg on the command line to run the script. I tried
arg one two
but it failed. The error said: "bash: test: one: unary operator expected". But if I just do
arg one
it worked fine.
My question is: why can't I pass multiple arguments like this? And what is the right way?
Thanks!
You probably named your script test, which is a Bash builtin name. Name it something else.
$ help test
test: test [expr]
Evaluate conditional expression.
Exits with a status of 0 (true) or 1 (false) depending on
the evaluation of EXPR. Expressions may be unary or binary. Unary
expressions are often used to examine the status of a file. There
are string operators and numeric comparison operators as well.
The behavior of test depends on the number of arguments. Read the
bash manual page for the complete specification.
...
That's why you're getting the error from bash:
bash: test: one: unary operator expected
                 ^--------- because it expects an operator to go before 'two'
            ^-------- and test doesn't like the argument 'one' you've provided
      ^-------- because it's interpreting your command as the builtin 'test'
^--- Bash is giving you an error
You should parse command line arguments in Python using argparse or the older optparse.
Your script, as it is, should work. Remember to put a shebang line that tells the shell to use Python as the interpreter, e.g. #!/usr/bin/env python
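If you do move to argparse, a minimal sketch would look like the following (the argument names first and second are made up for illustration):
#!/usr/bin/env python
# arg.py -- minimal argparse sketch; the argument names are illustrative
import argparse

parser = argparse.ArgumentParser(description='Print two positional arguments.')
parser.add_argument('first', help='first value to print')
parser.add_argument('second', help='second value to print')
args = parser.parse_args()

print(args.first)
print(args.second)
Invoked as arg one two this prints one and two on separate lines, and arg --help gives you a usage message for free.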
I have two Python scripts: Tester1.py and Tester2.py.
Within Tester1 I want to start Tester2.py from time to time. I also want to pass Tester2.py some arguments. At the moment my code looks like this:
Tester1:
subprocess.call(['python3 Tester2.py testString'])
Tester2:
import sys

def start():
    message = sys.argv[1]
    print(message)

start()
Now my problem: if I run Tester2 from my terminal with 'python3 Tester2.py testString', my console prints out testString. But if I run Tester1 and Tester1 tries to start Tester2, I get an IndexError: "list index out of range".
How do I need to change my code to get everything working?
EDIT:
niemmi told me that I have to change my code to:
subprocess.call(['python3', 'Tester2.py', 'testString'])
but now I get a "No such file or directory" error, although both scripts are in the same directory. Does anyone know why?
You need to provide the arguments either as separate elements of a list, or as a single string together with shell=True:
subprocess.call(['python3', 'Tester2.py', 'testString'])
# or
subprocess.call('python3 Tester2.py testString', shell=True)
The Python documentation has the following description:
args is required for all calls and should be a string, or a sequence of program arguments. Providing a sequence of arguments is generally preferred, as it allows the module to take care of any required escaping and quoting of arguments (e.g. to permit spaces in file names). If passing a single string, either shell must be True (see below) or else the string must simply name the program to be executed without specifying any arguments.
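If the "No such file or directory" error comes from the child's working directory not being the directory that holds the scripts (a plausible cause, though not confirmed here), a sketch that builds an absolute path and reuses the current interpreter would look like this:
# Tester1.py -- sketch assuming Tester2.py sits next to Tester1.py on disk
import os
import subprocess
import sys

script_dir = os.path.dirname(os.path.abspath(__file__))
tester2 = os.path.join(script_dir, 'Tester2.py')

# sys.executable is the Python running Tester1, so the child uses the same interpreter
subprocess.call([sys.executable, tester2, 'testString'])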
I would like to be able to log the command used to run the current python script within the script itself. For instance this is something I tried:
# test.py
import os

# ~ is not expanded automatically, so expand it explicitly
with open(os.path.expanduser('~/.bash_history'), 'r') as f:
    for line in f.readlines():
        continue  # after the loop, line holds the last entry in the file

with open('logfile', 'w') as f:
    f.write('the command you ran: %s' % line.strip('\n'))
However the .bash_history does not seem to be ordered in chronological order. What's the best recommended way to achieve the above for easy logging? Thanks.
Update: unfortunately sys.argv doesn't quite solve my problem because I sometimes need to use process substitution for input variables.
e.g. python test.py <( cat file | head -3)
What you want to do is not universally possible. As devnull says, the history file in bash is not written for every command typed. In some cases it's not written at all (user sets HISTFILESIZE=0, or uses a different shell).
The command as typed is parsed and processed long before your Python script is invoked, so your question is not really about Python at all. Whether what you want to do is possible is entirely up to the invoking shell, and bash does not provide what you want.
If you can control the caller's shell, you could try using zsh instead. There, if you setopt INC_APPEND_HISTORY, zsh will append to its history file for each command typed, so you can use the parse-the-history-file approach.
One option is to use sys.argv. It will contain a list of arguments you passed to the script.
import sys
print 'Number of arguments:', len(sys.argv), 'arguments.'
print 'Argument List:', str(sys.argv)
Example output:
>python test.py
Number of arguments: 1 arguments.
Argument List: ['test.py']
>python test.py -l ten
Number of arguments: 3 arguments.
Argument List: ['test.py', '-l', 'ten']
As you can see, the sys.argv variable contains the name of the script and then each individual parameter passed. It does miss the python portion of the command, though.
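If you also want the interpreter part, here is a rough sketch (Python 3; shlex.quote is used so arguments containing spaces stay readable):
# log_invocation.py -- reconstructs an approximation of the invoking command line
import shlex
import sys

command = ' '.join(shlex.quote(part) for part in [sys.executable] + sys.argv)
print('the command you ran (approximately):', command)
Note that shell redirections and the literal text of process substitutions are handled by the shell before Python starts, so they cannot be recovered this way.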
I am a bit confused as to how to get this done.
What I need to do is call an external command, from within a Python script, that takes as input several arguments, and a file name.
Let's call the executable that I am calling "prog", the input file "file", so the command line (in Bash terminal) looks like this:
$ prog --{arg1} {arg2} < {file}
In the above {arg1} is a string, and {arg2} is an integer.
If I use the following:
#!/usr/bin/python
import subprocess as sbp
sbp.call(["prog","--{arg1}","{arg2}","<","{file}"])
The result is an error from "prog", claiming that {arg2} is missing from the input.
The following produces an interesting error:
#!/usr/bin/python
import subprocess as sbp
sbp.call(["prog","--{arg1} {arg2} < {file}"])
all the spaces seem to have been removed from the second string, and an equal sign appended at the very end:
command not found --{arg1}{arg2}<{file}=
None of this behavior makes any sense to me, and there isn't much to go on in the Python documentation found online. Please note that replacing sbp.call with sbp.Popen does not fix the problem.
The issue is that < {file} isn’t actually an argument to the program, but is syntax for the shell to set up redirection. You can tell Python to use the shell, or you can setup the redirection yourself.
from subprocess import *
# have shell interpret redirection
check_call('wc -l < /etc/hosts', shell=True)
# set up redirection in Python
with open('/etc/hosts', 'r') as f:
    check_call(['wc', '-l'], stdin=f.fileno())
The advantage of the first method is that it’s faster and easier to type. There are a lot of disadvantages, though: it’s potentially slower since you’re launching a shell; it’s potentially non-portable because it depends on the operating system shell’s syntax; and it can easily break when there are spaces or other special characters in filenames.
So the second method is preferred.
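Applied to the command from the question (prog, arg1, arg2 and file are placeholders taken from the question, not real names), a sketch of the second method might look like:
import subprocess

# placeholders from the question; substitute real values
prog = 'prog'
arg1 = 'some-flag-name'
arg2 = 42
filename = 'file'

with open(filename, 'r') as f:
    # arg2 must be converted to a string; subprocess does not do that for you
    subprocess.check_call([prog, '--%s' % arg1, str(arg2)], stdin=f)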
I was reading Programming Python, 4th Edition, by Mark Lutz (O'Reilly), teaching myself.
There's an example of how to fork a child process that I don't quite understand:
os.execlp('python', 'python', 'child.py', #other args#)
In an interactive shell (like bash), I know I can type python child.py #args# to ask the Python interpreter to run child.py with args.
Why are there TWO 'python' arguments in the execlp() call? If I put only one python in the call, I get a "cannot find file or directory" error naming the first of the arguments intended for child.py.
The first argument is the program to execute (found on the PATH). The rest are the sys.argv arguments to the program.
The first such argument is the program name used to invoke it, and the display value used in the OS process list. It is the value of sys.argv[0] in a python script.
First of all, execlp is rarely used today. In most cases, you'd use the subprocess module, like this:
subprocess.call(['python', 'child.py'])
The first argument of execlp is the file you want to execute.
The latter arguments form the argument array to that program (sys.argv in Python). The first argument is then the name the program got invoked with. For example, Python sets the name to '-c' if the program is being run with the -c option. Similarly, grep behaves differently depending on the first argument, so that users can execute rgrep to imply grep -r.
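To make the two 'python' arguments concrete, here is a small sketch (child.py and the 'hello' argument are illustrative names):
# parent.py -- sketch: replace the current process with "python child.py hello"
import os

# first 'python': the executable looked up on PATH
# second 'python': what the new process receives as its argv[0]
os.execlp('python', 'python', 'child.py', 'hello')

# child.py would then observe:
#   sys.argv == ['child.py', 'hello']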
Simply put, how can I differentiate these two in test.py:
python test.py 1
python test.py '1'
Workaround is OK.
Edit:
This workaround looks cool but too complex: argparse
Let the invoker specify args later; in the Python code, use arg = input('Please enter either an integer or a string').
And other workarounds as presented in the answers of this question.
Thank you all for the replies. Everybody +1.
The quotes are consumed by the shell. If you want to get them into python, you'll have to invoke like python test.py 1 "'2'" "'3'" 4
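With that invocation the inner quotes survive into sys.argv, so a sketch of what the script could then check (purely illustrative) is:
# test.py -- sketch: detect arguments that arrived wrapped in their own single quotes
import sys

for arg in sys.argv[1:]:
    if len(arg) >= 2 and arg[0] == "'" and arg[-1] == "'":
        print('quoted string:', arg[1:-1])
    else:
        print('unquoted value:', arg)
Invoked as python test.py 1 "'2'" this prints "unquoted value: 1" and "quoted string: 2".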
This is the shell's normal handling of arguments: " and ' are stripped by the shell, since you may use them to pass, for instance, a few words as one argument.
This means that you can't differentiate '1' and 1 in Python.
The shell command line doesn't support passing arguments of different types. If you want to have commands with arguments of different types you need to write your own command line or at least your own command parser.
Variant 1:
Usage: python test.py "1 2 '3' '4'"
Implementation:
import ast
import sys

command = sys.argv[1]
arguments = map(ast.literal_eval, command.split())
print arguments
Variant 2:
Usage:
python test.py
1 2 '3' '4'
5 6 '7' '8'
Implementation:
import ast
import sys

for line in sys.stdin:
    arguments = map(ast.literal_eval, line.split())
    print arguments
(Of course, you'd probably want to use raw_input to read the command lines, and readline when it is available; this is merely an example.)
A much better solution would be to actually know what kind of arguments you're expected to get and parse them as such, preferably by using a module like argparse.
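For instance, if the script is supposed to receive one integer and one string (the names count and label below are made up), an argparse sketch could be:
# test.py -- sketch: declare the expected types up front instead of guessing
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('count', type=int, help='an integer argument')
parser.add_argument('label', help='a string argument')
args = parser.parse_args()

print(args.count + 1)      # usable as an int
print(args.label.upper())  # usable as a str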
Windows-specific:
# test.py
import win32api
print(win32api.GetCommandLine())
Example:
D:\>python3 test.py 3 "4"
C:\Python32\python3.EXE test.py 3 "4"
You can then parse the command line yourself.
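A rough sketch of such parsing (it only looks for double-quoted tokens and deliberately ignores the full Windows quoting rules):
# test.py -- Windows-only sketch, requires the pywin32 package
import win32api

raw = win32api.GetCommandLine()
tokens = raw.split()  # naive split; real parsing would follow the CommandLineToArgvW rules
for token in tokens:
    if len(token) >= 2 and token.startswith('"') and token.endswith('"'):
        print('quoted:', token[1:-1])
    else:
        print('bare:', token)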
As you can see from your experiment, the quotes are gone by the time Python is invoked. You'll have to change how the Python is invoked.
I'm not sure how correct I am, but if you're using only integer command-line arguments, you can convert them to int.
Suppose (on *nix) I run my program as:
./test.py 1
I can, in my program, say something like
import sys

def main():
    a = int(sys.argv[1])
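If the argument might not actually be an integer, a small sketch that falls back to treating it as a string:
# sketch: try int() first, otherwise keep the raw string
import sys

def main():
    raw = sys.argv[1]
    try:
        value = int(raw)
        print('got an integer:', value)
    except ValueError:
        print('got a string:', raw)

if __name__ == '__main__':
    main()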