Calling py script from another py prints the value - python

Hi, I am writing Python for the first time.
I have an existing getprop.py script that loads a property file and prints the value of a given property:
import sys
import util

if len(sys.argv) < 3:
    print "Error! Usage is: getprop.py [propfile] [propname]"
    sys.exit(1)
props = util.loadprops(sys.argv[1])
if sys.argv[2] in props:
    print props[sys.argv[2]]
Now I need to get the value of a property in another Python script, so I modified the script above so that I do not disturb its functionality and can use it from another script:
import sys
import util

def getpropvalue(propfile, propname):
    props = util.loadprops(propfile)
    if propname in props:
        return props[propname]

if len(sys.argv) < 3:
    print "Error! Usage is: getprop.py [propfile] [propname]"
    sys.exit(1)
else:
    print getpropvalue(sys.argv[1], sys.argv[2])
Then in the other script I import getprop and call the function like getprop.getpropvalue(FILE_NAME, PROP_NAME), and it prints the value of the property on the screen.
Why does it print the value? Is there a better way to solve this problem?

There is a way to run the code only when the script is called directly. Add these lines to the end of your getprop code:
if __name__ == "__main__":
    main()
This way the main function is only called if you run the script directly (not when importing it). Is that what you're looking for?
Some explanation: every module has a __name__ variable, and it is set to "__main__" only when the file is run directly, e.g. from a console as python script.py
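To see that behavior in action, here is a self-contained sketch (the module name demo and the temp directory are made up for illustration): the same file prints "__main__" when run as a script, but its own module name when imported.

```python
import os
import subprocess
import sys
import tempfile

# Write a tiny module whose only job is to print its __name__.
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, 'demo.py')
with open(path, 'w') as f:
    f.write('print(__name__)\n')

# Run it directly: __name__ is "__main__".
direct = subprocess.check_output([sys.executable, path]).decode().strip()
print(direct)  # -> __main__

# Import it: __name__ is the module name, "demo".
sys.path.insert(0, tmp)
import demo  # prints "demo" at import time
```

Anything guarded by if __name__ == "__main__": therefore runs only in the first case.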

Change your getprop.py to this:
import sys
import util

def getpropvalue(propfile, propname):
    props = util.loadprops(propfile)
    if propname in props:
        return props[propname]

if __name__ == '__main__':
    if len(sys.argv) < 3:
        print "Error! Usage is: getprop.py [propfile] [propname]"
        sys.exit(1)
    else:
        print getpropvalue(sys.argv[1], sys.argv[2])
This will prevent the code from being executed when it is imported.


Calling a python script with arguments using subprocess

I have a Python script which calls another Python script in another directory. To do that I used subprocess.Popen:
import os
import subprocess
import sys

arg_list = [project, profile, reader, file, str(loop)]  # all args are strings, or converted explicitly
f = open(project_path + '/log.txt', 'w')
proc = subprocess.Popen([sys.executable, python_script] + arg_list,
                        stdin=subprocess.PIPE, stdout=f, stderr=f)
streamdata = proc.communicate()[0]
retCode = proc.returncode
f.close()
This part works well; thanks to the log file I can see the errors that occur in the called script. Here is the called Python script:
import time
import csv
import os

class loading(object):
    def __init__(self, project=None, profile=None, reader=None, file=None, loop=None):
        self.project = project
        self.profile = profile
        self.reader = reader
        self.file = file
        self.loop = loop

    def csv_generation(self):
        f = open(self.file, 'a')
        try:
            writer = csv.writer(f)
            if self.loop == True:
                writer.writerow((self.project, self.profile, self.reader))
            else:
                raise ValueError('File already completed')
        finally:
            f.close()

def main():
    p = loading(project, profile, reader, file, loop)
    p.csv_generation()

if __name__ == "__main__":
    main()
When I launch my subprocess.Popen, I get an error from the called script telling me that 'project' is not defined. It looks like the Popen method doesn't pass arguments to that script. I think I'm doing something wrong; does someone have an idea?
When you pass parameters to a new process they are passed positionally; the names from the parent process do not survive, only the values. You need to add:
import sys

def main():
    if len(sys.argv) == 6:
        project, profile, reader, file, loop = sys.argv[1:]
    else:
        raise ValueError("incorrect number of arguments")
    p = loading(project, profile, reader, file, loop)
    p.csv_generation()
We are testing the length of sys.argv before the assignment (the first element is the name of the program).
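A minimal round trip illustrates the point: only the values travel across the process boundary, and the child rebuilds its own names from sys.argv. The child script and the argument values below are stand-ins for illustration.

```python
import os
import subprocess
import sys
import tempfile

# A throwaway child script that reads its three positional arguments.
child = (
    "import sys\n"
    "project, profile, reader = sys.argv[1:4]\n"
    "print(' '.join([project, profile, reader]))\n"
)

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, 'child.py')
with open(path, 'w') as f:
    f.write(child)

# The parent's variable names are irrelevant; only these values arrive.
out = subprocess.check_output(
    [sys.executable, path, 'proj', 'prof', 'reader1']).decode()
print(out.strip())  # -> proj prof reader1
```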

Better solution than if __name__ == '__main__' twice in Python script

I have multiple Python scripts which use docopt.
My issue is that the available options for the two scripts differ slightly - one option is present in one script, but not the other.
I've included a minimum working example below.
If I run:
python main.py --num=7 --name=John
the script does not run, as --name=John is also passed to module1.py, where it is not a valid option.
With my actual script, I have several imports after docopt parses the arguments, so I cannot simply move the docopt call to the bottom of the script (under if __name__ == '__main__':). If I do, the imports in the imported script never run, and I get undefined-name errors.
I have found a workaround, but I don't think it is good practice at all.
What I am doing is adding:
if __name__ == '__main__':
    arguments = docopt.docopt(__doc__, version=0.1)
just after the import docopt.
However, I believe that having two of these statements in a script is bad practice. I cannot think of any other workarounds at this time though.
Can someone suggest a better solution? Thanks in advance.
main.py
"""
main.py
Usage:
main.py [--num=<num>] [--name=<name>] [--lib=<lib-dir>]
main.py -h | --help
main.py --version
Options:
--num=<num> A number
--name=<name> A name
--lib=<lib-dir> Path to the directory containing lib
--version
"""
import docopt
arguments = docopt.docopt(__doc__, version=0.1)
library_path = os.path.abspath(arguments['--lib'])
sys.path.insert(1, library_path)
NUM = arguments['--num']
from other_file import x, y
from module1 import function
def main():
print 'In main()'
function()
print NUM
if __name__ == '__main__':
print '{} being executed directly'.format(__name__)
main()
module1.py:
"""
module1.py
Usage:
module1.py [--num=<num>] [--lib=<lib-dir>]
module1.py -h | --help
module1.py --version
Options:
--num=<num> A number
--lib=<lib-dir> Path to the directory containing lib
--version
"""
import docopt
arguments = docopt.docopt(__doc__, version=0.1)
library_path = os.path.abspath(arguments['--lib'])
sys.path.insert(1, library_path)
NUM = arguments['--num']
from other_file import z
def main():
print 'In main()'
print NUM
def function():
print 'In function in {}'.format(__name__)
# print NUM
if __name__ == '__main__':
print '{} being executed directly'.format(__name__)
main()
EDIT:
I forgot to mention that the other_file module has many different versions. Because of this, one of the docopt options is the path to the file. This is then added to sys.path as follows:
library_path = os.path.abspath(arguments['--lib'])
sys.path.insert(1, library_path)
For this reason, the docopt call in the global scope is needed: it adds the path of the other_file module to my system path before the import.
The global variable (NUM below, DEBUG in my actual file) I can live without.
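The --lib mechanism itself boils down to extending sys.path before importing from the extra directory. A self-contained sketch (the temp directory, module name other_file, and value z are stand-ins for the real library path):

```python
import os
import sys
import tempfile

# Create a stand-in "library" directory containing other_file.py.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'other_file.py'), 'w') as f:
    f.write('z = 42\n')

# Same trick as sys.path.insert(1, library_path) in the question.
sys.path.insert(1, tmp)
from other_file import z
print(z)  # -> 42
```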
The clean solution is to refactor your code so it doesn't rely on a global, neither in main.py nor module1.py:
"""
main.py
Usage:
main.py [--num=<num>] [--name=<name>]
main.py -h | --help
main.py --version
Options:
--num=<num> A number
--name=<name> A name
--version
"""
from other_file import x, y
from module1 import function
def main(num):
print 'In main()'
function(num)
print num
if __name__ == '__main__':
import docopt
arguments = docopt.docopt(__doc__, version=0.1)
NUM = arguments['--num']
print '{} being executed directly'.format(__name__)
main(NUM)
And:
"""
module1.py
Usage:
module1.py [--num=<num>]
module1.py -h | --help
module1.py --version
Options:
--num=<num> A number
--version
"""
from other_file import z
def main(num):
print 'In main()'
print num
def function(num):
print 'In function in {}'.format(__name__)
print num
if __name__ == '__main__':
import docopt
arguments = docopt.docopt(__doc__, version=0.1)
NUM = arguments['--num']
print '{} being executed directly'.format(__name__)
main(NUM)

Python3: command not found, when running from cli

I am trying to run my Python module as a command; however, I always get the error: command not found.
#!/usr/bin/env python
import sys
import re
from sys import stdin
from sys import stdout

class Grepper(object):
    def __init__(self, pattern):
        self.pattern = pattern

    def pgreper(self):
        y = str(self.pattern)
        for line in sys.stdin:
            regex = re.compile(y)
            x = re.search(regex, line)
            if x:
                sys.stdout.write(line)

if __name__ == "__main__":
    print("hello")
    pattern = str(sys.argv[1])
    Grepper(pattern).pgreper()
else:
    print("nope")
I am not sure whether it has something to do with the line:
if __name__ == "__main__":
However I just can't figure it out, this is a new area for me, and it's a bit stressful.
Your script name should have a .py extension, so it should be named something like pgreper.py.
To run it, you need to do either python pgreper.py pattern_string or if it has executable permission, as explained by Gabriel, you can do ./pgreper.py pattern_string. Note that you must give the script path (unless the current directory is in your command PATH); pgreper.py pattern_string will cause bash to print the "command not found" error message.
You can't pass the pattern data to it by piping, IOW, cat input.txt | ./pgreper.py "pattern_string" won't work: the pattern has to be passed as an argument on the command line. I guess you could do ./pgreper.py "$(cat input.txt)" but it'd be better to modify the script to read from stdin if you need that functionality.
Sorry, I didn't read the body of your script properly. :embarrassed:
I now see that your pgreper() method reads data from stdin. Sorry if the paragraph above caused any confusion.
By way of apology for my previous gaffe, here's a slightly cleaner version of your script.
#!/usr/bin/env python
import sys
import re

class Grepper(object):
    def __init__(self, pattern):
        self.pattern = pattern

    def pgreper(self):
        regex = re.compile(self.pattern)
        for line in sys.stdin:
            if regex.search(line):
                sys.stdout.write(line)

def main():
    print("hello")
    pattern = sys.argv[1]
    Grepper(pattern).pgreper()

if __name__ == "__main__":
    main()
else:
    print("nope")
Make sure you have something executable at /usr/bin/env. When you run your Python module as a command, that is what will be called as the interpreter. You may need to replace it with /usr/bin/python or /usr/bin/python3 if you don't have an env command.
Also, make sure your file is executable (chmod +x my_module.py) and try to run it with ./my_module.py.

how to pass arguments to a module in python 2.x interactive mode

I'm using Python 2.7, and I have the following simple script, which expects one command line argument:
#!/usr/bin/env python
import sys

if len(sys.argv) == 2:
    print "Thanks for passing ", sys.argv[1]
else:
    print "Oops."
I can do something like this from the command line:
My-Box:~/$ ./useArg.py asdfkjlasdjfdsa
Thanks for passing asdfkjlasdjfdsa
or this:
My-Box:~/$ ./useArg.py
Oops.
I would like to do something similar from the interactive interpreter:
>>> import useArg asdfasdf
File "<stdin>", line 1
import useArg asdfasdf
^
SyntaxError: invalid syntax
but I don't know how. How can I pass parameters to import/reload in the interactive interpreter?
You can't. Wrap your code inside a function:
#!/usr/bin/env python
import sys

def main(args):
    if len(args) == 2:
        print "Thanks for passing ", args[1]
    else:
        print "Oops."

if __name__ == '__main__':
    main(sys.argv)
If you execute your script from the command line you can use it as before; if you want to use it from the interpreter:
import useArg
useArg.main(['foo', 'bar'])
In this case you have to put a dummy value in the first position of the list, so most of the time a much better solution is to use the argparse library. You can also check the number of command-line arguments before calling the main function:
import sys

def main(arg):
    print(arg)

if __name__ == '__main__':
    if len(sys.argv) == 2:
        main(sys.argv[1])
    else:
        main('Oops')
You can find good explanation what is going on when you execute if __name__ == '__main__': here: What does if __name__ == "__main__": do?
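The argparse suggestion above can be sketched as follows: the parser lives in its own function so the interactive interpreter can call it with an explicit list, while the __main__ guard keeps the command-line behavior. The argument name value and the 'Oops' default mirror the example; both are illustrative choices.

```python
import argparse

def main(value):
    print("Thanks for passing", value)

def parse_args(argv=None):
    # argv=None makes argparse fall back to sys.argv[1:] when run as a script;
    # pass an explicit list (e.g. ["hello"]) from the interpreter.
    parser = argparse.ArgumentParser()
    parser.add_argument("value", nargs="?", default="Oops")
    return parser.parse_args(argv)

if __name__ == "__main__":
    main(parse_args().value)
```

From the interpreter: import the module and call main(parse_args(["asdfkjlasdjfdsa"]).value), with no dummy program-name element needed.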

executing a python script from another python script

I have a script a.py :
#!/usr/bin/env python

def foo(arg1, arg2):
    return int(arg1) + int(arg2)

if __name__ == "__main__":
    import sys
    print foo(sys.argv[1], sys.argv[2])
I now want to make a script that runs the first script and writes the output of a.py to a file, with some arguments as well. I want automate_output(src, arglist) to generate some kind of output that I can write to the outfile:
import sys

def automate_output(src, arglist):
    return ""

def print_to_file(src, outfile, arglist):
    print "printing to file %s" % (outfile)
    out = open(outfile, 'w')
    s = open(src, 'r')
    for line in s:
        out.write(line)
    s.close()
    out.write(" \"\"\"\n Run time example: \n")
    out.write(automate_output(src, arglist))
    out.write(" \"\"\"\n")
    out.close()

try:
    src = sys.argv[1]
    outfile = sys.argv[2]
    arglist = sys.argv[3:]
    automate_output(src, arglist)
    print_to_file(src, outfile, arglist)
except:
    print "error"
    # print "usage : python automate_runtime.py scriptname outfile args"
I have tried searching around, but so far I do not understand how to pass arguments using os.system. I have also tried:
import a
a.main()
There I get a NameError: name 'main' is not defined
Update:
I researched some more and found subprocess, and it seems I'm quite close to cracking it now.
The following code works, but I would like to pass args instead of manually passing '2' and '3':
import subprocess

src = 'bar.py'
args = ('2', '3')
proc = subprocess.Popen(['python', src, '2', '3'],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print proc.communicate()[0]
This is not a function, it's an if statement:
if __name__ == "__main__":
...
If you want a main function, define one:
import sys

def main():
    print foo(sys.argv[1], sys.argv[2])
Then just call it if you need to:
if __name__ == "__main__":
    main()
a.main() has nothing to do with the if __name__ == "__main__" block. The former calls a function named main() from a module; the latter executes its block if the current module's name is __main__, i.e., when the module is run as a script.
#!/usr/bin/env python
# a.py
def func():
    print repr(__name__)

if __name__ == "__main__":
    print "as a script",
    func()
Compare a module executed as a script and a function called from the imported module:
$ python a.py
as a script '__main__'
$ python -c "import a; print 'via import',; a.func()"
via import 'a'
See section Modules in the Python tutorial.
To get output from the subprocess you could use subprocess.check_output() function:
import sys
from subprocess import check_output as qx

args = ['2', '3']
output = qx([sys.executable, 'bar.py'] + args)
print output
