I have a Python script which calls another Python script from another directory. To do that I used subprocess.Popen:
import os
import subprocess
import sys

arg_list = [project, profile, reader, file, str(loop)]
where all arguments are strings (converted explicitly where needed)
f = open(project_path + '/log.txt', 'w')
proc = subprocess.Popen([sys.executable, python_script] + arg_list, stdin=subprocess.PIPE, stdout=f, stderr=f)
streamdata = proc.communicate()[0]
retCode = proc.returncode
f.close()
This part works well; thanks to the log file I can see errors that occur in the called script. Here's the Python script being called:
import time
import csv
import os

class loading(object):
    def __init__(self, project=None, profile=None, reader=None, file=None, loop=None):
        self.project = project
        self.profile = profile
        self.reader = reader
        self.file = file
        self.loop = loop

    def csv_generation(self):
        f = open(self.file, 'a')
        try:
            writer = csv.writer(f)
            if self.loop == True:
                writer.writerow((self.project, self.profile, self.reader))
            else:
                raise('File already completed')
        finally:
            file.close()

def main():
    p = loading(project, profile, reader, file, loop)
    p.csv_generation()

if __name__ == "__main__":
    main()
When I launch my subprocess.Popen, I get an error from the called script telling me that 'project' is not defined. It looks like the Popen call doesn't pass the arguments to that script. I think I'm doing something wrong; does someone have an idea?
When you pass parameters to a new process they are passed positionally; the names from the parent process do not survive, only the values. You need to add:
import sys

def main():
    if len(sys.argv) == 6:
        project, profile, reader, file, loop = sys.argv[1:]
    else:
        raise ValueError("incorrect number of arguments")
    p = loading(project, profile, reader, file, loop)
    p.csv_generation()
We are testing the length of sys.argv before the assignment (the first element is the name of the program).
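Note, too, that every value in sys.argv is a string: loop arrives as the string 'True', so the self.loop == True test in csv_generation will never match until you convert it. A minimal sketch of what the child process sees (the script name and argument values here are only illustrative):
# inspect_args.py -- a hypothetical child script; run it e.g. as:
#   python inspect_args.py myproj default reader1 out.csv True
import sys

if __name__ == "__main__":
    # sys.argv[0] is the script path; the rest are the parent's values,
    # passed positionally and always received as plain strings.
    for i, value in enumerate(sys.argv):
        print(i, repr(value))
    # One way to turn the last argument back into a boolean:
    loop = len(sys.argv) > 5 and sys.argv[5] == 'True'
    print('loop as bool:', loop)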
For a Python script that uses argparse and has a very long argument list, is it possible to make argparse page what it prints to the terminal when calling the script with the -h option?
I could not find a quick answer, so I wrote a little something:
# hello.py
import argparse
import os
import shlex
import stat
import subprocess as sb
import tempfile

def get_pager():
    """
    Get path to your pager of choice, or less, or more
    """
    pagers = (os.getenv('PAGER'), 'less', 'more',)
    for path in (os.getenv('PATH') or '').split(os.path.pathsep):
        for pager in pagers:
            if pager is None:
                continue
            pager = iter(pager.split(' ', 1))
            prog = os.path.join(path, next(pager))
            args = next(pager, None) or ''
            try:
                md = os.stat(prog).st_mode
                if md & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                    return '{p} {a}'.format(p=prog, a=args)
            except OSError:
                continue

class CustomArgParser(argparse.ArgumentParser):
    """
    A custom ArgumentParser class that prints help messages
    using either your pager, or less or more, if available.
    Otherwise, it does what ArgumentParser would do.
    Use the PAGER environment variable to force it to use your pager
    of choice.
    """
    def print_help(self, file=None):
        text = self.format_help()
        pager = get_pager()
        if pager is None:
            return super().print_help(file)
        fd, fname = tempfile.mkstemp(prefix='simeon_help_', suffix='.txt')
        with open(fd, 'w') as fh:
            super().print_help(fh)
        cmd = shlex.split('{p} {f}'.format(p=pager, f=fname))
        with sb.Popen(cmd) as proc:
            rc = proc.wait()
            if rc != 0:
                super().print_help(file)
        try:
            os.unlink(fname)
        except:
            pass

if __name__ == '__main__':
    parser = CustomArgParser(description='Some little program')
    parser.add_argument('--message', '-m', help='Your message', default='hello world')
    args = parser.parse_args()
    print(args.message)
This snippet does two main things. First, it defines a function to get the absolute path to a pager: if you set the environment variable PAGER, it will try to use it to display the help messages. Second, it defines a custom class that inherits pretty much everything from argparse.ArgumentParser. The only method that gets overridden here is print_help. It implements print_help by defaulting to super().print_help() whenever a valid pager is not found. If a valid pager is found, then it writes the help message to a temporary file and opens a child process that invokes the pager with the path to the temporary file. When the pager returns, the temporary file is deleted. That's pretty much it.
You are more than welcome to update get_pager to add as many pager programs as you see fit.
Call the script:
python3 hello.py --help ## Uses less
PAGER='nano --view' python3 hello.py --help ## Uses nano
PAGER=more python3 hello.py --help ## Uses more
I have some code that parses command line options using argparse.
For example:
# mycode.py
import argparse

def parse_args():
    parser = argparse.ArgumentParser('my code')
    # list of arguments
    # ...
    # ...
    return vars(parser.parse_args())

if __name__ == "__main__":
    parse_args()
I would like to use unittest to check the output of the help function. I also don't want to change the actual code unless there is no other solution.
The help action has a SystemExit call built into it after printing to stdout, so I have had to try and catch it in the unittest.
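You can see that behavior in isolation with a minimal sketch:
import argparse

parser = argparse.ArgumentParser()
try:
    # -h prints the help text to stdout, then raises SystemExit(0)
    parser.parse_args(['-h'])
except SystemExit as exc:
    print('caught SystemExit, exit code:', exc.code)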
Here is my unittest code with the following steps:
1) Set the sys.argv list to include the -h flag.
2) Wrap the function call in a context manager to prevent the SystemExit being viewed as an error.
3) Switch the sys.stdout temporarily to an io.StringIO object so I can inspect it without having it print to screen.
4) Call the function in a try...finally block so the SystemExit isn't fatal.
5) Switch sys.stdout back to the real stdout.
6) Open a file to which I had previously saved the help text (by entering python mycode.py -h > help_out.txt in the terminal) to verify it is the same as the captured output from the StringIO.
import unittest
import mycode
import sys
import io

class TestParams(unittest.TestCase):

    def setUp(self):
        pass

    def test_help(self):
        args = ["-h"]
        sys.argv[1:] = args
        with self.assertRaises(SystemExit):
            captured_output = io.StringIO()
            sys.stdout = captured_output
            try:
                mycode.parse_args()
            finally:
                sys.stdout = sys.__stdout__
        with open("help_out.txt", "r") as f:
            help_text = f.read()
        self.assertEqual(captured_output, help_text)

    def tearDown(self):
        pass
This code works, but the captured_output StringIO object is empty, so the test fails.
I am looking for an explanation as to what is going wrong with the captured output and/or an alternative solution.
I was very close. The captured_output wasn't actually empty - I just wasn't accessing the contents correctly.
Substitute captured_output.getvalue() for captured_output in the assertEqual call in my example code and it works perfectly.
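In other words, the corrected assertion reads:
self.assertEqual(captured_output.getvalue(), help_text)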
Python 3.6
I want to capture all output from a subprocess which I run with the subprocess module. I can easily pipe this output to a log file, and it works great.
But, I want to filter out a lot of the lines (lots of noisy output from modules I do not control).
Attempt 1
def run_command(command, log_file):
    process = subprocess.Popen(command, stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT, bufsize=1,
                               universal_newlines=True)
    while True:
        output = process.stdout.readline()
        if output == '' and process.poll() is not None:
            break
        if output and not_noisy_line(output):
            log_file.write(output)
            log_file.flush()
    return process.poll()
But this introduced a race condition between my subprocess and the output.
Attempt 2
I created a new method and a class to wrap the logging.
def run_command(command, log_file):
    process = subprocess.run(command, stdout=QuietLogger(log_file), stderr=QuietLogger(log_file), timeout=120)
    return process.returncode

class QuietLogger(io.TextIOWrapper):
    def write(self, data, encoding=sys.getdefaultencoding()):
        data = filter(data)
        super().write(data)
This does, however, completely skip my filter function; my write method is not called at all by the subprocess. (If I call QuietLogger().write('asdasdsa') directly, it goes through the filters.)
Any clues?
This is an interesting situation in which the file object abstraction partially breaks down. The reason your solution does not work is that subprocess is not actually using your QuietLogger; it only gets the raw file descriptor out of it (via fileno()) and then repackages that as a new io.TextIOWrapper object.
I don't know if this is an intrinsic limitation in how the subprocess is handled, relying on OS support, or if this is just a mistake in the Python design, but in order to achieve what you want, you need to use the standard subprocess.PIPE and then roll your own file writer.
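You can verify this with a small sketch (the Probe class is made up for illustration; it assumes a POSIX system with echo on the PATH): subprocess never calls write() on the object you hand it, it only asks for a file descriptor:
import subprocess
import sys

class Probe:
    # subprocess only wants a real file descriptor from this object
    def fileno(self):
        print('fileno() was called', file=sys.stderr)
        return sys.stdout.fileno()

    def write(self, data):
        print('write() was called', file=sys.stderr)  # never happens

subprocess.run(['echo', 'hello'], stdout=Probe())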
If you can wait for the subprocess to finish, then it can be trivially done using subprocess.run and then picking the stream out of the CompletedProcess object (p):
p = subprocess.run(command, stdout=subprocess.PIPE, universal_newlines=True)
data = filter(p.stdout)  # filter() here stands for your own filtering logic
with open(logfile, 'w') as f:
    f.write(data)
If you must work with the output while it is being generated (thus, you cannot wait for the subprocess to end), the simplest way is to resort to subprocess.Popen and threads:
import subprocess
import threading

logfile = 'tmp.txt'
filter_passed = lambda line: line[:3] != 'Bad'
command = ['my_cmd', 'arg']

def writer(p, logfile):
    with open(logfile, 'w') as f:
        for line in p.stdout:
            if filter_passed(line):
                f.write(line)

p = subprocess.Popen(command, stdout=subprocess.PIPE, universal_newlines=True)
t = threading.Thread(target=writer, args=(p, logfile))
t.start()
t.join()
[Edit: My brain got derailed along the way, and I ended up answering another question than was actually asked. The following solution is useful for concurrently writing to a file, not for using the logging module in any way. However, since at least it's useful for that, I'll leave the answer in place for now.]
If you were just using threads, not separate processes, you'd just have to have a standard lock. So you could try something similar.
There's always the option of locking the output file. I don't know if your operating system supports anything like that, but the usual Unix way of doing it is to create a lock file. Basically, if the file exists, then wait; otherwise create the file before writing to your log file, and after you're done, remove the lock file again. You could use a context manager like this:
import os
import os.path
from time import sleep

class LockedFile():
    def __init__(self, filename, mode):
        self.filename = filename
        self.lockfile = filename + '.lock'
        self.mode = mode

    def __enter__(self):
        while True:
            if os.path.isfile(self.lockfile):
                sleep(0.1)
            else:
                break
        with open(self.lockfile, 'a'):
            os.utime(self.lockfile)
        self.f = open(self.filename, self.mode)
        return self.f

    def __exit__(self, *args):
        self.f.close()
        os.remove(self.lockfile)

# And here's how to use it:
with LockedFile('blorg', 'a') as f:
    f.write('foo\n')
I have a script a.py:
#!/usr/bin/env python
def foo(arg1, arg2):
    return int(arg1) + int(arg2)

if __name__ == "__main__":
    import sys
    print foo(sys.argv[1], sys.argv[2])
I now want to make a script that can run the first script, with some arguments, and write the output of a.py to a file. I want automate_output(src, arglist) to generate some kind of output that I can write to the outfile:
import sys

def automate_output(src, arglist):
    return ""

def print_to_file(src, outfile, arglist):
    print "printing to file %s" % (outfile)
    out = open(outfile, 'w')
    s = open(src, 'r')
    for line in s:
        out.write(line)
    s.close()
    out.write(" \"\"\"\n Run time example: \n")
    out.write(automate_output(src, arglist))
    out.write(" \"\"\"\n")
    out.close()

try:
    src = sys.argv[1]
    outfile = sys.argv[2]
    arglist = sys.argv[3:]
    automate_output(src, arglist)
    print_to_file(src, outfile, arglist)
except:
    print "error"
    #print "usage : python automate_runtime.py scriptname outfile args"
I have tried searching around, but so far I do not understand how to pass arguments using os.system. I have also tried doing:
import a
a.main()
There I get a NameError: name 'main' is not defined
Update:
I researched some more and found subprocess; it seems I'm quite close to cracking it now.
The following code does work, but I would like to pass args instead of manually passing '2' and '3'
import subprocess

src = 'bar.py'
args = ('2', '3')
proc = subprocess.Popen(['python', src, '2', '3'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print proc.communicate()[0]
This is not a function, it's an if statement:
if __name__ == "__main__":
...
If you want a main function, define one:
import sys

def main():
    print foo(sys.argv[1], sys.argv[2])
Then just call it if you need to:
if __name__ == "__main__":
    main()
a.main() has nothing to do with the if __name__ == "__main__" block. The former calls a function named main() from a module; the latter executes its block if the current module's name is __main__, i.e., when the module is run as a script.
#!/usr/bin/env python
# a.py
def func():
    print repr(__name__)

if __name__ == "__main__":
    print "as a script",
    func()
Compare a module executed as a script and a function called from the imported module:
$ python a.py
as a script '__main__'
$ python -c "import a; print 'via import',; a.func()"
via import 'a'
See section Modules in the Python tutorial.
To get output from the subprocess you could use the subprocess.check_output() function:
import sys
from subprocess import check_output as qx
args = ['2', '3']
output = qx([sys.executable, 'bar.py'] + args)
print output
Hello, I am using the subprocess.Popen() class and I can successfully execute commands in the terminal, but when I try to execute a program, for example a script written in Python, and I try to pass arguments, the system fails.
This is the code:
argPath = "test1"
args = open(argPath, 'w')

if self.extract.getByAttr(self.block, 'name', 'args') != None:
    args.write("<request>"+self.extract.getByAttr(self.block, 'name', 'args')[0].toxml()+"</request>")
else:
    args.write('')

car = Popen(shlex.split('python3.1 /home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py', stdin=args, stdout=subprocess.PIPE, stderr=subprocess.PIPE))
args.close()

dataOut = car.stdout.read().decode()
log = car.stderr.read().decode()

if dataOut != '':
    return dataOut.split('\n')
elif log != '':
    return log.split('\n')[0]
else:
    return None
And here is the code from saveCSS.py:
from xml.dom.minidom import parseString
import os
import sys

class savCSS:
    """This class has to save
    the changes on the css file.
    """
    def __init__(self, args):
        document = parseString(args)
        request = document.firstChild
        address = request.getElementsByTagName('element')[0]
        newdata = request.getElementsByTagName('element')[1]
        cssfl = open("/webapps/karinapp/Suite/"+address.getAttribute('value'), 'r')
        cssData = cssfl.read()
        cssfl.close()
        dataCSS = ''
        for child in newdata.childNodes:
            if child.nodeType == 3:
                dataCSS += child.nodeValue
        nwcssDict = {}
        for piece in dataCSS.split('}'):
            nwcssDict[piece.split('{')[0]] = piece.split('{')[1]
        cssDict = {}
        for piece in cssData.split('}'):
            cssDict[piece.split('{')[0]] = piece.split('{')[1]
        for key in nwcssDict:
            if key in cssDict == True:
                del cssDict[key]
            cssDict[key] = nwcssDict[key]
        result = ''
        for key in cssDict:
            result += key+"{"+cssDict[key]+"}"
        cssfl = open(cssfl.name, 'a')
        cssfl.write(result)
        cssfl.close()

if __name__ == "__main__":
    savCSS(sys.stdin)
BTW: There's no output...
Thanks in advance.
OK, I'm ignoring that your code doesn't run (neither the script you try to execute nor the main script actually works), and looking at what you are doing:
It does execute the script, or you would get an error, like "bin/sh: foo: not found".
Also you seem to be using an open file as stdin after you have written to it. That doesn't work.
>>> thefile = open('/tmp/foo.txt', 'w')
>>> thefile.write("Hej!")
4
>>> thefile.read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: not readable
You need to close the file, and reopen it as a read file. Although it would be better in this case to use StringIO, I think.
To talk to the subprocess, you use communicate(), not read() on the pipes.
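For instance, a sketch of that (request_xml is a stand-in for the "<request>...</request>" string written to the args file in the original code):
from subprocess import Popen, PIPE

cmd = ['python3.1', '/home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py']
car = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)

# communicate() feeds stdin, closes it, and reads both pipes to completion,
# avoiding the deadlocks that direct .read() calls can cause.
dataOut, log = car.communicate(request_xml.encode())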
I'm not sure why you are using shell=True here, it doesn't seem necessary, I would remove it if I was you, it only complicates stuff unless you actually need the shell to do things.
Specifically, you should not split the command into a list when using shell=True. What your code is actually doing is starting a Python prompt.
You should rather use communicate() instead of .stdout.read().
And the code you posted isn't even correct:
Popen(shlex.split('python3.1 /home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py', stdin=args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
There's a missing parenthesis, and from the stdout/stderr parameters, it's clear that you get no output to the console, but rather into pipes (if that's what you meant by "There's no output...").
Your code will actually work on Windows, but on Linux you must remove the shell=True parameter. You should always omit that parameter if you provide the full command line yourself (as a sequence).
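Putting these points together, a sketch of a corrected call could look like this (same script path as in the question; shell=True dropped, the parenthesis fixed, and the argument file reopened for reading):
import shlex
from subprocess import Popen, PIPE

# Split the command line ourselves and let Popen exec it directly,
# so the shell is not needed.
cmd = shlex.split('python3.1 /home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py')

# Reopen the file that was written above, this time for reading,
# so the child can actually consume it on stdin.
with open('test1', 'r') as args_in:
    car = Popen(cmd, stdin=args_in, stdout=PIPE, stderr=PIPE)
    dataOut, log = car.communicate()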