Running a set of Python scripts in a list - python

I am working on a Python project that includes a lot of simple example scripts to help new users get used to the system. As well as the source code for each example, I include the output I get on my test machine so users know what to expect when all goes well.
It occurred to me that I could use this as a crude form of unit testing: automatically run all the example scripts and do a load of diffs against the expected output.
All of my example scripts end with extension .py so I can get their filenames easily enough with something like
pythonfiles=[filename for filename in os.listdir(source_directory) if filename[-3:]=='.py']
So, pythonfiles contains something like ['example1.py', 'cool_example.py'] and so on.
What syntax can I use to actually run the scripts referenced in this list?

You could leverage doctest to help you get this done. Write a method that executes each script, and in the docstring for each method you paste the expected output:
import os

def run_example1():
    """
    This is example number 1. Running it should give you the following output:

    >>> run_example1()
    "This is the output from example1.py"
    """
    os.system('python example1.py')  # or you could use subprocess here

if __name__ == "__main__":
    import doctest
    doctest.testmod()
Note I haven't tested this.
Alternatively, as Shane mentioned, you could use subprocess. Something like this will work:
import subprocess
cmd = ('example1.py', 'any', 'more', 'arguments')
expected_out = """Your expected output of the script"""
exampleP = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = exampleP.communicate() # out and err are stdout and stderr, respectively
if out != expected_out:
    print "Output does not match"

You want to use the subprocess module.
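A minimal sketch of that approach, assuming each example script has a sibling .expected file holding its known-good output (that naming convention is my own, not from the question):

```python
import os
import subprocess
import sys

def run_examples(source_directory):
    """Run every .py script in source_directory and diff its stdout
    against a matching .expected file; return the names that differ."""
    failures = []
    for filename in sorted(os.listdir(source_directory)):
        if not filename.endswith('.py'):
            continue
        path = os.path.join(source_directory, filename)
        # Use the current interpreter so the examples run under the same Python.
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        with open(path[:-3] + '.expected') as f:
            expected = f.read()
        if result.returncode != 0 or result.stdout != expected:
            failures.append(filename)
    return failures
```

Note that capture_output requires Python 3.7+; on older versions pass stdout=subprocess.PIPE and stderr=subprocess.PIPE instead.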

If they are similarly structured (for example, each is executed via a run function), you can import them as Python modules and call their run function.
import sys
import os
import imp
pythonfiles = [filename for filename in os.listdir(source_directory) if filename[-3:]=='.py']
for py_file in pythonfiles:
    mod_name = os.path.splitext(py_file)[0]
    py_filepath = os.path.join(source_directory, py_file)
    py_mod = imp.load_source(mod_name, py_filepath)
    if hasattr(py_mod, "run"):
        py_mod.run()
    else:
        print '%s has no "run"' % (py_filepath)
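On Python 3 the imp module is deprecated; a sketch of the same loader using importlib instead (the helper name is mine):

```python
import importlib.util
import os

def load_module_from_path(py_filepath):
    """importlib-based replacement for the deprecated imp.load_source."""
    mod_name = os.path.splitext(os.path.basename(py_filepath))[0]
    spec = importlib.util.spec_from_file_location(mod_name, py_filepath)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # actually runs the file's top-level code
    return module
```

You can then do the same hasattr(module, "run") check as above.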


How to call a python file that uses flags from another python script?

I picked up a script (script1.py) from another dev which uses the flags module from absl. Basically it is run as follows:
python script1.py --input1 input1 --input2 input2
I currently have another script (script2.py) which generates that input1 and input2 for script1 to run. How can I actually pass the args over to script1 from within script2? I know I have to import script1 but how can I then point it to those inputs?
Use the Python subprocess module for this. I am assuming that you are using Python 3.7 or newer; in that case you can use the run function with capture_output.
import subprocess
result = subprocess.run(
    ['python', 'script1.py', '--input1', 'input1', '--input2', 'input2'],
    capture_output=True)
# Get the output as a string
output = result.stdout.decode('utf-8')
Some notes:
This example ignores the return code. The code should check result.returncode and take correct actions based on that.
If the output is not needed, the capture_output=True argument and the last line can be dropped.
The documentation for the subprocess module can be found here.
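A sketch of the return-code check mentioned above, using a python -c one-liner as a stand-in for script1.py and its flags:

```python
import subprocess
import sys

# Run a child script and fail loudly on a non-zero exit status.
result = subprocess.run(
    [sys.executable, '-c', 'print("done")'],  # stand-in for script1.py + flags
    capture_output=True, text=True)
if result.returncode != 0:
    raise RuntimeError('script failed: ' + result.stderr)
output = result.stdout.strip()
```

text=True makes stdout/stderr come back as str, so no .decode() call is needed.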
An alternative (better) solution
A better solution (IMHO) would be to change the called script into a module with a single function that you call. The Python code in script1.py could then probably be simplified quite a bit. The resulting code in your script would be something similar to:
from script1 import my_function
my_function(input1, input2)
It could be that the script from the other dev already has a function you can call directly.

DOS command in MATLAB

I want to ask about a command in Python that performs exactly like the dos() command in MATLAB. For instance, I have the following code block in MATLAB and I want to do exactly the same in Python.
DIANA = '"C:\Program Files\Diana 10.3\bin\DianaIE.exe"';
MODEL = 'Input.dcp';
INPUTDIR = 'C:\Users\pc\Desktop\Thesis\PSO\';
OUTPUTDIR = 'C:\Users\pc\Desktop\Thesis\PSO\';
OUTPUT = 'FILE_OUTPUT.out';
WORKDIRINPUT = sprintf('%s%s',INPUTDIR,MODEL);
WORKDIROUTPUT = sprintf('%s%s',OUTPUTDIR,OUTPUT);
%
NAMES = sprintf('%s %s %s',DIANA,WORKDIRINPUT,WORKDIROUTPUT);
disp('Start DIANA');
dos(NAMES);
disp('End DIANA');
To execute a block of code and get its output inside Python, you can use the built-in exec() function and pass the expression or code to be executed as a string. Example:
code = 'x=100\nprint(x)'
exec(code)
Output:
100
And if you want to run Command Prompt or PowerShell commands from Python, you can use the os module:
import os
os.system('cd document')
You can use the os.path module for path manipulation and a lot more; to learn about this, go through the documentation: OS-Documentation
import subprocess
subprocess.call([r'C:\Program Files\Diana 10.3\bin\DianaIE.exe',
                 r'C:\Users\pc\Desktop\Thesis\PSO\Input.py'])
Note the raw strings: without them, sequences like \b and \U in the paths are mangled or rejected by Python.

Import bash variables from a python script

I have seen plenty of examples of running a Python script from inside a bash script, either passing in variables as arguments or using export to give the child shell access. I am trying to do the opposite here, though.
I am running a Python script and have a separate file; let's call it myGlobalVariables.bash
myGlobalVariables.bash:
foo_1="var1"
foo_2="var2"
foo_3="var3"
My python script needs to use these variables.
For a very simple example:
myPythonScript.py:
print "foo_1: {}".format(foo_1)
Is there a way I can import them directly? Also, I do not want to alter the bash script if possible since it is a common file referenced many times elsewhere.
If your .bash file is formatted as you indicated, you might be able to just import it directly as a Python module via the imp module.
import imp
bash_module = imp.load_source("bash_module", "/path/to/myGlobalVariables.bash")
print bash_module.foo_1
You can also use os.environ:
Bash:
#!/bin/bash
# works without export as well
export testtest=one
Python:
#!/usr/bin/python
import os
os.environ['testtest'] # 'one'
I am very new to python, so I would welcome suggestions for more idiomatic ways to do this, but the following code uses bash itself to tell us which values get set.
It first calls bash with an empty environment (env -i bash) to see which variables are set as a baseline, then calls it again, telling bash to source your "variables" file, and reports which variables are now set. After removing some false positives and an apparently blank line, I loop through the "additional" output, looking for variables that were not in the baseline. Newly seen variables get split (carefully) and put into the bash dictionary.
I've left here (but commented out) my previous idea of using exec to set the variables natively in Python, but I ran into quoting/escaping issues, so I switched gears to using a dict.
If the exact call (path, etc) to your "variables" file is different than mine, then you'll need to change all of the instances of that value -- in the subprocess.check_output() call, in the list.remove() calls.
Here's the sample variable file I was using, just to demonstrate some of the things that could happen:
foo_1="var1"
foo_2="var2"
foo_3="var3"
if [[ -z $foo_3 ]]; then
  foo_4="test"
else
  foo_4="testing"
fi
foo_5="O'Neil"
foo_6='I love" quotes'
foo_7="embedded
newline"
... and here's the python script:
#!/usr/bin/env python
import subprocess
output = subprocess.check_output(['env', '-i', 'bash', '-c', 'set'])
baseline = output.split("\n")
output = subprocess.check_output(['env', '-i', 'bash', '-c', '. myGlobalVariables.bash; set'])
additional = output.split("\n")
# these get set when ". myGlobal..." runs and so are false positives
additional.remove("BASH_EXECUTION_STRING='. myGlobalVariables.bash; set'")
additional.remove('PIPESTATUS=([0]="0")')
additional.remove('_=myGlobalVariables.bash')
# I get an empty item at the end (blank line from subprocess?)
additional.remove('')
bash = {}
for assign in additional:
    if assign not in baseline:
        name, value = assign.split("=", 1)
        bash[name] = value
        #exec(name + '="' + value + '"')

print "New values:"
for key in bash:
    print "Key: ", key, " = ", bash[key]
Another way to do it:
Inspired by Marat's answer, I came up with this two-stage hack. Start with a python program, let's call it "stage 1", which uses subprocess to call bash to source the variable file, as my above answer does, but it then tells bash to export all of the variables, and then exec the rest of your python program, which is in "stage 2".
Stage 1 python program:
#!/usr/bin/env python
import subprocess
status = subprocess.call(
    ['bash', '-c',
     '. myGlobalVariables.bash; export $(compgen -v); exec ./stage2.py'
    ])
Stage 2 python program:
#!/usr/bin/env python
# anything you want! for example,
import os
for key in os.environ:
    print key, " = ", os.environ[key]
As stated in theorifice's answer, the trick here is that such a formatted file may be interpreted both as bash and as Python code. But that answer is outdated: the imp module is deprecated in favour of importlib.
As your file has an extension other than ".py", you can use the following approach:
from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader
spec = spec_from_loader("foobar", SourceFileLoader("foobar", "myGlobalVariables.bash"))
foobar = module_from_spec(spec)
spec.loader.exec_module(foobar)
I do not completely understand how this code works (in particular, what the "foobar" arguments are for); however, it worked for me. Found it here.

Python stdin filename

I'm trying to get the filename that's given on the command line. For example:
python3 ritwc.py < DarkAndStormyNight.txt
I'm trying to get DarkAndStormyNight.txt
When I try fileinput.filename() I get back the same result as with sys.stdin. Is this possible? I'm not looking for sys.argv[0], which returns the current script name.
Thanks!
In general it is not possible to obtain the filename in a platform-agnostic way. The other answers cover sensible alternatives like passing the name on the command-line.
On Linux, and some related systems, you can obtain the name of the file through the following trick:
import os
print(os.readlink('/proc/self/fd/0'))
/proc/ is a special filesystem on Linux that gives information about processes on the machine. self means the current running process (the one that opens the file). fd is a directory containing symbolic links for each open file descriptor in the process. 0 is the file descriptor number for stdin.
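A hedged wrapper around that trick (the helper name is mine; it returns None on platforms without /proc, or when stdin is a terminal or pipe rather than a regular file):

```python
import os

def stdin_filename():
    """Best-effort name of the file redirected into stdin (Linux only)."""
    try:
        # /proc/self/fd/0 is a symlink to whatever backs file descriptor 0.
        return os.readlink('/proc/self/fd/0')
    except OSError:
        return None
```

On macOS or Windows there is no /proc, so the fallback path is taken.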
You can use ArgumentParser, which automatically gives you an interface to command-line arguments, and even provides help, etc.
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument('fname', metavar='FILE', help='file to process')
args = parser.parse_args()
with open(args.fname) as f:
    ...  # do stuff with f
Now you call python3 ritwc.py DarkAndStormyNight.txt. If you call python3 ritwc.py with no argument, it'll give an error saying it expected an argument for FILE. You can also now call python3 ritwc.py -h and it will explain that a file to process is required.
PS here's a great intro in how to use it: http://docs.python.org/3.3/howto/argparse.html
In fact, since it seems that Python cannot see the filename when stdin is redirected from the console, you have an alternative:
Call your program like this:
python3 ritwc.py -i your_file.txt
and then add the following code to redirect the stdin from inside python, so that you have access to the filename through the variable "filename_in":
import sys

flag = 0
for arg in sys.argv:
    if flag:
        filename_in = arg
        break
    if arg == "-i":
        flag = 1

sys.stdin = open(filename_in, 'r')
# the rest of your code...
If you now use the command:
print(sys.stdin.name)
you get your filename; however, when you do the same print command after redirecting stdin from the console, you get <stdin>, which is evidence that Python can't see the filename that way.
I don't think it's possible. As far as your Python script is concerned, it's reading from stdin. The fact that your shell is redirecting a file into stdin has nothing to do with the Python script.

How do I export the output of Python's built-in help() function

I've got a python package which outputs considerable help text from: help(package)
I would like to export this help text to a file, in the format in which it's displayed by help(package)
How might I go about this?
Use pydoc.render_doc(thing) to get thing's help text as a string. Other parts of pydoc, like pydoc.text and pydoc.html, can help you write it to a file.
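For example, a short sketch that writes a module's help text to a file using the plain-text renderer (the file name is arbitrary):

```python
import pydoc

# Render help(len) as plain text and save it, without touching sys.stdout.
text = pydoc.render_doc(len, renderer=pydoc.plaintext)
with open('len_help.txt', 'w') as f:
    f.write(text)
```

Unlike the stdout-redirection approaches below, this never changes global interpreter state.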
Using the -w flag on Linux will write the output to an HTML file in the current directory; for example:
pydoc -w Rpi.GPIO
puts all the help() text that would be presented by help(Rpi.GPIO) into a nicely formatted file, Rpi.GPIO.html, in the current directory of the shell.
This is a bit hackish (and there's probably a better solution somewhere), but this works:
import sys
import pydoc

def output_help_to_file(filepath, request):
    f = open(filepath, 'w')
    sys.stdout = f
    pydoc.help(request)
    f.close()
    sys.stdout = sys.__stdout__
    return
And then...
>>> output_help_to_file(r'test.txt', 're')
An old question, but the newer recommended generic solution (for Python 3.4+) for writing the output of functions that print() to the terminal is contextlib.redirect_stdout:
import contextlib

def write_help(func, out_file):
    with open(out_file, 'w') as f:
        with contextlib.redirect_stdout(f):
            help(func)
Usage example:
write_help(int, 'test.txt')
To get a "clean" text output, just as the built-in help() would deliver, and suitable for exporting to a file or anything else, you can use the following:
>>> import pydoc
>>> pydoc.render_doc(len, renderer=pydoc.plaintext)
'Python Library Documentation: built-in function len in module builtins\n\nlen(obj, /)\n Return the number of items in a container.\n'
If you do help(help) you'll see:
Help on _Helper in module site object:
class _Helper(__builtin__.object)
| Define the builtin 'help'.
| This is a wrapper around pydoc.help (with a twist).
[rest snipped]
So - you should be looking at the pydoc module - there's going to be a method or methods that return what help(something) does as a string...
Selected answer didn't work for me, so I did a little more searching and found something that worked on Daniweb. Credit goes to vegaseat. https://www.daniweb.com/programming/software-development/threads/20774/starting-python/8#post1306519
# simplified version of sending help() output to a file
import sys
# save present stdout
out = sys.stdout
fname = "help_print7.txt"
# set stdout to file handle
sys.stdout = open(fname, "w")
# run your help code
# its console output goes to the file now
help("print")
sys.stdout.close()
# reset stdout
sys.stdout = out
The simplest way to do that is by using the sys module: redirect sys.stdout to a file handle, run help(), and its output goes to the external file; then restore stdout.
import sys

out = sys.stdout
sys.stdout = open('str_document.txt', 'w')
help(str)
sys.stdout.close()
sys.stdout = out
The cleanest way
Assuming help(os):
Step 1 - In a Python console:
import pydoc
pydoc.render_doc(os, renderer=pydoc.plaintext)
# this will display a string containing the help(os) output
Step 2 - Copy the string
Step 3 - In a terminal:
echo "copied string" | tee somefile.txt
If you want to write class information to a text file, follow these steps:
Insert a pdb hook somewhere in the class and run the file:
import pdb; pdb.set_trace()
Then perform steps 1 to 3 stated above.
In Windows, just open up a Windows Command Line window, go to the Lib subfolder of your Python installation, and type
python pydoc.py moduleName.memberName > c:\myFolder\memberName.txt
to put the documentation for the property or method memberName in moduleName into the file memberName.txt. If you want an object further down the hierarchy of the module, just put more dots. For example
python pydoc.py wx.lib.agw.ultimatelistctrl > c:\myFolder\UltimateListCtrl.txt
to put the documentation on the UltimateListCtrl control in the agw package in the wxPython package into UltimateListCtrl.txt.
pydoc already provides the needed feature, a very well-designed feature that all question-answering systems should have. pydoc.Helper.__init__ takes an output object, and all output is sent there. If you use your own output object, you can do whatever you want. For example:
class OUTPUT():
    def __init__(self):
        self.results = []
    def write(self, text):
        self.results += [text]
    def flush(self):
        pass
    def print_(self):
        for x in self.results:
            print(x)
    def return_(self):
        return self.results
    def clear_(self):
        self.results = []
when passed as
O = OUTPUT()  # Necessary to remember the results, but see below.
help = pydoc.Helper(O)
will store all results in the OUTPUT instance. Of course, beginning with O = OUTPUT() is not the best idea (see below). render_doc is not the central output point; output is. I wanted OUTPUT so I could keep large outputs from disappearing from the screen using something like Mark Lutz' "More". A different OUTPUT would allow you to write to files.
You could also add a "return" to the end of the class pydoc.Helper to return the information you want. Something like:
if self.output_: return self.output_
should work, or
if self.output_: return self.output.return_()
All of this is possible because pydoc is well-designed. It is hidden because the definition of help leaves out the input and output arguments.
Using the command line we can get the output directly and pipe it to whatever is useful.
python -m pydoc ./my_module_file.py
-- the ./ is important, it tells pydoc to look at your local file and not attempt to import from somewhere else.
If you're on a Mac, you can pipe the output to pbcopy and paste it into a documentation tool of your choice.
python -m pydoc ./my_module_file.py | pbcopy
