Python Equivalent to Perl's prove

I am working on a TDD project in Python and I am looking for a quick way to run all unit tests in my t/ directory. In Perl this is easy:
$ prove -lvr t/
I am looking for the Python equivalent. It does not seem that nose has this functionality. I rolled a command-line statement to do something like this:
for x in `find t/ | grep py`; do echo $x && python $x ; done
But this lacks flags like -l (include the local lib dir) and -v (verbose). Does this or does this not exist in Python? I want a one-liner like this:
$ pyprove -lvr t/

You can do this by running python -m unittest discover -s t/, which will discover your unit tests and run them for you. There is a verbose flag (-v), but I don't see a flag to include the local lib dir.
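If you also need prove's -l behavior (prepending the local lib directory to the module search path), there is no unittest flag for it, but setting PYTHONPATH for the run has the same effect (assuming your local modules live in lib/):
$ PYTHONPATH=lib python -m unittest discover -s t/ -v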

Try nose. Add -v for verbosity and -w to specify search directories (see the nose usage docs).
nosetests -v -w t/

Related

unittest discover with pattern for multiple files

What is the way to pass multiple files to the pattern in unittest discover?
It looks like it uses shell patterns; from the source code I can see that fnmatch is used.
The goal is to run multiple IronPython jobs concurrently, started from CPython (we need cross-platform functionality):
C:\IronPython\net45\ipy.exe -m unittest discover -s C:\git\TEST\Common -p "test1.py test2.py" -t C:\git\TEST\Common
Is it possible to pass multiple files as a pattern for unittest discover?
You can add multiple -p arguments one after the other, e.g. -p arg1 -p arg2.
In your case,
C:\IronPython\net45\ipy.exe -m unittest discover -s C:\git\TEST\Common -p "test1.py" -p "test2.py" -t C:\git\TEST\Common should do the trick!
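If your unittest version only honors the last -p given, a fallback is to run each discovery separately and merge the results into one suite programmatically. A minimal sketch, using the paths from the question (run it with ipy.exe or python):
import unittest

loader = unittest.TestLoader()
suite = unittest.TestSuite()
# Discover each file pattern separately and merge the results into one suite.
for pattern in ("test1.py", "test2.py"):
    suite.addTests(loader.discover(r"C:\git\TEST\Common", pattern=pattern))

unittest.TextTestRunner(verbosity=2).run(suite)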

How to use a wildcard in the command prompt when executing pytest test cases

I have below project structure in Pycharm.
Project Folder: PythonTutorial
Package: pytestpackage
Python Files: test_conftest_demo1.py, test_conftest_demo2.py
I'm trying to run the above two Python files, which have almost the same name, using pytest from the command prompt with the command below, but it fails. Please help me with this.
Note: I'm using the Windows 10 operating system.
Command Used:
py.test -s -v test_conftest_demo*.py
Use the -k option to specify substring matching; cmd.exe does not expand * wildcards the way a Unix shell does, so the glob never reaches pytest as actual file names.
$ pytest -s -v -k "test_conftest_demo"
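-k also accepts boolean expressions, so you can pick out several name patterns at once:
$ pytest -s -v -k "demo1 or demo2"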

Relative shebang: How to write an executable script that runs the portable interpreter shipped with it

Let's say we have a program/package which comes with its own interpreter, and a set of scripts that should invoke it when executed (via their shebang).
And let's say we want to keep it portable, so it keeps working even when simply copied to a different location (or a different machine) without running setup/install or modifying the environment (PATH). A system interpreter should not be mixed in for these scripts.
The given constraints exclude both well-known approaches: the shebang with an absolute path:
#!/usr/bin/python
and search in the environment
#!/usr/bin/env python
Separate launchers look ugly and are not acceptable.
I found a good summary of the shebang limitations which describes why a relative path in the shebang is useless and why there cannot be more than one argument to the interpreter: http://www.in-ulm.de/~mascheck/various/shebang/
And I also found practical solutions for most languages using 'multi-line shebang' tricks. They allow writing scripts like this:
#!/bin/sh
"exec" "`dirname $0`/python2.7" "$0" "$@"
print copyright
(This works because /bin/sh executes the second line as an exec command, the quotes being harmless there, while Python sees the first line as a comment and the second as a no-op sequence of adjacent string literals, so the same file is valid in both languages.)
But sometimes we don't want to extend/patch existing scripts that rely on a shebang with an absolute path to the interpreter using this approach. E.g. Python's setup.py supports an --executable option which basically lets you specify the shebang content for the scripts it produces:
python setup.py build --executable=/opt/local/bin/python
So, in particular, what can be specified for --executable= in order to enable the desired kind of portability? Or in other words, since I'd like to keep the question not too specific to Python...
The question
How to write a shebang which specifies an interpreter with a path which is relative to the location of the script being executed?
A relative path written directly in a shebang is treated as relative to the current working directory, so something like #!../bin/python2.7 will not work for any working directory except a few.
Since the OS does not support it, why not use an external program, the way env is used for a PATH lookup? But I know of no specialized program that computes a relative path from its arguments and executes the resulting command, except the shell itself and other scripting engines.
But trying to compute the path in a shell script like
#!/bin/sh -c '`dirname $0`/python2.7 $0'
does not work, because on Linux the shebang is limited to a single argument. And that suggested looking for scripting engines which accept a script as the first argument on the command line and are able to execute a new process:
Using AWK
#!/usr/bin/awk BEGIN{a=ARGV[1];sub(/[a-z_.]+$/,"python2.7",a);system(a"\t"ARGV[1])}
Using Perl
#!/usr/bin/perl -e$_=$ARGV[0];exec(s/\w+$/python2.7/r,$_)
Update from 11 Jan 2021:
Using the updated env utility:
$ env --version | grep env
env (GNU coreutils) 8.30
$ env --help
Usage: env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]
Set each NAME to VALUE in the environment and run COMMAND.
Mandatory arguments to long options are mandatory for short options too.
-i, --ignore-environment start with an empty environment
-0, --null end each output line with NUL, not newline
-u, --unset=NAME remove variable from the environment
-C, --chdir=DIR change working directory to DIR
-S, --split-string=S process and split S into separate arguments;
used to pass multiple arguments on shebang lines
So, passing -S to env will do the job.
The missing "punchline" from Anton's answer:
With an updated version of env, we can now realize the initial idea:
#!/usr/bin/env -S /bin/sh -c '"$(dirname "$0")/python3" "$0" "$@"'
Note that I switched to python3, but this question is really about the shebang - not Python - so you can use this solution with whatever script environment you want. You can also replace /bin/sh with just sh if you prefer.
There is a lot going on here, including some quoting hell, and at first glance it's not clear what's happening. I think there's little value in just saying "this is how to do it" without explanation, so let's unpack it.
It breaks down like this:
The shebang is interpreted to run /usr/bin/env with the following arguments:
-S /bin/sh -c '"$(dirname "$0")/python3" "$0" "$@"'
full path (either local or absolute) to the script file
onwards, any extra commandline arguments
env finds the -S at the start of the first argument, and splits it according to (simplified) shell rules. In this case, only the single-quotes are relevant - all the other fancy syntax is within single-quotes so it gets ignored. The new arguments to env become:
/bin/sh
-c
"$(dirname "$0")/python3" "$0" "$#"
full path to script file (either local or absolute)
onwards, (possibly) extra arguments
It runs /bin/sh - the default shell - with the arguments:
-c
"$(dirname "$0")/python3" "$0" "$#"
full path to script file
onwards, (possibly) extra arguments
As the shell was run with -c, it runs in the second operating mode defined in the POSIX sh specification (and also re-described many times by the man pages of all shells, e.g. dash, which is much more approachable). In our case we can ignore all the extra options; the syntax is:
sh -c command_string command_name [argument ...]
In our case:
command_string is "$(dirname "$0")/python3" "$0" "$@"
command_name is the script path, e.g. ./path to/script dir/script file.py
argument(s) are any extra arguments (it's possible to have zero arguments)
As described, the shell wants to run command_string ("$(dirname "$0")/python3" "$0" "$@") as a command, so now we turn to the Shell Command Language:
Parameter Expansion is performed on "$0" and "$@", which are both Special Parameters:
"$@" expands to the argument(s). If there were no arguments, it will "expand" into nothing. Because of this special behaviour, it's explained horribly in the spec I linked, but the man page for dash explains it much better.
$0 expands to command_name - our script file. Every occurrence of $0 is within double-quotes so it doesn't get split, i.e. spaces in the path won't break it up into multiple arguments.
Command Substitution is applied, substituting $(dirname "$0") with the standard output of running the command dirname "./path to/script dir/script file.py", i.e. the folder that our script file resides in: ./path to/script dir.
After all of the substitutions and expansions, the command becomes, for example:
"./path to/script dir/python3" "./path to/script dir/script file.py" "first argument" "second argument" ...
Finally, the shell runs the expanded command, and executes our local python3 with our script file as an argument followed by any other arguments we passed to it.
Phew!
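As a quick sanity check that the bundled interpreter (and not a system one) is the one actually running, the script can report its own executable. A minimal sketch, reusing the shebang from above:
#!/usr/bin/env -S /bin/sh -c '"$(dirname "$0")/python3" "$0" "$@"'
import sys
print(sys.executable)  # should point at the python3 sitting next to this script
print(sys.argv[1:])    # extra command-line arguments are passed through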
What follows is basically my attempts to demonstrate that those steps are occurring. It's probably not worth your time, but I already wrote it and I don't think it's so bad that it should be removed. If nothing else, it might be useful to someone who wants to see an example of how to reverse-engineer things like this. It doesn't include extra arguments; those were added after Emanuel's comment.
It also has a lousy joke at the end...
First let's start simpler. Take a look at the following "script", replacing env with echo:
$ cat "/home/neatnit/Projects/SO question 33225082/my script.py"
#!/usr/bin/echo -S /bin/sh -c '"$( dirname "$0" )/python2.7" "$0"'
print("This is python")
It's hardly a script - the shebang calls echo which will just print whichever arguments it's given. I've deliberately put two spaces between the words, this way we can see how they get preserved. As an aside, I've deliberately put the script in a path that contains spaces, to show that they are handled correctly.
Let's run it:
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
-S /bin/sh -c '"$( dirname "$0" )/python2.7" "$0"' /home/neatnit/Projects/SO question 33225082/my script.py
We see that with that shebang, echo is run with two arguments:
-S /bin/sh -c '"$( dirname "$0" )/python2.7" "$0"'
/home/neatnit/Projects/SO question 33225082/my script.py
These are the literal arguments echo sees - no quoting or escaping.
Now, let's get env back but use printf [1] ahead of sh to explore how env processes these arguments:
$ cat "/home/neatnit/Projects/SO question 33225082/my script.py"
#!/usr/bin/env -S printf %s\n /bin/sh -c '"$( dirname "$0" )/python2.7" "$0"'
print("This is python")
And run it:
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
/bin/sh
-c
"$( dirname "$0" )/python2.7" "$0"
/home/neatnit/Projects/SO question 33225082/my script.py
env splits the string after -S [2] according to ordinary (but simplified) shell rules. In this case, all $ symbols were within single-quotes, so env did not expand them. It then appended the additional argument - the script file - to the end.
When sh gets these arguments, the first argument after -c (in this case: "$( dirname "$0" )/python2.7" "$0") gets interpreted as a shell command, and the next argument acts as the first parameter in that command ($0).
Pushing the printf one level deeper:
$ cat "/home/neatnit/Projects/SO question 33225082/my script.py"
#!/usr/bin/env -S /bin/sh -c 'printf %s\\\n "$( dirname "$0" )/python2.7" "$0"'
print("This is python")
And running it:
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
/home/neatnit/Projects/SO question 33225082/python2.7
/home/neatnit/Projects/SO question 33225082/my script.py
At last - it's starting to look like the command we were looking for! The local python2.7 and our script as an argument!
sh expanded $0 into /home/[ ... ]/my script.py, giving this command:
"$( dirname "/home/[ ... ]/my script.py" )/python2.7" "/home/[ ... ]/my script.py"
dirname snips off the last part of the path to get the containing folder, giving this command:
"/home/[ ... ]/SO question 33225082/python2.7" "/home/[ ... ]/my script.py"
To highlight a common pitfall, this is what happens if we don't use double-quotes and our path contains spaces:
$ cat "/home/neatnit/Projects/SO question 33225082/my script.py"
#!/usr/bin/env -S /bin/sh -c 'printf %s\\\n $( dirname $0 )/python2.7 $0'
print("This is python")
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
/home/neatnit/Projects
.
33225082
./python2.7
/home/neatnit/Projects/SO
question
33225082/my
script.py
Needless to say, running this as a command would not give the desired result. Figuring out exactly what happened here is left as an exercise to the reader :)
At last, we put the quote marks back where they belong and get rid of the printf, and we finally get to run our script:
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
/home/neatnit/Projects/SO question 33225082/my script.py: 1: /home/neatnit/Projects/SO question 33225082/python2.7: not found
Wait, uh, let me fix that
$ ln --symbolic $(which python3) "/home/neatnit/Projects/SO question 33225082/python2.7"
$ "/home/neatnit/Projects/SO question 33225082/my script.py"
This is python
Rejoice!
[1] This way we can see each argument in a separate line, and we don't have to get confused by space-delimited arguments.
[2] There doesn't need to be a space after -S, I just prefer the way it looks. -Sprintf sounds really exhausting.

How to use cProfile with nosetest --with-profile?

nosetests --with-profile --profile-stats-file output
The output can't be read by runsnake because nosetests uses hotshot. If I want to generate a file that runsnake can read, I have to convert it like so:
import hotshot.stats
st = hotshot.stats.load('output')
st.dump_stats('output_new')
Can I run the tests with cProfile directly, so the output can be read with runsnake?
Evolving on the answer of @squid, you can use a nose plugin called nose-cprof to replace nose's default profiler, hotshot, with cProfile.
To install it:
pip install nose-cprof
Then call nose like this:
nosetests --with-cprofile
It should generate a cProfile output file that you can then analyze with tools like runsnakerun.
@cihanpesend's answer didn't quite work for me (cProfile couldn't find 'nosetests'), but I did have success on Linux using:
python -m cProfile -o profile.out `which nosetests` .
The resulting output works fine in runsnake.
(Presumably on Windows you could replace which nosetests with the hard-coded path to your nosetests top-level python script.)
I think you are right that the output from nosetests' hotshot profiler is not compatible with runsnake. Certainly the two don't play nice together out of the box for me either.
I don't have info about nosetests except that it is a Python project. So:
python -m cProfile -o outputfile nosetests
Then,
runsnake outputfile
RunSnakeRun is extremely useful for visualizing profiler output.
Note: to run runsnake, you must install wx and numpy.
Update, from omikron's comment: RunSnakeRun cannot read Python 3 profile output (I didn't try it myself).
Or you can try the nose-cprof plugin: https://github.com/msherry/nose-cprof
It replaces hotshot with cProfile.
With pyprof2calltree:
$ pip install pyprof2calltree
$ nosetests --with-cprofile --profile-stats=profile.out tests/
$ pyprof2calltree -i profile.out -k
With xdot:
$ sudo apt install xdot
$ gprof2dot -f pstats profile.out | dot -Tpng -o profile.png
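If you just want a quick look without a GUI, the standard-library pstats module can read the same cProfile output. A minimal sketch, assuming the profile.out produced above:
import pstats

stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(20)  # top 20 entries by cumulative time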

How to combine unittest results using a Makefile?

I want to use a Makefile to run individual test files or a combined version of all tests or a coverage report.
I'm pretty new to Makefiles so I borrowed one and adapted it. The result is here.
The problem is that make test will run each test in sequence, and it is hard to see which ones failed when you have a bunch of them and the screen scrolls a lot. I do like that each one uses a separate process, though, so they don't interfere with each other.
The question is: can I combine the results more nicely using only the Makefile, or do I need a separate script? Do you know of good examples of Makefiles that run tests?
(I want to use Makefile + unittest + coverage only, and no other dependencies)
An alternative approach is to use unittest discovery, which will aggregate all your separate test files into a single run, e.g. in the Makefile
test:
	python -m unittest discover -p '*tests.py' -v
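Since you also want a coverage report, the same discovery run works under coverage.py; a sketch, assuming coverage is installed and using an illustrative target name:
coverage:
	coverage run -m unittest discover -p '*tests.py' -v
	coverage report -m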
If running the tests in parallel processes is important to you, then instead of using unittest to run the tests, use either nose or pytest. They each have options to run tests in parallel. You should be able to do this without any modifications to your test code.
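For reference, the usual flags are pytest's -n (provided by the pytest-xdist plugin) and nose's --processes (from its multiprocess plugin):
$ pip install pytest-xdist
$ pytest -n 4
$ nosetests --processes=4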
Here is a quick hack you can insert into your Makefile with no changes to your infrastructure.
The special shell variable $? contains the exit status of the last command (inside a Makefile recipe it has to be written $$? so that make passes it through to the shell). Using it you can examine the return value of each test. In the script below I count the number of tests that fail and report it at the end of the run. You could also exit immediately when one test fails, so you wouldn't have to scroll up to see the output.
failed=0; \
for i in $(TESTS); do \
	echo $$i; \
	PYTHONPATH=$(GAEPATH):. $(PYTHON) -m tests.`basename $$i .py` $(FLAGS); \
	if [ $$? -ne 0 ]; then \
		failed=$$((failed+1)); \
	fi; \
done; \
if [ $$failed -ne 0 ]; then \
	echo "$$failed tests failed"; \
	exit $$failed; \
fi
There are definitely better and more robust ways of testing, but this hack should work if you so desire. I would suggest eventually moving the shell script above into Python; then all you would have to do is call ./run_tests.py to run all your unit tests. Since Python is infinitely more expressive than shell, you have a lot more freedom to interpret and display the results. Depending on your needs, using a unit-testing framework like unittest might be preferable to rolling your own code.
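A minimal sketch of what such a run_tests.py might look like (the tests/test_*.py layout is hypothetical; adjust the glob to your tree):
#!/usr/bin/env python
import glob
import subprocess
import sys

failed = []
for path in sorted(glob.glob("tests/test_*.py")):
    print(path)
    # Run each test module in its own process, like the Makefile loop above.
    if subprocess.call([sys.executable, path]) != 0:
        failed.append(path)

if failed:
    print("%d tests failed:" % len(failed))
    for path in failed:
        print("  " + path)
sys.exit(len(failed))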
I had a library directory with several subdirectories, each with its own unit test. To run them all, I added the following test target:
test: $(addprefix test-,$(SUBDIRS))

test-%:
	$(MAKE) -k --directory=$* test
This is cool, because it runs the tests in each subdirectory, and can be distributed using, e.g. make test -j5. However, it has a problem. Ideally, I would like to run tests in all directories regardless of failures in individual directories. I also want to be able to summarize the failures at the end, and (more importantly), to return a non-zero exit code if one or more tests fail. The above code runs all the tests, but it does not print a summary, nor a non-zero exit status.
Here is some more complicated code that does what I want it to do. It's not very elegant, though:
clean_test:
	rm -f testfailures

test: clean_test
	$(MAKE) $(addprefix test-,$(SUBDIRS))
	@echo "=== TEST SUMMARY ==="
	@if [ -f $(BUILD_DIR)/testfailures ]; then \
		echo "The following tests failed:"; \
		cat $(BUILD_DIR)/testfailures; \
		false; \
	else \
		echo "All tests passed."; \
	fi

test-%:
	$(MAKE) -k --directory=$* test || echo \
		"  $*" >> $(BUILD_DIR)/testfailures
