I recently extended the scope of one of the functions of my python module so that it can be executed sequentially or in parallel (with mpi4py).
def foo(param, use_mpi=False):
    pass
When I run my test by hand it works:
# (1) Sequentially
$ python my_test_seq.py
# (2) In parallel
$ mpirun -n 3 my_test_par.py
I've been using pytest so far and everything was fine until I wanted to add parallel tests.
Indeed, I can't find a way to launch the parallel test (2) with several processes. The only thing I managed to do is run several pytest sessions in parallel (thus running a test multiple times), but that doesn't meet my needs...
Does anyone know a way to do this?
I've found two possible ways to do it:
Use pytest-mpi, a package that assists you with running tests when pytest is run under MPI, e.g. $ mpirun -n 2 python -m pytest --with-mpi. It provides simple markers to skip tests that should not be run under MPI, to require a minimum number of processes for a test, and so on; see the sketch below.
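For illustration, a minimal sketch of what such a test could look like (the mpi marker and the --with-mpi flag come from the pytest-mpi documentation; the test body itself is hypothetical):
import pytest
from mpi4py import MPI

@pytest.mark.mpi(min_size=2)  # skipped unless pytest runs under MPI with at least 2 ranks
def test_sum_over_ranks():
    comm = MPI.COMM_WORLD
    # every rank contributes 1, so the reduced total equals the communicator size
    total = comm.allreduce(1, op=MPI.SUM)
    assert total == comm.Get_size()
This is skipped in a plain pytest run and executed when launched as mpirun -n 2 python -m pytest --with-mpi.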
Use pytest_easyMPI, a package that aims at making MPI code testing as similar as possible to testing regular serial code. Using a decorator as shown below allows you to mix serial and parallel tests (the latter run using 4 MPI ranks) in the same file:
from pytest_easyMPI import mpi_parallel

def foo_serial():
    pass

@mpi_parallel(4)
def foo_parallel():
    pass
The command used to launch my test is then pytest --pyargs my_package
I'm going to use pytest_easyMPI since it allows me to run my serial and parallel tests transparently. Moreover, it seems to be compatible with pytest-xdist! If I ever have negative feedback to give, I will edit this answer or add a comment.
I have a script a.py that runs some task with multiple threads; please note that I have no control over a.py.
I'm looking for a way to limit the number of threads it can use, as I found that using more threads than my CPU has cores slows the script down.
It could be something like:
python --nthread=2 a.py
Modifying something in my OS is also acceptable.
I am using Ubuntu 16.04.
As requested:
a.py just uses the MLPRegressor module from scikit-learn.
I also asked this question here.
A more general way, not specific to python:
taskset -c 1-3 python yourProgram.py
In this case the program is restricted to CPUs 1-3 (3 in total). Any parallelization invoked by your program will share those resources.
For a solution that fits your exact problem, you should first identify which part of the code parallelizes. For instance, if it is due to numpy routines, you could limit it with:
OMP_NUM_THREADS=4 python yourProgram.py
Again, the first solution is general and handled by the OS, whereas the second is Python (numpy) specific.
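If you cannot edit a.py but can launch it from a tiny wrapper of your own, a sketch of the same idea in Python (the wrapper name and the list of backend variables are assumptions; which one actually matters depends on the BLAS library your numpy/scikit-learn build uses):
# run_a.py -- hypothetical wrapper around the unmodified a.py
import os

# Thread caps must be set before numpy and its BLAS backend are first imported.
os.environ["OMP_NUM_THREADS"] = "2"
os.environ["OPENBLAS_NUM_THREADS"] = "2"
os.environ["MKL_NUM_THREADS"] = "2"

import runpy
runpy.run_path("a.py", run_name="__main__")  # execute a.py as if it were run directly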
Read the threading docs; they say:
CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing. However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously.
If you would like to take better advantage of multi-core processing, update the code to use the multiprocessing module.
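For example, a minimal multiprocessing sketch with an explicit cap on worker processes (work() is a placeholder for the real CPU-bound task):
from multiprocessing import Pool

def work(item):
    return item * item  # placeholder for the real CPU-bound computation

if __name__ == "__main__":
    with Pool(processes=2) as pool:  # never use more than 2 worker processes
        results = pool.map(work, range(100))
    print(len(results))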
If you prefer to continue using threading anyway, one option is to pass the number of threads to the application as an argument, like:
python a.py --nthread=2
Then you can update the script code to limit the threads.
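A sketch of what that could look like inside the script (the argument name and the use of ThreadPoolExecutor are illustrative; they are not taken from the original a.py):
import argparse
from concurrent.futures import ThreadPoolExecutor

def work(item):
    return item * item  # placeholder for the real task

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--nthread", type=int, default=2,
                        help="maximum number of worker threads")
    args = parser.parse_args()

    # cap the thread pool at the value passed on the command line
    with ThreadPoolExecutor(max_workers=args.nthread) as pool:
        results = list(pool.map(work, range(100)))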
I am working on a project that has many "unit tests" with hard dependencies that need to interact with the database and other APIs. The tests are a valuable and useful resource to our team, but they just cannot be run independently, without relying on the functionality of other services in the test environment. Personally I would call these "functional tests", but this is just the semantics already established within our team.
The problem is, now that we are beginning to introduce more pure unit tests into our code, we have a medley of tests that do or do not have external dependencies. The pure unit tests can be run immediately after checking out the code, with no requirement to install or configure other tools. They can also be run in a continuous integration environment like Jenkins.
So my question is: how can I denote which is which, for a cleaner separation? Is there an existing decorator within the unittest library?
You can define which tests should be skipped with the skipIf decorator. In combination with setting an environment variable, you can skip tests in some environments. An example:
import os
from unittest import TestCase, skipIf

class MyTest(TestCase):
    @skipIf(os.environ.get('RUNON') == 'jenkins', 'Does not run in Jenkins')
    def test_my_code(self):
        ...
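You would then set the variable in the environment where those tests must not run, e.g. export RUNON=jenkins in the Jenkins job before invoking python -m unittest, while a plain local run still executes everything.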
Here's another option. You could separate different test categories by directory. If you wanted to try this strategy, it may look something like:
python/
    - modules
unit/
    - pure unit test modules
functional/
    - other unit test modules
In your testing pipeline, you can call your testing framework to only execute the desired tests. For example, with Python's unittest, you could run your 'pure unit tests' from within the python directory with
python -m unittest discover --start-directory ../unit
and the functional/other unit tests with
python -m unittest discover --start-directory ../functional
An advantage of this setup is that your tests are easily categorized and you can do any scaffolding or mocked up services that you need in each testing environment. Someone with a little more Python experience might be able to help you run the tests regardless of the current directory, too.
Is there anything special about using Nose for tests? From what I have heard, the reasons most people use Nose are:
because it gives you a report
because it shows you the time it took for the tests
How is that any better than using simple Bash like below?
tests.py:
assert test1()
assert test2()
assert test3()
print("No errors")
runtests:
#!/bin/sh
(time python tests.py) > log
exit $?
The benefit of using a standard tool is that you are more likely to find third-party tools which build on top of the tool. So for just running a test, it doesn't matter what you use, but as soon as you start having many components in a Jenkins rig, having multiple different tools with different output formats and conventions makes it a real problem to maintain and develop monitoring and reporting.
For shell scripts (which I imagine is part of the question because you used the bash tag and wrote your script in sh), it's not like Nose is "the standard", and if you have multiple tools in different languages, it might not be possible to standardize on a single tool / framework / convention (TAP for Perl, Nose for Python, JUnit or whatever for Java ...)
One benefit which you didn't mention is that the framework takes care of a lot of the footwork for you. A single file with tests could be managed (with some pain) by hand, but once we start talking dozens of files with hundreds or thousands of test cases, you want a decent platform for managing those and let you focus on the actual testing instead of reinventing the wheels that the framework puts there for you to use.
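To make the comparison concrete, here is a minimal sketch of the same three checks written as a unittest test case (test1/test2/test3 are the hypothetical functions from the question, assumed here to live in a module called mymodule):
# test_things.py
import unittest
from mymodule import test1, test2, test3  # hypothetical module under test

class ThingTests(unittest.TestCase):
    def test_one(self):
        self.assertTrue(test1())

    def test_two(self):
        self.assertTrue(test2())

    def test_three(self):
        self.assertTrue(test3())

if __name__ == "__main__":
    unittest.main()
Running python -m unittest -v test_things reports each test individually plus the total run time, keeps going after the first failure, and the same file is picked up unchanged by nose, pytest, or CI plugins that emit JUnit-style reports.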
What I want
I would like to create a set of benchmarks for my Python project. I would like to see the performance of these benchmarks change as I introduce new code. I would like to do this the same way I test my Python code: by running a utility command like nosetests and getting a nicely formatted readout.
What I like about nosetests
The nosetests tool works by searching through my directory structure for any files named test_foo.py and collecting all functions test_bar() contained within. It runs all of those functions and prints out whether or not they raised an exception.
I'd like something similar that searched for all files bench_foo.py and ran all contained functions bench_bar() and reported their runtimes.
Questions
Does such a tool exist?
If not what are some good starting points? Is some of the nose source appropriate for this?
nosetests can run any type of test, so you can decide if they test functionality, input/output validity etc., or performance or profiling (or anything else you'd like). The Python Profiler is a great tool, and it comes with your Python installation.
import unittest
import cProfile

class ProfileTest(unittest.TestCase):
    def test_run_profiler(self):
        # profile the calls of interest (foo, bar and baz are placeholders)
        cProfile.run('foo(bar)')
        cProfile.run('baz(bar)')
You just add a line to the test, or add a test to the test case for all the calls you want to profile, and your main source is not polluted with test code.
If you only want to time execution and not all the profiling information, timeit is another useful tool.
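For instance, a minimal timeit sketch (mymodule, foo and bar are placeholders for your own code):
import timeit

# time 100 calls of foo(bar); the setup string runs once and is not timed
duration = timeit.timeit("foo(bar)",
                         setup="from mymodule import foo, bar",
                         number=100)
print(f"foo(bar) x100: {duration:.4f} s")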
The wheezy documentation has a good example of how to do this with nose. The important parts, if you just want the timings, are the options -q for a quiet run, -s for not capturing the output (so you will see the report) and -m benchmark to run only the 'timing' tests.
I recommend using py.test for testing over nose. To run the example from wheezy with it, change the name of the runTest method to test_bench_run and run only this benchmark with:
py.test -qs -k test_bench benchmark_hello.py
(-q and -s having the same effect as with nose and -k to select the pattern of the test names).
If you put your benchmark tests in a separate file or directory from your normal tests, they are of course easier to select and don't need special names.
I have several thousand tests that I want to run in parallel. The tests are all compiled binaries that give a return code of 0 or non-zero (on failure). Some unknown subset of them try to use the same resources (files, ports, etc.). Each test assumes that it is running independently and just reports a failure if a resource isn't available.
I'm using Python to launch each test using the subprocess module, and that works great serially. I looked into Nose for parallelizing, but I need to autogenerate the tests (to wrap each of the 1000+ binaries in a Python class that uses subprocess) and Nose's multiprocessing module doesn't support parallelizing autogenerated tests.
I ultimately settled on PyTest because it can run autogenerated tests on remote hosts over SSH with the xdist plugin.
However, as far as I can tell, it doesn't look like xdist supports any kind of control of how the tests get distributed. I want to give it a pool of N machines, and have one test run per machine.
Is what I want possible with PyTest/xdist? If not, is there a tool out there that can do what I'm looking for?
I am not sure if this will help, but if you know ahead of time how you want to divide up your tests, then instead of having pytest distribute them you could use your continuous integration server to call a different run of pytest for each machine. Using -k or -m to select a subset of tests, or simply specifying different test directory paths, you can control which tests are run together.
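As an illustration of that approach, a sketch that autogenerates one pytest test per binary so that -k can select named subsets per machine (the tests/bin path and the idea of encoding the resource group in the binary name are assumptions):
# test_binaries.py
import subprocess
from pathlib import Path

import pytest

BINARIES = sorted(Path("tests/bin").iterdir())  # assumed location of the compiled test binaries

@pytest.mark.parametrize("binary", BINARIES, ids=lambda p: p.name)
def test_binary(binary):
    # as in the question: a test passes if the binary exits with return code 0
    result = subprocess.run([str(binary)])
    assert result.returncode == 0
Each CI machine can then run pytest -k <pattern> test_binaries.py with a different name pattern, so binaries that compete for the same files or ports are kept off the same host at the same time.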