I have a python library that reads a config file or environment variables to set some global configuration variables.
I would like to run my test suite multiple times with different settings.
I could do this manually, like:
MYLIB_SETTINGS=enable_foo=True nosetests
MYLIB_SETTINGS=enable_foo=False nosetests
I'm wondering if there's a way to do this automatically using the nose API and combine the results.
Normally, you would run your tests in some sort of Continuous Integration framework (like Jenkins) with --with-xunit and --xunit-file TEST_NAME_XXX.xml. Each test run will produce a separate XML file, and the CI tool will combine them into a pretty table showing all tests from both cases.
You can do something similar using the nose API, by setting environment variables appropriately via os.environ in Python and calling nose.run().
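A minimal sketch of that approach, assuming nose is installed. MYLIB_SETTINGS and the XML file names come from the question above; the helper names are illustrative:

```python
import os

def build_run(settings_value, xml_name):
    """Env and argv for one nose pass that writes its own xunit file."""
    env = dict(os.environ, MYLIB_SETTINGS=settings_value)
    argv = ["nosetests", "--with-xunit", "--xunit-file=%s" % xml_name]
    return env, argv

def run_all():
    import nose  # imported lazily so the helper works without nose
    for value in ("enable_foo=True", "enable_foo=False"):
        env, argv = build_run(value, "results_%s.xml" % value.split("=")[-1])
        os.environ.update(env)
        nose.run(argv=argv)  # returns True if this pass was green
```

One caveat: if the library reads its configuration once at import time, in-process runs may not pick up the changed variable, and spawning a fresh interpreter per setting (as in the manual commands above) is safer.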
I have separate preprocessing and training Python scripts. I would like to track my experiments using mlflow.
Because my scripts are separate, I am using a Powershell script (think of it as a shell script, but on Windows) to trigger the Python scripts with the same configuration and to make sure that all the scripts operate with the same parameters and data.
How can I track across scripts into the same mlflow run?
I am passing the same experiment name to the scripts. To make sure the same run is picked up, I thought to generate a (16-byte hex) run ID in my PowerShell script and pass it to all the Python scripts.
This doesn't work: when a run ID is given to mlflow.start_run(), mlflow expects that run ID to already exist, and it fails with mlflow.exceptions.MlflowException: Run 'dfa31595f0b84a1e0d1343eedc81ee75' not found.
If I pass a run name, each of the scripts gets logged to a different run anyway (which is expected).
I cannot use mlflow.last_active_run() in subsequent scripts, because I need to preserve the ability to run and track/log each script separately.
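One hedged workaround sketch: let the first script create the run and hand its ID to the later scripts through a small file (which the PowerShell wrapper can pass around). mlflow.start_run(run_id=...) resumes an existing run; the file name and helper functions here are illustrative, not part of mlflow:

```python
import os

RUN_ID_FILE = "current_run_id.txt"  # illustrative hand-off file

def save_run_id(run_id, path=RUN_ID_FILE):
    with open(path, "w") as f:
        f.write(run_id)

def load_run_id(path=RUN_ID_FILE):
    """Stored run ID, or None so each script can still run on its own."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return f.read().strip() or None

def first_script():
    import mlflow
    with mlflow.start_run() as run:   # creates the run, so the ID exists
        save_run_id(run.info.run_id)  # hand its ID to later scripts
        mlflow.log_param("stage", "preprocessing")

def later_script():
    import mlflow
    # run_id=None simply starts a fresh run, which preserves the ability
    # to run and track each script separately
    with mlflow.start_run(run_id=load_run_id()):
        mlflow.log_param("stage", "training")
```

This sidesteps the "Run not found" error because the ID is never invented up front; it is always taken from a run mlflow itself created.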
I am currently developing some tests using Python's py.test / unittest that, via subprocess, invoke another Python application (so that I can exercise the command-line options and confirm that the tool is installed correctly).
I would like to be able to run the tests in such a way that I can get a view of the code coverage metrics (using coverage.py) for the target application, using pytest-cov. By default this does not work, as the code coverage instrumentation does not apply to code invoked via subprocess.
Code coverage does work if I update the tests to directly invoke the entry class of the target application (rather than running it via the command line).
Ideally I want to have a single set of code which can be run in two ways:
If code coverage monitoring is not enabled, then use the command line.
Otherwise, execute the main class of the target application.
Which leads to my question(s):
Is it possible for a python unit test to determine if it is being run with code coverage enabled?
Otherwise: is there any easy way to pass a command-line flag from the pytest invocation that can be used to set the mode within the code?
Coverage.py has a facility to automatically measure coverage in sub-processes that are spawned: http://coverage.readthedocs.io/en/latest/subprocess.html
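The documented recipe at that link works roughly as follows (a sketch, not verbatim from the docs; the file names are the conventional ones):

```python
# Sketch of coverage.py's subprocess-measurement recipe (link above).
#
# 1) In .coveragerc, let parallel runs write separate data files:
#        [run]
#        parallel = true
#
# 2) In the parent process (or shell), point subprocesses at the config
#    before they are spawned:
#        os.environ["COVERAGE_PROCESS_START"] = ".coveragerc"
#
# 3) Arrange for every child interpreter to run this as early as
#    possible, typically from a sitecustomize.py on the child's path:
try:
    import coverage
    coverage.process_startup()  # no-op unless COVERAGE_PROCESS_START is set
except ImportError:
    pass  # coverage not installed; run normally
```

After the test run, the per-process data files are merged with coverage combine.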
When running tests with coverage enabled, pytest-cov sets three environment variables: COV_CORE_SOURCE, COV_CORE_CONFIG and COV_CORE_DATAFILE.
So you can use a simple if-statement to verify whether the current test is being run with coverage enabled:
import os

if "COV_CORE_SOURCE" in os.environ:
    # do what you need to do when coverage is enabled
    pass
In my project I created a unittest test file for each Python file. For example, I have file component.py and its accompanying test_component.py. Similarly for path.py and test_path.py, etc.
However, since these files depend on each other, a change in one file can affect another, so if I change something I need to rerun all my test files. For now, I have to do this manually. Is it possible to run all these test files at once, with a single command? Maybe by calling them from an extra file? However, I still want to use the test suite as before (see the image below).
I am using Python 2.7 and JetBrains' PyCharm.
It's possible to run all tests located in some folder.
Go to Run - Edit Configurations, select or create a run configuration for tests, and specify the path to the folder.
I would recommend using Pytest.
Another alternative is to have a separate file that calls the tests or instantiates classes from each test file. Based on the return values, it calls the next test.
You might also need to store information in a .txt file. You could write to and read from a file that holds your test variables, conditions, etc.
I'm creating a Python test suite (using py.test). I'm coding the tests in IDEA and I don't know how to debug a single test.
This is my setting of the debugger. It runs the whole testsuite. So I have to run all the tests before it gets to the one I'm trying to debug.
In your configuration, set:
Target to the relative path of one of your test files, e.g. testsuite/psa/test_psa_integration.py
Keywords to a keyword that identifies the test you are trying to run specifically. If tests are part of a class, Keywords should be something like TestPsaIntegration and test_psa_integration_example.
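The same file-plus-keyword selection can also be driven from a tiny script, which gives the debugger a plain Python entry point without touching the run configuration. A sketch, assuming pytest is installed; the path and test name are the examples from this answer:

```python
def single_test_args(test_file, keyword):
    """pytest argv selecting one file and one test by -k keyword."""
    return [test_file, "-k", keyword, "--capture=no"]

def debug_one():
    import pytest  # imported lazily; assumes pytest is installed
    return pytest.main(single_test_args(
        "testsuite/psa/test_psa_integration.py",
        "test_psa_integration_example"))
```

Running debug_one() under the IDE's debugger (or via a one-line script) executes only the selected test, so breakpoints are hit without running the whole suite first.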
I don't use IntelliJ, but in PyCharm, you can easily debug tests without going through this tedious process of adding a Run/Debug configuration each time.
To do this with PyCharm, go to:
Preferences (or Settings) > Tools > Python Integrated Tools and set Default test runner to py.test.
Then, back in your file (e.g. test_psa_integration.py), you can just right-click anywhere within the code of a test and select either Run 'py.test in ...' or Debug 'py.test in ...', which will automatically create a new Run/Debug configuration as explained previously.
An alternative is adding --no-cov --capture=no to Additional Arguments. To make this automatic for other test files, add those to the Template part.
I want to run the unittests of Jinja2 whenever I change something to make sure I'm not breaking something.
There's a package full of unit tests. Basically it's a folder full of Python files with the name "test_xxxxxx.py"
How do I run all of these tests in one command?
It looks like Jinja uses the py.test testing tool. If so, you can run all tests by just running py.test from within the tests subdirectory.
You could also take a look at nose. It's supposed to be an evolution of py.test.
Watch out for "test.py" in the Jinja2 package! Those are not unit tests! That is a set of utility functions for checking attributes, etc. My testing package assumes they are unit tests because of the name "test" and returns strange messages.
Try 'walking' through the directories and importing everything from files named "test_xxxxxx.py", then call unittest.main().