Context
Suppose one has a project structure with a src/projectname/__main__.py entry point, which can be executed using the following command with accompanying arguments:
python -m src.projectname -e mdsa_size3_m1 -v -x
Question
How would one run vulture whilst passing CLI arguments to the script on which vulture runs?
Approach I
When I run vulture on the package while also passing the script's arguments, e.g.:
python -m vulture src/projectname -e mdsa_size3_m1 -v -x
it throws the following:
usage: vulture [options] [PATH ...]
vulture: error: unrecognized arguments: -e mdsa_size3_m1 -x
because vulture tries to parse the arguments that are intended for the script being run.
Notes
I am aware that one would normally run vulture on the script in its entirety, without narrowing down the scope with arguments. However, in this case the arguments are required to specify the number of runs/duration of the code execution.
One can hack around this issue by temporarily hardcoding the args manually, for example:
args = parse_cli_args()
# Temporarily override the values that would normally come from the CLI:
args.experiment_settings_name = "mdsa_size3_m1"
args.export_images = True
process_args(args)
assuming one has such an args object. However, I thought perhaps this functionality could be realised from the CLI, without temporarily modifying the code.
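If the goal is simply to execute the entry point with fixed arguments while leaving the source untouched, a sketch of an alternative to the hardcoding hack is a small wrapper script that injects the arguments via sys.argv before running the package (the wrapper below is hypothetical; the argument values are just the ones from the example above):
run_fixed_args.py
# Hypothetical wrapper; not part of the project itself.
import runpy
import sys

# Inject the arguments that would otherwise be passed on the command line.
sys.argv = ["src.projectname", "-e", "mdsa_size3_m1", "-v", "-x"]

# Run src/projectname/__main__.py as if invoked with `python -m src.projectname`.
runpy.run_module("src.projectname", run_name="__main__")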
To figure out why this is so slow, I wanted to profile my example generation using line_profiler.
However, I can't get it to work. Here's what I tried:
1. kernprof -m pytest -n 3 tests/test_conv.py --hypothesis-profile default
2. pytest -m line_profiler -n 3 tests/test_conv.py --hypothesis-profile default
3. python3 -m line_profiler -m pytest -n 3 tests/test_conv.py --hypothesis-profile default
4. https://pypi.org/project/pytest-line-profiler/
None of these worked. The first three failed immediately, not recognizing either @profile, pytest, or line_profiler, IIRC.
How do I get those three to cooperate together?
I had high hopes for Nr. 4; however, its documentation states you might want to annotate your test function:
@pytest.mark.line_profile.with_args(f, g)
But my parameters are supplied by hypothesis, so I tried just with
@pytest.mark.line_profile
which did nothing.
I also tried the alternative approach of using the command line:
pytest --line-profile path.to.function_to_be_profiled [...]
However, I couldn't figure out what path.to.function_to_be_profiled should be in my case. My test function resides in tests/test_conv.py and is named test_padding. However, neither
pytest -n auto --line-profile test_padding tests/test_conv.py --hypothesis-profile default
nor
pytest -n auto --line-profile test_conv.test_padding tests/test_conv.py --hypothesis-profile default
worked. Both errored, saying they couldn't find that function.
I therefore ask: How?
Installed (via pip) are:
pytest
pytest-xdist[psutil]
hypothesis
line_profiler
pytest-line-profiler
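For what it's worth, one hedged reading of the pytest-line-profiler documentation is that the marker should name the implementation functions you want line-profiled rather than the Hypothesis-driven test itself. A minimal sketch (mypackage.conv.pad is a hypothetical stand-in for the real code under test, and the strategy is made up):
tests/test_conv.py
import pytest
from hypothesis import given, strategies as st

from mypackage.conv import pad  # hypothetical: the function whose lines we want profiled


@pytest.mark.line_profile.with_args(pad)  # profile pad(), not the test function itself
@given(width=st.integers(min_value=0, max_value=16))
def test_padding(width):
    result = pad([1, 2, 3], width)
    assert len(result) >= 3

Also, pytest-xdist (-n) runs tests in separate worker processes, which may make it harder to collect the profiler output, so it could be worth dropping -n while profiling.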
I have the below project structure in PyCharm.
Project Folder: PythonTutorial
Package: pytestpackage
Python Files: test_conftest_demo1.py, test_conftest_demo2.py
I'm trying to run the above two Python files, which have almost identical names, using pytest from the command prompt with the command below, but I'm facing an issue. Please help me with this.
Note: I'm using the Windows 10 operating system.
Command Used:
py.test -s -v test_conftest_demo*.py
Use the -k option to specify substring matching; the Windows command prompt does not expand the * wildcard, so pytest never sees the actual file names.
$ pytest -s -v -k "test_conftest_demo"
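Alternatively, pytest accepts multiple paths, so (assuming you run it from the directory containing the files) you can list them explicitly:
$ pytest -s -v test_conftest_demo1.py test_conftest_demo2.py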
How do you test a single file in pytest? I could only find ignore options and no "test this file only" option in the docs.
Preferably this would work on the command line instead of setup.cfg, as I would like to run different file tests in the IDE. The entire suite takes too long.
Simply run pytest with the path to the file, something like:
pytest tests/test_file.py
Use the :: syntax to run a specific test in the test file:
pytest test_mod.py::test_func
Here test_func can be a test method or a class (e.g.: pytest test_mod.py::TestClass).
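You can also drill down to a single method inside a test class:
pytest test_mod.py::TestClass::test_method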
For more ways and details, see "Specifying which tests to run" in the docs.
This is pretty simple:
$ pytest -v /path/to/test_file.py
The -v flag is to increase verbosity. If you want to run a specific test within that file:
$ pytest -v /path/to/test_file.py::test_name
If you want to run tests whose names follow a pattern, you can use:
$ pytest -v -k "pattern_one or pattern_two" /path/to/test_file.py
You also have the option of marking tests, so you can use the -m flag to run a subset of marked tests.
test_file.py
import pytest


def test_number_one():
    """Docstring"""
    assert 1 == 1


@pytest.mark.run_these_please
def test_number_two():
    """Docstring"""
    assert [1] == [1]
To run test marked with run_these_please:
$ pytest -v -m run_these_please /path/to/test_file.py
This worked for me:
python -m pytest -k some_test_file.py
This works for individual test functions too:
python -m pytest -k test_about_something
tl;dr:
I'm setting up CI for a project of mine, hosted on GitHub, using tox and Travis CI. At the end of the build, I run coveralls to push the coverage reports to coveralls.io. I would like to make this command 'conditional', for execution only when the tests are run on Travis, not when they are run on my local machine. Is there a way to make this happen?
The details:
The package I'm trying to test is a Python package. I'm using / planning to use the following 'infrastructure' to set up the tests:
The tests themselves are of the py.test variety.
The CI scripting, so to speak, is from tox. This lets me run the tests locally, which is rather important to me. I don't want to have to push to github every time I need a test run. I also use numpy and matplotlib in my package, so running an inane number of test cycles on travis-ci seems overly wasteful to me. As such, ditching tox and simply using .travis.yml alone is not an option.
The CI server is travis-ci
The relevant test scripts look something like this:
.travis.yml
language: python
python: 2.7
env:
  - TOX_ENV=py27
install:
  - pip install tox
script:
  - tox -e $TOX_ENV
tox.ini
[tox]
envlist = py27

[testenv]
passenv = TRAVIS TRAVIS_JOB_ID TRAVIS_BRANCH
deps =
    pytest
    coverage
    pytest-cov
    coveralls
commands =
    py.test --cov={envsitepackagesdir}/mypackage --cov-report=term --basetemp={envtmpdir}
    coveralls
This file lets me run the tests locally. However, due to the final coveralls call, the test fails in principle, with:
py27 runtests: commands[1] | coveralls
You have to provide either repo_token in .coveralls.yml, or launch via Travis
ERROR: InvocationError: ...coveralls'
This is an expected error. The passenv bit sends along the necessary information from Travis to be able to write to coveralls, and without Travis there to provide this information, the command should fail. I don't want this to push the results to coveralls.io, either. I'd like to have coveralls run only if the test is occurring on Travis CI. Is there any way in which I can have this command run conditionally, or set up a build configuration which achieves the same effect?
I've already tried moving the coveralls portion into .travis.yml, but when that is executed coveralls seems to be unable to locate the appropriate .coverage file to send over. I made various attempts in this direction, none of which resulted in a successful submission to coveralls.io except the combination listed above. The following was what I would have hoped would work, given that when I run tox locally I do end up with a .coverage file where I'd expect it - in the root folder of my source tree.
.travis.yml (resulted in no submission to coveralls.io):
language: python
python: 2.7
env:
  - TOX_ENV=py27
install:
  - pip install tox
  - pip install python-coveralls
script:
  - tox -e $TOX_ENV
after_success:
  - coveralls
An alternative solution would be to prefix the coveralls command with a dash (-) to tell tox to ignore its exit code, as explained in the documentation. This way, even failures from coveralls will be ignored and tox will consider the test run successful when executed locally.
Using the example configuration above, it would be as follows:
[tox]
envlist = py27

[testenv]
passenv = TRAVIS TRAVIS_JOB_ID TRAVIS_BRANCH
deps =
    pytest
    coverage
    pytest-cov
    coveralls
commands =
    py.test --cov={envsitepackagesdir}/mypackage --cov-report=term --basetemp={envtmpdir}
    - coveralls
I have a similar setup with Travis, tox and coveralls. My idea was to only execute coveralls if the TRAVIS environment variable is set. However, it seems this is not so easy to do, as tox has trouble parsing commands with quotes and ampersands. Additionally, this confused Travis a lot.
Eventually I wrote a simple Python script, run_coveralls.py:
#!/usr/bin/env python
import os
from subprocess import call

if __name__ == '__main__':
    if 'TRAVIS' in os.environ:
        rc = call('coveralls')
        raise SystemExit(rc)
In tox.ini, replace your coveralls command with python {toxinidir}/run_coveralls.py.
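Using the tox.ini from the question, the commands section would then look something like this (a sketch):
commands =
    py.test --cov={envsitepackagesdir}/mypackage --cov-report=term --basetemp={envtmpdir}
    python {toxinidir}/run_coveralls.py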
I am using an environment variable to run additional commands.
tox.ini
commands =
    coverage run runtests.py
    {env:POST_COMMAND:python --version}
.travis.yml
language: python
python:
  - "3.6"
install: pip install tox-travis
script: tox
env:
  - POST_COMMAND=codecov -e TOX_ENV
Now in my local setup, it prints the Python version. When run from Travis, it runs codecov.
An alternative solution, if you use a Makefile and don't want a new .py file:
define COVERALL_PYSCRIPT
import os
from subprocess import call

if __name__ == '__main__':
    if 'TRAVIS' in os.environ:
        rc = call('coveralls')
        raise SystemExit(rc)
    print("Not in Travis CI, skipping coveralls")
endef
export COVERALL_PYSCRIPT

coveralls: ## runs coveralls if TRAVIS in env
	@python -c "$$COVERALL_PYSCRIPT"
In tox.ini, add make coveralls to commands.
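As a sketch, the testenv section might then look like the following; note that tox only runs external programs such as make if they are listed in allowlist_externals (whitelist_externals in older tox versions):
[testenv]
allowlist_externals = make
commands =
    py.test --cov={envsitepackagesdir}/mypackage --cov-report=term --basetemp={envtmpdir}
    make coveralls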