pytest-xdist crashes with pytest-cov error

I was trying to execute tests for my package on a remote machine over ssh from my master node. Both nodes have the same versions of the packages installed.
I'm running the tests like this:
pytest -d --tx ssh=ubuntu//python=python3 --rsyncdir /home/ubuntu/pkg/ /home/ubuntu/pkg -n 7
On running this, I'm getting the following error:
------------------------------ coverage ------------------------------
---------------------- coverage: failed workers ----------------------
The following workers failed to return coverage data, ensure that pytest-cov is installed on these workers.
gw0
gw1
gw2
gw3
gw4
gw5
gw6
Coverage XML written to file coverage.xml
I've made sure that coverage is installed on the worker node.
coverage==6.2
pytest-cov==3.0.0
I don't know why it is still failing.
I also noticed that the code files have not been synced to the worker machine for some reason.
I'm trying to understand what is going wrong here and how to fix this.
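To narrow this down, a throwaway test run with the same distribution options can show what each worker actually sees (a minimal sketch; the file and test names are illustrative):

test_worker_env.py
import importlib.util
import socket
import sys

def test_show_worker_environment():
    # Print which host and interpreter this worker uses (run pytest with -s to see it)
    # and fail loudly if pytest-cov is not importable there.
    print(socket.gethostname(), sys.executable)
    assert importlib.util.find_spec("pytest_cov") is not None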

Related

Coverage reports not being processed by codecov anymore

I had set up codecov with gitlab pipelines a while back and was able to see coverage reports in codecov. Since the initial setup, the reports stopped processing after a few commits, and I have not been able to figure out what I'm doing wrong to get the reports processing again.
In gitlab pipelines I use tox and pip install codecov:
test:
  stage: test
  script:
    - pip install circuitpython-build-tools Sphinx sphinx-rtd-theme tox codecov
    - tox
    - codecov -t $CODECOV_TOKEN
  artifacts:
    paths:
      - htmlcov/
In tox I run coverage:
[testenv:coverage]
deps = -rrequirements.txt
       -rtest-requirements.txt
commands = coverage run -m unittest discover tests/
           coverage html
In codecov I can see the upload attempt being processed, but it fails without much description:
There was an error processing coverage reports.
I've referenced the python tutorials, but can't see what I'm getting wrong.
https://github.com/codecov/codecov-python
https://github.com/codecov/example-python
Looks like maybe this was something on the codecov.io side. I didn't change anything, but this morning the coverage reports were parsing and the badge was working again.

pygradle: Multi-Project-Example: buildPex FAILED when using project dependency

I would like to use pygradle in a multi-project setup with project dependencies. I created two gradle sub-projects: a python-cli project (example-app) and a python-sdist project (example-lib) on which the cli project depends.
But currently I'm facing the following error (gist) when I try to build the app:
multi-project-example/example-app> gradle build --info
> Task :example-app:buildPex FAILED
Task ':example-app:buildPex' is not up-to-date because:
Task has not declared any outputs despite executing actions.
Starting process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''. Working directory: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app Command: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/pip freeze --all --disable-pip-version-check
Successfully started process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''
/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/deployable/bin/example-app.pex
Starting process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''. Working directory: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app Command: /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/pex --no-pypi --cache-dir /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/pex-cache --output-file /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/deployable/bin/example-app.pex --repo /home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/wheel-cache --python-shebang /home/kkdh/.anaconda3/bin/python UNKNOWN==0.0.0 example-app==0.3.0a1
Successfully started process 'command '/home/kkdh/Projects/pygradle/examples/multi-project-example/example-app/build/venv/bin/python''
Could not satisfy all requirements for example-lib:
example-lib(from: example-app==0.3.0a1)
:example-app:buildPex (Thread[Execution worker for ':',5,main]) completed. Took 1.165 secs.
You will find the example in my fork of pygradle: https://github.com/kKdH/pygradle/tree/master/examples/multi-project-example
I opened an issue about this problem but got no response from the project maintainers, so now I'm asking here for any pointers to a solution or further troubleshooting steps.

ERROR: py35: InterpreterNotFound: python3.5 even though python3.5 is installed

I'm running my builds on my CI (Bamboo) via tox in Docker.
My tox.ini looks like this:
[tox]
envlist = py27,py35
[testenv]
deps=-rrequirements.txt
commands=pytest
I'm running the tests like so:
tox --recreate -vv -i $myindexserver
Testing the setup locally (inside Docker) works:
py27: commands succeeded
py35: commands succeeded
congratulations :)
But running the same thing on the CI instance fails with:
___________________________________ summary _________________________________
py27: commands succeeded
ERROR: py35: InterpreterNotFound: python3.5
Inside the Docker container, running which python3 and which python3.5 succeeds.
Has anyone faced a similar issue?
Turns out that the Docker image versions used locally and by the CI were different.
I'm keeping the answer here in the hope that someone else finds this useful and is spared the many hours of debugging that I had to go through.
Run docker images to find the tag that you're using locally, and check it against the version running inside your CI.
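For the interpreter side of the check, a small script run with python3 inside both images will show what is actually on the PATH (a sketch; it assumes Python 3 for shutil.which):

check_interpreters.py
import shutil
import sys

# Print the interpreter running this script, then look up the interpreters
# that the tox envlist needs on the PATH.
print(sys.version)
for name in ("python2.7", "python3.5"):
    print(name, "->", shutil.which(name))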

mpi4py only works under mpiexec

I have set up mpi4py on a new server, and it isn't quite working. When I import mpi4py.MPI, it crashes. However, if I do the same thing under mpiexec, it works. On my other server and on my workstation, both techniques work fine. What am I missing on the new server?
Here's what happens on the new server:
$ python -c 'from mpi4py import MPI; print("OK")'
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
PMI2_Job_GetId failed failed
--> Returned value (null) (14) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
orte_ess_init failed
--> Returned value (null) (14) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "(null)" (14) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[Octomore:45430] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
If I run it with mpiexec, it's fine.
$ mpiexec -np 1 python -c 'from mpi4py import MPI; print("OK")'
OK
I'm running on CentOS 6.7. I've installed Python 2.7 as a software collection, and I've loaded the openmpi/gnu/1.10.2 module. MPICH and MPICH2 are also installed, so they may be conflicting with OpenMPI. I haven't loaded the MPICH modules, though. I'm running Python in a virtualenv:
$ pip list
mpi4py (2.0.0)
pip (8.1.2)
setuptools (18.0.1)
wheel (0.24.0)
It turned out that mpi4py is not compatible with version 1.10.2 of OpenMPI. It works fine with version 1.6.5.
$ module load openmpi/gnu/1.6.5
$ python -c 'from mpi4py import MPI; print("OK")'
OK
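Once the import works, mpi4py can report which MPI implementation and version it is actually linked against via its get_vendor() helper (snippet for illustration, not from the original post):

from mpi4py import MPI

# Prints the linked implementation, e.g. ('Open MPI', (1, 6, 5)),
# followed by the MPI standard version, e.g. (2, 1).
print(MPI.get_vendor())
print(MPI.Get_version())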

Conditional Commands in tox? (tox, travis-ci, and coveralls)

tl;dr:
I'm setting up CI for a project of mine, hosted on GitHub, using tox and travis-ci. At the end of the build, I run coveralls to push the coverage reports to coveralls.io. I would like to make this command 'conditional', executed only when the tests are run on travis, not when they are run on my local machine. Is there a way to make this happen?
The details:
The package I'm trying to test is a python package. I'm using / planning to use the following 'infrastructure' to set up the tests:
The tests themselves are of the py.test variety.
The CI scripting, so to speak, is from tox. This lets me run the tests locally, which is rather important to me. I don't want to have to push to github every time I need a test run. I also use numpy and matplotlib in my package, so running an inane number of test cycles on travis-ci seems overly wasteful to me. As such, ditching tox and simply using .travis.yml alone is not an option.
The CI server is travis-ci
The relevant test scripts look something like this:
.travis.yml
language: python
python: 2.7
env:
  - TOX_ENV=py27
install:
  - pip install tox
script:
  - tox -e $TOX_ENV
tox.ini
[tox]
envlist = py27
[testenv]
passenv = TRAVIS TRAVIS_JOB_ID TRAVIS_BRANCH
deps =
    pytest
    coverage
    pytest-cov
    coveralls
commands =
    py.test --cov={envsitepackagesdir}/mypackage --cov-report=term --basetemp={envtmpdir}
    coveralls
This file lets me run the tests locally. However, due to the final coveralls call, the test fails in principle with:
py27 runtests: commands[1] | coveralls
You have to provide either repo_token in .coveralls.yml, or launch via Travis
ERROR: InvocationError: ...coveralls'
This is an expected error. The passenv bit sends along the necessary information from travis to be able to write to coveralls, and without travis there to provide this information, the command should fail. I don't want this to push the results to coveralls.io, either. I'd like to have coveralls run only if the test is occurring on travis-ci. Is there any way in which I can have this command run conditionally, or set up a build configuration which achieves the same effect?
I've already tried moving the coveralls portion into .travis.yml, but when that is executed, coveralls seems unable to locate the appropriate .coverage file to send over. I made various attempts in this direction, none of which resulted in a successful submission to coveralls.io except the combination listed above. The following is what I would have hoped would work, given that when I run tox locally I do end up with a .coverage file where I'd expect it: in the root folder of my source tree.
No submission to coveralls.io
language: python
python: 2.7
env:
  - TOX_ENV=py27
install:
  - pip install tox
  - pip install python-coveralls
script:
  - tox -e $TOX_ENV
after_success:
  - coveralls
An alternative solution would be to prefix the coveralls command with a dash (-) to tell tox to ignore its exit code as explained in the documentation. This way even failures from coveralls will be ignored and tox will consider the test execution as successful when executed locally.
Using the example configuration above, it would be as follows:
[tox]
envlist = py27
[testenv]
passenv = TRAVIS TRAVIS_JOB_ID TRAVIS_BRANCH
deps =
    pytest
    coverage
    pytest-cov
    coveralls
commands =
    py.test --cov={envsitepackagesdir}/mypackage --cov-report=term --basetemp={envtmpdir}
    - coveralls
I have a similar setup with Travis, tox and coveralls. My idea was to only execute coveralls if the TRAVIS environment variable is set. However, it seems this is not so easy to do, as tox has trouble parsing commands with quotes and ampersands. Additionally, this confused Travis a lot.
Eventually I wrote a simple python script run_coveralls.py:
#!/usr/bin/env python
import os
from subprocess import call

if __name__ == '__main__':
    # Only push coverage data when running on Travis CI.
    if 'TRAVIS' in os.environ:
        rc = call('coveralls')
        raise SystemExit(rc)
In tox.ini, replace your coveralls command with python {toxinidir}/run_coveralls.py.
I am using an environment variable to run additional commands.
tox.ini
commands =
    coverage run runtests.py
    {env:POST_COMMAND:python --version}
.travis.yml
language: python
python:
  - "3.6"
install: pip install tox-travis
script: tox
env:
  - POST_COMMAND=codecov -e TOX_ENV
Now in my local setup, it prints the Python version. When run from Travis, it runs codecov.
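The {env:POST_COMMAND:python --version} form substitutes the value of POST_COMMAND when it is set and falls back to the default command otherwise; in Python terms it behaves roughly like this sketch (illustrative only):

import os
import shlex
import subprocess

# Use POST_COMMAND from the environment if set, otherwise the default command,
# mirroring tox's {env:NAME:default} substitution.
cmd = os.environ.get("POST_COMMAND", "python --version")
subprocess.call(shlex.split(cmd))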
An alternative solution if you use a Makefile and don't want a new .py file:
define COVERALL_PYSCRIPT
import os
from subprocess import call
if __name__ == '__main__':
    if 'TRAVIS' in os.environ:
        rc = call('coveralls')
        raise SystemExit(rc)
    print("Not in Travis CI, skipping coveralls")
endef
export COVERALL_PYSCRIPT

coveralls: ## runs coveralls if TRAVIS in env
	@python -c "$$COVERALL_PYSCRIPT"
In tox.ini, add make coveralls to commands.
