Function import issue running pytest on GitLab CI - Python

I have a script 'interceptor.py' with a function

def isIPValid(string):

and a unit test script for pytest, 'test_interceptor.py', with a test for this function:

import interceptor

def test_isIPValid():
    ip1 = 'localhost'
    ip2 = '192.168.4.52'
    ip3 = '55..5.7.1'
    ip4 = 'badip'
    assert interceptor.isIPValid(ip1)
    assert interceptor.isIPValid(ip2)
    assert not interceptor.isIPValid(ip3)
    assert not interceptor.isIPValid(ip4)
Running the tests locally, either from PyCharm or cmd, works fine: both

pytest

and

coverage run -m pytest

succeed and the test passes.
On GitLab CI, however, when running coverage run -m pytest (after cloning and installing pytest and coverage via pip), I get the following output:
>       assert interceptor.isIPValid(ip1)
E       AttributeError: module 'interceptor' has no attribute 'isIPValid'

test_interceptor.py:10: AttributeError
=========================== short test summary info ============================
FAILED test_interceptor.py::test_isIPValid - AttributeError: module 'interceptor' has no attribute 'isIPValid'
============================== 1 failed in 0.09s ===============================
I've tried importing the specific function, and all of the functions from the module, always with the same error. Does anyone have a pointer as to where this issue could come from, and why I am getting different behavior on the GitLab runner vs. locally on Windows?
Some additional information:
- The pipeline is running on a Linux runner in hosted GitLab.
- All requirements are installed from a requirements.txt file during the CI script, plus pytest and coverage.
- The file containing the function uses PySide6, but the tested function does not use anything that depends on it (see the sketch below).
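(As an aside, one way to make tests like this immune to Qt import problems on headless runners is to keep pure helpers such as isIPValid in a module that never imports PySide6. A minimal sketch; the real body of isIPValid was not shown, so the implementation below is purely illustrative:

# ip_utils.py -- hypothetical Qt-free module holding the pure helper
import ipaddress

def isIPValid(string):
    # The test cases treat 'localhost' as valid, so special-case it here (assumption).
    if string == 'localhost':
        return True
    try:
        ipaddress.ip_address(string)  # raises ValueError for '55..5.7.1', 'badip', etc.
        return True
    except ValueError:
        return False

interceptor.py can then re-export it with from ip_utils import isIPValid, and test_interceptor.py can import ip_utils directly without ever touching PySide6.)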
Edit:
As requested, here is my GitLab YAML:

stages:
  - test

image: python:3

unit-test-job:   # This job runs in the test stage.
  stage: test
  script:
    - python --version
    - pip install -r requirements.txt
    - pip install coverage
    - pip install pytest
    - pip install pytest-mock
    - echo "Running unit tests"
    - coverage run -m pytest
    - coverage report
    - coverage xml

... plus the upload part we don't actually reach.
Edit 2:
There seems to be a serious and unaddressed bug in PySide6 when using GitLab CI, as pointed out in this thread:
Why does PySide6 on GitLab CI result in ImportError?
The solution given there, however, is still not working for me. This script:

script:
  - python -m venv .venv
  - . .venv/bin/activate
  - pip install -r requirements.txt
  - pip install coverage
  - pip install pytest
  - pip install pytest-mock
  - python -m pip install PySide6
  - strip --remove-section=.note.ABI-tag .venv/lib/python3.11/site-packages/PySide6/Qt/lib/libQt6Core.so.6
  - python -c 'import PySide6; print(PySide6.__version__)'
  - ldd .venv/lib/python3.11/site-packages/PySide6/Qt/lib/libQt6Core.so.6
  - python -c 'import PySide6.QtCore'
  - echo "Running unit tests"
  - coverage run -m pytest
  - coverage report
  - coverage xml
Still generates an error:
ImportError while importing test module '<mypath>test_interceptor.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/local/lib/python3.11/importlib/__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
test_interceptor.py:1: in <module>
    from Interceptor_module import isIPValid
Interceptor_module.py:5: in <module>
    from PySide6 import QtCore, QtGui, QtWidgets
E   ImportError: libGL.so.1: cannot open shared object file: No such file or directory
=========================== short test summary info ============================
ERROR test_interceptor.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.86s ===============================
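This last error is not the ABI-tag bug but a missing system library: the python:3 image is Debian-based and ships without the OpenGL runtime that Qt's GUI modules link against. A common fix (a suggestion based on the error message, not taken from the thread) is to install it via apt at the start of the job:

script:
  # provides libGL.so.1 on Debian-based images such as python:3
  - apt-get update && apt-get install -y --no-install-recommends libgl1
  # optional: lets Qt initialize without a display server
  - export QT_QPA_PLATFORM=offscreen
  # ... then the existing install/test commands from above

Depending on which Qt modules are imported, further packages (e.g. libegl1 or libxkbcommon0) may be needed; alternatively, keeping the PySide6 import out of the code under test avoids the system dependency entirely.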

Related

Is there a way to map python nose2 to coverage plugin which is installed in custom location?

I have installed nose2 with the following command:
pip3 install nose2
And I have installed coverage in a custom path with the following command:
pip3 install --target=/tmp/coverage_pkg coverage
I want to execute the test cases and generate a coverage report. Is there a way to point nose2 at the coverage plugin installed in the custom path?
I tried to execute the below command:
nose2 --with-coverage
I got the following output:
Warning: you need to install "coverage_plugin" extra requirements to use this plugin. e.g. `pip install nose2[coverage_plugin]`
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I want to generate the coverage report using the coverage package installed in the custom path.
Can someone please explain if I can achieve this?
I was able to achieve this using the following command:
cd <custom location where packages are installed>
python3 -m nose2 --with-coverage -s "path_to_source_dir" --coverage "path_to_source_dir"
You need to stay in the location where nose2 and coverage are installed (the custom dir, e.g. /tmp/coverage_pkg).
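An alternative (my suggestion, not part of the original answer) is to expose the custom install location via PYTHONPATH, so nose2 can be run from anywhere:

# make the pip --target install importable, then run nose2 with coverage
PYTHONPATH=/tmp/coverage_pkg python3 -m nose2 --with-coverage -s "path_to_source_dir" --coverage "path_to_source_dir"

pip's --target installs are ordinary importable packages, so putting the directory on PYTHONPATH lets both the coverage package and nose2's coverage plugin resolve it.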
For further configuration, and for generating JUnit and coverage reports, you can use a unittest.cfg file and a .coveragerc to control unit testing and coverage reporting.
Sample unittest.cfg:

[unittest]
plugins = nose2.plugins.junitxml
start-dir =
code-directories =
test-file-pattern = *_test.py
test-method-prefix = t

[junit-xml]
always-on = True
keep_restricted = False
path = nose2-junit.xml
test_fullname = False

[coverage]
always-on = True
coverage =
coverage-config = .coveragerc
coverage-report = term-missing
                  xml
Sample .coveragerc:

[run]
branch = True

[xml]
output = coverage.xml

[report]
show_missing = True
omit =
    /test/*
    /poc1.py
    /bin/__init__.py
Use the command below to run with the config file and generate both the unit test report and the coverage report.
python3 -m nose2 --config "<path to config file>/unittest.cfg"
The error message includes the exact command you need:
pip install nose2[coverage_plugin]

How to integrate AWS CodeBuild with Python pytest-cov code coverage report in buildspec.yaml

I have a Python-based application that consists of:
- Some Python source code that uses the AWS Boto3 SDK to interact with AWS resources
- A Dockerfile that builds upon the AWS public.ecr.aws/lambda/python:3.9 image
- An AWS SAM (Serverless Application Model) template that builds a Lambda to execute the Docker image when the Lambda is invoked
The first part of my build commands in the buildspec.yaml file is intended to execute all unit tests with a code coverage report. This works well.
I was able to integrate the unit test report with AWS CodeBuild using the reports section of the buildspec:
reports:
  pytest_reports:
    files:
      - junitxml-report.xml
    base-directory: ./pytest_reports
    file-format: JUNITXML
This works as expected: I can see that a new "Report group" and the first report were created in CodeBuild after my code pipeline executed. Unfortunately, this only includes the unit test results report.
QUESTION: How do I integrate my Python code coverage report with CodeBuild via the buildspec.yaml file?
I have found some information on this AWS documentation page, but the list of code coverage report formats did not include anything that I can generate from a Python code coverage run. I am still somewhat new to Python development, so I was hoping an expert might have already solved this.
For reference, here is my complete buildspec.yaml file (with some sensitive values scrubbed):
version: 0.2

env:
  variables:
    # Elastic Container Registry (ECR) hosts
    MAIN_REPO: 999999999999.dkr.ecr.us-east-1.amazonaws.com
    DR_REPO: 999999999999.dkr.ecr.us-west-2.amazonaws.com

phases:
  install:
    runtime-versions:
      python: 3.9
  build:
    on-failure: ABORT
    commands:
      # -------------------------------------------------------------------------------------------
      # PART 1 - EXECUTE UNIT TESTS AND CODE COVERAGE ON THE PYTHON SOURCE CODE
      # -------------------------------------------------------------------------------------------
      # install/upgrade build-related modules that CodeBuild will use
      - python3 -m pip install --upgrade pip
      - python3 -m pip install --upgrade pytest
      - python3 -m pip install --upgrade pytest-mock
      - python3 -m pip install --upgrade pytest-cov
      # do a local 'install' of the source code, then run pytest (the company-private PyPI repo must be explicitly included)
      - pip install --extra-index-url https://artifactory.my-company-domain.com/artifactory/api/pypi/private-pypi/simple -e ./the_python_code
      - python3 -m pytest --junitxml=./pytest_reports/junitxml-report.xml --cov-fail-under=69 --cov-report xml:pytest_reports/cov.xml --cov-report html:pytest_reports/cov_html --cov-report term-missing --cov=./the_python_code/src/ ./the_python_code
      # -------------------------------------------------------------------------------------------
      # PART 2 - BUILD THE DOCKER IMAGE AND PUBLISH TO ECR
      # -------------------------------------------------------------------------------------------
      # REFERENCE: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html
      # Pre-authenticate access to Docker Hub and Elastic Container Registry for image pulls and pushes
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin 999999999999.dkr.ecr.us-east-1.amazonaws.com
      - docker image build -t 999999999999.dkr.ecr.us-east-1.amazonaws.com/my-docker-image-tag-name .
      - docker push 999999999999.dkr.ecr.us-east-1.amazonaws.com/my-docker-image-tag-name
      # -------------------------------------------------------------------------------------------
      # PART 3 - BUILD THE SAM PROJECT
      # -------------------------------------------------------------------------------------------
      - printenv
      - echo "-----------------------------------------------------"
      - 'echo "ARTIFACTS_BUCKET_NAME : $ARTIFACTS_BUCKET_NAME"'
      - 'echo "ARTIFACTS_BUCKET_PATH : $ARTIFACTS_BUCKET_PATH"'
      - 'echo "CODEBUILD_KMS_KEY_ID : $CODEBUILD_KMS_KEY_ID"'
      - echo "-----------------------------------------------------"
      - MAIN_TEMPLATE="main-template.yaml"
      - sam build --debug
      - |
        sam package \
          --template-file .aws-sam/build/template.yaml \
          --output-template-file "${MAIN_TEMPLATE}" \
          --image-repository "999999999999.dkr.ecr.us-east-1.amazonaws.com/my-docker-image-tag-name" \
          --s3-bucket "${ARTIFACTS_BUCKET_NAME}" \
          --s3-prefix "${ARTIFACTS_BUCKET_PATH}" \
          --kms-key-id "${CODEBUILD_KMS_KEY_ID}" \
          --force-upload

reports:
  pytest_reports:
    files:
      - junitxml-report.xml
    base-directory: ./pytest_reports
    file-format: JUNITXML

artifacts:
  files:
    - main-template.yaml
    - parameters/*.json
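For the coverage half of the question: CodeBuild's reports section also supports code-coverage report groups, and the Cobertura-style XML that pytest-cov already writes to pytest_reports/cov.xml can be declared with file-format: COBERTURAXML. A sketch (the group name coverage_reports is my own):

reports:
  pytest_reports:
    files:
      - junitxml-report.xml
    base-directory: ./pytest_reports
    file-format: JUNITXML
  coverage_reports:                  # hypothetical report-group name
    files:
      - cov.xml                      # produced by --cov-report xml:pytest_reports/cov.xml
    base-directory: ./pytest_reports
    file-format: COBERTURAXML

CodeBuild then surfaces a second report group of type "Code coverage" next to the JUnit test report group.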

Publish Python project code coverage with pytest in Azure Pipelines

I am trying to publish code coverage results on the pipeline run summary page. This is my pipeline.yaml file:
- bash: |
    pip install .[test]
    pip install pytest pytest-azurepipelines pytest-cov
    pytest --junitxml=junit.xml --cov=./src_dir --cov-report=xml --cov-report=html tests
  displayName: Test

- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov'
The coverage report always shows 0%.
How do I get the correct code coverage results?
Thanks!
I was able to get the correct code coverage using the following in my Azure pipeline "Test" stage:

echo "
[run]
source =
    $(Build.Repository.Name)" > .coveragerc
coverage run --context=mytest -m pytest -v -rA --junitxml=junit.xml --rootdir=tests
coverage report -m --context=mytest
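One thing to note (my addition, not part of the original answer): PublishCodeCoverageResults still needs a Cobertura XML file matching summaryFileLocation, and plain coverage run does not write one, so an explicit export step would follow the report:

# write coverage.xml in Cobertura format for the publish task to pick up
coverage xml -o coverage.xml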

Coverage reports not being processed by codecov anymore

I had set up codecov with GitLab pipelines a while back and was able to see coverage reports in codecov. Since the initial setup, the reports stopped processing after a few commits, and I have not been able to figure out what I'm doing wrong to get the reports processing again.
In GitLab pipelines I use tox and pip install codecov:

test:
  stage: test
  script:
    - pip install circuitpython-build-tools Sphinx sphinx-rtd-theme tox codecov
    - tox
    - codecov -t $CODECOV_TOKEN
  artifacts:
    paths:
      - htmlcov/
In tox I run coverage:

[testenv:coverage]
deps =
    -rrequirements.txt
    -rtest-requirements.txt
commands =
    coverage run -m unittest discover tests/
    coverage html
In codecov I can see that the upload attempts to process, but it fails without much description:
There was an error processing coverage reports.
I've referenced the python tutorials, but can't see what I'm getting wrong.
https://github.com/codecov/codecov-python
https://github.com/codecov/example-python
Looks like maybe this was something on the codecov.io side. I didn't change anything, but this morning the coverage reports started parsing and the badge began working again.
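For anyone hitting persistent failures rather than a transient outage: the job above only produces htmlcov/, which the uploader cannot parse, so a possible hardening step (my suggestion, not from the thread) is to add an XML export to the tox commands:

[testenv:coverage]
deps =
    -rrequirements.txt
    -rtest-requirements.txt
commands =
    coverage run -m unittest discover tests/
    coverage xml
    coverage html

The codecov uploader looks for common report names such as coverage.xml in the project root.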

Conditional Commands in tox? (tox, travis-ci, and coveralls)

tl;dr:
I'm setting up CI for a project of mine, hosted on GitHub, using tox and travis-ci. At the end of the build, I run coveralls to push the coverage reports to coveralls.io. I would like to make this command 'conditional' - for execution only when the tests are run on Travis, not when they are run on my local machine. Is there a way to make this happen?
The details:
The package I'm trying to test is a Python package. I'm using / planning to use the following 'infrastructure' to set up the tests:
- The tests themselves are of the py.test variety.
- The CI scripting, so to speak, is from tox. This lets me run the tests locally, which is rather important to me. I don't want to have to push to GitHub every time I need a test run. I also use numpy and matplotlib in my package, so running an inane number of test cycles on travis-ci seems overly wasteful to me. As such, ditching tox and simply using .travis.yml alone is not an option.
- The CI server is travis-ci.
The relevant test scripts look something like this:
.travis.yml

language: python
python: 2.7
env:
  - TOX_ENV=py27
install:
  - pip install tox
script:
  - tox -e $TOX_ENV

tox.ini

[tox]
envlist = py27

[testenv]
passenv = TRAVIS TRAVIS_JOB_ID TRAVIS_BRANCH
deps =
    pytest
    coverage
    pytest-cov
    coveralls
commands =
    py.test --cov={envsitepackagesdir}/mypackage --cov-report=term --basetemp={envtmpdir}
    coveralls
This file lets me run the tests locally. However, due to the final coveralls call, the test fails in principle, with :
py27 runtests: commands[1] | coveralls
You have to provide either repo_token in .coveralls.yml, or launch via Travis
ERROR: InvocationError: ...coveralls'
This is an expected error. The passenv bit sends along the necessary information from Travis to be able to write to coveralls, and without Travis there to provide this information, the command should fail. I don't want this to push the results to coveralls.io, either. I'd like to have coveralls run only if the test is occurring on travis-ci. Is there any way in which I can have this command run conditionally, or set up a build configuration which achieves the same effect?
I've already tried moving the coveralls portion into .travis.yml, but when that is executed coveralls seems to be unable to locate the appropriate .coverage file to send over. I made various attempts in this direction, none of which resulted in a successful submission to coveralls.io except the combination listed above. The following was what I would have hoped would work, given that when I run tox locally I do end up with a .coverage file where I'd expect it - in the root folder of my source tree.
No submission to coveralls.io
language: python
python: 2.7
env:
  - TOX_ENV=py27
install:
  - pip install tox
  - pip install python-coveralls
script:
  - tox -e $TOX_ENV
after_success:
  - coveralls
An alternative solution would be to prefix the coveralls command with a dash (-) to tell tox to ignore its exit code, as explained in the documentation. This way even failures from coveralls will be ignored, and tox will consider the test execution successful when executed locally.
Using the example configuration above, it would be as follows:

[tox]
envlist = py27

[testenv]
passenv = TRAVIS TRAVIS_JOB_ID TRAVIS_BRANCH
deps =
    pytest
    coverage
    pytest-cov
    coveralls
commands =
    py.test --cov={envsitepackagesdir}/mypackage --cov-report=term --basetemp={envtmpdir}
    - coveralls
I have a similar setup with Travis, tox and coveralls. My idea was to only execute coveralls if the TRAVIS environment variable is set. However, it seems this is not so easy to do, as tox has trouble parsing commands with quotes and ampersands. Additionally, this confused Travis a lot.
Eventually I wrote a simple Python script run_coveralls.py:

#!/usr/bin/env python
import os
from subprocess import call

if __name__ == '__main__':
    if 'TRAVIS' in os.environ:
        rc = call('coveralls')
        raise SystemExit(rc)

In tox.ini, replace your coveralls command with python {toxinidir}/run_coveralls.py.
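Concretely, the commands block from the question's tox.ini would then read (just combining the two snippets above):

commands =
    py.test --cov={envsitepackagesdir}/mypackage --cov-report=term --basetemp={envtmpdir}
    python {toxinidir}/run_coveralls.py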
I am using an environment variable to run additional commands.
tox.ini

commands =
    coverage run runtests.py
    {env:POST_COMMAND:python --version}

.travis.yml

language: python
python:
  - "3.6"
install: pip install tox-travis
script: tox
env:
  - POST_COMMAND=codecov -e TOX_ENV

Now in my local setup, it prints the Python version. When run from Travis, it runs codecov.
Alternative solution if you use a Makefile and don't want a new .py file:

define COVERALL_PYSCRIPT
import os
from subprocess import call
if __name__ == '__main__':
    if 'TRAVIS' in os.environ:
        rc = call('coveralls')
        raise SystemExit(rc)
    print("Not in Travis CI, skipping coveralls")
endef
export COVERALL_PYSCRIPT

coveralls: ## runs coveralls if TRAVIS in env
	@python -c "$$COVERALL_PYSCRIPT"

In tox.ini, add make coveralls to commands.
