pytest coverage report missing code in parallel gitlab pipelines' test execution - python

I am trying to set up test coverage and generate reports for SonarQube; however, there are some inconsistencies in my results.
I have several jobs defined in my gitlab-ci.yml file that run tests in parallel and are supposed to produce a single coverage report:
.run_warehouse_tests: &run_warehouse_tests |
  echo "Running tests for Warehouse"
  python -m pytest --durations=0 ./src/unittest -m eagle --junitxml="junit-test-result-warehouse.xml" --cov-config=.coveragerc --cov-report xml --cov=./src/main --cov-append
  coverage lcov

.run_logistics_tests: &run_logistics_tests |
  echo "Running tests for Logistics"
  python -m pytest --durations=0 ./src/unittest -m eagle --junitxml="junit-test-result-logistics.xml" --cov-config=.coveragerc --cov-report xml --cov=./src/main --cov-append
  coverage lcov

.run_packaging_tests: &run_packaging_tests |
  echo "Running tests for Packaging"
  python -m pytest --durations=0 ./src/unittest -m eagle --junitxml="junit-test-result-packaging.xml" --cov-config=.coveragerc --cov-report xml --cov=./src/main --cov-append
  coverage lcov
.
.
.
test_warehouse:
  stage: test
  tags:
    - dind
  script:
    - *set_common_env_vars
    - *setup_venv
    - *run_warehouse_tests
  artifacts:
    when: always
    paths:
      - infrastructure/junit-test-result-warehouse.xml
      - infrastructure/coverage.xml
      - infrastructure/coverage.lcov
  except:
    - tags

test_logistics:
  stage: test
  tags:
    - dind
  script:
    - *set_common_env_vars
    - *setup_venv
    - *run_logistics_tests
  artifacts:
    when: always
    paths:
      - infrastructure/junit-test-result-logistics.xml
      - infrastructure/coverage.xml
      - infrastructure/coverage.lcov
  except:
    - tags

test_packaging:
  stage: test
  tags:
    - dind
  script:
    - *set_common_env_vars
    - *setup_venv
    - *run_packaging_tests
  artifacts:
    when: always
    paths:
      - infrastructure/junit-test-result-packaging.xml
      - infrastructure/coverage.xml
      - infrastructure/coverage.lcov
  except:
    - tags
I also have a .coveragerc file:
[run]
parallel = true
However, the report that gets generated is missing some code. For some of the files, coverage is shown only for the imports and function definitions, but not for the bodies of the functions themselves.
Does anyone have experience with this, or an idea of why some code is not covered? I am thinking this might be because of the parallel execution in the GitLab pipeline, and maybe some tests finish before others? Although I'm using --cov-append to make sure that only one report is generated...
I have tried various things, like using coverage directly instead of pytest, configuring tox.ini, and setting various flags on the pytest command, but so far without success.
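For reference, a minimal sketch of how the per-job data files produced with parallel = true could be merged into one report in a downstream job; the combine_coverage job name, its report stage, and exporting the .coverage.* data files from the test jobs as artifacts are assumptions for illustration, not part of the current pipeline:
combine_coverage:
  stage: report   # hypothetical stage that runs after the three test jobs
  needs: [test_warehouse, test_logistics, test_packaging]
  script:
    - pip install coverage
    # assumes each test job also kept its .coverage.* data files as artifacts,
    # so they are downloaded into the working directory of this job
    - coverage combine
    - coverage xml -o coverage.xml
    - coverage lcov
  artifacts:
    paths:
      - coverage.xml
      - coverage.lcov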

Related

Cobertura coverage report not showing in MR's diff

I have recently added the cobertura coverage report to my repository, but it still does not show the coverage in an MR's diff.
Here is the job of my .gitlab-ci.yml that generates the coverage report:
coverage-report:
  stage: coverage
  script:
    - tox -e coverage-report
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    name: "coverage"
    paths:
      - public/coverage
    expire_in: 1 week
    reports:
      cobertura: public/coverage/coverage.xml
    expose_as: "coverage"
And here is my tox.ini:
[tox]
envlist =
    coverage-report
minversion = 3.4

[testenv:coverage-report]
basepython = python2.7-32
skip_install = True
deps =
    coverage
commands =
    coverage run -m pytest -s -vv -x --junitxml=public/test-report.xml tests/
    coverage report
    coverage html
    coverage xml
I am pretty sure the report itself is generated correctly, because not only does its XML exist under public/coverage (which I can see through the published artifacts), but the coverage % summary also shows up in the job and in the MR. Yet the coverage still does not show up in the MR's diff. I also tried opening the Network tab of my browser and looking for the merge_requests/26/coverage_reports.json HTTP request, and that comes back empty (more specifically, the response is {"files":{}}), which I do not think is supposed to happen.
I am using Python 2.7-32 and Coverage.py to get the report. My GitLab is self-hosted, version 14.9.5-ee. Here is a link to download my coverage.xml. It is not the complete coverage, but it shows 2 files which appear in the MR's diff but have no coverage information.
Declare the coverage report in this form:
artifacts:
  expire_in: 1 week
  paths:
    - public/coverage
  reports:
    coverage_report:
      coverage_format: cobertura
      path: public/coverage/coverage.xml
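Applied to the coverage-report job from the question, that would look roughly like this (everything except the artifacts:reports section is copied from the question unchanged):
coverage-report:
  stage: coverage
  script:
    - tox -e coverage-report
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    name: "coverage"
    expire_in: 1 week
    paths:
      - public/coverage
    reports:
      coverage_report:
        coverage_format: cobertura
        path: public/coverage/coverage.xml
    expose_as: "coverage"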
Also, it can take some time for the coverage to be reflected in the diff: GitLab processes the report in a background job, and the coverage should show up once that job has completed. It sometimes takes a while for me.

Pipeline job fails after tests complete

Here is the job script:
image: python:3.7.9

stages:
  - test

run_ui_tests:
  tags:
    - est
  stage: test
  before_script:
    - echo "Preparing environment..."
    - python --version
    - pip install -r requirements.txt
  script:
    - echo "Executing ui tests with Pytest..."
    - cd cio_tests
    - pytest -v authorize_test.py
  after_script:
    - echo "Cleaning test catalogue..."
The job fails after all the tests have completed:
What is the reason for this behaviour? After all, the tests completed, and one of them found a bug.
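For context: pytest exits with a non-zero status code when any test fails, and GitLab marks a job as failed whenever its script exits with a non-zero code, so the red job is the expected outcome of the failing test. A minimal sketch of letting the job report failing tests without failing the whole pipeline; allow_failure and the JUnit report are additions for illustration, not part of the original job:
run_ui_tests:
  stage: test
  script:
    - cd cio_tests
    # --junitxml is added so the results can still be published when tests fail
    - pytest -v authorize_test.py --junitxml=report.xml
  allow_failure: true   # the job becomes an allowed failure instead of failing the pipeline
  artifacts:
    when: always
    reports:
      junit: cio_tests/report.xml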

Sonarqube zero coverage with python tests

I am trying to run SonarQube in a GitLab pipeline together with pytest, and it does not report any coverage.
According to the logs it finds the coverage file, but it still shows 0% coverage.
I am quite desperate, as I have already tried multiple solutions and combinations.
The GitLab pipeline is below (the commented-out lines are variations I ran with and without while testing):
Unit tests:
  image: python:3.9-slim
  stage: test
  before_script:
    - python3 -V
    - pip install --upgrade setuptools
    - pip install ez_setup
    # - pip install unroll
    # - pip install -r requirements.txt
    - pip install pytest pytest-cov
    - pip install pytest
    - pip install pytest-metadata
  script:
    - export PYTHONUNBUFFERED=1
    # - python3 -m pytest
    # - coverage run -m pytest
    # - coverage report
    # - coverage run -m pytest -rap --junitxml coverage.xml
    # - coverage xml -i
    - pytest -v --cov --cov-report=xml --cov-report=html
    # - coverage lcov
    - python3 -V
    - ls -a
  coverage: /All\sfiles.*?\s+(\d+.\d+)/
  artifacts:
    # reports:
    #   cobertura: cobertura-coverage.xml
    paths:
      # - coverage.lcov
      - coverage.xml
      - .coverage
  only:
    - merge_requests
    - master
    - development

sonarqube-check:
  stage: analysis
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
    GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - ls -a
    - ls -a .coverage
    - sonar-scanner -X
  allow_failure: true
  only:
    - merge_requests
    - main
    - main
The sonar-project.properties file:
sonar.projectKey=XXXXX
sonar.qualitygate.wait=true
sonar.language=py
sonar.python.version=3.9
sonar.projectVersion=1.0
sonar.core.codeCoveragePlugin=cobertura
sonar.python.coverage.reportPaths=coverage.xml
sonar.python.xunit.reportPaths=coverage.xml
sonar.verbose=true
sonar.sources=src
sonar.tests=src
sonar.test.inclusions=tests/*.py, src/*.py
The folder structure is just two folders, tests and src, with .py files in each.
The logs are:
16:08:59.221 INFO: Sensor Cobertura Sensor for Python coverage [python]
16:08:59.221 DEBUG: Using pattern 'coverage.xml' to find reports
16:08:59.251 INFO: Python test coverage
16:08:59.255 INFO: Parsing report '/correctpath/coverage.xml'
16:08:59.373 DEBUG: 'src/delta.py' generated metadata as test with charset 'UTF-8'
16:08:59.376 DEBUG: 'src/invoice.py' generated metadata as test with charset 'UTF-8'
16:08:59.383 DEBUG: 'src/portfolio.py' generated metadata as test with charset 'UTF-8'
16:08:59.395 DEBUG: Saving coverage measures for file 'src/p1.py'
16:08:59.420 DEBUG: Saving coverage measures for file 'src/__init__.py'
16:08:59.424 DEBUG: 'src/__init__.py' generated metadata as test with charset 'UTF-8'
16:08:59.425 DEBUG: Saving coverage measures for file 'src/invoice.py'
16:08:59.426 DEBUG: Saving coverage measures for file 'src/delta.py'
16:08:59.428 INFO: Sensor Cobertura Sensor for Python coverage [python] (done) | time=207ms
16:08:59.429 INFO: Sensor JaCoCo XML Report Importer [jacoco]
16:08:59.435 INFO: 'sonar.coverage.jacoco.xmlReportPaths' is not defined. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml
16:08:59.436 INFO: No report imported, no coverage information will be imported by JaCoCo XML Report Importer
The pipeline passes, but the coverage is 0%.
I tried generating the report with both the coverage and pytest-cov approaches, in case one of them produces a coverage.xml in a format SonarQube does not accept.
Thanks for any help!
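For what it's worth, the DEBUG lines above show files such as src/delta.py and src/invoice.py being indexed as test files ("generated metadata as test"), and SonarQube does not report coverage for files it treats as test code. A sketch of a sonar-project.properties that keeps production code and tests separate, based on the folder structure described in the question; whether this is the actual cause here is an assumption:
sonar.projectKey=XXXXX
sonar.python.version=3.9
# point sources and tests at different folders so that src/*.py is indexed
# as main code and can receive the coverage recorded in coverage.xml
sonar.sources=src
sonar.tests=tests
sonar.python.coverage.reportPaths=coverage.xml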

How to implement parallel pytesting with code coverage in Azure CI

I was able to implement parallel pytesting in Azure CI. See this repo for reference.
But code coverage is still not working as expected: it works for each job individually, but the coverage from all the test jobs is not combined.
Here is the Azure config file I am using:
# Python test sample
# Sample that demonstrates how to leverage the parallel jobs capability of Azure Pipelines to run python tests in parallel.
# Parallelizing tests helps in reducing the time spent in testing and can speed up the pipelines significantly.
variables:
  disable.coverage.autogenerate: 'true'

jobs:
- job: 'ParallelTesting'
  pool:
    vmImage: 'windows-latest'
  strategy:
    parallel: 3
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.7'
      addToPath: true
      architecture: 'x64'
  - script: |
      python -m pip install --upgrade pip setuptools wheel
    displayName: 'Install tools'
  - script: 'pip install -r $(System.DefaultWorkingDirectory)/requirements.txt'
    displayName: 'Install dependencies'
  - powershell: ./DistributeTests.ps1
    displayName: 'PowerShell Script to distribute tests'
  - script: |
      pip install pytest-azurepipelines pytest-cov
    displayName: 'Install Pytest dependencies'
  - script: |
      echo $(pytestfiles)
      pytest $(pytestfiles) --junitxml=junit/$(pytestfiles)-results.xml --cov=. --cov-report=xml --cov-report=html
    displayName: 'Pytest'
    continueOnError: true
  - task: PublishTestResults@2
    displayName: 'Publish Test Results **/*-results.xml'
    inputs:
      testResultsFiles: '**/*-results.xml'
      testRunTitle: $(python.version)
    condition: succeededOrFailed()
  - task: PublishCodeCoverageResults@1
    inputs:
      codeCoverageTool: Cobertura
      summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
      reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov'
    displayName: 'Publish code coverage results'
And the powershell script to distribute tests:
<#
.SYNOPSIS
    Distribute the tests in a VSTS pipeline across multiple agents
.DESCRIPTION
    This script slices test files across multiple agents for faster execution.
    We search for a specific file pattern (in this example test*) and slice the files according to the agent number.
    If we encounter multiple files [file1..file10] and have 2 agents, agent1 executes the odd-numbered files while agent2 executes the even-numbered files.
    For detailed slicing info: https://learn.microsoft.com/en-us/vsts/pipelines/test/parallel-testing-any-test-runner
    We use JUnit style test results to publish the test reports.
#>
$tests = Get-ChildItem .\ -Filter "test*"             # search for test files with a specific pattern
$totalAgents = [int]$Env:SYSTEM_TOTALJOBSINPHASE      # standard VSTS variable available during parallel execution; total number of parallel jobs
$agentNumber = [int]$Env:SYSTEM_JOBPOSITIONINPHASE    # current job position
$testCount = $tests.Count

# the conditions below cover the case where the pipeline runs on a single agent (no parallel configuration)
if ($totalAgents -eq 0) {
    $totalAgents = 1
}
if (!$agentNumber -or $agentNumber -eq 0) {
    $agentNumber = 1
}

Write-Host "Total agents: $totalAgents"
Write-Host "Agent number: $agentNumber"
Write-Host "Total tests: $testCount"

$testsToRun = @()

# slice the test files so that each agent gets a unique set of files to execute
For ($i = $agentNumber; $i -le $testCount;) {
    $file = $tests[$i - 1]
    $testsToRun = $testsToRun + $file
    Write-Host "Added $file"
    $i = $i + $totalAgents
}

# join all test files separated by spaces; pytest runs multiple test files as: pytest test1.py test2.py test3.py
$testFiles = $testsToRun -Join " "
Write-Host "Test files $testFiles"

# write these files into a variable so that we can run them with pytest in a subsequent task
Write-Host "##vso[task.setvariable variable=pytestfiles;]$testFiles"
If you take a look at the pipeline, you can see that the pytest runs pass fine and that a code coverage report is created for each job. I believe the problem lies in consolidating the code coverage reports into a single one.
Looking at the summary of the last run, you can see that there is only one attachment per run, most likely the one from the last executed job.
In this case test_chrome.py-results.xml.
Unless I am missing something, you need to call coverage combine somewhere in your pipeline (at the moment you don't) and then upload the combined coverage.
❯ coverage --help
Coverage.py, version 6.4 with C extension
Measure, collect, and report on code coverage in Python programs.

usage: coverage <command> [options] [args]

Commands:
    annotate    Annotate source files with execution information.
    combine     Combine a number of data files.
    ...
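A minimal sketch of what that could look like as an extra step at the end of the job; it assumes the .coverage data files from the parallel agents have already been collected under distinct names on the agent running this step (for example via pipeline artifacts), which is not shown here:
- script: |
    pip install coverage
    coverage combine
    coverage xml -o coverage.xml
  displayName: 'Combine coverage data'
The existing PublishCodeCoverageResults@1 task would then point at the combined coverage.xml instead of the per-job reports.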
As for the PowerShell script that distributes the pytest files across workers: you could instead instruct pytest to do the distribution for you with pytest_collection_modifyitems in conftest.py, or you could install pytest-azure-devops.

nosetests - excluding a dir from the coverage report

I have my python app structured as follows:
proj
- comp1
- comp2
tests
- comp1
- comp2
other
- contains some python code
I am running nosetests as following:
nosetests --with-coverage --cover-package=proj --exclude-dir=other -v tests
However, in the coverage report that nosetests prints at the end, I still see entries from 'other'. How do I exclude 'other' from the coverage report?
Quite possibly your tests or proj refer to the other python code, and as a result it makes it into the report. You can see the coverage plugin's debug output with:
nosetests --with-coverage --cover-package=proj -vv tests -l nose.plugins.cover
