Publish Python project code coverage with pytest in Azure Pipelines - python

I am trying to publish code coverage results on the pipeline run summary page. This is my pipeline.yaml file:
- bash: |
    pip install .[test]
    pip install pytest pytest-azurepipelines pytest-cov
    pytest --junitxml=junit.xml --cov=./src_dir --cov-report=xml --cov-report=html tests
  displayName: Test
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov'
The coverage report always shows 0%.
How do I get the correct code coverage results?
Thanks!

I was able to get the correct code coverage using the following in my Azure pipeline "Test" stage:
echo "
[run]
source =
    $(Build.Repository.Name)" > .coveragerc
coverage run --context=mytest -m pytest -v -rA --junitxml=junit.xml --rootdir=tests
coverage report -m --context=mytest

How to write if condition in python for testcase

script: |
  python3 -m coverage erase
  python3 -m coverage run -m pytest tests/ -v --junitxml=junit/test-results.xml --capture=${{ parameters.pytest_capture }}
  TEST_RESULT=$? # Create coverage report even if a test case failed
  python3 -m coverage html
  python3 -m coverage xml
  python3 -m coverage report --show-missing --fail-under=${{ parameters.covFailUnder }} && exit $TEST_RESULT
Here, in python3 -m coverage run -m pytest tests/ -v --junitxml=junit/test-results.xml --capture=${{ parameters.pytest_capture }}, tests are run from the tests folder only. I need to make sure both the tests and unittests folders are checked.
How do I write the script so it reads both tests and unittests?
I am stuck on this; please provide a solution.
Here is an example of how to write an if condition in a test case:
def test():
    result = your_function()
    if result == expected_result:
        print("Passed")
    else:
        print("Failed")
Also, this may have been answered previously: What's the efficient way to test if-elif-else conditions in python
To run tests from both tests/ and unittests/ folders, you can modify the script as follows:
python3 -m coverage erase
python3 -m coverage run -m pytest tests/ unittests/ -v --junitxml=junit/test-results.xml --capture=${{ parameters.pytest_capture }}
TEST_RESULT=$? # Create coverage report even if a test case failed
python3 -m coverage html
python3 -m coverage xml
python3 -m coverage report --show-missing --fail-under=${{ parameters.covFailUnder }} && exit $TEST_RESULT
Here, the argument tests/ unittests/ specifies that pytest should search for tests in both tests/ and unittests/ folders.

Is there a way to map python nose2 to coverage plugin which is installed in custom location?

I have installed nose2 with the following command:
pip3 install nose2
And I have installed the coverage in the custom path with the following command:
pip3 install --target=/tmp/coverage_pkg coverage
I want to execute the test cases and generate the coverage report. Is there a way to map my coverage plugin installed in custom path to the nose2?
I tried to execute the below command:
nose2 --with-coverage
I got the following output:
Warning: you need to install "coverage_plugin" extra requirements to use this plugin. e.g. `pip install nose2[coverage_plugin]`
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I want to generate the coverage report by using the coverage package installed in custom path.
Can someone please explain if I can achieve this?
I was able to achieve this using the following command:
cd <custom location where packages are installed>
python3 -m nose2 --with-coverage -s "path_to_source_dir" --coverage "path_to_source_dir"
You need to stay in the location where nose2 and coverage are installed (the custom dir, e.g. /tmp/coverage_pkg).
For further configuration, and to generate a JUnit report and a coverage report, you can use a unittest.cfg file to control unit testing and a .coveragerc file to control the coverage report.
Sample unittest.cfg:
[unittest]
plugins = nose2.plugins.junitxml
start-dir =
code-directories =
test-file-pattern = *_test.py
test-method-prefix = t
[junit-xml]
always-on = True
keep_restricted = False
path = nose2-junit.xml
test_fullname = False
[coverage]
always-on = True
coverage =
coverage-config = .coveragerc
coverage-report = term-missing
                  xml
Sample .coveragerc:
[run]
branch = True
[xml]
output = coverage.xml
[report]
show_missing = True
omit =
    /test/*
    /poc1.py
    /bin/__init__.py
Use the below command for using config file and generate unittest report and coverage report.
python3 -m nose2 --config "<path to config file>/unittest.cfg"
The error message includes the exact command you need:
pip install nose2[coverage_plugin]

How to integrate AWS CodeBuild with Python pytest-cov code coverage report in buildspec.yaml

I have a Python-based application that consists of:
Some Python source code that uses the AWS Boto3 SDK to interact with AWS resources
A Dockerfile that builds upon the AWS public.ecr.aws/lambda/python:3.9 image
An AWS SAM (Serverless Application Model) template that builds a lambda to execute the Docker image when the lambda is invoked
The first part of my build commands in the buildspec.yaml file is intended to execute all unit tests with a code coverage report. This works well.
I was able to integrate the unit test report with AWS CodeBuild using the reports section of the buildspec:
reports:
  pytest_reports:
    files:
      - junitxml-report.xml
    base-directory: ./pytest_reports
    file-format: JUNITXML
This works as expected. I can see that a new "Report group" and the first report was created in CodeBuild after my code pipeline executed. Unfortunately, this only includes the unit test results report.
QUESTION: How do I integrate my Python code coverage report with CodeBuild via the buildspec.yaml file?
I have found some information on this AWS documentation page, but the list of code coverage report formats did not include anything that I can generate from a Python code coverage run. I am still somewhat new to Python development, so I was hoping an expert may have already solved this.
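For what it's worth, CodeBuild's code-coverage report groups do accept Cobertura XML (file-format: COBERTURAXML), which is the format pytest-cov's --cov-report xml option emits, so a second report group along these lines may be all that is needed (the coverage_reports group name here is illustrative, not from the original buildspec):

```yaml
reports:
  pytest_reports:
    files:
      - junitxml-report.xml
    base-directory: ./pytest_reports
    file-format: JUNITXML
  coverage_reports:              # illustrative name for a coverage report group
    files:
      - cov.xml                  # matches --cov-report xml:pytest_reports/cov.xml
    base-directory: ./pytest_reports
    file-format: COBERTURAXML
```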
For reference, here is my complete buildspec.yaml file (with some sensitive values scrubbed):
version: 0.2
env:
  variables:
    # Elastic Container Registry (ECR) hosts
    MAIN_REPO: 999999999999.dkr.ecr.us-east-1.amazonaws.com
    DR_REPO: 999999999999.dkr.ecr.us-west-2.amazonaws.com
phases:
  install:
    runtime-versions:
      python: 3.9
  build:
    on-failure: ABORT
    commands:
      # -------------------------------------------------------------------------------------------
      # PART 1 - EXECUTE UNIT TESTS AND CODE COVERAGE ON THE PYTHON SOURCE CODE
      # -------------------------------------------------------------------------------------------
      # install/upgrade build-related modules that CodeBuild will use
      - python3 -m pip install --upgrade pip
      - python3 -m pip install --upgrade pytest
      - python3 -m pip install --upgrade pytest-mock
      - python3 -m pip install --upgrade pytest-cov
      # do local user 'install' of source code, then run pytest (company-private Pypi repo must be explicitly included)
      - pip install --extra-index-url https://artifactory.my-company-domain.com/artifactory/api/pypi/private-pypi/simple -e ./the_python_code
      - python3 -m pytest --junitxml=./pytest_reports/junitxml-report.xml --cov-fail-under=69 --cov-report xml:pytest_reports/cov.xml --cov-report html:pytest_reports/cov_html --cov-report term-missing --cov=./the_python_code/src/ ./the_python_code
      # -------------------------------------------------------------------------------------------
      # PART 2 - BUILD THE DOCKER IMAGE AND PUBLISH TO ECR
      # -------------------------------------------------------------------------------------------
      # REFERENCE: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html
      # Pre-authenticate access to Docker Hub and Elastic Container Registry for image pulls and pushes
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin 999999999999.dkr.ecr.us-east-1.amazonaws.com
      - docker image build -t 999999999999.dkr.ecr.us-east-1.amazonaws.com/my-docker-image-tag-name .
      - docker push 999999999999.dkr.ecr.us-east-1.amazonaws.com/my-docker-image-tag-name
      # -------------------------------------------------------------------------------------------
      # PART 3 - BUILD THE SAM PROJECT
      # -------------------------------------------------------------------------------------------
      - printenv
      - echo "-----------------------------------------------------"
      - 'echo "ARTIFACTS_BUCKET_NAME : $ARTIFACTS_BUCKET_NAME"'
      - 'echo "ARTIFACTS_BUCKET_PATH : $ARTIFACTS_BUCKET_PATH"'
      - 'echo "CODEBUILD_KMS_KEY_ID : $CODEBUILD_KMS_KEY_ID"'
      - echo "-----------------------------------------------------"
      - MAIN_TEMPLATE="main-template.yaml"
      - sam build --debug
      - |
        sam package \
          --template-file .aws-sam/build/template.yaml \
          --output-template-file "${MAIN_TEMPLATE}" \
          --image-repository "999999999999.dkr.ecr.us-east-1.amazonaws.com/my-docker-image-tag-name" \
          --s3-bucket "${ARTIFACTS_BUCKET_NAME}" \
          --s3-prefix "${ARTIFACTS_BUCKET_PATH}" \
          --kms-key-id "${CODEBUILD_KMS_KEY_ID}" \
          --force-upload
reports:
  pytest_reports:
    files:
      - junitxml-report.xml
    base-directory: ./pytest_reports
    file-format: JUNITXML
artifacts:
  files:
    - main-template.yaml
    - parameters/*.json

How to implement parallel pytesting with code coverage in Azure CI

I was able to implement parallel pytest runs in Azure CI. See this repo for reference.
But code coverage is still not working as expected.
Each job produces coverage individually, but the coverage from all the test jobs is not being combined.
Here is the Azure config file I am using:
# Python test sample
# Sample that demonstrates how to leverage the parallel jobs capability of Azure Pipelines to run python tests in parallel.
# Parallelizing tests helps in reducing the time spent in testing and can speed up the pipelines significantly.
variables:
  disable.coverage.autogenerate: 'true'
jobs:
- job: 'ParallelTesting'
  pool:
    vmImage: 'windows-latest'
  strategy:
    parallel: 3
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.7'
      addToPath: true
      architecture: 'x64'
  - script: |
      python -m pip install --upgrade pip setuptools wheel
    displayName: 'Install tools'
  - script: 'pip install -r $(System.DefaultWorkingDirectory)/requirements.txt'
    displayName: 'Install dependencies'
  - powershell: ./DistributeTests.ps1
    displayName: 'PowerShell Script to distribute tests'
  - script: |
      pip install pytest-azurepipelines pytest-cov
    displayName: 'Install Pytest dependencies'
  - script: |
      echo $(pytestfiles)
      pytest $(pytestfiles) --junitxml=junit/$(pytestfiles)-results.xml --cov=. --cov-report=xml --cov-report=html
    displayName: 'Pytest'
    continueOnError: true
  - task: PublishTestResults@2
    displayName: 'Publish Test Results **/*-results.xml'
    inputs:
      testResultsFiles: '**/*-results.xml'
      testRunTitle: $(python.version)
    condition: succeededOrFailed()
  - task: PublishCodeCoverageResults@1
    inputs:
      codeCoverageTool: Cobertura
      summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
      reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov'
    displayName: 'Publish code coverage results'
And the powershell script to distribute tests:
<#
.SYNOPSIS
Distribute the tests in VSTS pipeline across multiple agents
.DESCRIPTION
This script slices tests files across multiple agents for faster execution.
We search for specific type of file structure (in this example test*), and slice them according to agent number
If we encounter multiple files [file1..file10] and have 2 agents, agent1 executes the odd-numbered files while agent2 executes the even-numbered files
For detailed slicing info: https://learn.microsoft.com/en-us/vsts/pipelines/test/parallel-testing-any-test-runner
We use JUnit style test results to publish the test reports.
#>
$tests = Get-ChildItem .\ -Filter "test*" # search for test files with specific pattern.
$totalAgents = [int]$Env:SYSTEM_TOTALJOBSINPHASE # standard VSTS variables available using parallel execution; total number of parallel jobs running
$agentNumber = [int]$Env:SYSTEM_JOBPOSITIONINPHASE # current job position
$testCount = $tests.Count
# below conditions are used if parallel pipeline is not used. i.e. pipeline is running with single agent (no parallel configuration)
if ($totalAgents -eq 0) {
$totalAgents = 1
}
if (!$agentNumber -or $agentNumber -eq 0) {
$agentNumber = 1
}
Write-Host "Total agents: $totalAgents"
Write-Host "Agent number: $agentNumber"
Write-Host "Total tests: $testCount"
$testsToRun = @()
# slice test files to make sure each agent gets unique test file to execute
For ($i=$agentNumber; $i -le $testCount;) {
$file = $tests[$i-1]
$testsToRun = $testsToRun + $file
Write-Host "Added $file"
$i = $i + $totalAgents
}
# join all test files separated by spaces. pytest runs multiple test files in the following format: pytest test1.py test2.py test3.py
$testFiles = $testsToRun -Join " "
Write-Host "Test files $testFiles"
# write these files into variable so that we can run them using pytest in subsequent task.
Write-Host "##vso[task.setvariable variable=pytestfiles;]$testFiles"
If you take a look at the pipeline, you can see that the pytest runs pass fine and that each job creates its own coverage report. I believe the problem lies in consolidating the coverage reports into a single one.
Looking at the summary of the last run, you can see there is only one attachment per run, probably from the last executed job.
In this case test_chrome.py-results.xml.
Unless I'm missing something, you need to call coverage combine somewhere in your pipeline (at the moment you don't) and then upload the combined coverage.
❯ coverage --help
Coverage.py, version 6.4 with C extension
Measure, collect, and report on code coverage in Python programs.
usage: coverage <command> [options] [args]
Commands:
annotate Annotate source files with execution information.
combine Combine a number of data files.
...
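A minimal sketch of that, assuming each agent runs coverage in parallel mode and a final job downloads every agent's data file into one directory before combining (step layout and artifact handling are illustrative, not from the original pipeline):

```yaml
# per-agent step: parallel mode (-p) writes .coverage.<host>.<pid> so data files don't collide
- script: |
    coverage run -p -m pytest $(pytestfiles) --junitxml=junit/$(pytestfiles)-results.xml
# ...publish the resulting .coverage.* data file as a pipeline artifact...

# final job, after downloading all agents' data files into the working directory:
- script: |
    coverage combine
    coverage xml
# then run PublishCodeCoverageResults@1 against the single combined coverage.xml
```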
With regards to the PowerShell script that distributes pytest files across workers: you could instead instruct pytest to do the slicing itself with pytest_collection_modifyitems in conftest.py, or you could install pytest-azure-devops
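As a sketch of the conftest.py approach, assuming the same SYSTEM_TOTALJOBSINPHASE / SYSTEM_JOBPOSITIONINPHASE variables the PowerShell script reads (this is not from the original answer):

```python
# conftest.py (sketch): slice collected tests across Azure Pipelines parallel agents
import os


def slice_for_agent(items, agent_number, total_agents):
    """Keep every total_agents-th item, starting at agent_number (1-based)."""
    return items[agent_number - 1::total_agents]


def pytest_collection_modifyitems(config, items):
    # Azure Pipelines sets these for parallel jobs; default to a single agent
    total = int(os.environ.get("SYSTEM_TOTALJOBSINPHASE", "1") or "1")
    agent = int(os.environ.get("SYSTEM_JOBPOSITIONINPHASE", "1") or "1")
    keep = slice_for_agent(items, agent, total)
    deselected = [item for item in items if item not in keep]
    if deselected:
        # report the skipped items as deselected and run only this agent's slice
        config.hook.pytest_deselected(items=deselected)
        items[:] = keep
```

Each agent then runs plain pytest and executes only its own slice of the collected tests.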

Coverage reports not being processed by codecov anymore

I had setup codecov with gitlab pipelines a while back and was able to see coverage reports in codecov. Since the initial setup the reports stopped processing after a few commits, and I have not been able to figure out what I'm doing wrong to get the reports processing again.
In gitlab pipelines I use tox and pip install codecov:
test:
  stage: test
  script:
    - pip install circuitpython-build-tools Sphinx sphinx-rtd-theme tox codecov
    - tox
    - codecov -t $CODECOV_TOKEN
  artifacts:
    paths:
      - htmlcov/
In tox I run coverage:
[testenv:coverage]
deps = -rrequirements.txt
       -rtest-requirements.txt
commands = coverage run -m unittest discover tests/
           coverage html
In codecov I can see where the upload attempts to process, but it fails without much description:
There was an error processing coverage reports.
I've referenced the python tutorials, but can't see what I'm getting wrong.
https://github.com/codecov/codecov-python
https://github.com/codecov/example-python
Looks like this was something on the codecov.io side. I didn't change anything, but the coverage reports started parsing and the badge started working again this morning.
