I am adding a pipeline step to run unit tests. The test suite is small and should execute quickly, yet the Run PyTest task is timing out. I set the timeout to 15 minutes, which should be far more than enough time for the suite to run (it takes 2.5 seconds in the IDE).
The logs show the last command being run is:
python.exe -m pytest --color=no -q --test-run-title="Unit Tests" --basetemp="D:\a\1\s" --junitprefix="py%winver%" --cov=test_module_A --cov=test_module_B --cov-report=xml --cov-report=html --pylint "D:\a\1\s\tests\test_module_A.py" "D:\a\1\s\tests\test_module_B.py"
The YAML for my Run PyTest task:
steps:
- task: stevedower.python.PyTest.PyTest@2
  displayName: 'Run PyTest'
  inputs:
    title: 'Unit Tests'
    testroot: tests
    patterns: 'test_*.py'
    resultfile: tests
    doctests: false
    pylint: true
    codecoverage: 'test_module_A, test_module_B'
  timeoutInMinutes: 15
It seems that the tests are not actually executing despite the pytest command being run. I am not aware of any additional logs that I should be looking at for more detailed test run information.
My tests were (unbeknownst to me) attempting to log in to Azure, so the test run was hanging at the login prompt. Be sure to mock out the Azure ML Workspace object, not just the GetWorkspace() calls.
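For example, a minimal sketch with unittest.mock; the module path my_module and the function run_training are placeholders for your own code, and your real tests may need more patch targets:

from unittest import mock

# Patch the Workspace class where the code under test imports it, so no code path
# can trigger an interactive Azure login during the test run. "my_module" is a placeholder.
@mock.patch("my_module.Workspace")
def test_run_training_without_azure(fake_workspace_cls):
    fake_workspace_cls.from_config.return_value = mock.MagicMock(name="fake-workspace")
    from my_module import run_training  # hypothetical function under test
    run_training()
    fake_workspace_cls.from_config.assert_called_once()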
In the corresponding pipeline job, enabling:
Allow scripts to access the OAuth token
solved the issue for me.
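If the job is defined in YAML rather than in the classic editor, the rough equivalent of that checkbox (a sketch, not verified against this exact task) is to map the access token into the step's environment yourself, keeping the inputs from the question unchanged:

- task: stevedower.python.PyTest.PyTest@2
  displayName: 'Run PyTest'
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)  # exposes the OAuth token to scripts run by this step
  inputs:
    # ... same inputs as above ...
  timeoutInMinutes: 15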
I have multiple tests that run as part of a GitLab pipeline. The issue is that the test run has become very long, and when a test in the middle fails, one needs to wait until the end of the run to see the output of the failed test.
I'm using the following:
pytest -v -m -rxs --stepwise
Is there a way to print the failed test's traceback right after it fails, without doing the same for the passed tests?
I have searched for this all over the internet and couldn't find an answer.
The output of the job is something like this:
test/test_something.py:25: AssertionError
========================= 1 failed, 64 passed in 2.10s =========================
Job succeeded
my .gitlab-ci.yml file for the test:
run_tests:
  stage: test
  tags:
    - tests
  script:
    - echo "Running tests"
    - ./venv/bin/python -m pytest
I'm using the shell executor.
Has anyone faced this problem before? As I understand it, GitLab CI depends on the exit code of pytest and the job should fail if that exit code is not zero, but in this case pytest should have exit code 1 since a test failed.
It's not something about your gitlab-ci script but rather about the pytest script (the script or module you are using to run your tests).
Below I've included an example, assuming you might use something like the Flask CLI to manage your tests.
You can raise SystemExit with the exit code: if anything other than 0 is returned, the process fails. In a nutshell, GitLab stages succeed if the exit code returned is 0.
pytest.main() only runs the tests and returns the exit code as a value; it does not exit the process with it. You can handle this in your code:
Your manage.py (assuming you are using the Flask CLI) will look something like this:
import pytest
import click
from flask import Flask

app = Flask(__name__)

@app.cli.command("tests")
@click.argument("option", required=False)
def run_test_with_option(option: str = None):
    if option is None:
        raise SystemExit(pytest.main())
Note how the above code raises SystemExit and defines a Flask CLI command named tests. To run it, you can simply add the following to your gitlab-ci script:
run_tests:
  stage: test
  tags:
    - tests
  variables:
    FLASK_APP: manage.py
  script:
    - echo "Running tests"
    - source ./venv/bin/activate
    - flask tests
The command that runs your tests is flask tests, which raises SystemExit as shown.
FYI: you may not be using the Flask CLI to manage your tests and may simply want to run a plain test script. In that case, this answer might also help.
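For illustration, the same idea as a plain runner script (a sketch; run_tests.py is a hypothetical filename):

# run_tests.py - exit the process with whatever exit code pytest reports,
# so the GitLab job fails whenever a test fails.
import sys
import pytest

if __name__ == "__main__":
    sys.exit(pytest.main(sys.argv[1:]))

The script line in .gitlab-ci.yml would then be ./venv/bin/python run_tests.py.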
I was able to implement parallel pytest runs in Azure CI. See this repo for reference.
But code coverage is still not working as expected.
It works for each job individually, but the coverage from all the tests is not being combined.
Here is the Azure config file I am using:
# Python test sample
# Sample that demonstrates how to leverage the parallel jobs capability of Azure Pipelines to run python tests in parallel.
# Parallelizing tests helps in reducing the time spent in testing and can speed up the pipelines significantly.

variables:
  disable.coverage.autogenerate: 'true'

jobs:
- job: 'ParallelTesting'
  pool:
    vmImage: 'windows-latest'
  strategy:
    parallel: 3
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.7'
      addToPath: true
      architecture: 'x64'
  - script: |
      python -m pip install --upgrade pip setuptools wheel
    displayName: 'Install tools'
  - script: 'pip install -r $(System.DefaultWorkingDirectory)/requirements.txt'
    displayName: 'Install dependencies'
  - powershell: ./DistributeTests.ps1
    displayName: 'PowerShell Script to distribute tests'
  - script: |
      pip install pytest-azurepipelines pytest-cov
    displayName: 'Install Pytest dependencies'
  - script: |
      echo $(pytestfiles)
      pytest $(pytestfiles) --junitxml=junit/$(pytestfiles)-results.xml --cov=. --cov-report=xml --cov-report=html
    displayName: 'Pytest'
    continueOnError: true
  - task: PublishTestResults@2
    displayName: 'Publish Test Results **/*-results.xml'
    inputs:
      testResultsFiles: '**/*-results.xml'
      testRunTitle: $(python.version)
    condition: succeededOrFailed()
  - task: PublishCodeCoverageResults@1
    inputs:
      codeCoverageTool: Cobertura
      summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
      reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov'
    displayName: 'Publish code coverage results'
And the PowerShell script to distribute the tests:
<#
.SYNOPSIS
Distribute the tests in VSTS pipeline across multiple agents
.DESCRIPTION
This script slices tests files across multiple agents for faster execution.
We search for a specific type of file structure (in this example test*) and slice the files according to the agent number.
If we encounter multiple files [file1..file10] and have 2 agents, agent1 executes the odd-numbered files while agent2 executes the even-numbered files.
For detailed slicing info: https://learn.microsoft.com/en-us/vsts/pipelines/test/parallel-testing-any-test-runner
We use JUnit style test results to publish the test reports.
#>
$tests = Get-ChildItem .\ -Filter "test*" # search for test files with specific pattern.
$totalAgents = [int]$Env:SYSTEM_TOTALJOBSINPHASE # standard VSTS variables available using parallel execution; total number of parallel jobs running
$agentNumber = [int]$Env:SYSTEM_JOBPOSITIONINPHASE # current job position
$testCount = $tests.Count
# below conditions are used if parallel pipeline is not used. i.e. pipeline is running with single agent (no parallel configuration)
if ($totalAgents -eq 0) {
    $totalAgents = 1
}
if (!$agentNumber -or $agentNumber -eq 0) {
    $agentNumber = 1
}
Write-Host "Total agents: $totalAgents"
Write-Host "Agent number: $agentNumber"
Write-Host "Total tests: $testCount"
$testsToRun = @()
# slice test files to make sure each agent gets unique test file to execute
For ($i=$agentNumber; $i -le $testCount;) {
    $file = $tests[$i-1]
    $testsToRun = $testsToRun + $file
    Write-Host "Added $file"
    $i = $i + $totalAgents
}
# join all test files separated by a space. pytest runs multiple test files in the following format: pytest test1.py test2.py test3.py
$testFiles = $testsToRun -Join " "
Write-Host "Test files $testFiles"
# write these files into variable so that we can run them using pytest in subsequent task.
Write-Host "##vso[task.setvariable variable=pytestfiles;]$testFiles"
If you take a look at the pipeline, you can see that the pytest runs are passing fine, and each job is creating its code coverage report accordingly. I believe the problem lies in consolidating the code coverage reports into a single one.
If you look at the summary of the last run, you can see that there is only one attachment per run, most likely from the last executed job.
In this case test_chrome.py-results.xml.
Unless I'm missing something, you need to call coverage combine somewhere in your pipeline (at the moment you don't) and then upload the combined coverage.
❯ coverage --help
Coverage.py, version 6.4 with C extension
Measure, collect, and report on code coverage in Python programs.
usage: coverage <command> [options] [args]
Commands:
annotate Annotate source files with execution information.
combine Combine a number of data files.
...
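As a minimal sketch (not a drop-in fix for this pipeline): assuming each job's raw coverage data file has been given a distinct name (for example via COVERAGE_FILE or coverage's parallel mode) and all of them have been gathered into one directory, here called coverage-data, the combining step would look roughly like:

pip install coverage
coverage combine coverage-data   # merges the per-job data files into a single .coverage file
coverage xml -o coverage.xml     # Cobertura-style report for PublishCodeCoverageResults@1
coverage html -d htmlcov

In this parallel setup that also means publishing each job's data file as a pipeline artifact and running the combine step in a follow-up job that downloads them all.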
Regarding the PowerShell script that distributes the tests across different workers: you could instead instruct pytest to do that for you with pytest_collection_modifyitems in conftest.py (see the sketch below), or you could install pytest-azure-devops.
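For reference, a rough sketch of what that hook could look like in conftest.py; this is not the plugin's code, just the same slicing idea applied at collection time, using the variables the PowerShell script already reads:

import os

def pytest_collection_modifyitems(config, items):
    # SYSTEM_TOTALJOBSINPHASE / SYSTEM_JOBPOSITIONINPHASE are set by Azure Pipelines
    # for jobs using a parallel strategy; fall back to a single agent otherwise.
    total_agents = max(int(os.environ.get("SYSTEM_TOTALJOBSINPHASE") or 1), 1)
    agent_number = max(int(os.environ.get("SYSTEM_JOBPOSITIONINPHASE") or 1), 1)
    selected, deselected = [], []
    for index, item in enumerate(items):
        if index % total_agents == agent_number - 1:
            selected.append(item)
        else:
            deselected.append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
    items[:] = selected

Note that this slices individual tests rather than whole files, which also tends to balance the agents a bit better.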
I had set up codecov with GitLab pipelines a while back and was able to see coverage reports in codecov. Since the initial setup, the reports have stopped processing after a few commits, and I have not been able to figure out what I'm doing wrong to get them processing again.
In GitLab pipelines I use tox and pip install codecov:
test:
  stage: test
  script:
    - pip install circuitpython-build-tools Sphinx sphinx-rtd-theme tox codecov
    - tox
    - codecov -t $CODECOV_TOKEN
  artifacts:
    paths:
      - htmlcov/
In tox I run coverage:
[testenv:coverage]
deps =
    -rrequirements.txt
    -rtest-requirements.txt
commands =
    coverage run -m unittest discover tests/
    coverage html
In codecov I can see where the upload attempts to process, but it fails without much description:
There was an error processing coverage reports.
I've referenced the python tutorials, but can't see what I'm getting wrong.
https://github.com/codecov/codecov-python
https://github.com/codecov/example-python
Looks like this was something on the codecov.io side. I didn't change anything, but this morning I found the coverage reports parsing and the badge working again.
I am managing deployment/CI of a Flask app through Jenkins. I have a build step in the form of an executed shell, which runs a shell script on the host that in turn runs nosetests.
The Jenkins shell command looks like so:
$WORKSPACE/ops/test_integration.sh
integration.sh looks like so:
SETTINGS_FILE="settings/integration.py" nosetests $@
How do I have the Jenkins job fail and discontinue the build if the nosetests in the above shell script fail? Thanks.
I suspect nosetests will return a non-zero value on failure,
so you can set the shell to auto-fail with
set -e
or any of the other options described here.
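For example, a sketch of the script with that flag added (set -e makes the script abort on the first failing command, and nosetests exits non-zero when a test fails, so the Jenkins build step fails too):

#!/bin/bash
set -e  # abort on the first failing command
SETTINGS_FILE="settings/integration.py" nosetests "$@"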