I have searched for this all over the internet and couldn't find an answer: my GitLab CI job succeeds even though a pytest test fails.
The output of the job is something like this:
test/test_something.py:25: AssertionError
========================= 1 failed, 64 passed in 2.10s =========================
Job succeeded
my .gitlab-ci.yml file for the test:
run_tests:
  stage: test
  tags:
    - tests
  script:
    - echo "Running tests"
    - ./venv/bin/python -m pytest
I'm using shell executor.
Has anyone faced this problem before? As I understand it, GitLab CI depends on the exit code of pytest and should fail the job if the exit code is non-zero, but in this case pytest should have exit code 1 since a test failed.
It's not something about your gitlab-ci script but rather your pytest script (the script or module you are using to run your tests).
Below is an example, assuming you use something like the Flask CLI to manage your tests.
You can raise SystemExit with the exit code: if anything other than 0 is returned, the process fails. In a nutshell, a GitLab stage succeeds only if the exit code it returns is 0.
When called programmatically, pytest.main() only runs the tests and returns the exit code as a value; it doesn't exit the process with it. You can implement this in your code:
Your manage.py (assuming you are using the Flask CLI) will look something like this:
import pytest
import click
from flask import Flask

app = Flask(__name__)

@app.cli.command("tests")
@click.argument("option", required=False)
def run_test_with_option(option: str = None):
    if option is None:
        # pytest.main() returns the exit code; raising SystemExit makes the
        # process (and therefore the CI job) fail when tests fail
        raise SystemExit(pytest.main())
Note how the above code raises SystemExit and defines a Flask CLI command named tests. To run it, you can simply add the following to your .gitlab-ci.yml:
run_tests:
  stage: test
  tags:
    - tests
  variables:
    FLASK_APP: manage.py
  script:
    - echo "Running tests"
    - . ./venv/bin/activate
    - flask tests
The command that runs your tests is flask tests, which raises SystemExit as shown.
FYI: you may not be using the Flask CLI to manage your tests and may simply want to run a test script directly. In that case, this answer might also help; the same idea looks something like the sketch below.
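A minimal sketch of that variant, assuming a standalone run_tests.py (the filename is just illustrative) invoked from the CI script:
# run_tests.py
import sys
import pytest

# pytest.main() returns the exit code (0 = all passed, 1 = failures);
# sys.exit propagates it so the GitLab job fails when a test fails
sys.exit(pytest.main())
The CI script line would then be ./venv/bin/python run_tests.py instead of flask tests.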
Need help!
I have a job in GitLab CI that runs tests and then reruns the failed ones. If there are no failed tests, the rerun step fails with exit code 5, which means no tests were collected. I found the plugin "pytest-custom_exit_code", but I don't know how to use it correctly.
Do I just need to add 'pytest --suppress-no-test-exit-code' to my runner.sh?
It looks like this now:
#!/bin/sh
/usr/local/bin/pytest -m test
/usr/local/bin/pytest -m test --last-failed --last-failed-no-failures none
The assumption here is that the plugin is installed first using
pip install pytest-custom_exit_code
The command-line option pytest --suppress-no-test-exit-code should work after that.
If a configuration file like .pytest.ini is used, the following lines should be added to it:
[pytest]
addopts = --suppress-no-test-exit-code
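For example, the rerun step in the runner.sh from the question could carry the option; a sketch, assuming the plugin is installed in the same environment as pytest:
#!/bin/sh
# first pass: run the tests marked "test"
/usr/local/bin/pytest -m test
# rerun only the previous failures; when there are none, the plugin's
# --suppress-no-test-exit-code turns exit code 5 ("no tests collected") into 0
/usr/local/bin/pytest -m test --last-failed --last-failed-no-failures none --suppress-no-test-exit-code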
I want to run pytest inside a container, and I'm finding a different behavior when compared to running pytest in the host. Just a simple container, no orchestrators or anything else.
When I run pytest at the host level, all tests pass. Some need a couple of reruns, but in the end they pass.
For example, with a sample of 3 tests, the result of running just pytest is 3 passed, 2 rerun in 111.37 seconds.
Now, if instead of running this in the host, I build an image and run a container, the result is always something along the lines of 1 failed, 2 passed in 73.53 seconds , or actually any combination of 1 failed 2 passed, 2 failed 1 passed, 3 failed.
Notice how in this case there is no mention of any rerun operation?
The image doesn't have anything fancy; it's as simple as copying the requirements and tests and running pytest.
FROM python:3.7-slim
WORKDIR /apptests
COPY requirements requirements
COPY tests tests
RUN pip install -r requirements/tests.txt
CMD ["pytest"]
Any ideas about what might be happening? I'm not passing any flag or argument, in both cases (host or docker), it's just a raw pytest.
I thought that maybe when a test failed, it was reporting error and the container was exiting (even though I'm not using pytest -x), but that's not the case. All tests are run.
You could store your .pytest_cache in a volume so that each time a new container is started, it has the knowledge of the previous run(s).
For example, using compose.
services:
  app:
    image: myapp
    command: python -m pytest -v --failed-first
    volumes: [ pytest_cache:/app/.pytest_cache ]

volumes:
  pytest_cache:
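If you prefer a plain docker run rather than Compose (the question mentions no orchestrators), a named volume does the same job; a sketch assuming the image is called myapp and the working directory is /apptests as in the Dockerfile above:
docker volume create pytest_cache
docker run --rm -v pytest_cache:/apptests/.pytest_cache myapp python -m pytest -v --failed-first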
It was quite easy to solve. There was a missing file that needed to be copied to the container which, among other things, had:
[tool:pytest]
testpaths = tests
python_files = *.py
addopts = --junitxml=./reports/junit.xml
    --verbose
    --capture=no
    --ignore=setup.py
    --reruns 2 <----------------
    --exitfirst
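So the fix is simply to copy that file into the image as well; a sketch of the adjusted Dockerfile, assuming the config lives in setup.cfg (the [tool:pytest] header suggests that, but the exact filename isn't stated):
FROM python:3.7-slim
WORKDIR /apptests
COPY requirements requirements
COPY tests tests
# the previously missing pytest configuration, including --reruns 2
COPY setup.cfg setup.cfg
RUN pip install -r requirements/tests.txt
CMD ["pytest"]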
I want to run the test cases for my Python code, where I am using the Flask framework.
You can use this command to run the test suite in a Flask project:
pytest --cov=src --cov-report=html
That depends on how you've written the test cases in the first place. Happily, though, pytest tends to be able to run anything that is at least close to standard tests, and pytest-cov adds coverage.
So, once you have pytest and pytest-cov installed, you can
pytest --cov . --cov-report term --cov-report html
and you'll get
a report of coverage in the console
an htmlcov/ directory with pretty, colorful coverage information.
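For reference, a minimal sketch of the kind of "close to standard" test that pytest discovers on its own; the module and factory names here are only illustrative:
# tests/test_health.py
from myapp import create_app  # assumes an application-factory-style Flask app

def test_index_returns_200():
    app = create_app()
    client = app.test_client()
    assert client.get("/").status_code == 200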
If you've written your code as a REST API, I would recommend pyresttest.
You can write test cases as simply as this in a test.yaml file:
- test: # create entity by POST
    - name: "Create person"
    - url: "/api/person/"
    - method: "POST"
    - body: '{"first_name": "Ahmet","last_name": "Tatanga"}'
    - headers: {Content-Type: application/json}
Then you just run this test case with:
pyresttest test.yaml
You can implement validators for the returned JSON, as in the sketch below. To learn more, please check here.
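For instance, a validator block might look like this (the field names are only illustrative):
- test:
    - name: "Get person"
    - url: "/api/person/1"
    - method: "GET"
    - validators:
        # assumes the response JSON contains a first_name field
        - compare: {jsonpath_mini: "first_name", comparator: "eq", expected: "Ahmet"}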
I am adding a pipeline step to run unit tests - the test suite is small and should execute quickly. However, the Run PyTest task is timing out. I set the timeout to 15 minutes, which should be far more than enough time for the test suite to run (they take 2.5 seconds in the IDE).
The logs show the last command being run is:
python.exe -m pytest --color=no -q --test-run-title="Unit Tests" --basetemp="D:\a\1\s" --junitprefix="py%winver%" --cov=test_module_A --cov=test_module_B --cov-report=xml --cov-report=html --pylint "D:\a\1\s\tests\test_module_A.py" "D:\a\1\s\tests\test_module_B.py"
The YAML for my Run PyTest task:
steps:
- task: stevedower.python.PyTest.PyTest@2
  displayName: 'Run PyTest'
  inputs:
    title: 'Unit Tests'
    testroot: tests
    patterns: 'test_*.py'
    resultfile: tests
    doctests: false
    pylint: true
    codecoverage: 'test_module_A, test_module_B'
  timeoutInMinutes: 15
It seems that the tests are not actually executing despite the pytest command being run. I am not aware of any additional logs that I should be looking at for more detailed test run information.
My tests were (unbeknownst to me) attempting to log in to Azure, so the test run was hanging at the login prompt. Be sure to mock out the Azure ML Workspace object itself, not just the GetWorkspace() calls.
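A minimal sketch of that kind of mock with pytest and unittest.mock, assuming the code under test lives in a module named pipeline_code with a run() entry point (both names are hypothetical):
from unittest.mock import MagicMock, patch

import pipeline_code  # hypothetical module that builds an azureml.core Workspace

def test_runs_without_azure_login():
    # Patch Workspace where it is looked up, so neither the constructor nor
    # Workspace.get()/Workspace.from_config() can trigger an interactive login.
    with patch("pipeline_code.Workspace") as workspace_cls:
        workspace_cls.get.return_value = MagicMock()
        workspace_cls.from_config.return_value = MagicMock()
        pipeline_code.run()  # the call that used to hang at the login prompt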
In the corresponding pipeline job, enabling:
Allow scripts to access the OAuth token
Solved the issue for me.
I am managing deployment / CI of a Flask app through Jenkins. I have a build step in the form of an executed shell which runs a shell script on the host that in turn runs nosetests.
The Jenkins shell command looks like so:
$WORKSPACE/ops/test_integration.sh
test_integration.sh looks like so:
SETTINGS_FILE="settings/integration.py" nosetests "$@"
How do I have the jenkins job fail and discontinue the build if the nosetests in the above shell script fail? Thanks.
I suspect nosetests will return a non-zero value on failure,
so you can set the shell to auto-fail with
set -e
or any of the other options in here
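A sketch of what that could look like in the script from the question (same path and variable, nothing else changed):
#!/bin/sh
# exit immediately if any command fails, so Jenkins marks the build as failed
set -e
SETTINGS_FILE="settings/integration.py" nosetests "$@"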