I am testing an application in a GitLab pipeline. Here is the script:
stages:
  - test
  - report

run_ui_tests:
  tags:
    - est
  stage: test
  before_script:
    - echo "Preparing environment..."
    - python --version
    - pip install -r requirements.txt
  script:
    - echo "Executing UI tests with Pytest..."
    - cd cio_tests
    - dir
    - pytest -v authorize_test.py
  allow_failure: true
  artifacts:
    when: always
    paths:
      - cio_tests/allure-results/
    expire_in: 5 mins 30 sec

reporting:
  tags:
    - est
  stage: report
  needs:
    - run_ui_tests
  script:
    - cd cio_tests
    - dir
    - allure generate --clean cio_tests/allure-report/
  artifacts:
    when: always
    paths:
      - cio_tests/allure-report/
    expire_in: 5 days
The pipeline finishes successfully, and the Allure report is saved locally on disk. However, when the report is opened in the browser, it contains no data.
What's wrong?
Solution found. To open the built report locally, you need to serve it from a local web server started in the directory that contains the allure-report folder.
To do this:
Go to the directory containing the allure-report folder in cmd or a shell.
Run the command: allure open allure-report
The Allure report will open in the browser (Edge, in my case).
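In other words (a minimal sketch; the directory names assume the pipeline layout above):

cd cio_tests                   # directory that contains allure-report/
allure generate allure-results -o allure-report --clean   # rebuild the report from the raw results
allure open allure-report      # starts a local web server and opens the report

The usual reason the report looks empty when index.html is opened straight from disk is that the browser blocks the report's local AJAX requests; allure open serves the files over HTTP instead.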
I have a Python-based application that consists of:
Some Python source code that uses the AWS Boto3 SDK to interact with AWS resources
A Dockerfile that builds upon the AWS public.ecr.aws/lambda/python:3.9 image
An AWS SAM (Serverless Application Model) template that builds a lambda to execute the Docker image when the lambda is invoked
The first part of my build commands in the buildspec.yaml file is intended to execute all unit tests with a code coverage report. This works well.
I was able to integrate the unit test report with AWS CodeBuild using the reports section of the buildspec:
reports:
  pytest_reports:
    files:
      - junitxml-report.xml
    base-directory: ./pytest_reports
    file-format: JUNITXML
This works as expected: after my code pipeline executed, I could see that a new "Report group" with the first report had been created in CodeBuild. Unfortunately, it only includes the unit test results report.
QUESTION: How do I integrate my Python code coverage report with CodeBuild via the buildspec.yaml file?
I have found some information on this AWS documentation page, but the list of code coverage report formats did not seem to include anything that I can generate from a Python code coverage run. I am still somewhat new to Python development, so I was hoping an expert might have already solved this.
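For what it is worth: CodeBuild's code coverage report groups do accept Cobertura XML, and the cov.xml that pytest-cov writes via --cov-report xml is Cobertura-format. A hedged sketch of a second report group (the name coverage_reports is an arbitrary choice):

reports:
  pytest_reports:
    files:
      - junitxml-report.xml
    base-directory: ./pytest_reports
    file-format: JUNITXML
  coverage_reports:
    files:
      - cov.xml               # the Cobertura XML emitted by --cov-report xml:pytest_reports/cov.xml
    base-directory: ./pytest_reports
    file-format: COBERTURAXML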
For reference, here is my complete buildspec.yaml file (with some sensitive values scrubbed):
version: 0.2

env:
  variables:
    # Elastic Container Registry (ECR) hosts
    MAIN_REPO: 999999999999.dkr.ecr.us-east-1.amazonaws.com
    DR_REPO: 999999999999.dkr.ecr.us-west-2.amazonaws.com

phases:
  install:
    runtime-versions:
      python: 3.9
  build:
    on-failure: ABORT
    commands:
      # -------------------------------------------------------------------------------------------
      # PART 1 - EXECUTE UNIT TESTS AND CODE COVERAGE ON THE PYTHON SOURCE CODE
      # -------------------------------------------------------------------------------------------
      # install/upgrade build-related modules that CodeBuild will use
      - python3 -m pip install --upgrade pip
      - python3 -m pip install --upgrade pytest
      - python3 -m pip install --upgrade pytest-mock
      - python3 -m pip install --upgrade pytest-cov
      # do local user 'install' of source code, then run pytest (company-private PyPI repo must be explicitly included)
      - pip install --extra-index-url https://artifactory.my-company-domain.com/artifactory/api/pypi/private-pypi/simple -e ./the_python_code
      - python3 -m pytest --junitxml=./pytest_reports/junitxml-report.xml --cov-fail-under=69 --cov-report xml:pytest_reports/cov.xml --cov-report html:pytest_reports/cov_html --cov-report term-missing --cov=./the_python_code/src/ ./the_python_code
      # -------------------------------------------------------------------------------------------
      # PART 2 - BUILD THE DOCKER IMAGE AND PUBLISH TO ECR
      # -------------------------------------------------------------------------------------------
      # REFERENCE: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html
      # Pre-authenticate access to Docker Hub and Elastic Container Registry for image pulls and pushes
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin 999999999999.dkr.ecr.us-east-1.amazonaws.com
      - docker image build -t 999999999999.dkr.ecr.us-east-1.amazonaws.com/my-docker-image-tag-name .
      - docker push 999999999999.dkr.ecr.us-east-1.amazonaws.com/my-docker-image-tag-name
      # -------------------------------------------------------------------------------------------
      # PART 3 - BUILD THE SAM PROJECT
      # -------------------------------------------------------------------------------------------
      - printenv
      - echo "-----------------------------------------------------"
      - 'echo "ARTIFACTS_BUCKET_NAME : $ARTIFACTS_BUCKET_NAME"'
      - 'echo "ARTIFACTS_BUCKET_PATH : $ARTIFACTS_BUCKET_PATH"'
      - 'echo "CODEBUILD_KMS_KEY_ID : $CODEBUILD_KMS_KEY_ID"'
      - echo "-----------------------------------------------------"
      - MAIN_TEMPLATE="main-template.yaml"
      - sam build --debug
      - |
        sam package \
          --template-file .aws-sam/build/template.yaml \
          --output-template-file "${MAIN_TEMPLATE}" \
          --image-repository "999999999999.dkr.ecr.us-east-1.amazonaws.com/my-docker-image-tag-name" \
          --s3-bucket "${ARTIFACTS_BUCKET_NAME}" \
          --s3-prefix "${ARTIFACTS_BUCKET_PATH}" \
          --kms-key-id "${CODEBUILD_KMS_KEY_ID}" \
          --force-upload

reports:
  pytest_reports:
    files:
      - junitxml-report.xml
    base-directory: ./pytest_reports
    file-format: JUNITXML

artifacts:
  files:
    - main-template.yaml
    - parameters/*.json
Here is the job script:
image: python:3.7.9

stages:
  - test

run_ui_tests:
  tags:
    - est
  stage: test
  before_script:
    - echo "Preparing environment..."
    - python --version
    - pip install -r requirements.txt
  script:
    - echo "Executing UI tests with Pytest..."
    - cd cio_tests
    - pytest -v authorize_test.py
  after_script:
    - echo "Cleaning test catalogue..."
The job fails after all tests have completed.
What is the reason for this behaviour? After all, the tests ran to completion, and one of them found a bug.
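Worth noting: pytest exits with a non-zero status code whenever at least one test fails, and GitLab CI marks a job failed on any non-zero exit code, so a red test is indistinguishable from a broken script. If failing tests should not fail the pipeline, one option is allow_failure, as in the first pipeline above (a sketch):

run_ui_tests:
  stage: test
  script:
    - cd cio_tests
    - pytest -v authorize_test.py   # exits 1 if any test fails
  allow_failure: true               # job shows a warning instead of failing the pipeline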
I am trying to run SonarQube with pytest in a GitLab pipeline, but it does not report any coverage.
According to the logs it finds the coverage file, yet it shows 0% coverage.
I am quite desperate, as I have already tried multiple solutions and combinations.
The GitLab pipeline is below (the commented-out lines are variants I ran with and without while testing):
Unit tests:
  image: python:3.9-slim
  stage: test
  before_script:
    - python3 -V
    - pip install --upgrade setuptools
    - pip install ez_setup
    # - pip install unroll
    # - pip install -r requirements.txt
    - pip install pytest pytest-cov
    - pip install pytest
    - pip install pytest-metadata
  script:
    - export PYTHONUNBUFFERED=1
    # - python3 -m pytest
    # - coverage run -m pytest
    # - coverage report
    # - coverage run -m pytest -rap --junitxml coverage.xml
    # - coverage xml -i
    - pytest -v --cov --cov-report=xml --cov-report=html
    # - coverage lcov
    - python3 -V
    - ls -a
  coverage: /All\sfiles.*?\s+(\d+.\d+)/
  artifacts:
    # reports:
    #   cobertura: cobertura-coverage.xml
    paths:
      # - coverage.lcov
      - coverage.xml
      - .coverage
  only:
    - merge_requests
    - master
    - development
sonarqube-check:
  stage: analysis
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
    GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - ls -a
    - ls -a .coverage
    - sonar-scanner -X
  allow_failure: true
  only:
    - merge_requests
    - main
The sonar-project.properties file:
sonar.projectKey=XXXXX
sonar.qualitygate.wait=true
sonar.language=py
sonar.python.version=3.9
sonar.projectVersion=1.0
sonar.core.codeCoveragePlugin=cobertura
sonar.python.coverage.reportPaths=coverage.xml
sonar.python.xunit.reportPaths=coverage.xml
sonar.verbose=true
sonar.sources=src
sonar.tests=src
sonar.test.inclusions=tests/*.py, src/*.py
The folder structure is just two folders, tests and src, with .py files in each.
The logs are:
16:08:59.221 INFO: Sensor Cobertura Sensor for Python coverage [python]
16:08:59.221 DEBUG: Using pattern 'coverage.xml' to find reports
16:08:59.251 INFO: Python test coverage
16:08:59.255 INFO: Parsing report '/correctpath/coverage.xml'
16:08:59.373 DEBUG: 'src/delta.py' generated metadata as test with charset 'UTF-8'
16:08:59.376 DEBUG: 'src/invoice.py' generated metadata as test with charset 'UTF-8'
16:08:59.383 DEBUG: 'src/portfolio.py' generated metadata as test with charset 'UTF-8'
16:08:59.395 DEBUG: Saving coverage measures for file 'src/p1.py'
16:08:59.420 DEBUG: Saving coverage measures for file 'src/__init__.py'
16:08:59.424 DEBUG: 'src/__init__.py' generated metadata as test with charset 'UTF-8'
16:08:59.425 DEBUG: Saving coverage measures for file 'src/invoice.py'
16:08:59.426 DEBUG: Saving coverage measures for file 'src/delta.py'
16:08:59.428 INFO: Sensor Cobertura Sensor for Python coverage [python] (done) | time=207ms
16:08:59.429 INFO: Sensor JaCoCo XML Report Importer [jacoco]
16:08:59.435 INFO: 'sonar.coverage.jacoco.xmlReportPaths' is not defined. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml
16:08:59.436 INFO: No report imported, no coverage information will be imported by JaCoCo XML Report Importer
The pipeline passes, but coverage is 0%.
I tried both the coverage and pytest-cov approaches, in case one of them produces a coverage.xml in the wrong format.
Thanks for any help!
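One detail the DEBUG lines above hint at: the src/*.py files are "generated metadata as test", i.e. the scanner indexes them as test files (sonar.test.inclusions matches src/*.py), and SonarQube only attributes coverage to main sources, which would explain the 0%. A hedged sketch of properties that keep sources and tests separate (assuming tests really live only in tests/):

sonar.sources=src
sonar.tests=tests
sonar.test.inclusions=tests/*.py
sonar.python.coverage.reportPaths=coverage.xml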
GitLab version is 13.6.6
GitLab Runner version is 11.2.0
My .gitlab-ci.yml:
image: "python:3.7"

before_script:
  - pip install flake8

flake8:
  stage: test
  script:
    - flake8 -max-line-length=79
  tags:
    - test
The only information I get from the Pipelines page is that the script failed, and the output of the failed job is "No job log". How can I get more detailed error output?
Using artifacts can help you.
image: "python:3.7"

before_script:
  - pip install flake8

flake8:
  stage: test
  script:
    # write the flake8 report to a file so it can be collected as an artifact
    # (note the double dash: --max-line-length, not -max-line-length)
    - flake8 --max-line-length=79 --output-file=path/to/test.log
  tags:
    - test
  artifacts:
    when: on_failure
    paths:
      - path/to/test.log
The log file can be downloaded via the web interface.
Note: using when: on_failure ensures that test.log is collected only when the job fails, saving disk space on successful builds.
My goal is to deploy and run my Python script from GitHub on my virtual machine via Azure Pipelines. My azure-pipelines.yml looks like this:
jobs:
  - deployment: VMDeploy
    displayName: Test_script
    environment:
      name: deploymentenvironment
      resourceType: VirtualMachine
    strategy:
      rolling:
        maxParallel: 2  # for percentages, mention as x%
        preDeploy:
          steps:
            - download: current
            - script: echo initialize, cleanup, backup, install certs
        deploy:
          steps:
            - task: Bash@3
              inputs:
                targetType: 'inline'
                script: python3 $(Agent.BuildDirectory)/test_file.py
        routeTraffic:
          steps:
            - script: echo routing traffic
        postRouteTraffic:
          steps:
            - script: echo health check post-route traffic
        on:
          failure:
            steps:
              - script: echo Restore from backup! This is on failure
          success:
            steps:
              - script: echo Notify! This is on success
This returns an error:
/usr/bin/python3: can't find '__main__' module in '/home/ubuntu/azagent/_work/1/test_file.py'
##[error]Bash exited with code '1'.
If I place test_file.py in /home/ubuntu and change the deployment script to script: python3 /home/ubuntu/test_file.py, the script runs smoothly.
If I move test_file.py to another directory with mv /home/ubuntu/azagent/_work/1/test_file.py /home/ubuntu, I find an empty folder named test_file.py there, not a .py file.
The reason you cannot get the source is that download: current downloads artifacts produced by the current pipeline run, but you did not publish any artifact in the current pipeline.
Since deployment jobs do not automatically check out source code, you need to either check out the source in your deployment job:
- checkout: self
or publish the sources as a pipeline artifact before downloading them:
- publish: $(Build.SourcesDirectory)
  artifact: Artifact_Deploy
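Putting it together, a minimal sketch of the deploy hook with an explicit checkout (the script path assumes the default checkout location under $(Build.SourcesDirectory)):

deploy:
  steps:
    - checkout: self  # deployment jobs skip the implicit checkout, so request it explicitly
    - task: Bash@3
      inputs:
        targetType: 'inline'
        script: python3 $(Build.SourcesDirectory)/test_file.py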