I have GitHub Actions workflows that build and test my Python application. I am also using pytest-cov to generate a code coverage report, which is uploaded to codecov.io.
I know that codecov.io can't fail your build if the coverage drops, so how do I get GitHub Actions to fail the build when coverage decreases? Do I have to check the previous values and compare them with the new ones "manually" (by writing a script)? Or is there an existing solution for this?
One solution is to define a job with two steps:
Check whether the coverage has dropped
Build, depending on the result of that check
If step 1 fails, there is no build.
You can write a Python script that exits with an error if the coverage drops.
Try something like this:
jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Set Up Python
        uses: actions/setup-python@v1
      - name: Test Coverage
        run: python check_coverage.py
      - name: Build
        if: success()
        run: python do_something.py # <= here you're doing your build
I hope it helps.
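For reference, here is a minimal sketch of what such a check_coverage.py might look like. It assumes pytest-cov was run with --cov-report=xml so a Cobertura-style coverage.xml exists, and that a baseline percentage is kept in a file named coverage_baseline.txt (both file names are placeholders, not part of the original answer):

# check_coverage.py - hypothetical sketch: fail if coverage drops below a stored baseline
import sys
import xml.etree.ElementTree as ET

BASELINE_FILE = "coverage_baseline.txt"  # assumed location of the baseline percentage

def current_coverage(xml_path="coverage.xml"):
    # pytest-cov's Cobertura-style report stores the overall line rate
    # as a fraction on the root <coverage> element.
    root = ET.parse(xml_path).getroot()
    return float(root.attrib["line-rate"]) * 100

def main():
    with open(BASELINE_FILE) as f:
        baseline = float(f.read().strip())
    current = current_coverage()
    print(f"coverage: {current:.2f}% (baseline: {baseline:.2f}%)")
    if current < baseline:
        print("Coverage dropped - exiting with an error to fail the build.")
        sys.exit(1)

if __name__ == "__main__":
    main()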
There is nothing built-in; if you don't want to write a custom script, use one of the many integrations such as SonarQube.
I have a Travis integration in my project with the build language set to Python. I want to integrate Postman tests, which require a Node installation. Should I create a separate build for this, or is there a way to accommodate it in the same build? I tried adding a new env, but I was getting a tox error.
This is a broad guideline. Ideally:
The jobs are atomic (each is independent to set up and run)
You install Node/npm, or configure Travis so that npm is set up alongside Python (see the sketch after this list)
You run your jobs in the order you desire (usually Newman last)
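As a rough illustration of the second point, a single Python-language .travis.yml can add Node in before_install, since Travis images ship with nvm. The file below is a hypothetical sketch, not taken from the question; the collection name and requirements file are placeholders:

language: python
python: "3.8"
before_install:
  - nvm install 12            # add Node to the Python image via nvm
install:
  - pip install -r requirements.txt
  - npm install -g newman     # Postman's CLI runner
script:
  - pytest
  - newman run collection.json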
It seems that nowadays you can have a language per job in Travis. Check: https://docs.travis-ci.com/user/build-matrix/#using-different-programming-languages-per-job. For example:
dist: xenial
language: php
php:
  - '5.6'
jobs:
  include:
    - language: python
      python: 3.8
      script:
        - python -c "print('Hi from Python!')"
    - language: node_js
      node_js: 12
      script:
        - npm i newman -g
        - newman run COLLECTION
So this will likely allow you to keep a single build + test run.
Is there a way to exclude pytest tests marked with pytest.mark from running during the pre-commit hook?
In particular, I'd like to exclude the tests that are marked as integration tests.
The content of a test file looks like this:
pytestmark = [pytest.mark.integration, pytest.mark.reporting_api]
### some tests
and the .pre-commit-config.yaml pytest configuration is:
- repo: local
  hooks:
    - id: pytest
      name: pytest
      entry: pytest test/
      language: system
      pass_filenames: false
      types: [python]
      stages: [commit, push]
My 2c: it's not a good idea to run tests as part of pre-commit -- in my experience they're going to be too slow, and your contributors may get frustrated and turn off the framework entirely.
That said, it should be as simple as adding the arguments you want to either entry or args -- personally I prefer entry when working with repo: local hooks (since there's nothing that would "conventionally" override args).
In your case this would look like:
- repo: local
  hooks:
    - id: pytest
      name: pytest
      entry: pytest test/ -m 'not integration and not reporting_api'
      language: system
      pass_filenames: false
      types: [python]
      stages: [commit, push]
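For completeness, here is a small illustration (hypothetical file and test names) of how the marks interact with that -m expression; both the module-level pytestmark list from the question and per-test decorators are deselected by it:

# test_reports.py - hypothetical example of module-level and per-test marks
import pytest

# Module-level: every test in this file carries both marks.
pytestmark = [pytest.mark.integration, pytest.mark.reporting_api]

def test_report_upload():
    # deselected by -m 'not integration and not reporting_api'
    ...

@pytest.mark.integration
def test_live_endpoint():
    # per-test marks are filtered the same way
    ...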
disclaimer: I created pre-commit and I'm a pytest core dev
I am trying to set up a simple GitHub Actions workflow for SQL linting using the SQLFluff package. Here is the Sunrise Movement workflow, which is simple and clean.
name: Lint Models
on: [pull_request]
jobs:
  lint-models:
    runs-on: ubuntu-latest
    steps:
      - uses: "actions/checkout@v2"
      - uses: "actions/setup-python@v2"
        with:
          python-version: "3.8"
      - name: Install SQLFluff
        run: "pip install sqlfluff==0.12.0"
      - name: Lint models
        run: "sqlfluff lint models"
When I tried to run it in GitHub Actions, it gave me the following error message. I am not quite sure why it is throwing an error. Help is appreciated, as I am trying to learn GitHub Actions for the first time.
You have this:
run: "sqlfluff lint models"
This says to lint the directory called models. That directory does not exist in your repo (is it a subfolder?).
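If the models live in a subdirectory (common in dbt projects), point the lint step at that path instead; the dbt/models path below is only illustrative:

- name: Lint models
  run: "sqlfluff lint dbt/models"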
I'm working on a project where I'm trying to set up an HTTP server in C#. The responses from the server are tested using the pytest module.
This is what I've done so far:
Define the API using Swagger Editor
Generate base code using Swagger Generator
Write some Python tests which send requests to the server and check whether the responses fulfill certain requirements
I now want to set up CI on GitLab before I start actually writing the functions that correspond to the routes I've defined earlier. I set up a runner on my local machine (it's later going to be on a dedicated server) using Docker.
As I am new to CI, there are a few questions I'm struggling with:
As I need both Python and .NET for testing, should I choose .NET as the base image and then install Python, or Python as the base image and then install .NET? Which would be easier? I tried the latter, but it doesn't seem very elegant...
Do I build before I push to the remote repository and include the /bin folder in my repository to execute those files, or would I rather build during CI and therefore push nothing but source code?
I know those questions are a little broad, but as I am new to both CI and Docker, I'm looking for advice on how to follow best practices (if there are any).
The base image for a runner is just the default if you don't specify one in your .gitlab-ci.yml file. You can override the runner's default image by using a "pipeline default" image at the top of your .gitlab-ci.yml file (outside of any jobs), or you can specify the image for each job individually.
Using a "pipeline default" image:
image: python:latest

stages:
  - build
...
In this example, all jobs will use the python:latest image unless a job specifies its own image, as in this example:
stages:
  - build
  - test

Build Job:
  stage: build
  image: python:latest
  script:
    - ...
Here, this job overrides the runner's default.
image: python:latest

stages:
  - build
  - db_setup

Build Job:
  stage: build
  script:
    - # run some build steps

Database Setup Job:
  stage: db_setup
  image: mysql:latest
  script:
    - mysql -h my-host.example.com -u my-user -pmy-password -e "create database my-database;"
In this example, we have a "pipeline default" image that the "Build Job" uses since it doesn't specify its own image, but the "Database Setup Job" uses the mysql:latest image.
Here's an example where the runner's default image is ruby:latest:

stages:
  - build
  - test

Build Job:
  stage: build
  script:
    - # run some build steps

Test Job:
  stage: test
  image: golang:latest
  script:
    - # run some tests
In this last example, the "Build Job" uses the runner's base image, ruby:latest, but the "Test Job" uses golang:latest.
For your second question: it's up to you, but the convention is to commit only source code, not dependencies or compiled resources.
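Applied to the question's C#/.NET server with Python tests, a per-job image setup might look like the sketch below. The image tags, project path, and test directory are illustrative assumptions, not taken from the question:

stages:
  - build
  - test

build-server:
  stage: build
  image: mcr.microsoft.com/dotnet/sdk:6.0   # .NET SDK image for the build job
  script:
    - dotnet build src/Server.csproj
  artifacts:
    paths:
      - src/bin/                            # hand the build output to later jobs

test-api:
  stage: test
  image: python:3.10                        # Python image for the pytest job
  script:
    - pip install pytest requests
    - pytest tests/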
I installed SonarQube on my Mac using the Docker Compose file given below.
version: "2"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
# This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
After that, I used the command
sonar-scanner
to analyse the project with SonarQube.
In the resulting analysis report, the code coverage section is left blank, even though I have written some Python unittest scripts. Please suggest a way to get a code coverage report for my Python project in SonarQube. Thanks in advance.
SonarQube doesn't calculate code coverage itself. It only displays results provided by other tools.
You have to execute a tool which calculates code coverage (e.g. Coverage.py) and then add the analysis parameters:
sonar.python.coverage.reportPath - the path to the unit test coverage report
sonar.python.coverage.itReportPath - the path to the integration test coverage report
You can read more in the SonarQube docs: https://docs.sonarqube.org/display/PLUG/Python+Coverage+Results+Import
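These parameters can also be passed directly on the scanner command line rather than in sonar-project.properties; for instance (a sketch, assuming the report was written to coverage.xml):

sonar-scanner -Dsonar.python.coverage.reportPath=coverage.xml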
You'll need a code coverage tool to analyze how much of the project's code is covered by unit tests.
As mentioned, one of the tools is coverage.
The coverage tool can be used to generate a SonarQube-compatible XML report, which is then uploaded to SonarQube.
Once installed, run your tests under coverage (for example coverage run -m pytest) and then run coverage xml.
In your sonar-project.properties add:
sonar.python.coverage.reportPath=coverage.xml
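Putting it together, a minimal sonar-project.properties might look like the following sketch; the project key and source path are placeholders, and only the reportPath line comes from the answer above:

# sonar-project.properties - minimal sketch with placeholder values
sonar.projectKey=my-python-project
sonar.sources=.
sonar.python.coverage.reportPath=coverage.xml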
Remember to add the auto-generated coverage output files to .gitignore:
.coverage
coverage.xml