Is there a way to exclude pytests marked with pytest.mark from running during the pre-commit hook?
In particular, I'd like to exclude the tests that are marked as integration tests.
The content of a test file looks like this:
pytestmark = [pytest.mark.integration, pytest.mark.reporting_api]
### some tests
and the .pre-commit-config.yaml pytest configuration is:
- repo: local
  hooks:
    - id: pytest
      name: pytest
      entry: pytest test/
      language: system
      pass_filenames: false
      types: [python]
      stages: [commit, push]
2¢: it's not a good idea to run tests as part of pre-commit -- in my experience they're going to be too slow, and your contributors may get frustrated and turn off the framework entirely.
that said, it should be as simple as adding the arguments you want to either entry or args -- personally I prefer entry when working with repo: local hooks (since there's nothing that would "conventionally" override args).
in your case this would look like:
- repo: local
  hooks:
    - id: pytest
      name: pytest
      entry: pytest test/ -m 'not integration and not reporting_api'
      language: system
      pass_filenames: false
      types: [python]
      stages: [commit, push]
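side note: if these markers aren't registered yet, declaring them (e.g. in a pytest.ini) keeps newer pytest versions from warning about unknown marks, and --strict-markers will catch typos -- a minimal sketch, with placeholder descriptions:
# pytest.ini -- a sketch; adjust the descriptions to your project
[pytest]
markers =
    integration: integration tests (deselected by the hook above)
    reporting_api: reporting API tests (deselected by the hook above)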
disclaimer: I created pre-commit and I'm a pytest core dev
Related
I have a project with Python and ReactJS.
Project_folder
--Django_project
--Reactjs_project
--.git
I want to use pre-commit hooks for git, with flake8 and black, for my Python project.
In this case, how do I tell git to run the pre-commit hooks only for the Python project?
You could use one of pre-commit's hook-level configuration options, such as exclude, just as you would for flake8.
In your case:
- id: myProject
  exclude: ^Reactjs_project/
Replace 'myProject' with the id of the hook you have declared in your pre-commit configuration.
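For instance, a configuration along these lines would limit both tools to the Python side of the repository (a sketch -- the repo URLs are the commonly used upstream hook repos, and the rev values are placeholders to pin to whatever versions you actually use):
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 6.1.0
    hooks:
      - id: flake8
        exclude: ^Reactjs_project/
  - repo: https://github.com/psf/black
    rev: 23.9.1
    hooks:
      - id: black
        exclude: ^Reactjs_project/
Alternatively, files: ^Django_project/ restricts a hook to that subtree instead of excluding the other one.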
I want to run the test cases for my Python code, which uses the Flask framework.
You can use this command to run the test suite in a Flask project:
pytest --cov=src --cov-report=html
That depends on how you've written the test cases in the first place. Happily, though, pytest can usually run whatever tests you have, as long as they're at least close to standard, and pytest-cov adds coverage reporting.
So, once you have pytest and pytest-cov installed, you can
pytest --cov . --cov-report term --cov-report html
and you'll get:
a coverage report in the console
an htmlcov/ directory with pretty, colorful coverage information.
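For context, here is a minimal sketch of a test that pytest would collect for a Flask app; create_app and app.py are assumed names for an application factory and its module, so adapt them to your project:
# test_app.py -- a minimal sketch; create_app / app.py are assumed names
import pytest
from app import create_app

@pytest.fixture
def client():
    app = create_app()
    app.config["TESTING"] = True
    with app.test_client() as client:
        yield client

def test_index_returns_200(client):
    # assumes the app serves something at "/"
    response = client.get("/")
    assert response.status_code == 200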
If you've written your code as a REST API, I would recommend pyresttest.
You can write your test cases as simply as this in a test.yaml file:
- test: # create entity by POST
    - name: "Create person"
    - url: "/api/person/"
    - method: "POST"
    - body: '{"first_name": "Ahmet","last_name": "Tatanga"}'
    - headers: {Content-Type: application/json}
Then you just run this test case with:
pyresttest test.yaml
You can implement some validators for the returned JSON. To learn more, please check the pyresttest documentation.
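As an illustration, a validator for the returned JSON could look roughly like this (syntax adapted from the pyresttest examples; the URL and field names are placeholders):
- test: # fetch the entity and validate a field
    - name: "Get person"
    - url: "/api/person/1/"
    - method: "GET"
    - validators:
        - compare: {jsonpath_mini: "first_name", comparator: "eq", expected: "Ahmet"}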
I have GitHub Actions that build and test my Python application. I am also using pytest-cov to generate a code coverage report. This report is being uploaded to codecov.io.
I know that codecov.io can't fail your build if the coverage drops, so how do I make GitHub Actions fail the build when coverage drops? Do I have to check the previous values and compare them with the new ones "manually" (by writing a script), or is there an existing solution for this?
One solution is to set up a job with two steps:
Check whether the coverage has dropped
Build, depending on the result of step 1
If step 1 fails, there is no build.
You can write a Python script that returns an error if the coverage drops.
Try something like this:
jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Set Up Python
        uses: actions/setup-python@v1
      - name: Test Coverage
        run: python check_coverage.py
      - name: Build
        if: success()
        run: python do_something.py # <= here you're doing your build
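check_coverage.py isn't a standard tool, so here is a minimal sketch of what it could do, assuming an earlier step (not shown) has already produced a Cobertura-style coverage.xml (e.g. via coverage xml or pytest --cov --cov-report=xml) and assuming a fixed threshold rather than a comparison with the previous run:
# check_coverage.py -- a sketch; the threshold and report path are assumptions
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 80.0  # fail the job below this percentage

def main():
    root = ET.parse("coverage.xml").getroot()
    # Cobertura-style reports expose overall coverage as a line-rate attribute (0.0-1.0)
    line_rate = float(root.get("line-rate", 0)) * 100
    print(f"Total line coverage: {line_rate:.1f}% (threshold {THRESHOLD}%)")
    if line_rate < THRESHOLD:
        sys.exit(1)  # a nonzero exit fails this step, so the Build step is skipped

if __name__ == "__main__":
    main()
If a fixed threshold is enough, coverage's own fail_under setting (or pytest-cov's --cov-fail-under) does the same thing without a custom script; comparing against the previous run's value still requires somewhere to store that number.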
I hope it helps.
There is nothing built-in; instead, if you don't want to write a custom script, you should use one of the many integrations, like SonarQube.
I have a simple Python project with a single file currently. It lives within the static/cgi-bin folder of my project. In the base of my directory I have a .pre-commit-config.yaml file, and I have not touched the files in the .git/hooks folder. I would like to create pre-commit and pre-push hooks, but I cannot seem to get them working.
When I try to commit, the following happens:
isort................................................(no files to check)Skipped
flake8...............................................(no files to check)Skipped
black................................................(no files to check)Skipped
When I try to push, I get the following error:
pytest...................................................................Failed
hookid: pytest
============================= test session starts ==============================
platform darwin -- Python 2.7.15, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
rootdir: /Users/.../deployment, inifile:
collected 0 items
========================= no tests ran in 0.01 seconds =========================
error: failed to push some refs to '...git'
Note that deployment is the folder I am working in.
My code in the yaml file is:
repos:
  - repo: local
    hooks:
      - id: isort
        name: isort
        entry: isort
        language: system
        types: [python]
        stages: [commit]
      - id: flake8
        name: flake8
        language: system
        entry: flake8
        types: [python]
        stages: [commit]
      - id: black
        language_version: python3.6
        name: black
        language: system
        entry: black
        types: [python]
        stages: [commit]
      - id: pytest
        name: pytest
        language: system
        entry: pytest
        pass_filenames: false
        always_run: true
        stages: [push]
pre-commit will pass a list of staged files which match types / files to the entry listed.
Your commit shows "no files to check" because there were no Python files in your commit. You probably want to run pre-commit run --all-files when first introducing new hooks.
As for your pytest hook, pytest exits nonzero when it does not run any tests and so that is failing.
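A quick way to confirm the push hook itself is wired up correctly is to give pytest at least one test to collect, for example a throwaway file like this (the name and location are arbitrary, as long as they follow pytest's test_*.py discovery convention):
# tests/test_smoke.py -- placeholder so pytest collects at least one test
def test_smoke():
    assert True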
I installed SonarQube on my Mac using the docker-compose file given below.
version: "2"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      # This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
      - postgresql_data:/var/lib/postgresql/data
networks:
  sonarnet:
    driver: bridge
volumes:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled-plugins:
  postgresql:
  postgresql_data:
After that, I used the command
sonar-scanner
to analyse the project with SonarQube.
In the resulting analysis report, the code coverage section is blank, even though I have written some Python unittest scripts. Please suggest a way to get a code coverage report for my Python project in SonarQube. Thanks in advance.
SonarQube doesn't calculate code coverage itself. It only displays results provided by other tools.
You have to execute a tool which calculates code coverage (e.g. Coverage.py) and then add the analysis parameters:
sonar.python.coverage.reportPath - the path of the unit test coverage report
sonar.python.coverage.itReportPath - the path of the integration test coverage report
You can read everything on SonarQube wiki: https://docs.sonarqube.org/display/PLUG/Python+Coverage+Results+Import
You'll need a code coverage tool to analyze how much of the project's code is covered by unit tests.
As mentioned, one such tool is coverage.
The coverage tool can be used to generate a SonarQube-compatible XML report, which is then uploaded to SonarQube.
Once installed, run your tests under coverage and then run coverage xml.
In your sonar-project.properties add:
sonar.python.coverage.reportPath=coverage.xml
Remember to add the auto-generated coverage output files to .gitignore:
.coverage
coverage.xml
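Putting it together, the local sequence could look like this (assuming pytest as the test runner; any runner that coverage.py can wrap, including unittest, works the same way):
coverage run -m pytest    # run the tests under coverage
coverage xml              # writes coverage.xml (and leaves the .coverage data file behind)
sonar-scanner             # picks up sonar.python.coverage.reportPath from sonar-project.properties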