I installed SonarQube on my Mac using the Docker Compose file given below.
version: "2"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
# This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
After that I used the command
sonar-scanner
to analyse the project with SonarQube.
The analysis report is shown above. As you can see, the code coverage part is left blank, even though I have written some Python unittest scripts. Please suggest a way to get the code coverage report for my Python project in SonarQube. Thanks in advance.
SonarQube doesn't calculate code coverage itself. It only displays results provided by other tools.
You have to execute a tool which calculates code coverage (e.g. Coverage.py) and then add the analysis parameters:
sonar.python.coverage.reportPath - the report path for the unit test coverage results
sonar.python.coverage.itReportPath - the report path for the integration test coverage results
You can read more on the SonarQube wiki: https://docs.sonarqube.org/display/PLUG/Python+Coverage+Results+Import
You’ll need a code coverage tool to analyze how much of the project’s code is covered by unit tests.
As mentioned, one of the tools is coverage.
The coverage tool can be used to generate a SonarQube-compatible XML report, which is then uploaded to SonarQube.
Once installed, run your tests under coverage (for example coverage run -m pytest, or the equivalent for your test runner) and then run coverage xml to produce the report.
In your sonar-project.properties add:
sonar.python.coverage.reportPath=coverage.xml
Remember to add the auto-generated coverage output files to .gitignore:
.coverage
coverage.xml
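If you prefer to drive this from Python instead of the command line, coverage also has a programmatic API. Below is a minimal sketch of that approach; the package name my_package and the tests directory are placeholders for your own layout, and most projects simply run coverage run -m pytest followed by coverage xml instead:

# run_coverage.py - hypothetical helper; produces the coverage.xml that
# sonar.python.coverage.reportPath points at.
import unittest

import coverage

cov = coverage.Coverage(source=["my_package"])  # "my_package" is a placeholder
cov.start()

# Discover and run the unit tests (assumes they live in ./tests).
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.xml_report(outfile="coverage.xml")  # SonarQube-compatible Cobertura-style XML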
I wrote a ChatOps bot for the collaboration tool Mattermost using this framework. Now I'm trying to write and run integration tests and I used their examples. By cloning the git repository you can run the tests by yourself. Their docker-compose.yml file will only work on a Linux machine. If you want to reproduce it on a Mac machine, you'll have to edit the docker-compose.yml to:
version: "3.7"
services:
app:
container_name: "mattermost-bot-test"
build: .
command: ./mm/docker-entry.sh
ports:
- "8065:8065"
extra_hosts:
- "dockerhost:127.0.0.1"
After running the command docker-compose up -d, Mattermost is available at localhost:8065. I took one simple test from their project and copied it into base-test.py. You can see my source code here. When I start the test with the command pytest --capture=no --log-cli-level=DEBUG ., it returns the following error: AttributeError: Can't pickle local object 'start_bot.<locals>.run_bot'. This error also shows up on the same test case in their project. The error happens at line 92 in the utils.py file.
What am I doing wrong here?
I don't know if you already went down this road, but I think you might get past the pickling error by making run_bot take the bot it calls bot.run() on as an argument, and then passing that bot to the process.
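I haven't looked at the framework's internals, so the snippet below is only an illustration of the idea with generic names, not mmpy_bot's actual API. With the spawn start method (the default on macOS), multiprocessing has to pickle the process target, and a function defined inside start_bot can't be pickled, whereas a module-level function that receives the bot as an argument can:

import multiprocessing as mp

class Bot:
    # Stand-in for the real bot object, just for illustration.
    def run(self):
        print("bot is running")

def start_bot_broken(bot):
    # Fails under the spawn start method (macOS/Windows): run_bot is a
    # local object and cannot be pickled when sent to the child process.
    def run_bot():
        bot.run()
    p = mp.Process(target=run_bot)
    p.start()
    p.join()

def run_bot(bot):
    bot.run()

def start_bot(bot):
    # Works: run_bot is defined at module level and the bot is passed as
    # an argument, so both can be pickled.
    p = mp.Process(target=run_bot, args=(bot,))
    p.start()
    p.join()

if __name__ == "__main__":
    start_bot(Bot())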
Take a look at the Actions tab on that GitHub repository. Pytest seems to execute correctly (ignoring the exceptions on the webhook test).
Here is a recent run you can use to compare your environment set-up: https://github.com/attzonko/mmpy_bot/runs/4289644769?check_suite_focus=true
GitLab version is 13.6.6.
GitLab Runner version is 11.2.0.
My .gitlab-ci.yml:
image: "python:3.7"
before_script:
- pip install flake8
flake8:
stage: test
script:
- flake8 -max-line-length=79
tags:
- test
The only information obtained from Pipelines is that the script failed, and the output of the failed job is "No job log". How can I get more detailed error output?
Using artifacts can help you.
image: "python:3.7"
before_script:
- pip install flake8
flake8:
stage: test
script:
- flake8 -max-line-length=79
- cd path/to
tags:
- test
artifacts:
when: on_failure
paths:
- path/to/test.log
The log file can be downloaded via the web interface.
Note: using when: on_failure ensures that test.log is only collected if the build fails, saving disk space on successful builds.
I'm working on a project where I'm trying to set up an HTTP server in C#. The responses from the server are tested using the pytest module.
This is what I've done so far:
Define the API using swagger editor
generate base code using swagger generator
write some Python tests which send requests to the server and check whether or not the responses fulfil certain requirements
I now want to set up CI on gitlab before I start actually writing the functions that correspond to the routes I've defined earlier. I set up a Runner on my local machine (it's later going to be on a dedicated server) using Docker.
As I am new to CI, there are a few questions I'm struggling with:
As I need both Python and .NET for testing, should I choose .NET as the base image and then install Python, or Python as the base image and then install .NET? Which would be easier? I tried the latter but it doesn't seem very elegant...
Should I build before I push to the remote repository and include the /bin folder in my repository so those files can be executed, or should I rather build during CI and therefore not have to push anything but source code?
I know those questions are a little broad, but as I am new to CI and also Docker, I'm looking for advice on how to follow best practices (if there are any).
The base image for a runner is just the default if you don't specify one in your .gitlab-ci.yml file. You can override the runner's default image by using a "pipeline default" image at the top of your .gitlab-ci.yml file (outside of any jobs), or you can specify the image for each job individually.
Using a "pipeline default" image:
image: python:latest

stages:
  - build
...
In this example, all jobs will use the python:latest image unless the job specifies its own image, like in this example:
stages:
  - build
  - test

Build Job:
  stage: build
  image: python:latest
  script:
    - ...
Here, this job overrides the runner's default. You can also combine a "pipeline default" image with per-job images:
image: python:latest

stages:
  - build
  - db_setup

Build Job:
  stage: build
  script:
    - # run some build steps

Database Setup Job:
  stage: db_setup
  image: mysql:latest
  script:
    - mysql -h my-host.example.com -u my-user -pmy-password -e "create database my-database;"
In this example, we have a "pipeline default" image that the "Build Job" uses since it doesn't specify its own image, but the "Database Setup Job" uses the mysql:latest image.
Here's an example where the runner's default image is ruby:latest
stages:
  - build
  - test

Build Job:
  stage: build
  script:
    - # run some build steps

Test Job:
  stage: test
  image: golang:latest
  script:
    - # run some tests
In this last example, the "Build Job" uses the runner's base image, ruby:latest, but the "Test Job" uses golang:latest.
For your second question, it's up to you, but the convention is to only commit source code and not dependencies/compiled resources; again, it's just a convention.
I have GitHub Actions that build and test my Python application. I am also using pytest-cov to generate a code coverage report. This report is being uploaded to codecov.io.
I know that codecov.io can't fail your build if the coverage drops, so how do I go about failing the build with GitHub Actions when the coverage drops? Do I have to check the previous values and compare them with the new ones "manually" (i.e. write a script), or is there an existing solution for this?
One solution is to have a job with two steps:
Check whether the coverage has dropped or not.
Build, depending on the result of step 1.
If step 1 fails, there is no build.
You can write a Python script that returns an error if the coverage drops (a sketch of such a script follows the workflow below).
Try something like this:
jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Set Up Python
        uses: actions/setup-python@v1
      - name: Test Coverage
        run: python check_coverage.py
      - name: Build
        if: success()
        run: python do_something.py # <= here you're doing your build
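For completeness, here is a rough sketch of what check_coverage.py could look like. The baseline file name and the use of a Cobertura-style coverage.xml (as produced by coverage xml or pytest-cov) are assumptions on my side, not something prescribed by the question:

# check_coverage.py - hypothetical sketch: fail the job if coverage drops
# below the value recorded on the previous run.
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

BASELINE_FILE = Path("coverage_baseline.txt")  # assumed name, stored in the repo

def current_coverage(report="coverage.xml"):
    # Cobertura-style reports expose an overall "line-rate" attribute
    # on the root element, e.g. 0.87 for 87 % line coverage.
    root = ET.parse(report).getroot()
    return float(root.attrib["line-rate"]) * 100

def main():
    coverage = current_coverage()
    baseline = float(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else 0.0
    print(f"coverage: {coverage:.2f}% (baseline: {baseline:.2f}%)")
    if coverage < baseline:
        print("Coverage dropped - failing the job.")
        sys.exit(1)
    # Record the new value so the next run compares against it.
    BASELINE_FILE.write_text(f"{coverage:.2f}")

if __name__ == "__main__":
    main()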
I hope it helps.
There is nothing built-in. If you don't want to write a custom script, you should use one of the many integrations, like SonarQube.
I have a Python script that I am trying to run as part of the GitLab Pages deployment of a Jekyll site. My site has blog posts with various tags, and the Python script generates the .md files for the tag pages. The script works perfectly fine when I just run it manually in an IDE, however I want it to be part of the GitLab CI deployment process.
Here is what my .gitlab-ci.yml setup looks like:
run:
  image: python:latest
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - public
  only:
    - master

pages:
  image: ruby:2.3
  stage: deploy
  script:
    - bundle install
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  only:
    - master
However, it doesn't actually create the files that it's supposed to create. Here is the output from the job "run":
...
Cloning repository...
Cloning into '/builds/username/projectname'...
Checking out 4c8a47fe as master...
Skipping Git submodules setup
$ python tag_generator.py
Tags generated, count 23
Uploading artifacts...
WARNING: public: no matching files
ERROR: No files to upload
Job succeeded
The script prints "Tags generated, count ___" once it's executed, so it is running; however, the files it's supposed to create aren't being created/uploaded into the right directory. There is a /tag directory in the root project folder; that is where they are supposed to go.
I realize that the issue must have something to do with the public folder, however when I don't have
artifacts:
  paths:
    - public
it still doesn't create the files in the /tag directory, so it doesn't work whether I include - public or not, and I don't know what the problem is.
I FIGURED IT OUT!
The "build" for the project isn't made in the repo; GitLab clones the repo into another place, so I had to change the artifact path for the Python job so that it points into the cloned "build" location, like so:
run:
  image: python:latest
  stage: test
  before_script:
    - python -V # Print out python version for debugging
    - pip install virtualenv
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - /builds/username/projectname/tag
  only:
    - master
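For reference, a tag generator along these lines would write its pages relative to the project root so that the artifact path matches where the files end up. This is only a hypothetical sketch with made-up tag names, not the asker's actual script; CI_PROJECT_DIR is the GitLab CI variable that points at the cloned checkout (e.g. /builds/username/projectname):

# tag_generator.py - hypothetical sketch, not the asker's actual script.
# Writes one Markdown page per tag into <project root>/tag/ so that an
# artifacts path such as "tag" (or an absolute /builds/... path) matches
# where the files are created.
import os
from pathlib import Path

project_root = Path(os.environ.get("CI_PROJECT_DIR", "."))
tag_dir = project_root / "tag"
tag_dir.mkdir(exist_ok=True)

tags = ["python", "gitlab", "jekyll"]  # placeholder; normally collected from the posts
for tag in tags:
    (tag_dir / f"{tag}.md").write_text(f"---\nlayout: tag\ntag: {tag}\n---\n")

print(f"Tags generated, count {len(tags)}")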