How to get detailed error information when gitlab-ci fails - python

Gitlab version is 13.6.6
Gitlab-runner version is 11.2.0
my .gitlab-ci.yml:
image: "python:3.7"

before_script:
  - pip install flake8

flake8:
  stage: test
  script:
    - flake8 --max-line-length=79
  tags:
    - test
The only information shown in Pipelines is "script failure", and the output of the failed job is "No job log". How can I get more detailed error output?

Using artifacts can help you. Note that flake8 prints to stdout by default, so the script has to write its findings to the log file that the artifact collects (flake8's --output-file option does this):
image: "python:3.7"

before_script:
  - pip install flake8

flake8:
  stage: test
  script:
    - flake8 --max-line-length=79 --output-file=path/to/test.log
  tags:
    - test
  artifacts:
    when: on_failure
    paths:
      - path/to/test.log
The log file can then be downloaded via the web interface.
Note: using when: on_failure ensures that test.log is collected only when the build fails, saving disk space on successful builds.
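If your GitLab instance is recent enough (artifacts:expose_as was added in GitLab 12.5, so the asker's 13.6.6 qualifies), the report can also be linked directly from merge requests. A minimal sketch, assuming the job writes its report to flake8.log (a hypothetical file name, not from the original answer):

```yaml
flake8:
  stage: test
  script:
    - flake8 --max-line-length=79 --output-file=flake8.log
  tags:
    - test
  artifacts:
    when: on_failure
    expose_as: 'flake8 report'
    paths:
      - flake8.log
```

With expose_as, the merge request widget shows a link to the exposed artifact, so reviewers do not have to dig through the job page.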

How to install python and run a python file in a gitlab job

image: mcr.microsoft.com/dotnet/core/sdk:3.1

.deploy: &deploy
  before_script:
    - apt-get update -y
  script:
    - cd source/
    - pip install -r requirements.txt
    - python build_file.py > swagger.yml
I want to run build_file.py and write its output to swagger.yml. To run the file I need Python installed in the job. How can I do that?
You can use a different Docker image for each job, so you can split your deployment stage into multiple jobs. In one use the python:3 image for example to run pip and generate the swagger.yml, then define it as an artifact that will be used by the next jobs.
Example (untested!) snippet:
deploy-swagger:
  image: python:3
  stage: deploy
  script:
    - cd source/
    - pip install -r requirements.txt
    - python build_file.py > swagger.yml
  artifacts:
    paths:
      - source/swagger.yml

deploy-dotnet:
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  stage: deploy
  dependencies:
    - deploy-swagger
  script:
    - ls -l source/swagger.yml
    - ...
You could (probably should) also make the swagger generation part of an earlier stage and set an expiration for the artifact. See this blog post for an example.
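A sketch of that variant, with the generation moved to an earlier stage and an expiring artifact (the stage names and the one-week expiry are assumptions, not from the original answer):

```yaml
stages:
  - build
  - deploy

build-swagger:
  image: python:3
  stage: build
  script:
    - cd source/
    - pip install -r requirements.txt
    - python build_file.py > swagger.yml
  artifacts:
    paths:
      - source/swagger.yml
    expire_in: 1 week

deploy-dotnet:
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  stage: deploy
  dependencies:
    - build-swagger
  script:
    - ls -l source/swagger.yml
```

expire_in keeps old swagger.yml artifacts from accumulating on the server; GitLab deletes them after the given duration.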

how to template python tasks in azure devops pipelines

I have two repositories A & B.
Azure Repository A - Contains a python app
Azure Repository B - Contains .yml templates and .py scripts I want to run in the .yml templates
According to the documentation, I cannot do this: when the template is expanded into calling repository A's pipeline, it works like an include directive and just injects the code. It does not know or care about the .py files in repository B.
What are my options, short of writing all my .py routines inline?
Azure Repo A's Pipeline Yaml file
trigger: none

resources:
  pipelines:
    - pipeline: my_project_a_pipeline
      source: trigger_pipeline
      trigger:
        branches:
          include:
            - master
  repositories:
    - repository: template_repo_b
      type: git
      name: template_repo_b
      ref: main

stages:
  - template: pipelines/some_template.yml@template_repo_b
    parameters:
      SOME_PARAM_KEY: "some_param_value"
Azure Repo B's some_template.yml
parameters:
  - name: SOME_PARAM_KEY
    type: string

stages:
  - stage: MyStage
    displayName: "SomeStage"
    jobs:
      - job: "MyJob"
        displayName: "MyJob"
        steps:
          - bash: |
              echo Bashing
              ls -la
            displayName: 'Execute Warmup'
          - task: PythonScript@0
            inputs:
              scriptSource: "filePath"
              scriptPath: /SOME_PATH_ON_REPO_B/my_dumb_script.py
              script: "my_dumb_script.py"
Is there an option to wire the .py files into a completely separate repo C, add C to the resources of B's templates, and be on my way?
EDIT:
I can see In Azure templates repository, is there a way to mention repository for a filePath parameter of azure task 'pythonScript'?, but then how do I consume the Python package? Can I still use the PythonScript task? It sounds like I would need to call my pip-packaged code straight from bash.
I figured it out: how to pip install .py files in Azure DevOps pipelines, using Azure repositories, via a template in the same repo.
just add a reference to yourself at the top of any template
In the consuming repo
resources:
  repositories:
    - repository: this_template_repo
      type: git
      name: this_template_repo
      ref: master
then add a job, referencing yourself by that name
- job: "PIP_INSTALL_LIBS"
  displayName: "pip install libraries to agent"
  steps:
    - checkout: this_template_repo
      path: this_template_repo
    - bash: |
        python3 -m pip install setuptools
        python3 -m pip install -e $(Build.SourcesDirectory)/somepypimodule/src --force-reinstall --no-deps
      displayName: 'pip install pip package'
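With the template repository checked out this way, its sources land under $(Agent.BuildDirectory)/<path> (the checkout path is relative to the agent's build directory), so the earlier PythonScript question can be addressed too: a task can point straight at the checked-out file. A sketch, assuming my_dumb_script.py sits at the repo root (the path is hypothetical):

```yaml
steps:
  - checkout: this_template_repo
    path: this_template_repo
  - task: PythonScript@0
    inputs:
      scriptSource: filePath
      scriptPath: $(Agent.BuildDirectory)/this_template_repo/my_dumb_script.py
```

This avoids inlining the Python code in the template while still running it with the standard PythonScript task.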

Azure devops pipelines cache python dependencies

I want to cache the dependencies in requirements.txt. See https://learn.microsoft.com/en-us/azure/devops/pipelines/release/caching?view=azure-devops#pythonpip. Here is my azure-pipelines.yml:
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/python

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    Python38:
      python.version: '3.8'

variables:
  PIP_CACHE_DIR: $(Pipeline.Workspace)/.pip

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- task: Cache@2
  inputs:
    key: 'python | "$(Agent.OS)" | requirements.txt'
    restoreKeys: |
      python | "$(Agent.OS)"
      python
    path: $(PIP_CACHE_DIR)
  displayName: Cache pip packages

- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'

- script: |
    pip install pytest pytest-azurepipelines
    pytest
  displayName: 'pytest'
The dependencies specified in my requirements.txt are installed on every pipeline run.
The pipeline task Cache@2 gives the following output:
Starting: Cache pip packages
==============================================================================
Task : Cache
Description : Cache files between runs
Version : 2.0.1
Author : Microsoft Corporation
Help : https://aka.ms/pipeline-caching-docs
==============================================================================
Resolving key:
- python [string]
- "Linux" [string]
- requirements.txt [file] --> EBB7474E7D5BC202D25969A2E11E0D16251F0C3F3F656F1EE6E2BB7B23868B10
Resolved to: python|"Linux"|jNwyZU113iWcGlReTrxg8kzsyeND5OIrPLaN0I1rRs0=
Resolving restore key:
- python [string]
- "Linux" [string]
Resolved to: python|"Linux"|**
Resolving restore key:
- python [string]
Resolved to: python|**
ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session 85b76fe3-b469-4330-a584-db569bc45342
Getting a pipeline cache artifact with one of the following fingerprints:
Fingerprint: `python|"Linux"|jNwyZU113iWcGlReTrxg8kzsyeND5OIrPLaN0I1rRs0=`
Fingerprint: `python|"Linux"|**`
Fingerprint: `python|**`
There is a cache miss.
ApplicationInsightsTelemetrySender correlated 1 events with X-TFS-Session 85b76fe3-b469-4330-a584-db569bc45342
Finishing: Cache pip packages
Enabling system diagnostics and viewing the log of "Post-job: Cache pip packages" showed why no cache was created:
##[debug]Evaluating condition for step: 'Cache pip packages'
##[debug]Evaluating: AlwaysNode()
##[debug]Evaluating AlwaysNode:
##[debug]=> True
##[debug]Result: True
Starting: Cache pip packages
==============================================================================
Task : Cache
Description : Cache files between runs
Version : 2.0.1
Author : Microsoft Corporation
Help : https://aka.ms/pipeline-caching-docs
==============================================================================
##[debug]Skipping because the job status was not 'Succeeded'.
Finishing: Cache pip packages
There were failing tests in the build pipeline. The cache was used after I removed the failing tests.

Push Docker image to registry before using for azure pipelines

For my tests in azure-pipelines, I want to use a container that I then push to Docker Hub.
Currently, the steps are the following:
Pull the image from the registry
Run the tests
Push the new image, with the new commits to the code, to the registry
The problem: the image pulled from the registry contains the previous code, not the code I am testing.
What I want to do:
First, deploy the image with the new code to the Docker registry
Then, run steps 1 to 3 above, so that the image I pull is up to date.
Here is my current code:
trigger:
- master

resources:
  containers:
  - container: moviestr_backend
    image: nolwenbrosson/cicd:moviestr_backend-$(SourceBranchName)
    ports:
    - 5000:5000
  - container: backend_mongo
    image: mongo
    ports:
    - 27017:27017

pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    Python37:
      python.version: '3.7'

services:
  moviestr_backend: moviestr_backend
  backend_mongo: backend_mongo

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
    pip install -r requirements.dev.txt
    pip install pytest-azurepipelines
  displayName: 'Install dependencies'

- script: |
    python -m pytest
  displayName: 'Make Unit tests'

- task: Docker@2
  displayName: Login to Docker Hub
  inputs:
    command: login
    containerRegistry: cicd

- task: Docker@2
  displayName: Build and Push
  inputs:
    command: buildAndPush
    repository: nolwenbrosson/cicd
    tags: |
      moviestr_backend-master

- task: Docker@2
  displayName: Logout of ACR
  inputs:
    command: logout
    containerRegistry: cicd
The problem is that resources is evaluated once for the whole pipeline, so the image is pulled at the beginning, not after I build it with my up-to-date code. So, how can I do this?
You could try to separate the docker build and push tasks in your scenario.
First, docker build your image with the changed code, then docker run the newly built image and run your tests against it.
Finally, docker push it once the tests pass.
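A sketch of what that split could look like with the Docker@2 task (untested; the image name and pytest command are carried over from the question as assumptions, and the exact local tag of the built image may vary with the service connection):

```yaml
- task: Docker@2
  displayName: Build
  inputs:
    command: build
    containerRegistry: cicd
    repository: nolwenbrosson/cicd
    tags: |
      moviestr_backend-master

# run the tests against the image that was just built, before it is pushed
- script: |
    docker run --rm nolwenbrosson/cicd:moviestr_backend-master python -m pytest
  displayName: 'Test the freshly built image'

- task: Docker@2
  displayName: Push
  inputs:
    command: push
    containerRegistry: cicd
    repository: nolwenbrosson/cicd
    tags: |
      moviestr_backend-master
```

Because build and push are separate steps, a test failure between them stops the pipeline before the stale-free image ever reaches the registry.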

Is there a Python/Django equivalent to Rails bundler-audit?

I'm fairly new to Django so apologies in advance if this is obvious.
In Rails projects, I use a gem called bundler-audit to check that the patch level of the gems I'm installing don't include security vulnerabilities. Normally, I incorporate running bundler-audit into my CI pipeline so that any time I deploy, I get a warning (and fail) if a gem has a security vulnerability.
Is there a similar system for checking vulnerabilities in Python packages?
After writing out this question, I searched around some more and found Safety, which was exactly what I was looking for.
In case anyone else is setting up CircleCI for a Django project and wants to check their packages for vulnerabilities, here is the configuration I used in my .circleci/config.yml:
version: 2
jobs:
  build:
    # build and run tests
  safety_check:
    docker:
      - image: circleci/python:3.6.1
    steps:
      - checkout
      - run:
          command: |
            python3 -m venv env3
            . env3/bin/activate
            pip install safety
            # specify requirements.txt
            safety check -r requirements.txt
  merge_master:
    # merge passing code into master
workflows:
  version: 2
  test_and_merge:
    jobs:
      - build:
          filters:
            branches:
              ignore: master
      - safety_check:
          filters:
            branches:
              ignore: master
      - merge_master:
          filters:
            branches:
              only: develop
          requires:
            - build
            # code is only merged if safety check passes
            - safety_check
To check that this works, run pip install insecure-package && pip freeze > requirements.txt, then push and watch Circle fail.
