I have a Travis integration in my project with the build language set to Python. I want to integrate Postman tests, which require a Node installation. Should I create a separate build for this, or is there a way to accommodate it in the same build? I tried adding a new env, but apparently I was getting a tox error.
This is a broad guideline. Ideally:
- The jobs are atomic (each one handles its own setup and run)
- You install npm, or configure Travis to have npm set up alongside Python
- You run your jobs in the order you desire (usually Newman last), as in the sketch below
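If you prefer to stay on a single Python-language build, one common pattern is to install Node in before_install. A minimal sketch (assuming Travis's default Linux image, which at the time of writing ships with nvm; the collection file name is a placeholder):

language: python
python:
  - "3.8"
before_install:
  - nvm install 12            # nvm comes preinstalled on Travis Linux images (assumption)
  - npm install -g newman
install:
  - pip install -r requirements.txt
script:
  - python -m unittest discover     # Python tests first
  - newman run collection.json      # then the Postman collection (placeholder name)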
It seems nowadays you can have a language per job in Travis. Check: https://docs.travis-ci.com/user/build-matrix/#using-different-programming-languages-per-job. For example:
dist: xenial
language: php
php:
  - '5.6'
jobs:
  include:
    - language: python
      python: 3.8
      script:
        - python -c "print('Hi from Python!')"
    - language: node_js
      node_js: 12
      script:
        - npm i newman -g
        - newman run COLLECTION
So this will likely allow you to keep a single build + test run.
I have a GUI program I'm managing, written in Python. For the sake of not having to worry about environments, it's distributed as an executable built with PyInstaller. I can run this build from a function defined in the module as MyModule.build() (because to me it makes more sense to manage that script alongside the project itself).
I want to automate this to some extent, such that when a new release is added on GitLab, it can be built and attached to the release by a runner. The approach I currently have is functional but hacky:
I use the GitLab API to download the source of the tag for the release. I run python -m pip install -r {requirements_path} and python -m pip install {source_path} in the runner's environment, then import and run the MyModule.build() function to generate an executable, which is then uploaded and linked to the release with the GitLab API.
Obviously the middle section is wanting. What are best approaches for similar projects? Can the package and requirements be installed in a separate venv from the one the runner script is running in?
One workflow would be to push a tag to create your release. The following jobs have a rules: configuration so they only run on tag pipelines.
One job will build the executable file. Another job will create the GitLab release using the file created in the first job.
build:
  rules:
    - if: "$CI_COMMIT_TAG" # Only run when tags are pushed
  image: python:3.9-slim
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  cache: # https://docs.gitlab.com/ee/ci/caching/#cache-python-dependencies
    paths:
      - .cache/pip
      - venv/
  script:
    - python -m venv venv
    - source venv/bin/activate
    - python -m pip install -r requirements.txt # package requirements
    - python -m pip install pyinstaller # build requirements
    - pyinstaller --onefile --name myapp mypackage/__main__.py
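    # Alternatively (assuming your MyModule.build() entry point writes the
    # executable to dist/, as the question describes), you could call it here
    # instead of the pyinstaller CLI:
    # - python -c "import MyModule; MyModule.build()"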
  artifacts:
    paths:
      - dist
create_release:
  rules:
    - if: "$CI_COMMIT_TAG"
  needs: [build]
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script: # zip/upload your binary wherever it should be downloaded from
    - echo "Uploading release!"
  release: # create GitLab release
    tag_name: $CI_COMMIT_TAG
    name: 'Release of myapp version $CI_COMMIT_TAG'
    description: 'Release created using the release-cli.'
    assets: # link uploaded asset(s) to the release
      links:
        - name: 'release-zip'
          url: 'https://example.com/downloads/myapp/$CI_COMMIT_TAG/myapp.zip'
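For the upload step itself, one option is GitLab's generic package registry; the needs: [build] line already makes the dist/ artifact available in this job. A minimal sketch to replace the echo placeholder above (assuming the release-cli image is Alpine-based, and using the placeholder names myapp / myapp.zip):

  script:
    - apk add --no-cache zip curl   # assumption: Alpine-based release-cli image
    - zip -j myapp.zip dist/myapp
    - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file myapp.zip "$CI_API_V4_URL/projects/$CI_PROJECT_ID/packages/generic/myapp/$CI_COMMIT_TAG/myapp.zip"'

The release's assets url can then point at that same package URL instead of the example.com placeholder.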
I am trying to add a Python linting step to an Azure DevOps pipeline using Super-Linter. I followed the instructions here to add the following step to my pipeline:
jobs:
- job: PythonLint
  displayName: Python Lint
  pool:
    vmImage: ubuntu-latest
  steps:
  - script: |
      docker pull github/super-linter:latest
      docker run -e RUN_LOCAL=true -e VALIDATE_PYTHON_PYLINT=true -v $(System.DefaultWorkingDirectory):/tmp/lint github/super-linter
    displayName: 'Code Scan using GitHub Super-Linter'
The Super-Linter GitHub docs page describes passing environment variables to tell it which linters to run. I pass the variable VALIDATE_PYTHON_PYLINT to tell it to run pylint. When I run the pipeline, I get a load of errors of the form E0401: Unable to import 'pyspark.sql' (import-error).
Pylint's configuration page says the modules should be passed as command line arguments.
My question is, how can I configure Super-Linter to pass these arguments or otherwise tell pylint which modules to import?
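One thing worth trying (a sketch based on Super-Linter's documented defaults, worth double-checking against the current README): Super-Linter reads pylint settings from .github/linters/.python-lint in your repository, so you can suppress the import errors for packages that are not installed inside the lint container:

[MESSAGES CONTROL]
# pyspark is not installed in the Super-Linter container, so E0401 fires;
# disabling import-error silences it for the whole scan
disable=import-error

Alternatively, the PYTHON_PYLINT_CONFIG_FILE environment variable can be passed (like the other -e variables above) to point Super-Linter at a differently named pylint config file.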
I have installed a Windows agent and I need to be able to run Python scripts. I know I need to install Python, but I have no idea how.
I added the Python files from a standard installation to
$AGENT_TOOLSDIRECTORY/
    Python/
        3.8.2/
            x64/
                {tool files}
            x64.complete
I restarted the agent, but what now? How do I add it to the agent's capabilities? What am I missing?
EDIT:
I need to run these YAML tasks:
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
    addToPath: true
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'
- task: BatchScript@1
  displayName: 'Run script make.bat'
  inputs:
    filename: make.bat
    arguments: html
I have set up a self-hosted agent on a Windows 10 laptop (for which I have admin access), and I'm running Azure DevOps Express 2020.
I found, downloaded and installed an agent according to the instructions at Download and configure the agent. I used vsts-agent-win-x64-2.170.1.zip and set it up to run as a service (I guess anyone running it manually needs to double-check that it's running at show time). I also ran the install command as admin in PowerShell!
To install a Python version, I need to download the appropriate installer from the ftp site at Python.org, e.g. for 3.7.9 I've used python-3.7.9-amd64.exe.
I then run this from the command line (CMD run as administrator) without UI with:
python-3.7.9-amd64.exe /quiet InstallAllUsers=0 TargetDir=%AGENT_TOOLSDIRECTORY%\Python\3.7.9\x64 Include_launcher=0
(other install options are available in the Python docs)
Once this is complete (it runs in the background, so it will finish some time after the initial command returns), you need to create an empty {platform}.complete file (as described here); in my case this is x64.complete.
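For example, from an elevated CMD prompt the empty marker file can be created like this (assuming the TargetDir used above; note the file sits next to the x64 directory, not inside it):

type NUL > %AGENT_TOOLSDIRECTORY%\Python\3.7.9\x64.complete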
This then worked! I did restart the server for this first version, but I've added other Python versions since without needing to. My pipeline task was simply:
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python $(python.version)'
  inputs:
    versionSpec: '$(python.version)'
(with a variable python.version set up as a list of versions: 3.7.9, 3.8.8)
One key element for me was the file structure: where the documentation says {tool files}, this means the python.exe file and other common dirs such as Lib and Scripts. I initially installed these in a sub-dir, which didn't work. So it should look like this:
$AGENT_TOOLSDIRECTORY/
    Python/
        3.7.9/
            x64/
                Doc/
                Lib/
                Scripts/
                python.exe
                ...etc...
            x64.complete
To be honest I'm mostly relieved that this worked without too much trouble. I gave up trying to get Artifacts to work on-prem. In my limited experience all of this is much easier, and better, on the cloud version. Haven't yet persuaded my employer to take that leap however...
For this issue, in order to use the Python version installed on your on-premises machine, you either need to point to the python.exe physical path in a CMD task, or add the python.exe path to the PATH environment variable manually in a PowerShell task. For example:
To use local Python in a PowerShell task:
$env:Path += ";c:\{local path to}\Python\Python38\; c:\{local path to}\Python\Python38\Scripts\"
python -V
To use Python in a CMD task:
c:\{local path to}\Python\Python38\python.exe -V
c:\{local path to}\Python\Python38\Scripts\pip.exe install
So, to run a Python script with a private agent, just make sure Python is installed locally, then point to the python.exe path. You can refer to this case for details.
I added these 4 tasks before being able to execute Python on my pipeline with a vs2017-win2016 agent:
Use Python 3.x
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.x'
Use Pip Authenticate
steps:
- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
Use Commandline task
steps:
- script: |
    python -m pip install --upgrade pip setuptools wheel
  failOnStderr: true
  displayName: 'install pip for setup of python framework'
Use Commandline task
steps:
- script: 'pip install -r _python-test-harness/requirements.txt'
  failOnStderr: true
  displayName: 'install python framework project''s specific requirements'
Hope that helps
I have a travis job that looks like this:
jobs:
  include:
    - stage: "Unit tests"
      language: python
      python:
        - "3.6"
        - "3.7"
      install:
        - pip install -r requirements.txt
      script:
        - python -m unittest test.client
I would expect this to run two jobs, one for Python 3.6 and one for 3.7, but it only ever runs the first version listed. Am I missing something here? I followed the guide from the docs.
Thanks
The Python versions should not be defined within the jobs but at the root level.
python:
  - "3.6"
  - "3.7"
jobs:
  ...
I found this out because Travis recently introduced a build config validation. It can be found under your build -> View config -> Build config validation.
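Putting it together, a corrected config for the example above might look like this (a sketch; Travis expands the root-level python list into one job per version):

language: python
python:
  - "3.6"
  - "3.7"
install:
  - pip install -r requirements.txt
script:
  - python -m unittest test.client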
I'm fairly new to Django so apologies in advance if this is obvious.
In Rails projects, I use a gem called bundler-audit to check that the gems I'm installing don't include security vulnerabilities at their current patch level. Normally, I incorporate running bundler-audit into my CI pipeline so that any time I deploy, I get a warning (and a failed build) if a gem has a security vulnerability.
Is there a similar system for checking vulnerabilities in Python packages?
After writing out this question, I searched around some more and found Safety, which was exactly what I was looking for.
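For reference, the core check is just two commands, which you can try locally before wiring it into CI (Safety is installed from PyPI):

pip install safety
safety check -r requirements.txt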
In case anyone else is setting up CircleCI for a Django project and wants to check their packages for vulnerabilities, here is the configuration I used in my .circleci/config.yml:
version: 2
jobs:
  build:
    # build and run tests
  safety_check:
    docker:
      - image: circleci/python:3.6.1
    steps:
      - checkout
      - run:
          command: |
            python3 -m venv env3
            . env3/bin/activate
            pip install safety
            # specify requirements.txt
            safety check -r requirements.txt
  merge_master:
    # merge passing code into master
workflows:
  version: 2
  test_and_merge:
    jobs:
      - build:
          filters:
            branches:
              ignore: master
      - safety_check:
          filters:
            branches:
              ignore: master
      - merge_master:
          filters:
            branches:
              only: develop
          requires:
            - build
            # code is only merged if safety check passes
            - safety_check
To check that this works, run pip install insecure-package && pip freeze > requirements.txt, then push and watch for Circle to fail.