I have a GUI program, written in Python, that I maintain. For the sake of not having to worry about environments, it's distributed as an executable built with PyInstaller. I run this build from a function defined in the module itself, MyModule.build(), because to me it makes more sense to manage that build script alongside the project.
I want to automate this to some extent, such that when a new release is added on Gitlab, it can be built and attached to the release by a runner. The approach I currently have to this is functional but hacky:
I use the GitLab API to download the source of the tag for the release. I run python -m pip install -r {requirements_path} and python -m pip install {source_path} in the runner's environment. Then I import and run the MyModule.build() function to generate an executable, which is then uploaded and linked to the release with the GitLab API.
Obviously the middle section is the weak point. What are the best approaches for similar projects? Can the package and its requirements be installed in a separate venv from the one the runner script is running in?
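To make that last question concrete, something like the following is what I have in mind (a sketch only; {requirements_path} and {source_path} are the same placeholders as above):

script:
  # create a throwaway venv so the installs don't touch the runner script's own environment
  - python -m venv build-env
  - ./build-env/bin/python -m pip install -r {requirements_path}
  - ./build-env/bin/python -m pip install {source_path}
  # call the build function with the venv's interpreter to produce the executable
  - ./build-env/bin/python -c "import MyModule; MyModule.build()"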
One workflow would be to push a tag to create your release. The following jobs have a rules: configuration so they only run on tag pipelines.
One job will build the executable file. Another job will create the GitLab release using the file created in the first job.
build:
  rules:
    - if: "$CI_COMMIT_TAG"  # Only run when tags are pushed
  image: python:3.9-slim
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  cache:  # https://docs.gitlab.com/ee/ci/caching/#cache-python-dependencies
    paths:
      - .cache/pip
      - venv/
  script:
    - python -m venv venv
    - source venv/bin/activate
    - python -m pip install -r requirements.txt  # package requirements
    - python -m pip install pyinstaller  # build requirements
    - pyinstaller --onefile --name myapp mypackage/__main__.py
  artifacts:
    paths:
      - dist
create_release:
  rules:
    - if: "$CI_COMMIT_TAG"
  needs: [build]
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:  # zip/upload your binary wherever it should be downloaded from
    - echo "Uploading release!"
  release:  # create GitLab release
    tag_name: $CI_COMMIT_TAG
    name: 'Release of myapp version $CI_COMMIT_TAG'
    description: 'Release created using the release-cli.'
    assets:  # link uploaded asset(s) to the release
      links:
        - name: 'release-zip'
          url: 'https://example.com/downloads/myapp/$CI_COMMIT_TAG/myapp.zip'
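One way to fill in the upload step is to push the PyInstaller output to the project's generic package registry and point the release asset link at the same URL. A sketch, assuming curl is available in the job image (otherwise do the upload in the build job instead); the upload and asset parts of create_release could then look roughly like:

script:
  # upload the binary produced by the build job (fetched via needs: [build])
  - curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file dist/myapp "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/myapp/${CI_COMMIT_TAG}/myapp"
release:
  tag_name: $CI_COMMIT_TAG
  assets:
    links:
      - name: 'myapp'
        url: '${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/myapp/${CI_COMMIT_TAG}/myapp'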
I have a project on ReadTheDocs that I'm trying to build. I'm using a very basic .readthedocs.yaml file that reads:
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-20.04
  tools:
    python: "3.9"
    # You can also specify other tool versions:
    # nodejs: "16"
    # rust: "1.55"
    # golang: "1.17"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  builder: html
  configuration: docs/source/conf.py
  fail_on_warning: true

# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
#   - pdf

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: docs/requirements.txt

conda:
  environment: environment.yml
Unfortunately, the RTD build logs seem to tell me that after cloning and writing out the environment.yml file, the build process runs python env create --quiet --name develop --file environment.yml. This obviously fails with "No such file or directory" (Error 2) since, well, no file or directory named env exists in the directory structure. Shouldn't RTD be running conda env create here? How do I make it do the right thing?
Thanks,
Eli
This problem is described in https://github.com/readthedocs/readthedocs.org/issues/8595
In summary, python: "3.9" now means that CPython 3.9 + venv are used, regardless of conda.environment.
If one wants to use a conda environment, python: "miniconda3-4.7" or python: "mambaforge-4.10" needs to be specified instead.
In the future, a better error message should be shown at least. Feel free to upvote the issue.
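For example, the build section of the config above could be changed to something like this (a sketch; check the Read the Docs documentation for the tool versions currently available):

build:
  os: ubuntu-20.04
  tools:
    python: "mambaforge-4.10"

conda:
  environment: environment.yml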
I have two repositories A & B.
Azure Repository A - Contains a python app
Azure Repository B - Contains .yml templates and .py scripts I want to run in the .yml templates
According to the documentation, I cannot do this, because when the template is expanded into the calling repository A's pipeline it acts like an include directive and just injects the YAML; it does not know or care about the .py files in repository B.
What are my options, short of writing all my .py routines inline?
Azure Repo A's pipeline YAML file:
trigger: none
resources:
  pipelines:
    - pipeline: my_project_a_pipeline
      source: trigger_pipeline
      trigger:
        branches:
          include:
            - master
  repositories:
    - repository: template_repo_b
      type: git
      name: template_repo_b
      ref: main
stages:
  - template: pipelines/some_template.yml@template_repo_b
    parameters:
      SOME_PARAM_KEY: "some_param_value"
Azure Repo B's some_template.yml
parameters:
  - name: SOME_PARAM_KEY
    type: string
stages:
  - stage: MyStage
    displayName: "SomeStage"
    jobs:
      - job: "MyJob"
        displayName: "MyJob"
        steps:
          - bash: |
              echo Bashing
              ls -la
            displayName: 'Execute Warmup'
          - task: PythonScript@0
            inputs:
              scriptSource: "filePath"
              scriptPath: /SOME_PATH_ON_REPO_B/my_dumb_script.py
              script: "my_dumb_script.py"
Is there an option to wire the .py files into a completely separate repo C, add C to the resources of B's templates, and be on my way?
EDIT:
I can see In Azure templates repository, is there a way to mention repository for a filePath parameter of azure task 'pythonScript'?, but then how do I consume the Python package? Can I still use the PythonScript task? It sounds like I would then need to call my pip-packaged code straight from bash.
I figured it out: how to pip install .py files in Azure DevOps pipelines, using Azure repositories, via a template in the same repo.
Just add a reference to yourself at the top of any template.
In the consuming repo:
repositories:
  - repository: this_template_repo
    type: git
    name: this_template_repo
    ref: master
Then add a job, referencing yourself by that name:
- job: "PIP_INSTALL_LIBS"
displayName: "pip install libraries to agent"
steps:
- checkout: this_template_repo
path: this_template_repo
- bash: |
python3 -m pip install setuptools
python3 -m pip install -e $(Build.SourcesDirectory)/somepypimodule/src --force-reinstall --no-deps
displayName: 'pip install pip package'
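With the template repo checked out like that, you can also run a script file from it directly with the PythonScript task. A sketch (the scripts/ sub-folder is hypothetical; the checkout lands under $(Pipeline.Workspace)/this_template_repo because of the path: above):

steps:
  - checkout: this_template_repo
    path: this_template_repo
  - task: PythonScript@0
    inputs:
      scriptSource: "filePath"
      # repo is checked out to $(Pipeline.Workspace)/this_template_repo via path: above; scripts/ is a made-up sub-folder
      scriptPath: "$(Pipeline.Workspace)/this_template_repo/scripts/my_dumb_script.py"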
I have installed a Windows agent and I need to be able to run Python scripts. I know I need to install Python, but I have no idea how.
I added the Python files from a standard installation to:
$AGENT_TOOLSDIRECTORY/
    Python/
        3.8.2/
            x64/
                {tool files}
            x64.complete
I restarted the agent, but what now? How do I add it to the agent's capabilities?
What am I missing?
EDIT:
I need to run these YAML tasks:
steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.x'
      addToPath: true
  - script: |
      python -m pip install --upgrade pip
      pip install -r requirements.txt
    displayName: 'Install dependencies'
  - task: BatchScript@1
    displayName: 'Run script make.bat'
    inputs:
      filename: make.bat
      arguments: html
I have set up a self-hosted agent on a Windows 10 laptop (for which I have admin access), and I'm running Azure DevOps Express 2020.
I found, downloaded and installed an agent according to the instructions at Download and configure the agent. I used vsts-agent-win-x64-2.170.1.zip and set this up to run as a service (I guess anyone running it manually needs to double-check that it's running at show time). I also ran the install command as admin in PowerShell!
To install a Python version I need to download the appropriate installer from the FTP site at Python.org, e.g. for 3.7.9 I've used python-3.7.9-amd64.exe.
I then run this from the command line (CMD run as administrator) without UI with:
python-3.7.9-amd64.exe /quiet InstallAllUsers=0 TargetDir=$AGENT_TOOLSDIRECTORY\Python\3.7.9\x64 Include_launcher=0
(other options for install available in python docs)
Once this is complete (it runs in the background, so it will take longer than the initial command suggests), you need to create an empty {platform}.complete file (as described here); in my case this is x64.complete.
This then worked! I did restart the server for this first version, but I've added other python versions since without needing to. My pipeline task was simply:
steps:
  - task: UsePythonVersion@0
    displayName: 'Use Python $(python.version)'
    inputs:
      versionSpec: '$(python.version)'
(with a variable python.version set up as a list of versions 3.7.9, 3.8.8)
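(In a YAML pipeline, that list of versions would typically be expressed as a matrix; a sketch, with made-up job names:)

strategy:
  matrix:
    Python379:
      python.version: '3.7.9'
    Python388:
      python.version: '3.8.8'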
One key element for me was the file structure: where the documentation says {tool files}, this means the python.exe file and other common dirs such as Lib and Scripts. I initially installed these in a sub-dir, which didn't work. So it should look like this:
$AGENT_TOOLSDIRECTORY/
    Python/
        3.7.9/
            x64/
                Doc/
                Lib/
                Scripts/
                python.exe
                ...etc...
            x64.complete
To be honest I'm mostly relieved that this worked without too much trouble. I gave up trying to get Artifacts to work on-prem. In my limited experience all of this is much easier, and better, on the cloud version. Haven't yet persuaded my employer to take that leap however...
For this issue, in order to use the Python version installed on your on-premises machine, you either need to point to the physical python.exe path in a CMD task, or add the python.exe path to the PATH environment variable manually in a PowerShell task. For example:
To use the local Python in a PowerShell task:
$env:Path += ";c:\{local path to}\Python\Python38\; c:\{local path to}\Python\Python38\Scripts\"
python -V
To use Python in a CMD task:
c:\{local path to}\Python\Python38\python.exe -V
c:\{local path to}\Python\Python38\Scripts\pip.exe install
So, to run a Python script with a private agent, just make sure Python is installed locally, then point to the python.exe path. You can refer to this case for details.
I added these 4 tasks before being able to execute Python in my pipeline with a vs2017-win2016 agent:
Use Python 3.x
steps:
  - task: UsePythonVersion@0
    displayName: 'Use Python 3.x'
Use Pip Authenticate
steps:
  - task: PipAuthenticate@1
    displayName: 'Pip Authenticate'
Use Commandline task
steps:
  - script: |
      python -m pip install --upgrade pip setuptools wheel
    failOnStderr: true
    displayName: 'install pip for setup of python framework'
Use Commandline task
steps:
  - script: 'pip install -r _python-test-harness/requirements.txt'
    failOnStderr: true
    displayName: 'install python framework project''s specific requirements'
Hope that helps
I have a Python script that I am trying to run as part of the GitLab Pages deployment of a Jekyll site. My site has blog posts with various tags, and the Python script generates the .md files for the tag pages. The script works perfectly fine when I run it manually in an IDE; however, I want it to be part of the GitLab CI deployment process.
Here is what my .gitlab-ci.yml setup looks like:
run:
  image: python:latest
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - public
  only:
    - master

pages:
  image: ruby:2.3
  stage: deploy
  script:
    - bundle install
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  only:
    - master
However, it doesn't actually create the files that it's supposed to create. Here is the output from the "run" job:
...
Cloning repository...
Cloning into '/builds/username/projectname'...
Checking out 4c8a47fe as master...
Skipping Git submodules setup
$ python tag_generator.py
Tags generated, count 23
Uploading artifacts...
WARNING: public: no matching files
ERROR: No files to upload
Job succeeded
The script prints "Tags generated, count ___" once it has executed, so it is running; however, the files it's supposed to create aren't being created/uploaded into the right directory. There is a /tag directory in the root project folder; that is where they are supposed to go.
I realize that the issue must have something to do with the public folder; however, even when I don't have
artifacts:
  paths:
    - public
it still doesn't create the files in the /tag directory, so it doesn't work whether I have -public or not, and I don't know what the problem is.
I FIGURED IT OUT!
the "build" for the project isn't made in the repo, gitlab clones the repo into another place, so I had to change the artifact path for the python job so that it's in the cloned "build" location, like so:
run:
  image: python:latest
  stage: test
  before_script:
    - python -V  # Print out python version for debugging
    - pip install virtualenv
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - /builds/username/projectname/tag
  only:
    - master
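Since GitLab clones the project into $CI_PROJECT_DIR (here /builds/username/projectname) and runs the job from that directory, a relative artifact path should work just as well and avoids hard-coding the namespace; a sketch of the relevant part:

artifacts:
  paths:
    - tag/  # relative to $CI_PROJECT_DIR, i.e. the root of the cloned repo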
I'm fairly new to Django so apologies in advance if this is obvious.
In Rails projects, I use a gem called bundler-audit to check that the patch levels of the gems I'm installing don't include known security vulnerabilities. Normally, I incorporate running bundler-audit into my CI pipeline so that any time I deploy, I get a warning (and a failure) if a gem has a security vulnerability.
Is there a similar system for checking vulnerabilities in Python packages?
After writing out this question, I searched around some more and found Safety, which was exactly what I was looking for.
In case anyone else is setting up CircleCI for a Django project and wants to check their packages for vulnerabilities, here is the configuration I used in my .circleci/config.yml:
version: 2
jobs:
  build:
    # build and run tests
  safety_check:
    docker:
      - image: circleci/python:3.6.1
    steps:
      - checkout
      - run:
          command: |
            python3 -m venv env3
            . env3/bin/activate
            pip install safety
            # specify requirements.txt
            safety check -r requirements.txt
  merge_master:
    # merge passing code into master
workflows:
  version: 2
  test_and_merge:
    jobs:
      - build:
          filters:
            branches:
              ignore: master
      - safety_check:
          filters:
            branches:
              ignore: master
      - merge_master:
          filters:
            branches:
              only: develop
          requires:
            - build
            # code is only merged if safety check passes
            - safety_check
To check that this works, run pip install insecure-package && pip freeze > requirements.txt then push and watch for Circle to fail.