How to template Python tasks in Azure DevOps Pipelines

I have two repositories A & B.
Azure Repository A - Contains a python app
Azure Repository B - Contains .yml templates and .py scripts I want to run in the .yml templates
According to the documentation, I cannot do this: when the template is expanded into calling repository A's pipeline, it acts like a preprocessor directive and just injects the YAML. It does not know or care about the .py files in the template repository.
What are my options, short of writing all my .py routines inline?
Azure Repo A's pipeline YAML file:
trigger: none
resources:
  pipelines:
    - pipeline: my_project_a_pipeline
      source: trigger_pipeline
      trigger:
        branches:
          include:
            - master
  repositories:
    - repository: template_repo_b
      type: git
      name: template_repo_b
      ref: main
stages:
  - template: pipelines/some_template.yml@template_repo_b
    parameters:
      SOME_PARAM_KEY: "some_param_value"
Azure Repo B's some_template.yml:
parameters:
  - name: SOME_PARAM_KEY
    type: string
stages:
  - stage: MyStage
    displayName: "SomeStage"
    jobs:
      - job: "MyJob"
        displayName: "MyJob"
        steps:
          - bash: |
              echo Bashing
              ls -la
            displayName: 'Execute Warmup'
          - task: PythonScript@0
            inputs:
              scriptSource: "filePath"
              scriptPath: /SOME_PATH_ON_REPO_B/my_dumb_script.py
Is there an option to wire the .py files into a completely separate repo C, add C to the resources of B's templates, and be on my way?
EDIT:
I can see "In Azure templates repository, is there a way to mention repository for a filePath parameter of azure task 'pythonScript'?", but then how do I consume the python package? Can I still use the PythonScript task? It sounds like I would then need to call my pip-packaged code straight from bash.

I figured it out: how to pip install .py files in Azure DevOps pipelines, using Azure Repositories, via a template in the same repo.
Just add a reference to yourself at the top of any template.
In the consuming repo:
resources:
  repositories:
    - repository: this_template_repo
      type: git
      name: this_template_repo
      ref: master
Then add a job that checks out the repo by that name:
- job: "PIP_INSTALL_LIBS"
  displayName: "pip install libraries to agent"
  steps:
    - checkout: this_template_repo
      path: this_template_repo
    - bash: |
        python3 -m pip install setuptools
        python3 -m pip install -e $(Build.SourcesDirectory)/somepypimodule/src --force-reinstall --no-deps
      displayName: 'pip install pip package'
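Once the editable install has run, a later step in the same job (so it lands on the same agent) can import the package directly. A sketch, where the module name and its `run()` entry point are assumptions for illustration, not part of the original answer:

```yaml
    - bash: |
        python3 -c "import somepypimodule; somepypimodule.run()"
      displayName: 'call pip-installed code'
```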


GitHub Action to execute a Python script that creates a file, then commit and push this file

My repo contains a main.py that generates an HTML map and saves results in a CSV. I want the action to:
- execute the python script (this seems to be ok)
- add, commit and push the generated file to the main branch, so that it is available in the page associated with the repo.
name: refresh map
on:
  schedule:
    - cron: "30 11 * * *" # runs at 11:30 UTC every day
jobs:
  getdataandrefreshmap:
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v3 # checkout the repository content to github runner
      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.8 # install the python version needed
      - name: Install dependencies
        run: |
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: execute py script
        uses: actions/checkout@v3
        run: |
          python main.py
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add .
          git commit -m "crongenerated"
          git push
The GitHub Action does not pass when I include the second uses: actions/checkout@v3 and the git commands.
Thanks in advance for your help
If you want to run a script, you don't need an additional checkout step for that. There is a difference between steps that use an action (uses:) and steps that execute shell commands directly (run:). You can read more about it here.
In your configuration file, you mix the two in the last step. You don't need another checkout step because the repo from the first step is still checked out. So you can just use the following workflow:
name: refresh map
on:
  schedule:
    - cron: "30 11 * * *" # runs at 11:30 UTC every day
jobs:
  getdataandrefreshmap:
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v3 # checkout the repository content to github runner
      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.8 # install the python version needed
      - name: Install dependencies
        run: |
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: execute py script
        run: |
          python main.py
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add .
          git commit -m "crongenerated"
          git push
I tested it with a dummy repo and everything worked.
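One caveat worth noting: git commit exits non-zero when main.py produces no changes, which would fail the scheduled run. A variant of the last step that commits only when something actually changed (the guard is an addition, not part of the tested workflow):

```yaml
      - name: execute py script
        run: |
          python main.py
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add .
          git diff --cached --quiet || git commit -m "crongenerated"
          git push
```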

Automated building of executables

I have a GUI program I'm managing, written in Python. For the sake of not having to worry about environments, it's distributed as an executable built with PyInstaller. I can run this build from a function defined in the module as MyModule.build() (because to me it makes more sense to manage that script alongside the project itself).
I want to automate this to some extent, such that when a new release is added on Gitlab, it can be built and attached to the release by a runner. The approach I currently have to this is functional but hacky:
I use the Gitlab API to download the source of the tag for the release. I run python -m pip install -r {requirements_path} and python -m pip install {source_path} in the runner's environment. Then import and run the MyModule.build() function to generate an executable. Which is then uploaded and linked to the release with the Gitlab API.
Obviously the middle section is wanting. What are the best approaches for similar projects? Can the package and requirements be installed in a separate venv from the one the runner script is running in?
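For reference, the MyModule.build() step described above could be a thin wrapper around PyInstaller's programmatic entry point. This is only a sketch; the module name, entry script, and options are assumptions for illustration, not the asker's actual code:

```python
def pyinstaller_args(entry_script, app_name, onefile=True):
    """Assemble the CLI-style argument list that PyInstaller expects."""
    args = [entry_script, "--name", app_name]
    if onefile:
        args.append("--onefile")
    return args

def build(entry_script="mymodule/__main__.py", app_name="myapp"):
    # PyInstaller exposes the same interface as its CLI via run()
    import PyInstaller.__main__
    PyInstaller.__main__.run(pyinstaller_args(entry_script, app_name))
```

Keeping the argument assembly separate from the call to PyInstaller makes the build configuration testable without actually producing an executable.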
One workflow would be to push a tag to create your release. The following jobs have a rules: configuration so they only run on tag pipelines.
One job will build the executable file. Another job will create the GitLab release using the file created in the first job.
build:
  rules:
    - if: "$CI_COMMIT_TAG" # Only run when tags are pushed
  image: python:3.9-slim
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  cache: # https://docs.gitlab.com/ee/ci/caching/#cache-python-dependencies
    paths:
      - .cache/pip
      - venv/
  script:
    - python -m venv venv
    - source venv/bin/activate
    - python -m pip install -r requirements.txt # package requirements
    - python -m pip install pyinstaller # build requirements
    - pyinstaller --onefile --name myapp mypackage/__main__.py
  artifacts:
    paths:
      - dist

create_release:
  rules:
    - if: "$CI_COMMIT_TAG"
  needs: [build]
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script: # zip/upload your binary wherever it should be downloaded from
    - echo "Uploading release!"
  release: # create GitLab release
    tag_name: $CI_COMMIT_TAG
    name: 'Release of myapp version $CI_COMMIT_TAG'
    description: 'Release created using the release-cli.'
    assets:
      links: # link uploaded asset(s) to the release
        - name: 'release-zip'
          url: 'https://example.com/downloads/myapp/$CI_COMMIT_TAG/myapp.zip'

python can't find '__main__' module in '.py' in Azure Pipelines

My goal is to deploy and run my python script from GitHub to my virtual machine via Azure Pipeline. My azure-pipelines.yml looks like this:
jobs:
  - deployment: VMDeploy
    displayName: Test_script
    environment:
      name: deploymentenvironment
      resourceType: VirtualMachine
    strategy:
      rolling:
        maxParallel: 2 # for percentages, mention as x%
        preDeploy:
          steps:
            - download: current
            - script: echo initialize, cleanup, backup, install certs
        deploy:
          steps:
            - task: Bash@3
              inputs:
                targetType: 'inline'
                script: python3 $(Agent.BuildDirectory)/test_file.py
        routeTraffic:
          steps:
            - script: echo routing traffic
        postRouteTraffic:
          steps:
            - script: echo health check post-route traffic
        on:
          failure:
            steps:
              - script: echo Restore from backup! This is on failure
          success:
            steps:
              - script: echo Notify! This is on success
This returns an error:
/usr/bin/python3: can't find '__main__' module in '/home/ubuntu/azagent/_work/1/test_file.py'
##[error]Bash exited with code '1'.
If I place test_file.py in /home/ubuntu and replace the deployment script with script: python3 /home/ubuntu/test_file.py, the script runs smoothly.
If I move test_file.py to another directory with mv /home/ubuntu/azagent/_work/1/test_file.py /home/ubuntu, I find an empty folder named test_file.py, not a .py file.
EDIT: (screenshot from the Jobs page omitted)
The reason you cannot get the source is that download: current downloads artifacts produced by the current pipeline run, but you didn't publish any artifact in the current pipeline.
As deployment jobs don't automatically check out source code, you need to either check out the source in your deployment job:
- checkout: self
or publish the sources as an artifact before downloading them:
- publish: $(Build.SourcesDirectory)
  artifact: Artifact_Deploy
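Putting the second option together: a preceding build job publishes the sources, and after download: current the deployment job finds them under $(Pipeline.Workspace). A sketch, reusing the artifact name above:

```yaml
# in a preceding build job
- publish: $(Build.SourcesDirectory)
  artifact: Artifact_Deploy

# in the deployment job's deploy steps, after `download: current`
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: python3 $(Pipeline.Workspace)/Artifact_Deploy/test_file.py
```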

How to add --no-bundler command in azure build pipeline

I am having some trouble running my function app in Python. When I publish the function directly through func azure functionapp publish air-temperature-v2 --no-bundler, the function is published straight to portal.azure and works as expected. However, if I commit and push to Azure Repos and let it generate its build, everything succeeds, but when I try to run the function it gives a "module named 'pandas' not found" error. It works fine locally and online (using the --no-bundler command). My question is: how can I add the --no-bundler option in an Azure Python pipeline? My YAML is as follows:
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/python
trigger:
  - master
pool:
  vmImage: 'ubuntu-latest'
strategy:
  matrix:
    Python36:
      python.version: '3.6'
steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '$(python.version)'
    displayName: 'Use Python $(python.version)'
  - script: |
      python -m pip install --upgrade pip
      pip install -r requirements.txt
    displayName: 'Install dependencies'
  - script: python HttpExample/__init__.py
  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: '$(Build.SourcesDirectory)'
      includeRootFolder: false
      archiveType: 'zip'
      archiveFile: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
      replaceExistingArchive: true
      verbose: # (no value); this input is optional
  - task: PublishBuildArtifacts@1
  #- script: |
  #    pip install pytest pytest-azurepipelines
  #    pytest
  #  displayName: 'pytest'
  # ignore
  - task: AzureFunctionApp@1
    inputs:
      azureSubscription: 'zohair-rg'
      appType: 'functionAppLinux'
      appName: 'air-temperature-v2'
      package: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
      startUpCommand: 'func azure functionapp publish air-temperature-v2 --no-bundler'
I have even tried adding the --no-bundler command as the startup command, but it still does not work.
This could be related to an azure-functions-core-tools version issue. Please try the following and redeploy:
Update your azure-functions-core-tools version to the latest.
Try to deploy your build using the command below:
func azure functionapp publish <app_name> --build remote
There was a similar issue raised some time back; I can't recall the exact thread, but this fix worked there.
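If you want the remote build to happen from the pipeline itself rather than via the AzureFunctionApp task's startUpCommand, one option is a script step that installs Core Tools and publishes directly. This is a sketch, assuming an authenticated context (e.g. a prior Azure CLI login step) and the app name from the question:

```yaml
  - script: |
      npm install -g azure-functions-core-tools@3 --unsafe-perm true
      func azure functionapp publish air-temperature-v2 --build remote
    displayName: 'func publish with remote build'
```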
Alternatively, have you considered the Azure CLI task to deploy the Azure Function? Here is a detailed article explaining Azure CI/CD for Python using the Azure CLI:
https://clemenssiebler.com/deploy-azure-functions-python-azure-devops/
Hope it helps.

Is there a Python/Django equivalent to Rails bundler-audit?

I'm fairly new to Django so apologies in advance if this is obvious.
In Rails projects, I use a gem called bundler-audit to check that the patch levels of the gems I'm installing don't include security vulnerabilities. Normally, I incorporate running bundler-audit into my CI pipeline so that any time I deploy, I get a warning (and a failure) if a gem has a security vulnerability.
Is there a similar system for checking vulnerabilities in Python packages?
After writing out this question, I searched around some more and found Safety, which was exactly what I was looking for.
In case anyone else is setting up CircleCI for a Django project and wants to check their packages for vulnerabilities, here is the configuration I used in my .circleci/config.yml:
version: 2
jobs:
  build:
    # build and run tests
  safety_check:
    docker:
      - image: circleci/python:3.6.1
    steps:
      - checkout
      - run:
          command: |
            python3 -m venv env3
            . env3/bin/activate
            pip install safety
            # specify requirements.txt
            safety check -r requirements.txt
  merge_master:
    # merge passing code into master
workflows:
  version: 2
  test_and_merge:
    jobs:
      - build:
          filters:
            branches:
              ignore: master
      - safety_check:
          filters:
            branches:
              ignore: master
      - merge_master:
          filters:
            branches:
              only: develop
          requires:
            - build
            # code is only merged if safety check passes
            - safety_check
To check that this works, run pip install insecure-package && pip freeze > requirements.txt then push and watch for Circle to fail.
