How to add --no-bundler command in azure build pipeline - python

I am having some trouble running my function app in Python. When I publish the function directly with func azure functionapp publish air-temperature-v2 --no-bundler, it is published straight to the Azure portal and works as expected. However, if I commit and push to Azure Repos and let the pipeline build, everything succeeds, but when I try to run the function it fails with ModuleNotFoundError: No module named 'pandas'. It works fine locally and online (when deployed with the --no-bundler command). My question is: how can I add the --no-bundler option to the Azure Python pipeline? My YAML is as follows:
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/python
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    Python36:
      python.version: '3.6'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'

- script: python HttpExample/__init__.py

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
    replaceExistingArchive: true
    verbose: # (no value); this input is optional

- task: PublishBuildArtifacts@1

#- script: |
#    pip install pytest pytest-azurepipelines
#    pytest
#  displayName: 'pytest'
# ignore

- task: AzureFunctionApp@1
  inputs:
    azureSubscription: 'zohair-rg'
    appType: 'functionAppLinux'
    appName: 'air-temperature-v2'
    package: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
    startUpCommand: 'func azure functionapp publish air-temperature-v2 --no-bundler'
I have even tried adding the --no-bundler command as the startup command, but it still does not work.

This could be related to an azure-functions-core-tools version issue; please try the following and redeploy:
Update your azure-functions-core-tools version to the latest.
Try to deploy your build using the command below:
func azure functionapp publish <app_name> --build remote
A similar issue was raised some time back; I can't recall the details, but this fix worked.
Alternatively, have you considered the Azure CLI task to deploy the Azure Function? Here is a detailed article explaining Azure CI/CD for Python using the Azure CLI:
https://clemenssiebler.com/deploy-azure-functions-python-azure-devops/
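If you want to keep everything inside the pipeline YAML, here is a minimal sketch (not from the original answer) of running the remote-build publish from an AzureCLI@2 step so the Core Tools reuse the signed-in Azure context. The service connection name 'zohair-rg' is reused from the question, and the npm install assumes Node.js is available on the hosted agent (it is on ubuntu-latest):
- task: AzureCLI@2
  inputs:
    azureSubscription: 'zohair-rg'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    workingDirectory: '$(Build.SourcesDirectory)'
    inlineScript: |
      # install Azure Functions Core Tools on the agent
      npm install -g azure-functions-core-tools@3 --unsafe-perm true
      # publish from the sources and let the Functions service build the Python dependencies remotely
      func azure functionapp publish air-temperature-v2 --build remote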
Hope it helps.

Related

Python Package publish in Azure DevOps Artifacts: 'D:\a\1\a' is empty. Nothing will be added to build artifact

A. I used twine to authenticate and publish my Python packages to an Azure Artifacts feed:
- task: CmdLine@2
  displayName: Build Artifacts
  inputs:
    script: |
      echo Building distribution package
      python -m pip install --upgrade twine build setuptools
      python -m build

- task: TwineAuthenticate@1
  inputs:
    artifactFeed: ddey-feed

- script: |
    python -m twine upload -r "ddey-feed" --config-file $(PYPIRC_PATH) dist/*.whl
B. Although it ran successfully, I didn't get any package in Artifacts. I found the warning: 'D:\a\1\a' is empty. Nothing will be added to build artifact.
C. I did some research and decided to add an additional section that does a copy and publish:
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: |
      **/*
      !.git/**/*
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
  condition: succeededOrFailed()

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
  condition: succeededOrFailed()
Can anyone please comment on what else I can modify in the YAML file to get the package to show up in Artifacts?
Things I tried after the suggestions:
1. Added the tree command to list all build folders and confirm the files are generated.
2. Removed the source folder setting and let it use the default source: the build succeeded and was consumed, the artifact is generated, and I can see it from the pipeline.
Problem Statement
In the Artifacts tab, I don't see the build available in any feed. How do I connect the build with a specific feed (ddey-feed)? I thought TwineAuthenticate was supposed to take care of it.
OK, I have finally resolved the whole issue and was able to deploy the package to the Artifacts feed.
Key learnings:
When creating the Artifacts feed, make sure to check the permissions. Add "Allow project-scoped builds", otherwise you will get a permission error when pushing the package from the Azure pipeline.
You need to define PYPIRC_PATH to point to where the .pypirc file resides. This can be done with an environment variable, as shown below:
- script: |
    echo "$(PYPIRC_PATH)"
    python -m twine upload -r ddey-feed --verbose --config-file $(PYPIRC_PATH) dist/*
  displayName: 'cmd to push package to Artifact Feed'
  env:
    PYPIRC_PATH: $(Build.SourcesDirectory)
Make sure the TwineAuthenticate feed name matches the feed name used in twine upload. If the pipeline fails to push the package, you can try running the following command directly from your repo: twine upload -r ddey-feed --config-file ./.pypirc dist/* and it should successfully upload the build to Artifacts.
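For reference, the .pypirc that twine reads is roughly of the shape below. This is only a sketch: the organization and project segments of the URL are placeholders, and TwineAuthenticate@1 generates an equivalent file with credentials injected. The section name must match the feed passed to -r:
[distutils]
index-servers =
    ddey-feed

[ddey-feed]
repository = https://pkgs.dev.azure.com/<your-org>/<your-project>/_packaging/ddey-feed/pypi/upload/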
For debugging purposes, print the directories:
echo "Structure of work folder of this pipeline:"
tree $(Agent.WorkFolder)\1 /f
echo "Build.ArtifactStagingDirectory:"
echo "$(Build.ArtifactStagingDirectory)"
echo "Build.BinariesDirectory:"
echo "$(Build.BinariesDirectory)"
echo "Build.SourcesDirectory:"
echo "$(Build.SourcesDirectory)"
Summary of the components of the pipeline: the build, TwineAuthenticate, copy, and publish steps shown above, with correct indentation before CopyFiles@2.

how to template python tasks in azure devops pipelines

I have two repositories A & B.
Azure Repository A - Contains a python app
Azure Repository B - Contains .yml templates and .py scripts I want to run in the .yml templates
According to the documentation, I cannot do this: when the template is expanded into calling repository A's pipeline, it acts like a code directive and just injects the YAML; it does not know or care about the .py files in the repository.
What are my options other than writing all my .py routines inline?
Azure Repo A's Pipeline Yaml file
trigger: none
resources:
pipelines:
- pipeline: my_project_a_pipeline
source: trigger_pipeline
trigger:
branches:
include:
- master
repositories:
- repository: template_repo_b
type: git
name: template_repo_b
ref: main
stages:
- template: pipelines/some_template.yml#template_repo_b
parameters:
SOME_PARAM_KEY: "some_param_value"
Azure Repo B's some_template.yml
parameters:
- name: SOME_PARAM_KEY
  type: string

stages:
- stage: MyStage
  displayName: "SomeStage"
  jobs:
  - job: "MyJob"
    displayName: "MyJob"
    steps:
    - bash: |
        echo Bashing
        ls -la
      displayName: 'Execute Warmup'
    - task: PythonScript@0
      inputs:
        scriptSource: "filePath"
        scriptPath: /SOME_PATH_ON_REPO_B/my_dumb_script.py
        script: "my_dumb_script.py"
Is there an option to wire the .py files into a completely separate repo C, add C to the resources of B's templates, and be on my way?
EDIT:
I can see "In Azure templates repository, is there a way to mention repository for a filePath parameter of azure task 'pythonScript'?", but then how do I consume the Python package? Can I still use the PythonScript task? It sounds like I would then need to call my pip-packaged code straight from bash.
I figured it out: how to pip install .py files in Azure DevOps pipelines, using Azure repositories, via a template in the same repo.
Just add a reference to yourself at the top of any template.
In the consuming repo
resources:
  repositories:
  - repository: this_template_repo
    type: git
    name: this_template_repo
    ref: master
then add a job, referencing yourself by that name
- job: "PIP_INSTALL_LIBS"
displayName: "pip install libraries to agent"
steps:
- checkout: this_template_repo
path: this_template_repo
- bash: |
python3 -m pip install setuptools
python3 -m pip install -e $(Build.SourcesDirectory)/somepypimodule/src --force-reinstall --no-deps
displayName: 'pip install pip package'
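Once the template repo is checked out like this, its .py files can also be run with PythonScript@0 instead of inline bash. A sketch, assuming a hypothetical scripts/my_dumb_script.py in the template repo (the checkout path above is relative to $(Agent.BuildDirectory)):
- task: PythonScript@0
  inputs:
    scriptSource: 'filePath'
    # the checkout step above placed the repo at $(Agent.BuildDirectory)/this_template_repo
    scriptPath: '$(Agent.BuildDirectory)/this_template_repo/scripts/my_dumb_script.py'
  displayName: 'Run a script from the template repo'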

DevOps pipeline running python script error on import azureml.core

I'm trying to run a Python script in a DevOps pipeline upon check-in. A basic 'hello world' script works, but when I import azureml.core, it errors out with ModuleNotFoundError: No module named 'azureml'.
That makes sense, since I don't know how it was going to find azureml.core. My question is: how do I get the Python script to find the module? Do I need to check it in as part of my code base in DevOps? Or is there some way to reference it via a hyperlink?
Here is my YML file:
trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- task: PythonScript@0
  inputs:
    scriptSource: 'filepath'
    scriptPath: test.py
And here is my python script:
print('hello world')
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
You have to install the missing package into your Python environment; the default Python on the agent does not have it. Please use the YAML below:
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.8'
  inputs:
    versionSpec: 3.8

- script: python3 -m pip install --upgrade pip
  displayName: 'upgrade pip'

- script: python3 -m pip install azureml.core
  displayName: 'Install azureml.core'

- task: PythonScript@0
  inputs:
    scriptSource: 'filepath'
    scriptPath: test.py
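If the script needs more than azureml.core, installing from a requirements file keeps the agent in sync with local development; a small sketch, assuming a requirements.txt at the repository root:
- script: python3 -m pip install -r requirements.txt
  displayName: 'Install dependencies from requirements.txt'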

Super-Linter: Pass python modules as arguments to Pylint in Azure DevOps

I am trying to add a Python linting step to an Azure DevOps pipeline using Super-Linter. I followed the instructions here to add the following step to my pipeline:
jobs:
- job: PythonLint
  displayName: Python Lint
  pool:
    vmImage: ubuntu-latest
  steps:
  - script: |
      docker pull github/super-linter:latest
      docker run -e RUN_LOCAL=true -e VALIDATE_PYTHON_PYLINT=true -v $(System.DefaultWorkingDirectory):/tmp/lint github/super-linter
    displayName: 'Code Scan using GitHub Super-Linter'
The Super-Linter GitHub docs page describes passing environment variables to tell it which linters to run. I pass the variable VALIDATE_PYTHON_PYLINT to tell it to run pylint. When I run the pipeline, I get a load of errors of the form E0401: Unable to import 'pyspark.sql' (import-error).
Pylint's configuration page says the modules should be passed as command-line arguments.
My question is, how can I configure Super-Linter to pass these arguments or otherwise tell pylint which modules to import?
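No answer was recorded in the original thread. As a hedged sketch of one possible workaround, based on Super-Linter's documented support for per-linter config files rather than command-line arguments: commit a pylint config under .github/linters/ (Super-Linter's default rules path) and name it via PYTHON_PYLINT_CONFIG_FILE. Disabling import-error silences E0401 for modules such as pyspark that are not installed inside the Super-Linter container, although it silences all import errors, which may be broader than you want.
# .github/linters/.pylintrc (hypothetical file added to the repo)
[MESSAGES CONTROL]
disable=import-error
The pipeline step would then pass the config file name to the container:
- script: |
    docker pull github/super-linter:latest
    docker run -e RUN_LOCAL=true -e VALIDATE_PYTHON_PYLINT=true -e PYTHON_PYLINT_CONFIG_FILE=.pylintrc -v $(System.DefaultWorkingDirectory):/tmp/lint github/super-linter
  displayName: 'Code Scan using GitHub Super-Linter'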

Push Docker image to registry before using for azure pipelines

For my tests in Azure Pipelines, I want to use a container that I then push to Docker Hub.
Currently, the steps are the following:
Pull the image from the registry
Run the tests
Push the new image, with the new commits in the code, to the registry
The problem: the image pulled from the registry contains the previous code, not the code I am testing.
What I want to do:
First, deploy the image with the new code to the Docker registry
Then, run steps 1 to 3 mentioned before, so that the image I pull is up to date.
Here is my actual code:
trigger:
- master

resources:
  containers:
  - container: moviestr_backend
    image: nolwenbrosson/cicd:moviestr_backend-$(SourceBranchName)
    ports:
    - 5000:5000
  - container: backend_mongo
    image: mongo
    ports:
    - 27017:27017

pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    Python37:
      python.version: '3.7'

services:
  moviestr_backend: moviestr_backend
  backend_mongo: backend_mongo

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
    pip install -r requirements.dev.txt
    pip install pytest-azurepipelines
  displayName: 'Install dependencies'

- script: |
    python -m pytest
  displayName: 'Make Unit tests'

- task: Docker@2
  displayName: Login to Docker Hub
  inputs:
    command: login
    containerRegistry: cicd

- task: Docker@2
  displayName: Build and Push
  inputs:
    command: buildAndPush
    repository: nolwenbrosson/cicd
    tags: |
      moviestr_backend-master

- task: Docker@2
  displayName: Logout of ACR
  inputs:
    command: logout
    containerRegistry: cicd
The problem is that resources are resolved once for the whole pipeline, so the image is pulled at the beginning, not after I build the image with my up-to-date code. So, what can I do?
You could try to separate your docker build and push tasks in your scenario.
First docker build your image with the changed code, then docker run the newly built image.
Then test against that image, and finally docker push it.
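A sketch of what that reordering could look like, keeping the existing login/logout tasks and reusing the service connection, repository, and tag from the question. It assumes the image contains the application code and pytest so the tests can run inside it; the exact local image name produced by the build may vary with how the service connection is configured:
- task: Docker@2
  displayName: Build image with the current code
  inputs:
    command: build
    containerRegistry: cicd
    repository: nolwenbrosson/cicd
    tags: |
      moviestr_backend-master

- script: |
    # run the tests inside the image that was just built
    docker run --rm nolwenbrosson/cicd:moviestr_backend-master python -m pytest
  displayName: 'Run tests inside the new image'

- task: Docker@2
  displayName: Push the tested image
  inputs:
    command: push
    containerRegistry: cicd
    repository: nolwenbrosson/cicd
    tags: |
      moviestr_backend-master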
