My goal is to deploy and run my Python script from GitHub on my virtual machine via Azure Pipelines. My azure-pipelines.yml looks like this:
jobs:
- deployment: VMDeploy
  displayName: Test_script
  environment:
    name: deploymentenvironment
    resourceType: VirtualMachine
  strategy:
    rolling:
      maxParallel: 2 #for percentages, mention as x%
      preDeploy:
        steps:
        - download: current
        - script: echo initialize, cleanup, backup, install certs
      deploy:
        steps:
        - task: Bash@3
          inputs:
            targetType: 'inline'
            script: python3 $(Agent.BuildDirectory)/test_file.py
      routeTraffic:
        steps:
        - script: echo routing traffic
      postRouteTraffic:
        steps:
        - script: echo health check post-route traffic
      on:
        failure:
          steps:
          - script: echo Restore from backup! This is on failure
        success:
          steps:
          - script: echo Notify! This is on success
This returns an error:
/usr/bin/python3: can't find '__main__' module in '/home/ubuntu/azagent/_work/1/test_file.py'
##[error]Bash exited with code '1'.
If I place test_file.py in /home/ubuntu and change the deployment script to script: python3 /home/ubuntu/test_file.py, the script runs smoothly.
If I move test_file.py to another directory with mv /home/ubuntu/azagent/_work/1/test_file.py /home/ubuntu, I find an empty folder named test_file.py, not a .py file.
The reason you cannot get the source is that download: current downloads artifacts produced by the current pipeline run, but you didn't publish any artifacts in the current pipeline.
Since deployment jobs don't automatically check out source code, you need to either check out the source in your deployment job:
- checkout: self
or publish the sources as an artifact before downloading them:
- publish: $(Build.SourcesDirectory)
  artifact: Artifact_Deploy
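For example, a minimal sketch of the deploy hook using the checkout route (this assumes test_file.py sits at the repository root; adjust the path to your layout):

      deploy:
        steps:
        - checkout: self
        - task: Bash@3
          inputs:
            targetType: 'inline'
            script: python3 $(Build.SourcesDirectory)/test_file.py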
So I am trying to create a pipeline on Bitbucket. On my local computer, I navigate to the folder with cd terraform/environments/dev and run terraform init without an issue. However, when I run the test pipeline on Bitbucket, it stops on the second step because
bash: terraform: command not found
How can I fix this? I believe I need to install Terraform on Bitbucket somehow, but I am not sure how to do so. Do I use Python pip commands? If so, how and why?
image: atlassian/default-image:2

pipelines:
  branches:
    test:
      - step:
          name: 'Navigate to Dev'
          script:
            - cd terraform/environments/dev
          condition:
            changesets:
              includePaths:
                - "terraform/modules"
                - "terraform/environments/dev"
      - step:
          name: 'Initialize Terraform'
          script:
            - terraform init
You need the correct image for your build agent. In this situation, the agent basically only needs terraform installed and accessible:
image: hashicorp/terraform
This will fix your issue. You can, of course, also pin the image tag to your specific Terraform version.
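Putting it together, a sketch of the fixed pipeline (the 1.5 tag is only an example). Note also that each Bitbucket step runs in a fresh container, so the cd from your first step does not carry over to the second; running cd and terraform init in the same step avoids that:

image: hashicorp/terraform:1.5

pipelines:
  branches:
    test:
      - step:
          name: 'Initialize Terraform'
          script:
            - cd terraform/environments/dev
            - terraform init
          condition:
            changesets:
              includePaths:
                - "terraform/modules"
                - "terraform/environments/dev"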
I have two repositories A & B.
Azure Repository A - Contains a python app
Azure Repository B - Contains .yml templates and .py scripts I want to run in the .yml templates
According to the documentation, I cannot do this, because when I expand the template into calling repository A's pipeline, it acts like a code directive and just injects the code; it does not know or care about the .py files in the repository.
What are my options, short of writing all my .py routines inline?
Azure Repo A's pipeline YAML file:
trigger: none

resources:
  pipelines:
    - pipeline: my_project_a_pipeline
      source: trigger_pipeline
      trigger:
        branches:
          include:
            - master
  repositories:
    - repository: template_repo_b
      type: git
      name: template_repo_b
      ref: main

stages:
  - template: pipelines/some_template.yml@template_repo_b
    parameters:
      SOME_PARAM_KEY: "some_param_value"
Azure Repo B's some_template.yml
parameters:
  - name: SOME_PARAM_KEY
    type: string

stages:
  - stage: MyStage
    displayName: "SomeStage"
    jobs:
      - job: "MyJob"
        displayName: "MyJob"
        steps:
          - bash: |
              echo Bashing
              ls -la
            displayName: 'Execute Warmup'
          - task: PythonScript@0
            inputs:
              scriptSource: "filePath"
              scriptPath: /SOME_PATH_ON_REPO_B/my_dumb_script.py
              script: "my_dumb_script.py"
Is there an option to put the .py files into a completely separate repo C, add C to the resources of B's templates, and be on my way?
EDIT:
I can see In Azure templates repository, is there a way to mention repository for a filePath parameter of azure task 'pythonScript'?, but then how do I consume the Python package? Can I still use the PythonScript task? It sounds like I would then need to call my pip-packaged code straight from bash.
I figured it out: how to pip install .py files in Azure DevOps pipelines, using Azure repositories, via a template in the same repo.
Just add a reference to yourself at the top of any template. In the consuming repo:
resources:
  repositories:
    - repository: this_template_repo
      type: git
      name: this_template_repo
      ref: master
Then add a job, checking yourself out by that name:
- job: "PIP_INSTALL_LIBS"
displayName: "pip install libraries to agent"
steps:
- checkout: this_template_repo
path: this_template_repo
- bash: |
python3 -m pip install setuptools
python3 -m pip install -e $(Build.SourcesDirectory)/somepypimodule/src --force-reinstall --no-deps
displayName: 'pip install pip package'
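With the template repo checked out and the package installed, a later step in the same job can also run .py files from it directly. A sketch — the script path below is hypothetical, and a checkout path: is resolved relative to $(Pipeline.Workspace):

    - task: PythonScript@0
      inputs:
        scriptSource: "filePath"
        scriptPath: $(Pipeline.Workspace)/this_template_repo/scripts/my_dumb_script.py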
I am having some trouble running my function app in Python. When I publish the function directly with func azure functionapp publish air-temperature-v2 --no-bundler, it goes straight to the Azure portal and works as expected. However, if I commit and push to Azure Repos and let it generate its build, everything succeeds, but when I try to run the function it gives a "module named 'pandas' not found" error. It works fine locally and online (using the no-bundler command). My question is: how can I add the no-bundler option to the Azure Python pipeline? My YAML is as follows:
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/python
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    Python36:
      python.version: '3.6'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'

- script: python HttpExample/__init__.py

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
    replaceExistingArchive: true
    verbose: # (no value); this input is optional

- task: PublishBuildArtifacts@1

#- script: |
#    pip install pytest pytest-azurepipelines
#    pytest
#  displayName: 'pytest'
# ignore

- task: AzureFunctionApp@1
  inputs:
    azureSubscription: 'zohair-rg'
    appType: 'functionAppLinux'
    appName: 'air-temperature-v2'
    package: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
    startUpCommand: 'func azure functionapp publish air-temperature-v2 --no-bundler'
I have even tried adding the no-bundler command as the startup command, but it still does not work.
This could be related to an azure-functions-core-tools version issue; please try the following and deploy:
Update your azure-functions-core-tools version to the latest.
Try to deploy your build using the command below:
func azure functionapp publish <app_name> --build remote
A similar issue was raised some time back; I can't recall the details, but this fix worked.
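If you want that remote-build publish to run inside the pipeline itself, one option (a sketch under stated assumptions, not a verified setup) is an AzureCLI@2 step, so the func CLI can pick up the service connection's credentials; the core-tools major version and app name below are examples:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'zohair-rg'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # install Azure Functions Core Tools on the agent
      npm install -g azure-functions-core-tools@4 --unsafe-perm true
      # remote build resolves native dependencies such as pandas server-side
      func azure functionapp publish air-temperature-v2 --build remote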
Alternatively, have you considered the Azure CLI task to deploy the Azure Function? Here is a detailed article explaining Azure CI/CD using the Azure CLI for Python:
https://clemenssiebler.com/deploy-azure-functions-python-azure-devops/
Hope it helps.
I am trying to schedule a Python script through a Kubernetes CronJob, but for some reason I am not able to understand how to do it. I am able to run a simple script like echo Hello World, but that's not what I want.
I tried using this specification:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: "Forbid"
  failedJobsHistoryLimit: 10
  startingDeadlineSeconds: 600 # 10 min
  jobTemplate:
    spec:
      backoffLimit: 0
      activeDeadlineSeconds: 3300 # 55min
      template:
        spec:
          containers:
            - name: hello
              image: python:3.6-slim
              command: ["python"]
              args: ["./main.py"]
          restartPolicy: Never
But then I am not able to run it because main.py is not found. I understand that relative paths are not supported, so I hardcoded the path, but then I was not able to find my home directory: I tried ls /home/, and my folder name is not visible there, so I am not able to access my project repository.
Initially I was planning to run a bash script that would:
Install the requirements with pip install -r requirements.txt
Then run the Python script
But I am not sure how I can do this with Kubernetes; it is so confusing to me.
In short, I want to be able to run a k8s CronJob that runs a Python script by first installing the requirements and then executing it.
Where is the startup script ./main.py located? Is it present in the image?
You need to build a new image using python:3.6-slim as the base image and add your Python script to it; then you will be able to run it from the k8s CronJob.
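A minimal sketch of that approach: build an image whose Dockerfile starts FROM python:3.6-slim, copies your code in (e.g. COPY . /app) and runs RUN pip install -r /app/requirements.txt, push it to a registry you control, and point the CronJob at it. The image name and /app paths below are hypothetical:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              # custom image built FROM python:3.6-slim with the script and deps baked in
              image: registry.example.com/my-python-job:latest
              command: ["python", "/app/main.py"]
          restartPolicy: Never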
I have a Python script that I am trying to run as part of the GitLab Pages deployment of a Jekyll site. My site has blog posts with various tags, and the Python script generates the .md files for the tag pages. The script works perfectly fine when I run it manually in an IDE; however, I want it to be part of the GitLab CI deployment process.
Here is what my .gitlab-ci.yml setup looks like:
run:
  image: python:latest
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - public
  only:
    - master

pages:
  image: ruby:2.3
  stage: deploy
  script:
    - bundle install
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  only:
    - master
However, it doesn't actually create the files it's supposed to create. Here is the output from the "run" job:
...
Cloning repository...
Cloning into '/builds/username/projectname'...
Checking out 4c8a47fe as master...
Skipping Git submodules setup
$ python tag_generator.py
Tags generated, count 23
Uploading artifacts...
WARNING: public: no matching files
ERROR: No files to upload
Job succeeded
The script prints "tags generated, count ___" once it executes, so it is running; however, the files it is supposed to create aren't being created/uploaded into the right directory. There is a /tag directory in the root project folder; that is where they are supposed to go.
I realize that the issue must have something to do with the public folder; however, when I don't have
artifacts:
  paths:
    - public
it still doesn't create the files in the /tag directory, so it doesn't work whether I have - public or not, and I don't know what the problem is.
I FIGURED IT OUT!
the "build" for the project isn't made in the repo, gitlab clones the repo into another place, so I had to change the artifact path for the python job so that it's in the cloned "build" location, like so:
run:
  image: python:latest
  stage: test
  before_script:
    - python -V # Print out python version for debugging
    - pip install virtualenv
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - /builds/username/projectname/tag
  only:
    - master
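A possible refinement: GitLab exposes the clone location as the predefined variable CI_PROJECT_DIR, and artifact paths are resolved relative to it, so the hardcoded /builds/username/projectname prefix can usually be replaced with a relative path:

run:
  image: python:latest
  stage: test
  script:
    - python tag_generator.py
  artifacts:
    paths:
      # relative to $CI_PROJECT_DIR, i.e. the cloned repo root
      - tag/
  only:
    - master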