I am trying to install a private Python library in a gcr.io container and I am still getting an error, even after running gcloud auth configure-docker and setting --network=cloudbuild in the YAML file.
RUN pip install -e git+https://source.developers.google.com/p/project_name/r/github_domain_repository_name#egg=package_path
steps:
- name: 'gcr.io/cloud-builders/gcloud'
args: ['auth', 'configure-docker']
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '--network=cloudbuild', '-t', 'gcr.io/project-${_ENVIRONMENT}/cloud_run-pubsub_example', './cloud-run/file_upserter/']
I have a Lambda function that uses the library lightgbm.
Unfortunately, it gives an error when trying to import it in Python, saying libgomp.so.1: cannot open shared object file, so I figured I need to do apt-get install libgomp1 and maybe something more.
How am I supposed to run these commands?
I assume it is better to use Layers or something similar, because running these commands every time the Lambda starts doesn't make sense.
But how do I do sudo apt-get into a particular folder? As far as I know, that is not possible.
So my question boils down to: how do I run these various bash commands and install packages, like you would in a Dockerfile, but for a Zip-packaged Lambda?
I am using AWS SAM for deployment and development.
You can run lambda functions from your own Docker images, where you have almost full control over what the image contains.
Here's a simple example of a Python application executed in a container: https://docs.aws.amazon.com/lambda/latest/dg/python-image.html
Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
# Install the function's dependencies using file requirements.txt
# from your project folder.
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]
app.py:
import sys
def handler(event, context):
return 'Hello from AWS Lambda using Python' + sys.version + '!'
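Since the original problem is lightgbm needing libgomp.so.1, the same container approach also covers system libraries: you install them in the image at build time instead of at Lambda start. A minimal sketch, assuming the public.ecr.aws/lambda/python base image (it is Amazon Linux, so yum rather than apt-get; the libgomp package name is an assumption):
FROM public.ecr.aws/lambda/python:3.8
# Install the shared library lightgbm needs (libgomp.so.1); yum, not apt-get,
# because the Lambda Python base image is Amazon Linux (package name assumed).
RUN yum install -y libgomp && yum clean all
# Python dependencies, including lightgbm, go into the task root as before.
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
COPY app.py ${LAMBDA_TASK_ROOT}
CMD [ "app.handler" ]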
I use SAM and a Docker image to run a part of a large Java application as a lambda function. Here's what my CF template looks like:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: '...'
Parameters:
Customer:
Type: String
Description: Customer ID
Environment:
Type: String
Description: Environment
AllowedValues: ["production", "test"]
AppVersion:
Type: String
Description: App version
Resources:
AppLambda:
Type: AWS::Serverless::Function
Properties:
FunctionName: !Sub 'app-${Customer}-${Environment}'
PackageType: Image
ImageUri: 'applambda:latest'
Role: !GetAtt AppLambdaRole.Arn
Architectures:
- x86_64
Timeout: 30
MemorySize: 1024
Description: 'App lambda endpoint (see tags for more info).'
Environment:
Variables:
# This is here to improve cold start speed.
# https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-function-performance-for-java/
JAVA_TOOL_OPTIONS: '-XX:+TieredCompilation -XX:TieredStopAtLevel=1'
Tags:
Name: !Sub 'app-${Customer}-${Environment}'
Customer: !Ref Customer
Environment: !Ref Environment
AppVersion: !Ref AppVersion
Application: myApp
Metadata:
Dockerfile: Dockerfile
DockerContext: ./
DockerTag: latest
AppLambdaRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
Action:
- sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
LogGroup:
Type: AWS::Logs::LogGroup
Properties:
LogGroupName: !Join ['/', ['/aws/lambda', !Ref AppLambda]]
RetentionInDays: 7
DeletionPolicy: Delete
UpdateReplacePolicy: Retain
AppLambdaPinger:
Type: AWS::Events::Rule
Properties:
Description: Keeps the app lambda warm.
ScheduleExpression: 'rate(15 minutes)'
Targets:
- Arn: !GetAtt AppLambda.Arn
Id: TargetLambda
Input: '{"ping":"pong"}'
AppLambdaPingerPermission:
Type: AWS::Lambda::Permission
Properties:
FunctionName: !Ref AppLambda
Action: lambda:InvokeFunction
Principal: events.amazonaws.com
SourceArn: !GetAtt AppLambdaPinger.Arn
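With the Metadata section above, sam build creates the image from the Dockerfile and sam deploy pushes it and updates the stack. A rough usage sketch, assuming the template is saved as template.yaml and using placeholder parameter values:
sam build
sam deploy --guided \
  --parameter-overrides Customer=acme Environment=test AppVersion=1.0.0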
I have two repositories A & B.
Azure Repository A - Contains a python app
Azure Repository B - Contains .yml templates and .py scripts I want to run in the .yml templates
According to the documentation, I cannot do this, because when I expand the template into calling repository A's pipeline, it acts like a code directive and just injects the code; it will not know or care about the .py files in the repository.
What are my options, other than writing all my .py routines inline?
Azure Repo A's Pipeline Yaml file
trigger: none
resources:
pipelines:
- pipeline: my_project_a_pipeline
source: trigger_pipeline
trigger:
branches:
include:
- master
repositories:
- repository: template_repo_b
type: git
name: template_repo_b
ref: main
stages:
- template: pipelines/some_template.yml@template_repo_b
parameters:
SOME_PARAM_KEY: "some_param_value"
Azure Repo B's some_template.yml
parameters:
- name: SOME_PARAM_KEY
type: string
stages:
- stage: MyStage
displayName: "SomeStage"
jobs:
- job: "MyJob"
displayName: "MyJob"
steps:
- bash: |
echo Bashing
ls -la
displayName: 'Execute Warmup'
- task: PythonScript@0
inputs:
scriptSource: "filePath"
scriptPath: /SOME_PATH_ON_REPO_B/my_dumb_script.py
script: "my_dumb_script.py"
Is there an option to move the .py files into a completely separate repo C, add C to the resources of B's templates, and be on my way?
EDIT:
I can see "In Azure templates repository, is there a way to mention repository for a filePath parameter of azure task 'pythonScript'?", but then how do I consume the Python package? Can I still use the PythonScript task? It sounds like I would then need to call my pip-packaged code straight from bash.
I figured it out: how to pip install .py files in Azure DevOps pipelines, using Azure Repositories, via a template in the same repo.
Just add a reference to the template repository itself at the top of any template.
In the consuming repo:
repositories:
- repository: this_template_repo
type: git
name: this_template_repo
ref: master
Then add a job, referencing that repository by the name you gave it:
- job: "PIP_INSTALL_LIBS"
displayName: "pip install libraries to agent"
steps:
- checkout: this_template_repo
path: this_template_repo
- bash: |
python3 -m pip install setuptools
python3 -m pip install -e $(Build.SourcesDirectory)/somepypimodule/src --force-reinstall --no-deps
displayName: 'pip install pip package'
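To answer the question's follow-up: yes, the PythonScript task can still be used after that checkout, because the template repo's files are now on the agent. A sketch, assuming the checkout path above (checkout paths are relative to $(Agent.BuildDirectory)) and an illustrative script location:
- task: PythonScript@0
  inputs:
    scriptSource: 'filePath'
    scriptPath: '$(Agent.BuildDirectory)/this_template_repo/SOME_PATH_ON_REPO_B/my_dumb_script.py'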
I have a Cloud Source Repository where I maintain the code of my python package. I have set up two triggers:
A trigger that runs on every commit on every branch (this one installs my Python package and tests the code).
A trigger that runs on a pushed git tag (installs the package, tests, builds artifacts, and deploys them to my private PyPI repo).
During the second trigger, I want to verify that my Version number matches the git tag. In the setup.py file, I have added the code:
#!/usr/bin/env python
import sys
import os
from setuptools import setup
from setuptools.command.install import install
VERSION = "v0.1.5"
class VerifyVersionCommand(install):
"""Custom command to verify that the git tag matches our version"""
description = 'verify that the git tag matches our version'
def run(self):
tag = os.getenv('TAG_NAME')
if tag != VERSION:
info = "Git tag: {0} does not match the version of this app: {1}".format(
tag, VERSION
)
sys.exit(info)
setup(
name="name",
version=VERSION,
classifiers=["Programming Language :: Python :: 3 :: Only"],
py_modules=["name"],
install_requires=[
[...]
],
packages=["name"],
cmdclass={
'verify': VerifyVersionCommand,
}
)
The beginning of my cloudbuild.yaml looks like this:
steps:
- name: 'docker.io/library/python:3.8.6'
id: Install
entrypoint: /bin/sh
args:
- -c
- |
python3 -m venv /workspace/venv &&
. /workspace/venv/bin/activate &&
pip install -e .
- name: 'docker.io/library/python:3.8.6'
id: Verify
entrypoint: /bin/sh
args:
- -c
- |
. /workspace/venv/bin/activate &&
python setup.py verify
This works flawlessly on CircleCI, but on Cloud Build I get the error message:
Finished Step #0 - "Install"
Starting Step #1 - "Verify"
Step #1 - "Verify": Already have image: docker.io/library/python:3.8.6
Step #1 - "Verify": running verify
Step #1 - "Verify": /workspace/venv/lib/python3.8/site-packages/setuptools/dist.py:458: UserWarning: Normalizing 'v0.1.5' to '0.1.5'
Step #1 - "Verify": warnings.warn(tmpl.format(**locals()))
Step #1 - "Verify": Git tag: None does not match the version of this app: v0.1.5
Finished Step #1 - "Verify"
ERROR
ERROR: build step 1 "docker.io/library/python:3.8.6" failed: step exited with non-zero status: 1
Therefore, the TAG_NAME variable as specified in the Cloud Build documentation seems to not contain the git tag.
How can I access the git tag to verify it?
TAG_NAME is set as a substitution variable, but not as an environment variable.
You can do this:
- name: 'docker.io/library/python:3.8.6'
id: Verify
entrypoint: /bin/sh
env:
- "TAG_NAME=$TAG_NAME"
args:
- -c
- |
. /workspace/venv/bin/activate &&
python setup.py verify
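Also note that the default TAG_NAME substitution is only populated when the build is started by the tag trigger, so to exercise the Verify step end to end you push a tag to the repository, for example:
git tag v0.1.5
git push origin v0.1.5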
I am trying to run an Apache Beam pipeline with DirectRunner in Cloud Build, and to do that I need to install the requirements for the Python script, but I am facing some errors.
This is part of my cloudbuild.yaml
steps:
- name: gcr.io/cloud-builders/gcloud
entrypoint: 'bash'
args: [ '-c', "gcloud secrets versions access latest --secret=env --format='get(payload.data)' | tr '_-' '/+' | base64 -d > .env" ]
id: GetSecretEnv
# - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
# entrypoint: 'bash'
# args: ['-c', 'gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --quiet tweepy-to-pubsub/app.yaml']
- name: gcr.io/cloud-builders/gcloud
id: Access id_github
entrypoint: 'bash'
args: [ '-c', 'gcloud secrets versions access latest --secret=id_github> /root/.ssh/id_github' ]
volumes:
- name: 'ssh'
path: /root/.ssh
# Set up git with key and domain
- name: 'gcr.io/cloud-builders/git'
id: Set up git with key and domain
entrypoint: 'bash'
args:
- '-c'
- |
chmod 600 /root/.ssh/id_github
cat <<EOF >/root/.ssh/config
Hostname github.com
IdentityFile /root/.ssh/id_github
EOF
ssh-keyscan -t rsa github.com > /root/.ssh/known_hosts
volumes:
- name: 'ssh'
path: /root/.ssh
- name: 'gcr.io/cloud-builders/git'
# Connect to the repository
id: Connect and clone repository
dir: workspace
args:
- clone
- --recurse-submodules
- git@github.com:x/repo.git
volumes:
- name: 'ssh'
path: /root/.ssh
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
entrypoint: '/bin/bash'
args: [ '-c',
'source /venv/bin/activate' ]
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
entrypoint: '/bin/bash'
dir: workspace
args: ['pip', 'install','-r', '/dir1/dir2/requirements.txt']
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
entrypoint: 'python'
dir: workspace
args: [ 'dir1/dir2/script.py',
'--runner=DirectRunner' ]
timeout: "1600s"
Without the step where I install the requirements this works, but I need the libraries because I get Python errors for missing libs. On the second step (actually the 5th in the original form of the cloud build) the build fails with this error:
Step #5: Already have image (with digest): gcr.io/x/dataflow-python3
Step #5: import-im6.q16: unable to open X server `' @ error/import.c/ImportImageCommand/360.
Step #5: import-im6.q16: unable to open X server `' @ error/import.c/ImportImageCommand/360.
Step #5: /usr/local/bin/pip: line 5: from: command not found
Step #5: /usr/local/bin/pip: pip: line 7: syntax error near unexpected token `('
Step #5: /usr/local/bin/pip: pip: line 7: ` sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])'
How do I fix this? I also tried some examples from the internet and they don't work.
Edit: First I deploy to App Engine, and then in the Cloud Build VM I download the repo, install the requirements, and try to run the Python script.
I think that the issue comes from your path definitions:
'source /venv/bin/activate'
and
'pip', 'install','-r', '/dir1/dir2/requirements.txt'
You use absolute paths and that doesn't work on Cloud Build. The current working directory is /workspace/. If you use relative paths, simply add a dot . before the path and it should work better.
Or not... Indeed, you activate the venv in one step and run pip install in the following step. From one step to the next, the runtime environment is torn down and reloaded in another container. Thus, the environment variables set by your source command disappear in the pip step.
In addition, your Cloud Build environment is created for the build and then destroyed. You don't need a venv in this case, and you can simplify the last 3 steps like this:
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
entrypoint: '/bin/bash'
args:
- '-c'
- |
pip install -r ./dir1/dir2/requirements.txt
python ./dir1/dir2/script.py --runner=DirectRunner
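If a virtual environment really were needed across steps, it would have to live under /workspace (the only directory that persists from one step to the next) and be re-activated in every step that uses it. A sketch reusing the image and paths from the question:
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
  entrypoint: '/bin/bash'
  args:
    - '-c'
    - |
      python3 -m venv /workspace/venv &&
      . /workspace/venv/bin/activate &&
      pip install -r ./dir1/dir2/requirements.txt
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
  entrypoint: '/bin/bash'
  args:
    - '-c'
    - |
      . /workspace/venv/bin/activate &&
      python ./dir1/dir2/script.py --runner=DirectRunner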
I am having some trouble running my function app in Python. When I push the function directly with func azure functionapp publish air-temperature-v2 --no-bundler, the function is published straight to the Azure portal and works as expected. However, if I commit and push to Azure Repos and let it generate its build, everything succeeds, but when I try to run the function it gives a "module named 'pandas' not found" error. It works fine locally and online (using the no-bundler command). My question is: how can I add the no-bundler command to the Azure Python pipeline? My YAML is as follows:
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/python
trigger:
- master
pool:
vmImage: 'ubuntu-latest'
strategy:
matrix:
Python36:
python.version: '3.6'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(python.version)'
displayName: 'Use Python $(python.version)'
- script: |
python -m pip install --upgrade pip
pip install -r requirements.txt
displayName: 'Install dependencies'
- script: python HttpExample/__init__.py
- task: ArchiveFiles@2
inputs:
rootFolderOrFile: '$(Build.SourcesDirectory)'
includeRootFolder: false
archiveType: 'zip'
archiveFile: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
replaceExistingArchive: true
verbose: # (no value); this input is optional
- task: PublishBuildArtifacts@1
#- script: |
# pip install pytest pytest-azurepipelines
# pytest
# displayName: 'pytest'
# ignore
- task: AzureFunctionApp@1
inputs:
azureSubscription: 'zohair-rg'
appType: 'functionAppLinux'
appName: 'air-temperature-v2'
package: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
startUpCommand: 'func azure functionapp publish air-temperature-v2 --no-bundler'
I have even tried adding the no-bundler command as the startup command, but it still does not work.
This could be related to an azure-functions-core-tools version issue; please try the following and deploy:
Update your azure-functions-core-tools version to the latest.
Try to deploy your build using the command below:
func azure functionapp publish <app_name> --build remote
There was a similar issue raised some time back; I can't recall it exactly, but this fix worked.
Alternatively, have you considered the Azure CLI task to deploy the Azure Function? Here is a detailed article explaining Azure CI/CD using the Azure CLI for Python:
https://clemenssiebler.com/deploy-azure-functions-python-azure-devops/
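For reference, such a deployment step might look roughly like this (the subscription and app names are copied from the question; the core-tools install line and the rest of the inputs are assumptions to adapt):
- task: AzureCLI@2
  inputs:
    azureSubscription: 'zohair-rg'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # assumes Node.js is available on the agent for installing the core tools
      npm install -g azure-functions-core-tools@3 --unsafe-perm true
      func azure functionapp publish air-temperature-v2 --build remote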
Hope it helps.