Problem with installing Python requirements in GCP Cloud Build - python

I am trying to run an Apache Beam pipeline with DirectRunner in Cloud Build, and to do that I need to install the requirements for the Python script, but I am facing some errors.
This is part of my cloudbuild.yaml:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: [ '-c', "gcloud secrets versions access latest --secret=env --format='get(payload.data)' | tr '_-' '/+' | base64 -d > .env" ]
  id: GetSecretEnv
# - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
#   entrypoint: 'bash'
#   args: ['-c', 'gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --quiet tweepy-to-pubsub/app.yaml']
- name: gcr.io/cloud-builders/gcloud
  id: Access id_github
  entrypoint: 'bash'
  args: [ '-c', 'gcloud secrets versions access latest --secret=id_github > /root/.ssh/id_github' ]
  volumes:
  - name: 'ssh'
    path: /root/.ssh
# Set up git with key and domain
- name: 'gcr.io/cloud-builders/git'
  id: Set up git with key and domain
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    chmod 600 /root/.ssh/id_github
    cat <<EOF >/root/.ssh/config
    Hostname github.com
    IdentityFile /root/.ssh/id_github
    EOF
    ssh-keyscan -t rsa github.com > /root/.ssh/known_hosts
  volumes:
  - name: 'ssh'
    path: /root/.ssh
- name: 'gcr.io/cloud-builders/git'
  # Connect to the repository
  id: Connect and clone repository
  dir: workspace
  args:
  - clone
  - --recurse-submodules
  - git@github.com:x/repo.git
  volumes:
  - name: 'ssh'
    path: /root/.ssh
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
  entrypoint: '/bin/bash'
  args: [ '-c', 'source /venv/bin/activate' ]
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
  entrypoint: '/bin/bash'
  dir: workspace
  args: ['pip', 'install', '-r', '/dir1/dir2/requirements.txt']
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
  entrypoint: 'python'
  dir: workspace
  args: [ 'dir1/dir2/script.py', '--runner=DirectRunner' ]
timeout: "1600s"
Without the step where I install the requirements this works, but I need the libraries because the Python script fails with missing imports. On the pip install step (step #5 in the original form of the cloudbuild.yaml), the build fails with this error:
Step #5: Already have image (with digest): gcr.io/x/dataflow-python3
Step #5: import-im6.q16: unable to open X server `' # error/import.c/ImportImageCommand/360.
Step #5: import-im6.q16: unable to open X server `' # error/import.c/ImportImageCommand/360.
Step #5: /usr/local/bin/pip: line 5: from: command not found
Step #5: /usr/local/bin/pip: pip: line 7: syntax error near unexpected token `('
Step #5: /usr/local/bin/pip: pip: line 7: ` sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])'
How do I fix this? I also tried some examples from the internet and they don't work.
Edit: First I deploy on App Engine, and then in the Cloud Build VM I clone the repo, install the requirements, and try to run the Python script.

I think that the issue comes from your path definitions:
'source /venv/bin/activate'
and
'pip', 'install','-r', '/dir1/dir2/requirements.txt'
You use absolute paths and that doesn't work on Cloud Build. The current working directory is /workspace/. If you use a relative path, simply add a dot . before the path and it should work better.
Or not... Indeed, you have the venv activation in one step and the pip install in the following step. From one step to the next, the runtime environment is offloaded and reloaded with another container. Thus, the environment set up by your source command has disappeared by the pip step.
In addition, your Cloud Build environment is built for the build and destroyed afterwards. You don't need a venv in this case, and you can simplify the last 3 steps like this:
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
  entrypoint: '/bin/bash'
  args:
  - '-c'
  - |
    pip install -r ./dir1/dir2/requirements.txt
    python ./dir1/dir2/script.py --runner=DirectRunner
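If you do want to keep a virtual environment anyway (for example to mirror your local setup), a rough sketch of one workaround, assuming the dataflow-python3 image has python3 on its PATH, is to create the venv under /workspace, which is the only directory persisted from one step to the next, and re-activate it in each later step:
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
  entrypoint: '/bin/bash'
  args:
  - '-c'
  - |
    # the venv lives under /workspace, so it survives into later steps
    python3 -m venv /workspace/venv
    . /workspace/venv/bin/activate
    pip install -r ./dir1/dir2/requirements.txt
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
  entrypoint: '/bin/bash'
  args:
  - '-c'
  - |
    # re-activate in every step that needs the installed packages
    . /workspace/venv/bin/activate
    python ./dir1/dir2/script.py --runner=DirectRunner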

Related

how to template python tasks in azure devops pipelines

I have two repositories A & B.
Azure Repository A - Contains a python app
Azure Repository B - Contains .yml templates and .py scripts I want to run in the .yml templates
According to the documentation, I cannot do this, because when I expand the template into the calling repository A's pipeline, it acts like a code directive and just injects the code; it does not know or care about the .py files in the repository.
What are my options without doing all my .py routines inline?
Azure Repo A's Pipeline Yaml file
trigger: none

resources:
  pipelines:
  - pipeline: my_project_a_pipeline
    source: trigger_pipeline
    trigger:
      branches:
        include:
        - master
  repositories:
  - repository: template_repo_b
    type: git
    name: template_repo_b
    ref: main

stages:
- template: pipelines/some_template.yml#template_repo_b
  parameters:
    SOME_PARAM_KEY: "some_param_value"
Azure Repo B's some_template.yml
parameters:
- name: SOME_PARAM_KEY
  type: string

stages:
- stage: MyStage
  displayName: "SomeStage"
  jobs:
  - job: "MyJob"
    displayName: "MyJob"
    steps:
    - bash: |
        echo Bashing
        ls -la
      displayName: 'Execute Warmup'
    - task: PythonScript@0
      inputs:
        scriptSource: "filePath"
        scriptPath: /SOME_PATH_ON_REPO_B/my_dumb_script.py
        script: "my_dumb_script.py"
Is there an option to wire the .py files into a completely separate repo C... add C to the resources of B's templates... and be on my way?
EDIT:
I can see "In Azure templates repository, is there a way to mention repository for a filePath parameter of azure task 'pythonScript'?", but then how do I consume the Python package... can I still use the PythonScript task? It sounds like I would then need to call my pip-packaged code straight from bash.
I figured it out: how to pip install .py files in Azure DevOps pipelines, using Azure repositories, via a template in the same repo.
Just add a reference to yourself at the top of any template.
In the consuming repo:
repositories:
- repository: this_template_repo
  type: git
  name: this_template_repo
  ref: master
Then add a job, referencing yourself by that name:
- job: "PIP_INSTALL_LIBS"
displayName: "pip install libraries to agent"
steps:
- checkout: this_template_repo
path: this_template_repo
- bash: |
python3 -m pip install setuptools
python3 -m pip install -e $(Build.SourcesDirectory)/somepypimodule/src --force-reinstall --no-deps
displayName: 'pip install pip package'
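Once the install step has run, later steps in the same job can import the package with the agent's Python. A small smoke-test step you could append, where the module name somepypimodule is only a placeholder taken from the path above:
  - bash: |
      # assumes the editable install above put the package on the agent's python3 path
      python3 -c "import somepypimodule; print('import OK')"
    displayName: 'smoke test installed package'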

How do I access a git tag in a google cloud build?

I have a Cloud Source Repository where I maintain the code of my Python package. I have set up two triggers:
A trigger that runs on every commit on every branch (this one installs my Python package and tests the code).
A trigger that runs on a pushed git tag (installs the package, tests, builds artifacts, and deploys them to my private PyPI repo).
During the second trigger, I want to verify that my version number matches the git tag. In the setup.py file, I have added this code:
#!/usr/bin/env python
import sys
import os

from setuptools import setup
from setuptools.command.install import install

VERSION = "v0.1.5"


class VerifyVersionCommand(install):
    """Custom command to verify that the git tag matches our version"""
    description = 'verify that the git tag matches our version'

    def run(self):
        tag = os.getenv('TAG_NAME')
        if tag != VERSION:
            info = "Git tag: {0} does not match the version of this app: {1}".format(
                tag, VERSION
            )
            sys.exit(info)


setup(
    name="name",
    version=VERSION,
    classifiers=["Programming Language :: Python :: 3 :: Only"],
    py_modules=["name"],
    install_requires=[
        [...]
    ],
    packages=["name"],
    cmdclass={
        'verify': VerifyVersionCommand,
    }
)
The beginning of my cloudbuild.yaml looks like this:
steps:
- name: 'docker.io/library/python:3.8.6'
  id: Install
  entrypoint: /bin/sh
  args:
  - -c
  - |
    python3 -m venv /workspace/venv &&
    . /workspace/venv/bin/activate &&
    pip install -e .
- name: 'docker.io/library/python:3.8.6'
  id: Verify
  entrypoint: /bin/sh
  args:
  - -c
  - |
    . /workspace/venv/bin/activate &&
    python setup.py verify
This works flawlessly on CircleCI, but on Cloud Build I get the error message:
Finished Step #0 - "Install"
Starting Step #1 - "Verify"
Step #1 - "Verify": Already have image: docker.io/library/python:3.8.6
Step #1 - "Verify": running verify
Step #1 - "Verify": /workspace/venv/lib/python3.8/site-packages/setuptools/dist.py:458: UserWarning: Normalizing 'v0.1.5' to '0.1.5'
Step #1 - "Verify": warnings.warn(tmpl.format(**locals()))
Step #1 - "Verify": Git tag: None does not match the version of this app: v0.1.5
Finished Step #1 - "Verify"
ERROR
ERROR: build step 1 "docker.io/library/python:3.8.6" failed: step exited with non-zero status: 1
Therefore, the TAG_NAME variable as specified in the Cloud Build documentation seems to not contain the git tag.
How can I access the git tag to verify it?
TAG_NAME is set as a substitution variable but not as an environment variable.
You can do this:
- name: 'docker.io/library/python:3.8.6'
  id: Verify
  entrypoint: /bin/sh
  env:
  - "TAG_NAME=$TAG_NAME"
  args:
  - -c
  - |
    . /workspace/venv/bin/activate &&
    python setup.py verify
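Alternatively, since substitutions are also expanded inside args, the tag should be injectable inline in the shell command; a sketch of the same Verify step under that assumption:
- name: 'docker.io/library/python:3.8.6'
  id: Verify
  entrypoint: /bin/sh
  args:
  - -c
  - |
    # $TAG_NAME below is replaced by Cloud Build before the shell runs
    . /workspace/venv/bin/activate &&
    TAG_NAME=$TAG_NAME python setup.py verify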

FileNotFoundError: Github Actions Workflow fails when creating directory or file during test

I am using the GitHub Python application workflow for CI. My application creates a folder to store temporary files. It works perfectly when testing on localhost, but it will not let me create a new directory in GitHub Actions. I get the error below:
@classmethod
def save_files(cls, files: list) -> str:
    """
    saves a list of files in the "files"
    folder in app
    :param files: list of FileStorage objects
    :return: directory name where files saved
    """
    folder = time.strftime("%Y%m%d-%H%M%S")
    folder_path = Path(__file__).parent / "files" / folder
    os.mkdir(folder_path)
E   FileNotFoundError: [Errno 2] No such file or directory: /home/runner/work/DocumentAnalysisTool/DocumentAnalysisTool/app/files/20200430-235749
Here is my workflow pythonapp.yml file:
name: Python application

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python 3.8
      uses: actions/setup-python@v1
      with:
        python-version: 3.8
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
    - name: Lint with flake8
      run: |
        pip install flake8
        # stop the build if there are Python syntax errors or undefined names
        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
    - name: Test with pytest
      run: |
        pip install pytest
        pytest
Thank you in advance
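One likely cause, offered as a guess rather than a confirmed fix: git does not track empty directories, so the app/files folder that exists locally is missing on the runner after actions/checkout, and os.mkdir raises FileNotFoundError because it will not create missing parent directories. A minimal sketch of a workaround is to create the parents on demand:
from pathlib import Path
import time

folder = time.strftime("%Y%m%d-%H%M%S")
folder_path = Path(__file__).parent / "files" / folder
# parents=True also creates the missing intermediate "files" directory;
# exist_ok=True keeps a repeated call from raising
folder_path.mkdir(parents=True, exist_ok=True)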

could not read Username for 'https://source.developers.google.com'

I am trying to install a private Python library in a gcr.io container and I am getting the error message even after setting configure-docker for gcloud and --network=cloudbuild in the YAML file.
RUN pip install -e git+https://source.developers.google.com/p/project_name/r/github_domain_repository_name#egg=package_path
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['auth', 'configure-docker']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--network=cloudbuild', '-t', 'gcr.io/project-${_ENVIRONMENT}/cloud_run-pubsub_example', './cloud-run/file_upserter/']

Serverless - Numpy - Unable to find good bind path format

I've been beating on this for over a week and been through all sorts of forum issues and posts and cannot resolve it. I'm trying to package numpy in a function, building requirements individually (I have multiple functions with multiple requirements that I'd like to keep separate).
Environment:
Windows 10 Home
Docker Toolbox for Windows:
Client:
  Version: 18.03.0-ce
  API version: 1.37
  Go version: go1.9.4
  Git commit: 0520e24302
  Built: Fri Mar 23 08:31:36 2018
  OS/Arch: windows/amd64
  Experimental: false
  Orchestrator: swarm

Server: Docker Engine - Community
  Engine:
    Version: 18.09.0
    API version: 1.39 (minimum version 1.12)
    Go version: go1.10.4
    Git commit: 4d60db4
    Built: Wed Nov 7 00:52:55 2018
    OS/Arch: linux/amd64
    Experimental: false

Serverless Version:
  serverless version 6.4.1
  serverless-python-requirements version 6.4.1
Directory Structure:
|-test
  |-env.yml
  |-serverless.yml
  |-Dockerfile
  |-functions
    |-f1
      |-index.py
      |-requirements.txt
      |-sub_function_1.py
      |-sub_function_2.py
    |-f2
      |-index.py
      |-requirements.txt
      |-sub_function_3.py
      |-sub_function_4.py
serverless.yml
service: test

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    zip: true
    dockerFile: Dockerfile
    dockerizePip: non-linux

provider:
  name: aws
  runtime: python3.6
  stage: dev
  environment: ${file(./env.yml):${opt:stage, self:provider.stage}.env}
  region: ${file(./env.yml):${opt:stage, self:provider.stage}.aws.region}
  profile: ${file(./env.yml):${opt:stage, self:provider.stage}.aws.profile}

package:
  individually: true

functions:
  f1:
    handler: index.handler
    module: functions/f1
  f2:
    handler: index.handler
    module: functions/f2
I have my project files in C:\Serverless\test. I run npm init, followed by npm i --save serverless-python-requirements, accepting all defaults. I get the errors below on sls deploy -v, even though I've added C:\ to Shared Folders on the running default VM in VirtualBox and selected auto-mount and permanent.
If I comment out both dockerizePip and dockerFile I get the following, as expected based on here and other SO posts:
Serverless: Invoke invoke
{
  "errorMessage": "Unable to import module 'index'"
}
If I comment out dockerfile I get:
Serverless: Docker Image: lambci/lambda:build-python3.6
Error --------------------------------------------------
error during connect: Get https://XXXXXX/v1.37/version: dial tcp XXXXXXXXXX: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
at dockerCommand (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:20:11)
at getBindPath (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:100:3)
With the Dockerfile
# AWS Lambda execution environment is based on Amazon Linux 1
FROM amazonlinux:1
# Install Python 3.6
RUN yum -y install python36 python36-pip
# Install your dependencies
RUN curl -s https://bootstrap.pypa.io/get-pip.py | python3
RUN yum -y install python3-devel mysql-devel gcc
# Set the same WORKDIR as default image
RUN mkdir /var/task
WORKDIR /var/task
I get:
Serverless: Building custom docker image from Dockerfile...
Serverless: Docker Image: sls-py-reqs-custom
Error --------------------------------------------------
Unable to find good bind path format
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Stack Trace --------------------------------------------
Error: Unable to find good bind path format
at getBindPath (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:142:9)
at installRequirements (C:\Serverless\test\node_modules\serverless-python-requirements\lib\pip.js:152:7)
at installRequirementsIfNeeded (C:\Serverless\test\node_modules\serverless-python-requirements\lib\pip.js:451:3)
If I move my project to C:\Users\, I get this instead:
Serverless: Docker Image: sls-py-reqs-custom
Serverless: Trying bindPath /c/Users/Serverless/test/.serverless/requirements (run,--rm,-v,/c/Users/Serverless/test/.serverless/requirements:/test,alpine,ls,/test/requirements.txt)
Serverless: /test/requirements.txt
Error --------------------------------------------------
docker: Error response from daemon: create "/c/Users/Serverless/test/.serverless/requirements": "\"/c/Users/Serverless/test/.serverless/requirements\"" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Stack Trace --------------------------------------------
Error: docker: Error response from daemon: create "/c/Users/Serverless/test/.serverless/requirements": "\"/c/Users/Serverless/test/.serverless/requirements\"" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.
at dockerCommand (C:\Users\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:20:11)
at getDockerUid (C:\Users\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:162:14)
I've seen the Makefile-style recommendation from @brianz here, but I'm not sure how to adapt that to this (Makefiles are not my strong suit). I'm a bit at a loss as to what to do next, and advice would be greatly appreciated. TIA.
I was unable to make the plugin work but I found a better solution anyhow - Lambda Layers. This is a bonus because it reduces the size of the lambda and allows code/file reuse. There is a pre-built lambda layer for numpy and scipy that you can use, but I built my own to show myself how it all works. Here's how I made it work:
Create a layer package:
Open an EC2 instance or Ubuntu or Linux or whatever - this is needed so we can compile the runtime binaries correctly.
Make a dependencies package zip - you must use the directory structure python/lib/python3.6/site-packages for Python to find it at runtime:
mkdir -p tmpdir/python/lib/python3.6/site-packages
pip install -r requirements.txt --no-deps -t tmpdir/python/lib/python3.6/site-packages
cd tmpdir
zip -r ../py_dependencies.zip .
cd ..
rm -r tmpdir
Push layer zip to AWS - requires latest awscli
sudo pip install awscli --upgrade --user
sudo aws lambda publish-layer-version \
--layer-name py_dependencies \
--description "Python 3.6 dependencies [numpy=0.15.4]" \
--license-info "MIT" \
--compatible-runtimes python3.6 \
--zip-file fileb://py_dependencies.zip \
--profile python_dev_serverless
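If you need to look up the ARN again later instead of copying it from the console or the publish output, it can be queried with the CLI; a small sketch using the same profile:
aws lambda list-layer-versions \
    --layer-name py_dependencies \
    --query 'LayerVersions[0].LayerVersionArn' \
    --output text \
    --profile python_dev_serverless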
To use in any function that requires numpy, just use the arn that is shown in the console or during the upload above
f1:
  handler: index.handler_f_use_numpy
  include:
    - functions/f_use_numpy.py
  layers:
    - arn:aws:lambda:us-west-2:XXXXX:layer:py_dependencies:1
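Inside the function code nothing special is needed: at runtime the layer is mounted under /opt and its python/lib/python3.6/site-packages directory is already on the import path, so the handler imports numpy as usual. A minimal illustrative sketch of functions/f_use_numpy.py (only the handler name comes from the config above; the body is made up):
import numpy as np  # resolved from the layer at runtime, not from the deployment package

def handler_f_use_numpy(event, context):
    arr = np.arange(10)
    return {"sum": int(arr.sum())}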
As an added bonus, you can push common files like constants to a layer as well. Here's how I did it so the same code works both on Windows for testing and on the Lambda:
import json
import platform

# Set common path
COMMON_PATH = "../../layers/common/"
if platform.system() == "Linux":
    COMMON_PATH = "/opt/common/"

def handler_common(event, context):
    # Read from a constants.json file
    with open(COMMON_PATH + 'constants.json') as f:
        return json.load(f)
When I got the same issue, I opened Docker, went to Settings / Shared Drives, chose "Reset credentials", applied my changes again, and this cleared the error.
I fixed this issue by temporarily disabling Windows Firewall.
