I have a Lambda function that uses the lightgbm library.
Unfortunately, it gives an error when trying to import it in Python: libgomp.so.1: cannot open shared object file. From that I figured out I need to do apt-get install libgomp1, and maybe something more.
How am I supposed to run these commands?
I assume it is better to use Layers, or something similar, because running these commands every time the Lambda starts doesn't make sense.
But how do I do sudo apt-get into a particular folder? From what I know, that is not possible.
So my question boils down to: how do I run these various bash commands and install packages, like you would in a Dockerfile, but for a Zip-packaged Lambda?
I am using AWS SAM for deployment and development.
You can run Lambda functions from your own Docker images, which gives you almost full control over what the image contains.
Here's a simple example of a Python application executed in a container: https://docs.aws.amazon.com/lambda/latest/dg/python-image.html
Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
# Install the function's dependencies using file requirements.txt
# from your project folder.
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]
app.py:
import sys
def handler(event, context):
    return 'Hello from AWS Lambda using Python' + sys.version + '!'
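Coming back to the original lightgbm problem: if you prefer to keep a zip-packaged function instead of switching to an image, the missing libgomp.so.1 can be shipped in a layer. Below is a minimal sketch, assuming a layer/ staging folder and the same public.ecr.aws/lambda/python:3.8 base image (the folder and layer names are placeholders, not from the question). Lambda extracts layers under /opt, where /opt/python is on sys.path and /opt/lib is on LD_LIBRARY_PATH, so this layout makes both the package and the shared library visible.
# Sketch: build a layer zip containing lightgbm plus the libgomp.so.1 it needs.
mkdir -p layer
docker run --rm --entrypoint /bin/bash \
  -v "$PWD/layer":/layer public.ecr.aws/lambda/python:3.8 -c '
    yum install -y libgomp &&
    mkdir -p /layer/python /layer/lib &&
    pip3 install lightgbm -t /layer/python &&
    cp /usr/lib64/libgomp.so.1 /layer/lib/'
(cd layer && zip -r ../lightgbm-layer.zip python lib)
The resulting lightgbm-layer.zip can then be registered as a layer (for example with an AWS::Serverless::LayerVersion resource, as in the templates further down) and attached to the function.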
I use SAM and a Docker image to run a part of a large Java application as a lambda function. Here's what my CF template looks like:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: '...'
Parameters:
  Customer:
    Type: String
    Description: Customer ID
  Environment:
    Type: String
    Description: Environment
    AllowedValues: ["production", "test"]
  AppVersion:
    Type: String
    Description: App version
Resources:
  AppLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub 'app-${Customer}-${Environment}'
      PackageType: Image
      ImageUri: 'applambda:latest'
      Role: !GetAtt AppLambdaRole.Arn
      Architectures:
        - x86_64
      Timeout: 30
      MemorySize: 1024
      Description: 'App lambda endpoint (see tags for more info).'
      Environment:
        Variables:
          # This is here to improve cold start speed.
          # https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-function-performance-for-java/
          JAVA_TOOL_OPTIONS: '-XX:+TieredCompilation -XX:TieredStopAtLevel=1'
      Tags:
        Name: !Sub 'app-${Customer}-${Environment}'
        Customer: !Ref Customer
        Environment: !Ref Environment
        AppVersion: !Ref AppVersion
        Application: myApp
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: ./
      DockerTag: latest
  AppLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Join ['/', ['/aws/lambda', !Ref AppLambda]]
      RetentionInDays: 7
    DeletionPolicy: Delete
    UpdateReplacePolicy: Retain
  AppLambdaPinger:
    Type: AWS::Events::Rule
    Properties:
      Description: Keeps the app lambda warm.
      ScheduleExpression: 'rate(15 minutes)'
      Targets:
        - Arn: !GetAtt AppLambda.Arn
          Id: TargetLambda
          Input: '{"ping":"pong"}'
  AppLambdaPingerPermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref AppLambda
      Action: lambda:InvokeFunction
      Principal: events.amazonaws.com
      SourceArn: !GetAtt AppLambdaPinger.Arn
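For completeness, this is roughly how such an image-based template gets built and deployed with the SAM CLI. sam build uses the Metadata section (Dockerfile, DockerContext, DockerTag) to run docker build for you; the exact deploy flags depend on your SAM CLI version, so treat this as a sketch:
sam build            # builds the applambda image from the Metadata section
sam deploy --guided  # prompts for a stack name and an ECR repository to push the image to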
I have two repositories A & B.
Azure Repository A - Contains a python app
Azure Repository B - Contains .yml templates and .py scripts I want to run in the .yml templates
According to the documentation, I cannot do this, because when I expand the template into the calling repository A's pipeline, it acts like a code directive and just injects the code; it does not know or care about the .py files in repository B.
What are my options, other than writing all my .py routines inline?
Azure Repo A's Pipeline Yaml file
trigger: none
resources:
  pipelines:
    - pipeline: my_project_a_pipeline
      source: trigger_pipeline
      trigger:
        branches:
          include:
            - master
  repositories:
    - repository: template_repo_b
      type: git
      name: template_repo_b
      ref: main
stages:
  - template: pipelines/some_template.yml@template_repo_b
    parameters:
      SOME_PARAM_KEY: "some_param_value"
Azure Repo B's some_template.yml
parameters:
  - name: SOME_PARAM_KEY
    type: string
stages:
  - stage: MyStage
    displayName: "SomeStage"
    jobs:
      - job: "MyJob"
        displayName: "MyJob"
        steps:
          - bash: |
              echo Bashing
              ls -la
            displayName: 'Execute Warmup'
          - task: PythonScript@0
            inputs:
              scriptSource: "filePath"
              scriptPath: /SOME_PATH_ON_REPO_B/my_dumb_script.py
              script: "my_dumb_script.py"
Is there an option to wire the .py files into a completely separate repo C, add C to the resources of B's templates, and be on my way?
EDIT:
I can see In Azure templates repository, is there a way to mention repository for a filePath parameter of azure task 'pythonScript'?, but then how do I consume the Python package? Can I still use the PythonScript task? It sounds like I would then need to call my pip-packaged code straight from bash.
I figured it out: how to pip install .py files in Azure DevOps pipelines, using Azure repositories, via a template in the same repo.
Just add a reference to yourself at the top of any template.
In the consuming repo:
repositories:
  - repository: this_template_repo
    type: git
    name: this_template_repo
    ref: master
Then add a job, referencing yourself by that name:
- job: "PIP_INSTALL_LIBS"
displayName: "pip install libraries to agent"
steps:
- checkout: this_template_repo
path: this_template_repo
- bash: |
python3 -m pip install setuptools
python3 -m pip install -e $(Build.SourcesDirectory)/somepypimodule/src --force-reinstall --no-deps
displayName: 'pip install pip package'
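Since each job gets its own agent, the install only helps steps that run later in the same job. As a quick sanity check you can add a bash step after the install that runs something like the following (somepypimodule is just the placeholder name from the snippet above; the real import name depends on what your package defines):
# Verify the editable install is importable on the agent.
python3 -c "import somepypimodule; print(somepypimodule.__file__)"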
I am working in PyCharm with the AWS SAM and AWS SAM CLI modules. I am trying to set up a simple program:
An AWS Lambda layer for "ROCFacade"
ROCFacade imports the requests module. After installing it with pip, I copied it from the External Libraries/python3.8/site-packages folder (third box) to the lambda-layers subfolder in the second box.
I am trying to call it from hello-world/app.py which so far is little more than the boilerplate installed by AWS SAM
When I try to run it, PyCharm reports that the ROCFacade module cannot be found.
Folder structure
The error message occurs whether I run it with an "app" configuration or with the Lambda configuration, below.
I have another project that uses the same ROCFacade with a simple main.py console app, so the code does work. I'm not sure if my problem here is with environment variables (i.e., Python doesn't know to look in the lambda-layers folder) or with the Python app/Lambda configuration. I am a newbie to both Python and Lambda/AWS development.
Thank you
Lambda error message
Lambda configuration
I found my oversight. In template.yaml you need to add a reference to the layer in the function's definition, and also define the layer itself as a resource.
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.8
      Layers:
        - !Ref ROCFacadeLayer
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
  ROCFacadeLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: ROCFacadeLayer
      ContentUri: lambda-layers/roc-facade-layer.zip
      CompatibleRuntimes:
        - python3.7
        - python3.8
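One related detail that is easy to miss: the Python runtime only adds /opt/python to sys.path, so whatever ends up inside roc-facade-layer.zip must sit under a top-level python/ directory. A minimal sketch of building the zip that way, assuming ROCFacade is an importable package directory (the staging commands are mine, not from the original post):
# Stage the package under python/ so it is importable once the layer is
# extracted to /opt in the Lambda environment, then zip it where the
# template's ContentUri expects it.
mkdir -p python lambda-layers
cp -r ROCFacade python/
zip -r lambda-layers/roc-facade-layer.zip python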
I am utilizing https://www.npmjs.com/package/youtube-dl-exec through a simple JS Lambda function on AWS Lambda (Node 14).
The code is pretty simple and gathers info for the given URL (as long as it is supported by YTDL). I have tested it with jest and it works well on my local machine, where Python 2.7 is installed.
My package.json dependencies look like:
"dependencies": {
"youtube-dl": "^3.5.0",
"youtube-dl-exec": "^1.2.0"
},
"devDependencies": {
"jest": "^26.6.3"
}
I am using a GitHub Action to deploy the code on push to master, using this main.yml file:
name: Deploy to AWS lambda
on: [push]
jobs:
  deploy_source:
    name: build and deploy lambda
    strategy:
      matrix:
        node-version: [14.x]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: npm install and build
        run: |
          npm ci
          npm run build --if-present
        env:
          CI: true
      - uses: actions/setup-python@v2
        with:
          python-version: '3.x' # Version range or exact version of a Python version to use, using SemVer's version range syntax
          architecture: 'x64'   # optional x64 or x86. Defaults to x64 if not specified
      - name: zip
        uses: montudor/action-zip@v0.1.0
        with:
          args: zip -qq -r ./bundle.zip ./
      - name: default deploy
        uses: appleboy/lambda-action@master
        with:
          aws_access_key_id: ${{ secrets.AWS_EEEEE_ID }}
          aws_secret_access_key: ${{ secrets.AWS_EEEEE_KEY }}
          aws_region: us-EEEEE
          function_name: DownloadEEEEE
          zip_file: bundle.zip
I am getting this error:
INFO Error: Command failed with exit code 127: /var/task/node_modules/youtube-dl-exec/bin/youtube-dl https://www.EXQEEEE.com/p/XCCRXqXInEEZ4W4 --dump-json --no-warnings --no-call-home --no-check-certificate --prefer-free-formats --youtube-skip-dash-manifest
/usr/bin/env: python: No such file or directory
at makeError (/var/task/node_modules/execa/lib/error.js:59:11)
at handlePromise (/var/task/node_modules/execa/index.js:114:26)
at processTicksAndRejections (internal/process/task_queues.js:93:5) {
shortMessage: 'Command failed with exit code 127: /var/task/node_modules/youtube-dl-exec/bin/youtube-dl https://www.instagram.com/p/CCRq_InFZ44 --dump-json --no-warnings --no-call-home --no-check-certificate --prefer-free-formats --youtube-skip-dash-manifest',
command: '/var/task/node_modules/youtube-dl-exec/bin/youtube-dl https://www.EXQEEEE.com/p/XCCRXqXInEEZ4W4 --dump-json --no-warnings --no-call-home --no-check-certificate --prefer-free-formats --youtube-skip-dash-manifest',
exitCode: 127,
signal: undefined,
signalDescription: undefined,
stdout: '',
stderr: '/usr/bin/env: python: No such file or directory',
failed: true,
timedOut: false,
isCanceled: false,
killed: false
}
I have tried adding a Lambda layer, adding Python in the main.yml file, and also installing it as a dependency, but perhaps I am doing something wrong, because the library is not able to find python via /usr/bin/env.
How do I make python available at that path?
Should I not use ubuntu-latest in the Lambda config (main.yml), since it doesn't come packed with Python by default?
Any help would be appreciated.
Note: I have obfuscated the URLs for privacy purposes.
The newer Node.js Lambda runtimes (nodejs10.x and later) no longer contain python, and therefore youtube-dl does not work there. Note that installing Python with actions/setup-python only affects the GitHub Actions runner that builds the zip, not the Lambda execution environment the function runs in.
I am trying to run an Apache Beam pipeline with DirectRunner in Cloud Build, and to do that I need to install the requirements for the Python script, but I am facing some errors.
This is part of my cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: 'bash'
    args: [ '-c', "gcloud secrets versions access latest --secret=env --format='get(payload.data)' | tr '_-' '/+' | base64 -d > .env" ]
    id: GetSecretEnv
  # - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  #   entrypoint: 'bash'
  #   args: ['-c', 'gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --quiet tweepy-to-pubsub/app.yaml']
  - name: gcr.io/cloud-builders/gcloud
    id: Access id_github
    entrypoint: 'bash'
    args: [ '-c', 'gcloud secrets versions access latest --secret=id_github > /root/.ssh/id_github' ]
    volumes:
      - name: 'ssh'
        path: /root/.ssh
  # Set up git with key and domain
  - name: 'gcr.io/cloud-builders/git'
    id: Set up git with key and domain
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        chmod 600 /root/.ssh/id_github
        cat <<EOF >/root/.ssh/config
        Hostname github.com
        IdentityFile /root/.ssh/id_github
        EOF
        ssh-keyscan -t rsa github.com > /root/.ssh/known_hosts
    volumes:
      - name: 'ssh'
        path: /root/.ssh
  - name: 'gcr.io/cloud-builders/git'
    # Connect to the repository
    id: Connect and clone repository
    dir: workspace
    args:
      - clone
      - --recurse-submodules
      - git@github.com:x/repo.git
    volumes:
      - name: 'ssh'
        path: /root/.ssh
  - name: 'gcr.io/$PROJECT_ID/dataflow-python3'
    entrypoint: '/bin/bash'
    args: [ '-c', 'source /venv/bin/activate' ]
  - name: 'gcr.io/$PROJECT_ID/dataflow-python3'
    entrypoint: '/bin/bash'
    dir: workspace
    args: ['pip', 'install', '-r', '/dir1/dir2/requirements.txt']
  - name: 'gcr.io/$PROJECT_ID/dataflow-python3'
    entrypoint: 'python'
    dir: workspace
    args: [ 'dir1/dir2/script.py', '--runner=DirectRunner' ]
timeout: "1600s"
Without the step where I install the requirements this works, but I need the libs (I get Python errors for missing libs), and on the pip install step (step #5 in the original form of the cloud build) the build fails with this error:
Step #5: Already have image (with digest): gcr.io/x/dataflow-python3
Step #5: import-im6.q16: unable to open X server `' # error/import.c/ImportImageCommand/360.
Step #5: import-im6.q16: unable to open X server `' # error/import.c/ImportImageCommand/360.
Step #5: /usr/local/bin/pip: line 5: from: command not found
Step #5: /usr/local/bin/pip: pip: line 7: syntax error near unexpected token `('
Step #5: /usr/local/bin/pip: pip: line 7: ` sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])'
How do I fix this? I have also tried some examples from the internet and they don't work.
Edit: First I deploy on App Engine, and then I download the repo on the Cloud Build VM, install the requirements, and try to run the Python script.
I think that the issue comes from your path definition
'source /venv/bin/activate'
and
'pip', 'install','-r', '/dir1/dir2/requirements.txt'
You use full path definitions and that doesn't work on Cloud Build: the current working directory is /workspace/. If you use relative paths (simply add a dot . before the path), it should work better.
Or not... Indeed, you have the venv activation in one step and the pip install in the following step. From one step to the next, the runtime environment is torn down and recreated with the next container. Thus, the environment set up by your source command has disappeared by the time the pip step runs.
In addition, your Cloud Build environment is created for the build and then destroyed. You don't need a venv in this case, and you can simplify the last 3 steps like this:
- name: 'gcr.io/$PROJECT_ID/dataflow-python3'
  entrypoint: '/bin/bash'
  args:
    - '-c'
    - |
      pip install -r ./dir1/dir2/requirements.txt
      python ./dir1/dir2/script.py --runner=DirectRunner
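If you want to check that combined step before pushing, you can reproduce roughly what it executes by running the same builder image locally. This is only a sketch: it assumes you can pull the gcr.io/$PROJECT_ID/dataflow-python3 image on your machine and that your local checkout matches what Cloud Build clones into /workspace.
# Mount the current checkout at /workspace, as Cloud Build does, and run
# the install + pipeline in a single bash invocation.
docker run --rm --entrypoint /bin/bash \
  -v "$PWD":/workspace -w /workspace \
  "gcr.io/$PROJECT_ID/dataflow-python3" \
  -c 'pip install -r ./dir1/dir2/requirements.txt && python ./dir1/dir2/script.py --runner=DirectRunner'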
How does one run an AWS Lambda function with layers locally?
My environment:
PyCharm project for an AWS Lambda function with Python 3.6 runtime.
AWS Toolkit
a file/folder structure similar to the one used to create a Lambda layer in https://aws.amazon.com/blogs/compute/working-with-aws-lambda-and-lambda-layers-in-aws-sam/, as follows:
+---.aws-sam
|       ....
+---test
|       app.py
|       requirements.txt
|
+---dependencies
|   \---python
|           constants.py
|           requirements.txt
|           sql.py
|           utils.py
and a deployment template like:
testFunc:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: teest/
    Handler: app.test
    Runtime: python3.6
    FunctionName: testFunc
    Events:
      test:
        Type: Api
        Properties:
          Path: /test
          Method: ANY
    Layers:
      - !Ref TempConversionDepLayer
TempConversionDepLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    LayerName: Layer1
    Description: Dependencies
    ContentUri: dependencies/
    CompatibleRuntimes:
      - python3.6
      - python3.7
    LicenseInfo: 'MIT'
    RetentionPolicy: Retain
I can deploy the function correctly and running it on AWS works well, but whenever I try to run the function locally, it fails with the error message:
`Unable to import module 'app': No module named 'sql'`
I've tried to read all possible resources about Layers and PyCharm, but nothing really helped.
Can anybody give a hand please?
Thank you,
I was able to get around this issue in PyCharm by adding a symbolic link to the directory that contains the layer's code.
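For reference, one way to set that up with the folder layout from the question (POSIX ln shown; on Windows mklink is the equivalent): either symlink the whole dependencies/python directory somewhere PyCharm treats as a sources root, or simply link the individual layer modules next to app.py so import sql resolves during local runs. A minimal sketch of the second option:
# Symlink the layer sources into the function folder so local runs find them.
cd test
ln -s ../dependencies/python/sql.py .
ln -s ../dependencies/python/utils.py .
ln -s ../dependencies/python/constants.py .
Running sam build first and then invoking the function with sam local invoke against the built template should also make the layer contents available locally, without touching the source tree.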