AWS Lambda | Azure Python Library Integration

I am trying to make Azure API calls from AWS Lambda using Python.
So I decided to create a Lambda layer for the Azure Compute Management Library.
I downloaded azure-mgmt-compute 17.0.0 from this link.
Then I added the zip to Lambda layers. When I try to import the azure library, I get the following error:
{
"errorMessage": "Unable to import module 'lambda_function': No module named 'azure'",
"errorType": "Runtime.ImportModuleError"
}
Then I decided to create a zip package from a virtual environment with the following commands:
virtualenv v-env
source v-env/bin/activate
pip install azure-mgmt-compute
deactivate
cd v-env/lib/python3.8/site-packages
zip -r9 ${OLDPWD}/function.zip .
Still no luck. Has anybody implemented something like this before?

You can use the Serverless Framework to accomplish this. Create a requirements.txt and add all the dependent packages to it, in your case azure-mgmt-compute. In serverless.yml, add the snippet below under the custom section (the pythonRequirements configuration comes from the serverless-python-requirements plugin) and reference the Lambda layer in your function. Then run sls deploy --stage dev. This will create the Lambda layer and attach it to the function, and you can import the dependent library in your Lambda directly.
functions:
  azure_container_instance:
    handler: azure_container_instance/handler.lambda_handler
    layers:
      - Ref: PythonRequirementsLambdaLayer
    timeout: 300

custom:
  pythonRequirements:
    dockerizePip: non-linux
    slim: true
    strip: false
    fileName: ./requirements.txt
    layer:
      name: ${self:provider.stage}-layerName
      description: Python requirements lambda layer
      compatibleRuntimes:
        - python3.8
      licenseInfo: GPLv3
      allowedAccounts:
        - '*'
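Once the layer is attached, the function code can import the SDK directly; a minimal smoke-test sketch, assuming requirements.txt lists azure-mgmt-compute:
import json

# Resolved from the Lambda layer built from requirements.txt
from azure.mgmt.compute import ComputeManagementClient

def lambda_handler(event, context):
    # If the module-level import succeeded, the layer is wired up correctly
    return {
        'statusCode': 200,
        'body': json.dumps('azure-mgmt-compute imported successfully')
    }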


How to organize Python code for AWS Lambda Layers

I am working in PyCharm with the AWS SAM and AWS SAM CLI modules. I am trying to set up a simple program:
An AWS Lambda layer for "ROCFacade"
ROCFacade imports the third-party requests module. After installing it with pip, I copied it from the External Libraries/python3.8/site-packages folder (third box) to the lambda-layers subfolder in the second box.
I am trying to call it from hello-world/app.py which so far is little more than the boilerplate installed by AWS SAM
When I try to run it, PyCharm reports that the ROCFacade module cannot be found.
(screenshot: folder structure)
The error message occurs whether I run it with an "app" configuration or with the Lambda configuration, below.
I have another project that uses the same ROCFacade with a simple main.py console app, so the code does work. I'm not sure if my problem here is with environment variables (i.e., Python doesn't know to look in the lambda-layers folder) or with the Python app/Lambda configuration. I am a newbie to both Python and Lambda/AWS development.
Thank you
(screenshots: Lambda error message and Lambda configuration)
I found my oversight. In template.yaml, the dev needs to reference the layer in the function definition and define the layer itself.
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.8
      Layers:
        - !Ref ROCFacadeLayer
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
  ROCFacadeLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: ROCFacadeLayer
      ContentUri: lambda-layers/roc-facade-layer.zip
      CompatibleRuntimes:
        - python3.7
        - python3.8
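With both resources defined, the handler can import the module shipped by the layer; a minimal sketch (assuming ROCFacade sits at the root of the layer zip's python/ directory):
# hello_world/app.py
import json

import ROCFacade  # provided by ROCFacadeLayer at runtime

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps("ROCFacade imported successfully")
    }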

pysftp library not working in AWS lambda layer

I want to upload files to an EC2 instance using the pysftp library (a Python script). So I have created a small Python script which uses the lines below to connect:
import pysftp

sftp = pysftp.Connection(
    host=Constants.MY_HOST_NAME,
    username=Constants.MY_EC2_INSTANCE_USERNAME,
    private_key="./mypemfilelocation.pem",
)
# some code here .....
# put() is a method on the Connection object, not on the pysftp module
sftp.put(file_to_be_upload, ec2_remote_file_path)
This script uploads files from my local Windows machine to the EC2 instance using a .pem file, and it works correctly.
Now I want to do this using AWS Lambda with API Gateway functionality.
So I have uploaded the Python script to AWS Lambda. I was not sure how to use the pysftp library in AWS Lambda, so I found a solution: add the pysftp library as a layer in AWS Lambda. I did it with
pip3 install pysftp -t ./library_folder
Then I made a zip of the above folder and added it as an AWS Lambda layer.
But I still got many errors, one after another:
No module named 'pysftp'
No module named 'paramiko'
Undefined Symbol: PyInt_FromLong
cannot import name '_bcrypt' from partially initialized module 'bcrypt' (most likely due to a circular import)
cffi module not found
I'm fed up with these errors and haven't found a proper solution. How can I use the pysftp library in my AWS Lambda seamlessly?
I built a pysftp layer and tested it on my Lambda with Python 3.8, just to check the import and a basic print:
import json
import pysftp

def lambda_handler(event, context):
    # TODO implement
    print(dir(pysftp))
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
I used the following docker tool to build the pysftp layer:
https://github.com/lambci/docker-lambda
So what I did for pysftp was:
# create a fresh python 3.8 environment for pysftp
python -m venv pysftp
# activate it
source pysftp/bin/activate
cd pysftp
# install pysftp in the environment
pip3 install pysftp
# generate requirements.txt
pip freeze > requirements.txt
# use docker to install the packages into a top-level python/ directory,
# which is the structure a Lambda layer zip must have
docker run --rm -v `pwd`:/var/task:z lambci/lambda:build-python3.8 python3.8 -m pip --isolated install -t ./python -r requirements.txt
zip -r pysftp-layer.zip ./python
The rest is uploading the zip to S3, creating a new layer in the AWS console, setting the compatible runtime to Python 3.8, and using it in my test Lambda function.
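For completeness, a sketch of what the actual upload handler could then look like (host, user, and key paths below are placeholders; in Lambda the key file must be readable from the deployment package or /tmp):
import json
import pysftp

def lambda_handler(event, context):
    # Placeholder connection details - replace with your own host/user/key
    with pysftp.Connection(
        host="ec2-203-0-113-10.compute-1.amazonaws.com",
        username="ec2-user",
        private_key="/tmp/mypemfile.pem",
    ) as sftp:
        sftp.put("/tmp/file_to_upload.txt", "/home/ec2-user/file_to_upload.txt")
    return {
        'statusCode': 200,
        'body': json.dumps('upload complete')
    }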
You can also check here how to use this docker tool (the docker command I used is based on what is in that link).
Hope this helps

How to install external modules in a Python Lambda Function created by AWS CDK?

I'm using the Python AWS CDK in Cloud9 and I'm deploying a simple Lambda function that is supposed to send an API request to Atlassian's API when an object is uploaded to an S3 bucket (also created by the CDK). Here is the code for my CDK stack:
from aws_cdk import core
from aws_cdk import aws_s3
from aws_cdk import aws_lambda
from aws_cdk.aws_lambda_event_sources import S3EventSource

class JiraPythonStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # The code that defines your stack goes here
        jira_bucket = aws_s3.Bucket(self,
                                    "JiraBucket",
                                    encryption=aws_s3.BucketEncryption.KMS)

        event_lambda = aws_lambda.Function(
            self,
            "JiraFileLambda",
            code=aws_lambda.Code.asset("lambda"),
            handler='JiraFileLambda.handler',
            runtime=aws_lambda.Runtime.PYTHON_3_6,
            function_name="JiraPythonFromCDK")

        event_lambda.add_event_source(
            S3EventSource(jira_bucket,
                          events=[aws_s3.EventType.OBJECT_CREATED]))
The Lambda function code uses the requests module, which I've imported. However, when I check the CloudWatch logs and test the Lambda function, I get:
Unable to import module 'JiraFileLambda': No module named 'requests'
My Question is: How do I install the requests module via the Python CDK?
I've already looked around online and found this. But it seems to directly modify the Lambda function, which would result in stack drift (which I've been told is bad for IaC). I've also looked at the AWS CDK docs but didn't find any mention of external modules/libraries (I'm doing a thorough check for it now). Does anybody know how I can work around this?
Edit: It would appear I'm not the only one looking for this.
Here's another GitHub issue that's been raised.
It is not even necessary to use the experimental PythonFunction functionality in CDK - there is support built into CDK to build the dependencies into a simple Lambda package (not a docker image). It uses docker to do the build, but the final result is still a simple zip of files. The documentation shows it here: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-readme.html#bundling-asset-code ; the gist is:
new Function(this, 'Function', {
  code: Code.fromAsset(path.join(__dirname, 'my-python-handler'), {
    bundling: {
      image: Runtime.PYTHON_3_9.bundlingImage,
      command: [
        'bash', '-c',
        'pip install -r requirements.txt -t /asset-output && cp -au . /asset-output'
      ],
    },
  }),
  runtime: Runtime.PYTHON_3_9,
  handler: 'index.handler',
});
I have used this exact configuration in my CDK deployment and it works well.
And for Python, it is simply
aws_lambda.Function(
    self,
    "Function",
    runtime=aws_lambda.Runtime.PYTHON_3_9,
    handler="index.handler",
    code=aws_lambda.Code.from_asset(
        "function_source_dir",
        bundling=core.BundlingOptions(
            image=aws_lambda.Runtime.PYTHON_3_9.bundling_image,
            command=[
                "bash", "-c",
                "pip install --no-cache -r requirements.txt -t /asset-output && cp -au . /asset-output"
            ],
        ),
    ),
)
UPDATE:
It now appears as though there is a new type of (experimental) Lambda function in the CDK known as the PythonFunction. The Python docs for it are here. This includes support for a requirements.txt file, whose packages are installed into your function using a docker container. See more details on that here. Specifically:
If requirements.txt or Pipfile exists at the entry path, the construct will handle installing all required modules in a Lambda compatible Docker container according to the runtime.
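A minimal sketch of that construct (module and parameter names as in the experimental aws_lambda_python module at the time; they may have moved in later CDK versions):
from aws_cdk import aws_lambda, aws_lambda_python

fn = aws_lambda_python.PythonFunction(
    self,
    "MyPythonFunction",
    entry="lambda",  # directory containing index.py and requirements.txt
    index="index.py",
    handler="handler",
    runtime=aws_lambda.Runtime.PYTHON_3_8,
)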
Original Answer:
So this is the awesome bit of code my manager wrote that we now use:
# requires "import os" and "import subprocess" at the top of the stack module
def create_dependencies_layer(self, project_name, function_name: str) -> aws_lambda.LayerVersion:
    requirements_file = "lambda_dependencies/" + function_name + ".txt"
    output_dir = ".lambda_dependencies/" + function_name

    # Install requirements for the layer in output_dir
    if not os.environ.get("SKIP_PIP"):
        # Note: pip will create the output dir if it does not exist
        subprocess.check_call(
            f"pip install -r {requirements_file} -t {output_dir}/python".split()
        )

    return aws_lambda.LayerVersion(
        self,
        project_name + "-" + function_name + "-dependencies",
        code=aws_lambda.Code.from_asset(output_dir)
    )
It's actually part of the Stack class as a method (not inside the init). The way we have it set up here is that we have a folder called lambda_dependencies which contains a text file for every lambda function we are deploying which just has a list of dependencies, like a requirements.txt.
And to utilise this code, we include it in the Lambda function definition like this:
get_data_lambda = aws_lambda.Function(
    self,
    .....
    layers=[self.create_dependencies_layer(PROJECT_NAME, GET_DATA_LAMBDA_NAME)]
)
You should install the dependencies of your Lambda locally before deploying it via CDK. CDK has no idea how to install the dependencies or which libraries should be installed.
In your case, you should install the requests dependency and any other libraries before executing cdk deploy.
For example,
pip install requests --target ./asset/package
There is an example for reference.
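A hedged sketch of how the pre-built directory can then be wired in (assuming the handler file, e.g. index.py, is placed in the same ./asset/package directory so code and dependencies ship in one zip):
fn = aws_lambda.Function(
    self,
    "RequestsFunction",
    runtime=aws_lambda.Runtime.PYTHON_3_6,
    handler="index.handler",
    # ./asset/package holds index.py plus the pip-installed dependencies,
    # so the packages land at the root of the zipped asset
    code=aws_lambda.Code.from_asset("./asset/package"),
)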
Wanted to share 2 template repos I made for this (heavily inspired by some of the above):
https://github.com/iguanaus/cdk-ecs-python-with-requirements - demo of an ECS service running a basic Python function
https://github.com/iguanaus/cdk-lambda-python-with-requirements - demo of a Python Lambda job with requirements.
Hope they are helpful for folks :)
Lastly; if you want to see a long thread on this subject, see here: https://github.com/aws/aws-cdk/issues/3660
I ran into this issue as well. I used a solution like @Kane and @Jamie suggest just fine when I was working on my Ubuntu machine. However, I ran into issues when working on macOS. Apparently some (all?) Python packages don't work in Lambda (a Linux environment) if they are pip installed on a different OS (see this Stack Overflow post).
My solution was to run the pip install inside a docker container. This allowed me to cdk deploy from my MacBook and not run into issues with my Python packages in Lambda.
Suppose you have a dir lambda_layers/python in your CDK project that will house your Python packages for the Lambda layer.
import pathlib
import subprocess

current_path = str(pathlib.Path(__file__).parent.absolute())
pip_install_command = ("docker run --rm --entrypoint /bin/bash -v "
                       + current_path
                       + "/lambda_layers:/lambda_layers python:3.8 -c "
                       + "'pip3 install Pillow==8.1.0 -t /lambda_layers/python'")
subprocess.run(pip_install_command, shell=True)

lambda_layer = aws_lambda.LayerVersion(
    self,
    "PIL-layer",
    compatible_runtimes=[aws_lambda.Runtime.PYTHON_3_8],
    code=aws_lambda.Code.asset("lambda_layers"))
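The resulting layer can then be attached to a function defined in the same stack; a short sketch (the handler directory name is hypothetical):
fn = aws_lambda.Function(
    self,
    "ImageFunction",
    runtime=aws_lambda.Runtime.PYTHON_3_8,
    handler="index.handler",
    code=aws_lambda.Code.asset("lambda_src"),  # hypothetical handler directory
    layers=[lambda_layer],
)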
As an alternative to my other answer, here's a slightly different approach that also works with docker-in-docker (the bundling-options approach doesn't).
Set up the Lambda function like this:
lambda_fn = aws_lambda.Function(
    self,
    "Function",
    runtime=aws_lambda.Runtime.PYTHON_3_9,
    code=aws_lambda.Code.from_docker_build(
        "function_source_dir",
    ),
    handler="index.lambda_handler",
)
and in function_source_dir/ have these files:
index.py (to match the above code - you can name this whatever you like)
requirements.txt
Dockerfile
Set up your Dockerfile like this:
# Note that this dockerfile is only used to build the lambda asset - the
# lambda still just runs with a zip source, not a docker image.
# See the docstring for aws_lambda.Code.from_docker_build
FROM public.ecr.aws/lambda/python:3.9.2022.04.27.10-x86_64
COPY index.py /asset/
COPY requirements.txt /tmp/
RUN pip3 install -r /tmp/requirements.txt -t /asset
and the synth step will build your asset in docker (using the above Dockerfile), then pull the built Lambda source from the /asset/ directory in the image.
I haven't looked in too much detail into why the BundlingOptions approach fails to build when running inside a docker container, but this one does work (as long as docker is run with -v /var/run/docker.sock:/var/run/docker.sock to enable docker-in-docker). As always, be sure to consider your security posture when doing this.

AWS Python Layer Run Locally

How does one run an AWS Lambda function with layers locally?
My environment:
PyCharm project for an AWS Lambda function with the Python 3.6 runtime.
AWS Toolkit
a file/folder structure for creating a Lambda layer similar to the one in https://aws.amazon.com/blogs/compute/working-with-aws-lambda-and-lambda-layers-in-aws-sam/ as follows:
+---.aws-sam
....
+---test
| app.py
| requirements.txt
|
+---dependencies
| \---python
| constants.py
| requirements.txt
| sql.py
| utils.py
and a deployment template like:
testFunc:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: test/
    Handler: app.test
    Runtime: python3.6
    FunctionName: testFunc
    Events:
      test:
        Type: Api
        Properties:
          Path: /test
          Method: ANY
    Layers:
      - !Ref TempConversionDepLayer

TempConversionDepLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    LayerName: Layer1
    Description: Dependencies
    ContentUri: dependencies/
    CompatibleRuntimes:
      - python3.6
      - python3.7
    LicenseInfo: 'MIT'
    RetentionPolicy: Retain
I can deploy the function correctly and running it on AWS works well,
but whenever I try to run the function locally, it fails with the error message:
`Unable to import module 'app': No module named 'sql'`
I've tried to read all possible resources about layers and PyCharm, but nothing really helped.
Can anybody give a hand, please?
Thank you
I was able to get around this issue in PyCharm by adding a symbolic link to the directory that contains the code for the layer (here, dependencies/python).

Import libraries in lambda layers

I wanted to import the jsonschema library in my AWS Lambda function in order to perform request validation. Instead of bundling the dependency with my app, I am looking to do this via Lambda layers. I zipped all the dependencies under venv/lib/python3.6/site-packages/. I uploaded this as a Lambda layer and added it to my Lambda using the publish-layer-version and aws lambda update-function-configuration commands respectively. The zip folder is named "lambda-dep.zip" and all the files are under it. However, when I try to import jsonschema in my lambda_function, I see the error below:
from jsonschema import validate
{
"errorMessage": "Unable to import module 'lambda_api': No module named 'jsonschema'",
"errorType": "Runtime.ImportModuleError"
}
Am I missing any steps, or is there a different mechanism to import anything within Lambda layers?
You want to make sure your .zip follows this folder structure when unzipped:
python/lib/python3.6/site-packages/{LibrariesGoHere}
Upload that zip, make sure the layer is added to the Lambda function, and you should be good to go.
This is the structure that has worked for me.
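Once the layer unzips to that structure, the import works as usual; a quick smoke test for the jsonschema case (toy schema, for illustration only):
import json
from jsonschema import validate  # resolved from the layer

def lambda_handler(event, context):
    # validate() raises ValidationError if the event doesn't match the schema
    schema = {
        "type": "object",
        "properties": {"name": {"type": "string"}},
        "required": ["name"],
    }
    validate(instance=event, schema=schema)
    return {
        'statusCode': 200,
        'body': json.dumps('request is valid')
    }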
Here is the script that I use to upload a layer:
#!/usr/bin/env bash

LAYER_NAME=$1 # input layer name, retrieved as an arg
ZIP_ARTIFACT=${LAYER_NAME}.zip
LAYER_BUILD_DIR="python"

# note: put the libraries in a folder supported by the runtime, which means it should be "python"
rm -rf ${LAYER_BUILD_DIR} && mkdir -p ${LAYER_BUILD_DIR}
docker run --rm -v `pwd`:/var/task:z lambci/lambda:build-python3.6 python3.6 -m pip --isolated install -t ${LAYER_BUILD_DIR} -r requirements.txt
zip -r ${ZIP_ARTIFACT} .

echo "Publishing layer to AWS..."
aws lambda publish-layer-version --layer-name ${LAYER_NAME} --zip-file fileb://${ZIP_ARTIFACT} --compatible-runtimes python3.6

# clean up
rm -rf ${LAYER_BUILD_DIR}
rm -r ${ZIP_ARTIFACT}
I added the content above to a file called build_layer.sh, then I call it as bash build_layer.sh my_layer. The script requires a requirements.txt in the same folder, and it uses Docker to match the runtime used by Python 3.6 Lambdas.
The arg of the script is the layer name.
After uploading a layer to AWS, be sure that the right layer version is referenced inside your Lambda.
Update from previous answers: per AWS documentation, the requirements have been changed to simply be placed in a /python directory, without the rest of the directory structure.
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-import-module-error-python/
Be sure your unzipped directory structure has the libraries within a /python directory.
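In other words, the unzipped layer should look like this (layout sketch for the jsonschema example):
lambda-dep.zip
+---python
|       jsonschema/
|       (its dependencies...)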
There is an easier method. Just create a python folder and install the packages into it using the -t (target) option. Note the "." in the zip command: it refers to the current directory, whose contents (including the python folder) are zipped recursively.
mkdir lambda_function
cd lambda_function
mkdir python
cd python
pip install yourPackages -t ./
cd ..
zip -r /tmp/lambda_layer.zip .
The zip file is now your lambda layer.
Step-by-step instructions, including video instructions, can be found here:
https://geektopia.tech/post.php?blogpost=Create_Lambda_Layer_Python
