I wanted to import the jsonschema library in my AWS Lambda in order to perform request validation. Instead of bundling the dependency with my app, I am looking to do this via Lambda Layers. I zipped all the dependencies under venv/lib/python3.6/site-packages/, uploaded this as a Lambda layer, and added it to my Lambda using the publish-layer-version and aws lambda update-function-configuration commands respectively. The zip file is named "lambda-dep.zip" and all the files are under it. However, when I try to import jsonschema in my lambda_function, I see the error below -
from jsonschema import validate
{
"errorMessage": "Unable to import module 'lambda_api': No module named 'jsonschema'",
"errorType": "Runtime.ImportModuleError"
}
Am I missing any steps, or is there a different mechanism to import anything within Lambda layers?
You want to make sure your .zip follows this folder structure when unzipped
python/lib/python3.6/site-packages/{LibrariesGoHere}.
Upload that zip, make sure the layer is added to the Lambda function and you should be good to go.
This is the structure that has worked for me.
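To sanity-check the layer from the original question, here is a minimal handler sketch (the schema and field names are just placeholders) showing that, once a layer with the structure above is attached, jsonschema imports exactly like a bundled dependency:
# lambda_function.py - minimal sketch assuming the jsonschema layer is attached
from jsonschema import validate, ValidationError

REQUEST_SCHEMA = {
    "type": "object",
    "properties": {"id": {"type": "string"}},
    "required": ["id"],
}

def lambda_handler(event, context):
    try:
        validate(instance=event, schema=REQUEST_SCHEMA)
    except ValidationError as err:
        return {"statusCode": 400, "body": err.message}
    return {"statusCode": 200, "body": "request is valid"}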
Here is the script that I use to upload a layer:
#!/usr/bin/env bash
LAYER_NAME=$1 # input layer name, retrieved as an argument
ZIP_ARTIFACT=${LAYER_NAME}.zip
LAYER_BUILD_DIR="python"
# note: put the libraries in a folder supported by the runtime, which means it should be python
rm -rf ${LAYER_BUILD_DIR} && mkdir -p ${LAYER_BUILD_DIR}
docker run --rm -v `pwd`:/var/task:z lambci/lambda:build-python3.6 python3.6 -m pip --isolated install -t ${LAYER_BUILD_DIR} -r requirements.txt
zip -r ${ZIP_ARTIFACT} .
echo "Publishing layer to AWS..."
aws lambda publish-layer-version --layer-name ${LAYER_NAME} --zip-file fileb://${ZIP_ARTIFACT} --compatible-runtimes python3.6
# clean up
rm -rf ${LAYER_BUILD_DIR}
rm -r ${ZIP_ARTIFACT}
I added the content above to a file called build_layer.sh, then I call it as bash build_layer.sh my_layer. The script requires a requirements.txt file in the same folder, and it uses Docker to build against the same runtime used by Python 3.6 Lambdas.
The arg of the script is the layer name.
After uploading a layer to AWS, be sure that the right layer's version is referenced inside your Lambda.
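If you want to script that last step as well, here is a hedged boto3 sketch (the function and layer names are placeholders) for attaching the newest version of the layer to a function:
import boto3

# Sketch: attach the most recent version of a layer to a function.
# Assumes list_layer_versions returns the newest version first.
client = boto3.client("lambda")
versions = client.list_layer_versions(LayerName="my_layer")["LayerVersions"]
client.update_function_configuration(
    FunctionName="my-function",
    Layers=[versions[0]["LayerVersionArn"]],
)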
Update from previous answers: per the AWS documentation, the requirements now simply need to be placed in a /python directory, without the rest of the directory structure.
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-import-module-error-python/
Be sure your unzipped directory structure has libraries within a /python directory.
There is an easier method. Create a python folder and install the packages into it using pip's -t (target) option. Note the "." in the zip command below; it refers to the current directory.
mkdir lambda_function
cd lambda_function
mkdir python
cd python
pip install yourPackages -t ./
cd ..
zip -r /tmp/lambda_layer.zip .
The zip file is now your lambda layer.
Step-by-step instructions, including video instructions, can be found here:
https://geektopia.tech/post.php?blogpost=Create_Lambda_Layer_Python
Related
I have local Python code which GPG-encrypts a file. I need to convert this to an AWS Lambda function that is triggered once a file has been added to AWS S3.
My local code
import os
import os.path
import time
import sys

import gnupg

gpg = gnupg.GPG(gnupghome='/home/ec2-user/.gnupg')
path = '/home/ec2-user/2021/05/28/'
ptfile = sys.argv[1]
with open(path + ptfile, 'rb') as f:
    status = gpg.encrypt_file(f, recipients=['user@email.com'], output=path + ptfile + ".gpg")
print(status.ok)
print(status.stderr)
This works great when I execute this file as python3 encrypt.py file.csv and the result is file.csv.gpg
I'm trying to move this to AWS Lambda, invoked when a file.csv is uploaded to S3.
import json
import urllib.parse
import boto3
import gnupg
import os
import os.path
import time

s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        gpg = gnupg.GPG(gnupghome='/.gnupg')
        ind = key.rfind('/')
        ptfile = key[ind + 1:]
        with open(ptfile, 'rb') as f:
            status = gpg.encrypt_file(f, recipients=['email@company.com'], output=ptfile + ".gpg")
        print(status.ok)
        print(status.stderr)
    except Exception as exc:
        print(exc)
        raise
My AWS Lambda code zip created a folder structure in AWS
The error I see at runtime is [ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': No module named 'gnupg'
Traceback (most recent call last):
You can create a gpg binary suitable for use by python-gnupg on AWS Lambda from the GnuPG 1.4 source. You will need
GCC and associated tools (sudo yum install -y gcc make glibc-static on Amazon Linux 2)
pip
zip
After downloading the GnuPG source package and verifying its signature, build the binary with
$ tar xjf gnupg-1.4.23.tar.bz2
$ cd gnupg-1.4.23
$ ./configure
$ make CFLAGS='-static'
$ cp g10/gpg /path/to/your/lambda/
You will also need the gnupg.py module from python-gnupg, which you can fetch using pip:
$ cd /path/to/your/lambda/
$ pip install -t . python-gnupg
Your Lambda’s source structure will now look something like this:
.
├── gnupg.py
├── gpg
└── lambda_function.py
Update your function to pass the location of the gpg binary to the python-gnupg constructor:
gpg = gnupg.GPG(gnupghome='/.gnupg', gpgbinary='./gpg')
Use zip to package the Lambda function:
$ chmod o+r gnupg.py lambda_function.py
$ chmod o+rx gpg
$ zip lambda_function.zip gnupg.py gpg lambda_function.py
Since there are some system dependencies required to use gpg within Python (i.e. GnuPG itself), you will need to build your Lambda code using the container runtime environment: https://docs.aws.amazon.com/lambda/latest/dg/lambda-images.html
Using docker will allow you to install underlying system dependencies, as well as import your keys.
Dockerfile will look something like this:
FROM public.ecr.aws/lambda/python:3.8
# The Lambda Python base image is Amazon Linux based, so yum is the package manager here (not apt-get)
RUN yum install -y gnupg2
# copy handler file
COPY app.py <path-to-keys> ./
# Add keys to gpg
RUN gpg --import <path-to-private-key>
RUN gpg --import <path-to-public-key>
# Copy and install Python dependencies
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
CMD ["app.lambda_handler"]
app.py would be your lambda code. Feel free to copy any necessary files besides the main lambda handler.
Once the container image is built and uploaded, the Lambda can use the image (including all of its dependencies). The Lambda code will run within the containerized environment, which contains both gnupg and your imported keys.
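For illustration, here is a hedged smoke-test version of app.py (handler name matching the CMD above) that simply confirms the containerized gnupg and the keys imported in the Dockerfile are visible at runtime:
# app.py - hedged sketch; gpg comes from the image, keys were imported at build time
import gnupg

def lambda_handler(event, context):
    # Depending on the runtime user, you may need to point gnupghome at the
    # directory the keys were imported into during the image build.
    gpg = gnupg.GPG()
    keys = gpg.list_keys()
    return {"key_count": len(keys), "fingerprints": [k["fingerprint"] for k in keys]}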
Resources:
https://docs.aws.amazon.com/lambda/latest/dg/python-image.html
https://docs.aws.amazon.com/lambda/latest/dg/lambda-images.html
https://medium.com/@julianespinel/how-to-use-python-gnupg-to-decrypt-a-file-into-a-docker-container-8c4fb05a0593
The best way to do this is to add a lambda layer to your python lambda.
You need to make a virtual environment in which you pip install gnupg, and then put all the installed Python packages in a zip file, which you upload to AWS as a Lambda layer. This Lambda layer can then be used in all Lambdas where you need gnupg. To create the Lambda layer you basically do:
python3.9 -m venv my_venv
./my_venv/bin/pip3.9 install gnupg
cp -r ./my_venv/lib/python3.9/site-packages/ python
zip -r lambda_layer.zip python
Where the python version above has to match that of the python function in your lambda.
If you don't want to use layers you can additionally do:
zip -r lambda_layer.zip ./.gnupg
zip lambda_layer.zip lambda_function.py
And you get a zip file that you can use as a lambda deployment package
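If you bundle the .gnupg directory as shown, here is a hedged sketch of how the function might reference it (LAMBDA_TASK_ROOT points at the unpacked deployment package; gpg may still want a writable home for lock files, in which case copy the keyring to /tmp first):
import os
import gnupg

def lambda_handler(event, context):
    # The deployment package is unpacked at LAMBDA_TASK_ROOT (read-only)
    task_root = os.environ.get('LAMBDA_TASK_ROOT', '.')
    gpg = gnupg.GPG(gnupghome=os.path.join(task_root, '.gnupg'))
    return {'keys': len(gpg.list_keys())}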
gpg is now already installed in public.ecr.aws/lambda/python:3.8.
However, despite that, it does not seem to be available from Lambda, so you still need to get the gpg executable into the Lambda environment.
I did it using a docker image.
My Dockerfile is just:
FROM public.ecr.aws/lambda/python:3.8
COPY .venv/lib/python3.8/site-packages/ ./
COPY test_gpg.py .
CMD ["test_gpg.lambda_handler"]
.venv is the directory with my python virtualenv containing
the python packages I need.
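For reference, a hedged sketch of what test_gpg.py might contain, just confirming the gpg binary shipped in the image is reachable from the handler:
# test_gpg.py - hedged sketch of a minimal smoke test
import shutil
import gnupg

def lambda_handler(event, context):
    gpg_path = shutil.which('gpg')        # the binary provided by the base image
    gpg = gnupg.GPG(gnupghome='/tmp')     # /tmp is the only writable location at runtime
    return {'gpg_binary': gpg_path, 'gpg_version': str(gpg.version)}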
The python-gnupg package requires you to have a working installation of the gpg executable, as mentioned in their official docs' Deployment Requirements; I have yet to find a way to access a gpg executable from Lambda.
I ended up using a Docker image, as the library is already available in the Amazon Linux / Lambda Python base images provided by AWS, which you can find here: https://gallery.ecr.aws/lambda/python (in the Image tags tab you will find all the Python versions, so you can pick the one matching your requirements).
You will need to create the following 3 files in your dev environment:
Dockerfile
requirements.txt
lambda script
The requirements.txt contains the python-gnupg for the import and all the other libraries based on your requirements:
boto3==1.15.11 # via -r requirements.in
urllib3==1.25.10 # via botocore
python-gnupg==0.5.0 # required for gpg encryption
This is the Dockerfile:
# Python 3.9 lambda base image
FROM public.ecr.aws/lambda/python:3.9
# Install pip-tools so we can manage requirements
RUN yum install python-pip -y
# Copy requirements.txt file locally
COPY requirements.txt ./
# Install dependencies into current directory
RUN python3.9 -m pip install -r requirements.txt
# Copy lambda file locally
COPY s3_to_sftp_batch.py .
# Define handler file name
CMD ["s3_to_sftp_batch.on_trigger_event_test"]
Then inside your lambda code add:
# define library path to point system libraries
os.environ["LD_LIBRARY_PATH"] = "/usr/bin/"
# create instance of GPG class and specify path that contains gpg binary
gpg = gnupg.GPG(gnupghome='/tmp', gpgbinary='/usr/bin/gpg')
Save these files, then go to AWS ECR and create a private repo. Then open the repo, click View push commands in the top right corner, and run them to push your image to AWS.
Finally create your Lambda function using the container image.
I am having trouble creating a Lambda layer for the xgboost library. I'm running:
I'm grabbing a zip of xgboost and its dependencies from here (https://github.com/alexeybutyrev/aws_lambda_xgboost) and loading it into a layer. When I try to test my Lambda, I get this error:
Unable to import module 'lambda_function': No module named 'xgboost.core'
It looks like __init__.py is trying to reference core.py via from .core import <stuff>
Has anyone encountered this error with AWS Lambda before?
EDIT: As @Marcin has remarked, the first answer provided works for packages under 262 MB.
A. Python Packages within Lambda Layer size limit
You can also do it with the AWS SAM CLI and Docker (see this link to install the SAM CLI), to build the packages inside a container. Basically, you initialize a default template with Python as the runtime and then specify the packages in the requirements.txt file. I found it easier than the article you mentioned. I'll leave the steps here in case you want to consider them for future use.
1. Initialize a default SAM template
Under any folder that you want to keep the project, you can type
sam init
This will prompt a series of questions; for a quick setup we will be choosing the Quick Start Templates as follows:
1 - AWS Quick Start Templates
2 - Python 3.8
Project name [sam-app]: your_project_name
1 - Hello World Example
Choosing the Hello World Example generates a default Lambda function with a requirements.txt file. Now we're going to edit it with the name of the package that you want, in this case xgboost.
2. Specify packages to install
cd your_project_name
code hello_world/requirements.txt
As I have Visual Studio Code as my editor, this opens the file in it. Now I can specify the xgboost package:
your_python_package
Here comes the reason to have Docker installed. Some packages rely on C++, so it is recommended to build them inside a container (especially on Windows). Now, move to the folder where the template.yaml file is located and type
sam build -u
3. Zip packages
There are some files that you do not want included in your Lambda layer, because we only want to keep the Python libraries, so you can remove the following files
rm .aws-sam/build/HelloWorldFunction/app.py
rm .aws-sam/build/HelloWorldFunction/__init__.py
rm .aws-sam/build/HelloWorldFunction/requirements.txt
and then zip the remaining content of the folder.
cp -r .aws-sam/build/HelloWorldFunction/ python/
zip -r my_layer.zip python/
where we place the layer in the python/ folder according to the docs
On Windows system the zip command should be replaced with
Compress-Archive my_layer/ my_layer.zip.
4. Upload your Layer to AWS
On AWS go to Lambda, then choose Layers and Create Layer. Now, you can upload your .zip file as the image below shows
Notice that for zip files over 50 MB, you should upload the .zip file to an S3 bucket and provide the path, for example, https://s3.amazonaws.com/mybucket/my_layer.zip.
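For layers in that size range it can be easier to publish from S3 programmatically; here is a hedged boto3 sketch (bucket, key, and layer name are placeholders):
import boto3

# Sketch: publish a large layer whose zip has already been uploaded to S3
client = boto3.client("lambda")
response = client.publish_layer_version(
    LayerName="xgboost-layer",
    Content={"S3Bucket": "mybucket", "S3Key": "my_layer.zip"},
    CompatibleRuntimes=["python3.8"],
)
print(response["LayerVersionArn"])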
B. Python packages that exceed the Lambda layer limits
The xgboost package on its own is more than 300 MB and will throw the following error
As @Marcin has kindly pointed out, the prior approach with the SAM CLI would not directly work for Python layers that exceed the limit. There's an open issue on GitHub to specify a custom Docker image when running sam build -u, and a possible solution is retagging the default lambci/lambda image.
So, how could we get around this? There are already some useful resources that I will just point to.
First, the Medium article that @Alex took as the solution, which follows this repo's code.
Second, alexeybutyrev's approach, which works by applying the strip command to reduce the library sizes. One can find this approach in a GitHub repo where the instructions are provided.
Edit (December 2020)
This month AWS released container image support for AWS Lambda. Use the following tree structure for your project:
Project/
|-- app/
| |-- app.py
| |-- requirements.txt
| |-- xgb_trained.bin
|-- Dockerfile
You can deploy an XGBoost model with the following Docker image. Follow this repo's instructions for a detailed explanation.
# Dockerfile based on https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
# Define global args
ARG FUNCTION_DIR="/function"
ARG RUNTIME_VERSION="3.6"
# Choose buster image
FROM python:${RUNTIME_VERSION}-buster as base-image
# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
apt-get install -y \
g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev \
git
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy function code
COPY app/* ${FUNCTION_DIR}/
# Install python dependencies and runtime interface client
RUN python${RUNTIME_VERSION} -m pip install \
--target ${FUNCTION_DIR} \
--no-cache-dir \
awslambdaric \
-r ${FUNCTION_DIR}/requirements.txt
# Install xgboost from source
RUN git clone --recursive https://github.com/dmlc/xgboost
RUN cd xgboost; make -j4; cd python-package; python${RUNTIME_VERSION} setup.py install; cd;
# Multi-stage build: grab a fresh copy of the base image
FROM base-image
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the build image dependencies
COPY --from=base-image ${FUNCTION_DIR} ${FUNCTION_DIR}
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "app.handler" ]
So I was never able to figure out why it failed in this way. The solution I found that worked was to create an EC2 instance running Amazon Linux, install and zip the libraries there, and then save the zip to S3. See here for detailed instructions:
https://medium.com/#lucashenriquessilva/how-to-create-a-aws-lambda-python-layer-db2830e08b12
I want to upload files to an EC2 instance using the pysftp library (from a Python script). I have created a small Python script which uses the lines below to connect:
pysftp.Connection(
    host=Constants.MY_HOST_NAME,
    username=Constants.MY_EC2_INSTANCE_USERNAME,
    private_key="./mypemfilelocation.pem",
)
# some code here .....
pysftp.put(file_to_be_upload, ec2_remote_file_path)
This script uploads files from my local Windows machine to the EC2 instance using the .pem file, and it works correctly.
Now I want to do this action using AWS Lambda with API Gateway.
So I have uploaded the Python script to AWS Lambda. I was not sure how to use the pysftp library in AWS Lambda, so I found a suggestion to add the pysftp library as an AWS Lambda layer. I did it with
pip3 install pysftp -t ./library_folder
Then I made a zip of that folder and added it as an AWS Lambda layer.
But I still got many errors, one after another:
No module named 'pysftp'
No module named 'paramiko'
Undefined Symbol: PyInt_FromLong
cannot import name '_bcrypt' from partially initialized module 'bcrypt' (most likely due to a circular import)
cffi module not found
I'm just fed up with the errors above and didn't find a proper solution. How can I use the pysftp library in my AWS Lambda seamlessly?
I built a pysftp layer and tested it on my Lambda with Python 3.8, just to check the import and a basic print:
import json
import pysftp
def lambda_handler(event, context):
    # TODO implement
    print(dir(pysftp))
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
I used the following docker tool to build the pysftp layer:
https://github.com/lambci/docker-lambda
So what I did for pysftp was:
# create pysftp fresh python 3.8 environment
python -m venv pysftp
# activate it
source pysftp/bin/activate
cd pysftp
# install pysftp in the environment
pip3 install pysftp
# generate requirements.txt
pip freeze > requirements.txt
# use docker to construct the layer
docker run --rm -v `pwd`:/var/task:z lambci/lambda:build-python3.8 python3.8 -m pip --isolated install -t ./mylayer -r requirements.txt
zip -r pysftp-layer.zip .
And the rest is uploading the zip into s3, creating new layer in AWS console, setting Compatible runtime to python 3.8 and using it in my test lambda function.
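Beyond the print(dir(pysftp)) smoke test, here is a hedged sketch of what the handler from the question might look like once the layer resolves; the host, user, key path, and file paths are placeholders, and you may also need a pysftp.CnOpts with known host keys:
import json
import pysftp

def lambda_handler(event, context):
    # Hypothetical values - adapt host, username, key location and paths
    with pysftp.Connection(
        host="ec2-xx-xx-xx-xx.compute.amazonaws.com",
        username="ec2-user",
        private_key="./mypemfilelocation.pem",
    ) as sftp:
        sftp.put("/tmp/file_to_upload.csv", "/home/ec2-user/file_to_upload.csv")
    return {"statusCode": 200, "body": json.dumps("uploaded")}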
You can also check here how to use this docker tool (the docker command I used is based on what is in that link).
Hope this helps
I'm using the Python AWS CDK in Cloud9 and I'm deploying a simple Lambda function that is supposed to send an API request to Atlassian's API when an Object is uploaded to an S3 Bucket (also created by the CDK). Here is my code for CDK Stack:
from aws_cdk import core
from aws_cdk import aws_s3
from aws_cdk import aws_lambda
from aws_cdk.aws_lambda_event_sources import S3EventSource
class JiraPythonStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # The code that defines your stack goes here
        jira_bucket = aws_s3.Bucket(self,
                                    "JiraBucket",
                                    encryption=aws_s3.BucketEncryption.KMS)

        event_lambda = aws_lambda.Function(
            self,
            "JiraFileLambda",
            code=aws_lambda.Code.asset("lambda"),
            handler='JiraFileLambda.handler',
            runtime=aws_lambda.Runtime.PYTHON_3_6,
            function_name="JiraPythonFromCDK")

        event_lambda.add_event_source(
            S3EventSource(jira_bucket,
                          events=[aws_s3.EventType.OBJECT_CREATED]))
The Lambda function code uses the requests module, which I've imported. However, when I check the CloudWatch logs and test the Lambda function, I get:
Unable to import module 'JiraFileLambda': No module named 'requests'
My Question is: How do I install the requests module via the Python CDK?
I've already looked around online and found this, but it seems to directly modify the Lambda function, which would result in stack drift (which I've been told is bad for infrastructure as code). I've also looked at the AWS CDK docs but didn't find any mention of external modules/libraries (I'm doing a thorough check for it now). Does anybody know how I can work around this?
Edit: It would appear I'm not the only one looking for this.
Here's another GitHub issue that's been raised.
It is not even necessary to use the experimental PythonFunction construct in CDK - there is support built into CDK to build the dependencies into a simple Lambda package (not a Docker image). It uses Docker to do the build, but the final result is still a simple zip of files. The documentation shows it here: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-readme.html#bundling-asset-code ; the gist is:
new Function(this, 'Function', {
  code: Code.fromAsset(path.join(__dirname, 'my-python-handler'), {
    bundling: {
      image: Runtime.PYTHON_3_9.bundlingImage,
      command: [
        'bash', '-c',
        'pip install -r requirements.txt -t /asset-output && cp -au . /asset-output'
      ],
    },
  }),
  runtime: Runtime.PYTHON_3_9,
  handler: 'index.handler',
});
I have used this exact configuration in my CDK deployment and it works well.
And for Python, it is simply
aws_lambda.Function(
    self,
    "Function",
    runtime=aws_lambda.Runtime.PYTHON_3_9,
    handler="index.handler",
    code=aws_lambda.Code.from_asset(
        "function_source_dir",
        bundling=core.BundlingOptions(
            image=aws_lambda.Runtime.PYTHON_3_9.bundling_image,
            command=[
                "bash", "-c",
                "pip install --no-cache -r requirements.txt -t /asset-output && cp -au . /asset-output"
            ],
        ),
    ),
)
UPDATE:
It now appears as though there is a new type of (experimental) Lambda function in the CDK known as the PythonFunction. The Python docs for it are here. This includes support for a requirements.txt file, using a Docker container to install the dependencies into your function. See more details on that here. Specifically:
If requirements.txt or Pipfile exists at the entry path, the construct will handle installing all required modules in a Lambda compatible Docker container according to the runtime.
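Here is a hedged Python sketch of that experimental construct (module path and parameters as documented at the time; treat the exact API as an assumption since it was experimental):
from aws_cdk import aws_lambda
from aws_cdk.aws_lambda_python import PythonFunction

# Sketch: the construct installs requirements.txt from the entry directory in a Docker container
my_function = PythonFunction(
    self,
    "MyFunction",
    entry="./my_lambda_source",       # directory containing index.py and requirements.txt
    index="index.py",
    handler="handler",
    runtime=aws_lambda.Runtime.PYTHON_3_8,
)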
Original Answer:
So this is the awesome bit of code my manager wrote that we now use:
def create_dependencies_layer(self, project_name, function_name: str) -> aws_lambda.LayerVersion:
    requirements_file = "lambda_dependencies/" + function_name + ".txt"
    output_dir = ".lambda_dependencies/" + function_name

    # Install requirements for layer in the output_dir
    if not os.environ.get("SKIP_PIP"):
        # Note: Pip will create the output dir if it does not exist
        subprocess.check_call(
            f"pip install -r {requirements_file} -t {output_dir}/python".split()
        )

    return aws_lambda.LayerVersion(
        self,
        project_name + "-" + function_name + "-dependencies",
        code=aws_lambda.Code.from_asset(output_dir)
    )
It's actually part of the Stack class as a method (not inside the __init__). The way we have it set up here is that we have a folder called lambda_dependencies which contains a text file for every Lambda function we are deploying; each file is just a list of dependencies, like a requirements.txt.
And to utilise this code, we include it in the Lambda function definition like this:
get_data_lambda = aws_lambda.Function(
    self,
    .....
    layers=[self.create_dependencies_layer(PROJECT_NAME, GET_DATA_LAMBDA_NAME)]
)
You should install the dependencies of your Lambda locally before deploying it via CDK. CDK does not know how to install the dependencies or which libraries should be installed.
In your case, you should install the requests dependency and any other libraries before executing cdk deploy.
For example,
pip install requests --target ./asset/package
There is an example for reference.
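Here is a hedged sketch of how that pre-installed directory is then wired into the CDK function; it assumes the handler file and the pip-installed packages both end up where Lambda can import them from the asset root:
# Sketch: point the function's code at the directory holding the handler and its dependencies
requests_lambda = aws_lambda.Function(
    self,
    "RequestsLambda",
    runtime=aws_lambda.Runtime.PYTHON_3_6,
    handler="index.handler",
    code=aws_lambda.Code.from_asset("./asset"),
)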
Wanted to share 2 template repos I made for this (heavily inspired by some of the above):
https://github.com/iguanaus/cdk-ecs-python-with-requirements - demo of an ECS service running a basic Python function
https://github.com/iguanaus/cdk-lambda-python-with-requirements - demo of lambda python job with requirements.
Hope they are helpful for folks :)
Lastly; if you want to see a long thread on this subject, see here: https://github.com/aws/aws-cdk/issues/3660
I ran into this issue as well. I used a solution like @Kane and @Jamie suggest just fine when I was working on my Ubuntu machine. However, I ran into issues when working on macOS. Apparently some (all?) Python packages don't work on Lambda (a Linux environment) if they are pip installed on a different OS (see this Stack Overflow post).
My solution was to run the pip install inside a docker container. This allowed me to cdk deploy from my macbook and not run into issues with my python packages in lambda.
Suppose you have a dir lambda_layers/python in your CDK project that will house your Python packages for the Lambda layer:
current_path = str(pathlib.Path(__file__).parent.absolute())

pip_install_command = ("docker run --rm --entrypoint /bin/bash -v "
                       + current_path
                       + "/lambda_layers:/lambda_layers python:3.8 -c "
                       + "'pip3 install Pillow==8.1.0 -t /lambda_layers/python'")
subprocess.run(pip_install_command, shell=True)

lambda_layer = aws_lambda.LayerVersion(
    self,
    "PIL-layer",
    compatible_runtimes=[aws_lambda.Runtime.PYTHON_3_8],
    code=aws_lambda.Code.asset("lambda_layers"))
As an alternative to my other answer, here's a slightly different approach that also works with docker-in-docker (the bundling-options approach doesn't).
Set up the Lambda function like
lambda_fn = aws_lambda.Function(
    self,
    "Function",
    runtime=aws_lambda.Runtime.PYTHON_3_9,
    code=aws_lambda.Code.from_docker_build(
        "function_source_dir",
    ),
    handler="index.lambda_handler",
)
and in function_source_dir/ have these files:
index.py (to match the above code - you can name this whatever you like)
requirements.txt
Dockerfile
Set up your Dockerfile like
# Note that this dockerfile is only used to build the lambda asset - the
# lambda still just runs with a zip source, not a docker image.
# See the docstring for aws_lambda.Code.from_docker_build
FROM public.ecr.aws/lambda/python:3.9.2022.04.27.10-x86_64
COPY index.py /asset/
COPY requirements.txt /tmp/
RUN pip3 install -r /tmp/requirements.txt -t /asset
and the synth step will build your asset in docker (using the above dockerfile) then pull the built Lambda source from the /asset/ directory in the image.
I haven't looked into too much detail about why the BundlingOptions approach fails to build when running inside a docker container, but this one does work (as long as docker is run with -v /var/run/docker.sock:/var/run/docker.sock to enable docker-in-docker). As always, be sure to consider your security posture when doing this.
I wish to add more Python modules to my Yocto/OpenEmbedded project but I am unsure how to do so. I wish to add Flask and its dependencies.
Some Python packages have corresponding recipes in the meta folders, like the Enum class for example:
meta-openembedded/meta-python/recipes-devtools/python/python-enum34_1.1.6.bb
Unfortunately, lots of useful packages aren't available, but some might be needed for your Python application. Are you used to installing missing packages with pip on the already-booted platform? What if the target product is not connected to an IP network? The solution is to implement a new recipe and add it to the platform meta layer (at least). Here is an example recipe for the module keyboard, useful for intercepting key/button touch events:
use the PyPI web site to identify if the package is available:
https://pypi.org/project/keyboard/
download the archive available on the package description page:
https://github.com/boppreh/keyboard/archive/master.zip
collect some useful information required to fill out a new recipe:
SUMMARY - can be obtained from the package description page
HOMEPAGE - the project URL on GitHub or Bitbucket or SourceForge, etc.
LICENSE - verify the license type
LIC_FILES_CHKSUM - obtained by executing md5sum on an existing LICENSE or README or PKG-INFO file located in the root of the package (preferably)
SRC_URI[md5sum] - the md5sum of the archive itself; it will be used to discover and download the archive on the PyPI server automatically with the help of the supporting script (inherit pypi)
PYPI_PACKAGE_EXT - if the package is not tar.gz, you need to supply the correct extension
create missing python-keyboard_0.13.1.bb recipe:
SUMMARY = "Hook and simulate keyboard events on Windows and Linux"
HOMEPAGE = "https://github.com/boppreh/keyboard"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://PKG-INFO;md5=9bc8ba91101e2f378a65d36f675c88b7"
SRC_URI[md5sum] = "d4b90e53bbde888e7b7a5a95fe580a30"
SRC_URI += "file://add_missing_CHANGES_md.patch"
PYPI_PACKAGE = "keyboard"
PYPI_PACKAGE_EXT = "zip"
inherit pypi
inherit setuptools
BBCLASSEXTEND = "native nativesdk"
The package has been patched by adding the
SRC_URI += "file://add_missing_CHANGES_md.patch"
directive to the recipe, due to a missing CHANGES.md file used by the setup.py script to identify the package version (this step is optional). The patch itself has to be placed in a folder next to the recipe, named after the recipe but without the version:
python-keyboard
This question is old, but currently in 2020 there is a Python package called pipoe.
pipoe can generate .bb recipes corresponding to Python packages for you!
Usage:
$ pip3 install pipoe
$ pipoe -p requests
OR
$ pipoe -p requests --python python3
Now copy the generated .bb files to your layer and use them.
https://pypi.org/project/pipoe/
The OE layer index at layers.openembedded.org lists all known layers and the recipes they contain, so searching that should bring up the meta-python layer that you can add to your build and use recipes from.
In your Image recipe you can add a Python module by adding it to the IMAGE_INSTALL variable:
IMAGE_INSTALL += "python-numpy"
You can find possible modules, for example, by searching for them with wildcards:
find . -name "*python*numpy*bb"
Running this in the Yocto folder brings up:
./poky/meta/recipes-devtools/python/python-numpy_1.7.0.bb
pipoe did not work for me either, so I ended up making the bash script below. Someone else might find it useful.
You will need to change this in my script below:
local my_layers_dir="my/layers/directory"
To run this script:
./pypi.sh <modulename>
#example:
./pypi.sh humanfriendly #this should generate the bb file for the humanfriendly python module
pypi.sh:
#!/bin/bash
set -ex

function argstovars()
{
    for change in "$@"; do
        set -- `echo $change | tr '=' ' '`
        eval $1=$2
    done
}

function main(){
    local module=""
    argstovars "$@"

    local my_layers_dir="my/layers/directory"

    local url_files="https://pypi.org/project/$module/#files"

    mkdir -p /tmp/pypi
    rm -fr /tmp/pypi/*
    pushd /tmp/pypi
    wget $url_files
    local targz_url=$(cat index.html | grep https://files | grep tar.gz | sed -r "s/<a href=\"(.*)\">/\1/g")
    wget $targz_url
    local targz_file=$(ls | grep tar.gz)
    local md5=$(md5sum $targz_file)
    md5=${md5%% *}
    local sha256=$(sha256sum $targz_file)
    sha256=${sha256%% *}

    tar -xf $targz_file
    local module_with_version=$(echo "$targz_file" | sed -r "s/(.*)\.tar\.gz/\1/g")
    pushd $module_with_version
    local license_file=$(find . -name "LICENSE*")
    local md5lic=$(md5sum $license_file)
    md5lic=${md5lic%% *}
    popd
    popd

    module_with_version="${module_with_version//-/_}"

    mkdir -p "$my_layers_dir/$module"
    pushd "$my_layers_dir/$module"
    echo "SUMMARY = \"This is a python module for $module\"
HOMEPAGE = \"https://pypi.org/project/$module/\"
LICENSE = \"MIT\"
LIC_FILES_CHKSUM = \"file://$license_file;md5=$md5lic\"

SRC_URI[md5sum] = \"$md5\"
SRC_URI[sha256sum] = \"$sha256\"

PYPI_PACKAGE = \"$module\"
inherit pypi setuptools3

RDEPENDS_\${PN} += \" \
    python3-psutil \
\"
" > "${module_with_version}.bb"
    popd
}

time main module="$@"