I am trying to deploy an Azure Function written in Python to an Azure Function App. The function uses the pyzbar library. The pyzbar documentation says that in a Linux environment the following command needs to be executed so that pyzbar can work:
sudo apt-get install libzbar0
How can I execute this command on the Consumption plan? Please note that I can get this to work if I deploy the function with a container approach on a Premium or Dedicated plan, but I want to get it working on the Consumption plan.
Any help is highly appreciated.
I have a workaround: every time you trigger your function, it runs a script that installs the required packages from the command prompt.
This can be achieved using the subprocess module.
Code:
subprocess.run(["apt-get", "install", "-y", "libzbar0"], check=True)
For example, in the following code I am installing pandas using pip and returning its version.
But this will increase your execution time, because even once the packages have been installed, the installation commands will still run every time you trigger the function.
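The install-on-every-trigger pattern above can be improved by checking first whether the dependency is already importable. Here is a minimal sketch of that idea; the helper name `ensure_package` and the pip fallback are illustrative, not the exact code from this answer:

```python
# Sketch: install a dependency at runtime only when it is missing, so that
# repeated invocations skip the slow install step.
import importlib.util
import subprocess
import sys

def ensure_package(package: str) -> bool:
    """Return True if the package is already importable, else install it."""
    if importlib.util.find_spec(package) is not None:
        return True
    # Use the current interpreter's pip so the package lands in the
    # environment the function actually runs in.
    subprocess.run([sys.executable, "-m", "pip", "install", package], check=True)
    return False

print(ensure_package("json"))  # "json" is in the stdlib, so no install runs
```

The same guard works for the apt-get case: run the `apt-get install` subprocess only when an import of the wrapped library fails.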
Related
I’ve tried to install FEniCS and use the repository of the paper “Hybrid FEM-NN models: Combining artificial neural networks with the finite element method” to compute a linear physics-informed neural network for a linear problem (https://github.com/sebastkm/hybrid-fem-nn-examples/tree/main/examples/pinn_linear).
I’m using Windows 11 and Python 3.10.4.
To run the script main.py I need to use the fenics package. As usual working in python I did
pip install fenics
which worked without any problems. Trying to run the script prompted the error
from fenics import *
ModuleNotFoundError: No module named 'fenics'
After reading a couple of posts on this issue I made sure there are no other virtual environments left, and that the sys.path entry
C:\Users\neuma\AppData\Local\Programs\Python\Python310\Lib\site-packages
contains the folders belonging to the installed fenics packages:
fenics_dijitso-2019.1.0.dist-info
fenics_ffc-2019.1.0.post0.dist-info
fenics_fiat-2019.1.0.dist-info
fenics_ufl-2019.1.0.dist-info
fenics-2019.1.0.dist-info
I’ve noted that there is no folder named just fenics.
After this attempt did not work, I tried to follow the instructions for DOLFINx (https://docs.fenicsproject.org/dolfinx/main/python/installation.html#dependencies), since some posts mentioned that DOLFINx and FEniCS are the same.
After installing Docker I followed the instructions for running FEniCS inside Docker (https://fenics.readthedocs.io/projects/containers/en/latest/introduction.html#installing-docker). This at least seemed to work in the terminal:
C:\Users\neuma>docker run -ti quay.io/fenicsproject/stable:latest
# FEniCS stable version image
Welcome to FEniCS/stable!
This image provides a full-featured and optimized build of the stable
release of FEniCS.
To help you get started this image contains a number of demo
programs. Explore the demos by entering the 'demo' directory, for
example:
cd ~/demo/python/documented/poisson
python3 demo_poisson.py
fenics@9548d966c2fc:~$ cd ~/demo/python/documented/poisson
fenics@9548d966c2fc:~/demo/python/documented/poisson$ python3 demo_poisson.py
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Solving linear variational problem.
To view figure, visit http://0.0.0.0:8000
Press Ctrl+C to stop WebAgg server
Visiting the URL prompted the following message:
This page is not working
0.0.0.0 has not sent any data.
ERR_EMPTY_RESPONSE
Since I seem to have tried all possible ways of installing FEniCS (and/or DOLFINx) and nothing worked, I want to ask here if anyone could help me with the installation.
I’m pretty confused about the difference between FEniCS and DOLFINx, and about why I need Ubuntu or Linux and Docker to run a package that already seems to be installed in Python.
If you need any more information just let me know. Would be great if someone could help me out.
Oskar
In case anyone else struggles with this or similar problems, the solution was given here: https://fenicsproject.discourse.group/t/installation-problems/9041/43
I am trying to use great-expectations, i.e., run expectations suites within an AWS Lambda function.
When I try to install the packages from requirements.txt, I get an error regarding JupyterLab:
aws-sam\build\ValidationFunction\.\jupyterlab_widgets-1.1.0.data\data\share\jupyter\labextensions\@jupyter-widgets\jupyterlab-manager\schemas\@jupyter-widgets\jupyterlab-manager\package.json.orig'
I am using SAM CLI, version 1.42.0 and am trying to build the function inside a container.
Python version 3.9.
Requirements.txt:
botocore
boto3
awslambdaric
awswrangler
pandas_profiling
importlib-metadata==2.0
great-expectations==0.13.19
s3fs==2021.6.0
python-dateutil==2.8.1
aiobotocore==1.3.0
requests==2.25.1
decorator==4.4.2
pyarrow==2
I read several posts on the internet about using Lambda functions to run Great Expectations, but none of them report any issues.
Specifically, the question is: does anyone have a solution for running Python code on Lambda when the dependencies are a large set of Python packages?
Can you show a bit more of your code and the full error stack? I would start as simple as possible, get basic validation working, and then add dependencies back in until you find the culprit.
Add a simple Lambda with the minimum dependencies, maybe pandas and great-expectations, and then validate one rule, as in:
from great_expectations.core import ExpectationSuite, ExpectationConfiguration

custom_expectation_suite = ExpectationSuite(expectation_suite_name="deliverable_rules.custom")
custom_expectation_suite.add_expectation(
    ExpectationConfiguration(
        expectation_type="expect_column_values_to_not_be_null",
        kwargs={"column": "first_name"},
        meta={"reason": "first name should not be null"},
    )
)
validation_result = data_frame_to_validate.validate(custom_expectation_suite, run_id=run_id)
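As a sanity check of the rule itself, here is a plain-Python sketch (a hypothetical helper, not the great-expectations API) of what `expect_column_values_to_not_be_null` verifies, which runs without installing any of the heavy dependencies:

```python
# Sketch: the null-check rule reduced to plain Python over a list of row dicts.
def expect_column_values_to_not_be_null(rows, column):
    """Return a result dict mimicking the shape of an expectation result."""
    unexpected = [i for i, row in enumerate(rows) if row.get(column) is None]
    return {"success": not unexpected, "unexpected_index_list": unexpected}

rows = [{"first_name": "Ada"}, {"first_name": None}, {"first_name": "Grace"}]
result = expect_column_values_to_not_be_null(rows, "first_name")
print(result)  # success is False because row index 1 has a null first_name
```

If this logic is what you need, wiring it into great-expectations afterwards isolates packaging problems from rule problems.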
So my team at work had put together a simple Python service with a Dockerfile that performs some small tasks like pip-installing dependencies, creating a user, rewiring some directories, etc.
Everything worked fine locally: we were able to build and run a Docker image on my laptop (a MacBook Pro running High Sierra, if that is relevant).
We attempted to build the project on Openshift and kept getting a "Generic Build error" which told us to check the logs.
The log line it was failing on was
RUN pip3 install pandas flask-restful xgboost scikit-learn nltk scipy
and there was no related error listed with it. It literally just stopped at that point.
Specifically, it was breaking when it got to installing xgboost. We removed the xgboost part, the entire Dockerfile ran fine, and the build completed successfully, so we know it was definitely just that one install causing the issue. Has anyone else encountered this and know why it happens?
We are pretty certain it wasn't any kind of memory or storage issue as the container had plenty of room for how small the service was.
Note: We were able to eventually use a coworker's template image to get our project up with the needed dependencies, just curious why this was happening in case we run into a similar issue in the future
Not sure if this is an option for you, but when I need to build from a Dockerfile, I usually use a hosted service like Quay.io or DockerHub.
I am trying to import a Python deployment package in AWS Lambda. The Python code uses numpy. I followed the deployment-package instructions for a virtual env, but it still gave Missing required dependencies ['numpy']. I also followed the instructions given on Stack Overflow (skipping step 4 for shared libraries, since I could not find any shared libraries), but no luck. Any suggestions to make it work?
The easiest way is to use AWS Cloud9; there is no need to start EC2 instances or prepare deployment packages by hand.
Step 1: start up Cloud9 IDE
Go to the AWS console and select the Cloud9 service.
Create environment
enter Name
Environment settings (consider using the t2.small instance type; with the default instance I sometimes had problems restarting the environment)
Review
click Create environment
Step 2: create Lambda function
Create Lambda Function (bottom of screen)
enter Function name and Application name
Select runtime (Python 3.6) and blueprint (empty-python)
Function trigger (none)
Create serverless application (keep defaults)
Finish
wait couple of seconds for the IDE to open
Step 3: install Numpy
at the bottom of the screen there is a command prompt
go to the Application folder, for me this is
cd App
install the package (we have to use pip from the virtual environment, because the default pip points to /usr/bin/pip, which is Python 2.7)
venv/bin/pip install numpy -t .
Step4: test installation
you can test the installation by editing the lambda_function.py file:
import numpy as np
def lambda_handler(event, context):
    return np.sin(1.0)
save the changes and click the green Run button on the top of the screen
a new tab should appear, now click the green Run button inside the tab
after the Lambda executes I get:
Response
0.8414709848078965
Step 5: deploy Lambda function
on the right hand side of the screen select your Lambda function
click the upwards pointing arrow to deploy
go to the AWS Lambda service tab
the Lambda function should be visible, the name has the format
cloud9-ApplicationName-FunctionName-RandomString
Using NumPy is a real pain.
NumPy needs to be compiled on the same OS that it runs on. This means you need to install/compile NumPy on an Amazon Linux AMI image in order for it to run properly in Lambda.
The easiest way to do this is to start a small EC2 instance and install it there, then copy the compiled files (from /usr/lib/python/site-packages/numpy). These are the files you need to include in your Lambda package.
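The copy-and-zip step described above could be sketched like this; the helper `build_package` and the paths in it are illustrative examples, not a tool from this answer:

```python
# Sketch: bundle a handler file plus copied package directories (e.g. the
# numpy build copied off the EC2 instance) into a Lambda-style deployment zip.
import zipfile
from pathlib import Path

def build_package(handler_file: str, package_dirs: list[str], out_zip: str) -> str:
    """Zip the handler at the archive root and each package dir alongside it."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        # Lambda expects the handler module at the top level of the archive.
        zf.write(handler_file, Path(handler_file).name)
        for pkg in package_dirs:
            pkg_path = Path(pkg)
            for f in pkg_path.rglob("*"):
                if f.is_file():
                    # Store files under the package's own top-level name,
                    # e.g. numpy/__init__.py, so imports resolve at runtime.
                    zf.write(f, f.relative_to(pkg_path.parent))
    return out_zip
```

Upload the resulting zip as the Lambda function code; the key point is that every compiled directory sits next to the handler at the archive root.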
I believe you can also use the serverless tool to achieve this.
NumPy must be compiled on the platform it will run on. The easiest way to do this is to use Docker: Docker has a Lambda container, so compile NumPy locally in the Lambda container, then push to AWS Lambda.
The serverless framework handles all this for you if you want an easy solution. See https://stackoverflow.com/a/50027031/1085343
I was having this same issue and pip install wasn't working for me. Eventually I found some obscure posts about this issue here and here. They suggested going to PyPI, downloading the .whl file (select your Python version and manylinux1_x86_64), uploading it, and unzipping it. This is what ultimately worked for me. If nothing else is working for you, I would suggest trying this.
For Python 3.6, you can use a Docker image as someone already suggested. Full instructions for that here: https://medium.com/i-like-big-data-and-i-cannot-lie/how-to-create-an-aws-lambda-python-3-6-deployment-package-using-docker-d0e847207dd6
The key piece is:
docker run -it dacut/amazon-linux-python-3.6
I wrote some scripts for a customer. To avoid installing Python and its dependent packages, I packed everything into three exe files using cx_Freeze. The first is a Windows service that does most of the work; the second is a settings wizard; the third is a client that talks to the Windows service. Now I face this task: after installing the package (built with bdist_msi), I need to register the service in the system and run the wizard. How do I do that?
I think if you don't have a certificate that's impossible.