I am using a Python-based Azure Function and would like to use the uuid package in the function.
However, when the function is deployed, it fails on the import uuid statement. uuid is specified in requirements.txt and installs OK.
The uuid import works fine when I run the function locally in debug mode, but not when deployed.
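For reference, uuid ships with the Python standard library, so a minimal HTTP-triggered function can import it without any pip install at all; a sketch, assuming the standard azure.functions HTTP-trigger signature:

import uuid
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # uuid needs no requirements.txt entry; it ships with the interpreter
    return func.HttpResponse(str(uuid.uuid4()))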
I am trying to deploy a Python Azure Function into an Azure Function App. The function's __init__.py script imports an SDK which is stored as an Azure Artifacts Python package. I can build and publish the function to Azure successfully using a pipeline from the DevOps repo; however, the function fails at the import mySDK line when I run it.
I assume the issue is that, because the function is serverless, the SDK needs to be pip-installed again when the function is called - how do I do this?
I have tried adding a PIP_EXTRA_INDEX_URL pointing to the artifact feed in the Function App, with no success.
PIP_EXTRA_INDEX_URL worked for me.
What error did you receive when you tried it?
Basically, before you deploy your function, you should alter the application settings on your Function App and add the PIP_EXTRA_INDEX_URL key-value pair. Then add the Python package from your Azure Artifacts feed to the requirements.txt file in your function app code.
There is a good guide here: EasyOps - How To connect to azure artifact feed from Function App
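In practice, that app setting can be added with the Azure CLI; a sketch, where the organization, feed, username, and PAT values are placeholders you would substitute:

az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings PIP_EXTRA_INDEX_URL=https://<username>:<PAT>@pkgs.dev.azure.com/<org>/_packaging/<feed>/pypi/simple/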
I was trying to create an Azure Function using Python (HTTP trigger) to fetch data from a Gremlin graph.
I used
from gremlin_python.driver import client as clientDriver
to import the libraries and it was working fine locally.
When I deployed the same code to the Azure portal and ran it, I got a 500 internal server error.
After trying some changes, I could see that the "from gremlin_python.driver import client as clientDriver" import statement was not working (when I remove this line, the code works).
When we run the code in VS Code, we create a virtual env and install the gremlin packages, which is why it works locally but not in the Azure portal.
Could someone help me resolve this issue?
For this problem, we need to make sure the requirements.txt is right. And if you just import the module with the line
from gremlin_python.driver import client as clientDriver
you need to add another line to import the gremlin_python.driver module explicitly:
import gremlin_python.driver
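Putting it together, a minimal sketch of the two files involved; the PyPI package name gremlinpython is real, but the version pin and the endpoint/database/graph/key values below are illustrative placeholders:

requirements.txt:

gremlinpython==3.4.8

__init__.py:

import gremlin_python.driver
from gremlin_python.driver import client as clientDriver

# Hypothetical Cosmos DB connection, following the gremlinpython Client API
gremlin_client = clientDriver.Client(
    'wss://<account>.gremlin.cosmos.azure.com:443/', 'g',
    username='/dbs/<database>/colls/<graph>',
    password='<primary-key>'
)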
Hope it helps~
I have to display data from a SQL database using App Engine from Google Cloud via Python.
All requests are GET type.
The requests package must also be imported into the Python file: import requests.
I also need to import the json package from Flask.
The requests package will be used inside the "app.route" functions; I can define multiple routes (one for each URI above) in which I fetch and display the data. Full example:
from flask import Flask, json
import requests

app = Flask(__name__)

@app.route('/employees')
def employees():
    # Relay the backing service's JSON response to the caller
    res = requests.get('https://ultra-automata-237814.appspot.com/employees')
    return app.response_class(
        response=json.dumps(res.json()),
        status=200,
        mimetype='application/json'
    )
I ran the pip freeze > requirements.txt command.
I installed requirements.txt from my project root.
I upgraded pip.
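In shell form, those steps amount to the following, run from the project root:

pip install --upgrade pip
pip freeze > requirements.txt
pip install -r requirements.txt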
I'm trying to use boto3 within a pipenv with Python 3.6.5.
So I installed it with
pipenv install boto3
So for testing purposes I'm using a single Flask app, and added this at the beginning of the file:
import boto3
However, without even running the program, PyLint warns me E0401: Unable to import 'boto3', and auto-completion only proposes botocore.
If I try to run the Flask app or deploy it to Lambda (because that's the purpose of this app), I get a 500 error.
However, the strange thing is that if I use the REPL within the pipenv, in the same directory, and type
>>> import boto3
it succeeds, and I can use all the other boto3 commands. So in my opinion it is installed, but for a reason I can't think of, my Python file can't load it.
I have heard of file-naming conflicts, but honestly I doubt that this is the reason, since even if I rename the file and the Flask app with a weird name, it still can't load.
Any thoughts about it? Thanks a lot
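A quick diagnostic sketch one can run both from the Flask app and from the pipenv REPL and compare: if the interpreter path or the import search path differs between the two runs, the app is not actually executing inside the pipenv virtualenv (the printed paths are whatever your environment yields):

import sys

print(sys.executable)  # which interpreter is actually running
print(sys.path)        # where imports are resolved from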
I'm using Apex to deploy Lambda functions in AWS. I need to write a Lambda function which runs a cleanup script on an Oracle RDS instance in my AWS VPC. Oracle has a very nice Python library called cx_Oracle, but I'm having some problems using it in a Lambda function (running on Python 2.7). My first step was to try to run the Oracle-described test code, as follows:
from __future__ import print_function
import json
import boto3
import boto3.ec2
import os
import cx_Oracle

def handle(event, context):
    # Connect to the Oracle RDS instance and report the server version
    con = cx_Oracle.connect('username/password@my.oracle.rds:1521/orcl')
    print(str(con.version))
    con.close()
When I try to run this piece of test code, I get the following response:
Unable to import module 'main': /var/task/cx_Oracle.so: invalid ELF header
Google has told me that this error is caused because the cx_Oracle library is not a complete Oracle implementation for Python; rather, it requires the SQL*Plus client to be pre-installed, and the cx_Oracle library references components installed as part of SQL*Plus.
Obviously, pre-installing SQL*Plus might be difficult.
Apex has the
hooks {}
functionality, which would allow me to pre-build things, but I'm having trouble finding documentation showing what happens to those artefacts and how that works (a sketch of a hooks configuration follows the questions below). In theory I could download the libraries into a Nexus or an S3 bucket, and then in my hooks {} declaration I could add them to the zip file. I could then try to install them as part of the Python script. However, I have a few problems with this:
1. How are the 'built' artefacts accessed inside the Lambda function? Can they be? Have I misunderstood this?
2. Does a Python 2.7 Lambda function have enough access rights to the operating system of the host container to be able to install a library?
3. If the answer to question 2 is no, is there another way to write a Lambda function to run some SQL against an Oracle RDS instance?
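For reference, the hooks sketch mentioned above: as I understand Apex, hooks live in function.json or project.json, and the build command runs before the function directory is zipped, so whatever files the command leaves behind end up inside the deployment package. The pip invocation here is illustrative only; for cx_Oracle it would additionally need a Linux-built binary and the Oracle client shared libraries placed alongside the code:

{
  "hooks": {
    "build": "pip install -r requirements.txt -t .",
    "clean": "rm -rf cx_Oracle* *.dist-info"
  }
}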