I have created an Event Grid-triggered Azure Function in Python. I have deployed my solution to Azure successfully and the execution works fine, but I have an issue with calling another Python script in the same folder. My code is given below:
import os, json, subprocess
import logging
import azure.functions as func

def main(event: func.EventGridEvent):
    try:
        correctionsMessages = event.get_json()
        for correctionMessage in correctionsMessages:
            strMessage = json.dumps(correctionMessage)
            full_path_to_script = os.path.join(os.path.dirname(os.path.realpath(__file__)) + '/' + correctionMessage['ScriptName'] + '.py')
            logging.info('Script Path: %s', full_path_to_script)
            logging.info('Parameter: %s', strMessage)
            subprocess.check_call('python '+ full_path_to_script + ' ' + json.dumps(strMessage))
        result = json.dumps({
            'id': event.id,
            'data': event.get_json(),
            'topic': event.topic,
            'subject': event.subject,
            'event_type': event.event_type,
        })
        logging.info('Python EventGrid trigger processed an event: %s', result)
    except Exception as e:
        logging.info('Error: %s', e)
The above code gives an error for subprocess.check_call. The error is "Error: [Errno 2] No such file or directory: 'python /home/site/wwwroot/Detections/Script1.py'". Script1.py is in the same folder as __init__.py. When I run this function locally, it works absolutely fine.
In my experience, the error is caused by subprocess.check_call not knowing the path of the python executable, not by the path of Script1.py.
In your local Azure Functions development environment, the python path is configured in the PATH environment variable, so subprocess.check_call can invoke python by searching the executable paths listed there. In the cloud, however, there is no python path pre-configured in that environment variable; only the Azure Functions host knows the real absolute path to Python.
So the solution is to find out the real absolute path of the Python executable and use it instead of python in your code.
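For example, a minimal sketch reusing the variables from the question's code: sys.executable holds the absolute path of the interpreter that is running the function, and passing an argument list avoids the whole command string being treated as a single file name.
import sys, subprocess

# invoke the script with the host's own interpreter instead of relying on PATH
subprocess.check_call([sys.executable, full_path_to_script, strMessage])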
However, in the Azure Functions Python runtime, I don't think it's a good idea to use subprocess.check_call to spawn a child process to do some processing for a given message. The safe and correct way is to define a function in Script1.py, or directly in __init__.py, and pass the given message to it as a parameter to achieve the same feature.
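A minimal sketch of that approach, assuming Script1.py defines a function called process_message (a hypothetical name):
import azure.functions as func
from . import Script1  # sibling module in the same function folder

def main(event: func.EventGridEvent):
    for correctionMessage in event.get_json():
        # call the helper in-process instead of spawning a child interpreter
        Script1.process_message(correctionMessage)  # process_message is a hypothetical name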
Currently I'm trying to copy from a local directory to an HDFS directory using the FS handler from the Java gateway in Python; the script will later be run using spark-submit.
My first attempt:
fs.copyFromLocalFile(spark._jvm.org.apache.hadoop.fs.Path(local_dir,hdfs_file_path+v_file_dt))
Second attempt:
def copy_from_local_file(spark, logger, local_dir, hdfspath, delSrc=True, overwrite=True):
    # copyFromLocalFile(boolean delSrc, boolean overwrite, Path src, Path dst)
    try:
        fs(spark).copyFromLocalFile(delSrc, overwrite, spark._jvm.org.apache.hadoop.fs.Path(local_dir), spark._jvm.org.apache.hadoop.fs.Path(hdfspath))
        logger.info("copyFromLocal {} to {} success".format(local_dir, hdfspath))
    except Exception as e:
        logger.error(e)
        logger.info("copyFromLocal {} to {} error".format(local_dir, hdfspath))
Error for the second attempt:
ERROR:spark-ingestor:'JavaObject' object is not callable
Since I'm not familiar with how the FS functions work, I simply used copyFromLocalFile, which I assumed mirrors the "-put" command from the basic hdfs CLI.
However, neither method outputs any file copied to the HDFS directory. I'd appreciate any help on this issue. Thanks!
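For reference, a hedged sketch of how a Hadoop FileSystem handle is typically obtained through the JVM gateway before calling copyFromLocalFile; the "'JavaObject' object is not callable" error suggests fs is already a Java object rather than something to call with (spark). Names assume an existing SparkSession named spark:
jvm = spark._jvm
hadoop_conf = spark._jsc.hadoopConfiguration()
fs = jvm.org.apache.hadoop.fs.FileSystem.get(hadoop_conf)
# copyFromLocalFile(boolean delSrc, boolean overwrite, Path src, Path dst)
fs.copyFromLocalFile(True, True,
                     jvm.org.apache.hadoop.fs.Path(local_dir),
                     jvm.org.apache.hadoop.fs.Path(hdfspath))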
Hi, I am pretty new to this AWS world. What I am trying to do is connect a Python client to the AWS IoT service and publish a message. I am using the Python SDK and its example, but I have problems with the certification process. I have already created the thing, the policies and the certificate, and I downloaded the files, but in the Python program I have no idea whether I am writing the path to these files correctly.
First I tried writing the whole path of each file and nothing; then I tried just putting "certificados\thefile" and nothing.
The error that pops up says the problem is the path, which is precisely what I do not know how to write.
Thanks for taking the time, and sorry if this question is too basic; I am just jumping into this.
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
import time as t
import json
import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT
# Define ENDPOINT, CLIENT_ID, PATH_TO_CERT, PATH_TO_KEY, PATH_TO_ROOT, MESSAGE, TOPIC, and RANGE
ENDPOINT = "MYENDPOINT"
CLIENT_ID = "testDevice"
PATH_TO_CERT = "certificados/5a7e19a0269abe740ac8b38a1bfdab115d14074eb212167a3ba359c0d237a8c3-certificate.pem.crt"
PATH_TO_KEY = "certificados/5a7e19a0269abe740ac8b38a1bfdab115d14074eb212167a3ba359c0d237a8c3-private.pem.key"
PATH_TO_ROOT = "certificados/AmazonRootCA1.pem"
MESSAGE = "Hello World"
TOPIC = "Prueba/A"
RANGE = 20
myAWSIoTMQTTClient = AWSIoTPyMQTT.AWSIoTMQTTClient(CLIENT_ID)
myAWSIoTMQTTClient.configureEndpoint(ENDPOINT, 8883)
myAWSIoTMQTTClient.configureCredentials(PATH_TO_ROOT, PATH_TO_KEY, PATH_TO_CERT)
myAWSIoTMQTTClient.connect()
print('Begin Publish')
for i in range(RANGE):
    data = "{} [{}]".format(MESSAGE, i+1)
    message = {"message" : data}
    myAWSIoTMQTTClient.publish(TOPIC, json.dumps(message), 1)
    print("Published: '" + json.dumps(message) + "' to the topic: " + "'test/testing'")
    t.sleep(0.1)
print('Publish End')
myAWSIoTMQTTClient.disconnect()
I have created a directory on my desktop to store these files; its name is "certificados" and from there I am taking the path, but it doesn't work.
OSError: certificados/AmazonRootCA1.pem: No such file or directory
Also, I am using VS Code to run this application.
The error is pretty clear: it can't find the CA cert file at the path you've given it. A relative path is interpreted relative to where the script is executed from, which is most likely the directory of the Python file itself. If that's not the Desktop, then you need to provide the fully qualified path.
So assuming Linux, change the paths to:
PATH_TO_CERT = "/home/user/Desktop/certificados/5a7e19a0269abe740ac8b38a1bfdab115d14074eb212167a3ba359c0d237a8c3-certificate.pem.crt"
PATH_TO_KEY = "/home/user/Desktop/certificados/5a7e19a0269abe740ac8b38a1bfdab115d14074eb212167a3ba359c0d237a8c3-private.pem.key"
PATH_TO_ROOT = "/home/user/Desktop/certificados/AmazonRootCA1.pem"
I have a Lambda function that needs to use pandas, sqlalchemy, and cx_Oracle.
Installing and packaging all these libraries together exceeds the 250MB deployment package limit of AWS Lambda.
I would like to include only the .zip of the Oracle Basic Light Package, then extract and use it at runtime.
What I have tried
My project is structured as follows:
cx_Oracle-7.2.3.dist-info/
dateutil/
numpy/
pandas/
pytz/
six-1.12.0.dist-info/
sqlalchemy/
SQLAlchemy-1.3.8.egg-info/
cx_Oracle.cpython-36m-x86_64-linux-gnu.so
instantclient-basiclite-linux.x64-19.3.0.0.0dbru.zip
main.py
six.py
template.yml
In main.py, I run the following:
import json, traceback, os
import sqlalchemy as sa
import pandas as pd

def main(event, context):
    try:
        unzip_oracle()
        return {'statusCode': 200,
                'body': json.dumps(run_query()),
                'headers': {'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*'}}
    except:
        em = traceback.format_exc()
        print("Error encountered. Error is: \n" + str(em))
        return {'statusCode': 500,
                'body': str(em),
                'headers': {'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*'}}

def unzip_oracle():
    print('extracting oracle drivers and copying results to /var/task/lib')
    os.system('unzip /var/task/instantclient-basiclite-linux.x64-19.3.0.0.0dbru.zip -d /tmp')
    print('extraction steps complete')
    os.system('export ORACLE_HOME=/tmp/instantclient_19_3')

def get_db_connection():
    return sa.engine.url.URL('oracle+cx_oracle',
                             username='do_not_worry', password='about_any',
                             host='of_these', port=1521,
                             query=dict(service_name='details'))

def run_query():
    query_text = """SELECT * FROM dont_worry_about_it"""
    conn = sa.create_engine(get_db_connection())
    print('Connected')
    df = pd.read_sql(sa.text(query_text), conn)
    print(df.shape)
    return df.to_json(orient='records')
This returns the error:
sqlalchemy.exc.DatabaseError: (cx_Oracle.DatabaseError) DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/odpi/doc/installation.html#linux for help
(Background on this error at: http://sqlalche.me/e/4xp6)
What I have also tried
I've tried:
Adding
Environment:
  Variables:
    ORACLE_HOME: /tmp
    LD_LIBRARY_PATH: /tmp
to template.yml and redeploying. Same error as above.
Adding os.system('export LD_LIBRARY_PATH=/tmp/instantclient_19_3') into the python script. Same error as above.
Many cp and ln commands, which turned out to be forbidden in Lambda outside of the /tmp folder. Same error as above.
One way that works, but is bad
If I make a folder called lib/ in the Lambda package and include an odd assortment of libaio.so.1, libclntsh.so, etc. files, the function works as expected, for some reason. I ended up with this:
<all the other libraries and files as above>
lib/
-libaio.so.1
-libclntsh.so
-libclntsh.so.10.1
-libclntsh.so.11.1
-libclntsh.so.12.1
-libclntsh.so.18.1
-libclntsh.so.19.1
-libclntshcore.so.19.1
-libipc1.so
-libmql1.so
-libnnz19.so
-libocci.so
-libocci.so.10.1
-libocci.so.11.1
-libocci.so.12.1
-libocci.so.18.1
-libocci.so.19.1
-libociicus.so
-libons.so
However, I chose these files through trial and error and don't want to go through this again.
Is there a way to unzip instantclient-basiclite-linux.x64-19.3.0.0.0dbru.zip in Lambda at runtime, and make Lambda see/use it to connect to an Oracle database?
I am not by any means an expert at Python, but these lines seem very strange:
print('extracting oracle drivers and copying results to /var/task/lib')
os.system('unzip /var/task/instantclient-basiclite-linux.x64-19.3.0.0.0dbru.zip -d /tmp')
print('extraction steps complete')
os.system('export ORACLE_HOME=/tmp/instantclient_19_3')
Normally, you will have very limited access to OS-level APIs in Lambda. And even when you do, they can behave in ways you do not expect. (Think: who owns the unzip binary? Which user can see and invoke the files created by that command?)
I see you mentioned that you have no issue extracting the files, which is also a bit strange.
My only answer for you is:
1/ Try to "bring your own" tools (unzip, etc.).
2/ Never rely on OS-level calls like os.system('export ...'); the export only sets the variable in the short-lived child shell and never reaches your Lambda process. Always use full paths. See the sketch below.
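As a hedged sketch, setting the variables from within the Python process itself, rather than through a shell, would look like this:
import os

# os.system('export ...') only affects the short-lived child shell it spawns;
# os.environ mutates the environment of the current (Lambda) process.
os.environ['ORACLE_HOME'] = '/tmp/instantclient_19_3'
os.environ['LD_LIBRARY_PATH'] = '/tmp/instantclient_19_3'
Note that the dynamic loader reads LD_LIBRARY_PATH once at process startup, so changing it at runtime may still not help the process locate shared libraries; that would be consistent with the lib/ workaround in the question working where the runtime exports did not.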
Looking again at your question, it seems the way you specify the environment variable is conflicting:
ORACLE_HOME: /tmp
Should it not be:
Environment:
  Variables:
    ORACLE_HOME: /tmp/instantclient_19_3
    LD_LIBRARY_PATH: /tmp/instantclient_19_3
Also, see: How to access an AWS Lambda environment variable from Python
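Following that link, a quick hypothetical check that the variables actually reach the runtime would be:
import os

def lambda_handler(event, context):
    # log what the runtime actually sees for these variables
    print(os.environ.get('ORACLE_HOME'))
    print(os.environ.get('LD_LIBRARY_PATH'))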
I am developing a simple website using Flask + gunicorn + nginx on a Raspberry Pi with Raspbian Jessie.
I am stuck at launching a process with this Python code:
def which(program):
    def is_exe(fpath):
        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

    fpath, fname = os.path.split(program)
    if fpath:
        if is_exe(program):
            return program
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            path = path.strip('"')
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file
    return None

mplayer_path = which("mplayer")
try:
    player = subprocess.Popen([mplayer_path, mp3], stdin=subprocess.PIPE)
except:
    return render_template('no_mp3s.html', mp3_message=sys.exc_info())
"mp3" is the path to an mp3 file while "mplayer_path" is the absolute path to mplayer, as returned by the which function described in this answer.
The code works in development when I launch flask directly. In production, when I access the website through nginx, I get the following error message through the no_mp3s.html template:
<type 'exceptions.AttributeError'>
AttributeError("'NoneType' object has no attribute 'rfind'",)
<traceback object at 0x7612ab98>
I suspect a permission issue with nginx, but being very new with Linux I am a bit lost!
Edit:
I should add that nowhere in my code (which fits in a single file) do I call rfind(). Also, I am sure that the error is caught in this specific try/except because it is the only one that outputs to no_mp3s.html.
Edit:
Following blubberdiblub's comments, I found out that it is the which function that does not work when the app is run under nginx. Hard-coding the path to mplayer seems to work!
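That is consistent with the traceback: a gunicorn service usually starts with a minimal PATH, so which("mplayer") likely returns None, and passing None as the program to subprocess.Popen produces exactly this kind of 'rfind' AttributeError from the path-handling code. A hedged guard, where the fallback location is only a guess:
mplayer_path = which("mplayer") or "/usr/bin/mplayer"  # fallback path is an assumption
if not os.path.isfile(mplayer_path):
    raise RuntimeError("mplayer not found; check PATH in the gunicorn service environment")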
I'm trying to deploy a Python .zip package as an AWS Lambda function.
I chose the hello-python blueprint.
I created the first Lambda with the inline code; after that I tried to change to uploading from a development .zip.
The package I used is a .zip containing a single file called hello_python.py with the same code as the default inline code sample, which is shown below:
from __future__ import print_function
import json
print('Loading function')
def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    print("value1 = " + event['key1'])
    print("value2 = " + event['key2'])
    print("value3 = " + event['key3'])
    return event['key1']  # Echo back the first key value
    #raise Exception('Something went wrong')
After I click "save and test", nothing happens except a weird red ribbon, with no other substantive error messages. The logs and the run results do not change if I modify the source, repackage, and upload it again.
Lambda functions require a handler in the format <FILE-NAME-NO-EXTENSION>.<FUNCTION-NAME>. In your case the handler is set to lambda_function.lambda_handler, which is the default value assigned by AWS Lambda. However, you've named your file hello_python.py. Therefore, AWS Lambda is looking for a Python file named lambda_function.py and finding nothing.
To fix this either:
Rename your hello_python.py file to lambda_function.py
Modify your lambda function handler to be hello_python.lambda_handler
You can see an example of how this works in the documentation, where they create a Python function called my_handler() inside the file hello_python.py and create a Lambda function to call it with the handler hello_python.my_handler.
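To illustrate the mapping, a minimal sketch: with this file in the deployment package, the handler string picks the file by its base name and the function by its name.
# hello_python.py
def lambda_handler(event, context):
    return event['key1']

# handler "hello_python.lambda_handler" -> file hello_python.py, function lambda_handler()
# the default "lambda_function.lambda_handler" would require the file to be lambda_function.py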