Execute lambda function on local device using greengrass - python

I am trying to learn AWS Greengrass, so I was following this tutorial https://docs.aws.amazon.com/greengrass/latest/developerguide/gg-gs.html, which explains step by step how to set up Greengrass on a Raspberry Pi and publish some messages using a Lambda function.
The sample Lambda function is as follows:
import greengrasssdk
import platform
from threading import Timer
import time

# Creating a greengrass core sdk client
client = greengrasssdk.client('iot-data')

# Retrieving platform information to send from Greengrass Core
my_platform = platform.platform()

def greengrass_hello_world_run():
    if not my_platform:
        client.publish(topic='hello/world', payload='hello Sent from Greengrass Core.')
    else:
        client.publish(topic='hello/world', payload='hello Sent from Greengrass Core running on platform: {}'.format(my_platform))

    # Asynchronously schedule this function to be run again in 5 seconds
    Timer(5, greengrass_hello_world_run).start()

# Execute the function above
greengrass_hello_world_run()

# This is a dummy handler and will not be invoked
# Instead the code above will be executed in an infinite loop for our example
def function_handler(event, context):
    return
This works, but I am trying to understand it better by having the Lambda function do some extra work, for example opening a file and writing to it.
I modified the greengrass_hello_world_run() function as follows:
def greengrass_hello_world_run():
    if not my_platform:
        client.publish(topic='hello/world', payload='hello Sent from Greengrass Core.')
    else:
        stdout = "hello from greengrass\n"
        with open('/home/pi/log', 'w') as file:
            for line in stdout:
                file.write(line)
        client.publish(topic='hello/world', payload='hello Sent from Greengrass Core running on platform: {}'.format(my_platform))
I expect that upon deploying, the daemon running on my local Pi should create that file in the given directory, because I believe Greengrass Core runs this Lambda function on the local device. However, it doesn't create any file, nor does it publish anything, so I believe this code might be breaking. I'm not sure how though; I tried looking into CloudWatch but I don't see any events or errors being reported.
Any help on this would be really appreciated, cheers!

A few thoughts on this...
If you turn on local logging in your Greengrass group setup, the core will start to write logs locally on your Pi.
The logs are located in: /greengrass/ggc/var/log/system
If you tail python_runtime.log you can see any errors from the Lambda execution.
If you want to access local resources, you will need to create a resource in your Greengrass group definition. You can then give the Lambda access to a volume, which is where you can write your file.
You do need to redeploy your group once this has been done for the changes to take effect.

I think I found the answer: we have to create the resource in the Lambda's environment (the Greengrass group resource configuration) and also make sure to give the Lambda read and write access to that resource. By default the Lambda can only access the /tmp folder.
Here is the link to the documentation
https://docs.aws.amazon.com/greengrass/latest/developerguide/access-local-resources.html
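To illustrate, here is a minimal sketch of the modified function, assuming either the default-writable /tmp folder or a local volume resource whose destination path is /dest (a placeholder; use whatever destination path you configured for the resource in the group, and redeploy the group afterwards):

# Sketch only: client, my_platform and Timer come from the original example above.
def greengrass_hello_world_run():
    if not my_platform:
        client.publish(topic='hello/world', payload='hello Sent from Greengrass Core.')
    else:
        # /tmp is writable by default; '/dest/log' is a placeholder for a local resource path
        log_path = '/tmp/greengrass_hello.log'
        with open(log_path, 'a') as log_file:
            log_file.write("hello from greengrass\n")
        client.publish(topic='hello/world',
                       payload='hello Sent from Greengrass Core running on platform: {}'.format(my_platform))
    # Reschedule as in the original example
    Timer(5, greengrass_hello_world_run).start()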

Related

Camunda /External Task/ Connecting Python with BPM

I would like to create a simple Python Script and use it to perform a service task in my BPMN process. Does anyone know how I can use a Python script in a service task?
I'm attaching the references that you can use, along with the specific configuration.
Note: from the BPMN perspective, you just need to give the service task a task type, which you will use in the script to identify the task.
For Camunda 7 and a local setup, to execute the service task you can follow https://medium.com/#klauke.peter/implementing-an-external-task-worker-for-camunda-in-python-566b5ebff488 (see the sketch below).
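As a rough sketch of that pattern (not the exact code from the article; it assumes a local Camunda 7 engine at http://localhost:8080/engine-rest and a service task whose task type/topic is "my-python-topic"), a worker can poll the external task REST API directly with requests:

import time
import requests

ENGINE = "http://localhost:8080/engine-rest"   # assumed local Camunda 7 REST endpoint
WORKER_ID = "python-worker-1"
TOPIC = "my-python-topic"                      # placeholder: the task type set on the service task

def fetch_and_lock():
    # Ask the engine for up to one external task on our topic, locked for 10 seconds
    body = {
        "workerId": WORKER_ID,
        "maxTasks": 1,
        "topics": [{"topicName": TOPIC, "lockDuration": 10000}],
    }
    return requests.post(f"{ENGINE}/external-task/fetchAndLock", json=body).json()

def complete(task_id):
    # Report the task as done, optionally passing process variables back
    body = {"workerId": WORKER_ID,
            "variables": {"result": {"value": "done", "type": "String"}}}
    requests.post(f"{ENGINE}/external-task/{task_id}/complete", json=body)

while True:
    for task in fetch_and_lock():
        print("Handling external task", task["id"])
        complete(task["id"])
    time.sleep(5)   # simple polling interval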
For Camunda 8 and a Zeebe setup, you will have to make a slight change while creating the channel: you will have to use "from pyzeebe import create_camunda_cloud_channel". For the functional implementation, you can find it at the reference URL:
https://pyzeebe.readthedocs.io/en/latest/channels.html#camunda-cloud
Once you have created the channel and started the process, you can also refer to https://forum.camunda.io/t/boundary-event-error-handler/37272
That URL has code for handling the service task and also the boundary event. A small sketch of the pyzeebe worker setup is below.
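A minimal sketch of the Camunda 8 / pyzeebe side, assuming Camunda SaaS credentials (client_id, client_secret and cluster_id are placeholders) and a service task with task type "my-python-task":

import asyncio
from pyzeebe import ZeebeWorker, create_camunda_cloud_channel

# Placeholders: use the API client credentials from your Camunda 8 cluster
channel = create_camunda_cloud_channel(
    client_id="<client_id>",
    client_secret="<client_secret>",
    cluster_id="<cluster_id>",
)

worker = ZeebeWorker(channel)

@worker.task(task_type="my-python-task")   # must match the task type on the service task
async def my_python_task(name: str) -> dict:
    # Whatever the script should do; the returned dict becomes process variables
    return {"greeting": f"Hello {name} from Python"}

asyncio.run(worker.work())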
You can't use a Python script IN a service task, but you can use a Python script as an external task worker; this repo will probably be a good starting point for you.

Deliver a message from google cloud functions to virtual machine

Currently I automatically start a VM after running a Cloud Function via this code:
import googleapiclient.discovery

def start_vm(context, event):
    compute = googleapiclient.discovery.build('compute', 'v1')
    result = compute.instances().start(project='PROJECT', zone='ZONE', instance='NAME').execute()
Now I am looking for a way to deliver a message or a parameter at the same time. After the VM starts, different code should run based on the added message/parameter. Does anyone know how to achieve this?
I appreciate any help.
Thank you.
You can use guest attributes. The Cloud Function adds the guest attribute and then starts the VM.
In the startup script, you read the data from the guest attributes and use it to decide what to perform.
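As a rough sketch of that idea (here the parameter is passed as custom instance metadata set through the Compute API, which the startup script can read from the metadata server; project, zone, instance and the "startup-param" key are placeholders):

import googleapiclient.discovery

def start_vm_with_param(event, context):
    compute = googleapiclient.discovery.build('compute', 'v1')
    project, zone, instance = 'PROJECT', 'ZONE', 'NAME'   # placeholders

    # Read the current metadata so we can reuse its fingerprint (required by setMetadata)
    inst = compute.instances().get(project=project, zone=zone, instance=instance).execute()
    metadata = inst['metadata']
    items = [i for i in metadata.get('items', []) if i['key'] != 'startup-param']
    items.append({'key': 'startup-param', 'value': 'mode-a'})   # the message/parameter to deliver

    compute.instances().setMetadata(
        project=project, zone=zone, instance=instance,
        body={'fingerprint': metadata['fingerprint'], 'items': items},
    ).execute()

    # Start the VM; its startup script can read 'startup-param' from the metadata server
    compute.instances().start(project=project, zone=zone, instance=instance).execute()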
The other solution is to start a web server on the VM and then POST a request to it.
This solution is better if you have several tasks to perform on the VM. But take care of security if you expose a web server: expose it only internally and use a VPC connector on your Cloud Function to reach your VM. A sketch of the calling side is below.
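For the calling side, a minimal sketch (it assumes the VM already runs a web server on its internal IP, here 10.0.0.2 port 8080 as placeholders, and that the function has a Serverless VPC Access connector so it can reach that internal address):

import requests

def trigger_vm_task(event, context):
    # 10.0.0.2:8080 is a placeholder for the VM's internal IP and your server's port;
    # it is reachable only because the function uses a VPC connector on the same network.
    resp = requests.post(
        "http://10.0.0.2:8080/run-task",
        json={"parameter": "mode-a"},   # the message/parameter for the VM
        timeout=10,
    )
    resp.raise_for_status()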

Invoke AWS SAM local function from another SAM local function

I am trying to create an AWS SAM app with multiple AWS Serverless functions.
The app has one template.yaml file which has resources for two different serverless Lambda functions, for instance "Consumer Lambda" and "Worker Lambda". The Consumer gets triggered at a rate of 5 minutes. The Consumer uses the boto3 library to trigger the Worker Lambda function. This code works when the Worker Lambda is deployed on AWS.
But I want to test both functions locally with sam local invoke "Consumer", which should also invoke "Worker" locally.
Here's a screenshot of the YAML file:
I am using PyCharm to run the project. There is an option to run only one function at a time, which then creates only one folder in the build folder.
I have to test whether the Consumer is able to invoke the Worker locally in PyCharm before deployment. I think there is some way to do it, but I'm not sure how. I did some extensive searching but it didn't yield anything.
Any help is appreciated. Thanks in advance
You can start the lambda invoke endpoint in the following way (official docs):
sam local start-lambda
Now you can point your AWS resource client to port 3001 and trigger the functions locally.
For example, if you are doing this in Python, it can be achieved in the following way with boto3:
import boto3

# Create a lambda client pointed at the local endpoint
lambda_client = boto3.client('lambda',
                             region_name="<region>",
                             endpoint_url="http://127.0.0.1:3001",
                             use_ssl=False,
                             verify=False)

# Invoke the function
lambda_client.invoke(FunctionName=<function_name>,
                     Payload=<lambda_payload>)
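Building on that, one way to keep the Consumer's code identical locally and in AWS is to drive the endpoint from an environment variable (a sketch; AWS_SAM_LOCAL_ENDPOINT is a made-up variable name that you would set yourself when running locally, and "Worker" stands in for the Worker function's logical ID from template.yaml):

import os
import json
import boto3

# Hypothetical variable: set it to http://127.0.0.1:3001 when testing against
# `sam local start-lambda`; leave it unset in AWS so the real service is used.
local_endpoint = os.environ.get("AWS_SAM_LOCAL_ENDPOINT")

if local_endpoint:
    lambda_client = boto3.client('lambda', endpoint_url=local_endpoint,
                                 use_ssl=False, verify=False)
else:
    lambda_client = boto3.client('lambda')

def consumer_handler(event, context):
    # "Worker" is a placeholder for the Worker function's name/logical ID
    lambda_client.invoke(FunctionName="Worker",
                         Payload=json.dumps({"source": "consumer"}))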

How to convert an existing multi-module Python script to be used in AWS Lambda?

As directed, I have configured the handler, e.g. single_file.lambda_handler:
def lambda_handler(event, context):
    hubspot_api()
    sheet_clear()
    hubspot_properties()
    # remaining code
But the code does not execute and returns a timeout error. The configured timeout is adequate for the running code. Kindly suggest.
An AWS Lambda execution timeout mostly means that your Lambda does not have the access it needs to do what you want. I'd check whether the external HTTP endpoint used by hubspot_api is reachable from the function. Also, I strongly suggest using simple logging, or AWS X-Ray with segments, which can help figure out where the problem is.
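For the logging part, a minimal sketch (hubspot_api, sheet_clear and hubspot_properties are the question's own functions; the log lines go to CloudWatch Logs, so you can see which step hangs before the timeout):

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("calling hubspot_api")
    hubspot_api()
    logger.info("calling sheet_clear")
    sheet_clear()
    logger.info("calling hubspot_properties")
    hubspot_properties()
    logger.info("done")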

In an Alexa skill lambda function, can I make a call to a shell script when a certain intent is invoked?

I have a remote postfix mail server that has some data I want to return for a user when they ask "read my information." For this intent, in my getResponse() function, I make a call to a function getInfo(), whose return value I will pass to the speech_output. Within this function, I want to use ssh to access the remote machine and collect the data.
Currently, I have a shell script in the same directory, called retrieveInfo.sh, that collects the data from the remote server and writes it to a file called readingFile. I have written the following code inside my getInfo() function:
subprocess.call(['./retrieveInfo.sh'])   # requires "import subprocess" at module level
f = open("./readingFile", "r")
retVal = f.read()
return retVal
All of this code works except for the subprocess call. However, when I run this code, including the subprocess call, in another setting (not Alexa testing, just my own debugging), everything works fine. Does this have to do with packaging/deployment? I have tried to upload a zip file with subprocess installed, but that has not seemed to work either. I am not sure if this is an environment issue or some other problem.
It basically comes down to the following: when an intent is invoked, how can I make it run another program to collect some data for me? Or is there a way for me to access that remote machine through the Lambda Python function itself? I tried package deployment with paramiko, but that did not work either.
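For reference, a minimal sketch of the paramiko route mentioned above (hostname, username, key path and the remote command are placeholders; the private key would have to be bundled with the deployment package, and the Lambda would need network access to the mail server):

import paramiko

def getInfo():
    # Placeholders: replace with your mail server's hostname, user and bundled key file
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("mail.example.com", username="pi", key_filename="./id_rsa")

    # Run the remote command that gathers the data instead of a local shell script
    stdin, stdout, stderr = client.exec_command("cat /var/log/mail_summary.txt")
    data = stdout.read().decode()
    client.close()
    return data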
