How to bind an Azure Function to a blob container in Python?

I have a blob container containing multiple files. I'm interested in binding the last modified one as input for an Azure Function. The function is implemented in Python.
I thought I could do this by binding the blob container as a CloudBlobContainer and then iterating over the files to find the last modified one. According to this thread it seems binding to a container is possible in C#, but I can't figure out how to do this in Python: CloudBlobContainer doesn't seem to exist for Python. What other alternatives do I have?

According to this thread it seems like binding to a container is possible in C#.
It seems that you have already seen the usage section of the Blob trigger for Azure Functions document. Further evidence is that all bindings for language platforms other than C# are built on the ExtensionBundle, and you can see there is no Container type in the supported list.
So I guess you have to implement it with the Python blob storage SDK inside the Azure Function method. Or you could submit feedback to the Azure Functions team to improve the product.
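To make that concrete, here is a minimal sketch of the SDK approach, assuming the azure-storage-blob package, an HTTP-triggered function, the AzureWebJobsStorage connection string, and a hypothetical container name "mycontainer":

import os
import azure.functions as func
from azure.storage.blob import BlobServiceClient

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Connect with the storage account the Function App already uses
    service = BlobServiceClient.from_connection_string(os.environ["AzureWebJobsStorage"])
    container = service.get_container_client("mycontainer")  # hypothetical container name

    # Pick the most recently modified blob by comparing last_modified timestamps
    latest = max(container.list_blobs(), key=lambda b: b.last_modified)

    content = container.download_blob(latest.name).readall()
    return func.HttpResponse(f"Latest blob: {latest.name} ({len(content)} bytes)")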

Related

AWS CDK Python Create Resources conditionally

I would like to create resources depending on a parameter's value. How can I achieve that?
Example:
vpc_create = core.CfnParameter(stack, "createVPC")
condition = core.CfnCondition(
    stack,
    "testeCondition",
    expression=core.Fn.condition_equals(vpc_create, True),
)
vpc = ec2.Vpc(stack, "MyVpc", max_azs=3)
How do I add the condition to the VPC resource so that the VPC is created only if the parameter is true?
I think I need to get the CloudFormation resource, something like this:
vpc.node.default_child  # I think this returns an object of the ec2.CfnVPC class, but I'm stuck here.
Thanks
Conditional resource creation and a lot of other flexibility are possible using context data. AWS itself recommends context over parameters:
In general, we recommend against using AWS CloudFormation parameters with the AWS CDK. Unlike context values or environment variables, the usual way to pass values into your AWS CDK apps without hard-coding them, parameter values are not available at synthesis time, and thus cannot be easily used in other parts of your AWS CDK app, particularly for control flow.
Please read in full at: https://docs.aws.amazon.com/cdk/latest/guide/parameters.html
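For illustration, a minimal sketch of the context-driven approach, assuming CDK v1-style imports as in the question and a hypothetical context key create_vpc passed on the command line (e.g. cdk synth -c create_vpc=true):

from aws_cdk import core
import aws_cdk.aws_ec2 as ec2

app = core.App()
stack = core.Stack(app, "MyStack")

# Context values are available at synthesis time, unlike CfnParameter values
create_vpc = app.node.try_get_context("create_vpc")

if create_vpc == "true":
    # The resource is only synthesized when the context flag is set
    vpc = ec2.Vpc(stack, "MyVpc", max_azs=3)

app.synth()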

How to load dependencies from an s3 bucket AND a separate event JSON?

The dependencies for my AWS Lambda function were larger than the allowable limits, so I uploaded them to an S3 bucket. I have seen how to use an S3 bucket as an event source for a Lambda function, but I need to use these packages in conjunction with a separate event. The S3 bucket only contains Python modules (numpy, nltk, etc.), not the event data used in the Lambda function.
How can I do this?
Event data will come in from whatever event source you configure. Refer to the docs here for the S3 event source.
As for the dependencies themselves, you will have to zip the whole codebase (code + dependencies) and use that as a deployment package. You can find detailed instructions in the docs. For reference, here are the ones for Node.js and Python.
Pro tip: a better way to manage dependencies is to use a Lambda Layer. You can create a layer with all your dependencies and then add it to the functions that make use of them. Read more about it here.
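As a rough sketch of that layer workflow with boto3, assuming the zipped dependencies already sit in your S3 bucket with the top-level python/ folder that layers expect, and with placeholder bucket, key, layer, and function names:

import boto3

lambda_client = boto3.client("lambda")

# Publish the zipped dependencies (already uploaded to S3) as a layer version
layer = lambda_client.publish_layer_version(
    LayerName="my-python-deps",                 # hypothetical layer name
    Content={"S3Bucket": "my-deps-bucket",      # hypothetical bucket
             "S3Key": "python-deps.zip"},       # hypothetical key
    CompatibleRuntimes=["python3.9"],
)

# Attach the layer to the function that needs the dependencies
lambda_client.update_function_configuration(
    FunctionName="my-function",                 # hypothetical function name
    Layers=[layer["LayerVersionArn"]],
)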
If your dependencies are still above the 512 MB hard limit of AWS Lambda, you may consider using Amazon Elastic File System (EFS) with Lambda.
With this, you can essentially attach network storage to your Lambda function. I have personally used it to load huge reference files which are over the limit of Lambda's file storage. For a walkthrough you can refer to this article by AWS. To quote the conclusion from the article:
EFS for Lambda allows you to share data across function invocations, read large reference data files, and write function output to a persistent and shared store. After configuring EFS, you provide the Lambda function with an access point ARN, allowing you to read and write to this file system. Lambda securely connects the function instances to the EFS mount targets in the same Availability Zone and subnet.
You can read the announcement here.
Edit 1: Added EFS for lambda info.
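To illustrate the EFS route, a minimal handler sketch; the mount path and reference file name below are hypothetical, and the access point itself is configured on the function, not in code:

import json
import os

MOUNT_PATH = "/mnt/data"  # hypothetical mount path configured on the function

def handler(event, context):
    # Large reference files live on EFS instead of the deployment package
    with open(os.path.join(MOUNT_PATH, "reference.json")) as f:  # hypothetical file
        reference = json.load(f)
    return {"available_files": os.listdir(MOUNT_PATH), "records": len(reference)}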

Python vs. Node.js Event Payloads in Firebase Cloud Functions

I am in the process of writing a Cloud Function for Firebase via the Python option. I am interested in Firebase Realtime Database triggers; in other words, I want to listen to events that happen in my Realtime Database.
The Python environment provides the following signature for handling Realtime Database triggers:
def handleEvent(data, context):
    # Triggered by a change to a Firebase RTDB reference.
    # Args:
    #   data (dict): The event payload.
    #   context (google.cloud.functions.Context): Metadata for the event.
This is looking good. The data payload provides two dictionaries: 'data' for the value before the change and 'delta' for the changed bits.
The confusion kicks in when comparing this signature with the Node.js environment. Here is a similar signature from the Node.js world:
exports.handleEvent = functions.database.ref('/path/{objectId}/').onWrite((change, context) => { /* ... */ });
In this signature, the change parameter is pretty powerful and it seems to be of type firebase.database.DataSnapshot. It has nice helper methods such as hasChild() or numChildren() that provide information about the changed object.
The question is: Does Python environment have a similar DataSnapshot object? With Python, do I have to query the database to get the number of children for example? It really isn't clear what Python environment can and can't do.
Related API/Reference/Documentation:
Firebase Realtime DB Triggers: https://cloud.google.com/functions/docs/calling/realtime-database
DataSnapshot Reference: https://firebase.google.com/docs/reference/js/firebase.database.DataSnapshot
The Python runtime currently doesn't have a similar object structure. The firebase-functions SDK is actually doing a lot of work for you in creating objects that are easy to consume. Nothing similar happens in the Python environment: you essentially get a pretty raw view of the payload of data contained by the event that triggered your function.
If you write Realtime Database triggers for Node without using the Firebase SDK, it will be a similar situation: you'll get a really basic object with properties similar to the Python dictionary.
This is the reason why use of firebase-functions along with the Firebase SDK is the preferred environment for writing triggers from Firebase products. The developer experience is superior: it does a bunch of convenient work for you. The downside is that you have to pay for the cost of the Firebase Admin SDK to load and initialize on cold start.
Note that it might be possible for you to parse the event and create your own convenience objects using the Firebase Admin SDK for Python.
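To sketch that idea (not an official pattern): the handler can work on the raw payload directly and, where snapshot-style helpers are needed, re-read the node with the firebase-admin package. The databaseURL, the path parsing, and the initialization shown below are assumptions, not part of the trigger contract:

import firebase_admin
from firebase_admin import db

# Initialize once per instance; the databaseURL is a placeholder
firebase_admin.initialize_app(options={
    "databaseURL": "https://<your-project>.firebaseio.com"
})

def handle_event(data, context):
    before = data.get("data")   # value before the change
    delta = data.get("delta")   # the changed portion

    # Emulate numChildren() by inspecting the plain dict...
    num_children = len(delta) if isinstance(delta, dict) else 0

    # ...or re-read the node for its current state. For RTDB triggers,
    # context.resource looks like "projects/_/instances/<db>/refs/<path>".
    ref_path = context.resource.split("/refs/", 1)[-1]
    current = db.reference(ref_path).get()
    print(f"children in delta: {num_children}, current value: {current}")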

Azure Python SDK to get usage details - UsageDetailsOperations Class

I am new to Python. I need to get the usage details using the Python SDK. I am able to do this using the Usage Details REST API, but unable to do so using the SDK.
I am trying to use the azure.mgmt.consumption.operations.UsageDetailsOperations class. The official docs for UsageDetailsOperations
https://learn.microsoft.com/en-us/python/api/azure-mgmt-consumption/azure.mgmt.consumption.operations.usage_details_operations.usagedetailsoperations?view=azure-python#list-by-billing-period
specify four parameters to create the object (i.e. client: Client for service requests, config: Configuration of service client, serializer: An object model serializer, deserializer: An object model deserializer).
Out of these parameters I only have the client.
I need help understanding how to get the other three parameters, or is there another way to create the UsageDetailsOperations object?
Or is there any other approach to get the usage details?
Thanks!
This class is not designed to be created manually; you need to create a consumption client, which will have an attribute "usages" that will be the class in question (instantiated correctly).
There are unfortunately no samples for consumption yet, but creating the client will be similar to creating any other client (see Network client creation for instance).
For consumption, what might help is the tests, since they give some idea of scenarios:
https://github.com/Azure/azure-sdk-for-python/blob/fd643a0/sdk/consumption/azure-mgmt-consumption/tests/test_mgmt_consumption.py
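In the meantime, a rough sketch of what client creation and a usage-details call could look like, assuming a recent azure-mgmt-consumption release with azure-identity for credentials and a placeholder subscription id; attribute and method names can vary between SDK versions:

from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
credential = DefaultAzureCredential()

# The operation classes are instantiated for you as attributes of the client,
# so you never build UsageDetailsOperations yourself.
client = ConsumptionManagementClient(credential, subscription_id)

scope = f"/subscriptions/{subscription_id}"
for detail in client.usage_details.list(scope):
    print(detail.as_dict())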
If you're new to Azure and Python, you might want to do this quickstart:
https://learn.microsoft.com/en-us/azure/python/python-sdk-azure-get-started
Feel free to open an issue in the main Python repo, asking for more documentation about this client (this will help prioritize it):
https://github.com/Azure/azure-sdk-for-python/issues
(I'm working at Microsoft in the Python SDK team).

Obtain EC2 instance IP from AWS lambda function and use requests library

I am writing a lambda function on Amazon AWS Lambda. It accesses the URL of an EC2 instance, on which I am running a web REST API. The lambda function is triggered by Alexa and is coded in the Python language (python3.x).
Currently, I have hard coded the URL of the EC2 instance in the lambda function and successfully ran the Alexa skill.
I want the Lambda function to automatically obtain the IP from the EC2 instance, which keeps changing whenever I start the instance. This would ensure that I don't have to go into the code and hard code the URL each time I start the EC2 instance.
I stumbled upon a similar question on SO, but it was unanswered. However, there was a reply which indicated updating IAM roles. I have already created IAM roles for other purposes before, but I am still not used to it.
Is this possible? Will it require managing of security groups of the EC2 instance?
Do I need to set some permissions/configurations/settings? How can the lambda code achieve this?
Additionally, I pip installed the requests library on my system and tried uploading a .zip file with the structure:
REST.zip/
    requests/   (the requests library folder)
    index.py
I am currently using the urllib library instead. When I use zip files for my code upload (I currently edit code inline), it can't even access the index.py file to run the code.
You could do it using boto3, but I would advise against that architecture. A better approach would be to use a load balancer (even if you only have one instance), and then use the CNAME record of the load balancer in your application (this will not change for as long as the LB exists).
An even better way, if you have access to your own domain name, would be to create a CNAME record and point it to the address of the load balancer. Then you can happily use the DNS name in your Lambda function without fear that it would ever change.
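For completeness, the boto3 route mentioned at the start of this answer could look roughly like this; the execution role needs ec2:DescribeInstances permission, the instance ID and endpoint path are placeholders, and the standard-library urllib sidesteps the requests packaging issue:

import urllib.request
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Look up the current public IP of the (hypothetical) instance
    response = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
    instance = response["Reservations"][0]["Instances"][0]
    ip = instance.get("PublicIpAddress")  # only present while the instance is running

    # Call the REST API on the instance with the standard library
    with urllib.request.urlopen(f"http://{ip}/api/health") as resp:  # hypothetical endpoint
        return resp.read().decode()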
