Get environment name in AWS Lambda function in Python

I want to programmatically get the environment name (dev/qa/prod) where my AWS Lambda function is running or being executed. I don't want to provide it as part of my environment variables.
How do we do that?

In AWS, everything is in "Production". That is, there is no concept of "AWS for Dev" or "AWS for QA".
It is up to you to create resources that you declare to be Dev, QA or Production. Some people do this in different AWS Accounts, or at least use different VPCs for each environment.
Fortunately, you mention that "each environment in AWS has a different role".
This means that the AWS Lambda function can call get_caller_identity() to obtain "details about the IAM user or role whose credentials are used to call the operation":
import boto3

def lambda_handler(event, context):
    sts_client = boto3.client('sts')
    print(sts_client.get_caller_identity())
It returns:
{
    "UserId": "AROAJK7HIAAAAAJYPQN7E:My-Function",
    "Account": "111111111111",
    "Arn": "arn:aws:sts::111111111111:assumed-role/my-role/My-Function",
    ...
}
Thus, you could extract the name of the Role being used from the Arn.
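For example, a minimal sketch, assuming the role name encodes the environment (e.g. "my-role-dev"; that naming convention is an assumption, not from the answer):
import boto3

def lambda_handler(event, context):
    arn = boto3.client('sts').get_caller_identity()['Arn']
    # e.g. arn:aws:sts::111111111111:assumed-role/my-role-dev/My-Function
    role_name = arn.split('/')[1]
    # Map the role name to an environment label (hypothetical naming convention)
    env = next((e for e in ('dev', 'qa', 'prod') if e in role_name.lower()), 'unknown')
    return {'environment': env}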

Your Lambda function can do one of the following:
def handler_name(event, context): - read the data from the event dict. The caller will have to add this argument (since I don't know what the Lambda trigger is, I can't tell whether this is a good solution)
Read the data from S3 (or other storage, like a DB)
Read the data from AWS Systems Manager Parameter Store (a sketch follows below)
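A minimal sketch for the Parameter Store option; the parameter name and permission are assumptions for illustration:
import boto3

ssm = boto3.client('ssm')

def handler_name(event, context):
    # Assumes a parameter named /myapp/environment exists and the execution
    # role has ssm:GetParameter permission on it
    env = ssm.get_parameter(Name='/myapp/environment')['Parameter']['Value']
    print(env)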
I don't want to provide it as part of my environment variables
Why?


How to get logs from Docker image running in AWS Lambda function?

I'm trying to debug an AWS Lambda function that's using a Docker image, as described here. I'm using the stock AWS Python image: public.ecr.aws/lambda/python:3.8
I'm able to follow the steps described in the above link to test my function locally and it works just fine:
docker run -p 9000:8080 hello-world, followed by curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}' in another Terminal window, properly performs the function I'm expecting. However, once this is running in Lambda, after successfully tagging the image and pushing it to AWS ECR, the function doesn't seem to be working and I'm not able to find any logs to debug the failed/missing executions.
I'm at a bit of a loss in terms of where these logs are stored, and/or what configuration I may be missing to get these logs into CloudWatch or something similar. Where can I expect to find these logs to further debug my lambda function?
So, there are no technical differences between working with Docker images in Lambda and deploying the code as a zip or from S3. As for the logs, according to the AWS documentation (this is the description directly from the docs):
AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with the Lambda function, which is named /aws/lambda/<function name>.
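For instance, even a bare print() in the handler shows up in that log group, so a minimal sanity check could look like this (a sketch, not the asker's code):
def lambda_handler(event, context):
    # Anything printed here ends up in /aws/lambda/<function name> in CloudWatch Logs
    print(f"Received event: {event}")
    return {"statusCode": 200}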
So, even the most basic code would have some sort of logging within your Lambda. My suggestions for troubleshooting in this case:
1 - Go to your Lambda function and try to access the CloudWatch logs directly from the console. Make sure to confirm the region in which your function was deployed.
2 - If the logs exist (the log group for the Lambda function exists), then check whether your code raises any exceptions.
3 - If there are errors indicating that the CloudWatch log group doesn't exist or that the function's log group doesn't exist, then check your Lambda's configuration directly in the console or, if you are using a framework like Serverless, the code structure.
4 - Finally, if everything seems OK, this usually comes down to one simple thing: user permissions on your account or the role permissions of your Lambda function (which is most often the case in these situations).
One thing that you should check is the basic execution role generated for your Lambda, which ensures that it can create new log groups.
An example policy would be something like this (you can also manually attach the CloudWatch Logs policy; the effect should be similar):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:XXXXXXXXXX:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:XXXXXXXXXX:log-group:/aws/lambda/<YOUR-LAMBDA-FUNCTION>:*"
            ]
        }
    ]
}
More on this issue can be found here:
https://aws.amazon.com/pt/premiumsupport/knowledge-center/lambda-cloudwatch-log-streams-error/
I say this because I have frequently used Docker for code dependencies with Lambda, based on the first tutorial from when this feature was introduced:
https://aws.amazon.com/pt/blogs/aws/new-for-aws-lambda-container-image-support/
Hopefully this was helpful!
Feel free to leave additional comments.
For a special case: when using the Serverless framework, I had to use the following to get the logs into CloudWatch.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event: dict, context: dict) -> dict:
    logger.info(json.dumps(event))
    # ...
    return {'statusCode': 200, 'body': json_str}
In my case, the Lambda function runs inside an ECR Docker container.

How to store secrets for a Python Flask App on EC2

I have a simple Flask App that uses Stripe running on an EC2 instance.
I followed this guide to get it running: https://medium.com/techfront/step-by-step-visual-guide-on-deploying-a-flask-application-on-aws-ec2-8e3e8b82c4f7
I export the keys as environment variables and then in the code read them.
import os

stripe_keys = {
    "secret_key": os.environ["STRIPE_SECRET_KEY"],
    "publishable_key": os.environ["STRIPE_PUBLISHABLE_KEY"],
    "webhook_secret": os.environ["STRIPE_WEBHOOK_KEY"],
}
However, this requires me to SSH into the EC2 machines to set the variables. Is there a better approach?
I'd recommend AWS Systems Manager Parameter Store:
maintain your keys in SSM Parameter Store and choose the SecureString type so your keys are encrypted at rest
give your EC2 instance's IAM role enough permissions to fetch and decrypt your SecureString stored in SSM Parameter Store
make sure your EC2 instance can reach the Internet, as SSM Parameter Store is an Internet-facing service
in your code, use the AWS SDK to fetch and decrypt your SecureString stored in SSM Parameter Store (a minimal sketch follows below)
I reckon you're writing in Python, so https://nqbao.medium.com/how-to-use-aws-ssm-parameter-store-easily-in-python-94fda04fea84
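A minimal sketch with boto3 (the parameter names here are illustrative, matching the layout used in the next answer, and the region is assumed to be configured, e.g. via AWS_DEFAULT_REGION):
import boto3

ssm = boto3.client('ssm')  # assumes the region is configured for the instance

def get_secret(name: str) -> str:
    # WithDecryption=True is required for SecureString parameters
    return ssm.get_parameter(Name=name, WithDecryption=True)['Parameter']['Value']

stripe_keys = {
    "secret_key": get_secret("/stripe/secret_key"),
    "publishable_key": get_secret("/stripe/publishable_key"),
    "webhook_secret": get_secret("/stripe/webhook_secret"),
}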
PS: if you use CloudFormation or other Infra-as-Code tools to provision your EC2 instances, most IaC tools support injecting SSM Parameter Store values as env vars during deployment. With this approach, your code can stay as is, and your EC2 instance doesn't need extra permissions.
As Chris Chen pointed out, you can use AWS Parameter Store and on top of it: AWStanding
Suppose you stored your variables like this in Parameter store:
"/stripe/secret_key"
"/stripe/publishable_key"
"/stripe/webhook_secret"
Then you can write code like this:
import os
from awstanding.parameter_store import load_path

load_path('/stripe')

# Now you can access your variables exactly like this:
os.environ["STRIPE_SECRET_KEY"]
os.environ["STRIPE_PUBLISHABLE_KEY"]
os.environ["STRIPE_WEBHOOK_SECRET"]

# or store them in settings variables:
STRIPE_SECRET_KEY = os.environ["STRIPE_SECRET_KEY"]
Also, it automatically handles any encrypted key.

How to use boto3 inside an EC2 instance

I have a Python app running in a Docker container on an EC2 instance managed by ECS (well, that's what I would like...). However, to use services like SSM with boto3, I need to know the region where the instance is running. I don't need any credentials, as I use a role for the instance that grants access to the service, so a default Session is OK.
I know that it is possible to fetch the region with a curl to the dynamic instance metadata, but is there a more elegant way to instantiate a client with a region name (or credentials) inside an EC2 instance?
I ran through the boto3 documentation and found:
Note that if you've launched an EC2 instance with an IAM role configured, there's no explicit configuration you need to set in boto3 to use these credentials. Boto3 will automatically use IAM role credentials if it does not find credentials in any of the other places listed above.
So why do I need to pass the region name for the SSM client, for example? Is there a workaround?
Region is a required parameter for the SSM client so that it knows which region it should be interacting with. It does not try to assume one, even if you're in the AWS cloud.
If you want it to be assumed in your container, the simplest way to implement this is to use the AWS environment variables.
In your container definition, use the environment attribute to specify a variable with the name AWS_DEFAULT_REGION and the value of your current region.
By doing this you will not have to specify a region in the SDK within the container.
See this example, which uses the environment attribute, for more information.
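With AWS_DEFAULT_REGION set on the container, a client can then be created without an explicit region_name (a minimal sketch):
import boto3

# boto3 resolves the region from the AWS_DEFAULT_REGION environment variable,
# and the credentials come from the task/instance role
ssm = boto3.client('ssm')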
Here is how to retrieve a parameter from the Parameter Store using the instance profile credentials:
#!/usr/bin/env python3
from ec2_metadata import ec2_metadata
import boto3
session = boto3.Session(region_name=ec2_metadata.region)
ssm = session.client('ssm')
parameter = ssm.get_parameter(Name='/path/to/a/parameter', WithDecryption=True)
print(parameter['Parameter']['Value'])
Replace the client section with the service of your choice and you should be set.

Invoke Google Cloud Function with Parameters

I am trying to pass a parameter "env" while triggering/invoking a Cloud Function over HTTP using Cloud Scheduler.
I am using a service account that has sufficient permissions for invoking functions and admin rights on Scheduler.
Passing the parameter works when the function allows unauthenticated invocation, but if the function is deployed with authentication required, it gives an error: { "status": "UNAUTHENTICATED" ... }
It is worth noting that when I changed the function code so that it does not require a parameter, it worked successfully with the same service account.
So, it must be an issue with passing parameters.
(Screenshot of the scheduler job setup omitted.)
The way I retrieve the parameter "env" in the function is:
def fetchtest(request):
    env = request.args.get('env')

Access Blob storage without binding?

I'm using a queue trigger to pass in some data about a job that I want to run with Azure Functions (I'm using Python). Part of the data is the name of a file that I want to pull from blob storage. Because of this, declaring a file path/name in an input binding doesn't seem like the right direction, since the function won't have the file name until it gets the queue trigger.
One approach I've tried is to use the azure-storage sdk, but I'm unsure of how to handle authentication from within the Azure Function.
Is there another way to approach this?
In function.json, the blob input binding can refer to properties from the queue payload. The queue payload needs to be a JSON object.
Since this is function.json, it works for all languages.
See official docs at https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings
For example, in your function.json (the binding also needs a direction and a storage connection):
{
    "name": "imageSmall",
    "type": "blob",
    "direction": "in",
    "path": "container/{filename}",
    "connection": "AzureWebJobsStorage"
}
And if your queue message payload is:
{
    "filename": "myfilename"
}
Then the {filename} token in the blob's path expression will get substituted.
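On the Python side, the bound blob then arrives as an input-stream parameter named after the binding (a rough sketch; the queue trigger binding itself is assumed to be declared in the same function.json):
import azure.functions as func

def main(msg: func.QueueMessage, imageSmall: func.InputStream):
    # imageSmall is the blob selected by the {filename} token from the queue payload
    data = imageSmall.read()
    print(f"Read {len(data)} bytes")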
Typically, you store connection strings / account keys in the App Settings of the Function App, and then read them by accessing environment variables. I haven't used Python in Azure, but I believe that looks like:
import os
connection = os.environ['ConnectionString']
I've found an example of a Python function which does what you ask for: queue trigger + blob operation.
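As a sketch of that pattern with the newer azure-storage-blob package (v12-style API; the ConnectionString setting name and the container name are assumptions):
import os
from azure.storage.blob import BlobServiceClient

def fetch_blob(filename: str) -> bytes:
    # The connection string is read from the Function App's application settings;
    # the container name 'container' is a placeholder
    service = BlobServiceClient.from_connection_string(os.environ['ConnectionString'])
    blob_client = service.get_blob_client(container='container', blob=filename)
    return blob_client.download_blob().readall()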
Storing secrets can (also) be done using App Settings.
In Azure, go to your Azure Functions App Service, then click "Application Settings" and scroll down to the "App Settings" list. This list consists of key-value pairs. Add your key, for example MY_CON_STR, with the actual connection string as the value.
Don't forget to click Save at this point.
Now, in your application (your Function for this example), you can load the stored value using its key. For example, in python, you can use:
os.environ['MY_CON_STR']
Note that since the setting isn't saved locally, you have to execute the function from within Azure. Unfortunately, Azure Functions applications do not contain a web.config file.
