Where does zappa upload environment variables to? - python

tl;dr
Environment variables set in zappa_settings.json are not uploaded as environment variables to AWS Lambda. Where do they go?
ts;wm
I have a Lambda function configured, deployed and managed using the Zappa framework. In zappa_settings.json I have set a number of environment variables. These variables are definitely present, as my application runs successfully; however, when I inspect the Lambda function's environment variables in the console or via the AWS CLI, I see that no environment variables have been uploaded to the Lambda function itself.
Extract from zappa_settings.json:
{
    "stage-dev": {
        "app_function": "project.app",
        "project_name": "my-project",
        "runtime": "python3.7",
        "s3_bucket": "my-project-zappa",
        "slim_handler": true,
        "environment_variables": {
            "SECRET": "mysecretvalue"
        }
    }
}
Output of aws lambda get-function-configuration --function-name my-project-stage-dev:
{
    "Configuration": {
        "FunctionName": "my-project-stage-dev",
        "FunctionArn": "arn:aws:lambda:eu-west-1:000000000000:function:my-project-stage-dev",
        "Runtime": "python3.7",
        "Role": "arn:aws:iam::000000000000:role/lambda-execution-role",
        "Handler": "handler.lambda_handler",
        "CodeSize": 12333025,
        "Description": "Zappa Deployment",
        "Timeout": 30,
        "MemorySize": 512,
        "LastModified": "...",
        "CodeSha256": "...",
        "Version": "$LATEST",
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "RevisionId": "..."
    },
    "Code": {
        "RepositoryType": "S3",
        "Location": "..."
    }
}
Environment is absent from the output despite the variables being set in zappa_settings.json, and the AWS documentation indicates that it should be included when present; checking in the console confirms that no environment variables are set on the function. I want to know where Zappa uploads the environment variables to and, if possible, why it does this instead of using Lambda's built-in environment.
AWS CLI docs:
https://docs.aws.amazon.com/cli/latest/reference/lambda/get-function-configuration.html

environment_variables are saved into the zappa_settings.py file that is generated when creating a package for deployment (run zappa package STAGE and explore the archive), and they are then set dynamically as environment variables by handler.py modifying os.environ.
To set native AWS Lambda environment variables you need to use aws_environment_variables instead.
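For illustration, the runtime pattern for environment_variables looks roughly like the sketch below. This is simplified pseudocode of the approach, not Zappa's actual handler, and the settings dict is inlined rather than imported from the generated zappa_settings.py; it shows why the values are visible to your application but never appear in the Lambda configuration:

# Simplified sketch of the pattern (not Zappa's actual handler code):
# settings packaged into the archive are copied into os.environ at runtime.
import os

# In a real Zappa package these values live in the generated zappa_settings.py
# inside the archive; they are inlined here for the sake of the example.
PACKAGED_ENVIRONMENT_VARIABLES = {
    "SECRET": "mysecretvalue",
}

def lambda_handler(event, context):
    # Make the packaged values available to application code that reads
    # os.environ["SECRET"], even though the Lambda function configuration
    # itself has no environment variables set.
    for key, value in PACKAGED_ENVIRONMENT_VARIABLES.items():
        os.environ[key] = value
    # ... dispatch the event to the WSGI app (project.app) here ...
    return {"statusCode": 200}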

Related

Is it possible to send a message out to Log Stream (App Insights Logs) Inside Azure Function App

I feel like it must be possible, but I've yet to find an answer.
In my Function App I navigate to the Log Stream, then click the drop-down arrow and select Application Insights Logs.
In that view there's a log entry with an [Information] tag. I thought maybe I could do something like this in the Azure Function that's running my Python script:
import logging
logging.info("Hello?")
However, I'm not able to get messages to show up in those logs. How do I actually achieve this? If there's a different place where logs created with logging.info() show up, I'd love to know that as well.
host.json file:
{
    "version": "2.0",
    "logging": {
        "applicationInsights": {
            "samplingSettings": {
                "isEnabled": true,
                "excludedTypes": "Request"
            }
        }
    },
    "logLevel": {
        "default": "Information",
        "Host.Results": "Information",
        "Function": "Information",
        "Host.Aggregator": "Information"
    },
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[2.*, 3.0.0)"
    },
    "extensions": {
        "queues": {
            "batchSize": 2,
            "maxDequeueCount": 2
        }
    },
    "functionTimeout": "00:45:00"
}
I believe there is no different place where the log info is written; instead, you need to adjust the log levels in host.json accordingly to get the different types of logs.
I verified information-level logging with the following workaround:
In VS Code, I created an Azure Functions project with the Python stack.
I added logging.info(f"Calling Activity Function") to the activity function code, as sketched below.
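A minimal illustrative version of such an activity function is shown below; the function name and signature are assumed for the example rather than taken from the original screenshot:

# Activity/__init__.py - minimal illustrative sketch, not the original code.
import logging

def main(name: str) -> str:
    # Written at the Information level, so it appears in the Application
    # Insights / Log Stream output when the default "Information" log level
    # in host.json is in effect.
    logging.info(f"Calling Activity Function")
    return f"Hello {name}!"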
This is the host.json code by default:
"logging": {
    "applicationInsights": {
        "samplingSettings": {
            "isEnabled": true,
            "excludedTypes": "Request"
        }
    }
}
After running this durable function, the information-level message shows up in the logs.
Please refer to this workaround, where I have given information about logging levels and about optimizing Application Insights logs for an Azure Python Function.

Azure functions app keeps returning 401 unauthorized

I have created an Azure Function based on a custom image (Docker) using VS Code.
I used the deployment feature of VS Code to deploy it to Azure and everything was fine.
My function.json file specifies anonymous auth level:
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        }
    ]
}
Why am I still getting the 401 unauthorized error?
Thanks
Amit
I changed my authLevel from function to anonymous and it finally worked!
The methods below can fix 4XX errors in a function app:
Make sure you add all the values from the local.settings.json file to Application settings (Function App -> Configuration -> Application Settings).
Check CORS in your function app. Try adding "*" and saving it, then reload the function app and try to run it again.
(Any request made against a storage resource when CORS is enabled must either have a valid authorization header or must be made against a public resource.)
When you make your request to your function, you may need to pass an authorization header with key 'x-functions-key' and value equal to either your default key for all functions (Function App > App Keys > default in the Azure portal) or a key specific to that function (Function App > Functions > [specific function] > Function Keys > default in Azure Portal).
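For example, a call that supplies the key as a header might look like the following sketch; the URL and key are placeholders, not values from the question:

# Illustrative sketch: calling the function with a function key in the
# x-functions-key header. URL and key values are placeholders.
import requests

FUNCTION_URL = "https://<your-function-app>.azurewebsites.net/api/<your-function>"
FUNCTION_KEY = "<default or function-specific key from the Azure portal>"

response = requests.post(
    FUNCTION_URL,
    headers={"x-functions-key": FUNCTION_KEY},
    json={"name": "test"},
)
print(response.status_code, response.text)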

Python Azure Function on Premium Plan: Publishing / deployment of functions fails…

I encountered multiple issues regarding the publishing / deployment of functions when using Python Azure Functions running on Linux and the Premium Plan. The following are options for what can be done when publishing fails, or when it succeeds but the function (on Azure) does not reflect what should have been published / deployed.
The following options may also work for non-Linux / non-Python / non-Premium Plan Function (Apps).
Wait a few minutes after publishing so that the Function (App) reflects the update
Restart the Function App
Make sure that the following AppSettings are set under "Configuration" (please adjust to your current context)
[
    {
        "name": "AzureWebJobsStorage",
        "value": "<KeyVault reference to storage account connection string>",
        "slotSetting": false
    },
    {
        "name": "ENABLE_ORYX_BUILD",
        "value": "true",
        "slotSetting": false
    },
    {
        "name": "FUNCTIONS_EXTENSION_VERSION",
        "value": "~3",
        "slotSetting": false
    },
    {
        "name": "FUNCTIONS_WORKER_RUNTIME",
        "value": "python",
        "slotSetting": false
    },
    {
        "name": "SCM_DO_BUILD_DURING_DEPLOYMENT",
        "value": "true",
        "slotSetting": false
    },
    {
        "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
        "value": "<storage account connection string>",
        "slotSetting": false
    },
    {
        "name": "WEBSITE_CONTENTSHARE",
        "value": "<func app name>",
        "slotSetting": false
    }
]
When using Azure DevOps Pipelines, use the standard Azure Function task (https://github.com/Microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureFunctionAppV1/README.md) to publish the function and to set the AppSettings.
This task also works for Python even if it does not explicitly provide the option under "Runtime Stack" (just leave it empty).
Make sure to publish the correct files (if you publish via ZipDeploy the zip folder should contain host.json at its root)
You can check whether the correct files have been published by inspecting the wwwroot folder via the Azure Portal -> Function App -> Development Tools -> SSH
cd /home/site/wwwroot
dir
Check the deployment logs
Either via the link displayed as output during the deployment
Should look like "https://func-app-name.net/api/deployments/someid/log"
Via Development Tools -> Advanced Tools
If the steps so far did not help, it may help to SSH into the host via the portal (Development Tools -> SSH) and delete
# The deployments folder (and then republish)
cd /home/site
rm -r deployments
# The wwwroot folder (and then republish)
cd /home/site
rm -r wwwroot
Delete the Function App resource and redeploy it

Pass arguments to Python running in Docker container on AWS Fargate

Passing arguments to a Docker container running a Python script can be done like so
docker run my_script:0.1 --arg1 val --arg2 val ...
I can't seem to figure out how to pass those arguments when running the container on AWS Fargate (perhaps it doesn't work?)
In ECS, you run your containers as tasks. So you first register a task definition that includes your container definition, and then you can run the task, passing your arguments as environment variables.
Here is an example task definition, myscript-task.json:
{
    "containerDefinitions": [
        {
            "name": "myscript",
            "image": "12345123456.dkr.ecr.us-west-2.amazonaws.com/myscript:0.1",
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/fargate",
                    "awslogs-region": "us-west-2",
                    "awslogs-stream-prefix": "myscript"
                }
            }
        }
    ],
    "family": "myscript",
    "networkMode": "awsvpc",
    "executionRoleArn": "arn:aws:iam::12345123456:role/ecsTaskExecutionRole",
    "cpu": "256",
    "memory": "512",
    "requiresCompatibilities": [
        "FARGATE"
    ]
}
You register the task definition in the console or with the register-task-definition command:
aws ecs register-task-definition --cli-input-json file://myscript-task.json
You can now run the task with the ecs run-task command. Using the overrides parameter, you can run the same task with different values.
aws ecs run-task --cluster testCluster --launch-type FARGATE --task-definition myscript:1 --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abcdec237054abc],assignPublicIp=ENABLED}' --overrides file://overrides.json
A sample overrides.json:
{
    "containerOverrides": [{
        "name": "myscript",
        "environment": [{
            "name": "VAR1",
            "value": "valueOfVar1"
        }]
    }]
}
Now you can access the variable in your Python script. A sample Python script printing the passed environment variable:
import os

# VAR1 was injected via the "environment" entry in containerOverrides above
print(os.environ['VAR1'])
With the log driver configured, you will be able to see the output in CloudWatch Logs.
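If you prefer to launch the task from Python rather than the AWS CLI, the same run-task call can be made with boto3. A sketch reusing the cluster, subnet, and override values from the CLI example above (they remain placeholders for your own environment):

# Sketch of launching the Fargate task from Python with boto3; the cluster
# name, subnet ID, and override value mirror the CLI example above.
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

response = ecs.run_task(
    cluster="testCluster",
    launchType="FARGATE",
    taskDefinition="myscript:1",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abcdec237054abc"],
            "assignPublicIp": "ENABLED",
        }
    },
    overrides={
        "containerOverrides": [
            {
                "name": "myscript",
                "environment": [{"name": "VAR1", "value": "valueOfVar1"}],
            }
        ]
    },
)
print(response["tasks"][0]["taskArn"])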
You can use container definition parameters in the ECS task definition to pass runtime arguments.
The command parameter maps to the COMMAND parameter in docker run.
"command": [
    "--arg1",
    "val",
    "--arg2",
    "val"
],
It is also possible to pass parameters as environment variables.
"environment": [
    {
        "name": "LOG_LEVEL",
        "value": "debug"
    }
],
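On the script side, arguments passed via the command parameter arrive exactly as they would under docker run, so they can be consumed with argparse; the sketch below combines that with an os.environ lookup for the LOG_LEVEL variable shown above (the default value is an assumption):

# Minimal sketch of a script consuming both styles: arguments supplied via
# the task definition's "command" (as with docker run) and a variable
# supplied via its "environment" block.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--arg1")
parser.add_argument("--arg2")
args = parser.parse_args()

log_level = os.environ.get("LOG_LEVEL", "info")  # "debug" when set as above
print(args.arg1, args.arg2, log_level)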

Dynamically change dockerrun.aws.json image tag on deploy

Is there a way to dynamically get the version tag from my __init__.py file and append it to the dockerrun.aws.json image name? For example:
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "dockerkey",
        "Key": "mydockercfg"
    },
    "Image": {
        "Name": "comp/app:{{version}}",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "80"
        }
    ]
}
This way, when I do eb deploy, it will build the correct version. At the moment I have to keep modifying the JSON file with each deploy.
I also stumbled upon this last year, and AWS support stated there is no such feature at hand. I ended up writing a script that receives the Docker tag as a parameter and composes the dockerrun.aws.json file on the fly with the correct tag name.
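A minimal Python sketch of such a script is below; it assumes the version lives in a __version__ string in __init__.py and that a Dockerrun.aws.template.json containing a {{version}} placeholder sits next to it, so the package path, the attribute name, and the template filename are all assumptions for illustration:

# Sketch: render Dockerrun.aws.json from a template by substituting the
# version read from __init__.py. The package path, the __version__ attribute
# and the template filename are illustrative assumptions.
import re

with open("myapp/__init__.py") as f:
    # Expects a line like: __version__ = "1.2.3"
    version = re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', f.read()).group(1)

with open("Dockerrun.aws.template.json") as f:
    template = f.read()

with open("Dockerrun.aws.json", "w") as f:
    f.write(template.replace("{{version}}", version))

print(f"Wrote Dockerrun.aws.json for comp/app:{version}")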
I've written a bash script which runs
eb deploy
Before it executes that command I change a symlink depending on whether I'm running production or staging. For example:
ln -sf ../ebs/Dockerrun.aws.$deploy_type.json ../ebs/Dockerrun.aws.json
