Terraform azurerm_function_app_function deleting itself after deployment - python

TL;DR: azurerm_function_app_function deploys fine with terraform apply, but the function disappears from the Azure Portal afterwards.
I have been trying to deploy an Azure Function via Terraform for months now and have not had any luck with it.
terraform apply runs fine. I then go into the Azure Portal, look at the function app's functions, and this function is there. However, when I refresh the blade the function disappears. I have built the same function and deployed it via VS Code with no issues, but with Terraform there is no luck.
resource "azurerm_linux_function_app" "main" {
name = "tf-linux-app"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
service_plan_id = azurerm_service_plan.main.id
storage_account_name = azurerm_storage_account.main.name
storage_account_access_key = azurerm_storage_account.main.primary_access_key
site_config {
app_scale_limit = 200
elastic_instance_minimum = 0
application_stack {
python_version = "3.9"
}
}
app_settings = {
"${azurerm_storage_account.main.name}_STORAGE" = azurerm_storage_account.main.primary_connection_string
}
client_certificate_mode = "Required"
identity {
type = "SystemAssigned"
}
}
resource "azurerm_function_app_function" "main" {
name = "tf-BlobTrigger"
function_app_id = azurerm_linux_function_app.main.id
language = "Python"
file {
name = "__init__.py"
content = file("__init__.py")
}
test_data = "${azurerm_storage_container.container1.name}/{name}"
config_json = jsonencode({
"scriptFile" : "__init__.py",
"disabled": false,
"bindings" : [
{
"name" : "myblob",
"type" : "blobTrigger",
"direction" : "in",
"path" : "${azurerm_storage_container.container1.name}/{name}",
"connection" : "${azurerm_storage_container.container1.name}_STORAGE"
}
]
})
}
As far as the Python script goes, I'm literally just using the demo found here that Azure provides.
__init__.py:
import logging
import azure.functions as func

def main(myblob: func.InputStream):
    logging.info('Python Blob trigger function processed %s', myblob.name)
I ran terraform apply and expected the function to appear and stay there, but it appears and then disappears. I also tried deploying a C# function to a Windows function app; that worked as expected, but I now need the script in Python.

Related

Timeout errors when testing Azure function app

We are using Azure Functions in a Python runtime environment to take latitude/longitude tuples from a Snowflake database and return the respective countries. We also want to convert any non-English country names into English.
We initially found that although the script would show output in the terminal while testing on Azure, it would soon return a 503 error (although the script continues to run at this point). If we cancelled the script it would show as a success in the monitor screen of the Azure portal, but leaving the script to run to completion would result in the script failing. We decided (partially based on this post) that this was due to the runtime exceeding the maximum HTTP response time allowed. To combat this we tried a number of solutions.
First we extended the function timeout value in the host.json file:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  },
  "functionTimeout": "00:10:00"
}
We then modified our script to hand the work off to a queue by adding a queue output binding, changing the main function signature to
def main(req: func.HttpRequest, msg: func.Out[func.QueueMessage]) -> func.HttpResponse:
in the main .py script. We also then modified the function.json file to
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "msg",
      "queueName": "processing",
      "connection": "QueueConnectionString"
    }
  ]
}
and the local.settings.json file to
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "{AzureWebJobsStorage}",
    "QueueConnectionString": "<Connection String for Storage Account>",
    "STORAGE_CONTAINER_NAME": "testdocs",
    "STORAGE_TABLE_NAME": "status"
  }
}
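With the queue output binding and settings above in place, the HTTP function itself only needs to enqueue the work and return; a minimal sketch (using func.Out[str] for simplicity, and assuming a separate queue-triggered function does the long-running processing):

import logging
import azure.functions as func

def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    # Enqueue the request body onto the "processing" queue and return immediately,
    # so the HTTP response goes out long before the platform's ~230 second HTTP limit.
    logging.info('Python HTTP trigger function processed a request.')
    msg.set(req.get_body().decode() or "start")
    return func.HttpResponse("queued for processing", status_code=202)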
We also then added a check to see if the country name was already in English. The intention here was to cut down on calls to the translate function.
After each of these changes we redeployed to the function app and tested again, with the same result: the function runs and prints output to the terminal, but after a few seconds it shows a 503 error and eventually fails.
I can show a code sample but unfortunately cannot provide the tables.
from snowflake import connector
import pandas as pd
import pyarrow
from geopy.geocoders import Nominatim
from deep_translator import GoogleTranslator
from pprint import pprint
import langdetect
import logging
import azure.functions as func

def main(req: func.HttpRequest, msg: func.Out[func.QueueMessage]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    # Connecting string to Snowflake
    conn = connector.connect(
        user='<USER>',
        password='<PASSWORD>',
        account='<ACCOUNT>',
        warehouse='<WH>',
        database='<DB>',
        schema='<SCH>'
    )

    # Creating objects for Snowflake, Geolocation, Translate
    cur = conn.cursor()
    geolocator = Nominatim(user_agent="geoapiExercises")
    translator = GoogleTranslator(target='en')

    # Fetching weblog data to get the current latlong list
    fetchsql = "SELECT PAGELATLONG FROM <TABLE_NAME> WHERE PAGELATLONG IS NOT NULL GROUP BY PAGELATLONG;"
    logging.info(fetchsql)
    cur.execute(fetchsql)
    df = pd.DataFrame(cur.fetchall(), columns=['PageLatLong'])
    logging.info('created data frame')

    # Creating and Inserting the mapping into final table
    for index, row in df.iterrows():
        latlong = row['PageLatLong']
        location = geolocator.reverse(row['PageLatLong']).raw['address']
        logging.info('got addresses')

        city = str(location.get('state_district'))
        country = str(location.get('country'))
        countrycd = str(location.get('country_code'))
        logging.info('got countries')

        # detect language of country
        res = langdetect.detect_langs(country)
        lang = str(res[0]).split(':')[0]
        conf = float(str(res[0]).split(':')[1])
        if lang != 'en' and conf > 0.99:
            country = translator.translate(country)
            logging.info('translated non-english country names')

        insertstmt = "INSERT INTO <RESULTS_TABLE> VALUES('"+latlong+"','"+city+"','"+country+"','"+countrycd+"')"
        logging.info(insertstmt)
        try:
            cur.execute(insertstmt)
        except Exception:
            pass

    return func.HttpResponse("success")
If anyone has an idea what may be causing this issue, I'd appreciate any suggestions.
Thanks.
To resolve the timeout errors, you can try the following:
As suggested by MayankBargali-MSFT, you can try defining retry policies. Note that for triggers like HTTP and timer, retries don't resume on a new instance, so the maximum retry count is a best effort: in rare cases an execution can be retried more than the maximum, or, for HTTP and timer triggers, fewer times than the maximum. You can also navigate to Diagnose and solve problems to help find the root cause of the 503, as there can be multiple reasons for this error.
As suggested by ryanchill, the 503 issue is the result of high memory consumption exceeding the limits of the Consumption plan. The best fix for this issue is switching to a dedicated hosting plan, which provides more resources. However, if that isn't an option, reducing the amount of data being retrieved should be explored.
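If switching plans is not an option, one way to reduce the data retrieved per invocation is to page through the Snowflake results rather than loading every row at once. A rough sketch (not the asker's exact code; the batch size is a hypothetical value to tune):

from snowflake import connector

BATCH_SIZE = 500  # hypothetical; tune for your data volume and memory limits

conn = connector.connect(user='<USER>', password='<PASSWORD>', account='<ACCOUNT>',
                         warehouse='<WH>', database='<DB>', schema='<SCH>')
cur = conn.cursor()
cur.execute("SELECT PAGELATLONG FROM <TABLE_NAME> "
            "WHERE PAGELATLONG IS NOT NULL GROUP BY PAGELATLONG;")

while True:
    rows = cur.fetchmany(BATCH_SIZE)
    if not rows:
        break
    for (latlong,) in rows:
        # geocode, translate and insert one batch at a time
        # (same per-row logic as in the question above)
        pass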
References: https://learn.microsoft.com/en-us/answers/questions/539967/azure-function-app-503-service-unavailable-in-code.html , https://learn.microsoft.com/en-us/answers/questions/522216/503-service-unavailable-while-executing-an-azure-f.html , https://learn.microsoft.com/en-us/answers/questions/328952/azure-durable-functions-timeout-error-in-activity.html and https://learn.microsoft.com/en-us/answers/questions/250623/azure-function-not-running-successfully.html

Get context data from Lambda Authorizer (API Gateway)

I'm using an AWS Lambda authorizer to secure an API Gateway. The authorizer Lambda function is written in Python using this blueprint from AWS (https://github.com/awslabs/aws-apigateway-lambda-authorizer-blueprints/blob/master/blueprints/python/api-gateway-authorizer-python.py).
I added this code to the blueprint:
if event['authorizationToken'] == 'allow':
    policy.allowAllMethods()
else:
    policy.denyAllMethods()

# Finally, build the policy
authResponse = policy.build()

# new! -- add additional key-value pairs associated with the authenticated principal
# these are made available by APIGW like so: $context.authorizer.<key>
# additional context is cached
context = {
    'key': 'somevalue',  # $context.authorizer.key -> value
    'number': 1,
    'bool': True
}
# context['arr'] = ['foo'] <- this is invalid, APIGW will not accept it
# context['obj'] = {'foo':'bar'} <- also invalid

authResponse['context'] = context
return authResponse
However, in the Lambda function attached to the route I cannot find the context values from the authorizer. How can I get the values from context['key']?
The solution is to use Mapping Templates on the Integration Request. If you look at the route pipeline you will see that before reaching the Lambda function there is an "Integration Request" section (and also an Integration Response).
In the Integration Request you have the option to edit the input to the Lambda function via Mapping Templates.
So I created a new Mapping Template (using the "When there are no templates defined" option),
set the Content-Type to application/json,
and in the actual template used something like
#set($inputRoot = $input.path('$'))
{
    "key": "$context.authorizer.key"
}
Attention: the above template replaces the original request payload sent to the Lambda. That data is available in $inputRoot, and you can add it back to the mapped request using this format:
{
    "key": "$context.authorizer.key",
    "originalkey": $inputRoot.originalkey
}
With the help of the accepted answer I came up with this:
## See http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
## This template will pass through all parameters including path, querystring, header, stage variables, and context through to the integration endpoint via the body/payload
#set($inputRoot = $input.path('$'))
#set($authorizer = $context.authorizer)
#set($allParams = $input.params())
{
#foreach($key in $inputRoot.keySet())
  "$key" : "$util.escapeJavaScript($inputRoot.get($key))"
  #if($foreach.hasNext),#end
#end,
  "context" : {
    "params" : {
#foreach($type in $allParams.keySet())
#set($params = $allParams.get($type))
      "$type" : {
#foreach($paramName in $params.keySet())
        "$paramName" : "$util.escapeJavaScript($params.get($paramName))"
        #if($foreach.hasNext),#end
#end
      }
      #if($foreach.hasNext),#end
#end
    },
    "stage-variables" : {
#foreach($key in $stageVariables.keySet())
      "$key" : "$util.escapeJavaScript($stageVariables.get($key))"
      #if($foreach.hasNext),#end
#end
    },
#foreach($key in $context.keySet())
    "$key" : "$util.escapeJavaScript($context.get($key))"
    #if($foreach.hasNext),#end
#end,
    "authorizer": {
#foreach($key in $authorizer.keySet())
      "$key" : "$util.escapeJavaScript($authorizer.get($key))"
      #if($foreach.hasNext),#end
#end
    }
  }
}
Edit:
After tweaking around in API Gateway I found the "Use Lambda Proxy integration" toggle, which adds extra parameters (including the authorizer context) to the event object.
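With that option enabled the mapping template is not needed at all, because API Gateway surfaces the authorizer context on the event object. A minimal sketch of a hypothetical backend handler:

import json

def handler(event, context):
    # With Lambda Proxy integration, the authorizer context is available under
    # event["requestContext"]["authorizer"].
    authorizer_ctx = event.get("requestContext", {}).get("authorizer", {})
    return {
        "statusCode": 200,
        "body": json.dumps({"key": authorizer_ctx.get("key")}),
    }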

Azure Function: 500 internal server error in Run/Test mode

I want to test my Azure Function using the portal's Run/Test feature, but it is throwing a '500 internal server error'.
I am able to debug the same code in my local environment, but when I trigger the same code in the Azure portal it fails without any proper error logs.
This Azure Function reads JSON-format data from an Event Hub and writes it to blob storage. I am using Python for the Azure Function development.
Here is the code:
__init__.py:
from typing import List
import logging
import os
import azure.functions as func
from azure.storage.blob import BlobClient
import datetime
import json

storage_connection_string = os.getenv('storage_connection_string_FromKeyVault')
container_name = os.getenv('storage_container_name_FromKeyVault')
today = datetime.datetime.today()

def main(events: List[func.EventHubEvent]):
    for event in events:
        a = event.get_body().decode('utf-8')
        json.loads(a)
        logging.info('Python EventHub trigger processed an event: %s', a)
        logging.info(f' SequenceNumber = {event.sequence_number}')
        logging.info(f' Offset = {event.offset}')
        blob_client = BlobClient.from_connection_string(
            storage_connection_string,
            container_name,
            str(today.year) + "/" + str(today.month) + "/" + str(today.day) + "/" + str(event.sequence_number) + ".json"
        )
        blob_client.upload_blob(event.get_body().decode(), blob_type="AppendBlob")
local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<Endpoint1>",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "storage_connection_string_FromKeyVault": "<connectionString>",
    "storage_container_name_FromKeyVault": "<container_name>",
    "EventHubReceiverPolicy_FromKeyVault": "<Endpoint2>"
  }
}
function.json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "events",
      "direction": "in",
      "eventHubName": "pwo-events",
      "connection": "EventHubReceiverPolicy_FromKeyVault",
      "cardinality": "many",
      "consumerGroup": "$Default",
      "dataType": "binary"
    }
  ]
}
Please note that this error is only thrown when I click Run/Test in the portal; the same code runs fine after deployment.
The 500 error is not helpful for solving this problem; you need to find the specific error from the Azure Function. You can use Application Insights to get the detailed error, but the function must be configured with the corresponding Application Insights resource before you can view the logs in the portal.
So you need to configure Application Insights for your function app like this:
Then your function app will restart.
Of course, you can also go to Kudu to view the logs:
First, go to Advanced Tools, then click 'Go'.
After you get to Kudu, click Debug Console -> CMD -> LogFiles -> Application -> Functions -> yourtriggername. You will find the log file there.
If your app runs on a Linux OS, after going to Kudu just click 'Log stream' (this is not supported for the Consumption plan on Linux).
I had this problem and I found that the problem was with dependencies. Removing libraries that don't exist (or using Microsoft's documented way of bringing in dependencies) solved the issue.
Note the portal message: "Adding third-party dependencies in the Azure portal is currently not supported for Linux Consumption Function Apps."
If you need dependencies, you can refer to this Microsoft Document to solve the problem.

Updating Azure Container Registry Public Access IP with Python

I've tried looking online but could not find an answer, as the documentation (and the API) for the Azure Python SDK is just horrible.
I have a Container Registry on Azure with a list of allowed IPs for public access. I'd like to modify that list by adding a new IP using Python.
I'm not sure whether the API supports it or how to achieve this using ContainerRegistryManagementClient.
Couldn't agree more that the documentation (and the API) for the Azure Python SDK is just horrible :)
If you want to add a list of allowed IPs for public access to your Container Registry on Azure, just try the code below, which uses the REST API:
from azure.identity import ClientSecretCredential
import requests

TENANT_ID = ""
CLIENT_ID = ""
CLIENT_SECRET = ""
SUBSCRIPTION_ID = ""
GROUP_NAME = ""
REGISTRIES = ""

# your public ip list here
ALLOWED_IPS = [
    {"value": "167.220.255.1"},
    {"value": "167.220.255.2"}
]

clientCred = ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
authResp = clientCred.get_token("https://management.azure.com/.default")

requestURL = ('https://management.azure.com/subscriptions/' + SUBSCRIPTION_ID
              + '/resourceGroups/' + GROUP_NAME
              + '/providers/Microsoft.ContainerRegistry/registries/' + REGISTRIES
              + '?api-version=2020-11-01-preview')

requestBody = {
    "properties": {
        "publicNetworkAccess": "Enabled",
        "networkRuleSet": {
            "defaultAction": "Deny",
            "virtualNetworkRules": [],
            "ipRules": ALLOWED_IPS
        },
        "networkRuleBypassOptions": "AzureServices"
    }
}

r = requests.patch(url=requestURL, json=requestBody, headers={"Authorization": "Bearer " + authResp.token})
print(r.text)
Result:
Before you run this, please make sure that your client app has been granted the required permissions (Azure subscription roles, such as Contributor).
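If you would rather stay with the management SDK that the question mentions, the same update can be expressed roughly as below. This is only a sketch: the method and model names (begin_update, RegistryUpdateParameters, NetworkRuleSet, IPRule with ip_address_or_range) are assumed from recent azure-mgmt-containerregistry releases, so verify them against the version you have installed. It reuses the TENANT_ID / CLIENT_ID / CLIENT_SECRET / SUBSCRIPTION_ID / GROUP_NAME / REGISTRIES / ALLOWED_IPS values from the snippet above.

from azure.identity import ClientSecretCredential
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt.containerregistry.models import (
    RegistryUpdateParameters, NetworkRuleSet, IPRule
)

credential = ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
client = ContainerRegistryManagementClient(credential, SUBSCRIPTION_ID)

# Replace the registry's IP allow-list with the new set of rules (assumed model names).
update = RegistryUpdateParameters(
    network_rule_set=NetworkRuleSet(
        default_action="Deny",
        ip_rules=[IPRule(ip_address_or_range=ip["value"]) for ip in ALLOWED_IPS],
    )
)
client.registries.begin_update(GROUP_NAME, REGISTRIES, update).result()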

Is there a way to connect a new lambda function an existing AWS ApiGateway using AWS-CDK? (Python)

I am trying to wrap my head around AWS CDK to deploy Lambda functions to AWS through it. I already have a pre-existing API Gateway that I deployed manually through the console, and I would like to know if there is any way to connect it as a trigger to new Lambda functions deployed using the CDK. I have read through the example code:
apigw.LambdaRestApi(
    self, 'Endpoint',
    handler=my_lambda,
)
This creates a new gateway in AWS, which is not the functionality that I am looking for.
Any help would be much appreciated!
import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as apigateway from '@aws-cdk/aws-apigateway';

class BindApiFunctionStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const restapi = apigateway.RestApi.fromRestApiAttributes(this, "myapi", {
      restApiId: "<yourrestapiid>",
      rootResourceId: "<yourrootresourceid>"
    });

    const helloWorld = new lambda.Function(this, "hello", {
      runtime: lambda.Runtime.NODEJS_10_X,
      handler: 'index.handler',
      code: lambda.Code.fromInline('exports.handler = function(event, ctx, cb) { return cb(null, "hi"); }')
    });

    restapi.root.addResource("test").addMethod("GET", new apigateway.LambdaIntegration(helloWorld));
  }
}

const app = new cdk.App();
new BindApiFunctionStack(app, 'MyStack');
app.synth();
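Since the question is about the Python CDK, here is a rough Python translation of the same idea (CDK v1-style imports assumed; the API ID and root resource ID are placeholders for your existing gateway):

from aws_cdk import core
from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_lambda as _lambda

class BindApiFunctionStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Import the existing, manually created REST API by its attributes.
        rest_api = apigw.RestApi.from_rest_api_attributes(
            self, "myapi",
            rest_api_id="<yourrestapiid>",
            root_resource_id="<yourrootresourceid>",
        )

        hello_world = _lambda.Function(
            self, "hello",
            runtime=_lambda.Runtime.PYTHON_3_8,
            handler="index.handler",
            code=_lambda.Code.from_inline(
                "def handler(event, context):\n"
                "    return {'statusCode': 200, 'body': 'hi'}"
            ),
        )

        # Attach the new function to a resource/method on the imported API.
        rest_api.root.add_resource("test").add_method(
            "GET", apigw.LambdaIntegration(hello_world)
        )

app = core.App()
BindApiFunctionStack(app, "MyStack")
app.synth()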
