I encountered multiple issues with publishing / deploying functions when using Python Azure Functions running on Linux and the Premium Plan. Below are options for what can be done when the deployment fails, or when it succeeds but the function (on Azure) does not reflect what should have been published / deployed.
The following options may also work for non-Linux / non-Python / non-Premium Plan Function (Apps).
Wait a few minutes after publishing so that the Function (App) reflects the update
Restart the Function App
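For example, with the Azure CLI (resource names are placeholders):
az functionapp restart --name <func app name> --resource-group <resource group>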
Make sure that the following AppSettings are set under "Configuration" (please adjust to your current context)
[
    {
        "name": "AzureWebJobsStorage",
        "value": "<KeyVault reference to storage account connection string>",
        "slotSetting": false
    },
    {
        "name": "ENABLE_ORYX_BUILD",
        "value": "true",
        "slotSetting": false
    },
    {
        "name": "FUNCTIONS_EXTENSION_VERSION",
        "value": "~3",
        "slotSetting": false
    },
    {
        "name": "FUNCTIONS_WORKER_RUNTIME",
        "value": "python",
        "slotSetting": false
    },
    {
        "name": "SCM_DO_BUILD_DURING_DEPLOYMENT",
        "value": "true",
        "slotSetting": false
    },
    {
        "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
        "value": "<storage account connection string>",
        "slotSetting": false
    },
    {
        "name": "WEBSITE_CONTENTSHARE",
        "value": "<func app name>",
        "slotSetting": false
    }
]
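The same settings can also be applied from the command line; a sketch using the Azure CLI (adjust names and values to your context):
az functionapp config appsettings set --name <func app name> --resource-group <resource group> --settings "SCM_DO_BUILD_DURING_DEPLOYMENT=true" "ENABLE_ORYX_BUILD=true"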
When using Azure DevOps Pipelines, use the standard Azure Function task (https://github.com/Microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureFunctionAppV1/README.md) to publish the function and to set the AppSettings.
This task also works for Python, even though it does not explicitly offer that option under "Runtime Stack" (just leave it empty). A minimal pipeline step could look like the sketch below.
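This is only a sketch; the service connection name and package path are placeholders, and the appSettings list is trimmed to the build-related flags:
- task: AzureFunctionApp@1
  inputs:
    azureSubscription: '<service connection>'
    appType: 'functionAppLinux'
    appName: '<func app name>'
    package: '$(Build.ArtifactStagingDirectory)/<package>.zip'
    appSettings: '-SCM_DO_BUILD_DURING_DEPLOYMENT true -ENABLE_ORYX_BUILD true'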
Make sure to publish the correct files (if you publish via ZipDeploy, the zip archive should contain host.json at its root)
You can check whether the correct files have been published by inspecting the wwwroot folder via the Azure Portal -> Function App -> Development Tools -> SSH
cd /home/site/wwwroot
dir
Check the deployment logs
Either via the link displayed in the output during deployment (it should look like "https://func-app-name.net/api/deployments/someid/log")
Via Development Tools -> Advanced Tools
If the steps so far did not help, it may help to SSH to the host via the portal (Development Tools -> SSH) and to delete
# The deployments folder (and then republish)
cd /home/site
rm -r deployments
# The wwwroot folder (and then republish)
cd /home/site
rm -r wwwroot
Delete the Function App resource and redeploy it
I'm running an Azure function locally, from VSCode, that outputs a string to a blob. I'm using Azurite to emulate the output blob container.
My function looks like this:
import azure.functions as func

def main(mytimer: func.TimerRequest, outputblob: func.Out[str]):
    outputblob.set("hello")
My function.json:
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "name": "mytimer",
            "type": "timerTrigger",
            "direction": "in",
            "schedule": "0 * * * * *"
        },
        {
            "name": "outputblob",
            "type": "blob",
            "dataType": "string",
            "direction": "out",
            "path": "testblob/hello"
        }
    ]
}
In local.settings.json, I've set "AzureWebJobsStorage": "UseDevelopmentStorage=true".
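For reference, a minimal local.settings.json for this setup could look like the following (the FUNCTIONS_WORKER_RUNTIME entry is assumed from a standard Python project):
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "python"
    }
}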
The problem is, when I run the function and check in Azure Storage Explorer, the container (testblob) is created (along with 2 other containers: azure-webjobs-hosts and azure-webjobs-secrets) but it is empty, and Azure Storage Explorer displays an error message when I refresh:
The first argument must be of type string or an instance of Buffer, ArrayBuffer, or Array or an Array-like Object. Received undefined
The function runs and doesn't return any error message.
When I use a queue instead of a blob as output, it works and I can see the string in the emulated queue storage.
When I use the blob storage in my Azure subscription instead of the emulated blob, it works as well, a new blob is created with the string.
I've tried the following:
clean and restart Azurite several times
replace "UseDevelopmentStorage=true" by the connection string of the emulated storage
reinstall Azure Storage Explorer
I keep getting the same error message.
I'm using Azure Storage Explorer Version 1.25.0 on Windows 11.
Thanks for any help!
It looks like this is a known issue with the latest release (v1.25.0) of Azure Storage Explorer; see:
https://github.com/microsoft/AzureStorageExplorer/issues/6008
The simplest solution is to uninstall it and re-install an earlier version:
https://github.com/microsoft/AzureStorageExplorer/releases/tag/v1.24.3
I have written a program to convert a file containing one JSON object per line into a JSON array.
Refer to the below link to understand what I want to achieve:
How to get JSON Array in a blob storage using dataflow
I have created the files below for the trigger:
function.json:
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "name": "jsonfiletrigger",
            "type": "blobTrigger",
            "direction": "in",
            "path": "<containername>/in.json",
            "connection": "<Storage account>"
        },
        {
            "name": "blobin",
            "type": "blob",
            "direction": "in",
            "path": "<containername>/in.json",
            "connection": "<Storage account>"
        },
        {
            "name": "blobout",
            "type": "blob",
            "direction": "out",
            "path": "<containername>/out.json",
            "connection": "<Storage account>"
        }
    ],
    "disabled": false
}
host.json:
{
    "version": "2.0"
}
local.settings.json
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<Storage account>;AccountKey=<Storage account access key>;EndpointSuffix=core.windows.net",
        "FUNCTIONS_EXTENSION_VERSION": "~3",
        "FUNCTIONS_WORKER_RUNTIME": "python",
        "APPINSIGHTS_INSTRUMENTATIONKEY": "<appinsight instrumentation key>",
        "APPLICATIONINSIGHTS_CONNECTION_STRING": "InstrumentationKey=<Instrumentation key>;IngestionEndpoint=https://westeurope-3.in.applicationinsights.azure.com/"
    },
    "ConnectionStrings": {}
}
__init__.py
import logging
import json
import os

import azure.functions as func

def main(blobin: func.InputStream, blobout: func.Out[bytes], context: func.Context):
    logging.info('env variables :: %s' % dict(os.environ))
    jsonarr = []
    try:
        # blobin is an InputStream, not a file path, so read it directly
        # instead of passing it to open()
        for line in blobin.read().decode('utf-8').splitlines():
            if line.strip():
                jsonarr.append(json.loads(line.strip()))
    except Exception as e:
        logging.error(f'EXCEPTION: Unable to read input. {e}')
        raise
    try:
        # write the assembled array through the output binding rather than open()
        blobout.set(json.dumps(jsonarr, indent=4).encode('utf-8'))
    except Exception as e:
        logging.error(f'EXCEPTION: Unable to write output. {e}')
        raise
I ran the below command to publish:
func azure functionapp publish jsonlisttoarray --publish-local-settings
I can see the files in the function app, but I'm not sure why the function doesn't get triggered.
Please help resolve the issue.
The problem may be caused by the storage account connection string not having been uploaded to the Azure portal when you did the deployment.
The documentation shows us that values under ConnectionStrings will not be published to Azure when you run the command func azure functionapp publish jsonlisttoarray --publish-local-settings.
I tested it on my side: the values under the ConnectionStrings field in local.settings.json were not published to the function's application settings on the portal during deployment, and this leads to the function not being triggered.
To solve this problem, go to your function app in the Azure portal first. Then click "Configuration" --> under the "Application settings" tab --> click "New application setting" and add a variable with the same name and value as the one under the ConnectionStrings field in your local.settings.json.
================================Update================================
It seems there is a mistake in the connection field of your function.json. First, you should add a variable in local.settings.json with the value of the storage connection string, like below:
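A sketch of what this could look like (MyStorageConnection is an illustrative setting name; use whatever name you prefer):
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "<storage account connection string>",
        "FUNCTIONS_WORKER_RUNTIME": "python",
        "MyStorageConnection": "DefaultEndpointsProtocol=https;AccountName=<Storage account>;AccountKey=<Storage account access key>;EndpointSuffix=core.windows.net"
    }
}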
Then set the value of the connection field in function.json to the name of that connection string setting (from local.settings.json):
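For example, for the trigger binding (again with the illustrative name MyStorageConnection):
{
    "name": "jsonfiletrigger",
    "type": "blobTrigger",
    "direction": "in",
    "path": "<containername>/in.json",
    "connection": "MyStorageConnection"
}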
Then deploy the function to Azure again with the command func azure functionapp publish jsonlisttoarray --publish-local-settings.
Note: if you do not add --publish-local-settings to your publish command, the values in local.settings.json will not be uploaded to your function app during deployment.
I found that the problem was with the directory structure. I had all the files in the same directory; function.json and __init__.py (and any other sources) need to live in a subdirectory per function.
A function app can have multiple functions; each function shares the same settings, requirements.txt, and host.json.
The directory structure looks like below:
$ ls -Ra
.:
. .. BlobTrigger extensions.csproj host.json local.settings.json proxies.json .python_packages requirements.txt
./BlobTrigger:
. .. function.json __init__.py
./.python_packages:
. ..
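For reference, Azure Functions Core Tools scaffolds exactly this layout when a function is created inside the project folder, e.g.:
func new --name BlobTrigger --template "Azure Blob Storage trigger"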
I was trying to use the debugging function for Lambda (Python) in Visual Studio Code. I was following the instructions in the AWS docs, but I could not trigger the Python application in debug mode.
Please kindly see if you know the issue and whether I have set up anything incorrectly, thanks.
Reference:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
Observation
Start application: it seems the application was not started on the debug port specified.
Request call: the endpoint could not be reached and the Python application was not entered.
If accessed through port 3000, the application could complete successfully.
Setup performed
Initialize the project and install ptvsd as instructed
Enable ptvsd in the Python code
Add launch configuration
Project structure
Python source
This is basically just the official hello world sample for Python
import json
# import requests
import ptvsd

# Enable ptvsd on 0.0.0.0 address and on port 5890 that we'll connect later with our IDE
ptvsd.enable_attach(address=('localhost', 5890), redirect_output=True)
ptvsd.wait_for_attach()

def lambda_handler(event, context):
    """Sample pure Lambda function

    Parameters
    ----------
    event: dict, required
        API Gateway Lambda Proxy Input Format
        Event doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format

    context: object, required
        Lambda Context runtime methods and attributes
        Context doc: https://docs.aws.amazon.com/lambda/latest/dg/python-context-object.html

    Returns
    ------
    API Gateway Lambda Proxy Output Format: dict
        Return doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html
    """

    # try:
    #     ip = requests.get("http://checkip.amazonaws.com/")
    # except requests.RequestException as e:
    #     # Send some context about this error to Lambda Logs
    #     print(e)
    #     raise e

    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": "hello world",
            # "location": ip.text.replace("\n", "")
        }),
    }
Launch configuration
launch.json
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal"
        },
        {
            "name": "SAM CLI Python Hello World",
            "type": "python",
            "request": "attach",
            "port": 5890,
            "host": "localhost",
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}/hello_world/build",
                    "remoteRoot": "/var/task"
                }
            ]
        }
    ]
}
It seems I was editing the Python file at "python-debugging/hello_world/build", following the guideline in the doc (there is a step in the doc which asks you to copy the Python file to "python-debugging/hello_world/build").
But when you run "sam local start-api", it actually runs the Python file at the location specified by the CloudFormation template (template.yaml), which is "python-debugging/hello_world" (check the "CodeUri" property).
When I moved all the libraries to the same folder as the Python file, it worked.
So you have to make sure which Python (or Lambda) script you are actually running, and ensure the libraries sit next to that script (if you are not using layers).
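For reference, the local API gateway can be started with the debug port exposed via the SAM CLI's --debug-port flag (the port must match the one in launch.json and in the ptvsd.enable_attach call):
sam local start-api --debug-port 5890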
Folder structure
Entering debugging mode in Visual Studio Code
Step 1: Invoke and start up the local API gateway (server side)
Step 2: Send a test request (client side)
Step 3: Request received, Lambda triggered, pending activation of debug mode in Visual Studio Code (server side)
Step 4: Lambda function triggered, entering debug mode in Visual Studio Code: in the IDE, open the "Run" perspective, select the launch config for this file ("SAM CLI Python Hello World"), and start the debug session.
Step 5: Step through the function and return the response (server and client side)
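The test request in step 2 can be sent with curl, for example (the /hello path is an assumption based on the default SAM hello world template):
curl http://127.0.0.1:3000/hello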
tl;dr
Environment variables set in a zappa_settings.json don't upload as environment variables to AWS Lambda. Where do they go?
ts;wm
I have a Lambda function configured, deployed, and managed using the Zappa framework. In zappa_settings.json I have set a number of environment variables. These variables are definitely present, as my application runs successfully; however, when inspecting the Lambda function's environment variables in the console or via the AWS CLI, I see that no environment variables have been uploaded to the Lambda function itself.
Extract from zappa_settings.json:
{
    "stage-dev": {
        "app_function": "project.app",
        "project_name": "my-project",
        "runtime": "python3.7",
        "s3_bucket": "my-project-zappa",
        "slim_handler": true,
        "environment_variables": {
            "SECRET": "mysecretvalue"
        }
    }
}
Output of aws lambda get-function-configuration --function-name my-project-stage-dev:
{
    "Configuration": {
        "FunctionName": "my-project-stage-dev",
        "FunctionArn": "arn:aws:lambda:eu-west-1:000000000000:function:my-project-stage-dev",
        "Runtime": "python3.7",
        "Role": "arn:aws:iam::000000000000:role/lambda-execution-role",
        "Handler": "handler.lambda_handler",
        "CodeSize": 12333025,
        "Description": "Zappa Deployment",
        "Timeout": 30,
        "MemorySize": 512,
        "LastModified": "...",
        "CodeSha256": "...",
        "Version": "$LATEST",
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "RevisionId": "..."
    },
    "Code": {
        "RepositoryType": "S3",
        "Location": "..."
    }
}
Environment is absent from the output despite being included in the zappa_settings, and despite the AWS documentation indicating it should be included if present; this is confirmed by checking in the console. I want to know where Zappa is uploading the environment variables to, and, if possible, why it does so instead of using Lambda's built-in environment.
AWS CLI docs:
https://docs.aws.amazon.com/cli/latest/reference/lambda/get-function-configuration.html
environment_variables are saved into a zappa_settings.py file when creating a package for deployment (run zappa package STAGE and explore the archive) and are then set dynamically as environment variables by modifying os.environ in handler.py.
To set native AWS environment variables you need to use aws_environment_variables instead.
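Applied to the settings above, that would look like this (same value, just moved under the aws_environment_variables key so Zappa passes it through to Lambda's native environment):
{
    "stage-dev": {
        "app_function": "project.app",
        "project_name": "my-project",
        "runtime": "python3.7",
        "s3_bucket": "my-project-zappa",
        "slim_handler": true,
        "aws_environment_variables": {
            "SECRET": "mysecretvalue"
        }
    }
}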
Is there a way to dynamically get the version tag from my __init__.py file and append it to the dockerrun.aws.json image name? For example:
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "dockerkey",
        "Key": "mydockercfg"
    },
    "Image": {
        "Name": "comp/app:{{version}}",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "80"
        }
    ]
}
This way, when I do eb deploy, it will deploy the correct version. At the moment I have to keep modifying the JSON file with each deploy.
I also stumbled upon that last year, and AWS support stated there's no such feature at hand. I ended up writing a script that receives the Docker tag as a parameter and composes the dockerrun.aws.json file on the fly with the correct tag name.
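A minimal sketch of such a script in Python (the template filename, the module path, and the __version__ convention are assumptions; adapt them to your project):
#!/usr/bin/env python3
# Sketch: render dockerrun.aws.json from a template, injecting the image tag.
# Usage: python render_dockerrun.py 1.2.3 (or omit the argument to read the
# tag from a hypothetical myapp/__init__.py containing a __version__ string).
import json
import re
import sys

def version_from_init(path='myapp/__init__.py'):
    # Assumes a line like: __version__ = "1.2.3"
    with open(path) as f:
        return re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', f.read()).group(1)

tag = sys.argv[1] if len(sys.argv) > 1 else version_from_init()

# Load a template whose Image.Name holds a placeholder, then overwrite it.
with open('dockerrun.aws.template.json') as f:
    config = json.load(f)
config['Image']['Name'] = 'comp/app:%s' % tag

with open('dockerrun.aws.json', 'w') as f:
    json.dump(config, f, indent=2)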
I've written a bash script which runs
eb deploy
Before it executes that command, I change a symlink depending on whether I'm running production or staging. For example:
ln -sf ../ebs/Dockerrun.aws.$deploy_type.json ../ebs/Dockerrun.aws.json