I am trying to inject Environment Variables into an Azure "App Services" Flask resource.
Note: I am aware that some use files to set up environment variables. I may look into that in the future, but for now I'm trying to do this without managing files.
Per the manual, I added the environment variables as "app settings" on the portal page.
And I can see that they have been set correctly with the Azure CLI command:
az webapp config appsettings list --name <redacted> --resource-group <redacted>
which outputs:
{
"name": "DB.DATABASE",
"slotSetting": false,
"value": "<redacted>"
},
{
"name": "DB.DRIVER",
"slotSetting": false,
"value": "{SQL Server Native Client 11.0}"
},
...
My Python code references the variables, which works locally.
from os import environ
driver = environ.get('DB.DRIVER')
server = environ.get('DB.SERVER')
user_id = environ.get('DB.USER_ID')
password = environ.get('DB.PASSWORD')
database = environ.get('DB.DATABASE')
trusted_connection = environ.get('DB.TRUSTED_CONNECTION')
print(f'driver: {driver}')
print(f'server: {server}')
print(f'user_id: {user_id}')
and the output in the Azure log stream is:
2020-10-05T17:08:01.172742838Z driver: None
2020-10-05T17:08:01.172767338Z server: None
2020-10-05T17:08:01.172772538Z user_id: None
What, please, am I missing from this procedure? It seemed so simple, but it just fails with no error message.
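(A hedged side note while debugging: on Linux App Service, app setting names containing periods are reportedly exposed to the process with the periods replaced by underscores, so a lookup that tries both spellings can confirm whether that is what is happening here; get_setting is my own helper name, not part of the question.)
from os import environ

def get_setting(name):
    # Try the dotted name first, then the underscored variant that
    # Linux App Service reportedly creates (e.g. DB.DRIVER -> DB_DRIVER).
    return environ.get(name) or environ.get(name.replace('.', '_'))

driver = get_setting('DB.DRIVER')
print(f'driver: {driver}')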
I have a Flask application and I've integrated Flasgger for documentation. When I run my app locally, I can access Swagger at http://127.0.0.1:5000/apidocs. But when it's deployed to our dev environment, the base URL is https://services.company.com/my-flask-app, and when I add /apidocs to the end of that URL, Swagger does not load.
This is how I've configured swagger:
swagger_config = {
"headers": [],
"specs": [
{
"endpoint": "APISpecification",
"route": "/APISpecification",
"rule_filter": lambda rule: True, # all in
"model_filter": lambda tag: True, # all in
}
],
"static_url_path": "/flasgger_static",
"specs_route": "/apidocs/",
"url_prefix": "/my-flask-app", # TODO - redo this for INT deployment
}
When I run this locally, I can access Swagger at http://127.0.0.1:5000/my-flask-app/apidocs/#/, but on my dev environment it would presumably be accessible at https://services.company.com/my-flask-app/my-flask-app/apidocs. When I check the console, Flasgger tries to get the CSS from https://services.company.com/, not from https://services.company.com/my-flask-app.
Any ideas on how I can resolve this?
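(One commonly suggested approach, sketched under the assumption that the reverse proxy sends an X-Forwarded-Prefix header, which the question does not confirm: werkzeug's ProxyFix lets Flask, and therefore Flasgger, generate URLs under /my-flask-app instead of the bare host.)
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust the prefix header set by the proxy so url_for() and the static
# asset URLs are generated under /my-flask-app rather than the site root.
app.wsgi_app = ProxyFix(app.wsgi_app, x_prefix=1)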
I want to test my Azure Function using the portal's Run/Test feature, but it throws a '500 internal server error'.
I can debug the same code in my local environment, but when I trigger it from the Azure portal it fails without any useful error logs.
The Azure Function reads JSON data from an Event Hub and writes it to Blob Storage. I am using Python for the development.
Here is the code:
__init__.py
from typing import List
import logging
import os
import azure.functions as func
from azure.storage.blob import BlobClient
import datetime
import json
storage_connection_string = os.getenv('storage_connection_string_FromKeyVault')
container_name = os.getenv('storage_container_name_FromKeyVault')
today = datetime.datetime.today()
def main(events: List[func.EventHubEvent]):
    for event in events:
        body = event.get_body().decode('utf-8')
        json.loads(body)  # validate that the payload is JSON
        logging.info('Python EventHub trigger processed an event: %s', body)
        logging.info(f'  SequenceNumber = {event.sequence_number}')
        logging.info(f'  Offset = {event.offset}')
        # Write the event under a year/month/day path, named by sequence number
        blob_name = f'{today.year}/{today.month}/{today.day}/{event.sequence_number}.json'
        blob_client = BlobClient.from_connection_string(storage_connection_string, container_name, blob_name)
        blob_client.upload_blob(body, blob_type="AppendBlob")
local.settings.json
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "<Endpoint1>",
"FUNCTIONS_WORKER_RUNTIME": "python",
"storage_connection_string_FromKeyVault": "<connectionString",
"storage_container_name_FromKeyVault": "<container_name>",
"EventHubReceiverPolicy_FromKeyVault": "<Endpoint2>"
}
}
function.json
{
"scriptFile": "__init__.py",
"bindings": [
{
"type": "eventHubTrigger",
"name": "events",
"direction": "in",
"eventHubName": "pwo-events",
"connection": "EventHubReceiverPolicy_FromKeyVault",
"cardinality": "many",
"consumerGroup": "$Default",
"dataType": "binary"
}
]
}
Please note that this error is thrown only when I click Run/Test on the portal; the same code runs fine after deployment.
The 500 error alone is not helpful for solving this problem; you need to find the specific error from the Azure Function. You can use Application Insights to get the detailed error, but the function app must have Application Insights configured before you can view its logs on the portal.
So you need to configure Application Insights for your function app (under Settings -> Application Insights in the portal).
Your function app will then restart.
Of course, you can also check the logs in Kudu:
First, go to Advanced Tools, then click 'Go'.
Once you are in Kudu, click Debug Console -> CMD -> LogFiles -> Application -> Functions -> yourtriggername. You will find the log file there.
If you are on Linux, after going to Kudu just click 'Log stream' (this is not supported on the Consumption plan for Linux).
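Independent of where the logs land, a hedged sketch of defensive logging inside the question's handler can also make the specific error visible instead of a bare 500 (the try/except wrapper is my addition, not part of the original code):
import json
import logging
import traceback
from typing import List

import azure.functions as func

def main(events: List[func.EventHubEvent]):
    for event in events:
        try:
            body = event.get_body().decode('utf-8')
            json.loads(body)  # validate the payload, as in the question
            # ... blob upload as in the question ...
        except Exception:
            # Log the full traceback so Application Insights / the Kudu log
            # files show the real cause rather than an unexplained 500.
            logging.error('Event processing failed:\n%s', traceback.format_exc())
            raise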
I had this problem and found that it was caused by dependencies. Removing nonexistent libraries (or following Microsoft's dependency documentation) solved the issue for me. The portal notice below explains the limitation:
Adding third-party dependencies in the Azure portal is currently not supported for Linux Consumption Function Apps. Click here to setup local environment. Learn more
If you need dependencies, you can refer to this Microsoft document to solve the problem.
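If you build with Azure Functions Core Tools, a remote (server-side) build is one documented way to have requirements.txt resolved for Linux plans; a sketch, with a placeholder app name:
# Sketch: publish with a remote build so dependencies in requirements.txt
# are installed by the platform rather than added via the portal.
func azure functionapp publish <app-name> --build remote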
I encountered multiple issues with publishing / deploying functions when using Python Azure Functions running on Linux and a Premium Plan. The following are options for cases where publishing fails, or where it succeeds but the function (on Azure) does not reflect what should have been published / deployed.
The following options may also work for non-Linux / non-Python / non-Premium-Plan Function (Apps).
Wait a few minutes after publishing so that the Function (App) reflects the update
Restart the Function App
Make sure that the following AppSettings are set under "Configuration" (please adjust to your current context; a CLI sketch for applying them follows the JSON below)
[
{
"name": "AzureWebJobsStorage",
"value": "<KeyVault reference to storage account connection string>",
"slotSetting": false
},
{
"name": "ENABLE_ORYX_BUILD",
"value": "true",
"slotSetting": false
},
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~3",
"slotSetting": false
},
{
"name": "FUNCTIONS_WORKER_RUNTIME",
"value": "python",
"slotSetting": false
},
{
"name": "SCM_DO_BUILD_DURING_DEPLOYMENT",
"value": "true",
"slotSetting": false
},
{
"name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
"value": "<storage account connection string>",
"slotSetting": false
},
{
"name": "WEBSITE_CONTENTSHARE",
"value": "<func app name>",
"slotSetting": false
}
]
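For reference, the same settings can also be applied with the Azure CLI; a hedged sketch with placeholder resource names (the secrets and connection strings are omitted):
# Sketch: apply the build/runtime settings from the JSON above.
az functionapp config appsettings set \
  --name <func-app-name> \
  --resource-group <resource-group> \
  --settings FUNCTIONS_WORKER_RUNTIME=python \
             FUNCTIONS_EXTENSION_VERSION=~3 \
             ENABLE_ORYX_BUILD=true \
             SCM_DO_BUILD_DURING_DEPLOYMENT=true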
When using Azure DevOps Pipelines, use the standard Azure Function task (https://github.com/Microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureFunctionAppV1/README.md) to publish the function and to set the AppSettings.
This task also works for Python, even though it does not explicitly provide the option under "Runtime Stack" (just leave it empty).
Make sure to publish the correct files (if you publish via ZipDeploy the zip folder should contain host.json at its root)
You can check whether the correct files have been published by inspecting the wwwroot folder via the Azure Portal -> Function App -> Development Tools -> SSH:
cd /home/site/wwwroot
dir
Check the deployment logs
Either via the link displayed as output during the deployment
Should look like "https://func-app-name.net/api/deployments/someid/log"
Via Development Tools -> Advanced Tools
If the steps so far did not help, it can help to SSH to the host via the portal (Development Tools -> SSH) and delete:
# The deployments folder (and then republish)
cd /home/site
rm -r deployments
# The wwwroot folder (and then republish)
cd /home/site
rm -r wwwroot
Delete the Function App resource and redeploy it
I'm trying to enable authentication in Apache Superset through OAuth2.
It should be straightforward, since Superset is built upon Flask AppBuilder, which supports OAuth and is extremely easy to set up and use.
I managed to make both of the following examples work seamlessly with a Twitter OAuth configuration:
FAB OAuth example
flask-oauthlib examples
Now I'm trying to apply the same configuration to SuperSet.
Docker
As I can't build the project manually because of several mysterious Python errors (tried on Windows 7 / Ubuntu Linux with Python 2.7 and 3.6), I decided to use this Superset docker image (which installs and works fine) and to inject my configuration as suggested by the docs:
Follow the instructions provided by Apache Superset for writing your own superset_config.py. Place this file in a local directory and mount this directory to /home/superset/.superset inside the container.
I added a superset_config.py (both inside a folder and on its own) and mounted it by adding the following to the Dockerfile:
ADD config .superset/config
(config is the name of the folder) or, for the single file:
COPY superset_config.py .superset
In both cases the files end up in the right place in the container (I checked with docker exec /bin/bash), but the web application shows no difference: no trace of Twitter authentication.
Can somebody figure out what I am doing wrong?
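(Not something the question tried: a hedged sketch of the mount the quoted docs describe, as an alternative to baking the file into the image; <superset-image> and the port are placeholders.)
# Sketch: mount the local config directory to the documented path instead
# of using ADD/COPY in the Dockerfile.
docker run -d -p 8088:8088 \
  -v $(pwd)/config:/home/superset/.superset \
  <superset-image>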
You have to change superset_config.py. Look at this example config; it works for me.
import os
from flask_appbuilder.security.manager import (
    AUTH_OID, AUTH_REMOTE_USER, AUTH_DB, AUTH_LDAP, AUTH_OAUTH
)

basedir = os.path.abspath(os.path.dirname(__file__))

ROW_LIMIT = 5000
SUPERSET_WORKERS = 4

SECRET_KEY = 'a long and random secret key'
SQLALCHEMY_DATABASE_URI = 'postgresql://username:pass@host:port/dbname'
CSRF_ENABLED = True

AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Public"

OAUTH_PROVIDERS = [
    {
        'name': 'google',
        'whitelist': ['@company.com'],
        'icon': 'fa-google',
        'token_key': 'access_token',
        'remote_app': {
            'base_url': 'https://www.googleapis.com/oauth2/v2/',
            'request_token_params': {
                'scope': 'email profile'
            },
            'request_token_url': None,
            'access_token_url': 'https://accounts.google.com/o/oauth2/token',
            'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
            'consumer_key': 'GOOGLE_AUTH_KEY',
            'consumer_secret': 'GOOGLE_AUTH_SECRET'
        }
    }
]
2021 update: the FAB OAuth provider schema seems to have changed a bit since this answer was written. If you're trying to do this with Superset >= 1.1.0, try this instead:
OAUTH_PROVIDERS = [
{
'name': 'google',
'icon': 'fa-google',
'token_key': 'access_token',
'remote_app': {
'client_id': 'GOOGLE_KEY',
'client_secret': 'GOOGLE_SECRET',
'api_base_url': 'https://www.googleapis.com/oauth2/v2/',
'client_kwargs':{
'scope': 'email profile'
},
'request_token_url': None,
'access_token_url': 'https://accounts.google.com/o/oauth2/token',
'authorize_url': 'https://accounts.google.com/o/oauth2/auth'
}
}
]
Of course, sub out GOOGLE_KEY and GOOGLE_SECRET. The rest should be fine. This was cribbed from the FAB security docs for the next time there is drift.
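A hedged sketch of that substitution, reading the real values from environment variables instead of hard-coding them in superset_config.py (the variable names are my own, not required by FAB):
import os

# Sketch: fill in the client credentials at startup from the environment.
OAUTH_PROVIDERS[0]['remote_app']['client_id'] = os.environ['GOOGLE_OAUTH_KEY']
OAUTH_PROVIDERS[0]['remote_app']['client_secret'] = os.environ['GOOGLE_OAUTH_SECRET']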
I have a working CLI tool, created with MongoEngine, running in a Linux environment:
MongoDB = 3.4.1
Python = 2.7.5
PyMongo = 3.4.0
MongoEngine = 0.11.0
I connect to the database using information in an .ini file that looks like this:
[DATABASE]
uri=mongodb://%(user)s:%(password)s@%(host)s/%(dbname)s
dbname=myapp
user=
host=localhost
password=
In Python:
DB_CONN = mongoengine.connect(conf['dbname'], host=conf['uri'])
There are currently two users in the database: user usrRO, which has the read role, and user usrRW, which has the readWrite role. When connecting to the database with user usrRW's name and password in the .ini file, everything works. But connecting with user usrRO's credentials (usrRO can read data through the mongo CLI interface) leads to:
pymongo.errors.OperationFailure: not authorized on myapp to execute command
{ createIndexes: <collection_name>,
indexes: [ { unique: true,
background: false,
sparse: false,
key: { name: 1 },
name: "name_1" } ],
writeConcern: {} }
Is there any way to use user usrRO's credentials, or any other way to connect to the database with read-only privileges using MongoEngine?
I found the answer in the MongoEngine GitHub issues.
Simply add
'auto_create_index': False
to your Document's meta. Note that this might not work for complex object structures.
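A minimal sketch of where the flag goes (the Document subclass and its field are made up for illustration):
from mongoengine import Document, StringField

class Item(Document):
    name = StringField(unique=True)
    meta = {
        # Skip the createIndexes call that the read-only user is not
        # authorized to run; indexes must then be created by an admin.
        'auto_create_index': False,
    }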