I have a Flask application and I've integrated Flasgger for documentation. When I run my app locally, I can access Swagger at http://127.0.0.1:5000/apidocs. But when the app is deployed to our dev environment, the hostname is https://services.company.com/my-flask-app, and when I append /apidocs to that URL, Swagger does not load.
This is how I've configured Swagger:
swagger_config = {
    "headers": [],
    "specs": [
        {
            "endpoint": "APISpecification",
            "route": "/APISpecification",
            "rule_filter": lambda rule: True,  # all in
            "model_filter": lambda tag: True,  # all in
        }
    ],
    "static_url_path": "/flasgger_static",
    "specs_route": "/apidocs/",
    "url_prefix": "/my-flask-app",  # TODO - redo this for INT deployment
}
When I run this locally, I can access Swagger at http://127.0.0.1:5000/my-flask-app/apidocs/#/, so in the dev environment it would presumably end up at https://services.company.com/my-flask-app/my-flask-app/apidocs/, with the prefix doubled. When I check the browser console, Flasgger tries to get the CSS from https://services.company.com/, not https://services.company.com/my-flask-app.
Any ideas on how I can resolve this?
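A common pattern for this situation (a sketch, not necessarily the only fix): instead of hard-coding url_prefix, let the app pick up the prefix from the reverse proxy by setting SCRIPT_NAME in a small WSGI wrapper. With SCRIPT_NAME set, Flask builds every generated URL, including Flasgger's static assets, under the prefix. This assumes the proxy sends an X-Forwarded-Prefix header; the header name varies by setup.

# Minimal sketch of a prefix-aware WSGI wrapper; `app` is the Flask app
# from the question. Assumes the proxy sets X-Forwarded-Prefix.
class ReverseProxied:
    def __init__(self, wsgi_app):
        self.wsgi_app = wsgi_app

    def __call__(self, environ, start_response):
        prefix = environ.get('HTTP_X_FORWARDED_PREFIX', '')
        if prefix:
            environ['SCRIPT_NAME'] = prefix
            path = environ.get('PATH_INFO', '')
            if path.startswith(prefix):
                environ['PATH_INFO'] = path[len(prefix):]
        return self.wsgi_app(environ, start_response)

app.wsgi_app = ReverseProxied(app.wsgi_app)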
Sorry, Docker starter question here.
I'm currently building an app in Python with FastAPI and dockerizing it. Once it's dockerized I will connect it to an AWS Lambda. The problem is: how can I test my Lambda before deploying the image to ECR?
I already tried the local Lambda invoke endpoint at localhost:9000/2015-03-31/functions/function/invocations, creating a POST request that reads a file:
{
    "resource": "/",
    "path": "/upload/",
    "httpMethod": "POST",
    "requestContext": {},
    "multiValueQueryStringParameters": null,
    "headers": {
        "Accept": "application/json",
        "Content-Type": "application/json"
    },
    "body": { "filename": "image.jpg" },
    "files": { "upload": "image.jpg" }
}
I can't get it to work...
Code:
from fastapi import FastAPI, Request
from mangum import Mangum

app = FastAPI()

@app.post("/upload/")
async def upload_image(request: Request):
    print(request)
    print(await request.json())
    print(await request.body())
    return {"received_request_body": await request.json()}

handler = Mangum(app)
Does your container image include the runtime interface emulator (RIE)? Did you build and run the container image? Take a read through the following reference:
https://docs.aws.amazon.com/lambda/latest/dg/images-test.html
You might also check out AWS SAM CLI, which offers a nice workflow for local build and test of Lambda functions.
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-build.html
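As a side note on the event shape: in the API Gateway proxy format that Mangum expects, "body" must be a string (JSON-encode it yourself), not a nested object, and there is no "files" field. A minimal sketch of driving the RIE endpoint from Python, assuming the container is running and mapped to port 9000 as in the question:

import json
import requests

# Local RIE endpoint from the question; adjust host/port to your setup.
url = "http://localhost:9000/2015-03-31/functions/function/invocations"

event = {
    "resource": "/",
    "path": "/upload/",
    "httpMethod": "POST",
    "requestContext": {},
    "multiValueQueryStringParameters": None,
    "headers": {
        "Accept": "application/json",
        "Content-Type": "application/json",
    },
    # The proxy format carries the body as a string:
    "body": json.dumps({"filename": "image.jpg"}),
    "isBase64Encoded": False,
}

resp = requests.post(url, json=event)
print(resp.status_code, resp.text)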
I am not sure about your complete infrastructure, but based on the limited information given above, I'll try to answer.
You can test the Lambda independently with test events, either from the AWS console or from the CLI (aws lambda invoke).
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/testing-functions.html
But if you want to test it through an API, put API Gateway in front of your Lambda to expose it as an endpoint. Then you can test the Lambda whichever way you are comfortable with (curl, Postman, etc.).
References:
API Gateway with Lambda
(Screenshots: navigating to the test-event page from the Lambda console; a sample test event for Lambda.)
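For reference, a sketch of the same CLI-style invocation from Python with boto3 (the function name and payload here are hypothetical; credentials and region are assumed to be configured):

import json
import boto3

client = boto3.client("lambda")

# Equivalent to `aws lambda invoke` from the CLI: send a test event
# directly to the deployed function and read the response payload.
response = client.invoke(
    FunctionName="my-function",  # hypothetical name; use your function's
    Payload=json.dumps({"filename": "image.jpg"}).encode(),
)
print(json.loads(response["Payload"].read()))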
I have created an Azure Function based on a custom (Docker) image using VS Code.
I used the deployment feature of VS Code to deploy it to Azure, and everything was fine.
My function.json file specifies anonymous auth level:
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        }
    ]
}
Why am I still getting the 401 Unauthorized error?
Thanks
Amit
I changed my authLevel from function to anonymous and it finally worked!
The methods below can fix 4XX errors in your function app:
Make sure you add all the values from your local.settings.json file to the application settings (Function App -> Configuration -> Application Settings).
Check CORS in your function app. Try adding “*” and saving it, then reload the function app and try to run it again.
(Any request made against a storage resource when CORS is enabled must either have a valid authorization header or must be made against a public resource.)
When you make a request to your function, you may need to pass an authorization header with the key 'x-functions-key' and a value equal to either your default key for all functions (Function App > App Keys > default in the Azure portal) or a key specific to that function (Function App > Functions > [specific function] > Function Keys > default in the Azure portal).
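For example, a minimal sketch of such a request from Python, with a hypothetical function URL and key (take the real key from the portal locations above):

import requests

# Hypothetical URL and key; substitute your own values.
url = "https://<your-app>.azurewebsites.net/api/<your-function>"
headers = {"x-functions-key": "<your-function-key>"}

resp = requests.get(url, headers=headers)
print(resp.status_code, resp.text)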
I am trying to inject Environment Variables into an Azure "App Services" Flask resource.
Note: I am aware that some use files to set up environment variables. I may look into that in the future, but for now I'm trying to do this without managing files.
Per the manual, I added the environment variables as "app settings" on the portal page.
And I can see that they have been set correctly with the Azure CLI command:
az webapp config appsettings list --name <redacted> --resource-group <redacted>
which outputs:
{
    "name": "DB.DATABASE",
    "slotSetting": false,
    "value": "<redacted>"
},
{
    "name": "DB.DRIVER",
    "slotSetting": false,
    "value": "{SQL Server Native Client 11.0}"
},
...
My Python code references the variables, which works locally.
from os import environ
driver = environ.get('DB.DRIVER')
server = environ.get('DB.SERVER')
user_id = environ.get('DB.USER_ID')
password = environ.get('DB.PASSWORD')
database = environ.get('DB.DATABASE')
trusted_connection = environ.get('DB.TRUSTED_CONNECTION')
print(f'driver: {driver}')
print(f'server: {server}')
print(f'user_id: {user_id}')
and the output in the Azure log stream is:
2020-10-05T17:08:01.172742838Z driver: None
2020-10-05T17:08:01.172767338Z server: None
2020-10-05T17:08:01.172772538Z user_id: None
What, please, am I missing from this procedure? It seemed so simple, but it just fails with no error message.
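One thing worth checking (a likely cause, though stated here as an assumption): on Linux App Service, an app setting whose name contains a period is typically surfaced to the process with the period replaced by an underscore, so DB.DRIVER would arrive as DB_DRIVER. A minimal sketch to check both spellings and see what is actually present:

from os import environ

# On Linux App Service, 'DB.DRIVER' is typically exposed as 'DB_DRIVER';
# fall back to the dotted name so the same code still works locally.
driver = environ.get('DB_DRIVER') or environ.get('DB.DRIVER')
print(f'driver: {driver}')

# Dump every DB-related variable the process actually sees:
print({k: v for k, v in environ.items() if k.upper().startswith('DB')})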
tl;dr
Environment variables set in a zappa_settings.json don't upload as environment variables to AWS Lambda. Where do they go?
ts;wm
I have a Lambda function configured, deployed, and managed using the Zappa framework. In zappa_settings.json I have set a number of environment variables. These variables are definitely present, as my application runs successfully; however, when I inspect the Lambda function's environment variables in the console or via the AWS CLI, I see that no environment variables have been uploaded to the Lambda function itself.
Extract from zappa_settings.json:
{
    "stage-dev": {
        "app_function": "project.app",
        "project_name": "my-project",
        "runtime": "python3.7",
        "s3_bucket": "my-project-zappa",
        "slim_handler": true,
        "environment_variables": {
            "SECRET": "mysecretvalue"
        }
    }
}
Output of aws lambda get-function-configuration --function-name my-project-stage-dev:
{
    "Configuration": {
        "FunctionName": "my-project-stage-dev",
        "FunctionArn": "arn:aws:lambda:eu-west-1:000000000000:function:my-project-stage-dev",
        "Runtime": "python3.7",
        "Role": "arn:aws:iam::000000000000:role/lambda-execution-role",
        "Handler": "handler.lambda_handler",
        "CodeSize": 12333025,
        "Description": "Zappa Deployment",
        "Timeout": 30,
        "MemorySize": 512,
        "LastModified": "...",
        "CodeSha256": "...",
        "Version": "$LATEST",
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "RevisionId": "..."
    },
    "Code": {
        "RepositoryType": "S3",
        "Location": "..."
    }
}
Environment is absent from the output despite being included in the zappa_settings, and despite the AWS documentation indicating it should be included if present; checking in the console confirms this. I want to know where Zappa uploads the environment variables to, and if possible why it does that instead of using Lambda's built-in environment.
AWS CLI docs:
https://docs.aws.amazon.com/cli/latest/reference/lambda/get-function-configuration.html
The environment_variables are saved into a zappa_settings.py module when the deployment package is created (run zappa package STAGE and explore the archive) and are then set dynamically at runtime by handler.py, which modifies os.environ.
To set native AWS environment variables you need to use aws_environment_variables instead.
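To illustrate the mechanism described above, here is a rough sketch (not Zappa's actual code) of what effectively happens on cold start, and why nothing shows up in the console:

import os

# Rough sketch: the "environment_variables" from the settings module bundled
# into the package are injected into the running process by the handler,
# not written to the Lambda function's configuration.
BUNDLED_SETTINGS = {"SECRET": "mysecretvalue"}  # from the packaged zappa_settings.py

os.environ.update(BUNDLED_SETTINGS)

# By contrast, keys under "aws_environment_variables" are set on the function
# itself and do appear in the console and in get-function-configuration output.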
I'm trying to enable authentication in Apache Superset through OAuth2.
It should be straightforward, since Superset is built upon Flask AppBuilder, which supports OAuth and is extremely easy to set up and use.
I managed to make both of the following examples work seamlessly with a Twitter OAuth configuration:
FAB OAuth example
flask-oauthlib examples
Now I'm trying to apply the same configuration to SuperSet.
Docker
As I can't build the project manually because of several mysterious Python errors (tried on Windows 7/Ubuntu Linux and with Python versions 2.7 and 3.6), I decided to use this Superset Docker image (which installs and works fine) and inject my configuration as suggested by the docs:
Follow the instructions provided by Apache Superset for writing your own superset_config.py. Place this file in a local directory and mount this directory to /home/superset/.superset inside the container.
I added a superset_config.py (both inside a folder and on its own) and mounted it by adding the following to the Dockerfile:
ADD config .superset/config
(config is the name of the folder) or (for the single file):
COPY superset_config.py .superset
In both cases the files end up in the right place in the container (I checked with docker exec /bin/bash), but the web application shows no difference: no trace of Twitter authentication.
Can somebody figure out what I am doing wrong?
You have to change superset_config.py. Look at this example config; it works for me.
import os
from flask_appbuilder.security.manager import (
    AUTH_OID,
    AUTH_REMOTE_USER,
    AUTH_DB,
    AUTH_LDAP,
    AUTH_OAUTH,
)

basedir = os.path.abspath(os.path.dirname(__file__))

ROW_LIMIT = 5000
SUPERSET_WORKERS = 4

SECRET_KEY = 'a long and random secret key'
SQLALCHEMY_DATABASE_URI = 'postgresql://username:pass@host:port/dbname'
CSRF_ENABLED = True

AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Public"

OAUTH_PROVIDERS = [
    {
        'name': 'google',
        'whitelist': ['@company.com'],
        'icon': 'fa-google',
        'token_key': 'access_token',
        'remote_app': {
            'base_url': 'https://www.googleapis.com/oauth2/v2/',
            'request_token_params': {
                'scope': 'email profile'
            },
            'request_token_url': None,
            'access_token_url': 'https://accounts.google.com/o/oauth2/token',
            'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
            'consumer_key': 'GOOGLE_AUTH_KEY',
            'consumer_secret': 'GOOGLE_AUTH_SECRET'
        }
    }
]
2021 update: the FAB OAuth provider schema seems to have changed a bit since this answer was written. If you're trying to do this with Superset >= 1.1.0, try this instead:
OAUTH_PROVIDERS = [
    {
        'name': 'google',
        'icon': 'fa-google',
        'token_key': 'access_token',
        'remote_app': {
            'client_id': 'GOOGLE_KEY',
            'client_secret': 'GOOGLE_SECRET',
            'api_base_url': 'https://www.googleapis.com/oauth2/v2/',
            'client_kwargs': {
                'scope': 'email profile'
            },
            'request_token_url': None,
            'access_token_url': 'https://accounts.google.com/o/oauth2/token',
            'authorize_url': 'https://accounts.google.com/o/oauth2/auth'
        }
    }
]
Of course, sub out GOOGLE_KEY and GOOGLE_SECRET. The rest should be fine. This was cribbed from the FAB security docs, which are worth checking the next time the schema drifts.