How to deploy Streamlit with secrets.toml on Heroku? - python

Hi, I have an application that I would like to deploy on Heroku. The question is: how would I deploy a Streamlit app that uses secrets.toml?
Currently the connection works locally via this:

    credentials = service_account.Credentials.from_service_account_info(
        st.secrets["gcp_service_account"]
    )

However, when I deploy it to Heroku, this doesn't seem to connect.
Please help.

On Heroku I entered the gcp_service_account credentials as a config var (from the Heroku dashboard, go to 'Settings' --> 'Reveal Config Vars').
Instead of st.secrets["<key>"], use os.environ["<key>"] in your Python code:

    gsheet_url = os.environ['private_gsheets_url']

For nested secrets like the GCP service account credentials, I first parse the JSON string:

    parsed_credentials = json.loads(os.environ["gcp_service_account"])
    credentials = service_account.Credentials.from_service_account_info(parsed_credentials, scopes=scopes)

Hope this helps.
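The steps above can be sketched end to end. This is a minimal, runnable illustration: the fake service-account JSON and the helper name are hypothetical, and in the real app the parsed dict would be handed to service_account.Credentials.from_service_account_info as shown in the answer.

```python
import json
import os

# Simulate a Heroku config var: nested secrets are stored as one JSON string.
os.environ["gcp_service_account"] = json.dumps(
    {"type": "service_account", "project_id": "my-project"}
)

def load_gcp_credentials_dict(var_name="gcp_service_account"):
    """Parse the JSON string held in a config var back into a dict."""
    return json.loads(os.environ[var_name])

parsed = load_gcp_credentials_dict()
# In the real app:
# credentials = service_account.Credentials.from_service_account_info(parsed, scopes=scopes)
print(parsed["project_id"])  # -> my-project
```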

Related

Firestore SDK hangs in production

I'm using the Firebase Admin Python SDK to read/write data to Firestore. I've created a service account with the necessary permissions and saved the credentials .json file in the source code (I know this isn't the most secure, but I want to get the thing running before fixing security issues). When testing the integration locally, it works flawlessly. But after deploying to GCP, where our service is hosted, calls to Firestore don't work properly: they retry for a while before throwing 503 Deadline Exceeded errors. However, SSHing into a GKE pod and calling the SDK manually works without issues. Problems only arise when the SDK is called from the normal code flow.
Our service runs in Google Kubernetes Engine in one project (call it Project A), but the Firestore database is in another project (call it Project B). The service account that I'm trying to use is owned by Project B, so it should still be able to access the database even when it is being initialized from inside Project A.
Here's how I'm initiating the SDK:
    from firebase_admin import get_app
    from firebase_admin import initialize_app
    from firebase_admin.credentials import Certificate
    from firebase_admin.firestore import client
    from google.api_core.exceptions import AlreadyExists

    credentials = Certificate("/path/to/credentials.json")
    try:
        app = initialize_app(credential=credentials, name="app_name")
    except ValueError:
        app = get_app(name="app_name")
    client = client(app=app)
Another wrinkle is that another part of our code is able to successfully use the same service account to produce Firebase Access Tokens. The successful code is:
    import firebase_admin
    from firebase_admin import auth as firebase_admin_auth

    if "app_name" in firebase_admin._apps:
        # Already initialized
        app = firebase_admin.get_app(name="app_name")
    else:
        # Initialize
        credentials = firebase_admin.credentials.Certificate("/path/to/credentials.json")
        app = firebase_admin.initialize_app(credential=credentials, name="app_name")

    firebase_token = firebase_admin_auth.create_custom_token(
        uid="id-of-user",
        developer_claims={"admin": is_admin, "site_slugs": read_write_site_slugs},
        app=app,
    )
Any help appreciated.
Turns out that the problem here was a conflict between gunicorn's gevent workers and the SDK's use of gRPC, something related to websockets. I found the solution here. I added the following code to our Django app's settings:

    import grpc.experimental.gevent as grpc_gevent
    grpc_gevent.init_gevent()
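For context, this conflict only shows up when the app is actually served with gevent workers; the patch must run before any gRPC channel is created, which is why putting it in Django settings works. A hypothetical deployment command where this setup applies (module path assumed):

```shell
# gevent worker class is what triggers the gRPC/gevent conflict
# that grpc_gevent.init_gevent() resolves
gunicorn myproject.wsgi:application --worker-class gevent --workers 2
```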

AWS Chalice route working locally, but not when deployed

I'm new to AWS Chalice and I'm running into obstacles during deployment--essentially, everything works fine when I run chalice local, I can go to the route I've defined and it will return the related JSON data. However, once deployed and I try accessing the same route, I get the HTTP 502 Bad Gateway error:
    {
        "message": "Internal server error"
    }
Is there something I'm missing when I set up my AWS IAM roles? Or, more generally, is there some level of configuration with AWS beyond just setting a config file with the access key and secret in the .aws directory on my system? Below is the basic boilerplate code I'm running when I get the error:
    from chalice import Chalice
    import json, datetime, os
    from chalicelib import config

    app = Chalice(app_name='trading-bot')
    app.debug = True

    @app.route('/quote')
    def quote():
        return {"hello": "world"}
Please let me know if there are any more details I can provide; again, I'm new to Chalice and AWS in general, so it may be some simple settings I need to update on my profile.
Thanks!
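When a deployed Chalice route returns a 502 while `chalice local` works, the Lambda usually raised an exception at runtime; the traceback lands in CloudWatch and can be pulled with `chalice logs`. Chalice also generates the IAM policy for the Lambda itself unless configured otherwise. A minimal `.chalice/config.json` for this app (stage name assumed) looks like:

```json
{
    "version": "2.0",
    "app_name": "trading-bot",
    "stages": {
        "dev": {
            "autogen_policy": true
        }
    }
}
```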

Flask: How to restart Azure App programmatically

I have 4 Scrapy spiders that I launch through Flask on Azure. How can I restart the application at the click of a button on my website? How do I call the REST API from a Flask function?
The Flask stub so far:

    @app.route('/restart')
    def restart():
        # REST API call goes here
If you want to restart an Azure web app, please follow the steps below:
1. Install the following Python packages: azure-mgmt-resource and azure-mgmt-web.
2. Create a service principal for authentication. You can use the Azure CLI or the Azure portal to create it. Here is an example using the Azure CLI:

    az ad sp create-for-rbac --name xxxx

From the output, write down these items:
- application id (client id)
- directory id (tenant)
- client secret (secret)
Then use the code below:
    from azure.common.credentials import ServicePrincipalCredentials
    from azure.mgmt.web import WebSiteManagementClient

    subscription_id = "xxxx"  # you can get it from the azure portal
    client_id = "xxx"
    secret = "xxx"
    tenant = "xxx"

    credentials = ServicePrincipalCredentials(
        client_id=client_id,
        secret=secret,
        tenant=tenant,
    )
    web_client = WebSiteManagementClient(credentials, subscription_id)

    # restart your azure web app
    web_client.web_apps.restart("your_resourceGroup_name", "your_web_app_name")
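Tying this back to the question's Flask route: the restart call can live in the view body. The sketch below is hypothetical glue; a stub stands in for azure.mgmt.web.WebSiteManagementClient so it runs without Azure credentials, and in the real app you would pass the web_client built above.

```python
def restart_handler(web_client, resource_group, app_name):
    """Body of the '/restart' Flask view: trigger the restart, report back."""
    web_client.web_apps.restart(resource_group, app_name)
    return {"status": "restarting", "app": app_name}

class _StubWebApps:
    """Stand-in for web_client.web_apps; records calls instead of restarting."""
    def __init__(self):
        self.calls = []

    def restart(self, group, name):
        self.calls.append((group, name))

class _StubClient:
    def __init__(self):
        self.web_apps = _StubWebApps()

stub = _StubClient()
result = restart_handler(stub, "my-rg", "my-app")
print(result)  # -> {'status': 'restarting', 'app': 'my-app'}
```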

How to get the user location from the web app running on Heroku?

I am having a problem: when I get the user location using http://freegeoip.net/json, it seems to return the location of where the Heroku server is located.
I used Flask to make the web app.
I uploaded the web app through Heroku.
Per the API docs, you need to pass the IP of the user, e.g. 'http://freegeoip.net/json/%s' % request.remote_addr. Otherwise it will return the location of the requester, which in your case is your Heroku server.
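One more wrinkle on Heroku: requests pass through Heroku's router, which puts the original client address in the X-Forwarded-For header, so request.remote_addr alone can still give a proxy address. A small helper (hypothetical name) that prefers the forwarded address:

```python
def client_ip(remote_addr, headers):
    """Return the originating client IP, falling back to remote_addr."""
    forwarded = headers.get("X-Forwarded-For", "")
    if forwarded:
        # The first entry is the original client; later ones are proxies.
        return forwarded.split(",")[0].strip()
    return remote_addr

# In a Flask view: ip = client_ip(request.remote_addr, request.headers)
# then build the lookup URL as 'http://freegeoip.net/json/%s' % ip
print(client_ip("10.0.0.1", {"X-Forwarded-For": "203.0.113.7, 10.1.2.3"}))
```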

Deploying Flask app via Elastic Beanstalk which interacts with S3

I have a Flask app which looks like this:
    from flask import Flask
    import boto3

    application = Flask(__name__)

    @application.route("/")
    def home():
        return "Server successfully loaded"

    @application.route("/app")
    def frontend_from_aws():
        s3 = boto3.resource("s3")
        frontend = s3.Object(bucket_name="my_bucket", key="frontend.html")
        return frontend.get()["Body"].read()

    if __name__ == "__main__":
        application.debug = True
        application.run()
Everything works perfectly when I test locally, but when I deploy the app to Elastic Beanstalk the second endpoint gives an internal server error:
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I didn't see anything alarming in the logs, though I'm not completely sure I'd know where to look. Any ideas?
Update: As a test, I moved frontend.html to a different bucket and modified the "/app" endpoint accordingly, and mysteriously it worked fine. So apparently this has something to do with the settings for the original bucket. Does anybody know what the right settings might be?
I found a quick and dirty solution: IAM policies (AWS console -> Identity & Access Management -> Policies). There was an existing policy called AmazonS3FullAccess, and after I attached it to the aws-elasticbeanstalk-ec2-role, my app was able to read and write to S3 at will. I'm guessing that more subtle access management can be achieved by creating custom roles and policies, but this was good enough for my purposes.
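For the "more subtle" version: instead of AmazonS3FullAccess, a custom policy attached to the same instance role can grant only the access this app needs. A sketch scoped to the bucket from the question (actions and ARN are illustrative assumptions):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my_bucket/*"
        }
    ]
}
```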
Did you set up your AWS credentials on your Elastic Beanstalk instance as they are on your local machine (i.e. in ~/.aws/credentials)?
