Deploying Flask app via Elastic Beanstalk which interacts with S3 - python

I have a Flask app which looks like this:
from flask import Flask
import boto3

application = Flask(__name__)

@application.route("/")
def home():
    return "Server successfully loaded"

@application.route("/app")
def frontend_from_aws():
    s3 = boto3.resource("s3")
    frontend = s3.Object(bucket_name="my_bucket", key="frontend.html")
    return frontend.get()["Body"].read()

if __name__ == "__main__":
    application.debug = True
    application.run()
Everything works perfectly when I test locally, but when I deploy the app to Elastic Beanstalk the second endpoint gives an internal server error:
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I didn't see anything alarming in the logs, though I'm not completely sure I'd know where to look. Any ideas?
Update: As a test, I moved frontend.html to a different bucket and modified the "/app" endpoint accordingly, and mysteriously it worked fine. So apparently this has something to do with the settings for the original bucket. Does anybody know what the right settings might be?

I found a quick and dirty solution: IAM policies (AWS console -> Identity & Access Management -> Policies). There was an existing policy called AmazonS3FullAccess, and after I attached it to the aws-elasticbeanstalk-ec2-role my app was able to read and write to S3 at will. I'm guessing that more subtle access management can be achieved by creating custom roles and policies, but this was good enough for my purposes.
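For anyone who wants tighter access than AmazonS3FullAccess, a custom policy scoped to a single bucket might look roughly like this (a sketch; "my_bucket" is the placeholder bucket name from the question):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my_bucket/*"
        }
    ]
}
Attach that to the aws-elasticbeanstalk-ec2-role in place of the full-access policy.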

Did you set up your AWS credentials on your Elastic Beanstalk instance as they are on your local machine (i.e. in ~/.aws/credentials)?
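For reference, a minimal ~/.aws/credentials file looks like this (placeholder values):
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
On Elastic Beanstalk, though, the instance profile role (as in the answer above) is the usual way to grant access; boto3 picks up instance-profile credentials automatically, so no credentials file is needed on the instance.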

Related

AWS Chalice route working locally, but not when deployed

I'm new to AWS Chalice and I'm running into obstacles during deployment. Essentially, everything works fine when I run chalice local: I can go to the route I've defined and it returns the related JSON data. However, once the app is deployed and I try to access the same route, I get an HTTP 502 Bad Gateway error:
{
"message": "Internal server error"
}
Is there something I'm missing when I set up my AWS IAM roles? Or, more generally, is there some level of configuration with AWS beyond just setting a config file with the access key and secret in the .aws directory on my system? Below is the basic boilerplate code I'm running when I get the error:
from chalice import Chalice
import json, datetime, os
from chalicelib import config

app = Chalice(app_name='trading-bot')
app.debug = True

@app.route('/quote')
def quote():
    return {"hello": "world"}
Please let me know if there are any more details I can provide; again, I'm new to Chalice and AWS in general, so it may be some simple setting I need to update on my profile.
Thanks!
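A debugging note, since the 502 from API Gateway usually hides a Lambda-side error: the Chalice CLI can fetch the deployed function's CloudWatch logs, which is the quickest way to see the real traceback:
# pull recent CloudWatch logs for the deployed Chalice app
chalice logs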

TradingView alerts to trigger market order through Python and Oanda's API

I'm trying to trigger a Python module (a market order for Oanda) using webhooks from TradingView.
Similar to these:
1) https://www.youtube.com/watch?v=88kRDKvAWMY&feature=youtu.be
2) https://github.com/Robswc/tradingview-webhooks-bot
But my broker is Oanda, so I'm using Python to place the trade. This link has more information:
https://github.com/hootnot/oanda-api-v20
The method is webhook -> ngrok -> Python. When a webhook is sent, ngrok (while the script is running) shows a 500 Internal Server Error: the server encountered an internal error and was unable to complete the request, because either the server is overloaded or there is an error in the application.
This is what my script says when it's running (see picture): first it prints some output related to the market order, then:
running script picture
One thing I noticed is that after the Debug output it doesn't say Running on... (so maybe my Flask server is not active?).
Here is the Python script:
from flask import Flask, request  # request is used in the webhook route below
import market_orders

# Create Flask object called app.
app = Flask(__name__)

# Create root to easily let us know it's on/working.
@app.route('/')
def root():
    return 'online'

@app.route('/webhook', methods=['POST'])
def webhook():
    if request.method == 'POST':
        # Parse the string data from tradingview into a python dict
        print(market_orders.myfucn())
    else:
        print('do nothing')

if __name__ == '__main__':
    app.run()
Let me know if there is any other information that would be helpful.
Thanks for your help.
I fixed it!!!! Google FTW
The first thing I learned was how to make my module a Flask server. I followed these websites to figure this out:
This link helped me set up the Flask file in a virtual environment. I also moved my Oanda modules to this new folder, opened the ngrok app while in this folder via the command window, and ran the module from within the command window using flask run.
https://topherpedersen.blog/2019/12/28/how-to-setup-a-new-flask-app-on-a-mac/
This link showed me how to set FLASK_APP and FLASK_ENV:
Flask not displaying http address when I run it
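For reference, the two variables are set like this before starting the server (the filename is a placeholder for whatever the Flask script is called):
export FLASK_APP=app.py       # placeholder filename
export FLASK_ENV=development  # enables debug mode and auto-reload
flask run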
Then I fixed the internal server error by adding return 'okay' after print('do nothing') in my script. This I learned from:
Flask Value error view function did not return a response
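Putting that together, the fixed handler looks roughly like this (the only change from the script above is the final return):
@app.route('/webhook', methods=['POST'])
def webhook():
    if request.method == 'POST':
        print(market_orders.myfucn())
    else:
        print('do nothing')
    return 'okay'  # a Flask view must return a response, otherwise Flask raises a 500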

Cloud9 "Unable to load http preview" Flask project

I'm following a MOOC for quickly building a website in Flask.
I'm using Cloud9, but I'm unable to view my preview on it; I get an
"Unable to load http preview" error.
The code is really simple; here is the views.py code:
from flask import Flask, render_template

app = Flask(__name__)

# Config options - Make sure you created a 'config.py' file.
app.config.from_object('config')
# To get one variable, type app.config['MY_VARIABLE']

@app.route('/')
def index():
    return "Hello world !"

if __name__ == "__main__":
    app.run()
And the preview screen is what I get when I execute
python views.py
Thank you in advance
You need to set the FLASK_APP environment variable; a Flask application is not run with python views.py but with flask run. See the Quick start:
# set an environment variable; give the absolute or relative
# path to your Flask app, in your case `views.py`
export FLASK_APP=views.py

# after this, run the Flask application
flask run
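One more thing worth checking in Cloud9 specifically: the built-in previewer only proxies applications listening on ports 8080, 8081, or 8082, so binding Flask explicitly may help (a suggestion beyond the original answer):
# bind to a port the Cloud9 previewer can reach
flask run --host=0.0.0.0 --port=8080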
I faced the same problem. There is no way to preview HTTP endpoints directly. Although the AWS documentation asks you to follow certain steps, those won't work either. The only way is to access it using the instance's public address and exposing the required ports. Read here for this.

How to access a remote datastore when running dev_appserver.py?

I'm attempting to run a localhost web server that has remote API access to a remote datastore, using the remote_api_stub method ConfigureRemoteApiForOAuth.
I have been using the following Google doc for reference but find it rather sparse:
https://cloud.google.com/appengine/docs/python/tools/remoteapi
I believe I'm missing the authentication bit, but can't find a concrete resource to guide me. What would be the easiest way, given the follow code example, to access a remote datastore while running dev_appserver.py?
import webapp2
from google.appengine.ext import ndb
from google.appengine.ext.remote_api import remote_api_stub

class Topic(ndb.Model):
    created_by = ndb.StringProperty()
    subject = ndb.StringProperty()

    @classmethod
    def query_by_creator(cls, creator):
        return cls.query(Topic.created_by == creator)

class MainPage(webapp2.RequestHandler):
    def get(self):
        remote_api_stub.ConfigureRemoteApiForOAuth(
            '#####.appspot.com',
            '/_ah/remote_api'
        )
        topics = Topic.query_by_creator('bill')
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('<html><body>')
        self.response.out.write('<h1>TOPIC SUBJECTS:<h1>')
        for topic in topics.fetch(10):
            self.response.out.write('<h3>' + topic.subject + '<h3>')
        self.response.out.write('</body></html>')

app = webapp2.WSGIApplication([
    ('/', MainPage)
], debug=True)
This gets asked a lot, simply because you can't use App Engine's libraries outside of the SDK. However, there is also an easier way to do it from within the App Engine SDK.
I would use gcloud for this. Here's how to set it up:
If you want to interact with Google Cloud Storage services inside or outside of the App Engine environment, you may use gcloud (https://googlecloudplatform.github.io/gcloud-python/stable/) to do so.
You need a service account on your application as well as its JSON credentials file. You do this on the App Engine console under the authentication tab: create it, then download it, and call it client_secret.json or something.
With those, once you install the proper packages for gcloud with pip, you'll be able to make queries as well as write data.
Here is an example of authenticating yourself to use the library:
import os
from gcloud import datastore

# the location of the JSON file on your local machine
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/location/client_secret.json"

# project ID from the Developers Console
projectID = "THE_ID_OF_YOUR_PROJECT"
os.environ["GCLOUD_TESTS_PROJECT_ID"] = projectID
os.environ["GCLOUD_TESTS_DATASET_ID"] = projectID

client = datastore.Client(dataset_id=projectID)
Once that's done, you can make queries like this:
query = client.query(kind='Model').fetch()
It's actually super easy. Anyhow, that's how I would do that! Cheers.
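As a follow-up on the "write data" half mentioned above, storing an entity with the same client looks roughly like this ('Model' and the property name are placeholders):
# build an entity under the kind 'Model' and save it
entity = datastore.Entity(key=client.key('Model'))
entity['subject'] = 'hello'
client.put(entity)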

Google cloud storage client with remote api on local client

I'm trying to use the Google remote API on my App Engine project to upload local files to my app's default Cloud Storage bucket.
I have configured my app.yaml to have the remote API on. I'm able to access my bucket and upload/access files from it. I run my local Python console and try to write to the bucket with the following code:
from google.appengine.ext.remote_api import remote_api_stub
from google.appengine.api import app_identity
import cloudstorage

def auth_func():
    return ('user@gmail.com', '*******')

remote_api_stub.ConfigureRemoteApi('my-app-id', '/_ah/remote_api', auth_func, 'my-app-id.appspot.com')

filename = "/" + app_identity.get_default_gcs_bucket_name() + "/myfile.txt"
gcs_file = cloudstorage.open(filename, 'w', content_type='text/plain',
                             options={'x-goog-meta-foo': 'foo', 'x-goog-meta-bar': 'bar'})
I see the following response:
WARNING:root:suspended generator urlfetch(context.py:1214) raised DownloadError(Unable to fetch URL: http://None/_ah/gcs/my-app-id.appspot.com/myfile.txt)
Notice the
http://None/_ah/gcs.....
I don't think None should be part of the URL. Is there an issue with GoogleAppEngineCloudStorageClient v1.9.0.0? I'm also using Google App Engine 1.9.1.
Any ideas?
The Google Cloud Storage client does not respect remote_api_stub; it considers that you are running the script locally. Setting
os.environ['SERVER_SOFTWARE'] = 'Development (remote_api)/1.0'
or even
os.environ['SERVER_SOFTWARE'] = ''
will help.
Here is the function from common.py that checks your environment:
def local_run():
    """Whether we should hit GCS dev appserver stub."""
    server_software = os.environ.get('SERVER_SOFTWARE')
    if server_software is None:
        return True
    if 'remote_api' in server_software:
        return False
    if server_software.startswith(('Development', 'testutil')):
        return True
    return False
If I understand correctly, you want to upload a local text file to a specific bucket. I do not think what you're doing will work.
The alternative would be to ditch the RemoteAPI and upload it using the Cloud Storage API.
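For example, a direct upload with the Cloud Storage client library might look roughly like this (a sketch using the current google-cloud-storage API rather than the older gcloud one; the project ID and default bucket name are the placeholders used earlier in the thread, and the local path is hypothetical):
from google.cloud import storage

client = storage.Client(project='my-app-id')
bucket = client.get_bucket('my-app-id.appspot.com')
blob = bucket.blob('myfile.txt')
blob.upload_from_filename('/local/path/myfile.txt')  # hypothetical local path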
