I want to put a very simple website on Amazon Elastic Beanstalk using the Flask framework. The website has a welcome form that takes two values from the user, reads an SQLite database, and shows the results on the screen. The website works perfectly on my local machine. The problem is on Elastic Beanstalk.
I create a zip file with application.py, the SQLite database, the static folder (for Bootstrap) and the templates folder (for the two templates), and I upload it to Elastic Beanstalk. My system is Windows, running Python 3.6. After the upload, EB gives me a green status. I click the link in the EB console and it takes me to the form. All good so far. But when I click the submit button on the form to go to the results page, I instead receive:
Internal Server Error
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I worked through the code to identify the step that fails on Amazon EB, and it seems the program fails at the line cur.execute('''SELECT a, b, c, d, e, f..., which suggests that Elastic Beanstalk does not see/understand my SQLite database.
Can someone help?
This is my code for the Flask program, application.py:
import os
import sqlite3
from flask import Flask, request, render_template

application = Flask(__name__)

@application.route('/', methods=['POST', 'GET'])
def index():
    if request.method == 'GET':
        return render_template('welcome.html')
    elif request.method == 'POST':
        conn = sqlite3.connect('Sky.db')
        cur = conn.cursor()
        weekendid = request.form['weekend']
        myData = []
        cur.execute('''SELECT a, b, c, d, e, f, g
                       FROM Table1 WHERE a = ? ORDER BY g DESC LIMIT 5''', (weekendid,))
        row = cur.fetchall()
        for i in row:
            average_1 = (i[1] + i[3]) / 2
            average_2 = (i[2] + i[4]) / 2
            variable1 = i[5]
            variable2 = i[6]
            cur.execute('''SELECT * FROM Table2 WHERE a = ?''', (i[0],))
            coords = cur.fetchone()
            zz = [average_1, average_2, variable1, variable2]
            myData.append(zz)
        return render_template('where.html', myData=myData)

if __name__ == '__main__':
    application.debug = True
    host = os.environ.get('IP', '127.0.0.1')
    port = int(os.environ.get('PORT', 80))
    application.run(host=host, port=port)
First, be careful when using SQLite on Elastic Beanstalk: if you change the environment configuration, it is likely to kill your instance and redeploy it, along with anything written to the local filesystem. In your case, it doesn't look like you are actually writing to the database, so that isn't a problem.
A first step in finding the error might be going to the Elastic Beanstalk console and clicking Request Logs, under the Logs pane.
There you should be able to get the logs from your instance and find the actual error under /var/log/httpd/error_log.
You might also want to SSH into your instance and verify that the paths are what you expect. You can of course also find the logs that way. If you are using the eb command line tool, you can simply run eb ssh.
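While you are in there, one thing worth checking (this is a guess based on the symptom, not something visible in your post): sqlite3.connect('Sky.db') uses a path relative to the process's current working directory, and under the Elastic Beanstalk WSGI server that directory is usually not your application folder. In that case the connect call quietly creates a new, empty Sky.db, and the SELECT then fails because Table1 does not exist. A minimal sketch of opening the bundled file by absolute path instead, assuming Sky.db sits next to application.py in the zip:

import os
import sqlite3

# Directory that application.py lives in, wherever EB unpacks the bundle
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def get_db():
    # Open the bundled SQLite file by absolute path instead of relying
    # on the WSGI process's working directory
    return sqlite3.connect(os.path.join(BASE_DIR, 'Sky.db'))

Then conn = get_db() inside the POST branch replaces the relative connect call.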
I'm trying to set up a Flask API server from which I can serve data from a local database via HTTP requests.
Locally, I run a thread that updates the local DB every minute.
from flask import Flask, jsonify

app = Flask(__name__)
cached_event_log = None

@app.route('/event_log', methods=['GET'])
def get_event_log():
    if cached_event_log != None and .get_latest_event_time == cached_event_log[-1]:
        return jsonify(cached_event_log)
    # MAKE CONNECTION TO DB AND GET THE DATA
    return jsonify(event_log)

if __name__ == '__main__':
    app.run(debug=True)
I'm struggling to find a "standard" way to set up such a recurring update.
Any opinion would be highly appreciated. Thank you.
You could set up a cron job to run a script at a specified interval
Or use something like Advanced Python Scheduler
https://apscheduler.readthedocs.io/en/3.0/
Advanced Python Scheduler support in Flask
https://github.com/viniciuschiele/flask-apscheduler
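If you would rather keep the refresh inside the Flask process, here is a minimal sketch using APScheduler's BackgroundScheduler. The fetch_event_log_from_db helper is a placeholder for your own query code, not a real function:

from apscheduler.schedulers.background import BackgroundScheduler
from flask import Flask, jsonify

app = Flask(__name__)
cached_event_log = []

def refresh_event_log():
    # Placeholder: query the local DB here and swap in the fresh result
    global cached_event_log
    cached_event_log = fetch_event_log_from_db()  # hypothetical helper

# Run the refresh every minute on a background thread
scheduler = BackgroundScheduler()
scheduler.add_job(refresh_event_log, 'interval', minutes=1)
scheduler.start()

@app.route('/event_log', methods=['GET'])
def get_event_log():
    # Requests only read the cache; the scheduler keeps it fresh
    return jsonify(cached_event_log)

if __name__ == '__main__':
    # use_reloader=False so the debug reloader doesn't start a second scheduler
    app.run(debug=True, use_reloader=False)

flask-apscheduler wraps the same idea with Flask-style configuration, if you prefer that.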
Latest update: The problem did indeed have to do with permissions and user groups; today I learned why we do not simply use root for everything. Thanks to Jakub P. for reminding me to look into the Apache error logs and thanks to domoarrigato for providing helpful insight and solutions.
What's up StackOverflow.
I followed the How To Deploy a Flask Application on an Ubuntu VPS tutorial provided by DigitalOcean, and got my application to successfully print out Hello, I love Digital Ocean! when reached externally via a GET request to my public server IP.
All good right? Not really.
After that, I edited the tutorial script and wrote a custom Flask application. I tested the script in my personal development environment and it runs without issue; I also tested it on the DigitalOcean server by deploying it on localhost and making another GET request.
All works as expected until I try to access it from my public DigitalOcean IP; now I am suddenly presented with a 500 Internal Server Error.
What is causing this issue, and what is the correct way to debug in this case?
What have I tried?
Setting app.debug = True gives the same 500 Internal Server Error without a debug report.
Running the application on localhost, both on my desktop PC and on the DigitalOcean server, gives no error; the script executes as expected.
The tutorial code runs and executes fine, and making a GET request to the DigitalOcean public IP returns the expected response.
I can switch between the tutorial application and my own application and clearly see that I am only getting the error with my custom application. However, the custom application still presents no issues running on localhost.
My code
from flask import Flask, request, redirect
from netaddr import IPNetwork
import os
import time

app = Flask(__name__)
APP_ROOT = os.path.dirname(os.path.abspath(__file__))

# Custom directories
MODULES = os.path.join(APP_ROOT, 'modules')
LOG = os.path.join(APP_ROOT, 'log')

def check_blacklist(ip_adress):
    ipv4 = [item.strip() for item in open(MODULES + '//ipv4.txt').readlines()]
    ipv6 = [item.strip() for item in open(MODULES + '//ipv6.txt').readlines()]
    for item in ipv4 + ipv6:
        if ip_adress in IPNetwork(item):
            return True
        else:
            pass
    return False

@app.route('/')
def hello():
    ip_adress = request.environ['REMOTE_ADDR']
    log_file = open(LOG + '//captains_log.txt', 'a')
    with log_file as f:
        if check_blacklist(ip_adress):
            f.write('[ {}: {} ][ FaceBook ] - {} .\n'
                    .format(time.strftime("%d/%m/%Y"), time.strftime("%H:%M:%S"), request.environ))
            return 'Facebook'
        else:
            f.write('[ {}: {} ][ Normal User ] - {} .\n'
                    .format(time.strftime("%d/%m/%Y"), time.strftime("%H:%M:%S"), request.environ))
            return 'Normal Users'

if __name__ == '__main__':
    app.debug = True
    app.run()
The tutorial code:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, I love Digital Ocean!"

if __name__ == "__main__":
    app.run()
Seems like the following line could be a problem:
log_file = open(LOG + '//captains_log.txt', 'a')
If the path it's looking for is '/var/www/flaskapp/flaskapp/log//captains_log.txt',
it would make sense for an exception to be thrown there. It's possible that the file is in a different place on the server, or that a / needs to be removed; make sure the open call will find the correct file.
If captains_log.txt is outside the Flask app directory, you can copy it in and chown it. If the txt file needs to stay outside the directory, then you'll have to add the user to the appropriate group, or open up permissions on the actual directory.
chown command should be:
sudo chown www:www /var/www/flaskapp/flaskapp/log/captains_log.txt
and it might be smart to run:
sudo chown -R www:www /var/www
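If you want to rule out the path before touching permissions, a quick check (just a sketch, reusing the constants already defined in the question's code) is to build the path with os.path.join and test it as the user the app runs under:

import os

APP_ROOT = os.path.dirname(os.path.abspath(__file__))
LOG = os.path.join(APP_ROOT, 'log')

# os.path.join avoids the stray '//' from string concatenation
log_path = os.path.join(LOG, 'captains_log.txt')

print(log_path)                      # the exact file the app will open
print(os.path.exists(log_path))      # does it exist at that location?
print(os.access(log_path, os.W_OK))  # can the current user write to it?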
I have a Flask app which looks like this:
from flask import Flask
import boto3

application = Flask(__name__)

@application.route("/")
def home():
    return "Server successfully loaded"

@application.route("/app")
def frontend_from_aws():
    s3 = boto3.resource("s3")
    frontend = s3.Object(bucket_name="my_bucket", key="frontend.html")
    return frontend.get()["Body"].read()

if __name__ == "__main__":
    application.debug = True
    application.run()
Everything works perfectly when I test locally, but when I deploy the app to Elastic Beanstalk the second endpoint gives an internal server error:
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I didn't see anything alarming in the logs, though I'm not completely sure I'd know where to look. Any ideas?
Update: As a test, I moved frontend.html to a different bucket and modified the "/app" endpoint accordingly, and mysteriously it worked fine. So apparently this has something to do with the settings for the original bucket. Does anybody know what the right settings might be?
I found a quick and dirty solution: IAM policies (AWS console -> Identity & Access Management -> Policies). There was an existing policy called AmazonS3FullAccess, and after I attached the aws-elasticbeanstalk-ec2-role to it, my app was able to read and write to S3 at will. I'm guessing that more subtle access management can be achieved by creating custom roles and policies, but this was good enough for my purposes.
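For the "more subtle" route, here is a rough sketch of a scoped-down custom policy created with boto3; the policy name and bucket ARN are placeholders, and attaching it to aws-elasticbeanstalk-ec2-role is still a separate step in the IAM console or CLI:

import json
import boto3

iam = boto3.client("iam")

# Hypothetical read-only policy limited to the one bucket the app needs
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my_bucket/*",
    }],
}

iam.create_policy(
    PolicyName="ReadFrontendBucketOnly",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)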
Did you set up your AWS credentials on your Elastic Beanstalk instance as they are on your local machine (i.e. in ~/.aws/credentials)?
I have a webpage (built using HTML & jQuery) which displays data from a MySQL database. I am using Flask to connect the HTML with my database. However, my database gets updated every 15 minutes (using a separate Python script). Currently, I stop the Flask server, update the database and restart Flask to update the webpage. My question is the following:
Is there a way to update the MySQL database in the background without having to stop the Flask server? I read about the concepts of AJAX and cron, however I am not able to understand how to use them with Flask asynchronously.
Note: I am a newbie to web applications and this is my first project that involves connecting the client side and server side. Any help will be appreciated.
Thanks
You are most likely doing something like this:
from flask import Flask, render_template
from yourMySqlLibrary import connect_to_mysql

conn = connect_to_mysql()

# This is only executed when you start the script
data = conn.execute("SELECT * FROM MySemiRegularlyUpdatedTable")

app = Flask(__name__)

@app.route("/")
def view_data():
    return render_template("view_data.html", data=data)

if __name__ == "__main__":
    app.run()
If that is the case, then your solution is simply to move your connection and query calls into your controller so that the database is re-queried every time you hit the page:
@app.route("/")
def view_data():
    # Moved from module level and placed here:
    # the connection to the database is made for each request
    conn = connect_to_mysql()
    # This is now executed on every request
    data = conn.execute("SELECT * FROM MySemiRegularlyUpdatedTable")
    return render_template("view_data.html", data=data)
This way, your view will update when your data does - and you won't have to restart the server just to pick up changes to your data.
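If opening a fresh connection in every view starts to feel repetitive, a common Flask pattern (sketched here with the same placeholder connect_to_mysql import as above) is to stash the connection on flask.g and close it when the request ends:

from flask import Flask, g, render_template
from yourMySqlLibrary import connect_to_mysql

app = Flask(__name__)

def get_conn():
    # One connection per request, stored on Flask's request-scoped g object
    if "db_conn" not in g:
        g.db_conn = connect_to_mysql()
    return g.db_conn

@app.teardown_appcontext
def close_conn(exc):
    # Runs after every request, whether or not it raised an error
    conn = g.pop("db_conn", None)
    if conn is not None:
        conn.close()

@app.route("/")
def view_data():
    data = get_conn().execute("SELECT * FROM MySemiRegularlyUpdatedTable")
    return render_template("view_data.html", data=data)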
I am new to web development; I just pieced together my first Django web app and integrated it with Apache using mod_wsgi.
The app has some 15 parameters on which you can query multiple SQL Server databases, and the result can be downloaded as an .xls file; I have deployed it on the company network.
The problem is that when I access the web app on one machine and set query parameters, the same parameters get set in the web app when I open it from a different machine (web client).
It's like there is just one global object which is being served to all the web clients.
I am using Django template tags to set values in the app's HTML pages.
I am not using any models in the Django project, as I am querying SQL Server DBs which are already built.
The query function from my views.py looks like:
def query(self, request):
    """
    """
    print "\n\n\t inside QUERY PAGE:", request.method, "\n\n"
    self.SummaryOfResults_list = []
    if self.vmd_cursor != -1:
        self.vmd_cursor.close()
    if request.method == 'POST':
        QueryPage_post_dic = request.POST
        print "\n\nQueryPage_post_dic :", QueryPage_post_dic
        self.err_list = []
        self.err_list = db_qry.validate_entry(QueryPage_post_dic)
        if len(self.err_list):
            return HttpResponseRedirect('/error/')
        else:
            channel_numbers, JPEG_Over_HTTP, Codec, format, rate_ctrl, transport, img_sz, BuildInfo_versions, self.numspinner_values_dic = db_qry.process_postdata(QueryPage_post_dic, self.numspinner_values_dic)
            return self.get_result(request, channel_numbers, JPEG_Over_HTTP, Codec, format, rate_ctrl, transport, img_sz, BuildInfo_versions)
    else:
        print "\nself.Cam_Selected_list inside qry :", self.Cam_Selected_list
        if (len(self.Cam_Selected_list) != 1):
            return HttpResponseRedirect('/error/')
        self.tc_dic, self.chnl_dic, self.enbl_dic, self.frmt_dic, self.cdectyp_dic, self.imgsz_dic, self.rtctrl_dic, self.jpg_ovr_http_dic, self.trnsprt_dic, self.cdec_dic, self.typ_dic, self.resolution_dic, self.vmd_cursor = populate_tbls.Read_RefTbls(self.Cam_Selected_list[0])
        c = self.get_the_choices(self.Cam_Selected_list[0])
        c['camera_type'] = self.Cam_Selected_list[0]
        for k, v in self.numspinner_values_dic.items():
            c[k] = v
        self.vmd_cursor.execute("SELECT DISTINCT [GD Build Info] FROM MAIN")
        res_versions = self.vmd_cursor.fetchall()
        version_list = []
        ver_list = ['', ' ']
        for version in res_versions:
            tmp_ver = version[0].encode()
            if (tmp_ver not in ver_list):
                version_list.append(tmp_ver)
        c['build_info'] = version_list
        print "\n\n c dic :", c
        c.update(csrf(request))
        return render_to_response('DBQuery.html', c)
The dictionary being passed to render_to_response holds the values that set the checkboxes and multi-select boxes (Dojo).
Thanks
Its like there is just one global object which is being served to all the web client.
What you're describing is probably exactly what's happening. Unless you're building whatever object self refers to in that example code anew for each request, it will be shared between clients more or less at random.
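As a rough sketch of the difference (the view and the validate_entry helper are made up for illustration, standing in for your db_qry code): build everything inside the view, or keep per-user values in the session, so each request gets its own state:

from django.http import HttpResponseRedirect
from django.shortcuts import render

def query(request):
    # Everything created here is local to this request, so one client's
    # selections can never leak into another client's page.
    context = {}
    if request.method == 'POST':
        errors = validate_entry(request.POST)  # hypothetical helper
        if errors:
            return HttpResponseRedirect('/error/')
        context['selections'] = dict(request.POST)
    else:
        # Per-user values belong in the session, not on a shared object
        context['selections'] = request.session.get('selections', {})
    return render(request, 'DBQuery.html', context)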
You can store your global variable in the SQL DB that you are using. This way you can retain the value/state of the variable across the request -> response cycle.
If you need a faster response time, explore a key->value in-memory datastore like Redis.
To add to what's mentioned by AKX, I suggest that you read up on the HTTP request -> response cycle and how web applications work.