I would like to develop an app that uses both Flask and httpd (Python's HTTPServer). Flask serves the HTML-related files, and httpd serves local files, so that the pages rendered by Flask can reference local files published by httpd.
Although Flask and httpd use different port numbers, the httpd side does not seem to be working: a connection refused error occurs when connecting to the httpd server.
Edit: to clarify the intent of the question.
I want to run Flask's built-in web server and HTTPServer simultaneously from one script. I only need to view the result myself; I don't want to expose it to the network. I'm looking for a mechanism that can be contained entirely in the app.py script, without a separate WSGI server.
Edit: additional information.
This question uses Flask and Python's HTTPServer, but replacing Python's HTTPServer with Node.js's http-server (and commenting out run()) works fine. If possible, though, I would like to do everything in Python without the Node.js http-server.
https://www.npmjs.com/package/http-server
C:\Users\User\Videos>http-server
Starting up http-server, serving ./
Available on:
http://127.0.0.1:8080
Hit CTRL-C to stop the server
...
Versions
Flask==1.0.2
Python 3.7
Can I not start each server with the following code?
templates/index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title></title>
</head>
<body>
<video src="http://localhost:8080/video.mp4"></video>
</body>
</html>
Python (app.py)
from http.server import SimpleHTTPRequestHandler, HTTPServer
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def hello_world():
    return render_template('index.html')

class Handler(SimpleHTTPRequestHandler):
    def __init__(self, *args, directory=None, **kwargs):
        super().__init__(*args,
                         directory=r'C:\Users\User\Videos',
                         **kwargs)

def run(server_class=HTTPServer, handler_class=Handler):
    server_address = ('localhost', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()

if __name__ == '__main__':
    app.run(host='localhost', port=5000)
    run()
I'm sorry if that didn't come across clearly. Thank you very much.
Can I not start each server with the following code?
Yes, there are many other ways.
WSGI
WSGI, which stands for Web Server Gateway Interface, is defined in PEP 333:
This document specifies a proposed standard interface between web
servers and Python web applications or frameworks, to promote web
application portability across a variety of web servers.
Framework side
Flask, Django, and many other frameworks implement this interface. So when you write an app in Flask, the app object implements WSGI, and any web server that knows how to serve a WSGI app can serve it.
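To make the contract concrete, here is a minimal sketch of what PEP 333 describes (the function name and the use of the standard library's wsgiref reference server are only illustrations, not anything Flask-specific):

# a WSGI "application" is simply a callable taking (environ, start_response);
# Flask's app object satisfies the same contract under the hood
def simple_wsgi_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a bare WSGI app\n']

if __name__ == '__main__':
    # any WSGI server can serve it, e.g. the reference server in the standard library
    from wsgiref.simple_server import make_server
    make_server('localhost', 8000, simple_wsgi_app).serve_forever()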
Web server side
There are many choices and you can find more at wsgi.org:
gunicorn
uWSGI
...
Basically, you can choose any of these to start your server and use httpd to proxy requests to it. Or you can use mod_wsgi:
mod_wsgi is an Apache module that embeds a Python application within
the server and allow them to communicate through the Python WSGI
interface as defined in the Python PEP 333.
Note
The bundled server in Flask is not suitable for production use; you can see this question for more details.
For Updated Question
I want to run Flask's built-in web server and HTTPServer
simultaneously from a script.
Just Run a Shell Command
You can start another process and run a shell command using Popen:
from subprocess import Popen

if __name__ == '__main__':
    p = Popen(['python -m http.server'], shell=True)
    app.run(host='localhost', port=5000)
Use whatever command you like; you can even start a Node http-server here.
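As a variant of the same idea (a sketch only, assuming you want to serve the C:\Users\User\Videos folder from the question on port 8000), you can pass the directory and port to http.server instead of serving the current working directory:

from subprocess import Popen

if __name__ == '__main__':
    # serve the videos folder on port 8000 in a child process
    p = Popen(['python', '-m', 'http.server', '8000'],
              cwd=r'C:\Users\User\Videos')
    try:
        app.run(host='localhost', port=5000)
    finally:
        p.terminate()  # stop the child server when Flask exits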
Use Thread/Process
You can start the HTTP server in another thread or process; here I use threading as an example:
if __name__ == '__main__':
    from threading import Thread

    Thread(target=run, daemon=True).start()
    app.run(host='localhost', port=5000)
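If you prefer a separate process over a thread, a minimal sketch looks almost the same (this assumes the run() function defined in the question's app.py):

if __name__ == '__main__':
    from multiprocessing import Process

    # run() serves C:\Users\User\Videos on port 8000 in its own process
    Process(target=run, daemon=True).start()
    app.run(host='localhost', port=5000)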
Use Flask To Serve Files
Instead of starting two servers bound to two ports, you can use Flask to serve the files as well. This way, you only start one server bound to one port:
from flask import send_from_directory

@app.route('/videos/<path:filename>')
def download_file(filename):
    return send_from_directory(r'C:\Users\User\Videos',
                               filename, as_attachment=True)
With this route, the video tag in index.html can point to a Flask URL such as /videos/video.mp4 instead of a second server's port. You can see the documentation for more details.
app.run() is a blocking call, so the lines below it are never reached. Run your apps from separate files, or in different threads/processes.
Related
I have a Dash (Plotly) app set up using Flask as the server, and I can serve it on our Windows Server on port 8041 using waitress. My code to launch waitress is below:
#!/usr/bin/env python3
from waitress import serve
from src.pacedash.app import server as application

if __name__ == "__main__":
    serve(application, threads=100, port=8041)
Everything works great if I use python run_waitress.py, except that when someone on our network navigates to servername:8041 there is a "Not Secure" warning next to the URL. Our IT vendor was able to get a cert file and key, but I'm not sure how to bring those into my current setup.
I have been trying to use nginx, but I can't find a guide to setting it up with waitress, and I'm not too familiar with web apps or WSGI because I primarily work as the lone data person here.
I have been working on this same issue and have a solution. The nginx .conf file needs to have a location defined like this:
location /myapp/ {
    # Define the location of the proxy server to send the request to
    proxy_pass http://localhost:8041/myapp/;

    # standard proxy_set_header stuff below...
}
Then in your Dash application set the url_base_pathname to the same value:
app = dash.Dash(__name__, url_base_pathname='/myapp/')
I would use ngrok to expose your web app. It's amazingly simple:
Read this: https://ngrok.com/
I could be misinterpreting what you need, because I am not familiar with waitress (why not serve the app locally just using flask?), but if you need to test the live app, ngrok is what you should use.
I'm working on a Flask-SocketIO server that works just fine.
However, I'm getting lots of requests like this in my server log:
"GET /socket.io/?EIO=3&transport=polling&t=LBS1TQt HTTP/1.1"
Here's the code I'm working with:
from flask import Flask, render_template, redirect, url_for
from flask_socketio import SocketIO, emit
import json

def load_config():
    # configuration
    return json.load(open('/etc/geekdj/config.json'))

config = load_config()

geekdj = Flask(__name__)
geekdj.config["DEBUG"] = config["debug"]
geekdj.config["SECRET_KEY"] = config["secret_key"]
geekdj.config.from_envvar("FLASKR_SETTINGS", silent=True)

socketio = SocketIO(geekdj)

@geekdj.route('/')
def index():
    return render_template('index.html')

# SocketIO functions
@socketio.on('connect')
def chat_connect():
    print('connected')

@socketio.on('disconnect')
def chat_disconnect():
    print("Client disconnected")

@socketio.on('broadcast')
def chat_broadcast(message):
    print("test")
    emit("chat", {'data': message['data']})

if __name__ == "__main__":
    socketio.run(geekdj, port=8000)
and the JS in index.html:
<script src="//cdn.socket.io/socket.io-1.4.5.js"></script>
<script type="text/javascript" charset="utf-8">
    $(document).ready(function(){
        // the socket.io documentation recommends sending an explicit package upon connection
        // this is especially important when using the global namespace
        var socket = io.connect('http://localhost:8000');

        socket.on('connection', function(socket) {
            socket.emit('foo', {foo: "bar"});
            socket.join("test");
        });

        socket.on('joined', function(data) {
            console.log('Joined room!');
            console.log(data["room"]);
        });
    });
</script>
I'd prefer to be using actual WebSockets if possible. Does anyone know why Socket.IO is falling back to polling?
Did the first answer work? If so, you should accept it. If not, post your requirements.txt please.
I had the same problem and found the resolution by fully absorbing the documentation page:
The asynchronous services that this package relies on can be selected
among three choices:
eventlet is the best performant option, with support for long-polling and WebSocket transports.
gevent is supported in a number of different configurations. The long-polling transport is fully supported with the gevent package,
but unlike eventlet, gevent does not have native WebSocket support.
To add support for WebSocket there are currently two options.
Installing the gevent-websocket package adds WebSocket support to
gevent or one can use the uWSGI web server, which comes with
WebSocket functionality. The use of gevent is also a performant
option, but slightly lower than eventlet.
The Flask development server based on Werkzeug can be used as well, with the caveat that it lacks the performance of the other two
options, so it should only be used to simplify the development
workflow. This option only supports the long-polling transport.
Basically, I had neither eventlet nor gevent-websocket in my virtual environment. I installed eventlet, and the transport upgrade to WebSocket was near instantaneous! Hope this helps.
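For illustration, a minimal sketch of the setup once eventlet is installed (pip install eventlet); Flask-SocketIO normally auto-detects it, but you can make the choice explicit with the async_mode option (the app and port here are just placeholders):

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# async_mode is optional; omit it to let Flask-SocketIO pick the best installed option
socketio = SocketIO(app, async_mode='eventlet')

if __name__ == '__main__':
    socketio.run(app, port=8000)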
I found the solution in this other Q/A.
It turns out that SocketIO sets a cookie with the most recent connection type that worked. In my case, it was polling.
So, I changed the SocketIO connect statement in my JS from
var socket = io.connect('http://localhost:8000');
to
var socket = io.connect(null, {port: 8000, rememberTransport: false});
and now there is activity of the websocket type under the Network tab in the Chrome developer tools (which there wasn't previously).
Is it possible to have a single flask app with routes on two different ports? My Flask app needs to listen for webhooks and due to some security biz it can't receive foreign POST requests on the default port. Is it possible to do something like this?
@app.route('/hook/<sourcename>', methods=["POST"], port=5051)
def handle_hook(sourcename):
    print 'asdf'
If you don't need any socket code inside C plugins, gevent could help, e.g. with:
from flask import Flask
import gevent
from gevent.pywsgi import WSGIServer

app = Flask(__name__)

# HOST, HTTP_PORT, HTTPS_PORT, PRIVKEY and CERT are placeholders for your own values
https_server = WSGIServer((HOST, HTTPS_PORT), app, keyfile=PRIVKEY, certfile=CERT)
https_server.start()

http_server = WSGIServer((HOST, HTTP_PORT), app)
http_server.start()

while True:
    gevent.sleep(60)
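If the goal is specifically to keep the webhook route off the default port, a possible variation on the same gevent approach (a sketch only, not the code above) is to serve two separate Flask apps on two ports, so the hook route only exists on the second one:

from flask import Flask
import gevent
from gevent.pywsgi import WSGIServer

public_app = Flask('public')
hook_app = Flask('hooks')

@public_app.route('/')
def index():
    return 'public site'

@hook_app.route('/hook/<sourcename>', methods=['POST'])
def handle_hook(sourcename):
    return 'ok'

if __name__ == '__main__':
    # the public app on one port, the webhook-only app on another
    WSGIServer(('0.0.0.0', 5000), public_app).start()
    WSGIServer(('0.0.0.0', 5051), hook_app).start()
    while True:
        gevent.sleep(60)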
A server by default only listens on a single port. Wouldn't it make more sense, since the additional port requires additional functionality, to implement a front-end server on the second port that proxies the POST requests locally? There are many well-documented ways to do this, such as this one.
I am starting out with web development. I am trying to develop a web app using the Instagram API and Django. I noticed that a lot of people use the Tornado web server for real-time subscriptions. I am using Webfaction as a host, and I found this code so I can wrap my Django project with the "WSGI Container" that the Tornado web server provides:
import os
import tornado.httpserver
import tornado.ioloop
import tornado.wsgi
import tornado.web
import sys

import django.core.handlers.wsgi

sys.path.append('/path/to/project')

class HelloHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('Hello from tornado')

def main():
    os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'  # path to your settings module
    wsgi_app = tornado.wsgi.WSGIContainer(django.core.handlers.wsgi.WSGIHandler())
    tornado_app = tornado.web.Application(
        [
            ('/hello-tornado', HelloHandler),
            ('.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app)),
        ]
    )
    http_server = tornado.httpserver.HTTPServer(tornado_app)
    http_server.listen(8080)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == "__main__":
    main()
So I run this Python script on my Webfaction server, but every time I try to access "http://mywebsite.com/hello-tornado/" it does not seem to work. I know the Tornado web server is running on that port, but I don't know how to access it from the browser. What am I doing wrong here? Thanks for your help and patience. Will cyber high-five for every answer.
EDIT: What I am really trying to do is receive, through Tornado, all the calls from the subscriptions that I make with the Instagram Real-Time Subscription API. For that I have a callback URL, "http://mysite.com/sub", and I want to be able to receive those calls through Tornado.
You are starting the server on port 8080, and web browsers use port 80 by default, so try: http://mywebsite.com:8080/hello-tornado
If you want to use port 80 and you already have a web server running on the box, you can try following Ali-Akber Saifee's suggestion, or run the WSGI application directly from that server using something like mod_python (http://www.modpython.org). You will lose the ability to run Tornado code, but Django will work.
You have to create a custom app (listening on a port), note the port that is assigned to your app, and then configure Tornado to serve on that port: http_server.listen(myport)
You can also avoid Tornado and instead install and serve the Django app directly.
I want a simple python web server for the following use case:
I want to write a simple server that will accept HTTP requests from my application running on Google App Engine.
The server will accept HTTP requests, and then send iPhone notifications. (Basically, I need this extra server to account for the lack of socket support in Google App Engine.)
I guess I need the server to maintain a persistent connection with Apple's Push Notification Service, so I'll need some sort of thread always open for this. So I need some sort of web server that can accept the request and pass it off to the other thread that holds the persistent connection to APNS.
Maybe multiple processes and one of Python's queuing tools to communicate between them? Accept the HTTP request, then enqueue a message to the other process?
I was wondering what someone with a bit of experience would suggest. I'm starting to think that maybe even writing my own simple server is a good option (http://fragments.turtlemeat.com/pythonwebserver.php).
One option would be the (appropriately named) SimpleHTTPServer, which is part of the Python standard library. Another, more flexible but more complicated option would be to write your server in Twisted.
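For reference, a minimal sketch of that standard-library option (in Python 3 the module is named http.server rather than SimpleHTTPServer):

from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == '__main__':
    # serves files from the current working directory on port 8080
    HTTPServer(('0.0.0.0', 8080), SimpleHTTPRequestHandler).serve_forever()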
I've been writing simple http servers using gevent and bottle -- an example:
#!/usr/bin/env python
import gevent.monkey
gevent.monkey.patch_all()

import os

import bottle
bottle.debug(True)

import gevent.wsgi
from bottle import route, run, request, response, static_file, abort

@route('/echo')
def echo():
    s = request.GET.get('s', 'o hai')
    return '<html><head><title>echo server</title></head><body>%s</body></html>\r\n' % (s)

@route('/static/:filename')
def send_static(filename):
    root = os.getcwd() + '/static'
    return static_file(filename, root=root)

if __name__ == '__main__':
    app = bottle.app()
    wsgi_server = gevent.wsgi.WSGIServer(('0.0.0.0', 8000), app)
    print 'Starting wsgi server on port 8000'
    wsgi_server.serve_forever()
So you could write a simple server that sticks a job into a Queue (see gevent.queue) and have another worker greenlet that handles reading requests from the queue and processing them...
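A rough sketch of that pattern (the send_to_apns function below is a hypothetical stand-in for whatever holds the persistent APNS connection):

import gevent.monkey
gevent.monkey.patch_all()

import gevent
from gevent.queue import Queue
from gevent.pywsgi import WSGIServer

import bottle
from bottle import route, request

jobs = Queue()

@route('/notify')
def notify():
    # accept the HTTP request and hand the work to the worker greenlet
    jobs.put(request.GET.get('msg', ''))
    return 'queued\r\n'

def send_to_apns(msg):
    # placeholder: the real version would write to the persistent APNS socket
    print('would send to APNS: %s' % msg)

def worker():
    while True:
        send_to_apns(jobs.get())  # blocks this greenlet only, not the server

if __name__ == '__main__':
    gevent.spawn(worker)
    WSGIServer(('0.0.0.0', 8000), bottle.app()).serve_forever()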