Is it possible to run Tornado so that it listens only on a local port (e.g. localhost:8000)? I can't seem to find any documentation explaining how to do this.
Add an address argument to Application.listen() or HTTPServer.listen().
It's documented here (Application.listen) and here (TCPServer.listen).
For example:
application = tornado.web.Application([
    (r'/blah', BlahHandler),
], **settings)

# Create an HTTP server listening on localhost, port 8080.
http_server = tornado.httpserver.HTTPServer(application)
http_server.listen(8080, address='127.0.0.1')
The documentation shows how to run on a specific port, for example:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
You will find more help at http://www.tornadoweb.org/documentation/overview.html and http://www.tornadoweb.org/documentation/index.html
Once you've defined an application (like in the other answers) in a file (for example server.py), you simply save and run that file.
python server.py
If you want to daemonize Tornado, use supervisord. If you want to access Tornado at an address like http://mylocal.dev/, you should look at nginx and use it as a reverse proxy. And it can be bound to a specific port as shown in Lafada's answer.
I am trying to create a Tornado web server. The quickstart gave me a standard Tornado project, but according to the documentation this configuration is blocking.
I am new to non-blocking Python.
I have this wsgi file, which lies in the root folder on my PaaS server:
#!/usr/bin/env python
import os
import imp
import sys

#
# Below for testing only
#
if __name__ == '__main__':
    ip = 'localhost'
    port = 8051
    zapp = imp.load_source('application', 'wsgi/application')

    from wsgiref.simple_server import make_server
    httpd = make_server(ip, port, zapp.application)
    httpd.serve_forever()
This is the main handler file:
#!/usr/bin/env python
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('index.html')
And the application folder contains this:
# Put your handlers here.
import tornado.wsgi

from .handlers import MainHandler

handlers = [(r'/', MainHandler)]
application = tornado.wsgi.WSGIApplication(handlers, **settings)
In WSGI mode asynchronous methods are not supported, and OpenShift uses WSGI to deploy Python applications.
Is it possible to configure a Python application on OpenShift to be fully non-blocking?
I have seen projects that seemed to work, though.
If you are talking about OpenShift V2 (not V3 which uses Kubernetes/Docker), then you need to use the app.py file as described in:
http://blog.dscpl.com.au/2015/08/running-async-web-applications-under.html
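As a rough sketch (this is not the exact code from that blog post), the idea is for app.py to start Tornado's own HTTP server instead of going through WSGI, binding to the OPENSHIFT_PYTHON_IP / OPENSHIFT_PYTHON_PORT environment variables that OpenShift V2 provides:

import os

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello from a non-blocking Tornado app")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    # OpenShift V2 tells you which address/port to bind to via env vars.
    ip = os.environ.get("OPENSHIFT_PYTHON_IP", "127.0.0.1")
    port = int(os.environ.get("OPENSHIFT_PYTHON_PORT", 8080))
    application.listen(port, address=ip)
    tornado.ioloop.IOLoop.instance().start()

Because Tornado serves the requests itself here, asynchronous handlers keep working, which they would not under the WSGI container.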
How can I manage my RabbitMQ connection in a Pyramid app?
I would like to re-use a connection to the queue throughout the web application's lifetime. Currently I am opening and closing a connection to the queue for every publish call.
But I can't find any "global" services definition in Pyramid. Any help appreciated.
Pyramid does not need a "global services definition" because you can trivially do that in plain Python:
db.py:
connection = None

def connect(url):
    global connection
    connection = FooBarBaz(url)
Your startup file (__init__.py):
from db import connect

if __name__ == '__main__':
    connect(DB_CONNSTRING)
elsewhere:
from db import connection
...
connection.do_stuff(foo, bar, baz)
Having a global (any global) is going to cause problems if you ever run your app in a multi-threaded environment, but is perfectly fine if you run multiple processes, so it's not a huge restriction. If you need to work with threads the recipe can be extended to use thread-local variables. Here's another example which also connects lazily, when the connection is needed the first time.
db.py:
import threading

connections = threading.local()

def get_connection():
    if not hasattr(connections, 'this_thread_connection'):
        connections.this_thread_connection = FooBarBaz(DB_STRING)
    return connections.this_thread_connection
elsewhere:
from db import get_connection
get_connection().do_stuff(foo, bar, baz)
Another common problem with long-living connections is that the application won't auto-recover if, say, you restart RabbitMQ while your application is running. You'll need to somehow detect dead connections and reconnect.
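As a rough sketch of that (using pika's BlockingConnection as an example client; the broker parameters and queue name are placeholders), you can extend the lazy thread-local recipe to check the cached connection and rebuild it when it has died:

import threading

import pika

connections = threading.local()

def get_channel():
    # Reconnect lazily if we have no connection yet or the old one died.
    conn = getattr(connections, 'conn', None)
    if conn is None or conn.is_closed:
        connections.conn = pika.BlockingConnection(
            pika.ConnectionParameters(host='localhost'))
        connections.channel = connections.conn.channel()
    return connections.channel

elsewhere:

get_channel().basic_publish(exchange='', routing_key='myqueue', body='hello')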
It looks like you can attach objects to the request with add_request_method.
Here's a little example app using that method to make one and only one connection to a socket on startup, then make the connection available to each request:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def index(request):
    return Response('I have a persistent connection: {} with id {}'.format(
        repr(request.conn).replace("<", "&lt;"),
        id(request.conn),
    ))

def add_connection():
    import socket
    s = socket.socket()
    s.connect(("google.com", 80))
    print("I should run only once")

    def inner(request):
        return s
    return inner

if __name__ == '__main__':
    config = Configurator()
    config.add_route('index', '/')
    config.add_view(index, route_name='index')
    config.add_request_method(add_connection(), 'conn', reify=True)
    app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 8080, app)
    server.serve_forever()
You'll need to be careful about threading / forking in this case though (each thread / process will need its own connection). Also, note that I am not very familiar with pyramid, there may be a better way to do this.
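If you do need per-thread connections, one way to adapt the example (a sketch only, reusing the thread-local idea from the other answer; the socket target is a placeholder) is to have the factory's inner function create and cache a connection per thread instead of closing over a single shared socket:

import socket
import threading

_local = threading.local()

def add_connection():
    def inner(request):
        # Each worker thread lazily creates and caches its own connection.
        if not hasattr(_local, 'conn'):
            _local.conn = socket.socket()
            _local.conn.connect(("google.com", 80))
        return _local.conn
    return inner

# registered exactly as before:
# config.add_request_method(add_connection(), 'conn', reify=True)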
I would like to create a minimal socket server written in Python that I can run with my OpenShift account. I searched for more than a day and found lots of libraries (Tornado, Django, Twisted, Flask, Autobahn, gevent) that could be used for this, but I could not manage to implement it myself. (Actually, I do not really know the differences between these.)
I looked at lots of tutorials as well and found an implementation using Tornado:
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.template

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        loader = tornado.template.Loader(".")
        self.write('hello world')

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print 'connection opened...'
        self.write_message("The server says: 'Hello'. Connection was accepted.")

    def on_message(self, message):
        self.write_message("The server says: " + message + " back at you")
        print 'received:', message

    def on_close(self):
        print 'connection closed...'

application = tornado.web.Application([
    (r'/ws', WSHandler),
    (r'/', MainHandler),
    (r"/(.*)", tornado.web.StaticFileHandler, {"path": "./resources"}),
])

if __name__ == "__main__":
    application.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
However, I cannot connect to it from a simple HTML5 WebSocket client; furthermore, I get a 503 Service Temporarily Unavailable error when I visit my domain.
Could you please either give me a minimal implementation that works when uploaded to OpenShift (if possible using Tornado, or maybe Django), or link me to a trustworthy, reliable tutorial? I would really appreciate it; I can't get my head around this.
You cannot bind to an arbitrary port on OpenShift like that. I suggest you do this:
import os

ip = os.environ['OPENSHIFT_PYTHON_IP']
port = int(os.environ['OPENSHIFT_PYTHON_PORT'])

application.listen(port, ip)
tornado.ioloop.IOLoop.instance().start()
Check this repo for example: https://github.com/avinassh/openshift-tornado-starter
It doesn't look like OpenShift lets you just run an application like that. You can see a howto guide here: https://github.com/giulivo/openshift-hellotornado
I'm attempting to integrate Fanstatic (fanstatic.org) with Tornado and was curious whether anyone has prior knowledge of, or any code showing, how this is done. I've been reading the documentation and I believe it can be done, since Tornado provides a WSGI interface.
Thanks.
import tornado.wsgi
from fanstatic import Fanstatic
from your_tornado_app import MainHandler  # a tornado.web.RequestHandler

app = tornado.wsgi.WSGIApplication([
    (r"/", MainHandler),
])
app = Fanstatic(app)

if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    server = make_server('127.0.0.1', 8080, app)
    server.serve_forever()
I'm wondering if it is possible in the Tornado framework to register multiple Applications on the same IOLoop.
Something like:
application1 = web.Application([
    (r"/", MainPageHandler),
])
http_server = httpserver.HTTPServer(application1)
http_server.listen(8080)

application2 = web.Application([
    (r"/appli2", MainPageHandler2),
])
http_server2 = httpserver.HTTPServer(application2)
http_server2.listen(8080)

ioloop.IOLoop.instance().start()
Basically I'm trying to structure my webapp so that:
functional applications are separated
multiple handlers with the same purpose (e.g. admin/monitoring/etc) are possible on each webapp
The simple approach is to bind your applications to different ports:
...
http_server = httpserver.HTTPServer(application1)
http_server.listen(8080) # NOTE - port 8080
...
http_server2 = httpserver.HTTPServer(application2)
http_server2.listen(8081) # NOTE - port 8081
ioloop.IOLoop.instance().start()
This is the base case that Tornado makes easy. The challenge is that by routing to applications at the URI level you're crossing a design boundary: each application is responsible for all of the URIs that are requested of it.
If they really need to be serviced at the URI level rather than on separate ports, it would probably be best to host the different applications on different ports and have Nginx/Apache do the URI routing; anything that involves messing with the Application/Request handling is going to be a world of hurt.
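If you would rather keep everything inside Tornado, one alternative (a sketch only, reusing the handler names from the question) is to give up on separate Application objects and merge the handler lists into a single Application, separating the functional areas by URL prefix:

import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainPageHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("app 1")

class MainPageHandler2(tornado.web.RequestHandler):
    def get(self):
        self.write("app 2")

# One Application, one port; each functional area lives under its own prefix.
application = tornado.web.Application([
    (r"/", MainPageHandler),
    (r"/appli2", MainPageHandler2),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8080)
    tornado.ioloop.IOLoop.instance().start()

You lose the clean separation between the two applications, but you keep all the routing in one process without needing a front-end proxy.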