I would like to create a minimal socket server written in Python that I can run on my OpenShift account. I searched for more than a day and found lots of libraries (Tornado, Django, Twisted, Flask, Autobahn, gevent) that could be used for this, but I could not manage to get any of them working. (Actually, I do not really know the differences between them.)
I looked at lots of tutorials as well and found an implementation using Tornado:
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.template

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        loader = tornado.template.Loader(".")
        self.write('hello world')

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print 'connection opened...'
        self.write_message("The server says: 'Hello'. Connection was accepted.")

    def on_message(self, message):
        self.write_message("The server says: " + message + " back at you")
        print 'received:', message

    def on_close(self):
        print 'connection closed...'

application = tornado.web.Application([
    (r'/ws', WSHandler),
    (r'/', MainHandler),
    (r"/(.*)", tornado.web.StaticFileHandler, {"path": "./resources"}),
])

if __name__ == "__main__":
    application.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
However, I cannot connect to it from a simple HTML5 WebSocket client; furthermore, I get 503 Service Temporarily Unavailable when I enter my domain.
Could you please either give me a minimal implementation (if possible using Tornado, or maybe Django) that works if I upload it to OpenShift, or link me to a trustworthy, reliable tutorial? I would really appreciate it; I can't get my head around this.
You cannot bind to an arbitrary port on OpenShift like that. I suggest you do this:
import os

ip = os.environ['OPENSHIFT_PYTHON_IP']
port = int(os.environ['OPENSHIFT_PYTHON_PORT'])
application.listen(port, ip)
tornado.ioloop.IOLoop.instance().start()
Check this repo for example: https://github.com/avinassh/openshift-tornado-starter
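For reference, a complete minimal sketch combining this with the Tornado handler above might look like the following (untested; it assumes the OpenShift Python cartridge sets OPENSHIFT_PYTHON_IP and OPENSHIFT_PYTHON_PORT, as the starter repo does):

import os

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('hello world')

application = tornado.web.Application([
    (r'/', MainHandler),
])

if __name__ == "__main__":
    # Bind to the IP and port OpenShift assigns instead of a hard-coded port.
    ip = os.environ['OPENSHIFT_PYTHON_IP']
    port = int(os.environ['OPENSHIFT_PYTHON_PORT'])
    application.listen(port, ip)
    tornado.ioloop.IOLoop.instance().start()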
It doesn't look like OpenShift lets you just run an application like that. You can see a howto guide here: https://github.com/giulivo/openshift-hellotornado
Related
How can I manage my rabbit-mq connection in a Pyramid app?
I would like to re-use a connection to the queue throughout the web application's lifetime. Currently I am opening/closing a connection to the queue for every publish call.
But I can't find any "global" services definition in Pyramid. Any help appreciated.
Pyramid does not need a "global services definition" because you can trivially do that in plain Python:
db.py:

connection = None

def connect(url):
    global connection
    connection = FooBarBaz(url)

your startup file (__init__.py):

from db import connect

if __name__ == '__main__':
    connect(DB_CONNSTRING)

elsewhere:

from db import connection
...
connection.do_stuff(foo, bar, baz)
Having a global (any global) is going to cause problems if you ever run your app in a multi-threaded environment, but it is perfectly fine if you run multiple processes, so it's not a huge restriction. If you need to work with threads, the recipe can be extended to use thread-local variables. Here's another example, which also connects lazily, when the connection is needed for the first time.
db.py:

import threading

connections = threading.local()

def get_connection():
    if not hasattr(connections, 'this_thread_connection'):
        connections.this_thread_connection = FooBarBaz(DB_STRING)
    return connections.this_thread_connection

elsewhere:

from db import get_connection
get_connection().do_stuff(foo, bar, baz)
Another common problem with long-living connections is that the application won't auto-recover if, say, you restart RabbitMQ while your application is running. You'll need to somehow detect dead connections and reconnect.
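A minimal sketch of such a retry wrapper, building on the thread-local get_connection() above (publish() and ConnectionError here are stand-ins for whatever your client library actually provides and raises):

def reset_connection():
    # Drop the cached connection so the next call reconnects lazily.
    if hasattr(connections, 'this_thread_connection'):
        del connections.this_thread_connection

def publish_with_retry(message, retries=1):
    # On a dead connection, reconnect and try again.
    for attempt in range(retries + 1):
        try:
            return get_connection().publish(message)
        except ConnectionError:
            reset_connection()
            if attempt == retries:
                raise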
It looks like you can attach objects to the request with add_request_method.
Here's a little example app using that method to make one and only one connection to a socket on startup, then make the connection available to each request:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def index(request):
    return Response('I have a persistent connection: {} with id {}'.format(
        repr(request.conn).replace("<", "&lt;"),
        id(request.conn),
    ))

def add_connection():
    import socket
    s = socket.socket()
    s.connect(("google.com", 80))
    print("I should run only once")
    def inner(request):
        return s
    return inner

if __name__ == '__main__':
    config = Configurator()
    config.add_route('index', '/')
    config.add_view(index, route_name='index')
    config.add_request_method(add_connection(), 'conn', reify=True)
    app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 8080, app)
    server.serve_forever()
You'll need to be careful about threading / forking in this case though (each thread / process will need its own connection). Also, note that I am not very familiar with Pyramid; there may be a better way to do this.
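As a hedged sketch of one way to handle the threading concern, the add_connection() factory above could cache the socket in a threading.local so each thread lazily opens its own connection:

import socket
import threading

def add_connection():
    local = threading.local()
    def inner(request):
        # Each thread lazily opens and caches its own socket.
        if not hasattr(local, 'conn'):
            local.conn = socket.socket()
            local.conn.connect(("google.com", 80))
        return local.conn
    return inner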
I'm new to Python and currently researching its viability for use as a SOAP server. I currently have a very rough application that uses the blocking MySQL API, but I would like to try Twisted's adbapi. I've successfully used Twisted's adbapi in regular Twisted code using reactors, but I can't seem to make it work with the code below, which uses the ZSI framework. It's not returning anything from MySQL. Has anyone ever used Twisted adbapi with ZSI?
import os
import sys
from dpac_server import *
from ZSI.twisted.wsgi import (SOAPApplication,
                              soapmethod,
                              SOAPHandlerChainFactory)
from twisted.enterprise import adbapi
import MySQLdb

def _soapmethod(op):
    op_request = GED("http://www.example.org/dpac/", op).pyclass
    op_response = GED("http://www.example.org/dpac/", op + "Response").pyclass
    return soapmethod(op_request.typecode, op_response.typecode,
                      operation=op, soapaction=op)

class DPACServer(SOAPApplication):
    factory = SOAPHandlerChainFactory

    @_soapmethod('GetIPOperation')
    def soap_GetIPOperation(self, request, response, **kw):
        dbpool = adbapi.ConnectionPool("MySQLdb", '127.0.0.1', 'def_user', 'def_pwd', 'def_db', cp_reconnect=True)
        def _dbSPGeneric(txn, cmts):
            txn.execute("call def_db.getip(%s)", (cmts, ))
            return txn.fetchall()
        def dbSPGeneric(cmts):
            return dbpool.runInteraction(_dbSPGeneric, cmts)
        def returnResults(results):
            response.Result = results
        def showError(msg):
            response.Error = msg
        response.Result = ""
        response.Error = ""
        d = dbSPGeneric(request.Cmts)
        d.addCallbacks(returnResults, showError)
        return request, response

def main():
    from wsgiref.simple_server import make_server
    from ZSI.twisted.wsgi import WSGIApplication
    application = WSGIApplication()
    httpd = make_server('127.0.0.1', 8080, application)
    application['dpac'] = DPACServer()
    print "listening..."
    httpd.serve_forever()

if __name__ == '__main__':
    main()
The code you posted creates a new ConnectionPool per (some kind of) request and it never stops the pool. This means you'll eventually run out of resources and you won't be able to service any more requests. "Eventually" is probably after one or two or three requests.
If you never get any responses perhaps this isn't the problem you've encountered. It will be a problem at some point though.
On closer inspection, I wonder if this code even runs the Twisted reactor at all. On first read, I thought you were using some ZSI Twisted integration to run your server. Now I see that you're using wsgiref.simple_server. I am moderately confident that this won't work.
You're already using Twisted; use Twisted's WSGI server instead.
Beyond that, verify that ZSI executes your callbacks in the correct thread. The default for WSGI applications is to run in a non-reactor thread. Twisted APIs are not thread-safe, so if ZSI doesn't do something to correct for this, you'll have bugs introduced by using un-thread-safe APIs in threads.
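As an untested sketch, running the same WSGI application under Twisted's own container (instead of wsgiref) might look like this; note that WSGIResource still runs the application code in a threadpool, so Twisted APIs must be reached via reactor.callFromThread:

from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource

from ZSI.twisted.wsgi import WSGIApplication

application = WSGIApplication()
application['dpac'] = DPACServer()

# Serve the WSGI app from Twisted's WSGI container so the reactor runs.
resource = WSGIResource(reactor, reactor.getThreadPool(), application)
reactor.listenTCP(8080, Site(resource), interface='127.0.0.1')
reactor.run()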
I have a Python script that I'd like to be run from the browser. It seems mod_wsgi is the way to go, but that method feels too heavyweight and would require modifications to the script for the output. I guess I'd like a PHP-style approach, ideally. The script doesn't take any input and will only be accessible on an internal network.
I'm running Apache on Linux with mod_wsgi already set up. What are the options here?
I would go the micro-framework approach just in case your requirements change - and you never know, it may end up being an app rather than just a basic dump... Perhaps the simplest (and old-fashioned!?) way is using CGI (see the sketch after these steps):
Duplicate your script and include print 'Content-Type: text/plain\n' before any other output to sys.stdout
Put that script somewhere apache2 can access it (your cgi-bin for instance)
Make sure the script is executable
Make sure .py is added to the Apache CGI handler
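For instance, a minimal CGI version of such a script (report.py is a hypothetical name) could be:

#!/usr/bin/env python
# report.py -- drop into cgi-bin and make executable (chmod +x report.py)
print 'Content-Type: text/plain\n'

# ... run your existing script here, printing its output to stdout ...
print 'Hello from the script'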
But I don't see any way this is going to be a fantastic advantage (in the long run, at least).
You could use any of python's micro frameworks to quickly run your script from a server. Most include their own lightweight servers.
From the CherryPy home page documentation:
import cherrypy

class HelloWorld(object):
    def index(self):
        # run your script here
        return "Hello World!"
    index.exposed = True

cherrypy.quickstart(HelloWorld())
Additionally, Python provides the tools necessary to do what you want in its standard library, using BaseHTTPServer.
A basic server using BaseHTTPServer:
import time
import BaseHTTPServer

HOST_NAME = 'example.net'  # !!!REMEMBER TO CHANGE THIS!!!
PORT_NUMBER = 80  # Maybe set this to 9000.

class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_HEAD(s):
        s.send_response(200)
        s.send_header("Content-type", "text/html")
        s.end_headers()

    def do_GET(s):
        """Respond to a GET request."""
        s.send_response(200)
        s.send_header("Content-type", "text/html")
        s.end_headers()
        s.wfile.write("<html><head><title>Title goes here.</title></head>")
        s.wfile.write("<body><p>This is a test.</p>")
        # If someone went to "http://something.somewhere.net/foo/bar/",
        # then s.path equals "/foo/bar/".
        s.wfile.write("<p>You accessed path: %s</p>" % s.path)
        s.wfile.write("</body></html>")

if __name__ == '__main__':
    server_class = BaseHTTPServer.HTTPServer
    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
    print time.asctime(), "Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER)
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    print time.asctime(), "Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER)
What's nice about the microframeworks is that they abstract away writing headers and such (but should still provide you an interface to them if required).
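For example, here's a minimal Flask sketch (Flask being one such microframework; this is illustrative and not tied to any particular script):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # Flask writes the status line and headers for you.
    return 'Hello World!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)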
Is it possible to run Tornado such that it listens only on a local port (e.g. localhost:8000)? I can't seem to find any documentation explaining how to do this.
Add an address argument to Application.listen() or HTTPServer.listen().
It's documented here (Application.listen) and here (TCPServer.listen).
For example:
application = tornado.web.Application([
    (r'/blah', BlahHandler),
], **settings)

# Create an HTTP server listening on localhost, port 8080.
http_server = tornado.httpserver.HTTPServer(application)
http_server.listen(8080, address='127.0.0.1')
In the documentation they mention running on a specific port like:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
You will get more help from http://www.tornadoweb.org/documentation/overview.html and http://www.tornadoweb.org/documentation/index.html
Once you've defined an application (like in the other answers) in a file (for example server.py), you simply save and run that file.
python server.py
If you want to daemonize Tornado, use supervisord. If you want to access Tornado on an address like http://mylocal.dev/, you should look at nginx and use it as a reverse proxy. And it can be bound to a specific port as in Lafada's answer.
I am new to Python and the Tornado web server.
I am trying to figure out the number of requests and the number of requests/second in my server-side code. I am using Tornadio2 to implement WebSockets.
Kindly take a look at the following code and let me know what modifications can be made to it.
I am using RequestHandler.prepare() to funnel all the requests through one place, and a list (since it is mutable) to store the count.
Assume all modules are included.
count = [0]

class IndexHandler(tornado.web.RequestHandler):
    """Regular HTTP handler to serve the chatroom page"""
    def prepare(self):
        count[0] = count[0] + 1

    def get(self):
        self.render('index1.html')

class SocketIOHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('../socket.io.js')

partQue = Queue.Queue()

class ChatConnection(tornadio2.conn.SocketConnection):
    participants = set()

    def on_open(self, info):
        self.send("Welcome from the server.")
        self.participants.add(self)

    def on_message(self, message):
        partQue.put(message)
        time.sleep(10)
        self.qmes = partQue.get()
        for p in self.participants:
            p.send(self.qmes + " " + str(count[0]))
        partQue.task_done()

    def on_close(self):
        self.participants.remove(self)
        partQue.join()

# Create tornadio server
ChatRouter = tornadio2.router.TornadioRouter(ChatConnection)

# Create socket application
sock_app = tornado.web.Application(
    ChatRouter.urls,
    flash_policy_port=843,
    flash_policy_file=op.join(ROOT, 'flashpolicy.xml'),
    socket_io_port=8002)

# Create HTTP application
http_app = tornado.web.Application(
    [(r"/", IndexHandler), (r"/socket.io.js", SocketIOHandler)])

if __name__ == "__main__":
    import logging
    logging.getLogger().setLevel(logging.DEBUG)

    # Create http server on port 8001
    http_server = tornado.httpserver.HTTPServer(http_app)
    http_server.listen(8001)

    # Create tornadio server on port 8002, but don't start it yet
    tornadio2.server.SocketServer(sock_app, auto_start=False)

    # Start both servers
    tornado.ioloop.IOLoop.instance().start()
Also, I am confused about the WebSocket messages. Does each WebSocket event get to the server in the form of an HTTP request, or a Socket.IO request?
I use Siege - an excellent tool for testing requests if you're running on Linux. Example:
siege http://localhost:8000/?q=yourquery -c10 -t10s
-c10 = 10 concurrent users
-t10s = 10 seconds
Tornadio2 has a built-in statistics module, which includes incoming connections/s and other counters.
Check the following example: https://github.com/MrJoes/tornadio2/tree/master/examples/stats
When testing applications, always approach performance testing with a healthy appreciation for the uncertainty principle.
If you want to test a server, hook up two PCs to a hub where you can monitor the traffic from one going to the other. Then bang the hell out of the server. There are a variety of tools for doing this; just look for web load testing tools.
Normal HTTP requests in Tornado create a new RequestHandler instance, which persists until the connection is terminated.
WebSockets use persistent connections. One WebSocketHandler instance is created, and each message sent by the browser to the server calls the on_message method.
From what I understand, Socket.IO/TornadIO will use WebSockets if supported by the browser, falling back to long polling.
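To make that lifecycle concrete, here is a minimal sketch using plain tornado.websocket (plain Tornado rather than Tornadio2, purely for illustration) that counts messages the same way the question counts requests:

import tornado.ioloop
import tornado.web
import tornado.websocket

message_count = [0]  # mutable holder, as in the question

class EchoHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        # One handler instance per connection; open() runs once.
        self.write_message("connected")

    def on_message(self, message):
        # Called once per message on the same persistent connection.
        message_count[0] += 1
        self.write_message("%s (message #%d)" % (message, message_count[0]))

application = tornado.web.Application([(r'/ws', EchoHandler)])

if __name__ == "__main__":
    application.listen(8001)
    tornado.ioloop.IOLoop.instance().start()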