Call Tornado WebSocketHandler from RequestHandler - python

I am using the Tornado web server and want to internally call a WebSocketHandler from a RequestHandler.
It is not possible to use the redirect / RedirectHandler functionality, because the WebSocketHandler class to call ("IndexHandlerDynamic1" in the example below) is created by a class factory.
Using the definition of RequestHandler (here), my example looks like:
class IndexHandlerDynamic1(tornado.websocket.WebSocketHandler):
    def initialize(self):
        print "Forwarded to Websocket"

    def open(self):
        print "WebSocket opened"

class IndexHandlerDistributor(tornado.web.RequestHandler):
    def get(self, channelId):
        IndexHandlerDynamic1(self.application, self.request)
If I request the related URL, the request is routed into IndexHandlerDistributor and IndexHandlerDynamic1.initialize() is called.
But on the client side the browser console outputs the following error:
Error during WebSocket handshake: Unexpected response code: 200
Obviously the socket connection is not opened correctly. What is my mistake?
EDIT:
Thanks to Ben for his help!
Sadly I still have trouble routing the user to a dynamically created class named after a URL parameter. I hope you can understand my problem by having a look at my example:
app = tornado.web.Application(
    [(r"/", IndexHandler)] +
    [(r"/channel/(?P<channelId>[^\/]+)?", ClassFactory(channelId))]
)
How can I use channelId as a parameter for my call to ClassFactory in the routing table?
Or is there maybe another way to dynamically change the routing of my application while it is running? If so, I could use that to solve my initial task.

The problem is that you're attaching two RequestHandlers to the same request. I'm not sure that dynamically creating handler classes is a great idea, but if you want to do it, just pass your factory function (which is not itself a RequestHandler) to the URL routing table. The routing table doesn't necessarily need a RequestHandler subclass; it just needs an object that can be called with (app, request) and returns a RequestHandler instance.
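A minimal sketch of that idea, assuming the IndexHandler from the question and a hypothetical handler_factory in place of ClassFactory(channelId); exact behavior can vary between Tornado versions, and channelId is captured by the URL pattern and handed to open() rather than to the factory:

import tornado.web
import tornado.websocket

# hypothetical factory: Tornado calls it with (application, request) when the
# route matches and expects a RequestHandler instance back
def handler_factory(application, request, **kwargs):
    class DynamicHandler(tornado.websocket.WebSocketHandler):
        def open(self, channelId):
            print "WebSocket opened for channel %s" % channelId
    return DynamicHandler(application, request, **kwargs)

app = tornado.web.Application(
    [(r"/", IndexHandler)] +
    [(r"/channel/(?P<channelId>[^\/]+)?", handler_factory)]
)

Because the factory is only invoked when a matching request arrives, it does not need to know channelId when the Application is constructed.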

Related

Call tornado RequestHandler function without running the tornado loop?

I need to capture the output of a request handler's get function from outside the application, before starting the Tornado server.
Example:
    class Test(RequestHandler):
        def get(self):
            print "safds"
            ...
I need to call the get function from outside, without starting the Tornado loop/server. Is it possible? Is there any workaround? Please help.
Thanks
If you happen to end up reading this question: I knew that I had to somehow create an instance of my handler to be able to call the post or get function inside. After looking at the RequestHandler implementation, I came up with the following snippet:
from mock import Mock  # unittest.mock on Python 3
from tornado.web import Application
from tornado.httpserver import HTTPRequest

mock_app = Mock(spec=Application)
request = HTTPRequest(
    method='GET', uri='/', headers=None, body=None
)
response = Handler(mock_app, request).get()
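Note that get() itself usually returns None; whatever the handler passes to self.write() is collected on the handler instance, so it can help to keep a reference to it. A small sketch of that, relying on the private _write_buffer attribute (which may change between Tornado versions):

    handler = Handler(mock_app, request)
    handler.get()
    # self.write() appends chunks to the handler's internal buffer rather
    # than returning them, so inspect the instance afterwards
    print handler._write_buffer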

tornado one handler blocks for another

Using python/tornado I wanted to set up a little "trampoline" server that allows two devices to communicate with each other in a RESTish manner. There are probably vastly superior/simpler "off the shelf" ways to do this. I'd welcome those suggestions, but I still feel it would be educational to figure out how to do my own using tornado.
Basically, the idea was that the device in the role of server would do a long poll with a GET. The client device would POST to the server, at which point the POST body would be transferred as the response of the blocked GET. The POST would then block until the server side does a PUT with the response, which is transferred to the blocked POST and returned to the device. I thought maybe I could do this with tornado.queues, but that appears not to have worked out. My code:
import tornado
import tornado.web
import tornado.httpserver
import tornado.queues

ToServerQueue = tornado.queues.Queue()
ToClientQueue = tornado.queues.Queue()

class Query(tornado.web.RequestHandler):
    def get(self):
        toServer = ToServerQueue.get()
        self.write(toServer)

    def post(self):
        toServer = self.request.body
        ToServerQueue.put(toServer)
        toClient = ToClientQueue.get()
        self.write(toClient)

    def put(self):
        ToClientQueue.put(self.request.body)
        self.write(bytes())

services = tornado.web.Application([(r'/query', Query)], debug=True)
services.listen(49009)
tornado.ioloop.IOLoop.instance().start()
Unfortunately, ToServerQueue.get() does not actually block until the queue has an item, but rather returns a tornado.concurrent.Future, which is not a legal value to pass to the self.write() call.
I guess my general question is twofold:
1) How can one HTTP verb invocation (e.g. get, put, post, etc.) block and then be signaled by another HTTP verb invocation?
2) How can I share data from one invocation to another?
I've only really scratched the surface of the simple/straightforward use cases of making little REST servers with tornado. I wonder if the coroutine stuff is what I need, but I haven't found a good tutorial/example of that to help me see the light, if that's indeed the way to go.
1) How can one HTTP verb invocation (e.g. get, put, post, etc.) block and then be signaled by another HTTP verb invocation?
2) How can I share data from one invocation to another?
A new RequestHandler object is created for every request, so you need some coordinator, e.g. queues or locks with a state object (in your case that would amount to re-implementing a queue).
tornado.queues are queues for coroutines. Queue.get, Queue.put and Queue.join return Future objects that need to be "resolved", i.e. the scheduled task completes either with success or with an exception. To wait until a future is resolved you should yield it (just like in the doc examples of tornado.queues). The verb methods also need to be decorated with tornado.gen.coroutine.
import tornado.gen

class Query(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        toServer = yield ToServerQueue.get()
        self.write(toServer)

    @tornado.gen.coroutine
    def post(self):
        toServer = self.request.body
        yield ToServerQueue.put(toServer)
        toClient = yield ToClientQueue.get()
        self.write(toClient)

    @tornado.gen.coroutine
    def put(self):
        yield ToClientQueue.put(self.request.body)
        self.write(bytes())
The GET request will last (waiting in a non-blocking manner) until something becomes available on the queue (or until a timeout, which can be passed as a Queue.get argument, expires).
tornado.queues.Queue also provides get_nowait (and put_nowait as well), which does not have to be yielded: it returns an item from the queue immediately or raises an exception.
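For illustration, a small sketch of the non-yielding variant (the helper name drain_pending is made up here; QueueEmpty comes from tornado.queues):

import tornado.queues

def drain_pending(queue):
    # grab everything currently in the queue without waiting
    items = []
    while True:
        try:
            items.append(queue.get_nowait())
        except tornado.queues.QueueEmpty:
            return items

# the yielding form with a deadline, inside a coroutine, would instead look like:
#     toServer = yield ToServerQueue.get(timeout=datetime.timedelta(seconds=30))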

How to push notification from server (django) to client (socketio)?

I want to emit a message from the server to the client.
I have looked at this, but cannot use it because I cannot create a namespace instance:
How to emit SocketIO event on the serverside
My use case is:
I have a database of product prices. A lot of users are currently surfing my website; some of them are viewing product X.
On the server side, the admin can edit the price of a product. If he edits the price of X, all clients must see a notification that the price of X changed (e.g. a simple JS alert).
My client JavaScript now:
    var socket = io.connect('/product');

    // notify the server that this client is viewing product X
    socket.emit("join", current_product.id);

    // upon receiving a msg from the server
    socket.on('notification', function (data) {
        alert("Price change");
    });
My server code (socket.py):
    @namespace('/products')
    class ProductsNamespace(BaseNamespace, ProductSubscriberMixin):
        def initialize(self, *args, **kwargs):
            _connections[id(self)] = self
            super(ProductsNamespace, self).initialize(*args, **kwargs)

        def disconnect(self, *args, **kwargs):
            del _connections[id(self)]
            super(ProductsNamespace, self).disconnect(*args, **kwargs)

        def on_join(self, *args):
            print "joining"

        def emit_to_subscribers(self): pass
I use the runserver_socketio.py as in this link.
(Thanks to Calvin Cheng for this excellent up-to-date example.)
I don't know how to call emit_to_subscribers, since I have no instance of the namespace.
As I read in this doc:
Namespaces are created only when some packets arrive that ask for the namespace.
But how can I send the packet to that namespace from the code? If I can only create the instance when a client emits a message to the server, then when no one is surfing the site right after the admin finishes editing the price, the system will fail.
I am really confused about the namespace and its instance. If you have any clearer docs, please help me.
Thanks a lot!
This is my current state of understanding; hopefully it will be helpful to someone. Building further on How to emit SocketIO event on the serverside, you now have a dictionary with ProductsNamespace objects as values. You can iterate through this dictionary to find the desired socket object. For example, if you set a socket identifier upon connection, as described in the Django and Flask example apps (using the on_nickname method), then you can retrieve the socket like so:
for key in _connections:
    socket = _connections[key]
    if 'nickname' in socket.session and socket.session['nickname'] == unicode('uniqueName'):
        socket.emit('eventTag', 'message from server')
Similarly, socket.session['rooms'] can be used to emit to all members of a room, and if there are multiple SocketIO namespaces, socket.ns_name can be used; a sketch of the room case follows.
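A hedged sketch of that room-based emit, assuming the _connections dictionary from above and a RoomsMixin-style 'rooms' entry in each socket's session (the helper name emit_to_room is made up here):

def emit_to_room(room_name, event_tag, message):
    # loop over every connected ProductsNamespace and emit only to the
    # sockets whose session says they joined the given room
    for socket in _connections.values():
        if room_name in socket.session.get('rooms', []):
            socket.emit(event_tag, message)

# e.g. after the admin saves a new price for product X:
# emit_to_room('product_%s' % product.id, 'notification', {'price': new_price})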

What's the Google App Engine equivalent of ASP.NET's Server.Transfer?

Server.Transfer is sort of like a Redirect except instead of requesting the browser to do another page fetch, it triggers an internal request that makes the request handler "go to" another request handler.
Is there a Python equivalent to this in Google App Engine?
Edit: webapp2
With most Python frameworks the request handler is simply a function: I should imagine you can just import the actual handler function you want to use and pass it the parameters you received in the current handler function.
In Django, for example, you usually have a view function that takes at least one parameter, the request object. You should be able to simply import the next handler and then return the result of executing it. Something like:
def actual_update_app_queue_settings(request):
    return HttpResponse()

def update_app_queue_settings(request):
    return actual_update_app_queue_settings(request)
For the framework you've mentioned, probably something like this:
class ProductHandler(webapp2.RequestHandler):
    def get(self, product_id):
        self.response.write('You requested product %r.' % product_id)

class ProductHandler2(webapp2.RequestHandler):
    def get(self, product_id):
        nph = ProductHandler()
        nph.initialize(self.request, self.response)
        nph.get(product_id)
I'm fudging that by looking at http://webapp-improved.appspot.com/guide/handlers.html: it looks reasonable. If you're using route annotations I'm honestly not sure what you do, but that might do it.
Usually, you just have to call the corresponding method.
To be more specific: which flavour of App Engine are you using? Java, Python, Go... PHP?
If you are using Java/servlets, then the "forward" is:
    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        request.getRequestDispatcher("/newurl").forward(request, response);
    }

In Pyramid, how to check if view is static in NewRequest event handler?

I have a NewRequest event handler (subscriber) in Pyramid which looks like this:
@subscriber(NewRequest)
def new_request_subscriber(event):
    request = event.request
    print('Opening DB conn')
    # Open the DB
    request.db = my_connect_to_db()
    request.add_finished_callback(close_db_connection)
However, I have observed that a connection to the DB is opened even if the request goes to a static asset, which is obviously unnecessary. Is there a way, from the NewRequest handler, to check if the request is bound for a static asset? I have tried comparing the view_name to my static view's name, but apparently the view_name attribute is not available at this early stage of processing the request.
If anyone has any interesting ideas about this, please let me know!
The brute force way is to compare the request.path variable to your static view's root, a la request.path.startswith('/static/').
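A short sketch of that brute-force check, assuming the static view is mounted at /static/ and reusing the my_connect_to_db/close_db_connection helpers from the question:

from pyramid.events import NewRequest, subscriber

@subscriber(NewRequest)
def new_request_subscriber(event):
    request = event.request
    if request.path.startswith('/static/'):
        return  # static asset: skip opening a DB connection
    request.db = my_connect_to_db()
    request.add_finished_callback(close_db_connection)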
The method I like the best and use in my own apps is to add a property to the request object called db that is lazily evaluated upon access. So while you may add it to the request, it doesn't do anything until it is accessed.
import types

def get_db_connection(request):
    if not hasattr(request, '_db'):
        request._db = my_connect_to_db()
        request.add_finished_callback(close_db_connection)
    return request._db

def new_request_subscriber(event):
    request = event.request
    request.db = types.MethodType(get_db_connection, request)
Later in your code you can access request.db() to get the connection. Unfortunately it's not possible to add a property to an object instance at runtime (afaik), so you can't set it up so that plain request.db gives you what you want. You can get this behavior without using a subscriber via the cookbook entry where you subclass Request and add your own lazy property with Pyramid's @reify decorator; a sketch of that follows.
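A hedged sketch of that subclass-plus-reify approach, reusing the hypothetical my_connect_to_db/close_db_connection helpers from the question:

from pyramid.config import Configurator
from pyramid.decorator import reify
from pyramid.request import Request

class MyRequest(Request):
    @reify
    def db(self):
        # evaluated once per request, on first access to request.db
        conn = my_connect_to_db()
        self.add_finished_callback(close_db_connection)
        return conn

config = Configurator(request_factory=MyRequest)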
def _connection(request):
    print "******Create connection***"
    # conn = request.registry.dbsession()
    conn = MySQLdb.connect("localhost", "DB_Login_Name", "DB_Password", "data_base_name")

    def cleanup(_):
        conn.close()
    request.add_finished_callback(cleanup)
    return conn

@subscriber(NewRequest)
def new_request_subscriber(event):
    print "new_request_subscriber"
    request = event.request
    request.set_property(_connection, "db", reify=True)
Try this one. I referred to the following web page, the "set_property" section; it works for me:
http://pyramid.readthedocs.org/en/1.3-branch/api/request.html
