Blocking Thrift calls with Twisted - python

I have a Twisted/Thrift server that uses the TTwisted protocol. I want to keep connections from clients open until a given event happens (I am notified of this event by the Twisted reactor via a callback).
class ServiceHandler(object):
    interface.implements(Service.Iface)

    def listenForEvent(self):
        """
        A method defined in my Thrift interface that should block the Thrift
        response until my event of interest occurs.
        """
        # I need to somehow return immediately to free up the reactor thread,
        # but I don't want the Thrift call to be considered completed. The current
        # request needs to be marked as "waiting" somehow.

    def handleEvent(self):
        """
        A method called when the event of interest occurs. It is a callback method
        registered with the Twisted reactor.
        """
        # Find all the "waiting" requests and notify them that the event occurred.
        # This will cause all of the Thrift requests to complete.
How can I quickly return from a method of my handler object while maintaining the illusion of a blocking Thrift call?
I initialize the Thrift handler from the Twisted startup trigger:
def on_startup():
    handler = ServiceHandler()
    processor = Service.Processor(handler)
    server = TTwisted.ThriftServerFactory(processor=processor,
        iprot_factory=TBinaryProtocol.TBinaryProtocolFactory())
    reactor.listenTCP(9160, server)
My client in PHP connects with:
$socket = new TSocket('localhost', 9160);
$transport = new TFramedTransport($socket);
$protocol = new TBinaryProtocol($transport);
$client = new ServiceClient($protocol);
$transport->open();
$client->listenForEvent();
That last call ($client->listenForEvent()) successfully makes it over to the server and runs ServiceHandler.listenForEvent, but even when that server method returns a twisted.internet.defer.Deferred() instance, the client immediately receives an empty array and I get the exception:
exception 'TTransportException' with message 'TSocket: timed out
reading 4 bytes from localhost:9160 to local port 38395'

You should be able to return a Deferred from listenForEvent. Later handleEvent should fire that returned Deferred (or those returned Deferreds) to actually generate the response.
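For example, a minimal sketch of that idea (the waiting list and the "event happened" result value are illustrative, not from the original code):
from twisted.internet import defer

class ServiceHandler(object):
    interface.implements(Service.Iface)

    def __init__(self):
        self.waiting = []  # Deferreds for requests parked on the event

    def listenForEvent(self):
        # Return an unfired Deferred: TTwisted keeps the Thrift request
        # pending until it fires, and the reactor thread stays free.
        d = defer.Deferred()
        self.waiting.append(d)
        return d

    def handleEvent(self):
        # Fire every parked Deferred; each one completes its Thrift response.
        waiting, self.waiting = self.waiting, []
        for d in waiting:
            d.callback("event happened")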

The error you're seeing seems to indicate that the transport is not framed (Twisted needs framing so it can know the length of each message beforehand). Also, the Twisted Thrift server supports returning Deferreds from handlers, which makes this even stranger. Have you tried returning defer.succeed("some value") to check that Deferreds work at all? You can then move on to this to check that firing them asynchronously works too:
d = defer.Deferred()
reactor.callLater(0, d.callback, results)
return d

Related

Python openapi-generator how to make http asynchronous calls

I'm creating an auto-generated API client using the openapi-generator for Python. This API has some asynchronous functions that return lines of JSON data as they arrive.
I would like to have a callback that receives this data and processes it on a separate thread.
Incidentally, the generated Python code says this in the call functions' docstrings:
"""
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.rs_gxs_channels_turtle_search_request(async_req=True)
>>> result = thread.get()
"""
So it seems to support async calls in some way.
But it is not defined how to work with them. For example, if I call:
thread = api_instance.rs_gxs_channels_turtle_search_request(async_req=True, req_rs_gxs_channels_turtle_search_request=req_rs_gxs_channels_turtle_search_request)
result = thread.get()
print("should be printed")
The program just stops; the "should be printed" phrase is never printed until the connection is closed by the server. So it is not async, and it is not running on a separate thread.
What I expect is a way to set a callback and make the call truly asynchronous.
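For what it's worth, in these generated clients async_req=True typically submits the call to a thread pool and returns a multiprocessing.pool.ApplyResult-style handle whose get() blocks until the response is complete. A sketch of polling it instead of blocking (only the call from the question is real; the polling loop is illustrative):
import time

thread = api_instance.rs_gxs_channels_turtle_search_request(
    async_req=True,
    req_rs_gxs_channels_turtle_search_request=req_rs_gxs_channels_turtle_search_request,
)

# get() blocks, but ready() lets us poll while doing other work.
while not thread.ready():
    time.sleep(0.5)  # do other work here instead of sleeping

result = thread.get()  # no longer blocks once ready() is True
print("should be printed")
Note that a streaming endpoint which never finishes will never become ready(), so a true per-line callback still needs support from the generated client.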

Tornado 4.x solution of running game on ThreadPoolExecutor not working anymore. Need help refactoring it

My ThreadPoolExecutor/gen.coroutine (Tornado v4.x) solution for not blocking the webserver no longer works with Tornado version 6.x.
A while back I started developing an online browser game using a Tornado webserver (v4.x) and websockets. Whenever user input is expected, the game sends the question to the client and waits for the response. Back then I used gen.coroutine and a ThreadPoolExecutor to make this task non-blocking. Now that I have started refactoring the game, this no longer works with Tornado v6.x and the task blocks the server again. I have searched for possible solutions but so far have been unable to get it working. It is not clear to me how to change my existing code to be non-blocking again.
server.py:
class PlayerWebSocket(tornado.websocket.WebSocketHandler):
    executor = ThreadPoolExecutor(max_workers=15)

    @run_on_executor
    def on_message(self, message):
        params = message.split(':')
        self.player.callbacks[int(params[0])] = params[1]

if __name__ == '__main__':
    application = Application()
    application.listen(9999)
    tornado.ioloop.IOLoop.instance().start()
player.py:
@gen.coroutine
def send(self, message):
    self.socket.write_message(message)

def create_choice(self, id, choices):
    d = {}
    d['id'] = id
    d['choices'] = choices
    self.choice[d['id']] = d
    self.send('update', self)
    while not d['id'] in self.callbacks:
        pass
    del self.choice[d['id']]
    return self.callbacks[d['id']]
Whenever a choice is to be made, the create_choice function creates a dict with a list (choices) and an id and stores it in the players self.callbacks. After that it just stays in the while loop until the websocket.on_message function puts the received answer (which looks like this: id:Choice_id, so for example 1:12838732) into the callbacks dict.
The WebSocketHandler.write_message method is not thread-safe, so it can only be called from the IOLoop's thread, and not from a ThreadPoolExecutor (This has always been true, but sometimes it might have seemed to work anyway).
The simplest way to fix this code is to save IOLoop.current() in a global variable from the main thread (the current() function accesses a thread-local variable, so you can't call it from the thread pool) and use ioloop.add_callback(self.socket.write_message, message). Also remove @gen.coroutine from send; it does no good to make a function a coroutine if it contains no yield expressions.
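A minimal sketch of that advice (the module-level main_ioloop variable is an illustrative name, not from the original code):
import tornado.ioloop

main_ioloop = None  # saved from the main thread at startup

class Player(object):
    def send(self, message):
        # write_message is not thread-safe, so schedule it on the IOLoop's
        # own thread instead of calling it from the ThreadPoolExecutor.
        main_ioloop.add_callback(self.socket.write_message, message)

if __name__ == '__main__':
    # IOLoop.current() reads a thread-local, so call it on the main thread only.
    main_ioloop = tornado.ioloop.IOLoop.current()
    application = Application()
    application.listen(9999)
    main_ioloop.start()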

Event handling in Autobahn Python

I want my websocket client to close the connection as soon as my_flag is set to True.
Here's my socket class:
class BridgeSocket(WebSocketClientProtocol):
    def __init__(self, factory, my_flag):
        self.my_flag = my_flag
Now, my_flag is set to True after some time somewhere else in the program (inside a different thread). Instead of waiting in a
while True:
sleep(1)
kind of loop, is there any event that I can define and attach to my websocket class, i.e. a function that gets fired when my_flag is set to True?
Use threading.Event.
# initialization code
self.my_event = threading.Event()

# sct.my_event.set() is called from somewhere else

# waiting code, replacing the "while True: sleep(1)" loop
sct.my_event.wait()
You should spawn your background task using Twisted deferToThread and then notify the WebSocket stuff running on the main thread using callFromThread.
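A sketch of that combination (background_work is a stand-in for whatever currently sets my_flag in another thread):
from twisted.internet import reactor, threads

def background_work(sct):
    # ... the blocking work that used to set my_flag runs here,
    # off the reactor thread ...
    # We are not on the reactor thread, so hand the close back to it:
    reactor.callFromThread(sct.sendClose)

# On the reactor thread, instead of polling a flag in a sleep loop:
threads.deferToThread(background_work, sct)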

How can I keep multiple gevent servers serving forever?

Currently, I have an application that has two servers: the first processes orders and responds individually, the second broadcasts results to other interested subscribers. They need to be served from different ports. I can start() both of them, but I can only get one or the other to serve_forever(), which I read is a blocking function. I am looking for ideas on how to keep both servers from exiting. Abbreviated code below:
def main():
    stacklist = []
    subslist = []
    stacklist.append(CreateStack('stuff'))
    subslist.append(Subscription('stuff'))
    bcastserver = BroadcastServer(subslist)  # creates a new server
    tradeserver = TradeServer(stacklist)     # creates a new server
    bcastserver.start()  # start accepting new connections
    tradeserver.start()  # start accepting new connections
    #bcastserver.serve_forever() #if I do it here, the first one...
    #tradeserver.serve_forever() #blocks the second one
class TradeServer(StreamServer):
    def __init__(self, stacklist):
        self.stacklist = stacklist
        StreamServer.__init__(self, ('localhost', 12345), self.handle)
        #self.serve_forever() #If I put it here in both, neither works

    def handle(self, socket, address):
        #handler here
        pass

class BroadcastServer(StreamServer):
    def __init__(self, subslist):
        StreamServer.__init__(self, ('localhost', 8000), self.handle)
        self.subslist = subslist
        #self.serve_forever() #If I put it here in both, neither works

    def handle(self, socket, address):
        #handler here
        pass
Perhaps I just need a way to keep the two from exiting, but I'm not sure how. In the end, I want both servers to listen forever for incoming connections and handle them.
I know this question has an accepted answer, but there is a better one. I'm adding it for people like me who find this post later.
As described in the gevent documentation about servers:
The BaseServer.serve_forever() method calls BaseServer.start() and then waits until interrupted or until the server is stopped.
So you can just do:
def main():
    stacklist = []
    subslist = []
    stacklist.append(CreateStack('stuff'))
    subslist.append(Subscription('stuff'))
    bcastserver = BroadcastServer(subslist)  # creates a new server
    tradeserver = TradeServer(stacklist)     # creates a new server
    bcastserver.start()          # starts accepting bcast connections and returns
    tradeserver.serve_forever()  # accepts trade connections and blocks until tradeserver stops
    bcastserver.stop()           # stops the bcast server too
The gevent introduction documentation explains why this works:
Unlike other network libraries, though in a similar fashion as
eventlet, gevent starts the event loop implicitly in a dedicated
greenlet. There’s no reactor that you must call a run() or dispatch()
function on. When a function from gevent’s API wants to block, it
obtains the gevent.hub.Hub instance — a special greenlet that runs the
event loop — and switches to it (it is said that the greenlet yielded
control to the Hub).
When serve_forever() blocks, it does not prevent either server from continuing communication.
Note: In the above code the trader server is the one that decides when the whole application stops. If you want the broadcast server to decide this, you should swap them in the start() and serve_forever() calls.
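If you would rather treat both servers symmetrically, an equivalent sketch is to start() both and park the main greenlet yourself (the gevent.event.Event here is purely an illustrative "block forever" primitive):
from gevent.event import Event

def main():
    bcastserver = BroadcastServer(subslist)
    tradeserver = TradeServer(stacklist)
    bcastserver.start()  # both servers now accept in the background...
    tradeserver.start()
    Event().wait()       # ...while the main greenlet blocks here indefinitely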
OK, I was able to do this using threading and gevent's monkey-patching:
import threading
from gevent import monkey

def main():
    monkey.patch_thread()
    # etc, etc
    t = threading.Thread(target=bcastserver.serve_forever)
    t.setDaemon(True)
    t.start()
    tradeserver.serve_forever()
Start each server loop in its own Python process (one process per gevent loop). I've never understood trying to run multiple servers from one program: you can run the same server many times and use a reverse proxy like nginx to load balance and route accordingly.

Proper way of cancelling accept and closing a Python processing/multiprocessing Listener connection

(I'm using the pyprocessing module in this example, but replacing processing with multiprocessing should probably work if you run python 2.6 or use the multiprocessing backport)
I currently have a program that listens on a unix socket (using a processing.connection.Listener), accepts connections and spawns a thread handling each request. At a certain point I want to quit the process gracefully, but the accept() call is blocking and I see no way of cancelling it nicely. I have one way that works here (OS X) at least: setting a signal handler and signalling the process from another thread, like so:
import processing
from processing.connection import Listener
import threading
import time
import os
import signal
import socket
import errno

# This is actually called by the connection handler.
def closeme():
    time.sleep(1)
    print 'Closing socket...'
    listener.close()
    os.kill(processing.currentProcess().getPid(), signal.SIGPIPE)

oldsig = signal.signal(signal.SIGPIPE, lambda s, f: None)

listener = Listener('/tmp/asdf', 'AF_UNIX')
# This is a thread that handles one already accepted connection, left out for brevity
threading.Thread(target=closeme).start()

print 'Accepting...'
try:
    listener.accept()
except socket.error, e:
    if e.args[0] != errno.EINTR:
        raise

# Cleanup here...
print 'Done...'
The only other way I've thought about is reaching deep into the connection (listener._listener._socket) and setting the non-blocking option...but that probably has some side effects and is generally really scary.
Does anyone have a more elegant (and perhaps even correct!) way of accomplishing this? It needs to be portable to OS X, Linux and BSD, but Windows portability etc is not necessary.
Clarification:
Thanks all! As usual, ambiguities in my original question are revealed :)
I need to perform cleanup after I have cancelled the listening, and I don't always want to actually exit that process.
I need to be able to access this process from other processes not spawned from the same parent, which makes Queues unwieldy
The reasons for threads are that:
They access a shared state. Actually more or less a common in-memory database, so I suppose it could be done differently.
I must be able to have several connections accepted at the same time, but the actual threads are blocking for something most of the time. Each accepted connection spawns a new thread, in order not to block all clients on I/O operations.
Regarding threads vs. processes, I use threads for making my blocking ops non-blocking and processes to enable multiprocessing.
Isn't that what select is for?
Only call accept on the socket if select indicates it will not block. select takes a timeout, so you can break out occasionally to check whether it's time to shut down.
I thought I could avoid it, but it seems I have to do something like this:
from processing import connection

connection.Listener.fileno = lambda self: self._listener._socket.fileno()

import select

l = connection.Listener('/tmp/x', 'AF_UNIX')
r, w, e = select.select((l,), (), ())
if l in r:
    print "Accepting..."
    c = l.accept()
    # ...
I am aware that this breaks the Law of Demeter and introduces some evil monkey-patching, but it seems this would be the most portable way of accomplishing it. If anyone has a more elegant solution I would be happy to hear it :)
I'm new to the multiprocessing module, but it seems to me that mixing the processing module and the threading module is counter-intuitive; aren't they targeted at solving the same problem?
Anyway, how about wrapping your listen functions into a process itself? I'm not clear how this affects the rest of your code, but this may be a cleaner alternative.
from multiprocessing import Process
from multiprocessing.connection import Listener

class ListenForConn(Process):
    def run(self):
        listener = Listener('/tmp/asdf', 'AF_UNIX')
        listener.accept()
        # do your other handling here

listen_process = ListenForConn()
listen_process.start()
print listen_process.is_alive()

listen_process.terminate()
listen_process.join()
print listen_process.is_alive()
print 'No more listen process.'
Probably not ideal, but you can release the block by sending the socket some data from the signal handler or from the thread that is terminating the process.
EDIT: Another way to implement this might be to use connection queues, since they seem to support timeouts (apologies, I misread your code on my first pass).
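A sketch of that queue idea, assuming a multiprocessing.Queue can stand in for the raw Listener (the stop_requested flag is illustrative):
from multiprocessing import Queue
from queue import Empty  # multiprocessing.Queue.get raises this on timeout

inbox = Queue()
stop_requested = False  # flipped by whatever part of the program shuts us down

while not stop_requested:
    try:
        # Wake up once a second so the stop flag is checked even when idle.
        message = inbox.get(timeout=1.0)
    except Empty:
        continue
    # ... handle message ...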
I ran into the same issue. I solved it by sending a "stop" command to the listener. In the listener's main thread (the one that processes the incoming messages), every time a new message is received, I just check to see if it's a "stop" command and exit out of the main thread.
Here's the code I'm using:
def start(self):
    """
    Start listening.
    """
    # set the command being executed
    self.command = self.COMMAND_RUN
    # start the 'listener_main' method as a daemon thread
    self.listener = Listener(address=self.address, authkey=self.authkey)
    self._thread = threading.Thread(target=self.listener_main, daemon=True)
    self._thread.start()

def listener_main(self):
    """
    The main application loop.
    """
    while self.command == self.COMMAND_RUN:
        # block until a client connection is received
        with self.listener.accept() as conn:
            # receive the subscription request from the client
            message = conn.recv()
            # if it's a shutdown command, return to stop this thread
            if isinstance(message, str) and message == self.COMMAND_STOP:
                return
            # process the message

def stop(self):
    """
    Stop the listening thread.
    """
    self.command = self.COMMAND_STOP
    client = Client(self.address, authkey=self.authkey)
    client.send(self.COMMAND_STOP)
    client.close()
    self._thread.join()
I'm using an authentication key to prevent would-be attackers from shutting down my service by sending a stop command from an arbitrary client.
Mine isn't a perfect solution. It seems a better solution might be to revise the code in multiprocessing.connection.Listener, and add a stop() method. But, that would require sending it through the process for approval by the Python team.
