Asynchronous Client/Server pattern in Python ZeroMQ

I have three Python programs that need to communicate. Two of them, X and Y, gather information and send it to program Z; program Z analyzes the data and sends decisions back to X and Y. The number of programs like X and Y will grow in the future. Initially I used named pipes for the X/Y-to-Z direction, but as you can see, I need a bidirectional relationship. My boss told me to use ZeroMQ, and I have found a pattern that fits my use case, called Asynchronous Client/Server. Please see the code from the ZMQ guide (http://zguide.zeromq.org/py:all) below.
The problem is that my boss does not want to use any threads, forks, etc. I moved the client and server tasks into separate programs, but I am not sure what to do with the ServerWorker class. Can it somehow be used without threads? I am also wondering how to determine the optimal number of workers.
import zmq
import sys
import threading
import time
from random import randint, random

__author__ = "Felipe Cruz <felipecruz@loogica.net>"
__license__ = "MIT/X11"

def tprint(msg):
    """like print, but won't get newlines confused with multiple threads"""
    sys.stdout.write(msg + '\n')
    sys.stdout.flush()

class ClientTask(threading.Thread):
    """ClientTask"""
    def __init__(self, id):
        self.id = id
        threading.Thread.__init__(self)

    def run(self):
        context = zmq.Context()
        socket = context.socket(zmq.DEALER)
        identity = u'worker-%d' % self.id
        socket.identity = identity.encode('ascii')
        socket.connect('tcp://localhost:5570')
        print('Client %s started' % (identity))
        poll = zmq.Poller()
        poll.register(socket, zmq.POLLIN)
        reqs = 0
        while True:
            reqs = reqs + 1
            print('Req #%d sent..' % (reqs))
            socket.send_string(u'request #%d' % (reqs))
            for i in range(5):
                sockets = dict(poll.poll(1000))
                if socket in sockets:
                    msg = socket.recv()
                    tprint('Client %s received: %s' % (identity, msg))
        socket.close()
        context.term()

class ServerTask(threading.Thread):
    """ServerTask"""
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        context = zmq.Context()
        frontend = context.socket(zmq.ROUTER)
        frontend.bind('tcp://*:5570')
        backend = context.socket(zmq.DEALER)
        backend.bind('inproc://backend')
        workers = []
        for i in range(5):
            worker = ServerWorker(context)
            worker.start()
            workers.append(worker)
        poll = zmq.Poller()
        poll.register(frontend, zmq.POLLIN)
        poll.register(backend, zmq.POLLIN)
        while True:
            sockets = dict(poll.poll())
            if frontend in sockets:
                ident, msg = frontend.recv_multipart()
                tprint('Server received %s id %s' % (msg, ident))
                backend.send_multipart([ident, msg])
            if backend in sockets:
                ident, msg = backend.recv_multipart()
                tprint('Sending to frontend %s id %s' % (msg, ident))
                frontend.send_multipart([ident, msg])
        frontend.close()
        backend.close()
        context.term()

class ServerWorker(threading.Thread):
    """ServerWorker"""
    def __init__(self, context):
        threading.Thread.__init__(self)
        self.context = context

    def run(self):
        worker = self.context.socket(zmq.DEALER)
        worker.connect('inproc://backend')
        tprint('Worker started')
        while True:
            ident, msg = worker.recv_multipart()
            tprint('Worker received %s from %s' % (msg, ident))
            replies = randint(0, 4)
            for i in range(replies):
                time.sleep(1. / (randint(1, 10)))
                worker.send_multipart([ident, msg])
        worker.close()

def main():
    """main function"""
    server = ServerTask()
    server.start()
    for i in range(3):
        client = ClientTask(i)
        client.start()
    server.join()

if __name__ == "__main__":
    main()

So, you grabbed the code from here: Asynchronous Client/Server Pattern
Pay close attention to the images that show you the model this code is targeted to. In particular, look at "Figure 38 - Detail of Asynchronous Server". The ServerWorker class is spinning up 5 "Worker" nodes. In the code, those nodes are threads, but you could make them completely separate programs. In that case, your server program (probably) wouldn't be responsible for spinning them up, they'd spin up separately and just communicate to your server that they are ready to receive work.
You'll see this often in ZMQ examples: a multi-node topology mimicked with threads in a single executable. It's just to make the whole thing easy to read; it's not always intended to be used that way.
For your particular case, it could make sense to have the workers be threads or to break them out into separate programs... but if it's a business requirement from your boss, then just break them out into separate programs.
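If you do break them out, a minimal sketch of a standalone worker program might look like the following. It assumes the server binds its backend to a tcp:// (or ipc://) endpoint instead of inproc://, since inproc:// only works between threads of a single process; tcp://localhost:5571 is an invented address, not anything from the guide.

import zmq

def main():
    context = zmq.Context()
    worker = context.socket(zmq.DEALER)
    worker.connect('tcp://localhost:5571')   # assumed backend endpoint
    try:
        while True:
            ident, msg = worker.recv_multipart()
            # ... do the real work here ...
            worker.send_multipart([ident, msg])
    finally:
        worker.close()
        context.term()

if __name__ == '__main__':
    main()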
Of course, to answer your second question, there's no way to know how many workers would be optimal without understanding the workload they'll be performing and how quickly they'll need to respond... your goal is to have the workers complete work faster than new work arrives. There's a fair chance, in many cases, that a single worker is enough. If so, you can make your server itself the worker and skip the entire "worker tier" of the architecture. You should start there, for the sake of simplicity, and do some load testing to see whether it actually copes with your workload. If not, get a sense of how long a task takes to complete and how quickly tasks come in. Say a worker can complete a task in 15 seconds; that's 4 tasks a minute. If tasks arrive at 5 per minute, you need 2 workers, with a little headroom to grow. If things are wildly variable, then you'll have to make a decision about resources vs. reliability.
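As a rough sketch of that arithmetic (the numbers below are the ones from the paragraph above, nothing more):

import math

task_seconds = 15                               # one worker needs 15 s per task
tasks_per_minute = 5                            # observed arrival rate
service_rate = 60.0 / task_seconds              # 4 tasks/minute per worker
workers = math.ceil(tasks_per_minute / service_rate)
print(workers)                                  # -> 2, with a little headroom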
Before you get too much farther down the trail, make sure you read Chapter 4 of the guide, Reliable Request-Reply Patterns; it will provide some insight into handling exceptions, and might give you a better pattern to follow.

Related

ZeroMQ Asynchronous Client-Server using Python multiprocessing

I am trying to adapt the ZeroMQ asynchronous client-server pattern described here with Python multiprocessing; there is a brief description in the ZeroMQ guide.
It's DEALER/ROUTER for the client-to-server-frontend communication and DEALER/DEALER for the server-backend-to-worker communication. The server frontend and backend are connected using a zmq.proxy() instance.
Instead of using threads, I want to use multiprocessing on the server. But requests from the client do not reach the server workers. They do reach the server frontend, and also the backend, but the backend is not able to pass them on to the server workers.
How do we generally debug these issues in pyzmq? How do I turn on verbose logging for the sockets?
The Python code snippets I am using:
server.py
import zmq
import time
from multiprocessing import Process

def run(context, worker_id):
    socket = context.socket(zmq.DEALER)
    socket.connect("ipc://backend.ipc")
    print(f"Worker {worker_id} started")
    try:
        while True:
            ident, msg = socket.recv_multipart()
            print("Worker received %s from %s" % (msg, "ident"))
            time.sleep(5)
            socket.send_multipart([ident, msg])
            print("Worker sent %s from %s" % (msg, ident))
    except:
        socket.close()

if __name__ == "__main__":
    context = zmq.Context()
    frontend = context.socket(zmq.ROUTER)
    frontend.bind("tcp://*:5570")
    backend = context.socket(zmq.DEALER)
    backend.bind("ipc://backend.ipc")

    N_WORKERS = 7
    jobs = []
    try:
        for worker_id in range(N_WORKERS):
            job = Process(target=run, args=(context, worker_id,))
            jobs.append(job)
            job.start()
        zmq.proxy(frontend, backend)
        for job in jobs:
            job.join()
    except:
        frontend.close()
        backend.close()
        context.term()
client.py
import json
import zmq
from uuid import uuid4

if __name__ == "__main__":
    context = zmq.Context()
    socket = context.socket(zmq.DEALER)
    identity = str(uuid4())                  # uuid4() itself has no .encode()
    socket.identity = identity.encode("ascii")
    socket.connect("tcp://localhost:5570")

    poll = zmq.Poller()
    poll.register(socket, zmq.POLLIN)

    request = {
        "body": "Some request body.",
    }
    socket.send_string(json.dumps(request))

    while True:
        for i in range(5):
            sockets = dict(poll.poll(10))
            if socket in sockets:
                msg = socket.recv()
                print(msg)
Q : "How to turn on verbose logging for the sockets?"
Start using the published native API socket_monitor() for all relevant details, reported as events arriving from socket-(instance)-under-monitoring.
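For pyzmq specifically, a minimal sketch of hooking up such a monitor might look like this (assuming a reasonably recent pyzmq built against a libzmq with monitoring support; get_monitor_socket() and recv_monitor_message() are pyzmq's wrappers around the native socket_monitor()):

import zmq
from zmq.utils.monitor import recv_monitor_message

ctx = zmq.Context()
sock = ctx.socket(zmq.DEALER)
monitor = sock.get_monitor_socket()      # a PAIR socket that carries events
sock.connect("tcp://localhost:5570")

while monitor.poll(1000):                # watch events for ~1 s of quiet
    evt = recv_monitor_message(monitor)  # dict with 'event', 'value', 'endpoint'
    print(evt)
sock.disable_monitor()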
Q : "How do we generally debug these issues in pyzmq?"
There is no general strategy for doing this. Having gone into the domain of distributed-computing, you will almost always create your own, project-specific tools for "collecting" & "viewing/interpreting" a time-ordered flow of (principally) distributed events.
Last but not least: avoid trying to share a Context()-instance, the less so "among" 8 processes.
The Art of Zen of Zero strongly advocates avoiding any shape and form of sharing. Here, one and the very same Context()-instance is referenced ("shared") via the multiprocessing.Process process-instantiation call-signature, and this inter-process "sharing" simply does not work.
One may let each spawned process-instance create its own Context()-instance and use it from inside its private space during its own life-cycle.
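A minimal sketch of that correction, applied to the question's run(): the worker takes only its id, and the Context is created inside the child, after the process has started - no Context or socket crosses the process boundary.

import zmq
from multiprocessing import Process

def run(worker_id):
    context = zmq.Context()              # created inside the child, never shared
    socket = context.socket(zmq.DEALER)
    socket.connect("ipc://backend.ipc")
    try:
        while True:
            ident, msg = socket.recv_multipart()
            socket.send_multipart([ident, msg])
    finally:
        socket.close()
        context.term()

# in the parent: Process(target=run, args=(worker_id,)) - no Context in args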
Btw, your code ignores any return-codes, documented in the native API, that would help you handle (or, in worse cases, debug post-mortem) what goes on inside the distributed system. The try: / except: / finally: scaffolding also helps a lot here.
Anyway, the sooner you learn to stop using the blocking forms of the { .send() | .recv() | .poll() } methods, the better your code will start to re-use the actual powers of ZeroMQ.

Python performance - best parallelism approach

I am implementing a Python script that needs to keep sending 1500+ packets in parallel, in less than 5 seconds each.
In a nutshell what I need is:
def send_pkts(ip):
    # craft packet
    while True:
        # send packet
        time.sleep(randint(0, 3))

for x in list[:1500]:
    send_pkts(x)
    time.sleep(randint(1, 5))
I have tried simple single-threaded, multithreading, multiprocessing, and multiprocessing+multithreading approaches and ran into the following issues:
Simple single-threaded:
The "for" delay seems to compromise the "5 seconds" requirement.
Multithreading:
I think I could not accomplish what I want due to Python GIL limitations.
Multiprocessing:
That was the best approach that seemed to work. However, due to the excessive number of processes, the VM where I run the script freezes (of course, with 1500 processes running). Thus it becomes impractical.
Multiprocessing+multithreading:
In this approach I created fewer processes, with each of them calling some threads (let's suppose: 10 processes calling 150 threads each). It was clear that the VM does not freeze as fast as in approach number 3; however, the most "concurrent packet sending" I could reach was ~800. GIL limitations? VM limitations?
In this attempt I also tried using a process Pool, but the results were similar.
Is there a better approach I could use to accomplish this task?
[1] EDIT 1:
import gevent

def send_pkt(x):
    # craft pkt
    while True:
        # send pkt
        gevent.sleep(0)

gevent.joinall([gevent.spawn(send_pkt, x) for x in list[:1500]])
[2] EDIT 2 (gevent monkey-patching):
from gevent import monkey; monkey.patch_all()
import gevent

jobs = [gevent.spawn(send_pkt, x) for x in list[:1500]]
gevent.wait(jobs)
# for send_pkt(x) check [1]
However, I got the following error: "ValueError: filedescriptor out of range in select()". So I checked my system ulimit (soft and hard are both at the maximum: 65536).
Afterwards, I found it has to do with select() limitations on Linux (1024 fds maximum). Please check: http://man7.org/linux/man-pages/man2/select.2.html (BUGS section). To overcome that I should use poll() (http://man7.org/linux/man-pages/man2/poll.2.html) instead, but with poll() I return to the same limitations, as polling is a "blocking approach".
When using parallelism in Python, a good approach is to use either ThreadPoolExecutor or ProcessPoolExecutor from
https://docs.python.org/3/library/concurrent.futures.html#module-concurrent.futures
These work well in my experience.
Here is an example of ThreadPoolExecutor that can be adapted for your use.
import concurrent.futures
import time

IPs = ['168.212.226.204',
       '168.212.226.204',
       '168.212.226.204',
       '168.212.226.204',
       '168.212.226.204']

def send_pkt(x):
    status = 'Failed'
    while True:
        # send pkt
        time.sleep(10)
        status = 'Successful'
        break
    return status

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    future_to_ip = {executor.submit(send_pkt, ip): ip for ip in IPs}
    for future in concurrent.futures.as_completed(future_to_ip):
        ip = future_to_ip[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (ip, exc))
        else:
            print('%r send %s' % (ip, data))
Your result in option 3: "due to excessive quantity of process the VM where I am running the script freezes (of course, 1500 process running)" could bear further investigation. I believe it may be underdetermined from the information gathered so far whether this is better characterized as a shortcoming of the multiprocessing approach, or a limitation of the VM.
One fairly simple and straightforward approach would be to run a scaling experiment: rather than having all sends happen from individual processes, or all from the same one, try intermediate values. Time how long it takes to split the workload in half between two processes, then 4, 8, and so on.
While doing that, it may also be a good idea to run a tool like xperf on Windows or oprofile on Linux to record whether these different choices of parallelism are leading to different kinds of bottlenecks, for example thrashing the CPU cache, running the VM out of memory, or who knows what else. The easiest way to tell is to try it.
Based on prior experience with these types of problems and general rules of thumb, I would expect the best performance to come when the number of multiprocessing processes is less than or equal to the number of available CPU cores (either on the VM itself or on the hypervisor). That is however assuming that the problem is CPU bound; it's possible performance would still be higher with more than #cpu processes if something blocks during packet sending that would allow better use of CPU time if interleaved with other blocking operations. Again though, we don't know until some profiling and/or scaling experiments are done.
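For example, a sketch of such a scaling experiment might look like this (send_pkts stands in for the question's craft-and-send function, and the target list is made up):

import time
from multiprocessing import Pool

def send_pkts(ip):
    pass  # the question's craft-and-send placeholder

def timed_run(n_procs, targets):
    start = time.perf_counter()
    with Pool(n_procs) as pool:
        pool.map(send_pkts, targets)
    return time.perf_counter() - start

if __name__ == '__main__':
    targets = ['10.0.0.%d' % (i % 250) for i in range(1500)]  # made-up IPs
    for n in (1, 2, 4, 8, 16):
        print('%2d processes: %.2f s' % (n, timed_run(n, targets)))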
You are correct that Python is effectively single-threaded (because of the GIL); however, your desired task (sending network packets) is an IO-bound operation, and therefore a good candidate for multi-threading. Your main thread is not busy while the packets are transmitting, as long as you write your code with async in mind.
Take a look at the Python docs on async TCP networking: https://docs.python.org/3/library/asyncio-protocol.html#tcp-echo-client.
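Along those lines, a minimal asyncio sketch might look like the following (the host, port, and payload below are placeholders, not values from the question):

import asyncio

async def send_pkt(ip):
    # one coroutine per target; 1500 sockets still count against the fd ulimit
    reader, writer = await asyncio.open_connection(ip, 9999)
    writer.write(b"payload")             # craft the real packet here
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main(ips):
    await asyncio.gather(*(send_pkt(ip) for ip in ips))

asyncio.run(main(["127.0.0.1"] * 1500))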
If the bottleneck is network-I/O based ("sending packets"), then the GIL actually shouldn't be too much of a problem.
If there is computation happening within python as well, then the GIL may get in the way and, as you say, process-based parallelism would be preferred.
You do not need one process per task! This seems to be the oversight in your thinking. With python's Pool class, you can easily create a set of workers which will receive tasks from a queue.
import multiprocessing

def send_pkts(ip):
    ...

number_of_workers = 8

with multiprocessing.Pool(number_of_workers) as pool:
    pool.map(send_pkts, list[:1500])
You are now running number_of_workers + 1 processes (the workers plus the original process), and the N workers run the send_pkts function concurrently.
The main issue keeping you from achieving your desired performance is the send_pkts() method. It doesn't just send the packet, it also crafts the packet:
def send_pkts(ip):
    # craft packet
    while True:
        # send packet
        time.sleep(randint(0, 3))
While sending a packet is almost certainly an I/O bound task, crafting a packet is almost certainly a CPU bound task. This method needs to be split into two tasks:
craft a packet
send a packet
I've written a basic socket server and a client app that crafts and sends packets to the server. The idea is to have a separate process which crafts the packets and puts them into a queue. There is a pool of threads that share the queue with the packet crafting process. These threads pull packets off of the queue and send them to the server. They also stick the server's responses into another shared queue but that's just for my own testing and not relevant to what you're trying to do. The threads exit when they get a None (poison pill) from the queue.
server.py:
import argparse
import socketserver
import time

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", type=str, help="bind to host")
    parser.add_argument("--port", type=int, help="bind to port")
    parser.add_argument("--packet-size", type=int, help="size of packets")
    args = parser.parse_args()
    HOST, PORT = args.host, args.port

    class MyTCPHandler(socketserver.BaseRequestHandler):
        def handle(self):
            time.sleep(1.5)
            data = self.request.recv(args.packet_size)
            self.request.sendall(data.upper())

    with socketserver.ThreadingTCPServer((HOST, PORT), MyTCPHandler) as server:
        server.serve_forever()
client.py:
import argparse
import logging
import multiprocessing as mp
import os
import queue as q
import socket
import time
from threading import Thread

def get_logger():
    logger = logging.getLogger("threading_example")
    logger.setLevel(logging.INFO)
    fh = logging.FileHandler("client.log")
    fmt = '%(asctime)s - %(threadName)s - %(levelname)s - %(message)s'
    formatter = logging.Formatter(fmt)
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    return logger

class PacketMaker(mp.Process):
    def __init__(self, result_queue, max_packets, packet_size, num_poison_pills, logger):
        mp.Process.__init__(self)
        self.result_queue = result_queue
        self.max_packets = max_packets
        self.packet_size = packet_size
        self.num_poison_pills = num_poison_pills
        self.num_packets_made = 0
        self.logger = logger

    def run(self):
        while True:
            if self.num_packets_made >= self.max_packets:
                for _ in range(self.num_poison_pills):
                    self.result_queue.put(None, timeout=1)
                self.logger.debug('PacketMaker exiting')
                return
            self.result_queue.put(os.urandom(self.packet_size), timeout=1)
            self.num_packets_made += 1

class PacketSender(Thread):
    def __init__(self, task_queue, result_queue, addr, packet_size, logger):
        Thread.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue
        self.server_addr = addr
        self.packet_size = packet_size
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.connect(addr)
        self.logger = logger

    def run(self):
        while True:
            packet = self.task_queue.get(timeout=1)
            if packet is None:
                self.logger.debug("PacketSender exiting")
                return
            try:
                self.sock.sendall(packet)
                response = self.sock.recv(self.packet_size)
            except socket.error:
                self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                self.sock.connect(self.server_addr)
                self.sock.sendall(packet)
                response = self.sock.recv(self.packet_size)
            self.result_queue.put(response)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--num-packets', type=int, help='number of packets to send')
    parser.add_argument('--packet-size', type=int, help='packet size in bytes')
    parser.add_argument('--num-threads', type=int, help='number of threads sending packets')
    parser.add_argument('--host', type=str, help='name of host packets will be sent to')
    parser.add_argument('--port', type=int, help='port number of host packets will be sent to')
    args = parser.parse_args()

    logger = get_logger()
    logger.info(f"starting script with args {args}")

    packets_to_send = mp.Queue(args.num_packets + args.num_threads)
    packets_received = q.Queue(args.num_packets)
    producers = [PacketMaker(packets_to_send, args.num_packets, args.packet_size, args.num_threads, logger)]
    senders = [PacketSender(packets_to_send, packets_received, (args.host, args.port), args.packet_size, logger)
               for _ in range(args.num_threads)]

    start_time = time.time()
    logger.info("starting workers")
    for worker in senders + producers:
        worker.start()
    for worker in senders:
        worker.join()
    logger.info("workers finished")
    end_time = time.time()
    print(f"{packets_received.qsize()} packets received in {end_time - start_time} seconds")
run.sh:
#!/usr/bin/env bash

for i in "$@"
do
case $i in
    -s=*|--packet-size=*)
        packet_size="${i#*=}"
        shift
        ;;
    -n=*|--num-packets=*)
        num_packets="${i#*=}"
        shift
        ;;
    -t=*|--num-threads=*)
        num_threads="${i#*=}"
        shift
        ;;
    -h=*|--host=*)
        host="${i#*=}"
        shift
        ;;
    -p=*|--port=*)
        port="${i#*=}"
        shift
        ;;
    *)
        ;;
esac
done

python3 server.py --host="${host}" \
                  --port="${port}" \
                  --packet-size="${packet_size}" &
server_pid=$!

python3 client.py --packet-size="${packet_size}" \
                  --num-packets="${num_packets}" \
                  --num-threads="${num_threads}" \
                  --host="${host}" \
                  --port="${port}"

kill "${server_pid}"
$ ./run.sh -s=1024 -n=1500 -t=300 -h=localhost -p=9999
1500 packets received in 4.70330023765564 seconds
$ ./run.sh -s=1024 -n=1500 -t=1500 -h=localhost -p=9999
1500 packets received in 1.5025699138641357 seconds
This result may be verified by changing the log level in client.py to DEBUG. Note that the script does take much longer than 4.7 seconds to complete. There is quite a lot of teardown required when using 300 threads, but the log makes it clear that the threads are done processing at 4.7 seconds.
Take all performance results with a grain of salt. I have no clue what system you're running this on. I will provide my relevant system stats:
2x Xeon X5550 @2.67GHz
24GB DDR3 @1333MHz
Debian 10
Python 3.7.3
I'll address the issues with your attempts:
Simple single-threaded: This is all but guaranteed to take at least 1.5 x num_packets seconds due to the randint(0, 3) delay.
Multithreading: The GIL is the likely bottleneck here, but it's likely because of the "craft packet" part rather than "send packet".
Multiprocessing: Each process requires at least one file descriptor, so you're probably exceeding the user or system limit, but this could work if you change the appropriate settings.
Multiprocessing+multithreading: This fails for the same reason as #2; crafting the packet is probably CPU bound.
The rule of thumb is: I/O bound - use threads; CPU bound - use processes.

Random freezing / hanging in Python ZeroMQ

I am writing a broker-less, load-balanced, client-worker service in Python with ZeroMQ.
The clients acquire a worker's address, establish a connection (zmq.REQ / zmq.REP), send a single request, receive a single response, and then disconnect.
I have chosen a broker-less architecture because the amount of data that needs to be transferred between the clients and workers is relatively large, despite there being only a single REQ/REP pair per connection, and using a broker as a "middle man" would create a bottleneck.
While testing the system, I noticed that the communication between the clients and workers halts randomly, sometimes resuming after a couple of seconds (often several minutes).
I narrowed the issue down to the .connect() / .disconnect() calls of clients to workers.
I have written two small Python scripts that reproduce the bug.
import zmq

class Site:
    def __init__(self):
        ctx = zmq.Context()
        self.pair_socket = ctx.socket(zmq.REQ)
        self.num = 0

    def __del__(self):
        print "closed"

    def run_site(self):
        print "running..."
        while True:
            self.pair_socket.connect('tcp://127.0.0.1:5555')
            print 'connected'
            self.pair_socket.send_pyobj(self.num)
            print 'sent', self.num
            print self.pair_socket.recv_pyobj()
            self.pair_socket.disconnect('tcp://127.0.0.1:5555')
            print 'disconnected'
            self.num += 1

s = Site()
s.run_site()
and
import zmq

class Server:
    def __init__(self):
        ctx = zmq.Context()
        self.pair_socket = ctx.socket(zmq.REP)
        self.pair_socket.bind('tcp://127.0.0.1:5555')

    def __del__(self):
        print " closed"

    def run_server(self):
        print "running..."
        while True:
            x = self.pair_socket.recv_pyobj()
            print x
            self.pair_socket.send_pyobj(x)

s = Server()
s.run_server()
I don't think the issue is related to memory or GC, as I have tried disabling GC without much effect.
I have tried using zmq.LINGER as described here: Zeromq with python hangs if connecting to invalid socket.
What could cause these random freezes?
The REP socket is synchronous by definition, so your server can only serve one request at a time; the rest of them just fill up the buffer and get lost at some point.
To fix the root cause, you need to use a ROUTER socket instead:
class Server:
    def __init__(self):
        ctx = zmq.Context()
        self.pair_socket = ctx.socket(zmq.ROUTER)
        self.pair_socket.bind('tcp://127.0.0.1:5555')
        self.poller = zmq.Poller()
        self.poller.register(self.pair_socket, zmq.POLLIN)

    def __del__(self):
        print " closed"

    def run_server(self):
        print "running..."
        while True:
            try:
                items = dict(self.poller.poll())
            except KeyboardInterrupt:
                break
            if self.pair_socket in items:
                x = self.pair_socket.recv_multipart()
                print x
                self.pair_socket.send_multipart(x)

concurrency of heavy tasks in tornado

my code:
import tornado.gen
import tornado.ioloop
import tornado.iostream
import tornado.tcpserver
import itertools
import socket
import time

class Talk():
    def __init__(self, id):
        self.id = id

    @tornado.gen.coroutine
    def on_connect(self):
        try:
            while "connection alive":
                self.said = yield self.stream.read_until(b"\n")
                response = yield tornado.gen.Task(self.task)  ### LINE 1
                yield self.stream.write(response)             ### LINE 2
        except tornado.iostream.StreamClosedError:
            print('error: socket closed')
            return

    @tornado.gen.coroutine
    def task(self):
        if self.id == 1:
            time.sleep(3)  # sometimes a request is heavily blocking
        return b"response"

    @tornado.gen.coroutine
    def on_disconnect(self):
        yield []

class Server(tornado.tcpserver.TCPServer):
    def __init__(self, io_loop=None, ssl_options=None, max_buffer_size=None):
        tornado.tcpserver.TCPServer.__init__(self,
                                             io_loop=io_loop,
                                             ssl_options=ssl_options,
                                             max_buffer_size=max_buffer_size)
        self.talk_id_alloc = itertools.count(1)

    @tornado.gen.coroutine
    def handle_stream(self, stream, address):
        talk_id = next(self.talk_id_alloc)
        talk = Talk(talk_id)
        stream.set_close_callback(talk.on_disconnect)
        stream.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        stream.socket.setsockopt(socket.IPPROTO_TCP, socket.SO_KEEPALIVE, 1)
        talk.stream = stream
        yield talk.on_connect()

Server().listen(8888)
tornado.ioloop.IOLoop.instance().start()
problem:
I need Tornado as a TCP server - it looks like a good choice for handling many requests with low computation.
However:
99% of requests will last less than 0.05 sec, but
1% of them can last even 3 sec (special cases).
A single response must be returned at once, not partially.
What is the best approach here? How do I achieve code where LINE #1 never blocks for more than 0.1 sec?

yield tornado.gen.with_timeout(
    datetime.timedelta(seconds=0.1), tornado.gen.Task(self.task))

doesn't work for me - it does nothing. Neither does:

tornado.ioloop.IOLoop.current().add_timeout(
    datetime.timedelta(seconds=0.1),
    lambda: result.set_exception(TimeoutError("Timeout")))

Looking for better solutions:
the task could detect whether it needs heavy computation (an API call, ...) - using a timeout?
then run/fork it to another thread or even process,
and send the Tornado server an exception - "receive me later from a results queue" (consumer/producer).
I don't want the case where a timeout kills a heavy task without saving its results and the task is reopened within a special wrapper - or should the consumer/producer pattern be used for all tasks?
Adding a new ioloop when the current one is blocked - how do I detect blocking?
I don't see any solution in Tornado.
The task in LINE #1 could be simple (~99%) or complicated. A complicated one can require:
I/O:
- disk/DB access
- RAM/Redis access
network:
- API calls
CPU:
- algorithms, regex
(the worst tasks will do all of the above).
I know what kind of task it is (its weight) only once I start doing it, so the appropriate thing seems to be a task queue in separate threads.
I don't want to delay the simple/quick tasks.
So, if you can manage to cancel the heavy tasks, I recommend cancelling them with a timeout and then spawning them off to another thread. Performance-wise this is not ideal (GIL), but you prevent Tornado from blocking - which is your ultimate goal.
A nice write-up about how this can be done can be found here: http://lbolla.info/blog/2013/01/22/blocking-tornado.
If you want to go further, you could use something like Celery, which can offload to other processes transparently - though that is much heavier.
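On current Tornado versions (5+), a minimal sketch of that thread-offload idea could use IOLoop.run_in_executor to push the blocking call onto a ThreadPoolExecutor (the pool size and heavy_task below are illustrative, not from the question):

from concurrent.futures import ThreadPoolExecutor
import time

import tornado.ioloop

executor = ThreadPoolExecutor(max_workers=4)   # pool size is an assumption

def heavy_task():
    time.sleep(3)                              # stands in for the blocking 1% case
    return b"response"

async def handle():
    loop = tornado.ioloop.IOLoop.current()
    # run_in_executor keeps the IOLoop free while a worker thread blocks
    response = await loop.run_in_executor(executor, heavy_task)
    print(response)

tornado.ioloop.IOLoop.current().run_sync(handle)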

pyzmq non-blocking socket

Can someone point me to an example of a REQ/REP non-blocking ZeroMQ (0MQ) with Python bindings? Perhaps my understanding of ZMQ is faulty but I couldn't find an example online.
I have a server in Node.js that sends work from multiple clients to the server. The idea is that the server can spin up a bunch of jobs that operate in parallel, instead of processing data for one client followed by the next.
For this goal you can use either zmq.Poller (you can find many examples in the zguide repo, e.g. rrbroker.py) or the gevent-zeromq implementation (code sample).
The example provided in the accepted answer gives the gist of it, but you can get away with something a bit simpler by using zmq.device for the broker while otherwise sticking to the "Extended Request-Reply" pattern from the guide. As such, a hello-worldy example of the server could look something like the following:
import time
import threading
import zmq

context = zmq.Context()

def worker():
    socket = context.socket(zmq.REP)
    socket.connect('inproc://workers')
    while True:
        msg = socket.recv_string()
        print(f'Received request: [{msg}]')
        time.sleep(1)
        socket.send_string(msg)

url_client = 'tcp://*:5556'
clients = context.socket(zmq.ROUTER)
clients.bind(url_client)
workers = context.socket(zmq.DEALER)
workers.bind('inproc://workers')

for _ in range(4):
    thread = threading.Thread(target=worker)
    thread.start()

zmq.device(zmq.QUEUE, clients, workers)
Here we're letting four workers handle incoming requests in parallel. Now, you're using Node on the client side, but just to keep the example complete, one can use the Python client below to see that this works. Here, we're creating 10 requests which will then be handled in 3 batches:
import zmq
import threading

context = zmq.Context()

def make_request(a):
    socket = context.socket(zmq.REQ)
    socket.connect('tcp://localhost:5556')
    print(f'Sending request {a} ...')
    socket.send_string(str(a))
    message = socket.recv_string()
    print(f'Received reply from request {a} [{message}]')

for a in range(10):
    thread = threading.Thread(target=make_request, args=(a,))
    thread.start()
