Python: how to create a server to supervise a thread pool?

I have a thread pool that handles some tasks concurrently. Now I'd like the tasks (multiply_by_2 here) to print something before exit.
Originally, I created a lock and passed the lock to each worker thread. If a thread wants to print something, it first acquires the lock, prints its message to stdout, then releases the lock.
Now, I want a dedicated event-driven server thread to handle the printing. If a thread wants to print something, it just sends its message to that server via a Unix domain socket (AF_UNIX). I hope that this way each thread's blocking time is reduced (no waiting for the lock) and no lock needs to be shared among the worker threads. The server thread just prints whatever messages it gets from clients (i.e. the worker threads), in order.
I tried for some time with Python's asyncio module (requiring Python 3.7+) but couldn't figure it out. How should I do it?
Here is a cleaned-up template:
# Python 3.7+
import asyncio
import multiprocessing.dummy as mp  # Threading wrapped using the multiprocessing API.
import os
import socket
import sys
import threading
import time

server_address = './uds_socket'  # Unix domain socket

def run_multiple_clients_until_complete(input_list):
    pool = mp.Pool(8)
    result_list = pool.map(multiply_by_2, input_list)
    return result_list

def multiply_by_2(n):
    time.sleep(0.2)  # Simulates some blocking call.
    message_str = "client: n = %d" % n
    # TODO send message_str.encode() to the server
    return n * 2

# Server's callback when it gets a client connection.
# If you want to change it, please do.
async def client_connected_cb(
        stream_reader: asyncio.StreamReader,
        stream_writer: asyncio.StreamWriter) -> None:
    message_str = (await stream_reader.read()).decode()
    print(message_str)

def create_server_thread():
    pass  # TODO

# Let the server finish handling all connections it got, then
# stop the server and join the thread.
def stop_server_and_wait_thread(thread):
    pass  # TODO

def work(input_list):
    thread = create_server_thread()
    result_list = run_multiple_clients_until_complete(input_list)
    stop_server_and_wait_thread(thread)
    return result_list

def main():
    input_list = list(range(20))
    result_list = work(input_list)
    print(result_list)

if __name__ == "__main__":
    sys.exit(main())
Some extra requirements:
Don't make run_multiple_clients_until_complete(), multiply_by_2(), or main() async.
It would be nicer to use SOCK_DGRAM (datagram) sockets than SOCK_STREAM, but that's not required.
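For reference, here is one possible way to fill in the TODOs (a minimal sketch, not the only approach): run a private event loop in a dedicated thread with asyncio.start_unix_server(), and let the worker threads talk to it with ordinary blocking sockets. The helper send_to_server() and the startup event are my additions, and (unlike the template) create_server_thread() returns both the thread and its loop so the loop can be stopped from outside.

import asyncio
import os
import socket
import threading

server_address = './uds_socket'

async def handle_client(stream_reader, stream_writer):
    data = await stream_reader.read()       # read until the worker closes its socket
    print(data.decode())
    stream_writer.close()
    await stream_writer.wait_closed()

def create_server_thread():
    if os.path.exists(server_address):
        os.unlink(server_address)            # remove a stale socket file from a previous run
    loop = asyncio.new_event_loop()
    ready = threading.Event()

    def run_loop():
        asyncio.set_event_loop(loop)
        server = loop.run_until_complete(
            asyncio.start_unix_server(handle_client, path=server_address))
        ready.set()
        loop.run_forever()                   # runs until loop.stop() is scheduled
        server.close()
        loop.run_until_complete(server.wait_closed())
        loop.close()

    thread = threading.Thread(target=run_loop)
    thread.start()
    ready.wait()                             # don't return before the socket exists
    return thread, loop

def stop_server_and_wait_thread(thread, loop):
    # Simplified shutdown: a production version would also wait for any
    # still-running handle_client() tasks before stopping the loop.
    loop.call_soon_threadsafe(loop.stop)
    thread.join()

def send_to_server(message: bytes) -> None:
    # Called from the worker threads: plain blocking socket I/O, no asyncio here.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(server_address)
        s.sendall(message)

With this shape, the worker would call send_to_server(message_str.encode()) where the TODO is, and work() would unpack thread, loop = create_server_thread().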

Related

Understanding implementation of parallel programming via threading

Scenario
1. A sensor continuously sends data at a 100 millisecond interval (the interval needs to be configurable).
2. Thread 1 continuously reads the data from the sensor and writes it to a common queue.
3. This continues until a keyboard interrupt happens.
4. Thread 2 locks the queue (which may momentarily block Thread 1),
5. reads all of the data from the queue into a temporary structure,
6. releases the queue, and
7. processes the data. This is a computational task; while it runs, Thread 1 should keep filling the queue with sensor data.
I have read about threading and the GIL, so step 7 cannot afford to lose any data sent by the sensor while the computational process() runs on Thread 2.
How can this be implemented in Python?
What I started with is:
from threading import Thread
from queue import Queue

q = Queue(maxsize=10)

def fun1():
    fun2Thread = Thread(target=fun2)
    fun2Thread.start()
    while True:
        try:
            q.put(1)
        except KeyboardInterrupt:
            print("Key Interrupt")
    fun2Thread.join()

def fun2():
    print(q.get())

def read():
    fun1Thread = Thread(target=fun1)
    fun1Thread.start()
    fun1Thread.join()

read()
The issue I'm facing is that the terminal gets stuck after printing 1. Can someone please guide me on how to implement this scenario?
Here's an example that may help.
We have a main program (driver), a client and a server. The main program manages queue construction and the starting and ending of the subprocesses.
The client sends a range of values to the server via a queue. When the range is exhausted, it tells the server to terminate. There's a delay (sleep) in enqueueing the data for demonstration purposes.
Try running it once without any interrupt and note how everything terminates nicely. Then run again and interrupt (Ctrl-C) and again note a clean termination.
from multiprocessing import Queue, Process
from signal import signal, SIGINT, SIG_IGN
from time import sleep

def client(q, default):
    signal(SIGINT, default)
    try:
        for i in range(10):
            sleep(0.5)
            q.put(i)
    except KeyboardInterrupt:
        pass
    finally:
        q.put(-1)

def server(q):
    while (v := q.get()) != -1:
        print(v)

def main():
    q = Queue()
    default = signal(SIGINT, SIG_IGN)
    (server_p := Process(target=server, args=(q,))).start()
    (client_p := Process(target=client, args=(q, default))).start()
    client_p.join()
    server_p.join()

if __name__ == '__main__':
    main()
EDIT:
Edited to ensure that the server process continues to drain the queue if the client is terminated due to a KeyboardInterrupt (SIGINT)
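If you would rather stay with threads, as in your original attempt, the same producer/consumer shape works with queue.Queue. Below is a minimal sketch; the names sensor_reader, processor and the fake process() are mine, and the sleeps stand in for the real sensor and computation. The reader keeps putting items while the processor drains whatever has accumulated and works on it as a batch, so nothing is lost while the computation runs.

import queue
import threading
import time

q = queue.Queue()                    # unbounded, so the reader never blocks on put()
stop = threading.Event()

def sensor_reader(interval=0.1):
    # Thread 1: pretend to poll the sensor every `interval` seconds.
    n = 0
    while not stop.is_set():
        q.put(n)                     # replace with the real sensor read
        n += 1
        time.sleep(interval)
    q.put(None)                      # sentinel: tell the processor to finish up

def process(batch):
    time.sleep(0.3)                  # stand-in for the computational task
    print("processed", batch)

def processor():
    # Thread 2: take everything queued so far and process it as one batch.
    while True:
        batch = [q.get()]            # block until at least one item arrives
        try:
            while True:
                batch.append(q.get_nowait())
        except queue.Empty:
            pass
        done = None in batch
        process([x for x in batch if x is not None])
        if done:
            return

if __name__ == "__main__":
    t1 = threading.Thread(target=sensor_reader)
    t2 = threading.Thread(target=processor)
    t1.start()
    t2.start()
    try:
        while t1.is_alive():
            t1.join(0.2)             # keep the main thread responsive to Ctrl-C
    except KeyboardInterrupt:
        stop.set()
    t1.join()
    t2.join()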

Implementing a single thread server/daemon (Python)

I am developing a server (daemon).
The server has one "worker thread". The worker thread runs a queue of commands. When the queue is empty, the worker thread is paused (but does not exit, because it should preserve certain state in memory). To keep exactly one copy of the state in memory, exactly one worker thread (not several and not zero) must be running at all times.
Requests are added to the end of this queue when a client connects to a Unix socket and sends a command.
When a command is issued, it is added to the worker thread's command queue, and the server then replies with something like "OK". There should not be a long pause between the server receiving a command and its "OK" reply. However, running the commands in the queue may take some time.
The main "work" of the worker thread is split into small (taking relatively little time) chunks. Between chunks, the worker thread inspects ("eats" and empties) the queue and continues to work based on the data extracted from the queue.
How to implement this server/daemon in Python?
This is sample code using internet sockets, easily replaced with Unix domain sockets. It takes whatever you write to the socket, passes it as a "command" to the worker, and responds "OK" as soon as it has queued the command. The single worker simulates a lengthy task by working in one-second chunks and checking the queue between chunks. You can queue as many tasks as you want; you receive "OK" immediately, and the worker prints commands from the queue as it gets to them.
import Queue, threading, socket
from time import sleep

class worker(threading.Thread):
    def __init__(self, q):
        super(worker, self).__init__()
        self.qu = q

    def run(self):
        while True:
            new_task = self.qu.get(True)
            print new_task
            i = 0
            while i < 10:
                print "working ..."
                sleep(1)
                i += 1
                try:
                    another_task = self.qu.get(False)
                    print another_task
                except Queue.Empty:
                    pass

task_queue = Queue.Queue()
w = worker(task_queue)
w.daemon = True
w.start()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('localhost', 4200))
sock.listen(1)
try:
    while True:
        conn, addr = sock.accept()
        data = conn.recv(32)
        task_queue.put(data)
        conn.sendall("OK")
        conn.close()
except:
    sock.close()
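The code above is Python 2. Since the question asks about a Unix socket, here is a rough Python 3 sketch of the same idea with an AF_UNIX socket; the socket path is a placeholder and the worker's chunked timing is kept as in the original.

import os
import queue
import socket
import threading
from time import sleep

SOCKET_PATH = "/tmp/worker_daemon.sock"     # placeholder path

def worker(q):
    while True:
        task = q.get()                      # block until a command arrives
        print("running:", task)
        for _ in range(10):                 # the "work", split into small chunks
            sleep(1)
            try:
                print("also queued:", q.get_nowait())   # eat the queue between chunks
            except queue.Empty:
                pass

def main():
    task_queue = queue.Queue()
    threading.Thread(target=worker, args=(task_queue,), daemon=True).start()

    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.bind(SOCKET_PATH)
        sock.listen(1)
        while True:
            conn, _ = sock.accept()
            with conn:
                task_queue.put(conn.recv(32))
                conn.sendall(b"OK")         # reply immediately; the work runs later

if __name__ == "__main__":
    main()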

Worker connecting to server but executing on client with multiprocessing package (python 2.7)

First post here, hello everyone.
I have a problem with the multiprocessing package with python 2.7.
I wish to have some processes run in parallel on a server; they do connect but they are executed locally instead.
This is the code I use on the server (Ubuntu 14.04):
from multiprocessing import Process
from multiprocessing.managers import BaseManager
from multiprocessing import cpu_count

class MyManager(BaseManager):
    pass

def server():
    mgr = MyManager(address=("", 2288), authkey="12345")
    mgr.get_server().serve_forever()

if __name__ == "__main__":
    print "number of cpus/cores:", cpu_count()
    server = Process(target=server)
    server.start()
    print "server started"
    server.join()
    server.terminate()
while this is the code that runs on the client (Mac OS 10.11):
from multiprocessing import Manager
from multiprocessing import Process
from multiprocessing import current_process
from multiprocessing.managers import BaseManager
from math import sqrt

class MyManager(BaseManager):
    pass

def worker(address, port, authkey):
    mgr = MyManager(address=(address, port), authkey=authkey)
    try:
        mgr.connect()
        print "- {} connected to {}:{}".format(current_process().name, address, port)
    except:
        print "- {} could not connect to server ({}:{})".format(current_process().name, address, port)
    current_process().authkey = authkey
    for k in range(1000000000):
        sqrt(k * k)

if __name__ == "__main__":
    # create processes
    p = [Process(target=worker, args=("xx.xx.xx.xx", 2288, "12345")) for _ in range(4)]
    # start processes
    for each in p:
        each.start()
    # join the processes
    for each in p:
        each.join()
The for loop
for k in range(1000000000):
    sqrt(k * k)
inside the worker function is just there to make the workers do a lot of processing, so I can monitor their activity in Activity Monitor or with top.
The problem is that the processes connect (in fact, if I put in a wrong address they do not), but they are executed on the local machine: I see the server's CPUs staying idle while the local CPUs all go towards 100%.
Am I getting something wrong?
You are starting your Processes locally on your client: the list comprehension that builds p and the for each in p: each.start() loop are executed on your client, and that is where the workers run.
While each Process "connects" to the Manager via mgr.connect(), it never interacts with it. The local Processes don't magically transfer to your server just because you opened a connection. Furthermore, a Manager isn't meant to run workers; it is meant to share data.
You'd have to start workers on the server, then send work to there.
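A common way to do that (only a sketch, shown in Python 3 syntax, where the authkey must be bytes) is to register a shared queue with the manager on the server, start the worker processes there, and let the client merely submit work through the queue proxy:

# --- server.py: runs on the server, does the actual computing ---
from multiprocessing import Process, Queue, cpu_count
from multiprocessing.managers import BaseManager

task_queue = Queue()

class MyManager(BaseManager):
    pass

MyManager.register("get_task_queue", callable=lambda: task_queue)

def worker(q):
    while True:
        n = q.get()                       # work submitted by remote clients
        print("computed", n * n)          # the CPU-heavy part happens here

if __name__ == "__main__":
    for _ in range(cpu_count()):
        Process(target=worker, args=(task_queue,), daemon=True).start()
    mgr = MyManager(address=("", 2288), authkey=b"12345")
    mgr.get_server().serve_forever()

# --- client.py: runs on the Mac, only submits work ---
from multiprocessing.managers import BaseManager

class MyManager(BaseManager):
    pass

MyManager.register("get_task_queue")

if __name__ == "__main__":
    mgr = MyManager(address=("xx.xx.xx.xx", 2288), authkey=b"12345")
    mgr.connect()
    q = mgr.get_task_queue()
    for n in range(100):
        q.put(n)                          # picked up by the workers on the server

With this split, the sqrt(k * k) loop from the question would live inside worker() on the server, and the client shrinks to a submission loop.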

Different behavior in run and start in Python multiprocessing

I am trying to start multiple processes in a Python program, using multiprocessing.Queue to share data between them.
My code is shown below. TestClass is the process that receives packets from a zmq socket and feeds them into the queue. There is another process (which I took out of the code) that keeps fetching messages from the queue. I also have a script running that publishes messages to this zmq channel.
from multiprocessing import Process, Queue
import zmq
import time

class TestClass(Process):
    def __init__(self, queue):
        super(TestClass, self).__init__()
        # Setting up connections
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.SUB)
        self.socket.connect("tcp://192.168.0.6:8577")
        self.socket.setsockopt(zmq.SUBSCRIBE, b'')
        self.queue = queue

    def run(self):
        while True:
            msg = self.socket.recv()
            self.queue.put(msg)

queue = Queue()
c = TestClass(queue)
c.run()
# Do something else
If I use c.run() to start the process, the code runs fine, but it is not running as a separate Process, because run() blocks the following statement.
Then I switched to c.start() to start the process, but it got stuck at the socket.recv() line and never received any incoming messages. Can anybody please explain this and suggest a good solution? Thanks
The issue is that you're creating the zmq socket in the parent process but then trying to use it in the child. zmq contexts and sockets are not fork-safe, so a socket created before the fork stops working in the child process. You can fix it by simply creating the socket in the child rather than in the parent. This has no negative side effects, since you're not trying to use the socket in the parent to begin with.
from multiprocessing import Process, Queue
import zmq
import time

class TestClass(Process):
    def __init__(self, queue):
        super(TestClass, self).__init__()
        self.queue = queue

    def run(self):
        # Setting up connections
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.SUB)
        self.socket.connect("tcp://192.168.0.6:8577")
        self.socket.setsockopt(zmq.SUBSCRIBE, b'')
        while True:
            msg = self.socket.recv()
            self.queue.put(msg)

if __name__ == "__main__":
    queue = Queue()
    c = TestClass(queue)
    c.start()  # Don't use run()
    # Do something else

How to put tcp server on another thread in python

I am trying to write a daemon in Python, but I have no idea how to use a thread to start a parallel TCP server in this daemon, or even what type of server I should use: asyncore? SocketServer? socket?
This is part of my code:
import os
import sys

def demonized():
    child_pid = os.fork()
    if child_pid == 0:
        child_pid = os.fork()
        if child_pid == 0:  # fork twice to daemonize
            file = open('###', "r")  # open file
            event = file.read()
            while event:
                # TODO check for changes, put changes in a list variable
                event = file.read()
            file.close()
        else:
            sys.exit(0)
    else:
        sys.exit(0)

if __name__ == "__main__":
    demonized()
So, in the loop I have a list variable with some data appended on every iteration, and I want to start a thread with a TCP server that waits for a connection in the loop; when a client connects, the server sends it this data (and zeroes the variable). I do not need to handle multiple clients; there will be only one client at a time. What is the optimal way to implement this?
Thank you.
In case you want to avoid repeating boilerplate, Python will soon have a standard module that does the fork() pair and standard-I/O manipulations (which you have not added to your program yet?) that make it a daemon. You can download and use this module right now, from:
http://pypi.python.org/pypi/python-daemon
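Usage is roughly as follows (a sketch, assuming the python-daemon package from that link is installed):

import daemon

def run():
    pass    # open the file, watch for changes, serve the socket, etc.

with daemon.DaemonContext():
    run()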
Running a TCP server in a separate thread is often as simple as:
import socket
import threading

def my_tcp_server():
    sock = socket.socket(...)
    sock.bind(...)
    sock.listen()
    while True:
        conn, address = sock.accept()
        ...
        ... talk on the connection ...
        ...
        conn.close()

def main():
    ...
    threading.Thread(target=my_tcp_server).start()
    ...
I strongly recommend against trying to get your file-reader thread and your socket-answering thread talking with a list and lock of your own devising; such schemes are hard to get working and hard to keep working. Instead, use the standard library's Queue.Queue() class which does all of the locking and appending correctly for you.
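For example (a rough sketch, in Python 3 spelling where the module is called queue; the file path and port are placeholders): the file-reading thread only ever calls put() and the socket thread only ever drains the queue, so neither needs to manage a lock of its own.

import queue
import socket
import threading

events = queue.Queue()

def file_reader(path):
    with open(path) as f:
        for line in f:
            events.put(line.rstrip("\n"))   # producer: no lock handling needed

def tcp_server(port=4200):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("localhost", port))
    sock.listen(1)
    while True:
        conn, _ = sock.accept()
        with conn:
            pending = []
            while True:                     # consumer: drain everything queued so far
                try:
                    pending.append(events.get_nowait())
                except queue.Empty:
                    break
            conn.sendall("\n".join(pending).encode())

threading.Thread(target=file_reader, args=("/tmp/events.log",), daemon=True).start()
tcp_server()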
Do you want to append items to the list in the while event: ... loop and serve this list at the same time? If so, then you have two writers and you must somehow protect your list.
In the sample below, SocketServer.TCPServer and threading.Lock are used:
import threading
import SocketServer
import time

class DataHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        self.server.list_block.acquire()
        self.wfile.write(', '.join(str(item) for item in self.server.data))
        self.wfile.flush()
        del self.server.data[:]  # empty the shared list in place, keeping the same object
        self.server.list_block.release()

if __name__ == '__main__':
    data = []
    list_block = threading.Lock()
    server = SocketServer.TCPServer(('localhost', 0), DataHandler)
    server.list_block = list_block
    server.data = data
    t = threading.Thread(target=server.serve_forever)
    t.start()

    while True:
        list_block.acquire()
        data.append(1)
        list_block.release()
        time.sleep(1)
