Python thread blocking further execution

I have been trying to write a Python script that starts a thread to listen on a socket and send HTTP data to another application launched by the same program. The socket server must be running before the application is executed. However, the thread running the socket server blocks further execution of the program, and it freezes where it is listening. Here is some dummy code.
In module 1:
def runServer(Port, Host, q):
    HTTPServerObj = HTTPServer((Host, Port), RequestHandler)
    HTTPServerObj.handle_request()
    HTTPServerObj.server_close()
    q.put((True, {'messageDoNotDuplicate': 'Data sent successfully by the server'}))

class SpoofHTTPServer(object):
    def runServerThread(self):
        q = Queue.Queue()
        serverThread = Thread(target=runServer, args=(self.Port, self.Host, q))
        serverThread.daemon = True
        serverThread.start()
        result = q.get()
        print result
        return result
In module 2:
from module1 import SpoofHTTPServer
spoofHTTPServer = SpoofHTTPServer()
result = spoofHTTPServer.runServerThread()
rc = myApp.start()
The myApp.start() never gets executed as the thread is blocking it.

It looks to me like what blocks execution is not the thread but q.get(). It waits on the queue until an item is available, but since it runs before the client application is started, nothing ever gets posted into the queue. Maybe you should return q instead, and read from the queue in module 2 after calling myApp.start()?
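A runnable sketch of that idea (Python 3; the request handler, the ephemeral port, and the urllib call standing in for myApp.start() are stand-ins of mine, not from the question):

```python
import queue
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo output quiet

def run_server(server, q):
    server.handle_request()  # serve exactly one request, like the question's code
    server.server_close()
    q.put((True, {'messageDoNotDuplicate': 'Data sent successfully by the server'}))

def run_server_thread():
    # bind BEFORE starting the thread, then return the queue instead of
    # blocking on it
    server = HTTPServer(("localhost", 0), RequestHandler)
    q = queue.Queue()
    t = threading.Thread(target=run_server, args=(server, q), daemon=True)
    t.start()
    return q, server.server_address[1]

q, port = run_server_thread()
# stand-in for myApp.start(): the client request happens here
urllib.request.urlopen("http://localhost:%d/" % port).read()
result = q.get()  # only NOW block on the queue
print(result)
```

Because q.get() is deferred until after the client has run, the main thread no longer deadlocks waiting for a message that can never arrive.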

This may work for you in Python 3. Make a connection to ('localhost', 8080) to see it work.
import queue as Queue
from threading import Thread
from http.server import HTTPServer
from socketserver import BaseRequestHandler as RequestHandler

def runServer(Port, Host, q):
    HTTPServerObj = HTTPServer((Host, Port), RequestHandler)
    HTTPServerObj.handle_request()
    HTTPServerObj.server_close()
    q.put((True, {'messageDoNotDuplicate':
                  'Data sent successfully by the server'}))

class SpoofHTTPServer(object):
    Port = 8080
    Host = ''

    def runServerThread(self):
        q = Queue.Queue()
        serverThread = Thread(target=runServer, args=(self.Port, self.Host, q))
        serverThread.daemon = True
        serverThread.start()
        result = q.get()
        print(result)
        return result

spoofHTTPServer = SpoofHTTPServer()
result = spoofHTTPServer.runServerThread()
##rc = myApp.start()

Related

Concurrent future object blocks forever

I am trying to understand concurrency from David Beazley's talks. But when I execute the server and client and submit the number 20 from the client, the future object seems to block forever when calling future.result(). I can't understand why:
# server.py
# Fib microservice
from socket import *
from fib import fib
from threading import Thread
from concurrent.futures import ProcessPoolExecutor as Pool

pool = Pool(4)

def fib_server(address):
    sock = socket(AF_INET, SOCK_STREAM)
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(address)
    sock.listen(5)
    while True:
        client, addr = sock.accept()
        print("Connection", addr)
        Thread(target=fib_handler, args=(client,), daemon=True).start()

def fib_handler(client):
    while True:
        req = client.recv(100)
        if not req:
            break
        n = int(req)
        future = pool.submit(fib, n)
        # Next line will block!!!!
        result = future.result()
        resp = str(result).encode('ascii') + b'\n'
        client.send(resp)
    print("Closed")

fib_server(('', 25000))
# client.py
import socket

s = socket.socket()
s.connect(('localhost', 25000))
while True:
    num = input("number?")
    s.send(str(num).encode('ascii') + b'\n')
    res = s.recv(1000)
    print('res:', res)
server> python server.py
client> python client.py
We see in order:
server> Connection ('127.0.0.1', 57876)
client> number?20
server>[freeze]
Finally, this post helped me solve the problem: All example concurrent.futures code is failing with "BrokenProcessPool".
"Under Windows, it is important to protect the main loop of code to avoid recursive spawning of subprocesses when using ProcessPoolExecutor or any other parallel code which spawns new processes.
Basically, all your code which creates new processes must be under if __name__ == '__main__':, for the same reason you cannot execute it in the interpreter."
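A minimal stand-alone illustration of that guard (my own sketch, not the original microservice): the pool is created only inside the guarded code, so spawned children can import the module without recursively re-executing it.

```python
# Windows-safe structure: anything that spawns processes lives under the
# __main__ guard; fib stays at module level so child processes can import it.
from concurrent.futures import ProcessPoolExecutor

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def main():
    with ProcessPoolExecutor(2) as pool:  # created only in the main process
        future = pool.submit(fib, 10)
        result = future.result()          # returns instead of hanging
        print(result)                     # 55

if __name__ == "__main__":
    main()
```

If the pool were created at module level with no guard, each spawned child would try to create another pool on import, which is exactly the recursive spawning the quoted advice warns about.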

Python: how to create a server to supervise a thread pool?

I have a thread pool that handles some tasks concurrently. Now I'd like the tasks (multiply_by_2 here) to print something before exiting.
Originally, I created a lock and passed it to each worker thread. If a thread wants to print something, it first acquires the lock, prints its message to stdout, then releases the lock.
Now, I want a dedicated event-driven server thread to handle the printing. If a thread wants to print something, it just sends its message to that server via a Unix domain socket (AF_UNIX). I hope that this way each thread's blocking time is reduced (no need to wait for the lock) and I don't need to share a lock among worker threads. The server thread just prints whatever messages it gets from clients (i.e. the worker threads) in order.
I tried for some time with Python's asyncio module (requiring Python 3.7+) but couldn't figure it out. How should I do it?
Here is a cleaned-up template:
# Python 3.7+
import asyncio
import multiprocessing.dummy as mp  # Threading wrapped using multiprocessing API.
import os
import socket
import sys
import threading
import time

server_address = './uds_socket'  # UNIX domain socket

def run_multiple_clients_until_complete(input_list):
    pool = mp.Pool(8)
    result_list = pool.map(multiply_by_2, input_list)
    return result_list

def multiply_by_2(n):
    time.sleep(0.2)  # Simulates some blocking call.
    message_str = "client: n = %d" % n
    # TODO send message_str.encode() to server
    return n * 2

# Server's callback when it gets a client connection
# If you want to change it, please do..
def client_connected_cb(
        stream_reader: asyncio.StreamReader,
        stream_writer: asyncio.StreamWriter) -> None:
    message_str = stream_reader.read().decode()
    print(message_str)

def create_server_thread():
    pass  # TODO

# Let the server finish handling all connections it got, then
# stop the server and join the thread
def stop_server_and_wait_thread(thread):
    pass  # TODO

def work(input_list):
    thread = create_server_thread()
    result_list = run_multiple_clients_until_complete(input_list)
    stop_server_and_wait_thread(thread)
    return result_list

def main():
    input_list = list(range(20))
    result_list = work(input_list)
    print(result_list)

if __name__ == "__main__":
    sys.exit(main())
Some extra requirements:
Don't make these functions async: run_multiple_clients_until_complete(), multiply_by_2(), main().
It would be nicer to use datagram-style SOCK_DGRAM instead of stream-style SOCK_STREAM, but it's not necessary.
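One way to fill in the TODOs, sketched under these assumptions: the event loop runs in a dedicated thread, the workers connect over the AF_UNIX stream socket (Unix only), and a crude sleep stands in for proper draining of in-flight connections. The names mirror the template, but this is my sketch, not a canonical answer.

```python
import asyncio
import os
import socket
import tempfile
import threading
import time

server_address = os.path.join(tempfile.mkdtemp(), "uds_socket")
printed = []  # collected so the sketch is observable; print() also works

async def client_connected_cb(stream_reader, stream_writer):
    # read() returns once the client closes its end of the connection (EOF)
    message_str = (await stream_reader.read()).decode()
    printed.append(message_str)
    print(message_str)
    stream_writer.close()

def create_server_thread():
    loop = asyncio.new_event_loop()
    started = threading.Event()

    def run():
        asyncio.set_event_loop(loop)
        server = loop.run_until_complete(
            asyncio.start_unix_server(client_connected_cb, path=server_address))
        started.set()
        loop.run_forever()  # until stop_server_and_wait_thread() stops it
        server.close()
        loop.run_until_complete(server.wait_closed())
        loop.close()

    thread = threading.Thread(target=run)
    thread.start()
    started.wait()  # don't return before the socket file exists
    return loop, thread

def stop_server_and_wait_thread(loop, thread):
    loop.call_soon_threadsafe(loop.stop)
    thread.join()
    os.unlink(server_address)

def send_to_server(message_str):
    # called from the synchronous worker threads; no async code needed here
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(server_address)
        s.sendall(message_str.encode())

loop, thread = create_server_thread()
for n in range(3):
    send_to_server("client: n = %d" % n)
time.sleep(0.5)  # crude: give in-flight handlers time to finish
stop_server_and_wait_thread(loop, thread)
```

In the real template, send_to_server() would be called from multiply_by_2(), and the sleep should be replaced by proper draining (e.g. counting handled connections against submitted ones) before stopping the loop.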

Multithreaded TCP socket

I'm trying to create a threaded TCP socket server that can handle multiple socket requests at a time.
To test it, I launch several threads on the client side to see if my server can handle them. The first socket is printed successfully, but I get [Errno 32] Broken pipe for the others.
I don't know how to avoid it.
import threading
import socketserver
import graphitesend

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        if data != "":
            print(data)

class ThreadedTCPServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

    def __init__(self, host, port):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), ThreadedTCPRequestHandler)

    def stop(self):
        self.server_close()
        self.shutdown()

    def start(self):
        threading.Thread(target=self._on_started).start()

    def _on_started(self):
        self.serve_forever()

def client(g):
    g.send("test", 1)

if __name__ == "__main__":
    HOST, PORT = "localhost", 2003
    server = ThreadedTCPServer(HOST, PORT)
    server.start()
    g = graphitesend.init(graphite_server=HOST, graphite_port=PORT)
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    threading.Thread(target=client, args=(g,)).start()
    server.stop()
It's a little bit difficult to determine what exactly you're expecting to happen, but I think the proximate cause is that you aren't giving your clients time to run before killing the server.
When you construct a Thread object and call its start method, you're creating a thread, and getting it ready to run. It will then be placed on the "runnable" task queue on your system, but it will be competing with your main thread and all your other threads (and indeed all other tasks on the same machine) for CPU time.
Your multiple threads (main plus others) are also likely being serialized by the python interpreter's GIL (Global Interpreter Lock -- assuming you're using the "standard" CPython) which means they may not have even gotten "out of the gate" yet.
But then you're shutting down the server with server_close() before they've had a chance to send anything. That's consistent with the "Broken Pipe" error: your remaining clients are attempting to write to a socket that has been closed by the "remote" end.
You should collect the thread objects as you create them and put them in a list (so that you can reference them later). When you're finished creating and starting all of them, then go back through the list and call the .join method on each thread object. This will ensure that the thread has had a chance to finish. Only then should you shut down the server. Something like this:
threads = []
for n in range(7):
    th = threading.Thread(target=client, args=(g,))
    th.start()
    threads.append(th)

# All threads created. Wait for them to finish.
for th in threads:
    th.join()

server.stop()
One other thing to note is that all of your clients are sharing the same single connection to send to the server, so that your server will never create more than one thread: as far as it's concerned, there is only a single client. You should probably move the graphitesend.init into the client function if you actually want separate connections for each client.
(Disclaimer: I know nothing about graphitesend except what I could glean in a 15 second glance at the first result in google; I'm assuming it's basically just a wrapper around a TCP connection.)
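To illustrate both points together, here is a self-contained stand-in of mine (plain sockets and a ThreadingTCPServer replace graphitesend, which I haven't used): each client opens its own connection, the server acknowledges each message so clients don't outrun it, and the server is only stopped after every client thread has been joined.

```python
import socket
import socketserver
import threading

received = []
received_lock = threading.Lock()

class Handler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        with received_lock:
            received.append(data)
        self.request.sendall(b"ok")  # ack, so the client knows we got it

server = socketserver.ThreadingTCPServer(("localhost", 0), Handler)
host, port = server.server_address  # port 0 -> OS picks a free port
threading.Thread(target=server.serve_forever).start()

def client(n):
    # each client opens its OWN connection, so the server spawns one
    # handler thread per client
    with socket.create_connection((host, port)) as s:
        s.sendall(b"test %d" % n)
        s.recv(2)  # wait for the ack before closing

threads = []
for n in range(7):
    th = threading.Thread(target=client, args=(n,))
    th.start()
    threads.append(th)

for th in threads:
    th.join()          # every client has sent and been acknowledged

server.shutdown()      # stop the serve_forever loop ...
server.server_close()  # ... and (Python 3.7+) join remaining handler threads

print(len(received))   # 7
```

The acknowledgment step matters: a client's sendall() can return before the server has even accepted the connection, so shutting down immediately after join() could still drop requests without it.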

Worker connecting to server but executing on client with multiprocessing package (python 2.7)

First post here, hello everyone.
I have a problem with the multiprocessing package in Python 2.7.
I want some processes to run in parallel on a server; they do connect, but they are executed locally instead.
This is the code I use on the server (Ubuntu 14.04):
from multiprocessing import Process
from multiprocessing.managers import BaseManager
from multiprocessing import cpu_count

class MyManager(BaseManager):
    pass

def server():
    mgr = MyManager(address=("", 2288), authkey="12345")
    mgr.get_server().serve_forever()

if __name__ == "__main__":
    print "number of cpus/cores:", cpu_count()
    server = Process(target=server)
    server.start()
    print "server started"
    server.join()
    server.terminate()
while this is the code that runs on the client (Mac OS 10.11):
from multiprocessing import Manager
from multiprocessing import Process
from multiprocessing import current_process
from multiprocessing.managers import BaseManager
from math import sqrt

class MyManager(BaseManager):
    pass

def worker(address, port, authkey):
    mgr = MyManager(address=(address, port), authkey=authkey)
    try:
        mgr.connect()
        print "- {} connected to {}:{}".format(current_process().name, address, port)
    except:
        print "- {} could not connect to server ({}:{})".format(current_process().name, address, port)
    current_process().authkey = authkey
    for k in range(1000000000):
        sqrt(k * k)

if __name__ == "__main__":
    # create processes
    p = [Process(target=worker, args=("xx.xx.xx.xx", 2288, "12345")) for _ in range(4)]
    # start processes
    for each in p:
        each.start()
    # join the processes
    for each in p:
        each.join()
The for loop

for k in range(1000000000):
    sqrt(k * k)

inside the worker function is just there to give the workers plenty of work, so I can monitor their activity in Activity Monitor or with top.
The problem is that the processes connect (indeed, if I put in a wrong address they do not), but they are executed on the local machine: I see the server's CPUs staying idle while the local CPUs all go toward 100%.
Am I getting something wrong?
You are starting your Processes locally, on your client. The list comprehension that builds p and the for each in p: each.start() loop run on your client, which is where the workers execute.
While each Process "connects" to the Manager via mgr.connect(), it never interacts with it. The local processes don't magically transfer to your server just because you opened a connection. Furthermore, a Manager isn't meant to run workers; it is meant to share data.
You'd have to start the workers on the server and then send work to them.

Why does putting a socket in a queue close it?

I'm writing a server that operates with a fixed number of workers, each with different properties (in the snippet below, n is such a property).
Upon getting a request, I would like to put it into a queue, so the first available worker can deal with the task.
Unfortunately, the socket gets closed when it's enqueued.
import threading
from queue import Queue
import socketserver

thread = True
queue = Queue()

class BasicHandler(socketserver.BaseRequestHandler):
    def handle(self):
        while True:
            sock = self.request
            byte = sock.recv(10)
            print(byte)

class ThreadedHandler(socketserver.BaseRequestHandler):
    def handle(self):
        queue.put(self.request)

def worker(n):
    print('Started worker ' + str(n))
    while True:
        sock = queue.get()
        byte = sock.recv(10)
        print(byte)

if thread:
    [threading.Thread(target=worker, args=(n,)).start() for n in range(2)]
    handler = ThreadedHandler
else:
    handler = BasicHandler

socketserver.TCPServer.allow_reuse_address = True
server = socketserver.TCPServer(("localhost", 9999), handler)
server.serve_forever()
Running the above snippet with thread = False works fine, but when I try to connect to the thread = True version, telnet immediately says:
Connection closed by foreign host.
and the server prints:
Started worker 0
Started worker 1
b''
The request is automatically closed when the method ThreadedHandler.handle() finishes. You have to override TCPServer.shutdown_request if you want to keep the socket open.
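A sketch of that override (my own minimal demonstration, using an ephemeral port and a single worker rather than the original setup):

```python
import socket
import socketserver
import threading
from queue import Queue

task_queue = Queue()
results = Queue()

class ThreadedHandler(socketserver.BaseRequestHandler):
    def handle(self):
        task_queue.put(self.request)  # hand the still-open socket to a worker

class QueuingTCPServer(socketserver.TCPServer):
    def shutdown_request(self, request):
        pass  # no-op: don't shut down / close the socket when handle() returns

def worker():
    sock = task_queue.get()
    results.put(sock.recv(10))  # the socket is still open here
    sock.close()                # the worker now owns it, so it closes it

server = QueuingTCPServer(("localhost", 0), ThreadedHandler)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()
threading.Thread(target=worker, daemon=True).start()

with socket.create_connection((host, port)) as conn:
    conn.sendall(b"hello")

data = results.get(timeout=5)
print(data)  # b'hello'
server.shutdown()
server.server_close()
```

With the default TCPServer, shutdown_request() sends a FIN and closes the request socket as soon as handle() returns, which is why telnet saw the connection drop; the override transfers that responsibility to the worker.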
