I want to dynamically create multiple Processes, where each instance has a queue for incoming messages from other instances, and each instance can also create new instances. So we end up with a network of processes all sending to each other. Every instance is allowed to send to every other.
The code below would do what I want: it uses a Manager.dict() to store the queues, making sure updates are propagated, and a Lock() to protect write access to the queues. However, when adding a new queue it throws "RuntimeError: Queue objects should only be shared between processes through inheritance".
The problem is that at start-up we don't know how many queues will eventually be needed, so we have to create them dynamically. But since we can't share queues except at construction time, I don't know how to do that.
I know that one possibility would be to make queues a global variable instead of a managed one passed in to __init__: the problem then, as I understand it, is that additions to the queues variable wouldn't be propagated to other processes.
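One idea I haven't fully tested: Manager().Queue() returns a picklable proxy rather than a raw Queue, so perhaps those proxies could be stored in the managed dict and looked up later by processes created after start-up. A minimal sketch of what I mean (untested, names are only illustrative):
from multiprocessing import Process, Manager, Lock

def worker(queues, lock, my_id):
    # Queue *proxies* from a Manager can be pickled, so they can live in a
    # managed dict and be fetched by processes created at any time.
    queues[my_id].put("hello from %d" % my_id)

if __name__ == "__main__":
    manager = Manager()
    queues = manager.dict()
    lock = Lock()
    with lock:
        queues[0] = manager.Queue()  # a proxy, not a raw multiprocessing.Queue
    p = Process(target=worker, args=(queues, lock, 0))
    p.start()
    p.join()
    print(queues[0].get())           # prints "hello from 0"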
EDIT I'm working on evolutionary algorithms. EAs are a type of machine learning technique. An EA simulates a "population", which evolves by survival of the fittest, crossover, and mutation. In parallel EAs, as here, we also have migration between populations, corresponding to interprocess communication. Islands can also spawn new islands, and so we need a way to send messages between dynamically-created processes.
import random, time
from multiprocessing import Process, Queue, Lock, Manager, current_process
try:
from queue import Empty as EmptyQueueException
except ImportError:
from Queue import Empty as EmptyQueueException
class MyProcess(Process):
def __init__(self, queues, lock):
        # no target needed: run() is overridden below
        super(MyProcess, self).__init__()
self.queues = queues
self.lock = lock
# acquire lock and add a new queue for this process
with self.lock:
self.id = len(list(self.queues.keys()))
self.queues[self.id] = Queue()
def run(self):
while len(list(self.queues.keys())) < 10:
# make a new process
            new = MyProcess(self.queues, self.lock)
new.start()
# send a message to a random process
dest_key = random.choice(list(self.queues.keys()))
dest = self.queues[dest_key]
dest.put("hello to %s from %s" % (dest_key, self.id))
# receive messages
message = True
while message:
try:
message = self.queues[self.id].get(False) # don't block
print("%s received: %s" % (self.id, message))
except EmptyQueueException:
break
# what queues does this process know about?
print("%d: I know of %s" %
(self.id, " ".join([str(id) for id in self.queues.keys()])))
time.sleep(1)
if __name__ == "__main__":
# Construct MyProcess with a Manager.dict for storing the queues
# and a lock to protect write access. Start.
MyProcess(Manager().dict(), Lock()).start()
I'm not entirely sure what your use case actually is here. Perhaps if you elaborate a bit more on why you want each process to dynamically spawn a child with a connected queue, it'll be clearer what the right solution would be in this situation.
Anyway, with the question as it is, it seems that there is not really a good way to dynamically create pipes or queues with multiprocessing right now.
I think that if you're willing to spawn threads within each of your processes, you may be able to use multiprocessing.connection.Listener/Client to communicate back and forth. Rather than spawning threads, I took an approach using network sockets and select to communicate between processes.
Dynamic process spawning and network sockets may still be flaky depending on how multiprocessing cleans up your file descriptors when spawning/forking a new process and your solution will most likely work more easily on *nix derivatives. If you're concerned about socket overhead you could use unix domain sockets to be a little more lightweight at the cost of added complexity running nodes on multiple worker machines.
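For reference, here is a minimal sketch of what the Listener/Client route might look like (the address and authkey are made up for illustration; this is separate from the socket example that follows):
import time
from multiprocessing import Process
from multiprocessing.connection import Listener, Client

ADDRESS = ("127.0.0.1", 6000)   # illustrative
AUTHKEY = b"secret"

def serve():
    # One process listens; accept() yields a Connection it can recv() on.
    listener = Listener(ADDRESS, authkey=AUTHKEY)
    conn = listener.accept()
    print("server got: %r" % (conn.recv(),))
    conn.close()
    listener.close()

if __name__ == "__main__":
    p = Process(target=serve)
    p.start()
    time.sleep(0.5)  # crude: give the listener time to come up in this sketch
    # Any process that knows the address can connect and send picklable objects.
    conn = Client(ADDRESS, authkey=AUTHKEY)
    conn.send({"msg": "hello", "origin": "client"})
    conn.close()
    p.join()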
Anyway, here's an example using network sockets and a global process list to accomplish this since I was unable to find a good way to make multiprocessing do it.
import collections
import multiprocessing
import random
import select
import socket
import time
class MessagePassingProcess(multiprocessing.Process):
def __init__(self, id_, processes):
self.id = id_
self.processes = processes
self.queue = collections.deque()
super(MessagePassingProcess, self).__init__()
def run(self):
print "Running"
inputs = []
outputs = []
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
address = self.processes[self.id]["address"]
print "Process %s binding to %s"%(self.id, address)
server.bind(address)
server.listen(5)
inputs.append(server)
process = self.processes[self.id]
process["listening"] = True
self.processes[self.id] = process
print "Process %s now listening!(%s)"%(self.id, process)
while inputs:
readable, writable, exceptional = select.select(inputs,
outputs,
inputs,
0.1)
for sock in readable:
print "Process %s has a readable scoket: %s"%(self.id,
sock)
if sock is server:
print "Process %s has a readable server scoket: %s"%(self.id,
sock)
conn, addr = sock.accept()
conn.setblocking(0)
inputs.append(conn)
else:
data = sock.recv(1024)
if data:
self.queue.append(data)
print "non server readable socket with data"
else:
inputs.remove(sock)
sock.close()
print "non server readable socket with no data"
for sock in exceptional:
print "exception occured on socket %s"%(sock)
inputs.remove(sock)
sock.close()
while len(self.queue) >= 1:
print "Received:", self.queue.pop()
# send a message to a random process:
random_id = random.choice(list(self.processes.keys()))
print "%s Attempting to send message to %s"%(self.id, random_id)
random_process = self.processes[random_id]
print "random_process:", random_process
if random_process["listening"]:
random_address = random_process["address"]
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect(random_address)
except socket.error:
print "%s failed to send to %s"%(self.id, random_id)
else:
s.send("Hello World!")
finally:
s.close()
time.sleep(1)
if __name__=="__main__":
print "hostname:", socket.getfqdn()
print dir(multiprocessing)
manager = multiprocessing.Manager()
processes = manager.dict()
joinable = []
for n in xrange(multiprocessing.cpu_count()):
mpp = MessagePassingProcess(n, processes)
processes[n] = {"id":n,
"address":("127.0.0.1",7000+n),
"listening":False,
}
print "processes[%s] = %s"%(n, processes[n])
mpp.start()
joinable.append(mpp)
for process in joinable:
process.join()
With a lot of polish and testing love this might be a logical extension to multiprocessing.Process and/or multiprocessing.Pool as this does seem like something people would use if it were available in the standard lib. It may also be reasonable to create a DynamicQueue class that uses sockets to be discoverable to other queues.
Anyway, hope it helps. Please update if you figure out a better way to make this work.
This code is based on the accepted answer. It's in Python 3, since OS X Snow Leopard segfaults on some uses of multiprocessing.
#!/usr/bin/env python3
import collections
from multiprocessing import Process, Manager, Lock, cpu_count
import random
import select
import socket
import time
import pickle
class Message:
def __init__(self, origin):
self.type = "long_msg"
self.data = "X" * 3000
self.origin = origin
def __str__(self):
return "%s %d" % (self.type, self.origin)
class MessagePassingProcess(Process):
def __init__(self, processes, lock):
self.lock = lock
self.processes = processes
with self.lock:
self.id = len(list(processes.keys()))
process_dict = {"id": self.id,
"address": ("127.0.0.1", 7000 + self.id),
"listening": False
}
self.processes[self.id] = process_dict
print("new process: processes[%s] = %s" % (self.id, processes[self.id]))
self.queue = collections.deque()
super(MessagePassingProcess, self).__init__()
def run(self):
print("Running")
self.processes[self.id]["joinable"] = True
inputs = []
outputs = []
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
address = self.processes[self.id]["address"]
print("Process %s binding to %s" % (self.id, address))
server.bind(address)
server.listen(5)
inputs.append(server)
process = self.processes[self.id]
process["listening"] = True
self.processes[self.id] = process
print("Process %s now listening!(%s)" % (self.id, process))
while inputs and len(list(self.processes.keys())) < 10:
readable, writable, exceptional = select.select(inputs,
outputs,
inputs,
0.1)
# read incoming messages
for sock in readable:
print("Process %s has a readable socket: %s" % (self.id, sock))
if sock is server:
print("Process %s has a readable server socket: %s" %
(self.id, sock))
conn, addr = sock.accept()
conn.setblocking(0)
inputs.append(conn)
else:
data = True
item = bytes() # empty bytes object, to be added to
recvs = 0
while data:
data = sock.recv(1024)
item += data
recvs += 1
if len(item):
self.queue.append(item)
print("non server readable socket: recvd %d bytes in %d parts"
% (len(item), recvs))
else:
inputs.remove(sock)
sock.close()
print("non server readable socket: nothing to read")
for sock in exceptional:
print("exception occured on socket %s" % (sock))
inputs.remove(sock)
sock.close()
while len(self.queue):
msg = pickle.loads(self.queue.pop())
print("received:" + str(msg))
# send a message to a random process:
random_id = random.choice(list(self.processes.keys()))
print("%s attempting to send message to %s" % (self.id, random_id))
random_process = self.processes[random_id]
if random_process["listening"]:
random_address = random_process["address"]
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect(random_address)
except socket.error:
print("%s failed to send to %s"%(self.id, random_id))
else:
item = pickle.dumps(Message(self.id))
print("sending a total of %d bytes" % len(item))
s.sendall(item)
finally:
s.close()
# make a new process
if random.random() < 0.1:
mpp = MessagePassingProcess(self.processes, self.lock)
mpp.start()
else:
time.sleep(1.0)
print("process %d finished looping" % self.id)
if __name__=="__main__":
manager = Manager()
processes = manager.dict()
lock = Lock()
# make just one process: it will make more
mpp = MessagePassingProcess(processes, lock)
mpp.start()
# this doesn't join on all the other processes created
# subsequently
mpp.join()
The standard library socketserver is provided to help avoid programming select() manually. In this version, we start a socketserver in a separate thread so that each Process can do (well, pretend to do) computation in its main loop.
#!/usr/bin/env python3
# Each Node is an mp.Process. It opens a client-side socket to send a
# message to another Node. Each Node listens using a separate thread
# running a socketserver (so avoiding manual programming of select()),
# which itself starts a new thread to handle each incoming connection.
# The socketserver puts received messages on an mp.Queue, where they
# are picked up by the Node for processing once per loop. This setup
# allows the Node to do computation in its main loop.
import multiprocessing as mp
import threading, random, socket, socketserver, time, pickle, queue
class Message:
def __init__(self, origin):
self.type = "long_message"
self.data = "X" * random.randint(0, 2000)
self.origin = origin
def __str__(self):
return "Message of type %s, length %d from %d" % (
self.type, len(self.data), self.origin)
class Node(mp.Process):
def __init__(self, nodes, lock):
super().__init__()
# Add this node to the Manager.dict of node descriptors.
# Write-access is protected by a Lock.
self.nodes = nodes
self.lock = lock
with self.lock:
self.id = len(list(nodes.keys()))
host = "127.0.0.1"
port = 7022 + self.id
node = {"id": self.id, "address": (host, port), "listening": False}
self.nodes[self.id] = node
print("new node: nodes[%s] = %s" % (self.id, nodes[self.id]))
# Set up socketserver.
# don't know why collections.deque or queue.Queue don't work here.
self.queue = mp.Queue()
# This MixIn usage is directly from the python.org
# socketserver docs
class ThreadedTCPServer(socketserver.ThreadingMixIn,
socketserver.TCPServer):
pass
class HandlerWithQueue(socketserver.BaseRequestHandler):
# Something of a hack: using class variables to give the
# Handler access to this Node-specific data
handler_queue = self.queue
handler_id = self.id
def handle(self):
# could receive data in multiple chunks, so loop and
# concatenate
item = bytes()
recvs = 0
data = True
                while data:
data = self.request.recv(4096)
item += data
recvs += 1
if len(item):
# Receive a pickle here and put it straight on
# queue. Will be unpickled when taken off queue.
print("%d: socketserver received %d bytes in %d recv()s"
% (self.handler_id, len(item), recvs))
self.handler_queue.put(item)
self.server = ThreadedTCPServer((host, port), HandlerWithQueue)
self.server_thread = threading.Thread(target=self.server.serve_forever)
self.server_thread.setDaemon(True) # Tell it to exit when Node exits.
self.server_thread.start()
print("%d: server loop running in thread %s" %
(self.id, self.server_thread.getName()))
# Now ready to receive
with self.lock:
# Careful: if we assign directly to
# self.nodes[self.id]["listening"], the new value *won't*
# be propagated to other Nodes by the Manager.dict. Have
# to use this hack to re-assign the Manager.dict key.
node = self.nodes[self.id]
node["listening"] = True
self.nodes[self.id] = node
def send(self):
# Find a destination. All listening nodes are eligible except self.
dests = [node for node in self.nodes.values()
if node["id"] != self.id and node["listening"]]
if len(dests) < 1:
print("%d: no node to send to" % self.id)
return
dest = random.choice(dests)
print("%d: sending to %s" % (self.id, dest["id"]))
# send
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect(dest["address"])
except socket.error:
print("%s: failed to send to %s" % (self.id, dest["id"]))
else:
item = pickle.dumps(Message(self.id))
s.sendall(item)
finally:
s.close()
# Check our queue for incoming messages.
def receive(self):
while True:
try:
message = pickle.loads(self.queue.get(False))
print("%d: received %s" % (self.id, str(message)))
except queue.Empty:
break
def run(self):
print("%d: in run()" % self.id)
# Main loop. Loop until at least 10 Nodes exist. Because of
# parallel processing we might get a few more
while len(list(self.nodes.keys())) < 10:
time.sleep(random.random() * 0.5) # simulate heavy computation
self.send()
time.sleep(random.random() * 0.5) # simulate heavy computation
self.receive()
# maybe make a new node
if random.random() < 0.1:
new = Node(self.nodes, self.lock)
new.start()
# Seems natural to call server_thread.shutdown() here, but it
# hangs. But since we've set the thread to be a daemon, it
# will exit when this process does.
print("%d: finished" % self.id)
if __name__=="__main__":
manager = mp.Manager()
nodes = manager.dict()
lock = mp.Lock()
# make just one node: it will make more
node0 = Node(nodes, lock)
node0.start()
# This doesn't join on all the other nodes created subsequently.
# But everything seems to work out ok.
node0.join()
Related
I hope the title is appropriate. If not, please suggest an alternative. I am working with the following Python client class.
import Queue
import socket
import struct
import threading
import time
class ClientCommand(object):
CONNECT, SEND, RECEIVE, CLOSE = range(4)
def __init__(self, type, data=None):
self.type = type
self.data = data
class ClientReply(object):
ERROR, SUCCESS = range(2)
def __init__(self, type, data = None):
self.type = type
self.data = data
class SocketClientThread(threading.Thread):
def __init__(self, cmd_q = Queue.Queue(), reply_q = Queue.Queue()):
super(SocketClientThread, self).__init__()
self.cmd_q = cmd_q
self.reply_q = reply_q
self.alive = threading.Event()
self.alive.set()
self.socket = None
#self.stopped = False
self.handlers = {
ClientCommand.CONNECT: self._handle_CONNECT,
ClientCommand.CLOSE: self._handle_CLOSE,
ClientCommand.SEND: self._handle_SEND,
ClientCommand.RECEIVE: self._handle_RECEIVE
}
def run(self):
while self.alive.isSet():
#while not self.stopped:
try:
cmd = self.cmd_q.get(True, 0.1)
self.handlers[cmd.type](cmd)
except Queue.Empty as e:
continue
def stop(self):
self.alive.clear()
def join(self, timeout=None):
self.alive.clear()
threading.Thread.join(self, timeout)
def _handle_CONNECT(self, cmd):
try:
self.socket = socket.socket(
socket.AF_INET, socket.SOCK_STREAM)
self.socket.connect((cmd.data[0], cmd.data[1]))
self.reply_q.put(self._success_reply())
except IOError as e:
self.reply_q.put(self._error_reply(str(e)))
def _handle_CLOSE(self, cmd):
self.socket.close()
reply = ClientReply(ClientReply.SUCCESS)
self.reply_q.put(reply)
def _handle_SEND(self, cmd):
try:
print "about to send: ", cmd.data
self.socket.sendall(cmd.data)
print "sending data"
self.reply_q.put(self._success_reply())
except IOError as e:
print "Error in sending"
self.reply_q.put(self._error_reply(str(e)))
def _handle_RECEIVE(self, cmd):
try:
#TODO Add check for len(data)
flag = True
while flag:
print "Receiving Data"
data = self._recv_n_bytes()
                if len(data) != 0:
self.reply_q.put(self._success_reply(data))
if data == "Stop":
print "Stop command"
flag = False
except IOError as e:
self.reply_q.put(self._error_reply(str(e)))
def _recv_n_bytes(self):
data = self.socket.recv(1024)
return data
    def _error_reply(self, errstr):
return ClientReply(ClientReply.ERROR, errstr)
def _success_reply(self, data = None):
return ClientReply(ClientReply.SUCCESS, data)
My main script code -
import socket
import time
import Queue
import sys
import os
from client import *
sct = SocketClientThread()
sct.start()
host = '127.0.0.1'
port = 1234
sct.cmd_q.put(ClientCommand(ClientCommand.CONNECT, (host, port)))
try:
while True:
sct.cmd_q.put(ClientCommand(ClientCommand.RECEIVE))
reply = sct.reply_q
tmp = reply.get(True)
data = tmp.data
if data != None:
if data != "step1":
                pass  # call function to print something
else:
                # call function that prints incoming data till the server stops sending data
print "Sending OK msg"
sct.cmd_q.put(ClientCommand(ClientCommand.SEND, "Hello\n"))
print "Done"
else:
print "No Data"
except:
#TODO Add better error handling than a print
print "Server down"
So here is the issue. Once the thread starts and the receive handler is called, I get some data; if that data is not "step1", I just call a function (another script) to print it.
However, if the data is "step1", I call a function which then keeps printing whatever data the server sends next, until the server sends a "Stop" message. At that point I break out of the receive handler and try to send an "OK" message to the server.
However, as soon as I break out of the receive handler, it automatically gets called again. So while I am trying to send back a message, the client is already waiting for data from the server again, and because the receive handler is running, the send command blocks.
I can't seem to understand how to switch between receiving and sending. What is wrong with my approach here, and how should I fix it? Do I need to rewrite the code to have two separate threads for sending and receiving?
If you require any more details please let me know before you decide to flag my question for no reason.
However, as soon as I break out of the "Receive Handler", it
automatically calls upon that function again.
This is because you call sct.cmd_q.put(ClientCommand(ClientCommand.RECEIVE)) inside the while True loop, which runs once for each chunk of data received. So for every chunk before "step1", one more command to call the receive handler (which itself loops until "Stop") is put into the command queue, and those queued RECEIVE commands are of course executed before the SEND command. If you place the RECEIVE call before this while True loop, your approach can work.
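In other words, something along these lines (a sketch of just the main loop, reusing the names from your script):
sct.cmd_q.put(ClientCommand(ClientCommand.CONNECT, (host, port)))
# Queue RECEIVE once, *before* the loop: the handler's internal loop keeps
# producing replies until it sees "Stop", so it must not be re-queued for
# every reply that is read here.
sct.cmd_q.put(ClientCommand(ClientCommand.RECEIVE))
while True:
    reply = sct.reply_q.get(True)
    data = reply.data
    if data is None:
        print "No Data"
    elif data == "Stop":
        # receiving has finished, so SEND is no longer stuck behind a
        # competing RECEIVE in the command queue
        print "Sending OK msg"
        sct.cmd_q.put(ClientCommand(ClientCommand.SEND, "Hello\n"))
        break
    else:
        print "received:", data  # hand off to your print function here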
The error is
if msgid != "step1":
NameError: name 'msgid' is not defined
Instead of
#TODO Add better error handling than a print
print "Server down"
you would have done better to write
raise
and you would have spotted it immediately.
From my understanding, Python can only run one thread at a time, so if I were to do something like this:
import socket, select
from threading import Thread
import config
class Source(Thread):
def __init__(self):
self._wait = False
self._host = (config.HOST, config.PORT + 1)
self._socket = socket.socket()
self._socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self._sock = None
self._connections = []
self._mount = "None"
self._writers = []
self._createServer()
Thread.__init__(self)
def _createServer(self):
self._socket.bind(self._host)
self._socket.listen(2)
self._connections.append(self._socket)
self._audioPackets=[]
def _addPacket(self, packet):
self._audioPackets.append(packet)
def _removePacket(self, packet):
self._audioPackets.remove(packet)
def _getPacket(self):
if len(self._audioPackets) > 0:
return self._audioPackets[0]
else:
return None
def _sendOK(self, sock):
sock.send("OK")
def _sendDenied(self, sock):
sock.send("DENIED")
def _sendMount(self, sock):
sock.send("mount:{0}".format(self._mount))
def _sendBufPacket(self, sock, packet):
packet = "buffer:%s" % packet
sock.send(packet)
def recv(self, sock, data):
data = data.split(":", 1)
if data[0] == "WAIT": self._wait = True
elif data[0] == "STOP_WAITING": self._wait = False
elif data[0] == "LOGIN":
if data[1] == config.SOURCE_AUTH:
self._source = sock
self._sendOK(sock)
else:
self._sendClose(sock)
elif data[0] == "MOUNT":
if self._source == sock:
self._mount = data[1]
else:
self._sendClose(sock)
elif data[0] == "CLIENT":
self._sendMount(sock)
self._writers.append(sock)
def _sendCloseAll(self):
for sock in self._connections:
sock.send("CLOSE")
sock.close()
def _sendClose(self, sock):
sock.send("CLOSE")
sock.close()
def main(self):
while True:
rl, wl, xl = select.select(self._connections, self._writers, [], 0.2)
for sock in rl:
if sock == self._socket:
con, ip = sock.accept()
self._connections.append(con)
else:
data = sock.recv(config.BUFFER)
if data:
self.recv(sock, data)
else:
if sock in self._writers:
self._writers.remove(sock)
if sock in self._connections:
self._connections.remove(sock)
for sock in wl:
packet = self._getPacket()
if packet != None:
self._sendBufPacket(sock, packet)
def run(self):
self.main()
class writeThread(Thread):
def __init__(self):
self.running = False
def make(self, client):
self.client = client
self.running = True
def run(self):
host = (config.HOST, config.PORT+1)
sock = socket.socket()
sock.connect(host)
sock.send("CLIENT")
sock.send("MOUNT:mountpoint")
while self.running:
data = sock.recv(config.BUFFER)
if data:
data = data.split(":", 1)
if data[0] == "buffer":
self.client.send(data[1])
elif data[0] == "CLOSE":
self.client.close()
break
if __name__=="__main__":
source = Source()
source.start()
webserver = WebServer()
webserver.runloop()
If I need to post the webserver part I will, but I'll explain it.
Okay, so basically when someone connects to the webserver under the mountpoint that was set, they get their own thread that grabs the data from Source() and sends it to them. Now say another person connects to the mountpoint while the last client as well as the source is still going. Wouldn't the new client be blocked from getting the Source data, considering there are already two active threads?
Your understanding of how threads work in Python seems to be incorrect, based on the question you are asking. If used correctly, threads will not block each other: you can instantiate multiple threads in Python. The limitation is that, due to the Global Interpreter Lock (GIL), you cannot get the full parallelism expected in thread programming (e.g. simultaneous execution and thus reduced runtime).
For CPU-bound work, the two threads will take, together, roughly the same amount of time they would take if executed sequentially. For I/O-bound work like yours, however, a thread that is blocked in a socket call releases the GIL, so the other threads keep running and a new client is not starved.
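A quick way to convince yourself (a standalone sketch, not taken from the code above): two threads that each block for one second, e.g. in a socket call or a sleep, finish in about one second of wall-clock time, because blocking calls release the GIL.
import threading, time

def fake_io():
    time.sleep(1)  # stands in for a blocking socket recv()

start = time.time()
threads = [threading.Thread(target=fake_io) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("elapsed: %.2fs" % (time.time() - start))  # ~1.0s, not ~2.0s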
Okay, I have copied and pasted some Python 3 code that I had already written for a project I am currently working on. With modification, you can make this code serve your purposes.
The code uses multiprocessing and multithreading. For my purposes, I am using multiprocessing so that sockets run on one processor and I can run a GUI program on another; you can remove the multiprocessing part if you prefer. The code below runs a socket message server. The server listens for clients one at a time; once a client has connected, a new thread is started to handle all the communication between the server and that client, and the server then goes back to waiting for clients. At the moment the server only listens to data sent from each client and prints it to the terminal. With a small amount of effort you can modify my code to send information from the server to each client individually (see the sketch after the code).
import multiprocessing
import threading
from threading import Thread
from socket import socket, AF_INET, SOCK_STREAM, SOL_SOCKET, SO_REUSEADDR, error
class ThreadedServer(object):
def __init__(self, host, port):
self.host = host
self.port = port
self.sock = socket(AF_INET, SOCK_STREAM)
self.sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
self.sock.bind((self.host, self.port))
def listen(self):
self.sock.listen(3) #Allow 3 Clients to connect to this server
while True:
#The program will search for one client at a time
print("Searching for Client")
client, address = self.sock.accept()
print(address, " is connected")
#client.settimeout(60)
            #Once a client has been found, start an individual client thread
d = threading.Thread(target = self.listenToClient, args=(client, address))
d.daemon = True
d.start()
def listenToClient(self, client, address):
size = 1024
while True:
try:
data = client.recv(size)
if not data:
break
if data:
print(data)
#client.send(response)
else:
raise error('Client disconnected')
except:
client.close()
return False
def dataSharingHost():
#Using Sockets to send information between Processes
#This is the server Function
#ThreadServer(Host_IP, Port_Number), for LocalHost use ''
ThreadedServer('', 8000).listen()
def Main():
commServer = multiprocessing.Process(target=dataSharingHost, args=())
commServer.daemon = True
commServer.start()
if __name__== '__main__':
Main()
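If it helps, here is a hedged sketch (my own names, not from the video) of the modification mentioned above: keep every connected client socket in a list so the server can push data back out to all of them.
import threading
from socket import socket, AF_INET, SOCK_STREAM, SOL_SOCKET, SO_REUSEADDR, error

class BroadcastServer(object):
    def __init__(self, host, port):
        self.clients = []                  # currently connected client sockets
        self.lock = threading.Lock()       # protects self.clients
        self.sock = socket(AF_INET, SOCK_STREAM)
        self.sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
        self.sock.bind((host, port))

    def listen(self):
        self.sock.listen(3)
        while True:
            client, address = self.sock.accept()
            with self.lock:
                self.clients.append(client)
            t = threading.Thread(target=self.listen_to_client, args=(client, address))
            t.daemon = True
            t.start()

    def listen_to_client(self, client, address):
        try:
            while True:
                data = client.recv(1024)
                if not data:
                    break
                print(data)
        finally:
            with self.lock:
                if client in self.clients:
                    self.clients.remove(client)
            client.close()

    def broadcast(self, message):
        # Push a bytes message to every connected client, dropping dead sockets.
        with self.lock:
            for c in list(self.clients):
                try:
                    c.send(message)
                except error:
                    self.clients.remove(c)
                    c.close()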
And to be fair, my code is modified from https://www.youtube.com/watch?v=qELZAi4yra8. The client code is covered in those videos; I think the third video covers multiple client connections.
I'm running a python server using the socketserver module in python 2.7. OmniPeek packet analysis tool shows the TCP handshake completes,
but the server immediately sends a reset packet killing the connection.
Simplified server code which shows the problem is:
from threading import Lock, Thread, Condition
import SocketServer
import socket
import sys
import time
class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler):
def __init__(self, state, *args, **keys):
try:
state['lock'].acquire()
state['client_count'] += 1
finally:
state['lock'].release()
self.state = state
SocketServer.BaseRequestHandler.__init__(self, *args, **keys)
def handle(self):
self.state['lock'].acquire()
count = self.state['client_count']
self.state['lock'].release()
while True:
try:
self.state['lock'].acquire()
running = self.state['running']
self.state['lock'].release()
if not running:
break;
time.sleep(1) # do some work
except Exception as msg:
print msg
print "ThreadedTCPRequestHandler shutting down..."
class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
pass
def handler_factory(state):
def createHandler(*args, **keys):
return ThreadedTCPRequestHandler(state, *args, **keys)
return createHandler
if __name__ == "__main__":
lock = Lock()
cv = Condition(lock)
state = {'running': True, 'client_count': 0, 'lock': lock, 'cv': cv}
server = ThreadedTCPServer(('localhost', 12345), handler_factory(state))
server_thread = Thread(target=server.serve_forever)
server_thread.daemon = True
server_thread.start()
print "Server loop running in thread:", server_thread.name
# wait for a client to connect
cv.acquire()
while state['client_count'] == 0 and state['running']:
cv.wait(1.0)
# print msg
cv.release()
# substitute real work here...
time.sleep(5)
lock.acquire()
state['running'] = False
lock.release()
server.shutdown()
and the client code:
import socket
if __name__ == "__main__":
try:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'ip: {} port {}'.format('10.4.2.54', 12345)
client.connect(('10.4.2.54', 12345))
while True:
data = client.recv(4096)
if len(data) == 0:
break;
print 'data: {}'.format(data)
client.shutdown(socket.SHUT_RDWR)
client.close()
except Exception as msg:
print msg
The server code is based on the Python 2.7 docs SocketServer ThreadingMixIn example and seems pretty straightforward, but...
Thanks
I'm not sure what your expected behaviour is, but if you make a couple of changes you'll be able to see that it can work.
Replace your handle method:
def handle(self):
while True:
try:
data = self.request.recv(1024).strip()
if len(data) != 0:
print data
time.sleep(1) # do some work
self.request.send('test data')
except Exception as msg:
print msg
break
print "ThreadedTCPRequestHandler shutting down..."
and the client (inside main):
try:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'ip: {} port {}'.format('localhost', 1232)
client.connect(('localhost', 1232))
client.send('test')
n = 0
while True:
data = client.recv(4096)
if len(data) != 0:
print 'data: {}'.format(data)
time.sleep(1)
n += 1
client.send('keep-alive' + str(n) + '\n')
print 'here'
client.shutdown(socket.SHUT_RDWR)
client.close()
except Exception as msg:
print msg
I just modded it to send stuff and print stuff. But it doesn't crash.
I think there is an issue with your self.state['lock'].acquire() and release() calls. I took out the 'running' check as it's not really used except at the end of the server code.
Also, without any action, sockets will time out.
Once again, I'm not claiming to have 'fixed' your problem...and I'm not sure exactly what you are looking for...just helping you brainstorm!
Apologies: red herring. The problem only occurs under a VM, when the server is running on the guest and the client on the host. The TCP reset packet is never sent when both client and server run on either the host or the guest.
I need to implement an audio player that can deal with jitter, so I need buffering: a minimum buffer size, and a way to know how many elements are in the buffer at any given time.
But in Python, the multiprocessing Queue's qsize() method is not implemented (it raises NotImplementedError). What can I do about it?
from datetime import datetime
from multiprocessing import Process, Queue, Condition
from pyaudio import PyAudio

class MultiprocessedAudioPlayer(object):
def __init__(self, sampling_frequency, min_buffer_size=1, max_buffer_size=10, sample_width=2):
self.p = PyAudio()
self.stream = self.p.open(format=self.p.get_format_from_width(width=sample_width), rate=sampling_frequency,
output=True, channels=1)
self.max_buffer_size = max_buffer_size
self.min_buffer_size = min_buffer_size
self.buffer = Queue(maxsize=max_buffer_size)
self.process = Process(target=self.playing)
self.process.start()
self.condition = Condition()
def schedule_to_play(self, frame):
self.condition.acquire()
if self.buffer.full():
print('Buffer is overflown')
self.condition.wait()
self.buffer.put(frame)
if self.buffer.qsize() > self.min_buffer_size:
            print('Buffer length is', self.buffer.qsize())
self.condition.notify()
print('It is sufficient to play')
self.condition.release()
# print('frame appended buffer length is {} now'.format(self.buffer.qsize()))
def play(self, frame):
print('started playing frame at {}'.format(datetime.now()))
self.stream.write(frame, num_frames=len(frame))
print('stopped playing frame at {}'.format(datetime.now()))
def close(self):
self.stream.stop_stream()
self.stream.close()
def playing(self):
while True:
self.condition.acquire()
if self.buffer.qsize() < self.min_buffer_size:
self.condition.wait()
            frame = self.buffer.get()
            print('popping frame from buffer')
            print('Buffer length is {} now'.format(self.buffer.qsize()))
self.condition.notify()
self.condition.release()
self.play(frame)
Two suggestions:
use threading -- the qsize() method is reliable there; see the sketch after this list. (It isn't reliable with multiprocessing because of the latency of sending messages back and forth.)
use multiprocessing with a Manager instance that holds your shared state. Each process can set and get data, and the Manager handles the sending of updates back and forth.
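For the first suggestion, here is a minimal sketch (Python 3; the names and timings are mine) showing that queue.Queue.qsize() works fine for a jitter-buffer check when producer and consumer are threads in the same process:
import threading, queue, time

buf = queue.Queue(maxsize=10)

def producer():
    for i in range(20):
        buf.put(i)              # blocks whenever the buffer is full
        time.sleep(0.05)

t = threading.Thread(target=producer)
t.start()
for _ in range(20):
    # qsize() is exact within a single process, so it can drive buffering
    print("buffer depth:", buf.qsize())
    print("playing frame", buf.get())   # blocks until a frame is available
    time.sleep(0.07)
t.join()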
The following example adds data to a list every second, and every now and then the data is scanned by a second process. Also note the extensive logging, which is extremely helpful with multiprocess programs.
#!/usr/bin/env python
'''
mptest_proxy.py -- producer adds to fixed-sized list; scanner uses them
OPTIONS:
-v verbose multiprocessing output
'''
import logging, multiprocessing, sys, time
def producer(objlist):
'''
add an item to list every sec; ensure fixed size list
'''
logger = multiprocessing.get_logger()
logger.info('start')
while True:
try:
time.sleep(1)
except KeyboardInterrupt:
return
msg = 'ding: {:04d}'.format(int(time.time()) % 10000)
logger.info('put: %s', msg)
del objlist[0]
objlist.append( msg )
def scanner(objlist):
'''
every now and then, run calculation on objlist
'''
logger = multiprocessing.get_logger()
logger.info('start')
while True:
try:
time.sleep(5)
except KeyboardInterrupt:
return
logger.info('items: %s', list(objlist))
def main():
opt_verbose = '-v' in sys.argv[1:]
logger = multiprocessing.log_to_stderr(
level=logging.DEBUG if opt_verbose else logging.INFO,
)
logger.info('setup')
# create fixed-length list, shared between producer & consumer
manager = multiprocessing.Manager()
my_objlist = manager.list( # pylint: disable=E1101
[None] * 10
)
multiprocessing.Process(
target=producer,
args=(my_objlist,),
name='producer',
).start()
multiprocessing.Process(
target=scanner,
args=(my_objlist,),
name='scanner',
).start()
logger.info('running forever')
try:
manager.join() # wait until both workers die
except KeyboardInterrupt:
pass
logger.info('done')
if __name__=='__main__':
main()
I want to read messages from either a Queue.Queue or a TCP socket, whichever comes first.
How can this be achieved without resorting to two threads?
Platform is CPython 2.7.5 on Windows
There is a very nice trick that applies to your problem: give the queue a fileno() by pairing it with a socket, so that select() can wait on the queue and on your TCP socket together.
import queue
import socket
import os
class PollableQueue(queue.Queue):
def __init__(self):
super().__init__()
# Create a pair of connected sockets
if os.name == 'posix':
self._putsocket, self._getsocket = socket.socketpair()
else:
# Compatibility on non-POSIX systems
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))
server.listen(1)
self._putsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self._putsocket.connect(server.getsockname())
self._getsocket, _ = server.accept()
server.close()
def fileno(self):
return self._getsocket.fileno()
def put(self, item):
super().put(item)
self._putsocket.send(b'x')
def get(self):
self._getsocket.recv(1)
return super().get()
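A hedged usage sketch (the consumer loop, the feeder thread and the host are mine, not part of the recipe): because the queue now has a fileno(), select() can wait on it and on an ordinary TCP socket at the same time, in one thread.
import select, socket, threading, time

q = PollableQueue()

def feed():
    # stand-in for whatever produces queue messages in your program
    for i in range(5):
        q.put("item %d" % i)
        time.sleep(0.5)

threading.Thread(target=feed).start()
sock = socket.create_connection(("www.google.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\n\r\n")

while True:
    readable, _, _ = select.select([q, sock], [], [], 1.0)
    for r in readable:
        if r is q:
            print("queue item:", q.get())
        else:
            data = sock.recv(4096)
            if not data:            # server closed the connection
                sock.close()
                raise SystemExit
            print("socket data: %d bytes" % len(data))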
To do it in a single thread, you'll have to use non-blocking methods, and merge them into a single event loop. I'm actually using select instead of non-blocking socket I/O here, since it's slightly cleaner if you need to read from multiple sockets...
import socket
import select
import Queue
import time
TIMEOUT = 0.1 # 100ms
def process_queue_item(item):
print 'Got queue item: %r' % item
def process_socket_data(data):
print 'Got socket data: %r' % data
def main():
# Build queue
queue = Queue.Queue()
for i in range(10):
queue.put(i)
queue.put(None) # Using None to indicate no more data on queue
queue_active = True
# Build socket
sock = socket.socket()
sock.connect(('www.google.com', 80))
sock.send('GET / HTTP/1.0\r\n\r\n')
socket_active = True
# Main event loop
while 1:
# If there's nothing to read, bail out
if not (socket_active or queue_active):
break
# By default, sleep at the end of the loop
do_sleep = True
# Get data from socket without blocking if possible
if socket_active:
r, w, x = select.select([sock], [], [], TIMEOUT)
if r:
data = sock.recv(64)
if not data: # Hit EOF
socket_active = False
else:
do_sleep = False
process_socket_data(data)
# Get item from queue without blocking if possible
if queue_active:
try:
item = queue.get_nowait()
if item is None: # Hit end of queue
queue_active = False
else:
do_sleep = False
process_queue_item(item)
except Queue.Empty:
pass
# If we didn't get anything on this loop, sleep for a bit so we
# don't max out CPU time
if do_sleep:
time.sleep(TIMEOUT)
if __name__ == '__main__':
main()
Output looks like...
Got socket data: 'HTTP/1.0 302 Found\r\nLocation: http://www.google.co.uk/\r\nCache-Co'
Got queue item: 0
Got socket data: 'ntrol: private\r\nContent-Type: text/html; charset=UTF-8\r\nSet-Cook'
Got queue item: 1
Got socket data: 'ie: PREF=ID=a192ab09b4c13176:FF=0:TM=1373055330:LM=1373055330:S='
Got queue item: 2
etc.
You can do something along these lines:
import socket
import Queue

def check_for_message(queue, sock, sock_accept_size=512):
    # Poll a non-blocking socket and a Queue in turn, yielding whatever
    # (possibly None) each of them currently has.
    sock.setblocking(0)
    while True:
        try:
            sock_msg = sock.recv(sock_accept_size)
        except socket.error:
            # no data waiting on the socket
            sock_msg = None
        try:
            que_msg = queue.get_nowait()
        except Queue.Empty:
            # nothing waiting on the queue
            que_msg = None
        yield (que_msg, sock_msg)
Then you can iterate through it using:
for que_message, sock_message in check_for_message(que_instance, socket_instance):
    print que_message, sock_message