Non-blocking multiprocessing.connection.Listener? - python

I use multiprocessing.connection.Listener for communication between processes, and it works like a charm for me. Now I would really love my mainloop to do something else between commands from the client. Unfortunately listener.accept() blocks execution until a connection from the client process is established.
Is there a simple way of managing a non-blocking check for multiprocessing.connection? A timeout? Or shall I use a dedicated thread?
# Simplified code:
from multiprocessing.connection import Listener

def mainloop():
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    while True:
        conn = listener.accept()  # <--- This blocks!
        msg = conn.recv()
        print('got message: %r' % msg)
        conn.close()

One solution that I found (although it might not be the most "elegant" solution) is using conn.poll (documentation). poll returns True if the connection has new data, and (most importantly) is non-blocking if no argument is passed to it. I'm not 100% sure that this is the best way to do this, but I've had success with only running listener.accept() once, and then using the following syntax to repeatedly get input (if there is any available):
from multiprocessing.connection import Listener

def mainloop():
    running = True
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    conn = listener.accept()
    msg = ""
    while running:
        while conn.poll():
            msg = conn.recv()
            print(f"got message: {msg}")
            if msg == "EXIT":
                running = False
        # Other code can go here
        print(f"I can run too! Last msg received was {msg}")
    conn.close()
The 'while' in the conditional statement can be replaced with 'if' if you only want to get a maximum of one message at a time. Use with caution, as it seems sort of 'hacky', and I haven't found references to using conn.poll for this purpose elsewhere.
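If you want the same kind of timed, non-blocking check through a documented entry point, the standard library also offers multiprocessing.connection.wait() (Python 3.3+). A minimal sketch, assuming conn is the connection returned by listener.accept() above:
from multiprocessing.connection import wait

# wait() returns the subset of the given connections that are ready
# to read, or an empty list once the timeout expires
ready = wait([conn], timeout=0.1)
for c in ready:
    msg = c.recv()
    print('got message: %r' % msg)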

You can run the blocking accept() in a thread, for example via asyncio's executor:
conn = await loop.run_in_executor(None, listener.accept)
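Fleshed out into the question's mainloop, a minimal sketch (my own, assuming an asyncio event loop drives the rest of the program):
import asyncio
from multiprocessing.connection import Listener

async def mainloop():
    loop = asyncio.get_running_loop()
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    while True:
        # accept() and recv() run in the default thread-pool executor,
        # so the event loop stays free for other coroutines meanwhile
        conn = await loop.run_in_executor(None, listener.accept)
        msg = await loop.run_in_executor(None, conn.recv)
        print('got message: %r' % msg)
        conn.close()

asyncio.run(mainloop())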

I've not used the Listener object myself- for this task I normally use multiprocessing.Queue; doco at the following link:
https://docs.python.org/2/library/queue.html#Queue.Queue
That object can be used to send and receive any pickle-able object between Python processes with a nice API; I think you'll be most interested in:
in process A:
    q.put('some message')
in process B:
    q.get_nowait()  # will raise queue.Empty if nothing is available - handle that to move on with your execution
The only limitation with this is you'll need to have control of both Process objects at some point in order to be able to allocate the queue to them- something like this:
import time
from queue import Empty
from multiprocessing import Queue, Process

def receiver(q):
    while True:
        try:
            message = q.get_nowait()
            print('receiver got', message)
        except Empty:
            print('nothing to receive, sleeping')
            time.sleep(1)

def sender(q):
    while True:
        message = 'some message'
        q.put(message)
        print('sender sent', message)
        time.sleep(1)

if __name__ == '__main__':
    some_queue = Queue()

    process_a = Process(
        target=receiver,
        args=(some_queue,)
    )
    process_b = Process(
        target=sender,
        args=(some_queue,)
    )

    process_a.start()
    process_b.start()

    print('ctrl + c to exit')
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        pass

    process_a.terminate()
    process_b.terminate()

    process_a.join()
    process_b.join()
Queues are nice because you can actually have as many consumers and as many producers for that exact same Queue object as you like (handy for distributing tasks).
I should point out that just calling .terminate() on a Process is bad form- you should use your shiny new messaging system to pass a shutdown message or something of that nature.
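For example, a minimal sketch (my illustration, not part of the example above) of a sentinel-based shutdown:
# Hypothetical sketch: stop the receiver cooperatively with a sentinel
# message instead of calling .terminate() on the process.
SHUTDOWN = 'SHUTDOWN'

def receiver(q):
    while True:
        message = q.get()        # block until something arrives
        if message == SHUTDOWN:  # cooperative exit point
            break
        print('receiver got', message)

# in the parent, when it is time to stop:
# some_queue.put(SHUTDOWN)
# process_a.join()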

The multiprocessing module comes with a nice feature called Pipe(). It is a nice way to share resources between two processes (I've never tried more than two before). With the dawn of Python 3.8 came the shared memory function in the multiprocessing module, but I have not really tested that, so I cannot vouch for it.
You will use the pipe function something like:
from multiprocessing import Pipe, Process
# ...

def sending(conn):
    message = 'some message'
    # perform some code
    conn.send(message)
    conn.close()

receiver, sender = Pipe()
p = Process(target=sending, args=(sender,))
p.start()
print(receiver.recv())  # prints "some message"
p.join()
With this you should be able to have separate processes running independently until the point where you need the input from one of them. If the other process has not yet sent its data, you can put yours to sleep, or use a loop that keeps checking whether anything is pending until the other process finishes its task and sends it over:
while not receiver.poll():
    time.sleep(5)
This keeps it in a loop until the other process is done running and sends the result (note I use poll() here, since recv() would block and consume the message). This is also about 2-3 times faster than Queue. Queue is also a good option, although personally I do not use it.
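As for the Python 3.8 shared memory mentioned above, a minimal illustrative sketch of the standard multiprocessing.shared_memory module (untested here, as noted):
from multiprocessing import shared_memory

# create a 16-byte shared block and write into it
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b'hello'

# another process can attach to the same block by name
other = shared_memory.SharedMemory(name=shm.name)
print(bytes(other.buf[:5]))  # b'hello'

other.close()
shm.close()
shm.unlink()  # free the block once every process is done with it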

Related

Safe way to exit an infinite loop within a Thread Pool for Python3

I am using these Python3 modules:
requests for HTTP GET calls to a few Particle Photons which are set up as simple HTTP servers
As a client I am using a Raspberry Pi (which is also an Access Point) as an HTTP client which uses multiprocessing.dummy.Pool for making HTTP GET requests to the above mentioned Photons
The polling routine is as follows:
def pollURL(url_of_photon):
    """
    pollURL: Obtain the IP Address and create a URL for HTTP GET Request
    #param: url_of_photon: IP address of the Photon connected to A.P.
    """
    create_request = 'http://' + url_of_photon + ':80'
    while True:
        try:
            time.sleep(0.1)  # poll every 100ms
            response = requests.get(create_request)
            if response.status_code == 200:
                # if success then dump the data into a temp dump file
                with open('temp_data_dump', 'a+') as jFile:
                    json.dump(response.json(), jFile)
            else:
                # Currently just break
                break
        except KeyboardInterrupt as e:
            print('KeyboardInterrupt detected ', e)
            break
The url_of_photon values are simple IPv4 Addresses obtained from the dnsmasq.leases file available on the Pi.
the main() function:
def main():
    # obtain the IP and MAC addresses from the Lease file
    IP_addresses = []
    MAC_addresses = []
    with open('/var/lib/misc/dnsmasq.leases', 'r') as leases_file:
        # split lines and words to obtain the useful stuff.
        for lines in leases_file:
            fields = lines.strip().split()
            # use logging in future
            print('Photon with MAC: %s has IP address: %s' % (fields[1], fields[2]))
            IP_addresses.append(fields[2])
            MAC_addresses.append(fields[1])
    # Create Thread Pool
    pool = ThreadPool(len(IP_addresses))
    results = pool.map(pollURL, IP_addresses)
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
Problem
The program runs well, however when I press CTRL + C the program does not terminate. Upon digging I found that the way to kill it is using CTRL + \
How do I use this in my pollURL function for a safe way to exit the program, i.e. perform pool.join() so no leftover processes are left?
notes:
the KeyboardInterrupt is never recognized within the function. Hence I am facing trouble trying to detect CTRL + \.
pollURL is executed in another thread. In Python, signals are handled only in the main thread. Therefore, SIGINT will raise the KeyboardInterrupt only in the main thread.
From the signal documentation:
Signals and threads
Python signal handlers are always executed in the main Python thread, even if the signal was received in another thread. This means that signals can’t be used as a means of inter-thread communication. You can use the synchronization primitives from the threading module instead.
Besides, only the main thread is allowed to set a new signal handler.
You can implement your solution in the following way (pseudocode):
event = threading.Event()

def looping_function( ... ):
    while event.is_set():
        do_your_stuff()

def main():
    try:
        event.set()
        pool = ThreadPool()
        pool.map( ... )
    except KeyboardInterrupt:
        event.clear()
    finally:
        pool.close()
        pool.join()
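A concrete version of that pseudocode, adapted to the question's pollURL (my sketch; the polling body is reduced to a skeleton and the addresses are placeholders):
import threading
import time
from multiprocessing.dummy import Pool as ThreadPool

event = threading.Event()

def pollURL(url_of_photon):
    # skeleton of the question's polling loop; exits once the event clears
    while event.is_set():
        time.sleep(0.1)  # poll every 100ms
        # response = requests.get('http://' + url_of_photon + ':80')
        # ... handle the response here ...

def main():
    IP_addresses = ['10.0.0.2', '10.0.0.3']  # placeholder addresses
    pool = ThreadPool(len(IP_addresses))
    try:
        event.set()
        pool.map(pollURL, IP_addresses)
    except KeyboardInterrupt:
        event.clear()  # workers see this and fall out of their loops
    finally:
        pool.close()
        pool.join()

if __name__ == '__main__':
    main()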

Synchronous and blocking consumption in RabbitMQ using pika

I want to consume a queue (RabbitMQ) synchronously with blocking.
Note: below is full code ready to be run.
Our system is set up using RabbitMQ as its queuing system, but asynchronous consumption is not needed in one of our modules.
I've tried using basic_get on top of a BlockingConnection, which doesn't block (it returns (None, None, None) immediately):
# declare queue
get_connection().channel().queue_declare(TEST_QUEUE)

def blocking_get_1():
    channel = get_connection().channel()
    # get from an empty queue (prints immediately)
    print(channel.basic_get(TEST_QUEUE))
I've also tried to use the consume generator; that fails with "Connection Closed" after a long period of not consuming.
def blocking_get_2():
    channel = get_connection().channel()
    # put messages in TEST_QUEUE
    for i in range(4):
        channel.basic_publish(
            '',
            TEST_QUEUE,
            'body %d' % i
        )
    consume_generator = channel.consume(TEST_QUEUE)
    print(next(consume_generator))
    time.sleep(14400)
    print(next(consume_generator))
Is there a way to use RabbitMQ via the pika client the way I would use a Queue.Queue in Python? Or anything similar?
My option at the moment is busy-waiting (using basic_get), but I'd rather use the existing system without busy-waiting, if possible.
Full code:
#!/usr/bin/env python
import pika
import time

TEST_QUEUE = 'test'

def get_connection():
    # define connection
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(
            host=YOUR_IP,
            port=YOUR_PORT,
            credentials=pika.PlainCredentials(
                username=YOUR_USER,
                password=YOUR_PASSWORD,
            )
        )
    )
    return connection

# declare queue
get_connection().channel().queue_declare(TEST_QUEUE)

def blocking_get_1():
    channel = get_connection().channel()
    # get from an empty queue (prints immediately)
    print(channel.basic_get(TEST_QUEUE))

def blocking_get_2():
    channel = get_connection().channel()
    # put messages in TEST_QUEUE
    for i in range(4):
        channel.basic_publish(
            '',
            TEST_QUEUE,
            'body %d' % i
        )
    consume_generator = channel.consume(TEST_QUEUE)
    print(next(consume_generator))
    time.sleep(14400)
    print(next(consume_generator))

print("blocking_get_1")
blocking_get_1()

print("blocking_get_2")
blocking_get_2()

get_connection().channel().queue_delete(TEST_QUEUE)
A common problem with Pika is that it does not currently handle incoming events in the background. This basically means that in many scenarios you will need to call connection.process_data_events() periodically to ensure that it does not miss heartbeats.
This also means that if you sleep for an extended period of time, pika will not handle incoming data, and it will eventually die because it is not responding to heartbeats. An option here is to disable heartbeats.
I usually solve this by having a thread in the background check for new events, as seen in this example.
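For instance, a minimal sketch (my own, assuming pika's BlockingConnection) that replaces a long sleep with a loop that keeps servicing the connection:
# instead of time.sleep(14400), keep pika's I/O loop serviced so
# heartbeats are answered while we wait
waited = 0
while waited < 14400:
    # handles any pending I/O (including heartbeats) and returns
    # after roughly time_limit seconds
    connection.process_data_events(time_limit=1)
    waited += 1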
If you want to block completely I would do something like this (based on my own library AMQPStorm):
while True:
    result = channel.basic.get(queue='simple_queue', no_ack=False)
    if result:
        print("Message:", result.body)
        result.ack()
    else:
        print("Channel Empty.")
        sleep(1)
This is based on the example found here.

Can't catch SIGINT in multithreaded program

I've seen many topics about this particular problem but I still can't figure out why I'm not catching a SIGINT in my main thread.
Here is my code:
def connect(self, retry=100):
    tries = retry
    logging.info('connecting to %s' % self.path)
    while True:
        try:
            self.sp = serial.Serial(self.path, 115200)
            self.pileMessage = pilemessage.Pilemessage()
            self.pileData = pilemessage.Pilemessage()
            self.reception = reception.Reception(self.sp, self.pileMessage, self.pileData)
            self.reception.start()
            self.collisionlistener = collisionListener.CollisionListener(self)
            self.message = messageThread.Message(self.pileMessage, self.collisionlistener)
            self.datastreaminglistener = dataStreamingListener.DataStreamingListener(self)
            self.datastreaming = dataStreaming.Data(self.pileData, self.datastreaminglistener)
            return
        except serial.serialutil.SerialException:
            logging.info('retrying')
            if not retry:
                raise SpheroError('failed to connect after %d tries' % (tries - retry))
            retry -= 1

def disconnect(self):
    self.reception.stop()
    self.message.stop()
    self.datastreaming.stop()
    while not self.pileData.isEmpty():
        self.pileData.pop()
    self.datastreaminglistener.remove()
    while not self.pileMessage.isEmpty():
        self.pileMessage.pop()
    self.collisionlistener.remove()
    self.sp.close()

if __name__ == '__main__':
    import time
    try:
        logging.getLogger().setLevel(logging.DEBUG)
        s = Sphero("/dev/rfcomm0")
        s.connect()
        s.set_motion_timeout(65525)
        s.set_rgb(0, 255, 0)
        s.set_back_led_output(255)
        s.configure_locator(0, 0)
    except KeyboardInterrupt:
        s.disconnect()
In the main function I call connect(), which launches threads over which I don't have direct control.
When I launch this script I would like to be able to stop it when hitting Ctrl+C, by calling the disconnect() function which stops all the other threads.
In the code I provided it doesn't work because there is no thread in the main function. But I already tried putting all the instructions from main in a thread with a while loop, without success.
Is there a simple way to solve my problem?
Thanks
Your indentation is messed up, but there's enough to go on.
Your main thread isn't catching SIGINT because it's not alive. There is nothing that stops your main thread from continuing past the try block, seeing no more code, and closing up shop.
I am not familiar with Sphero. I just attempted to google its docs and was linked to a bunch of 404 pages, so I'll tell you what you would normally do in a threaded environment: join your threads to the main thread so that the main thread can't finish execution before the worker threads.
for t in my_thread_list:
    t.join()  # main thread can't get past here until all the threads finish
If your Sphero object doesn't provide join-like functionality, you could hack something in that blocks, i.e.
raw_input('Press Enter to disconnect')
s.disconnect()
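Another common pattern (my addition, not from the answer above) is to keep the main thread alive in a sleep loop so it is still around to receive the SIGINT:
import time

try:
    # only the main thread receives KeyboardInterrupt, so keep it alive
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    s.disconnect()  # assuming s is the connected Sphero from the question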

python sockets stop recv from hanging?

I am trying to create a two-player game in pygame using sockets. The thing is, when I try to receive data on this line:
message = self.conn.recv(1024)
python hangs until it gets some data. The problem with this is that it pauses the game loop when the client is not sending anything through the socket, and causes a black screen. How can I stop recv from doing this?
Thanks in advance
Use nonblocking mode. (See socket.setblocking.)
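A minimal sketch of that (my own, assuming self.conn is the connected socket from the question):
# in non-blocking mode, recv raises instead of hanging when no data
# is available, so the game loop can just carry on
self.conn.setblocking(False)

try:
    message = self.conn.recv(1024)
except BlockingIOError:
    message = None  # nothing arrived this frame; keep the game running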
Or check if there is data available before calling recv.
For example, using select.select with a zero timeout (so the check itself does not block):
import select

r, _, _ = select.select([self.conn], [], [], 0)
if r:
    # ready to receive
    message = self.conn.recv(1024)
You can use the signal module to stop a hanging recv thread.
In the recv thread:
try:
    data = sock.recv(1024)
except KeyboardInterrupt:
    pass
In the interrupting thread:
signal.pthread_kill(your_recving_thread.ident, signal.SIGINT)
I know that this is an old post, but since I worked on a similar project lately, I wanted to add something that hasn't been stated yet, for anybody having the same issue.
You can use threading to create a new thread, which will receive data. After this, run your game loop normally in your main thread, and check for received data in each iteration. Received data should be placed inside a queue by the data receiver thread and read from that queue by the main thread.
# other imports
import queue
import threading

class MainGame:
    def __init__(self):
        # any code here
        self.data_queue = queue.Queue()
        # daemon so the receiver thread won't keep the process alive on exit
        data_receiver = threading.Thread(target=self.data_receiver, daemon=True)
        data_receiver.start()
        self.gameLoop()

    def gameLoop(self):
        while True:
            try:
                data = self.data_queue.get_nowait()
            except queue.Empty:
                data = None  # nothing received this iteration
            self.gameIteration(data)

    def data_receiver(self):
        # Assuming self.sock exists
        while True:
            data = self.sock.recv(1024).decode("utf-8")
            # edit the data in any way necessary here
            self.data_queue.put(data)

    def gameIteration(self, data):
        # Assume this method handles updating, drawing, etc
        pass
Note that this code is in Python 3.

Python - Waiting for variable change

I have a Python script that opens a websocket to the Twitter API and then waits. When an event is passed to the script via AMQ, I need to open a new websocket connection and immediately close the old one, just as soon as the new connection is registered.
It looks something like this:
stream = TwitterStream()
stream.start()

for message in broker.listen():
    if message:
        new_stream = TwitterStream()
        # need to close the old connection as soon as the
        # new one connects here somehow
        stream = new_stream
I'm trying to figure out how I'd establish a 'callback' to notify my script when the new connection is established. The TwitterStream class has an "is_running" boolean variable that I can reference, so I was thinking perhaps something like:
while not new_stream.is_running:
    time.sleep(1)
But it seems kind of messy. Does anyone know a better way to achieve this?
A busy loop is not the right approach, since it obviously wastes CPU. There are threading constructs that let you communicate such events, instead. See for example: http://docs.python.org/library/threading.html#event-objects
Here is an example with a threading Event:
import threading
from time import sleep

evt = threading.Event()
result = None

def background_task():
    global result
    print("start")
    result = "Started"
    sleep(5)
    print("stop")
    result = "Finished"
    evt.set()

t = threading.Thread(target=background_task)
t.start()

# optional timeout
timeout = 3
evt.wait(timeout=timeout)
print(result)
