Unable to stop consuming in Pika - python

I am unable to return data from Pika because start_consuming does not stop. The callback prints the results, but the function never returns the output:
def on_request(ch, method, props, body):
    directory = body
    print(directory.decode('utf-8'))
    response = parse(directory.decode('utf-8'))
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

def start():
    print("hi")
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='rpc_queue')
    channel.basic_qos(prefetch_count=2)
    channel.basic_consume(queue='rpc_queue', on_message_callback=on_request)
    print(" [x] Awaiting RPC requests")
    channel.start_consuming()

By design start_consuming blocks forever. You will have to cancel the consumer in your on_request method.
You can also use the BlockingChannel.consume generator to consume messages, which allows an inactivity_timeout to be set, after which you could cancel your consumer.
Finally, SelectConnection allows much more flexibility in interacting with Pika's I/O loop and is recommended when your requirements are more complex than what BlockingConnection supports.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
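A minimal sketch of the generator approach, assuming pika 1.x, a broker on localhost, and the rpc_queue queue from the question; the 5-second timeout is illustrative:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='rpc_queue')

# consume() yields (None, None, None) once inactivity_timeout elapses
for method, properties, body in channel.consume('rpc_queue', inactivity_timeout=5):
    if method is None:
        break  # no message arrived within 5 seconds; leave the loop
    print(body.decode('utf-8'))
    channel.basic_ack(delivery_tag=method.delivery_tag)

channel.cancel()  # cancel the consumer and reject any pending deliveries
connection.close()

Unlike start_consuming, the loop body runs in your own code, so returning a value is as simple as breaking out of the loop and returning.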

Just use channel.stop_consuming()

How to change timeout using RabbitMQ pika.basic_consume in Python

With the RabbitMQ Python client running subscriber.py:
import pika, time

credentials = pika.PlainCredentials('user', 'pass')
parameters = pika.ConnectionParameters(host='localhost', port=6672, credentials=credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.basic_qos(prefetch_count=1)
channel.queue_declare(queue='my_queue')

def callback(ch, method, properties, body):
    ch.basic_ack(delivery_tag=method.delivery_tag)
    time.sleep(600)
    print('process completed')

channel.basic_consume(queue='my_queue', on_message_callback=callback)
channel.start_consuming()
the connection breaks after the callback function completes.
It always seems to happen at the 60th second. It seems the channel.basic_consume() method doesn't want to wait for the main thread to complete the callback function. Is there a way to make sure the connection doesn't drop after the 60th second?
Your time.sleep call is blocking Pika's I/O loop, which prevents heartbeats from being processed. Don't block the I/O loop!
Instead, you should do your long-running work in a separate thread and acknowledge the message correctly from that thread. Fortunately, I have an example right here: link
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
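A minimal sketch of that pattern, assuming pika 0.12+ (where BlockingConnection.add_callback_threadsafe exists); the queue name and the 600-second job are taken from the question:

import functools
import threading
import time

import pika

def do_work(connection, channel, delivery_tag, body):
    time.sleep(600)  # the long-running job runs here, off the I/O loop
    # channels are not thread safe: hand the ack back to the connection's thread
    connection.add_callback_threadsafe(
        functools.partial(channel.basic_ack, delivery_tag=delivery_tag))

def callback(ch, method, properties, body, connection):
    t = threading.Thread(target=do_work,
                         args=(connection, ch, method.delivery_tag, body))
    t.start()

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='my_queue')
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue='my_queue',
                      on_message_callback=functools.partial(callback, connection=connection))
channel.start_consuming()

Because start_consuming keeps running while the worker thread sleeps, heartbeats are sent and the connection survives jobs longer than the heartbeat interval.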
I think the "heartbeat" parameter solves this problem. Just set the time in seconds:
import pika, time

credentials = pika.PlainCredentials('user', 'pass')
parameters = pika.ConnectionParameters(host='localhost', port=6672, credentials=credentials, heartbeat=36000)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.basic_qos(prefetch_count=1)
channel.queue_declare(queue='my_queue')

def callback(ch, method, properties, body):
    ch.basic_ack(delivery_tag=method.delivery_tag)
    time.sleep(600)
    print('process completed')

channel.basic_consume(queue='my_queue', on_message_callback=callback)
channel.start_consuming()

pika, rabbitmq - get all messages from the queue without consuming them

Using the pika client, I want to display all the messages currently in the queue, without consuming them, just to know how busy the queue is and to display the jobs.
So far, I can only read one message as it arrives:
def on_message(channel, method, properties, message):
    channel.basic_ack(delivery_tag=method.delivery_tag)
    print("Message: %s" % message)

channel.queue_declare(queue='queue1', durable=True)
channel.basic_consume(on_message, queue='queue1')
channel.start_consuming()
How can I read the whole queue?
To read messages "without consuming them", don't acknowledge delivery of the message. In your case above, get rid of
channel.basic_ack(delivery_tag=method.delivery_tag)
or set auto_ack to False:
def callback(ch, method, properties, body):
    print(body)

channel.basic_consume(queue='your_queue', on_message_callback=callback, auto_ack=False)
The messages will be read and marked as unacked in RabbitMQ, but will still be available in the queue.
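If the goal is only to see how busy the queue is, you don't need to consume at all: a passive queue_declare returns the current counts. A minimal sketch, assuming a local broker and the queue1 queue from the question:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# passive=True only checks the queue; it raises an error if the queue does not exist
declare_ok = channel.queue_declare(queue='queue1', passive=True)
print("Messages ready: %d" % declare_ok.method.message_count)
print("Consumers: %d" % declare_ok.method.consumer_count)

connection.close()

Note that message_count covers messages ready for delivery, not ones that are delivered but unacknowledged.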

Pika: Consume the next message even the last message was not acknowledged

For server automation, we're trying to develop a tool which can handle and execute a lot of tasks on different servers. We send the task and the server hostname into a queue. The queue is then consumed by a requester, which gives the information to the Ansible API. So that we can execute more than one task at once, we're using threading.
Now we're stuck on acknowledging the messages...
What we have done so far:
requester.py consumes the queue and then starts a thread in which the Ansible task runs. The result is then sent into another queue, so each new message creates a new thread. When the task is finished, the thread dies.
But now comes the difficult part. We have to make the messages persistent in case our server dies, so each message should be acknowledged only after the result from Ansible has been sent back.
Our problem now is that when we try to acknowledge the message in the thread itself, no more work is done simultaneously, because pika's consumer waits for the acknowledgement. How can we make the consumer keep consuming messages without waiting for the acknowledgement? Or how can we work around or improve our little program?
requester.py
#!/bin/python
from worker import *
import ansible.inventory
import ansible.runner
import threading

class Requester(Worker):
    def __init__(self):
        Worker.__init__(self)
        self.connection(self.selfhost, self.from_db)
        self.receive(self.from_db)

    def send(self, result, ch, method):
        self.channel.basic_publish(exchange='',
                                   routing_key=self.to_db,
                                   body=result,
                                   properties=pika.BasicProperties(
                                       delivery_mode=2,
                                   ))
        print "[x] Sent \n" + result
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def callAnsible(self, cmd, ch, method):
        #call ansible api pre 2.0
        result = json.dumps(result, sort_keys=True, indent=4, separators=(',', ': '))
        self.send(result, ch, method)

    def callback(self, ch, method, properties, body):
        print(" [x] Received by requester %r" % body)
        t = threading.Thread(target=self.callAnsible, args=(body, ch, method,))
        t.start()
worker.py
import pika
import ConfigParser
import json
import os

class Worker(object):
    def __init__(self):
        # read some config files
        pass

    def callback(self, ch, method, properties, body):
        raise Exception("Call method in subclass")

    def receive(self, queue):
        self.channel.basic_qos(prefetch_count=1)
        self.channel.basic_consume(self.callback, queue=queue)
        self.channel.start_consuming()

    def connection(self, server, queue):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(
            host=server,
            credentials=self.credentials))
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=queue, durable=True)
We're working with Python 2.7 and pika 0.10.0.
And yes, we noticed in the pika FAQ (http://pika.readthedocs.io/en/0.10.0/faq.html) that pika is not thread safe.
Disable auto-acknowledge and set the prefetch count to something bigger than 1, depending on how many messages you would like your consumer to take.
Here is how to set the prefetch count:
channel.basic_qos(prefetch_count=1)
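A minimal sketch of that advice, assuming a newer pika (0.12+), where the acknowledgement can be handed back to the connection's thread via add_callback_threadsafe; the tasks queue name and the prefetch value of 3 are illustrative:

import functools
import threading

import pika

def work(connection, channel, delivery_tag, body):
    # ... run the ansible task here ...
    # ack from the connection's own thread; channels are not thread safe
    connection.add_callback_threadsafe(
        functools.partial(channel.basic_ack, delivery_tag=delivery_tag))

def callback(ch, method, properties, body, connection):
    threading.Thread(target=work,
                     args=(connection, ch, method.delivery_tag, body)).start()

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='tasks', durable=True)
channel.basic_qos(prefetch_count=3)  # up to 3 unacknowledged tasks in flight
channel.basic_consume(queue='tasks',
                      on_message_callback=functools.partial(callback, connection=connection))
channel.start_consuming()

With prefetch_count=1 the broker delivers only one unacknowledged message at a time, which is exactly why the threads could not run simultaneously.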

Pika channel.stop_consuming doesn't stop start_consuming loop

I have this piece of code; basically it runs channel.start_consuming().
I want it to stop after a while.
I think channel.stop_consuming() is the right method:
def stop_consuming(self, consumer_tag=None):
    """ Cancels all consumers, signalling the `start_consuming` loop to
    exit.
But it doesn't work: start_consuming() never ends (execution doesn't exit from this call, "end" is never printed).
import unittest
import pika
import threading
import time

_url = "amqp://user:password@xxx.rabbitserver.com/aaa"

class Consumer_test(unittest.TestCase):

    def test_startConsuming(self):
        def callback(channel, method, properties, body):
            print("callback")
            print(body)

        def connectionTimeoutCallback():
            print("connectionClosedCallback")

        def _closeChannel(channel_):
            print("_closeChannel")
            time.sleep(1)
            print("close")
            if channel_.is_open:
                channel_.stop_consuming()
                print("stop_consuming")
            else:
                print("channel is closed")
            #channel_.close()

        params = pika.URLParameters(_url)
        params.socket_timeout = 5
        connection = pika.BlockingConnection(params)
        #connection.add_timeout(2, connectionTimeoutCallback)
        channel = connection.channel()
        channel.basic_consume(callback,
                              queue='test',
                              no_ack=True)

        t = threading.Thread(target=_closeChannel, args=[channel])
        t.start()

        print("start_consuming")
        channel.start_consuming()  # start consuming (loop never ends)
        connection.close()
        print("end")
connection.add_timeout solves my problem, and maybe calling basic_cancel would too, but I want to use the right method.
Thanks.
Note:
I can't respond or add a comment to this (pika, stop_consuming does not work) due to my low reputation points.
Note 2:
I think that I'm not sharing a channel or connection across threads (pika doesn't support this), because I use the "channel_" passed as a parameter and not the "channel" instance of the class (am I wrong?).
I was having the same problem; pika is not thread safe, i.e. connections and channels can't be safely shared across threads.
So I used a separate connection to send a shutdown message, then stopped consuming on the original channel from the callback function.
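On pika 0.12+ there is also a thread-safe way to do this without a second connection: schedule stop_consuming on the connection's own thread with add_callback_threadsafe. A minimal sketch, assuming a local broker and the test queue from the question:

import threading
import time

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='test')
channel.basic_consume(queue='test',
                      on_message_callback=lambda ch, m, p, body: print(body),
                      auto_ack=True)

def stop_later():
    time.sleep(1)
    # calling channel.stop_consuming() directly from this thread is unsafe;
    # hand it to the connection's I/O loop instead
    connection.add_callback_threadsafe(channel.stop_consuming)

threading.Thread(target=stop_later).start()
channel.start_consuming()  # now returns once stop_consuming runs
connection.close()
print("end")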

Consuming rabbitmq queue from inside python threads

This is a long one.
I have a list of usernames and passwords. For each one I want to log in to the account and do some things. I want to use several machines to do this faster. The way I was thinking of doing this is to have a main machine whose job is just to run a cron which from time to time checks if the RabbitMQ queue is empty. If it is, it reads the list of usernames and passwords from a file and sends it to the RabbitMQ queue. Then have a bunch of machines subscribed to that queue whose job is to receive a user/pass, do stuff with it, acknowledge it, and move on to the next one, until the queue is empty, at which point the main machine fills it up again. So far I think I have everything down.
Now comes my problem. I have checked that the things to be done with each user/pass aren't so intensive, so each machine could do three of them simultaneously using Python's threading. In fact, for a single machine I have implemented this by loading the user/passes into a Python Queue() and having three threads consume that Queue(). Now I want to do something similar, but instead of consuming from a Python Queue(), each thread of each machine should consume from a RabbitMQ queue. This is where I'm stuck. To run tests I started with RabbitMQ's tutorial.
send.py:
import pika, sys

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

message = ' '.join(sys.argv[1:])
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body=message)
connection.close()
worker.py
import time, pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print ' [x] received %r' % (body,)
    time.sleep(body.count('.'))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='hello', no_ack=False)
channel.start_consuming()
For the above you can run two worker.py which will subscribe to the rabbitmq queue and consume as expected.
My threading without rabbitmq is something like this:
runit.py
class Threaded_do_stuff(threading.Thread):
    def __init__(self, user_queue):
        threading.Thread.__init__(self)
        self.user_queue = user_queue

    def run(self):
        while True:
            login = self.user_queue.get()
            # 'pass' is a reserved word in Python, so the keyword is renamed here
            do_stuff(user=login[0], password=login[1])
            self.user_queue.task_done()

user_queue = Queue.Queue()
for i in range(3):
    td = Threaded_do_stuff(user_queue)
    td.setDaemon(True)
    td.start()

## fill up the queue
for user in list_users:
    user_queue.put(user)

## go!
user_queue.join()
This also works as expected: you fill up the queue and have 3 threads subscribe to it. Now what I want to do is something like runit.py but instead of using a python Queue(), using something like worker.py where the queue is actually a rabbitmq queue.
Here's something which I tried and didn't work (and I don't understand why)
rabbitmq_runit.py
import time, threading, pika

class Threaded_worker(threading.Thread):
    def callback(self, ch, method, properties, body):
        print ' [x] received %r' % (body,)
        time.sleep(body.count('.'))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def __init__(self):
        threading.Thread.__init__(self)
        self.connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue='hello')
        self.channel.basic_qos(prefetch_count=1)
        self.channel.basic_consume(self.callback, queue='hello')

    def run(self):
        print 'start consuming'
        self.channel.start_consuming()

for _ in range(3):
    print 'launch thread'
    td = Threaded_worker()
    td.setDaemon(True)
    td.start()
I would expect this to launch three threads, each of which is blocked by .start_consuming(), which just stays there waiting for the RabbitMQ queue to send them something. Instead, this program starts, does some prints, and exits. The pattern of the exits is weird too:
launch thread
launch thread
start consuming
launch thread
start consuming
In particular, notice that one "start consuming" is missing.
What's going on?
EDIT: One answer I found to a similar question is here
Consuming a rabbitmq message queue with multiple threads (Python Kombu)
and the answer is to "use celery", whatever that means. I don't buy it; I shouldn't need anything remotely as sophisticated as Celery. In particular, I'm not trying to set up an RPC, and I don't need to read replies from the do_stuff routines.
EDIT 2: The print pattern that I expected would be the following. I do
python send.py first message......
python send.py second message.
python send.py third message.
python send.py fourth message.
and the print pattern would be
launch thread
start consuming
[x] received 'first message......'
launch thread
start consuming
[x] received 'second message.'
launch thread
start consuming
[x] received 'third message.'
[x] received 'fourth message.'
The problem is that you're making the thread daemonic:
td = Threaded_worker()
td.setDaemon(True) # Shouldn't do that.
td.start()
Daemonic threads will be terminated as soon as the main thread exits:
A thread can be flagged as a “daemon thread”. The significance of this
flag is that the entire Python program exits when only daemon threads
are left. The initial value is inherited from the creating thread. The
flag can be set through the daemon property.
Leave out setDaemon(True) and you should see it behave the way you expect.
Also, the pika FAQ has a note about how to use it with threads:
Pika does not have any notion of threading in the code. If you want to
use Pika with threading, make sure you have a Pika connection per
thread, created in that thread. It is not safe to share one Pika
connection across threads.
This suggests you should move everything you're doing in __init__() into run(), so that the connection is created in the same thread you're actually consuming from the queue in.
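A minimal sketch of both fixes together, updated to the pika 1.x calling convention (on_message_callback=); the hello queue is from the question:

import threading

import pika

class ThreadedWorker(threading.Thread):
    def callback(self, ch, method, properties, body):
        print(' [x] received %r' % (body,))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def run(self):
        # the connection is created here, in the consuming thread itself,
        # because a pika connection must not be shared across threads
        connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
        channel = connection.channel()
        channel.queue_declare(queue='hello')
        channel.basic_qos(prefetch_count=1)
        channel.basic_consume(queue='hello', on_message_callback=self.callback)
        print('start consuming')
        channel.start_consuming()

for _ in range(3):
    print('launch thread')
    ThreadedWorker().start()  # non-daemonic, so the program waits for the workers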
