Posting messages to two RabbitMQ queues instead of one (using py-amqp) - python

I've got this strange problem using py-amqp and the Flopsy module. I have written a publisher that sends messages to a RabbitMQ server, and I wanted to be able to send them to a specified queue. The Flopsy module doesn't support that, so I tweaked it, adding a parameter and a line to declare the queue in the __init__ method of the Publisher object:
def __init__(self, routing_key=DEFAULT_ROUTING_KEY,
             exchange=DEFAULT_EXCHANGE, connection=None,
             delivery_mode=DEFAULT_DELIVERY_MODE, queue=DEFAULT_QUEUE):
    self.connection = connection or Connection()
    self.channel = self.connection.connection.channel()
    self.channel.queue_declare(queue)  # ADDED TO SET UP QUEUE
    self.exchange = exchange
    self.routing_key = routing_key
    self.delivery_mode = delivery_mode
The channel object is part of the py-amqplib library
The problem I've got is that, even though it's sending the messages to the specified queue, it's ALSO sending them to the default queue. As we expect this system to send quite a lot of messages, we don't want to stress it by creating useless duplicates... I've tried to debug the code and step into the py-amqplib library, but I'm not able to figure out any error or missing step. Also, I'm not able to find any documentation for py-amqplib outside the code.
Any ideas on why this is happening and how to correct it?

OK, I think I've got it, unless anybody else has a better idea. I've checked this tutorial on AMQP. I was assuming that the publisher should know the queue, but that's not the case: you need to send the message to an exchange, and the consumer declares which queue is bound to that exchange. That allows different options for sending and receiving, as you can see in the tutorial.
So, I've included the exchange information in both the publisher and the consumer, dropped the call to queue_declare, and it appears to be working just fine.
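For reference, here is a minimal, untested sketch of that pattern with py-amqplib; the connection parameters and the exchange/queue/routing-key names are arbitrary placeholders:

from amqplib import client_0_8 as amqp

# Publisher side: only the exchange and the routing key matter here
conn = amqp.Connection(host="localhost:5672", userid="guest", password="guest", virtual_host="/")
chan = conn.channel()
chan.exchange_declare(exchange="sorting_room", type="direct", durable=True, auto_delete=False)
chan.basic_publish(amqp.Message("Hello World", delivery_mode=2),
                   exchange="sorting_room", routing_key="jason")

# Consumer side: declare the queue and bind it to the exchange with the same routing key
chan.queue_declare(queue="po_box", durable=True, exclusive=False, auto_delete=False)
chan.queue_bind(queue="po_box", exchange="sorting_room", routing_key="jason")

chan.close()
conn.close()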

Related

Sending messages from Python using osc4py3?

I'm trying to figure out how to send OSC messages from Python to Max/MSP. I'm currently using osc4py3 to do so, and I have sample code from the documentation that should hypothetically be working, written out here:
from osc4py3.as_eventloop import *
from osc4py3 import oscbuildparse
# Start the system.
osc_startup()
# Make client channels to send packets.
osc_udp_client("127.0. 0.1", 5000, "tester")
msg = oscbuildparse.OSCMessage("/test/me", ",sif", ["text", 672, 8.871])
osc_send(msg, "tester")
The receiver in Max is just a udpreceive object listening on port 5000. I managed to get Processing to send OSC messages to Max, and it worked pretty simply using the oscP5 library, but I can't seem to have the same luck in Python.
What is it I'm missing? Moreover, I don't entirely understand the structure for building OSC messages in osc4py3, even after doing my best with the documentation; if someone would be willing to explain what exactly is going on (namely, the arguments) in something like
msg = oscbuildparse.OSCMessage("/test/me", ",sif", ["text", 672, 8.871])
then I would be forever grateful.
I'm entirely open to using another OSC library, but all I ask is a run-through on how to send a message (I've attempted using pyOSC but that too proved too confusing for me).
Maybe you already solved it, but there are two problems in the posted code. One is the IP address format (there is a space before the second "0"). Then you need the command osc_process() at the end. So the following should work:
from osc4py3.as_eventloop import *
from osc4py3 import oscbuildparse
# Start the system.
osc_startup()
# Make client channels to send packets.
osc_udp_client("127.0.0.1", 5000, "tester")
msg = oscbuildparse.OSCMessage("/test/me", ",sif", ["text", 672, 8.871])
osc_send(msg, "tester")
osc_process()
Hope it will work out
There are different possible scheduling policies in osc4py3. The documentation uses the event-loop model with as_eventloop, where user code must periodically call osc_process() to have osc4py3 deal with internal message queues and communications.
The client examples for sending OSC messages wrap the osc_process() call in a loop (generally it goes inside an event-processing loop).
You may dismiss the osc_process() call entirely by importing the names with the full multithreading scheduling policy at the beginning of your code:
from osc4py3.as_allthreads import *
The third scheduling policy is as_comthreads, where communications are processed in background threads, but received messages (on the server side) are processed synchronously at the osc_process() call.
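As an illustration, a minimal sketch of the same send with the all-threads policy (untested here; the short sleep just gives the background communication thread time to deliver the packet before termination):

from osc4py3.as_allthreads import *
from osc4py3 import oscbuildparse
import time

osc_startup()
osc_udp_client("127.0.0.1", 5000, "tester")
msg = oscbuildparse.OSCMessage("/test/me", ",sif", ["text", 672, 8.871])
osc_send(msg, "tester")
time.sleep(0.1)   # let the background thread flush the outgoing message
osc_terminate()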
(by the author of osc4py3)

How to write INPUT REGISTERS using pymodbus for external Modbus client which will read them

I have been tasked with implementing a pymodbus-based Modbus server. The server will run on a Linux machine like a Raspberry Pi or Up2 controller. It is expected to interface with a Modbus client which I have no control over. That external Modbus client is expecting to be able to read INPUT REGISTERS as well as holding registers served by my Modbus server.
I can set the values of the HOLDING registers that will be read by the external client. I have been unable to set the values of the INPUT registers that the external client will read. How does one do that?
I saw this post which asked a similar question but the question doesn't seem to ever have been answered:
How to write to PLC input registers using pymodbus
Thanks in advance for any help!
As I said, I am not familiar with Python or pymodbus, but take a look at this example, which is something like what I expected would exist: https://pymodbus.readthedocs.io/en/latest/source/example/updating_server.html
Four 100 "register" arrays are created as the data store. I assume di=digital inputs, co=coils, hr=holding registers, ir=input registers
store = ModbusSlaveContext(
    di=ModbusSequentialDataBlock(0, [17]*100),
    co=ModbusSequentialDataBlock(0, [17]*100),
    hr=ModbusSequentialDataBlock(0, [17]*100),
    ir=ModbusSequentialDataBlock(0, [17]*100))
context = ModbusServerContext(slaves=store, single=True)
These values are then updated in "updating_writer(a)", which is called by the background thread. It looks to me like it just adds 1 to each value every time it is called. In a real-world PLC, this function would probably read things like sensors, settings, and other operational/state/configuration data.
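Not the exact code from that example, but roughly what the updater boils down to (fx=4 selects the "ir" input-register block; slave id 0 works because the context was built with single=True):

def updating_writer(context):
    fx = 4            # block selector: 4 = input registers ("ir")
    slave_id = 0x00
    address = 0
    values = context[slave_id].getValues(fx, address, count=100)
    context[slave_id].setValues(fx, address, [v + 1 for v in values])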
Thanks to Marker and to all the examples online. I finally got this working as I wanted. Hope this helps someone else.
There were several gotchas I ran into:
I tried following examples I found online, all of which used pymodbus.server.async instead of pymodbus.server.sync. I found that I could not import pymodbus.server.async because "async" is a reserved word in Python 3.7 (but not in older versions of Python). Either way, I wanted to use pymodbus.server.sync because I wanted to avoid importing Twisted if at all possible. This server will have 1-3 clients connecting to it at most.
All the examples showing an updating writer used "LoopingCall" from Twisted. I had no idea what Twisted is and didn't want to use it unless I had to. I was familiar with multiprocessing and threading. I was already launching the ModbusTcpServer in a Process and was trying to create managed object(s) around the store/context so I could have a different Process doing the updating. But that didn't work: I'm guessing StartTcpServer doesn't like receiving managed objects(?) and I didn't want to delve into that function.
One of the examples commented that a Python thread could be used, and that solved it. I still have the ModbusTcpServer launched in a Process, but right before I call "StartTcpServer" I kick off a THREAD rather than a PROCESS with the updating writer. Then I didn't need to put the store/context in managed object(s), since the Thread sees the same dataspace as the Process that kicked it off. I just needed ANOTHER managed object to send messages into this Thread, the way I was already used to doing with a Process.
Sooo...
First I had to do this:
from threading import Thread
Then I kicked the following off in a Process as I'd done before, but RIGHT BEFORE calling StartTcpServer I kicked off the updating_writer Thread (all start_addr, init_val and num_addrs variables are set earlier).
discrete_inputs_obj = ModbusSequentialDataBlock(di_start_addr, [di_init_val]*di_num_addrs)
coils_obj = ModbusSequentialDataBlock(co_start_addr, [co_init_val]*co_num_addrs)
holding_regs_obj = ModbusSequentialDataBlock(hr_start_addr, [hr_init_val]*hr_num_addrs)
input_regs_obj = ModbusSequentialDataBlock(ir_start_addr, [ir_init_val]*ir_num_addrs)
mb_store = ModbusSlaveContext(di=discrete_inputs_obj, co=coils_obj, hr=holding_regs_obj, ir=input_regs_obj, zero_mode=True)
mb_context = ModbusServerContext(slaves=mb_store, single=True)
updating_writer_cfg = {}
updating_writer_cfg["mb_context"] = mb_context
updating_writer_cfg["managed_obj"] = managed_obj #For being able to send messages to this Thread
updating_writer_thread = Thread(target = updating_writer, args = [updating_writer_cfg]) # We need this to be a thread in this process so that they can share the same datastore
updating_writer_thread.start()
StartTcpServer(mb_context, address=("", port))
In the while loop of updating_writer I have code that polls the managed_obj to receive messages. In addition, the key bits of code in that loop are:
mb_context[0].setValues(4, addr_to_write, regs_to_write)
...where 4 is the function code selecting the INPUT register block, addr_to_write is the register address at which to start writing, and regs_to_write is a list of register values...AND...
regs_to_read = mb_context[0].getValues(3, addr_to_read, num_regs_to_read)
...where 3 is the function code selecting the HOLDING register block, and addr_to_read is the register address at which to start reading. regs_to_read will be a list of length num_regs_to_read.
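Putting it together, a rough sketch of the thread body described above (the managed_obj polling API here is hypothetical; only the setValues/getValues calls mirror the snippets shown):

import time

def updating_writer(cfg):
    mb_context = cfg["mb_context"]
    managed_obj = cfg["managed_obj"]
    while True:
        msg = managed_obj.poll()   # hypothetical: returns (addr_to_write, regs_to_write) or None
        if msg is not None:
            addr_to_write, regs_to_write = msg
            mb_context[0].setValues(4, addr_to_write, regs_to_write)                        # update INPUT registers
            regs_to_read = mb_context[0].getValues(3, addr_to_write, len(regs_to_write))    # read back HOLDING registers
        time.sleep(0.1)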

Asynchronous Bidirectional RPC

I am looking for a RPC library in Java or Python (Python is preferred) that uses TCP. It should support:
Asynchronous
Bidirectional
RPC
Some sort of event loop (with callbacks or similar)
Any recommendations? I have looked at things like bjsonrpc, which seemed to be the right sort of thing; however, it didn't seem possible for the server to identify which connection a request came from. So if a user has identified himself/herself and a request comes in from another user to send a message to that user, it doesn't expose that user's connection so we can send the message.
You should definitely check out Twisted. It's an event-based Python networking framework that has an implementation of an event loop (called the "reactor") supporting select, poll, epoll, kqueue and I/O completion ports, and it mediates asynchronous calls with objects called Deferreds.
As for your RPC requirement, perhaps you should check out Twisted's PB library or AMP.
I'm not entirely sure what you meant by "event loop", but you should check out RPyC (Python)
RPyC Project page
I'm the author of bjsonrpc. I'm sure it's possible to do what you want with it.
Some things may be poorly documented, or maybe some examples are needed.
But, in short, Handlers can store internal states (like authenticated or not, or maybe username). From any handler you can access the "Connection" class, which has the socket itself.
Seems you want something like a chat as an example. I did something similar in the past. I'll try to add a chat example for a new release.
Internal states are explained here:
http://packages.python.org/bjsonrpc/tutorial1/index.html#stateful-server
They should be used for authentication (but no standard auth method is provided yet).
On how to reach the connection class from the handler, that isn't documented yet (sorry), but it is used sometimes in the examples inside the source code. For example, example1-server.py contains this public function:
def gettotal(self):
    self._conn.notify.notify("total")
    return self.value_total
BaseHandler._conn represents the connection for that user, and it is exactly the same class you get when you connect:
conn = bjsonrpc.connect(host=host,port=port,handler_factory=MyHandler)
So, you can store the connections for logged users in a global variable, and later call any client method you want to.
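Something like this rough, untested sketch; the login/send_to/message method names are made up here, and only the self._conn and notify usage follow the examples above:

from bjsonrpc.handlers import BaseHandler

connections = {}   # username -> Connection, shared by all handler instances

class ChatHandler(BaseHandler):
    def login(self, username):
        # remember this user's connection so other handlers can reach it later
        connections[username] = self._conn
        return True

    def send_to(self, username, text):
        conn = connections.get(username)
        if conn is None:
            return False
        conn.notify.message(text)   # fire-and-forget call to a client-side "message" method
        return True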
I am involved in developing Versile Python (VPy) which provides the capabilities you are requesting. It is currently available as development releases intended primarily for testing, however you may want to check it out.
Regarding identifying users you can configure remote methods to receive a context object which enables the method to receive information about an authenticated user, using a syntax similar to this draft code.
from versile.quick import *

@doc
class MessageBox(VExternal):
    """Dispatches IM messages."""

    @publish(show=True, doc=True, ctx=True)
    def send_message(self, msg, ctx=None):
        """Sends a message to the message box"""
        if ctx.identity is None:
            raise VException('No authenticated user')
        else:
            # do something ...
            pass

How can I use Pika to send and receive RabbitMQ messages?

I'm having some issues getting Pika to work with routing keys or exchanges in a way that's consistent with the AMQP or RabbitMQ documentation. I understand that the RabbitMQ documentation uses an older version of Pika, so I have disregarded their example code.
What I'm trying to do is define a queue, "order", and have two consumers, one that handles the exchange or routing_key "production" and one that handles "test". From looking at the RabbitMQ documentation, that should be easy enough to do by using either a direct exchange and routing keys or by using a topic exchange.
Pika, however, doesn't appear to know what to do with the exchanges and routing keys. Using the RabbitMQ management tool to inspect the queues, it's pretty obvious that Pika either didn't queue the message correctly or that RabbitMQ just threw it away.
On the consumer side it isn't really clear how I should bind a consumer to an exchange or handle routing keys, and the documentation isn't really helping.
If I drop all ideas of exchanges and routing keys, messages queue up nicely and are easily handled by my consumer.
Any pointers or example code people have would be nice.
As it turns out, my understanding of AMQP was incomplete.
The idea is as follows:
Client:
The client, after getting the connection, should not care about anything else but the name of the exchange and the routing key. That is, we don't know which queue this will end up in.
channel.basic_publish(exchange='order',
                      routing_key="order.test.customer",
                      body=pickle.dumps(data),
                      properties=pika.BasicProperties(
                          content_type="text/plain",
                          delivery_mode=2))
Consumer
When the channel is open, we declare the exchange and queue
channel.exchange_declare(exchange='order',
                         type="topic",
                         durable=True,
                         auto_delete=False)
channel.queue_declare(queue="test",
                      durable=True,
                      exclusive=False,
                      auto_delete=False,
                      callback=on_queue_declared)
When the queue is ready (the "on_queue_declared" callback is a good place), we can bind the queue to the exchange, using our desired routing key.
channel.queue_bind(queue='test',
                   exchange='order',
                   routing_key='order.test.customer')

# handle_delivery is the callback that will actually pick up and handle messages
# from the "test" queue
channel.basic_consume(handle_delivery, queue='test')
Messages sent to the "order" exchange with the routing key "order.test.customer" will now be routed to the "test" queue, where the consumer can pick them up.
While Simon's answer seems right in general, you might need to swap the parameters for consuming
channel.basic_consume(queue='test', on_message_callback=handle_delivery)
Basic setup is something like:
credentials = pika.PlainCredentials("some_user", "some_password")
parameters = pika.ConnectionParameters(
    "some_host.domain.tld", 5672, "some_vhost", credentials
)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
To start consuming:
channel.start_consuming()
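Putting the pieces together, a minimal consumer sketch for pika >= 1.0 (where exchange_declare takes exchange_type rather than type; the host and credential values are placeholders):

import pika

credentials = pika.PlainCredentials("some_user", "some_password")
parameters = pika.ConnectionParameters("some_host.domain.tld", 5672, "some_vhost", credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()

channel.exchange_declare(exchange="order", exchange_type="topic", durable=True, auto_delete=False)
channel.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False)
channel.queue_bind(queue="test", exchange="order", routing_key="order.test.customer")

def handle_delivery(ch, method, properties, body):
    # process the message, then acknowledge it
    print("got message:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="test", on_message_callback=handle_delivery)
channel.start_consuming()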

Using zeromq in a distributed load pipeline, can I get the address of the last server a message was sent to

If I set up a pipeline which distributes load across a cluster, I would like to log where the messages get sent. This is what I have in mind (python):
import zmq
context = zmq.Context()
socket = context.socket(zmq.DOWNSTREAM)
socket.connect("tcp://127.0.0.1:5000")
socket.connect("tcp://127.0.0.1:6000")
msg = "Hello World\0"
connection_string = socket.send(msg)
# should print "Sent message to tcp://127.0.0.1:5000"
print "Sent message to", connection_string
But I can't find anything that talks about this. Any help at all is appreciated.
I think you'll want to use the X(REP|REQ) sockets somewhere in the topology for this. You might want to check out the monitored queue device:
http://github.com/zeromq/pyzmq/blob/master/zmq/devices.pyx
And for general info have a look at the diagram and explanations of the patterns here too:
http://ipython.scipy.org/doc/nightly/html/development/parallel_connections.html
Also note that the DOWNSTREAM socket type is now called PUSH (and the corresponding UPSTREAM has changed to PULL).
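So the snippet from the question would look roughly like this with current pyzmq names (note that send() does not report which connected peer ended up with the message):

import zmq

context = zmq.Context()
socket = context.socket(zmq.PUSH)      # formerly zmq.DOWNSTREAM
socket.connect("tcp://127.0.0.1:5000")
socket.connect("tcp://127.0.0.1:6000")
socket.send(b"Hello World")            # returns None; the chosen peer is not exposed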
Perhaps you could include the address of the recipient within the reply and log it then?
My 0.02 cents...
Well, I believe that since ZMQ routes in a fair-queued manner, that would be round robin. Hence messages would be sent to recipients in order. However, writing code under that assumption would not be the right thing, since the underlying routing logic might change in future versions.
For PUSH/PULL I am not sure; however, for REQ/REP, I guess the REP side can send the reply back in an envelope with the first field as its address. That way the REQ can read twice on the socket and get the responder's address as well as the data.
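A rough sketch of that idea (both sides are shown in one script for brevity; in practice the worker and client would be separate processes, and the "worker-1" identifier is made up):

import zmq

context = zmq.Context()

# Worker (REP) side: prepend its own identifier to the reply
rep = context.socket(zmq.REP)
rep.bind("tcp://127.0.0.1:5000")

# Client (REQ) side
req = context.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5000")
req.send(b"Hello World")

request = rep.recv()
rep.send_multipart([b"worker-1", b"processed: " + request])

worker_id, data = req.recv_multipart()
print("reply came from", worker_id)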
