I want to use Pyro with an existing set of classes that involve a factory pattern, i.e. an object of Class A (typically there will only be one of these) is used to instantiate objects of Class B (there can be an arbitrary number of these) through a factory method. So, I'm exposing an object of Class A as the Pyro proxy object.
I've extended the Pyro introductory sample code to reflect roughly what I'm trying to do. The server-side code is as follows:
# saved as greeting.py
import Pyro4
import socket
class NewObj:
    func_count = None
    def __init__(self):
        print "{0} ctor".format(self)
        self.func_count = 0
    def __del__(self):
        print "{0} dtor".format(self)
    def func(self):
        print "{0} func call {1}".format(self, self.func_count)
        self.func_count += 1

class GreetingMaker(object):
    def __init__(self):
        print "{0} ctor".format(self)
    def __del__(self):
        print "{0} dtor".format(self)
    def get_fortune(self, name):
        print "getting fortune"
        return "Hello, {0}. Here is your fortune message:\n" \
               "Behold the warranty -- the bold print giveth and the fine print taketh away.".format(name)
    def make_obj(self):
        return NewObj()
greeting_maker=GreetingMaker()
daemon=Pyro4.Daemon(host=socket.gethostbyname(socket.gethostname()), port=8080) # make a Pyro daemon
uri=daemon.register(greeting_maker, "foo") # register the greeting object as a Pyro object
print "Ready. Object uri =", uri # print the uri so we can use it in the client later
daemon.requestLoop() # start the event loop of the server to wait for calls
The client side code was also altered slightly:
# saved as client.py
import Pyro4
uri="PYRO:foo#10.2.129.6:8080"
name="foo"
greeting_maker=Pyro4.Proxy(uri) # get a Pyro proxy to the greeting object
print greeting_maker.get_fortune(name) # call method normally
print greeting_maker.make_obj()
My intention is to be able to create instances of NewObj and to manipulate them just as I can manipulate instances of GreetingMaker on the client side, but it looks as though what happens is that, when the make_obj method gets called, a NewObj is created on the server side, immediately falls out of scope, and is consequently garbage collected.
This is what the output looks like, server-side:
<__main__.GreetingMaker object at 0x2aed47e01110> ctor
/usr/lib/python2.6/site-packages/Pyro4-4.12-py2.6.egg/Pyro4/core.py:152: UserWarning: HMAC_KEY not set, protocol data may not be secure
warnings.warn("HMAC_KEY not set, protocol data may not be secure")
Ready. Object uri = PYRO:foo@10.2.129.6:8080
getting fortune
<__main__.NewObj instance at 0x175c8098> ctor
<__main__.NewObj instance at 0x175c8098> dtor
... and client-side:
/usr/local/lib/python2.6/dist-packages/Pyro4-4.12-py2.6.egg/Pyro4/core.py:152: UserWarning: HMAC_KEY not set, protocol data may not be secure
warnings.warn("HMAC_KEY not set, protocol data may not be secure")
Hello, foo. Here is your fortune message:
Behold the warranty -- the bold print giveth and the fine print taketh away.
Traceback (most recent call last):
File "client.py", line 9, in <module>
print greeting_maker.make_obj()
File "/usr/local/lib/python2.6/dist-packages/Pyro4-4.12-py2.6.egg/Pyro4/core.py", line 146, in __call__
return self.__send(self.__name, args, kwargs)
File "/usr/local/lib/python2.6/dist-packages/Pyro4-4.12-py2.6.egg/Pyro4/core.py", line 269, in _pyroInvoke
data=self._pyroSerializer.deserialize(data, compressed=flags & MessageFactory.FLAGS_COMPRESSED)
File "/usr/local/lib/python2.6/dist-packages/Pyro4-4.12-py2.6.egg/Pyro4/util.py", line 146, in deserialize
return self.pickle.loads(data)
AttributeError: 'module' object has no attribute 'NewObj'
I suspect I could hack around this problem by having the factory class (i.e. GreetingMaker) keep a reference to every NewObj that it creates, and add a cleanup method of some sort... but is that really necessary? Am I missing something in Pyro that can help me implement this?
(edited for clarity)
I recently came across this feature (Pyro4's automatic proxying of objects registered with the daemon) and am using it. It is crucial for my code, which uses a similar factory pattern.
Pyro Server
class Foo(object):
    def __init__(self, x=5):
        self.x = x

class Server(object):
    def build_foo(self, x=5):
        foo = Foo(x)
        # This line will register your foo instance as its own Pyro object
        self._pyroDaemon.register(foo)
        # Returning foo here returns the proxy, not the actual foo
        return foo
#...
uri = daemon.register(Server()) # In the later versions, just use Server, not Server()
#...
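For completeness, here is a rough client-side sketch of using such a factory. It assumes uri is the URI returned by daemon.register(Server()) above and that Pyro4's AUTOPROXY setting is enabled (its default), which is what makes the returned object arrive as a proxy:
import Pyro4

server = Pyro4.Proxy(uri)   # proxy to the registered Server object
foo = server.build_foo(7)   # because build_foo registered foo with the daemon,
                            # the client receives a proxy to it, not a pickled copy
# foo is now a Pyro4.Proxy; calls on it go to the Foo object still living on the server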
The problem here is that Pyro pickles the NewObj object on the server side, but it fails to unpickle it on the client side because the NewObj implementation is unknown to the client.
One way to fix the problem would be to create a third module, for example new_obj.py, containing the NewObj class, and then import it in both the server and the client as follows:
from new_obj import NewObj
This will let the client unpickle the NewObj instance and work with it. Anyway, please note that it will be a real NewObj object living in the client, not a proxy to an object living in the server.
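A minimal sketch of that layout, using the module name suggested above:
# new_obj.py -- the single definition shared by server and client
class NewObj:
    def __init__(self):
        self.func_count = 0
    def func(self):
        print "{0} func call {1}".format(self, self.func_count)
        self.func_count += 1
Both greeting.py and client.py then do from new_obj import NewObj, so pickle can find the class definition when it deserializes the instance on the client.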
I have a class that inherits from another class in which we build a client:
class Client(ClientLibrary):
    def __init__(self, hosts=[{'host': <HOST_ADDRESS>, 'port': <PORT>}], **kwargs):
        ''' alternative constructor, where I'd pass in some defaults to simplify connection '''
        super().__init__(hosts, **kwargs)

    def some_method(self):
        ...
I want to test this class, and already have a test server set up that I want to connect to for testing. My initial approach was to create a MockClient that inherits from the original Client but swaps out the hosts parameter for the test host like so:
# I create a mock client that inherits from the original `Client` class, but passes in the host and port of the test server.
class MockClient(Client):
    def __init__(self, hosts=[{'host': MOCK_HOST, 'port': MOCK_PORT}]):
        super().__init__(hosts=hosts)
The idea was then that I'd use this mock client in the tests; however, I have faced a lot of issues when testing functions that use the original Client class internally. I have tried patching it but keep running into issues.
Is there a better way to approach this? And can this be done using pytest fixtures?
I want to be able to perform the following sorts of tests:
class TestFunctionThatUtilisesClient:
    def test_in_which_class_is_constructed_explicitly(self):
        client = Client()
        r = client.some_method()
        assert r == 'something'

    def test_in_which_class_is_constructed_implicitly(self):
        r = another_method()  # Client() is called somewhere in here
        assert r == 'something else'
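One way to approach the implicit case (a sketch only; it assumes pytest, that Client and another_method live in a hypothetical module called myclient, and that MOCK_HOST and MOCK_PORT are your test server's details) is a fixture that monkeypatches Client's constructor so every construction points at the test server:
import pytest
import myclient  # hypothetical module containing Client and another_method

@pytest.fixture(autouse=True)
def use_test_server(monkeypatch):
    original_init = myclient.Client.__init__

    def patched_init(self, hosts=None, **kwargs):
        # Ignore whatever hosts the caller passed and force the test server.
        original_init(self, hosts=[{'host': MOCK_HOST, 'port': MOCK_PORT}], **kwargs)

    monkeypatch.setattr(myclient.Client, "__init__", patched_init)

def test_in_which_class_is_constructed_implicitly():
    r = myclient.another_method()  # the Client() created inside now talks to the test server
    assert r == 'something else'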
I'm trying to write a script that saves MQTT data and sends it to InfluxDB. The issue I'm having is that the callback function of the paho-mqtt module keeps raising the error:
AttributeError: 'Client' object has no attribute 'write_api'. I think this is because self ends up referring to the internal 'Client' class of paho-mqtt. My full script can be found below:
# Imported modules
# standard time module
from datetime import datetime
import time
# InfluxDB specific modules
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS
#MQTT paho specific modules
import paho.mqtt.client as mqtt
class data_handler():  # Default namespaces are just for all the ESPs.
    def __init__(self, namespace_list=["ESP01","ESP02","ESP03","ESP04","ESP05","ESP06","ESP07","ESP08"]):
        # initialize influxdb client and define access token and data bucket
        token = "XXXXXXXXXX"  # robotlab's token
        self.org = "Home"
        self.bucket = "HomeSensors"
        self.flux_client = InfluxDBClient(url="http://localhost:8086", token=token)
        self.write_api = self.flux_client.write_api(write_options=SYNCHRONOUS)
        # Initialize and establish connection to MQTT broker
        broker_address = "XXX.XXX.XXX.XXX"
        self.mqtt_client = mqtt.Client("influx_client")  # create new instance
        self.mqtt_client.on_message = data_handler.mqtt_message  # attach function to callback
        self.mqtt_client.connect(broker_address)  # connect to broker
        # Define list of namespaces
        self.namespace_list = namespace_list
        print(self.namespace_list)

    def mqtt_message(self, client, message):
        print("message received ", str(message.payload.decode("utf-8")))
        print("message topic=", message.topic)
        print("message qos=", message.qos)
        print("message retain flag=", message.retain)
        sequence = [message.topic, message.payload.decode("utf-8")]
        self.write_api.write(self.bucket, self.org, sequence)

    def mqtt_listener(self):
        for namespace in self.namespace_list:
            self.mqtt_client.loop_start()  # start the loop
            print("Subscribing to topics!")
            message = namespace + "/#"
            self.mqtt_client.subscribe(message, 0)
            time.sleep(4)  # wait
            self.mqtt_client.loop_stop()  # stop the loop

def main():
    influxHandler = data_handler(["ESP07"])
    influxHandler.mqtt_listener()

if __name__ == '__main__':
    main()
The code works fine until I use self.someVariable in the callback function. What would be a good way to solve this problem? I don't really want to make global variables, which is why I chose to use a class.
Thanks in advance!
Dealing with self when there are multiple classes involved can get confusing. The paho library calls on_message as follows:
on_message(self, self._userdata, message)
So the first argument passed is the Client instance, so what you are seeing is expected (when the callback is a plain function, with no classes of your own involved).
If the callback is a method object (which appears to be your aim) "the instance object is passed as the first argument of the function". This means your function would take four arguments and the definition be:
mqtt_message(self, client, userdata, msg)
Based upon this you might expect your application to fail earlier than it does, but let's look at how you are setting the callback:
self.mqtt_client.on_message=data_handler.mqtt_message
data_handler is the class itself, not an instance of the class. This means you are effectively setting the callback to a static function (with no binding to any instance of the class - this answer might help). You need to change this to:
self.mqtt_client.on_message=self.mqtt_message
However this will not work as the method currently only takes three arguments; update the definition to:
def mqtt_message(self, client, userdata, msg)
With those changes I believe this will work (or at least you will find another issue :-) ).
An example might be a better way to explain this:
class mqtt_sim():
    def __init__(self):
        self._on_message = None

    @property
    def on_message(self):
        return self._on_message

    @on_message.setter
    def on_message(self, func):
        self._on_message = func

# This is what you are doing
class data_handler1():
    def __init__(self):
        self.mqtt = mqtt_sim()
        self.mqtt.on_message = data_handler1.mqtt_message  # xxxxx class attribute, not bound to an instance

    def mqtt_message(self, client, message):
        print("mqtt_message1", self, client, message)

# This is what you should be doing
class data_handler2():
    def __init__(self):
        self.mqtt = mqtt_sim()
        self.mqtt.on_message = self.mqtt_message  # attach bound method to callback

    def mqtt_message(self, mqttself, client, message):
        print("mqtt_message2", self, mqttself, client, message)

# Let's try using both of the above
d = data_handler1()
d.mqtt._on_message("self", "userdata", "message")
d = data_handler2()
d.mqtt._on_message("self", "userdata", "message")
I am trying to store the data received by a callback function, in order to access and manipulate it within the script.
As far as I can tell, the .subscribe() function in my code below does not return anything (None); my function is only passed to it as a callback argument. Is there a way to return the data from the function that calls my function?
My code is a simple example with roslibpy (a Python library that interacts with the open-source robotics framework ROS through WebSockets). It is mentioned that the data is published as a stream over a WebSocket each time a message is published to the topic /turtle1/pose. My goal here is to return the data that is being published to the topic. The print command provides a nice visualization of the data, which works just fine.
import roslibpy

client = roslibpy.Ros(host='localhost', port=9090)
client.run()

def myfunc(msg):
    print(msg)

listener = roslibpy.Topic(client, '/turtle1/pose', 'turtlesim/Pose')
# my function is passed as an argument
listener.subscribe(myfunc)

try:
    while True:
        pass
except KeyboardInterrupt:
    client.terminate()
The subscribe() method in the roslibpy library is defined as follows:
def subscribe(self, callback):
    """Register a subscription to the topic.

    Every time a message is published for the given topic,
    the callback will be called with the message object.

    Args:
        callback: Function to be called when messages of this topic are published.
    """
    # Avoid duplicate subscription
    if self._subscribe_id:
        return

    self._subscribe_id = 'subscribe:%s:%d' % (
        self.name, self.ros.id_counter)

    self.ros.on(self.name, callback)
    self._connect_topic(Message({
        'op': 'subscribe',
        'id': self._subscribe_id,
        'type': self.message_type,
        'topic': self.name,
        'compression': self.compression,
        'throttle_rate': self.throttle_rate,
        'queue_length': self.queue_length
    }))
Is there a common way to deal with such problems? Does it make more sense to store the output in an external source (e.g. a .txt file) and then access that source through the script?
You can define a Python class that acts like a function and can modify its own state when called, by defining the magic __call__ method. When obj(whatever) is evaluated and obj is not a function, Python runs obj.__call__(whatever). subscribe only needs its argument to be callable; whether it is an actual function or an object with a __call__ method does not matter to subscribe.
Here's an example of what you could do:
class MessageRecorder():
    def __init__(self):
        self.messages = []

    # Magic Python 'dunder' method.
    # Whenever a MessageRecorder is called as a function,
    # the method defined here will be called on it.
    # In this case, it adds the message to a list of received messages.
    def __call__(self, msg):
        self.messages.append(msg)

recorder = MessageRecorder()
# recorder can be called like a function
recorder("Hello")

listener.subscribe(recorder)

try:
    while True:
        pass
except KeyboardInterrupt:
    client.terminate()

"""Now you can do whatever you'd like with recorder.messages,
which contains all messages received before termination,
in order of reception. If you wanted to print all of them, do:"""
for m in recorder.messages:
    print(m)
I just jumped into WebSocket programming with basic knowledge of asynchronous programming and threads, and I have something like this:
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web
import socket
import uuid
import json
import datetime

class WSHandler(tornado.websocket.WebSocketHandler):
    clients = []

    def open(self):
        self.id = str(uuid.uuid4())
        self.user_info = self.request.remote_ip + ' - ' + self.id
        print(f'[{self.user_info}] Connected')
        client = {"sess": self, "id": self.id}
        self.clients.append(client.copy())

    def on_message(self, message):
        print(f'[{self.user_info}] Message received: {message}')
        print(f'[{self.user_info}] Reply to client: {message[::-1]}')
        self.write_message(message[::-1])
        self.comm(message)

    def on_close(self):
        print(f'[{self.user_info}] Disconnected')
        for x in self.clients:
            if x["id"] == self.id:
                self.clients.remove(x)

    def check_origin(self, origin):
        return True

application = tornado.web.Application([
    (r'/', WSHandler),
])

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(80)
    myIP = socket.gethostbyname(socket.gethostname())
    print('*** Websocket Server Started at %s ***' % myIP)
    tornado.ioloop.IOLoop.instance().start()
My question is: where do I add my code? Should I add everything inside the WSHandler class, or outside, or in another file? And when should I use @classmethod? For now there is no problem with the code when I add it inside the handler, but I only have a few test clients.
Maybe not the full solution, but just a few thoughts.
You can maybe look at the tornado websocket chat example,
here.
The first good change is that their client collection (waiters) is a set(), which makes sure that every client is only contained once by default. It is also defined and accessed as a class variable, so you don't use self.waiters but cls.waiters or ClassName.waiters (in this case ChatSocketHandler.waiters) to access it.
class ChatSocketHandler(tornado.websocket.WebSocketHandler):
    waiters = set()
The second change is that they send the update to every client (you could choose here to send the update not to all but only to some) in a @classmethod, since they don't want to receive the instance (self) but the class (cls), and they refer to the class variables (in their case waiters, cache and cache_size). We can forget about the cache and cache size here.
So like this:
@classmethod
def send_updates(cls, chat):
    logging.info("sending message to %d waiters", len(cls.waiters))
    for waiter in cls.waiters:
        try:
            waiter.write_message(chat)
        except:
            logging.error("Error sending message", exc_info=True)
On every API call a new instance of your handler will be created, referred to as self. Every attribute on self is unique to the instance and related to the actual client calling your methods. This is good for identifying a client on each call.
So an instance-based client list (like self.clients) would always start empty on each call, and adding a client would only add it to this instance's view of the world.
But sometimes you want some variables, like the list of clients, to be the same for all instances created from your class.
This is where class variables (the ones you define directly under the class definition) and the @classmethod decorator come into play.
@classmethod makes the method call independent from any instance. This means that you can only access class variables in those methods, but in the case of a message broker this is pretty much what we want:
add clients to the class variable, which is the same for all instances of your handler; since it is defined as a set, each client is unique
when receiving messages, send them out to all clients (or a subset of them)
So on_message is a "normal" instance method, but in the end it calls something like send_updates(), which is a @classmethod. send_updates() iterates over all (or a subset) of the clients (waiters) and uses this to send the actual updates.
From the example:
@classmethod
def send_updates(cls, chat):
    logging.info("sending message to %d waiters", len(cls.waiters))
    for waiter in cls.waiters:
        try:
            waiter.write_message(chat)
        except:
            logging.error("Error sending message", exc_info=True)
Remember that waiters are added with waiters.add(self) (waiters is a set), so every waiter is really a handler instance and you are "simply" calling that instance's write_message() method (each instance represents a caller). So this is not broadcast but sent to every caller one by one. This would be the place where you could separate clients by some criteria like topics or groups ...
So in short: use @classmethod for methods that are independent from a specific instance (like a caller or client in your case) and where you want to act on "all" or a subset of "all" of your clients. You can only access class variables in those methods, which should be fine since that is their purpose ;)
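As a stripped-down sketch of how this advice could be applied to your WSHandler (illustration only, reusing the imports from your script; details like user_info are left out):
class WSHandler(tornado.websocket.WebSocketHandler):
    clients = set()  # class variable: one shared set for all connections

    def open(self):
        # 'self' is this particular connection; add it to the shared set
        WSHandler.clients.add(self)

    def on_close(self):
        WSHandler.clients.discard(self)

    def on_message(self, message):
        # instance method: runs for the connection that sent the message,
        # then hands off to the classmethod to reach everyone
        WSHandler.send_updates(message[::-1])

    @classmethod
    def send_updates(cls, message):
        # classmethod: sees only class state (cls.clients), no single connection
        for client in cls.clients:
            client.write_message(message)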
So, I'm trying to use SQS to pass a Python object between two EC2 instances. Here's my failed attempt:
import boto.sqs
from boto.sqs.message import Message

class UserInput(Message):
    def set_email(self, email):
        self.email = email

    def set_data(self, data):
        self.data = data

    def get_email(self):
        return self.email

    def get_data(self):
        return self.data

conn = boto.sqs.connect_to_region('us-west-2')
q = conn.create_queue('user_input_queue')
q.set_message_class(UserInput)
m = UserInput()
m.set_email('something@something.com')
m.set_data({'foo': 'bar'})
q.write(m)
It returns an error message saying that "The request must contain the parameter MessageBody". Indeed, the tutorial tells us to do m.set_body('something') before writing the message to the queue. But here I'm not passing a string; I want to pass an instance of my UserInput class. So, what should MessageBody be? I've read the docs and they say that
The constructor for the Message class must accept a keyword parameter “body” which represents the content or body of the message. The format of this parameter will depend on the behavior of the particular Message subclass. For example, if the Message subclass provides dictionary-like behavior to the user the body passed to the constructor should be a dict-like object that can be used to populate the initial state of the message.
I guess the answer to my question might be in that paragraph, but I can't make sense of it. Could anyone provide a concrete code example to illustrate what they are talking about?
For an arbitrary Python object the answer is to serialize the object into a string, use SQS to send that string to the other EC2 instance and deserialize the string back to an instance of the same class.
For example, you could use JSON with base64 encoding to serialize an object into a string and that would be the body of your message.
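A minimal sketch of that idea with boto and json (the queue name and fields mirror the question; UserInput is assumed to be importable on both instances, and base64 could be layered on top as mentioned above):
import json
import boto.sqs
from boto.sqs.message import Message

conn = boto.sqs.connect_to_region('us-west-2')
q = conn.create_queue('user_input_queue')

# Sender: serialize the object's state to a JSON string and use it as the message body.
payload = json.dumps({'email': 'something@something.com', 'data': {'foo': 'bar'}})
m = Message()
m.set_body(payload)
q.write(m)

# Receiver (on the other EC2 instance): read the body back and rebuild the object.
msgs = q.get_messages(1)
if msgs:
    state = json.loads(msgs[0].get_body())
    user_input = UserInput()
    user_input.set_email(state['email'])
    user_input.set_data(state['data'])
    q.delete_message(msgs[0])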