Python multi layer dot notation

I'm trying to get a set of python classes built to simplify the sending and receiving of information over a socket connection.
I've had success using a getter and setter via @property.
I would like the code to be in a dot notation such as:
class dc_load_fake_socket:
    @staticmethod
    def sendall(msg):
        print(msg)

    @staticmethod
    def recv(size):
        return 'Dummy Info'


class test_equipment:
    def __init__(self, TCP_IP, TCP_PORT=2101, BUFFER_SIZE=1024):
        self.TCP_IP = str(TCP_IP)
        self.TCP_PORT = TCP_PORT
        self.BUFFER_SIZE = BUFFER_SIZE
        self.dc_load_socket = dc_load_fake_socket

    def sendall(self, message):
        message = f'{message}\n'
        self.dc_load_socket.sendall(message.encode())

    def recv(self):
        return self.dc_load_socket.recv(self.BUFFER_SIZE)

    def query(self, message):
        self.sendall(message)
        return self.recv().strip('\n')

    @property
    def voltage(self):
        return self.query("MEASure:VOLTage?")


dcl = test_equipment('192.168.0.2')
print(dcl.voltage)
While this works, the issue is that this isn't the only 'subsystem' that uses voltage.
Ideally I would like it to act like this:
dcl.measure.voltage
dcl.fetch.voltage
dcl.spec.voltage
dcl.spec.voltage = 1.5
I've looked at using inner classes but I'm not able to use the recv(), sendall() and query() of the main class.
Originally this was done in a Python file with only functions, but I ran into an issue where I actually needed two of them. I started turning this into a class to make it easier to maintain, and at the same time had to update from Python 2.7 to 3.
I'm not the most experienced in Python, and any help with this would be extremely appreciated.
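For illustration, one way to get the dcl.measure.voltage style shown above is to give each subsystem a small helper class whose instances keep a reference back to the parent instrument, so they can reuse its query() and sendall(). This is only a minimal sketch building on the fake socket above; the _subsystem helper, the FETCh/SPEC prefixes, and the exact SCPI command strings are illustrative assumptions, not part of the original code:

class dc_load_fake_socket:
    @staticmethod
    def sendall(msg):
        print(msg)

    @staticmethod
    def recv(size):
        return 'Dummy Info'


class _subsystem:
    """One SCPI subsystem (MEASure, FETCh, SPEC, ...); forwards to the parent."""
    def __init__(self, parent, prefix):
        self._parent = parent
        self._prefix = prefix

    @property
    def voltage(self):
        return self._parent.query(f"{self._prefix}:VOLTage?")

    @voltage.setter
    def voltage(self, value):
        self._parent.sendall(f"{self._prefix}:VOLTage {value}")


class test_equipment:
    def __init__(self, TCP_IP, TCP_PORT=2101, BUFFER_SIZE=1024):
        # TCP_IP / TCP_PORT handling omitted for brevity
        self.BUFFER_SIZE = BUFFER_SIZE
        self.dc_load_socket = dc_load_fake_socket
        # one attribute per subsystem, each keeping a reference back to self
        self.measure = _subsystem(self, "MEASure")
        self.fetch = _subsystem(self, "FETCh")
        self.spec = _subsystem(self, "SPEC")

    def sendall(self, message):
        self.dc_load_socket.sendall(f"{message}\n".encode())

    def recv(self):
        return self.dc_load_socket.recv(self.BUFFER_SIZE)

    def query(self, message):
        self.sendall(message)
        return self.recv().strip('\n')


dcl = test_equipment('192.168.0.2')
print(dcl.measure.voltage)   # sends b'MEASure:VOLTage?\n'
dcl.spec.voltage = 1.5       # sends b'SPEC:VOLTage 1.5\n'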

Related

Python: How do I refactor and structure code in this scenario?

I'm quite stuck on structuring the code in this scenario. Can anyone help me with this?
module.py

import asyncio

class Server:
    def __init__(self):
        self.d = {}

    @classmethod
    async def create(cls):
        self = cls()
        await self.func()
        return self

    async def func(self):
        await asyncio.sleep(5)  # Some other async code here
        self.a = 12

    def reg(self, ev):
        def decorator(func):
            self.d[ev] = func
            return func
        return decorator

    def reg2(self, ev, func):
        self.d[ev] = func
main.py

import asyncio
from module import Server

async def main():
    ser = await Server.create()
    # This would be another way... but I find the other way neater
    ser.reg2('msg', some_handler)

# I want to decorate and register this using the
# reg func; but since the object is not created yet,
# how do I accomplish this?
# @ser.reg('msg')
async def some_handler():
    ...

if __name__ == "__main__":
    asyncio.run(main())
Some key points of my aim:
The function 'some_handler' is never used other than at registration time. That is, the function solely exists to be registered and is not used anywhere else.
Since the Server class needs asynchronous initialisation, it cannot be created globally.
(I don't know whether this point is helpful.) Generally only one Server instance is created for a single program. There won't be any other instance, even in other modules.
How do I model my code to satisfy this scenario? I have mentioned an alternate way to register the function, but I feel I am missing something, as some_handler isn't used anywhere else. I have thought about making the Server class into a metaclass to do the registering and converting main() and some_handler() into parts of the metaclass's class, but I'm seeking different views and opinions.
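For reference, one way to keep the decorator style even though the instance is created asynchronously is to register handlers on the class and copy them into each instance. This is a minimal hedged sketch, not the asker's code; the _pending registry and the asyncio.sleep(0) stand-in are assumptions:

import asyncio

class Server:
    _pending = {}  # handlers registered before any instance exists

    def __init__(self):
        self.d = dict(self._pending)

    @classmethod
    def reg(cls, ev):
        def decorator(func):
            cls._pending[ev] = func
            return func
        return decorator

    @classmethod
    async def create(cls):
        self = cls()
        await asyncio.sleep(0)  # stand-in for the real async initialisation
        return self

@Server.reg('msg')          # registration happens at import time, no instance needed
async def some_handler():
    ...

async def main():
    ser = await Server.create()
    print(ser.d)            # {'msg': <function some_handler at ...>}

if __name__ == "__main__":
    asyncio.run(main())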

modify a function of a class from another class

In the pymodbus library, in server.sync, socketserver.BaseRequestHandler is used and is defined as follows:
class ModbusBaseRequestHandler(socketserver.BaseRequestHandler):
    """ Implements the modbus server protocol

    This uses the socketserver.BaseRequestHandler to implement
    the client handler.
    """
    running = False
    framer = None

    def setup(self):
        """ Callback for when a client connects
        """
        _logger.debug("Client Connected [%s:%s]" % self.client_address)
        self.running = True
        self.framer = self.server.framer(self.server.decoder, client=None)
        self.server.threads.append(self)

    def finish(self):
        """ Callback for when a client disconnects
        """
        _logger.debug("Client Disconnected [%s:%s]" % self.client_address)
        self.server.threads.remove(self)

    def execute(self, request):
        """ The callback to call with the resulting message

        :param request: The decoded request message
        """
        try:
            context = self.server.context[request.unit_id]
            response = request.execute(context)
        except NoSuchSlaveException as ex:
            _logger.debug("requested slave does not exist: %s" % request.unit_id)
            if self.server.ignore_missing_slaves:
                return  # the client will simply timeout waiting for a response
            response = request.doException(merror.GatewayNoResponse)
        except Exception as ex:
            _logger.debug("Datastore unable to fulfill request: %s; %s", ex, traceback.format_exc())
            response = request.doException(merror.SlaveFailure)
        response.transaction_id = request.transaction_id
        response.unit_id = request.unit_id
        self.send(response)

    # ----------------------------------------------------------------------- #
    # Base class implementations
    # ----------------------------------------------------------------------- #

    def handle(self):
        """ Callback when we receive any data
        """
        raise NotImplementedException("Method not implemented by derived class")

    def send(self, message):
        """ Send a request (string) to the network

        :param message: The unencoded modbus response
        """
        raise NotImplementedException("Method not implemented by derived class")
setup() is called when a client connects to the server, and finish() is called when a client disconnects. I want to manipulate these methods (setup() and finish()) from another class, in another file that uses the library (pymodbus), and add some code to the setup and finish functions. I do not intend to modify the library, since that may cause strange behavior in specific situations.
--- Edited ---
To clarify, I want the setup function in the ModbusBaseRequestHandler class to work as before and remain untouched, but to add something else to it, and this modification should be done in my code, not in the library.
The simplest, and usually best, thing to do is to not manipulate the methods of ModbusBaseRequestHandler, but instead inherit from it and override those methods in your subclass, then just use the subclass wherever you would have used the base class:
class SoupedUpModbusBaseRequestHandler(ModbusBaseRequestHandler):
    def setup(self):
        # do different stuff
        # call super().setup() if you want
        # or call socketserver.BaseRequestHandler.setup() to skip over it
        # or call neither
        ...
Notice that a class statement is just a normal statement, and can go anywhere any other statement can, even in the middle of a method. So, even if you need to dynamically create the subclass because you won't know what you want setup to do until runtime, that's not a problem.
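For instance, here is a hedged sketch of building such a subclass at runtime inside a factory function, once the desired connect behaviour is known (make_handler and extra_connect_hook are illustrative names, not part of pymodbus):

def make_handler(extra_connect_hook):
    # the class statement runs like any other statement, so the subclass
    # can be created only once the hook is known
    class CustomHandler(ModbusBaseRequestHandler):
        def setup(self):
            super().setup()           # keep the library behaviour
            extra_connect_hook(self)  # then run the extra code
    return CustomHandler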
If you actually need to monkeypatch the class, that isn't very hard—although it is easy to screw things up if you aren't careful.
def setup(self):
    # do different stuff
    ...

ModbusBaseRequestHandler.setup = setup
If you want to be able to call the normal implementation, you have to stash it somewhere:
_setup = ModbusBaseRequestHandler.setup

def setup(self):
    # do different stuff
    # call _setup whenever you want
    ...

ModbusBaseRequestHandler.setup = setup
If you want to make sure you copy over the name, docstring, etc., you can use functools.wraps:
@functools.wraps(ModbusBaseRequestHandler.setup)
def setup(self):
    # do different stuff
    ...

ModbusBaseRequestHandler.setup = setup
Again, you can do this anywhere in your code, even in the middle of a method.
If you need to monkeypatch one instance of ModbusBaseRequestHandler while leaving any other instances untouched, you can even do that. You just have to manually bind the method:
def setup(self):
    # do different stuff
    ...

myModbusBaseRequestHandler.setup = setup.__get__(myModbusBaseRequestHandler)
If you want to call the original method, or wrap it, or do this in the middle of some other method, etc., it's otherwise basically the same as the last version.
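Applied to the original question, a hedged sketch that wraps both setup() and finish() from your own module, without editing the library, might look like the following. The import path follows the server.sync location mentioned in the question, and the print calls are placeholders for whatever extra code you actually need:

from pymodbus.server.sync import ModbusBaseRequestHandler

_original_setup = ModbusBaseRequestHandler.setup
_original_finish = ModbusBaseRequestHandler.finish

def setup(self):
    _original_setup(self)                          # keep the library behaviour intact
    print("extra work after a client connects")    # your additions go here

def finish(self):
    print("extra work before a client disconnects")
    _original_finish(self)

ModbusBaseRequestHandler.setup = setup
ModbusBaseRequestHandler.finish = finish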
It can be done with an interceptor (a decorator that wraps the function):
from functools import wraps

def interceptor(func):
    print('this is executed at function definition time (def my_func)')

    @wraps(func)
    def wrapper(*args, **kwargs):
        print('this is executed before function call')
        result = func(*args, **kwargs)
        print('this is executed after function call')
        return result
    return wrapper

@interceptor
def my_func(n):
    print('this is my_func')
    print('n =', n)

my_func(4)
more explanation can be found here

dynamically choose API to use

I use an external tool in my Python code. In order to initialize this tool, I have to create a couple of objects. The external tool in question provides two quite different APIs, and neither of these APIs is capable of creating all the objects the tool needs. Let's say the tool is a traffic simulation tool, where car objects are created using API 1 and bikes are created using API 2.
I have played with inheritance, tried to pick an appropriate design pattern but all my solutions look ugly to me.
The simplest way to represent what I am trying to achieve is:
class ObjectHandler():
    api_1_types = ('type1', 'foo')
    api_2_types = ('type2', 'bar')

    def __init__(self):
        self.handler1 = ObjectHandler1()
        self.handler2 = ObjectHandler2()

    def create(self, obj_type):
        if obj_type in self.api_1_types:
            return self.handler1.create()
        elif obj_type in self.api_2_types:
            return self.handler2.create()
        else:
            raise NotImplementedError

class ObjectHandler1():
    def __init__(self):
        # load external module that defines API 1
        ...

    def create(self):
        # return an object created via API 1
        ...

class ObjectHandler2():
    def __init__(self):
        # load external module that defines API 2
        ...

    def create(self):
        # return an object created via API 2
        ...

if __name__ == '__main__':
    handler = ObjectHandler()
    object_1 = handler.create('type1')  # must be created by ObjectHandler1
    object_2 = handler.create('type2')  # must be created by ObjectHandler2
I am now searching for a good OO and pythonic way to achieve this.
Your method looks OK. You should use sets for the membership tests, but it doesn't really matter. An alternative could be the following, though I don't know if it is better:
def __init__(self):
    self.handlers = dict()
    handler1 = ObjectHandler1()
    for type in self.api_1_types:
        # These won't be copied but simply be a reference to the object
        self.handlers[type] = handler1
    # Repeat for the other one
and
def create(self, obj_type):
    try:
        return self.handlers[obj_type].create()
    except KeyError:
        raise NotImplementedError
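Putting the two fragments together, here is a self-contained hedged sketch of the dictionary-dispatch idea; the handler internals are stubbed out for illustration and are not part of the original answer:

class ObjectHandler1:
    def create(self):
        return 'object from API 1'

class ObjectHandler2:
    def create(self):
        return 'object from API 2'

class ObjectHandler:
    api_1_types = {'type1', 'foo'}   # sets, as suggested above
    api_2_types = {'type2', 'bar'}

    def __init__(self):
        self.handlers = {}
        handler1 = ObjectHandler1()
        handler2 = ObjectHandler2()
        for t in self.api_1_types:
            self.handlers[t] = handler1   # references, not copies
        for t in self.api_2_types:
            self.handlers[t] = handler2

    def create(self, obj_type):
        try:
            return self.handlers[obj_type].create()
        except KeyError:
            raise NotImplementedError(obj_type)

handler = ObjectHandler()
print(handler.create('type1'))   # object from API 1
print(handler.create('bar'))     # object from API 2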

Dynamic traits do not survive pickling

traits_pickle_problem.py
from traits.api import HasTraits, List
import cPickle

class Client(HasTraits):
    data = List

class Person(object):
    def __init__(self):
        self.client = Client()
        # dynamic handler
        self.client.on_trait_event(self.report, 'data_items')

    def report(self, obj, name, old, new):
        print 'client added-- ', new.added

if __name__ == '__main__':
    p = Person()
    p.client.data = [1, 2, 3]
    p.client.data.append(10)
    cPickle.dump(p, open('testTraits.pkl', 'wb'))
The above code reports a dynamic trait. Everything works as expected in this code. However, using a new python process and doing the following:
>>> from traits_pickle_problem import Person, Client
>>> p=cPickle.load(open('testTraits.pkl','rb'))
>>> p.client.data.append(1000)
causes no report of the list append. However, re-establishing the listener separately as follows:
>>> p.client.on_trait_event(p.report,'data_items')
>>> p.client.data.append(1000)
client added-- [1000]
makes it work again.
Am I missing something, or does the handler need to be re-established in __setstate__ during the unpickling process?
Any help appreciated. This is for Python 2.7 (32-bit) on Windows with traits version 4.30.
Running pickletools.dis(cPickle.dumps(p)), you can see the handler object being referenced:
...
213: c GLOBAL 'traits.trait_handlers TraitListObject'
...
But there's no further information on how it should be wired to the report method. So either the trait_handler doesn't pickle itself out properly, or it's an ephemeral thing like a file handle that can't be pickled in the first place.
In either case, your best option is to overload __setstate__ and re-wire the event handler when the object is re-created. It's not ideal, but at least everything is contained within the object.
class Person(object):
    def __init__(self):
        self.client = Client()
        # dynamic handler
        self.client.on_trait_event(self.report, 'data_items')

    def __setstate__(self, d):
        self.client = d['client']
        self.client.on_trait_event(self.report, 'data_items')

    def report(self, obj, name, old, new):
        print 'client added-- ', new.added
Unpickling the file now correctly registers the event handler:
>>> p = cPickle.load(open('testTraits.pkl','rb'))
>>> p.client.data.append(1000)
client added-- [1000]
You might find this talk Alex Gaynor did at PyCon interesting. It goes into the high points of how pickling works under the hood.
EDIT - initial response used on_trait_change - a typo that appears to work. Changed it back to on_trait_event for clarity.
I had the same problem but worked around it like this: imagine I want to pickle only parts of a quite big class, and some of the objects have been set to transient=True so they're not pickled, because there is nothing important to save, e.g.
class LineSpectrum(HasTraits):
    andor_cam = Instance(ANDORiKonM, transient=True)
in contrast to objects which should be saved, e.g.
spectrometer = Instance(SomeNiceSpectrometer)
In my LineSpectrum class, I have a
def __init__(self, f):
    super(LineSpectrum, self).__init__()
    self.load_spectrum(f)

def __setstate__(self, state):  # WORKING!
    print("LineSpectrum: __setstate__ with super(...) call")
    self.__dict__.update(state)
    super(LineSpectrum, self).__init__()  # this has to be done, otherwise pickled sliders won't work; also, update __dict__ first!
    self.from_pickle = True  # is not needed by traits, need it for myself
    self.andor_cam = ANDORiKonM(self.filename)
    self.load_spectrum(self.filename)
In my case, this works perfectly - all sliders are working, all values set at the time the object has been pickled are set back.
Hope this works for you or anybody who's having the same problem. Got Anaconda Python 2.7.11, all packages updated.
PS: I know the thread is old, but didn't want to open a new one just for this.

What kind of design pattern am I looking for and how do I implement this in python

I am trying to give a slight amount of genericness to my code. Basically, what I am looking for is this.
I wish to write an API interface MyAPI:
class MyAPI(object):
    def __init__(self):
        pass

    def upload(self):
        pass

    def download(self):
        pass

class MyAPIEx(object):
    def upload(self):
        # specific implementation
        ...

class MyAPIEx2(object):
    def upload(self):
        # specific implementation
        ...

# Actual usage ...
def use_api():
    obj = MyAPI()
    obj.upload()
So what I want is that, based on a configuration, I should be able to call the upload function of either MyAPIEx or MyAPIEx2. What is the exact design pattern I am looking for, and how do I implement it in Python?
You are looking for Factory method (or any other implementation of a factory).
It's really hard to say what pattern you are using without more info. The way to instantiate MyAPI is indeed a factory, like @Darhazer mentioned, but it sounds more like you're interested in the pattern used for the MyAPI class hierarchy, and without more info we can't say.
I made some code improvements below, look for the comments with the word IMPROVEMENT.
class MyAPI(object):
    def __init__(self):
        pass

    def upload(self):
        # IMPROVEMENT making this function abstract
        # This is how I do it, but you can find other ways searching on google
        raise NotImplementedError("upload function not implemented")

    def download(self):
        # IMPROVEMENT making this function abstract
        # This is how I do it, but you can find other ways searching on google
        raise NotImplementedError("download function not implemented")

# IMPROVEMENT Notice that I changed object to MyAPI to inherit from it
class MyAPIEx(MyAPI):
    def upload(self):
        # specific implementation
        ...

# IMPROVEMENT Notice that I changed object to MyAPI to inherit from it
class MyAPIEx2(MyAPI):
    def upload(self):
        # specific implementation
        ...

# IMPROVEMENT changed use_api() to get_api(), which is a factory;
# call it to get the MyAPI implementation
def get_api(configDict):
    if 'MyAPIEx' in configDict:
        return MyAPIEx()
    elif 'MyAPIEx2' in configDict:
        return MyAPIEx2()
    else:
        raise ValueError('no known API in config')  # some sort of an error

# Actual usage ...
# IMPROVEMENT, create a config dictionary to be used in the factory
configDict = dict()
# fill in the config accordingly
obj = get_api(configDict)
obj.upload()
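As a variation on the NotImplementedError approach in the comments above, here is a hedged sketch of the same idea using Python's abc module; the config key 'api', the ValueError, and the print bodies are illustrative choices, not from the original answer:

from abc import ABC, abstractmethod

class MyAPI(ABC):
    @abstractmethod
    def upload(self):
        ...

    @abstractmethod
    def download(self):
        ...

class MyAPIEx(MyAPI):
    def upload(self):
        print("MyAPIEx.upload")

    def download(self):
        print("MyAPIEx.download")

def get_api(config):
    implementations = {'MyAPIEx': MyAPIEx}   # map config value -> class
    try:
        return implementations[config['api']]()
    except KeyError:
        raise ValueError("unknown API in config: %r" % config)

obj = get_api({'api': 'MyAPIEx'})
obj.upload()   # prints MyAPIEx.upload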
