how does the traitsui object model work - python

Can someone explain why this code crashes? What I think should happen is that it should not crash if it is using fully qualified trait names, which it is in this case.
from traits.api import *
from traitsui.api import *

class Struct(HasTraits):
    pass

class Struct1(Struct):
    some_data = Int(4)
    some_more_data = Str('pizza')

class Struct2(Struct):
    some_data = Int(5)
    some_more_data = Str('wossar')

class Subwindow(Handler):
    struct1 = Instance(Struct1)
    struct2 = Instance(Struct2)
    which_struct = Enum(1, 2)
    cur_struct = Any

    def _struct1_default(self):
        return Struct1()

    def _struct2_default(self):
        return Struct2()

    def _cur_struct(self):
        return self.struct1

    @on_trait_change('which_struct')
    def switch_views(self):
        NotImplemented  # switch views here

    traits_view = View(
        Item(name='which_struct'),
        Item(name='object.cur_struct.some_data'),
        Item(name='object.cur_struct.some_more_data'),
    )

Subwindow().configure_traits()
When I run this, I get
AttributeError: 'Subwindow' object has no attribute 'object.cur_struct.some_data'
but it does, if you inspect the object.
I was fiddling with this example and I made it work correctly by replacing cur_struct with a Property trait, and I don't know why. However, that isn't feasible for my real application, where another class listens for events from an entirely different class and switches cur_struct.
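Roughly, the Property-based variant looks like this (a minimal sketch, assuming a getter keyed off which_struct rather than the Any trait above):

class Subwindow(Handler):
    ...
    cur_struct = Property(depends_on='which_struct')

    def _get_cur_struct(self):
        return self.struct1 if self.which_struct == 1 else self.struct2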

Ah, don't use Item(name=...). Just pass the name as the first positional argument. The constructor does some special processing on the positional value before assigning it to the name trait; passing name explicitly is only done internally, when that processing needs to be avoided.
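For example, a minimal sketch of the suggested change (only the Item calls in the question's view are touched):

traits_view = View(
    Item('which_struct'),
    Item('object.cur_struct.some_data'),
    Item('object.cur_struct.some_more_data'),
)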

Related

Python - Using classes as default values of another class' attribute - NameError

I'm creating a structure of classes for a wrapper of an API I'm currently writing.
I have multiple classes defined inside my models file. I want to assign the default value of some attributes of classes to other classes. When I do this, I get a NameError because sometimes I try to use classes that are defined below the current class, thus Python does not know these classes yet. I've tried multiple solutions but none of them seem to work. Does anybody know an alternative or has experience with this?
my classes I've defined:
class RateResponse(BaseModel):
    def __init__(self,
                 provider=Provider()
                 ):
        self.provider = provider

class Provider(ObjectListModel):
    def __init__(self):
        super(Provider, self).__init__(list=[], listObject=ProviderItem)

    @property
    def providerItems(self):
        return self.list

class ProviderItem(BaseModel):
    def __init__(self,
                 code=None,
                 notification=Notification(),
                 service=Service()
                 ):
        self.code = code
        self.notification = notification
        self.service = service
As you can see above, I'm initialising the attribute 'provider' on the class RateResponse with an empty object of the class Provider, which is defined below it. I'm getting a NameError on this line because Provider is defined below RateResponse.
provider=Provider()
NameError: name 'Provider' is not defined
The simple solution to the above would be to swap the positions of the classes. However, this is only a snippet of my file, which is currently 400 lines long and full of these kinds of classes and initializations. It would be impossible to order them all correctly.
I've looked up some solutions where I thought I could return an empty object of a class by a string. I thought the function would only evaluate after all the classes were defined, but I was wrong. This is what I tried:
def getInstanceByString(classStr):
    return globals()[classStr]()

class RateResponse(BaseModel):
    def __init__(self,
                 provider=getInstanceByString('Provider')
                 ):
        self.provider = provider
But to no avail. Does anybody have experience with this? Is this even possible within Python? Is my structure just wrong? Any help is appreciated. Thanks.
This code might not mean what you want it to mean:
class RateResponse(BaseModel):
    def __init__(self,
                 provider=Provider()
                 ):
        ...
This code is saying that when this class is declared you want to make an instance of Provider which will be the default value for the provider parameter.
You may have meant that the default argument should be a new instance of Provider for each client that makes an instance of RateResponse.
You can use the Mutable Default Argument pattern to get the latter:
class RateResponse(BaseModel):
    def __init__(self, provider=None):
        if provider is None:
            provider = Provider()
        ...
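To make the difference concrete, here is a small self-contained sketch of the None-default pattern (BaseModel here is just a bare stand-in for the question's base class): the forward reference is fine because Provider is only looked up when RateResponse() is actually called, not when the class is defined.

class BaseModel(object):  # stand-in for the question's base class
    pass

class RateResponse(BaseModel):
    def __init__(self, provider=None):
        if provider is None:
            provider = Provider()  # resolved at call time, after Provider exists
        self.provider = provider

class Provider(BaseModel):  # defined after RateResponse, yet no NameError
    pass

rate = RateResponse()
print(type(rate.provider).__name__)  # Provider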
However, if you really do want a single instance when the client wants the default you could add a single instance below the Provider definition:
class Provider(ObjectListModel):
    ...

Singleton_Provider = Provider()
Then the RateResponse class could still use the current pattern, but instead perform this assignment inside the if:
if provider is None:
    provider = Singleton_Provider
At the time that the assignment is performed, the Singleton_Provider will have been created.

Python Crash Course, Alien Invasion, Chapter 12, "Unresolved attribute reference 'draw_bullet' for class 'Sprite'" [duplicate]

I have two classes that look like this:
class BaseClass:
    def the_dct(self):
        return self.THE_DCT

class Kid(BaseClass):
    THE_DCT = {'key': 'value'}

# Code I'll be running
inst = Kid()
print(inst.the_dct())
Inheritance has to be this way: the second class contains THE_DCT and the first class contains the the_dct() method.
It works just fine, but my problem is that I get a warning in PyCharm (unresolved attribute reference) about THE_DCT in BaseClass.
Is there a reason why it's warning me (as in, why I should avoid it)?
Is there something I should do differently?
Within BaseClass you reference self.THE_DCT, yet when PyCharm looks at this class, it sees that THE_DCT doesn't exist.
Assuming you are treating this as an abstract class, PyCharm doesn't know that that is your intention. All it sees is a class accessing an attribute that doesn't exist, and so it displays the warning.
Although your code will run perfectly fine (as long as you never instantiate BaseClass), you should really change it to:
class BaseClass(object):
    THE_DCT = {}

    def the_dct(self):
        return self.THE_DCT
In addition to the existing answers, or as an alternative, you can use Type Hints. This satisfies PyCharm's warnings and also distinguishes the attribute as being inherited (or at least not native to the class). It's as simple as adding THE_DCT: dict at the very top of your class (before anything else).
class BaseClass(object):
    THE_DCT: dict  # Add a type hint at the top of the class, before anything else

    def the_dct(self):
        return self.THE_DCT

class Kid(BaseClass):
    THE_DCT = {'vars': 'values'}
I prefer this approach because it avoids adding an unnecessary placeholder attribute (self.THE_DCT = {}) and, because it looks different from an ordinary attribute assignment, it also removes the need for a comment next to the placeholder explaining that the attribute is inherited.

How does Python interface work (in Twisted)?

I am following this explanation, and I don't quite get how the Python interpreter arrives at the following. In the first example, is Python seeing that @implementer(IAmericanSocket) is not applied to UKSocket, and then deciding to turn it into an AdaptToAmericanSocket because that is the only implementation of IAmericanSocket taking one argument? What if there were another class implementing IAmericanSocket with one argument? In the second example, why does IAmericanSocket not override AmericanSocket's voltage method?
>>> IAmericanSocket(uk)
<__main__.AdaptToAmericanSocket instance at 0x1a5120>
>>> IAmericanSocket(am)
<__main__.AmericanSocket instance at 0x36bff0>
with the code below:
from zope.interface import Interface, implementer
from twisted.python import components

class IAmericanSocket(Interface):
    def voltage():
        """
        Return the voltage produced by this socket object, as an integer.
        """

@implementer(IAmericanSocket)
class AmericanSocket:
    def voltage(self):
        return 120

class UKSocket:
    def voltage(self):
        return 240

@implementer(IAmericanSocket)
class AdaptToAmericanSocket:
    def __init__(self, original):
        self.original = original

    def voltage(self):
        return self.original.voltage() / 2

components.registerAdapter(
    AdaptToAmericanSocket,
    UKSocket,
    IAmericanSocket)
You can see the full documentation for zope.interface here: http://docs.zope.org/zope.interface/ - it may provide a more thorough introduction than Twisted's quick tutorial.
To answer your specific question, the registerAdapter call at the end there changes the behavior of calling IAmericanSocket.
When you call an Interface, it first checks to see if its argument provides itself. Since the class AmericanSocket implements IAmericanSocket, instances of AmericanSocket provide IAmericanSocket. This means that when you call IAmericanSocket with an argument of an AmericanSocket instance, you just get the instance back.
However, when the argument does not provide the interface already, the interface then searches for adapters which can convert something that the argument does provide to the target interface. ("Searches for adapters" is a huge oversimplification, but Twisted's registerAdapter exists specifically to allow for this type of simplification.)
So when IAmericanSocket is called with an instance of a UKSocket, it finds a registered adapter from instances of UKSocket. The adapter itself is a 1-argument callable that takes an argument of the type being adapted "from" (UKSocket) and returns a value of the type being adapted "to" (provider of IAmericanSocket). AdaptToAmericanSocket is a class, but classes are themselves callable, and since its constructor takes a UKSocket, it fits the contract of thing-that-takes-1-argument-of-type-UKSocket-and-returns-an-IAmericanSocket.
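A short usage sketch (assuming the classes and the registerAdapter call from the question):

am = AmericanSocket()
uk = UKSocket()

IAmericanSocket(am) is am        # True: am already provides IAmericanSocket
adapted = IAmericanSocket(uk)    # the registered adapter wraps uk
adapted.voltage()                # 120, i.e. 240 / 2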
The existence of another class would not make a difference, unless it were registered as an adapter. If you register two adapters which might both be suitable, their interactions are complicated, but since they both do the job, in theory you shouldn't care which one gets used.

python: How do I dynamically create bound methods from user supplied source code?

I would like to construct a class in python that supports dynamic updating of methods from user supplied source code.
Instances of class Agent have a method go. At the time an instance is constructed, its .go() method does nothing. For example, if we do a=Agent(), and then a.go() we should get a NotImplementedError or something like that. The user then should be able to interactively define a.go() by supplying source code. A simple source code example would be
mySourceString = "print('I learned how to go!')"
which would be injected into a like this
a.update(mySourceString)
Further invocations of a.go() would then result in "I learned how to go!" being printed to the screen.
I have partially figured out how to do this with the following code:
import types

class Error(Exception):
    """Base class for exceptions in this module."""
    pass

class NotImplementedError(Error):
    pass

class Agent(object):
    def go(self):
        raise NotImplementedError()

    def update(self, codeString):
        # Indent each line of user supplied code
        codeString = codeString.replace('\n', '\n ')
        # Turn code into a function called func
        exec "def func(self):\n" + ' ' + codeString
        # Make func a bound method on this instance
        self.go = types.MethodType(func, self)
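The intended usage, as described above, would then look like this (a sketch assuming update() works as written):

a = Agent()
# a.go() here would raise NotImplementedError
a.update("print('I learned how to go!')")
a.go()   # prints: I learned how to go!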
QUESTIONS
Is this implementation sensible?
Will this implementation incur unexpected scope issues?
Is there an obvious way to sandbox the user supplied code to prevent it from touching external objects? I can think of ways to do this by supplying sets of allowed external objects, but this seems not pythonic.
Possibly useful SO posts
What's the difference between eval, exec, and compile in Python?
Adding a Method to an Existing Object
(I am working in python 2.6)

Python - how do I force the use of a factory method to instantiate an object?

I have a set of related classes that all inherit from one base class. I would like to use a factory method to instantiate objects for these classes. I want to do this because then I can store the objects in a dictionary keyed by the class name before returning the object to the caller. Then if there is a request for an object of a particular class, I can check to see whether one already exists in my dictionary. If not, I'll instantiate it and add it to the dictionary. If so, then I'll return the existing object from the dictionary. This will essentially turn all the classes in my module into singletons.
I want to do this because the base class that they all inherit from does some automatic wrapping of the functions in the subclasses, and I don't want the functions to get wrapped more than once, which is what happens currently if two objects of the same class are created.
The only way I can think of doing this is to check the stacktrace in the __init__() method of the base class, which will always be called, and to throw an exception if the stacktrace does not show that the request to make the object is coming from the factory function.
Is this a good idea?
Edit: Here is the source code for my base class. I've been told that I need to figure out metaclasses to accomplish this more elegantly, but this is what I have for now. All Page objects use the same Selenium Webdriver instance, which is in the driver module imported at the top. This driver is very expensive to initialize -- it is initialized the first time a LoginPage is created. After it is initialized the initialize() method will return the existing driver instead of creating a new one. The idea is that the user must begin by creating a LoginPage. There will eventually be dozens of Page classes defined and they will be used by unit testing code to verify that the behavior of a website is correct.
from driver import get_driver, urlpath, initialize
from settings import urlpaths

class DriverPageMismatchException(Exception):
    pass

class URLVerifyingPage(object):
    # We add logic in __init__() to check the expected urlpath for the page
    # against the urlpath that the driver is showing - we only want the page's
    # methods to be invokable if the driver is actually at the appropriate page.
    # If the driver shows a different urlpath than the page is supposed to
    # have, the method should throw a DriverPageMismatchException.
    def __init__(self):
        self.driver = get_driver()
        self._adjust_methods(self.__class__)

    def _adjust_methods(self, cls):
        for attr, val in cls.__dict__.iteritems():
            if callable(val) and not attr.startswith("_"):
                print "adjusting:" + str(attr) + " - " + str(val)
                setattr(
                    cls,
                    attr,
                    self._add_wrapper_to_confirm_page_matches_driver(val)
                )
        for base in cls.__bases__:
            if base.__name__ == 'URLVerifyingPage':
                break
            self._adjust_methods(base)

    def _add_wrapper_to_confirm_page_matches_driver(self, page_method):
        def _wrapper(self, *args, **kwargs):
            if urlpath() != urlpaths[self.__class__.__name__]:
                raise DriverPageMismatchException(
                    "path is '" + urlpath() +
                    "' but '" + urlpaths[self.__class__.__name__] + "' expected " +
                    "for " + self.__class__.__name__
                )
            return page_method(self, *args, **kwargs)
        return _wrapper

class LoginPage(URLVerifyingPage):
    def __init__(self, username=username, password=password, baseurl="http://example.com/"):
        self.username = username
        self.password = password
        self.driver = initialize(baseurl)
        super(LoginPage, self).__init__()

    def login(self):
        driver.find_element_by_id("username").clear()
        driver.find_element_by_id("username").send_keys(self.username)
        driver.find_element_by_id("password").clear()
        driver.find_element_by_id("password").send_keys(self.password)
        driver.find_element_by_id("login_button").click()
        return HomePage()

class HomePage(URLVerifyingPage):
    def some_method(self):
        ...
        return SomePage()

    def many_more_methods(self):
        ...
        return ManyMorePages()
It's no big deal if a page gets instantiated a handful of times -- the methods will just get wrapped a handful of times and a handful of unnecessary checks will take place, but everything will still work. But it would be bad if a page was instantiated dozens or hundreds or tens of thousands of times. I could just put a flag in the class definition for each page and check to see if the methods have already been wrapped, but I like the idea of keeping the class definitions pure and clean and shoving all the hocus-pocus into a deep corner of my system where no one can see it and it just works.
In Python, it's almost never worth trying to "force" anything. Whatever you come up with, someone can get around it by monkeypatching your class, copying and editing the source, fooling around with bytecode, etc.
So, just write your factory, and document that as the right way to get an instance of your class, and expect anyone who writes code using your classes to understand TOOWTDI, and not violate it unless she really knows what she's doing and is willing to figure out and deal with the consequences.
If you're just trying to prevent accidents, rather than intentional "misuse", that's a different story. In fact, it's just standard design-by-contract: check the invariant. Of course at this point, SillyBaseClass is already screwed up, and it's too late to repair it, and all you can do is assert, raise, log, or whatever else is appropriate. But that's what you want: it's a logic error in the application, and the only thing to do is get the programmer to fix it, so assert is probably exactly what you want.
So:
class SillyBaseClass:
    singletons = {}

class Foo(SillyBaseClass):
    def __init__(self):
        assert self.__class__ not in SillyBaseClass.singletons

def get_foo():
    if Foo not in SillyBaseClass.singletons:
        SillyBaseClass.singletons[Foo] = Foo()
    return SillyBaseClass.singletons[Foo]
If you really do want to stop things from getting this far, you can check the invariant earlier, in the __new__ method, but unless "SillyBaseClass got screwed up" is equivalent to "launch the nukes", why bother?
It sounds like you want to provide a __new__ implementation, something like:
class MySingletonBase(object):
    instance_cache = {}

    def __new__(cls, arg1, arg2):
        if cls in MySingletonBase.instance_cache:
            return MySingletonBase.instance_cache[cls]
        self = super(MySingletonBase, cls).__new__(cls)
        MySingletonBase.instance_cache[cls] = self
        return self
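For instance, a small sketch of how the __new__ cache above behaves (MyPage is a hypothetical subclass; the two constructor arguments are unused placeholders): every subclass ends up sharing a single cached instance.

class MyPage(MySingletonBase):
    pass

first = MyPage('http://example.com/', 'credentials')
second = MyPage('http://example.com/', 'credentials')
assert first is second   # the second call returns the cached instance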
Rather than adding complex code to catch mistakes at runtime, I'd first try to use convention to guide users of your module to do the right thing on their own.
Give your classes "private" names (prefixed by an underscore), give them names that suggest they shouldn't be instantiated (e.g. _Internal...), and make your factory function "public".
That is, something like this:
class _InternalSubClassOne(_BaseClass):
    ...

class _InternalSubClassTwo(_BaseClass):
    ...

# An example factory function.
def new_object(arg):
    return _InternalSubClassOne() if arg == 'one' else _InternalSubClassTwo()
I'd also add docstrings or comments to each class, like "Don't instantiate this class by hand, use the factory method new_object."
You can also just nest classes in factory method, as described here:
https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html#preventing-direct-creation
Working example from mentioned source:
# Factory/shapefact1/NestedShapeFactory.py
import random

class Shape(object):
    types = []

def factory(type):
    class Circle(Shape):
        def draw(self): print("Circle.draw")
        def erase(self): print("Circle.erase")

    class Square(Shape):
        def draw(self): print("Square.draw")
        def erase(self): print("Square.erase")

    if type == "Circle": return Circle()
    if type == "Square": return Square()
    assert 0, "Bad shape creation: " + type

def shapeNameGen(n):
    for i in range(n):
        yield factory(random.choice(["Circle", "Square"]))

# Circle()  # Not defined
for shape in shapeNameGen(7):
    shape.draw()
    shape.erase()
I'm not a fan of this solution; I just want to add it as one more option.
