Wondering whether I should just bail on using properties in python

I have been trying to use properties instead of specific setters and getters in my app. They seem more pythonic and generally make my code more readable.
More readable except for one issue: Typos.
Consider the following simple example (note: my properties actually do some processing, even though the examples here just set or return a simple variable):
class GotNoClass(object):
    def __init__(self):
        object.__init__(self)
        self.__a = None

    def __set_a(self, a):
        self.__a = a

    def __get_a(self):
        return self.__a

    paramName = property(__get_a, __set_a)
if __name__ == "__main__":
    classy = GotNoClass()
    classy.paramName = 100
    print classy.paramName
    classy.paranName = 200
    print classy.paramName
    # oops! Typo above! As seen by this line:
    print classy.paranName
The output, as anyone who reads a little closely will see, is:
100
100
200
Oops. That shouldn't have happened, except that I made a typo: I wrote paranName (two n's) instead of paramName.
This is easy to debug in this simple example, but it has been hurting me in my larger project. Since Python happily creates a new attribute when I accidentally mistype a property name, I get subtle errors in my code, errors that I find hard to track down at times. Even worse, I once made the same typo twice (once while setting and later once while getting), so my code appeared to be working; much later, when a different branch of code finally accessed this property (correctly), I got the wrong value, and it took me several days to realize that my results were just a bit off.
Now that I know that this is an issue, I am spending more time closely reading my code, but ideally I would have a way to catch this situation automatically - if I miss just one I can introduce an error that does not show up until a fair bit of time has passed...
So I am wondering, should I just switch to using good old setters and getters? Or is there some neat way to avoid this situation? Do people just rely on themselves to catch these errors manually? Alas I am not a professional programmer, just someone trying to get some stuff done here at work and I don't really know the best way to approach this.
Thanks.
P.S.
I understand that this is also one of the benefits of Python and I am not complaining about that. Just wondering whether I would be better off using explicit setters and getters.

Have you tried a static analysis tool? Here is a great thread about them.

Depending on how your code works, you could try using __slots__. You'll then get an AttributeError exception when you try to assign to a name that isn't listed in __slots__, which will make such typos more obvious.
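A minimal sketch of that approach (the class and attribute names are made up, echoing the question's example): with __slots__ declared, assigning to a misspelled name fails loudly instead of silently creating a new attribute.

```python
class GotClass(object):
    # Instances may only ever have the attributes listed here.
    __slots__ = ('_a',)

    def _get_a(self):
        return self._a

    def _set_a(self, a):
        self._a = a

    paramName = property(_get_a, _set_a)


classy = GotClass()
classy.paramName = 100      # goes through the property: fine
try:
    classy.paranName = 200  # typo: raises AttributeError immediately
except AttributeError as e:
    print(e)
```

Note that __slots__ also prevents adding ad-hoc attributes you might actually want later, so it is a trade-off rather than a free fix.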

There are times when compile-time checking really saves time. You seem to have identified one such case. By accident rather than careful choice I use getters and setters, and am happy ;-)

Related

Mutable Default Arguments - (Why) is my code dangerous?

My code triggers a warning in pylint:
def getInsertDefault(collection=['key', 'value'], usefile='defaultMode.xml'):
    return doInsert(collection, usefile, True)
The warning is pretty clear: it's the Mutable Default Argument warning, and I get the point; in several instances it can give a wrong impression of what's happening. There are several posts about this on SO already, but it doesn't feel like this particular case is covered.
Most questions and examples deal with empty lists which are weak-referenced and can cause an error.
I'm also aware it's better practice to change the code to getInsertDefault(collection=None, ...), but in this method I don't intend to do anything with the list except read it. For this kind of default initialization, (why) is my code dangerous, or could it result in a pitfall?
--EDIT--
To the point: "Why is the empty dictionary a dangerous default value in Python?" would answer the question.
Kind of: I am aware my code is against the convention and could result in a pitfall, but in this very specific case: am I safe?
I found the suggestion in the comments useful: use collection=('key', 'value') instead, as it's conventional and safe. Still, out of pure interest: could my previous attempt create some kind of major problem?
Assuming that doInsert() (and whatever code doInsert is calling) only ever reads collection, there is indeed no immediate issue, just a time bomb.
As soon as any part of the code seeing this list starts mutating it, your code will break in the most unexpected way, and you may have a hard time debugging the issue. Imagine if what changes it is a third-party library function tens of stack frames away; and that's the best case, where the issue and its root cause are still in the same direct branch of the call stack. The list might instead be stored as an instance attribute somewhere and mutated by some unrelated call, and then you're in for some fun.
Now the odds that this ever happens are rather low, but given the potential for subtle bugs (it might just produce incorrect results once in a while, not necessarily crash the program) that are hard to track down, you should think twice before assuming that this is really "safe".
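The time bomb is easy to reproduce in a few lines (the function and variable names below are invented for illustration): the default list is created once, at definition time, so any caller that mutates it poisons every later call.

```python
def get_insert_default(collection=['key', 'value']):
    # The default list is evaluated once, when the def statement runs,
    # and shared by every call that omits the argument.
    return collection


first = get_insert_default()
first.append('oops')           # some distant caller mutates the result...
second = get_insert_default()  # ...and the "default" is no longer default
print(second)  # ['key', 'value', 'oops']
```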

Most pythonic way to call dependent methods

I have a class with a few methods, each of which sets some internal state and usually requires some other method to be called first to set the stage.
Typical invocation goes like this:
c = MyMysteryClass()
c.connectToServer()
c.downloadData()
c.computeResults()
In some cases only connectToServer() and downloadData() will be called (or even just connectToServer() alone).
The question is: how should those methods behave when they are called in wrong order (or, in other words, when the internal state is not yet ready for their task)?
I see two solutions:
They should throw an exception
They should call correct previous method internally
Currently I'm using second approach, as it allows me to write less code (I can just write c.computeResults() and know that two other methods will be called if necessary). Plus, when I call them multiple times, I don't have to keep track of what was already called and so I avoid multiple reconnecting or downloading.
On the other hand, first approach seems more predictable from the caller perspective, and possibly less error prone.
And of course, there is the possibility of a hybrid solution: throw an exception, and add another layer of methods that detect the internal state and call the previous methods properly. But that seems to be a bit of overkill.
Your suggestions?
They should throw an exception. As said in the Zen of Python: Explicit is better than implicit. And, for that matter, Errors should never pass silently. Unless explicitly silenced. If the methods are called out of order that's a programmer's mistake, and you shouldn't try to fix that by guessing what they mean. You might accidentally cover up an oversight in a way that looks like it works, but is not actually an accurate reflection of the programmer's intent. (That programmer may be future you.)
If these methods are usually called immediately one after another, you could consider collating them by adding a new method that simply calls them all in a row. That way you can use that method and not have to worry about getting it wrong.
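A minimal sketch combining both suggestions (the method names follow the question; the internals are invented stand-ins): each step raises if its prerequisite has not run, and one convenience method encodes the correct order.

```python
class MyMysteryClass(object):
    def __init__(self):
        self.connected = False
        self.data = None

    def connectToServer(self):
        self.connected = True  # stand-in for real connection logic

    def downloadData(self):
        if not self.connected:
            raise RuntimeError("call connectToServer() before downloadData()")
        self.data = [1, 2, 3]  # stand-in for a real download

    def computeResults(self):
        if self.data is None:
            raise RuntimeError("call downloadData() before computeResults()")
        return sum(self.data)

    def run(self):
        # Convenience wrapper: the one place the ordering is written down.
        self.connectToServer()
        self.downloadData()
        return self.computeResults()
```

Callers who only need the first two steps can still call them individually; everyone else calls run() and cannot get the order wrong.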
Note that classes that handle internal state in this way are sometimes called for but are often not, in fact, necessary. Depending on your use case and the needs of the rest of your application, you may be better off doing this with functions and actually passing connection objects, etc. from one method to another, rather than using a class to store internal state. See for instance Stop Writing Classes. This is just something to consider and not an imperative; plenty of reasonable people disagree with the theory behind Stop Writing Classes.
You should raise exceptions. It is good programming practice to raise exceptions, because they make your code easier to understand, for the following reasons:
What you describe fits the literal description of an "exception": it is an exception to normal proceedings.
If you build in some kind of workaround, you will likely end up with "spaghetti code" = BAD.
When you, or someone else, comes back and reads this code later, it will be difficult to understand without the hint that executing these methods out of order is exceptional.
Here's a good source:
http://jeffknupp.com/blog/2013/02/06/write-cleaner-python-use-exceptions/
As my CS professor always said "Good programmers can write code that computers can read, but great programmers write code that humans and computers can read".
I hope this helps.
If it's possible, you should make the dependencies explicit.
For your example:
c = MyMysteryClass()
connection = c.connectToServer()
data = c.downloadData(connection)
results = c.computeResults(data)
This way, even if you don't know how the library works, there's only one order the methods could be called in.

python isinstance vs hasattr vs try/except: What is better?

I am trying to figure out the tradeoffs between different approaches to determining whether you can perform an action do_stuff() on an object obj. As I understand it, there are three ways:
# Way 1
if isinstance(obj, Foo):
    obj.do_stuff()

# Way 2
if hasattr(obj, 'do_stuff'):
    obj.do_stuff()

# Way 3
try:
    obj.do_stuff()
except:
    print 'Do something else'
Which is the preferred method (and why)?
I believe that the last method is generally preferred by Python coders because of a motto taught in the Python community: "Easier to ask for forgiveness than permission" (EAFP).
In a nutshell, the motto means to avoid checking if you can do something before you do it. Instead, just run the operation. If it fails, handle it appropriately.
Also, the third method has the added advantage of making it clear that the operation should work.
With that said, you really should avoid using a bare except like that. Doing so will capture any/all exceptions, even the unrelated ones. Instead, it is best to capture exceptions specifically.
Here, you will want to capture for an AttributeError:
try:
    obj.do_stuff()  # try to invoke do_stuff
except AttributeError:
    print 'Do something else'  # if unsuccessful, do something else
Checking with isinstance runs counter to the Python convention of using duck typing.
hasattr works fine, but is Look Before you Leap instead of the more Pythonic EAFP.
Your implementation of way 3 is dangerous, since it catches any and all errors, including those raised inside the do_stuff method itself. You could go with the more precise:
try:
    _ds = obj.do_stuff
except AttributeError:
    print('Do something else')
else:
    _ds()
But in this case, I'd prefer way 2 despite the slight overhead - it's just way more readable.
The correct answer is 'neither'
hasattr delivers the functionality; however, it is possibly the worst of all options.
We use the object-oriented nature of Python because it works. OO analysis is never accurate and often confuses, but we use class hierarchies because we know they help people do better work faster. People grasp objects, and a good object model helps coders change things more quickly and with fewer errors. The right code ends up clustered in the right places. The objects:
Can just be used without considering which implementation is present
Make it clear what needs to be changed and where
Isolate changes to some functionality from changes to some other functionality – you can fix X without fearing you will break Y
hasattr vs isinstance
Having to use isinstance or hasattr at all indicates the object model is broken or we are using it incorrectly. The right thing to do is to fix the object model or change how we are using it.
These two constructs have the same effect, and in the imperative 'I need the code to do this' sense they are equivalent. Structurally, there is a huge difference. On meeting this method for the first time (or after some months of doing other things), isinstance conveys a wealth of information about what is actually going on and what else is possible. hasattr does not 'tell' you anything.
A long history of development led us away from FORTRAN and code with loads of 'who am I' switches. We chose to use objects because we know they help make the code easier to work with. By choosing hasattr we deliver the functionality, but nothing is fixed; the code is more broken than it was before we started. When adding or changing this functionality in the future, we will have to deal with code that is unevenly grouped and has at least two organising principles: some of it is where it 'should be' and the rest is randomly scattered in other places. There is nothing to make it cohere. This is not one bug but a minefield of potential mistakes scattered over any execution path that passes through your hasattr.
So if there is any choice, the order is:
Use the object model, or fix it, or at least work out what is wrong with it and how to fix it
Use isinstance
Don’t use hasattr
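"Fix the object model" usually means moving the varying behaviour into the classes themselves so that callers never need to probe. A sketch with invented classes:

```python
class Animal(object):
    def make_sound(self):
        return ""  # sensible default: most things are silent


class Duck(Animal):
    def make_sound(self):
        return "quack"


def describe(animal):
    # No isinstance, no hasattr: the object model answers for itself.
    return animal.make_sound()


print(describe(Duck()))  # quack
```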

Very basic OO: changing attribute of a method

I'm certain this has been asked a million times, but it's difficult to search for something when you don't know the correct terminology :(
I'm attempting (again... I've never understood OO, since I got taught it very badly 8 years ago, and avoid it as much as possible, to the horror of every other programmer I know - my mind doesn't seem wired to get it, at all) to teach myself OO and PyQt simultaneously.
I don't even know if this is logically possible, but I've got a PyQt action which is referred to by two different things; one of the arguments of the action is an icon. When the action is called by one of those things, I'd like to change the icon; code snippet:
self.menu()
self.toolbar()
self.actions()

def menu(self):
    self.fileMenu = QtGui.QMenu("&File", self)
    self.fileMenu.addAction(self.exitAct)

def toolbar(self):
    self.toolbar = self.addToolBar("Exit")
    self.toolbar.addAction(self.exitAct)

def actions(self):
    self.exitIcon = QtGui.QIcon('exit.png')
    self.exitAct = QtGui.QAction(self.exitIcon, "&Exit", self, triggered=self.close)
In the toolbar, I'd like a different icon (exitdoor.png). The documentation for QIcon has an addFile method, so I tried:
self.toolbar.addAction(self.exitAct)
self.exitIcon.addFile("exitdoor.png")
but this didn't work, with the error ("'QToolBar' object has no attribute 'addFile'"), and
self.exitAct.addFile("exitdoor.png")
with the error 'QAction' object has no attribute 'addFile' (I do understand why this doesn't work and what the error means).
What's my stupid mistake?! (Apart from the mistake of putting myself through the pain of continuing to try and learn OO...)
'QToolBar' object has no attribute 'addFile'
Hmm, since you called addFile on self.exitIcon, it looks like you have the wrong kind of object in the self.exitIcon variable. It seems you want it to be a QtGui.QIcon type, but instead it's a QToolBar type.
You should look at where you are making assignments to self.exitIcon.
In this case, trying to learn object-oriented programming through Python is not the easiest way. Python is a fine object-oriented language, but it does not catch your errors as immediately as other languages. When you get an error like the above, the mistake was not in that line of code, but rather in a line of code that ran a while ago. Other languages would catch the mistake even before you run your program, and point you directly at the line you need to fix. It might be worthwhile for you to practice a little basic Java to get trained in OOP by a strict teacher before you go off into the wilds of Python.
I'm not familiar with PyQt and only a beginner with Python, but I have quite a bit of experience with other OO languages (primarily C++ and Java). It looks like you need to determine what kind of object is stored in the fields exitIcon and exitAct. From there, you need to find out what attributes these objects have and find the one which you can change to get the behavior you want.
More thoughts:
Somewhere between the execution of self.exitIcon = QtGui.QIcon('exit.png') and self.exitIcon.addFile("exitdoor.png"), self.exitIcon is changed to refer to a QToolBar. I suggest that you start sprinkling your code with print statements so you can trace the execution of your code and find out how this variable was changed.
After talking to some much cleverer people than me, the short answer is that what I was trying to do is seemingly impossible. Once an action has been defined, it is not possible to change any attribute of it.
At the moment, my knowledge of python isn't great enough to understand exactly why this is (or how I could have realised this from the docs), but it seems that when an action is defined, it is effectively a blackbox stored in memory, and other objects can only use, and not modify, the blackbox.
Further comments still welcome!

Python Unittest Modularity vs Readability

I have a Python unittest module in which several tests check the same type of object. The basic outline in one test class is:
class TestClass(unittest.TestCase):
    def setUp(self):
        ...

    def checkObjects(self, obj):
        for i in [...values...]:
            self.assertEqual(starttags(i, obj), endtags(i, obj))

    def testOne(self):
        # Get object one.
        self.checkObjects(objone)

    def testAnother(self):
        # Access another object.
        self.checkObjects(another)

    # ... various tests for similar objects.
Although it's modular, I noticed that any failure gives an error like AssertionError: number != anothernumber, along with the line of code generating the error, self.assertEqual(starttags(i,obj), endtags(i,obj)). If I had listed the tests out instead of placing them in a for loop, I would have something like:
self.assertEqual(starttags(value1,obj), endtags(value1,obj))
self.assertEqual(starttags(value2,obj), endtags(value2,obj))
This shows exactly which case causes the error, but it is copy-paste code, which I thought was generally not recommended. I noticed the problem recently when a contributor reworked a unit test to be cleaner; unfortunately, it now gives little debugging info on assertion failures. So, what is the best practice in these cases? Something like a list of tuples fed into a for loop with assertEqual is "cleaner", but copy-pasting with different values on different lines gives useful stack traces.
If by cleaner you mean less code, then this kind of cleaner code is not always more readable code. In fact, it usually is less readable (especially when you come back to it). You can always go for fancy refactorings, but you need to know when to stop. In the long run, it's always better to use more obvious, simple code than to squeeze out one less line of code for artificial gains, and not just in unit testing.
Unit tests go with their own rules. For example they usually allow different naming conventions than what your regular code standards say, you hardly ever document it - they're kind of special part of your codebase. Also, having repeated code is not that uncommon. Actually, it's rather typical to have many similar looking, small tests.
Design your tests (code) with simplicity in mind
And at the moment, your tests are confusing even at the stage of writing them. Imagine coming back to that code 3 months from now. Imagine one of those tests breaking as a result of somebody else making a change in some other place. It won't get any better.
Design your tests in such a way that when one of them breaks, you immediately know why it did so and where. Not only that: design them in such a way that you can tell what they are doing in the blink of an eye. Having for loops, ifs, and basically any other kind of control flow mechanism (or too-extensive refactoring) in test code usually results in one question springing to mind: what are we doing here? This is the kind of question you don't want to find yourself asking.
To summarize this longish post with words of people smarter than myself:
Any fool can write code that a computer can understand. Good programmers write code that humans can understand.
-Martin Fowler et al, Refactoring: Improving the Design of Existing Code, 1999
Do yourself a favor and stick to that rule.
Use nosetests, it makes your tests so much cleaner:
#!/usr/bin/python
def test_one():
for i in [...]:
assert xxx(i) == yyy
def test_two():
...
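On Python 3.4 and later (newer than the question, which uses Python 2 syntax), there is a middle ground: unittest's subTest keeps the loop but reports exactly which value failed. A sketch, with starttags/endtags stood in by trivial placeholder functions:

```python
import unittest


def starttags(i, obj):  # placeholder for the real helper
    return i * obj


def endtags(i, obj):  # placeholder for the real helper
    return i * obj


class TestClass(unittest.TestCase):
    def testOne(self):
        obj = 2
        for i in [1, 2, 3]:
            # On failure, the report includes the subtest params, e.g. (i=2),
            # so the loop no longer hides which case broke.
            with self.subTest(i=i):
                self.assertEqual(starttags(i, obj), endtags(i, obj))
```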
