Duck typing: how to avoid name collisions? - python

I think I understand the idea of duck typing, and I would like to use it more often in my code. However, I am concerned about one potential problem: name collisions.
Suppose I want an object to do something. I know the appropriate method, so I simply call it and see what happens. In general, there are three possible outcomes:
The method is not found and an AttributeError is raised. This indicates that the object isn't what I think it is. That's fine, since with duck typing I'm either catching such an exception, or I am willing to let the outer scope deal with it (or let the program terminate).
The method is found, it does precisely what I want, and everything is great.
The method is found, but it's not the method that I want; it's a same-name method from an entirely unrelated class. Execution continues until either an inconsistent state is detected later or, in the worst case, the program silently produces incorrect output.
Now, I can see how good-quality names can reduce the chances of outcome #3. But projects are combined, code is reused, libraries are swapped, and it's quite possible that at some point two methods have the same name and are completely unrelated (i.e., they are not intended to substitute for each other polymorphically).
One solution I was thinking about is to add a registry of method names. Each registry record would contain:
method name (unique; i.e., only one record per name)
its generalized description (i.e., applicable to any instance it might be called on)
the set of classes which it is intended to be used in
If a method is added to a new class, the class needs to be added to the registry (by hand). At that time, the programmer would presumably notice if the method is not consistent with the meaning already attached to it, and if necessary, use another name.
Whenever a method is called, the program would automatically verify that the name is in the registry and the class of the instance is one of the classes in the record. If not, an exception would be raised.
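A rough sketch of what I have in mind (all names here are hypothetical):

# Hypothetical registry: method name -> (description, names of allowed classes)
METHOD_REGISTRY = {
    "draw": ("Render the object for display", {"Canvas", "Sprite"}),
}

def registered(method):
    """Verify, on every call, that the method's name is registered
    for the class of the receiving instance."""
    def wrapper(self, *args, **kwargs):
        name = method.__name__
        if name not in METHOD_REGISTRY:
            raise LookupError("method %r is not in the registry" % name)
        _description, allowed = METHOD_REGISTRY[name]
        if type(self).__name__ not in allowed:
            raise TypeError("%s is not registered for %r"
                            % (type(self).__name__, name))
        return method(self, *args, **kwargs)
    return wrapper

class Sprite(object):
    @registered
    def draw(self):
        return "drawing a sprite"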
I understand this is a very heavy approach, but in some cases where precision is critical, I can see it might be useful. Has it been tried (in Python or other dynamically typed languages)? Are there any tools that do something similar? Are there any other approaches worth considering?
Note: I'm not referring to name clashes at the global level, where avoiding namespace pollution would be the right approach. I'm referring to clashes among method names; those are not affected by namespaces.

Well, if this is critical, you probably should not be using duck typing...
In practice, programs are finite systems, and the range of possible types passed into any particular routine does not cause the issues you are worrying about (most often there's only ever one type passed in).
But if you want to address this issue anyway, Python provides ABCs (abstract base classes). These allow you to associate a "type" with any set of methods, and so they would work something like the registry you suggest (you can either inherit from an ABC in the normal way, or simply "register" with it).
You can then check for these types manually or automate the checking with decorators from pytyp.
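For example, using the standard abc module (a sketch; Quacker and Duck are made-up names):

import abc

class Quacker(abc.ABC):
    @abc.abstractmethod
    def quack(self):
        ...

class Duck:
    def quack(self):
        return "quack"

# Declare Duck a "virtual subclass" without inheritance.
Quacker.register(Duck)

print(isinstance(Duck(), Quacker))  # True
# Note: register() only records the association; it never checks
# that Duck actually implements quack().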
But, despite being the author of pytyp, and finding these questions interesting, I personally do not find such an approach useful. In practice, what you are worrying about simply does not happen (if you want to worry about something, focus on the lack of documentation from types when using higher order functions!).
PS note - ABCs are purely metadata. They do not enforce anything. Also, checking with pytyp decorators is horrendously inefficient - you really want to do this only where it is critical.

If you are following good programming practice, or, let me rather say, if your code is Pythonic, then chances are you will seldom face such issues. Refer to the FAQ What are the “best practices” for using import in a module?.
It is generally not advised to clutter the namespace, and the only time there could be a conflict is if you try to reuse Python reserved names, standard library names, or module names. If you encounter such a conflict, there is a serious issue with the code. For example:
Why would someone name a variable list or define a function called len?
Why would someone name a variable difflib when s/he is intending to import it in the current namespace?

To address your problem, look at abstract base classes. They're a Pythonic way to deal with this issue; you can define common behavior in a base class, and even define ways to determine whether a particular class counts as a "virtual subclass" of the abstract base class. This somewhat mimics the registry you're describing, without requiring that all classes know about the registry beforehand.
In practice, though, this issue doesn't crop up as often as you might expect. Objects with an __iter__ method or a __str__ method are simply broken if the methods don't work the way you expect. Likewise, if you say an argument to your function requires a .callback() method defined on it, people are going to do the right thing.
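For instance, a sketch of that "virtual subclass" check using __subclasshook__, following the .callback() example above (SupportsCallback and Worker are made-up names):

import abc

class SupportsCallback(abc.ABC):
    @abc.abstractmethod
    def callback(self):
        ...

    @classmethod
    def __subclasshook__(cls, subclass):
        # Any class providing a callable .callback qualifies,
        # with no explicit registration required.
        if cls is SupportsCallback:
            return callable(getattr(subclass, "callback", None))
        return NotImplemented

class Worker:
    def callback(self):
        return "done"

print(issubclass(Worker, SupportsCallback))   # True
print(isinstance(Worker(), SupportsCallback)) # True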

If you are worried that the lack of static type checking will let some bugs get through, the answer isn't to bolt on type checking, it is to write tests.
In the presence of unit tests, a type checking system becomes largely redundant as a means of catching bugs. While it is true that a type checking system can catch some bugs, it will only catch a small subset of potential bugs. To catch the rest you'll need tests. Those unit tests will necessarily catch most of the type errors that a type checking system would have caught, as well as bugs that the type checking system cannot catch.
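As a tiny illustration (total_length is a made-up function), an ordinary behavioural test exercises the same facts a type checker would verify, and more:

import unittest

def total_length(strings):
    return sum(len(s) for s in strings)

class TestTotalLength(unittest.TestCase):
    def test_sums_lengths(self):
        # If total_length mishandled its argument types, the call
        # itself would raise; the assertion also checks behaviour.
        self.assertEqual(total_length(["ab", "c"]), 3)

if __name__ == "__main__":
    unittest.main()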

Related

(Python) Monkeypatch __new__ for objects of type int, float, str, list, dict, set, and module in python

I want to implicitly extend the int, float, str, list, dict, set, and module classes with custom built substitutions (extensions).
When I say 'implicitly', what I mean is that when I declare 'a = 1', an object of the type Custom_Int (as an example) is produced, as opposed to a normal integer object.
Now, I understand and respect the reasons not to do this. Firstly- messing with built-ins is like messing with the laws of physics. No good can come from it. That said- I do understand the gravity of what I'm trying to do and what can happen if I do it wrong.
Second- I understand that modifying a base class will affect not just the current runtime but all running Python processes. I feel that by overriding the __new__ method of these base classes, such that it returns Custom_Object_Whatever if and ONLY IF certain environmental factors are true, other runtimes will remain largely unaffected.
So, getting back to the issue at hand- how can I override the __new__ method of these various types?
Python's forbiddenfruit package seems promising. I haven't had a chance to really investigate it, though, and if someone who understands it could summarize what it does, that would save me a lot of time.
Beyond that, I've observed something strange.
Every answer to monkeypatching that doesn't eventually circle back to forbiddenfruit or how forbiddenfruit works has to do with modifying what I will refer to as the 'absolute_dictionary' of the class. Because everything in Python is essentially a mapping (or dictionary) of functions/values to names, if you change the name __new__ within the right mapping, you change the nature of the object.
The problem is that in every near-success I've had, calling str('a').__new__(*args) directly works fine (in some cases), but the assignment varOne = 'a' does not seem to actually call str.__new__().
My guess: this has something to do with either Python's parsing of a program prior to launch, or else the caching of the various classes during/post launch. Or maybe I'm totally off the mark. Either Python pre-reads and applies some regex to its modules prior to launch, or else the machine code, when it attempts to implicitly create an object, reaches for something other than the class located in moduleObject.builtins[__classname__].
Any ideas?
If you want to do this, your best option is probably to modify the CPython source code and build your own custom Python build with your extensions baked into the actual built-in types. The result will integrate a lot better with all the low-level mechanisms you don't yet understand, and you'll learn a lot in the process.
Right now, you're getting blocked by a lot of factors. Here are the ones that have come to my mind.
The first is that most ways of creating built-in objects don't go through a __new__ method at all. They go through C-level calls like PyLong_FromLong or PyList_New. These calls are hardwired to use the actual built-in types, allocating memory sized for the real built-in types, fetching the type object by the address of its statically-allocated C struct, and stuff like that. It's basically impossible to change any of this without building your own custom Python.
The second factor is that messing with __new__ isn't even enough to correctly affect things that theoretically should go through __new__, like int("5"). Python has reasons for stopping you from setting attributes on built-in classes, and two of those reasons are slots and the type attribute cache.
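You can observe that protection directly; the exact wording of the error varies between CPython versions:

try:
    int.__new__ = lambda cls, *args: 42
except TypeError as exc:
    # e.g. "cannot set '__new__' attribute of immutable type 'int'"
    print(exc)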
Slots are a public part of the C API that you'll probably learn about if you try to modify the CPython source code. They're function pointers in the C structs that make up type objects at C level, and most of them correspond to Python-level magic methods. For example, the __new__ method has a corresponding tp_new slot. Most C code accesses slots instead of methods, and there's code to ensure the slots and methods are in sync, but if you bypass Python's protections, that breaks and everything goes to heck.
The type attribute cache isn't a public part of anything even at C level. It's a cache that saves the results of type object attribute lookups, to make Python go faster. Its memory safety relies on all type object attribute modification going through type.__setattr__ (and all built-in type object attribute modification getting rejected by type.__setattr__), but if you bypass the protection, memory safety goes out the window and arbitrarily weird results can occur.
The third factor is that there's a bunch of caching for immutable objects. The small int cache, the interned string dict, constants getting saved in bytecode objects, compile-time constant folding... there's a lot. Objects aren't going to be created when you expect. (There's also stuff like, say, zip saving the last output tuple and reusing it if it sees you didn't keep a reference, for even more ways object creation will mess with your assumptions.)
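A quick illustration of one of these, the small-int cache (a CPython implementation detail, not a language guarantee):

# int(...) calls at runtime sidestep compile-time constant folding.
a, b = int("256"), int("256")
print(a is b)  # True: CPython caches small ints (roughly -5..256)
a, b = int("257"), int("257")
print(a is b)  # False (typically): two distinct objects are created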
There's more. Stuff like, what argument would int.__new__ even take if you tried to use int.__new__ to evaluate the expression 5? Stuff like all the low-level code that knows exactly how to work with the types it expects and will get very confused if it gets a MyCustomTuple with a completely different memory layout from a real tuple. Screwing with built-ins has a lot of issues.
Incidentally, one of the things you expected to be a problem is mostly not a problem. Screwing with one Python process's built-ins won't affect other Python processes' built-ins... unless those other processes are created by forking the first process, such as with multiprocessing in fork mode.

Python private method for public usage

I have a class A that needs to implement a method meth().
Now, I don't want this method to be called by the end-user of my package. Thus, I have to make this method private (i.e., _meth(). I know that it's not really private, but conventions matter.)
The problem, though, is that I have yet another class B in my package that has to call that method _meth(). I now get a warning saying that B tries to access a protected member of a class. Thus, I would have to make the method public, i.e., without the leading underscore. This contradicts my intentions.
What is the most pythonic way to solve this dilemma?
I know I can re-implement that method outside of A, but it will lead to code duplication and, as meth() uses private attributes of A, will lead to the same problem.
Inheriting from a single metaclass is not an option as those classes have entirely different purposes and that will be contributing towards a ghastly mess.
The fact that pylint/your editor/whatever external tool gives you a warning doesn't prevent code execution. I don't know about your editor but pylint warnings can be disabled on a case-by-case basis using special comments (nb: "case by case" meaning: "do not warn me for this line or block", not "totally disable this warning").
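For example, for pylint's protected-access warning (W0212), the per-line comment looks like this:

class B:
    def run(self, a):
        # Deliberate intra-package use of a protected method:
        return a._meth()  # pylint: disable=protected-access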
And it's perfectly ok for your own code to access protected attributes and methods in the same package - the "_protected" naming convention does not mean "None shall pass", just "are you sure you understand what you're doing and willing to take responsibility if you break something?". Since you're the author/maintainer of the package and those are intra-package accesses, you are obviously entitled to take this responsibility ;)
The "most pythonic way" would be to not care about private and protected, as these concepts do not exist in Python.
Everything is public. Adding an underscore to the name does not make it private; it just indicates that the method is for internal use by the class (not meant for usage by some end-user).
If you need to use the method from another class, it shows that you're not using classes and objects correctly, and you probably come from a different language like Java where classes are used to group methods together in some namespace.
Just move the function to the module level (outside the class), as you're not using the object (self) anyway.
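A sketch of that layout, reusing the question's A and B names (the attribute is made up):

# the same module that defines A
class A:
    def __init__(self, value):
        self._value = value

def compute(a):
    # Module-level helper: shares A's internals within the package,
    # without putting an underscore-prefixed method on A itself.
    return a._value * 2

class B:
    def run(self, a):
        return compute(a)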

Most pythonic way to call dependent methods

I have a class with a few methods - each one sets some internal state, and usually requires some other method to be called first to set the stage.
Typical invocation goes like this:
c = MyMysteryClass()
c.connectToServer()
c.downloadData()
c.computeResults()
In some cases only connectToServer() and downloadData() will be called (or even just connectToServer() alone).
The question is: how should those methods behave when they are called in wrong order (or, in other words, when the internal state is not yet ready for their task)?
I see two solutions:
They should throw an exception
They should call correct previous method internally
Currently I'm using the second approach, as it allows me to write less code (I can just write c.computeResults() and know that the two other methods will be called if necessary). Plus, when I call them multiple times, I don't have to keep track of what was already called, so I avoid reconnecting or downloading more than once.
On the other hand, the first approach seems more predictable from the caller's perspective, and possibly less error-prone.
And of course, there is a possibility for a hybrid solution: throw an exception, and add another layer of methods with internal state detection and proper calling of previous ones. But that seems to be a bit of an overkill.
Your suggestions?
They should throw an exception. As said in the Zen of Python: Explicit is better than implicit. And, for that matter, Errors should never pass silently. Unless explicitly silenced. If the methods are called out of order that's a programmer's mistake, and you shouldn't try to fix that by guessing what they mean. You might accidentally cover up an oversight in a way that looks like it works, but is not actually an accurate reflection of the programmer's intent. (That programmer may be future you.)
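A minimal sketch of the explicit version, following the method names from the question (the bodies are placeholders):

class MyMysteryClass:
    def __init__(self):
        self._connection = None
        self._data = None

    def connectToServer(self):
        self._connection = object()  # stand-in for a real connection

    def downloadData(self):
        if self._connection is None:
            raise RuntimeError("call connectToServer() before downloadData()")
        self._data = b"..."

    def computeResults(self):
        if self._data is None:
            raise RuntimeError("call downloadData() before computeResults()")
        return len(self._data)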
If these methods are usually called immediately one after another, you could consider collating them by adding a new method that simply calls them all in a row. That way you can use that method and not have to worry about getting it wrong.
Note that classes that handle internal state in this way are sometimes called for but are often not, in fact, necessary. Depending on your use case and the needs of the rest of your application, you may be better off doing this with functions and actually passing connection objects, etc. from one method to another, rather than using a class to store internal state. See for instance Stop Writing Classes. This is just something to consider and not an imperative; plenty of reasonable people disagree with the theory behind Stop Writing Classes.
You should raise exceptions. It is good programming practice to raise exceptions, to make your code easier to understand, for the following reasons:
What you describe fits the literal description of an "exception" -- it is an exception to normal proceedings.
If you build in some kind of workaround, you will likely end up with "spaghetti code" = BAD.
When you, or someone else, goes back and reads this code later, it will be difficult to understand without the hint that executing these methods out of order is exceptional.
Here's a good source:
http://jeffknupp.com/blog/2013/02/06/write-cleaner-python-use-exceptions/
As my CS professor always said "Good programmers can write code that computers can read, but great programmers write code that humans and computers can read".
I hope this helps.
If it's possible, you should make the dependencies explicit.
For your example:
c = MyMysteryClass()
connection = c.connectToServer()
data = c.downloadData(connection)
results = c.computeResults(data)
This way, even if you don't know how the library works, there's only one order the methods could be called in.

How does Eclipse's PyDev do code completion?

Does anyone know how pydev determines what to use for code completion? I'm trying to define a set of classes specifically to enable code completion. I've tried using __new__ to set __dict__ and also __slots__, but neither seems to get listed in pydev autocomplete.
I've got a set of enums I want to list in autocomplete, but I'd like to set them in a generator, not hardcode them all for each class.
So rather than
class TypeA(object):
    ValOk = 1
    ValSomethingSpecificToThisClassWentWrong = 4

    def __call__(self):
        return 42
I'd like to do something like
def TYPE_GEN(name, val, enums={}):
    def call(self):
        return val
    dct = {"__call__": call}
    # Note: additionally setting dct['__slots__'] = enums.keys() would
    # raise "ValueError: ... conflicts with class variable", since the
    # same names are placed in the class dict below; the enum values
    # are therefore stored as plain class attributes.
    for k, v in enums.items():
        dct[k] = v
    return type(name, (), dct)

TypeA = TYPE_GEN("TypeA", 42, {"ValOk": 1, "ValSomethingSpecificToThisClassWentWrong": 4})
What can I do to help the processing out?
edit:
The comments seem to be about questioning what I am doing. Again, a big part of what I'm after is code completion. I'm using python binding to a protocol to talk to various microcontrollers. Each parameter I can change (there are hundreds) has a name conceptually, but over the protocol I need to use its ID, which is effectively random. Many of the parameters accept values that are conceptually named, but are again represented by integers. Thus the enum.
I'm trying to autogenerate a python module for the library, so the group can specify what they want to change using the names instead of the error prone numbers. The __call__ property will return the id of the parameter, the enums are the allowable values for the parameter.
Yes, I can generate the verbose version of each class. One line for each type seemed clearer to me, since the point is autocomplete, not viewing these classes.
Ok, as pointed out, your code is too dynamic for this... PyDev will only analyze your own code statically (i.e.: code that lives inside your project).
Still, there are some alternatives there:
Option 1:
You can force PyDev to analyze code that's in your library (i.e.: in site-packages) dynamically, in which case it could get that information dynamically through a shell.
To do that, you'd have to create a module in site-packages and in your interpreter configuration you'd need to add it to the 'forced builtins'. See: http://pydev.org/manual_101_interpreter.html for details on that.
Option 2:
Another option would be putting it into your predefined completions (but in this case it also needs to be in the interpreter configuration, not in your code -- and you'd have to make the completions explicit there anyways). See the link above for how to do this too.
Option 3:
Generate the actual code. I believe that Cog (http://nedbatchelder.com/code/cog/) is the best alternative for this as you can write python code to output the contents of the file and you can later change the code/rerun cog to update what's needed (if you want proper completions without having to put your code as it was a library in PyDev, I believe that'd be the best alternative -- and you'd be able to grasp better what you have as your structure would be explicit there).
Note that cog also works if you're in other languages such as Java/C++, etc. So, it's something I'd recommend adding to your tool set regardless of this particular issue.
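For reference, a Cog-processed file looks roughly like this (the enum values are just examples); running cog -r on the file rewrites the lines between the markers:

# enums_generated.py -- regenerate with: cog -r enums_generated.py
# [[[cog
# import cog
# for name, value in [("ValOk", 1), ("ValSomethingSpecificToThisClassWentWrong", 4)]:
#     cog.outl("%s = %d" % (name, value))
# ]]]
ValOk = 1
ValSomethingSpecificToThisClassWentWrong = 4
# [[[end]]]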
Fully general code completion for Python isn't actually possible in an "offline" editor (as opposed to in an interactive Python shell).
The reason is that Python is too dynamic; basically anything can change at any time. If I type TypeA.Val and ask for completions, the system has to know what object TypeA is bound to, what its class is, and what the attributes of both are. All 3 of those facts can change (and do; TypeA starts undefined and is only bound to an object at some specific point during program execution).
So the system would have to know: at what point in the program run do you want the completions? And even if there were some unambiguous way of specifying that, there's no general way to know what the state of everything in the program is at that point without actually running it to that point, which you probably don't want your editor to do!
So what pydev does instead is guess, when it's pretty obvious. If you have a class block in a module foo defining class Bar, then it's a safe bet that the name Bar imported from foo is going to refer to that class. And so you know something about what names are accessible under Bar., or on an object created by obj = Bar(). Sure, the program could be rebinding foo.Bar (or altering its set of attributes) at runtime, or could be run in an environment where import foo is hitting some other file. But that sort of thing happens rarely, and the completions are useful in the common case.
What that means though is that you basically lose completions whenever you use "too much" of Python's dynamic language flexibility. Defining a class by calling a function is one of those cases. It's not ready to guess that TypeA has names ValOk and ValSomethingSpecificToThisClassWentWrong; after all, there's presumably lots of other objects that result from calls to TYPE_GEN, but they all have different names.
So if your main goal is to have completions, I think you'll have to make it easy for pydev and write these classes out in full. Of course, you could use similar code to generate the python files (textually) if you wanted. It looks, though, like there's actually more "syntactic overhead" in defining these with dictionaries than as a class; you're writing "a": b, per item rather than a = b. Unless you can generate these more systematically or parse existing definition files or something, I think I'd find the static class definition easier to read and write than the dictionary driving TYPE_GEN.
The simpler your code, the more likely completion is to work. Would it be reasonable to have this as a separate tool that generates Python code files containing the class definitions like you have above? This would essentially be the best of both worlds. You could even put the name/value pairs in a JSON or INI file or what have you, eliminating the clutter of the methods call among the name/value pairs. The only downside is needing to run the tool to regenerate the code files when the codes change, but at least that's an automated, simple process.
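A minimal sketch of such a generator, assuming a made-up JSON spec format and file names:

import json

CLASS_TEMPLATE = '''\
class {name}(object):
{enum_lines}
    def __call__(self):
        return {param_id}

'''

def generate(spec_path="params.json", out_path="params_generated.py"):
    # spec format (hypothetical): {"TypeA": {"id": 42, "enums": {"ValOk": 1}}}
    with open(spec_path) as f:
        spec = json.load(f)
    with open(out_path, "w") as out:
        for name, info in sorted(spec.items()):
            enum_lines = "\n".join(
                "    {0} = {1}".format(k, v)
                for k, v in sorted(info["enums"].items()))
            out.write(CLASS_TEMPLATE.format(
                name=name, enum_lines=enum_lines, param_id=info["id"]))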
Personally, I would just go with making things more verbose and writing out the classes manually, but that's just my opinion.
On a side note, I don't see much benefit in making the classes callable vs. just having an id class variable. Both require knowing what to type: TypeA() vs TypeA.id. If you want to prevent instantiation, I think throwing an exception in __init__ would be a bit more clear about your intentions.

Would optional static typing benefit Python API-design or be a disadvantage? (type checking decorator example included)

I'm a long time Python developer and I really love the dynamic nature of the language, but I wonder if Python would benefit from optional static typing.
Would it be beneficial to be able to apply static typing to the API of a library, and what would the disadvantages of this be?
I quickly sketched up a decorator implementing runtime-static type checking on pastebin and it works like this:
# A TypeError will be thrown if the argument "string" is not a "str" and if
# the returned value is not an "int"
@typed(int, string=str)
def getStringLength(string):
    return len(string)
Would it be practical to use a decorator like this on the API functions of a library? In my point of view, type checking is not needed in the internal workings of a domain-specific module of a library, but at the connection points between the library and its client, a simple version of design by contract achieved by applying type checking could be useful. Especially as a kind of enforced documentation which clearly states to the client of the library what it expects and returns.
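Roughly, a decorator of this shape (a simplified sketch, not the exact pastebin code) could look like:

import functools
import inspect

def typed(return_type, **arg_types):
    """Check declared argument types on entry and the return type on exit."""
    def decorator(func):
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            for name, expected in arg_types.items():
                if name in bound.arguments and not isinstance(bound.arguments[name], expected):
                    raise TypeError("argument %r must be %s"
                                    % (name, expected.__name__))
            result = func(*args, **kwargs)
            if not isinstance(result, return_type):
                raise TypeError("return value must be %s" % return_type.__name__)
            return result
        return wrapper
    return decorator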
Like this example, where addObjectToQueue() and isObjectProcessed() are exposed for use by the client and processTheQueueAndDoAdvancedStuff() is an internal library function. I think type checking could be useful on the outward-facing functions, but would only bloat and restrict the dynamism and usefulness of Python if used on the internal functions.
# some_library_module.py
import random

@typed(int, name=str)
def addObjectToQueue(name):
    return random.randint(0, 1000000)  # Some object id

def processTheQueueAndDoAdvancedStuff(arg_of_library_specific_type):
    # Function body here
    ...

@typed(bool, object_id=int)
def isObjectProcessed(object_id):
    return True
What would the disadvantages of using this technique be?
What would the disadvantages of my naive implementation on pastebin be?
I don't want answers discussing the conversion of Python to a statically typed language, but thoughts about API design-specific pros/cons. (please move this to programmers.stackexchange.com if you consider it not a question)
Personally, I don't find this idea attractive for Python. This is all just my opinion, of course, but for context I'll tell you that Python and Haskell are probably my two favourite programming languages - I like languages at both extreme ends of the static vs dynamic typing spectrum.
I see the main benefits of static typing as follows:
Increased likelihood that your code is correct once the compiler has accepted it; if I know I've threaded my values through all the operations I invoked in such a way that the result type of one always matches the input type of another, and the final result type is the one I wanted, it increases the probability that I've selected the correct operations. This point is of deeply arguable value, since it only really matters if you're not testing very much, which would be bad. But it is true that, when programming in Haskell, when I sit back and say "there, done!" I am actually done a lot of the time, whereas that's almost never true of my Python code.
The compiler automatically points out most of the places that need changing when I make an incompatible change to a data structure or interface (most of the time). Again, tests are still needed to actually be sure you've caught all the implications, but most of the time the compiler's nagging is actually sufficient, in my experience, which deeply simplifies such refactoring; you can go straight from implementing the core of the refactoring to testing that the program still works okay, because the actual work of making all the flow-on changes is almost mechanical.
Efficient implementation. The compiler gets to use all the knowledge it has about types to do optimisation.
Your suggested system doesn't really provide any of these benefits.
Having written a program making use of your library, I still don't know if it contains any type-incorrect uses of your functions until I do extensive testing with full code coverage to see if any execution path contains a bad call.
When I refactor something, I need to go through many many rounds of "run full test suite, look for exception, find where it came from, fix the code" to get anything at all like a static-typing compiler's problem detection.
Python will still be behaving as if those variables could be anything at any time.
And to get even that much, you've sacrificed the flexibility of Python duck-typing; it's not enough that I provide a sufficiently "list-like" object, I have to actually provide a list.
To me, this sort of static typing is the worst of both worlds. The main dynamic typing argument is "you have to test your code anyway, so you may as well use those tests to catch type errors and free yourself from having to work around the type system when it doesn't help you". That may or may not be a good argument with respect to a really good static type system, but it absolutely is a compelling argument with respect to a weak partial static type system that only detects type errors at runtime. I don't think nicer error messages (which is all it really buys you most of the time; a type error not caught at the interface is almost certainly going to throw an exception deeper in the call stack) is worth the loss of flexibility.
