I want to force the use of a specific number of arguments when creating an instance of some class. From what I have read here, even if I declare
class someClass:
    def __init__(self, arg1, arg2..):
        # whatever
it would still be possible to instantiate someClass like this:
a = someClass()
I am very new to Python, so my only idea for enforcing that would be to define a no-argument constructor and throw an exception in it. But I would prefer something that enforces it at compile time already. Is that possible?
As you will see when you try it, if your __init__ method has required arguments, then you cannot create the object without passing those arguments. But I will caution you that "forcing" your caller to do something might be going against the Python grain. Often, the Python mantra is, "document your API, and then let the caller do what the caller is going to do."
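For example (a minimal made-up class, not your actual one):

class SomeClass:
    def __init__(self, arg1, arg2):
        self.arg1 = arg1
        self.arg2 = arg2

SomeClass()      # TypeError: __init__() missing 2 required positional arguments: 'arg1' and 'arg2'
SomeClass(1, 2)  # works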
So I'm looking through some old Python 2 code and I see this function:
def manage_addMapSamlPlugin(self, id, title='', delegate_path='', REQUEST=None):
    """ Factory method to instantiate a MapSamlPlugin """
    # Make sure we really are working in our container (the
    # PluggableAuthService object)
    self = self.this()
    # Instantiate the adapter object
    lmp = MapSamlPlugin(id, title=title, delegate_path=delegate_path)
    self._setObject(id, lmp)
    if REQUEST is not None:
        REQUEST.RESPONSE.redirect('%s/manage_main' % self.absolute_url())
Now this function is outside of a class, and the code compiles and doesn't give any errors. My understanding is that self in this case is just whatever gets passed in, but self.this() and self._setObject(id, lmp) shouldn't be a thing, right? Shouldn't the compiler throw an error? The code is run in a terminal on an SSH server; I don't know what compiler it uses.
At the end of the file, this is where the function gets used:
def initialize(context):
    registerMultiPlugin(MapSamlPlugin.meta_type)
    context.registerClass(
        MapSamlPlugin,
        constructors=(manage_addMapSamlPluginForm, manage_addMapSamlPlugin),
        permission=ManageUsers,
        icon=os.path.join(mgr_dir, "saml_icon.png"),
        visibility=None,
    )
And this is also a standalone function; "context" isn't derived from any imports or class.
The comment is an important clue:
def manage_addMapSamlPlugin(self, id, title='', delegate_path='', REQUEST=None):
    """ Factory method to instantiate a MapSamlPlugin """
    # Make sure we really are working in our container (the
    # PluggableAuthService object)
    self = self.this()
self is expected to be an object which has a this() method -- it sounds like that method returns a PluggableAuthService object. If you grep the rest of the code for def this you'll probably find it. Looking for class PluggableAuthService might also shed some light.
If you call this function and pass it a self that doesn't implement the expected interface, you'll get an AttributeError at runtime. Since there are no type annotations here, there's not really a way to catch errors statically (at "compile time" -- although typically compiling Python doesn't in itself enforce any static type checks).
My suspicion is that this function was originally a method of that class and got refactored out of it for some reason (maybe as the first step in some larger refactor that was never finished). A method works just fine if you yank it out of a class, provided that you explicitly provide the self argument when you call it.
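A tiny illustration (the names here are made up for the example):

class Container:
    def this(self):
        return self

# This could have been a method of Container; pulled out of the class,
# it still works, as long as the caller passes a suitable object as self.
def standalone(self):
    return self.this()

standalone(Container())  # fine
standalone(42)           # AttributeError at runtime: 'int' object has no attribute 'this'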
I have a class like this
class A(object):
    def __init__(self, name):
        self.name = name

    def run(self):
        pass
If we look at the type of run, it is a function. I am now writing a decorator that should work on either a standalone function or a method, but with different behavior when the thing it is decorating is a method. When the decorator runs on the method run, it cannot really tell that run is a method, because it has not been bound to an object yet. I have tried inspect.ismethod, and it does not work either. Is there a way my decorator can detect that run is a method rather than a standalone function? Thanks!
To add a bit more info:
Basically, I am logging something. If the decorator is decorating a method, I need the name of the class of that object and the method name; if it is decorating a function, I just need the function name.
As mentioned by chepner, a function only becomes a method when it's used as one, i.e. when it's looked up on an instance and resolved on the class. What you are decorating is, and will always be, a function (well, unless you already decorated it with something that returns another callable type, of course; cf. the classmethod type).
At this point you have two options: the safe and explicit one, and the unsafe guessing game one.
The safe and explicit solution is, simply, to have two distinct decorators, one for plain functions, and another for "functions to be used as methods".
The unsafe guessing-game one is to inspect the function's first argument name (using inspect.getfullargspec()) and consider it a "function to be used as a method" if the first argument is named "self".
Obviously the safe and explicit solution is also much simpler ;-)
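A rough sketch of the guessing approach, assuming your logging use case (note that the class name is only recoverable at call time, from the first positional argument):

import functools
import inspect

def log_calls(func):
    # Guess: treat func as a "function to be used as a method"
    # if its first parameter is named "self".
    argnames = inspect.getfullargspec(func).args
    looks_like_method = bool(argnames) and argnames[0] == "self"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if looks_like_method:
            # args[0] is the instance, so its type gives the class name.
            print("%s.%s" % (type(args[0]).__name__, func.__name__))
        else:
            print(func.__name__)
        return func(*args, **kwargs)

    return wrapper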
I've created a Python proxy class that calls methods on a remote object. I've used a closure to override the __doc__ attribute on my dynamically created methods so that
help(obj.method)
gives me the help on my remote object method. I then decided that I wanted to do the same thing for the object attributes. I have in my class something like:
class Proxy:
    def __getattr__(self, name):
        # Do stuff to get remote attribute
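For reference, the closure trick I use for the methods looks roughly like this (simplified; _remote_call and _remote_doc are hypothetical stand-ins for my actual remote machinery):

def _make_proxy_method(self, name):
    def method(*args, **kwargs):
        return self._remote_call(name, *args, **kwargs)  # hypothetical remote invocation
    # Overriding __doc__ is what makes help(obj.method) show the remote doc.
    method.__doc__ = self._remote_doc(name)              # hypothetical doc lookup
    return method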
Now when calling code like this:
help(obj.attribute)
I of course get the doc string of the queried value type (string, int or whatever was returned).
The only way I can think of to avoid this is to inspect the call stack inside __getattr__(), look for the help() call, and conditionally return an object/class instead of the remotely queried value.
This is obviously non-ideal, because there are quite a few ways one could spell the same call; however, it would help me from the command line, which is the most likely place I am to use this, so it's perhaps better than nothing.
Is there a better way?
I have somewhat of a strange question here. Let's say I'm making a simple, basic class as follows:
class MyClass(object):
    def __init__(self):
        super(MyClass, self).__init__()
Is there any purpose in calling super()? My class only has the default object parent class. The reason I'm asking is that my IDE automagically gives me this snippet when I create a new class. I usually remove the super() call because I don't see any purpose in leaving it. But maybe I'm missing something?
You're not obliged to call object.__init__ (via super or otherwise). It does nothing.
However, the purpose of writing that snippet in that way in an __init__ function (or any function that calls the superclass) is to give you some flexibility to change the superclass without modifying that code.
So it doesn't buy you much, but it does buy you the ability to change the superclass of MyClass to a different class whose __init__ likewise accepts no-args, but which perhaps does do something and does need to be called by subclass __init__ functions. All without modifying your MyClass.__init__.
Your call whether that's worth having.
Also in this particular example you can leave out MyClass.__init__ entirely, because yours does nothing too.
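For instance (LoggedBase here is a made-up stand-in for some future superclass whose __init__ actually does something):

class LoggedBase(object):
    def __init__(self):
        print("initialising a LoggedBase")

class MyClass(LoggedBase):               # only the base class changed...
    def __init__(self):
        super(MyClass, self).__init__()  # ...and this line still does the right thing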
I am writing a framework to be used by people who know some Python. I have settled on some syntax, and it makes sense to me and them to use something like this, where Base is the Base class that implements the framework.
class A(Base):
    @decorator1
    @decorator2
    @decorator3
    def f(self):
        pass

    @decorator4
    def f(self):
        pass

    @decorator5
    def g(self):
        pass
All my framework is implemented via Base's metaclass. This setup is appropriate for my use case, because all these user-defined classes have a rich inheritance graph. I expect the user to implement some of the methods, or just leave it with pass. Much of the information that the user is giving here is in the decorators. This allows me to avoid other solutions where the user would have to monkey-patch, give less structured dictionaries, and things like that.
My problem here is that f is defined twice by the user (with good reason), and this should be handled by my framework. Unfortunately, by the time this gets to the metaclass's __new__ method, the dictionary of attributes contains only one key f. My idea was to use yet another decorator, such as @duplicate, for the user to signal that this is happening, and for the two f's to be wrapped differently so they don't overwrite each other. Can something like this work?
You should use a namespace to distinguish the different fs.
Heed the advice of the "Zen of Python":
Namespaces are one honking great idea -- let's do more of those!
Yes, you could, but only with an ugly hack, and only by storing the function with a new name.
You'd have to use the sys._getframe() function to retrieve the local namespace of the calling frame. That local namespace is the class-in-construction, and adding items to that namespace means they'll end up in the class dictionary passed to your metaclass.
The following retrieves that namespace:
callframe = sys._getframe(1)
namespace = callframe.f_locals
namespace is a dict-like object, just like locals(). You can now store something in that namespace (like __function_definitions__ or similar) to add extra references to the functions.
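A minimal sketch of what that could look like (the decorator and attribute names are made up; this relies on the CPython detail that a class body's frame exposes its real namespace through f_locals):

import sys

def duplicate(func):
    # Grab the namespace of the class body that is applying this decorator.
    namespace = sys._getframe(1).f_locals
    # Stash every definition under its own key, so later defs of the
    # same name don't clobber earlier ones.
    definitions = namespace.setdefault('__function_definitions__', {})
    definitions.setdefault(func.__name__, []).append(func)
    return func

Your metaclass's __new__ can then read __function_definitions__ out of the class dictionary and see both f's, even though the f key itself only holds the last one.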
You might be thinking of Java, with method overloading and argument signatures, but this is Python and you cannot do this. The second f() will override the first f(), and you end up with only one f(). The class namespace is a dictionary, and you cannot have duplicate keys.
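You can see the overwrite directly (a throwaway example):

class C:
    def f(self):
        return 1
    def f(self):            # silently replaces the previous f
        return 2

print(C().f())              # 2
print(list(C.__dict__))     # only one 'f' key in the class namespace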