unbound method must be called with instance as first argument - Python

I have a relatively simple class which just changes the values of variables depending on the state.
class SetStates:
    def LM_State1():
        global p_LM1, p_LM2, p_LM3, p_RR1, p_RR2, p_RR3, p_RF1, p_RF2, p_RF3
        p_LM1 = Ra_L*P_j1_s1
        p_LM2 = P_j2_s1
        p_LM3 = P_j3_s1
        p_RR1 = Ra_R*(-1)*P_j1_s1
        p_RR2 = (-1)*P_j2_s1
        p_RR3 = (-1)*P_j3_s1
        p_RF1 = Ra_R*(-1)*P_j1_s1
        p_RF2 = (-1)*P_j2_s1
        p_RF3 = (-1)*P_j3_s1
Initially I was calling the function within the class like so:
if LM_state == 1:
    SetStates.LM_State1()
After realizing I needed to instantiate the class first, it now looks like this:
s = SetStates()
if LM_state == 1:
    s.LM_State1()
But I am now receiving an error saying the method was given 1 argument but expected 0. I am almost certain I am missing something very trivial. If someone could clear this up, that would be great, thanks.

Methods (that is to say: any def block defined inside a class definition) automatically get passed the calling instance as their first argument (unless the method is defined as a staticmethod, but let's not muddy the waters). Since your function definition for LM_State1() doesn't include any arguments, Python complains that you gave it an argument (s) that it doesn't know what to do with.
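For completeness, the immediate fix is just to accept that instance argument; a minimal sketch along the lines of the question's code (the assignment value is a placeholder, not from the question):

class SetStates:
    def LM_State1(self):  # self absorbs the instance that s.LM_State1() passes implicitly
        global p_LM1
        p_LM1 = 1.0  # ...and so on for the other state variables

s = SetStates()
s.LM_State1()  # no more TypeError: the instance binds to self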
As @BrenBarn mentions in the comments, your class doesn't make a whole lot of sense from a design perspective if it's just modifying global state, but that's the reason for the error anyway. If you really need this (hint: you don't), you should consider wrapping it in a module, importing the module, and defining all your set_state functions at the top level of that module.
# stateful.py
def set_state_1():
    ...

# main.py
import stateful

stateful.set_state_1()  # set the state!

Related

When is a default object created in Python?

I have a Python (3) structure like the following:
main_script.py
util_script.py
AccessClass.py
The main script calls a function in util_script.py with the following signature:
def migrate_entity(project, name, access=AccessClass.AccessClass()):
The call itself in the main script is:
migrate_entity(project_from_file, name_from_args, access=access_object)
All objects have values by the time the call is made.
However, as soon as the main script is executed, the AccessClass default in the function signature is initialized, even though it is never used. For example, this main script's entry block will create the default instance in the function signature:
if __name__ == "__main__":
    argparser = argparse.ArgumentParser(description='Migrate support data')
    argparser.add_argument('--name', dest='p_name', type=str, help='The entity name to migrate')
    load_dotenv()
    fileConfig('logging.ini')
    # Just for the sake of it
    quit()
    # The rest of the code...
    # ...and then
    migrate_entity(project_from_file, name_from_args, access=access_object)
Even with the quit() added, the AccessClass is created. And if I run the script with ./main_script.py -h, the AccessClass in the function signature is still created. And even though the only call to the function is made with an explicit access object, I can see that AccessClass.__init__ is invoked.
If I replace the default with None, check the parameter inside the function, and create the instance there, everything works as expected, i.e. the AccessClass is not created if it isn't needed.
Can someone please enlighten me why this is happening and how defaults are expected to work?
Are parameter defaults always created in advance in Python?
Default argument values are evaluated once, at the moment you define the function, not each time you invoke it. That's why it's widely discouraged to use mutable types as defaults. You can use None as you mentioned, and inside the body check whether the value is None and then initialize it properly.
def foo_bad(x=[]): pass  # This is bad

foo_bad()        # the list initialized at definition time is used
foo_bad([1, 2])  # the provided list is used
foo_bad()        # again the list initialized at definition time is used

def foo_good(x=None):
    if x is None:
        x = []
    ...  # further logic
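To make the sharing visible, here is a small self-contained demonstration (append_item is a made-up helper, not from the question):

def append_item(item, bucket=[]):  # the default list is created once, when def runs
    bucket.append(item)
    return bucket

print(append_item(1))             # [1]
print(append_item(2))             # [1, 2] -- the same list object as the first call
print(append_item(3, bucket=[]))  # [3]    -- an explicit argument sidesteps the shared default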
AccessClass is being created because you've set it as a default parameter, so it is evaluated when the def statement runs, which happens as soon as the file is first imported. This is also why it's not recommended to use lists or dicts as default parameters.
This is a much safer way of defining a default value if nothing is provided:
def migrate_entity(project, name, access=None):
    if access is None:
        access = AccessClass.AccessClass()
You could also use type hinting to demonstrate what type access should be:
from typing import Optional

def migrate_entity(project, name, access: Optional[AccessClass.AccessClass] = None): ...

Using constructor parameter variable names during object instantiation in Python?

When creating a new instance of an object in Python, why would someone use the names of the parameters at instantiation time? Say you have the following object:
class Thing:
    def __init__(self, var1=None, var2=None):
        self.var1 = var1
        self.var2 = var2
The programmer then decides to create an instance of this object at some point later and writes it the following way:
NewObj = Thing(var1=newVar, var2=otherVar)
Is there a reason why someone would enter it that way vs. just passing the newVar/otherVar variables as positional arguments, without using "var1=" and "var2="? Like below:
NewObj = Thing(newVar, otherVar)
I'm fairly new to Python, and I couldn't find anything about this specific sort of syntax, even though it seems like a fairly simple/straightforward question.
The reason is clarity, not for the computer, but for yourself and other humans.
class Calculation:
    def __init__(self, low=None, high=None, mean=None):
        self.low = low
        self.high = high
        self.mean = mean
        ...

# with naming (notice how ordering is not important)
calc = Calculation(mean=0.5, low=0, high=1)

# without naming (now order is important and it is less clear what the numbers are used for)
calc = Calculation(0, 1, 0.5)
Note that the same can be done for any function, not only when initializing an object.
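For instance, a plain function benefits the same way (a made-up example, not from the question):

def clamp(value, low=0.0, high=1.0):
    return max(low, min(high, value))

clamp(1.7, high=2.0)                 # keywords let you skip defaults you don't care about
clamp(value=1.7, low=0.5, high=2.0)  # order no longer matters when every argument is named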

Will Python automatically detect that the function was never called but defined?

True or False
If a function is defined but never called, then Python automatically detects that and issues a warning
One of the issues with this is that functions in Python are first class objects. So their name can be reassigned. For example:
def myfunc():
    pass

a = myfunc
myfunc = 42
a()
We also have closures, where a function is returned by another function and the original name goes out of scope.
Unfortunately it is also perfectly legal to define a function with the same name as an existing one. For example:
def myfunc():  # <<< This code is never called
    pass

def myfunc():
    pass

myfunc()
So any tracking must include the function's id, not just its name - although that won't help with closures, since the id could get reused. It also won't help if the __name__ attribute of the function is reassigned.
You could track function calls using a decorator. Here I have used the name and the id - the id on its own would not be readable.
import functools

globalDict = {}

def tracecall(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        global globalDict
        key = "%s (%d)" % (f.__name__, id(f))
        # Count the number of calls
        if key in globalDict:
            globalDict[key] += 1
        else:
            globalDict[key] = 1
        return f(*args, **kwargs)
    return wrapper
@tracecall
def myfunc1():
    pass

myfunc1()
myfunc1()

@tracecall
def myfunc1():
    pass

a = myfunc1
myfunc1 = 42
a()

print(globalDict)
Gives:
{'myfunc1 (4339565296)': 2, 'myfunc1 (4339565704)': 1}
But that only gives the functions that have been called, not those that have not!
So where to go from here? I hope you can see that the task is quite difficult given the dynamic nature of python. But I hope the decorator I show above could at least allow you to diagnose the way the code is used.
No, it does not. Python does not detect this. If you want to detect which functions are called (or not) at run time, you can use a global set in your program: inside each function, add the function's name to the set. Later you can print the set's contents and check whether a given function was called.
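A minimal sketch of that idea (the function names are made up):

called = set()

def load_data():
    called.add('load_data')
    ...

def save_data():
    called.add('save_data')
    ...

load_data()
print(called)  # {'load_data'} -- save_data was defined but never called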
False. Ignoring the difficulty and overhead of doing this, there's no reason why it would be useful.
A function that is defined in a module (i.e. a Python file) but not called elsewhere in that module might be called from a different module, so that doesn't deserve a warning.
If Python were to analyse all modules that get run over the course of a program and print a warning about functions that were not called, it may be that a function was not called because of the input in this particular run, e.g. perhaps in a calculator program there is a "multiply" function but the user only asked to sum some numbers.
If Python were to analyse all modules that make up a program and print a warning about functions that could not possibly be called (this is impossible, but stay with me here), then it would warn about functions that were intended for use in other programs. E.g. if you have two calculator programs, a simple one and an advanced one, maybe you have a central calc.py with utility functions; advanced functions like exp and log could not possibly be called when calc.py is used as part of the simple program, but that shouldn't cause a warning because they're needed for the advanced program.

Is there a reason not to send super().__init__() a dictionary instead of **kwds?

I just started building a text based game yesterday as an exercise in learning Python (I'm using 3.3). I say "text based game," but I mean more of a MUD than a choose-your-own adventure. Anyway, I was really excited when I figured out how to handle inheritance and multiple inheritance using super() yesterday, but I found that the argument-passing really cluttered up the code, and required juggling lots of little loose variables. Also, creating save files seemed pretty nightmarish.
So, I thought, "What if certain class hierarchies just took one argument, a dictionary, and just passed the dictionary back?" To give you an example, here are a few classes trimmed down to their __init__ methods:
class Actor:
    def __init__(self, in_dict, **kwds):
        super().__init__(**kwds)
        self._everything = in_dict
        self._name = in_dict["name"]
        self._size = in_dict["size"]
        self._location = in_dict["location"]
        self._triggers = in_dict["triggers"]
        self._effects = in_dict["effects"]
        self._goals = in_dict["goals"]
        self._action_list = in_dict["action list"]
        self._last_action = ''
        self._current_action = ''  # both ._last_action and ._current_action get updated by .update_action()
class Item(Actor):
    def __init__(self, in_dict, **kwds):
        super().__init__(in_dict, **kwds)
        self._can_contain = in_dict["can contain"]  # boolean entry
        self._inventory = in_dict["inventory"]      # either a list or dict entry
class Player(Actor):
    def __init__(self, in_dict, **kwds):
        super().__init__(in_dict, **kwds)
        self._inventory = in_dict["inventory"]  # entry should be a Container object
        self._stats = in_dict["stats"]
Example dict that would be passed:
playerdict = {'name': '', 'size': '0', 'location': '', 'triggers': None, 'effects': None, 'goals': None, 'action list': None, 'inventory': Container(), 'stats': None}
(The None values get replaced by {} once the dictionary has been passed.)
So, in_dict gets passed to the previous class instead of a huge payload of **kwds.
I like this because:
It makes my code a lot neater and more manageable.
As long as the dicts have an entry for every key that gets looked up, it doesn't break the code. Also, it doesn't matter if a given entry never gets used.
It seems like file IO just got a lot easier (dictionaries of player data stored as dicts, dictionaries of item data stored as dicts, etc.)
I get the point of **kwds (EDIT: apparently I didn't), and it hasn't seemed cumbersome when passing fewer arguments. This just appears to be a comfortable way of dealing with a need for a large number of attributes at the creation of each instance.
That said, I'm still a major python noob. So, my question is this: Is there an underlying reason why passing the same dict repeatedly through super() to the base class would be a worse idea than just toughing it out with nasty (big and cluttered) **kwds passes? (e.g. issues with the interpreter that someone at my level would be ignorant of.)
EDIT:
Previously, creating a new Player might have looked like this, with an argument passed for each attribute.
bob = Player('bob', Location = 'here', ... etc.)
The number of arguments needed blew up, and I only included the attributes that really needed to be present to not break method calls from the Engine object.
This is the impression I'm getting from the answers and comments thus far:
There's nothing "wrong" with sending the same dictionary along, as long as nothing has the opportunity to modify its contents (Kirk Strauser) and the dictionary always has what it's supposed to have (goncalopp). The real answer is that the question was amiss, and using in_dict instead of **kwds is redundant.
Would this be correct? (Also, thanks for the great and varied feedback!)
I'm not sure I understand your question exactly, because I don't see how the code looked before you made the change to use in_dict. It sounds like you have been listing out dozens of keywords in the call to super (which is understandably not what you want), but this is not necessary. If your child class has a dict with all of this information, it can be turned into kwargs when you make the call with **in_dict. So:
class Actor:
    def __init__(self, **kwds):
        ...  # unpack whatever Actor needs from kwds

class Item(Actor):
    def __init__(self, **kwds):
        self._everything = kwds
        super().__init__(**kwds)
I don't see a reason to add another dict for this, since you can just manipulate and pass the dict created for kwds anyway.
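For example, under this scheme a caller can still build the data as a dict and expand it at the call site (a sketch reusing the Actor/Item definitions above; the values are made up):

data = {'name': 'bob', 'size': '0', 'location': 'here'}
bob = Item(**data)  # each key/value pair arrives in __init__ as a keyword argument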
Edit:
As for the question of the efficiency of using the ** expansion of the dict versus listing the arguments explicitly, I did a very unscientific timing test with this code:
import time

def some_func(**kwargs):
    for k, v in kwargs.items():
        pass

def main():
    name = 'felix'
    location = 'here'
    user_type = 'player'
    kwds = {'name': name,
            'location': location,
            'user_type': user_type}
    start = time.time()
    for i in range(10000000):
        some_func(**kwds)
    end = time.time()
    print('Time using expansion:\t{0}s'.format(end - start))
    start = time.time()
    for i in range(10000000):
        some_func(name=name, location=location, user_type=user_type)
    end = time.time()
    print('Time without expansion:\t{0}s'.format(end - start))

if __name__ == '__main__':
    main()
Running each version 10,000,000 times gives a slight (and probably statistically meaningless) advantage to passing around a dict and using **.
Time using expansion:    7.9877269268s
Time without expansion:  8.06108212471s
If we print the IDs of the dict objects (kwds outside and kwargs inside the function), you will see that Python builds a fresh kwargs dict for each call in either case. The id printed inside the function often looks identical from call to call, but that is just CPython reusing the memory of the previous dict after it is freed, not one persistent dict belonging to the function. (See also the related question above about how mutable default parameters are handled in Python, where an object genuinely is created only once.)
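A quick way to observe this (output will vary by machine; the repeated id is an allocator artifact, not shared state):

def show(**kwargs):
    print(id(kwargs), kwargs)

show(a=1)
show(b=2)  # often prints the same id: the first dict was freed and its memory reused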
So from a performance perspective, you can pick whichever makes sense to you. It should not meaningfully impact how python operates behind the scenes.
I've done that myself where in_dict was a dict with lots of keys, or a settings object, or some other "blob" of something with lots of interesting attributes. That's perfectly OK if it makes your code cleaner, particularly if you name it clearly like settings_object or config_dict or similar.
That shouldn't be the usual case, though. Normally it's better to explicitly pass a small set of individual variables. It makes the code much cleaner and easier to reason about. It's possible that a client could pass in_dict = None by accident and you wouldn't know until some method tried to access it. Suppose Actor.__init__ didn't peel apart in_dict but just stored it like self.settings = in_dict. Sometime later, Actor.method comes along and tries to access it, then boom! Dead process. If you're calling Actor.__init__(var1, var2, ...), then the caller will raise an exception much earlier and provide you with more context about what actually went wrong.
So yes, by all means: feel free to do that when it's appropriate. Just be aware that it's not appropriate very often, and the desire to do it might be a smell telling you to restructure your code.
This is not python specific, but the greatest problem I can see with passing arguments like this is that it breaks encapsulation. Any class may modify the arguments, and it's much more difficult to tell which arguments are expected in each class - making your code difficult to understand, and harder to debug.
Consider explicitly consuming the arguments each class needs, and calling the super's __init__ with the remainder. You don't need to list them all explicitly:
class ClassA(object):
    def __init__(self, arg1, arg2=""):
        pass

class ClassB(ClassA):
    def __init__(self, arg3, arg4="", *args, **kwargs):
        ClassA.__init__(self, *args, **kwargs)

ClassB(3, 4, 1, 2)
You can also leave the variables uninitialized and use methods to set them. You can then use different methods in the different classes, and all subclasses will have access to the superclass methods.
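A minimal sketch of that alternative (the names are made up):

class Base:
    def set_name(self, name):  # setter defined once in the superclass
        self._name = name

class Child(Base):
    pass

c = Child()
c.set_name('bob')  # every subclass inherits the superclass setter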

How do I call a method from a superclass, whose definition is completely deferred to the subclasses?

I want to invoke a method in my code in a superclass, to do some subclass-specific processing before continuing on. I came to Python recently from C#... there, I'd probably use an interface. Here's the gist of it (as I picture it, but it's not working):
class superClass:
    def do_specific_stuff(self):  # to be implemented entirely by the subclass,
                                  # but called from the superclass
        pass
    def do_general_stuff1(self):
        pass  # do misc
    def do_general_stuff2(self):
        pass  # do more misc
    def main_general_stuff(self):
        do_general_stuff1()
        do_specific_stuff()
        do_general_stuff2()
I have a rather complicated implementation of this; this example is exactly what I need and far less painful to understand for a first-time viewer. Calling do_specific_stuff() at the moment gives me the error:
global name 'do_specific_stuff' is not defined
When I add self, as in self.do_specific_stuff(), I get the error:
TypeError: do_specific_stuff() takes 0 positional arguments but 1 was given
Any takers? Thanks in advance...
It needs to be
def main_general_stuff(self):
    self.do_general_stuff1()
    self.do_specific_stuff()
    ...
The problem is that you are missing the explicit reference to self: without it, Python thinks you mean a global function. Note that there is no implicit this like in Java: you need to specify it.
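Putting it together, a minimal runnable sketch of the pattern (the method names follow the question; the subclass and print calls are made up for illustration):

class SuperClass:
    def do_specific_stuff(self):  # deferred entirely to the subclass
        raise NotImplementedError
    def do_general_stuff1(self):
        print('general stuff 1')
    def do_general_stuff2(self):
        print('general stuff 2')
    def main_general_stuff(self):
        self.do_general_stuff1()
        self.do_specific_stuff()  # dispatches to the subclass override
        self.do_general_stuff2()

class SubClass(SuperClass):
    def do_specific_stuff(self):
        print('subclass-specific stuff')

SubClass().main_general_stuff()  # general 1, specific, general 2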
