Python object is being referenced by an object I cannot find - python

I am trying to remove an object from memory in Python, and I am coming across an object that is not being removed. From my understanding, if there are no references to an object, the garbage collector will de-allocate its memory when it runs. However, after I have removed all of the references, if I run
bar = Foo()
print gc.get_referrers(bar)
del bar
baz = gc.collect()
print baz
I get a reply of
[<frame object at 0x7f1eba291e50>]
0
So why does it not delete the object?
I get the same reply for all of the object instances if I do
bar = [Foo() for i in range(0, 10)]
for x in range(0, len(bar)):
    baz = bar[x]
    del bar[x]
    print gc.get_referrers(baz)
How do I completely remove all referrers from an object, and does anyone have any idea what the frame object that shows up for all of them is?
I thought it might be an object frame(?) that contains a list of all objects in the program, but I have not been able to confirm that or find a way to stop objects from being referenced by this (to me) mystical frame object.
Any help would be greatly appreciated.
Edit:
Okay, I rewrote the code in a simpler form, pulling out everything except the basics:
import random, gc

class Object():
    def __init__(self):
        self.n = None
        self.p = None
        self.isAlive = True
    def setNext(self, object):
        self.n = object
    def setPrev(self, object):
        self.p = object
    def getNext(self):
        return self.n
    def getPrev(self):
        return self.p
    def simulate(self):
        if random.random() > .90:
            self.isAlive = False
    def remove(self):
        if self.p is not None and self.n is not None:
            self.n.setPrev(self.p)
            self.p.setNext(self.n)
        elif self.p is not None:
            self.p.setNext(None)
        elif self.n is not None:
            self.n.setPrev(None)
        del self

class Grid():
    def __init__(self):
        self.cells = [[Cell() for i in range(0, 500)] for j in range(0, 500)]
        for x in range(0, 100):
            for y in range(0, 100):
                for z in range(0, 100):
                    self.cells[x][y].addObject(Object())
    def simulate(self):
        for x in range(0, 500):
            for y in range(0, 500):
                self.cells[x][y].simulate()
        num = gc.collect()
        print " " + str(num) + " deleted today."

class Cell():
    def __init__(self):
        self.objects = None
        self.objectsLast = None
    def addObject(self, object):
        if self.objects is None:
            self.objects = object
        else:
            self.objectsLast.setNext(object)
            object.setPrev(self.objectsLast)
        self.objectsLast = object
    def simulate(self):
        current = self.objects
        while current is not None:
            if current.isAlive:
                current.simulate()
                current = current.getNext()
            else:
                delete = current
                current = current.getNext()
                if delete.getPrev() is None:
                    self.objects = current
                elif delete.getNext() is None:
                    self.objectsLast = delete.getPrev()
                delete.remove()

def main():
    print "Building Map..."
    x = Grid()
    for y in range(1, 101):
        print "Simulating day " + str(y) + "..."
        x.simulate()

if __name__ == "__main__":
    main()

gc.get_referrers takes one argument: the object whose referrers it should find.
I cannot think of any circumstance in which gc.get_referrers would return no results, because in order to send an object to gc.get_referrers, there has to be a reference to the object.
In other words, if there was no reference to the object, it would not be possible to send it to gc.get_referrers.
At the very least, there will be a reference from the globals() or from the current execution frame (which contains the local variables):
A code block is executed in an execution frame. An execution frame contains some administrative information (used for debugging), determines where and how execution continues after the code block's execution has completed, and (perhaps most importantly) defines two namespaces, the local and the global namespace, that affect execution of the code block.
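As a quick illustration (a sketch of my own, assuming CPython), the frame object that the question's gc.get_referrers printed is simply the currently executing function's frame, which is the container holding the local variables:
import gc
import sys

def f():
    bar = object()
    referrers = gc.get_referrers(bar)
    # the only referrer is this function's own execution frame,
    # because the local variable "bar" lives in that frame
    print referrers[0] is sys._getframe()   # prints True

f()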
See an extended version of the example from the question:
import gc

class Foo(object):
    pass

def f():
    bar = [Foo() for i in range(0, 10)]
    for x in range(0, len(bar)):
        # at this point there is one reference to bar[x]: it is bar
        print len(gc.get_referrers(bar[x]))  # prints 1
        baz = bar[x]
        # at this point there are two references to baz:
        # - bar references it, because it is in the list
        # - this "execution frame" references it, because it is in variable "baz"
        print len(gc.get_referrers(bar[x]))  # prints 2
        del bar[x]
        # at this point, only the execution frame (variable baz) references the object
        print len(gc.get_referrers(baz))  # prints 1
        print gc.get_referrers(baz)  # prints a frame object
        del baz
        # now there are no more references to it, but there is no way to call get_referrers

f()
How to test it properly?
There is a better trick to detect whether there are referrers or not: weakref.
The weakref module provides a way to create weak references to an object, which do not count as real references. What that means is that even if there is a weak reference to an object, the object will still be deleted when there are no other references to it. A weak reference also does not count in gc.get_referrers.
So:
>>> x = Foo()
>>> weak_x = weakref.ref(x)
>>>
>>> gc.get_referrers(x) == [globals()] # only one reference from global variables
True
>>> x
<__main__.Foo object at 0x000000000272D2E8>
>>> weak_x
<weakref at 0000000002726D18; to 'Foo' at 000000000272D2E8>
>>> del x
>>> weak_x
<weakref at 0000000002726D18; dead>
The weak reference says that the object is dead, so it was indeed deleted.

Okay, thanks to cjhanks and user2357112 I came up with this answer.
The problem was that when you run the program, the gc does not appear to collect anything after each day, even though things were deleted.
To test whether objects are actually being deleted, I instead run
print len(gc.get_objects())
each time I go through a "day"; this shows how many objects Python is tracking.
Now, with that information, and thanks to a comment, I tried changing Grid to
class Grid():
    def __init__(self):
        self.cells = [[Cell() for i in range(0, 500)] for j in range(0, 500)]
        self.add(100)
    def add(self, num):
        for x in range(0, 100):
            for y in range(0, 100):
                for z in range(0, num):
                    self.cells[x][y].addObject(Object())
    def simulate(self):
        for x in range(0, 500):
            for y in range(0, 500):
                self.cells[x][y].simulate()
        num = gc.collect()
        print " " + str(num) + " deleted today."
        print len(gc.get_objects())
and then calling Grid.add(50) halfway through the process. The memory allocated to the program did not increase (watching top in Bash). So my learning points:
GC was running without my knowledge
Memory is allocated and never returned to the system until the program is done
Python will reuse the memory
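A minimal sketch of the last two points (this is my own illustration, not from the original program; the exact numbers will vary):
import gc

class Thing(object):
    pass

baseline = len(gc.get_objects())

data = [Thing() for i in range(100000)]
print len(gc.get_objects()) - baseline   # roughly 100000 extra tracked objects

del data
gc.collect()
print len(gc.get_objects()) - baseline   # back near zero, although the process's
                                         # memory footprint shown by top usually
                                         # does not shrink; Python keeps the freed
                                         # memory around and reuses it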

Type detection and collision avoidance at constructor time

Thanks everyone for your help so far. I've narrowed it down a bit. If you look at the HERE markers in both the script and the class, and run the script, you'll see what is going on.
The ADD line prints "789 789" when it should be printing "456 789".
What appears to be happening is that in __new__ the class is detecting the type of the incoming argument. However, if the incoming object has the same type as the class being constructed, it appears to be paging the incoming object into itself (at the class level) instead of returning the old object. That is the only thing I can think of that would cause 456 to get creamed.
So how do you detect something that is the same type as the class, within a constructor, and decide NOT to page that data into the class memory space, but instead return the previously constructed object?
Foo.py:
import sys
import math

class Foo():
    # class level property
    num = int(0)
    #
    # Python Instantiation Customs:
    #
    # Processing polymorphic input, __new__() MUST return something
    # (an object?), but __init__() cannot return anything. During runtime
    # __new__ is running at the class level, while __init__ is running
    # at the instance level.
    #
    def __new__(self, *arg):
        print ("arg type: ", type(arg[0]).__name__)
        ### functionally the same as isinstance() below
        #
        # if (type(arg[0]).__name__) == "type":
        #     if arg[0].__name__ == "Foo":
        #         print ("\tinput was a Foo")
        #         return arg[0] # objects of same type intercede
        ### HERE <-------------------------------------
        #
        # this creams ALL instances, because since we are a class
        # the properties of the incoming object seem to override
        # the class, rather than exist as a separate data structure.
        if (isinstance(arg[0], Foo)):
            print ("\tinput was a Foo")
            return arg[0]  # objects of same type intercede
        elif (type(arg[0]).__name__) == "int":
            print ("\tinput was an int")
            self.inum = int(arg[0])  # integers store
            return self
        elif (type(arg[0]).__name__) == "str":
            print ("\tinput was a str")
            self.inum = int(arg[0])  # strings become integers
            return self
        return self
    def __init__(self, *arg):
        pass
    #
    # because if I can do collision avoidance, I can instantiate
    # inside overloaded operators:
    #
    def __add__(self, *arg):
        print ("add operator overload")
        # no argument returns self
        if not arg:
            return self
        # add to None or zero returns self
        if not arg[0]:
            return self
        knowntype = Foo.Foo(arg[0])
        # add to unknown type returns False
        if not knowntype:
            return knowntype
        # both values are calculable, calculate and return a Foo
        typedresult = (self.inum + knowntype.inum)
        return Foo.Foo(typedresult)
    def __str__(self):  # return a stringified int or empty string
        # since integers don't have character length,
        # this tests the value, not the existence of:
        if self.inum:
            return str(self.inum)
        # so the property could still be zero and we have to
        # test again for no reason.
        elif self.inum == 0:
            return str(self.inum)
        # return an empty str if nothing is defined.
        return str("")
testfoo.py:
#! /usr/bin/python
import sys
import Foo
# A python class is not transparent like in perl, it is an object
# with unconditional inheritance forced on all instances that share
# the same name.
classhandle = Foo.Foo
# The distinction between the special class object, and instance
# objects is implicitly defined by whether there is a passed value at
# constructor time. The following therefore does not work.
# classhandle = Foo.Foo()
# but we can still write and print from the class, and see it propagate,
# without having any "object" memory allocated.
print ("\nclasshandle: ", classhandle)
print ("classhandle classname: ", classhandle.__name__) # print the classname
print ("class level num: ", classhandle.num) # print the default num
classhandle.classstring = "fdsa" # define an involuntary value for all instances
print ("\n")
# so now we can create some instances with passed properties.
instance1 = Foo.Foo(int(123)) #
print ("\ninstance1: ", instance1)
print ("involuntary property derived from special class memory space: ", instance1.classstring)
print ("instance property from int: ", instance1.inum)
print ("\n")
instance2 = Foo.Foo(str("456"))
print ("\ninstance2: ", instance2)
print ("instance2 property from int: ", instance2.inum)
#
# instance3 stands for (shall we assume) some math that happened a
# thousand lines ago in a class far far away. We REALLY don't
# want to go chasing around to figure out what type it could possibly
# be, because it could be polymorphic itself. Providing a black box so
# that you don't have to do that is, after all, the whole point of OOP.
#
print ("\npretend instance3 is unknowningly already a Foo")
instance3 = Foo.Foo(str("789"))
## So our class should be able to handle str,int,Foo types at constructor time.
print ("\ninstance4 should be a handle to the same memory location as instance3")
instance4 = Foo.Foo(instance3) # SHOULD return instance3 on type collision
# because if it does, we should be able to hand all kinds of garbage to
# overloaded operators, and they should remain type safe.
# HERE <-----------------------------
#
# the creation of instance4, changes the instance properties of instance2:
# below, the instance properties inum, are now both "789".
print ("ADDING: ", instance2.inum, " ", instance4.inum)
# instance6 = instance2 + instance4 # also should be a Foo object
# instance5 = instance4 + int(549) # instance5 should be a Foo object.
How do I, at constructor time, return a non-new object?
By overriding the constructor method, __new__, not the initializer method, __init__.
The __new__ method constructs an instance—normally by calling the super's __new__, which eventually gets up to object.__new__, which does the actual allocation and other under-the-covers stuff, but you can override that to return a pre-existing value.
The __init__ method is handed a value that's already been constructed by __new__, so it's too late for it to not construct that value.
Notice that if Foo.__new__ returns a Foo instance (whether a newly-created one or an existing one), Foo.__init__ will be called on it. So, classes that override __new__ to return references to existing objects generally need an idempotent __init__—typically, you just don't override __init__ at all, and do all of your initialization inside __new__.
There are lots of examples of trivial __new__ methods out there, but let's show one that actually does a simplified version of what you're asking for:
class Spam:
    _instances = {}
    def __new__(cls, value):
        if value not in cls._instances:
            cls._instances[value] = super().__new__(cls)
            cls._instances[value].value = value
        return cls._instances[value]
Now:
>>> s1 = Spam(1)
>>> s2 = Spam(2)
>>> s3 = Spam(1)
>>> s1 is s2
False
>>> s1 is s3
True
Notice that I made sure to use super rather than object, and cls._instances¹ rather than Spam._instances. So:
>>> class Eggs(Spam):
...     pass
>>> e4 = Eggs(4)
>>> Spam(4)
<__main__.Eggs at 0x12650d208>
>>> Spam(4) is e4
True
>>> class Cheese(Spam):
...     _instances = {}
>>> c5 = Cheese(5)
>>> Spam(5)
<__main__.Spam at 0x126c28748>
>>> Spam(5) is c5
False
However, it may be a better option to use a classmethod alternate constructor, or even a separate factory function, rather than hiding this inside the __new__ method.
For some types—like, say, a simple immutable container like tuple—the user has no reason to care whether tuple(…) returns a new tuple or an existing one, so it makes sense to override the constructor. But for some other types, especially mutable ones, it can lead to confusion.
The best test is to ask yourself whether this (or similar) would be confusing to your users:
>>> f1 = Foo(x)
>>> f2 = Foo(x)
>>> f1.spam = 1
>>> f2.spam = 2
>>> f1.spam
2
If that can't happen (e.g., because Foo is immutable), override __new__.
If that is exactly what users would expect (e.g., because Foo is a proxy to some object that has the actual spam, and two proxies to the same object had better see the same spam), probably override __new__.
If it would be confusing, probably don't override __new__.
For example, with a classmethod:
>>> f1 = Foo.from_x(x)
>>> f2 = Foo.from_x(x)
… it's a lot less likely to be surprising if f1 is f2 turns out to be true.
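If you go the factory route, a minimal sketch might look like this (the from_x name and the _cache dict are illustrative, not part of the answer above):
class Foo:
    _cache = {}

    @classmethod
    def from_x(cls, x):
        # reuse the instance already built for this x, if any
        if x not in cls._cache:
            obj = cls()          # plain construction, no __new__ tricks
            obj.x = x
            cls._cache[x] = obj
        return cls._cache[x]
This keeps Foo(x) meaning "always build a fresh object", while the caching behaviour gets an explicit, discoverable name.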
1. Even though you define __new__ like an instance method, and its body looks like a class method, it's actually a static method that gets passed the class you're trying to construct (which will be Spam or a subclass of Spam) as an ordinary first parameter, with the constructor arguments (and keyword arguments) passed after that.
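A quick way to see that (assuming CPython and the Spam class above):
>>> type(Spam.__dict__['__new__'])
<class 'staticmethod'>
>>> Spam.__new__(Spam, 6) is Spam(6)   # called by hand with the class as its first argument
True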
Thanks everyone who helped! This answer was sought out to understand how to refactor an existing program that was already written but was having scalability problems. The following is the completed working example. What it demonstrates is:
The ability to test incoming types and avoid unnecessary object duplication at constructor time, given incoming types that are both user-defined and built-in. The ability to construct on the fly from a redefined operator or method. These capabilities are necessary for writing scalable, supportable API code. YMMV.
Foo.py
import sys
import math

class Foo():
    # class level property
    num = int(0)
    #
    # Python Instantiation Customs:
    #
    # Processing polymorphic input, __new__() MUST return something
    # (an object), but __init__() may not return anything. During runtime
    # __new__ is running at the class level, while __init__ is
    # running at the instance level.
    #
    def __new__(cls, *arg):
        print ("arg type: ", type(arg[0]).__name__)
        # since we are functioning at the class level, type()
        # is reaching down into a non-public namespace,
        # called "type", which is presumably something that
        # all objects are ultimately derived from.
        # functionally this is the same as isinstance()
        if (type(arg[0]).__name__) == "Foo":
            fooid = id(arg[0])
            print ("\tinput was a Foo: ", fooid)
            return arg[0]  # objects of same type intercede
        # at the class level here, we are calling into super() for
        # the constructor. This is presumably derived from the type()
        # namespace, which when handed a classname, makes one of
        # whatever it was asked for, rather than one of itself.
        elif (type(arg[0]).__name__) == "int":
            self = super().__new__(cls)
            self.inum = int(arg[0])  # integers store
            fooid = id(self)
            print ("\tinput was an int: ", fooid)
            return (self)
        elif (type(arg[0]).__name__) == "str":
            self = super().__new__(cls)
            self.inum = int(arg[0])  # strings become integers
            fooid = id(self)
            print ("\tinput was a str: ", fooid)
            return (self)
    # def __init__(self, *arg):
    #     pass
    #
    # because if I can do collision avoidance, I can instantiate
    # inside overloaded operators:
    #
    def __add__(self, *arg):
        argtype = type(arg[0]).__name__
        print ("add overload in class:", self.__class__)
        if argtype == "Foo" or argtype == "str" or argtype == "int":
            print ("\tfrom a supported type")
            # early exit for zero
            if not arg[0]:
                return self
            # localized = Foo.Foo(arg[0])
            # FAILS: AttributeError: type object 'Foo' has no attribute 'Foo'
            # You can't call a constructor the same way from inside and outside
            localized = Foo(arg[0])
            print ("\tself class: ", self.__class__)
            print ("\tself number: ", self.inum)
            print ()
            print ("\tlocalized class: ", localized.__class__)
            print ("\tlocalized number: ", localized.inum)
            print ()
            answer = (self.inum + localized.inum)
            answer = Foo(answer)
            print ("\tanswer class:", answer.__class__)
            print ("\tanswer sum result:", answer.inum)
            return answer
        assert(0), "Foo: cannot add an unsupported type"
    def __str__(self):  # return a stringified int or empty string
        # Allow the class to stringify as if it were an int.
        if self.inum >= 0:
            return str(self.inum)
testfoo.py
#! /usr/bin/python
import sys
import Foo
# A python class is not transparent like in perl, it is an object
# with unconditional inheritance forced on all instances that share
# the same name.
classhandle = Foo.Foo
# The distinction between the special class object, and instance
# objects is implicitly defined by whether there is a passed value at
# constructor time. The following therefore does not work.
# classhandle = Foo.Foo()
# but we can still write and print from the class, and see it propagate,
# without having any "object" memory allocated.
print ("\nclasshandle: ", classhandle)
print ("classhandle classname: ", classhandle.__name__) # print the classname
print ("class level num: ", classhandle.num) # print the default num
classhandle.classstring = "fdsa" # define an involuntary value for all instances
print ("\n")
# so now we can create some instances with passed properties.
instance1 = Foo.Foo(int(123)) #
print ("\ninstance1: ", instance1)
print ("involuntary property derived from special class memory space: ", instance1.classstring)
print ("instance property from int: ", instance1.inum)
print ("\n")
instance2 = Foo.Foo(str("456"))
print ("\ninstance2: ", instance2)
print ("instance2 property from int: ", instance2.inum)
#
# instance3 stands for (shall we assume) some math that happened a
# thousand lines ago in a class far far away. We REALLY don't
# want to go chasing around to figure out what type it could possibly
# be, because it could be polymorphic itself. Providing a black box so
# that you don't have to do that is, after all, the whole point of OOP.
#
print ("\npretend instance3 is unknowningly already a Foo\n")
instance3 = Foo.Foo(str("789"))
## So our class should be able to handle str,int,Foo types at constructor time.
print ("\ninstance4 should be a handle to the same memory location as instance3\n")
instance4 = Foo.Foo(instance3) # SHOULD return instance3 on type collision
print ("instance4: ", instance4)
# because if it does, we should be able to hand all kinds of garbage to
# overloaded operators, and they should remain type safe.
# since we are now different instances these are now different:
print ("\nADDING:_____________________\n", instance2.inum, " ", instance4.inum)
instance5 = instance4 + int(549) # instance5 should be a Foo object.
print ("\n\tAdd instance4, 549, instance5: ", instance4, " ", int(549), " ", instance5, "\n")
instance6 = instance2 + instance4 # also should be a Foo object
print ("\n\tAdd instance2, instance4, instance6: ", instance2, " ", instance4, " ", instance6, "\n")
print ("stringified instance6: ", str(instance6))

Why can two functions with the same `id` have different attributes?

Why can two functions with the same id value have differing attributes like __doc__ or __name__?
Here's a toy example:
some_dict = {}
for i in range(2):
    def fun(self, *args):
        print i
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun

my_type = type("my_type", (object,), some_dict)
m = my_type()

print id(m.function_0)
print id(m.function_1)
print m.function_0.__doc__
print m.function_1.__doc__
print m.function_0.__name__
print m.function_1.__name__
print m.function_0()
print m.function_1()
Which prints:
57386560
57386560
I am function 0
I am function 1
function_0
function_1
1 # <--- Why is it bound to the most recent value of that variable?
1
I've tried mixing in a call to copy.deepcopy (not sure if the recursive copy is needed for functions or it is overkill) but this doesn't change anything.
You are comparing methods, and method objects are created anew each time you access one on an instance or class (via the descriptor protocol).
Once you tested their id() you discard the method again (there are no references to it), so Python is free to reuse the id when you create another method. You want to test the actual functions here, by using m.function_0.__func__ and m.function_1.__func__:
>>> id(m.function_0.__func__)
4321897240
>>> id(m.function_1.__func__)
4321906032
Method objects inherit the __doc__ and __name__ attributes from the function that they wrap. The actual underlying functions are really still different objects.
As for the two functions returning 1; both functions use i as a closure; the value for i is looked up when you call the method, not when you created the function. See Local variables in Python nested functions.
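Here is a minimal interactive sketch of that late binding (in the question's code i is actually a module-level global rather than a closure variable, but the effect is the same):
>>> funcs = []
>>> for i in range(2):
...     def fun():
...         print i
...     funcs.append(fun)
...
>>> funcs[0]()
1
>>> funcs[1]()
1
>>> i = 42
>>> funcs[0]()   # i is looked up at call time, not at definition time
42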
The easiest work-around is to add another scope with a factory function:
some_dict = {}
for i in range(2):
    def create_fun(i):
        def fun(self, *args):
            print i
        fun.__doc__ = "I am function {}".format(i)
        fun.__name__ = "function_{}".format(i)
        return fun
    some_dict["function_{}".format(i)] = create_fun(i)
Per your comment on ndpu's answer, here is one way you can create the functions without needing to have an optional argument:
for i in range(2):
    def funGenerator(i):
        def fun1(self, *args):
            print i
        return fun1
    fun = funGenerator(i)
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
@Martijn Pieters is perfectly correct. To illustrate, try this modification:
some_dict = {}
for i in range(2):
    def fun(self, *args):
        print i
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
    print "id", id(fun)

my_type = type("my_type", (object,), some_dict)
m = my_type()

print id(m.function_0)
print id(m.function_1)
print m.function_0.__doc__
print m.function_1.__doc__
print m.function_0.__name__
print m.function_1.__name__
print m.function_0()
print m.function_1()

c = my_type()
print c
print id(c.function_0)
You can see that fun gets a different id each time, and is different from the final one. It's the method creation logic that sends it pointing to the same location, as that's where the class's code is stored. Also, if you use my_type as a sort of class, instances created with it have the same memory address for that function.
This code gives:
id 4299601152
id 4299601272
4299376112
4299376112
I am function 0
I am function 1
function_0
function_1
1
None
1
None
<__main__.my_type object at 0x10047c350>
4299376112
You should save current i to make this:
1 # <--- Why is it bound to the most recent value of that variable?
1
work, for example by setting a default value for a function argument:
for i in range(2):
    def fun(self, i=i, *args):
        print i
    # ...
or create a closure:
for i in range(2):
    def f(i):
        def fun(self, *args):
            print i
        return fun
    fun = f(i)
    # ...

Printing from within properties

I'm trying to make a robotics kit. It's designed to be simple, so I'm using properties so that when the user changes a parameter, the property method sends the serial command which controls motors/servos/whatever.
This is the code at the moment, directly from a previous question I asked on here.
class Servo(object):
    def __init__(self, which_servo, angle = 0):
        self._angle = angle;
        self._servo_no = which_servo
    def get_angle(self):
        return self._angle
    def set_angle(self, value):
        self._angle = value
        print "replace this print statement with the code to set servo, notice that this method knows the servo number AND the desired value"
    def del_angle(self):
        del self._angle
    angle = property(get_angle, set_angle, del_angle, "I'm the 'angle' property.")
this is then initialized as such:
class robot(object):
    def __init__(self):
        self.servos = [Servo(0), Servo(1), Servo(2), Servo(3)]
Now, this works in the respect that it does change the variable through the getter and setter functions; however, the prints in the getter and setter are never printed, so if I replace them with a serial command I assume it won't do anything either. Can anyone shed any light on this?
Thanks
Update: Thanks for the help. Using the servo file, this is what's happened. There are three scenarios: the first works, and by extension I would have assumed the next two (preferable) scenarios would work, but they don't. Any ideas?
This works:
import servo

class Robot(object):
    def __init__(self):
        self.servos = [servo.Servo(0, 0), servo.Servo(1, 0), servo.Servo(2, 0)]

R = Robot()
R.servos[1].angle = 25
This does not:
import servo

class Robot(object):
    def __init__(self):
        self.servos = [servo.Servo(0, 0), servo.Servo(1, 0), servo.Servo(2, 0)]

R = Robot()
left_servo = R.servos[1].angle
left_servo = 25
Neither does this
import servo

class Robot(object):
    def __init__(self):
        self.servos = [servo.Servo(0, 0).angle, servo.Servo(1, 0).angle, servo.Servo(2, 0).angle]

R = Robot()
R.servo[1] = 25
Using the preferred decorator syntax for properties, this works fine. It'll also help you avoid issues like this in the future:
class Servo(object):
    def __init__(self, which_servo, angle = 0):
        self._angle = angle;
        self._servo_no = which_servo
    @property
    def angle(self):
        return self._angle
    @angle.setter
    def angle(self, value):
        self._angle = value
        print "replace this print statement with the code to set servo"
    @angle.deleter
    def angle(self):
        del self._angle
Seeing as your indentation is off here, I believe this is likely an issue of indentation in your source. This should work as well if you really want to use the old property function:
class Servo(object):
    def __init__(self, which_servo, angle = 0):
        self._angle = angle;
        self._servo_no = which_servo
    def get_angle(self):
        return self._angle
    def set_angle(self, value):
        self._angle = value
        print "replace this print statement with the code to set servo"
    def del_angle(self):
        del self._angle
    angle = property(get_angle, set_angle, del_angle, "I'm the 'angle' property.")
Both of these work successfully for me (inside a file called servo.py)
>>> import servo
>>> s = servo.Servo(1, 2)
>>> s.angle
2
>>> s.angle = 3
replace this print statement with the code to set servo
EDIT
To address your new issues: when you assign R.servos[1].angle to left_servo, it's not creating a reference to the servo's angle, it's just setting left_servo to whatever the angle is. When you reassign 25 to it, you're not assigning to the angle, you're assigning to left_servo.
On the second one, I'm assuming you meant R.servos and not R.servo, which should be raising an AttributeError. But the real problem, as I see it, is that you should be saying R.servos[1].angle = 25 and you're omitting the .angle.
To (attempt to) put it simply: when you use the = operator, you are changing which object a name refers to, not modifying the object it used to refer to.
>>> x = 1
>>> x = 2
The second assignment does not overwrite the 1 in memory with a 2; it just changes what x refers to. So if I did something like
>>> x = 1
>>> y = x
>>> y = 2
>>> print x
1
the output is 1 because you are telling y to refer to the same object that x refers to. Changing y to 2 changes what y refers to; it does not change the 1 already in memory.
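A minimal sketch of the pattern that does work: keep a reference to the Servo object itself rather than to its angle value, and assign through the property (names taken from the question's code):
import servo

class Robot(object):
    def __init__(self):
        self.servos = [servo.Servo(0, 0), servo.Servo(1, 0), servo.Servo(2, 0)]

R = Robot()
left_servo = R.servos[1]   # a reference to the Servo object, not to its angle
left_servo.angle = 25      # goes through the property setter, so the print
                           # (or serial command) actually runs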

Class does not seem to be Global in python

I set up a class, and it accepts and prints out the variables fine in one if statement.
class npc: #class for creating mooks
    def __init__(self, name):
        self.name = name
    def npc_iq(self, iq):
        self.iq = []
    def npc_pp(self, pp):
        self.pp = []
    def npc_melee(self, melee):
        self.melee = []
    def npc_ct(self, ct):
        self.ct = []
It works fine in this if statement
if menu_option == 1:
    print "Choose melees for npc"
    init_bonus = random.randint(0,2)
    char_PP = random.randint(7,15)
    char_iq = random.randint(7,15)
    npc_Melees = int(raw_input(prompt))
    combat_time = math.floor((round_attacks - init_bonus - math.floor(char_PP/2) - math.floor(char_iq/2)) / npc_Melees)
    #function for calculating sequence number
    print "combat time is"
    print combat_time
    mook = "mook%s" % counter # adds different mook names to program
    mook = npc(mook)
    mook.iq = (char_iq)
    mook.pp = (char_PP)
    mook.melee = (npc_Melees)
    mook.ct = (combat_time)
    counter += 1
But in this statement it will print out the name from the class but not ct.
elif menu_option == 4:
    print "Printing out all mooks"
    print
    printcount = counter - 1
    while printcount != 0:
        mookprint = "mook%s" % printcount
        mookprint = npc(mookprint)
        print mookprint.name
        print mookprint.ct
        print
        printcount -= 1
Why would mookprint have any idea what value ct should be? The constructor for npc initialises a new instance of npc with the name given as a parameter, but ct is left empty.
When you create an NPC in menu option 1, you do not create a global instance of npc. If you want to refer to a previously created instance of npc, you will need to find some way of storing them. Dictionaries may be a good solution for you. A dictionary is an object that holds mappings between keys and values. If you know the key, then you can find the associated value. In this case you would make the name the key and the npc instance the value.
eg.
npcsDict = dict()

if menu_option == 1:
    # code for initialising a new instance of npc
    ...
    # most, if not all, of the initialisation code should be moved to the
    # __init__ method for npc
    # now store the newly created mook
    npcsDict[mook.name] = mook
elif menu_option == 4:
    print "Printing out all mooks"
    print
    for mookName in npcsDict:
        print npcsDict[mookName].name
        print npcsDict[mookName].ct
        print
I don't really understand your problem.
Your working example:
mook = npc(mook)
mook.iq = (char_iq)
mook.pp = (char_PP)
mook.melee = (npc_Melees)
mook.ct = (combat_time)
mook.ct is set to the value of combat_time.
Your failing example:
mookprint = npc(mookprint)
print mookprint.name
print mookprint.ct
mookprint.ct's value is nothing because it is never set.
The elif will only be executed if the if has not been, so if the elif block runs, ct was never set.
I don't think you're understanding how these four lines work:
mookprint = "mook%s" % printcount
mookprint = npc(mookprint)
print mookprint.name
print mookprint.ct
Every time this block of code is run, the following things are happening:
1. You're assigning a string of the form "mook1", "mook2", etc. to the variable mookprint.
2. You're creating a new instance of the npc class. Note that all of the instances you're creating are separate from each other. This new instance will have a name attribute set to the string that was previously held in the variable mookprint, and this instance of npc will be assigned to mookprint.
3. You're printing the name attribute of the instance of the npc class that you created in the previous step. This works because when this instance was created, the __init__ method of your class was called with the argument name being set to "mook1" or whatever was stored in mookprint at the time.
4. You're printing the ct attribute of the instance of the npc class that you just created. Since you never set the ct attribute to anything, this will not work how you expected.
If you want to count the number of instances of your npc class, you'll need to create a class attribute. This is a variable whose value is common across all instances of a class. To do so, you'll need to modify your class definition to add an item to this attribute every time you make a new instance of the class. It will look something like this:
class npc: #class for creating mooks
    ct = []
    def __init__(self, name):
        self.name = name
        self.ct.append(name)
    def get_ct(self):
        return len(self.ct)
With the above, the variable ct will be a list that is common to all instances of npc and will grow every time a new npc is created. Then the method get_ct will count how long this list is.
Then you'll need to modify the four lines I mentioned to look like:
mookprint = "mook%s" % printcount
mookprint = npc(mookprint)
print mookprint.name
print mookprint.get_ct()
I think the code above shows how to change your code to work more how you expected it to work. However, it should be noted that you rarely want to create classes where each instance depends on information about the other instances. It is usually a better design to do something like Dunes suggested, storing the instances in a dictionary, or some other data structure, and keeping track of them that way.

Undesired python feedparser instantiation relic

Question: How do I kill an instance, or ensure I'm creating a new instance, of the Python Universal Feed Parser?
Info:
I'm working on a program right now that downloads and catalogs large numbers of blogs. It has worked well so far, except for an unfortunate bug. My code is set up to take a list of blog URLs and run them through a for loop. Each iteration it picks a URL and sends it down to a separate class which manages the downloading, extracting, and saving of the data to a file.
The first URL works just fine. It downloads the entirety of the blog and saves it to a file. But the second blog it downloads will have all the data from the first one as well, and I'm totally clueless as to why.
Code snippets:
class BlogHarvester:
    def __init__(self, folder):
        f = open(folder, 'r')
        stop = folder[len(folder)-1]
        while stop != '/':
            folder = folder[0:len(folder)-1]
            stop = folder[len(folder)-1]
        blogs = []
        for line in f:
            blogs.append(line)
        for herf in blogs:
            blog = BlogParser(herf)
            sPath = ""
            uid = newguid()  ## returns random hash.
            sPath = uid
            sPath = sPath + " - " + blog.posts[0].author[1:5] + ".blog"
            print sPath
            blog.storeAsFile(sPath)

class BlogParser:
    def __init__(self, blogherf='null', path='null', posts = []):
        self.blogherf = blogherf
        self.blog = feedparser.parse(blogherf)
        self.path = path
        self.posts = posts
        if blogherf != 'null':
            self.makeList()
        elif path != 'null':
            self.loadFromFile()

class BlogPeices:
    def __init__(self, title, author, post, date, publisher, rights, comments):
        self.author = author
        self.title = title
        self.post = post
        self.date = date
        self.publisher = publisher
        self.rights = rights
        self.comments = comments
I included the snippets I figured would probably be useful. Sorry if there are any confusing artifacts. This program has been a pain in the butt.
The problem is posts=[]. Default argument values are evaluated once, when the function definition is executed, not on every call, so mutations to that single default object persist for the lifetime of the function (and therefore across all BlogParser instances). Instead use posts=None and test:
if posts is None:
    self.posts = []
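A sketch of how the question's BlogParser constructor looks with that fix applied (only the default argument handling changes; the rest is the question's own code):
class BlogParser:
    def __init__(self, blogherf='null', path='null', posts=None):
        self.blogherf = blogherf
        self.blog = feedparser.parse(blogherf)
        self.path = path
        self.posts = [] if posts is None else posts   # a fresh list per instance
        if blogherf != 'null':
            self.makeList()
        elif path != 'null':
            self.loadFromFile()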
As Ignacio said, any mutations that happen to a default argument in the parameter list will stick around for the life of the function.
From http://docs.python.org/reference/compound_stmts.html#function-definitions
Default parameter values are evaluated when the function definition is executed. This means that the expression is evaluated once, when the function is defined, and that that same "pre-computed" value is used for each call. This is especially important to understand when a default parameter is a mutable object, such as a list or a dictionary: if the function modifies the object (e.g. by appending an item to a list), the default value is in effect modified. This is generally not what was intended. A way around this is to use None as the default, and explicitly test for it in the body of the function.
But this brings up sort of a gotcha: you are modifying a reference... so you may be modifying a list that the consumer of the class did not expect to be modified:
For example:
class A:
    def foo(self, x = []):
        x.append(1)
        self.x = x

a = A()
a.foo()
print a.x
# prints: [1]
a.foo()
print a.x
# prints: [1, 1] # !!!! Consumer would expect this to be [1]
y = [1, 2, 3]
a.foo(y)
print a.x
# prints: [1, 2, 3, 1]
print y
# prints: [1, 2, 3, 1] # !!!! My list was modified
If you were to copy it instead: (See http://docs.python.org/library/copy.html )
import copy

class A:
    def foo(self, x = []):
        x = copy.copy(x)
        x.append(1)
        self.x = x

a = A()
a.foo()
print a.x
# prints: [1]
a.foo()
print a.x
# prints: [1] # !!! Much better =)
y = [1, 2, 3]
a.foo(y)
print a.x
# prints: [1, 2, 3, 1]
print y
# prints: [1, 2, 3] # !!!! My list is how I made it
