I'm struggling to understand why my simple code behaves like this. I create two instances, a and b, that each take an array as an argument. Then I define a method to change one instance's array, but both get changed. Any idea why this happens and how I can keep the method from changing the other instance?
import numpy as np
class Test:
    def __init__(self, arg):
        self.arg = arg

    def change(self, i, j, new):
        self.arg[i][j] = new
array=np.array([[11,12,13]])
a=Test(array)
b=Test(array)
#prints the same as expected
print(a.arg)
print(b.arg)
print()
a.change(0,0,3)
#still prints the same, even though I did
#not change b.arg
print(a.arg)
print(b.arg)
That happens because you assigned the same object to both instances' attributes. You can use np.array(x, copy=True) or x.copy() to give each instance its own array object:
array = np.array([[11,12,13]])
a = Test(array.copy())
b = Test(np.array(array, copy=True))
Alternatively, if your arg is always a np.array, you could do it in the __init__ method (as noted by roganjosh in the comments):
class Test:
    def __init__(self, arg):
        self.arg = np.array(arg, copy=True)
    ...
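With the copy in __init__, mutating a no longer touches b; a quick check against the question's own example:
array = np.array([[11, 12, 13]])
a = Test(array)
b = Test(array)
a.change(0, 0, 3)
print(a.arg)  # [[ 3 12 13]]
print(b.arg)  # [[11 12 13]]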
Is it possible to define an instance variable in a class as a function of another instance variable? I haven't gotten it to work unless I redefine the "function instance variable" every time.
Basically, you could have a scenario where one instance variable is a list of integers, and you want their sum as another instance variable that automatically updates every time the list is updated.
Is this possible?
class Example:
    list_variable = []
    sum_variable = sum(list_variable)

    def __init__(self, list_variable):
        self.list_variable = list_variable
        return
This will leave sum_variable at 0 unless you change it.
I understand this is far from a major issue: you could either define sum_variable as a method or recompute it every time you change list_variable. I'm just wondering if it's possible to skip those steps.
Python offers the property decorator for a syntactically identical use of your example:
class Example:
    list_variable = []

    def __init__(self, list_variable):
        self.list_variable = list_variable
        return

    @property
    def sum_variable(self):
        return sum(self.list_variable)
e = Example(list_variable=[10, 20, 30])
e.sum_variable # returns 60
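Since sum_variable is now computed on each access, it tracks later changes to the list automatically:
e.list_variable.append(40)
e.sum_variable # returns 100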
I'm creating a class sequence, which inherits from the builtin list and holds an ordered collection of a second class, d0, which inherits from int. In addition to its int value, d0 must carry a secondary value, i, which denotes where it sits in the sequence, and a reference to the sequence itself.
My understanding is that because int is an immutable type, I have to use the __new__ method, and because d0 will have other attributes, I also need to use __init__.
I've been trying for a while to get this to work and I've explored a few options.
Attempt 1:
class sequence(list):
    def __init__(self, data):
        for i, elem in enumerate(data): self.append( d0(elem, i, self) )

class d0(int):
    def __new__(self, val, i, parent):
        self.i = i
        self.parent = parent
        return int.__new__(d0, val)
x = sequence([1,2,3])
print([val.i for val in x])
This was the most intuitive to me, but every time self.i is assigned, it overwrites the i attribute for all other instances of d0 in the sequence. Though I'm not entirely clear why this happens, I understand that __new__ is not the place to instantiate an object.
Attempt 2:
class sequence(list):
    def __init__(self, data):
        for i, val in enumerate(data): self.append( d0(val, i, self) )

class d0(int):
    def __new__(cls, *args):
        return super().__new__(cls, *args)

    def __init__(self, *args):
        self = args[0]
        self.i = args[1]
        self.parent = args[2]
x = sequence([1,2,3])
print([val.i for val in x])
This raises TypeError: int() takes at most 2 arguments (3 given), though I'm not sure why.
Attempt 3:
class sequence(list):
    def __init__(self, data):
        for i, val in enumerate(data):
            temp = d0.__new__(d0, val)
            temp.__init__(i, self)
            self.append(temp)

class d0(int):
    def __new__(cls, val):
        return int.__new__(d0, val)

    def __init__(self, i, parent):
        self.i = i
        self.parent = parent
x = sequence([1,2,3])
print([val.i for val in x])
This accomplishes the task, but it is cumbersome, and it feels strange to have to explicitly call __new__ and __init__ to instantiate an object.
What is the proper way to accomplish this? I would also appreciate any explanation for the undesired behavior in attempts 1 and 2.
First, your sequence isn’t much of a type so far: calling append on it won’t preserve its indexed nature (let alone sort or slice assignment!). If you just want to make lists that look like this, just write a function that returns a list. Note that list itself behaves like such a function (it was one back in the Python 1 days!), so you can often still use it like a type.
So let’s talk just about d0. Leaving aside the question of whether deriving from int is a good idea (it’s at least less work than deriving from list properly!), you have the basic idea correct: you need __new__ for an immutable (base) type, because at __init__ time it’s too late to choose its value. So do so:
class d0(int):
    def __new__(cls, val, i, parent):
        return super().__new__(cls, val)
Note that this is a class method: there’s no instance yet, but we do need to know what class we’re instantiating (what if someone inherits from d0?). This is what attempt #1 got wrong: it thought the first argument was an instance to which to assign attributes.
Note also that we pass only one (other) argument up: int can’t use our ancillary data. (Nor can it ignore it: consider int('f',16).) Thus failed #2: it sent all the arguments up.
We can install our other attributes now, but the right thing to do is use __init__ to separate manufacturing an object from initializing it:
# d0 continued
    def __init__(self, val, i, parent):
        # super().__init__(val)
        self.i = i; self.parent = parent
Note that all the arguments appear again, even val which we ignore. This is because calling a class involves only one argument list (cf. d0(elem,i,self)), so __new__ and __init__ have to share it. (It would therefore be formally correct to pass val to int.__init__, but what would it do with it? There’s no use in calling it at all since we know int is already completely set up.) Using #3 was painful because it didn’t follow this rule.
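Putting the answer's pieces together with the question's sequence class, a minimal runnable sketch (keeping the original lowercase class names) looks like this:
class d0(int):
    def __new__(cls, val, i, parent):
        # the immutable int value has to be chosen here
        return super().__new__(cls, val)

    def __init__(self, val, i, parent):
        # ordinary attributes go on the already-created instance
        self.i = i
        self.parent = parent

class sequence(list):
    def __init__(self, data):
        for i, elem in enumerate(data):
            self.append(d0(elem, i, self))

x = sequence([1, 2, 3])
print([val.i for val in x])     # [0, 1, 2]
print([int(val) for val in x])  # [1, 2, 3]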
I am confused by the behaviour of the following code:
data = [0,1,2,3,4,5]
class test():
    def __init__(self, data):
        self.data = data
        self.data2 = data

    def main(self):
        del self.data2[3]
test_var = test(data)
test_var.main()
print(test_var.data)
print(test_var.data2)
What I would expect to come out is this:
[0,1,2,3,4,5]
[0,1,2,4,5]
What I get is this:
[0,1,2,4,5]
[0,1,2,4,5]
Why is an element from the second list getting deleted when it's not directly changed? Or does Python handle attributes in such a way that this happens normally?
So how should I change the code to get what I want?
Lists are mutable in Python, and assigning one or passing it as an argument hands around a reference to the same object, not a copy. Hence the outcome you're seeing. If you want each attribute to be an independent copy, you need to deepcopy it.
import copy

class test():
    def __init__(self, data):
        self.data = copy.deepcopy(data)
        self.data2 = copy.deepcopy(data)

# if the list is going to be flat and just contain basic immutable types,
# slicing (or a shallow copy) would do the job just as well.
class test():
    def __init__(self, data):
        self.data = data[::]  # or data[:] for that matter, as @Joe Iddon suggested
        self.data2 = data[::]
Note: not all types of objects support "deep-copying".
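As a quick sanity check (assuming the main method from the question is kept alongside the deepcopy __init__):
data = [0, 1, 2, 3, 4, 5]
test_var = test(data)
test_var.main()
print(test_var.data)   # [0, 1, 2, 3, 4, 5]
print(test_var.data2)  # [0, 1, 2, 4, 5]
print(data)            # [0, 1, 2, 3, 4, 5] -- the original list is untouched too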
Please point me to an explanation of the difference between
object = class()
and
var = class method returning a class:
class Countsome(object):
    @classmethod
    def get(cls, x, y):
        self = cls()
        sum = self.add2(x, y)
        print(sum)
        return cls

    def add2(self, x, y):
        sum = x + y
        return sum
xyz = Countsome.get(5, 9)
==========================================
class CountSome(object):
    def __init__(self):
        pass

    def add2(self, x, y):
        sum = x + y
        print(sum)
xyz = CountSome()
xyz.add2(5, 9)
I'm looking to understand where I should use one or the other. I'm just printing the sum, not returning it, so please assume I'm asking about these kinds of tasks (where returning a result like the sum is not important).
I'm looking for answers about which one is more efficient, and when; what the benefits of each are; and which scenarios each is best suited for. A pointer to a source would be appreciated.
You've got it a bit wrong: classmethod should be used when you need to perform an action that doesn't need an instance but does need the cls object:
A class method receives the class as implicit first argument, just like an instance method receives the instance.
For example, you might keep a COUNTER attribute on your class that counts how many instances have been instantiated.
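As a rough illustration (the Widget name here is made up, not from your code), a class-level counter bumped by a classmethod:
class Widget(object):
    COUNTER = 0

    @classmethod
    def create(cls):
        cls.COUNTER += 1  # acts on the class itself, no instance required
        return cls()

Widget.create()
Widget.create()
print(Widget.COUNTER)  # 2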
Your second snippet is really a candidate for staticmethod, since add2 never uses self: a staticmethod is a method defined in a class that doesn't need access to any class or instance attributes. It could be defined outside the class but lives inside it for convenience.
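So a staticmethod version of add2, sketched from your second snippet, could look like:
class CountSome(object):
    @staticmethod
    def add2(x, y):  # touches neither self nor cls
        return x + y

print(CountSome.add2(5, 9))  # 14 -- no instance needed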
I have a class in Python that acts as a front-end to a C library. The library performs simulations and handles very large arrays of data; it passes forward a ctypes array, and my wrapper converts it into a proper numpy.ndarray.
class SomeClass(object):
    @property
    def arr(self):
        return numpy.array(self._lib.get_arr())
However, in order to make sure that memory problems don't occur, I keep the ndarray data separate from the library data, so changing the ndarray does not change the true array being used by the library. I can, however, pass along a new array of the same shape and overwrite the library's held array.
    @arr.setter
    def arr(self, new_arr):
        self._lib.set_arr(new_arr.ctypes)
So, I can interact with the array like so:
x = SomeClass()
a = x.arr
a[0] += 1
x.arr = a
My desire is to simplify this even more by allowing the syntax to simply be x.arr[0] += 1, which would be more readable and involve fewer variables. I am not exactly sure how to go about creating such a wrapper (I have very little experience writing wrapper classes/functions) that mimics a property but allows item access as in my example.
How would I go about making such a wrapper class? Is there a better way to accomplish this goal? If you have any advice or reading that could help I would appreciate it very much.
This could work. Array is a proxy for the Numpy/C array:
class Array(object):
    def __init__(self):
        # self._lib = ...  (reference to the C library wrapper)
        self.np_array = numpy.array(self._lib.get_arr())

    def __getitem__(self, key):
        # refresh from the library so reads always see its current state
        self.np_array = numpy.array(self._lib.get_arr())
        return self.np_array.__getitem__(key)

    def __setitem__(self, key, value):
        # update the local copy, then push it back to the library
        self.np_array.__setitem__(key, value)
        self._lib.set_arr(self.np_array.ctypes)

    def __getattr__(self, name):
        """Delegate everything else to the NumPy array."""
        try:
            return getattr(self.np_array, name)
        except AttributeError:
            raise AttributeError(
                "'Array' object has no attribute {}".format(name))
Should behave like this:
>>> a = Array()
>>> a[1]
1
>>> a[1] = 10
>>> a[1]
10
The 10 should end up in your C array too.
I think your descriptor should return an instance of a list-like class that knows about self._lib and updates it during the normal operations: append, __setitem__, __getitem__, etc.
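To get the x.arr[0] += 1 syntax from the question, one option (a sketch, assuming Array.__init__ is changed to accept the library object and store it as self._lib) is to return the proxy from the property, reusing the setter from the question:
class SomeClass(object):
    @property
    def arr(self):
        # hand back a proxy bound to the same library wrapper, so that
        # x.arr[0] += 1 routes through Array.__getitem__ / __setitem__
        return Array(self._lib)

    @arr.setter
    def arr(self, new_arr):
        # whole-array replacement still works as before
        self._lib.set_arr(new_arr.ctypes)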