How can I protect class methods in Python from being mistakenly changed? Is there some kind of "write protection"?
Example:
class bar():
    def blob(self):
        return 2

if __name__ == "__main__":
    foo = bar()
    print(foo.blob())  # Returns 2
    foo.blob = 1  # Overwrites the method "blob" without a warning!
    # foo.blob returns 1, foo.blob() is not callable anymore :(
    foo.blib = 1  # Is also possible
    print(foo.blob)
    print(foo.blob())
When I run this script, it returns:
2
1
Traceback (most recent call last):
  File "blob.py", line 18, in <module>
    print(foo.blob())
TypeError: 'int' object is not callable
I would prefer to get a warning.
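Python has no general write protection for attributes, but one option (a sketch, not from the original post) is to declare an empty `__slots__`, which removes the per-instance `__dict__` so attribute assignment on instances fails loudly:

```python
class bar:
    __slots__ = ()  # no per-instance __dict__, so instance attributes can't be created

    def blob(self):
        return 2

foo = bar()
print(foo.blob())  # 2
try:
    foo.blob = 1  # now raises AttributeError instead of silently shadowing the method
except AttributeError as exc:
    print("blocked:", exc)
```

Note this only blocks assignment on instances; rebinding the attribute on the class itself (`bar.blob = 1`) is still allowed.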
Say I had code like this:
class Animals:
    def __init__(self):
        print('Woah')

    def Eat(self):
        print('yum')
And then you made an animal named Gilberto:
gilberto = Animals()
And then you wanted to make another animal named Elijah. Why would you use the copy module:
elijah = copy.copy(gilberto)
When you could just do:
elijah = gilberto
Is there anything special about the copy module? In the case of the Animals class, it seems the same.
When you use copy.copy you create a new object, instead of referencing the same object (which is what the last snippet does).
Consider this:
Setting up
import copy

class Animals:
    def __init__(self):
        print('Woah')

    def Eat(self):
        print('yum')

gilberto = Animals()
elijah_copy = copy.copy(gilberto)
elijah_reference = gilberto
In the interpreter
>>> id(gilberto) == id(elijah_copy) # Different objects!
False
>>> id(gilberto) == id(elijah_reference) # It's the same object!
True
So if I were to, for example, define a new attribute on elijah_reference, it would also be available on gilberto, but not on elijah_copy:
elijah_reference.color = 'red'
In the interpreter
>>> gilberto.color
'red'
>>> elijah_copy.color
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Animals' object has no attribute 'color'
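Also note that copy.copy is a shallow copy: mutable objects stored in attributes are still shared between the original and the copy, while copy.deepcopy duplicates them too. A minimal sketch (the `friends` attribute is hypothetical, added for illustration):

```python
import copy

class Animals:
    def __init__(self):
        self.friends = []  # hypothetical mutable attribute, added for illustration

gilberto = Animals()
shallow = copy.copy(gilberto)
deep = copy.deepcopy(gilberto)

gilberto.friends.append('elijah')
print(shallow.friends)  # ['elijah'] -- the shallow copy shares the nested list
print(deep.friends)     # []         -- deepcopy duplicated it
```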
I was trying to use some closures in my multiprocessing code and it kept failing for no apparent reason. So I did a little test:
#!/usr/bin/env python3

import functools
from multiprocessing import Pool

def processing_function(unprocessed_data):
    return unprocessed_data

def callback_function(processed_data):
    print("FUNCTION: " + str(processed_data))

def create_processing_closure(initial_data):
    def processing_function(unprocessed_data):
        return initial_data + unprocessed_data
    return processing_function

def create_callback_closure():
    def callback(processed_data):
        print("CLOSURE: " + str(processed_data))
    return callback

def create_processing_lambda(initial_data):
    return lambda unprocessed_data: initial_data + unprocessed_data

def create_callback_lambda():
    return lambda processed_data: print("LAMBDA: " + str(processed_data))

def processing_partial(unprocessed_data1, unprocessed_data2):
    return (unprocessed_data1 + unprocessed_data2)

def callback_partial(initial_data, processed_data):
    print("PARTIAL: " + str(processed_data))

pool = Pool(processes=1)

print("Testing if they work normally...")
f1 = processing_function
f2 = callback_function
f2(f1(1))

f3 = create_processing_closure(1)
f4 = create_callback_closure()
f4(f3(1))

f5 = create_processing_lambda(1)
f6 = create_callback_lambda()
f6(f5(1))

f7 = functools.partial(processing_partial, 1)
f8 = functools.partial(callback_partial, 1)
f8(f7(1))

# bonus round!
x = 1
f9 = lambda unprocessed_data: unprocessed_data + x
f10 = lambda processed_data: print("GLOBAL LAMBDA: " + str(processed_data))
f10(f9(1))

print("Testing if they work in apply_async...")

# works
pool.apply_async(f1, args=(1,), callback=f2)

# doesn't work
pool.apply_async(f3, args=(1,), callback=f4)

# doesn't work
pool.apply_async(f5, args=(1,), callback=f6)

# works
pool.apply_async(f7, args=(1,), callback=f8)

# doesn't work
pool.apply_async(f9, args=(1,), callback=f10)

pool.close()
pool.join()
The results are:
> ./apply_async.py
Testing if they work normally...
FUNCTION: 1
CLOSURE: 2
LAMBDA: 2
PARTIAL: 2
GLOBAL LAMBDA: 2
Testing if they work in apply_async...
FUNCTION: 1
PARTIAL: 2
Can anyone explain this weird behavior?
Because those objects can't be transferred to another process; pickling of callables only ever stores the module and name, not the object itself.
The partial only works because it shares the underlying function object, which here is another global.
See the What can be pickled and unpickled section of the pickle module documentation:
functions defined at the top level of a module (using def, not lambda)
built-in functions defined at the top level of a module
[...]
Note that functions (built-in and user-defined) are pickled by “fully qualified” name reference, not by value. [2] This means that only the function name is pickled, along with the name of the module the function is defined in. Neither the function’s code, nor any of its function attributes are pickled. Thus the defining module must be importable in the unpickling environment, and the module must contain the named object, otherwise an exception will be raised. [3]
Do note the multiprocessing Programming guidelines:
Picklability
Ensure that the arguments to the methods of proxies are picklable.
and
Better to inherit than pickle/unpickle
When using the spawn or forkserver start methods many types from multiprocessing need to be picklable so that child processes can use them. However, one should generally avoid sending shared objects to other processes using pipes or queues. Instead you should arrange the program so that a process which needs access to a shared resource created elsewhere can inherit it from an ancestor process.
If you try to pickle each of your callable objects directly, you can see that the ones that pickle successfully coincide exactly with the callables that worked under multiprocessing:
>>> import pickle
>>> f2(f1(1))
FUNCTION: 1
>>> pickle.dumps([f1, f2]) is not None
True
>>> f4(f3(1))
CLOSURE: 2
>>> pickle.dumps([f3, f4]) is not None
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: Can't pickle local object 'create_processing_closure.<locals>.processing_function'
>>> f6(f5(1))
LAMBDA: 2
>>> pickle.dumps([f5, f6]) is not None
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: Can't pickle local object 'create_processing_lambda.<locals>.<lambda>'
>>> f8(f7(1))
PARTIAL: 2
>>> pickle.dumps([f7, f8]) is not None
True
>>> f10(f9(1))
GLOBAL LAMBDA: 2
>>> pickle.dumps([f9, f10]) is not None
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
_pickle.PicklingError: Can't pickle <function <lambda> at 0x10994e8c8>: attribute lookup <lambda> on __main__ failed
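A common workaround (a hedged sketch, not from the original post) is to move the captured state into a top-level class with `__call__`; instances pickle fine because only the class reference and the instance `__dict__` are stored:

```python
import pickle

class ProcessingCallable:
    """Defined at module top level, so instances pickle by class reference plus __dict__."""

    def __init__(self, initial_data):
        self.initial_data = initial_data

    def __call__(self, unprocessed_data):
        return self.initial_data + unprocessed_data

f = ProcessingCallable(1)
print(f(1))  # 2
restored = pickle.loads(pickle.dumps(f))  # round-trips, unlike the closure/lambda versions
print(restored(1))  # 2
```

Such an instance can be passed to apply_async, provided the module defining the class is importable in the child process.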
I'm converting some code from Python 2 to Python 3, and I'm having a hard time with a pickle problem! Here is a simple example of what I'm trying to do:
class test(str):
    def __new__(self, value, a):
        return (str.__new__(self, value))

    def __init__(self, value, a):
        self.a = a

if __name__ == '__main__':
    import pickle

    t = test("abs", 5)
    print(t)
    print(t.a)

    wdfh = open("./test.dump", "wb")
    pickle.dump(t, wdfh)
    wdfh.close()

    awfh = open("./test.dump", "rb")
    newt = pickle.load(awfh)
    awfh.close()

    print(t)
    print(newt.a)
This works just fine with Python 2 but I have the following error with Python 3:
Traceback (most recent call last):
  File "test.py", line 21, in <module>
    newt = pickle.load(awfh)
TypeError: __new__() takes exactly 3 arguments (2 given)
I do not understand what is the difference, any idea?
The problem here is that your code only works with protocol 0 or 1. By default, Python 2 uses protocol 0, whereas Python 3 uses protocol 3.
For protocol 2 and above you can't have additional arguments to the __new__ method unless you implement the __getnewargs__ method.
In this case simply adding:
def __getnewargs__(self):
    return (str(self), self.a)
should do the trick.
Or you could stick with protocol 0 or 1 and change the dump call:
pickle.dump(t, wdfh, 0)
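Putting the first fix together, a minimal runnable sketch of the `__getnewargs__` approach (using the conventional `cls` as the first argument of `__new__`):

```python
import pickle

class test(str):
    def __new__(cls, value, a):
        return str.__new__(cls, value)

    def __init__(self, value, a):
        self.a = a

    def __getnewargs__(self):
        # Protocol 2+ calls __new__ with these arguments when unpickling.
        return (str(self), self.a)

t = test("abs", 5)
newt = pickle.loads(pickle.dumps(t))
print(newt, newt.a)  # abs 5
```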
I wrote some code to count occurrences of words from a text file and store them in a dictionary:
class callDict(object):
    def __init__(self):
        self.invertedIndex = {}
Then I write a method:
def invertedIndex(self):
    print self.invertedIndex.items()
and here is how I am calling:
if __name__ == "__main__":
    c = callDict()
    c.invertedIndex()
But it gives me the error:
Traceback (most recent call last):
  File "E\Project\xyz.py", line 56, in <module>
    c.invertedIndex()
TypeError: 'dict' object is not callable
How can I resolve this?
You are defining a method and an instance variable with the same name. The assignment in __init__ makes the instance attribute shadow the method, hence the error.
Rename one or the other to resolve this.
So for example, this code should work for you:
class CallDict(object):
    def __init__(self):
        self.inverted_index = {}

    def get_inverted_index_items(self):
        print self.inverted_index.items()
And check it using:
>>> c = CallDict()
>>> c.get_inverted_index_items()
[]
Also check out ozgur's answer for doing this using the @property decorator.
In addition to mu's answer,

@property
def invertedIndexItems(self):
    print self.invertedIndex.items()

then here is how you'll call it:

if __name__ == "__main__":
    c = callDict()
    print c.invertedIndexItems
Methods are attributes in Python, so you can't share the same name between them. Rename one of them.
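The clash can be seen in isolation (a minimal sketch with hypothetical names):

```python
class C(object):
    def x(self):
        return 1

c = C()
print(c.x())  # 1: the method is looked up on the class
c.x = 2       # an instance attribute now shadows the method
print(c.x)    # 2
try:
    c.x()     # the int bound to c.x is not callable
except TypeError as exc:
    print(exc)
```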
This is what I have so far
vdcm = (self.register(self.checkForInt), '%S')
roundsNumTB = Entry(self, validate='key', validatecommand=vdcm)
Then the checkForInt() function is defined like so:
def checkForInt(self, S):
    return (S.isDigit())
The entry box is meant to accept an even number, and numbers only; no characters. If a character is entered, it is rejected. This only works once, though: after the first character is rejected, subsequent invalid keystrokes are no longer rejected.
If someone could tell me how to make it permanently check to make sure the string is a digit, and an even one at that, it would be appreciated.
This is the error message I get if it's any help
Exception in Tkinter callback
Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1470, in __call__
    return self.func(*args)
  File "[py directory]", line 101, in checkForInt
    return (S.isDigit())
AttributeError: 'str' object has no attribute 'isDigit'
The method is isdigit(), not isDigit(); note the capitalization difference. That exception is also why validation stops working: when the validatecommand raises an error, Tk resets the validate option to 'none', disabling further validation. If you want to test that the input is an integer and is even, check isdigit() first and then convert the string with int():
def checkForEvenInt(self, S):
    if S.isdigit():
        if int(S) % 2 == 0:
            return True
    return False
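The same check can be exercised outside Tkinter (a sketch with the logic pulled out into a plain, hypothetically named function):

```python
def check_for_even_int(S):
    # True only for strings consisting of digits that represent an even number
    return S.isdigit() and int(S) % 2 == 0

print(check_for_even_int('42'))  # True
print(check_for_even_int('7'))   # False
print(check_for_even_int('4a'))  # False
```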
Keep in mind that Python is case-sensitive, including function names. For example, here's an IPython session:
In [1]: def my_func(): return True
In [2]: my_func()
Out[2]: True
In [3]: my_Func()
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-25-ac6a0a3aba88> in <module>()
----> 1 my_Func()
NameError: name 'my_Func' is not defined