difference between F(x) and F x in Python

In Python it is possible to call either del x or del (x). I know how to define a function called F(x), but I do not know how to define a function that can be called like del, without a tuple as parameters.
What is the difference between F x and F(x), and how can I define a function that can be called without parentheses?
>>> a = 10
>>> a
10
>>> del a <------------ can be called without parentheses
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
>>> a = 1
>>> del (a)
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
>>> def f(x): 1
...
>>> f (10)
>>> print f (10)
None
>>> def f(x): return 1
...
>>> print f (10)
1
>>> f 1 <------ cannot be called this way
File "<stdin>", line 1
f 1
^
SyntaxError: invalid syntax
>>>

The main reason is that del is actually a statement, not a function, and therefore has special behavior in Python. Because of that you cannot define such a construct (and this behavior) yourself* - it is a built-in part of the language, reserved for a set of keywords. Note also that del (a) is not a call with a tuple: the parentheses just group the expression a, so it is parsed exactly like del a.
*I guess you could potentially edit the source of Python itself and build your own in, but I don't think that is what you're after :)
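You can check this yourself with the standard ast module (a minimal sketch): both spellings produce the same Delete statement, because the parentheses are only grouping, not a call and not a tuple.
import ast

print(ast.dump(ast.parse("del a")))    # Module(body=[Delete(targets=[Name(id='a', ...)])])
print(ast.dump(ast.parse("del (a)")))  # identical Delete node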

Related

How does del interact with object attributes?

I'm new to Python and saw this code snippet:
class C:
    abc = 2

c1 = C()
print c1.abc
c1.abc = 3
print c1.abc
del c1.abc
print c1.abc
I understand why the first and the second print statements print 2 and 3 respectively. Coming from a Java background however, I don't understand what happens in the line del c1.abc, and why the last print statement prints 2 rather than raising some kind of error. Can someone explain? If possible by comparing to Java?
The sticky issue for a Python beginner here is that abc is a class variable (i.e. a "static" variable), and when you do c1.abc = 3, you shadow the class variable with an instance variable. When you do del c1.abc, the del applies to the instance variable, so accessing c1.abc afterwards falls back to the class variable.
The following interactive session should clear some things up:
>>> class C:
...     abc = 2
...
>>> c1 = C()
>>> c2 = C()
>>> c1.abc = 3
>>> c1.abc
3
>>> c2.abc
2
>>> C.abc # class "static" variable
2
>>> del c1.abc
>>> c1.abc
2
>>> c2.abc
2
>>> C.abc
2
>>> del c2.abc
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: C instance has no attribute 'abc'
>>> del C.abc
>>> c1.abc
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: C instance has no attribute 'abc'
>>>
Note that del instance.<someattribute> always deletes the instance attribute. It won't delete a class-level attribute when applied to an instance; instead, you have to apply del to the class itself (del C.abc), as shown above.
In Python, everything written inside a class block is always at the class level. In this sense, it is simpler than Java. To define an instance variable, you need to assign directly to an instance, either outside a method (c1.abc = 3) or inside a method, using the first parameter passed to that method (by convention this is called self but could be banana if you wanted):
>>> class C:
...     def some_method(banana, x):  # by convention you should use `self` instead of `banana`
...         banana.x = x
...
>>> c = C()
>>> c.x
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: C instance has no attribute 'x'
>>> c.some_method(5)
>>> c.x
5
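If it helps to see the shadowing directly, here is a minimal sketch that inspects the attribute dictionaries: attribute lookup checks the instance __dict__ first and only then falls back to the class.
>>> class C:
...     abc = 2
...
>>> c1 = C()
>>> c1.__dict__                 # no instance attributes yet
{}
>>> C.__dict__['abc']           # the class attribute lives on the class
2
>>> c1.abc = 3                  # creates an instance attribute that shadows it
>>> c1.__dict__
{'abc': 3}
>>> del c1.abc                  # removes only the instance attribute
>>> c1.__dict__
{}
>>> c1.abc                      # lookup falls back to the class again
2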

How does list.append work?

alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger']
>>> show('tiger','cat')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (2 given)
>>> show('tiger','cat', {'name':'tom'})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (3 given)
Since alist.append only accepts one argument, why isn't a syntax error reported for the line alist.append(*args, **kwargs) in the definition of show?
It's not a syntax error because the syntax is perfectly fine and that function may or may not raise an error depending on how you call it.
The way you're calling it:
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger']
>>> show('tiger','cat')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (2 given)
A different way:
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger', 'tiger']
>>> class L: pass
...
>>> alist = L()
>>> alist.append = print
>>> show('tiger','cat')
tiger cat
<__main__.L object at 0x000000A45DBCC048>
Python objects are strongly typed. The names that bind to them are not. Nor are function arguments. Given Python's dynamic nature it would be extremely difficult to statically predict what type a variable at a given source location will be at execution time, so the general rule is that Python doesn't bother trying.
In your specific example, alist is not in the local scope. Therefore it can be modified after your function definition was executed and the changes will be visible to your function, cf. code snippets below.
So, in accord with the general rule: predicting whether or not alist will be a list when you call .append? Near-impossible. In particular, the interpreter cannot predict that this will be an error.
Here is some code just to drive home the point that static type checking is by all practical means impossible in Python. It uses non-local variables as in your example.
funcs = []
for a in [1, "x", [2]]:
    def b():
        def f():
            print(a)
        return f
    funcs.append(b())
for f in funcs:
    f()
Output:
[2] # value of a at definition time (of f): 1
[2] # value of a at definition time (of f): 'x'
[2] # value of a at definition time (of f): [2]
And similarly for non-global non-local variables:
funcs = []
for a in [1, "x", [2]]:
    def b(a):
        def f():
            print(a)
        a = a + a
        return f
    funcs.append(b(a))
for f in funcs:
    f()
Output:
2 # value of a at definition time (of f): 1
xx # value of a at definition time (of f): 'x'
[2, 2] # value of a at definition time (of f): [2]
It's not a syntax error because it's resolved at runtime. Syntax errors are caught up front during parsing: things like unmatched brackets or malformed statements. Undefined names and wrong argument counts are only detected at runtime, and *args is not a missing argument; it means any number of arguments.
show has no way of knowing what you'll pass it at runtime, and since you are expanding your args variable inside show, there could be any number of arguments coming in, and it's valid syntax! list.append takes one argument: one tuple, one list, one int, one string, one custom class, etc. What you are passing it is some number of elements depending on the input. If you remove the * it's all fine, because args is then a single element, e.g. alist.append(args).
All this means that your show function is faulty. It is equipped to handle args only when it has length 1. If it has length 0 you also get a TypeError at the point append is called. If it has more than one element, it's broken, but you won't know until you run it with the bad input.
You could loop over the elements in args (and kwargs) and add them one by one.
alist = []
def show(*args, **kwargs):
    for a in args:
        alist.append(a)
    for kv in kwargs.items():
        alist.append(kv)
    print(alist)
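A shorter variant (just a sketch, if you simply want every argument added as its own element) is list.extend, which accepts any iterable:
alist.extend(args)              # adds each positional argument separately
alist.extend(kwargs.items())    # adds each (key, value) pair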

How to use a DefaultDict with a lambda expression to make the default changeable?

defaultdict is useful when you want a dictionary that creates missing keys on the fly, calling a factory function to supply the default value. E.g. using str makes the empty string the default.
>>> from collections import defaultdict
>>> food = defaultdict(str)
>>> food['apple']
''
You can also use lambda to make an expression be the default value.
>>> food = defaultdict(lambda: "No food")
>>> food['apple']
'No food'
However, you can't give this lambda any parameters: defaultdict calls the default factory with no arguments, so a one-argument lambda raises an error as soon as a missing key is looked up.
>>> food = defaultdict(lambda x: "{} food".format(x))
>>> food['apple']
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
food['apple']
TypeError: <lambda>() takes exactly 1 argument (0 given)
Even if you try to supply the parameter
>>> food['apple'](12)
Traceback (most recent call last):
File "<pyshell#9>", line 1, in <module>
food['apple']
TypeError: <lambda>() takes exactly 1 argument (0 given)
How could these lambda functions be responsive rather than a rigid expression?
Using a variable in the expression can actually circumvent this somewhat.
>>> from collections import defaultdict
>>> baseLevel = 0
>>> food = defaultdict(lambda: baseLevel)
>>> food['banana']
0
>>> baseLevel += 10
>>> food['apple']
10
>>> food['banana']
0
The default lambda expression is tied to a variable that can change without affecting the keys it has already created. This is particularly useful when it is tied to other functions that are only evaluated when a non-existent key is accessed.
>>> joinTime = defaultdict(lambda: time.time())
>>> joinTime['Steven']
1432137137.774
>>> joinTime['Catherine']
1432137144.704
>>> for customer in joinTime:
...     print customer, joinTime[customer]
...
Catherine 1432137144.7
Steven 1432137137.77
Ugly but may be useful to someone:
from collections import defaultdict

class MyDefaultDict(defaultdict):
    def __init__(self, func):
        super(MyDefaultDict, self).__init__(self._func)
        self.func = func
    def _func(self):
        # called by defaultdict with no arguments; uses the key stashed by __getitem__
        return self.func(self.cur_key)
    def __getitem__(self, key):
        self.cur_key = key
        return super(MyDefaultDict, self).__getitem__(key)
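A cleaner pattern (a sketch, not part of the answer above) is to override __missing__, which receives the missing key directly, so no cur_key bookkeeping is needed:
from collections import defaultdict

class keydefaultdict(defaultdict):
    # default_factory is called with the missing key instead of no arguments
    def __missing__(self, key):
        if self.default_factory is None:
            raise KeyError(key)
        value = self[key] = self.default_factory(key)
        return value

food = keydefaultdict(lambda key: "{} food".format(key))
print(food['apple'])   # apple food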

Argument Unpacking wastes Stack Frames

When a function is called with argument unpacking, each call seems to consume two levels of recursion depth. I would like to know why this happens.
Normally:
depth = 0
def f():
    global depth
    depth += 1
    f()

try:
    f()
except RuntimeError:
    print(depth)
#>>> 999
With an unpacking call:
depth = 0
def f():
    global depth
    depth += 1
    f(*())

try:
    f()
except RuntimeError:
    print(depth)
#>>> 500
In theory both should reach about 1000:
import sys
sys.getrecursionlimit()
#>>> 1000
This happens on CPython 2.7 and CPython 3.3.
On PyPy 2.7 and PyPy 3.3 there is a difference, but it is much smaller (1480 vs 1395 and 1526 vs 1395).
As you can see from the disassembly, there is little difference between the two, other than the type of call (CALL_FUNCTION vs CALL_FUNCTION_VAR):
import dis

def f():
    f()

dis.dis(f)
#>>> 34 0 LOAD_GLOBAL 0 (f)
#>>> 3 CALL_FUNCTION 0 (0 positional, 0 keyword pair)
#>>> 6 POP_TOP
#>>> 7 LOAD_CONST 0 (None)
#>>> 10 RETURN_VALUE
def f():
    f(*())

dis.dis(f)
#>>> 47 0 LOAD_GLOBAL 0 (f)
#>>> 3 BUILD_TUPLE 0
#>>> 6 CALL_FUNCTION_VAR 0 (0 positional, 0 keyword pair)
#>>> 9 POP_TOP
#>>> 10 LOAD_CONST 0 (None)
#>>> 13 RETURN_VALUE
The exception message actually offers you a hint. Compare the non-unpacking option:
>>> import sys
>>> sys.setrecursionlimit(4) # to get there faster
>>> def f(): f()
...
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
RuntimeError: maximum recursion depth exceeded
with:
>>> def f(): f(*())
...
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
RuntimeError: maximum recursion depth exceeded while calling a Python object
Note the addition of while calling a Python object. This message is specific to the PyObject_CallObject() function. You won't see this message when you set an odd recursion limit:
>>> sys.setrecursionlimit(5)
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in f
File "<stdin>", line 1, in f
RuntimeError: maximum recursion depth exceeded
because that is the specific exception raised in the ceval.c frame evaluation code inside PyEval_EvalFrameEx():
/* push frame */
if (Py_EnterRecursiveCall(""))
    return NULL;
Note the empty message there. This is a crucial difference.
For your 'regular' function (no variable arguments), what happens is that an optimized path is picked; a Python function that doesn't need tuple or keyword argument unpacking support is handled directly in the fast_function() function of the evaluation loop. A new frameobject with the Python bytecode object for the function is created, and run. This is one recursion check.
But for a function call with variable arguments (tuple or dictionary or both), the fast_function() call cannot be used. Instead, ext_do_call() (extended call) is used, which handles the argument unpacking, then uses PyObject_Call() to invoke the function. PyObject_Call() does a recursion limit check, and 'calls' the function object. The function object is invoked via the function_call() function, which calls PyEval_EvalCodeEx(), which calls PyEval_EvalFrameEx(), which makes the second recursion limit check.
TL;DR version
Python functions calling Python functions are optimised and bypass the PyObject_Call() C-API function, unless argument unpacking takes place. Both Python frame execution and PyObject_Call() make recursion limit tests, so bypassing PyObject_Call() avoids incrementing the recursion limit check per call.
More places with 'extra' recursion depth checks
You can grep the Python source code for Py_EnterRecursiveCall for other locations where recursion depth checks are made; various libraries, such as json and pickle use it to avoid parsing structures that are too deeply nested or recursive, for example. Other checks are placed in the list and tuple __repr__ implementations, rich comparisons (__gt__, __lt__, __eq__, etc.), handling the __call__ callable object hook and handling __str__ calls.
As such, you can hit the recursion limit much faster still:
>>> class C:
...     def __str__(self):
...         global depth
...         depth += 1
...         return self()
...     def __call__(self):
...         global depth
...         depth += 1
...         return str(self)
...
>>> depth = 0
>>> sys.setrecursionlimit(10)
>>> C()()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 9, in __call__
File "<stdin>", line 5, in __str__
RuntimeError: maximum recursion depth exceeded while calling a Python object
>>> depth
2
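A side note, not from the original answer: since Python 3.5 the exception raised when the limit is hit is RecursionError, a subclass of RuntimeError, so the except RuntimeError clauses above keep working, but you can catch the more specific name:
try:
    f()
except RecursionError:   # Python 3.5+, subclass of RuntimeError
    print(depth)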

What's the difference between locals() and globals()

I don't understand what's wrong with this code.
Please let me know how to fix it.
I thought the following would work, but it raises an error.
>>> def L():
...     for i in range(3):
...         locals()["str" + str(i)] = 1
...     print str0
...
>>> L()
If I execute it, the following error happens.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in a
NameError: global name 'str0' is not defined
However, if I use globals(), the error doesn't happen (like the following):
>>> def G():
...     for i in range(3):
...         globals()["str" + str(i)] = 1
...     print str0
...
>>> G()
1
But!!! If I don't use a for statement, I can write it like this and it works fine.
>>> def LL():
...     locals()["str" + str(0)] = 1
...     print str0
...
>>> LL()
1
What I want is to be able to use the variables set inside the function after the above code has been executed, like this:
>>> str0
1
>>> str1
1
>>> str2
1
From the documentation of locals()
Note:
The contents of this dictionary should not be modified; changes may not affect the values of local and free variables used by the interpreter.
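In short, assigning into the dict returned by locals() inside a function is not guaranteed to create real local variables (CPython ignores such writes), which is why str0 is undefined inside L(). The LL() example most likely only appears to work because str0 had already been created as a global by the earlier G() call in the same session. If you need a variable number of named values, the usual fix is an ordinary dict rather than dynamically created variables; a minimal sketch (the names are just illustrative, not from the answer above):
>>> def L():
...     values = {}
...     for i in range(3):
...         values["str" + str(i)] = 1
...     return values
...
>>> vals = L()
>>> vals["str0"]
1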
