Difference between the next function and the next method [duplicate]

This question already has answers here:
Python: Why should I use next() and not obj.next()?
(3 answers)
Closed 6 years ago.
When you make a generator by calling a function or method that contains the yield keyword, you get an object that has a next method.
So far as I can tell, there isn't a difference between using this method and using the next builtin function.
e.g. my_generator.next() vs next(my_generator)
So is there any difference? If not, why are there two ways of calling next on a generator?

In Python 2 the iterator method is called next(); in Python 3 it is __next__(). The builtin next() function knows about both and always calls the right one, which keeps code compatible with both versions. It also accepts a default argument, which makes handling the end of iteration easier.
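A minimal sketch (Python 3, where the method is spelled __next__()) illustrating that the builtin simply delegates to the method and adds the optional default:

def countdown(n):
    while n > 0:
        yield n
        n -= 1

gen = countdown(2)
print(next(gen))          # 2 -- the builtin delegates to gen.__next__()
print(gen.__next__())     # 1 -- calling the special method directly
print(next(gen, 'done'))  # 'done' -- default returned instead of raising StopIteration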

Related

Strange behavior when passing a function as second parameter of the `setdefault` dictionary method [duplicate]

This question already has an answer here:
Why does setdefault evaluate default when key is set?
(1 answer)
Closed 6 months ago.
I don't understand the behavior of setdefault in this scenario:
def f(x):
    return x + 1

dct = {5: 15}
print(dct.setdefault(5, f(5)))
The key 5 is already in the dictionary, but instead of returning the value 15 immediately, it wastes time evaluating the function call in the second argument. In the end, it discards the result of f and returns 15.
Is this a bug? What is its purpose?
Python allows you to use expressions as arguments to a function; in fact, every argument is an expression. Python evaluates all of the argument expressions from left to right, resolving each to a single object. The function object is then looked up, the evaluated arguments are bound to its parameters, and finally the function is called.
In dct.setdefault(5, f(5)), Python has no idea whether an argument expression will actually be needed, and it cannot go back and evaluate the expression after the call has been made. So f(5) is resolved and then the function is called.
If a function is expensive, then setdefault is not the tool for you. Use "not in" instead.
if 5 not in dct:
    dct[5] = f(5)
In Python, all arguments are evaluated before the function is called. Lazy evaluation would be the more performant behaviour in this particular case, but it could lead to consistency issues elsewhere.
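A small sketch of both points: the print inside f (added here purely for illustration) shows that f(5) runs even though the key exists, while the membership test avoids the call entirely.

def f(x):
    print('f was called')     # side effect makes the eager evaluation visible
    return x + 1

dct = {5: 15}
print(dct.setdefault(5, f(5)))   # prints 'f was called', then 15

dct2 = {5: 15}
if 5 not in dct2:                # guard: f is never called here
    dct2[5] = f(5)
print(dct2[5])                   # 15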

Understanding how Python builtins are mapped to underlying C implementation [duplicate]

This question already has answers here:
PyCharm, what is python_stubs?
(2 answers)
Finding the source code for built-in Python functions?
(8 answers)
Closed 1 year ago.
If I go to the implementation of, let's say, the next() builtin function, I'm forwarded to the builtins.py module and the following code:
def next(iterator, default=None): # real signature unknown; restored from __doc__
    """
    next(iterator[, default])
    Return the next item from the iterator. If default is given and the iterator
    is exhausted, it is returned instead of raising StopIteration.
    """
    pass
Now, it looks like this function does nothing, but obviously that's not the case.
I understand that this function is implemented in C under the hood, but how and when is this function (or other builtin functions) mapped to the underlying C implementation?
If you have an answer to this question, can you please also provide links that I can read in order to better understand this topic?
I'm not asking where the code is, but how and when the function is mapped to that code.
Thank you.
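Not an answer to the how-and-when part, but a quick sketch showing that the object the interpreter actually uses is a C-level builtin and that the builtins.py stub is only an IDE artifact:

import inspect

print(type(next))               # <class 'builtin_function_or_method'>
print(inspect.isbuiltin(next))  # True

try:
    inspect.getsource(next)     # there is no Python source for a C builtin
except TypeError as exc:
    print('no Python source:', exc)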

Obtaining closures at runtime [duplicate]

This question already has answers here:
Given a function with closure, can I refer back to it's closure scope?
(1 answer)
What exactly is contained within a obj.__closure__?
(4 answers)
Closed 2 years ago.
I would like to know if there is any method to check whether two functions have the same arguments at runtime in python 3.
Basically, I have this function (func) that takes another function and an argument, and I want to check the values assigned to args in the lambda it returns:
def func(another_func, args):
    return lambda x: another_func(x, args)
It is not feasible to run the code beforehand and check the results because I am implementing a lazy framework. My main goal is to understand what the arguments of the function are: there is one variable argument that I do not care about, and one static argument that is created before the function runs.
EDIT:
I actually solved this problem using the inspect module (getclosure) for those who are interested!
Extension (Martijn Pieters):
I think you are referring to getclosurevars(). You can also just access function.__closure__ and read the value of each cell via its cell_contents attribute.
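A short sketch of both approaches, assuming a func like the one in the question:

import inspect

def func(another_func, args):
    return lambda x: another_func(x, args)

wrapped = func(lambda x, y: x + y, 10)

# Names and values captured by the lambda's closure
print(inspect.getclosurevars(wrapped).nonlocals)
# Or read the closure cells directly
print([cell.cell_contents for cell in wrapped.__closure__])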

Why is iter not a method of an instance and __iter__ is? [duplicate]

This question already has answers here:
Why does Python code use len() function instead of a length method?
(7 answers)
Closed 8 years ago.
The "intuitive" way of getting an iterator for someone who usually programs in Java, C++, etc is something like list.iterator().
Why did the Python folks choose to have it as a general function like len() (which results in iter(list) rather than list.iter())?
The same question can be asked for the length of a construct as well (len()).
iter() supports different types of objects.
You can pass in either a sequence (supporting indexed item access via __getitem__), an iterable (which produces an iterator by calling obj.__iter__()), or an iterator (which returns self from __iter__).
The Java-style list.iterator() is served by list.__iter__() in Python, but the iter() function allows for more types. You can customise the behaviour with an __iter__ method, but if you implemented a sequence instead, things will still work.
There is also a second form of the function where a callable and a sentinel are passed in:
iter(fileobj.readline, '')
iterates over a file object by calling the readline() method until it returns an empty string (equal to the second argument, the sentinel).
Then there is the Principle of Least Astonishment argument; iter() gives the standard library a stable API call to standardise on, just like operators do; no need to look up the documentation of the class to see if it implemented obj.iter() or obj.iterator() or obj.get_iterator().
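A small sketch of the forms described above (the Squares class is made up here for illustration):

import io

class Squares:
    """No __iter__; only the old sequence protocol via __getitem__."""
    def __getitem__(self, index):
        if index >= 3:
            raise IndexError
        return index * index

print(list(iter(Squares())))              # [0, 1, 4] -- sequence fallback
print(list(iter([1, 2, 3])))              # [1, 2, 3] -- __iter__-based iterable

fileobj = io.StringIO('a\nb\n')
print(list(iter(fileobj.readline, '')))   # ['a\n', 'b\n'] -- callable + sentinel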

Does Python provide "free" default iterators? [duplicate]

This question already has answers here:
Why does defining __getitem__ on a class make it iterable in python?
Closed 11 years ago.
I have a class that is basically a wrapper for a python list. Within this class I have defined __getitem__, __setitem__, and __len__. I have not defined __iter__ for this class.
When I go:
thing = ListWrapper(range(4))
for i in thing:
    print i
I get the output:
0
1
2
3
Which is nice, but I expected an error message of some sort saying that python could not find an iterator. I've given the documentation a look and can't find anything referencing default iterators. Furthermore, tracing through the code in PyDev shows that it is calling the __getitem__ method each iteration.
I was wondering if it is good practice to depend on this behavior in my code. It doesn't feel quite right to me at this point. Does Python guarantee that classes with __getitem__ and __len__ will be treated as if they have a defined iterator? Any other information on weirdness this may cause is also welcome.
If a class doesn't have __iter__, but does have __getitem__, the iteration machinery will call it with consecutive integers (0, 1, 2, ...) until it raises IndexError.
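A minimal sketch of a wrapper like the one described (this ListWrapper implementation is assumed, not the asker's, and uses Python 3 print syntax); iteration works purely through the __getitem__ fallback:

class ListWrapper:
    def __init__(self, data):
        self._data = list(data)

    def __getitem__(self, index):
        return self._data[index]      # raises IndexError past the end, stopping iteration

    def __setitem__(self, index, value):
        self._data[index] = value

    def __len__(self):
        return len(self._data)

for i in ListWrapper(range(4)):       # no __iter__ defined anywhere
    print(i)                          # 0, 1, 2, 3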
