This question already has answers here:
Is it possible to implement a Python for range loop without an iterator variable?
(15 answers)
Closed 7 years ago.
Say I have a function foo that I want to call n times. In Ruby, I would write:
n.times { foo }
In Python, I could write:
for _ in xrange(n): foo()
But that seems like a hacky way of doing things.
My question: Is there an idiomatic way of doing this in Python?
You've already shown the idiomatic way:
for _ in range(n):  # or xrange if you are on 2.X
    foo()
Not sure what is "hackish" about this. If you have a more specific use case in mind, please provide more details, and there might be something better suited to what you are doing.
If you want the times method, and you need to use it on your own functions, try this:
def times(self, n, *args, **kwargs):
    for _ in range(n):
        self.__call__(*args, **kwargs)
import new

def repeatable(func):
    func.times = new.instancemethod(times, func, func.__class__)
    return func
Now add a @repeatable decorator to any function you need a times method on:
@repeatable
def foo(bar):
    print bar
foo.times(4, "baz")  # outputs 4 lines of "baz"
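The new module used above was removed in Python 3. A sketch of the same decorator for Python 3, where a plain closure replaces new.instancemethod (names kept from the answer above):

```python
def repeatable(func):
    # Attach a times() helper that calls func n times with the given arguments.
    def times(n, *args, **kwargs):
        for _ in range(n):
            func(*args, **kwargs)
    func.times = times
    return func

@repeatable
def foo(bar):
    print(bar)

foo.times(4, "baz")   # prints "baz" four times
```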
Fastest, cleanest is itertools.repeat:
import itertools
for _ in itertools.repeat(None, n):
    foo()
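If you want to check the speed claim on your own machine, a quick sketch using timeit (the numbers will vary by machine and Python version, so no ordering is guaranteed on any single run):

```python
import timeit

# Time n empty iterations driven by range() versus itertools.repeat(None, n).
t_range = timeit.timeit("for _ in range(1000): pass", number=1000)
t_repeat = timeit.timeit(
    "for _ in itertools.repeat(None, 1000): pass",
    setup="import itertools",
    number=1000,
)
print(t_range, t_repeat)  # repeat() yields the same None object every step
```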
The question pre-supposes that calling foo() n times is an a priori necessary thing. Where did n come from? Is it the length of something iterable? Then iterate over the iterable. As I am picking up Python, I find that I'm using few to no arbitrary values; there is some more salient meaning behind your n that got lost when it became an integer.
Earlier today I happened upon Niklaus Wirth's provocative paper for IEEE Computer entitled Good Ideas - Through the Looking Glass (archived version for future readers). In section 4 he brings a different slant on programming constructs that everyone (including himself) has taken for granted but that hold expressive flaws:
"The generality of Algol’s for
statement should have been a warning
signal to all future designers to
always keep the primary purpose of a
construct in mind, and to be weary of
exaggerated generality and complexity,
which may easily become
counter-productive."
The Algol for is equivalent to the C/Java for; it just does too much. That paper is a useful read if only because it makes one question so much that we so readily take for granted. So perhaps a better question is "Why would you need a loop that executes an arbitrary number of times?"
This question already has answers here:
Should I cache range results if I reuse them?
(2 answers)
Closed 2 years ago.
In a Python program which runs a for loop over a fixed range many times, e.g.,
while some_clause:
    for i in range(0, 1000):
        pass
    ...
does it make sense to cache range:
r = range(0, 1000)
while some_clause:
    for i in r:
        pass
    ...
or will it not add much benefit?
It won't; a range call does almost nothing. Only the iteration itself, which is not optional, has a cost.
Interestingly, caching makes it slower for some reason, in the example below.
My benchmarks:
>>> timeit.timeit("""
for i in range(10000):
pass""",number=10000)
1.7728144999991855
>>> timeit.timeit("""
for i in r:
pass""","r=range(10000)",number=10000)
1.80037959999936
And caching it breaks readability, as the Zen of Python states:
Readability counts.
and
Explicit is better than implicit.
Simple is better than complex.
If you are using Python 2.*, range will return a list, and you should use xrange.
xrange (Python 2) and range (Python 3) are lazily evaluated: the next item is produced only when you ask for it.
So, no, there is no need to cache. Instantiate the range where you need it; no tricks or magic are required, it's already implemented in Python.
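The laziness is easy to observe on Python 3, where a range object stores only start, stop, and step, no matter how large the span (a small sketch):

```python
import sys

r = range(10**12)        # created instantly; no trillion-element list
print(sys.getsizeof(r))  # a few dozen bytes, independent of the span
print(r[10])             # items are computed on demand -> 10
print(500 in r)          # membership is O(1) arithmetic -> True
```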
It won't add any benefit. If you want to speed up your loops, refer to this comparison: https://dev.to/duomly/loops-in-python-comparison-and-performance-4f2m
It will give you an idea of how things can be improved.
This question already has answers here:
Function chaining in Python
(6 answers)
Closed 6 years ago.
I am calculating a sum using lambda like this:
from functools import reduce  # needed on Python 3; reduce is built in on Python 2

def my_func(*args):
    return reduce((lambda x, y: x + y), args)

my_func(1, 2, 3, 4)
and its output is 10.
But I want a lambda function that takes random arguments and sums all of them. Suppose this is a lambda function:
add = lambda *args: ...  # code for adding all of args
someone should be able to call the add function as:
add(5)(10) # it should output 15
add(1)(15)(20)(4) # it should output 40
That is, one should be able to supply an arbitrary number of parentheses.
Is this possible in Python?
This is not possible with lambda, but it is definitely possible to do this in Python.
To achieve this behaviour you can subclass int and override its __call__ method to return a new instance of the same class with updated value each time:
class Add(int):
    def __call__(self, val):
        return type(self)(self + val)
Demo:
>>> Add(5)(10)
15
>>> Add(5)(10)(15)
30
>>> Add(5)
5
# Can be used to perform other arithmetic operations as well
>>> Add(5)(10)(15) * 100
3000
If you want to support floats as well then subclass from float instead of int.
The sort of "currying" you're looking for is not possible.
Imagine that add(5)(10) is 15. In that case, add(5)(10)(20) needs to be equivalent to 15(20). But 15 is not callable, and in particular is not the same thing as the "add 15" operation.
You can certainly say lambda *args: sum(args), but then you would need to pass all of its arguments in the usual way: add(5, 10, 20, 93)
[EDITED to add:] There are languages in which functions with multiple arguments are handled in this sort of way; Haskell, for instance. But those are functions with a fixed number of multiple arguments, and the whole advantage of doing it that way is that if e.g. add 3 4 is 7 then add 3 is a function that adds 3 to things -- which is exactly the behaviour you're wanting not to get, if you want something like this to take a variable number of arguments.
For a function of fixed arity you can get Haskell-ish behaviour, though the syntax doesn't work so nicely in Python, just by nesting lambdas: after add = lambda x: lambda y: x+y you can say add(3)(4) and get 7, or you can say add(3) and get a function that adds 3 to things.
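The nested-lambda pattern extends to any fixed arity; each layer captures one more argument. A sketch with three arguments:

```python
# Curried three-argument addition: each call fixes one more argument.
add3 = lambda x: lambda y: lambda z: x + y + z

print(add3(1)(2)(3))    # 6
plus_ten = add3(4)(6)   # partially applied: a function that adds 10
print(plus_ten(5))      # 15
```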
[EDITED again to add:] As Ashwini Chaudhary's ingenious answer shows, you actually can kinda do what you want by arranging for add(5)(10) to be not the actual integer 15 but another object that very closely resembles 15 (and will just get displayed as 15 in most contexts). For me, this is firmly in the category of "neat tricks you should know about but never ever actually do", but if you have an application that really needs this sort of behaviour, that's one way to do it.
(Why shouldn't you do this sort of thing? Mostly because it's brittle and liable to produce unexpected results in edge cases. For instance, what happens if you ask for add(5)(10.5)? That will fail with A.C.'s approach; PM 2Ring's approach will cope OK with that but has different problems; e.g., add(2)(3)==5 will be False. The other reason to avoid this sort of thing is because it's ingenious and rather obscure, and therefore liable to confuse other people reading your code. How much this matters depends on who else will be reading your code. I should add for the avoidance of doubt that I'm quite sure A.C. and PM2R are well aware of this, and that I think their answers are very clever and elegant; I am not criticizing them but offering a warning about what to do with what they've told you.)
You can kind of do this with a class, but I really wouldn't advise using this "party trick" in real code.
class add(object):
    def __init__(self, arg):
        self.arg = arg
    def __call__(self, arg):
        self.arg += arg
        return self
    def __repr__(self):
        return repr(self.arg)
# Test
print(add(1)(15)(20)(4))
output
40
Initially, add(1) creates an add instance, setting its .arg attribute to 1. add(1)(15) invokes the __call__ method, adding 15 to the current value of .arg and returning the instance so it can be called again. The same process repeats for the subsequent calls. Finally, when the instance is passed to print, its __repr__ method is invoked, which returns the string representation of .arg.
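The brittleness warned about earlier is easy to demonstrate with this class; it is repeated below so the snippet stands alone. The result prints like a number but does not compare equal to one:

```python
class add(object):
    def __init__(self, arg):
        self.arg = arg
    def __call__(self, arg):
        self.arg += arg
        return self
    def __repr__(self):
        return repr(self.arg)

result = add(2)(3)
print(result)        # looks like 5
print(result == 5)   # False: no __eq__ is defined, so it is not the integer 5
```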
Let's say I want to partition a string. It returns a tuple of 3 items. I do not need the second item.
I have read that _ is used when a variable is to be ignored.
bar = 'asd cds'
start,_,end = bar.partition(' ')
If I understand it correctly, the _ is still filled with the value. I haven't heard of a way to ignore some part of the output.
Wouldn't that save cycles?
A bigger example would be
def foo():
    return list(range(1000))

start, *_, end = foo()
No, ignoring the return value wouldn't really save any cycles, except for the trivial ones saved by not binding a name to a returned object that is never used.
Python doesn't do any function inlining or cross-function optimization. It isn't targeting that niche in the slightest. Nor should it, as that would compromise many of the things that python is good at. A lot of core python functionality depends on the simplicity of its design.
The same goes for your list unpacking example. It's easy to think of syntactic sugar to have Python pick the last and first item of the list, given that syntax. But there is no way, staying within the defining constraints of Python, to avoid constructing the whole list first. Who is to say that the construction of the list does not have relevant side effects? Python, as a language, will certainly not guarantee you any such thing. As a dynamic language, Python does not have the slightest clue, nor does it try to concern itself with the fact, that foo might return a list, until the moment it actually does so.
And will it return a list? What if you rebound the list identifier?
As per the docs, a valid variable name can be of this form
identifier ::= (letter|"_") (letter | digit | "_")*
It means that the first character of a variable name can be a letter or an underscore, and the rest of the name can have letters, digits, or _. So _ is a valid variable name in Python, though less commonly used; people normally use it as a throwaway variable.
Note that the extended unpacking syntax you have shown, start, *_, end = foo(), will only work in Python 3.x.
In your example, you have used list(range(1000)), so the entire list is already constructed. When you return it, you are actually returning a reference to the list; the values are not copied. So there is no specific way to ignore the values as such.
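That reference behaviour, and the fact that extended unpacking still consumes the whole list, can be checked directly (a sketch):

```python
def foo():
    return list(range(1000))

l = foo()
m = l            # another name for the same list object, not a copy
print(m is l)    # True: returning and assigning never copy the data

start, *middle, end = foo()       # extended unpacking still walks the whole list
print(start, end, len(middle))    # 0 999 998
```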
There certainly is a way to extract just a few elements. To wit:
l = foo()
start, end = l[0], l[-1]
The question you're asking is, then, "Why doesn't there exist a one-line shorthand for this?" There are two answers to that:
It's not common enough to need shorthand for. The two line solution is adequate for this uncommon scenario.
Features don't need a good reason to not exist. It's not like Guido van Rossum compiled a list of all possible ideas and then struck out yours. If you have an idea for improved syntax you could propose it to the Python community and see if you could get them to implement it.
Closed 9 years ago.
Paul Graham describes the following problem:
We want to write a function that generates accumulators-- a function that takes a number n, and returns a function that takes another number i and returns n incremented by i.
He says that whereas such a function could be implemented in Lisp/Ruby/Perl as simply as something like
(defun foo (n)
  (lambda (i) (incf n i)))
, in Python it would be written
class foo:
    def __init__(self, n):
        self.n = n
    def __call__(self, i):
        self.n += i
        return self.n
So my question is, What exactly about Python (apart from the lack of support for multi-line lambdas) prevents you from implementing an accumulator generator in the terse style of the first code sample above? Would Python ever support such a thing in the future, as Paul Graham speculates?
The example is, first of all, contrived. After defining the accumulator, you would use it in Python like this:
acc = foo(6)
acc(2)
acc(4)
For what use? In Python you would do this:
acc = 6
acc += 2
acc += 4
I don't know if defining an accumulator in lisp makes sense, but in Python you don't need to define one, because you would have it built in, so to speak.
Second of all, the question you ask hits the nail on the head. What prevents Python from doing this in a "terse" style? The attitude of Python is that it is going to be a language that is quick to develop in and that is maintainable, which means easy to read. Code golf and obtuse, terse code are not part of what Python is designed for.
But ultimately, the reason Python will never evolve this functionality is that it relies on integers being mutable, i.e. that you can do something like:
>>> g = 6
>>> g++
>>> g
7
This will not happen in Python, where integers are immutable. You can't increase the value of an integer. This simplifies both the language and its use a lot. For example, if integers were mutable, they could not be used as keys in dictionaries.
Essentially the example centers around increasing the value of integers, something you can't do in Python. In Python, when you add two integers you get a third integer back. You do not increase the value of the first one.
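The point about immutability can be seen in a few lines (a sketch):

```python
x = 5
y = x            # y and x refer to the same int object
y += 1           # builds a new int 6 and rebinds y; the 5 itself is untouched
print(x, y)      # 5 6

d = {5: "five"}  # immutability is what makes ints safe as dictionary keys
print(d[x])      # five
```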
So Python will never become lisp, and only people who have used lisp for too long think it should, or persist in the "Python is almost lisp" idiom. And it can't do an accumulator in just a few lines, because it neither needs to nor wants to.
He actually describes one reason in his followup post. His brief discussion there covers both of the reasons I mention below, although his take on it is a bit different.
As he talks about earlier in the post, part of what he's concerned with is the difference between statements and expressions. In Python += is a statement, and lambdas cannot contain statements, only expressions.
However, there's another issue. He wants his function to take "a number" as input, but he makes a distinction between "plus" and "increment" (as do many programming languages). My own position would be that there is no such distinction for numbers, only for variables (or "objects" or similar things): there is no such thing as "incrementing" the number 5. In this sense, you still can't write a Python lambda that increments a variable containing a built-in numeric type, but you can do it if it accepts a mutable object instead of a raw number. And you could write your own MutableNumber class that works this way and make it totally interoperable with existing numeric types. So in this sense the reason Python doesn't support this has to do with the design of its types (i.e., numbers are immutable) rather than the sort of functional issues he discusses in the post.
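A minimal sketch of the kind of MutableNumber wrapper described above; the class name and its add method are illustrative inventions, not a standard API:

```python
class MutableNumber:
    """A mutable holder for a number: adding mutates the holder in place."""
    def __init__(self, value):
        self.value = value
    def add(self, i):
        self.value += i   # rebinds self.value inside the holder; no int is mutated
        return self.value

# With a mutable object, a one-line lambda really can "increment" its argument:
inc = lambda m, i: m.add(i)

n = MutableNumber(5)
print(inc(n, 3))   # 8
print(inc(n, 2))   # 10
```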
Whether any of this is actually a problem for the language is, of course, another question.
It can. The trick is to use a container to hold the original integer and to set and access this number without using the assignment operator.
>>> g=lambda n: (lambda d: lambda i: (d.__setitem__('v', d['v']+i),d['v'])[1])({'v': n})
>>> x=g(3)
>>> x(1)
4
>>> x(1)
5
>>> x(10)
15
>>>
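The dict in the lambda above is only there to smuggle mutable state into the closure. On Python 3, a sketch using the nonlocal statement does the same thing more readably (though it needs a function definition rather than a single lambda):

```python
def g(n):
    def acc(i):
        nonlocal n   # gives the inner function write access to the enclosing n
        n += i
        return n
    return acc

x = g(3)
print(x(1))    # 4
print(x(1))    # 5
print(x(10))   # 15
```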
Trying to do this as clearly as I could, separating the lambdas into variables:
concat = lambda arr, val: (arr.append(val), arr)[1]
f = lambda n: (lambda i: concat(n,n.pop(0)+i)[0])
accumulate = lambda n: f([n])
a=accumulate(9)
a(1) #10
a(2) #12
For functions with closely related formal parameters, such as
def add_two_numbers(n1, n2):
    return n1 + n2

def multiply_two_numbers(n1, n2):
    return n1 * n2
Is it a good idea to give the same names to the parameters in both functions, as shown above?
The alternative is to rename the parameters in one of the functions. For example:
def add_two_numbers(num1, num2):
    return num1 + num2
Keeping them the same in both functions looks more consistent since the parameters each one takes are analogous, but is that more confusing?
Similarly, which would be better for the example below?
def count(steps1, steps2):
    a = 0
    b = 0
    for i in range(steps1):
        a += 1
    for j in range(steps2):
        b += 1
    return a, b

def do_a_count(steps1, steps2):
    print "Counting first and second steps..."
    print count(steps1, steps2)
Otherwise, changing the arguments in the second function gives:
def do_a_count(s1, s2):
    print "Counting first and second steps..."
    print count(s1, s2)
Again, I'm a little unsure of which way is best. Keeping the same parameter names makes the relation between the two functions clearer, while the second means there is no possibility of confusing parameters in the two functions.
I have done a bit of searching around (including skimming through PEP-8), but couldn't find a definitive answer. (Similar questions on SO I found included:
Naming practice for optional argument in python function
and
In Python, what's the best way to avoid using the same name for a __init__ argument and an instance variable?)
I would keep the names the same unless you have a good reason to use different names ... Remember that even positional arguments can be called by keywords, e.g.:
>>> def foo(a, b):
...     print a
...     print b
...
>>> foo(b=2, a=1)
Keeping the "keywords" the same helps in that rare, but legal corner case ...
In general, you should give your function's parameters names that make sense, and not even consider anything to do with other functions when you choose those names. The names of two functions' parameters simply don't have anything to do with each other, even if the functions do similar things.
Although... if the functions do do similar things, maybe that's a sign that you've missed a refactoring? :)
The important guideline here is that you should give them reasonable names—ideally the first names that you'd guess the parameters to have when you come back to your code a year later.
Every programmer has a set of conventions. Some people like to call integer arguments to binary functions n1 and n2; others like n, m; or lhs, rhs; or something else. As long as you pick one and stick to it (as long as it's reasonable), anyone else can read your code, and understand it, after a few seconds learning your style. If you use different names all over the place, they'll have to do a lot more guessing.
As mgilson points out, this allows you to use keyword parameters if you want. It also means an IDE with auto-complete is more useful—when you see the (…n1…, …n2…) pop up you know you want to pass two integers. But mainly it's for readability.
Of course if there are different meanings, give them different names. In some contexts it might be reasonable to have add_two_numbers(augend, addend) and multiply_two_numbers(factor, multiplicand).