I recently started learning Python, and the concept of for loops is still a little confusing for me. I understand that it generally follows the format for x in y, where y is just some list.
The for-each loop for (int n: someArray)
becomes for n in someArray,
And the for loop for (i = 0; i < 9; i += 2) can be represented by for i in range(0, 9, 2).
Suppose instead of a constant increment, I wanted i*=2, or even i*=i. Is this possible, or would I have to use a while loop instead?
As you say, a for loop iterates through the elements of a list. The list can contain anything you like, so you can construct a list beforehand that contains each step.
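For example, a minimal sketch of building the list of steps up front, using the doubling step and the bound of 9 from the question:
steps = []
i = 1
while i < 9:
    steps.append(i)
    i *= 2

for i in steps:
    print(i)   # prints 1, 2, 4, 8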
A for loop can also iterate over a "generator", which is a small piece of code instead of an actual list. In Python 3, range() is itself a lazy sequence that works this way rather than building a list (in Python 2.x, range() returned a list and xrange() was the lazy version).
For example:
def doubler(x):
    while True:
        yield x
        x *= 2

for i in doubler(1):
    print(i)
The above for loop will print
1
2
4
8
and so on, until you press Ctrl+C.
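If you'd rather not rely on Ctrl+C, a small sketch (reusing the doubler() generator above) is to break once the value reaches the bound from the question:
for i in doubler(1):
    if i >= 9:
        break
    print(i)   # prints 1, 2, 4, 8 and then stops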
You can use a generator expression to do this efficiently and with little excess code:
for i in (2**x for x in range(10)):  # In Python 2.x, use `xrange()`.
    ...
Generator expressions work just like defining a manual generator (as in Greg Hewgill's answer), with a syntax similar to a list comprehension. They are evaluated lazily, meaning that they don't build a list at the start of the operation, which can give much better performance on large iterables.
So this generator works by waiting until it is asked for a value, then asking range(10) for a value, raising 2 to that power, and passing the result back to the for loop. It does this repeatedly until range(10) is exhausted.
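To see the laziness in action, a quick sketch pulling values by hand with next():
gen = (2**x for x in range(10))
print(next(gen))  # 1 -- nothing is computed until a value is requested
print(next(gen))  # 2
print(next(gen))  # 4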
Bear in mind that the 'list' part of the Python for loop can be any iterable sequence.
Examples:
A string:
for c in 'abcdefg':
    # deal with the string on a character by character basis...
A file:
with open('somefile','r') as f:
    for line in f:
        # deal with the file line by line
A dictionary:
d = {1:'one', 2:'two', 3:'three'}
for key, value in d.items():
    # deal with the key:value pairs from a dict
A slice of a list:
l = range(100)
for e in l[10:20:2]:
    # every other element between index 10 and 20 in l
etc etc etc etc
So it really is a lot deeper than 'just some list'
As others have stated, just set the iterable to be what you want it to be for your example questions:
for e in (i*i for i in range(10)):
    # the squares of the sequence 0-9...
l = [1, 5, 10, 15]
for i in (x*2 for x in l):
    # each element of the list l, multiplied by 2...
You will want to use a list comprehension for this:
print [x**2 for x in xrange(10)] # X to the 2nd power.
and
print [x**x for x in xrange(10)] # X to the Xth power.
The list comprehension syntax is as follows:
[EXPRESSION for VARIABLE in ITERABLE if CONDITION]
Under the hood, it acts similarly to the map() and filter() functions:
def f(VARIABLE): return EXPRESSION
def c(VARIABLE): return CONDITION

map(f, filter(c, ITERABLE))
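A concrete instance of that equivalence, with a condition included (a quick sketch; list() is only needed on Python 3, where map and filter are lazy):
evens_squared = [x**2 for x in range(10) if x % 2 == 0]
evens_squared_mf = list(map(lambda x: x**2, filter(lambda x: x % 2 == 0, range(10))))

assert evens_squared == evens_squared_mf == [0, 4, 16, 36, 64]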
For example:
def square(x): return x**2
print map(square, xrange(10))
and
def hypercube(x): return x**x
print map(hypercube, xrange(10))
These can be used as an alternative approach if you don't like list comprehensions.
You could also use a plain for loop, but that would be less idiomatic Python...
Just as an alternative, how about generalizing the iterate/increment operation to a lambda function, so you can do something like this:
for i in seq(1, 9, lambda x: x*2):
    print i
...
1
2
4
8
Where seq is defined below:
#!/bin/python
from timeit import timeit

def seq(a, b, f):
    x = a
    while x < b:
        yield x
        x = f(x)

def testSeq():
    l = tuple(seq(1, 100000000, lambda x: x*2))
    #print l

def testGen():
    l = tuple(2**x for x in range(27))
    #print l

testSeq()
testGen()

print "seq", timeit('testSeq()', 'from __main__ import testSeq', number = 1000000)
print "gen", timeit('testGen()', 'from __main__ import testGen', number = 1000000)
The difference in performance isn't that much:
seq 7.98655080795
gen 6.19856786728
[EDIT]
To support reverse iteration and with a default argument...
def seq(a, b, f = None):
    x = a
    if b > a:
        if f is None:
            f = lambda x: x+1
        while x < b:
            yield x
            x = f(x)
    else:
        if f is None:
            f = lambda x: x-1
        while x > b:
            yield x
            x = f(x)

for i in seq(8, 0, lambda x: x/2):
    print i
Note: this behaves differently from range/xrange, where the direction of the </> test is chosen by the sign of the step, rather than by whether the end value is above or below the start value.
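A quick way to see that difference, using the seq() defined just above:
print(list(range(8, 0)))       # []  -- range needs an explicit negative step to count down
print(list(range(8, 0, -1)))   # [8, 7, 6, 5, 4, 3, 2, 1]
print(list(seq(8, 0)))         # [8, 7, 6, 5, 4, 3, 2, 1]  -- seq infers the direction from a > b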
Is there any way to get the next n values of a generator without looping or calling next() n times?
The thing is that the generator in this case is infinite, and cannot be converted into a list.
Here is the generator function:
def f():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b
The following loops both give the desired result, but I would like to know if there is some other method of doing this.
gen = f()
n = 0
while n < 10:
    print(next(gen))
    n += 1
or..
for n, i in enumerate(f()):
    if n < 10:
        print(i)
    else:
        break
There are several ways to do this. One way is to use a list comprehension, similar to what you already have above. For instance:
gen = f()
elements = [next(gen) for _ in range(10)]
Another way is to use the itertools module, for instance the takewhile() or islice() functions.
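For instance, a small sketch with itertools.islice, reusing the f() generator from the question:
from itertools import islice

first_ten = list(islice(f(), 10))
print(first_ten)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]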
Also check out How to get the n next values of a generator in a list (python).
I'm kinda new to programming and Python, and I'm self-learning before going to uni, so please be gentle, I'm a newbie. I hope my English won't have too many grammatical errors.
Basically, I had this exercise in a book I'm currently reading: take a list of tuples as a function parameter, then raise every item in each tuple to the 2nd power and sum the items up.
My code looks like this and works fine as long as every tuple has the same number of values as the for loop unpacks:
def summary(xs):
    for x, y, z in xs:
        print(x*x + y*y + z*z)

xs = [(2,3,4), (2,-3,4), (1,2,3)]
summary(xs)
However, if one of the tuples has fewer values than the loop tries to unpack, I get an error: ValueError: not enough values to unpack (expected 3, got 0):
xs =[(2,3,4), (), (1,2,3)]
I would like to know how to make a function that would accept an empty tuple like the () shown before, with no values, and return 0 for it. I have been trying multiple ways to solve this for 2 days already, and googling as well, but it occurs to me I'm either missing something or I'm not aware of a function I could use. Thank you all for the help.
One way is to iterate over the tuple values, this would also be the way to tackle this problem in nearly every programming language:
def summary(xs):
    for item in xs:
        s = 0
        for value in item:
            s += value**2
        print(s)
Or using a list comprehension:
def summary(xs):
    for item in xs:
        result = sum([x**2 for x in item])
        print(result)
Also note that sum([]) returns 0 for an empty iterable, so an empty tuple simply prints 0.
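A quick usage check with the input from the question (either version above prints the same thing):
xs = [(2, 3, 4), (), (1, 2, 3)]
summary(xs)
# 29
# 0
# 14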
Well, the issue is that the empty inner tuple doesn't have enough values to unpack into three variables. The simplest way to get around it is to check that you have enough values before unpacking, i.e.:
def summary(xs):
    for values in xs:
        if values and len(values) == 3:
            x, y, z = values  # or don't unpack, refer to them by index, i.e. v[0], v[1]...
            print(x*x + y*y + z*z)
        else:
            print(0)
Or use a try..except block:
def summary(xs):
    for values in xs:
        try:
            x, y, z = values  # or don't unpack, refer to them by index, i.e. v[0], v[1]...
            print(x*x + y*y + z*z)
        except ValueError:  # check for IndexError if not unpacking
            print(0)
One way is to use try / except. In the example below, we use a generator function that yields the sum of squares for each tuple and falls back to yielding 0 if a ValueError is raised.
While you are learning, I highly recommend you practice writing functions which return or yield rather than using them to print values.
def summary(xs):
    for item in xs:
        try:
            yield sum(i**2 for i in item)
        except ValueError:
            yield 0
xs = [(2,3,4), (), (1,2,3)]
res = list(summary(xs))
print(res)
[29, 0, 14]
Or to actually utilise the generator in a lazy fashion:
for i in summary(xs):
    print(i)
29
0
14
You should use a "len > 0" condition. This code should work for any list or tuple length:
def summary(xs):
    for tup in xs:
        prod = [a*a for a in tup if len(tup) > 0]
        print(sum(prod))
Note that I defined a "prod" list in order to use "sum", so that the total is not calculated by hand. It replaces your "x*x + y*y + z*z" and works for any tuple length.
It often pays to separate your algorithm into functions that each do one thing: in this case, a function to sum the squares of a sequence of values and a function to print the results. It is also very helpful to keep your variable names meaningful: here your xs is a list of tuples, so it might be better named xss.
def sum_of_squares(xs):
    # sum of the squares of the values in xs
    return sum(x * x for x in xs)

def summary(xss):
    for xs in xss:
        print(sum_of_squares(xs))

xss = [(2,3,4), (), (1,2,3)]
summary(xss)
or
list(map(print, (sum_of_squares(xs) for xs in xss)))  # Python 3, where print is a function
I've been messing around in Python with generator functions. I want to write a function that takes a generator whose values are tuples and returns a list of generators, where each generator's values correspond to one index in the original tuples.
Currently, I have a function which accomplishes this for a hardcoded number of elements in the tuple. Here is my code:
import itertools
def tee_pieces(generator):
    copies = itertools.tee(generator)
    dropped_copies = [(x[0] for x in copies[0]), (x[1] for x in copies[1])]
    # dropped_copies = [(x[i] for x in copies[i]) for i in range(2)]
    return dropped_copies
def gen_words():
    for i in "Hello, my name is Fred!".split():
        yield i

def split_words(words):
    for word in words:
        yield (word[:len(word)//2], word[len(word)//2:])

def print_words(words):
    for word in words:
        print(word)
init_words = gen_words()
right_left_words = split_words(init_words)
left_words, right_words = tee_pieces(right_left_words)
print("Left halves:")
print_words(left_words)
print("Right halves:")
print_words(right_words)
This correctly splits the generator, leading to left_words containing the left halves and right_words containing the right halves.
The problem comes when I try to parameterize the number of generators to be created, using the commented-out line above. As far as I know it should be equivalent, but when I use that line instead, both left_words and right_words end up containing the right half of each word, giving an output like this:
Left halves:
lo,
y
me
s
ed!
Right halves:
lo,
y
me
s
ed!
Why is this happening? How can I accomplish the desired result, namely parameterizing the number of pieces to split the generator into?
This has to do with Python's scoping rules and the late binding of closure variables. The classic "surprising" example for demonstrating it:
funcs = [ lambda: i for i in range(3) ]
print(funcs[0]())
=> 2 #??
print(funcs[1]())
=> 2 #??
print(funcs[2]())
=> 2
Your examples is another result of the same rules.
To fix, you can "break" the scoping with an additional function:
def make_gen(i):
    return (x[i] for x in copies[i])

dropped_copies = [make_gen(i) for i in range(2)]
This binds the value of i to the specific value passed in each call to make_gen, which achieves the desired behavior. Without it, each generator is bound to "the current value of the variable named i", which ends up being the same value for all the generators you create (as there's only one variable named i).
To add to shx2's answer, you could also substitute a lambda for the additional function:
dropped_copies = [(lambda j: (x[j] for x in copies[j]))(i) for i in range(2)]
This too creates a new scope when the lambda gets called, as is made clear by the different variable name. It would, however, also work with the same name, since the lambda's parameter shadows the variable used inside the generator expression:
dropped_copies = [(lambda i: (x[i] for x in copies[i]))(i) for i in range(2)]
This sort of scoping seems very confusing but becomes more intuitive if you rewrite the generator as a for loop:
dropped_copies = []
for i in range(2):
    dropped_copies.append((x[i] for x in copies[i]))
Note that this is broken in the same way the original list comprehension version is.
This is because dropped_copies is a pair of iterators, and when the iterators are evaluated, i has already been incremented to 1.
Try using a list comprehension instead, and you can see the difference:
dropped_copies = [[x[i] for x in copies[i]] for i in range(2)]
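For a small standalone demonstration of that difference (the pairs data here is made up for illustration):
import itertools

pairs = [(1, 'a'), (2, 'b')]
copies = itertools.tee(pairs, 2)

# The inner list comprehensions are evaluated immediately, while i still has its loop value.
eager = [[x[i] for x in copies[i]] for i in range(2)]
print(eager)  # [[1, 2], ['a', 'b']]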
I'm attempting to write a function that calculates the number of unique permutations of a string. For example aaa would return 1 and abc would return 6.
I'm writing the method like this:
(Pseudocode:)
len(string)! / (A!*B!*C!*...)
where A,B,C are the number of occurrences of each unique character. For example, the string 'aaa' would be 3! / 3! = 1, while 'abc' would be 3! / (1! * 1! * 1!) = 6.
My code so far is like this:
def permutations(n):
    '''
    returns the number of UNIQUE permutations of n
    '''
    from math import factorial
    lst = []
    n = str(n)

    for l in set(n):
        lst.append(n.count(l))

    return factorial(len(n)) / reduce(lambda x,y: factorial(x) * factorial(y), lst)
Everything works fine, except when I try to pass a string that has only one unique character, i.e. aaa - I get the wrong answer:
>>> perm('abc')
6
>>> perm('aaa')
2
>>> perm('aaaa')
6
Now, I can tell the problem is in running the lambda function with factorials on a list of length 1. I don't know why, though. Most other lambda functions work on a list of length 1 even if they expect two elements:
>>> reduce(lambda x,y: x * y, [3])
3
>>> reduce(lambda x,y: x + y, [3])
3
This one doesn't:
>>> reduce(lambda x,y: ord(x) + ord(y), ['a'])
'a'
>>> reduce(lambda x,y: ord(x) + ord(y), ['a','b'])
195
Is there something I should be doing differently? I know I can rewrite the function in many different ways that will circumvent this, (e.g. not using lambda), but I'm looking for why this specifically doesn't work.
See the documentation for reduce(): there is an optional 'initializer' argument that is placed before all other elements in the list, so that the behavior for one-element lists is consistent. For example, for your ord() lambda you could set the initializer to the character with an ord() of 0:
>>> reduce(lambda x, y: ord(x) + ord(y), ['a'], chr(0))
97
Python's reduce function doesn't know what the initial value should be unless you tell it: there is an optional third argument that takes an initial value. Supply a sensible initial value and your reduce should work beautifully.
Also, from the comments, you should only apply factorial to the second argument in your lambda:
reduce(lambda x,y: x * factorial(y), lst, 1)
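Putting that together, a sketch of how the corrected function might look (Python 3, where reduce lives in functools; integer division keeps the result an int):
from functools import reduce
from math import factorial

def perm(s):
    """Number of unique permutations of the characters in s."""
    counts = [s.count(c) for c in set(s)]
    return factorial(len(s)) // reduce(lambda acc, n: acc * factorial(n), counts, 1)

print(perm('abc'))   # 6
print(perm('aaa'))   # 1
print(perm('aaaa'))  # 1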
If you want len(s)! / (A!*B!*C!) then the use of reduce() won't work as written, as it will calculate factorial(factorial(A)*factorial(B))*factorial(C). In other words, reduce() feeds the accumulated result back in as the first argument, so your lambda applies factorial to values that have already been processed.
Instead, you'll need to generate the list of factorials, then multiply them together:
import operator
reduce(operator.mul, [factorial(x) for x in lst])
Reduce works by first computing the result for the first two elements in the sequence and then proceeding pseudo-recursively from there. A list of size 1 is a special case: the single element is returned unchanged.
I would use a list comprehension here:
from math import prod  # Python 3.8+; alternatively use numpy.prod or a reduce with operator.mul
prod([factorial(val) for val in lst])
Good luck!
I'm looking for a "nice" way to process a list where some elements need to be expanded into more elements (only once, no expansion on the results).
Standard iterative way would be to do:
i = 0
while i < len(l):
    if needs_expanding(l[i]):
        new_is = expand(l[i])
        l[i:i+1] = new_is
        i += len(new_is)
    else:
        i += 1
which is pretty ugly. I could rewrite the contents into a new list with:
nl = []
for x in l:
    if needs_expanding(x):
        nl += expand(x)
    else:
        nl.append(x)
But they both seem too long. Or I could simply do 2 passes and flatten the list later:
flatten(expand(x) if needs_expanding(x) else x for x in l)
# or
def try_expanding(x)....
flatten(try_expanding(x) for x in l)
but this doesn't feel "right" either.
Are there any other clear ways of doing this?
Your last two examples are what I would do. I'm not familiar with flatten(), but if you have such a function then that looks ideal. You can also use the built-in sum():
sum((expand(x) if needs_expanding(x) else [x] for x in l), [])
sum((needs_expanding(x) and expand(x) or [x] for x in l), [])
If you do not need random access in the list you are generating, you could also write a generator.
def iter_new_list(old_list):
    for x in old_list:
        if needs_expanding(x):
            for y in expand(x):
                yield y
        else:
            yield x

new_list = list(iter_new_list(old_list))
This is functionally equivalent to your second example, but it might be more readable in your real-world situation.
Also, Python's style guide (PEP 8) discourages the use of lowercase-L as a variable name, as it is nearly indistinguishable from the numeral one.
The last one is probably your most Pythonic option, but you could also try an implied loop (in Python 3, a lazy iterator) with map:
flatten(map(lambda x: expand(x) if needs_expanding(x) else x, l))
flatten(map(try_expanding, l))
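Since flatten() isn't a builtin, here is one possible sketch of it (an assumption about its intended behaviour: it flattens exactly one level and passes non-list items through unchanged):
def flatten(items):
    """Yield each element, expanding one level of nested lists."""
    for item in items:
        if isinstance(item, list):
            for sub in item:
                yield sub
        else:
            yield item

With a helper like that, the one-liners above produce a lazy iterator, which you can wrap in list() if you need an actual list.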