I need a loop containing range(3,666,2) and 2 (for the sieve of Eratosthenes, by the way). This doesn't work ("AttributeError: 'range' object has no attribute 'extend'" ... or "append"):
primes = range(3,limit,2)
primes.extend(2)
How can I do it in the simple intuitive pythonesque way?
range() in Python 3 returns a dedicated immutable sequence object. You'll have to turn it into a list to extend it:
primes = list(range(3, limit, 2))
primes.append(2)
Note that I used list.append(), not list.extend() (which expects a sequence of values, not one integer).
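To make the append/extend distinction concrete, here is a small sketch (limit = 12 is an assumed example value; once primes is a list, extend(2) fails with a TypeError rather than the AttributeError from the question):

```python
limit = 12  # assumed small example value
primes = list(range(3, limit, 2))
primes.append(2)        # append adds the single integer 2
try:
    primes.extend(2)    # extend expects an iterable, not an int
except TypeError:
    pass
primes.extend([13])     # extend is for adding several values at once
print(primes)  # [3, 5, 7, 9, 11, 2, 13]
```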
However, you probably want to start your loop with 2, not end it with 2. Moreover, materialising the whole range into a list costs memory and defeats the laziness of the range object. Use iterator chaining instead:
from itertools import chain
primes = chain([2], range(3, limit, 2))
Now you can loop over primes without materializing a whole list in memory, and still include 2 at the start of the loop.
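A quick check of the iteration order (materialising here only to display it; limit is an assumed example value):

```python
from itertools import chain

limit = 12  # assumed example value
primes = chain([2], range(3, limit, 2))
result = list(primes)  # materialised only to show the order
print(result)  # [2, 3, 5, 7, 9, 11]
```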
If you're only looping and don't want to materialise, then:
from itertools import chain
primes = chain([2], range(3, limit, 2))
I think the 2 makes more sense at the start, though...
I'm new in python and I wrote code to have the product of items in a list without using the multiplication sign:
def witOutmultiply(some_list):
    first_num = some_list[0]
    result = 0
    for n in some_list[1:]:
        for i in range(n):
            result += first_num
        first_num = result
        result = 0
    return first_num
q = [2,4,5,6,10,15]
print(witOutmultiply(q))
My question is: can I use comprehensions in this case, and can I get the result with just one loop? Thanks
Yes — you can use a generator expression together with sum and range to replace the inner loop:
q = [2,4,5,6,10,15]

mult = q[0]
for n in q[1:]:
    mult = sum(mult for _ in range(n))

print(mult)
# 36000
Here is an answer with no explicit loop at all that satisfies your condition of "no multiplication sign", and it is therefore very fast. The reduce function repeats an operation between the members of an iterable, reducing it to a single value, while the mul function multiplies two numbers. The 1 at the end of the reduce call is the initial value, which gives a reasonable result if the iterable (list) is empty. No multiplication sign in sight!
from operator import mul
from functools import reduce

def prod_seq(seq):
    """Return the product of the numbers in an iterable."""
    return reduce(mul, seq, 1)
Comprehensions are used to build data structures. A list comprehension builds a list, a dict comprehension builds a dict, etc. Since you want a single value rather than a data structure in your computation, there's no good reason to use a comprehension.
There probably are ways to avoid using two loops, but it's not going to be easy, since your outer loop does several operations, not just one. Most of the easy ways to avoid an explicit loop will just be hiding one or more loops in some function call like sum. I think that for your chosen algorithm (doing multiplication by adding), your current code is quite good and there's no obvious way to improve it.
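For what it's worth, here is one way to fold the outer loop into a reduce call; it still hides a loop inside sum, exactly as described above, and product_by_addition is just an illustrative name:

```python
from functools import reduce

def product_by_addition(some_list):
    # fold the repeated-addition trick into one reduce call:
    # sum(acc for _ in range(n)) adds acc to itself n times, i.e. acc * n
    return reduce(lambda acc, n: sum(acc for _ in range(n)),
                  some_list[1:], some_list[0])

q = [2, 4, 5, 6, 10, 15]
print(product_by_addition(q))  # 36000
```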
from numpy import prod
print(prod(q))
#36000
The following is a simplified example of my code.
>>> def action(num):
        print "Number is", num

>>> items = [1, 3, 6]
>>> for i in [j for j in items if j > 4]:
        action(i)
Number is 6
My question is the following: is it bad practice (for reasons such as code clarity) to simply replace the for loop with a comprehension which will still call the action function? That is:
>>> (action(j) for j in items if j > 2)
Number is 6
This shouldn't use a generator or comprehension at all.
def action(num):
    print "Number is", num

items = [1, 3, 6]

for j in items:
    if j > 4:
        action(j)
Generators evaluate lazily. The expression (action(j) for j in items if j > 2) merely returns a generator object to the caller; nothing happens until you explicitly exhaust it. List comprehensions evaluate eagerly, but in this particular case you would be left with a list that serves no purpose. Just use a regular loop.
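A minimal sketch of that laziness, using a list append as a stand-in side effect:

```python
items = [1, 3, 6]
log = []

g = (log.append(j) for j in items if j > 2)
# Nothing has run yet: g is just a generator object, so log is still empty.
assert log == []

list(g)  # exhausting the generator finally triggers the side effects
assert log == [3, 6]
```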
This is bad practice. Firstly, your code fragment does not produce the desired output. You would instead get something like: <generator object <genexpr> at 0x03D826F0>.
Secondly, list comprehensions are for creating sequences, and generators are for creating streams of objects. Typically, they do not have side effects. Your action function is a prime example of a side effect -- it prints its input and returns nothing. Rather, for each item it yields, a generator should take an input and compute some output, e.g.
doubled_odds = [x*2 for x in range(10) if x % 2 != 0]
By using a generator you are obfuscating the purpose of your code, which is to mutate global state (printing something), and not to create a stream of objects.
Whereas, just using a for loop makes the code slightly longer (basically just more whitespace), but immediately you can see that the purpose is to apply function to a selection of items (as opposed to creating a new stream/list of items).
for i in items:
    if i > 4:
        action(i)
Remember that generators are still looping constructs and that the underlying bytecode is more or less the same (if anything, generators are marginally less efficient), and you lose clarity. Generators and list comprehensions are great, but this is not the right situation for them.
While I personally favour Tigerhawk's solution, there might be a middle ground between his and willywonkadailyblah's solution (now deleted).
One of willywonkadailyblah's points was:
Why create a new list instead of just using the old one? You already have the condition to filter out the correct elements, so why put them away in memory and come back for them?
One way to avoid this problem is to use lazy evaluation of the filtering i.e. have the filtering done only when iterating using the for loop by making the filtering part of a generator expression rather than a list comprehension:
for i in (j for j in items if j > 4):
    action(i)
Output
Number is 6
In all honesty, I think Tigerhawk's solution is the best for this, though. This is just one possible alternative.
The reason that I proposed this is that it reminds me a lot of LINQ queries in C#, where you define a lazy way to extract, filter and project elements from a sequence in one statement (the LINQ expression) and can then use a separate for each loop with that query to perform some action on each element.
I have two lists say
A = [1,3]
B = [1,3,5,6]
I want to know the index of the first differing element between these lists (2 in this case).
Is there a simple way to do this, or do I need to write a loop?
You can use the following generator expression within the next() function, using enumerate() together with itertools.zip_longest() (plain zip() would stop at the end of the shorter list and never reach the differing index in this example):
>>> from itertools import zip_longest
>>> next(ind for ind, (i, j) in enumerate(zip_longest(A, B)) if i != j)
2
Perhaps the loop you mentioned is the most obvious way, if not necessarily the prettiest. Still, every O(n) complexity solution is fine by me.
lesser_length = min(len(A), len(B))
answer = lesser_length  # if one list is a prefix of the other, the loop
                        # below never finds a mismatch, and the first
                        # "extra" index is the answer
for i in xrange(lesser_length):
    if A[i] != B[i]:
        answer = i
        break
Use range instead of xrange in Python 3. A lazy approach is best, given that you don't know where the difference between the lists will occur. (In Python 2, xrange is a lazy sequence object; in Python 3 it was renamed to range, replacing the old list-building range().)
A list comprehension is also viable. I find this to be more readable.
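A sketch of that generator approach in Python 3, using zip_longest so that lists of unequal length (as in the example) are handled; the sentinel fill value is my own addition, to avoid mistaking padding for real data:

```python
from itertools import zip_longest

A = [1, 3]
B = [1, 3, 5, 6]

_sentinel = object()  # cannot collide with any genuine list element
first_diff = next(
    (i for i, (a, b) in enumerate(zip_longest(A, B, fillvalue=_sentinel))
     if a != b),
    None,  # default: None means the lists are identical
)
print(first_diff)  # 2
```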
I have a circular array. I created it with the following:
from itertools import cycle
myArray = ['a','b','c','d']
pool = cycle(myArray)
Now I want to print the nth item in pool where n could be larger than 4. Normally this would be a simple use of the modulo function but logically I think Python has a method which will know the number of elements in the pool (4 in this example) and automatically apply the modulo function.
For example the 1st and 5th item is 'a'. So I'm hoping for, logically, the equivalent of pool[0] and pool[4] giving me 'a'.
Is there such a method?
No, there's no built-in method to accomplish what you're attempting to do. As suggested earlier, you could use zip, but that would involve indexing into the result based on your sequence, as well as generating n elements out to the item you want.
Sometimes the simplest approach is the clearest. Use modulo to accomplish what you're after.
def fetch_circular(n):
    myArray = ['a', 'b', 'c', 'd']
    return myArray[n % len(myArray)]
I think you may be confusing arrays with generators.
The modulo function of an array is the way to go, in terms of performance.
cycle is a function which generates elements as they are requested. It is not a Cycle class with convenient methods. You can see the equivalent implementation in the documentation, and you'll probably understand the idea behind it:
https://docs.python.org/2/library/itertools.html#itertools.cycle
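For reference, a roughly equivalent pure-Python version (adapted from that documentation page; named my_cycle here to avoid shadowing the real function):

```python
from itertools import islice

def my_cycle(iterable):
    # first pass: yield and remember each element
    saved = []
    for element in iterable:
        yield element
        saved.append(element)
    # then repeat the saved elements forever
    while saved:
        for element in saved:
            yield element

result = list(islice(my_cycle(['a', 'b', 'c', 'd']), 6))
print(result)  # ['a', 'b', 'c', 'd', 'a', 'b']
```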
A list is definitely the way to go, but if you actually had a cycle object and wanted the nth item wrapping around, you could use islice:
from itertools import cycle, islice

myArray = ['a', 'b', 'c', 'd']
pool = cycle(myArray)
print(next(islice(pool, 4, None)))  # 5th item (index 4)
a
Note that once you call next() on the islice, you have started consuming the cycle; if you actually want to be constantly rotating, you may want a collections.deque instead.
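A small sketch of the deque alternative (using rotate, which is my assumption about what "constantly rotating" means here):

```python
from collections import deque

d = deque(['a', 'b', 'c', 'd'])
d.rotate(-1)   # rotate left by one: 'b' is now at the front
print(d[0])    # b
d.rotate(-3)   # three more left rotations bring 'a' back to the front
print(d[0])    # a
```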
Your pool object is already an infinite iterator that will keep looping through myArray forever, so all you need is to zip a range with pool this way:
>>> pool = cycle(myArray)
>>> for i, item in zip(range(10), pool):
        print i, item
0 a
1 b
2 c
3 d
4 a
5 b
6 c
7 d
8 a
9 b
>>>
There are the map, reduce, filter functions to make list comprehensions.
What is the difference between passing an xrange argument or a range argument to each of these functions?
For example:
map(someFunc, range(1,11))
OR
map(someFunc,xrange(1,11))
In Python 2, range returns an actual list, while xrange returns a lazy sequence object that can be iterated over. Since map only cares that its argument is iterable, both are applicable, though xrange uses less memory. In Python 3, range takes over xrange's lazy behaviour and xrange is gone, so your only option is range. (You can use list(range(10)) to generate the actual list if so desired.)
With filter, the modulus operator returns 0, a falsy value, for the even numbers: 2 mod 2 is 0. As such, the even numbers are filtered out.
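For example, a small sketch of the filter case that sentence describes:

```python
# filter keeps the items for which the predicate is truthy;
# n % 2 is 0 (falsy) for even n, so the evens are dropped
odds = list(filter(lambda n: n % 2, range(10)))
print(odds)  # [1, 3, 5, 7, 9]
```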
Actually, the map, reduce, and filter functions are an alternative to list comprehensions. The term "list comprehension" refers to the specific syntactic construct; anything that doesn't look like a list comprehension is necessarily not a list comprehension.
They actually predate list comprehensions, and are borrowed from other languages. But most of those languages have ways of constructing anonymous functions which are more powerful than Python's lambda, so functions such as these are more natural. List comprehensions are considered a more natural fit to Python.
The difference between range and xrange is that range actually constructs a list containing the numbers that form the range, whereas an xrange is an object that knows its endpoints and can iterate over itself without ever actually constructing the full list of values in memory. xrange(1,1000) takes up no more space than xrange(1,5), whereas range(1,1000) generates a 999-element list.
If range() and xrange() were implemented in the Python language, they would look something like this:
def xrange(start, stop=None, step=1):
    if stop is None:
        stop, start = start, 0
    i = start
    while i < stop:
        yield i
        i += step

def range(start, stop=None, step=1):
    if stop is None:
        stop, start = start, 0
    acc, i = [], start
    while i < stop:
        acc.append(i)
        i += step
    return acc
As you can see, range() creates a list and returns it, while xrange() lazily generates the values in a range on demand. This has the advantage that the overhead for creating a list is avoided in xrange(), since it doesn't store the values or create a list object. For most instances, there is no difference in the end result.
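In Python 3, where range is the lazy version, that laziness is directly observable (a quick sketch):

```python
r = range(1, 1000)
assert len(r) == 999   # length computed from the endpoints, no list built
assert 500 in r        # membership test without building a list
assert r[10] == 11     # indexing works too
```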
One obvious difference is that xrange() doesn't support slicing:
>>> range(10)[2:5]
[2, 3, 4]
>>> xrange(10)[2:5]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sequence index must be integer, not 'slice'
>>>
It does, however, support indexing:
>>> xrange(11)[10]
10