Here is a simple Sieve of Eratosthenes for prime numbers, which collects the multiples of each prime in an initially empty list of multiples. My question is: if I use n instead of n+1 in both for loops, the answer comes out the same.
def eratosthenes(n):
    multiples = []
    for i in xrange(2, n+1):
        if i not in multiples:
            print i
            for j in xrange(i*i, n+1, i):
                multiples.append(j)
It returns output like this:
eratosthenes(10)
2
3
5
7
while if I replace n+1 with n in both loops the output is still the same:
def eratosthenes(n):
    multiples = []
    for i in xrange(2, n):
        if i not in multiples:
            print i
            for j in xrange(i*i, n, i):
                multiples.append(j)
It returns the same output as the function above:
eratosthenes(10)
2
3
5
7
My question is: why do we use n+1 instead of n?
The Python range() and xrange() functions, like the Python slice notation, do not include the end value; xrange(2, 10) generates 8 numbers from 2 to 9, not 10. n + 1 makes sure n is part of the generated range.
Use eratosthenes(7) or eratosthenes(11) to see the difference; with 10 it happens not to matter, because 10 is not a prime number and would be filtered out anyway.
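The exclusive end point is easy to check in the interpreter (shown here with Python 3's range(); Python 2's xrange() behaves the same way):

>>> list(range(2, 10))      # the end value 10 is excluded
[2, 3, 4, 5, 6, 7, 8, 9]
>>> list(range(2, 10 + 1))  # n + 1 makes 10 itself part of the range
[2, 3, 4, 5, 6, 7, 8, 9, 10]

So with n instead of n + 1, eratosthenes(7) loops only over 2..6 and never prints 7, while eratosthenes(10) looks the same either way because 10 is composite.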
Is there a way this code could be improved so that it runs faster? Currently, this task takes between 11 and 12 seconds to run in my virtual environment.
def divisors(n):
    return sum([x for x in range(1, (round(n/2))) if n % x == 0])

def abundant_numbers():
    return [x for x in range(1, 28123) if x < divisors(x)]

result = abundant_numbers()
Whenever you look to speed something up, you should first check whether the algorithm itself should change. In this case it should.
Instead of looking for the divisors of a given number, look for the numbers that a given divisor divides. For the latter you can use a sieve-like approach, which leads to this algorithm:
def abundant_numbers(n):
    # All numbers are strict multiples of 1, except 0 and 1
    divsums = [1] * n
    for div in range(2, n//2 + 1):  # Corrected end-of-range
        for i in range(2*div, n, div):
            divsums[i] += div  # Sum up divisors for number i
    divsums[0] = 0  # Make sure that 0 is not counted
    return [i for i, divsum in enumerate(divsums) if divsum > i]

result = abundant_numbers(28123)
This runs quite fast, many times faster than the translation of your algorithm to numpy.
Note that you had a bug in your code. round(n/2) as the range-end can miss a divisor. It should be n//2+1.
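A small, hypothetical check makes that off-by-one visible (divisors_fixed is just an illustrative name; the first function is the one from the question):

def divisors(n):
    # original range end: round(n/2) stops one divisor short
    return sum([x for x in range(1, (round(n/2))) if n % x == 0])

def divisors_fixed(n):
    # corrected range end: n//2 is itself a candidate divisor
    return sum([x for x in range(1, n // 2 + 1) if n % x == 0])

print(divisors(4))        # 1 -- the proper divisor 2 is missed
print(divisors_fixed(4))  # 3 -- 1 + 2, the correct sum of proper divisors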
I am studying generators in Python. I am following the code from https://jakevdp.github.io/WhirlwindTourOfPython/12-generators.html but am totally confused by the program: when n=2, what exactly is the result of all(n % p > 0 for p in primes)?
As I understand it, on the first round of the loop primes is empty. So how come the expression is True and 2 is added to the set?
def gen_primes(N):
    """Generate primes up to N"""
    primes = set()
    for n in range(2, N):
        if all(n % p > 0 for p in primes):
            primes.add(n)
            yield n
print(*gen_primes(100))
From the documentation of all():
Return True if all elements of the iterable are true (or if the iterable is empty)
So when primes is empty, n % p > 0 for p in primes is empty because there's nothing to iterate over. Therefore all() returns True, and n is added to primes.
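You can see both behaviours directly in the interpreter:

>>> all([])                          # empty iterable -> True
True
>>> all(2 % p > 0 for p in set())    # what the first loop iteration evaluates
True
>>> all(4 % p > 0 for p in {2, 3})   # later, a composite fails the test
False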
The code does what it is supposed to do for smaller values of n, but I would like to calculate the sum of all primes below two million, and that's where the code seems to take an endless amount of time. I am working with PyScripter. Is there any way to make this code more efficient?
def is_prime(a):
    return all(a % i for i in range(2, a))

def sum_of_primes(n):
    sum = 0
    x = 2
    while x < n:
        if is_prime(x):
            sum = sum + x
        x = x + 1
    return sum

def main():
    print(sum_of_primes(2000000))

if __name__ == '__main__':
    main()
The Sieve of Eratosthenes is one of the best algorithms for finding all prime numbers below some limit.
Basically, you create a list of booleans covering the range from 2 up to whatever limit you want. Then, for each index whose value is still true, you set all of its multiples to false. For example: you start scanning the list, reach 2, and set every index of the form 2*k to false; then you jump to 3 and set every index 3*k to false; you skip 4 since it has already been set to false; then you reach 5 and set every 5*k to false, and so on. At the end you have a list in which every index whose value is still true is a prime number, and you can use that list however you want.
A basic algorithm as pointed out by Wikipedia would be:
Let A be an array of Boolean values, indexed by integers 2 to n,
initially all set to true.

for i = 2, 3, 4, ..., not exceeding √n:
    if A[i] is true:
        for j = i², i²+i, i²+2i, i²+3i, ..., not exceeding n:
            A[j] := false.

Output: all i such that A[i] is true.
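Applied to this problem, a sketch of that sieve in Python could look like the following (sum_of_primes_sieve is just an illustrative name, not something from the question):

def sum_of_primes_sieve(n):
    """Sum all primes strictly below n using the Sieve of Eratosthenes."""
    if n < 3:
        return 0
    is_prime = [True] * n              # index i records whether i is still considered prime
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n, i):   # cross off every multiple of i
                is_prime[j] = False
    return sum(i for i, prime in enumerate(is_prime) if prime)

print(sum_of_primes_sieve(2000000))

This finishes in a fraction of the time of the trial-division version, because each composite number is crossed off by its prime factors instead of being tested against every smaller number.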
I'm implementing the Sieve of Eratosthenes in Python. It returns composite numbers near the end of the search range:
def primes_Ero(n=1000):
    primes = []
    a = [True]*(n+1)
    a[0] = a[1] = False
    for (i, isprime) in enumerate(a):
        if isprime:
            for n in range(i*i, n+1, i):
                a[n] = False
            primes.append(i)
    return primes
For larger values of n, I end up with composite numbers in the output. I checked which numbers are composite (compared against a brute-force method).
Given n, these are the composites that appear:
n= 100; []
n= 500; [493, 497]
n= 1000; [961, 989]
n= 10000; [9701, 9727, 9797, 9853, 9869, 9917, 9943, 9953, 9983, 9991, 9997]
What am I doing wrong?
The problem is this line:
for n in range(i*i, n+1, i):
Initially n is set to the parameter value (default 1000), but after the inner for loop runs for the first time, n holds the last value produced by range(i*i, n+1, i), which is the largest multiple of i not exceeding the old limit. Every later pass through the inner loop therefore sieves up to a slightly smaller bound, which is why only composites near the end of the range survive.
You should rename one of the ns that you're using. Consider giving it a proper name like sieve_size which is more descriptive of what it actually does.
One thing I would like to point out is that, while your code is clever, you are modifying the list you're iterating over, which is generally considered bad practice.
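A lightly edited sketch with the inner loop variable renamed (j here; alternatively, rename the parameter to something like sieve_size as suggested) and everything else kept as in the question:

def primes_Ero(n=1000):
    primes = []
    a = [True] * (n + 1)
    a[0] = a[1] = False
    for (i, isprime) in enumerate(a):
        if isprime:
            for j in range(i * i, n + 1, i):   # j no longer shadows the limit n
                a[j] = False
            primes.append(i)
    return primes

With this change, primes_Ero(10000) no longer reports composites such as 9701 near the end of the range.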
I am trying to print the prime numbers less than n. The code is below:
from math import sqrt

def prime_numbers(n):
    A = [1 for i in range(n+1)]
    for i in range(2, int(sqrt(n))):
        if A[i] == 1:
            for j in range(i*2, n, i):
                A[j] = 0
    for i in range(n):
        if A[i]:
            print(i)
Output for
prime_numbers(10)
is
0
1
2
3
5
7
9
The program prints correctly for n = 100. What changes do I need to make?
The end point of a range() is not included. Since sqrt(10) is 3.1623, int(sqrt(10)) is 3 and your range() loops to 2 and no further, so the multiples of 3 are never removed from your list. Your code happens to work for 100 because it doesn't matter that you never test for multiples of 10 (those are already covered by 2 and 5).
The same issue applies to your other loops; if you want to include n itself as a candidate prime number you should also include it in the other ranges.
Note that you also want to ignore 0 and 1, those are not primes. You could add A[0] = A[1] = False at the top to make sure your last loop doesn't include those, or start your last loop at 2 rather than 0.
You want to add one to the floored square root to make sure it is tested for:
for i in range(2, int(sqrt(n)) + 1):
I'd use booleans rather than 0 and 1, by the way, just for clarity (there is not much of a performance or memory footprint difference here):
from math import sqrt

def prime_numbers(n):
    sieve = [True] * (n + 1)  # a list n + 1 elements long, indices 0 through n
    for i in range(2, int(sqrt(n)) + 1):
        if sieve[i]:
            for j in range(i * 2, n + 1, i):
                sieve[j] = False
    for i in range(2, n + 1):
        if sieve[i]:
            print(i)
I used [..] * (n + 1) to create a list of n + 1 items (indices 0 through n); this produces a list of shallow copies of the left operand's contents. That's faster than a list comprehension, and the shared references are fine here since True is a singleton in Python.
Demo:
>>> prime_numbers(31)
2
3
5
7
11
13
17
19
23
29
31
Note that 31 is included there; your original code would have produced incorrect output for this n, as you'd have left in the multiples of 5 (such as 25).