How to modify this Python function to use recursion

I have a simple function that breaks a number such as 54 into its digits and returns their sum (i.e. 9). I am also studying recursion, and I am wondering if the code below meets the criteria. Why or why not? If not, how can I solve this simple problem using the recursion paradigm?
def sumnum(n):
    n = str(n)
    a = []
    for i in n:
        a.append(i)
    return sum(int(d) for d in a)

print(sumnum(54))  # 9

The code you have used is not recursion. Recursion has two main ingredients: a base case and a recursive step.
Now, let's break down your problem:
Base case:
Say we have a number smaller than 10. Then we simply return that number. Let's keep that as our base case.
Recursion:
If we had a number ...xyz (with x, y and z as digits), we would take the last digit and add it to the digit sum of the remaining number, repeating until we are left with a number smaller than 10.
Code:
def sumnum(n):
    if n < 10:
        return n
    return n % 10 + sumnum(n // 10)
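To see the recursion unwind, here is the same function with a traced call (the trace comment is mine):

```python
def sumnum(n):
    if n < 10:
        return n
    return n % 10 + sumnum(n // 10)

# sumnum(54) -> 4 + sumnum(5) -> 4 + 5 -> 9
print(sumnum(54))   # 9
print(sumnum(987))  # 7 + 8 + 9 = 24
```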


How can I profile the algorithm so it will consume less memory space?

There is a problem called "Last Digit of the Sum of Fibonacci Numbers". I have already optimised the naive approach, but this code is still not working for higher values, for example 613455.
Every time I run this program, the OS kills it (memory limit exceeded).
My code is:
def SumFib(n):
    result = []
    for i in range(0, n + 1):
        if i <= 1:
            result.append(i)
        else:
            result.append(result[i-1] + result[i-2])
    return sum(result) % 10

print(SumFib(int(input())))
Need some help to get over this problem.
In order to consume less memory, store fewer values. For example:
def sum_fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    elif n == 2:
        return 2
    res_sum = 2
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, a + b
        res_sum += b
    return res_sum % 10
The memory requirements (at a glance) are constant. Your original snippet has linear memory requirements (as n grows, the memory requirements grow with it).
(More intelligent things can be done regarding the modulo operation if n is really huge, but that is off topic to the original question)
Edit:
I was curious and ended up doing some homework.
Lemma 1: The last digit of the Fibonacci numbers follows a cycle of length 60 (the Pisano period modulo 10).
Demonstration: One can check that, starting from (0, 1), element 60 has last digit 0 and element 61 has last digit 1 (this can be verified empirically). It follows that fib(n) and fib(n % 60) have the same last digit.
Lemma 2: The last digit of the sum of the first 60 Fibonacci numbers is 0.
Demonstration: This can be checked empirically too.
Conclusion: sum_fib(n) == sum_fib(n % 60). So "last digit of the sum of the first n Fibonacci numbers" can be solved in O(1) by taking n % 60 and evaluating sum_fib of that, i.e., sum_fib(n % 60). Or one can precompute the 60-element list of digits and do a simple lookup.
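A sketch of that lookup-table idea (build_sum_table and sum_fib_fast are my names, not from the answer; it assumes the 60-length cycle from Lemma 1):

```python
def build_sum_table():
    # table[k] = last digit of the sum of the first k Fibonacci numbers,
    # for k = 0..59; by Lemmas 1 and 2 this covers every n via n % 60
    table = [0]
    a, b = 1, 1  # fib(1), fib(2)
    s = 0
    for _ in range(59):
        s += a
        table.append(s % 10)
        a, b = b, a + b
    return table

TABLE = build_sum_table()

def sum_fib_fast(n):
    # O(1) lookup, no matter how huge n is
    return TABLE[n % 60]
```

With this, the failing input from the question (613455) is answered instantly.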
Instead of using range you can try a custom function that works as a generator.
Generators produce values on the fly instead of building the whole sequence in memory at once.
Replace range with the function below:
def generate_value(start, end, step):
    count = start
    while count <= end:
        yield count
        count += step

Find nearest prime number python

I want to find the largest prime number within range(old_number + 1 , 2*old_number)
This is my code so far:
def get_nearest_prime(self, old_number):
    for num in range(old_number + 1, 2 * old_number):
        for i in range(2, num):
            if num % i == 0:
                break
    return num
When I call get_nearest_prime(13), the correct output should be 23, but my result is 25.
Can anyone help me solve this problem? Help will be appreciated!
There are lots of changes you could make, but which ones you should make depend on what you want to accomplish. The biggest problem with your code as it stands is that the inner loop's break successfully identifies composites, but you never record which numbers survived as primes. Here's a minimal change that does roughly the same thing.
def get_nearest_prime(old_number):
    largest_prime = 0
    for num in range(old_number + 1, 2 * old_number):
        for i in range(2, num):
            if num % i == 0:
                break
        else:
            largest_prime = num
    return largest_prime
We're using the largest_prime local variable to keep track of the latest prime you find (since you iterate through them in increasing order, it ends up holding the largest). The else clause is triggered any time the inner for loop exits "normally" (i.e., without hitting the break statement). In other words, any time you've found a prime.
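The for/else behavior can be isolated into a tiny helper to make the control flow obvious (is_prime_trial is a hypothetical name, not from the answer):

```python
def is_prime_trial(num):
    """Trial division using for/else, exactly as in the answer above."""
    if num < 2:
        return False
    for i in range(2, num):
        if num % i == 0:
            break        # found a divisor: num is composite
    else:
        return True      # loop finished without break: num is prime
    return False
```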
Here's a slightly faster solution.
import numpy as np

def seive(n):
    mask = np.ones(n + 1)
    mask[:2] = 0
    for i in range(2, int(n**.5) + 1):
        if not mask[i]:
            continue
        mask[i*i::i] = 0
    return np.argwhere(mask)
def get_nearest_prime(old_number):
    try:
        n = np.max(seive(2*old_number - 1))
        if n < old_number + 1:
            return None
        return n
    except ValueError:
        return None
It does roughly the same thing, but it uses an algorithm called the "Sieve of Eratosthenes" to speed up the finding of primes (as opposed to the "trial division" you're using). It isn't the fastest Sieve in the world, but it's reasonably understandable without too many tweaks.
In either case, if you're calling this a bunch of times you'll probably want to keep track of all the primes you've found since computing them is expensive. Caching is easy and flexible in Python, and there are dozens of ways to make that happen if you do need the speed boost.
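For example, one easy way to get that caching is functools.lru_cache from the standard library; here it wraps the trial-division version from above (the sieve-based one would work the same way):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_nearest_prime(old_number):
    largest_prime = 0
    for num in range(old_number + 1, 2 * old_number):
        for i in range(2, num):
            if num % i == 0:
                break
        else:
            largest_prime = num
    return largest_prime

# repeated calls with the same argument now cost only a cache lookup
print(get_nearest_prime(13))  # 23
```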
Note that I'm not positive the range you've specified always contains a prime. It very well might, and if it does you can get away with a lot shorter code. Something like the following.
def get_nearest_prime(old_number):
    return np.max(seive(2*old_number - 1))
I don't completely agree with the name you've chosen since the largest prime in that interval is usually not the closest prime to old_number, but I think this is what you're looking for anyway.
You can use a list comprehension with all to check if a number is prime: all(i % n for n in range(2, i)) means that i is prime, because every value returned by the modulo was truthy (non-zero). From there you can append those values to a list called primes and then take the max of that list.
List comprehension:
num = 13
l = [*range(num, (2*num)+1)]
print(max([i for i in l if all([i % n for n in range(2, i)])]))
Expanded:
num = 13
l = [*range(num, (2*num)+1)]
primes = []
for i in l:
    if all([i % n for n in range(2, i)]):
        primes.append(i)
print(max(primes))
# 23
To search for the nearest prime number above old_number using the seive function from the earlier answer:
def get_nearest_prime(old_number):
    # .ravel() flattens argwhere's column vector so min() compares scalars
    return old_number + min(seive(2*old_number - 1).ravel() - old_number, key=lambda a: a < 0)

isPrime test using list of factors

I'm trying to create a python program to check if a given number "n" is prime or not. I've first created a program that lists the divisors of n:
import math

def factors(n):
    i = 2
    factlist = []
    while i <= n:
        if n % i == 0:
            factlist.append(i)
        i = i + 1
    return factlist

factors(100)
Next, I'm trying to use a for loop to say that if p1 (the list of factors of n) only includes n itself, then return True, and otherwise return False. This seems really easy, but I cannot get it to work. This is what I've done so far:
def isPrime(n):
    p1 = factors(n)
    for i in p1:
        if factors(n) == int(i):
            return True
    return False
Any help is appreciated! This is a hw assignment, so it's required to use the list of factors in our prime test. Thanks in advance!
p1 will only contain just n if it has length 1. It may be worthwhile to use if len(p1) == 1 as the condition instead.
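A minimal sketch of that suggestion, reusing the factors function from the question:

```python
def factors(n):
    # from the question: collect every divisor of n between 2 and n
    factlist = []
    i = 2
    while i <= n:
        if n % i == 0:
            factlist.append(i)
        i = i + 1
    return factlist

def isPrime(n):
    # a prime's only factor in that range is n itself
    return len(factors(n)) == 1
```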
What I did (I actually managed to get it into one line) was use [number for number in range(2, num // 2) if num % number == 0]. This collects the numbers that are factors of num. Then you can check whether the length of that list is > 0 (if so, num isn't prime).
I usually do something like this:
primes = [2, 3]

def is_prime(num):
    if num > primes[-1]:
        for i in range(primes[-1] + 2, num + 1, 2):
            if all(i % p for p in primes):
                primes.append(i)
    return num in primes
This method builds a global list of prime numbers as it goes and uses that list to find further primes. The advantage is that the program does fewer calculations, especially when is_prime is called repeatedly with candidates smaller than the largest number you have previously sent to it.
You can adapt some of the concepts in this function for your HW.

What would be the best answer for this Fibonacci exercise in Python?

What's the best answer for this Fibonacci exercise in Python?
http://www.scipy-lectures.org/intro/language/functions.html#exercises
Exercise: Fibonacci sequence
Write a function that displays the n first terms of the Fibonacci
sequence, defined by:
u(0) = 1; u(1) = 1
u(n+2) = u(n+1) + u(n)
If this were simply asking a Fibonacci code, I would write like this:
def fibo_R(n):
    if n == 1 or n == 2:
        return 1
    return fibo_R(n-1) + fibo_R(n-2)

print(fibo_R(6))
... However, in this exercise, the initial conditions are both 1, and the recurrence runs in the forward (+) direction. I don't know how to set the end condition. I've searched for an answer but couldn't find one. How would you solve this?
Note that u_(n+2) = u_(n+1) + u_n is equivalent to u_n = u_(n-1) + u_(n-2), i.e. your previous code will still apply. Fibonacci numbers are by definition defined in terms of their predecessors, no matter how you phrase the problem.
A good approach to solve this is to define a generator which produces the elements of the Fibonacci sequence on demand:
def fibonacci():
    i = 1
    j = 1
    while True:
        yield i
        x = i + j
        i = j
        j = x
You can then take the first N items of the generator via e.g. itertools.islice, or use enumerate to keep track of how many numbers you have seen:
for i, x in enumerate(fibonacci()):
    if i >= n:
        break
    print x
Having a generator means that you can use the same code for solving many different problems (and quite efficiently, too), such as:
getting the n'th fibonacci number
getting the first n fibonacci numbers
getting all fibonacci numbers satisfying some predicate (e.g. all fibonacci numbers lower than 100)
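For instance, taking the first n terms with itertools.islice (the generator is restated here so the snippet is self-contained):

```python
from itertools import islice

def fibonacci():
    i, j = 1, 1
    while True:
        yield i
        i, j = j, i + j

# first 6 terms of the sequence
print(list(islice(fibonacci(), 6)))  # [1, 1, 2, 3, 5, 8]
```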
The best way to calculate a fibonacci sequence is by simply starting at the beginning and looping until you have calculated the n-th number. Recursion produces way too many method calls since you are calculating the same numbers over and over again.
This function calculates the first n fibonacci numbers, stores them in a list and then prints them out:
def fibonacci(n):
    array = [1, 1][:n]  # seed with the first two terms (just [1] if n == 1)
    a = 1
    b = 1
    for i in range(n - 2):
        fib = a + b
        a = b
        b = fib
        array.append(fib)
    print array
If you want a super memory-efficient solution, use a generator that only produces the next number on demand:
def fib_generator():
    e1, e2 = 0, 1
    while True:
        e1, e2 = e2, e1 + e2
        yield e1

f = fib_generator()
print(next(f))
print(next(f))
print(next(f))

## dump the rest with a for-loop
for i in range(3, 50):
    print(next(f))
The recursive solution is the most elegant, but it is slow. Keiwan's loop is the fastest for a large number of elements.
An alternative recursive just to show that things can be done in slightly different ways:
def fib2(n): return n if n < 2 else fib2( n - 1 ) + fib2( n - 2 )

Handling memory usage for big calculation in python

I am trying to do some calculations in Python, where I ran out of memory. Therefore, I want to read/write a file in order to free memory. I need something like a very big list, so I thought of writing one line per object in a file and reading/writing those lines instead of keeping everything in memory. Line ordering is important, since I will use line numbers as indices. So I was wondering how I can replace lines in Python without moving the other lines around. (Actually, it is fine to move lines, as long as they return to where I expect them to be.)
Edit
I am trying to help a friend, who is worse than or equal to me in Python. This code is supposed to find the biggest prime number that divides a given non-prime number. It works for numbers up to around 1 million, but beyond that it dies: my memory gets exhausted while building the numbers list.
# a comes from a user input
primes_upper_limit = (a+1) / 2
counter = 3L
numbers = list()
while counter <= primes_upper_limit:
    numbers.append(counter)
    counter += 2L

counter = 3
i = 0
half = (primes_upper_limit + 1) / 2 - 1
root = primes_upper_limit ** 0.5
while counter < root:
    if numbers[i]:
        j = int((counter*counter - 3) / 2)
        numbers[j] = 0
        while j < half:
            numbers[j] = 0
            j += counter
    i += 1
    counter = 2*i + 3

primes = [2] + [num for num in numbers if num]
for numb in reversed(primes):
    if a % numb == 0:
        print numb
        break
Another Edit
What about writing a different file for each index? For example, a billion files with long-integer filenames, and just a number inside each file?
You want to find the largest prime divisor of a. (Project Euler Question 3)
Your current choice of algorithm and implementation do this by:
Generate a list numbers of all candidate primes in range (3 <= n <= sqrt(a), or (a+1)/2 as you currently do)
Sieve the numbers list to get a list of primes {p} <= sqrt(a)
Trial Division: test the divisibility of a by each p. Store all prime divisors {q} of a.
Print all divisors {q}; we only want the largest.
My comments on this algorithm are below. Sieving and trial division are seriously not scalable algorithms, as Owen and I comment. For large a (billion, or trillion) you really should use NumPy. Anyway some comments on implementing this algorithm:
Did you know you only need to test up to √a, int(math.sqrt(a)), not (a+1)/2 as you do?
There is no need to build a huge list of candidates numbers, then sieve it for primeness - the numbers list is not scalable. Just construct the list primes directly. You can use while/for-loops and xrange(3,sqrt(a)+2,2) (which gives you an iterator). As you mention xrange() overflows at 2**31L, but combined with the sqrt observation, you can still successfully factor up to 2**62
In general this is inferior to getting the prime decomposition of a, i.e. every time you find a prime divisor p | a, you only need to continue to sieve the remaining factor a/p (or a/p², a/p³, or whatever). Except for the rare case of very large primes (or pseudoprimes), this will greatly reduce the magnitude of the numbers you are working with.
Also, you only ever need to generate the list of primes {p} once; thereafter store it and do lookups, not regenerate it.
So I would separate out generate_primes(a) from find_largest_prime_divisor(a). Decomposition helps greatly.
Here is my rewrite of your code, but performance still falls off in the billions (a > 10**11 +1) due to keeping the sieved list. We can use collections.deque instead of list for primes, to get a faster O(1) append() operation, but that's a minor optimization.
# Prime factorization by trial division
from math import ceil, sqrt
from collections import deque

# Global list of primes (strictly we should use a class variable, not a global)
#primes = deque()
primes = []

def is_prime(n):
    """Test whether n is divisible by any prime known so far"""
    global primes
    for p in primes:
        if n % p == 0:
            return False  # n was divisible by p
    return True  # either n is prime, or divisible by some p larger than our list

def generate_primes(a):
    """Generate sieved list of primes (up to sqrt(a)) as we go"""
    global primes
    primes_upper_limit = int(sqrt(a))
    # We get a huge speedup by using xrange() instead of range(), so we have to seed the list with 2
    primes.append(2)
    print "Generating sieved list of primes up to", primes_upper_limit, "...",
    # Consider prime candidates 3,5,7,... in increasing increments of 2
    #for number in [2] + range(3, primes_upper_limit+2, 2):
    for number in xrange(3, primes_upper_limit+2, 2):
        if is_prime(number):  # use global 'primes'
            #print "Found new prime", number
            primes.append(number)  # found a new prime larger than our list
    print "done"

def find_largest_prime_factor(x, debug=False):
    """Find all prime factors of x, and return the largest."""
    global primes
    # First we need the list of all primes <= sqrt(x)
    generate_primes(x)
    to_factor = x  # running value of the remaining quantity we need to factor
    largest_prime_factor = None
    for p in primes:
        if debug: print "Testing divisibility by", p
        if to_factor % p != 0:
            continue
        if debug: print "...yes it is"
        largest_prime_factor = p
        # Divide out all factors of p in x (may have multiplicity)
        while to_factor % p == 0:
            to_factor /= p
        # Stop when all factors have been found
        if to_factor == 1:
            break
    else:
        print "Tested all primes up to sqrt(a), remaining factor must be a single prime > sqrt(a):", to_factor
    print "\nLargest prime factor of x is", largest_prime_factor
    return largest_prime_factor
If I'm understanding you correctly, this is not an easy task. The way I interpreted it, you want to keep a file handle open and use the file as a place to store character data.
Say you had a file like,
a
b
c
and you wanted to replace 'b' with 'bb'. That's going to be a pain, because the file actually looks like a\nb\nc -- you can't just overwrite the b, you need another byte.
My advice would be to try and find a way to make your algorithm work without using a file for extra storage. If you got a stack overflow, chances are you didn't really run out of memory, you overran the call stack, which is much smaller.
You could try reworking your algorithm to not be recursive. Sometimes you can use a list to substitute for the call stack -- but there are many things you could do and I don't think I could give much general advice not seeing your algorithm.
edit
Ah I see what you mean... when the list
while counter <= primes_upper_limit:
    numbers.append(counter)
    counter += 2L
grows really big, you could run out of memory. So I guess you're basically doing a sieve, and that's why you have the big list numbers? It makes sense. If you want to keep doing it this way, you could try a numpy bool array, because it will use substantially less memory per cell:
import numpy as np
numbers = np.repeat(True, a/2)
Or (and maybe this is not appealing) you could go with an entirely different approach that doesn't use a big list, such as factoring the number entirely and picking the biggest factor.
Something like:
factors = [ ]
tail = a
while tail > 1:
j = 2
while 1:
if tail % j == 0:
factors.append(j)
tail = tail / j
print('%s %s' % (factors, tail))
break
else:
j += 1
i.e. say you were factoring 20: tail starts out as 20; you find 2 and tail becomes 10; you find 2 again and tail becomes 5; finally you find 5 and tail becomes 1.
This is not terribly efficient and will become way too slow if a is a large (billions) prime number, but it's OK for numbers with small factors.
I mean your sieve is good too, until you start running out of memory ;). You could give numpy a shot.
pytables is excellent for working with and storing huge amounts of data. But first start with implementing the comments in smci's answer to minimize the amount of numbers you need to store.
For a number with only twelve digits, as in Project Euler #3, no fancy integer factorization method is needed, and there is no need to store intermediate results on disk. Use this algorithm to find the factors of n:
Set f = 2.
If n = 1, stop.
If f * f > n, print n and stop.
Divide n by f, keeping both the quotient q and the remainder r.
If r = 0, print f, set n = q, and go to Step 2.
Otherwise, increase f by 1 and go to Step 3.
This just does trial division by every integer until it reaches the square root, which indicates that the remaining cofactor is prime. Each factor is printed as it is found.
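The six steps translate almost line for line into Python (factor is my name for the function; it returns the factors in a list instead of printing them):

```python
def factor(n):
    """Trial division following the steps above; returns all prime factors."""
    factors = []
    f = 2                    # Step 1
    while n > 1:             # Step 2: stop when n = 1
        if f * f > n:        # Step 3: remaining cofactor is prime
            factors.append(n)
            break
        q, r = divmod(n, f)  # Step 4: quotient and remainder
        if r == 0:           # Step 5: f divides n; keep f, continue with q
            factors.append(f)
            n = q
        else:
            f += 1           # Step 6: try the next candidate
    return factors

print(factor(600851475143))  # Project Euler #3: [71, 839, 1471, 6857]
```

The largest prime factor is simply the last element of the returned list.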
