Python program to find consecutive digits in a string whose sum equals a number n

I want to find consecutive digits in a string that sum to a given number.
Example:
a="23410212" number is=5 — output 23,41,410,0212,212.
This code is not working. What do I need to fix?
def find_ten_sstrsum():
num1="2825302"
n=0;
total=0;
alist=[];
ten_str="";
nxt=1;
for n in range(len(num1)):
for n1 in range(nxt,len(num1)):
print(total)
if(total==0):
total=int(num1[n])+int(num1[n1])
ten_str=num1[n]+num1[n1]
else:
total+=int(num1[n1])
ten_str+=num1[n1]
if(total==10):
alist.append(ten_str)
ten_str=""
total=0
nxt+=1
break
elif(total<10):
nxt+=1
return alist

This (sort-of) one-liner will work:
def find_ten_sstrsum(s, n):
    return list(  # list call only in Python 3 if you don't want an iterator
        filter(
            lambda y: sum(map(int, y))==n,
            (s[i:j] for i in range(len(s)) for j in range(i+1, len(s)+1)))
    )
>>> find_ten_sstrsum('23410212', 5)
['23', '41', '410', '0212', '212']
This uses a nested generator expression over all possible slices and keeps only the ones with the correct digit-sum.
This is, of course, far from optimal (especially for long strings) because the inner loop should be stopped as soon as the digit-sum exceeds n, but should give you an idea.
A more performant and readable solution would be a generator function:
def find_ten_sstrsum(s, n):
    for start in range(len(s)):
        for end in range(start+1, len(s)+1):
            val = sum(map(int, s[start:end]))
            if val > n:
                break
            if val == n:
                yield s[start:end]
>>> list(find_ten_sstrsum('23410212', 5))
['23', '41', '410', '0212', '212']
Definitely read up on sum and map, as well.
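For instance, this is how map and sum cooperate on one of the slices above:
>>> list(map(int, '410'))
[4, 1, 0]
>>> sum(map(int, '410'))
5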

Your function has several problems. Most of all, you have no way to give it different data to work on. You have it hard-coded to handle one particular string and a total of 10. You've even written variable names with "ten" in them, even though your example uses n=5. Your function should have two input parameters: the given string and the target sum.
NOTE: schwobaseggl just posted a lovely, Pythonic solution. However, I'll keep writing this, in case you need a function closer to your present learning level.
You have several logic paths, making it hard to follow how you handle your data. I recommend a slightly different approach, so that you can treat each partial sum cleanly:
for start in range(len(num1)):
    total = 0      # Start at 0 each time you get a new starting number.
    sum_str = ""
    for this_digit in num1[start:]:
        print(total)
        # Don't create separate cases for the first and other additions.
        total += int(this_digit)
        sum_str += this_digit
        # If we hit the target, save the solution and
        # start at the next digit.
        if total == target:
            alist.append(sum_str)
            break
        # If we passed the target, just
        # start at the next digit.
        elif total > target:
            break
return alist
Now, this doesn't solve quite all of your problems, and I haven't done some of the accounting work for you (variable initializations, def line, etc.). However, I think it moves you in the right direction and preserves the spirit of your code.
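If it helps, here is one possible completion of that sketch with the def line and initializations filled in (the function name is just illustrative):
def find_sum_substrings(num1, target):
    alist = []
    for start in range(len(num1)):
        total = 0
        sum_str = ""
        for this_digit in num1[start:]:
            total += int(this_digit)
            sum_str += this_digit
            if total == target:
                alist.append(sum_str)  # found a match starting at this digit
                break
            elif total > target:
                break                  # overshot; move on to the next start
    return alist

print(find_sum_substrings("23410212", 5))  # ['23', '41', '0212', '212']
Because it stops at the first hit for each starting digit, this version reports '41' but not the longer '410'; remove the break after the append if you want every match, which reproduces the expected output from the question.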

Related

Find the first fibonacci number above 1 million

I'm currently taking a programming course and got an assignment to find the first Fibonacci number above a million, and I'm having a bit of trouble finding that specific number. I'm also supposed to find the index of the n-th number when it passes 1 million. I'm pretty new to coding, but this is what I've come up with so far; I'm just having a hard time figuring out how to actually calculate what the number is.
I guess you would switch out the for loop with a while loop, but I haven't figured out how to get it all to work.
Thanks in advance :)
def fib_seq(n):
    if n <= 2:
        return 1
    return fib_seq(n-1) + fib_seq(n-2)

lst = []
for i in range(1, 20):
    lst.append(i)
    print(fib_seq(i), lst)
Some points:
You don't need to build a list. You're only asked to return an index and the corresponding Fibonacci number.
The recursive algorithm for Fibonacci is not best practice unless you use some memoization; otherwise the same numbers have to be recalculated over and over again. Use an iterative method instead.
Here is how that could look:
def fib(atleast):
    a = 0
    b = 1
    i = 1
    while b < atleast:
        a, b = b, a+b
        i += 1
    return i, b
print(fib(1000000)) # (31, 1346269)
If you need to do this with some kind of recursion, you should try to avoid calling the recursion twice on each iteration. This is a classic example where the complexity explodes. One way to avoid this is to memoize the already calculated results. Another is to maintain state with the function arguments. For example, this will deliver the answer and only call the function 32 times:
def findIndex(v, prev = 0, current = 1, index = 0):
    if v < prev:
        return (prev, index)
    return findIndex(v, current, prev+current, index + 1)
findIndex(1000000) # (1346269, 31)
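As a hedged sketch of the memoization route mentioned above, functools.lru_cache can be dropped onto the original recursive fib_seq so each value is computed only once (the wrapper name first_above is just illustrative):
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_seq(n):
    if n <= 2:
        return 1
    return fib_seq(n - 1) + fib_seq(n - 2)

def first_above(limit):
    # Walk the indices until the cached recursion passes the limit.
    i = 1
    while fib_seq(i) <= limit:
        i += 1
    return i, fib_seq(i)

print(first_above(1000000))  # (31, 1346269)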

Nest n loops for any n in Python

So, let's say I have an arbitrarily long list of numbers. I'd like to get a list of every number in that list multiplied by every number in that list. I'd do that by nesting for loops like this:
for x in numbers:
    for y in numbers:
        print(x*y)
Now if I'd like to multiply every number in that list times every number in that list times every number in that list again, I'd do this:
for x in numbers:
    for y in numbers:
        for z in numbers:
            print(x*y*z)
My issue is that I'm searching a graph for a subgraph, and I need to allow for arbitrarily large subgraphs. To do this I have to construct every subgraph with n edges from the edges in the main graph - and I have to allow for arbitrary values of n. How?
Use itertools.product together with an iterative product-computing function (I favor reduce(mul, ...)). If you need n-way products (in both senses of the word "product"):
import itertools
from functools import reduce
from operator import mul

for numset in itertools.product(numbers, repeat=n):
    print(reduce(mul, numset))
The above is simple, but it will needlessly recompute partial products when the set of values is large and n >= 3. A recursive function could be used to avoid that:
def compute_products(numbers, repeat):
    if repeat == 1:
        yield from numbers
        return
    numbers = tuple(numbers)  # Only needed if you want to handle iterator/generator inputs
    for prod in compute_products(numbers, repeat-1):
        yield from (prod * x for x in numbers)

for prod in compute_products(numbers, n):
    print(prod)
I think a recursive function of some kind would probably help you. This is just untested pseudocode typed into the Stack Overflow editor, so beware of using this verbatim, but something like this:
def one_dimension(numbers, n):
    if n < 1:
        yield 1  # base case: the empty product, so each level multiplies it by the numbers
        return
    for num in numbers:
        for next_num in one_dimension(numbers, n-1):
            yield num*next_num
The basic idea is that each "level" of the function calls the next one, and then deals with what the next one produces. I don't know how familiar you are with Python's yield keyword, but if you know what it does, then you should be able to adapt the code above.
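For instance, a quick check of the generator above (assuming the base case yields 1, the empty product):
numbers = [2, 3, 5]
print(sorted(one_dimension(numbers, 2)))
# [4, 6, 6, 9, 10, 10, 15, 15, 25]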
Update: Or just use itertools.product like ShadowRanger suggested in his answer; that's probably a better solution to your problem.

Project Euler 92

My code for solving problem 92 was correct but too slow, so I tried to modify it by considering only one number for each possible permutation of that number, effectively reducing the size of the problem from the original 10 million to 11439. Here's my code:
import time
from Euler import multCoeff
start = time.time()
def newNum(n):
    return sum([int(dig)**2 for dig in str(n)])

def chain(n, l):
    if n in l:
        return n, l
    else:
        l.append(n)
        return chain(newNum(n), l)

nums = []
for i in range(1,10000000):
    if all(str(i)[j] <= str(i)[j+1] for j in range(len(str(i))-1)):
        nums.append(i)

count = 0
for i in nums:
    if 89 in chain(i,[])[1]:
        perms = multCoeff(i)
        count += perms

end = time.time() - start
print count, end
multCoeff is a function I created that is basically equivalent to len(set(permutations([int(j) for j in str(i)]))), and it works just fine.
Anyway, the problem is that the result I get is not the correct one; it looks like I'm ignoring some of the cases, but I can't really see which ones. I'd be really grateful if someone could point me in the right direction. Thanks.
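For reference, an equivalent can be computed as a multinomial coefficient over the digit counts; this is just a sketch, not the actual Euler.multCoeff:
from math import factorial
from collections import Counter

def mult_coeff_sketch(i):
    # Distinct permutations of the digit multiset:
    # len(digits)! divided by count(d)! for each distinct digit d.
    digits = str(i)
    result = factorial(len(digits))
    for c in Counter(digits).values():
        result //= factorial(c)
    return result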
We're missing the code for multCoeff, so I'm guessing here.
You're trying to filter the numbers from 1 to 9,999,999 by excluding those whose digits are not in non-decreasing order, and then re-adding their permutations afterwards.
Your problem is 0.
According to your filter 2, 20, 200, 2000, 20000, 200000, 2000000 are all represented by 2, however you're probably not adding back this many permutations.
General observations about your code:
item in list has O(n) time complexity; try to avoid doing this for large lists.
You are throwing away the results of many computations; any number in a chain that results in 89 or 1 will always have that end result.
Every function call has a time cost; try to keep the number of functions in looping calls low.
Casting str to int is somewhat expensive; try to keep the number of casts to a minimum.
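As an illustration of the second point (this is not the poster's code), the terminal value of every chain can be memoized so each number's square-digit chain is walked at most once; it assumes positive inputs, for which every chain ends in 1 or 89:
cache = {1: 1, 89: 89}

def chain_end(n):
    # Walk the square-digit chain, caching the terminal value (1 or 89).
    if n not in cache:
        cache[n] = chain_end(sum(int(d) ** 2 for d in str(n)))
    return cache[n]

print(chain_end(44), chain_end(85))  # 1 89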

Python: speed up removal of every n-th element from list

I'm trying to solve this programming riddle, and although the solution (see code below) works correctly, it is too slow for successful submission.
Any pointers on how to make this run faster (removal of every n-th element from a list)? Or suggestions for a better algorithm to calculate the same? I can't think of anything other than brute force for now...
Basically, the task at hand is:
GIVEN:
L = [2,3,4,5,6,7,8,9,10,11,........]
1. Take the first remaining item in list L (in the general case 'n'). Move it to
the 'lucky number list'. Then drop every 'n-th' item from the list.
2. Repeat 1
TASK:
Calculate the n-th number from the 'lucky number list' ( 1 <= n <= 3000)
My original code (it calculated the first 3000 lucky numbers in about a second on my machine, but that was unfortunately too slow):
"""
SPOJ Problem Set (classical) 1798. Assistance Required
URL: http://www.spoj.pl/problems/ASSIST/
"""
sieve = range(3, 33900, 2)
luckynumbers = [2]
while True:
    wanted_n = input()
    if wanted_n == 0:
        break
    while len(luckynumbers) < wanted_n:
        item = sieve[0]
        luckynumbers.append(item)
        items_to_delete = set(sieve[::item])
        sieve = filter(lambda x: x not in items_to_delete, sieve)
    print luckynumbers[wanted_n-1]
EDIT: thanks to the terrific contributions of Mark Dickinson, Steve Jessop and gnibbler, I arrived at the following, which is a whole lot faster than my original code (and was successfully submitted at http://www.spoj.pl with 0.58 seconds!)...
sieve = range(3, 33810, 2)
luckynumbers = [2]
while len(luckynumbers) < 3000:
    if len(sieve) < sieve[0]:
        luckynumbers.extend(sieve)
        break
    luckynumbers.append(sieve[0])
    del sieve[::sieve[0]]
while True:
    wanted_n = input()
    if wanted_n == 0:
        break
    else:
        print luckynumbers[wanted_n-1]
This series is called ludic numbers
__delslice__ should be faster than __setslice__+filter
>>> L=[2,3,4,5,6,7,8,9,10,11,12]
>>> lucky=[]
>>> lucky.append(L[0])
>>> del L[::L[0]]
>>> L
[3, 5, 7, 9, 11]
>>> lucky.append(L[0])
>>> del L[::L[0]]
>>> L
[5, 7, 11]
So the loop becomes:
while len(luckynumbers) < 3000:
    item = sieve[0]
    luckynumbers.append(item)
    del sieve[::item]
Which runs in less than 0.1 second
Try using these two lines for the deletion and filtering, instead of what you have; filter(None, ...) runs considerably faster than the filter(lambda ...).
sieve[::item] = [0]*-(-len(sieve)//item)
sieve = filter(None, sieve)
Edit: much better to simply use del sieve[::item]; see gnibbler's solution.
You might also be able to find a better termination condition for the while loop: for example, if the first remaining item in the sieve is i then the first i elements of the sieve will become the next i lucky numbers; so if len(luckynumbers) + sieve[0] >= wanted_n you should already have computed the number you need---you just need to figure out where in sieve it is so that you can extract it.
On my machine, the following version of your inner loop runs around 15 times faster than your original for finding the 3000th lucky number:
while len(luckynumbers) + sieve[0] < wanted_n:
    item = sieve[0]
    luckynumbers.append(item)
    sieve[::item] = [0]*-(-len(sieve)//item)
    sieve = filter(None, sieve)
print (luckynumbers + sieve)[wanted_n-1]
An explanation on how to solve this problem can be found here. (The problem I linked to asks for more, but the main step in that problem is the same as the one you're trying to solve.) The site I linked to also contains a sample solution in C++.
The set of numbers can be represented in a binary tree, which supports the following operations:
Return the nth element
Erase the nth element
These operations can be implemented to run in O(log n) time, where n is the number of nodes in the tree.
To build the tree, you can either make a custom routine that builds the tree from a given array of elements, or implement an insert operation (make sure to keep the tree balanced).
Each node in the tree need the following information:
Pointers to the left and right children
How many items there are in the left and right subtrees
With such a structure in place, solving the rest of the problem should be fairly straightforward.
I also recommend calculating the answers for all possible input values before reading any input, instead of calculating the answer for each input line.
A Java implementation of the above algorithm gets accepted in 0.68 seconds at the website you linked.
(Sorry for not providing any Python-specific help, but hopefully the algorithm outlined above will be fast enough.)
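Sticking with Python, here is a minimal sketch of one way to get those two O(log n) operations; it uses a Fenwick (binary indexed) tree over presence flags rather than the explicit balanced tree described above, and the class name is just illustrative:
class KthTracker:
    def __init__(self, n):
        # Positions 1..n all start out present.
        self.n = n
        self.tree = [0] * (n + 1)
        for i in range(1, n + 1):
            self._add(i, 1)

    def _add(self, i, delta):
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def kth(self, k):
        # Smallest position whose prefix count of present items equals k.
        pos = 0
        bit = 1 << self.n.bit_length()
        while bit:
            nxt = pos + bit
            if nxt <= self.n and self.tree[nxt] < k:
                pos = nxt
                k -= self.tree[nxt]
            bit >>= 1
        return pos + 1

    def erase_kth(self, k):
        pos = self.kth(k)
        self._add(pos, -1)
        return pos
To drop every k-th remaining element with it, erase the highest affected rank first, so that earlier ranks are not shifted by the removals.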
You're better off using an array and zeroing out every Nth item using that strategy; after you do this a few times in a row, the updates start getting tricky so you'd want to re-form the array. This should improve the speed by at least a factor of 10. Do you need vastly better than that?
Why not just create a new list?
L = [x for (i, x) in enumerate(L) if i % n]
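For example, with n = 3 this keeps everything except the items at indices 0, 3, 6, ...:
>>> L = [2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> n = 3
>>> [x for (i, x) in enumerate(L) if i % n]
[3, 4, 6, 7, 9, 10]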

How can I merge two lists and sort them working in 'linear' time?

I have this, and it works:
# E. Given two lists sorted in increasing order, create and return a merged
# list of all the elements in sorted order. You may modify the passed in lists.
# Ideally, the solution should work in "linear" time, making a single
# pass of both lists.
def linear_merge(list1, list2):
    finalList = []
    for item in list1:
        finalList.append(item)
    for item in list2:
        finalList.append(item)
    finalList.sort()
    return finalList
    # +++your code here+++
    return
But, I'd really like to learn this stuff well. :) What does 'linear' time mean?
Linear means O(n) in Big O notation, while your code uses a sort() which is most likely O(n log n).
The question is asking for the standard merge algorithm. A simple Python implementation would be:
def merge(l, m):
    result = []
    i = j = 0
    total = len(l) + len(m)
    while len(result) != total:
        if len(l) == i:
            result += m[j:]
            break
        elif len(m) == j:
            result += l[i:]
            break
        elif l[i] < m[j]:
            result.append(l[i])
            i += 1
        else:
            result.append(m[j])
            j += 1
    return result
>>> merge([1,2,6,7], [1,3,5,9])
[1, 1, 2, 3, 5, 6, 7, 9]
Linear time means that the time taken is bounded by some undefined constant times (in this context) the number of items in the two lists you want to merge. Your approach doesn't achieve this - it takes O(n log n) time.
When specifying how long an algorithm takes in terms of the problem size, we ignore details like how fast the machine is, which basically means we ignore all the constant terms. We use "asymptotic notation" for that. These basically describe the shape of the curve you would plot in a graph of problem size in x against time taken in y. The logic is that a bad curve (one that gets steeper quickly) will always lead to a slower execution time if the problem is big enough. It may be faster on a very small problem (depending on the constants, which probably depends on the machine) but for small problems the execution time isn't generally a big issue anyway.
The "big O" specifies an upper bound on execution time. There are related notations for average execution time and lower bounds, but "big O" is the one that gets all the attention.
O(1) is constant time - the problem size doesn't matter.
O(log n) is a quite shallow curve - the time increases a bit as the problem gets bigger.
O(n) is linear time - each unit increase means it takes a roughly constant amount of extra time. The graph is (roughly) a straight line.
O(n log n) curves upwards more steeply as the problem gets more complex, but not by very much. This is the best that a general-purpose sorting algorithm can do.
O(n squared) curves upwards a lot more steeply as the problem gets more complex. This is typical for slower sorting algorithms like bubble sort.
The nastiest problems are classified as "NP-hard" or "NP-complete", where "NP" stands for "nondeterministic polynomial". No polynomial-time algorithms are known for them; the best known exact algorithms take exponential time or worse, so in practice they are only solved exactly for very small problems.
For your merging problem, consider that your two input lists are already sorted. The smallest item from your output must be the smallest item from one of your inputs. Get the first item from both and compare the two, and put the smallest in your output. Put the largest back where it came from. You have done a constant amount of work and you have handled one item. Repeat until both lists are exhausted.
Some details... First, putting the item back in the list just to pull it back out again is obviously silly, but it makes the explanation easier. Next - one input list will be exhausted before the other, so you need to cope with that (basically just empty out the rest of the other list and add it to the output). Finally - you don't actually have to remove items from the input lists - again, that's just the explanation. You can just step through them.
Linear time means that the runtime of the program is proportional to the length of the input. In this case the input consists of two lists. If the lists are twice as long, then the program will run approximately twice as long. Technically, we say that the algorithm should be O(n), where n is the size of the input (in this case the length of the two input lists combined).
This appears to be homework, so I will not supply you with a full answer. Even if it is not homework, I am of the opinion that you will be best served by taking a pen and a piece of paper, constructing two smallish sorted example lists, and figuring out how you would merge those two lists by hand. Once you've figured that out, implementing the algorithm is a piece of cake.
(If all goes well, you will notice that you need to iterate over each list only once, in a single direction. That means that the algorithm is indeed linear. Good luck!)
If you build the result in reverse sorted order, you can use pop() and still be O(N)
pop() from the right end of the list does not require shifting the elements, so is O(1)
Reversing the list before we return it is O(N)
>>> def merge(l, r):
...     result = []
...     while l and r:
...         if l[-1] > r[-1]:
...             result.append(l.pop())
...         else:
...             result.append(r.pop())
...     result += (l + r)[::-1]
...     result.reverse()
...     return result
...
>>> merge([1,2,6,7], [1,3,5,9])
[1, 1, 2, 3, 5, 6, 7, 9]
This thread contains various implementations of a linear-time merge algorithm. Note that for practical purposes, you would use heapq.merge.
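For example, heapq.merge consumes the already-sorted inputs lazily:
>>> import heapq
>>> list(heapq.merge([1, 2, 6, 7], [1, 3, 5, 9]))
[1, 1, 2, 3, 5, 6, 7, 9]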
Linear time means O(n) complexity. You can read something about algorithm complexity and big-O notation here: http://en.wikipedia.org/wiki/Big_O_notation .
You shouldn't combine the lists by dumping everything into finalList and sorting afterwards; try to merge them gradually instead: add one element at a time while keeping the result sorted, then add the next element, and so on. That should give you some ideas.
A simpler version, which requires equal-sized lists:
def merge_sort(L1, L2):
    res = []
    for i in range(len(L1)):
        if L1[i] < L2[i]:
            first = L1[i]
            second = L2[i]
        else:
            first = L2[i]
            second = L1[i]
        # Note: this only orders each (L1[i], L2[i]) pair, so the result is not
        # guaranteed to be fully sorted for arbitrary sorted inputs.
        res.extend([first, second])
    return res
The toolz library's itertoolz module provides an efficient merge_sorted implementation for merging sorted sequences:
https://toolz.readthedocs.io/en/latest/_modules/toolz/itertoolz.html#merge_sorted
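A quick check, assuming toolz is installed:
>>> from toolz import merge_sorted
>>> list(merge_sorted([1, 2, 6, 7], [1, 3, 5, 9]))
[1, 1, 2, 3, 5, 6, 7, 9]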
'Linear time' means that the time is an O(n) function, where n is the number of input items (here, the items in the lists).
f(n) = O(n) means that there exist a constant c and a threshold n0 such that f(n) <= c * n for all n >= n0.
def linear_merge(list1, list2):
    finalList = []
    i = 0
    j = 0
    while i < len(list1):
        if j < len(list2):
            if list1[i] < list2[j]:
                finalList.append(list1[i])
                i += 1
            else:
                finalList.append(list2[j])
                j += 1
        else:
            finalList.append(list1[i])
            i += 1
    while j < len(list2):
        finalList.append(list2[j])
        j += 1
    return finalList
