One of the advantages of a generator is that it uses less memory and consumes fewer resources. That is, we do not produce all the data at once and allocate memory for all of it; only one value is generated at a time. The state and the values of the local variables are stored, so the code can be suspended and resumed by calling it again.
I wrote two pieces of code and I am comparing them. I see that the generator can be rewritten as a normal function, and now I do not see the point of the generator. Can anyone tell me what the advantage of the generator is compared to writing it normally? One value is produced per iteration in both of them.
The first code:
def gen(n):
    for i in range(n):
        i = i ** 2
        i += 1
        yield i

g = gen(3)
for i in g:
    print(i)
The second one:
def func(i):
    i = i ** 2
    i += 1
    return i

for i in range(3):
    print(func(i))
I know that the id of g stays constant, whereas the id of func(i) changes.
Is that what the main advantage of a generator refers to?
To be specific about the code you posted in the question: there is no difference in terms of memory between the two approaches you have shown. The first one is still preferable, though, because everything you need is inside the same generator function, whereas in the second case the loop and the function live in two different places, and every time you want to use the function you have to repeat the loop outside it, which adds unnecessary redundancy.
Actually, the two functions you have written, the generator and the normal function, are not equivalent.
In the generator, you are returning all the values, i.e. the loop is inside the generator function:
def gen(n):
    for i in range(n):
        i = i ** 2
        i += 1
        yield i
But, in the second case, you are just returning one value, and the loop is outside the function:
def func(i):
    i = i ** 2
    i += 1
    return i
In order to make the second function equivalent to the first one, you need to have the loop inside the function:
def func(n):
    for i in range(n):
        i = i ** 2
        i += 1
        return i
Now, of course, the above function always returns a single value (the one computed for i=0) as soon as control enters the loop, so to fix this you need to return an entire sequence, which requires a list or a similar data structure that can store multiple values:
def func(n):
    result = []
    for i in range(n):
        i = i ** 2
        i += 1
        result.append(i)
    return result

for v in func(3):
    print(v)
1
2
5
Now you can clearly differentiate the two cases: in the first one, each value is produced and then processed (here, printed) one at a time, but in the second case you end up holding the entire result in memory before you can actually process any of it.
The main advantage appears when you have a large dataset. It is basically the idea of lazy loading, which means that data is not produced until it is required. This saves resources, because with a list the entire thing is built at once, which can take up a lot of main memory if the data is large enough.
The advantage of the first code is with respect to something you did not show. What is meant is that generating and consuming one value at a time takes less memory than first generating all values, collecting them in a list, and then consuming them from the list.
The second code with which to compare the first code should have been:
def gen2(n):
    result = []
    for i in range(n):
        i = i ** 2
        i += 1
        result.append(i)
    return result

g = gen2(3)
for i in g:
    print(i)
Note how the result of gen2 can be used exactly like the result of gen from your first example, but gen2 uses more memory as n gets larger, whereas gen uses the same amount of memory no matter how large n is.
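A minimal sketch to make that difference observable (condensed versions of gen and gen2; the exact byte counts vary by Python version, so treat the numbers as illustrative):

import sys

def gen(n):
    for i in range(n):
        yield i ** 2 + 1

def gen2(n):
    return [i ** 2 + 1 for i in range(n)]

# The generator object has a small, fixed size regardless of n,
# while the list grows with n (and this does not even count the stored int objects).
print(sys.getsizeof(gen(1000000)))   # a few hundred bytes
print(sys.getsizeof(gen2(1000000)))  # several megabytes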
I'm trying to solve this problem:
A list is initialized to ["Sheldon", "Leonard", "Penny", "Rajesh", "Howard"], and then undergoes a series of operations. In each operation, the first element of the list is moved to the end of the list and duplicated. For example, in the first operation, the list becomes ["Leonard", "Penny", "Rajesh", "Howard", "Sheldon", "Sheldon"] (with "Sheldon" being moved and duplicated); in the second operation, it becomes ["Penny", "Rajesh", "Howard", "Sheldon", "Sheldon", "Leonard", "Leonard"] (with "Leonard" being moved and duplicated); etc. Given a positive integer n, find the string that is moved and duplicated in the nth operation. [paraphrased from https://codeforces.com/problemset/problem/82/A]
I've written a working solution, but it's too slow when n is huge:
l = ['Sheldon', 'Leonard', 'Penny', 'Rajesh', 'Howard']
n = int(input())  # taking input from user to print the name of the person
                  # standing at that position
for i in range(n):
    t = l.pop(0)
    l.append(t)
    l.append(t)
    # debug
    # print(l)
print(t)
How can I do this faster?
Here's a solution that runs in O(log(input/len(l))) without doing any actual computation (no list operations):
l = ['Sheldon', 'Leonard', 'Penny', 'Rajesh', 'Howard']
n = int(input())  # taking input from user to print the name of the person
                  # standing at that position
i = 0
while n > (len(l) * 2**i):
    n = n - len(l) * (2**i)
    i = i + 1
index = int((n - 1) / (2**i))
print(l[index])
Explanation: every time the whole list has been pushed back once, the list length grows by exactly len(l) * 2^i. But you first have to find out how many times this happens, which is what the while loop does (that is what n = n - len(l) * (2**i) computes). The while loop stops once it has figured out how many rounds, i, of appending the doubled list happen before reaching n. Finally, after you have figured out i, you have to compute the index. In the i-th appended list every element is copied 2^i times, so you have to divide the remaining number by 2**i. One minor detail is that you have to subtract 1 for the index, because lists in Python are 0-indexed while your input is 1-indexed.
As #khelwood said, you can deduce how many times you have to double the list.
To understand this, note that if you start with a list of 5 people and do 5 steps of your iteration, you will get the same order as before, just with everyone appearing twice.
I am not 100% sure what you mean by the nth position, since it shifts all the time, but if you mean the person in front after n iterations, solve for the largest integer i that fulfills
5*2^i < n
to get the number of times your list has doubled. Then just look at the remaining list (each name now appears 2^i times) to get the name at position n - 5*2^i.
You are not going to be able to avoid calculating the list, but maybe you can make it a bit easier:
Every cycle (when Sheldon is first again) the length of the list has doubled, so it looks like this:
After 1 cycle: SSLLPPRRHH
After 2 cycles: SSSSLLLLPPPPRRRRHHHH
...
while the number of colas they have drunk is 5*((2**n)-1), where n is the number of cycles.
So you can calculate the state of the list at the closest ended cycle.
E.g.
Cola number 50:
5*((2**3)-1) = 35 means that after 35 cokes Sheldon is next in line.
Then you can use the algorithm described in the task and get the last one in the line.
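For example, a hedged sketch of that cycle-based calculation (my own illustration, not code from the answer): find the last completed cycle c, after which 5*((2**c)-1) colas are gone and every name appears 2**c times in the list, then index into that list.

names = ['Sheldon', 'Leonard', 'Penny', 'Rajesh', 'Howard']

def nth_drinker(n):
    # last completed cycle c: after it, 5 * (2**c - 1) colas have been drunk
    c = 0
    while 5 * (2**(c + 1) - 1) < n:
        c += 1
    remaining = n - 5 * (2**c - 1)   # 1-indexed position within the current cycle
    return names[(remaining - 1) // 2**c]

print(nth_drinker(1))   # Sheldon
print(nth_drinker(50))  # Leonard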
Hope this helps.
I am trying to solve Euler problem 18 where I am required to find out the maximum total from top to bottom. I am trying to use recursion, but am stuck with this.
I guess I didn't state my problem clearly earlier. What I am trying to achieve with recursion is to find the sum of the maximum number path. I start from the top of the triangle, and then check whether 7 + findsum() or 4 + findsum() is bigger, where findsum() is supposed to find the sum of the numbers beneath it. I am storing the sum in the variable result.
The problem is that I don't know the base case of this recursive function. I know it should stop when it has reached the bottom elements, but I don't know how to write this logic in the program.
pyramid = [[0, 0, 0, 3, 0, 0, 0],
           [0, 0, 7, 0, 4, 0, 0],
           [0, 2, 0, 4, 0, 6, 0],
           [8, 0, 5, 0, 9, 0, 3]]
pos = [0, 3]

def downleft(pyramid, pos):  # returns down left child
    try:
        return pyramid[pos[0]+1][pos[1]-1]
    except:
        return 0

def downright(pyramid, pos):  # returns down right child
    try:
        return pyramid[pos[0]+1][pos[1]+1]
    except:
        return 0

result = 0

def find_max(pyramid, pos):
    global result
    if downleft(pyramid, pos) + find_max(pyramid, [pos[0]+1, pos[1]-1]) > downright(pyramid, pos) + find_max(pyramid, [pos[0]+1, pos[1]+1]):
        new_pos = [pos[0]+1, pos[1]-1]
        result += downleft(pyramid, pos) + find_max(pyramid, [pos[0]+1, pos[1]-1])
    elif downright(pyramid, pos) + find_max(pyramid, [pos[0]+1, pos[1]+1]) > downleft(pyramid, pos) + find_max(pyramid, [pos[0]+1, pos[1]-1]):
        new_pos = [pos[0]+1, pos[1]+1]
        result += downright(pyramid, pos) + find_max(pyramid, [pos[0]+1, pos[1]+1])
    else:
        return result

find_max(pyramid, pos)
A big part of your problem is that you're recursing a lot more than you need to. You should really only ever call find_max twice recursively, and you need some base-case logic to stop after the last row.
Try this code:
def find_max(pyramid, x, y):
    if y >= len(pyramid):  # base case, we're off the bottom of the pyramid
        return 0           # so, return 0 immediately, without recursing

    left_value = find_max(pyramid, x - 1, y + 1)   # first recursive call
    right_value = find_max(pyramid, x + 1, y + 1)  # second recursive call

    if left_value > right_value:
        return left_value + pyramid[y][x]
    else:
        return right_value + pyramid[y][x]
I changed the call signature to have separate values for the coordinates rather than using a tuple, as this made the indexing much easier to write. Call it with find_max(pyramid, 3, 0), and get rid of the global pos list. I also got rid of the result global (the function returns the result).
This algorithm could benefit greatly from memoization, as on bigger pyramids you'll calculate the values of the lower-middle areas many times. Without memoization, the code may be impractically slow for large pyramid sizes.
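A hedged sketch of what that memoization could look like, using an inner helper so functools.lru_cache only has to hash the coordinates (the function names are my own, not from the answer above):

from functools import lru_cache

pyramid = [[0, 0, 0, 3, 0, 0, 0],
           [0, 0, 7, 0, 4, 0, 0],
           [0, 2, 0, 4, 0, 6, 0],
           [8, 0, 5, 0, 9, 0, 3]]

def find_max_memo(pyramid):
    @lru_cache(maxsize=None)
    def best(x, y):
        if y >= len(pyramid):  # base case: off the bottom of the pyramid
            return 0
        # each (x, y) is computed only once, however many paths reach it
        return pyramid[y][x] + max(best(x - 1, y + 1), best(x + 1, y + 1))
    return best(3, 0)  # apex of the zero-padded pyramid

print(find_max_memo(pyramid))  # 23, i.e. 3 + 7 + 4 + 9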
Edit: I see that you are having trouble with the logic of the code. So let's have a look at that.
At each position in the tree you want to make a choice of selecting
the path from this point on that has the highest value. So what
you do is, you calculate the score of the left path and the score of
the right path. I see this is something you try in your current code,
only there are some inefficiencies. You calculate everything
twice (first in the if, then in the elif), which is very expensive. You should only calculate the values of the children once.
You ask for the stopping condition. Well, if you reach the bottom of the tree, what is the score of the path starting at this point? It's just the value in the tree. And that is what you should return at that point.
So the structure should look something like this:
function getScoreAt(x, y):
    if at the end: return valueInTree(x, y)
    valueLeft = getScoreAt(x - 1, y + 1)
    valueRight = getScoreAt(x + 1, y + 1)
    valueHere = max(valueLeft, valueRight) + valueInTree(x, y)
    return valueHere
Extra hint:
Are you aware that in Python negative indices wrap around to the back of the array? So if you do pyramid[pos[0]+1][pos[1]-1] you may actually get to elements like pyramid[1][-1], which is at the other side of the row of the pyramid. What you probably expect is that this raises an error, but it does not.
To fix your problem, you should add explicit bounds checks rather than relying on try blocks (using try blocks for this is also not a nice programming style).
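For illustration, a hedged sketch of what such a bounds check could look like in place of the try/except accessors (the helper name is my own):

def child(pyramid, row, col):
    """Return pyramid[row][col], or 0 if (row, col) is outside the pyramid."""
    if 0 <= row < len(pyramid) and 0 <= col < len(pyramid[row]):
        return pyramid[row][col]
    return 0  # out of bounds: no negative-index wraparound, no exception handling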
I have been fighting against a function giving me a memory error and thanks to your support (Python: how to split and return a list from a function to avoid memory error) I managed to sort the issue; however, since I am not a pro-programmer I would like to ask for your opinion on my method and how to improve its performance (if possible).
The function is a generator function returning all cycles from an n-nodes digraph. However, for a 12 nodes digraph, there are about 115 million cycles (each defined as a list of nodes, e.g. [0,1,2,0] is a cycle). I need all cycles available for further processing even after I have extracted some of their properties when they were first generated, so they need to be stored somewhere. So, the idea is to cut the result array every 10 million cycles to avoid memory error (when an array is too big, python runs out of RAM) and create a new array to store the following results. In the 12 node digraph, I would then have 12 result arrays, 11 full ones (containing 10 million cycles each) and the last containing 5 million cycles.
However, splitting the result array is not enough since the variables stay in RAM. So, I still need to write each one to the disk and delete it afterwards to clear the RAM.
As stated in How do I create a variable number of variables?, using exec to create variable names dynamically is not very "clean" and dictionary solutions are better. However, in my case, if I store the results in a single dictionary, it will run out of memory due to the size of the arrays. Hence, I went for the exec way. I would be grateful if you could comment on that decision.
Also, to store the arrays I use numpy.savez_compressed, which gives me a 43 MB file for each 10-million-cycle array. Without compression it creates a 500 MB file, but using the compressed version slows down the writing process. Any idea how to speed up the writing and/or compressing process?
A simplified version of the code I wrote is as follows:
nbr_result_arrays = 0
result_array_0 = []
result_lenght = 10000000
tmp = result_array_0  # I use tmp to avoid using exec within the for loop (exec slows down code execution)

for cycle in generator:
    tmp.append(cycle)
    if len(tmp) == result_lenght:
        exec 'np.savez_compressed(\'results_' + str(nbr_result_arrays) + '\', tmp)'
        exec 'del result_array_' + str(nbr_result_arrays)
        nbr_result_arrays += 1
        exec 'result_array_' + str(nbr_result_arrays) + '=[]'
        exec 'tmp=result_array_' + str(nbr_result_arrays)
Thanks for reading,
Aleix
How about using itertools.islice?
import itertools
import numpy as np

for i in itertools.count():
    tmp = list(itertools.islice(generator, 10000000))
    if not tmp:
        break
    np.savez_compressed('results_{}'.format(i), tmp)
    del tmp
Thanks to all for your suggestions.
As suggested by #Aya, I believe that to improve performance (and possibly avoid space issues) I should avoid storing the results on the HD, because storing them adds about half as much time as creating them, so loading and processing them again would get very close to creating the results again. Additionally, if I do not store any results, I save space, which can become a big issue for bigger digraphs (a 12-node complete digraph has about 115 million cycles, but a 29-node one has about 848E27 cycles... and the count increases at a factorial rate).
The idea is that I first need to go through all cycles passing through the weakest arc to find the total probability of all cycles going through it. Then, with this total probability, I must go through all those cycles again and subtract them from the original array according to their weighted probability (I needed the total probability to be able to calculate the weighted probability: weighted_prob = prob_of_this_cycle / total_prob_through_this_edge).
Thus, I believe that this is the best approach to do that (but I am open to more discussions! :) ).
However, I have a doubt regarding the processing speed of two sub-functions:
1st: finding whether a sequence contains a specific (smaller) sequence. I am doing that with the function contains_sequence, which relies on the generator function window (as suggested in Is there a Python builtin for determining if an iterable contained a certain sequence?). However, I have been told that doing it with a deque would be up to 33% faster. Any other ideas?
2nd: I am currently finding the probability of a cycle by sliding through the cycle nodes (represented as a list), looking up the probability at the output of each arc of staying within the cycle, and then multiplying them all to find the cycle probability (the function name is find_cycle_probability). Any performance suggestions on this function would be appreciated, since I need to run it for each cycle, i.e. countless times.
Any other tips/suggestion/comments will be most welcome! And thanks again for your help.
Aleix
Below follows the simplified code:
def simple_cycles_generator_w_filters(working_array_digraph, arc):
    '''Generator function generating all cycles containing a specific arc.'''
    generator = new_cycles.simple_cycles_generator(working_array_digraph)
    for cycle in generator:
        if contains_sequence(cycle, arc):
            yield cycle
    return
def find_smallest_arc_with_cycle(working_array, working_array_digraph):
    '''Find the smallest arc through which at least one cycle flows.
    Returns:
    - if such arc exist:
        smallest_arc_with_cycle = [a,b] where a is the start of arc and b the end
        smallest_arc_with_cycle_value = x where x is the weight of the arc
    - if such arc does not exist:
        smallest_arc_with_cycle = []
        smallest_arc_with_cycle_value = 0 '''
    smallest_arc_with_cycle = []
    smallest_arc_with_cycle_value = 0
    sparse_array = []
    for i in range(numpy.shape(working_array)[0]):
        for j in range(numpy.shape(working_array)[1]):
            if working_array[i][j] != 0:
                sparse_array.append([i, j, working_array[i][j]])
    sorted_array = sorted(sparse_array, key=lambda x: x[2])
    for i in range(len(sorted_array)):
        smallest_arc = [sorted_array[i][0], sorted_array[i][1]]
        generator = simple_cycles_generator_w_filters(working_array_digraph, smallest_arc)
        if any(generator):
            smallest_arc_with_cycle = smallest_arc
            smallest_arc_with_cycle_value = sorted_array[i][2]
            break
    return smallest_arc_with_cycle, smallest_arc_with_cycle_value
def window(seq, n=2):
    """Returns a sliding window (of width n) over data from the iterable
       s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ... """
    it = iter(seq)
    result = list(itertools.islice(it, n))
    if len(result) == n:
        yield result
    for elem in it:
        result = result[1:] + [elem]
        yield result

def contains_sequence(all_values, seq):
    return any(seq == current_seq for current_seq in window(all_values, len(seq)))
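Regarding the first question (speeding up contains_sequence), here is a hedged sketch of the deque-based variant that was suggested; a deque with maxlen drops the oldest element in O(1) instead of rebuilding a list on every step. The function names are mine, and the quoted 33% speed-up is only what was reported, not something verified here:

from collections import deque
import itertools

def window_deque(seq, n=2):
    '''Sliding window of width n over seq, using a deque with maxlen.'''
    it = iter(seq)
    win = deque(itertools.islice(it, n), maxlen=n)
    if len(win) == n:
        yield win
    for elem in it:
        win.append(elem)  # the oldest element falls off automatically
        yield win

def contains_sequence_deque(all_values, seq):
    target = deque(seq, maxlen=len(seq))  # compare deque to deque, element-wise
    return any(win == target for win in window_deque(all_values, len(seq)))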
def find_cycle_probability(cycle, working_array, total_outputs):
    '''Finds the cycle probability of a given cycle within a given array'''
    output_prob_of_each_arc = []
    for i in range(len(cycle)-1):
        weight_of_the_arc = working_array[cycle[i]][cycle[i+1]]
        output_probability_of_the_arc = float(weight_of_the_arc) / float(total_outputs[cycle[i]])  # NOTE: total_outputs is an array, thus the float
        output_prob_of_each_arc.append(output_probability_of_the_arc)
    circuit_probabilities_of_the_cycle = numpy.prod(output_prob_of_each_arc)
    return circuit_probabilities_of_the_cycle
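For the second question, one possible direction (a sketch assuming working_array and total_outputs are numpy arrays, as in the setup code below; the function name is my own) is to replace the Python loop with numpy fancy indexing so the divisions and the product happen in one vectorized pass:

import numpy

def find_cycle_probability_vectorized(cycle, working_array, total_outputs):
    '''Same result as find_cycle_probability, but without an explicit Python loop.'''
    src = numpy.asarray(cycle[:-1])   # start node of each arc in the cycle
    dst = numpy.asarray(cycle[1:])    # end node of each arc
    arc_weights = working_array[src, dst]
    return numpy.prod(arc_weights / total_outputs[src])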
def clean_negligible_values(working_array):
    '''Cleans the array by rounding negligible values to 0 according to a
       pre-defined threeshold.'''
    zero_threeshold = 0.000001
    for i in range(numpy.shape(working_array)[0]):
        for j in range(numpy.shape(working_array)[1]):
            if working_array[i][j] == 0:
                continue
            elif 0 < working_array[i][j] < zero_threeshold:
                working_array[i][j] = 0
            elif -zero_threeshold <= working_array[i][j] < 0:
                working_array[i][j] = 0
            elif working_array[i][j] < -zero_threeshold:
                sys.exit('Error')
    return working_array
original_array = 1000 * numpy.random.random_sample((5, 5))
total_outputs = numpy.sum(original_array, axis=0) + 100 * numpy.random.random_sample(5)
working_array = original_array.__copy__()
straight_array = working_array.__copy__()
cycle_array = numpy.zeros(numpy.shape(working_array))
iteration_counter = 0

working_array_digraph = networkx.DiGraph(working_array)
[smallest_arc_with_cycle, smallest_arc_with_cycle_value] = find_smallest_arc_with_cycle(working_array, working_array_digraph)

while smallest_arc_with_cycle:  # using implicit true value of a non-empty list
    cycle_flows_to_be_subtracted = numpy.zeros(numpy.shape((working_array)))

    # FIRST run of the generator to calculate each cycle probability
    # note: the cycle generator ONLY provides all cycles going through
    # the specified weakest arc
    generator = simple_cycles_generator_w_filters(working_array_digraph, smallest_arc_with_cycle)
    nexus_total_probs = 0
    for cycle in generator:
        cycle_prob = find_cycle_probability(cycle, working_array, total_outputs)
        nexus_total_probs += cycle_prob

    # SECOND run of the generator
    # using the nexus_prob_sum calculated before, I can allocate the weight of the
    # weakest arc to each cycle going through it
    generator = simple_cycles_generator_w_filters(working_array_digraph, smallest_arc_with_cycle)
    for cycle in generator:
        cycle_prob = find_cycle_probability(cycle, working_array, total_outputs)
        allocated_cycle_weight = cycle_prob / nexus_total_probs * smallest_arc_with_cycle_value
        # create the array to be substracted
        for i in range(len(cycle)-1):
            cycle_flows_to_be_subtracted[cycle[i]][cycle[i+1]] += allocated_cycle_weight

    working_array = working_array - cycle_flows_to_be_subtracted
    clean_negligible_values(working_array)
    cycle_array = cycle_array + cycle_flows_to_be_subtracted
    straight_array = straight_array - cycle_flows_to_be_subtracted
    clean_negligible_values(straight_array)
    # find the next weakest arc with cycles.
    working_array_digraph = networkx.DiGraph(working_array)
    [smallest_arc_with_cycle, smallest_arc_with_cycle_value] = find_smallest_arc_with_cycle(working_array, working_array_digraph)
I'm looking for a clean way to do this in Python:
Let's say I have two iterators "iter1" and "iter2": perhaps a prime number generator, and itertools.count(). I know a priori that both are infinite and monotonically increasing. Now I want to take some simple operation of two args "op" (perhaps operator.add or operator.mul), and calculate every element of the first iterator with every element of the next, using said operation, then yield them one at a time, sorted. Obviously, this is an infinite sequence itself. (As mentioned in a comment by #RyanThompson: this would be called the Cartesian product of these sequences... or, more exactly, the 1d-sort of that product.)
What is the best way to:
wrap-up "iter1", "iter2", and "op" in an iterable that itself yields the values in monotonically increasing output.
Allowable simplifying assumptions:
If it helps, we can assume op(a,b) >= a and op(a,b) >= b.
If it helps, we can assume op(a,b) > op(a,c) for all b > c.
Also allowable:
Also acceptable would be an iterator that yields values in "generally increasing" order... by which I mean the iterable could occasionally give me a number less than a previous one, but it would somehow make "side information" available (as via a method on the object) that would say "I'm not guaranteeing the next value I give you will be greater than the one I just gave you, but I AM SURE that all future values will at least be greater than N."... and "N" itself is monotonically increasing.
The only way I can think to do this is a sort of "diagonalization" process, where I keep an increasing number of partially processed iterables around, and "look ahead" for the minimum of all the possible next() values, and yield that. But this weird agglomeration of a heapq and a bunch of deques just seems outlandish, even before I start to code it.
Please: do not base your answer on the fact that my examples mentioned primes or count().... I have several uses for this very concept that are NOT related to primes and count().
UPDATE: OMG! What a great discussion! And some great answers with really thorough explanations. Thanks so much. StackOverflow rocks; you guys rock.
I'm going to delve in to each answer more thoroughly soon, and give the sample code a kick in the tires. From what I've read so far, my original suspicions are confirmed that there is no "simple Python idiom" to do this. Rather, by one way or another, I can't escape keeping all yielded values of iter1 and iter2 around indefinitely.
FWIW: here's an official "test case" if you want to try your solutions.
import operator

def powers_of_ten():
    n = 0
    while True:
        yield 10**n
        n += 1

def series_of_nines():
    yield 1
    n = 1
    while True:
        yield int("9"*n)
        n += 1

op = operator.mul
iter1 = powers_of_ten()
iter2 = series_of_nines()

# given (iter1, iter2, op), create an iterator that yields:
# [1, 9, 10, 90, 99, 100, 900, 990, 999, 1000, 9000, 9900, 9990, 9999, 10000, ...]
import heapq
import itertools
import operator

def increasing(fn, left, right):
    """
    Given two never decreasing iterators produce another iterator
    resulting from passing the value from left and right to fn.
    This iterator should also be never decreasing.
    """
    # Imagine an infinite 2D-grid.
    # Each column corresponds to an entry from right
    # Each row corresponds to an entry from left
    # Each cell correspond to apply fn to those two values

    # If the number of columns were finite, then we could easily solve
    # this problem by keeping track of our current position in each column
    # in each iteration, we'd take the smallest value report it, and then
    # move down in that column. This works because the values must increase
    # as we move down the column. That means the current set of values
    # under consideration must include the lowest value not yet reported

    # To extend this to infinite columns, at any point we always track a finite
    # number of columns. The last column current tracked is always in the top row
    # if it moves down from the top row, we add a new column which starts at the top row
    # because the values are increasing as we move to the right, we know that
    # this last column is always lower then any columns that come after it

    # Due to infinities, we need to keep track of all
    # items we've ever seen. So we put them in this list
    # The list contains the first part of the incoming iterators that
    # we have explored
    left_items = [next(left)]
    right_items = [next(right)]

    # we use a heap data structure, it allows us to efficiently
    # find the lowest of all value under consideration
    heap = []

    def add_value(left_index, right_index):
        """
        Add the value result from combining the indexed attributes
        from the two iterators. Assumes that the values have already
        been copied into the lists
        """
        value = fn(left_items[left_index], right_items[right_index])
        # the value on the heap has the index and value.
        # since the value is first, low values will be "first" on the heap
        heapq.heappush(heap, (value, left_index, right_index))

    # we know that every other value must be larger then
    # this one.
    add_value(0, 0)

    # I assume the incoming iterators are infinite
    while True:
        # fetch the lowest of all values under consideration
        value, left_index, right_index = heapq.heappop(heap)

        # produce it
        yield value

        # add moving down the column
        if left_index + 1 == len(left_items):
            left_items.append(next(left))
        add_value(left_index + 1, right_index)

        # if this was the first row in this column, add another column
        if left_index == 0:
            right_items.append(next(right))
            add_value(0, right_index + 1)

def fib():
    a = 1
    b = 1
    while True:
        yield a
        a, b = b, a + b

r = increasing(operator.add, fib(), itertools.count())
for x in range(100):
    print next(r)
Define the sequences as:
a1 <= a2 <= a3 ...
b1 <= b2 <= b3 ...
Let a1b1 mean op(a1,b1) for short.
Based on your allowable assumptions (very important) you know the following:
max(a1, b1) <= a1b1 <= a1b2 <= a1b3 ...
<=
max(a2, b1) <= a2b1 <= a2b2 <= a2b3 ...
<=
max(a3, b1) <= a3b1 <= a3b2 <= a3b3 ...
. .
. .
. .
You'll have to do something like:
Generate a1b1. You know that if you continue increasing the b variables, you will only get higher values. The question now is: is there a smaller value by increasing the a variables? Your lower bound is min(a1, b1), so you will have to increase the a values until min(ax,b1) >= a1b1. Once you hit that point, you can find the smallest value from anb1 where 1 <= n <= x and yield that safely.
You then will have multiple horizontal chains that you'll have to keep track of. Every time you have a value that goes past min(ax,b1), you'll have to increase x (adding more chains) until min(ax,b1) is larger than it before safely emitting it.
Just a starting point... I don't have time to code it at the moment.
EDIT: Oh heh that's exactly what you already had. Well, without more info, this is all you can do, as I'm pretty sure that mathematically, that's what is necessary.
EDIT2: As for your 'acceptable' solution: you can just yield a1bn in increasing order of n, returning min(a1,b1) as N =P. You'll need to be more specific. You speak as if you have a heuristic of what you generally want to see, the general way you want to progress through both iterables, but without telling us what it is I don't know how one could do better.
UPDATE: Winston's is good but makes an assumption that the poster didn't mention: that op(a,c) > op(b,c) if b>a. However, we do know that op(a,b)>=a and op(a,b)>=b.
Here is my solution which takes that second assumption but not the one Winston took. Props to him for the code structure, though:
def increasing(fn, left, right):
    left_items = [next(left)]
    right_items = [next(right)]
    # columns are (column value, right index)
    columns = [(fn(left_items[0], right_items[0]), 0)]
    while True:
        # find the current smallest value
        min_col_index = min(xrange(len(columns)), key=lambda i: columns[i][0])
        # generate columns until it's impossible to get a smaller value
        while right_items[0] <= columns[min_col_index][0] and \
              left_items[-1] <= columns[min_col_index][0]:
            next_left = next(left)
            left_items.append(next_left)
            columns.append((fn(next_left, right_items[0]), 0))
            if columns[-1][0] < columns[min_col_index][0]:
                min_col_index = len(columns) - 1
        # yield the smallest value
        yield columns[min_col_index][0]
        # move down that column
        val, right_index = columns[min_col_index]
        # make sure that right value is generated:
        while right_index + 1 >= len(right_items):
            right_items.append(next(right))
        columns[min_col_index] = (fn(left_items[min_col_index], right_items[right_index + 1]),
                                  right_index + 1)
        # repeat
For a (pathological) input that demonstrates the difference, consider:
def pathological_one():
    cur = 0
    while True:
        yield cur
        cur += 100

def pathological_two():
    cur = 0
    while True:
        yield cur
        cur += 100

lookup = [
    [1,   666, 500],
    [666, 666, 666],
    [666, 666, 666],
    [666, 666, 666]]

def pathological_op(a, b):
    if a >= 300 or b >= 400: return 1005
    return lookup[b/100][a/100]

r = increasing(pathological_op, pathological_one(), pathological_two())
for x in range(15):
    print next(r)
Winston's answer gives:
>>>
1
666
666
666
666
500
666
666
666
666
666
666
1005
1005
1005
While mine gives:
>>>
1
500
666
666
666
666
666
666
666
666
666
666
1005
1005
1005
Let me start with an example of how I would solve this intuitively.
Because reading code inline is a little tedious, I'll introduce some notation:
Notation
i1 will represent iter1, and i1[0] will represent the first element of iter1. Same for iter2.
※ will represent the op operator
Intuitive solution
By using simplifying assumption 2, we know that i1[0] ※ i2[0] is the smallest element that will ever be yielded from your final iterator. The next element would be the smaller of i1[0] ※ i2[1] and i1[1] ※ i2[0].
Assuming i1[0] ※ i2[1] is smaller, you would yield that element. Next, you would yield the smaller of i1[1] ※ i2[0], i1[0] ※ i2[2], and i1[1] ※ i2[1].
Expression as traversal of a DAG
What you have here is a graph traversal problem. First, think of the problem as a tree. The root of the tree is i1[0] ※ i2[0]. This node, and each node below it, has two children. The two children of i1[x] ※ i2[y] are the following: one child is i1[x+1] ※ i2[y], and the other child is i1[x] ※ i2[y+1]. Based on your second assumption, we know that i1[x] ※ i2[y] is less than both of its children.
(In fact, as Ryan mentions in a comment, this is a directed acyclic graph, or DAG. Some "parents" share "children" with other "parent" nodes.)
Now, we need to keep a frontier - a collection of nodes that could be next to be returned. After returning a node, we add both its children to the frontier. To select the next node to visit (and return from your new iterator), we compare the values of all the nodes in the frontier. We take the node with the smallest value and we return it. Then, we again add both of its child nodes to the frontier. If the child is already in the frontier (added as the child of some other parent), just ignore it.
Storing the frontier
Because you are primarily interested in the value of the nodes, it makes sense to store these nodes indexed by value. As such, it may be in your interest to use a dict. Keys in this dict should be the values of nodes. Values in this dict should be lists containing individual nodes. Because the only identifying information in a node is the pair of operands, you can store individual nodes as a two-tuple of operands.
In practice, after a few iterations, your frontier may look like the following:
>>> frontier
{1: [(2, 3), (2, 4)], 2: [(3, 5), (5, 4)], 3: [(1, 6)], 4: [(6, 3)]}
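A hedged sketch of that frontier bookkeeping (my own illustration of the scheme described above, assuming op, iter1 and iter2 as in the question; cached prefixes of the iterators stand in for random access):

def sorted_products(op, iter1, iter2):
    '''Yield op(a, b) over all pairs, smallest first, using a dict-of-lists frontier.'''
    vals1, vals2 = [next(iter1)], [next(iter2)]

    def value_of(i, j):
        # extend the cached prefixes as needed (this is the unbounded storage noted below)
        while i >= len(vals1):
            vals1.append(next(iter1))
        while j >= len(vals2):
            vals2.append(next(iter2))
        return op(vals1[i], vals2[j])

    frontier = {value_of(0, 0): [(0, 0)]}   # value -> list of (i, j) nodes
    seen = {(0, 0)}                         # nodes ever added to the frontier

    while True:
        smallest = min(frontier)            # smallest value currently in the frontier
        for i, j in frontier.pop(smallest):
            yield smallest
            for child in ((i + 1, j), (i, j + 1)):
                if child not in seen:       # shared children get added only once
                    seen.add(child)
                    frontier.setdefault(value_of(*child), []).append(child)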
Other implementation notes
Because iterators don't support random access, you'll need to hang on to values that are produced by your first two iterators until they are no longer needed. You'll know that a value is still needed if it is referenced by any value in your frontier. You'll know that a value is no longer needed once all nodes in the frontier reference values later/greater than one you've stored. For example, i1[20] is no longer needed when nodes in your frontier reference only i1[21], i1[25], i1[33], ...
As mentioned by Ryan, each value from each iterator will be used an infinite number of times. Thus, every value produced will need to be saved.
Not practical
Unfortunately, in order to assure that elements are returned only in increasing order, the frontier will grow without bound. Your memoized values will probably also take a significant amount of space and will also grow without bound. This may be something you can address by making your problem less general, but this should be a good starting point.
So you basically want to take two monotonically increasing sequences, and then (lazily) compute the multiplication (or addition, or another operation) table between them, which is a 2-D array. Then you want to put the elements of that 2-D array in sorted order and iterate through them.
In general, this is impossible. However, if your sequences and operation are such that you can make certain guarantees about the rows and columns of the table, then you can make some progress. For example, let's assume that your sequences are monotonically increasing sequences of positive integers only, and that the operation is multiplication (as in your example). In this case, we know that every row and column of the array is a monotonically increasing sequence. Then you do not need to compute the entire array, but rather only parts of it. Specifically, you must keep track of the following:
How many rows you have ever used
The number of elements you have taken from each row that you have used
Every element from either input sequence that you have ever used, plus one more from each
To compute the next element in your iterator, you must do the following:
For each row that you have ever used, compute the "next" value in that row. For example, if you have used 5 values from row 1, then compute the 6th value (i=1, j=6) by taking the 1st value from the first sequence and the 6th value from the second sequence (both of which you have cached) and applying the operation (multiplication) to them. Also, compute the first value in the first unused row.
Take the minimum of all the values you computed. Yield this value as the next element in your iterator
Increment the counter for the row from which you sampled the element in the previous step. If you took the element from a new, unused row, you must increment the count of the number of rows you have used, and you must create a new counter for that row initialized to 1. If necessary, you must also compute more values of one or both input sequences.
This process is kind of complex, and in particular notice that to compute N values, you must in the worst case save an amount of state proportional to the square root of N. (Edit: sqrt(N) is actually the best case.) This is in stark contrast to a typical generator, which only requires constant space to iterate through its elements regardless of length.
In summary, you can do this under certain assumptions, and you can provide a generator-like interface to it, but it cannot be done in a "streaming" fashion, because you need to save a lot of state in order to iterate through the elements in the correct order.
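A hedged sketch of that bookkeeping (my own illustration of the row-counter scheme under the stated assumptions; the names are mine):

def sorted_table(op, seq1, seq2):
    '''Yield op(a, b) over the lazy 2-D table in increasing order,
    tracking how many elements have been taken from each used row.'''
    a_vals = [next(seq1)]   # cached prefix of the first sequence (row headers)
    b_vals = [next(seq2)]   # cached prefix of the second sequence (column headers)
    taken = []              # taken[i] = how many elements of row i have been yielded

    while True:
        # the "next" value of every used row...
        candidates = []
        for i, count in enumerate(taken):
            while count >= len(b_vals):
                b_vals.append(next(seq2))
            candidates.append((op(a_vals[i], b_vals[count]), i))
        # ...plus the first value of the first unused row
        while len(taken) >= len(a_vals):
            a_vals.append(next(seq1))
        candidates.append((op(a_vals[len(taken)], b_vals[0]), len(taken)))

        value, row = min(candidates)
        yield value
        if row == len(taken):
            taken.append(1)   # first element taken from a previously unused row
        else:
            taken[row] += 1

# With the test case from the question, sorted_table(operator.mul, powers_of_ten(), series_of_nines())
# should yield 1, 9, 10, 90, 99, 100, 900, 990, 999, 1000, ...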
Use generators, which are just iterators written as functions that yield results. In this case you can write generators for iter1 and iter2 and another generator to wrap them and yield their results (or do calculations with them, or the history of their results) as you go.
From my reading of the question, you want something like this, which will calculate every element of the first iterator with every element of the next, using said operation. You also state you want some way to wrap up "iter1", "iter2", and "op" in an iterable that itself yields the values in monotonically increasing output. I propose that generators offer a simple solution to this problem.
import itertools

def prime_gen():
    D, q = {}, 2
    while True:
        if q not in D:
            yield q
            D[q * q] = [q]
        else:
            for p in D[q]:
                D.setdefault(p + q, []).append(p)
            del D[q]
        q += 1

def infinite_gen(op, iter1, iter2):
    while True:
        yield op(iter1.next(), iter2.next())
>>> gen = infinite_gen(operator.mul, prime_gen(), itertools.count())
>>> gen.next()
<<< 0
>>> gen.next()
<<< 3
>>> gen.next()
<<< 10
Generators offer a lot of flexibility, so it should be fairly easy to write iter1 and iter2 as generators that return values you want in the order you want. You could also consider using coroutines, which let you send values into a generator.
Discussion in other answers observes that there is potentially infinite storage required no matter what the algorithm, since every a[n] must remain available for each new b[n]. If you remove the restriction that the input be two iterators and instead only require that they be sequences (indexable or merely something that can be regenerated repeatedly) then I believe all of your state suddenly collapses to one number: The last value you returned. Knowing the last result value you can search the output space looking for the next one. (If you want to emit duplicates properly then you may need to also track the number of times the result has been returned)
With a pair of sequences you have a simple recurrence relation:
result(n) = f(seq1, seq2, result(n-1))
where f(seq1, seq2, p) searches for the minimum value in the output space q such that q > p. In practical terms you'd probably make the sequences memoized functions and choose your search algorithm to avoid thrashing the pool of memoized items.
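A hedged sketch of that search, assuming the sequences are unbounded, nondecreasing, memoized index-to-value functions and that op(a, b) >= max(a, b) and is nondecreasing in each argument (the duplicate-counting mentioned above is omitted, so repeated values are collapsed); the names and the recast of the question's test case as indexable functions are mine:

from functools import lru_cache

def next_result(op, seq1, seq2, prev):
    '''Return the smallest op(seq1(i), seq2(j)) strictly greater than prev.'''
    best = None
    i = 0
    # rows whose seq1(i) >= best cannot hold anything smaller, since op(a, b) >= a
    while best is None or seq1(i) < best:
        j = 0
        # walk along the nondecreasing row until we pass prev
        while op(seq1(i), seq2(j)) <= prev:
            j += 1
        candidate = op(seq1(i), seq2(j))
        if best is None or candidate < best:
            best = candidate
        i += 1
    return best

@lru_cache(maxsize=None)
def tens(i):    # 1, 10, 100, ... as an indexable, memoized "sequence"
    return 10 ** i

@lru_cache(maxsize=None)
def nines(j):   # 1, 9, 99, 999, ...
    return 1 if j == 0 else int("9" * j)

prev = float('-inf')
for _ in range(10):
    prev = next_result(lambda a, b: a * b, tens, nines, prev)
    print(prev)   # 1, 9, 10, 90, 99, 100, 900, 990, 999, 1000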