Python Program Giving me StopIteration - python

I have made a function: generatesequence (shown below)
def generatesequence(start: float, itera: float = 1, stop: float = None):
    """
    Generate a sequence, that can have a stopping point, starting point.
    """
    __num = start
    # if sequence has a stopping point
    if stop != None:
        # if stop is negative
        if stop < 0:
            # while num is greater than stop (0 < 5, but 0 > -5)
            while __num >= stop:
                # yield __num variable (yield = return without exiting function)
                yield __num
                # add itera to __num
                __num += itera
        else:
            while __num <= stop:
                yield __num
                __num += itera
    else:
        # if sequence has no stopping point, run forever
        while True:
            yield __num
            __num += itera
I have also made a Sequence Class (also shown below)
class Sequence:
    def __init__(self, start, itera, stop):
        self.sequence = generatesequence(start, itera, stop)
        self.sequencelength = iterlen(self.sequence)
        print(self.sequencelength)

    def printself(self):
        for i in range(self.sequencelength):
            print(next(self.sequence))
However, when I run printself on a Sequence instance, it gives me a StopIteration error. How can I fix this?

You don't need to do that with a generator, you can just do the following:
def printself(self):
    for i in self.sequence:
        print(i)
This way you don't need to calculate the length of the generator beforehand

Calculating the length of a generator defeats the whole purpose of using a generator, and it also explains the StopIteration.
Unlike a list or another data structure that takes O(n) memory, a generator takes O(1) space and cannot know its length without iterating it element by element.
By calculating the length, you have moved the generator's iterator from start to end, and it now points past the last element.
So when you access the generator afterwards, it raises StopIteration.
The whole purpose of a generator and the like is to save memory for iterables that you know will be iterated at most once. You cannot do two or more full iterations over a generator. To do that, call list() on the generator beforehand and save the values in a list or similar data structure, or simply recreate the generator after it has been used up (= iterated over).
In short, to fix the bug, remove the line that computes the length of the generator in the __init__ method, and do the for loop using the
"for i in generator_name:"
syntax.
Alternatively, you can write a method that creates the generator and call it to recreate the generator whenever and wherever you need it.
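For reference, here is a minimal sketch of what the fixed class could look like, combining both suggestions above (it keeps the names from the question and assumes generatesequence is defined as shown):
class Sequence:
    def __init__(self, start, itera, stop):
        # remember the arguments so the generator can be recreated later
        self.start, self.itera, self.stop = start, itera, stop
        self.sequence = generatesequence(start, itera, stop)

    def reset(self):
        # recreate the generator once it has been exhausted
        self.sequence = generatesequence(self.start, self.itera, self.stop)

    def printself(self):
        # iterate the generator directly; no precomputed length needed
        for i in self.sequence:
            print(i)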

Related

Previous in yield operations - python

Recently I have been using yield in Python, and I find generator functions very useful. My question is: is there something that could decrement the imaginary cursor in the generator object? Just as next(genfun) moves forward and outputs the next item in the container, I would like to know whether there is a function such as previous(genfun) that moves back to the previous item in the container.
Actual Working
def wordbyword():
    words = ["a", "b", "c", "d", "e"]
    for word in words:
        yield word

getword = wordbyword()
next(getword)
next(getword)
Output:
a
b
What I would like to see and achieve is
def wordbyword():
    words = ["a", "b", "c", "d", "e"]
    for word in words:
        yield word

getword = wordbyword()
next(getword)
next(getword)
previous(getword)
Expected Output
a
b
a
This may sound silly, but is there some way to get this previous behaviour in a generator, and if not, why is that? Why can't we decrement the iterator, or am I unaware of an existing method? Please shed some light. What would be the closest way to implement what I have here in hand?
No, there is no such function to go back in a generator. The reason is that Python does not natively store the previous value produced by a generator function, and as it does not store it, it also cannot recalculate it.
For example, if your generator is a time-sensitive function, such as
from datetime import datetime

def time_sensitive_generator():
    yield datetime.now()
You will have no way to recalculate the previous value in this generator function.
Of course, this is only one of the many possible cases that a previous value cannot be calculated, but that is the idea.
If you do not store the value yourself, it will be lost forever.
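For illustration, a minimal sketch of storing the values yourself as you consume a generator (the history list and the endless variant of the time-sensitive generator are assumptions for this example, not part of the answer above):
from datetime import datetime

def time_sensitive_generator():
    # endless variant so that more than one value can be drawn
    while True:
        yield datetime.now()

history = []                      # values kept by the caller
gen = time_sensitive_generator()
history.append(next(gen))
history.append(next(gen))
print(history[-2])                # the "previous" value, recovered from storage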
As already said, there is no such function since the entire point of a generator is to have a small memory footprint. You would need to store the result.
You could automate the storing of previous results. One use-case of generators is when you have a conceptually infinite list (e.g. that of prime numbers) for which you only need an initial segment. You could write a generator that builds up these initial segments as a side effect. Have an optional history parameter that the generator appends to while it is yielding. For example:
def wordbyword(history=None):
    words = ["a", "b", "c", "d", "e"]
    for word in words:
        if isinstance(history, list):
            history.append(word)
        yield word
If you use the generator without an argument, getword = wordbyword(), it will work like an ordinary generator, but if you pass it a list, that list will store the growing history:
hist = []
getword = wordbyword(hist)
print(next(getword)) #a
print(next(getword)) #b
print(hist) #['a','b']
Iterating over a generator object consumes its elements, so there is nothing to go back to after using next. You could convert the generator to a list and implement your own next and previous
index = 0

def next(lst):
    global index
    index += 1
    if index > len(lst):
        raise StopIteration
    return lst[index - 1]

def previous(lst):
    global index
    index -= 1
    if index == 0:
        raise StopIteration
    return lst[index - 1]

getword = list(wordbyword())
print(next(getword))      # a
print(next(getword))      # b
print(previous(getword))  # a
One option is to wrap wordbyword with a class that has a custom __next__ method. In this way, you can still use the built-in next function to consume the generator on-demand, but the class will store all the past results from the next calls and make them accessible via a previous attribute:
class save_last:
    def __init__(self, f_gen):
        self.f_gen = f_gen
        self._previous = []

    def __next__(self):
        self._previous.append(n := next(self.i_gen))
        return n

    def __call__(self, *args, **kwargs):
        self.i_gen = self.f_gen(*args, **kwargs)
        return self

    @property
    def previous(self):
        if len(self._previous) < 2:
            raise Exception
        return self._previous[-2]

@save_last
def wordbyword():
    words = ["a", "b", "c", "d", "e"]
    for word in words:
        yield word

getword = wordbyword()
print(next(getword))
print(next(getword))
print(getword.previous)
Output:
a
b
a

How can we send for loop variable as the function argument in python

I want to send the value of i to the fun() function and receive its result as the 3rd argument of range().
for i in range(1, 11, fun(i)):
    print(i)
The error I am getting is:
NameError: name 'i' is not defined
I have tried defining i as global, but it didn't work out because the two 'i' names have different ids.
For example:
global i
print(id(i))
for i in range(1, 11, fun(i)):
    print(id(i))
So please help.
You can't do this with range(). The arguments to range() are just evaluated once, when the loop starts, not each time through the loop, and it effectively calculates the entire list of values at the start (it actually uses an iterator object internally, but that doesn't change this). The sequence can't be changed during the loop.
Do it with a while loop that recalculates i itself.
i = 1
while i < 11:
    print(i)
    i = fun(i)
If you really want to use for, you could define a generator function that returns your sequence of values.
def multrange(init, end, step):
    i = init
    while i < end:
        yield i
        i = i * step

for i in multrange(1, 11, 2):
    print(i)
You can even define a generator that takes the function to use to generate the next step as an argument.
def generic_range(init, end, stepper):
    i = init
    while i < end:
        yield i
        i = stepper(i)

for i in generic_range(1, 11, fun):
    print(i)
Like what Barmar mentioned, the range() function is only evaluated once. The behavior you seek is best accomplished through a while loop.
If you'd like to insist on a for loop however, you may explore the following options:
Define a generator function and then use a for loop that iterates from it (the answer by Barmar)
Use a for loop with enumerate:
step = 0  # initialize variable `step`
for index, x in enumerate(range(1, 30)):
    if index == step:
        print(x)
        # set the next index for which to
        # output a value
        step = fun(x) + index

How can I make this generator loop?

I am trying to loop over a directory and load all files. I've tried using one generator to load files and another one to generate batches and call the first generator when it runs out of memory.
def file_gen(b):
    # iterate over my directory and load two audio files at a time
    for n in range(len(b)):
        path_ = os.path.join(os.path.join(path, 'Mixtures'), 'Dev')
        os.chdir(os.path.join(path_, b[n]))
        y, _ = librosa.load('mixture.wav', sr=rate)
        path_vox = os.path.join(os.path.join(path, 'Sources'), 'Dev')
        os.chdir(os.path.join(path_vox, b[n]))
        x, _ = librosa.load('vocals.wav', sr=rate)
        yield y, x

list_titles = os.listdir(os.path.join(os.path.join(path, 'Mixtures'), 'Dev'))
gen_file = file_gen(list_titles)
# second generator
def memory_test():
    memory = 0
    if memory == 0:
        a, b = next(gen_file)
        a, _ = mag_phase(spectrogram(a))
        b, _ = mag_phase(spectrogram(b))
        # calculate how many batches I can generate from the file
        memory = a.shape[1] / (n_frames * (time_len - overlap) + time_len)
    for n in range(memory):
        yield memory
        memory = memory - 1

test = memory_test()
The second generator is where the problem is. Ideally, I would like both generators to iterate indefinitely, though (the first one should go back to the beginning of the list).
Thank you!
itertools.cycle()
One way you could do this is to use itertools.cycle(), which will essentially store the results of the generator and then loop over them continuously (see the docs).
If you choose to do that, you will consume a lot of additional memory storing those results.
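For example, a minimal sketch of that approach applied to the first generator from the question (assuming keeping every loaded file in memory is acceptable):
import itertools

looped_files = itertools.cycle(file_gen(list_titles))
# next() can now be called forever; once the directory is exhausted,
# the stored (y, x) pairs are replayed from the beginning
y, x = next(looped_files)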
except StopIteration
As an alternative method, you could wrap the next() call on your generator in try/except StopIteration in order to reset it back to the beginning. Generators always raise StopIteration if you call __next__ on an exhausted generator.
Edit: I originally linked to a wrapper function here but the code in that example actually doesn't work. Below is code that I have tested to work which is hopefully helpful. My answer here is based on the same concept.
def Primes(max):  # primary generator
    number = 1
    while number < max:
        number += 1
        if check_prime(number):
            yield number

primes = Primes(100)

def primer():  # this acts as a loop and resets your generator
    global primes
    try:
        ok = next(primes)
        return ok
    except StopIteration:
        primes = Primes(100)
        ok = next(primes)
        return ok

while True:  # this is the actual loop, continuing forever
    primer()
You'll notice we couldn't implicitly refer to our own function in order to reset itself, and we also couldn't use a standard for loop because it will always catch StopIteration before you can, by design [more info].

Python, use of yield to implement a cyclic generator

TL;DR is what I'm trying to do too complicated for a yield-based generator?
I have a python application where I need to repeat an expensive test on a list of objects, one at a time, and then mangle those that pass. I expect several objects to pass, but I do not want to create a list of all those that pass, as mangle will alter the state of some of the other objects. There is no requirement to test in any particular order. Then rinse and repeat until some stop condition.
My first simple implementation was this, which runs logically correctly
while not stop_condition:
    for object in object_list:
        if test(object):
            mangle(object)
            break
    else:
        handle_no_tests_passed()
unfortunately, for object in object_list: always restarts at the beginning of the list, where the objects probably haven't been changed, and there are objects at the end of the list ready to test. Picking them at random would be slightly better, but I would rather carry on where I left off from the previous for/in call. I still want the for/in call to terminate when it's traversed the entire list.
This sounded like a job for yield, but I tied my brain in knots failing to make it do what I wanted. I can use it in the simple cases, iterating over a range or returning filtered records from some source, but I couldn't find out how to make it save state and restart reading from its source.
I can often do things the long wordy way with classes, but fail to understand how to use the alleged simplifications like yield. Here is a solution that does exactly what I want.
class CyclicSource:
    def __init__(self, source):
        self.source = source
        self.pointer = 0

    def __iter__(self):
        # reset how many we've done, but not where we are
        self.done_this_call = 0
        return self

    def __next__(self):
        ret_val = self.source[self.pointer]
        if self.done_this_call >= len(self.source):
            raise StopIteration
        self.done_this_call += 1
        self.pointer += 1
        self.pointer %= len(self.source)
        return ret_val

source = list(range(5))
q = CyclicSource(source)

print('calling once, aborted early')
count = 0
for i in q:
    count += 1
    print(i)
    if count >= 2:
        break
else:
    print('ran off first for/in')

print('calling again')
for i in q:
    print(i)
else:
    print('ran off second for/in')
which demonstrates the desired behaviour
calling once, aborted early
0
1
calling again
2
3
4
0
1
ran off second for/in
Finally, the question. Is it possible to do what I want with the simplified generator syntax using yield, or does maintaining state between successive for/in calls require the full class syntax?
Your use of the __iter__ method causes your iterator to be reset. This actually goes quite counter to regular behaviour of an iterator; the __iter__ method should just return self, nothing more. You rely on a side effect of for applying iter() to your iterator each time you create a for i in q: loop. This makes your iterator work, but the behaviour is surprising and will trip up future maintainers. I'd prefer that effect to be split out to a separate .reset() method, for example.
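For example, one way that suggestion could look, with __iter__ simply returning self and an explicit reset() method (a sketch, not the only possible split):
class CyclicSource:
    def __init__(self, source):
        self.source = source
        self.pointer = 0
        self.done_this_call = 0

    def reset(self):
        # explicit, named reset instead of a side effect of iter()
        self.done_this_call = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.done_this_call >= len(self.source):
            raise StopIteration
        ret_val = self.source[self.pointer]
        self.done_this_call += 1
        self.pointer = (self.pointer + 1) % len(self.source)
        return ret_val
Callers would then write q.reset() before each new traversal instead of relying on the for loop calling iter().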
You can reset a generator too, using generator.send() to signal it to reset:
def cyclic_source(source):
    pointer = 0
    done_this_call = 0
    while done_this_call < len(source):
        ret_val = source[pointer]
        done_this_call += 1
        pointer = (pointer + 1) % len(source)
        reset = yield ret_val
        if reset is not None:
            done_this_call = 0
            yield  # pause again for next iteration sequence
Now you can 'reset' your count back to zero:
q = cyclic_source(source)
for count, i in enumerate(q):
    print(i)
    if count == 1:
        break
else:
    print('ran off first for/in')

print('explicitly resetting the generator')
q.send(True)
for i in q:
    print(i)
else:
    print('ran off second for/in')
This is, however, rather counter to readability. I'd instead use an infinite generator built with itertools.cycle() that is limited in the number of iterations with itertools.islice():
from itertools import cycle, islice

q = cycle(source)

for count, i in enumerate(islice(q, len(source))):
    print(i)
    if count == 1:
        break
else:
    print('ran off first for/in')

for i in islice(q, len(source)):
    print(i)
else:
    print('ran off second for/in')
q will produce values from source in an endless loop. islice() cuts off iteration after len(source) elements. But because q is reused, it is still maintaining the iteration state.
If you must have a dedicated iterator, stick to a class object and make an iterable, so have it return a new iterator each time __iter__ is called:
from itertools import cycle, islice

class CyclicSource:
    def __init__(self, source):
        self.length = len(source)
        self.source = cycle(source)

    def __iter__(self):
        return islice(self.source, self.length)
This keeps state in the cycle() iterator still, but simply creates a new islice() object each time you create an iterator for this. It basically encapsulates the islice() approach above.
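A quick usage sketch of this version, continuing the earlier example:
source = list(range(5))
q = CyclicSource(source)

for count, i in enumerate(q):   # first traversal, aborted early
    print(i)                    # prints 0, 1
    if count == 1:
        break

for i in q:                     # a fresh islice(), but the cycle() state is kept
    print(i)                    # prints 2, 3, 4, 0, 1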

Halt a recursively called function

I'm trying to halt the for loop below once the values (x, y) or (z, w) have been returned, so that the value i doesn't keep increasing, and it simply stops when the if or elif condition is first met.
def maxPalindrome(theList):
    # students need to put some logic here
    maxcomplist = theList[:]
    maxcomplist.reverse()
    control = len(theList) - 1
    # exit if maxPalindrome is True
    for i in range(control):
        if maxcomplist[:] == theList[:]:
            x = 0
            y = len(theList)
            return (x, y)
            break
        elif maxcomplist[i:control] == theList[i:control]:
            successList = theList[i:control]
            z = i
            w = len(theList) - z - 1
            return (z, w)
How can I accomplish this?
As I already wrote in a comment: that function isn't recursive at all.
Recursion means that a function calls itself to accomplish its purpose. The call can be indirect, meaning the function uses a helper function that calls the first function again.
Your code does neither.
A recursive function always has a certain architecture:
the first thing after being called should be to test whether the primitive case (or one primitive case among several) has been reached; if so, it returns.
If not, it computes whatever is needed and passes the results to itself again,
until the primitive case is reached and the nested function calls finish one after the other.
One well-known usage of recursion is the quicksort algorithm:
def quicksort(alist):
    if len(alist) < 2:
        return alist  # primitive case: a list of size one is ordered
    pivotelement = alist.pop()
    # compute the 2 lists for the next recursive call
    left = [element for element in alist if element < pivotelement]    # left = smaller than pivotelement
    right = [element for element in alist if element >= pivotelement]  # right = greater than or equal to pivotelement
    # call the function recursively
    return quicksort(left) + [pivotelement] + quicksort(right)
So the "stop" must be the return of a primitive case. This is vital for recursion. You cannot just break out somehow.
I don't understand the question; if I get it right, what you want already happens. When you return, the function stops running.
Some comments in addition to this answer:
As well, I cannot see where the function is called recursively, nor what
exit if maxPalindrome is True
means. (Is that a comment, maybe?)
Besides, maxcomplist[:] == theList[:] does not make much sense to me; it seems to be a waste of time and memory, and having this comparison in every loop iteration doesn't make it any faster either.
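As an illustration of that last point, the whole-list comparison does not depend on i, so it can be done once before the loop, and the redundant [:] copies can be dropped. A sketch, keeping the question's variable names:
def maxPalindrome(theList):
    maxcomplist = theList[::-1]          # reversed copy, computed once
    # the whole-list check does not depend on i, so do it once up front
    if maxcomplist == theList:
        return (0, len(theList))
    control = len(theList) - 1
    for i in range(control):
        if maxcomplist[i:control] == theList[i:control]:
            return (i, len(theList) - i - 1)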
