I am trying to pass a function as an argument to another function, but the first function uses a variable that is only defined in the second function. I don't think this is good practice, but I guess I have programmed myself into a corner.
def in_func(n, p):
    print(p)
    print(f'Num {n}')

def out_func(func):
    n = 0
    while n < 10:
        func(n)
        n += 1

p = 8
out_func(in_func(n, p))
What is the best practice or solution to a problem such as this?
You are calling in_func and passing the result to out_func. Instead, you can e.g. define a lambda function that accepts only the n parameter while using p from the current scope...
p = 8
out_func(lambda n: in_func(n, p))
... or use functools.partial to the same effect:
from functools import partial
p = 8
out_func(partial(in_func, p=p))
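Put together, either variant runs end to end; this sketch just reuses the question's in_func/out_func definitions for a self-contained demonstration:

```python
from functools import partial

def in_func(n, p):
    print(p)
    print(f'Num {n}')

def out_func(func):
    n = 0
    while n < 10:
        func(n)   # func only takes n; p is already bound
        n += 1

p = 8
out_func(lambda n: in_func(n, p))  # the lambda closes over p
out_func(partial(in_func, p=p))    # partial binds p as a keyword argument
```

Both calls print the same twenty lines (p, then the counter), because each wrapper supplies p itself and leaves only n for out_func to fill in.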
There is no need to use n when passing in_func as an argument:
def in_func(n):
    print(f'Num {n}')

def out_func(func):
    n = 0
    while n < 10:
        func(n)
        n += 1

out_func(in_func)
Edit
Pretending the updated syntax works and does something meaningful (I had to adjust it to even run), I guess your only option would be to declare the n variable globally, and then return in_func from itself so that it can be called by out_func:
def in_func(n, p):
    print(p)
    print(f'Num {n}')
    return in_func

def out_func(func):
    global n
    n = 0
    while n < 10:
        func(n, p)
        n += 1

n = 10
p = 8
out_func(in_func(n, p))
In the end, maybe they don't need to be the same variable?
def in_func(n, p):
    print(p)
    print(f'Num {n}')
    return in_func

def out_func(func):
    n_b = 0
    while n_b < 10:
        func(n_b, p)
        n_b += 1

n_a = 10
p = 8
out_func(in_func(n_a, p))
I have two generators, A and B, of unknown length.
I want to know if B is a subsequence (contiguous) of A, so I do the following:
def subseq(A, B):
    b0 = next(B)
    for a in A:
        if a == b0:
            break
    else:  # no break
        # b0 not found in A, so B is definitely not a subseq of A
        return False
    # is the remainder of B a prefix of the remainder of A?
    return prefix(A, B)

def prefix(A, B):
    return all(a == b for a, b in zip(A, B))
However, prefix(A, B) is not exactly correct: if what remains of A is shorter than what remains of B, then I might get a false positive:
E.g. with A = 'abc' and B = 'abcd' (imagine they are generators), then return all(a == b for a, b in zip(A, B)) would return True.
But if I use zip_longest instead, then I have the complementary problem -- I would get false negatives:
E.g. with A = 'abcd' and B = 'abc', then return all(a == b for a, b in zip_longest(A, B)) would return False.
What's a sensible way to do this? Specifically, I want to zip to the length of the second argument. I basically want something like zip_(A, B, ziplengthselect=1)
where ziplengthselect=i tells the function that it should zip to the length of the ith argument.
Then the expression all(a == b for a, b in zip_(A, B, fillvalue=sentinel, ziplengthselect=1)) where sentinel is something not found in B, would have the following behavior. If the expression
reaches end of B, then it would evaluate to True
reaches end of A, then it would use the fillvalue, check sentinel == b, fail the check since sentinel was chosen to be something not found in B, and return False
fails an a == b check, then it would evaluate to False
I can think of solutions with try, except blocks, but was wondering if there's a better way.
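One way to get exactly the behavior described above is a small helper that always zips to the length of its second argument; zip_to_second is a name invented here to stand in for the hypothetical zip_(A, B, ziplengthselect=1):

```python
_sentinel = object()  # fill value guaranteed not to appear in B

def zip_to_second(A, B):
    # Yield one pair per element of B; if A runs out first,
    # pad A's side with the sentinel so the comparison fails.
    A = iter(A)
    for b in B:
        yield next(A, _sentinel), b

def prefix(A, B):
    # True iff B is a prefix of A
    return all(a == b for a, b in zip_to_second(A, B))
```

With this, prefix(iter('abc'), iter('abcd')) is False (A ran out, so the sentinel fails the comparison), while prefix(iter('abcd'), iter('abc')) is True -- exactly the asymmetry asked for.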
# Whether generator B is a prefix of generator A.
def prefix(A, B):
    for b in B:
        try:
            a = next(A)
            if a != b:
                return False
        except StopIteration:
            # reached end of A
            return False
    return True
OR
# Whether generator B is a prefix of generator A.
def prefix(A, B):
    if not all(a == b for a, b in zip(A, B)):
        return False
    try:
        next(B)
        # B still has elements, so A was exhausted first
        return False
    except StopIteration:
        # end of B was reached
        return True
The above code works when A has no duplicates. However if A has duplicates, then we have to tee the generators as follows:
from itertools import tee

def subseq(A, B):
    try:
        b0 = next(B)
    except StopIteration:
        return True
    while True:
        try:
            a = next(A)
            if a == b0:
                A, Acop = tee(A)
                B, Bcop = tee(B)
                if prefix(Acop, Bcop):
                    return True
                del Acop, Bcop
        except StopIteration:
            return False

def prefix(A, B):
    for b in B:
        try:
            a = next(A)
            if a != b:
                return False
        except StopIteration:
            # reached end of A
            return False
    return True
# Some tests
A = (i for i in range(10))
B = (i for i in range(5,8))
print(subseq(A, B)) # True
A = (i for i in range(10))
B = (i for i in range(5,11))
print(subseq(A, B)) # False
A = (i for i in [1,2,3]*10 + [1,2,3,4])
B = (i for i in [1,2,3])
print(subseq(A, B)) # True
A = (i for i in [1,1,2,1,1,2]*8 + [3])
B = (i for i in [1,1,2,3])
print(subseq(A, B)) # True
Here's how I solved the analogous subsequence problem for lists. Lists are easier because you can know their length:
def isSublist(lst, sublst):
    N, M = len(lst), len(sublst)
    if M == 0:
        return True
    starts = (i for i in range(N - M + 1) if lst[i] == sublst[0])
    for i in starts:
        # i <= N - M, so N - i >= M
        j = 0
        while j < M and lst[i] == sublst[j]:
            i += 1
            j += 1
        if j == M:
            return True
    return False
I might use deques (although this assumes B is finite):
from collections import deque
from itertools import islice

def subseq(A, B):
    B = deque(B)
    if not B:
        return True
    n = len(B)
    Asub = deque(islice(A, n - 1), n)
    for a in A:
        Asub.append(a)
        if Asub == B:
            return True
    return False
Might take more or less time/memory than yours. Depends on the input.
A note about yours: For an input like A = iter('a'+'b'*10**7), B = iter('ac') you waste a lot of memory (90 MB on 64-bit Python), since your Acop from the very beginning causes the underlying tee storage to never let go of anything. You'd better do del Acop, Bcop after an unsuccessful prefix check.
It’s possible to build KMP’s partial match table lazily.
from itertools import islice

def has_substring(sup, sub):
    sub = LazySequence(sub)
    if not sub:
        return True
    t = kmp_table(sub)
    k = 0
    for x in sup:
        while x != sub[k]:
            k = t[k]
            if k == -1:
                break
        if k == -1:
            k = 0
            continue
        k += 1
        try:
            sub[k]
        except IndexError:
            return True
    return False
class LazySequence:
    def __init__(self, iterator):
        self.consumed = []
        self.iterator = None if iterator is None else iter(iterator)

    def __getitem__(self, index):
        if index >= len(self.consumed):
            self.consumed.extend(islice(self.iterator, index - len(self.consumed) + 1))
        return self.consumed[index]

    def __iter__(self):
        consumed = self.consumed
        yield from consumed
        for x in self.iterator:
            consumed.append(x)
            yield x

    def __bool__(self):
        for _ in self:
            return True
        return False
def lazy_sequence(g):
    def wrap_generator(*args, **kwargs):
        ls = LazySequence(None)
        ls.iterator = g(ls.consumed, *args, **kwargs)
        return ls
    return wrap_generator

@lazy_sequence
def kmp_table(t, w):
    yield -1
    cnd = 0
    for x in islice(w, 1, None):
        if x == w[cnd]:
            yield t[cnd]
        else:
            yield cnd
            while cnd != -1 and x != w[cnd]:
                cnd = t[cnd]
        cnd += 1
This search is fast (asymptotically optimal time of O(|sub| + |sup|)) and doesn’t use unnecessary time/space when one generator is much longer than the other – including being able to return True when sup is infinite and being able to return False when sub is infinite.
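For intuition about what kmp_table produces, here is the same partial-match table built eagerly for a concrete word. This is the standard KMP construction (with the -1 convention used above); kmp_table_eager is an illustrative name, not part of the answer's code:

```python
def kmp_table_eager(w):
    # Standard KMP partial-match table: t[i] says where to fall
    # back to when a mismatch happens at position i; t[0] is -1.
    t = [-1] * len(w)
    cnd = 0
    for i in range(1, len(w)):
        if w[i] == w[cnd]:
            t[i] = t[cnd]
        else:
            t[i] = cnd
            while cnd != -1 and w[i] != w[cnd]:
                cnd = t[cnd]
        cnd += 1
    return t

print(kmp_table_eager("ABCDABD"))  # [-1, 0, 0, 0, -1, 0, 2]
```

The lazy generator yields exactly these values one at a time, reading already-yielded entries back through the LazySequence it is writing into.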
I started reading the book Systematic Program Design: From Clarity to Efficiency a few days ago. Chapter 4 talks about a systematic method to convert any recursive algorithm into its iterative counterpart. It seems to be a really powerful general method, but I'm struggling quite a lot to understand how it works.
After reading a few articles about recursion removal using custom stacks, it feels like this proposed method would produce a much more readable, optimized and compact output.
Recursive algorithms in Python where I want to apply the method
# NB: lcs and knap use implicit variables (i.e. defined globally),
# so they won't work directly

# n>=0
def fac(n):
    if n == 0:
        return 1
    else:
        return n * fac(n-1)

# n>=0
def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)

# k>=0, k<=n
def bin(n, k):
    if k == 0 or k == n:
        return 1
    else:
        return bin(n-1, k-1) + bin(n-1, k)

# i>=0, j>=0
def lcs(i, j):
    if i == 0 or j == 0:
        return 0
    elif x[i] == y[j]:
        return lcs(i-1, j-1) + 1
    else:
        return max(lcs(i, j-1), lcs(i-1, j))

# i>=0, u>=0, for all i in 0..n-1 w[i]>0
def knap(i, u):
    if i == 0 or u == 0:
        return 0
    elif w[i] > u:
        return knap(i-1, u)
    else:
        return max(v[i] + knap(i-1, u-w[i]), knap(i-1, u))

# i>=0, n>=0
def ack(i, n):
    if i == 0:
        return n + 1
    elif n == 0:
        return ack(i-1, 1)
    else:
        return ack(i-1, ack(i, n-1))
Step Iterate: Determine minimum increments, transform recursion into iteration
Section 4.2.1 of the book talks about determining the appropriate increment:
1) All possible recursive calls

fac(n)    => {fac(n-1)}
fib(n)    => {fib(n-1), fib(n-2)}
bin(n,k)  => {bin(n-1,k-1), bin(n-1,k)}
lcs(i,j)  => {lcs(i-1,j-1), lcs(i,j-1), lcs(i-1,j)}
knap(i,u) => {knap(i-1,u), knap(i-1,u-w[i])}
ack(i,n)  => {ack(i-1,1), ack(i-1,ack(i,n-1)), ack(i,n-1)}

2) Decrement operation

fac(n)    => n-1
fib(n)    => n-1
bin(n,k)  => [n-1,k]
lcs(i,j)  => [i-1,j]
knap(i,u) => [i-1,u]
ack(i,n)  => [i,n-1]

3) Minimum increment operation

fac(n)    => next(n) = n+1
fib(n)    => next(n) = n+1
bin(n,k)  => next(n,k) = [n+1,k]
lcs(i,j)  => next(i,j) = [i+1,j]
knap(i,u) => next(i,u) = [i+1,u]
ack(i,n)  => next(i,n) = [i,n+1]
Section 4.2.2 talks about forming the optimized program:
Recursive
---------
def fExtOpt(x):
    if base_cond(x) then fExt0(x)           -- Base case
    else let rExt := fExtOpt(prev(x)) in    -- Recursion
         fExt'(prev(x), rExt)               -- Incremental computation

Iterative
---------
def fExtOpt(x):
    if base_cond(x): return fExt0(x)        -- Base case
    x1 := init_arg; rExt := fExt0(x1)       -- Initialization
    while x1 != x:                          -- Iteration
        x1 := next(x1); rExt := fExt'(prev(x1), rExt)  -- Incremental comp
    return rExt
How do I create {fibExtOpt,binExtOpt,lcsExtOpt,knapExtOpt,ackExtOpt} in Python?
Additional material about this topic can be found in one of the papers by the method's main author, Professor Y. Annie Liu.
So, to restate the question. We have a function f, in our case fac.
def fac(n):
    if n == 0:
        return 1
    else:
        return n * fac(n-1)
It is implemented recursively. We want to implement a function facOpt that does the same thing but iteratively. fac is written almost in the form we need. Let us rewrite it just a bit:
def fac_(n, r):
    return (n+1) * r

def fac(n):
    if n == 0:
        return 1
    else:
        r = fac(n-1)
        return fac_(n-1, r)
This is exactly the recursive definition from section 4.2. Now we need to rewrite it iteratively:
def facOpt(n):
    if n == 0:
        return 1
    x = 1
    r = 1
    while x != n:
        x = x + 1
        r = fac_(x-1, r)
    return r
This is exactly the iterative definition from section 4.2. Note that facOpt does not call itself anywhere. Now, this is neither the most clear nor the most pythonic way of writing down this algorithm -- this is just a way to transform one algorithm to another. We can implement the same algorithm differently, e.g. like that:
def facOpt(n):
    r = 1
    for x in range(1, n+1):
        r *= x
    return r
Things get more interesting when we consider more complicated functions. Let us write fibOpt, where fib is:
def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)
fib calls itself two times, but the recursive pattern from the book allows only a single call. That is why we need to extend the function to return not one, but two values. Fully reformatted, fib looks like this:
def fibExt_(n, rExt):
    return rExt[0] + rExt[1], rExt[0]

def fibExt(n):
    if n == 0:
        return 0, 0
    elif n == 1:
        return 1, 0
    else:
        rExt = fibExt(n-1)
        return fibExt_(n-1, rExt)

def fib(n):
    return fibExt(n)[0]
You may notice that the first argument to fibExt_ is never used. I just added it to follow the proposed structure exactly.
Now, it is again easy to turn fib into an iterative version:
def fibExtOpt(n):
    if n == 0:
        return 0, 0
    if n == 1:
        return 1, 0
    x = 2
    rExt = 1, 1
    while x != n:
        x = x + 1
        rExt = fibExt_(x-1, rExt)
    return rExt

def fibOpt(n):
    return fibExtOpt(n)[0]
Again, the new version does not call itself. And again one can streamline it to this, for example:
def fibOpt(n):
    if n < 2:
        return n
    a, b = 1, 1
    for i in range(n-2):
        a, b = b, a + b
    return b
The next function to translate to an iterative version is bin:
def bin(n, k):
    if k == 0 or k == n:
        return 1
    else:
        return bin(n-1, k-1) + bin(n-1, k)
Now neither x nor r can be just numbers. The index (x) has two components, and the cache (r) has to be even larger. One (not quite so optimal) way would be to return the whole previous row of the Pascal triangle:
def binExt_(r):
    return [a + b for a, b in zip([0] + r, r + [0])]

def binExt(n):
    if n == 0:
        return [1]
    else:
        r = binExt(n-1)
        return binExt_(r)

def bin(n, k):
    return binExt(n)[k]
I haven't followed the pattern as strictly here and have removed several useless variables. It is still possible to translate to an iterative version directly:
def binExtOpt(n):
    if n == 0:
        return [1]
    x = 1
    r = [1, 1]
    while x != n:
        r = binExt_(r)
        x += 1
    return r

def binOpt(n, k):
    return binExtOpt(n)[k]
For completeness, here is an optimized solution that caches only part of the row:
def binExt_(n, k_from, k_to, r):
    if k_from == 0 and k_to == n:
        return [a + b for a, b in zip([0] + r, r + [0])]
    elif k_from == 0:
        return [a + b for a, b in zip([0] + r[:-1], r)]
    elif k_to == n:
        return [a + b for a, b in zip(r, r[1:] + [0])]
    else:
        return [a + b for a, b in zip(r[:-1], r[1:])]

def binExt(n, k_from, k_to):
    if n == 0:
        return [1]
    else:
        r = binExt(n-1, max(0, k_from-1), min(n-1, k_to+1))
        return binExt_(n, k_from, k_to, r)

def bin(n, k):
    return binExt(n, k, k)[0]
def binExtOpt(n, k_from, k_to):
    if n == 0:
        return [1]
    ks = [(k_from, k_to)]
    for i in range(1, n):
        ks.append((max(0, ks[-1][0]-1), min(n-i, ks[-1][1]+1)))
    x = 0
    r = [1]
    while x != n:
        x += 1
        r = binExt_(x, *ks[n-x], r)
    return r

def binOpt(n, k):
    return binExtOpt(n, k, k)[0]
In the end, the most difficult task is not switching from recursive to iterative implementation, but to have a recursive implementation that follows the required pattern. So the real question is how to create fibExt', not fibExtOpt.
So it's an exercise for Python and I am totally stuck! You have a random function on [a, b]; you already know that the function is negative at a and positive at b, and that it has only ONE root. The true root is -0.94564927392359, and you have to make a
def that will find the root (or zero) closest to the true root, with a difference of at most eps. The eps is 1e-8 or 1e-6. Note that we don't know the true root; the above was just an example to show what kind of number we are looking for. Also we are given the following:
import math

def fnc(x):
    """This is the function whose root we are looking for."""
    global a, b, eps
    if not hasattr(fnc, "counter"):
        fnc.counter = 0
        fnc.maxtimes = int(0.1 + math.ceil(math.log((b-a)/eps, 2.0) + 2))
    if fnc.counter < fnc.maxtimes:
        fnc.counter += 1
        return x*x*x - x - 0.1
    else:
        return 0.0
We have to start with this:

def root(f, a, b, eps):

(sorry for my English)
Just simple bisection:

def func(x):
    return x*x*x - x - 0.1

def sign(n):
    try:
        return n / abs(n)
    except ZeroDivisionError:
        return 0

def root(f, a, b, eps=1e-6):
    f_a = f(a)
    if abs(f_a) < eps:
        return a
    f_b = f(b)
    if abs(f_b) < eps:
        return b
    half = (b + a) / 2
    f_half = f(half)
    if sign(f_half) != sign(f_a):
        return root(f, a, half, eps)
    else:
        return root(f, half, b, eps)

print(root(func, -1.5, -0.5, 1e-8))  # -0.945649273694
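The same bisection can also be written without recursion, stopping on the interval width instead of on |f| (a sketch of an alternative, not part of the answer above; root_iter is an illustrative name):

```python
def root_iter(f, a, b, eps=1e-8):
    # Iterative bisection: assumes f(a) and f(b) have opposite signs.
    # Halve [a, b] until it is narrower than eps, keeping the half
    # whose endpoints still bracket the sign change.
    f_a = f(a)
    while abs(b - a) > eps:
        half = (a + b) / 2
        f_half = f(half)
        if f_half == 0:
            return half
        if (f_half < 0) == (f_a < 0):
            a, f_a = half, f_half   # root lies in [half, b]
        else:
            b = half                # root lies in [a, half]
    return (a + b) / 2
```

Stopping on the interval width guarantees the result is within eps of the true root no matter how steep f is near the root, whereas stopping on |f(x)| < eps can be loose for flat functions.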
See if the following heuristic, which iteratively slices the interval into two equal halves and then chooses the admissible half, is suitable for you.
import math

def root(fnc, a, b, eps=1e-8, maxtimes=None):
    if maxtimes is None:
        maxtimes = int(0.1 + math.ceil(math.log((b-a)/eps, 2.0) + 2))
    for counter in range(maxtimes + 1):  # a was assumed negative and b positive
        if fnc(a) > -eps:
            return a, -fnc(a)
        if fnc(b) < eps:
            return b, fnc(b)
        new_bound = (a + b) / 2.0
        print(a, b, new_bound)
        if fnc(new_bound) < 0:
            a = new_bound
        else:
            b = new_bound
    return new_bound, min(-fnc(a), fnc(b))

and then

fnc = lambda x: x**3 - x - 0.1
result = root(fnc, 0, 2, 1e-6)
print("root =", result[0], "error =", result[1])
Say I have a function
def equals_to(x, y):
    a + b = c

def some_function(something):
    for i in something:
        ...

Is there a way to use the c that was calculated by equals_to as a parameter for some_function, like this?

equals_to(1, 2)
some_function(c)
You need to return the value of c from the function.
def equals_to(x, y):
    c = x + y  # c = x + y, not a + b = c
    return c   # return the value of c

def some_function(something):
    for i in something:
        ...
    return

total = equals_to(1, 2)  # set total to the return value from the function
some_function([total])   # pass total to some_function (in a list, since
                         # some_function iterates over its argument)
Also, the function signature of equals_to takes the arguments x, y, but inside the function you use a, b, and your assignment was the wrong way round: c takes the value of x + y, not a + b equals c.
Strongly recommend: http://docs.python.org/2/tutorial/