I'm trying to approximate Julia sets using roots of polynomials in Python. In particular I want to find the roots of the nth iterate of the polynomial q(z) = z^2-0.5. In other words I want to find the roots of q(q(q(..))) composed n times. In order to get many sample points I would need to compute the roots of polynomials of degree >1000.
I've tried solving this problem using both the built-in polynomial class of numpy, which has a roots method, and sympy's solve function. In the first case precision is lost when I choose degrees larger than 100, and the sympy computation simply takes too long. Here is my code:
import numpy as np
import matplotlib.pyplot as plt
from numpy.polynomial import Polynomial as P

p = P([-0.5, 0, 1])        # q(z) = z^2 - 0.5, coefficients in ascending order
for k in range(9):
    p = p**2 - 0.5         # compose q with itself
roots = p.roots()
plt.plot([np.real(r) for r in roots], [np.imag(r) for r in roots], 'x')
plt.show()
abs_vector = [np.abs(p(r)) for r in roots]
max_residual = 0
for a in abs_vector:
    if a > max_residual:
        max_residual = a
print(max_residual)
The max value above gives the largest value of p at a supposed root. However running this code gives me 7.881370400084486e+296 which is very large.
How would one go about computing roots of high degree polynomials with good accuracy in a short amount of time?
For the n-fold composition of a polynomial q you can reconstruct the roots iteratively, by pulling the root set back through q one composition at a time:
import numpy as np

q = [1, 0, -0.5]    # coefficients of q(z) = z^2 - 0.5, highest power first
n = 9

def q_preimage(w):
    # roots of q(z) = w, i.e. the preimages of w under q
    c = q.copy()
    c[-1] -= w
    return np.roots(c)

rts = [0]
for k in range(n):
    rts = np.concatenate([q_preimage(w) for w in rts])
which returns
array([ 1.36444432e+00+0.00095319j, -1.36444432e+00-0.00095319j,
1.40104860e-03-0.92828301j, -1.40104860e-03+0.92828301j,
8.82183775e-01-0.52384727j, -8.82183775e-01+0.52384727j,
8.78972436e-01+0.52576116j, -8.78972436e-01-0.52576116j,
1.19545693e+00-0.21647154j, -1.19545693e+00+0.21647154j,
3.61362916e-01+0.71612883j, -3.61362916e-01-0.71612883j,
1.19225541e+00+0.21925381j, -1.19225541e+00-0.21925381j,
3.66786415e-01-0.71269419j, -3.66786415e-01+0.71269419j,
...
or plotted
plt.plot(rts.real, rts.imag,'ob', ms=2); plt.grid(); plt.show()
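To sanity-check the reconstructed roots without touching the ill-conditioned expanded polynomial, one can apply q itself n times and inspect the residuals; a minimal sketch reusing rts and n from above (the residual should be tiny, in contrast to the 1e+296 figure obtained from the expanded form):
vals = np.asarray(rts, dtype=complex)
for _ in range(n):
    vals = vals**2 - 0.5          # apply q once
print(np.abs(vals).max())         # largest residual |q^n(root)| over all reconstructed roots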
I need to find the k in the range [1, 10] which is the least positive integer such that binomial(k, 2) ≥ m, where m ≥ 3 is an integer. binomial() is the binomial coefficient.
My attempt is:
After some algebraic steps I reduced it to a minimization task: find the smallest k with k(k-1) - 2m ≥ 0, subject to m ≥ 3. I have defined the objective function and its gradient. In the objective function I fixed m = 3; my problem is how to restrict the search variable to an integer domain.
from numpy import arange
from scipy.optimize import line_search

# objective function
def objective(k):
    m = 3
    return k*(k-1) - 2*m

# gradient for the objective function
def gradient(k):
    return 2.0*k - 1

# define range
r_min, r_max = 1, 11
# prepare inputs
inputs = arange(r_min, r_max, 1)
# compute targets
targets = [objective(k) for k in inputs]
# define the starting point
point = 1.0
# define the direction to move
direction = 1.0
# print the initial conditions
print('start=%.1f, direction=%.1f' % (point, direction))
# perform the line search
result = line_search(objective, gradient, point, direction)
print(result)
I get the following warning:
LineSearchWarning: The line search algorithm did not converge
Question. How to define the objective function in Python?
You are looking to minimise k such that k(k-1) - 2m ≥ 0, with additional constraints on k to which we'll come back later. You can solve this inequality explicitly by first solving the corresponding equation, that is, by finding the roots of P := X² - X - 2m. The quadratic formula gives the roots (1 ± √(1+8m))/2. Since P(x) → ∞ as x → ±∞, the x that satisfy your inequality are the ones above the greatest root or below the smallest root. Since you are only interested in positive solutions, and since 1 - √(1+8m) < 0, the set of wanted solutions is [(1+√(1+8m))/2, ∞). Among these solutions, the smallest integer is the ceiling of (1+√(1+8m))/2, which is strictly greater than 1. Let k = ceil((1+sqrt(1+8*m))/2) be that integer. If k ≤ 10, then your problem has a solution, namely k; otherwise it has no solution. In Python, you get the following:
import math

def solve(m):
    k = math.ceil((1 + math.sqrt(1 + 8*m)) / 2)
    return k if k <= 10 else None
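As a quick sanity check of the closed form, it can be compared against a brute-force search over k using math.comb (available in Python 3.8+); this check is only a sketch, not part of the original answer:
def solve_bruteforce(m):
    for k in range(1, 11):
        if math.comb(k, 2) >= m:
            return k
    return None

for m in range(3, 60):
    assert solve(m) == solve_bruteforce(m)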
I'm trying to minimize the following function:
with respect to the parameters H and alpha using the brute force method, specifically the scipy.optimize.brute algorithm. The problem is that I don't know how to deal with an unknown number of variables: there are 2n variables, and n is an input of the program.
I have the following code, where I'd like the minimization to return arrays of H and alpha values:
import numpy as np
#Entries:
gamma = 17.0
C = 70.0
T = 1
R = 0.5
n = int(2)
def F(mins, gamma, C, T, R):
    H, alpha = mins
    ret = 0
    for i in range(n):
        inner_sum = 0
        for j in range(i+1):
            inner_sum += H[j]*np.tan(alpha[j])
        ret += 3*gamma*H[i]*(R + inner_sum)**2
    return ret
The idea is that I can get the values of H and alpha from their positions in the array. I have used multivariable brute-force minimization before, but only with a fixed number of variables. How can I proceed in this case?
P.S.: I know that the minimization of the above expression will lead to 0 for both variables. This is just a small piece of a bigger expression, used to illustrate the problem, so a working approach would be very helpful. Thanks in advance!
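A minimal sketch of one common way to handle this, assuming the 2n parameters are packed into a single flat vector that the objective splits back into H and alpha (it reuses n, gamma, C, T, R and np from the snippet above; the (0, 1) bounds and the grid resolution Ns=5 are placeholder choices):
from scipy.optimize import brute

def F_flat(params, gamma, C, T, R):
    H, alpha = params[:n], params[n:]   # first n entries are H, last n are alpha
    ret = 0.0
    for i in range(n):
        inner_sum = 0.0
        for j in range(i+1):
            inner_sum += H[j]*np.tan(alpha[j])
        ret += 3*gamma*H[i]*(R + inner_sum)**2
    return ret

ranges = tuple((0.0, 1.0) for _ in range(2*n))   # one (min, max) pair per variable
res = brute(F_flat, ranges, args=(gamma, C, T, R), Ns=5, finish=None)
H_opt, alpha_opt = res[:n], res[n:]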
I have to calculate the exponential of the following array for my project:
w = [-1.52820754859, -0.000234000845064, -0.00527938881237, 5797.19232191, -6.64682108484,
18924.7087966, -69.308158911, 1.1158892974, 1.04454511882, 116.795573742]
But I've been getting overflow due to the number 18924.7087966.
The goal is to avoid using extra packages such as bigfloat (except "numpy") and get a close result (which has a small relative error).
So far I've tried using higher precision (i.e. np.float128):
def getlogZ_robust(w):
    Z = sum(np.exp(np.dot(x, w).astype(np.float128)) for x in iter_all_observations())
    return np.log(Z)
But I still get "inf" which is what I want to avoid.
I've tried clipping it using np.clip():
def getlogZ_robust(w):
    Z = sum(np.exp(np.clip(np.dot(x, w).astype(np.float128), -11000, 11000)) for x in iter_all_observations())
    return np.log(Z)
But the relative error is too big.
Can you help me solve this problem, if it is possible?
Only significantly extended or arbitrary precision packages will be able to handle the huge differences between these numbers. The exponentials of the largest and most negative numbers in w differ by about 8000 (!) orders of magnitude. float (i.e. double precision) has 'only' about 15 digits of precision (meaning 1 + 1e-16 is numerically equal to 1), so adding the exponentials of the small numbers to the huge exponential of the largest number has no effect. As a matter of fact, exp(18924.7087966) is so huge that it dominates the sum. Below is a script performing the sum with extended precision in mpmath: the ratio of exp(18924.7087966) to the sum of all exponentials is essentially 1.
w = [-1.52820754859, -0.000234000845064, -0.00527938881237, 5797.19232191, -6.64682108484,
18924.7087966, -69.308158911, 1.1158892974, 1.04454511882, 116.795573742]
u = min(w)
v = max(w)
import mpmath
#using plenty of precision
mpmath.mp.dps = 32768
print('%.5e' % mpmath.log10(mpmath.exp(v)/mpmath.exp(u)))
#exp(w) differs by 8000 orders of magnitude for largest and smallest number
s = sum([mpmath.exp(mpmath.mpf(x)) for x in w])
print('%.5e' % (mpmath.exp(v)/s))
#largest exp(w) dominates such that ratio over the sums of exp(w) and exp(max(w)) is approx. 1
If losing digits in the final result due to the hugely different orders of magnitude of the added terms is not a concern, one can also transform the log of a sum of exponentials mathematically as follows, avoiding exp of large numbers:
log(sum(exp(w)))
= log(sum(exp(w-wmax)*exp(wmax)))
= wmax + log(sum(exp(w-wmax)))
In Python:
import numpy as np
v = np.array(w)
m = np.max(v)
print(m + np.log(np.sum(np.exp(v-m))))
Note that np.log(np.sum(np.exp(v-m))) is numerically zero as the exponential of the largest number completely dominates the sum here.
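For completeness: if depending on SciPy in addition to NumPy is acceptable (the question asks to stay close to NumPy only), the same shift trick is already packaged as scipy.special.logsumexp:
from scipy.special import logsumexp

print(logsumexp(np.array(w)))   # internally shifts by the maximum, as above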
Numpy has a function called logaddexp which computes
logaddexp(x1, x2) == log(exp(x1) + exp(x2))
without explicitly computing the intermediate exp() values. This way it avoids the overflow. So here is the solution:
def getlogZ_robust(w):
    # start the accumulator at log(0) = -inf so that no spurious exp(0) term enters the sum
    Z = -np.inf
    for x in iter_all_observations():
        Z = np.logaddexp(Z, np.dot(x, w))
    return Z
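If all the dot products fit in memory, the same accumulation can be written with the ufunc's reduce method; a small sketch (the helper name getlogZ_reduce is just for illustration, and iter_all_observations() is assumed to be the asker's generator):
def getlogZ_reduce(w):
    dots = np.array([np.dot(x, w) for x in iter_all_observations()])
    return np.logaddexp.reduce(dots)   # log(sum(exp(dots))) without overflow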
I'm using this code to get the zeros of a nonlinear function.
The function should certainly have either 1 or 3 zeros.
import numpy as np
import matplotlib.pylab as plt
from scipy.optimize import fsolve

[a, b, c] = [5, 10, 0]

def func(x):
    return -(x+a) + b / (1 + np.exp(-(x + c)))

x = np.linspace(-10, 10, 1000)
print(fsolve(func, [-10, 0, 10]))
plt.plot(x, func(x))
plt.show()
In this case the code give the 3 expected roots without any problem.
But with c = -1.5 the code misses a root, and with c = -3 it finds a non-existent root.
I want to calculate the roots for many different parameter combinations, so adjusting the initial guesses manually is not a practical solution.
I appreciate any solution, trick or advice.
What you need is an automatic way to obtain good initial estimates of the roots of the function. In general this is a difficult task; however, for univariate, continuous functions it is rather simple. The idea is to note that (a) this class of functions can be approximated to arbitrary precision by a polynomial of sufficiently high order, and (b) there are efficient algorithms for finding (all) the roots of a polynomial. Fortunately, Numpy provides functions for both performing the polynomial approximation and finding the polynomial roots.
Let's consider a specific function
[a, b, c] = [5, 10, -1.5]

def func(x):
    return -(x+a) + b / (1 + np.exp(-(x + c)))
The following code uses polyfit and poly1d to approximate func over the range of interest (-10<x<10) by a polynomial function f_poly of order 10.
x_range = np.linspace(-10,10,100)
y_range = func(x_range)
pfit = np.polyfit(x_range,y_range,10)
f_poly = np.poly1d(pfit)
As the following plot shows, f_poly is indeed a good approximation of func. Even greater accuracy can be obtained by increasing the order. However, there is no point in pursuing extreme accuracy in the polynomial approximation, since we are only looking for approximate estimates of the roots that will later be refined by fsolve.
The roots of the polynomial approximation can be simply obtained as
roots = np.roots(pfit)
roots
array([-10.4551+1.4893j, -10.4551-1.4893j, 11.0027+0.j ,
8.6679+2.482j , 8.6679-2.482j , -5.7568+3.2928j,
-5.7568-3.2928j, -4.9269+0.j , 4.7486+0.j , 2.9158+0.j ])
As expected, Numpy returns 10 complex roots. However, we are only interested in real roots within the interval [-10, 10]. These can be extracted as follows:
x0 = roots[np.where(np.logical_and(np.logical_and(roots.imag==0, roots.real>-10), roots.real<10))].real
x0
array([-4.9269, 4.7486, 2.9158])
Array x0 can serve as the initialization for fsolve:
fsolve(func, x0)
array([-4.9848, 4.5462, 2.7192])
Remark: The pychebfun package provides a function that directly gives all the roots of a function within an interval. It is also based on the idea of polynomial approximation; however, it uses a more sophisticated (yet more efficient) approach. It automatically chooses the polynomial order of the approximation (no user input), and the polynomial roots are practically equal to the true ones (no need to refine them via fsolve).
This simple code gives the same roots as those by fsolve
import pychebfun
f_cheb = pychebfun.Chebfun.from_function(func, domain = (-10,10))
f_cheb.roots()
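Since the question mentions scanning many parameter combinations, the polyfit-then-fsolve recipe above can be wrapped in a small helper; this is only a sketch (the name find_roots, the fixed fitting window and the degree 10 are illustrative choices, reusing the np and fsolve imports from above):
def find_roots(a, b, c, x_min=-10, x_max=10, deg=10):
    f = lambda x: -(x + a) + b / (1 + np.exp(-(x + c)))
    xs = np.linspace(x_min, x_max, 100)
    pfit = np.polyfit(xs, f(xs), deg)
    r = np.roots(pfit)
    # keep the real roots inside the window as initial guesses for fsolve
    x0 = r[(r.imag == 0) & (r.real > x_min) & (r.real < x_max)].real
    return fsolve(f, x0) if len(x0) else np.array([])

print(find_roots(5, 10, -1.5))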
Between two stationary points (i.e., df/dx=0), you have one or zero roots. In your case it is possible to calculate the two stationary points analytically:
[-c + log(1/(b - sqrt(b*(b - 4)) - 2)) + log(2),
-c + log(1/(b + sqrt(b*(b - 4)) - 2)) + log(2)]
So you have three intervals in which to look for a zero. Using Sympy saves you from doing the calculations by hand. Its sy.nsolve() allows you to robustly find a zero within an interval:
import sympy as sy
a, b, c, x = sy.symbols("a, b, c, x", real=True)
# The function:
f = -(x+a) + b / (1 + sy.exp(-(x + c)))
df = f.diff(x) # calculate f' = df/dx
xxs = sy.solve(df, x) # Solving for f' = 0 gives two solutions
# numerical values:
pp = {a: 5, b: 10, c: .5} # values for a, b, c
fpp = f.subs(pp)
xxs_pp = [xpr.subs(pp).evalf() for xpr in xxs] # numerical stationary points
xxs_pp.sort() # in ascending order
# resulting intervals:
xx_low = [-1e9, xxs_pp[0], xxs_pp[1]]
xx_hig = [xxs_pp[0], xxs_pp[1], 1e9]
# calculate roots for each interval:
xx0 = []
for xl_, xh_ in zip(xx_low, xx_hig):
    try:
        x0 = sy.nsolve(fpp, (xl_, xh_), solver="bisect")  # calculate zero
    except ValueError:  # no solution found
        continue
    xx0.append(x0)
print("The zeros are:")
print(xx0)
sy.plot(fpp) # plot function
I'm struggling with a numerical precision issue in calculating the beta binomial likelihood. My goal is to estimate the probability y mod 10 = d for some digit d, given that y is Binomial(n,p) and p is Beta(a,b). I'm trying to come up with a fast solution for large n, by which I mean at least 1000. One thing that seems to be giving me reasonable answers is to use simulation.
import numpy as np
from scipy.stats import beta, binom

def npliketest_exact(n, digit, a, b):
    # draw 1000 values of p
    probs = np.array(beta.rvs(a, b, size=1000))
    # create an array of numbers whose last digit is digit
    digits = np.arange(digit, n+1, 10)
    # create a function that calculates the pmf at x given p
    exact_func = lambda x, p: binom(n, p).pmf(x)
    # given p, the likelihood of last digit "digit" is the sum over all entries in digits
    likelihood = lambda p: exact_func(digits, p).sum()
    # return the average of that likelihood over all the draws
    return np.vectorize(likelihood)(probs).mean()
np.random.seed(1)
print(npliketest_exact(1000, 9, 1, 1))  # 0.0992310195195
This might be ok, but I'm worried about the precision of this strategy. In particular if there's a better/more precise way to do this calculation I'm eager to figure out how to do it.
I've started trying to use the log likelihood to come up with that answer, but I'm running into numerical stability issues even with that.
from numpy import exp
from scipy.special import gammaln

def llike(n, k, a, b):
    out = gammaln(n+1) + gammaln(k+a) + gammaln(n-k+b) + gammaln(a+b) - \
          (gammaln(k+1) + gammaln(n-k+1) + gammaln(a) + gammaln(b) + gammaln(n+a+b))
    return out

print(exp(llike(1000, 9, 1, 1)))    # 0.000999000999001
print(exp(llike(1000, 500, 1, 1)))  # 0.000999000999001
Since the beta 1,1 has mean 0.5, the probability of getting y=500 from a beta binomial with n=1000 should be much higher than getting 9, but the above calculations show a suspicious constant value.
Another thing I tried, which was suggested elsewhere on stackoverflow to deal with this problem, was to use some clever tricks to support numerical stability that are apparently hidden in scipy's betaln formula.
from numpy import log
from scipy.special import betaln

def binomln(n, k):
    # log of the binomial coefficient; assumes binom(n, k) >= 0
    return -betaln(1 + n - k, 1 + k) - log(n + 1)

def log_betabinom_exact(n, k, a, b):
    return binomln(n, k) + betaln(k+a, n-k+b) - betaln(a, b)

print(exp(log_betabinom_exact(1000, 9, 1, 1)))    # 0.000999000999001
print(exp(log_betabinom_exact(1000, 500, 1, 1)))  # 0.000999000999001
Again, same suspicious constants. Would appreciate any advice. Would using sympy be of any help on this?
**** Followup
Sorry guys, dumb mistake on my part: Beta(1,1) is uniform, so the results I was getting make sense. Trying different parameters makes things look different for different values of k.
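For the record, the "suspicious constant" is exactly what the uniform prior predicts: with a = b = 1 the beta-binomial pmf equals 1/(n+1) for every k, and 1/1001 = 0.000999000999..., which matches the printed values. A quick check reusing the functions above:
print(1.0 / 1001)                               # 0.000999000999001
print(exp(log_betabinom_exact(1000, 9, 1, 1)))  # same value, as expected for a = b = 1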