Find two zeros of a function with Python

I have a function f(x) which I know has two zeros within an interval, and I need to compute both x values at which the function crosses 0.
I usually use
import scipy.optimize as opt
opt.brentq(f, xmin, xmax)
But the problem is that this method only works if the function has a single zero in the interval, and it is not obvious where to split the interval in two.
The function is also expensive to evaluate...

I think a good approach would be to pre-process the search by sampling f over the interval before calling the root finder. During that pre-processing, you evaluate f to detect where its sign changes.
def preprocess(f, xmin, xmax, step):
    first_sign = f(xmin) > 0  # True if f(xmin) > 0, otherwise False
    x = xmin + step
    while x <= xmax:  # This loop detects where the function changes its sign
        fstep = f(x)
        if first_sign and fstep < 0:
            return x
        elif not first_sign and fstep > 0:
            return x
        x += step
    return x  # If you ever reach here, no sign change (and hence no bracketed zero) was found in the interval
With this function, you can split your initial interval into several smaller intervals. For example:
import scipy.optimize as opt

step = ...
xmid = preprocess(f, xmin, xmax, step)
z0 = opt.brentq(f, xmin, xmid)
z1 = opt.brentq(f, xmid, xmax)
Depending on the function f, you may need to split your interval into more than two sub-intervals. Just iterate through [xmin, xmax] like this:
x_list = []
x = xmin
while x < xmax:  # This discovers where f changes its sign
    x_list.append(x)
    x = preprocess(f, x, xmax, step)
x_list.append(xmax)

z_list = []
for i in range(len(x_list) - 1):
    z_list.append(opt.brentq(f, x_list[i], x_list[i + 1]))
In the end, z_list contains all the zeros in the given interval [xmin,xmax].
Keep in mind that this algorithm is time-consuming but will do the job.
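For instance, here is how the whole pipeline fits together, reusing the preprocess function above (an illustrative sketch with a made-up function; the step size is an assumption you would tune to your own f, and a sign check is added so that sub-intervals without a sign change are skipped instead of being passed to brentq):

import scipy.optimize as opt

def f(x):
    return (x - 1.0) * (x - 3.0)   # hypothetical example function with zeros at 1 and 3

xmin, xmax, step = 0.0, 4.0, 0.1   # step must be finer than the spacing between the zeros

x_list = []
x = xmin
while x < xmax:
    x_list.append(x)
    x = preprocess(f, x, xmax, step)
x_list.append(xmax)

z_list = []
for a, b in zip(x_list[:-1], x_list[1:]):
    if f(a) * f(b) < 0:            # only bracketed sub-intervals go to brentq
        z_list.append(opt.brentq(f, a, b))

print(z_list)                      # approximately [1.0, 3.0]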

While loop for Python odeint solver?

I have a mathematical model of differential equations that begins as linear and then applies correction coefficients after reaching a certain value (1).
Currently, I solve the linear function independently, find out where the array goes from less than 1 to greater than 1, and then use that value from the array as the new initial condition. I also correct the time scale.
import numpy as np
from scipy.integrate import odeint

def vttmodel_linear(m, t, tm, tv, M_max):
    n = 1/(7*tm)
    dMdt = n
    return dMdt

M_0 = 0
M_max = 1 + 7*((RH_crit-RH)/(RH_crit-100)) - 2*np.square((RH_crit-RH)/(RH_crit-100))
print(M_max)

# tm = days
# M = weeks, so 7*tm
t = np.arange(0, 104+1)
tm = np.exp(-0.68*np.log(T) - 13.9*np.log(RH) + 0.14*W - 0.33*SQ + 66.02)
tv = np.exp(-0.74*np.log(T) - 12.72*np.log(RH) + 0.06*W + 61.50)

m = odeint(vttmodel_linear, M_0, t, args=(tm, tv, M_max))
M_0 = m[(np.where(m > 1)[0][0]) - 1]
t = np.where(m > 1)[0]
Then I use the new initial condition, M_0, and the updated time scale to solve the non-linear portion of the model.
def vttmodel(M, t, tm, tv, M_max):
    n = 1/(7*tm)
    k1 = 2/((tv/tm)-1)
    k2 = np.max([1 - np.exp(2.3*(M - M_max)), 0])
    dMdt = n*k1*k2
    return dMdt

M = odeint(vttmodel, M_0, t, args=(tm, tv, M_max))
I then splice the arrays m and M at the location I found earlier and graph the result.
I would like to find a simplified way to do this. I have tried using if statements within the odeint function and also a while loop when calling the two functions, but have not had any luck interrupting the odeint solver. Suggestions would be helpful. Thank you.
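One way to avoid the two-pass solve-and-splice approach (this is not from the original thread, just a sketch of a standard SciPy feature) is scipy.integrate.solve_ivp with a terminal event: the event stops the linear integration exactly where M crosses 1, and the non-linear model is then continued from that state. The names linear_rhs, nonlinear_rhs and crosses_one are hypothetical, and tm, tv, M_max are assumed to be computed as in the question.

import numpy as np
from scipy.integrate import solve_ivp

# note: solve_ivp expects fun(t, M, ...), the opposite argument order from odeint
def linear_rhs(t, M, tm, tv, M_max):
    return [1/(7*tm)]

def crosses_one(t, M, tm, tv, M_max):
    return M[0] - 1.0            # zero exactly when M reaches 1
crosses_one.terminal = True       # stop the integration at that point
crosses_one.direction = 1         # only trigger on upward crossings

sol1 = solve_ivp(linear_rhs, (0, 104), [0.0], args=(tm, tv, M_max),
                 events=crosses_one)

t_switch = sol1.t_events[0][0]    # time at which M crossed 1
M_switch = sol1.y_events[0][0]    # state at that time (close to 1)

def nonlinear_rhs(t, M, tm, tv, M_max):
    n = 1/(7*tm)
    k1 = 2/((tv/tm) - 1)
    k2 = max(1 - np.exp(2.3*(M[0] - M_max)), 0)
    return [n*k1*k2]

sol2 = solve_ivp(nonlinear_rhs, (t_switch, 104), M_switch,
                 args=(tm, tv, M_max))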

How to make a condition to terminate appending?

I am writing a code to plot several projectile trajectories of various theta values in Python.
theta = np.arange(np.pi/6, np.pi/3)
t = np.linspace(0,2,num=100)
while y0>=0:
    for i in theta:
        x = []
        y = []
        for k in t:
            x0= v_0*np.cos(i)*k
            y0= v_0*np.sin(i)*k - 1/2*g*(k**2)
            x.append(x0)
            x.append(y0)
After forming the arrays and putting in the necessary conditions for the projectile, I used a while loop to add the terminating instruction to the program. I think I am missing a crucial point. Thanks!
I think you want your terminating condition inside your inner-most loop. See below, where I also defined a couple of missing constants (v_0, g), fixed one x to y, and added printing of the results.
import numpy as np

theta = np.arange(np.pi/6, np.pi/3)
t = np.linspace(0, 2, num=100)
v_0 = 1
g = 10

for i in theta:
    x = []
    y = []
    for k in t:
        x0 = v_0*np.cos(i)*k
        y0 = v_0*np.sin(i)*k - 1/2*g*(k**2)
        x.append(x0)
        y.append(y0)
        if y0 < 0:  # the main change: stop looping when y0 drops below zero
            break
    print(f'theta:{i}')
    print(f'x:{x}')
    print(f'y:{y}')
produces
theta:0.5235987755982988
x:[0.0, 0.017495462702715934, 0.03499092540543187, 0.052486388108147805, 0.06998185081086374, 0.08747731351357968]
y:[0.0, 0.008060401999795939, 0.012039587797163551, 0.011937557392102841, 0.007754310784613805, -0.0005101520253035577]
Plotting it (y vs x) looks reasonable.
It is also worth noting that your definition theta = np.arange(np.pi/6, np.pi/3) looks rather strange: with the default step of 1.0 it yields only the single value pi/6. What are you trying to achieve here?
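If the intent was several launch angles between pi/6 and pi/3, np.linspace with an explicit count is probably what was meant (this is an assumption about the goal, not something stated in the question):

import numpy as np

theta = np.linspace(np.pi/6, np.pi/3, num=5)   # five evenly spaced angles, end points included
print(theta)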

FFT Algorithm Bug

I am attempting to implement the FFT algorithm in Python and hitting a bug that I can't seem to figure out.
Here is my code:
import math
import numpy as np

def FFT(co, inverse=False):
    if len(co) <= 1:
        return co
    even = FFT(co[::2], inverse)
    odd = FFT(co[1::2], inverse)
    y = [0] * len(co)
    z = -1 if inverse else 1
    for k in range(0, len(co)//2):
        w = np.round(math.e**(z*1j*math.pi*(2*k / len(co))), decimals=10)
        y[k] = even[k] + w*odd[k]
        y[k + len(co)//2] = even[k] - w*odd[k]
    return y
when I run
x1 = FFT([1, 1, 2, 0])
print x1
print np.fft.fft([1, 1, 2, 0])
I get:
[(4+0j), (-1+1j), (2+0j), (-1-1j)]
[ 4.+0.j -1.-1.j 2.+0.j -1.+1.j]
So for index 1 and index 3, it's the complex conjugate. Any ideas?
The definition of the forward Discrete Fourier Transform used by np.fft.fft is (in NumPy's convention):
A_k = sum_{m=0}^{n-1} a_m * exp(-2j*pi*m*k/n),    k = 0, ..., n-1
You should notice the negative sign in the argument to the complex exponential.
In your implementation, on the other hand, you are using a positive sign for the forward transform, and such an inversion of the sign of the argument to the complex exponential results in conjugating the frequency spectrum.
So, for your implementation to yield the same results as np.fft.fft you simply have to invert the sign of the forward and backward transforms with:
z = +1 if inverse else -1
(instead of z = -1 if inverse else 1).
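As a quick sanity check, here is the function from the question with only that sign flipped, compared against NumPy (a minimal sketch):

import math
import numpy as np

def FFT(co, inverse=False):
    if len(co) <= 1:
        return co
    even = FFT(co[::2], inverse)
    odd = FFT(co[1::2], inverse)
    y = [0] * len(co)
    z = +1 if inverse else -1                  # corrected sign convention
    for k in range(len(co)//2):
        w = np.round(math.e**(z*1j*math.pi*(2*k / len(co))), decimals=10)
        y[k] = even[k] + w*odd[k]
        y[k + len(co)//2] = even[k] - w*odd[k]
    return y

print(np.allclose(FFT([1, 1, 2, 0]), np.fft.fft([1, 1, 2, 0])))   # True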

Finite difference approximations in python

I am trying to calculate the derivative of a function at x = 0, but I keep getting odd answers with every function I have tried. For example, with f(x) = x**2 I get the derivative to be 2 at all points. My finite difference coefficients are correct; the scheme is second-order accurate for the second derivative with respect to x.
from numpy import *
from matplotlib.pyplot import *

def f1(x):
    return x**2

n = 100                       # grid points
x = zeros(n+1, dtype=float)   # array to store values of x
step = 0.02/float(n)          # step size
f = zeros(n+1, dtype=float)   # array to store values of f
df = zeros(n+1, dtype=float)  # array to store values of the calculated derivative

for i in range(0, n+1):       # adds values to arrays for x and f(x)
    x[i] = -0.01 + float(i)*step
    f[i] = f1(x[i])

# have to calculate end points separately using a one-sided form
df[0] = (f[2]-2*f[1]+f[0])/step**2
df[1] = (f[3]-2*f[2]+f[1])/step**2
df[n-1] = (f[n-1]-2*f[n-2]+f[n-3])/step**2
df[n] = (f[n]-2*f[n-1]+f[n-2])/step**2

for i in range(2, n-1):       # add values to array for derivative
    df[i] = (f[i+1]-2*f[i]+f[i-1])/step**2

print df                      # returns an array full of 2...
The second derivative of x^2 is the constant 2, and you are using the central difference quotient for the second derivative, as you can also see from the square in the denominator. Your result is absolutely correct; your code does exactly what you told it to do.
To get the first derivative with a symmetric difference quotient, use
df[i] = ( f[i+1] - f[i-1] ) / ( 2*step )
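Applied to the arrays from the question, that looks something like this (a sketch; the one-sided first-order differences at the two end points are my assumption about what you want there):

for i in range(1, n):                       # central difference at the interior points
    df[i] = (f[i+1] - f[i-1]) / (2*step)

df[0] = (f[1] - f[0]) / step                # one-sided differences at the end points
df[n] = (f[n] - f[n-1]) / step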
The first-order derivative at a point x of a function f (here for the case f1(x) = x^2) can be approximated with a forward difference quotient:
def f1(x):
    return x**2

def derivative(f, x, step=0.0000000000001):
    return (f(x + step) - f(x))/step
hope that helps
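A quick usage example (with an explicit, more moderate step, since a step as small as 1e-13 tends to amplify floating-point round-off):

print(derivative(f1, 3.0, step=1e-6))       # about 6.000001, close to the exact derivative 6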

Make a number more probable to result from random

I'm using x = numpy.random.rand(1) to generate a random number between 0 and 1. How do I make it so that x > .5 is 2 times more probable than x < .5?
Just do a little manipulation of the inputs. First draw x uniformly from the range 0 to 1.5:
import numpy
x = numpy.random.uniform(0.0, 1.5)
x has a 2/3 chance of being greater than 0.5 and a 1/3 chance of being smaller. Then, if x is greater than or equal to 1.0, subtract 0.5 from it:
if x >= 1.0:
    x = x - 0.5
This is overkill for you, but it's good to know an actual method for generating a random number with any probability density function (pdf).
You can do that by subclassing scipy.stats.rv_continuous, provided you do it correctly. You have to supply a normalized pdf (one whose integral is 1), otherwise the sampling will not behave correctly. In this case, your pdf has a value of 2/3 for x < 0.5 and 4/3 for 0.5 <= x < 1, with a support of [0, 1) (the support is the interval over which it is nonzero):
import scipy.stats as spst
import numpy as np
import matplotlib.pyplot as plt

def pdf_shape(x, k):
    if x < 0.5:
        return 2/3.
    elif 0.5 <= x < 1:
        return 4/3.
    else:
        return 0.

class custom_pdf(spst.rv_continuous):
    def _pdf(self, x, k):
        return pdf_shape(x, k)

instance = custom_pdf(a=0, b=1)
samps = instance.rvs(k=1, size=10000)

plt.hist(samps, bins=20)
plt.show()
tmp = random()
if tmp < 0.5: tmp = random()
is a pretty easy way to do it.
Ehh, I guess this is 3x as likely... that's what I get for sleeping through that class, I guess:
from random import random, uniform

def rand1():
    tmp = random()
    if tmp < 0.5: tmp = random()
    return tmp

def rand2():
    tmp = uniform(0, 1.5)
    return tmp if tmp <= 1.0 else tmp - 0.5

sample1 = []
sample2 = []
for i in range(10000):
    sample1.append(rand1() >= 0.5)
    sample2.append(rand2() >= 0.5)

print sample1.count(True)  #~ 75%
print sample2.count(True)  #~ 66% <- desired i believe :)
First off, numpy.random.rand(1) doesn't return a value in the [0,1) range (half-open, includes zero but not one); it returns an array of size one containing values in that range, and the argument controls the shape of the output, not the upper end of the range.
The function you're probably after is the uniform distribution one, numpy.random.uniform() since this will allow an arbitrary upper range.
And, to make the upper half twice as likely is a relatively simple matter.
Take, for example, a random number generator r(n) which returns a uniformly distributed integer in the range [0,n). All you need to do is adjust the values to change the distribution:
x = r(3)      # 0, 1 or 2, with probability 1/3 each
if x == 2:
    x = 1     # now either 0 (probability 1/3) or 1 (probability 2/3)
Now the chances of getting zero are 1/3 while the chances of getting one are 2/3, basically what you're trying to achieve with your floating point values.
So I would simply get a random number in the range [0,1.5), then subtract 0.5 if it's greater than or equal to one.
x = numpy.random.uniform(high=1.5)
if x >= 1: x -= 0.5
Since the original distribution should be even across the [0,1.5) range, the subtraction should make [0.5,1.0) twice as likely (and [1.0,1.5) impossible), while keeping the distribution even within each section ([0,0.5) and [0.5,1)):
[0.0,0.5) [0.5,1.0) [1.0,1.5) before
<---------><---------><--------->
[0.0,0.5) [0.5,1.0) [0.5,1.0) after
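A quick empirical check of this shift approach (an illustrative sketch, not part of the original answer):

import numpy as np

x = np.random.uniform(0.0, 1.5, size=100000)
x = np.where(x >= 1.0, x - 0.5, x)          # fold [1.0, 1.5) back onto [0.5, 1.0)
print((x >= 0.5).mean())                    # about 0.667, i.e. twice as likely as x < 0.5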
You could take a "mixture model" approach where you split the process into two steps: first, decide whether to take option A or B, where B is twice as likely as A; then, if you chose A, return a random number between 0.0 and 0.5, else if you chose B, return one between 0.5 and 1.0.
In the example, the randint randomly returns 0, 1, or 2, so the else case is twice as likely as the if case.
m = numpy.random.randint(3)
if m == 0:
    x = numpy.random.uniform(0.0, 0.5)
else:
    x = numpy.random.uniform(0.5, 1.0)
This is a little more expensive (two random draws instead of one) but it can generalize to more complicated distributions in a fairly straightforward way.
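For many samples, the same mixture idea can be vectorized (an illustrative sketch, not part of the original answer):

import numpy as np

n = 100000
choice = np.random.randint(3, size=n)       # 0 with probability 1/3, 1 or 2 with probability 2/3
x = np.where(choice == 0,
             np.random.uniform(0.0, 0.5, n),
             np.random.uniform(0.5, 1.0, n))
print((x > 0.5).mean())                     # about 0.667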
If you want a smoother bias rather than a sharp split at 0.5, you can square the output of the random function
(and subtract it from 1 to make x > 0.5 more probable instead of x < 0.5). Note that this makes x > 0.5 roughly 2.4 times as likely, not exactly twice.
x = 1 - numpy.random.rand(1)**2
