I have the following code, in which I need to solve an expression for omega to find its roots.
import numpy as np
from sympy import Symbol, lambdify
import scipy
from mpmath import findroot, exp

eta = 1.5
tau = 5 / 1000
omega = Symbol("omega")
Tf = exp(1j * omega * tau)
symFun = 1 + Tf * (eta - 1)
denom = lambdify((omega), symFun, "scipy")
Tf_high = 1j * 2 * np.pi * 1000 * tau
sol = findroot(denom, [0+1j, Tf_high])
The program gives an error that I am not able to correct: TypeError: cannot create mpf from 0.005*I*omega.
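(For reference: the error comes from mixing mpmath's exp with the symbolic omega, so mpmath tries to coerce the SymPy expression 0.005*I*omega into an mpf. A minimal sketch of a working mpmath route, assuming a single complex root is wanted and with an arbitrary starting guess, would be:)

import sympy as sp
from mpmath import findroot

eta = 1.5
tau = 5 / 1000
omega = sp.Symbol("omega")
# Keep everything symbolic: sympy's exp and I, not mpmath's exp
symFun = 1 + sp.exp(sp.I * omega * tau) * (eta - 1)
denom = sp.lambdify(omega, symFun, "mpmath")
# findroot takes one complex starting guess for a scalar equation
sol = findroot(denom, 600 - 100j)
print(sol)  # approximately (628.319 - 138.629j)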
Edit 1 - I have tried two different approaches based on the comments. The first was to use the sympy.solveset module; the second was fsolve from scipy.optimize. Neither gives proper output.
For clarity, I am copying the relevant code for each approach along with the output I am getting.
Approach 1 - SymPy
import numpy as np
from sympy import Symbol, exp
from sympy.solvers.solveset import solveset, solveset_real, solveset_complex
import matplotlib.pyplot as plt

def denominator(eta, Tf):
    return 1 + Tf * (eta - 1)

if __name__ == "__main__":
    eta = 1.5
    tau = 5 / 1000
    omega = Symbol("omega")
    n = 1
    Tf = exp(1j * omega * tau)
    denom = 1 + Tf * (eta - 1)
    symFun = denominator(eta, Tf)
    sol = solveset_real(denom, omega)
    sol1 = solveset_complex(denom, omega)
    print('In real domain', sol)
    print('In imaginary domain', sol1)
Output:
In real domain EmptySet
In imaginary domain ImageSet(Lambda(_n, -200.0*I*(I*(2*_n*pi + pi) + 0.693147180559945)), Integers)
Approach 2 - SciPy
import numpy as np
from scipy.optimize import fsolve, root

def denominator(eta, tau, n, omega):
    Tf = n * np.exp(1j * omega * tau)
    return 1 + Tf * (eta - 1)

if __name__ == "__main__":
    eta = 1.5
    tau = 5 / 1000
    n = 1
    func = lambda omega: 1 + (eta - 1) * (n * np.exp(1j * omega * tau))
    sol = fsolve(func, 10)
    print(sol)
Output:
Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe'
How do I correct the program? Please suggest an approach that will give proper results.
SymPy is a computer algebra system and solves the equation like a human would. SciPy uses numeric optimization. If you want ALL the solutions, I suggest going with SymPy. If you want one solution, I suggest going with SciPy.
Approach 1 - SymPy
The solutions SymPy gives will be more "interactive" for you as the developer, and they will be exactly correct almost all the time.
from sympy import *
eta = S(3)/2
tau = S(5) / 1000
omega = Symbol("omega")
n = 1
Tf = exp(I * omega * tau)
denom = 1 + Tf * (eta - 1)
sol = solveset(denom, omega)
print(sol)
Giving
ImageSet(Lambda(_n, -200*I*(I*(2*_n*pi + pi) + log(2))), Integers)
This is the true mathematical solution.
Notice how I put S around an integer before dividing it. When dividing plain integers, Python produces floating-point numbers and loses exactness. Converting to SymPy objects keeps full accuracy.
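A quick illustration of the difference:

from sympy import S

print(5 / 1000)     # 0.005 -- a Python float, limited precision
print(S(5) / 1000)  # 1/200 -- an exact SymPy Rational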
Since we know we have an ImageSet over integers, we can start listing a few solutions:
for n in range(-3, 3):
    print(complex(sol.lamda(n)))
Which gives
(-3141.5926535897934-138.62943611198907j)
(-1884.9555921538758-138.62943611198907j)
(-628.3185307179587-138.62943611198907j)
(628.3185307179587-138.62943611198907j)
(1884.9555921538758-138.62943611198907j)
(3141.5926535897934-138.62943611198907j)
With some experience, you could automate it so that the whole program returns exactly one solution no matter the type of output returned by solveset.
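For example, a minimal sketch of that automation (assuming solveset returns either a FiniteSet or, as here, an ImageSet over the integers):

from sympy import FiniteSet, ImageSet

def one_solution(solset):
    # FiniteSet: return an arbitrary member
    if isinstance(solset, FiniteSet):
        return complex(next(iter(solset)))
    # ImageSet over the integers: evaluate the defining lambda at n = 0
    if isinstance(solset, ImageSet):
        return complex(solset.lamda(0))
    raise NotImplementedError("unhandled solution set type")

print(one_solution(sol))  # (628.3185...-138.6294...j)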
Approach 2 - SciPy
The solutions SciPy gives will be more automated. You will never get an exact answer, and different choices of initial conditions may fail to converge.
import numpy as np
from scipy.optimize import root
from typing import Tuple

eta = 1.5
tau = 5 / 1000
n = 1

def f(omega: Tuple[float, float]):
    # Split the complex unknown into real and imaginary parts,
    # since scipy.optimize.root works on real-valued vectors
    omega_real, omega_imag = omega
    omega: complex = omega_real + omega_imag * 1j
    result: complex = 1 + (eta - 1) * (n * np.exp(1j * omega * tau))
    return result.real, result.imag

sol = root(f, [100, 100])
print(sol)
print(sol.x[0] + sol.x[1] * 1j)
Which gives
fjac: array([[ 0.00932264, 0.99995654],
[-0.99995654, 0.00932264]])
fun: array([-2.13074003e-12, -8.86389816e-12])
message: 'The solution converged.'
nfev: 30
qtf: array([ 2.96274855e-09, -6.82780898e-10])
r: array([-0.00520194, -0.00085702, -0.00479143])
status: 1
success: True
x: array([ 628.31853072, -138.62943611])
(628.3185307197314-138.62943611241522j)
Looks like that's one of the solutions SymPy found. So we must be doing something right. Note that there are many initial values that don't converge, for example, sol = root(f, [1, 1]).
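If convergence from a single guess is a concern, one simple hedge (a sketch, not the only way) is to scan a small grid of starting points and keep the first one that converges:

import itertools

# Try several starting points and report the first converged root
for re0, im0 in itertools.product([1, 10, 100, 1000], repeat=2):
    trial = root(f, [re0, im0])
    if trial.success:
        print(f"converged from ({re0}, {im0}):", trial.x[0] + trial.x[1] * 1j)
        break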
Related
I would like to solve the equation (x+1)e^x = c in Python.
The equation has been successfully solved by hand using the Lambert W function, as depicted in the figure below:
Using the same steps, I would like to solve (x+1)e^x = c programmatically. I have coded it using the SymPy module following the steps shown in the figure above, but without success.
Is there any way to solve these kinds of equations in Python?
import numpy as np
from sympy import *

n = symbols('n')
sigmao = 0.06866
sigmas = 0.142038295
theta = 38.9
rad = (np.pi / 180) * 38.9076
cos = np.cos(rad)
sec = 1 / np.cos(rad)
out = (0.06 * 0.7781598455 * n * (1 - exp(-2 * 0.42 * sec * n)) + exp(-2 * 0.42 * n * sec) * sigmas) / sigmao
# Apply diff to the above expression.
fin = diff(out, n)
print(solve(fin, n))
from scipy.optimize import fsolve
import numpy as np

const = 20

def func(x):
    return [(x[0] + 1) * np.exp(x[0]) - const]

result = fsolve(func, [1])[0]
print('constant: ', const, ', solution: ', result)
# check
print('check: ', (result + 1) * np.exp(result))

Output:
constant: 20.0 , solution: 1.9230907433218063
check: 20.0
Preview : https://onlinegdb.com/By8Z2Jwgw
Your expression is heavily numeric. Since SymPy's solve tries to find an exact symbolic solution, it gets into trouble.
To find numeric solutions, sympy has nsolve (which accepts sympy expressions but behind the scenes calls mpmath's numeric solver). Unlike with solve, an initial guess is needed:
from sympy import symbols, exp, diff, nsolve, pi, cos
n = symbols('n')
sigmao = 0.06866
sigmas = 0.142038295
theta = 38.9076
rad = (pi / 180) * theta
sec = 1 / cos(rad)
out = (0.06 * 0.7781598455 * n * (1 - exp(-2 * 0.42 * sec * n)) + exp(-2 * 0.42 * n * sec) * sigmas) / sigmao
# Apply diff for the above expression.
fin = diff(out, n)
result = nsolve(fin, n, 1)
print(result, fin.subs(n, result).evalf())
Result: 1.05992379637846 -7.28565300819065e-17
Note that when working with numeric values, you should be careful to carry as many digits as possible to avoid accumulation of errors. Whenever you have an exact expression, it is recommended to leave that expression in the code instead of replacing it with digits. (Usually 64 bits, or about 16 significant digits, are used in calculations, but for intermediate calculations 80 bits can be taken into account.)
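If more digits are needed, nsolve accepts a prec argument (the number of significant digits mpmath works with); a short sketch using the expression from above:

result = nsolve(fin, n, 1, prec=30)
print(result)  # the same root, to about 30 significant digits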
To solve the original question with sympy:
from sympy import symbols, Eq, exp, solve

x = symbols('x')
solutions = solve(Eq((x + 1) * exp(x), 20))
for s in solutions:
    print(s.evalf())
Result: 1.92309074332181
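For reference, the symbolic solution behind that number is a closed form in the Lambert W function: substituting u = x + 1 turns (x+1)e^x = 20 into u*e^u = 20e, so x = W(20e) - 1. Printing the solution list before evalf should show this:

print(solutions)  # expected: [-1 + LambertW(20*E)]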
I am trying to fit two sweeps that follow a Fraunhofer diffraction pattern. Please help me understand a snippet in my code, because it changes the fitting and I don't know why! This is the line I mean specifically:
if MaxIndexup>MaxIndexdown: FitDataup = TempData.loc[TempData["Field (Oe)"] <= MaxIndexup]
When this runs, it pulls in a bunch of files with a lot of data that is irrelevant to the question at hand, which is: how do you identify the peak of the (almost) Gaussian distribution, and how do you fit it so the lobes (in the sense of a Fraunhofer diffraction pattern) come out better? I have my code working; I just want to understand how to parse through and fit the data better:
from os import walk
import numpy as np
import pandas as pd
import re
import math
from scipy import stats
import scipy.special as sp
from lmfit import Model
# Function that computes the Fraunhofer diffraction pattern
# reference --> https://en.wikipedia.org/wiki/Fraunhofer_diffraction; also see the Gaussian section
# (dN, dF and phi0 are defined elsewhere in the script)
def Fraunup(x, Imaxup, Lambda, M, CorrectedBitSize):
    y1 = Imaxup * abs(2 * sp.jv(1, ((float(CorrectedBitSize) * (2 * abs(Lambda) + 2 * dN + dF) * (x + M)) / (phi0 * math.pow(10, 10)))) / ((float(CorrectedBitSize) * (2 * Lambda + 2 * dN + dF) * (x + M)) / (phi0 * math.pow(10, 10))))
    return y1

def Fraundown(x, Imaxdown, Lambda, M, CorrectedBitSize):
    y2 = Imaxdown * abs(2 * sp.jv(1, ((float(CorrectedBitSize) * (2 * abs(Lambda) + 2 * dN + dF) * (x + M)) / (phi0 * math.pow(10, 10)))) / ((float(CorrectedBitSize) * (2 * Lambda + 2 * dN + dF) * (x + M)) / (phi0 * math.pow(10, 10))))
    return y2
Iparamup = 1.5 * TempData["Ic+ (mA)"].max()
Iparamdown = 1.5 * TempData2["Ic+ (mA)"].max()
Lambparam = (0.00000009)
Mparam = (200 / CorrectedBitSize_Squared) #gives you H_shift
#Create the Model and add specific parameters. M deals with the H_shift
modelup = Model(Fraunup)
params = modelup.make_params()
params.add('Imaxup',value=Iparamup)
params.add('Lambda',value=Lambparam)
params.add('M',value=np.mean(Mparam),min=0,max=500)
modeldown = Model(Fraundown)
params = modeldown.make_params()
params.add('Lambda',value=Lambparam)
params.add('M',value=np.mean(Mparam),min=0,max=500)
params.add('Imaxdown', value=Iparamdown)
if MaxIndexup > MaxIndexdown:
    FitDataup = TempData.loc[TempData["Field (Oe)"] <= MaxIndexup]
FitDataup = FitDataup[~np.isnan(FitDataup['Ic+ (mA)'])].reset_index()
FitDatadown = FitDatadown[~np.isnan(FitDatadown['Ic+ (mA)'])].reset_index()

for x in range(len(Mparam)):
    resultsup = modelup.fit(FitDataup['Ic+ (mA)'], x=-FitDataup['Field (Oe)'], CorrectedBitSize=CorrectedBitSize[x], Imaxup=Iparamup, Lambda=Lambparam, M=Mparam[x], method='Powell')
    resultsdown = modeldown.fit(FitDatadown['Ic+ (mA)'], x=-FitDatadown['Field (Oe)'], CorrectedBitSize=CorrectedBitSize[x], Imaxdown=Iparamdown, Lambda=Lambparam, M=Mparam[x], method='Powell')
#evaluate the model over the full field range using the best-fit parameter values
YFitValueup = Fraunup(TempData["Field (Oe)"], resultsup.best_values['Imaxup'], resultsup.best_values['Lambda'], resultsup.best_values['M'], resultsup.best_values['CorrectedBitSize'])
YFitValuedown = Fraundown(TempData2["Field (Oe)"], resultsdown.best_values['Imaxdown'], resultsdown.best_values['Lambda'], resultsdown.best_values['M'], resultsdown.best_values['CorrectedBitSize'])
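On the peak-identification half of the question, a minimal sketch (assuming TempData is the pandas DataFrame used above, with NaNs already dropped) is to take the tallest local maximum found by scipy.signal.find_peaks:

from scipy.signal import find_peaks

ic = TempData["Ic+ (mA)"].to_numpy()
field = TempData["Field (Oe)"].to_numpy()
# find_peaks returns the indices of all local maxima; take the tallest one
peaks, _ = find_peaks(ic)
central = peaks[np.argmax(ic[peaks])]
print("central lobe at", field[central], "Oe, Ic+ =", ic[central], "mA")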
I'm trying to solve an integral equation using the following code (irrelevant parts removed):
def _pdf(self, a, b, c, t):
    pdf = some_pdf(a, b, c, t)
    return pdf

def _result(self, a, b, c, flag):
    return fsolve(lambda t: flag - 1 + quad(lambda tau: self._pdf(a, b, c, tau), 0, t)[0], x0)[0]
Which takes a probability density function and finds a result tau such that the integral of pdf from tau to infinity is equal to flag. Note that x0 is a (float) estimate of the root defined elsewhere in the script. Also note that flag is an extremely small number, on the order of 1e-9.
In my application fsolve only successfully finds a root about 50% of the time. It often just returns x0, significantly biasing my results. There is no closed form for the integral of the pdf, so I am forced to integrate numerically, and I feel that this might be introducing some inaccuracy.
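One thing that may be worth trying (a sketch of a hypothetical variant of _result, not a guaranteed fix): with flag on the order of 1e-9 the residual is tiny everywhere, so fsolve can judge x0 close enough to converged; comparing logarithms gives it O(1) values to work with:

# Hypothetical log-scale variant of _result; tail(t) = 1 - integral from 0 to t
# because the pdf integrates to 1. Assumes the quadrature never overshoots 1.
def _result_log(self, a, b, c, flag):
    residual = lambda t: (np.log(1.0 - quad(lambda tau: self._pdf(a, b, c, tau), 0, t)[0])
                          - np.log(flag))
    return fsolve(residual, x0)[0]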
EDIT:
This has since been solved using a method other than that described below, but I'd like to get quadpy to work and see if the results improve at all. The specific code I'm trying to get to work is as follows:
import quadpy
import numpy as np
from scipy.optimize import *
from scipy.special import gammaln, kv, gammaincinv, gamma
from scipy.integrate import quad, simps

l = 226.02453163
mu = 0.00212571582056
nu = 4.86569872444
flag = 2.5e-09
estimate = 3 * mu

def pdf(l, mu, nu, t):
    return np.exp(np.log(2) + (l + nu - 1 + 1) / 2 * np.log(l * nu / mu)
                  + (l + nu - 1 - 1) / 2 * np.log(t)
                  + np.log(kv(nu - l, 2 * np.sqrt(l * nu / mu * t)))
                  - gammaln(l) - gammaln(nu))

def tail_cdf(l, mu, nu, tau):
    i, error = quadpy.line_segment.adaptive_integrate(
        lambda t: pdf(l, mu, nu, t), [tau, 10000], 1.0e-10
    )
    return i

result = fsolve(lambda tau: flag - tail_cdf(l, mu, nu, tau[0]), estimate)
When I run this I get an assertion error from assert all(lengths > minimum_interval_length). I'm not quite sure how to remedy this; any help would be very much appreciated!
As an example, I tried 1 / x for the integration between 1 and alpha to retrieve the target integral 2.0. This
import quadpy
from scipy.optimize import fsolve
def f(alpha):
    beta, _ = quadpy.quad(lambda x: 1.0 / x, 1, alpha)
    return beta
target = 2.0
res = fsolve(lambda alpha: target - f(alpha), x0=2.0)
print(res)
correctly returns 7.38905611.
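(That matches the exact answer: the integral of 1/x from 1 to alpha is log(alpha), so log(alpha) = 2 gives alpha = e^2 ≈ 7.3890561.)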
The failing quadpy assertion
assert all(lengths > minimum_interval_length)
you're getting means that the adaptive integration hit its limit: either relax your tolerance a bit, or decrease the minimum_interval_length (see here).
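Concretely, reusing the call shape from the question with a looser tolerance would look like this (1.0e-8 is an arbitrary choice; estimate stands in for a concrete tau):

i, error = quadpy.line_segment.adaptive_integrate(
    lambda t: pdf(l, mu, nu, t), [estimate, 10000], 1.0e-8
)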
I'm just trying to plot two Gaussians and find their intersection point. I have the following code, but it's not plotting the exact intersection, and I really cannot figure out why. It's just barely off. I worked through the derived solution (taking the log of the subtracted Gaussians), and it seems like it should be correct. Can anyone help? Thank you so much!
import numpy as np
import matplotlib.pyplot as plt

def plot_normal(x, mean=0, sigma=1):
    return 1.0/(2*np.pi*sigma**2) * np.exp(-((x-mean)**2)/(2*sigma**2))

# found online
def solve_gasussians(m1, s1, m2, s2):
    a = 1.0/(2.0*s1**2) - 1.0/(2.0*s2**2)
    b = m2/(s2**2) - m1/(s1**2)
    c = m1**2 /(2*s1**2) - m2**2 / (2.0*s2**2) - np.log(s2/s1)
    return np.roots([a, b, c])

s1 = np.linspace(0, 10, 300)
s2 = np.linspace(0, 14, 300)

solved_val = solve_gasussians(5.0, 0.5, 7.0, 1.0)
print(solved_val)
solved_val = solved_val[0]
plt.figure('Baseline Distributions')
plt.title('Baseline Distributions')
plt.xlabel('Response Rate')
plt.ylabel('Probability')
plt.plot(s1, plot_normal(s1, 5.0, 0.5),'r', label='s1')
plt.plot(s2, plot_normal(s2, 7.0, 1.0),'b', label='s2')
plt.plot(solved_val, plot_normal(solved_val, 7.0, 1.0), 'mo')
plt.legend()
plt.show()
You have a small bug in the plot_normal function - you are missing a square root in the denominator. Proper version:
def plot_normal(x, mean=0, sigma=1):
    return 1.0/np.sqrt(2*np.pi*sigma**2) * np.exp(-((x-mean)**2)/(2*sigma**2))
gives the expected result (figure omitted).
And two remarks.
Remember that you can have 2 roots of the equation in general (two intersection points), and this is the case with parameters you provided.
As far as I know np.roots gives you an approximate result, but you can get the exact result easily by rewriting the solve_gasussians function as:
def solve_gasussians(m1, s1, m2, s2):
    # coefficients of the quadratic equation ax^2 + bx + c = 0
    a = (s1**2.0) - (s2**2.0)
    b = 2 * (m1 * s2**2.0 - m2 * s1**2.0)
    c = m2**2.0 * s1**2.0 - m1**2.0 * s2**2.0 - 2 * s1**2.0 * s2**2.0 * np.log(s1/s2)
    x1 = (-b + np.sqrt(b**2.0 - 4.0 * a * c)) / (2.0 * a)
    x2 = (-b - np.sqrt(b**2.0 - 4.0 * a * c)) / (2.0 * a)
    return x1, x2
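Usage, with the parameters from the question; the two values are the intersection points and should agree with np.roots up to floating-point error:

x1, x2 = solve_gasussians(5.0, 0.5, 7.0, 1.0)
print(x1, x2)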
I don't know where the mistake lies in your code, but I think I found the code you borrowed from and made part of the adjustment you need.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def solve(m1, m2, std1, std2):
    a = 1/(2*std1**2) - 1/(2*std2**2)
    b = m2/(std2**2) - m1/(std1**2)
    c = m1**2 /(2*std1**2) - m2**2 / (2*std2**2) - np.log(std2/std1)
    return np.roots([a, b, c])

m1 = 5
std1 = 0.5
m2 = 7
std2 = 1

result = solve(m1, m2, std1, std2)
x = np.linspace(-5, 9, 10000)
plot1 = plt.plot(x, [norm.pdf(_, m1, std1) for _ in x])
plot2 = plt.plot(x, [norm.pdf(_, m2, std2) for _ in x])
plot3 = plt.plot(result[0], norm.pdf(result[0], m1, std1), 'o')
plt.show()
I will offer two pieces of unsolicited advice that might make life easier for you (in the way they do for me):
When you adapt code try to make small, incremental changes and check that the code still works at each step.
Look for existing free libraries. In this case norm from scipy is a good replacement for what was used in the original code.
The mistake is here. This line:
def plot_normal(x, mean=0, sigma=1):
    return 1.0/(2*np.pi*sigma**2) * np.exp(-((x-mean)**2)/(2*sigma**2))

Should be this:

def plot_normal(x, mean=0, sigma=1):
    return 1.0/np.sqrt(2*np.pi*sigma**2) * np.exp(-((x-mean)**2)/(2*sigma**2))
You forgot the sqrt.
It would be wiser to use a pre-existing normal pdf if that's available, such as:
import scipy.stats

def plot_normal(x, mean=0, sigma=1):
    return scipy.stats.norm.pdf(x, loc=mean, scale=sigma)
It's also possible to solve for the intersections exactly. This answer (https://stats.stackexchange.com/a/12213/12116) provides a quadratic equation for the roots of the Gaussians' intersections. Using Maxima to solve for x gives the following expression, which, while complicated, does not rely on iterative methods and can be generated automatically from simpler expressions.
def solve_gaussians(m1, s1, m2, s2):
    x1 = (s1*s2*np.sqrt((-2*np.log(s1/s2)*s2**2)+2*s1**2*np.log(s1/s2)+m2**2-2*m1*m2+m1**2)+m1*s2**2-m2*s1**2)/(s2**2-s1**2)
    x2 = -(s1*s2*np.sqrt((-2*np.log(s1/s2)*s2**2)+2*s1**2*np.log(s1/s2)+m2**2-2*m1*m2+m1**2)-m1*s2**2+m2*s1**2)/(s2**2-s1**2)
    return x1, x2
Putting it all together gives:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats

def plot_normal(x, mean=0, sigma=1):
    return scipy.stats.norm.pdf(x, loc=mean, scale=sigma)

#Use the equation from [this answer](https://stats.stackexchange.com/a/12213/12116) solved for x
def solve_gaussians(m1, s1, m2, s2):
    x1 = (s1*s2*np.sqrt((-2*np.log(s1/s2)*s2**2)+2*s1**2*np.log(s1/s2)+m2**2-2*m1*m2+m1**2)+m1*s2**2-m2*s1**2)/(s2**2-s1**2)
    x2 = -(s1*s2*np.sqrt((-2*np.log(s1/s2)*s2**2)+2*s1**2*np.log(s1/s2)+m2**2-2*m1*m2+m1**2)-m1*s2**2+m2*s1**2)/(s2**2-s1**2)
    return x1, x2

s = np.linspace(0, 14, 300)
x = solve_gaussians(5.0, 0.5, 7.0, 1.0)

plt.figure('Baseline Distributions')
plt.title('Baseline Distributions')
plt.xlabel('Response Rate')
plt.ylabel('Probability')
plt.plot(s, plot_normal(s, 5.0, 0.5), 'r', label='s1')
plt.plot(s, plot_normal(s, 7.0, 1.0), 'b', label='s2')
plt.plot(x[0], plot_normal(x[0], 5., 0.5), 'mo')
plt.plot(x[1], plot_normal(x[1], 5., 0.5), 'mo')
plt.legend()
plt.show()
Giving: (figure: the two Gaussians with both intersection points marked)
I have this equation:
import numpy as np
from scipy import optimize

def wealth_evolution(price, wealth=10, rate=0.01, q=1, realEstate=0.1, prev_price=56):
    sum_wantedEstate = 100
    for delta in range(1, 4):
        z = rate - ((price - prev_price) / (price + q / rate))
        k = delta * np.divide(1.0, float(np.maximum(0.0, z)))
        wantedEstate = (wealth / (price + q / rate)) * np.minimum(k, 1) - realEstate
        sum_wantedEstate += wantedEstate
    return sum_wantedEstate
So I find the solution of this equation:
sol = optimize.fsolve(wealth_evolution, 200)
But if I substitute sol into the equation I don't get 0 (wealth_evolution(sol)). Why does this happen? fsolve is supposed to find the roots of f(x) = 0.
UPD:
The full_output gives:
(array([ 2585200.]), {'qtf': array([-99.70002298]), 'nfev': 14, 'fjac': array([[-1.]]), 'r': array([ 3.45456519e-11]), 'fvec': array([ 99.7000116])}, 5, 'The iteration is not making good progress, as measured by the \n improvement from the last ten iterations.')
Have you tried plotting your function?
import numpy as np
from scipy import optimize
from matplotlib import pyplot as plt

small = 1e-30

def wealth_evolution(price, wealth=10, rate=0.01, q=1, realEstate=0.1, prev_price=56):
    sum_wantedEstate = 100
    for delta in range(1, 4):
        z = rate - ((price - prev_price) / (price + q / rate))
        k = delta * np.divide(1.0, float(np.maximum(small, z)))
        wantedEstate = (wealth / (price + q / rate)) * np.minimum(k, 1) - realEstate
        sum_wantedEstate += wantedEstate
    return sum_wantedEstate

price_range = np.linspace(0, 10000, 10000)
we = [wealth_evolution(p) for p in price_range]

plt.plot(price_range, we)
plt.xlabel('price')
plt.ylabel('wealth_evolution(price)')
plt.show()
At least for the parameters you specify, the function does not have a root, which is what fsolve tries to find. If you want to minimize a function you can try fmin, but for this function that will not help either, because it seems to just decay asymptotically to about 99.7, so minimizing it would drive the price to infinity.
So either you have to live with this, come up with a different function to optimize, or constrain your search range (in which case you don't have to search at all, because the answer will just be the maximum value...).
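For completeness, a minimal sketch of the fmin route; as argued above, on this function it will just chase the asymptote until it hits its iteration limit:

from scipy.optimize import fmin

# Nelder-Mead simplex search starting from price = 200
price_min = fmin(wealth_evolution, 200)
print(price_min, wealth_evolution(price_min[0]))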