I would like to find the minimum of the equations given in this script. It looks messy (but I suppose a deep understanding of the equations is not needed). At the end of the function is the expression to minimize:
vys1=-Qd-40*sqrt(5)*sqrt((ch+cm)*ep*kb*Na*T)*w1
vys2=fi0-fib-Q0/cq
vys3=fib-fid+Qd/cq1
vysf= np.array([vys1,vys2,vys3])
return vysf
I wrote the same script in MATLAB using lsqnonlin to compare the results. The MATLAB results seem much more accurate. The results are (fi0, fib, fid):
Python
[-0.14833481 -0.04824387 -0.00942132] Sum(value) ~1e-3.
Matlab
[-0.13253 -0.03253 -0.02131] Sum(value) ~1e-15
Note that the script has a check for typos in the equations (to verify they are identical in Python and MATLAB):
for [fi0,fib,fid] = [-0.120, -0.0750, -0.011] the results [vys1,vys2,vys3] are the same -
python [0.00069376 0.05500097 -0.06179421]
matlab [0.0006937598 0.05500096 -0.06179421]
Are there any options in least_squares that would improve the results? Thanks for any help (and sorry for my English).
Python
import numpy as np
from numpy import exp, sqrt
from scipy.optimize import least_squares

def q(par,ep,Na,kb,T,e,gamaal,gamasi,gamax,k1,k2,k3,k4,cq,cq1,ch,cm):
    fi0,fib,fid=np.array([par[0],par[1],par[2]])
    AlOH= gamaal*k1*exp(e*fi0/(T*kb))/(ch + k1*exp(e*fi0/(T*kb)))
    AlOH2= ch*gamaal/(ch + k1*exp(e*fi0/(T*kb)))
    SiO= gamasi*k2*exp(e*fi0/(T*kb))/(ch + k2*exp(e*fi0/(T*kb)))
    SiOH= ch*gamasi/(ch + k2*exp(e*fi0/(T*kb)))
    X= gamax*k3*k4*exp(e*fib/(T*kb))/(ch*k4 + cm*k3 + k3*k4*exp(e*fib/(T*kb)))
    XH= ch*gamax*k4/(ch*k4 + cm*k3 + k3*k4*exp(e*fib/(T*kb)))
    Xm= cm*gamax*k3/(ch*k4 + cm*k3 + k3*k4*exp(e*fib/(T*kb)))
    Q0=e*(0.5*(AlOH2+SiOH-AlOH-SiO)-gamax)
    Qb=e*(XH+Xm)
    Qd=-Q0-Qb
    w1=np.sinh(0.5*e*fid/kb/T)  # np.sinh instead of sc.sinh (no longer in SciPy's namespace)
    vys1=-Qd-40*sqrt(5)*sqrt((ch+cm)*ep*kb*Na*T)*w1
    vys2=fi0-fib-Q0/cq
    vys3=fib-fid+Qd/cq1
    vysf= np.array([vys1,vys2,vys3])
    return vysf
kb=1.38E-23;T=300;e=1.6e-19;Na=6.022e23;gamaal=1e16;gamasi=1e16
gamax=1e18;k1=1e-4;k2=1e5;k3=1e-4;k4=1e-4;cq=1.6;cq1=0.2
cm=1e-3;ep=80*8.8e-12
ch1=np.array([1e-3,1e-5,1e-7,1e-10])
# Check that the equations are the same
x0=np.array([-0.120, -0.0750 ,-0.011])
val=q(x0,ep,Na,kb,T,e,gamaal,gamasi,gamax,k1,k2,k3,k4,cq,cq1,ch1[0],cm)
print(val)
w1=least_squares(q,x0, args=(kb,ep,Na,T,e,gamaal,gamasi,gamax,k1,k2,k3,
k4,cq,cq1,ch1[0],cm))
print(w1['x'])
Matlab
function[F1,poten,fval]=test()
kb=1.38E-23;T=300;e=1.6e-19;Na=6.022e23;gamaal=1e16;gamasi=1e16;gamax=1e18;
k1=1e-4;k2=1e5;k3=1e-4;k4=1e-4;cq=1.6;cq1=0.2;ch=[1e-3];cm=1e-3;ep=80*8.8e-12;
% Test if the equations are the same
x0=[-0.120, -0.0750 ,-0.011];
F1=rovnica(x0,ch) ;
[poten,fval]= lsqnonlin(@(c) rovnica(c,ch(1)),x0);
function[F]=rovnica(c,ch)
fi0=c(1);
fib=c(2);
fid=c(3);
aloh=exp(1).^(e.*fi0.*kb.^(-1).*T.^(-1)).*gamaal.*k1.*(ch+exp(1).^(e.* ...
fi0.*kb.^(-1).*T.^(-1)).*k1).^(-1);
aloh2=ch.*gamaal.*(ch+exp(1).^(e.*fi0.*kb.^(-1).*T.^(-1)).*k1).^(-1);
sioh=ch.*gamasi.*(ch+exp(1).^(e.*fi0.*kb.^(-1).*T.^(-1)).*k2).^(-1);
sio=exp(1).^(e.*fi0.*kb.^(-1).*T.^(-1)).*gamasi.*k2.*(ch+exp(1).^(e.* ...
fi0.*kb.^(-1).*T.^(-1)).*k2).^(-1);
Xm=cm.*gamax.*k3.*(cm.*k3+ch.*k4+exp(1).^(e.*fib.*kb.^(-1).*T.^(-1)) ...
.*k3.*k4).^(-1);
XH=ch.*gamax.*k4.*(cm.*k3+ch.*k4+exp(1).^(e.*fib.*kb.^(-1).*T.^(-1)) ...
.*k3.*k4).^(-1);
Q0=e*(0.5*(aloh2+sioh-aloh-sio)-gamax);
Qb=e*(XH+Xm);
Qd=-Q0-Qb;
F=[-Qd+(-40).*5.^(1/2).*((ch+cm).*ep.*kb.*Na.*T).^(1/2).*sinh((1/2).*e.* ...
fid.*kb.^(-1).*T.^(-1));...
fi0-fib-Q0/cq;...
(fib-fid+Qd/cq1)];
end
end
There is a mistake in this line:
w1=least_squares(q,x0, args=(kb,ep,Na,T,e,gamaal,gamasi,gamax,k1,k2,k3,
k4,cq,cq1,ch1[0],cm))
You have the argument kb in the wrong spot. The signature of q is:
def q(par,ep,Na,kb,T,e,gamaal,gamasi,gamax,k1,k2,k3,k4,cq,cq1,ch,cm):
In the signature, kb sits between Na and T, but in your call it comes first. If you fix the args argument in the least_squares call:
w1 = least_squares(q, x0, args=(ep, Na, kb, T, e, gamaal, gamasi, gamax,
k1, k2, k3, k4, cq, cq1, ch1[0], cm))
then the output of the Python script is
[ 0.00069376 0.05500097 -0.06179421]
[-0.13253313 -0.03253254 -0.02131043]
which agrees with the Matlab output.
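As for the original question about options: once the argument order is fixed, the remaining knobs are mostly tolerances and scaling. These keyword arguments do exist in least_squares, but the values below are only an illustrative sketch (not tuned for this problem), and it reuses q, x0 and the constants from the script above:
import numpy as np
from scipy.optimize import least_squares

w1 = least_squares(q, x0,
                   args=(ep, Na, kb, T, e, gamaal, gamasi, gamax,
                         k1, k2, k3, k4, cq, cq1, ch1[0], cm),
                   method='lm',               # Levenberg-Marquardt; lsqnonlin offers the same algorithm
                   ftol=1e-15, xtol=1e-15, gtol=1e-15,
                   x_scale=[0.1, 0.1, 0.01])  # rough magnitudes of fi0, fib, fid (a guess)
print(w1.x, np.sum(w1.fun**2))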
Related
I am a new user of SymPy. I am teaching myself this library for my undergraduate research, but in the middle of the process I am stuck on one piece of code.
I have defined a function with a subscript.
U_n= x^n + 1/x^n
When I consider (U_1)^3 I get (substituting n=1)
(U_1)^3 = (x+1/x)^3
Then after simplifying this I get
(U_1)^3 = (x^3 + 1/x^3) + 3(x+ 1/x)
But one can see this answer as
(U_1)^3 = U_3 + 3U_1
How can I get the output in terms of the U_n's?
from sympy import *
from sympy import sympify
import sympy
x=symbols('x')
def u(n):
    return x**n+1/x**n
def unu(eq1):
    c = (eq1.subs(x, exp(x))).simplify()/2
    return c.subs(cosh, Function('u')).subs(x,1)
from sympy import *
from sympy import sympify
import sympy
x=symbols('x')
def v(n):
    return x**n-1/x**n
def vnu(eq2):
    c = (eq2.subs(x, exp(x))).simplify()/2
    return c.subs(sinh, Function('v')).subs(x,1)
This is my current code. I have built it as two separate pieces, for U_n and V_n, but I cannot combine them.
Can someone please give me an idea of how to build this code using SymPy? It would be a very big help for my research.
Thank you very much.
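For what it is worth, here is a minimal sketch of one possible alternative that avoids the cosh/sinh substitution: expand the expression as a Laurent polynomial in x and pair each x**n with its mirror x**(-n). The helper name to_U and the exponent shift max_deg are made-up illustrative choices, and the sketch assumes the input is symmetric under x -> 1/x (i.e. built from U_n's only); the antisymmetric V_n part would need an analogous pass.
from sympy import symbols, Function, expand, Poly

x = symbols('x')
U = Function('U')

def u(n):
    return x**n + 1/x**n

def to_U(expr, max_deg=100):
    # Rewrite a symmetric Laurent polynomial in x as a sum of U(n) terms.
    # Multiplying by x**max_deg shifts all exponents to be non-negative.
    p = Poly(expand(expr * x**max_deg), x)
    result = 0
    for (k,), coeff in p.terms():
        n = k - max_deg
        if n > 0:
            result += coeff * U(n)   # x**n pairs with x**(-n) to form U(n)
        elif n == 0:
            result += coeff          # constant term (recall U(0) = 2)
    return result

print(to_U(u(1)**3))   # -> U(3) + 3*U(1)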
I am trying to compute the terms of a Taylor expansion separately and did not obtain the results I expected. The function to approximate is x**321, and the first three terms of its Taylor expansion around x=1 should be:
1 + 321(x-1) + 51360(x-1)**2
For some reason, the code associated with the second term is not working.
See my code below.
import sympy as sy
import numpy as np
import math
import matplotlib.pyplot as plt
x = sy.Symbol('x')
f = x**321
x0 = 1
func0 = f.diff(x,0).subs(x,x0)*((x-x0)**0/math.factorial(0))
print(func0)
func1 = f.diff(x,1).subs(x,x0)*((x-x0)**1/math.factorial(1))
print(func1)
func2 = f.diff(x,2).subs(x,x0)*((x-x0)**2/math.factorial(2))
print(func2)
The prints I obtain running this code are
1
321x - 321
51360*(x - 1)**2
I also used .evalf and .lambdify but the results were the same. I can't understand where the error is coming from.
f = x**321
x = sy.Symbol('x')
def fprime(x):
    return sy.diff(f,x)
DerivativeOfF = sy.lambdify((x), fprime(x), "numpy")
print(DerivativeOfF(1)*((x-x0)**1/math.factorial(1)))
321*x - 321
I'm obviously just starting with the language, so thank you for your help.
I found a beginner's guide to Taylor expansion in Python. Check it out; perhaps all your questions are answered there:
http://firsttimeprogrammer.blogspot.com/2015/03/taylor-series-with-python-and-sympy.html
I tested your code and it works fine. As Bazingaa pointed out in the comments, it is just a matter of how SymPy stores the expression internally: a numeric coefficient is automatically distributed over a sum, so 321*(x - 1)**1 is stored and printed as 321*x - 321. The two forms are the same polynomial.
The same thing happens in your first output line, which shows 1 instead of (x - 1)**0 (that power simply evaluates to 1).
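A quick way to check this, reusing the symbols from the question (just a sketch):
import sympy as sy
import math

x = sy.Symbol('x')
f = x**321
x0 = 1

func1 = f.diff(x, 1).subs(x, x0) * ((x - x0)**1 / math.factorial(1))
print(sy.simplify(func1 - 321*(x - x0)) == 0)   # True: the two forms are equal
print(sy.factor(func1))                         # 321*(x - 1)

# SymPy can also build the truncated series in one call:
print(f.series(x, x0, 3).removeO())             # the first three Taylor terms about x0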
Here is a snippet of my code.
from scipy.integrate import quad
from numpy import exp, log, inf
def f(x):
    return log(log(x))/(x*log(x**2))
val, err = quad(f, exp(), exp(2))
val
I know the code is structured correctly, but I cannot get the exp() limits right. What am I doing wrong? The function should output 0.069324. Thanks ahead of time for the help!
Here is the answer from WolframAlpha: [screenshot omitted]
numpy's exp is a function, not a number. You want
exp(1) = e
exp(2) = e**2
or maybe
import numpy as np
np.e
np.e**2
as your integration limits.
That said, I get
from numpy import exp, log
from scipy.integrate import quad

def f(x):
    return log(log(x))/(x*log(x**2))

val, err = quad(f, exp(1), exp(2))
val
returning 0.12011325347955035
This is definitely the value of this integral. You can change variables to verify
val,err = quad(lambda x: log(x)/(2*x),1,2)
which gives the same result
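For reference, the substitution u = log(x) reduces the original integral to exactly that one: the integral from 1 to 2 of log(u)/(2*u) du = (log 2)**2 / 4 ≈ 0.120113.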
Just replace exp() with exp(1) and exp(2) and you are good to go. By the way, once you have figured out the correct function, you can also write it as a one-line lambda. Your code is perfectly fine; I just wanted to share another way to implement the same thing.
import numpy as np
from scipy.integrate import quad

f = lambda x: np.log(np.log(x))/(x*np.log(x**2))
val, err = quad(f, np.exp(1), np.exp(2))
Goal:
To view the value of the objective function at each iteration for scipy.optimize.fmin_l_bfgs_b.
Problem:
Giving the optional argument iprint=1 should cause output to be printed. However, doing so does not result in any output.
Other info:
I am using the Anaconda 4.3 distribution of Python 2.7 on a Windows 7 machine, Spyder IDE with IPython console.
Example Code:
import numpy as np
import scipy.optimize as opt
A = np.random.rand(20,40)
b = np.random.rand(20,)
x0 = np.ones((40,))
def objective_func(x, A, b):
    objective = np.sum((A.dot(x)-b)**2) + np.sum(np.abs(x))
    return objective

def gradient_func(x, A, b):
    gradient = 2*A.T.dot(A.dot(x)-b) + 2*x/np.sqrt(x**2 + 10**(-8))
    return gradient

x_bar = opt.fmin_l_bfgs_b(func=objective_func,
                          x0=x0,
                          fprime=gradient_func,
                          args=(A, b),
                          iprint=1)
One solution is to use a lambda function as the callback function. This allows one to pass A, b to the callback function in addition to x.
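A minimal sketch of that idea, continuing the example code above (the remark about iprint is my assumption: in older SciPy versions the iprint output is written by the underlying Fortran routine, and consoles such as IPython on Windows often do not capture it):
from __future__ import print_function  # put at the very top of the script (Python 2.7)

# Print the objective at every iteration via the callback hook; the lambda
# closes over A and b so they reach objective_func alongside the iterate xk.
x_bar = opt.fmin_l_bfgs_b(func=objective_func,
                          x0=x0,
                          fprime=gradient_func,
                          args=(A, b),
                          callback=lambda xk: print(objective_func(xk, A, b)))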
I'm pretty new to python and I got stuck on this:
I'd like to use scipy.optimize.minimize to maximize a function, and I'm having some problems with the extra arguments of the function I defined.
I looked for a solution in tons of answered questions, but I can't find anything that solves my problem.
I saw in 'Structure of inputs to scipy minimize function' how to pass extra arguments that should stay constant during the minimization, and my code seems fine to me from that point of view.
This is my code:
import numpy as np
from scipy.stats import pearsonr
import scipy.optimize as optimize
def min_pears_function(a,exp):
    (b,c,d,e)=a
    return (1-(pearsonr(b + exp[0] * c + exp[1] * d + exp[2],e)[0]))
a = (log_x,log_y,log_t,log_z) # where log_x, log_y, log_t and log_z are numpy arrays with same length
guess_PF=[0.6,2.0,0.2]
res = optimize.minimize(min_pears_function, guess_PF, args=(a,), options={'xtol': 1e-8, 'disp': True})
When running the code I get the following error:
ValueError: need more than 3 values to unpack
But I can't see what required argument I'm missing. The function seems to work fine, so I guess the problem is in the optimize.minimize call?
Your error occurs here:
def min_pears_function(a,exp):
    # XXX: This is your error line
    (b,c,d,e)=a
    return (1-(pearsonr(b + exp[0] * c + exp[1] * d + exp[2],e)[0]))
This is because:
the initial value you pass to optimize.minimize is guess_PF, which has just three values ([0.6,2.0,0.2]).
this initial value is passed to min_pears_function as the variable a.
Did you mean for it to be passed as exp? Is it exp you wish to solve for? In that case, redefine the signature as:
def min_pears_function(exp, a):
...
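Putting that together, a minimal sketch of what the corrected setup might look like, assuming exp (the three values in guess_PF) is what you are solving for and a holds the fixed data arrays; the random arrays here are only stand-ins so the sketch runs, and the 'xtol' option is dropped because the default BFGS solver does not use it:
import numpy as np
from scipy.stats import pearsonr
import scipy.optimize as optimize

# Stand-in data; in the real code these are log_x, log_y, log_t, log_z.
log_x, log_y, log_t, log_z = (np.random.rand(50) for _ in range(4))
a = (log_x, log_y, log_t, log_z)

def min_pears_function(exp, a):
    b, c, d, e = a
    return 1 - pearsonr(b + exp[0]*c + exp[1]*d + exp[2], e)[0]

guess_PF = [0.6, 2.0, 0.2]
res = optimize.minimize(min_pears_function, guess_PF, args=(a,),
                        options={'disp': True})
print(res.x)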