mpmath Laplace inverse function in Python

I am trying to find the inverse Laplace transform of an expression in which all but one variable are already defined at the time of declaration:
from numpy import *
import mpmath as mp
p0 = 1
E = 2
c= 3
L = 4
x = 2.5
t = linspace(1,5,10)
ulaplace = []
def U(s):
    return(c*p0*(-exp(L*s/c) + exp(s*(L + 2*x)/c))*exp(-s*x/c)/(E*s**2*(exp(2*L*s/c) + 1)))
for ti in t:
    ulaplace.append(mp.invertlaplace(U, ti, method='talbot'))
But I am getting this error:
Traceback (most recent call last):
File "D:\TEMP\IDLEscripts\CompareAnalyticalSolutions2.py", line 46, in <module>
ulaplace.append(mp.invertlaplace(U, ti, method='talbot'))
File "C:\Python35\lib\site-packages\mpmath\calculus\inverselaplace.py", line 805, in invertlaplace
fp = [f(p) for p in rule.p]
File "C:\Python35\lib\site-packages\mpmath\calculus\inverselaplace.py", line 805, in <listcomp>
fp = [f(p) for p in rule.p]
File "D:\TEMP\IDLEscripts\CompareAnalyticalSolutions2.py", line 43, in U
return(c*p0*(-exp(L*s/c) + exp(s*(L + 2*x)/c))*exp(-s*x/c)/(E*s**2*(exp(2*L*s/c) + 1)))
TypeError: attribute of type 'int' is not callable
I also tried the lambda function format suggested by the doc website but still got the same error.
Does the mpmath.invertlaplace function require that everything be in numerical terms at the time of definition? I am asking because this worked:
>>> import mpmath as mp
>>> def F(s):
        return 1/s
>>> mp.invertlaplace(F,5, method = 'talbot')
mpf('1.0')
If so, I need to be able to circumvent this. The whole point for me is to play around with the other variables and see how they affect the inverse Laplace transform. Furthermore, one would think that the function gets evaluated before it is passed on to mpmath.
If not, then what on earth is going on here?

Alright, I got it. Basically, the function passed to mp.invertlaplace must itself use only mpmath-defined functions. In the code provided in the original question I am using exp from the numpy library, so exp(x) is really numpy.exp(x). To make the code work, it needs to call the mpmath.exp function as follows:
def U(s):
    return -p0*mp.exp(s*x/c)/(E*s*(-s*mp.exp(L*s/c)/c - s*mp.exp(-L*s/c)/c)) + p0*mp.exp(-s*x/c)/(E*s*(-s*mp.exp(L*s/c)/c - s*mp.exp(-L*s/c)/c))
I have not tested the above on the reduced example I provided in the original question, since it is a subset of the more general script. However, it should work, and this appears to be the root of the problem.
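For reference, here is the same substitution applied to the reduced U(s) from the question (a sketch, untested as noted above): only numpy's exp is replaced with mp.exp, and the star-import from numpy is narrowed so it can no longer shadow anything.
from numpy import linspace
import mpmath as mp

p0 = 1
E = 2
c = 3
L = 4
x = 2.5
t = linspace(1, 5, 10)

def U(s):
    # p0, E, c, L, x are plain numbers; s is the mpmath value supplied by invertlaplace
    return (c*p0*(-mp.exp(L*s/c) + mp.exp(s*(L + 2*x)/c))*mp.exp(-s*x/c)
            / (E*s**2*(mp.exp(2*L*s/c) + 1)))

ulaplace = [mp.invertlaplace(U, ti, method='talbot') for ti in t]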

Related

How to substitute symbols in a sympified expression properly?

My goal is to have a string turned into a symbolic expression using sympify and then make substitutions.
import sympy as sp
Eq_Str = 'a*x+b'
Eq_Sym = sp.sympify(Eq_Str)
Then, for instance, substitute a for something else:
Eq_Sym.subs(a,2)
But I get the error:
Traceback (most recent call last):
File "<ipython-input-5-e9892d6ffa06>", line 1, in <module>
Eq_Sym.subs(a,2)
NameError: name 'a' is not defined
I understand that there is no symbol a in the workspace. Am I right?
Is there a way to get the symbols from the set I get from Eq_Sym.free_symbols into the workspace so I can substitute them in Eq_Sym?
Thank you very much for taking the time to read this.
You can use globals() for that:
import sympy as sp
Eq_Str = 'a*x+b'
Eq_Sym = sp.sympify(Eq_Str)
for s in Eq_Sym.free_symbols:
    globals()[s.name] = s
print(Eq_Sym.subs(a, 2))  # b + 2*x
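As an aside, if writing into globals() is undesirable, the symbol to substitute can also be recreated or looked up directly (a small sketch using the same expression):
import sympy as sp

Eq_Sym = sp.sympify('a*x+b')
a = sp.Symbol('a')  # equal to the 'a' inside Eq_Sym; could also be fetched from Eq_Sym.free_symbols
print(Eq_Sym.subs(a, 2))  # b + 2*x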

odeint - TypeError: can't convert expression to float - Importing an expression into a function to perform odeint

This is the error I get
Traceback (most recent call last):
File "C:\Users\user\.spyder-py3\Numerical Methods Problems\FreeFall.py", line 40, in <module>
ans=odeint(vel,0,t)
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\integrate\odepack.py", line 245, in odeint
int(bool(tfirst)))
File "C:\ProgramData\Anaconda3\lib\site-packages\sympy\core\expr.py", line 325, in __float__
raise TypeError("can't convert expression to float")
TypeError: can't convert expression to float
And here is my code - I am pretty new to coding and am learning to use it for numerical computation:
from scipy.integrate import odeint
from sympy import *
import numpy as np
import math
def diff_cd(re_no):
    Re=Symbol('Re')
    expr=(24/Re)+(6/(1+(Re**0.5)))+0.4
    ans=diff(expr,Re).subs(Re,re_no)
    return ans
def diff_re(k,u_no):
    u=Symbol('u')
    expr=k*u
    ans=diff(expr,u).subs(u,u_no)
    return ans
ans = [diff_cd(20),diff_re(11,15)]
rhog=1.2
mug=1.872*(10**(-5))
a=0.3
u=Symbol('u')
pi=math.pi
k=(2*rhog*a/mug)
Re=k*u
p1=(rhog*pi*(a**2)*u*Re*((24/Re)+(6/(1+(Re**0.5)))+0.4))+(0.5*rhog*pi*(a**2)*(u**2)*diff_cd(Re)*diff_re(k,u))
ansfu=p1*(-1/24)
def vel(y,t):
    dudt = ansfu
    return dudt
t=np.linspace(1,100,100)
ans=odeint(vel,0,t)
print(ans)
I just need to get an answer without this error. Also, is there a way to do all this in a single function?
If I add a print(vel(0,0)) before the ode call I get
-271.868595022194*u**2*(-3.97723522060237e-7*u**(-0.5)/(u**0.5 +
0.00509901951359279)**2 - 1.6224e-8/u**2) -
543.737190044387*u**2*(0.4 + 6/(196.116135138184*u**0.5 + 1) + 0.000624/u)
That is, a sympy expression, not a number. odeint cannot work with that!
u is defined as a Symbol, and thus any Python expression using it will also be a sympy expression.
Especially if you are new to coding, you should stick with one package or the other. If defining the function symbolically, then use sympy and its own solvers. But if a numeric solution of the kind that scipy produces is important, then define the function with python/numpy. Don't try to mix sympy and numpy (without a lot more experience).
Old question, but it seems you could've solved this problem using sympy's lambdify function. You'd do it like this:
f = lambdify(x, x**2, modules=['scipy'])
Now f is a numeric function that you can use with scipy.
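For completeness, a minimal, self-contained sketch of that route; here x**2 is only a stand-in for the poster's symbolic expression ansfu:
import numpy as np
import sympy as sp
from scipy.integrate import odeint

x = sp.Symbol('x')
expr = x**2                                  # placeholder for the symbolic right-hand side
f = sp.lambdify(x, expr, modules=['numpy'])  # plain numeric callable

def rhs(y, t):
    return f(y)                              # returns a number, not a sympy expression

t = np.linspace(0, 1, 50)
sol = odeint(rhs, 1.0, t)                    # odeint now receives floats it can work with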

scipy.optimize.minimize Jacobian function causes 'ValueError: The truth value of an array with more than one element is ambiguous'

I am using the BFGS method, giving it the negative log likelihood of my squared exponential/RBF kernel, as well as the gradient (Jacobian) of it. Leaving out the gradient, it works fine using first differences - but the
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
error comes up once I try to use the gradient of the NLL. Also note that while my source code in the SE_der function (gradient/Jacobian) below doesn't use .any() or .all() on the result, I have also tried both of those, only to get exactly the same error.
The preceding trace is:
Traceback (most recent call last):
File "loaddata.py", line 107, in <module>
gp.fit(X, y)
File "/home/justinting/programming/bhm/ML/gp.py", line 33, in fit
res = minimize(self.SE_NLL, gp_hp_guess, method='bfgs', jac=True)
File "/usr/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 441, in minimize
return _minimize_bfgs(fun, x0, args, jac, callback, **options)
File "/usr/lib/python3.5/site-packages/scipy/optimize/optimize.py", line 865, in _minimize_bfgs
old_fval, old_old_fval)
File "/usr/lib/python3.5/site-packages/scipy/optimize/optimize.py", line 706, in _line_search_wolfe12
old_fval, old_old_fval)
File "/usr/lib/python3.5/site-packages/scipy/optimize/linesearch.py", line 282, in line_search_wolfe2
phi, derphi, old_fval, old_old_fval, derphi0, c1, c2, amax)
File "/usr/lib/python3.5/site-packages/scipy/optimize/linesearch.py", line 379, in scalar_search_wolfe2
if (phi_a1 > phi0 + c1 * alpha1 * derphi0) or \
The relevant code is as follows:
gp_hp_guess = [1.0] * 3 # initial guess
res = minimize(self.SE_NLL, gp_hp_guess, method='bfgs', jac=self.SE_der)
# other stuff
def SE_der(self, args):
    [f_err, l_scale, n_err] = args
    L = self.L_create(f_err, l_scale, n_err)
    alpha = linalg.solve(L.T, (linalg.solve(L, self.y)))  # save for use with derivative func
    aaT = alpha.dot(alpha.T)
    K_inv = np.linalg.inv(L.T).dot(np.linalg.inv(L))
    # self.K_inv = np.linalg.inv(self.L.T).dot(np.linalg.inv(self.L))
    dK_dtheta = np.gradient(self.K_se(self.X, self.X, f_err, l_scale))[0]
    der = 0.5 * np.matrix.trace((aaT - K_inv).dot(dK_dtheta))
    return -der
def SE_NLL(self, args):
    [f_err, l_scale, n_err] = args
    L = self.L_create(f_err, l_scale, n_err)
    alpha = linalg.solve(L.T, (linalg.solve(L, self.y)))  # save for use with derivative func
    nll = (
        0.5 * self.y.T.dot(alpha) +
        np.matrix.trace(L) +  # sum of diagonal
        L.shape[0]/2 * math.log(2*math.pi)
    )
    return nll
I've left out the source code of the helper functions as the NLL works fine when the gradient function isn't used, and they share the same helper functions.
When calling the SE_der function directly passing in the optimised parameters after the fact (and not actually using the gradient in the optimisation), it outputs a single number as expected (or at least I think that's what is expected), so I'm failing to spot the problem.
Is this error a misunderstanding on my part of what scipy expects in its Jacobian function, or something else? I tried digging through the Python source code, but the actual function calls dealing with these functions are hidden behind functions that don't seem to be in the Python code on GitHub - I'm not sure if they're in private/C++ repos somewhere else.
Look at the sidebar. See all those SO questions about that same ValueError?
While the circumstances vary, in nearly every case it is the result of using a boolean array in a Python context that expects a scalar boolean.
A simple example is
In [236]: if np.arange(10)>5:print('yes')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-236-633002262b65> in <module>()
----> 1 if np.arange(10)>5:print('yes')
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
np.arange(10)>5 produces a boolean array, np.array([False, False, ...]).
Combining boolean expressions can also produce this. np.arange(10)>5 | np.arange(10)<2 raises it, while (np.arange(10)>5) | (np.arange(10)<2) does not - because of the precedence of the operators (| binds more tightly than the comparisons). Using and instead of | in this context is hopeless.
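For comparison, a correctly parenthesized test that explicitly reduces the boolean array to a single truth value looks like this (a small illustrative sketch):
import numpy as np

a = np.arange(10)
mask = (a > 5) | (a < 2)   # parentheses needed because | binds tighter than the comparisons
if mask.any():             # collapse the boolean array to one scalar truth value
    print('some elements satisfy the condition')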
I'll look at your code in more detail, but in the meantime maybe this will help you find the problem yourself.
==================
from the error stack:
if (phi_a1 > phi0 + c1 * alpha1 * derphi0) or \
the code at this point expects (phi_a1 > phi0 + c1 * alpha1 * derphi0) (and whatever follows the or) to be a scalar. Presumably one of those variables is an array with multiple values. Admittedly this is occurring way down the calling stack, so it will be difficult to trace those values back to your code.
Print statements, focusing on variable type and shape, might be most useful. Sometimes with these iterative solvers the code runs fine for one loop, and then some variable changes to an array, and it chokes on the next loop.
==================
Why are you using np.matrix.trace? In my tests that produces a 2d single element np.matrix. It's not obvious that it would produce this ValueError, but it's still suspicious.

Numerical double integration of a function in python with a list of fixed variables

I am a little stuck on a function I am trying to numerically integrate through scipy, python.
For simplicity I will define the function as:
f(x, y) = SUM[ double integral of (a*x + b*y) dx dy ]
a and b are constants, but they are different for every term that is integrated. I have integrated each function separately and then summed the results over all the integrals; however, this takes significant time to calculate and is not ideal for what I am attempting to achieve.
Is there a way to integrate the entire function at once by expanding the sum such that:
f(x, y) = double integral of [ (a1*x + b1*y) + (a2*x + b2*y) + ... + (an*x + bn*y) ] dx dy
then passing the function with a list of (a,b) tuples, etc to scipy's dblquad function?
I am struggling to find anything anywhere in the literature relating to this at the moment.
EDIT:
I have included example code to show what I want to achieve a little more clearly:
import sys
import re
import math
from scipy.integrate import dblquad
def f((x,y),variables):
    V=0
    for v in variables:
        a,b=v
        V=V+ax+by
    return (V)
def integral(x_max,y_max,variables):
    return dblquad(f, 0, y_max, lambda x: 0, lambda x: x_max,args=variables)
def main():
    variables=[(1,2),(3,4),(5,6)] #example variables. The length of this list can change with the code I am running.
    x_max=y_max=1
    integral(x_max,y_max,variables)
if __name__ == '__main__':
    main()
The error that gets returned is thus:
Traceback (most recent call last):
File "integration_example.py", line 23, in <module>
main()
File "integration_example.py", line 19, in main
integral(x_max,y_max,variables)
File "integration_example.py", line 14, in integral
return dblquad(f, 0, y_max, lambda x: 0, lambda x: x_max,args=variables)
File "/usr/lib/python2.7/dist-packages/scipy/integrate/quadpack.py", line 435, in dblquad
return quad(_infunc,a,b,(func,gfun,hfun,args),epsabs=epsabs,epsrel=epsrel)
File "/usr/lib/python2.7/dist-packages/scipy/integrate/quadpack.py", line 254, in quad
retval = _quad(func,a,b,args,full_output,epsabs,epsrel,limit,points)
File "/usr/lib/python2.7/dist-packages/scipy/integrate/quadpack.py", line 319, in _quad
return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit)
File "/usr/lib/python2.7/dist-packages/scipy/integrate/quadpack.py", line 382, in _infunc
myargs = (x,) + more_args
TypeError: can only concatenate tuple (not "list") to tuple
Obviously the function doesn't like me passing a list of values to put into the integral in the way I have written this. Is there a way to do this?
(sorry that's probably a better way of phrasing the question).
I'm not entirely sure, but it seems your bug is basically that inside f you refer to the argument passed via args as variables (and args also needs to be a tuple, not a list). You should then unpack the unknown number of variables with *args. Try:
import sys
import re
import math
from scipy.integrate import dblquad
def f(x,y,*args):
    V=0
    for v in args:
        a,b=v
        V=V+a*x+b*y
    return (V)
def integral(x_max, y_max, variables):
    return dblquad(f, 0, y_max, lambda x: 0, lambda x: x_max, args=variables)
def main():
    variables=((1,2),(3,4),(5,6)) #example variables. The length of this list can change with the code I am running.
    x_max=y_max=1
    integral(x_max,y_max,variables)
if __name__ == '__main__':
    main()
(Note also that you need a*x + b*y, not ax + by.)
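As an aside, because the integrand is linear in the coefficients, the sum can equally be pulled inside the integral by adding the a's and b's first, which avoids looping over many separate integrations; a minimal sketch with the same example variables:
from scipy.integrate import dblquad

variables = ((1, 2), (3, 4), (5, 6))
a_sum = sum(a for a, b in variables)   # 9
b_sum = sum(b for a, b in variables)   # 12

# dblquad integrates func(y, x) over x in [0, 1] and y in [gfun(x), hfun(x)]
result, err = dblquad(lambda y, x: a_sum * x + b_sum * y,
                      0, 1, lambda x: 0, lambda x: 1)
print(result)  # 10.5 over the unit square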

ifft function gives "'str' object is not callable" error

I am trying to take the inverse Fourier transform of a list, and for some reason I keep getting the following error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "simulating_coherent_data.py", line 238, in <module>
exec('ift%s = np.fft.ifft(nd.array(FTxSQRT_PS%s))'(x,x))
TypeError: 'str' object is not callable
And I can't figure out where I have a string. The part of my code it relates to is as follows
def FTxSQRT_PS(FT,PS):
    # Import: The Fourier Transform and the Power Spectrum, both as lists
    # Export: The result of FTxsqrt(PS), as a list
    # Function:
    # Takes each element in the FT and PS, finds FTxsqrt(PS) for each, and
    # appends each result to a list called signal
    signal = []
    print type(PS)
    for x in range(len(FT)):
        indiv_signal = np.abs(FT[x])*math.sqrt(PS[x])
        signal.append(indiv_signal)
    return signal

for x in range(1,number_timesteps+1):
    exec('FTxSQRT_PS%s = FTxSQRT_PS(fshift%s,power_spectrum%s)'%(x,x,x))
    exec('ift%s = np.fft.ifft(FTxSQRT_PS%s)'(x,x))
Here FTxSQRT_PS%s are all lists, fshift%s is an np.array, and power_spectrum%s is a list. I've also tried making FTxSQRT_PS%s an np.array, but that did not help.
I have very similar code a few lines up that works fine:
for x in range(1,number_timesteps+1):
    exec('fft%s = np.fft.fft(source%s)'%(x,x))
where source%s are all type np.array
The only thing I can think of is that maybe np.fft.ifft is not how I should be taking the inverse Fourier transform for Python 2.7.6 but I also cannot find an alternative.
Let me know if you'd like to see the whole code, there is about 240 lines up to where I'm having trouble, though a lot of that is commenting.
Thanks for any help,
Teresa
You are missing a %
exec('ift%s = np.fft.ifft(FTxSQRT_PS%s)'(x,x))
Should be:
exec('ift%s = np.fft.ifft(FTxSQRT_PS%s)'%(x,x))
