For E = 0.46732451 and t = 1.07589765, I am trying to solve for the upper limit z_up of the integral t = \int_{0}^{z_{up}} \frac{dz}{\sqrt{2(E - z^2)}}. I plotted t as a function of the upper limit [plot omitted]. Around t = 1 it kind of asymptotes.
I have the following code
import numpy as np
from scipy import integrate
from scipy.optimize import fsolve

def fg(z_up, t, E):
    def h(z, E):
        return 1/np.sqrt(2*(E - z**2))
    b, err = integrate.quad(h, 0, z_up, args=(E,))
    return b - t

x0 = 0.1
print(fsolve(fg, x0, args=(1.07589765, 0.46732451))[0])
But this code just outputs the guess value, no matter what initial guess I supply, so I am guessing it has something to do with the fact that the curve asymptotes there. I should note that this code works for other values of t that are away from the asymptotic region.
Can anyone help me resolve this?
Thanks
EDIT: After playing around for a while, I solved the problem, but it's kind of a patchwork fix; it only works for similar problems, not in general (or does it?).
I made the following changes: the maximum value that z can attain is sqrt(0.46732451), so I set x0 = 0.5*np.sqrt(0.46732451) and set fsolve's factor option anywhere between 0.1 and 1, and out pops the correct answer. I don't have an explanation for this; perhaps someone who is an expert in this matter can help?
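For reference, the patched call probably looks something like this sketch, continuing the script above (factor is fsolve's initial step-bound option; the exact value 0.5 is just an assumption for illustration):
E = 0.46732451
x0 = 0.5*np.sqrt(E)   # start halfway to the singularity at sqrt(E)
print(fsolve(fg, x0, args=(1.07589765, E), factor=0.5)[0])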
You should use bisect instead, as it handles NaN without problems:
print(bisect(fg, 0.4, 0.7, args=(1.07589765, 0.46732451)))
Here 0.4 and 0.7 are just an example bracket, but you can generalize this to almost any diverging integral by using 0 and, say, 1e12 as the limits.
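Putting the whole thing together, a minimal self-contained sketch (reusing fg from the question) might look like:
import numpy as np
from scipy import integrate
from scipy.optimize import bisect

def fg(z_up, t, E):
    def h(z, E):
        return 1/np.sqrt(2*(E - z**2))
    b, err = integrate.quad(h, 0, z_up, args=(E,))
    return b - t

# bisect tolerates a NaN at one end of the bracket (the point of this
# answer): NaN fails every sign comparison, so the bracket keeps
# shrinking instead of the solver erroring out past sqrt(E).
print(bisect(fg, 0.4, 0.7, args=(1.07589765, 0.46732451)))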
However, I'm not sure I understand what you really want to do... if you want to find the limit at which the integral diverges, cf. your
I am trying to solve for the upper limit of the integral
then it's simply for z_up -> \sqrt{E} \approx 0.683611374...
So to find the (approximate) numerical value of the integral, you just have to decrease z_up from that value until quad stops returning NaN...
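As a cross-check (my own aside, not part of the original answer): strictly speaking, only the integrand blows up at \sqrt{E}; the integral itself stays finite, because the antiderivative is \frac{1}{\sqrt{2}} \arcsin(z/\sqrt{E}). So t approaches t_{max} = \pi/(2\sqrt{2}) \approx 1.1107 as z_up \to \sqrt{E}, and the equation can even be inverted in closed form, z_{up} = \sqrt{E} \sin(\sqrt{2} t):
import numpy as np

E, t = 0.46732451, 1.07589765
t_max = np.pi/(2*np.sqrt(2))             # integral value at z_up = sqrt(E)
z_up = np.sqrt(E)*np.sin(np.sqrt(2)*t)   # closed-form inverse of t(z_up)
print(t_max, z_up)                       # t < t_max, so a root exists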
Related
I am trying to solve an elliptic differential equation using the fourth-order Runge-Kutta method in Python.
After execution, I get only a very small part of the plot that should be obtained, along with an error saying:
"RuntimeWarning: invalid value encountered in double_scalars"
import numpy as np
import matplotlib.pyplot as plt

# Define constants
g = 9.8
L = 1.04

# Define the differential function
def fun(y, x):
    return -(2*(g/L)*(np.cos(y) - np.cos(np.pi/6)))**(1/2)

# Define variable arrays
x = np.zeros(1000)
y = np.zeros(1000)
y[0] = np.pi/6
dx = 0.5

# Runge-Kutta method
for i in range(len(y)-1):
    k1 = fun(x[i], y[i])
    k2 = fun(x[i]+dx/2, y[i]+dx*k1/2)
    k3 = fun(x[i]+dx/2, y[i]+dx*k2/2)
    k4 = fun(x[i]+dx, y[i]+dx*k3)
    y[i+1] = y[i] + dx/6*(k1+2*k2+2*k3+k4)
    x[i+1] = x[i] + dx

#print(y)
#print(x)
plt.plot(x, y)
plt.xlabel('Time')
plt.ylabel('Theta')
plt.grid()
And the graph I obtain looks like this: [plot omitted].
My question is why am I getting the error message? Thanks for helping!
Several points lead to this behavior. First, you switched the order of the arguments in the ODE function, probably to make it compatible with odeint. Use the tfirst=True optional argument to avoid that and have the independent variable always first.
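As a sketch of that first point (my illustration, not the asker's code): with odeint the mismatch disappears if the function takes the independent variable first and you pass tfirst=True (available since SciPy 1.1):
import numpy as np
from scipy.integrate import odeint

g, L = 9.8, 1.04

def fun(x, y):   # independent variable first, matching tfirst=True
    return -(2*(g/L)*(np.cos(y) - np.cos(np.pi/6)))**0.5

x = np.linspace(0, 0.5, 51)
y = odeint(fun, np.pi/6 - 1e-8, x, tfirst=True)
# The negative-radicand problem discussed below can still bite here.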
The actual source of the error is the term
(np.cos(y) - np.cos(np.pi/6))**(1/2)
Remember that in your version y has the value x[i], so at some point the expression under the root becomes negative.
If you correct the first point, you will probably still encounter the second error, as the exact solution moves parabolically towards the fixed point, so the stages of RK4 are likely to overshoot. One can fix that by providing a safeguarded square-root function:
def softroot(x): return x/max(1e-12, abs(x))**0.5

# Define the differential function
def fun(x, y):
    return -(2*(g/L)*softroot(np.cos(y) - np.cos(np.pi/6)))

# Define variable arrays
dx = 0.01
x = np.arange(0, 1, dx)
y = np.zeros(x.shape)
y[0] = np.pi/6
...
results in a plot [plot omitted: the solution stays constant], as the solution already starts in the fixed point. Shifting the initial point slightly down, to y[0] = np.pi/6 - 1e-8, produces a jump to the fixed point below.
I'm new to Python and am trying to figure out how everything works. I have a little problem with the minimize function of the scipy.optimize package. I try to minimize a given function with some start values, but Python gives me very high parameter values.
This is my simple code:
import numpy as np
from scipy.optimize import minimize

y_wert = np.array([1, 2, 3, 4, 5, 6, 7, 8])
x_wert = np.array([1, 2, 3, 4, 5, 6, 7, 8])

def Test(x):
    Summe = 0
    for i in range(0, len(y_wert)):
        Summe = Summe + (y_wert[i] - (x[0]*x_wert[i] + x[1]))
    return Summe

x_0 = [1, 0]
xopt = minimize(Test, x_0, method='nelder-mead', options={'xatol': 1e-8, 'disp': True})
print(xopt)
If I run this script, the best parameters found are:
[1.02325529e+44, 9.52347084e+40]
which really doesn't solve this problem. I've also tried some slightly different start values, but that doesn't solve my problem.
Can anyone give me a clue as to where my mistake lies?
Thanks a lot for your help!
Your test function is effectively a straight line with negative gradient, so there is no minimum: it's an infinitely decreasing function, which explains your large results. Try something like x squared instead.
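A minimal sketch of what "x squared" means here, reusing the arrays from the question: sum the squared residuals, so the objective is bounded below (note that current SciPy spells the Nelder-Mead tolerance option xatol):
def Test(x):
    # sum of squared residuals instead of a signed sum
    return np.sum((y_wert - (x[0]*x_wert + x[1]))**2)

xopt = minimize(Test, x_0, method='nelder-mead', options={'xatol': 1e-8, 'disp': True})
print(xopt.x)   # should land near [1, 0] for this data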
We're supposed to plot the sine wave with the help of Matplotlib, but in degrees.
First we're supposed to get sin(x) from 0 to 360 degrees, every ten degrees.
I'm a beginner, would really appreciate the help.
import math
import numpy as np
import matplotlib.pyplot as plot

def getsin():
    for i in range(0, 361, 10):
        y = math.sin(math.radians(i))
        sinvalue.append(round(y,3)

sinvalue = []
getsin()
x = np.linspace(0, 360, 37)
plot.plot(x, sinvalue)
plot.xlabel("x, degrees")
plot.ylabel("sin(x)")
plot.show()
Your code is almost OK (you missed a closing parenthesis on the line sinvalue.append(round(y,3))), but it can be perfected.
E.g., it's usually considered bad practice to update a global variable (I mean sinvalue) from inside a function... I don't say that it should never be done, I say that it should be avoided as far as possible.
A more common pattern is to have the function return a value...
def getsin():
    sines = []
    for i in range(0, 361, 10):
        y = math.sin(math.radians(i))
        sines.append(round(y, 3))
    return sines
Why do I say this is better? Compare
sinvalue=[]
getsin()
with
sinvalue = getsin()
At least in my opinion, the second version is waaay better, because it makes clear what is happening: now sinvalue is clearly the result of getting the sines...
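For completeness, a self-contained version might look like this sketch (the imports and the plot alias are my assumptions about the asker's setup):
import math
import numpy as np
import matplotlib.pyplot as plot

def getsin():
    sines = []
    for i in range(0, 361, 10):
        sines.append(round(math.sin(math.radians(i)), 3))
    return sines

sinvalue = getsin()
x = np.linspace(0, 360, 37)   # 37 points: 0, 10, ..., 360
plot.plot(x, sinvalue)
plot.xlabel("x, degrees")
plot.ylabel("sin(x)")
plot.show()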
How can I minimize a function (unconstrained) with respect to a[0] and a[1]?
Example (this is a simple example so I can understand scipy, numpy, and Python):
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def function(a):
    return quad(lambda t: np.cos(a[0])*np.sin(a[1])*t, 0, 3)
I tried:
l = np.array([0.1, 0.2])
res = minimize(function, l, method='nelder-mead', options={'xtol': 1e-8, 'disp': True})
but I get errors.
I get the expected results in MATLAB.
Any ideas?
Thanks in advance.
This is just a guess, because you haven't included enough information in the question for anyone to really know what the problem is. Whenever you ask a question about code that generates an error, always include the complete error message in the question. Ideally, you should include a minimal, complete and verifiable example that we can run to reproduce the problem. Currently, you define function, but later you use the undefined function chirplet. That makes it a little bit harder for anyone to understand your problem.
Having said that...
scipy.integrate.quad returns two values: the estimate of the integral, and an estimate of the absolute error of the integral. It looks like you haven't taken this into account in function. Try something like this:
def function(a):
    intgrl, abserr = quad(lambda t: np.cos(a[0])*np.sin(a[1])*t, 0, 3)
    return intgrl
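With that change, the rest of the script should run; a minimal self-contained sketch (using xatol, the current spelling of the Nelder-Mead tolerance option):
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def function(a):
    intgrl, abserr = quad(lambda t: np.cos(a[0])*np.sin(a[1])*t, 0, 3)
    return intgrl

l = np.array([0.1, 0.2])
res = minimize(function, l, method='nelder-mead', options={'xatol': 1e-8, 'disp': True})
print(res.x)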
I'm working on some python code to interpolate irregular data onto a 180° lat x 360° lon spherical grid. The code is currently hanging when I call the following:
import numpy as np
from numpy import pi
from scipy.interpolate import LSQSphereBivariateSpline

def geo_interp(lats, lons, data, grid_size_deg):
    deg2rad = pi/180.
    new_lats = np.linspace(grid_size_deg, 180, 180) * deg2rad
    new_lons = np.linspace(grid_size_deg, 360, 360) * deg2rad
    knotst, knotsp = new_lats.copy(), new_lons.copy()
    knotst[0] += .0001
    knotst[-1] -= .0001
    knotsp[0] += .0001
    knotsp[-1] -= .0001
    lut = LSQSphereBivariateSpline(lats.ravel(), lons.ravel(), data.T.ravel(), knotst, knotsp)
    data_interp = lut(new_lats, new_lons)
    return data_interp
The arrays I'm using as arguments when I call the above subroutine all fit the requirements of LSQSphereBivariateSpline as listed in the documentation. When I run it, it takes much longer than I feel like it should take to process a 180x360 dataset.
When I run the script with python -m trace --trace, the last line of output before nothing happens for a long time is
fitpack2.py(1025): w=w, eps=eps)
As far as I can tell, line 1025 of fitpack2.py is in a comment, which is even more confusing.
So my questions are:
1. Is there a way to tell if it's hanging or just very slow?
2. If it's hanging, how might I fix it?
The only thing I can think of is that I have no idea what I'm doing as far as choosing knots. Is there a good way to choose those? I just went with the grid I'll be interpolating to later, since the example in the doc seemed to be an arbitrary grid.
UPDATE: It finally finished after about 3 hours, but the "interpolated data" looked like random noise. Also, if this is relevant: as far as I can tell, LSQSphereBivariateSpline is the only function I can use for this, because my lats and lons are not strictly increasing.
Also, I should add that when it finished, it output the following warning:
Warning (from warnings module):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/interpolate/fitpack2.py", line 1029
warnings.warn(message)
UserWarning:
WARNING. The coefficients of the spline returned have been computed as the
minimal norm least-squares solution of a (numerically) rank
deficient system (deficiency=16336, rank=48650). Especially if the rank
deficiency, which is computed by 6+(nt-8)*(np-7)+ier, is large,
the results may be inaccurate. They could also seriously depend on
the value of eps.
SOLUTION: I had far too many knots, causing both the glacial pace and the useless results.
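For illustration, here is a rough sketch of what a much coarser knot grid looks like (the data is synthetic and the knot counts are my own guess; the point is just that the knots should be far fewer than the data points, and strictly interior to the domain):
import numpy as np
from scipy.interpolate import LSQSphereBivariateSpline

rng = np.random.default_rng(0)
theta = rng.uniform(0.01, np.pi - 0.01, 5000)   # colatitude, radians
phi = rng.uniform(0.01, 2*np.pi - 0.01, 5000)   # longitude, radians
vals = np.cos(3*theta) + 0.5*np.sin(2*phi)      # synthetic scattered field

# Roughly ten interior knots per direction, not one per output grid cell.
knotst = np.linspace(0, np.pi, 12)[1:-1]        # strictly inside (0, pi)
knotsp = np.linspace(0, 2*np.pi, 14)[1:-1]      # strictly inside (0, 2*pi)

lut = LSQSphereBivariateSpline(theta, phi, vals, knotst, knotsp)

new_lats = np.linspace(np.pi/180, np.pi, 180)
new_lons = np.linspace(np.pi/180, 2*np.pi, 360)
grid = lut(new_lats, new_lons)                  # 180 x 360 interpolated grid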