How can I minimize an unconstrained function with respect to a[0] and a[1]?
Example (this is a simple example so that I can understand scipy, numpy and Python):
import numpy as np
from scipy.integrate import *
from scipy.optimize import *
def function(a):
    return quad(lambda t: np.cos(a[0])*np.sin(a[1])*t, 0, 3)
I tried:
l=np.array([0.1,0.2])
res=minimize(function,l, method='nelder-mead',options={'xtol': 1e-8, 'disp': True})
but I get errors.
I get the correct results in MATLAB.
Any ideas?
Thanks in advance.
This is just a guess, because you haven't included enough information in the question for anyone to really know what the problem is. Whenever you ask a question about code that generates an error, always include the complete error message in the question. Ideally, you should include a minimal, complete and verifiable example that we can run to reproduce the problem.
Having said that...
scipy.integrate.quad returns two values: the estimate of the integral, and an estimate of the absolute error of the integral. It looks like you haven't taken this into account in function. Try something like this:
def function(a):
    intgrl, abserr = quad(lambda t: np.cos(a[0])*np.sin(a[1])*t, 0, 3)
    return intgrl
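Putting the pieces together, here is a minimal runnable sketch (using the same integrand and starting point as the question; note that newer SciPy versions spell the Nelder-Mead tolerance option 'xatol' rather than 'xtol'):
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def function(a):
    # quad returns (integral, abserr); keep only the integral estimate
    intgrl, abserr = quad(lambda t: np.cos(a[0])*np.sin(a[1])*t, 0, 3)
    return intgrl

l = np.array([0.1, 0.2])
res = minimize(function, l, method='nelder-mead',
               options={'xatol': 1e-8, 'disp': True})
print(res.x)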
Related
So, if the title is not clear, I am trying to take the function sin(x)/(x^3 - 2x^2 + 4x - 8) and find its integral. I know that the analytical solution is -pi/(8e^2) + (pi/8)*cos(2), or approximately -0.2165...
The code that I cannot seem to create properly so far looks like:
import numpy as np
from sympy import integrate
from sympy.abc import x
f = integrate(np.sin(x)/(x**3 - 2*x**2 + 4*x - 8), (x, -5, 2))
print(f)
I would also like to eventually plot this, which I plan on doing myself; however, this gives me the error
"TypeError: loop of ufunc does not support argument 0 of type Symbol which has no callable sin method"
How do I go about fixing this?
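For what it's worth, the TypeError comes from calling the NumPy ufunc np.sin on a SymPy Symbol; a minimal sketch of the likely fix (my suggestion, not from the original thread) is to use sympy.sin so the integrand stays symbolic. Note that the denominator factors as (x - 2)(x^2 + 4) and vanishes at the upper limit x = 2, so whether the definite integral converges over these limits is a separate question:
from sympy import integrate, sin
from sympy.abc import x

# sympy.sin keeps the expression symbolic; np.sin cannot handle a Symbol
f = integrate(sin(x)/(x**3 - 2*x**2 + 4*x - 8), (x, -5, 2))
print(f)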
I'm new to Python and I'm trying to figure out how everything works. I have a little problem with the minimize function of the scipy.optimize package. I try to minimize a given function with some start values, but Python gives me very high parameter values.
This is my simple code:
import numpy as np
from scipy.optimize import minimize

y_wert = np.array([1, 2, 3, 4, 5, 6, 7, 8])
x_wert = np.array([1, 2, 3, 4, 5, 6, 7, 8])

def Test(x):
    Summe = 0
    for i in range(0, len(y_wert)):
        Summe = Summe + (y_wert[i] - (x[0]*x_wert[i] + x[1]))
    return Summe
x_0 = [1,0]
xopt = minimize(Test,x_0, method='nelder-mead',options={'xatol': 1e-8, 'disp': True})
print(xopt)
If I run this script, the best parameters found are:
[1.02325529e+44, 9.52347084e+40]
which really doesn't solve this problem. I've also tried some slightly different start values, but that doesn't solve my problem.
Can anyone give me a clue as to where my mistake lies?
Thanks a lot for your help!
Your test function sums the raw residuals rather than their squares, so it is effectively a straight line with a negative gradient: there is no minimum, it decreases without bound, and that explains your huge results. Try something like the squared residuals instead.
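Concretely, a minimal sketch of the squared-error version (same data and start values as in the question) would be:
import numpy as np
from scipy.optimize import minimize

y_wert = np.array([1, 2, 3, 4, 5, 6, 7, 8])
x_wert = np.array([1, 2, 3, 4, 5, 6, 7, 8])

def Test(x):
    # sum of *squared* residuals, which is bounded below
    return np.sum((y_wert - (x[0]*x_wert + x[1]))**2)

xopt = minimize(Test, [1, 0], method='nelder-mead',
                options={'xatol': 1e-8, 'disp': True})
print(xopt.x)  # should be close to [1, 0] for this data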
I am trying to use Q-function values for a problem, but I don't know which function computes it in Python.
What is the Python equivalent of the following Octave code?
>> f=0:0.01:1;
>> qfunc(f)
The Q-function can be expressed in terms of the error function. SciPy has the error function, scipy.special.erf(), which can be used to calculate the Q-function.
import numpy as np
from scipy import special
f = np.linspace(0,1,101)
0.5 - 0.5*special.erf(f/np.sqrt(2)) # Q(f) = 0.5 - 0.5 erf(f/sqrt(2))
Take a look at scipy.stats.norm: https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.stats.norm.html
Looks like the norm.sf method (survival function) might be what you're looking for.
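A minimal sketch (assuming the same input grid as the Octave snippet); this should agree with the erf-based formula above:
import numpy as np
from scipy.stats import norm

f = np.linspace(0, 1, 101)
q = norm.sf(f)  # survival function: Q(f) = 1 - CDF(f)
print(q[:5])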
I've used this Q-function for my code and it worked perfectly well:
import numpy as np
from scipy import special as sp

def qfunc(x):
    return 0.5 - 0.5*sp.erf(x/np.sqrt(2))
I haven't used this one, but I think it should work:
def invQfunc(x):
    return np.sqrt(2)*sp.erfinv(1 - 2*x)
references:
https://mail.python.org/pipermail/scipy-dev/2016-February/021252.html
Python equivalent of MATLAB's qfuncinv()
Thanks @Anton for letting me know how to write a good answer.
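As a quick sanity check (my own sketch, assuming the two definitions above), the pair should invert each other:
p = qfunc(1.5)
print(p)            # about 0.0668
print(invQfunc(p))  # should recover about 1.5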
For E = 0.46732451 and t = 1.07589765 I am trying to solve for the upper limit z_up of the integral t = \int_{0}^{z_up} 1/\sqrt{2(E - z^2)} dz.
I plotted this function; around t = 1 it kind of asymptotes.
I have the following code
import numpy as np
from scipy import integrate
from scipy.optimize import fsolve

def fg(z_up, t, E):
    def h(z, E):
        return 1/np.sqrt(2*(E - z**2))
    b, err = integrate.quad(h, 0, z_up, args=(E,))
    return b - t

x0 = 0.1
print(fsolve(fg, x0, args=(1.07589765, 0.46732451))[0])
But this code just outputs the guess value no matter what I put in, so I am guessing it has something to do with the fact that the curve asymptotes there. I should note that this code works for other values of t that are away from the asymptotic region.
Can anyone help me resolve this?
Thanks
EDIT: After playing around for a while, I solved the problem, but it's kind of a patchwork; it only works for similar problems, not in general (or does it?).
I made the following changes: the maximum value that z can attain is sqrt(0.46732451), so I set x0 = 0.5*np.sqrt(0.46732451) and set fsolve's factor option anywhere between 0.1 and 1, and out pops the correct answer. I don't have an explanation for this; perhaps someone who is an expert in this matter can help?
You should use bisect instead, as it handles nan without problems:
from scipy.optimize import bisect
print(bisect(fg, 0.4, 0.7, args=(1.07589765, 0.46732451)))
Here 0.4 and 0.7 are taken as an example, but you can generalize this for almost any diverging integral by using 0 and, say, 1e12 as the limits.
However, I'm not sure I understand what you really want to do... if you want to find the limit at which the integral diverges, cf. your
I am trying to solve for the upper limit of the integral
then it's simply z_up -> \sqrt{E} \approx 0.683611374...
So to find the (approximate) numerical value of the integral you just have to decrease z_up from that value until quad stops giving a nan...
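A rough sketch of that scan (my own illustration, assuming the integrand from the question): step z_up down from \sqrt{E} and watch where quad starts returning a finite value:
import numpy as np
from scipy import integrate

E = 0.46732451

def h(z, E):
    return 1/np.sqrt(2*(E - z**2))

# step z_up down from sqrt(E) and inspect where quad returns a finite value
for eps in [0.0, 1e-12, 1e-9, 1e-6, 1e-3]:
    z_up = np.sqrt(E) - eps
    val, err = integrate.quad(h, 0, z_up, args=(E,))
    print(z_up, val)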
I am fairly new to Python and trying to transfer some code from MATLAB to Python. I am trying to optimize a function in Python using fmin_bfgs. I always try to vectorize the code when possible, but I ran into the following problem that I can't figure out. Here is a test example.
from pylab import *
from scipy.optimize import fmin_bfgs
## Create some linear data
L=linspace(0,10,100).reshape(100,1)
n=L.shape[0]
M=2*L+5
L=hstack((ones((n,1)),L))
m=L.shape[0]
## Define sum of squared errors as non-vectorized and vectorized
def Cost(theta, X, Y):
    return 1.0/(2.0*m)*sum((theta[0] + theta[1]*X[:,1:2] - Y)**2)

def CostVec(theta, X, Y):
    err = X.dot(theta) - Y
    resid = err**2
    return 1.0/(2.0*m)*sum(resid)
## Initialize the theta
theta=array([[0.0], [0.0]])
## Run the minimization on the two functions
print(fmin_bfgs(Cost, x0=theta, args=(L, M)))
print(fmin_bfgs(CostVec, x0=theta, args=(L, M)))
The first answer, with the unvectorized function, gives the correct answer, which is just the vector [5, 2]. But the second answer, using the vectorized form of the cost function, returns roughly [15, 0]. I have figured out that the 15 doesn't appear from nowhere: it is 2 times the mean of the data plus the intercept, i.e., 2*5 + 5. Any help is greatly appreciated.
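For what it's worth, a likely culprit (my own guess, not part of the original post) is broadcasting: fmin_bfgs flattens x0 to a 1-D array, so inside CostVec, X.dot(theta) has shape (100,) while Y has shape (100, 1), and err silently broadcasts to a (100, 100) matrix. A sketch of one possible fix, assuming the m, L and M defined above:
import numpy as np

def CostVec(theta, X, Y):
    # ravel Y so the residual has shape (n,), not a broadcast (n, n) matrix
    err = X.dot(theta) - Y.ravel()
    return 1.0/(2.0*m)*np.sum(err**2)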