Passing arguments correctly to the scipy minimizer - Python

I am trying to minimize a log-likelihood with respect to Fsc, Qsc and Rsc:
def llik_scalars(Fsc, Qsc, Rsc, pred_state, pred_P, y):
    T = len(pred_P)
    #pred_state = np.array([pred_state[t].item() for t in range(len(pred_state))])
    #pred_P = np.array([pred_P[t].item() for t in range(len(pred_P))])
    Sigmat = np.array(pred_P) + Rsc
    Mut = pred_state
    LL = 0
    for t in range(T):
        exponent = -0.5 * (y[t]-Mut[t])**2 / Sigmat[t]
        cc = 1 / math.sqrt(2*math.pi*Sigmat[t])
        LL -= math.log(cc*math.exp(exponent))
    return LL
At first I tried to pass my pred_state and pred_P as lists of matrices. These matrices are of size 1x1, so with the code that is commented out I retrieved lists of the numbers inside the matrices.
However, as I was not sure the arguments could be passed in that form, and I read that arrays can be passed, the commented-out code is now executed BEFORE I pass pred_state and pred_P as arguments. I thus pass them as numpy arrays.
I tried to do this using the scipy minimizer:
x0 = [0.5, np.var(y)/3, np.var(y) *2/3]
minimize(llik_scalars, x0, method = 'nelder-mead', args=(pred_state, pred_P, y))
I get this error:
llik_scalars() missing 2 required positional arguments: 'pred_P' and 'y'
Following another topic on Stack Overflow, I adapted my code to the following, hoping to solve my problem:
def llik_scalars(Fsc, Qsc, Rsc, *args):
    pred_state = args[0]
    pred_P = args[1]
    y = args[2]
    T = len(pred_P)
    #pred_state = np.array([pred_state[t].item() for t in range(len(pred_state))])
    #pred_P = np.array([pred_P[t].item() for t in range(len(pred_P))])
    Sigmat = np.array(pred_P) + Rsc
    Mut = pred_state
    LL = 0
    for t in range(T):
        exponent = -0.5 * (y[t]-Mut[t])**2 / Sigmat[t]
        cc = 1 / math.sqrt(2*math.pi*Sigmat[t])
        LL -= math.log(cc*math.exp(exponent))
    return LL
This, however, results in the following error:
pred_P = args[1]
IndexError: tuple index out of range
I don't see why this is not working. Please help me out :)
-- EDIT:--
The first few entries of pred_state, pred_P and y, as I pass them into llik_scalars. Note the initial guess for the state is 0, and I use a sort of diffuse prior by setting my variance (pred_P) to a million. I obtained pred_state and pred_P from a Kalman filter with initial guesses for my F, Q and R:
pred_state[:5]
Out[121]: array([ 0. , 0.6097107 , 0.29789331, 0.30998801, -0.33307371])
pred_P[:5]
Out[122]:
array([1.00000000e+06, 1.24999975e+00, 1.13888888e+00, 1.13311688e+00,
1.13280061e+00])
y[:5]
Out[123]: array([ 1.21942262, 0.58464737, 0.90278035, -1.52760793, -0.80572172])
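For reference, scipy.optimize.minimize calls the objective as fun(x, *args): the whole x0 vector is bundled into a single first argument rather than being spread over Fsc, Qsc and Rsc, which is why the remaining arguments end up shifted in both versions above (and why args[1] is out of range in the second one). A minimal sketch of an objective written against that convention, assuming the pred_state, pred_P and y shown in this edit, might look like this:

import math
import numpy as np
from scipy.optimize import minimize

def llik_scalars(params, pred_state, pred_P, y):
    # minimize hands over the whole parameter vector as one argument
    Fsc, Qsc, Rsc = params
    T = len(pred_P)
    Sigmat = np.array(pred_P) + Rsc
    Mut = pred_state
    LL = 0
    for t in range(T):
        exponent = -0.5 * (y[t] - Mut[t])**2 / Sigmat[t]
        cc = 1 / math.sqrt(2 * math.pi * Sigmat[t])
        LL -= math.log(cc * math.exp(exponent))
    return LL

x0 = [0.5, np.var(y)/3, np.var(y)*2/3]
res = minimize(llik_scalars, x0, method='nelder-mead', args=(pred_state, pred_P, y))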

Related

How to fit a function with an integral in python

I have to fit a fairly complex function to the data of an experiment, and so far I get values that make no sense. The function looks like this function. I have tried the following:
R0 = 2.5e-9
def integrand1(r, args):
    t, W0, n = args
    return 4*pi*n*W0*np.exp(-2*r/R0)*np.exp(-W0*np.exp(-2*r/R0)*t)*r**2
def integrand2(r, args):
    t, W0, n = args
    return 4*pi*n*(np.exp(-W0*np.exp(-2*r/R0)*t)-1)*r**2
def fit(t, W0, n):
    res = scipy.integrate.quad(integrand1, 0.0, np.inf, [t,W0,n])*np.exp(scipy.integrate.quad(integrand2, 0, np.inf, [t,W0,n]))
    return res[0]
vcurve = np.vectorize(fit, excluded=set([1]))
popt, pcov = scipy.optimize.curve_fit(vcurve, t, y,p0=[0,0], bounds = ((0,0),(np.inf,1)))
print(popt)
print(pcov)
I'm really unsure how to proceed, because the code 'apparently' works, but I either get an error of infinity or 0, neither of which makes sense. I've never dealt with such a complicated function, so I assume I may be missing some steps that could help me prevent or fix this issue and make the fit work. Any help would be greatly appreciated!
Edit 1: This is the second attempt at the fit:
def W(W0,r):
    return W0*exp(-2*r/R0)
def I(t,W0,R0,n):
    out = []
    for i in t:
        t1 = 4*pi*n * quad(lambda r: W0 * exp(-2*r/R0 -W(W0,r)*i)*r**2,0,inf)[0]
        t2 = exp(4*pi*n * quad(lambda r: (exp(-W(W0,r)*i) - 1) * r**2,0,inf)[0])
        out.append(t1*t2/(pi*n*W0*R0**3)) # normalization
    return out
pars = Parameters()
pars.add('W0', value=1, min=0)
pars.add('n', value=1, min=0)
pars.add('R0', value=2.5e-9, vary = False)
mdl = Model(I)
result = mdl.fit(y,t,params=pars,nan_policy='propagate')
comps = result.eval_components()
print(result.fit_report())
Here I get the error message: fit() got multiple values for argument 'params'. Any kind of help would be greatly appreciated!
For extra information, I(t) and t consist of 20000 rows. t goes from 0 to 3.2e-5 s, and here is a sample of I(t): 14, 19, 18, 10, 10, 15, 15, 23, 16, 74, 54, 44, 31, 31, 26, 39, 31, 46, 31, 23.
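On the "fit() got multiple values for argument 'params'" error: lmfit's Model.fit takes the data as its first positional argument and its second positional argument is params, so passing t positionally collides with the params keyword; independent variables are expected as keyword arguments named after the model function's arguments. A minimal sketch of the call, assuming the Model(I) setup and the y, t data from the question, might look like:

from lmfit import Model, Parameters

pars = Parameters()
pars.add('W0', value=1, min=0)
pars.add('n', value=1, min=0)
pars.add('R0', value=2.5e-9, vary=False)

mdl = Model(I)   # by default the first argument of I (here t) is the independent variable
result = mdl.fit(y, params=pars, t=t, nan_policy='propagate')   # pass t by keyword, not positionally
print(result.fit_report())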

curve_fit making called func raise an IndexError

I'm trying to fit a parameter eta_H in function TGp_xx to some data (x_data, data_num_xx) using curve_fit. Now, the code below is a reduced version of what I'm using and it won't work by itself, but I hope the issue is conceptual enough to be understandable even from this:
from scipy.optimize import curve_fit
Lx = 150
y_cut = 20
data = np.loadtxt("../dump/results.dat")
ux = data[:,3]
ux = np.reshape(ux , (Ly, Lx))
def Par_x(x,y,vec):
    fdx = vec[(x+1)%Lx , y]
    fsx = vec[(x-1+Lx)%Lx , y]
    return (fdx - fsx) / 2.0
def TGp_xx(x, eta_H): return 2*eta_H*Par_x(x,y_cut,ux)
x_data = np.arange(Lx, dtype=np.int)
data_num_xx = np.empty(Lx, dtype='float64') #this is just a placeholder
popt_xx, pcov_xx = curve_fit(TGp_xx, x_data, data_num_xx)
I get an IndexError raised within Par_x:
fdx = vec[(x+1)%Lx , y]
IndexError: arrays used as indices must be of integer (or boolean) type
I tried something simpler like calling TGp_xx(x_data, some_constant) outside curve_fit, and it works. I don't really get why I get the IndexError inside curve_fit, as if I'm passing a float value (or an array of floats) as x, which can't be used as an index.
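One detail worth knowing: curve_fit converts the xdata you pass in to a float array before handing it to the model function, so even an integer x_data arrives inside TGp_xx as floats, which cannot be used as indices. A minimal sketch of a workaround, assuming the x values really are whole-numbered grid positions as in the question, is to cast back to integers inside the model:

def TGp_xx(x, eta_H):
    # curve_fit passes x as float64; cast back to int before indexing ux
    xi = np.asarray(x).astype(int)
    return 2 * eta_H * Par_x(xi, y_cut, ux)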

Creating images from a string of random functions

I've rewritten a bit of what was done here in an attempt to avoid using recursion to produce the images. While I can get what appears to be the correct string of random functions, I am unable to get the correct output arrays to build the image.
You'll notice I've put the xVar function first in the list of random functions because it operates on an empty argument list and gives me back values. This is similar to what the original code does, except that it (by recursion) uses the value 0 to pick out one of three functions that operate on empty argument lists. I am thinking that the results are passed back in so that functions such as np.sin will work.
I think the issue might lie in my usage of the argument unpacking func(*testlist); perhaps I'm using it incorrectly.
import numpy as np, random
from PIL import Image
width, height = 256,256
xArray = np.linspace(0.0, 1.0, width).reshape((1, width, 1))
yArray = np.linspace(0.0, 1.0, height).reshape((height, 1, 1))
def xVar(): return xArray
def yVar(): return yArray
def safeDivide(a, b): return np.divide(a, np.maximum(b, 0.001))
def add(x,y):
    added = np.add(x, y)
    return added
def Color():
    randColorarray = np.array([random.random(), random.random(), random.random()]).reshape((1, 1, 3))
    return randColorarray
# def circle(x,y):
#     circles = (x- 100) ** 2 + (y - 100) ** 2
#     return circles
functions = (Color, xVar, yVar, np.sin, np.multiply, safeDivide)
depth = 5
def functionArray(depth = 0):
    FunctList = []
    FunctList.append(xVar)
    for x in range(depth):
        func = random.choice(functions)
        FunctList.append(func)
    return FunctList
def ImageBuilder():
    FunctionList = functionArray(depth)
    testlist = []
    for func in FunctionList:
        values = func(*testlist)
    return values
vals = ImageBuilder()
repetitions = (int(xArray / vals.shape[0]), int(yArray / vals.shape[1]), int(3 / vals.shape[2]))
img = np.tile(vals, repetitions)
# Convert to 8-bit, send to PIL and save
img8Bit = np.uint8(np.rint(img.clip(0.0, 1.0) * 255.0))
Image.fromarray(img8Bit).save('Images/' + '.png', "PNG")
Depending on which random function is chosen, I'll either get
values = func(*testlist)
ValueError: invalid number of arguments
or
TypeError: safeDivide() missing 2 required positional arguments: 'a' and 'b'
Note, however, that the linked program does not get a safeDivide error even though a and b are not explicitly passed in there either (the same is true of np.multiply).
Thanks for any help.
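For reference, the arity mismatch can be handled by checking how many arguments each randomly chosen function expects before calling it. The sketch below is only one simplistic way to do that (it feeds the running array into unary functions and combines it with itself for binary ones, which is not the expression tree the linked recursive program builds); note that np.sin and np.multiply are NumPy ufuncs, so their argument count comes from .nin rather than inspect.signature:

import inspect

def n_args(f):
    # ufuncs (np.sin, np.multiply) have no inspectable Python signature
    if isinstance(f, np.ufunc):
        return f.nin
    return len(inspect.signature(f).parameters)

def ImageBuilder():   # sketch of a replacement for the ImageBuilder above
    function_list = functionArray(depth)
    value = function_list[0]()          # xVar() seeds the pipeline with the x grid
    for func in function_list[1:]:
        n = n_args(func)
        if n == 0:                      # Color, xVar, yVar take no arguments
            value = func()
        elif n == 1:                    # np.sin
            value = func(value)
        else:                           # np.multiply, safeDivide
            value = func(value, value)
    return value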

ValueError: setting an array element with a sequence. in scipy.optimize.minimize + curve_fit

I am a newbie at Python and I was writing code to compute, and then fit, magnetization data.
Firstly, I write the function for the energy to be minimized with respect to the parameter "theta":
def E_uniaxial(H, phi, theta, Keff, Ms):
    e = Keff*(np.cos(theta))**2 - ((4*np.pi)**2*mu0)*Ms*H*np.cos(theta - phi)
    return e
Then, as the magnetization depends strongly on the previous equilibrium position of the system, I write a function for the "next equilibrium position"; the parameter H is the one that is supposed to change between the previous and the new equilibrium position.
def next_theta(Ms, phi, Keff, H, lasttheta, fctE):
    E = lambda x : fctE(H, phi, x, Keff, Ms)[0]
    result = scipy.optimize.minimize(E, lasttheta)
    return result.x
After this, I write a function that computes a whole hysteresis cycle. Given a known starting point, the function increases H and computes all the equilibrium positions, each depending on the previous one (then H is decreased and the same process is performed).
def cycle_theta(Ms, desfield, Keff, Hmax, theta_init_1, theta_init_2, fctE):
    # forward sweep
    H1 = np.linspace(-Hmax, Hmax, 2000)
    sol1 = np.zeros(np.shape(H1))
    sol1[0] = theta_init_1
    for i in range(len(H1)-1):
        sol1[i+1] = next_theta(Ms, desfield, Keff, H1[i+1], sol1[i], fctE)
    # backward sweep
    H2 = np.linspace(Hmax, -Hmax, 2000)
    sol2 = np.zeros(np.shape(H2))
    sol2[0] = theta_init_2
    for i in range(len(H2) -1):
        sol2[i+1] = next_theta(Ms, desfield, Keff, H2[i+1], sol2[i], fctE)
    return H1, sol1, np.flip(sol2)
Then, I have to fit data in order to find the Ms and Keff parameters. I defined this function:
def test_fit(H, Ms, Keff):
    a = cycle_theta(Ms, 1., Keff, 20, np.pi, 0., E_uniaxial)[1]
    idx = 0
    if isinstance(H, float):
        idx = find_nearest(a, H)
        print('float')
        return np.sin(a[idx])
    if isinstance(H, np.ndarray):
        c = np.zeros(np.shape(H))
        for i in range(len(H)):
            idx = find_nearest(a, H[i])
            c[i] = a[idx]
        print('array')
        return np.sin(c)
The condition on the type seemed to be required for the function to work with curve_fit.
I finally call popt = curve_fit(test_fit, b, sig) where "b" and "sig" are my experimental data.
But I got this error several times, coming from scipy.optimize.minimize, not curve_fit:
ValueError: setting an array element with a sequence.
I read that this message can come from the fact that my energy function E_uniaxial returns an array and not a scalar, but actually it's quite a regular function: if you input a scalar, you get a scalar, and if you input an array, you get an array.
So I really don't understand: am I not supposed to nest scipy.optimize.minimize and scipy.optimize.curve_fit one inside the other?
Thank you a lot for your help !!
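One thing worth ruling out, independent of the nesting: scipy.optimize.minimize expects the objective to return a plain scalar, and it always returns result.x as an ndarray (of length 1 here), even for a one-dimensional problem. Making next_theta hand back a Python float removes one common source of "setting an array element with a sequence" when those values are later assigned into arrays or fed back in as the next starting point. A minimal sketch of that change (not necessarily the whole fix) could be:

def next_theta(Ms, phi, Keff, H, lasttheta, fctE):
    # return a scalar energy to minimize, and a scalar angle as the result
    E = lambda x: float(np.squeeze(fctE(H, phi, x, Keff, Ms)))
    result = scipy.optimize.minimize(E, lasttheta)
    return float(result.x[0])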

How to real-time filter with scipy and lfilter?

Disclaimer: I am probably not as good at DSP as I should be and therefore have more issues than I should have getting this code to work.
I need to filter incoming signals as they happen. I tried to make this code work, but I have not been able to so far.
Referencing the scipy.signal.lfilter documentation:
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
from lib import fnlib
samples = 100
x = np.linspace(0, 7, samples)
y = [] # Unfiltered output
y_filt1 = [] # Real-time filtered
nyq = 0.5 * samples
f1_norm = 0.1 / nyq
f2_norm = 2 / nyq
b, a = scipy.signal.butter(2, [f1_norm, f2_norm], 'band', analog=False)
zi = scipy.signal.lfilter_zi(b,a)
zi = zi*(np.sin(0) + 0.1*np.sin(15*0))
This sets zi as zi*y[0] initially, which in this case is 0. I got it from the example code in the lfilter documentation, but I am not sure if this is correct at all.
Then it comes to the point where I am not sure what to do with the few initial samples.
The coefficient arrays a and b both have length 5 here (len(a) = 5).
As lfilter takes input values from now back to n-4, do I pad with zeros, or do I need to wait until 5 samples have gone by, take them as a single block, and then continuously process each next step in the same way?
for i in range(0, len(a)-1): # Append 0 as initial values, wrong?
    y.append(0)
step = 0
for i in xrange(0, samples): #x:
    tmp = np.sin(x[i]) + 0.1*np.sin(15*x[i])
    y.append(tmp)
    # What to do with the initial filterings until len(y) == len(a) ?
    if (step > len(a)):
        y_filt, zi = scipy.signal.lfilter(b, a, y[-len(a):], axis=-1, zi=zi)
        y_filt1.append(y_filt[4])
        print(len(y))
        y = y[4:]
        print(len(y))
y_filt2 = scipy.signal.lfilter(b, a, y) # Offline filtered
plt.plot(x, y, x, y_filt1, x, y_filt2)
plt.show()
I think I had the same problem, and found a solution on https://github.com/scipy/scipy/issues/5116:
from scipy import zeros, signal, random
def filter_sbs():
    data = random.random(2000)
    b = signal.firwin(150, 0.004)
    z = signal.lfilter_zi(b, 1) * data[0]
    result = zeros(data.size)
    for i, x in enumerate(data):
        result[i], z = signal.lfilter(b, 1, [x], zi=z)
    return result
if __name__ == '__main__':
    result = filter_sbs()
if __name__ == '__main__':
result = filter_sbs()
The idea is to pass the filter state z in each subsequent call to lfilter. For the first few samples the filter may give strange results, but later (depending on the filter length) it starts to behave correctly.
The problem is not how you are buffering the input. The problem is that in the 'offline' version, the state of the filter is initialized using lfilter_zi, which computes the internal state of an LTI filter so that the output is already in steady state when new samples arrive at the input. In the 'real-time' version, you skip this, so the filter's initial state is 0. You can either initialize both versions using lfilter_zi or else initialize both to 0. Then it doesn't matter how many samples you filter at a time.
Note, if you initialize to 0, the filter will 'ring' for a certain amount of time before reaching a steady state. In the case of FIR filters, there is an analytic solution for determining this time. For many IIR filters, there is not.
The following is correct. For simplicity's sake I initialize to 0 and feed the input one sample at a time. However, any non-zero block size will produce equivalent output.
from scipy import signal, random
from numpy import zeros
def filter_sbs(data, b):
    z = zeros(b.size-1)
    result = zeros(data.size)
    for i, x in enumerate(data):
        result[i], z = signal.lfilter(b, 1, [x], zi=z)
    return result
def filter(data, b):
    result = signal.lfilter(b,1,data)
    return result
if __name__ == '__main__':
    data = random.random(20000)
    b = signal.firwin(150, 0.004)
    result1 = filter_sbs(data, b)
    result2 = filter(data, b)
    print(result1 - result2)
Output:
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 ... -5.55111512e-17
0.00000000e+00 1.66533454e-16]
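The examples above use an FIR filter (firwin with a denominator of 1). The same state-carrying idea works for the IIR Butterworth band-pass from the question; a minimal sketch, assuming the coefficients from the butter() call and the sine-based test signal shown earlier, could be:

import numpy as np
import scipy.signal

samples = 100
x = np.linspace(0, 7, samples)
sig = np.sin(x) + 0.1*np.sin(15*x)

nyq = 0.5 * samples
b, a = scipy.signal.butter(2, [0.1/nyq, 2/nyq], 'band', analog=False)

# start the streaming filter in steady state for the first input value
zi = scipy.signal.lfilter_zi(b, a) * sig[0]
y_rt = np.empty_like(sig)
for i, s in enumerate(sig):
    out, zi = scipy.signal.lfilter(b, a, [s], zi=zi)   # carry the state between calls
    y_rt[i] = out[0]

# offline reference with the same initial state; the two should match closely
y_off, _ = scipy.signal.lfilter(b, a, sig, zi=scipy.signal.lfilter_zi(b, a)*sig[0])
print(np.max(np.abs(y_rt - y_off)))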
