Evaluate SmoothBivariateSpline for two 1D arrays - python

I have three arrays x, y, z and wanted to smooth the z-data, so I used the SmoothBivariateSpline function. But when I evaluate the result, I get values completely different from my original z-data. Below is my code:
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def envinterpolate(x, y, z):
    x_interp = np.linspace(min(x), max(x), len(x)*4)
    y_interp = np.linspace(min(y), max(y), len(x)*4)
    sbsp = SmoothBivariateSpline(x, y, z)
    z_interp = sbsp.ev(x_interp, y_interp)
    return z_interp
Is there anything wrong in my code when evaluating the values of the spline?
Attaching the plot after trying the s=0 parameter (red line: my actual z-data, black line: z_interp data).

By convention, "smoothing" refers specifically to cases where you don't want the interpolant to pass exactly through your input data points (for example if you know that your input data is noisy).
SmoothBivariateSpline takes a parameter s that controls the degree of smoothing that is applied to the interpolant:
s : float, optional
    Positive smoothing factor defined for estimation condition:
        sum((w[i]*(z[i]-s(x[i], y[i])))**2, axis=0) <= s
    Default s=len(w), which should be a good value if 1/w[i] is an estimate of the standard deviation of z[i].
If you don't want any smoothing you could simply set s=0.
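For example, a minimal sketch (the scattered test data here is made up, standing in for your x, y, z):

import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# made-up scattered data standing in for your x, y, z
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = rng.uniform(0, 10, 100)
z = np.sin(x) + np.cos(y)

# s=0 forces the spline through the data points (pure interpolation)
sbsp = SmoothBivariateSpline(x, y, z, s=0)

# evaluating at the input points should now closely reproduce z
z_check = sbsp.ev(x, y)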


Output all guesses from scipy.optimize.leastsq()

I'm hoping to make an animation about how the least-squares regression analysis provided by scipy.optimize.leastsq() converges on a specific result. Is there any way to get the function to, say, append a tuple of guess values to a list for each iteration until the function converges to a local minimum? Or is there a different library which includes this feature?
Below is what I have:
import numpy as np
from scipy.optimize import leastsq

# initial guess for gaussian distributions to optimize [height, position, width].
# if more than 2 distributions are required, add a new set of [h,p,w] initial
# parameters to 'initials' for each new distribution.
# new parameters should be of the same format for consistency; i.e. [h,p,w],[h,p,w],... etc.
# a 'w' guess of 1 is typically a sufficient estimate.
initials = [6.5,13,1],[4.5,19,1]

# determines the number of gaussian functions to compute from the initial guesses
n = len(initials)

# formats initials into a 1D array
var = np.concatenate(initials)

# data matrix
M = np.array(master)

# defines a typical gaussian function, of independent variable x,
# amplitude a, position b, and width parameter c.
def gaussian(x, a, b, c):
    return a*np.exp((-(x-b)**2.0)/c**2.0)

# defines the expected resultant as a sum of intrinsic gaussian functions
def GaussSum(x, p):
    return sum(gaussian(x, p[3*k], p[3*k+1], p[3*k+2]) for k in range(n))

# defines the condition of minimization, reducing the square of the difference
# between the data (y) and the function GaussSum(x, p)
def residuals(p, y, x):
    return (y - GaussSum(x, p))**2

# executes least-squares regression analysis to optimize the initial parameters
cnsts = leastsq(residuals, var, args=(M[:,1], M[:,0]))[0]
What I'm eventually hoping for is for cnsts to be a list of tuples of every guess, from the initial guess to the final one.
If I'm understanding your question correctly, you want to make a guess at each of the different coefficients while fitting a linear regression line, then have a list of all the coefficients that have been guessed? Similar to how a NN will back-propagate the error to better fit a model?
Linear regression isn't guessing the different coefficients; it just calculates them... https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/regression-analysis/find-a-linear-regression-equation/#FindaLinear
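That said, if you just want the trajectory of parameter vectors, one common trick (a sketch, not part of the answer above) is to record every parameter vector that leastsq passes to your residual function:

import numpy as np
from scipy.optimize import leastsq

history = []  # every parameter vector leastsq evaluates

def residuals_recorded(p, y, x):
    history.append(np.copy(p))  # record the current guess
    # return the plain residuals; leastsq squares and sums them internally
    return y - GaussSum(x, p)

cnsts = leastsq(residuals_recorded, var, args=(M[:,1], M[:,0]))[0]

Note that history will also contain the nearby evaluations leastsq makes to estimate the Jacobian by finite differences, not only the "accepted" steps.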

How to combine the phase of one image and the magnitude of a different image into one image using Python

I want to combine the phase spectrum of one image and the magnitude spectrum of a different image into one image.
I have the phase spectrum and magnitude spectrum of image A and image B.
Here is the code:
import numpy as np

f = np.fft.fft2(grayA)
fshift1 = np.fft.fftshift(f)
phase_spectrumA = np.angle(fshift1)
magnitude_spectrumA = 20*np.log(np.abs(fshift1))

f2 = np.fft.fft2(grayB)
fshift2 = np.fft.fftshift(f2)
phase_spectrumB = np.angle(fshift2)
magnitude_spectrumB = 20*np.log(np.abs(fshift2))
I am trying to figure it out, but I still do not know how to do it.
Below is my test code:
imgCombined = abs(f) * math.exp(1j*np.angle(f2))
I wish it could come out just like that.
Here are a few things that you would need to fix for your code to work as intended:
The math.exp function only supports scalar exponentiation. For an element-wise matrix exponentiation you should use numpy.exp instead.
Similarly, the * operator performs matrix multiplication for np.matrix operands. To guarantee element-wise multiplication you can use np.multiply (for plain NumPy arrays, * is already element-wise).
With these fixes you should get the frequency-domain combined matrix as follows:
combined = np.multiply(np.abs(f), np.exp(1j*np.angle(f2)))
To obtain the corresponding spatial-domain image, you would then need to compute the inverse transform (taking the real part, since there may be small residual imaginary parts due to numerical errors) with:
imgCombined = np.real(np.fft.ifft2(combined))
Finally the result can be shown with:
import matplotlib.pyplot as plt
plt.imshow(imgCombined, cmap='gray')
Note that imgCombined may contain values outside the [0,1] range. You would then need to decide how you want to rescale the values to fit the expected [0,1] range.
The default scaling (resulting in the image shown above) is to linearly scale the values such that the minimum value is set to 0 and the maximum value is set to 1.
Another way could be to clip the values to that range (i.e. forcing all negative values to 0 and all values greater than 1 to 1).
Finally, another approach, which seems to provide a result closer to the screenshot provided, would be to take the absolute value with imgCombined = np.abs(imgCombined).
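Putting the pieces together, a minimal end-to-end sketch (assuming grayA and grayB are already-loaded grayscale arrays of the same shape):

import numpy as np
import matplotlib.pyplot as plt

# grayA supplies the magnitude, grayB supplies the phase
f = np.fft.fft2(grayA)
f2 = np.fft.fft2(grayB)

# combine magnitude of A with phase of B in the frequency domain
# (no fftshift needed here, since both factors come from unshifted transforms)
combined = np.multiply(np.abs(f), np.exp(1j*np.angle(f2)))

# back to the spatial domain; np.abs maps the result into a displayable range
imgCombined = np.abs(np.real(np.fft.ifft2(combined)))

plt.imshow(imgCombined, cmap='gray')
plt.show()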

pymc: Inferring parameters based on functions of observables

I have observations of several optical emission lines, and I have a model that predicts several (flux) ratios of those lines, based on two parameters, q and z, which I want to infer.
I have created @pymc.deterministic objects that take values of q and z (each of which has uninformative priors over some physically-interesting region), and turn them into a "predicted" ratio. There are about 7 ratios, and they have the form:
@pymc.deterministic(observed=True, value=NII_SII)
def NII_SII_th(q=q, z=z):
    return NII_SII_g(np.array([q, z]))
I can also define the ratios derived from observations, such as
@pymc.deterministic
def NII_SII(NII_6584=NII_6584, SII_6717=SII_6717,
            rcf_NII_6584=rcf_NII_6584, rcf_SII_6717=rcf_SII_6717):
    return np.log10(
        (rcf_NII_6584*NII_6584) /
        (rcf_SII_6717*SII_6717))
where, for instance, NII_6584 is the observed flux of one of the lines and rcf_NII_6584 is the flux correction for that same line. These corrections are themselves determined by the line wavelengths (known with infinite precision), and by a parameter EBV, which can be calculated from the observed flux ratio of two lines that are supposed to have a fixed ratio r:
@pymc.deterministic
def EBV(Ha=Ha, Hb=Hb, r=r, R_V=R_V, Ha_l=Ha_l, Hb_l=Hb_l):
    kHb = gas_meas.calzetti_k(lams=np.array([Ha_l]), Rv=R_V)
    kHa = gas_meas.calzetti_k(lams=np.array([Hb_l]), Rv=R_V)
    return 2.5 / (kHb - kHa) * np.log10((Ha/Hb) / r)
I also have a prior on the value of R_V.
The measurements themselves are expressed as Normal distributions, such as
NII_6584 = pymc.Normal(
    'NII_6584', mu=f_row['[NII]6584'],
    tau=1./e_row['[NII]6584']**2.,
    observed=True, value=f_row['[NII]6584'])
I would like to get estimates of R_V, EBV, q, and z. However, when I make a pymc Model from all these, I am told that Deterministic objects cannot have observed values:
TypeError: __init__() got an unexpected keyword argument 'value'
First, am I misunderstanding the nature of Deterministic objects? If so, how else do I infer based on values that are not directly observed?
Second, am I constructing the observations correctly? It seems odd that I'd have to specify the observed flux as both the mean and the value argument, but it's not clear to me what else to do, other than also model the flux means and variances, which seems unnecessarily complicated.
Any advice would be appreciated!
I don't think you're constructing your observations correctly. This is not a minimum working example, but maybe we can clear up some confusion.
First off, I don't think the @deterministic decorator takes an argument value=<something>. It's not clear which of your deterministic statements is the actual model, but try to translate your code into the following template:
# define your randomly-distributed variables (I'm assuming they're normal)
q = pymc.Normal(name, mu=mu, tau=tau)
z = pymc.Normal(name2, mu=mu2, tau=tau2)

# define how you think they generate your data
@pymc.deterministic
def NII_SII_th(q=q, z=z):
    return NII_SII_g(np.array([q, z]))  # this fcn is defined somewhere else

# your data array
f_row['[NII]6584'] = [...]

# now link your model and your data
obs = pymc.Normal(modelname, mu=NII_SII_th,
                  tau=tau_obs,  # tau_obs: observation precision (pymc.Normal also needs a tau)
                  observed=True, value=f_row['[NII]6584'])
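Once the nodes are linked, sampling in PyMC 2.x typically looks like the following sketch (the name string 'q' assumes it was the name given to the q stochastic above):

# collect the nodes and sample (PyMC 2.x API)
model = pymc.MCMC([q, z, NII_SII_th, obs])
model.sample(iter=20000, burn=5000)

# posterior samples, keyed by the name string passed to pymc.Normal above
q_samples = model.trace('q')[:]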

Fitting a sum to data in Python

Given that the fitting function is of the form

$f(x) = \sum_i \frac{r_i (s_i x)^2}{1 + (s_i x)^2}$

I intend to fit such a function to the experimental data (x, y=f(x)) that I have. But then I have some doubts:
How do I define my fitting function when there's a summation involved?
Once the function is defined, i.e. def func(..): ... return ..., is it still possible to use curve_fit from scipy.optimize? Because now there's a whole set of parameters s_i and r_i involved, compared to the usual fitting cases where one has just a few scalar parameters.
Finally, are such cases treated completely differently?
Feel a bit lost here, thanks for any help.
This is very well within reach of scipy.optimize.curve_fit (or just scipy.optimize.leastsq). The fact that a sum is involved does not matter at all, nor that you have arrays of parameters. The only thing to note is that curve_fit wants to give your fit function the parameters as individual arguments, while leastsq gives a single vector.
Here's a solution:
import numpy as np
from scipy.optimize import curve_fit, leastsq
def f(x, r, s):
    """The fit function, applied to every x_k for the vectors r_i and s_i."""
    x = x[..., np.newaxis]  # add an axis for the summation
    # by virtue of numpy's fantastic broadcasting rules,
    # the following will be evaluated for every combination of k and i.
    x2s2 = (x*s)**2
    return np.sum(r * x2s2 / (1 + x2s2), axis=-1)

# fit using curve_fit
popt, pcov = curve_fit(
    lambda x, *params: f(x, params[:N], params[N:]),
    X, Y,
    np.r_[R0, S0],
)
R = popt[:N]
S = popt[N:]

# fit using leastsq
popt, ier = leastsq(
    lambda params: f(X, params[:N], params[N:]) - Y,
    np.r_[R0, S0],
)
R = popt[:N]
S = popt[N:]
A few things to note:
Upon start, we need the 1d arrays X and Y of measurements to fit to, the 1d arrays R0 and S0 as initial guesses, and N, the length of those two arrays.
I separated the implementation of the actual model f from the objective functions supplied to the fitters. Those I implemented using lambda functions. Of course, one could also have ordinary def ... functions and combine them into one.
The model function f uses numpy's broadcasting to simultaneously sum over a set of parameters (along the last axis) and calculate in parallel for many x (along any axes before the last, though both fit functions would complain if there is more than one; .ravel() can help there).
We concatenate the fit parameters R and S into a single parameter vector using numpy's shorthand np.r_[R,S].
curve_fit supplies every single parameter as a distinct argument to the objective function. We want them as a vector, so we use *params: it catches all remaining positional arguments in a single tuple.
leastsq gives a single params vector. However, it neither supplies x, nor does it compare it to y. Those are directly bound into the objective function.
In order to use scipy.optimize.leastsq to estimate multiple parameters, you need to pack them into an array and unpack them inside your function; you can then do anything you want with them. For example, if your s_i are the first 3 and your r_i are the next 3 parameters in your array p, you would just set ssum=p[:3].sum() and rsum=p[3:6].sum() (a sketch of this pack/unpack pattern follows the Cookbook link below). But again, your parameters are not identified (according to your comment), so estimation is pointless.
For an example of using leastsq, see the Cookbook's Fitting Data example.
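As a concrete sketch of that pack/unpack pattern (the 3+3 split, the parameter order, and the names X, Y, s_guess, r_guess are assumptions for illustration):

import numpy as np
from scipy.optimize import leastsq

def residuals(p, x, y):
    s, r = p[:3], p[3:]  # unpack: first 3 entries are the s_i, next 3 the r_i
    x2s2 = (x[:, np.newaxis] * s)**2
    return y - np.sum(r * x2s2 / (1 + x2s2), axis=-1)

p0 = np.r_[s_guess, r_guess]  # pack the initial guesses into one flat vector
popt, ier = leastsq(residuals, p0, args=(X, Y))
s_fit, r_fit = popt[:3], popt[3:]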

Numpy Leastsq fitting returning unchanged initial guess in all cases

I am attempting to use leastsq to fit a function to a few relevant points in an FFT. The issue is that, no matter how good or bad the fit is, the parameters do not change at all. In other words, the least-squares run takes 6 iterations, does nothing on any of them, and then returns the initial parameter values. I cannot determine why nothing is happening.
guess = [per_guess, thresh_guess, cen_guess]  # parameter guesses, all real numbers
res, stuff = leastsq(fitting, guess)
The fitting function has a number of manipulations to find the correct indices, which I will not reproduce here to save space, but it returns a list of complex numbers:
M, freq = fft(real_gv, zf)

def fitting(guess):
    gi, trial_gv = gen_pat(size, guess[0], guess[1], guess[2])
    trial_gv = trial_gv*private.han  # apply hanning window
    F, freq = fft(trial_gv, zf)
    # stuff that picks the right indices
    return M[left_fit_target:right_fit_target] - F[left_fit_target:right_fit_target]
I tried at one point using an array cast in the return, but I would constantly receive errors about casting between complex and real floats, even though I wasn't asking for any. Even with this method, I occasionally receive ComplexWarnings.
EDIT:
As requested, I am putting up gen_pat:
def gen_pat(num, period, threshold, pos=0, step=1.0, subdivide=10.0, blur=1.0):
    x = np.arange(-num/2, num/2, step)  # grid indexes
    j = np.zeros((len(x), subdivide))
    for i in range(len(x)):
        # around each discrete point take a subdivision; this will be averaged to get
        # the antialiased point. blur allows for underlap (<1) or overlap (>1) of pixels
        j[i] = np.linspace(x[i]-0.5*blur, x[i]+0.5*blur, subdivide)
    holder = -np.sin(2*np.pi*np.abs(j-pos)/period)  # map a sin function over the region
    holder = holder < 2.0*threshold-1.0  # map to 1 or 0 based on the fraction of the period that is 0
    y = np.sum(holder, axis=1)/float(subdivide)  # average the sub-points to get the anti-aliased value at point i
    y = np.array(y)
    x = np.array(x)
    return x, y
EDIT 2:
Managed to get a fit working using res = fmin_powell(fitting, guess, direc=[[1,0,0],[0,0.1,0],[0,0,1]]) and the modified return below. Would still like to know why leastsq didn't work.
return np.sum((M[fit_start_index:fit_end_index].real - F[fit_start_index:fit_end_index].real)**2
              + (M[fit_start_index:fit_end_index].imag - F[fit_start_index:fit_end_index].imag)**2)
The provided function gen_pat(x1, x2, x3, x4) returns a horizontal line at value 1 for the few input values (x1, x2, x3, x4) I tried. Its Fourier components (except the 0th component, of course) are then always zero, independently of the parameters. The leastsq algorithm then fails, as changing the 4 parameters does not affect the Fourier components you are trying to optimize.
You are doing something wrong in gen_pat(), either a coding or a conceptual error, like choosing the wrong fitting curve.
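Separately from the gen_pat problem, note that leastsq expects a real residual vector, and the complex return in the question is a likely source of the ComplexWarnings. A common workaround (a sketch, not part of this answer) is to stack the real and imaginary parts:

import numpy as np
from scipy.optimize import leastsq

def fitting_real(guess):
    res = np.asarray(fitting(guess))  # complex residuals from the question's fitting()
    # leastsq needs real residuals: stacking real and imaginary parts preserves
    # the objective, since |res|**2 == res.real**2 + res.imag**2
    return np.concatenate([res.real, res.imag])

res_params, ier = leastsq(fitting_real, guess)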
