Issues with 2D-Interpolation in Scipy - python

In my application, the data is sampled on a distorted grid, and I would like to resample it to an undistorted grid. To test this, I wrote the following program with example distortions and a simple function as data:
from __future__ import division
import numpy as np
import scipy.interpolate as intp
import pylab as plt
# Defining some variables:
quadratic = -3/128
linear = 1/16
pn = np.poly1d([quadratic, linear,0])
pixels_x = 50
pixels_y = 30
frame = np.zeros((pixels_x,pixels_y))
x_width= np.concatenate((np.linspace(8,7.8,57) , np.linspace(7.8,8,pixels_y-57)))
def data(x,y):
    z = y*(np.exp(-(x-5)**2/3) + np.exp(-(x)**2/5) + np.exp(-(x+5)**2))
    return(z)
# Generating grid coordinates
yt = np.arange(380,380+pixels_y*4,4)
xt = np.linspace(-7.8,7.8,pixels_x)
X, Y = np.meshgrid(xt,yt)
Y=Y.T
X=X.T
Y_m = np.zeros((pixels_x,pixels_y))
X_m = np.zeros((pixels_x,pixels_y))
# generating distorted grid coordinates:
for i in range(pixels_y):
    Y_m[:,i] = Y[:,i] - pn(xt)
    X_m[:,i] = np.linspace(-x_width[i],x_width[i],pixels_x)
# Sample data:
for i in range(pixels_y):
    for j in range(pixels_x):
        frame[j,i] = data(X_m[j,i],Y_m[j,i])
Y_m = Y_m.flatten()
X_m = X_m.flatten()
frame = frame.flatten()
##
Y = Y.flatten()
X = X.flatten()
ipf = intp.interp2d(X_m,Y_m,frame)
interpolated_frame = ipf(xt,yt)
At this point, I have two questions:
The code works, but I get the following warning:
Warning: No more knots can be added because the number of B-spline coefficients
already exceeds the number of data points m. Probably causes: either
s or m too small. (fp>s)
kx,ky=1,1 nx,ny=54,31 m=1500 fp=0.000006 s=0.000000
Also, some interpolation artifacts appear, and I assume that they are related to the warning - Do you guys know what I am doing wrong?
For my actual application, the frames need to be around 500*100, but when I try that, I get a MemoryError - is there anything I can do about it, apart from splitting the frame into several parts?
Thanks!

This problem is most likely related to the use of bisplrep and bisplev within interp2d. The docs mention that interp2d uses a smoothing factor of s=0.0 and that bisplrep and bisplev should be used directly if more control over s is needed. The related docs mention that a good value of s should lie in the interval (m - sqrt(2*m), m + sqrt(2*m)), where m is the number of points used to construct the splines. I had a similar problem and found it solved when using bisplrep and bisplev directly, where s can be set explicitly.
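For illustration, a rough sketch of that in the setting of the question above (reusing the flattened X_m, Y_m, frame arrays and the regular xt, yt axes; the right s ultimately depends on how noisy your data are):
import numpy as np
import scipy.interpolate as intp
m = X_m.size                                             # number of scattered sample points
s = m - np.sqrt(2*m)                                     # lower end of the recommended interval for s
tck = intp.bisplrep(X_m, Y_m, frame, kx=1, ky=1, s=s)    # kx=ky=1 mirrors interp2d's linear default
interpolated_frame = intp.bisplev(xt, yt, tck)           # evaluate on the regular xt/yt grid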

For 2D interpolation, griddata is solid, local, and fast.
Take a look at problem-with-2d-interpolation-in-scipy-non-rectangular-grid on SO.
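In the setting of the question above, that would be roughly (a sketch reusing the flattened X_m, Y_m, frame samples and the flattened regular X, Y coordinates):
import numpy as np
from scipy.interpolate import griddata
points = np.column_stack([X_m, Y_m])                            # scattered sample locations
interpolated = griddata(points, frame, (X, Y), method='linear')
interpolated_frame = interpolated.reshape(pixels_x, pixels_y)   # back to the frame shape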

You might want to look at the following interp method in basemap:
mpl_toolkits.basemap.interp
http://matplotlib.sourceforge.net/basemap/doc/html/api/basemap_api.html
unless you really need spline-based interpolation.

Related

Bilinear interpolation from structured grid to unstructured grid (arbitrary points)

I need to bilinearly interpolate some air data from an hdf4/netcdf4/hdf5 file, defined on a 240x240 structured grid, onto an arbitrary collection of coordinates. I have no idea how to do this. I have tried using pyresample, but that needs an AreaDefinition for the target grid, which is not possible in my case of unstructured target data (arbitrary points). Here is my code:
import numpy as np
import pyresample
from netCDF4 import Dataset
air_file = Dataset('air.hdf', mode='r')
air_data = air_file.variables['air_2m'][:].flatten()
air_lon = air_file.variables['air_lon'][:].flatten()
air_lat = air_file.variables['air_lat'][:].flatten()
air_data = air_data.reshape(240,240)
air_lon = air_lon.reshape(240,240) # grid size is 240x240
air_lat = air_lat.reshape(240,240)
tar_lon = 100 * np.random.random((100,1)) # random points
tar_lat = 100 * np.random.random((100,1)) # random points
source_def = pyresample.geometry.SwathDefinition(lons=air_lon, lats=air_lat)
target_def = pyresample.geometry.SwathDefinition(lons=tar_lon, lats=tar_lat)
result = pyresample.bilinear.resample_bilinear(air_data, source_def, target_def, radius=50e3, neighbours=32, nprocs=1, fill_value=None, reduce_data=True, segments=None, epsilon=0)
I am getting the following error (which is understood as it needs an AreaDefinition for target):
AttributeError: 'SwathDefinition' object has no attribute 'proj_str'
Is there any other way of doing this?
I'm not familiar with the pyresample package, but for bilinear interpolation in python I suggest referring to this earlier stackexchange thread which gives a number of useful examples:
How to perform bilinear interpolation in Python
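If the 240x240 source grid is regular, i.e. every row of the lat/lon arrays shares one latitude and every column one longitude (an assumption on my part that may not hold for your projection), a minimal sketch with scipy.interpolate.RegularGridInterpolator would be:
import numpy as np
from scipy.interpolate import RegularGridInterpolator
lat_1d = air_lat[:, 0]               # assumed regular axes; both must be monotonic
lon_1d = air_lon[0, :]
interp = RegularGridInterpolator((lat_1d, lon_1d), air_data,
                                 method='linear', bounds_error=False)
tar_points = np.column_stack([tar_lat.ravel(), tar_lon.ravel()])
interpolated = interp(tar_points)    # bilinear values at the arbitrary target points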
P.S.: If anyone wants to perform this task from the command line, you can also extract a set of points using bilinear interpolation with cdo:
# some bash loop over pairs of x and y
cdo remapbil,lon=${x}/lat=${y} in.nc mypoint_${x}_${y}.nc

Using np.interp to find x value for a given y gives wrong answer

I want to find the x value for a given y (I want to know at what t the conversion X reaches 0.9). There are questions like this all over SO, and they say to use np.interp, but I did that in two ways and both were wrong. The code is:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# Create time domain
t = np.linspace(0,4000,100)
# Parameters
A = 1.5*10**(-3) # Arrhenius constant
T = 300 # Temperature [K]
R = 8.31 # Ideal gas constant [J/molK]
E_a= 1000 # Activation energy [J/mol]
V = 5 # Reactor volume [m3]
# Initial condition
C_A0 = 0.1 # Initial concentration [mol/m3]
def dNdt(C_A,t):
    r_A = (-k*C_A)/V
    dNdt = r_A*V
    return dNdt
k=A*np.exp(-E_a/(R*T))
C_A = odeint(dNdt,C_A0,t)
N_A0 = C_A0*V
N_A = C_A*V
X = (N_A0 - N_A)/N_A0
# Plot
plt.figure()
plt.plot(t,X,'b-',label='Conversion')
plt.plot(t,C_A,'r--',label='Concentration')
plt.legend(loc='best')
plt.grid(True)
plt.xlabel('Time [s]')
plt.ylabel('Conversion')
Looking at the graph, at roughly t=2300, the conversion is 0.9.
Method 1:
I wrote this function so I can ask for any given point and get the x-value:
def find(x_val,f):
    f = np.reshape(f,len(f))
    global t
    t = np.reshape(t,len(t))
    return np.interp(x_val,t,f)
print('Conversion of 0.9 is reached at: ',int(find(0.9,X)),'s')
When I call the function at 0.9, I get 0.0008858, which gets rounded to 0, which is wrong. I thought maybe something was going wrong when I declared global t?
Method 2:
When I do it outside the function, i.e. I manually reshape X and t and use np.interp(0.9,t,X), the output is 0.9.
X = np.reshape(X,len(X))
t = np.reshape(t,len(t))
print(np.interp(0.9,t,X))
I thought I made a mistake in the order of the variables so I did np.interp(0.9,X,t), and again it surprised me with 0.9.
I'm unsure as to where I'm going wrong. Any help would be appreciated. Many thanks :)
On your plot, t is horizontal and X is vertical. You want to find the horizontal coordinate where the vertical one is 0.9. That is, find t for a given X. Saying
find x value for a given y
is bound to lead to confusion, as it did here.
The problem is solved with
print(np.interp(0.9, X.ravel(), t)) # prints 2292.765497278863
(It's better to use ravel for flattening, instead of the reshape as you did). There is no need to reshape t, which is already one-dimensional.
I did np.interp(0.9,X,t), and again it surprised me with 0.9.
That sounds unlikely; you probably mistyped. This was the correct argument order.
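One detail worth spelling out: np.interp(x0, xp, fp) expects xp to be increasing, which is why passing the monotonically rising conversion X as the second argument works. A minimal standalone sketch, with a made-up monotonic curve standing in for the ODE solution:
import numpy as np
t = np.linspace(0, 4000, 100)
X = 1 - np.exp(-t / 1000)          # stand-in monotonic conversion curve, not the ODE result
print(np.interp(0.9, X, t))        # X as xp (increasing), t as fp: time at which X reaches 0.9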

Multiple Linear Regression using Python

Firstly, there are a few topics on this, but they involve deprecated packages with pandas etc. Suppose I'm trying to predict a variable w with variables x, y and z. I want to run a multiple linear regression to try to predict w. There are quite a few solutions that will produce the coefficients, but I'm not sure how to use them. So, in pseudocode:
import numpy as np
from scipy import stats
w = np.array((1,2,3,4,5,6,7,8,9,10)) # Time series I'm trying to predict
x = np.array((1,3,6,1,4,6,8,9,2,2)) # The three variables to predict w
y = np.array((2,7,6,1,5,6,3,9,5,7))
z = np.array((1,3,4,7,4,8,5,1,8,2))
def model(w,x,y,z):
    # do something!
    return guess  # where guess is some 10 element array formed
                  # using multiple linear regression of x,y,z
guess = model(w,x,y,z)
r = stats.pearsonr(w,guess) # To see how good guess is
Hopefully this makes sense as I'm new to MLR. There is probably a package in scipy that does all this so any help welcome!
You can use the normal equation method.
Let your equation be of the form: ax + by + cz + d = w
Then
import numpy as np
x = np.asarray([[1,3,6,1,4,6,8,9,2,2],
                [2,7,6,1,5,6,3,9,5,7],
                [1,3,4,7,4,8,5,1,8,2],
                [1,1,1,1,1,1,1,1,1,1]]).T
y = np.asarray([1,2,3,4,5,6,7,8,9,10]).T
a,b,c,d = np.linalg.pinv((x.T).dot(x)).dot(x.T.dot(y))
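As a quick sanity check (a sketch on my part, not part of the normal-equation recipe), the same coefficients should come out of numpy's least-squares solver on the same design matrix:
coeffs = np.linalg.lstsq(x, y, rcond=None)[0]   # should be close to a, b, c, d
print(coeffs)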
I think I've worked it out now. If anyone could confirm that this produces the correct results, that'd be great!
import numpy as np
from scipy import stats
# What I'm trying to predict
y = [-6,-5,-10,-5,-8,-3,-6,-8,-8]
# Array that stores two predictors in columns
x = np.array([[-4.95,-4.55],[-10.96,-1.08],[-6.52,-0.81],[-7.01,-4.46],[-11.54,-5.87],[-4.52,-11.64],[-3.36,-7.45],[-2.36,-7.33],[-7.65,-10.03]])
# Fit linear least squares and get regression coefficients
beta_hat = np.linalg.lstsq(x,y)[0]
print(beta_hat)
# To store my best guess
estimate = np.zeros((9))
for i in range(0,9):
    # y = x1b1 + x2b2
    estimate[i] = beta_hat[0]*x[i,0]+beta_hat[1]*x[i,1]
# Correlation between best guess and real values
print(stats.pearsonr(estimate,y))
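One optional refinement (a sketch, not something your data necessarily needs): the lstsq fit above has no intercept term. If you want one, the usual trick, as in the normal-equation answer above, is to append a column of ones to the predictor matrix:
x_i = np.column_stack([x, np.ones(len(y))])           # predictors plus an intercept column
beta_hat_i = np.linalg.lstsq(x_i, y, rcond=None)[0]   # last coefficient is the intercept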

FFT results Matlab VS Numpy (Python) : not the same results

I have a Matlab script to compute the DFT of a signal and plot it:
(data can be found here)
clc; clear; close all;
fid = fopen('s.txt');
txt = textscan(fid,'%f');
s = cell2mat(txt);
nFFT = 100;
fs = 24000;
deltaF = fs/nFFT;
FFFT = [0:nFFT/2-1]*deltaF;
win = hann(length(s));
sw = s.*win;
FFT = fft(sw, nFFT)/length(s);
FFT = [FFT(1); 2*FFT(2:nFFT/2)];
absFFT = 20*log10(abs(FFT));
plot(FFFT, absFFT)
grid on
I am trying to translate it to Python and can't get the same result.
import numpy as np
from matplotlib import pyplot as plt
x = np.genfromtxt("s.txt", delimiter=' ')
nfft = 100
fs = 24000
deltaF = fs/nfft;
ffft = [n * deltaF for n in range(nfft/2-1)]
ffft = np.array(ffft)
window = np.hanning(len(x))
xw = np.multiply(x, window)
fft = np.fft.fft(xw, nfft)/len(x)
fft = fft[0]+ [2*fft[1:nfft/2]]
fftabs = 20*np.log10(np.absolute(fft))
plt.figure()
plt.plot(ffft, np.transpose(fftabs))
plt.grid()
The plots I get (Matlab on the left, Python on the right):
What am I doing wrong?
The two pieces of code are different: in the Matlab code you concatenate two vectors,
FFT = [FFT(1); 2*FFT(2:nFFT/2)];
whereas in the Python code you add the first value of fft to the rest of the vector,
fft = fft[0]+ [2*fft[1:nfft/2]]
'+' does not concatenate here, because you have a numpy array.
In Python, it should be:
fft = fft[0:nfft//2]
fft[1:nfft//2] = 2*fft[1:nfft//2]
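If you prefer to keep the Matlab-style structure, the direct numpy equivalent of that concatenation (a sketch, reusing fft and nfft from the code above, with integer division for Python 3) would be:
fft = np.concatenate(([fft[0]], 2*fft[1:nfft//2]))   # numpy counterpart of [FFT(1); 2*FFT(2:nFFT/2)]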
I am not a Matlab user, so I am not sure, but there are a few things I'd check to see if I can help you.
You called np.array after the array had already been made (ffft). That probably does not change the nature of the array in the way you hoped; perhaps it would be better to build it directly, e.g. np.array([n * deltaF for n in range(nfft/2-1)]). I am not sure of the formatting, but you get the idea. The other thing is that the range doesn't seem right to me - do you really want it to give 49 values?
Another one is fft = fft[0]+ [2*fft[1:nfft/2]] compared to FFT = [FFT(1); 2*FFT(2:nFFT/2)]; I am not sure whether the comparison is accurate, it just seems to be a different kind of construction to me.
Also, when I do these kinds of calculations, I print out the intermediate steps so I can compare the numbers and see where it breaks.
Hope this helps.
I found out that using np.fft.rfft instead of np.fft.fft and modifying the code as follows does the job:
import numpy as np
from matplotlib import pyplot as pl
x = np.genfromtxt("../Matlab/s.txt", delimiter=' ')
nfft = 100
fs = 24000
deltaF = fs/nfft
ffft = np.array([n * deltaF for n in range(nfft//2+1)])
window = np.hanning(len(x))
xw = np.multiply(x, window)
fft = np.fft.rfft(xw, nfft)/len(x)
fftabs = 20*np.log10(np.absolute(fft))
pl.figure()
pl.plot(np.transpose(ffft), fftabs)
pl.grid()
The resulting plot (the correct result, obtained with Python):
I can see that the first and last points, as well as the amplitudes, are not the same. It isn't a problem for me (I am more interested in the general shape), but if someone can explain, I'd be happy.

understanding pyresample to regrid irregular grid data to a regular grid

I need to regrid data from an irregular grid (Lambert conic) onto a regular grid. I think pyresample is my best bet; in fact my original lat, lon are not 1D (which seems to be needed to use basemap.interp or scipy.interpolate.griddata).
I found this SO answer helpful. However, I get empty interpolated data. I think it has to do with the choice of my radius of influence and with the fact that my data are wrapped (?).
This is my code:
import numpy as np
from matplotlib import pyplot as plt
import netCDF4
%matplotlib inline
url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/NARR/Dailies/monolevel/hlcy.2009.nc"
SRHtemp = netCDF4.Dataset(url).variables['hlcy'][0,::]
Y_n = netCDF4.Dataset(url).variables['y'][:]
X_n = netCDF4.Dataset(url).variables['x'][:]
T_n = netCDF4.Dataset(url).variables['time'][:]
lat_n = netCDF4.Dataset(url).variables['lat'][:]
lon_n = netCDF4.Dataset(url).variables['lon'][:]
lat_n and lon_n are irregular; they are the latitude and longitude corresponding to the projected coordinates x, y.
Because of the way lon_n is defined, I added:
lon_n[lon_n<0] = lon_n[lon_n<0]+360
so that if I plot them now they look fine:
Then I create my new set of regular coordinates:
XI = np.arange(148,360)
YI = np.arange(0,87)
XI, YI = np.meshgrid(XI,YI)
Following the answer above I wrote the following code:
from pyresample.geometry import SwathDefinition
from pyresample.kd_tree import resample_nearest
def_a = SwathDefinition(lons=XI, lats=YI)
def_b = SwathDefinition(lons=lon_n, lats=lat_n)
interp_dat = resample_nearest(def_b,SRHtemp,def_a,radius_of_influence = 70000,fill_value = -9.96921e+36)
The resolution of the data is about 30 km, so I set the radius of influence to 70 km; the fill_value I used is the one from the data, but of course I could just use zero or NaN.
However, I get an empty array.
What am I doing wrong? Also, if there is another way of doing this, I am interested in knowing it. The pyresample documentation is a bit thin, and I need a bit more help.
I did find this answer suggesting to use another griddata function:
import matplotlib.mlab as ml
resampled_data = ml.griddata(lon_n.ravel(), lat_n.ravel(),SRHtemp.ravel(),XI,YI,interp = "linear")
and it seems to be ok:
But I would like to understand more about pyresample, since it seems so powerful.
The problem is that XI and YI are integers, not floats. You can fix this by simply doing
XI = np.arange(148,360.)
YI = np.arange(0,87.)
XI, YI = np.meshgrid(XI,YI)
The inability to handle integer datatypes is an undocumented, unintuitive, and possibly buggy behavior from pyresample.
A few more notes on your coding style:
It's not necessary to overwrite the XI and YI variables; you don't gain much by doing so.
You should just load the netCDF dataset once and then access the variables via that object (see the sketch below).
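For the second point, a minimal sketch of what that looks like with the variables from the question:
ds = netCDF4.Dataset(url)       # open the dataset once
SRHtemp = ds.variables['hlcy'][0,::]
Y_n = ds.variables['y'][:]
X_n = ds.variables['x'][:]
T_n = ds.variables['time'][:]
lat_n = ds.variables['lat'][:]
lon_n = ds.variables['lon'][:]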
