Pylab breaks the plotted line wherever there are NaNs in the data. How can I get continuous lines instead? I think MATLAB does this automatically.
example:
from pylab import *
x = linspace(0,1,10)
y = rand(10,2)
y[5:8,1] = nan
plot(x,y,'.-')
I want y[:,1] interpolated but not y[:,0] since there are no points missing.
Select the rows without NaN:
from pylab import isnan
boolind = ~isnan(y).any(1)
then do:
plot(x[boolind], y[boolind])
If you want to substitute each NaN with a linearly interpolated value, you can record the positions of the NaNs and interpolate from the adjacent points. For plotting purposes, though, simply dropping the NaN data points is usually enough: matplotlib draws straight line segments between the remaining points, which amounts to linear interpolation anyway.
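For completeness, here is a minimal sketch of that fill-in approach using np.interp (assuming the x values are increasing):
from pylab import interp, isnan
bad = isnan(y[:, 1])
y[bad, 1] = interp(x[bad], x[~bad], y[~bad, 1])
plot(x, y, '.-')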
By the way: presumably your y = rand((10, 50)) should be y = rand(10, 50), although I am not sure why you want to plot a 2D array against a 1D one.
EDIT
For your particular question, you can simply plot the two columns of y separately:
from pylab import *
x = linspace(0,1,10)
y = rand(10,2)
y[5:8,1] = nan
boolind = ~isnan(y)
plot(x[boolind[:,0]],y[boolind[:,0], 0],'.-')
plot(x[boolind[:,1]],y[boolind[:,1], 1],'.-')
show()
Related
I have a number of spectra: wavelength/counts at a given temperature. The wavelength range is the same for each spectrum.
I would like to interpolate between temperature and counts to create a large grid of spectra (temperature and counts at a given wavelength range).
The code below is my current progress. When I try to get a spectrum for a given temperature, I only get one value of counts, when I need a whole range of counts representing the spectrum (I already know the wavelengths).
I think I am confused about arrays and interpolation. What am I doing wrong?
import pandas as pd
import numpy as np
from scipy import interpolate
image_template_one = pd.read_excel("mr_image_one.xlsx")
counts = np.array(image_template_one['counts'])
temp = np.array(image_template_one['temp'])
inter = interpolate.interp1d(temp, counts, kind='linear')
temp_new = np.linspace(30,50,0.5)
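# (note: the third argument of np.linspace is the number of samples, not the step size)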
counts_new = inter(temp_new)
I now think that I have two arrays, [wavelength, counts] and [wavelength, temperature]. Is this correct, and do I need to interpolate between the arrays?
Example data
I think what you want to achieve can be done with interp2d:
import numpy as np
import pandas as pd
from scipy import interpolate
# dummy data
data = pd.DataFrame({
    'temp': [30]*6 + [40]*6 + [50]*6,
    'wave': 3 * list(range(400, 460, 10)),
    'counts': np.random.uniform(.93, .95, 18),
})
# make the interpolator
inter = interpolate.interp2d(data['temp'], data['wave'], data['counts'])
# scipy's interpolators return functions,
# which you need to call with the values you want interpolated.
new_x, new_y = np.linspace(30,50,100), np.linspace(400,450,100)
interpolated_values = inter(new_x, new_y)
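To get a full spectrum at a single temperature, which is what the question asks for, you can evaluate the interpolator at one temperature over the whole wavelength grid; a sketch (note that interp2d is deprecated in recent SciPy releases in favor of RegularGridInterpolator, but assuming it is available):
spectrum_35 = inter(35, new_y).ravel()  # counts over all wavelengths at temp = 35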
GOAL
I have values v given at specific 3D coordinates x y z. The data is stored as a pandas dataframe:
x y z v
0 -68.5 68.50 -10.00 0.297845
1 -68.5 -23.29 61.10 0.148683
2 -68.5 -23.29 63.47 0.142325
3 -68.5 -23.29 65.84 0.135908
4 -68.5 -23.29 68.21 0.129365
... ... ... ...
91804 68.5 23.29 151.16 0.118460
91805 68.5 23.29 153.53 0.119462
91806 68.5 23.29 155.90 0.120386
91807 68.5 23.29 139.31 0.112257
91808 68.5 -68.50 227.00 0.127948
I would like to find the values at new coordinates that are not part of the dataframe, hence I am looking into how to efficiently interpolate the data.
What I have done:
Since the coordinates are on a grid, I can use interpn:
import numpy as np
from scipy.interpolate import interpn
# Extract the list of coordinates (I know that they are on a grid)
xs = np.array(df["x"].to_list())
ys = np.array(df["y"].to_list())
zs = np.array(df["z"].to_list())
# Extract the associated values
vs = np.array(df["v"].to_list())
Reshape the data to fit the scipy function:
points = (np.unique(xs), np.unique(ys), np.unique(zs))
values = vs.reshape(len(np.unique(xs)), len(np.unique(ys)), len(np.unique(zs)))
To test the interpolation, I would like to see if I get the same values back, if I put in the same points as the original points:
request = (xs,ys,zs)
output = interpn(points, values, request)
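A quick check of whether the round trip worked:
print(np.allclose(output, vs))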
... BUT
I am wondering: what am I doing wrong?
Other:
Dataset
Please find the complete dataset here: https://filebin.net/u10lrw956enqhg5i
Visualization
from mayavi import mlab
# Create figure
fig = mlab.figure(1, fgcolor=(0, 0, 0), bgcolor=(0, 0, 0))
mlab.points3d(xs,ys,zs,output)
mlab.view(azimuth=270, elevation=90, roll=180, figure=fig)
# View plot
mlab.show()
I strongly suspect that your data, while on a grid, is not ordered so as to allow a simple reshape of the values. You have two solutions available, both involving reordering the data in different ways.
Solution 1
Since you're already using np.unique to extract the grid, you can get the correct ordering of vs using the return_inverse parameter:
px, ix = np.unique(xs, return_inverse=True)
py, iy = np.unique(ys, return_inverse=True)
pz, iz = np.unique(zs, return_inverse=True)
points = (px, py, pz)
values = np.empty_like(vs, shape=(px.size, py.size, pz.size))
values[ix, iy, iz] = vs
return_inverse is sort of magical, largely because it's so counterintuitive. In this case, for each element of vs, it tells you which unique, sorted grid location it corresponds to.
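A tiny example makes the behaviour concrete:
u, inv = np.unique([30., 10., 30., 20.], return_inverse=True)
# u   -> array([10., 20., 30.])
# inv -> array([2, 0, 2, 1]), so u[inv] reconstructs the original array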
By the way, if you are missing grid elements, you may want to replace np.empty_like(vs, shape=(px.size, py.size, pz.size)) with either np.zeros_like(vs, shape=(px.size, py.size, pz.size)) or np.full_like(vs, np.nan, shape=(px.size, py.size, pz.size)). In the latter case, you could interpolate away the NaNs in the grid first.
Solution 2
The more obvious solution would be to rearrange the indices so you can reshape vs as you tried to do. That only works if you're sure that there are no missing grid elements. The easiest way would be to sort the whole dataframe, since the pandas methods are less annoying than np.lexsort (IMO):
df.sort_values(['x', 'y', 'z'], inplace=True, ignore_index=True)
When you extract, do it efficiently:
xs, ys, zs, vs = df.to_numpy().T
Since everything is sorted, you don't need np.unique to identify the grid any more. The number of unique x values is:
nx = np.count_nonzero(np.diff(xs)) + 1
And the unique values are:
bx = xs.size // nx
ux = xs[::bx]
y values go through a full cycle every bx elements, so
ny = np.count_nonzero(np.diff(ys[:bx])) + 1
by = bx // ny
uy = ys[:bx:by]
And for z (bz == 1):
nz = by
uz = zs[:nz]
Now you can construct your original arrays:
points = (ux, uy, uz)
values = vs.reshape(nx, ny, nz)
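Either way, the round-trip test from the question should now return the original values; a quick check:
output = interpn(points, values, (xs, ys, zs))
assert np.allclose(output, vs)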
I would like to make a 3D plot with Pandas / Matplotlib (wireframe or anything else, I do not mind), but in a specific way.
I am using RFID sensors and I am trying to record the signal I receive at different distances and different angles, and I want to see how the signal relates to both the distance and the angle.
That is why I want to plot in 3D:
X axis -> the distance, Y axis -> the angle, Z axis -> the received signal (a float)
The CSV file from which I generate my DataFrame is organized as a double-entry table:
Distance;0;23;45;90;120;180
0;-53.145;-53.08;-53.1;-53.035;-53.035;-53.035
5;-53.145;-53.145;-53.05;-53.145;-53.145;-53.145
15;-53.145;-53.145;-53.145;-53.145;-53.145;-53.145
25;-53.145;-52.145;-53.145;-53.002;-53.145;-53.145
40;-53.145;-53.002;-51.145;-53.145;-54.255;-53.145
60;-53.145;-53.145;-53.145;-53.145;-53.145;-53.145
80;-53.145;-53.145;-53.145;-53.145;-60;-53.145
100;-53.145;-52;-53.145;-54;-53.145;-53.145
120;-53.145;-53.145;-53.145;-53.145;-53.002;-53.145
140;-51.754;-53.145;-51.845;-53.145;-53.145;-53.145
160;-53.145;-53.145;-49;-53.145;-53.145;-53.145
180;-53.145;-53.145;-53.145;-53.145;-53.145;-53.002
200;-53.145;-53.145;-53.145;-53.145;-53.145;-53.145
The first (label) row holds the different angles: 0°, 23°, 45°, ...
The index of the DataFrame holds the distances: 0 cm, 15 cm, ...
The matrix inside holds the signal values, i.e. the Z-axis values.
But I do not know how to generate a 3D scatter or wireframe, because every tutorial I have seen uses specific columns as the axes.
Indeed, in my CSV file the first row holds the labels of all columns:
Distance;0;23;45;90;120;180
and I do not know how to generate a 3D plot from such a double-entry table.
Do you know how to do it, or how to generate my CSV file in a better way that gives the same result in the end?
I would be grateful for any help. Thank you!
Maybe a contour plot is enough:
import numpy as np
import matplotlib.pyplot as plt
b = np.array([0,5,15,25,40,60,80,100,120,140,160,180,200])  # distances
a = np.array([0,23,45,90,120,180])                          # angles
x, y = np.meshgrid(a, b)
z = np.random.randint(-50, -40, x.shape)  # dummy signal values
scm = plt.contourf(x, y, z, cmap='inferno')
plt.colorbar(scm)
plt.xticks(a)
plt.yticks(b)
plt.xlabel('Angle')     # the angles run along the x axis here
plt.ylabel('Distance')
plt.show()
This displays a filled contour plot of the dummy data.
You can get a contour plot with something like this (but for the data shown it is not very interesting, since the values are nearly constant at about -53):
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('data.csv', sep=';')  # 'data.csv' stands in for your file
df = df.set_index('Distance')
x = df.index
y = df.columns.astype(int)
z = df.values
X,Y = np.meshgrid(x,y)
Z = z.T
plt.contourf(X,Y,Z,cmap='jet')
plt.colorbar()
plt.show()
Welcome to Stack Overflow! Your question can be split into several steps.
Step 1 - read the data
I have stored your data in a file called data.txt.
I don't know Pandas very well, but this can also be handled with NumPy's nice, simple loadtxt function. Your data is a bit problematic because of the text value 'Distance' in the first row and first column, but don't panic: we load the file as a matrix of strings:
raw_data = np.loadtxt('data.txt', delimiter=';', dtype=str)
Step 2 - transform the raw data
To extract the wanted data from the raw data we can do the following:
angle = raw_data[0 , 1:].astype(float)
distance = raw_data[1:, 0 ].astype(float)
data = raw_data[1:, 1:].astype(float)
By indexing the raw data we select the parts we want, and with astype we convert the string values to numbers.
Intermediate step - making the data a bit fancier
Your data was a bit boring (almost everywhere the same value), so I took the liberty of making it a bit fancier:
data = (50 + angle[np.newaxis,:]) / (10 + np.sqrt(distance[:,np.newaxis]))
Step 3 - make a wireframe plot
The example at matplotlib.org looks easy enough:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(X, Y, Z)
plt.show()
But the trick is to get the X, Y, Z parameters right...
Step 4 - make the X, Y and Z data
The Z data is simply our data values:
Z = data
X and Y should also be 2D arrays, such that for each value of Z, plot_wireframe can find the corresponding x and y at the same locations in the 2D arrays X and Y. There is a NumPy function to create these 2D arrays:
X, Y = np.meshgrid(angle, distance)
Step 5 - fancying it up a bit
ax.set_xticks(angle)
ax.set_yticks(distance[::2])
ax.set_xlabel('angle')
ax.set_ylabel('distance')
Putting it together
All steps together in the right order:
# necessary imports...
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
raw_data = np.loadtxt('data.txt', delimiter=';', dtype=str)
angle = raw_data[0 , 1:].astype(float)
distance = raw_data[1:, 0 ].astype(float)
data = raw_data[1:, 1:].astype(float)
# make the example data a bit more interesting...
data = (50 + angle[np.newaxis,:]) / (10 + np.sqrt(distance[:,np.newaxis]))
# setting up the plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# the tricky part: creating the data that plot_wireframe wants
Z = data
X, Y = np.meshgrid(angle, distance)
ax.plot_wireframe(X, Y, Z)
# fancying it up a bit
ax.set_xticks(angle)
ax.set_yticks(distance[::2])
ax.set_xlabel('angle')
ax.set_ylabel('distance')
# and showing the plot ...
plt.show()
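Since the question mentions Pandas: steps 1 and 2 can also be done with pandas.read_csv, as in the other answers; a sketch, assuming the same data.txt:
import pandas as pd
df = pd.read_csv('data.txt', sep=';', index_col='Distance')
angle = df.columns.astype(float).to_numpy()
distance = df.index.to_numpy(dtype=float)
data = df.to_numpy(dtype=float)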
I have this code to generate a surface plot, but it gives a ZeroDivisionError and I am not able to figure out what is wrong. Thank you.
import pylab, csv
import numpy
from mayavi.mlab import *
def getData(fileName):
    try:
        data = csv.reader(open(fileName, 'rb'))
    except:
        print 'File not found'
    else:
        data = [[float(row[0]), float(row[1]), float(row[2])] for row in data]
        x = [row[0] for row in data]
        y = [row[1] for row in data]
        z = [row[2] for row in data]
        return (x, y, z)

def plotData(fileName):
    xVals, yVals, zVals = getData(fileName)
    xVals = pylab.array(xVals)
    yVals = pylab.array(yVals)
    zVals = pylab.array(zVals) * 10**3
    x, y = numpy.mgrid[-0.5:0.5:0.001, -0.5:0.5:0.001]
    s = surf(x, y, zVals)
    return s

plotData('data')
If I have understood the code correctly, there is a problem with zVals in mayavi.mlab.surf.
According to the documentation of the function, s is the elevation matrix, a 2D array, where indices along the first array axis represent x locations and indices along the second array axis represent y locations. Your file reader, however, seems to return a 1D vector instead of a 2D array.
However, this may not be the most difficult problem. Your file seems to contain triplets of x, y, and z coordinates. You can use mayavi.mlab.surf only if your x and y coordinates in the file form a regular square grid. If this is the case, then you just have to recover that grid and form nice 2D arrays of all three parts. If the points are in the file in a known order, it is easy, otherwise it is rather tricky.
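For the easy case, a minimal sketch, assuming the points are stored row by row with a fixed (hypothetical) number of points per row, ncols:
import numpy
ncols = 100  # hypothetical: number of grid points per row in your file
X = numpy.asarray(xVals).reshape(-1, ncols)
Y = numpy.asarray(yVals).reshape(-1, ncols)
Z = numpy.asarray(zVals).reshape(-1, ncols)
s = surf(X, Y, Z)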
Maybe you would want to start with mayavi.mlab.points3d(xVals, yVals, zVals). That will give you an overall impression of your data. (Or if already know more about your data, you might give us a hint by editing your question and adding more information!)
Just to give you an idea of a slightly more Pythonic style of writing this, your code is rewritten (and surf replaced) in the following:
import mayavi.mlab as ml
import numpy
def plot_data(filename):
    data = numpy.loadtxt(filename)
    xvals = data[:, 0]
    yvals = data[:, 1]
    zvals = data[:, 2] * 1000.
    return ml.points3d(xvals, yvals, zvals)
plot_data('data')
(Essential changes: the use of numpy.loadtxt, get rid of pylab namespace here, no import *, no CamelCase variable or function names. For more information, see PEP 8.)
If you only need to see the shape of the surface, and the data in the file is ordered row-by-row and with the same number of data points in each row (i.e. fixed number of columns), then you may use:
import mayavi.mlab as ml
import numpy
import matplotlib.pyplot as plt

# whatever you have as the number of points per row
columns = 13

data = numpy.loadtxt('data')  # the same data file as above
# draw the data points in the XY plane to check that they really form a rectangular grid:
plt.plot(data[:, 0], data[:, 1])
plt.show()

# draw the surface
zvals = data[:, 2].reshape(-1, columns)
ml.surf(zvals, warp_scale='auto')
As you can see, this code allows you to check that your values really are in the right kind of grid. It does not check that they are in the correct order, but at least you can see they form a nice grid. Also, you have to input the number of columns manually. The keyword warp_scale takes care of the surface scaling so that it should look reasonable.
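If you would rather not hard-code columns, it can often be inferred from the data; a sketch, assuming the first column holds the coordinate that stays constant within each grid row:
columns = int(numpy.count_nonzero(data[:, 0] == data[0, 0]))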
The image (test.tif) is attached.
The np.nan values are the whitest regions.
How can I fill those whitest regions using a gap-filling algorithm that uses values from the neighbours?
from scipy import ndimage
data = ndimage.imread('test.tif')
As others have suggested, scipy.interpolate can be used. However, it requires fairly extensive index manipulation to get this to work.
Complete example:
from pylab import *
import numpy
import scipy.ndimage
import scipy.interpolate
data = scipy.ndimage.imread('data.png')
# a boolean array of shape (height, width): False where values are missing, True where they are valid (non-missing)
mask = ~( (data[:,:,0] == 255) & (data[:,:,1] == 255) & (data[:,:,2] == 255) )
# array of (number of points, 2) containing the x,y coordinates of the valid values only
xx, yy = numpy.meshgrid(numpy.arange(data.shape[1]), numpy.arange(data.shape[0]))
xym = numpy.vstack( (numpy.ravel(xx[mask]), numpy.ravel(yy[mask])) ).T
# the valid values in the first, second, third color channel, as 1D arrays (in the same order as their coordinates in xym)
data0 = numpy.ravel( data[:,:,0][mask] )
data1 = numpy.ravel( data[:,:,1][mask] )
data2 = numpy.ravel( data[:,:,2][mask] )
# three separate interpolators for the separate color channels
interp0 = scipy.interpolate.NearestNDInterpolator( xym, data0 )
interp1 = scipy.interpolate.NearestNDInterpolator( xym, data1 )
interp2 = scipy.interpolate.NearestNDInterpolator( xym, data2 )
# interpolate the whole image, one color channel at a time
result0 = interp0(numpy.ravel(xx), numpy.ravel(yy)).reshape( xx.shape )
result1 = interp1(numpy.ravel(xx), numpy.ravel(yy)).reshape( xx.shape )
result2 = interp2(numpy.ravel(xx), numpy.ravel(yy)).reshape( xx.shape )
# combine them into an output image
result = numpy.dstack( (result0, result1, result2) )
imshow(result)
show()
Output:
This passes to the interpolator all values we have, not just the ones next to the missing values (which may be somewhat inefficient). It also interpolates every point in the output, not just the missing values (which is extremely inefficient). A better way is to interpolate just the missing values, and then patch them into the original image. This is just a quick working example to get started :)
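A sketch of that patch-in optimization, reusing the arrays defined above (evaluate the interpolators only at the missing pixels and copy the results into the original image):
missing = ~mask
result = data.copy()
result[:, :, 0][missing] = interp0(xx[missing], yy[missing])
result[:, :, 1][missing] = interp1(xx[missing], yy[missing])
result[:, :, 2][missing] = interp2(xx[missing], yy[missing])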
I think viena's question is more related to an inpainting problem.
Here are some ideas:
In order to fill gaps in binary (B/W) images you can use a filling algorithm like scipy.ndimage.morphology.binary_fill_holes. But you have a gray-level image, so you can't use it.
I suppose that you don't want to use a complex inpainting algorithm. My first suggestion is: don't use the nearest gray value (you don't know the real value of the NaN pixels); it will produce a dirty result. Instead, I would suggest filling the gaps with some other value, e.g. the column mean. You can do it without much coding by using scikit-learn:
Source:
>>> from sklearn.preprocessing import Imputer
>>> imp = Imputer(strategy="mean")
>>> a = np.random.random((5,5))
>>> a[(1,4,0,3),(2,4,2,0)] = np.nan
>>> a
array([[ 0.77473361, 0.62987193, nan, 0.11367791, 0.17633671],
[ 0.68555944, 0.54680378, nan, 0.64186838, 0.15563309],
[ 0.37784422, 0.59678177, 0.08103329, 0.60760487, 0.65288022],
[ nan, 0.54097945, 0.30680838, 0.82303869, 0.22784574],
[ 0.21223024, 0.06426663, 0.34254093, 0.22115931, nan]])
>>> a = imp.fit_transform(a)
>>> a
array([[ 0.77473361, 0.62987193, 0.24346087, 0.11367791, 0.17633671],
[ 0.68555944, 0.54680378, 0.24346087, 0.64186838, 0.15563309],
[ 0.37784422, 0.59678177, 0.08103329, 0.60760487, 0.65288022],
[ 0.51259188, 0.54097945, 0.30680838, 0.82303869, 0.22784574],
[ 0.21223024, 0.06426663, 0.34254093, 0.22115931, 0.30317394]])
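Note that in recent scikit-learn versions Imputer has been removed; the drop-in replacement is:
>>> from sklearn.impute import SimpleImputer
>>> imp = SimpleImputer(strategy="mean")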
The dirty solution that uses the nearest values could look like this:
1) Find the perimeter points of the NaN regions
2) Compute all the distances between the NaN points and the perimeter
3) Replace the NaNs with the nearest's point gray value
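Those three steps can be implemented compactly with SciPy's Euclidean distance transform, which can return, for every pixel, the indices of the nearest valid pixel; a sketch:
import numpy as np
from scipy import ndimage

def fill_nearest(img):
    mask = np.isnan(img)
    # for each pixel, the indices of the nearest pixel where mask is False (i.e. valid)
    idx = ndimage.distance_transform_edt(mask, return_distances=False, return_indices=True)
    return img[tuple(idx)]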
If you want values from the nearest neighbours, you could use NearestNDInterpolator from scipy.interpolate. There are other interpolators you can consider as well.
You can locate the X,Y index values for the NaN values with:
import numpy as np
nan_locs = np.where(np.isnan(data))
There are some other options for the interpolation as well. One option is to replace NaN values with the results of a median filter (but your areas are rather large for this). Another option might be grayscale dilation. The right choice depends on your problem domain.
If you haven't used a SciPy N-D interpolator before: you provide X, Y, and value data to fit the interpolator, and then X and Y data for the points you want interpolated. You can do this using the np.where example above as a template.
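A sketch of that recipe, assuming data is a 2D float array containing NaNs:
import numpy as np
from scipy.interpolate import NearestNDInterpolator

valid = ~np.isnan(data)
# fit on the (row, col) coordinates of the valid pixels...
interp = NearestNDInterpolator(np.argwhere(valid), data[valid])
# ...and evaluate only at the NaN locations
filled = data.copy()
filled[~valid] = interp(np.argwhere(~valid))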
OpenCV has some image in-painting algorithms that you could use. You just need to provide a binary mask which indicates which pixels should be in-painted.
import cv2
import numpy as np
from scipy import ndimage

data = ndimage.imread("test.tif")

# cv2.inpaint expects an 8-bit image and an 8-bit single-channel mask whose
# nonzero pixels mark the region to inpaint (scaling to the 0-255 range may
# be needed, depending on your data)
mask = np.isnan(data).astype(np.uint8)
img = np.nan_to_num(data).astype(np.uint8)
inpainted_img = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)