Matplotlib heatmap with changing y-values - python

I'm trying to plot some data for a measurement taken from between two surfaces. The z-direction in the system is defined as normal to the surfaces. The problem is that along the x-axis of my plot I'm varying the separation distance between the two surfaces, which means that for every slice the min/max of the y-axis change. I've sort of circumvented this by presenting a normalized y-axis where z_min is the bottom surface and z_max is the top surface:
However, this representation somewhat distorts the data. Ideally I would like to show the actual distance to the wall on the y-axis and just leave the areas outside of the system bounds white. I (poorly) sketched what I'm envisioning here (the actual distribution on the heatmap should look different, of course):
I can pretty easily plot what I want as a 3D scatter plot like so:
But how do I get the data into a plot-able form for a heatmap?
I'm guessing I would have to blow up the MxN array and fill in missing values through interpolation or simply mark them as NAN? But then I'm also not quite sure how to add a hard cutoff to my color scheme to make everything outside of the system white.

You can do this with pcolormesh, which takes the corners of the quadrilaterals as its arguments:
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 2*np.pi, 150))
h = np.sin(Y)
# squeeze the y-range column by column so the gap narrows along x
Y *= np.linspace(.5, 1, 100)
fig, ax = plt.subplots(1, 1)
ax.pcolormesh(X, Y, h, shading='auto')  # shading='auto' avoids the flat-shading size error in newer Matplotlib
plt.show()
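To also get the white region outside the system bounds that the question asks about, one option (a sketch layered on top of the answer above, with a made-up z_max purely for illustration) is to mask the out-of-bounds values and tell the colormap to draw masked cells in white:
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 2*np.pi, 150))
h = np.sin(Y)
Y *= np.linspace(.5, 1, 100)

# assumed upper wall position per column -- replace with your real z_max(x)
z_max = 0.75 * Y.max(axis=0)
h_masked = np.ma.masked_where(Y > z_max, h)

cmap = plt.get_cmap('viridis').copy()
cmap.set_bad(color='white')  # masked (out-of-system) cells render as white

fig, ax = plt.subplots(1, 1)
ax.pcolormesh(X, Y, h_masked, shading='auto', cmap=cmap)
plt.show()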

Below is an implementation with triangular mesh contouring, based on CT Zhu's example.
If your domain is not convex, you will need to provide your own triangles to the triangulation, since the default Delaunay triangulation meshes the convex hull of your points (a possible workaround is sketched after the code below).
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as mtri
y = np.array([np.linspace(-i, i, 51) for i in np.linspace(5, 10)[::-1]])
x = np.zeros((50, 51)) + np.linspace(1, 6, 50)[..., np.newaxis]
z = np.zeros((50, 51)) - np.linspace(-5, 5, 51)**2 + 10  # make up some z data
x = x.flatten()
y = y.flatten()
z = z.flatten()
print("x shape: ", x.shape)
triang = mtri.Triangulation(x, y)
plt.tricontourf(triang, z)
plt.colorbar()
plt.show()
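If the domain is non-convex and you don't want to build the triangle list by hand, one possible workaround (a sketch, not part of the original answer; the threshold is a tunable assumption) is to mask out the thin sliver triangles that Delaunay adds across concave regions, continuing from the snippet above:
analyzer = mtri.TriAnalyzer(triang)
mask = analyzer.get_flat_tri_mask(min_circle_ratio=0.05)  # assumed threshold, adjust to your data
triang.set_mask(mask)
plt.tricontourf(triang, z)
plt.colorbar()
plt.show()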

I guess maybe 2D interpolation using griddata is what you want?
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.mlab import griddata  # note: removed in Matplotlib 3.1+; see the SciPy sketch below

xi = np.linspace(1, 5, 100)
yi = np.linspace(-10.5, 10.5, 100)
y = np.array([np.linspace(-i, i, 51) for i in np.linspace(5, 10)[::-1]])  # make up some y vectors with different ranges
x = np.zeros((50, 51)) + np.linspace(1, 6, 50)[..., np.newaxis]
z = np.zeros((50, 51)) - np.linspace(-5, 5, 51)**2 + 10  # make up some z data
x = x.flatten()
y = y.flatten()
z = z.flatten()
zi = griddata(x, y, z, xi, yi)
plt.contourf(xi, yi, zi, levels=np.unique(-np.linspace(-5, 5, 51)**2 + 10))  # levels must be increasing
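matplotlib.mlab.griddata was removed in Matplotlib 3.1, so on a current install the same idea can be sketched with scipy.interpolate.griddata instead (roughly equivalent, untested against the original data):
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

y = np.array([np.linspace(-i, i, 51) for i in np.linspace(5, 10)[::-1]])
x = np.zeros((50, 51)) + np.linspace(1, 6, 50)[..., np.newaxis]
z = np.zeros((50, 51)) - np.linspace(-5, 5, 51)**2 + 10
x, y, z = x.flatten(), y.flatten(), z.flatten()

xi = np.linspace(1, 5, 100)
yi = np.linspace(-10.5, 10.5, 100)
Xi, Yi = np.meshgrid(xi, yi)
zi = griddata((x, y), z, (Xi, Yi), method='linear')  # NaN outside the convex hull

plt.contourf(Xi, Yi, zi, levels=np.linspace(np.nanmin(zi), np.nanmax(zi), 20))
plt.colorbar()
plt.show()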

Related

How to put a 'grid' (for example dividing the x-y plane into bins) on an image to calculate mean of the z-values in every bin and plot as a heatmap?

My aim:
I have x, y and z values as arrays. For example:
x=np.array([10,2,-4,12,3,6,8,14])
y=np.array([5,5,-6,8,20,10,2,2])
z=np.array([4,6,10,40,22,14,20,8])
I want to plot a heatmap where the z-values will act as the intensity or 'weight' for every pair of (x,y) and the axes will be x and y values. So, my plot will be on a x-y plane. I want to lay a 'grid' on top of my plot by dividing my x-y plane into bins and then calculate the mean of the z-values within every bin and use that mean value as my color or intensity for that bin. I also want to make another plot but there I want to plot the variance of z-values as the intensity within the bins.
What I have done:
I coded it the following way, but I think I am misinterpreting things... I don't think I understand bins etc. well (I am new to programming).
import numpy as np
import matplotlib.pyplot as plt
x=np.array([10,2,-4,12,3,6,8,14])
y=np.array([5,5,-6,8,20,10,2,2])
z=np.array([4,6,10,40,22,14,-20,8])
# Bin the data onto a 2x2 grid
# Have to reverse x & y due to row-first indexing
zi, yi, xi = np.histogram2d(y, x, bins=(2,2), weights=z)  # the old 'normed=False' argument is the default and has been removed from newer NumPy
counts, _, _ = np.histogram2d(y, x, bins=(2,2))
#to get mean divide by counts
zi = zi / counts
print(zi)
zi = np.ma.masked_invalid(zi)
fig, ax = plt.subplots()
sc=ax.pcolormesh(xi, yi, zi, edgecolors='black')
sct = ax.scatter(x, y, c=z, s=200) #shows the points in the bins
fig.colorbar(sc)
ax.margins(0.05)
plt.show()
Where I am stuck:
I am not even sure if the above code is doing the right thing. So, feel free to forget it and advise me on any other standard way of solving this problem.
With the above code I get a plot where the axes limits are determined by the given dataset automatically but I want to keep my axes constant at xmin=-20,xmax=20,ymin=-20,ymax=20.
Also, I am not sure how to manipulate the z-values within the bins to calculate other statistical quantities like variance or standard deviation etc.
EDIT: So, I have got some better code that gives the mean z values in bins and plots them using np.histogram2d, and I can set the axes etc. to my liking now. But this gives H as the sum of values in each bin; I can get the mean from that, but not other statistical quantities like variance. I wanted a way to code this so that I have access to the values in each bin, can calculate their variance, and can use that result as the weight/intensity of the heatmap.
I am attaching the plot for mean z in bins.
import numpy as np
import matplotlib.pyplot as plt
x=np.array([10,2,4,12,3,6,8,14])
y=np.array([5,5,6,8,20,10,2,2])
z=np.array([4,6,10,40,22,14,20,8])
x_bins = np.linspace(0, 20, 3)
y_bins = np.linspace(0, 20, 3)
H, xedges, yedges = np.histogram2d(x, y, bins = [x_bins, y_bins], weights = z)
H_counts, xedges, yedges = np.histogram2d(x, y, bins = [x_bins, y_bins])
print(H)
H1 = H/H_counts
print(H1)
plt.xlabel("x")
plt.ylabel("y")
plt.imshow(H1.T, origin='lower', cmap='RdBu',
extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
plt.colorbar().set_label('mean z', rotation=270)
EDIT 2: When I use stats for standard deviation I get the following plot
The deep red bin on the top right is actually empty and has no z values, so I want the standard deviation to be NaN instead of being assigned a value of 0. How can I do that?
My code for this plot is:
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt
x=np.array([10,2,4,12,3,6,8,14])
y=np.array([5,5,6,8,20,10,2,2])
z=np.array([4,6,10,40,22,14,20,8])
x_bins = np.linspace(0, 20, 3)
y_bins = np.linspace(0, 20, 3)
H, xedges, yedges = np.histogram2d(x, y, bins = [x_bins, y_bins], weights = z)
#mean = stats.binned_statistic_2d(x,y,z,statistic='',bins=[x_bins,y_bins])
#mean.statistic
std = stats.binned_statistic_2d(x,y,z,statistic='std',bins=[x_bins,y_bins])
#std.statistic
#print(std.statistic)
plt.xlabel("x")
plt.ylabel("y")
plt.imshow(std.statistic.T, origin='lower', cmap='RdBu',
extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
#plt.clim(0, 20)
plt.colorbar().set_label('std z', rotation=270)
Your data need to be interpolated onto a regular grid, since your computer does not know the z value where there is no data point. Luckily there is already a function for that: scipy.interpolate.griddata.
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
# Dummy data
x=np.array([10,2,-4,12,3,6,8,14])
y=np.array([5,5,-6,8,20,10,2,2])
z=np.array([4,6,10,40,22,14,-20,8])
# Create a regular grid along x and y axis
grid_x, grid_y = np.mgrid[x.min():x.max()+1, y.min():y.max()+1]
# Linear interpolation
# But you could also use a cubic interpolation or whatever you want/need
z_interpolated = griddata((x,y), z, (grid_x, grid_y), method='linear')
# Plot the result:
plt.imshow(z_interpolated, cmap='plasma')
And we obtain:
Notice that there are no values on the image boundary, because your spatial domain is not defined beyond the values contained in x and y, so with a linear interpolation your computer cannot guess what the value should be beyond those points. The heatmap is therefore restricted to the convex hull formed by your points; anything else would be extrapolation.
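If you do want values outside the convex hull anyway, a common trick (a sketch, with the obvious caveat that this is extrapolation) is a second pass with method='nearest', used only where the linear result is NaN:
z_linear = griddata((x, y), z, (grid_x, grid_y), method='linear')
z_nearest = griddata((x, y), z, (grid_x, grid_y), method='nearest')
z_filled = np.where(np.isnan(z_linear), z_nearest, z_linear)  # fill outside-hull cells with nearest-neighbour values
plt.imshow(z_filled, cmap='plasma')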
Edit:
If you need to compute a two-dimensional binned statistic, you can use:
scipy.stats.binned_statistic_2d()
In your case if we want to compute the variance and the mean:
from scipy import stats
std = stats.binned_statistic_2d(x,y,z,statistic='std',bins=[x_bins,y_bins])
mean = stats.binned_statistic_2d(x,y,z,statistic='mean',bins=[x_bins,y_bins])
where mean is exactly equivalent to your H/H_counts.
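To get NaN (rendered as blank) instead of a number in the empty bin from EDIT 2, one option (a sketch building on the code above) is to mask bins with zero counts before plotting:
counts, _, _ = np.histogram2d(x, y, bins=[x_bins, y_bins])
std_vals = np.where(counts == 0, np.nan, std.statistic)  # empty bins become NaN
plt.imshow(std_vals.T, origin='lower', cmap='RdBu',
           extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
plt.colorbar().set_label('std z', rotation=270)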

Data points falling outside the meshgrid being interpolated over, while the meshgrid certainly covers those points

I am trying to interpolate sparse data over a meshgrid, but am observing some rather odd behavior. The white dots are precisely where I have values, and I am relying on the linear interpolation algorithm to fill in the other grids where possible. I recognize that this type of interpolation is not perfect due to the obvious lack of data, but how come some of the points where I have data fall outside the meshgrid that I am interpolating over? Is this a common phenomenon? This doesn't change even if I make the grid coarser.
I would appreciate some insight into why this happens, (perhaps how the linear interpolation works), or if there are any ways to fix this. See the red circles in the picture below for example:
Data points provided for interpolation falling outside the meshgrid that is interpolated over
The following is some code on the interpolation that generated the gridded data.
#mesh grid
xg = np.linspace(-130, -60, num=70)
yg = np.linspace(20,50,num=30)
Xg,Yg = np.meshgrid(xg,yg)
zg1 = griddata(points1, df2['tempratio'], (Xg, Yg), method = 'linear')
from mpl_toolkits.basemap import Basemap
lon_0 = xg.mean()
lat_0 = yg.mean()
m = Basemap(width=5000000, height=3500000,
resolution='l', projection='stere',\
lat_ts=40, lat_0=lat_0, lon_0=lon_0)
xm, ym = m(Xg, Yg)
cs = m.pcolormesh(xm,ym,zg1,shading='flat',cmap=plt.cm.Reds)
griddata assigns values to the vertices of a grid, so 70x30 points. pcolormesh doesn't color vertices, but the rectangles in between, and there are only 69x29 rectangles formed by the given vertices, so one row and one column of zg1 get dropped. To counter that, an extra row and an extra column can be added to the coordinates, shifting everything half a rectangle in each direction.
It still doesn't force griddata to include all given points, but it goes a step towards the desired outcome. A denser grid can also help. (Choosing 'nearest' instead of 'linear' interpolation would fill the complete grid.)
Here is some code to illustrate what's happening:
import numpy as np
from scipy.interpolate import griddata
from matplotlib import pyplot as plt

def extend_range(x):
    dx = (x[1] - x[0]) / 2
    return np.append(x - dx, x[-1] + dx)

N = 10
points1 = np.vstack([np.random.randint(-130, -60, N), np.random.randint(20, 50, N)]).T
tempratio = np.random.randint(0, 20, N)
xg = np.linspace(-130, -60, num=15)
yg = np.linspace(20, 50, num=10)
Xg, Yg = np.meshgrid(xg, yg)
zg1 = griddata(points1, tempratio, (Xg, Yg), method='linear')
fig, axs = plt.subplots(ncols=2, figsize=(12, 4))
for ax in axs:
    ax.scatter(Xg, Yg, c=zg1, cmap='coolwarm', ec='g', s=80, zorder=2, label='griddata')
    ax.scatter(points1[:, 0], points1[:, 1], c=tempratio, cmap='coolwarm', ec='black', s=150, zorder=3, label='given data')
    if ax == axs[0]:
        # newer Matplotlib requires dropping a row and column explicitly for shading='flat'
        ax.pcolormesh(xg, yg, zg1[:-1, :-1], shading='flat', cmap='coolwarm')
        ax.set_title('given x and y ranges')
    else:
        # todo: convert xg and yg to map coordinates
        ax.pcolormesh(extend_range(xg), extend_range(yg), zg1, shading='flat', cmap='coolwarm')
        ax.set_title('extended x and y ranges')
    ax.legend()
plt.show()
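As a side note (not part of the answer above): Matplotlib 3.3+ also accepts shading='nearest', which treats the given coordinates as cell centers and draws a full 70x30 grid of rectangles without manually extending the ranges:
ax.pcolormesh(xg, yg, zg1, shading='nearest', cmap='coolwarm')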

matplotlib contour of sparse (regular) data shows artefacts

I would like to contour data that are quite sparse and where a maximum is going diagonally through the picture; the matplotlib contour function invents minima between the sampled maxima.
Starting with the densely sampled case where everything looks as expected:
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import numpy as np
x_1D = np.linspace(0., 10., 100)
y_1D = np.linspace(0., 10., 100)
x, y = np.meshgrid(x_1D, y_1D)
z = np.empty_like(x)
def peak(y, y0):
    return np.exp(-(y-y0)**2)

for i in range(x_1D.size):
    z[:,i] = peak(y_1D, i/x_1D.size*y_1D.max())
fig, ax = plt.subplots(ncols=3)
ax[0].set_title('measured data')
ax[0].scatter(x, y, marker='s', c=z, cmap=plt.cm.jet, s=25)
ax[1].set_title('contour')
ax[1].contourf(x, y, z, levels=14, cmap=plt.cm.jet)
# define grid
xi = np.linspace(x_1D.min()-0.1, x_1D.max()+0.1, 1000)
yi = np.linspace(y_1D.min()-0.1, y_1D.max()+0.1, 1000)
# grid the data
triang = tri.Triangulation(x.flatten(), y.flatten())
interpolator = tri.LinearTriInterpolator(triang, z.flatten())
Xi, Yi = np.meshgrid(xi, yi)
zi = interpolator(Xi, Yi)
ax[2].set_title('interpolated')
ax[2].contourf(xi, yi, zi, levels=14, cmap=plt.cm.jet)
plt.show()
yields
When x is sampled 10 times more coarsely, i.e. x_1D = np.linspace(0., 10., 10), minima appear between the sampled maxima in the contour plot.
Is there a way to avoid this artefact and make the contour of the sparsely sampled data look like the one of the densely sampled data?
EDIT: Thanks for the answer that works very nicely on the example I provided. Unfortunately, I have simplified the problem too far. Rather than talking about one diagonal line, I should have enquired about an arbitrary number of peaks moving in arbitrary directions through the picture; e.g. replace the peak-generation by
z = np.zeros_like(x)

def peak(y, y0):
    return np.exp(-(y-y0)**2)

for i in range(x_1D.size):
    z[:,i] += peak(y_1D, np.cos(i/x_1D.size*np.pi)*y_1D.max()*0.05+y_1D.max()*0.8)
for i in range(x_1D.size):
    z[:,i] += peak(y_1D, np.sin(i/x_1D.size*np.pi/2.)*y_1D.max()*0.5)
resulting in
The main issue with your approach is that the triangulation algorithm doesn't know that the peaks should connect to each other between the "x-slices" (your lines of dense data points at constant x).
Simplifying a bit, the triangulation algorithm looks at the neighbours in the x and y directions and connects those. Then, when interpolating on this triangulation, the points between the peaks are roughly an average of the nearest points in the x direction, and hence the minima appear. The best solution is to make your own triangulation, with the peaks connected directly.
Fortunately, we can actually hack the triangulation into connecting the peaks by shifting the coordinates in the y direction so that the peaks are all aligned horizontally. This works because the triangulation algorithm uses the coordinates that you pass it. In your example this is easy to accomplish because we can just apply the simple shift y_s = y - x. In general you would have to get the equation of your peak (call it y_p(x)) and subtract it from y to obtain y_s.
Now that you have a shifted triangulation, you can make a new, denser grid (as you did) and apply the same shift. Then you interpolate on the shifted mesh with the shifted dense grid to get the z values correctly interpolated. Finally, you un-shift the dense grid to recover the correct y values and plot it.
Below is the code applying this idea to your example, followed by the final result. As you can see, it works quite well for this case.
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import numpy as np
def peak(y, y0):
    return np.exp(-(y-y0)**2)

x_1D = np.linspace(0., 10., 10)
y_1D = np.linspace(0., 10., 100)
x, y = np.meshgrid(x_1D, y_1D)
z = np.empty_like(x)
for i in range(x_1D.size):
    z[:,i] = peak(y_1D, i/x_1D.size*y_1D.max())
fig, ax = plt.subplots(ncols=4)
ax[0].set_title('measured data')
ax[0].scatter(x, y, marker='s', c=z, cmap=plt.cm.jet, s=25)
ax[1].set_title('contour')
ax[1].contourf(x, y, z, levels=14, cmap=plt.cm.jet)
# define output grid
xi_1D = np.linspace(x_1D.min()-0.1, x_1D.max()+0.1, 1000)
yi_1D = np.linspace(y_1D.min()-0.1, y_1D.max()+0.1, 1000)
xi, yi = np.meshgrid(xi_1D, yi_1D)
# Old Linear Interpolation
triang = tri.Triangulation(x.flatten(), y.flatten())
interpolator = tri.LinearTriInterpolator(triang, z.flatten())
zi = interpolator(xi, yi)
ax[2].set_title('interpolated')
ax[2].contourf(xi, yi, zi, levels=14, cmap=plt.cm.jet)
# === SHIFTED LINEAR INTERPOLATION ===
# make shifted interpolating mesh for the data
y_s=y-x
triang_s = tri.Triangulation(x.flatten(), y_s.flatten())
interpolator_s = tri.LinearTriInterpolator(triang_s, z.flatten())
# interpolate in the shifted state
yi_s = yi-xi
zi_s = interpolator_s(xi, yi_s)
# unshift the fine mesh
yi_us = yi_s+xi
ax[3].set_title('interpolated (shifted)')
ax[3].contourf(xi, yi_us, zi_s, levels=14, cmap=plt.cm.jet)
plt.show()

Delaunay Triangulation of points from 2D surface in 3D with python?

I have a collection of 3D points. These points are sampled at constant levels (z=0,1,...,7). An image should make it clear:
These points are in a numpy ndarray of shape (N, 3) called X. The above plot is created using:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

X = np.load('points.npy')
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')  # fig.gca(projection='3d') no longer works in current Matplotlib
ax.plot_wireframe(X[:,0], X[:,1], X[:,2])
ax.scatter(X[:,0], X[:,1], X[:,2])
plt.draw()
I'd like to instead triangulate only the surface of this object, and plot the surface. I do not want the convex hull of this object, however, because this loses subtle shape information I'd like to be able to inspect.
I have tried ax.plot_trisurf(X[:,0], X[:,1], X[:,2]), but this results in the following mess:
Any help?
Example data
Here's a snippet to generate 3D data that is representative of the problem:
import numpy as np

X = []
for i in range(8):
    t = np.linspace(0, 2*np.pi, np.random.randint(30, 50))
    for j in range(t.shape[0]):
        # random circular objects...
        X.append([
            (-0.05*(i-3.5)**2+1)*np.cos(t[j]) + 0.1*np.random.rand() - 0.05,
            (-0.05*(i-3.5)**2+1)*np.sin(t[j]) + 0.1*np.random.rand() - 0.05,
            i
        ])
X = np.array(X)
Example data from original image
Here's a pastebin to the original data:
http://pastebin.com/YBZhJcsV
Here are the slices along constant z:
update 3
Here's a concrete example of what I describe in update 2. If you don't have mayavi for visualization, I suggest installing it via edm using edm install mayavi pyqt matplotlib.
Toy 2D contours stacked in 3D
Contours -> 3D surface
Code to generate the figures
from matplotlib import path as mpath
from mayavi import mlab
import numpy as np

def make_star(amplitude=1.0, rotation=0.0):
    """ Make a star shape """
    t = np.linspace(0, 2*np.pi, 6) + rotation
    star = np.zeros((12, 2))
    star[::2] = np.c_[np.cos(t), np.sin(t)]
    star[1::2] = 0.5*np.c_[np.cos(t + np.pi / 5), np.sin(t + np.pi / 5)]
    return amplitude * star

def make_stars(n_stars=51, z_diff=0.05):
    """ Make `2*n_stars-1` stars stacked in 3D """
    amps = np.linspace(0.25, 1, n_stars)
    amps = np.r_[amps, amps[:-1][::-1]]
    rots = np.linspace(0, 2*np.pi, len(amps))
    stars = []
    for i, (amp, rot) in enumerate(zip(amps, rots)):
        star = make_star(amplitude=amp, rotation=rot)
        height = i*z_diff
        z = np.full(len(star), height)
        star3d = np.c_[star, z]
        stars.append(star3d)
    return stars

def polygon_to_boolean(points, xvals, yvals):
    """ Convert `points` to a boolean indicator mask
    over the specified domain
    """
    x, y = np.meshgrid(xvals, yvals)
    xy = np.c_[x.flatten(), y.flatten()]
    mask = mpath.Path(points).contains_points(xy).reshape(x.shape)
    return x, y, mask

def plot_contours(stars):
    """ Plot a list of stars in 3D """
    n = len(stars)
    for i, star in enumerate(stars):
        x, y, z = star.T
        mlab.plot3d(*star.T)
        #ax.plot3D(x, y, z, '-o', c=(0, 1-i/n, i/n))
        #ax.set_xlim(-1, 1)
        #ax.set_ylim(-1, 1)
    mlab.show()

if __name__ == '__main__':
    # Make and plot the 2D contours
    stars3d = make_stars()
    plot_contours(stars3d)

    xvals = np.linspace(-1, 1, 101)
    yvals = np.linspace(-1, 1, 101)

    volume = np.dstack([
        polygon_to_boolean(star[:,:2], xvals, yvals)[-1]
        for star in stars3d
    ]).astype(float)

    mlab.contour3d(volume, contours=[0.5])
    mlab.show()
update 2
I now do this as follows:
I use the fact that the paths in each z-slice are closed and simple and use matplotlib.path to determine points inside and outside of the contour. Using this idea, I convert the contours in each slice to a boolean-valued image, which is combined into a boolean-valued volume.
Next, I use skimage's marching_cubes method to obtain a triangulation of the surface for visualization.
Here's an example of the method. I think the data is slightly different, but you can definitely see that the results are much cleaner, and the method can handle surfaces that are disconnected or have holes.
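For reference, here is a minimal sketch of the marching-cubes step described above (assuming a reasonably recent scikit-image, and that volume is the boolean z-stack built in the script above, cast to float):
from skimage import measure
import matplotlib.pyplot as plt

verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2], triangles=faces, cmap='bone')
plt.show()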
Original answer
Ok, here's the solution I came up with. It depends heavily on my data being roughly spherical and sampled uniformly in z, I think. Some of the other comments provide more information about more robust solutions. Since my data is roughly spherical, I triangulate the azimuth and zenith angles from the spherical coordinate transform of my data points.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.tri as mtri
X = np.load('./mydatars.npy')
# My data points are strictly positive. This doesn't work if I don't center about the origin.
X -= X.mean(axis=0)
rad = np.linalg.norm(X, axis=1)
zen = np.arccos(X[:,-1] / rad)
azi = np.arctan2(X[:,1], X[:,0])
tris = mtri.Triangulation(zen, azi)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_trisurf(X[:,0], X[:,1], X[:,2], triangles=tris.triangles, cmap=plt.cm.bone)
plt.show()
Using the sample data from the pastebin above, this yields:
I realise that you mentioned in your question that you didn't want to use the convex hull because you might lose some shape information. I have a simple solution that works pretty well for your 'jittered spherical' example data, although it does use scipy.spatial.ConvexHull. I thought I would share it here anyway, just in case it's useful for others:
import matplotlib.pyplot as plt
from matplotlib.tri import Triangulation
from scipy.spatial import ConvexHull

# compute the convex hull of the points
cvx = ConvexHull(X)
x, y, z = X.T

# cvx.simplices contains an (nfacets, 3) array specifying the indices of
# the vertices for each simplical facet
tri = Triangulation(x, y, triangles=cvx.simplices)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')  # ax.hold(True) from the original is gone from current Matplotlib and not needed
ax.plot_trisurf(tri, z)
ax.plot_wireframe(x, y, z, color='r')
ax.scatter(x, y, z, color='r')
plt.draw()
It does pretty well in this case, since your example data ends up lying on a more-or-less convex surface. Perhaps you could make some more challenging example data? A toroidal surface would be a good test case on which the convex hull method would obviously fail.
Mapping an arbitrary 3D surface from a point cloud is a really tough problem. Here's a related question containing some links that might be helpful.

Generate a heatmap using a scatter data set

I have a set of X,Y data points (about 10k) that are easy to plot as a scatter plot but that I would like to represent as a heatmap.
I looked through the examples in Matplotlib and they all seem to already start with heatmap cell values to generate the image.
Is there a method that converts a bunch of x, y, all different, to a heatmap (where zones with higher frequency of x, y would be "warmer")?
If you don't want hexagons, you can use numpy's histogram2d function:
import numpy as np
import numpy.random
import matplotlib.pyplot as plt
# Generate some test data
x = np.random.randn(8873)
y = np.random.randn(8873)
heatmap, xedges, yedges = np.histogram2d(x, y, bins=50)
extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]
plt.clf()
plt.imshow(heatmap.T, extent=extent, origin='lower')
plt.show()
This makes a 50x50 heatmap. If you want, say, 512x384, you can put bins=(512, 384) in the call to histogram2d.
Example:
In Matplotlib lexicon, I think you want a hexbin plot.
If you're not familiar with this type of plot, it's just a bivariate histogram in which the xy-plane is tessellated by a regular grid of hexagons.
So from a histogram, you can just count the number of points falling in each hexagon: discretize the plotting region as a set of windows, assign each point to one of these windows, and finally map the windows onto a color array, and you've got a hexbin diagram.
Though less commonly used than e.g. circles or squares, it's intuitive that hexagons are a better choice for the geometry of the binning container:
hexagons have nearest-neighbor symmetry (square bins don't: e.g., the distance from a point on a square's border to a point inside that square is not everywhere equal), and
the hexagon is the highest-n polygon that gives a regular plane tessellation (i.e., you can safely re-model your kitchen floor with hexagonal-shaped tiles because you won't have any void space between the tiles when you are finished -- not true for all other higher-n polygons, n >= 7).
(Matplotlib uses the term hexbin plot; so do, AFAIK, all of the plotting libraries for R; still, I don't know if this is the generally accepted term for plots of this type, though I suspect it's likely, given that hexbin is short for hexagonal binning, which describes the essential step in preparing the data for display.)
from matplotlib import pyplot as PLT
from matplotlib import cm as CM
import numpy as NP

# matplotlib.mlab.bivariate_normal was removed in Matplotlib 3.1;
# this small helper reproduces the (uncorrelated) cases used below.
def bivariate_normal(X, Y, sigmax, sigmay, mux, muy):
    return NP.exp(-((X - mux)**2 / (2*sigmax**2) + (Y - muy)**2 / (2*sigmay**2))) / (2*NP.pi*sigmax*sigmay)

x = y = NP.linspace(-5, 5, 100)
X, Y = NP.meshgrid(x, y)
Z1 = bivariate_normal(X, Y, 2, 2, 0, 0)
Z2 = bivariate_normal(X, Y, 4, 1, 1, 1)
ZD = Z2 - Z1
x = X.ravel()
y = Y.ravel()
z = ZD.ravel()
gridsize = 30
PLT.subplot(111)
# if 'bins=None', then color of each hexagon corresponds directly to its count
# 'C' is optional--it maps values to x-y coordinates; if 'C' is None (default) then
# the result is a pure 2D histogram
PLT.hexbin(x, y, C=z, gridsize=gridsize, cmap=CM.jet, bins=None)
PLT.axis([x.min(), x.max(), y.min(), y.max()])
cb = PLT.colorbar()
cb.set_label('mean value')
PLT.show()
Edit: For a better approximation of Alejandro's answer, see below.
I know this is an old question, but wanted to add something to Alejandro's answer: If you want a nice smoothed image without using py-sphviewer, you can instead use np.histogram2d and apply a gaussian filter (from scipy.ndimage) to the heatmap:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from scipy.ndimage import gaussian_filter  # scipy.ndimage.filters is deprecated

def myplot(x, y, s, bins=1000):
    heatmap, xedges, yedges = np.histogram2d(x, y, bins=bins)
    heatmap = gaussian_filter(heatmap, sigma=s)
    extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]
    return heatmap.T, extent

fig, axs = plt.subplots(2, 2)

# Generate some test data
x = np.random.randn(1000)
y = np.random.randn(1000)

sigmas = [0, 16, 32, 64]
for ax, s in zip(axs.flatten(), sigmas):
    if s == 0:
        ax.plot(x, y, 'k.', markersize=5)
        ax.set_title("Scatter plot")
    else:
        img, extent = myplot(x, y, s)
        ax.imshow(img, extent=extent, origin='lower', cmap=cm.jet)
        ax.set_title(r"Smoothing with $\sigma$ = %d" % s)
plt.show()
Produces:
The scatter plot and s=16 plotted on top of each other for Agape Gal'lo (click for a better view):
One difference I noticed between my gaussian filter approach and Alejandro's approach is that his method shows local structures much better than mine. Therefore I implemented a simple nearest-neighbour method at the pixel level. This method calculates, for each pixel, the inverse sum of the distances of the n closest points in the data. At high resolution this method is pretty computationally expensive, and I think there's a quicker way, so let me know if you have any improvements.
Update: As I suspected, there's a much faster method using SciPy's scipy.spatial.cKDTree. See Gabriel's answer for the implementation.
Anyway, here's my code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm

def data_coord2view_coord(p, vlen, pmin, pmax):
    dp = pmax - pmin
    dv = (p - pmin) / dp * vlen
    return dv

def nearest_neighbours(xs, ys, reso, n_neighbours):
    im = np.zeros([reso, reso])
    extent = [np.min(xs), np.max(xs), np.min(ys), np.max(ys)]
    xv = data_coord2view_coord(xs, reso, extent[0], extent[1])
    yv = data_coord2view_coord(ys, reso, extent[2], extent[3])
    for x in range(reso):
        for y in range(reso):
            xp = (xv - x)
            yp = (yv - y)
            d = np.sqrt(xp**2 + yp**2)
            im[y][x] = 1 / np.sum(d[np.argpartition(d.ravel(), n_neighbours)[:n_neighbours]])
    return im, extent

n = 1000
xs = np.random.randn(n)
ys = np.random.randn(n)
resolution = 250

fig, axes = plt.subplots(2, 2)
for ax, neighbours in zip(axes.flatten(), [0, 16, 32, 64]):
    if neighbours == 0:
        ax.plot(xs, ys, 'k.', markersize=2)
        ax.set_aspect('equal')
        ax.set_title("Scatter Plot")
    else:
        im, extent = nearest_neighbours(xs, ys, resolution, neighbours)
        ax.imshow(im, origin='lower', extent=extent, cmap=cm.jet)
        ax.set_title("Smoothing over %d neighbours" % neighbours)
        ax.set_xlim(extent[0], extent[1])
        ax.set_ylim(extent[2], extent[3])
plt.show()
Result:
Instead of using np.hist2d, which in general produces quite ugly histograms, I would like to recycle py-sphviewer, a python package for rendering particle simulations using an adaptive smoothing kernel, which can be easily installed from pip (see the webpage documentation). Consider the following code, which is based on the example:
import numpy as np
import numpy.random
import matplotlib.pyplot as plt
import sphviewer as sph
def myplot(x, y, nb=32, xsize=500, ysize=500):
    xmin = np.min(x)
    xmax = np.max(x)
    ymin = np.min(y)
    ymax = np.max(y)

    x0 = (xmin+xmax)/2.
    y0 = (ymin+ymax)/2.

    pos = np.zeros([len(x), 3])
    pos[:,0] = x
    pos[:,1] = y
    w = np.ones(len(x))

    P = sph.Particles(pos, w, nb=nb)
    S = sph.Scene(P)
    S.update_camera(r='infinity', x=x0, y=y0, z=0,
                    xsize=xsize, ysize=ysize)
    R = sph.Render(S)
    R.set_logscale()
    img = R.get_image()
    extent = R.get_extent()
    for i, j in zip(range(4), [x0, x0, y0, y0]):
        extent[i] += j
    print(extent)
    return img, extent
fig = plt.figure(1, figsize=(10,10))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222)
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224)
# Generate some test data
x = np.random.randn(1000)
y = np.random.randn(1000)
#Plotting a regular scatter plot
ax1.plot(x,y,'k.', markersize=5)
ax1.set_xlim(-3,3)
ax1.set_ylim(-3,3)
heatmap_16, extent_16 = myplot(x,y, nb=16)
heatmap_32, extent_32 = myplot(x,y, nb=32)
heatmap_64, extent_64 = myplot(x,y, nb=64)
ax2.imshow(heatmap_16, extent=extent_16, origin='lower', aspect='auto')
ax2.set_title("Smoothing over 16 neighbors")
ax3.imshow(heatmap_32, extent=extent_32, origin='lower', aspect='auto')
ax3.set_title("Smoothing over 32 neighbors")
#Make the heatmap using a smoothing over 64 neighbors
ax4.imshow(heatmap_64, extent=extent_64, origin='lower', aspect='auto')
ax4.set_title("Smoothing over 64 neighbors")
plt.show()
which produces the following image:
As you can see, the images look pretty nice, and we are able to identify different substructures in them. These images are constructed by spreading a given weight for every point within a certain domain, defined by the smoothing length, which in turn is given by the distance to the nb-th nearest neighbor (I've chosen 16, 32 and 64 for the examples). So, higher density regions typically are spread over smaller regions compared to lower density regions.
The function myplot is just a very simple function that I've written in order to give the x,y data to py-sphviewer to do the magic.
If you are using Matplotlib 1.2.x or newer, you can use plt.hist2d:
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randn(100000)
y = np.random.randn(100000)
plt.hist2d(x,y,bins=100)
plt.show()
Seaborn now has the jointplot function which should work nicely here:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# Generate some test data
x = np.random.randn(8873)
y = np.random.randn(8873)
sns.jointplot(x=x, y=y, kind='hex')
plt.show()
Here's Jurgy's great nearest neighbour approach but implemented using scipy.cKDTree. In my tests it's about 100x faster.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from scipy.spatial import cKDTree
def data_coord2view_coord(p, resolution, pmin, pmax):
    dp = pmax - pmin
    dv = (p - pmin) / dp * resolution
    return dv

n = 1000
xs = np.random.randn(n)
ys = np.random.randn(n)
resolution = 250

extent = [np.min(xs), np.max(xs), np.min(ys), np.max(ys)]
xv = data_coord2view_coord(xs, resolution, extent[0], extent[1])
yv = data_coord2view_coord(ys, resolution, extent[2], extent[3])

def kNN2DDens(xv, yv, resolution, neighbours, dim=2):
    """
    """
    # Create the tree
    tree = cKDTree(np.array([xv, yv]).T)
    # Find the closest nnmax-1 neighbors (first entry is the point itself)
    grid = np.mgrid[0:resolution, 0:resolution].T.reshape(resolution**2, dim)
    dists = tree.query(grid, neighbours)
    # Inverse of the sum of distances to each grid point.
    inv_sum_dists = 1. / dists[0].sum(1)
    # Reshape
    im = inv_sum_dists.reshape(resolution, resolution)
    return im

fig, axes = plt.subplots(2, 2, figsize=(15, 15))
for ax, neighbours in zip(axes.flatten(), [0, 16, 32, 63]):
    if neighbours == 0:
        ax.plot(xs, ys, 'k.', markersize=5)
        ax.set_aspect('equal')
        ax.set_title("Scatter Plot")
    else:
        im = kNN2DDens(xv, yv, resolution, neighbours)
        ax.imshow(im, origin='lower', extent=extent, cmap=cm.Blues)
        ax.set_title("Smoothing over %d neighbours" % neighbours)
        ax.set_xlim(extent[0], extent[1])
        ax.set_ylim(extent[2], extent[3])
plt.savefig('new.png', dpi=150, bbox_inches='tight')
And the initial question was... how to convert scatter values to grid values, right?
histogram2d does count the frequency per cell; however, if you have other data per cell than just the frequency, you need some additional work.
x = data_x # between -10 and 4, log-gamma of an svc
y = data_y # between -4 and 11, log-C of an svc
z = data_z #between 0 and 0.78, f1-values from a difficult dataset
So, I have a dataset with Z-results for X and Y coordinates. However, I was calculating a few points outside the area of interest (large gaps), and heaps of points in a small area of interest.
Yes, here it becomes more difficult but also more fun. Some libraries (sorry):
from matplotlib import pyplot as plt
from matplotlib import cm
import numpy as np
from scipy.interpolate import griddata
pyplot is my graphic engine today,
cm is a range of color maps with some interesting choices,
numpy for the calculations,
and griddata for attaching values to a fixed grid.
The last one is important especially because the frequency of xy points is not equally distributed in my data. First, let's start with some boundaries fitting to my data and an arbitrary grid size. The original data has datapoints also outside those x and y boundaries.
#determine grid boundaries
gridsize = 500
x_min = -8
x_max = 2.5
y_min = -2
y_max = 7
So we have defined a grid with 500 pixels between the min and max values of x and y.
In my data, there are lots more than the 500 values available in the area of high interest; whereas in the low-interest-area, there are not even 200 values in the total grid; between the graphic boundaries of x_min and x_max there are even less.
So for getting a nice picture, the task is to get an average for the high interest values and to fill the gaps elsewhere.
I define my grid now. For each xx-yy pair, I want to have a color.
xx = np.linspace(x_min, x_max, gridsize) # array of x values
yy = np.linspace(y_min, y_max, gridsize) # array of y values
grid = np.array(np.meshgrid(xx, yy.T))
grid = grid.reshape(2, grid.shape[1]*grid.shape[2]).T
Why the strange shape? scipy.griddata wants a shape of (n, D).
Griddata calculates one value per point in the grid, by a predefined method.
I choose "nearest" - empty grid points will be filled with values from the nearest neighbor. This looks as if the areas with less information have bigger cells (even if it is not the case). One could choose to interpolate "linear", then areas with less information look less sharp. Matter of taste, really.
points = np.array([x, y]).T # because griddata wants it that way
z_grid2 = griddata(points, z, grid, method='nearest')
# you get a 1D vector as result. Reshape to picture format!
z_grid2 = z_grid2.reshape(xx.shape[0], yy.shape[0])
And hop, we hand over to matplotlib to display the plot
fig = plt.figure(1, figsize=(10, 10))
ax1 = fig.add_subplot(111)
ax1.imshow(z_grid2, extent=[x_min, x_max,y_min, y_max, ],
origin='lower', cmap=cm.magma)
ax1.set_title("SVC: empty spots filled by nearest neighbours")
ax1.set_xlabel('log gamma')
ax1.set_ylabel('log C')
plt.show()
Around the pointy part of the V-Shape, you see I did a lot of calculations during my search for the sweet spot, whereas the less interesting parts almost everywhere else have a lower resolution.
Make a 2-dimensional array that corresponds to the cells in your final image, called say heatmap_cells and instantiate it as all zeroes.
Choose two scaling factors that define the difference between each array element in real units, for each dimension, say x_scale and y_scale. Choose these such that all your datapoints will fall within the bounds of the heatmap array.
For each raw datapoint with x_value and y_value:
heatmap_cells[floor(x_value/x_scale),floor(y_value/y_scale)]+=1
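A quick NumPy sketch of that recipe (the data, the cell sizes, and the shift by the data minimum are illustrative choices, not part of the original answer):
import numpy as np
import matplotlib.pyplot as plt

x = np.random.randn(10000)
y = np.random.randn(10000)

x_scale, y_scale = 0.25, 0.25  # cell size in data units
nx = int(np.ceil((x.max() - x.min()) / x_scale)) + 1
ny = int(np.ceil((y.max() - y.min()) / y_scale)) + 1
heatmap_cells = np.zeros((ny, nx))

for xv, yv in zip(x, y):
    i = int(np.floor((yv - y.min()) / y_scale))  # row index
    j = int(np.floor((xv - x.min()) / x_scale))  # column index
    heatmap_cells[i, j] += 1

plt.imshow(heatmap_cells, origin='lower',
           extent=[x.min(), x.max(), y.min(), y.max()])
plt.show()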
Very similar to #Piti's answer, but using 1 call instead of 2 to generate the points:
import numpy as np
import matplotlib.pyplot as plt
pts = 1000000
mean = [0.0, 0.0]
cov = [[1.0,0.0],[0.0,1.0]]
x,y = np.random.multivariate_normal(mean, cov, pts).T
plt.hist2d(x, y, bins=50, cmap=plt.cm.jet)
plt.show()
Output:
Here's one I made on a 1 Million point set with 3 categories (colored Red, Green, and Blue). Here's a link to the repository if you'd like to try the function. Github Repo
histplot(
    X,
    Y,
    labels,
    bins=2000,
    range=((-3, 3), (-3, 3)),
    normalize_each_label=True,
    colors=[
        [1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]],
    gain=50)
I'm afraid I'm a little late to the party but I had a similar question a while ago. The accepted answer (by #ptomato) helped me out but I'd also want to post this in case it's of use to someone.
''' I wanted to create a heatmap resembling a football pitch which would show the different actions performed '''
import numpy as np
import matplotlib.pyplot as plt
import random
#fixing random state for reproducibility
np.random.seed(1234324)
fig = plt.figure(12)
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
#Ratio of the pitch with respect to UEFA standards
hmap= np.full((6, 10), 0)
#print(hmap)
xlist = np.random.uniform(low=0.0, high=100.0, size=(20))
ylist = np.random.uniform(low=0.0, high =100.0, size =(20))
#UEFA Pitch Standards are 105m x 68m
xlist = (xlist/100)*10.5
ylist = (ylist/100)*6.5
ax1.scatter(xlist,ylist)
#int of the co-ordinates to populate the array
xlist_int = xlist.astype (int)
ylist_int = ylist.astype (int)
#print(xlist_int, ylist_int)
for i, j in zip(xlist_int, ylist_int):
    #this populates the array according to the x,y co-ordinate values it encounters
    hmap[j][i] = hmap[j][i] + 1
#Reversing the rows is necessary
hmap = hmap[::-1]
#print(hmap)
im = ax2.imshow(hmap)
Here's the result
None of these solutions worked for my application, so this is what I came up with. Essentially I am placing a 2D Gaussian at every single point:
import cv2
import numpy as np
import matplotlib.pyplot as plt

def getGaussian2D(ksize, sigma, norm=True):
    oneD = cv2.getGaussianKernel(ksize=ksize, sigma=sigma)
    twoD = np.outer(oneD.T, oneD)
    return twoD / np.sum(twoD) if norm else twoD

def pts2heat(pts, shape, kernel=16, sigma=5):
    heat = np.zeros(shape)
    k = getGaussian2D(kernel, sigma)
    for y, x in pts:
        x, y = int(x), int(y)
        for i in range(-kernel//2, kernel//2):
            for j in range(-kernel//2, kernel//2):
                if 0 <= x+i < shape[0] and 0 <= y+j < shape[1]:
                    heat[x+i, y+j] = heat[x+i, y+j] + k[i+kernel//2, j+kernel//2]
    return heat

heat = pts2heat(pts, img.shape[:2])
plt.imshow(heat, cmap='hot')  # 'heat' is not a built-in colormap; 'hot' is
Here are the points overlaid on top of the associated image, along with the resulting heat map:
