matplotlib: filling under line in 3d polar plot - python

I'd like to plot a sine wave on a circle: that is, the circle lies in the x,y-plane and the sine wave wraps around it perpendicular to that plane (sticking up along the z-axis). I can do this, but when I try to fill the areas between the circle and the sine wave with a polygon (i.e. paint on the surface of the imaginary cylinder on which my sine wave lives), I can't get it quite right: matplotlib seems to XOR the regions that overlap in a view of the plot instead of giving me a view in which the ones in front occlude those behind.
Here's the relevant bit of my code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from matplotlib.colors import to_rgba as cc  # colour helper used below

N = 0.2   # wave amplitude (example value; not given in the question)
m = 10    # number of wave periods around the circle (example value)
n = 1000  # number of sample points

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax._axis3don = False  # hide the 3D axes
theta = np.linspace(0., 2 * np.pi, n)
r = 1.
x = r * np.sin(theta)
y = r * np.cos(theta)
sinez = N * np.sin(theta * m)
ax.plot(x, y, sinez, color='r')
# close the polygon: out along the sine wave, back along the circle at z = 0
xv = np.append(x, x[::-1])
yv = np.append(y, y[::-1])
zv = np.append(sinez, np.zeros(n))
verts = [list(zip(xv, yv, zv))]
poly = Poly3DCollection(verts, facecolors=[cc('r'), cc('b')],
                        edgecolor='None')
poly.set_alpha(0.7)
ax.add_collection3d(poly)
Here's what it looks like:

matplotlib's main reason for existence is 2D plotting; the 3D stuff is just some clever transforms and can be buggy/hacky. One of the inherent limitations is that matplotlib draws in layers, so it has no notion of 'in front' or 'behind'; it only knows the order in which it draws the curves to the canvas (which is confusingly called z-order).
If you want to get this to look right without re-writing the 3D code, split the sine wave up into pieces and make sure you set the z-order right by hand (see How to draw intersecting planes? for a simpler version of this), but you won't be able to rotate the image.
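A related workaround, sketched below (this is a variation on the same splitting idea, not the hand-set z-order approach): build the filled band out of many thin quads in a single Poly3DCollection, so that matplotlib's per-polygon depth sort can order them individually and approximate front/back occlusion. Since the sort is redone at each draw, this degrades more gracefully under rotation, though it is still approximate. The amplitude and frequency values here are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

N, m = 0.2, 10  # assumed amplitude and frequency
theta = np.linspace(0., 2 * np.pi, 200)
x, y = np.sin(theta), np.cos(theta)
z = N * np.sin(theta * m)

# one thin quad per theta step between the circle (z = 0) and the wave
quads = [[(x[i], y[i], 0.), (x[i], y[i], z[i]),
          (x[i + 1], y[i + 1], z[i + 1]), (x[i + 1], y[i + 1], 0.)]
         for i in range(len(theta) - 1)]

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(x, y, z, color='r')
poly = Poly3DCollection(quads, facecolor='r', edgecolor='none', alpha=0.7)
ax.add_collection3d(poly)
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_zlim(-1, 1)
plt.show()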
If you need real 3D, I would suggest looking into mayavi from Enthought, which is OpenGL-based.

In the docs, the devs claim that Poly3DCollection
does a bit of magic with the _facecolors and _edgecolors properties.
which I believe produces the XOR effect you can see here; looking at the code, it's the function do_3d_projection that seems to be doing the magic.
As I see it, you could either subclass Poly3DCollection and rewrite do_3d_projection to get what you want, or maybe think of another way to plot this (perhaps treating the sinusoid and circle as separate objects somehow).

Related

Plotting in polar space in matplotlib

Consider the following data, which is defined in polar space in theta, r, and is plotted twice: once in the orthogonal theta-r phase space, and once in Cartesian space after an inverse transformation from polar coordinates to x-y (i.e. what matplotlib's projection='polar' does):
import matplotlib.pyplot as plt
import numpy as np
theta = np.linspace(0, 2*np.pi, 50)
r = np.linspace(0, 1, 50)
THETA, R = np.meshgrid(theta, r)
Z = np.sin(R*np.pi) * np.sin(THETA+np.pi/2)
fig = plt.figure()
axpol = fig.add_subplot(121)
axcart = fig.add_subplot(122, projection='polar')
axcart.contourf(THETA, R, Z, levels=10)
axpol.contourf(THETA, R, Z, levels=10)
axcart.set_title('cartesian space')
axpol.set_title('polar space')
axpol.set_xlim([0, 2*np.pi])
axpol.set_xlabel('theta')
axpol.set_ylabel('r')
plt.show()
This produces:
Now, if we shift the theta array by pi:
theta = np.linspace(np.pi, 3*np.pi, 50)
and rerun the above, we see
Notice that the data plotted in the projected polar space successfully wraps the data at theta > 2*np.pi back to the beginning of the angular domain (since this is defined in the projection's inverse transformation), such that it appears unchanged. In polar space, this does not happen.
Of course, this is expected: this axis has no associated transformation, and thus does not know how to wrap the data, or that it even should.
My question is, how can I enable this behavior, without having to shift the coordinates and data manually? That is, is there a way to have the axis on the left of the figure above inherit the polar transformation, but not the projection?
I would prefer to do this without defining my own transformation or projection objects. I thought there should be a way to inherit this small piece of the polar transformation, without doing the "full" transformation to Cartesian x,y.
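For reference, the manual shift the question hopes to avoid might look something like this minimal sketch (assuming the data only needs theta folded back into [0, 2*pi) and the columns re-sorted):
import numpy as np

theta = np.linspace(np.pi, 3 * np.pi, 50)
r = np.linspace(0, 1, 50)
theta_wrapped = np.mod(theta, 2 * np.pi)   # fold back into [0, 2*pi)
order = np.argsort(theta_wrapped)          # restore a monotonic angular axis
THETA, R = np.meshgrid(theta_wrapped[order], r)
# Z's columns must be reordered the same way before contourf:
# axpol.contourf(THETA, R, Z[:, order], levels=10)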

Hatch area using pcolormesh in Basemap

I am trying to hatch only the regions where I have statistically significant results. How can I do this using Basemap and pcolormesh?
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np

plt.figure(figsize=(12, 12))
# iris_cube, cmap, norm and significant_data come from the user's own data
lons = iris_cube.coord('longitude').points
lats = iris_cube.coord('latitude').points
m = Basemap(llcrnrlon=lons[0], llcrnrlat=lats[0],
            urcrnrlon=lons[-1], urcrnrlat=lats[-1], resolution='l')
lon, lat = np.meshgrid(lons, lats)
plt.subplot(111)
cs = m.pcolormesh(lon, lat, significant_data, cmap=cmap, norm=norm, hatch='/')
It seems pcolormesh does not support hatching (see https://github.com/matplotlib/matplotlib/issues/3058). Instead, the advice is to use pcolor, which, starting from this example, would look like:
import matplotlib.pyplot as plt
import numpy as np
dx, dy = 0.15, 0.05
y, x = np.mgrid[slice(-3, 3 + dy, dy),
                slice(-3, 3 + dx, dx)]
z = (1 - x / 2. + x ** 5 + y ** 3) * np.exp(-x ** 2 - y ** 2)
z = z[:-1, :-1]
zm = np.ma.masked_less(z, 0.3)
cm = plt.pcolormesh(x, y, z)
plt.pcolor(x, y, zm, hatch='/', alpha=0.)
plt.colorbar(cm)
plt.show()
where a mask array is used to get the values of z greater than 0.3 and these are hatched using pcolor.
To avoid plotting another colour over the top (so you get only hatching), I've set alpha to 0. in pcolor, which feels a bit like a hack. The alternative is to use patches and assign them to the areas you want; see this example: Python: Leave Numpy NaN values from matplotlib heatmap and its legend. This may be more tricky for basemaps, etc. than just choosing areas with pcolor.
I have a simple solution for this problem, using only pcolormesh and not pcolor: plot the color mesh, then hatch the entire plot, and then plot the original mesh again, this time masking the statistically significant cells, so that the only hatching visible is that on the significant cells. Alternatively, you can put a marker on every cell (this looks good too) instead of hatching the entire figure. A consolidated sketch of all three steps follows the step-by-step description below.
(I use cartopy instead of basemap, but this shouldn't matter.)
Step 1: Plot your field (z) normally, using pcolormesh.
mesh = plt.pcolormesh(x,y,z)
where x/y can be lons/lats.
Step 2: Hatch the entire plot. For this, use fill_between:
hatch = plt.fill_between([xmin,xmax],y1,y2,hatch='///////',color="none",edgecolor='black')
Check the details of fill_between to set xmin, xmax, y1 and y2. You simply define two horizontal lines beyond the bounds of your plot and hatch the area in between. Use more or fewer /s to set the hatch density.
To adjust the hatch thickness, use the lines below:
import matplotlib as mpl
mpl.rcParams['hatch.linewidth'] = 0.3
As an alternative to hatching everything, you can plot all your x-y points (or, lon-lat couples) as markers. A simple solution is putting a dot (x also looks good).
hatch = plt.plot(x,y,'.',color='black',markersize=1.5)
One of the above will be the basis of your 'hatch'. This is how it should look after Step 2:
Step 3: On top of these two, plot your color mesh once again with pcolormesh, this time masking cells containing statistically significant values. This way, the markers on your 'insignificant' cells become invisible again, while significant markers stay visible.
Assuming you have an identically sized array containing the t statistic for each cell (t_z), you can mask significant values using numpy's ma module.
z_masked = numpy.ma.masked_where(t_z >= your_threshold, z)
Then, plot the color mesh, using the masked array.
mesh_masked = plt.pcolormesh(x,y,z_masked)
Use zorder to make sure the layers are in the correct order. This is how it should look after Step 3:
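Putting the three steps together, here is a minimal self-contained sketch; the field z and the "t statistic" array t_z are made up for illustration:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

mpl.rcParams['hatch.linewidth'] = 0.3      # thinner hatch lines

# made-up field and stand-in significance measure
x, y = np.meshgrid(np.linspace(0, 10, 41), np.linspace(0, 5, 21))
z = np.sin(x) * np.cos(y)
t_z = 3 * np.abs(z)                        # hypothetical t statistic
threshold = 2.0

# Step 1: plot the full field
mesh = plt.pcolormesh(x, y, z, shading='auto', zorder=1)

# Step 2: hatch the entire plotting area
plt.fill_between([x.min(), x.max()], y.min(), y.max(),
                 hatch='///////', facecolor='none', edgecolor='black',
                 zorder=2)

# Step 3: re-plot with significant cells masked, so the hatching
# shows through only where t_z exceeds the threshold
z_masked = np.ma.masked_where(t_z >= threshold, z)
plt.pcolormesh(x, y, z_masked, shading='auto', zorder=3)

plt.colorbar(mesh)
plt.show()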

Creating intersecting images in matplotlib with imshow or other function

I have two 3-D arrays of ground penetrating radar data. Each array is basically a collection of time-lapse 2-D images, where time is increasing along the third dimension. I want to create a 3-D plot which intersects a 2-D image from each array.
I'm essentially trying to create a fence plot. Some examples of this type of plot are found on these sites:
http://www.geogiga.com/images/products/seismapper_3d_seismic_color.gif
http://www.usna.edu/Users/oceano/pguth/website/so461web/seismic_refl/fence.png
I typically use imshow to display the 2-D images individually for analysis. However, my research into the functionality of imshow suggests it doesn't work with 3D axes. Is there some way around this? Or is there another plotting function that could replicate imshow's functionality but be combined with 3D axes?
There might be better ways, but at least you can always make a planar mesh and color it:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
# create a 21 x 21 vertex mesh
xx, yy = np.meshgrid(np.linspace(0,1,21), np.linspace(0,1,21))
# create some dummy data (20 x 20) for the image
data = np.random.random((20, 20))
# create vertices for a rotated mesh (3D rotation matrix)
X = np.sqrt(1./3) * xx + np.sqrt(1./3) * yy
Y = -np.sqrt(1./3) * xx + np.sqrt(1./3) * yy
Z = np.sqrt(1./3) * xx - np.sqrt(1./3) * yy
# create the figure
fig = plt.figure()
# show the reference image
ax1 = fig.add_subplot(121)
ax1.imshow(data, cmap=plt.cm.BrBG, interpolation='nearest', origin='lower', extent=[0,1,0,1])
# show the 3D rotated projection
ax2 = fig.add_subplot(122, projection='3d')
ax2.plot_surface(X, Y, Z, rstride=1, cstride=1, facecolors=plt.cm.BrBG(data), shade=False)
This creates:
(Please note, I was not very careful with the rotation matrix, so you will have to create your own projection. It might really be a good idea to use a real rotation matrix.)
Just note that there is a slight problem with the fence poles and fences, i.e. the grid has one more vertex than the number of patches.
The approach above is not very efficient if you have high-resolution images; it may not even be usable with them. The other possibility is then to use a backend that supports affine image transforms. Unfortunately, you will then have to calculate the transforms yourself. It is not hideously difficult, but still a bit clumsy, and you do not get a real 3D image that could be rotated around, etc.
For this approach, see http://matplotlib.org/examples/api/demo_affine_image.html
Alternatively, you can use OpenCV and its cv2.warpAffine function to warp your image before showing it with imshow. If you fill the surroundings with a transparent color, you can then layer images to get a result that looks like your example image.
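A minimal sketch of that OpenCV route (the file names are hypothetical, and the 2x3 matrix is just a horizontal shear to show the call):
import cv2
import numpy as np

img = cv2.imread('image.png')              # hypothetical input file
h, w = img.shape[:2]
# 2x3 affine matrix: a simple horizontal shear for illustration
M = np.float32([[1.0, 0.3, 0.0],
                [0.0, 1.0, 0.0]])
warped = cv2.warpAffine(img, M, (w + int(0.3 * h), h))
cv2.imwrite('warped.png', warped)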
Just to give you an idea of the possibilities of plot_surface, I tried to warp Lena around a semi-cylinder:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
# create a 513 x 513 vertex mesh
xx, yy = np.meshgrid(np.linspace(0,1,513), np.linspace(0,1,513))
# create vertices for a rotated mesh (3D rotation matrix)
theta = np.pi*xx
X = np.cos(theta)
Y = np.sin(theta)
Z = yy
# create the figure
fig = plt.figure()
# show the 3D rotated projection
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, facecolors=plt.imread('/tmp/lena.jpg')/255., shade=False)
She indeed bends well, but all operations on the image are quite slow:
If you're happy to contemplate using a different plotting library (i.e. not matplotlib), then it might be worth considering mayavi / tvtk (although the learning curve is a little steep). The closest I've seen to what you want is the scalar cut planes in
http://wiki.scipy.org/Cookbook/MayaVi/Examples
The bulk of the documentation is at:
http://docs.enthought.com/mayavi/mayavi/index.html
There is no way of doing this with matplotlib. @DrV's answer is an approximation: matplotlib does not actually show each individual pixel of the original image, but some rescaled image. rstride and cstride let you specify how the image gets scaled; however, the output will not be the exact image.

Python 3D plotting of measurement data

I have captured 3D measurement data on a sphere (this is an antenna radiation pattern, so the measurement antenna captured the radiation intensity from each phi,theta direction and logged this value as a function of phi,theta).
I am having great difficulty getting the data represented.
I have tried multiple options. This is the last one I am now trying:
import numpy as np
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
nElevationPoints = 16
nAzimuthPoints = 40
stepSizeRad = 0.05 * np.pi
def r(phi, theta):
    radius = 1
    return radius
phi = np.arange(0,nAzimuthPoints*stepSizeRad,stepSizeRad)
theta = np.arange(0,nElevationPoints*stepSizeRad,stepSizeRad)
x = (r(phi,theta)*np.outer(r(phi,theta)*np.cos(phi), np.sin(theta)))
y = (-r(phi,theta)*np.outer(np.sin(phi), np.sin(theta)))
z = (r(phi,theta)*np.outer(np.ones(np.size(phi)), np.cos(theta)))
fig = plt.figure(1)
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x, y, z, rstride=4, cstride=4, color='b')
plt.ioff()
plt.show()
This code in itself is working, and it plots a sphere. Now the thing is, that in accordance with the measurement data, I would actually need the radius not be a constant "1", but corresponding with the radiation intensity measured. So it needs to be a function of phi,theta.
However, as soon as I change the "r" function to anything containing the phi or theta parameter, I get an error about operands that could not be broadcast.
If there's any work around that loops through phi,theta that would be perfectly fine as well.
But I'm stuck now, so I'd appreciate any help :-)
BTW, the reason I went for the above approach is because I couldn't make sense of how the x,y,z should be defined in order to be acceptable to the plot_surface function.
I did manage to generate a scatter plot by calculating the actual positions (x,y,z) from the phi,theta,intensity data, but this is only a representation by individual points and doesn't produce a clearly visible antenna radiation pattern plot. For this I assume a contour plot would be better, but then again I am stuck either at the "r" function call or at understanding how x,y,z should be formatted (the documentation says x,y,z need to be 2D arrays, but this is beyond my comprehension, as x,y,z are usually one-dimensional arrays in themselves).
Anyway, looking forward to any help anyone may be willing to give.
-- EDIT --
With @M4rtini's suggested changes I come to the following:
import numpy as np
from mayavi import mlab
def r(phi, theta):
    r = np.sin(phi)**2
    return r
phi, theta = np.mgrid[0:2*np.pi:201j, 0:np.pi:101j]
x = r(phi,theta)*np.sin(phi)*np.cos(theta)
y = r(phi,theta)*np.sin(phi)*np.sin(theta)
z = r(phi,theta)*np.cos(phi)
intensity = phi * theta
obj = mlab.mesh(x, y, z, scalars=intensity, colormap='jet')
obj.enable_contours = True
obj.contour.filled_contours = True
obj.contour.number_of_contours = 20
mlab.show()
This works, thanks @M4rtini, and I am now able to have a phi,theta-dependent "r" function.
However, note that the example ensures phi and theta have the same shape (due to the mgrid function). This is not the case in my measurement: when phi and theta are declared separately, with different lengths, it still doesn't work. So I will now have a look at interpolating the measurements; see also the sketch below.
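For what it's worth, np.meshgrid can build matching 2D grids even when the two 1D measurement axes have different lengths. A minimal sketch (the axis sizes and the intensity formula are stand-ins for the real measurement):
import numpy as np
from mayavi import mlab

# 1D measurement axes of different lengths (stand-in values)
phi_1d = np.linspace(0, 2 * np.pi, 40)    # azimuth samples
theta_1d = np.linspace(0, np.pi, 16)      # elevation samples

# matching 2D grids, both of shape (40, 16)
phi, theta = np.meshgrid(phi_1d, theta_1d, indexing='ij')

# stand-in for the measured intensity on that grid
intensity = np.sin(theta) ** 2 * (1 + 0.5 * np.cos(phi))

r = intensity                              # radius follows intensity
x = r * np.sin(theta) * np.cos(phi)
y = r * np.sin(theta) * np.sin(phi)
z = r * np.cos(theta)

mlab.mesh(x, y, z, scalars=intensity, colormap='jet')
mlab.show()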
This might not be the exact answer you were looking for, but if you can accept using intensity values as a mapping to a color, this should work.
Actually, you could probably calculate a specific r here as well, but I did not test that.
I am using mayavi since it is, in my opinion, far superior to matplotlib for 3D.
import numpy as np
from mayavi import mlab
r = 1.0
phi, theta = np.mgrid[0:np.pi:200j, 0:2*np.pi:101j]
x = r*np.sin(phi)*np.cos(theta)
y = r*np.sin(phi)*np.sin(theta)
z = r*np.cos(phi)
intensity = phi * theta
obj = mlab.mesh(x, y, z, scalars=intensity, colormap='jet')
obj.enable_contours = True
obj.contour.filled_contours = True
obj.contour.number_of_contours = 20
mlab.show()
The output of the example script is shown in an interactive GUI, so you can rotate, translate and scale as you please, and even interactively manipulate the data and the representation options.

How to draw intersecting planes?

I want to use matplotlib to draw more or less the figure I attached below, which includes the two intersecting planes with the right amount of transparency indicating their relative orientations, and the circles and vectors in the two planes projected in 2D.
I'm not sure if there is an existing package for doing this, any hints?
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
dim = 10
X, Y = np.meshgrid([-dim, dim], [-dim, dim])
Z = np.zeros((2, 2))
angle = .5
X2, Y2 = np.meshgrid([-dim, dim], [0, dim])
Z2 = Y2 * angle
X3, Y3 = np.meshgrid([-dim, dim], [-dim, 0])
Z3 = Y3 * angle
r = 7
M = 1000
th = np.linspace(0, 2 * np.pi, M)
x, y, z = r * np.cos(th), r * np.sin(th), angle * r * np.sin(th)
ax.plot_surface(X2, Y3, Z3, color='blue', alpha=.5, linewidth=0, zorder=-1)
ax.plot(x[y < 0], y[y < 0], z[y < 0], lw=5, linestyle='--', color='green',
        zorder=0)
ax.plot_surface(X, Y, Z, color='red', alpha=.5, linewidth=0, zorder=1)
ax.plot(r * np.sin(th), r * np.cos(th), np.zeros(M), lw=5, linestyle='--',
        color='k', zorder=2)
ax.plot_surface(X2, Y2, Z2, color='blue', alpha=.5, linewidth=0, zorder=3)
ax.plot(x[y > 0], y[y > 0], z[y > 0], lw=5, linestyle='--', color='green',
        zorder=4)
plt.axis('off')
plt.show()
Caveats:
I am running a version very close to the current master, so I am not sure what will work in older versions.
The reason for splitting up the plotting is that 'above' and 'below' are determined in a somewhat arcane way (I am not strictly sure the zorder actually does anything), and really depend on the order the artists are drawn in. Thus surfaces cannot intersect (one will be above the other everywhere), so you need to plot the sections on either side of the intersection separately. (You can see this in the black line, which I didn't split: it looks like it is 'on top of' the upper blue plane.)
The 'proper' ordering of the surfaces also seems to depend on the view angle.
Matplotlib does have 3D projection capability, but the dashed lines are drawn with constant width in the final 2D image view, not looking as if they lay flat on the tilted planes. If the geometry is simple, and the "orbits" circular, it might work, but if you want to draw ellipses seen at an angle, the viewer may desire more visual clues about the whole 3D arrangement.
If I had to make one nice fancy illustration like that, but even nicer and fancier, and it didn't have to be automated, I'd start by creating the graphics - at least the dashed line circles - for each of the planes as a simple flat 2D image using whatever seems handy at the moment - a vector drawing program like Illustrator or Inkscape, or in matplotlib if there is data to be followed.
Then, I'd use POV-Ray or Blender to model the planes at whatever angles, with spheres for the round things (planets?). The 2D graphics already generated would become textures to be mapped to the planes. POV-Ray uses a scripting language, allowing a record to be kept, modified, and copied for future projects. If it were really one-time and I didn't mind doing it all by hand, Blender is good. Whichever tool I use, the result is an image showing the desired projection of the 3D geometric elements into 2D.
Are the round things, what I'm calling "planets", supposed to be flat circles in the final work, like in the examples? Then I'd draw them with a vector drawing app over the rendered 3D image. But I suspect you'd prefer spheres in 3D.
The samples shown have no lighting or shadows. Shadows would help clarify the geometry in 3D, although the first of those two illustrations isn't too bad. The short green line showing the inclined plane's planet over the red line seems clear enough, but a shadow would help. The second illustration looks a bit more confusing as to the shape, location and intersections of the various entities; here, shadows would help more. POV-Ray or Blender will happily create these with little effort. Even more, inter-reflections, known as radiosity, help with seeing 3D relations in 2D images. This advanced effect is easy to achieve these days, without needing expertise in optics or graphics, just knowing that it exists.
Of course, this advice is no good unless one is already familiar with 3D graphics and tools such as POV-Ray.
For an automated solution, using OpenGL in some quick and dirty program may be best. Shadows may take some work, though.
