I have some data that consists of several 2D images that I would like to render in specific [x,y,z] positions relative to one another using mayavi2 (v4.3.0).
From the documentation it seems that I should just be able to do this with mlab.imshow(). Unfortunately, mayavi throws an exception when I call imshow specifying the extent parameter (AttributeError: 'ImageActor' object has no attribute 'actor').
I also tried setting the x,y and z data directly by modifying im.mlab_source.x,y,z.... Weirdly, whilst this correctly changes the x and y extents, it does nothing to the z-position even though im.mlab_source.z clearly changes.
Here's a runnable example:
import numpy as np
from scipy.misc import lena
from mayavi import mlab

def normal_imshow(img=lena()):
    return mlab.imshow(img, colormap='gray')

def set_extent(img=lena()):
    return mlab.imshow(img, extent=[0, 100, 0, 100, 50, 50], colormap='cool')

def set_xyz(img=lena()):
    im = mlab.imshow(img, colormap='hot')
    src = im.mlab_source
    print 'Old z :', src.z
    src.x = 100 * (src.x - src.x.min()) / (src.x.max() - src.x.min())
    src.y = 100 * (src.y - src.y.min()) / (src.y.max() - src.y.min())
    src.z[:] = 50
    print 'New z :', src.z
    return im

if __name__ == '__main__':
    # this works
    normal_imshow()

    # # this fails (AttributeError)
    # set_extent()

    # weirdly, this seems to work for the x and y axes, but does not change
    # the z-position even though data.z does change
    set_xyz()
Ok, it turns out that this is a known bug in mayavi. However, it is possible to change the orientation, position and scale of an ImageActor object after it has been created:
obj = mlab.imshow(img)
obj.actor.orientation = [0, 0, 0]  # the required orientation
obj.actor.position = [0, 0, 0]     # the required position
obj.actor.scale = [1, 1, 1]        # the required scale (all ones leaves the size unchanged)
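For instance, a minimal sketch of stacking two images at different z heights via actor.position (using a random array as the image, since this is just an illustration of the workaround above):

import numpy as np
from mayavi import mlab

img = np.random.random((64, 64))  # any 2D array stands in for the real image data

im1 = mlab.imshow(img, colormap='gray')
im2 = mlab.imshow(img, colormap='hot')

# place the second image 50 units above the first along z
im1.actor.position = [0, 0, 0]
im2.actor.position = [0, 0, 50]

mlab.show()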
I'm using the amazing open3d Python library to visualize a point cloud. I already know the normal vectors of these points, which I assign directly as follows:
import open3d as o3d

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.normals = o3d.utility.Vector3dVector(normals)
I am also setting a visualizer in which I insert these points as follows:
import open3d.visualization.gui as gui

app = gui.Application.instance
app.initialize()

vis = o3d.visualization.O3DVisualizer("Open3D - 3D Text", 1024, 768)
vis.show_settings = True
vis.add_geometry("my points", pcd)

with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug) as cm:
    '''visualize'''
    vis.reset_camera_to_default()
    app.add_window(vis)
    app.run()
Up to now, all of this has run as intended. However, I am not able to configure the visualizer so that it shows the normal vectors. Apparently o3d.visualization.Visualizer() has a method get_render_option() that is said to "retrieve a RenderOption" object, and that RenderOption object has a point_show_normal property, but I couldn't make my code (more complicated than the minimal example above) work with o3d.visualization.Visualizer(): I don't see how to use o3d.visualization.Visualizer().get_render_option().point_show_normal.
Is there any way to show the normal vectors with open3d.visualization.O3DVisualizer?
You need to add two lines to your code: get the render option and set point_show_normal to True:
opt = vis.get_render_option()
opt.point_show_normal = True
You can find more in the Open3D documentation, tutorials and Python examples.
I hope it helps.
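For completeness, a minimal sketch of how those two lines fit into a full script with the legacy o3d.visualization.Visualizer (not the O3DVisualizer from the question); the random data is just a stand-in:

import numpy as np
import open3d as o3d

# stand-in point cloud with precomputed normals
points = np.random.rand(100, 3)
normals = np.random.rand(100, 3)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.normals = o3d.utility.Vector3dVector(normals)

vis = o3d.visualization.Visualizer()
vis.create_window()
vis.add_geometry(pcd)

# get the render option and enable drawing of the point normals
opt = vis.get_render_option()
opt.point_show_normal = True

vis.run()
vis.destroy_window()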
I didn't find a solution so far, so I resorted to looking at my normal vectors in another window, produced using the mayavi library rather than the open3D library. To do so, I used this simple code snippet:
from mayavi.mlab import *

P = [my list of 3D points]       # an (n, 3) array of points
N = [my list of normal vectors]  # an (n, 3) array of normals

x = P[:, 0]
y = P[:, 1]
z = P[:, 2]
points3d(x, y, z, color=(0, 1, 0), scale_factor=0.5)

u = N[:, 0]
v = N[:, 1]
w = N[:, 2]
quiver3d(x, y, z, u, v, w)
show()
And it worked as intended. Ideally I would like to have the normal vectors displayed with the rest of the figure, but this responded to my immediate needs.
I consider this a workaround rather than the definitive solution, so I am posting it here as an answer in case someone else with the same problem finds it useful. But my question still isn't solved.
I am trying to plot a single line (or tube) in Mayavi that has a non-constant width or radius. This seems like a simple task though I may not be understanding what is happening behind the scenes well enough to make this happen.
The following code creates the line I want, and I am able to scale by color; however, I would also like to scale by width.
import mayavi.mlab as mlab
import numpy as np
x = range(100)
y = range(100)
z = range(100)
s = np.random.uniform(0, 1, 100)
mlab.plot3d(x, y, z, s, tube_radius=10)
I don't have an image of the desired output as I am unable to create it, though it would essentially be the plot produced by the code above scaled by radius instead of color, so that some areas of the line would be wider than others. One possible solution would be to plot each section individually with its own tube_radius, though this really seems like poor practice as the lines can get quite long and have many different sections.
In the GUI, you can go to the Tube node of the pipeline and set Vary_radius to 'vary_radius_by_scalar'.
In the script you can do
import mayavi.mlab as mlab
import numpy as np
x = range(100)
y = range(100)
z = range(100)
s = np.random.uniform(0, 1, 100)
t = mlab.plot3d(x, y, z, s, tube_radius=10)
t.parent.parent.filter.vary_radius = 'vary_radius_by_scalar'
This works because the parent of the surface is the module manager (colors, etc.), and its parent is the Tube filter in the pipeline.
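If you also want to control how strongly the radius varies, the tvtk TubeFilter behind that node exposes a radius_factor trait (this wraps VTK's SetRadiusFactor; treat the exact name as an assumption for your version) that caps the maximum radius as a multiple of the minimum. A small sketch:

import numpy as np
import mayavi.mlab as mlab

x = y = z = np.arange(100)
s = np.random.uniform(0, 1, 100)

t = mlab.plot3d(x, y, z, s, tube_radius=10)
tube_filter = t.parent.parent.filter  # the tvtk TubeFilter behind the Tube node
tube_filter.vary_radius = 'vary_radius_by_scalar'
tube_filter.radius_factor = 5.0       # assumed trait: max radius as a multiple of the minimum
mlab.show()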
Is there a way to extract the data from an array that corresponds to a line of a contour plot in Python? I.e. I have the following code:
import numpy as np
import matplotlib.pyplot as plt

n = 100
x, y = np.mgrid[0:1:n*1j, 0:1:n*1j]
plt.contour(x, y, values)
where values is a 2D array with data (I stored the data in a file but it seems not to be possible to upload it here). The picture below shows the corresponding contour plot. My question is whether it is possible to get exactly the data from values that corresponds, e.g., to the left contour line in the plot.
Worth noting here, since this post was the top hit when I had the same question, that this can be done with scikit-image much more simply than with matplotlib. I'd encourage you to check out skimage.measure.find_contours. A snippet of their example:
import numpy as np
from skimage import measure

x, y = np.ogrid[-np.pi:np.pi:100j, -np.pi:np.pi:100j]
r = np.sin(np.exp((np.sin(x)**3 + np.cos(y)**2)))
contours = measure.find_contours(r, 0.8)
which can then be plotted/manipulated as you need. I like this more because you don't have to get into the deep weeds of matplotlib.
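Continuing the snippet above, a small sketch of plotting what find_contours returns; note that it gives (row, column) coordinates, so the columns are swapped when plotting as x/y:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.imshow(r, cmap='gray')
for contour in contours:
    # each contour is an (N, 2) array of (row, column) coordinates
    ax.plot(contour[:, 1], contour[:, 0], linewidth=2)
plt.show()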
plt.contour returns a QuadContourSet. From that, we can access the individual lines using:
cs.collections[0].get_paths()
This returns all the individual paths. To access the actual x, y locations, we need to look at the vertices attribute of each path. The first contour drawn should be accessible using:
X, Y = cs.collections[0].get_paths()[0].vertices.T
See the example below for how to access any of the given lines; in the example I only access the first one:
import matplotlib.pyplot as plt
import numpy as np

n = 100
x, y = np.mgrid[0:1:n*1j, 0:1:n*1j]
values = x**0.5 * y**0.5

fig1, ax1 = plt.subplots(1)
cs = plt.contour(x, y, values)

lines = []
for line in cs.collections[0].get_paths():
    lines.append(line.vertices)

fig1.savefig('contours1.png')

fig2, ax2 = plt.subplots(1)
ax2.plot(lines[0][:, 0], lines[0][:, 1])
fig2.savefig('contours2.png')
contours1.png shows the full contour plot, and contours2.png shows the single extracted contour line.
plt.contour returns a QuadContourSet which holds the data you're after.
See Get coordinates from the contour in matplotlib? (which this question is probably a duplicate of...)
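As a side note, the QuadContourSet also exposes an allsegs attribute (one list of (N, 2) vertex arrays per contour level), which gets at the same data without going through collections and paths. A minimal sketch:

import numpy as np
import matplotlib.pyplot as plt

n = 100
x, y = np.mgrid[0:1:n*1j, 0:1:n*1j]
values = x**0.5 * y**0.5

cs = plt.contour(x, y, values)

# allsegs[i][j] is the (N, 2) array of x, y vertices for the j-th line
# of the i-th contour level
first_line = cs.allsegs[0][0]
print(first_line[:5])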
I use matplotlib's method hexbin to compute 2d histograms on my data.
But I would like to get the coordinates of the centers of the hexagons in order to further process the results.
I got the values using get_array() method on the result, but I cannot figure out how to get the bins coordinates.
I tried to compute them from the number of bins and the extent of my data, but I don't know the exact number of bins in each direction. gridsize=(10,2) should do the trick, but it does not seem to work.
Any idea?
I think this works.
from __future__ import division
import numpy as np
import math
import matplotlib.pyplot as plt

def generate_data(n):
    """Make random, correlated x & y arrays"""
    points = np.random.multivariate_normal(mean=(0, 0),
                                           cov=[[0.4, 9], [9, 10]], size=int(n))
    return points

if __name__ == '__main__':
    color_map = plt.cm.Spectral_r
    n = 1e4
    points = generate_data(n)

    xbnds = np.array([-20.0, 20.0])
    ybnds = np.array([-20.0, 20.0])
    extent = [xbnds[0], xbnds[1], ybnds[0], ybnds[1]]

    fig = plt.figure(figsize=(10, 9))
    ax = fig.add_subplot(111)

    x, y = points.T

    # Set gridsize just to make them visually large
    image = plt.hexbin(x, y, cmap=color_map, gridsize=20, extent=extent,
                       mincnt=1, bins='log')
    # Note that mincnt=1 adds 1 to each count
    counts = image.get_array()
    ncnts = np.count_nonzero(np.power(10, counts))
    verts = image.get_offsets()
    for offc in xrange(verts.shape[0]):
        binx, biny = verts[offc][0], verts[offc][1]
        if counts[offc]:
            plt.plot(binx, biny, 'k.', zorder=100)

    ax.set_xlim(xbnds)
    ax.set_ylim(ybnds)
    plt.grid(True)
    cb = plt.colorbar(image, spacing='uniform', extend='max')
    plt.show()
I would love to confirm that the code by Hooked using get_offsets() works, but I tried several iterations of the code mentioned above to retrieve center positions and, as Dave mentioned, get_offsets() remains empty. The workaround that I found is to use the non-empty image.get_paths() option. My code takes the mean to find the centers, which makes it just a smidge longer, but it does work.
The get_paths() option returns a set of embedded x,y coordinates that can be looped over and then averaged to return the center position for each hexagon.
The code that I have is as follows:
counts = image.get_array()   # counts in each hexagon, works great
verts = image.get_offsets()  # empty, don't use this
b = image.get_paths()        # this does work, gives Path objects that can be plotted

for x in xrange(len(b)):
    xav = np.mean(b[x].vertices[0:6, 0])  # center in x (RA)
    yav = np.mean(b[x].vertices[0:6, 1])  # center in y (DEC)
    plt.plot(xav, yav, 'k.', zorder=100)
I had this same problem. I think what needs to be developed is a framework for a HexagonalGrid object which can then be applied to many different data sets (and it would be awesome to do it for N dimensions). This is possible, and it surprises me that neither Scipy nor Numpy has anything for it (furthermore, there seems to be nothing else like it except perhaps binify).
That said, I assume you want to use hexbinning to compare multiple binned data sets. This requires some common base. I got this to work using matplotlib's hexbin in the following way:
import numpy as np
import matplotlib.pyplot as plt

def get_data(mean, cov, n=1e3):
    """
    Quick fake data builder
    """
    np.random.seed(101)
    points = np.random.multivariate_normal(mean=mean, cov=cov, size=int(n))
    x, y = points.T
    return x, y

def get_centers(hexbin_output):
    """
    about 40% faster than previous post only because you're not calculating the
    min/max every time
    """
    paths = hexbin_output.get_paths()

    v = paths[0].vertices[:-1]  # adds a value [0,0] to the end
    vx, vy = v.T

    idx = [3, 0, 5, 2]  # index for [xmin,xmax,ymin,ymax]
    xmin, xmax, ymin, ymax = vx[idx[0]], vx[idx[1]], vy[idx[2]], vy[idx[3]]

    half_width_x = abs(xmax - xmin) / 2.0
    half_width_y = abs(ymax - ymin) / 2.0

    centers = []
    for i in xrange(len(paths)):
        cx = paths[i].vertices[idx[0], 0] + half_width_x
        cy = paths[i].vertices[idx[2], 1] + half_width_y
        centers.append((cx, cy))
    return np.asarray(centers)

# important parts ==>
class Hexagonal2DGrid(object):
    """
    Used to fix the gridsize, extent, and bins
    """
    def __init__(self, gridsize, extent, bins=None):
        self.gridsize = gridsize
        self.extent = extent
        self.bins = bins

def hexbin(x, y, hexgrid):
    """
    To hexagonally bin the data in 2 dimensions
    """
    fig = plt.figure()
    ax = fig.add_subplot(111)

    # Note mincnt=0 so that it will return a value for every point in the
    # hexgrid, not just those with count > mincnt.
    # Basically you fix the gridsize, extent, and bins to keep them the same,
    # then the resulting count array is the same.
    hexbin = plt.hexbin(x, y, mincnt=0,
                        gridsize=hexgrid.gridsize,
                        extent=hexgrid.extent,
                        bins=hexgrid.bins)
    # you could close the figure if you don't want it
    # plt.close(fig.number)

    counts = hexbin.get_array().copy()
    return counts, hexbin

# Example ===>
if __name__ == "__main__":
    hexgrid = Hexagonal2DGrid((21, 5), [-70, 70, -20, 20])
    x_data, y_data = get_data((0, 0), [[-40, 95], [90, 10]])
    x_model, y_model = get_data((0, 10), [[100, 30], [3, 30]])

    counts_data, hexbin_data = hexbin(x_data, y_data, hexgrid)
    counts_model, hexbin_model = hexbin(x_model, y_model, hexgrid)

    # if you want the centers, they will be the same for both
    centers = get_centers(hexbin_data)

    # if you want to ignore the cells with zeros then use the following mask.
    # But if you want zeros for some bins and not others I'm not sure of an
    # elegant way to do this without using the centers
    nonzero = counts_data != 0

    # now you can compare the two data sets
    variance_data = counts_data[nonzero]
    square_diffs = (counts_data[nonzero] - counts_model[nonzero])**2
    chi2 = np.sum(square_diffs / variance_data)
    print(" chi2={}".format(chi2))
I'm trying to annotate points plotted with the points3d() function using mayavi.mlab.
Each point is associated with a label which I would like to plot next to the points using the text3d() function. Plotting the points is fast, however the mlab.text3d() function does not seem to accept arrays of coordinates, so I have to loop over the points and plot the text individually, which is very slow:
for i in xrange(0, self.n_labels):
    self.mlab_data.append(
        mlab.points3d(pX[self.labels == self.u_labels[i], 0],
                      pX[self.labels == self.u_labels[i], 1],
                      pX[self.labels == self.u_labels[i], 2],
                      color=self.colours[i],
                      opacity=1,
                      scale_mode="none",
                      scale_factor=sf))
    idcs, = np.where(self.labels == self.u_labels[i])
    for n in idcs.flatten():
        mlab.text3d(pX[n, 0],
                    pX[n, 1],
                    pX[n, 2],
                    "%d" % self.u_labels[i],
                    color=self.colours[i],
                    opacity=1,
                    scale=sf)
Any ideas how I could speed this up? Also, is it possible to add a legend (as, for instance, in matplotlib)? I couldn't find anything in the docs.
Thanks,
Patrick
The way you are doing it above will render the scene every time you plot a point or text, which is slow. You can disable scene rendering, do the plotting, and then render the scene once at the end by toggling figure.scene.disable_render = True/False:
import numpy as np
from mayavi import mlab

X = 100 * np.random.rand(100, 3)
figure = mlab.figure('myfig')

figure.scene.disable_render = True  # Super duper trick
mlab.points3d(X[:, 0], X[:, 1], X[:, 2], scale_factor=0.4)
for i, x in enumerate(X):
    mlab.text3d(x[0], x[1], x[2], str(i), scale=(2, 2, 2))
figure.scene.disable_render = False  # Super duper trick
I use this trick and others in the Figure class of the morphic viewer: https://github.com/duanemalcolm/morphic/blob/master/morphic/viewer.py
Another good trick in the code is to reuse existing objects, i.e., if you've plotted the text already, don't replot it; just update its position and text attributes. This means keeping a reference to each mlab object. You can see how I do this in morphic.Viewer.
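A minimal sketch of that reuse pattern (assuming the objects returned by mlab.text3d expose position and text traits, as described above; the data here is just a stand-in):

import numpy as np
from mayavi import mlab

X = 100 * np.random.rand(10, 3)
figure = mlab.figure('reuse')

# first pass: create the text objects once and keep references to them
figure.scene.disable_render = True
texts = [mlab.text3d(x[0], x[1], x[2], str(i), scale=(2, 2, 2))
         for i, x in enumerate(X)]
figure.scene.disable_render = False

# later updates: move/relabel the existing objects instead of replotting
X_new = 100 * np.random.rand(10, 3)
figure.scene.disable_render = True
for i, (t, x) in enumerate(zip(texts, X_new)):
    t.position = x          # assumed trait on the text object
    t.text = "pt %d" % i    # assumed trait on the text object
figure.scene.disable_render = False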