Explanation of quiver arrow scaling - python

Good afternoon,
I have been trying to understand the documentation of the matplotlib quiver function, and my eyes are officially bleeding.
I have attempted to play with the keyword arguments in my simple example below in order to understand how to change the appearance of my arrows, but half of the keyword arguments have no effect and the other half cause errors to be thrown. See below the code for some specific questions.
import numpy as np
import matplotlib.pyplot as plt

# ODE for falling object
def field(t, v):
    v_dot = 9.8 - (gamma / mass) * v
    return v_dot

# gamma is the coefficient for air resistance
gamma = 0.392
mass = 3.2

x = np.linspace(0, 50, 11)   # time from zero to 50 inclusive
y = np.linspace(0, 100, 11)  # initial velocities from 0 to 100 inclusive

# meshgrid creates two arrays of shape (len(y), len(x)).
# By pairing the values of these two arrays, we create a set
# of ordered pairs that give the coordinates for the location
# of the quiver arrows we wish to plot.
X, Y = np.meshgrid(x, y)

# We know the slope is v' = f(t, v), so we can use trig to get
# the x and y components of our quiver arrow vectors.
theta = np.arctan(field(X, Y))
U = np.cos(theta)
V = np.sin(theta)

plt.quiver(X, Y, U, V)
plt.grid()
plt.legend()
plt.show()
Functional Questions
How do I change the width or length of the arrows (supposing that I want to change them all uniformly)?
How do I change length AND/OR width of the arrows (supposing I want them to shrink or grow based on the y (velocity) values)?
What does the "angles" argument do? I thought that the angles were determined by the vector component arguments U and V...
What about the argument "units"? What does this argument do functionally?
If I choose units='width' what width are we referring to here? The documentation says the width of the axis, but that makes no sense to me... And when I choose units='y' nothing changes in the appearance of my graph. So what is the point of this argument?
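For concreteness, here is a minimal, self-contained sketch (my own illustration, rebuilding the field from the code above) of the keyword arguments these questions are about; the specific values are arbitrary:
import numpy as np
import matplotlib.pyplot as plt

# Rebuild the same slope field as in the code above
gamma, mass = 0.392, 3.2
X, Y = np.meshgrid(np.linspace(0, 50, 11), np.linspace(0, 100, 11))
theta = np.arctan(9.8 - (gamma / mass) * Y)
U, V = np.cos(theta), np.sin(theta)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(11, 4))

# scale:  larger values make every arrow uniformly shorter (length ~ 1/scale)
# width:  shaft width, measured in whatever `units` selects
# units:  the unit for the arrow dimensions other than length
#         ('width'/'height' = axes width/height, 'x'/'y'/'xy' = data units, ...)
# angles: 'uv' (default) takes the direction from U, V in screen space;
#         'xy' makes each arrow point from (x, y) toward (x + u, y + v)
ax1.quiver(X, Y, U, V, angles='xy', scale=3.0, width=0.004, units='width')
ax1.set_title('uniform scale and width')

# To vary arrow length per point, scale U and V themselves,
# e.g. proportionally to the velocity value Y
weight = Y / Y.max()
ax2.quiver(X, Y, U * weight, V * weight, angles='xy')
ax2.set_title('length weighted by y')

plt.show()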
As you can see, I have a lot of questions, and the documentation here is quite poor in my opinion. Perhaps it is understandable to someone with more advanced knowledge than I have.
Any links to good tutorials for beginners would be great. None of the ones I have found do much with the keyword arguments, and those that do are too advanced for me to understand.

Related

Converting matplotlib's streamplot coordinates to numpy coordinates

I'm currently working with matplotlib in order to create a plot of a specific vector field using matplotlib.pyplot.streamplot.
After creating and coloring the lines in the streamplot, I'm trying to color the whitespace around the divergence points in the plot: I want a color gradient that depends on each white pixel's distance from the divergence point.
The streamplot in question is built according to:
xs=np.linspace(-10,10,2000)
ys=np.linspace(-10,10,2000)
Therefore, if the divergence is located (for demonstration purposes) at (0, 0), it will be exactly in the middle of the plot.
Now, the only method I can think of for coloring according to distance from it is rather clunky, since it requires me to:
Add a matplotlib.patches.Rectangle on top of the divergence point in a specific color that is not yet in the image.
Convert the plot, with the streamlines and rectangles (one rectangle for each divergence point in the streamplot), to a np.array.
Find the new coordinates of the rectangles' colors (they represent the location of the divergence points in the new np.array created from the streamplot).
Calculate the pixel values I want from the colored pixels.
This whole method feels way too clunky and over-complicated, and obviously slower than it needs to be. I'm sure there is a way to convert the coordinates from the matplotlib plot to the ones in the np.array, or perhaps to handle the coloring in matplotlib directly, which would be even easier.
Sadly, I couldn't find a solution that answers this specific need yet.
Thanks in advance for any help!
EDIT
I'm adding an example (not my code, but a representation of what I wish to achieve).
I want to clarify that adding a patches.Circle on top of a circle patch is not my preferred solution, since I'm looking to keep my painting options more dynamic.
If you can define the color intensity you want as a 2-dimensional function, you can plot that function with plt.imshow() and then put the streamplot on top of it. You just need to transform the coordinates linearly to match the image coordinates.
Here is an example:
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 10]
# plot 2d function
grid = np.arange(-1, 1, 0.001)
x, y = np.meshgrid(grid, grid)
z = 1 - (x ** 2 + y ** 2) ** 0.5
plt.imshow(z, cmap='Blues')
# streamplot example from matplotlib docs (modified)
w = 3
Y, X = np.mgrid[-w:w:100j, -w:w:100j]
U = Y ** 2
V = X ** 2
# transform according to previous plot
n = len(grid) / 2
scale = n / w
X = (X + w) * scale
Y = (Y + w) * scale
U = (U + w) * scale
V = (V + w) * scale
plt.xticks(ticks=[0, n, 2*n], labels=[-w, 0, w])
plt.yticks(ticks=[0, n, 2*n], labels=[-w, 0, w])
plt.streamplot(X, Y, U, V);

Polar plot Magnus effect not showing correct data

I wanted to plot the velocity equations of the flow around a rotating cylinder on a polar plot. (The equations are from "Fundamentals of Aerodynamics" by Andersen.) You can see the two equations inside the for loop statements.
I cannot, for crying out loud, manage to represent the calculated data on the polar plot. I have tried every idea of mine but arrived nowhere. I did check the data, and it seems correct, as it behaves how it should.
Here the code of my last attempt:
import numpy as np
import matplotlib.pyplot as plt
RadiusColumn = 1.0
VelocityInfinity = 10.0
RPM_Columns = 0.0
ColumnOmega = (2*np.pi*RPM_Columns)/60  # rad/s
VortexStrength = 2*np.pi*RadiusColumn**2 * ColumnOmega  # rad m^2/s
NumberRadii = 6
NumberThetas = 19
theta = np.linspace(0, 2*np.pi, NumberThetas)
radius = np.linspace(RadiusColumn, 10 * RadiusColumn, NumberRadii)
f = plt.figure()
ax = f.add_subplot(111, polar=True)
for r in xrange(len(radius)):
    for t in xrange(len(theta)):
        VelocityRadius = (1.0 - (RadiusColumn**2/radius[r]**2)) * VelocityInfinity * np.cos(theta[t])
        VelocityTheta = - (1.0 + (RadiusColumn**2/radius[r]**2)) * VelocityInfinity * np.sin(theta[t]) - (VortexStrength/(2*np.pi*radius[r]))
        TotalVelocity = np.linalg.norm((VelocityRadius, VelocityTheta))
        ax.quiver(theta[t], radius[r], theta[t] + VelocityTheta/TotalVelocity, radius[r] + VelocityRadius/TotalVelocity)
plt.show()
As you can see, I have for now set the RPMs to 0. That means the flow should go from left to right and be symmetric across the horizontal axis. (The flow should go around the cylinder the same way on both sides.) The result, however, looks more like this:
This is complete nonsense. There seems to be vorticity, even when there is none set! Even weirder, when I only display data from 0 to pi/2, the flow changes!
As you can see from the code, I have tried to make use of unit vectors, but clearly this is not the way to go. I would appreciate any useful input.
Thanks!
The basic problem is that the .quiver method of a polar Axes object still expects its vector components in Cartesian coordinates, so you need to convert your theta and radial components to x and y yourself:
for r in range(len(radius)):
    for t in range(len(theta)):
        VelocityRadius = (1.0 - (RadiusColumn**2/radius[r]**2)) * VelocityInfinity * np.cos(theta[t])
        VelocityTheta = - (1.0 + (RadiusColumn**2/radius[r]**2)) * VelocityInfinity * np.sin(theta[t]) - (VortexStrength/(2*np.pi*radius[r]))
        TotalVelocity = np.linalg.norm((VelocityRadius, VelocityTheta))
        ax.quiver(theta[t], radius[r],
                  VelocityRadius/TotalVelocity*np.cos(theta[t])
                  - VelocityTheta/TotalVelocity*np.sin(theta[t]),
                  VelocityRadius/TotalVelocity*np.sin(theta[t])
                  + VelocityTheta/TotalVelocity*np.cos(theta[t]))
plt.show()
However, you can improve your code a lot by making use of vectorization: you don't need to loop over each point to obtain what you need. So the equivalent of your code, but even clearer:
def pol2cart(th, v_th, v_r):
    """Convert polar velocity components to Cartesian; return v_x, v_y."""
    return v_r*np.cos(th) - v_th*np.sin(th), v_r*np.sin(th) + v_th*np.cos(th)

theta = np.linspace(0, 2*np.pi, NumberThetas, endpoint=False)
radius = np.linspace(RadiusColumn, 10 * RadiusColumn, NumberRadii)[:, None]
f = plt.figure()
ax = f.add_subplot(111, polar=True)
VelocityRadius = (1.0 - (RadiusColumn**2/radius**2)) * VelocityInfinity * np.cos(theta)
VelocityTheta = - (1.0 + (RadiusColumn**2/radius**2)) * VelocityInfinity * np.sin(theta) - (VortexStrength/(2*np.pi*radius))
TotalVelocity = np.linalg.norm([VelocityRadius, VelocityTheta], axis=0)
VelocityX, VelocityY = pol2cart(theta, VelocityTheta, VelocityRadius)
ax.quiver(theta, radius, VelocityX/TotalVelocity, VelocityY/TotalVelocity)
plt.show()
A few notable changes:
I added endpoint=False to theta: since your function is periodic in 2*pi, you don't need to plot the endpoints twice. Note that this means you currently have more visible arrows; if you want the original behaviour, I suggest you decrease NumberThetas by one.
I added [:,None] to radius: this will make it a 2d array, so later operations in the definition of the velocities will give you 2d arrays: different columns correspond to different angles, different rows correspond to different radii (see the small broadcasting sketch after this list). quiver is compatible with array-valued input, so a single call to quiver will do your work.
Since the velocities are now 2d arrays, we need to call np.linalg.norm on essentially a 3d array, but this works as expected if we specify an axis to work over.
I defined the pol2cart auxiliary function to do the conversion from polar to Cartesian components; this is not necessary but it seems clearer to me this way.
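To illustrate the [:,None] broadcasting mentioned above, here is a tiny standalone sketch (not part of the original answer; the array sizes are arbitrary):
import numpy as np

theta = np.linspace(0, 2*np.pi, 4, endpoint=False)  # shape (4,)
radius = np.linspace(1.0, 3.0, 3)[:, None]          # shape (3, 1)

# Broadcasting pairs every radius with every angle: the result has
# shape (3, 4) -- rows vary with radius, columns vary with angle.
x = radius * np.cos(theta)
print(x.shape)  # (3, 4)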
Final remark: I suggest choosing shorter variable names, and ones that don't have CamelCase. That would probably make your coding faster too.

Matplotlib: How to increase colormap/linewidth quality in streamplot?

I have the following code to generate a streamplot based on an interp1d-Interpolation of discrete data:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from scipy.interpolate import interp1d
# CSV Import
a1array=pd.read_csv('a1.csv', sep=',',header=None).values
rv=a1array[:,0]
a1v=a1array[:,1]
da1vM=a1array[:,2]
a1 = interp1d(rv, a1v)
da1M = interp1d(rv, da1vM)
# Bx and By vector components
def bx(x, y):
    rad = np.sqrt(x**2 + y**2)
    if rad == 0:
        return 0
    else:
        return x*y/rad**4*(-2*a1(rad) + rad*da1M(rad))/2.87445E-19*1E-12

def by(x, y):
    rad = np.sqrt(x**2 + y**2)
    if rad == 0:
        return 4.02995937E-04/2.87445E-19*1E-12
    else:
        return -1/rad**4*(2*a1(rad)*y**2 + rad*da1M(rad)*x**2)/2.87445E-19*1E-12

Bx = np.vectorize(bx, otypes=[float])
By = np.vectorize(by, otypes=[float])
# Grid
num_steps = 11
Y, X = np.mgrid[-25:25:(num_steps * 1j), 0:25:(num_steps * 1j)]
Vx = Bx(X, Y)
Vy = By(X, Y)
speed = np.sqrt(Bx(X, Y)**2+By(X, Y)**2)
lw = 2*speed / speed.max()+.5
# Star Radius
circle3 = plt.Circle((0, 0), 16.3473140, color='black', fill=False)
# Plot
fig0, ax0 = plt.subplots(num=None, figsize=(11,9), dpi=80, facecolor='w', edgecolor='k')
strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw,density=[1,2], cmap=plt.cm.jet)
ax0.add_artist(circle3)
cbar=fig0.colorbar(strm.lines,fraction=0.046, pad=0.04)
cbar.set_label('B[GT]', rotation=270, labelpad=8)
cbar.set_clim(0,1500)
cbar.draw_all()
ax0.set_ylim([-25,25])
ax0.set_xlim([-25,25])
ax0.set_xlabel('x [km]')
ax0.set_ylabel('z [km]')
ax0.set_aspect(1)
plt.title('polyEos(0.05,2), M/R=0.2, B_r(0,0)=1402GT', y=1.01)
plt.savefig('MR02Br1402.pdf',bbox_inches=0)
plt.show(fig0)
I uploaded the csv-file here if you want to try some stuff https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0.
This generates the following plot:
I am actually pretty happy with the result, except for one small detail which I cannot figure out: if one looks closely, the linewidth and the color change in rather big steps, which is especially visible at the center:
Is there some way/option with which I can decrease the size of these steps, especially to make the colormap smoother?
I had another look at this and it wasn't as painful as I thought it might be.
Add:
subdiv = 15
points = np.arange(len(t[0]))
interp_points = np.linspace(0, len(t[0]), subdiv * len(t[0]))
tgx = np.interp(interp_points, points, tgx)
tgy = np.interp(interp_points, points, tgy)
tx = np.interp(interp_points, points, tx)
ty = np.interp(interp_points, points, ty)
after ty is initialised in the trajectories loop (line 164 in my version). Just substitute whatever number of subdivisions you want for subdiv = 15. All the segments in the streamplot will be subdivided into as many equally sized segments as you choose. The colors and linewidths for each will still be properly obtained from interpolating the data.
It's not as neat as changing the integration step, but it does plot exactly the same trajectories.
If you don't mind changing the streamplot code (matplotlib/streamplot.py), you could simply decrease the size of the integration steps. Inside _integrate_rk12() the maximum step size is defined as:
maxds = min(1. / dmap.mask.nx, 1. / dmap.mask.ny, 0.1)
If you decrease that, let's say:
maxds = 0.1 * min(1. / dmap.mask.nx, 1. / dmap.mask.ny, 0.1)
I get this result (left = new, right = original):
Of course, this makes the code about 10x slower, and I haven't thoroughly tested it, but it seems to work (as a quick hack) for this example.
About the density (mentioned in the comments): I personally don't see the problem with that. It's not like we are trying to visualize the actual path line of (e.g.) a particle; the density is already some arbitrary (controllable) choice, and yes, it is influenced by choices in the integration, but I don't think it changes the (not quite sure what to call this) required visualization we're after.
The results (density) do seem to converge a bit for decreasing step sizes; this shows the results for decreasing the integration step by a factor of {1, 5, 10, 20}:
You could increase the density parameter to get smoother color transitions, but then use the start_points parameter to reduce your overall clutter. The start_points parameter allows you to explicitly choose the location and number of trajectories to draw. It overrides the default, which is to plot as many as possible to fill up the entire plot.
But first you need one little fix to your existing code: according to the streamplot documentation, the X and Y args should be 1d arrays, not 2d arrays as produced by mgrid. It looks like passing in 2d arrays is supported, but it is undocumented and it is currently not compatible with the start_points parameter.
Here is how I revised your X, Y, Vx, Vy and speed:
# Grid
num_steps = 11
Y = np.linspace(-25, 25, num_steps)
X = np.linspace(0, 25, num_steps)
Ygrid, Xgrid = np.mgrid[-25:25:(num_steps * 1j), 0:25:(num_steps * 1j)]
Vx = Bx(Xgrid, Ygrid)
Vy = By(Xgrid, Ygrid)
speed = np.hypot(Vx, Vy)
lw = 3*speed / speed.max()+.5
Now you can explicitly set your start_points parameter. The start points are actually "seed" points. Any given stream trajectory will grow in both directions from the seed point. So if you put a seed point right in the center of the example plot, it will grow both up and down to produce a vertical stream line.
Besides controlling the number of trajectories, using the start_points parameter also controls the order they are drawn. This is important when considering how trajectories terminate. They will either hit the border of the plot, or they will terminate if they hit a cell of the plot that already has a trajectory. That means your first seeds will tend to grow longer and your later seeds will tend to get limited by previous ones. Some of the later seeds may not grow at all. The default seeding strategy is to plant a seed at every cell, which is pretty obnoxious if you have a high density. It also orders them by planting seeds first along the plot borders and spiraling inward. This may not be ideal for your particular case. I found a very simple strategy for your example was to just plant a few seeds between those two points of zero velocity, y=0 and x from -10 to 10. Those trajectories grow to their fullest and fill in most of the plot without clutter.
Here is how I create the seed points and set the density:
Here is how I create the seed points and set the density:
num_streams = 8
stptsy = np.zeros((num_streams,), float)
stptsx_left = np.linspace(0, -10.0, num_streams)
stptsx_right = np.linspace(0, 10.0, num_streams)
stpts_left = np.column_stack((stptsx_left, stptsy))
stpts_right = np.column_stack((stptsx_right, stptsy))
density = (3,6)
And here is how I modify the calls to streamplot:
strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw,
                      density=density, cmap=plt.cm.jet, start_points=stpts_right)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw,
               density=density, cmap=plt.cm.jet, start_points=stpts_left)
The result basically looks like the original, but with smoother color transitions and only 15 stream lines. (sorry no reputation to inline the image)
I think your best bet is to use a colormap other than jet. Perhaps cmap=plt.cm.plasma.
Weird-looking graphs obscure understanding of the data.
For data which is ordered in some way, like by the speed vector magnitude in this case, uniform sequential colormaps will always look smoother. The brightness of sequential maps varies monotonically over the color range, removing large perceived color changes over small ranges of data. The uniform maps vary linearly over their whole range, which makes the main features in the data much more visually apparent.
(source: matplotlib.org)
The jet colormap spans a very wide variety of brightnesses over its range, with an inflection in the middle. This is responsible for the particularly egregious red-to-blue transition around the center region of your graph.
(source: matplotlib.org)
The matplotlib user guide on choosing a colormap has a few recommendations about selecting an appropriate map for a given data set.
I don't think there is much else you can do to improve this by just changing parameters in your plot.
The streamplot divides the graph into cells, with 30*density[x,y] in each direction, and at most one streamline goes through each cell. The only setting which directly increases the number of segments is the density of the grid matplotlib uses. Increasing the Y density will decrease the segment length, so that the middle region may transition more smoothly. The cost of this is an inevitable cluttering of the graph in regions where the streamlines are horizontal.
You could also try to normalise the speeds differently so that the change is artificially lowered near the center. At the end of the day, though, it seems like that defeats the point of the graph. The graph should provide a useful view of the data for a human to understand. Using a colormap with strange inflections, or warping the data so that it looks nicer, removes some understanding which could otherwise be obtained from looking at the graph.
A more detailed discussion about the issues with colormaps like jet can be found on this blog.
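For reference, here is a standalone sketch comparing jet with a perceptually uniform colormap on a synthetic circulating field (not the question's data; only the cmap argument differs between the two panels):
import numpy as np
import matplotlib.pyplot as plt

# Synthetic circulating field, purely for comparing the colormaps
Y, X = np.mgrid[-3:3:100j, -3:3:100j]
U, V = -Y, X
speed = np.hypot(U, V)
lw = 2 * speed / speed.max() + 0.5

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.streamplot(X, Y, U, V, color=speed, linewidth=lw, cmap=plt.cm.jet)
ax1.set_title('jet')
ax2.streamplot(X, Y, U, V, color=speed, linewidth=lw, cmap=plt.cm.plasma)
ax2.set_title('plasma (perceptually uniform)')
plt.show()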

Construct an array spacing proportional to a function or other array

I have a function (f : black line) which varies sharply in a specific, small region (derivative f' : blue line, and second derivative f'' : red line). I would like to integrate this function numerically, and if I distribute points evenly (in log-space) I end up with fairly large errors in the sharply varying region (near 2E15 in the plot).
How can I construct an array spacing such that it is very well sampled in the area where the second derivative is large (i.e. a sampling frequency proportional to the second derivative)?
I happen to be using python, but I'm interested in a general algorithm.
Edit:
1) It would be nice to be able to still control the number of sampling points (at least roughly).
2) I've considered constructing a probability distribution function shaped like the second derivative and drawing randomly from that --- but I think this will offer poor convergence, and in general, it seems like a more deterministic approach should be feasible.
Assuming the second derivative f'' is available as a NumPy array (called d2f below), you could do the following
# Scale these deltas as you see fit
deltas = 1 / d2f
domain = deltas.cumsum()
To account only for order-of-magnitude swings, this could be adjusted as follows...
deltas = 1 / (-np.log10(1 / d2f))
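Regarding point 1 of the edit (controlling the number of points), one way to build on this idea is to accumulate the weights and invert them onto a fixed-size grid. This is a minimal sketch of that, not from the original answer; the names and the test function are illustrative:
import numpy as np

def graded_grid(x0, d2f, n_points):
    """Return n_points sample locations between x0[0] and x0[-1],
    spaced more densely where |d2f| (the second derivative) is large."""
    weights = np.abs(d2f) + 1e-12              # avoid zero weight where f'' ~ 0
    cdf = np.cumsum(weights)                   # cumulative "importance"
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])  # normalize to [0, 1]
    # Uniform steps in the cumulative importance become
    # non-uniform (graded) steps in x.
    u = np.linspace(0.0, 1.0, n_points)
    return np.interp(u, cdf, x0)

# Example: a function with a sharp feature near x = 5
x0 = np.linspace(0.0, 10.0, 1000)
f = np.tanh(20.0 * (x0 - 5.0))
d2f = np.gradient(np.gradient(f, x0), x0)
x_new = graded_grid(x0, d2f, 100)  # 100 points, clustered near x = 5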
I'm just spitballing here ... (as I don't have time to try this out for real)...
Your data looks (roughly) linear on a log-log plot (at least, each segment seems to be). So, I might consider doing a sort of integration in log-space.
log_x = np.log(x)
log_y = np.log(y)
Now, for each of your points, you can get the slope (and intercept) in log-log space:
rise = np.diff(log_y)
run = np.diff(log_x)
slopes = rise / run
And, similarly, the intercept can be calculated:
# y = mx + b
# :. b = y - mx
intercepts = log_y[:-1] - slopes * log_x[:-1]
Alright, now we have a bunch of (straight) lines in log-log space. But a straight line in log-log space corresponds to y = exp(intercept) * x^slope in real space, i.e. a power law a*x^k with a = exp(intercept) and k = slope. We can integrate that easily enough: the antiderivative is a/(k+1) * x^(k+1), so...
def _eval_log_log_integrate(a, k, x):
    # a is the intercept in log-space, so exp(a) is the power-law coefficient
    return np.exp(a)/(k+1) * x ** (k+1)

def log_log_integrate(a, k, x1, x2):
    return _eval_log_log_integrate(a, k, x2) - _eval_log_log_integrate(a, k, x1)

partial_integrals = []
for a, k, x_lower, x_upper in zip(intercepts, slopes, x[:-1], x[1:]):
    partial_integrals.append(log_log_integrate(a, k, x_lower, x_upper))

total_integral = sum(partial_integrals)
You'll want to check my math -- It's been a while since I've done this sort of thing :-)
1) The Cool Approach
At the moment I have implemented an 'adaptive refinement' approach inspired by hydrodynamics techniques. I have a function f which I want to sample, and I choose some initial array of sample points x_i. I construct a "sampling" function g, which determines where to insert new sample points.
In this case I chose g as the slope of log(f), since I want to resolve rapid changes in log space. I then divide the span of g into L=3 refinement levels. If g(x_i) exceeds a refinement level, that span is subdivided into N=2 pieces, and those subdivisions are added to the samples and checked against the next level (a minimal sketch of this loop follows the figure description below). This yields something like this:
The solid grey line is the function I want to sample, and the black crosses are my initial sampling points.
The dashed grey line is the derivative of the log of my function.
The colored dashed lines are my 'refinement levels'
The colored crosses are my refined sampling points.
This is all shown in log-space.
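Here is a minimal sketch of the refinement loop described above. It is my own illustrative reconstruction, not the original code; the test function and the choice of log-spaced subdivisions are assumptions:
import numpy as np

def refine_samples(f, x, n_levels=3, n_sub=2):
    """Adaptively refine sample points x for function f.

    g is the slope of log(f) vs log(x) on each interval; intervals whose
    |g| exceeds successive refinement levels get subdivided into n_sub pieces."""
    x = np.asarray(x, dtype=float)
    slope = lambda xs: np.abs(np.diff(np.log(f(xs))) / np.diff(np.log(xs)))

    # Refinement levels: thresholds spanning the initial range of |g|
    g0 = slope(x)
    levels = np.linspace(g0.min(), g0.max(), n_levels + 2)[1:-1]

    for threshold in levels:
        g = slope(x)
        new_points = []
        for i in np.where(g > threshold)[0]:
            # Subdivide the interval (x[i], x[i+1]) into n_sub log-spaced pieces
            new_points.extend(np.geomspace(x[i], x[i + 1], n_sub + 1)[1:-1])
        if new_points:
            x = np.sort(np.concatenate([x, new_points]))
    return x

# Example: a function with a sharp drop near x = 2e15
f = lambda x: 1.0 / (1.0 + (x / 2e15) ** 8)
x0 = np.geomspace(1e13, 1e18, 20)
x_refined = refine_samples(f, x0)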
2) The Simple Approach
After I finished (1), I realized that I probably could have just chosen a maximum spacing in y and chosen x-spacings to achieve that. Similarly, just divide the function evenly in y and find the corresponding x points. The results of this are shown below; a small code sketch of the idea follows:
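A small sketch of this simple approach, assuming for illustration that the function is monotonic over the interval so that even spacing in y can be inverted back to x by interpolation (the test function is made up):
import numpy as np

# Illustrative monotonic example (not the original data)
x_fine = np.geomspace(1e13, 1e18, 10000)
y_fine = np.tanh(3 * np.log10(x_fine / 2e15))     # varies sharply near 2e15

# Choose sample points evenly spaced in y, then map them back to x
n_points = 50
y_targets = np.linspace(y_fine.min(), y_fine.max(), n_points)
x_samples = np.interp(y_targets, y_fine, x_fine)  # requires y_fine increasing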
A simple approach would be to split the x-axis-array into three parts and use different spacing for each of them. It would allow you to maintain the total number of points and also the required spacing in different regions of the plot. For example:
x = np.linspace(10**13, 10**15, 100)
x = np.append(x, np.linspace(10**15, 10**16, 100))
x = np.append(x, np.linspace(10**16, 10**18, 100))
You may want to choose a better spacing based on your data, but you get the idea.

Mayavi points3d with different size and colors

Is it possible in mayavi to specify individually both the size and the colors of every point?
That API is cumbersome to me.
points3d(x, y, z...)
points3d(x, y, z, s, ...)
points3d(x, y, z, f, ...)
x, y and z are numpy arrays, or lists, all of the same shape, giving the positions of the points.
If only 3 arrays x, y, z are given, all the points are drawn with the same size and color.
In addition, you can pass a fourth array s of the same shape as x, y, and z giving an associated scalar value for each point, or a function f(x, y, z) returning the scalar value. This scalar value can be used to modulate the color and the size of the points.
So in this case the scalar controls both the size and the color, and it's not possible to disentangle them. I want a way to specify size as an (N,1) array and color as another (N,1) array individually.
Seems complicated?
Each VTK source has a dataset for both scalars and vectors.
The trick I use in my program to get the color and size to differ is to bypass the mayavi source and, directly in the VTK source, use scalars for color and vectors for size (it probably works the other way around as well).
nodes = points3d(x, y, z)
nodes.glyph.scale_mode = 'scale_by_vector'

# this sets the vectors to a (3, 5000) array of random values used to scale the points
nodes.mlab_source.dataset.point_data.vectors = np.tile(np.random.random((5000,)), (3, 1))
nodes.mlab_source.dataset.point_data.scalars = np.random.random((5000,))
You may need to transpose the 5000x3 vector data or otherwise shift the matrix dimensions somehow.
I agree that the API that Mayavi provides here is unpleasant. The Mayavi documentation suggests the following hack (which I have paraphrased slightly) to independently adjust the size and color of points.
pts = mayavi.mlab.quiver3d(x, y, z, sx, sy, sz, scalars=c, mode="sphere", scale_factor=f)
pts.glyph.color_mode = "color_by_scalar"
pts.glyph.glyph_source.glyph_source.center = [0,0,0]
This will display x,y,z points as spheres, even though you're calling mayavi.mlab.quiver3d. Mayavi will use the norm of the sx,sy,sz vectors to determine the size of the points, and will use the scalar values in c to index into a color map. You can optionally supply a constant size scaling factor, which will be applied to all the points.
This is certainly not the most self-documenting code you'll ever write, but it works.
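To make that concrete, here is a small self-contained sketch of the same hack with random data; the sizes and colors arrays are made up for illustration:
import numpy as np
from mayavi import mlab

n = 200
x, y, z = np.random.random((3, n))
sizes = np.random.uniform(0.5, 2.0, n)   # independent per-point size
colors = np.random.random(n)             # independent per-point color value

# Pass the size as all three vector components: the vector norm
# (size * sqrt(3)) is proportional to `sizes`, while `colors` feeds the colormap.
pts = mlab.quiver3d(x, y, z, sizes, sizes, sizes, scalars=colors,
                    mode='sphere', scale_factor=0.05)
pts.glyph.color_mode = 'color_by_scalar'
pts.glyph.glyph_source.glyph_source.center = [0, 0, 0]
mlab.show()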
I also agree that the API is ugly. I just did a simple and complete example using @aestrivex's idea:
from mayavi.mlab import *
import numpy as np
K = 10
xx = np.arange(0, K, 1)
yy = np.arange(0, K, 1)
x, y = np.meshgrid(xx, yy)
x, y = x.flatten(), y.flatten()
z = np.zeros(K*K)
colors = 1.0 * (x + y)/(max(x)+max(y))
nodes = points3d(x, y, z, scale_factor=0.5)
nodes.glyph.scale_mode = 'scale_by_vector'
nodes.mlab_source.dataset.point_data.scalars = colors
show()
which produces:
If, as in my case, anyone is trying to, for example, update the scale of a single point, or the color of a point upon a click or a keypress event, you may need to add the following line to make sure the scalars update even after the figure is already displayed. (I include the complete example of my function that modifies the size of a point upon clicking on it, as it might be helpful to some people.)
def picker(picker):
    if picker.actor in glyphs.actor.actors:
        point_id = picker.point_id // glyph_points.shape[0]
        # If no point has been selected, point_id is -1
        if point_id != -1:
            glyphs.mlab_source.dataset.point_data.scalars[point_id] = 10
            # The following line is necessary for the live update
            glyphs.mlab_source.dataset.modified()

# You would typically use this function the following way in your main:
figure = mlab.gcf()
mlab.clf()

# define your points
pts = ...

# define scalars, or they will be set to None by default
s = len(pts)*[1]
glyphs = mlab.points3d(pts[:,0], pts[:,1], pts[:,2], s, scale_factor=1, mode='cube')
glyph_points = glyphs.glyph.glyph_source.glyph_source.output.points.to_array()

picker = figure.on_mouse_pick(picker, button='Left')
picker.tolerance = 0.01
mlab.show()
Inspired by this example: https://docs.enthought.com/mayavi/mayavi/auto/example_select_red_balls.html
