Mayavi - Mouse hover interaction & plotting the same scene from different perspectives - Python

I am currently struggling to work out how to interact appropriately with a Mayavi-rendered scene.
I have a lidar point cloud plotted with points3d(). In addition, I have placed a bounding box around a car within the point cloud, and I would like the points inside the box to change color as soon as I hover the mouse over the bounding box.
Can you tell me how to select just the points inside the bbox and change their color?
And my second question: how can I show the same point cloud scene in a 3D view and a bird's-eye view concurrently?
Thank you very much :]
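One way to get a 3D view and a bird's-eye view at the same time (a minimal sketch, with random points standing in for the lidar cloud) is to plot the same data into two mlab figures and set a top-down camera on the second one with mlab.view():
import numpy as np
from mayavi import mlab

xyz = np.random.rand(1000, 3)  # stand-in for the lidar cloud

f1 = mlab.figure('3d view')
mlab.points3d(xyz[:, 0], xyz[:, 1], xyz[:, 2], figure=f1, scale_factor=0.02)

f2 = mlab.figure('bird view')
mlab.points3d(xyz[:, 0], xyz[:, 1], xyz[:, 2], figure=f2, scale_factor=0.02)
mlab.view(azimuth=0, elevation=0, figure=f2)  # camera looking straight down the z axis
mlab.show()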

I have found a solution to the color problem - I don't know if it is best practice. But I still need help determining which points lie inside the bounding box. I would also like to create a GUI that lets the user modify the size and orientation of the bounding box (but that is another topic).
import numpy as np
from mayavi.mlab import draw, points3d
from tvtk.api import tvtk

# Primitives
N = 3000  # Number of points
ones = np.ones(N)
scalars = ones  # Key point: set a scalar for each point

# Define a per-point RGB color table, which must be uint8 in [0, 255]:
# first half yellow, second half blue
colors = np.vstack((np.tile(np.array([[255], [255], [0]]), int(N / 2)).T,
                    np.tile(np.array([[0], [0], [255]]), int(N / 2)).T))

# Define coordinates and points
x, y, z = (np.random.random((N, 3)) * 255).astype(np.uint8).T
pts = points3d(x, y, z, scale_factor=10)  # Create points
# pts.glyph.color_mode = 'color_by_scalar'  # alternative: color through the LUT below
# pts.module_manager.scalar_lut_manager.lut.table = colors
pts.glyph.scale_mode = 'scale_by_vector'  # scale by (absent) vector data so the scalars don't change glyph size

# Feed the raw RGB array to VTK as unsigned chars and redraw
sc = tvtk.UnsignedCharArray()
sc.from_array(colors)
pts.mlab_source.dataset.point_data.scalars = sc
draw()
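For the open question of which points fall inside the bounding box, a plain NumPy sketch (assuming the box is given by a center, a 3x3 rotation matrix R whose columns are the box axes, and full edge lengths; these names are hypothetical, not from the code above): transform the points into the box frame and compare against the half-extents.
import numpy as np

def points_in_obb(points, center, R, extents):
    # Boolean mask of the points (N x 3) that lie inside an oriented box.
    # center: (3,) box center; R: (3, 3) rotation, columns = box axes;
    # extents: (3,) full edge lengths of the box.
    local = (points - center) @ R  # point coordinates expressed in the box frame
    return np.all(np.abs(local) <= np.asarray(extents) / 2.0, axis=1)

# usage with the point cloud above (center, R, extents from your bbox):
# mask = points_in_obb(np.column_stack((x, y, z)), center, R, extents)
# colors[mask] = (255, 0, 0)  # recolor the inliers, then re-assign the scalars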

Related

Animation of sphere rotation in Mayavi

I am using Mayavi to simulate satellite trajectories, so I need the sphere that is the main central body to rotate as the satellite moves. However, I cannot seem to find a way to change the orientation of the sphere, which is in fact a TVTK Actor object, as I am using this method to apply the texture. Is there a way, or will I have to find another way to apply the texture to the sphere and then rotate it? Thank you for the help. I am also open to using another Python library that can help me achieve this.
Since the sphere is already a TVTK Actor, one way is to rotate the actor itself each frame rather than going through a transform filter: vtkProp3D, which Actor inherits from, exposes rotate_x/rotate_y/rotate_z, and mlab.animate can drive the update loop. A minimal sketch (the textured-sphere setup here stands in for the actor you already have):
from mayavi import mlab
from tvtk.api import tvtk

fig = mlab.figure()
# a textured sphere source -> mapper -> actor, added to the scene
source = tvtk.TexturedSphereSource(radius=1.0, theta_resolution=90, phi_resolution=90)
mapper = tvtk.PolyDataMapper(input_connection=source.output_port)
actor = tvtk.Actor(mapper=mapper)
fig.scene.add_actor(actor)

@mlab.animate(delay=50)
def anim():
    while True:
        actor.rotate_z(2.0)  # spin 2 degrees about the z axis each frame
        fig.scene.render()
        yield

anim()
mlab.show()

Select remaining points after cropping a point cloud

I am currently facing a problem regarding point cloud cropping.
More specifically, I already know how to crop a point cloud with Open3D, a package for point cloud processing. There are several ways to do it, for example:
# lift the 2D polygon vertices (one per row of camView) to 3D by appending z = 0
newCamView = np.hstack((camView, np.zeros(camView.shape[0]).reshape(-1, 1)))
vol = o3d.visualization.SelectionPolygonVolume()
vol.bounding_polygon = o3d.utility.Vector3dVector(newCamView)
vol.orthogonal_axis = "Z"
vol.axis_max = 10
vol.axis_min = -10
pcd_cropped = vol.crop_point_cloud(pcd_raw)
pcd_final = np.asarray(np.hstack((pcd_cropped.points,pcd_cropped.colors)))
But in the context of my problem, I also need to extract the points outside the volume of interest, and even after studying the Open3D documentation and searching the internet I can't find an answer.
I would appreciate help either with inverting the selection for a given cropping method, or with extracting the indices of the points that lie within the bounding volume, so that I can use select_by_index from o3d.geometry.PointCloud to get both inliers and outliers.
You can use the Point Cloud Distance for this task. The following code should give you the points outside the crop:
dists = np.asarray(pcd_raw.compute_point_cloud_distance(pcd_cropped))
indices = np.where(dists > 0.00001)[0]  # points farther than the tolerance from the crop are outside it
pcd_cropped_inv = pcd_raw.select_by_index(indices)
Another method to crop a point cloud in Open3D is to use a bounding-box object (so this method only supports rectangular shapes, not polygon-based cropping).
Let's create an arbitrary bounding box with its center at the origin, edge lengths below 1 unit, and rotation R:
R = np.identity(3)
extent = np.ones(3) / 1.5  # edge lengths below 1 unit
center = np.zeros(3)
obb = o3d.geometry.OrientedBoundingBox(center, R, extent)  # or use the axis-aligned bounding box class
Now you can crop your point cloud (pcd) by:
cropped = pcd.crop(obb)
o3d.visualization.draw_geometries([cropped]) #press ESC to close
To get indices of points inside this bounding box:
inliers_indices = obb.get_point_indices_within_bounding_box(pcd.points)
inliers_pcd = pcd.select_by_index(inliers_indices, invert=False) # select inside points = cropped
outliers_pcd = pcd.select_by_index(inliers_indices, invert=True) #select outside points
o3d.visualization.draw_geometries([outliers_pcd])
If you already know the boundaries you want to crop to, you can create a bounding box as above and crop. Alternatively, if you want to crop with respect to the bounding box of another point cloud or object, compute that object's bounding box (if you know its pose, transform it first) and use it to crop the larger point cloud. To get the bounding box of a point cloud:
obb = pcd.get_oriented_bounding_box(robust=False)  # set robust=True for more robust computation
aabb = pcd.get_axis_aligned_bounding_box()
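Putting the pieces together, a small self-contained sketch (a random cloud stands in for a real scan; all calls are the ones shown above plus the PointCloud and Vector3dVector constructors from the question):
import numpy as np
import open3d as o3d

pts = np.random.uniform(-1.0, 1.0, size=(1000, 3))
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)

obb = o3d.geometry.OrientedBoundingBox(np.zeros(3), np.identity(3), np.ones(3) / 1.5)
inliers_indices = obb.get_point_indices_within_bounding_box(pcd.points)
inside = pcd.select_by_index(inliers_indices)                # same points as pcd.crop(obb)
outside = pcd.select_by_index(inliers_indices, invert=True)  # the inverted selection
assert len(inside.points) + len(outside.points) == len(pcd.points)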

How can a choropleth map be combined with a shaded raster in Python?

I want to plot characteristics of areas on a map, but with very uneven population density, the larger tiles misleadingly attract attention. Think of averages (of test scores, say) by ZIP codes.
High-resolution maps are available to separate inhabited locales and even density within them. The Python code below does produce a raster colored according to the average such density for every pixel.
However, what I really need is coloring taken from a choropleth map of the same area (ZIP codes of Hungary, in this case), with the coloring applied only to points that would show up on the raster anyway; the raster would then only determine the gamma of each pixel (or perhaps its height in some 3D analog). What is a good way to go about this?
A rasterio.mask.mask somehow?
(By the way, an overlay with the ZIP code boundaries would also be nice, but I have a better understanding of how that could work with GeoViews.)
import rasterio
import os
import datashader as ds
from datashader import transfer_functions as tf
import xarray as xr
from matplotlib.cm import viridis
# download a GeoTIFF from this location: https://data.humdata.org/dataset/hungary-high-resolution-population-density-maps-demographic-estimates
data_path = '~/Downloads/'
file_name = 'HUN_youth_15_24.tif' # young people
file_path = os.path.join(data_path, file_name)
src = rasterio.open(file_path)
da = xr.open_rasterio(file_path)
cvs = ds.Canvas(plot_width=5120, plot_height=2880)
img = tf.shade(cvs.raster(da,layer=1), cmap=viridis)
ds.utils.export_image(img, "map", export_path=data_path, fmt=".png")
I am not sure I understand, so please tell me if I am mistaken. If I understood correctly, you can achieve what you want using NumPy alone (translating this to xarray should be easy):
# ---- snipped code already in the question -----
import numpy as np
import matplotlib.pyplot as plt

# fake a choropleth in a dirty, fast way
height, width = 2880, 5120
choropleth = np.empty((height, width, 3), dtype=np.uint8)
CHUNKS = 10
x_size = width // CHUNKS
y_size = height // CHUNKS
for x_step, x in enumerate(range(0, width, x_size)):
    for y_step, y in enumerate(range(0, height, y_size)):
        choropleth[y: y + y_size, x: x + x_size] = (255 - x_step * 255 // CHUNKS,
                                                    0, y_step * 255 // CHUNKS)
plt.figure("Fake Choropleth")
plt.imshow(choropleth)

# Option 1: play with alpha only
# (img is assumed here to be a 2D uint8 array of pixel intensities)
outimage = np.empty((height, width, 4), dtype=np.uint8)  # RGBA image
outimage[:, :, 3] = img          # Set alpha channel
outimage[:, :, :3] = choropleth  # Set color
plt.figure("Alpha filter only")
plt.imshow(outimage)

# Option 2: clear the empty points
outimage[img == 0, :3] = 0  # black; use 255 for white
plt.figure("Points erased")
plt.imshow(outimage[:, :, :3])  # change to 'outimage' to see the image with alpha
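One caveat: img as produced by tf.shade in the question is an RGBA datashader Image, not a 2D intensity array. A possible bridge to the 2D array assumed above, using datashader's Image.to_pil():
alpha = np.asarray(img.to_pil())[:, :, 3]  # alpha channel of the shaded image, uint8, height x width
outimage[:, :, 3] = alpha                  # use this in place of img above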
Results (images): dummy choropleth; alpha-filtered figure; black background with no alpha filter.
Note that the images might seem different because of matplotlib's antialiasing.
Datashader lets you combine data of many types into a common raster shape, where you can do whatever masking or filtering you like using xarray operations based on NumPy. E.g. you can render the choropleth as polygons, then mask out uninhabited regions. How to normalize by area is up to you, and could get very complex, but it should be doable once you define precisely what you intend to do. See the transform code at https://examples.pyviz.org/nyc_taxi/nyc_taxi.html for examples of how to do this, as in:
import holoviews as hv
from holoviews.operation import datashader as hd  # imports assumed to match the linked example

def transform(overlay):
    picks = overlay.get(0).redim(pickup_x='x', pickup_y='y')
    drops = overlay.get(1).redim(dropoff_x='x', dropoff_y='y')
    pick_agg = picks.data.Count.data
    drop_agg = drops.data.Count.data
    more_picks = picks.clone(picks.data.where(pick_agg > drop_agg))
    more_drops = drops.clone(drops.data.where(drop_agg > pick_agg))
    return (hd.shade(more_drops, cmap=['lightcyan', "blue"]) *
            hd.shade(more_picks, cmap=['mistyrose', "red"]))

# df: the taxi-trips DataFrame from the linked example
picks = hv.Points(df, ['pickup_x', 'pickup_y'])
drops = hv.Points(df, ['dropoff_x', 'dropoff_y'])
((hd.rasterize(picks) * hd.rasterize(drops))).apply(transform).opts(
    bgcolor='white', xaxis=None, yaxis=None, width=900, height=500)
Here it's not really masking anything, but hopefully you can see how masking would work: just get some rasterized object, then do a mathematical operation with some other rasterized object. Here the steps are all done in a function using HoloViews objects so that you can have a live interactive plot. You would probably want to work out the approach first using the more basic code at datashader.org, where you only deal with xarray objects and not a HoloViews pipeline; you can then translate what you did for a single xarray into the HoloViews pipeline, which allows full interactive usage with pan, zoom, axes, etc.
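To make the masking idea concrete without the HoloViews pipeline, a minimal xarray sketch (the array names and shapes are placeholders, not from the linked example):
import numpy as np
import xarray as xr

# density: the population raster (e.g. the result of cvs.raster(da, layer=1))
# choro:   a choropleth rasterized onto the same canvas
density = xr.DataArray(np.random.rand(100, 100))
choro = xr.DataArray(np.random.rand(100, 100))

masked = choro.where(density > 0)  # keep choropleth values only where people live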

Show grid lines over image in pyqtgraph

I'm drawing an image in pyqtgraph, and I'd like to be able to see the grid lines. But the grid lines are always drawn underneath the image, so any black areas of the image obscure the grid. Here's a fairly minimal example:
import matplotlib # necessary for interactive plots in pyqtgraph
import pyqtgraph as pg
import numpy as np
n = 100000
sigma_y = 1e-3
sigma_x = 1e-3
x0 = np.matrix([np.random.normal(0, sigma_x, n), np.random.normal(0, sigma_y, n)])
bins = 30
histogram, x_edges, y_edges = np.histogram2d(np.asarray(x0)[0], np.asarray(x0)[1], bins)
x_range = x_edges[-1] - x_edges[0]
y_range = y_edges[-1] - y_edges[0]
imv = pg.ImageView(view=pg.PlotItem())
imv.show()
imv.setPredefinedGradient('thermal')
imv.getView().showGrid(True, True)
imv.setImage(histogram, pos=(x_edges[0], y_edges[0]), scale=(x_range / bins, y_range / bins))
Here's what I see (after zooming out a little). You can see that the black area of the image obscures the grid lines.
EDIT: it's possible in the GUI to change the black colour to transparent (not my first choice, but an OK workaround for now), so you can see the grid below the image. That works OK but I can't figure out how to do it in code. How do I get the lookup table out of the ImageView to modify it?
Here is what I did.
glw = pyqtgraph.GraphicsLayoutWidget()
pw = glw.addPlot(0, 0)
# Fix axis ticks and grid (grid_is_visible / grid_opacity are your own settings)
for key in pw.axes:
    ax = pw.getAxis(key)
    # Set the grid opacity
    if grid_is_visible:
        ax.setGrid(grid_opacity * 255)
    else:
        ax.setGrid(False)
    # Fix the Z value, putting the grid on top of the image
    ax.setZValue(1)
This did cause another issue, I think. It may have been with the context menu, or with panning and zooming, because of how Qt was signaling the events: one axis got event priority and prevented the event from propagating, so the other axes could not pan and zoom. I submitted a pull request to pyqtgraph, so that might not be an issue anymore; I can't remember what caused the problem. It may work just fine for you. I was also doing a lot of other things, like changing the background color and the viewbox background color, which caused some small issues.
As a note, I also changed the image Z value, though you shouldn't have to:
imv.setZValue(1)
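Regarding the edit in the question (making black transparent in code rather than in the GUI): one approach, sketched under the assumption that ImageView forwards the histogram's gradient editor as imv.ui.histogram.gradient, is to restore a gradient state whose lowest tick has zero alpha:
from pyqtgraph.graphicsItems.GradientEditorItem import Gradients

# copy the predefined 'thermal' gradient but make its lowest color fully transparent
state = dict(Gradients['thermal'])
state['ticks'] = [(pos, (c[0], c[1], c[2], 0 if pos == 0 else c[3]))
                  for pos, c in state['ticks']]
imv.ui.histogram.gradient.restoreState(state)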

Dithering in JES/Jython

My goal is to dither an image in JES/Jython using the Floyd-Steinberg method. Here is what I have so far:
def Dither_RGB(Canvas):
    for Y in range(getHeight(Canvas)):
        for X in range(getWidth(Canvas)):
            P = getColor(Canvas, X, Y)
            E = getColor(Canvas, X+1, Y)
            SW = getColor(Canvas, X-1, Y+1)
            S = getColor(Canvas, X, Y+1)
            SE = getColor(Canvas, X+1, Y+1)
    return
The goal of the above code is to scan through the image's pixels and process the neighboring pixels needed for Floyd-Steinberg.
What I'm having trouble understanding is how to go about calculating and distributing the differences in R,G,B between the old pixel and the new pixel.
Anything that could point me in the right direction would be greatly appreciated.
I don't know anything about the method you are trying to implement, but as for the rest: assuming Canvas is of type Picture, you can't get the color directly that way. The color of a pixel is obtained from a variable of type Pixel.
Example: here is a procedure that gets the color of each pixel of an image and assigns it to the pixel at the same position in a new picture:
def copy(old_picture):
    # Create the picture to be returned, of exactly the same size as the source
    new_picture = makeEmptyPicture(old_picture.getWidth(), old_picture.getHeight())
    # Copy pixel by pixel
    for x in xrange(old_picture.getWidth()):
        for y in xrange(old_picture.getHeight()):
            # Get the source pixel at (x,y)
            old_pixel = getPixel(old_picture, x, y)
            # Get the pixel at (x,y) in the new picture,
            # which remains blank until you assign it a color
            new_pixel = getPixel(new_picture, x, y)
            # Grab the color of the source pixel
            # and assign it to the new picture
            setColor(new_pixel, getColor(old_pixel))
    return new_picture
file = pickAFile()
old_pic = makePicture(file)
new_pic = copy(old_pic)
Note: the example above applies only if you want to work on a new picture without modifying the old one. If your algorithm requires modifying the old picture on the fly, apply the final setColor directly to the original pixel (no need for a new picture, nor for the return statement).
Starting from here, you can compute anything you want by manipulating the RGB values of a pixel: use the setRed(), setGreen() and setBlue() functions applied to a Pixel, or build a color with col = makeColor(red_val, green_val, blue_val) and apply it with setColor(a_pixel, col).
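For the Floyd-Steinberg part itself, here is a sketch in JES style (untested; it quantizes each channel to 0 or 255 and pushes the quantization error onto the E, SW, S and SE neighbors from your code, with the usual 7/16, 3/16, 5/16 and 1/16 weights):
def diffuse(picture, x, y, err_r, err_g, err_b, weight):
    # add a weighted share of the error to the pixel at (x, y), clamped to [0, 255]
    if x < 0 or x >= getWidth(picture) or y >= getHeight(picture):
        return
    px = getPixel(picture, x, y)
    setRed(px, max(0, min(255, getRed(px) + int(err_r * weight))))
    setGreen(px, max(0, min(255, getGreen(px) + int(err_g * weight))))
    setBlue(px, max(0, min(255, getBlue(px) + int(err_b * weight))))

def dither_rgb(picture):
    for y in range(getHeight(picture)):
        for x in range(getWidth(picture)):
            px = getPixel(picture, x, y)
            old_r, old_g, old_b = getRed(px), getGreen(px), getBlue(px)
            # quantize each channel to the nearer of 0 and 255
            new_r = 255 if old_r > 127 else 0
            new_g = 255 if old_g > 127 else 0
            new_b = 255 if old_b > 127 else 0
            setColor(px, makeColor(new_r, new_g, new_b))
            err_r, err_g, err_b = old_r - new_r, old_g - new_g, old_b - new_b
            diffuse(picture, x + 1, y,     err_r, err_g, err_b, 7.0 / 16)
            diffuse(picture, x - 1, y + 1, err_r, err_g, err_b, 3.0 / 16)
            diffuse(picture, x,     y + 1, err_r, err_g, err_b, 5.0 / 16)
            diffuse(picture, x + 1, y + 1, err_r, err_g, err_b, 1.0 / 16)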
