Convert XYZ point cloud to grayscale image - python

Hi everyone,
I'm trying to convert a point cloud (X, Y, Z) to a grayscale image using Python. I learned that a grayscale image can be generated from a NumPy array, but what I have is a set of points containing X, Y and height. I want to generate a grayscale image from X and Y, using the height as the grayscale value.
Can someone give me an idea of how to approach this?
Thanks in advance.
Rowen

Thanks, guys. I just finished writing my own code to do the interpolation, but the idea came from your answers. Thank you to @asaflotz and @Paul Panzer.
The thing is, in my scenario the points in the point cloud are not arranged on a regular grid: the intervals between neighboring points are not uniform, so a grid can't be used directly. I therefore picked an unstructured method from scipy.interpolate, which offers several practical routines for different use cases. My code below is a modified version of the example for scipy.interpolate.griddata.
import numpy as np
from scipy.interpolate import griddata
from PIL import Image

# regular grid spanning the extent of the scattered points, roughly one pixel per unit
x_range = df.X.max() - df.X.min()
y_range = df.Y.max() - df.Y.min()
grid_x, grid_y = np.mgrid[df.X.min():df.X.max():(x_range * 1j),
                          df.Y.min():df.Y.max():(y_range * 1j)]
# interpolate the scattered height values onto the grid
points = df[['X', 'Y']].values
values = df['new'].values
grid_z0 = griddata(points, values, (grid_x, grid_y), method='linear').astype(np.uint8)
# render the interpolated grid as an 8-bit grayscale image
im = Image.fromarray(grid_z0, 'L')
im.show()
Note that in griddata, methods such as 'linear', 'nearest' and 'cubic' can be applied depending on your scenario.
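For what it's worth, here is a minimal sketch (reusing the variables from the snippet above) of how the NaN cells that 'linear' or 'cubic' leave outside the convex hull of the input points could be filled from a 'nearest' interpolation before casting to uint8; the fill strategy is just one option:
grid_lin = griddata(points, values, (grid_x, grid_y), method='linear')
grid_near = griddata(points, values, (grid_x, grid_y), method='nearest')
# replace NaNs outside the convex hull with the nearest-neighbour values
grid_filled = np.where(np.isnan(grid_lin), grid_near, grid_lin)
im = Image.fromarray(grid_filled.astype(np.uint8), 'L')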
Here is the grayscale elevation image generated.
Lastly, my question has basically been solved. Please comment on this post if you have any good ideas or anything is unclear. Thanks all!
Rowen

Let's assume that X and Y are arranged so that they form a grid (which is mandatory in order to build a rectangular image). From there it is easy:
import numpy as np
import matplotlib.pyplot as plt

# generate some data on a regular grid
ax = np.arange(-9, 10)
X, Y = np.meshgrid(ax, ax)
Z = X ** 2 + Y ** 2

# normalize the data to 0-255 and convert to uint8 (grayscale conventions)
zNorm = (Z - Z.min()) / (Z.max() - Z.min()) * 255
zNormUint8 = zNorm.astype(np.uint8)

# plot the result as a grayscale image
plt.figure()
plt.imshow(zNormUint8, cmap='gray')
plt.show()
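If the goal is an actual image file rather than a plot, the same array can be handed to Pillow; a short sketch (the output filename is just a placeholder):
from PIL import Image
Image.fromarray(zNormUint8, mode='L').save('heightmap.png')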

Related

Converting matplotlib's streamplot coordinates to numpy coordinates

I'm currently working with matplotlib to plot a specific vector field using matplotlib.pyplot.streamplot.
After creating and coloring the lines in the streamplot, I'm trying to color the whitespace around the divergence points in the plot. I want a color gradient that depends on the distance of the white pixels from the divergence point.
The streamplot in question is built according to:
xs=np.linspace(-10,10,2000)
ys=np.linspace(-10,10,2000)
Therefore, if the divergence is located (for demonstration purposes) at (0, 0), it will sit exactly in the middle of the plot.
Now, the only method I can think of for coloring according to the distance from it is rather clunky, since it requires me to:
add a matplotlib.patches.Rectangle on top of the divergence point in a specific color that is not yet in the image;
convert the plot, with the streamlines and rectangles (one rectangle for each divergence point in the streamplot), to a np.array;
find the new coordinates of the rectangles' colors (they represent the location of the divergence points in the np.array created from the streamplot);
color the surrounding pixels the way I want, based on those marker pixels.
This whole method feels clunky and over-complicated, and obviously slower than it could be. I'm sure there is a way to convert coordinates from the matplotlib plot to indices in the np.array, or perhaps to handle the coloring in matplotlib itself, which would be even easier.
Sadly, I couldn't find a solution that answers this specific need yet.
Thanks in advance for any help!
EDIT
I'm adding an example (not my code, but a representation of what I wish to achieve).
I want to clarify that adding a patches.Circle on top of a circle patch is not my go-to solution, since I'm looking to keep my painting options more dynamic.
If you can define the color intensity you want as a 2-dimensional function, you can plot that function with plt.imshow() and then put the streamplot on top of it. You just need to transform the coordinates linearly to match the image coordinates.
Here is an example:
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 10]
# plot 2d function
grid = np.arange(-1, 1, 0.001)
x, y = np.meshgrid(grid, grid)
z = 1 - (x ** 2 + y ** 2) ** 0.5
plt.imshow(z, cmap='Blues')
# streamplot example from matplotlib docs (modified)
w = 3
Y, X = np.mgrid[-w:w:100j, -w:w:100j]
U = Y ** 2
V = X ** 2
# transform according to previous plot
n = len(grid) / 2
scale = n / w
X = (X + w) * scale
Y = (Y + w) * scale
U = (U + w) * scale
V = (V + w) * scale
plt.xticks(ticks=[0, n, 2 * n], labels=[-w, 0, w])
plt.yticks(ticks=[0, n, 2 * n], labels=[-w, 0, w])
plt.streamplot(X, Y, U, V)
plt.show()
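As a follow-up on the literal question of converting data coordinates to array indices: once the figure is rendered, matplotlib's ax.transData gives the pixel position of any data point, which can then be used to index the RGBA buffer. A minimal sketch, assuming the default Agg backend and no display scaling:
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)
fig.canvas.draw()

# data coordinates -> display (pixel) coordinates, origin at the lower left
px, py = ax.transData.transform((0.0, 0.0))

# the rendered buffer is indexed from the top row down, so flip the y axis
img = np.asarray(fig.canvas.buffer_rgba())
row = img.shape[0] - 1 - int(round(py))
col = int(round(px))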

Data analysis of a 3D form in python

My question must have been answered somewhere already, but I couldn't find it.
I have a binary NumPy 3D array (shape = (512, 512, 304)) that contains an arbitrary form (labelled as 1). Every other point is labelled as 0.
Let's take a sphere as a simple example.
I want to plot this form on a 3D plot where we can see the sphere.
I already tried matplotlib's 3D plotting but couldn't get the hang of it.
I used the interactive function from ipywidgets to print it slice by slice, but that's not effective.
I also want to calculate the volume of the form (it may be a completely arbitrary polyhedron).
I am looking for advice more than a ready-made answer.
Thanks in advance.
You can use voxels, although it will be very slow if you try to run it on an array that big. You can, for example, plot only 10% of the elements:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Make a sphere
x, y, z = np.ogrid[-1:1:512j, -1:1:512j, -1:1:304j]
sphere = np.sqrt(x * x + y * y + z * z) < 0.5
# Make 3D axis
ax = plt.figure().add_subplot(projection='3d')
# Make voxels figures at 10% resolution
ax.voxels(filled=sphere[::10, ::10, ::10])
ax.figure.show()
Output:
Another way to look at it is to plot it slice by slice using widgets:
from ipywidgets import interact, fixed, widgets
def show_axial(img_array, mr_slice):
    plt.imshow(img_array[:, :, mr_slice].T, cmap="gray")
interact(show_axial, img_array=fixed(im_arr),
         mr_slice=widgets.IntSlider(min=0, max=im_arr.shape[2] - 1))
That gives a 2D plot with a slider that ranges from 0 to the last slice index, so the volume can be viewed slice by slice.
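On the volume part of the question, which the answers above do not cover: with a binary mask, the volume is just the number of filled voxels times the volume of one voxel. A short sketch (the voxel spacing values are made-up placeholders you would take from your data):
import numpy as np

voxel_spacing = (0.5, 0.5, 1.0)              # hypothetical spacing per axis, e.g. in mm
voxel_volume = np.prod(voxel_spacing)        # volume of a single voxel
n_filled = np.count_nonzero(binary_volume)   # binary_volume is your (512, 512, 304) array
total_volume = n_filled * voxel_volume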

How to mask specific array data based on a shapefile

Here is my question:
the 2-D numpy array represents some property of each grid cell;
the shapefile is the administrative division of the study area (like a city).
For example:
http://i4.tietuku.com/84ea2afa5841517a.png
The whole area is a 40x40 grid, and I want to extract the data inside the purple area. In other words, I want to mask the data outside the administrative boundary to np.nan.
My early attempt
I labelled the grid numbers and set the selected array entries to np.nan:
http://i4.tietuku.com/523df4783bea00e2.png
value[0, :] = np.nan
value[1, :] = np.nan
...
Can someone show me an easier method to achieve this?
Added
Found an answer here which can plot the raster data over the shapefile, but the data itself doesn't change.
Update 2016-01-16
I have already solved this problem, inspired by some of the answers.
Anyone interested in this problem can check these two posts I asked:
1. Testing point with in/out of a vector shapefile
2. How to use set clipped path for Basemap polygon
The key step was to test whether each point lies inside or outside the shapefile, which I had already converted into a shapely Polygon.
Step 1. Rasterize shapefile
Create a function that can determine whether a point at coordinates (x, y) is inside the area or not. See here for more details on how to rasterize your shapefile into an array of the same dimensions as your target mask:
def point_is_in_mask(mask, point):
    # this is just pseudocode
    return mask.contains(point)
Step 2. Create your mask
mask = np.zeros((height, width))
value = np.zeros((height, width))
for y in range(height):
    for x in range(width):
        if not point_is_in_mask(mask, (x, y)):
            value[y][x] = np.nan
Best is to use matplotlib:
def outline_to_mask(line, x, y):
    """Create mask from outline contour

    Parameters
    ----------
    line: array-like (N, 2)
    x, y: 1-D grid coordinates (input for meshgrid)

    Returns
    -------
    mask : 2-D boolean array (True inside)
    """
    import matplotlib.path as mplp
    mpath = mplp.Path(line)
    X, Y = np.meshgrid(x, y)
    points = np.array((X.flatten(), Y.flatten())).T
    mask = mpath.contains_points(points).reshape(X.shape)
    return mask
Alternatively, you may use shapely's contains method, as suggested in the answer above. You can speed up the calculation by recursively sub-dividing the space, as indicated in this gist (but the matplotlib solution was 1.5 times faster in my tests):
https://gist.github.com/perrette/a78f99b76aed54b6babf3597e0b331f8
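For reference, a minimal sketch of the shapely variant mentioned above, assuming boundary is a shapely Polygon built from the shapefile and x, y are the grid coordinate arrays (both are assumptions, not part of the answers above):
from shapely.geometry import Point
import numpy as np

# set every cell whose centre falls outside the polygon to NaN
ny, nx = value.shape
for j in range(ny):
    for i in range(nx):
        if not boundary.contains(Point(x[i], y[j])):
            value[j, i] = np.nan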

Scale square matrix in geometrical sense using python

I have a matrix (ndarray) with real values that I want to scale in a geometric sense, that is, expand the matrix's size while keeping the values as similar as possible. It can be viewed as scaling an image.
But my matrix is NOT an image. I have real values ranging from 8,000 to 50,000, and as far as I know such values cannot represent anything from a usual image point of view.
I have searched the web for answers, but every answer suggested using PIL or similar image-processing libraries that work on standard pixel values and wouldn't accept my matrix.
So is there a way to scale a matrix containing arbitrary real numbers in the geometric (or image) sense?
Is there a Python library for that, a list comprehension of some kind, or something similar?
Thank you.
What you're describing is 2D interpolation. SciPy provides an implementation in scipy.interpolate.RectBivariateSpline:
import numpy as np
from scipy.interpolate import RectBivariateSpline

# sample data
data = np.random.rand(8, 4)
width, height = data.shape
xs = np.arange(width)
ys = np.arange(height)

# target size and interpolation locations
new_width, new_height = width * 2, height * 2
new_xs = np.linspace(0, width - 1, new_width)
new_ys = np.linspace(0, height - 1, new_height)

# create the spline object and use it to interpolate
# (pass kx=1, ky=1 for linear instead of cubic interpolation)
spline = RectBivariateSpline(xs, ys, data)
scaled = spline(new_xs, new_ys)
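If you only need the resized array rather than a reusable spline object, scipy.ndimage.zoom does the same kind of interpolation in one call; a short sketch, with the factor of 2 matching the example above:
from scipy.ndimage import zoom

scaled = zoom(data, 2, order=3)  # order=1 for linear, order=3 for cubic interpolation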

Apply Gaussian blur to an image using Python

I am trying to replicate the following smoothing of an image using a Gaussian filter (images from a journal):
The paper says that in order to get from the left image to the right image I have to apply a Gaussian filter with values x, y = 1,...,100 and sigma = 14 to obtain the "best results".
I have written the following Python program to try to achieve this smoothing:
import scipy.ndimage as ndimage
import matplotlib.pyplot as plt

# scipy.ndimage.imread has been removed from recent SciPy releases;
# matplotlib's imread (or imageio.imread) can load the image instead
img = plt.imread('left2.png')
img = ndimage.gaussian_filter(img, sigma=14, order=0)
plt.imshow(img)
plt.show()
For some reason the result obtained is not similar to the picture on the right.
Can someone please point out what I have to modify in the program to get from the left image to the right image?
Thank you.
I'm going to take a guess here:
Because they mention that their x and y values range from 0-100, they're probably applying a "sigma = 14 unit blur" instead of a "sigma = 14 pixel blur".
The sigma parameter in scipy.ndimage.gaussian_filter is in pixel units. If I'm correct about the author's intent, you'll need to scale the sigma parameter you pass in.
If the authors specified that both x and y range from 0-100, the sigma in the x and y directions will be different, as your input data appears to have a different number of rows than columns (i.e. it isn't a perfectly square image).
Perhaps try something similar to this?
nrows, ncols = img.shape
sigma = (14 * nrows / 100.0, 14 * ncols / 100.0)
img = ndimage.gaussian_filter(img, sigma=sigma)
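One caveat that is not from the answer itself: if the PNG loads with color or alpha channels, img will be 3-D and the two-value unpacking above will fail. A quick sketch of collapsing it to a single channel first:
if img.ndim == 3:                       # RGB or RGBA image
    img = img[..., :3].mean(axis=2)     # average the color channels to get one channel
nrows, ncols = img.shape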
