I used the following code to get the 3D depth projection of the two images shown. I need the maximum and minimum depth values, and the (x, y) coordinates of these maximum and minimum values.
Is there a function/method from which I can get this information, even if it uses a library other than matplotlib?
import cv2
import numpy as np
import math
import scipy.ndimage as ndimage
from matplotlib import pyplot as plt
from matplotlib import cm  # needed for cm.jet below
from mpl_toolkits.mplot3d import Axes3D
image2=cv2.imread('D:/Post_Grad/STDF/iPython_notebooks/2228.jpg')
image2 = image2[:,:,1] # take a single channel (index 1 = green in OpenCV's BGR order)
rows, cols = image2.shape
x, y= np.meshgrid(range(cols), range(rows)[::-1])
blurred = ndimage.gaussian_filter(image2,(5, 5))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(221)
ax.imshow(image2, cmap='gray')
ax = fig.add_subplot(222, projection='3d')
ax.elev= 5
f1=ax.plot_surface(x,y,image2, cmap=cm.jet)
ax = fig.add_subplot(223)
ax.imshow(blurred, cmap='gray')
ax = fig.add_subplot(224, projection='3d')
ax.elev= 5
f2=ax.plot_surface(x,y,blurred, cmap=cm.jet)
plt.show()
The max depth and min depth are just the maximum and minimum pixel values of the image, and you can easily find those via np.max(image2), np.min(image2), etc.
The coordinates can then be found with a simple function:
def getCoord(image, val):
    coords = []
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            if image[i][j] == val:
                coords.append([i, j])
    return coords
So getCoord(image2, np.max(image2)) will return the coordinates of all highest-valued pixels in image2 (there can be more than one), getCoord(blurred, np.min(blurred)) will return the coordinates of all lowest-valued pixels in blurred, and so on. Note that the coordinates come back as [row, column] pairs, i.e. [y, x].
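For larger images, a vectorized alternative (my own sketch, using only standard NumPy calls) avoids the Python loops:
import numpy as np

# All (row, col) coordinates of the maximum / minimum values
max_coords = np.argwhere(image2 == np.max(image2))
min_coords = np.argwhere(blurred == np.min(blurred))

# Or, for a single location, unravel the flat index of the first occurrence
row, col = np.unravel_index(np.argmax(image2), image2.shape)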
Related
I'm trying to plot a series of frequency spectra in a 3D space using PolyCollection. My goal is to set "facecolors" as a gradient, i.e., the higher the magnitude, the lighter the color.
Please see this image for reference (I am not looking for the fancy design, just the gradients).
I tried to use the cmap argument of the PolyCollection, but I was unsuccessful.
I came this far with the following code adapted from here:
import matplotlib.pyplot as plt
from matplotlib.collections import PolyCollection
from mpl_toolkits.mplot3d import axes3d
import numpy as np
from scipy.ndimage import gaussian_filter1d
def plot_poly(magnitudes):
    freq_data = np.arange(magnitudes.shape[0])[:, None] * np.ones(magnitudes.shape[1])[None, :]
    mag_data = magnitudes
    rad_data = np.linspace(1, magnitudes.shape[1], magnitudes.shape[1])
    verts = []
    for irad in range(len(rad_data)):
        xs = np.concatenate([[freq_data[0, irad]], freq_data[:, irad], [freq_data[-1, irad]]])
        ys = np.concatenate([[0], mag_data[:, irad], [0]])
        verts.append(list(zip(xs, ys)))
    poly = PolyCollection(verts, edgecolor='white', linewidths=0.5, cmap='Greys')
    poly.set_alpha(.7)

    fig = plt.figure(figsize=(24, 16))
    ax = fig.add_subplot(111, projection='3d', proj_type='ortho')
    ax.add_collection3d(poly, zs=rad_data, zdir='y')
    ax.set_xlim3d(freq_data.min(), freq_data.max())
    ax.set_xlabel('Frequency')
    ax.set_ylim3d(rad_data.min(), rad_data.max())
    ax.set_ylabel('Measurement')
    ax.set_zlabel('Magnitude')

    # Remove gray panes and axis grid
    ax.xaxis.pane.fill = False
    ax.xaxis.pane.set_edgecolor('white')
    ax.yaxis.pane.fill = False
    ax.yaxis.pane.set_edgecolor('white')
    ax.zaxis.pane.fill = False
    ax.zaxis.pane.set_edgecolor('white')
    ax.view_init(50, -60)

    plt.show()
sample_data = np.random.rand(2205, 4)
sample_data = gaussian_filter1d(sample_data, sigma=10, axis=0)  # just to smooth the curves
plot_poly(sample_data)
Apart from the missing gradients, I am happy with the output of the code above.
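One idea to try (my own sketch, not a confirmed fix): each polygon can take its own facecolor, so a scalar per spectrum, e.g. its peak magnitude, can be mapped through the 'Greys' colormap explicitly. This gives one shade per polygon rather than a true within-polygon gradient; mag_data and verts refer to the variables inside plot_poly:
# Inside plot_poly, replace the PolyCollection creation with something like:
norm = plt.Normalize(mag_data.min(), mag_data.max())
shades = plt.get_cmap('Greys')(norm(mag_data.max(axis=0)))  # one RGBA per polygon
poly = PolyCollection(verts, facecolors=shades, edgecolor='white', linewidths=0.5)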
Let's say I have the following dataset:
import numpy as np
import matplotlib.pyplot as plt
x_bins = np.arange(10)
y_bins = np.arange(10)
z = np.random.random((9,9))
I can easily plot this data with
plt.pcolormesh(x_bins, y_bins, z, cmap='viridis')
However, let's say I now add some alpha value for each point:
a = np.random.random((9,9))
How can I change the alpha value of each box in the pcolormesh plot to match the corresponding value in array "a"?
The mesh created by pcolormesh can only have one alpha value for the complete mesh. To set an individual alpha for each cell, the cells need to be created one by one as rectangles.
The code below shows the pcolormesh without alpha at the left, and the mesh of rectangles with alpha at the right. Note that where the rectangles touch, the semi-transparency causes some uneven overlap. This can be mitigated by not drawing the cell edges (edgecolor='none'), or by drawing black lines to separate the cells (see the commented-out vlines/hlines calls).
The code below uses a different size for the x dimension, to make it easier to verify that x and y aren't mixed up. relim and autoscale are needed because, with matplotlib's default behavior, the x and y limits aren't changed by adding patches.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

x_bins = np.arange(12)
y_bins = np.arange(10)
z = np.random.random((9, 11))
a = np.random.random((9, 11))

cmap = plt.get_cmap('inferno')
norm = plt.Normalize(z.min(), z.max())
fig, (ax1, ax2) = plt.subplots(ncols=2)
ax1.pcolormesh(x_bins, y_bins, z, cmap=cmap, norm=norm)
for i in range(len(x_bins) - 1):
    for j in range(len(y_bins) - 1):
        rect = Rectangle((x_bins[i], y_bins[j]), x_bins[i + 1] - x_bins[i], y_bins[j + 1] - y_bins[j],
                         facecolor=cmap(norm(z[j, i])), alpha=a[j, i], edgecolor='none')
        ax2.add_patch(rect)
# ax2.vlines(x_bins, y_bins.min(), y_bins.max(), colors='black')
# ax2.hlines(y_bins, x_bins.min(), x_bins.max(), colors='black')
ax2.relim()
ax2.autoscale(enable=True, tight=True)
plt.show()
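If the bins are equally spaced, a simpler alternative (my own sketch, not part of the original answer) is to build an RGBA image whose alpha channel comes from a and display it with imshow:
# Sketch: per-cell alpha via an RGBA image (assumes equally spaced bins)
rgba = cmap(norm(z))  # shape (9, 11, 4)
rgba[..., 3] = a      # per-cell alpha
fig2, ax3 = plt.subplots()
ax3.imshow(rgba, origin='lower', aspect='auto',
           extent=[x_bins[0], x_bins[-1], y_bins[0], y_bins[-1]])
plt.show()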
I am trying to get an elevation for each pixel in the image using image processing in Python. My first try converts the image to grayscale and renders the 2D image as a 3D surface with the following code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import imread
import scipy.ndimage as ndimage  # needed for gaussian_filter below

imageFile = r'D:\Books\Pav Man\PICS\pic (17) - Copy.png'
mat = imread(imageFile)
mat = mat[:,:,0] # get the first channel
#mat = mat - np.full_like(mat , mat.mean()) #Use this to get negative value
rows, cols = mat.shape
xv, yv = np.meshgrid(range(cols), range(rows)[::-1])
blurred = ndimage.gaussian_filter(mat, sigma=(5, 5), order=0)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(221)
ax.imshow(mat, cmap='gray')
ax = fig.add_subplot(222, projection='3d')
ax.elev= 75
ax.plot_surface(xv, yv, mat)
ax = fig.add_subplot(223)
ax.imshow(blurred, cmap='gray')
ax = fig.add_subplot(224, projection='3d')
ax.elev= 75
ax.plot_surface(xv, yv, blurred)
plt.show()
mat contains the z value for each (x, y) pixel: x is the width (column) coordinate, y is the height (row) coordinate, and z is the grayscale value, which ranges from 0 to 1. But z is just brightness; it does not include the real elevation.
The second try uses depth data from two images, as described in the following link:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html
But there is no clear way to estimate or predict the elevation of points in the image.
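For reference, the core of the linked tutorial computes a disparity map with OpenCV's block matcher. A minimal sketch, where 'left.jpg' and 'right.jpg' are hypothetical stereo views of the same scene:
import cv2

imgL = cv2.imread('left.jpg', 0)   # left view, grayscale
imgR = cv2.imread('right.jpg', 0)  # right view, grayscale
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)  # larger disparity = closer to the camera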
The following picture describes what I mean:
[picture illustrating the desired elevation profile]
My question is: how can I get the elevation of each point in the image in order to create a topographic profile?
I am working on a project to estimate the road profile (elevation) from 2D images captured from the same height above the road surface and with the same orientation (angle). The following code is my first try:
import numpy as np
from matplotlib.pyplot import imread
import scipy.ndimage as ndimage
import matplotlib.pyplot as plt
imageFile = r'D:\Books\Pav Man\PICS\pic (17) - Copy.png'
mat = imread(imageFile)
mat = mat[:,:,0] # get the first channel
#mat = mat - np.full_like(mat , mat.mean()) #Use this to get negative value
rows, cols = mat.shape
xv, yv = np.meshgrid(range(cols), range(rows)[::-1])
blurred = ndimage.gaussian_filter(mat, sigma=(5, 5), order=0)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(221)
ax.imshow(mat, cmap='gray')
ax = fig.add_subplot(222, projection='3d')
ax.elev= 75
ax.plot_surface(xv, yv, mat)
ax = fig.add_subplot(223)
ax.imshow(blurred, cmap='gray')
ax = fig.add_subplot(224, projection='3d')
ax.elev= 75
ax.plot_surface(xv, yv, blurred)
plt.show()
From this code I can show the image in grayscale and then read the color value (0-255) for each pixel in the image.
Suppose I know the average Z elevation for a number of pixels with a given color value. Is it correct to estimate Z (elevation) by calibration? For example:
Pixel (x=20, y=100) has color value 150, and the elevation of that point on the real road is 20 cm; I then consider every pixel in this image with color value 150 to have an elevation of 20 cm in the real world.
And if this estimation is correct, can I apply it to different images captured from the same height above the surface and with the same orientation?
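A minimal sketch of that calibration idea (my own illustration; gray_ref and elev_ref are hypothetical calibration pairs measured on the road):
import numpy as np

# Hypothetical calibration pairs: gray value -> measured elevation (cm)
gray_ref = np.array([50, 100, 150, 200])
elev_ref = np.array([5.0, 12.0, 20.0, 31.0])

# mat from the code above holds floats in [0, 1]; scale to 0-255 first
elevation_map = np.interp(mat * 255, gray_ref, elev_ref)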
I have a collection of (sparse) data that has temperature measurements. With a heatmap, areas that have more observations show a higher value because the heatmap accumulates the values.
Is there a way to get more of an average as opposed to a sum, but still with the feel of Gaussian filtering? Where a region has no data, a 0 value would be preferred (which would be transparent).
If you would like to apply a Gaussian filter, see scipy.ndimage.gaussian_filter.
Here's an example:
import matplotlib.pyplot as plt
import numpy as np
import scipy.ndimage

fig = plt.figure()

# Random example data with some values set to 0
im = np.random.random((10, 10))
im[im < 0.3] = 0

# Smooth image
smoothed_im = scipy.ndimage.gaussian_filter(im, sigma=1)

im[im == 0] = np.nan  # NaN cells are drawn as blank (transparent)
plt.imshow(im, interpolation="nearest")
plt.title("Original image")
plt.colorbar()

plt.figure()
plt.imshow(smoothed_im, interpolation="nearest")
plt.title("Smoothed image")
plt.colorbar()

# Blank elements that were originally 0
smoothed_im[np.isnan(im)] = np.nan
plt.figure()
plt.imshow(smoothed_im, interpolation="nearest")
plt.title("Smoothed image with original zeros blanked")
plt.colorbar()
plt.show()
This produces three figures: the original image (with zeros blanked), the smoothed image, and the smoothed image with the originally zero-valued cells blanked.
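To get an average rather than a sum, as asked above (my own addition, not part of the original answer), a common trick is normalized convolution: Gaussian-filter both the values and a mask of where observations exist, then divide:
# Sketch: averaged (normalized-convolution) smoothing of sparse data
mask = ~np.isnan(im)                  # True where observations exist
filled = np.where(mask, im, 0.0)      # zeros where data is missing
num = scipy.ndimage.gaussian_filter(filled, sigma=1)              # smoothed sum
den = scipy.ndimage.gaussian_filter(mask.astype(float), sigma=1)  # smoothed count
avg = np.full_like(num, np.nan)       # NaN (transparent) where no data nearby
np.divide(num, den, out=avg, where=den > 1e-6)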