Estimate Z value from 2D image after converting to grayscale - python

I am working on a project to estimate the road profile (elevation) from 2D images captured at the same height above the road surface and with the same orientation (angle). The following code is my first try:
import numpy as np
from matplotlib.pyplot import imread
import scipy.ndimage as ndimage
import matplotlib.pyplot as plt
imageFile = r'D:\Books\Pav Man\PICS\pic (17) - Copy.png'  # raw string avoids backslash escapes
mat = imread(imageFile)  # matplotlib's imread takes no flag argument; PNGs load as floats in [0, 1]
mat = mat[:,:,0] # get the first channel
#mat = mat - np.full_like(mat , mat.mean()) #Use this to get negative value
rows, cols = mat.shape
xv, yv = np.meshgrid(range(cols), range(rows)[::-1])
blurred = ndimage.gaussian_filter(mat, sigma=(5, 5), order=0)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(221)
ax.imshow(mat, cmap='gray')
ax = fig.add_subplot(222, projection='3d')
ax.elev= 75
ax.plot_surface(xv, yv, mat)
ax = fig.add_subplot(223)
ax.imshow(blurred, cmap='gray')
ax = fig.add_subplot(224, projection='3d')
ax.elev= 75
ax.plot_surface(xv, yv, blurred)
plt.show()
From this code I can display the image in grayscale and read the color value of each pixel (matplotlib loads the PNG as floats in [0, 1], which scale to the familiar 0-255 range).
Given the average Z elevation for a number of pixels with a known color value, is it correct to estimate Z (elevation) by calibration? For example:
pixel (x=20, y=100) has color value 150, and the elevation at that point on the real road is 20 cm; I then assume every pixel in this image with color value 150 has a 20 cm elevation in the real world.
If this estimation is correct, can I apply it to different images taken from the same height above the surface and with the same orientation?
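To make the calibration idea concrete, a minimal sketch of the proposed lookup could look like this (the calibration pairs below are hypothetical examples, not measured values):
import numpy as np

# Hypothetical calibration pairs: (grayscale value, elevation in cm),
# measured at reference points on the real road surface
cal_values = np.array([50, 100, 150, 200])
cal_elevations = np.array([5.0, 12.0, 20.0, 28.0])

def estimate_elevation(gray):
    # Linear interpolation between the calibration pairs
    return np.interp(gray, cal_values, cal_elevations)

elevation_map = estimate_elevation(mat * 255)  # mat is in [0, 1] for PNGs read by matplotlib
Note that this encodes the assumption that brightness maps one-to-one to elevation; lighting and surface texture can easily break that.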

Related

matplotlib plot_surface 3D depth values

I used the following code to get the 3D depth projection of the two images shown. I need the maximum and minimum depth values, and the x and y coordinates of these max and min depth values.
Is there a function or method that can give me this information, even if it uses a library other than matplotlib?
import cv2
import numpy as np
import math
import scipy.ndimage as ndimage
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm  # needed for cm.jet below
image2=cv2.imread('D:/Post_Grad/STDF/iPython_notebooks/2228.jpg')
image2 = image2[:,:,1] # get the green channel (index 1; cv2 loads images as BGR)
rows, cols = image2.shape
x, y= np.meshgrid(range(cols), range(rows)[::-1])
blurred = ndimage.gaussian_filter(image2,(5, 5))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(221)
ax.imshow(image2, cmap='gray')
ax = fig.add_subplot(222, projection='3d')
ax.elev= 5
f1=ax.plot_surface(x,y,image2, cmap=cm.jet)
ax = fig.add_subplot(223)
ax.imshow(blurred, cmap='gray')
ax = fig.add_subplot(224, projection='3d')
ax.elev= 5
f2=ax.plot_surface(x,y,blurred, cmap=cm.jet)
plt.show()
The max depth and min depth are just the maximum and minimum pixel values of the image, and you can easily find them via np.max(image2), np.min(image2), etc.
The coordinates can be found with a simple function:
def getCoord(image, val):
    coords = []
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            if image[i][j] == val:
                coords.append([i, j])
    return coords
So getCoord(image2, np.max(image2)) will return the coordinates of all highest pixels in image2 (there can be more than one), getCoord(blurred, np.min(blurred)) will return the coordinates of all lowest pixels in blurred, etc.
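For larger images the same lookup can be done without explicit loops; a vectorized sketch using np.argwhere returns the same (row, column) pairs:
import numpy as np

# All (row, col) positions holding the extreme values
max_coords = np.argwhere(image2 == np.max(image2))    # highest pixels in image2
min_coords = np.argwhere(blurred == np.min(blurred))  # lowest pixels in blurred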

how to get the elevation of each point (pixel) in an image using python

I am trying to get an elevation for each pixel in the image using image processing in Python. My first try is converting the image to grayscale and turning the 2D image into a 3D surface with the following code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import imread
import scipy.ndimage as ndimage  # needed for gaussian_filter below
imageFile = r'D:\Books\Pav Man\PICS\pic (17) - Copy.png'  # raw string avoids backslash escapes
mat = imread(imageFile)
mat = mat[:,:,0] # get the first channel
#mat = mat - np.full_like(mat , mat.mean()) #Use this to get negative value
rows, cols = mat.shape
xv, yv = np.meshgrid(range(cols), range(rows)[::-1])
blurred = ndimage.gaussian_filter(mat, sigma=(5, 5), order=0)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(221)
ax.imshow(mat, cmap='gray')
ax = fig.add_subplot(222, projection='3d')
ax.elev= 75
ax.plot_surface(xv, yv, mat)
ax = fig.add_subplot(223)
ax.imshow(blurred, cmap='gray')
ax = fig.add_subplot(224, projection='3d')
ax.elev= 75
ax.plot_surface(xv, yv, blurred)
plt.show()
mat holds the z value of each pixel, while xv and yv hold the x (width) and y (height) coordinates; z is the grayscale value, which ranges from 0 to 1, but it does not represent the real elevation.
My second try uses depth data from two images, as described in the following link:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html
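For reference, the core of that tutorial is a block-matching disparity computation, roughly like this (a sketch assuming two rectified grayscale views; left.png and right.png are hypothetical file names):
import cv2

# Rectified left/right views of the same scene
imgL = cv2.imread('left.png', 0)
imgR = cv2.imread('right.png', 0)

# Disparity is inversely proportional to depth, so camera calibration
# is still needed to turn it into a metric elevation
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)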
But there is no clear way to turn that into an estimate of the elevation of points in the image.
The following picture describes what I mean:
[picture omitted]
My question is: how can I get the elevation of each point in the image in order to create a topographic profile?

Add color scale to matplotlib colorbar according to RGBA image channels

I am trying to plot an RGBA image with a colorbar representing the color values.
The RGBA image is generated from raw data by transforming the 2D data array into a 3D array holding x, y and the [R, G, B, A] channels according to the chosen color. E.g. 'green' fills just the G channel with the values from the 2D array, leaving R and B at 0 and A at 255.
All solutions I found would apply a colormap or limit the vmin and vmax of the colorbar, but what I need is a colorbar that goes from pitch black to the brightest color present in the image. E.g. if I have an image in shades of purple, the colorbar should go from 0 to 'full' purple with only shades of purple in it. The closest solution I found was this (https://pelson.github.io/2013/working_with_colors_in_matplotlib/), but it doesn't fit as a general solution.
The image I'm getting is shown below.
import numpy as np
from ImgMath import colorize
import matplotlib.pyplot as plt
import Mapping
data = Mapping.getpeakmap('Au')
# data shape is (10,13) and len(data) is 10
norm_data = data/data.max()*255
color_data = colorize(norm_data,'green')
# color_data shape is (10,13,4) and len(color_data) is 10
fig, ax = plt.subplots()
im = plt.imshow(color_data)
fig.colorbar(im)
plt.show()
You could map your data with a custom all-green colormap:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
# input 2D array
data = np.random.randint(0,255, size=(10,13))
z = np.zeros(256)              # red and blue channels stay at zero
colors = np.linspace(0,1,256)  # green ramps from black to full green
alpha = np.ones(256)           # fully opaque
#create colormap
greencolors = np.c_[z,colors,z,alpha]
cmap = ListedColormap(greencolors)
im = plt.imshow(data/255., cmap=cmap, vmin=0, vmax=1)
plt.colorbar(im)
plt.show()
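The same idea generalizes to any target color with LinearSegmentedColormap.from_list; a sketch for the purple example (with 'purple' standing in for whatever the brightest color in your image is):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

data = np.random.randint(0,255, size=(10,13))  # stand-in 2D array, as above
# Colormap running from pitch black to the target color
cmap = LinearSegmentedColormap.from_list('black_to_purple', ['black', 'purple'])
im = plt.imshow(data/255., cmap=cmap, vmin=0, vmax=1)
plt.colorbar(im)
plt.show()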

Combine picture and plot with matplotlib with alpha channel

I have a .png image with an alpha channel and a random pattern generated with numpy.
I want to superimpose both images using matplotlib. The bottom image must be the random pattern, and over this I want to see the second image (attached at the end of the post).
The code for both images is the following:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# Random image pattern
fig, ax = plt.subplots(figsize=(20,4))  # plt.subplots returns a (figure, axes) pair
x = np.arange(0,2000,1)
y = np.arange(0,284,1)
X,Y = np.meshgrid(x,y)
Z = 0.6+0.1*np.random.rand(284,2000)
Z[0,0] = 0
Z[1,1] = 1
# Plot the density map using nearest-neighbor interpolation
plt.pcolormesh(X,Y,Z,cmap = cm.gray)
The result is the following image:
To import the image, I use the following code:
# Sample data
fig, ax = plt.subplots(figsize=(20,4))
# Load and display the image
img = plt.imread("good_image_2.png")  # assign the result so it can be shown
plt.imshow(img)
print(img.shape)
The image is the following:
Thus, the final result that I want is:
You can make an image-like array for Z and then just use imshow to display it before the image of the buttons, etc. Note that this only works because your png has an alpha channel.
Code:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# Plot the density map using nearest-neighbor interpolation
img = plt.imread("image.png")
rows, cols, channels = img.shape  # img.shape is (rows, cols, channels)
# Random pattern on the same pixel grid as the image
Z = 0.6+0.1*np.random.rand(rows, cols)
Z[0,0] = 0  # pin the extremes so the pattern normalizes to the full gray range
Z[1,1] = 1
# We need Z to have red, green and blue channels;
# for a greyscale image these are all the same
Z = np.repeat(Z,3).reshape(rows, cols, 3)
fig = plt.figure(figsize=(20,8))
ax = fig.add_subplot(111)
ax.imshow(Z, interpolation='none')  # 'none' disables interpolation; None means "use the default"
ax.imshow(img, interpolation='none')
fig.savefig('output.png')
Output:
You can also turn off axes if you prefer.
ax.axis('off')
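If the PNG had no alpha channel, a comparable overlay could be approximated with imshow's alpha argument instead (a sketch reusing Z, img and ax from the snippet above):
ax.imshow(Z, interpolation='none')
ax.imshow(img, interpolation='none', alpha=0.7)  # blend the image over the pattern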

nonlinear scaling image in figure axis matplotlib

I hope I have not overlooked a previously asked question; I don't think so.
I have an image of a spectrum and several laser lines for calibration. Since the laser lines and the spectrum were collected in the same way, they should be correlated in pixel distance. The relationship between pixel number and wavelength is nonlinear. I have fit the pixel number along the x-axis against the wavelengths of the laser lines (blue at 405 nm, green at 532 nm, red at 650 nm) using a 3rd-degree polynomial with high correlation. I want to plot the spectrum by computing the wavelength (nm) directly from the pixel number and display the wavelength beneath the spectrum. Is this possible without overlaying the image on another figure?
[image: spectrograph of laser lines]
import matplotlib.pyplot as plt
from scipy import ndimage
from pylab import *
import numpy as np
import skimage
image = laser_lines  # 2D array holding the spectrum image
print(image.shape)
for i in range(image.shape[1]):
    # note: ^ is bitwise XOR in Python; exponentiation is **
    x = i**3*-3.119E-6 + 2.926E-3*i**2 + 0.173*i + 269.593
    for j in range(image.shape[0]):
        y = image[j, i]  # rows first, columns second
imshow(image)
plt.show()
Probably the easiest option is to use pcolormesh instead of imshow. pcolormesh lets you specify the edges of the grid cells, so you can simply transform the original pixel grid through the fitted pixel-to-wavelength function to define the edges of each pixel in terms of wavelength.
import numpy as np
import matplotlib.pyplot as plt
image = np.sort(np.random.randint(0,256,size=(400,600)),axis=0)
f = lambda i: i**3*-3.119E-6+2.926E-3*i**2+0.173*i+269.593
xi = np.arange(0,image.shape[1]+1)-0.5
yi = np.arange(0,image.shape[0]+1)-0.5
Xi, Yi = np.meshgrid(xi, yi)
Xw = f(Xi)
fig, (ax) = plt.subplots(figsize=(8,4))
ax.pcolormesh(Xw, Yi, image)
ax.set_xlabel("wavelength [nm]")
plt.show()
If the image has 3 color channels, you need to use the color argument of pcolormesh to set the color of each pixel, as shown in this question: Plotting an irregularly-spaced RGB image in Python
import numpy as np
import matplotlib.pyplot as plt
r = np.sort(np.random.randint(0,256,size=(200,600)),axis=1)
g = np.sort(np.random.randint(0,256,size=(200,600)),axis=0)
b = np.sort(np.random.randint(0,256,size=(200,600)),axis=1)
image = np.dstack([r, g, b])
color = image.reshape((image.shape[0]*image.shape[1],image.shape[2]))
if color.max() > 1.:
    color = color/255.
f = lambda i: i**3*-3.119E-6+2.926E-3*i**2+0.173*i+269.593
xi = np.arange(0,image.shape[1]+1)-0.5
yi = np.arange(0,image.shape[0]+1)-0.5
Xi, Yi = np.meshgrid(xi, yi)
Xw = f(Xi)
fig, (ax) = plt.subplots(figsize=(8,4))
pc = ax.pcolormesh(Xw, Yi, Xw, color=color)
pc.set_array(None)  # drop the dummy array so the per-pixel colors are used
ax.set_xlabel("wavelength [nm]")
plt.show()
