I'm trying to get texture properties from a GLCM I created using greycomatrix from skimage.feature. My input data is an image with multiple bands and I want the texture properties for each pixel (resulting in an image with the dimensions cols x rows x (properties *bands)), as it can be achieved using ENVI. But I'm too new to this to come to grips with greycomatrix and greycoprops. This is what I tried:
import numpy as np
from skimage import io
from skimage.feature import greycomatrix, greycoprops
array = io.imread('MYFILE.tif')
array = array.astype(np.int64)
props = ['contrast', 'dissimilarity', 'homogeneity', 'energy', 'correlation', 'ASM']
textures = np.zeros((array.shape[0], array.shape[1], array.shape[2] * len(props)), np.float32)
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
bands = array.shape[2]
for b in range(bands):
glcm = greycomatrix(array[:, :, b], [1], angles, np.nanmax(array) + 1,
symmetric=True, normed=True)
for p, prop in enumerate(props):
textures[:, :, b] = greycoprops(glcm, prop)
Unfortunately, this gives me a 1 x 4 matrix per prop, which I guess is one value per angle FOR THE WHOLE IMAGE, but this is not what I want. I need it per pixel, like contrast for each single pixel, computed from its respective surroundings. What am I missing?
This snippet should get the job done:
import numpy as np
from skimage import io, util
from skimage.feature.texture import greycomatrix, greycoprops
img = io.imread('fourbandimg.tif')
rows, cols, bands = img.shape
radius = 5
side = 2*radius + 1
distances = [1]
angles = [0, np.pi/2]
props = ['contrast', 'dissimilarity', 'homogeneity']
dim = len(distances)*len(angles)*len(props)*bands
padded = np.pad(img, radius, mode='reflect')
windows = [util.view_as_windows(padded[:, :, band].copy(), (side, side))
for band in range(bands)]
feats = np.zeros(shape=(rows, cols, dim))
for row in range(rows):
for col in range(cols):
pixel_feats = []
for band in range(bands):
glcm = greycomatrix(windows[band][row, col, :, :],
distances=distances,
angles=angles)
pixel_feats.extend([greycoprops(glcm, prop).ravel()
for prop in props])
feats[row, col, :] = np.concatenate(pixel_feats)
The sample image has 128 rows, 128 columns and 4 bands (click here to download). At each image pixel a square local neighbourhood of size 11 is used to compute the grayscale matrices corresponding to the pixel to the right and the pixel above for each band. Then, contrast, dissimilarity and homogeneity are computed for those matrices. Thus we have 4 bands, 1 distance, 2 angles and 3 properties. Hence for each pixel the feature vector has 4 × 1 × 2 × 3 = 24 components.
Notice that in order to preserve the number of rows and columns the image has been padded using the image itself mirrored along the edge of the array. It this approach does not fit your needs you could simply ignore the outer frame of the image.
As a final caveat, the code could take a while to run.
Demo
In [193]: img.shape
Out[193]: (128, 128, 4)
In [194]: feats.shape
Out[194]: (128, 128, 24)
In [195]: feats[64, 64, :]
Out[195]:
array([ 1.51690000e+04, 9.50100000e+03, 1.02300000e+03,
8.53000000e+02, 1.25203577e+01, 9.38930575e+00,
2.54300000e+03, 1.47800000e+03, 3.89000000e+02,
3.10000000e+02, 2.95064854e+01, 3.38267222e+01,
2.18970000e+04, 1.71690000e+04, 1.21900000e+03,
1.06700000e+03, 1.09729371e+01, 1.11741654e+01,
2.54300000e+03, 1.47800000e+03, 3.89000000e+02,
3.10000000e+02, 2.95064854e+01, 3.38267222e+01])
In [196]: io.imshow(img)
Out[196]: <matplotlib.image.AxesImage at 0x2a74bc728d0>
Edit
You could cast your data to the type required by greycomatrix through NumPy's uint8 or scikit-images's img_as_ubyte.
Related
`n = 3
array = np.ones((n,n)) / (n*n)
n = array.shape[0] * array.shape1
while(True):
ret, frame = cap.read()
if ret is True:
print("newframe")
gframe = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
dst = cv2.copyMakeBorder(gframe, 1, 1, 1, 1, borderType, None, None)
blur = cv2.blur(dst,(3,3))
if k == 1 :
lastframe = gframe
curframe = gframe
nextframe = gframe
newFrame = gframe
k = 0
else :
lf = ndimage.convolve(lastframe, array, mode='constant', cval= 0.0)
cf = ndimage.convolve(curframe, array, mode='constant', cval= 0.0)
nf = ndimage.convolve(nextframe, array, mode='constant', cval= 0.0)
lastframe = curframe
curframe = nextframe
nextframe = gframe
b = np.zeros((3, 528, 720))
b[0] = lf
b[1] = cf
b[2] = nf
result = np.mean(b, axis=0)
cv2.imshow('frame',result)
cv2.imshow('frame2',gframe)
`enter image description here
I am trying to add all pixel values of a 3x3 pixel and then average them. I need to do that for every pixel and every frame and replace the primary pixel with the averaged one. However the way i am trying to do it makes it really slow and not really accurate.
This sounds like a convolution.
import numpy as np
from scipy import ndimage
a = np.random.random((5, 5))
a
[[0.14742615 0.83548453 0.67433445 0.59162829 0.21160044]
[0.1700598 0.89074466 0.84155171 0.65092969 0.3842437 ]
[0.22662423 0.2266929 0.47757456 0.34480112 0.06261333]
[0.89402116 0.00101947 0.90503461 0.93112109 0.44817247]
[0.21788789 0.3338606 0.07323461 0.28944439 0.91217591]]
Convolution operation with window size 3x3
n = 3
k = np.ones((n, n)) / (n * n)
n = k.shape[0] * k.shape[1]
b = ndimage.convolve(a, k, mode='constant', cval=0.0)
b
[[0.22707946 0.39551126 0.49829704 0.3726987 0.2042669 ]
[0.27744803 0.49894366 0.61486021 0.47103081 0.24953517]
[0.26768469 0.51481368 0.58549664 0.56067136 0.31354238]
[0.21112292 0.37288334 0.39808704 0.4937969 0.33203648]
[0.16075435 0.26945093 0.28152386 0.39546479 0.28676821]]
Now you just have to do it for the current frame, and the two prior frames.
-------- EDIT: For three frames -----------
For 3D you could write a convolution function like in this post, but its quite complex as it uses FFTs
If you just want to average across three frames, you could do:
f1 = np.random.random((5, 5)) # Frame1
f2 = np.random.random((5, 5)) # Frame2
f3 = np.random.random((5, 5)) # Frame3
n = 3
k = np.ones((n, n)) / (n * n)
n = k.shape[0] * k.shape[1]
b0 = ndimage.convolve(f1, k, mode='constant', cval=0.0)
b1 = ndimage.convolve(f2, k, mode='constant', cval=0.0)
b2 = ndimage.convolve(f3, k, mode='constant', cval=0.0)
# Create a 3D Matrix, with each fame placed along the first dimension
b = np.zeros((3, 5, 5))
b[0] = b0
b[1] = b1
b[2] = b2
# Take the average across the first dimension (across frames)
result = np.mean(b, axis=0)
There probably is a more elegant solution than this, but it gets the job done.
-------- EDIT: For Movies -----------
Based on all the questions in the comments I've decided to attempt to add some more code to help with implementation.
Firstly I'm starting out with these 7 consecutive stills from a movie:
I have not verified that the following code is bug proof or actually returns the correct result.
import cv2
import numpy as np
from scipy import ndimage
# this is a function to do previous code
def mean_frames(frames, kernel):
b = np.zeros(frames.shape)
for i in range(frames.shape[0]):
b[i] = ndimage.convolve(frames[i], k, mode='constant', cval=0.0)
b = np.mean(b, axis=0) / frames.shape[0]
return b
mean_N = 3 # frames to average
# read in 1 file to get dimenions
im = cv2.imread(f'{root}1.png', cv2.IMREAD_GRAYSCALE)
# setup numpy matrix that will hold mean_N frames at a time
frames = np.zeros((mean_N, im.shape[0], im.shape[1]))
avg_frames = [] # list to store our 3 averaged frames
count = 0 # counter to position frames in 1st dim of 3D matrix for avg
k = np.ones((3, 3)) / (3 * 3) # kernel for 2D convolution
for j in range(1, 7): # 7 images
file_name = root + str(j) + '.png'
im = cv2.imread(file_name, cv2.IMREAD_GRAYSCALE)
frames[count, ::] = im # store in 3D matrix
# if loaded more than min req. for avg, we average
if j >= mean_N:
# average and store to list
avg_frames.append(mean_frames(frames, k))
# if the count is mean_N - 1, that means we need to replace
# the 0th matrix in frames so that we are doing a 'moving avg'
if count == (mean_N - 1):
count = 0
else:
count += 1 #increase position in 0th dim for 3D matrix storage
# ouput averaged frames
for i, f in enumerate(avg_frames):
cv2.imwrite(f'{path}output{i}.jpg', f)
Then looking at the folder, there are 5 files (as expected if we did a moving average of 3 frames over 7 stills:
looking at before and after:
Image 3:
and averaged image #1:
The image not only is in gray scale (as expected) but seems quite dark. Perhaps some brightening would make things look better/more apparent.
Your question is very interesting.
I saw that you use many loops for activating this function. Let's process analysis.
Just for a frame.
You want to add all pixel values of a 3x3 pixel neighborhood. So I think Image interpolation is very suitable for this case. In OpenCV, we use resize() to interpolate pixel for image. So the INTER_NEAREST is best for this situation.
This is the formula for INTER_NEAREST.
Now you get the pixel added image.
Then you want to do that for every pixel and every frame and replace the primary pixel with the average one. And I think the Average filtering is a better solution.
The filter will work every pixel.
The code of a temporary example.
Interpolation
img = cv2.resize(img, (img.size[0]*3, img.size[1]*3), cv2.INTER_NEAREST)
Filter
img = cv2.blur(img, (3, 3))
I am trying to extract pixel values by overlaying polygons. I use a code from Patrick Grey (http://patrickgray.me/open-geo-tutorial/chapter_5_classification.html). When I masked the image with the shape features, I wanted, I got out_image. Then the next step would be to remove 0, which totally mess up the array as values are not present according bands.
I tried many different ways as to remove 0 and keep the order of band values according to the class. In R I can do it without any problem and when I export the data as CSV and train the algorithm everything works fine in a Python environment.
How can I extract pixel values and keep the numbers band and class-wise?
X = np.array([], dtype=np.int8).reshape(0,8) # pixels for training
y = np.array([], dtype=np.string_) # labels for training
with rasterio.open(img_fp) as src:
band_count = src.count
for index, geom in enumerate(geoms):
feature = [mapping(geom)]
# the mask function returns an array of the raster pixels within this feature
out_image, out_transform = mask(src, feature, crop=True)
# eliminate all the pixels with 0 values for all 8 bands - AKA not actually part of the shapefile
out_image_trimmed = out_image[:,~np.all(out_image == 0, axis=0)]
# eliminate all the pixels with 255 values for all 8 bands - AKA not actually part of the shapefile
out_image_trimmed = out_image_trimmed[:,~np.all(out_image_trimmed == 255, axis=0)]
# reshape the array to [pixel count, bands]
out_image_reshaped = out_image_trimmed.reshape(-1, band_count)
# append the labels to the y array
y = np.append(y,[shapefile["Classname"][index]] * out_image_reshaped.shape[0])
# stack the pizels onto the pixel array
X = np.vstack((X,out_image_reshaped))
Many thanks for any hints!
Here is to solution. I had to slice up the data band wise then transpose it and stack it by columns. After this step np.vstack worked and everything is in order.
X = np.array([], dtype=np.int8).reshape(0, 9) # pixels for training
y = np.array([], dtype=np.int8) # labels for training
# extract the raster values within the polygon
with rio.open(sentinal_band_paths[7]) as src:
band_count = src.count
for index, geom in enumerate(geoms):
feature = [mapping(geom)]
# the mask function returns an array of the raster pixels within this feature
out_image, out_transform = mask(src, feature, crop=True)
# eliminate all the pixels with 0 values for all 8 bands - AKA not actually part of the shapefile
out_image_trimmed = out_image[:, ~np.all(out_image == 0, axis=0)]
# eliminate all the pixels with 255 values for all 8 bands - AKA not actually part of the shapefile
out_image_trimmed = out_image_trimmed[:, ~np.all(out_image_trimmed == 255, axis=0)]
# reshape the array to [pixel count, bands]
out_image_reshaped = out_image_trimmed.reshape(-1, band_count)
# reshape the array to [pixel count, bands]
trial = np.split(out_image_trimmed, 9) ##### share it to equally after bands
B1 = trial[0].T ####transpons columns
B2 = trial[1].T
B3 = trial[2].T
B4 = trial[3].T
B5 = trial[4].T
B6 = trial[5].T
B7 = trial[6].T
B8 = trial[7].T
B9 = trial[8].T
colum_data = np.column_stack((B1, B2, B3, B4, B5, B6, B7, B8, B9)) ####concatenate data colum wise
# append the labels to the y array
y = np.append(y, [shapefile["id"][index]] * out_image_reshaped.shape[0])
# stack the pizels onto the pixel array
X = np.vstack((X, colum_data))
eliminate all the pixels with 0 values for all 8 bands - AKA not actually part of the shapefile:
out_image_trimmed = out_image[:,~np.all(out_image == 0, axis=0)]
eliminate all the pixels with 255 values for all 8 bands - AKA not actually part of the shapefile:
out_image_trimmed = out_image_trimmed[:,~np.all(out_image_trimmed == 255, axis=0)]
I'm not even sure if it is possible, but I am pretty new to python.
I have three 3D datasets, each is a 64 x 64 x 50 numpy array. I am trying to combine each 3D dataset into a single 3D RGB image, where each cell is represented by an RGB value, and each color channel represents values for a single dataset.
For example, my data is three different isotopes measured in a rock, so I would like R to represent the values for oxygen-16, G = sulfur-32, and B = magnesium-24.
I have figured out how to normalize each isotope array to a discretized value between 0-255 with the following generalized equation:
new_arr = ((arr - arr.min()) * (1/(arr.max() - arr.min()) * 255).astype('uint8')
More specifically for my data, I have the following:
O16R = ((O16.get_data() - np.min(O16.get_data())) * (1/(np.max(O16.get_data()) - np.min(O16.get_data())) * 255).astype('uint8'))
S32G = ((S32.get_data() - np.min(S32.get_data())) * (1/(np.max(S32.get_data()) - np.min(S32.get_data())) * 255).astype('uint8'))
Mg24B = ((Mg24.get_data() - np.min(Mg24.get_data())) * (1/(np.max(Mg24.get_data()) - np.min(Mg24.get_data())) * 255).astype('uint8'))
Now, I would like to create another 64 x 64 x 50 3D array, with each index in the array defined by the RGB values corresponding to the indexed values defined above.
For a simplified example, if I had small 2 x 1 arrays of:
O16R = (151, 3)
S32G = (2 , 57)
Mg24B = (0, 111)
Then I need a resulting RGB nested matrix with values:
RGB = ( [151,2,0] , [3,57,111] )
I figure that I need to create a for loop, but I haven't been able to figure it out. This is what I have so far, but it doesn't parse the data.
RGB = np.zeros(shape=(64,64,50))
for i in RGB:
RGB = ([O16R, S32G, Mg24B])
Any help would be appreciated.
IIUC, for you minimal example you can do either of the following:
# setup:
O16R = (151, 3)
S32G = (2 , 57)
Mg24B = (0, 111)
# using zip:
RGB = np.array(list(zip(O16R, S32G, Mg24B)))
# or just transposing the array:
RGB = np.array([O16R, S32G, Mg24B]).T
Both return:
>>> RGB
array([[151, 2, 0],
[ 3, 57, 111]])
need to read an image as an array and for each pixel select 7*7 neighbor pixels then reshape it and put as a first row of training set:
import numpy as np
from scipy import misc
face1=misc.imread('face1.jpg')
face1 dimensions are (288, 352, 3) , need to find 7*7 neighbor pixels for every pixel , so 49*3 color then reshape it as a (1,147) array and stack it into an array for all pixels , i took the following approach:
X_training=np.zeros([1,147] ,dtype=np.uint8)
for i in range(3, face1.shape[0]-3):
for j in range(3, face1.shape[1]-3):
block=face1[i-3:i+4,j-3:j+4]
pxl=np.reshape(block,(1,147))
X_training=np.vstack((pxl,X_training))
resulting X_training shape is (97572, 147)
and as last row contains all zeros then:
a = len(X_training)-1
X_training = X_training[:a]
above code works well for one picture but with Wall time: 5min 19s i have 2000 images, so it will take ages to do it for all the images. I am looking for a faster way to iterate over every pixel and do the above task.
Edit:
this is what i mean by neighbor pixels , for every pixel face1[i-3 : i+4 ,j-3:j+4]
An efficient way is to use stride_tricks to create a 2d rolling window over the image, then flatten it out:
import numpy as np
face1 = np.arange(288*352*3).reshape(288, 352, 3) # toy data
n = 7 # neighborhood size
h, w, d = face1.shape
s = face1.strides
tmp = np.lib.stride_tricks.as_strided(face1, strides=s[:2] + s,
shape=(h - n + 1, w - n + 1, n, n, d))
X_training = tmp.reshape(-1, n**2 * d)
X_training = X_training[::-1] # to get the rows into same order as in the question
tmp is a 5D view into the image, where tmp[x, y, :, :, c] is equivalent to the neigborhood face1[x:x+n, y:y+n, c] in color channel c.
The following is < 1s on my laptop:
import scipy as sp
im = sp.rand(300, 300, 3)
size = 3
ij = sp.meshgrid(range(size, im.shape[0]-size), range(size, im.shape[1]-size))
i = ij[0].T.flatten()
j = ij[1].T.flatten()
N = len(i)
L = (2*size + 1)**2
X_training = sp.empty(shape=[N, 3*L])
for pixel in range(N):
si = (slice(i[pixel]-size, i[pixel]+size+1))
sj = (slice(j[pixel]-size, j[pixel]+size+1))
X_training[pixel, :] = im[si, sj, :].flatten()
X_training = X_training[-1::-1, :]
I'm always a bit sad when I can't think of one-line vectorized version, but at least it's faster for you.
Using scikit-image:
import numpy as np
from skimage import util
image = np.random.random((288, 352, 3))
windows = util.view_as_windows(image, (7, 7, 3))
out = windows.reshape(-1, 7 * 7 * 3)
I have two sets of corresponding points from two images. I have estimated the Essential matrix which encodes the transformation between the cameras:
E, mask = cv2.findEssentialMat(points1, points2, 1.0)
I've then extracted the rotation and translation components:
points, R, t, mask = cv2.recoverPose(E, points1, points2)
But how do I actually get the cameras matrices of the two cameras so I can use cv2.triangulatePoints to generate a little point cloud?
Here is what I did:
Input:
pts_l - set of n 2d points in left image. nx2 numpy float array
pts_r - set of n 2d points in right image. nx2 numpy float array
K_l - Left Camera matrix. 3x3 numpy float array
K_r - Right Camera matrix. 3x3 numpy float array
Code:
# Normalize for Esential Matrix calaculation
pts_l_norm = cv2.undistortPoints(np.expand_dims(pts_l, axis=1), cameraMatrix=K_l, distCoeffs=None)
pts_r_norm = cv2.undistortPoints(np.expand_dims(pts_r, axis=1), cameraMatrix=K_r, distCoeffs=None)
E, mask = cv2.findEssentialMat(pts_l_norm, pts_r_norm, focal=1.0, pp=(0., 0.), method=cv2.RANSAC, prob=0.999, threshold=3.0)
points, R, t, mask = cv2.recoverPose(E, pts_l_norm, pts_r_norm)
M_r = np.hstack((R, t))
M_l = np.hstack((np.eye(3, 3), np.zeros((3, 1))))
P_l = np.dot(K_l, M_l)
P_r = np.dot(K_r, M_r)
point_4d_hom = cv2.triangulatePoints(P_l, P_r, np.expand_dims(pts_l, axis=1), np.expand_dims(pts_r, axis=1))
point_4d = point_4d_hom / np.tile(point_4d_hom[-1, :], (4, 1))
point_3d = point_4d[:3, :].T
Output:
point_3d - nx3 numpy array