numpy.where() on grayscale frame, wrong indices?

I'm trying to implement a basic RANSAC algorithm for the detection of a circle in a grayscale image.
The problem is that, after I threshold the image and search for non-zero pixels, I get the right shape, but the points are somehow displaced from their original positions:
import cv2
import numpy as n
from matplotlib import pyplot as plt

video = cv2.VideoCapture('../video/01_CMP.avi')
video.set(cv2.CAP_PROP_POS_FRAMES,200)
succ, frame = video.read()
frame = cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
frame = cv2.normalize(frame,frame, alpha=0,norm_type=cv2.NORM_MINMAX, beta = 255)
ret,frame = cv2.threshold(frame,35,255,cv2.THRESH_BINARY)
points = n.where(frame>0) #Thresholded pixels
#Orienting correctly the points in a (n,2) shape
#needed because of arguments of circle.points_distance()
points = n.transpose(n.vstack([points[0],points[1]]))
plt.imshow(frame,cmap='gray');
plt.plot(points[:,0],points[:,1],'wo')
video.release()
What am I missing here?

OpenCV uses a NumPy ndarray to represent an image; axis 0 of the array is vertical, corresponding to the Y axis of the image, so n.where returns indices in (row, column) = (y, x) order.
So, to plot the points you need: plt.plot(points[:,1],points[:,0],'wo')
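A small sketch of the same fix (not part of the original answer, reusing the question's variables): build the point array in (x, y) order up front so the rest of the code uses a single convention.
ys, xs = n.where(frame > 0)          # n.where returns (row, col) = (y, x)
points = n.column_stack([xs, ys])    # points[:, 0] is x, points[:, 1] is y
plt.plot(points[:, 0], points[:, 1], 'wo')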

Rotate an image in python and fill the cropped area with image

Have a look at the image; it will give you a better idea of what I want to achieve. I want to rotate the image and fill the black part of the image, just like in the required image.
import cv2
import numpy as np

# Read the image
img = cv2.imread("input.png")
# Get the image size
h, w = img.shape[:2]
# Define the rotation matrix
M = cv2.getRotationMatrix2D((w/2, h/2), 30, 1)
# Rotate the image
rotated = cv2.warpAffine(img, M, (w, h))
mask = np.zeros(rotated.shape[:2], dtype=np.uint8)
mask[np.where((rotated == [0, 0, 0]).all(axis=2))] = 255
img_show(mask)
With this code I am able to get the mask of the black regions. Now I want to replace these black regions with the image content, as shown in image 1. Is there a better way to achieve this?
Use the borderMode parameter of warpAffine.
You want to pass the BORDER_WRAP value.
Here's the result. This does exactly what you described with your first picture.
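A minimal sketch of that call (assuming the same input.png and the 30-degree rotation from the question):
import cv2

img = cv2.imread("input.png")
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1)
# BORDER_WRAP tiles the source image into the areas the rotation leaves uncovered
rotated = cv2.warpAffine(img, M, (w, h), borderMode=cv2.BORDER_WRAP)
cv2.imwrite("rotated_wrap.png", rotated)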
Here is another approach. You can first create a larger image consisting of 3x3 tiles of your original image. When you rotate this large image and then cut out only its center, you have your desired result.
import cv2
import numpy as np
# Read the image
img = cv2.imread("input.png")
# Get the image size of the original image
h, w = img.shape[:2]
# make a large image containing 3 copies of the original image in each direction
large_img = np.tile(img, [3,3,1])
cv2.imshow("large_img", large_img)
# Define the rotation matrix. Rotate around the center of the large image
M = cv2.getRotationMatrix2D((w*3/2, h*3/2), 30, 1)
# Rotate the image
rotated = cv2.warpAffine(large_img, M, (w*3, h*3))
# crop only the center of the image
cropped_image = rotated[h:h*2, w:w*2, :]
cv2.imshow("cropped_image", cropped_image)
cv2.waitKey(0)

Smoothly fade color image to grayscale based on input value between 0-1 for RGB Numpy Array

I am writing some code to display camera input on a 32*32 LED array.
My code to get the image and display it looks like this:
def start_cam(x,y):
    # Start the webcam
    webcam = cv2.VideoCapture(0)
    # Set frame rate to 45 frames per second
    frame_rate = 45
    # Loop 45 times per second
    while True:
        # Capture a frame from the webcam
        ret, frame = webcam.read()
        # Resize the frame to 16x16
        frame = cv2.resize(frame, (x, y))
        frame = sp_noise(frame,0.85)
        # Get input orientation
        orientation = 0
        # Rotate the frame by 90 degrees based on user input
        if orientation == 1:
            frame = np.rot90(frame)
        elif orientation == 2:
            frame = np.rot90(frame, 2)
        elif orientation == 3:
            frame = np.rot90(frame, 3)
        # Initialize empty list to store RGB values
        rgb_list = []
        # Loop through each pixel in the frame
        for i in range(x):
            for j in range(y):
                # Get RGB values of each pixel
                r, g, b = frame[i, j]
                # Append RGB values to list
                rgb_list = rgb_list + [b, g, r]
        # Print the list of RGB values
        #print(rgb_list)
        rgb_out = []
        for i in rgb_list:
            rgb_out.append(gamma[i]//2)
        rgb_out = sp_noise(rgb_out,0.2)
        temp_send(rgb_out, x,y)
I have a function already made called sp_noise that adds salt-and-pepper static to the image based on a value between 0 and 1. I would like to make a second image-processing function that makes the image go from fully colored at a value of 0 to fully gray at a value of 1.
How could I go about making a smooth gray-scale function for my RGB value NP array?
I wrote a function that simply computes both the gray and the color versions and averages them, weighted by the input value, but that is incredibly inefficient and reduces my FPS to unusable levels.
To "make an image fully gray" is to desaturate an image; so removing the colors while retaining the hue and brightness of the pixels. You can:
First, convert your RGB image into HSL space. This will convert your (red, green, blue) pixel triplets into (hue, saturation, lightness) triplets, where "how much color a pixel has" is contained within the single value saturation.
For OpenCV you can use something like output = cv2.cvtColor(img, cv2.COLOR_RGB2HSV) or with cv2.COLOR_BGR2HSV depending on your input color
Simple example from GeeksforGeeks
OpenCV example
Then you can write a simple desaturation function. For example, multiply the saturation of each pixel by your value in the range [0,1]. This will make the image "fully gray" at 0 and "fully colored" at 1.
(Optional) You can then convert the image back to RGB if necessary with the same function, but different flag: output = cv2.cvtColor(img, cv2.COLOR_HSV2RGB)
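Putting those steps together, a minimal sketch (the function name desaturate and the BGR input are assumptions, not from the original answer):
import cv2
import numpy as np

def desaturate(frame_bgr, value):
    # value = 1.0 -> fully colored, value = 0.0 -> fully gray (convention above)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[:, :, 1] *= value                      # scale only the saturation channel
    hsv = np.clip(hsv, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
This works on a whole frame at once, which should be far cheaper than the per-pixel averaging described in the question.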

affine transformation using nearest neighbor in python

I want to apply an affine transformation and afterwards use nearest-neighbor interpolation, while keeping the same dimensions for the input and output images. As an example I use the scaling transformation T = [[2,0,0],[0,2,0],[0,0,1]]. Any idea how I can fill the black pixels using nearest neighbors? I tried giving them the minimum of their neighbors' intensities. For example, if a pixel has neighbors [55,22,44,11,22,55,23,231], I give it the minimum intensity: 11. But the result is not at all clear.
import numpy as np
from matplotlib import pyplot as plt
#Importing the original image and init the output image
img = plt.imread('/home/left/Desktop/computerVision/SET1/brain0030slice150_101x101.png',0)
outImg = np.zeros_like(img)
# Dimensions of the input image and output image (the same dimensions)
(width , height) = (img.shape[0], img.shape[1])
# Initialize the transformation matrix
T = np.array([[2,0,0], [0,2,0], [0,0,1]])
# Make an array with input image (x,y) coordinations and add [0 0 ... 1] row
coords = np.indices((width, height), 'uint8').reshape(2, -1)
coords = np.vstack((coords, np.zeros(coords.shape[1], 'uint8')))
output = T @ coords
# Arrays of x and y coordinations of the output image within the image dimensions
x_array, y_array = output[0] ,output[1]
indices = np.where((x_array >= 0) & (x_array < width) & (y_array >= 0) & (y_array < height))
# Final coordinations of the output image
fx, fy = x_array[indices], y_array[indices]
# Final output image after the affine transformation
outImg[fx, fy] = img[fx, fy]
The input image is:
The output image after scaling is:
Well, you could simply use the OpenCV resize function:
import cv2
new_image = cv2.resize(image, new_dim, interpolation=cv2.INTER_AREA)
It'll do the resize and fill in the empty pixels in one go.
More on cv2.resize
If you need to do it manually, you could simply detect the dark pixels in the resized image and change their value to the mean of their 4 neighbouring pixels (for example; it depends on the algorithm you need). A rough sketch of this is given below.
See: nearest neighbour, bilinear, bicubic, etc.
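A rough sketch of that manual hole-filling (assuming a grayscale uint8 image where the empty pixels are exactly 0; fill_dark_pixels is a hypothetical helper name):
import numpy as np

def fill_dark_pixels(img):
    # Replace zero-valued "hole" pixels with the mean of their 4 neighbours.
    out = img.copy()
    h, w = img.shape[:2]
    ys, xs = np.where(img == 0)
    for y, x in zip(ys, xs):
        neighbours = []
        if y > 0:
            neighbours.append(int(img[y - 1, x]))
        if y < h - 1:
            neighbours.append(int(img[y + 1, x]))
        if x > 0:
            neighbours.append(int(img[y, x - 1]))
        if x < w - 1:
            neighbours.append(int(img[y, x + 1]))
        if neighbours:
            out[y, x] = int(np.mean(neighbours))
    return out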

How to plot centroids on image after kmeans clustering?

I have a color image and wanted to do k-means clustering on it using OpenCV.
This is the image on which I wanted to do k-means clustering.
This is my code:
import numpy as np
import cv2
import matplotlib.pyplot as plt
image1 = cv2.imread("./triangle.jpg", 0)
Z1 = image1.reshape((-1))
Z1 = np.float32(Z1)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K1 = 2
ret, mask, center =cv2.kmeans(Z1,K1,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
center = np.uint8(center)
print(center)
res_image1 = center[mask.flatten()]
clustered_image1 = res_image1.reshape((image1.shape))
for c in center:
    plt.hlines(c, xmin=0, xmax=max(clustered_image1.shape[0], clustered_image1.shape[1]), lw=1.)
plt.imshow(clustered_image1)
plt.show()
This is what I get from the center variable.
[[112]
[255]]
This is the output image
My problem is that I'm unable to understand the output. I have two lists in the center variable because I wanted two classes. But why do they have only one value?
Shouldn't it be something like this (which makes sense because centroids should be points):
[[x1, y1]
[x2, y2]]
instead of this:
[[x]
[y]]
and if I read the image as a color image like this:
image1 = cv2.imread("./triangle.jpg")
Z1 = image1.reshape((-1, 3))
I get this output:
[[255 255 255]
[ 89 173 1]]
Color image output
Can someone explain to me how I can get 2d points instead of lines? Also, how do I interpret the output I got from the center variable when using the color image?
Please let me know if I'm unclear anywhere. Thanks!!
K-means clustering finds clusters of similar values. Your input is an array of color values, hence you find the colors that describe the 2 clusters. [255 255 255] is white, [ 89 173 1] is green. Similarly for [112] and [255] in the grayscale version. What you're doing is color quantization.
They are indeed the centroids, but their dimension is color, not location, so you cannot plot them as positions in the image. Well, you can, but it looks like this:
See how the 'color location' determines to which class each pixel belongs?
This is not something you can locate in your image. What you can do is find the pixels that belong to the different clusters, and use the locations of the found pixels to determine their centroid or 'average' position.
To get the 'average' position of each color, you have to separate out the pixel coordinates according to the class/color to which they belong. In the code below I used np.where(img <= 240), where 240 is the threshold. I used 240 for simplicity, but you could use K-Means to determine where the threshold should be (inRange() might be useful at some point). If you sum the coordinates and divide by the number of pixels found, you'll have what I think you are looking for:
Result:
Code:
import cv2
import numpy as np
# load image as grayscale
img = cv2.imread('D21VU.jpg',0)
# get the positions of all pixels that are not full white (= triangle)
triangle_px = np.where( img <= 240)
# dividing the sum of the values by the number of pixels
# to get the average location
ty = int(sum(triangle_px[0])/len(triangle_px[0]))
tx = int(sum(triangle_px[1])/len(triangle_px[1]))
# print location and draw filled black circle
print("Triangle ({},{})".format(tx,ty))
cv2.circle(img, (tx,ty), 10,(0), -1)
# the same process, but now with only white pixels
white_px = np.where( img > 240)
wy = int(sum(white_px[0])/len(white_px[0]))
wx = int(sum(white_px[1])/len(white_px[1]))
# print location and draw white filled circle
print("White: ({},{})".format(wx,wy))
cv2.circle(img, (wx,wy), 10,(255), -1)
# display result
cv2.imshow('Result',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
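As a variant of the same idea (a sketch, not part of the original answer), you could use the labels returned by cv2.kmeans directly instead of a fixed 240 threshold:
import cv2
import numpy as np

img = cv2.imread('D21VU.jpg', 0)                      # same grayscale image as above
Z = np.float32(img.reshape(-1, 1))
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(Z, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
labels = labels.reshape(img.shape)
for k in range(2):
    ys, xs = np.where(labels == k)                    # pixel locations of cluster k
    print("cluster {} centroid: ({}, {})".format(k, int(xs.mean()), int(ys.mean())))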
Here is an Imagemagick solution, since I am not proficient with OpenCV.
Basically, I convert your actual image (from your link in the comments) to binary, then use image moments to extract the centroid and other statistics.
I suspect you can do something similar in OpenCV, Skimage, or Python Wand, which is based upon Imagemagick. (See for example:
https://docs.opencv.org/3.4/d3/dc0/group__imgproc__shape.html#ga556a180f43cab22649c23ada36a8a139
https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.moments_coords_central
https://en.wikipedia.org/wiki/Image_moment)
Input:
Your image does not have just two colors. Perhaps kmeans clustering with only 2 colors was not applied to this image, so I will do that with an Imagemagick script that I have built.
kmeans -n 2 -m 5 img.png img2.png
final colors:
count,hexcolor
99234,#65345DFF
36926,#27AD0EFF
Then I convert the two colors to black and white by simply thresholding and stretching the dynamic range to full black and white.
convert img2.png -threshold 50% -auto-level img3.png
Then I get all the image moment statistics for the white pixels, which includes the x,y centroid in pixels relative to the top left corner of the image. It also includes the equivalent ellipse major and minor axes, angle of major axis, eccentricity of the ellipse, and equivalent brightness of the ellipse, plus the 8 Hu image moments.
identify -verbose -moments img3.png
Channel moments:
Gray:
--> Centroid: 208.523,196.302 <--
Ellipse Semi-Major/Minor axis: 170.99,164.34
Ellipse angle: 140.853
Ellipse eccentricity: 0.197209
Ellipse intensity: 106.661 (0.41828)
I1: 0.00149333 (0.380798)
I2: 3.50537e-09 (0.000227937)
I3: 2.10942e-10 (0.00349771)
I4: 7.75424e-13 (1.28576e-05)
I5: 9.78445e-24 (2.69016e-09)
I6: -4.20164e-17 (-1.77656e-07)
I7: 1.61745e-24 (4.44704e-10)
I8: 9.25127e-18 (3.91167e-08)
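For reference, a similar centroid computation can be sketched with OpenCV's image moments (an assumption based on the documentation linked above, not the answerer's code):
import cv2

img = cv2.imread("img.png", 0)                          # assumed filename
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
m = cv2.moments(binary, binaryImage=True)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]       # centroid of the white pixels
print("Centroid: ({:.1f}, {:.1f})".format(cx, cy))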

Python Open CV - Get coordinates of region

I am a beginner in image processing (and OpenCV). After applying the watershed algorithm to an image, the output obtained is something like this -
Is it possible to get the coordinates of the segmented regions?
The code used is this (in case you wish to have a look) -
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('input.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# noise removal
kernel = np.ones((3,3),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# sure background area
sure_bg = cv2.dilate(opening,kernel,iterations=3)
# Finding sure foreground area
dist_transform = cv2.distanceTransform(opening,cv2.cv.CV_DIST_L2,5)
ret, sure_fg = cv2.threshold(dist_transform,0.7*dist_transform.max(),255,0)
# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg,sure_fg)
# Marker labelling
ret, markers = cv2.connectedComponents(sure_fg)
# Add one to all labels so that sure background is not 0, but 1
markers = markers+1
# Now, mark the region of unknown with zero
markers[unknown==255] = 0
markers = cv2.watershed(img,markers)
img[markers == -1] = [255,0,0]
plt.imshow(img)
plt.show()
Is there any function or algorithm to extract the coordinates of the coloured regions that are separated out? Any help would be much appreciated!
After this line:
markers = cv2.watershed(img,markers)
markers will be an image with all regions segmented, and the pixel value in each region will be an integer (label) greater than 0. Background has label 0, boundaries have label -1.
You already know the number of labels from ret returned by connectedComponents.
You need a data structure to contain the points of each region. For example, the points of each region can go in an array of points. You need one of these per region, so an array of arrays of points.
So, if you want to find the pixels of each region, you can do one of the following:
1) Scan the image and append each point to an array of arrays of points, where each inner array contains the points of the same region
// Pseudocode
"labels" is an array of an array of points
initialize labels size to "ret", the length of each array of points is 0.
for r = 1 : markers.rows
    for c = 1 : markers.cols
        value = markers(r,c)
        if (value > 0)
            labels{value-1}.append(Point(c,r)) // r = y, c = x
        end
    end
end
2) Generate a mask for each label value, and collect the points in the mask
// Pseudocode
"labels" is an array of an array of points
initialize labels size to "ret", the length of each array of points is 0.
for value = 1 : ret-1
    mask = (markers == value)
    labels{value-1} = all points in the mask // You can use cv::findNonZero(...) for this
end
The first approach is likely to be much faster, the second is easier to implement. Sorry, but I can't give you Python code (C++ would have been much better :D ), but you should find your way with this.
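For completeness, a rough Python sketch of the second approach (an editorial addition, not the answerer's code; it assumes the markers array produced by cv2.watershed in the question):
import numpy as np

def region_points(markers):
    # Collect the (x, y) coordinates of every labelled watershed region.
    regions = {}
    for value in np.unique(markers):
        if value <= 0:               # skip boundaries (-1) and any unlabelled pixels
            continue
        ys, xs = np.where(markers == value)
        regions[int(value)] = np.column_stack((xs, ys))   # (x, y) points of the region
    return regions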
Hope it helps
