Get the segmentation mask for the bounded polygon - python

I am trying to create the segmentation mask for each red color boundary. Input image has 4 polygons with a red color boundary. I want to mark each polygon segment with different colors as shown in the output. Please help me with this.
Input Image
Output Image

Take the red channel only and treat it as a single-channel grayscale/binary image.
Invert it: the areas become white, the borders black.
Run connected-components labeling on the white areas (connectedComponents is the API).
To paint a picture where each area has its own color, use numpy operations (mask indexing, assignment) to construct that picture from the labels map returned by connectedComponents.

Why can't you simply scan through the image and do the following:
execute flood fill with a unique (non-red color) every time you find a black pixel
replace red border with the same color if needed
repeat

Related

How to replace color of image using opencv?

I'm trying to replace different colors in an image using OpenCV.
The image is as below.
I'm trying to replace the border color and the main object color (a shade of yellow) with some other random colors, say orange and red. First I tried to change the border color as in the code below:
image = cv.imread(r'image.png')
hsv=cv.cvtColor(image,cv.COLOR_BGR2HSV)
yellow_lo=np.array([0,0,240])
yellow_hi=np.array([179,255,255])
mask=cv.inRange(hsv,yellow_lo,yellow_hi)
I get mask image as below
As you can see, there's a gap between the lines in the border color. When I replace the color for this mask image, I can still see the original color present in the image, as below; the line is not continuously red:
image[mask>0]=(0,0,255)
This is happening because the pixel intensity of the border varies; it is not constant, as shown in the zoomed image below.
How can I solve this and replace the color of the whole border? I tried eroding and dilating the mask image to close the broken line, but it didn't fix the issue. Any help or suggestion to fix this would be highly appreciated.
To replace the border you need a prominent mask, one that strongly represents the border.
Analyzing the third channel in HSV color space (value) gives the following:
Applying an appropriate threshold on this channel gives a well defined mask, which can later be used to replace the color:
mask = cv2.threshold(hsv[:,:,2], 150, 255, cv2.THRESH_BINARY)[1]
img[mask==255] = (0,0,255)

Is there a way to eliminate background noise in a color screened image?

I'm using the following code to color-screen a photo. We are trying to locate the orange circle within the image. Is there a way to eliminate some of the background noise shown in the second photo? Tweaking the color range may help some, but it's never enough to fully eliminate the background noise. I've also considered trying to locate circle shapes within the image, but I am unsure how to do that. Any help would be amazing!
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
Lower_bound = np.array([0, 80, 165]) # COLORS NEEDED
Upper_bound = np.array([75, 255, 255])
mask = cv2.inRange(hsv, Lower_bound, Upper_bound)
Option 1 (HSV color space):
If you want to continue using HSV color space robustly, you must check out this post. There you can control the variations across the three channels using trackbars.
Option 2 (LAB color space):
Here I will be using the LAB color space, where dominant colors can be segmented fairly easily. LAB stores the image across three channels (one brightness channel and two color channels):
L-channel: amount of lightness in the image
A-channel: amount of red/green in the image
B-channel: amount of blue/yellow in the image
Since orange is a close neighbor of color red, using the A-channel can help segment it.
Code:
img = cv2.imread('image_path')
# Convert to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
cv2.imshow('A-channel', lab[:,:,1])
The above image is the A-channel, where the object of interest is pretty visible.
# Apply threshold
th = cv2.threshold(lab[:,:,1],150,255,cv2.THRESH_BINARY)[1]
Why does this work?
Looking at the LAB color plot, red sits at one end of the A-axis (a+) while green sits at the opposite end of the same axis (a-). This means higher values in this channel represent colors close to red, while lower values represent colors close to green.
The same can be done along the B-channel when trying to segment yellow or blue in the image.
Post-processing:
From here onwards for every frame:
Identify the largest contour in the binary threshold image th.
Mask it over the frame using cv2.bitwise_and()
Note: the LAB color space makes it easy to segment dominant colors, but you need to test it before relying on it for other colors.

Text edge zigzag effect removal (OR finding the dominant color for a image region)

My goal is to draw the text bounding boxes for the following image. Since the two regions are colored differently, so this should be easy. I just need to select the pixels that match a certain color values to filter out the other text region and run a convex hull detection.
However, when I zoom in the image, I notice that the text regions has the zig-zag effect on the edges, so I'm not able to easily find the two color values (for the blue and green) from the above image.
I wonder is there a way to remove the zig-zag effect to make sure each phrase is colored consistently? Or is there a way to determine the dominant color for each text region?
The anti-aliasing causes the color to become lighter (or darker if against a black background) so you can think of the color as being affected by light. In that case, we can use light-invariant color spaces to extract the colors.
So first convert to HSV, since it is a light-invariant color space. Since the background can be either black or white, we filter it out (if the background is always white and the text can be black, you would need to change the filtering to allow for that).
I took saturation less than 80, as that encompasses white, black and gray, since they are the only colors with low saturation. (Your image is not perfectly white; it is 238 instead of 255, maybe due to JPEG compression.)
Since we found all the black, white and gray, the rest of the image is our main colors, so I took the inverse mask of the filter. Then, to make the colors uniform and unaffected by light, I set the saturation and value of those pixels to 255, so that the only difference between the colors is the hue. I also set background pixels to 0 to make finding contours easier, but that's not necessary.
After this you can use whatever method you want to get the different groups of colors. I just did a quick histogram of the hue values and got 3 peaks, but 2 were close together, so they can be bundled together as 1. You could use peak finding to locate the peaks; there might be better methods of finding the color groups, but this is what I thought of quickly.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = hsv[:,:,1] < 80 # for white, gray & black
hsv[mask] = 0 # set bg pixels to 0
hsv[~mask,1:] = 255 # set fg pixels saturation and value to 255 for uniformity
colors = hsv[~mask]
z = np.bincount(colors[:,0])
print(z)
bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imshow('bgr', bgr)
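The peak-finding idea could be sketched with `scipy.signal.find_peaks` on the `bincount` output. Synthetic hue counts are used here, and the `height` and `distance` values are assumptions to be tuned per image:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic hue histogram: two clusters of hues, e.g. around 60 and 120
# (in practice z = np.bincount(colors[:, 0], minlength=180) from above).
z = np.zeros(180, dtype=int)
z[58:63] = [40, 300, 800, 250, 30]     # first color group
z[118:123] = [20, 200, 600, 180, 10]   # second color group

# Peaks must stand out from the noise floor (height) and not sit right next
# to a larger peak (distance); both values are assumptions.
peaks, _ = find_peaks(z, height=100, distance=10)
```

The `distance` argument suppresses smaller peaks that sit close to a larger one, which matches bundling two nearby hue peaks into a single color group.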

How do I fill the missing part in this picture using OpenCV (Python)?

I have to generate a new image such that the missing portion of the black ring is shown.
For example, consider this image.
As we can see, a sector of the inner black ring is missing, and my task is to identify where to fill in. I have to take a plain white image of the same dimensions as the input image and predict (marked in black) the pixels I'll fill in to complete the ring. A pictorial representation of the output image is as follows:
Please help me out... I'm new to OpenCV, so please explain the steps in as much detail as possible. I am working in Python, so I'd prefer a Python solution to this problem.
You can find a white object (sector) whose centroid is at the maximum distance from the center of the picture.
import numpy as np
import cv2

img = cv2.imread('JUSS0.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = gray.shape  # shape is (rows, cols)
thresh = cv2.threshold(gray, 253, 255, cv2.THRESH_BINARY)[1]

# Connected components of the white regions, with stats and centroids
output = cv2.connectedComponentsWithStats(thresh, 4, cv2.CV_32S)
num_labels = output[0]
labels = output[1]
centroids = output[3]

# Squared distance of each centroid from the image center
polar_centroids_sq = []
for i in range(num_labels):
    polar_centroids_sq.append((centroids[i][0] - w/2)**2 + (centroids[i][1] - h/2)**2)

# The component whose centroid is farthest from the center is the sector
idx = polar_centroids_sq.index(max(polar_centroids_sq))
out = np.uint8(255 * (labels == idx))
cv2.imshow('sector', out)
cv2.imwrite('sector.png', out)
This is one of many possible approaches.
make every pixel that is not black into white, so your image is black and white. This means your processing is simpler, uses less memory and has only 1 channel to process instead of 3. You can do this with cvtColor() to get greyscale and then cv2.threshold() to get pure black and white.
repeatedly construct (imaginary) radial lines until, when checking the pixels along the lines, you have 2 black stretches. You now have the inner and outer radius of the inner, incomplete circle. You can get the coordinates of points along a line with scikit-image line function.
draw that circle in full in black with cv2.circle()
subtract that image from your initial black and white image so that only the differences (missing part) shows up in the result.
Of course, if you already know the inner and outer radius of the incomplete black ring, you can completely omit the second step above and do what Yves suggested in the comments.
Or, instead of second step above, run edge detection and HoughCircles to get the radii.
Another approach might be to call cv2.warpPolar() to convert your circular image to a long horizontal one with 2 thick black lines, one of them discontinuous. Then just draw that line across the full width of the image and warp back to a circle.

Differentiate membrane signal from organelles, microscopy image analysis

I am trying to count cells in a microscopy image, and I need to differentiate between the membrane signal and the organelles within.
There is only one color, as we are visualizing a protein within the cells using GFP.
Right now I am using the skimage package (measure, label). This method kind of works, as it can find connected black regions, and by using the convex hull of these together with the bounding box, I can achieve the following (inside: red, membrane: blue):
I am, however, having problems with organelles (bright spots inside) that touch the membrane: I then lose signal from the inside, which gets added to the membrane signal, and that is a problem.
Any suggestions for a better method?
import numpy as np
from scipy import ndimage
from skimage import measure

image = ndimage.gaussian_filter(raw_image, sigma=(0.5, 0.5), order=0)
median = np.median(image)
mask_inv = np.ma.masked_where(image > median*1.5, image)  # was 5
array = np.zeros(image.shape)
img_contour_inv = np.array(array + mask_inv, dtype=float)  # np.float is removed in NumPy 1.24+
mask_inverse_bool = img_contour_inv > 0
labels = measure.label(mask_inverse_bool, connectivity=1)
df = measure.regionprops(labels, intensity_image=intensity_image)
Some sorting by size and plotting then yields image 2.
You can try this method:
Find the black spots as you have done in the Inside image. Make the Inside image a black-and-white image.
Now invert the Inside image and multiply it with the raw image. The bright spots will be left behind.
To get cell contours, you can subtract the bright spots from the raw image.
Another method could be using thresholding to find the bright spots.
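A minimal sketch of the thresholding variant, on a synthetic image; the 0.8 threshold and the intensity values are assumptions:

```python
import numpy as np
from skimage import measure

# Synthetic raw image: dim membrane/inside signal with one bright organelle.
raw = np.zeros((50, 50), dtype=float)
raw[10:40, 10:40] = 0.3        # cell body signal
raw[20:25, 20:25] = 1.0        # bright organelle spot

# Threshold to isolate the bright spots (0.8 is an assumed cutoff).
bright = raw > 0.8

# Invert the spot mask and multiply with the raw image: the organelle
# signal is removed, leaving only the membrane/inside intensity.
membrane_and_inside = raw * (~bright)

# Label the bright spots separately so they can be counted or measured.
spots = measure.label(bright)
```

This keeps the organelle signal from contaminating the membrane measurement even when a spot touches the membrane, since the spot pixels are zeroed out before region measurements.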
