Finding mean pixel intensity value of an image inside a contour - python

I have found a particular contour in an image. I have created a mask with the entire image black except for the boundary points of the contour, which have been mapped perfectly.
Now I want to go back to my original image and get the average pixel intensity of all the points inside this contour.
When I use the cv.mean() function, do I get the average value of only the points specified by the mask (i.e., just the boundary points), or of all the points inside the contour?

The easiest way to do this is to pick out the pixels in your image that correspond to places where the mask is white. cv2.mean() with a mask averages only the pixels where the mask is nonzero, so with your boundary-only mask you would get the mean of just the boundary points. If you want the pixels on the boundary, use the mask as you have shown it. If you want the pixels inside (and on) the boundary, draw the contour as a filled shape instead (thickness=-1). Here's an example:
import cv2
import numpy as np

img = cv2.imread('image.jpg')
mask = cv2.imread('mask.png', 0)   # load the mask as a single-channel image
locs = np.where(mask == 255)       # row/column indices of the white mask pixels
pixels = img[locs]                 # fancy-index the image at those locations
print(np.mean(pixels))
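
Alternatively, OpenCV can do the averaging for you: cv2.mean() with a mask argument averages only the pixels where the mask is nonzero. A minimal sketch, assuming cnt is the contour you found earlier (the variable name is illustrative):

import cv2
import numpy as np

img = cv2.imread('image.jpg')
mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.drawContours(mask, [cnt], -1, 255, thickness=-1)  # thickness=-1 fills the contour

mean_per_channel = cv2.mean(img, mask=mask)  # averages only the pixels under the mask
print(mean_per_channel)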

Related

Obtain the subtracted area of 2 images via Python OpenCV

I can subtract two images with the following code:
import cv2
original = cv2.imread('image1.png', cv2.IMREAD_UNCHANGED)
tiled = cv2.imread('image2.png', cv2.IMREAD_UNCHANGED)
subtract = cv2.subtract(tiled, original)
cv2.imwrite('subtract.png', subtract)
However, how can I obtain the area (perhaps as an array of pixels, or as shapes) that results in black (i.e., pixels that are black after the subtraction)?
I can only think of looping through each pixel of the image to check whether its value equals an array of zeros.
Ultimately, I want to make those pixels that are black after the subtraction transparent.
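
One vectorized way to do this without an explicit loop, as a minimal sketch (it assumes the subtraction result is a 3-channel BGR image; the filenames are illustrative):

import cv2
import numpy as np

subtract = cv2.imread('subtract.png')              # 3-channel BGR result of the subtraction
black = np.all(subtract == 0, axis=2)              # True wherever all three channels are zero
coords = np.where(black)                           # pixel coordinates, if the area itself is needed

rgba = cv2.cvtColor(subtract, cv2.COLOR_BGR2BGRA)  # add an alpha channel
rgba[black, 3] = 0                                 # make the black pixels fully transparent
cv2.imwrite('result.png', rgba)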

Merging each instance mask back to the original image Python

I have a bunch of masks (object is white, non-object is black), each cropped to its bounding box as a separate image, and I'm trying to put them back in their original positions on the original image. What I have in mind right now is:
Create a black image of the same size as the original image.
Add the value of each mask to the region of that black image given by the mask's bounding box on the original image.
Could anyone tell me if I am heading down the right path? Is there any better way to do this?
Below is roughly my implementation
import cv2
import numpy as np

black_img = np.zeros((height, width))  # an image the size of the original, but all black
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # single channel, so the shapes match
bbox = [x1, y1, x2, y2]  # pretend this is a valid bounding-box coordinate on the original image
black_img[y1:y2, x1:x2] += mask
For example:
I have the first image, which is one of my masks. It is the same size as its bounding box on the original image. I'm trying to merge each mask back so that I achieve something like the second image.
One of the masks:
After merging all the masks:
I am assuming the masks contain 0s and 1s and your image is grayscale. Also, for each small_mask you have a corresponding bbox.
import numpy as np

mask = np.zeros((height, width))
for small_mask, bbox in zip(masks, bboxes):
    x1, y1, x2, y2 = bbox
    mask[y1:y2, x1:x2] += small_mask           # paste each small mask into its bounding box
mask = ((mask >= 1) * 255.0).astype(np.uint8)  # any overlap collapses back to pure white
Now you have combined all the small masks together.
The last line:
My assumption was that two masks might intersect, so those intersections may have values greater than 1. mask >= 1 turns on every pixel whose value is greater than 0.
I multiplied that by 255.0 because I wanted to make it white. You won't be able to see 1s in a grayscale image.
(mask >= 1) * 255.0 expands the range from [0, 1] to [0, 255]. But this value is float, which is not an image type.
.astype(np.uint8) converts the float to uint8. Now you can do all the image operations without any problem. While it is float, you may face a lot of issues: plotting, saving, and so on will all cause problems.
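
If you then want to see the original image through the merged mask, rather than the mask itself, a minimal follow-up sketch (the filename original.jpg is illustrative, and grayscale is assumed as above):

import cv2

orig = cv2.imread('original.jpg', cv2.IMREAD_GRAYSCALE)
masked = cv2.bitwise_and(orig, orig, mask=mask)  # keep only the pixels under the merged mask
cv2.imwrite('merged.png', masked)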

Method for cropping an image along color borders?

Images such as this one (https://imgur.com/a/1B7nQnk) should be cropped into individual cells. Sadly, the vertical distance between cells is not static, so a more complex method needs to be applied. Since the cells have alternating background colors (grey and white, though maybe not visible on low-contrast monitors), I thought it might be possible to get the coordinates of the boundaries between white and grey, with which accurate cropping could be accomplished. Is there a way to, e.g., transform the image into a giant two-dimensional array, with digits corresponding to the color of each pixel? So basically:
Or is there another way?
Here's a snippet that shows how to access the individual pixels of an image. For simplicity, it first converts the image to grayscale and then prints out the first three pixels of each row. It also indicates where the brightness of the first pixel differs from the pixel in that column on the previous row, which you could use to detect the vertical boundaries.
You could do something similar over on the right side of the image to determine where the boundaries are on that side (once you've determined the vertical ones).
from PIL import Image

IMAGE_FILENAME = 'cells.png'
WHITE = 255

img = Image.open(IMAGE_FILENAME).convert('L')  # convert image to 8-bit grayscale
WIDTH, HEIGHT = img.size
data = list(img.getdata())  # convert image data to a flat list of integers
# convert that to a 2D list (list of lists of integers)
data = [data[offset:offset+WIDTH] for offset in range(0, WIDTH*HEIGHT, WIDTH)]

prev_pixel = WHITE
for row in range(HEIGHT):
    possible_boundary = ' boundary?' if data[row][0] != prev_pixel else ''
    print(f'row {row:5,d}: {data[row][:3]}{possible_boundary}')
    prev_pixel = data[row][0]

Extract ROI From Image with a Skew Angle OpenCv Python

I have been using the
x,y,w,h = cv2.boundingRect(cnt)
roi = img[y:y+h,x:x+w]
functions in OpenCV in order to get the portion of the image within a contour.
However, I am now trying to use the OpenCV function minAreaRect in order to get my bounding box. That function returns the center coordinates, the size (width and height), and the skew angle of the rectangle. Example:
((363.5, 676.0000610351562), (24.349538803100586, 34.46882629394531), -18.434947967529297)
Is there a simple way of extracting this portion of the image? I obviously cannot do
roi = img[y:y+h,x:x+w]
because of the skew angle. I was thinking of rotating the entire image and then extracting the points, but this would take far too long when going through thousands of contours at once.
What I currently get is encompassed within the green rectangle, and what I want is in the red rectangle. I want to extract this portion of the image, but cannot figure out how to select a diagonal rectangle.
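
One common approach is to get the four corners of the rotated rectangle with cv2.boxPoints and warp just those onto an upright image, so only the small ROI is transformed instead of the whole image. A minimal sketch, assuming img and cnt come from the earlier snippet:

import cv2
import numpy as np

rect = cv2.minAreaRect(cnt)                    # ((cx, cy), (w, h), angle)
box = cv2.boxPoints(rect).astype(np.float32)   # four corners of the rotated rectangle

w, h = int(rect[1][0]), int(rect[1][1])
# boxPoints returns the corners in bottom-left, top-left, top-right, bottom-right order
dst = np.array([[0, h - 1], [0, 0], [w - 1, 0], [w - 1, h - 1]], dtype=np.float32)

M = cv2.getPerspectiveTransform(box, dst)      # map the skewed box onto an upright w x h image
roi = cv2.warpPerspective(img, M, (w, h))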

How to create mask for outer pixels when using skimage.transform.rotate

skimage's rotate function creates "outer" pixels; no matter how those pixels are extrapolated (wrap, mirror, constant, etc.), they are fake and can affect statistical analysis of the image. How can I get a mask of those pixels so I can ignore them in the analysis?
import skimage.transform

mask_val = 2  # a value that cannot occur in a normalized image (valid range is [0, 1])
rotated = skimage.transform.rotate(img, 15, resize=True, cval=mask_val,
                                   preserve_range=False)
mask = rotated == mask_val
The idea: pick a value for the mask that doesn't appear in the image, then obtain the mask by checking for equality with this value. This works well when the image pixels are normalized floats. rotate above transforms the image pixels to normalized floats internally thanks to preserve_range=False (this is the default value; I specified it just to make the point that without it this wouldn't work).
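
A minimal usage sketch, showing how the mask lets the statistics ignore the fake outer pixels (skimage.data.camera() stands in for img here, which is an assumption for the sake of a runnable example):

import skimage.data
import skimage.transform

img = skimage.data.camera()                 # stand-in single-channel image
mask_val = 2                                # cannot occur in a [0, 1]-normalized image
rotated = skimage.transform.rotate(img, 15, resize=True, cval=mask_val)
fake = rotated == mask_val                  # True for the fake "outer" pixels
print(rotated[~fake].mean())                # mean over the real pixels only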
