I was wondering whether anyone is aware of any approaches for discovering which portion of an image has been pixelated. For example, take the following sausage dog, to which I have applied the following code:
import cv2
import numpy as np

img = cv2.imread("sausage.jpg")
blurred_img = cv2.blur(img, (21, 21))  # box filter; unlike GaussianBlur, cv2.blur has no sigma parameter
mask = np.zeros(img.shape, dtype=np.uint8)
mask = cv2.circle(mask, (200, 100), 100, [255, 255, 255], -1)
out = np.where(mask == [255, 255, 255], blurred_img, img)
This blurs a circle centered at (200, 100) with a radius of 100, and that circle is the region I would like to home in on.
I have tried looking at edges, but that doesn't give anything definitive, and I don't yet have an algorithm to extract the information.
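One direction that might work (a sketch, not a definitive method): since blurring strips high-frequency detail, the local Laplacian energy should drop inside the blurred circle. Assuming out is the image produced by the code above, something like the following could localize the low-detail region; the averaging window size and the Otsu threshold are assumptions to tune:
import cv2
import numpy as np

# Local high-frequency energy: absolute Laplacian, averaged over a window
gray = cv2.cvtColor(out, cv2.COLOR_BGR2GRAY)
energy = cv2.blur(np.abs(cv2.Laplacian(gray, cv2.CV_64F)), (25, 25))
energy = cv2.normalize(energy, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
# Low-energy pixels are candidates for the blurred region
_, blur_mask = cv2.threshold(energy, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)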
Using OpenCV for Python, I am trying to get a mask of the noise elements in an image, to be used later as input for the cv2.inpaint() function.
I am given a greyscale image (2D matrix with values from 0 to 255) in the input_mtx_8u variable, with noise (isolated polygons of very low values).
So far what I did was:
Get the edges where the gradient is above 25:
laplacian = cv2.Laplacian(input_mtx_8u, cv2.CV_8UC1)
# cv2.threshold returns (retval, dst), so lapl_bin_val holds the binary image here
lapl_bin, lapl_bin_val = cv2.threshold(laplacian, 25, 255, cv2.THRESH_BINARY)
Get the contours of the artifacts:
contours, _ = cv2.findContours(lapl_bin_val, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
Fill the contours identified:
filled_mtx = input_mtx_8u.copy()
cv2.fillPoly(filled_mtx, contours, (255, 255, 0), 4)
For some reason, my 'filled polygons' are not completely filled (see figure).
What could I be doing wrong?
As pointed out by @fmw42, a solution to get the contours filled is to use drawContours() instead of fillPoly().
The final working code I got is:
import cv2
import numpy as np

# input_mtx_8u = 2D matrix with uint8 values from 0 to 255
laplacian = cv2.Laplacian(input_mtx_8u, cv2.CV_8UC1)
lapl_bin, lapl_bin_val = cv2.threshold(laplacian, 25, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(lapl_bin_val, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
inpaint_mask = np.zeros(input_mtx_8u.shape, dtype=np.uint8)
# Draw each contour filled, one by one (see note below)
for contour in contours:
    cv2.drawContours(inpaint_mask, [contour], -1, (255, 0, 0), thickness=-1)
# inpaint_mask can now be used as the mask for cv2.inpaint()
Note that, for some reason,
cv2.drawContours(input_mtx_cont, contours, -1, (255, 0, 0), thickness=-1)
does not work. One must loop and draw contour by contour.
We create a 360-degree photo of the sky from many camera photos taken from different angles. During this process, a few imperfections arise:
What would be the best way to get rid of these visible overlaps? Is it possible to do this after the stitching, or should we try to prevent it during the stitching process?
We have already tried blurring the seam lines after stitching, but it does not seem to be the right way to go:
import cv2
import numpy as np

img = cv2.imread('light-sky-stitch-202001011200.png')
blurred_img = cv2.GaussianBlur(img, (211, 211), 0)
# Circular mask over the region to blend
mask = np.zeros((2000, 2000, 3), dtype=np.uint8)
mask = cv2.circle(img=mask, center=(1000, 1000), radius=500,
                  color=(255, 255, 255), thickness=-1)
out = np.where(mask == np.array([255, 255, 255]), blurred_img, img)
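For what it's worth, the hard np.where switch above produces exactly this kind of visible edge; a gentler variant (a sketch, assuming the same img, blurred_img and mask as above, with img sized 2000x2000 like the mask) feathers the mask and alpha-blends instead:
# Feather the binary mask so the transition between the two images is gradual;
# the (101, 101) blend width is an assumption to tune
soft = cv2.GaussianBlur(mask.astype(np.float32) / 255.0, (101, 101), 0)
out = (soft * blurred_img + (1.0 - soft) * img).astype(np.uint8)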
I know some similar questions have already been asked here, but they didn't help me solve my problem. I would appreciate any help.
I'm new to OpenCV.
I have an image and apply some code to get contours from it. Now I want to get the RGB color values from the detected contours. How can I do that?
I did some research and found that this could be solved using contours, so I tried to implement them, and now I finally want to get the color values inside the contours.
Here is my code:
import cv2
import numpy as np

img = cv2.imread('C:/Users/Rizwan/Desktop/example_strip1.jpg')
img_hsv = cv2.cvtColor(255 - img, cv2.COLOR_BGR2HSV)
lower_red = np.array([40, 20, 0])
upper_red = np.array([95, 255, 255])
mask = cv2.inRange(img_hsv, lower_red, upper_red)
contours, _ = cv2.findContours(mask, cv2.RETR_TREE,
                               cv2.CHAIN_APPROX_SIMPLE)
color_detected_img = cv2.bitwise_and(img, img, mask=mask)
print(len(contours))
for c in contours:
    area = cv2.contourArea(c)
    x, y, w, h = cv2.boundingRect(c)
    ax = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 0), 2)
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    box = np.int32(box)  # np.int0 is removed in newer NumPy versions
    im = cv2.drawContours(color_detected_img, [box], -1, (255, 0, 0), 2)
cv2.imshow("Cropped", color_detected_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
I expect the output to be the RGB values of the detected color inside the contours.
As asked in the comments, here's a possible solution to extract the BGR(!) values from the pixels of an image inside a previously found contour. The proper detection of the desired colored stripes is omitted here, as also discussed in the comments.
Given an image and a filled mask of a contour, for example from cv2.drawContours, we can simply use NumPy's boolean array indexing by converting the (most likely uint8) mask to a bool_ array.
Here's a short code snippet that uses NumPy's savetxt to store all values in some txt file:
import cv2
import numpy as np
# Some dummy image
img = np.zeros((100, 100, 3), np.uint8)
img = cv2.rectangle(img, (0, 0), (49, 99), (255, 0, 0), cv2.FILLED)
img = cv2.rectangle(img, (50, 0), (99, 49), (0, 255, 0), cv2.FILLED)
img = cv2.rectangle(img, (50, 50), (99, 99), (0, 0, 255), cv2.FILLED)
# Mask of some dummy contour
mask = np.zeros((100, 100), np.uint8)
mask = cv2.fillPoly(mask, np.array([[[20, 20], [30, 70], [70, 50], [20, 20]]]), 255)
# Show only for visualization purposes
cv2.imshow('img', img)
cv2.imshow('mask', mask)
# Convert mask to boolean array
mask = np.bool_(mask)
# Use boolean array indexing to get all BGR values from img within mask
values = img[mask]
# For example, save values to txt file
np.savetxt('values.txt', values)
cv2.waitKey(0)
cv2.destroyAllWindows()
The dummy image looks like this:
The dummy contour mask looks like this:
The resulting values.txt has more than 1000 entries; please check for yourself. Attention: the values are BGR values; converting the image to RGB beforehand is needed to get RGB values.
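For example (a hypothetical addition to the snippet above, not part of the original answer), converting the image before indexing yields RGB triplets directly:
# Convert BGR to RGB first, then apply the same boolean mask
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
values_rgb = img_rgb[mask]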
Hope that helps!
I have a set of images, all of which look almost like this leaf here:
I want to extract the leaf from the background, for which I used the GrabCut algorithm as used here.
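For context, here is a minimal GrabCut sketch along those lines (the file path, ROI rectangle, and iteration count are assumptions, not the code from the link):
import cv2
import numpy as np

img = cv2.imread('path_to_the_image')
img = cv2.resize(img, (256, 256))
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
rect = (10, 10, 236, 236)  # hypothetical box around the leaf
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
# Keep definite and probable foreground pixels
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
leaf = img * fg[:, :, None]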
As a different approach, I also used thresholding based on ratios of r, g and b values as here:
import numpy as np
import cv2
import matplotlib.pyplot as plt
testImg = cv2.imread('path_to_the_image')
testImg = cv2.resize(testImg, (256, 256))
#bgImg = cv2.imread('')
#blurBg = cv2.GaussianBlur(bgImg, (5, 5), 0)
#blurBg = cv2.resize(blurBg, (256, 256))
#testImg = cv2.GaussianBlur(testImg, (5, 5), 0)
cv2.imshow('testImg', testImg)
#plt.imshow(bgImg)
cv2.waitKey(0)
#plt.show()
modiImg = testImg.copy()
ht, wd = modiImg.shape[:2]
print(modiImg[0][0][0])
for i in range(ht):
    for j in range(wd):
        # Note: cv2.imread returns BGR order, so channel 0 is actually blue;
        # the variable names below follow the original post
        r = modiImg[i][j][0]
        g = modiImg[i][j][1]
        b = modiImg[i][j][2]
        r1 = r/g
        r2 = g/b
        r3 = r/b
        r4 = round((r1+r2+r3)/3, 1)
        if g > r and g > b:
            modiImg[i][j] = [255, 255, 255]
        elif r4 >= 1.2:
            modiImg[i][j] = [255, 255, 255]
        else:
            modiImg[i][j] = [0, 0, 0]
        # if r4 <= 1.1:
        #     modiImg[i][j] = [0, 0, 0]
        # elif g > r and g > b:
        #     modiImg[i][j] = [255, 255, 255]
        # else:
        #     modiImg[i][j] = [255, 255, 255]
        # elif r4 >= 1.2:
        #     modiImg[i][j] = [255, 255, 255]
        # else:
        #     modiImg[i][j] = [0, 0, 0]
plt.imshow(modiImg)
plt.show()
testImg = testImg.astype(float)
alpha = modiImg.astype(float) / 255
testImg = cv2.multiply(alpha, testImg)
cv2.imshow('final', testImg/255)
cv2.waitKey(0)
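As an aside, the per-pixel loop above can be expressed as a vectorized NumPy computation (a sketch that keeps the same channel naming and thresholds as the loop; the epsilon guarding against division by zero is an assumption):
# Compute the three channel ratios over the whole image at once
r, g, b = [modiImg[..., k].astype(float) for k in range(3)]  # same naming as the loop above
eps = 1e-6  # avoids division by zero
r4 = np.round((r / (g + eps) + g / (b + eps) + r / (b + eps)) / 3, 1)
white = ((g > r) & (g > b)) | (r4 >= 1.2)
modiImg[white] = [255, 255, 255]
modiImg[~white] = [0, 0, 0]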
But the dark spots on the leaf always go missing in the extracted leaf image, as shown here:
Is there any other method to separate the leaf from its background, given that there is only one leaf per image, the background is almost the same across my other images, and the leaves are positioned almost identically to this one?
You can try image segmentation using the HSV color space.
Code:
import cv2

img = cv2.imread('leaf.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# find the green color
mask_green = cv2.inRange(hsv, (36, 0, 0), (86, 255, 255))
# find the brown color
mask_brown = cv2.inRange(hsv, (8, 60, 20), (30, 255, 200))
# find the yellow color in the leaf
mask_yellow = cv2.inRange(hsv, (21, 39, 64), (40, 255, 255))
# find any of the three colors (green, brown, or yellow) in the image
mask = cv2.bitwise_or(mask_green, mask_brown)
mask = cv2.bitwise_or(mask, mask_yellow)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow("original", img)
cv2.imshow("final image", res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
Moreover, if you change the lower bound of the yellow range from (21, 39, 64) to (14, 39, 64), you will see that the small black spots on the leaf start to fill in, improving the result even further.
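That is, only the yellow mask line changes; everything else stays as above:
# Lower hue bound reduced from 21 to 14 to also pick up the dark spots
mask_yellow = cv2.inRange(hsv, (14, 39, 64), (40, 255, 255))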
You may want to use a deep learning method. U-Net performs quite well on tasks like this: https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/. As far as I can see, they also provide a trained model. If you have MATLAB and Caffe installed, you should be able to copy your files into the right folder, run the program, and receive the results you are looking for.
Thresholding is not a good idea for this kind of task. Your method should be able to recognize patterns instead of just looking at a pixel's color.
A drawback of the deep learning method, though, is that you either need a pretrained network that has been trained on segmenting RGB images of leaves, or you need data (RGB images of leaves and the corresponding segmentations).
I've detected contours in an image using OpenCV in Python; now I need to black out the image outside the contour. Could anyone help me do this?
Given your found contours, use drawContours to create a binary mask in which your contours are filled. Depending on how you do that (black image with white contours vs. white image with black contours), you set all pixels in your input image to 0 except for the masked (or unmasked) ones. See the following code snippet for a visualization:
import cv2
import numpy as np
# Artificial input
input = np.uint8(128 * np.ones((200, 100, 3)))
cv2.rectangle(input, (10, 10), (40, 60), (255, 240, 172), cv2.FILLED)
cv2.circle(input, (70, 100), 20, (172, 172, 255), cv2.FILLED)
# Input to grayscale
gray = cv2.cvtColor(input, cv2.COLOR_RGB2GRAY)
# Simple binary threshold
_, gray = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
# Find contours
cnts, _ = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Generate mask
mask = np.ones(gray.shape)
mask = cv2.drawContours(mask, cnts, -1, 0, cv2.FILLED)
# Generate output
output = input.copy()
output[mask.astype(bool), :] = 0  # np.bool is removed in newer NumPy; use the builtin bool
cv2.imwrite("images/input.png", input)
cv2.imwrite("images/mask.png", np.uint8(255 * mask))
cv2.imwrite("images/output.png", output)
The artificial input image:
The mask generated during processing:
The final output: