I have the following problem that I want to solve:
A paper cup is photographed from its side, and its edges and contours are found with Canny and the findContours function (please see IMG1 attached below). I then have to test whether that contour is straight (without any bumps or other imperfections) and return whether it passed or not.
EDIT: as per Christoph Rackwitz's recommendation, I'm posting the original, unaltered image.
I have successfully implemented edge detection as shown in IMG1 below, but now I want to remove the parts of the edges that are not used and get the result shown in IMG2.
What would be the best way of doing it?
A few things that come to mind:
Masks (mask off the other paths)
Somehow delete paths whose x coordinates are radically different.
Yet another issue is detecting bumps on that curvy path. I'm thinking about a few possible solutions:
Loop through the contour and look for spikes in the x coordinates (radical changes, e.g. from 20 to 40)
Draw an arc of similar radius and compare it with that contour.
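The first bump-detection idea (spikes in the x coordinates) could be sketched roughly like this; `find_bumps`, the threshold of 15, and the toy contour are all made up for illustration:

```python
import numpy as np

def find_bumps(cnt, max_jump=15):
    """Return indices where x jumps sharply between consecutive points.

    cnt is an OpenCV-style contour of shape (N, 1, 2); max_jump is an
    arbitrary pixel threshold that would need tuning on real images.
    """
    xs = cnt[:, 0, 0].astype(np.int32)
    jumps = np.abs(np.diff(xs))
    return np.nonzero(jumps > max_jump)[0]

# toy contour: a straight vertical edge with one spike at index 5
cnt = np.array([[[10, y]] for y in range(10)])
cnt[5, 0, 0] = 40
print(find_bumps(cnt))  # flags the jumps on either side of the spike
```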
The best result is shown in the IMG3 image; that would be the perfect way of solving this problem for me.
Maybe there is someone with more OpenCV experience who could light up my path to solving this problem. Thank you!
My code is as follows:
import numpy as np
import cv2
# Read the original image
img = cv2.imread('test2.jpg')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_blur = cv2.GaussianBlur(img_gray, (3,3), 0)
edges = cv2.Canny(image=img_blur, threshold1=110, threshold2=180) # Canny Edge Detection
ret, thresh = cv2.threshold(edges, 110, 180, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(image=thresh, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_NONE)
# print(contours)
for cnt in contours:
    print(cv2.arcLength(cnt, False))
image_copy = img.copy()
cv2.drawContours(image=image_copy, contours=contours, contourIdx=-1, color=(255, 0, 0), thickness=1, lineType=cv2.LINE_AA)
cv2.imshow('Binary image', image_copy)
cv2.waitKey(0)
cv2.destroyAllWindows()
IMG1 This is what is currently detected by OpenCV
IMG2 Good edge detection that I strive to achieve
IMG3 Perfect result with combined expected path and defect in one image
IMG4 Original image for testing and so on
Related
I have a number of grayscale images like the left one below. I only want to keep the outer edges of the object in the image using Python, preferably OpenCV. I tried OpenCV erosion and got the right image below. However, as seen in the image, there are still brighter areas inside the object. I want to remove those and keep only the outer edge. What would be an efficient way to achieve this? Simple thresholding will not work, because the bright "stripes" inside the object may sometimes be brighter than the edges. Edge detection will also detect edges inside the "main" edges.
The right image, "eroded_areas", is produced by the following code:
import copy
import cv2
import numpy as np
from matplotlib import pyplot as plt

im = cv2.imread(im_path, cv2.IMREAD_GRAYSCALE)
im_eroded = copy.deepcopy(im)
kernel_size = 3
kernel = np.ones((kernel_size, kernel_size), np.uint8)
im_eroded = cv2.erode(im_eroded, kernel, iterations=1)
eroded_areas = im - im_eroded
plt.imshow(eroded_areas)
Here is one approach in Python/OpenCV.
Read the input as grayscale
Do Otsu thresholding
Get contours and filter out very small defects
Draw the contours as white on a black background
Save the results
Input:
import cv2
import numpy as np
# read the input as grayscale
img = cv2.imread('long_blob.png', cv2.IMREAD_GRAYSCALE)
# threshold
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# get contours and filter out small defects
result = np.zeros_like(img)
contours = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    area = cv2.contourArea(cntr)
    if area > 20:
        cv2.drawContours(result, [cntr], 0, 255, 1)
# save results
cv2.imwrite('long_blob_contours.png', result)
# show results
cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey(0)
Result:
Did you try Canny edge detection on it? The algorithm has a non-max suppression step that removes those lighter tones. I believe the Canny edge detector is built into OpenCV, but if not, here's a guide to it: https://en.wikipedia.org/wiki/Canny_edge_detector
OpenCV documentation:
https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html
Code example:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
img = cv.imread('messi5.jpg',0)
edges = cv.Canny(img,100,200)
plt.subplot(121),plt.imshow(img,cmap = 'gray')
plt.title('Original Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(edges,cmap = 'gray')
plt.title('Edge Image'), plt.xticks([]), plt.yticks([])
plt.show()
This should work on the first image, since the grayscale gradually changes through lighter and darker tones.
If this still doesn't work, you could try making the high tones higher and the low tones lower using pixel-wise multiplication, then running a threshold after the 'im_eroded = cv2.erode(im_eroded, kernel, iterations=1)' step.
I just thought of a simpler solution that probably only works for your current example image. Since you have it masked perfectly, you could change everything that is not pure black to pure white, and then run an edge detector on it. It should pick up the main outline and those little holes while ignoring the lighter stripes.
There might be a better way to do this, but this is the best I've got for now.
I have a microscopy image and need to calculate the area shown in red. The idea is to build a function that returns the area inside the red line in the right photo (a float value, X mm²).
Since I have almost no experience in image processing, I don't know how to approach this (maybe silly) problem, so I'm looking for help. Other example images are pretty similar, with just one agglomerated "interest area" close to the center.
I'm comfortable coding in python and have used the software ImageJ for some time.
Any python package, software, bibliography, etc. should help.
Thanks!
EDIT:
I made the red example manually, just to show what I want. Detecting the "interest area" must be done inside the code.
Canny, morphological transformations and contours can provide a decent result, although it might need some fine-tuning depending on the input images.
import numpy as np
import cv2
# Change this with your filename
image = cv2.imread('test.png', cv2.IMREAD_GRAYSCALE)
# You can fine-tune this, or try with simple threshold
canny = cv2.Canny(image, 50, 580)
# Morphological Transformations
se = np.ones((7,7), dtype='uint8')
image_close = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, se)
contours, _ = cv2.findContours(image_close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Create black canvas to draw area on
mask = np.zeros(image.shape[:2], np.uint8)
biggest_contour = max(contours, key=cv2.contourArea)
# wrap the contour in a list so drawContours fills it as one contour
cv2.drawContours(mask, [biggest_contour], -1, 255, -1)
area = cv2.contourArea(biggest_contour)
print(f"Total area image: {image.shape[0] * image.shape[1]} pixels")
print(f"Total area contour: {area} pixels")
cv2.imwrite('mask.png', mask)
cv2.imshow('img', mask)
cv2.waitKey(0)
# Draw the contour on the original image for demonstration purposes
original = cv2.imread('test.png')
cv2.drawContours(original, [biggest_contour], -1, (0, 0, 255), -1)
cv2.imwrite('result.png', original)
cv2.imshow('result', original)
cv2.waitKey(0)
The code produces the following output:
Total area image: 332628 pixels
Total area contour: 85894.5 pixels
The only thing left to do is convert the pixels to your preferred unit of measurement.
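The conversion itself is just the square of the linear calibration factor (a sketch; the 120 px/mm calibration is made up and would normally come from your microscope's scale bar):

```python
# made-up calibration: suppose 120 pixels correspond to 1 mm on the scale bar
pixels_per_mm = 120.0
area_px = 85894.5  # contour area reported by the script above

# area scales with the square of the linear calibration factor
area_mm2 = area_px / (pixels_per_mm ** 2)
print(f"Area: {area_mm2:.3f} mm²")
```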
Two images for demonstration below.
Result
Mask
I don't know much about this topic, but it seems like a very similar question to this one, which has what look like two great answers:
Python: calculate an area within an irregular contour line
I'm trying to find the contour of an image using cv2. There are many related questions, but the answer always appear to be very specific and not applicable to my case.
I have a black-and-white image that I convert to grayscale.
thresh = cv2.cvtColor(thresh, cv2.COLOR_RGB2GRAY)
plt.imshow(thresh)
Next, I try to find the contours.
image, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
and then I visualize it by plotting it on a black background.
blank_image = np.zeros((thresh.shape[0],thresh.shape[1],3), np.uint8)
img = cv2.drawContours(blank_image, contours, 0, (255,255,255), 3)
plt.imshow(img)
The drawn contour follows the actual outline, i.e. it surrounds the whole thing. How do I get something like this very bad Paint impression instead:
You can use Canny edge detection to do this:
import cv2
frame = cv2.imread("iCyrOT3.png") # read a frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # turn it gray
edges = cv2.Canny(gray, 100, 200) # get canny edges
cv2.imshow('Test', edges) # display the result
cv2.waitKey(0)
Similar questions have been asked, but none of them seem to help in my case (nevertheless, I learned a few things from those threads).
I am using Tesseract for OCR, but the results are far from satisfactory when the text is slightly skewed (see the image above).
Inspired by similar cases, I tried to use OpenCV to detect and fix the skew, but unfortunately it just doesn't seem to work. Below, you can see my current attempt, which doesn't yield the necessary result. What I get is just another bounding box around the image (that has already been cropped).
import cv2
from matplotlib import pyplot as plt
import numpy as np
img = cv2.imread("skew.JPG")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#gray = cv2.bitwise_not(gray)
ret,thresh1 = cv2.threshold(gray, 0, 255 ,cv2.THRESH_OTSU)
rect_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 2))
dilation = cv2.dilate(thresh1, rect_kernel, iterations = 1)
cv2.imshow('dilation', dilation)
cv2.waitKey(0)
cv2.destroyAllWindows()
contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    rect = cv2.minAreaRect(cnt)
    box = cv2.boxPoints(rect)
    box = np.int32(box)  # np.int0 was removed in recent NumPy versions
    cv2.drawContours(img, [box], 0, (0, 0, 255), 3)
cv2.imshow('final', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
I would appreciate any advice.
Tesseract seems to have a lot of trouble when the text has some distortion.
The idea is to find the contour of the text to be able to undistort the image and then use Tesseract.
The contour is generally a rectangle which has undergone the same distortion as the text. So it does not appear as a perfect rectangle in your image anymore. Opencv gives you different methods to find it. cv2.minAreaRect() finds the best rotated rectangle. It may be sufficient depending on the distortion of your text. Otherwise, you can use cv2.convexHull() to better fit your text.
The contour should give you the corners of the text that you want to remap to a regular rectangle. You can do that with:
cv2.getAffineTransform(corners, dest_corners) # requires 3 points
cv2.getPerspectiveTransform(corners, dest_corners) # requires 4 points
and then
cv2.warpAffine(...)
cv2.warpPerspective(...)
Also, don't forget to correctly set the page segmentation method that Tesseract should use (https://github.com/tesseract-ocr/tesseract/wiki/ImproveQuality). In your case, "6 Assume a single uniform block of text." seems appropriate.
I am new to OpenCV and Python. I made a program that finds contours with an area above 500 and saves them into a new image. I used boundingRect as advised on the internet; it runs and does the job well, but I have a problem with the output image: noise beside the region of interest is also saved. As you can see in the image below, there are some tiny shapes beside the ROI. The output is good for other images; I just want to get rid of noise like this. Is there a way to remove this kind of noise from the output?
Here is the output of the program I made:
Here is the input image:
Hide with contouring
This solution uses cv2.drawContours() to simply draw black contours over the noise. I ran the black and white sample image through a few iterations of dilation, filtered contours by area, and then drew black contour lines over the noise. I used the threshold feature because there turned out to be a good bit of minuscule noise in what initially appeared to be a simple black and white image.
Input:
Code:
import cv2
import numpy as np

thresh_value = 10
img = cv2.imread("cells_BW.jpg")
img = cv2.medianBlur(img, 5)
kernel = np.ones((3, 3), np.uint8)
dilation = cv2.dilate(img, kernel, iterations=3)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
(T, thresh) = cv2.threshold(img_gray, thresh_value, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = [i for i in contours if cv2.contourArea(i) < 5000]
cv2.drawContours(img, contours, -1, (0, 0, 0), 10, lineType=8)
cv2.imwrite("cells_BW_CLEAN.jpg", img)
Output:
There could be several solutions, depending on the assumptions about the input data.
Probable methods
If the ROI has a significantly different color than the rest,
1-1. You can threshold the input image using RGB before finding the contour.
If the area of the object you want to find is significantly bigger than the others,
2-1. Fill the holes as in this example
2-2. Calculate the size of the blobs, and exclude all blobs except the largest one (example of calculating blob sizes).
If there is an intersection point between the contours of multiple objects, Method 2 will surely fail to segment the region of a single cell.