I am currently trying to contour a human body in an image, but I am stuck right now.
I have watched several video lectures on contours, but they only covered objects like rectangles, circles, and other simple shapes.
Can someone guide me on contouring a human body? This picture shows an example of the contour I am looking for.
You have to understand that detecting a human body is not so simple, because it is hard to differentiate the background from the body. That said, if you have a simple background like the one in the uploaded image, you can try applying a number of image transformations (binary threshold, Otsu, etc.; see the OpenCV documentation) to make your ROI "stand out" so you can detect it with cv2.findContours(), the same as drawing contours for circles, squares, and so on. You can also apply cv2.Canny() (Canny edge detection), which detects a wide range of edges in the image, and then search for contours in the edge map. Here is an example for your image (the results could be better if the image didn't already have a red contour drawn around the body). The steps are described in the comments in the code. Note that this is very basic and would not work in most cases, as human detection is a very difficult and broad problem.
Example:
import cv2
# Read image and convert it to grayscale.
img = cv2.imread('human.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Search for edges in the grayscale image with cv2.Canny().
edges = cv2.Canny(gray, 150, 200)
# Search for contours in the edge image with cv2.findContours().
_, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# Filter out contours that are not in your interest by applying size criterion.
for cnt in contours:
    size = cv2.contourArea(cnt)
    if size > 100:
        cv2.drawContours(img, [cnt], -1, (255, 0, 0), 3)
# Display the image.
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Here is another useful link in the OpenCV documentation regarding this subject: Background Subtraction. Hope it helps a bit and gives you an idea of how to proceed. Cheers!
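For completeness, a minimal sketch of the background-subtraction technique from that link, applied to a video stream rather than a single image; the video file name here is just a placeholder:
import cv2

# Hypothetical input video; replace with your own source (a webcam index also works).
cap = cv2.VideoCapture('people.mp4')
# MOG2 learns a background model and marks pixels that differ from it as foreground.
back_sub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Foreground mask: white where the frame differs from the learned background.
    fg_mask = back_sub.apply(frame)
    cv2.imshow('foreground mask', fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()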
We print 500 bubble surveys, get them back, and scan them in a giant batch giving us 500 PNG images.
Each image has slight variations in alignment, but identical size and resolution. We need to register the images so they're all perfectly aligned (with the next step being semi-automated scoring of the bubbles).
If these were 3D MRI images, I could accomplish this with a single command-line utility, but I'm not seeing any such tool for aligning scanned text documents.
I've played around with OpenCV as described in Image Alignment (Feature Based) using OpenCV, and it produces dynamite results when it works, but it often fails spectacularly. That approach looks for documents hidden within natural scenes, a much harder problem than our case, where the images are just rotated and translated in 2D, not 3D.
I've also explored imreg_dft, which runs consistently but does a very poor job -- presumably the dft approach is better on photographs than text documents.
Does a solution for Image Registration of Scanned Forms already exist? If not, what's the correct approach? OpenCV, imreg_dft, or something else?
Similar prior question: How to find blank field on scanned document image
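As one possible direction, here is a minimal sketch of a rigid (rotation plus translation) alignment using OpenCV's ECC algorithm; the file names and parameter values are placeholders, not a tested solution for these forms:
import cv2
import numpy as np

# Hypothetical file names: a reference scan and a scan to align to it.
reference = cv2.imread('reference_form.png', cv2.IMREAD_GRAYSCALE)
scan = cv2.imread('scan_042.png', cv2.IMREAD_GRAYSCALE)

# Euclidean motion model: rotation and translation only.
warp_matrix = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)

# Estimate the transform that maps the scan onto the reference.
cc, warp_matrix = cv2.findTransformECC(reference, scan, warp_matrix,
                                       cv2.MOTION_EUCLIDEAN, criteria)

# Warp the scan so it lines up with the reference.
aligned = cv2.warpAffine(scan, warp_matrix,
                         (reference.shape[1], reference.shape[0]),
                         flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
cv2.imwrite('scan_042_aligned.png', aligned)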
What you can try is using the red outline of the answer boxes to create a mask from which you can select the outline. I created a sample below. You can also remove the blue letters by creating a mask for the letters, inverting it, then applying it as a mask. I didn't do that because the publisher's image is low-res and it caused issues; I expect your scans to perform better.
When you have the contours of the boxes you can transform/compare them individually (as the boxes have different sizes), or you can use the biggest contour to create a transform for the entire document.
You can then use minAreaRect to find the corner points of the contours. Threshold the contourArea to exclude noise / non-answer areas (see the sketch after the code below).
import cv2
import numpy as np
# load image
img = cv2.imread('Untitled.png')
# convert to hsv colorspace
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of the image background in HSV
lower_val = np.array([0,0,0])
upper_val = np.array([179,255,237])
# Threshold the HSV image
mask = cv2.inRange(hsv, lower_val, upper_val)
# find external contours in the mask
contours, hier = cv2.findContours(mask, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
# draw contours
for cnt in contours:
    cv2.drawContours(img, [cnt], 0, (0, 255, 0), 3)
# display image
cv2.imshow('Result', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
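Following up on the minAreaRect suggestion above, a minimal sketch of how the corner points could be pulled out of the detected contours; the area threshold is only a placeholder and would need tuning for real scans:
import cv2
import numpy as np

# Continuing from the code above, where `contours` and `img` are already defined.
for cnt in contours:
    # Skip small contours that are likely noise rather than answer boxes.
    if cv2.contourArea(cnt) < 1000:  # placeholder threshold
        continue
    # Minimum-area rotated rectangle around the contour.
    rect = cv2.minAreaRect(cnt)
    corners = np.int32(cv2.boxPoints(rect))  # the 4 corner points in pixel coordinates
    print(corners)
    cv2.drawContours(img, [corners], 0, (255, 0, 0), 2)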
I am very new to OpenCV (and to Stack Overflow). I'm writing a program with OpenCV which takes a picture of an object (e.g. a pen, rice, or a phone placed on paper) and calculates what percentage of the picture the object occupies.
The problem I'm facing is that when I threshold the image (I tried adaptive and Otsu), the photo has a bit of shadow around the edges:
Original image
Resulted picture
And here's my code:
import cv2
img = cv2.imread("image.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
b,g,r = cv2.split(img)
th, thresh = cv2.threshold(b, 100, 255, cv2.THRESH_BINARY|cv2.THRESH_OTSU)
cv2.imwrite("image_bl_wh.png", thresh)
I tried blurring and morphology, but couldn't get it to work.
How can I make my program treat the dark parts around the edges as background, and is there a better or easier way to do this?
P.S. Sorry for my English grammar mistakes.
This is not a programmatic solution, but when you do automated visual inspection it is the first thing you should try: improve your set-up. The image is simply darker around the edges, so increasing the brightness when recording the images should help.
If that's not an option, you could consider having an empty image for comparison. What you are trying to do is background segmentation, and there are better ways than simple color thresholding; however, they usually require at least one image of the background, or several images.
If you want a software-only solution, you should try an edge detector combined with morphological operators.
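For the empty-image comparison mentioned above, a minimal sketch could look like this; the file names and the threshold value are placeholders:
import cv2

# Hypothetical file names: an empty background shot and the shot with the object.
background = cv2.imread('background.png', cv2.IMREAD_GRAYSCALE)
scene = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)

# Absolute per-pixel difference highlights where the object is.
diff = cv2.absdiff(background, scene)

# Threshold the difference, then remove small speckles with a morphological opening.
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Object coverage as a percentage of the picture area.
coverage = 100.0 * cv2.countNonZero(mask) / mask.size
print(f"object covers {coverage:.1f}% of the picture")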
My problem is that I want to differentiate the light and dark areas in the following image to generate a binary mask.
https://i.stack.imgur.com/7ZRKB.jpg
An approximation to the output can be this:
https://i.stack.imgur.com/2UuJb.jpg
I've tried a lot of things, but the results still have some noise, or I lose a lot of data, like in this image:
https://i.stack.imgur.com/hUyjY.png
I've used Python with OpenCV and NumPy, Gaussian filters, opening, closing, etc.
Does somebody have an idea of how to do this?
Thanks in advance!
I first reduced the size of the image using pyrDown, then used CLAHE to equalize the histogram. I used medianBlur, as this creates patches, then applied opening 3 times. After that it was a simple THRESH_BINARY_INV threshold. If you want to get back to the original image size, use cv2.pyrUp on the image. By playing with the parameters you can get better results.
import cv2
# Read the image as grayscale and halve its size.
image = cv2.imread("7ZRKB.jpg", 0)
image = cv2.pyrDown(image)
# Equalize the histogram locally with CLAHE.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(16, 16))
image = clahe.apply(image)
# Median blur to merge the texture into larger patches.
image = cv2.medianBlur(image, 7)
# Morphological opening to remove small bright specks.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
image = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel, iterations=3)
# Inverted binary threshold: dark areas become white in the mask.
ret, image = cv2.threshold(image, 150, 255, cv2.THRESH_BINARY_INV)
cv2.imshow("image", image)
cv2.waitKey()
cv2.destroyAllWindows()
Result:
I have an image and some ROIs (in the form of separate files) cropped from it. I want to find the coordinates of the corner points of these ROIs in the original image.
Is there a simple way to do this besides checking all pixel values in the original image?
Example -
Entire Image
Cropped ROI
I need to find the coordinates of the car in the original image.
If the cropped image isn't rotated or scaled, you can do template matching. Your ROI image will be your template. Check this; it will draw a rectangle over your original image, from which you can get the coordinates.
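A minimal sketch of that template matching, with placeholder file names:
import cv2

# Hypothetical file names: the full image and the cropped ROI used as a template.
img = cv2.imread('original.jpg')
template = cv2.imread('roi_car.jpg')
h, w = template.shape[:2]

# Slide the template over the image and score every position.
result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

# Top-left corner of the best match; the other corners follow from the template size.
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
print('corners:', top_left, bottom_right)
cv2.rectangle(img, top_left, bottom_right, (0, 255, 0), 2)
cv2.imwrite('match.jpg', img)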
If the crops are exact (i.e. no changes from the original pixels were introduced after cropping, for example as a result of JPEG compression when saving), then you could use integral images to speed up the search.
Compute the sum of the pixel values of the cropped image, then the integral image of the original one, and then use the integral image to search for a window the same size as the crop that has an equal sum. This algorithm is, of course, linear time in the size of the original image.
Note that the sum is a "weak" signature, so you may find multiple matches, but you can then verify the candidate matches directly at the pixel level.
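A minimal sketch of that integral-image search, assuming grayscale images and placeholder file names:
import cv2
import numpy as np

# Hypothetical file names: the original image and an exact (unmodified) crop from it.
img = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE)
crop = cv2.imread('crop.png', cv2.IMREAD_GRAYSCALE)
ch, cw = crop.shape
target_sum = int(crop.sum())

# Integral image: integral[y, x] holds the sum of all pixels above and to the left of (y, x).
integral = cv2.integral(img)

# Window sums for every top-left position, from four shifted views of the integral image.
window_sums = (integral[ch:, cw:] - integral[:-ch, cw:]
               - integral[ch:, :-cw] + integral[:-ch, :-cw])

# Candidate positions whose sum matches the crop; verify each at the pixel level.
ys, xs = np.where(window_sums == target_sum)
for y, x in zip(ys, xs):
    if np.array_equal(img[y:y + ch, x:x + cw], crop):
        print('top-left corner of the crop:', (x, y))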
I'm working with the following input image:
I want to extract all the boxes inside the original image as individual images, along with their positions, so that I can reconstruct the image after doing some operations on them. Currently I'm trying to detect contours in the image using OpenCV, but the problem is that it also extracts all the words inside the boxes. The output looks something like this:
Is there any way to set the dimensions of the boxes to be extracted, or is something else required for this?
Fairly simple approach:
Convert to grayscale.
Invert the image (to avoid getting a top-level contour detected around the whole image; we want the lines white and the background black).
Find external contours only (we don't have any nested boxes).
Filter contours by area, discard the small ones.
You could possibly also filter by bounding box dimensions, etc. Feel free to experiment.
Example Script
Note: for OpenCV 2.4.x
import cv2
# Load the image and convert it to grayscale.
img = cv2.imread('cnt2.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Invert so the box lines are white and the background is black.
gray = 255 - gray
# Find only the external contours (no nested boxes expected).
contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Keep only contours with a sufficiently large area.
for contour in contours:
    area = cv2.contourArea(contour)
    if area > 500.0:
        cv2.drawContours(img, [contour], -1, (0, 255, 0), 1)
cv2.imwrite('cnt2_out.png', img)
Example Output