Fill Contour/Pixels within other Contours | Detecting Boxes using Depth Images - python

I'm using OpenCV to find boxes within a depth grayscale image.
First, I cut the depth image from the top of the box as shown below:
Next, I cut the depth image from the bottom of the box as shown below:
After some filtering, closing and doing some basic operations, I get an image of the top cut merged with where the box bottoms are along with bounding boxes. Please see the image below:
The goal now is to stretch the inner rectangles until they touch the corners of the top edges. The difficulty is that the boxes can be rotated, which makes filling much harder than simply looping horizontally/vertically and filling in gaps.
One approach would be to take a side of the inner rectangle and keep moving it in the direction of stretching until it touches pixels of value 255. However, this takes a lot of time and isn't efficient.
Another approach is to eliminate all pixels that don't lie along the direction of stretching, find the contours of the remaining objects, and attempt the same method.
A third approach, and the one that makes the most sense to me:
- Take one side of the inner rectangle and extend the two segments connected to it as full lines across the entire image.
- Ignore anything outside those two lines.
- Get a list of the contour coordinates of the remaining objects in the direction of stretching/filling.
- Find the pixel closest to the side; this is where the box should be stretched to.
This is still rather inefficient.
Any help, guidance, or insights would be much appreciated. Thank you!

Related

Corner detection: getting rid of unwanted corners

I'd like to find the corners of the following box
However, as you can see, I'm detecting a lot of corners I don't want. I'm completely stuck on this one; no matter what I try, I always seem to find corners in the dots on the box. I used the function goodFeaturesToTrack(), but I also tried cornerHarris().
The most important thing to me is to find the coordinates of the corner pixels so I can draw a wire frame.
Kind regards, Schweini
Edit:
To draw the wire frame onto the image, the following process could work.
When you extract the outline of the black box region, the outline consists of 6 straight line segments.
Therefore, you should be able to find at least 6 corners of the box as the intersections of adjacent line segments.
Additionally, the outlines of the box's visible surfaces can be coarsely estimated from adjacent line-segment pairs (assuming each face projects to a parallelogram).
This lets you estimate the positions of the remaining two corners (to draw the wire frame).
Furthermore, if you want, comparing this estimate with your corner-detection result can refine the coordinates of each corner.

find the polygon enclosing the given coordinates and find the coordinates of polygon (python opencv)

Example image used in program
I am trying to find the coordinates of a polygon in an image
(just like the flood-fill algorithm: we are given a coordinate and we search the surrounding pixels for the boundary; if a boundary pixel is found, we append its coordinate to the list, otherwise we keep searching other pixels). Once all pixels are traversed, the program should stop and return the list of pixels.
Usually the colour of the boundary is black, and the image is a grayscale image of a building floor plan.
It seems that flood-fill will be good enough to completely fill a room, despite the extra annotations. After filling, extract the outer outline. Now you can detect the straight portions of the outline by checking the angle formed by three successive points. I would keep a spacing between them to avoid local inaccuracies.
You will find a sequence of line segments, possibly interrupted at corners. Optionally use line fitting to maximize accuracy, and recompute the corners by intersecting the segments. Also consider joining aligned segments that are interrupted by short excursions.
If the rooms are not well closed, flood filling can leak and you are a little stuck. Consider filling with a larger brush, though this can cause other problems.

How to detect edge of object using OpenCV

I am trying to use OpenCV to measure the size of filament (the plastic material used for 3D printing).
The idea is that I use an LED panel to backlight the filament, take an image with a camera, preprocess the image, apply edge detection, and calculate its size. Most filaments are made of a single colour, which is easy to preprocess and gives fine results.
The problem comes with transparent filament: I am not able to get useful results. I would like to ask for a little help, or for someone to point me in the right direction. I have already tried cropping the image to a height just above the filament and a width of just a few pixels, and calculating the size from the number of pixels in those crops, but this did not work very well. So now I am trying to do it with edge detection.
Works well for filaments of a single colour.
Does not work for transparent filament.
The code below works just fine for common filaments; the problem is when I try to use it on transparent filament. I have tried adjusting the thresholds for the Canny function and tried different colour spaces, but I am not able to get good results.
Images that may help to understand:
https://imgur.com/gallery/CIv7fxY
import cv2 as cv

image = cv.imread("../images/img_fil_2.PNG")   # load image
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)   # convert image to grayscale
edges = cv.Canny(gray, 100, 200)               # detect edges of image
You can use the assumption that the images are taken under the same conditions.
Your main problem is that the reflections in the transparent filament are detected as edges. But, since the image is relatively simple, without any other edges, you can simply take the upper and the lower edge, and measure the distance between them.
A simple way of doing this is to take 2 vertical lines (e.g. image sides), find the edges that intersect the line (basically traverse a column in the image and find edge pixels), and connect the highest and the lowest points to form the edges of the filament. This also removes the curvature in the filament, which I assume is not needed for your application.
You might want to use 3 or 4 vertical lines, for robustness.

How can I merge rectangles with small area (OpenCV, Python)

I'm using OpenCV to detect rectangles in my image so I can crop them later (the rectangles contain products).
The first thing I did was detect vertical and horizontal lines and combine them to get this picture: Image_With_Small_rectangles
As you can see on the left side of my image, I have some undesirable vertical/horizontal lines that create small rectangles.
As I'm new to OpenCV, I want to know if it's possible to merge rectangles with a small area: (if area < min_area) merge_rectangle
So I can get a clean image like this one: Clean_Image
Many thanks in advance!

why are my contours not being totally filled in opencv?

I am processing video that looks like this (these are moving blobs):
I am successfully able to do basic processing in opencv to find contours around the blobs, draw circles around them, etc. Below is some tracking info drawn over an inverted image of the original video frame:
I would like to do a projection of these moving blob sequences, such that I get an image with a black trace of the blobs movement. I call this a blob track. I think I'm using the term "projection" correctly, because if this was a 3d stack it would be called a "z-projection" (only the projection this time is through time/frames in the video).
When I do this, I get an image of the blob track that is close to what I want, but there are tiny green pixels inside that I do not expect to be there considering I am filling a contour with black and then merging these filled contours. I get something like this:
Note the small green pixels present inside the blob track. They might seem subtle, but I don't want them there and can't figure out why they are there considering all I am doing in the code is stamping black blobs on top of one-another. The fact that they are green implies to me that they are part of the green background image on which I draw the blob tracks.
I know my filling is working because if I take a single frame instead of making a blob-track I get this, which looks good:
Something must be going wrong with the way I am doing the projection (addition of the filled contours through time).
The code for the single frame (working and shown up above) is:
cv2.drawContours(contourimage,bigbadlistofcontours[0],-1,(0,0,0),-1)
where contourimage is a green image the size of my frame, and bigbadlistofcontours[0] is the first entry in a list of my contours, and as you can see bigbadlistofcontours[0] contains two contours, which represent the two blobs, drawn successfully above.
The code for adding/projecting the multiple frames (not working and having these small green pixels inside) is:
for xx in bigbadlistofcontours:
    cv2.drawContours(contourimage, xx[0], -1, (0,0,0), -1)
    cv2.drawContours(contourimage, xx[1], -1, (0,0,0), -1)
    #cv2.fillPoly(contourimage, pts=xx[0], color=(0,0,0))
    #cv2.fillPoly(contourimage, pts=xx[1], color=(0,0,0))
As you can see I tried it using two methods - one using drawContours, and the other using fillPoly. Both produce small pixels inside the blob track that should not be there. Any idea what could cause these pixels to be there?
Make a small correction to your code and try below code:
for xx in bigbadlistofcontours:
    cv2.drawContours(contourimage, [xx[0]], -1, (0,0,0), -1)
    cv2.drawContours(contourimage, [xx[1]], -1, (0,0,0), -1)
Or simply try the following:
for xx in bigbadlistofcontours:
    cv2.drawContours(contourimage, xx, -1, (0,0,0), -1)
drawContours needs a list of contours as its argument, i.e. a list of numpy arrays. When you passed bigbadlistofcontours[0], it was a list of two numpy arrays, i.e. two contours. But inside the loop you passed xx[0] and xx[1], which are single numpy arrays. In that case, each row of the array is treated as a separate one-point contour, so only points are drawn rather than the filled contours.
More Details on Contours
