I am processing video that looks like this (these are moving blobs):
I am successfully able to do basic processing in opencv to find contours around the blobs, draw circles around them, etc. Below is some tracking info drawn over an inverted image of the original video frame:
I would like to do a projection of these moving blob sequences, such that I get an image with a black trace of the blobs' movement. I call this a blob track. I think I'm using the term "projection" correctly: if this were a 3D stack it would be called a "z-projection", except here the projection is through time/frames of the video.
When I do this, I get an image of the blob track that is close to what I want, but there are tiny green pixels inside that I do not expect to be there considering I am filling a contour with black and then merging these filled contours. I get something like this:
Note the small green pixels present inside the blob track. They might seem subtle, but I don't want them there and can't figure out why they are there, considering all I am doing in the code is stamping black blobs on top of one another. The fact that they are green implies to me that they are part of the green background image on which I draw the blob tracks.
I know my filling is working because if I take a single frame instead of making a blob-track I get this, which looks good:
Something must be going wrong with the way I am doing the projection (addition of the filled contours through time).
The code for the single frame (working and shown up above) is:
cv2.drawContours(contourimage,bigbadlistofcontours[0],-1,(0,0,0),-1)
where contourimage is a green image the size of my frame, and bigbadlistofcontours[0] is the first entry in a list of my contours. As you can see, bigbadlistofcontours[0] contains two contours, which represent the two blobs drawn successfully above.
The code for adding/projecting the multiple frames (not working and having these small green pixels inside) is:
for xx in bigbadlistofcontours:
    cv2.drawContours(contourimage,xx[0],-1,(0,0,0),-1)
    cv2.drawContours(contourimage,xx[1],-1,(0,0,0),-1)
    #cv2.fillPoly(contourimage, pts =xx[0], color=(0,0,0))
    #cv2.fillPoly(contourimage, pts =xx[1], color=(0,0,0))
As you can see I tried it using two methods - one using drawContours, and the other using fillPoly. Both produce small pixels inside the blob track that should not be there. Any idea what could cause these pixels to be there?
Make a small correction to your code and try the code below:
for xx in bigbadlistofcontours:
    cv2.drawContours(contourimage,[xx[0]],-1,(0,0,0),-1)
    cv2.drawContours(contourimage,[xx[1]],-1,(0,0,0),-1)
Or simply try the following:
for xx in bigbadlistofcontours:
    cv2.drawContours(contourimage,xx,-1,(0,0,0),-1)
drawContours needs a list of contours as its second argument, i.e. a list of numpy arrays. When you passed bigbadlistofcontours[0], it was a list of two numpy arrays, i.e. two contours. But inside the loop you passed xx[0], which is a single numpy array, and xx[1], which is another numpy array. In that case drawContours treats each point in the array as a separate contour, so it draws only the points, not the full contours.
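To see the difference, here is a small illustration (the contour cnt below is a made-up square, not from your data):

import cv2
import numpy as np

# Hypothetical single contour, shaped like what cv2.findContours returns
cnt = np.array([[[10, 10]], [[100, 10]], [[100, 100]], [[10, 100]]])

canvas = np.full((200, 200, 3), (0, 255, 0), dtype=np.uint8)  # green image

cv2.drawContours(canvas, [cnt], -1, (0, 0, 0), -1)  # fills the whole square
cv2.drawContours(canvas, cnt, -1, (0, 0, 0), -1)    # draws only the 4 corner points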
I want to use Numpy (without any other packages) to find the outer contour of the 1st binary image and fill the inside area so it looks like the 2nd image, basically filling the holes of the wheels, but I don't know how to do it. Does anyone have any ideas?
You're looking to implement the flood fill algorithm. The high-level idea is:
Pick an origin point, say (0, 0).
Run a breadth-first or depth-first search from the origin to collect a list of points with the same RGB value. You pick a pixel (starting with the origin), find its horizontal and vertical neighbours, and if the colour is the same, repeat on the new pixel.
Set every pixel that wasn't identified in the search to white.
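A minimal numpy-only sketch of this idea, assuming a 2D binary image whose corner pixel (0, 0) belongs to the outside background:

import numpy as np
from collections import deque

def fill_holes(binary):
    """Flood-fill the background from (0, 0); everything the search
    does not reach (the object and its enclosed holes) becomes white."""
    h, w = binary.shape
    outside = np.zeros((h, w), dtype=bool)
    queue = deque([(0, 0)])
    outside[0, 0] = True
    while queue:  # breadth-first search over same-valued neighbours
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not outside[ny, nx]
                    and binary[ny, nx] == binary[0, 0]):
                outside[ny, nx] = True
                queue.append((ny, nx))
    filled = binary.copy()
    filled[~outside] = 255  # set every pixel the search missed to white
    return filled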
This operation has been implemented many times before. If you are not opposed to using a new library, take a look at findContours and drawContours in OpenCV. OpenCV operates on numpy arrays so you won't have to transform the data.
I used semantic segmentation to color code the different elements in an image shown below.
In Python, I want to crop the original image into many small images based on the colors of the second image, so that the sofa becomes one cropped part, the lamp becomes another, etc. The overlap of the pillows on the sofa can be ignored. Say I have a 3D array of an image; I want to separate that array into the individual colored sections and use the coordinates of those elements to crop the original image. How should I achieve this?
You can do it like this:
find the number of unique colours in the segmented image - see here
iterate over that list of colours making that colour white and everything else black, then findContours() to get the bounding box and save the contents of that bounding box as a PNG.
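A rough sketch of those two steps (file names are hypothetical; with OpenCV 3.x, findContours returns three values instead of two):

import cv2
import numpy as np

original = cv2.imread("original.png")    # the image to crop from
segmented = cv2.imread("segmented.png")  # the colour-coded segmentation

# Find the unique colours present in the segmented image
colours = np.unique(segmented.reshape(-1, 3), axis=0)

for i, colour in enumerate(colours):
    # Make the current colour white and everything else black
    mask = cv2.inRange(segmented, colour, colour)
    # Get the bounding box of each region of that colour and save it
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for j, cnt in enumerate(contours):
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.imwrite(f"crop_{i}_{j}.png", original[y:y + h, x:x + w])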
I am dealing with some images which contain tables, and there are 1 or 2 stickers on them. What I am trying to do is get rid of those stickers. Using color thresholding (in HSV) and contour detection, I am able to create a mask for those stickers. Now I want those stickers to "dissolve" out of there (I don't know the correct term for this), while keeping the table lines intact, so that my line detection works well (which I have to do after this cleaning).
I tried OpenCV's inpaint. But this doesn't work well here, because the sticker is quite large.
See this example:
Part of the whole image where the sticker is sticking (inside contents are censored by me). It can be over horizontal lines, or vertical lines, or both. Basically, it's sticking somewhere on the table (maybe over some text too, but that can't be recovered anyway). The background won't necessarily be whitish; it can be pink/orange/other colors.
This is the thresholded image, creating a mask of the sticker. We can also get the contour of this if required.
This is the result of cv.inpaint() with radius 3.
What I want is to reconstruct those lines.
My solution
Now my approach is to interpolate the colors in between the sticker contour, to fill it up. For each pixel inside the contour, I will do a vertical interpolation and a horizontal interpolation (interpolation of the boundary colors) and then fill that pixel with the average of both. I am hoping that this will at least preserve my vertical and horizontal lines. (It might fail if the sticker is on a corner of the table.) This will also keep the background smooth; my background can have some different colors.
Now my problem is how to implement this. What I have are the contours that I find using OpenCV's findContours(). I don't know how to get the colors on their boundary or how to interpolate the in-between colors.
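To illustrate, here is a minimal sketch of what I have in mind, assuming mask is a boolean array that is True inside the sticker and that the mask does not touch the image border:

import numpy as np

def interpolate_masked(img, mask):
    """For each masked pixel, interpolate linearly between the nearest
    unmasked pixels on its row and on its column, then average the two."""
    out = img.astype(np.float32)
    ys, xs = np.where(mask)
    for y, x in zip(ys, xs):
        # nearest unmasked pixels to the left/right on this row
        left, right = x - 1, x + 1
        while mask[y, left]:
            left -= 1
        while mask[y, right]:
            right += 1
        t = (x - left) / (right - left)
        horiz = (1 - t) * out[y, left] + t * out[y, right]
        # nearest unmasked pixels above/below on this column
        top, bottom = y - 1, y + 1
        while mask[top, x]:
            top -= 1
        while mask[bottom, x]:
            bottom += 1
        t = (y - top) / (bottom - top)
        vert = (1 - t) * out[top, x] + t * out[bottom, x]
        out[y, x] = (horiz + vert) / 2
    return out.astype(img.dtype)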
Any help is appreciated. Thanks in advance.
Due to confidentiality, I cannot share the whole image.
EDIT
I tried the seam-carving method (implementation). Here are the results:
Vertical seaming
Horizontal seaming
It works well once I know which one to use, but I am not sure how well it will do when there are both horizontal and vertical lines.
PS. Don't suggest a solution that needs to find the lines first, because there will be many lines in my whole image.
You could make synthetic example images to better explain your issue.
As I understand it, you can use Poisson image editing. Just take a piece of clean paper image and paste it over the sticker using Poisson blending and the mask you extracted.
Check this github repo as instance for examples with code.
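For example, OpenCV ships Poisson blending as cv2.seamlessClone. A minimal sketch, assuming patch is an image of clean table the same size as your masked image, and the file names are placeholders:

import cv2

table = cv2.imread("table.png")                              # image with the sticker
mask = cv2.imread("sticker_mask.png", cv2.IMREAD_GRAYSCALE)  # your extracted mask
patch = cv2.imread("clean_table.png")                        # clean table, same size

# Anchor the clone at the centre of the sticker's bounding box
x, y, w, h = cv2.boundingRect(mask)
center = (x + w // 2, y + h // 2)

# Poisson-blend the clean patch into the table over the sticker region
result = cv2.seamlessClone(patch, table, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("result.png", result)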
I am trying to use OpenCV to measure the size of filament (that plastic material used for 3D printing).
What I am trying to do is measure filament size (that plastic material used for 3D printing). The idea is that I use an LED panel to illuminate the filament, then take an image with a camera, preprocess the image, apply edge detection, and calculate its size. Most filaments are made of one colour, which is easy to preprocess and gives fine results.
The problem comes with transparent filament: I am not able to get useful results. I would like to ask for a little help, or for someone to point me in the right direction. I have already tried cropping the image to a height a bit larger than the filament and a width of just a few pixels, and calculating the size using the number of pixels in those images, but this did not work very well. So now I am here, trying to do it with edge detection.
works well for filaments of single colour
not working for transparent filament
The code below works just fine for common filaments; the problem is when I try to use it for transparent filament. I have tried adjusting the thresholds of the Canny function. I have tried different colour spaces. But I am not able to get usable results.
Images that may help to understand:
https://imgur.com/gallery/CIv7fxY
import cv2 as cv

image = cv.imread("../images/img_fil_2.PNG")  # load image
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)  # convert image to grayscale
edges = cv.Canny(gray, 100, 200)  # detect edges of image
You can use the assumption that the images are taken under the same conditions.
Your main problem is that the reflections in the transparent filament are detected as edges. But, since the image is relatively simple, without any other edges, you can simply take the upper and the lower edge, and measure the distance between them.
A simple way of doing this is to take 2 vertical lines (e.g. image sides), find the edges that intersect the line (basically traverse a column in the image and find edge pixels), and connect the highest and the lowest points to form the edges of the filament. This also removes the curvature in the filament, which I assume is not needed for your application.
You might want to use 3 or 4 vertical lines, for robustness.
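A minimal sketch of that idea, assuming edges is the binary output of cv.Canny above and the filament runs roughly horizontally across the frame:

import numpy as np

def filament_width(edges, columns=(10, -10)):
    """Estimate filament width as the distance between the highest and
    lowest edge pixels in a few sample columns, averaged for robustness."""
    widths = []
    for x in columns:
        ys = np.flatnonzero(edges[:, x])  # row indices of edge pixels in this column
        if len(ys) >= 2:
            widths.append(ys[-1] - ys[0])  # lowest minus highest edge pixel
    return np.mean(widths) if widths else None

width_px = filament_width(edges)  # convert to mm with your camera calibration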
Using Python, OpenCV, and live webcam input, I can't figure out how to set a point based on an x y coordinate and track where it moves.
Below is a simple example to track a yellow object.
https://github.com/abidrahmank/OpenCV-Python/blob/master/Other_Examples/track_yellow_draw_line.py
Here is the method to track yellow color:
1) Extract the first frame of the video.
2) Convert the frame into HSV color space. Take the H plane and threshold it for yellow color so that you get a binary image with the yellow object as white (also called a blob) and the rest as black.
3) Now find the centre point of the blob. You can use moments or contours (especially if you have more than one blob; in the example above, very simple logic is used: just find the leftmost, rightmost, topmost and bottommost points on the blob and draw a rectangle around it). Store these values.
4) Extract the next frame and follow all the above steps to get the new position. Join these two positions and draw a line.
Over.
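A minimal sketch of these steps with a live webcam, assuming camera index 0 and a rough HSV range for yellow (tune the bounds for your lighting; with OpenCV 3.x, findContours returns three values instead of two):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
trace = np.zeros_like(frame)  # canvas for the drawn track
prev_center = None

while ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))  # rough yellow range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blob = max(contours, key=cv2.contourArea)  # largest yellow blob
        M = cv2.moments(blob)
        if M["m00"] > 0:
            center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
            if prev_center is not None:
                cv2.line(trace, prev_center, center, (0, 0, 255), 2)
            prev_center = center
    cv2.imshow("track", cv2.add(frame, trace))
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
    ok, frame = cap.read()

cap.release()
cv2.destroyAllWindows()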
There are a few blogs that explain the basics. Check out this one: Object tracking in OpenCV and Python 2.6.
Edit: I don't think you can track arbitrary points. To be able to make a correspondence between one point in two images, you need to know something unique about the point to track. This is often done with interest points, which are "unique enough" to be compared across images. Other methods are based on making the point easy to detect using a projection scheme.