Detect Red and Green Circles - python

I want to detect red and green circles separately in the following image (and a few other similar images).
I'm using OpenCV and Python.
I've tried HoughCircles, but that wasn't of any help even after changing the parameters.
Any suggestion on how to do this would help a lot.
I would appreciate it if someone could share some code.

You mentioned in the comments that the circles will always have the same size.
Let's take advantage of this fact. My code snippets are in C++, but that should not be a problem, because they are only here to show which OpenCV functions to use (and how).
TL;DR Do this:
Create typical circle image - the template image.
Use template matching to get all circle positions.
Check the color of every circle.
Now, let's begin!
Step 1 - The template image
You need an image that shows the circle that is clearly separated from the background. You have two options (both are equally good):
make such an image yourself (computing it if you know the radius; see the sketch below), or
simply take one image from the set you are working on, crop one well-visible circle and save it as a separate image (that's what I did, as it was the quicker option)
The circle can be of any color - it is only important that it is distinct from the background.
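For the first option, a minimal Python sketch (the radius and padding here are assumptions; use your measured values):
import numpy as np
import cv2

r = 20                               # assumed circle radius in pixels
pad = 4                              # some background around the circle
size = 2 * (r + pad)

# black background with a filled circle in the middle, clearly distinct from it
template = np.zeros((size, size), dtype=np.uint8)
cv2.circle(template, (size // 2, size // 2), r, 255, -1)
cv2.imwrite('template.jpg', template)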
Step 2 - Template matching
Load the image and template image and convert them to HSV color space. Then split channels so that you will be able to only work with saturation channel:
using namespace std;
using namespace cv;
// imread returns images in BGR channel order
Mat im_bgr = imread("circles.jpg");
Mat tm_bgr = imread("template.jpg");
Mat im_hsv, tm_hsv;
cvtColor(im_bgr, im_hsv, COLOR_BGR2HSV);
cvtColor(tm_bgr, tm_hsv, COLOR_BGR2HSV);
vector<Mat> im_channels, tm_channels;
split(im_hsv, im_channels);
split(tm_hsv, tm_channels);
This is how the circles and the template look now:
Next, you have to obtain an image that contains information about the circle borders. Regardless of how you achieve that, you must apply exactly the same operations to the image and the template saturation channels.
I used the Sobel operator to get the job done. The code example only shows the operations I did on the image saturation channel; the template saturation channel went through exactly the same procedure:
Mat im_f;
im_channels[1].convertTo(im_f, CV_32FC1);   // saturation channel as float
GaussianBlur(im_f, im_f, Size(3, 3), 1, 1); // light smoothing before taking gradients
Mat sx, sy;
Sobel(im_f, sx, -1, 1, 0);
Sobel(im_f, sy, -1, 0, 1);
Mat image_input = abs(sx) + abs(sy);        // L1 gradient magnitude highlights the borders
This is how the circles on the obtained image and the template now look:
Now, perform template matching. I advise that you choose the type of template matching that computes normalized correlation coefficients:
Mat match_result;
matchTemplate(image_input, template_input, match_result, TM_CCOEFF_NORMED);
This is the template matching result:
This image tells you how well the template correlates with the underlying image if you place the template at different positions on the image. For example, the result value at pixel (0,0) corresponds to the template placed at (0,0) on the input image.
When the template is placed in such a position that it matches well with the underlying image, the correlation coefficient is high. Use the threshold method to discard everything except the peaks of the signal (the template matching values lie inside the [-1, 1] interval and you are only interested in values close to 1):
Mat thresholded;
threshold(match_result, thresholded, 0.8, 1.0, THRESH_BINARY);
Next, determine the positions of the template matching maxima inside each isolated area. I recommend that you use the thresholded image as a mask for this purpose. Only one maximum needs to be selected within each area.
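In Python, one way to pick a single maximum per isolated area is to label the blobs in the thresholded image and run minMaxLoc with a per-blob mask (a sketch; match_result and thresholded are assumed to come from the previous steps):
import numpy as np
import cv2

# label each isolated above-threshold area
num_labels, labels = cv2.connectedComponents(thresholded.astype(np.uint8))

positions = []
for lbl in range(1, num_labels):          # label 0 is the background
    mask = (labels == lbl).astype(np.uint8)
    # best correlation inside this blob only
    _, _, _, max_loc = cv2.minMaxLoc(match_result, mask=mask)
    positions.append(max_loc)             # (x, y) of the template's top-left corner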
These positions tell you where you have to place the template so that it matches best with the circles. I drew rectangles that start at these points and have the same width/height as template image:
Step 3: The color of the circle
Now you know where the templates should be positioned so that they cover the circles nicely. But you still have to find out where the circle center is located on the template image. You can do this by computing the center of mass of the template's saturation channel:
On the image, the circle centers are located at these points:
Point circ_center_on_image = template_position + circ_center_on_template;
Now you only have to check if the red color channel intensity at these points is larger than the green channel intensity. If yes, the circle is red; otherwise it is green:
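A minimal Python version of this final check (positions comes from the matching sketch above; circ_center_x/y is the hypothetical center-of-mass offset on the template):
# im_bgr is the image as loaded by cv2.imread (BGR channel order)
for (tx, ty) in positions:
    cx, cy = tx + circ_center_x, ty + circ_center_y
    b, g, r = im_bgr[cy, cx]
    print('red' if r > g else 'green', (cx, cy))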

Related

How do I fill the missing part in this picture using OpenCV (Python)?

I have to generate a new image such that the missing portion of the black ring is shown.
For example, consider this image.
As we can see, a sector of the inner black ring is missing, and my task is to identify where to fill it in. I have to take a plain white image of the same dimensions as the input image and predict (marked in black) the pixels that I'll fill in to complete the black ring. A pictorial representation of the output image is as follows:
Please help me out. I'm new to OpenCV, so please explain the steps in as much detail as possible. I am working in Python, so I'd prefer a Python solution to this problem.
You can find a white object (sector) whose centroid is at the maximum distance from the center of the picture.
import numpy as np
import cv2

img = cv2.imread('JUSS0.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = gray.shape                      # note: shape is (rows, cols) = (height, width)
thresh = cv2.threshold(gray, 253, 255, cv2.THRESH_BINARY)[1]
output = cv2.connectedComponentsWithStats(thresh, 4, cv2.CV_32S)
num_labels = output[0]
labels = output[1]
centroids = output[3]
# squared distance of each component's centroid from the image center
# (label 0 is the background component)
polar_centroids_sq = []
for i in range(num_labels):
    polar_centroids_sq.append((centroids[i][0] - w / 2) ** 2 + (centroids[i][1] - h / 2) ** 2)
idx = polar_centroids_sq.index(max(polar_centroids_sq))
out = np.uint8(255 * (labels == idx))
cv2.imshow('sector', out)
cv2.imwrite('sector.png', out)
This is one of many possible approaches.
1. Make every pixel that is not black into white, so your image is black and white. This means your processing is simpler, uses less memory and has only 1 channel to process instead of 3. You can do this with cvtColor() to get greyscale and then cv2.threshold() to get pure black and white.
2. Repeatedly construct (imaginary) radial lines and check the pixels along them until you find 2 black stretches. You now have the inner and outer radius of the inner, incomplete circle. You can get the coordinates of points along a line with the scikit-image line function.
3. Draw that circle in full in black with cv2.circle(), as sketched below.
4. Subtract that image from your initial black and white image so that only the differences (the missing part) show up in the result.
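A sketch of steps 3 and 4 (bw is the black and white image from step 1; center, r_inner and r_outer are assumed to be known from step 2):
import cv2

full = bw.copy()
r_mean = (r_inner + r_outer) // 2
# draw the ring in full, in black, as thick as the original ring
cv2.circle(full, center, r_mean, 0, thickness=r_outer - r_inner)
# only the missing part differs between the two images
gap = cv2.absdiff(bw, full)
cv2.imwrite('missing_part.png', gap)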
Of course, if you already know the inner and outer radius of the incomplete black ring, you can completely omit the second step above and do what Yves suggested in the comments.
Or, instead of the second step above, run edge detection and HoughCircles to get the radii.
Another approach might be to call cv2.warpPolar() to convert your circular image to a long horizontal one with 2 thick black lines, one of them discontinuous. Then just draw that line across the full width of the image and warp back to a circle.
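A sketch of that polar idea (the filename, center and radius are assumptions; you would measure them from your image):
import cv2

img = cv2.imread('ring.png', cv2.IMREAD_GRAYSCALE)
h, w = img.shape
center = (w / 2, h / 2)                 # assumed: ring centered in the image
max_radius = min(h, w) / 2

# unwrap: concentric circles become straight lines
# (OpenCV maps radius to the x-axis and angle to the y-axis)
polar = cv2.warpPolar(img, (w, h), center, max_radius,
                      cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
# ... locate the discontinuous line here and draw it over the full image height ...
# warp back to the circular image
restored = cv2.warpPolar(polar, (w, h), center, max_radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR + cv2.WARP_INVERSE_MAP)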

How to skew an image by moving its vertex?

I'm trying to find a way to transform an image by translating one of its vertices.
I have already found various methods for transforming an image, like rotation and scaling, but none of them involved skewing like so:
There is shearing, but it's not the same, since it can move two or more of the image's vertices while I only want to move one.
What can I use that can perform such an operation?
I took your "cat-thing" and resized it to a nice size, added some perfectly vertical and horizontal white gridlines and added some extra canvas in red at the bottom to give myself room to transform it. That gave me this which is 400 pixels wide and 450 pixels tall:
I then used ImageMagick to do a "Bilinear Forward Transform" in Terminal. Basically you give it 4 pairs of points, the first pair is where the top-left corner is before the transform and then where it must move to. The next pair is where the top-right corner is originally followed by where it ends up. Then the bottom-right. Then the bottom-left. As you can see, 3 of the 4 pairs are unmoved - only the bottom-right corner moves. I also made the virtual pixel black so you can see where pixels were invented by the transform in black:
convert cat.png -matte -virtual-pixel black -interpolate Spline -distort BilinearForward '0,0 0,0 399,0 399,0 399,349 330,430 0,349 0,349' bilinear.png
I also did a "Perspective Transform" using the same transform coordinates:
convert cat.png -matte -virtual-pixel black -distort Perspective '0,0 0,0 399,0 399,0 399,349 330,430 0,349 0,349' perspective.png
Finally, to illustrate the difference, I made a flickering comparison between the 2 images so you can see the difference:
I am indebted to Anthony Thyssen for his excellent work here which I commend to you.
I understand you were looking for a Python solution and would point out that there is a Python binding to ImageMagick called Wand which you may like to use - here.
Note that I only used red and black to illustrate what is going on (atop the Stack Overflow white background) and where aspects of the result come from, you would obviously use white for both!
The perspective transformation is likely what you want, since it preserves straight lines at any angle. (The inverse bilinear only preserves horizontal and vertical straight lines).
Here is how to do it in ImageMagick, Python Wand (based upon ImageMagick) and Python OpenCV.
Input:
ImageMagick
(Note the +distort makes the output the needed size to hold the full result and is not restricted to the size of the input. Also the -virtual-pixel white sets color of the area outside the image pixels to white. The points are ordered clockwise from the top left in pairs as inx,iny outx,outy)
convert cat.png -virtual-pixel white +distort perspective \
"0,0 0,0 359,0 359,0 379,333 306,376 0,333 0,333" \
cat_perspective_im.png
Python Wand
(Note the best_fit=true makes the output the needed size to hold the full result and is not restricted to the size of the input.)
#!/bin/python3.7
from wand.image import Image
from wand.display import display
with Image(filename='cat.png') as img:
    img.virtual_pixel = 'white'
    img.distort('perspective', (0,0, 0,0, 359,0, 359,0, 379,333, 306,376, 0,333, 0,333), best_fit=True)
    img.save(filename='cat_perspective_wand.png')
    display(img)
Python OpenCV
#!/bin/python3.7
import cv2
import numpy as np
# Read source image.
img_src = cv2.imread('cat.png')
# Four corners of source image
# Coordinates are in x,y system with x horizontal to the right and y vertical downward
pts_src = np.float32([[0,0], [359,0], [379,333], [0,333]])
# Four corners of destination image.
pts_dst = np.float32([[0, 0], [359,0], [306,376], [0,333]])
# Get perspective transform matrix from the 4 point pairs
m = cv2.getPerspectiveTransform(pts_src,pts_dst)
# Warp source image to destination based on matrix
# size argument is width x height
# compute from max output coordinates
img_out = cv2.warpPerspective(img_src, m, (359+1,376+1), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=(255, 255, 255))
# Save output
cv2.imwrite('cat_perspective_opencv.png', img_out)
# Display result
cv2.imshow("Warped Source Image", img_out)
cv2.waitKey(0)
cv2.destroyAllWindows()

How to find the right point correspondence between template and a rotated image?

I have a .dxf file containing a drawing (template), which is just a piece with holes. From that drawing I successfully extract the coordinates of the holes and their diameters, given in a list [[x1,y1,d1],[x2,y2,d2]...[xn,yn,dn]].
After this, I take a picture of the piece (same as the template) and, after some image processing, I obtain the coordinates of my detected holes and the contours. However, the piece in the picture can be rotated with respect to the template.
How do I make the right hole correspondence (between the coordinates of the holes in the template and the rotated coordinates of the holes in the image) so I know which diameter corresponds to each hole in the image?
Is there any method of point sorting that can give me this correspondence?
I'm working with Python and OpenCV.
All answers will be highly appreciated. Thanks!!!
Image of Template: https://ibb.co/VVpWmKx
In the template image, contours are drawn at the same size as given in the .dxf file, which differs from the size (in pixels) of the contours of the piece taken from the camera.
Processed image taken from the camera, contours of the piece are shown: https://ibb.co/3rjCg5F
I've tried OpenCV's feature matching functions (the ORB algorithm) to get the angle by which the piece in the picture is rotated with respect to the template, but I still cannot obtain this rotation angle.
How can I get the rotation angle with image descriptors? Is this the best approach for this problem, or are there better methods to address it?
Considering the image of the extracted contours, you might not need something as heavy as the feature matching algorithms of the OpenCV library. One approach is to take the outermost contour of the piece and compute its cv::minAreaRect. The resulting rotated rectangle gives you the angle. Then you only have to decide whether the symmetry matches, because the piece might be flipped. That can be done in many ways; one of the simplest (ignoring the possibility that the scale is off) is to take the outermost contour again, fill it, and count the percentage of points that overlap with the template. The orientation with the right symmetry should match in almost all points, given that the scale of the matched piece and the template is the same.
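A sketch of the minAreaRect part (contours is assumed to come from cv2.findContours on the processed image):
import cv2

# largest-area contour as a proxy for the outermost one (an assumption)
outer = max(contours, key=cv2.contourArea)
(cx, cy), (rw, rh), angle = cv2.minAreaRect(outer)
# note: the angle convention changed between OpenCV versions
print('rotation of the piece:', angle)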
You should use Hu moments, which give a translation-, scale- and rotation-invariant descriptor for matching.
The Hu moments are described at https://en.wikipedia.org/wiki/Image_moment and are implemented in OpenCV.
You can dig up the theory of moment invariants on the wiki page pretty easily.
To use them, you can simply call:
// Calculate Moments
Moments moments = moments(im, false);
// Calculate Hu Moments
double huMoments[7];
HuMoments(moments, huMoments);
Sample Hu moments look like this:
h[0] = 0.00162663
h[1] = 3.11619e-07
h[2] = 3.61005e-10
h[3] = 1.44485e-10
h[4] = -2.55279e-20
h[5] = -7.57625e-14
h[6] = 2.09098e-20
The raw moments usually span a very large dynamic range, so they are typically passed through a sign-preserving log transform to lower the dynamic range for matching:
H[i] = -sign(h[i]) * log10(|h[i]|)
H[0] = 2.78871
H[1] = 6.50638
H[2] = 9.44249
H[3] = 9.84018
H[4] = -19.593
H[5] = -13.1205
H[6] = 19.6797
BTW, you might need to pad the template to extract the edge contour
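Since the question asks for Python, here is a minimal equivalent of the snippet above, including the sign-preserving log transform (the filename is a placeholder):
import cv2
import numpy as np

im = cv2.imread('piece.png', cv2.IMREAD_GRAYSCALE)
h = cv2.HuMoments(cv2.moments(im)).flatten()
# compress the dynamic range for matching (assumes no moment is exactly zero)
H = -np.sign(h) * np.log10(np.abs(h))
print(H)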

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding about 300 images.
The relevant parts of the setup are two adjacent layers of differently-colored foams observed from the side: basically a 2-color sandwich shrinking from both sides, except that one of the foams evaporates a bit faster.
I'd like to segment each of the images in a way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred of such shots, in which only the widths change, I get an array of scalars back that I can plot. (Going to look like a harmonic series on either side of the x-axis)
I have a bit of Python and Matlab experience, but have never used OpenCV or the Image Processing Toolbox in Matlab, and have never dealt with computer vision in general. Could you outline a roadmap of what packages/functions to use or what steps to take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which position along the length of the foam the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored;
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and how to selectively store the spatial parameters of the resulting segments;
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assuming the layer intensities differ after converting to grayscale (if not, just convert to another color space like HSV or LAB, then use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscaled input into a few bands:
ret,thresh1 = cv2.threshold(img,128,255,cv2.THRESH_BINARY)
ret,thresh2 = cv2.threshold(img,27,255,cv2.THRESH_BINARY_INV)
ret,thresh3 = cv2.threshold(img,77,255,cv2.THRESH_TRUNC)
ret,thresh4 = cv2.threshold(img,97,255,cv2.THRESH_TOZERO)
ret,thresh5 = cv2.threshold(img,227,255,cv2.THRESH_TOZERO_INV)
The threshold values should be tuned on your actual data; these are just examples.
Clean up each segmented image using a median filter with a radius larger than 9, as some noise is to be expected. You can also use an ROI here to remove part of the noise, but personally I just write the program to handle all cases and angles:
thresholded_image_smoothed = cv2.medianBlur(thresholded_image, 9)
Each band corresponds to one color (layer). You should now have N segmented images from the one source image, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. call boundingRect on each smoothed, thresholded band (thresholded_image_smoothed above):
C++: Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, each rect has x, y, width and height attributes. Use a simple sort on the rect's position attribute (x or y, depending on the orientation of the bands) to order the layers, then run through the whole video to obtain the layer id and its height vs. time graph.
Rect API - public attributes:
_Tp height   // this is what you are looking for
_Tp width
_Tp x        // this tells you the position of the band
_Tp y
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
A more robust way is to use a Kalman filter to track the position and height over time, as some bubbles are likely to occur and interfere with the measured heights of the layers.
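Putting the thresholding and boundingRect steps together, a minimal per-band sketch (names are placeholders):
import cv2

def measure_layer(band_mask):
    # band_mask: one smoothed, thresholded band (8-bit, layer in white)
    x, y, w, h = cv2.boundingRect(band_mask)
    # use x or y as the layer position, depending on the sandwich orientation
    return (x, y), h

# e.g. track one band over all frames to plot height against time
heights = [measure_layer(m)[1] for m in band_masks_over_time]  # hypothetical list of masks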
To be honest, I didn't expect a chem student to be good at this. Haha, good luck!
If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image, they will show the shrinking over time.
If, for example, you use a 3 pixel wide ROI, the result for 300 images will be a 900 pixel wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os

# path to folder that holds the images
path = '.'

# dimensions of roi
x = 0
y = 0
w = 3
h = 100

# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()

# create empty result array
result = np.empty([h, 0, 3], dtype=np.uint8)

for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to previous results
    result = np.hstack((result, roi))

# optional: save result as image
# cv2.imwrite('result.png', result)

# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by converting the image to HSV and using inRange (see the sketch below). This creates a mask (a 2D array with values from 0-255, one for each pixel) that you can use to calculate the height and area of each foam region.
You can find a script to help you find the HSV colors for separation on this GitHub
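A sketch of that HSV separation (the filename and the bounds are placeholders you would pick for your foam colors):
import cv2
import numpy as np

img = cv2.imread('frame.png')                 # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([100, 50, 50])               # placeholder lower HSV bound
upper = np.array([130, 255, 255])             # placeholder upper HSV bound
mask = cv2.inRange(hsv, lower, upper)         # 255 where a pixel falls inside the bounds
x, y, w, h = cv2.boundingRect(mask)           # extent of that foam region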

Extract ROI From Image with a Skew Angle OpenCv Python

I have been using
x,y,w,h = cv2.boundingRect(cnt)
roi = img[y:y+h,x:x+w]
in OpenCV in order to get a portion of the image within a contour.
However, I am now trying to use the OpenCV function minAreaRect in order to get my bounding box. It returns the center point, the size and the skew angle of the rectangle. Example:
((363.5, 676.0000610351562), (24.349538803100586, 34.46882629394531), -18.434947967529297)
Is there a simple way of extracting this portion of the image? I obviously cannot do
roi = img[y:y+h,x:x+w]
because of the skew angle. I could rotate the entire image and then extract the points, but this would take far too long when going through thousands of contours at once.
What I currently get is encompassed within the green rectangle, and what I want is in the red rectangle. I want to extract this portion of the image, but cannot figure out how to select a diagonal rectangle.
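For reference, one common way to crop such a rotated rectangle without rotating the whole image is to warp only its four corners (a sketch, not from this thread; img and cnt as above):
import cv2
import numpy as np

rect = cv2.minAreaRect(cnt)                     # ((cx, cy), (w, h), angle)
box = cv2.boxPoints(rect).astype(np.float32)    # the 4 corners of the rotated rect
w, h = int(rect[1][0]), int(rect[1][1])
dst = np.float32([[0, h - 1], [0, 0], [w - 1, 0], [w - 1, h - 1]])
M = cv2.getPerspectiveTransform(box, dst)
roi = cv2.warpPerspective(img, M, (w, h))       # upright contents of the skewed rectangle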
