Extract ROI From Image with a Skew Angle (OpenCV Python)

I have been using
x,y,w,h = cv2.boundingRect(cnt)
roi = img[y:y+h,x:x+w]
in OpenCV in order to get the portion of the image within a contour.
However, I am now trying to use the OpenCV function minAreaRect in order to get my bounding box. The function returns the center coordinates, the size (width and height), and the skew angle. Example:
((363.5, 676.0000610351562), (24.349538803100586, 34.46882629394531), -18.434947967529297)
Is there a simple way of extracting this portion of the image? I obviously cannot do
roi = img[y:y+h,x:x+w]
because of the skew angle. I considered rotating the entire image and then extracting the points, but that would take far too long when going through thousands of contours at once.
What I currently get is encompassed within the green rectangle, and what I want is in the red rectangle. I want to extract this portion of the image, but cannot figure out how to select a diagonal rectangle.

Related

Get pixel location of binary image with intensity 255 in python opencv

I want to get the pixel coordinates of the blue dots in an image.
To get them, I first converted the image to grayscale and used the threshold function.
import numpy as np
import cv2
img = cv2.imread("dot.jpg")
img_g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret1,th1 = cv2.threshold(img_g,127,255,cv2.THRESH_BINARY_INV)
What should I do next to get the pixel locations with intensity 255? Please tell me if there is a simpler method to do the same.
I don't think this is going to work as you expect.
Usually, to get stable tracking of a shape with a specific color, you work in an RGB/HSV/HSL color space; you could start with HSV, which is more robust to lighting changes.
1. Convert to HSV using cv2.cvtColor().
2. Use cv2.inRange(blue_lower, blue_upper) to filter out all unwanted colors.
Now you have a clean binary image containing only the blue color (assuming a static background; otherwise more filters should be added).
3. To detect the dots (which are usually more than one pixel each), you could try cv2.findContours.
4. You can get the x, y pixel coordinates of each contour in several ways (depending on the shape of what you want to detect), for example cv2.boundingRect().

Finding mean pixel intensity value of an image inside a contour

I have found a particular contour in an image and created a mask: the entire image is black except for the boundary points of the contour, which have been mapped perfectly.
Now I want to go back to my original image and get the average pixel intensity of all points inside this contour.
When I use the cv2.mean() function, do I get the average value of only the points specified by the mask, i.e. just the boundary points, or of all the points inside the mask?
The easiest way to do this is to pick out the pixels in your image that correspond to places where the mask is white. If you want only the pixels on the boundary, use the mask as you have it. If you want the pixels inside (and on) the boundary, draw the contour filled instead (thickness=-1). Here's an example:
import cv2
import numpy as np

img = cv2.imread('image.jpg')
mask = cv2.imread('mask.png', 0)  # load the mask as grayscale
locs = np.where(mask == 255)
pixels = img[locs]
print(np.mean(pixels))

Detecting corners using Opencv Python

I am trying to find the corners of the 4 pillars, which are yellow, and also to detect the extreme corners of the board, which is white.
Basically, I want to calculate the area of the whole space after subtracting the area of each pillar.
For that, I am first trying to identify the corners of the pillars to find the area of each one.
Here is the code I tried; I am almost halfway through it.
import numpy as np
import cv2
img = cv2.imread('Corner_0.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
corners = cv2.goodFeaturesToTrack(gray, 100, 0.01, 10)
corners = np.int0(corners)
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(img, (x, y), 3, 255, -1)
cv2.imwrite('Detected_Corner_0.jpg',img)
I would like to detect the corners and calculate the area of each pillar.
When I use GrabCut I am able to handle one pillar at a time; does this make sense?
Corner detectors often cannot be relied on: they report extra corners and miss the ones you expect. What's more, you have to identify and regroup them.
You can obtain interesting results by computing a saturation image (the S channel in HSL/HSV). Then, by binarization and blob analysis, you can easily find the areas.

Detect Red and Green Circles

I want to detect the red and green circles separately in the following image (and a few other similar images).
I'm using OpenCV and Python.
I've tried cv2.HoughCircles but that wasn't of any help, even after changing the parameters.
Any suggestion on how to do this would really help a lot.
I would appreciate it if someone could share code.
You mentioned in the comments that the circles will always have the same size.
Let's take advantage of this fact. My code snippets are in C++, but that should not be a problem because they are only here to show which OpenCV functions to use (and how) ...
TL;DR: Do this:
Create typical circle image - the template image.
Use template matching to get all circle positions.
Check the color of every circle.
Now, let's begin!
Step 1 - The template image
You need an image that shows the circle that is clearly separated from the background. You have two options (both are equally good):
make such an image yourself (computing it if you know the radius), or
simply take one image from the set you are working on and then crop one well-visible circle and save it as a separate image (that's what I did because it was a quicker option)
The circle can be of any color - it is only important that it is distinct from the background.
Step 2 - Template matching
Load the image and template image and convert them to HSV color space. Then split channels so that you will be able to only work with saturation channel:
using namespace std;
using namespace cv;
Mat im_rgb = imread("circles.jpg");
Mat tm_rgb = imread("template.jpg");
Mat im_hsv, tm_hsv;
cvtColor(im_rgb, im_hsv, CV_BGR2HSV);  // imread loads images as BGR, so convert from BGR
cvtColor(tm_rgb, tm_hsv, CV_BGR2HSV);
vector<Mat> im_channels, tm_channels;
split(im_hsv, im_channels);
split(tm_hsv, tm_channels);
That's how circles and the template look now:
Next, you have to obtain an image that contains information about the circle borders. Regardless of how you achieve that, you have to apply exactly the same operations to the image and template saturation channels.
I used the Sobel operator to get the job done. The code example only shows the operations I did on the image saturation channel; the template saturation channel went through exactly the same procedure:
Mat im_f;
im_channels[1].convertTo(im_f, CV_32FC1);
GaussianBlur(im_f, im_f, Size(3, 3), 1, 1);
Mat sx, sy;
Sobel(im_f, sx, -1, 1, 0);
Sobel(im_f, sy, -1, 0, 1);
Mat image_input = abs(sx) + abs(sy);
This is how the circles and the template look after these operations:
Now, perform template matching. I advise that you choose the type of template matching that computes normalized correlation coefficients:
Mat match_result;
matchTemplate(image_input, template_input, match_result, CV_TM_CCOEFF_NORMED);
This is the template matching result:
This image tells you how well the template correlates with the underlying image when the template is placed at different positions. For example, the result value at pixel (0,0) corresponds to the template placed at (0,0) on the input image.
When the template is placed so that it matches well with the underlying image, the correlation coefficient is high. Use the threshold method to discard everything except the peaks of the signal (the template-matching values lie inside the [-1, 1] interval and you are only interested in values close to 1):
Mat thresholded;
threshold(match_result, thresholded, 0.8, 1.0, CV_THRESH_BINARY);
Next, determine the positions of template result maxima inside each isolated area. I recommend that you use thresholded image as a mask for this purpose. Only one maximum needs to be selected within each area.
These positions tell you where you have to place the template so that it matches best with the circles. I drew rectangles that start at these points and have the same width/height as template image:
Step 3: The color of the circle
Now you know where templates should be positioned so that they cover the circles nicely. But you still have to find out where the circle center is located on the template image. You can do this by computing center of mass of the template's saturation channel:
On the image, the circle centers are located at these points:
Point circ_center_on_image = template_position + circ_center_on_template;
Now you only have to check whether the red channel intensity at these points is larger than the green channel intensity. If yes, the circle is red; otherwise it is green:

Detecting shape of a contour and color inside

I am new to OpenCV with Python and am trying to get the shape of a contour in an image.
Considering only regular shapes like squares, rectangles, circles and triangles, is there any way to get the contour shape using only the numpy and cv2 libraries?
I also want to find the colour inside a contour. How can I do that?
For finding the area of a contour there is a built-in function: cv2.contourArea(cnt).
Are there built-in functions for "contour shape" and "colour inside contour" as well?
Please help!
Note: the images I am considering contain multiple regular shapes.
This method might be longer, but it is what's on the top of my head right now. For finding the contour shape, use the cv2.findContours function; it gives a vector of points (the boundary points of each contour) as output. Then find the center of the contour using moments.
To find contours, use this function:
cv2.findContours(image, mode, method[, contours[, hierarchy[, offset]]])
where image is the Canny output image.
Calculate the center from the moments; refer to this link:
http://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html
Then calculate the distance of each point stored in the contour from the center, and classify shapes by comparing those distances:
1) Circle - all contour points will be at roughly equal distance from the center.
2) Square, rectangle - find the four points farthest from the center; these will be the vertices and will be at approximately the same distance. Differentiate a square from a rectangle using the edge lengths.
3) Triangle - this can be tricky for different types of triangles, so you can just use an else condition here, since you have only 4 shapes.
For finding the colour, use the vertices of the square, rectangle or triangle to create a mask.
Since each shape has a single colour, you can also make a small patch around the center and get the average value of the RGB pixels there.
Say you have the center at (100, 100) and it's a circle with a 20-pixel radius: create a patch of size, say, 10 x 10 centered at (100, 100) and find the average R, G and B values in this patch.
For red: R ~ 255, G ~ 0, B ~ 0
For green: R ~ 0, G ~ 255, B ~ 0
For blue: R ~ 0, G ~ 0, B ~ 255
Note: OpenCV stores values as BGR, not RGB.
For finding the shape of a particular contour, we can draw a bounding rectangle around the contour and compare the area of the contour with the area of the bounding rectangle.
If the area of the contour is roughly half the area of the bounding rectangle, the shape is a triangle.
If the area of the contour is less than the area of the bounding rectangle but greater than half of it, it's a circle (a circle fills about pi/4, roughly 0.785, of its bounding square).
Note: this method is limited to regular triangles and circles; it doesn't apply to polygons like hexagons, heptagons, etc.
