Map contour points to original image - python

I've performed some basic image operations and managed to isolate the object I am interested in.
As a result, I ended up with a container, get<0>(results), holding the corresponding (x, y) points of the object. For visualisation, I drew these points on a separate frame named contourFrame.
My question is, how do I map these points back to the original image?
I have looked into the combination of findHomography() and warpPerspective(), but in order to find the homography matrix I need to provide the corresponding destination points, which is exactly what I am looking for in the first place.
remap() seems to do what I am looking for, but I may be misunderstanding the API: after trying it out, it returns a blank white frame.
Is remap() even the way to go about this? If yes, how? If no, what other method can I use to map my contour points to another image?
A Python solution/suggestion works for me as well.
Edit I
For the current scenario, it's a 1-to-1 mapping back to the original image. But what if I rotate the original image, or move my camera back and forth, i.e. closer to and further from the object? How can I still map the coordinates back to the newly oriented original image?
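A minimal Python sketch of this kind of mapping, assuming the transform between the two views is known (here an illustrative rotation built by hand; in practice it could come from cv2.findHomography on matched keypoints; the square contour is a stand-in for one entry of get<0>(results)):

import cv2
import numpy as np

original = cv2.imread("ipad.jpg")                      # same image as in the code below
contour = np.array([[[10, 10]], [[200, 10]], [[200, 300]], [[10, 300]]], dtype=np.float32)

# Case 1: contourFrame and the original have the same size, so the (x, y)
# coordinates are already valid in the original -- just draw on a copy of it.
annotated = original.copy()
cv2.drawContours(annotated, [contour.astype(np.int32)], -1, (0, 255, 0), 3)

# Case 2: the original has been rotated or the camera has moved. If a 3x3
# homography H between the old view and the new view is known, the contour
# points can be re-projected instead of re-detected.
h, w = original.shape[:2]
R = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), 30, 1.0)   # example: 30 degree rotation
H = np.vstack([R, [0.0, 0.0, 1.0]])                        # promote 2x3 affine to 3x3
new_view = cv2.warpPerspective(original, H, (w, h))

mapped = cv2.perspectiveTransform(contour, H)              # still shape (N, 1, 2)
cv2.polylines(new_view, [mapped.astype(np.int32)], True, (0, 0, 255), 3)

cv2.imshow("annotated original", annotated)
cv2.imshow("re-projected contour", new_view)
cv2.waitKey(0)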
Original Image
Results
Top Left: Canny Edge
Top Right: Dilated
Bottom Left: Eroded
Bottom Right: Frame with desired object/contour
tuple<vector<vector<Point>>, Mat, Mat> removeStupidIcons(Mat& edges)
{
    Mat dilated, eroded;
    vector<vector<Point>> contours, filteredContours;

    dilate(edges, dilated, Mat(), Point(-1, -1), 5);
    erode(dilated, eroded, Mat(), Point(-1, -1), 5);
    findContours(eroded, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

    for (vector<Point> contour : contours)
        if (contourArea(contour) < 200)
            filteredContours.push_back(contour);

    return make_tuple(filteredContours, dilated, eroded);
}
Mat mapPoints(Mat& origin, Mat& destination)
{
    Mat remapWindow, mapX, mapY;
    mapX.create(origin.size(), CV_32FC1);
    mapY.create(origin.size(), CV_32FC1);

    remap(origin, destination, mapX, mapY, CV_INTER_LINEAR);
    return destination;
}
int main(int argc, const char* argv[])
{
    string image_path = "ipad.jpg";
    original = imread(image_path);

    blur(original, originalBlur, Size(15, 15));
    cvtColor(originalBlur, originalGrey, CV_RGB2GRAY);
    Canny(originalGrey, cannyEdges, 50, 130, 3);
    cannyEdges.convertTo(cannyFrame, CV_8U);

    tuple<vector<vector<Point>>, Mat, Mat> results = removeStupidIcons(cannyEdges);
    //
    // get<0>(results) -> contours
    // get<1>(results) -> matrix after dilation
    // get<2>(results) -> matrix after erosion

    Mat contourFrame = Mat::zeros(original.size(), CV_8UC3);
    Scalar colour = Scalar(rand() & 255, rand() & 255, rand() & 255);
    drawContours(contourFrame, get<0>(results), -1, colour, 3);

    Mat contourCopy, originalCopy;
    original.copyTo(originalCopy);
    contourFrame.copyTo(contourCopy);

    // a white background is returned.
    Mat mappedFrame = mapPoints(originalCopy, contourCopy);

    imshow("Canny", cannyFrame);
    imshow("Original", original);
    imshow("Contour", contourFrame);
    imshow("Eroded", get<2>(results));
    imshow("Dilated", get<1>(results));
    imshow("Original Grey", originalGrey);
    imshow("Mapping Result", contourCopy);

    waitKey(0);
    return 0;
}

Related

Detecting small squares attached to something else on the image

I want to detect the small squares circled in red in the image. The problem is that they sit on a white line, and I want to know how to separate those squares from the line and detect them.
I have used OpenCV with Python to write the code. What I have done so far is crop the image so that I only work with the circular part, then crop again to get the required part, i.e. the white line. Then I used erosion so that the white line vanishes and only the squares remain, and finally Hough circles to detect the squares. This works for some images, but it cannot be generalized. Please help me find a generalized approach for this: the logic and also the Python code.
Also, could anyone help me detect the ArUco marker in the image? It's getting rejected and I don't know why.
Image is in this link. Detect small squares on an image
Here's C++ code with distanceTransform.
Since I used almost exclusively OpenCV functions, you can probably convert it to Python easily.
I removed the white stripe at the top of the image manually; I hope this isn't a problem.
int main()
{
    cv::Mat input = cv::imread("C:/StackOverflow/Input/SQUARES.png", cv::IMREAD_GRAYSCALE);

    cv::Mat thres = input > 0; // make binary mask

    cv::Mat dst;
    cv::distanceTransform(thres, dst, CV_DIST_L2, 3);

    double min, max;
    cv::Point minPt, maxPt;
    cv::minMaxLoc(dst, &min, &max, 0, 0);

    // a real clustering would be better. This assumes that the white circle thickness
    // is about 50% of the square size, so 65% should be ok...
    double distThres = max * 0.65;

    cv::Mat squaresMask = dst >= distThres;

    cv::imwrite("C:/StackOverflow/Input/SQUARES_mask.png", squaresMask);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(squaresMask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    cv::Mat output;
    cv::cvtColor(input, output, cv::COLOR_GRAY2BGR);
    for (int i = 0; i < contours.size(); ++i)
    {
        cv::Point2f center;
        float radius;
        cv::minEnclosingCircle(contours[i], center, radius);

        cv::circle(output, center, 5, cv::Scalar(255, 0, 255), -1);
        //cv::circle(output, center, radius, cv::Scalar(255, 0, 255), 1);
    }

    cv::imwrite("C:/StackOverflow/Input/SQUARES_output.png", output);
    cv::imshow("output", output);
    cv::waitKey(0);
}
This is the input:
This is the squaresMask after the distance transform:
And this is the result:
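As noted above, the C++ translates almost directly to Python. A rough sketch of the same distance-transform idea (the file name is a placeholder, and the findContours return signature shown assumes OpenCV 4):

import cv2
import numpy as np

inp = cv2.imread("SQUARES.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
thres = (inp > 0).astype(np.uint8) * 255                # binary mask

dst = cv2.distanceTransform(thres, cv2.DIST_L2, 3)
_, max_val, _, _ = cv2.minMaxLoc(dst)

dist_thres = max_val * 0.65                             # same heuristic as the C++ above
squares_mask = (dst >= dist_thres).astype(np.uint8) * 255

# OpenCV 4 returns (contours, hierarchy); OpenCV 3 returns three values
contours, _ = cv2.findContours(squares_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

output = cv2.cvtColor(inp, cv2.COLOR_GRAY2BGR)
for c in contours:
    (x, y), radius = cv2.minEnclosingCircle(c)
    cv2.circle(output, (int(x), int(y)), 5, (255, 0, 255), -1)

cv2.imshow("output", output)
cv2.waitKey(0)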

Detect contour intersection without drawing

I'm working to detect cells within microscope images like the one below. There are often spurious contours that get drawn due to imperfections on the microscope slides, like the one below the legend in the figure below.
I'm currently using this solution to clean these up. Here's the basic idea.
# Create image of background
blank = np.zeros(image.shape[0:2])
background_image = cv2.drawContours(blank.copy(), background_contour, 0, 1, -1)

for i, c in enumerate(contours):
    # Create image of contour
    contour_image = cv2.drawContours(blank.copy(), contours, i, 1, -1)
    # Create image of focal contour + background
    total_image = np.where(background_image + contour_image > 0, 1, 0)
    # Check if contour is outside positive space
    if total_image.sum() > background_image.sum():
        continue
This works as expected; if the total_image area is greater than the area of the background_image then c must be outside the region of interest. But drawing all of these contours is incredibly slow and checking thousands of contours takes hours. Is there a more efficient way to check if contours overlap that doesn't require drawing the contours?
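One possible non-drawing check, sketched only (it reuses the `contours` and `background_contour` variables from the snippet above): cv2.pointPolygonTest reports whether a point lies inside, on, or outside a contour, so a contour can be classified by testing its points against the background contour directly instead of rasterising it.

import cv2

def inside_background(contour, background):
    """True if every point of `contour` lies inside (or on) `background`."""
    for pt in contour.reshape(-1, 2):
        if cv2.pointPolygonTest(background, (float(pt[0]), float(pt[1])), False) < 0:
            return False
    return True

# background_contour is used as a one-element contour list above, hence the [0]
kept = [c for c in contours if inside_background(c, background_contour[0])]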
I assume the goal is to exclude the external contour from further analysis? If so, the easiest approach is to use the red background contour as a mask, then use the masked image to detect the blue cells.
# Create image of background
blank = np.zeros(image.shape[0:2], dtype=np.uint8)
background_image = cv2.drawContours(blank.copy(), background_contour, 0, (255), -1)

# mask input image (leaves only the area inside the red background contour)
res = cv2.bitwise_and(image, image, mask=background_image)

# [detect blue cells]
Assuming you are trying to find points on the different contours that are overlapping, consider the contours as:
vector<vector<Point> > contours;
// ... obtain your contours.

vector<Point> non_repeating_points;
for (int i = 0; i < contours.size(); i++)
{
    for (int j = 0; j < contours[i].size(); j++)
    {
        Point this_point = contours[i][j];
        bool seen_before = false;
        for (int k = 0; k < non_repeating_points.size(); k++)
        {   // check this list for a previous record
            if (non_repeating_points[k] == this_point)
            {
                seen_before = true;
                std::cout << "found repeated point at" << std::endl;
                std::cout << this_point << std::endl;
                break;
            }
        }
        // if not seen before, just add it to the list
        if (!seen_before)
            non_repeating_points.push_back(this_point);
    }
}
I just wrote this without compiling it, but I think you can understand the idea.
The information you provide is not enough.
In case you mean to find the nearest connected boundary, and there is no actual overlap: you can declare a local cluster near the point non_repeating_points[k]; call it surround_non_repeating_points[k].
You can control the distance that should count as an intersection and push all such points into surround_non_repeating_points[k].
Then just check in a loop for
if(surround_non_repeating_points[k] == this_point)
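In Python, the same repeated-point idea can be written with a set, which avoids the quadratic scan (a sketch only; `contours` is assumed to be the list returned by cv2.findContours):

seen = set()
repeated = []
for contour in contours:
    for x, y in contour.reshape(-1, 2):
        key = (int(x), int(y))
        if key in seen:
            repeated.append(key)   # point appears in more than one place
        else:
            seen.add(key)
print("repeated points:", repeated)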

Detecting individual boxes in W2 with opencv - python

I've done extensive research and cannot find a combination of techniques that will achieve what I need.
I have a situation where I need to perform OCR on hundreds of W2s to extract the data for a reconciliation. The W2s are very poor quality, as they are printed and subsequently scanned back into the computer. The aforementioned process is outside of my control; unfortunately I have to work with what I've got.
I was able to successfully perform this process last year, but I had to brute force it as timeliness was a major concern. I did so by manually indicating the coordinates to extract the data from, then performing the OCR only on those segments one at a time. This year, I would like to come up with a more dynamic situation in the anticipation that the coordinates could change, format could change, etc.
I have included a sample, scrubbed W2 below. The idea is for each box on the W2 to be its own rectangle, and extract the data by iterating through all of the rectangles. I have tried several edge detection techniques but none have delivered exactly what is needed. I believe that I have not found the correct combination of pre-processing required. I have tried to mirror some of the Sudoku puzzle detection scripts.
Here is the result of what I have tried thus far, along with the Python code, which can be used with either OpenCV 2 or 3:
import cv2
import numpy as np

img = cv2.imread(image_path_here)
newx, newy = img.shape[1]/2, img.shape[0]/2
img = cv2.resize(img, (newx, newy))

blur = cv2.GaussianBlur(img, (3, 3), 5)
ret, thresh1 = cv2.threshold(blur, 225, 255, cv2.THRESH_BINARY)
gray = cv2.cvtColor(thresh1, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 220, apertureSize=3)

minLineLength = 20
maxLineGap = 50
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength, maxLineGap)

for x1, y1, x2, y2 in lines[0]:
    cv2.line(img, (x1, y1), (x2, y2), (255, 0, 255), 2)

cv2.imshow('hough', img)
cv2.waitKey(0)
He he, edge detection is not the only way. As the edges are thick enough (at least one pixel everywhere), binarization allows you to isolate the regions inside the boxes.
By simple criteria you can get rid of clutter, and just bounding boxes give you a fairly good segmentation.
Let me know if you don't follow anything in my code. The biggest faults of this concept are:
1: noisy breaks in the main box lines would split the grid into separate blobs;
2: I don't know whether handwritten text is an issue here, but letters overlapping the edges of boxes could be bad;
3: it does absolutely no orientation checking (you may actually want to improve this; I don't think it would be too hard and it would give you more accurate handles). What I mean is that it depends on your boxes being approximately aligned to the x/y axes; if they are sufficiently skewed, it will give you gross offsets to all your box corners (though it should still find them all).
I fiddled with the threshold set point a bit to separate all the text from the edges; you could probably pull it even lower if necessary before you start breaking the main line. Also, if you are worried about line breaks, you could add sufficiently large blobs together into the final image.
Basically: first, fiddle with the threshold to find the most stable cutoff value (likely the lowest value that still keeps the box connected) for separating text and noise from the box.
Second, find the biggest positive blob (it should be the box grid). If your box doesn't stay all together, you may want to take a few of the largest blobs... though that will get sticky, so try to pick a threshold that gives you a single blob.
The last step is to get the rectangles; to do this, I just look for negative blobs (ignoring the first, outer area).
And here is the code (sorry that it is in C++, but hopefully you understand the concept and would write it yourself anyhow):
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <stdio.h>
#include <opencv2/opencv.hpp>
using namespace cv;
//Attempts to find the largest connected group of points (assumed to be the interconnected boundaries of the textbox grid)
Mat biggestComponent(Mat targetImage, int connectivity=8)
{
Mat inputImage;
inputImage = targetImage.clone();
Mat finalImage;// = inputImage;
int greatestBlobSize=0;
std::cout<<"Top"<<std::endl;
std::cout<<inputImage.rows<<std::endl;
std::cout<<inputImage.cols<<std::endl;
for(int i=0;i<inputImage.cols;i++)
{
for(int ii=0;ii<inputImage.rows;ii++)
{
if(inputImage.at<uchar>(ii,i)!=0)
{
Mat lastImage;
lastImage = inputImage.clone();
Rect* boundbox;
int blobSize = floodFill(inputImage, cv::Point(i,ii), Scalar(0),boundbox,Scalar(200),Scalar(255),connectivity);
if(greatestBlobSize<blobSize)
{
greatestBlobSize=blobSize;
std::cout<<blobSize<<std::endl;
Mat tempDif = lastImage-inputImage;
finalImage = tempDif.clone();
}
//std::cout<<"Loop"<<std::endl;
}
}
}
return finalImage;
}
// Takes an image that only has outlines of boxes and gets handles for each textbox.
// Returns a vector of points which represent the top left corners of the text boxes.
std::vector<Rect> boxCorners(Mat processedImage, int connectivity=4)
{
    std::vector<Rect> boxHandles;
    Mat inputImage;
    bool outerRegionFlag = true;

    inputImage = processedImage.clone();
    std::cout << inputImage.rows << std::endl;
    std::cout << inputImage.cols << std::endl;

    for (int i = 0; i < inputImage.cols; i++)
    {
        for (int ii = 0; ii < inputImage.rows; ii++)
        {
            if (inputImage.at<uchar>(ii, i) == 0)
            {
                Mat lastImage;
                lastImage = inputImage.clone();
                Rect boundBox;

                if (outerRegionFlag) // This is to floodfill the outer zone of the page
                {
                    outerRegionFlag = false;
                    floodFill(inputImage, cv::Point(i, ii), Scalar(255),
                              &boundBox, Scalar(0), Scalar(50), connectivity);
                }
                else
                {
                    floodFill(inputImage, cv::Point(i, ii), Scalar(255),
                              &boundBox, Scalar(0), Scalar(50), connectivity);
                    boxHandles.push_back(boundBox);
                }
            }
        }
    }
    return boxHandles;
}
Mat drawTestBoxes(Mat originalImage, std::vector<Rect> boxes)
{
    Mat outImage;
    outImage = originalImage.clone();
    outImage = outImage * 0; // really I am just being lazy, this should just be initialized with dimensions

    for (int i = 0; i < boxes.size(); i++)
    {
        rectangle(outImage, boxes[i], Scalar(255));
    }
    return outImage;
}
int main() {
    Mat image;
    Mat thresholded;
    Mat processed;

    image = imread("Images/W2.png", 1);
    Mat channel[3];
    split(image, channel);
    threshold(channel[0], thresholded, 150, 255, 1);

    std::cout << "Computing biggest object" << std::endl;
    processed = biggestComponent(thresholded);

    std::vector<Rect> textBoxes = boxCorners(processed);
    Mat finalBoxes = drawTestBoxes(image, textBoxes);

    namedWindow("Original", WINDOW_AUTOSIZE);
    imshow("Original", channel[0]);
    namedWindow("Thresholded", WINDOW_AUTOSIZE);
    imshow("Thresholded", thresholded);
    namedWindow("Processed", WINDOW_AUTOSIZE);
    imshow("Processed", processed);
    namedWindow("Boxes", WINDOW_AUTOSIZE);
    imshow("Boxes", finalBoxes);

    std::cout << "waiting for user input" << std::endl;
    waitKey(0);
    return 0;
}
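Since the question is tagged Python, here is a hedged sketch of the same pipeline in Python (threshold, keep the largest connected blob, then treat the holes in that blob as the boxes). The file name and threshold are taken from the C++ above; the findContours call assumes OpenCV 4:

import cv2
import numpy as np

img = cv2.imread("Images/W2.png")
blue = cv2.split(img)[0]
_, thresh = cv2.threshold(blue, 150, 255, cv2.THRESH_BINARY_INV)    # same as threshold(..., 1)

# largest connected component = the interconnected box grid
n, labels, stats, _ = cv2.connectedComponentsWithStats(thresh, connectivity=8)
biggest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))           # label 0 is the background
grid = np.where(labels == biggest, 255, 0).astype(np.uint8)

# the individual boxes are the holes inside that blob
contours, hierarchy = cv2.findContours(grid, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
boxes = []
for c, hier in zip(contours, hierarchy[0]):
    if hier[3] != -1:                  # has a parent contour -> it is a hole in the grid
        boxes.append(cv2.boundingRect(c))

out = img.copy()
for x, y, w, h in boxes:
    cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imshow("boxes", out)
cv2.waitKey(0)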

How to obtain the right alpha value to perfectly blend two images?

I've been trying to blend two images. My current approach is: I obtain the coordinates of the overlapping region of the two images, and only for that overlapping region I blend with a hardcoded alpha of 0.5 before adding. So basically I'm just taking half the value of each pixel from the overlapping regions of both images and adding them. That doesn't give me a perfect blend because the alpha value is hardcoded to 0.5. Here's the result of blending 3 images:
As you can see, the transition from one image to another is still visible. How do I obtain the perfect alpha value that would eliminate this visible transition? Or is there no such thing, and I'm taking a wrong approach?
Here's how I'm currently doing the blending:
for i in range(3):
    base_img_warp[overlap_coords[0], overlap_coords[1], i] = base_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5
    next_img_warp[overlap_coords[0], overlap_coords[1], i] = next_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5

final_img = cv2.add(base_img_warp, next_img_warp)
If anyone would like to give it a shot, here are two warped images, and the mask of their overlapping region: http://imgur.com/a/9pOsQ
Here is the way I would do it in general:
int main(int argc, char* argv[])
{
    cv::Mat input1 = cv::imread("C:/StackOverflow/Input/pano1.jpg");
    cv::Mat input2 = cv::imread("C:/StackOverflow/Input/pano2.jpg");

    // compute the vignetting masks. This is much easier before warping, but I will try...
    // it can be precomputed if the size and position of your ROI in the image doesn't change,
    // and can be precomputed and aligned if you can determine the ROI for every image.
    // the compression artifacts make it a little bit worse here; I try to extract all the
    // non-black regions in the images.
    cv::Mat mask1;
    cv::inRange(input1, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask1);
    cv::Mat mask2;
    cv::inRange(input2, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask2);

    // now compute the distance from the ROI border:
    cv::Mat dt1;
    cv::distanceTransform(mask1, dt1, CV_DIST_L1, 3);
    cv::Mat dt2;
    cv::distanceTransform(mask2, dt2, CV_DIST_L1, 3);

    // now you can use the distance values for blending directly. If the distance value is
    // smaller, this means that the value is worse (your vignetting becomes worse at the image border).
    cv::Mat mosaic = cv::Mat(input1.size(), input1.type(), cv::Scalar(0, 0, 0));
    for (int j = 0; j < mosaic.rows; ++j)
        for (int i = 0; i < mosaic.cols; ++i)
        {
            float a = dt1.at<float>(j, i);
            float b = dt2.at<float>(j, i);
            // distances are not between 0 and 1 but this value is.
            // The "better" a is, compared to b, the higher is alpha.
            float alpha = a / (a + b);

            // actual blending: alpha*A + beta*B
            mosaic.at<cv::Vec3b>(j, i) = alpha * input1.at<cv::Vec3b>(j, i) + (1 - alpha) * input2.at<cv::Vec3b>(j, i);
        }

    cv::imshow("mosaic", mosaic);
    cv::waitKey(0);
    return 0;
}
Basically you compute the distance from your ROI border to the center of your objects and compute the alpha from both blending mask values. So if one image has a high distance from the border and the other one a low distance from the border, you prefer the pixel that is closer to the image center. It would be better to normalize those values for cases where the warped images aren't of similar size.
But it is even better and more efficient to precompute the blending masks and warp them. Best would be to know the vignetting of your optical system and choose an identical blending mask (typically with lower values at the border).
From the previous code you'll get these results:
ROI masks:
Blending masks (shown just as an impression; the actual masks are float matrices):
image mosaic:
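The same distance-transform blending in Python, vectorised with NumPy instead of the per-pixel loop (file names are placeholders; both warped images are assumed to have the same size):

import cv2
import numpy as np

img1 = cv2.imread("pano1.jpg")
img2 = cv2.imread("pano2.jpg")

# masks of the valid (non-black) area of each warped image
mask1 = cv2.inRange(img1, (10, 10, 10), (255, 255, 255))
mask2 = cv2.inRange(img2, (10, 10, 10), (255, 255, 255))

# distance to the ROI border: larger = further from the seam = more trustworthy
dt1 = cv2.distanceTransform(mask1, cv2.DIST_L1, 3)
dt2 = cv2.distanceTransform(mask2, cv2.DIST_L1, 3)

# per-pixel alpha = dt1 / (dt1 + dt2), guarded against division by zero
alpha = dt1 / np.maximum(dt1 + dt2, 1e-6)
alpha = alpha[..., None]                      # broadcast over the colour channels
mosaic = alpha * img1.astype(np.float32) + (1.0 - alpha) * img2.astype(np.float32)

cv2.imwrite("mosaic.png", np.clip(mosaic, 0, 255).astype(np.uint8))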
There are 2 obvious problems with your images:
Border area has distorted lighting conditions
That is most likely caused by the optics used to acquire the images. To remedy it, you should use only the inside part of the images (cut off a few pixels from the border).
So when I cut off 20 pixels from the border and blended towards common illumination, I got this:
As you can see, the ugly border seam is gone; only the illumination problems persist (see bullet #2).
Images are taken under different lighting conditions
Here the subsurface scattering effects kick in, making the images "incompatible". You should normalize them to some uniform illumination, or post-process the blended result line by line and, when a coherent bump is detected, multiply the rest of the line so the bump is diminished.
So the rest of the line should be multiplied by the constant i0/i1 (the intensities just before and just after the bump). Bumps like this can occur only on the edges between overlap areas, so you can either scan for them or use those positions directly... To recognize a valid bump, it should have neighbours nearby in the previous and next lines along the whole image height.
You can also do this in the y-axis direction, in the same way...

To find the number of circles in an image using OpenCV

I have an image as below:
Can anyone tell me how to detect the number of circles in it? I'm using the Hough circle transform to achieve this, and this is my code:
# import the necessary packages
import numpy as np
import sys
import cv2

# load the image, clone it for output, and then convert it to grayscale
image = cv2.imread(str(sys.argv[1]))
output = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 1.2, 5)
no_of_circles = 0

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    no_of_circles = len(circles)

    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

# show the output image
cv2.imshow("output", np.hstack([image, output]))
print 'no of circles', no_of_circles
I'm getting wrong answers with this code. Can anyone tell me where I went wrong?
I tried a tricky way to detect all circles.
I found the HoughCircles parameters manually:
HoughCircles( src_gray, circles, HOUGH_GRADIENT, 1, 50, 40, 46, 0, 0 );
The tricky part is
flip( src, flipped, 1 );
hconcat( src, flipped, flipped );
hconcat( flipped, src, src );
flip( src, flipped, 0 );
vconcat( src, flipped, flipped );
vconcat( flipped, src, src );
flip( src, src, -1 );
which creates a model like the one below before detection.
The result looks like this.
The C++ code can easily be converted to Python.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
Mat src, src_gray, flipped, display;
if (argc < 2)
{
std::cerr<<"No input image specified\n";
return -1;
}
// Read the image
src = imread( argv[1], 1 );
if( src.empty() )
{
std::cerr<<"Invalid input image\n";
return -1;
}
flip( src, flipped, 1 );
hconcat( src,flipped, flipped );
hconcat( flipped, src, src );
flip( src, flipped, 0 );
vconcat( src,flipped, flipped );
vconcat( flipped, src, src );
flip( src, src, -1 );
// Convert it to gray
cvtColor( src, src_gray, COLOR_BGR2GRAY );
// Reduce the noise so we avoid false circle detection
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );
// will hold the results of the detection
std::vector<Vec3f> circles;
// runs the actual detection
HoughCircles( src_gray, circles, HOUGH_GRADIENT, 1, 50, 40, 46, 0, 0 );
// clone the colour, input image for displaying purposes
display = src.clone();
Rect rect_src(display.cols / 3, display.rows / 3, display.cols / 3, display.rows / 3 );
rectangle( display, rect_src, Scalar(255,0,0) );
for( size_t i = 0; i < circles.size(); i++ )
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
Rect r = Rect( center.x-radius, center.y-radius, radius * 2, radius * 2 );
Rect intersection_rect = r & rect_src;
if( intersection_rect.width * intersection_rect.height > r.width * r.height / 3 )
{
// circle center
circle( display, center, 3, Scalar(0,255,0), -1, 8, 0 );
// circle outline
circle( display, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
}
// shows the results
imshow( "results", display(rect_src));
// get user key
waitKey();
return 0;
}
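A rough Python equivalent of the mirroring trick (the file name is a placeholder, the HoughCircles parameters are the ones hand-tuned above so they may need adjusting, and the filter at the end is simplified to "centre inside the middle tile"):

import cv2
import numpy as np

src = cv2.imread("circles.png")                     # placeholder file name

# tile the image 3x3 with mirrored copies so circles cut off at the border
# become complete circles before detection
flipped = cv2.flip(src, 1)                          # horizontal mirror
row = cv2.hconcat([src, flipped, src])
tiled = cv2.vconcat([row, cv2.flip(row, 0), row])
tiled = cv2.flip(tiled, -1)

gray = cv2.cvtColor(tiled, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (9, 9), 2)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=40, param2=46)

h, w = tiled.shape[:2]
x0, y0 = w // 3, h // 3                             # top-left of the centre tile
count = 0
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        if x0 <= x < 2 * x0 and y0 <= y < 2 * y0:   # centre lies in the original image
            count += 1
print("circles:", count)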
This SO post describes detection of semi-circles, and may be a good start for you:
Detect semi-circle in opencv
If you get stuck in OpenCV, try coding up the solution yourself. Writing a Hough circle finder parameterized for your particular application is relatively straightforward. If you write application-specific Hough algorithms a few times, you should be able to write a reasonable solution in less time than it takes to sort through a bunch of google results, decipher someone else's code, and so on.
You definitely don't need Canny edge detection for an image like this, but it won't hurt.
Other libraries (esp. commercial ones) will allow you to set more parameters for Hough circle finding. I would've expected some overload of the HoughCircle function to allow a struct of search parameters to be passed in, including the minimum percentage of circle completeness (arc length) allowed.
Although it's good to learn both RANSAC and Hough techniques--and, over time, more exotic techniques--I wouldn't necessarily recommend using RANSAC when you have circles defined so nicely and crisply. Without offering specific evidence, I'll just claim that fiddling with RANSAC parameters may be less intuitive than fiddling with Hough parameters.
HoughCircles needs some parameter tuning to work properly.
It could be that in your case the default values of Param1 and Param2 (set to 100) are not good.
You can fine-tune your detection with HoughCircles by computing the ultimate eroded image. It will give you the number of circles in your image.
If there are only circles and background in the input, you can count the number of connected components and ignore the component associated with the background. This will be the simplest and most robust solution.
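A minimal sketch of that connected-components suggestion (the threshold and file name are placeholders; it assumes the circles end up white on a black background after thresholding, otherwise invert the threshold):

import cv2

img = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)     # placeholder path
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

n_labels, _ = cv2.connectedComponents(binary)
print("number of circles:", n_labels - 1)                 # label 0 is the background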
