Detect circular segment in the image - python

I am trying to determine the parameters of a circle from the arc visible in the image: I want to know the radius and the coordinates of the center, and then draw the full circle (on the bigger picture, of course). I've tried imfindcircles, which is based on the circular Hough transform. Unfortunately, it looks like the visible arc is too short, and the algorithm does not recognize it as a full circle.
clear all;
close all;
figure();
image = imread('circle3.jpg');
imshow(image);
Rmin = 10;
Rmax = 10000;
[centersBright, radiiBright] = imfindcircles(image, [Rmin Rmax], 'ObjectPolarity', 'bright', 'Method', 'TwoStage', 'Sensitivity', 0.98, 'EdgeThreshold', 0.9);
viscircles(centersBright, radiiBright,'Color','b');
I have changed the sensitivity, the edge threshold, Rmin, Rmax, and even the method, but still nothing happens. How can I detect this circle, if not in MATLAB then maybe in Python?
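If the visible arc is too short for the Hough transform, one alternative worth trying in Python (a sketch, not tested on the poster's image) is to extract edge points with an edge detector and fit a circle to them directly by algebraic least squares, which works even on short arcs:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit to edge points.

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) in the
    least-squares sense; works even when the points cover a short arc.
    """
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    r = np.sqrt(sol[2] + cx**2 + cy**2)
    return cx, cy, r

# Demo: a short synthetic arc of the circle with center (50, 80), radius 120
t = np.linspace(0.1, 0.6, 100)
cx, cy, r = fit_circle(50 + 120 * np.cos(t), 80 + 120 * np.sin(t))
```

On a real image the xs/ys arrays would come from an edge map, e.g. `np.nonzero(cv2.Canny(gray, 50, 150))`.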

Related

How to create a transparent gradient along the edge of an image/mask image with python?

Basically I want to create a halo effect for the image that fades to transparent.
I have images of wounds (and their masks), with no background. I want to add a red gradient behind each one, so that if I paste it on something, the area surrounding the wound looks irritated.
image
and
mask
...
Originally, I made a radial (circular) gradient and pasted my transparent image on it, but this does not look good for my non-circular images.
image with circle gradient
same image as above, but pasted on white so the gradient is easier to see
...
Now, I think the best idea would be to make the gradient in a shape that follows the edge of the image out to a certain distance and then fades.
Something like this (I did this poorly with Paint, but basically I want it to extend a certain distance from the edge of the object and fade out):
paint version of what I would like to do
...
My code for the circle gradient is posted below; it creates a transparent circular gradient. Once I have this, I paste my transparent image on it. I do not know how to change this code so that it creates the gradient along the edge of the image.
import numpy as np
from PIL import Image

W, H = 900, 900
im = Image.new(mode='RGB', size=(W, H), color=(153, 0, 0))
Y = np.linspace(-1, 1, H)[None, :] * 255
X = np.linspace(-1, 1, W)[:, None] * 255
alpha = np.sqrt(X**2 + Y**2)          # distance from the center (equation of a circle)
alpha = 255 - np.clip(alpha, 0, 255)  # np.clip takes (array, min, max)
Please help
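One way to make the gradient follow the object's edge (a sketch, assuming SciPy is available; `edge_glow_alpha` and `fade_px` are illustrative names) is a distance transform of the mask: each background pixel's alpha is derived from its distance to the nearest object pixel:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_glow_alpha(mask, fade_px=50):
    """Alpha channel that is 255 at the object's edge and fades to 0
    `fade_px` pixels away.  `mask` is a boolean array, True inside the object.
    """
    # Distance from each background pixel to the nearest object pixel
    dist = distance_transform_edt(~mask)
    alpha = 255 * np.clip(1 - dist / fade_px, 0, 1)
    alpha[mask] = 0  # no glow on the object itself; it is pasted on top later
    return alpha.astype(np.uint8)
```

The resulting alpha could then be combined with a solid red layer and composited behind the transparent wound image.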

How to find the right point correspondance between template and a rotated image?

I have a .dxf file containing a drawing (template) which is just a piece with holes, from said drawing I successfully extract the coordinates of the holes and their diameters given in a list [[x1,y1,d1],[x2,y2,d2]...[xn,yn,dn]].
After this, I take a picture of the piece (same as template) and after some image processing, I obtain the coordinates of my detected holes and the contours. However, this piece in the picture can be rotated with respect to the template.
How do I find the right hole correspondence (between the coordinates of holes in the template and the rotated coordinates of holes in the image) so I can know which diameter corresponds to each hole in the image?
Is there any point-sorting method that can give me this correspondence?
I'm working with Python and OpenCV.
All answers will be highly appreciated. Thanks!!!
Image of Template: https://ibb.co/VVpWmKx
In the template image, contours are drawn to the same size as given in the .dxf file, which differs to the size (in pixels) of the contours of the piece taken from camera.
Processed image taken from the camera, contours of the piece are shown: https://ibb.co/3rjCg5F
I've tried OpenCV's feature matching (ORB algorithm) to get the angle by which the piece in the picture was rotated with respect to the template, but I still cannot recover this rotation angle. How can I get the rotation angle with image descriptors?
Is this the best approach for this problem? Are there any better methods to address it?
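For reference, one simple way to recover both the rotation and the hole correspondence at once, assuming equal scale in both point sets, is a brute-force search over candidate angles with nearest-neighbour assignment (a sketch; `match_holes` is an illustrative name, not an OpenCV function):

```python
import numpy as np

def match_holes(template_pts, image_pts, n_angles=360):
    """Brute-force rotation search plus nearest-neighbour assignment.

    Centring both point sets on their centroids removes the translation;
    equal scale is assumed (rescale beforehand otherwise).  Returns
    (angle, corr) where corr[i] is the index of the template hole that
    matches image hole i.
    """
    t = template_pts - template_pts.mean(axis=0)
    p = image_pts - image_pts.mean(axis=0)
    best_cost, best_angle, best_corr = np.inf, None, None
    for ang in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        c, s = np.cos(ang), np.sin(ang)
        rot = t @ np.array([[c, s], [-s, c]])  # template rotated by ang
        # pairwise distances: image holes (rows) vs rotated template holes
        d = np.linalg.norm(p[:, None, :] - rot[None, :, :], axis=2)
        corr = d.argmin(axis=1)
        cost = d[np.arange(len(p)), corr].sum()
        if cost < best_cost:
            best_cost, best_angle, best_corr = cost, ang, corr
    return best_angle, best_corr
```

With tens of holes and a few hundred candidate angles this stays fast, and once the correspondence is known the diameters can be looked up directly from the template list.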
Considering the image of the extracted contours, you might not need something as heavy as the feature-matching machinery of the OpenCV library. One approach is to take the outermost contour of the piece and compute its cv::minAreaRect. The resulting rotated rectangle gives you the angle. You then just have to decide whether the symmetry matches, because the piece might be flipped. That can be checked in many ways; one of the simplest (setting aside that the scale might be off) is to take the outermost contour again, fill it, and count the percentage of points that overlap with the template. The orientation with the right symmetry should match in almost all points, given that the scale of the matched piece and the template are the same.
You could use Hu moments, which give translation-, scale- and rotation-invariant descriptors for matching.
Hu moments are described at https://en.wikipedia.org/wiki/Image_moment and are implemented in OpenCV.
You can dig up the theory of moment invariants on that wiki page pretty easily.
To use them, simply call:
// Calculate image moments
Moments m = moments(im, false);
// Calculate the seven Hu moments from them
double huMoments[7];
HuMoments(m, huMoments);
Sample Hu moments look like this:
h[0] = 0.00162663
h[1] = 3.11619e-07
h[2] = 3.61005e-10
h[3] = 1.44485e-10
h[4] = -2.55279e-20
h[5] = -7.57625e-14
h[6] = 2.09098e-20
The raw moments span a very large dynamic range, so they are usually passed through a log transform to compress it for matching:
H[i] = -sign(h[i]) * log10(|h[i]|)
H[0] = 2.78871
H[1] = 6.50638
H[2] = 9.44249
H[3] = 9.84018
H[4] = -19.593
H[5] = -13.1205
H[6] = 19.6797
By the way, you might need to pad the template to extract the edge contour.
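In Python, the log transform above can be sketched as follows (the helper name is illustrative; the input values are the sample moments from this answer):

```python
import numpy as np

def log_transform(hu):
    """Map Hu moments to a comparable dynamic range: -sign(h) * log10(|h|)."""
    hu = np.asarray(hu, dtype=float)
    return -np.sign(hu) * np.log10(np.abs(hu))

h = [0.00162663, 3.11619e-07, 3.61005e-10, 1.44485e-10,
     -2.55279e-20, -7.57625e-14, 2.09098e-20]
H = log_transform(h)
# roughly [2.789, 6.506, 9.442, 9.840, -19.593, -13.121, 19.680]
```

The transformed values are then comparable term by term, e.g. via an L1 or L2 distance (this is essentially what cv2.matchShapes does internally).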

Image Positioning OpenCV

This may be called a "Region of Interest"; I'm not exactly sure. But what I'd like to do is rather easy to explain.
I have a photo that I need to align to a grid.
https://snag.gy/YaAWdg.jpg
For starters, the little text that says "here" must be 151px from the top of the screen.
Next, the distance from "here" to position 8 on the chin must be 631px.
Finally, a straight line must run down the middle of the picture through line 28 on the nose.
If I'm not making sense, please tell me to elaborate.
I have the following ideas (this is pseudo code)
It is simply to loop until the requirements are met with a resize function, a lot like brute-forcing, but that's all I can think of, i.e.:
while top != (x, 151):
    img.top -= 1   # move the image up one pixel until it reaches the desired position
while dist(top, eight) != 631:
    resize += 1    # resize by 1 pixel until the target distance is reached
# center the nose
image.position = nose
Consider switching the order of your operations to prevent the need to iterate. A little bit of math and a change of perspective should do the trick:
1.) Resize the image such that the distance from "here" to the chin is 631px.
2.) Use a region of interest to crop your image so that "here" is 151px from the top of the screen.
3.) Draw your line.
EDIT:
The affine transform in OpenCV would work to morph your image into the proper form, assuming that you have all the proper constraints defined.
If all you need to do is a simple scale, first calculate the distance between points using something like this:
Point2f a(10, 10);
Point2f b(100, 100);
float euclideanDist(const Point2f& p, const Point2f& q) {
    Point2f diff = p - q;
    return std::sqrt(diff.x * diff.x + diff.y * diff.y);
}
Then create a scale factor to resize your image
float scaleFactor = 631.0f / euclideanDist(a, b);
cv::resize(input, output, cv::Size(), scaleFactor, scaleFactor, cv::INTER_LINEAR);
Using both instances of scaleFactor will create a uniform scaling in X&Y. Using two different scale factors will scale X and Y independently.
Take a look at OpenCV's tutorials; for images with faces you can use Haar cascades to simplify the work (https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html#face-detection).
Otherwise, look at ROI (Region of Interest) to extract an area and apply your algorithm (resize or crop) to it.

Detect Red and Green Circles

I want to detect red and green circles separately in the following image (and a few other similar images).
I'm using OpenCV and Python.
I've tried HoughCircles, but that wasn't of any help even after changing the params.
Any suggestion on how to do this would really help a lot; I would appreciate it if someone could post some code.
You mentioned in the comments that the circles will always have the same size.
Let's take advantage of this fact. My code snippets are in C++, but this should not be a problem, because they are only here to show which OpenCV functions to use (and how).
TL;DR: Do this:
Create typical circle image - the template image.
Use template matching to get all circle positions.
Check the color of every circle.
Now, let's begin!
Step 1 - The template image
You need an image that shows the circle that is clearly separated from the background. You have two options (both are equally good):
make such an image yourself (computing it if you know the radius), or
simply take one image from the set you are working on and then crop one well-visible circle and save it as a separate image (that's what I did because it was a quicker option)
The circle can be of any color - it is only important that it is distinct from the background.
Step 2 - Template matching
Load the image and template image and convert them to HSV color space. Then split channels so that you will be able to only work with saturation channel:
using namespace std;
using namespace cv;
Mat im_rgb = imread("circles.jpg");
Mat tm_rgb = imread("template.jpg");
Mat im_hsv, tm_hsv;
cvtColor(im_rgb, im_hsv, CV_BGR2HSV);
cvtColor(tm_rgb, tm_hsv, CV_BGR2HSV);
vector<Mat> im_channels, tm_channels;
split(im_hsv, im_channels);
split(tm_hsv, tm_channels);
That's how circles and the template look now:
Next, you have to obtain an image that will contain information about circle borders. Regardless of what you do to achieve that, you have to apply exactly the same operations on image and template saturation channels.
I used sobel operator to get the job done. The code example only shows the operations I did on image saturation channel; the template saturation channel went through exactly the same procedure:
Mat im_f;
im_channels[1].convertTo(im_f, CV_32FC1);
GaussianBlur(im_f, im_f, Size(3, 3), 1, 1);
Mat sx, sy;
Sobel(im_f, sx, -1, 1, 0);
Sobel(im_f, sy, -1, 0, 1);
Mat image_input = abs(sx) + abs(sy);
This is how the circles in the obtained image and the template look now:
Now, perform template matching. I advise that you choose the type of template matching that computes normalized correlation coefficients:
Mat match_result;
matchTemplate(image_input, template_input, match_result, CV_TM_CCOEFF_NORMED);
This is the template matching result:
This image tells you how well the template correlates with the underlying image if you place the template at different positions on image. For example, the result value at pixel (0,0) corresponds to template placed at (0,0) on the input image.
When the template is placed in such a position that it matches well with the underlying image, the correlation coefficient is high. Use threshold method to discard everything except the peaks of signal (the values of template matching will lie inside [-1, 1] interval and you are only interested in values that are close to 1):
Mat thresholded;
threshold(match_result, thresholded, 0.8, 1.0, CV_THRESH_BINARY);
Next, determine the positions of template result maxima inside each isolated area. I recommend that you use thresholded image as a mask for this purpose. Only one maximum needs to be selected within each area.
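The one-maximum-per-region step can be sketched in Python with SciPy's labelling tools (an assumption that SciPy is acceptable; `peak_positions` is an illustrative name):

```python
import numpy as np
from scipy import ndimage

def peak_positions(match_result, thresh=0.8):
    """Return one (row, col) maximum per isolated above-threshold region."""
    mask = match_result > thresh
    labels, n = ndimage.label(mask)  # connected regions of the thresholded mask
    # maximum_position finds the peak of match_result inside each labelled area
    return ndimage.maximum_position(match_result, labels, range(1, n + 1))
```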
These positions tell you where you have to place the template so that it matches best with the circles. I drew rectangles that start at these points and have the same width/height as template image:
Step 3: The color of the circle
Now you know where templates should be positioned so that they cover the circles nicely. But you still have to find out where the circle center is located on the template image. You can do this by computing center of mass of the template's saturation channel:
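Computing the centre of mass of the saturation channel is short in NumPy (a sketch; `center_of_mass` here is a hand-rolled helper, though scipy.ndimage also provides one):

```python
import numpy as np

def center_of_mass(channel):
    """Intensity-weighted centroid (x, y) of a 2-D channel."""
    total = channel.sum()
    ys, xs = np.indices(channel.shape)  # row and column index grids
    return (xs * channel).sum() / total, (ys * channel).sum() / total
```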
On the image, the circle centers are located at these points:
Point circ_center_on_image = template_position + circ_center_on_template;
Now you only have to check whether the red channel intensity at these points is larger than the green channel intensity. If yes, the circle is red; otherwise it is green:
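In code, the final red/green check can be sketched like this (Python; the `classify` helper and the BGR channel order produced by cv2.imread are assumptions):

```python
import numpy as np

def classify(im_bgr, centers):
    """Label each circle center 'red' or 'green' by comparing the red and
    green channel intensities at that pixel (im_bgr uses OpenCV's BGR order)."""
    labels = []
    for x, y in centers:
        b, g, r = im_bgr[y, x]
        labels.append('red' if r > g else 'green')
    return labels
```

Sampling a small neighbourhood around each center instead of a single pixel would make this more robust to noise.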

Find upper left of image after resizing to fit screen

I have a problem I haven't been able to figure out-
Say I have an image of arbitrary dimensions. I resize it so that it fits inside 1024x768 while keeping the aspect ratio. I center it on the screen. After doing this, how can I find where the upper left corner will end up?
So, if the image is wider than it is tall, we end up with something like
The green rectangle started at a different size. It was resized to fit the pink rectangle. I want to find the upper left corner of the green rectangle.
I wrote a bunch of notes and drew a bunch of diagrams, but I'm getting all the wrong answers. Can someone explain how to do this? I'm using Python 2.7.
Let w,h be the size of your image.
To fit the width of 1024, we must scale the image by:
>>> r=1024./w
However, if the image is taller, after scaling it by r, its height won't fit the screen, so in this case the scaling factor is:
>>> if h*r > 768: r=768./h
The coordinate of the upper left corner of the scaled image is:
>>> (1024-w*r)*0.5,(768-h*r)*0.5
Edit:
A handy function to compute the topleft point (works in Python 2.x as well):
def topLeft(w, h, screenw=1024, screenh=768):
    r = float(screenw) / float(w)
    if h * r > screenh:
        r = float(screenh) / float(h)
    return (screenw - w * r) * 0.5, (screenh - h * r) * 0.5
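As a quick sanity check, the same logic gives sensible corners for a landscape and a square image (expected values worked out by hand):

```python
def top_left(w, h, screenw=1024, screenh=768):
    # Same logic as topLeft above
    r = float(screenw) / float(w)
    if h * r > screenh:
        r = float(screenh) / float(h)
    return (screenw - w * r) * 0.5, (screenh - h * r) * 0.5

corner_a = top_left(2048, 1536)  # scales by 0.5, fills the screen -> (0.0, 0.0)
corner_b = top_left(1024, 1024)  # square image, letterboxed -> (128.0, 0.0)
```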
