I plan to calculate the surface area of this cone by:
1. splitting the image into top and bottom halves of the cone,
2. finding the brightest spot in each half,
3. taking the distance between the two brightest spots as a diameter for every pixel column along the x axis, and using it to compute a dS that is summed into the total surface area S.
However, this appears unreliable at the extremities (the tip and the base). How can I make it more reliable at the base and tip, or is my approach entirely wrong?
Edit: I want it to truncate in the black space at both ends.
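A rough sketch of this per-column approach, where the grayscale file name cone.png and the brightness cut-off of 50 are placeholders, and dS is taken as the standard lateral surface of revolution, pi * d(x) * sqrt(1 + (d'(x)/2)^2) dx:

import numpy as np
import cv2

# Assumed input: grayscale image where the cone is brighter than the background.
img = cv2.imread("cone.png", cv2.IMREAD_GRAYSCALE)
h, w = img.shape
mid = h // 2

diameters = []
for x in range(w):
    top = img[:mid, x]
    bottom = img[mid:, x]
    # Skip columns that are entirely black space (this truncates at the tip and base).
    if top.max() < 50 or bottom.max() < 50:
        diameters.append(0.0)
        continue
    y_top = np.argmax(top)              # brightest pixel in the top half
    y_bottom = mid + np.argmax(bottom)  # brightest pixel in the bottom half
    diameters.append(float(y_bottom - y_top))

d = np.array(diameters)
# Lateral surface of revolution: dS = pi * d(x) * sqrt(1 + (d'(x)/2)^2) dx
slope = np.gradient(d) / 2.0
S = np.sum(np.pi * d * np.sqrt(1.0 + slope ** 2))
print("estimated surface area (pixel^2):", S)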
I would try applying a filter that makes cone pixels white and all other pixels black (i.e. produces a binary image). After that, the area of the cone is just the sum of the white pixels.
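For example, a minimal sketch with OpenCV in Python (the filename is an assumption; Otsu's method picks the threshold automatically):

import cv2

img = cv2.imread("cone.png", cv2.IMREAD_GRAYSCALE)
# Cone pixels become 255 (white), background becomes 0 (black).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
area = cv2.countNonZero(binary)   # the area is the number of white pixels
print("cone area in pixels:", area)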
I am trying to count all the red dots in the areas between two concentric circles. Finding the red dots is easy: I simply loop over every pixel looking for the red color. The problem is working out which region a dot lies in, especially when I try to cover all the areas between the circles.
Code as below:
from PIL import Image

img2 = Image.open(r"C:\Python27\Image.png")
pixels = list(img2.getdata())
for pixel in pixels:
    if pixel == (255, 0, 0):
        print(pixel)
Below you can see the sample picture I'm using to try my algorithm.
If you know where the circles' center is, simply calculate the distance between each red dot and the center. This tells you which circle band each dot is in.
If you don't know where the circles are, apply a technique for finding circles, for example the Hough transform.
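A rough sketch of both cases in Python, assuming the circles are actually detected and the dots are pure red (the HoughCircles parameters are placeholders that would need tuning):

import numpy as np
import cv2

img = cv2.imread("Image.png")          # BGR
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Case 2: find the circles (and hence the common center) with a Hough transform.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=100, param2=30, minRadius=5, maxRadius=0)
circles = np.round(circles[0]).astype(int)    # each row: (x, y, r)
cx, cy = circles[0][0], circles[0][1]         # concentric, so any detected center works
radii = sorted(c[2] for c in circles)

# Case 1: classify each red dot by its distance from the center.
ys, xs = np.where((img[:, :, 2] == 255) & (img[:, :, 1] == 0) & (img[:, :, 0] == 0))
for x, y in zip(xs, ys):
    dist = np.hypot(x - cx, y - cy)
    band = np.searchsorted(radii, dist)   # 0 = inside smallest circle, 1 = next band, ...
    print((x, y), "is in band", band)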
If you start scanning a single row of pixels, halfway down the image, from left to right, you can determine when a pixel is black.
When you record a series of white, then black, then white pixels, you know you've found the edge of a circle. Scanning the same row from right to left lets you find the opposite side of the circle. Then you can calculate the equation of that circle from the diameter.
If you keep recording each circle as you move towards the center, you'll find the equation of every circle. Then, when you find red pixels, you can determine which area they belong to using the (x, y) coordinates of the pixel and the equations of the circles.
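A sketch of that scanning idea, assuming dark circle outlines on a light background and a scan row that passes through the common center (both are assumptions):

import numpy as np
import cv2

gray = cv2.imread("Image.png", cv2.IMREAD_GRAYSCALE)
h, w = gray.shape
dark = gray[h // 2, :] < 128            # True where the scan line crosses a dark outline

# Midpoint of each dark run = where the scan line crosses a circle outline.
d = np.diff(dark.astype(int))
starts = np.where(d == 1)[0] + 1
ends = np.where(d == -1)[0] + 1
crossings = (starts + ends) / 2.0

# Pair the outermost left crossing with the outermost right one, and so on
# inwards; each pair spans one circle's diameter along this row.
for i in range(len(crossings) // 2):
    left, right = crossings[i], crossings[-(i + 1)]
    radius = (right - left) / 2.0
    cx, cy = (right + left) / 2.0, h // 2
    print("circle: (x - %.1f)^2 + (y - %d)^2 = %.1f^2" % (cx, cy, radius))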
I am new to OpenCV with Python and am trying to get the shape of a contour in an image.
Considering only regular shapes like squares, rectangles, circles and triangles, is there any way to get the contour shape using only the numpy and cv2 libraries?
I also want to find the colour inside a contour. How can I do that?
For finding area of a contour there is an inbuilt function: cv2.contourArea(cnt).
Are there inbuilt functions for "contour shape" and "colour inside a contour" as well?
Please help!
Note: the images I am considering contain multiple regular shapes.
This method might be longer, but it is what comes to mind right now. For finding the contour shape, use the findContours function; it gives a vector of points as output (the boundary points of each contour). Then find the center of the contour using moments.
To find contours, use this function:
cv2.findContours(image, mode, method[, contours[, hierarchy[, offset]]])
where image is the Canny output image.
Calculate the center from the moments; refer to this link:
http://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html
Calculate the distance of each point stored in the contour from the center.
Now classify the shapes by comparing the distances of the points from the center:
1) Circle - all contour points will be roughly at an equal distance from the center.
2) Square, rectangle - find the four points farthest from the center. These points are the vertices and will be at approximately the same distance. Then differentiate a square from a rectangle using the edge lengths.
3) Triangle - this can be tricky for different types of triangle, so you can just use an else condition here, since you have only four shapes.
For finding the colour, use the vertices of the square, rectangle or triangle to create a mask.
Since each shape has a single colour, you can take a small patch around the center and get the average value of the RGB pixels there.
Assume the center is at (100, 100) and the shape is a circle with a radius of 20 pixels. Create a patch of size, say, 10 x 10 centered at (100, 100) and find the average R, G and B values in this patch.
for red: R ~ 255, G ~ 0, B ~ 0
for green: R ~ 0, G ~ 255, B ~ 0
for blue: R ~ 0, G ~ 0, B ~ 255
Note: OpenCV stores values as BGR, not RGB.
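Putting those steps together, a rough sketch (written for OpenCV 4, where findContours returns two values; the image name, Canny thresholds and the 0.05 spread cut-off are assumptions, and cv2.approxPolyDP is used here as a stand-in for the "four farthest points" check):

import numpy as np
import cv2

img = cv2.imread("shapes.png")                       # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

for cnt in contours:
    # Center of the contour from its moments.
    m = cv2.moments(cnt)
    if m["m00"] == 0:
        continue
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Distance of every boundary point from the center.
    pts = cnt.reshape(-1, 2).astype(float)
    dists = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)

    if dists.std() / dists.mean() < 0.05:            # roughly constant distance -> circle
        shape = "circle"
    else:
        # Stand-in for the "four farthest points" step: count the vertices.
        approx = cv2.approxPolyDP(cnt, 0.04 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4:
            x, y, w, h = cv2.boundingRect(cnt)
            shape = "square" if abs(w - h) < 0.1 * max(w, h) else "rectangle"
        else:
            shape = "triangle"                       # the only remaining shape

    # Average color in a small patch around the center (OpenCV stores BGR).
    patch = img[int(cy) - 5:int(cy) + 5, int(cx) - 5:int(cx) + 5]
    b, g, r = patch.reshape(-1, 3).mean(axis=0)
    print(shape, "average color B=%.0f G=%.0f R=%.0f" % (b, g, r))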
For finding the shape of a particular contour, we can draw a bounding rectangle around the contour.
Then we can compare the area of the contour with the area of the bounding rectangle.
If the area of the contour is about half the area of the bounding rectangle, the shape is a triangle.
If the area of the contour is less than the area of the bounding rectangle but greater than half of it, it is a circle.
Note: this method is limited to regular triangles and circles; it does not apply to polygons like hexagons, heptagons, etc.
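A minimal sketch of that comparison (OpenCV 4 findContours signature; the filename, threshold and the 0.9 and 0.6 ratio cut-offs are assumptions, and a near-1 ratio for squares/rectangles is added for completeness):

import cv2

img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)   # assumed input image
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    area = cv2.contourArea(cnt)
    x, y, w, h = cv2.boundingRect(cnt)
    ratio = area / float(w * h)        # contour area / bounding-rectangle area
    if ratio > 0.9:
        shape = "square or rectangle"  # fills almost the whole bounding rectangle
    elif ratio > 0.6:
        shape = "circle"               # pi/4 ~ 0.785 of the bounding rectangle
    else:
        shape = "triangle"             # roughly half of the bounding rectangle
    print(shape, "ratio = %.2f" % ratio)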
Is it possible for the pixel values of an image to change after the image is rotated? I rotated an image by 13 degrees, picked a random pixel value X before the rotation, then brute-force searched the rotated image and could not find any pixel with the same value as X. So, can pixel values change after a rotation? I rotate with the OpenCV library in Python.
Any help would be appreciated.
Yes, it is possible for the initial pixel value not to be found in the transformed image.
To understand why this happens, remember that pixels are not infinitely small dots; they are rectangles with horizontal and vertical sides and a small but non-zero width and height.
After a 13-degree rotation, these rectangles (which have a constant color inside) no longer have horizontal and vertical sides.
Therefore an approximation has to be made in order to represent the rotated image using pixels of constant color with horizontal and vertical sides. In practice this approximation is done by interpolation (bilinear by default in OpenCV), which blends neighbouring pixel values and can therefore produce values that never appeared in the original image.
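A quick way to see this in Python (the filename is an assumption; cv2.warpAffine uses bilinear interpolation by default):

import cv2
import numpy as np

img = cv2.imread("input.png")                          # assumed input image (BGR)
h, w = img.shape[:2]

# Rotate by 13 degrees about the image center.
M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), 13, 1.0)
rotated = cv2.warpAffine(img, M, (w, h))               # bilinear interpolation by default

y, x = h // 3, w // 3                                  # an arbitrary pixel to track
value = img[y, x]                                      # a (B, G, R) triple
found = np.any(np.all(rotated.reshape(-1, 3) == value, axis=1))
print("original value:", value, "found anywhere after rotation:", bool(found))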
If you just rotate the image within the same plane, the pixel values will remain the same. Simple maths.
I'm trying to integrate over the area of a circular aperture superimposed on an array of pixels (see image below). However, I need to determine the fraction of flux (area) inside the circular aperture, and leave out anything outside the circular aperture in each square pixel on the boundary of the circle.
How would I go about coding this in numpy/python such that I am getting an accurate measure of the flux inside the circle?
Calculate the proportion of each pixel that is within the circle using calculus. (Integrate the equation of your circle between the left and right boundaries of each pixel.)
1. Draw a white circle on a black background, with the radius you're after, in an image editor of your choice, and save the output as a bitmap.
2. Load the image in your code with scipy.misc.imread (removed in newer SciPy releases; imageio.imread is the current equivalent) and divide the pixel values by 255 so you have a mask in the range 0.0...1.0.
3. Calculate the sum of the product of that mask with your data to integrate.
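A sketch of that recipe (the filenames are assumptions; imageio is used here since scipy.misc.imread is gone from recent SciPy):

import numpy as np
import imageio

# White circle on black background, drawn in an image editor at the same
# size as the data array and saved as aperture_mask.png.
mask = imageio.imread("aperture_mask.png").astype(float)
if mask.ndim == 3:                  # drop colour channels if the editor saved RGB
    mask = mask[:, :, 0]
mask /= 255.0                       # 0.0 outside the circle ... 1.0 inside it

data = np.load("flux.npy")          # assumed: the pixel array to integrate over
flux_inside = np.sum(data * mask)   # weighted sum = flux inside the aperture
print("flux inside the aperture:", flux_inside)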
I work with the OpenCV library in Python.
The question is: how do I select, as a separate ROI, the area between two curves?
The curves are defined by two quadratic polynomials.
I want to find the count of black pixels in the area bounded by curve 1 and curve 2.
You can create the mask by drawing ellipses, but you need the following data from your equations:
center – center of the ellipse (here I used the center of the image).
axes – half the size of the ellipse's main axes (here I used image size/2 and image size/4 respectively for the two curves).
angle – ellipse rotation angle in degrees (here I used 0).
startAngle – starting angle of the elliptic arc in degrees (here I used 0).
endAngle – ending angle of the elliptic arc in degrees (here I used -180).
If you have the above data for both curves, you can simply draw the ellipses with thickness=CV_FILLED, like this:
First draw the largest ellipse with color=255.
Then draw the second ellipse with color=0.
See an example:
Mat src(480, 640, CV_8UC3, Scalar(0, 0, 0));  // black canvas
// Outer filled half-ellipse.
ellipse(src, Point(src.cols/2, src.rows/2), Size(src.cols/2, src.rows/2), 0, 0, -180, Scalar(0, 0, 255), -1, 8, 0);
// Inner filled half-ellipse drawn back in black, leaving only the band between the curves.
ellipse(src, Point(src.cols/2, src.rows/2), Size(src.cols/4, src.rows/4), 0, 0, -180, Scalar(0, 0, 0), -1, 8, 0);
Draw it on a single-channel image if you want to use it as a mask.
Edit:
To find the area, draw the above on a single-channel image with color=255.
Then use countNonZero to get the white pixel count.
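Since the question uses Python, here is a rough equivalent sketch with cv2.ellipse (the source image name is an assumption; the center and axes are the same placeholders as in the C++ example):

import cv2
import numpy as np

src = cv2.imread("curves.png", cv2.IMREAD_GRAYSCALE)    # assumed image containing the curves
h, w = src.shape

# Single-channel mask: filled outer half-ellipse in white, inner one drawn back to black.
mask = np.zeros((h, w), dtype=np.uint8)
cv2.ellipse(mask, (w // 2, h // 2), (w // 2, h // 2), 0, 0, -180, 255, -1)
cv2.ellipse(mask, (w // 2, h // 2), (w // 4, h // 4), 0, 0, -180, 0, -1)

band_area = cv2.countNonZero(mask)                             # white pixel count = area of the band
black_in_band = int(np.count_nonzero(src[mask == 255] < 128))  # dark pixels inside the band
print("band area:", band_area, " black pixels between the curves:", black_in_band)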