Camera calibration, focal length value seems too large - python

I tried a camera calibration with Python and OpenCV to find the camera matrix. I used the following code from this link:
https://automaticaddison.com/how-to-perform-camera-calibration-using-opencv/
import cv2 # Import the OpenCV library to enable computer vision
import numpy as np # Import the NumPy scientific computing library
import glob # Used to retrieve files that have a specified pattern

# Path to the image that you want to undistort
distorted_img_filename = r'C:\Users\uid20832\3.jpg'

# Chessboard dimensions
number_of_squares_X = 10 # Number of chessboard squares along the x-axis
number_of_squares_Y = 7  # Number of chessboard squares along the y-axis
nX = number_of_squares_X - 1 # Number of interior corners along x-axis
nY = number_of_squares_Y - 1 # Number of interior corners along y-axis

# Store vectors of 3D points for all chessboard images (world coordinate frame)
object_points = []

# Store vectors of 2D points for all chessboard images (camera coordinate frame)
image_points = []

# Set termination criteria. We stop either when an accuracy is reached or when
# we have finished a certain number of iterations.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Define real world coordinates for points in the 3D coordinate frame
# Object points are (0,0,0), (1,0,0), (2,0,0) ..., (5,8,0)
object_points_3D = np.zeros((nX * nY, 3), np.float32)

# These are the x and y coordinates
object_points_3D[:, :2] = np.mgrid[0:nY, 0:nX].T.reshape(-1, 2)

def main():
    # Get the file path for images in the calibration directory
    images = glob.glob(r'C:\Users\Kalibrierung\*.jpg')

    # Go through each chessboard image, one by one
    for image_file in images:
        # Load the image
        image = cv2.imread(image_file)

        # Convert the image to grayscale
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # Find the corners on the chessboard
        success, corners = cv2.findChessboardCorners(gray, (nY, nX), None)

        # If the corners are found by the algorithm, draw them
        if success == True:
            # Append object points
            object_points.append(object_points_3D)

            # Find more exact corner pixels
            corners_2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

            # Append image points (use the refined corners)
            image_points.append(corners_2)

            # Draw the corners
            cv2.drawChessboardCorners(image, (nY, nX), corners_2, success)

            # Display the image. Used for testing.
            #cv2.imshow("Image", image)

            # Display the window for a short period. Used for testing.
            #cv2.waitKey(200)

    # Now take a distorted image and undistort it
    distorted_image = cv2.imread(distorted_img_filename)

    # Perform camera calibration to return the camera matrix, distortion coefficients,
    # rotation and translation vectors, etc.
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(object_points,
                                                       image_points,
                                                       gray.shape[::-1],
                                                       None,
                                                       None)
But I think I always get wrong parameters. My focal length is around 1750 in the x and y direction from the calibration. I don't think this can be right, it seems far too large. The camera documentation says the focal length is between 4-7 mm. I am not sure why the calibrated value is so high. Here are some of my photos for the calibration. Maybe something is wrong with them. I moved the chessboard under the camera in different directions, angles and heights.
I was also wondering why I don't need the size of the squares in the code. Can someone explain it to me, or did I forget this input somewhere?

Your misconception is about "focal length". It's an overloaded term.
"focal length" (unit mm) in the optical part: it describes the distance between the lens plane and image/sensor plane, assuming a focus to infinity
"focal length" (unit pixels) in the camera matrix: it describes a scale factor for mapping the real world to a picture of a certain resolution
1750 may very well be correct, if you have a high resolution picture (Full HD or something).
The calculation goes:
f [pixels] = (focal length [mm]) / (pixel pitch [µm / pixel])
(take care of the units and prefixes, 1 mm = 1000 µm)
Example: a Pixel 4a phone, which has 1.40 µm pixel pitch and 4.38 mm focal length, has f = ~3128.57 (= fx = fy).
Another example: A Pixel 4a has a diagonal Field of View of approximately 77.7 degrees, and a resolution of 4032 x 3024 pixels, so that's 5040 pixels diagonally. You can calculate:
f = (5040 / 2) / tan(~77.7° / 2)
f = ~3128.6 [pixels]
And that calculation you can apply to arbitrary cameras for which you know the field of view and picture size. Use horizontal FoV and horizontal resolution if the diagonal resolution is ambiguous. That can happen if the sensor isn't 16:9 but the video you take from it is cropped to 16:9... assuming the crop only crops vertically, and leaves the horizontal alone.
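As a quick sanity check, both calculations can be done in a couple of lines; a minimal sketch, with the Pixel 4a numbers from above filled in as example values:
import math

# From focal length and pixel pitch (mind the units: 1 mm = 1000 µm)
focal_length_mm = 4.38   # example value
pixel_pitch_um = 1.40    # example value
f_from_pitch = (focal_length_mm * 1000) / pixel_pitch_um   # ≈ 3128.6 pixels

# From field of view and resolution along the same direction
fov_deg = 77.7           # diagonal FoV, example value
resolution_px = 5040     # pixels along the diagonal
f_from_fov = (resolution_px / 2) / math.tan(math.radians(fov_deg / 2))  # ≈ 3128.6 pixels

print(f_from_pitch, f_from_fov)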
Why don't you need the size of the chessboard squares in this code? Because it only calibrates the intrinsic parameters (camera matrix and distortion coefficients). Those don't depend on the distance to the board or any other object in the scene.
If you were to calibrate extrinsic parameters, i.e. the distance of cameras in a stereo setup, then you would need to give the size of the squares.
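For reference, the only change needed in the code above to get metric translation vectors later on is to scale the object points by the real square size; a minimal sketch, assuming 25 mm squares (measure your own board):
square_size_mm = 25.0  # assumed value
object_points_3D = np.zeros((nX * nY, 3), np.float32)
object_points_3D[:, :2] = np.mgrid[0:nY, 0:nX].T.reshape(-1, 2) * square_size_mm
# The camera matrix and distortion coefficients are unchanged; only tvecs
# (and e.g. a stereo baseline) come out in millimetres instead of "squares".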

Related

Angle between 2 ArUco markers planes

I want to measure the deviation of the angle of an ArUco marker to a plane defined by a second reference ArUco marker.
A reference ArUco marker (M1) is fixed against a flat wall and a second ArUco marker (M2) is a few centimeters in front of that same wall. I want to know when the marker M2 is deviating more than 10 degrees from the xy plane of M1.
Here is an illustration of the configuration:
To do so, I thought I should calculate the relative rotation between the pose rvec as explained in this post:
Relative rotation between pose (rvec)
that was proposing the following code:
def inversePerspective(rvec, tvec):
    """ Applies perspective transform for given rvec and tvec. """
    R, _ = cv2.Rodrigues(rvec)
    R = np.matrix(R).T
    invTvec = np.dot(R, np.matrix(-tvec))
    invRvec, _ = cv2.Rodrigues(R)
    return invRvec, invTvec

def relativePosition(rvec1, tvec1, rvec2, tvec2):
    """ Get relative position for rvec2 & tvec2. Compose the returned rvec & tvec to use composeRT
    with rvec2 & tvec2 """
    rvec1, tvec1 = rvec1.reshape((3, 1)), tvec1.reshape((3, 1))
    rvec2, tvec2 = rvec2.reshape((3, 1)), tvec2.reshape((3, 1))

    # Inverse the second marker, the right one in the image
    invRvec, invTvec = inversePerspective(rvec2, tvec2)

    info = cv2.composeRT(rvec1, tvec1, invRvec, invTvec)
    composedRvec, composedTvec = info[0], info[1]

    composedRvec = composedRvec.reshape((3, 1))
    composedTvec = composedTvec.reshape((3, 1))
    return composedRvec, composedTvec
Computing the composedRvec, I get the following results:
With both ArUco markers in the same plane (composedRvec values in the top right corner):
With both ArUco markers at a 90 degree angle:
I do not really understand the results:
The (0, 0, 0) composedRvec makes sense when the markers are in the same plane.
But why (0, 1.78, 0) in the second case?
What general condition should I have on the resulting composedRvec to tell me when the angle between the 2 markers is above 10 degrees?
Am I even following the right strategy with the composedRvec?
**** EDIT ***
Results of the 2 markers in the same xy plane with a 40° angle:
||composedRvec||= sqrt(0.619^2+0.529^2+0.711^2)=1.08 rad = 61.87°
**** EDIT 2 ***
By retaking measurements in the 40° angle configuration, I found out that the values fluctuate quite a bit even without modifying the setup or lighting. From time to time, I get the correct values:
||composedRvec||= sqrt(0.019^2+0.012^2+0.74^2)=0.74 rad = 42.4° which is quite accurate.
**** EDIT 3 ***
So here is my final code based on @Gilles-Philippe Paillé's edited answer:
import numpy as np
import cv2
import cv2.aruco as aruco

cap = cv2.VideoCapture(0, cv2.CAP_DSHOW) # Get the camera source
img_path = 'D:/your_path/'

# FILE_STORAGE_READ
cv_file = cv2.FileStorage(img_path + "camera.yml", cv2.FILE_STORAGE_READ)
matrix_coefficients = cv_file.getNode("K").mat()
distortion_coefficients = cv_file.getNode("D").mat()

nb_markers = 2

def track(matrix_coefficients, distortion_coefficients):
    while True:
        ret, frame = cap.read()
        # operations on the frame come here
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # Convert to grayscale
        aruco_dict = aruco.custom_dictionary(nb_markers, 5)
        parameters = aruco.DetectorParameters_create() # Marker detection parameters

        # lists of ids and the corners belonging to each id
        corners, ids, rejected_img_points = aruco.detectMarkers(gray,
            aruco_dict, parameters=parameters, cameraMatrix=matrix_coefficients,
            distCoeff=distortion_coefficients)

        # store rz1 and rz2
        R_list = []

        if np.all(ids is not None): # If there are markers found by detector
            for i in range(0, len(ids)): # Iterate in markers
                # Estimate pose of each marker and return the values rvec and tvec---different from camera coefficients
                rvec, tvec, markerPoints = aruco.estimatePoseSingleMarkers(corners[i], 0.02, matrix_coefficients,
                                                                           distortion_coefficients)
                (rvec - tvec).any() # get rid of that nasty numpy value array error
                aruco.drawDetectedMarkers(frame, corners) # Draw a square around the markers
                aruco.drawAxis(frame, matrix_coefficients, distortion_coefficients, rvec, tvec, 0.01) # Draw axis

                R, _ = cv2.Rodrigues(rvec)
                # convert the (np.matrix(R).T) matrix to an array using np.squeeze(np.asarray()) to get rid of the
                # ValueError: shapes (1,3) and (1,3) not aligned
                R = np.squeeze(np.asarray(np.matrix(R).T))
                R_list.append(R[2])

        # Display the resulting frame
        if len(R_list) == 2:
            print('##############')
            angle_radians = np.arccos(np.dot(R_list[0], R_list[1]))
            angle_degrees = angle_radians * 180 / np.pi
            print(angle_degrees)

        cv2.imshow('frame', frame)

        # Wait up to 3 seconds for an interaction. Check the key and do the corresponding job.
        key = cv2.waitKey(3000) & 0xFF
        if key == ord('q'):
            break

track(matrix_coefficients, distortion_coefficients)
And here are some results:
red -> real angle, white -> measured angle
This is out of the scope of the question, but I find that the pose estimation fluctuates quite a bit. For example, when the 2 markers are against the wall, the values easily jump from 9° to 37° without touching the system.
The result uses the Angle-axis representation, i.e., the norm of the vector is the angle of rotation (what you want), and the direction of the vector is the axis of rotation.
You are looking for θ = ||composedRvec||. Note that the result is in radians. The condition would be ||composedRvec|| > 10*π/180.
Edit: To only consider the angle between the Z-axis of both planes, convert the two rotation vectors rvec1 and rvec2 into matrices and extract the 3rd columns. The angle is then angle_radians = np.arccos(np.dot(rz1, rz2))
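A minimal sketch of that Z-axis comparison, assuming rvec1 and rvec2 are the rotation vectors returned by estimatePoseSingleMarkers for the two markers:
import numpy as np
import cv2

def z_axis_angle_deg(rvec1, rvec2):
    # Convert rotation vectors to rotation matrices
    R1, _ = cv2.Rodrigues(np.asarray(rvec1, dtype=np.float64).reshape(3, 1))
    R2, _ = cv2.Rodrigues(np.asarray(rvec2, dtype=np.float64).reshape(3, 1))
    # The 3rd column of each matrix is the marker's Z-axis in camera coordinates
    rz1, rz2 = R1[:, 2], R2[:, 2]
    # Clip to guard against values marginally outside [-1, 1]
    return np.degrees(np.arccos(np.clip(np.dot(rz1, rz2), -1.0, 1.0)))

# e.g. flag a deviation larger than 10 degrees:
# if z_axis_angle_deg(rvec1, rvec2) > 10: ...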

OpenCV Shape Detection To Shape Transformation (Python)

How can I take an image of a square that has been detected using a shape detection algorithm in OpenCV and "transform" it into a triangle the quickest way possible?
For EXAMPLE, say one of the images from Google is a square and I want to see the fastest way to turn it into a triangle. How would I go about researching this? I have looked up shape transformation for OpenCV but it mostly covers zooming in on the image and changing views.
One way to distort a rectangle to a triangle is to use a perspective transformation in Python/OpenCV.
Read the input
Get the input control points as the 4 corners of the input image
Define the output control points: the top 2 points close to the top center of the output (±1 or whatever separation you want) and the bottom 2 points the same as the input bottom two points
Compute the perspective transformation matrix from the control points
Warp the input to the output
Save the result.
Input:
import cv2
import numpy as np
# Read source image.
img_src = cv2.imread('lena.png')
hh, ww = img_src.shape[:2]
# Four corners of source image ordered clockwise from top left corner
# Coordinates are in x,y system with x horizontal to the right and y vertical downward
pts_src = np.float32([[0,0], [ww-1,0], [ww-1,hh-1], [0,hh-1]])
# Four corners of destination image.
pts_dst = np.float32([[ww/2-1, 0], [ww/2+1,0], [ww-1,hh-1], [0,hh-1]])
# Get perspective matrix if only 4 points
m = cv2.getPerspectiveTransform(pts_src,pts_dst)
# Warp source image to destination based on matrix
img_out = cv2.warpPerspective(img_src, m, (ww,hh), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=(255, 255, 255))
# Save output
cv2.imwrite('lena_triangle.png', img_out)
# Display result
cv2.imshow("Warped Source Image", img_out)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result (though likely not what you want).
If you separate the top two output points by a larger difference, then it will look more like the input.
For example, using ww/2-10 and ww/2+10 rather than ww/2-1 and ww/2+1 for the top two output points, I get:
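Only the destination points change; a small sketch with the apex separation pulled out as a parameter (sep is just an illustrative name):
sep = 10  # half-width of the triangle's flattened apex, in pixels
pts_dst = np.float32([[ww/2-sep, 0], [ww/2+sep, 0], [ww-1, hh-1], [0, hh-1]])
m = cv2.getPerspectiveTransform(pts_src, pts_dst)
img_out = cv2.warpPerspective(img_src, m, (ww, hh), flags=cv2.INTER_LINEAR,
                              borderMode=cv2.BORDER_CONSTANT, borderValue=(255, 255, 255))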

Detect if an OCR text image is upside down

I have some hundreds of images (scanned documents), most of them are skewed. I wanted to de-skew them using Python.
Here is the code I used:
import numpy as np
import cv2
from skimage.transform import radon

filename = 'path_to_filename'

# Load file, converting to grayscale
img = cv2.imread(filename)
I = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = I.shape

# If the resolution is high, resize the image to reduce processing time.
if (w > 640):
    I = cv2.resize(I, (640, int((h / w) * 640)))

I = I - np.mean(I)  # Demean; make the brightness extend above and below zero

# Do the radon transform
sinogram = radon(I)

# Find the RMS value of each row and find "busiest" rotation,
# where the transform is lined up perfectly with the alternating dark
# text and white lines
r = np.array([np.sqrt(np.mean(np.abs(line) ** 2)) for line in sinogram.transpose()])
rotation = np.argmax(r)
print('Rotation: {:.2f} degrees'.format(90 - rotation))

# Rotate and save with the original resolution
M = cv2.getRotationMatrix2D((w/2, h/2), 90 - rotation, 1)
dst = cv2.warpAffine(img, M, (w, h))
cv2.imwrite('rotated.jpg', dst)
This code works well with most of the documents, except with some angles: (180 and 0) and (90 and 270) are often detected as the same angle (i.e. it does not distinguish between 180 and 0, or between 90 and 270). So I get a lot of upside-down documents.
Here is an example:
The resulting image that I get is the same as the input image.
Is there any suggestion to detect if an image is upside down using Opencv and Python?
PS: I tried to check the orientation using EXIF data, but it didn't lead to any solution.
EDIT:
It is possible to detect the orientation using Tesseract (pytesseract for Python), but it is only possible when the image contains a lot of characters.
For anyone who may need this:
import cv2
import pytesseract
print(pytesseract.image_to_osd(cv2.imread(file_name)))
If the document contains enough characters, it is possible for Tesseract to detect the orientation. However, when the image has few lines, the orientation angle suggested by Tesseract is usually wrong. So this can not be a 100% solution.
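If you do use it, the rotation can be pulled out of the OSD string; a minimal sketch that simply parses the 'Rotate:' line Tesseract prints (file_name as above):
import re
import cv2
import pytesseract

osd = pytesseract.image_to_osd(cv2.imread(file_name))
rotation = int(re.search(r'Rotate: (\d+)', osd).group(1))  # 0, 90, 180 or 270
if rotation == 180:
    print('Image appears to be upside down')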
Python3/OpenCV4 script to align scanned documents.
Rotate the document and sum the rows. When the document has 0 and 180 degrees of rotation, there will be a lot of black pixels in the image:
Use a score keeping method. Score each image for its likeness to a zebra pattern. The image with the best score has the correct rotation. The image you linked to was off by 0.5 degrees. I omitted some functions for readability; the full code can be found here.
# Rotate the image around in a circle
angle = 0
scores = []
while angle <= 360:
    # Rotate the source image
    img = rotate(src, angle)
    # Crop the center 1/3rd of the image (roi is filled with text)
    h, w = img.shape
    buffer = min(h, w) - int(min(h, w)/1.15)
    roi = img[int(h/2-buffer):int(h/2+buffer), int(w/2-buffer):int(w/2+buffer)]
    # Create background to draw transform on
    bg = np.zeros((buffer*2, buffer*2), np.uint8)
    # Compute the sums of the rows
    row_sums = sum_rows(roi)
    # High score --> Zebra stripes
    score = np.count_nonzero(row_sums)
    scores.append(score)
    # Image has best rotation so far
    if score <= min(scores):
        # Save the rotated image
        print('found optimal rotation')
        best_rotation = img.copy()
    k = display_data(roi, row_sums, buffer)
    if k == 27: break
    # Increment angle and try again
    angle += .75
cv2.destroyAllWindows()
How to tell if the document is upside down? Fill in the area from the top of the document to the first non-black pixel in the image. Measure the area in yellow. The image that has the smallest area will be the one that is right-side-up:
# Find the area from the top of page to top of image
_, bg = area_to_top_of_text(best_rotation.copy())
right_side_up = sum(sum(bg))
# Flip image and try again
best_rotation_flipped = rotate(best_rotation, 180)
_, bg = area_to_top_of_text(best_rotation_flipped.copy())
upside_down = sum(sum(bg))
# Keep the orientation with the smaller area
if right_side_up < upside_down: aligned_image = best_rotation
else: aligned_image = best_rotation_flipped
# Save aligned image
cv2.imwrite('/home/stephen/Desktop/best_rotation.png', 255-aligned_image)
cv2.destroyAllWindows()
Assuming you did run the angle-correction already on the image, you can try the following to find out if it is flipped:
Project the corrected image to the y-axis, so that you get a 'peak' for each line. Important: There are actually almost always two sub-peaks!
Smooth this projection by convolving with a gaussian in order to get rid of fine structure, noise, etc.
For each peak, check if the stronger sub-peak is on top or at the bottom.
Calculate the fraction of peaks that have sub-peaks on the bottom side. This is your scalar value that gives you the confidence that the image is oriented correctly.
The peak finding in step 3 is done by finding sections with above average values. The sub-peaks are then found via argmax.
Here's a figure to illustrate the approach on a few lines of your example image:
Blue: Original projection
Orange: smoothed projection
Horizontal line: average of the smoothed projection for the whole image.
Here's some code that does this:
import cv2
import numpy as np
# load image, convert to grayscale, threshold it at 127 and invert.
page = cv2.imread('Page.jpg')
page = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)
page = cv2.threshold(page, 127, 255, cv2.THRESH_BINARY_INV)[1]
# project the page to the side and smooth it with a gaussian
projection = np.sum(page, 1)
gaussian_filter = np.exp(-(np.arange(-3, 3, 0.1)**2))
gaussian_filter /= np.sum(gaussian_filter)
smooth = np.convolve(projection, gaussian_filter)
# find the pixel values where we expect lines to start and end
mask = smooth > np.average(smooth)
edges = np.convolve(mask, [1, -1])
line_starts = np.where(edges == 1)[0]
line_endings = np.where(edges == -1)[0]
# count lines with peaks on the lower side
lower_peaks = 0
for start, end in zip(line_starts, line_endings):
    line = smooth[start:end]
    if np.argmax(line) < len(line)/2:
        lower_peaks += 1
print(lower_peaks / len(line_starts))
This prints 0.125 for the given image, so it is not oriented correctly and must be flipped.
Note that this approach might break badly if there are images or anything not organized in lines in the image (maybe math or pictures). Another problem would be too few lines, resulting in bad statistics.
Also different fonts might result in different distributions. You can try this on a few images and see if the approach works. I don't have enough data.
You can use the Alyn module. To install it:
pip install alyn
Then to use it to deskew images (taken from the homepage):
from alyn import Deskew
d = Deskew(
    input_file='path_to_file',
    display_image='preview the image on screen',
    output_file='path_for_deskewed image',
    r_angle='offset_angle_in_degrees_to_control_orientation')
d.run()
Note that Alyn is only for deskewing text.

How do I programmatically find the pixel locations of specific features in an image?

I'm building an automated electricity / gas meter reader using OpenCV and Python. I've got as far as taking shots with a webcam:
I can then use an affine transform to unwarp the image (an adaptation of this example):
def unwarp_image(img):
    rows, cols = img.shape[:2]
    # Source points
    left_top = 12
    left_bottom = left_top + 2
    top_left = 24
    top_right = 13
    bottom = 47
    right = 180
    srcTri = np.array([(left_top, top_left), (right, top_right), (left_bottom, bottom)], np.float32)
    # Corresponding destination points. Remember, both sets are of float32 type
    dst_height = 30
    dstTri = np.array([(0, 0), (cols-1, 0), (0, dst_height)], np.float32)
    # Affine transformation
    warp_mat = cv2.getAffineTransform(srcTri, dstTri)  # Generating affine transform matrix of size 2x3
    dst = cv2.warpAffine(img, warp_mat, (cols, dst_height))  # Now transform the image, notice dsize=(cols, dst_height), not (rows, cols)
    #cv2.imshow("crop_img", dst)
    #cv2.waitKey(0)
    return dst
..which gives me an image something like this:
I still need to extract the text using some sort of OCR routine but first I'd like to automate the part that identifies what pixel locations to apply the affine transform to. So if someone knocks the webcam it doesn't stop the software working.
Since your image is pretty much planar, you can look into finding the homography between the image you get from the webcam and the desired image (in the upright position).
Edit: This will rotate the image in the upright position. Once you've registered your image (brought it in the upright position), you could do row-wise or column-wise projections (sum all the pixels along the columns to get one vector, sum all the pixels along the rows to get one vector). You can use these vectors to figure out where you have a jump in color, and crop it there.
Alternatively you can use the Hough transform, which gives you lines in an image. You can probably get away with not registering the image if you do this.
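A hedged sketch of the homography idea, matching ORB features of the current shot against a stored reference shot taken in the desired position (reference.png and frame.png are placeholder file names):
import cv2
import numpy as np

ref = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE)  # meter in the desired position
cur = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)      # current webcam shot

# Detect and match ORB features between the two images
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(cur, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:100]

# Estimate the homography and warp the current frame onto the reference
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
registered = cv2.warpPerspective(cur, H, (ref.shape[1], ref.shape[0]))
With the frame registered this way, the pixel locations used by the affine transform above stay valid even if the webcam gets knocked slightly.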

Python OpenCV - Find black areas in a binary image

Is there any method/function in the Python wrapper of OpenCV that finds black areas in a binary image? (like regionprops in Matlab)
Up to now I load my source image, transform it into a binary image via threshold and then invert it to highlight the black areas (that now are white).
I can't use third party libraries such as cvblobslob or cvblob
Basically, you use the findContours function, in combination with many other functions OpenCV provides especially for this purpose.
Useful functions used (surprise, surprise, they all appear on the Structural Analysis and Shape Descriptors page in the OpenCV Docs):
findContours
drawContours
moments
contourArea
arcLength
boundingRect
convexHull
fitEllipse
Example code (I have all the properties from Matlab's regionprops except WeightedCentroid and EulerNumber; you could work out EulerNumber by using cv2.RETR_TREE in findContours and looking at the resulting hierarchy, and I'm sure WeightedCentroid wouldn't be that hard either):
# grab contours
cs, _ = cv2.findContours(BW.astype('uint8'), mode=cv2.RETR_LIST,
                         method=cv2.CHAIN_APPROX_SIMPLE)
# set up the 'FilledImage' bit of regionprops.
filledI = np.zeros(BW.shape[0:2]).astype('uint8')
# set up the 'ConvexImage' bit of regionprops.
convexI = np.zeros(BW.shape[0:2]).astype('uint8')
# for each contour c in cs:
# will demonstrate with cs[0] but you could use a loop.
i=0
c = cs[i]
# calculate some things useful later:
m = cv2.moments(c)
# ** regionprops **
Area = m['m00']
Perimeter = cv2.arcLength(c,True)
# bounding box: x,y,width,height
BoundingBox = cv2.boundingRect(c)
# centroid = m10/m00, m01/m00 (x,y)
Centroid = ( m['m10']/m['m00'],m['m01']/m['m00'] )
# EquivDiameter: diameter of circle with same area as region
EquivDiameter = np.sqrt(4*Area/np.pi)
# Extent: ratio of area of region to area of bounding box
Extent = Area/(BoundingBox[2]*BoundingBox[3])
# FilledImage: draw the region on in white
cv2.drawContours( filledI, cs, i, color=255, thickness=-1 )
# calculate indices of that region..
regionMask = (filledI==255)
# FilledArea: number of pixels filled in FilledImage
FilledArea = np.sum(regionMask)
# PixelIdxList : indices of region.
# (np.array of xvals, np.array of yvals)
PixelIdxList = regionMask.nonzero()
# CONVEX HULL stuff
# convex hull vertices
ConvexHull = cv2.convexHull(c)
ConvexArea = cv2.contourArea(ConvexHull)
# Solidity := Area/ConvexArea
Solidity = Area/ConvexArea
# convexImage -- draw on convexI
cv2.drawContours(convexI, [ConvexHull], -1,
                 color=255, thickness=-1)
# ELLIPSE - determine best-fitting ellipse.
centre,axes,angle = cv2.fitEllipse(c)
MAJ = np.argmax(axes) # this is MAJor axis, 1 or 0
MIN = 1-MAJ # 0 or 1, minor axis
# Note: axes length is 2*radius in that dimension
MajorAxisLength = axes[MAJ]
MinorAxisLength = axes[MIN]
Eccentricity = np.sqrt(1-(axes[MIN]/axes[MAJ])**2)
Orientation = angle
EllipseCentre = centre # x,y
# ** if an image is supplied with the BW:
# Max/Min Intensity (only meaningful for a one-channel img..)
MaxIntensity = np.max(img[regionMask])
MinIntensity = np.min(img[regionMask])
# Mean Intensity
MeanIntensity = np.mean(img[regionMask],axis=0)
# pixel values
PixelValues = img[regionMask]
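A short usage sketch that loops over every contour instead of just cs[0] and collects a few of the properties above into a list (the dictionary keys are arbitrary names):
regions = []
for i, c in enumerate(cs):
    m = cv2.moments(c)
    if m['m00'] == 0:   # skip degenerate contours with zero area
        continue
    regions.append({
        'Area': m['m00'],
        'Centroid': (m['m10'] / m['m00'], m['m01'] / m['m00']),
        'BoundingBox': cv2.boundingRect(c),
        'Perimeter': cv2.arcLength(c, True),
    })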
After inverting the binary image to turn the black areas white, apply the cv.FindContours function. It will give you the boundaries of the region you need.
Later you can use cv.BoundingRect to get the minimum bounding rectangle around the region. Once you have the rectangle vertices, you can find its center etc.
Or to find the centroid of the region, use the cv.Moment function after finding contours. Then use cv.GetSpatialMoments in the x and y directions. It is explained in the OpenCV manual.
To find the area, use the cv.ContourArea function.
Transform it to a binary image using threshold with the CV_THRESH_BINARY_INV flag; you get threshold + inversion in one step.
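A minimal example of that one-step threshold + inversion, using the cv2 name of the flag (THRESH_BINARY_INV) and a placeholder file name:
import cv2
gray = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)  # black areas become white (255)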
If you can consider using another free library, you could use SciPy. It has a very convenient way of counting areas:
from scipy import ndimage
def count_labels(self, mask_image):
    """This function returns the count of labels in a mask image."""
    label_im, nb_labels = ndimage.label(mask_image)
    return nb_labels
If necessary you can use:
import cv2 as opencv
image = opencv.inRange(image, lower_threshold, upper_threshold)
beforehand to get a mask image, which contains only black and white, where white marks the objects in the given range.
I know this is an old question, but for completeness I wanted to point out that cv2.moments() will not always work for small contours. In this case, you can use cv2.minEnclosingCircle() which will always return the center coordinates (and radius), even if you have only a single point. Slightly more resource-hungry though, I think...
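A short sketch of that fallback, reusing the contour c from the example above:
(cx, cy), radius = cv2.minEnclosingCircle(c)  # works even for a single-point contour
m = cv2.moments(c)
if m['m00'] != 0:
    centroid = (m['m10'] / m['m00'], m['m01'] / m['m00'])
else:
    centroid = (cx, cy)  # fall back when the contour area is zero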
