I'm trying to use Python to detect how many objects are on a white surface. An example image is found at the end of this post.
I'm wondering how I should do this, mainly because the background is white and most of the time it gets detected as foreground.
What I have now in Python, based on this tutorial (http://pythonvision.org/basic-tutorial), uses several libraries and detects the white surface as the object (so the count is 1), while the tools are detected as background and thus ignored:
import mahotas
import pymorph
import pylab
from scipy import ndimage

dna = mahotas.imread('dna.jpeg')
dna = dna.squeeze()
dna = pymorph.to_gray(dna)
print(dna.shape)
print(dna.dtype)
print(dna.max())
print(dna.min())
dnaf = ndimage.gaussian_filter(dna, 8)          # smooth before thresholding
T = mahotas.thresholding.otsu(dnaf)             # global Otsu threshold
labeled, nr_objects = ndimage.label(dnaf > T)   # label connected regions above the threshold
print(nr_objects)
pylab.imshow(labeled)
pylab.jet()
pylab.show()
Are there any options for getting the white part as background and the tools as foreground?
Thanks in advance!
Example image:
The segmented image, where red is foreground and blue is background (the fact that a few tools merge is not a problem):
If the shadow is not a problem
You can label the images in mahotas (http://mahotas.readthedocs.org/en/latest/labeled.html) given this binary image. You can also use skimage.morphology (which uses ndimage.label, as was mentioned in the comments): http://scikit-image.org/docs/dev/auto_examples/plot_label.html
These are examples of connected-component algorithms and they are standard in any general image processing package. ImageJ also makes this quite simple.
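A minimal sketch with scikit-image, assuming `binary` is a thresholded foreground mask (e.g. dnaf < T, so that the dark tools are foreground):

from skimage import measure

labels, nr_objects = measure.label(binary, return_num=True)  # connected-component labeling
print(nr_objects)                                            # number of detected tools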
If the shadow is a problem
Otsu thresholding returns a single value: a pixel brightness, and all you're doing is keeping all pixels that are dimmer than this threshold. This method is getting tripped up by your shadows, so you need to try another segmentation algorithm, preferably one that does local segmentation (i.e., it segments small regions of the image individually).
Adaptive or local methods don't have this problem and would be really well-suited to your image's shadows:
http://scikit-image.org/docs/dev/auto_examples/plot_threshold_adaptive.html#example-plot-threshold-adaptive-py
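A sketch of local thresholding with scikit-image, assuming `dna` is the grayscale image from the question; the block size is a guess and will need tuning (older scikit-image versions call this filters.threshold_adaptive):

from skimage import filters

local_thresh = filters.threshold_local(dna, block_size=151)  # per-pixel local threshold
binary = dna < local_thresh                                  # tools are darker than the white background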
In mahotas there should be other segmentation methods, but I'm only knowledgeable about scikit-image. If you want a serious overview of segmentation, check out this paper: https://peerj.com/preprints/671/
Full disclosure, it's my paper.
Related
I am working with an IR camera and am trying to find out if we have any lens distortion. I am using the example from OpenCV here to guide my work. I used a chessboard template from here and attached it to the back of a book. Before taking any images I heated the book/paper and observed that the checkerboard pattern was coming through very clearly.
I took ~50 still frames with the chessboard pattern tilted/moved so that every part of the frame contained some part of the pattern. An example of one of my images is below:
I used the following code which resulted in False for every image. I tried every combination of grid pattern sizes from (5-9, 5-9).
import numpy as np
import cv2
import glob2 as glob
import matplotlib.pyplot as plt

base = 'pathtoimages/'
files = glob.glob(base + '*.png')

for file in files:
    img = cv2.imread(file)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (6, 8), None)
    print(ret)
I can't figure out why the algorithm is not finding the corners. Any ideas?
Edit May 30, 2019:
Today I took some more images with the camera. I took the photos in a more controlled environment without any external light sources present. These new images still fail findChessboardCorners detection. I tried increasing the contrast and brightness using cv2.convertScaleAbs to produce the following image as an example.
This also fails. If I use cv2.goodFeaturesToTrack to find corners I get the following result:
It seems like the OpenCV corner detection algorithm is actively avoiding my chessboard corners. It will find any corner it can before finding one on the chessboard. I am truly stumped here.
As a sanity check I made sure that OpenCV can find the corners on the original chessboard I am using, and it worked perfectly.
Any ideas?
Edit June 4, 2019:
I ended up writing a script that allows me to manually assign each of the corners. I was able to get the camera distortion model successfully. I still have no solution for why the corners couldn't automatically be found by OpenCV. I think if I were to do this again in IR, I would make a custom grid that increases the contrast between grid cells by using different materials for the "white" and "black" cells, simply because of their differing thermal properties.
I got findChessboardCorners to work with 2 adjustments.

1. As said in one of the comments, OpenCV expects a white border around the chessboard. To achieve that you can 'invert' the image, essentially creating the equivalent of a photographic negative. Before I called findChessboardCorners, I did:
image_inverted = numpy.array(255 - image_original, dtype=numpy.uint8)

2. Only use the cv2.CALIB_CB_ADAPTIVE_THRESH flag when calling findChessboardCorners. The CALIB_CB_FAST_CHECK and CALIB_CB_NORMALIZE_IMAGE flags seem to make findChessboardCorners not find the corners.
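A hedged sketch putting both adjustments together (the filename is hypothetical; the (6, 8) grid size is taken from the question):

import numpy as np
import cv2

img = cv2.imread('ir_frame.png', cv2.IMREAD_GRAYSCALE)   # hypothetical filename
inverted = np.array(255 - img, dtype=np.uint8)            # photographic negative -> white border
ret, corners = cv2.findChessboardCorners(inverted, (6, 8), flags=cv2.CALIB_CB_ADAPTIVE_THRESH)
print(ret)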
Good Luck
I am working on object detection in Python with OpenCV.
I have two pictures:
1. The reference picture with no object in it.
2. The picture with the object.
3. The result of subtracting the two images is:
The problem is, the pattern of the reference image is now on my objects. I want to remove this pattern and I don't know how to do it. For further image processing I need the correct outline of the objects.
Maybe you know how to fix it, or have better ideas to extract the object.
I would be glad for your help.
Edit: 4. A black object:
As @Mark Setchell commented, the difference of the two images shows which pixels contain the object; you shouldn't try to use it as the output. Instead, find the pixels with a significant difference, and then read those pixels directly from the input image.
Here, I'm using Otsu thresholding to find what "significant difference" is. There are many other ways to do this. I then use the inverse of the mask to blank out pixels in the input image.
import PyDIP as dip
bg = dip.ImageReadTIFF('background.tif')
bg = bg.TensorElement(1) # The image has 3 channels, let's use just the green one
fg = dip.ImageReadTIFF('object.tif')
fg = fg.TensorElement(1)
mask = dip.Abs(bg - fg) # Difference between the two images
mask, t = dip.Threshold(mask, 'otsu') # Find significant differences only
mask = dip.Closing(mask, 7) # Smooth the outline a bit
fg[~mask] = 0 # Blank out pixels not in the mask
I'm using PyDIP above, not OpenCV, because I don't have OpenCV installed. You can easily do the same with OpenCV.
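For reference, a rough OpenCV/NumPy equivalent of the steps above (a sketch, not tested; the filenames and the green-channel choice mirror the PyDIP snippet, and the kernel size approximates the closing):

import cv2
import numpy as np

bg = cv2.imread('background.tif')[:, :, 1]   # green channel
fg = cv2.imread('object.tif')[:, :, 1]

diff = cv2.absdiff(bg, fg)                                                   # difference between the two images
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # significant differences only
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)                       # smooth the outline a bit

result = fg.copy()
result[mask == 0] = 0                                                        # blank out pixels not in the mask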
An alternative to smoothing the binary mask as I did there is to smooth the mask image before thresholding, for example with dip.Gauss(mask, [2]), a Gaussian smoothing.
Edit: The black object.
What happens with this image, is that its illumination has changed significantly, or you have some automatic exposure settings in your camera. Make sure you have turned all of that off so that every image is exposed exactly the same, and that you use the raw images directly off of the camera for this, not images that have gone through some automatic enhancement procedure or even JPEG compression if you can avoid it.
I computed the median of the background image divided by the object image (fg in the code above, but for this new image), which came up to 1.073. That means that the background image is 7% brighter than the object image. I then multiplied fg by this value before computing the absolute difference:
mask = dip.Abs(fg * dip.Median(bg/fg)[0][0] - bg)
This helped a bit, but it showed that the changes in contrast are not consistent across the image.
Next, you can change the threshold selection method. Otsu assumes a bimodal histogram, and works well if you have a significant number of pixels in each group (foreground and background). Here we'll have fewer pixels belonging to the object, because only some of the object pixels have a different color from the background. The 'triangle' method is suitable in this case:
mask, t = dip.Threshold(mask, 'triangle')
This will lead to a mask that contains only some of the object pixels. You'll have to add some additional knowledge about your object (i.e. that it is a rotated square) to find the full object. There are also some isolated background pixels that are being picked up by the threshold; those are easy to eliminate using a bit of blurring before the threshold or a small opening after, for example:
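A sketch continuing the PyDIP code above (the specific parameter values are guesses and will need tuning):

mask = dip.Abs(fg * dip.Median(bg/fg)[0][0] - bg)   # contrast-corrected difference, as above
mask = dip.Gauss(mask, [2])                         # blur before thresholding...
mask, t = dip.Threshold(mask, 'triangle')
mask = dip.Opening(mask, 3)                         # ...and/or a small opening after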
Getting the exact outline of the object in this case will be impossible with your current setup. I would suggest you improve your setup by either:
making the background more uniform in illumination,
using color (so that there are fewer possible objects that match the background color as exactly as in this case),
using infrared imaging (maybe the background could have different properties from all the objects to be detected in infrared?),
using back-illumination (this is the best way if your aim is to measure the objects).
I'm currently working on a computer vision project, and got most of my algorithm working. However I'm currently doing background subtraction manually on every image. This is because the most common background subtraction algorithms that I can find make use of thresholding, and my project should deal with backgrounds both brighter and darker than the object I want to extract.
This is the way I am subtracting the background currently (using python and the scikit stack):
val = filters.threshold_otsu(image)  # global Otsu threshold (filters = skimage.filters)
return image > val                   # keeps pixels brighter than the threshold, i.e. assumes a dark background
Of course, this only works with backgrounds darker than the subject.
I had the idea of detecting whether the background is bright or dark, and then flipping the direction of the inequality accordingly, but I could not find a way to do that.
Is there a background subtraction algorithm which is able to handle both bright and dark backgrounds, or is there another way to solve this problem?
There is no fixed method for solving your problem in general; foreground and background can be defined differently depending on the situation.
That being said, it is not impossible to use some heuristic methods to make the algorithm work on your dataset. It would be helpful if you could share some of the images to give us a better understanding of your definition of foreground and background.
Here are some heuristic methods that might help:
Run Otsu thresholding with both THRESH_BINARY and THRESH_BINARY_INV. Then, assuming your foreground is always centered, choose the result where a large portion of the center region is white (see the sketch after this list).
If the foreground is always larger than the background, or vice versa, compare the area of the white region instead.
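A minimal sketch of the center-region heuristic with OpenCV (the central window covering the middle half of the image is an assumption):

import cv2

def foreground_mask(gray):
    # Otsu thresholding with both polarities
    _, m1 = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, m2 = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    h, w = gray.shape
    center = (slice(h // 4, 3 * h // 4), slice(w // 4, 3 * w // 4))
    # keep the result whose central region is mostly white (foreground assumed centered)
    return m1 if m1[center].mean() > m2[center].mean() else m2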
There are several automatic thresholding techniques available. One of them is Otsu's method.
http://www.labbookpages.co.uk/software/imgProc/otsuThreshold.html
It is implemented in opencv (https://docs.opencv.org/trunk/d7/d4d/tutorial_py_thresholding.html)
import cv2
img = cv2.imread('noisy2.png',0)
ret2,th2 = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
I am new to both OpenCV and Python. I am trying to count people in an image. The image is supposed to be captured with an overhead camera, the way a CCTV camera is placed.
I have converted the colored image into a binary image and then inverted the binary image. Then I used a bitwise OR on the original and the inverted binary image, so that the background is white and the people are colored.
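A guess at what those preprocessing steps might look like in OpenCV (a sketch; the filename is hypothetical and Otsu is assumed for the binarization):

import cv2

img = cv2.imread('overhead.png')                                # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
inverted = cv2.bitwise_not(binary)                              # invert the binary image
highlighted = cv2.bitwise_or(img, cv2.cvtColor(inverted, cv2.COLOR_GRAY2BGR))  # white background, people keep color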
How do I count these people? Is it necessary to use a classifier, or can I just count the contours? If so, how do I count them?
Plus there are some issues with the technique I'm using.
Faces of people are light in color, so sometimes only the hair gets extracted.
Dark objects other than people also get extracted.
If the floor is dark, it won't produce the binary image that is needed.
So is there any other method to achieve what I'm trying to do here?
Not sure, but it may be worth checking there.
It explains how to perform face recognition using OpenCV and Python on pictures and extends it to a webcam here; it's not quite what you're looking for, but it may give you some clues.
I am working on a project that requires detecting lines on a plate of sand. The lines are hand-drawn by the user, so they are not exactly "straight" (see photo). And because of the sand, the lines are quite hard to distinguish.
I tried cv2.HoughLines from OpenCV but didn't achieve good results. Any suggestions on the detection method? Suggestions for improving the clarity of the lines are also welcome; I am thinking of putting a few LED lights around the plate.
Thanks
The detecting method depends a lot on how much generality you require: is the exposure and contrast going to change from one image to another? Is the typical width of lines going to change? In the following, I assume that such parameters do not vary much for your applications, please correct me if I'm wrong.
I'll be using scikit-image, a common image processing package for Python. If you're not familiar with this package, documentation can be found on http://scikit-image.org/, and the package is bundled with all installations of Scientific Python. However, the algorithms that I use are also available in other tools, like opencv.
My solution is written below. Basically, the principle is
first, denoise the image. Life is usually simpler after a denoising step. Here I use a total variation filter, since it results in a piecewise-constant image that will be easier to threshold. I enhance dark regions using a morphological erosion (on the gray-level image).
then apply an adaptive threshold that varies locally in space, since the contrast varies through the image. This operation results in a binary image.
erode the binary image to break spurious links between regions, and keep only large regions.
compute a measure of the elongation of the regions to keep only the most elongated ones. Here I use the ratio of the eigenvalues of the inertia tensor.
The parameters that are most difficult to tune are the block size for the adaptive thresholding and the minimum size of regions to keep. I also tried a Canny filter on the denoised image (skimage.filters.canny); the results were quite good, but the edges were not always closed. Still, you might also want to try such an edge-detection method.
The result is shown below:
# Import modules
import numpy as np
from skimage import io, measure, morphology, restoration, filters
from skimage import img_as_float
import matplotlib.pyplot as plt
# Open the image
im = io.imread('sand_lines.png')
im = img_as_float(im)
# Denoising
tv = restoration.denoise_tv_chambolle(im, weight=0.4)
ero = morphology.erosion(tv, morphology.disk(5))
# Threshold the image (with newer scikit-image, use: binary = ero > filters.threshold_local(ero, 181))
binary = filters.threshold_adaptive(ero, 181)
# Clean the binary image
binary = morphology.binary_dilation(binary, morphology.disk(8))
clean = morphology.remove_small_objects(np.logical_not(binary), 4000)
labels = measure.label(clean, background=0) + 1
# Keep only elongated regions
props = measure.regionprops(labels)
eigvals = np.array([prop.inertia_tensor_eigvals for prop in props])
eigvals_ratio = eigvals[:, 1] / eigvals[:, 0]
eigvals_ratio = np.concatenate(([0], eigvals_ratio))
color_regions = eigvals_ratio[labels]
# Plot the result
plt.figure()
plt.imshow(color_regions, cmap='spectral')  # on newer matplotlib, use cmap='nipy_spectral'
plt.show()