I am looking for ways to classify the different colours present in the bands of a resistor using OpenCV and Python.
What algorithm can be used to segment the image into the different bands? I tried the watershed algorithm, but I couldn't get the markers right and didn't get the desired results.
Sample Image I used:
The possible colors of resistor codes are known a priori.
You could simply scan across the resistor and check which of those colors are present, and in which order.
The final implementation would of course depend on many things.
If you just have a random snapshot of a resistor it will be more difficult than having the same orientation, position, perspective and scale every time.
Finding the resistor and its main axis should be rather simple, then all you need is a scan line.
Another option: convert the image to HSV and work on the hue channel, then use the resistor's body colour and the background colour for two threshold operations, which should leave you with the colour bands.
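A minimal sketch of that hue-based idea with OpenCV in Python; the file name and every threshold range below are placeholders you would tune to your own photos:

import cv2

# "resistor.jpg" and all HSV ranges below are placeholders to tune.
img = cv2.imread("resistor.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# First threshold: suppress the background (assumed near-white here:
# low saturation, high value).
background = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))

# Second threshold: suppress the resistor body (assumed beige; the hue
# range is a guess).
body = cv2.inRange(hsv, (10, 40, 100), (25, 160, 255))

# Whatever is neither background nor body should be the colour bands.
bands = cv2.bitwise_not(cv2.bitwise_or(background, body))
cv2.imwrite("bands.png", bands)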
I have an image as follows that shows the residue of fluorescent powder left on a surface after 5 sequential contacts. Is it possible to quantify the difference in the amount of residue between contacts?
I have looked at Fiji/ImageJ and selected each finger print at a time to get the mean grey value but I don't see much difference to be honest. Any help or thoughts would be very much appreciated. Happy to think about python or matlab.
In order for the quantification of the intensities to be useful, I would imagine you would need to assume your image was evenly lit and that any fluorescence you see isn't a result of oversaturation. That being said, in principle, you could contour the given fingerprint, duplicate it to a new image, and then measure the stack histogram after adjusting the threshold such that regions darker than your fingerprint powder are set to black. Perhaps the change in distribution will illustrate the change that occurs, if any.
Edit:
First: merge the RGB channels by adding them to one another using the channel calculator function. Your signal is the result of multiple colors, so splitting it does not make sense to me here.
The steps would then be:
Duplicate your given print to a new window.
Use "Adjust Threshold" to set a threshold of 0-n, where n is the highest intensity that doesn't include your fingerprint.
Run the command "Edit/Selection/Create Selection."
Run the command "Edit/Clear."
Press "ctrl+H" to measure the histogram of the pixels, and then "List" to get the actual values.
Repeat for each print and plot on the same chart.
If you are already obtaining the actual histogram values, and not just the range from the particle analyzer, then I'm not sure there's much else I can personally suggest.
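If you'd rather work in Python than in the Fiji GUI, here is a rough sketch of the same merge/threshold/histogram workflow using NumPy and scikit-image; the file name and the threshold value are placeholders you would pick per image:

import numpy as np
from skimage import io

# Rough Python sketch of the ImageJ workflow above; "print1.png" and
# the cutoff n are placeholders chosen per image.
img = io.imread("print1.png").astype(float)

# Merge the RGB channels by summing them (channel calculator equivalent).
merged = img[..., :3].sum(axis=2)

# Keep only pixels brighter than the background cutoff n, i.e. clear
# everything darker than the fingerprint powder.
n = 120.0  # highest intensity that doesn't include the fingerprint
powder = merged[merged > n]

# Histogram of the remaining pixel intensities, one print at a time.
counts, bin_edges = np.histogram(powder, bins=256)
print(counts)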
I've got the following image.
Other Samples
I want to detect the six square-shaped green portions and the one circular portion above them. I basically want a binary image with these portions marked 1 (white) and everything else 0 (black).
What have I done so far?
I found a range of H, S, and V within which these colors fall which works fine for a single image, but I've got multiple such images, some under different illumination (brightness) conditions and the ranges do not work in those cases. What should I do to make the thresholding as invariant to brightness as possible? Is there a different approach I should take for thresholding?
What you did was manually analyze the values you need for thresholding for a specific image, and then apply that. What you see is that analysis done on one image doesn't necessarily fit other images.
The solution is to do the analysis automatically for each image. This can be achieved by creating a histogram for each of the channels, and if you're working in HSV, I'm guessing that the H channel would be pretty much useless in this case.
Anyway, once you have the histograms, you should find the threshold using something like Lloyd-Max quantization, which is basically a k-means-type clustering of intensities. This should give you the centroids for the intensities of the white background and of the other colors, and you then choose the threshold based on the cluster standard deviations.
For example, in the image you gave above, the histogram of the S channel looks like:
You can see the large blob near 0 is the white background that has the lowest saturation.
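As a rough illustration of that idea in Python with OpenCV (the file name is a placeholder, and the midpoint threshold is a crude stand-in for using the cluster standard deviations):

import cv2
import numpy as np

# Cluster the saturation values with k-means (a Lloyd-Max-style
# quantizer) and threshold between the two centroids.
img = cv2.imread("board.jpg")  # placeholder file name
s = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1]

samples = s.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(samples, 2, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)

# The low-saturation centroid is the white background; threshold at the
# midpoint between the two centroids.
thresh = centers.flatten().mean()
mask = (s > thresh).astype(np.uint8) * 255
cv2.imwrite("mask.png", mask)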
I have a few thousand data-points with labels which I'm plotting in gray-scale as an image using PIL (Python Image Library). I'm using the function "render()" available here. I would now also like to pass cluster labels into the function for each point and plot the clusters in different colours. For this I have to generate different colours randomly.
Can someone suggest how I can do this colour generation?
Thanks!
A nice colour generator is the one Dopplr came up with for city labels:
We wanted a deterministic RGB colour value for each city. At first, we tried mapping the latitude and longitude of a city to a point in colour space, but we found that this made neighbouring cities too similar in colour. This means that people who travel frequently between Glasgow and Edinburgh wouldn't clearly see the difference in colour between the two. Also, since so much of the Earth's surface is covered in water rather than cities, it leads to a sparse use of the potential colour space. In the end, we went with a much simpler approach: we take the MD5 digest of the city's name, convert it to hex and take the first 6 characters as a CSS RGB value.
From the Dopplr blog, saved by Ian Kennedy.
http://everwas.com/2009/03/dopplr-city-colors.html
This is easy to implement in Python and you can input your label names and get an RGB colour out.
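For example, a small sketch (the label names are arbitrary):

import hashlib

def label_colour(label):
    # First six hex characters of the label's MD5 digest, as a CSS colour,
    # per the Dopplr approach quoted above.
    return "#" + hashlib.md5(label.encode("utf-8")).hexdigest()[:6]

def label_rgb(label):
    # The same colour as an (R, G, B) tuple, e.g. for PIL drawing calls.
    h = hashlib.md5(label.encode("utf-8")).hexdigest()
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

# The same label always maps to the same colour; different labels
# scatter across the colour space.
print(label_colour("cluster-1"), label_rgb("cluster-1"))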
How can I identify the presence or absence of regular stripes of different colors, ranging from very light pink to black, inside a scanned image (bitmap, 200x200 dpi, 24-bit)?
Here are a few examples.
Example 1
Example 2 (the stripes are present in all the columns except the 7th, in the second row of the last column)
For now I try to identify (using Python) whether each strip contains at least 5-10 pixels of a color different from white; however, this does not always work, because the scanned image is not of high quality and the strip's color is often very similar to the color that surrounds it.
Thanks.
This looks to me like a connected-component labeling problem: identify discrete regions within a certain color range. You could have a look at cvBlobLib. Some pre-processing would be required to merge pixels where there are holes or small variations between neighbors.
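cvBlobLib is a C++ library; if you want to stay in Python, OpenCV exposes similar labeling. A sketch, where the file name, the white cutoff, and the minimum area are all placeholders:

import cv2

gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

# Pre-process: anything clearly darker than white is a candidate stripe
# pixel; a small closing merges holes and minor variations.
_, mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY_INV)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Label connected components and keep those with enough pixels;
# label 0 is the background.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
stripes = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= 10]
print("found", len(stripes), "candidate stripes")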
Not going to happen. The human visual system is far better than any image processing system, and I don't see anything in the 2nd row of #3. #1 and #5 are also debatable.
You need to find some way to increase the optical quality of your input.
Search for a segmentation algorithm with a low threshold.
It should give you good results, as the edges are sharp.
Sobel would be a good start ;)
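For instance, a minimal Sobel-plus-low-threshold sketch in Python with OpenCV (file name and threshold are placeholders):

import cv2

gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

# Horizontal and vertical gradients, combined into an edge magnitude.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)

# A low threshold keeps even the faint, light-pink stripe edges.
edges = (magnitude > 20).astype("uint8") * 255
cv2.imwrite("edges.png", edges)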
I have about 3000 images and 13 different colors (the background of the majority of these images is white). If the main color of an image is one of those 13 colors, I'd like the image to be associated with that color.
I've seen similar questions like Image color detection using python that ask for an average color algorithm. I've pretty much copied that code, using the Python Image Library and histograms, and gotten it to work - but I find that it's not too reliable for determining main colors.
Any ideas? Or libraries that could address this?
Thanks in advance!
:EDIT:
Thanks guys - you all pretty much said the same thing: create "buckets" and increment a bucket's count for each pixel of the image that is nearest to it. I seem to be getting a lot of images returning "White" or "Beige," which is also the background on most of these images. Is there a way to work around or ignore the background?
Thanks again.
You can use the getcolors function to get a list of all colors in the image. It returns a list of tuples in the form:
(N, COLOR)
where N is the number of times the color COLOR occurs in the image. To get the maximum occurring color, you can pass the list to the max function:
>>> from PIL import Image
>>> im = Image.open("test.jpg")
>>> max(im.getcolors(im.size[0]*im.size[1]))
(183, (255, 79, 79))
Note that I passed im.size[0]*im.size[1] to the getcolors function because that is the maximum maxcolors value (see the docs for details).
Personally I would split the color space into 8-16 main colors, then for each pixel I'd increment the closest colored bucket by one. At the end the color of the bucket with the highest amount of pixels wins.
Basically, think median instead of average. You only really care about the colors in the image, whereas averaging colors usually gives you a whole new color.
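A small PIL sketch of the bucket idea; the palette below is a stand-in for whatever main colours you choose:

from PIL import Image

# Placeholder palette; substitute your own 8-16 main colours.
PALETTE = [(255, 255, 255), (0, 0, 0), (255, 0, 0),
           (0, 255, 0), (0, 0, 255), (245, 245, 220)]

def closest(pixel):
    # Index of the palette colour nearest to this pixel (squared distance).
    return min(range(len(PALETTE)),
               key=lambda i: sum((p - q) ** 2 for p, q in zip(PALETTE[i], pixel)))

def main_colour(path):
    im = Image.open(path).convert("RGB")
    buckets = [0] * len(PALETTE)
    for pixel in im.getdata():
        buckets[closest(pixel)] += 1
    # The bucket with the most pixels wins.
    return PALETTE[max(range(len(PALETTE)), key=buckets.__getitem__)]

print(main_colour("test.jpg"))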
Since you're trying to match a small number of preexisting colors, you can try a different approach. Test each image against all of the colors, and see which one is the closest match.
As for doing the match, I'd start by resizing each image to a smaller size to reduce the amount of work you'll be doing for each; our perception of the color of an image isn't too dependent on the amount of detail. For each pixel of the smaller image, find which of the 13 colors is the closest. If it's within some threshold, bump a counter for that color. At the end whichever of the 13 has the highest count is the winner.
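A sketch of that approach with PIL; the palette, the background indices, and the distance threshold are all placeholders to tune, and skipping the background entries also addresses the edit above about "White" and "Beige" dominating:

from PIL import Image

# PALETTE stands in for your 13 reference colours; the first two
# entries are treated as background and ignored.
PALETTE = [(255, 255, 255), (245, 245, 220), (255, 0, 0),
           (0, 255, 0), (0, 0, 255), (0, 0, 0)]
BACKGROUND = {0, 1}    # palette indices to ignore (white, beige)
THRESHOLD = 60 ** 2    # max squared distance that still counts as a match

def dominant(path):
    # Shrink first: the perceived main colour barely depends on detail.
    im = Image.open(path).convert("RGB").resize((64, 64))
    counts = [0] * len(PALETTE)
    for pixel in im.getdata():
        dists = [sum((p - q) ** 2 for p, q in zip(c, pixel)) for c in PALETTE]
        best = dists.index(min(dists))
        if dists[best] <= THRESHOLD and best not in BACKGROUND:
            counts[best] += 1
    # Whichever colour collected the highest count is the winner.
    return PALETTE[counts.index(max(counts))]

print(dominant("test.jpg"))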