I have a few thousand labelled data points which I'm plotting in greyscale as an image using PIL (the Python Imaging Library), via the "render()" function available here. I would now also like to pass a cluster label for each point into the function and plot the clusters in different colours. For this I need to generate distinct colours at random.
Can someone suggest how I can do this colour generation?
Thanks!
A nice colour generator is the one Dopplr came up with for city labels:
We wanted a deterministic RGB colour value
for each city. At first, we tried mapping the
latitude and longitude of a city to a point in
colour space, but we found that this made
neighbouring cities too similar in colour. This
means that people who travel frequently between Glasgow and Edinburgh wouldn’t
clearly see the difference in colour between
the two. Also, since so much of the Earth’s
surface is covered in water rather than cities,
it leads to a sparse use of the potential colour
space. In the end, we went with a much simpler approach: we take the MD5 digest of the city’s name, convert it to hex and take the first 6 characters as a CSS RGB value.
From the Dopplr blog, saved by Ian Kennedy.
http://everwas.com/2009/03/dopplr-city-colors.html
This is easy to implement in Python and you can input your label names and get an RGB colour out.
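A minimal sketch of that approach using the standard library's hashlib (the function names here are my own placeholders):

```python
import hashlib

def label_color(label):
    """Deterministic CSS-style colour: first 6 hex chars of the label's MD5."""
    digest = hashlib.md5(label.encode("utf-8")).hexdigest()
    return "#" + digest[:6]

def label_rgb(label):
    """The same colour as an (r, g, b) tuple, e.g. for drawing with PIL."""
    hex_part = label_color(label)[1:]
    return tuple(int(hex_part[i:i + 2], 16) for i in (0, 2, 4))
```

Because the same label always hashes to the same digest, each cluster keeps its colour across runs, and distinct labels almost always get visibly different colours.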
I am a beginner in Python and image processing. I have the following image.
I would like to detect the changes between this picture and another one, where the other one may be
taken from a slightly different angle, or with the object translated
taken under different lighting conditions
and the changes may be
a change in colour in part of the object
an extra or missing part
After various searches I thought about using ORB to detect the matching parts, remove them from the picture, and then use contours to extract and compare the differences.
But I cannot seem to find a way to remove the matching parts from the two images.
I am open to any suggestions or better ways to approach the problem.
edit:
Sorry, I forgot to mention that the colour change could be either white or pink.
Looking at your image, it appears there are three dominant colours. If this is always the case, the first thing that comes to mind is to apply k-means clustering on the colours with three clusters, as explained here.
The center color of each cluster would then give you information on the color of the tubes, and the size of each cluster (# of pixels belonging to that cluster) would give you if there are extra or missing parts.
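A sketch of that idea with scikit-learn's KMeans, run here on a synthetic three-colour image in place of the real photo (assumes scikit-learn is available):

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(image, k=3):
    """Cluster the pixel colours of an H x W x 3 image with k-means.

    Returns the k cluster centres (the dominant colours) and the number
    of pixels in each cluster (a proxy for extra or missing parts).
    """
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)
    return km.cluster_centers_, counts

# synthetic demo image made of three flat colour blocks
img = np.zeros((30, 30, 3), dtype=np.uint8)
img[:10] = (255, 0, 0)
img[10:20] = (0, 255, 0)
img[20:] = (0, 0, 255)
centers, counts = dominant_colors(img, k=3)
```

On a real photo the cluster sizes would not be exactly equal; comparing them between a reference image and a test image is what hints at extra or missing parts.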
I have an image as follows that shows the residue of fluorescent powder left on a surface after 5 sequential contacts. Is it possible to quantify a difference in the amount of residue between contacts?
I have looked at Fiji/ImageJ and selected each finger print at a time to get the mean grey value but I don't see much difference to be honest. Any help or thoughts would be very much appreciated. Happy to think about python or matlab.
In order for the quantification of the intensities to be useful, I would imagine you would need to assume your image was evenly lit and that any fluorescence you see isn't a result of oversaturation. That being said, in principle, you could contour the given fingerprint, duplicate it to a new image, and then measure the stack histogram after adjusting the threshold such that regions darker than your fingerprint powder are set to black. Perhaps the change in distribution will illustrate the change that occurs, if any.
Edit:
First: merge the RGB channels by adding them to one another using the channel calculator function. Your signal is the result of multiple colors, so splitting it does not make sense to me here.
The steps would then be:
1. Duplicate your given print to a new window.
2. Use "Adjust Threshold" to set a threshold of 0-n, where n is the highest intensity that doesn't include your fingerprint.
3. Run the command "Edit/Selection/Create Selection".
4. Run the command "Edit/Clear".
5. Press "Ctrl+H" to measure the histogram of the pixels, then "List" to get the actual values.
6. Repeat for each print and plot on the same chart.
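The same workflow can be sketched outside ImageJ with NumPy, assuming a greyscale array per print and a background cutoff n chosen the same way as in the threshold step above:

```python
import numpy as np

def print_histogram(gray, n):
    """Histogram of intensities above a background cutoff `n`.

    Pixels at or below `n` are treated as background (mirroring the
    threshold-and-clear steps); the remaining intensities are binned.
    """
    foreground = gray[gray > n]
    hist, _ = np.histogram(foreground, bins=256, range=(0, 256))
    return hist

# toy example: a dark background with one brighter "print" region
img = np.full((20, 20), 10, dtype=np.uint8)
img[5:15, 5:15] = 200
hist = print_histogram(img, n=50)
```

Plotting each print's histogram on the same chart then shows how the intensity distribution shifts between contacts.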
Since you are already obtaining the actual histogram values, rather than just the range from the particle analyzer, I'm not sure there's much else I can personally suggest.
A bit of a theoretical question: could someone explain which colour space provides the best distances between similar-looking colours? I am trying to build a humidity-detection system for dry fruits such as almonds and peanuts using a normal RGB camera. I have tried the RGB and HSV colour spaces for EDA (please find the attachment). Currently I am unable to find a really big difference between the accepted and rejected samples. It would be a great help if someone could tell me what I should look for, and where.
The problem with this question is that you can't define "similar looking" without some metric value, and the metric value depends on the color space you choose.
That said, the CIELAB colour space was designed so that similar-looking colours have nearby coordinates, and it is frequently used in object recognition. I haven't used it myself, though, so no personal experience.
For starters I would recommend treating the pixels associated with the dry fruits as 3D points in your chosen colour space, and applying a classification algorithm to these data points. Common algorithms that come to mind are linear discriminant analysis (LDA), support vector machines (SVM) and expectation maximization (EM). LDA and SVM are supervised methods, so they require labelled data; EM (e.g. fitting a Gaussian mixture) can also run unsupervised.
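As a sketch of the supervised route, here is scikit-learn's LDA run on made-up Gaussian pixel samples standing in for labelled accepted/rejected fruit pixels (the colour means and spreads are pure assumptions):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# hypothetical labelled pixel samples in some 3D colour space
accepted = rng.normal(loc=(120, 80, 60), scale=5, size=(200, 3))
rejected = rng.normal(loc=(100, 90, 80), scale=5, size=(200, 3))
X = np.vstack([accepted, rejected])
y = np.array([0] * 200 + [1] * 200)

clf = LinearDiscriminantAnalysis().fit(X, y)
accuracy = clf.score(X, y)  # training accuracy, just to show the pipeline
```

With real data you would hold out a test set rather than score on the training pixels; if the two classes overlap heavily in your chosen colour space, no classifier will separate them well, which is exactly the symptom described in the question.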
If your images are taken under different lighting conditions, a good choice of colour space is one that separates the luminance value from the chromatic values, such as LUV.
Anyhow, it will be easier to answer this question if you provide example images.
I was looking for ways to classify the different colours present in the bands of a resistor using OpenCV and Python.
What algorithm can be used to segment the image into the different bands? I tried the watershed algorithm, but I couldn't get the markers right and didn't get the desired results.
Sample Image I used:
The possible colours of resistor codes are known a priori.
You could simply scan across the resistor and check which of those colours are present, and in which order.
The final implementation would of course depend on many things.
If you just have a random snapshot of a resistor it will be more difficult than having the same orientation, position, perspective and scale every time.
Finding the resistor and its main axis should be rather simple, then all you need is a scan line.
Another option: convert the image to HSV and work on the hue channel, then use the resistor's body colour and the background colour for two threshold operations, which should leave you with just the colour bands.
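A rough NumPy sketch of that hue idea; the body and background hue values are assumptions you would measure from your own image:

```python
import numpy as np

def hue_channel(rgb):
    """Hue in degrees (0-360) for an H x W x 3 float image scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    delta = np.where(mx == mn, 1.0, mx - mn)  # avoid division by zero on greys
    h = np.zeros_like(mx)
    h = np.where(mx == b, (r - g) / delta + 4.0, h)
    h = np.where(mx == g, (b - r) / delta + 2.0, h)
    h = np.where(mx == r, ((g - b) / delta) % 6.0, h)
    return h * 60.0

def band_mask(rgb, body_hue, bg_hue, tol=15.0):
    """Keep pixels whose hue is far from both the body and background hues."""
    h = hue_channel(rgb)
    return (np.abs(h - body_hue) > tol) & (np.abs(h - bg_hue) > tol)
```

On a real photo you would sample body_hue from a patch of the resistor body and bg_hue from the background, then read the connected runs of True along a scan line as the colour bands.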
How can I identify the presence or absence of regular stripes of different colours, ranging from very, very light pink to black, inside a scanned image (200x200 dpi, 24-bit bitmap)?
Here are a few examples.
Example 1
Example 2 (the lines are present in all the columns except column 7 in the second row of the last column)
For now I try to detect (in Python) whether each strip contains at least 5-10 pixels of a colour different from white; however, this does not always work, because the scanned image is not of high quality and the strip colour is very similar to the colour that surrounds it.
Thanks.
This looks to me like a connected-component labelling problem: identifying discrete regions within a certain colour range in an image. You could have a look at cvBlobLib. Some pre-processing would be required to merge pixels if there are holes or small variations between neighbours.
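A sketch of that idea with SciPy's connected-component labelling instead of cvBlobLib, assuming a greyscale scan where the stripes are darker than the near-white background (the cutoff and minimum size are assumptions to tune):

```python
import numpy as np
from scipy import ndimage

def count_stripe_regions(gray, white_cutoff=240, min_pixels=5):
    """Count connected non-white regions with at least `min_pixels` pixels."""
    mask = gray < white_cutoff            # anything darker than near-white
    labels, n = ndimage.label(mask)       # connected-component labelling
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return int(np.sum(np.asarray(sizes) >= min_pixels))

# toy scan: white background with two dark vertical stripes
img = np.full((10, 20), 255, dtype=np.uint8)
img[:, 3:5] = 100
img[:, 10:12] = 50
n_stripes = count_stripe_regions(img)
```

The min_pixels filter implements the "at least 5-10 pixels" rule from the question; raising white_cutoff makes the detector more sensitive to very light pink stripes, at the cost of picking up scanner noise.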
Not going to happen. The human visual system is far better than any image processing system, and I don't see anything in the 2nd row of #3. #1 and #5 are also debatable.
You need to find some way to increase the optical quality of your input.
Search for a segmentation algorithm with a low threshold.
It should give you good results, as the edges are sharp.
Sobel would be a good start ;)
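For instance, a thresholded Sobel gradient magnitude with SciPy (the threshold value is an assumption to tune against your scans):

```python
import numpy as np
from scipy import ndimage

def sobel_edges(gray, threshold=50.0):
    """Binary edge map: Sobel gradient magnitude above `threshold`."""
    g = gray.astype(float)
    gx = ndimage.sobel(g, axis=1)   # horizontal derivative
    gy = ndimage.sobel(g, axis=0)   # vertical derivative
    return np.hypot(gx, gy) > threshold

# sharp vertical step edge: left half black, right half white
img = np.zeros((10, 10))
img[:, 5:] = 255
edges = sobel_edges(img)
```

A lower threshold will also pick up the faint boundaries of the very light pink stripes, so start low and raise it only if noise dominates.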