Comparing fluorescence intensity of fingerprint residue after 5 contacts - python

I have an image as follows that shows the residue of fluorescent powder left on a surface after 5 sequential contacts. Is it possible to quantify the difference in the amount of residue between contacts?
I have looked at Fiji/ImageJ and selected each fingerprint in turn to get the mean grey value, but to be honest I don't see much difference. Any help or thoughts would be very much appreciated. Happy to think about Python or MATLAB.

In order for the quantification of the intensities to be useful, I would imagine you would need to assume your image was evenly lit and that any fluorescence you see isn't a result of oversaturation. That being said, in principle, you could contour the given fingerprint, duplicate it to a new image, and then measure the stack histogram after adjusting the threshold such that regions darker than your fingerprint powder are set to black. Perhaps the change in distribution will illustrate the change that occurs, if any.
Edit:
First: merge the RGB channels by adding them to one another using the channel calculator function. Your signal is the result of multiple colors, so splitting it does not make sense to me here.
The steps would then be:
Duplicate your given print to a new window.
Use "Adjust Threshold" to set a threshold of 0-n, where n is the highest intensity that doesn't include your fingerprint.
Run the command "Edit/Selection/Create Selection."
Run the command "Edit/Clear."
Press "ctrl+H" to measure the histogram of the pixels, and then "List" to get the actual values.
Repeat for each print and plot on the same chart.
Once you are obtaining the actual histogram values, rather than just the range from the particle analyzer, I'm not sure there's much else I can personally suggest.
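If you'd rather script the same measurement in Python, the threshold-and-histogram steps could be sketched roughly as follows (pure Python on a nested-list grayscale image; the image data, the `background_max` threshold, and all names are illustrative — a real script would load the image with PIL or OpenCV first):

```python
from collections import Counter

def residue_histogram(gray, background_max):
    """Histogram of a grayscale fingerprint region after clearing
    everything at or below the background threshold (analogous to the
    ImageJ threshold-then-Clear step)."""
    counts = Counter()
    for row in gray:
        for v in row:
            # pixels darker than the powder are treated as background (0)
            counts[v if v > background_max else 0] += 1
    return counts

# toy 3x3 "print": background values 10-20, powder values 100-200
print_1 = [[10, 120, 15], [200, 180, 20], [12, 150, 11]]
hist = residue_histogram(print_1, background_max=50)
# background pixels collapse into bin 0; powder pixels keep their value
```

Repeating this per print and plotting the resulting distributions on one chart would give the same comparison as the ImageJ recipe above.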


Creating string art from image

I am relatively new to python. I would like to make some string-art portraits. I was watching this video which really intrigued me:
https://youtu.be/RSRNZaq30W0?t=56
I understand that to achieve this, I would first need to load the image, then do some edge detection and then use some form of Delaunay triangulation, but I have no idea where to even start.
I looked up some sample code for OpenCV and figured out how to do basic edge-detection. How do I then convert those to points? And then what sort of algorithm would I need to "fill in" the different gradients?
I don't even know if this is the right approach to achieve this. Could someone please point me in the right direction and perhaps give me some sample code to get started? I would really appreciate it very much.
Edge detection and triangulation are less important in this application. The core part is to understand the pseudo-code at 1:27 of the video. The final product uses a single string that wraps around different nails in a particular way, so that darker areas in the original image have lower string density, and brighter areas have more strings crossing over.
The initial preparation is to:
generate an edge-detected version of the image (A)
generate a blurred version of the image (B)
Then the first step is to create random positions for the nails. Apparently, to achieve a good outcome, if a randomly generated nail is close enough to the 'edge' of a black-white image, you should 'snap' it to the edge, so that later the strings wrapping around these edge nails will create an accurate boundary just like in the original picture. Here you use image (A) to adjust your nails. For example, just perform some potential minimization:
Add a small random position change to each nail. If a nail now gets close enough to a white point (an edge) in image (A), move it directly to that position.
Compute the potential. Make sure your potential function penalizes two points that come too close. Repeat 1) 100 times and pick the configuration with the lowest potential.
Iterate 1) and 2) 20 times.
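A minimal Python sketch of that relaxation loop, assuming the edge image (A) has been reduced to a list of edge-pixel coordinates (the potential function, constants, and all names here are illustrative choices, not the video author's exact method):

```python
import random

def potential(nails, min_dist=1.0):
    # pairwise repulsion: heavily penalize nails that come too close
    p = 0.0
    for i in range(len(nails)):
        for j in range(i + 1, len(nails)):
            dx = nails[i][0] - nails[j][0]
            dy = nails[i][1] - nails[j][1]
            d = (dx * dx + dy * dy) ** 0.5
            p += 1e6 if d < min_dist else 1.0 / d
    return p

def relax(nails, edges, rounds=20, tries=100, jitter=0.5, snap=1.0):
    """Jitter the nails, snap them to nearby edge points, and keep the
    lowest-potential candidate; iterate."""
    best = nails
    for _ in range(rounds):
        candidates = [best]            # keep the current layout if nothing improves
        for _ in range(tries):
            cand = []
            for (x, y) in best:
                x += random.uniform(-jitter, jitter)
                y += random.uniform(-jitter, jitter)
                for (ex, ey) in edges:  # snap to a close edge pixel of image (A)
                    if (x - ex) ** 2 + (y - ey) ** 2 < snap ** 2:
                        x, y = ex, ey
                        break
                cand.append((x, y))
            candidates.append(cand)
        best = min(candidates, key=potential)
    return best

random.seed(0)
nails = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(8)]
edges = [(0.0, 5.0), (10.0, 5.0)]
relaxed = relax(nails, edges)
```

Because the current layout is always kept as a candidate, the potential can only go down or stay the same across rounds.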
Next you decide how you want the strings to wrap around the nails.
Starting from a point A, look at some neighboring points (within a certain radius) B1, B2, B3, etc. Imagine that if you attach a string of a certain width from A to Bi, it visually changes your string image P in a slight way. Render line segment A-B1 on P to get P1, render A-B2 on P to get P2, etc.
Find the best Bi so that the new image Pi looks closest to the original. You can just do a pixel-wise comparison between the string image and the original picture, and use this measurement to score each Bi. The video author used the blurred image (B) to get rid of textures that may randomly impact his scoring algorithm.
Now the optimal Bi becomes the new A. Find its neighbors and loop over. The algorithm may stop if adding any new strings only negatively impacts the score.
There are cases where bright areas in a photo are widely separated, so any white strings crossing the dark gap will only decrease the score. Use your judgement to tweak the algorithm to work around those non-convex scenarios.
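The greedy selection step above can be sketched like this in pure Python on a tiny grayscale grid (a real implementation would use numpy and proper anti-aliased line rendering; the DDA line, the 5x5 target, and the nail positions are all made-up illustrations):

```python
def line_pixels(a, b):
    # integer pixels on the segment a-b (simple DDA; fine for a sketch)
    (x0, y0), (x1, y1) = a, b
    n = max(abs(x1 - x0), abs(y1 - y0), 1)
    return {(round(x0 + (x1 - x0) * t / n), round(y0 + (y1 - y0) * t / n))
            for t in range(n + 1)}

def score(canvas, target):
    # lower is better: pixel-wise distance between string image and target
    return sum(abs(canvas[y][x] - target[y][x])
               for y in range(len(target)) for x in range(len(target[0])))

def best_next_nail(canvas, target, a, nails):
    """Try drawing a white string from nail a to each candidate nail,
    and keep the move that brings the canvas closest to the target."""
    best = None
    for b in nails:
        if b == a:
            continue
        trial = [row[:] for row in canvas]
        for (x, y) in line_pixels(a, b):
            trial[y][x] = 255            # render the string
        s = score(trial, target)
        if best is None or s < best[0]:
            best = (s, b)
    return best[1]

# toy 5x5 target: bright top row, dark elsewhere
target = [[255] * 5] + [[0] * 5 for _ in range(4)]
canvas = [[0] * 5 for _ in range(5)]
nails = [(0, 0), (4, 0), (0, 4), (4, 4)]
b = best_next_nail(canvas, target, (0, 0), nails)
```

Here the string along the bright top row wins, because it is the only candidate that reduces the pixel-wise error; the chosen nail then becomes the new starting point, exactly as described above.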

Image Segmentation Based on Colour

I was looking for ways to classify the different colours present in the bands of a resistor using OpenCV and Python.
What algorithm can be used to segment the image into the different bands? I tried the watershed algorithm, but I couldn't get the markers right and didn't get the desired results.
Sample Image I used:
The possible colors of resistor codes are known a priori.
You could simply scan across the resistor and check which of those colors are present in which order.
The final implementation would of course depend on many things.
If you just have a random snapshot of a resistor it will be more difficult than having the same orientation, position, perspective and scale every time.
Finding the resistor and its main axis should be rather simple, then all you need is a scan line.
Another option: transform the image to HSV and work on the hue channel, then use the resistor's body colour and the background colour for two threshold operations, which should leave you with the colour bands.
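Assuming the resistor has already been located and a scan line extracted along its main axis, the nearest-colour matching could look roughly like this (the RGB reference values and the simulated scan line are made up for illustration; real values would come from calibration images):

```python
# Known band colours (a priori), as rough RGB references -- illustrative values
BAND_COLORS = {
    "brown": (102, 51, 0), "black": (0, 0, 0), "red": (204, 0, 0),
    "gold": (218, 165, 32), "beige_body": (222, 203, 164),
}

def nearest_color(px):
    # classify one scan-line pixel by nearest reference colour
    return min(BAND_COLORS, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(px, BAND_COLORS[name])))

def bands_along_scanline(pixels):
    """Classify each pixel, collapse consecutive runs, and drop the
    body colour -- what remains is the band sequence."""
    labels = [nearest_color(p) for p in pixels]
    runs = [labels[0]] + [b for a, b in zip(labels, labels[1:]) if b != a]
    return [r for r in runs if r != "beige_body"]

# simulated scan line across a resistor: body, brown, body, black, body, red
scan = ([(222, 203, 164)] * 3 + [(102, 51, 0)] * 2 + [(222, 203, 164)] * 2 +
        [(0, 0, 0)] * 2 + [(222, 203, 164)] * 2 + [(204, 0, 0)] * 2 +
        [(222, 203, 164)] * 3)
bands = bands_along_scanline(scan)
```

In practice you would do this matching in HSV rather than RGB and average several parallel scan lines to reduce noise.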

python2.7 histogram comparison - white background anomaly

My program's purpose is to take 2 images and decide how similar they are.
I'm not talking about identical images, but similarity. For example, if I take 2 screenshots of 2 different pages of the same website, their theme colors would probably be very similar, and therefore I want the program to declare that they are similar.
My problem starts when both images have a white background that pretty much takes over the histogram calculation (more than 30% of the image is white and the rest is distributed).
In that case, cv2.compareHist (using the correlation method, which works for the other cases) gives very bad results; that is, the grade is very high even though the images look very different.
I have thought about taking the white (255) off the histogram before comparing, but that requires me to calculate the histogram with 256 bins, which is not good when I want to check similarity (I thought that using 32 or 64 bins would be best).
Unfortunately I can't add the images I'm working with due to legal reasons.
If anyone can help with an idea, or code that solves it, I would be very grateful.
Thank you very much.
You can remove the white color, rebin the histogram and then compare:
Compute a histogram with 256 bins.
Remove the white bin (or set it to zero).
Regroup the bins into 64 bins by adding the values of 4 consecutive bins.
Perform the compareHist().
This would work for any "predominant color". To generalize, you can do the following:
Compare the full histograms. If they are different, then finish.
If they are similar, look for the predominant color (with a 256-bin histogram), and perform the procedure described above to remove the predominant color from the comparison.
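The first recipe can be sketched in plain Python; here a hand-rolled Pearson correlation stands in for cv2.compareHist with the correlation method, and the two example histograms are synthetic (a huge white bin over otherwise non-overlapping distributions):

```python
def remove_and_rebin(hist256, drop_bin=255, group=4):
    """Zero the dominant bin, then merge every `group` consecutive
    bins (256 -> 64 bins for group=4)."""
    h = list(hist256)
    h[drop_bin] = 0
    return [sum(h[i:i + group]) for i in range(0, len(h), group)]

def correlation(h1, h2):
    # Pearson correlation, the same measure as the correlation method
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    d1 = sum((a - m1) ** 2 for a in h1) ** 0.5
    d2 = sum((b - m2) ** 2 for b in h2) ** 0.5
    return num / (d1 * d2) if d1 and d2 else 0.0

# two histograms that only "agree" because of a huge white (255) bin
a = [1] * 128 + [0] * 127 + [5000]
b = [0] * 128 + [1] * 127 + [5000]
full = correlation(a, b)
rebinned = correlation(remove_and_rebin(a), remove_and_rebin(b))
```

The full comparison scores near 1 purely because of the shared white bin, while the rebinned comparison exposes that the remaining distributions don't overlap at all.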

Python - plotting different coloured clusters using PIL

I have a few thousand data points with labels which I'm plotting in grayscale as an image using PIL (Python Imaging Library). I'm using the function "render()" available here. I would now also like to pass cluster labels into the function for each point and plot the clusters in different colours. For this I have to generate different colours randomly.
Can someone suggest how I can do this colour generation?
Thanks!
A nice colour generator is the one Dopplr came up with for city labels:
We wanted a deterministic RGB colour value for each city. At first, we tried mapping the latitude and longitude of a city to a point in colour space, but we found that this made neighbouring cities too similar in colour. This means that people who travel frequently between Glasgow and Edinburgh wouldn't clearly see the difference in colour between the two. Also, since so much of the Earth's surface is covered in water rather than cities, it leads to a sparse use of the potential colour space. In the end, we went with a much simpler approach: we take the MD5 digest of the city's name, convert it to hex and take the first 6 characters as a CSS RGB value.
From the Dopplr blog, saved by Ian Kennedy.
http://everwas.com/2009/03/dopplr-city-colors.html
This is easy to implement in Python and you can input your label names and get an RGB colour out.
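For example (the function names are illustrative; the RGB tuple form is what PIL drawing functions expect):

```python
import hashlib

def label_color(label):
    """Deterministic CSS colour from a label: first 6 hex chars of
    its MD5 digest, as in the Dopplr scheme."""
    return "#" + hashlib.md5(label.encode("utf-8")).hexdigest()[:6]

def label_rgb(label):
    h = label_color(label)
    return tuple(int(h[i:i + 2], 16) for i in (1, 3, 5))  # (r, g, b)

# every cluster label always maps to the same colour
colors = {name: label_rgb(name) for name in ["cluster_0", "cluster_1", "cluster_2"]}
```

Since the mapping is a hash of the label, the colours are stable across runs — no random state to manage.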

How to identify stripes of different colors

How can I identify the presence or absence of regular stripes of different colors, ranging from very light pink to black, inside a scanned image (bitmap, 200x200 dpi, 24-bit)?
Here are a few examples.
Example 1
Example 2 (the lines are in all the columns except 7 in the second row of the last column)
For now I try to identify (using Python) whether there are at least 5-10 pixels of a color different from white in each strip; however, this does not always work, because the scanned image is not of high quality and the strip's color is very similar to the color that surrounds it.
Thanks.
This looks to me like connected-component labeling in an image to identify discrete regions of a certain color range. You could have a look at cvBlobLib. Some pre-processing would be required to merge pixels where there are holes or small variations between neighbors.
Not going to happen. The human visual system is far better than any image processing system, and I don't see anything in the 2nd row of #3. #1 and #5 are also debatable.
You need to find some way to increase the optical quality of your input.
Search for a segmentation algorithm with a low threshold.
It should give you good results, as the edges are sharp.
Sobel would be a good start ;)
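A minimal Sobel sketch in pure Python (a real pipeline would use OpenCV's Sobel filter on the whole image, but this shows why a sharp edge gives a clear response even when the stripe colour is close to the background; the toy image and values are illustrative):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    # gradient magnitude at an interior pixel of a grayscale image
    gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return (gx * gx + gy * gy) ** 0.5

# a faint vertical step: columns 0-2 white (255), columns 3-4 slightly
# darker (200) -- i.e. a stripe barely distinguishable from the background
img = [[255, 255, 255, 200, 200] for _ in range(5)]
on_edge = sobel_magnitude(img, 2, 2)   # at the step between columns 2 and 3
off_edge = sobel_magnitude(img, 1, 2)  # in the flat white region
```

Thresholding the gradient magnitude low (here anything above zero separates the two cases) picks up the stripe boundary even when the absolute colour difference is small.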
