Scikit-Image Questions (specifically re: `blob_log`) - python

I'm trying to use blob detection in scikit-image. blob_log is supposed to return an Nx3 array for a 2D image, or an Nx4 array for a 3D image (?). For a 2D image each row is (x, y, sigma), and for a 3D image it is (p, x, y, sigma).
I'm attempting to load this image into my code; it looks like it has quite a few observable blobs, and it is a 2D image.
I've got a few questions:
(1) blob_log is returning an Nx4 array, which means it's loading the image as 3D. When I try to print it, it looks like it's just a bunch of empty arrays, which I don't understand, because when I plt.show() it, it is a 2D image.
(2) If N is the number of blobs it has found, then it is only giving me less than 10% of the total blobs - I believe this is because the image is on a white background, making it more difficult for blob_log to notice them - is that correct?
(3) I don't understand how the for loop at the end of the blob documentation works. How is it plotting the circles over the image? I'm sorry if this is an elementary question, but it's frustrating me, because I think understanding it would help with some of the other things I was wondering about.
Attempts to figure out what is going on:
(1) Loading data.coins() and printing it gives me a nice array of values, which I assume is the 2D data; it still doesn't explain why the image I want to load isn't being recognized as 2D.
(2) I loaded data.coins(), which should be an obviously suitable image with circular objects, and fooled around with the sigma and threshold settings; I'm getting a variety of different results depending on the settings. Is there a good way of figuring out the best values without fooling around with the settings until one works?
Due to the length of my code and my question, below are just the applicable parts, but my entire code can be found here
from skimage import data, feature, exposure, io
import matplotlib
import matplotlib.pyplot as plt

img = data.coins()
#img = io.imread('gfp_test.png')  # the image linked above, in my directory
print(img)
print(type(img))

A = feature.blob_log(img, max_sigma=30, num_sigma=10, threshold=0.4)
print(A)
Thank you for your help!

(1) You have a color image, while the blob_* functions expect a grayscale image. Use skimage.color.rgb2gray to convert your image to grayscale before using the blob-finding functions (see the sketch after this answer). See our crash course on NumPy for images for more details.
(2) Let's see if the above fixes your problem. I think blob finding is a local operation, so the white frame around the edges is probably not a problem.
(3) Yes, the variable naming could be clearer. The key is here: sequence = zip(blobs_list, colors, titles). If you look at what those individual variables are, they are length-3 lists with the results from the three different blob-finding methods, three different colors, and three different titles (the names of the three methods). So the outer for-loop iterates through the methods, and hence through the three panels of the figure. (You should look at the matplotlib documentation for subplots for more on this.)
The inner loop, then, goes through the results of a single blob-finding method and puts a circle around each result. You'll see that x and y are transposed; this is a consequence of the different coordinate conventions between our images (see the crash course linked above) and the matplotlib canvas. Then we create a circle with the appropriate radius for each blob and add it to the matplotlib axes. See the examples linked from the Circle documentation for more information on adding patches.
Hope this helps!
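A minimal sketch pulling (1) and (3) together, assuming 'gfp_test.png' is the color image from the question (the threshold here is just a starting point to tune):
import matplotlib.pyplot as plt
from skimage import color, feature, io

img = io.imread('gfp_test.png')
gray = color.rgb2gray(img)            # 2D array, so blob_log returns Nx3

blobs = feature.blob_log(gray, max_sigma=30, num_sigma=10, threshold=0.1)
blobs[:, 2] *= 2 ** 0.5               # for a 2D LoG, radius ~ sqrt(2) * sigma

fig, ax = plt.subplots()
ax.imshow(gray, cmap='gray')
for y, x, r in blobs:                 # each row is (row, col, radius): y before x
    ax.add_patch(plt.Circle((x, y), r, color='red', fill=False))
plt.show()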

Image Operations with Python

I hope you're all doing well!
I'm new to image manipulation, so I want to apologize right here for my simple question. I'm currently working on a problem that involves classifying an object called a jet into two known categories. This object is made of sub-objects. My idea is to use these sub-objects to transform each jet into a pixel image, and then apply convolutional neural networks to find the patterns.
Here is an example of the pixel images:
[Image: jet constituents' pixel distribution]
To standardize all the images, I want to find the two most intense pixels and make sure the axis connecting them is in the vertical direction, as well as make sure that the most intense pixel is at the top. It also would be good to impose that one of the sides (left or right) of the image contains the majority of the intensity and to normalize the intensity of the whole image to 1.
My question is: as I'm new to this kind of processing, I don't know if there is a library in Python that can handle these operations. Are you aware of any?
PS: the picture was taken from here: https://arxiv.org/abs/1407.5675
You can look into OpenCV library for Python:
https://docs.opencv.org/master/d6/d00/tutorial_py_root.html.
It supports a lot of image processing functions.
In your case, it would probably be easier to convert the image into a more suitable color space in which one axis stands for intensity (e.g. HSI, HSL, HSV) and then find the indices of the maximum values along this axis (this should return the pixels with the highest intensity in the image); a sketch follows.
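A short sketch of that idea with OpenCV (the filename is a placeholder):
import cv2

img = cv2.imread('jet.png')                 # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
value = hsv[:, :, 2]                        # the V (intensity) channel
# the maxima of `value` are the brightest pixels in the image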
Generally, in Python, we use the PIL library for basic manipulations of images and OpenCV for advanced ones.
But, if I understand your task correctly, you can just think of an image as a multidimensional array and use numpy to manipulate it.
For example, if your image is stored in a variable of type numpy.array called img, you can find the maximum value along the desired axis just by writing:
img.max(axis=0)   # per-column maxima, taken over the rows
To normalize the image you can use:
img = img / img.max()   # plain division also works for integer arrays, unlike in-place /=
To find which image part is brighter, you can split the img array into the desired parts and calculate their means:
left = img[:, :img.shape[1] // 2, :]    # left half: all rows, first half of columns
right = img[:, img.shape[1] // 2:, :]   # right half
left_mean = left.mean()
right_mean = right.mean()
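To come back to the original goal of locating the two most intense pixels, a sketch along the same numpy lines (assuming img is a 2D intensity array):
import numpy as np

first = np.unravel_index(np.argmax(img), img.shape)    # (row, col) of the brightest pixel
masked = img.copy()
masked[first] = img.min()                              # suppress it and search again
second = np.unravel_index(np.argmax(masked), img.shape)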

Is there a way to take a large group of 2D images and turn them into a 3D image?

I am currently working on a summer research project, and we have generated 360 slices of a tumor. I now need to compile (if that's the right word) these images into one large 3D image. Is there a way to do this with either a Python module or an outside source? I would prefer free software if possible.
Perhaps via matplotlib, though it may require some preprocessing, I suppose:
https://www.youtube.com/watch?v=5E5mVVsrwZw
In your case, the z axis (3rd dimension) should be specified by your stack of images. Nonetheless, before proceeding, I suppose you would need to extract the shapes of the object you want to reconstruct. For instance, if I take any of the many 2D images you have, I expect to find an RGB value for each pixel; but if you want to plot a skull like in the video, as I understand it you would need to extract the borders of your object from each of its frames (2D shapes) and then plot their series. In any case, the processing may depend on how the information you have is encoded; perhaps it is sufficient to simply plot the series of images, as sketched below.
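If the slices are already aligned, stacking them into a 3D numpy array may be all the compiling needed (a sketch; the filename pattern is a placeholder):
import numpy as np
from skimage import io

slices = [io.imread(f'slice_{i:03d}.png') for i in range(360)]
volume = np.stack(slices, axis=0)    # shape: (360, height, width[, channels])
print(volume.shape)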
Some useful links I found:
https://www.researchgate.net/post/How_to_reconstruct_3D_images_from_two_or_four_2D_images
Python: 3D contour from a 2D image - pylab and contourf

Shape detection

I have tried 3 algorithms:
Comparison with compare_ssim.
Difference detection with PIL (ImageChops.difference).
Image subtraction.
The first algorithm:
from skimage.measure import compare_ssim  # in newer scikit-image: skimage.metrics.structural_similarity

(score, diff) = compare_ssim(img1, img2, full=True)
diff = (diff * 255).astype("uint8")
The second algorithm:
from PIL import Image, ImageChops

img1 = Image.open("canny1.jpg")
img2 = Image.open("canny2.jpg")

diff = ImageChops.difference(img1, img2)
if diff.getbbox():    # a non-empty bounding box means the images differ
    diff.show()
The third algorithm:
import cv2

image3 = cv2.subtract(image1, image2)  # image1, image2: numpy arrays of equal shape
The problem is that these algorithms are very sensitive: if the images have different noise, they consider the two images totally different. Any ideas to fix that?
These pictures are different in many ways (deformation, lighting, colors, shape) and simple image processing just cannot handle all of this.
I would recommend a higher level method that tries to extract the geometry and color of those tubes, in the form of a simple geometric graph. Then compare the graphs rather than the images.
I acknowledge that this is easier said than done, and will only work with this particular kind of scene.
It is very difficult to help, since we don't really know which parameters you can change. Can you keep your camera fixed? Will it always be just about tubes? What about the tubes' colors?
Nevertheless, I think what you are looking for is a framework for image registration, and I propose you use SimpleElastix. It is mainly used for medical images, so you might have to get familiar with the SimpleITK library. What's interesting is that you have a lot of parameters to control the registration. I think you will have to look into the documentation to find out how to control a specific image frequency, the one that creates the waves and deforms the images. In the example below I did not configure it to allow enough local distortion; you'll have to find the best trade-off, but I think it is flexible enough.
Anyway, you can get such a result with the following code. I don't know if it helps, but I hope so:
import cv2
import numpy as np
import SimpleITK as sitk  # requires the SimpleElastix build of SimpleITK

# Read both images as float grayscale.
fixedImage = sitk.ReadImage('1.jpg', sitk.sitkFloat32)
movingImage = sitk.ReadImage('2.jpg', sitk.sitkFloat32)

elastixImageFilter = sitk.ElastixImageFilter()

# First an affine stage for the global shift/rotation/scale.
affine_registration_parameters = sitk.GetDefaultParameterMap('affine')
affine_registration_parameters["NumberOfResolutions"] = ['6']
affine_registration_parameters["WriteResultImage"] = ['false']
affine_registration_parameters["MaximumNumberOfSamplingAttempts"] = ['4']

# Then a B-spline stage for the local, non-rigid deformation.
parameterMapVector = sitk.VectorOfParameterMap()
parameterMapVector.append(affine_registration_parameters)
parameterMapVector.append(sitk.GetDefaultParameterMap("bspline"))

elastixImageFilter.SetFixedImage(fixedImage)
elastixImageFilter.SetMovingImage(movingImage)
elastixImageFilter.SetParameterMap(parameterMapVector)
elastixImageFilter.Execute()

registeredImage = elastixImageFilter.GetResultImage()
transformParameterMap = elastixImageFilter.GetTransformParameterMap()

# Absolute difference between the registered and fixed images.
resultImage = sitk.Subtract(registeredImage, fixedImage)
resultImageNp = np.sqrt(sitk.GetArrayFromImage(resultImage) ** 2)

cv2.imwrite('gray_1.png', sitk.GetArrayFromImage(fixedImage))
cv2.imwrite('gray_2.png', sitk.GetArrayFromImage(movingImage))
cv2.imwrite('gray_2r.png', sitk.GetArrayFromImage(registeredImage))
cv2.imwrite('gray_diff.png', resultImageNp)
[Image: your first image, resized to 256x256]
[Image: your second image]
[Image: your second image registered with the first one]
[Image: the difference between the first and second images, showing what is different]
This is one of the classical problems of image processing, and one which does not have a universally valid answer. The possible answers depend highly on what type of images you have, and what type of information you want to extract from them and from the differences between them.
You can reduce noise by two means:
a) Take several images of the same object, such that the object does not change. You can stack the images, and noise is reduced by the square root of the number of images.
b) You can run a blur filter over the image. The more you blur, the more noise is averaged out. Noise here is reduced by the square root of the number of pixels you average over - but so is detail in the image.
In both cases (a) and (b), you run the difference analysis after applying either method; see the sketch below.
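A minimal sketch of both options with OpenCV (the shot_*.png filenames are placeholders; canny1.jpg and canny2.jpg are from the question):
import numpy as np
import cv2

# (a) Average a stack of N exposures of the same static scene;
#     noise falls roughly as 1/sqrt(N).
frames = [cv2.imread(f'shot_{i}.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
          for i in range(4)]
stacked = np.mean(frames, axis=0)

# (b) Blur each image before differencing; noise is averaged over the
#     kernel footprint, at the cost of fine detail.
img1 = cv2.GaussianBlur(cv2.imread('canny1.jpg', cv2.IMREAD_GRAYSCALE), (5, 5), 0)
img2 = cv2.GaussianBlur(cv2.imread('canny2.jpg', cv2.IMREAD_GRAYSCALE), (5, 5), 0)
diff = cv2.absdiff(img1, img2)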
Probably not applicable to you, as you likely cannot get hold of either: it helps if you can obtain flatfields, which capture the inhomogeneity of illumination and the pixel sensitivity of your camera and allow correcting the images prior to any processing. The same goes for darkfields, which give an estimate of the camera's read-out noise and allow correcting the images for it.
There is also a third, more high-level option: run your object analysis first, at a detailed-enough level, and compare the results.

Proper way to overlay multiband images?

I want to overlay two views of the same scene - one is a white-light image (monochrome, used for reference) and the other is an image in a specific band (that has the real data I'm showing).
The white-light image is "reference", the data image is "data". They're ordinary 2D numpy arrays of identical dimensions. I want to show the white reference image using the 'gray' color map, and the data image using the 'hot' color map.
What is the "proper" way to do this?
I started with this:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
hotm = cm.ScalarMappable(cmap='hot')
graym = cm.ScalarMappable(cmap='gray')
ref_rgb = graym.to_rgba(reference)  # rgba reference image, 'gray' color map
data_rgb = hotm.to_rgba(data)       # rgba data image, 'hot' color map
plt.imshow(ref_rgb + data_rgb)
That didn't work well, because in the plt.imshow() call the sum overflowed the 0..1 range (or maybe 0..255; this is confusing) and gave me crazy colors.
Then I replaced the last line with this:
plt.imshow(ref_rgb/2 + data_rgb/2)
That worked, but gives me a very washed-out, low-contrast image.
Finally, I tried this:
plt.imshow(np.maximum(ref_rgb, data_rgb))
That seems to give the best result, but I'm worried that much of my "data" is lost by having lower r, g, or b values than the reference image.
What is the "proper", or "usual" way to do this?
I'm not exactly sure what you're trying to achieve, but hopefully this will give you some ideas. :)
I've never used matplotlib, but from a quick look at the docs, it looks like matplotlib.cm gives you the option to have the pixel data as integers in the 0..255 range or as floats in the 0.0..1.0 range. The float format is more convenient for arithmetic image processing, so I'll assume that's the case in the rest of this answer.
We can do basic image processing by doing simple arithmetic on the RGB pixel values. Roughly speaking, adding (or subtracting) a constant to the RGB value of all your pixels changes the image brightness, multiplying your pixels by a constant changes the image contrast, and raising your pixels to a constant (positive) power changes the image gamma. Of course, you do need to make sure that these operations don't cause the colour values to go out of range. That's not a problem for gamma adjustment, or contrast adjustment (assuming the constant is in the 0.0..1.0 range), but it can be a problem for brightness modification. More subtle brightness & contrast modification can be achieved by suitable combinations of addition and multiplication.
When doing this sort of thing, it's often a Good Idea to normalize the pixel values in your image data to the 0.0..1.0 range, either before and/or after your main processing; a quick sketch follows.
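A quick sketch of those basic operations (assuming img is a float RGB array):
import numpy as np

normalized = (img - img.min()) / np.ptp(img)     # normalize to 0.0..1.0

brighter = np.clip(normalized + 0.1, 0.0, 1.0)   # brightness: add a constant, then clip
lower_contrast = normalized * 0.8                # contrast: multiply by a 0..1 constant
gamma_adjusted = normalized ** 0.5               # gamma: raise to a positive power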
Your code above is essentially treating the grey reference data as a kind of mask, using its values, instead of a constant, to operate on the colour data pixel by pixel. As you've seen, taking the mean of ref_rgb and data_rgb results in a washed-out image because you are reducing the contrast. But see what happens when you multiply ref_rgb and data_rgb: contrast will generally increase, because dark areas in ref_rgb darken the corresponding pixels in data_rgb, while bright areas in ref_rgb leave the corresponding pixels in data_rgb virtually untouched.
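For instance, a sketch of that multiply blend, reusing ref_rgb and data_rgb from the question:
import matplotlib.pyplot as plt

# The element-wise product of two rgba arrays in 0.0..1.0 stays in range,
# so no clipping is needed.
plt.imshow(ref_rgb * data_rgb)
plt.show()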
ImageMagick has some nice examples of arithmetic image processing.
Another thing to try is converting your data_rgb to HSV format and replacing the V (value) channel with the greyscale data from ref_rgb, as sketched below. You can do similar tricks with the S (saturation) channel, although the effect is generally a bit subtler.
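A sketch of that substitution, again assuming data_rgb and the 2D reference array from the question:
import numpy as np
import matplotlib.colors as mcolors
import matplotlib.pyplot as plt

hsv = mcolors.rgb_to_hsv(data_rgb[..., :3])                      # drop the alpha channel
hsv[..., 2] = (reference - reference.min()) / np.ptp(reference)  # V <- normalized grey reference
plt.imshow(mcolors.hsv_to_rgb(hsv))
plt.show()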

Display and Save Large 2D Matrix with Full Resolution in Python

I have a large 2D array (4000x3000) saved as a numpy array which I would like to display and save while keeping the ability to look at each individual pixel.
For the display part, I currently use matplotlib imshow() function which works very well.
For the saving part, it is not clear to me how I can save this figure and preserve the information contained in all 12M pixels. I tried adjusting the figure size and the resolution (dpi) of the saved image but it is not obvious which figsize/dpi settings should be used to match the resolution of the large 2D matrix displayed. Here is an example code of what I'm doing (arr is a numpy array of shape (3000,4000)):
import pylab

fig = pylab.figure(figsize=(16, 12))
pylab.imshow(arr, interpolation='nearest')
fig.savefig("image.png", dpi=500)
One option would be to increase the resolution of the saved image substantially, to be sure all pixels are properly recorded; but this has the significant drawback of creating an extremely large image (much larger than the 4000x3000 pixels, which are all I really need). It also has the disadvantage that not all pixels would be exactly the same size.
I also had a look at the Python Imaging Library, but it is not clear to me how it could be used for this purpose, if at all.
Any help on the subject would be much appreciated!
I think I found a solution which works fairly well. I use figimage to plot the numpy array without resampling. If you're careful about the size of the figure you create, you can keep the full resolution of your matrix whatever its size.
I figured out that figimage plots a single pixel with a size of 0.01 inch (this number might be system dependent), so the following code will, for example, save the matrix at full resolution (arr is a numpy array of shape (3000, 4000)):
import pylab
from matplotlib import cm

rows = 3000
columns = 4000

# 0.01 inch per pixel at the default 100 dpi gives one figure pixel per matrix element
fig = pylab.figure(figsize=(columns * 0.01, rows * 0.01))
pylab.figimage(arr, cmap=cm.jet, origin='lower')
fig.savefig("image.png")
Two issues I still have with this option:
there are no markers indicating column/row numbers, making it hard to know which pixel is which besides the ones at the edges
if you decide to look at the image interactively, it is not possible to zoom in/out
A solution that also solves the above 2 issues would be terrific, if it exists.
The OpenCV library was designed for scientific analysis of images. Consequently, it doesn't "resample" images without your explicitly asking for it. To save an image:
import cv2
cv2.imwrite('image.png', arr)
where arr is your numpy array. The saved image will be the same size as your array arr.
You didn't mention the color model that you are using. PNGs, like JPEGs, are usually 8 bits per color channel. OpenCV will support up to 16 bits per channel if you request it; a sketch follows the link below.
Documentation on OpenCV's imwrite is here.
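For instance, a minimal sketch of a 16-bit save (assuming arr holds non-negative values that we rescale):
import numpy as np
import cv2

arr16 = (arr / arr.max() * 65535).astype(np.uint16)  # rescale into the 16-bit range
cv2.imwrite('image16.png', arr16)                     # PNG supports 16-bit depth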
