I'm new to image analysis (with Python) and I would like to apply the richardson_lucy deconvolution (from skimage) to my data (CT scans). For this, I estimated the PSF in "number of voxels" with a dedicated piece of software. Its value is roughly 6.73 voxels, but I don't know how to use it as a parameter in the function.
The function expects the PSF parameter as an ndarray, so I tried this:
from skimage import io
from pylab import array
import skimage.restoration as rst

img = io.imread("Slice1.tif")
PSF = array(6.7)
img_dbl = rst.richardson_lucy(img, PSF, iterations=10)
It shows me this error: IndexError: too many indices for array
In CT scans, blurring between two different materials can be linked to a Gaussian PSF. If you have more tips for deblurring (maybe something better than RL), please share them.
Can anyone please help me?
I have a similar problem and am still researching it. In my case it didn't work unless I used np.uint8 as the dtype. CT data should be 16 bit but only uses the first 12 bits (which are mapped to values between [-1024, 3096]), so I had to rescale my image data to [0, 255] before getting anything other than black or white out of it.
If I understood correctly, the sum of the PSF should always be 1. What I gather from your question is that you assume the point spread function to be a Gaussian with a meaningful (95% of values?) spread of 6.7 pixels. In that case you have to model the PSF as a Gaussian (that's what I came here for).
You can create one with the method described by @FuzzyDuck in this post.
PSF = gkern(5,2)
This creates a 5x5 Gaussian kernel with sum 1, using the method proposed by @FuzzyDuck, with a sigma of 2. Note that point spread functions can effectively be applied several times, so you may have to experiment a bit with the values (or use an algorithm to approximate them).
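If you'd rather not depend on that snippet, here is a minimal sketch of building a normalized 2D Gaussian PSF by hand and feeding it to richardson_lucy. The sigma of 6.7 / 2.355 is purely my assumption (treating your ~6.7 voxels as roughly a FWHM), so plug in whatever your PSF software actually reports; also note that newer scikit-image releases call the iteration argument num_iter instead of iterations.
import numpy as np
from skimage import io, restoration

def gaussian_psf(size, sigma):
    # (size x size) Gaussian kernel, normalized so it sums to 1
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

img = io.imread("Slice1.tif").astype(float)
img /= img.max()                                  # richardson_lucy works with values in [0, 1]
psf = gaussian_psf(size=15, sigma=6.7 / 2.355)    # assumed sigma, see the note above
img_dbl = restoration.richardson_lucy(img, psf, iterations=10)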
Given two images of the same scene with potential differences in alignment, focus, and lighting, plus noise, I am looking for an operation I can run on these images that produces another image of the difference between them, one that suppresses these global differences or at least is more sensitive to structural differences than to global ones. My initial thought was that a comparison between the corresponding neighborhoods around a pixel in image A and the same pixel in image B might work.
Is this function already implemented in OpenCV or some other Python library (scipy, numpy etc)?
My musings:
A simple frame difference would tell me where absolute differences occur, but it is very brittle to alignment and lighting differences. Maybe there is a way to find the standard deviation over a pixel's neighborhood; NumPy's std only works per axis...
This feels like I want the correlation between two signals, but I don't know how to extend that to a non-repeating 2D world. scipy.signal.correlate2d seems like it might work if there were an efficient way to pass just the corresponding neighborhoods to it. However, I don't have a good feel for what is going on under the hood.
A convolution of one image where the kernel comes from the corresponding location in the other image would give a comparison that handles noise and focus issues well, but I don't know how to use a dynamic kernel for a convolution.
If I had a library of basically identical images (not an original assumption, but doable) to compare one image against, I could use a mean difference or a mixture of Gaussians. But I don't think this would help with alignment or lighting. I could align the image first and then do the comparison.
Per the comment below, I looked up the skimage SSIM (Structural Similarity Index) method that is used to measure image degradation due to things like lossy compression and decompression. It actually expects two copies of the same image - one a truth source, one in a potentially degraded state due to lossy compression and decompression. This method is soft on global bias (lighting), which is good, but sensitive to noise (by design) and especially sensitive to misalignment.
The comment also led me to MSE, which acts globally, but if it is iterated over an image by neighborhood it gives a good result: insensitive to bias and noise, but not to structural differences. However, it is fairly sensitive to alignment differences and very slow in Python...
# Mean Squared Error over a neighborhood
import cv2
import numpy as np
from scipy import misc

def mse(imageA, imageB):
    return np.mean((imageA - imageB)**2)

face = misc.face()
nface = np.array(face)
nface[295:305, 395:415] = face[195:205, 495:515]            # discontinuous region
nface = cv2.blur(nface, (3,3))                              # focus effects
nface = nface + (np.random.randn(*face.shape) * 10 - 5)     # noise
nface = (nface * .9 + 20).astype(int)                       # lighting

n = m = 3
output = np.zeros(face.shape[:2])
for i in range(face.shape[0]):
    for j in range(face.shape[1]):
        if n < i < face.shape[0]-n and m < j < face.shape[1]-m:
            output[i,j] = mse(nface[i-n:i+n, j-m:j+m], face[i-n:i+n, j-m:j+m])
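(For reference, a possible vectorized variant of the same neighborhood MSE, assuming SciPy is available: uniform_filter averages the squared difference over each 2n x 2m window, so the explicit Python loops go away.)
from scipy.ndimage import uniform_filter

sq_diff = (nface.astype(float) - face.astype(float)) ** 2
# average over each 2n x 2m neighborhood per channel, then over channels
output_fast = uniform_filter(sq_diff, size=(2*n, 2*m, 1)).mean(axis=2)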
Is this a common image processing technique? Is there a name for it, or an optimized implementation in OpenCV or NumPy?
I have tried 3 algorithms:
Comparison with compare_ssim.
Difference detection with PIL (ImageChops.difference).
Image subtraction.
The first algorithm:
from skimage.measure import compare_ssim  # skimage.metrics.structural_similarity in newer releases
(score, diff) = compare_ssim(img1, img2, full=True)
diff = (diff * 255).astype("uint8")
The second algorithm:
from PIL import Image, ImageChops

img1 = Image.open("canny1.jpg")
img2 = Image.open("canny2.jpg")
diff = ImageChops.difference(img1, img2)
if diff.getbbox():
    diff.show()
The third algorithm:
image3 = cv2.subtract(image1, image2)
The problem is that these algorithms are very sensitive: if the images have different noise, they consider the two images to be totally different. Any ideas how to fix that?
These pictures are different in many ways (deformation, lighting, colors, shape) and simple image processing just cannot handle all of this.
I would recommend a higher level method that tries to extract the geometry and color of those tubes, in the form of a simple geometric graph. Then compare the graphs rather than the images.
I acknowledge that this is easier said than done, and will only work with this particular kind of scene.
It is very difficult to help since we don't really know which parameters you can change. For instance, can you keep your camera fixed? Will it always be just about tubes? What about the tube colors?
Nevertheless, I think what you are looking for is a framework for image registration, and I suggest you use SimpleElastix. It is mainly used for medical images, so you might have to get familiar with the underlying library, SimpleITK. What's interesting is that you get a lot of parameters to control the registration. I think you will have to look into the documentation to find out how to control a specific image frequency, the one that creates the waves and deforms the images. In the example below I did not configure it to allow enough local distortion; you'll have to find the best trade-off, but I think it should be flexible enough.
Anyway, you can get such a result with the following code. I don't know if it helps, but I hope so:
import cv2
import numpy as np
import SimpleITK as sitk

# Read both images as float (this uses the SimpleElastix build of SimpleITK)
fixedImage = sitk.ReadImage('1.jpg', sitk.sitkFloat32)
movingImage = sitk.ReadImage('2.jpg', sitk.sitkFloat32)

elastixImageFilter = sitk.ElastixImageFilter()

# Affine stage for the global alignment
affine_registration_parameters = sitk.GetDefaultParameterMap('affine')
affine_registration_parameters["NumberOfResolutions"] = ['6']
affine_registration_parameters["WriteResultImage"] = ['false']
affine_registration_parameters["MaximumNumberOfSamplingAttempts"] = ['4']

# Followed by a B-spline stage for the local deformations
parameterMapVector = sitk.VectorOfParameterMap()
parameterMapVector.append(affine_registration_parameters)
parameterMapVector.append(sitk.GetDefaultParameterMap("bspline"))

elastixImageFilter.SetFixedImage(fixedImage)
elastixImageFilter.SetMovingImage(movingImage)
elastixImageFilter.SetParameterMap(parameterMapVector)
elastixImageFilter.Execute()

registeredImage = elastixImageFilter.GetResultImage()
transformParameterMap = elastixImageFilter.GetTransformParameterMap()

# Absolute difference between the registered moving image and the fixed image
resultImage = sitk.Subtract(registeredImage, fixedImage)
resultImageNp = np.sqrt(sitk.GetArrayFromImage(resultImage) ** 2)

cv2.imwrite('gray_1.png', sitk.GetArrayFromImage(fixedImage))
cv2.imwrite('gray_2.png', sitk.GetArrayFromImage(movingImage))
cv2.imwrite('gray_2r.png', sitk.GetArrayFromImage(registeredImage))
cv2.imwrite('gray_diff.png', resultImageNp)
Your first image resized to 256x256:
Your second image:
Your second image registered with the first one:
Here is the difference between the first and second image which could show what's different:
This is one of the classical problems of image processing, and one which does not have a universally valid answer. The possible answers depend highly on what type of images you have and what type of information you want to extract from them and from the differences between them.
You can reduce noise by two means:
a) Take several images of the same object, such that the object does not change. You can stack the images, and the noise is reduced by the square root of the number of images.
b) Run a blur filter over the image. The more you blur, the more the noise is averaged out. The noise here is reduced by the square root of the number of pixels you average over, but so is the detail in the image.
In both cases (a) and (b) you run the difference analysis after you have applied either method (a minimal sketch of both routes follows at the end of this answer).
Probably not applicable to you, as you likely cannot get hold of either: it helps if you can get hold of flatfields, which capture the inhomogeneity of illumination and the pixel sensitivity of your camera and allow correcting the images prior to any processing. The same goes for darkfields, which give an estimate of the influence of the camera's read-out noise and allow correcting the images for that as well.
There is also a third, more high-level option: run your object analysis first, at a detailed enough level, and compare the results.
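For concreteness, here is a minimal sketch of routes (a) and (b) above, assuming roughly aligned grayscale frames; the file names are placeholders.
import cv2
import numpy as np

# (a) stack several exposures of the same scene: averaging N frames
#     reduces the noise by roughly sqrt(N)
frames = [cv2.imread(f, cv2.IMREAD_GRAYSCALE).astype(np.float32)
          for f in ("frame1.png", "frame2.png", "frame3.png")]
stacked = np.mean(frames, axis=0)

# (b) blur a single image: averaging over neighbouring pixels also reduces
#     the noise, at the cost of detail
single = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
blurred = cv2.GaussianBlur(single, (5, 5), 1.5)

# only after either step, run the difference analysis
other = cv2.imread("other.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
diff = cv2.absdiff(blurred, cv2.GaussianBlur(other, (5, 5), 1.5))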
I have a HEALPix all-sky map from the AKARI Far Infrared Surveyor database (publicly released). I have tried to "smooth" the map using healpy, but the result looks very strange. Is there a better way? My question, however, applies to any all-sky HEALPix map (i.e. IRAS, Planck, WISE, WMAP).
My objective is to "smooth" the effective point-spread function of this AKARI map to an angular resolution of 1-degree (the original data has a PSF of about 1 arcminute). This is so that I can compare the far infrared AKARI map to lower resolution microwave maps (specifically, those of the anomalous microwave foreground).
In my example below, I'm using a degraded version of the map so that it would be small enough to upload to GitHub. This means that the pixels are about 3.42 arcminutes. Normally I wouldn't degrade the pixel scale this much before PSF smoothing, but this is just an example:
#Load the packages needed for visualization, and HEALPix processing
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import healpy as hp
import healpy.projector as pro
#Loads the HEALPix .FITS file into an array
map_in = hp.read_map("akari_WideL_1_1024.fits", nest = True)
#Visualizes the all-sky map, before any processing is done.
hp.mollview(map_in, title='AKARI All-Sky Map:', nest = True, norm = 'hist')
#Smooths the map with a 1-degree FWHM Gaussian (fwhm given in radians).
map_out = hp.sphtfunc.smoothing(map_in, fwhm = 0.017, iter = 1)
#Visualizes the map after smoothing.
hp.mollview(map_out, title='AKARI All-Sky Map:', nest = True, norm = 'hist')
I have tried the healpy.sphtfunc.smoothing routine (https://healpy.readthedocs.org/en/latest/generated/healpy.sphtfunc.smoothing.html#healpy.sphtfunc.smoothing). As far as I understand, smoothing converts the map into spherical harmonics, convolves with the Gaussian, and then converts it back into a spatial map.
I've saved the IPython notebook as well as the low-res .FITS HEALPix map in a GitHub repository, here:
https://github.com/aaroncnb/healpy_smoothing_test
(You'll need to have the healpy package installed)
By running the code in the notebook, you can easily see the trouble I'm having: after smoothing the map, there are some strange "artifacts", as if the pixels had been iteratively box-averaged rather than smoothed with a circular Gaussian profile. What I expect to see is just a blurrier version of the input map.
I think I'm missing something fundamental about the conversion to spherical harmonics, before the smoothing is done.
Has anyone tried to do this kind of all-sky smoothing before, on a HEALPix map?
I believe another option is to convert the map to a standard rectangular array and then do the smoothing there. However, I remain curious about solving the problem without leaving the HEALPix format.
It appears smoothing works on a RING-ordered map only (which kind of makes sense to me, since that seems a bit easier to handle mathematically). Thus, you'll need to convert your input map to RING ordering first:
map_ring = hp.pixelfunc.reorder(map_in, inp='NEST', out='RING')
map_out = hp.sphtfunc.smoothing(map_ring, fwhm = 0.017, iter = 1)  # ~0.017 rad = 1 degree
hp.mollview(map_out, title='AKARI All-Sky Map:', nest = False, norm = 'hist')
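(A small aside: smoothing's fwhm is given in radians, so rather than hard-coding the value you can let numpy convert it; this is just a convenience, not a requirement.)
import numpy as np
map_out = hp.sphtfunc.smoothing(map_ring, fwhm = np.radians(1.0), iter = 1)  # 1 degree ~ 0.0175 rad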
This answer comes from a bit of trial and error, because I couldn't find anything definitive in the documentation, and I haven't dived into the source code (though, given the result above, it may be easy to verify whether my assumption is correct by looking through the relevant source).
Or, you may want to ask the healpix/healpy people directly.
(I'd suggest this is in fact a shortcoming of the documentation: the docs for healpy.sphtfunc.smoothing don't mention the required ordering of the input. I guess that's a healpy issue/PR for another day.)
Btw, bonus points for providing an SSCCE as a notebook file on GitHub! (Now if only Stack Overflow also rendered notebooks.)
I was wondering if anyone knew why there is no documentation for HOGDescriptors in the Python bindings of OpenCV.
Maybe I've just missed it, but the only code I've found is in this thread: Get HOG image features from OpenCV + Python?
If you scroll down in that thread, this code is found in there:
import cv2
hog = cv2.HOGDescriptor()
im = cv2.imread(sample)
h = hog.compute(im)
I've tested this and it works -- so the Python bindings do exist; it's just the documentation that doesn't. I was wondering why documentation for the Python bindings for HOG is so difficult to find / non-existent. Does anyone know of a tutorial I can read about HOG (especially via the Python bindings)? I'm new to HOG and would like to see a few examples of how OpenCV does things before I start writing my own stuff.
1. Get Inbuilt Documentation:
The following command in your Python console will show you the structure of the class HOGDescriptor:
import cv2
help(cv2.HOGDescriptor())
2. Example code:
Here is a snippet of code to initialize a cv2.HOGDescriptor with different parameters (the terms I use here are standard terms which are well defined in the OpenCV documentation here):
import cv2
image = cv2.imread("test.jpg",0)
winSize = (64,64)
blockSize = (16,16)
blockStride = (8,8)
cellSize = (8,8)
nbins = 9
derivAperture = 1
winSigma = 4.
histogramNormType = 0
L2HysThreshold = 2.0000000000000001e-01
gammaCorrection = 0
nlevels = 64
hog = cv2.HOGDescriptor(winSize, blockSize, blockStride, cellSize, nbins, derivAperture,
                        winSigma, histogramNormType, L2HysThreshold, gammaCorrection, nlevels)
#compute(img[, winStride[, padding[, locations]]]) -> descriptors
winStride = (8,8)
padding = (8,8)
locations = ((10,20),)
hist = hog.compute(image,winStride,padding,locations)
3. Reasoning:
The resulting HOG descriptor will have dimension:
9 orientations x (4 corner blocks that get 1 normalization + 6x4 blocks on the edges that get 2 normalizations + 6x6 blocks that get 4 normalizations) = 1764, since I have given only one location for hog.compute().
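As a quick sanity check of that count (reusing the parameter values set above; hist comes from the hog.compute() call):
n_cells = winSize[0] // cellSize[0]            # 8 cells per side
n_blocks = n_cells - 1                         # 7 blocks per side with an 8-pixel stride
expected = n_blocks * n_blocks * 4 * nbins     # 7 * 7 blocks * 4 cells * 9 bins = 1764
print(expected, hist.shape)                    # hist.shape should be (1764, 1)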
4. Different way to initialize HOGDescriptor:
Another way to initialize is from an xml file which contains all the parameter values:
hog = cv2.HOGDescriptor("hog.xml")
To get such an xml file, one can do the following:
hog = cv2.HOGDescriptor()
hog.save("hog.xml")
and then edit the respective parameter values in the xml file.
I was wondering the same. Almost no documentation can be found for OpenCV's HOGDescriptor, other than the source cpp code.
Scikit-image has a good example page on extracting and illustrating HOG features. It provides an alternative way to explore HOG. It is documented here.
However, there is one thing to point out about scikit-image's HOG implementation. Its Python code for the hog function does not implement a weighted vote for the histogram orientation binning; it only does simple binning, assigning each gradient's full magnitude to whichever single bin its orientation falls into. See its hog_histogram function. This does not exactly follow Dalal and Triggs's paper.
Actually, I found that object detection based on OpenCV's implementation of HOG is more accurate than with the API from scikit-image. That makes sense to me, because the weighted vote matters: by casting weighted votes to bins, the variation in the histogram is greatly reduced when a gradient's orientation falls on or near a bin boundary. Chris McCormick wrote a very insightful blog post on HOG, in which orientation binning is clearly described as
For each gradient vector, its contribution to the histogram is given by the magnitude of the vector (so stronger gradients have a bigger impact on the histogram). We split the contribution between the two closest bins. So, for example, if a gradient vector has an angle of 85 degrees, then we add 1/4th of its magnitude to the bin centered at 70 degrees, and 3/4ths of its magnitude to the bin centered at 90.
I believe the intent of splitting the contribution is to minimize the problem of gradients which are right on the boundary between two bins. Otherwise, if a strong gradient was right on the edge of a bin, a slight change in the gradient angle (which nudges the gradient into the next bin) could have a strong impact on the histogram.
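To make the idea concrete, here is a toy sketch of that split vote for a single gradient. The bin layout (9 unsigned bins of 20 degrees centered at 10, 30, ..., 170) is chosen only to match the quoted example, not necessarily OpenCV's internals:
import numpy as np

def split_vote(angle_deg, magnitude, n_bins=9, bin_width=20.0):
    # distribute one gradient's magnitude between the two nearest bin centers
    hist = np.zeros(n_bins)
    pos = ((angle_deg % 180.0) - bin_width / 2.0) / bin_width
    lo = int(np.floor(pos)) % n_bins
    hi = (lo + 1) % n_bins
    frac = pos - np.floor(pos)
    hist[lo] += magnitude * (1.0 - frac)
    hist[hi] += magnitude * frac
    return hist

print(split_vote(85.0, 1.0))   # 1/4 goes to the 70-degree bin, 3/4 to the 90-degree bin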
So, use OpenCV to compute HOG if possible (I haven't dug into its code and don't feel like doing so, but I suppose OpenCV's HOG implementation is more appropriate). Not only did I find an improvement in detection accuracy, it also runs faster. Compared to scikit-image's hog code with its wonderful comments, OpenCV's documentation is almost nonexistent. Yet it is still feasible to get OpenCV's version working in practice - it is a matter of passing the right parameters for window size, cell size, block size, block stride, number of orientations, etc. For the other parameters I just went with the defaults.
Does anybody have an idea how to normalize the
scipy.ndimage.filters.correlate
function to get:
XCM = (1/N) * xc(a - mu_a, b - mu_b) / (sig_a * sig_b)
What is N for the correlation? It usually is the # of datapoints / pixels for images.
Which value should I choose for scipy.ndimage.filters.correlate?
My images differ in size. I guess the scipy correlate function pads the smaller image with zeros?
Is N the size of the final matrix, i.e. N = XCM.sizeX() * XCM.sizeY()?
Thanks,
El
It looks to me like you're trying to compute the normalized cross-correlation of two images (I suspect you're probably trying to do template matching?). This answer assumes that the normalized cross-correlation is what you want.
When you compute the normalized cross-correlation between your two images, you are, in the region where they overlap, effectively subtracting the mean and dividing by the standard deviation for both your template and your reference image.
Here, N would be equal to the number of pixels in your template, which is the same as the number of pixels in the local region of overlap between the template and the reference image as you slide the template over the reference.
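To make the roles of N, the means and the standard deviations explicit, here is a minimal sketch of the normalized cross-correlation for the template placed at a single offset (the function and variable names are mine, not part of scipy):
import numpy as np

def ncc_at(image, template, row, col):
    # NCC between the template and the image patch it covers at (row, col)
    patch = image[row:row + template.shape[0], col:col + template.shape[1]]
    N = template.size                          # number of pixels in the template
    a = patch - patch.mean()
    b = template - template.mean()
    return (a * b).sum() / (N * patch.std() * template.std())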
You should read the Wikipedia article on cross-correlation, and in particular this bit, for the definition of normalized cross-correlation and some explanation of what each of the terms means.
This article by Lewis (1995) has a more in-depth explanation, and also describes some neat tricks for efficiently computing the normalized cross-correlation.
I also wrote my own Python functions for template matching including normalized cross-correlation based on Lewis and some snippets of MATLAB. You can find the source here.
Let me know if you have more questions and I'll have a go at explaining.
Normalized Cross-Correlation (NCC) is also included in scikit-image as skimage.feature.match_template. See this template matching example.
You can also do the same with OpenCV with the matchTemplate method. There are many good bindings from Python to OpenCV, but it's a bit overkill if you only need template matching. I'd go with scikit-image.
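For example, a minimal use of the scikit-image version (the result values lie in [-1, 1], with the peak marking the best match; image and template here are placeholder 2D arrays):
import numpy as np
from skimage.feature import match_template

result = match_template(image, template)
ij = np.unravel_index(np.argmax(result), result.shape)   # location of the best match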