Cubic interpolation python

I have a python assignment related to image processing.
One of the functions I need to implement is called compute_cubic_interpolation_coefficients(coefficient[][][]), and the description of it is:
This function computes the 4x4 = 16 cubic coefficients for each
quarter of a pixel using a cubic approximation of a Gaussian function.
I must admit I don't fully understand how this should be calculated. My guess (just a guess) is that the first two dimensions of the array represent the 2D pixel map, while the third dimension should contain 16 values per pixel (or something like that).
Note: this is my first assignment for this course, so it would be really valuable if anyone could help me understand how to implement this correctly.
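One possible reading of the description, offered purely as a sketch: the 16 coefficients per quarter pixel are the 4×4 weights of a separable cubic interpolation kernel, tabulated once per quarter-pixel offset. The assignment's "cubic approximation of a Gaussian" points at a specific kernel the description doesn't pin down, so the placeholder below substitutes the standard Keys cubic convolution kernel; the `(2, 2, 16)` output layout is likewise a guess at the intended signature.

```python
import numpy as np

def cubic_kernel(t, a=-0.5):
    """Keys cubic convolution kernel (a = -0.5 is the Catmull-Rom case).
    PLACEHOLDER for the course's 'cubic approximation of a Gaussian'."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def compute_cubic_interpolation_coefficients():
    # Assumed layout: coefficient[qy][qx] holds the 4x4 = 16 weights for
    # the quarter-pixel offset (0.25 or 0.75 in each direction).
    coefficient = np.zeros((2, 2, 16))
    for qy, dy in enumerate((0.25, 0.75)):
        for qx, dx in enumerate((0.25, 0.75)):
            # Separable 2D weights: outer product of the 1D kernel
            # evaluated at the four nearest sample positions.
            w = np.array([[cubic_kernel(dy - (j - 1)) * cubic_kernel(dx - (i - 1))
                           for i in range(4)]
                          for j in range(4)])
            coefficient[qy, qx] = w.ravel()
    return coefficient
```

A sanity check for any such kernel: the 16 weights for each quarter-pixel position should sum to 1, so interpolating a constant image returns the same constant.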

Related

Adding noise to an array: is it addition or multiplication?

I have some code that makes some random noise using numpy's random normal distribution function, and then I add this to a numpy array that contains an image of my chosen object. I then have to clip the array to values between -1 and 1.
I am just trying to get my head around whether I should be adding the noise to the array and clipping, or multiplying the array by the noise and clipping.
I can't conceptually work out which I should be doing. Could someone please help?
Thanks
It depends what sort of physical model you are trying to represent; additive and multiplicative noise do not correspond to the same phenomenon. Your image can be considered a variable that changes through time. Noise is an extra term that varies randomly as time passes. If this noise term depends on the state of the image in time, then the image and the noise are correlated and noise is multiplicative. If the two terms are uncorrelated, noise is additive.
Well, as you have said yourself, the problem is that you don't know what you want.
Both methods will increase the entropy of the original data.
What is the purpose of your task?
If you want to simulate something like sensor noise, the addition will do just fine.
You can try both and observe what happens to the distribution of your original data set after the application.
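Following the suggestion above, the two variants are a one-liner each, so it is cheap to try both and compare histograms. A minimal sketch (the array shape and noise scale are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(-1, 1, size=(64, 64))   # stand-in for your image array

# Additive noise: independent of the signal (e.g. sensor read noise).
noisy_add = np.clip(image + rng.normal(0.0, 0.1, image.shape), -1, 1)

# Multiplicative noise: scales with the signal (e.g. speckle, gain fluctuations).
noisy_mul = np.clip(image * (1 + rng.normal(0.0, 0.1, image.shape)), -1, 1)
```

Note that writing multiplicative noise as `image * (1 + noise)` keeps a zero-mean noise term leaving the image unchanged on average, which makes the two variants directly comparable.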

Python image filter working on N spatial and M measurement dimensions

In short:
I’m searching for a way to calculate a multidimensional custom image filter on more than one axis of values in Python.
What I mean is:
With scipy’s ndimage, I can use ndimage.generic_filter to apply the custom function myfunc to an N-dimensional numpy array. In myfunc, I just need to indicate how to process the pixel neighborhood of shape (size[0],…,size[N-1]) which is passed to the function.
Slightly different from that, what I would like to do is to provide an array of shape (S1,…,SN,V1,…VM) and apply the filter only along the spatial dimensions and interpret the remaining M axes as axes of values. The pixel neighborhood to process would then be of shape (size[0],…,size[N-1],V1,…,VM).
So far I have my own relatively naive implementation of such a filter; however, it would be good to have a version that handles the general case and deals with border effects.
Thanks a lot in advance for hints or ideas! Cheers
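For what it's worth, one way to sketch such a filter with plain NumPy (the function name and interface are made up for illustration, not an existing library API): pad the spatial axes for border handling, take sliding windows along the spatial axes only, and pass each `(size[0], …, size[N-1], V1, …, VM)` neighborhood to the custom function.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def spatial_filter(arr, size, func, n_spatial=2):
    """Apply func to each (size[0], ..., size[n-1], V1, ..., VM)
    neighborhood, filtering only along the first n_spatial axes.
    Assumes odd window sizes."""
    # Reflect-pad the spatial axes to handle borders; value axes untouched.
    pad = [(s // 2, s // 2) for s in size] + [(0, 0)] * (arr.ndim - n_spatial)
    padded = np.pad(arr, pad, mode="reflect")
    # Windows over the spatial axes only; value axes pass through intact.
    # win shape: (S1, ..., SN, V1, ..., VM, size[0], ..., size[N-1])
    win = sliding_window_view(padded, size, axis=tuple(range(n_spatial)))
    spatial_shape = arr.shape[:n_spatial]
    out = np.empty(spatial_shape, dtype=float)
    for idx in np.ndindex(*spatial_shape):
        # Move the window axes in front of the value axes, so func sees
        # a (size[0], ..., size[N-1], V1, ..., VM) block.
        block = np.moveaxis(win[idx], range(-n_spatial, 0), range(n_spatial))
        out[idx] = func(block)
    return out
```

This is still a Python-level loop over pixels, so it is closer in spirit to `ndimage.generic_filter` than to a fast vectorized filter, but it covers the general N spatial + M value axes case and the border handling in one place.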

Python get transformation matrix from two sets of points

I have two images, one from a simulation and one with real data, both with bright spots.
Simulation:
Reality:
I can detect the spots just fine and get their coordinates. Now I need to compute the transformation matrix (scale, rotation, translation, maybe shear) between the two coordinate systems. If needed, I can pick some (5-10) corresponding points by hand to give to the algorithm.
I tried a lot of approaches already, including:
2 implementations of ICP:
https://engineering.purdue.edu/kak/distICP/ICP-2.0.html#ICP
https://github.com/KojiKobayashi/iterative_closest_point_2d
Implementing affine transformations:
https://math.stackexchange.com/questions/222113/given-3-points-of-a-rigid-body-in-space-how-do-i-find-the-corresponding-orienta/222170#222170
Implementations of affine transformations:
Determining a homogeneous affine transformation matrix from six points in 3D using Python
how to perform coordinates affine transformation using python? part 2
Most of them simply fail somehow like this:
The red points are the spots from the simulation transformed into the reality coordinate system.
The best approach so far is the one from "how to perform coordinates affine transformation using python? part 2", yielding this:
As you see, the scaling and translating mostly works, but the image still needs to be rotated / mirrored.
Any ideas on how to get a working algorithm? If necessary, I can provide my current non-working implementations, but they are basically as linked.
I found the error.
I used plt.imshow to display both the simulated and real image and from there, pick the reference points from which to calculate the transformation.
Turns out, due to the usual array-to-image index-flipping voodoo (or a bad misunderstanding of the transformation on my side), I need to switch the x and y indices of the reference points from the simulated image.
With this, everything works fine using the approach from "how to perform coordinates affine transformation using python? part 2".
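For reference, a standard least-squares affine fit from hand-picked correspondences can be sketched in a few lines (a hypothetical helper in the spirit of, not copied from, the linked answer):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst.
    src, dst: (N, 2) arrays of corresponding (x, y) coordinates, N >= 3.
    Returns a 3x3 homogeneous matrix. An affine fit covers scale,
    rotation, translation and shear; mirroring falls out automatically,
    which addresses the 'still needs to be rotated / mirrored' symptom."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) homogeneous
    # Solve src_h @ A ~= dst for the 3x2 parameter matrix A.
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    M = np.eye(3)
    M[:2, :] = A.T
    return M

# Usage: map simulated points into the real coordinate frame.
# pts_h = np.hstack([sim_pts, np.ones((len(sim_pts), 1))])
# real_est = (M @ pts_h.T).T[:, :2]
```

As the accepted fix above notes, the crucial detail is feeding the points in a consistent (x, y) order on both sides; the fit itself is just one `lstsq` call.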

Define a 2D Gaussian probability with five peaks

I have 2D data containing five peaks. Could I fit five 2D Gaussian functions to obtain the peaks? In my problem the peaks do not refer to a clustering problem, for which I think EM would be an appropriate answer.
In my case I measure a variable in x-y space and it shows maxima in more than one position. Is fitting a Fourier series or using the Expectation-Maximization method still an applicable solution to my problem?
In order to build my likelihood, do I just need to add up five 2D Gaussian distributions, with the x and y position and the height of each peak as variables?
If I understand what you're asking, check out Gaussian Mixture Models and Expectation Maximization. I don't know of any pre-implemented versions of these in Python, although I haven't looked too hard.
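For the non-clustering reading of the question (fitting peak positions and heights directly), summing five 2D Gaussians and least-squares fitting the combined model is a common approach. A sketch, assuming axis-aligned Gaussians and `scipy.optimize.curve_fit` (the function names here are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, x0, y0, sx, sy, h):
    """Single axis-aligned 2D Gaussian with peak height h at (x0, y0)."""
    x, y = xy
    return h * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2)))

def five_gaussians(xy, *p):
    # p holds five parameter blocks of (x0, y0, sigma_x, sigma_y, height).
    return sum(gauss2d(xy, *p[5 * i:5 * i + 5]) for i in range(5))

# Usage sketch: flatten your grid and data, give one rough initial block
# per visible peak, then fit all 25 parameters at once:
# popt, pcov = curve_fit(five_gaussians, (X.ravel(), Y.ravel()),
#                        Z.ravel(), p0=initial_guess)
```

The fit is quite sensitive to the initial guess with this many parameters, so seeding `p0` with the approximate peak locations read off the data matters.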

Get a spatial frequency breakdown for greyscale image (2d array)

I would like to get a plot of how much each spatial frequency is present in a grayscale image.
I have been told to try np.fft.fft2, but apparently this is not what I need (according to this question). I was then told to look into np.fft.fftfreq, and while that sounds like what I need, it only takes an integer as input, so
np.fft.fftfreq(np.fft.fft2(image))
won't work. Nor does:
np.fft.fftfreq(np.abs(np.fft.fft2(image)))
How else could I try to do this? It seems like a rather trivial task for a Fourier transform; in fact it's exactly the task of the Fourier transform. I don't understand why np.fft.fft2 doesn't have a flag to make the frequency analysis orientation-agnostic.
Maybe you should reread the comments in the linked question, as well as the documentation also posted in the last comment. You are supposed to pass the image shape to np.fft.fftfreq:
freqx = np.fft.fftfreq(image.shape[0])
for x-direction and
freqy = np.fft.fftfreq(image.shape[1])
for y-direction.
The results will be the centers of the frequency bins returned by fft2, for example:
image_fft = np.fft.fft2(image)
Then the frequency corresponding to the amplitude image_fft[i,j] is freqx[i] in x-direction and freqy[j] in y-direction.
Your last sentence indicates that you want to do something completely different, though. The Fourier transform of a two-dimensional input is by common definition also two-dimensional. What deviating definition do you want to use?
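If "orientation-agnostic" means a 1D plot of power versus spatial frequency regardless of direction, the usual recipe is to bin the 2D power spectrum by radial frequency sqrt(fx² + fy²). A sketch combining the pieces above (the helper name is made up, not part of numpy):

```python
import numpy as np

def radial_spectrum(image, nbins=50):
    """Orientation-agnostic spectrum: average power |FFT|^2 over rings
    of constant radial frequency sqrt(fx^2 + fy^2)."""
    power = np.abs(np.fft.fft2(image)) ** 2
    # Per-axis frequency bin centers, broadcast to a 2D radius grid.
    fx = np.fft.fftfreq(image.shape[0])[:, None]
    fy = np.fft.fftfreq(image.shape[1])[None, :]
    r = np.sqrt(fx**2 + fy**2)
    # Average the power within each radial ring.
    bins = np.linspace(0, r.max(), nbins + 1)
    which = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
    total = np.bincount(which, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(which, minlength=nbins)
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, total / np.maximum(counts, 1)
```

Plotting `centers` against the returned averages gives the "how much of each spatial frequency is present" curve the question asks for, with direction averaged out.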
