Fourier deconvolution with numpy - python

I am attempting to remove my probe's function from a signal using Fourier deconvolution, but I cannot get a correct output with test signals.
import numpy as np
import matplotlib.pyplot as plt

t = np.zeros(30)
t = np.append(t, np.arange(0, 20, 0.1))

sigma = 2
mu = 5.
g = 1/np.sqrt(2*np.pi*sigma**2) * np.exp(-(np.arange(mu-3*sigma,mu+3*sigma,0.1)-mu)**2/(2*sigma**2))

def pad_signals(s1, s2):
    size = t.size + g.size - 1
    size = int(2 ** np.ceil(np.log2(size)))
    s1 = np.pad(s1, ((size-s1.size)//2, int(np.ceil((size-s1.size)/2))), 'constant', constant_values=(0, 0))
    s2 = np.pad(s2, ((size-s2.size)//2, int(np.ceil((size-s2.size)/2))), 'constant', constant_values=(0, 0))
    return s1, s2

def decon_fourier_ratio(signal, removed_signal):
    signal, removed_signal = pad_signals(signal, removed_signal)
    recovered = np.fft.fftshift(np.fft.ifft(np.fft.fft(signal)/np.fft.fft(removed_signal)))
    return np.real(recovered)

gt = (np.convolve(t, g, mode='full') / g.sum())[:230]
tr = decon_fourier_ratio(gt, g)

fig, ax = plt.subplots(nrows=2, ncols=2, sharex=True)
ax[0,0].plot(np.arange(0,np.fft.irfft(np.fft.rfft(t)).size), np.fft.irfft(np.fft.rfft(t)), label='thickness')
ax[0,1].plot(np.arange(0,np.fft.irfft(np.fft.rfft(g)).size), np.fft.irfft(np.fft.rfft(g)), label='probe shape')
ax[1,0].plot(np.arange(0,gt.size),gt, label='recorded signal')
ax[1,1].plot(np.arange(0,tr.size),tr, label='deconvolved signal')
plt.show()
The above script creates a demo sample (t) and a probe with Gaussian shape (g). It then convolves them into a signal gt, which is what the sample would look like when probed. I pad the signals to the nearest 2^N with pad_signals(), for efficiency and to fix any non-periodicity. Then I try to remove the Gaussian probe with decon_fourier_ratio(). As is clear from the images, I do not recover the initial thickness gradient. Any ideas why the deconvolution is not working?
Note: I have also tried SciPy's deconvolve. But, this function only works for gaussians of certain widths.
Any help is greatly appreciated,
Eric

Any reason you are not doing the full convolution? If I change the construction of gt to:
g /= g.sum() # so the deconvolved signal has the same amplitude
gt = np.convolve(t, g, mode='full')
Then I get the following plots:
I can't quite tell you why you're seeing this behavior, other than that the partial convolution is probably altering the frequency content. Alternatively, you can pad your input signal with zeros and use mode='same' if you want to get the same behavior.
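For reference, a minimal sketch of that zero-padding route (this is my reading of the suggestion, not the answerer's exact code; it reuses t, g and decon_fourier_ratio from the question, with g normalized as above):

import numpy as np

pad = g.size // 2                                # half the probe length on each side
t_padded = np.pad(t, pad, mode='constant')
gt_same = np.convolve(t_padded, g, mode='same')  # result has the same length as t_padded
tr_same = decon_fourier_ratio(gt_same, g)        # deconvolve exactly as before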


Dry/wet (i.e. include partial raw signal) parameter on IIR notch filter before convolution with other filters

I have a group of scipy.signal.iirnotch filters that I convolve with one another, and I would like to apply an independent dry/wet value to each of them. I know I can achieve a pseudo dry/wet parameter by adding them to one another instead of performing a convolution, but that is janky and difficult to control.
I am just at the start of my DSP learning, but I haven't been able to find any documentation/information on something like this, so I have no idea where else to start other than poking around at the coefficients (to no avail). Here is the isolated relevant code so far:
from scipy import signal

samplerate = 44100  # placeholder sample rate
BANDWIDTH = 30      # placeholder quality factor passed to iirnotch
notch_freqs = (220, 440, 880)  # could be any frequencies, these are just placeholders
notch_coeffs = [signal.iirnotch(freq, BANDWIDTH, samplerate) for freq in notch_freqs]

# here would be the step in applying the dry/wet to each coefficient in notch_coeffs
# and then the convolution would happen...
def convolve_notches():  # I know this can probably be optimized but it works for now
    filter_conv_a = signal.fftconvolve(*[filt[0] for filt in notch_coeffs[:2]])
    filter_conv_b = signal.fftconvolve(*[filt[1] for filt in notch_coeffs[:2]])
    if len(notch_coeffs) > 2:
        for filt in notch_coeffs[2:]:  # fold in the remaining filters
            filter_conv_a = signal.fftconvolve(filter_conv_a, filt[0])
            filter_conv_b = signal.fftconvolve(filter_conv_b, filt[1])
    return filter_conv_a, filter_conv_b

# ... then apply using signal.filtfilt
This is not a real-time audio application so I'm not too concerned about timing.
Rereading this, I want to clarify dry/wet; here's an ASCII representation of three notches at 25%, 50% and 100% wet:
Wet:  25%        50%         100%
---\  /----\     /----\      /----
    \__/    \   /      \    /
             \_/        \  /
                        |  |
                        |  |
Here's one way you can do this. First, a couple of observations:

To incorporate the "wet" factor, we can use a little algebra. An IIR filter can be represented as the rational function

    H(z) = P_B(z) / P_A(z)

where P_A(z) and P_B(z) are polynomials in z⁻¹. A notch filter will have a gain of 0 at the center frequency, but you want to mix the "wet" and "dry" (i.e. filtered and unfiltered) signals. Let w be the "wet" fraction, 0 <= w <= 1, so w = 0 implies no filtering, w = 0.25 implies the gain is 0.75 at the notch frequency, etc. Such a filter can be expressed as

    H(z) = (1 - w) + w * P_B(z) / P_A(z) = [(1 - w)*P_A(z) + w*P_B(z)] / P_A(z)

In terms of the arrays of coefficients returned by a call such as b, a = iirnotch(w0, Q, fs), the coefficients of the modified filter are b_mod = (1 - w)*a + w*b and a_mod = a.

iirnotch(w0, Q, fs) returns two 1-d arrays of length 3. These are the coefficients of the "biquad" notch filter. To create a new filter that is a cascade of several notch filters, you can simply stack the arrays returned by iirnotch into an array with shape (n, 6); this is the SOS (second order sections) format for a filter. This format is actually the recommended format for filters beyond a few orders, because it is numerically more robust than the (b, a) format (which requires high-order polynomials).

Here's a script that demonstrates how these points can be used to implement your desired cascade of notch filters with varying "wetness" for each notch:
import numpy as np
from scipy.signal import iirnotch, sosfreqz
import matplotlib.pyplot as plt
samplerate = 4000
notch_freqs = (220, 440, 880)
Q = 12.5
filters = [iirnotch(freq, Q, samplerate) for freq in notch_freqs]
# Stack the filter coefficients into an array with shape
# (len(notch_freqs), 6). This array of "second order sections"
# can be used with sosfilt, sosfreqz, etc.
# This version, `sos`, is the combined filter without taking
# into account the desired "wet" factor. This is created just
# for comparison to the filter with "wet" factors.
sos = np.block([[b, a] for b, a in filters])
# sos2 includes the desired "wet" factor of the notch filters.
wet = (0.25, 0.5, 1.0)
sos2 = np.block([[(1-w)*a + w*b, a] for w, (b, a) in zip(wet, filters)])
# Compute the frequency responses of the two filters.
w, h = sosfreqz(sos, worN=8000, fs=samplerate)
w2, h2 = sosfreqz(sos2, worN=8000, fs=samplerate)
plt.subplot(2, 1, 1)
plt.plot(w, np.abs(h), '--', alpha=0.75, label='Simple cascade')
plt.plot(w2, np.abs(h2), alpha=0.8, label='With "wet" factor')
plt.title('Frequency response of cascaded notch filters')
for yw in wet:
    plt.axhline(1 - yw, color='k', alpha=0.4, linestyle=':')
plt.grid(alpha=0.25)
plt.ylabel('|H(f)|')
plt.legend(framealpha=1, shadow=True, loc=(0.6, 0.25))
plt.subplot(2, 1, 2)
plt.plot(w, np.angle(h), '--', alpha=0.75, label='Simple cascade')
plt.plot(w2, np.angle(h2), alpha=0.8, label='With "wet" factor')
plt.xlabel('frequency')
plt.grid(alpha=0.25)
plt.ylabel('∠H(f) (radians)')
plt.legend(framealpha=1, shadow=True, loc=(0.6, 0.25))
plt.show()
Here's the plot that is generated by the script:
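To actually run audio through the combined filter (the question mentions signal.filtfilt), the SOS equivalent is sosfiltfilt; a minimal sketch, assuming x is a 1-D signal sampled at samplerate (the white-noise test signal below is a placeholder):

import numpy as np
from scipy.signal import sosfiltfilt

rng = np.random.default_rng(0)
x = rng.standard_normal(4 * samplerate)   # placeholder: 4 seconds of white noise

y = sosfiltfilt(sos2, x)                  # zero-phase filtering with the "wet"-adjusted cascade

Note that zero-phase filtering runs the filter forward and backward, so the effective notch depth is the squared magnitude response; use sosfilt if you want a single pass.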

Estimate joint density with 2d Gaussian kernel

I have the following data set, where I have to estimate the joint density of 'bwt' and 'age' using kernel density estimation with a 2-dimensional Gaussian kernel and width h=5. I can't use modules such as scipy, where there are ready-made functions for this; I have to build the functions to calculate the density myself. Here's what I've got so far.
import numpy as np
import pandas as pd

babies_full = pd.read_csv("https://www2.helsinki.fi/sites/default/files/atoms/files/babies2.txt", sep='\t')

# Getting the columns I need
babies_full1 = babies_full[['gestation', 'age']]
x = np.array(babies_full1, 'int')

# 2d Gaussian kernel
def k_2dgauss(x):
    return np.exp(-np.sum(x**2, 1)/2) / np.sqrt(2*np.pi)

# Multivariate kernel density
def mv_kernel_density(t, x, h):
    d = x.shape[1]
    return np.mean(k_2dgauss((t - x)/h))/h**d

t = np.linspace(1.0, 5.0, 50)
h = 5
print(mv_kernel_density(t, x, h))
However, I get a value error 'ValueError: operands could not be broadcast together with shapes (50,) (1173,2)', which I think is because the matrices have different shapes. I also don't understand why k_2dgauss(x) returns an array of zeros for me, since it should only return one value. In general, I am new to the concept of kernel density estimation, and I don't really know if I've written the functions right, so any hints would help!
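For reference, a minimal sketch of one way the broadcasting error could be resolved: evaluate the density at one 2-D point at a time and normalize the Gaussian kernel for d dimensions. The data and grid ranges below are placeholders, not taken from the babies dataset:

import numpy as np

def k_2dgauss(u):
    # standard d-dimensional Gaussian kernel; u has shape (n, d)
    d = u.shape[1]
    return np.exp(-np.sum(u**2, axis=1) / 2) / (2 * np.pi) ** (d / 2)

def mv_kernel_density(t, x, h):
    # t: a single evaluation point of shape (d,); x: data of shape (n, d)
    d = x.shape[1]
    return np.mean(k_2dgauss((t - x) / h)) / h ** d

# placeholder data standing in for the (n, 2) array from the question
x = np.random.normal(loc=(270, 27), scale=(16, 6), size=(1000, 2))
h = 5
gest_grid = np.linspace(150, 350, 40)   # placeholder evaluation ranges
age_grid = np.linspace(15, 45, 40)
density = np.array([[mv_kernel_density(np.array([g, a]), x, h) for a in age_grid]
                    for g in gest_grid])
print(density.shape)   # (40, 40) grid of density estimates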
Following on from my comments on your original post, I think this is what you want to do, but if not then come back to me and we can try again.
# info supplied by OP
import numpy as np
import pandas as pd

babies_full = \
    pd.read_csv("https://www2.helsinki.fi/sites/default/files/atoms/files/babies2.txt", sep='\t')

# Getting the columns I need
babies_full1 = babies_full[['gestation', 'age']]
x = np.array(babies_full1, 'int')

# my contributions
from math import floor, ceil

def binMaker(arr, base):
    """function I already use for this sort of thing.
    arr is the arr I want to make bins for
    base is the bin separation, but does require you to import floor and ceil
    otherwise you can make these bins manually yourself"""
    binMin = floor(arr.min() / base) * base
    binMax = ceil(arr.max() / base) * base
    return np.arange(binMin, binMax + base, base)

bins1 = binMaker(x[:,0], 20.)  # bins from 140. to 360. spaced 20 apart
bins2 = binMaker(x[:,1], 5.)   # bins from 15. to 45. spaced 5. apart

counts = np.zeros((len(bins1)-1, len(bins2)-1))  # empty array for counts to go in
for i in range(0, len(bins1)-1):      # loop over the intervals, hence the -1
    boo = (x[:,0] >= bins1[i]) * (x[:,0] < bins1[i+1])
    for j in range(0, len(bins2)-1):  # loop over the intervals, hence the -1
        counts[i,j] = np.count_nonzero((x[boo,1] >= bins2[j]) *
                                       (x[boo,1] < bins2[j+1]))

# if you want your PDF to be a fraction of the total
# rather than the number of counts, do the next line
counts /= x.shape[0]

# plotting
import matplotlib.pyplot as plt
from matplotlib.colors import BoundaryNorm

# setting the levels so that each number in counts has its own colour
levels = np.linspace(-0.5, counts.max()+0.5, int(counts.max())+2)
cmap = plt.get_cmap('viridis')  # or any colormap you like
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)

fig, ax = plt.subplots(1, 1, figsize=(6,5), dpi=150)
pcm = ax.pcolormesh(bins2, bins1, counts, ec='k', lw=1)
fig.colorbar(pcm, ax=ax, label='Counts (%)')
ax.set_xlabel('Age')
ax.set_ylabel('Gestation')
ax.set_xticks(bins2)
ax.set_yticks(bins1)
plt.title('Manually making a 2D (joint) PDF')
If this is what you wanted, then there is an easier way with np.histogram2d, although I think you specified that it had to use your own methods and not built-in functions. I've included it anyway for completeness' sake.
pdf = np.histogram2d(x[:,0], x[:,1], bins=(bins1,bins2))[0]
pdf /= x.shape[0] # again for normalising and making a percentage
levels = np.linspace(-0.5, pdf.max()+0.5, int(pdf.max())+2)
cmap = plt.get_cmap('viridis') # or any colormap you like
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
fig, ax = plt.subplots(1, 1, figsize=(6,5), dpi=150)
pcm = ax.pcolormesh(bins2, bins1, pdf, ec='k', lw=1)
fig.colorbar(pcm, ax=ax, label='Counts (%)')
ax.set_xlabel('Age')
ax.set_ylabel('Gestation')
ax.set_xticks(bins2)
ax.set_yticks(bins1)
plt.title('using np.histogram2d to make a 2D (joint) PDF')
Final note - in this example, the only place where counts doesn't equal pdf is for the bin between 40 <= age < 45 and 280 <= gestation < 300, which I think is due to how, in my manual case, I've used <= and <, and I'm a little unsure how np.histogram2d handles values outside the bin ranges, or on the bin edges etc. We can see the element of x that is responsible:
>>> print(x[1011])
[280 45]
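For what it's worth, np.histogram2d (like np.histogram) treats only the final bin edge as inclusive, which would explain that single discrepancy; here is a quick check with a made-up point sitting on the last age edge, like x[1011] above:

import numpy as np

gest_edges = np.array([260., 280., 300.])
age_edges = np.array([35., 40., 45.])
h, _, _ = np.histogram2d([280.0], [45.0], bins=(gest_edges, age_edges))
print(h)   # the count lands in the (280-300, 40-45) bin, because 45 equals the final edge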

Fit the gamma distribution only to a subset of the samples

I have the histogram of my input data (in black) given in the following graph:
I'm trying to fit the Gamma distribution, but not to the whole data set, only to the first curve of the histogram (the first mode). The green plot in the previous graph corresponds to fitting the Gamma distribution on all the samples, using the following Python code which makes use of scipy.stats.gamma:
img = IO.read(input_file)
data = img.flatten() + abs(np.min(img)) + 1
# calculate dB positive image
img_db = 10 * np.log10(img)
img_db_pos = img_db + abs(np.min(img_db))
data = img_db_pos.flatten() + 1
# data histogram
n, bins, patches = plt.hist(data, 1000, normed=True)
# slice histogram here
# estimation of the parameters of the gamma distribution
fit_alpha, fit_loc, fit_beta = gamma.fit(data, floc=0)
x = np.linspace(0, 100)
y = gamma.pdf(x, fit_alpha, fit_loc, fit_beta)
print '(alpha, beta): (%f, %f)' % (fit_alpha, fit_beta)
# plot estimated model
plt.plot(x, y, linewidth=2, color='g')
plt.show()
How can I restrict the fitting only to the interesting subset of this data?
Update1 (slicing):
I sliced the input data by keeping only values below the max of the previous histogram, but the results were not really convincing:
This was achieved by inserting the following code below the # slice histogram here comment in the previous code:
max_data = bins[np.argmax(n)]
data = data[data < max_data]
Update2 (scipy.optimize.minimize):
The code below shows how scipy.optimize.minimize() is used to minimize an energy function to find (alpha, beta):
import matplotlib.pyplot as plt
import numpy as np
from geotiff.io import IO
from scipy.stats import gamma
from scipy.optimize import minimize
def truncated_gamma(x, max_data, alpha, beta):
    gammapdf = gamma.pdf(x, alpha, loc=0, scale=beta)
    norm = gamma.cdf(max_data, alpha, loc=0, scale=beta)
    return np.where(x < max_data, gammapdf / norm, 0)
# read image
img = IO.read(input_file)
# calculate dB positive image
img_db = 10 * np.log10(img)
img_db_pos = img_db + abs(np.min(img_db))
data = img_db_pos.flatten() + 1
# data histogram
n, bins = np.histogram(data, 100, normed=True)
# using minimize on a slice data below max of histogram
max_data = bins[np.argmax(n)]
data = data[data < max_data]
data = np.random.choice(data, 1000)
energy = lambda p: -np.sum(np.log(truncated_gamma(data, max_data, *p)))
initial_guess = [np.mean(data), 2.]
o = minimize(energy, initial_guess, method='SLSQP')
fit_alpha, fit_beta = o.x
# plot data histogram and model
x = np.linspace(0, 100)
y = gamma.pdf(x, fit_alpha, 0, fit_beta)
plt.hist(data, 30, normed=True)
plt.plot(x, y, linewidth=2, color='g')
plt.show()
The algorithm above converged for a subset of data, and the output in o was:
x: array([ 16.66912781, 6.88105559])
But as can be seen on the screenshot below, the gamma plot doesn't fit the histogram:
You can use a general optimization tool such as scipy.optimize.minimize to fit a truncated version of the desired function, resulting in a nice fit:
First, the modified function:
def truncated_gamma(x, alpha, beta):
    gammapdf = gamma.pdf(x, alpha, loc=0, scale=beta)
    norm = gamma.cdf(max_data, alpha, loc=0, scale=beta)
    return np.where(x < max_data, gammapdf / norm, 0)
This selects values from the gamma distribution where x < max_data, and zero elsewhere. The np.where part is not actually important here, because the data is exclusively to the left of max_data anyway. The key is normalization, because varying alpha and beta will change the area to the left of the truncation point in the original gamma.
The rest is just optimization technicalities.
It's common practice to work with logarithms, so I used what's sometimes called the "energy", or the logarithm of the inverse of the probability density.
energy = lambda p: -np.sum(np.log(truncated_gamma(data, *p)))
Minimize:
initial_guess = [np.mean(data), 2.]
o = minimize(energy, initial_guess, method='SLSQP')
fit_alpha, fit_beta = o.x
My output is (alpha, beta): (11.595208, 824.712481). Like the original, it is a maximum likelihood estimate.
If you're not happy with the convergence rate, you may want to:
1) Select a sample from your rather big dataset:
data = np.random.choice(data, 10000)
2) Try different algorithms using the method keyword argument.
Some optimization routines output a representation of the inverse Hessian, which is useful for uncertainty estimation. Enforcing nonnegativity of the parameters may also be a good idea.
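A minimal sketch of both of those ideas, reusing the energy function, data, and initial_guess from above (the lower-bound value is a placeholder):

import numpy as np
from scipy.optimize import minimize

bounds = [(1e-6, None), (1e-6, None)]   # keep alpha and beta strictly positive
o = minimize(energy, initial_guess, method='L-BFGS-B', bounds=bounds)
fit_alpha, fit_beta = o.x

# L-BFGS-B returns the inverse Hessian as a LinearOperator; its dense form gives
# a rough covariance estimate for the maximum-likelihood parameters.
cov = o.hess_inv.todense()
param_std = np.sqrt(np.diag(cov))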
A log-scaled plot without truncation shows the entire distribution:
Here's another possible approach, using a dataset I created manually in Excel to more or less match the plot given.
Raw Data
Outline
1) Import the data into a Pandas dataframe.
2) Mask the indices after the max response index.
3) Create a mirror image of the remaining data.
4) Append the mirror image while leaving a buffer of empty space.
5) Fit the desired distribution to the modified data. Below I do a normal fit by the method of moments and adjust the amplitude and width.
Working Script
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Import data to dataframe.
df = pd.read_csv('sample.csv', header=0, index_col=0)
# Mask indices after index at max Y.
mask = df.index.values <= df.Y.argmax()
df = df.loc[mask, :]
scaled_y = 100*df.Y.values
# Create new df with mirror image of Y appended.
sep = 6
app_zeroes = np.append(scaled_y, np.zeros(sep, dtype=np.float))
mir_y = np.flipud(scaled_y)
new_y = np.append(app_zeroes, mir_y)
# Using Scipy-cookbook to fit a normal by method of moments.
idxs = np.arange(new_y.size)  # idxs = [0, 1, 2, ..., len(data)-1]
mid_idxs = idxs.mean()        # (len(data)-1)/2
# idxs - mid_idxs is [-53.5, -52.5, ..., 52.5, 53.5]
scaling_param = np.sqrt(np.abs(np.sum((idxs-mid_idxs)**2*new_y)/np.sum(new_y)))
# adjust amplitude
fmax = new_y.max()*1.2 # adjusted function max to 120% max y.
# adjust width
scaling_param = scaling_param*.7 # adjusted by 70%.
# Fit normal.
fit = lambda t: fmax*np.exp(-(t-mid_idxs)**2/(2*scaling_param**2))
# Plot results.
plt.plot(new_y, '.')
plt.plot(fit(idxs), '--')
plt.show()
Result
See the scipy-cookbook fitting data page for more on fitting a normal using method of moments.

Finding first derivative using DFT in Python

I want to find the first derivative of exp(sin(x)) on the interval [0, 2π] using a discrete Fourier transform. The basic idea is to first evaluate the DFT of exp(sin(x)) on the given interval, giving you, say, v_k, and then compute the inverse DFT of ik·v_k, giving you the desired answer. In reality, due to the implementations of Fourier transforms in programming languages, you might need to reorder the output somewhere and/or multiply by different factors here and there.
I first did it in Mathematica, where there is an option FourierParameters, which enables you to specify a convention for the transform. First I obtained the Fourier series of a Gaussian, in order to see what normalisation factors I have to multiply by, and then went on to find the derivative. Unfortunately, when I then translated my Mathematica code into Python (where, again, I first did the Fourier series of a Gaussian - this was successful), I didn't get the same results. Here is my code:
import numpy as np
import matplotlib.pyplot as plt

N = 1000
xmin = 0
xmax = 2.0*np.pi
step = (xmax-xmin)/(N)
xdata = np.linspace(xmin, xmax-step, N)
v = np.exp(np.sin(xdata))
derv = np.cos(xdata)*v
vhat = np.fft.fft(v)
kvals1 = np.arange(0, N/2.0, 1)
kvals2 = np.arange(-N/2.0, 0, 1)
what1 = np.zeros(kvals1.size+1)
what2 = np.empty(kvals2.size)
it = np.nditer(kvals1, flags=['f_index'])
while not it.finished:
    np.put(what1, it.index, 1j*(2.0*np.pi)/((xmax-xmin))*it[0]*vhat[[int(it[0])]])
    it.iternext()
it = np.nditer(kvals2, flags=['f_index'])
while not it.finished:
    np.put(what2, it.index, 1j*(2.0*np.pi)/((xmax-xmin))*it[0]*vhat[[int(it[0])]])
    it.iternext()
xdatafull = np.concatenate((xdata, [2.0*np.pi]))
what = np.concatenate((what1, what2))
w = np.real(np.fft.ifft(what))
fig = plt.figure()
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.plot(xdata, derv, color='blue')
plt.plot(xdatafull, w, color='red')
plt.show()
I can post the Mathematica code, if people want me to.
Turns out the problem is that np.zeros gives you an array of real zeros, not complex ones, so the assignments after that don't change anything: the values being assigned are purely imaginary, and their imaginary part is dropped when stored in a real array.
Thus the solution is quite simply
import numpy as np
N=100
xmin=0
xmax=2.0*np.pi
step = (xmax-xmin)/(N)
xdata = np.linspace(step, xmax, N)
v = np.exp(np.sin(xdata))
derv = np.cos(xdata)*v
vhat = np.fft.fft(v)
what = 1j*np.zeros(N)
what[0:N//2] = 1j*np.arange(0, N//2, 1)
what[N//2+1:] = 1j*np.arange(-N//2 + 1, 0, 1)
what = what*vhat
w = np.real(np.fft.ifft(what))
# Then plotting
whereby the np.zeros is replaced by 1j*np.zeros (the slice bounds are also written with integer division so they work under Python 3).
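For what it's worth, a shorter sketch of the same idea that builds the ik array with np.fft.fftfreq instead of filling it by hand (not the answerer's code, just an equivalent restatement):

import numpy as np

N = 100
L = 2.0 * np.pi                                  # length of the periodic interval
x = np.linspace(0.0, L, N, endpoint=False)
v = np.exp(np.sin(x))

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers in FFT order
k[N // 2] = 0.0                                  # zero the Nyquist mode for an odd derivative
w = np.real(np.fft.ifft(1j * k * np.fft.fft(v)))

# compare with the analytic derivative cos(x)*exp(sin(x))
print(np.max(np.abs(w - np.cos(x) * v)))         # should be near machine precision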

scipy.ndimage.interpolation.zoom uses nearest-neighbor-like algorithm for scaling-down

While testing scipy's zoom function, I found that the results of scaling down an array are similar to the nearest-neighbour algorithm rather than to averaging. This increases noise drastically, and is generally suboptimal for many applications.
Is there an alternative that does not use a nearest-neighbor-like algorithm and will properly average the array when downsizing? While coarsegraining works for integer scaling factors, I would need non-integer scaling factors as well.
Test case: create a random 100*M x 100*M array, for M = 2..20.
Downscale the array by the factor of M three ways:
1) by taking the mean in MxM blocks
2) by using scipy's zoom with a scaling factor 1/M
3) by taking the first point within each MxM block
The resulting arrays have the same mean and the same shape, but scipy's array has a variance as high as the nearest-neighbor one. Taking a different order for scipy.zoom does not really help.
import scipy.ndimage.interpolation
import numpy as np
import matplotlib.pyplot as plt

mean1, mean2, var1, var2, var3 = [], [], [], [], []
values = range(1, 20)  # down-scaling factors
for M in values:
    N = 100  # size of an array
    a = np.random.random((N*M, N*M))  # large array
    b = np.reshape(a, (N, M, N, M))
    b = np.mean(np.mean(b, axis=3), axis=1)
    assert b.shape == (N, N)  # coarsegrained array
    c = scipy.ndimage.interpolation.zoom(a, 1./M, order=3, prefilter=True)
    assert c.shape == b.shape
    d = a[::M, ::M]  # picking the first point within each MxM block
    assert b.shape == d.shape
    mean1.append(b.mean())
    mean2.append(c.mean())
    var1.append(b.var())
    var2.append(c.var())
    var3.append(d.var())

plt.plot(values, mean1, label="Mean coarsegraining")
plt.plot(values, mean2, label="Mean scipy.zoom")
plt.plot(values, var1, label="Variance coarsegraining")
plt.plot(values, var2, label="Variance zoom")
plt.plot(values, var3, label="Variance nearest neighbor")
plt.xscale("log")
plt.yscale("log")
plt.legend(loc=0)
plt.show()
EDIT: Performance of scipy.ndimage.zoom on a real noisy image is also very poor
The original image is here http://wiz.mit.edu/lena_noisy.png
The code that produced it:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage.interpolation import zoom
im = Image.open("/home/magus/Downloads/lena_noisy.png")
im = np.array(im)
plt.subplot(131)
plt.title("Original")
plt.imshow(im, cmap="Greys_r")
plt.subplot(132)
im2 = zoom(im, 1 / 8.)
plt.title("Scipy zoom 8x")
plt.imshow(im2, cmap="Greys_r", interpolation="none")
im.shape = (64, 8, 64, 8)
im3 = np.mean(im, axis=3)
im3 = np.mean(im3, axis=1)
plt.subplot(133)
plt.imshow(im3, cmap="Greys_r", interpolation="none")
plt.title("averaging over 8x8 blocks")
plt.show()
Nobody posted a working answer, so I will post a solution I currently use. Not the most elegant, but works.
import numpy as np
import scipy.ndimage

def zoomArray(inArray, finalShape, sameSum=False,
              zoomFunction=scipy.ndimage.zoom, **zoomKwargs):
    """
    Normally, one can use scipy.ndimage.zoom to do array/image rescaling.
    However, scipy.ndimage.zoom does not coarsegrain images well. It basically
    takes nearest neighbor, rather than averaging all the pixels, when
    coarsegraining arrays. This increases noise. Photoshop doesn't do that, and
    performs some smart interpolation-averaging instead.

    If you were to coarsegrain an array by an integer factor, e.g. 100x100 ->
    25x25, you just need to do block-averaging, that's easy, and it reduces
    noise. But what if you want to coarsegrain 100x100 -> 30x30?

    Then my friend you are in trouble. But this function will help you. This
    function will blow up your 100x100 array to a 120x120 array using
    scipy.ndimage zoom. Then it will coarsegrain a 120x120 array by
    block-averaging in 4x4 chunks.

    It will do it independently for each dimension, so if you want a 100x100
    array to become a 60x120 array, it will blow up the first and the second
    dimension to 120, and then block-average only the first dimension.

    Parameters
    ----------
    inArray: n-dimensional numpy array (1D also works)
    finalShape: resulting shape of an array
    sameSum: bool, preserve a sum of the array, rather than values.
             By default, values are preserved.
    zoomFunction: by default, scipy.ndimage.zoom. You can plug your own.
    zoomKwargs: a dict of options to pass to zoomFunction.
    """
    inArray = np.asarray(inArray, dtype=np.double)
    inShape = inArray.shape
    assert len(inShape) == len(finalShape)
    mults = []  # multipliers for the final coarsegraining
    for i in range(len(inShape)):
        if finalShape[i] < inShape[i]:
            mults.append(int(np.ceil(inShape[i] / finalShape[i])))
        else:
            mults.append(1)
    # shape to which to blow up
    tempShape = tuple([i * j for i, j in zip(finalShape, mults)])

    # stupid zoom doesn't accept the final shape. Carefully crafting the
    # multipliers to make sure that it will work.
    zoomMultipliers = np.array(tempShape) / np.array(inShape) + 0.0000001
    assert zoomMultipliers.min() >= 1

    # applying scipy.ndimage.zoom
    rescaled = zoomFunction(inArray, zoomMultipliers, **zoomKwargs)

    for ind, mult in enumerate(mults):
        if mult != 1:
            sh = list(rescaled.shape)
            assert sh[ind] % mult == 0
            newshape = sh[:ind] + [sh[ind] // mult, mult] + sh[ind + 1:]
            rescaled.shape = newshape
            rescaled = np.mean(rescaled, axis=ind + 1)
    assert rescaled.shape == finalShape

    if sameSum:
        extraSize = np.prod(finalShape) / np.prod(inShape)
        rescaled /= extraSize
    return rescaled


myar = np.arange(16).reshape((4, 4))
rescaled = zoomArray(myar, finalShape=(3, 5))
print(myar)
print(rescaled)
FWIW I found that order=1 at least preserves the mean a lot better than the default order=3 (as expected, really).
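A quick way to check that claim on random data (a minimal sketch, not from the original comment):

import numpy as np
from scipy.ndimage import zoom

a = np.random.random((1000, 1000))
for order in (0, 1, 3):
    c = zoom(a, 1 / 7., order=order)
    # report how far the downscaled mean drifts, and the remaining variance
    print(order, abs(c.mean() - a.mean()), c.var())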
