I often have to normalize the result of a circular convolution, so I 'borrowed' and modified the following routine:
import numpy as np

def fft_normalize(x):
    # Normalize a vector x in the complex (frequency) domain.
    c = np.fft.rfft(x)
    # Treat the real and imaginary parts as two real rows
    ri = np.vstack([c.real, c.imag])
    # Normalize the magnitude of each real/imaginary pair
    norm = np.linalg.norm(ri, axis=0)
    if np.any(norm == 0): norm[norm == 0] = np.float64(1e-308)  # !fixme
    ri = np.divide(ri, norm)
    c_proj = ri[0, :] + 1j * ri[1, :]
    rv = np.fft.irfft(c_proj, n=x.shape[-1])
    return rv
def fft_convolution(a, b):
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b))
so that I can do this:
fft_normalize(fft_convolution(a, b))
I see in the NumPy docs there is a 'norm' option. Is it equivalent to what I'm doing?
And if yes, which option should I use?
def fft_convolution2(a, b):
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), norm='ortho')
When I test it, though, the result behaves better when I use fft_normalize().
Second, I had to add the following line, but it does not seem right. Any ideas?
if np.any(norm == 0): norm[norm == 0] = np.float64(1e-308)  # !fixme
As a side note, in case you know: the NumPy docs mention that NumPy promotes float32 to float64, and that scipy.fftpack does not. Would fftpack therefore be faster? SciPy says fftpack is obsolete, and there is no such information for the new scipy.fft.
@cris are you suggesting I do it this way:
def fft_normalize(x):
    # Normalize a vector x in the complex (frequency) domain.
    c = np.fft.rfft(x)
    ri = np.vstack([c.real, c.imag])
    norm = np.abs(c)
    if np.any(norm == 0): norm[norm == 0] = MIN  # !fixme
    ri = np.divide(ri, norm)
    c_proj = ri[0, :] + 1j * ri[1, :]
    rv = np.fft.irfft(c_proj, n=x.shape[-1])
    return rv
The norm argument to the FFT functions in NumPy determines whether the transform result is multiplied by 1, 1/N or 1/sqrt(N), with N the number of samples in the array. Normally, the inverse transform is normalized by dividing by N, and the forward transform is not. Specifying "ortho" here causes both transforms to be scaled by 1/sqrt(N). Specifying "forward" causes only the forward transform to be normalized by 1/N.
These normalizations are very different from the one you apply, where you normalize each element in the frequency domain.
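A quick sketch of the difference (my illustration, assuming the fft_normalize from above is in scope; only np.fft and np.random are real APIs here):

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(16)
b = rng.standard_normal(16)

conv = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b))
ortho = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), norm='ortho')

# norm='ortho' only rescales the whole result by a constant factor sqrt(N):
print(np.allclose(ortho, conv * np.sqrt(len(a))))  # True

# fft_normalize instead forces every frequency bin to unit magnitude:
print(np.abs(np.fft.rfft(fft_normalize(conv))))    # all ones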
Dear experts, I have a data set and I just want to calculate the signal-to-noise ratio of the data. The data is loaded here: https://i.fluffy.cc/jwg9d7nRNDFqdzvg1Qthc0J7CNtKd5CV.html
My code is given below:
import numpy as np
from scipy import signaltonoise
import scipy.io
dat=scipy.io.loadmat('./data.mat')
arr=dat['dn']
snr=scipy.stats.signaltonoise(arr, axis=0, ddof=0)
but I am getting an error like: ImportError: cannot import name 'signaltonoise' from 'scipy'
If it doesn't exist, how do I calculate the SNR? Please suggest some other way to do it with this data set using Python.
scipy.stats.signaltonoise was removed in scipy 1.0.0. You can either downgrade your scipy version or create the function yourself:
def signaltonoise(a, axis=0, ddof=0):
    a = np.asanyarray(a)
    m = a.mean(axis)
    sd = a.std(axis=axis, ddof=ddof)
    return np.where(sd == 0, 0, m / sd)
Source: https://github.com/scipy/scipy/blob/v0.16.0/scipy/stats/stats.py#L1963
See the github link for the docstring.
Edit: the full script would then look as follows
import numpy as np
import scipy.io

def signaltonoise(a, axis=0, ddof=0):
    a = np.asanyarray(a)
    m = a.mean(axis)
    sd = a.std(axis=axis, ddof=ddof)
    return np.where(sd == 0, 0, m / sd)

dat = scipy.io.loadmat('./data.mat')
arr = dat['dn']
snr = signaltonoise(arr)
More generally speaking, it depends on the application. For many applications, the relation between mean and standard deviation might be sufficient.
As JuliettVictor pointed out, the old scipy implementation's source code can easily be found online, and it is the most common one. To convert the ratio to decibels, one wraps it in 20*log10(...). Before that, one should take the absolute value, in case the signal's mean is negative:
def signaltonoise_dB(a, axis=0, ddof=0):
    a = np.asanyarray(a)
    m = a.mean(axis)
    sd = a.std(axis=axis, ddof=ddof)
    return 20 * np.log10(abs(np.where(sd == 0, 0, m / sd)))
This runs into problems, though, when the signal of interest contains higher frequencies (e.g. in audio applications, where DC is often even filtered out).
In Matlab's snr() function a Kaiser-windowed periodogram is used: the peak of the fundamental is detected and its power is computed, and the same happens for the harmonics. The rest of the signal is assumed to be noise, and its corresponding power level is calculated.
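A very rough sketch of that idea in Python (my approximation, not MATLAB's exact algorithm: one bin per component and no handling of spectral leakage; scipy.signal.periodogram is the only API assumed):

import numpy as np
from scipy.signal import periodogram

def snr_periodogram_db(x, fs, n_harmonics=5):
    # Kaiser-windowed periodogram, loosely following MATLAB's snr()
    f, p = periodogram(x, fs, window=('kaiser', 38))
    p[0] = 0.0                           # ignore DC
    fund = np.argmax(p)                  # bin of the fundamental
    signal_power = p[fund]
    p[fund] = 0.0
    for k in range(2, n_harmonics + 1):  # exclude harmonics from the noise
        h = fund * k
        if h < len(p):
            p[h] = 0.0
    return 10 * np.log10(signal_power / p.sum())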
Other approaches involve low-pass filtering of the signal (similar to calculating its mean).
Yet another Python-based example can be found here.
An incomplete overview of methods (including a Matlab-like periodogram-based one) in Python can be found here.
You can use Sox for that:
import os
import numpy as np
import scipy.io.wavfile as wav
from scipy.io.wavfile import write

def add_noise(exp, sig, n_directory, snrdb):
    """Mix an audio signal with a noise file at a desired SNR in dB.
    Input:
        exp: directory/prefix for the temporary and output files
        sig: signal, the output vector of wav.read
        n_directory: path of the noise file
        snrdb: desired signal-to-noise ratio in dB
    Output:
        d = s + alpha*n such that SNR = Ps/Pn"""
    direc_st = exp + 'mixed.wav'
    write(exp + 'sig_t.wav', 16000, sig)
    s = np.expand_dims(sig, -1).astype('float64')
    rate, n = wav.read(n_directory)
    n = np.expand_dims(n, -1).astype('float64')
    snr = 10 ** (snrdb * 0.1)
    Es = np.sum(s ** 2)
    En = np.sum(n ** 2)
    alpha = np.sqrt(Es / (snr * En))
    cmd = ('sox -m -v ' + str(alpha) + ' ' + n_directory + ' '
           + exp + 'sig_t.wav ' + direc_st)
    os.system(cmd)
    rate, d = wav.read(direc_st)
    return d
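If you do not want the Sox dependency, the same scaling can be done directly in NumPy (a sketch reusing the alpha formula from above; mix_at_snr is a hypothetical helper):

import numpy as np

def mix_at_snr(sig, noise, snrdb):
    # scale the noise so that 10*log10(Es/En) equals snrdb, then add it
    sig = sig.astype('float64')
    noise = noise[:len(sig)].astype('float64')
    snr = 10 ** (snrdb * 0.1)
    alpha = np.sqrt(np.sum(sig ** 2) / (snr * np.sum(noise ** 2)))
    return sig + alpha * noise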
Is there any Python implementation of the MINRES pseudoinversion algorithm that can deal with Hermitian matrices?
I have found a few sources, but all of them are only capable of working with real matrices and do not seem to be easily generalizable to the complex case:
https://searchcode.com/codesearch/view/89958680/
https://github.com/pascanur/theano_optimize
(there are a couple of other links, but my reputation does not allow me to post them)
A Hermitian system of size $n$,
$$\mathbf y = \mathbf H^{-1}\mathbf v,$$
can be embedded in a real, symmetric system of size $2n$:
\begin{equation}
\begin{bmatrix}
\Re(\mathbf y)\\ \Im(\mathbf y)
\end{bmatrix}=
\begin{bmatrix}
\Re(\mathbf H)&-\Im(\mathbf H)\\ \Im(\mathbf H)&\Re(\mathbf H)
\end{bmatrix}^{-1}
\begin{bmatrix}
\Re(\mathbf v)\\ \Im(\mathbf v)
\end{bmatrix}.
\end{equation}
Minimum-residual methods are often used for large problems, where constructing $\mathbf H$ explicitly is impractical. In that case we may instead have an operation that computes the matrix-vector product, $f: \mathbb C^n \to \mathbb C^n,\; f(\mathbf v) = \mathbf H\mathbf v$. This function can be wrapped to operate on $\mathbf x \in \mathbb R^{2n}$ by converting $\mathbf x$ back to a complex vector, applying $f$, and then embedding the result back in $\mathbb R^{2n}$.
Here is an example in python / numpy / scipy:
from scipy.sparse.linalg import minres, LinearOperator
from pylab import *

# Problem size
N = 100
# error helper
er = lambda t, a, b: print('%s error:' % t, mean(abs(a - b)))

# random Hermitian matrix
Q = randn(N, N) + 1j * randn(N, N)
H = Q @ conj(Q.T)
# random complex vector
v = randn(N) + 1j * randn(N)
# ground-truth solution
x0 = inv(H) @ v

# Pack/unpack a complex vector as a stacked real vector
c2r = lambda v: block([real(v), imag(v)])
r2c = lambda v: kron([1, 1j], eye(N)) @ v

# Verify that we can embed C^n in R^(2n)
Hr = real(H)
Hi = imag(H)
Hs = block([[Hr, -Hi], [Hi, Hr]])
vs = c2r(v)
xs = inv(Hs) @ vs
x1 = r2c(xs)
er('Embed', x0, x1)

# Verify that minres works as expected in the R-embedding
x2 = r2c(minres(Hs, vs, tol=1e-12)[0])
er('Minres 1', x0, x2)

# Demonstrate using operators
Av = lambda u: c2r(H @ r2c(u))
A = LinearOperator((N * 2,) * 2, Av, Av)
# Minres with the operator, converting input/output to/from complex/real
x3 = r2c(minres(A, vs, tol=1e-12)[0])
er('Minres 2',x0,x3)
>>> Embed error: 5.317184726020268e-12
>>> Minres 1 error: 6.641342200989796e-11
>>> Minres 2 error: 6.641342200989796e-11
I am looking for an efficient way to implement a simple filter with one coefficient that is time-varying and specified by a vector with the same length as the input signal.
The following is a simple implementation of the desired behavior:
import numpy as np

def myfilter(signal, weights):
    output = np.empty_like(weights)
    val = signal[0]
    for i in range(len(signal)):
        val += weights[i] * (signal[i] - val)
        output[i] = val
    return output
weights = np.random.uniform(0, 0.1, (100,))
signal = np.linspace(1, 3, 100)
output = myfilter(signal, weights)
Is there a way to do this more efficiently with numpy or scipy?
You can trade in the overhead of the loop for a couple of additional ops:
import numpy as np

def myfilter(signal, weights):
    output = np.empty_like(weights)
    val = signal[0]
    for i in range(len(signal)):
        val += weights[i] * (signal[i] - val)
        output[i] = val
    return output

def vectorised(signal, weights):
    wp = np.r_[1, np.multiply.accumulate(1 - weights[1:])]
    sw = weights * signal
    sw[0] = signal[0]
    sws = np.add.accumulate(sw / wp)
    return wp * sws

weights = np.random.uniform(0, 0.1, (100,))
signal = np.linspace(1, 3, 100)
print(np.allclose(myfilter(signal, weights), vectorised(signal, weights)))
On my machine the vectorised version is several times faster. It uses a "closed form" solution of your recurrence equation.
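For reference, here is the closed form the vectorised code exploits (my derivation, with $s_i$ the signal, $w_i$ the weights and $v_i$ the output):
$$v_i = (1 - w_i)\,v_{i-1} + w_i s_i, \qquad P_i = \prod_{j=1}^{i}(1 - w_j),\quad P_0 = 1.$$
Dividing the recurrence by $P_i$ turns it into a plain cumulative sum,
$$\frac{v_i}{P_i} = \frac{v_{i-1}}{P_{i-1}} + \frac{w_i s_i}{P_i} \quad\Longrightarrow\quad v_i = P_i \sum_{k=0}^{i} \frac{w_k s_k}{P_k},$$
with the $k=0$ term taken as $s_0$; in the code, wp, sw/wp and sws are $P$, the summands and the cumulative sum, respectively.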
Edit: for very long signal/weight arrays (100,000 samples, say) this method doesn't work because of overflow in the running product. In that regime you can still save a bit (more than 50% on my machine) using the following trick, which has the added bonus that you needn't solve the recurrence formula, only invert it.
import numpy as np
from scipy import linalg

def solver(signal, weights):
    rw = 1 / weights[1:]
    v = np.r_[1, rw, 1 - rw, 0]
    v.shape = 2, -1
    return linalg.solve_banded((1, 0), v, signal)
This trick uses the fact that your recurrence is formally similar to Gaussian elimination on a matrix with only one nonvanishing subdiagonal. It piggybacks on a library function that specialises in doing precisely that.
Actually, quite proud of this one.
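As a quick sanity check (my addition, reusing signal and weights from above):

print(np.allclose(myfilter(signal, weights), solver(signal, weights)))  # True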
The aim of this post is to properly understand the numerical Fourier transform in Python or Matlab, using an example for which the analytical Fourier transform is well known. For this purpose I chose the rectangular function; its analytical expression and its Fourier transform are reported here:
https://en.wikipedia.org/wiki/Rectangular_function
Here is the code in Matlab:
x = -3 : 0.01 : 3;
y = zeros(length(x));
y(200:400) = 1;
ffty = fft(y);
ffty = fftshift(ffty);
plot(real(ffty))
And here is the code in Python:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-3, 3, 0.01)
y = np.zeros(len(x))
y[200:400] = 1
ffty = np.fft.fft(y)
ffty = np.fft.fftshift(ffty)
plt.plot(np.real(ffty))
In both programming languages I get the same result, with the same problems:
First of all, the Fourier transform is not real as expected; moreover, even when taking just the real part, the solution does not look like the analytical one. The first plot reported here shows how it should look, at least in shape, and the second plot is what I get from my calculations.
Could anyone suggest how to correctly compute the Fourier transform of the rectangular function numerically?
There are two problems in your Matlab code:
First, y = zeros(length(x)); should be y = zeros(1,length(x));. Currently you create a square matrix, not a vector.
Second, the DFT (or FFT) will be real and symmetric if y is. Your y should be symmetric, and that means with respect to 0. So, instead of y(200:400) = 1; use y(1:100) = 1; y(end-98:end) = 1;. Recall that the DFT is like the Fourier series of a signal from which your input is just one period, and the first sample corresponds to time instant 0.
So:
x = -3 : 0.01 : 3;
y = zeros(1,length(x));
y(1:100) = 1; y(end-98:end) = 1;
ffty = fft(y);
ffty = fftshift(ffty);
plot(ffty)
gives
>> isreal(ffty)
ans =
1
The code in Python is
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-3, 3, 0.01)
y = np.zeros(len(x))
y[200:400] = 1
yShift = np.fft.fftshift(y)
fftyShift = np.fft.fft(yShift)
ffty = np.fft.fftshift(fftyShift)
plt.plot(ffty)
plt.show()
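One detail worth adding (my note, not part of the original answer): strictly speaking, the shift applied before the FFT should be np.fft.ifftshift, which is the exact inverse of fftshift; the two coincide here only because len(x) = 600 is even.

# for odd-length inputs, use the exact inverse shift before the FFT:
yShift = np.fft.ifftshift(y)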
I have installed NumPy and SciPy, but I don't quite understand their documentation about polyfit.
For example, here are my data samples:
[-0.042780748663101636, -0.0040771571786609945, -0.00506567946276074]
[0.042780748663101636, -0.0044771571786609945, -0.10506567946276074]
[0.542780748663101636, -0.005771571786609945, 0.30506567946276074]
[-0.342780748663101636, -0.0304077157178660995, 0.90506567946276074]
The first two columns are sample features, the third column is the output. My goal is to get a function that takes the two features as parameters and returns a prediction of the output.
Any simple example?
====================== EDIT ======================
Note that I need to fit something like a curve, not only straight lines. The polynomial should be something like this (n = 3):
a*x1^3 + b*x2^2 + c*x3 + d = y
Not:
a*x1 + b*x2 + c*x3 + d = y
x1, x2, x3 are features of one sample, y is the output
Try something like
edit: added an example function that uses the results of the linear regression to estimate the output.
import numpy as np

data = np.array(
    [[-0.042780748663101636, -0.0040771571786609945, -0.00506567946276074],
     [0.042780748663101636, -0.0044771571786609945, -0.10506567946276074],
     [0.542780748663101636, -0.005771571786609945, 0.30506567946276074],
     [-0.342780748663101636, -0.0304077157178660995, 0.90506567946276074]])
coefficient = data[:, 0:2]
dependent = data[:, -1]

x, residuals, rank, s = np.linalg.lstsq(coefficient, dependent, rcond=None)

def f(x, u, v):
    # predict y from the features (u, v) using the fitted coefficients x
    return u * x[0] + v * x[1]

for datum in data:
    print(f(x, *datum[0:2]))
Which gives
>>> x
array([ 0.16991146, -30.18923739])
>>> residuals
array([ 0.07941146])
>>> rank
2
>>> s
array([ 0.64490113, 0.02944663])
and the function created with your coefficients gave
0.115817326583
0.142430900298
0.266464019171
0.859743371665
More info can be found at the documentation I posted as a comment.
edit 2: fitting your data to an arbitrary model.
edit 3: made my model a function for ease of understanding.
edit 4: made the code easier to read and changed the model to a quadratic fit; you should be able to read this code and see how to make it minimize any residual you want now.
contrived example:
import numpy as np
from scipy.optimize import leastsq

data = np.array(
    [[-0.042780748663101636, -0.0040771571786609945, -0.00506567946276074],
     [0.042780748663101636, -0.0044771571786609945, -0.10506567946276074],
     [0.542780748663101636, -0.005771571786609945, 0.30506567946276074],
     [-0.342780748663101636, -0.0304077157178660995, 0.90506567946276074]])
coefficient = data[:, 0:2]
dependent = data[:, -1]

def model(p, x):
    a, b, c = p
    u = x[:, 0]
    v = x[:, 1]
    return a * u**2 + b * v + c

def residuals(p, y, x):
    return y - model(p, x)

p0 = np.array([2, 3, 4])  # some initial guess
p = leastsq(residuals, p0, args=(dependent, coefficient))[0]

def f(p, x):
    # evaluate the fitted quadratic model for a single sample x
    return p[0] * x[0]**2 + p[1] * x[1] + p[2]

for x in coefficient:
    print(f(p, x))
gives
-0.108798280153
-0.00470479385807
0.570237823475
0.413016072653
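Worth noting (my addition): the model a*u**2 + b*v + c is still linear in its parameters, so the same fit can be obtained without an iterative optimizer by augmenting the design matrix:

A = np.column_stack([coefficient[:, 0]**2, coefficient[:, 1], np.ones(len(coefficient))])
p_lin, *_ = np.linalg.lstsq(A, dependent, rcond=None)
print(p_lin)  # should agree with the leastsq result above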