I want to know the correct method for calculating the SNR of a single image. I added some Gaussian noise to the original image and want to calculate the SNR.
You can create a function like this:
import numpy as np

def signaltonoise(a, axis=0, ddof=0):
    """
    The signal-to-noise ratio of the input data.

    Returns the signal-to-noise ratio of `a`, here defined as the mean
    divided by the standard deviation.

    Parameters
    ----------
    a : array_like
        An array_like object containing the sample data.
    axis : int or None, optional
        Axis along which to operate. Default is 0. If None, compute over
        the whole array `a`.
    ddof : int, optional
        Degrees of freedom correction for standard deviation. Default is 0.

    Returns
    -------
    s2n : ndarray
        The mean to standard deviation ratio(s) along `axis`, or 0 where the
        standard deviation is 0.
    """
    a = np.asanyarray(a)
    m = a.mean(axis)
    sd = a.std(axis=axis, ddof=ddof)
    return np.where(sd == 0, 0, m / sd)
Source: https://github.com/scipy/scipy/blob/v0.16.0/scipy/stats/stats.py#L1963
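For instance, a minimal usage sketch with the signaltonoise function above (the image array and noise level below are made up for illustration):
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))                     # stand-in for your original image
noisy = image + rng.normal(0, 0.1, image.shape)    # add Gaussian noise

# axis=None computes a single SNR over the whole image
print(signaltonoise(noisy, axis=None))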
My question is simple. I think I roughly understand the FFT and DFT. What I don't understand is why, in Python or MATLAB, we use the number of samples as the FFT size. Why does every sample taken in the time domain correspond to a frequency bin in the frequency domain?
For example, with SciPy's fftpack, in order to plot the spectrum of a .wav file signal we use:
FFT_out = abs(scipy.fftpack.fft(time_domain_signal))
Frequency_Vector = scipy.fftpack.fftfreq(len(FFT_out), 1/Sampling_rate)
Now if I check len(FFT_out), it is the same as the number of samples (i.e. sampling frequency * duration of the audio signal), and since fftfreq returns the frequency vector that contains the frequency bins, len(FFT_out) = number of frequency bins.
A simple explanation would be appreciated.
Mathematically, a key property of the Fourier transform is that it is linear and invertible. The latter means that if two signals have the same Fourier transform they are equal, and that for any spectrum there is a signal with that spectrum.
For implementations that work with a finite collection of samples of a signal, the first property means that the Fourier transform can be represented by an N x M matrix, where N is the number of time samples and M the number of frequency samples. The second property means that the matrix must be invertible, and therefore square, i.e. we must have M == N.
You say that time samples and frequency bins correspond, and that is true in the sense that there are the same number of them. However, the value in each frequency bin depends on all the time values.
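You can verify this correspondence directly; a minimal sketch (the test signal and sample rate below are arbitrary):
import numpy as np
from scipy import fftpack

Sampling_rate = 8000                      # assumed sample rate in Hz
t = np.arange(0, 1, 1/Sampling_rate)      # 1 second of samples
x = np.sin(2*np.pi*440*t)                 # arbitrary test signal

X = fftpack.fft(x)
freqs = fftpack.fftfreq(len(X), 1/Sampling_rate)

print(len(x), len(X), len(freqs))         # all equal: N time samples -> N frequency bins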
import tensorflow as tf
from tensorflow.contrib import layers

input = [50, 10]
O1 = layers.fully_connected(input, 20, tf.sigmoid)
Why is my input wrong?
I am not sure I understand the question, but...
The sigmoid layer will output an array with numbers between 0 and 1, but you can't really calculate what the standard deviation will be before feeding data through your network.
If you are talking about the matrix that contains the weight parameters, then that depends on how you initialize them. But after training the network, the standard deviation of the weights will not be the same as before training.
EDIT:
OK, so you simply want to calculate the standard deviation of a matrix. In that case, use NumPy:
import numpy as np

a = np.array([[1, 2], [3, 4]])  # or your 50 by 50 matrix
np.std(a)
I would like to normalize a vector such that the mean of the normalized vector is a certain pre-defined value. For instance, I want the mean to be 0.1 in the following example:
import numpy as np
from sklearn.preprocessing import normalize
array = np.arange(1,11)
array_norm = normalize(array[:,np.newaxis], axis=0).ravel()
Of course, np.mean(array_norm) is 0.28 and not 0.1. Is there a way to do this in Python?
You could just multiply each element by mean_you_want / current_mean. If you multiply each element by a scalar, the mean will also be multiplied by that scalar. In your case, that scalar would be 0.1/np.mean(array_norm):
array_norm *= 0.1/np.mean(array_norm)
This should do the trick.
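Putting it together with the example above, a quick check:
import numpy as np
from sklearn.preprocessing import normalize

array = np.arange(1, 11)
array_norm = normalize(array[:, np.newaxis], axis=0).ravel()

# rescale so the mean becomes the target value
array_norm *= 0.1 / np.mean(array_norm)
print(np.mean(array_norm))  # 0.1 (up to floating-point error)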
I have a 1-D particle trajectory stored in a list j, sampled at times time = np.arange(0, 10 + dt, dt), where dt is the time step. I want to calculate the MSD according to this article.
I have searched on Google as well as here for a 1-D MSD in Python but did not find a suitable one, as my Python knowledge is very much at beginner level. I have written some code and it runs without any error, but I am not sure that it represents the same thing as described in the article. Here is my code:
j_i = np.array(j)
MSD = []
diff_i = []
tau_i = []
for l in range(0, len(time)):
    tau = l*dt
    tau_i.append(tau)
    for i in range(0, (len(time)-l)):
        diff = (j_i[l+i]-j_i[i])**2
        diff_i.append(diff)
    MSD_j = np.sum(diff_i)/np.max(time)
    MSD.append(MSD_j)
Can anyone please check and verify the code, and give suggestions if it is wrong?
The code is mostly correct. Here is a modified version where:
I simplified some expressions (e.g. the range calls);
I corrected the average, directly using np.mean, because the MSD is a squared displacement [L^2], not a ratio [L^2] / [T];
I reset diff_i at each lag, so the mean at a given tau only includes displacements at that lag.
Final code:
j_i = np.array(j)
MSD = []
tau_i = []
for l in range(len(time)):
    tau = l*dt
    tau_i.append(tau)
    diff_i = []  # reset for each lag so the mean only covers this tau
    for i in range(len(time)-l):
        diff = (j_i[l+i]-j_i[i])**2
        diff_i.append(diff)
    MSD_j = np.mean(diff_i)
    MSD.append(MSD_j)
EDIT: I realized I forgot to mention it because I was focusing on the code, but the ensemble average denoted by <.> in the paper should, as the name implies, be performed over several particles, preferably comparing the initial position of each particle with its new position after a time tau, not, as you did, with a kind of time-running average.
EDIT 2: here is some code that shows how to do a proper ensemble average and implement exactly the formula in the article:
js = ...  # an array of shape (N, M), with N the number of particles and
          # M the number of time points
MSD_i = np.zeros((N, M))
taus = []
for l in range(len(time)):
    taus.append(l*dt)  # store the values of tau
    # compute all squared displacements at current tau
    MSD_i[:, l] = np.square(js[:, 0] - js[:, l])

# then, compute the ensemble average for each tau (over the N particles)
MSD = np.mean(MSD_i, axis=0)
And now you can plot MSD versus taus and Bob's your uncle
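For instance, a minimal end-to-end sketch, assuming N synthetic 1-D random-walk trajectories (the data below is made up purely to exercise the snippet):
import numpy as np
import matplotlib.pyplot as plt

dt = 0.01
time = np.arange(0, 10 + dt, dt)
N, M = 100, len(time)                       # 100 particles, M time points

# illustrative data: N independent 1-D random walks
rng = np.random.default_rng(0)
js = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(N, M)), axis=1)

MSD_i = np.zeros((N, M))
taus = []
for l in range(M):
    taus.append(l*dt)
    MSD_i[:, l] = np.square(js[:, 0] - js[:, l])
MSD = np.mean(MSD_i, axis=0)                # ensemble average over the N particles

plt.plot(taus, MSD)                         # for a random walk, MSD grows roughly linearly with tau
plt.xlabel("tau")
plt.ylabel("MSD")
plt.show()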
I'm adapting and extending the Matlab "Chroma Toolbox" by Meinard Müller and Sebastien Ewert to Python. It aims to detect which musical pitches are present at each analysis frame of an audio recording.
The first step is to determine the tuning of the music, and the Chroma Toolbox tests whether the music is tuned at the standard A=440 Hz, or down a quarter, third, half, two-thirds, or three-quarters of a semitone. That's OK, but in my application I need more resolution in the tuning detection.
Once the tuning is selected from one of those choices, a corresponding filterbank is chosen, which is used to find how much energy there is at each musical pitch over the range of the piano. (Also, the waveform is resampled to 22050, 4410, and 882 Hz)
The coefficients for the filterbanks are stored in .mat files, given by the Chroma Toolbox. For example, the coefficients for detecting energy at standard-tuned middle C (261.63 Hz) are
b = [1., -7.43749873, 24.72954997, -47.94740681, 59.25189976,
     -47.77885707, 24.55599193, -7.35933913, 0.98601284]
a = [0.00314443, -0.02341175, 0.07794208, -0.15134062, 0.18733283,
     -0.15134062, 0.07794208, -0.02341175, 0.00314443]
and the sample rate for middle C is 4410 Hz.
These coefficients are used in a call to filtfilt: I use scipy.signal.filtfilt(b, a, x) where x is the waveform at an appropriate sampling frequency, low for the low notes, high for the higher ones. This step is done in the file "audio_to_pitch_via_FB.m".
The question:
Because I want to allow for different tuning levels than those designed into the Chroma Toolbox, I need to make my own filterbanks, and so I need to know how to compute the filter coefficients. To do so, I need a function coeffs(freq, fs) that computes the right coefficients for finding the energy at a given frequency freq, for a signal at sample frequency fs. How do I do it?
Here's the name of one of the .mat files, in case it contains a useful clue. "MIDI_FB_ellip_pitch_60_96_22050_Q25_minusQuarter.mat"
The code that generates the filters is in the generateMultiratePitchFilterbank.m file. The ellip function returns a and b the other way round, but otherwise it's more or less the same.
The following recipe reproduces the numbers you quoted:
import numpy as np
import scipy.signal as ss

def coeffs(pitch, fs, Q=25, stop=2, Rp=1, Rs=50):
    """Calculate filter coeffs for a given pitch and sampling rate, fs.

    See their source code for a description of Q, stop, Rp, Rs."""
    nyq = fs/2.  # Nyquist frequency
    pass_rel = 1/(2.*Q)
    stop_rel = pass_rel * stop
    # The min-max edges of the pass band
    Wp = np.array([pitch - pass_rel*pitch, pitch + pass_rel*pitch])/nyq
    # And the stop band(s)
    Ws = np.array([pitch - stop_rel*pitch, pitch + stop_rel*pitch])/nyq
    # Get the order, natural freq
    n, Wn = ss.ellipord(Wp, Ws, Rp, Rs)
    # Get a and b:
    a, b = ss.ellip(n, Rp, Rs, Wn, btype="bandpass")
    return a, b
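For a quick sanity check you can call it with the middle-C values from the question (this reuses the coeffs function defined above):
# Middle C at the 4410 Hz sample rate used by the toolbox
a, b = coeffs(261.63, 4410)
print(a)  # should reproduce the "a" array quoted in the question
print(b)  # should reproduce the "b" array quoted in the question

# x would be the waveform resampled to 4410 Hz; the question then applies
# y = scipy.signal.filtfilt(b, a, x)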