I am currently trying to determine the frequency of the audio data I have obtained from pyAudioStream.read(). I have already counted the number of zero crossings in the chunk, and now I want to derive the frequency from that count. I've heard this is possible, but I don't know how to do it and can't seem to find it in a Google search. Can anyone help me out here?
Let's assume the variable num_crossings holds the number of zero crossings in your chunk. Then you have:
frequency = num_crossings * sampling_rate / (2 * len(chunk))
For frequency detection, you can also use the Fourier transform (with numpy.fft, for instance).
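As a quick sketch of the zero-crossing approach (the 440 Hz test tone and the helper name are just illustrations, not anything from the original post), counting sign changes with numpy and applying the formula above looks like this:

```python
import numpy as np

def zero_crossing_freq(chunk, sampling_rate):
    """Estimate the dominant frequency from the zero-crossing count.

    Each full cycle of a sine wave crosses zero twice, hence the
    division by 2. This works best on clean, single-tone signals.
    """
    signs = np.sign(chunk)
    num_crossings = np.count_nonzero(np.diff(signs))
    return num_crossings * sampling_rate / (2 * len(chunk))

# A 440 Hz sine sampled at 44.1 kHz for one second
sr = 44100
t = np.arange(sr) / sr
chunk = np.sin(2 * np.pi * 440 * t)
print(zero_crossing_freq(chunk, sr))  # prints approximately 440.0
```

On noisy real audio you would typically low-pass or de-mean the chunk first, since spurious crossings around zero inflate the estimate.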
I have a set of data in which I am measuring the displacement of a body and the corresponding force required. The displacement oscillates randomly. I need to detect when the displacement changes from negative to positive; each such change counts as a completed cycle. Each cycle should be saved in a separate data frame containing the displacement values and their corresponding forces.
I can do this when the total number of cycles is known to me, but for this particular problem there is no way of knowing how many cycles there are or how many data points each cycle will contain.
Any help will be appreciated in this regard.
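One possible sketch (assuming the data sits in a pandas DataFrame with hypothetical "displacement" and "force" columns): find every negative-to-positive crossing with numpy and slice the frame at those points, so the number of cycles never needs to be known in advance.

```python
import numpy as np
import pandas as pd

def split_cycles(df, col="displacement"):
    """Split a DataFrame into cycles at each negative-to-positive
    crossing of `col`, without knowing the cycle count in advance."""
    x = df[col].to_numpy()
    # indices where the value goes from negative to non-negative
    starts = np.where((x[:-1] < 0) & (x[1:] >= 0))[0] + 1
    # include the very beginning and end so no data is dropped
    bounds = np.concatenate(([0], starts, [len(x)]))
    return [df.iloc[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if a < b]

# Synthetic example: three full sine cycles with a proportional force
disp = np.sin(np.linspace(0, 6 * np.pi, 300))
force = 2.0 * disp
df = pd.DataFrame({"displacement": disp, "force": force})
cycles = split_cycles(df)
print(len(cycles))  # 3
```

Each element of `cycles` is a DataFrame holding one cycle's displacement and force rows.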
I'm only asking this question because I've recently stumbled upon some clever code that I would never have thought of on my own. The code I refer to uses the numpy Python library. It converts the signal to a true/false array based on whether the signal is above a threshold. Then it generates an array of indices aligned with the middle of each bit, and reshapes the data into groups of 8. It takes half a dozen lines of code to analyze thousands of data points. I've written code that does similar things, but it walks through the entire dataset with for loops looking for edges and then converts those edges to bits; it takes literally hundreds of lines of code.
Pictured is an example of a dataset I'm trying to analyze. The beginning always has a preamble of 8 bits that are the same. I want to extract the period of the signal using the preamble.
Are there any methods for doing so in python without painstakingly looking for edges?
import numpy as np

# Mark the samples where the signal jumps by more than the threshold
edges = np.abs(x[:-1] - x[1:]) > limit
# Find the indices where the transitions occur
indices = np.where(edges)[0]
# Count the elements between consecutive transitions
counts = indices[1:] - indices[:-1]
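For example, on a hypothetical capture where the preamble alternates so that every bit produces one transition (the 20-samples-per-bit figure is made up for illustration), the bit period falls straight out of the transition spacing:

```python
import numpy as np

# Hypothetical capture: an alternating 8-bit preamble (10101010),
# 20 samples per bit, no noise
samples_per_bit = 20
signal = np.repeat([1, 0, 1, 0, 1, 0, 1, 0], samples_per_bit).astype(float)

limit = 0.5
edges = np.abs(signal[:-1] - signal[1:]) > limit
indices = np.where(edges)[0]
counts = indices[1:] - indices[:-1]

# With one transition per preamble bit, the typical spacing is the bit period;
# the median is robust to a stray glitch or two
period = int(np.median(counts))
print(period)  # 20
```

On real data you would run this only over the preamble region and pick `limit` relative to the signal's amplitude.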
I have some trouble understanding the output of the Discrete Cosine Transform.
Background:
I want to achieve a simple audio compression by saving only the most relevant frequencies of a DCT. In order to be somewhat general, I would cut several audio tracks into pieces of a fixed size, say 5 seconds.
Then I would do a DCT on each sample and find out which are the most important frequencies among all short snippets.
This, however, does not work, which might be due to my misunderstanding of the DCT. See for example the images below:
The first image shows the DCT of the first 40 seconds of an audio track (I wanted it long enough to get a good mix of frequencies).
The second image shows the DCT of the first ten seconds.
The third image shows the DCT of a reversed concatenation (like abc -> abccba) of the first 40 seconds.
I added a vertical mark at 2e5 for comparison. The sample rate of the music is the usual 44.1 kHz.
So here are my questions:
What is the frequency that corresponds to an individual value of the DCT-output-vector? Is it bin/2? Like if I have a spike at bin=10000, which frequency in the real world does this correspond to?
Why does the first plot show strong amplitudes for so many more frequencies than the second? My intuition was that the DCT would yield values for all frequencies up to 44.1 kHz (so bin number 88.2k if my assumption in #1 is correct), only that the scale of the spikes would differ, which would then account for the difference in the music.
Why does the third plot show strong amplitudes for more frequencies than the first does? I thought that by concatenating the data, I would not get any new frequencies.
As the DCT and FFT/DFT are very similar, I tried to learn more about Fourier transforms (this and this helped), but apparently it didn't suffice.
Figured it out myself, and it was indeed written in the link I posted in the question. The frequency that corresponds to a certain bin_id is given by bin_id * sample_rate / (2 * N), which boils down to bin_id / (2 * t) with N = sample_rate * t. This means the plots just have different granularities: if plot #1 has a high point at position x, plot #2 will likely show a high point at x/4 and plot #3 at x*2.
The image below shows the data of plot #1 stretched to twice its size (in blue) and the data of plot #3 in yellow.
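One way to sanity-check the bin-to-frequency relation empirically is to transform a pure tone and see where the spike lands (a sketch; the 440 Hz tone and one-second length are arbitrary choices):

```python
import numpy as np
from scipy.fft import dct

fs = 44100                 # sample rate, Hz
t = 1.0                    # snippet length, s
N = int(fs * t)
tone = 440.0               # test frequency, Hz

x = np.cos(2 * np.pi * tone * np.arange(N) / fs)
X = dct(x, type=2)
peak_bin = int(np.argmax(np.abs(X)))

# For a DCT-II of length N, bin k corresponds to k * fs / (2 * N)
print(peak_bin, peak_bin * fs / (2 * N))  # 880 440.0
```

The spike lands at bin 880, i.e. twice the tone frequency times the snippet length, which is consistent with the granularity argument above.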
I've got a set of 780 monthly temperature anomalies over 65 years and I want to analyse it for frequencies that are driving the anomalies. I've used the spectrum package to do this, I've included pictures of the series before and after the analysis.
import matplotlib.pyplot as plt
from spectrum import Periodogram

p = Periodogram(anomalies, sampling=1/12)
p.run()
p.plot(marker='.')
plt.title('Power Spectrum of Monthly Temperature Anomalies')
plt.show()
The resulting spectrum has several clear negative spikes. Now, I understand that a negative value in dB isn't actually a negative absolute value, but why is this happening? Does it imply some specific frequency is missing from my data? Because a positive spike implies one is present.
Also, why are most of the displayed values negative? What reference value is the dB scale relative to?
Part of me thinks I should take the absolute value of this spectrum, but I'd like to understand why, if that's the case. Also, I set sampling to 1/12 because the data points are monthly, so hopefully the frequency scale is in cycles per year?
Many thanks, this is my first post here so let me know if I need to be clearer about anything.
[Figure: Negative Energies]
[Figure: The Series being Analysed]
As you can see on the y-axis of the plots, the units are in dB (decibels, https://en.wikipedia.org/wiki/Decibel), so what you see is not the raw data (in the frequency domain) but something like 10*log10(data). This explains the presence of negative values and is perfectly normal.
Here you have both positive and negative values, but in general you would normalise the data (by its maximum) so that all values are negative and the highest value is set to 0. This is possible using:
p.plot(norm=True)
You can plot the data without the log function, but then you need the raw data (in the frequency domain). For instance, to reproduce the behaviour of the p.plot function, you can use:
from pylab import plot, log10
plot(p.frequencies(), 10*log10(p.psd/p.psd.max()))
So, if you do not want to use the decibel unit, use:
from pylab import plot
plot(p.frequencies(), p.psd)
disclaimer: I'm the main author of Spectrum (http://pyspectrum.readthedocs.io/).
I am trying to determine the conditions of a wireless channel by analysing captured I/Q samples. I have 50000 data samples, and as shown in the attached figure, there are spikes in the graph whenever there is activity (e.g. a data transmission) on the channel. I am trying to count these spikes, i.e. the data values higher than a threshold.
I need an accurate estimate of the threshold, from which I can then find the channel load. The threshold value in the attached figure is around 0.0025, and it varies over time, so each time I take 50000 samples I first have to find the threshold using some sort of unsupervised learning.
I tried k-means (in Python scikit-learn) to cluster the data and find the centroids of the estimated clusters, but it doesn't give me a good estimate of the threshold (especially when there is no activity and the channel is idle).
Does anyone have prior experience with similar problems?
[Figure: Captured data]
Since the idle noise seems fairly consistent and very different from the transmissions, I can think of several simple algorithms that could give you a reasonable threshold in an unsupervised manner.
The most direct method would be to sort the values (perhaps first grouping them into buckets), then find the lowest-valued region where a large enough proportion of the values (at least ~5%) falls. Take a reasonable margin above the highest of those values (50%?) and you should be good to go.
You'll need to fiddle with the thresholds a bit. I'd collect sample data and tweak the values until it works 100% of the time and the chosen values make sense.
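As a rough illustration of that idea (the quantile and margin values are arbitrary knobs to tune on real captures, and the synthetic data only mimics the described signal):

```python
import numpy as np

def estimate_threshold(samples, noise_quantile=0.95, margin=2.0):
    """Estimate an activity threshold from one capture.

    Assumes activity is sparse, so a high quantile of the magnitudes
    still sits near the top of the idle-noise band; the threshold is
    a safety margin above that.
    """
    mags = np.abs(samples)
    noise_top = np.quantile(mags, noise_quantile)
    return margin * noise_top

# Synthetic capture: 49500 idle-noise samples plus 500 sparse "sparks"
rng = np.random.default_rng(1)
idle = rng.uniform(0.0, 0.001, 49500)
sparks = rng.uniform(0.008, 0.012, 500)
samples = rng.permutation(np.concatenate([idle, sparks]))

thr = estimate_threshold(samples)
print(np.count_nonzero(samples > thr))  # 500: every spark, and only sparks
```

The advantage over k-means here is that an idle capture simply produces a threshold slightly above the noise floor, with nothing exceeding it, instead of splitting the noise into two spurious clusters.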