I essentially have three streams of data in three separate numpy arrays, and I want to plot the time along the x-axis, the frequency along the y-axis, and have the magnitude determine the color. But this is all happening in real time, so it should look like the spectrogram is drawing itself as the data updates.
I have simplified the code below to show that I am running a loop which takes at most 10 ms per iteration, and during each iteration I get new data from the two functions.
import numpy as np
import random
from time import sleep

def new_val_freq():
    return random.randint(0, 22)

def new_val_mag():
    return random.randint(100, 220)

x_axis_time = np.array([1])
y_axis_frequency = np.array([10])
z_axis_magnitude = np.array([100])
t = 1

while True:
    x_axis_time = np.append(x_axis_time, [t + 1])
    t += 1
    y_axis_frequency = np.append(y_axis_frequency, [new_val_freq()])
    z_axis_magnitude = np.append(z_axis_magnitude, [new_val_mag()])
    sleep(0.01)
    # Trying to figure out how to create/update the spectrogram plot with the
    # additional data in real time, without lag
Ideally I would like this to be as fast as it possibly can be, rather than having to redraw the whole spectrogram on every update. It seems matplotlib is not great at plotting dynamic spectrograms, and I have not come across any dynamic spectrogram examples, so I was wondering how I could do this?
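One pattern that may help (a sketch only, and not your exact data model: it assumes each time step yields a whole column of magnitudes over a set of frequency bins, whereas your loop produces a single frequency/magnitude pair per step) is to keep a fixed-size rolling buffer and update a single AxesImage with set_data(), so only the pixels are redrawn:
import numpy as np
import matplotlib.pyplot as plt

n_bins, n_cols = 64, 200                # frequency bins, visible time steps
buffer = np.zeros((n_bins, n_cols))

plt.ion()
fig, ax = plt.subplots()
im = ax.imshow(buffer, origin='lower', aspect='auto',
               vmin=0, vmax=255, interpolation='nearest')
ax.set_xlabel('time step')
ax.set_ylabel('frequency bin')

for _ in range(1000):
    new_column = np.random.randint(100, 220, size=n_bins)  # stand-in for real data
    buffer = np.roll(buffer, -1, axis=1)  # slide the window one step left
    buffer[:, -1] = new_column            # newest column goes on the right
    im.set_data(buffer)                   # update pixels without rebuilding the plot
    fig.canvas.draw_idle()
    plt.pause(0.01)                       # roughly your 10 ms loop period
If matplotlib still cannot keep up at your target rate, pyqtgraph is generally much faster for this kind of live image update.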
I'm trying to make sense of the output produced by numpy's FFT.
I have a sqlite database where I have logged several series of ADC values. Each series consists of 1024 samples taken at 1 ms intervals.
After importing a data series, I normalize it and run it through the FFT. I've included a few plots of the original signal compared to the FFT output.
import sqlite3
import struct
import numpy as np
from matplotlib import pyplot as plt

conn = sqlite3.connect(r"C:\my_test_data.sqlite")
c = conn.cursor()
c.execute('SELECT ID, time, data_blob FROM log_tbl')

for row in c:
    data_raw = bytes(row[2])
    data_raw_floats = struct.unpack('f' * 1024, data_raw)
    data_np = np.asarray(data_raw_floats)
    data_normalized = (data_np - data_np.mean()) / (data_np.max() - data_np.min())
    fft = np.fft.fft(data_normalized)
    N = data_normalized.size

    plt.figure(1)
    plt.subplot(211)
    plt.plot(data_normalized)
    plt.subplot(212)
    plt.plot(np.abs(fft)[:N // 2] * 1 / N)
    plt.show()
    plt.clf()
The signal clearly contains some frequencies, and I was expecting them to be visible from the FFT output.
What am I doing wrong?
You need to make sure that your data is evenly spaced when using np.fft.fft, otherwise the output will not be accurate. If the samples are not evenly spaced, you can use a Lomb-Scargle periodogram instead, for example: http://docs.astropy.org/en/stable/stats/lombscargle.html.
Or look up the non-uniform FFT.
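A minimal sketch of the Lomb-Scargle route with astropy (the uneven sample times and test signal here are made up for illustration):
import numpy as np
from astropy.stats import LombScargle

t = np.sort(np.random.uniform(0, 1.024, 1024))  # unevenly spaced sample times (s)
y = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)

frequency, power = LombScargle(t, y).autopower()
print(frequency[np.argmax(power)])  # close to 10 Hz for this test signal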
About the plots:
I don't think that you are doing anything obviously wrong. Your signal contains a component with a period on the order of 100 samples, so you can expect a strong peak around 1/period = 0.01 in normalized frequency; this is what is visible on your graphs. The time-domain signals are not that sinusoidal, so the peak in the frequency domain will be blurry, as seen on your graphs.
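If you want that peak in physical units rather than as a fraction of the sample rate, np.fft.fftfreq gives you the frequency axis directly; a sketch assuming the 1 ms sample spacing from the question:
# With 1 ms spacing the bins run from 0 Hz to the 500 Hz Nyquist limit,
# so a normalized peak at 0.01 corresponds to 0.01 * 1000 Hz = 10 Hz.
freqs = np.fft.fftfreq(N, d=1e-3)[:N // 2]
plt.plot(freqs, np.abs(fft)[:N // 2] / N)
plt.xlabel('frequency (Hz)')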
I have two sets of data that I would like to plot on the same graph. Both sets of data have 200 seconds worth of data. DatasetA (BLUE) is sampled at 25 Hz and DatasetB (Red) is sampled at 40Hz. Hence DatasetA has 25*200 = 5000 (time,value) samples... and DatasetB has 40*200 = 8000 (time,value) samples.
[image: datasets with different sample rates]
As you can see above, I have managed to plot these in matplotlib using the 'plot_date' function. As far as I can tell, the 'plot' function will not work because the number of (x,y) pairs is different in each dataset. The issue I have is the format of the x-axis. I would like the time to be a duration in seconds rather than an exact time in the format hh:mm:ss. Currently, the seconds value resets back to zero each time it crosses a minute boundary (as seen in the zoomed-out image below).
[image: zoomed out full time scale]
How can I make the plot show the time increasing from 0 to 200 seconds rather than showing hours:min:sec?
Is there a matplotlib.dates.DateFormatter that can do this (I have tried, but can't figure it out...)? Or do I somehow need to manipulate the datetime x-axis values to be a duration rather than an exact time? And if so, how?
FYI:
The code below is how I am converting the original csv list of float values (in seconds) into datetime objects, and then into matplotlib date-time objects -- to be used with the axes.plot_date() function.
from matplotlib import dates
import datetime

## arbitrary start date... we only care about the elapsed time, so only the
## time of day is shown on the graph.
base_datetime = datetime.datetime(2018, 1, 1)
csvDateTime = [base_datetime + datetime.timedelta(seconds=x) for x in csvTime]
csvMatTime = [dates.date2num(x) for x in csvDateTime]
Thanks for your help/suggestions!
Well, thanks to ImportanceOfBeingErnst for pointing out that I was vastly over-complicating things...
It turns out that I really only need the ax.plot(x, y) function rather than the ax.plot_date(mdatetime, y) function. plot() can happily draw traces of different lengths on the same axes, as long as each individual trace has the same number of x and y values. Since the data is all given in seconds, I can simply plot using 0 as my "reference time".
For anyone else struggling with plotting durations rather than exact times: you can simply manipulate the "time" (x) data using python's map() function, or better yet a list comprehension, to "time shift" the data or convert it to a single unit of time (e.g. turn minutes into seconds by multiplying by 60).
"Time Shifting" might look like:
# build some sample 25 Hz time data
time = range(0, 1000, 1)
time = [x * .04 for x in time]   # 25 Hz -> 0.04 s between samples
# "time shift" it by 5 seconds, since this data is recorded
# 5 seconds after the other signal
time = [x + 5 for x in time]
Here is my plotting code for any other matplotlib beginners like me :) (this will not run, since I have not converted my variables to generic data... but nevertheless it is a simple example of using matplotlib.)
fig, ax = plt.subplots()
ax.grid()
ax.set_title(plotTitle)
ax.set_xlabel("time (s)")
ax.set_ylabel("value")

# begin looping over the different sets of data.
tup = 0
while tup < len(alldata):
    outTime = alldata[tup][1].get("time")
    # each signal is time shifted 5 seconds later.
    # in addition each signal has a different sampling frequency,
    # so len(outTime) is different for almost every signal.
    outTime = [x + (5 * tup) for x in outTime]
    for key in alldata[tup][1]:
        if key not in channelSelection:
            ## if we don't want to plot that data then skip it.
            continue
        else:
            data = alldata[tup][1].get(key)
            ## using a list comprehension to scale the y values.
            data = [100 * x for x in data]
            ax.plot(outTime, data, linestyle='solid', linewidth=1, marker='')
    tup += 1

plt.show()
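For completeness: if you do want to keep the plot_date/datetime approach, a tick formatter can present the date axis as elapsed seconds instead. A hypothetical sketch, reusing the ax and base_datetime names from above (matplotlib date numbers are in days, hence the factor of 86400):
from matplotlib import dates, ticker

base_num = dates.date2num(base_datetime)
fmt = ticker.FuncFormatter(lambda x, pos: '{:.0f}'.format((x - base_num) * 86400))
ax.xaxis.set_major_formatter(fmt)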
I am doing a project where I want to use the data of a .wav file to drive animation. The problems I am facing are mainly due to the fact that the animation runs at 25 fps while the .wav file has 44100 samples per second, so I've broken the audio into chunks of 44100/25 samples each. Working with the amplitude is fine; I created an initial test to try it out and it worked. This is the code:
import wave
import struct

wav = wave.open('test.wav', 'rb')
rate = 44100
nframes = wav.getnframes()
data = wav.readframes(-1)
wav.close()

data_c = [data[offset::2] for offset in range(2)]
ch1 = struct.unpack('%ih' % nframes, data_c[0])
ch2 = struct.unpack('%ih' % nframes, data_c[1])

spf = 44100 // 25    # samples per animation frame; integer division for range()
kf = []
for i in range(0, len(ch2), spf):
    cur1 = 0
    cur2 = 0
    # clamp the upper bound so a final partial chunk doesn't raise IndexError
    for j in range(i, min(i + spf, len(ch2))):
        cur1 += ch2[j]
        cur2 += ch1[j]
    cur = (cur1 + cur2) / 44100. / 25. / 2.
    kf.append(cur)

min_v = min(kf)
max_v = max(kf)
if abs(max_v) > abs(min_v):
    kf = [float(i) / max_v for i in kf]
else:
    kf = [float(i) / min_v for i in kf]
Now I want to get the spectrum for each separate keyframe, as I do for the amplitude, but I am struggling to think of a way to do it. I can get the spectrum for the whole file using an FFT, but that's not what I want, because ideally I would like the objects to move differently in response to different frequencies.
Look at scipy's wavfile module. It'll turn the wave file into a numpy array. Numpy also has FFT functions, and scipy/matplotlib have spectrogram functions for plotting the entire spectrogram.
from scipy.io import wavfile
sample_rate, data = wavfile.read(filename)
Then you have to work out the timing of how you want to read the data. Matplotlib has animation tools that will call a function at a given interval. The other way of doing it is to use PyAudio; with pyaudio you can listen to the data while it is displayed.
Next, run each chunk of data through the FFT. Store the FFT values in a spectrogram array and use matplotlib's imshow to display the array. You will probably have to rotate the array in some fashion when you display the spectrogram.
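A rough sketch of that per-chunk FFT idea, assuming data is a mono int16 array from scipy.io.wavfile.read and reusing the 44100/25 chunk size from the question:
import numpy as np
import matplotlib.pyplot as plt

spf = 44100 // 25                     # samples per 25 fps animation frame
n_chunks = len(data) // spf           # drop any partial chunk at the end
columns = []
for i in range(n_chunks):
    chunk = data[i * spf:(i + 1) * spf]
    columns.append(np.abs(np.fft.rfft(chunk)) / spf)  # magnitude per freq bin

spectrogram = np.array(columns).T     # rows = frequency bins, cols = frames
plt.imshow(spectrogram, origin='lower', aspect='auto')
plt.xlabel('animation frame')
plt.ylabel('frequency bin')
plt.show()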
From personal experience, be careful with python threads. Threading works for I/O, but for calculations a thread can dominate the whole application and slow everything down. Also, GUI elements (like plotting) don't really work in threads. Use matplotlib's animation tools for the plotting.
I'm making a demonstration of different types of regression in numpy with ipython, and so far I've been able to plot a simple linear regression without difficulty. Now, when I make a quadratic fit to my data and plot it, I don't get a quadratic curve but instead many lines. Here's the code I'm running that generates the problem:
import numpy
from numpy import random
from matplotlib import pyplot as plt

# Generate random data
X = random.random((100, 1))
epsilon = random.randn(100, 1)
f = 3 + 5 * X + epsilon

# least squares system: design matrix with columns [1, X, X^2]
A = numpy.array([numpy.ones((100, 1)), X, X**2])
A = numpy.squeeze(A)
A = A.T

# solve the normal equations A^T A beta = A^T f
quadfit = numpy.linalg.solve(numpy.dot(A.transpose(), A), numpy.dot(A.transpose(), f))

# plot the data and the fitted parabola
qdbeta0, qdbeta1, qdbeta2 = quadfit[0][0], quadfit[1][0], quadfit[2][0]
plt.scatter(X, f)
plt.plot(X, qdbeta0 + qdbeta1 * X + qdbeta2 * X**2)
plt.show()
What I get is this picture (zoomed in to show the problem):
You can see that rather than having a single parabola that fits the data, I have a huge number of individual lines doing something that I'm not sure of. Any help would be greatly appreciated.
Your X is ordered randomly, so it's not a good set of x values to use to draw one continuous line, because it has to double back on itself. You could sort it, I guess, but TBH I'd just make a new array of x coordinates and use those:
plt.scatter(X, f)
x = numpy.linspace(0, 1, 1000)   # evenly spaced, ordered x values for the curve
plt.plot(x, qdbeta0 + qdbeta1 * x + qdbeta2 * x**2)
gives me a single smooth parabola through the scattered data.
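If you'd rather draw the curve through the original sample locations instead, sorting also works; a small sketch of that alternative:
# sort the sample x values so the fitted curve is drawn left to right
# instead of doubling back on itself
order = numpy.argsort(X, axis=0).ravel()
X_sorted = X[order]
plt.plot(X_sorted, qdbeta0 + qdbeta1 * X_sorted + qdbeta2 * X_sorted**2)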
I have a question about how matplotlib works. Basically, I want to flip x and y in my image, just as this person asked.
However, I do not want to resort to transposing the array before sending it.
I assume this would result in a loss of performance. Here is my reasoning. My guess is that matplotlib copies an image from numpy by iterating over the rapidly varying index in both (where the rapidly varying index is the one that accesses contiguous elements in physical memory). If I transpose the array, one of two things will probably happen:
What matplotlib thinks is the rapidly varying index in memory is no longer the true one, so it will no longer access the numpy array in a memory-contiguous fashion, resulting in slower readout (i.e. numpy just changes its "view" into the matrix).
The numpy array is actually copied into a new array with the rapidly varying index transposed. Reading into matplotlib is then fast, at the cost of copying a new matrix into memory.
Neither case is ideal, as I would like to achieve reasonably high refresh rates. The arrays are loaded from images already stored on a hard drive in this layout.
If my assumption is true, is there some way to have matplotlib change its rapidly varying index for the case of an image? This would be a very useful feature, I believe.
I hope I have communicated my line of reasoning. I want to make sure every read and write in the chain is memory contiguous, from the hard drive to the numpy array to matplotlib. I believe that simply finding (or the library offering, in the future) an option for matplotlib to reverse its ordering would save time.
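For what it's worth, numpy makes the view-versus-copy distinction easy to check directly; a small sketch with a made-up array:
import numpy as np

img = np.random.random((1000, 1000))   # stand-in for an image loaded from disk
print(img.flags['C_CONTIGUOUS'])       # True: rows are contiguous in memory
img_t = img.T                          # a view; no data is moved
print(img_t.flags['C_CONTIGUOUS'])     # False: only the strides were swapped
img_c = np.ascontiguousarray(img_t)    # forces an actual copy in the new layout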
I am definitely open to other solutions. If there's something obvious I have missed, I'd like to hear it thanks :-)
Edit: Thanks for the comments. I believe you are right that I should have done some testing beforehand. I have an interesting result: the transpose seems to be faster than the non-transpose. Since the numpy transpose is simply a view into the array, perhaps matplotlib uses this to its advantage and cleverly decides whether or not to transpose at the last minute? This result suggests so, that or my code has a flaw (which is very possible). Seriously, if that's true, good friggin' job matplotlib developers! :-)
Here are the figures:
Figure 1: Transposed image (blue) and non-transposed image (green). Each data point is the time taken to draw 10 frames in sequence; this is repeated 100 times. I did not use the machine for the duration of these experiments.
Figure 2: Zoom of the same figure
The code (it's crude; I was learning and had limited time to spend on this. I ran the first animation for 1000 frames, closed the window, commented it out, uncommented the second animation, ran again for 1000 frames, and then plotted the two):
from matplotlib.pyplot import *
from matplotlib import animation
from time import time
import numpy as np

data = np.random.random((10, 1000, 1000))
ion()
fig = figure(0)
im = imshow(data[0, :, :])
data[0, :, :] *= 0    # zero out the first frame

time1 = time()
time0 = time()
timingarraynoT = np.zeros(100)
timingarrayT = np.zeros(100)

def animatenoT(i):
    global time0, time1, timingarraynoT
    if i % 10 == 0:
        time0 = time()
    if i % 10 == 9:
        time1 = time()
        # print("Time for 10 frames: {}".format(time1 - time0))
        timingarraynoT[i // 10] = time1 - time0   # integer index for python 3
    im.set_data(data[i % 10, :, :])
    if i == 1000:
        return -1
    return im

def animateT(i):
    global time0, time1, timingarrayT
    if i % 10 == 0:
        time0 = time()
    if i % 10 == 9:
        time1 = time()
        # print("Time for 10 frames: {}".format(time1 - time0))
        timingarrayT[i // 10] = time1 - time0     # integer index for python 3
    im.set_data(data[i % 10, :, :].T)
    if i == 1000:
        return -1
    return im

#anim = animation.FuncAnimation(fig, animateT, interval=0)
anim = animation.FuncAnimation(fig, animatenoT, interval=0)