Python memory issues with matplotlib.animation

I am creating a simulation of diffusion in a complex system. It takes arbitrary images as a substrate, allows arbitrary creation of diffusion fronts, and supports both surface reactions and deposition of new material onto the starting substrates. I'm quite proud of the results so far; you can check out the movies I made with it for CVD and SFD deposition on particles:
CVD Movie
SFD Movie
Unfortunately I cannot generate more than 50 or so frames because it runs out of memory. I have tried clearing things as much as possible throughout the simulation, but I think I must be missing something. To summarize:
I start out by creating an empty list
ims = []
Then, each time my "simulation" steps, if frame number % frame "rate" == 0, it generates a frame which is:
displayed using plt.ion() through plt.draw(), and
appended via ims.append() to the list of rendered animation frames.
Before each frame render, I run plt.clf() to prevent the plot from just having increasing numbers of overlaid plots.
Without the ims.append() step, the code consumes between 140 and 170 MB of RAM. With that step, 50 frames consume nearly 1.4 GB. Obviously, this is very limiting: 50 frames is nice, but I'd really like at least 350. That may be impossible with this route, since these numbers put the memory usage of the ims list alone at roughly 24 MB per frame.
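That figure is roughly what the raw pixel data predicts: each cmap(norm(frame)) call in the code below produces an 800x800x4 float64 RGBA array, and every AxesImage appended to ims keeps its array alive. A quick back-of-the-envelope check:

nx, ny = 800, 800
rgba_bytes = nx * ny * 4 * 8   # 4 RGBA channels x 8 bytes per float64
print(rgba_bytes / 1e6)        # ~20.5 MB per stored frame, before plot overhead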
A workaround is to render each frame to an .svg or .png file inside the loop and save it to disk. I find that this rendering process is very CPU intensive, so doing it that often makes the code quite slow. Additionally, creating 350 PNG files and then converting them manually into a video is pretty messy, so I'd love to somehow keep it all inside the program itself.
Does anyone have an idea for how to decrease the memory usage of this example code without resorting to rendering and writing each frame to disk?
In this toy code, I just used a random number generator to populate the two datasets as described in the comments to speed things up.
The code:
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib import cm, colors
import numpy as np
import time
# Defines the number of frames of animation to render.
outputframes = 50
# Defines the size of the canned simulation.
nx = 800
ny = 800
# Defines the number of actual simulation timesteps.
nt = 100
# This gets the number of timesteps between output frames.
framestep = 2
# For reporting.
framenum = 0
# Creates two layers per material: one for the stepped simulated
# state and one for the prior state. There are two independently
# changing materials, each of which has half the simulation space
# containing random values here, plus 10% overlap in the middle.
p1 = np.zeros((nx, ny, 2))
p1[360:800, :, 0] = np.random.rand(440, ny)
p2 = np.zeros((nx, ny, 2))
p2[0:440, :, 0] = np.random.rand(440, ny)
# Animation colormap setup.
norm = colors.Normalize(vmin=0, vmax=1)
# And sets up two corresponding colormaps, one blue and one red
# for p1 and p2 respectively (the goal is overlaid plots).
cmap1 = cm.Blues
cmap2 = cm.Reds
# Sets up an empty list to hold animation frames.
ims = []
# Uses ion() so that draw() shows the figure without blocking.
plt.ion()
fig = plt.figure()
plt.draw()
# Run the simulation.
for t in range(nt):
    # If t is an even multiple of framestep, render a new frame.
    if t % framestep == 0:
        print('Frame ' + str(framenum))
        framenum = framenum + 1
        plt.clf()
        # In here I did a bunch of stuff to get special colors into
        # the colormap so that substrates, surfaces, and other
        # features are clearly identified. I am creating new frame1
        # and frame2 objects because in reality I will be doing
        # log-plot math to convert to the graphic frame.
        frame1 = p1[:, :, 0]
        # This part is necessary in my real program because I
        # manually modify the colormap after it's created to include
        # the special colors mentioned above.
        frame1_colors = cmap1(norm(frame1))
        # This is my (not quite right) attempt to do overlaid plots.
        plt.imshow(frame1_colors, alpha=0.5)
        # Do the same for the second set of data.
        frame2 = p2[:, :, 0]
        frame2_colors = cmap2(norm(frame2))
        # The goal here is to take the combined output, make it an
        # animation frame, and append it to ims, the frame list.
        # This is where I start to run into problems. Without the
        # ims.append, the program has constant memory usage. With
        # it, I am using 1340MB by the 50th frame. This is the
        # biggest issue. Even throwing away all other simulation
        # data, this list of frames is *enormous*.
        # With the ims.append line replaced by the plt.imshow line
        # alone, memory usage is much smaller, ranging from
        # 140-170MB depending on execution point, but relatively
        # constant.
        ims.append([plt.imshow(frame2_colors, alpha=0.5)])
        # plt.imshow(frame2_colors, alpha=0.5)
        # Then draw the updating animation to show progress using
        # draw(). As best I can tell, this basically works, in that
        # the plot displays with all components.
        plt.draw()
    # A timer so that this doesn't go too fast, since the actual
    # calculation is very complex.
    time.sleep(0.01)
    # Proxy for the actual calculation: just overwrite the
    # overlapping ranges with new random data to show some change
    # visually.
    p1[360:800, :, 1] = np.random.rand(440, ny)
    p2[0:440, :, 1] = np.random.rand(440, ny)
    # In this version it is trivial, but in the real simulation
    # p1[:,:,1] does not end up equal to p1[:,:,0], so the following
    # copies the new values over the old ones for the next timestep,
    # which keeps the p1 and p2 arrays from growing without bound.
    p1[:, :, 0] = p1[:, :, 1]
    p2[:, :, 0] = p2[:, :, 1]
# This is just a repeat for the final frame.
plt.clf()
frame1 = p1[:, :, 0]
frame1_colors = cmap1(norm(frame1))
plt.imshow(frame1_colors, alpha=0.5)
frame2 = p2[:, :, 0]
frame2_colors = cmap2(norm(frame2))
# As above, the ims.append uses tons of memory; the imshow alone
# works well.
ims.append([plt.imshow(frame2_colors, alpha=0.5)])
# plt.imshow(frame2_colors, alpha=0.5)
plt.draw()
# Use a name other than the animation module's alias so the module
# isn't shadowed by the instance.
ani = animation.ArtistAnimation(fig, ims, blit=True)
ani.save('test.mp4', fps=10, writer='avconv')

In the end, I decided the only reasonable way to do this was by rendering each frame I needed to disk as a .png and then generating a movie from the images afterwards with avconv.
Thanks for all the suggestions, but it looks like this is just a limitation due to the RAM usage of uncompressed images.
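For later readers: matplotlib's movie-writer classes can stream each frame to the encoder as it is drawn, so frames never accumulate in RAM. A minimal sketch of that approach, assuming ffmpeg is installed (there is an analogous AVConvWriter for avconv), with a small random array standing in for the simulation state:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm, colors
from matplotlib.animation import FFMpegWriter

norm = colors.Normalize(vmin=0, vmax=1)
writer = FFMpegWriter(fps=10)
fig = plt.figure()
with writer.saving(fig, 'test.mp4', dpi=100):
    for t in range(350):                      # as many frames as you like
        frame = np.random.rand(200, 200)      # stand-in for one simulation step
        plt.clf()
        plt.imshow(cm.Blues(norm(frame)), alpha=0.5)
        writer.grab_frame()                   # encoded immediately, not stored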

Related

Drawing Dynamic Spectrogram in real time

I essentially have three data streams in three separate numpy arrays, and I want to plot time along the x-axis, frequency along the y-axis, with magnitude represented by color. But this is all happening in real time, so the spectrogram should look like it is drawing itself as the data updates.
I have simplified the code to show that I am running a loop that takes at most 10 ms per iteration, during which I get new data from functions.
import numpy as np
import random
from time import sleep

def new_val_freq():
    return random.randint(0, 22)

def new_val_mag():
    return random.randint(100, 220)

x_axis_time = np.array([1])
y_axis_frequency = np.array([10])
z_axis_magnitude = np.array([100])
t = 1
while True:
    x_axis_time = np.append(x_axis_time, [t + 1])
    t += 1
    y_axis_frequency = np.append(y_axis_frequency, [new_val_freq()])
    z_axis_magnitude = np.append(z_axis_magnitude, [new_val_mag()])
    sleep(0.01)
    # Trying to figure out how to create/update the spectrogram plot
    # with the above additional data in real time, without lag.
Ideally I would like this to be as fast as possible, rather than having to redraw the whole spectrogram. It seems matplotlib is not good at plotting dynamic spectrograms, and I have not come across any dynamic spectrogram examples, so I was wondering how I could do this?
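One way to avoid redrawing the whole plot is to keep the spectrogram in a fixed-size rolling buffer and push each new column into an existing image with set_data. A minimal sketch along those lines, reusing the question's random stand-in values (the buffer width and pause interval are arbitrary choices):

import numpy as np
import matplotlib.pyplot as plt

n_freq, n_time = 23, 200            # 23 bins matches randint(0, 22) above
buf = np.zeros((n_freq, n_time))    # rolling spectrogram buffer

plt.ion()
fig, ax = plt.subplots()
im = ax.imshow(buf, origin='lower', aspect='auto', vmin=0, vmax=220)
ax.set_xlabel('time')
ax.set_ylabel('frequency')

for t in range(1000):
    freq = np.random.randint(0, 23)      # stand-in for new_val_freq()
    mag = np.random.randint(100, 221)    # stand-in for new_val_mag()
    buf = np.roll(buf, -1, axis=1)       # drop the oldest column
    buf[:, -1] = 0
    buf[freq, -1] = mag                  # write the newest sample
    im.set_data(buf)                     # update pixels only; no new artists
    plt.pause(0.01)                      # gives the GUI time to repaint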

Unexpected FFT Results with Python

I'm analyzing what is essentially a respiratory waveform, constructed in 3 different shapes (the data originates from MRI, so multiple echo times were used; see here if you'd like some quick background). Here are a couple of segments of two of the plotted waveforms for some context:
For each waveform, I'm trying to perform a DFT in order to discover the dominant frequency or frequencies of respiration.
My issue is that when I plot the DFTs that I perform, I get one of two things, dependent on the FFT library that I use. Furthermore, neither of them is representative of what I am expecting. I understand that data doesn't always look the way we want, but I clearly have waveforms present in my data, so I would expect a discrete Fourier transform to produce a frequency peak somewhere reasonable. For reference here, I would expect about 80 to 130 Hz.
My data is stored in a pandas data frame, with each echo time's data in a separate series. I'm applying the FFT function of choice (see the code below) and receiving different results for each of them.
SciPy (fftpack)
import pandas as pd
import scipy.fftpack
# temporary copy to maintain data structure
lead_pts_fft_df = lead_pts_df.copy()
# apply a discrete fast Fourier transform to each data series in the data frame
lead_pts_fft_df.magnitude = lead_pts_df.magnitude.apply(scipy.fftpack.fft)
lead_pts_fft_df
NumPy:
import pandas as pd
import numpy as np
# temporary copy to maintain data structure
lead_pts_fft_df = lead_pts_df.copy()
# apply a discrete fast Fourier transform to each data series in the data frame
lead_pts_fft_df.magnitude = lead_pts_df.magnitude.apply(np.fft.fft)
lead_pts_fft_df
The rest of the relevant code:
import matplotlib.pyplot as plt

ECHO_TIMES = [0.080, 0.200, 0.400] # milliseconds
f_s = 1 / 0.006 # 0.006 = time between samples
freq = np.linspace(0, 29556, 29556) * (f_s / 29556) # (29556 = length of data)
# generate subplots
fig, axes = plt.subplots(3, 1, figsize=(18, 16))
for idx in range(len(ECHO_TIMES)):
    axes[idx].plot(freq, lead_pts_fft_df.magnitude[idx].real,  # real part
                   freq, lead_pts_fft_df.magnitude[idx].imag)  # imaginary part
    axes[idx].legend()  # apply legend to each set of axes
# show the plot
plt.show()
Post-DFT (SciPy fftpack):
Post-DFT (NumPy)
Here is a link to the dataset (on Dropbox) used to create these plots, and here is a link to the Jupyter Notebook.
EDIT:
I used the posted advice and took the magnitude (absolute value) of the data, and also plotted with a logarithmic y-axis. The new results are posted below. It appears that I have some wraparound in my signal. Am I not using the correct frequency scale? The updated code and plots are below.
# generate subplots
fig, axes = plt.subplots(3, 1, figsize=(18, 16))
for idx in range(len(ECHO_TIMES)):
    axes[idx].plot(freq[1:], np.log(np.abs(lead_pts_fft_df.magnitude[idx][1:])),
                   label=lead_pts_df.index[idx],  # apply labels
                   color=plot_colors[idx])        # colors
    axes[idx].legend()  # apply legend to each set of axes
# show the plot
plt.show()
I've figured out my issues.
Cris Luengo was very helpful with this link, which helped me determine how to correctly scale my frequency bins and plot the DFT appropriately.
ADDITIONALLY: I had forgotten to take into account the offset present in the signal. Not only does it cause issues with the huge peak at 0 Hz in the DFT, but it is also responsible for most of the noise in the transformed signal. I made use of scipy.signal.detrend() to eliminate this and got a very appropriate looking DFT.
# import DFT and signal packages from SciPy
import scipy.fftpack
import scipy.signal
# temporary copy to maintain data structure; original data frame is NOT changed due to ".copy()"
lead_pts_fft_df = lead_pts_df.copy()
# apply a discrete fast Fourier transform to each data series in the data frame AND detrend the signal
lead_pts_fft_df.magnitude = lead_pts_fft_df.magnitude.apply(scipy.signal.detrend)
lead_pts_fft_df.magnitude = lead_pts_fft_df.magnitude.apply(np.fft.fft)
lead_pts_fft_df
Arrange frequency bins accordingly:
num_projections = 29556
N = num_projections
T = (6 * 3) / 1000  # 6*3 b/c of the nature of my signal: 1 pt for each waveform collected every third acquisition
xf = np.linspace(0.0, 1.0 / (2.0 * T), num_projections // 2)
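As a sanity check on that scaling: with T = 0.018 s between retained samples, the last bin of xf sits at the Nyquist frequency, 1/(2T):

T = (6 * 3) / 1000           # 0.018 s between retained samples
f_nyquist = 1.0 / (2.0 * T)  # highest frequency the bins can represent
print(f_nyquist)             # ~27.78 Hz, the upper end of xf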
Then plot:
# generate subplots
fig, axes = plt.subplots(3, 1, figsize=(18, 16))
for idx in range(len(ECHO_TIMES)):
    axes[idx].plot(xf, 2.0 / num_projections * np.abs(lead_pts_fft_df.magnitude[idx][:num_projections // 2]),
                   label=lead_pts_df.index[idx],  # apply labels
                   color=plot_colors[idx])        # colors
    axes[idx].legend()  # apply legend to each set of axes
# show the plot
plt.show()

Matplotlib: How to increase colormap/linewidth quality in streamplot?

I have the following code to generate a streamplot based on an interp1d interpolation of discrete data:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from scipy.interpolate import interp1d

# CSV import
a1array = pd.read_csv('a1.csv', sep=',', header=None).values
rv = a1array[:, 0]
a1v = a1array[:, 1]
da1vM = a1array[:, 2]
a1 = interp1d(rv, a1v)
da1M = interp1d(rv, da1vM)

# Bx and By vector components
def bx(x, y):
    rad = np.sqrt(x**2 + y**2)
    if rad == 0:
        return 0
    else:
        return x*y/rad**4*(-2*a1(rad) + rad*da1M(rad))/2.87445E-19*1E-12

def by(x, y):
    rad = np.sqrt(x**2 + y**2)
    if rad == 0:
        return 4.02995937E-04/2.87445E-19*1E-12
    else:
        return -1/rad**4*(2*a1(rad)*y**2 + rad*da1M(rad)*x**2)/2.87445E-19*1E-12

Bx = np.vectorize(bx, otypes=[float])
By = np.vectorize(by, otypes=[float])

# Grid
num_steps = 11
Y, X = np.mgrid[-25:25:(num_steps * 1j), 0:25:(num_steps * 1j)]
Vx = Bx(X, Y)
Vy = By(X, Y)
speed = np.sqrt(Bx(X, Y)**2 + By(X, Y)**2)
lw = 2*speed / speed.max() + .5

# Star radius
circle3 = plt.Circle((0, 0), 16.3473140, color='black', fill=False)

# Plot
fig0, ax0 = plt.subplots(num=None, figsize=(11, 9), dpi=80, facecolor='w', edgecolor='k')
strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw, density=[1, 2], cmap=plt.cm.jet)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw, density=[1, 2], cmap=plt.cm.jet)
ax0.add_artist(circle3)
cbar = fig0.colorbar(strm.lines, fraction=0.046, pad=0.04)
cbar.set_label('B[GT]', rotation=270, labelpad=8)
cbar.set_clim(0, 1500)
cbar.draw_all()
ax0.set_ylim([-25, 25])
ax0.set_xlim([-25, 25])
ax0.set_xlabel('x [km]')
ax0.set_ylabel('z [km]')
ax0.set_aspect(1)
plt.title('polyEos(0.05,2), M/R=0.2, B_r(0,0)=1402GT', y=1.01)
plt.savefig('MR02Br1402.pdf', bbox_inches=0)
plt.show()
I uploaded the csv-file here if you want to try some stuff https://www.dropbox.com/s/4t7jixpglt0mkl5/a1.csv?dl=0.
Which generates the following plot:
I am actually pretty happy with the result, except for one small detail which I cannot figure out: if one looks closely, the linewidth and the color change in rather big steps, which is especially visible at the center:
Is there some way/option to decrease the size of these steps, and especially to make the colormap smoother?
I had another look at this and it wasn't as painful as I thought it might be.
Add:
subdiv = 15
points = np.arange(len(t[0]))
interp_points = np.linspace(0, len(t[0]), subdiv * len(t[0]))
tgx = np.interp(interp_points, points, tgx)
tgy = np.interp(interp_points, points, tgy)
tx = np.interp(interp_points, points, tx)
ty = np.interp(interp_points, points, ty)
after ty is initialised in the trajectories loop (line 164 in my version). Just substitute whatever number of subdivisions you want for subdiv = 15. All the segments in the streamplot will be subdivided into as many equally sized segments as you choose. The colors and linewidths for each will still be properly obtained from interpolating the data.
It's not as neat as changing the integration step, but it does plot exactly the same trajectories.
If you don't mind changing the streamplot code (matplotlib/streamplot.py), you could simply decrease the size of the integration steps. Inside _integrate_rk12() the maximum step size is defined as:
maxds = min(1. / dmap.mask.nx, 1. / dmap.mask.ny, 0.1)
If you decrease that, let's say:
maxds = 0.1 * min(1. / dmap.mask.nx, 1. / dmap.mask.ny, 0.1)
I get this result (left = new, right = original):
Of course, this makes the code about 10x slower, and I haven't thoroughly tested it, but it seems to work (as a quick hack) for this example.
About the density (mentioned in the comments): I personally don't see a problem with it. It's not as if we are trying to visualize the actual path line of (e.g.) a particle; the density is already some arbitrary (controllable) choice, and yes, it is influenced by choices in the integration, but I don't think it changes the (not quite sure what to call this) visualization we're after.
The results (density) do seem to converge a bit for decreasing step sizes; this shows the results for decreasing the integration step by a factor of {1, 5, 10, 20}:
You could increase the density parameter to get smoother color transitions, but then use the start_points parameter to reduce your overall clutter. The start_points parameter allows you to explicitly choose the location and number of trajectories to draw. It overrides the default, which is to plot as many as possible to fill up the entire plot.
But first you need one little fix to your existing code: according to the streamplot documentation, the X and Y args should be 1d arrays, not 2d arrays as produced by mgrid. It looks like passing in 2d arrays is supported, but it is undocumented and currently not compatible with the start_points parameter.
Here is how I revised your X, Y, Vx, Vy and speed:
# Grid
num_steps = 11
Y = np.linspace(-25, 25, num_steps)
X = np.linspace(0, 25, num_steps)
Ygrid, Xgrid = np.mgrid[-25:25:(num_steps * 1j), 0:25:(num_steps * 1j)]
Vx = Bx(Xgrid, Ygrid)
Vy = By(Xgrid, Ygrid)
speed = np.hypot(Vx, Vy)
lw = 3*speed / speed.max()+.5
Now you can explicitly set your start_points parameter. The start points are actually "seed" points: any given stream trajectory will grow in both directions from the seed point. So if you put a seed point right in the center of the example plot, it will grow both up and down to produce a vertical stream line.
Besides controlling the number of trajectories, using the start_points parameter also controls the order in which they are drawn. This is important when considering how trajectories terminate: they will either hit the border of the plot, or they will terminate if they hit a cell of the plot that already has a trajectory. That means your first seeds will tend to grow longer and your later seeds will tend to get limited by previous ones. Some of the later seeds may not grow at all. The default seeding strategy is to plant a seed at every cell, which is pretty obnoxious if you have a high density. It also orders them by planting seeds first along the plot borders and spiraling inward.
This may not be ideal for your particular case. I found that a very simple strategy for your example was to just plant a few seeds between those two points of zero velocity, at y=0 and x from -10 to 10. Those trajectories grow to their fullest and fill in most of the plot without clutter.
Here is how I create the seed points and set the density:
num_streams = 8
stptsy = np.zeros((num_streams,), float)
stptsx_left = np.linspace(0, -10.0, num_streams)
stptsx_right = np.linspace(0, 10.0, num_streams)
stpts_left = np.column_stack((stptsx_left, stptsy))
stpts_right = np.column_stack((stptsx_right, stptsy))
density = (3,6)
And here is how I modify the calls to streamplot:
strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw, density=density,
cmap=plt.cm.jet, start_points=stpts_right)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw,density=density,
cmap=plt.cm.jet, start_points=stpts_left)
The result basically looks like the original, but with smoother color transitions and only 15 stream lines. (sorry no reputation to inline the image)
I think your best bet is to use a colormap other than jet, perhaps cmap=plt.cm.plasma.
Weird-looking graphs obscure understanding of the data.
For data which is ordered in some way, like by the speed vector magnitude in this case, uniform sequential colormaps will always look smoother. The brightness of sequential maps varies monotonically over the color range, removing large perceived color changes over small ranges of data. The uniform maps vary linearly over their whole range, which makes the main features in the data much more visually apparent.
(figure: sequential colormap profiles; source: matplotlib.org)
The jet colormap spans a very wide variety of brightnesses over its range, with an inflection in the middle. This is responsible for the particularly egregious red-to-blue transition around the center region of your graph.
(figure: jet colormap profile; source: matplotlib.org)
The matplotlib user guide on choosing a color map has a few recommendations about selecting an appropriate map for a given data set.
I don't think there is much else you can do to improve this by just changing parameters in your plot.
The streamplot divides the graph into cells, 30*density[x, y] in each direction, and at most one streamline goes through each cell. The only setting which directly increases the number of segments is the density of the grid matplotlib uses. Increasing the y density will decrease the segment length, so that the middle region may transition more smoothly. The cost of this is an inevitable cluttering of the graph in regions where the streamlines are horizontal.
You could also try to normalise the speeds differently, so that the change is artificially lowered near the center. At the end of the day, though, that seems to defeat the point of the graph. The graph should provide a useful view of the data for a human to understand. Using a colormap with strange inflections, or warping the data so that it looks nicer, removes some of the understanding which could otherwise be obtained from looking at the graph.
A more detailed discussion about the issues with colormaps like jet can be found on this blog.
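Concretely, that swap only touches the cmap argument of the two streamplot calls from the question (reusing its X, Y, Vx, Vy, speed, and lw); for example, with the perceptually uniform viridis:

strm = ax0.streamplot(X, Y, Vx, Vy, color=speed, linewidth=lw,
                      density=[1, 2], cmap=plt.cm.viridis)
ax0.streamplot(-X, Y, -Vx, Vy, color=speed, linewidth=lw,
               density=[1, 2], cmap=plt.cm.viridis)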

matplotlib flip x and y

I had a question on how matplotlib worked. Basically, I want to flip x and y in my image just as this person asked.
However, I do not want to resort to transposing the array before sending it.
I assume this would result in a loss of performance. Here is my reasoning: my guess is that matplotlib copies an image from numpy by iterating over the rapidly varying index in both (the rapidly varying index being the one that accesses contiguous elements in physical memory). If I transpose the array, one of two things will probably happen:
What matplotlib thinks is the rapidly varying index in memory is no longer the true one, so it will no longer access the numpy array in a memory-contiguous fashion, resulting in slower readout (i.e. numpy just changes its "view" into the matrix).
The numpy array is actually copied into a new array with the rapidly varying index transposed. The readout into matplotlib is fast, at the cost of copying a new matrix into memory.
Neither case is ideal for me, as I would like to achieve reasonably high refresh rates. The arrays are loaded from images that are already stored on the hard drive in this fashion.
If my assumption is true, is there some method to have matplotlib change its rapidly varying index for the case of an image? This would be a very useful feature I believe.
I hope to have communicated my line of reasoning. I want to make sure every read and write in the chain is memory contiguous, from the hard drive to the numpy array to matplotlib. I believe that simply finding (or offering in the future) an option for matplotlib to reverse its method of ordering would save time.
I am definitely open to other solutions. If there's something obvious I have missed, I'd like to hear it thanks :-)
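For what it's worth, numpy makes it cheap to check which of the two cases applies: .T is always a strided view (no copy), and a contiguous copy only happens if you explicitly ask for one. A quick check:

import numpy as np

data = np.random.random((1000, 1000))
view = data.T                        # transposing just builds a new view
print(view.base is data)             # True: same underlying buffer, no copy
print(data.flags['C_CONTIGUOUS'])    # True
print(view.flags['C_CONTIGUOUS'])    # False: rows are now strided
copy = np.ascontiguousarray(view)    # this is where a real copy would happen
print(copy.flags['C_CONTIGUOUS'])    # True again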
Edit: Thanks for the comments. I believe you are right in that I should have done some testing beforehand. I have an interesting result: the transpose seems to be faster than the non-transpose. Since the numpy transpose is simply a view into the array, it is possible matplotlib uses this to its advantage and cleverly decides whether or not to transpose at the last minute? These results suggest so; that, or my code has a flaw (which is very possible). Seriously, if that's true, good friggin' job matplotlib developers! :-)
Here are the figures:
Figure 1: Transposed image (blue) and non-transposed image (green). Each data point is the time taken to draw all 10 frames in sequence. This is repeated 100 times. I did not use the machine for the duration of these experiments.
Figure 2 : Zoom of same figure
The code (it's crude; I'm learning, and had limited time to spend on this). I ran the first animation for 1000 frames, closed the window, uncommented the second animation, commented out the first, ran again for 1000 frames, and then plotted the two:
from matplotlib.pyplot import *
from matplotlib import animation
from time import time
import numpy as np

data = np.random.random((10, 1000, 1000))
ion()
fig = figure(0)
im = imshow(data[0, :, :])
data[0, :, :] *= 0
time1 = time()
time0 = time()
timingarraynoT = np.zeros(100)
timingarrayT = np.zeros(100)

def animatenoT(i):
    global time0, time1, timingarraynoT
    if i % 10 == 0:
        time0 = time()
    if i % 10 == 9:
        time1 = time()
        # print("Time for 10 frames: {}".format(time1 - time0))
        timingarraynoT[i // 10] = time1 - time0
    im.set_data(data[i % 10, :, :])
    if i == 1000:
        return -1
    return im

def animateT(i):
    global time0, time1, timingarrayT
    if i % 10 == 0:
        time0 = time()
    if i % 10 == 9:
        time1 = time()
        # print("Time for 10 frames: {}".format(time1 - time0))
        timingarrayT[i // 10] = time1 - time0
    im.set_data(data[i % 10, :, :].T)
    if i == 1000:
        return -1
    return im

#anim = animation.FuncAnimation(fig, animateT, interval=0)
anim = animation.FuncAnimation(fig, animatenoT, interval=0)

How can I improve my paw detection?

After my previous question on finding toes within each paw, I started loading up other measurements to see how it would hold up. Unfortunately, I quickly ran into a problem with one of the preceding steps: recognizing the paws.
You see, my proof of concept basically took the maximal pressure of each sensor over time and would start looking at the sum of each row, until it finds one that != 0.0. Then it does the same for the columns, and cuts off as soon as it finds more than 2 rows that are zero again. It stores the minimal and maximal row and column values to some index.
As you can see in the figure, this works quite well in most cases. However, there are a lot of downsides to this approach (other than being very primitive):
Humans can have 'hollow feet' which means there are several empty rows within the footprint itself. Since I feared this could happen with (large) dogs too, I waited for at least 2 or 3 empty rows before cutting off the paw.
This creates a problem if another contact is made in a different column before several empty rows are reached, thus expanding the area. I figure I could compare the columns and, if they exceed a certain value, treat them as separate paws.
The problem gets worse when the dog is very small or walks at a higher pace. What happens is that the front paw's toes are still making contact, while the hind paw's toes just start to make contact within the same area as the front paw!
With my simple script, it won't be able to split these two, because it would have to determine which frames of that area belong to which paw, while currently I would only have to look at the maximal values over all frames.
Examples of where it starts going wrong:
So now I'm looking for a better way of recognizing and separating the paws (after which I'll get to the problem of deciding which paw it is!).
Update:
I've been tinkering to get Joe's (awesome!) answer implemented, but I'm having difficulties extracting the actual paw data from my files.
The coded_paws shows me all the different paws, when applied to the maximal pressure image (see above). However, the solution goes over each frame (to separate overlapping paws) and sets the four Rectangle attributes, such as coordinates or height/width.
I can't figure out how to take these attributes and store them in some variable that I can apply to the measurement data, since I need to know, for each paw, what its location is during which frames, and to couple this to which paw it is (front/hind, left/right).
So how can I use the Rectangle attributes to extract these values for each paw?
I have the measurements I used in the question setup in my public Dropbox folder (example 1, example 2, example 3). For anyone interested I also set up a blog to keep you up to date :-)
If you're just wanting (semi) contiguous regions, there's already an easy implementation in Python: SciPy's ndimage.morphology module. This is a fairly common image morphology operation.
Basically, you have 5 steps:
def find_paws(data, smooth_radius=5, threshold=0.0001):
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    thresh = data > threshold
    filled = sp.ndimage.morphology.binary_fill_holes(thresh)
    coded_paws, num_paws = sp.ndimage.label(filled)
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices
Blur the input data a bit to make sure the paws have a continuous footprint. (It would be more efficient to just use a larger kernel (the structure kwarg to the various scipy.ndimage.morphology functions) but this isn't quite working properly for some reason...)
Threshold the array so that you have a boolean array of places where the pressure is over some threshold value (i.e. thresh = data > value)
Fill any internal holes, so that you have cleaner regions (filled = sp.ndimage.morphology.binary_fill_holes(thresh))
Find the separate contiguous regions (coded_paws, num_paws = sp.ndimage.label(filled)). This returns an array with the regions coded by number: each region is a contiguous area of a unique integer, from 1 up to the number of paws, with zeros everywhere else.
Isolate the contiguous regions using data_slices = sp.ndimage.find_objects(coded_paws). This returns a list of tuples of slice objects, so you could get the region of the data for each paw with [data[x] for x in data_slices]. Instead, we'll draw a rectangle based on these slices, which takes slightly more work.
The two animations below show your "Overlapping Paws" and "Grouped Paws" example data. This method seems to be working perfectly. (And for whatever it's worth, this runs much more smoothly than the GIF images below on my machine, so the paw detection algorithm is fairly fast...)
Here's a full example (now with much more detailed explanations). The vast majority of this is reading the input and making an animation. The actual paw detection is only 5 lines of code.
import numpy as np
import scipy as sp
import scipy.ndimage
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

def animate(input_filename):
    """Detects paws and animates the position and raw data of each frame
    in the input file"""
    # With matplotlib, it's much, much faster to just update the properties
    # of a display object than it is to create a new one, so we'll just update
    # the data and position of the same objects throughout this animation...
    infile = paw_file(input_filename)
    # Since we're making an animation with matplotlib, we need
    # ion() instead of show()...
    plt.ion()
    fig = plt.figure()
    ax = fig.add_subplot(111)
    fig.suptitle(input_filename)
    # Make an image based on the first frame that we'll update later
    # (The first frame is never actually displayed)
    im = ax.imshow(next(infile)[1])
    # Make 4 rectangles that we can later move to the position of each paw
    rects = [Rectangle((0, 0), 1, 1, fc='none', ec='red') for i in range(4)]
    [ax.add_patch(rect) for rect in rects]
    title = ax.set_title('Time 0.0 ms')
    # Process and display each frame
    for time, frame in infile:
        paw_slices = find_paws(frame)
        # Hide any rectangles that might be visible
        [rect.set_visible(False) for rect in rects]
        # Set the position and size of a rectangle for each paw and display it
        for slice, rect in zip(paw_slices, rects):
            dy, dx = slice
            rect.set_xy((dx.start, dy.start))
            rect.set_width(dx.stop - dx.start + 1)
            rect.set_height(dy.stop - dy.start + 1)
            rect.set_visible(True)
        # Update the image data and title of the plot
        title.set_text('Time %0.2f ms' % time)
        im.set_data(frame)
        im.set_clim([frame.min(), frame.max()])
        fig.canvas.draw()

def find_paws(data, smooth_radius=5, threshold=0.0001):
    """Detects and isolates contiguous regions in the input array"""
    # Blur the input data a bit so the paws have a continuous footprint
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    # Threshold the blurred data (this needs to be a bit > 0 due to the blur)
    thresh = data > threshold
    # Fill any interior holes in the paws to get cleaner regions...
    filled = sp.ndimage.morphology.binary_fill_holes(thresh)
    # Label each contiguous paw
    coded_paws, num_paws = sp.ndimage.label(filled)
    # Isolate the extent of each paw
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices

def paw_file(filename):
    """Returns an iterator that yields the time and data in each frame
    The infile is an ascii file of timesteps formatted similar to this:

    Frame 0 (0.00 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0

    Frame 1 (0.53 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0
    ...
    """
    with open(filename) as infile:
        while True:
            try:
                time, data = read_frame(infile)
                yield time, data
            except StopIteration:
                break

def read_frame(infile):
    """Reads a frame from the infile."""
    frame_header = next(infile).strip().split()
    time = float(frame_header[-2][1:])
    data = []
    while True:
        line = next(infile).strip().split()
        if line == []:
            break
        data.append(line)
    return time, np.array(data, dtype=float)

if __name__ == '__main__':
    animate('Overlapping paws.bin')
    animate('Grouped up paws.bin')
    animate('Normal measurement.bin')
Update: As far as identifying which paw is in contact with the sensor at what times, the simplest solution is to just do the same analysis, but use all of the data at once. (i.e. stack the input into a 3D array, and work with it, instead of the individual time frames.) Because SciPy's ndimage functions are meant to work with n-dimensional arrays, we don't have to modify the original paw-finding function at all.
# This uses functions (and imports) in the previous code example!!
def paw_regions(infile):
    # Read in and stack all data together into a 3D array
    data, time = [], []
    for t, frame in paw_file(infile):
        time.append(t)
        data.append(frame)
    data = np.dstack(data)
    time = np.asarray(time)
    # Find the paw impacts
    data_slices = find_paws(data, smooth_radius=4)
    # Sort by time of initial paw impact... This way we can determine which
    # paws are which relative to the first paw with a simple modulo 4.
    # (Assuming a 4-legged dog, where all 4 paws contacted the sensor)
    data_slices.sort(key=lambda dat_slice: dat_slice[2].start)
    # Plot up a simple analysis
    fig = plt.figure()
    ax1 = fig.add_subplot(2, 1, 1)
    annotate_paw_prints(time, data, data_slices, ax=ax1)
    ax2 = fig.add_subplot(2, 1, 2)
    plot_paw_impacts(time, data_slices, ax=ax2)
    fig.suptitle(infile)

def plot_paw_impacts(time, data_slices, ax=None):
    if ax is None:
        ax = plt.gca()
    # Group impacts by paw...
    for i, dat_slice in enumerate(data_slices):
        dx, dy, dt = dat_slice
        paw = i % 4 + 1
        # Draw a bar over the time interval where each paw is in contact
        ax.barh(paw, width=time[dt].ptp(), height=0.2,
                left=time[dt].min(), align='center', color='red')
    ax.set_yticks(range(1, 5))
    ax.set_yticklabels(['Paw 1', 'Paw 2', 'Paw 3', 'Paw 4'])
    ax.set_xlabel('Time (ms) Since Beginning of Experiment')
    ax.yaxis.grid(True)
    ax.set_title('Periods of Paw Contact')

def annotate_paw_prints(time, data, data_slices, ax=None):
    if ax is None:
        ax = plt.gca()
    # Display all paw impacts (sum over time)
    ax.imshow(data.sum(axis=2).T)
    # Annotate each impact with which paw it is
    # (Relative to the first paw to hit the sensor)
    x, y = [], []
    for i, region in enumerate(data_slices):
        dx, dy, dz = region
        # Get x,y center of slice...
        x0 = 0.5 * (dx.start + dx.stop)
        y0 = 0.5 * (dy.start + dy.stop)
        x.append(x0); y.append(y0)
        # Annotate the paw impacts
        ax.annotate('Paw %i' % (i % 4 + 1), (x0, y0),
                    color='red', ha='center', va='bottom')
    # Plot line connecting paw impacts
    ax.plot(x, y, '-wo')
    ax.axis('image')
    ax.set_title('Order of Steps')
I'm no expert in image detection, and I don't know Python, but I'll give it a whack...
To detect individual paws, you should first select only everything with a pressure greater than some small threshold, very close to no pressure at all. Every pixel/point that is above this should be "marked." Then, every pixel adjacent to any "marked" pixel becomes marked, and this process is repeated a few times. This forms totally connected masses, so you have distinct objects. Then each "object" has minimum and maximum x and y values, so bounding boxes can be packed neatly around them.
Pseudocode:
(MARK) ALL PIXELS ABOVE (0.5)
(MARK) ALL PIXELS (ADJACENT) TO (MARK) PIXELS
REPEAT (STEP 2) (5) TIMES
SEPARATE EACH TOTALLY CONNECTED MASS INTO A SINGLE OBJECT
MARK THE EDGES OF EACH OBJECT, AND CUT APART TO FORM SLICES.
That should about do it.
Note: I say pixel, but this could be regions using an average of the pixels. Optimization is another issue...
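For readers who do know Python: the pseudocode above maps almost one-to-one onto SciPy's morphology tools, since the repeated neighbour-marking is a binary dilation. A rough sketch, with the 0.5 threshold and 5 iterations taken from the pseudocode and random data standing in for a pressure frame:

import numpy as np
import scipy.ndimage as ndi

pressure = np.random.rand(64, 64)                   # stand-in for a pressure frame
marked = pressure > 0.5                             # step 1: mark pixels above threshold
grown = ndi.binary_dilation(marked, iterations=5)   # steps 2-3: grow into neighbours, 5 times
labels, num_objects = ndi.label(grown)              # step 4: one integer per connected mass
slices = ndi.find_objects(labels)                   # step 5: bounding-box slices per object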
Sounds like you need to analyze a function (pressure over time) for each pixel and determine where the function turns (when it changes by more than some X in the other direction, it is considered a turn, to counter errors).
If you know at what frames it turns, you will know the frame where the pressure was hardest and the frame where it was least hard between the two paws. In theory, you would then know the two frames where the paws pressed hardest and can calculate an average of those intervals.
"after which I'll get to the problem of deciding which paw it is!"
This follows from the same approach as before: knowing when each paw applies the most pressure helps you decide.
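A minimal sketch of that turning-point idea; min_change plays the role of the X mentioned above, and both the function name and the threshold value are just illustrative:

import numpy as np

def turning_frames(pressure, min_change=0.1):
    # Ignore changes smaller than min_change to suppress sensor noise,
    # then report the frames where the remaining changes flip sign.
    diffs = np.diff(pressure)
    big = np.abs(diffs) > min_change
    idx = np.nonzero(big)[0]
    signs = np.sign(diffs[big])
    flips = np.nonzero(signs[1:] != signs[:-1])[0]
    return idx[flips + 1]

# A curve that rises and then falls turns exactly once, near its peak.
t = np.linspace(0, 1, 100)
print(turning_frames(np.sin(np.pi * t), min_change=0.001))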
