I am wondering what the best approach is for turning a large number of images into an animation in Python. Most examples I've found deal with actual video files or game libraries such as pygame, which seems overcomplicated for what I'm trying to do.
I have created a loop and would like the image to update on every pass through it. Is there possibly a method in Python to plot each new image and erase the previous one on each iteration?
sweeps_no = 10
for t in range(sweeps_no):
    i = np.random.randint(N)
    j = np.random.randint(N)
    arr = nearestneighbours(lat, N, i, j)
    energy = delta_E(lat[i,j], arr, J)
    if energy <= 0:
        matrix[i,j] *= matrix[i,j]
    elif np.exp(energy/T) >= np.random.random():
        matrix[i,j] *= -matrix[i,j]
    else:
        matrix[i,j] = matrix[i,j]
    print(t)
    res.append(switch)
    image = plt.imshow(lat)
    plt.show()
Also, I can't understand why the loop above doesn't produce 10 different images, given that the imshow call is inside the loop.
You can update a single figure using fig.canvas.draw() after your call to imshow(). It is important to include a pause, e.g. plt.pause(2), so that you can see the changes to your figure.
The following is a runnable example:
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure() # create the figure
for i in range(10):
    data = np.random.randn(25).reshape(5,5) # some fake data
    plt.imshow(data)
    fig.canvas.draw()
    plt.pause(2) # pause for 2 seconds
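If calling imshow on every pass feels heavy (each call adds a new image on top of the previous one), an alternative is to create the image once and update its pixel data in place with set_data. A minimal sketch of that pattern; the vmin/vmax are only there to fix the color scale for the fake data:

import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
im = ax.imshow(np.zeros((5, 5)), vmin=-3, vmax=3)  # create the image once, fix the color scale
for i in range(10):
    data = np.random.randn(5, 5)  # some fake data
    im.set_data(data)             # update the existing image instead of adding a new one
    fig.canvas.draw()
    plt.pause(2)                  # pause for 2 seconds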
Related
I have the following code, which reads in a set of (small) observations, runs a cross-correlation calculation on them, and then saves some plots:
import os

import matplotlib.pyplot as plt
import numpy as np
import astropy.units as u
from mpl_toolkits.axes_grid1 import make_axes_locatable
from sunkit_image.time_lag import cross_correlation, get_lags, max_cross_correlation, time_lag

time = np.linspace(0, 43200, num=int(43200/12))
timeu = time * u.s

for i in range(len(folders)): # loop over all dates
    os.chdir('/Volumes/LaCie/timelags/RARs/'+folders[i])
    print(folders[i])
    for j in range(len(pairs)): # iterates over every pair of data sets
        for x in range(36): # sets up a sliding 2-hour window that shifts 20 min at a time
            ch_a = np.load('dc'+pairs[j][0]+'.npy', allow_pickle=True)[()][100*x:(100*x)+600, :, :] # read in only necessary data (but entire file is only ~6 GB)
            ch_b = np.load('dc'+pairs[j][1]+'.npy', allow_pickle=True)[()][100*x:(100*x)+600, :, :] # read in only necessary data (but entire file is only ~6 GB)
            ctime = timeu[100*x:(100*x)+600] # sets up the correct time array
            print('ctime range:', ctime[0], ctime[-1], len(ctime))
            max_cc_map = max_cross_correlation(ch_a, ch_b, ctime)
            tl_map = time_lag(ch_a, ch_b, ctime)
            del ch_a # trying to deal with memory issue
            del ch_b # trying to deal with memory issue
            plt.close('all') # making sure I don't just create endless open plots

            fig = plt.figure()
            ax = fig.add_subplot()
            im = ax.imshow(np.flip(tl_map, axis=0), cmap="cubehelix", vmin=-6000, vmax=6000)
            cax = make_axes_locatable(ax).append_axes("right", size="5%", pad="10%")
            fig.colorbar(im, cax=cax, label=r"$\tau_{AB}$ [s]")
            plt.tight_layout()
            fig.savefig('timelag_'+pairs[j][0]+'_'+pairs[j][1]+'_'+str(x)+'.png', dpi=400)

            fig = plt.figure()
            ax = fig.add_subplot()
            im = ax.imshow(np.flip(max_cc_map, axis=0), cmap="plasma", vmin=0, vmax=1)
            cax = make_axes_locatable(ax).append_axes("right", size="5%", pad="10%")
            fig.colorbar(im, cax=cax, label=r"Max Cross-correlation")
            plt.tight_layout()
            fig.savefig('maxcc_'+pairs[j][0]+'_'+pairs[j][1]+'_'+str(x)+'.png', dpi=400)

            fig = plt.figure(figsize=(10, 6))
            values_tl, bins_tl, bars = plt.hist(np.ravel(np.asarray(tl_map)),
                                                bins=np.arange(-6000, 6000, 12000/50), log=True, label='Time Lags')
            values_masked, bins_masked, bars = plt.hist(np.ravel(np.asarray(tl_map)[np.where(np.asarray(max_cc_map) > 0.25)]),
                                                        bins=np.arange(-6000, 6000, 12000/50), log=True, label='Masked CC > 0.25')
            values_masked2, bins_masked2, bars = plt.hist(np.ravel(np.asarray(tl_map)[np.where(np.asarray(max_cc_map) > 0.5)]),
                                                          bins=np.arange(-6000, 6000, 12000/50), log=True, label='Masked CC > 0.5')
            values_masked3, bins_masked3, bars = plt.hist(np.ravel(np.asarray(tl_map)[np.where(np.asarray(max_cc_map) > 0.75)]),
                                                          bins=np.arange(-6000, 6000, 12000/50), log=True, label='Masked CC > 0.75')
            plt.ylabel('Pixel Occurrence')
            plt.legend()
            fig.savefig('hist_tl_cc_'+pairs[j][0]+'_'+pairs[j][1]+'_'+str(x)+'.png', dpi=400)
As noted in the comments, I've inserted a few lines to try to dump unnecessary data between iterations. I know a 3-deep for loop isn't the most efficient way to code, but the loops over the dates and channel pairs are very short -- almost all of the time and memory is spent in the innermost loop. The problem is that after a few minutes, the memory usage oscillates between 30-55 GB. My Mac becomes sluggish, and it's only at the beginning of the dataset. Is there something I'm missing here? Even if the entire files were being read in at the beginning instead of a subset, that's only ~12 GB of data, and the code would crash if I were reading in the whole thing (i.e., it's definitely only reading in part of the raw data). I tried a with statement but that didn't reduce memory usage. Any suggestions would be very welcome!
Per loop iteration you create three figures but never close them. After each fig.savefig(...), you should close the figure with plt.close(fig).
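For example, for the first of the three figures (the same one-line change goes after the other two fig.savefig calls):

fig = plt.figure()
ax = fig.add_subplot()
im = ax.imshow(np.flip(tl_map, axis=0), cmap="cubehelix", vmin=-6000, vmax=6000)
cax = make_axes_locatable(ax).append_axes("right", size="5%", pad="10%")
fig.colorbar(im, cax=cax, label=r"$\tau_{AB}$ [s]")
plt.tight_layout()
fig.savefig('timelag_'+pairs[j][0]+'_'+pairs[j][1]+'_'+str(x)+'.png', dpi=400)
plt.close(fig)  # release this figure's memory before creating the next one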
I wish to plot some data from an array with multiple columns, and would like each column to be a different line on the same scrolling graph. As there are many columns, I think it would make sense to plot them within a loop. I'd also like to plot a second scrolling graph with a single line.
I can get the single line graph to scroll correctly, but the graph containing the multiple lines over-plots from the updated array without clearing the previous lines.
How do I get the lines to clear within the for loop? I thought that setData might do the clearing. Do I need a pg.QtGui.QApplication.processEvents() call or something similar within the loop? I tried adding that call but it had no effect.
My code:
#Based on example from PyQtGraph documentation
import numpy as np
import pyqtgraph as pg

win = pg.GraphicsLayoutWidget(show=True)
win.setWindowTitle('pyqtgraph example: Scrolling Plots')

plot_1 = win.addPlot()
plot_2 = win.addPlot()

data1 = np.random.normal(size=(300))
curve1 = plot_1.plot(data1)
data_2d = np.random.normal(size=(3,300))

def update_plot():
    global data1, data_2d
    data1[:-1] = data1[1:]
    data1[-1] = np.random.normal()
    curve1.setData(data1)
    for idx, n in enumerate(data_2d):
        n[:-1] = n[1:]
        n[-1] = np.random.normal()
        curve2 = plot_2.plot(n, pen=(idx))
        curve2.setData(n)
        #pg.QtGui.QApplication.processEvents() #Does nothing

timer = pg.QtCore.QTimer()
timer.timeout.connect(update_plot)
timer.start(50)

if __name__ == '__main__':
    pg.exec()
You could clear the plot of all curves each time with .clear(), but that wouldn't be very performant. A better solution would be to keep all the curve objects around and call setData on them each time, like you're doing with the single-curve plot. E.g.
curves_2d = [plot_2.plot(pen=idx) for idx, n in enumerate(data_2d)]
# ... in update_plot
curves_2d[idx].setData(n)
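Put together with the code in the question, update_plot might look roughly like this (a sketch; the curves are created once, before the timer starts, and are then reused on every update):

# create one persistent curve per row of data_2d, outside update_plot
curves_2d = [plot_2.plot(pen=idx) for idx in range(data_2d.shape[0])]

def update_plot():
    global data1, data_2d
    data1[:-1] = data1[1:]
    data1[-1] = np.random.normal()
    curve1.setData(data1)
    for idx, n in enumerate(data_2d):
        n[:-1] = n[1:]
        n[-1] = np.random.normal()
        curves_2d[idx].setData(n)  # reuse the existing curve instead of adding a new one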
I have some code that creates a fluid simulation, and I changed it so that it keeps running and advancing the system indefinitely until the user closes the window. Next, I would like to add a pause/resume button and a few sliders for changing certain parameters (e.g. viscosity, inflow velocity) to make it more interactive. Every time the user moves one of the sliders, I want the animation to continue running with the new parameters rather than starting all over again. I have not been able to find any resources that answer this question for updating live animations. How would I go about doing this? Thank you.
Here is my code to show you what I have now. I'm trying to do this in pure matplotlib. This code works but doesn't have the sliders and buttons added yet.
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
from fluid import Fluid

RESOLUTION = 85, 160
VISCOSITY = 1e-5
INFLOW_RADIUS = RESOLUTION[0]*0.12
INFLOW_VELOCITY = RESOLUTION[1]*0.14

directions = (np.array([-0.2960987711, 0.9551573264]), np.array([0, 0]), np.array([0.2960987711, -0.9551573264]))
points = (np.array([RESOLUTION[0]*(0.225), 0]), np.array([-100, 0]), np.array([RESOLUTION[0]*(0.775), RESOLUTION[1]]))
channels = 'r', 'g', 'b'

fluid = Fluid(RESOLUTION, VISCOSITY, channels)
inflow_dye_field = np.zeros((fluid.size, len(channels))) # creating the array to be iterated over. This is the environment that the fluid will exist in
inflow_velocity_field = np.zeros_like(fluid.velocity_field) # creating the array for representing the initial velocity field which will tell the fluid how it needs to advance through successive iterations

def setup_flow():
    for i, p in enumerate(points):
        distance = np.linalg.norm(fluid.indices - p, axis=1)
        mask = distance <= INFLOW_RADIUS
        for d in range(2):
            inflow_velocity_field[..., d][mask] = directions[i][d] * INFLOW_VELOCITY
        inflow_dye_field[..., i][mask] = 1

setup_flow()

fig, ax = plt.subplots(figsize=(12, 9))
fig.canvas.set_window_title('Fluid Simulation')

def animate(args):
    fluid.advect_diffuse()
    fluid.velocity_field += inflow_velocity_field
    for i, k in enumerate(channels):
        fluid.quantities[k] += inflow_dye_field[..., i]
    fluid.project()
    rgb = np.dstack(tuple(fluid.quantities[c] for c in channels))
    rgb = rgb.reshape((*RESOLUTION, 3))
    rgb = (np.clip(rgb, 0, 1) * 255).astype('uint8')
    im.set_data(rgb)

rgb = np.dstack(tuple(fluid.quantities[c] for c in channels))
rgb = rgb.reshape((*RESOLUTION, 3))
rgb = (np.clip(rgb, 0, 1) * 255).astype('uint8')

ax.xaxis.set_visible(False); ax.yaxis.set_visible(False)
im = ax.imshow(rgb, animated=True)
ani = animation.FuncAnimation(fig, animate, interval=30)
plt.show()
Edit: I have tried using Tkinter to do this, and while it's not too difficult to embed a matplotlib animation in a Tkinter window, I couldn't get the Tkinter buttons and sliders to control the embedded plot. Rewriting the rest of my code in pure Tkinter would also work, but it would be an absolute nightmare, and I'm not confident enough with that library to do it in the time I have.
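One possible direction is to stay in pure matplotlib and use matplotlib.widgets: sliders and buttons can live on the same figure as the animation, and their callbacks simply mutate the module-level state that animate() reads on every frame. The sketch below is a rough, untested idea layered on the code above (it only rescales the stored inflow velocity field and toggles the animation timer); it would go just before plt.show():

from matplotlib.widgets import Slider, Button

# make room at the bottom of the existing figure for the controls
fig.subplots_adjust(bottom=0.25)
slider_ax = fig.add_axes([0.2, 0.10, 0.6, 0.03])
button_ax = fig.add_axes([0.2, 0.03, 0.2, 0.05])

velocity_slider = Slider(slider_ax, 'Inflow velocity',
                         0.1*INFLOW_VELOCITY, 2.0*INFLOW_VELOCITY, valinit=INFLOW_VELOCITY)
pause_button = Button(button_ax, 'Pause / Resume')
paused = False

def on_velocity_change(val):
    global INFLOW_VELOCITY
    # rescale the stored inflow field in place; animate() adds it every frame,
    # so the running simulation picks up the new value without restarting
    inflow_velocity_field[...] = inflow_velocity_field * (val / INFLOW_VELOCITY)
    INFLOW_VELOCITY = val

def on_pause_clicked(event):
    global paused
    paused = not paused
    if paused:
        ani.event_source.stop()   # stop the animation timer
    else:
        ani.event_source.start()  # resume it where it left off

velocity_slider.on_changed(on_velocity_change)
pause_button.on_clicked(on_pause_clicked)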
I am trying to use trackpy (henceforth tp) for particle tracking. I have a series of images of cell samples, and naturally there is some noise in the images. The first step in tracking is to choose, from the first image of the series, which clusters are cells and which are not. This is done in large part by tp.locate, but it is not perfect. I would like to be able to go through the 'candidates' chosen by tp.locate and indicate whether or not each one is a cell.
I created the function ID in order to do this. The goal is to go through the list of 'candidates' generated by tp.locate. I wanted to do this by displaying (via matplotlib's imshow function) each 'candidate' while simultaneously prompting a user input to indicate whether or not the 'candidate' is a cell.
The problem is that asking for user input seems to suppress the output of the imshow function. Each pass through the for loop asks about a different candidate, but the imshow window never actually shows it. I don't know how to work around this, and I feel I am very close to my end goal, so I would really appreciate input.
I do not need a GUI, but is there some way I could handle this using tkinter? I am not familiar with tkinter, but I have read some things which make me think I may be able to solve this problem with it.
import numpy as np
import matplotlib.pyplot as plt

def framer(f, image, windowsize=(60,100)):
    arr = image[:,:] #This makes a copy of image, so that when the buffers are
                     #added for the following process, the input image (image)
                     #is not modified.
    h = windowsize[0]
    w = windowsize[1]
    hbuffer = np.zeros((h/2, arr.shape[1]))
    arr = np.concatenate((hbuffer, arr, hbuffer), axis=0) #Buffer takes care of situations
                                                          #where the crop window extends
                                                          #beyond input image dimensions
    wbuffer = np.zeros((arr.shape[0], w/2))
    arr = np.concatenate((wbuffer, arr, wbuffer), axis=1)
    narr = np.zeros((f.shape[0], h, w)) #Initialize array of crop windows
    for i in range(f.shape[0]):
        crop_arr = arr[f.get_value(i,'y'):f.get_value(i,'y') + h, f.get_value(i,'x'):f.get_value(i,'x') + w] #THIS MIGHT BE BACKWARDS
        narr[i] = crop_arr
    return narr

def ID(f, image, windowsize=(60,100)):
    arr = framer(f, image, windowsize=(60,100))
    f_cop = f[:]
    reslist = np.zeros((arr.shape[0]))
    for i in range(arr.shape[0]):
        plt.imshow(arr[i], cmap='gray')
        plt.annotate('particle '+repr(i), xy=(f.get_value(i,'x'), f.get_value(i,'y')),
                     xytext=(f.get_value(i,'x')+20, f.get_value(i,'y')+20),
                     arrowprops=dict(facecolor='red', shrink=0.05), fontsize=12, color='r')
        res = input('Is this a cell? 1 for yes, 0 for no, 5 to exit')
        if res == 1 or res == 0:
            reslist[i] = res
        if res == 5:
            break
        else:
            print('Must give a valid input! (0, 1 or 5)')
    f_cop['res'] = reslist
    return f_cop[f_cop.res == 1]
Try something like:
fig, ax = plt.subplots(1, 1)
for i in range(arr.shape[0]):
    ax.cla()
    im = ax.imshow(arr[i], cmap='gray', interpolation='nearest')
    ax.annotate('particle ' + repr(i), xy=(f.get_value(i, 'x'), f.get_value(i, 'y')),
                xytext=(f.get_value(i, 'x')+20, f.get_value(i, 'y')+20),
                arrowprops=dict(facecolor='red', shrink=0.05),
                fontsize=12, color='r')
    fig.canvas.draw()
    res = None
    while res not in (0, 1, 5):
        res = input('Is this a cell? 1 for yes, 0 for no, 5 to exit')
        if res == 1 or res == 0:
            reslist[i] = res
        elif res == 5:
            break
        else:
            print("should never get here")
You should avoid using pyplot in scripts as much as possible; the state machine can cause you a great deal of trouble.
The Problem:
I'm currently loading column data from text files into numpy arrays, then plotting them and saving the resulting image. Because the values will always lie on an equally spaced grid, it seemed an appropriate time to use pcolorfast. Each array is necessarily square, usually between 1024x1024 and 8192x8192. At present, I'm only concerned with this working up to and including 4096x4096 sizes. This needs to be done for hundreds of files, and while it successfully completes the first image, subsequent images crash with a MemoryError.
Unsuccessful solutions:
I've ensured, as per here, that I have hold = False in rc.
Limitations:
The images must be saved using all 4096x4096 values, and cannot be scaled down to 1024x1024 (as suggested here).
Notes:
After watching memory usage during each phase (create empty array, load values, plot, save), the array A is still sitting in memory after makeFrame is complete. Is an explicit call to delete it required? Does fig need to be explicitly deleted, or should pylab take care of that? The ideal situation (probably obvious) would be to have memory usage return to ~the same level as it was before the call to makeFrame().
Any and all advice is greatly appreciated. I've been trying to resolve this for a few days, so it's not unlikely I've missed something obvious. And obvious solutions would be exciting (if the alternative should be that this is a more complicated problem).
Current code sample:
import numpy
import matplotlib
matplotlib.use("AGG")
import matplotlib.pylab as plt

def makeFrame(srcName, dstName, coloring, sideLength,
              dataRanges, delim, dpi):
    v, V, cmap = coloring
    n = sideLength
    xmin, xmax, ymin, ymax = dataRanges
    A = numpy.empty((n,n), float)
    dx = (xmax-xmin) / (n-1)
    dy = (ymax-ymin) / (n-1)

    srcfile = open(srcName, 'rb')
    for line in srcfile:
        lineVals = line[:-1].split(delim)
        x = float(lineVals[0])
        y = float(lineVals[1])
        c = float(lineVals[2])
        #Find index from float value, adjust for rounding
        i = (x-xmin) / dx
        if (i - int(i)) > .05: i += 1
        j = (y-ymin) / dy
        if (j - int(j)) > .05: j += 1
        A[int(i), int(j)] = c
    srcfile.close()
    print("loaded vals")

    fig = plt.figure(1)
    fig.clf()
    ax = fig.gca()
    ScalarMap = ax.pcolorfast(A, vmin=v, vmax=V, cmap=cmap)
    fig.colorbar(ScalarMap)
    ax.axis('image')
    fig.savefig(dstName, dpi=dpi)
    plt.close(1)
    print("saved image")
Caveats:
There might be a better way to deal with this memory problem that I don't know about.
I haven't been able to reproduce this error. When I use matplotlib.cbook.report_memory() my memory usage seems to level out as expected.
Despite the caveats, I thought I'd mention a general, cheap method of dealing with problems caused by a program refusing to release memory: Use the multiprocessing module to spawn the problematic function in a separate process. Wait for the function to end, then call it again. Each time a subprocess ends, you regain the memory it used.
So I suggest trying something like this:
import matplotlib.cbook as mc
import multiprocessing as mp
import matplotlib.cm as cm

if __name__ == '__main__':
    for _ in range(10):
        srcName = 'test.data'
        dstName = 'test.png'
        vmin = 0
        vmax = 5
        cmap = cm.jet
        sideLength = 500
        dataRanges = (0.0, 1.0, 0.0, 1.0)
        delim = ','
        dpi = 72
        proc = mp.Process(target=makeFrame, args=(
            srcName, dstName, (vmin, vmax, cmap), sideLength,
            dataRanges, delim, dpi))
        proc.start()
        proc.join()
        usage = mc.report_memory()
        print(usage)
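A variant of the same idea, sketched with multiprocessing.Pool: maxtasksperchild=1 replaces the worker process after every task, so each makeFrame call runs in a fresh process whose memory is returned to the OS when it exits. This assumes makeFrame is defined at module level, as in the question:

import multiprocessing as mp
import matplotlib.cm as cm

if __name__ == '__main__':
    # one worker that is recycled after every task, so memory cannot accumulate
    pool = mp.Pool(processes=1, maxtasksperchild=1)
    for _ in range(10):
        args = ('test.data', 'test.png', (0, 5, cm.jet), 500,
                (0.0, 1.0, 0.0, 1.0), ',', 72)
        pool.apply(makeFrame, args)  # blocks until this frame has been saved
    pool.close()
    pool.join()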