I am making a GUI to control some custom laser hardware. Through this GUI I can grab data from imaging experiments (a scan over a rectangular area). During an experiment I fill a matrix, previously initialized with zeros, with real data. The matrix has a shape of, say, 500x500. At each step of the loop I get a new value, and I need to visualize the matrix in real time.
Until recently, I used matplotlib's imshow together with fig.canvas.draw() to fully redraw the image on each step. It worked perfectly except for the speed: it took about 100 ms to plot one image, which is too slow since I have to plot thousands of images.
I decided to speed up rendering using blitting. It does a perfect job for the image, but my colorbar stopped updating. On each iteration I set new vmin and vmax values, yet the colorbar is not redrawn. I suspect this is because of the background-restoring call fig.canvas.restore_region(bg). I have tried different ways to solve this for many hours without success. I found a solution based on the animation functions, but I want to redraw the image manually in a loop.
Below is my "model" code. In the loop I want to see both the plot and the colorbar update, but the colorbar updates only once, at the end, and I do not understand why.
import matplotlib.pyplot as plt
import numpy as np
import time
from mpl_toolkits.axes_grid1 import make_axes_locatable

fig, ax = plt.subplots()
ax.set_axis_off()
z = np.random.randint(0, 100, size=(500, 500))
line = ax.imshow(z, cmap="jet", extent=[0, 100, 0, 100], interpolation="none", animated=True)
div = make_axes_locatable(ax)
cax = div.append_axes('right', '5%', '5%')
cbar = fig.colorbar(line, cax=cax)

plt.show(block=False)
plt.pause(0.05)
bg = fig.canvas.copy_from_bbox(fig.bbox)
ax.draw_artist(line)
fig.canvas.blit(fig.bbox)

for i in range(10):
    if i % 2 == 0:
        z = np.random.randint(0, 500, size=(500, 500))
        line.set_clim(0, 500)
    else:
        z = np.random.randint(0, 200, size=(500, 500))
        line.set_clim(0, 200)
    fig.canvas.restore_region(bg)
    line.set_data(z)
    ax.draw_artist(line)
    fig.canvas.blit(fig.bbox)
    fig.canvas.flush_events()
    time.sleep(0.2)
Here I manually imitate different value ranges for the matrix and want to see the colorbar track them.
Addition 1: Actually, the code behaves as needed if I replace time.sleep() with plt.pause(). But I would be very thankful to anyone who could make it clearer for me.
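One workaround sketch (my own assumption, not a confirmed solution): snapshot and blit only the image axes, and fall back to a full fig.canvas.draw() whenever the color limits change, since new limits invalidate the colorbar ticks and the saved background. In a real experiment, where the limits change rarely, most iterations would take the cheap blitting path.
import matplotlib.pyplot as plt
import numpy as np
import time
from mpl_toolkits.axes_grid1 import make_axes_locatable

fig, ax = plt.subplots()
ax.set_axis_off()
im = ax.imshow(np.random.randint(0, 100, size=(500, 500)), cmap="jet",
               extent=[0, 100, 0, 100], interpolation="none", animated=True)
cax = make_axes_locatable(ax).append_axes('right', '5%', '5%')
cbar = fig.colorbar(im, cax=cax)

plt.show(block=False)
plt.pause(0.05)
bg = fig.canvas.copy_from_bbox(ax.bbox)  # snapshot only the image axes

for i in range(10):
    vmax = 500 if i % 2 == 0 else 200
    im.set_data(np.random.randint(0, vmax, size=(500, 500)))
    if im.get_clim() != (0, vmax):
        # Limits changed: one full draw so the colorbar and its ticks refresh,
        # then re-capture the background for subsequent blits.
        im.set_clim(0, vmax)
        fig.canvas.draw()
        bg = fig.canvas.copy_from_bbox(ax.bbox)
    else:
        # Limits unchanged: cheap blit of the image only.
        fig.canvas.restore_region(bg)
        ax.draw_artist(im)
        fig.canvas.blit(ax.bbox)
    fig.canvas.flush_events()
    time.sleep(0.2)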
I am new to Python and I have written the following code to paste spot.jpg onto background.jpg at coordinate (2, 3), given that background.jpg spans the extent [0, 20, 0, 16]. The two images have different sizes.
from PIL import Image
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox

# imbg is layout/ background
imbg = Image.open(r"background.jpg")
# imfg is furniture/ foreground
imfg = Image.open(r"spot.jpg")

# manually set aspect dimension for background
ext = [0, 20, 0, 16]

ax = plt.subplot()  # add sub-plot
oi = OffsetImage(imfg, zoom=0.1)
ab1 = AnnotationBbox(oi, (2, 3), frameon=False)
ax.add_artist(ab1)
plt.imshow(imbg, zorder=0, extent=ext)
plt.xlabel("Dimensions", fontsize=12)
plt.title('Proposed', fontsize=20)

mng = plt.get_current_fig_manager()
mng.window.state("zoomed")
plt.show()
The result is shown in the linked screenshot (matplotlib output).
I see 3 problems here.
If I don't set zoom=0.1 (a number I simply guessed), spot.jpg comes out much bigger than background.jpg. In reality the spot is less than 1 m by 1 m and the background is 20 m by 16 m. If I have to pass a zoom value to bring both images to the same scale, what should that number be, or how can I calculate it?
When I zoom into the matplotlib figure, spot.jpg doesn't seem to get bigger (see the "zooming into spot on plot" screenshot). I wonder why.
The image quality of the spot suffers in the plot. Is there any way to improve how spot.jpg looks?
Many thanks in advance for helping a noob like me.
Using an offset box may not be the best approach here. It seems you want both images in the same data coordinates, so plot both with imshow, giving each a different extent according to its desired position and size.
import matplotlib.pyplot as plt

# imbg is layout/ background
imbg = plt.imread(r"data/room.jpg")
# imfg is furniture/ foreground
imfg = plt.imread(r"data/spot.jpg")

# manually set aspect dimension for background
ext = [0, 20, 0, 16]

fig, ax = plt.subplots()
ax.imshow(imbg, zorder=0, extent=ext)
ax.imshow(imfg, zorder=1, extent=[2, 5, 3, 6])
ax.axis(ext)

plt.xlabel("Dimensions", fontsize=12)
plt.title('Proposed', fontsize=20)
plt.show()
The extent for the chair is also rather arbitrary here, but you would know better where to put it and in what size.
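If the spot's real-world footprint is known, the extent can be computed instead of guessed. A small sketch (the 1 m by 1 m size comes from the question; treating the (2, 3) coordinate as the lower-left corner is my assumption):
x0, y0 = 2, 3      # placement coordinate from the question
w, h = 1.0, 1.0    # physical width/height of the spot in metres (assumed)
ax.imshow(imfg, zorder=1, extent=[x0, x0 + w, y0, y0 + h])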
I have a Python program that draws a contour plot for each line of data in a text file. Currently, I have 3 separate contour plots in my interface. Whether I read the data from the file or load it into memory before running the script, I only get ~6 fps from the contour plots.
I also tried using just one contour and normal plots for the rest, but the speed only increased to 7 fps. I don't believe drawing a few lines is that computationally taxing. Is there a way to make it substantially faster? Ideally, it would be nice to get at least 30 fps.
The way I draw the contour is that for each line of my data I remove the previous one:
for coll in my_contour[0].collections:
    coll.remove()
and add a new one
my_contour[0] = ax[0].contour(x, y, my_func, [0])
At the beginning of the code, I have plt.ion() to update the plots as I add them.
Any help would be appreciated.
Thanks
Here is an example of how to use a contour plot in an animation. It uses matplotlib.animation.FuncAnimation, which makes it easy to turn blitting on and off.
With blit=True it runs at ~64 fps on my machine, and without blitting at ~55 fps. Note that the interval must of course allow for the fast animation; setting it to interval=10 (milliseconds) would allow for up to 100 fps, but the drawing time limits it to something slower than that.
import matplotlib.pyplot as plt
import matplotlib.animation
import numpy as np
import time

x = np.linspace(0, 3*np.pi)
X, Y = np.meshgrid(x, x)
f = lambda x, y, alpha, beta: (np.sin(X + alpha) + np.sin(Y*(1 + np.sin(beta)*.4) + alpha))**2
alpha = np.linspace(0, 2*np.pi, num=34)
levels = 10
cmap = plt.cm.magma

fig, ax = plt.subplots()
props = dict(boxstyle='round', facecolor='wheat')
timelabel = ax.text(0.9, 0.9, "", transform=ax.transAxes, ha="right", bbox=props)
t = np.ones(10)*time.time()
p = [ax.contour(X, Y, f(X, Y, 0, 0), levels, cmap=cmap)]

def update(i):
    for tp in p[0].collections:
        tp.remove()
    p[0] = ax.contour(X, Y, f(X, Y, alpha[i], alpha[i]), levels, cmap=cmap)
    t[1:] = t[0:-1]
    t[0] = time.time()
    timelabel.set_text("{:.3f} fps".format(-1./np.diff(t).mean()))
    return p[0].collections + [timelabel]

ani = matplotlib.animation.FuncAnimation(fig, update, frames=len(alpha),
                                         interval=10, blit=True, repeat=True)
plt.show()
Note that in the animated gif above a slower frame rate is shown, since the process of saving the images takes a little longer.
I am stuck in a rather complicated situation. I am plotting some data as an image with imshow(). Unfortunately my script is long and a little messy, so it is difficult to make a working example, but I show the key steps here. This is how I get the data for my image from a bigger array stored in a file:
data = np.tril(np.loadtxt('IC-heatmap-20K.mtx'), 1)
#
# Here goes lots of other stuff, where I define start and end
#
chrdata = data[start:end, start:end]
chrdata = ndimage.rotate(chrdata, 45, order=0, reshape=True,
                         prefilter=False, cval=0)
ax1 = host_subplot(111)
# I don't really need host_subplot() in this case, I could use something more common;
# it is just that divider.append_axes("bottom", ...) is really convenient.
plt.imshow(chrdata, origin='lower', interpolation='none',
           extent=[0, length*resolution, 0, length*resolution])  # resolution = 20000
So the values I am interested in are all in a triangle whose top corner sits in the middle of the top side of a square. At the same time I plot some other data (lots of coloured lines in this case) along the bottom of the image.
At first glance this looks OK, but actually it is not: the pixels in the image are not square but elongated, with their height greater than their width. This is how they look if I zoom in:
This doesn't happen if I don't set extent when calling imshow(), but I need it so that the coordinates in the image and in the other plots (the coloured lines at the bottom in this case) are identical (see Converting coordinates of a picture in matplotlib?).
I tried to fix it using aspect. It fixed the pixels' shape, but I got a really weird picture:
The thing is, later in the code I explicitly set this:
ax1.set_ylim(0*resolution, length*resolution) #resolution=20000
But after setting aspect I get completely different y limits. And worst of all, ax1 is now wider than the axes of the other plot at the bottom, so their coordinates no longer match! I add that plot in this way:
axPlotx = divider.append_axes("bottom", size=0.1, pad=0, sharex=ax1)
I would really appreciate help getting this fixed: square pixels, and identical coordinates in two (or more, in other cases) plots. As I see it, the image axes need to become wider (as aspect does), the y limits should still apply, and the width of the second axes should be identical to the image's.
Thanks for reading this probably unclear explanation; please let me know if I should clarify anything.
UPDATE
As suggested in the comments, I tried to use
ax1.set(adjustable='box-forced')
It did help with the image itself, but it caused the two axes to become separated by white space. Is there any way to keep them close together?
I have re-edited my entire answer, as I found a solution to your problem. I solved it using the set_adjustable("box-forced") option, as suggested in tcaswell's comment.
import numpy
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import host_subplot, make_axes_locatable

# Calculate aspect ratio
def determine_aspect(shape, extent):
    dx = (extent[1] - extent[0]) / float(shape[1])
    dy = (extent[3] - extent[2]) / float(shape[0])
    return dx / dy

data = numpy.random.random((30, 60))
shape = data.shape
extent = [-10, 10, -20, 20]
x_size, y_size = 6, 6

fig = plt.figure(figsize=(x_size, y_size))
ax = host_subplot(1, 1, 1)
ax.imshow(data, extent=extent, interpolation="None", aspect=determine_aspect(shape, extent))

# Determine width and height of the subplot frame
bbox = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
width, height = bbox.width, bbox.height

# Calculate the distance the second plot needs to be elevated by
padding = (y_size - (height - width)) / float(1 / (2. * determine_aspect(shape, extent)))

# Create the second axes below, sharing the x-axis
divider = make_axes_locatable(ax)
axPlotx = divider.append_axes("bottom", size=0.1, pad=-padding, sharex=ax)

# Turn off yticks for axPlotx and xtick labels for ax
axPlotx.set_yticks([])
plt.setp(ax.get_xticklabels(), visible=False)

# Make the plot obey the frame
ax.set_adjustable("box-forced")

fig.savefig("test.png", dpi=300, bbox_inches="tight")
plt.show()
This results in the following image where the x-axis is shared:
Hope that helps!
I would like to plot a number of curves over an image.
Using this code I am reasonably close:
G = plt.matplotlib.gridspec.GridSpec(64, 1)
fig = plt.figure()
plt.imshow(img.data[:,:], cmap='gray')
plt.axis('off')
plt.axis([0, 128, 0, 64])
for i in arange(64):
    fig.add_subplot(G[i, 0])
    plt.axis('off')
    # note that vtc.data.shape = (64, 128*400=51200)
    # so every trace for each image pixel is 400 points long
    plt.plot(vtc.data[i,:])
    plt.axis([0, 51200, 0, 5])
The result that I am getting looks like this:
The problem is that while I seem to be able to get rid of all the padding in the horizontal (x) direction, the image and the stacked plots have different amounts of padding in the vertical direction.
I tried using
ax = plt.gca()
ax.autoscale_view('tight')
but that didn't reduce the margin either.
How can I get a grid of m-by-n line plots to line up precisely with an image that has been blown up by a factor f to dimensions (f·m)-by-(f·n)?
UPDATE and Solution:
The answer by @RutgerKassies works quite well. I adapted his code like so:
fig, axs = plt.subplots(1, 1, figsize=(8, 4))
axs.imshow(img.data[:,:], cmap='gray', interpolation='none')
nplots = 64
fig.canvas.draw()
box = axs._position.bounds
height = box[3] / nplots
for i in arange(nplots):
    tmpax = fig.add_axes([box[0], box[1] + i * height, box[2], height])
    tmpax.set_axis_off()
    # make sure to get the image orientation right
    tmpax.plot(vtc.data[nplots-i-1,:], alpha=.3)
    tmpax.set_ylim(0, 5)
    tmpax.set_xlim(0, 51200)
I think the easiest way is to use the boundaries from your 'imshow axes' to manually calculate the boundaries of all your 'lineplot axes':
import matplotlib.pyplot as plt
import numpy as np

fig, axs = plt.subplots(1, 1, figsize=(15, 10))
axs.imshow(np.random.rand(50, 100), cmap='gray', interpolation='none', alpha=0.3)
nplots = 50
fig.canvas.draw()
box = axs._position.bounds
height = box[3] / nplots
for i in range(nplots):
    tmpax = fig.add_axes([box[0], box[1] + i * height, box[2], height])
    tmpax.set_axis_off()
    tmpax.plot(np.sin(np.linspace(0, np.random.randint(20, 1000), 1000))*0.4)
    tmpax.set_ylim(-1, 1)
The above code seems nice, but I do have some issues with autoscaling chopping off part of the plot. Try removing the last line to see the effect; I'm not sure why that's happening.
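A possible guard against that autoscaling issue (my own suggestion, not part of the original answer): pin both axis ranges explicitly inside the loop so autoscaling never clips a trace.
tmpax.set_xlim(0, 1000)   # each trace in this example has 1000 samples
tmpax.set_ylim(-1, 1)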
I need to know the size of the legend in pixels. I only seem to be able to get a height of 1.0 from any function... I've tried the following.
This returns 1.0:
height = legend.get_frame().get_bbox_to_anchor().height
This returns [0, 0], [1., 1.]:
box = legend.get_window_extent().get_points()
This also returns [0, 0], [1., 1.]:
box = legend.get_frame().get_bbox().get_points()
All of these return 1, even if the size of the legend changes! What's going on?
This is because you haven't yet drawn the canvas.
Pixel values simply don't exist in matplotlib (or rather, they exist but have no relation to the screen or other output) until the canvas is drawn.
There are a number of reasons for this, but I'll skip them at the moment. Suffice it to say that matplotlib tries to stay as general as possible, and generally avoids working with pixel values until things are drawn.
As a simple example:
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(10), label='Test')
legend = ax.legend(loc='upper left')

print('Height of legend before canvas is drawn:')
print(legend.get_window_extent().height)

fig.canvas.draw()

print('Height of legend after canvas is drawn:')
print(legend.get_window_extent().height)
However, this will only represent the height of the legend in pixels as it is drawn on the screen! If you save the figure, it will be saved with a different dpi (100, by default) from the one used to draw it on the screen, so the pixel sizes of things will be different.
There are two ways around this:
Quick and dirty: draw the figure's canvas before outputting pixel values and be sure to explicitly specify the dpi of the figure when saving (e.g. fig.savefig('temp.png', dpi=fig.dpi)); a minimal sketch follows below this list.
Recommended, but slightly more complicated: Connect a callback to the draw event and only work with pixel values when the figure is drawn. This allows you to work with pixel values while only drawing the figure once.
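A minimal sketch of the quick-and-dirty route (my own illustration, reusing the fig and legend from the example above):
fig.canvas.draw()                          # force a draw so window extents are meaningful
print(legend.get_window_extent().height)   # real on-screen pixel height
fig.savefig('temp.png', dpi=fig.dpi)       # save at the on-screen dpi so sizes match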
As a quick example of the latter method:
import matplotlib.pyplot as plt

def on_draw(event):
    fig = event.canvas.figure
    ax = fig.axes[0]  # I'm assuming only one subplot here!!
    legend = ax.legend_
    print(legend.get_window_extent().height)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(10), label='Test')
legend = ax.legend(loc='upper left')

fig.canvas.mpl_connect('draw_event', on_draw)

fig.savefig('temp.png')
Notice the difference in what is printed as the height of the legend in the first and second examples (31.0 for the second vs. 24.8 for the first, on my system, but this will depend on the defaults in your .matplotlibrc file).
The difference is due to the different dpi: the default fig.dpi is 80, while the default resolution when saving a figure is 100 dpi.
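To sanity-check those numbers against the dpi ratio (using the example values above, which depend on local defaults): 24.8 px on screen at 80 dpi scales to 24.8 × (100 / 80) = 31.0 px in the saved 100 dpi figure.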
Hopefully that makes some sense, anyway.