Python matplotlib imshow is slow

I want to display an image file using imshow. It is a 1600x1200 grayscale image, and I found out that matplotlib uses float32 to decode the values. It takes about 2 seconds to load the image, and I would like to know if there is any way to make this faster. The point is that I do not really need a high-resolution image; I just want to mark certain points and draw the image as a background. So:
First question: Is 2 seconds good performance for such an image, or can it be sped up?
Second question: If it is good performance, how can I make the process faster by reducing the resolution? Important point: I still want the image to stretch over 1600x1200 pixels in the end.
My code:
import matplotlib.pyplot
import numpy

plotfig = matplotlib.pyplot.figure()
plotwindow = plotfig.add_subplot(111)
plotwindow.axis([0, 1600, 0, 1200])
plotwindow.invert_yaxis()
img = matplotlib.pyplot.imread("lowres.png")
# origin must be 'upper' or 'lower'; 'centre' is not a valid value
im = matplotlib.pyplot.imshow(img, cmap=matplotlib.cm.gray, origin='lower')
plotfig.set_figwidth(200.0)
plotfig.canvas.draw()
matplotlib.pyplot.show()
This is what I want to do. Now, if the picture saved in lowres.png has a lower resolution than 1600x1200 (e.g. 400x300), it is displayed in the upper corner, as it should be. How can I scale it to the whole area of 1600x1200 pixels?
If I run this program, the slow part is the canvas.draw() command above. Is there maybe a way to speed up this command?
Thank you in advance!
Following your suggestions, I have updated to the newest version of matplotlib:
version 1.1.0svn, checkout 8988
I also use the following code:
img = matplotlib.pyplot.imread(pngfile)
img *= 255                       # imread returns floats in [0, 1] for PNGs
img2 = img.astype(numpy.uint8)   # convert to uint8 to avoid float32 handling
im = plotwindow.imshow(img2, cmap=matplotlib.cm.gray, origin='lower')
and still it takes about 2 seconds to display the image... Any other ideas?
Just to add: I found the following feature:
zoomed_inset_axes
So in principle matplotlib should be able to do the task; with it, one can also plot a picture in a "zoomed" fashion...

The size of the data is independent of the pixel dimensions of the final image.
Since you say you don't need a high-resolution image, you can generate the image more quickly by down-sampling your data. If your data is in the form of a numpy array, a quick-and-dirty way is to take every nth row and column with data[::n, ::n].
You can control the output image's pixel dimensions with fig.set_size_inches and plt.savefig's dpi parameter:
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np

data = np.arange(300).reshape((10, 30))
# Down-sample by taking every 2nd row and column
plt.imshow(data[::2, ::2], cmap=cm.Greys)
fig = plt.gcf()
# Unfortunately, had to find these numbers through trial and error
fig.set_size_inches(5.163, 3.75)
ax = plt.gca()
# Bounding box of the axes in figure (inch) coordinates
extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
plt.savefig('/tmp/test.png', dpi=400, bbox_inches=extent)

You can disable the default interpolation of imshow by adding the following line to your matplotlibrc file (typically at ~/.matplotlib/matplotlibrc):
image.interpolation : none
The result is much faster rendering and crisper images.
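If you would rather not edit the rc file, the same setting can also be applied per session or per call. A minimal sketch (the random img array is just a stand-in for real data):
import matplotlib.pyplot as plt
import numpy as np

# Per-session: equivalent to the matplotlibrc line above
plt.rcParams['image.interpolation'] = 'none'

# Per-call: pass interpolation explicitly to imshow
img = np.random.rand(300, 400)  # stand-in for your image data
plt.imshow(img, cmap='gray', interpolation='none')
plt.show()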

I found a solution that works as long as one only needs to display low-resolution images. One can do so using the line
im = matplotlib.pyplot.imshow(img, cmap=matplotlib.cm.gray, origin='lower', extent=(0, 1600, 0, 1200))
where the extent parameter tells matplotlib to stretch the figure over this range. If one uses an image with a lower resolution, this speeds up the process quite a lot. Nevertheless, it would be great if somebody knew additional tricks to make the process even faster, so that a higher resolution can be used at the same speed.
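Putting this together with the down-sampling idea from the answer above, a minimal end-to-end sketch (the file name lowres.png and the step n=4 are placeholders):
import matplotlib.pyplot as plt

# Down-sample first, then let extent stretch the small image back
# over the full 1600x1200 coordinate range.
img = plt.imread("lowres.png")
small = img[::4, ::4]
plt.imshow(small, cmap='gray', origin='lower', extent=(0, 1600, 0, 1200))
plt.axis([0, 1600, 0, 1200])
plt.gca().invert_yaxis()
plt.show()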
Thanks to everyone who thought about my problem, further remarks are appreciated!!!

Related

Better documentation for arcgisimage, basemap

I am using the arcgisimage API to add map layers to my scatterplot.
However, the documentation for the API, found at http://basemaptutorial.readthedocs.org/en/latest/backgrounds.html, is not that good, especially concerning the sizing of the images:
xpixels actually sets the zoom of the image. A bigger number will ask a bigger image, so the image will have more detail. So when the zoom is bigger, the xsize must be bigger to maintain the resolution.
dpi is the image resolution at the output device. Changing its value will change the number of pixels, but not the zoom level.
The xsize mentioned is not defined anywhere, and doubling the dpi from 300 to 600 doesn't affect the size of the image.
Anyone have a better documentation/tutorial?
I am learning about some similar things and I am new to this, so all I can offer are some simple ideas; I hope they help.
The following code is from the example given in the tutorial, with some adjustments (centered on LA):
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

# The EPSG number for America is 4269; see
# http://server.arcgisonline.com/arcgis/rest/services
m = Basemap(llcrnrlon=-118.5, llcrnrlat=33.15,
            urcrnrlon=-117.15, urcrnrlat=34.5, epsg=4269)
m.arcgisimage(service='World_Physical_Map', xpixels=1500, verbose=True)
plt.show()
Firstly, I guess "xsize" here means "xpixels", or should just be "size" (a misspelling? I'm not sure). As the tutorial says, what "xpixels" influences is the resolution of the final figure and its size.
With xpixels=150, you'll get the following picture (about 206 KB):
However, with xpixels=1500, you'll get a picture with higher resolution (about 278 KB). When you zoom in to examine the details, this picture is also clearer than the former one.
If you want to view the picture at a bigger zoom, you need to set "xpixels" to a larger value to keep it clear (that is, to maintain the resolution; I guess the tutorial gives only a brief explanation without many details). As for "dpi", I have no idea what it is used for; it is as if you first cut a cake into 300 grids and then cut it into 600 grids: the figure won't become clearer, and from what you say, neither will it become a bigger graph.
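To make the distinction concrete, a small sketch (file names are placeholders): xpixels changes how much detail is requested from the ArcGIS server, while the output dpi only changes how many pixels the saved figure has, with no extra map detail.
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

m = Basemap(llcrnrlon=-118.5, llcrnrlat=33.15,
            urcrnrlon=-117.15, urcrnrlat=34.5, epsg=4269)

# More detail from the server: larger xpixels
m.arcgisimage(service='World_Physical_Map', xpixels=1500, verbose=True)

# More pixels in the output file, but no extra map detail: larger dpi
plt.savefig('map_300dpi.png', dpi=300)  # placeholder file names
plt.savefig('map_600dpi.png', dpi=600)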

Trouble with pyplot displaying resized images in python

This is my first Stack Overflow question, so please correct me if it's not a good one:
I am currently processing a bunch of grayscale images as numpy ndarrays (dtype=uint8) in Python 2.7. When I resize the images using resized = misc.imresize(image, .1), the resulting image will sometimes show up with different gray levels when I plot it with pyplot. Here is what my code looks like. I would post an image of the result, but I do not have the reputation points yet:
import cv2
from scipy import misc
from matplotlib import pyplot as plt

image = cv2.imread("gray_image.tif", cv2.CV_LOAD_IMAGE_GRAYSCALE)
resized = misc.imresize(image, .1)
plt.subplot(1, 2, 1), plt.imshow(image, "gray")
plt.subplot(1, 2, 2), plt.imshow(resized, "gray")
plt.show()
If I write the image to a file, the gray level appears normal.
If I compare the average gray level using numpy:
np.average(image) and np.average(resized),
the average gray level values are about the same, as one would expect.
If I display the image with cv2.imshow, the gray level appears normal.
It's not only an issue with resizing: the gray level also gets screwy when I add images together (even when most of one image is black and shouldn't darken the result), and when I build an image pixel by pixel, such as in:
import numpy as np

image_copy = np.zeros(image.shape)
for row in range(image.shape[0]):
    for col in range(image.shape[1]):
        image_copy[row, col] = image[row, col]
plt.imshow(image_copy, "gray")  # <-- Will sometimes show up darker than original image
plt.show()
Does anyone have an idea as to what may be going on?
I apologize for the wordiness and lack of clarity in this question.
imshow automatically scales the color information to fit the whole available range. After resizing, the range of values may be smaller, resulting in changes of the apparent color (but not of the actual values, which explains why saved images look fine).
You likely want to tell imshow not to scale your colors. This can be done using the vmin and vmax arguments, as explained in the documentation. You probably want something like plt.imshow(image, "gray", vmin=0, vmax=255) to achieve an invariant appearance.
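A minimal sketch of the effect and the fix, using a synthetic dark image in place of the originals:
import numpy as np
from matplotlib import pyplot as plt

# Synthetic stand-in: a dark uint8 image whose values span only 0..100
image = (np.random.rand(200, 300) * 100).astype(np.uint8)

plt.subplot(1, 2, 1)
plt.imshow(image, "gray")                    # autoscaled: 100 is shown as white
plt.title("autoscaled")
plt.subplot(1, 2, 2)
plt.imshow(image, "gray", vmin=0, vmax=255)  # fixed scale: appearance is invariant
plt.title("vmin=0, vmax=255")
plt.show()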

Force pyplot.imshow() to produce image with higher resolution

I have an NxN array that I am plotting in Python using matplotlib.pyplot.imshow(). N will be very large, and I want my final image to have a matching resolution. However, in the code that follows, the image resolution doesn't seem to change with increasing N at all. I think that imshow() (at least how I'm using it) has a fixed minimum pixel size that is larger than what is needed to show my NxN array at full resolution.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

array = np.loadtxt("output.dat", unpack=True)
plt.figsize = (30.0, 30.0)  # note: this just assigns an attribute and has no effect
im = plt.imshow(array, cmap='hot')
plt.colorbar(im)
plt.savefig("mandelbrot.pdf")
As you can see in the code above, I've tried messing with plt.figsize to try and increase the resolution, but to no avail. I've also tried various output formats (.pdf, .ps, .eps, .png), but these all produced images with lower resolution than I wanted. The .ps, .eps, and .pdf images all looked exactly the same.
First, does my problem lie with imshow(), or does some other aspect of my code need to be changed to produce higher-resolution images?
Second, how do I produce higher-resolution images?
Assigning to plt.figsize does nothing; the figure size is set in inches via plt.figure(figsize=...), and changing it alone keeps the default dpi. You can set the resolution of the saved figure by passing the dpi keyword argument when you save it:
fig.savefig('filename.extension', dpi=XXX)
So if you have a figure size of 4x6 inches and save it with dpi=300, you'll end up with an image of 1200x1800 pixels.
You can also set the default figure size and dpi with matplotlibrc.
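A minimal sketch of both knobs (the array, file name, and dpi value are placeholders):
import numpy as np
import matplotlib.pyplot as plt

# Defaults can also be set globally instead of in matplotlibrc:
# plt.rcParams['figure.figsize'] = (10.0, 10.0)
# plt.rcParams['savefig.dpi'] = 300

array = np.random.rand(2000, 2000)      # stand-in for np.loadtxt("output.dat")
fig = plt.figure(figsize=(10.0, 10.0))  # size in inches
plt.imshow(array, cmap='hot', interpolation='none')
plt.colorbar()
# 10 inches x 300 dpi = 3000 pixels on each side
fig.savefig("mandelbrot.png", dpi=300)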

Some questions about scipy.misc.imshow

I just want to use Python and scipy to write a program that achieves gray-level image histogram equalization; however, I find that something is wrong when I use the imshow function.
I first read an image with the misc.imread function, and when I immediately display it, the image shown is not the original one; it seems to have histogram equalization applied by default. Here is my code:
from scipy import misc
import matplotlib.pyplot as plt
......
image1 = misc.imread('./images/Fig2.jpg')
misc.imsave('./images/post_Fig2.jpg', image1)
plt.figure()
plt.imshow(image1, cmap=plt.cm.gray)
plt.show()
image1 is an image with low contrast.
post_Fig2.jpg is also an image with low contrast.
However, the image displayed in the figure window has high contrast, significantly different from the two mentioned above.
So I am wondering whether the plt.imshow function does histogram equalization automatically, or whether something is wrong with my code or my method.
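What is described here matches imshow's automatic value scaling (the vmin/vmax behavior covered in an earlier answer above) rather than true histogram equalization. A minimal sketch of how to check, assuming an 8-bit image:
from scipy import misc
import matplotlib.pyplot as plt

image1 = misc.imread('./images/Fig2.jpg')
plt.figure()
# Pin the display range to the full 8-bit scale; if the low contrast
# returns, the apparent "equalization" was just min/max autoscaling.
plt.imshow(image1, cmap=plt.cm.gray, vmin=0, vmax=255)
plt.show()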

Display and Save Large 2D Matrix with Full Resolution in Python

I have a large 2D array (4000x3000) saved as a numpy array which I would like to display and save while keeping the ability to look at each individual pixel.
For the display part, I currently use the matplotlib imshow() function, which works very well.
For the saving part, it is not clear to me how I can save this figure while preserving the information contained in all 12M pixels. I tried adjusting the figure size and the resolution (dpi) of the saved image, but it is not obvious which figsize/dpi settings should be used to match the resolution of the large 2D matrix displayed. Here is an example of what I'm doing (arr is a numpy array of shape (3000, 4000)):
import pylab

fig = pylab.figure(figsize=(16, 12))
pylab.imshow(arr, interpolation='nearest')
fig.savefig("image.png", dpi=500)
One option would be to increase the resolution of the saved image substantially to be sure all pixels are properly recorded, but this has the significant drawback of creating an extremely large image (much larger than the 4000x3000-pixel image, which is all that I really need). It also has the disadvantage that not all pixels will be exactly the same size.
I also had a look at the Python Imaging Library, but it is not clear to me how it could be used for this purpose, if at all.
Any help on the subject would be much appreciated!
I think I found a solution which works fairly well. I use figimage to plot the numpy array without resampling. If you're careful about the size of the figure you create, you can keep the full resolution of your matrix, whatever its size.
I figured out that figimage plots a single pixel at a size of 0.01 inch (this number might be system dependent), so the following code will, for example, save the matrix at full resolution (arr is a numpy array of shape (3000, 4000)):
import pylab
from matplotlib import cm

rows = 3000
columns = 4000
# One array element per 0.01 inch at the default dpi
fig = pylab.figure(figsize=(columns * 0.01, rows * 0.01))
pylab.figimage(arr, cmap=cm.jet, origin='lower')
fig.savefig("image.png")
Two issues I still have with this option:
there are no markers indicating column/row numbers, which makes it hard to know which pixel is which, apart from the ones on the edges
if you decide to look at the image interactively, it is not possible to zoom in/out
A solution that also solves the above 2 issues would be terrific, if it exists.
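The 0.01-inch figure is simply one pixel at the default dpi of 100. A sketch that makes this explicit, so the pixel size no longer depends on the system default (arr stands in for the real data):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

arr = np.random.rand(3000, 4000)  # stand-in for the real data

dpi = 100  # chosen explicitly instead of relying on the default
rows, columns = arr.shape
fig = plt.figure(figsize=(columns / float(dpi), rows / float(dpi)), dpi=dpi)
fig.figimage(arr, cmap=cm.jet, origin='lower')
fig.savefig("image.png", dpi=dpi)  # one array element per saved pixel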
The OpenCV library was designed for scientific analysis of images. Consequently, it doesn't "resample" images without your explicitly asking for it. To save an image:
import cv2
cv2.imwrite('image.png', arr)
where arr is your numpy array. The saved image will be the same size as your array arr.
You didn't mention the color model that you are using. PNGs, like JPEGs, are usually 8 bits per color channel. OpenCV will support up to 16 bits per channel if you request it.
Documentation on OpenCV's imwrite is here.
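A short sketch of the 16-bit case mentioned above (the scaling assumes arr holds floats in [0, 1]):
import cv2
import numpy as np

arr = np.random.rand(3000, 4000)  # stand-in: floats in [0, 1]

# Scale to the full 16-bit range and save; OpenCV writes a 16-bit
# grayscale PNG when given a single-channel uint16 array.
arr16 = (arr * 65535).astype(np.uint16)
cv2.imwrite('image16.png', arr16)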
