Force pyplot.imshow() to produce image with higher resolution - python

I have an NxN array that I am plotting in Python using matplotlib.pyplot.imshow(). N will be very large and I want my final image to have resolution to match. However, in the code that follows, the image resolution doesn't seem to change with increasing N at all. I think that imshow() (at least how I'm using it) has a fixed minimum pixel size that is larger than that needed to show my NxN array with full resolution.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
array = np.loadtxt("output.dat",unpack=True)
plt.figsize=(30.0, 30.0)
im = plt.imshow(array,cmap='hot')
plt.colorbar(im)
plt.savefig("mandelbrot.pdf")
As you can see in the code above, I've tried messing with plt.figsize to try and increase resolution but to no avail. I've also tried various output formats (.pdf, .ps, .eps, .png) but these all produced images with lower resolution than I wanted. The .ps, .eps, and .pdf images all looked the exact same.
First, does my problem exist with imshow() or is there some other aspect of my code that needs to be changed to produce higher resolution images?
Second, how do I produce higher resolution images?

Assigning plt.figsize does nothing here; figsize is a keyword argument to plt.figure(), and it only changes the physical size of the figure in inches while keeping the default dpi. You can set the resolution of the figure by passing the dpi keyword argument when you save the figure:
fig.savefig('filename.extension', dpi=XXX)
So if you have a figure size of 4x6 and save it with dpi=300 you'll end up with an image with 1200x1800 resolution.
You can also set the default figure size and dpi with matplotlibrc.
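A minimal sketch of this fix applied to the original snippet (file names and sizes are placeholders, and random data stands in for output.dat):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; safe for scripts
import matplotlib.pyplot as plt
import numpy as np

# Placeholder for np.loadtxt("output.dat", unpack=True)
array = np.random.rand(500, 500)

fig = plt.figure(figsize=(4, 3))                # physical size in inches
im = plt.imshow(array, cmap='hot', interpolation='nearest')
plt.colorbar(im)
fig.savefig("mandelbrot.png", dpi=100)          # 4x3 in at 100 dpi -> 400x300 px
```

Raising dpi (e.g. dpi=300) multiplies the pixel dimensions without changing the layout.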

Related

How to make a heatmap using Matplotlib with a specific pixel size for each cell?

I can easily make my heatmap using
data = np.random.rand(4,4)
fig, ax = plt.subplots()
heatmap = ax.pcolor(data, cmap=plt.cm.Blues)
plt.show()
However as you can see each cell is rectangular and I'd like each one to be square (32 x 32 pixels), generating a 512 x 512 pixel image.
Is there a way of forcing the plot (or each individual cell) to be of a specific pixel size?
Edits:
my screen DPI is 100
I'd like the actual heatmap to be 512 x 512, not the entire figure
If you only want an image containing the colors associated with your values, with no tick marks or labels, I would suggest using PIL.Image.fromarray from the Pillow fork of PIL. You'll need to tile your array so each value is repeated 32x32, then create your image, probably using float ('F') mode or 32-bit integer ('I') mode: im = PIL.Image.fromarray(ndarray, mode='F')
PIL also allows you to resize an image, so you could create the image with one pixel per bin then resize with no per pixel re-sampling: im.resize((512,512),resample=0)
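A minimal sketch of that Pillow approach (assuming a hypothetical 16x16 value grid, so that 32x32-pixel cells give a 512x512 image; np.kron does the tiling):

```python
import numpy as np
from PIL import Image

data = np.random.rand(16, 16)                     # stand-in heatmap values

# Repeat each cell 32x32 via a Kronecker product, then map to 8-bit gray
cells = np.kron(data, np.ones((32, 32)))
im = Image.fromarray((cells * 255).astype(np.uint8), mode='L')
im.save("heatmap_cells.png")                      # 512x512 pixels

# Alternative: build a 1-pixel-per-cell image, then upscale without resampling
small = Image.fromarray((data * 255).astype(np.uint8), mode='L')
big = small.resize((512, 512), resample=Image.NEAREST)
```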
Here I'm quoting from a previous post attached below:

Matplotlib doesn't work with pixels directly, but rather physical sizes and DPI. If you want to display a figure with a certain pixel size, you need to know the DPI of your monitor. For example this link will detect that for you.

If you have an image of 3841x7195 pixels it is unlikely that your monitor will be that large, so you won't be able to show a figure of that size (matplotlib requires the figure to fit in the screen; if you ask for a size too large it will shrink to the screen size). Let's imagine you want an 800x800 pixel image just for an example. Here's how to show an 800x800 pixel image in my monitor (my_dpi=96):

plt.figure(figsize=(800/my_dpi, 800/my_dpi), dpi=my_dpi)

So you basically just divide the dimensions in pixels by your DPI.

If you want to save a figure of a specific size, then it is a different matter. Screen DPIs are not so important anymore (unless you ask for a figure that won't fit in the screen). Using the same example of the 800x800 pixel figure, we can save it in different resolutions using the dpi keyword of savefig. To save it in the same resolution as the screen, just use the same dpi:

plt.savefig('my_fig.png', dpi=my_dpi)

To save it as an 8000x8000 pixel image, use a dpi 10 times larger:

plt.savefig('my_fig.png', dpi=my_dpi * 10)

Note that setting the DPI is not supported by all backends. Here, the PNG backend is used, but the pdf and ps backends will implement the size differently. Also, changing the DPI and sizes will affect things like font size: a larger DPI will keep the same relative sizes of fonts and elements, but if you want smaller fonts for a larger figure you need to increase the physical size instead of the DPI.

Getting back to your example, if you want to save an image with 3841 x 7195 pixels, you could do the following:

plt.figure(figsize=(3.841, 7.195), dpi=100)
( your code ... )
plt.savefig('myfig.png', dpi=1000)

Note that I used a figure dpi of 100 to fit in most screens, but saved with dpi=1000 to achieve the required resolution. On my system this produces a png with 3840x7190 pixels -- it seems that the DPI saved is always 0.02 pixels/inch smaller than the selected value, which will have a (small) effect on large image sizes. Some more discussion of this here:

Specifying and saving a figure with exact size in pixels

Hope it helps!
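Applying that recipe to this question's numbers (the asker's screen DPI of 100 and a target heatmap of 512x512 pixels), a sketch that makes the axes fill the whole figure so the heatmap itself, not just the figure, ends up 512x512 (the data values are placeholders):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

my_dpi = 100                                     # the asker's stated screen DPI
data = np.random.rand(16, 16)                    # placeholder values

fig = plt.figure(figsize=(512 / my_dpi, 512 / my_dpi), dpi=my_dpi)
ax = fig.add_axes([0, 0, 1, 1])                  # axes spans the full figure
ax.pcolor(data, cmap=plt.cm.Blues)
ax.set_axis_off()                                # no ticks or labels
fig.savefig("heatmap512.png", dpi=my_dpi)        # 5.12 in * 100 dpi = 512 px
```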

Some questions about scipy.misc.imshow

I just want to use Python and SciPy to write a program that performs gray-level image histogram equalization; however, I found something wrong when I use the misc.imshow function.
I first read an image with the misc.imread function and immediately write a display routine; the image displayed is not the original one -- it seems histogram equalization is applied by default. Here is my code:
from scipy import misc
import matplotlib.pyplot as plt
......
image1 = misc.imread('./images/Fig2.jpg')
misc.imsave('./images/post_Fig2.jpg', image1)
plt.figure()
plt.imshow(image1, cmap=plt.cm.gray)
plt.show()
image1 is an image with low contrast, and post_Fig2.jpg is also an image with low contrast.
However, the image displayed in the figure window has high contrast, significantly different from the two mentioned above.
So I am wondering: does the plt.imshow function do histogram equalization automatically, or is something wrong with my code or my method?
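For what it's worth (this explanation is not from the original thread, so treat it as an assumption): plt.imshow by default rescales its color limits to the data's minimum and maximum, which stretches the contrast of a low-contrast image. Pinning vmin/vmax to the full 8-bit range displays the data as-is; a sketch with hypothetical data:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical low-contrast 8-bit image: values clustered around mid-gray
image1 = np.random.randint(110, 140, size=(64, 64)).astype(np.uint8)

plt.figure()
# Default: color limits autoscale to the data range, boosting contrast
plt.imshow(image1, cmap=plt.cm.gray)

plt.figure()
# Fixed limits: the image keeps its true low contrast
plt.imshow(image1, cmap=plt.cm.gray, vmin=0, vmax=255)
plt.savefig("low_contrast.png")
```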

Get exact image dimensions with matplotlib and bbox_inches='tight'

I plot a series of figures and save them via savefig as png files (where savefig gets dpi=100 as an argument). The aspect ratio and resolution are previously defined within plt.figure(figsize=(10.24, 10.24), dpi=100), which should result in images of exactly 1024x1024 pixels. I use bbox_inches='tight' as well as plt.tight_layout(), but I still get image dimensions that vary by a few pixels from one image to the next, depending on the axis labels it seems.
Did I miss something? How do I get the exact same image dimensions for every file without losing bbox_inches='tight'? Using matplotlib 1.3.1.

Display and Save Large 2D Matrix with Full Resolution in Python

I have a large 2D array (4000x3000) saved as a numpy array which I would like to display and save while keeping the ability to look at each individual pixel.
For the display part, I currently use matplotlib imshow() function which works very well.
For the saving part, it is not clear to me how I can save this figure and preserve the information contained in all 12M pixels. I tried adjusting the figure size and the resolution (dpi) of the saved image but it is not obvious which figsize/dpi settings should be used to match the resolution of the large 2D matrix displayed. Here is an example code of what I'm doing (arr is a numpy array of shape (3000,4000)):
import pylab
fig = pylab.figure(figsize=(16, 12))
pylab.imshow(arr, interpolation='nearest')
fig.savefig("image.png", dpi=500)
One option would be to increase the resolution of the saved image substantially to be sure all pixels will be properly recorded but this has the significant drawback of creating an image of extremely large size (at least much larger than the 4000x3000 pixels image which is all that I would really need). It also has the disadvantage that not all pixels will be of exactly the same size.
I also had a look at the Python Image Library but it is not clear to me how it could be used for this purpose, if at all.
Any help on the subject would be much appreciated!
I think I found a solution which works fairly well. I use figimage to plot the numpy array without resampling. If you're careful in the size of the figure you create, you can keep full resolution of your matrix whatever size it has.
I figured out that figimage plots a single pixel with size 0.01 inch (this number might be system dependent) so the following code will for example save the matrix with full resolution (arr is a numpy array of shape (3000,4000)):
import pylab
from matplotlib import cm

rows = 3000
columns = 4000
fig = pylab.figure(figsize=(columns * 0.01, rows * 0.01))
pylab.figimage(arr, cmap=cm.jet, origin='lower')
fig.savefig("image.png")
Two issues I still have with this option:
there are no markers indicating column/row numbers, making it hard to know which pixel is which besides the ones on the edges
if you decide to interactively look at the image, it is not possible to zoom in/out
A solution that also solves the above 2 issues would be terrific, if it exists.
The OpenCV library was designed for scientific analysis of images. Consequently, it doesn't "resample" images without your explicitly asking for it. To save an image:
import cv2
cv2.imwrite('image.png', arr)
where arr is your numpy array. The saved image will be the same size as your array arr.
You didn't mention the color-model that you are using. Pngs, like jpegs, are usually 8-bit per color channel. OpenCV will support up to 16-bits per channel if you request it.
Documentation on OpenCV's imwrite is here.
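A related matplotlib-side option (not mentioned in the thread, so treat it as an aside): plt.imsave writes the array with exactly one pixel per element and no figure machinery at all, which addresses the full-resolution saving part of the question directly:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

arr = np.random.rand(300, 400)        # stand-in for the (3000, 4000) matrix

# One array element -> one image pixel; no axes, margins, or resampling
plt.imsave("matrix.png", arr, cmap="jet", origin="lower")
```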

Python matplotlib imshow is slow

I want to display an image file using imshow. It is a 1600x1200 grayscale image and I found out that matplotlib uses float32 to decode the values. It takes about 2 seconds to load the image and I would like to know if there is any way to make this faster. The point is that I do not really need a high resolution image, I just want to mark certain points and draw the image as a background. So,
First question: Is 2 seconds good performance for such an image, or can I speed it up?
Second question: If it is good performance, how can I make the process faster by reducing the resolution? Important point: I still want the image to stretch over 1600x1200 pixels in the end.
My code:
import matplotlib.pyplot
import matplotlib.cm
import numpy
plotfig = matplotlib.pyplot.figure()
plotwindow = plotfig.add_subplot(111)
plotwindow.axis([0, 1600, 0, 1200])
plotwindow.invert_yaxis()
img = matplotlib.pyplot.imread("lowres.png")
im = matplotlib.pyplot.imshow(img, cmap=matplotlib.cm.gray, origin='lower')
plotfig.set_figwidth(200.0)
plotfig.canvas.draw()
matplotlib.pyplot.show()
This is what I want to do. Now if the picture saved in lowres.png has a lower resolution than 1600x1200 (e.g. 400x300), it is displayed in the upper corner as it should. How can I scale it to the whole area of 1600x1200 pixels?
If I run this program, the slow part is the canvas.draw() command above. Is there maybe a way to speed up this command?
Thank you in advance!
According to your suggestions I have updated to the newest version of matplotlib
version 1.1.0svn, checkout 8988
And I also use the following code:
img = matplotlib.pyplot.imread(pngfile)
img *= 255
img2 = img.astype(numpy.uint8)
im = self.plotwindow.imshow(img2, cmap=matplotlib.cm.gray, origin='lower')
and still it takes about 2 seconds to display the image... Any other ideas?
Just to add: I found the following feature
zoomed_inset_axes
So in principle matplotlib should be able to do the task. There one can also plot a picture in a "zoomed" fashion...
The size of the data is independent of the pixel dimensions of the final image.
Since you say you don't need a high-resolution image, you can generate the image quicker by down-sampling your data. If your data is in the form of a numpy array, a quick and dirty way would be to take every nth column and row with data[::n,::n].
You can control the output image's pixel dimensions with fig.set_size_inches and plt.savefig's dpi parameter:
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np

data = np.arange(300).reshape((10, 30))
plt.imshow(data[::2, ::2], cmap=cm.Greys)
fig = plt.gcf()
# Unfortunately, had to find these numbers through trial and error
fig.set_size_inches(5.163, 3.75)
ax = plt.gca()
extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
plt.savefig('/tmp/test.png', dpi=400, bbox_inches=extent)
You can disable the default interpolation of imshow by adding the following line to your matplotlibrc file (typically at ~/.matplotlib/matplotlibrc):
image.interpolation : none
The result is much faster rendering and crisper images.
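The same setting can also be applied per call without touching matplotlibrc; a small sketch:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

data = np.arange(300).reshape((10, 30))
# Disable interpolation for this image only
plt.imshow(data, cmap="gray", interpolation="none")
plt.savefig("crisp.png")
```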
I found a solution as long as one needs to display only low-resolution images. One can do so using the line
im = matplotlib.pyplot.imshow(img, cmap=matplotlib.cm.gray, origin='lower', extent=(0, 1600, 0, 1200))
where the extent parameter tells matplotlib to plot the figure over this range. If one uses an image with a lower resolution, this speeds up the process quite a lot. Nevertheless, it would be great if somebody knows additional tricks to make the process even faster in order to use a higher resolution at the same speed.
Thanks to everyone who thought about my problem, further remarks are appreciated!!!
