Show a 2D array (grayscale image) as a heatmap in Python

I'm a beginner at working with images in Python. I have a 2D array, 500 x 500 pixels (array((500, 500))), which I usually display as a grayscale image, and I'd like to display it as a color image instead, as a heatmap,
to be displayed like this:
I tried, but I couldn't find the answer on the internet, and what I found didn't work for me. Please help.
I don't really have much code; I only know that this:
my_img = plt.imread(filename)
plt.imshow(my_img, cmap="hot")
doesn't work; it displays the same image, in grayscale.

Give pcolor a try. It's the more usual analogue of a "heatmap". imshow is more aligned with displaying images true to the color values in the array. The fact that your ideal image is inverted relative to your practice image also suggests pcolor is the better choice.
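For illustration, a minimal sketch of the pcolor approach (using pcolormesh, its faster sibling, with a random array standing in for your data):

import numpy as np
import matplotlib.pyplot as plt

# Stand-in for the real data: a hypothetical 500x500 single-channel array.
arr = np.random.rand(500, 500)

# pcolormesh maps 2D values through a colormap. Note it expects one
# channel; if plt.imread gave you an RGB array, reduce it to a single
# channel first (e.g. arr = my_img[:, :, 0]).
plt.pcolormesh(arr, cmap="hot")
plt.colorbar()  # scale bar so the mapping from value to color stays readable
plt.show()

Note that pcolormesh puts row 0 at the bottom, while imshow puts it at the top, which matches the inversion mentioned above.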

Related

Magnify image based on rectangle points selected on image

I am working on some code in Python and I came across a figure in a report that I would like to replicate.
Basically, I would like to draw a bounding box on the original image, and then crop and display the part of the image inside the box (essentially to 'magnify' that section). A starting sketch is given below.
I've been googling but I can't seem to find the right function to achieve this. Currently, OpenCV is used to read my image, but if there is a function in matplotlib that does this, then you can suggest that too.
Thank you for your help!
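As a starting sketch (assuming OpenCV to read the image and matplotlib to display, per the question; the filename and box coordinates are hypothetical): the crop is plain NumPy slicing, and a Rectangle patch marks the box on the original.

import cv2
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

img = cv2.imread("photo.png")               # hypothetical filename
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR; matplotlib expects RGB

x, y, w, h = 100, 50, 200, 150              # hypothetical bounding box (pixels)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img)
ax1.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor="red"))  # mark the box
ax2.imshow(img[y:y + h, x:x + w])           # crop = array slicing: rows, then columns
plt.show()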

Scikit-Image Questions (specifically re: `blob_log`)

I'm trying to use blob detection in scikit-image. blob_log is supposed to return an Nx3 array for a 2D image, or an Nx4 array for a 3D image. The three values for a 2D image are (x, y, sigma) and the four for a 3D image are (p, x, y, sigma).
I'm attempting to load this image into my code which looks like it has quite a few observable blobs & it is a 2D image.
I've got a few questions:
(1) blob_log is returning an Nx4 array, which means it is loading the image as 3D. When I try to print it, it looks like it's just a bunch of empty arrays, which I don't understand, because when I plt.show() it, it is a 2D image.
(2) If N is the number of blobs it has found, then it is only giving me less than 10% of the total blobs - I believe this is because the image is on a white background, making it harder for blob_log to notice them - is that correct?
(3) I don't understand how the for loop at the end of the blob documentation works. How is it plotting the circles over the image? I'm sorry if this is an elementary question, but it's frustrating me, because I think understanding it would help me with some of the other things I was wondering about.
Attempts to figure out what is going on:
(1) Loading data.coins() and printing it gives me a nice array of values, which I assume is the 2D data; it still doesn't explain why the image I want to load isn't being recognized as 2D.
(2) I tried loading data.coins(), which is obviously a provided image with circular objects, and fooled around with the sigma and threshold settings; I'm getting a variety of different values depending on the settings - is there a good way of figuring out which are best, without fooling around with the settings until I get one that works?
Due to the length of my code and my question, below are just the applicable parts, but my entire code can be found here
from skimage import data, feature, exposure, io
import matplotlib
import matplotlib.pyplot as plt

img = data.coins()
#img = io.imread('gfp_test.png')  # the image linked above, in my dir

print(img)
print(type(img))

A = feature.blob_log(img, max_sigma=30, num_sigma=10, threshold=.4)
print(A)
Thank you for your help!
(1) You have a color image, while blob_* expect a grayscale image. Use skimage.color.rgb2gray to convert your image to grayscale before using the blob finding functions. See our crash course on NumPy for images for more details. (A short sketch putting this together follows this answer.)
(2) Let's see if the above fixes your problem. I think blob finding is a local operation, so the white frame around the edges is probably not a problem.
(3) Yes, the variable naming could be clearer. The key is here: sequence = zip(blobs_list, colors, titles). If you look at what those individual variables are, they are length-3 lists with the results from the three different blob-finding methods, three different colors, and three different titles (the names of the three methods). So the outer for-loop is iterating through the methods, and the three panels of the figure. (You should look at the matplotlib documentation for subplots for more on this.)
The inner loop, then, is going through the results of a single blob-finding method and putting a circle around each result. You'll see the x/y transposed, and this is a consequence of the different coordinate conventions between our images (see the crash course linked above) and the matplotlib canvas. Then we create a circle with the appropriate radius for each blob, and add it to the matplotlib axes. See the examples linked from the Circle documentation for more information on adding patches.
Hope this helps!
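Putting (1) and (3) together, a minimal sketch (the filename is the one from the question; the threshold value is only a guess you will need to tune):

import matplotlib.pyplot as plt
from skimage import io, color, feature

img = io.imread('gfp_test.png')
gray = color.rgb2gray(img)    # collapse RGB to one channel, so blob_log sees a 2D image
blobs = feature.blob_log(gray, max_sigma=30, num_sigma=10, threshold=0.1)
print(blobs.shape)            # now Nx3: (row, col, sigma)

fig, ax = plt.subplots()
ax.imshow(gray, cmap='gray')
for row, col, sigma in blobs:
    # note the transposed order: matplotlib wants (x, y) = (col, row);
    # for blob_log in 2D, the blob radius is approximately sqrt(2) * sigma
    ax.add_patch(plt.Circle((col, row), sigma * 2 ** 0.5, fill=False, color='red'))
plt.show()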

Grayscale image saved with matplotlib has multiple channels when loaded

I am saving images using
import matplotlib.pyplot as plt
plt.imsave(img_path, img_arr, cmap='gray')  # img_arr has shape (512, 512)
...
img = plt.imread(img_path)
and img.shape returns (512, 512, 4), whereas I expect it to be just (512, 512).
I thought maybe all the channels would be the same, so I could just pick one, but np.allclose(img[:,:,0], img_arr)
returns False no matter which index I choose. Printing the images, they are indeed the correct ones I am comparing, as they do look almost identical (by eye), but they are obviously not exactly identical.
I also tried saving the images with cv2, but that seems to save a black box for some reason. Loading them with cv2.imread(img_path, 0) does return a (512, 512) array, but something seems to be lost, because again np.allclose() tells me they're different.
I wanted to know if there is a good way to save grayscale images. Every method I try seems to convert them to RGB or RGBA, which is really annoying. Also, I would like to preserve the dtype (int16) of the image, as downsampling it loses important information.
Thanks in advance.
You cannot preserve a bit depth of 16 bits when saving images with matplotlib using any of the default colormaps, which only have 256 colors (= 8 bits).
In addition, matplotlib converts the pixel values to floats, which may be a source of rounding errors.
In total, matplotlib does not seem to be the optimal tool if you need perfect accuracy.
That being said, even PIL does not seem to allow for 16 bit single channel pngs. There is a possible solution in this question, but I haven't tested it.
In any case, a bulletproof way to save your array without accuracy loss is to save it with numpy: np.save("arr.npy", im_arr).
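A quick sketch of that lossless round trip (the array here is random stand-in data):

import numpy as np

im_arr = np.random.randint(-32768, 32768, size=(512, 512), dtype=np.int16)  # stand-in

np.save("arr.npy", im_arr)       # writes the raw buffer plus dtype/shape metadata
restored = np.load("arr.npy")

print(restored.dtype)                    # int16, so the bit depth survives
print(np.array_equal(im_arr, restored))  # True: no accuracy loss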

Trouble with pyplot displaying resized images in Python

This is my first Stack Overflow question, so please correct me if it's not a good one:
I am currently processing a bunch of grayscale images as numpy ndarrays (dtype=uint8) in Python 2.7. When I resize the images using resized = misc.imresize(image, .1), the resulting image will sometimes show up with different gray levels when I plot it with pyplot. Here is what my code looks like (I would post an image of the result, but I do not have the reputation points yet):
import cv2
from scipy import misc
from matplotlib import pyplot as plt

image = cv2.imread("gray_image.tif", cv2.CV_LOAD_IMAGE_GRAYSCALE)
resized = misc.imresize(image, .1)

plt.subplot(1, 2, 1), plt.imshow(image, "gray")
plt.subplot(1, 2, 2), plt.imshow(resized, "gray")
plt.show()
If I write the image to a file, the gray level appears normal.
If I compare the average gray level using numpy:
np.average(image) and np.average(resized),
the average gray level values are about the same, as one would expect.
If I display the image with cv2.imshow, the gray level appears normal.
It's not only an issue with resizing: the gray level also gets screwy when I add images together (when most of one image is black and shouldn't darken the result), and when I build an image pixel by pixel, such as in:
import numpy as np

image_copy = np.zeros(image.shape)
for row in range(image.shape[0]):
    for col in range(image.shape[1]):
        image_copy[row, col] = image[row, col]

plt.imshow(image_copy, "gray")  # <-- will sometimes show up darker than the original image
plt.show()
Does anyone have an idea as to what may be going on?
I apologize for the wordiness and lack of clarity in this question.
imshow automatically scales the color information to fit the whole available range. After resizing, the range of values may be smaller, resulting in changes of the apparent color (but not of the actual values, which explains why saved images look fine).
You likely want to tell imshow not to scale your colors. This can be done using the vmin and vmax arguments as explained in the documentation. You probably want to use something like plt.imshow(image, "gray", vmin=0, vmax=255) to achieve an invariant appearance.
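A small sketch of the difference, assuming uint8 data in the 0-255 range (the arrays are stand-ins):

import numpy as np
from matplotlib import pyplot as plt

img = np.random.randint(0, 256, (100, 100), dtype=np.uint8)  # stand-in image
dark = img // 2                                              # half the dynamic range

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(dark, cmap="gray")                    # autoscaled: looks as bright as img
ax2.imshow(dark, cmap="gray", vmin=0, vmax=255)  # fixed range: visibly darker
plt.show()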

Display and Save Large 2D Matrix with Full Resolution in Python

I have a large 2D array (4000x3000) saved as a numpy array which I would like to display and save while keeping the ability to look at each individual pixel.
For the display part, I currently use matplotlib's imshow() function, which works very well.
For the saving part, it is not clear to me how I can save this figure and preserve the information contained in all 12M pixels. I tried adjusting the figure size and the resolution (dpi) of the saved image, but it is not obvious which figsize/dpi settings match the resolution of the large 2D matrix displayed. Here is an example of what I'm doing (arr is a numpy array of shape (3000, 4000)):
import pylab

fig = pylab.figure(figsize=(16, 12))
pylab.imshow(arr, interpolation='nearest')
fig.savefig("image.png", dpi=500)
One option would be to increase the resolution of the saved image substantially, to be sure all pixels are properly recorded, but this has the significant drawback of creating an extremely large file (much larger than the 4000x3000-pixel image which is all I really need). It also has the disadvantage that not all pixels end up exactly the same size.
I also had a look at the Python Imaging Library (PIL), but it is not clear to me how it could be used for this purpose, if at all.
Any help on the subject would be much appreciated!
I think I found a solution which works fairly well. I use figimage to plot the numpy array without resampling. If you're careful with the size of the figure you create, you can keep the full resolution of your matrix, whatever its size.
I figured out that figimage plots a single pixel with size 0.01 inch (this matches matplotlib's default figure dpi of 100, one pixel being 1/dpi inch, so the number may be system dependent). The following code will therefore save the matrix at full resolution (arr is a numpy array of shape (3000, 4000)):
import pylab
from matplotlib import cm

rows = 3000
columns = 4000

# one pixel = 0.01 inch at the default 100 dpi, so size the figure to match
fig = pylab.figure(figsize=(columns * 0.01, rows * 0.01))
pylab.figimage(arr, cmap=cm.jet, origin='lower')
fig.savefig("image.png")
Two issues I still have with this option:
there are no markers indicating column/row numbers, making it hard to know which pixel is which except for the ones on the edges
if you decide to interactively look at the image, it is not possible to zoom in/out
A solution that also solves these two issues would be terrific, if it exists.
The OpenCV library was designed for scientific analysis of images. Consequently, it doesn't "resample" images unless you explicitly ask for it. To save an image:
import cv2
cv2.imwrite('image.png', arr)
where arr is your numpy array. The saved image will be the same size as your array arr.
You didn't mention the color model you are using. PNGs, like JPEGs, are usually 8 bits per color channel. OpenCV will support up to 16 bits per channel if you request it.
Documentation on OpenCV's imwrite is here.
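For the 16-bit case mentioned above, a brief sketch (assuming your data fits in uint16; the array here is a stand-in):

import numpy as np
import cv2

arr16 = np.random.randint(0, 65536, size=(3000, 4000), dtype=np.uint16)  # stand-in

cv2.imwrite('image16.png', arr16)  # PNG supports 16 bits per channel
check = cv2.imread('image16.png', cv2.IMREAD_UNCHANGED)  # keep bit depth on read
print(check.dtype, np.array_equal(arr16, check))         # uint16 True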
