What would be the equivalent of imagesc in OpenCV?
To get the nice colors of imagesc you have to play around with OpenCV a little bit. In OpenCV 2.4.6 there is a colormap option.
This is the code I use in C++. I'm sure it's very similar in Python.
mydata.convertTo(display, CV_8UC1, 255.0 / 10000.0, 0);
applyColorMap(display, display, cv::COLORMAP_JET);
imshow("imagesc",display);
The image data or matrix data is stored in mydata. I know that it has a maximum value of 10000, so I scale it down to the [0, 1] range and then multiply by 255, the maximum of CV_8UC1. If you don't know the range, the best option is to first convert your matrix the same way Matlab does it.
EDIT
Here is a version which automatically normalizes your data.
float Amin = *min_element(mydata.begin<float>(), mydata.end<float>());
float Amax = *max_element(mydata.begin<float>(), mydata.end<float>());
Mat A_scaled = (mydata - Amin)/(Amax - Amin);
A_scaled.convertTo(display, CV_8UC1, 255.0, 0);
applyColorMap(display, display, cv::COLORMAP_JET);
imshow("imagesc",display);
It's close to imshow in matlab.
It depends on which modules you use in Python:
import cv2
import cv2.cv as cv
I_cv2 = cv2.imread("image.jpg")
I_cv = cv.LoadImage("image.jpg")
#I_cv2 is a numpy.ndarray, so normalization can be done easily
I_cv2 = I_cv2.astype('float32')  # avoid integer math during normalization
I_cv2_norm = (I_cv2 - I_cv2.min()) / (I_cv2.max() - I_cv2.min())
cv2.imshow("cv2Im scaled", I_cv2_norm)
#Here you have to normalize your cv IplImage yourself, as explained by twerdster
cv.ShowImage("cvIm unscaled",I_cv)
The best way, I think, to get close to imagesc is to use cv2.imread, which loads the image as a numpy.ndarray, and then use the imshow function from the matplotlib.pyplot module:
import cv2
from matplotlib.pyplot import imshow, show
I = cv2.imread("path")
#signature:
imshow(I, cmap=None, norm=None, aspect=None, interpolation=None,
alpha=None, vmin=None, vmax=None, origin=None, extent=None,
**kwargs)
Here you can choose whatever you want: whether to normalize, your clims (scale), and so on...
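For example, a minimal sketch that mimics imagesc with an explicit color scale (the image path is a placeholder):
import cv2
from matplotlib.pyplot import imshow, colorbar, show
I = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
imshow(I, cmap='jet', vmin=I.min(), vmax=I.max())  # imagesc-like scaling
colorbar()
show()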
Related
I made a 3D array, which consists of numbers (0~4). What I want is to save the 3D array as a stack of 2D images (if possible, as a *.tiff file). What am I supposed to do?
import numpy as np
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
Actually, I worked it out. This is my code.
With this code, I don't need to stack a series of 2D images (arrays).
Make a 3D array and save it. That is just what I did here.
import numpy as np
from skimage.external import tifffile as tif
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
tif.imsave('a.tif', a, bigtiff=True)
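If you want to double-check the result, the same module should also be able to read the stack back (a quick sketch, untested):
b = tif.imread('a.tif')
print(b.shape)  # expected to round-trip as (100, 100, 100)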
This should work. I haven't tested it but I have separated color images into RGB slices using this method and it should work pretty much the same way here, assuming you don't want to do anything with those pixel values first. (They will be very close to the same color in an image).
import imageio
import numpy as np
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
for i in range(100):
    newimage = a[:, :, i]
    imageio.imwrite("path/to/image%d.tiff" % i, newimage)
What exactly do you mean by "stack"? As you refer to tiff as the output format, I assume you want your data in one file, as a multiframe tiff.
This can easily be done with imageio's mimwrite() function:
# import numpy as np
# a = np.random.randint(0,5, size=(100,100,100))
# a = a.astype('int8')
import imageio
imageio.mimwrite("image.tiff", a)
Note that this function relies on having the counter for your several frames as the first dimension, with x and y following. See also its documentation.
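If your frames happened to be stacked along a different axis, you could move that axis to the front first, e.g. (a small sketch, assuming numpy is imported as np as in the question):
# e.g. if the frames were along the last axis instead of the first
a_frames_first = np.moveaxis(a, -1, 0)
imageio.mimwrite("image.tiff", a_frames_first)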
However, if I'm wrong and you want to have n (e.g. 100) separate tif-files, you can also use the normal imwrite() function in a loop:
n = len(a)
for i in range(n):
    imageio.imwrite(f'image_{i:03}.tiff', a[i])
I am confused about how matplotlib handles fp32 pixel intensities. To my understanding, it rescales the values between the max and min values of the image. However, when I try to view images originally in [0,1] after rescaling their pixel intensities to [-1,1] (by im*2-1) using imshow(), the image appears differently colored. How do I rescale so that the images don't differ?
EDIT : Please look at the image -
PS: I need to do this as part of a program that outputs those values in [-1,1]
Following is the code used for this:
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc

img = np.float32(misc.face(gray=False))
fig,ax = plt.subplots(1,2)
img = img/255 # Convert to 0,1 range
print (np.max(img), np.min(img))
img0 = ax[0].imshow(img)
plt.colorbar(img0,ax=ax[0])
print (np.max(2*img-1), np.min(2*img-1))
img1 = ax[1].imshow(2*img-1) # Convert to -1,1 range
plt.colorbar(img1,ax=ax[1])
plt.show()
The max,min output is :
(1.0, 0.0)
(1.0, -1.0)
You are probably using matplotlib wrong here.
The normalization step should work correctly if it is active. The docs tell us that it is only active by default if the input image is of type float!
Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
fig, ax = plt.subplots(2,2)
# This usage shows different colors because there is no normalization
# FIRST ROW
f = misc.face(gray=True)
print(f.dtype)
g = f*2 # just some operation to show the difference between usages
ax[0,0].imshow(f)
ax[0,1].imshow(g)
# This usage makes sure that the input-image is of type float
# -> automatic normalization is used!
# SECOND ROW
f = np.asarray(misc.face(gray=True), dtype=float) # TYPE!
print(f.dtype)
g = f*2 # just some operation to show the difference between usages
ax[1,0].imshow(f)
ax[1,1].imshow(g)
plt.show()
Output
uint8
float64
Analysis
The first row shows the wrong usage, because the input is of type int and therefore no normalization will be used.
The second row shows the correct usage!
EDIT:
sascha has correctly pointed out in the comments that rescaling is not applied to RGB images; their inputs must be ensured to be in the [0,1] range.
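So for RGB data in [-1,1], one fix (a minimal sketch of that idea, reusing the question's setup) is to map the values back to [0,1] before plotting:
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
img = np.float32(misc.face(gray=False)) / 255  # RGB in [0, 1]
img_scaled = 2 * img - 1                       # RGB in [-1, 1]
# map back to [0, 1] so imshow renders both versions identically
plt.imshow((img_scaled + 1) / 2)
plt.show()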
For the same matrix, the images generated by the imshow() functions of matplotlib and Matlab are different. How can I change the parameters of imshow() in matplotlib to get the same result as in Matlab?
%matlab
img = 255*rand(101);
img(:,1:50)=3;
img(:,52:101)=1;
img(:,51)=2;
trans_img=imtranslate(img,[3*cos(pi/3),3*sin(pi/3)]);
imshow(trans_img)
This is an image generated by matlab
#python
import numpy as np
import matplotlib.pyplot as plt
from mlab.releases import latest_release as mtl #call matlab function
img = 255 * np.random.uniform(0, 1, (101, 101))
img[:, 51:101] = 1
img[:, 0:50] = 3
img[:, 50] = 2
trans_img = mtl.imtranslate(img, [[3*math.cos(math.pi/3),3*math.sin(math.pi/3)]]
i = plt.imshow(trans_img, cmap=plt.cm.gray)
plt.show(i)
This is an image generated by matplotlib
The trans_img matrix is the same in both cases, but the images in matlab and python are different
Unfortunately I don't have an up-to-date enough version of Matlab that has the imtranslate function, but thankfully the image package in Octave does, and I'm sure it is equivalent. As a result, I will be using the oct2py module instead of mlab, so that Python can access Octave's imtranslate function.
Octave code:
img = 255*rand(101);
img(:,1:50)=3;
img(:,52:101)=1;
img(:,51)=2;
trans_img = imtranslate(img, 3*cos(pi/3),3*sin(pi/3));
imshow(trans_img, [min(trans_img(:)), max(trans_img(:))])
Python code:
import numpy as np
import matplotlib.pyplot as plt
import math
from oct2py import octave
octave.pkg('load','image'); # load image pkg for access to 'imtranslate'
img = 255 * np.random.uniform(0, 1, (101, 101))
img[:, 51:101] = 1
img[:, 0:50] = 3
img[:, 50] = 2
trans_img = octave.imtranslate(img, 3*math.cos(math.pi/3), 3*math.sin(math.pi/3))
i = plt.imshow(trans_img, cmap=plt.cm.gray)
plt.show(i)
Resulting image (identical) in both cases:
My only comment on why you may have been seeing the discrepancy is that I did specify the min and max values in imshow to ensure appropriate intensity scaling. Equally, you could have just used imagesc(trans_img) instead (I actually prefer this). I didn't specify such limits explicitly in Python for plt.imshow; perhaps it performs scaling by default.
Also, your code has a small bug; in the octave version of imtranslate at least, the function takes 3 arguments, not two. (Also, your original code has an unbalanced bracket).
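For reference, the matplotlib counterpart of that explicit scaling would be along these lines (a sketch, reusing trans_img from the code above):
i = plt.imshow(trans_img, cmap=plt.cm.gray,
               vmin=trans_img.min(), vmax=trans_img.max())
plt.colorbar(i)
plt.show()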
How to do histogram equalization for multiple grayscaled images stored in a NumPy array easily?
I have the 96x96 pixel NumPy data in this 4D format:
(1800, 1, 96,96)
Moose's comment which points to this blog entry does the job quite nicely.
For completeness, I give an example here using nicer variable names and a looped execution on 1000 96x96 images which are in a 4D array as in the question. It is fast (1-2 seconds on my computer) and only needs NumPy.
import numpy as np
def image_histogram_equalization(image, number_bins=256):
    # from http://www.janeriksolem.net/histogram-equalization-with-python-and.html
    # get image histogram
    image_histogram, bins = np.histogram(image.flatten(), number_bins, density=True)
    cdf = image_histogram.cumsum() # cumulative distribution function
    cdf = (number_bins-1) * cdf / cdf[-1] # normalize
    # use linear interpolation of cdf to find new pixel values
    image_equalized = np.interp(image.flatten(), bins[:-1], cdf)
    return image_equalized.reshape(image.shape), cdf

if __name__ == '__main__':
    # generate some test data with shape 1000, 1, 96, 96
    data = np.random.rand(1000, 1, 96, 96)
    # loop over them
    data_equalized = np.zeros(data.shape)
    for i in range(data.shape[0]):
        image = data[i, 0, :, :]
        data_equalized[i, 0, :, :] = image_histogram_equalization(image)[0]
A very fast and easy way is to use the cumulative distribution function provided by the skimage module. It is basically what you would do mathematically to prove it.
from skimage import exposure
from skimage.color import rgb2gray  # needed for the grayscale conversion below
import numpy as np

def histogram_equalize(img):
    img = rgb2gray(img)
    img_cdf, bin_centers = exposure.cumulative_distribution(img)
    return np.interp(img, bin_centers, img_cdf)
As of today, janeriksolem's URL is broken.
I found however this gist that links the same page and claims to perform histogram equalization without computing the histogram.
The code is:
img_eq = np.sort(img.ravel()).searchsorted(img)
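A small usage sketch of that one-liner (assuming a 2D uint8 image img), rescaling the resulting ranks back into the usual 0-255 range:
import numpy as np
img = np.random.randint(0, 256, size=(96, 96)).astype(np.uint8)  # example image
ranks = np.sort(img.ravel()).searchsorted(img)           # the one-liner above
img_eq = (255 * ranks / ranks.max()).astype(np.uint8)    # back to 0..255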
Here's an alternate implementation for a single-channel image that is fast. See skimage.exposure.histogram for reference. Using timeit, 'image_histogram_equalization' in Trilarion's answer had a mean execution time of 0.3696 seconds, while this function had a mean execution time of 0.0534 seconds. However, this implementation also relies on skimage.
import numpy as np
from skimage import exposure
def hist_eq(image):
    hist, bins = exposure.histogram(image, nbins=256, normalize=False)
    # append any remaining 0 values to the histogram
    hist = np.hstack((hist, np.zeros((255 - bins[-1]))))
    cdf = 255*(hist/hist.sum()).cumsum()
    equalized = cdf[image].astype(np.uint8)
    return equalized
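A rough timing sketch along the lines of the comparison above (random uint8 test image; numbers will vary by machine):
import timeit
image = np.random.randint(0, 256, size=(96, 96)).astype(np.uint8)
print(timeit.timeit(lambda: hist_eq(image), number=100))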
I'm new to python and matplotlib and I was wondering whether anyone knew if there were any utilities available to do the equivalent of histogram equalization, but applied to a matplotlib color table? There is a function called matplotlib.colors.Normalize which, if given an image array, will automatically set the bottom and top levels, but I want something more intelligent than this. I could always just apply histogram equalization to the data itself, but I would rather not touch the data. Any thoughts?
You have to create your own image-specific colormap, but it's not too tricky:
import pylab
import matplotlib.colors
import numpy
im = pylab.imread('lena.png').sum(axis=2) # make grayscale
pylab.imshow(im, cmap=pylab.cm.gray)
pylab.title('orig')
imvals = numpy.sort(im.flatten())
lo = imvals[0]
hi = imvals[-1]
steps = (imvals[::len(imvals) // 256] - lo) / (hi - lo)  # integer step for the slice
num_steps = float(len(steps))
interps = [(s, idx/num_steps, idx/num_steps) for idx, s in enumerate(steps)]
interps.append((1, 1, 1))
cdict = {'red' : interps,
'green' : interps,
'blue' : interps}
histeq_cmap = matplotlib.colors.LinearSegmentedColormap('HistEq', cdict)
pylab.figure()
pylab.imshow(im, cmap=histeq_cmap)
pylab.title('histeq')
pylab.show()
Histogram equalization can be applied by modifying the palette (or LUT) of your image, so what you need is a palette definition that is equalized.
I searched a bit and couldn't find source code for computing an equalized palette, so unless something exists already you would have to code it yourself.
You could get started with the description of the algorithm in the Wikipedia article.
You could also ask for help on the matplotlib lists.
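If you do end up coding it yourself, a bare-bones sketch of such an equalized grayscale palette could look like this (im is assumed to be a 2D image array):
import numpy as np
import matplotlib.colors
# space 256 gray levels according to the image's own CDF
hist, bin_edges = np.histogram(im.ravel(), bins=256)   # im: your 2D image array
cdf = hist.cumsum() / float(hist.sum())
gray = np.repeat(cdf[:, None], 3, axis=1)              # (256, 3) RGB entries
histeq_cmap = matplotlib.colors.ListedColormap(gray)
# then: pylab.imshow(im, cmap=histeq_cmap)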