OpenCV Python: LUT for 16-bit image?

I have been trying to learn some image processing with OpenCV in Python. I have a 16-bit image, and I would like to apply a LUT conversion to it without reducing it to 8-bit. From the documentation, I read that the LUT function in OpenCV is applicable only to 8-bit images. Does anyone know of an efficient way to use this function for 16-bit images?
I have used LUT conversion for 8-bit images and it works fine, but for 16-bit images the following error is thrown: error: (-215:Assertion failed) (lutcn == cn || lutcn == 1) && _lut.total() == 256 && _lut.isContinuous() && (depth == CV_8U || depth == CV_8S) in function 'cv::LUT'.
Later, I found that this is because the LUT function is applicable only to 8-bit images.

As you've already discovered, OpenCV's LUT method only supports 8-bit LUTs. However, you can implement your own for arbitrary bit depths, and it's actually quite simple. Each value in the image is used directly as an index into the LUT, which outputs the desired value. Because OpenCV interfaces with NumPy, you can index into the LUT with the input image itself to obtain the final output, taking advantage of NumPy array indexing.
First define a LUT - you'll need to ensure it's 16-bit, and I'm assuming you have values that go from 0 to 65535 to respect the 16-bit resolution. Once you do that, index into the table with your image. Here's an example using gamma adjustment:
import numpy as np

def adjust_gamma(image, gamma=1.0):
    # Build a lookup table mapping the pixel values [0, 65535] to
    # their adjusted gamma values
    inv_gamma = 1.0 / gamma
    table = ((np.arange(0, 65536) / 65535.0) ** inv_gamma) * 65535
    # Ensure the table is 16-bit
    table = table.astype(np.uint16)
    # Now just index into the table with the intensities to get the output
    return table[image]
This applies inverse gamma adjustment to an input image: we first generate a 16-bit LUT, then the image directly indexes into it to create the output image. Take note that the input image is also assumed to be 16-bit. If it contains any values beyond the 0-65535 range, this will give you an out-of-bounds indexing error.
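As a quick usage sketch (the filenames here are hypothetical; cv2.IMREAD_UNCHANGED keeps the 16-bit depth on load):
import cv2
import numpy as np

# Load without conversion so the 16-bit depth is preserved
image = cv2.imread('input.tif', cv2.IMREAD_UNCHANGED)
assert image.dtype == np.uint16  # sanity-check that we really got a 16-bit image

adjusted = adjust_gamma(image, gamma=2.2)
cv2.imwrite('output.tif', adjusted)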
Note - Multi-channel images
Take note that the above case assumes a single-channel image. If you want to apply this to multi-channel images (e.g. RGB), you'll need to define a LUT for each channel and apply each LUT to its channel separately. The easiest way to do this is a for loop across all channels. There are definitely more vectorized ways to do this in one shot, but I won't diverge from the intent of your question, and I want this to be as simple to read as possible.
First define a 2D LUT where each row in this matrix is a single LUT. Specifically, row i corresponds to the LUT to apply to channel i of the image. Once you're finished, loop through the channel dimension and apply the LUT. What we can also do to save some time is to preallocate the output image so that it's all zeroes, then fill in each channel accordingly.
Something like:
# Assume the LUT is defined as `table` and it's a 2D NumPy array
output = np.zeros_like(image)
for i in range(image.shape[2]):
    output[..., i] = table[i, image[..., i]]
output will contain the desired result. However, for the special case where the LUT is the same across all channels, you can just use the same 1D LUT you had previously and you can use the same indexing method that I talked about earlier:
output = table[image]
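If you do want distinct per-channel LUTs, here's a minimal sketch of building the 2D table used above - the per-channel gamma values are made up purely for illustration:
import numpy as np

gammas = [0.8, 1.0, 1.2]  # hypothetical per-channel gammas
ramp = np.arange(0, 65536) / 65535.0
table = np.stack([((ramp ** (1.0 / g)) * 65535).astype(np.uint16)
                  for g in gammas])
# table.shape == (3, 65536); row i is the LUT applied to channel i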

Related

numpy transforming RGB image to YIQ color space

For a class, I need to transform an RGB image into YIQ.
We have been told that the conversion can be made by multiplying by a conversion matrix (shown in a figure, with the matrix on the left of the RGB values).
I started to write messy code with loops to do the matrix multiplication, and then I found the function
skimage.color.yiq2rgb(imYIQ)
and when I looked inside to see what it was doing, I saw the following (I'm copying the relevant bits for clarity):
yiq_from_rgb = np.array([[0.299,      0.587,       0.114],
                         [0.59590059, -0.27455667, -0.32134392],
                         [0.21153661, -0.52273617,  0.31119955]])
return np.dot(arr, yiq_from_rgb.T.copy())
where arr is just the RGB picture as a matrix.
I'm trying to understand why this works. Why do they take the transpose of the matrix (.T)?
And how exactly does the dot product work when arr's shape is different from yiq_from_rgb's?
In your reference figure containing the matrix for the conversion, the transformation matrix is on the left of the RGB channels. So, for the first pixel in your RGB image, let's call it (p1r, p1g, p1b) corresponding to R, G, B channels respectively, we need to multiply with the transformation matrix and sum the results like:
y1y = (0.299*p1r + 0.587*p1g + 0.114*p1b)
y1i = (0.596*p1r - 0.275*p1g - 0.321*p1b)
y1q = (0.212*p1r - 0.523*p1g + 0.311*p1b)
where (y1y, y1i, y1q) is the value of the first pixel in the resulting YIQ image, after rounding/taking the int. We do the same kind of multiplication for all the pixels in the whole RGB image and obtain the desired YIQ image.
Now, since they do this whole computation with np.dot(arr, yiq_from_rgb.T), the transformation matrix needs to be transposed for the weighting to work out correctly. The .copy() is just there to have a dedicated copy of the transposed transformation matrix for the purpose of this conversion.
Also, notice that contrary to your figure, in np.dot() the RGB array is on the left of the transformation matrix.
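Here's a small sketch (with a made-up pixel value, just to check the equivalence) showing that the matrix-on-the-left form for a single pixel matches np.dot with the transposed matrix for a whole image:
import numpy as np

yiq_from_rgb = np.array([[0.299,      0.587,       0.114],
                         [0.59590059, -0.27455667, -0.32134392],
                         [0.21153661, -0.52273617,  0.31119955]])

p = np.array([0.2, 0.5, 0.3])          # one made-up RGB pixel
single = yiq_from_rgb @ p              # matrix on the left, as in the figure

img = p.reshape(1, 1, 3)               # the same pixel inside a 1x1 "image"
whole = np.dot(img, yiq_from_rgb.T)    # array on the left, transposed matrix on the right

assert np.allclose(single, whole[0, 0])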

Converting an image into a vector of pixels

I am trying to convert an image into an array of pixels.
Here is my current code.
im = Image.open("beeleg.png")
pixels = im.load()
im.getdata() # doesn't work
print(pixels # doesn't work
Ideally, my end goal is to convert the image into a vector of just pixels, so for instance if I have an image of dimensions 100x100, then I want a vector of dimensions 1x10000, where each value is between [0, 255]. Then, divide each of the values in the array by 256 and add a bias of 1 in the front of the vector. However, I am not able to proceed with all this without being able to obtain an array. How to proceed?
Scipy's ndimage library is generally the go-to library for working with pixels as data (arrays). You can load an image from file (most common formats are supported) using scipy.ndimage.imread into a numpy array, which can be easily reshaped and mathematically operated on. The mode keyword can be used to specify a colorspace transformation upon load (e.g. convert an RGB image to black and white). In your case you asked for single-color pixels from 0-255 (8-bit grayscale), so you would use mode='L'. See the documentation for usage and more useful functions.
If you use OpenCV, gray = cv2.imread(image, 0) will return a grayscale image as an n rows x m cols single-channel numpy array. rows, cols = gray.shape will return the height and width of the image.
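Putting the question's end goal together, here's a minimal sketch along those lines (the filename and the divide-by-256 normalization are taken from the question; treat it as an illustration, not the one true recipe):
import numpy as np
import cv2

gray = cv2.imread("beeleg.png", 0)           # n rows x m cols, single channel
vec = gray.astype(np.float64).reshape(-1)    # flatten into one long vector of n*m values
vec /= 256.0                                 # scale the values as described in the question
vec = np.concatenate(([1.0], vec))           # prepend the bias of 1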

FFT on image with Python

I have a problem with FFT implementation in Python. I have completely strange results.
Ok so, I want to open an image, get the value of every pixel in RGB, then apply the FFT to it, and convert it back to an image.
My steps:
1) I'm opening the image with the PIL library in Python, like this
from PIL import Image
im = Image.open("test.png")
2) I'm getting pixels
pixels = list(im.getdata())
3) I'm separating every pixel into r, g, b values
for x in range(width):
    for y in range(height):
        r, g, b = pixels[x*width + y]
        red[x][y] = r
        green[x][y] = g
        blue[x][y] = b
4) Let's assume that I have one pixel (111, 111, 111), and I use the FFT on all red values like this
red = np.fft.fft(red)
And then:
print (red[0][0], green[0][0], blue[0][0])
My output is:
(53866+0j) 111 111
I think this is completely wrong. My image is 64x64, and the FFT from GIMP is completely different. Actually, my FFT gives me only arrays with huge values, and that's why my output image is black.
Do you have any idea where is problem?
[EDIT]
I've changed as suggested to
red = np.fft.fft2(red)
And after that I scale it
scale = 1.0 / (width * height)
red = abs(red * scale)
And still, I'm getting only a black image.
[EDIT2]
Ok, so let's take one image. Assume that I just want to open it and save it as a greyscale image. So I'm doing it like this.
def getGray(pixel):
    r, g, b = pixel
    return (r + g + b) / 3
im = Image.open("test.png")
im.load()
pixels = list(im.getdata())
width, height = im.size
for x in range(width):
for y in range(height):
greyscale[x][y] = getGray(pixels[x*width+y])
data = []
for x in range(width):
for y in range(height):
pix = greyscale[x][y]
data.append(pix)
img = Image.new("L", (width,height), "white")
img.putdata(data)
img.save('out.png')
After this, I'm getting an image which is ok. So now I want to apply the FFT to my image before saving it to a new one, so I'm doing it like this
scale = 1/(width*height)
greyscale = np.fft.fft2(greyscale)
greyscale = abs(greyscale * scale)
after loading it. After saving it to file, the result is still black. So let's now try opening test.png with GIMP and using the FFT filter plugin; the image I get from that is correct.
How can I handle this?
Great question. I’ve never heard of it but the Gimp Fourier plugin seems really neat:
A simple plug-in to do the Fourier transform on your image. The major advantage of this plugin is to be able to work with the transformed image inside GIMP. You can thus draw or apply filters in Fourier space, and get the modified image with an inverse FFT.
This idea—of doing Gimp-style manipulation on frequency-domain data and transforming back to an image—is very cool! Despite years of working with FFTs, I’ve never thought about doing this. Instead of messing with Gimp plugins and C executables and ugliness, let’s do this in Python!
Caveat. I experimented with a number of ways to do this, attempting to get something close to the output Gimp Fourier image (gray with moiré pattern) from the original input image, but I simply couldn't. The Gimp image appears to be somewhat symmetric around the middle of the image, but it's not flipped vertically or horizontally, nor is it transpose-symmetric. I'd expect the plugin to be using a real 2D FFT to transform an H×W image into an H×W array of real-valued data in the frequency domain, in which case there would be no symmetry (it's only the to-complex FFT that's conjugate-symmetric for real-valued inputs like images). So I gave up trying to reverse-engineer what the Gimp plugin is doing and looked at how I'd do this from scratch.
The code. Very simple: read an image, apply scipy.fftpack.rfft in the leading two dimensions to get the “frequency-image”, rescale to 0–255, and save.
Note how this is different from the other answers! No grayscaling—the 2D real-to-real FFT happens independently on all three channels. No abs needed: the frequency-domain image can legitimately have negative values, and if you make them positive, you can’t recover your original image. (Also a nice feature: no compromises on image size. The size of the array remains the same before and after the FFT, whether the width/height is even or odd.)
from PIL import Image
import numpy as np
import scipy.fftpack as fp

## Functions to go from image to frequency-image and back
im2freq = lambda data: fp.rfft(fp.rfft(data, axis=0), axis=1)
freq2im = lambda f: fp.irfft(fp.irfft(f, axis=1), axis=0)

## Read in data file and transform
data = np.array(Image.open('test.png'))
freq = im2freq(data)
back = freq2im(freq)
# Make sure the forward and backward transforms work!
assert np.allclose(data, back)

## Helper functions to rescale a frequency-image to [0, 255] and save
remmax = lambda x: x / x.max()
remmin = lambda x: x - np.amin(x, axis=(0, 1), keepdims=True)
touint8 = lambda x: (remmax(remmin(x)) * (256 - 1e-4)).astype(int)

def arr2im(data, fname):
    out = Image.new('RGB', data.shape[1::-1])
    out.putdata(list(map(tuple, data.reshape(-1, 3))))
    out.save(fname)

arr2im(touint8(freq), 'freq.png')
(Aside: FFT-lover geek note. Look at the documentation for rfft for details, but I used Scipy's FFTPACK module because its rfft interleaves the real and imaginary components of a single pixel as two adjacent real values, guaranteeing that the output for a 2D image of any size (even or odd width and height) has the same shape as the input. This is in contrast to Numpy's numpy.fft.rfft2 which, because it returns complex data of size width/2+1 by height/2+1, forces you to deal with one extra row/column and to handle the complex-to-real deinterleaving yourself. Who needs that hassle for this application.)
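A small sketch checking that size-preservation claim on an odd-sized array (shapes only, with made-up data):
import numpy as np
import scipy.fftpack as fp

x = np.random.rand(5, 7)                 # an odd-sized 2D array
f = fp.rfft(fp.rfft(x, axis=0), axis=1)
assert f.shape == x.shape                # FFTPACK's rfft preserves the size

g = np.fft.rfft2(x)
assert g.shape == (5, 7 // 2 + 1)        # numpy's rfft2 shrinks the last axis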
Results. Given input named test.png:
this snippet produces the following output (global min/max have been rescaled and quantized to 0-255):
And upscaled:
In this frequency-image, the DC (0 Hz frequency) component is in the top-left, and frequencies move higher as you go right and down.
Now, let’s see what happens when you manipulate this image in a couple of ways. Instead of this test image, let’s use a cat photo.
I made a few mask images in Gimp that I then load into Python and multiply the frequency-image with to see what effect the mask has on the image.
Here’s the code:
# Make frequency-image of cat photo
freq = im2freq(np.array(Image.open('cat.jpg')))
# Load three frequency-domain masks (DSP "filters")
bpfMask = np.array(Image.open('cat-mask-bpfcorner.png')).astype(float) / 255
hpfMask = np.array(Image.open('cat-mask-hpfcorner.png')).astype(float) / 255
lpfMask = np.array(Image.open('cat-mask-corner.png')).astype(float) / 255
# Apply each filter and save the output
arr2im(touint8(freq2im(freq * bpfMask)), 'cat-bpf.png')
arr2im(touint8(freq2im(freq * hpfMask)), 'cat-hpf.png')
arr2im(touint8(freq2im(freq * lpfMask)), 'cat-lpf.png')
Here’s a low-pass filter mask on the left, and on the right, the result—click to see the full-res image:
In the mask, black = 0.0, white = 1.0. So the lowest frequencies are kept here (white), while the high ones are blocked (black). This blurs the image by attenuating high frequencies. Low-pass filters are used all over the place, including when decimating (“downsampling”) an image (though they will be shaped much more carefully than me drawing in Gimp 😜).
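If you'd rather not draw the mask in Gimp, here's a rough sketch of generating a comparable corner low-pass mask in NumPy (the cutoff fraction is a made-up parameter, and a hard-edged block like this is much cruder than a properly shaped filter):
import numpy as np

def corner_lpf_mask(shape, cutoff=0.1):
    # With FFTPACK's rfft, the DC term sits at index [0, 0], so a low-pass
    # mask is 1.0 (white) in a block at the top-left corner and 0.0 elsewhere
    h, w = shape[0], shape[1]
    mask = np.zeros(shape, dtype=float)
    mask[:int(h * cutoff), :int(w * cutoff)] = 1.0
    return mask

# e.g. use freq * corner_lpf_mask(freq.shape) in place of freq * lpfMask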
Here’s a band-pass filter, where the lowest frequencies (see that bit of white in the top-left corner?) and high frequencies are kept, but the middling-frequencies are blocked. Quite bizarre!
Here’s a high-pass filter, where the top-left corner that was left white in the above mask is blacked out:
This is how edge-detection works.
Postscript. Someone, make a webapp using this technique that lets you draw masks and apply them to an image real-time!!!
There are several issues here.
1) Manual conversion to grayscale isn't good. Use Image.open("test.png").convert('L')
2) Most likely there is an issue with types. You shouldn't pass an np.ndarray from fft2 to a PIL image without being sure their types are compatible. abs(np.fft.fft2(something)) will return an array of type np.float64 or something like this, whereas a PIL image is going to expect something like an array of type np.uint8.
3) The scaling suggested in the comments looks wrong. You actually need your values to fit into the 0..255 range.
Here's my code that addresses these 3 points:
import numpy as np
from PIL import Image

def fft(channel):
    fft = np.fft.fft2(channel)
    fft *= 255.0 / fft.max()  # proper scaling into the 0..255 range
    return np.absolute(fft)

input_image = Image.open("test.png")
channels = input_image.split()  # splits an image into R, G, B channels

result_array = np.zeros_like(input_image)  # make sure data types, sizes and numbers
                                           # of channels of the input and output
                                           # numpy arrays are the same

if len(channels) > 1:  # grayscale images have only one channel
    for i, channel in enumerate(channels):
        result_array[..., i] = fft(channel)
else:
    result_array[...] = fft(channels[0])

result_image = Image.fromarray(result_array)
result_image.save('out.png')
I must admit I haven't managed to get results identical to the GIMP FFT plugin. As far as I can see, it does some post-processing. My results are all a kind of very low-contrast mess, and GIMP seems to overcome this by tuning the contrast and scaling down non-informative channels (in your case, all channels except red are just empty). Refer to the image:

Interpreting numpy array obtained from tif file

I need to work with some greyscale tif files and I have been using PIL to import them as images and convert them into numpy arrays:
np.array(Image.open(src))
I want to have a transparent understanding of exactly what the values of these arrays correspond to, and in particular, it was not clear what value was appropriate as a white point or black point for my images. For instance, suppose I wanted to convert this array into an array of floats with pixel values of 1 for white, 0 for black, and other values scaled linearly in between.
I have tried some naive methods including scaling by the maximum value in the array but opening the resulting files, there is always some amount of shift in the color levels.
Is there any documentation for the proper way to understand the values stored in these tif arrays?
A TIFF is basically a file format for storing raster graphics images. It has a lot of specs, and a quick search on the web will get you the resources you need.
The thing is, you are using PIL as your input library. The array you have is likely using a uint8 data type, which means your data can be anywhere within 0 to 255. To obtain the 0 to 1 color range, do the following:
im = np.array(Image.open(src)).astype('float32')/255
Notice your array will likely have 4 layers in the third dimension (im.shape = (i, j, k)), so each trace im[i, j, :] (which represents a pixel) is going to be a quadruplet for an RGBA value.
The R stands for Red (or quantity of Red), G for Green, B for Blue. A is the alpha channel and it is what enables you to have transparency (lower values means less opacity and more transparency).
It can also have three layers for only RGB, or one layer if intended to be plotted in the grey-scale.
In the case where you have RGB (or RGBA, not considering alpha) but need a single value, you should understand that there are quite a few different ways of doing this. In this post, @denis recommends the use of the following formulation:
Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma
where gamma is 2.2 for many PCs. The usual R G B are sometimes written
as R' G' B' (R' = Rlin ^ (1/gamma)) (purists tongue-click) but here
I'll drop the '.
And finally L* = 116 * Y^(1/3) - 16 to obtain the luminance.
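As a rough sketch of those formulas in NumPy (reusing src from above, assuming the 0-1-scaled array; the gamma handling follows the quoted convention, so treat this as illustrative rather than definitive):
import numpy as np
from PIL import Image

im = np.array(Image.open(src)).astype('float32') / 255
rgb = im[..., :3]                        # drop the alpha layer if present
gamma = 2.2
lin = rgb ** gamma                       # R'G'B' -> linear RGB, per the quote
Y = 0.2126 * lin[..., 0] + 0.7152 * lin[..., 1] + 0.0722 * lin[..., 2]
L = 116 * np.cbrt(Y) - 16                # luminance L*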
I recommend reading his post. Also consider looking into the following concepts:
RGB Colors model
Gamma correction
Tagged Image File Format
Pillow documentation of TIFF
Working with TIFFs (import, export) in Python using numpy

How to get an average picture from 100 pictures using PIL?

For example, I have 100 pictures with the same resolution, and I want to merge them into one picture. For the final picture, the RGB value of each pixel is the average of the 100 pictures' values at that position. I know the getdata function can work in this situation, but is there a simpler and faster way to do this in PIL (Python Imaging Library)?
Let's assume that your images are all .png files and they are all stored in the current working directory. The python code below will do what you want. As Ignacio suggests, using numpy along with PIL is the key here. You just need to be a little bit careful about switching between integer and float arrays when building your average pixel intensities.
import os, numpy, PIL
from PIL import Image

# Access all PNG files in directory
allfiles = os.listdir(os.getcwd())
imlist = [filename for filename in allfiles if filename[-4:] in [".png", ".PNG"]]

# Assuming all images are the same size, get dimensions of first image
w, h = Image.open(imlist[0]).size
N = len(imlist)

# Create a numpy array of floats to store the average (assume RGB images)
arr = numpy.zeros((h, w, 3), numpy.float64)

# Build up average pixel intensities, casting each image as an array of floats
for im in imlist:
    imarr = numpy.array(Image.open(im), dtype=numpy.float64)
    arr = arr + imarr/N

# Round values in array and cast as 8-bit integer
arr = numpy.array(numpy.round(arr), dtype=numpy.uint8)

# Generate, save and preview final image
out = Image.fromarray(arr, mode="RGB")
out.save("Average.png")
out.show()
The image below was generated from a sequence of HD video frames using the code above.
I find it difficult to imagine a situation where memory is an issue here, but in the (unlikely) event that you absolutely cannot afford to create the array of floats required for my original answer, you could use PIL's blend function, as suggested by @mHurley, as follows:
# Alternative method using PIL blend function
avg = Image.open(imlist[0])
for i in range(1, N):
    img = Image.open(imlist[i])
    avg = Image.blend(avg, img, 1.0/float(i+1))
avg.save("Blend.png")
avg.show()
You could derive the correct sequence of alpha values, starting with the definition from PIL's blend function:
out = image1 * (1.0 - alpha) + image2 * alpha
Think about applying that function recursively to a vector of numbers (rather than images) to get the mean of the vector. For a vector of length N, you would need N-1 blending operations, with N-1 different values of alpha.
However, it's probably easier to think intuitively about the operations. At each step you want the avg image to contain equal proportions of the source images from earlier steps. When blending the first and second source images, alpha should be 1/2 to ensure equal proportions. When blending the third with the average of the first two, you would like the new image to be made up of 1/3 of the third image, with the remainder made up of the average of the previous images (the current value of avg), and so on.
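In other words, the alpha at step k is 1/k. A tiny numeric sketch (with made-up values) to check that this recursion really yields the mean:
# Running mean via repeated blending: after step k, avg holds the mean
# of the first k values, since avg_k = avg_{k-1}*(1 - 1/k) + x_k*(1/k)
vals = [10.0, 20.0, 30.0, 40.0]
avg = vals[0]
for k, v in enumerate(vals[1:], start=2):
    avg = avg*(1 - 1.0/k) + v*(1.0/k)
assert abs(avg - sum(vals)/len(vals)) < 1e-9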
In principle this new answer, based on blending, should be fine. However, I don't know exactly how the blend function works, which makes me worry about how the pixel values are rounded after each iteration.
The image below was generated from 288 source images using the code from my original answer:
On the other hand, this image was generated by repeatedly applying PIL's blend function to the same 288 images:
I hope you can see that the outputs from the two algorithms are noticeably different. I expect this is because of the accumulation of small rounding errors during repeated application of Image.blend.
I strongly recommend my original answer over this alternative.
One can also use numpy's mean function for averaging. The code looks better and works faster.
Here the comparison of timing and results for 700 noisy grayscale images of faces:
def average_img_1(imlist):
    # Assuming all images are the same size, get dimensions of first image
    w, h = Image.open(imlist[0]).size
    N = len(imlist)
    # Create a numpy array of floats to store the average (assume grayscale images)
    arr = np.zeros((h, w), np.float64)
    # Build up average pixel intensities, casting each image as an array of floats
    for im in imlist:
        imarr = np.array(Image.open(im), dtype=np.float64)
        arr = arr + imarr/N
    out = Image.fromarray(arr)
    return out

def average_img_2(imlist):
    # Alternative method using PIL blend function
    N = len(imlist)
    avg = Image.open(imlist[0])
    for i in range(1, N):
        img = Image.open(imlist[i])
        avg = Image.blend(avg, img, 1.0/float(i+1))
    return avg

def average_img_3(imlist):
    # Alternative method using numpy mean function
    images = np.array([np.array(Image.open(fname)) for fname in imlist])
    arr = np.array(np.mean(images, axis=0), dtype=np.uint8)
    out = Image.fromarray(arr)
    return out
average_img_1()
100 loops, best of 3: 362 ms per loop
average_img_2()
100 loops, best of 3: 340 ms per loop
average_img_3()
100 loops, best of 3: 311 ms per loop
BTW, the results of the averaging are quite different. I think the first method loses information during averaging, and the second one has some artifacts.
average_img_1
average_img_2
average_img_3
In case anybody is interested in a blueprint numpy solution (I was actually looking for it), here's the code:
mean_frame = np.mean(frames, axis=0)
I would consider creating an array of x by y integers all starting at (0, 0, 0), then for each pixel in each file adding the RGB value in, dividing all the values by the number of images, and then creating the image from that - you will probably find that numpy can help.
I ran into MemoryErrors when trying the method in the accepted answer. I found a way to optimize it that seems to produce the same result. Basically, you blend one image at a time, instead of adding them all up and dividing.
N = len(images_to_blend)
avg = Image.open(images_to_blend[0])
for im in images_to_blend:  # assuming your list is filenames, not images
    img = Image.open(im)
    avg = Image.blend(avg, img, 1/N)
avg.save(blah)
This does two things: you don't have to hold two very dense copies of the image while you're turning the image into an array, and you don't have to use 64-bit floats at all. You get similarly high precision with smaller numbers. The results APPEAR to be the same, though I'd appreciate it if someone checked my math.
