NumPy, PIL adding an image - python

I'm trying to add two images together using NumPy and PIL. The way I would do this in MATLAB would be something like:
>> M1 = imread('_1.jpg');
>> M2 = imread('_2.jpg');
>> resM = M1 + M2;
>> imwrite(resM, 'res.jpg');
I get something like this:
(MATLAB result image removed)
Adding the images in a compositing program gives a similar result, so the MATLAB output seems to be right.
In Python I'm trying to do the same thing like this:
from PIL import Image
from numpy import *
im1 = Image.open('/Users/rem7/Desktop/_1.jpg')
im2 = Image.open('/Users/rem7/Desktop/_2.jpg')
im1arr = asarray(im1)
im2arr = asarray(im2)
addition = im1arr + im2arr
resultImage = Image.fromarray(addition)
resultImage.save('/Users/rem7/Desktop/a.jpg')
and I get something like this:
(Python result image removed)
Why am I getting all those funky colors? I also tried using ImageMath.eval("a+b", a=im1, b=im2), but I get an error about RGB unsupported.
I also saw that there is an Image.blend() but that requires an alpha.
What's the best way to achieve what I'm looking for?
Source images (images have been removed).

As everyone has suggested already, the weird colors you're observing are overflow. And as you point out in the comment on schnaader's answer, you still get overflow if you add your images like this:
addition = (im1arr + im2arr) / 2
The reason for this overflow is that your NumPy arrays (im1arr, im2arr) are of type uint8 (i.e. 8-bit). Each element can only hold values up to 255, so when a sum exceeds 255, it wraps around and starts again from 0:
>>> array([255, 10, 100], dtype='uint8') + array([1, 10, 160], dtype='uint8')
array([ 0, 20, 4], dtype=uint8)
To avoid overflow, your arrays need to be able to hold values beyond 255. Convert them to floats, for instance, perform the blending operation, and convert the result back to uint8:
im1arrF = im1arr.astype('float')
im2arrF = im2arr.astype('float')
additionF = (im1arrF + im2arrF) / 2
addition = additionF.astype('uint8')
You should not do this:
addition = im1arr/2 + im2arr/2
as you lose information by squashing the dynamic range of the image (you effectively make the images 7-bit) before you perform the blending operation.
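A quick demonstration of the lost bit, with assumed one-pixel arrays:
import numpy as np
a = np.array([5], dtype=np.uint8)
b = np.array([5], dtype=np.uint8)
print(a // 2 + b // 2)  # -> [4]: each integer division drops a half
print(((a.astype(np.float64) + b) / 2).astype(np.uint8))  # -> [5]: add first, then halve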
MATLAB note: the reason you don't see this problem in MATLAB is that MATLAB's integer arithmetic saturates rather than wraps: a uint8 sum that exceeds 255 is clipped to 255.
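You can reproduce that saturating behavior in NumPy; a minimal sketch, assuming uint8 inputs:
import numpy as np
a = np.array([255, 10, 100], dtype=np.uint8)
b = np.array([1, 10, 160], dtype=np.uint8)
# Promote to a wider type, clip to the uint8 range, then convert back.
saturated = np.clip(a.astype(np.uint16) + b.astype(np.uint16), 0, 255).astype(np.uint8)
print(saturated)  # -> [255  20 255]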

Using PIL's blend() with an alpha value of 0.5 would be equivalent to (im1arr + im2arr)/2. Blend does not require that the images have alpha layers.
Try this:
from PIL import Image
im1 = Image.open('/Users/rem7/Desktop/_1.jpg')
im2 = Image.open('/Users/rem7/Desktop/_2.jpg')
Image.blend(im1,im2,0.5).save('/Users/rem7/Desktop/a.jpg')

To clamp NumPy array values, widen the dtype before adding (a uint8 sum has already wrapped by the time you clamp) and clip at 255, the top of the uint8 range:
>>> c = a.astype('uint16') + b.astype('uint16')
>>> c[c > 255] = 255
>>> c = c.astype('uint8')

It seems the code you posted just sums up the values, and values bigger than 255 overflow. You want something like "(a + b) / 2" or "min(a + b, 255)". The latter seems to be what your MATLAB example does.

Your sample images are not showing up for me, so I am going to do a bit of guessing.
I can't remember exactly how the NumPy-to-PIL conversion works, but there are two likely cases. I am 95% sure it is case 1, but am giving case 2 just in case I am wrong.
1) im1arr is an MxN array of integers (ARGB), and when you add im1arr and im2arr together you overflow from one channel into the next whenever b1 + b2 > 255. I am guessing MATLAB represents its images as MxNx3 arrays, so each color channel is separate. You can solve this by splitting the PIL image channels and then making NumPy arrays from each.
2) im1arr is an MxNx3 array of bytes, and when you add im1arr and im2arr together each component wraps around.
You are also going to have to rescale the range back to 0-255 before displaying. Your choices are to divide by 2, scale by 255/array.max(), or clip; a sketch of all three follows. I don't know what MATLAB does.
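A minimal sketch of those three rescaling options, assuming im1arr and im2arr are uint8 arrays of the same shape:
import numpy as np
s = im1arr.astype(np.float64) + im2arr.astype(np.float64)
averaged = (s / 2).astype(np.uint8)                # divide by 2
rescaled = (s * 255.0 / s.max()).astype(np.uint8)  # scale by 255/array.max()
clipped = np.clip(s, 0, 255).astype(np.uint8)      # clip at 255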

Related

Luminance Correction (Prospective Correction)

When I was searching the internet for an algorithm to correct luminance, I came across this article about prospective correction and retrospective correction. I'm mostly interested in the prospective correction. Basically, we take a picture of the scene with the image in it (the original one), and two other pictures, one bright and one dark, in which we only see the background of the original picture.
My problem is that I couldn't find any adaptation of these formulas in OpenCV or a code example. I tried to use the formulas as they were in my code, but I ran into a problem with data types when I tried to find the C constant by applying operations on the images.
This is how I implemented the formula in my code:
def calculate_C(im, im_b):
    fx_mean = cv.mean(im)
    fx_over_bx = np.divide(im, im_b)
    mean_fx_bx = cv.mean(fx_over_bx)
    c = np.divide(fx_mean, mean_fx_bx)
    return c
#Basic image reading and resizing
# Original image
img = cv.imread(image_path)
img = cv.resize(img, (1000,750))
# Bright image
b_img = cv.imread(bright_image_path)
b_img = cv.resize(b_img, (1000,750))
# Calculating C constant from the formula
c_constant = calculate_C(img, b_img)
# Because I have only the bright image, I am using the second formula from the article
img = np.multiply(np.divide(img,b_img), c_constant)
When I try to run this code I get the error:
img = np.multiply(np.divide(img,b_img), c_constant)
ValueError: operands could not be broadcast together with shapes (750,1000,3) (4,)
So, is there anything I can do to fix my code? or is there any hints that you can share with me to handle luminance correction with this method or better methods?
You are using the cv2.mean function, which returns an array of shape (4,): the mean value for each channel. You may need to ignore the last channel and broadcast the rest correctly against your (H, W, 3) image array.
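For instance, a minimal sketch of that fix which keeps cv.mean but drops its fourth entry, assuming import cv2 as cv and 3-channel BGR inputs (the calculate_C name is from the question; division by zero is addressed below):
import cv2 as cv
import numpy as np
def calculate_C(im, im_b):
    im = im.astype(np.float64)
    im_b = im_b.astype(np.float64)
    fx_mean = np.array(cv.mean(im)[:3])            # per-channel means, shape (3,)
    mean_fx_bx = np.array(cv.mean(im / im_b)[:3])  # per-channel mean of f(x)/b(x)
    return fx_mean / mean_fx_bx                    # shape (3,), broadcasts over (H, W, 3)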
Or you could use numpy for calculations instead of opencv.
I just took the example images from the article: grain.png and grain_background.png (images removed).
Complete example:
import cv2
import numpy as np
from numpy.ma import divide, mean
f = cv2.imread("grain.png")
b = cv2.imread("grain_background.png")
f = f.astype(np.float32)
b = b.astype(np.float32)
C = mean(f) / divide(f, b).mean()  # the constant C from the article's formula
g = divide(f, b) * C               # prospective correction: g(x) = (f(x) / b(x)) * C
g = g.astype(np.uint8)
cv2.imwrite("grain_out.png", g)
You need to use the masked divide operation because an ordinary division can divide by zero and produce Inf/NaN values.
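A small illustration of the difference, with assumed values:
import numpy as np
from numpy.ma import divide
f = np.array([4.0, 1.0])
b = np.array([2.0, 0.0])
print(f / b)         # plain division -> [2. inf] plus a RuntimeWarning
print(divide(f, b))  # masked division -> [2.0 --]; the zero-divisor entry is masked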
Resulting image (grain_out.png; image removed):

Why PIL.ImageChops.difference and np.array difference have different results?

Why do PIL.ImageChops.difference and np.array absolute difference give different results? The Pillow documentation says ImageChops.difference acts just like an absolute difference (https://pillow.readthedocs.io/en/3.1.x/reference/ImageChops.html).
tamp_image = Image.open(tamp_file_path).convert("RGB")
orig_image = Image.open(orig_file_path).convert("RGB")
diff = ImageChops.difference(orig_image, tamp_image)
diff.show() #1
Image.fromarray(abs(np.array(tamp_image)-np.array(orig_image))).show() #2
Results (top: #1, bottom: #2; images removed):
Interestingly, if I convert diff to an np.array and then back to an Image object, it displays like #1.
I had a similar problem. The solution is quite simple: the converted NumPy arrays have the datatype uint8, so subtracting a larger value from a smaller one wraps around modulo 256 and you get a weird-looking result.
The solution is to convert the images to a datatype with an appropriate range, such as int16. (Note that int8 is not wide enough: the conversion itself would wrap uint8 values above 127, and differences can be as large as 255.)
img1 = np.array(tamp_image, dtype='int16')
img2 = np.array(orig_image, dtype='int16')
diff = np.abs(img1 - img2)
Image.fromarray(np.array(diff, dtype='uint8')).show()

Faster implementation to quantize an image with an existing palette?

I am using Python 3.6 to perform basic image manipulation through Pillow. Currently, I am attempting to take 32-bit PNG images (RGBA) of arbitrary color compositions and sizes and quantize them to a known palette of 16 colors. Optimally, this quantization method should be able to leave fully transparent (A = 0) pixels alone, while forcing all semi-transparent pixels to be fully opaque (A = 255). I have already devised working code that performs this, but I wonder if it may be inefficient:
import math
from PIL import Image

# a list of 16 RGBA tuples
palette = [
    (0, 0, 0, 255),
    # ...
]

with Image.open('some_image.png').convert('RGBA') as img:
    for py in range(img.height):
        for px in range(img.width):
            pix = img.getpixel((px, py))
            if pix[3] == 0:  # Ignore fully transparent pixels
                continue
            # Perform exhaustive search for closest Euclidean distance
            dist = 450
            best_fit = (0, 0, 0, 0)
            for c in palette:
                if pix[:3] == c[:3]:  # If pixel matches exactly, break
                    best_fit = c
                    break
                tmp = math.sqrt(pow(pix[0]-c[0], 2) + pow(pix[1]-c[1], 2) + pow(pix[2]-c[2], 2))
                if tmp < dist:
                    dist = tmp
                    best_fit = c
            img.putpixel((px, py), best_fit[:3] + (255,))
    img.save('quantized.png')
I can think of two main inefficiencies in this code:
Image.putpixel() is a slow operation
Calculating the distance function multiple times per pixel is computationally wasteful
Is there a faster method to do this?
I've noted that Pillow has a native function Image.quantize() that seems to do exactly what I want. But as it is coded, it forces dithering in the result, which I do not want. This has been brought up in another StackOverflow question. The answer to that question was simply to extract the internal Pillow code and tweak the control variable for dithering, which I tested, but I find that Pillow corrupts the palette I give it and consistently yields an image where the quantized colors are considerably darker than they should be.
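For what it's worth, newer Pillow releases expose the dither control directly on Image.quantize, which might avoid the internal-code extraction entirely; a hedged sketch, assuming a recent Pillow and the 16-color palette list from above (alpha handling omitted):
from PIL import Image
pal_img = Image.new('P', (1, 1))
pal_img.putpalette([channel for color in palette for channel in color[:3]])  # flat R, G, B list
with Image.open('some_image.png').convert('RGB') as img:
    img.quantize(palette=pal_img, dither=Image.Dither.NONE).convert('RGB').save('quantized.png')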
Image.point() is a tantalizing method, but it only works on each color channel individually, where color quantization requires working with all channels as a set. It'd be nice to be able to force all of the channels into a single channel of 32-bit integer values, which seems to be what the ill-documented mode "I" would do, but if I run img.convert('I'), I get a completely greyscale result, destroying all color.
An alternative method seems to be using NumPy and altering the image directly. I've attempted to create a lookup table of RGB values, but the three-dimensional indexing of NumPy's syntax is driving me insane. Ideally I'd like some kind of code that works like this:
img_arr = numpy.array(img)
# Find all unique colors
unique_colors = numpy.unique(img_arr, axis=0)
# Generate lookup table
colormap = numpy.empty(unique_colors.shape)
for i, c in enumerate(unique_colors):
    dist = 450
    best_fit = None
    for pc in palette:
        tmp = math.sqrt(pow(c[0]-pc[0], 2) + pow(c[1]-pc[1], 2) + pow(c[2]-pc[2], 2))
        if tmp < dist:
            dist = tmp
            best_fit = pc
    colormap[i] = best_fit
# Hypothetical pseudocode I can't seem to write out
for iy in range(img_arr.size):
    for ix in range(img_arr[0].size):
        if img_arr[iy, ix, 3] == 0:  # Skip transparent
            continue
        index = # Find index of matching color in unique_colors, somehow
        img_arr[iy, ix] = colormap[index]
I note with this hypothetical example that numpy.unique() is another slow operation, since it sorts the output. Since I cannot seem to finish the code the way I want, I haven't been able to test if this method is faster anyway.
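It turns out numpy.unique has a return_inverse flag that yields exactly the index I'm missing; a minimal sketch, assuming img_arr is an (H, W, 4) array (transparent-pixel handling omitted):
flat = img_arr.reshape(-1, 4)
unique_colors, inverse = numpy.unique(flat, axis=0, return_inverse=True)
# ... build colormap for unique_colors as in the loop above ...
quantized = colormap[inverse].reshape(img_arr.shape)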
I've also considered attempting to flatten the RGBA axis by converting the values to a 32-bit integer and desiring to create a one-dimensional lookup table with the simpler index:
def shift(a):
    return a[0] << 24 | a[1] << 16 | a[2] << 8 | a[3]
img_arr = numpy.apply_along_axis(shift, 2, img_arr)  # axis 2 is the RGBA axis
But this operation seemed noticeably slow on its own.
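numpy.apply_along_axis calls a Python function once per 1-D slice, which likely explains the slowness; a vectorized sketch of the same packing, assuming img_arr is an (H, W, 4) uint8 array:
packed = (img_arr[..., 0].astype(numpy.uint32) << 24
          | img_arr[..., 1].astype(numpy.uint32) << 16
          | img_arr[..., 2].astype(numpy.uint32) << 8
          | img_arr[..., 3].astype(numpy.uint32))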
I would prefer answers that involve only Pillow and/or NumPy, please. Unless using another library demonstrates a dramatic computational speed increase over any PIL- or NumPy-native solution, I don't want to import extraneous libraries to do something these two libraries should be reasonably capable of on their own.
Python for loops should be avoided for speed; vectorize the search instead.
I think you should make a tensor like:
d2[x, y, color_index, rgb] = distance_squared
where rgb = 0..2 (0 = r, 1 = g, 2 = b).
Then compute the distance:
d[x, y, color_index] = sqrt(sum(rgb, d2))
Then select the color_index with the minimal distance:
c[x, y] = min_index(color_index, d)
Finally replace alpha as needed:
alpha = ceil(orig_image.alpha)
img = (c, alpha)
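A minimal NumPy sketch of this vectorized approach, assuming img is the question's (H, W, 4) uint8 RGBA array and palette its 16-color list (the quantize helper name is mine):
import numpy as np
def quantize(img, palette):
    rgb = img[..., :3].astype(np.int32)             # (H, W, 3)
    pal = np.array(palette, dtype=np.int32)[:, :3]  # (16, 3), alpha dropped
    # d2[x, y, color_index] = squared distance from each pixel to each palette color
    d2 = ((rgb[:, :, None, :] - pal[None, None, :, :]) ** 2).sum(axis=-1)
    c = d2.argmin(axis=-1)     # sqrt is monotonic, so argmin of d2 equals argmin of d
    out = img.copy()
    opaque = img[..., 3] != 0  # leave fully transparent pixels untouched
    out[opaque, :3] = pal[c][opaque]
    out[opaque, 3] = 255       # force semi-transparent pixels fully opaque
    return out
The result can go straight back through Image.fromarray(quantize(np.array(img), palette)).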

When plotting Mandelbrot using NumPy & Pillow, program outputs apparent noise

Previously, I had created a Mandelbrot generator in python using turtle. Now, I am re-writing the program to use the Python Imaging Library in order to increase speed and reduce limits on size of images.
However, the program below only outputs RGB nonsense, almost like noise. I think it has something to do with a difference in the way NumPy and PIL deal with arrays, since setting l[x,y] = [1,1,1] where l = np.zeros((height,width,3)) doesn't just make one pixel white when img = Image.fromarray(l) and img.show() are performed.
import colorsys
from math import log
import numpy as np
from PIL import Image

def imagebrot(mina=-1.25, maxa=1.25, minb=-1.25, maxb=1.25, width=100, height=100, maxit=300, inf=2):
    l, b = np.zeros((height, width, 3), dtype=np.float64), minb
    for y in range(0, height):
        a = mina
        for x in range(0, width):
            ab = mandel(a, b, maxit, inf)
            if ab[0] == maxit:
                l[x,y:] = [1,1,1]
            #if ab[0] < maxit:
            #    smoothit = mandelc(ab[0], ab[1], ab[2])
            #    l[x, y] = colorsys.hsv_to_rgb(smoothit, 1, 1)
            a += abs(mina-maxa)/width
        b += abs(minb-maxb)/height
    img = Image.fromarray(l, "RGB")
    img.show()

def mandel(re, im, maxit, inf):
    z = complex(re, im)
    c, it = z, 0
    for i in range(0, maxit):
        if abs(z) > inf:
            break
        z, it = z*z+c, it+1
    return it, z, inf

def mandelc(it, z, inf):
    return (it + 1 - log(log(abs(z)))/log(2))
UPDATE 1:
I realised that one of the major errors in this program (I'm sure there are many) is that I was using the x,y coordinates as the complex coefficients - so 0 to 100 instead of -1.25 to 1.25! I have changed this so that the code now uses the variables a,b to describe them, incremented in a manner I've stolen from some of my code in the turtle version. The code above has been updated accordingly. Since the smooth-colouring algorithm code is currently commented out for debugging, the inf variable has been reduced to 2.
UPDATE 2:
I have edited the numpy index with help from a great user. The program now outputs this when set to 200 by 200:
As you can see, it definitely shows some mathematical shape, and yet it is filled with all these strange red, green and blue pixels! Why could these be here? My program can only set RGB values to [1,1,1] or leave them at the default [0,0,0]. It can't produce [1,0,0] or anything like that - this must be a serious flaw...
UPDATE 3:
I think there is an error in the NumPy-PIL integration. If I make l = np.zeros((100, 100, 3)), then state l[0,0,:] = 1, and finally run img = Image.fromarray(l) and img.show(), this is what we get (image removed):
Here we get a series of coloured pixels. This calls for another question.
UPDATE 4:
I have no idea what was happening previously, but it seems that with an np.uint8 array, Image.fromarray() uses colour values from 0-255. With this piece of wisdom, I move one step closer to understanding this Mandelbug!
Now I do get something vaguely mathematical; however, it still outputs strange things.
This dot is all there is... I get even stranger things if I change to np.uint16, I presume due to the different byte size and encoding scheme.
You are indexing the 3D array l incorrectly; try
l[x, y, :] = [1, 1, 1]
instead. For more details on how to access and modify NumPy arrays, have a look at the NumPy indexing documentation.
As a side note: the NumPy quickstart documentation actually has an implementation of Mandelbrot set generation and plotting.
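As for the dtype issue in the updates: with mode "RGB", Image.fromarray expects uint8 values in 0-255, so scale the 0/1 float array up before converting; a minimal sketch, assuming the same l array:
import numpy as np
from PIL import Image
l = np.zeros((100, 100, 3), dtype=np.float64)
l[0, 0, :] = 1  # correct 3D indexing: one pixel, all three channels
img = Image.fromarray((l * 255).astype(np.uint8), "RGB")
img.show()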

Displaying a 32-bit image with NaN values (ImageJ)

I wrote a multilanguage 3-D image-denoising ImageJ plugin that does some operations on an image and returns the denoised image as a 1-D array. The 1-D array contains NaN values (around the edges). The 1-D array is converted back into an image stack and displayed, but it is simply black. I saved the image stack and opened it in ImageJ again; when I moved my cursor over the image, I saw the values change as they should. In some places (where I presume the neurone is) the pixel values were in the range of 1000-4000, yet the whole image was simply pitch black. Here is a snippet of the code that converts the array into an image stack at the end:
# Image-denoising routines written in C (this is where the nan values are introduced)
fimg = JNApackage.NativeCodeJNA.NativeCall(InputImgArray, medfiltArray, int(searchradius), int(patchradius), beta , int(x), int(y), int(z))
# Optimal Inverse Anscombe Transform (Some more operations in Jython)
fimg = InverseAnscombe.InvAnscombe(fimg)
InputImg.flush()
outputstack = ImageStack(x, y, z)
for i in xrange(0, z):
    # Get the slice at index i and assign array elements corresponding to it.
    outputstack.setPixels(fimg[int(i*x*y):int((i+1)*x*y)], i+1)
print 'Preparing denoised image for display '
outputImp = ImagePlus("Output Image", outputstack)
#print "OutputImage Stats:"
Stats = StackStatistics(outputImp)
print "mean:", Stats.mean, "minimum:", Stats.min, "maximum:", Stats.max
outputImp.show()
Any help as to what is going on?
The display range of your image might not be set correctly.
Try
outputImp.resetDisplayRange()
or
outputImp.setDisplayRange(Stats.min, Stats.max)
See the ImagePlus javadoc for more info.
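A minimal combined sketch (Jython; outputImp and StackStatistics come from the question's code, and updateAndDraw is assumed here to force a screen refresh):
stats = StackStatistics(outputImp)
outputImp.setDisplayRange(stats.min, stats.max)  # map [min, max] to the full display range
outputImp.updateAndDraw()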
