When plotting Mandelbrot using NumPy & Pillow, program outputs apparent noise - python

Previously, I had created a Mandelbrot generator in python using turtle. Now, I am re-writing the program to use the Python Imaging Library in order to increase speed and reduce limits on size of images.
However, the program below only outputs RGB nonsense, almost noise. I think it has something to do with a difference in the way NumPy and PIL handle arrays, since setting l[x,y] = [1,1,1] where l = np.zeros((height,width,3)) doesn't just make one pixel white when img = Image.fromarray(l) and img.show() are run.
import colorsys
from math import log
import numpy as np
from PIL import Image

def imagebrot(mina=-1.25, maxa=1.25, minb=-1.25, maxb=1.25, width=100, height=100, maxit=300, inf=2):
    l,b = np.zeros((height,width,3), dtype=np.float64), minb
    for y in range(0, height):
        a = mina
        for x in range(0, width):
            ab = mandel(a, b, maxit, inf)
            if ab[0] == maxit:
                l[x,y:] = [1,1,1]
            #if ab[0] < maxit:
                #smoothit = mandelc(ab[0], ab[1], ab[2])
                #l[x, y] = colorsys.hsv_to_rgb(smoothit, 1, 1)
            a += abs(mina-maxa)/width
        b += abs(minb-maxb)/height
    img = Image.fromarray(l, "RGB")
    img.show()
def mandel(re, im, maxit, inf):
    z = complex(re, im)
    c,it = z,0
    for i in range(0, maxit):
        if abs(z) > inf:
            break
        z,it = z*z+c,it+1
    return it,z,inf

def mandelc(it,z,inf):
    return (it+1-log(log(abs(z)))/log(2))
UPDATE 1:
I realised that one of the major errors in this program (I'm sure there are many) is that I was using the x,y coords as the complex coefficients! So, 0 to 100 instead of -1.25 to 1.25! I have changed this so that the code now uses the variables a,b to describe them, incremented in a manner I've stolen from some of my code in the turtle version. The code above has been updated accordingly. Since the Smooth Colouring Algorithm code is currently commented out for debugging, the inf variable has been reduced to 2.
UPDATE 2:
I have edited the numpy index with help from a great user. The program now outputs this when set to 200 by 200:
As you can see, it definitely shows some mathematical shape and yet is filled with all these strange red, green and blue pixels! Why could these be here? My program can only set RGB values to [1,1,1] or leave it as a default [0,0,0]. It can't be [1,0,0] or anything like that - this must be a serious flaw...
UPDATE 3:
I think there is an error in the way NumPy and PIL interact. If I make l = np.zeros((100, 100, 3)), then set l[0,0,:] = 1 and finally run img = Image.fromarray(l) and img.show(), this is what we get:
Here we get a series of coloured pixels. This calls for another question.
UPDATE 4:
I have no idea what was happening previously, but it seems that with an np.uint8 array, Image.fromarray() uses colour values from 0-255. With this piece of wisdom, I move one step closer to understanding this Mandelbug!
Now, I do get something vaguely mathematical, however it still outputs strange things.
This dot is all there is... I get even stranger things if I change to np.uint16, I presume due to the different byte-shape and encoding scheme.
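For reference, this is the kind of test (if I've understood the dtype behaviour correctly) that now gives the expected single white pixel:
import numpy as np
from PIL import Image

l = np.zeros((100, 100, 3), dtype=np.uint8)
l[0, 0, :] = 255                    # 255, not 1, is full brightness for uint8
img = Image.fromarray(l, "RGB")
img.show()                          # one white pixel in the top-left corner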

You are indexing the 3D array l incorrectly, try
l[x,y,:] = [1,1,1]
instead. For more details on how to access and modify NumPy arrays, have a look at numpy indexing.
As a side note: the quickstart documentation of NumPy actually has an implementation of Mandelbrot set generation and plotting.
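Putting that indexing fix together with the uint8 discovery from UPDATE 4, a minimal sketch of the inner loop (untested; note that with an array of shape (height, width, 3) the row index comes first, so I index as l[y, x, :]) would be:
l = np.zeros((height, width, 3), dtype=np.uint8)
# ...
if ab[0] == maxit:
    # row = y, column = x; 255 is white for uint8 data
    l[y, x, :] = [255, 255, 255]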

Related

Rotating a reshaped image as a matrix operation

I have a gray scale image that I want to rotate. However, I need to do optimization on it. Therefore, I cannot use pillow or opencv.
I want to reshape this image using python with numpy.reshape into an one dimensional vector (where I use the default settings C-style reshape).
And thereafter, I want to rotate this image around a point using matrix multiplication and addition, i.e. it should be something like
rotated_image_vector = A @ vector + b # (or the equivalent in homogeneous coordinates).
After this operation I want to reshape the outcome back to two dimensions and have the rotated image.
It would be best if it would as well use linear interpolation between the pixels that do not fit exactly to an other pixel.
The mathematical theory tells it is possible, and I believe there is a very elegant solution to this problem, but I do not see how to create this matrix. Did anyone already have this problem or sees an immediate solution?
Thanks a lot,
Eike
I like your approach, but there is a slight misconception in it. What you want to transform are not the pixel values themselves but the coordinates. So you don't reshape your image but rather use np.indices on it to obtain the coordinates of each pixel. For those, a rotation around a point looks like
rotation_matrix@(coordinates-fixed_point)+fixed_point
except that I have to transpose a bit to get the dimensions to align. The code below is a slight adaptation of my code in this answer.
As an example I am going to use the Wikipedia-logo-v2 by Nohat. It is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.
First I read in the picture, swap the x and y axes so as not to get confused, and rotate the coordinates as described above.
import numpy as np
import matplotlib.pyplot as plt
import itertools
image = plt.imread('wikipedia.jpg')
image = np.swapaxes(image,0,1)/255
fixed_point = np.array(image.shape[:2], dtype='float')/2
points = np.moveaxis(np.indices(image.shape[:2]),0,-1).reshape(-1,2)
a = 2*np.pi/8
A = np.array([[np.cos(a),-np.sin(a)],[np.sin(a),np.cos(a)]])
rotated_coordinates = (A@(points-fixed_point.reshape(1,2)).T).T+fixed_point.reshape(1,2)
Now I set up a little class to interpolate between the pixels that do not fit exactly onto another pixel. And finally I swap the axes back and plot it.
class Image_knn():
    def fit(self, image):
        self.image = image.astype('float')

    def predict(self, x, y):
        image = self.image
        weights_x = [(1-(x % 1)).reshape(*x.shape,1), (x % 1).reshape(*x.shape,1)]
        weights_y = [(1-(y % 1)).reshape(*x.shape,1), (y % 1).reshape(*x.shape,1)]
        start_x = np.floor(x)
        start_y = np.floor(y)
        return sum([image[np.clip(np.floor(start_x + x), 0, image.shape[0]-1).astype('int'),
                          np.clip(np.floor(start_y + y), 0, image.shape[1]-1).astype('int')] * weights_x[x]*weights_y[y]
                    for x,y in itertools.product(range(2),range(2))])
image_model = Image_knn()
image_model.fit(image)
transformed_image = image_model.predict(*rotated_coordinates.T).reshape(*image.shape)
plt.imshow(np.swapaxes(transformed_image,0,1))
And I get a result like this
Possible Issue
The artifact in the bottom left that looks like one needs to clean the screen comes from the following problem: when we rotate, it can happen that we don't have enough pixels to paint the lower left. What we do by default in Image_knn is to clip the coordinates to an area where we have information. That means when we ask Image_knn for pixels coming from outside the image, it gives us the pixels at the boundary of the image. This looks good if there is a background, but if an object touches the edge of the picture it looks odd, like here. Just something to keep in mind when using this.
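One rough workaround (a sketch reusing the variables defined above, not thoroughly tested) is to mark the coordinates that fall outside the original image and paint those pixels white instead of reusing the clipped boundary values:
h, w = image.shape[:2]
x, y = rotated_coordinates.T
outside = (x < 0) | (x > h - 1) | (y < 0) | (y > w - 1)
transformed_image[outside.reshape(h, w)] = 1.0   # white, since the image was scaled to [0, 1]
plt.imshow(np.swapaxes(transformed_image, 0, 1))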
Thank you for your answer!
But actually it is not a misconception that you could let this rotation be represented by a matrix multiplication with the reshaped vector.
I used your code to generate such a matrix (it's surely not the most efficient way, but it works; most likely you'll see a more efficient implementation immediately XD. You see, I really need it as a matrix multiplication :-D).
What I basically did is generate the representation matrix of the linear transformation by computing how each of the 100*100 basis images (i.e. the image with zeros everywhere and a one at a single position) is mapped by your transformation.
import sys
import numpy as np
import matplotlib.pyplot as plt
import itertools
angle = 2*np.pi/6
image_expl = plt.imread('wikipedia.jpg')
image_expl = image_expl[:,:,0]
plt.imshow(image_expl)
plt.title("Image")
plt.show()
image_shape = image_expl.shape
pixel_number = image_shape[0]*image_shape[1]
rot_mat = np.zeros((pixel_number,pixel_number))
for i in range(pixel_number):
    vector = np.zeros(pixel_number)
    vector[i] = 1
    image = vector.reshape(*image_shape)
    fixed_point = np.array(image.shape, dtype='float')/2
    points = np.moveaxis(np.indices(image.shape),0,-1).reshape(-1,2)
    a = -angle
    A = np.array([[np.cos(a),-np.sin(a)],[np.sin(a),np.cos(a)]])
    rotated_coordinates = (A@(points-fixed_point.reshape(1,2)).T).T+fixed_point.reshape(1,2)
    x,y = rotated_coordinates.T
    image = image.astype('float')
    weights_x = [(1-(x % 1)).reshape(*x.shape), (x % 1).reshape(*x.shape)]
    weights_y = [(1-(y % 1)).reshape(*x.shape), (y % 1).reshape(*x.shape)]
    start_x = np.floor(x)
    start_y = np.floor(y)
    transformed_image_returned = sum([image[np.clip(np.floor(start_x + x), 0, image.shape[0]-1).astype('int'),
                                            np.clip(np.floor(start_y + y), 0, image.shape[1]-1).astype('int')] * weights_x[x]*weights_y[y]
                                      for x,y in itertools.product(range(2),range(2))])
    rot_mat[:,i] = transformed_image_returned
    if i%100 == 0: print(int(100*i/pixel_number), "% finished")
plt.imshow((rot_mat @ image_expl.reshape(-1)).reshape(image_shape))
Thank you again :-)

Faster implementation to quantize an image with an existing palette?

I am using Python 3.6 to perform basic image manipulation through Pillow. Currently, I am attempting to take 32-bit PNG images (RGBA) of arbitrary color compositions and sizes and quantize them to a known palette of 16 colors. Optimally, this quantization method should be able to leave fully transparent (A = 0) pixels alone, while forcing all semi-transparent pixels to be fully opaque (A = 255). I have already devised working code that performs this, but I wonder if it may be inefficient:
from math import sqrt
from PIL import Image
# a list of 16 RGBA tuples
palette = [
(0, 0, 0, 255),
# ...
]
with Image.open('some_image.png').convert('RGBA') as img:
    for py in range(img.height):
        for px in range(img.width):
            pix = img.getpixel((px, py))
            if pix[3] == 0: # Ignore fully transparent pixels
                continue
            # Perform exhaustive search for closest Euclidean distance
            dist = 450
            best_fit = (0, 0, 0, 0)
            for c in palette:
                if pix[:3] == c: # If pixel matches exactly, break
                    best_fit = c
                    break
                tmp = sqrt(pow(pix[0]-c[0], 2) + pow(pix[1]-c[1], 2) + pow(pix[2]-c[2], 2))
                if tmp < dist:
                    dist = tmp
                    best_fit = c
            img.putpixel((px, py), best_fit + (255,))
    img.save('quantized.png')
I think of two main inefficiencies of this code:
Image.putpixel() is a slow operation
Calculating the distance function multiple times per pixel is computationally wasteful
Is there a faster method to do this?
I've noted that Pillow has a native function Image.quantize() that seems to do exactly what I want. But as it is coded, it forces dithering in the result, which I do not want. This has been brought up in another StackOverflow question. The answer to that question was simply to extract the internal Pillow code and tweak the control variable for dithering, which I tested, but I find that Pillow corrupts the palette I give it and consistently yields an image where the quantized colors are considerably darker than they should be.
Image.point() is a tantalizing method, but it only works on each color channel individually, where color quantization requires working with all channels as a set. It'd be nice to be able to force all of the channels into a single channel of 32-bit integer values, which seems to be what the ill-documented mode "I" would do, but if I run img.convert('I'), I get a completely greyscale result, destroying all color.
An alternative method seems to be using NumPy and altering the image directly. I've attempted to create a lookup table of RGB values, but the three-dimensional indexing of NumPy's syntax is driving me insane. Ideally I'd like some kind of code that works like this:
img_arr = numpy.array(img)
# Find all unique colors
unique_colors = numpy.unique(img_arr, axis=0)
# Generate lookup table
colormap = numpy.empty(unique_colors.shape)
for i, c in enumerate(unique_colors):
    dist = 450
    best_fit = None
    for pc in palette:
        tmp = sqrt(pow(c[0] - pc[0], 2) + pow(c[1] - pc[1], 2) + pow(c[2] - pc[2], 2))
        if tmp < dist:
            dist = tmp
            best_fit = pc
    colormap[i] = best_fit
# Hypothetical pseudocode I can't seem to write out
for iy in range(img_arr.size):
    for ix in range(img_arr[0].size):
        if img_arr[iy, ix, 3] == 0: # Skip transparent
            continue
        index = # Find index of matching color in unique_colors, somehow
        img_arr[iy, ix] = colormap[index]
I note with this hypothetical example that numpy.unique() is another slow operation, since it sorts the output. Since I cannot seem to finish the code the way I want, I haven't been able to test if this method is faster anyway.
I've also considered flattening the RGBA axis by converting the values to a single 32-bit integer, in order to create a one-dimensional lookup table with a simpler index:
def shift(a):
    return a[0] << 24 | a[1] << 16 | a[2] << 8 | a[3]

img_arr = numpy.apply_along_axis(shift, 1, img_arr)
But this operation seemed noticeably slow on its own.
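(For reference, a vectorized version of that same packing, assuming img_arr is still the uint8 array straight from numpy.array(img), would presumably look like the following, though I haven't timed it:)
a = img_arr.astype(numpy.uint32)
packed = (a[..., 0] << 24) | (a[..., 1] << 16) | (a[..., 2] << 8) | a[..., 3]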
I would prefer answers that involve only Pillow and/or NumPy, please. Unless using another library demonstrates a dramatic computational speed increase over any PIL- or NumPy-native solution, I don't want to import extraneous libraries to do something these two libraries should be reasonably capable of on their own.
for loops should be avoided for speed.
I think you should make a tensor like:
d2[x,y,color_index,rgb] = distance_squared
where rgb = 0..2 (0 = r, 1 = g, 2 = b).
Then compute the distance:
d[x,y,color_index] = sqrt(sum(rgb, d2))
Then select the color_index with the minimal distance:
c[x,y] = min_index(color_index, d)
Finally replace alpha as needed:
alpha = ceil(orig_image.alpha)
img = c,alpha
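A rough NumPy sketch of that idea (the variable names are mine; it assumes palette is the 16-entry list of RGBA tuples from the question, and the alpha handling follows the question's rule of keeping fully transparent pixels and forcing the rest opaque):
import numpy as np
from PIL import Image

pal = np.array(palette, dtype=np.float64)                    # shape (16, 4)

img_arr = np.array(Image.open('some_image.png').convert('RGBA'), dtype=np.float64)
rgb, alpha = img_arr[..., :3], img_arr[..., 3]

# d2[x, y, color_index] = squared distance to each palette colour
d2 = ((rgb[:, :, None, :] - pal[None, None, :, :3]) ** 2).sum(axis=-1)

# pick the palette index with the minimal distance per pixel
c = d2.argmin(axis=-1)

out = pal[c].astype(np.uint8)
out[alpha == 0] = img_arr[alpha == 0]   # leave fully transparent pixels untouched
out[alpha != 0, 3] = 255                # force everything else fully opaque

Image.fromarray(out, 'RGBA').save('quantized.png')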

Better image normalization with numpy

I already achieved the goal described in the title but I was wondering if there was a more efficient (or generally better) way to do it. First of all let me introduce the problem.
I have a set of images of different sizes but with a width/height ratio less than or equal to 2 (it could be anything, but let's say 2 for now). I want to normalize each one, meaning I want all of them to have the same size. Specifically I am going to do so like this:
Extract the max height above all images
Zoom the image so that each image reaches the max height keeping its ratio
Add padding of white pixels on the right until the image has a width/height ratio of 2
Keep in mind the images are represented as numpy matrices of grey scale values [0,255].
This is how I'm doing it now in Python:
max_height = numpy.max([len(obs) for obs in data if len(obs[0])/len(obs) <= 2])
for obs in data:
    if len(obs[0])/len(obs) <= 2:
        new_img = ndimage.zoom(obs, round(max_height/len(obs), 2), order=3)
        missing_cols = max_height * 2 - len(new_img[0])
        norm_img = []
        for row in new_img:
            norm_img.append(np.pad(row, (0, missing_cols), mode='constant', constant_values=255))
        norm_img = np.resize(norm_img, (max_height, max_height*2))
There's a note about this code:
I'm rounding the zoom ratio because it makes the final height equal to max_height. I'm sure this is not the best approach, but it works (any suggestion is appreciated here). What I'd like to do is expand the image, keeping its ratio, until it reaches a height equal to max_height. This is the only solution I've found so far; it worked right away, and the interpolation looks pretty good.
So my final questions are:
Is there a better approach to achieve what is explained above (image normalization)? Do you think I could have done this differently? Is there a common good practice I'm not following?
Thanks in advance for your time.
Instead of ndimage.zoom you could use scipy.misc.imresize. This function allows you to specify the target size as a tuple, instead of by zoom factor. Thus you won't have to call np.resize later to get the size exactly as desired.
Note that scipy.misc.imresize calls PIL.Image.resize under the hood, so PIL (or Pillow) is a dependency.
Instead of using np.pad in a for-loop, you could allocate space for the desired array, norm_arr, first:
norm_arr = np.full((max_height, max_width), fill_value=255)
and then copy the resized image, new_arr into norm_arr:
nh, nw = new_arr.shape
norm_arr[:nh, :nw] = new_arr
For example,
from __future__ import division
import numpy as np
from scipy import misc
data = [np.linspace(255, 0, i*10).reshape(i,10)
        for i in range(5, 100, 11)]
max_height = np.max([len(obs) for obs in data if len(obs[0])/len(obs) <= 2])
max_width = 2*max_height
result = []
for obs in data:
    norm_arr = obs
    h, w = obs.shape
    if float(w)/h <= 2:
        scale_factor = max_height/float(h)
        target_size = (max_height, int(round(w*scale_factor)))
        new_arr = misc.imresize(obs, target_size, interp='bicubic')
        norm_arr = np.full((max_height, max_width), fill_value=255)
        # check the shapes
        # print(obs.shape, new_arr.shape, norm_arr.shape)
        nh, nw = new_arr.shape
        norm_arr[:nh, :nw] = new_arr
    result.append(norm_arr)
    # visually check the result
    # misc.toimage(norm_arr).show()
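If you'd rather not go through scipy.misc at all (it only wraps PIL anyway), a roughly equivalent resize using Pillow directly could look like this sketch, assuming the input is already in the 0-255 range (imresize would rescale it otherwise):
from PIL import Image

def resize_pil(arr, target_size):
    # target_size is (height, width); PIL expects (width, height)
    img = Image.fromarray(np.uint8(arr))
    return np.array(img.resize((target_size[1], target_size[0]), resample=Image.BICUBIC))

new_arr = resize_pil(obs, target_size)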

How to speed up 'for loop' for searching Pixel value in Image in python?

I have written the following code in Python that looks at the value of each pixel in a 160x160 image and records the coordinates where a certain intensity is observed.
def red_finder(img,fname):
    resolution=img.shape
    reds=[] #array to add intensity pixels
    centre=np.divide(resolution,2)
    for y in range(resolution[1]):
        for x in range(resolution[0]):
            #print img[x,y]
            if np.all(img[x,y]!=0):
                reds.append([x,y])
    reds=np.array(reds)
    (redx,redy)=(np.mean(reds,axis=0))
    proc=(np.mean(reds,axis=0))
    xray=proc-centre
    return (xray[1],xray[0])
I think,
for y in range(resolution[1]):
    for x in range(resolution[0]):
        #print img[x,y]
        if np.all(img[x,y]!=0):
            reds.append([x,y])
this way of scanning each pixel is consuming a lot of time.
Is there any way to speed up the code, since the image can be larger and its size sometimes varies?
Thank you.
The double for-loop can be replaced by a call to numpy.where:
def red_finder(img):
    resolution = img.shape
    centre = np.divide(resolution, 2)
    redx, redy = np.where(img != 0)
    proc = (redx.mean(), redy.mean())
    xray = proc - centre
    return (xray[1], xray[0])
Note: I believe your code implies that img has 2 dimensions (otherwise proc - center would raise an error). Therefore img[x, y] is a scalar value. So np.all(img[x, y] != 0) is the same as img[x, y] != 0.
Have you tried a list comprehension? They usually go a bit faster than plain loops.
reds = [ [x,y] for x in range(resolution[0]) for y in range(resolution[1]) if np.all(img[x,y]!=0) ]
Apart from that, np.all() is pretty efficient because 1. it's written in C and 2. it stops at the first value that does not satisfy the condition.
As noted in another answer, a way you can go faster is to replace your whole loop with a built-in NumPy function that does the same thing natively.
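For example, np.argwhere keeps the [x, y] pair structure of the original reds list, so a sketch along those lines (assuming, as noted above, that img is 2-dimensional) would be:
reds = np.argwhere(img != 0)      # shape (N, 2): one [x, y] row per non-zero pixel
redx, redy = reds.mean(axis=0)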

NumPy, PIL adding an image

I'm trying to add two images together using NumPy and PIL. The way I would do this in MATLAB would be something like:
>> M1 = imread('_1.jpg');
>> M2 = imread('_2.jpg');
>> resM = M1 + M2;
>> imwrite(resM, 'res.jpg');
I get something like this:
(MATLAB result image; link now dead)
Using a compositing program and adding the images the MATLAB result seems to be right.
In Python I'm trying to do the same thing like this:
from PIL import Image
from numpy import *
im1 = Image.open('/Users/rem7/Desktop/_1.jpg')
im2 = Image.open('/Users/rem7/Desktop/_2.jpg')
im1arr = asarray(im1)
im2arr = asarray(im2)
addition = im1arr + im2arr
resultImage = Image.fromarray(addition)
resultImage.save('/Users/rem7/Desktop/a.jpg')
and I get something like this:
(Python result image; link now dead)
Why am I getting all those funky colors? I also tried using ImageMath.eval("a+b", a=im1, b=im2), but I get an error about RGB unsupported.
I also saw that there is an Image.blend() but that requires an alpha.
What's the best way to achieve what I'm looking for?
Source Images (images have been removed):
Humm, OK, well I added the source images using the add image icon and they show up when I'm editing the post, but for some reason the images don't show up in the post.
(images have been removed) 2013 05 09
As everyone suggested already, the weird colors you're observing are overflow. And as you point out in the comment of schnaader's answer you still get overflow if you add your images like this:
addition=(im1arr+im2arr)/2
The reason for this overflow is that your NumPy arrays (im1arr, im2arr) are of the uint8 type (i.e. 8-bit). This means each element of the array can only hold values up to 255, so when your sum exceeds 255, it wraps around to 0:
>>> array([255,10,100],dtype='uint8') + array([1,10,160],dtype='uint8')
array([ 0, 20, 4], dtype=uint8)
To avoid overflow, your arrays should be able to contain values beyond 255. You need to convert them to floats for instance, perform the blending operation and convert the result back to uint8:
im1arrF = im1arr.astype('float')
im2arrF = im2arr.astype('float')
additionF = (im1arrF+im2arrF)/2
addition = additionF.astype('uint8')
You should not do this:
addition = im1arr/2 + im2arr/2
as you lose information by squashing the dynamic range of the image (you effectively make the images 7-bit) before you perform the blending operation.
MATLAB note: the reason you don't see this problem in MATLAB, is probably because MATLAB takes care of the overflow implicitly in one of its functions.
Using PIL's blend() with an alpha value of 0.5 would be equivalent to (im1arr + im2arr)/2. Blend does not require that the images have alpha layers.
Try this:
from PIL import Image
im1 = Image.open('/Users/rem7/Desktop/_1.jpg')
im2 = Image.open('/Users/rem7/Desktop/_2.jpg')
Image.blend(im1,im2,0.5).save('/Users/rem7/Desktop/a.jpg')
To clamp numpy array values:
>>> c = a + b
>>> c[c > 255] = 255
It seems the code you posted just sums up the values, and values bigger than 255 overflow. You want something like "(a + b) / 2" or "min(a + b, 255)". The latter seems to be the way your MATLAB example does it.
Your sample images are not showing up for me, so I am going to do a bit of guessing.
I can't remember exactly how the NumPy to PIL conversion works, but there are two likely cases. I am 95% sure it is 1, but am giving 2 just in case I am wrong.
1) im1Arr is an MxN array of integers (ARGB), and when you add im1arr and im2arr together you are overflowing from one channel into the next if the components b1+b2>255. I am guessing MATLAB represents its images as MxNx3 arrays so each color channel is separate. You can solve this by splitting the PIL image channels and then making numpy arrays.
2) im1Arr is an MxNx3 array of bytes, and when you add im1arr and im2arr together you are wrapping the components around.
You are also going to have to rescale the range back to between 0-255 before displaying. Your choices are divide by 2, scale by 255/array.max(), or do a clip. I don't know what MATLAB does.
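As a rough sketch, those rescaling options (assuming numpy is imported as in the question, i.e. from numpy import *, and addition has been computed as a float array) would be:
halved  = addition / 2                          # divide by 2
scaled  = addition * (255.0 / addition.max())   # scale by 255/array.max()
clipped = clip(addition, 0, 255)                # clip to the valid range
result  = clipped.astype('uint8')               # pick one and convert back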
