Creating an image from a dictionary using PIL - python

I have a dictionary that maps coordinate tuples (in the range (0, 0) to (199, 199)) to grayscale values (integers between 0 and 255). Is there a good way to create a PIL Image that has the specified values at the specified coordinates? I'd prefer a solution that only uses PIL to one that uses scipy.

You can try image.putpixel() to change the color of a pixel at a particular position. Example code -
from PIL import Image
from random import randint
d = {(x,y):randint(0,255) for x in range(200) for y in range(200)}
im = Image.new('L',(200,200))
for i in d:
    im.putpixel(i, d[i])
im.save('blah.png')
It gave me a result like this: a 200x200 image of random grayscale noise.

You could do it with putpixel(), but that could potentially involve tens of thousands of calls, so how much it matters depends on how many coordinate tuples the dictionary defines. I've included the method shown in each of the current answers for comparison (including my own, after a small change to how it initializes the data buffer that measurably sped it up).
To make a level playing field, for testing purposes the input dictionary randomly selects only half of the possible pixels in the image to define and allows the rest to be set to a default background color. Anand S Kumar's answer currently doesn't do the latter, but the slightly modified version shown below does.
All produce the same image from the data.
from __future__ import print_function
import sys
from textwrap import dedent
import timeit

N = 100  # number of executions of each algorithm
R = 3  # number of repetitions of executions

# common setup for all algorithms - is not included in algorithm timing
setup = dedent("""
    from random import randint, sample, seed
    from PIL import Image

    seed(42)
    background = 0  # default color of pixels not defined in dictionary
    width, height = 200, 200
    # create test dict of input data defining half of the pixel coords in image
    coords = sample([(x,y) for x in xrange(width) for y in xrange(height)],
                    width * height // 2)
    d = {coord: randint(0, 255) for coord in coords}
""")

algorithms = {
    "Anand S Kumar": dedent("""
        im = Image.new('L', (width, height), color=background)  # set bgrd
        for i in d:
            im.putpixel(i, d[i])
    """),

    "martineau": dedent("""
        data = bytearray([background] * width * height)
        for (x, y), v in d.iteritems():
            data[x + y*width] = v
        im = Image.frombytes('L', (width, height), str(data))
    """),

    "PM 2Ring": dedent("""
        data = [background] * width * height
        for i in d:
            x, y = i
            data[x + y * width] = d[i]
        im = Image.new('L', (width, height))
        im.putdata(data)
    """),
}

# execute and time algorithms, collecting results
timings = [
    (label,
     min(timeit.repeat(algorithms[label], setup=setup, repeat=R, number=N)),
    ) for label in algorithms
]

print('fastest to slowest execution speeds (Python {}.{}.{})\n'.format(
          *sys.version_info[:3]),
      ' ({:,d} executions, best of {:d} repetitions)\n'.format(N, R))

longest = max(len(timing[0]) for timing in timings)  # length of longest label
ranked = sorted(timings, key=lambda t: t[1])  # ascending sort by execution time
fastest = ranked[0][1]
for timing in ranked:
    print("{:>{width}} : {:9.6f} secs, rel speed {:4.2f}x, {:6.2f}% slower".
          format(timing[0], timing[1], round(timing[1]/fastest, 2),
                 round((timing[1]/fastest - 1) * 100, 2), width=longest))
Output:
fastest to slowest execution speeds (Python 2.7.10)
(100 executions, best of 3 repetitions)
martineau : 0.255203 secs, rel speed 1.00x, 0.00% slower
PM 2Ring : 0.307024 secs, rel speed 1.20x, 20.31% slower
Anand S Kumar : 1.835997 secs, rel speed 7.19x, 619.43% slower

As martineau says, putpixel() is OK when you're modifying a few random pixels, but it's not so efficient for building whole images. My approach is similar to his, except that I use a list of ints and .putdata(). Here's some code to test these 3 different approaches.
from __future__ import print_function
from PIL import Image
from random import seed, randint

width, height = 200, 200
background = 0
seed(42)
d = dict(((x, y), randint(0, 255)) for x in range(width) for y in range(height))

algorithm = 2
print('Algorithm', algorithm)

if algorithm == 0:
    im = Image.new('L', (width, height))
    for i in d:
        im.putpixel(i, d[i])
elif algorithm == 1:
    buff = bytearray((background for _ in xrange(width * height)))
    for (x, y), v in d.items():
        buff[y*width + x] = v
    im = Image.frombytes('L', (width, height), str(buff))
elif algorithm == 2:
    data = [background] * width * height
    for i in d:
        x, y = i
        data[x + y * width] = d[i]
    im = Image.new('L', (width, height))
    im.putdata(data)

#im.show()
fname = 'qrand%d.png' % algorithm
im.save(fname)
print(fname, 'saved')
Here are typical timings on my 2GHz machine running Python 2.6.6
$ time ./qtest.py
Algorithm 0
qrand0.png saved
real 0m0.926s
user 0m0.768s
sys 0m0.040s
$ time ./qtest.py
Algorithm 1
qrand1.png saved
real 0m0.733s
user 0m0.548s
sys 0m0.020s
$ time ./qtest.py
Algorithm 2
qrand2.png saved
real 0m0.638s
user 0m0.520s
sys 0m0.032s

Related

How to efficiently loop over an image pixel by pixel in python OpenCV?

What I want to do is to loop over an image pixel by pixel using each pixel value to draw a circle in another corresponding image.
My approach is as follows:
it = np.nditer(pixels, flags=['multi_index'])
while not it.finished:
    y, x = it.multi_index
    color = it[0]
    it.iternext()
    center = (x*20 + 10, y*20 + 10)  # corresponding circle center
    cv2.circle(circles, center, int(8 * color/255), 255, -1)
Looping this way is somewhat slow. I tried adding numba's @njit decorator, but apparently it has problems with OpenCV.
- Input images are 32x32 pixels
- They map to output images that are 32x32 circles
- Each circle is drawn inside a 20x20 pixel square
- That is, the output image is 640x640 pixels
A single image takes around 100 ms to be transformed to circles, and I was hoping to lower that to 30 ms or less.
Any recommendations?
When:
- dealing with drawings,
- the number of possible options does not exceed a common-sense value (in this case: 256),
- speed is important (I guess that's always the case), and
- there's no other restriction preventing this approach,
the best way would be to "cache" the drawings: draw them upfront (or on demand, depending on the overhead you can accept) into another array, and when the drawing should normally take place, simply take the appropriate drawing from the cache and place it in the target area (as @ChristophRackwitz stated in one of the comments), which is a very fast NumPy operation compared to drawing.
As a side note, this is a generic method, not necessarily limited to drawings.
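Here's a minimal sketch of that caching idea (my own condensed version; the sizes match the question's 32x32 input and 20x20 cells, and the full test program follows below):
import numpy as np
import cv2

cell = 20                                                    # output square per input pixel
cache = np.zeros((256, cell, cell), dtype=np.uint8)
for v in range(256):                                         # draw each possible circle exactly once
    cv2.circle(cache[v], (cell // 2, cell // 2), int(8 * v / 255), 255, -1)

src = np.random.randint(0, 256, (32, 32), dtype=np.uint8)    # stand-in input image
out = np.zeros((32 * cell, 32 * cell), dtype=np.uint8)
for (y, x), v in np.ndenumerate(src):                        # blit the precomputed stamp
    out[y * cell:(y + 1) * cell, x * cell:(x + 1) * cell] = cache[v]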
But the results you claim you're getting (~100 ms per 32x32 image, producing a 640x640 one) didn't make any sense to me (OpenCV is also fast, and 1024 circles shouldn't be such a big deal), so I created a program to convince myself.
code00.py:
#!/usr/bin/env python

import itertools as its
import sys
import time

import cv2
import numpy as np


def draw_img_orig(arr_in, arr_out):
    factor = round(arr_out.shape[0] / arr_in.shape[0])
    factor_2 = factor // 2
    it = np.nditer(arr_in, flags=["multi_index"])
    while not it.finished:
        y, x = it.multi_index
        color = it[0]
        it.iternext()
        center = (x * factor + factor_2, y * factor + factor_2)  # corresponding circle center
        cv2.circle(arr_out, center, int(8 * color / 255), 255, -1)


def draw_img_regular_iter(arr_in, arr_out):
    factor = round(arr_out.shape[0] / arr_in.shape[0])
    factor_2 = factor // 2
    for row_idx, row in enumerate(arr_in):
        for col_idx, col in enumerate(row):
            cv2.circle(arr_out, (col_idx * factor + factor_2, row_idx * factor + factor_2), int(8 * col / 255), 255, -1)


def draw_img_cache(arr_in, arr_out, cache):
    factor = round(arr_out.shape[0] / arr_in.shape[0])
    it = np.nditer(arr_in, flags=["multi_index"])
    while not it.finished:
        y, x = it.multi_index
        yf = y * factor
        xf = x * factor
        arr_out[yf: yf + factor, xf: xf + factor] = cache[it[0]]
        it.iternext()


def generate_input_images(shape, count, dtype=np.uint8):
    return np.random.randint(256, size=(count,) + shape, dtype=dtype)


def generate_circles(shape, dtype=np.uint8, count=256, rad_func=lambda arg: int(8 * arg / 255), color=255):
    ret = np.zeros((count,) + shape, dtype=dtype)
    cy = shape[0] // 2
    cx = shape[1] // 2
    for idx, arr in enumerate(ret):
        cv2.circle(arr, (cx, cy), rad_func(idx), color, -1)
    return ret


def test_draw(imgs_in, img_out, count, draw_func, *draw_func_args):
    print("\nTesting {:s}".format(draw_func.__name__))
    start = time.time()
    for i, e in enumerate(its.cycle(range(imgs_in.shape[0]))):
        draw_func(imgs_in[e], img_out, *draw_func_args)
        if i >= count:
            break
    print("Took {:.3f} seconds ({:d} images)".format(time.time() - start, count))


def test_speed(shape_in, shape_out, dtype=np.uint8):
    imgs_in = generate_input_images(shape_in, 50, dtype=dtype)
    #print(imgs_in.shape, imgs_in)
    img_out = np.zeros(shape_out, dtype=dtype)
    circles = generate_circles((shape_out[0] // shape_in[0], shape_out[1] // shape_in[1]))
    count = 250
    funcs_data = (
        (draw_img_orig,),
        (draw_img_regular_iter,),
        (draw_img_cache, circles),
    )
    for func_data in funcs_data:
        test_draw(imgs_in, img_out, count, func_data[0], *func_data[1:])


def test_accuracy(shape_in, shape_out, dtype=np.uint8):
    img_in = np.arange(np.product(shape_in), dtype=dtype).reshape(shape_in)
    circles = generate_circles((shape_out[0] // shape_in[0], shape_out[1] // shape_in[1]))
    funcs_data = (
        (draw_img_orig, "orig.png"),
        (draw_img_regular_iter, "regit.png"),
        (draw_img_cache, "cache.png", circles),
    )
    imgs_out = [np.zeros(shape_out, dtype=dtype) for _ in funcs_data]
    for idx, func_data in enumerate(funcs_data):
        func_data[0](img_in, imgs_out[idx], *func_data[2:])
        cv2.imwrite(func_data[1], imgs_out[idx])
    for idx, img in enumerate(imgs_out[1:], start=1):
        if not np.array_equal(img, imgs_out[0]):
            print("Image index different: {:d}".format(idx))


def main(*argv):
    dt = np.uint8
    shape_in = (32, 32)
    factor_io = 20
    shape_out = tuple(i * factor_io for i in shape_in)
    test_speed(shape_in, shape_out, dtype=dt)
    test_accuracy(shape_in, shape_out, dtype=dt)


if __name__ == "__main__":
    print("Python {:s} {:03d}bit on {:s}\n".format(" ".join(elem.strip() for elem in sys.version.split("\n")),
                                                   64 if sys.maxsize > 0x100000000 else 32, sys.platform))
    rc = main(*sys.argv[1:])
    print("\nDone.\n")
    sys.exit(rc)
Notes:
Besides your implementation that uses np.nditer (which I placed in a function called draw_img_orig), I created 2 more:
- One that iterates the input array Pythonically (draw_img_regular_iter)
- One that uses cached circles, and also iterates via np.nditer (draw_img_cache)
In terms of tests, there are 2 of them, each performed on each of the 3 approaches above:
- Speed: measure the time taken to process a number of images
- Accuracy: compare the outputs for a 32x32 input covering the interval [0, 255] (wrapped 4 times)
Output:
[cfati@CFATI-5510-0:e:\Work\Dev\StackOverflow\q071818080]> sopr.bat
### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ###
[prompt]> dir /b
code00.py
[prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.09_test0\Scripts\python.exe" code00.py
Python 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)] 064bit on win32
Testing draw_img_orig
Took 0.908 seconds (250 images)
Testing draw_img_regular_iter
Took 1.061 seconds (250 images)
Testing draw_img_cache
Took 0.426 seconds (250 images)
Done.
[prompt]>
[prompt]> dir /b
cache.png
code00.py
orig.png
regit.png
Above are the speed test results: as seen, your approach took a bit less than a second for 250 images!!! So I was right: I don't know where your slowness comes from, but it isn't from here (maybe you got the measurements wrong?).
The regular method is a bit slower, while the cached one is ~2X faster. I ran the code on my laptop:
Win 10 pc064
CPU: Intel i7 6820HQ @ 2.70GHz (fairly old)
GPU: not relevant, as I didn't notice any spikes during execution
Regarding the accuracy test, all 3 output arrays are identical (there's no message saying otherwise); here's one of the saved images:

Quick pixel manipulation with Pillow and/or NumPy

I'm trying to improve the speed of my image manipulation as it's been too slow for actual use.
What I need to do is apply a complex transformation to the colour of every pixel in an image. The manipulation is basically a vector transform like T(r, g, b, a) => (r * x, g * x, b * y, a) - in layman's terms, a multiplication of the Red and Green values by one constant, a different multiplication for Blue, keeping Alpha. But I also need to handle pixels differently when their RGB colour falls under some specific colours: in those cases the pixel must follow a dictionary/transformation table of RGB => newRGB, again keeping alpha.
The algorithm would be:
for each pixel in image:
    if pixel[r, g, b] in special:
        return special[pixel[r, g, b]] + pixel[a]
    else:
        return T(pixel)
It's simple but speed has been sub-optimal. I believe there's some way using numpy vectors, but I could not find how.
Important details about the implementation:
- I don't care about the original buffer/image (the manipulation can be in place)
- I can use wxPython, Pillow and NumPy
- The order or dimension of the array is not important as long as the buffer keeps its length
The buffer is obtained from a wxPython Bitmap; special and (RG|B)_pal are transformation tables, and the end result will become a wxPython Bitmap too. They're obtained like this:
# buffer
bitmap = wx.Bitmap  # it's a valid wxBitmap here, this is just to let you know it exists
buff = bytearray(bitmap.GetWidth() * bitmap.GetHeight() * 4)
bitmap.CopyToBuffer(buff, wx.BitmapBufferFormat_RGBA)

self.RG_mult = 0.75
self.B_mult = 0.83

self.RG_pal = []
self.B_pal = []
for i in range(0, 256):
    self.RG_pal.append(int(i * self.RG_mult))
    self.B_pal.append(int(i * self.B_mult))

self.special = {
    # RGB: new_RGB
    # Implementation specific for the fastest access
    # with buffer keys are 24bit numbers, with PIL keys are tuples
}
Implementations I tried include direct buffer manipulation:
for x in range(0, bitmap.GetWidth() * bitmap.GetHeight()):
    index = x * 4
    r = buf[index]
    g = buf[index + 1]
    b = buf[index + 2]
    rgb = r << 16 | g << 8 | b  # 24-bit key (buffer keys are 24-bit numbers, per the note above)
    if rgb in self.special:
        special = self.special[rgb]
        buf[index] = special[0]
        buf[index + 1] = special[1]
        buf[index + 2] = special[2]
    else:
        buf[index] = self.RG_pal[r]
        buf[index + 1] = self.RG_pal[g]
        buf[index + 2] = self.B_pal[b]
Use Pillow with getdata():
pil = Image.frombuffer("RGBA", (bitmap.GetWidth(), bitmap.GetHeight()), buf)
pil_buf = []
for colour in pil.getdata():
    colour_idx = colour[0:3]
    if colour_idx in self.special:
        special = self.special[colour_idx]
        pil_buf.append((
            special[0],
            special[1],
            special[2],
            colour[3],
        ))
    else:
        pil_buf.append((
            self.RG_pal[colour[0]],
            self.RG_pal[colour[1]],
            self.B_pal[colour[2]],
            colour[3],
        ))
pil.putdata(pil_buf)
buf = pil.tobytes()
Pillow with point() and getdata() (the fastest I achieved, more than twice as fast as the others):
pil = Image.frombuffer("RGBA", (bitmap.GetWidth(), bitmap.GetHeight()), buf)
r, g, b, a = pil.split()
r = r.point(lambda r: r * self.RG_mult)
g = g.point(lambda g: g * self.RG_mult)
b = b.point(lambda b: b * self.B_mult)
pil = Image.merge("RGBA", (r, g, b, a))

i = 0
for colour in pil.getdata():
    colour_idx = colour[0:3]
    if colour_idx in self.special:
        special = self.special[colour_idx]
        pil.putpixel(
            (i % bitmap.GetWidth(), i // bitmap.GetWidth()),
            (
                special[0],
                special[1],
                special[2],
                colour[3],
            )
        )
    i += 1
buf = pil.tobytes()
I also tried working with numpy.where, but I could not get it to work. With numpy.apply_along_axis it worked, but the performance was terrible. In my other numpy attempts I could not access the RGB values together, only as separate bands.
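For illustration, a boolean mask is one way to express the special-colour replacement in NumPy; here is a minimal sketch of my own (arr stands in for the H x W x 4 RGBA array, special for the table above), which the first answer below develops at scale:
import numpy as np

arr = np.zeros((4, 4, 4), dtype=np.uint8)        # stand-in H x W x RGBA buffer
special = {(0, 0, 0): (255, 0, 255)}             # RGB -> new RGB

for old, new in special.items():
    mask = np.all(arr[..., :3] == old, axis=-1)  # True wherever the RGB matches
    arr[mask, :3] = new                          # replace RGB in place, keep alpha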
Pure NumPy Version
This first optimization relies on the fact that one probably has far fewer special colors than pixels. I use numpy to do all the inner loops. This works well with images of up to about 1 MP. If you have multiple images, I'd recommend the parallel approach.
Let's define a test case:
import requests
from io import BytesIO
from PIL import Image
import numpy as np

# Load some image, so we have the same
response = requests.get("https://upload.wikimedia.org/wikipedia/commons/4/41/Rick_Astley_Dallas.jpg")
# Make areas of known color
img = Image.open(BytesIO(response.content)).rotate(10, expand=True).rotate(-10, expand=True, fillcolor=(255, 255, 255)).convert('RGBA')
print("height: %d, width: %d (%.2f MP)" % (img.height, img.width, img.width * img.height / 1e6))
height: 5034, width: 5792 (29.16 MP)
Define our special colors:
specials = {
    (4, 1, 6): (255, 255, 255),
    (0, 0, 0): (255, 0, 255),
    (255, 255, 255): (0, 255, 0),
}
Algorithm
def transform_map(img, specials, R_factor, G_factor, B_factor):
    # Your transform
    def transform(x, a):
        a *= x
        return a.clip(0, 255).astype(np.uint8)

    # Convert to array
    img_array = np.asarray(img)
    # Extract channels
    R = img_array.T[0]
    G = img_array.T[1]
    B = img_array.T[2]
    A = img_array.T[3]
    # Find special colors
    # First, calculate a unique hash per pixel
    color_hashes = (R + 2**8 * G + 2**16 * B)
    # Find indices of special colors
    special_idxs = []
    for k, v in specials.items():
        key_arr = np.array(list(k))
        val_arr = np.array(list(v))
        spec_hash = key_arr[0] + 2**8 * key_arr[1] + 2**16 * key_arr[2]
        special_idxs.append(
            {
                'mask': np.where(np.isin(color_hashes, spec_hash)),
                'value': val_arr
            }
        )
    # Apply transform to whole image
    R = transform(R, R_factor)
    G = transform(G, G_factor)
    B = transform(B, B_factor)
    # Replace values where special colors were found
    for idx in special_idxs:
        R[idx['mask']] = idx['value'][0]
        G[idx['mask']] = idx['value'][1]
        B[idx['mask']] = idx['value'][2]
    return Image.fromarray(np.array([R, G, B, A]).T, mode='RGBA')
And finally some benchmarks on an Intel Core i5-6300U @ 2.40GHz:
import time

times = []
for i in range(10):
    t0 = time.time()
    # Test
    transform_map(img, specials, 1.2, .9, 1.2)
    #
    t1 = time.time()
    times.append(t1 - t0)
np.round(times, 2)
print('average run time: %.2f +/-%.2f' % (np.mean(times), np.std(times)))
average run time: 9.72 +/-0.91
EDIT: Parallelization
With the same setup as above, we can get a 2x speed increase on large images (small ones are faster without numba):
from numba import njit, prange
from numba.core import types
from numba.typed import Dict

# Map dict of special colors or transform over array of pixel values
@njit(parallel=True, locals={'px_hash': types.uint32})
def check_and_transform(img_array, d, T):
    # Save shape for later
    shape = img_array.shape
    # Flatten image for 1-d iteration
    img_array_flat = img_array.reshape(-1, 3).copy()
    N = img_array_flat.shape[0]
    # Replace or map
    for i in prange(N):
        px_hash = np.uint32(0)
        px_hash += img_array_flat[i, 0]
        px_hash += types.uint32(2**8) * img_array_flat[i, 1]
        px_hash += types.uint32(2**16) * img_array_flat[i, 2]
        try:
            img_array_flat[i] = d[px_hash]
        except Exception:
            img_array_flat[i] = (img_array_flat[i] * T).astype(np.uint8)
    # return image
    return img_array_flat.reshape(shape)

# Wrapper for function above
def map_or_transform_jit(image: Image, specials: dict, T: np.ndarray):
    # assemble numba typed dict
    d = Dict.empty(
        key_type=types.uint32,
        value_type=types.uint8[:],
    )
    for k, v in specials.items():
        k = types.uint32(k[0] + 2**8 * k[1] + 2**16 * k[2])
        v = np.array(v, dtype=np.uint8)
        d[k] = v
    # get rgb channels
    img_arr = np.array(image)
    rgb = img_arr[:, :, :3].copy()
    img_shape = img_arr.shape
    # apply map
    rgb = check_and_transform(rgb, d, T)
    # set color channels
    img_arr[:, :, :3] = rgb
    return Image.fromarray(img_arr, mode='RGBA')
# Benchmark
import time

times = []
for i in range(10):
    t0 = time.time()
    # Test
    test_img = map_or_transform_jit(img, specials, np.array([1, .5, .5]))
    #
    t1 = time.time()
    times.append(t1 - t0)
np.round(times, 2)
print('average run time: %.2f +/- %.2f' % (np.mean(times), np.std(times)))
test_img
average run time: 3.76 +/- 0.08

Python gray-scale conversion of an image

So I made this script that takes an image and turns it into a grayscale version of itself.
I know that a lot of modules can do this automatically, like .convert('L'), but I want to do it manually by myself to learn more about Python programming.
It works OK, but it's very slow: for a 200x200 image it takes 10 seconds. What can I modify to make it go faster?
It works like this: it takes a pixel, calculates the average of the R, G and B values, sets all three to that average, adds 40 to each one for more brightness, and writes the pixel.
Here is the code:
import imageio
import os
from PIL import Image, ImageDraw
from random import randrange

img = '/storage/emulated/0/DCIM/Camera/IMG_20190714_105429.jpg'
f = open('network.csv', 'a+')
pic = imageio.imread(img)
picture = Image.open(img)
draw = ImageDraw.Draw(picture)
f.write('\n')

def por():
    cien = pic.shape[0] * pic.shape[1]
    prog = pic.shape[1] * (h - 1) + w
    porc = prog * 100 / cien
    porc = round(porc)
    porc = str(porc)
    print(porc + '%')

rh = int(pic.shape[0])
wh = int(pic.shape[1])
for h in range(rh):
    for w in range(wh):
        prom = int(pic[h, w][0]) + int(pic[h, w][1]) + int(pic[h, w][2])
        prom = prom / 3
        prom = round(prom)
        prom = int(prom)
        prom = prom + 40
        por()
        draw.point((w, h), (prom, prom, prom))
picture.save('/storage/emulated/0/DCIM/Camera/Modificada.jpg')
PIL does this for you.
from PIL import Image

img = Image.open('image.png').convert('L')  # 'L' is Pillow's 8-bit grayscale mode
img.save('modified.png')
The method you are using for conversion of RGB to greyscale is called averaging.
from PIL import Image
image = Image.open(r"image_path").convert("RGB")
mapping = list(map(lambda x: int(x[0]*.33 + x[1]*.33 + x[2]*.33), list(image.getdata())))
Greyscale_img = Image.new("L", (image.size[0], image.size[1]), 255)
Greyscale_img.putdata(mapping)
Greyscale_img.show()
The above method (averaging) isn't recommended for converting a colored image to greyscale, because it treats each color channel equally, assuming humans perceive all colors equally (which is not the truth).
You should rather use something like the ITU-R 601-2 luma transform (the method PIL uses for converting RGB to L), as it uses a perceptual, luminance-preserving conversion to grayscale.
For that, just replace the line
mapping = list(map(lambda x: int(x[0]*.33 + x[1]*.33 + x[2]*.33), list(image.getdata())))
with
mapping = list(map(lambda x: int(x[0]*(299/1000) + x[1]*(587/1000) + x[2]*(114/1000)), list(image.getdata())))
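As a quick check of the difference, here is a tiny example of my own evaluating both weightings on a pure-green pixel:
r, g, b = 0, 255, 0                                           # pure green
print(int(r * .33 + g * .33 + b * .33))                       # averaging: 84
print(int(r * (299/1000) + g * (587/1000) + b * (114/1000)))  # luma: 149
The luma weighting rates the green pixel nearly twice as bright, matching the eye's higher sensitivity to green.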
P.S.: I didn't add 40 to each pixel value, as it doesn't really make sense in the conversion of an image to greyscale.
Python is an interpreted language and not really fast enough for pixel loops. Cython is a sister project that can compile Python to fast native code, and can be much faster than plain Python for code like this.
You could also try using a Python math library like numpy or pyvips. These add array operations to Python: you can write lines like a += 12 * b where a and b are whole images and they'll operate on every pixel at the same time. You get the control of being able to specify every detail of the operation yourself combined with the speed of something like C.
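For instance, here is a minimal NumPy sketch of the question's average-plus-40 operation (file names are placeholders), replacing the per-pixel loop entirely:
import numpy as np
import imageio

pic = imageio.imread('input.jpg').astype(np.uint16)  # widen so the channel sum can't overflow
grey = pic[:, :, :3].sum(axis=2) // 3 + 40           # per-pixel average plus brightness
grey = np.clip(grey, 0, 255).astype(np.uint8)        # clamp back to the valid 8-bit range
imageio.imwrite('output.jpg', grey)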
For example, in pyvips you could write:
import sys
import pyvips
x = pyvips.Image.new_from_file(sys.argv[1], access="sequential")
x = 299 / 1000 * x[0] + 587 / 1000 * x[1] + 114 / 1000 * x[2]
x.write_to_file(sys.argv[2])
This copies the equation from Vasu Deo.S's excellent answer. Run it with something like:
./grey2.py ~/pics/k2.jpg x.png
To read the JPG image k2.jpg and write a greyscale PNG called x.png.
You can approximate conversion in linear space with a pow before and after, assuming your source image is sRGB:
x = x ** 2.2
x = 299 / 1000 * x[0] + 587 / 1000 * x[1] + 114 / 1000 * x[2]
x = x ** (1 / 2.2)
Though that's not exactly correct since it's missing the linear part of the sRGB power function.
You could also simply use x = x.colourspace('b-w'), pyvips's built-in greyscale operation.

Python - Quick batch modification of PNGs

I wrote a python script with combines images in unique ways for an OpenGL shader. The problem is that I have a large number of very large maps and it takes a long time to process. Is there a way to write this in a quicker fashion?
import numpy as np

map_data = {}
image_data = {}
for map_postfix in names:
    file_name = inputRoot + '-' + map_postfix + resolution + '.png'
    print 'Loading ' + file_name
    image_data[map_postfix] = Image.open(file_name, 'r')
    map_data[map_postfix] = image_data[map_postfix].load()

color = map_data['ColorOnly']
ambient = map_data['AmbientLight']
shine = map_data['Shininess']

width = image_data['ColorOnly'].size[0]
height = image_data['ColorOnly'].size[1]

arr = np.zeros((height, width, 4), dtype=int)
for i in range(width):
    for j in range(height):
        ambient_mod = ambient[i, j][0] / 255.0
        arr[j, i, :] = [color[i, j][0] * ambient_mod, color[i, j][1] * ambient_mod, color[i, j][2] * ambient_mod, shine[i, j][0]]

print 'Converting Color Map to image'
return Image.fromarray(arr.astype(np.uint8))
This is just a sample of a large number of batch processes, so I am more interested in whether there is a faster way to iterate over and modify an image file. Almost all the time is spent in the nested loop rather than in loading and saving.
Vectorised-code example -- test the effect on yours with timeit or zmq.Stopwatch()
Reported to go from 22.14 seconds to 0.1624 seconds, about a 136x speedup!
While your code seems to loop just over RGBA[x,y], let me show a "vectorised" syntax of a code that benefits from numpy matrix-manipulation utilities. Forget the RGB/YUV manipulation (it was originally based on OpenCV rather than PIL); re-use the vectorised-syntax approach to avoid for-loops and adapt it to work efficiently for your calculus. A wrong order of operations may more than double your processing time.
And use a test / optimise / re-test loop for speeding up.
For testing, use standard python timeit if [msec] resolution is enough.
Go rather for zmq.Stopwatch() if you need [usec] resolution.
# Vectorised-code example, to see the syntax & principles
# do not mind another order of RGB->BRG layers
# it has been OpenCV traditional convention
# it has no other meaning in this demo of VECTORISED code

def get_YUV_U_Cb_Rec709_BRG_frame( brgFRAME ):  # For the Rec. 709 primaries used in gamma-corrected sRGB, fast, VECTORISED MUL/ADD CODE
    out  = numpy.zeros( brgFRAME.shape[0:2] )
    out -= 0.09991 / 255 * brgFRAME[:,:,1]      # // Red
    out -= 0.33601 / 255 * brgFRAME[:,:,2]      # // Green
    out += 0.436   / 255 * brgFRAME[:,:,0]      # // Blue
    return out

# normalise to <0.0 - 1.0> before vectorised MUL/ADD, saves [usec] ...
# on 480x640 [px] faster goes about 2.2 [msec] instead of 5.4 [msec]
In your case, using dtype = numpy.int, I guess it shall be faster to MUL first by ambient[:,:,0] and finally DIV to normalise: arr[:,:,:3] /= 255
# test if this goes even faster once saving the vectorised overhead on matrix DIV
arr[:,:,0] = color[:,:,0] * ambient[:,:,0] / 255 # MUL remains INT, shall precede DIV
arr[:,:,1] = color[:,:,1] * ambient[:,:,0] / 255 #
arr[:,:,2] = color[:,:,2] * ambient[:,:,0] / 255 #
arr[:,:,3] = shine[:,:,0] # STO alpha
So how it may look in your algo?
One need not have Peter Jackson's impressive budget and time (he once planned, spanned and executed immense number-crunching over 3 years in a New Zealand hangar, overcrowded by a herd of SGI workstations, while producing the fully digital mastering assembly line for "The Lord of the Rings", right down to frame-by-frame pixel manipulation) to realise that milliseconds, microseconds and even nanoseconds in a mass-production pipeline simply do matter.
So, take a deep breath and test and re-test so as to optimise your real-world imagery processing performance to levels that your project needs.
Hope this may help you on this:
# OPTIONAL for performance testing -------------# ||||||||||||||||||||||||||||||||
from zmq import Stopwatch                       # _MICROSECOND_ timer
#                                               # timer-resolution step ~ 21 nsec
#                                               # Yes, NANOSECOND-s
# OPTIONAL for performance testing -------------# ||||||||||||||||||||||||||||||||

arr = np.zeros( ( height, width, 4 ), dtype = int )

aStopWatch = Stopwatch()                        # ||||||||||||||||||||||||||||||||
# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\# <<< your original code segment
# aStopWatch.start()                            # |||||||||||||__.start
# for i in range( width ):
#     for j in range( height ):
#         ambient_mod = ambient[i,j][0] / 255.0
#         arr[j, i, :] = [ color[i,j][0] * ambient_mod, \
#                          color[i,j][1] * ambient_mod, \
#                          color[i,j][2] * ambient_mod, \
#                          shine[i,j][0] \
#                          ]
# usec_for = aStopWatch.stop()                  # |||||||||||||__.stop
# print 'Converting Color Map to image'
# print ' FOR processing took ', usec_for, ' [usec]'
# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\# <<< proposed alternative
aStopWatch.start()                              # |||||||||||||__.start

# reduced numpy broadcasting one dimension less # ref. comments below
arr[:,:, 0] = color[:,:,0] * ambient[:,:,0]     # MUL ambient[0] * [{R}]
arr[:,:, 1] = color[:,:,1] * ambient[:,:,0]     # MUL ambient[0] * [{G}]
arr[:,:, 2] = color[:,:,2] * ambient[:,:,0]     # MUL ambient[0] * [{B}]
arr[:,:,:3] /= 255                              # DIV 255 to normalise
arr[:,:, 3] = shine[:,:,0]                      # STO shine[ 0] in [3]

usec_Vector = aStopWatch.stop()                 # |||||||||||||__.stop
print 'Converting Color Map to image'
print ' Vectorised processing took ', usec_Vector, ' [usec]'
return Image.fromarray( arr.astype( np.uint8 ) )

Fast algorithm to detect main colors in an image?

Does anyone know a fast algorithm to detect main colors in an image?
I'm currently using k-means to find the colors, together with Python's PIL, but it's very slow: one 200x200 image takes 10 seconds to process, and I have several hundred thousand images.
One fast method would be to simply divide up the color space into bins and then construct a histogram. It's fast because you only need a small number of decisions per pixel, and you only need one pass over the image (and one pass over the histogram to find the maxima).
Update: a rough diagram may help explain what I mean. On the x-axis is the color, divided into discrete bins. The y-axis shows the value of each bin, which is the number of pixels matching the color range of that bin. An image with two main colors shows two distinct peaks.
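A minimal sketch of the binning idea (my own, assuming 4 bits per channel, i.e. 4096 bins, with a placeholder file name):
import numpy as np
from PIL import Image

img = np.asarray(Image.open('photo.jpg').convert('RGB'), dtype=np.int64)
q = img >> 4                                            # quantise each channel to 4 bits
bins = (q[..., 0] << 8) | (q[..., 1] << 4) | q[..., 2]  # one bin index per pixel
counts = np.bincount(bins.ravel(), minlength=4096)      # single pass builds the histogram
top = counts.argsort()[-2:][::-1]                       # the two fullest bins
print([(((b >> 8) & 15) * 16 + 8,                       # map each bin back to its centre RGB
        ((b >> 4) & 15) * 16 + 8,
        (b & 15) * 16 + 8) for b in top])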
With a bit of tinkering, this code (which I suspect you might have already seen!) can be sped up to just under a second.
If you increase the kmeans(min_diff=...) value to about 10, it produces very similar results, but runs in 900 ms (compared to about 5000-6000 ms with min_diff=1).
Further decreasing the size of the thumbnails to 100x100 doesn't seem to impact the results much either, and takes the runtime to about 250 ms.
Here's a slightly tweaked version of the code, which just parameterises the min_diff value and includes some terrible code to generate an HTML file with the results/timing:
from collections import namedtuple
from math import sqrt
import random

try:
    import Image
except ImportError:
    from PIL import Image

Point = namedtuple('Point', ('coords', 'n', 'ct'))
Cluster = namedtuple('Cluster', ('points', 'center', 'n'))

def get_points(img):
    points = []
    w, h = img.size
    for count, color in img.getcolors(w * h):
        points.append(Point(color, 3, count))
    return points

rtoh = lambda rgb: '#%s' % ''.join(('%02x' % p for p in rgb))

def colorz(filename, n=3, mindiff=1):
    img = Image.open(filename)
    img.thumbnail((200, 200))
    w, h = img.size

    points = get_points(img)
    clusters = kmeans(points, n, mindiff)
    rgbs = [map(int, c.center.coords) for c in clusters]
    return map(rtoh, rgbs)

def euclidean(p1, p2):
    return sqrt(sum([
        (p1.coords[i] - p2.coords[i]) ** 2 for i in range(p1.n)
    ]))

def calculate_center(points, n):
    vals = [0.0 for i in range(n)]
    plen = 0
    for p in points:
        plen += p.ct
        for i in range(n):
            vals[i] += (p.coords[i] * p.ct)
    return Point([(v / plen) for v in vals], n, 1)

def kmeans(points, k, min_diff):
    clusters = [Cluster([p], p, p.n) for p in random.sample(points, k)]

    while 1:
        plists = [[] for i in range(k)]

        for p in points:
            smallest_distance = float('Inf')
            for i in range(k):
                distance = euclidean(p, clusters[i].center)
                if distance < smallest_distance:
                    smallest_distance = distance
                    idx = i
            plists[idx].append(p)

        diff = 0
        for i in range(k):
            old = clusters[i]
            center = calculate_center(plists[i], old.n)
            new = Cluster(plists[i], center, old.n)
            clusters[i] = new
            diff = max(diff, euclidean(old.center, new.center))

        if diff < min_diff:
            break

    return clusters

if __name__ == '__main__':
    import sys
    import time
    for x in range(1, 11):
        sys.stderr.write("mindiff %s\n" % (x))
        start = time.time()
        fname = "akira_940x700.png"
        col = colorz(fname, 3, x)
        print "<h1>%s</h1>" % x
        print "<img src='%s'>" % (fname)
        print "<br>"
        for a in col:
            print "<div style='background-color: %s; width:20px; height:20px'> </div>" % (a)
        print "<br>Took %.02fms<br>" % ((time.time() - start) * 1000)
K-means is a good choice for this task because you know the number of main colors beforehand, but you need to optimize it. I think you can reduce your image size: just scale it down to 100x100 pixels or so. Find the size at which your algorithm works at acceptable speed. Another option is to use dimensionality reduction before k-means clustering.
And try to find a fast k-means implementation. Writing such things in pure Python is a misuse of Python; it's not meant to be used like this.
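For instance, a minimal sketch combining both suggestions (downscale first, then hand the pixels to SciPy's compiled k-means; the file name is a placeholder):
import numpy as np
from PIL import Image
from scipy.cluster.vq import kmeans

img = Image.open('photo.jpg')
img.thumbnail((100, 100))                    # shrink before clustering
pixels = np.asarray(img.convert('RGB'), dtype=float).reshape(-1, 3)
centroids, _ = kmeans(pixels, 3)             # 3 main colors via the C-implemented k-means
print(centroids.astype(int))                 # approximate dominant RGB values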
