Python gray-scale conversion of an image

So I made this script that takes an image and turns it into a grayscale version of itself.
I know that a lot of modules can do this automatically, like .convert('L'), but I want to do it manually by myself to learn more about Python programming.
It works OK, but it's very slow: for a 200x200 pixel image it takes 10 seconds. What can I modify to make it go faster?
It works like this: it takes a pixel, calculates the average of the R, G and B values, sets all three to that average, adds 40 to each one for more brightness, and writes the pixel.
Here is the code:
import imageio
import os
from PIL import Image, ImageDraw
from random import randrange

img = '/storage/emulated/0/DCIM/Camera/IMG_20190714_105429.jpg'
f = open('network.csv', 'a+')
pic = imageio.imread(img)
picture = Image.open(img)
draw = ImageDraw.Draw(picture)
f.write('\n')

def por():
    # prints overall progress as a percentage (reads the loop variables h and w as globals)
    cien = pic.shape[0] * pic.shape[1]
    prog = pic.shape[1] * (h - 1) + w
    porc = prog * 100 / cien
    porc = round(porc)
    porc = str(porc)
    print(porc + '%')

rh = int(pic.shape[0])
wh = int(pic.shape[1])
for h in range(rh):
    for w in range(wh):
        prom = int(pic[h, w][0]) + int(pic[h, w][1]) + int(pic[h, w][2])
        prom = prom / 3
        prom = round(prom)
        prom = int(prom)
        prom = prom + 40
        por()
        draw.point((w, h), (prom, prom, prom))

picture.save('/storage/emulated/0/DCIM/Camera/Modificada.jpg')

PIL does this for you.
from PIL import Image
img = Image.open('image.png').convert('L')  # 'L' is PIL's greyscale mode
img.save('modified.png')

The method you are using for conversion of RGB to greyscale is called averaging.
from PIL import Image
image = Image.open(r"image_path").convert("RGB")
mapping = list(map(lambda x: int(x[0]*.33 + x[1]*.33 + x[2]*.33), list(image.getdata())))
Greyscale_img = Image.new("L", (image.size[0], image.size[1]), 255)
Greyscale_img.putdata(mapping)
Greyscale_img.show()
The above method (averaging) isn't recommended for converting a colored image to greyscale, as it treats each color channel equally, assuming humans perceive all colors equally (which is not true).
You should rather use something like the ITU-R 601-2 luma transform (the method PIL uses for converting RGB to L), as it performs a perceptual, luminance-preserving conversion to greyscale.
For that, just replace the line
mapping = list(map(lambda x: int(x[0]*.33 + x[1]*.33 + x[2]*.33), list(image.getdata())))
with
mapping = list(map(lambda x: int(x[0]*(299/1000) + x[1]*(587/1000) + x[2]*(114/1000)), list(image.getdata())))
P.S.: I didn't add 40 to each pixel value, as it doesn't really make sense for the conversion of the image to greyscale.
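Since the question is about speed: the same luma transform can also be vectorized with numpy, which avoids the per-pixel Python loop entirely. A minimal sketch (my addition, assuming numpy is installed; not part of the original answer):
import numpy as np
from PIL import Image

image = Image.open(r"image_path").convert("RGB")
rgb = np.asarray(image, dtype=np.float32)   # H x W x 3 array of channel values
# weighted sum over the channel axis implements the ITU-R 601-2 luma transform
luma = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
Image.fromarray(luma.astype(np.uint8)).show()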

Python is an interpreted language and not really fast enough for pixel loops. Cython is a sister project that compiles annotated Python to C and can be much faster than plain Python for code like this.
You could also try using a Python math library like numpy or pyvips. These add array operations to Python: you can write lines like a += 12 * b where a and b are whole images and they'll operate on every pixel at the same time. You get the control of being able to specify every detail of the operation yourself combined with the speed of something like C.
For example, in pyvips you could write:
import sys
import pyvips
x = pyvips.Image.new_from_file(sys.argv[1], access="sequential")
x = 299 / 1000 * x[0] + 587 / 1000 * x[1] + 114 / 1000 * x[2]
x.write_to_file(sys.argv[2])
This copies the equation from Vasu Deo.S's excellent answer. Run it with something like:
./grey2.py ~/pics/k2.jpg x.png
to read the JPEG image k2.jpg and write a greyscale PNG called x.png.
You can approximate conversion in linear space with a pow before and after, assuming your source image is sRGB:
x = x ** 2.2
x = 299 / 1000 * x[0] + 587 / 1000 * x[1] + 114 / 1000 * x[2]
x = x ** (1 / 2.2)
Though that's not exactly correct since it's missing the linear part of the sRGB power function.
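For reference, the exact sRGB transfer function is piecewise, with a linear segment near black. A hedged sketch of the full round trip (my addition: it assumes 8-bit input, uses pyvips' ifthenelse for the branch, and uses Rec. 709 luma weights, the usual choice when working in linear light):
# decode sRGB to linear light, take the grey value, re-encode to sRGB
x = x / 255.0                                    # normalise 8-bit input to 0-1
lin = (x > 0.04045).ifthenelse(((x + 0.055) / 1.055) ** 2.4, x / 12.92)
grey = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
x = (grey > 0.0031308).ifthenelse(1.055 * grey ** (1 / 2.4) - 0.055, 12.92 * grey)
x = x * 255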
You could also simply use x = x.colourspace('b-w'), pyvips's built-in greyscale operation.

Related

Perlin noise in Python's noise library

I have a problem with generating Perlin noise for my project. As I wanted to understand how to use the library properly, I tried to follow this page step by step: https://medium.com/@yvanscher/playing-with-perlin-noise-generating-realistic-archipelagos-b59f004d8401
In the first part, there is this code:
import noise
import numpy as np
from scipy.misc import toimage

shape = (1024,1024)
scale = 100.0
octaves = 6
persistence = 0.5
lacunarity = 2.0

world = np.zeros(shape)
for i in range(shape[0]):
    for j in range(shape[1]):
        world[i][j] = noise.pnoise2(i/scale,
                                    j/scale,
                                    octaves=octaves,
                                    persistence=persistence,
                                    lacunarity=lacunarity,
                                    repeatx=1024,
                                    repeaty=1024,
                                    base=0)

toimage(world).show()
I copy-pasted it with a small change at the end (toimage is obsolete), so I have:
import noise
import numpy as np
from PIL import Image

shape = (1024,1024)
scale = 100
octaves = 6
persistence = 0.5
lacunarity = 2.0
seed = np.random.randint(0,100)

world = np.zeros(shape)
for i in range(shape[0]):
    for j in range(shape[1]):
        world[i][j] = noise.pnoise2(i/scale,
                                    j/scale,
                                    octaves=octaves,
                                    persistence=persistence,
                                    lacunarity=lacunarity,
                                    repeatx=1024,
                                    repeaty=1024,
                                    base=seed)

Image.fromarray(world, mode='L').show()
I tried a lot of different modes, but this noise is not even close to coherent noise. My result is something like this (mode='L'). Could someone explain to me what I am doing wrong?
Here is the working code. I took the liberty of cleaning it up a little. See comments for details. As a final piece of advice: when testing code, use matplotlib for visualization. Its imshow() function is way more robust than PIL's.
import noise
import numpy as np
from PIL import Image

shape = (1024,1024)
scale = .5
octaves = 6
persistence = 0.5
lacunarity = 2.0
seed = np.random.randint(0,100)

world = np.zeros(shape)

# make coordinate grid on [0,1]^2
x_idx = np.linspace(0, 1, shape[0])
y_idx = np.linspace(0, 1, shape[1])
world_x, world_y = np.meshgrid(x_idx, y_idx)

# apply perlin noise; instead of np.vectorize, consider using itertools.starmap()
world = np.vectorize(noise.pnoise2)(world_x/scale,
                                    world_y/scale,
                                    octaves=octaves,
                                    persistence=persistence,
                                    lacunarity=lacunarity,
                                    repeatx=1024,
                                    repeaty=1024,
                                    base=seed)

# here was the error: one needs to normalize the image first.
# Could be done without copying the array, though.
img = np.floor((world + .5) * 255).astype(np.uint8)  # <- normalize world first
Image.fromarray(img, mode='L').show()
If someone comes after me: with the noise library you should rather normalize with
img = np.floor((world + 1) * 127).astype(np.uint8)
This way there will not be any spots of abnormal colour, opposite to what they should be.
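Alternatively, a range-independent normalization (my addition, not from the original answer) rescales whatever range the noise actually produced, so no assumption about pnoise2's output range is needed:
# normalize by the observed min/max instead of an assumed fixed range
img = np.floor(255 * (world - world.min()) / (world.max() - world.min())).astype(np.uint8)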

How to efficiently apply a function to each channel of every pixel in an image (for color conversion)?

I'm trying to implement Reinhard's method, which uses the color distribution of a target image to color-normalize a passed-in image, for a research project. I've gotten the code to work and it outputs correctly, but it's pretty slow: it takes about 20 minutes to iterate through 300 images. I'm pretty sure the bottleneck is how I'm applying the function to each image. I'm currently iterating through each pixel of the image and applying the function below to each channel.
def reinhard(target, img):
    # converts image and target from BGR colorspace to l alpha beta
    lAB_img = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
    lAB_tar = cv2.cvtColor(target, cv2.COLOR_BGR2Lab)

    # finds mean and standard deviation for each color channel across the entire image
    (mean, std) = cv2.meanStdDev(lAB_img)
    (mean_tar, std_tar) = cv2.meanStdDev(lAB_tar)

    # iterates over image implementing formula to map color normalized pixels to target image
    for y in range(512):
        for x in range(512):
            lAB_tar[x, y, 0] = (lAB_img[x, y, 0] - mean[0]) / std[0] * std_tar[0] + mean_tar[0]
            lAB_tar[x, y, 1] = (lAB_img[x, y, 1] - mean[1]) / std[1] * std_tar[1] + mean_tar[1]
            lAB_tar[x, y, 2] = (lAB_img[x, y, 2] - mean[2]) / std[2] * std_tar[2] + mean_tar[2]

    mapped = cv2.cvtColor(lAB_tar, cv2.COLOR_Lab2BGR)
    return mapped
My supervisor told me that I could try using a matrix to apply the function all at once to improve the runtime but I'm not exactly sure how to go about doing that.
The original and the target:
Color transfer results using Reinhard's method in 5 ms:
I prefer to implement the formula in numpy vectorized operations rather than Python loops.
# implementing the formula
#(Io - mo)/so*st + mt = Io * (st/so) + mt - mo*(st/so)
ratio = (std_tar/std_ori).reshape(-1)
offset = (mean_tar - mean_ori*std_tar/std_ori).reshape(-1)
lab_tar = cv2.convertScaleAbs(lab_ori*ratio + offset)
Here is the code:
# 2019/02/19 by knight-金
# https://stackoverflow.com/a/54757659/3547485
import numpy as np
import cv2

def reinhard(target, original):
    # cvtColor: COLOR_BGR2Lab
    lab_tar = cv2.cvtColor(target, cv2.COLOR_BGR2Lab)
    lab_ori = cv2.cvtColor(original, cv2.COLOR_BGR2Lab)

    # meanStdDev: calculate mean and standard deviation
    mean_tar, std_tar = cv2.meanStdDev(lab_tar)
    mean_ori, std_ori = cv2.meanStdDev(lab_ori)

    # implementing the formula
    # (Io - mo)/so*st + mt = Io * (st/so) + mt - mo*(st/so)
    ratio = (std_tar/std_ori).reshape(-1)
    offset = (mean_tar - mean_ori*std_tar/std_ori).reshape(-1)
    lab_tar = cv2.convertScaleAbs(lab_ori*ratio + offset)

    # convert back
    mapped = cv2.cvtColor(lab_tar, cv2.COLOR_Lab2BGR)
    return mapped

if __name__ == "__main__":
    ori = cv2.imread("ori.png")
    tar = cv2.imread("tar.png")

    mapped = reinhard(tar, ori)
    cv2.imwrite("mapped.png", mapped)

    mapped_inv = reinhard(ori, tar)
    cv2.imwrite("mapped_inv.png", mapped_inv)
I managed to figure it out after looking at the numpy documentation. I just needed to replace my nested for loop with proper array accessing. It took less than a minute to iterate through all 300 images with this.
lAB_tar[:,:,0] = (lAB_img[:,:,0] - mean[0])/std[0] * std_tar[0] + mean_tar[0]
lAB_tar[:,:,1] = (lAB_img[:,:,1] - mean[1])/std[1] * std_tar[1] + mean_tar[1]
lAB_tar[:,:,2] = (lAB_img[:,:,2] - mean[2])/std[2] * std_tar[2] + mean_tar[2]
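For reference, the three per-channel lines can be collapsed further into one broadcasted expression. A hedged sketch (my addition), assuming mean and std come from cv2.meanStdDev, which returns (3, 1) arrays for a 3-channel image:
import numpy as np

# reshape the (3, 1) statistics so they broadcast across the H x W x 3 image
m, s = mean.reshape(1, 1, 3), std.reshape(1, 1, 3)
m_t, s_t = mean_tar.reshape(1, 1, 3), std_tar.reshape(1, 1, 3)
lAB_tar = (lAB_img - m) / s * s_t + m_t
lAB_tar = np.clip(lAB_tar, 0, 255).astype(np.uint8)  # back to uint8 for cvtColor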

Bradley-Roth Adaptive Thresholding Algorithm - How do I get better performance?

I have the following code for image thresholding, using the Bradley-Roth image thresholding method.
from PIL import Image
import copy
import time

def bradley_threshold(image, threshold=75, windowsize=5):
    ws = windowsize
    image2 = copy.copy(image).convert('L')
    w, h = image.size
    l = image.convert('L').load()
    l2 = image2.load()
    threshold /= 100.0
    for y in xrange(h):
        for x in xrange(w):
            # find neighboring pixels
            neighbors = [(x+x2, y+y2) for x2 in xrange(-ws, ws) for y2 in xrange(-ws, ws) if x+x2 > 0 and x+x2 < w and y+y2 > 0 and y+y2 < h]
            # mean of all neighboring pixels
            mean = sum([l[a, b] for a, b in neighbors]) / len(neighbors)
            if l[x, y] < threshold * mean:
                l2[x, y] = 0
            else:
                l2[x, y] = 255
    return image2

i = Image.open('test.jpg')
windowsize = 5
bradley_threshold(i, 75, windowsize).show()
This works fine when windowsize is small and the image is small. I've been using this image for testing:
I'm experiencing processing times of about 5 or 6 seconds when using a window size of 5, but if I bump my window size up to 20, so the algorithm is checking 20 pixels in each direction for the mean value, I get times upwards of one minute for that image.
If I use an image with a size like 2592x1936 with a window size of only 5, it takes nearly 10 minutes to complete.
So, how can I improve those times? Would a numpy array be faster? Is im.getpixel faster than loading the image into pixel access mode? Are there any other tips for speed boosts? Thanks in advance.
Referencing our comments, I wrote a MATLAB implementation of this algorithm here: Extract a page from a uniform background in an image, and it was quite fast on large images.
If you'd like a better explanation of the algorithm, please see my other answer here: Bradley Adaptive Thresholding -- Confused (questions). This may be a good place to start if you want a better understanding of the code I wrote.
Because MATLAB and NumPy are similar, this is a re-implementation of the Bradley-Roth thresholding algorithm, but in NumPy. I convert the PIL image into a NumPy array, do the processing on this image, then convert back to a PIL image. The function takes in three parameters: the grayscale image image, the size of the window s and the threshold t. This threshold is different than what you have as this is following the paper exactly. The threshold t is a percentage of the total summed area of each pixel window. If the summed area is less than this threshold, then the output should be a black pixel - else it's a white pixel. The defaults for s and t are the number of columns divided by 8 and rounded, and 15% respectively:
import numpy as np
from PIL import Image

def bradley_roth_numpy(image, s=None, t=None):
    # Convert image to numpy array
    img = np.array(image).astype(float)

    # Default window size is round(cols/8)
    if s is None:
        s = np.round(img.shape[1]/8)

    # Default threshold is 15% of the total
    # area in the window
    if t is None:
        t = 15.0

    # Compute integral image
    intImage = np.cumsum(np.cumsum(img, axis=1), axis=0)

    # Define grid of points
    (rows, cols) = img.shape[:2]
    (X, Y) = np.meshgrid(np.arange(cols), np.arange(rows))

    # Make into 1D grid of coordinates for easier access
    X = X.ravel()
    Y = Y.ravel()

    # Ensure s is even so that we are able to index into the image
    # properly
    s = s + np.mod(s, 2)

    # Access the four corners of each neighbourhood
    x1 = X - s/2
    x2 = X + s/2
    y1 = Y - s/2
    y2 = Y + s/2

    # Ensure no coordinates are out of bounds
    x1[x1 < 0] = 0
    x2[x2 >= cols] = cols-1
    y1[y1 < 0] = 0
    y2[y2 >= rows] = rows-1

    # Ensure coordinates are integer
    x1 = x1.astype(int)
    x2 = x2.astype(int)
    y1 = y1.astype(int)
    y2 = y2.astype(int)

    # Count how many pixels are in each neighbourhood
    count = (x2 - x1) * (y2 - y1)

    # Compute the row and column coordinates to access
    # each corner of the neighbourhood for the integral image
    f1_x = x2
    f1_y = y2
    f2_x = x2
    f2_y = y1 - 1
    f2_y[f2_y < 0] = 0
    f3_x = x1-1
    f3_x[f3_x < 0] = 0
    f3_y = y2
    f4_x = f3_x
    f4_y = f2_y

    # Compute areas of each window
    sums = intImage[f1_y, f1_x] - intImage[f2_y, f2_x] - intImage[f3_y, f3_x] + intImage[f4_y, f4_x]

    # Compute thresholded image and reshape into a 2D grid
    out = np.ones(rows*cols, dtype=bool)
    out[img.ravel()*count <= sums*(100.0 - t)/100.0] = False

    # Also convert back to uint8
    out = 255*np.reshape(out, (rows, cols)).astype(np.uint8)

    # Return PIL image back to user
    return Image.fromarray(out)

if __name__ == '__main__':
    img = Image.open('test.jpg').convert('L')
    out = bradley_roth_numpy(img)
    out.show()
    out.save('output.jpg')
The image is read in and converted to grayscale if required. The output image will be displayed, and it will be saved, in the same directory where you ran the script, as an image called output.jpg. If you want to override the settings, simply do:
out = bradley_roth_numpy(img, windowsize, threshold)
Play around with this to get good results. Using the default parameters and using IPython, I measured the average time of execution using timeit, and this is what I get for the image you uploaded in your post:
In [16]: %timeit bradley_roth_numpy(img)
100 loops, best of 3: 7.68 ms per loop
This means that running this function repeatedly 100 times on the image you uploaded, the best of 3 execution times gave on average 7.68 milliseconds per run.
I also get this image as a result when I threshold it:
Profiling your code in IPython with %prun shows:
 ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
  50246    2.009    0.000    2.009    0.000  <ipython-input-78-b628a43d294b>:15(<listcomp>)
  50246    0.587    0.000    0.587    0.000  <ipython-input-78-b628a43d294b>:17(<listcomp>)
      1    0.170    0.170    2.829    2.829  <ipython-input-78-b628a43d294b>:5(bradley_threshold)
  50246    0.058    0.000    0.058    0.000  {built-in method sum}
  50257    0.004    0.000    0.004    0.000  {built-in method len}
i.e., almost all of the running time is due to Python loops (slow) and non-vectorized arithmetic (slow). So I would expect big improvements if you rewrite using numpy arrays; alternatively, you could use cython if you can't work out how to vectorize your code.
OK, I am a bit late here. Let me share my thoughts on that anyway:
You could speed it up by using dynamic programming to compute the means, but it is much easier and faster to let scipy and numpy do all the dirty work. (Note that I use Python 3 for my code, so xrange is changed to range in your code.)
#!/usr/bin/env python3
import numpy as np
from scipy import ndimage
from PIL import Image
import copy
import time

def faster_bradley_threshold(image, threshold=75, window_r=5):
    percentage = threshold / 100.
    window_diam = 2*window_r + 1
    # convert image to numpy array of grayscale values
    img = np.array(image.convert('L')).astype(float)  # float for mean precision
    # matrix of local means with scipy
    means = ndimage.uniform_filter(img, window_diam)
    # result: 0 for entry less than percentage*mean, 255 otherwise
    height, width = img.shape[:2]
    result = np.zeros((height, width), np.uint8)  # initially all 0
    result[img >= percentage * means] = 255       # numpy magic :)
    # convert back to PIL image
    return Image.fromarray(result)

def bradley_threshold(image, threshold=75, windowsize=5):
    ws = windowsize
    image2 = copy.copy(image).convert('L')
    w, h = image.size
    l = image.convert('L').load()
    l2 = image2.load()
    threshold /= 100.0
    for y in range(h):
        for x in range(w):
            # find neighboring pixels
            neighbors = [(x+x2, y+y2) for x2 in range(-ws, ws) for y2 in range(-ws, ws) if x+x2 > 0 and x+x2 < w and y+y2 > 0 and y+y2 < h]
            # mean of all neighboring pixels
            mean = sum([l[a, b] for a, b in neighbors]) / len(neighbors)
            if l[x, y] < threshold * mean:
                l2[x, y] = 0
            else:
                l2[x, y] = 255
    return image2

if __name__ == '__main__':
    img = Image.open('test.jpg')

    t0 = time.process_time()
    threshed0 = bradley_threshold(img)
    print('original approach:', round(time.process_time()-t0, 3), 's')
    threshed0.show()

    t0 = time.process_time()
    threshed1 = faster_bradley_threshold(img)
    print('w/ numpy & scipy :', round(time.process_time()-t0, 3), 's')
    threshed1.show()
That made it much faster on my machine:
$ python3 bradley.py
original approach: 3.736 s
w/ numpy & scipy : 0.003 s
PS: Note that the mean I used from scipy behaves slightly differently at the borders than the one from your code (for positions where the window for the mean calculation is not fully contained in the image anymore). However, I think that shouldn't be a problem.
Another minor difference is that the window from the for-loops was not exactly centered at the pixel, as the offset by xrange(-ws,ws) with ws=5 yields -5,-4,...,3,4 and results in an average offset of -0.5. This probably wasn't intended.
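If a window centered exactly on the pixel was intended, a sketch of the fix (my suggestion, untested against the timings above):
# range(-ws, ws+1) yields -5,...,0,...,5, so the window is symmetric;
# the 0 <= bound also stops row/column 0 from being excluded
neighbors = [(x+x2, y+y2) for x2 in range(-ws, ws+1)
                          for y2 in range(-ws, ws+1)
             if 0 <= x+x2 < w and 0 <= y+y2 < h]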

Creating an image from a dictionary using PIL

I have a dictionary that maps coordinate tuples (in the range (0,0) to (199,199)) to grayscale values (integers between 0 and 255). Is there a good way to create a PIL Image that has the specified values at the specified coordinates? I'd prefer a solution that only uses PIL to one that uses scipy.
You can try image.putpixel() to change the color of a pixel at a particular position. Example code -
from PIL import Image
from random import randint

d = {(x,y): randint(0,255) for x in range(200) for y in range(200)}
im = Image.new('L', (200,200))
for i in d:
    im.putpixel(i, d[i])
im.save('blah.png')
It gave me a result like -
You could do it with putpixel(), but that could potentially involve tens of thousands of calls. How much this matters depends on how many coordinate tuples are defined in the dictionary. I've included the method shown in each of the current answers for comparison (including my own from before any benchmarking was added; just now I made a small change to how it initializes the data buffer, which measurably sped it up).
To make a level playing field, for testing purposes the input dictionary randomly selects only ½ of the possible pixels in the image to define and allows the rest to be set to a default background color. Anand S Kumar's answer currently doesn't do the latter, but the slightly modified version shown below does.
All produce the same image from the data.
from __future__ import print_function
import sys
from textwrap import dedent
import timeit

N = 100  # number of executions of each algorithm
R = 3    # number of repetitions of executions

# common setup for all algorithms - is not included in algorithm timing
setup = dedent("""
    from random import randint, sample, seed
    from PIL import Image

    seed(42)
    background = 0  # default color of pixels not defined in dictionary
    width, height = 200, 200

    # create test dict of input data defining half of the pixel coords in image
    coords = sample([(x,y) for x in xrange(width) for y in xrange(height)],
                    width * height // 2)
    d = {coord: randint(0, 255) for coord in coords}
""")

algorithms = {
    "Anand S Kumar": dedent("""
        im = Image.new('L', (width, height), color=background)  # set bgrd
        for i in d:
            im.putpixel(i, d[i])
    """),

    "martineau": dedent("""
        data = bytearray([background] * width * height)
        for (x, y), v in d.iteritems():
            data[x + y*width] = v
        im = Image.frombytes('L', (width, height), str(data))
    """),

    "PM 2Ring": dedent("""
        data = [background] * width * height
        for i in d:
            x, y = i
            data[x + y * width] = d[i]
        im = Image.new('L', (width, height))
        im.putdata(data)
    """),
}

# execute and time algorithms, collecting results
timings = [
    (label,
     min(timeit.repeat(algorithms[label], setup=setup, repeat=R, number=N)),
    ) for label in algorithms
]

print('fastest to slowest execution speeds (Python {}.{}.{})\n'.format(
          *sys.version_info[:3]),
      ' ({:,d} executions, best of {:d} repetitions)\n'.format(N, R))

longest = max(len(timing[0]) for timing in timings)  # length of longest label
ranked = sorted(timings, key=lambda t: t[1])  # ascending sort by execution time
fastest = ranked[0][1]

for timing in ranked:
    print("{:>{width}} : {:9.6f} secs, rel speed {:4.2f}x, {:6.2f}% slower".
          format(timing[0], timing[1], round(timing[1]/fastest, 2),
                 round((timing[1]/fastest - 1) * 100, 2), width=longest))
Output:
fastest to slowest execution speeds (Python 2.7.10)
(100 executions, best of 3 repetitions)
martineau : 0.255203 secs, rel speed 1.00x, 0.00% slower
PM 2Ring : 0.307024 secs, rel speed 1.20x, 20.31% slower
Anand S Kumar : 1.835997 secs, rel speed 7.19x, 619.43% slower
As martineau suggests putpixel() is ok when you're modifying a few random pixels, but it's not so efficient for building whole images. My approach is similar to his, except I use a list of ints and .putdata(). Here's some code to test these 3 different approaches.
from PIL import Image
from random import seed, randint

width, height = 200, 200
background = 0

seed(42)
d = dict(((x, y), randint(0, 255)) for x in range(width) for y in range(height))

algorithm = 2
print('Algorithm', algorithm)

if algorithm == 0:
    im = Image.new('L', (width, height))
    for i in d:
        im.putpixel(i, d[i])
elif algorithm == 1:
    buff = bytearray((background for _ in xrange(width * height)))
    for (x, y), v in d.items():
        buff[y*width + x] = v
    im = Image.frombytes('L', (width, height), str(buff))
elif algorithm == 2:
    data = [background] * width * height
    for i in d:
        x, y = i
        data[x + y * width] = d[i]
    im = Image.new('L', (width, height))
    im.putdata(data)

#im.show()
fname = 'qrand%d.png' % algorithm
im.save(fname)
print(fname, 'saved')
Here are typical timings on my 2GHz machine running Python 2.6.6
$ time ./qtest.py
Algorithm 0
qrand0.png saved
real 0m0.926s
user 0m0.768s
sys 0m0.040s
$ time ./qtest.py
Algorithm 1
qrand1.png saved
real 0m0.733s
user 0m0.548s
sys 0m0.020s
$ time ./qtest.py
Algorithm 2
qrand2.png saved
real 0m0.638s
user 0m0.520s
sys 0m0.032s
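For what it's worth, on a numpy-equipped setup there is a fourth variant: fill an array and hand it to PIL in a single call. A sketch under the same width/height/background/d setup as above (my addition, not benchmarked against the timings shown):
import numpy as np
from PIL import Image

arr = np.full((height, width), background, dtype=np.uint8)
for (x, y), v in d.items():
    arr[y, x] = v  # note numpy indexes as (row, col), i.e. (y, x)
im = Image.fromarray(arr, mode='L')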

Python - Quick batch modification of PNGs

I wrote a Python script which combines images in unique ways for an OpenGL shader. The problem is that I have a large number of very large maps, and processing takes a long time. Is there a way to write this in a quicker fashion?
import numpy as np

# (excerpt from a larger function; names, inputRoot and resolution are defined elsewhere)
map_data = {}
image_data = {}
for map_postfix in names:
    file_name = inputRoot + '-' + map_postfix + resolution + '.png'
    print 'Loading ' + file_name
    image_data[map_postfix] = Image.open(file_name, 'r')
    map_data[map_postfix] = image_data[map_postfix].load()

color = map_data['ColorOnly']
ambient = map_data['AmbientLight']
shine = map_data['Shininess']

width = image_data['ColorOnly'].size[0]
height = image_data['ColorOnly'].size[1]

arr = np.zeros((height, width, 4), dtype=int)
for i in range(width):
    for j in range(height):
        ambient_mod = ambient[i,j][0] / 255.0
        arr[j, i, :] = [color[i,j][0] * ambient_mod, color[i,j][1] * ambient_mod, color[i,j][2] * ambient_mod, shine[i,j][0]]

print 'Converting Color Map to image'
return Image.fromarray(arr.astype(np.uint8))
This is just a sample of a large number of batch processes, so I am more interested in whether there is a faster way to iterate over and modify an image file. Almost all the time is being spent in the nested loop rather than in loading and saving.
Vectorised-code example -- test the effect on yours with timeit or zmq.Stopwatch()
Reported speedup: 22.14 seconds >> 0.1624 seconds!
While your code seems to loop just over RGBA[x,y], let me show a "vectorised" syntax of code that benefits from numpy matrix-manipulation utilities (forget the RGB/YUV manipulation, which was originally based on OpenCV rather than PIL, but reuse the vectorised-syntax approach to avoid for-loops and adapt it to work efficiently for your calculation). The wrong order of operations can more than double your processing time.
And use a test / optimise / re-test loop for speeding up.
For testing, use standard python timeit if [msec] resolution is enough.
Go rather for zmq.Stopwatch() if you need [usec] resolution.
# Vectorised-code example, to see the syntax & principles
# do not mind another order of RGB->BRG layers
# it has been OpenCV traditional convention
# it has no other meaning in this demo of VECTORISED code

def get_YUV_U_Cb_Rec709_BRG_frame( brgFRAME ):  # For the Rec. 709 primaries used in gamma-corrected sRGB, fast, VECTORISED MUL/ADD CODE
    out  = numpy.zeros( brgFRAME.shape[0:2] )
    out -= 0.09991 / 255 * brgFRAME[:,:,1]  # // Red
    out -= 0.33601 / 255 * brgFRAME[:,:,2]  # // Green
    out += 0.436   / 255 * brgFRAME[:,:,0]  # // Blue
    return out
# normalise to <0.0 - 1.0> before vectorised MUL/ADD, saves [usec] ...
# on 480x640 [px] faster goes about 2.2 [msec] instead of 5.4 [msec]
In your case, using dtype = numpy.int, I guess it shall be faster to MUL first by ambient[:,:,0] and finally DIV to normalise: arr[:,:,:3] /= 255
# test if this goes even faster once saving the vectorised overhead on matrix DIV
arr[:,:,0] = color[:,:,0] * ambient[:,:,0] / 255 # MUL remains INT, shall precede DIV
arr[:,:,1] = color[:,:,1] * ambient[:,:,0] / 255 #
arr[:,:,2] = color[:,:,2] * ambient[:,:,0] / 255 #
arr[:,:,3] = shine[:,:,0] # STO alpha
So how it may look in your algo?
One need not have Peter Jackson's impressive budget and time, once he planned, spanned and executed an immense number-crunching job over 3 years in a New Zealand hangar overcrowded by a herd of SGI workstations, as he produced "The Lord of the Rings" fully-digital mastering assembly line, right down to frame-by-frame pixel manipulation, to realise that milliseconds and microseconds and even nanoseconds in a mass-production pipeline simply do matter.
So, take a deep breath and test and re-test so as to optimise your real-world imagery processing performance to levels that your project needs.
Hope this may help you on this:
# OPTIONAL for performance testing -------------# ||||||||||||||||||||||||||||||||
from zmq import Stopwatch                       # _MICROSECOND_ timer
#                                               # timer-resolution step ~ 21 nsec
#                                               # Yes, NANOSECOND-s
# OPTIONAL for performance testing -------------# ||||||||||||||||||||||||||||||||

arr = np.zeros( ( height, width, 4 ), dtype = int )

aStopWatch = Stopwatch()                        # ||||||||||||||||||||||||||||||||
# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\# <<< your original code segment
# aStopWatch.start()                            # |||||||||||||__.start
# for i in range( width ):
#     for j in range( height ):
#         ambient_mod = ambient[i,j][0] / 255.0
#         arr[j, i, :] = [ color[i,j][0] * ambient_mod, \
#                          color[i,j][1] * ambient_mod, \
#                          color[i,j][2] * ambient_mod, \
#                          shine[i,j][0] \
#                          ]
# usec_for = aStopWatch.stop()                  # |||||||||||||__.stop
# print 'Converting Color Map to image'
# print ' FOR processing took ', usec_for, ' [usec]'
# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\# <<< proposed alternative
aStopWatch.start()                              # |||||||||||||__.start

# reduced numpy broadcasting one dimension less # ref. comments below
arr[:,:, 0]  = color[:,:,0] * ambient[:,:,0]    # MUL ambient[0] * [{R}]
arr[:,:, 1]  = color[:,:,1] * ambient[:,:,0]    # MUL ambient[0] * [{G}]
arr[:,:, 2]  = color[:,:,2] * ambient[:,:,0]    # MUL ambient[0] * [{B}]
arr[:,:,:3] /= 255                              # DIV 255 to normalise
arr[:,:, 3]  = shine[:,:,0]                     # STO shine[ 0] in [3]

usec_Vector = aStopWatch.stop()                 # |||||||||||||__.stop
print 'Converting Color Map to image'
print ' Vectorised processing took ', usec_Vector, ' [usec]'

return Image.fromarray( arr.astype( np.uint8 ) )
