Vertically fade image to transparency using Python PIL library

I have looked at tutorials, other Stack Overflow questions and the PIL documentation itself, but I'm still not sure how to do it.
I'd like to start fading an image vertically at approximately 55% down the y-axis, and have the image completely transparent at approximately 75%. It's important that I preserve the full height of the image, even though the last 25% or so should be completely transparent.
Is this possible to do with PIL?

Sure it's doable.
Let's assume that you're starting off with an image with no transparency (because otherwise, your question is ambiguous).
Step 1: Add an alpha plane. That's just putalpha, unless you're starting from a mode that can't carry an alpha channel (a palette or CMYK image, for example), in which case you'll need to convert it to RGB or L first.
Step 2: Iterate through the pixels you want to change by using the pixel array returned by load (or getpixel and setpixel if you have to deal with ancient versions of PIL).
Step 3: There is no step 3. Unless you count saving the image. In which case, OK, step 3 is saving the image.
from PIL import Image

im = Image.open('bird.jpg')
im.putalpha(255)  # add a fully opaque alpha channel
width, height = im.size
pixels = im.load()

# Fade linearly from opaque at 55% of the height to transparent at 75%.
for y in range(int(height * 0.55), int(height * 0.75)):
    alpha = 255 - int((y - height * 0.55) / (height * 0.20) * 255)
    for x in range(width):
        pixels[x, y] = pixels[x, y][:3] + (alpha,)

# Everything below 75% is fully transparent.
for y in range(int(height * 0.75), height):
    for x in range(width):
        pixels[x, y] = pixels[x, y][:3] + (0,)

im.save('birdfade.png')
Here the alpha drops off linearly from 255 to 0; if you want it to drop off according to a different curve, or you're using RGB16 instead of RGB8, or you're using L instead of RGB, you should be able to figure out how to change it.
If you want to do this faster, you can use numpy instead of a Python loop for step 2. Or you can reverse steps 1 and 2—construct an alpha plane in advance, and apply it all at once by passing it to putalpha instead of 255. Or… Since this took under half a second on the biggest image I had lying around, I'm not too worried about performance, unless you have to do a million of them and you want a faster version.
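For reference, here is a minimal sketch of the "construct an alpha plane in advance and apply it all at once" variant mentioned above, assuming an 8-bit RGB source and the same 55%/75% breakpoints (the gradient construction is my own illustration, not code from the original answer):
from PIL import Image

im = Image.open('bird.jpg')
width, height = im.size

# Build a single-channel (mode 'L') alpha plane: opaque above 55% of the height,
# a linear ramp between 55% and 75%, fully transparent below 75%.
start, end = int(height * 0.55), int(height * 0.75)
alpha = Image.new('L', (width, height), 255)
ramp = alpha.load()
for y in range(start, height):
    if y < end:
        value = 255 - int((y - start) / (end - start) * 255)
    else:
        value = 0
    for x in range(width):
        ramp[x, y] = value

im.putalpha(alpha)  # apply the whole plane in one call
im.save('birdfade_plane.png')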

Using NumPy:
import numpy as np
from PIL import Image

img = Image.open(FILENAME).convert('RGBA')  # FILENAME: path to your image
arr = np.array(img)
alpha = arr[:, :, 3]  # view of the alpha channel
n = len(alpha)        # number of rows (the image height)
# Opaque until 55% of the height, linear fade to 75%, transparent after that.
alpha[:] = np.interp(np.arange(n), [0, 0.55 * n, 0.75 * n, n], [255, 255, 0, 0])[:, np.newaxis]
img = Image.fromarray(arr, mode='RGBA')
img.save('/tmp/out.png')

You have to change abarnert's code a bit if you want to fade an image that already has a transparent background (detail):
from PIL import Image

im = Image.open('bird.png')  # an image that already has an alpha channel
width, height = im.size
pixels = im.load()

for y in range(int(height * 0.55), int(height * 0.75)):
    for x in range(width):
        # Subtract the fade from whatever alpha the pixel already has.  # change made here
        alpha = pixels[x, y][3] - int((y - height * 0.55) / (height * 0.20) * 255)
        if alpha <= 0:
            alpha = 0
        pixels[x, y] = pixels[x, y][:3] + (alpha,)

for y in range(int(height * 0.75), height):
    for x in range(width):
        pixels[x, y] = pixels[x, y][:3] + (0,)

im.save('birdfade.png')

Related

Finding the "level of black" in a pixel in a grayscale image

I am trying to calculate the percentage of black a pixel is. For example, let's say I have a pixel that is 75% black, so a gray. I have the RGBA values, so how do I get the level of black?
I have already completed getting each pixel and replacing it with a new RGBA value, and tried to use some RGBA logic to no avail.
#Gradient Testing here
from PIL import Image

picture = Image.open("img1.png")
img = Image.open('img1.png').convert('LA')
img.save('greyscale.png')

# Get the size of the image
width, height = picture.size

# Process every pixel
for x in range(width):
    for y in range(height):
        #Code I need here
        r1, g1, b1, alpha = picture.getpixel( (x, y) )
        r, g, b = 120, 140, 99
        greylvl = 1 - (alpha(r1 + g1 + b1) / 765) #Code I tried
I would like to get a new variable (greylvl, say) that gives me a value such as 0.75, which would represent a 75% black pixel.
I'm not quite sure what you want the "LA" mode (greyscale plus an alpha channel) for; I would try plain "L" instead.
Try this code: (Make sure you're using Python 3.)
from PIL import Image

picture = Image.open('img1.png').convert('L')
width, height = picture.size

for x in range(width):
    for y in range(height):
        value = picture.getpixel( (x, y) )
        black_level = 1 - value / 255
        print('Level of black at ({}, {}): {} %'.format(x, y, black_level * 100))
Is this what you're looking for?
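If you need the black level of every pixel at once, a vectorized variant may be faster; this is a sketch of the same 1 - value/255 idea, assuming NumPy is available (not part of the original answer):
import numpy as np
from PIL import Image

picture = Image.open('img1.png').convert('L')
values = np.asarray(picture, dtype=np.float64)   # shape (height, width), values 0..255
black_levels = 1.0 - values / 255.0              # 1.0 = pure black, 0.0 = pure white

print(black_levels.max(), black_levels.mean())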

Python find the coordinates of black square on JPEG image

I have this sample image that has a white rectangular box with a single black square in it. (The 2 blue arrows are just for illustration purposes; they are not part of the image.)
Is there any way to find out how many pixels the black square is away from the left and top boundaries of the image?
If possible, I'd prefer not to use OpenCV, as the rest of the processing was done in PIL and it's probably overkill if I have to use OpenCV just for this one operation.
FYI: the image is in JPEG, and there will always be only 1 black square (no multiple squares) in the box.
UPDATE
Based on the answer by karlphillip, I have come up with this piece of code.
from PIL import Image

img = Image.open('test.png').convert('1')  # 1-bit black and white
pixels = img.load()

xlist = []
ylist = []
for y in range(img.size[1]):
    for x in range(img.size[0]):
        if pixels[x, y] == 0:  # black pixel
            xlist.append(x)
            ylist.append(y)

#4 corners of the black square
xleft = min(xlist)
xright = max(xlist)
ytop = min(ylist)
ybot = max(ylist)
Based on that link I mentioned, I was able to put together the following pseudocode:
from PIL import Image

img = Image.open('test.png')  # assumes an RGB image
pixels = img.load()

for y in range(img.size[1]):
    for x in range(img.size[0]):
        if pixels[x, y] == (0, 0, 0):
            # black pixel found; add it to a counter variable or something.
            pass
This answer demonstrates how to access and test each pixel in the image using PIL.
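For comparison, here is a NumPy sketch that finds the bounding box of the black pixels without a Python loop; the 'test.png' filename follows the snippets above, but the code itself is my own illustration rather than part of either answer:
import numpy as np
from PIL import Image

img = Image.open('test.png').convert('1')   # 1-bit black and white, as above
arr = np.array(img, dtype=np.uint8)         # 0 = black, nonzero = white
ys, xs = np.nonzero(arr == 0)               # row/column indices of the black pixels

xleft, ytop = xs.min(), ys.min()            # top-left corner of the black square
print('pixels from left edge:', xleft)
print('pixels from top edge:', ytop)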

Scale images with PIL preserving transparency and color?

Say you want to scale a transparent image but do not yet know the color(s) of the background you will composite it onto later. Unfortunately, PIL seems to incorporate the color values of fully transparent pixels, leading to bad results. Is there a way to tell PIL's resize to ignore fully transparent pixels?
import PIL.Image
filename = "trans.png" # http://qrc-designer.com/stuff/trans.png
size = (25,25)
im = PIL.Image.open(filename)
print im.mode # RGBA
im = im.resize(size, PIL.Image.LINEAR) # the same with CUBIC, ANTIALIAS, transform
# im.show() # does not use alpha
im.save("resizelinear_"+filename)
# PIL scaled image has dark border
original image with (0,0,0,0) (black but fully transparent) background (left)
output image with black halo (middle)
proper output scaled with gimp (right)
edit: It looks like to achieve what I am looking for I would have to modify the sampling of the resize function itself such that it would ignore pixels with full transparency.
edit2: I have found a very ugly solution. It sets the color values of fully transparent pixels to the average of the surrounding non fully transparent pixels to minimize impact of fully transparent pixel colors while resizing. It is slow in the simple form but I will post it if there is no other solution. Might be possible to make it faster by using a dilate operation to only process the necessary pixels.
edit3: premultiplied alpha is the way to go - see Mark's answer
It appears that PIL doesn't do alpha pre-multiplication before resizing, which is necessary to get the proper results. Fortunately it's easy to do by brute force. You must then do the reverse to the resized result.
def premultiply(im):
    pixels = im.load()
    for y in range(im.size[1]):
        for x in range(im.size[0]):
            r, g, b, a = pixels[x, y]
            if a != 255:
                r = r * a // 255
                g = g * a // 255
                b = b * a // 255
                pixels[x, y] = (r, g, b, a)

def unmultiply(im):
    pixels = im.load()
    for y in range(im.size[1]):
        for x in range(im.size[0]):
            r, g, b, a = pixels[x, y]
            if a != 255 and a != 0:
                r = 255 if r >= a else 255 * r // a
                g = 255 if g >= a else 255 * g // a
                b = 255 if b >= a else 255 * b // a
                pixels[x, y] = (r, g, b, a)
Result:
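For context, a minimal usage sketch of these helpers (my own illustration; the filename and the choice of resampling filter are assumptions, not part of the answer):
from PIL import Image

im = Image.open('trans.png').convert('RGBA')
premultiply(im)                              # fold alpha into the color channels
im = im.resize((25, 25), Image.ANTIALIAS)    # Image.LANCZOS in newer Pillow releases
unmultiply(im)                               # undo the premultiplication
im.save('resized_trans.png')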
You can resample each band individually:
im.load()
bands = im.split()
bands = [b.resize(size, Image.LINEAR) for b in bands]
im = Image.merge('RGBA', bands)
EDIT
Maybe by avoiding high transparency values like so (needs numpy):
import numpy as np
# ...
im.load()
bands = list(im.split())
a = np.asarray(bands[-1])
a.flags.writeable = True
a[a != 0] = 1
bands[-1] = Image.fromarray(a)
bands = [b.resize(size, Image.LINEAR) for b in bands]
a = np.asarray(bands[-1])
a.flags.writeable = True
a[a != 0] = 255
bands[-1] = Image.fromarray(a)
im = Image.merge('RGBA', bands)
Maybe you can fill the whole image with the color you want, and only create the shape in the alpha channel?
Sorry for answering myself, but this is the only working solution that I know of. It sets the color values of fully transparent pixels to the average of the surrounding non-fully-transparent pixels to minimize the impact of fully transparent pixel colors while resizing. There are special cases where the proper result will not be achieved.
It is very ugly and slow. I'd be happy to accept your answer if you can come up with something better.
# might be possible to speed this up by only processing necessary pixels
# using scipy dilate, numpy where
import PIL.Image
import numpy as np

filename = "trans.png"  # http://qrc-designer.com/stuff/trans.png
size = (25, 25)

im = PIL.Image.open(filename)
npImRgba = np.asarray(im, dtype=np.uint8)
npImRgba2 = np.asarray(im, dtype=np.uint8)
npImRgba2.flags.writeable = True

lenY = npImRgba.shape[0]
lenX = npImRgba.shape[1]

for y in range(npImRgba.shape[0]):
    for x in range(npImRgba.shape[1]):
        if npImRgba[y, x, 3] != 0:  # only change completely transparent pixels
            continue
        colSum = np.zeros((3), dtype=np.uint16)
        i = 0
        # average the neighbours that are inside the image and not fully transparent
        for oy in [-1, 0, 1]:
            for ox in [-1, 0, 1]:
                if not oy and not ox:
                    continue
                iy = y + oy
                if iy < 0:
                    continue
                if iy >= lenY:
                    continue
                ix = x + ox
                if ix < 0:
                    continue
                if ix >= lenX:
                    continue
                col = npImRgba[iy, ix]
                if not col[3]:
                    continue
                colSum += col[:3]
                i += 1
        if i:  # leave the pixel alone if all neighbours are fully transparent
            npImRgba2[y, x, :3] = colSum / i

im = PIL.Image.fromarray(npImRgba2)
im = im.transform(size, PIL.Image.EXTENT, (0, 0) + im.size, PIL.Image.LINEAR)
im.save("slime_" + filename)
result:

Getting list of pixel values from PIL

I'm trying to convert a black & white .jpg image into a list which I can then modulate into an audio signal.
I have imported the PIL module and am trying to call the built-in function list(im.getdata()). When I call it, Python crashes. Is there some way of breaking down the image (always 320x240) into 240 lines to make the computations easier? Or am I just calling the wrong function?
Python shouldn't crash when you call getdata(). The image might be corrupted or there is something wrong with your PIL installation. Try it with another image or post the image you are using.
This should break down the image the way you want:
from PIL import Image
im = Image.open('um_000000.png')
pixels = list(im.getdata())
width, height = im.size
pixels = [pixels[i * width:(i + 1) * width] for i in range(height)]  # one list per row
If you have numpy installed you can try:
data = numpy.asarray(im)
(I say "try" here, because it's unclear why getdata() isn't working for you, and I don't know whether asarray uses getdata, but it's worth a test.)
I assume you are getting an error like.. TypeError: 'PixelAccess' object is not iterable...?
See the Image.load documentation for how to access pixels..
Basically, to get the list of pixels in an image, using PIL:
from PIL import Image

i = Image.open("myfile.png")
pixels = i.load()  # this is not a list, nor is it list()'able
width, height = i.size

all_pixels = []
for x in range(width):
    for y in range(height):
        cpixel = pixels[x, y]
        all_pixels.append(cpixel)
That appends every pixel to all_pixels. If the file is an RGB image (even if it only contains a black-and-white picture), each entry will be a tuple, for example:
(255, 255, 255)
To convert the image to monochrome, you just average the three values, so the last three lines of code would become:
cpixel = pixels[x, y]
bw_value = int(round(sum(cpixel) / float(len(cpixel))))
# the above could probably be bw_value = sum(cpixel)/len(cpixel)
all_pixels.append(bw_value)
Or to get the luminance (weighted average):
cpixel = pixels[x, y]
luma = (0.3 * cpixel[0]) + (0.59 * cpixel[1]) + (0.11 * cpixel[2])
all_pixels.append(luma)
Or pure 1-bit looking black and white:
cpixel = pixels[x, y]
if round(sum(cpixel) / float(len(cpixel))) > 127:
    all_pixels.append(255)
else:
    all_pixels.append(0)
There are probably methods within PIL to do such RGB -> BW conversions more quickly, but this works, and isn't particularly slow.
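For what it's worth, PIL's own convert('L') does this kind of conversion for you (using the ITU-R 601-2 luma transform rather than a plain average), so a shorter route, if the simple averaging above is not a hard requirement, might be:
from PIL import Image

# Let PIL do the RGB -> greyscale conversion, then pull all values out at once.
im = Image.open("myfile.png").convert("L")
all_pixels = list(im.getdata())   # flat list of 0..255 values, row by row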
If you only want to perform calculations on each row, you could skip adding all the pixels to an intermediate list.. For example, to calculate the average value of each row:
from PIL import Image

i = Image.open("myfile.png")
pixels = i.load()  # this is not a list
width, height = i.size

row_averages = []
for y in range(height):
    cur_row_ttl = 0
    for x in range(width):
        cur_pixel = pixels[x, y]
        cur_pixel_mono = sum(cur_pixel) / len(cur_pixel)
        cur_row_ttl += cur_pixel_mono
    cur_row_avg = cur_row_ttl / width
    row_averages.append(cur_row_avg)

print("Brightest row:", max(row_averages))
Or, if you want to count white or black pixels, this is also a solution:
from PIL import Image

img = Image.open("your_file.png").convert('1')
black, white = img.getcolors()  # getcolors() returns (count, value) pairs
print(black[0])
print(white[0])
pixVals = list(pilImg.getdata())  # pilImg is an already-opened PIL image
The output is a list of all RGB values from the picture:
[(248, 246, 247), (246, 248, 247), (244, 248, 247), (244, 248, 247), (246, 248, 247), (248, 246, 247), (250, 246, 247), (251, 245, 247), (253, 244, 247), (254, 243, 247)]
Not PIL, but scipy.misc.imread might still be interesting (note that it has since been removed from SciPy; imageio.imread is the usual replacement):
import scipy.misc
im = scipy.misc.imread('um_000000.png', flatten=False, mode='RGB')
print(im.shape)
gives
(480, 640, 3)
so it is (height, width, channels). So you can iterate over it by
for y in range(im.shape[0]):
    for x in range(im.shape[1]):
        color = tuple(im[y][x])
        r, g, b = color
data = numpy.asarray(im)
Notice: in PIL the image is RGBA, while in cv2 it is BGRA.
My robust solution:
import cv2
import numpy as np

def cv_from_pil_img(pil_img):
    assert pil_img.mode == "RGBA"
    return cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGBA2BGRA)
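A short usage sketch (my own illustration; the filename is hypothetical):
from PIL import Image

pil_img = Image.open("mandrill.png").convert("RGBA")
bgra = cv_from_pil_img(pil_img)   # NumPy array in OpenCV's BGRA channel order
print(bgra.shape)                 # (height, width, 4)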
As I commented above, the problem seems to be the conversion from PIL's internal list format to a standard Python list type. I've found that Image.tostring() is much faster, and depending on your needs it might be enough. In my case, I needed to calculate the CRC32 digest of the image data, and it suited fine.
If you need to perform more complex calculations, tom10's response involving numpy might be what you need.
It looks like Pillow may have changed tostring() to tobytes(). When trying to extract RGBA pixels to get them into an OpenGL texture, the following worked for me (within the glTexImage2D call, which I omit for brevity).
from PIL import Image
img = Image.open("mandrill.png").rotate(180).transpose(Image.FLIP_LEFT_RIGHT)
# use img.convert("RGBA").tobytes() as texels

PIL Best Way To Replace Color?

I am trying to remove a certain color from my image, however it's not working as well as I'd hoped. I tried to do the same thing as seen here: Using PIL to make all white pixels transparent? However, the image quality is a bit lossy, so it leaves a little ghost of odd-colored pixels around what was removed. I tried doing something like changing a pixel if all three values are below 100, but because the image was poor quality the surrounding pixels weren't even black.
Does anyone know of a better way with PIL in Python to replace a color and anything surrounding it? This is probably the only sure-fire way I can think of to remove the objects completely; however, I can't think of a way to do it.
The picture has a white background and text that is black. Let's just say I want to remove the text entirely from the image without leaving any artifacts behind.
Would really appreciate someone's help! Thanks
The best way to do it is to use the "color to alpha" algorithm used in Gimp to replace a color. It will work perfectly in your case. I reimplemented this algorithm using PIL for an open source Python photo processor, Phatch. You can find the full implementation here. This is a pure PIL implementation and it doesn't have other dependencies. You can copy the function code and use it. Here is a sample using Gimp:
to
You can apply the color_to_alpha function on the image using black as the color. Then paste the image on a different background color to do the replacement.
By the way, this implementation uses the ImageMath module in PIL. It is much more efficient than accessing pixels using getdata.
EDIT: Here is the full code:
from PIL import Image, ImageMath


def difference1(source, color):
    """When source is bigger than color"""
    return (source - color) / (255.0 - color)


def difference2(source, color):
    """When color is bigger than source"""
    return (color - source) / color


def color_to_alpha(image, color=None):
    image = image.convert('RGBA')
    width, height = image.size
    color = list(map(float, color))
    img_bands = [band.convert("F") for band in image.split()]

    # Find the maximum difference rate between source and color. I had to use two
    # difference functions because ImageMath.eval only evaluates the expression
    # once.
    alpha = ImageMath.eval(
        """float(
            max(
                max(
                    max(
                        difference1(red_band, cred_band),
                        difference1(green_band, cgreen_band)
                    ),
                    difference1(blue_band, cblue_band)
                ),
                max(
                    max(
                        difference2(red_band, cred_band),
                        difference2(green_band, cgreen_band)
                    ),
                    difference2(blue_band, cblue_band)
                )
            )
        )""",
        difference1=difference1,
        difference2=difference2,
        red_band=img_bands[0],
        green_band=img_bands[1],
        blue_band=img_bands[2],
        cred_band=color[0],
        cgreen_band=color[1],
        cblue_band=color[2]
    )

    # Calculate the new image colors after the removal of the selected color
    new_bands = [
        ImageMath.eval(
            "convert((image - color) / alpha + color, 'L')",
            image=img_bands[i],
            color=color[i],
            alpha=alpha
        )
        for i in range(3)
    ]

    # Add the new alpha band
    new_bands.append(ImageMath.eval(
        "convert(alpha_band * alpha, 'L')",
        alpha=alpha,
        alpha_band=img_bands[3]
    ))

    return Image.merge('RGBA', new_bands)
image = color_to_alpha(image, (0, 0, 0, 255))
background = Image.new('RGB', image.size, (255, 255, 255))
background.paste(image.convert('RGB'), mask=image)
Using numpy and PIL:
This loads the image into a numpy array of shape (H, W, 3), where H is the height and W is the width. The third axis of the array represents the 3 color channels, R, G, B.
from PIL import Image
import numpy as np

orig_color = (255, 255, 255)
replacement_color = (0, 0, 0)

img = Image.open(filename).convert('RGB')  # filename: path to your image
data = np.array(img)
data[(data == orig_color).all(axis = -1)] = replacement_color
img2 = Image.fromarray(data, mode='RGB')
img2.show()
Since orig_color is a tuple of length 3, and data has shape (H, W, 3), NumPy broadcasts orig_color to an array of shape (H, W, 3) to perform the comparison data == orig_color. The result is a boolean array of shape (H, W, 3).
(data == orig_color).all(axis = -1) is a boolean array of shape (H, W) which is True wherever the RGB color in data is orig_color.
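As a small illustration of that mask (my own addition, reusing the names from the snippet above):
mask = (data == orig_color).all(axis=-1)   # shape (H, W), True where the pixel matches orig_color
print(mask.shape, mask.sum())              # how many pixels will be replaced
data[mask] = replacement_color             # equivalent to the one-liner in the answer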
#!/usr/bin/python
from PIL import Image
import sys

img = Image.open(sys.argv[1])
img = img.convert("RGBA")
pixdata = img.load()

# Replace pure white pixels with black; change the colors to suit your case.
for y in range(img.size[1]):
    for x in range(img.size[0]):
        if pixdata[x, y] == (255, 255, 255, 255):
            pixdata[x, y] = (0, 0, 0, 255)
You'll need to represent the image as a 2-dimensional array. This means either making a list of lists of pixels, or viewing the 1-dimensional array as a 2d one with some clever math. Then, for each pixel that is targeted, you'll need to find all surrounding pixels. You could do this with a python generator thus:
def targets(x, y):
    # Neighbours in image coordinates (y grows downward).
    yield (x, y)          # center
    yield (x + 1, y)      # right
    yield (x - 1, y)      # left
    yield (x, y + 1)      # below
    yield (x, y - 1)      # above
    yield (x + 1, y + 1)  # below and to the right
    yield (x + 1, y - 1)  # above and to the right
    yield (x - 1, y + 1)  # below and to the left
    yield (x - 1, y - 1)  # above and to the left
So, you would use it like this:
for x in range(width):
    for y in range(height):
        px = pixels[x][y]
        if px[0] == 255 and px[1] == 255 and px[2] == 255:
            for i, j in targets(x, y):
                # Note: bounds checking at the image edges is left out here.
                newpixels[i][j] = replacementColor
If the pixels are not easily identifiable, e.g. a test like (r < 100 and g < 100 and b < 100) still doesn't correctly match the black region, it means you have lots of noise.
The best way would be to identify a region and fill it with the color you want. You can identify the region manually, or maybe by edge detection, e.g. http://bitecode.co.uk/2008/07/edge-detection-in-python/
Or a more sophisticated approach would be to use a library like OpenCV (http://opencv.willowgarage.com/wiki/) to identify objects.
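As a rough illustration of the "identify a region and fill it" idea (my own sketch, not code from the answer): a simple stack-based flood fill over a PIL image, starting from a seed pixel known to lie inside the dark region; the filenames, seed coordinates and threshold are all assumptions.
from PIL import Image

def flood_fill(img, seed, fill_color, threshold=100):
    """Fill the connected region of dark pixels around `seed` with `fill_color`."""
    pixels = img.load()
    width, height = img.size

    def is_dark(p):
        return p[0] < threshold and p[1] < threshold and p[2] < threshold

    stack = [seed]
    seen = set()
    while stack:
        x, y = stack.pop()
        if (x, y) in seen or not (0 <= x < width and 0 <= y < height):
            continue
        seen.add((x, y))
        if not is_dark(pixels[x, y]):
            continue  # stop at pixels that are not part of the dark region
        pixels[x, y] = fill_color
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

img = Image.open("input.png").convert("RGB")                 # hypothetical filename
flood_fill(img, seed=(50, 50), fill_color=(255, 255, 255))   # seed must be inside the region
img.save("filled.png")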
This is part of my code; the result would look like:
source
target
import os
import struct
from PIL import Image


def changePNGColor(sourceFile, fromRgb, toRgb, deltaRank=10):
    fromRgb = fromRgb.replace('#', '')
    toRgb = toRgb.replace('#', '')

    fromColor = struct.unpack('BBB', bytes.fromhex(fromRgb))
    toColor = struct.unpack('BBB', bytes.fromhex(toRgb))

    img = Image.open(sourceFile)
    img = img.convert("RGBA")
    pixdata = img.load()

    for x in range(0, img.size[0]):
        for y in range(0, img.size[1]):
            rdelta = pixdata[x, y][0] - fromColor[0]
            gdelta = pixdata[x, y][1] - fromColor[1]
            bdelta = pixdata[x, y][2] - fromColor[2]
            if abs(rdelta) <= deltaRank and abs(gdelta) <= deltaRank and abs(bdelta) <= deltaRank:
                # Note: the shifted values are not clamped to 0..255.
                pixdata[x, y] = (toColor[0] + rdelta, toColor[1] + gdelta, toColor[2] + bdelta, pixdata[x, y][3])

    img.save(os.path.dirname(sourceFile) + os.sep + "changeColor" + os.path.splitext(sourceFile)[1])


if __name__ == '__main__':
    changePNGColor("./ok_1.png", "#000000", "#ff0000")
