I have a set of white icons on transparent background, and I'd like to invert them all to be black on transparent background.
Have tried with PIL (ImageChops) but it does not seem to work with transparent backgrounds. I've also tried Gimp's Python interface, but no luck there, either.
Any idea how inverting is best achieved in Python?
ImageChops.invert seems to also invert the alpha channel of each pixel.
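A quick way to see the problem (minimal sketch; the single white, fully transparent pixel stands in for an icon's background):

```python
from PIL import Image, ImageChops

# one white, fully transparent pixel, like an icon's background
img = Image.new('RGBA', (1, 1), (255, 255, 255, 0))
inv = ImageChops.invert(img)
# invert() negates every band, so alpha 0 becomes 255:
# the transparent background turns into opaque black
print(inv.getpixel((0, 0)))  # (0, 0, 0, 255)
```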
This should do the job:
from PIL import Image

img = Image.open('image.png').convert('RGBA')
r, g, b, a = img.split()

def invert(image):
    return image.point(lambda p: 255 - p)
r, g, b = map(invert, (r, g, b))
img2 = Image.merge(img.mode, (r, g, b, a))
img2.save('image2.png')
I have tried Acorn's approach, but the result is somewhat strange (top part of the below image).
The bottom icon is what I really wanted, I achieved it by using Image Magick's convert method:
convert tools.png -negate tools_black.png
(Not Python per se, but Python wrappers exist, like PythonMagickWand.)
The only drawback is that you have to install a bunch of dependencies to get ImageMagick to work, but it seems to be a powerful image manipulation framework.
You can do this quite easily with PIL, like this:
Ensure the image is represented as RGBA, by using convert('RGBA').
Split the image into separate R, G, B, and A bands.
Do whatever you want to the RGB bands (in this case we set them all to black using the point function), but don't touch the alpha band.
Merge the bands back.
Here's the code:
from PIL import Image
im = Image.open('image.png')
im = im.convert('RGBA')
r, g, b, a = im.split()
r = g = b = r.point(lambda i: 0)
im = Image.merge('RGBA', (r, g, b, a))
im.save('saved.png')
give it a shot.
from PIL import Image
import numpy
pixels = numpy.array(Image.open('myimage.png'))
pixels[:,:,0:3] = 255 - pixels[:,:,0:3] # invert
Image.fromarray(pixels).save('out.png')
Probably the fastest solution so far, since it doesn't interpret any Python code inside a "for each pixel" loop.
I'm trying to paste a png image on another png image.
Currently what I'm doing is:
from PIL import Image, ImageFilter
path = 'image.png'
image = Image.open(path).convert("RGBA").resize((80,80))
back = Image.open('base.png').convert("RGBA")
back.paste(image.convert("RGB"), (80,80), image)
back.save('result.png')
With that I'm getting:
However, I would like something like:
With a smoother transition between the image and the background.
The alpha channel of the bear is what determines what you can see at each location and therefore it controls the transition. If there is a step-change in the bear's alpha, there'll be a step-change in the transition.
So, you need to extract the alpha channel from the bear, soften it or smooth it with some type of blur and then put it back in the bear image and paste on top of the background as before.
# Split image into channels
R, G, B, A = bear.split()
# Blur/soften/smooth A channel
softA = ...
...
# Recombine channels
softBear = Image.merge('RGBA', (R, G, B, softA))
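A runnable sketch of those steps (synthetic images stand in for the bear and the background; GaussianBlur is just one choice of smoothing filter, and the radius is a knob to tune):

```python
from PIL import Image, ImageFilter

# stand-in for the bear: a square with a hard-edged alpha channel
bear = Image.new('RGBA', (100, 100), (200, 80, 20, 0))
bear.paste((200, 80, 20, 255), (25, 25, 75, 75))  # opaque center, step-change in alpha at the edge
back = Image.new('RGBA', (300, 300), (255, 255, 255, 255))

# Split image into channels
R, G, B, A = bear.split()

# Blur/soften/smooth the A channel; a larger radius widens the fade
softA = A.filter(ImageFilter.GaussianBlur(radius=4))

# Recombine channels and paste with the softened alpha as the mask
softBear = Image.merge('RGBA', (R, G, B, softA))
back.paste(softBear, (80, 80), softBear)
```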
I think I am doing some trivial standard task: converting a (py)cairo surface to a PIL(low) image. The original cairo surface uses ARGB mode. The target PIL image uses RGBA, i.e. I want to maintain all colors and the alpha channel. However, things get really bizarre in the conversion: it appears that cairo stores its data internally as BGRA, so I actually need to swap the color channels during the conversion, see here:
import cairo
import gi
gi.require_version('Rsvg', '2.0')
from gi.repository import Rsvg
from PIL import Image
w, h = 600, 600
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, w, h)
ctx = cairo.Context(surface)
ctx.set_source_rgba(1.0, 1.0, 1.0, 1.0) # explicitly draw white background
ctx.rectangle(0, 0, w, h)
ctx.fill()
# tested with https://raw.githubusercontent.com/pleiszenburg/abgleich/v0.0.7/src/abgleich/share/icon.svg
layout = Rsvg.Handle.new_from_file('icon.svg')
layout.render_cairo(ctx)
# EXPORT TEST #1: cairo
surface.write_to_png('export_cairo.png') # ok, looks as expected
pil = Image.frombuffer(mode='RGBA', size=(w, h), data=surface.get_data())
b, g, r, a = pil.split() # Color swap, part 1: Splitting the channels
pil = Image.merge('RGBA', (r, g, b, a)) # Color swap, part 2: Rearranging the channels
# EXPORT TEST #2: PIL
pil.save('export_pil.png') # ok, looks as expected IF COLORS ARE REARRANGED AS ABOVE
The above test uses rsvg, but it can also be reproduced by simply drawing a few colorful lines with cairo.
Am I terribly misunderstanding something or is this actually the right way to do it?
From the cairo documentation (https://www.cairographics.org/manual/cairo-Image-Surfaces.html#cairo-format-t):
CAIRO_FORMAT_ARGB32
each pixel is a 32-bit quantity, with alpha in the upper 8 bits, then red, then green, then blue. The 32-bit quantities are stored native-endian. Pre-multiplied alpha is used. (That is, 50% transparent red is 0x80800000, not 0x80ff0000.) (Since 1.0)
So, on little endian, this is actually what PIL calls BGRA, I think.
Not directly related to your question, but this is Pre-multiplied alpha.
According to https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes, the only mode with premultiplied alpha is 'RGBa'.
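If the premultiplication matters for your data, one option is to treat the buffer as 'RGBa' (after the channel swap) and let Pillow un-premultiply it via convert('RGBA'). A minimal sketch of just that last step (the single-pixel value is a made-up example):

```python
from PIL import Image

# a 1x1 premultiplied pixel: 50% transparent red, stored as (128, 0, 0, 128)
pre = Image.new('RGBa', (1, 1), (128, 0, 0, 128))
straight = pre.convert('RGBA')  # un-premultiplies the color channels
print(straight.getpixel((0, 0)))  # roughly (255, 0, 0, 128)
```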
or is this actually the right way to do it?
No idea what "right" means. However, my comment would be: there must be some way to do this without going through an intermediate image.
Since Pillow does not support cairo's image mode, perhaps you can instead use something numpy-y to do the conversion. For example, pycairo's test suite contains the following: https://github.com/dalembertian/pycairo/blob/22d29e0820d0dcbe070a6eb6f8f302e8c41b71a7/test/isurface_get_data.py#L37-L42
buf = surface.get_data()
a = numpy.ndarray (shape=(w,h,4), dtype=numpy.uint8, buffer=buf)
# draw a vertical line
a[:,40,0] = 255 # byte 0 is blue on little-endian systems
a[:,40,1] = 0
a[:,40,2] = 0
So, to convert from (in Pillow-speak) BGRa to RGBa, you could swap the red and blue channels like this (where a is a buffer similar to the above). Note that a plain tuple swap of two slices would corrupt the data, because both sides are views into the same array; fancy indexing copies the right-hand side first, so it is safe:
a[:, :, [0, 2]] = a[:, :, [2, 0]]
If this is really better than your approach of going through an intermediate Image... well, I do not know. You have to judge what is the best approach. At least you should now be able to explain why the channel swap is necessary: there is no common image format supported by both cairo and PIL.
I want a way or steps to unify the brightness of 2 images or in other words make their brightness the same but without assigning them. I know how to get the brightness of an image using PIL, the code is found below:
from PIL import Image
imag = Image.open("test.png")
# Convert the image to RGB in case it is e.g. a .gif
imag = imag.convert('RGB')
# coordinates of the pixel
X, Y = 0, 0
# Get RGB
pixelRGB = imag.getpixel((X, Y))
R, G, B = pixelRGB
brightness = sum([R, G, B]) / 3  # 0 is dark (black) and 255 is bright (white)
print(brightness)
Does anyone have an idea of how to make 2 images have the same brightness? Thank you.
You can use the mean/standard deviation color transfer technique in Python/OpenCV as described at https://www.pyimagesearch.com/2014/06/30/super-fast-color-transfer-images/. But to force it so as not to modify the color and only adjust the brightness/contrast, convert your image to HSV. Process only the V channel using the method described in that reference. Then combine the new V with the old S and H channels and convert that back to BGR.
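That reference uses OpenCV; a rough equivalent using only Pillow and NumPy (the function name and the plain mean/std transfer are my own choices, not from that article) could look like:

```python
import numpy as np
from PIL import Image

def match_brightness(src_img, ref_img):
    """Return src_img with its brightness (V channel) matched to ref_img."""
    src = np.asarray(src_img.convert('HSV')).astype(np.float64)
    ref = np.asarray(ref_img.convert('HSV')).astype(np.float64)
    v_src, v_ref = src[..., 2], ref[..., 2]
    # shift/scale the source V channel to the reference mean and std;
    # H and S are left untouched, so the colors are preserved
    std = v_src.std() or 1.0  # guard against division by zero for flat images
    src[..., 2] = np.clip((v_src - v_src.mean()) / std * v_ref.std() + v_ref.mean(), 0, 255)
    return Image.fromarray(src.astype(np.uint8), mode='HSV').convert('RGB')
```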
Okay, here's the situation:
I want to use the Python Image Library to "theme" an image like this:
Theme color: "#33B5E5"
IN:
OUT:
I got the result using this commands with ImageMagick:
convert image.png -colorspace gray image.png
mogrify -fill "#33b5e5" -tint 100 image.png
Explanation:
The image is first converted to black-and-white, and then it is themed.
I want to get the same result with the Python Image Library.
But it seems I'm having some problems using it, since it:
Cannot handle transparency
Themes the background (the transparent areas of the main image) too.
I'm trying to use this script:
from PIL import Image, ImageEnhance

def image_overlay(src, color="#FFFFFF", alpha=0.5):
    overlay = Image.new(src.mode, src.size, color)
    bw_src = ImageEnhance.Color(src).enhance(0.0)
    return Image.blend(bw_src, overlay, alpha)
img = Image.open("image.png")
image_overlay(img, "#33b5e5", 0.5)
You can see I did not convert it to a grayscale first, because that didn't work with transparency either.
I'm sorry to post so many issues in one question, but I couldn't do anything else :$
Hope you all understand.
Note: There's a Python 3/Pillow (the fork of PIL) version of this answer here.
Update 4: Guess the previous update to my answer wasn't the last one after all. Although converting it to use PIL exclusively was a major improvement, there were a couple of things that seemed like they ought to have better, less awkward ways of being done, if only PIL had the ability.
Well, after reading the documentation closely as well as some of the source code, I realized what I wanted to do was in fact possible. The trade-off was that now it has to build the look-up table used manually, so the overall code is slightly longer. However the result is that it only needs to make one call to the relatively slow Image.point() method, instead of three of them.
from PIL import Image
from PIL.ImageColor import getcolor, getrgb
from PIL.ImageOps import grayscale
def image_tint(src, tint='#ffffff'):
    if isinstance(src, str):  # file path?
        src = Image.open(src)
    if src.mode not in ['RGB', 'RGBA']:
        raise TypeError('Unsupported source image mode: {}'.format(src.mode))
    src.load()
    tr, tg, tb = getrgb(tint)
    tl = getcolor(tint, "L")  # tint color's overall luminosity
    if not tl: tl = 1  # avoid division by zero
    tl = float(tl)  # compute luminosity preserving tint factors
    sr, sg, sb = map(lambda tv: tv/tl, (tr, tg, tb))  # per component adjustments
    # create look-up tables to map luminosity to adjusted tint
    # (using floating-point math only to compute table)
    luts = (list(map(lambda lr: int(lr*sr + 0.5), range(256))) +
            list(map(lambda lg: int(lg*sg + 0.5), range(256))) +
            list(map(lambda lb: int(lb*sb + 0.5), range(256))))
    l = grayscale(src)  # 8-bit luminosity version of whole image
    if Image.getmodebands(src.mode) < 4:
        merge_args = (src.mode, (l, l, l))  # for RGB version of grayscale
    else:  # include copy of src image's alpha layer
        a = Image.new("L", src.size)
        a.putdata(src.getdata(3))
        merge_args = (src.mode, (l, l, l, a))  # for RGBA version of grayscale
        luts += list(range(256))  # for 1:1 mapping of copied alpha values
    return Image.merge(*merge_args).point(luts)

if __name__ == '__main__':
    import os
    input_image_path = 'image1.png'
    print('tinting "{}"'.format(input_image_path))
    root, ext = os.path.splitext(input_image_path)
    result_image_path = root + '_result' + ext
    print('creating "{}"'.format(result_image_path))
    result = image_tint(input_image_path, '#33b5e5')
    if os.path.exists(result_image_path):  # delete any previous result file
        os.remove(result_image_path)
    result.save(result_image_path)  # file name's extension determines format
    print('done')
Here's a screenshot showing input images on the left with corresponding outputs on the right. The upper row is for one with an alpha layer and the lower is a similar one that doesn't have one.
You need to convert to grayscale first. What I did:
get original alpha layer using Image.split()
convert to grayscale
colorize using ImageOps.colorize
put back original alpha layer
Resulting code:
from PIL import Image, ImageOps

def tint_image(src, color="#FFFFFF"):
    src.load()
    r, g, b, alpha = src.split()
    gray = ImageOps.grayscale(src)
    result = ImageOps.colorize(gray, (0, 0, 0, 0), color)
    result.putalpha(alpha)
    return result
img = Image.open("image.png")
tinted = tint_image(img, "#33b5e5")
I am using putpixel on an image (srcImage) which is w = 134 and h = 454.
The code here gets the r,g,b value of a part of the font which is 0,255,0 (which I found through debugging, using print option).
image = letters['H']
r, g, b = image.getpixel((1, 1))  # note: r, g, b values are 0, 255, 0
srcImage.putpixel((10,15),(r,g,b))
srcImage.save('lolmini2.jpg')
This code does not throw any error. However, when I check the saved image I cannot spot the pure green pixel.
Instead of using putpixel() and getpixel(), you should use indexing. For getpixel() you can use pixels[1, 1], and for putpixel() you can use pixels[1, 1] = (r, g, b). It should work the same, but it's much faster. Here, pixels is image.load().
However, I don't see why it wouldn't work. It should work without a problem. Perhaps the jpeg compression is killing you here. Have you tried saving it as a png/gif file instead? Or setting more than 1 pixel.
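For example (a sketch; the image size and coordinates just mirror the question):

```python
from PIL import Image

img = Image.new('RGB', (134, 454), 'white')
pixels = img.load()          # fast pixel-access object
r, g, b = 0, 255, 0
pixels[10, 15] = (r, g, b)   # write one pure green pixel
print(pixels[10, 15])        # (0, 255, 0)
```

Saving as PNG (img.save('lolmini2.png')) keeps the pixel exactly; JPEG's lossy compression will smear a single pixel into its neighbours.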
I know it is a very old post but, for beginners who'd want to stick to putpixel() for a while, here's the solution:
initialize the image variable as:
from PIL import Image
img = Image.new('RGB', [200,200], 0x000000)
Make sure to initialize it as 'RGB' if you want to manipulate RGB values.
Sometimes people initialize images as:
img = Image.new('I', [200, 200], 0x000000)
and then try to work with RGB values, which doesn't work.
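A small sketch of the difference: an 'I' image stores a single 32-bit integer per pixel, while an 'RGB' image stores a tuple of three 8-bit channels:

```python
from PIL import Image

rgb = Image.new('RGB', (10, 10))
rgb.putpixel((0, 0), (255, 0, 0))      # works: three 8-bit channels
print(rgb.getpixel((0, 0)))            # (255, 0, 0)

gray = Image.new('I', (10, 10))
gray.putpixel((0, 0), 12345)           # works: one integer per pixel
# gray.putpixel((0, 0), (255, 0, 0))   # would fail: 'I' expects a single int
```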