Pillow - Transparency over non-transparent image with paste - python

Let me prefix this with a disclaimer that I am clueless when it comes to imaging/graphics altogether, so maybe I'm lacking a fundamental understanding of something here.
I'm trying to paste an image (game_image) onto my base image (image), with a transparent overlay (overlay_image) on top to add some darkening for the text.
Here's an example of the expected result:
Here's an example of what my current code generates:
Here is my current code:
from PIL import Image, ImageFont, ImageDraw
# base image sizing specific to Twitter recommended
base_image_size = (1600, 900)
base_image_mode = "RGBA"
base_image_background_color = (0, 52, 66)
image = Image.new(base_image_mode, base_image_size, base_image_background_color)
# game_image is the box art image on the left side of the card
game_image = Image.open("hunt.jpg")
image.paste(game_image)
# overlay_image is the darkened overlay over the left side of the card
overlay_image = Image.new(base_image_mode, base_image_size, (0, 0, 0))
overlay_image.putalpha(128)
# x position should be negative 50% of base canvas size
image.paste(overlay_image, (-800, 0), overlay_image)
image.save("test_image.png", format="PNG")
You can see that the game image sort of inherits the transparency from the overlay. I suspect it has something to do with the mask added in my paste above, but I tried looking into what masking is and it's just beyond my understanding in any context I find it in.
Any help on understanding why this occurs and/or how I can resolve it is appreciated!

You are super close... All you need is to use Image.alpha_composite instead of paste. So, the last two lines of your code should be:
image = Image.alpha_composite(image, overlay_image)
image.save("test_image.png", format="PNG")

Related

How to highlight part of image with Python Imaging Library (PIL)?

How can I highlight part of an image? (The location is defined as a tuple of 4 numbers.) You can imagine it like this: I have an image of a PC motherboard, and I need to highlight, for example, the part where the CPU socket is located.
Note that for Python 3, you need to use the Pillow fork of PIL, which is a mostly backwards-compatible fork of the original module that, unlike it, is still actively maintained.
Here's some sample code that shows how to do it using the PIL.ImageEnhance.Brightness class.
Doing what you want requires multiple steps:
The portion to be highlighted is cut out of — or cropped from — the image.
An instance of the Brightness class is created from this cropped image.
The cropped image is then lightened by calling the enhance() method of the Brightness instance.
The cropped and now lightened image is pasted back into the location it came from.
To make these steps easy to repeat, below is a function named highlight_area() that performs them.
Note that I've also added a bonus feature that will optionally outline the highlighted region with a colored border — which you can of course remove if you don't need or want it.
from PIL import Image, ImageColor, ImageDraw, ImageEnhance

def highlight_area(img, region, factor, outline_color=None, outline_width=1):
    """ Highlight specified rectangular region of image by `factor` with an
        optional colored border drawn around its edges and return the result.
    """
    img = img.copy()  # Avoid changing original image.
    img_crop = img.crop(region)

    brightener = ImageEnhance.Brightness(img_crop)
    img_crop = brightener.enhance(factor)

    img.paste(img_crop, region)

    # Optionally draw a colored outline around the edge of the rectangular region.
    if outline_color:
        draw = ImageDraw.Draw(img)  # Create a drawing context.
        left, upper, right, lower = region  # Get bounds.
        coords = [(left, upper), (right, upper), (right, lower), (left, lower),
                  (left, upper)]
        draw.line(coords, fill=outline_color, width=outline_width)

    return img

if __name__ == '__main__':
    img = Image.open('motherboard.jpg')

    red = ImageColor.getrgb('red')
    cpu_socket_region = 110, 67, 274, 295
    img2 = highlight_area(img, cpu_socket_region, 2.5, outline_color=red, outline_width=2)

    img2.save('motherboard_with_cpu_socket_highlighted.jpg')
    img2.show()  # Display the result.
Here's an example of using the function. The original image is shown on the left opposite the one resulting from calling the function on it with the values shown in the sample code.

Python PIL remove every alpha channel completely

I tried so hard to convert a PNG to a bitmap smoothly but failed every time,
but now I think I might have found the reason:
it's because of the alpha channels
('feather' in Photoshop).
Input image:
The output I expected:
Current output:
I want to convert it to an 8-bit bitmap, colour every invisible (alpha) pixel purple (#FF00FF), and set them to index zero (the very first palette entry).
But apparently the background area and the invisible area around the actual image have different colours.
I want all of them coloured the same as the background.
What should I do?
I tried these three approaches:
image = Image.open(file).convert('RGB')

image = Image.open(file)
image = image.convert('P')
pp = image.getpalette()
pp[0] = 255
pp[1] = 0
pp[2] = 255
image.putpalette(pp)

image = Image.open('feather.png')
result = image.quantize(colors=256, method=2)
The third method looks better, but it becomes the same when I save it as a bitmap.
I just want to get this over with now. I have wasted too much time on this.
If I remove the background from the output file,
it still looks awkward.
Your question is kind of misleading, as you stated:
I want to convert it to an 8-bit bitmap, colour every invisible (alpha) pixel purple (#FF00FF), and set them to index zero (the very first palette entry).
But in the description you gave an input image that has no alpha channel. Luckily, I have seen your previous question, Convert PNG to 8 bit bitmap, so I obtained the image containing alpha that you mentioned in the description but didn't post.
Here is the image with alpha:
Now we have to obtain the .bmp equivalent of this image, in P mode.
from PIL import Image
image = Image.open(r"Image_loc")
new_img = Image.new("RGB", (image.size[0],image.size[1]), (255, 0, 255))
cmp_img = Image.composite(image, new_img, image).quantize(colors=256, method=2)
cmp_img.save("Destination_path.bmp")
Output image:
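A variant of the same idea, in case Image.composite looks opaque: flatten the alpha onto a purple background with alpha_composite first, then quantize. The feather.png / feather.bmp filenames are modelled on the question, and whether purple actually lands in palette slot zero still depends on the quantizer:

from PIL import Image

image = Image.open("feather.png").convert("RGBA")

# Flatten the alpha channel onto a solid purple background first...
background = Image.new("RGBA", image.size, (255, 0, 255, 255))
flattened = Image.alpha_composite(background, image).convert("RGB")

# ...then quantize down to a 256-colour palette and save as an 8-bit BMP.
paletted = flattened.quantize(colors=256, method=2)
print(paletted.getpalette()[:3])  # inspect the first palette entry as [R, G, B]
paletted.save("feather.bmp")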

How to remove an image in pygame?

I'm working on a game and I have some problems working with images.
I have loaded a few images. Loading them and using screen.blit() was okay, like below:
img1 = pygame.image.load("leaf.png")
img1 = pygame.transform.scale(img1, (25,25))
leaf = img1.get_rect()
leaf.x = random.randint(0, 570)
leaf.y = random.randint(0, 570)
but I don't know how to remove them in an if statement like this for example:
if count == 1:
...
and I thought maybe there is no way and I should draw a rectangle over the image to make it disappear. Also, I don't know how to use screen.fill() without making the other images disappear. Is there any other way?
You can fill individual images, since they are pygame Surfaces.
First, I would put something like this after defining the leaf's x/y:
leaf.image = img1
Then, I would create a color variable called transparent:
transparent = (0, 0, 0, 0)
The first 3 numbers, as you might know, represent RGB color values. The last number is the alpha (transparency) value of a color. 0 is completely invisible.
Finally, I would add this code to make the leaf completely transparent:
leaf.image.fill(transparent)
This makes the leaf transparent without making every other image in your window disappear. Hope this helped!
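As an alternative to the fill trick above, the more common pattern is to clear the screen each frame and simply stop blitting the image once it should be gone. A rough sketch; the 600x600 window, the 60 fps cap, and the count variable standing in for whatever counter the game uses are assumptions:

import random
import pygame

pygame.init()
screen = pygame.display.set_mode((600, 600))
clock = pygame.time.Clock()

img1 = pygame.image.load("leaf.png").convert_alpha()
img1 = pygame.transform.scale(img1, (25, 25))
leaf = img1.get_rect(topleft=(random.randint(0, 570), random.randint(0, 570)))

count = 0
leaf_visible = True
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    if count == 1:
        leaf_visible = False  # "remove" the leaf by no longer drawing it

    screen.fill((0, 0, 0))       # clear the whole frame first
    if leaf_visible:
        screen.blit(img1, leaf)  # only blit while the leaf should still be shown
    pygame.display.flip()
    clock.tick(60)

pygame.quit()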

python imaging library: Can I simply fill my image with one color?

I googled, checked the documentation of the PIL library, and much more, but I couldn't find the answer to my simple question: how can I fill an existing image with a desired color?
(I am using from PIL import Image and from PIL import ImageDraw)
This command creates a new image filled with a desired color
image = Image.new("RGB", (self.width, self.height), (200, 200, 200))
But I would like to reuse the same image without the need to call "new" every time.
Have you tried:
image.paste(color, box)
where box can be a 2-tuple giving the upper-left corner, a 4-tuple defining the left, upper, right, and lower pixel coordinates, or None (same as (0, 0)).
Since you want to fill the entire image, you can use the following:
image.paste((200, 200, 200), [0, 0, image.size[0], image.size[1]])
One possibility is to draw a rectangle:
from PIL import Image
from PIL import ImageDraw
#...
draw = ImageDraw.Draw(image)
draw.rectangle([(0,0),image.size], fill = (200,200,200) )
Or (untested):
ImageDraw.Draw(image).rectangle([(0,0),image.size], fill = (200,200,200) )
(Although it is surprising there is no simpler method to fill a whole image with one background color, like setTo in OpenCV.)
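If you want something closer to a one-call fill, you could wrap the paste approach in a tiny helper of your own. The fill function below is not part of PIL, just a convenience sketch, and the 640x480 size is arbitrary:

from PIL import Image

def fill(image, color):
    # Paste the colour over the image's full bounding box, which makes
    # paste behave like an in-place fill.
    image.paste(color, [0, 0, *image.size])

img = Image.new("RGB", (640, 480))
fill(img, (200, 200, 200))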

Mirror Image but wrong size

I am trying to input an image (image1), flip it horizontally, and then save it to a file (image2). This works, but not the way I want it to.
Currently this code gives me a flipped image, but it only shows the bottom-right quarter of the image, so it is the wrong size. Am I overwriting something somewhere? I just want the code to flip the image horizontally and show the whole picture flipped. Where did I go wrong?
I cannot just use a mirror function or reverse function; I need to write an algorithm.
I get the correct window size but the incorrect image size.
def Flip(image1, image2):
    img = graphics.Image(graphics.Point(0, 0), image1)
    X, Y = img.getWidth(), img.getHeight()
    for y in range(Y):
        for x in range(X):
            r, g, b = img.getPixel(x, y)
            color = graphics.color_rgb(r, g, b)
            img.setPixel(X - x, y, color)
    win = graphics.GraphWin(img, img.getWidth(), img.getHeight())
    img.draw(win)
    img.save(image2)
I think your problem is in this line:
win = graphics.GraphWin(img, img.getWidth(), img.getHeight())
The first argument to the GraphWin constructor is supposed to be the title, but you are instead giving it an Image object. It makes me believe that maybe the width and height you are supplying are then being ignored. The default width and height for GraphWin is 200 x 200, so depending on the size of your image, that may be why only part of it is being drawn.
Try something like this:
win = graphics.GraphWin("Flipping an Image", img.getWidth(), img.getHeight())
Another problem is that your anchor point for the image is wrong. According to the docs, the anchor point is where the center of the image will be rendered (thus at 0,0 you are only seeing the bottom right quadrant of the picture). Here is a possible solution if you don't know what the size of the image is at the time of creation:
img = graphics.Image(graphics.Point(0, 0), image1)
img.move(img.getWidth() / 2, img.getHeight() / 2)
You are editing your source image. It would be
better to create an image copy and set those pixels instead:
Create a copy of the image for editing:
img_new = img.clone()
Assign the pixel values to that:
img_new.setPixel(X-x, y, color)
And draw that instead:
win = graphics.GraphWin(img_new, img_new.getWidth(), img_new.getHeight())
img_new.draw(win)
img_new.save(image2)
This will also check that your ranges are correct. If they are not, you will see both flipped and unflipped portions in the final image, showing which portions are outside of your ranges.
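Putting the two answers together (a proper window title, a centred anchor, and writing into a copy), a rough sketch of the corrected function might look like the following. It assumes Zelle's graphics module, whose Image objects provide a clone() method; the X - x - 1 index is an extra tweak so the write stays inside the image bounds:

import graphics

def Flip(image1, image2):
    img = graphics.Image(graphics.Point(0, 0), image1)
    X, Y = img.getWidth(), img.getHeight()
    # Centre the image in the window (the anchor point is its centre).
    img.move(X / 2, Y / 2)
    # Work on a copy so the source pixels are never overwritten while
    # they are still being read.
    img_new = img.clone()
    for y in range(Y):
        for x in range(X):
            r, g, b = img.getPixel(x, y)
            color = graphics.color_rgb(r, g, b)
            # X - x - 1 mirrors the column while staying in 0 .. X-1.
            img_new.setPixel(X - x - 1, y, color)
    win = graphics.GraphWin("Flipping an Image", X, Y)
    img_new.draw(win)
    img_new.save(image2)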
If you're not opposed to using an external library, I'd recommend the Python Imaging Library. In particular, the ImageOps module has a mirror function that should do exactly what you want.
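For reference, that amounts to roughly this one-liner (the filenames are placeholders):

from PIL import Image, ImageOps

# ImageOps.mirror flips an image left to right.
ImageOps.mirror(Image.open("image1.png")).save("image2.png")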
