All I need is to create a .png image with a transparent background, draw some text in black on it, and save it using img.save('target.png', optimize=True)
It looks like PIL saves .png images in 32-bit mode automatically. Can I reduce the color depth while not making the output images look much worse before saving? Since it contains only black text and transparent background, I think reducing the color depth would greatly reduce file size.
Of the common modes, RGBA is the one with full transparency support, and it uses 32 bits per pixel:
1 (1-bit pixels, black and white, stored with one pixel per byte)
L (8-bit pixels, black and white)
P (8-bit pixels, mapped to any other mode using a color palette)
RGB (3x8-bit pixels, true color)
RGBA (4x8-bit pixels, true color with transparency mask)
I would recommend storing your text as a non-transparent mode 1 image and using a mask when pasting it onto the transparent background. This takes 32 times less space than RGBA without any loss of information. Note the mask polarity, from the documentation for paste():
You can use either “1”, “L” or “RGBA” images (in the latter case, the alpha band is used as mask). Where the mask is 255, the given image is copied as is. Where the mask is 0, the current value is preserved. Intermediate values will mix the two images together, including their alpha channels if they have them.
It will look something like this:
your_transparent_image.paste(bw_image, mask=ImageOps.invert(bw_image.convert('L')))
where bw_image is your black-and-white text (black text on a white background). Inverting it makes the mask white exactly where the text is, so only the text pixels are pasted and the rest of the background stays transparent.
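A minimal runnable sketch of the idea (the size, text, and coordinates are placeholders). Since paste() copies pixels where the mask is white (255), the black-on-white text image is inverted to serve as the mask:

```python
from PIL import Image, ImageDraw, ImageOps

# A minimal sketch; size, text and position are placeholders.
bw_image = Image.new('1', (300, 80), 1)                   # white background
ImageDraw.Draw(bw_image).text((10, 25), 'Hello', fill=0)  # black text

# paste() copies pixels where the mask is white, so invert the
# black-on-white text to get a mask that is white where the text is
# (ImageOps.invert needs mode 'L').
mask = ImageOps.invert(bw_image.convert('L'))
canvas = Image.new('RGBA', bw_image.size, (0, 0, 0, 0))   # fully transparent
canvas.paste(bw_image, mask=mask)
canvas.save('target.png', optimize=True)
```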
Related
I am trying to use a .png file in a CNN with TensorFlow and have run into trouble: I get a strange shape when I import the .png file.
png_file = mpimg.imread("201537.png")
png_file.shape
and get the output as (255, 255, 4).
I get that the file is 255 pixels x 255 pixels, but doesn't the last number mean RGB, so shouldn't it be 3 and not 4?
PNG images support transparency. The transparency value per pixel is contained as another channel besides the 3 color channels for red, green, and blue.
The transparency channel is also called "alpha channel" and the image mode is therefore called RGBA.
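If your network expects three channels, one option is simply to drop the alpha channel before feeding the array to the CNN. A minimal NumPy sketch, using a synthetic array in place of the actual mpimg.imread result:

```python
import numpy as np

# Hypothetical RGBA array standing in for mpimg.imread("201537.png");
# matplotlib returns floats in [0, 1] for PNG files.
png_file = np.zeros((255, 255, 4), dtype=np.float32)
png_file[..., 3] = 1.0            # fully opaque alpha channel

rgb = png_file[..., :3]           # drop the alpha channel
print(png_file.shape, rgb.shape)  # (255, 255, 4) (255, 255, 3)
```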
My goal is to draw the text bounding boxes for the following image. Since the two regions are colored differently, this should be easy: I just need to select the pixels that match certain color values to filter out the other text region and run a convex hull detection.
However, when I zoom into the image, I notice that the text regions have a zig-zag effect on the edges, so I'm not able to easily find the two color values (for the blue and green) from the above image.
Is there a way to remove the zig-zag effect to make sure each phrase is colored consistently? Or is there a way to determine the dominant color for each text region?
The anti-aliasing causes the color to become lighter (or darker if against a black background) so you can think of the color as being affected by light. In that case, we can use light-invariant color spaces to extract the colors.
So first convert to HSV, since it is a light-invariant color space. Since the background can be either black or white, we will filter them out (if the background is always white and the text can be black, you would need to change the filtering to allow for that).
I took saturation less than 80, as that encompasses white, black, and gray, since they are the only colors with low saturation. (Your image is not perfectly white; it's 238 instead of 255, maybe due to JPEG compression.)
Since we found all the black, white, and gray, the rest of the image holds our main colors, so I took the inverse of the filter mask. Then, to make the colors uniform and unaffected by light, I set the saturation and value of those pixels to 255, so the only remaining difference between the colors is the hue. I also set the background pixels to 0 to make it easier to find contours, but that's not necessary.
After this you can use whatever method you want to separate the groups of colors. I did a quick histogram of the hue values and got 3 peaks, but 2 were close together, so they can be bundled together as 1. You could use peak finding to locate the peaks automatically. There might be better methods of finding the color groups, but this is a quick first approach.
import cv2
import numpy as np

img = cv2.imread('text_regions.png')  # hypothetical filename for the question's image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

mask = hsv[:, :, 1] < 80        # low saturation: white, gray & black
hsv[mask] = 0                   # set bg pixels to 0
hsv[~mask, 1:] = 255            # set fg saturation and value to 255 for uniformity

colors = hsv[~mask]
z = np.bincount(colors[:, 0])   # histogram of the hue values
print(z)

bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imshow('bgr', bgr)
cv2.waitKey(0)
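To locate the peaks in the hue histogram programmatically, a simple local-maximum scan works. This is only a sketch on a synthetic histogram; in practice z comes from the np.bincount call above, and the hue bins and counts here are made up:

```python
import numpy as np

# Synthetic stand-in for the hue histogram `z` from np.bincount above
# (hue bins and counts are made up for illustration).
z = np.zeros(180, dtype=int)
z[30] = 500     # first colour's peak
z[31] = 450     # neighbouring bin, part of the same peak
z[105] = 700    # second colour's peak

# Keep bins that beat both neighbours and carry a minimum count...
padded = np.pad(z, 1)
is_peak = (z >= padded[:-2]) & (z > padded[2:]) & (z > 100)
hues = np.flatnonzero(is_peak).tolist()

# ...then merge peaks whose hues are close together (within 5 bins).
merged = [h for i, h in enumerate(hues) if i == 0 or h - hues[i - 1] > 5]
print(merged)   # [30, 105] -> two colour groups
```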
After importing an image using python's PIL module I would like to get the set of colours in the image as a list of rgb tuples.
What if I know beforehand that there will only be 2 colours and the image will be very small, maybe 20x20 pixels? However, I will be running this algorithm over a lot of images. Would it be more efficient to loop through all pixels until I see 2 unique colours? I understand loops are very slow in Python.
First, let's make an image. I'll just use ImageMagick to make a blue background with magenta writing:
convert -size 300x120 -background blue -fill magenta -gravity center -font AppleChancery label:"StackOverflow" PNG24:image.png
As you can see, I only specified two colours, magenta and blue, but because of the anti-aliased font the PNG image actually contains 200+ colours (and a JPEG version of the same image contains 2,370 different colours!).
So, if I want to get the two main colours, I can do this:
from PIL import Image
# Open the image
im = Image.open('image.png')
# Quantize down to 2 colour palettised image using *"Fast Octree"* method:
q = im.quantize(colors=2, method=2)
# Now look at the first 2 colours, each 3 RGB entries in the palette:
print(q.getpalette()[:6])
Sample Result
[0, 0, 255, 247, 0, 255]
If you write that out as 2 RGB triplets, you get:
RGB 0/0/255 = blue
RGB 247/0/255 = magenta
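As for the follow-up about tiny two-colour images: you don't need a Python loop at all, since NumPy can list the unique colours in one vectorised call. A sketch on a synthetic 20x20 image using the same two colours:

```python
import numpy as np
from PIL import Image

# Hypothetical 20x20 two-colour image: blue with a magenta rectangle.
im = Image.new('RGB', (20, 20), (0, 0, 255))
im.paste((247, 0, 255), (5, 5, 15, 15))

# View the pixels as an (N, 3) array and take the unique rows.
px = np.asarray(im).reshape(-1, 3)
colours = np.unique(px, axis=0)
print(colours.tolist())   # [[0, 0, 255], [247, 0, 255]]
```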
The best way to do this for lots of images is to use multithreading or multiprocessing if you want them done fast!
Keywords: Python, PIL, Pillow, image, image processing, octree, fast octree, quantise, quantize, palette, palettise, palettize, reduce colours, reduce colors, anti-aliasing, font, unique, unique colours, unique colors.
I can successfully convert a rectangular image into a png with transparent rounded corners like this:
However, when I take this transparent cornered image and I want to use it in another image generated with Pillow, I end up with this:
The transparent corners become black. I've been playing around with this for a while, but I can't find a way to keep the transparent parts of an image from turning black once I place it on another image with Pillow.
Here is the code I use:
mask = Image.open('Test mask.png').convert('L')
im = Image.open('boat.jpg')
im.resize(mask.size)
output = ImageOps.fit(im, mask.size, centering=(0.5, 0.5))
output.putalpha(mask)
output.save('output.png')
im = Image.open('output.png')
image_bg = Image.new('RGBA', (1292,440), (255,255,255,100))
image_fg = im.resize((710, 400), Image.ANTIALIAS)
image_bg.paste(image_fg, (20, 20))
image_bg.save('output2.jpg')
Is there a solution for this? Thanks.
Per some suggestions I exported the 2nd image as a PNG, but then I ended up with an image with holes in it:
Obviously I want the second image to have a consistent white background without holes.
Here is what I actually want to end up with. The orange is only placed there to highlight the image itself. It's a rectangular image with white background, with a picture placed into it with rounded corners.
If you paste an image with transparent pixels onto another image, the transparent pixels are just copied as well. It looks like you only want to paste the non-transparent pixels. In that case, you need a mask for the paste function.
image_bg.paste(image_fg, (20, 20), mask=image_fg)
Note the third argument here. From the documentation:
If a mask is given, this method updates only the regions indicated by
the mask. You can use either "1", "L" or "RGBA" images (in the latter
case, the alpha band is used as mask). Where the mask is 255, the
given image is copied as is. Where the mask is 0, the current value
is preserved. Intermediate values will mix the two images together,
including their alpha channels if they have them.
What we did here is provide an RGBA image as mask, and use the alpha channel as mask.
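Putting it together, here is a self-contained sketch: the rounded-corner image is synthesized with ImageDraw.rounded_rectangle (available in Pillow 8.2+) as a stand-in for the question's output.png, and the sizes mirror the question's code:

```python
from PIL import Image, ImageDraw

# Synthetic stand-in for the question's rounded-corner 'output.png'
# (rounded_rectangle needs Pillow 8.2+).
image_fg = Image.new('RGBA', (710, 400), (30, 60, 200, 255))
mask = Image.new('L', image_fg.size, 0)
ImageDraw.Draw(mask).rounded_rectangle((0, 0, 709, 399), radius=40, fill=255)
image_fg.putalpha(mask)

# Paste onto an opaque white RGB background; the third argument makes
# paste() honour the alpha channel, so the corners come out white. An
# RGB image can then be saved as JPEG without black corners or holes.
image_bg = Image.new('RGB', (1292, 440), (255, 255, 255))
image_bg.paste(image_fg, (20, 20), mask=image_fg)
image_bg.save('output2.jpg')
```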
I'm trying to mask a JPEG image using a black/transparent PNG mask, but due to aliasing and border blur, I always get a contour line of the original JPEG in the output.
Since graphical precision is not required by the task, this could be easily solved by increasing the masked area by a few pixels.
So for example if the masked area allows a centered circle of 100px, simply "extending" the circle by some pixel, would solve the problem.
Is there a way to achieve this with Pillow ?
I found a solution; I'll write it down so that others may benefit if needed:
1) Apply a Gaussian blur to the mask. This will "expand" the borders with a shade.
1b) Convert to black/white colors only, if needed.
2) Apply a transformation that converts each pixel to black or white based on a threshold, with no other colors allowed.
So, something similar to:
from PIL import ImageFilter

black_threshold = 128
img = img.filter(ImageFilter.GaussianBlur(radius=3))  # step 1: blur the mask
r, g, b, a = img.split()                              # assuming an RGBA PNG mask
gray = a                                              # the alpha band is already mode 'L'
gray = gray.point(lambda x: 0 if x < black_threshold else 255)  # step 2: re-threshold
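The hardened band can then be used as the paste mask. A self-contained sketch with synthetic stand-ins for the photo and the black/transparent mask (in practice these come from your files):

```python
from PIL import Image, ImageDraw, ImageFilter

# Synthetic stand-ins (in practice these come from your files): a photo
# and a black/transparent RGBA mask with a centred opaque circle.
photo = Image.new('RGB', (200, 200), (200, 120, 40))
img = Image.new('RGBA', (200, 200), (0, 0, 0, 0))
ImageDraw.Draw(img).ellipse((50, 50, 150, 150), fill=(0, 0, 0, 255))

# Step 1: the blur softens the border; step 2: the threshold re-hardens
# it (lowering the threshold keeps more of the blurred fringe, i.e.
# expands the masked area further).
img = img.filter(ImageFilter.GaussianBlur(radius=3))
gray = img.split()[3]                                  # the alpha band
gray = gray.point(lambda x: 0 if x < 128 else 255)

# Use the hardened band as the paste mask.
out = Image.new('RGBA', photo.size, (0, 0, 0, 0))
out.paste(photo, mask=gray)
```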