How do you draw semi-transparent polygons using the Python Imaging Library?
Can you draw the polygon on a separate RGBA image then use the Image.paste(image, box, mask) method?
Edit: This works.
from PIL import Image
from PIL import ImageDraw
# Fully transparent red background image
back = Image.new('RGBA', (512, 512), (255, 0, 0, 0))

# Draw the polygon on a separate, fully transparent RGBA image
poly = Image.new('RGBA', (512, 512))
pdraw = ImageDraw.Draw(poly)
pdraw.polygon([(128, 128), (384, 384), (128, 384), (384, 128)],
              fill=(255, 255, 255, 127), outline=(255, 255, 255, 255))

# Use the polygon image as its own mask so only the drawn pixels are pasted
back.paste(poly, mask=poly)
back.show()
http://effbot.org/imagingbook/image.htm#image-paste-method
I think Nick T's answer is good, but you need to be careful when using his code as written with a very large background image, especially if you are annotating several polygons on that image. This is something I do when processing huge satellite images with some object detection code and annotating the detections with transparent rectangles. To keep the code efficient no matter the size of the background image, I make the following suggestion.
I would modify the solution so that the polygon image you paste is only as large as required to hold the polygon, rather than the same size as the back image. The polygon's coordinates are specified with respect to the local bounding box, not the global image coordinates, and the polygon image is then pasted at the appropriate offset in the larger background image.
from PIL import Image
from PIL import ImageDraw

img_size = (512, 512)
poly_size = (256, 256)      # size of the polygon image, just big enough to hold it
poly_offset = (128, 128)    # location of the polygon image in the larger image

back = Image.new('RGBA', img_size, (255, 0, 0, 0))
poly = Image.new('RGBA', poly_size)
pdraw = ImageDraw.Draw(poly)

# Polygon coordinates are local to the small polygon image
pdraw.polygon([(0, 0), (256, 256), (0, 256), (256, 0)],
              fill=(255, 255, 255, 127), outline=(255, 255, 255, 255))

back.paste(poly, poly_offset, mask=poly)
back.show()
Using the Image.paste(image, box, mask) method will convert the alpha channel in the pasted area of the background image into the corresponding transparency value of the polygon image.
The Image.alpha_composite(im1, im2) method uses the alpha channel of the "pasted" image and will not turn the background transparent. However, this method requires two images of the same size.
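For example, a minimal sketch of the alpha_composite approach, reusing the equally sized back and poly images from the first snippet above:
# Both inputs must be RGBA images of identical size; a new composited image is returned
result = Image.alpha_composite(back, poly)
result.show()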
Related
I have been trying to write code to extract cracks from an image using thresholding. However, I want to keep the background black. What would be a good solution to keep the outer boundary visible and the background black? Attached below are the original image, the thresholded image, and the code used to extract this image.
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Read image
img = cv2.imread('Original.png')

# Convert into gray scale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Image processing (smoothing) - averaging
blur = cv2.blur(gray, (3, 3))

# Threshold and invert
ret, th1 = cv2.threshold(blur, 145, 255, cv2.THRESH_BINARY)
inverted = np.invert(th1)

plt.figure(figsize=(20, 20))
plt.subplot(121), plt.imshow(img)
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(inverted, cmap='gray')
plt.title('Threshold'), plt.xticks([]), plt.yticks([])
plt.show()
Method 1
Assuming the circle in your images stays in one spot throughout your image set, you can manually create a black 'mask' image with a white hole in the middle, then overlay it on the final inverted image.
You can easily make the mask image using your favorite image editor's magic wand tool.
I made this[1] by also expanding the circle inwards by one pixel to account for some of the pixels the magic wand tool couldn't catch.
You would then use the mask image like this:
# The mask must be a single-channel image, so read it as grayscale
mask = cv2.imread('/path/to/mask.png', cv2.IMREAD_GRAYSCALE)
masked = cv2.bitwise_and(inverted, inverted, mask=mask)
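If you would rather build the mask programmatically instead of with an image editor, here is a minimal sketch using a filled white circle; the centre and radius values are placeholders you would tune for your images:
import numpy as np

# Single-channel black mask with a filled white circle where the sample is
mask = np.zeros(inverted.shape[:2], dtype=np.uint8)
cv2.circle(mask, (640, 640), 600, 255, -1)  # centre, radius, colour, thickness=-1 fills the circle
masked = cv2.bitwise_and(inverted, inverted, mask=mask)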
Method 2
If the circle does NOT stay in the same spot throughout your entire image set, you can try to make the mask from all the fully black pixels in your original image. This assumes that the 'sample' itself (the thing with the cracks) does not contain fully black pixels. Note that this will leave the text in the bottom left white.
# make all the non black pixels white
_,mask = cv2.threshold(gray,1,255,cv2.THRESH_BINARY)
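You would then apply this mask to the inverted image the same way as in Method 1:
masked = cv2.bitwise_and(inverted, inverted, mask=mask)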
[1] The original is not the same size as your inverted image, so the mask I made won't actually fit; you're going to have to make it yourself.
I want to take one image and overlay only its outline, without the background or filling. I have one image that is an outline in PNG format; its background, as well as the contents within the outline, have been removed, so that when opened, everything is transparent except the outline, similar to this image:
However, when I open the image and try to overlay it in OpenCV, the background and the area within the outline show up as all white, filling the full rectangle of the image's dimensions and obscuring the background image.
What I want instead is the following, where only the outline is overlaid on the background image, like so:
Bonus points if you can help me with changing the color of the outline as well.
I don't want to deal with any blending with alphas, as I need the background to appear in full, and want the outline very clear.
In this special case, your image has an alpha channel you can use. Using Boolean array indexing, you can access all values of 255 in the alpha channel. What's left to do is to set up a region of interest (ROI) in the "background" image w.r.t. some position, and in that ROI use Boolean array indexing again to set the selected pixels to some color, e.g. red.
Here's some code:
import cv2
# Open overlay image, and its dimensions
overlay_img = cv2.imread('1W7HZ.png', cv2.IMREAD_UNCHANGED)
h, w = overlay_img.shape[:2]
# In this special case, take the alpha channel of the overlay image, and
# check for value 255; idx is a Boolean array
idx = overlay_img[:, :, 3] == 255
# Open image to work on
img = cv2.imread('path/to/your/image.jpg')
# Position for overlay image
top, left = (50, 50)
# Access region of interest with overlay image's dimensions at position
# img[top:top+h, left:left+w] and there, use Boolean array indexing
# to set the color to red (for example)
img[top:top+h, left:left+w, :][idx] = (0, 0, 255)
# Save image
cv2.imwrite('output.png', img)
That's the output for some random "background" image:
For the general case, i.e. without a proper alpha channel, you could threshold the overlay image to set up a proper mask for the Boolean array indexing.
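A rough sketch of that idea, reusing the variables from the code above (the threshold value of 127 is just a guess you would tune, and it assumes a dark outline on a light or transparent background):
# Threshold the grayscale overlay to build the Boolean mask instead of using alpha
overlay_gray = cv2.cvtColor(overlay_img[:, :, :3], cv2.COLOR_BGR2GRAY)
_, overlay_mask = cv2.threshold(overlay_gray, 127, 255, cv2.THRESH_BINARY_INV)
idx = overlay_mask == 255
img[top:top+h, left:left+w, :][idx] = (0, 0, 255)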
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.5
OpenCV: 4.5.1
----------------------------------------
I have two images, one overlay and one background.
I want to create a new image by editing the overlay image, manipulating it to show only the pixels that have blue colour in the background image.
I don't want to add the background; it is only used for selecting the pixels.
The rest should be transparent.
Any hints or ideas, please? PS: I edited the result image with Paint, so it's not perfect.
Image 1 is background image.
Image 2 is overlay image.
Image 3 is the check I want to perform. (to find out which pixels have blue in background and making remaining pixels transparent)
Image 4 is the result image after editing.
I renamed your images according to my way of thinking, so I took this as image.png:
and this as mask.png:
Then I did what I think you want as follows. I wrote it quite verbosely so you can see all the steps along the way:
#!/usr/local/bin/python3
from PIL import Image
import numpy as np
# Open input images
image = Image.open("image.png")
mask = Image.open("mask.png")
# Get dimensions (PIL size is (width, height))
w, h = image.size
# Resize mask to match image, taking care not to introduce new colours (Image.NEAREST)
mask = mask.resize((w, h), Image.NEAREST)
mask.save('mask_resized.png')
# Convert both images to numpy equivalents
npimage = np.array(image)
npmask = np.array(mask)
# Make image transparent where mask is not blue
# Blue pixels in mask seem to show up as RGB(163 204 255)
npimage[:,:,3] = np.where((npmask[:,:,0]<170) & (npmask[:,:,1]<210) & (npmask[:,:,2]>250),255,0).astype(np.uint8)
# Identify grey pixels in image, i.e. R=G=B, and make transparent also
RequalsG=np.where(npimage[:,:,0]==npimage[:,:,1],1,0)
RequalsB=np.where(npimage[:,:,0]==npimage[:,:,2],1,0)
grey=(RequalsG*RequalsB).astype(np.uint8)
npimage[:,:,3] *= 1-grey
# Convert numpy image to PIL image and save
PILrgba=Image.fromarray(npimage)
PILrgba.save("result.png")
And this is the result:
Notes:
a) Your image already has an (unused) alpha channel present.
b) Any lines starting:
npimage[:,:,3] = ...
are just modifying the 4th channel, i.e. the alpha/transparency channel of the image
I can successfully convert a rectangular image into a png with transparent rounded corners like this:
However, when I take this transparent cornered image and I want to use it in another image generated with Pillow, I end up with this:
The transparent corners become black. I've been playing around with this for a while, but I can't find a way to keep the transparent parts of an image from turning black once I place it on another image with Pillow.
Here is the code I use:
mask = Image.open('Test mask.png').convert('L')
im = Image.open('boat.jpg')
im.resize(mask.size)
output = ImageOps.fit(im, mask.size, centering=(0.5, 0.5))
output.putalpha(mask)
output.save('output.png')
im = Image.open('output.png')
image_bg = Image.new('RGBA', (1292,440), (255,255,255,100))
image_fg = im.resize((710, 400), Image.ANTIALIAS)
image_bg.paste(image_fg, (20, 20))
image_bg.save('output2.jpg')
Is there a solution for this? Thanks.
Per some suggestions I exported the 2nd image as a PNG, but then I ended up with an image with holes in it:
Obviously I want the second image to have a consistent white background without holes.
Here is what I actually want to end up with. The orange is only placed there to highlight the image itself. It's a rectangular image with white background, with a picture placed into it with rounded corners.
If you paste an image with transparent pixels onto another image, the transparent pixels are just copied as well. It looks like you only want to paste the non-transparent pixels. In that case, you need a mask for the paste function.
image_bg.paste(image_fg, (20, 20), mask=image_fg)
Note the third argument here. From the documentation:
If a mask is given, this method updates only the regions indicated by
the mask. You can use either "1", "L" or "RGBA" images (in the latter
case, the alpha band is used as mask). Where the mask is 255, the
given image is copied as is. Where the mask is 0, the current value
is preserved. Intermediate values will mix the two images together,
including their alpha channels if they have them.
What we did here is provide an RGBA image as mask, and use the alpha channel as mask.
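Applied to your snippet, a minimal sketch could look like this (I am assuming an opaque white background and saving as PNG, since JPEG cannot store an alpha channel):
from PIL import Image, ImageOps

mask = Image.open('Test mask.png').convert('L')
im = Image.open('boat.jpg')
output = ImageOps.fit(im, mask.size, centering=(0.5, 0.5))
output.putalpha(mask)

image_bg = Image.new('RGBA', (1292, 440), (255, 255, 255, 255))  # opaque white
image_fg = output.resize((710, 400))

# Pass the foreground as its own mask so only its non-transparent pixels are pasted
image_bg.paste(image_fg, (20, 20), mask=image_fg)
image_bg.save('output2.png')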
Is there a way to cut out non-rectangular areas of an image with Python PIL?
e.g. in this picture I want to exclude all black areas as well as the towers, rooftops and poles.
http://img153.imageshack.us/img153/5330/skybig.jpg
I guess the ImagePath module can do that, but furthermore, how can I read the data of e.g. an SVG file and convert it into a path?
Any help will be appreciated.
(My sub-question is presumably the easier task: how do I cut out at least a circle from an image?)
If I understood correctly, you want to make some areas of the image transparent, and those areas have random shapes. The easiest way (that I can think of) is to create a mask and put it into the alpha channel of the image. The code below shows how to do this.
If your question was "How to create a polygon mask", I will redirect you to:
SciPy Create 2D Polygon Mask
and the accepted answer there.
import numpy
from PIL import Image

# read image as RGB and add alpha (transparency)
im = Image.open("lena.png").convert("RGBA")

# convert to numpy (for convenience)
imArray = numpy.asarray(im)

# create mask (zeros + circle with ones)
center = (200, 200)
radius = 100
mask = numpy.zeros((imArray.shape[0], imArray.shape[1]))
for i in range(imArray.shape[0]):
    for j in range(imArray.shape[1]):
        if (i - center[0])**2 + (j - center[1])**2 < radius**2:
            mask[i, j] = 1

# assemble new image (uint8: 0-255)
newImArray = numpy.empty(imArray.shape, dtype='uint8')

# colors (first three channels, RGB)
newImArray[:, :, :3] = imArray[:, :, :3]

# transparency (4th channel)
newImArray[:, :, 3] = mask * 255

# back to Image from numpy
newIm = Image.fromarray(newImArray, "RGBA")
newIm.save("lena3.png")
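As a side note, the nested loop above becomes slow on large images; an equivalent vectorized sketch of the same circular mask with numpy would be:
# Vectorized circular mask: 1 inside the circle, 0 outside
Y, X = numpy.ogrid[:imArray.shape[0], :imArray.shape[1]]
mask = ((Y - center[0])**2 + (X - center[1])**2 < radius**2).astype(numpy.uint8)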
Edit
Actually, I could not resist... the polygon mask solution was so elegant (replace the above circle with this):
from PIL import ImageDraw

# create mask from a polygon (note: PIL size is (width, height), numpy shape is (height, width))
polygon = [(100, 100), (200, 100), (150, 150)]
maskIm = Image.new('L', (imArray.shape[1], imArray.shape[0]), 0)
ImageDraw.Draw(maskIm).polygon(polygon, outline=1, fill=1)
mask = numpy.array(maskIm)
Edit 2
Now that I think of it: if you have a black and white SVG, you could load the SVG directly as a mask (assuming white is your mask). I have no sample SVG images, so I cannot test this, and I am not sure whether PIL can open SVG images.
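PIL/Pillow cannot read SVG files directly, so one option (assuming a separate rasterizer such as cairosvg is available; untested here) is to render the SVG to a PNG first and then load that as the mask:
import cairosvg
from PIL import Image

# Rasterize the SVG at the image's size, then load it as a 0/1 mask (white = keep)
cairosvg.svg2png(url="mask.svg", write_to="mask.png",
                 output_width=imArray.shape[1], output_height=imArray.shape[0])
mask = numpy.array(Image.open("mask.png").convert("L")) // 255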