How to make the background of an image transparent with Wand?

I have an image with a white background, and want to convert the white background to transparent. How can I do this with Wand?
The ImageMagick command to do this is:
convert ~/Desktop/cat_with_white_gb.png -transparent white ~/Desktop/cat_with_transparent_bg.png
I have tried:
import urllib2
from wand.image import Image
from wand.color import Color

fg_url = 'http://i.stack.imgur.com/Mz9y0.jpg'
fg = urllib2.urlopen(fg_url)
with Image(file=fg) as img:
    img.background_color = Color('transparent')
    img.save(filename='test.png')
and
with Image(file=fg) as fg_img:
    with Color('#FFF') as white:
        fg_img.transparent_color(white, 0.0)

The big thing to remember is that JPEG source images will not have an alpha channel. You can add one by setting wand.image.Image.alpha_channel, or simply by switching the image format to one that supports transparency.
from wand.image import Image
from wand.color import Color
with Image(filename="http://i.stack.imgur.com/Mz9y0.jpg") as img:
img.format = 'png'
with Color('#FDFDFD') as white:
twenty_percent = int(65535 * 0.2) # Note: percent must be calculated from Quantum
img.transparent_color(white, alpha=0.0, fuzz=twenty_percent)
img.save(filename="/tmp/Mz9y0.png")
Perhaps the fuzz of 20% is too aggressive in this example.
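If you would rather keep the JPEG input and just give it an alpha channel first (the alpha_channel route mentioned above), a minimal sketch might look like this (untested; the filenames are only placeholders):

from wand.image import Image
from wand.color import Color

with Image(filename='cat_with_white_bg.jpg') as img:
    img.alpha_channel = True  # JPEGs have no alpha channel, so enable one first
    with Color('white') as white:
        img.transparent_color(white, alpha=0.0)
    img.save(filename='cat_with_transparent_bg.png')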

Related

How to convert this ImageMagick command for white background removal to Python Wand module?

I am referring to @fmw42 and his wonderful ImageMagick command to turn a white background into a transparent one. I modified his original command to work with the latest version of ImageMagick.
magick test_imagemagick.jpg -fuzz 25% -fill none -draw "alpha 0,0 floodfill" -channel alpha -blur 0x1 -level 50x100% +channel result.png
This is great on the command line, but I am struggling with understanding how to implement the same in Python Wand.
This is what I have so far, which is not much, because I have no idea how to map the information between the two sets of documentation.
with Image(filename='test_imagemagick.jpg') as img:
    img.fuzz = 0.25 * QUANTUM_RANGE  # 25%
    img.fill_color = 'transparent'
Try this command in Python/Wand:
from wand.image import Image
from wand.drawing import Drawing
from wand.color import Color
from wand.display import display
with Image(filename='logo:') as img:
    with img.clone() as copied:
        copied.fuzz = 0.25 * img.quantum_range
        with Drawing() as draw:
            draw.fill_color = Color('transparent')
            draw.matte(x=0.0, y=0.0, paint_method='floodfill')
            draw(copied)
        copied.alpha_channel = 'extract'
        copied.blur(radius=0.0, sigma=2)
        copied.level(black=0.5, white=1, gamma=1.0)
        img.composite(copied, left=0, top=0, operator='copy_opacity')
    img.format = 'png'
    display(img)
    img.save(filename='logo_transparent_antialiased.png')
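For reference, the mapping back to the original command: -draw "alpha 0,0 floodfill" becomes Drawing.matte with paint_method='floodfill'; the -channel alpha -blur 0x1 -level 50x100% +channel part becomes extracting the alpha channel on the clone, blurring it, and re-leveling it; and the final copy_opacity composite copies that smoothed alpha back onto the original image, antialiasing the cutout edge.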

Convert full color image to three color image for e-ink display

I'd like to be able to automagically convert full color images down to three colors (black / red / white) for an e-ink display (Waveshare 7.5"). Right now I'm just letting the screen handle it, but, as expected, complex images get washed out.
Are there any algorithms or filters I could apply to make things a bit more visible?
Right now I'm using Python, but I'm not averse to other languages/environments if necessary.
Good image:
Washed out image:
You could make your own palette of 3 acceptable colours like this:
magick xc:red xc:white xc:black +append palette.gif
Then you can apply it to your image like this:
magick input.png +dither -remap palette.gif result.png
If you want to send it straight to the framebuffer and it supports RGB888, you can try running something like this:
magick input.png +dither -remap palette.gif -depth 8 RGB:/dev/fb0
Just adding a bit to Mark Setchell's answer. For printing, you might be better off dithering your 3 colors. So here is your image with and without dithering using ImageMagick 7. If using ImageMagick 6, replace magick with convert.
Input:
Create 3 color palette:
magick xc:red xc:white xc:black +append palette.gif
With dithering (the default is Floyd-Steinberg):
magick input.png -remap palette.gif result.png
Without dithering:
magick input.png -dither none -remap palette.gif result2.png
If you want Python, then you could try Python Wand. It is based upon ImageMagick.
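For example, a minimal Wand sketch of the remap step might look like this (untested; it reuses the palette.gif built by the magick command above):

from wand.image import Image

with Image(filename='input.png') as img, Image(filename='palette.gif') as palette:
    img.remap(affinity=palette, method='floyd_steinberg')
    img.save(filename='result.png')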
ADDITION:
To separate the red and black into two images, each represented as black with the rest white, you can do the following and save as BMP, as you wanted in your comments. (You can do this with or without the dithering from above, as you desire.)
magick result.png -color-threshold "red-red" -negate red.bmp
magick result.png -color-threshold "black-black" -negate black.bmp
Red:
Black:
You appear to be choosing the nearest color for each pixel. See if a dithering algorithm works better for your purposes. Generally, dithering algorithms take into account neighboring pixels when determining how to color a given pixel.
EDIT: In the case of PIL (the Python Imaging Library), it doesn't seem trivial to dither to an arbitrary set of three colors, at least as of 2012.
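That said, recent Pillow versions can dither to a fixed palette via Image.quantize; a rough sketch (assumes Pillow 9.1+ for the Dither enum, and an input.png) might be:

from PIL import Image

# Palette image holding red, white, black (padded to 256 entries)
palette_img = Image.new('P', (1, 1))
palette_img.putpalette([255, 0, 0, 255, 255, 255, 0, 0, 0] + [0, 0, 0] * 253)

src = Image.open('input.png').convert('RGB')
# dithering to the palette uses Floyd-Steinberg here
out = src.quantize(palette=palette_img, dither=Image.Dither.FLOYDSTEINBERG)
out.convert('RGB').save('result.png')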
Just adding a bit to Mark and Fred's answers. I'm using ImageMagick on Raspberry Pi, which is version < 7 and uses "convert". Some of the commands Fred had suggested didn't work for that version. Here's what I did to resize, remap and dither, and split the image into white-and-black and white-and-red sub-images.
# Create palette with red, white and black colors
convert xc:red xc:white xc:black +append palette.gif
# Resize input file into size suitable for ePaper Display - 264x176
# Converting to BMP.
# Note: JPG is a lossy format, so remapping and then working
# with it reintroduces colors outside the palette - we just
# convert to BMP and work with that instead
convert $1 -resize 264x176^ -gravity center -extent 264x176 resized.bmp
# Remap the resized image into the colors of the palette using
# Floyd Steinberg dithering (default)
# Resulting image will have only 3 colors - red, white and black
convert resized.bmp -remap palette.gif result.bmp
# Replace all the red pixels with white - this
# isolates the white and black pixels - i.e the "black"
# part of image to be rendered on the ePaper Display
convert -fill white -opaque red result.bmp result_black.bmp
# Similarly, Replace all the black pixels with white - this
# isolates the white and red pixels - i.e the "red"
# part of image to be rendered on the ePaper Display
convert -fill white -opaque black result.bmp result_red.bmp
I've also implemented it using Python Wand, a Python layer over ImageMagick:
import io
import traceback

from PIL import Image
from wand.image import Image as WandImage

# This function takes as input a filename for an image
# It resizes the image into the dimensions supported by the ePaper Display
# It then remaps the image into a tri-color scheme using a palette (affinity)
# for remapping, and the Floyd Steinberg algorithm for dithering
# It then splits the image into two component parts:
# a white and black image (with the red pixels removed)
# a white and red image (with the black pixels removed)
# It then converts these into PIL Images and returns them
# The PIL Images can be used by the ePaper library to display
def getImagesToDisplay(filename):
    print(filename)
    red_image = None
    black_image = None
    try:
        with WandImage(filename=filename) as img:
            img.resize(264, 176)
            with WandImage() as palette:
                with WandImage(width=1, height=1, pseudo="xc:red") as red:
                    palette.sequence.append(red)
                with WandImage(width=1, height=1, pseudo="xc:black") as black:
                    palette.sequence.append(black)
                with WandImage(width=1, height=1, pseudo="xc:white") as white:
                    palette.sequence.append(white)
                palette.concat()
                img.remap(affinity=palette, method='floyd_steinberg')
            red = img.clone()
            black = img.clone()
            red.opaque_paint(target='black', fill='white')
            # This is not necessary - making the white and red image
            # white and black instead - left here FYI
            # red.opaque_paint(target='red', fill='black')
            black.opaque_paint(target='red', fill='white')
            red_image = Image.open(io.BytesIO(red.make_blob("bmp")))
            black_image = Image.open(io.BytesIO(black.make_blob("bmp")))
    except Exception:
        print('traceback.format_exc():\n%s' % traceback.format_exc())
    return (red_image, black_image)
Here's my writeup on my project on Hackster (including full source code links) - https://www.hackster.io/sridhar-rajagopal/photostax-digital-epaper-photo-frame-84d4ed
I've attributed both Mark and Fred there - thank you!

How can I insert Monospace fonts into an image with opencv?

Currently, I am able to insert text in HERSHEY fonts into images with the OpenCV API (putText), but OpenCV does not seem to support any monospace font.
I was wondering how I can insert some monospace or fixed-pitch text into the image.
You could use PIL/Pillow for that aspect quite easily. OpenCV images are numpy arrays, so you can make a Pillow Image from an OpenCV image with:
PilImage = Image.fromarray(OpenCVimage)
Then you can draw with a monospaced font using code in my answer here. You only need the 3 lines after the comment "Get a drawing context".
Then you can convert back to OpenCV image with:
OpenCVimage = np.array(PilImage)
That might look like this:
#!/usr/local/bin/python3
from PIL import Image, ImageFont, ImageDraw
import numpy as np
import cv2
# Open image with OpenCV
im_o = cv2.imread('start.png')
# Make into PIL Image
im_p = Image.fromarray(im_o)
# Get a drawing context
draw = ImageDraw.Draw(im_p)
monospace = ImageFont.truetype("/Library/Fonts/Andale Mono.ttf",32)
draw.text((40, 80),"Hopefully monospaced",(255,255,255),font=monospace)
# Convert back to OpenCV image and save
result_o = np.array(im_p)
cv2.imwrite('result.png', result_o)
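One thing to watch: cv2.imread returns pixels in BGR order, so non-grey colours drawn through Pillow in this first example would come out with red and blue swapped; it works here because white is symmetric across channels. The second example below converts with cv2.cvtColor to handle this.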
Alternatively, you could have a function generate a lump of canvas itself, write your text on it, and then splice it into your OpenCV image wherever you want. Something along these lines - though I have no idea of what flexibility you would require so I have not parameterised everything:
#!/usr/local/bin/python3
from PIL import Image, ImageFont, ImageDraw, ImageColor
import numpy as np
import cv2

def GenerateText(size, fontsize, bg, fg, text):
    """Generate a piece of canvas and draw text on it"""
    canvas = Image.new('RGB', size, bg)
    # Get a drawing context
    draw = ImageDraw.Draw(canvas)
    monospace = ImageFont.truetype("/Library/Fonts/Andale Mono.ttf", fontsize)
    draw.text((10, 10), text, fg, font=monospace)
    # Change to BGR order for OpenCV's peculiarities
    return cv2.cvtColor(np.array(canvas), cv2.COLOR_RGB2BGR)

# Open image with OpenCV
im_o = cv2.imread('start.png')

# Try some tests
w,h = 350,50
a,b = 20, 80
text = GenerateText((w,h), 32, 'black', 'magenta', "Magenta on black")
im_o[a:a+h, b:b+w] = text

w,h = 200,40
a,b = 120, 280
text = GenerateText((w,h), 18, 'cyan', 'blue', "Blue on cyan")
im_o[a:a+h, b:b+w] = text

cv2.imwrite('result.png', im_o)
Keywords: OpenCV, Python, Numpy, PIL, Pillow, image, image processing, monospace, font, fonts, fixed, fixed width, courier, HERSHEY.

Python: Greyscale image: Make everything white, except for black pixels

I tried to open (already greyscale) images and change all non-black pixels to white pixels. I implemented the following code:
from scipy.misc import fromimage, toimage
from PIL import Image
import numpy as np
in_path = 'E:\\in.png'
out_path = 'E:\\out.png'
# Open gray-scale image
img = Image.open(in_path).convert('L')
# Just for testing: The image is saved correct
#img.save(out_path)
# Make all non-black colors white
imp_arr = fromimage(img)
imp_arr = (np.ceil(imp_arr / 255.0) * 255.0).astype(int)
# Save the image
img = toimage(imp_arr, mode='L')
img.save(out_path)
The calculation to make all pixels white except for the black ones is quite simple and very fast. For my use case it is especially important that it runs fast, which is why I used numpy. For some reason, this code does not work with all images.
An example: The following image is the input.
It contains a grey rectangle and a white border. The output should be a completely white image, but for some reason the output is a black image:
With some other images it works quite well. What am I doing wrong? I think floating point shouldn't be a big issue here, because this code does not require high calculation accuracy to work.
Thank you very much
toimage expects a byte array, so convert to uint8 not int:
imp_arr = (np.ceil(imp_arr / 255.0) * 255.0).astype('uint8')
It seems to work for int if there is a mix of black and white pixels in the output, but not if they are all white. I can't find any explanation for this in the documentation.
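As an aside, a simpler sketch that sidesteps floating point (and the int/uint8 pitfall) is a plain numpy threshold; this assumes the same greyscale input as the question:

import numpy as np
from PIL import Image

img = Image.open('E:\\in.png').convert('L')
arr = np.array(img)
# Every non-black pixel becomes white; pure black stays black
arr = np.where(arr > 0, 255, 0).astype(np.uint8)
Image.fromarray(arr, mode='L').save('E:\\out.png')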

Drawing semi-transparent polygons in PIL

How do you draw semi-transparent polygons using the Python Imaging Library?
Can you draw the polygon on a separate RGBA image then use the Image.paste(image, box, mask) method?
Edit: This works.
from PIL import Image
from PIL import ImageDraw
back = Image.new('RGBA', (512,512), (255,0,0,0))
poly = Image.new('RGBA', (512,512))
pdraw = ImageDraw.Draw(poly)
pdraw.polygon([(128,128),(384,384),(128,384),(384,128)],
              fill=(255,255,255,127), outline=(255,255,255,255))
back.paste(poly,mask=poly)
back.show()
http://effbot.org/imagingbook/image.htm#image-paste-method
I think @Nick T's answer is good, but you need to be careful when using his code as written with a very large background image, especially in the case that you may be annotating several polygons on said image. This is something I do when processing huge satellite images with some object detection code and annotating the detections using a transparent rectangle. To make the code efficient no matter the size of the background image, I make the following suggestion.
I would modify the solution to specify that the polygon image that you will paste be only as large as required to hold the polygon, not the same size as the back image. The coordinates of the polygon are specified with respect to the local bounding box, not the global image coordinates. Then you paste the polygon image at the offset in the larger background image.
from PIL import Image
from PIL import ImageDraw

img_size = (512,512)
poly_size = (256,256)
poly_offset = (128,128)  # location in larger image
back = Image.new('RGBA', img_size, (255,0,0,0))
poly = Image.new('RGBA', poly_size)
pdraw = ImageDraw.Draw(poly)
pdraw.polygon([(0,0), (256,256), (0,256), (256,0)],
              fill=(255,255,255,127), outline=(255,255,255,255))
back.paste(poly, poly_offset, mask=poly)
back.show()
Using the Image.paste(image, box, mask) method will convert the alpha channel in the pasted area of the background image into the corresponding transparency value of the polygon image.
The Image.alpha_composite(im1,im2) method utilizes the alpha channel of the "pasted" image, and will not turn the background transparent. However, this method again needs two equally sized images.
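For completeness, a minimal sketch of the alpha_composite route, reusing back and poly from the accepted answer (both RGBA and the same size):

from PIL import Image

# poly is composited over back using poly's alpha channel;
# back's own alpha is preserved rather than overwritten
result = Image.alpha_composite(back, poly)
result.show()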
