Image creation and fonts in Python

I have created a set of images in Python using PIL, and I've used textwrap to put text onto them, but they're not quite right. Below are three examples of the images I've created.
These three images have different widths, but I'd like them all to have the same width; height isn't a concern and may vary from image to image, as the width is the only thing that must stay consistent. I've also used UTF-8 encoding to get the text onto the images, but I'd like the font to look more like the following.
Also shown in the image above is how those boxes are stacked; that is how I'd like my final product to look. Rather than three separate images of bordered text, I'd like one single image containing those bordered boxes of text. Here is my current code:
import textwrap

from PIL import Image, ImageDraw, ImageFont, ImageOps

for match in find_matches(text=fullText):
    ct += 1
    match_words = match.split(" ")
    match = " ".join(match_words[:-1])
    print(match)
    W, H = 300, 300
    base = Image.new("RGB", (W, H), (255, 255, 255))
    draw = ImageDraw.Draw(base)
    font = ImageFont.load_default()
    current_h, pad = 50, 5
    for key in textwrap.wrap(match, width=50):
        line = key.encode("ascii")
        w, h = draw.textsize(line, font=font)
        draw.text(((W - w) / 2, current_h), line, (0, 0, 0), font=font)
        current_h += h + pad
    draw.text((W / 2, current_h), str(ct).encode("utf-8"), (0, 0, 0), font=font)
    for count, matches in enumerate(match):
        base.save(f"{ct}C.png")
        bbox = ImageOps.invert(base).getbbox()
        trim = base.crop(bbox)
        patent = ImageOps.expand(trim, border=5, fill=(255, 255, 255))
        patent = ImageOps.expand(patent, border=3, fill=(0, 0, 0))
        patent.save(f"{ct}C.png")
        p_w, p_h = patent.size
        result = Image.open(result_fpath, "r")
        result.paste(patent)
        result.save(result_fpath)
Finally, this has to be an automated process. For stacking the boxes into a single image, I was thinking of a for-loop that takes the created images and pastes them into a result image sized to match the first pasted image, resizing appropriately for each subsequent bordered box of text. I'd greatly appreciate any help with this.
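For the stacking step described in the previous paragraph, a minimal Pillow sketch might look like the following; the file names, padding and background colour are illustrative assumptions, not part of the original code:

from PIL import Image

def stack_images(paths, pad=10, background=(255, 255, 255)):
    # Paste same-width boxes under each other onto one white canvas
    boxes = [Image.open(p) for p in paths]
    width = max(im.width for im in boxes)
    height = sum(im.height for im in boxes) + pad * (len(boxes) + 1)
    result = Image.new("RGB", (width, height), background)
    y = pad
    for im in boxes:
        result.paste(im, ((width - im.width) // 2, y))
        y += im.height + pad
    return result

stack_images(["1C.png", "2C.png", "3C.png"]).save("stacked.png")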

I find this sort of thing much easier with ImageMagick, for which there are decent bindings available with wand.
Here's how you can do one image, just at the command-line in Terminal, showing the various parts in different colours so you can see what affects what:
magick -background yellow -gravity center -pointsize 24 -size 400x caption:"Detecting, by the component, that a replacement component has been added in the transport\n246C" -bordercolor magenta -border 10 -bordercolor cyan -border 5 result.png
And here's how you can do a few in one go:
magick -background white -gravity center -pointsize 24 -size 400x -bordercolor black \
\( caption:"Detecting, by the component, that a replacement component has been added in the transport\n246C" -bordercolor black -border 5 -bordercolor white -border 5 \) \
\( caption:"Detecting, by the component, that another component has been removed\n246D" -bordercolor black -border 5 -bordercolor white -border 5 \) \
\( caption:"Detecting, by any means, that another component has been replaced\n247K" -bordercolor black -border 5 -bordercolor white -border 5 \) \
-append result.png
Of course you can change the fonts, change the colours, read the captions from a file, use Unicode, space differently and/or do it all in Python with very similar-looking code - here is a link to an answer showing the approximate technique in wand in Python.
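For reference, here is a rough wand sketch of the same idea; it is untested and makes a few assumptions: wand's pseudo-image constructor needs both a width and a height, so each caption box has a fixed size and ImageMagick scales the text to fit, which differs slightly from the auto-height caption: used on the command line above.

from wand.image import Image
from wand.color import Color

captions = [
    "Detecting, by the component, that a replacement component has been added in the transport\n246C",
    "Detecting, by the component, that another component has been removed\n246D",
]

with Image() as result:
    for text in captions:
        # caption: word-wraps the text to fit the box
        with Image(width=400, height=150, pseudo='caption:' + text) as box:
            box.border(Color('black'), 5, 5)   # inner black frame, like -border 5
            box.border(Color('white'), 5, 5)   # outer white margin
            result.sequence.append(box)
    result.concat(stacked=True)                # vertical stack, like -append
    result.save(filename='result.png')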

Related

Extracting only specific color from image with scanner artifacts

I have the following problem:
I want to extract only the color of a blue pen from scanned images that also contain grayscale and black printed areas on a white page background.
I'm okay with disregarding any kind of grayscale (not colored) pixel values and only keeping the blue parts, there won't be any dominant color other than blue on the images.
It sounds like a simple task, but the problem is that the scanning process leaves colored pixels, including blue ones, all over the image, even in the grayscale or black parts, so I'm not sure how to isolate those parts and keep only the blue ones. Here is a close-up to show what I mean:
Here is what an image would look like for reference:
I would like the output to be a new image, containing only the parts drawn / written in blue pen, in this case the drawing of the hedgehog / eye.
So I've tried to isolate an HSV range for blue-ish colors in the image using this code:
import cv2 as cv
import numpy as np

img = cv.imread("./data/scan_611a720bcd70bafe7beb502d.jpg")
img_hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
# accepted color range for blue pen
lower_blue = np.array([90, 35, 140])
upper_blue = np.array([150, 255, 255])
# preparing the mask to overlay
mask = cv.inRange(img_hsv, lower_blue, upper_blue)
inverted_mask = cv.bitwise_not(mask)
mask_blur = cv.GaussianBlur(inverted_mask, (5, 5), 0)
ret, mask_thresh = cv.threshold(mask_blur, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
# The black region in the mask has the value of 0,
# so when multiplied with original image removes all non-blue regions
result = cv.bitwise_and(img, img, mask=mask)
cv.imshow("Result", mask_thresh)
k = cv.waitKey(0)
However the result is this:
Many parts of the picture that are drawn in black, such as the cloud, are not removed because, as mentioned, they contain blue / colored pixels due to the scanning process.
Is there any method that would allow for a clean isolation of those blue parts of the image even with those artifacts present?
The solution would need to work for any kind of image like this, the one given is just an example, but as mentioned the only color present would be the blue pen apart from the grey/black areas.
Maybe try the opposite: search for the black parts first, then do some dilation/erosion around that black mask and remove everything it covers before you search for the blue. The "main" color in the cloud is still black, so you can work from that.
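A rough OpenCV sketch of that idea; the dark-value threshold of 100, the 7x7 kernel and the iteration count are guesses that would need tuning for real scans:

import cv2 as cv
import numpy as np

img = cv.imread("scan.jpg")
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)

# Dark (black/grey) pixels: low value channel, any hue
dark = cv.inRange(hsv, np.array([0, 0, 0]), np.array([179, 255, 100]))
# Grow the dark mask so the coloured fringes around black strokes are covered
dark = cv.dilate(dark, np.ones((7, 7), np.uint8), iterations=2)

# Blue-pen pixels, same range as in the question
blue = cv.inRange(hsv, np.array([90, 35, 140]), np.array([150, 255, 255]))
# Keep only blue pixels that are not in or next to the dark regions
blue = cv.bitwise_and(blue, cv.bitwise_not(dark))

result = cv.bitwise_and(img, img, mask=blue)
cv.imwrite("blue_only.png", result)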
You should realign the color planes of your scan. Then you're at least rid of those color fringes. I'd recommend scanning a sheet of graph paper to calibrate.
This is done using OpenCV's findTransformECC.
Complete examples can be found here:
https://docs.opencv.org/master/dd/d93/samples_2cpp_2image_alignment_8cpp-example.html
https://learnopencv.com/image-alignment-ecc-in-opencv-c-python/
And here's specific code to align the color planes of the picture given in the question:
https://gist.github.com/crackwitz/b8867b46f320eae17f4b2684416c79ea
(all it does is split the color planes, call findTransformECC and warpPerspective, merge the color planes)
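A minimal sketch of that plane-alignment idea, assuming OpenCV 4.1 or later; the linked gist does essentially the same thing, so treat this as an approximation of it, not a copy:

import cv2 as cv
import numpy as np

img = cv.imread("scan.jpg")
b, g, r = cv.split(img)

def align_to(reference, moving):
    # Estimate a homography that maps `moving` onto `reference` with ECC
    warp = np.eye(3, 3, dtype=np.float32)
    criteria = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv.findTransformECC(reference, moving, warp,
                                  cv.MOTION_HOMOGRAPHY, criteria, None, 5)
    h, w = reference.shape
    return cv.warpPerspective(moving, warp, (w, h),
                              flags=cv.INTER_LINEAR + cv.WARP_INVERSE_MAP)

# Align the green and red planes to the blue plane, then merge back
aligned = cv.merge([b, align_to(b, g), align_to(b, r)])
cv.imwrite("aligned.png", aligned)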

Convert full color image to three color image for e-ink display

I'd like to be able to automagically convert full color images down to three color (black / red / white) for an e-ink display (Waveshare 7.5"). Right now I'm just letting the screen handle it, but as expected complex images get washed out.
Are there any algorithms or filters I could apply to make things a bit more visible?
Right now I'm using Python, but I'm not averse to other languages/environments if necessary.
Good image:
Washed out image:
You could make your own palette of 3 acceptable colours like this:
magick xc:red xc:white xc:black +append palette.gif
Then you can apply it to your image like this:
magick input.png +dither -remap palette.gif result.png
If you want to send it straight to the frame buffer and it supports RB888, you can try running something like this:
magick input.png +dither -remap palette.gif -depth 8 RGB:/dev/fb0
Just adding a bit to Mark Setchell's answer. For printing, you might do better by dithering your 3 colors. So here is your image with and without dithering, using ImageMagick 7. If using ImageMagick 6, replace magick with convert.
Input:
Create 3 color palette:
magick xc:red xc:white xc:black +append palette.gif
With dithering (the default is Floyd-Steinberg):
magick input.png -remap palette.gif result.png
Without dithering:
magick input.png -dither none -remap palette.gif result2.png
If you want Python, then you could try Python Wand. It is based upon Imagemagick.
ADDITION:
To separate the red and black into two images, each representing its color as black and everything else as white, you can do the following and save as BMP as you asked in your comments. (You can do this with or without the dithering from above, as you prefer.)
magick result.png -color-threshold "red-red" -negate red.bmp
magick result.png -color-threshold "black-black" -negate black.bmp
Red:
Black:
You appear to be choosing the nearest color for each pixel. See if a dithering algorithm works better for your purposes. Generally, dithering algorithms take into account neighboring pixels when determining how to color a given pixel.
EDIT: In the case of PIL (the Python Imaging Library), it doesn't seem trivial to dither to an arbitrary set of three colors, at least as of 2012.
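For completeness, newer Pillow releases can remap to a fixed palette with dithering via Image.quantize; a minimal sketch, assuming a reasonably recent Pillow (the Image.Dither enum spelling needs Pillow 9.1+; older versions use Image.FLOYDSTEINBERG) and illustrative file names:

from PIL import Image

# Build a palette image whose first three entries are red, white, black
palette = Image.new("P", (1, 1))
palette.putpalette([255, 0, 0, 255, 255, 255, 0, 0, 0] + [0, 0, 0] * 253)

img = Image.open("input.png").convert("RGB")
# Remap with Floyd-Steinberg dithering; pass dither=Image.Dither.NONE to disable
out = img.quantize(palette=palette, dither=Image.Dither.FLOYDSTEINBERG)
out.convert("RGB").save("result.png")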
Just adding a bit to Mark and Fred's answers. I'm using ImageMagick on Raspberry Pi, which is version < 7 and uses "convert". Some of the commands Fred had suggested didn't work for that version. Here's what I did to resize, remap and dither, and split the image into white-and-black and white-and-red sub-images.
# Create palette with red, white and black colors
convert xc:red xc:white xc:black +append palette.gif
# Resize input file into size suitable for ePaper Display - 264x176
# Converting to BMP.
# Note, if working with JPG, it is a lossy
# format and subsequently remapping and working with it results
# in the color palette getting overwritten - we just convert to BMP
# and work with that instead
convert $1 -resize 264x176^ -gravity center -extent 264x176 resized.bmp
# Remap the resized image into the colors of the palette using
# Floyd Steinberg dithering (default)
# Resulting image will have only 3 colors - red, white and black
convert resized.bmp -remap palette.gif result.bmp
# Replace all the red pixels with white - this
# isolates the white and black pixels - i.e the "black"
# part of image to be rendered on the ePaper Display
convert -fill white -opaque red result.bmp result_black.bmp
# Similarly, Replace all the black pixels with white - this
# isolates the white and red pixels - i.e the "red"
# part of image to be rendered on the ePaper Display
convert -fill white -opaque black result.bmp result_red.bmp
I've also implemented this using Python Wand, a Python layer over ImageMagick:
import io
import traceback

from PIL import Image
from wand.image import Image as WandImage

# This function takes as input a filename for an image
# It resizes the image into the dimensions supported by the ePaper Display
# It then remaps the image into a tri-color scheme using a palette (affinity)
# for remapping, and the Floyd Steinberg algorithm for dithering
# It then splits the image into two component parts:
# a white and black image (with the red pixels removed)
# a white and red image (with the black pixels removed)
# It then converts these into PIL Images and returns them
# The PIL Images can be used by the ePaper library to display
def getImagesToDisplay(filename):
    print(filename)
    red_image = None
    black_image = None
    try:
        with WandImage(filename=filename) as img:
            img.resize(264, 176)
            with WandImage() as palette:
                with WandImage(width=1, height=1, pseudo="xc:red") as red:
                    palette.sequence.append(red)
                with WandImage(width=1, height=1, pseudo="xc:black") as black:
                    palette.sequence.append(black)
                with WandImage(width=1, height=1, pseudo="xc:white") as white:
                    palette.sequence.append(white)
                palette.concat()
                img.remap(affinity=palette, method='floyd_steinberg')
            red = img.clone()
            black = img.clone()
            red.opaque_paint(target='black', fill='white')
            # This is not necessary - making the white and red image
            # white and black instead - left here FYI
            # red.opaque_paint(target='red', fill='black')
            black.opaque_paint(target='red', fill='white')
            red_image = Image.open(io.BytesIO(red.make_blob("bmp")))
            black_image = Image.open(io.BytesIO(black.make_blob("bmp")))
    except Exception as ex:
        print('traceback.format_exc():\n%s', traceback.format_exc())
    return (red_image, black_image)
Here's my writeup on my project on Hackster (including full source code links) - https://www.hackster.io/sridhar-rajagopal/photostax-digital-epaper-photo-frame-84d4ed
I've attributed both Mark and Fred there - thank you!

Python PIL decrease letter spacing

How can I decrease the letter spacing of this text? I want to make the text more squished together by a few pixels.
I'm trying to make a transparent image, with text on it, that I want pushed together. Like this, but transparent:
from PIL import Image, ImageDraw, ImageFont
(W, H) = (140, 40)
#create transparent image
image = Image.new("RGBA", (140, 40), (0,0,0,0))
#load font
font = ImageFont.truetype("Arial.ttf", 30)
draw = ImageDraw.Draw(image)
text = "kpy7n"
w,h = font.getsize(text)
draw.text(((W-w)/2,(H-h)/2), text, font=font, fill=0)
image.save("transparent-image.png")
This function will automate all the pain for you. It was written to emulate Photoshop values and can render leading (the space between lines) as well as tracking (the space between characters).
def draw_text_psd_style(draw, xy, text, font, tracking=0, leading=None, **kwargs):
    """
    usage: draw_text_psd_style(draw, (0, 0), "Test",
                               tracking=-0.1, leading=32, fill="Blue")

    Leading is measured from the baseline of one line of text to the
    baseline of the line above it. Baseline is the invisible line on which most
    letters—that is, those without descenders—sit. The default auto-leading
    option sets the leading at 120% of the type size (for example, 12‑point
    leading for 10‑point type).

    Tracking is measured in 1/1000 em, a unit of measure that is relative to
    the current type size. In a 6 point font, 1 em equals 6 points;
    in a 10 point font, 1 em equals 10 points. Tracking
    is strictly proportional to the current type size.
    """
    def stutter_chunk(lst, size, overlap=0, default=None):
        for i in range(0, len(lst), size - overlap):
            r = list(lst[i:i + size])
            while len(r) < size:
                r.append(default)
            yield r

    x, y = xy
    font_size = font.size
    lines = text.splitlines()
    if leading is None:
        leading = font.size * 1.2
    for line in lines:
        for a, b in stutter_chunk(line, 2, 1, ' '):
            w = font.getlength(a + b) - font.getlength(b)
            # dprint("[debug] kwargs")
            print("[debug] kwargs:{}".format(kwargs))
            draw.text((x, y), a, font=font, **kwargs)
            x += w + (tracking / 1000) * font_size
        y += leading
        x = xy[0]
It takes a font and a draw object, which can be obtained via:
font = ImageFont.truetype("Arial.ttf", 30)
draw = ImageDraw.Draw(image)
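A minimal usage sketch, with illustrative coordinates and a tracking value of -80 (i.e. -0.08 em); note that the function relies on font.getlength, which needs Pillow 8.0 or later:

from PIL import Image, ImageDraw, ImageFont

image = Image.new("RGBA", (140, 40), (0, 0, 0, 0))
draw = ImageDraw.Draw(image)
font = ImageFont.truetype("Arial.ttf", 30)

# Negative tracking squeezes the glyphs together
draw_text_psd_style(draw, (5, 5), "kpy7n", font, tracking=-80, fill="black")
image.save("transparent-image.png")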
You have to draw the text character by character, changing the x coordinate before drawing the next character.
Example of code:
w,h = font.getsize("k")
draw.text(((W,H),"K", font=font, fill=0)
draw.text(((W+w)*0.7,H),"p", font=font, fill=0)
draw.text(((W+w*2)*0.7,H),"y", font=font, fill=0)
draw.text(((W+w*3)*1,H),"7", font=font, fill=0)
draw.text(((W+w*4)*0.8,H),"n", font=font, fill=0)
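A tidier way to express the same per-character idea is to advance x by a fraction of each glyph's width; the 0.85 squeeze factor and the coordinates below are illustrative guesses, and font.getlength needs Pillow 8.0 or later:

from PIL import Image, ImageDraw, ImageFont

image = Image.new("RGBA", (140, 40), (0, 0, 0, 0))
draw = ImageDraw.Draw(image)
font = ImageFont.truetype("Arial.ttf", 30)

x, y = 5, 5
for ch in "kpy7n":
    draw.text((x, y), ch, font=font, fill="black")
    # Advance by less than the full glyph width to squeeze the letters together
    x += font.getlength(ch) * 0.85
image.save("squished.png")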
You can do this by changing the kerning - I am not sure how to do that with PIL at the moment, but it is possible with ImageMagick in the Terminal and with Python using wand which is a Python binding to ImageMagick.
First, in the Terminal - look at the parameter -kerning which is first minus three then plus three:
magick -size 200x80 xc:black -gravity center -font "Arial Bold.ttf" -pointsize 50 -kerning -3 -fill white -draw "text 0,0 'kpy7n'" k-3.png
magick -size 200x80 xc:black -gravity center -font "Arial Bold.ttf" -pointsize 50 -kerning 3 -fill white -draw "text 0,0 'kpy7n'" k+3.png
And, somewhat similarly in Python:
#!/usr/bin/env python3
# Needed this on macOS Monterey:
# export WAND_MAGICK_LIBRARY_SUFFIX="-7.Q16HDRI"
# export MAGICK_HOME=/opt/homebrew

from wand.image import Image
from wand.drawing import Drawing
from wand.font import Font

text = "kpy7n"

# Create a black canvas 400x120
with Image(width=400, height=120, pseudo='xc:black') as image:
    with Drawing() as draw:
        # Draw once in yellow with positive kerning
        draw.font_size = 50
        draw.font = 'Arial Bold.ttf'
        draw.fill_color = 'yellow'
        draw.text_kerning = 3.0
        draw.text(10, 80, text)
        draw(image)
        # Draw again in magenta with negative kerning
        draw.fill_color = 'magenta'
        draw.text_kerning = -3.0
        draw.text(200, 80, text)
        draw(image)
    image.save(filename='result.png')

Find edges (border of rectangle) inside an image

I have an image of a sticky note on a background (say a wall, or a laptop) and I want to detect the edges of the sticky note (rough detection also works fine) so that I can run a crop on it.
I plan on using ImageMagick for the actual cropping, but am stuck on detecting the edges.
Ideally, my output should give me 4 coordinates for the 4 border points so I can run my crop on it.
How should I proceed with this?
You can do that with ImageMagick.
There are different IM methods one could come up with. Here is the first algorithm that came to mind. It assumes the "sticky notes" are not tilted or rotated on the larger image:
First stage: use canny edge detection to reveal the edges of the sticky note.
Second stage: determine the coordinates of the edges.
Canny Edge Detection
This command will create a black+white image depicting all edges in the original image:
convert \
http://i.stack.imgur.com/SxrwG.png \
-canny 0x1+10%+30% \
canny-edges.png
Determine Coordinates of Edges
Assume the image is sized XxY pixels. You can then resize it into a single 1xY column and a single Xx1 row of pixels, where each pixel's color value is the average of all the original pixels that were in the same row (for the column image) or the same column (for the row image).
As an example which can be seen below, I'll first resize the new canny-edges.png to 4xY and Xx4 images:
identify -format " %W x %H\n" canny-edges.png
400x300
convert canny-edges.png -resize 400x4\! canny-4cols.png
convert canny-edges.png -resize 4x300\! canny-4rows.png
canny-4cols.png
canny-4rows.png
Now that the previous images visualized what the compression-resizing of an image into a few columns or rows of pixels will achieve, let's do it with a single column and a single row. At the same time we'll change the output format to text, not PNG, in order to get the coordinates of these pixels which are white:
convert canny-edges.png -resize 400x1\! canny-1col.txt
convert canny-edges.png -resize 1x300\! canny-1row.txt
Here is part of the output from canny-1col.txt:
# ImageMagick pixel enumeration: 400,1,255,gray
0,0: (0,0,0) #000000 gray(0)
1,0: (0,0,0) #000000 gray(0)
2,0: (0,0,0) #000000 gray(0)
[....]
73,0: (0,0,0) #000000 gray(0)
74,0: (0,0,0) #000000 gray(0)
75,0: (10,10,10) #0A0A0A gray(10)
76,0: (159,159,159) #9F9F9F gray(159)
77,0: (21,21,21) #151515 gray(21)
78,0: (156,156,156) #9C9C9C gray(156)
79,0: (14,14,14) #0E0E0E gray(14)
80,0: (3,3,3) #030303 gray(3)
81,0: (3,3,3) #030303 gray(3)
[....]
162,0: (3,3,3) #030303 gray(3)
163,0: (4,4,4) #040404 gray(4)
164,0: (10,10,10) #0A0A0A gray(10)
165,0: (7,7,7) #070707 gray(7)
166,0: (8,8,8) #080808 gray(8)
167,0: (8,8,8) #080808 gray(8)
168,0: (8,8,8) #080808 gray(8)
169,0: (9,9,9) #090909 gray(9)
170,0: (7,7,7) #070707 gray(7)
171,0: (10,10,10) #0A0A0A gray(10)
172,0: (5,5,5) #050505 gray(5)
173,0: (13,13,13) #0D0D0D gray(13)
174,0: (6,6,6) #060606 gray(6)
175,0: (10,10,10) #0A0A0A gray(10)
176,0: (10,10,10) #0A0A0A gray(10)
177,0: (7,7,7) #070707 gray(7)
178,0: (8,8,8) #080808 gray(8)
[....]
319,0: (3,3,3) #030303 gray(3)
320,0: (3,3,3) #030303 gray(3)
321,0: (14,14,14) #0E0E0E gray(14)
322,0: (156,156,156) #9C9C9C gray(156)
323,0: (21,21,21) #151515 gray(21)
324,0: (159,159,159) #9F9F9F gray(159)
325,0: (10,10,10) #0A0A0A gray(10)
326,0: (0,0,0) #000000 gray(0)
327,0: (0,0,0) #000000 gray(0)
[....]
397,0: (0,0,0) #000000 gray(0)
398,0: (0,0,0) #000000 gray(0)
399,0: (0,0,0) #000000 gray(0)
As you can see, the detected edges from the text on the note also influenced the grayscale values of the pixels. So we can introduce an additional -threshold 50% operation into our commands to get pure black+white output:
convert canny-edges.png -resize 400x1\! -threshold 50% canny-1col.txt
convert canny-edges.png -resize 1x300\! -threshold 50% canny-1row.txt
I'll not quote the contents of the new text files here, you can try it and look for yourself if you are interested. Instead, I'll do a shortcut: I'll output the textual representation of the pixel color values to <stdout> and directly grep it for all non-black pixels:
convert canny-edges.png -resize 400x1\! -threshold 50% txt:- \
| grep -v black
# ImageMagick pixel enumeration: 400,1,255,srgb
76,0: (255,255,255) #FFFFFF white
78,0: (255,255,255) #FFFFFF white
322,0: (255,255,255) #FFFFFF white
324,0: (255,255,255) #FFFFFF white
convert canny-edges.png -resize 1x300\! -threshold 50% txt:- \
| grep -v black
# ImageMagick pixel enumeration: 1,300,255,srgb
0,39: (255,255,255) #FFFFFF white
0,41: (255,255,255) #FFFFFF white
0,229: (255,255,255) #FFFFFF white
0,231: (255,255,255) #FFFFFF white
From the above results you can conclude that the corner coordinates of the sticky note inside the other image are:
upper left corner: (77|40)
lower right corner: (323|230)
The width of the area is 246 pixels and the height is 190 pixels.
(ImageMagick puts the origin of its coordinate system at the upper left corner of an image.)
To now cut the sticky note from the original image you can do:
convert http://i.stack.imgur.com/SxrwG.png[246x190+77+40] sticky-note.png
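A rough numpy equivalent of the row/column projection trick, assuming the canny-edges.png produced above and a simple 50%-of-maximum threshold (both the threshold and the file name are illustrative):

import numpy as np
from PIL import Image

edges = np.array(Image.open("canny-edges.png").convert("L"))
cols = edges.mean(axis=0)   # one value per column, like the 400x1 resize
rows = edges.mean(axis=1)   # one value per row, like the 1x300 resize

xs = np.flatnonzero(cols > 0.5 * cols.max())
ys = np.flatnonzero(rows > 0.5 * rows.max())
x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
print(f"crop geometry: {x1 - x0}x{y1 - y0}+{x0}+{y0}")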
More options to explore
autotrace
You can streamline the above procedure even further (and even turn it into an automatically working script if you want) by converting the intermediate "canny-edges.png" into an SVG vector graphic, for example by running it through autotrace...
This could be useful if your sticky note is tilted or rotated.
Hough Line Detection
Once you have the "canny" lines, you could also apply the Hough Line Detection algorithm on them:
convert \
canny-edges.png \
-background black \
-stroke red \
-hough-lines 5x5+20 \
lines.png
Note that the -hough-lines operator extends and draws detected lines from one edge (with floating point values) to another edge of the original image.
While the previous command finally converted the lines to a PNG, the -hough-lines operator really generates an MVG file (Magick Vector Graphics) internally. That means you could actually read the source code of the MVG file and determine the mathematical parameters of each line depicted in the "red lines" image:
convert \
canny-edges.png \
-hough-lines 5x5+20 \
lines.mvg
This is more sophisticated and also works for edges which are not strictly horizontal and/or vertical.
But your example image does use horizontal and vertical edges, so you can even use simple shell commands to discover these.
There are 80 line descriptions in total in the generated MVG file. You can identify all horizontal lines in that file:
cat lines.mvg \
| while read a b c d e ; do \
if [ x${b/0,/} == x${c/400,/} ]; then \
echo "$a $b $c $d $e" ; \
fi; \
done
line 0,39.5 400,39.5 # 249
line 0,62.5 400,62.5 # 48
line 0,71.5 400,71.5 # 52
line 0,231.5 400,231.5 # 249
Now identify all vertical lines:
cat lines.mvg \
| while read a b c d e; do \
if [ x${b/,0/} == x${c/,300} ]; then \
echo "$a $b $c $d $e" ; \
fi; \
done
line 76.5,0 76.5,300 # 193
line 324.5,0 324.5,300 # 193
I met a similar problem of detecting image borders (whitespace) last week and spent many hours trying various approaches and tools; in the end I solved it with an entropy-difference approach, so JFYI here is the algorithm.
Let's assume you want to detect whether your 200x100 px image has a border at the top:
1. Take the upper piece of the image, 25% of its height (25 px): (0:25, 0:200)
2. Take the lower piece of the same height, starting where the upper piece ends and going deeper towards the image center: (25:50, 0:200) (upper and lower pieces depicted)
3. Calculate the entropies of both pieces
4. Find the entropy difference and store it together with the current block height
5. Make the upper piece 1 px shorter (24 px) and repeat from step 2 until we hit the image edge (height 0), resizing the scan area on every iteration and thus sliding up to the image edge
6. Find the maximum of the stored entropy differences and its block height; this is the center of our border if it lies closer to the edge than to the center of the image and the maximum entropy difference exceeds a preset threshold (0.5, for example)
Apply this algorithm to every side of your image.
Here is a piece of code that detects whether an image has a top border and finds its approximate coordinate (offset from the top); pass a grayscale ('L' mode) Pillow image to the scan function:
import numpy as np

MEDIAN = 0.5


def scan(im):
    w, h = im.size
    array = np.array(im)
    center_ = None
    diff_ = None
    for center in reversed(range(1, h // 4 + 1)):
        upper = entropy(array[0: center, 0: w].flatten())
        lower = entropy(array[center: 2 * center, 0: w].flatten())
        diff = upper / lower if lower != 0.0 else MEDIAN
        if center_ is None or diff_ is None:
            center_ = center
            diff_ = diff
        if diff < diff_:
            center_ = center
            diff_ = diff
    top = diff_ < MEDIAN and center_ < h // 4, center_, diff_
    return top
Full source with examples of bordered and clear (not bordered) images processed is here: https://github.com/embali/enimda/
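Note that scan() calls an entropy helper that isn't shown above; the linked project has its own implementation, but a minimal stand-in could be:

import numpy as np

def entropy(signal):
    # Shannon entropy of the value distribution of a flattened pixel array
    _, counts = np.unique(signal, return_counts=True)
    probs = counts / signal.size
    return -np.sum(probs * np.log2(probs))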

Generate Smooth White Border Around Circular Image

I'm using pgmagick to generate a circular thumbnail. I'm using a process similar to the one discussed here, which does indeed produce a nice circular thumbnail for me. However, I need a white border around the radius of the circle.
My initial approach was to create a new image of a slightly larger white circle with a transparent background and composite the thumbnail over that, letting the white circle "peak out" from under the thumbnail and create a border effect. Here's the pgmagick code I used to achieve that:
border_background = Image(Geometry(220, 220), Color('transparent'))
drawer = Draw()
drawer.circle(110, 110, 33.75, 33.75)
drawer.fill_color(Color('white'))
drawer.stroke_antialias(False)
border_background.draw(drawer.drawer)
border_background.composite(original_thumbnail, 0, 0, CompositeOperator.OverCompositeOp)
This 'works', but the surrounding white border is fairly distorted, with choppy edges; it's not production ready. If I take drawer.stroke_antialias(False) out, it's even worse.
Any ideas on making this border smoother using pgmagick?
I leave it as a simple exercise for the reader to convert this solution from
commandline to pgmagick (see more below). The code underlying pgmagick is the same as that used by the commandline.
You could draw the circle larger and then "resize" it down. This ameliorates the jaggy look of the circle by averaging the edge with the surrounding background during the resizing operation.
Instead of
gm convert -size 220x220 xc:none -fill white \
-draw "circle 110,110, 33.75,33.75" \
original.png
Do this:
gm convert -size 880x880 xc:none -fill white \
-draw "circle 440,440, 135,135" \
-resize 25% resized.png
You could try other sizes and
decide which is the smallest that satisfies you, e.g.,
gm convert -size 440x440 xc:none -fill white \
-draw "circle 220,220, 67.5,65.5" \
-resize 50% resized.png
This commandline works on both GraphicsMagick ("gm convert") and ImageMagick ("convert")
Looking at the pgmagick documentation at
http://pgmagick.readthedocs.org/en/latest/cookbook.html#scaling-a-image it is not clear that pgmagick offers "resize". The documentation shows "img.scale" which will probably result in a jaggy circle. Using "-scale" on the commandline examples above instead of "-resize" does indeed produce the same jaggy image.
pgmagick does however allow you to specify the filter type, as in
img.scale((150, 100), 'lanczos')
which should be equivalent to "-resize" and is what you want.
You will get a better result if you choose a different approach:
# First draw the thumbnail inside the circle.
background = Image(Geometry(220, 220), Color('transparent'))
drawer = Draw()
drawer.circle(110, 110, 33.75, 33.75)
drawer.fill_color(Color('white'))
background.draw(drawer.drawer)
background.composite(original_thumbnail, 0, 0, CompositeOperator.InCompositeOp)
# Draw only the border of the circle on top of the thumbnail inside the circle
border = Image(Geometry(220, 220), Color('transparent'))
drawer.fill_color(Color('transparent'))
drawer.stroke_color(Color('white'))
drawer.stroke_width(3)
border.draw(drawer.drawer)
background.composite(border, 0, 0, CompositeOperator.OverCompositeOp)
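To get the result onto disk you can then write out the composite; a one-line sketch with an illustrative file name:

# Save the bordered circular thumbnail
background.write('thumbnail_with_border.png')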
