How to kern text in PIL - python

I'm working on a problem that requires good precision pixel by pixel, so I need to have the ability to manipulate text in more ways than what PIL provides. Especially with regard to kerning. There is a feature that allows you to disable kerning, but not control the amount.
This problem was made more challenging because of bugs in PIL that relate to accurately measuring the size of text. There are many posts about this problem but the most useful information is a SO post here, How to get the font pixel height using PIL's ImageFont class? and a blog article, How to properly calculate text size in PIL images

My code is for my own use but if someone is having similar issues, I'm sure that it can easily be adapted for your own needs.
My key functions are:
def get_text_width(text_string, font):
    # width of the rendered text, measured from the font mask's bounding box
    return font.getmask(text_string).getbbox()[2]

def kern(name, draw_object, y, space, font, fill):
    chars = [char for char in name]
    # total width of the string once the extra kerning space is added between characters
    total_width = 0
    for char in chars:
        width_text = get_text_width(char, font)
        total_width += (width_text + int(space))
    __, height_text = draw_object.textsize(name, font)
    __, offset_y = font.getoffset(name)
    height_text += offset_y
    # draw the string character by character, centred on a 473x40 area
    width_adjuster = 0
    for char in chars:
        width_text = get_text_width(char, font)
        top_left_x = (473 / 2 - total_width / 2) + width_adjuster
        top_left_y = (40 / 2 - height_text / 2) + y
        xy = top_left_x, top_left_y
        width_adjuster += width_text + int(space)
        print(f"char:{char},width_text:{width_text},xy:{xy},width_adjuster:{width_adjuster}")
        draw_object.text(xy, char, font=font, fill=fill)
It gives a nice output. However, it's not entirely precise: the number of pixels between letters will vary slightly with different fonts. I have not found a way to standardize this, so I've just accepted that when I enter a value for kerning into my GUI, it is a scalar relative to the font, not a number of pixels.
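For reference, a minimal usage sketch of these functions might look like the following (the 473x40 canvas mirrors the constants hard-coded in kern(); the font path and spacing value are placeholders):
from PIL import Image, ImageDraw, ImageFont

img = Image.new("RGB", (473, 40), "white")
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("arial.ttf", 24)  # any TrueType font available locally

# draw "EXAMPLE" with 3 extra pixels (scaled per font) between characters
kern("EXAMPLE", draw, y=0, space=3, font=font, fill="black")
img.save("kerned.png")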


How do I improve the number detection for blueprints (OCR)

I have a number of blueprints where I would like to detect the numbers on the blueprint such that I can turn them into proper models.
For example, I have the following image and would like all the numbers on this image, so I ran the following code:
import pytesseract
from pytesseract import Output
import cv2
import numpy as np

img = cv2.imread('vdb7C.jpg')
custom_config = r'--oem 2 --psm 10'
d = pytesseract.image_to_data(img, config=custom_config, lang='eng', output_type=Output.DICT)
n_boxes = len(d['level'])
for i in range(n_boxes):
    text = d["text"][i]
    print(text + str(text.isdigit()))
    if text.isdigit():
        (x, y, w, h) = (d['left'][i], d['top'][i], d['width'][i], d['height'][i])
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("output.jpg", img)
This gave me the following result (image omitted). As you can see, it correctly identifies a number of the numbers on the blueprint; however, it misses quite a few others and falsely detects a few that aren't really there. I care more about getting all the numbers than about a few false positives, but I would still like to keep those to a minimum, so any suggestions there?
I have already tried thinning operations, re-scaling the images, rotating the images and smoothing the images, but none of those appear to make much difference. Extreme rescaling (*0.1 or *10) does change a few things, but any gains made in one part of the image are undone by faults appearing in other parts.
Especially difficult are situations such as on the left building, where numbers sit close to, or even overlap, lines that are part of the design.
Here we see 2 examples of such situations.
Also note that font usage is not consistent between images.
It's worth noting that the lines are almost always noticeably thinner than the font used for the numbers, so perhaps something could be done with that?
I have also tried using the EAST OCR system with the following code:
from imutils.object_detection import non_max_suppression  # assumed source of non_max_suppression

img = cv2.imread('vdb7C.jpg')
W = 5664
H = 4000
dim = (W, H)
img = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
net = cv2.dnn.readNet("frozen_east_text_detection.pb")
blob = cv2.dnn.blobFromImage(img, 1.0, (W, H),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
(scores, geometry) = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                  "feature_fusion/concat_3"])
(numRows, numCols) = scores.shape[2:4]
rects = []
confidences = []
confidence = 0.5  # minimum score threshold (value assumed; it was not defined in the snippet)
# loop over the number of rows
for y in range(0, numRows):
    # extract the scores (probabilities), followed by the geometrical
    # data used to derive potential bounding box coordinates that
    # surround text
    scoresData = scores[0, 0, y]
    xData0 = geometry[0, 0, y]
    xData1 = geometry[0, 1, y]
    xData2 = geometry[0, 2, y]
    xData3 = geometry[0, 3, y]
    anglesData = geometry[0, 4, y]
    for x in range(0, numCols):
        if scoresData[x] < confidence:
            continue
        (offsetX, offsetY) = (x * 4.0, y * 4.0)
        angle = anglesData[x]
        cos = np.cos(angle)
        sin = np.sin(angle)
        h = xData0[x] + xData2[x]
        w = xData1[x] + xData3[x]
        endX = int(offsetX + (cos * xData1[x]) + (sin * xData2[x]))
        endY = int(offsetY - (sin * xData1[x]) + (cos * xData2[x]))
        startX = int(endX - w)
        startY = int(endY - h)
        rects.append((startX, startY, endX, endY))
        confidences.append(scoresData[x])
boxes = non_max_suppression(np.array(rects), probs=confidences)
for box in boxes:
    (y, h, x, w) = box
    print(box)
    print(np.shape(img))
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("output.jpg", img)
However, this causes quite a number of bounding boxes to be outside of the image, and in general the bounding boxes seem unrelated to the content, so does anyone know what's up there?
Any suggestions? I have 8000 images right now and need to eventually process a total of about 400k images.
I suggest using a solution that applies neural networks like keras-ocr which applies CRAFT and CRNN. It does a better job in detecting text that overlaps with the design. This is what I got using it out of the box:
import matplotlib.pyplot as plt
import keras_ocr
detector = keras_ocr.detection.Detector()
image = keras_ocr.tools.read('vdb7C.jpg')
boxes = detector.detect(images=[image])[0]
canvas = keras_ocr.tools.drawBoxes(image, boxes)
plt.imshow(canvas)
Result:
Run your tesseract piece of code, but only use results with 3 or more digits. This should provide you with enough good examples of digits. Extract each digit to a separate file and save its position; a rough sketch of that extraction step is below. From there you can go two ways.
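Something like this, reusing d and img from the pytesseract snippet in the question (the even split of multi-digit boxes and the output folder name are assumptions on my side):
import os

os.makedirs("digit_samples", exist_ok=True)
positions = []
for i in range(len(d['level'])):
    text = d['text'][i]
    # keep only confident hits: strings of 3 or more digits
    if text.isdigit() and len(text) >= 3:
        x, y, w, h = d['left'][i], d['top'][i], d['width'][i], d['height'][i]
        digit_w = w // len(text)  # naive even split of the word box into single digits
        for j in range(len(text)):
            crop = img[y:y + h, x + j * digit_w:x + (j + 1) * digit_w]
            cv2.imwrite("digit_samples/{}_{}.png".format(i, j), crop)
            positions.append((x + j * digit_w, y, digit_w, h))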
You can go the simple way if you see that the fonts of the digits are quite similar. Then you can create a set of templates for the digits (say 15-30). Remember that you can get the size of the digits for a specific image? Resize your digit templates to the right size and run the most trivial template matching. This will definitely create some false detections (especially for "1"s), and you will have to find a way to reduce their amount to an acceptable level.
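A minimal sketch of that template matching step (file names, template size and threshold are placeholders):
import cv2
import numpy as np

img = cv2.imread('vdb7C.jpg', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('digit_template_7.png', cv2.IMREAD_GRAYSCALE)
template = cv2.resize(template, (20, 30))  # resize to the digit size expected in this image

result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
threshold = 0.8  # assumed; lower it to catch more matches (and more false positives)
ys, xs = np.where(result >= threshold)
h, w = template.shape
for (x, y) in zip(xs, ys):
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 1)
cv2.imwrite('matches.png', img)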
The more complex way is to build a custom CNN detector and train it on your data. From the first stage you will get several hundred examples of digits (and their positions) that you want to detect. You can look at this project or this one as references. Also, this article can provide you with some guidance.
One more thing that can be useful: your images have lots of long perpendicular lines. If you align them to the axes, you can remove the lines very easily by binarizing the original, shifting the result (right or down) by several pixels, and ANDing them. This will leave only the long lines. Find their length, and you will be able to remove lines above a certain length in the original image.
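A rough sketch of that shift-and-AND idea (the threshold method and shift amount are assumptions; the final length-based filtering is only indicated):
import cv2
import numpy as np

img = cv2.imread('vdb7C.jpg', cv2.IMREAD_GRAYSCALE)
# binarize so that ink becomes white on black (Otsu threshold assumed here)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

shift = 5  # assumed; should be larger than the stroke width of the digits
shifted = np.zeros_like(binary)
shifted[:, shift:] = binary[:, :-shift]  # shift right; shift down analogously for vertical lines

# only pixels belonging to long horizontal runs survive the AND
long_lines = cv2.bitwise_and(binary, shifted)
cv2.imwrite('long_lines.png', long_lines)
# from here, measure the run lengths in long_lines and erase runs above a chosen
# length from the original binary image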

How to delete or clear contours from image?

I'm working with license plates; what I do is apply a series of filters to them, such as:
Grayscale
Blur
Threshold
Binary
The problem is that when I do this, some contours like in this image remain at the borders. How can I clear them, or just make them black (masked)? I used this code but sometimes it fails.
# invert image and detect contours
inverted = cv2.bitwise_not(image_binary_and_dilated)
contours, hierarchy = cv2.findContours(inverted, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# get the biggest contour
biggest_index = -1
biggest_area = -1
i = 0
for c in contours:
    area = cv2.contourArea(c)
    if area > biggest_area:
        biggest_area = area
        biggest_index = i
    i = i + 1
print("biggest area: " + str(biggest_area) + " index: " + str(biggest_index))
cv2.drawContours(image_binary_and_dilated, contours, biggest_index, [0, 0, 255])
center, size, angle = cv2.minAreaRect(contours[biggest_index])
rot_mat = cv2.getRotationMatrix2D(center, angle, 1.)
#cv2.warpPerspective()
print(size)
dst = cv2.warpAffine(inverted, rot_mat, (int(size[0]), int(size[1])))
mask = dst * 0
x1 = max([int(center[0] - size[0] / 2) + 1, 0])
y1 = max([int(center[1] - size[1] / 2) + 1, 0])
x2 = int(center[0] + size[0] / 2) - 1
y2 = int(center[1] + size[1] / 2) - 1
point1 = (x1, y1)
point2 = (x2, y2)
print(point1)
print(point2)
cv2.rectangle(dst, point1, point2, [0, 0, 0])
cv2.rectangle(mask, point1, point2, [255, 255, 255], cv2.FILLED)
masked = cv2.bitwise_and(dst, mask)
#cv2_imshow(imgg)
cv2_imshow(dst)
cv2_imshow(masked)
#cv2_imshow(mask)
Some results (result images omitted): four of the original plates gave good results and two gave bad results.
The corresponding binary plates are Images 1-6, where Image 5 and Image 6 correspond to the two bad results.
How can I fix this code? I just want to avoid those bad results, or at least improve on them.
INTRODUCTION
What you are asking is starting to become complicated, and I believe there is no longer a single right or wrong answer, just different ways to do this. Almost all of them will yield positive and negative results, most likely in a different ratio. Getting a 100% positive result is quite a challenging task, and I do believe my answer does not reach it. Yet it can be the basis for a more sophisticated work towards that goal.
MY PROPOSAL
So, I want to make a different proposal here.
I am not 100% sure why you are doing all the steps, and I believe some of them could be unnecessary.
Let's start from the problem: you want to remove the white parts on the borders (which are not numbers).
So, we need an idea about how to distinguish them from the letters, in order to correctly tackle them.
If we just try to contour and warp, it is likely to work on some images and not on others, because not all of them look the same. This is the hardest part of finding a general solution that works for many images.
What are the differences between the characteristics of the numbers and the characteristics of the borders (and other small points)?
After thinking about that, I would say: the shapes! Meaning that if you imagine a bounding box around a letter/number, it would look like a rectangle whose size is related to the image size, while the borders are usually very long and narrow, or too small to be considered a letter/number (random points).
Therefore, my guess would be segmentation, dividing the features via their shape. So we take the binary image, we remove some parts using the projection on their axes (as you correctly asked in the previous question, and I believe we should use it), and we get an image where each letter is separated from the white borders.
Then we can segment and check the shape of each segmented object; if we think they are letters, we keep them, otherwise we discard them.
THE CODE
I wrote the code below as an example on your data. Some of the parameters are tuned for this set of images, so they may have to be relaxed for a larger dataset.
import cv2
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import scipy.ndimage as ndimage

# do this for all the images
num_images = 6
plt.figure(figsize=(16, 16))
for k in range(num_images):
    # read the image
    binary_image = cv2.imread("binary_image/img{}.png".format(k), cv2.IMREAD_GRAYSCALE)
    # just for visualization purposes, I create another image with the same shape, to show what I am doing
    new_intermediate_image = np.zeros((binary_image.shape), np.uint8)
    new_intermediate_image += binary_image
    # here we will copy only the cleaned parts
    new_cleaned_image = np.zeros((binary_image.shape), np.uint8)

    ### THIS CODE COMES FROM THE PREVIOUS ANSWER:
    # https://stackoverflow.com/questions/62127537/how-to-clean-binary-image-using-horizontal-projection?noredirect=1&lq=1
    (rows, cols) = binary_image.shape
    h_projection = np.array([x / rows for x in binary_image.sum(axis=0)])
    threshold_h = (np.max(h_projection) - np.min(h_projection)) / 10
    print("we will use threshold {} for horizontal".format(threshold_h))
    # select the black areas
    black_areas_horizontal = np.where(h_projection < threshold_h)
    for j in black_areas_horizontal:
        new_intermediate_image[:, j] = 0
    v_projection = np.array([x / cols for x in binary_image.sum(axis=1)])
    threshold_v = (np.max(v_projection) - np.min(v_projection)) / 10
    print("we will use threshold {} for vertical".format(threshold_v))
    black_areas_vertical = np.where(v_projection < threshold_v)
    for j in black_areas_vertical:
        new_intermediate_image[j, :] = 0
    ### UNTIL HERE

    # define the features we are looking for
    # these parameters can also be tuned
    min_width = binary_image.shape[1] / 14
    max_width = binary_image.shape[1] / 2
    min_height = binary_image.shape[0] / 5
    max_height = binary_image.shape[0]
    print("we look for features with width in [{},{}] and height in [{},{}]".format(min_width, max_width, min_height, max_height))
    # segment the image
    labeled_array, num_features = ndimage.label(new_intermediate_image)
    # loop over all features found (label 0 is the background, so start from 1)
    for i in range(1, num_features + 1):
        # get a bounding box around them
        slice_x, slice_y = ndimage.find_objects(labeled_array == i)[0]
        roi = labeled_array[slice_x, slice_y]
        # check the shape; if the bounding box is what we expect, copy it to the new image
        if roi.shape[0] > min_height and \
           roi.shape[0] < max_height and \
           roi.shape[1] > min_width and \
           roi.shape[1] < max_width:
            new_cleaned_image += (labeled_array == i)

    # print all images on a grid
    plt.subplot(num_images, 3, 1 + (k * 3))
    plt.imshow(binary_image)
    plt.subplot(num_images, 3, 2 + (k * 3))
    plt.imshow(new_intermediate_image)
    plt.subplot(num_images, 3, 3 + (k * 3))
    plt.imshow(new_cleaned_image)
That produces the following output (in the grid, the left column shows the input images, the central column shows the images after the mask based on histogram projections, and the right column shows the cleaned images):
CONCLUSIONS:
As said above, this method does not yield 100% positive results. The last picture has lower quality and some parts are disconnected, and they are lost in the process. I personally believe this is a price to pay to get a cleaner image; if you have a lot of images, it won't be a problem, and you can remove those kinds of images. Overall, I think this method returns quite clean images, where all parts that are not letters or numbers are correctly removed.
ADVANTAGES
the image is clean: nothing other than letters or numbers is kept
the parameters can be tuned, and should be consistent across images
in case of problems, some prints or some debugging on the loop that chooses the features to keep should make it easier to understand where the problems are and correct them
LIMITATIONS
it may fail in some cases where letters and numbers touch the white borders, which seems quite possible. It is partly handled by the black_areas created using the projection, but I am not so confident this will work 100% of the time.
some small parts of the numbers may be lost during the process, as in the last picture.

How to generate quality windows icons with images of capital letters inside using Python?

I want to get 26 files (for starters): A.ico, B.ico, ... Z.ico, where each is composed of a 16x16 256-color image and a 32x32 256-color image, where the color of the text is black, the font is ... say Calibri, and the size is whatever fits best into the square. I would like to do this using the Python Imaging Library if possible.
I know that I can probably get my icons through other means, but I would like to learn to use the PIL better, and would like to use it for the task at hand.
Start with a large blank image and draw the character on the center of it. Find the edges of the character and extract a square from the image that includes all of the character. Use the thumbnail function with the ANTIALIAS option to reduce it to the 16x16 or 32x32 size required. Then reduce the number of colors to 256: How to reduce color palette with PIL
This is based on the answer by @Mark Ransom. Thank you, Mark!
This worked for me, though the 'blackify' function is still imperfect.
I still need to figure out how to create an .ico file without using icotool for Linux.
# This script generates icon files from the two images.
# Uses Python 2.6.5, uses the Python Imaging Library
import Image
import ImageDraw
import ImageFont

letters = [chr(i + ord('A')) for i in range(26)]

default_huge = ImageFont.load_default()
large_size = 1000
lim = large_size + 1
# Apparently I can use the same size for the font.
calibri_huge = ImageFont.truetype("calibri.ttf", large_size)

def crop_letter(img):
    minx, maxx, miny, maxy = lim, -lim, lim, -lim
    for x in range(large_size):
        for y in range(large_size):
            if sum(img.getpixel((x, y))) == 3 * 255: continue
            # Else, found a black pixel
            minx = min(minx, x)
            maxx = max(maxx, x)
            miny = min(miny, y)
            maxy = max(maxy, y)
    return img.crop(box=(minx, miny, maxx, maxy))

# This works for me 95% of the time
def blackify(color):
    return 255 * (color > 240)

for letter in letters:
    # A bit wasteful, but I have plenty of RAM.
    img = Image.new("RGB", (large_size, large_size), "white")
    draw = ImageDraw.Draw(img)
    draw.text((0, 0), letter, font=calibri_huge, fill="black")
    img32 = crop_letter(img)
    img16 = img32.copy()
    img32.thumbnail((32, 32), Image.ANTIALIAS)
    img16.thumbnail((16, 16), Image.ANTIALIAS)
    img32 = Image.eval(img32, blackify)
    img16 = Image.eval(img16, blackify)
    ## Not needed
    ## # Apparently this is all it takes to get 256 colors.
    ## img32 = img32.convert('P')
    ## img16 = img16.convert('P')
    img32.save('icons3/{0}32x32.bmp'.format(letter))
    img16.save('icons3/{0}16x16.bmp'.format(letter))
    # break

print('DONE!')
print('DONE!')

PIL how to scale text size in relation to the size of the image

I'm trying to dynamically scale text to be placed on images of varying but known dimensions. The text will be applied as a watermark. Is there any way to scale the text in relation to the image dimensions? I don't require that the text take up the whole surface area, just that it be visible enough so it's easily identifiable and difficult to remove. I'm using Python Imaging Library version 1.1.7 on Linux.
I would like to be able to set the ratio of the text size to the image dimensions, say like 1/10 the size or something.
I have been looking at the font size attribute to change the size but I have had no luck in creating an algorithm to scale it. I'm wondering if there is a better way.
Any ideas on how I could achieve this?
Thanks
You could just increment the font size until you find a fit. font.getsize() is the function that tells you how large the rendered text is.
from PIL import ImageFont, ImageDraw, Image

image = Image.open('hsvwheel.png')
draw = ImageDraw.Draw(image)
txt = "Hello World"
fontsize = 1  # starting font size

# portion of image width you want text width to be
img_fraction = 0.50

font = ImageFont.truetype("arial.ttf", fontsize)
while font.getsize(txt)[0] < img_fraction * image.size[0]:
    # iterate until the text size is just larger than the criteria
    fontsize += 1
    font = ImageFont.truetype("arial.ttf", fontsize)

# optionally de-increment to be sure it is less than criteria
fontsize -= 1
font = ImageFont.truetype("arial.ttf", fontsize)

print('final font size', fontsize)
draw.text((10, 25), txt, font=font)  # put the text on the image
image.save('hsvwheel_txt.png')  # save it
If this is not efficient enough for you, you can implement a root-finding scheme, but I'm guessing that the font.getsize() function is small potatoes compared to the rest of your image editing processes.
I know this is an old question that has already been answered with a solution that I too have used. Thanks, @Paul!
Though increasing the font size by one for each iteration can be time-consuming (at least for me on my poor little server). So, e.g., small text (like "Foo") would take around 1-2 seconds, depending on the image size.
To solve that, I adjusted Paul's code so that it searches for the number somewhat like a binary search.
breakpoint = img_fraction * photo.size[0]
jumpsize = 75
while True:
    if font.getsize(text)[0] < breakpoint:
        fontsize += jumpsize
    else:
        jumpsize = jumpsize // 2
        fontsize -= jumpsize
    font = ImageFont.truetype(font_path, fontsize)
    if jumpsize <= 1:
        break
Like this, it increases the font size until it's above the breakpoint, and from there on it goes up and down (cutting the jump size in half with each downward step) until it has the right size.
With that, I could reduce the steps from around 200+ to about 10, and so from around 1-2 sec to 0.04-0.08 sec.
This is a drop-in replacement for Paul's code (for the while statement and the 2 lines after it, because you already get the correct font size in the while loop).
This was done in a few minutes, so any improvements are appreciated! I hope this can help some who are looking for a more performance-friendly solution.
In general, when you change the font size, it's not going to be a linear change in the rendered size of the text.
Now this often depends on the software, fonts, etc. This example was taken from Typophile and uses LaTeX + the Computer Modern font. As you can see, it's not exactly linear scaling. So if you are having trouble with non-linear font scaling, I'm not sure how to resolve it completely, but a couple of suggestions:
Render the font as closely to the size that you want as possible, then scale that up/down via a regular image scaling algorithm (see the sketch below).
Just accept that it won't be exactly linear scaling and try to create some sort of table/algorithm that will select the closest point size for the font to match up with the image size.
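A rough sketch of the first suggestion, rendering at a nearby size and then resizing the bitmap (the font path and target width are placeholders):
from PIL import Image, ImageDraw, ImageFont

text = "Hello World"
font = ImageFont.truetype("arial.ttf", 48)  # nearest available/chosen point size
text_w, text_h = font.getsize(text)

text_img = Image.new("RGBA", (text_w, text_h), (0, 0, 0, 0))
ImageDraw.Draw(text_img).text((0, 0), text, font=font, fill="black")

# scale the rendered text to exactly the pixel width you want, e.g. half of a 600 px image
target_w = 300
scale = target_w / text_w
text_img = text_img.resize((target_w, int(text_h * scale)), Image.ANTIALIAS)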
Despite other answers saying that font sizes do not scale linearly, in all the examples that I tested they did scale linearly (within 1-2%).
So if you need a simpler and more efficient version that works within a few percent, you can copy/paste the following:
from PIL import ImageFont, ImageDraw, Image

def find_font_size(text, font, image, target_width_ratio):
    tested_font_size = 100
    tested_font = ImageFont.truetype(font, tested_font_size)
    observed_width, observed_height = get_text_size(text, image, tested_font)
    estimated_font_size = tested_font_size / (observed_width / image.width) * target_width_ratio
    return round(estimated_font_size)

def get_text_size(text, image, font):
    im = Image.new('RGB', (image.width, image.height))
    draw = ImageDraw.Draw(im)
    return draw.textsize(text, font)
The function find_font_size() can then be used like that (full example):
width_ratio = 0.5 # Portion of the image the text width should be (between 0 and 1)
font_family = "arial.ttf"
text = "Hello World"
image = Image.open('image.jpg')
editable_image = ImageDraw.Draw(image)
font_size = find_font_size(text, font_family, image, width_ratio)
font = ImageFont.truetype(font_family, font_size)
print(f"Font size found = {font_size} - Target ratio = {width_ratio} - Measured ratio = {get_text_size(text, image, font)[0] / image.width}")
editable_image.text((10, 10), text, font=font)
image.save('output.png')
Which for a 225x225 image would print:
>> Font size found = 22 - Target ratio = 0.5 - Measured ratio = 0.502
I tested find_font_size() with various fonts and picture sizes, and it worked in all cases.
If you want to know how this function works, basically tested_font_size is used to find out which ratio will be obtained if we use this specific font size to generate the text. Then, we use a cross-multiplication rule to get the targeted font size.
I tested different values for tested_font_size and found that as long as it's not too small, it does not make any difference.

fonts clipping with PIL

This image was created with PIL. See how the g's and the y's are cut off in this image? How can I prevent this?
http://img109.imageshack.us/img109/8874/screenshotep.png
The code that created this image is pretty straightforward (abbreviated):
import Image, ImageDraw, ImageFont

im = Image.new("RGBA", (200, 200), 'white')
draw = ImageDraw.Draw(im)
font = ImageFont.truetype("VeraSe.ttf", 12)
draw.text(
    (1, 1),
    " %s: " % "ggjyfFwe__",
    font=font,
    fill='black'
)
draw.text(
    (1, 30),
    " %s" % 15,
    font=font,
    fill='black'
)
im.show()
I tried it with a few different fonts, and it always gets clipped. Surprisingly, googling "PIL font clipping" returns very few useful hits... I'm using Python 2.6.4 and PIL 1.1.6 on Ubuntu 9.10.
Here's a late answer for this older question.
The problem appears to be that PIL and Pillow will clip the edges of rendered text. This most often shows on trailing wide characters and descenders (like 'y's). It can also appear at the top of some fonts. This has been a problem for at least ten years. It happens regardless of the size of the image on which text() is called. The conflict appears to be choosing the bounding rectangle as "font.size * number_chars" instead of "whatever I actually need to render", and this occurs deep in the stack (_imagingft.c). Fixing this causes other problems, like lining up text rendered letter by letter.
Some solutions include:
Append a space to the end of your string: draw.text(xy, my_text + ' ', ...)
For height issues, first get the width of your text with font.getsize(), then render the text plus a good ascender and descender, and chop the rendered text to the first reported width and the second, actual height (see the sketch after this list).
Use a different library such as AggDraw or pyvips.
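A rough sketch of the second workaround (the ' Fj' padding and canvas size are assumptions):
import Image, ImageDraw, ImageFont

font = ImageFont.truetype("VeraSe.ttf", 12)
text = "ggjyfFwe__"

width, _ = font.getsize(text)            # width reported for the text we actually want
_, height = font.getsize(text + " Fj")   # height including a good ascender and descender

im = Image.new("RGBA", (width + 50, height + 10), "white")  # generous canvas
draw = ImageDraw.Draw(im)
draw.text((0, 0), text + " Fj", font=font, fill="black")

# chop the rendered text back to the first reported width and the second, actual height
unclipped = im.crop((0, 0, width, height))
unclipped.show()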
This is referenced in various questions: fonts clipping with PIL, PIL cuts off top of letters, and Properly render text with a given font in Python and accurately detect its boundaries. These questions reference the same underlying issue but are not duplicates.
I couldn't solve this problem for some fonts using the approaches mentioned so far, so I ended up using aggdraw as a transparent replacement for PIL's text drawing methods.
Your code rewritten to aggdraw would look like:
import Image
import aggdraw
im = Image.new("RGBA", (200, 200), 'white')
draw = aggdraw.Draw(im)
# note that the color is specified in the font constructor in aggdraw
font = aggdraw.Font((0,0,0), "VeraSe.ttf", size=12, opacity=255)
draw.text((1, 1), " %s: " % "ggjyfFwe__", font) # no color here
draw.text((1, 30), " %s" % 15, font)
draw.flush() # don't forget this to update the underlying PIL image!
im.show()
The "bug" still exists in 2012, with Ubuntu 11.10. Fontsize 11, 12, 13 and 15 clip the underscore completely.
#!/usr/bin/env python
""" demonstrates clipping of descenders for certain font sizes """
import Image, ImageDraw, ImageFont

fontPath = "/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans-Bold.ttf"
im = Image.new('L', (256, 256))
ys = 15
for i in range(10, 21):
    fh = ImageFont.truetype(fontPath, i)
    sometext = "%dgt_}" % (i)
    ImageDraw.Draw(im).text((10, ys), sometext, 254, fh)
    ys += i + 5
im.show()
My suggestion is, before you create the image object, to get the required size for the text.
This is done using font.getsize("text") (documentation).
In an image-generating script I made, I first found the maximum height of one line of text by calling the equivalent of font.getsize("Åj") (if you only need US-ASCII, you could find the height of "Aj" instead). Then I calculated the required image height and line offsets, including margins and line spacing.
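A small sketch of that sizing approach (the font, margins and spacing values are placeholders):
import Image, ImageDraw, ImageFont

font = ImageFont.truetype("VeraSe.ttf", 12)
lines = ["first line with ggjy", "second line", "third"]

_, line_height = font.getsize("Åj")  # tallest ascender + descender for this font
margin, spacing = 5, 2
img_height = 2 * margin + len(lines) * line_height + (len(lines) - 1) * spacing
img_width = 2 * margin + max(font.getsize(line)[0] for line in lines)

im = Image.new("L", (img_width, img_height), 255)
draw = ImageDraw.Draw(im)
y = margin
for line in lines:
    draw.text((margin, y), line, font=font, fill=0)
    y += line_height + spacing
im.show()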
Here is a kludge that works well for me. It is a variant on gnud's answer. (Different enough to deserve a separate answer vs. a comment, I hope.) I have tested a lot of word placements and this has performed consistently.
When text is drawn without reaching the full height of the font, clipping can occur. As gnud noted, by using characters such as "Aj" (I use "Fj") you avoid this bug.
Whenever a word is placed:
1) Do a draw.textsize(text, font=font) with your desired word. Store the height/width.
2) Add ' Fj' (space + Fj) to the end of the word, redo the textsize, and store this second height/width.
3) You will do the actual text draw with the word from item 2 (with the ' Fj' at the end). Having this addendum keeps the font from being clipped.
4) Before you do the actual text draw, crop the image where the ' Fj' will land (crop.load() is required to avoid a lazy copy). Then draw the text, and paste the cropped image back over the ' Fj'.
This process avoids clipping, seems reasonably performant, and yields the full, unclipped text. Below is a copy/paste of a section of Python code I use for this. Partial example, but hopefully it adds some insight.
# note: xpos & ypos were previously set to the coordinates for the text draw
# the hard-coded addition of 4 to c_x will likely vary by font
# (I only use one font in this process, so I kludged it.)
width, height = draw.textsize(word, font=font)
word2 = word + ' Fj'
width2, height2 = draw.textsize(word2, font=font)
# crop to overwrite ' Fj' with previous image bits
c_w = width2 - width
c_h = height2
c_x = xpos + width + 4
c_y = ypos
box = (c_x, c_y, c_x + c_w, c_y + c_h)
region = img.crop(box)
region.load()
draw.text((xpos, ypos), word2, (0,0,0), font=font)
img.paste(region, box)
