How can I insert Monospace fonts into an image with opencv? - python

Currently, I am able to insert text in the HERSHEY fonts into images with the OpenCV API (putText), but OpenCV does not seem to ship any monospace font.
I was wondering how I can insert some monospace or fixed-pitch text into the image.

You could use PIL/Pillow for that aspect quite easily. OpenCV images are numpy arrays, so you can make a Pillow Image from an OpenCV image with:
PilImage = Image.fromarray(OpenCVimage)
Then you can draw with a monospaced font using the code in my answer here. You only need the 3 lines after the comment "Get a drawing context".
Then you can convert back to an OpenCV image with:
OpenCVimage = np.array(PilImage)
That might look like this:
#!/usr/local/bin/python3
from PIL import Image, ImageFont, ImageDraw
import numpy as np
import cv2
# Open image with OpenCV
im_o = cv2.imread('start.png')
# Make into PIL Image
im_p = Image.fromarray(im_o)
# Get a drawing context
draw = ImageDraw.Draw(im_p)
monospace = ImageFont.truetype("/Library/Fonts/Andale Mono.ttf",32)
draw.text((40, 80),"Hopefully monospaced",(255,255,255),font=monospace)
# Convert back to OpenCV image and save
result_o = np.array(im_p)
cv2.imwrite('result.png', result_o)
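One thing to be aware of (and glossed over above, since white text looks the same either way) is that OpenCV stores pixels in BGR order while Pillow assumes RGB, so coloured text can come out with swapped channels. A minimal sketch of converting explicitly on the way in and out:
import cv2
import numpy as np
from PIL import Image

im_o = cv2.imread('start.png')                                  # BGR order from OpenCV
im_p = Image.fromarray(cv2.cvtColor(im_o, cv2.COLOR_BGR2RGB))   # RGB for Pillow
# ... draw on im_p here ...
result_o = cv2.cvtColor(np.array(im_p), cv2.COLOR_RGB2BGR)      # back to BGR for OpenCV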
Alternatively, you could have a function generate a lump of canvas itself, write your text on it, and then splice it into your OpenCV image wherever you want. Something along these lines - though I have no idea of what flexibility you would require so I have not parameterised everything:
#!/usr/local/bin/python3
from PIL import Image, ImageFont, ImageDraw, ImageColor
import numpy as np
import cv2
def GenerateText(size, fontsize, bg, fg, text):
    """Generate a piece of canvas and draw text on it"""
    canvas = Image.new('RGB', size, bg)
    # Get a drawing context
    draw = ImageDraw.Draw(canvas)
    monospace = ImageFont.truetype("/Library/Fonts/Andale Mono.ttf", fontsize)
    draw.text((10, 10), text, fg, font=monospace)
    # Change to BGR order for OpenCV's peculiarities
    return cv2.cvtColor(np.array(canvas), cv2.COLOR_RGB2BGR)
# Open image with OpenCV
im_o = cv2.imread('start.png')
# Try some tests
w,h = 350,50
a,b = 20, 80
text = GenerateText((w,h), 32, 'black', 'magenta', "Magenta on black")
im_o[a:a+h, b:b+w] = text
w,h = 200,40
a,b = 120, 280
text = GenerateText((w,h), 18, 'cyan', 'blue', "Blue on cyan")
im_o[a:a+h, b:b+w] = text
cv2.imwrite('result.png', im_o)
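Note that "/Library/Fonts/Andale Mono.ttf" is a macOS path. If the script needs to run elsewhere, a small helper along these lines can try a few likely locations and fall back to Pillow's built-in bitmap font (a sketch only; the candidate paths are just common guesses):
from PIL import ImageFont

CANDIDATE_FONTS = [
    "/Library/Fonts/Andale Mono.ttf",                         # macOS
    "/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf",    # many Linux distros
    "C:/Windows/Fonts/consola.ttf",                           # Windows (Consolas)
]

def load_monospace(size):
    """Return the first monospaced TrueType font that can be opened."""
    for path in CANDIDATE_FONTS:
        try:
            return ImageFont.truetype(path, size)
        except OSError:
            continue
    return ImageFont.load_default()   # last resort: Pillow's built-in bitmap font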
Keywords: OpenCV, Python, Numpy, PIL, Pillow, image, image processing, monospace, font, fonts, fixed, fixed width, courier, HERSHEY.

Related

How can I use pango (HTML subset) with the ImageMagick Python library wand?

My goal is to take a picture and add centered text to it. I want to use italics and bold for this text, specified with HTML-like Pango markup.
I currently have this code:
import os
from wand.image import Image
from wand.drawing import Drawing
from wand.color import Color

with Image(filename='testimg.png') as img:
    with Drawing() as draw:
        draw.font = 'Arial'
        draw.font_size = 36
        text = 'pango:<b>Formatted</b> text'
        metrics = draw.get_font_metrics(img, text)
        (width, height) = (metrics.text_width, metrics.text_height)
        print(width, height)
        x = int((img.width - width) / 2)
        y = int((img.height - height) / 2)
        draw.fill_color = Color('black')
        draw.text(x, y, text)
        draw(img)
    img.save(filename='output.jpg')
However, the text does not get formatted; it is rendered literally as "pango:Formatted text", and it is very hard to find any documentation.
(Before this approach I tried using Pillow, but that does not seem to support anything HTML-like at all.)
It kind of works if you create a new image and pass a pango: string as the filename:
import os
from wand.image import Image
from wand.drawing import Drawing
from wand.color import Color

# Open the image file
with Image(filename='testimg.png') as img:
    # Render the pango markup as its own image
    with Image(filename="pango:<b>Formatted</b> text") as text_img:
        text_img.transparent_color('white', alpha=0, fuzz=0)
        text_img.font_path = r"Montserrat-SemiBold.ttf"
        text_img.font_size = 36
        # Calculate the x and y coordinates to center the text on the image
        x = int((img.width - text_img.width) / 2)
        y = int((img.height - text_img.height) / 2)
        # Draw the text on the image
        img.composite(text_img, left=x, top=y)
    # Save the image
    img.save(filename='outputnew.jpg')
However, the result is rather ugly because the text is rendered as if it sat on a white background, even though the final background is not white.
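One possible refinement (an untested sketch; the background keyword argument is assumed to be available in your wand version) is to ask ImageMagick to render the pango markup on a transparent background in the first place, rather than keying out white afterwards:
from wand.color import Color
from wand.image import Image

# Render the pango markup straight onto a transparent canvas,
# roughly the equivalent of `convert -background none pango:...`.
with Image(filename='pango:<b>Formatted</b> text',
           background=Color('transparent')) as text_img:
    text_img.save(filename='text_only.png')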

How to render a TTF glyph to an image with FontTools

I want to render a glyph of a TrueType font into an image (numpy.array):
import numpy as np
from fontTools.ttLib import TTFont
import matplotlib.pyplot as plt
font = TTFont('font.ttf')
glyf = font['glyf']['A']
coords = np.array(glyf.coordinates)
coords = np.swapaxes(coords, 0, 1)
plt.scatter(coords[0], coords[1])
These are the vertices.
How can I draw the glyph to a numpy.array? I found glyf.draw(...), but I could not find a tutorial or any examples of how to use it. I also could not find any information about the pen concept.
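For context, fontTools' "pen" concept is a visitor-style drawing interface: draw() reports the outline to a pen object by calling its moveTo/lineTo/qCurveTo/closePath methods. A minimal sketch of inspecting a glyph's outline with the bundled RecordingPen (the glyf table itself is passed to draw() so composite glyphs can be resolved):
from fontTools.ttLib import TTFont
from fontTools.pens.recordingPen import RecordingPen

font = TTFont('font.ttf')
glyf_table = font['glyf']
glyph = glyf_table['A']

pen = RecordingPen()
glyph.draw(pen, glyf_table)   # the glyf table is needed to resolve composite glyphs

# pen.value is a list of (operator, operands) tuples, e.g.
# ('moveTo', ((x, y),)), ('qCurveTo', (...)), ('closePath', ())
for operator, operands in pen.value:
    print(operator, operands)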
Edit 1:
I found a way to render text with Pillow:
from PIL import ImageFont, ImageDraw, Image
image = Image.new(mode='L', size=(128,128), color=224)
draw = ImageDraw.Draw(image)
imageFont = ImageFont.truetype('font.ttf', 64)
draw.text((0, 0), "A", font=imageFont)
image
That is a good start, but I need more control over the final result. The glyph should be centered and sized so that it uses the available space more efficiently.
I am also interested in gridlines, e.g. the baseline and others.
Edit 2:
I found some hints in this question: How to get the font pixel height using PIL's ImageFont class?
from PIL import ImageFont, ImageDraw, Image
x_size = 128
y_size = 128
font_size = 64
imageFont = ImageFont.truetype('font.ttf', font_size)
ascent, descent = imageFont.getmetrics()
image = Image.new(mode='L', size=(x_size, y_size), color=224)
draw = ImageDraw.Draw(image)
text = 'Aj;^'
draw.line([0,ascent,127,ascent], fill=128)
draw.line([0,descent,127,descent], fill=128)
draw.text((0, 0), text, font=imageFont)
image
There are two lines marking two points on the y-axis. But as you can see, some characters extend below the line, and characters overlap in the x-direction, as you can see with the "j" and "A".
I still need more control over the final result.
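One way to get the centring asked for (a sketch; it relies on the anchor argument for TrueType text, which needs Pillow 8.0 or later):
from PIL import Image, ImageDraw, ImageFont

size = 128
image = Image.new(mode='L', size=(size, size), color=224)
draw = ImageDraw.Draw(image)
imageFont = ImageFont.truetype('font.ttf', 64)

# anchor='mm' places the middle of the text box on the given point,
# so the glyph ends up centred on the canvas.
draw.text((size // 2, size // 2), 'A', fill=0, font=imageFont, anchor='mm')
image.save('glyph.png')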

How to draw square pixel by pixel (Python, PIL)

On a blank canvas I want to draw a square, pixel by pixel, using Pillow.
I have tried using img.putpixel((30,60), (155,155,55)) to draw one pixel but it doesn't do anything.
from PIL import Image
def newImg():
    img = Image.new('RGB', (1280,768))
    img.save('sqr.png')
    return img
wallpaper = newImg()
wallpaper.show()
Running the code you say you have tried works fine; see below.
To draw the rectangle, repeat the img.putpixel((30,60), (155,155,55)) command with other coordinates.
from PIL import Image
def newImg():
    img = Image.new('RGB', (100, 100))
    img.putpixel((30,60), (155,155,55))
    img.save('sqr.png')
    return img
wallpaper = newImg()
wallpaper.show()
sqr.png
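For completeness, a full square outline drawn pixel by pixel might look roughly like this (a sketch with arbitrary corner coordinates and colour):
from PIL import Image

img = Image.new('RGB', (100, 100))
x0, y0, x1, y1 = 20, 20, 80, 80        # corners of the square (example values)
colour = (155, 155, 55)

for x in range(x0, x1 + 1):            # top and bottom edges
    img.putpixel((x, y0), colour)
    img.putpixel((x, y1), colour)
for y in range(y0, y1 + 1):            # left and right edges
    img.putpixel((x0, y), colour)
    img.putpixel((x1, y), colour)

img.save('sqr.png')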

How to draw text with image in background?

I want to make something like this in Python.
I have an image in the background and want to write the text with a transparent fill, so that the image shows through.
Here's one way I found to do it using the Image.composite() function, which is documented here and here.
The approach used is described (very) tersely in this answer to the question Is it possible to mask an image in Python Imaging Library (PIL)? by @Mark Ransom. The following is just an illustration of applying it to accomplish what you want to do.
from PIL import Image, ImageDraw, ImageFont

BACKGROUND_IMAGE_FILENAME = 'cookie_cutter_background_cropped.png'
RESULT_IMAGE_FILENAME = 'cookie_cutter_text_result.png'
THE_TEXT = 'LOADED'
FONT_NAME = 'arialbd.ttf'  # Arial Bold

# Read the background image and convert to an RGB image with Alpha.
with open(BACKGROUND_IMAGE_FILENAME, 'rb') as file:
    bgr_img = Image.open(file)
    bgr_img = bgr_img.convert('RGBA')  # Give image an alpha channel.
    bgr_img_width, bgr_img_height = bgr_img.size
    cx, cy = bgr_img_width//2, bgr_img_height//2  # Center of image.

    # Create a transparent foreground to be the result of non-text areas.
    fgr_img = Image.new('RGBA', bgr_img.size, color=(0, 0, 0, 0))

    font_size = bgr_img_width//len(THE_TEXT)
    font = ImageFont.truetype(FONT_NAME, font_size)
    txt_width, txt_height = font.getsize(THE_TEXT)  # Size of text w/font if rendered.
    tx, ty = cx - txt_width//2, cy - txt_height//2  # Top-left corner so the text is centered.

    mask_img = Image.new('L', bgr_img.size, color=255)
    mask_img_draw = ImageDraw.Draw(mask_img)
    mask_img_draw.text((tx, ty), THE_TEXT, fill=0, font=font, align='center')

    res_img = Image.composite(fgr_img, bgr_img, mask_img)
    res_img.save(RESULT_IMAGE_FILENAME)
    res_img.show()
Which, using the following background image:
produced the image shown below, viewed here in Photoshop so that its transparent background is discernible (not to scale):
Here's an enlargement, showing the smoothly rendered edges of the characters:
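A small caveat if you run this on a recent Pillow: font.getsize() was removed in Pillow 10, so that single measurement line would need to use getbbox() instead, along these lines:
# Replacement for: txt_width, txt_height = font.getsize(THE_TEXT)
left, top, right, bottom = font.getbbox(THE_TEXT)
txt_width, txt_height = right - left, bottom - top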

How to make the background of an image transparent with Wand?

I have an image with a white background, and want to convert the white background to transparent. How can I do this with Wand?
The ImageMagick command to do this is:
convert ~/Desktop/cat_with_white_gb.png -transparent white ~/Desktop/cat_with_transparent_bg.png
I have tried:
import urllib2
from wand.image import Image
from wand.color import Color

fg_url = 'http://i.stack.imgur.com/Mz9y0.jpg'
fg = urllib2.urlopen(fg_url)
with Image(file=fg) as img:
    img.background_color = Color('transparent')
    img.save(filename='test.png')
and
with Image(file=fg) as fg_img:
    with Color('#FFF') as white:
        fg_img.transparent_color(white, 0.0)
The big thing to remember is that JPEG source images will not have an alpha channel. You can add one by setting wand.image.Image.alpha_channel, or simply by changing the image format to something that supports transparency.
from wand.image import Image
from wand.color import Color

with Image(filename="http://i.stack.imgur.com/Mz9y0.jpg") as img:
    img.format = 'png'
    with Color('#FDFDFD') as white:
        twenty_percent = int(65535 * 0.2)  # Note: percent must be calculated from Quantum
        img.transparent_color(white, alpha=0.0, fuzz=twenty_percent)
    img.save(filename="/tmp/Mz9y0.png")
Perhaps the fuzz of 20% is too aggressive in this example.
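If hard-coding 65535 feels brittle, the quantum range can be read off the image itself, so the fuzz adapts to however ImageMagick was compiled (a short sketch):
# Inside the `with Image(...) as img:` block, derive the fuzz from the
# image's own quantum range instead of assuming a 16-bit quantum depth.
twenty_percent = int(img.quantum_range * 0.2)
img.transparent_color(white, alpha=0.0, fuzz=twenty_percent)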
