How to make a font sheet (for use in a Python roguelike) - python

I have continued making progress on my Python roguelike, and dived further into this tutorial: http://roguebasin.roguelikedevelopment.org/index.php?title=Complete_Roguelike_Tutorial,_using_python%2Blibtcod . I also made my own font that I'd like to use in this game, but I'm not sure how to do so. This is a link to the font image I'm currently using: http://i.imgur.com/j6FdNky.png . In the Python code, the custom font is set to 'arial10x10.png', which is that font image. I tried making an image from my own font, but it got really distorted.
Does anyone know how I could implement my own font? Also, I'm using libtcod, and I only have my own font in a .ttf format.
Thanks.

To render your TrueType font to a bitmap in the way that libtcod expects, you should use a separate library -- font rendering is a surprisingly complex task. FreeType is a very popular open source library for font rendering. You can find a Python interface here: http://code.google.com/p/freetype-py/. You will only need to use FreeType in a tool you'll use when developing your roguelike, not in your actual game.
First, determine what characters you will be using in your roguelike, and decide how to lay out these characters on your font sheet. You can also simply choose to use the same layout as the one in the image you posted -- that's a sheet with 32 columns, starting at the space character (character 32).
Using your font rendering library, render each character by itself at the desired size. Pay attention to the size generated for each -- for instance, a '.' will be small and a 'w' will be large, even at the same font size. You must not just calculate a height, but a height above the baseline and a height below the baseline. (Example: if 'A' and 'g' are both 16 pixels tall, it's possible that you'll still need a rectangle taller than 16 pixels to align both correctly within it -- baseline-to-baseline.) Find the smallest rectangle size that will accommodate all of these sizes -- this is how large each cell in your font sheet must be.
Once you know how large your sheet will be, make another pass through all the desired letters to construct your bitmap, putting each letter in its cell. As far as y-positioning goes, all baselines must be aligned. If some characters are wider than others, you can choose to center or to left-align each character within its cell. (Each of these will come with its own weirdnesses -- you're really going to want a fixed-width font for a roguelike.)
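If it helps, here is a rough sketch of that two-pass process using freetype-py together with PIL to assemble the sheet. The font file name, pixel size, character range, and output name are placeholders, and the layout is the 32-column, space-first arrangement described above:

import freetype
from PIL import Image

FONT_FILE = "myfont.ttf"    # placeholder: your .ttf file
PIXEL_SIZE = 16             # placeholder: render size
COLUMNS = 32
CHARS = [chr(c) for c in range(32, 127)]   # printable ASCII, starting at space; extend as needed

face = freetype.Face(FONT_FILE)
face.set_pixel_sizes(0, PIXEL_SIZE)

# Pass 1: render every glyph once and record its extent above/below the baseline.
glyphs = []
for ch in CHARS:
    face.load_char(ch)                        # renders an 8-bit antialiased bitmap
    bmp = face.glyph.bitmap
    pixels = [bmp.buffer[r * bmp.pitch + c]   # copy row by row; pitch may exceed width
              for r in range(bmp.rows) for c in range(bmp.width)]
    img = Image.new("L", (max(bmp.width, 1), max(bmp.rows, 1)), 0)
    if pixels:
        img.putdata(pixels)
    glyphs.append((img, face.glyph.bitmap_left, face.glyph.bitmap_top))

# The cell must fit the widest glyph, the tallest ascent, and the deepest descent.
ascent = max(top for img, left, top in glyphs)
descent = max(img.size[1] - top for img, left, top in glyphs)
cell_w = max(img.size[0] + left for img, left, top in glyphs)
cell_h = ascent + descent

# Pass 2: paste each glyph into its cell, with every baseline aligned at y = ascent.
sheet_rows = (len(CHARS) + COLUMNS - 1) // COLUMNS
sheet = Image.new("L", (COLUMNS * cell_w, sheet_rows * cell_h), 0)
for i, (img, left, top) in enumerate(glyphs):
    col, row = i % COLUMNS, i // COLUMNS
    sheet.paste(img, (col * cell_w + left, row * cell_h + (ascent - top)))

sheet.save("myfont_sheet.png")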
Additional tips:
Use antialiasing. This will make your font easier on the eyes than pure monochrome.
Don't use colour; render your font in grayscale. libtcod has functionality to generate coloured text from your grayscale font sheet.
Consider whether you want a square font or not. The advantage of a square font is that "circles" in your roguelike world will look like circles on the screen. The disadvantage is that square fonts are generally "uglier" and harder to read.

Related

How to change image position and text wrapping using python-docx?

After adding the image using add_picture(), I want to change the image's position and text-wrapping properties.
I want to change the text wrapping to "In front of text", and set the position properties as follows:
Horizontal alignment: Right, relative to: Margin
Vertical absolute position: 2.15 cm, below: Page
This is how I would change it manually in Word, but I want to do it using python-docx.
Is there any way to get it done?
The short answer is "No."
There are two ways images can be placed in Word: inline images and floating images.
An inline image is placed in a run and is essentially treated as a big character. The height of the line it appears on is adjusted upward to fit the image and the paragraph it is in flows between pages depending on the text before it, just like any other paragraph.
A floating image lies on the drawing layer, like a clear plastic sheet above the document layer where the text lives. It is given an absolute position and in general does not flow with the text (although it can be anchored to part of the text). Text can be set to wrap around the image, wherever it ends up on the page.
python-docx currently only supports inline images. There is no existing API support for floating images (and the text wrapping they allow).
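For completeness, a minimal sketch of the inline case that python-docx does support (the file names and size below are placeholders):

from docx import Document
from docx.shared import Cm

doc = Document()
doc.add_paragraph("Some text before the picture.")
# add_picture() always inserts an inline image; it flows with the text like a big character.
doc.add_picture("photo.png", width=Cm(5))
doc.save("report.docx")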

Converting Text to Image with Unicode characters

The idea here is that the given text happens to be Devanagari characters, such as संस्थानका कर्मचारी, and I want to convert that text to an image. Here is what I have attempted.
from PIL import Image, ImageDraw, ImageFont
import cv2

def draw_image(myString):
    width = 500
    height = 100
    back_ground_color = (255, 255, 255)
    font_size = 10
    font_color = (0, 0, 0)
    unicode_text = myString
    im = Image.new("RGB", (width, height), back_ground_color)
    draw = ImageDraw.Draw(im)
    unicode_font = ImageFont.truetype("arial.ttf", font_size)
    draw.text((10, 10), unicode_text, font=unicode_font, fill=font_color)
    im.save("text.jpg")
    # show the saved image with OpenCV and wait for 'q'
    cv2.imshow("text", cv2.imread("text.jpg"))
    if cv2.waitKey(0) == ord('q'):
        cv2.destroyAllWindows()
But the font is not recognized, so the image consists of boxes and other characters that are unintelligible. So, which font should I use to get the correct image? Or is there a better approach to converting text in characters such as these to an image?
I had a similar problem when I wanted to write Urdu text onto images. Firstly, you need the correct font, since writing purely with PIL or even OpenCV requires appropriate Unicode support; and even with the appropriate font, the letters of a word come out disjointed, so you don't get the correct results.
To resolve this you have to stray a bit from the traditional Python-only approach. Since I was creating artificial datasets for an OCR and needed to print large sets of such words onto white backgrounds, I decided to use graphics software for this -- some, like Photoshop, even allow you to write scripts to automate the process.
The software I went for was GIMP, which lets you quickly write and run extensions/scripts to automate the process. It allows you to write an extension in Python, or more accurately a modified version of Python known as Python-Fu. Documentation was limited, so it was difficult to get started, but with some persistence I was able to write functions that would read text from a text file, place it on white backgrounds, and save the results to disk.
I was able to generate around 300k images this way in a matter of hours. If you too are aiming to render large amounts of text, I would suggest relying on Python-Fu and GIMP.
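For the sake of illustration, here is a rough Python-Fu sketch of that kind of function. It is meant to run inside GIMP (for example pasted into the Python-Fu console); the font name, image size, and file names are placeholders, and the procedure/constant names are the GIMP 2.8-era ones:

from gimpfu import *

def render_word(text, out_path, font="Sans", size=48):
    img = gimp.Image(600, 120, RGB)
    bg = gimp.Layer(img, "background", 600, 120, RGB_IMAGE, 100, NORMAL_MODE)
    img.add_layer(bg, 0)
    pdb.gimp_drawable_fill(bg, WHITE_FILL)            # white background
    pdb.gimp_context_set_foreground((0, 0, 0))        # black text
    pdb.gimp_text_fontname(img, None, 10, 10, text, 0, True, size, PIXELS, font)
    flat = pdb.gimp_image_flatten(img)
    pdb.gimp_file_save(img, flat, out_path, out_path)
    pdb.gimp_image_delete(img)

# e.g. render one word per line of a (hypothetical) words.txt:
# for i, word in enumerate(open("words.txt").read().splitlines()):
#     render_word(word, "word_%06d.png" % i)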
For more info you may refer to the GIMP Python Documentation

detecting font of text in image

I want to detect the font of the text in an image so that I can do better OCR on it. Searching for a solution, I found this post. Although it may seem the same as my question, it does not exactly address my problem.
Background
For OCR I am using Tesseract, which uses trained data to recognize text. Training Tesseract with lots of fonts reduces the accuracy, which is natural and understandable. One solution is to build multiple sets of trained data -- one per few similar fonts -- and then automatically use the appropriate data for each image. For this to work we need to be able to detect the font in the image.
Number 3 in this answer uses OCR to isolate the image of each character along with its recognized character, then generates the same character's image with each font and compares them with the isolated image. In my case the user would provide a bounding box and the character associated with it. But because I want to OCR Arabic script (which is cursive, and character shapes vary depending on which other characters are adjacent) and because the bounding box may not actually be the minimal bounding box, I am not sure how I can do the comparison.
I believe Hausdorff distance is not applicable here. Am I right?
Shape context may be good (?), and there is a ShapeContextDistanceExtractor class in OpenCV, but I am not sure how to use it from opencv-python.
Thank you, and sorry for my bad English.
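For reference, a rough sketch of how ShapeContextDistanceExtractor is typically called from opencv-python (assuming OpenCV 4.x; the file names are placeholders and the contour sampling is the simplest thing that could work):

import cv2
import numpy as np

def contour_points(path, n_points=200):
    # binarize the glyph and take its largest external contour
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # OpenCV 4 return order
    contour = max(contours, key=cv2.contourArea)
    # resample to a fixed number of points so the two shapes are comparable
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    return contour[idx].astype(np.float32)

extractor = cv2.createShapeContextDistanceExtractor()
distance = extractor.computeDistance(contour_points("glyph_from_scan.png"),
                                     contour_points("glyph_rendered_in_candidate_font.png"))
print(distance)   # smaller distance means more similar shapes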

Assigning different palette indices to a paletted image

I'm writing a game with Python and Pygame. For this, the graphics will be in the style of old video game consoles like the NES. Therefore, the graphics consist of a single tileset file with 2-bit (4-colour) images, and I want to be able to assign an arbitrary 4-colour palette to these images when loading them.
What I want to do is use an 8-bit (256-colour) palette mode, with a palette that I have divided into 64 sub-palettes of 4 colours each. Every time I load a 16x16 tile from the 2-bit graphics file, I want to assign one of these virtual 4-colour palettes to it. So, in the raw tile set file, the palette indices are going to be 0-3, because it is a 2-bit indexed file. I want to load tiles from this file into memory, and use a function to reassign the palette indices from 0-3 to whatever palette offset I choose, so that when I blit it to screen, it is coloured in my choice of 4-colour palette -- much like the NES hardware works. This gets a little hairy to explain, so maybe this picture makes it a little clearer:
I have looked around the manuals of Pygame and PIL and found nothing that lets me manipulate paletted files like this. Are there any other libs to look into, or is there a simpler solution I'm not seeing?
Although I personally have not done this, in PyGame I believe the call you're searching for is:
http://pygame.org/docs/ref/surface.html#Surface.set_palette
You can do this with pygame.Surface.set_palette_at(). In your case you have sixty-four blocks of four colours each, so to recolour a specific block you might use:
# fourNewColours: the four RGBA tuples you want to drop into the palette.
# palette_offset: the first palette index you want to replace; adding n steps through the new colours.
for n, newRGBA in enumerate(fourNewColours):
    GameSurface.set_palette_at(palette_offset + n, newRGBA)
This is how you might replace individual blocks of four colours. Pushing every palette into the game surface to start with may be more complicated, since pygame seems to approximate blitted images' palettes instead of replacing the destination surface's indices directly. It is more complicated because I don't know how pygame handles duplicate RGBA values at different indices. (For instance, if Palette 14/2 and Palette 40/1 are both (90, 91, 200, 255), I don't know whether pygame will assign all pixels with that RGBA value to palette index 57, from Pal14/2, and none to index 160, from Pal40/1, or whether the two will remain distinct.)
This could become an issue if you want to recolour images on the fly. It ought to work reliably if you take the slightly longer route and alter the game palette as needed, then blit (or re-blit) surfaces with the relevant new palettes later.
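To make that concrete, a minimal sketch of replacing one four-colour block in an 8-bit surface's palette (the surface size, sub-palette number, and colours are placeholder values):

import pygame

pygame.init()

# An 8-bit surface: its 256-entry palette is treated as 64 sub-palettes of 4 colours each.
game_surface = pygame.Surface((256, 240), depth=8)

sub_palette_number = 12                    # hypothetical choice in the range 0-63
palette_offset = 4 * sub_palette_number    # index of that block's first entry

# The four colours to install in that block.
fourNewColours = [(0, 0, 0), (96, 96, 96), (192, 192, 192), (255, 255, 255)]
for n, newRGB in enumerate(fourNewColours):
    game_surface.set_palette_at(palette_offset + n, newRGB)

print(game_surface.get_palette_at(palette_offset))   # (0, 0, 0, 255)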
Hope that's useful (even though it's well over two and a half years later)!
If you want to see a longer-form version of this answer, check out this link.

Tools to extract glyph data from bitmap font image

I have all the characters of a font rendered in a PNG. I want to use the PNG for texture-mapped font rendering in OpenGL, but I need to extract the glyph information - character position and size, etc.
Are there any tools out there to assist in that process? Most tools I see generate the image file and glyph data together. I haven't found anything to help me extract from an existing image.
I use the gimp as my primary image editor, and I've considered writing a plugin to assist me in the process of identifying the bounding box for each character. I know python but haven't done any gimp plugins before, so that would be a chore. I'm hoping something already exists...
Generally, the way this works is that you use a tool to generate the glyph images. That tool will also generate metric information for those glyphs (how big they are, etc.). You shouldn't be analyzing the image to find the glyphs; you should have additional information alongside your glyphs that tells you where they are, how big they should be, etc.
Consider the letter "i". Depending on the font, there will be some space to the left and right of it. Even if you have a tool that can identify the glyph "i", you would need to know how many pixels of space the font put to the left and right of the glyph. This is pretty much impossible to do accurately for all letters. Not without some very advanced computer vision algorithms. And since the tool that generated those glyphs already knew how much spacing they were supposed to get, you would be better off changing the tool to write the glyph info as well.
You can use PIL to help you automate the process.
Assuming there is at least one row/column of background colour separating lines/characters, you can use the Image.crop method to check each row (and then each column within a line) to see whether it contains only the background colour; this gives you the borders of each character.
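A rough sketch of that scan, assuming a plain white background and a hypothetical "fontsheet.png" with at least one blank row/column between lines and characters:

from PIL import Image

BACKGROUND = (255, 255, 255)

img = Image.open("fontsheet.png").convert("RGB")
w, h = img.size

def is_blank(box):
    # True if the cropped strip contains only the background colour
    return set(img.crop(box).getdata()) == {BACKGROUND}

def spans(blank_flags):
    # turn a list of per-row/column blank flags into (start, end) runs of glyph pixels
    runs, start = [], None
    for i, blank in enumerate(blank_flags + [True]):
        if not blank and start is None:
            start = i
        elif blank and start is not None:
            runs.append((start, i))
            start = None
    return runs

# Blank rows separate lines of glyphs; within a line, blank columns separate glyphs.
for top, bottom in spans([is_blank((0, y, w, y + 1)) for y in range(h)]):
    for left, right in spans([is_blank((x, top, x + 1, bottom)) for x in range(w)]):
        print("glyph bounding box:", (left, top, right, bottom))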
Ping if you need further assistance.
