How to split other information from binary string? - python

I have an image produced by a Python script that has to be shown in a LabVIEW program. The pixels of the image are sent (with sys.stdout.buffer.write) as a string of U32 pixels, so I used Unflatten From String in the LabVIEW code to display the image. However, the output from Python includes other information, as shown in the picture below; when I split it off "manually" I get the right picture. My question is: how can I get only the pixel information from the Python output?

You can use the Match Pattern node twice to cut off the first two lines, like this:
Note that you might need to replace \n with \r\n, depending on how your actual input is coded.
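The cleanest fix is often on the Python side: write nothing but the pixel bytes to stdout and keep any diagnostics on stderr. A minimal sketch, assuming the extra information is newline-terminated header text in front of the raw pixel data (send_pixels and strip_header are hypothetical names):

```python
import sys

# Hypothetical sender: anything that is not pixel data goes to stderr,
# so stdout carries only the flattened U32 pixel string for LabVIEW.
def send_pixels(pixel_bytes, width, height):
    print(f"size: {width}x{height}", file=sys.stderr)
    sys.stdout.buffer.write(pixel_bytes)

# If you instead receive a mixed payload, the header lines can be split
# off in Python before unflattening (the byte-level equivalent of the
# two Match Pattern nodes).
def strip_header(raw, header_lines=2):
    return raw.split(b"\n", header_lines)[-1]
```

Either way, keeping the streams separate means LabVIEW never has to guess where the header ends.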

Related

How to parse and preserve text formatting (Python-Docx)?

I'm using Python-Docx to export all the data from a 500-page Docx file into a spreadsheet using pandas. So far so good except that the process is removing all character styles. I have written the following to preserve superscript, but I can't seem to get it working.
for para in document.paragraphs:
    content = para.text
    for run in para.runs:
        if run.font.superscript:
            r.font.superscript = True
            r = para.add_run(run.text)
            scripture += r.text
My input text might be, for example:
Genesis 1:1 1 In the beginning God created the heavens and the earth.
But my output into the Xlsx file is:
Genesis 1:1 1 In the beginning God created the heavens and the earth. (Still losing the superscript formatting).
How do I preserve the font.style of each run for export? Perhaps more specifically, how do I get the text formatting from each run to be encoded into the "scripture" string?
Any help is greatly appreciated!
You cannot encode font information in a str object. A str object is a sequence of characters, and that's all it is. It cannot indicate "make these five characters bold and the following three characters italic." There's just no place to put that sort of thing, and the str data type is not made for that job.
Font (character-formatting) information must be stored in a container object of some sort. In Word, that's a run. In HTML, it can be a <span> element. If you want character formatting in your spreadsheet, you'll need to know how character formatting is stored in the target format (Excel, say) and then apply it to text in that export format on a run-by-run basis.
There are some other problems with your code you should be aware of:
the r in r.font.superscript = True is being used before being defined. The r = para.add_run(run.text) line would need to appear before it to avoid a NameError. I wouldn't bother here, because it turns out the line isn't actually doing anything useful, but names need to be defined before use.
You are doubling the size of the source paragraph by adding runs to it. This part contributes nothing, because you then read run.text, which as we mentioned cannot carry any character-formatting information, so the formatting gets stripped back out.
The same result as your current code can be achieved by this:
scripture = "".join(p.text for p in document.paragraphs)
but I think you'll want an approach like:
Parse out bits that go in separate cells
Within the text that goes into a single cell, write a "rich-text" cell something like that described here for XlsxWriter: https://xlsxwriter.readthedocs.io/example_rich_strings.html
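As a minimal sketch of that idea, you can collect a (style, text) segment per run instead of flattening to a plain str; the segment list mirrors the alternating format/string arguments that XlsxWriter's write_rich_string() expects. runs_to_segments is a hypothetical helper, and the run objects are only assumed to expose .text and .font.superscript, as python-docx runs do:

```python
# Collect per-run formatting instead of flattening everything to a str.
# Each segment is a (style, text) pair; adjacent runs with identical
# formatting are merged so the output stays compact.
def runs_to_segments(runs):
    segments = []
    for run in runs:
        style = "superscript" if run.font.superscript else None
        if segments and segments[-1][0] == style:
            # Same formatting as the previous run: extend it.
            segments[-1] = (style, segments[-1][1] + run.text)
        else:
            segments.append((style, run.text))
    return segments
```

Feeding each segment to the spreadsheet writer (creating a superscript cell format for the "superscript" segments) is then a straightforward translation step.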

Python, Convert white spaces to invisible ASCII character

A quick question. I have no idea how to google for an answer.
I have a python program for taking input from users and generating string output.
I then use this string variables to populate text boxes on other software (Illustrator).
The string is made out of: number + '%' + text, e.g.
'50% Cool', '25% Alright', '25% Decent'.
These three elements are entered into one text box (next to one another), and, as is typical with text boxes, if the whole text does not fit on one line, it wraps down to the next line at the first white space ' ' it finds. Like so:
50% Cool 25% Alright 25%
Decent
I need to keep this feature (where text gets moved down to a lower line if it does not fit), but I need it to move the whole element rather than splitting it.
Like So:
50% Cool 25% Alright
25% Decent
The only way I can think of to stop this from happening is to use some sort of invisible ASCII character which connects each element together (while still looking like a normal space to the reader).
Does anyone know of such an ASCII connector that could be used?
So, understand first of all that what you're asking about is encoding-specific. In ASCII/ANSI encoding (or Latin-1), a non-breaking space can be character 160 (or character 255 in some older code pages). (See this discussion for details.) Eg:
non_breaking_space = chr(160)
HOWEVER, that's for encoded binary strings. Assuming you're using Python 3 (which you should consider if you're not), your strings are all Unicode strings, not encoded binary strings.
And this also begs the question of how you plan on getting these strings into Illustrator. Does the user copy and paste them? Are they encoded into a file? That will affect how you want to transmit a non-breaking space.
Assuming you're using Python 3 and not worrying about encoding, this is what you want:
'Alright\u00a025%'
\u00a0 inserts a Unicode no-break space (U+00A0). Note that \u0020 is just the ordinary breaking space.
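A minimal sketch of that idea: replace the spaces inside each element with a no-break space and join the elements with ordinary spaces, so the text box can wrap between elements but never inside one (glue_elements is a hypothetical name):

```python
# U+00A0 NO-BREAK SPACE: renders like a space but forbids a line break.
NBSP = "\u00a0"

def glue_elements(elements):
    # Spaces *inside* an element become non-breaking; the joins between
    # elements stay ordinary spaces, so wrapping happens only there.
    return " ".join(e.replace(" ", NBSP) for e in elements)
```

Since the question asks for an "ASCII" connector: strictly, 7-bit ASCII has no non-breaking character at all; U+00A0 comes from Latin-1/Unicode.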

Python3 src encodings of Emojis

I'd like to print emojis from python(3) src
I'm working on a project that analyzes Facebook message histories, and in the raw .htm data file I downloaded I find a lot of emojis displayed as boxes with question marks, as happens when a value can't be displayed. If I copy-paste these symbols into a terminal as strings, I get values such as \U000fe328. This is also the output I get when I run the .htm files through BeautifulSoup.
I Googled this string (and others), and one of the only sites that consistently comes up is iemoji.com; in the case of the above string, this page lists it as a "Python Src" value. I want to be able to print out these strings as their corresponding emojis (after all, they were originally emojis when they were messaged). Looking around, I found a mapping of src encodings at this page that maps strings like the above to emoji names, and then this list that maps emoji names to Unicode, which works for the most part. If I try printing those values, I get good output, like the following:
>>> print(u'\U0001F624')
😤
Is there a way to map these "Python src" encodings to their Unicode values? Chaining both mappings would work if not for the fact that the original src mapping is missing around 50% of the Unicode values found in the Unicode list. And if I do end up having to do that, is there a good way to find the Python src value of a given emoji? From my testing, emoji as strings equal their Unicode escapes, such as '😤' == u'\U0001F624', but I can't seem to find any such relation to \U000fe328.
This has nothing to do with Python. An escape like \U000fe328 just contains the hexadecimal representation of a code point, so this one is U+0FE328 (which is a private use character).
These days a lot of emoji are assigned to code points, eg. 😤 is U+01F624 — FACE WITH LOOK OF TRIUMPH.
Before these were assigned, various programs used various code points in the private use ranges to represent emoji. Facebook apparently used the private use character U+0FE328. The mapping from these code points to the standard code points is arbitrary. Some of them may not have a standard equivalent at all.
So what you have to look for is a table which tells you which of these old assignments correspond to which standard code point.
There's php-emoji on GitHub which appears to contain these mappings. But note that this is PHP code, and the characters are represented as UTF-8 (eg. the character above would be "\xf3\xbe\x8c\xa8").
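Once you have such a table, applying it in Python is a one-liner via str.translate, which maps code points to code points. A sketch; the single entry below is a hypothetical example for illustration, not a verified Facebook mapping, so a real table would have to be extracted from something like php-emoji:

```python
# Map legacy private-use code points to standard emoji code points.
# str.translate takes a dict keyed by integer code points.
PUA_TO_STANDARD = {
    0x0FE328: 0x1F624,  # hypothetical: Facebook PUA -> FACE WITH LOOK OF TRIUMPH
}

def remap_emoji(text):
    # Characters without an entry pass through unchanged.
    return text.translate(PUA_TO_STANDARD)
```

Private-use characters with no standard equivalent will simply survive untranslated, which is probably the behavior you want.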

How do I return the most similar Unicode character to a section of an image?

I made a simple converter in Python to convert images to ASCII. Right now it uses various shades of dark characters, so it works but it is hard to make out at low resolutions: for example, the Google logo comes out as:
.. .;. .#
a; .. .; . .. a. # ...;.
aa .a.▒. ▒.;. ;.;; a. ▒ #a
.;.. .; ..... . ..;;; ; ;..
.a. .;
This can barely be made out. Is there a way that I could compare each section to a subset of Unicode characters and return the most similar, so it could return for example something like:
./--.\. /▒
a; ./-.; / \ ./ \\ ▒ ./━\.
aa -a.▒. ▒.|. |.;▒ ┃ ▒ ▒-~┘
\;.. /| \\_// \ / .\;;; ▒ \\.-
.pp--▒
Generate an image for each character you'd like to use, in the font which you'll be using. You will probably use a fixed width font which will make it possible to create one large image and break it up later. This might be as easy as typing the characters into an editor and doing a screen capture.
For each patch of the input image, compare the patch to all of the character images. I would take corresponding pixels from the patch and the character and square the difference, and sum them up - the character with the lowest sum is the one that most closely matches the patch.
You might improve the results by doing a blur on the character images, the input image, or both. You also might get better results by increasing the contrast on the input image.
Another idea to improve both result quality and speed would be to calculate the average darkness of each character, and only attempt to match characters that are nearly the same darkness as the patch.
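The matching step described above can be sketched in a few lines, with patches and glyph bitmaps represented as flat lists of gray values of equal length (best_match is a hypothetical name):

```python
# Pick the character whose rendered bitmap is closest to the image
# patch, by sum of squared pixel differences (lowest sum wins).
def best_match(patch, glyphs):
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(glyphs, key=lambda ch: ssd(patch, glyphs[ch]))
```

The darkness pre-filter mentioned above would just restrict the glyphs dict to characters whose mean value is near the patch's mean before calling this.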
This is an old thread, but I might as well add my solution here. You can use braille characters to get pixel-perfect representations. Like so:
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⡿⡻⡫⡫⡣⣣⢣⢇⢧⢫⢻⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⡟⡟⣝⣜⠼⠼⢚⢚⢚⠓⠷⣧⣇⠧⡳⡱⣻⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⡟⣏⡧⠧⠓⠍⡂⡂⠅⠌⠄⠄⠄⡁⠢⡈⣷⡹⡸⣪⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢿⠿⢿⢿⢿⢟⢏⡧⠗⡙⡐⡐⣌⢬⣒⣖⣼⣼⣸⢸⢐⢁⠂⡐⢰⡏⣎⢮⣾⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣽⣾⣶⣿⢿⢻⡱⢕⠋⢅⠢⠱⢼⣾⣾⣿⣿⣿⣿⣿⣿⣿⡇⡇⠢⢁⢂⡯⡪⣪⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢟⠏⢎⠪⠨⡐⠔⠁⠁⠀⠀⠀⠙⢿⣿⣿⣿⣿⣿⣿⣿⢱⠡⡁⣢⢏⢮⣾⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢟⢍⢆⢃⢑⠤⠑⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⣿⣿⣿⣿⡿⡱⢑⢐⢼⢱⣵⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢿⢫⡱⢊⢂⢢⠢⡃⠌⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⣿⣿⢟⢑⢌⢦⢫⣪⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⡻⡱⡑⢅⢢⣢⣳⢱⢑⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠹⡑⡑⡴⡹⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢟⢝⠜⠨⡐⣴⣵⣿⣗⡧⡣⠢⢈⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣜⢎⣷⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⡫⡱⠑⡁⣌⣮⣾⣿⣿⣿⣟⡮⡪⡪⡐⠠⠀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡟⢏⠜⠌⠄⣕⣼⣿⣿⣿⣿⣿⣿⣯⡯⣎⢖⠌⠌⠄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢨⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢟⢕⠕⢁⠡⣸⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⡽⡮⡪⡪⠨⡂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢟⢕⠕⢁⢐⢔⣽⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢽⡱⡱⡑⡠⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⢟⢕⠕⢁⢐⢰⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣟⣞⢜⠔⢄⠡⠀⠀⠀⠀⠀⠀⠀⠀⠀⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⡿⡹⡰⠃⢈⠠⣢⣿⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡮⣇⢏⢂⠢⠀⠀⠀⠀⠀⠀⠀⣠⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⢫⢒⡜⠐⠀⢢⣱⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣳⢕⢕⠌⠄⡀⠀⠀⢀⣤⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⡿⡑⣅⠗⠀⡀⣥⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠟⢙⠙⠿⣿⣿⣿⣿⣿⣿⣿⣿⣯⢮⡪⣂⣢⣬⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⡟⡜⢌⡞⡀⣡⣾⣿⣿⣿⣿⣿⣿⣿⡿⠛⠉⢀⡠⠔⢜⣱⣴⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⡿⡸⡘⢜⣧⣾⣿⣿⣿⣿⣿⣿⠿⢛⡡⠤⡒⢪⣑⣬⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⡇⡇⡣⣷⣿⣿⣿⣿⣿⠿⡛⡣⡋⣕⣬⣶⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣮⣺⣿⣿⣟⣻⣩⣢⣵⣾⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿⠿
I built a tool for this in Go called dotmatrix: https://github.com/kevin-cantwell/dotmatrix
When you say
compare each section to a subset of Unicode
this is not really clear, because there is more than one way to do it. I would bring the comparison down to the pixel level. In a gray image, every pixel has a gray-value. Assume you want to replace every pixel by an appropriate character: how should that character match the pixel? If you look at a character from really far away, you'll see only a gray spot. So if you replace a pixel with a character, you should choose the character whose gray-value is most similar to that pixel's.
In a monospaced font, every character uses the same amount of space. If you take that rectangle of space and draw a character on it, you can calculate the mean gray-value: it is simply the fraction of the rectangle's area that stays white. A space has a gray-value of 1, and a dollar sign is probably one of the darkest characters you'll find.
So here is what I would do:
Take a set of characters; it doesn't matter whether you use only ASCII or Unicode. Calculate for every character the amount of white. Obviously this differs from font to font, but you have to use a monospaced one anyway.
You now have a list which maps every character to a gray-value. Rescale these gray-values to your target interval: for an 8-bit image, your brightest character (the space) should correspond to a value of 255 and your darkest should correspond to gray-level 0.
Now rescale your input image so that it is not too big, because even with a very small font you won't get 2000 characters on one line.
Replace every pixel with the character whose gray-level is nearest to its own.
In Mathematica this is only a few lines of code. In Python it's maybe a bit longer, but it should be fine too.
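A rough Python sketch of the steps above, assuming the per-character whiteness values have already been measured from a monospaced font (the numbers below are made up for illustration):

```python
# Hypothetical mean whiteness per character, in [0, 1], measured from a
# monospaced font; a space is all white, a dollar sign nearly black.
CHAR_WHITENESS = {" ": 1.0, ".": 0.9, ":": 0.8, "+": 0.6, "#": 0.3, "$": 0.1}

def build_palette(whiteness):
    lo = min(whiteness.values())
    hi = max(whiteness.values())
    # Rescale so the brightest character maps to 255, the darkest to 0.
    return {ch: 255 * (w - lo) / (hi - lo) for ch, w in whiteness.items()}

def to_ascii(gray_rows, palette):
    # Replace every pixel with the character nearest to its gray-level.
    return "\n".join(
        "".join(min(palette, key=lambda c: abs(palette[c] - px)) for px in row)
        for row in gray_rows
    )
```

gray_rows here is the already-downscaled image as rows of 0-255 values; obtaining it from a real image (e.g. via PIL) is the only part left out.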
Using this way, you get pretty amazing results when you look at the text from far away and when you get closer, you see that it all consists of characters.
Update
If you want to create an image of the same size as the original, the approach is not very different, but even then you have, as Mark already pointed out, to create a raster image of every letter you are using.
I don't really see a faster way of comparing your image tiles with the letters to decide which one is the most appropriate.
Maybe one hint: if you're using this approach, the letters will be visible in your image, because with e.g. a 12pt font each letter occupies an image area of at least about 10x15 pixels. Converting a 1000x1500 image, which is not that small, therefore uses only about 100x100 letters.
Therefore, it might be worth considering using not the image itself but the image gradients. This may give better results, because then a letter is chosen that follows the edges quite well.
Using only the gradients, the Google logo doesn't look so bad.

image processing

This is an assignment; I have put in good effort since I am new to Python programming.
I am running the following function, which takes an image and a phrase (spaces will be removed, so just text) as arguments. I have already been given all the import and preprocessing code; I just need to implement this function. I can only use getpixel, putpixel, load, and save, which is why coding this has been a hard task for me.
def InsertoImage(srcImage, phrase):
    pix = srcImage.load()
    for index,value in enumerate(phrase):
        pix[10+index,15] = phrase[index]
    srcImage.save()
    pass
This code gives a "SystemError" which says "new style getargs format but argument is not a tuple".
Edit:
C:\Users\Nave\Desktop\a1>a1_template.py lolmini.jpg Hi
Traceback (most recent call last):
  File "C:\Users\Nave\Desktop\a1\a1_template.py", line 31, in <module>
    doLOLImage(srcImage, phrase)
  File "C:\Users\Nave\Desktop\a1\a1_template.py", line 23, in doLOLImage
    pix[10+index,15] = phrase[index]
SystemError: new style getargs format but argument is not a tuple
Edit:
Ok, thanks, I understood and am now posting the code, but I am getting an error for the if statement and am not sure why it is not working. Here is the full code; sorry for not adding it entirely before:
from __future__ import division

# letters, numbers, and punctuation are dictionaries mapping (uppercase)
# characters to Images representing that character
# NOTE: There is no space character stored!
from imageproc import letters, numbers, punctuation, preProcess

# This is the function to implement
def InserttoImage(srcImage, phrase):
    pix = srcImage.load()
    for index,value in enumerate(phrase):
        if value in letters:
            pix[10+index, 15] = letters[value]
        elif value in numbers:
            pix[10+index, 15] = numbers[value]
        elif value in punctuation:
            pix[10+index, 15] = punctuation[value]
    srcImage.save()
    pass

# This code is performed when this script is called from the command line via:
# 'python .py'
if __name__ == '__main__':
    srcImage, phrase = preProcess()
    InserttoImage(srcImage, phrase)
Thanks. letters, numbers, and punctuation are dictionaries which map each key to the image of that character (the font).
But there is still an issue with pix[10+index, 15], as it gives the error:
pix[10+index, 15] = letters[value]
SystemError: new style getargs format but argument is not a tuple
You seem to be confusing two very different concepts. Following from the sample code you posted, let's assume that:
srcImage = A Python Image Library image, generated from lolmini.jpg.
phrase = A string, 'Hi'.
You're trying to get phrase to appear as text written on top of srcImage. Your current code shows that you plan on doing this by accessing the individual pixels of the image, and assigning a letter to them.
This doesn't work for a few reasons. The primary two are that:
You're working with single pixels. A pixel is a picture element. It only ever displays one colour at a time. You cannot represent a letter with a single pixel. The pixel is just a dot. You need multiple pixels together, to form a coherent shape that we recognize as a letter.
What does your text of Hi actually look like? When you envision it being written on top of the image, are the letters thin? Do they vary in their size? Are they thick and chunky? Italic? Do they look handwritten? These are all attributes of a font face. Currently, your program has no idea what those letters should look like. You need to give your program the name of a font, so that it knows how to draw the letters from phrase onto the image.
The Python Imaging Library comes with a module specifically for helping you draw fonts. The documentation for it is here:
The ImageFont Module
Your code shows that you have the general idea correct — loop through each letter, place it in the image, and increment the x value so that the next letter doesn't overlap it. Instead of working with the image's pixels, though, you need to load in a font and use the methods shown in the above-linked library to draw them onto the image.
If you take a look at the draw.text() function in the linked documentation, you'll see that you can in fact skip the need to loop through each letter, instead passing the entire string to be used on the image.
I could've added sample code, but as this is a homework assignment I've intentionally left any out. With the linked documentation and your existing code, you hopefully shouldn't have any troubles seeing this through to completion.
Edit:
Just read your comment to another answer, indicating that you are only allowed to use getpixel() and putpixel() for drawing onto the source image. If this is indeed the case, your workload has just increased exponentially.
My comments above stand — a single pixel will not be able to represent a letter. Assuming you're not allowed any outside source code, you will need to create data structures that contain the locations of multiple pixels, which are then all drawn in a specific location in order to represent a particular letter.
You will then need to do this for every letter you want to support.
If you could include the text of the assignment verbatim, I think it would help those here to better understand all of your constraints.
Actually, upon further reading, I think the problem is that you are trying to assign a character value to a pixel. You have to figure out some way to actually draw the characters on the image (and within the image's boundaries).
Also, as a side note: since you are using
for index,value in enumerate(phrase):
you could use value instead of phrase[index].
My suggestion to the general problem is to create an image that contains all of the characters, at known coordinates (top, bottom, left, right) and then transfer the appropriate parts of the character image into the new output image.
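A sketch of that idea, using nested lists in place of images so only the pixel-copying logic is shown; with PIL you would call letter.getpixel((x, y)) and target.putpixel((dx + x, dy + y), value) in the same double loop (blit, render, and the tiny 3x3 bitmaps are all hypothetical):

```python
# Copy a letter bitmap into the target, pixel by pixel: the only kind
# of operation getpixel/putpixel allow.
def blit(target, letter, dx, dy):
    for y, row in enumerate(letter):
        for x, px in enumerate(row):
            target[dy + y][dx + x] = px
    return target

# Place each character of the phrase side by side, letter_w pixels
# apart, starting at (10, 15) as in the assignment code.
def render(phrase, fonts, width, height, letter_w):
    canvas = [[0] * width for _ in range(height)]
    for i, ch in enumerate(phrase):
        blit(canvas, fonts[ch], 10 + i * letter_w, 15)
    return canvas
```

The step the assignment's pix[10+index, 15] = letters[value] is missing is exactly this inner double loop: a letter is many pixels, so each one must be copied individually.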
Just try this:
pix[10+index:15] = letters[value]
Use ":" instead of ","
