I'm wondering how I can convert ISO-8859-2 (Latin-2) characters (I mean integer or hex values that represent ISO-8859-2 encoded characters) to UTF-8 characters.
What I need to do in my Python project:
Receive hex values from the serial port, which are characters encoded in ISO-8859-2.
Decode them, that is, get "standard" Python unicode strings from them.
Prepare and write an XML file.
Using Python 3.4.3
txt_str = "ąęłóźć"
txt_str.decode('ISO-8859-2')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'str' object has no attribute 'decode'
The main problem is still preparing valid input for the decode method (it works in Python 2.7.10, and that's the one I'm using in this project). How do I prepare a valid string from decimal values, which are Latin-2 code numbers?
Note that it would be extremely complicated to receive UTF-8 characters from the serial port, due to the devices I'm using and the limitations of the communication protocol.
Sample data, on request:
68632057
62206A75
7A647261
B364206F
20616775
777A616E
616A2061
6A65696B
617A20B6
697A7970
6A65B361
70697020
77F36469
62202C79
6E647572
75206A65
7963696C
72656D75
6A616E20
73726F67
206A657A
65647572
77207972
73772065
00000069
This is some sample data: ISO-8859-2 bytes packed into uint32 values, 4 characters per int.
A bit of code that handles the unpacking:
l = l[7:].replace(",", "").replace(".", "").replace("\n","").replace("\r","") # crop string from uart, only data left
vl = [l[0:2], l[2:4], l[4:6], l[6:8]] # list of bytes
vl = vl[::-1] # reverse them - now in actual order
To get integer values out of the hex strings I can simply use:
int_vals = [int(hs, 16) for hs in vl]
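Putting the pieces together, a minimal end-to-end sketch for one uint32 word (the sample word B364206F is taken from the data above; the Python 2 variant is noted in a comment):
l = 'B364206F'                                  # one word from the sample data
vl = [l[0:2], l[2:4], l[4:6], l[6:8]][::-1]     # byte pairs, reversed into actual order
int_vals = [int(hs, 16) for hs in vl]           # -> [111, 32, 100, 179]
text = bytes(int_vals).decode('iso-8859-2')     # Python 3 -> 'o dł'
# Python 2: ''.join(chr(v) for v in int_vals).decode('iso-8859-2')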
Your example doesn't work because you've tried to use a str to hold bytes. In Python 3 you must use byte strings.
In reality, if you're using PySerial then you'll be reading byte strings anyway, which you can convert as required:
import serial

with serial.Serial('/dev/ttyS1', 19200, timeout=1) as ser:
    s = ser.read(10)
    # Py3: s == bytes
    # Py2.x: s == str
    my_unicode_string = s.decode('iso-8859-2')
If your ISO-8859-2 data is actually then encoded as an ASCII hex representation of the bytes, you have to apply an extra layer of decoding:
import codecs
import serial

with serial.Serial('/dev/ttyS1', 19200, timeout=1) as ser:
    hex_repr = ser.read(10)
    # Py3: hex_repr == bytes
    # Py2.x: hex_repr == str
    # Decode the hex representation to bytes
    # E.g. b"A3" -> b'\xa3'
    hex_decoded = codecs.decode(hex_repr, "hex")
    my_unicode_string = hex_decoded.decode('iso-8859-2')
Now you can pass my_unicode_string to your favourite XML library.
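Since the end goal is an XML file, a minimal sketch with the standard library's xml.etree.ElementTree might look like this (the element names and output file name are illustrative assumptions, not from the original project):
import xml.etree.ElementTree as ET

my_unicode_string = u'ąęłóźć'                   # e.g. the decoded serial data
root = ET.Element('data')                       # hypothetical element name
ET.SubElement(root, 'line').text = my_unicode_string
ET.ElementTree(root).write('output.xml', encoding='utf-8', xml_declaration=True)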
Interesting sample data. Ideally your sample data should be a direct print of the raw data received from PySerial. If you actually are receiving the raw bytes as 8-digit hexadecimal values, then:
#!python3
from binascii import unhexlify
data = b''.join(unhexlify(x)[::-1] for x in b'''\
68632057
62206A75
7A647261
B364206F
20616775
777A616E
616A2061
6A65696B
617A20B6
697A7970
6A65B361
70697020
77F36469
62202C79
6E647572
75206A65
7963696C
72656D75
6A616E20
73726F67
206A657A
65647572
77207972
73772065
00000069'''.splitlines())
print(data.decode('iso-8859-2'))
Output:
W chuj bardzo długa nazwa jakiejś zapyziałej pipidówy, brudnej ulicyumer najgorszej rudery we wsi
Google Translate of Polish to English:
The dick very long name some zapyziałej Small Town , dirty ulicyumer worst hovel in the village
This topic is closed. Working code that handles what needs to be done:
x=177
x.to_bytes(1, byteorder='big').decode("ISO-8859-2")
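For a whole list of received code numbers, the same idea collapses to one line (a sketch, Python 3 only; the values are examples):
values = [177, 234]                             # example Latin-2 code numbers
text = bytes(values).decode("ISO-8859-2")       # -> 'ąę'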
Related
I get base64 chunks from a microphone.
I need to concatenate them and send them to the Google API as one base64 string for speech recognition. Roughly speaking, the first chunk encodes the word Hello and the second encodes world!. I need to glue the two chunks together, send them to the Google API as one line, and receive Hello world! in response.
You can look at Google Speech-to-Text as an example. Google also sends data from the microphone as a base64 string over websockets (see the Network tab).
Unfortunately, I don't have a microphone at hand, so I can't test it, and it must be done now.
Suppose I get
chunk1 = "TgvsdUvK ...."
chunk2 = "UZZxgh5V ...."
Do I understand correctly that it would be enough to just do
base64.b64encode(chunk1 + chunk2)
Or do I need to know something else? Unfortunately, everything hinges on the missing microphone.
Your example of encoding chunk1 + chunk2 wouldn't work, since base64 strings have padding at the end. If you just concatenated two base64 strings together, they couldn't be decoded.
For example, the strings StringA and StringB, when their ascii or utf-8 representations are encoded in base64, are the following: U3RyaW5nQQ== and U3RyaW5nQg==. Each one of those can be decoded fine. But, if you concatenated them, your result would be U3RyaW5nQQ==U3RyaW5nQg==, which is invalid:
import base64

concatenated_b64_strings = 'U3RyaW5nQQ==U3RyaW5nQg=='
concatenated_b64_strings_bytes = concatenated_b64_strings.encode('ascii')
decoded_strings = base64.b64decode(concatenated_b64_strings_bytes)
print(decoded_strings.decode('ascii')) # just outputs 'StringA', which is incorrect
So, in order to take those two strings (which I'm using as an example in place of binary data) and concatenate them together, starting with only their base64 representations, you have to decode them:
import base64
string1_base64 = 'U3RyaW5nQQ=='
string2_base64 = 'U3RyaW5nQg=='
# need to convert the strings to bytes first in order to decode them
base64_string1_bytes = string1_base64.encode('ascii')
base64_string2_bytes = string2_base64.encode('ascii')
# now, decode them into the actual bytes the base64 represents
base64_string1_bytes_decoded = base64.decodebytes(base64_string1_bytes)
base64_string2_bytes_decoded = base64.decodebytes(base64_string2_bytes)
# combine the bytes together
combined_bytes = base64_string1_bytes_decoded + base64_string2_bytes_decoded
# now, encode these bytes as base64
combined_bytes_base64 = base64.encodebytes(combined_bytes)
# finally, decode these bytes so you're left with a base64 string:
combined_bytes_base64_string = combined_bytes_base64.decode('ascii')
print(combined_bytes_base64_string) # output: U3RyaW5nQVN0cmluZ0I=
# let's prove that it concatenated successfully (you wouldn't do this in your actual code)
base64_combinedstring_bytes = combined_bytes_base64_string.encode('ascii')
base64_combinedstring_bytes_decoded_bytes = base64.decodebytes(base64_combinedstring_bytes)
base64_combinedstring_bytes_decoded_string = base64_combinedstring_bytes_decoded_bytes.decode('ascii')
print(base64_combinedstring_bytes_decoded_string) # output: StringAStringB
In your case, you'd be combining more than just two input base64 strings, but the process is the same. Take all the strings, encode each one to ascii bytes, decode them via base64.decodebytes(), and then add them all together via the += operator:
import base64
input_strings = ['U3RyaW5nQQ==', 'U3RyaW5nQg==']
input_strings_bytes = [input_string.encode('ascii') for input_string in input_strings]
input_strings_bytes_decoded = [base64.decodebytes(input_string_bytes) for input_string_bytes in input_strings_bytes]
combined_bytes = bytes()
for decoded in input_strings_bytes_decoded:
    combined_bytes += decoded
combined_bytes_base64 = base64.encodebytes(combined_bytes)
combined_bytes_base64_string = combined_bytes_base64.decode('ascii')
print(combined_bytes_base64_string) # output: U3RyaW5nQVN0cmluZ0I=
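For what it's worth, the same round trip can be written more compactly with base64.b64decode and base64.b64encode (a sketch equivalent to the loop above):
import base64

input_strings = ['U3RyaW5nQQ==', 'U3RyaW5nQg==']
combined = base64.b64encode(b''.join(base64.b64decode(s) for s in input_strings))
print(combined.decode('ascii'))  # output: U3RyaW5nQVN0cmluZ0I=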
Here is what I am trying:
import struct
#binary_data = open("your_binary_file.bin","rb").read()
#your binary data would show up as a big string like this one when you .read()
binary_data = '\x44\x69\x62\x65\x6e\x7a\x6f\x79\x6c\x70\x65\x72\x6f\x78\x69\x64\x20\x31\
\x32\x30\x20\x43\x20\x30\x33\x2e\x30\x35\x2e\x31\x39\x39\x34\x20\x31\x34\x3a\x32\
\x34\x3a\x33\x30'
def search(text):
    # convert the text to binary first
    s = ""
    for c in text:
        s += struct.pack("b", ord(c))
    results = binary_data.find(s)
    if results == -1:
        print("no results found")
    else:
        print("the string [%s] is found at position %s in the binary data" % (text, results))

search("Dibenzoylperoxid")
search("03.05.1994")
And this is the error I am getting:
Traceback (most recent call last):
File "dec_new.py", line 22, in <module>
search("Dibenzoylperoxid")
File "dec_new.py", line 14, in search
s+=struct.pack("b", ord(c))
TypeError: Can't convert 'bytes' object to str implicitly
Kindly let me know what I can do to make it function properly.
I am using Python 3.5.0.
s = ""
for c in text:
s+=struct.pack("b", ord(c))
This won't work because s is a string, and struct.pack returns a bytes, and you can't add a string and a bytes.
One possible solution is to make s a bytes.
s = b""
... But it seems like a lot of work to convert a string to a bytes this way. Why not just use encode()?
def search(text):
    # convert the text to binary first
    s = text.encode()
    results = binary_data.find(s)
    # etc
Also, "your binary data would show up as a big string like this one when you .read()" is not, strictly speaking, true. The binary data won't show up as a big string, because it is a bytes, not a string. If you want to create a bytes literal that resembles what might be returned by open("your_binary_file.bin","rb").read(), use the bytes literal syntax binary_data = b'\x44\x69<...etc...>\x33\x30'
I have Unicode Code Point of an emoticon represented as U+1F498:
emoticon = u'\U0001f498'
I would like to get utf-16 decimal groups of this character, which according to this website are 55357 and 56472.
I tried print emoticon.encode("utf16"), but it did not help me at all because it gives some other characters.
Also, trying to decode from UTF-8 before encoding to UTF-16, as in print str(int("0001F498", 16)).decode("utf-8").encode("utf16"), does not help either.
How do I correctly get the utf-16 decimal groups of a unicode character?
You can encode the character with the utf-16 encoding, and then convert every 2 bytes of the encoded data to integers with int.from_bytes (or struct.unpack in python 2).
Python 3
def utf16_decimals(char, chunk_size=2):
    # encode the character as big-endian utf-16
    encoded_char = char.encode('utf-16-be')
    # convert every `chunk_size` bytes to an integer
    decimals = []
    for i in range(0, len(encoded_char), chunk_size):
        chunk = encoded_char[i:i+chunk_size]
        decimals.append(int.from_bytes(chunk, 'big'))
    return decimals
Python 2 + Python 3
import struct
def utf16_decimals(char):
    # encode the character as big-endian utf-16
    encoded_char = char.encode('utf-16-be')
    # convert every 2 bytes to an integer
    decimals = []
    for i in range(0, len(encoded_char), 2):
        chunk = encoded_char[i:i+2]
        decimals.append(struct.unpack('>H', chunk)[0])
    return decimals
Result:
>>> utf16_decimals(u'\U0001f498')
[55357, 56472]
In a Python 2 "narrow" build, it is as simple as:
>>> emoticon = u'\U0001f498'
>>> map(ord,emoticon)
[55357, 56472]
This works in Python 2 (narrow and wide builds) and Python 3:
from __future__ import print_function
import struct
emoticon = u'\U0001f498'
print(struct.unpack('<2H',emoticon.encode('utf-16le')))
Output:
(55357, 56472)
This is a more general solution that prints the UTF-16 code points for any length of string:
from __future__ import print_function,division
import struct
def utf16words(s):
    encoded = s.encode('utf-16le')
    num_words = len(encoded) // 2
    return struct.unpack('<{}H'.format(num_words), encoded)
emoticon = u'ABC\U0001f498'
print(utf16words(emoticon))
Output:
(65, 66, 67, 55357, 56472)
For some school assignments I've been trying to get pyplot to plot some scientific graphs based on data from Logger Pro, but I'm met with the error
ValueError: could not convert string to float: '0'
This is the program:
plot.py
-------------------------------
import matplotlib.pyplot as plt
import numpy as np
infile = open('text', 'r')
xs = []
ys = []
for line in infile:
    print(type(line))
    x, y = line.split()
    # print(x, y)
    # print(type(line), type(x), type(y))
    xs.append(float(x))
    ys.append(float(y))
xs.sort()
ys.sort()
plt.plot(xs, ys, 'bo')
plt.grid(True)
# print (xs, ys)
plt.show()
infile.close()
And the input file is containing this:
text
-------------------------------
0 1.33
1 1.37
2 1.43
3 1.51
4 1.59
5 1.67
6 1.77
7 1.86
8 1.98
9 2.1
This is the error message I receive when I run the program:
Traceback (most recent call last):
File "\route\to\the\file\plot01.py", line 36, in <module>
xs.append(float(x))
ValueError: could not convert string to float: '0'
You have a UTF-8 BOM in your data file; this is what my Python 2 interactive session shows the '0' being converted to a float actually contains:
>>> '0'
'\xef\xbb\xbf0'
The \xef\xbb\xbf bytes are a UTF-8 encoded U+FEFF ZERO WIDTH NO-BREAK SPACE, commonly used as a byte-order mark, especially by Microsoft products. UTF-8 has no byte-order issues, so the mark isn't required to record the byte ordering the way it is for UTF-16 or UTF-32; instead, Microsoft uses it as an aid to detect encodings.
On Python 3, you could open the file using the utf-8-sig codec; this codec expects the BOM at the start and will remove it:
infile = open('text', 'r', encoding='utf-8-sig')
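Applied to the program above, that one change is enough; a minimal sketch of the fixed reading loop (Python 3, assuming the 'text' file from the question):
xs, ys = [], []
with open('text', 'r', encoding='utf-8-sig') as infile:  # strips a leading BOM if present
    for line in infile:
        x, y = line.split()
        xs.append(float(x))
        ys.append(float(y))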
On Python 2, you could use the codecs.BOM_UTF8 constant to detect and strip the BOM:
import codecs

for line in infile:
    if line.startswith(codecs.BOM_UTF8):
        line = line[len(codecs.BOM_UTF8):]
    x, y = line.split()
As the codecs documentation explains it:
As UTF-8 is an 8-bit encoding no BOM is required and any U+FEFF character in the decoded string (even if it’s the first character) is treated as a ZERO WIDTH NO-BREAK SPACE.
Without external information it’s impossible to reliably determine which encoding was used for encoding a string. Each charmap encoding can decode any random byte sequence. However that’s not possible with UTF-8, as UTF-8 byte sequences have a structure that doesn’t allow arbitrary byte sequences. To increase the reliability with which a UTF-8 encoding can be detected, Microsoft invented a variant of UTF-8 (that Python 2.5 calls "utf-8-sig") for its Notepad program: Before any of the Unicode characters is written to the file, a UTF-8 encoded BOM (which looks like this as a byte sequence: 0xef, 0xbb, 0xbf) is written. As it’s rather improbable that any charmap encoded file starts with these byte values (which would e.g. map to
LATIN SMALL LETTER I WITH DIAERESIS
RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
INVERTED QUESTION MARK
in iso-8859-1), this increases the probability that a utf-8-sig encoding can be correctly guessed from the byte sequence. So here the BOM is not used to be able to determine the byte order used for generating the byte sequence, but as a signature that helps in guessing the encoding. On encoding the utf-8-sig codec will write 0xef, 0xbb, 0xbf as the first three bytes to the file. On decoding utf-8-sig will skip those three bytes if they appear as the first three bytes in the file. In UTF-8, the use of the BOM is discouraged and should generally be avoided.
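For illustration, a minimal sketch of checking a file for that signature by hand (the file name is an assumption):
import codecs

with open('text', 'rb') as f:
    has_bom = f.read(3) == codecs.BOM_UTF8  # b'\xef\xbb\xbf'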
I have some files which contains a bunch of different kinds of binary data and I'm writing a module to deal with these files.
Amongst other things, it contains UTF-8 encoded strings in the following format: a 2-byte big-endian stringLength (which I parse using struct.unpack()) and then the string. Since it's UTF-8, the length in bytes of the string may be greater than stringLength, and doing read(stringLength) will come up short if the string contains multi-byte characters (not to mention messing up all the other data in the file).
How do I read n UTF-8 characters (distinct from n bytes) from a file, being aware of the multi-byte properties of UTF-8? I've been googling for half an hour and all the results I've found are either not relevant or makes assumptions that I cannot make.
Given a file object, and a number of characters, you can use:
# build a table mapping lead byte to expected follow-byte count
# bytes 00-BF have 0 follow bytes, F5-FF is not legal UTF8
# C0-DF: 1, E0-EF: 2 and F0-F4: 3 follow bytes.
# leave F5-FF set to 0 to minimize reading broken data.
_lead_byte_to_count = []
for i in range(256):
    _lead_byte_to_count.append(
        1 + (i >= 0xe0) + (i >= 0xf0) if 0xbf < i < 0xf5 else 0)
def readUTF8(f, count):
    """Read `count` UTF-8 characters from file `f`, return as unicode"""
    # Assumes the UTF-8 data is valid; leaves it to the `.decode()` call to validate
    res = []
    while count:
        count -= 1
        lead = f.read(1)
        res.append(lead)
        readcount = _lead_byte_to_count[ord(lead)]
        if readcount:
            res.append(f.read(readcount))
    return (''.join(res)).decode('utf8')
Result of a test:
>>> test = StringIO(u'This is a test containing Unicode data: \ua000'.encode('utf8'))
>>> readUTF8(test, 41)
u'This is a test containing Unicode data: \ua000'
In Python 3, it is of course much, much easier to just wrap the file object in a io.TextIOWrapper() object and leave decoding to the native and efficient Python UTF-8 implementation.
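For illustration, a minimal sketch of that Python 3 approach (the file name and character count are assumptions):
import io

with open('data.bin', 'rb') as f:
    reader = io.TextIOWrapper(f, encoding='utf-8')
    text = reader.read(41)  # reads 41 characters, however many bytes each occupies
    # note: TextIOWrapper reads ahead and buffers, so mixing it with further
    # binary reads of `f` at exact offsets needs care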
One character in UTF-8 can be 1, 2, 3, or 4 bytes.
If you have to read your file byte by byte, you have to follow the UTF-8 encoding rules. http://en.wikipedia.org/wiki/UTF-8
Most of the time, you can just set the encoding to utf-8 and read the input stream.
You do not need to care how many bytes you have read.
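A minimal sketch of that approach (Python 3; the file name and count are assumptions):
with open('data.txt', encoding='utf-8') as f:
    chunk = f.read(41)  # 41 characters, regardless of how many bytes each occupies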