I'm trying to remove just Emoji from Unicode text. I tried the various methods described in another Stack Overflow post, but none of them removes all emojis / smileys completely. For example:
Solution 1:
def remove_emoji(self, string):
    emoji_pattern = re.compile("["
        u"\U0001F600-\U0001F64F"  # emoticons
        u"\U0001F300-\U0001F5FF"  # symbols & pictographs
        u"\U0001F680-\U0001F6FF"  # transport & map symbols
        u"\U0001F1E0-\U0001F1FF"  # flags (iOS)
        u"\U00002702-\U000027B0"
        u"\U000024C2-\U0001F251"
        "]+", flags=re.UNICODE)
    return emoji_pattern.sub(r'', string)
It leaves in 🤝 in the following example:
Input: తెలంగాణ రియల్ ఎస్టేట్ 🤝👍
Output: తెలంగాణ రియల్ ఎస్టేట్ 🤝
Another attempt, solution 2:
def deEmojify(self, inputString):
    returnString = ""
    for character in inputString:
        try:
            character.encode("ascii")
            returnString += character
        except UnicodeEncodeError:
            returnString += ''
    return returnString
Results in removing any non-English character:
Input: 🏣Testరియల్ ఎస్టేట్ A.P&T.S. 🤝🏩🏣👍
Output: Test A.P&T.S.
It removes not only all of the emoji, it also removes the non-English characters, because of the character.encode("ascii") call; my non-English inputs cannot be encoded into ASCII.
Is there any way to properly remove Emoji from international Unicode text?
The regex is outdated. It appears to cover Emoji defined up to Unicode 8.0 (U+1F91D HANDSHAKE was added in Unicode 9.0). The other approach is just a very inefficient way of force-encoding to ASCII, which is rarely what you want when merely removing Emoji (and can be achieved far more easily and efficiently with text.encode('ascii', 'ignore').decode('ascii')).
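For example, on the Telugu sample below, that force-encoding one-liner keeps only the ASCII space, not just removing the Emoji:

>>> u'తెలంగాణ 🤝👍'.encode('ascii', 'ignore').decode('ascii')
' '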
If you need a more up-to-date regex, take one from a package that is actively trying to keep up-to-date on Emoji; it specifically supports generating such a regex:
import emoji

def remove_emoji(text):
    return emoji.get_emoji_regexp().sub(u'', text)
The package is currently up-to-date for Unicode 11.0 and has the infrastructure in place to update to future releases quickly. All your project has to do is upgrade when there is a new release.
Demo using your sample inputs:
>>> print(remove_emoji(u'తెలంగాణ రియల్ ఎస్టేట్ 🤝👍'))
తెలంగాణ రియల్ ఎస్టేట్
>>> print(remove_emoji(u'🏣Testరియల్ ఎస్టేట్ A.P&T.S. 🤝🏩🏣👍'))
Testరియల్ ఎస్టేట్ A.P&T.S.
Note that the regex works on Unicode text; for Python 2, make sure you have decoded from str to unicode first, and for Python 3, from bytes to str.
Emoji are complex beasts these days. The above will remove complete, valid Emoji. If you have 'incomplete' Emoji components such as skin-tone codepoints (meant to be combined with specific Emoji only) then you'll have much more trouble removing those. The skin-tone codepoints are easy (just remove those 5 codepoints afterwards), but there is a whole host of combinations that are made up of innocent characters such as ♀ U+2640 FEMALE SIGN or ♂ U+2642 MALE SIGN together with variant selectors and the U+200D ZERO-WIDTH JOINER that have specific meaning in other contexts too, and you can't just regex those out, not unless you don't mind breaking text using Devanagari, or Kannada or CJK ideographs, to name just a few examples.
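To make that concrete, a quick inspection with the standard library shows that a single family glyph is really several codepoints stitched together with U+200D:

>>> import unicodedata
>>> [unicodedata.name(c) for c in u'👨\u200d👩\u200d👧']
['MAN', 'ZERO WIDTH JOINER', 'WOMAN', 'ZERO WIDTH JOINER', 'GIRL']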
That said, the following Unicode 11.0 codepoints are probably safe to remove (based on filtering the Emoji_Component Emoji-data designation):
20E3 ; (⃣) combining enclosing keycap
FE0F ; () VARIATION SELECTOR-16
1F1E6..1F1FF ; (🇦..🇿) regional indicator symbol letter a..regional indicator symbol letter z
1F3FB..1F3FF ; (🏻..🏿) light skin tone..dark skin tone
1F9B0..1F9B3 ; (🦰..🦳) red-haired..white-haired
E0020..E007F ; (..) tag space..cancel tag
which can be removed by creating a new regex to match those:
import re

try:
    uchr = unichr  # Python 2
    import sys
    if sys.maxunicode == 0xffff:
        # narrow build, define alternative unichr encoding to surrogate pairs
        # as unichr(sys.maxunicode + 1) fails.
        def uchr(codepoint):
            return (
                unichr(codepoint) if codepoint <= sys.maxunicode else
                unichr(codepoint - 0x010000 >> 10 | 0xD800) +
                unichr(codepoint & 0x3FF | 0xDC00))
except NameError:
    uchr = chr  # Python 3

# Unicode 11.0 Emoji Component map (deemed safe to remove)
_removable_emoji_components = (
    (0x20E3, 0xFE0F),             # combining enclosing keycap, VARIATION SELECTOR-16
    range(0x1F1E6, 0x1F1FF + 1),  # regional indicator symbol letter a..regional indicator symbol letter z
    range(0x1F3FB, 0x1F3FF + 1),  # light skin tone..dark skin tone
    range(0x1F9B0, 0x1F9B3 + 1),  # red-haired..white-haired
    range(0xE0020, 0xE007F + 1),  # tag space..cancel tag
)
emoji_components = re.compile(u'({})'.format(u'|'.join([
    re.escape(uchr(c)) for r in _removable_emoji_components for c in r])),
    flags=re.UNICODE)
then update the above remove_emoji() function to use it:
def remove_emoji(text, remove_components=False):
    cleaned = emoji.get_emoji_regexp().sub(u'', text)
    if remove_components:
        cleaned = emoji_components.sub(u'', cleaned)
    return cleaned
The emoji.get_emoji_regexp() is outdated.
If you want to remove emoji from strings, you can use emoji.replace_emoji() as shown in the examples below.
import emoji
def remove_emoji(string):
    return emoji.replace_emoji(string, '')
Visit https://carpedm20.github.io/emoji/docs/api.html#emoji.replace_emoji
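A quick demo on the question's sample input (assuming an emoji package version of 2.0 or later, where replace_emoji() is available):

>>> import emoji
>>> print(emoji.replace_emoji(u'తెలంగాణ రియల్ ఎస్టేట్ 🤝👍', ''))
తెలంగాణ రియల్ ఎస్టేట్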
If you use the regex library instead of the re library, you get access to Unicode properties and character-class set operations, and you can change your function to:

import regex

def remove_emoji(self, string):
    # set operations such as && require the regex module's V1 behaviour
    emoji_pattern = regex.compile(r'[\P{L}&&\P{N}&&\P{Z}&&\P{M}]', flags=regex.V1)
    return emoji_pattern.sub(r'', string)
This keeps all letters, numbers, separators and marks (accents).
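Alternatively, a minimal sketch of the inverse approach with the same regex module, assuming you would rather drop symbol categories than whitelist what to keep (most emoji are in So, and the skin-tone modifiers are in Sk):

import regex

def remove_symbols(string):
    # drop Symbol-other (So) and Symbol-modifier (Sk) characters
    return regex.sub(r'[\p{So}\p{Sk}]', '', string)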
Related
Here is a snippet that includes my string.
'ls\r\n\x1b[00m\x1b[01;31mexamplefile.zip\x1b[00m\r\n\x1b[01;31m'
The string was returned from an SSH command that I executed. I can't use the string in its current state because it contains ANSI standardized escape sequences. How can I programmatically remove the escape sequences so that the only part of the string remaining is 'examplefile.zip'?
Delete them with a regular expression:
import re

# 7-bit C1 ANSI sequences
ansi_escape = re.compile(r'''
    \x1B  # ESC
    (?:   # 7-bit C1 Fe (except CSI)
        [#-Z\\-_]
    |     # or [ for CSI, followed by a control sequence
        \[
        [0-?]*  # Parameter bytes
        [ -/]*  # Intermediate bytes
        [#-~]   # Final byte
    )
''', re.VERBOSE)
result = ansi_escape.sub('', sometext)
or, without the VERBOSE flag, in condensed form:
ansi_escape = re.compile(r'\x1B(?:[#-Z\\-_]|\[[0-?]*[ -/]*[#-~])')
result = ansi_escape.sub('', sometext)
Demo:
>>> import re
>>> ansi_escape = re.compile(r'\x1B(?:[#-Z\\-_]|\[[0-?]*[ -/]*[#-~])')
>>> sometext = 'ls\r\n\x1b[00m\x1b[01;31mexamplefile.zip\x1b[00m\r\n\x1b[01;31m'
>>> ansi_escape.sub('', sometext)
'ls\r\nexamplefile.zip\r\n'
The above regular expression covers all 7-bit ANSI C1 escape sequences, but not the 8-bit C1 escape sequence openers. The latter are never used in today's UTF-8 world, where the same range of bytes has a different meaning.
If you do need to cover the 8-bit codes too (and are then, presumably, working with bytes values) then the regular expression becomes a bytes pattern like this:
# 7-bit and 8-bit C1 ANSI sequences
ansi_escape_8bit = re.compile(br'''
    (?:  # either 7-bit C1, two bytes, ESC Fe (omitting CSI)
        \x1B
        [#-Z\\-_]
    |    # or a single 8-bit byte Fe (omitting CSI)
        [\x80-\x9A\x9C-\x9F]
    |    # or CSI + control codes
        (?:  # 7-bit CSI, ESC [
            \x1B\[
        |    # 8-bit CSI, 9B
            \x9B
        )
        [0-?]*  # Parameter bytes
        [ -/]*  # Intermediate bytes
        [#-~]   # Final byte
    )
''', re.VERBOSE)
result = ansi_escape_8bit.sub(b'', somebytesvalue)
which can be condensed down to
# 7-bit and 8-bit C1 ANSI sequences
ansi_escape_8bit = re.compile(
br'(?:\x1B[#-Z\\-_]|[\x80-\x9A\x9C-\x9F]|(?:\x1B\[|\x9B)[0-?]*[ -/]*[#-~])'
)
result = ansi_escape_8bit.sub(b'', somebytesvalue)
For more information, see:
the ANSI escape codes overview on Wikipedia
ECMA-48 standard, 5th edition (especially sections 5.3 and 5.4)
The example you gave contains 4 CSI (Control Sequence Introducer) codes, as marked by the \x1B[ or ESC [ opening bytes, and each contains a SGR (Select Graphic Rendition) code, because they each end in m. The parameters (separated by ; semicolons) in between those tell your terminal what graphic rendition attributes to use. So for each \x1B[....m sequence, the 3 codes that are used are:
0 (or 00 in this example): reset, disable all attributes
1 (or 01 in the example): bold
31: red (foreground)
However, there is more to ANSI than just CSI SGR codes. With CSI alone you can also control the cursor, clear lines or the whole display, or scroll (provided the terminal supports this of course). And beyond CSI, there are codes to select alternative fonts (SS2 and SS3), to send 'private messages' (think passwords), to communicate with the terminal (DCS), the OS (OSC), or the application itself (APC, a way for applications to piggy-back custom control codes on to the communication stream), and further codes to help define strings (SOS, Start of String, ST String Terminator) or to reset everything back to a base state (RIS). The above regexes cover all of these.
Note that the above regex only removes the ANSI C1 codes, however, and not any additional data that those codes may be marking up (such as the strings sent between an OSC opener and the terminating ST code). Removing those would require additional work outside the scope of this answer.
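For illustration, a hedged sketch of such additional work (not part of the answer above): OSC payloads run from the ESC ] opener to a BEL or ST terminator, so the 7-bit forms can be dropped with a separate pattern:

# strip 7-bit OSC sequences including their string payload (sketch)
osc_escape = re.compile(r'\x1B\].*?(?:\x07|\x1B\\)', re.DOTALL)
result = osc_escape.sub('', sometext)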
The accepted answer only takes into account ANSI standardized escape sequences that are formatted to alter foreground colors & text style.
Many sequences do not end in 'm', such as cursor positioning, erasing, and scroll regions. The pattern below attempts to cover all cases beyond setting foreground color and text style.
Below is the regular expression for ANSI standardized control sequences:
/(\x9B|\x1B\[)[0-?]*[ -\/]*[#-~]/
Additional References:
ECMA-48 Section 5.4
ANSI escape code
Function
Based on Martijn Pieters's answer with Jeff's regexp.
def escape_ansi(line):
    ansi_escape = re.compile(r'(?:\x1B[#-_]|[\x80-\x9F])[0-?]*[ -/]*[#-~]')
    return ansi_escape.sub('', line)
Test
def test_remove_ansi_escape_sequence(self):
    line = '\t\u001b[0;35mBlabla\u001b[0m \u001b[0;36m172.18.0.2\u001b[0m'
    escaped_line = escape_ansi(line)
    self.assertEqual(escaped_line, '\tBlabla 172.18.0.2')
Testing
If you want to run it yourself, use Python 3 (better Unicode support). Here is how the test file should look:
import unittest
import re

def escape_ansi(line):
    …

class TestStringMethods(unittest.TestCase):
    def test_remove_ansi_escape_sequence(self):
        …

if __name__ == '__main__':
    unittest.main()
The suggested regex didn't do the trick for me, so I created one of my own. The following is a Python regex that I created based on the spec found here:
ansi_regex = r'\x1b(' \
             r'(\[\??\d+[hl])|' \
             r'([=<>a-kzNM78])|' \
             r'([\(\)][a-b0-2])|' \
             r'(\[\d{0,2}[ma-dgkjqi])|' \
             r'(\[\d+;\d+[hfy]?)|' \
             r'(\[;?[hf])|' \
             r'(#[3-68])|' \
             r'([01356]n)|' \
             r'(O[mlnp-z]?)|' \
             r'(/Z)|' \
             r'(\d+)|' \
             r'(\[\?\d;\d0c)|' \
             r'(\d;\dR))'
ansi_escape = re.compile(ansi_regex, flags=re.IGNORECASE)
I tested my regex on the following snippet (basically a copy paste from the ascii-table.com page)
\x1b[20h Set
\x1b[?1h Set
\x1b[?3h Set
\x1b[?4h Set
\x1b[?5h Set
\x1b[?6h Set
\x1b[?7h Set
\x1b[?8h Set
\x1b[?9h Set
\x1b[20l Set
\x1b[?1l Set
\x1b[?2l Set
\x1b[?3l Set
\x1b[?4l Set
\x1b[?5l Set
\x1b[?6l Set
\x1b[?7l Reset
\x1b[?8l Reset
\x1b[?9l Reset
\x1b= Set
\x1b> Set
\x1b(A Set
\x1b)A Set
\x1b(B Set
\x1b)B Set
\x1b(0 Set
\x1b)0 Set
\x1b(1 Set
\x1b)1 Set
\x1b(2 Set
\x1b)2 Set
\x1bN Set
\x1bO Set
\x1b[m Turn
\x1b[0m Turn
\x1b[1m Turn
\x1b[2m Turn
\x1b[4m Turn
\x1b[5m Turn
\x1b[7m Turn
\x1b[8m Turn
\x1b[1;2 Set
\x1b[1A Move
\x1b[2B Move
\x1b[3C Move
\x1b[4D Move
\x1b[H Move
\x1b[;H Move
\x1b[4;3H Move
\x1b[f Move
\x1b[;f Move
\x1b[1;2 Move
\x1bD Move/scroll
\x1bM Move/scroll
\x1bE Move
\x1b7 Save
\x1b8 Restore
\x1bH Set
\x1b[g Clear
\x1b[0g Clear
\x1b[3g Clear
\x1b#3 Double-height
\x1b#4 Double-height
\x1b#5 Single
\x1b#6 Double
\x1b[K Clear
\x1b[0K Clear
\x1b[1K Clear
\x1b[2K Clear
\x1b[J Clear
\x1b[0J Clear
\x1b[1J Clear
\x1b[2J Clear
\x1b5n Device
\x1b0n Response:
\x1b3n Response:
\x1b6n Get
\x1b[c Identify
\x1b[0c Identify
\x1b[?1;20c Response:
\x1bc Reset
\x1b#8 Screen
\x1b[2;1y Confidence
\x1b[2;2y Confidence
\x1b[2;9y Repeat
\x1b[2;10y Repeat
\x1b[0q Turn
\x1b[1q Turn
\x1b[2q Turn
\x1b[3q Turn
\x1b[4q Turn
\x1b< Enter/exit
\x1b= Enter
\x1b> Exit
\x1bF Use
\x1bG Use
\x1bA Move
\x1bB Move
\x1bC Move
\x1bD Move
\x1bH Move
\x1b12 Move
\x1bI
\x1bK
\x1bJ
\x1bZ
\x1b/Z
\x1bOP
\x1bOQ
\x1bOR
\x1bOS
\x1bA
\x1bB
\x1bC
\x1bD
\x1bOp
\x1bOq
\x1bOr
\x1bOs
\x1bOt
\x1bOu
\x1bOv
\x1bOw
\x1bOx
\x1bOy
\x1bOm
\x1bOl
\x1bOn
\x1bOM
\x1b[i
\x1b[1i
\x1b[4i
\x1b[5i
Hopefully this will help others :)
None of the regex solutions worked in my case with OSC sequences (\x1b]).
To actually render the visible output, you will need a terminal emulator like pyte:
#! /usr/bin/env python3
import pyte # terminal emulator: render terminal output to visible characters
pyte_screen = pyte.Screen(80, 24)
pyte_stream = pyte.ByteStream(pyte_screen)
bytes_ = b''.join([
b'$ cowsay hello\r\n', b'\x1b[?2004l', b'\r', b' _______\r\n',
b'< hello >\r\n', b' -------\r\n', b' \\ ^__^\r\n',
b' \\ (oo)\\_______\r\n', b' (__)\\ )\\/\\\r\n',
b' ||----w |\r\n', b' || ||\r\n',
b'\x1b]0;user#laptop1:/tmp\x1b\\', b'\x1b]7;file://laptop1/tmp\x1b\\', b'\x1b[?2004h$ ',
])
pyte_stream.feed(bytes_)
# pyte_screen.display always has 80x24 characters, padded with whitespace
# -> use rstrip to remove trailing whitespace from all lines
text = ("".join([line.rstrip() + "\n" for line in pyte_screen.display])).strip() + "\n"
print("text", text)
print("cursor", pyte_screen.cursor.y, pyte_screen.cursor.x)
print("title", pyte_screen.title)
If it helps future Stack Overflowers: I was using the crayons library to give my Python output a bit more visual impact, which is advantageous as it works on both Windows and Linux platforms. However, I was both displaying on screen and appending to log files, and the escape sequences were impacting the legibility of the log files, so I wanted to strip them out. However, the escape sequences inserted by crayons produced an error:
expected string or bytes-like object
The solution was to cast the parameter to a string, so only a tiny modification to the commonly accepted answer was needed:
def escape_ansi(line):
    ansi_escape = re.compile(r'(\x9B|\x1B\[)[0-?]*[ -/]*[#-~]')
    return ansi_escape.sub('', str(line))
If you want to remove the \r\n bit, you can pass the string through this function (written by sarnold):
def stripEscape(string):
    """ Removes all escape sequences from the input string """
    delete = ""
    i = 1
    while (i < 0x20):
        delete += chr(i)
        i += 1
    t = string.translate(None, delete)
    return t
Careful though, this will lump together the text in front and behind the escape sequences. So, using Martijn's filtered string 'ls\r\nexamplefile.zip\r\n', you will get lsexamplefile.zip. Note the ls in front of the desired filename.
I would use the stripEscape function first to remove the escape sequences, then pass the output to Martijn's regular expression, which would avoid concatenating the unwanted bit.
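To make the lumping concrete (Python 2, since stripEscape relies on the two-argument form of str.translate):

>>> stripEscape('ls\r\nexamplefile.zip\r\n')
'lsexamplefile.zip'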
For 2020 with Python 3.5 it is as easy as string.encode().decode('ascii'):
ascii_string = 'ls\r\n\x1b[00m\x1b[01;31mexamplefile.zip\x1b[00m\r\n\x1b[01;31m'
decoded_string = ascii_string.encode().decode('ascii')
print(decoded_string)
>ls
>examplefile.zip
>
On my website people can post news and quite a few editors use MS word and similar tools to write the text and then copy&paste into my site's editor (simple textarea, no WYSIWYG etc.).
Those texts usually contain "nice" quotes instead of the plain ascii ones ("). They also sometimes contain those longer dashes like – instead of -.
Now I want to replace all those characters with their ascii counterparts. However, I do not want to remove umlauts and other non-ascii character. I'd also highly prefer to use a proper solution that does not involve creating a mapping dict for all those characters.
All my strings are unicode objects.
What about this?
It creates a translation table first, but honestly I don't think you can do this without one.
transl_table = dict([(ord(x), ord(y)) for x, y in zip(u"‘’´“”–-", u"'''\"\"--")])

with open("a.txt", "w", encoding="utf-8") as f_out:
    a_str = u" ´funny single quotes´ long–-and–-short dashes ‘nice single quotes’ “nice double quotes” "
    print(" a_str = " + a_str, file=f_out)
    fixed_str = a_str.translate(transl_table)
    print(" fixed_str = " + fixed_str, file=f_out)
I wasn't able to run this printing to a console (on Windows) so I had to write to txt file.
The output in the a.txt file looks as follows:
a_str = ´funny single quotes´ long–-and–-short dashes ‘nice single
quotes’ “nice double quotes” fixed_str = 'funny single quotes'
long--and--short dashes 'nice single quotes' "nice double quotes"
By the way, the code above works in Python 3. If you need it for Python 2, it might need some fixes due to the difference in handling Unicode strings in both versions of the language.
There is no such "proper" solution, because for any given Unicode character there is no "ASCII counterpart" defined.
For example, take the seemingly easy characters that you might want to map to ASCII single and double quotes and hyphens. First, let's generate all the Unicode characters with their official names. Second, let's find all the quotation marks, hyphens and dashes according to the names:
#!/usr/bin/env python3
import unicodedata

def unicode_character_name(char):
    try:
        return unicodedata.name(char)
    except ValueError:
        return None

# Generate all Unicode characters with their names
all_unicode_characters = []
for n in range(0, 0x10ffff):  # Unicode planes 0-16
    char = chr(n)       # Python 3
    #char = unichr(n)   # Python 2
    name = unicode_character_name(char)
    if name:
        all_unicode_characters.append((char, name))

# Find all Unicode quotation marks
print(' '.join([char for char, name in all_unicode_characters if 'QUOTATION MARK' in name]))
# " « » ‘ ’ ‚ ‛ “ ” „ ‟ ‹ › ❛ ❜ ❝ ❞ ❟ ❠ ❮ ❯ ⹂ 〝 〞 〟 " 🙶 🙷 🙸

# Find all Unicode hyphens
print(' '.join([char for char, name in all_unicode_characters if 'HYPHEN' in name]))
# - ֊ ᐀ ᠆ ‐ ‑ ‧ ⁃ ⸗ ⸚ ⹀ ゠ ﹣ -

# Find all Unicode dashes
print(' '.join([char for char, name in all_unicode_characters if 'DASH' in name and 'DASHED' not in name]))
# ‒ – — ⁓ ⊝ ⑈ ┄ ┅ ┆ ┇ ┈ ┉ ┊ ┋ ╌ ╍ ╎ ╏ ⤌ ⤍ ⤎ ⤏ ⤐ ⥪ ⥫ ⥬ ⥭ ⩜ ⩝ ⫘ ⫦ ⬷ ⸺ ⸻ ⹃ 〜 〰 ︱ ︲ ﹘ 💨
As you can see, as easy as this example is, there are many problems. There are many quotation marks in Unicode that don't look anything like the quotation marks in US-ASCII and there are many hyphens in Unicode that don't look anything like the hyphen-minus sign in US-ASCII.
And there are many questions. For example:
should the "SWUNG DASH" (⁓) symbol be replaced with an ASCII hyphen (-) or a tilde (~)?
should the "CANADIAN SYLLABICS HYPHEN" (᐀) be replaced with an ASCII hyphen (-) or an equals sign (=)?
should the "SINGLE LEFT-POINTING ANGLE QUOTATION MARK" (‹) be replaces with an ASCII quotation mark ("), an apostrophe (') or a less-than sign (<)?
To establish a "correct" ASCII counterpart, somebody needs to answer these questions based on the use context. That's why all the solutions to your problem are based on a mapping dictionary in one way or another. And all these solutions will provide different results.
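As a minimal sketch of what such a mapping-based solution looks like in Python 3, with every entry being one of those judgment calls:

# one possible policy; each entry is a judgment call, not a universal truth
ASCII_MAP = str.maketrans({
    '\u2018': "'", '\u2019': "'",   # curly single quotes
    '\u201C': '"', '\u201D': '"',   # curly double quotes
    '\u2013': '-', '\u2014': '-',   # en dash, em dash
})
print('“Hello” – ‘world’'.translate(ASCII_MAP))
# prints "Hello" - 'world'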
You can build on top of the unidecode package.
This is pretty slow, since we are normalizing all the unicode first to the combined form, then trying to see what unidecode turns it into. If we match a latin letter, then we actually use the original NFC character. If not, then we yield whatever degarbling unidecode has suggested. This leaves accentuated letters alone, but will convert everything else.
import unidecode
import unicodedata
import re

def char_filter(string):
    latin = re.compile('[a-zA-Z]+')
    for char in unicodedata.normalize('NFC', string):
        decoded = unidecode.unidecode(char)
        if latin.match(decoded):
            yield char
        else:
            yield decoded

def clean_string(string):
    return "".join(char_filter(string))

print(clean_string(u"vis-à-vis “Beyoncé”’s naïve papier–mâché résumé"))
# prints vis-à-vis "Beyoncé"'s naïve papier-mâché résumé
You can use the str.translate() method (http://docs.python.org/library/stdtypes.html#str.translate). However, read the documentation related to Unicode: there the translation table has another form, mapping a Unicode ordinal number to a Unicode string (usually a single character) or None.
Well, it still requires the dict: you have to capture the replacements somewhere anyway. How would you do that without any table or arrays? You could use str.replace() for the single characters, but this would be inefficient.
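For example, using the ordinal-to-string (or None) form of the table mentioned above, which works the same on a Python 3 str:

table = {0x2018: u"'", 0x2019: u"'", 0x00B4: None}  # map curly quotes, drop acute accents
print(u'‘quoted´’'.translate(table))
# prints 'quoted'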
This tool will normalize punctuation in markdown: http://johnmacfarlane.net/pandoc/README.html
-S, --smart
    Produce typographically correct output, converting straight quotes to
    curly quotes, --- to em-dashes, -- to en-dashes, and ... to ellipses.
    Nonbreaking spaces are inserted after certain abbreviations, such as “Mr.”
    (Note: This option is significant only when the input format is markdown
    or textile. It is selected automatically when the input format is textile
    or the output format is latex or context.)
It's Haskell, so you'd have to figure out the interface.
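One way to drive it from Python without writing any Haskell is to shell out to the pandoc binary; a sketch (the -S flag is the one quoted above and belongs to pandoc 1.x; newer versions replaced it with the markdown+smart format):

import subprocess

out = subprocess.check_output(
    ['pandoc', '-S', '-f', 'markdown', '-t', 'markdown'],
    input=u'"quotes" -- and --- dashes'.encode('utf-8'))
print(out.decode('utf-8'))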
I often work with utf-8 text containing characters like:
\xc2\x99
\xc2\x95
\xc2\x85
etc
These characters confuse other libraries I work with, so they need to be replaced.
What is an efficient way to do this, rather than:
text.replace('\xc2\x99', ' ').replace('\xc2\x85', '...')
There is always regular expressions; just list all of the offending characters inside square brackets like so:
import re
print re.sub(r'[\xc2\x99]'," ","Hello\xc2There\x99")
This prints: 'Hello There ', with the unwanted characters replaced by spaces.
Alternately, if you have a different replacement character for each:
# remove annoying characters
chars = {
'\xc2\x82' : ',', # High code comma
'\xc2\x84' : ',,', # High code double comma
'\xc2\x85' : '...', # Tripple dot
'\xc2\x88' : '^', # High carat
'\xc2\x91' : '\x27', # Forward single quote
'\xc2\x92' : '\x27', # Reverse single quote
'\xc2\x93' : '\x22', # Forward double quote
'\xc2\x94' : '\x22', # Reverse double quote
'\xc2\x95' : ' ',
'\xc2\x96' : '-', # High hyphen
'\xc2\x97' : '--', # Double hyphen
'\xc2\x99' : ' ',
'\xc2\xa0' : ' ',
'\xc2\xa6' : '|', # Split vertical bar
'\xc2\xab' : '<<', # Double less than
'\xc2\xbb' : '>>', # Double greater than
'\xc2\xbc' : '1/4', # one quarter
'\xc2\xbd' : '1/2', # one half
'\xc2\xbe' : '3/4', # three quarters
'\xca\xbf' : '\x27', # c-single quote
'\xcc\xa8' : '', # modifier - under curve
'\xcc\xb1' : '' # modifier - under line
}
def replace_chars(match):
    char = match.group(0)
    return chars[char]

text = re.sub('(' + '|'.join(chars.keys()) + ')', replace_chars, text)
I think that there is an underlying problem here, and it might be a good idea to investigate and maybe solve it, rather than just trying to cover up the symptoms.
\xc2\x95 is the UTF-8 encoding of the character U+0095, which is a C1 control character (MESSAGE WAITING). It is not surprising that your library cannot handle it. But the question is, how did it get into your data?
Well, one very likely possibility is that it started out as the character 0x95 (BULLET) in the Windows-1252 encoding, was wrongly decoded as U+0095 instead of the correct U+2022, and then encoded into UTF-8. (The Japanese term mojibake describes this kind of mistake.)
If this is correct, then you can recover the original characters by putting them back into Windows-1252 and then decoding them into Unicode correctly this time. (In these examples I am using Python 3.3; these operations are a bit different in Python 2.)
>>> b'\x95'.decode('windows-1252')
'\u2022'
>>> import unicodedata
>>> unicodedata.name(_)
'BULLET'
If you want to do this correction for all the characters in the range 0x80–0x99 that are valid Windows-1252 characters, you can use this approach:
def restore_windows_1252_characters(s):
    """Replace C1 control characters in the Unicode string s by the
    characters at the corresponding code points in Windows-1252,
    where possible.
    """
    import re

    def to_windows_1252(match):
        try:
            return bytes([ord(match.group(0))]).decode('windows-1252')
        except UnicodeDecodeError:
            # No character at the corresponding code point: remove it.
            return ''

    return re.sub(r'[\u0080-\u0099]', to_windows_1252, s)
For example:
>>> restore_windows_1252_characters('\x95\x99\x85')
'•™…'
If you want to remove all non-ASCII characters from a string, you can use
text.encode("ascii", "ignore")
import unicodedata

# Convert to unicode
text_to_unicode = unicode(text, "utf-8")

# Convert back to ascii
text_fixed = unicodedata.normalize('NFKD', text_to_unicode).encode('ascii', 'ignore')
These are not "Unicode characters" - this looks more like a UTF-8 encoded string. (Although the prefix should be \xC3, not \xC2, for most chars.) You should not just throw them away in 95% of the cases, unless you are communicating with a COBOL backend. The world is not limited to 26 characters, you know.
There is a concise read that explains the differences between Unicode strings (what is used as a Unicode object in Python 2 and as strings in Python 3) here: http://www.joelonsoftware.com/articles/Unicode.html - please, for your sake, do read that. Even if you are never planning to have anything that is not English in any of your applications, you will still stumble on symbols like € or º that won't fit in 7-bit ASCII. That article will help you.
That said, maybe the libraries you are using do accept Unicode python objects, and you can transform your UTF-8 Python 2 strings into unicode by doing:
var_unicode = var.decode("utf-8")
If you really need 100% pure ASCII, replacing all non ASCII chars, after decoding the string to unicode, re-encode it to ASCII, telling it to ignore characters that don't fit in the charset with:
var_ascii = var_unicode.encode("ascii", "replace")
These characters are not in the ASCII character set, and that is the reason why you are getting the errors.
To avoid these errors, you can do the following while reading the file.
import codecs
f = codecs.open('file.txt', 'r',encoding='utf-8')
To know more about these kind of errors, go through this link.
I have strings that are multi-lingual, consisting of both languages that use whitespace as word separator (English, French, etc.) and languages that don't (Chinese, Japanese, Korean).
Given such a string, I want to separate the English/French/etc part into words using whitespace as separator, and to separate the Chinese/Japanese/Korean part into individual characters.
And I want to put all of those separated components into a list.
Some examples would probably make this clear:
Case 1: English-only string. This case is easy:
>>> "I love Python".split()
['I', 'love', 'Python']
Case 2: Chinese-only string:
>>> list(u"我爱蟒蛇")
[u'\u6211', u'\u7231', u'\u87d2', u'\u86c7']
In this case I can turn the string into a list of Chinese characters. But within the list I'm getting unicode representations:
[u'\u6211', u'\u7231', u'\u87d2', u'\u86c7']
How do I get it to display the actual characters instead of the unicode? Something like:
['我', '爱', '蟒', '蛇']
??
Case 3: A mix of English & Chinese:
I want to take an input string such as
"我爱Python"
and turn it into a list like this:
['我', '爱', 'Python']
Is it possible to do something like that?
I thought I'd show the regex approach, too. It doesn't feel right to me, but that's mostly because all of the language-specific i18n oddnesses I've seen make me worry that a regular expression might not be flexible enough for all of them--but you may well not need any of that. (In other words--overdesign.)
# -*- coding: utf-8 -*-
import re

def group_words(s):
    regex = []

    # Match a whole word:
    regex += [ur'\w+']

    # Match a single CJK character:
    regex += [ur'[\u4e00-\ufaff]']

    # Match one of anything else, except for spaces:
    regex += [ur'[^\s]']

    regex = "|".join(regex)
    r = re.compile(regex)
    return r.findall(s)

if __name__ == "__main__":
    print group_words(u"Testing English text")
    print group_words(u"我爱蟒蛇")
    print group_words(u"Testing English text我爱蟒蛇")
In practice, you'd probably want to only compile the regex once, not on each call. Again, filling in the particulars of character grouping is up to you.
In Python 3, this also splits out numbers if you need that:

import re

def spliteKeyWord(str):
    regex = r"[\u4e00-\ufaff]|[0-9]+|[a-zA-Z]+\'*[a-z]*"
    matches = re.findall(regex, str, re.UNICODE)
    return matches

print(spliteKeyWord("Testing English text我爱Python123"))
=> ['Testing', 'English', 'text', '我', '爱', 'Python', '123']
Formatting a list shows the repr of its components. If you want to view the strings naturally rather than escaped, you'll need to format it yourself. (repr should not be escaping these characters; repr(u'我') should return "u'我'", not "u'\\u6211'". Apparently this does happen in Python 3; only 2.x is stuck with the English-centric escaping for Unicode strings.)
A basic algorithm you can use is assigning a character class to each character, then grouping letters by class. Starter code is below.
I didn't use a doctest for this because I hit some odd encoding issues that I don't want to look into (out of scope). You'll need to implement a correct grouping function.
Note that if you're using this for word wrapping, there are other per-language considerations. For example, you don't want to break on non-breaking spaces; you do want to break on hyphens; for Japanese you don't want to split apart きゅ; and so on.
# -*- coding: utf-8 -*-
import itertools, unicodedata

def group_words(s):
    # This is a closure for key(), encapsulated in an array to work around
    # 2.x's lack of the nonlocal keyword.
    sequence = [0x10000000]

    def key(part):
        val = ord(part)
        if part.isspace():
            return 0

        # This is incorrect, but serves this example; finding a more
        # accurate categorization of characters is up to the user.
        asian = unicodedata.category(part) == "Lo"
        if asian:
            # Never group asian characters, by returning a unique value for each one.
            sequence[0] += 1
            return sequence[0]

        return 2

    result = []
    for key, group in itertools.groupby(s, key):
        # Discard groups of whitespace.
        if key == 0:
            continue

        str = "".join(group)
        result.append(str)

    return result

if __name__ == "__main__":
    print group_words(u"Testing English text")
    print group_words(u"我爱蟒蛇")
    print group_words(u"Testing English text我爱蟒蛇")
Modified Glenn's solution to drop symbols and work for Russian, French, etc. alphabets:
import re

def rec_group_words():
    regex = []

    # Match a whole word:
    regex += [r'[A-Za-z0-9\xc0-\xff]+']

    # Match a single CJK character:
    regex += [r'[\u4e00-\ufaff]']

    regex = "|".join(regex)
    return re.compile(regex)
The following works for Python 3.7:

import re

def group_words(s):
    return re.findall(u'[\u4e00-\u9fff]|[a-zA-Z0-9]+', s)

if __name__ == "__main__":
    print(group_words(u"Testing English text"))
    print(group_words(u"我爱蟒蛇"))
    print(group_words(u"Testing English text我爱蟒蛇"))
['Testing', 'English', 'text']
['我', '爱', '蟒', '蛇']
['Testing', 'English', 'text', '我', '爱', '蟒', '蛇']
For some reason, I cannot adapt Glenn Maynard's answer to python3.
I used to run
$s =~ s/[^[:print:]]//g;
in Perl to get rid of non-printable characters.
In Python there are no POSIX regex classes, and I can't write [:print:] and have it mean what I want. I know of no way in Python to detect whether a character is printable or not.
What would you do?
EDIT: It has to support Unicode characters as well. The string.printable way will happily strip them out of the output.
curses.ascii.isprint will return False for any non-ASCII Unicode character.
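A quick check confirms that limitation:

>>> from curses.ascii import isprint
>>> isprint('x'), isprint(u'é')
(True, False)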
Iterating over strings is unfortunately rather slow in Python. Regular expressions are over an order of magnitude faster for this kind of thing. You just have to build the character class yourself. The unicodedata module is quite helpful for this, especially the unicodedata.category() function. See Unicode Character Database for descriptions of the categories.
import unicodedata, re, itertools, sys

all_chars = (chr(i) for i in range(sys.maxunicode))
categories = {'Cc'}
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) in categories)
# or equivalently and much more efficiently
control_chars = ''.join(map(chr, itertools.chain(range(0x00, 0x20), range(0x7f, 0xa0))))

control_char_re = re.compile('[%s]' % re.escape(control_chars))

def remove_control_chars(s):
    return control_char_re.sub('', s)
For Python 2:

import unicodedata, re, sys

all_chars = (unichr(i) for i in xrange(sys.maxunicode))
categories = {'Cc'}
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) in categories)
# or equivalently and much more efficiently
control_chars = ''.join(map(unichr, range(0x00, 0x20) + range(0x7f, 0xa0)))

control_char_re = re.compile('[%s]' % re.escape(control_chars))

def remove_control_chars(s):
    return control_char_re.sub('', s)
For some use-cases, additional categories (e.g. all from the control group) might be preferable, although this might slow down the processing time and increase memory usage significantly. Number of characters per category:
Cc (control): 65
Cf (format): 161
Cs (surrogate): 2048
Co (private-use): 137468
Cn (unassigned): 836601
Edit: added suggestions from the comments.
As far as I know, the most pythonic/efficient method would be:
import string
filtered_string = filter(lambda x: x in string.printable, myStr)
You could try setting up a filter using the unicodedata.category() function:
import unicodedata

printable = {'Lu', 'Ll'}

def filter_non_printable(str):
    return ''.join(c for c in str if unicodedata.category(c) in printable)
See Table 4-9 on page 175 in the Unicode database character properties for the available categories
The following will work with Unicode input and is rather fast...
import sys

# build a table mapping all non-printable characters to None
NOPRINT_TRANS_TABLE = {
    i: None for i in range(0, sys.maxunicode + 1) if not chr(i).isprintable()
}

def make_printable(s):
    """Replace non-printable characters in a string."""
    # the translate method on str removes characters
    # that map to None from the string
    return s.translate(NOPRINT_TRANS_TABLE)

assert make_printable('Café') == 'Café'
assert make_printable('\x00\x11Hello') == 'Hello'
assert make_printable('') == ''
My own testing suggests this approach is faster than functions that iterate over the string and return a result using str.join.
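If you want to reproduce that comparison yourself, a rough timeit sketch (numbers will vary by machine and input):

import timeit

setup = "from __main__ import make_printable; s = 'Hello\\x00\\x11 world' * 1000"
print(timeit.timeit('make_printable(s)', setup=setup, number=100))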
In Python 3,
def filter_nonprintable(text):
    import itertools
    # Use characters of control category
    nonprintable = itertools.chain(range(0x00, 0x20), range(0x7f, 0xa0))
    # Use translate to remove all non-printable characters
    return text.translate({character: None for character in nonprintable})
See this StackOverflow post on removing punctuation for how .translate() compares to regex & .replace()
The ranges can be generated via nonprintable = (ord(c) for c in (chr(i) for i in range(sys.maxunicode)) if unicodedata.category(c)=='Cc') using the Unicode character database categories as shown by #Ants Aasma.
This function uses a generator expression and str.join, so it runs in linear time instead of O(n^2):
from curses.ascii import isprint

def printable(input):
    return ''.join(char for char in input if isprint(char))
Yet another option in Python 3:

import re
import string

re.sub(f'[^{re.escape(string.printable)}]', '', my_string)
Based on #Ber's answer, I suggest removing only control characters as defined in the Unicode character database categories:
import unicodedata

def filter_non_printable(s):
    return ''.join(c for c in s if not unicodedata.category(c).startswith('C'))
The best I've come up with now is (thanks to the python-izers above)
def filter_non_printable(str):
    return ''.join([c for c in str if ord(c) > 31 or ord(c) == 9])
This is the only way I've found out that works with Unicode characters/strings
Any better options?
In Python there's no POSIX regex classes
There are when using the regex library: https://pypi.org/project/regex/
It is well maintained and supports Unicode regex, Posix regex and many more. The usage (method signatures) is very similar to Python's re.
From the documentation:
[[:alpha:]]; [[:^alpha:]]
POSIX character classes are supported. These
are normally treated as an alternative form of \p{...}.
(I'm not affiliated, just a user.)
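With that, the Perl one-liner from the question translates almost directly; a sketch, assuming regex supports the print class in the same way as the alpha class quoted above:

import regex

# POSIX class inside a set, mirroring Perl's s/[^[:print:]]//g
print(regex.sub(r'[[:^print:]]', '', 'Hello\x00\x7fWorld'))  # HelloWorld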
An elegant pythonic solution for stripping 'non-printable' characters from a string in Python is to use the isprintable() string method together with a generator expression or list comprehension, depending on the use case, i.e. the size of the string:
''.join(c for c in my_string if c.isprintable())
str.isprintable()
Return True if all characters in the string are printable or the string is empty, False otherwise. Nonprintable characters are those characters defined in the Unicode character database as “Other” or “Separator”, excepting the ASCII space (0x20) which is considered printable. (Note that printable characters in this context are those which should not be escaped when repr() is invoked on a string. It has no bearing on the handling of strings written to sys.stdout or sys.stderr.)
The one below performs faster than the others above; take a look:

import string

''.join([x if x in string.printable else '' for x in Str])
Adapted from answers by Ants Aasma and shawnrad:
nonprintable = set(map(chr, list(range(0,32)) + list(range(127,160))))
ord_dict = {ord(character):None for character in nonprintable}
def filter_nonprintable(text):
return text.translate(ord_dict)
#use
str = "this is my string"
str = filter_nonprintable(str)
print(str)
tested on Python 3.7.7
To remove 'whitespace',
import re
t = """
\n\t<p> </p>\n\t<p> </p>\n\t<p> </p>\n\t<p> </p>\n\t<p>
"""
pat = re.compile(r'[\t\n]')
print(pat.sub("", t))
Error description
Running copied-and-pasted Python code reports:
Python invalid non-printable character U+00A0
Cause of the error
The 'space' in the copied code is not an ordinary ASCII space but a no-break space (U+00A0), which Python's tokenizer rejects.
Solution
Delete the offending space and re-type it as a normal space, then run the code again.
Source: Python invalid non-printable character U+00A0
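If retyping by hand is impractical, here is a small sketch that normalizes a pasted source file in bulk (script.py is a hypothetical file name):

from pathlib import Path

# replace no-break spaces (U+00A0) with ordinary ASCII spaces
path = Path('script.py')
path.write_text(path.read_text(encoding='utf-8').replace('\u00a0', ' '), encoding='utf-8')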
I used this:
import sys
import unicodedata

# the test string has embedded characters, \u2069 \u2068
test_string = """"ABC. 6", "}"""

nonprintable = list((ord(c) for c in (chr(i) for i in range(sys.maxunicode)) if
                     unicodedata.category(c) in ['Cc', 'Cf']))
translate_dict = {character: None for character in nonprintable}

print("Before translate, using repr()", repr(test_string))
print("After translate, using repr()", repr(test_string.translate(translate_dict)))