I used to run
$s =~ s/[^[:print:]]//g;
in Perl to get rid of non-printable characters.
In Python there are no POSIX regex classes, so I can't write [:print:] and have it mean what I want, and I know of no built-in way to detect whether a character is printable.
What would you do?
EDIT: It has to support Unicode characters as well. The string.printable approach will happily strip them from the output.
curses.ascii.isprint returns False for any Unicode character.
Iterating over strings is unfortunately rather slow in Python. Regular expressions are over an order of magnitude faster for this kind of thing. You just have to build the character class yourself. The unicodedata module is quite helpful for this, especially the unicodedata.category() function. See Unicode Character Database for descriptions of the categories.
import unicodedata, re, itertools, sys
all_chars = (chr(i) for i in range(sys.maxunicode))
categories = {'Cc'}
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) in categories)
# or equivalently and much more efficiently
control_chars = ''.join(map(chr, itertools.chain(range(0x00,0x20), range(0x7f,0xa0))))
control_char_re = re.compile('[%s]' % re.escape(control_chars))
def remove_control_chars(s):
    return control_char_re.sub('', s)
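A quick sanity check of the function above (all three escapes are in category Cc):
>>> remove_control_chars('abc\x00\x1f\x7fdef')
'abcdef'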
For Python 2:
import unicodedata, re, sys
all_chars = (unichr(i) for i in xrange(sys.maxunicode))
categories = {'Cc'}
control_chars = ''.join(c for c in all_chars if unicodedata.category(c) in categories)
# or equivalently and much more efficiently
control_chars = ''.join(map(unichr, range(0x00,0x20) + range(0x7f,0xa0)))
control_char_re = re.compile('[%s]' % re.escape(control_chars))
def remove_control_chars(s):
    return control_char_re.sub('', s)
For some use-cases, additional categories (e.g. everything from the "C" group) might be preferable, although this might slow down processing and increase memory usage significantly (a per-character sketch follows the counts below). Number of characters per category:
Cc (control): 65
Cf (format): 161
Cs (surrogate): 2048
Co (private-use): 137468
Cn (unassigned): 836601
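If you do want that wider net, a simple per-character test avoids building an enormous regex class. A minimal sketch (the helper name is mine), slower than the compiled regex above but easy to adjust:
import unicodedata

def remove_c_categories(s, categories=('Cc', 'Cf')):
    # Drop every character whose Unicode category is in `categories`.
    # Extend the tuple to ('Cc', 'Cf', 'Cs', 'Co', 'Cn') to cover the
    # whole "C" group, at the cost described above.
    return ''.join(ch for ch in s if unicodedata.category(ch) not in categories)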
Edit: Adding suggestions from the comments.
As far as I know, the most Pythonic/efficient method would be:
import string
filtered_string = filter(lambda x: x in string.printable, myStr)
(On Python 3, filter() returns an iterator, so wrap it in ''.join(...) to get a string back.)
You could try setting up a filter using the unicodedata.category() function:
import unicodedata
printable = {'Lu', 'Ll'}
def filter_non_printable(s):
    return ''.join(c for c in s if unicodedata.category(c) in printable)
See Table 4-9 on page 175 in the Unicode database character properties for the available categories
The following will work with Unicode input and is rather fast...
import sys
# build a table mapping all non-printable characters to None
NOPRINT_TRANS_TABLE = {
    i: None for i in range(0, sys.maxunicode + 1) if not chr(i).isprintable()
}
def make_printable(s):
    """Remove non-printable characters from a string."""
    # the translate method on str removes characters
    # that map to None from the string
    return s.translate(NOPRINT_TRANS_TABLE)
assert make_printable('Café') == 'Café'
assert make_printable('\x00\x11Hello') == 'Hello'
assert make_printable('') == ''
My own testing suggests this approach is faster than functions that iterate over the string and return a result using str.join.
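If you want to reproduce that comparison, a rough sketch with timeit (numbers vary by machine; make_printable is the function defined above):
import timeit

s = 'Hello\x00 world\x11 with\x07 control chars ' * 1000

print(timeit.timeit(lambda: make_printable(s), number=100))
print(timeit.timeit(lambda: ''.join(c for c in s if c.isprintable()), number=100))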
In Python 3,
def filter_nonprintable(text):
    import itertools
    # Use characters of control category
    nonprintable = itertools.chain(range(0x00, 0x20), range(0x7f, 0xa0))
    # Use translate to remove all non-printable characters
    return text.translate({character: None for character in nonprintable})
See this StackOverflow post on removing punctuation for how .translate() compares to regex & .replace()
The ranges can be generated via nonprintable = (ord(c) for c in (chr(i) for i in range(sys.maxunicode)) if unicodedata.category(c) == 'Cc') (with sys and unicodedata imported), using the Unicode character database categories as shown by @Ants Aasma.
This function uses a generator expression with str.join, so it runs in linear time instead of O(n^2):
from curses.ascii import isprint
def printable(input):
    return ''.join(char for char in input if isprint(char))
Yet another option in Python 3:
import re, string

re.sub(f'[^{re.escape(string.printable)}]', '', my_string)
Based on @Ber's answer, I suggest removing only control characters as defined in the Unicode character database categories:
import unicodedata
def filter_non_printable(s):
    return ''.join(c for c in s if not unicodedata.category(c).startswith('C'))
The best I've come up with so far is (thanks to the python-izers above):
def filter_non_printable(s):
    return ''.join([c for c in s if ord(c) > 31 or ord(c) == 9])
This is the only way I've found out that works with Unicode characters/strings
Any better options?
In Python there's no POSIX regex classes
There are when using the regex library: https://pypi.org/project/regex/
It is well maintained and supports Unicode regexes, POSIX regexes and much more. The usage (method signatures) is very similar to Python's re.
From the documentation:
[[:alpha:]]; [[:^alpha:]]
POSIX character classes are supported. These
are normally treated as an alternative form of \p{...}.
(I'm not affiliated, just a user.)
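For instance, the Perl one-liner from the question carries over almost verbatim. A sketch, assuming the module's POSIX class support covers [:print:] as the quoted documentation suggests:
import regex

s = 'Hello\x00\x1f world'
print(regex.sub(r'[^[:print:]]', '', s))  # 'Hello world'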
An elegant, Pythonic solution for stripping non-printable characters from a string is the isprintable() string method together with a generator expression or list comprehension, depending on the use case (i.e. the size of the string):
''.join(c for c in my_string if c.isprintable())
str.isprintable()
Return True if all characters in the string are printable or the string is empty, False otherwise. Nonprintable characters are those characters defined in the Unicode character database as “Other” or “Separator”, excepting the ASCII space (0x20) which is considered printable. (Note that printable characters in this context are those which should not be escaped when repr() is invoked on a string. It has no bearing on the handling of strings written to sys.stdout or sys.stderr.)
The one below performed faster than the others above in my testing. Take a look:
import string
''.join([x if x in string.printable else '' for x in Str])
Adapted from answers by Ants Aasma and shawnrad:
nonprintable = set(map(chr, list(range(0,32)) + list(range(127,160))))
ord_dict = {ord(character):None for character in nonprintable}
def filter_nonprintable(text):
    return text.translate(ord_dict)
# usage
s = "this is my string"
s = filter_nonprintable(s)
print(s)
Tested on Python 3.7.7.
To remove whitespace such as tabs and newlines:
import re
t = """
\n\t<p> </p>\n\t<p> </p>\n\t<p> </p>\n\t<p> </p>\n\t<p>
"""
pat = re.compile(r'[\t\n]')
print(pat.sub("", t))
Error description
Running copied-and-pasted Python code reports:
SyntaxError: invalid non-printable character U+00A0
The cause of the error
The "spaces" in the copied code are non-breaking spaces (U+00A0), which Python does not accept as whitespace.
Solution
Delete the offending spaces and retype them as ordinary spaces, then run again.
Source: Python invalid non-printable character U+00A0
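If you would rather repair the string programmatically than retype it, a minimal sketch that swaps non-breaking spaces (U+00A0) for ordinary ones:
pasted = 'x\u00a0=\u00a01  # copied from a web page'
fixed = pasted.replace('\u00a0', ' ')
print(fixed)  # x = 1  # copied from a web page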
I used this:
import sys
import unicodedata
# the test string has embedded characters, \u2069 \u2068
test_string = """"ABC. 6", "}"""
nonprintable = list(ord(c) for c in (chr(i) for i in range(sys.maxunicode))
                    if unicodedata.category(c) in ['Cc', 'Cf'])
translate_dict = {character: None for character in nonprintable}
print("Before translate, using repr()", repr(test_string))
print("After translate, using repr()", repr(test_string.translate(translate_dict)))
Related
I'm working with a .txt file. I want a string of the text from the file with no non-ASCII characters. However, I want to leave spaces and periods. At present, I'm stripping those too. Here's the code:
def onlyascii(char):
    if ord(char) < 48 or ord(char) > 127:
        return ''
    else:
        return char
def get_my_string(file_path):
    f = open(file_path, 'r')
    data = f.read()
    f.close()
    filtered_data = filter(onlyascii, data)
    filtered_data = filtered_data.lower()
    return filtered_data
How should I modify onlyascii() to leave spaces and periods? I imagine it's not too complicated but I can't figure it out.
You can filter all characters from the string that are not printable using string.printable, like this:
>>> s = "some\x00string. with\x15 funny characters"
>>> import string
>>> printable = set(string.printable)
>>> filter(lambda x: x in printable, s)
'somestring. with funny characters'
string.printable on my machine contains:
0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ
!"#$%&\'()*+,-./:;<=>?#[\\]^_`{|}~ \t\n\r\x0b\x0c
EDIT: On Python 3, filter returns an iterator, so the correct way to get a string back is:
''.join(filter(lambda x: x in printable, s))
An easy way to change to a different codec, is by using encode() or decode(). In your case, you want to convert to ASCII and ignore all symbols that are not supported. For example, the Swedish letter å is not an ASCII character:
>>> s = u'Good bye in Swedish is Hej d\xe5'
>>> s = s.encode('ascii', errors='ignore')
>>> print s
Good bye in Swedish is Hej d
Edit:
Python3: str -> bytes -> str
>>>"Hej då".encode("ascii", errors="ignore").decode()
'hej d'
Python2: unicode -> str -> unicode
>>> u"hej då".encode("ascii", errors="ignore").decode()
u'hej d'
Python2: str -> unicode -> str (decode and encode in reverse order)
>>> "hej d\xe5".decode("ascii", errors="ignore").encode()
'hej d'
According to @artfulrobot, this should be faster than filter and lambda:
import re
re.sub(r'[^\x00-\x7f]', r'', your_non_ascii_string)
See more examples here: Replace non-ASCII characters with a single space
You may use the following code to remove non-English letters:
import re
s = "123456790 ABC#%? .(朱惠英)"
result = re.sub(r'[^\x00-\x7f]', r'', s)
print(result)
This will return
123456790 ABC#%? .()
Your question is ambiguous; the first two sentences taken together imply that you believe that space and "period" are non-ASCII characters. This is incorrect. All chars such that ord(char) <= 127 are ASCII characters. For example, your function excludes these characters !"#$%&\'()*+,-./ but includes several others e.g. []{}.
Please step back, think a bit, and edit your question to tell us what you are trying to do, without mentioning the word ASCII, and why you think that chars such that ord(char) >= 128 are ignorable. Also: which version of Python? What is the encoding of your input data?
Please note that your code reads the whole input file as a single string, and your comment ("great solution") to another answer implies that you don't care about newlines in your data. If your file contains two lines like this:
this is line 1
this is line 2
the result would be 'this is line 1this is line 2' ... is that what you really want?
A greater solution would include:
a better name for the filter function than onlyascii
recognition that a filter function merely needs to return a truthy value if the argument is to be retained:
def filter_func(char):
    return char == '\n' or 32 <= ord(char) <= 126

# and later:
filtered_data = filter(filter_func, data).lower()
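(That is Python 2 style. On Python 3, filter() returns an iterator, so a sketch of the equivalent would be:)
filtered_data = ''.join(filter(filter_func, data)).lower()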
Working my way through Fluent Python (Ramalho) - highly recommended.
List comprehension one-ish-liners inspired by Chapter 2:
onlyascii = ''.join([s for s in data if ord(s) < 127])
onlymatch = ''.join([s for s in data if s in
                     'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'])
If you want printable ASCII characters, you probably should correct your code to:
if ord(char) < 32 or ord(char) > 126: return ''
This is equivalent to string.printable (answer from @jterrace), except for the absence of returns and tabs ('\t', '\n', '\x0b', '\x0c' and '\r'), but it doesn't correspond to the range in your question.
This is a robust way to get printable ASCII characters while handling both str and bytes input and decoding errors:
from string import printable

def getOnlyCharacters(texts):
    _type = None
    result = ''
    if type(texts).__name__ == 'bytes':
        _type = 'bytes'
        texts = texts.decode('utf-8', 'ignore')
    else:
        _type = 'str'
        texts = bytes(texts, 'utf-8').decode('utf-8', 'ignore')
        texts = str(texts)
    for text in texts:
        if text in printable:
            result += text
    if _type == 'bytes':
        result = result.encode('utf-8')
    return result
text = '�Ahm�����ed Sheri��'
result = getOnlyCharacters(text)
print(result)
# input  --> �Ahm�����ed Sheri��
# output --> Ahmed Sheri
I am looking for a function that combines the methods isalpha() and isspace() into a single method.
I want to check if a given string only contains letters and/or spaces, for example:
"This is text".isalpha_or_space()
# True
However, with the 2 methods, I get:
"This is text".isalpha() or "This is text".isspace()
# False
as the string is not only alpha nor space.
Of course, I could iterate over every character and check it for space or alpha.
I could also compare the string with ("abcdefghijklmnopqrstuvwxyz" + " ")
However, both of these approaches don't seem very pythonic to me - convince me otherwise.
The most Pythonic will be to use a def for this:
def isalpha_or_space(self):
    if self == "":
        return False
    for char in self:
        if not (char.isalpha() or char.isspace()):
            return False
    return True
It is not easy to contribute this as a method on str, since Python does not encourage the monkeypatching of built-in types. My recommendation is just to leave this as a module level function.
Nonetheless, it is still possible to mimic the interface of a method, since most namespaces in Python are writable if you know where to find them. The suggestion below is not Pythonic, and relies on implementation detail.
>>> import gc
>>> def monkeypatch(type_, func):
... gc.get_referents(type_.__dict__)[0][func.__name__] = func
...
>>> monkeypatch(str, isalpha_or_space)
>>> "hello world".isalpha_or_space()
True
Use a regular expression (regex):
>>> import re
>>> result = re.match(r'[a-zA-Z\s]+$', "This is text")
>>> bool(result)
True
Breakdown:
re - Python's regex module
[a-zA-Z\s] - Any letter or whitespace
+ - One or more of the previous item
$ - End of string
The above works with ASCII letters. For the full Unicode range on Python 3, unfortunately the regex is a bit complicated:
>>> result = re.match(r'([^\W\d_]|\s)+$', 'un café')
Breakdown:
(x|y) - x or y
[^\W\d_] - Any word character except a number or an underscore
From Mark Tolonen's answer on How to match all unicode alphabetic characters and spaces in a regex?
You can use the following solution:
s != '' and all(c.isalpha() or c.isspace() for c in s)
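For example, wrapped as a helper and run against strings like the question's (a quick sketch; the function name mirrors the one the question wishes existed):
def isalpha_or_space(s):
    return s != '' and all(c.isalpha() or c.isspace() for c in s)

print(isalpha_or_space("This is text"))   # True
print(isalpha_or_space("Text with 123"))  # False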
What is the simplest way to strip the character modifiers from a unicode string in Python?
For example:
A͋͠r͍̞̫̜͌ͦ̈́͐ͅt̼̭͞h́u̡̙̞̘̙̬͖͓rͬͣ̐ͮͥͨ̀͏̣ should become Arthur
I tried the docs but I couldn't find anything that does this.
Try this:
import unicodedata
a = u"STRING GOES HERE"  # using an actual string would break Stack Overflow's code formatting
u"".join(x for x in a if not unicodedata.category(x).startswith("M"))
This will remove all characters classified as marks, which is what I think you want. In general, you can get the category of a character with unicodedata.category.
You could also use r'\p{M}', which is supported by the regex module:
import regex
def remove_marks(text):
    return regex.sub(ur"\p{M}+", "", text)
Example:
>>> print s
A͋͠r͍̞̫̜t̼̭͞h́u̡̙̞̘rͬͣ̐ͮ
>>> def remove_marks(text):
...     return regex.sub(ur"\p{M}+", "", text)
...
>>> print remove_marks(s)
Arthur
Depending on your use-case a whitelist approach might be better e.g., to limit the input only to ascii characters:
>>> s.encode('ascii', 'ignore').decode('ascii')
u'Arthur'
The result might depend on Unicode normalization used in the text.
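If the input may contain precomposed characters, one refinement (a sketch, not from the answers above) is to decompose with unicodedata.normalize('NFD', ...) before stripping marks. Beware that NFD splits 'é' into 'e' plus a combining accent, so accents on ordinary letters are removed too:
import unicodedata

def remove_marks(text):
    decomposed = unicodedata.normalize('NFD', text)
    # drop anything in the mark ("M") categories
    return ''.join(c for c in decomposed if not unicodedata.category(c).startswith('M'))

print(remove_marks('café'))  # 'cafe'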
I have 3 APIs that return JSON data to 3 dictionary variables. I am taking some of the values from the dictionaries to process them. I read the specific values that I want into the list valuelist. One of the steps is to remove the punctuation from them. I normally use string.translate(None, string.punctuation) for this, but because the dictionary data is unicode I get the error:
wordlist = [s.translate(None, string.punctuation) for s in valuelist]
TypeError: translate() takes exactly one argument (2 given)
Is there a way around this? Either by encoding the unicode or a replacement for string.translate?
The translate method works differently on Unicode objects than on byte-string objects:
>>> help(unicode.translate)
S.translate(table) -> unicode
Return a copy of the string S, where all characters have been mapped
through the given translation table, which must be a mapping of
Unicode ordinals to Unicode ordinals, Unicode strings or None.
Unmapped characters are left untouched. Characters mapped to None
are deleted.
So your example would become:
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)
word_list = [s.translate(remove_punctuation_map) for s in value_list]
Note however that string.punctuation only contains ASCII punctuation. Full Unicode has many more punctuation characters, but it all depends on your use case.
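If you do need Unicode-wide punctuation removal, here is a sketch in Python 3 syntax that builds the table from the database's "P" (punctuation) categories instead of string.punctuation; value_list is the list from the question, and building the table over the whole code space takes a moment, so do it once:
import sys
import unicodedata

# map every code point whose category starts with 'P' (punctuation) to None
remove_punct_map = {
    i: None
    for i in range(sys.maxunicode + 1)
    if unicodedata.category(chr(i)).startswith('P')
}

word_list = [s.translate(remove_punct_map) for s in value_list]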
I noticed that string.translate is deprecated. Since you are removing punctuation, not actually translating characters, you can use the re.sub function.
>>> import re
>>> s1="this.is a.string, with; (punctuation)."
>>> s1
'this.is a.string, with; (punctuation).'
>>> re.sub(r"[.\t,:;()]", "", s1, 0, 0)
'thisis astring with punctuation'
>>>
This version lets you map one set of letters to another:
def trans(to_translate):
    tabin = u'привет'
    tabout = u'тевирп'
    tabin = [ord(char) for char in tabin]
    translate_table = dict(zip(tabin, tabout))
    return to_translate.translate(translate_table)
The Python re module allows using a function as the replacement argument; it should take a Match object and return a suitable replacement. We can use this to build a custom character-translation function:
import re
def mk_replacer(oldchars, newchars):
    """A function to build a replacement function"""
    mapping = dict(zip(oldchars, newchars))
    def replacer(match):
        """A replacement function to pass to re.sub()"""
        return mapping.get(match.group(0), "")
    return replacer
An example. Match all lower-case letters ([a-z]), translate 'h' and 'i' to 'H' and 'I' respectively, delete other matches:
>>> re.sub("[a-z]", mk_replacer("hi", "HI"), "hail")
'HI'
As you can see, it may be used with short (incomplete) replacement sets, and it may be used to delete some characters.
A Unicode example:
>>> re.sub("[\W]", mk_replacer(u'\u0435\u0438\u043f\u0440\u0442\u0432', u"EIPRTV"), u'\u043f\u0440\u0438\u0432\u0435\u0442')
u'PRIVET'
As I stumbled upon the same problem, and Simon's answer was the one that helped me solve my case, I thought I'd show a simpler example for clarification:
from collections import defaultdict
And then for the translation, say you'd like to remove '#' and '\r' characters. Note that str.translate looks characters up by code point, so the keys must be ordinals, not one-character strings:
remove_chars_map = defaultdict()
remove_chars_map[ord('#')] = None
remove_chars_map[ord('\r')] = None
new_string = old_string.translate(remove_chars_map)
And an example:
old_string = "word1#\r word2#\r word3#\r"
new_string = "word1 word2 word3"
'#' and '\r' removed
I have multilingual strings consisting of both languages that use whitespace as a word separator (English, French, etc.) and languages that don't (Chinese, Japanese, Korean).
Given such a string, I want to separate the English/French/etc. part into words using whitespace as a separator, and to separate the Chinese/Japanese/Korean part into individual characters.
And I want to put all of those separated components into a list.
Some examples would probably make this clear:
Case 1: English-only string. This case is easy:
>>> "I love Python".split()
['I', 'love', 'Python']
Case 2: Chinese-only string:
>>> list(u"我爱蟒蛇")
[u'\u6211', u'\u7231', u'\u87d2', u'\u86c7']
In this case I can turn the string into a list of Chinese characters. But within the list I'm getting unicode representations:
[u'\u6211', u'\u7231', u'\u87d2', u'\u86c7']
How do I get it to display the actual characters instead of the unicode? Something like:
['我', '爱', '蟒', '蛇']
??
Case 3: A mix of English & Chinese:
I want to take an input string such as
"我爱Python"
and turn it into a list like this:
['我', '爱', 'Python']
Is it possible to do something like that?
I thought I'd show the regex approach, too. It doesn't feel right to me, but that's mostly because all of the language-specific i18n oddities I've seen make me worry that a regular expression might not be flexible enough for all of them; you may well not need any of that. (In other words: overdesign.)
# -*- coding: utf-8 -*-
import re
def group_words(s):
    regex = []
    # Match a whole word:
    regex += [ur'\w+']
    # Match a single CJK character:
    regex += [ur'[\u4e00-\ufaff]']
    # Match one of anything else, except for spaces:
    regex += [ur'[^\s]']
    regex = "|".join(regex)
    r = re.compile(regex)
    return r.findall(s)
if __name__ == "__main__":
    print group_words(u"Testing English text")
    print group_words(u"我爱蟒蛇")
    print group_words(u"Testing English text我爱蟒蛇")
In practice, you'd probably want to only compile the regex once, not on each call. Again, filling in the particulars of character grouping is up to you.
This Python 3 version also splits out numbers, if you need that:
import re

def spliteKeyWord(s):
    regex = r"[\u4e00-\ufaff]|[0-9]+|[a-zA-Z]+\'*[a-z]*"
    matches = re.findall(regex, s, re.UNICODE)
    return matches
print(spliteKeyWord("Testing English text我爱Python123"))
=> ['Testing', 'English', 'text', '我', '爱', 'Python', '123']
Formatting a list shows the repr of its components. If you want to view the strings naturally rather than escaped, you'll need to format them yourself. (repr should not be escaping these characters; repr(u'我') should return "u'我'", not "u'\\u6211'". Apparently this does happen in Python 3; only 2.x is stuck with the English-centric escaping for Unicode strings.)
A basic algorithm you can use is assigning a character class to each character, then grouping letters by class. Starter code is below.
I didn't use a doctest for this because I hit some odd encoding issues that I don't want to look into (out of scope). You'll need to implement a correct grouping function.
Note that if you're using this for word wrapping, there are other per-language considerations. For example, you don't want to break on non-breaking spaces; you do want to break on hyphens; for Japanese you don't want to split apart きゅ; and so on.
# -*- coding: utf-8 -*-
import itertools, unicodedata
def group_words(s):
    # This is a closure for key(), encapsulated in an array to work around
    # 2.x's lack of the nonlocal keyword.
    sequence = [0x10000000]

    def key(part):
        val = ord(part)
        if part.isspace():
            return 0
        # This is incorrect, but serves this example; finding a more
        # accurate categorization of characters is up to the user.
        asian = unicodedata.category(part) == "Lo"
        if asian:
            # Never group asian characters, by returning a unique value for each one.
            sequence[0] += 1
            return sequence[0]
        return 2

    result = []
    for key, group in itertools.groupby(s, key):
        # Discard groups of whitespace.
        if key == 0:
            continue
        str = "".join(group)
        result.append(str)
    return result
if __name__ == "__main__":
    print group_words(u"Testing English text")
    print group_words(u"我爱蟒蛇")
    print group_words(u"Testing English text我爱蟒蛇")
Modified Glenn's solution to drop symbols and work for Russian, French, etc. alphabets:
def rec_group_words():
    regex = []
    # Match a whole word:
    regex += [r'[A-Za-z0-9\xc0-\xff]+']
    # Match a single CJK character:
    regex += [r'[\u4e00-\ufaff]']
    regex = "|".join(regex)
    return re.compile(regex)
The following works on Python 3.7:
import re

def group_words(s):
    return re.findall(u'[\u4e00-\u9fff]|[a-zA-Z0-9]+', s)

if __name__ == "__main__":
    print(group_words(u"Testing English text"))
    print(group_words(u"我爱蟒蛇"))
    print(group_words(u"Testing English text我爱蟒蛇"))
['Testing', 'English', 'text']
['我', '爱', '蟒', '蛇']
['Testing', 'English', 'text', '我', '爱', '蟒', '蛇']
For some reason, I cannot adapt Glenn Maynard's answer to Python 3.
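For anyone in the same spot, a sketch of a Python 3 port of that answer (nonlocal replaces the one-element-list workaround, and print becomes a function):
import itertools
import unicodedata

def group_words(s):
    sequence = 0x10000000

    def key(part):
        nonlocal sequence
        if part.isspace():
            return 0
        # As in the original: treat "Lo" (other letters) as CJK-ish and give
        # each such character a unique key so it is never grouped with others.
        if unicodedata.category(part) == "Lo":
            sequence += 1
            return sequence
        return 2

    result = []
    for k, group in itertools.groupby(s, key):
        if k == 0:
            continue  # discard whitespace groups
        result.append("".join(group))
    return result

print(group_words("Testing English text我爱蟒蛇"))
# ['Testing', 'English', 'text', '我', '爱', '蟒', '蛇']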