I am processing a large number of CSV files in Python. The files come from external organizations and arrive in a range of encodings. I would like to find an automated method to remove the following:
Non-ASCII Characters
Control characters
Null (ASCII 0) Characters
I have a product called 'Find and Replace It!' that supports regular expressions, so a regular-expression solution to the above would be very helpful.
Thank you
An alternative you might be interested in would be:
import string
clean = lambda dirty: ''.join(filter(string.printable.__contains__, dirty))
It simply filters out all non-printable characters from the dirty string it receives.
>>> len(clean(map(chr, range(0x110000))))
100
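As an aside (my note, not part of the original answer): string.printable contains exactly 100 characters, which explains the result above. Since the in test on a string is a linear scan, a set can speed this up noticeably on large files. A minimal sketch:
import string

PRINTABLE = set(string.printable)  # the same 100 chars: digits, letters, punctuation, whitespace

def clean(dirty):
    # Keep only characters found in string.printable; set gives O(1) membership tests
    return ''.join(ch for ch in dirty if ch in PRINTABLE)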
Try this:
clean = re.sub('[\0\200-\377]', '', dirty)
The idea is to match each NUL or "high ASCII" character (i.e. \0 and those that do not fit in 7 bits) and remove them. You could add more characters as you find them, such as ASCII ESC or BEL.
Or this:
clean = re.sub('[^\040-\176]', '', dirty)
The idea being to only permit the limited range of "printable ASCII," but note that this also removes newlines. If you want to keep newlines or tabs or the like, just add them into the brackets.
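A minimal runnable sketch tying the two patterns together (the sample string is made up for illustration):
import re

dirty = 'col1,col2\x00\x07,caf\xe9\n'
print(repr(re.sub('[\0\200-\377]', '', dirty)))   # 'col1,col2\x07,caf\n' (BEL survives; add \a to the class to drop it)
print(repr(re.sub('[^\n\040-\176]', '', dirty)))  # 'col1,col2,caf\n' (newline kept by adding it to the brackets)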
Replace anything that isn't a desirable character with a blank (delete it):
clean = re.sub(r'[^\s!-~]', '', dirty)
This allows all whitespace (spaces, newlines, tabs, etc.) and all "normal" characters (! at decimal 33 is the first printable ASCII character after the space, and ~ at decimal 126 is the last printable ASCII character below 128).
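For example (sample string assumed; note that in Python 3 \s is Unicode-aware unless you pass re.ASCII, so characters like \u2003 would also be kept):
import re

dirty = 'name\tvalue\r\ncaf\xe9 \x00done'
print(repr(re.sub(r'[^\s!-~]', '', dirty)))  # 'name\tvalue\r\ncaf done'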
Since this shows up on Google and we're no longer targeting Python 2.x, I should probably mention the isprintable method on strings.
It's not perfect, since it treats spaces as printable but newlines and tabs as non-printable, but I'd probably do something like this:
import re
whitespace_normalizer = re.compile(r'\s+', re.UNICODE)
cleaner = lambda instr: ''.join(x for x in whitespace_normalizer.sub(' ', instr) if x.isprintable())
The regex does HTML-like whitespace collapsing (i.e. it converts arbitrary spans of whitespace as defined by Unicode into single spaces) and then the lambda strips any characters other than space that are classified by Unicode as "Separator" or "Other".
Then you get a result like this:
>>> cleaner('foo\0bar\rbaz\nquux\tspam eggs')
'foobar baz quux spam eggs'
I have a long text which I need to be as clean as possible.
I have collapsed multiple spaces into a single space, removed \n and \t, and stripped the resulting string.
I then found characters like \u2003 and \u2019.
What are these? How do I make sure I have removed all special characters from my text?
Besides \n, \t, and \u2003, should I check for more characters to remove?
I am using Python 3.6.
Try this:
import re
# the string contains the \u2003 (em space) and \u2019 (right single quote) characters
string = u'This is a test string ’'
# this regex replaces every run of non-word characters with a single space
re.sub(r'\W+', ' ', string).strip()
Result
'This is a test string'
If you want to preserve ASCII special characters:
re.sub(r'[^!-~]+', ' ', string).strip()
This regex reads: match [not characters 33-126] one or more times, where characters 33-126 are the visible range of ASCII.
In a regex, the ^ at the start of a bracket expression means "not" and the - indicates a range. Looking at an ASCII table, 32 is the space and every character below it is a control character or another form of whitespace like tab and newline. Character 33 is the ! mark, and the last displayable ASCII character is 126, or ~.
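A quick demonstration of the difference (sample string assumed):
import re

s = u'This is a test\u2003string \u2019 (ASCII punctuation survives!)'
print(re.sub(r'[^!-~]+', ' ', s).strip())
# This is a test string (ASCII punctuation survives!)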
Thank you Mike Peder, this solution worked for me. However, I had to apply it to both sides of the comparison:
if (re.sub('[^!-~]+', ' ', date).strip()) == (re.sub('[^!-~]+', ' ', calendarData[i]).strip()):
I've tried several different ways and none of them work.
Suppose I have a string s defined as follows:
s = '[မန္း],[aa]'.decode('utf-8')
Suppose I want to parse the two strings within the square brackets. I've compiled the following regex:
pattern = re.compile(r'\[(\w+)\]', re.UNICODE)
and then I look for occurrences using:
pattern.findall(s, re.UNICODE)
The result is basically just [] instead of the expected list of two matches. Furthermore, if I remove re.UNICODE from the findall call, I get just [u'aa'], i.e. only the ASCII match:
pattern.findall(s)
Of course
s = '[bb],[aa]'.decode('utf-8')
pattern.findall(s)
returns [u'bb', u'aa']
And to make things even more interesting:
s = '[မနbb],[aa]'.decode('utf-8')
pattern.findall(s)
returns [u'\u1019\u1014bb', u'aa']
It's actually rather simple: \w matches alphanumeric characters, and not all of the characters in your initial string are alphanumeric (some of them are combining marks). Note also that pattern.findall(s, re.UNICODE) passes re.UNICODE as the pos start-offset argument, not as a flag, which makes the scan start beyond the end of your short string; that is why that call returns [].
If you still want to match all characters between the brackets, one solution is to match everything but a closing bracket (]). This can be done as:
import re
s = '[မန္း],[aa]'.decode('utf-8')
pattern = re.compile(r'\[([^]]+)\]', re.UNICODE)
re.findall(pattern, s)
where [^]] matches any character except those listed after the circumflex (^), here the closing bracket.
Also note that the re.UNICODE flag is not actually needed any more: it changes the meaning of classes like \w and \s, and this pattern no longer contains any of them.
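For readers on Python 3 (an aside, not part of the original answer), the same approach works without the decode step, since str is already Unicode:
import re

s = '[မန္း],[aa]'
pattern = re.compile(r'\[([^]]+)\]')  # capture everything between brackets except ']'
print(pattern.findall(s))  # ['မန္း', 'aa']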
First, note that the following only works in Python 2.x if you've saved the source file in UTF-8 encoding, and you declare the source code encoding at the top of the file; otherwise, the default encoding of the source is assumed to be ascii:
#coding: utf8
s = '[မန္း],[aa]'.decode('utf-8')
A shorter way to write it is to code a Unicode string directly:
#coding: utf8
s = u'[မန္း],[aa]'
Next, \w matches alphanumeric characters. With the re.UNICODE flag it matches characters that are categorized as alphanumeric in the Unicode database.
Not all of the characters in မန္း are alphanumeric. If you want whatever is between the brackets, use something like the following. Note the use of .*? for a non-greedy match of everything. It's also a good habit to use Unicode strings for all text, and raw strings in particular for regular expressions.
#coding:utf8
import re
s = u'[မန္း],[aa],[မနbb]'
pattern = re.compile(ur'\[(.*?)\]')
print re.findall(pattern,s)
Output:
[u'\u1019\u1014\u1039\u1038', u'aa', u'\u1019\u1014bb']
Note that Python 2 displays an unambiguous version of the strings in lists with escape codes for non-ASCII and non-printable characters.
To see the actual string content, print the strings, not the list:
for item in re.findall(pattern,s):
    print item
Output:
မန္း
aa
မနbb
I am working on a project (content-based search). For that I am using the 'pdftotext' command-line utility on Ubuntu, which writes all the text from a PDF into a text file.
But it also writes bullets; now when I read the file to index each word, some escape sequences get indexed too (like '\x01'). I know it's because of the bullets (•).
I want only text, so is there any way to remove these escape sequences? I have done something like this:
escape_char = re.compile('\+x[0123456789abcdef]*')
re.sub(escape_char, " ", string)
But this does not remove the escape sequences.
Thanks in advance.
The problem is that \xXX is just a representation of a control character, not the character itself. Therefore, you can't literally match \x unless you're working with the repr of the string.
You can remove nonprintable characters using a character class:
re.sub(r'[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\xff]', '', text)
Example:
>>> re.sub(r'[\x00-\x1f\x7f-\xff]', '', ''.join(map(chr, range(256))))
' !"#$%&\'()*+,-./0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~'
Your only real problem is that backslashes are tricky. In a string, a backslash might be treated specially; for example \t would turn into a tab. Since \+ isn't special in strings, the string was actually what you expected. So then the regular expression compiler looked at it, and \+ in a regular expression would just be a plain + character. Normally the + has a special meaning ("1 or more instances of the preceding pattern") and the backslash escapes it.
The solution is just to double the backslash, which makes a pattern that matches a single backslash.
I put the pattern into r'', to make it a "raw string" where Python leaves backslashes alone. If you don't do that, Python's string parser will turn the two backslashes into a single backslash; just as \t turns into a tab, \\ turns into a single backslash. So, use a raw string and put exactly what you want the regular expression compiler to see.
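A tiny illustration of the point (my example):
print(len('\\x'))         # 2: one backslash + 'x'; as a regex pattern this is an invalid/incomplete \x escape
print(len(r'\\x'))        # 3: backslash, backslash, 'x'; the regex engine sees a literal backslash then 'x'
print(r'\\x' == '\\\\x')  # True: the raw string spells the same pattern with half the backslashes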
Also, a better pattern would be: backslash, then an x, then 1 or more instances of the character class matching a hex character. I rewrote the pattern to this.
import re
s = r'+\x01+'
escape_char = re.compile(r'\\x[0123456789abcdef]+')
s = re.sub(escape_char, " ", s)
Instead of using a raw string, you could use a normal string and just be very careful with backslashes. In this case we would have to put four backslashes! The string parser would turn each doubled backslash into a single backslash, and we want the regular expression compiler to see two backslashes. It's easier to just use the raw string!
Also, your original pattern would remove zero or more hex digits. My pattern removes one or more. But I think it is likely that there will always be exactly two hex digits, or perhaps with Unicode maybe there will be four. You should figure out how many there can be and put a pattern that ensures this. Here's a pattern that matches 2, 3, or 4 hex digits:
escape_char = re.compile(r'\\x[0123456789abcdef]{2,4}')
And here is one that matches exactly two or exactly four. We have to use a vertical bar to make two alternatives, and we need to make a group with parentheses. I'm using a non-capturing group here, with (?:pattern) instead of just (pattern) (where pattern means a pattern, not literally the word pattern).
escape_char = re.compile(r'\\x(?:[0123456789abcdef]{2,2}|[0123456789abcdef]{4,4})')
Here is example code. The bullet sequence is immediately followed by a 1 character, and this pattern leaves it alone.
import re
s = r'+\x011+'
pat = re.compile(r'\\x(?:[0123456789abcdef]{2,2}|[0123456789abcdef]{4,4})')
s = pat.sub("#", s)
print("Result: '%s'" % s)
This prints: Result: '+#1+'
NOTE: all of this assumes that you actually are trying to match a backslash character followed by hex chars. If you are actually trying to match character byte values that might or might not be "printable" chars, then use the answer by @nneonneo instead of this one.
If you're working with 8-bit char values (Python 2 str), it's possible to forgo regexes by building some simple tables beforehand and then using them in conjunction with the str.translate() method to remove unwanted characters from strings very quickly and easily:
import random
import string
allords = [i for i in xrange(256)]  # all 8-bit code points
allchars = ''.join(chr(i) for i in allords)  # identity translation table
printableords = [ord(ch) for ch in string.printable]
deletechars = ''.join(chr(i) for i in xrange(256) if i not in printableords)  # every non-printable char
test = ''.join(chr(random.choice(allords)) for _ in xrange(10, 40))  # a 30-character random test string
print test.translate(allchars, deletechars)
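In Python 3, str.translate takes a single mapping instead of the two string arguments, so an equivalent sketch (my adaptation, not the original answer's) would be:
import string

# Map every code point 0-255 whose character is not in string.printable to None (i.e. delete it)
DELETE_MAP = {i: None for i in range(256) if chr(i) not in string.printable}

print('foo\x00bar\x07baz\xe9'.translate(DELETE_MAP))  # foobarbaz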
Not enough reputation to comment, but the accepted answer removes printable characters as well:
s = "pörféct änßwer"
re.sub(r'[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\xff]', '', s)
'prfct nwer'
For non-English strings, please use the answer at https://stackoverflow.com/a/62530464/3021668 instead:
>>> import unicodedata
>>> ''.join(c for c in s if not unicodedata.category(c).startswith('C'))
'pörféct änßwer'
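The 'C' prefix covers the whole Unicode "Other" class (Cc control, Cf format, Cs surrogate, Co private use, Cn unassigned), so wrapped up as a helper (my sketch):
import unicodedata

def strip_other(s):
    # Drop characters whose Unicode major category is 'C' (Other)
    return ''.join(c for c in s if not unicodedata.category(c).startswith('C'))

print(strip_other('pörféct\x00 änßwer\x1b'))  # pörféct änßwer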
I know similar questions have been asked here on StackOverflow. I tried to adapt some of the approaches, but I couldn't get anything to work that fits my needs:
Given a Python string, I want to strip every non-alphanumeric character, but keep special characters like µ æ Å Ç ß... Is this even possible? With regexes I tried variations of this:
re.sub(r'[^a-zA-Z0-9: ]', '', x)  # x is my string to sanitize
but it strips more than I want. An example of what I want would be:
Input: "A string, with characters µ, æ, Å, Ç, ß,... Some whitespace confusion ?"
Output: "A string with characters µ æ Å Ç ß Some whitespace confusion"
Is this even possible without getting complicated?
Use \w with the UNICODE flag set. This will also match the underscore, so you might need to take care of that separately.
Details on http://docs.python.org/library/re.html.
EDIT: Here is some actual code. It will keep Unicode letters, Unicode digits, and spaces.
import re
x = u'$a_bßπ7: ^^#p'
pattern = re.compile(r'[^\w\s]', re.U)
re.sub(r'_', '', re.sub(pattern, '', x))
If you did not use re.U then the ß and π characters would have been stripped.
Sorry I can't figure out a way to do this with one regex. If you can, can you post a solution?
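For what it's worth, the two substitutions appear to fold into one pattern with an alternation (my sketch, not the original author's):
import re

x = u'$a_bßπ7: ^^#p'
# remove anything that is neither a word character nor whitespace, plus the underscore
print(re.sub(r'[^\w\s]|_', '', x, flags=re.UNICODE))  # abßπ7 p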
Eliminate characters in "Punctuation, Other" Unicode category.
# -*- coding: utf-8 -*-
import unicodedata
# This removes punctuation characters.
def strip_po(s):
    return ''.join(x for x in s if unicodedata.category(x) != 'Po')

# This reduces multiple whitespace characters into a single space.
def fix_space(s):
    return ' '.join(s.split())
s = u'A string, with characters µ, æ, Å, Ç, ß,... Some whitespace confusion ?'
print fix_space(strip_po(s))
You'll have to better define what you mean by special characters. There are certain flags that will group things like whitespace, non-whitespace, digits, etc. and do it specific to a locale. See http://docs.python.org/library/re.html for more details.
However, since this is a character by character operation, you may find it easier to simply explicitly specify every character, or, if the number of characters you want to exclude is smaller, writing an expression that only excludes those.
If you're ok with the Unicode Consortium's classification of what's a letter or a digit, an easy way to do this without RegEx or importing anything outside the built-ins:
filter(unicode.isalnum, u"A string, with characters µ, æ, Å, Ç, ß,... Some whitespace confusion ?")
If you have a str instead of a unicode, you'll need to decode it first.
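On Python 3 the equivalent (my adaptation) uses str.isalnum; if you start from bytes, decode first. Note that, like the original, this drops whitespace too:
s = "A string, with characters µ, æ, Å, Ç, ß,... Some whitespace confusion ?"
print(''.join(filter(str.isalnum, s)))
# AstringwithcharactersµæÅÇßSomewhitespaceconfusion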
Is it possible to define that a specific language's characters are considered word characters?
I.e. re does not accept ä, ö as word characters if I search for them in the following way:
Ft=codecs.open('c:\\Python27\\Scripts\\finnish2\\textfields.txt','r','utf-8')
word=Ft.readlines()
word=smart_str(word, encoding='utf-8', strings_only=False, errors='replace')
word=re.sub('[^äÄöÖåÅA-Za-z0-9]',"""\[^A-Za-z0-9]*""", word) ; print 'word= ', word #works in skipping ö,ä,å characters
I would like these characters to be included in [A-Za-z].
How do I define this?
[A-Za-z0-9] will only match the characters listed here, but the docs also mention some other special constructs like:
\w, which stands for alphanumeric characters (namely [a-zA-Z0-9_], plus all Unicode characters that are declared to be alphanumeric)
\W, which stands for all non-alphanumeric characters ([^a-zA-Z0-9_], plus non-alphanumeric Unicode characters)
\d, which stands for digits
\b, which matches word boundaries (including all rules from the Unicode tables)
So, you will want to (a) use these constructs instead (they are shorter and maybe easier to read), and (b) tell re that you want to "localize" those strings with the current locale by setting the UNICODE flag like:
re_word = re.compile(r'\w+', re.U)
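A quick check with Finnish text (Python 2 syntax to match the thread; the sample words are my own):
# -*- coding: utf-8 -*-
import re

re_word = re.compile(r'\w+', re.U)
print re_word.findall(u'tämä ON testi 123')
# [u't\xe4m\xe4', u'ON', u'testi', u'123']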
For a start, you appear to be slightly confused about the args for re.sub.
The first arg is the pattern. You have '[^äÄöÖåÅA-Za-z0-9]' which matches each character that is NOT in the Finnish alphabet and not a digit.
The second arg is the replacement. You have """[^A-Za-z0-9]*""" ... so each of those non-Finnish-alphanumeric characters is going to be replaced by the literal string [^A-Za-z0-9]*. It's reasonable to assume that this is not what you want.
What do you want to do?
You need to explain your third line; after your first 2 lines, word will be a list of unicode objects, which is A Good Thing. However the encoding= and the errors= indicate that the unknown (to us) smart_str() is converting your lovely unicode back to UTF-8. Processing data in UTF-8 bytes instead of Unicode characters is EVIL, unless you know what you are doing.
What encoding directive do you have at the top of your source file?
Advice: Get your data into unicode. Work on it in unicode. All your string constants should have the u prefix; if you consider that too much wear and tear on your typing fingers, at least put it on the non-ASCII constants e.g. u'[^äÄöÖåÅA-Za-z0-9]'. When you have done all the processing, encode your results for display or storage using an appropriate encoding.
When working with re, consider \w which will match any alphanumeric (and also the underscore) instead of listing out what is alphabetic in one language. Do use the re.UNICODE flag; docs here.
Something like this might do the trick:
pattern = re.compile("(?u)pattern")
or
pattern = re.compile("pattern", re.UNICODE)