How to remove this \xa0 from a string in python? - python

I have the following string:
word = u'Buffalo,\xa0IL\xa060625'
I don't want the "\xa0" in there. How can I get rid of it? The string I want is:
word = 'Buffalo, IL 60625'

The most robust way would be to use the unidecode module to convert all non-ASCII characters to their closest ASCII equivalent automatically.
The character \xa0 (not \xa as you stated) is a NO-BREAK SPACE, and the closest ASCII equivalent would of course be a regular space.
import unidecode
word = unidecode.unidecode(word)

If you know for sure that is the only character you don't want, you can .replace it:
>>> word.replace(u'\xa0', ' ')
u'Buffalo, IL 60625'
If you need to handle all non-ascii characters, encoding and replacing bad characters might be a good start...:
>>> word.encode('ascii', 'replace')
'Buffalo,?IL?60625'
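In Python 3, where strings are Unicode by default, the same two approaches look like this (a small sketch; note that encode returns bytes):

```python
word = 'Buffalo,\xa0IL\xa060625'

# Targeted replacement of the NO-BREAK SPACE:
assert word.replace('\xa0', ' ') == 'Buffalo, IL 60625'

# encode with 'replace' substitutes '?' for every non-ASCII character
# and returns a bytes object in Python 3:
assert word.encode('ascii', 'replace') == b'Buffalo,?IL?60625'
```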

There is no \xa there. If you try to put that into a string literal, you're going to get a syntax error if you're lucky, or it's going to swallow up the next attempted character if you're not, because \x sequences always have to be followed by two hexadecimal digits.
What you have is \xa0, which is an escape sequence for the character U+00A0, aka "NO-BREAK SPACE".
I think you want to replace them with spaces, but whatever you want to do is pretty easy to write:
word.replace(u'\xa0', u' ') # replaced with space
word.replace(u'\xa0', u'0') # closest to what you were literally asking for
word.replace(u'\xa0', u'') # removed completely
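If several different unwanted characters can appear, str.translate handles them all in one pass. A Python 3 sketch; the ZERO WIDTH SPACE entry is just an illustrative extra, not something from the original question:

```python
word = 'Buffalo,\xa0IL\xa060625'

# Map code point -> replacement; a string substitutes, '' deletes.
table = {0x00A0: ' ',   # NO-BREAK SPACE -> regular space
         0x200B: ''}    # ZERO WIDTH SPACE -> removed (illustrative)
assert word.translate(table) == 'Buffalo, IL 60625'
```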

You can use the unicodedata module to normalize away characters like \xa0:
>>> from unicodedata import normalize
>>> normalize('NFKD', word)
u'Buffalo, IL 60625'
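This works because the NFKD compatibility decomposition of NO-BREAK SPACE (U+00A0) is an ordinary space (U+0020). In Python 3:

```python
from unicodedata import normalize

word = 'Buffalo,\xa0IL\xa060625'
# NFKD maps U+00A0 to a plain U+0020 space:
assert normalize('NFKD', word) == 'Buffalo, IL 60625'
```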

This seems to work for getting rid of non-ascii characters:
fixedword = word.encode('ascii','ignore')
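One caveat with 'ignore': the offending characters are dropped entirely rather than replaced, so the spaces vanish too. A Python 3 sketch, where encode returns bytes:

```python
word = 'Buffalo,\xa0IL\xa060625'
fixed = word.encode('ascii', 'ignore')
# The NO-BREAK SPACEs are silently dropped, not turned into spaces:
assert fixed == b'Buffalo,IL60625'
```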

Related

find all the matches for unicodes in a string in python

import re
b="united thats weak. See ya 👋"
print b.decode('utf-8') #output: u'united thats weak. See ya \U0001f44b'
print re.findall(r'[\U0001f600-\U0001f650]',b.decode('utf-8'),flags=re.U) # output: [u'S']
How to get a output \U0001f44b. Please help
Emojis that i need to handle are "😀❤️😁😂😃😄😅😆😇😈😉😊😋😌😍😎😏😐😑😒😓😔😕😖😗😘😙😚😛😜😝😞😟😠😡😢😣😤😥😦😧😨😩😪😫😬😭😮😯😰😱😲😳😴😵😶😷😸😹😺😻😼😽😾😿🙀🙁🙂🙃🙄🙅🙆🙇🙈🙉🙊🙋🙌🙍🙎🙏🚀🚁🚂🚃🚄🚅🚆🚇🚈🚉🚊🚋🚌🚍🚎🚏🚐🚑🚒🚓🚔🚕🚖🚗🚘🚙🚚🚛🚜🚝🚞🚟🚠🚡🚢🚣🚤🚥🚦🚧🚨🚩🚪🚫🚬🚭🚮🚯🚰🚱🚲🚳🚴🚵🚶🚷🚸🚹🚺🚻🚼🚽🚾🚿🛀🛁🛂🛃🛄🛅🛋🛌🛍🛎🛏🛐🛠🛡🛢🛣🛤🛥🛩🛫🛬🛰🛳🤐🤑🤒🤓🤔🤕🤖🤗🤘🦀🦁🦂🦃🦄🧀"
Searching for a unicode range works exactly the same as searching for any sort of character range. But, you'll need to represent the strings correctly. Here is a working example:
#coding: utf-8
import re
b=u"united thats weak. See ya 😇 "
assert re.findall(u'[\U0001f600-\U0001f650]',b) == [u'😇']
assert re.findall(ur'[😀-🙏]',b) == [u'😇']
Notes:
You need #coding: utf-8 or similar on the first or second line of your program.
In your example, the emoji that you used, U+1F44B, is not in the range U+1F600 to U+1F650. In my example, I used one that is.
If you want to use \U to include a unicode character, you can't use the raw string prefix (r'').
But if you use the characters themselves (instead of \U escapes), then you can use the raw string prefix.
You need to ensure that both the pattern and the input string are unicode strings. Neither of them may be UTF8-encoded strings.
But you don't need the re.U flag unless your pattern includes \s, \w, or similar.
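In Python 3 the distinction between byte strings and unicode strings largely disappears, so the same search is shorter (a sketch; 😇 is U+1F607, inside the range):

```python
import re

b = "united thats weak. See ya 😇"
# \U escapes work in ordinary (non-raw) string literals:
assert re.findall('[\U0001f600-\U0001f650]', b) == ['😇']
# ...and the characters themselves work in a raw string:
assert re.findall(r'[😀-🙏]', b) == ['😇']
```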

Regular expression including and excluding characters

I have the following regular expression that almost works fine.
WORD_REGEXP = re.compile(r"[a-zA-Zá-úÁ-Úñ]+")
It includes lower and upper case letters with and without an accent plus the Spanish letter «ñ». Unfortunately, it also includes (I don't know why) characters that are also used in Spanish like «¡» or «¿» which I would like to remove as well.
In a line like ¡España, olé! I would like to extract just España and olé, by means of the regular expression.
How can I exclude these two characters («¿», «¡») in the regular expression?
According to stribizhe, it seems the regex is OK, so the problem must lie elsewhere. I include the full Python code:
import re
linea = "¡Arriba Éspáña, ¿olé!"
WORD_REGEXP = re.compile(r"([a-zA-Zá-úÁ-Úñ]+)", re.UNICODE)
palabras = WORD_REGEXP.findall(linea)
for pal in palabras:
    pal = unicode(pal, 'latin1').encode('latin1', 'replace')
    print pal
The result is the following:
¡Arriba
Éspáña
¿olé
Use the special sequence '\w', according to documentation:
If UNICODE is set, this will match the characters [0-9_] plus whatever is classified as alphanumeric in the Unicode character properties database.
Note, however that your string must be a unicode string:
import re
linea = u"¡Arriba Éspáña, ¿olé!"
regex = re.compile(r"\w+", re.UNICODE)
regex.findall(linea)
# [u'Arriba', u'\xc9sp\xe1\xf1a', u'ol\xe9']
NOTE: The cause of your error is that your regex is being interpreted as UTF-8, e.g.:
Your pattern r'([a-zA-Zá-úÁ-Úñ]+)' is not defined as a unicode string, so it's encoded to UTF-8 by your text editor and read by Python as '([a-zA-Z\xc3\xa1-\xc3\xba\xc3\x81-\xc3\x9a\xc3\xb1]+)'; note the sequences starting with \xc3, which is the UTF-8 lead byte for these characters.
You can confirm that by printing the repr of WORD_REGEXP. So the actual pattern used by the re module is:
patt = r"([a-zA-Zá-úÁ-Úñ]+)"
print patt.decode('latin1')
Or, broken into its components: a-z, A-Z, \xc3, \xa1-\xc3, \xba, \xc3, \x81-\xc3, \x9a, \xc3, \xb1. Simplifying it, you are actually using the pattern a-zA-Z\x81-\xc3 plus a few stray literal bytes. That last range covers a lot of characters!
It's better to use code points. The code points for those characters are
¡ - \xA1
¿ - \xBF
(\x{A1} is PCRE notation; in a Python pattern write \xA1), and they fall outside the range of your accented characters, so a pattern written with explicit escapes works:
[a-zA-Z\xE1-\xFA\xC1-\xDA\xF1]+
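A runnable sketch of this code-point approach, shown in Python 3 where string literals are unicode by default. Note these raw ranges also sweep in × (U+00D7) and ÷ (U+00F7), which is why the \w approach from the accepted answer is usually preferable:

```python
import re

linea = "¡Arriba Éspáña, ¿olé!"
# \xE1-\xFA covers á..ú (including ñ), \xC1-\xDA covers Á..Ú (including É)
WORD_REGEXP = re.compile('[a-zA-Z\xE1-\xFA\xC1-\xDA\xF1]+')
assert WORD_REGEXP.findall(linea) == ['Arriba', 'Éspáña', 'olé']
```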

Python regex sub function with matching group

I'm trying to get a python regex sub function to work but I'm having a bit of trouble. Below is the code that I'm using.
string = 'á:tdfrec'
newString = re.sub(ur"([aeioäëöáéíóàèìò])([aeioäëöáéíóúàèìò]):", ur"\1:\2", string)
#newString = re.sub(ur"([a|e|i|o|ä|ë|ö|á|é|í|ó|à|è|ì|ò])([a|e|i|o|ä|ë|ö|á|é|í|ó|ú|à|è|ì|ò]):", ur"\1:\2", string)
print newString
# a:́tdfrec is printed
So the above code is not working the way that I intend. It's not displaying correctly here, but the printed string has the acute accent over the :. The regex statement is moving the acute accent from over the a to over the :. For the string that I'm declaring, this regex is not supposed to be applied at all. My intention is for this regex statement to apply only to examples like the following:
aä:dtcbd becomes a:ädtcbd
adfseì:gh becomes adfse:ìgh
éò:fdbh becomes é:òfdbh
but my regex statement is being applied and I don't want it to be. I think my problem is the second character set followed by the : (ie á:) is what's causing the regex statement to be applied. I've been staring at this for a while and tried a few other things and I feel like this should work but I'm missing something. Any help is appreciated!
The follow code with re.UNICODE flag also doesn't achieve the desired output:
>>> import re
>>> original = u'á:tdfrec'
>>> pattern = re.compile(ur"([aeioäëöáéíóàèìò])([aeioäëöáéíóúàèìò]):", re.UNICODE)
>>> print pattern.sub(ur'\1:\2', original)
á:tdfrec
Is it because of the diacritic and the tony the pony example for les misérable? The diacritic is on the wrong character after reversing it:
>>> original = u'les misérable'
>>> print ''.join([i for i in reversed(original)])
elbarésim sel
edit: Definitely an issue with the combining diacritics, you need to normalize both the regular expression and the strings you are trying to match. For example:
import unicodedata
regex = unicodedata.normalize('NFC', ur'([aeioäëöáéíóàèìò])([aeioäëöáéíóúàèìò]):')
string = unicodedata.normalize('NFC', u'aä:dtcbd')
newString = re.sub(regex, ur'\1:\2', string)
Here is an example that shows why you might hit an issue without the normalization. The string u'á' could either be the single code point LATIN SMALL LETTER A WITH ACUTE (U+00E1) or it could be two code points, LATIN SMALL LETTER A (U+0061) followed by COMBINING ACUTE ACCENT (U+0301). These will probably look the same, but they will have very different behaviors in a regex because you can match the combining accent as its own character. That is what is happening here with the string u'á:tdfrec': a regular 'a' is captured in group 1, and the combining diacritic is captured in group 2.
By normalizing both the regex and the string you are matching you ensure this doesn't happen, because the NFC normalization will replace the diacritic and the character before it with a single equivalent character.
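A minimal demonstration of the two representations and what NFC does to them (Python 3 sketch):

```python
import unicodedata

precomposed = '\u00e1'     # á as a single code point
decomposed = 'a\u0301'     # 'a' followed by COMBINING ACUTE ACCENT
assert precomposed != decomposed      # render alike, compare unequal
assert len(decomposed) == 2
# NFC composes the pair into the single equivalent code point:
assert unicodedata.normalize('NFC', decomposed) == precomposed
```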
Original answer below.
I think your issue here is that the string you are attempting to do the replacement on is a byte string, not a Unicode string.
If these are string literals make sure you are using the u prefix, e.g. string = u'aä:dtcbd'. If they are not literals you will need to decode them, e.g. string = string.decode('utf-8') (although you may need to use a different codec).
You should probably also normalize your string, because part of the issue may have something to do with combining diacritics.
Note that in this case the re.UNICODE flag will not make a difference, because that only changes the meaning of character class shorthands like \w and \d. The important thing here is that if you are using a Unicode regular expression, it should probably be applied to a Unicode string.

How to remove escape sequence like '\xe2' or '\x0c' in python

I am working on a project (content based search), for that I am using 'pdftotext' command line utility in Ubuntu which writes all the text from pdf to some text file.
But it also writes bullets; now when I'm reading the file to index each word, some escape sequences get indexed too (like '\x01'). I know it's because of the bullets (•).
I want only the text, so is there any way to remove these escape sequences? I have done something like this:
escape_char = re.compile('\+x[0123456789abcdef]*')
re.sub(escape_char, " ", string)
But this does not remove the escape sequences.
Thanks in advance.
The problem is that \xXX is just a representation of a control character, not the character itself. Therefore, you can't literally match \x unless you're working with the repr of the string.
You can remove nonprintable characters using a character class:
re.sub(r'[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\xff]', '', text)
Example:
>>> re.sub(r'[\x00-\x1f\x7f-\xff]', '', ''.join(map(chr, range(256))))
' !"#$%&\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~'
Your only real problem is that backslashes are tricky. In a string, a backslash might be treated specially; for example \t would turn into a tab. Since \+ isn't special in strings, the string was actually what you expected. So then the regular expression compiler looked at it, and \+ in a regular expression would just be a plain + character. Normally the + has a special meaning ("1 or more instances of the preceding pattern") and the backslash escapes it.
The solution is just to double the backslash, which makes a pattern that matches a single backslash.
I put the pattern into r'', to make it a "raw string" where Python leaves backslashes alone. If you don't do that, Python's string parser will turn the two backslashes into a single backslash; just as \t turns into a tab, \\ turns into a single backslash. So, use a raw string and put exactly what you want the regular expression compiler to see.
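A quick way to see the difference between the string-literal layer and the regex layer:

```python
# A raw string keeps the backslash as a literal character (4 chars):
assert r'\x01' == '\\x01'
assert len(r'\x01') == 4
# A normal string interprets \x01 as a single control character:
assert len('\x01') == 1
```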
Also, a better pattern would be: backslash, then an x, then 1 or more instances of the character class matching a hex character. I rewrote the pattern to this.
import re
s = r'+\x01+'
escape_char = re.compile(r'\\x[0123456789abcdef]+')
s = re.sub(escape_char, " ", s)
Instead of using a raw string, you could use a normal string and just be very careful with backslashes. In this case we would have to put four backslashes! The string parser would turn each doubled backslash into a single backslash, and we want the regular expression compiler to see two backslashes. It's easier to just use the raw string!
Also, your original pattern would remove zero or more hex digits. My pattern removes one or more. But I think it is likely that there will always be exactly two hex digits, or perhaps with Unicode maybe there will be four. You should figure out how many there can be and put a pattern that ensures this. Here's a pattern that matches 2, 3, or 4 hex digits:
escape_char = re.compile(r'\\x[0123456789abcdef]{2,4}')
And here is one that matches exactly two or exactly four. We have to use a vertical bar to make two alternatives, and we need to make a group with parentheses. I'm using a non-capturing group here, with (?:pattern) instead of just (pattern) (where pattern means a pattern, not literally the word pattern).
escape_char = re.compile(r'\\x(?:[0123456789abcdef]{2,2}|[0123456789abcdef]{4,4})')
Here is example code. The bullet sequence is immediately followed by a 1 character, and this pattern leaves it alone.
import re
s = r'+\x011+'
pat = re.compile(r'\\x(?:[0123456789abcdef]{2,2}|[0123456789abcdef]{4,4})')
s = pat.sub("#", s)
print("Result: '%s'" % s)
This prints: Result: '+#1+'
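One subtlety worth knowing: regex alternation tries branches left to right, so with the 2-digit branch listed first, a 4-digit sequence is only half-consumed. If four digits should win when available, list that branch first. A sketch with a hypothetical input where four hex digits follow the \x:

```python
import re

s = r'+\x0111+'    # four hex digits follow the literal \x
pat2first = re.compile(r'\\x(?:[0-9a-f]{2}|[0-9a-f]{4})')
pat4first = re.compile(r'\\x(?:[0-9a-f]{4}|[0-9a-f]{2})')
assert pat2first.sub('#', s) == '+#11+'   # consumed only '01'
assert pat4first.sub('#', s) == '+#+'     # consumed all of '0111'
```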
NOTE: all of this is assuming that you actually are trying to match a backslash character followed by hex chars. If you are actually trying to match character byte values that might or might not be "printable" chars, then use the answer by @nneonneo instead of this one.
If you're working with 8-bit char values, it's possible to forgo regexes by building some simple tables beforehand and then using them in conjunction with the str.translate() method to remove unwanted characters in strings very quickly and easily:
import random
import string
allords = [i for i in xrange(256)]
allchars = ''.join(chr(i) for i in allords)
printableords = [ord(ch) for ch in string.printable]
deletechars = ''.join(chr(i) for i in xrange(256) if i not in printableords)
test = ''.join(chr(random.choice(allords)) for _ in xrange(10, 40)) # random string
print test.translate(allchars, deletechars)
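In Python 3, str.translate takes a single mapping instead of the two-argument (table, deletechars) form used above; a sketch of the equivalent:

```python
import string

# Map every code point 0-255 that is not in string.printable to None,
# which str.translate interprets as "delete this character".
delete_map = {i: None for i in range(256)
              if chr(i) not in string.printable}
text = 'bullet:\x01 text\x7f here'
assert text.translate(delete_map) == 'bullet: text here'
```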
Not enough reputation to comment, but the accepted answer removes printable characters as well.
s = "pörféct änßwer"
re.sub(r'[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\xff]', '', s)
'prfct nwer'
For non-English strings, please use answer https://stackoverflow.com/a/62530464/3021668
import unicodedata
''.join(c for c in s if not unicodedata.category(c).startswith('C'))
'pörféct änßwer'

Strip Non alpha numeric characters from string in python but keeping special characters

I know similar questions were asked around here on StackOverflow. I tried to adapt some of the approaches, but I couldn't get anything to work that fits my needs:
Given a Python string I want to strip every non-alphanumeric character - but - leaving any special character like µ æ Å Ç ß... Is this even possible? With regexes I tried variations of this
re.sub(r'[^a-zA-Z0-9: ]', '', x) # x is my string to sanitize
but it strips more than I want. An example of what I want would be:
Input: "A string, with characters µ, æ, Å, Ç, ß,... Some whitespace confusion ?"
Output: "A string with characters µ æ Å Ç ß Some whitespace confusion"
Is this even possible without getting complicated?
Use \w with the UNICODE flag set. This will match the underscore also, so you might need to take care of that separately.
Details on http://docs.python.org/library/re.html.
EDIT: Here is some actual code. It will keep unicode letters, unicode digits, and spaces.
import re
x = u'$a_bßπ7: ^^#p'
pattern = re.compile(r'[^\w\s]', re.U)
re.sub(r'_', '', re.sub(pattern, '', x))
If you did not use re.U then the ß and π characters would have been stripped.
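For reference, in Python 3 str patterns are Unicode-aware by default, so re.U is implied and the same idea becomes:

```python
import re

x = '$a_bßπ7: ^^#p'
# \w already matches Unicode letters/digits; strip '_' in a second pass
result = re.sub('_', '', re.sub(r'[^\w\s]', '', x))
assert result == 'abßπ7 p'
```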
Sorry I can't figure out a way to do this with one regex. If you can, can you post a solution?
Eliminate characters in "Punctuation, Other" Unicode category.
# -*- coding: utf-8 -*-
import unicodedata
# This removes punctuation characters.
def strip_po(s):
    return ''.join(x for x in s if unicodedata.category(x) != 'Po')

# This reduces multiple whitespace characters into a single space.
def fix_space(s):
    return ' '.join(s.split())
s = u'A string, with characters µ, æ, Å, Ç, ß,... Some whitespace confusion ?'
print fix_space(strip_po(s))
You'll have to better define what you mean by special characters. There are certain flags that will group things like whitespace, non-whitespace, digits, etc. and do it specific to a locale. See http://docs.python.org/library/re.html for more details.
However, since this is a character by character operation, you may find it easier to simply explicitly specify every character, or, if the number of characters you want to exclude is smaller, writing an expression that only excludes those.
If you're ok with the Unicode Consortium's classification of what's a letter or a digit, an easy way to do this without RegEx or importing anything outside the built-ins:
filter(unicode.isalnum, u"A string, with characters µ, æ, Å, Ç, ß,... Some whitespace confusion ?")
If you have a str instead of a unicode, you'll need to encode it first.
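In Python 3, str.isalnum is already Unicode-aware, so a comprehension plus a whitespace test reproduces the desired output (a sketch, with split/join added to collapse the leftover spacing):

```python
s = "A string, with characters µ, æ, Å, Ç, ß,... Some whitespace confusion ?"
kept = ''.join(ch for ch in s if ch.isalnum() or ch.isspace())
assert ' '.join(kept.split()) == \
    'A string with characters µ æ Å Ç ß Some whitespace confusion'
```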
