how can I deal with the garbled string like '\xe7\xbe\x8e'? - python

I have a list of words like s = ['a','\xe7\xbe\x8e\xe7','b'], and I want to remove members like '\xe7\xbe\x8e\xe7', but I cannot think of a useful method. I have never dealt with this kind of encoded or decoded string before. Any suggestions in Python would be appreciated. Thanks!

def is_ascii(s):
    return all(ord(c) < 128 for c in s)

s = [e for e in s if is_ascii(e)]
Try this. It will remove entries containing non-ASCII characters (like '\xe7\xbe\x8e\xe7'). Hope this helps!
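If you are on Python 3.7 or newer, the built-in str.isascii() method does the same check without a helper function; a minimal sketch (assumes Python 3.7+):
>>> s = ['a', '\xe7\xbe\x8e\xe7', 'b']
>>> [e for e in s if e.isascii()]
['a', 'b']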

You can check whether each word in the list is alphanumeric using the isalnum
function. If a word is alphanumeric, keep it; otherwise drop it. This can be done with a list comprehension:
>>> s = ['a','\xe7\xbe\x8e\xe7','b']
>>> [a for a in s if a.isalnum()]
['a', 'b']
Note: isalnum checks whether a string is alphanumeric, i.e. contains only letters and/or numbers. If you want to allow letters only, use isalpha instead.
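For example, to illustrate the difference between the two checks (standard str methods):
>>> 'abc123'.isalnum()
True
>>> 'abc123'.isalpha()
False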

Try this:
s = ['a','\xe7\xbe\x8e\xe7','b']
for i in range(s.count('\xe7\xbe\x8e\xe7')):
    s.remove('\xe7\xbe\x8e\xe7')
Then all occurrences of '\xe7\xbe\x8e\xe7' will be removed from the list.
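If you already know the exact value you want to drop, a list comprehension is a simpler alternative to the count()/remove() loop:
>>> s = ['a','\xe7\xbe\x8e\xe7','b']
>>> [e for e in s if e != '\xe7\xbe\x8e\xe7']
['a', 'b']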

Related

Python3: Replace split string

This is my string:
VISA1240129006|36283354A|081016860665
I need to replace the first field. For example, I need to end up with the following string:
FIXED_REPLACED_STRING|36283354A|081016860665
Is there any elegant way to get it using Python 3?
You can do it this way:
>>> l = 'VISA1240129006|36283354A|081016860665'.split('|')
>>> l[0] = 'FIXED_REPLACED_STRING'
>>> l
['FIXED_REPLACED_STRING', '36283354A', '081016860665']
>>> '|'.join(l)
'FIXED_REPLACED_STRING|36283354A|081016860665'
Explanation: first, you split the string into a list. Then, you change what you need at the position(s) you want. Finally, you rebuild the string from the modified list.
If you need a complete replacement of all the occurrences regardless of their position, check out also the other answers here.
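If you do this kind of field replacement in more than one place, you could wrap the split/modify/join steps in a small helper (replace_field is a hypothetical name, not part of the original answer):
def replace_field(record, index, new_value, sep='|'):
    # split into fields, overwrite the chosen position, and rebuild the string
    fields = record.split(sep)
    fields[index] = new_value
    return sep.join(fields)
>>> replace_field('VISA1240129006|36283354A|081016860665', 0, 'FIXED_REPLACED_STRING')
'FIXED_REPLACED_STRING|36283354A|081016860665'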
You can use the .replace() method:
l="VISA1240129006|36283354A|081016860665"
l=l.replace("VISA1240129006","FIXED_REPLACED_STRING")
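Keep in mind that str.replace() replaces every occurrence of the substring by default; if the same text could also appear in other fields, you can limit it with the optional count argument:
>>> l = "VISA1240129006|36283354A|081016860665"
>>> l.replace("VISA1240129006", "FIXED_REPLACED_STRING", 1)
'FIXED_REPLACED_STRING|36283354A|081016860665'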
You can use re.sub() from the re module. See a similar problem to yours: replace string.
My solution using regex is:
import re
l="VISA1240129006|36283354A|081016860665"
new_l = re.sub(r'^\w+', 'FIXED_REPLACED_STRING', l)
It replaces the first string before the "|" character.

Does the split method in Python return something containing \u for some characters, and how do I get rid of it?

I have a unicode string:
s = "ᠤᠷᠢᠳᠤ ᠲᠠᠯ᠎ᠠ ᠶᠢᠨ ᠬᠠᠪᠲᠠᠭᠠᠢ ᠬᠡᠪᠲᠡᠭᠡ"
but what the split method returns is somewhat changed, with a \u180e in the second word.
>>> print(s.split())
['ᠤᠷᠢᠳᠤ', 'ᠲᠠᠯ\u180eᠠ', 'ᠶᠢᠨ', 'ᠬᠠᠪᠲᠠᠭᠠᠢ', 'ᠬᠡᠪᠲᠡᠭᠡ']
What I want to get is:
['ᠤᠷᠢᠳᠤ', 'ᠲᠠᠯ᠎ᠠ ᠶᠢᠨ', 'ᠶᠢᠨ', 'ᠬᠠᠪᠲᠠᠭᠠᠢ', 'ᠬᠡᠪᠲᠡᠭᠡ']
What is causing this, and how can I solve it?
I don't think the problem is with the split function, but with the list itself.
>>> s = ["ᠤᠷᠢᠳᠤ ᠲᠠᠯ᠎ᠠ ᠶᠢᠨ ᠬᠠᠪᠲᠠᠭᠠᠢ ᠬᠡᠪᠲᠡᠭᠡ"]
>>> print(s)
['ᠤᠷᠢᠳᠤ ᠲᠠᠯ\u180eᠠ ᠶᠢᠨ ᠬᠠᠪᠲᠠᠭᠠᠢ ᠬᠡᠪᠲᠡᠭᠡ']
You should still be able to use the list normally; the \u180e is just how the character shows up in the list's repr, and it displays correctly when the element itself is printed:
>>> s = "ᠤᠷᠢᠳᠤ ᠲᠠᠯ᠎ᠠ ᠶᠢᠨ ᠬᠠᠪᠲᠠᠭᠠᠢ ᠬᠡᠪᠲᠡᠭᠡ"
>>> s = s.split()
>>> [print(e) for e in s]
ᠤᠷᠢᠳᠤ
ᠲᠠᠯ᠎ᠠ
ᠶᠢᠨ
ᠬᠠᠪᠲᠠᠭᠠᠢ
ᠬᠡᠪᠲᠡᠭᠡ
According to Wikipedia: https://en.wikipedia.org/wiki/Whitespace_character#Unicode
U+180E was classified as a space character until Unicode 6.3.0, so if your Python implements an earlier Unicode spec, then I guess split() would break on it along with all the other space characters. You could work around this by giving split an argument so that it only splits on the characters you specify (s.split(" ")); that would give you:
>>> s.split(" ")
['ᠤᠷᠢᠳᠤ', 'ᠲᠠᠯ\u180eᠠ\u202fᠶᠢᠨ', 'ᠬᠠᠪᠲᠠᠭᠠᠢ', 'ᠬᠡᠪᠲᠡᠭᠡ']
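If you want to see how your own Python build classifies the character, the standard unicodedata module can tell you (the exact values printed depend on the Unicode version your Python ships with):
>>> import unicodedata
>>> unicodedata.name('\u180e')
'MONGOLIAN VOWEL SEPARATOR'
>>> unicodedata.category('\u180e')   # 'Zs' (space) before Unicode 6.3.0, 'Cf' afterwards
'Cf'
>>> '\u180e'.isspace()               # whether a plain split() will break on it
False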

Why is the split() returning list objects that are empty? [duplicate]

I have the following file names that exhibit this pattern:
000014_L_20111007T084734-20111008T023142.txt
000014_U_20111007T084734-20111008T023142.txt
...
I want to extract the middle two time stamp parts after the second underscore '_' and before '.txt'. So I used the following Python regex string split:
time_info = re.split('^[0-9]+_[LU]_|-|\.txt$', f)
But this gives me two extra empty strings in the returned list:
time_info=['', '20111007T084734', '20111008T023142', '']
How do I get only the two time stamp information? i.e. I want:
time_info=['20111007T084734', '20111008T023142']
I'm no Python expert but maybe you could just remove the empty strings from your list?
str_list = re.split('^[0-9]+_[LU]_|-|\.txt$', f)
time_info = filter(None, str_list)
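Note that in Python 3, filter() returns an iterator rather than a list, so you may want to wrap it in list(), or use the equivalent list comprehension:
>>> str_list = ['', '20111007T084734', '20111008T023142', '']
>>> [x for x in str_list if x]
['20111007T084734', '20111008T023142']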
Don't use re.split(), use the groups() method of regex Match/SRE_Match objects.
>>> f = '000014_L_20111007T084734-20111008T023142.txt'
>>> time_info = re.search(r'[LU]_(\w+)-(\w+)\.', f).groups()
>>> time_info
('20111007T084734', '20111008T023142')
You can even name the capturing groups and retrieve them in a dict, though you use groupdict() rather than groups() for that. (The regex pattern for such a case would be something like r'[LU]_(?P<groupA>\w+)-(?P<groupB>\w+)\.')
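A quick sketch of that named-group variant (the names groupA and groupB are just placeholders):
>>> import re
>>> f = '000014_L_20111007T084734-20111008T023142.txt'
>>> re.search(r'[LU]_(?P<groupA>\w+)-(?P<groupB>\w+)\.', f).groupdict()
{'groupA': '20111007T084734', 'groupB': '20111008T023142'}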
If the timestamps are always after the second _ then you can use str.split and str.strip:
>>> strs = "000014_L_20111007T084734-20111008T023142.txt"
>>> strs.strip(".txt").split("_",2)[-1].split("-")
['20111007T084734', '20111008T023142']
Since this came up on Google and for completeness, try using re.findall as an alternative!
This does require a little re-thinking, but it still returns a list of matches like split does. This makes it a nice drop-in replacement for some existing code and gets rid of the unwanted text. Pair it with lookaheads and/or lookbehinds and you get very similar behavior.
Yes, this is a bit of a "you're asking the wrong question" answer and doesn't use re.split(). It does solve the underlying issue: your list of matches suddenly has zero-length strings in it and you don't want that.
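For this particular filename format, a findall version could look like the sketch below; the \d{8}T\d{6} pattern is an assumption about the timestamp shape, not something from the original answer:
>>> import re
>>> f = '000014_L_20111007T084734-20111008T023142.txt'
>>> re.findall(r'\d{8}T\d{6}', f)
['20111007T084734', '20111008T023142']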
>>> f = '000014_L_20111007T084734-20111008T023142.txt'
>>> f[9:-4].split('-')
['20111007T084734', '20111008T023142']
or, somewhat more general:
>>> f[f.rfind('_')+1:-4].split('-')
['20111007T084734', '20111008T023142']

Python split a string at an underscore

How do I split a string at the second underscore in Python, so that I get something like this:
name = "this_is_my_name_and_its_cool"
Split name so I get ["this_is", "my_name_and_its_cool"].
The following statement will split name into a list of strings:
a = name.split("_")
You can combine whatever strings you want using join; in this case, using the first two words:
b = "_".join(a[:2])
c = "_".join(a[2:])
Maybe you can write a small function that takes as an argument the number of words (n) after which you want to split:
def func(name, n):
    a = name.split("_")
    b = "_".join(a[:n])
    c = "_".join(a[n:])
    return [b, c]
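A quick usage example of that function:
>>> func("this_is_my_name_and_its_cool", 2)
['this_is', 'my_name_and_its_cool']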
Assuming that you have a string with multiple instances of the same delimiter and you want to split at the nth delimiter, ignoring the others.
Here's a solution using just split and join, without complicated regular expressions. This might be a bit easier to adapt to other delimiters and particularly other values of n.
def split_at(s, c, n):
    words = s.split(c)
    return c.join(words[:n]), c.join(words[n:])
Example:
>>> split_at('this_is_my_name_and_its_cool', '_', 2)
('this_is', 'my_name_and_its_cool')
I think you're trying to split the string based on the second underscore. If so, then you can use the findall function.
>>> import re
>>> s = "this_is_my_name_and_its_cool"
>>> re.findall(r'^[^_]*_[^_]*|[^_].*$', s)
['this_is', 'my_name_and_its_cool']
>>> [i for i in re.findall(r'^[^_]*_[^_]*|(?!_).*$', s) if i]
['this_is', 'my_name_and_its_cool']
print(re.split(r"(^[^_]+_[^_]+)_", "this_is_my_name_and_its_cool"))
Try this.
Here's a quick & dirty way to do it:
s = 'this_is_my_name_and_its_cool'
i = s.find('_'); i = s.find('_', i+1)
print([s[:i], s[i+1:]])
output
['this_is', 'my_name_and_its_cool']
You could generalize this approach to split on the nth separator by putting the find() into a loop.
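A rough sketch of that generalization (split_at_nth is a hypothetical name, not from the original answer):
def split_at_nth(s, sep, n):
    # advance past the first n occurrences of sep using find() in a loop
    i = -1
    for _ in range(n):
        i = s.find(sep, i + 1)
        if i == -1:  # fewer than n separators: return the whole string
            return [s]
    return [s[:i], s[i + 1:]]
>>> split_at_nth('this_is_my_name_and_its_cool', '_', 2)
['this_is', 'my_name_and_its_cool']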

How do I strip a string given a list of unwanted characters? Python

Is there a way to pass in a list instead of a char to str.strip() in python? I have been doing it this way:
unwanted = [c for c in '!##$%^&*(FGHJKmn']
s = 'FFFFoFob*&%ar**^'
for u in unwanted:
    s = s.strip(u)
print(s)
This is the desired output; the code above does produce it, but surely there is a more elegant way than what I'm doing above:
oFob*&%ar
Strip and friends take a string representing a set of characters, so you can skip the loop:
>>> s = 'FFFFoFob*&%ar**^'
>>> s.strip('!##$%^&*(FGHJKmn')
'oFob*&%ar'
(The downside of this is that something like fn.rstrip(".png") seems to work for many filenames, but doesn't really work, because the argument is treated as a set of characters rather than as a suffix.)
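A quick illustration of that caveat (the filename here is just a made-up example):
>>> 'sign.png'.rstrip('.png')
'si'
rstrip removes any trailing characters that appear in the set {'.', 'p', 'n', 'g'}, so it also eats the 'gn' at the end of 'sign'.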
Since you are not looking to delete characters from the middle, you can just use str.strip():
>>> 'FFFFoFob*&%ar**^'.strip('!##$%^&*(FGHJKmn')
'oFob*&%ar'
Otherwise, use str.translate():
>>> 'FFFFoFob*&%ar**^'.translate(None, '!##$%^&*(FGHJKmn')
'oobar'
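Note that the two-argument form of str.translate() shown above only works on Python 2 strings; in Python 3, the equivalent uses str.maketrans to build a deletion table:
>>> 'FFFFoFob*&%ar**^'.translate(str.maketrans('', '', '!##$%^&*(FGHJKmn'))
'oobar'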
