re.split with spaces in Python

I have a string of text that looks like this:
' 19,301 14,856 18,554'
where the gaps are runs of space characters.
I'm trying to split it on the white space, but I need to retain all of the white space as an item in the new list. Like this:
[' ', '19,301',' ', '14,856', ' ', '18,554']
I have been using the following code:
re.split(r'( +)(?=[0-9])', item)
and it returns:
['', ' ', '19,301', ' ', '14,856', ' ', '18,554']
Notice that it always adds an empty element to the beginning of my list. It's easy enough to drop it, but I'm really looking to understand what is going on here, so I can get the code to treat things consistently. Thanks.

When using the re.split method, if the capture group matches at the start of the string, the "result will start with an empty string". The reason for this is so that the join method can behave as the inverse of the split method.
It might not make a lot of sense for your case, where the separator matches are of varying sizes, but if you think about the case where the separators were a | character and you wanted to perform a join on them, with the extra empty string it would work:
>>> item = '|19,301|14,856|18,554'
>>> items = re.split(r'\|', item)
>>> print items
['', '19,301', '14,856', '18,554']
>>> '|'.join(items)
'|19,301|14,856|18,554'
But without it, the initial pipe would be missing:
>>> items = ['19,301', '14,856', '18,554']
>>> '|'.join(items)
'19,301|14,856|18,554'
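If the leading empty string gets in the way for input like yours, one simple option is to filter out empty items after splitting, for example:
>>> import re
>>> item = ' 19,301 14,856 18,554'
>>> [part for part in re.split(r'( +)(?=[0-9])', item) if part]
[' ', '19,301', ' ', '14,856', ' ', '18,554']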

You can do it with re.findall():
>>> s = '\s\s\s\s\s\s\s\s\s\s\s\s\s\s\s\s\s\s\s\s\s19,301\s\s\s\s\s\s\s\s\s14,856\s\s\s\s\s\s\s\s18,554'.replace('\\s',' ')
>>> re.findall(r' +|[^ ]+', s)
[' ', '19,301', ' ', '14,856', ' ', '18,554']
You said "space" in the question, so the pattern works with space. For matching runs of any whitespace character you can use:
>>> re.findall(r'\s+|\S+', s)
[' ', '19,301', ' ', '14,856', ' ', '18,554']
The pattern matches one or more whitespace characters or one or more non-whitespace characters, for example:
>>> s=' \t\t ab\ncd\tef g '
>>> re.findall(r'\s+|\S+', s)
[' \t\t ', 'ab', '\n', 'cd', '\t', 'ef', ' ', 'g', ' ']

Related

How to replace strings that are similar

I am creating some code that will replace spaces.
I want a double space to turn into a single space and a single space to become nothing.
Example:
string = "t e s t t e s t"
string = string.replace(' ', ' ').replace(' ', '')
print (string)
The output is "testest" because it replaces all the spaces.
How can I make the output "test test"?
Thanks
A regular expression approach is doubtless possible, but for a quick solution, first split on the double space, then rejoin on a single space after using a comprehension to remove the single spaces in each of the elements in the split:
>>> string = "t e s t t e s t"
>>> ' '.join(word.replace(' ', '') for word in string.split(' '))
'test test'
Just another idea:
>>> s = 't e s t  t e s t'
>>> s.replace(' ', '  ').replace('   ', '').replace('  ', '')
'test test'
Seems to be faster:
>>> timeit(lambda: s.replace(' ', '  ').replace('   ', '').replace('  ', ''))
2.7822862677683133
>>> timeit(lambda: ' '.join(w.replace(' ','') for w in s.split('  ')))
7.702567737466012
And regex (at least this one) is shorter but a lot slower:
>>> timeit(lambda: re.sub(' ( ?)', r'\1', s))
37.2261058654488
I like this regex solution because you can easily read what's going on:
>>> import re
>>> string = "t e s t t e s t"
>>> re.sub(' {1,2}', lambda m: '' if m.group() == ' ' else ' ', string)
'test test'
We search for one or two spaces, and substitute one space with the empty string but two spaces with a single space.

Split strings on whitespaces, but do not remove them [duplicate]

I want to split strings based on whitespace and punctuation, but the whitespace and punctuation should still be in the result.
For example:
Input: text = "This is a text; this is another text.,."
Output: ['This', ' ', 'is', ' ', 'a', ' ', 'text', '; ', 'this', ' ', 'is', ' ', 'another', ' ', 'text', '.,.']
Here is what I'm currently doing:
import string

def classify(b):
    """
    Classify a character.
    """
    separators = string.whitespace + string.punctuation
    if b in separators:
        return "separator"
    else:
        return "letter"

def tokenize(text):
    """
    Split strings to words, but do not remove white space.
    The input must be of type str, not bytes.
    """
    if len(text) == 0:
        return []
    current_word = "" + text[0]
    previous_mode = classify(text[0])
    offset = 1
    results = []
    while offset < len(text):
        current_mode = classify(text[offset])
        if current_mode == previous_mode:
            current_word += text[offset]
        else:
            results.append(current_word)
            current_word = text[offset]
        previous_mode = current_mode
        offset += 1
    results.append(current_word)
    return results
It works, but it's so C-style. Is there a better way in Python?
You can use a regular expression:
import re
re.split('([\s.,;()]+)', text)
This splits on arbitrary-width whitespace (including tabs and newlines) plus a selection of punctuation characters, and because the pattern is wrapped in a capturing group, re.split() includes the separators in the output:
>>> import re
>>> text = "This is a text; this is another text.,."
>>> re.split('([\s.,;()]+)', text)
['This', ' ', 'is', ' ', 'a', ' ', 'text', '; ', 'this', ' ', 'is', ' ', 'another', ' ', 'text', '.,.', '']
If you only wanted to match spaces (and not other whitespace), replace \s with a space:
>>> re.split('([ .,;()]+)', text)
['This', ' ', 'is', ' ', 'a', ' ', 'text', '; ', 'this', ' ', 'is', ' ', 'another', ' ', 'text', '.,.', '']
Note the extra trailing empty string; a split always has a head and a tail, so text starting or ending in a split group will always have an extra empty string at the start or end. This is easily removed.
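One way to drop them, for example, is a simple filter over the split result:
>>> [piece for piece in re.split('([\s.,;()]+)', text) if piece]
['This', ' ', 'is', ' ', 'a', ' ', 'text', '; ', 'this', ' ', 'is', ' ', 'another', ' ', 'text', '.,.']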

splitting characters of a string by a delimiter in those characters

I have a string with this pattern: characters from [' ', '.', '#'], each separated by a single space.
For example: "# .   #" (the third character is itself a space).
I want to split this string on the space separator, getting ['#', '.', ' ', '#'], but the problem is that the space is one of the characters itself, so split(" ") doesn't work.
How can I do this?
There's no need to use comprehensions here - you can just use a stepping slice:
>>> text = "# .   #"
>>> text[::2]
'#. #'
>>> list(text[::2])
['#', '.', ' ', '#']
result = []
for c in yourString:
    if c == ' ' and result and result[-1] == ' ':
        continue
    result.append(c)
Assuming exactly one space delimiter between each word, the below would work as well
str = "# . #."
result = []
for index,c in enumerate(str):
if index%2==0:
result.append(c)
If your string always has a (char,space,char,space,...) sequence, you can do:
new_list = [old_string[x] for x in range(0, len(old_string), 2)]
>>> old_string = '# # # .   #'
# Run code above
>>> print new_list
['#', '#', '#', '.', ' ', '#']

Efficiently split a string using multiple separators and retaining each separator?

I need to split strings of data using each character from string.punctuation and string.whitespace as a separator.
Furthermore, I need for the separators to remain in the output list, in between the items they separated in the string.
For example,
"Now is the winter of our discontent"
should output:
['Now', ' ', 'is', ' ', 'the', ' ', 'winter', ' ', 'of', ' ', 'our', ' ', 'discontent']
I'm not sure how to do this without resorting to an orgy of nested loops, which is unacceptably slow. How can I do it?
A different non-regex approach from the others:
>>> import string
>>> from itertools import groupby
>>>
>>> special = set(string.punctuation + string.whitespace)
>>> s = "One two three tab\ttabandspace\t end"
>>>
>>> split_combined = [''.join(g) for k, g in groupby(s, lambda c: c in special)]
>>> split_combined
['One', ' ', 'two', ' ', 'three', ' ', 'tab', '\t', 'tabandspace', '\t ', 'end']
>>> split_separated = [''.join(g) for k, g in groupby(s, lambda c: c if c in special else False)]
>>> split_separated
['One', ' ', 'two', ' ', 'three', ' ', 'tab', '\t', 'tabandspace', '\t', ' ', 'end']
Could use dict.fromkeys and .get instead of the lambda, I guess.
[edit]
Some explanation:
groupby accepts two arguments, an iterable and an (optional) keyfunction. It loops through the iterable and groups them with the value of the keyfunction:
>>> groupby("sentence", lambda c: c in 'nt')
<itertools.groupby object at 0x9805af4>
>>> [(k, list(g)) for k,g in groupby("sentence", lambda c: c in 'nt')]
[(False, ['s', 'e']), (True, ['n', 't']), (False, ['e']), (True, ['n']), (False, ['c', 'e'])]
where terms with contiguous values of the keyfunction are grouped together. (This is a common source of bugs, actually -- people forget that they have to sort by the keyfunc first if they want to group terms which might not be sequential.)
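For example, grouping words by their first letter shows the difference (a small illustration):
>>> words = ['apple', 'avocado', 'banana', 'apricot']
>>> [(k, list(g)) for k, g in groupby(words, lambda w: w[0])]
[('a', ['apple', 'avocado']), ('b', ['banana']), ('a', ['apricot'])]
>>> [(k, list(g)) for k, g in groupby(sorted(words), lambda w: w[0])]
[('a', ['apple', 'apricot', 'avocado']), ('b', ['banana'])]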
As #JonClements guessed, what I had in mind was
>>> special = dict.fromkeys(string.punctuation + string.whitespace, True)
>>> s = "One two three tab\ttabandspace\t end"
>>> [''.join(g) for k,g in groupby(s, special.get)]
['One', ' ', 'two', ' ', 'three', ' ', 'tab', '\t', 'tabandspace', '\t ', 'end']
for the case where we were combining the separators. .get returns None if the value isn't in the dict.
import re
import string
p = re.compile("[^{0}]+|[{0}]+".format(re.escape(
    string.punctuation + string.whitespace)))
print p.findall("Now is the winter of our discontent")
I'm no big fan of using regexps for all problems, but I don't think you have much choice in this if you want it fast and short.
I'll explain the regexp since you're not familiar with it:
[...] means any of the characters inside the square brackets
[^...] means any of the characters not inside the square brackets
+ behind means one or more of the previous thing
x|y means to match either x or y
So the regexp matches 1 or more characters where either all must be punctuation and whitespace, or none must be. The findall method finds all non-overlapping matches of the pattern.
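For the example sentence, that findall call prints something like:
['Now', ' ', 'is', ' ', 'the', ' ', 'winter', ' ', 'of', ' ', 'our', ' ', 'discontent']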
Try this:
import re
import string
re.split('(['+re.escape(string.punctuation + string.whitespace)+']+)',"Now is the winter of our discontent")
Explanation from the Python documentation:
If capturing parentheses are used in pattern, then the text of all groups in the pattern are also returned as part of the resulting list.
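For the example sentence, that split should come out as:
['Now', ' ', 'is', ' ', 'the', ' ', 'winter', ' ', 'of', ' ', 'our', ' ', 'discontent']
with no empty strings at the ends, because the string neither starts nor ends with a separator.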
Solution in linear (O(n)) time:
Let's say you have a string:
original = "a, b...c d"
First convert all separators to space:
splitters = string.punctuation + string.whitespace
trans = string.maketrans(splitters, ' ' * len(splitters))
s = original.translate(trans)
Now s == 'a  b   c d'. Now you can use itertools.groupby to alternate between spaces and non-spaces:
import itertools

result = []
position = 0
for _, letters in itertools.groupby(s, lambda c: c == ' '):
    letter_count = len(list(letters))
    result.append(original[position:position + letter_count])
    position += letter_count
Now result == ['a', ', ', 'b', '...', 'c', ' ', 'd'], which is what you need.
My take:
from string import whitespace, punctuation
import re
pattern = re.escape(whitespace + punctuation)
print re.split('([' + pattern + '])', 'now is the winter of')
Depending on the text you are dealing with, you may be able to simplify your concept of delimiters to "anything other than letters and numbers". If this will work, you can use the following regex solution:
re.findall(r'[a-zA-Z\d]+|[^a-zA-Z\d]', text)
This assumes that you want to split on each individual delimiter character even if they occur consecutively, so 'foo..bar' would become ['foo', '.', '.', 'bar']. If instead you expect ['foo', '..', 'bar'], use [a-zA-Z\d]+|[^a-zA-Z\d]+ (only difference is adding + at the very end).
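For instance, on a small made-up input:
>>> import re
>>> re.findall(r'[a-zA-Z\d]+|[^a-zA-Z\d]', 'foo..bar')
['foo', '.', '.', 'bar']
>>> re.findall(r'[a-zA-Z\d]+|[^a-zA-Z\d]+', 'foo..bar')
['foo', '..', 'bar']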
from string import punctuation, whitespace
s = "..test. and stuff"
f = lambda s, c: s + ' ' + c + ' ' if c in punctuation else s + c
l = sum([reduce(f, word).split() for word in s.split()], [])
print l
For any arbitrary collection of separators:
def separate(myStr, seps):
    answer = []
    temp = []
    for char in myStr:
        if char in seps:
            answer.append(''.join(temp))
            answer.append(char)
            temp = []
        else:
            temp.append(char)
    answer.append(''.join(temp))
    return answer
In [4]: print separate("Now is the winter of our discontent", set(' '))
['Now', ' ', 'is', ' ', 'the', ' ', 'winter', ' ', 'of', ' ', 'our', ' ', 'discontent']
In [5]: print separate("Now, really - it is the winter of our discontent", set(' ,-'))
['Now', ',', '', ' ', 'really', ' ', '', '-', '', ' ', 'it', ' ', 'is', ' ', 'the', ' ', 'winter', ' ', 'of', ' ', 'our', ' ', 'discontent']
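If the empty strings produced by adjacent separators are unwanted, they can be filtered out afterwards, for example:
result = separate("Now, really - it is the winter of our discontent", set(' ,-'))
result = [piece for piece in result if piece]  # drop the '' entries between adjacent separators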
Hope this helps
from itertools import chain, cycle, izip
s = "Now is the winter of our discontent"
words = s.split()
wordsWithWhitespace = list( chain.from_iterable( izip( words, cycle([" "]) ) ) )
# result : ['Now', ' ', 'is', ' ', 'the', ' ', 'winter', ' ', 'of', ' ', 'our', ' ', 'discontent', ' ']
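Note that this assumes single-space separators and leaves a trailing ' ' item; if that is unwanted it can be sliced off, for example:
wordsWithWhitespace = wordsWithWhitespace[:-1]  # drop the trailing ' ' that cycle() adds after the last word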

Is there a function in Python to split a string without ignoring the spaces?

Is there a function in Python to split a string without ignoring the spaces in the resulting list?
E.g:
s="This is the string I want to split".split()
gives me
>>> s
['This', 'is', 'the', 'string', 'I', 'want', 'to', 'split']
I want something like
['This',' ','is',' ', 'the',' ','string', ' ', .....]
>>> import re
>>> re.split(r"(\s+)", "This is the string I want to split")
['This', ' ', 'is', ' ', 'the', ' ', 'string', ' ', 'I', ' ', 'want', ' ', 'to', ' ', 'split']
Using the capturing parentheses in re.split() causes the function to return the separators as well.
I don't think there is a function in the standard library that does that by itself, but "partition" comes close
The best way is probably to use regular expressions (which is how I'd do this in any language!)
import re
print re.split(r"(\s+)", "Your string here")
Silly answer just for the heck of it:
mystring.replace(" ","! !").split("!")
The hard part of what you're trying to do is that you aren't giving split() a character to split on. split() breaks a string on the separator you provide to it, and removes that separator.
Perhaps this may help:
s = "String to split"
mylist = []
for item in s.split():
mylist.append(item)
mylist.append(' ')
mylist = mylist[:-1]
Messy, but it'll do the trick for you...
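For example, with s = "String to split" this leaves mylist as:
['String', ' ', 'to', ' ', 'split']
Note that any runs of whitespace in the input end up as single spaces, since split() discards the original whitespace and the loop re-inserts one space per word.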
