I have been playing around with code that is supposed to read a string of text that contains no spaces. The code needs to separate the string at the capital letters using regular expressions, but I can't seem to get it to split on the capitals.
import re
mystring = 'ThisIsStringWithoutSpacesWordsTextManDogCow!'
wordList = re.sub("[^\^a-z]"," ",mystring)
print (wordList)
Try:
re.sub("([A-Z])"," \\1",mystring).split()
This prepends a space in front of every capital letter and splits on these spaces.
Output:
['This',
'Is',
'String',
'Without',
'Spaces',
'Words',
'Text',
'Man',
'Dog',
'Cow!']
As an alternative to sub, you could use re.findall to find all the words (beginning with an uppercase letter followed by zero or more non-uppercase characters) and then join them back together:
>>> ' '.join(re.findall(r'[A-Z][^A-Z]*', mystring))
'This Is String Without Spaces Words Text Man Dog Cow!'
Try
>>> re.split('([A-Z][a-z]*)', mystring)
['', 'This', '', 'Is', '', 'String', '', 'Without', '', 'Spaces', '', 'Words', '', 'Text', '', 'Man', '', 'Dog', '', 'Cow', '!']
This gives you word-by-word output. Even the ! is separated out.
If you don't want the extra '' entries, you can remove them with filter(lambda x: x != '', a), where a is the output of the command above. In Python 3, wrap it in list(), since filter returns an iterator:
>>> list(filter(lambda x: x != '', a))
['This', 'Is', 'String', 'Without', 'Spaces', 'Words', 'Text', 'Man', 'Dog', 'Cow', '!']
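filter(None, ...) is a slightly shorter spelling of the same thing: it drops every falsy item, which for a list of strings means exactly the empty ones:
>>> a = re.split('([A-Z][a-z]*)', mystring)
>>> list(filter(None, a))
['This', 'Is', 'String', 'Without', 'Spaces', 'Words', 'Text', 'Man', 'Dog', 'Cow', '!']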
Not a regular expression solution, but you can do it in normal code as well :-)
mystring = 'ThisIsStringWithoutSpacesWordsTextManDogCow!'
output_list = []
index = 0  # start of the current word
for i, letter in enumerate(mystring):
    if i != index and letter.isupper():
        output_list.append(mystring[index:i])
        index = i
output_list.append(mystring[index:])  # append the final word
Now, on topic, this could be what you are looking for:
mystring = re.sub(r"([a-z\d])([A-Z])", r'\1 \2', mystring)
# Makes the string space separated. You can use split to convert it to list
mystring = mystring.split()
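Putting it together with mystring from the question:
>>> import re
>>> mystring = 'ThisIsStringWithoutSpacesWordsTextManDogCow!'
>>> re.sub(r"([a-z\d])([A-Z])", r'\1 \2', mystring).split()
['This', 'Is', 'String', 'Without', 'Spaces', 'Words', 'Text', 'Man', 'Dog', 'Cow!']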
Related
I have a problem with my Python script; it's a straightforward problem, but I can't resolve it.
For example, I have a text of 11 words, which I split with the re.split(r"\s+", text) function:
import re
text = "this is the example text and i will splitting this text"
split = re.split(r"\s+", text)
for a in range(len(split)):
    print(split[a])
The result is
this
is
the
example
text
and
i
will
splitting
this
text
I only need to take 10 of the 11 words, so the result I need is like this:
is
the
example
text
and
i
will
splitting
this
text
Can you solve this problem? It would be very helpful.
Thank you!
Just slice it like this:
>>> import re
>>>
>>> text = "this is the example text and i will splitting this text"
>>> split = re.split(r"\s+", text)
>>> split
['this', 'is', 'the', 'example', 'text', 'and', 'i', 'will', 'splitting', 'this', 'text']
>>> split[-10:]
['is', 'the', 'example', 'text', 'and', 'i', 'will', 'splitting', 'this', 'text']
No need for a regex:
text = "this is the example text and i will splitting this text"
l = text.split()  # split on whitespace
l.pop(0)  # remove the first item
print(l)  # print the result
Results: ['is', 'the', 'example', 'text', 'and', 'i', 'will', 'splitting', 'this', 'text']
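Another option, since str.split takes a maxsplit argument: split off just the first word, then split the remainder. Equivalent here, shown as a sketch:
>>> text = "this is the example text and i will splitting this text"
>>> text.split(maxsplit=1)[1].split()
['is', 'the', 'example', 'text', 'and', 'i', 'will', 'splitting', 'this', 'text']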
I'm trying to remove unwanted special symbols from my strings in a list by using the .isalnum() method, looping through each character of the words, with a condition that makes an exception for the apostrophe in cases like "can't", "didn't", "won't". But it also keeps the symbol in cases I don't need, like "'", "'cant", "'hello'". Is there a way to keep it only when the symbol is in the middle of a word?
data_set = "Hello WOrld &()*hello world ////dog /// cat world hello can't "
split_it = data_set.lower().split()
new_word = ''
new_list = list()
for word in split_it:
    new_word = ''.join([x for x in word if x.isalnum() or x == "'"])
    new_list.append(new_word)
print(new_list)
['hello', 'world', 'hello', 'world', 'dog', '', 'cat', 'world', 'hello', "can't"]
If you know all of the characters you don't want, you could use .strip() to only remove them from the start and end:
>>> words = "Hello WOrld &()*hello world ////dog /// cat world hello can't ".lower().split()
>>> cleaned_words = [word.strip("&()*/") for word in words]
>>> print(cleaned_words)
['hello', 'world', 'hello', 'world', 'dog', '', 'cat', 'world', 'hello', "can't"]
Otherwise, you'll probably want a regexp that matches any character except those whitelisted, anchored to the start or end of the string, and then use re.sub() to remove them:
>>> import re
>>> nonalnum_at_edge_re = re.compile(r'^[^a-z0-9]+|[^a-z0-9]+$', re.I)
>>> cleaned_words = [re.sub(nonalnum_at_edge_re, '', word) for word in words]
['hello', 'world', 'hello', 'world', 'dog', '', 'cat', 'world', 'hello', "can't"]
You could use a regular expression that matches any character that isn't either a lowercase letter or number, and either doesn't have such a character before it (start of word) or after it (end of word):
import re
phrase = "Hello WOrld &()*hello world ////dog /// cat world hello can't "
regex = re.compile(r'(?<![a-z0-9])([^a-z0-9])|([^a-z0-9])(?![a-z0-9])')
print([re.sub(regex, '', word) for word in phrase.lower().split()])
Output:
['hello', 'world', 'hello', 'world', 'dog', '', 'cat', 'world', 'hello', "can't"]
I am trying to clean a string so that it contains no punctuation or numbers, only a-z and A-Z.
For example, given the string:
"coMPuter scien_tist-s are,,, the rock__stars of tomorrow_ <cool> ????"
The required output is:
['computer', 'scientists', 'are', 'the', 'rockstars', 'of', 'tomorrow']
My solution is:
re.findall(r"([A-Za-z]+)", string)
My output is:
['coMPuter', 'scien', 'tist', 's', 'are', 'the', 'rock', 'stars', 'of', 'tomorrow', 'cool']
You don't need to use a regular expression. Convert the string to lower case (if you want all lower-cased words), split it, then keep only the words that start with a letter, stripping the non-letter characters from each:
>>> s = "coMPuter scien_tist-s are,,, the rock__stars of tomorrow_ <cool> ????"
>>> [filter(str.isalpha, word) for word in s.lower().split() if word[0].isalpha()]
['computer', 'scientists', 'are', 'the', 'rockstars', 'of', 'tomorrow']
In Python 3.x, filter(str.isalpha, word) should be replaced with ''.join(filter(str.isalpha, word)), because in Python 3.x, filter returns a filter object.
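For example, in Python 3:
>>> s = "coMPuter scien_tist-s are,,, the rock__stars of tomorrow_ <cool> ????"
>>> [''.join(filter(str.isalpha, word)) for word in s.lower().split() if word[0].isalpha()]
['computer', 'scientists', 'are', 'the', 'rockstars', 'of', 'tomorrow']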
With the recommendations of all the people who answered, I got to the correct solution I really wanted. Thanks to everyone...
s = "coMPuter scien_tist-s are,,, the rock__stars of tomorrow_ <cool> ????"
cleaned = re.sub(r'(<.*>|[^a-zA-Z\s]+)', '', s).split()
print(cleaned)
Using re, although I'm not sure this is what you want, because you said you didn't want "cool" left over:
import re
s = "coMPuter scien_tist-s are,,, the rock__stars of tomorrow_ <cool> ????"
REGEX = r'([^a-zA-Z\s]+)'
cleaned = re.sub(REGEX, '', s).split()
# ['coMPuter', 'scientists', 'are', 'the', 'rockstars', 'of', 'tomorrow', 'cool']
EDIT
WORD_REGEX = re.compile(r'(?!<?\S+>)(?=\w)(\S+)')
CLEAN_REGEX = re.compile(r'([^a-zA-Z])')
def cleaned(match_obj):
    return re.sub(CLEAN_REGEX, '', match_obj.group(1)).lower()

[cleaned(x) for x in re.finditer(WORD_REGEX, s)]
# ['computer', 'scientists', 'are', 'the', 'rockstars', 'of', 'tomorrow']
WORD_REGEX uses a positive lookahead for any word characters and a negative lookahead for <...>. Whatever non-whitespace makes it past the lookaheads is grouped:
(?!<?\S+>) # negative lookahead
(?=\w) # positive lookahead
(\S+) #group non-whitespace
cleaned takes each match group and removes any non-letter characters with CLEAN_REGEX.
I am having trouble joining a pre-split string after modification while preserving the previous structure.
say I have a string like this:
string = """
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
"""
I have to do some tests on that string, finding specific words and characters within those words, etc., and then replace them accordingly. To accomplish that, I broke it up using
string.split()
The problem is that split also gets rid of the \n and extra spaces, immediately ruining the integrity of the previous structure.
Are there some extra arguments to split that will allow me to accomplish this, or should I seek an alternative route?
Thank you.
The split method takes an optional argument to specify the delimiter. If you only want to split words using space (' ') characters, you can pass that as an argument:
>>> string = """
...
... This is a nice piece of string isn't it?
... I assume it is so. I have to keep typing
... to use up the space. La-di-da-di-da.
...
... Bonjour.
... """
>>>
>>> string.split()
['This', 'is', 'a', 'nice', 'piece', 'of', 'string', "isn't", 'it?', 'I', 'assume', 'it', 'is', 'so.', 'I', 'have', 'to', 'keep', 'typing', 'to', 'use', 'up', 'the', 'space.', 'La-di-da-di-da.', 'Bonjour.']
>>> string.split(' ')
['\n\nThis', 'is', 'a', 'nice', 'piece', 'of', 'string', "isn't", 'it?\nI', 'assume', 'it', 'is', 'so.', 'I', 'have', 'to', 'keep', 'typing\nto', 'use', 'up', 'the', 'space.', 'La-di-da-di-da.\n\nBonjour.\n']
>>>
The split method splits your string on all whitespace by default. If you want to split the lines separately, you can first split your string on newlines, then split each line on whitespace:
>>> [line.split() for line in string.strip().split('\n')]
[['This', 'is', 'a', 'nice', 'piece', 'of', 'string', "isn't", 'it?'], ['I', 'assume', 'it', 'is', 'so.', 'I', 'have', 'to', 'keep', 'typing'], ['to', 'use', 'up', 'the', 'space.', 'La-di-da-di-da.'], [], ['Bonjour.']]
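If you go this route, you can rebuild the text afterwards by joining each inner list with spaces and the outer list with newlines. Note that this sketch collapses runs of spaces and drops the leading and trailing blank lines, since split() discarded them:
>>> lines = [line.split() for line in string.strip().split('\n')]
>>> print('\n'.join(' '.join(line) for line in lines))
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.

Bonjour.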
Just split with a delimiter:
>>> a = string.split(' ')
>>> a
['\n\nThis', 'is', 'a', 'nice', 'piece', 'of', 'string', "isn't", 'it?\nI', 'assume', 'it', 'is', 'so.', 'I', 'have', 'to', 'keep', 'typing\nto', 'use', 'up', 'the', 'space.', 'La-di-da-di-da.\n\nThis', '', '', 'is', '', '', '', 'a', '', '', '', 'spaced', '', '', 'out', '', '', 'sentence\n\nBonjour.\n']
And to get it back:
>>> ' '.join(a)
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
Just do string.split(' ') (note the space argument to the split method).
This will keep your precious newlines within the strings that go into the resulting list...
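If you need to reassemble the text exactly, one more trick (not from the answers above) is re.split with a capturing group, which keeps the whitespace runs in the result list:
>>> import re
>>> parts = re.split(r'(\s+)', string)
>>> ''.join(parts) == string
True
Words and whitespace runs alternate in parts (with a leading '' when the string starts with whitespace), so you can edit the words and join everything back unchanged.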
You can save the whitespace in another list, then after modifying the words list, join them back together:
In [1]: from nltk.tokenize import RegexpTokenizer
In [2]: spacestokenizer = RegexpTokenizer(r'\s+', gaps=False)
In [3]: wordtokenizer = RegexpTokenizer(r'\s+', gaps=True)
In [4]: string = """
...:
...: This is a nice piece of string isn't it?
...: I assume it is so. I have to keep typing
...: to use up the space. La-di-da-di-da.
...:
...: This is a spaced out sentence
...:
...: Bonjour.
...: """
In [5]: spaces = spacestokenizer.tokenize(string)
In [6]: words = wordtokenizer.tokenize(string)
In [7]: print(''.join([s + w for s, w in zip(spaces, words)]))
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
I'm trying to convert a string to a list of words using Python. I want to take something like the following:
string = 'This is a string, with words!'
Then convert it to something like this:
list = ['This', 'is', 'a', 'string', 'with', 'words']
Notice the omission of punctuation and spaces. What would be the fastest way of going about this?
I think this is the simplest way for anyone else stumbling on this post given the late response:
>>> string = 'This is a string, with words!'
>>> string.split()
['This', 'is', 'a', 'string,', 'with', 'words!']
Try this:
import re
mystr = 'This is a string, with words!'
wordList = re.sub(r"[^\w]", " ", mystr).split()
How it works:
From the docs:
re.sub(pattern, repl, string, count=0, flags=0)
Return the string obtained by replacing the leftmost non-overlapping occurrences of pattern in string by the replacement repl. If the pattern isn’t found, string is returned unchanged. repl can be a string or a function.
So in our case, the pattern matches any non-word character. \w means any word character and is equivalent to the character set [a-zA-Z0-9_]: a to z, A to Z, 0 to 9, and underscore. So we match any character outside that set and replace it with a space, and then we call split(), which splits the string on whitespace and converts it to a list.
So 'hello-world' becomes 'hello world' after re.sub, and then ['hello', 'world'] after split(). Let me know if any doubts come up.
To do this properly is quite complex. For your research, it is known as word tokenization. You should look at NLTK if you want to see what others have done, rather than starting from scratch:
>>> import nltk
>>> paragraph = u"Hi, this is my first sentence. And this is my second."
>>> sentences = nltk.sent_tokenize(paragraph)
>>> for sentence in sentences:
... nltk.word_tokenize(sentence)
[u'Hi', u',', u'this', u'is', u'my', u'first', u'sentence', u'.']
[u'And', u'this', u'is', u'my', u'second', u'.']
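If word_tokenize raises a LookupError on a fresh install, you likely need to download the tokenizer models once first; the resource name can vary by NLTK version, but 'punkt' is the long-standing one:
>>> import nltk
>>> nltk.download('punkt')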
The simplest way:
>>> import re
>>> string = 'This is a string, with words!'
>>> re.findall(r'\w+', string)
['This', 'is', 'a', 'string', 'with', 'words']
Using string.punctuation for completeness:
import re
import string
s = 'This is a string, with words!'
# re.escape keeps the punctuation characters from being treated as regex syntax
x = re.sub('[' + re.escape(string.punctuation) + ']', '', s).split()
This handles newlines as well.
Well, you could use
import re
list = re.sub(r'[.!,;?]', ' ', string).split()
Note that both string and list are names of builtin types, so you probably don't want to use those as your variable names.
Inspired by @mtrw's answer, but improved to strip out punctuation at word boundaries only:
import re
import string

def extract_words(s):
    return [re.sub('^[{0}]+|[{0}]+$'.format(string.punctuation), '', w) for w in s.split()]
>>> str = 'This is a string, with words!'
>>> extract_words(str)
['This', 'is', 'a', 'string', 'with', 'words']
>>> str = '''I'm a custom-built sentence with "tricky" words like https://stackoverflow.com/.'''
>>> extract_words(str)
["I'm", 'a', 'custom-built', 'sentence', 'with', 'tricky', 'words', 'like', 'https://stackoverflow.com']
Personally, I think this is slightly cleaner than the answers provided
import re

def split_to_words(sentence):
    # Use sentence.lower() first, if needed
    return list(filter(lambda w: len(w) > 0, re.split(r'\W+', sentence)))
A regular expression for words would give you the most control. You would want to carefully consider how to deal with words with dashes or apostrophes, like "I'm".
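As a sketch of what such a pattern could look like, here is one possible definition of a word that allows internal apostrophes and hyphens (adjust the character classes to taste):
import re

# Letters, optionally chained through internal ' or -, so "I'm" and
# "custom-built" survive intact while leading/trailing punctuation never matches.
word_re = re.compile(r"[A-Za-z]+(?:['-][A-Za-z]+)*")

print(word_re.findall('''I'm a custom-built sentence, with "tricky" words!'''))
# ["I'm", 'a', 'custom-built', 'sentence', 'with', 'tricky', 'words']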
list = mystr.split(" ", mystr.count(" "))
This way you eliminate every special character outside of the alphabet:
def wordsToList(strn):
    L = strn.split()
    cleanL = []
    abc = 'abcdefghijklmnopqrstuvwxyz'
    ABC = abc.upper()
    letters = abc + ABC
    for e in L:
        word = ''
        for c in e:
            if c in letters:
                word += c
        if word != '':
            cleanL.append(word)
    return cleanL
s = 'She loves you, yea yea yea! '
L = wordsToList(s)
print(L) # ['She', 'loves', 'you', 'yea', 'yea', 'yea']
I'm not sure if this is fast or optimal or even the right way to program.
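A shorter version of the same idea, using str.isalpha and comprehensions (note that isalpha also accepts non-ASCII letters, unlike the explicit a-z table above):
def words_to_list(strn):
    # Keep only the letters of each whitespace-separated token; drop empty leftovers.
    cleaned = (''.join(c for c in token if c.isalpha()) for token in strn.split())
    return [word for word in cleaned if word]

print(words_to_list('She loves you, yea yea yea! '))  # ['She', 'loves', 'you', 'yea', 'yea', 'yea']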
def split_string(string):
    return string.split()
This function will return the list of words of a given string.
In this case, if we call the function as follows,
string = 'This is a string, with words!'
split_string(string)
The return output of the function would be
['This', 'is', 'a', 'string,', 'with', 'words!']
This is from my attempt at a coding challenge that can't use regex:
outputList = "".join((c if c.isalnum() or c == "'" else ' ') for c in inputStr).split(' ')
The role of the apostrophe seems interesting.
Probably not very elegant, but at least you know what's going on.
my_str = "Simple sample, test! is, olny".lower()
my_lst =[]
temp=""
len_my_str = len(my_str)
number_letter_in_data=0
list_words_number=0
for number_letter_in_data in range(0, len_my_str, 1):
if my_str[number_letter_in_data] in [',', '.', '!', '(', ')', ':', ';', '-']:
pass
else:
if my_str[number_letter_in_data] in [' ']:
#if you want longer than 3 char words
if len(temp)>3:
list_words_number +=1
my_lst.append(temp)
temp=""
else:
pass
else:
temp = temp+my_str[number_letter_in_data]
my_lst.append(temp)
print(my_lst)
You can try and do this:
tryTrans = str.maketrans(",!", "  ")  # both arguments must be the same length
s = "This is a string, with words!"
s = s.translate(tryTrans)
listOfWords = s.split()
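In Python 3, str.maketrans also accepts a third argument listing characters to delete, which is a convenient way to strip all punctuation instead of enumerating replacements:
import string

s = "This is a string, with words!"
listOfWords = s.translate(str.maketrans('', '', string.punctuation)).split()
print(listOfWords)  # ['This', 'is', 'a', 'string', 'with', 'words']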