I was trying to create a program that removes all sorts of punctuation from a given input sentence. The code looked somewhat like this:
from string import punctuation

sent = str(input())

def rempunc(string):
    for i in string:
        word = ''
        list = [0]
        if i in punctuation:
            x = string.index(i)
            word += string[list[-1]:x] + ' '
            list.append(x)
    list_2 = word.split(' ')
    return list_2

print(rempunc(sent))
However, given the input below, the output comes out as follows:
This state ment has # 1 ! punc.
['This', 'state', 'ment', 'has', '#', '1', '!', 'punc', '']
Why isn't the punctuation being removed entirely? Am I missing something in the code?
I tried replacing x with x-1 in line 7, but it did not help. Now I'm stuck and don't know what else to try.
Repeated string slicing isn't necessary here.
I would suggest using filter() to filter out the undesired characters for each word, and then reading that result into a list comprehension. From there, you can use a second filter() operation to remove the empty strings:
from string import punctuation

def remove_punctuation(s):
    cleaned_words = [''.join(filter(lambda x: x not in punctuation, word))
                     for word in s.split()]
    return list(filter(lambda x: x != "", cleaned_words))

print(remove_punctuation(input()))
This outputs:
['This', 'state', 'ment', 'has', '1', 'punc']
How to remove alphabetic characters and extract numbers using regex in Python?
import re
l=["098765432123 M","123456789012"]
s = re.findall(r"(?<!\d)\d{12}", l)
print(s)
Expected Output:
123456789012
If all you want is a filtered list consisting of the elements that are purely digits, use filter with str.isdigit:
list(filter(str.isdigit, l))
Or, as @tobias_k suggested, a list comprehension is always your friend:
[s for s in l if s.isdigit()]
Output:
['123456789012']
I would suggest using a negative lookahead assertion if, as stated, you want to use regex only.
l=["098765432123 M","123456789012"]
res=[]
for a in l:
s = re.search(r"(?<!\d)\d{12}(?! [a-zA-Z])", a)
if s is not None:
res.append(s.group(0))
The result would then be:
['123456789012']
To keep only digits you can do re.findall(r'\d', s), but you'll get a list of individual digit characters:
import re
s = re.findall(r'\d', "098765432123 M")
print(s)
> ['0', '9', '8', '7', '6', '5', '4', '3', '2', '1', '2', '3']
So to be clear, you want to ignore the whole string if there is an alphabetic character in it? Or do you still want to extract the numbers from a string with both numbers and alphabetic characters in it?
If you want to find all numbers, and always match the longest run of digits, use this:
import re

regex = r"\d+"
matches = re.finditer(regex, test_str, re.MULTILINE)
\d matches a digit, and + matches one or more of them, so it will always match the longest consecutive run of digits.
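For example, applied to one of the strings from the question (a quick sketch; test_str here is just that sample value):
import re

test_str = "098765432123 M"  # sample from the question
for match in re.finditer(r"\d+", test_str, re.MULTILINE):
    print(match.group(0))  # prints 098765432123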
If you only want to find the strings that contain no alphabetic characters:
import re

regex = r"[a-zA-Z]"
test_str = ("098765432123 M", "123456789012")
for x in test_str:
    if not re.search(regex, x):
        print(x)
I have the following string:
myString = "cat(50),dog(60),pig(70)"
I am trying to convert the above string to a 2D array.
The result I want to get is:
myResult = [['cat', 50], ['dog', 60], ['pig', 70]]
I already know how to solve this the legacy way with plain string methods, but it is quite complicated, so I don't want to use that approach:
# Legacy approach
# 1. Split string by ","
# 2. Run loop and split string by "(" => got the <name of animal>
# 3. Got the number by exclude ")".
Any suggestion would be appreciated.
You can use the re.findall method:
>>> import re
>>> re.findall(r'(\w+)\((\d+)\)', myString)
[('cat', '50'), ('dog', '60'), ('pig', '70')]
If you want a list of lists, as noted by RomanPerekhrest, convert it with a list comprehension:
>>> [list(t) for t in re.findall(r'(\w+)\((\d+)\)', myString)]
[['cat', '50'], ['dog', '60'], ['pig', '70']]
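Since the desired myResult in the question has the numbers as integers, you could additionally cast the second element; a small follow-up sketch:
>>> [[name, int(num)] for name, num in re.findall(r'(\w+)\((\d+)\)', myString)]
[['cat', 50], ['dog', 60], ['pig', 70]]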
Alternative solution using re.split() function:
import re
myString = "cat(50),dog(60),pig(70)"
result = [re.split(r'[)(]', i)[:-1] for i in myString.split(',')]
print(result)
The output:
[['cat', '50'], ['dog', '60'], ['pig', '70']]
r'[)(]' - the pattern; it treats the parentheses as delimiters for splitting
[:-1] - a slice containing all items except the last one (which is an empty string '')
I'm trying to convert a string to a list of words using python. I want to take something like the following:
string = 'This is a string, with words!'
Then convert to something like this :
list = ['This', 'is', 'a', 'string', 'with', 'words']
Notice the omission of punctuation and spaces. What would be the fastest way of going about this?
I think this is the simplest way for anyone else stumbling on this post given the late response:
>>> string = 'This is a string, with words!'
>>> string.split()
['This', 'is', 'a', 'string,', 'with', 'words!']
Try this:
import re
mystr = 'This is a string, with words!'
wordList = re.sub(r"[^\w]", " ", mystr).split()
How it works:
From the docs:
re.sub(pattern, repl, string, count=0, flags=0)
Return the string obtained by replacing the leftmost non-overlapping occurrences of pattern in string by the replacement repl. If the pattern isn’t found, string is returned unchanged. repl can be a string or a function.
So in our case:
The pattern is any non-alphanumeric character. \w means any alphanumeric character and is equivalent to the character set [a-zA-Z0-9_], that is, a to z, A to Z, 0 to 9, and underscore.
So we match any non-alphanumeric character and replace it with a space.
Then we split() it, which splits the string on whitespace and converts it to a list.
So 'hello-world' becomes 'hello world' with re.sub, and then ['hello', 'world'] after split().
Let me know if any doubts come up.
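As a minimal illustration of those two steps (the mystr and spaced names are just for the sketch):
import re

mystr = 'hello-world'
spaced = re.sub(r"[^\w]", " ", mystr)  # 'hello world'
print(spaced.split())                  # ['hello', 'world']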
To do this properly is quite complex. For your research, it is known as word tokenization. You should look at NLTK if you want to see what others have done, rather than starting from scratch:
>>> import nltk
>>> paragraph = u"Hi, this is my first sentence. And this is my second."
>>> sentences = nltk.sent_tokenize(paragraph)
>>> for sentence in sentences:
...     nltk.word_tokenize(sentence)
[u'Hi', u',', u'this', u'is', u'my', u'first', u'sentence', u'.']
[u'And', u'this', u'is', u'my', u'second', u'.']
The simplest way:
>>> import re
>>> string = 'This is a string, with words!'
>>> re.findall(r'\w+', string)
['This', 'is', 'a', 'string', 'with', 'words']
Using string.punctuation for completeness:
import re
import string

s = 'This is a string, with words!'  # the sample string from the question
x = re.sub('[' + string.punctuation + ']', '', s).split()
This handles newlines as well.
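For example, a quick check with the same pattern on a string containing a newline (the variable names here are just for the sketch):
import re
import string

s = 'This is a string,\nwith words!'
print(re.sub('[' + string.punctuation + ']', '', s).split())
# ['This', 'is', 'a', 'string', 'with', 'words']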
Well, you could use
import re
list = re.sub(r'[.!,;?]', ' ', string).split()
Note that both string and list are names of builtin types, so you probably don't want to use those as your variable names.
Inspired by @mtrw's answer, but improved to strip out punctuation at word boundaries only:
import re
import string
def extract_words(s):
    return [re.sub('^[{0}]+|[{0}]+$'.format(string.punctuation), '', w) for w in s.split()]
>>> str = 'This is a string, with words!'
>>> extract_words(str)
['This', 'is', 'a', 'string', 'with', 'words']
>>> str = '''I'm a custom-built sentence with "tricky" words like https://stackoverflow.com/.'''
>>> extract_words(str)
["I'm", 'a', 'custom-built', 'sentence', 'with', 'tricky', 'words', 'like', 'https://stackoverflow.com']
Personally, I think this is slightly cleaner than the answers provided
import re

def split_to_words(sentence):
    return list(filter(lambda w: len(w) > 0, re.split(r'\W+', sentence)))  # use sentence.lower() if needed
A regular expression for words would give you the most control. You would want to carefully consider how to deal with words with dashes or apostrophes, like "I'm".
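For instance, one possible pattern that keeps internal apostrophes and dashes; this is just a sketch, and the exact rules are up to you:
import re

s = "I'm a custom-built sentence, with words!"
print(re.findall(r"\w+(?:['-]\w+)*", s))
# ["I'm", 'a', 'custom-built', 'sentence', 'with', 'words']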
list = mystr.split(" ", mystr.count(" "))
This way you eliminate every special char outside of the alphabet:
def wordsToList(strn):
    L = strn.split()
    cleanL = []
    abc = 'abcdefghijklmnopqrstuvwxyz'
    ABC = abc.upper()
    letters = abc + ABC
    for e in L:
        word = ''
        for c in e:
            if c in letters:
                word += c
        if word != '':
            cleanL.append(word)
    return cleanL

s = 'She loves you, yea yea yea! '
L = wordsToList(s)
print(L)  # ['She', 'loves', 'you', 'yea', 'yea', 'yea']
I'm not sure if this is fast or optimal or even the right way to program.
def split_string(string):
    return string.split()
This function will return the list of words of a given string.
In this case, if we call the function as follows,
string = 'This is a string, with words!'
split_string(string)
The return output of the function would be
['This', 'is', 'a', 'string,', 'with', 'words!']
This is from my attempt on a coding challenge that can't use regex:
outputList = "".join((c if c.isalnum() or c == "'" else ' ') for c in inputStr).split(' ')
The role of apostrophe seems interesting.
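For example, on the sample string from the question this produces (note the empty strings that splitting on a single space leaves behind):
inputStr = 'This is a string, with words!'
outputList = "".join((c if c.isalnum() or c == "'" else ' ') for c in inputStr).split(' ')
print(outputList)  # ['This', 'is', 'a', 'string', '', 'with', 'words', '']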
Probably not very elegant, but at least you know what's going on.
my_str = "Simple sample, test! is, olny".lower()
my_lst =[]
temp=""
len_my_str = len(my_str)
number_letter_in_data=0
list_words_number=0
for number_letter_in_data in range(0, len_my_str, 1):
if my_str[number_letter_in_data] in [',', '.', '!', '(', ')', ':', ';', '-']:
pass
else:
if my_str[number_letter_in_data] in [' ']:
#if you want longer than 3 char words
if len(temp)>3:
list_words_number +=1
my_lst.append(temp)
temp=""
else:
pass
else:
temp = temp+my_str[number_letter_in_data]
my_lst.append(temp)
print(my_lst)
You can try and do this:
tryTrans = str.maketrans(",!", "  ")  # both arguments must be the same length
mystr = "This is a string, with words!"
mystr = mystr.translate(tryTrans)
listOfWords = mystr.split()  # ['This', 'is', 'a', 'string', 'with', 'words']
I have a string:
'Specified, if char, else 10 (default).'
I want to split it into two tuples
words=('Specified', 'if', 'char', 'else', '10', 'default')
separators=(',', ' ', ',', ' ', ' (', ').')
Does anyone have a quick solution of this?
PS: this symbol '-' is a word separator, not part of the word
import re
line = 'Specified, if char, else 10 (default).'
words = re.split(r'\)?[, .]\(?', line)
# words = ['Specified', '', 'if', 'char', '', 'else', '10', 'default', '']
separators = re.findall(r'\)?[, .]\(?', line)
# separators = [',', ' ', ' ', ',', ' ', ' ', ' (', ').']
If you really want tuples, pass the results to tuple(). If you do not want words to contain the empty entries (from between the commas and spaces), use the following:
words = [x for x in re.split(r'\)?[, .]\(?', line) if x]
or
words = tuple(x for x in re.split(r'\)?[, .]\(?', line) if x)
You can use regex for that.
>>> a='Specified, if char, else 10 (default).'
>>> from re import split
>>> split(",? ?\(?\)?\.?",a)
['Specified', 'if', 'char', 'else', '10', 'default', '']
But with this solution you have to write the pattern yourself. If you want to reuse that separators tuple, you would have to convert its contents back into a regex pattern.
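For instance, if you already had the separators tuple from the question, one way to turn it back into a pattern could look like this; just a sketch using re.escape, not part of the original answer:
import re

line = 'Specified, if char, else 10 (default).'
separators = (',', ' ', ',', ' ', ' (', ').')
# escape each separator and join into an alternation, longest first so ' (' wins over ' '
pattern = '|'.join(sorted({re.escape(sep) for sep in separators}, key=len, reverse=True))
words = [w for w in re.split(pattern, line) if w]
print(words)  # ['Specified', 'if', 'char', 'else', '10', 'default']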
Regex to find all separators (assuming a separator is anything that's not alphanumeric):
import re
re.findall(r'[^\w]', string)
I probably would first .split() on spaces into a list, then iterate through the list, using a regex to check for a character after the word boundary.
import re

s = 'Specified, if char, else 10 (default).'
words = s.split()
separators = []
finalwords = []
for word in words:
    match = re.search(r'(\w+)\b(.*)', word)
    if match is not None:
        finalwords.append(match.group(1))
        separators.append(match.group(2))
In one pass, to get both separators and words, you could use findall as follows:
import re

line = 'Specified, if char, else 10 (default).'
words = []
seps = []
for w, s in re.findall(r"(\w*)([), .(]+)", line):
    words.append(w)
    seps.append(s)
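Printing both lists should then give something like the following (note that the separators keep their trailing spaces, e.g. ', ' rather than ','):
print(words)  # ['Specified', 'if', 'char', 'else', '10', 'default']
print(seps)   # [', ', ' ', ', ', ' ', ' (', ').']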
Here's my crack at it:
>>> import re
>>> p = re.compile(r'(\)? *[,.]? *\(?)')
>>> tmp = p.split('Specified, char, else 10 (default).')
>>> words = tmp[::2]
>>> separators = tmp[1::2]
>>> print words
['Specified', 'char', 'else', '10', 'default', '']
>>> print separators
[', ', ', ', ' ', ' (', ').']
The only problem is you can have a '' at the end or the beginning of words if there is a separator at the beginning/end of the sentence without anything before/after it. However, that is easily checked for and eliminated.
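For example, continuing from the session above, one way to drop those empty entries is a couple of list comprehensions (a small sketch):
words = [w for w in words if w]
separators = [s for s in separators if s]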