Can Python regex negate a list of words?

I have to match all the alphanumeric words from a text.
>>> import re
>>> text = "hello world!! how are you?"
>>> final_list = re.findall(r"[a-zA-Z0-9]+", text)
>>> final_list
['hello', 'world', 'how', 'are', 'you']
>>>
This is fine, but additionally I have a few words to negate, i.e. words that shouldn't be in my final list.
>>> negate_words = ['world', 'other', 'words']
A bad way to do it:
>>> negate_str = '|'.join(negate_words)
>>> filter(lambda x: not re.match(negate_str, x), final_list)
['hello', 'how', 'are', 'you']
But I could save a pass if my very first regex pattern considered the negation of those words directly. I found how to negate characters, but I have whole words to negate; I also found regex lookbehind in other questions, but that doesn't help either.
Can it be done using Python's re?
Update
My text can span a few hundred lines, and the list of negate_words can be lengthy too.
Considering this, is regex even the right tool for this task in the first place? Any suggestions?

I don't think there is a clean way to do this using regular expressions. The closest I could find was a bit ugly and not exactly what you wanted:
>>> re.findall(r"\b(?:world|other|words)|([a-zA-Z0-9]+)\b", text)
['hello', '', 'how', 'are', 'you']
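A negative lookahead gets a bit closer, though with a long negate_words list the alternation may be slow. A sketch built from the data above (remember to re.escape the words in case they contain metacharacters):
>>> pattern = r"\b(?!(?:%s)\b)[a-zA-Z0-9]+\b" % "|".join(map(re.escape, negate_words))
>>> re.findall(pattern, text)
['hello', 'how', 'are', 'you']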
Why not use Python's sets instead? They are very fast:
>>> list(set(final_list) - set(negate_words))
['hello', 'how', 'are', 'you']
If order is important, see the reply from @glglgl below. His list comprehension version is very readable. Here's a fast but less readable equivalent using itertools:
>>> import itertools
>>> negate_words_set = set(negate_words)
>>> list(itertools.ifilterfalse(negate_words_set.__contains__, final_list))
['hello', 'how', 'are', 'you']
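(itertools.ifilterfalse is the Python 2 spelling; on Python 3 the same function is itertools.filterfalse:)
>>> list(itertools.filterfalse(negate_words_set.__contains__, final_list))
['hello', 'how', 'are', 'you']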
Another alternative is to build up the word list in a single pass using re.finditer:
>>> negate_words_set = set(negate_words)
>>> result = []
>>> for mo in re.finditer(r"[a-zA-Z0-9]+", text):
...     word = mo.group()
...     if word not in negate_words_set:
...         result.append(word)
...
>>> result
['hello', 'how', 'are', 'you']

Maybe it is worth trying pyparsing for this:
>>> from pyparsing import *
>>> negate_words = ['world', 'other', 'words']
>>> parser = OneOrMore(Suppress(oneOf(negate_words)) ^ Word(alphanums)).ignore(CharsNotIn(alphanums))
>>> parser.parseString('hello world!! how are you?').asList()
['hello', 'how', 'are', 'you']
Note that oneOf(negate_words) must be before Word(alphanums) to make sure that it matches earlier.
Edit: Just for the fun of it, I repeated the exercise using lepl (also an interesting parsing library)
>>> from lepl import *
>>> negate_words = ['world', 'other', 'words']
>>> parser = OneOrMore(~Or(*negate_words) | Word(Letter() | Digit()) | ~Any())
>>> parser.parse('hello world!! how are you?')
['hello', 'how', 'are', 'you']

Don't ask too much of regexes.
Instead, think in terms of generators.
import re
unwanted = ('world', 'other', 'words')
text = "hello world!! how are you?"
gen = (m.group() for m in re.finditer("[a-zA-Z0-9]+",text))
li = [ w for w in gen if w not in unwanted ]
A generator could be built instead of the list li, too.
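For example (a sketch of the same pipeline; making unwanted a set is my tweak, for O(1) membership tests on long negate lists):
import re

unwanted = frozenset(('world', 'other', 'words'))  # set lookup is O(1) per word
text = "hello world!! how are you?"
gen = (m.group() for m in re.finditer("[a-zA-Z0-9]+", text))
filtered = (w for w in gen if w not in unwanted)  # still lazy: nothing runs until consumed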

Related

Python: Regex Search

I wanted to split a sentence on multiple delimiters:
.?!\n
However, I want to keep the comma along with the word.
For example for the string
'Hi, How are you?'
I want the result
['Hi,', 'How', 'are', 'you', '?']
I tried the following, but I'm not getting the required result:
words = re.findall(r"\w+|\W+", text)
Use re.split and keep your delimiters, then filter out the strings that only contain whitespace.
>>> import re
>>> s = 'Hi, How are you?'
>>> [x for x in re.split('(\s|!|\.|\?|\n)', s) if x.strip()]
['Hi,', 'How', 'are', 'you', '?']
If using re.findall:
>>> ss = """
... Hi, How are
...
... yo.u
... do!ing?
... """
>>> [ w for w in re.findall('(\w+\,?|[.?!]?)?\s*', ss) if w ]
['Hi,', 'How', 'are', 'yo', '.', 'u', 'do', '!', 'ing', '?']
You can use:
re.findall('(.*?)([\s\.\?!\n])', text)
With a bit of itertools magic and list comprehensions:
[i.strip() for i in itertools.chain.from_iterable(re.findall('(.*?)([\s\.\?!\n])', text)) if i.strip()]
And a bit more comprehensible version:
words = []
found = itertools.chain.from_iterable(re.findall('(.*?)([\s\.\?!\n])', text))
for i in found:
    w = i.strip()
    if w:
        words.append(w)

Python - Printing words by length

I have a task where I have to print words in a sentence out by their length.
For example:
Sentence: I like programming in python because it is very fun and simple.
>>> I
>>> in it is
>>> fun and
>>> like very
>>> python simple
>>> because
And if there are no repetitions:
Sentence: Nothing repeated here
>>> here
>>> Nothing
>>> repeated
So far I have got this:
wordsSorted = sorted(sentence, key=len)
That sorts the words by their length, but I don't know how to get the correct output from the sorted words. Any help appreciated. I also understand that dictionaries may be needed, but I'm not sure how.
Thanks in advance.
First sort the words based on length and then group them using itertools.groupby again on length:
>>> from itertools import groupby
>>> s = 'I like programming in python because it is very fun and simple'
>>> for _, g in groupby(sorted(s.split(), key=len), key=len):
...     print ' '.join(g)
...
I
in it is
fun and
like very
python simple
because
programming
You can also do it using a dict:
>>> d = {}
>>> for word in s.split():
...     d.setdefault(len(word), []).append(word)
...
Now d contains:
>>> d
{1: ['I'], 2: ['in', 'it', 'is'], 3: ['fun', 'and'], 4: ['like', 'very'], 6: ['python', 'simple'], 7: ['because'], 11: ['programming']}
Now we need to iterate over sorted keys and fetch the related value:
>>> for _, v in sorted(d.items()):
...     print ' '.join(v)
...
I
in it is
fun and
like very
python simple
because
programming
If you want to ignore punctuation then you can strip it using str.strip with string.punctuation:
>>> from string import punctuation
>>> s = 'I like programming in python. Because it is very fun and simple.'
>>> sorted((word.strip(punctuation) for word in s.split()), key=len)
['I', 'in', 'it', 'is', 'fun', 'and', 'like', 'very', 'python', 'simple', 'Because', 'programming']
This can be done using a defaultdict (or a regular dict) in O(N) time, whereas sort+groupby is O(N log N):
words = "I like programming in python because it is very fun and simple".split()
from collections import defaultdict
D = defaultdict(list)
for w in words:
    D[len(w)].append(w)

for k in sorted(D):
    print " ".join(D[k])
I
in it is
fun and
like very
python simple
because
programming
Try this:
s = 'I like programming in python because it is very fun and simple'
l = s.split(' ')
sorted(l, key=len)
It will return:
['I', 'in', 'it', 'is', 'fun', 'and', 'like', 'very', 'python', 'simple', 'because', 'programming']
Using a dictionary simplifies it:
sentence = "I like programming in python because it is very fun and simple."
output_dict = {}
for word in sentence.split(" "):
    if not word[-1].isalnum():
        word = word[:-1]
    if len(word) not in output_dict:
        output_dict[len(word)] = []
    output_dict[len(word)].append(word)

for key in sorted(output_dict.keys()):
    print " ".join(output_dict[key])
This also strips a single trailing comma, semicolon, or full stop from each word.
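Note that word[:-1] drops only one trailing character; if a word can end in several punctuation marks, str.rstrip with string.punctuation covers that (a small sketch, not part of the answer above):
from string import punctuation

word = "simple...".rstrip(punctuation)  # strips the whole trailing run
# word == "simple"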

In python NLTK, I want to get morphological analysis result on non-whitespace string

I want to get a morphological analysis result from NLTK on a non-whitespace string.
For example:
The string is "societynamebank".
I want to get ['society', 'name', 'bank']
How to get that result on NLTK ?
Here is some simple code that may help you. It uses the PyEnchant dictionary for the analysis:
>>> import enchant
>>> d = enchant.Dict("en_US")
>>> tokens=[]
>>> def tokenize(st):
...     if not st:
...         return
...     # take the longest dictionary-word prefix, then recurse on the rest
...     for i in xrange(len(st), -1, -1):
...         if d.check(st[0:i]):
...             tokens.append(st[0:i])
...             st = st[i:]
...             tokenize(st)
...             break
...
>>> tokenize("societynamebank")
>>> tokens
['society', 'name', 'bank']
>>> tokens=[]
>>> tokenize("HelloSirthereissomethingwrongwiththistext")
>>> tokens
['Hello', 'Sir', 'there', 'is', 'something', 'wrong', 'with', 'this', 'text']
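One caveat: the greedy longest-prefix recursion can dead-end when the longest dictionary prefix leaves an unsplittable tail. A classic word-break dynamic programme over the same d.check avoids that. An untested sketch reusing the Dict instance d from above (it returns one valid split, not necessarily the greedy one):
def tokenize_dp(st):
    # best[i] is a list of dictionary words covering st[:i], or None if st[:i] can't be split
    best = [None] * (len(st) + 1)
    best[0] = []
    for i in range(1, len(st) + 1):
        for j in range(i):
            if best[j] is not None and d.check(st[j:i]):
                best[i] = best[j] + [st[j:i]]
                break
    return best[len(st)]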

Converting a String to a List of Words?

I'm trying to convert a string to a list of words using Python. I want to take something like the following:
string = 'This is a string, with words!'
Then convert to something like this :
list = ['This', 'is', 'a', 'string', 'with', 'words']
Notice the omission of punctuation and spaces. What would be the fastest way of going about this?
Late response, but I think this is the simplest way for anyone else stumbling on this post:
>>> string = 'This is a string, with words!'
>>> string.split()
['This', 'is', 'a', 'string,', 'with', 'words!']
Try this:
import re
mystr = 'This is a string, with words!'
wordList = re.sub("[^\w]", " ", mystr).split()
How it works:
From the docs:
re.sub(pattern, repl, string, count=0, flags=0)
Return the string obtained by replacing the leftmost non-overlapping occurrences of pattern in string by the replacement repl. If the pattern isn’t found, string is returned unchanged. repl can be a string or a function.
So in our case:
The pattern is any non-alphanumeric character. \w means any alphanumeric character and is equal to the character set [a-zA-Z0-9_]: a to z, A to Z, 0 to 9 and underscore. So [^\w] matches any non-alphanumeric character, and we replace it with a space.
Then we split(), which splits the string by spaces and converts it to a list. So 'hello-world' becomes 'hello world' with re.sub, and then ['hello', 'world'] after split().
Let me know if any doubts come up.
To do this properly is quite complex. For your research, it is known as word tokenization. You should look at NLTK if you want to see what others have done, rather than starting from scratch:
>>> import nltk
>>> paragraph = u"Hi, this is my first sentence. And this is my second."
>>> sentences = nltk.sent_tokenize(paragraph)
>>> for sentence in sentences:
...     nltk.word_tokenize(sentence)
[u'Hi', u',', u'this', u'is', u'my', u'first', u'sentence', u'.']
[u'And', u'this', u'is', u'my', u'second', u'.']
The simplest way:
>>> import re
>>> string = 'This is a string, with words!'
>>> re.findall(r'\w+', string)
['This', 'is', 'a', 'string', 'with', 'words']
Using string.punctuation for completeness:
import re
import string

s = 'This is a string, with words!'
x = re.sub('[' + string.punctuation + ']', '', s).split()
This handles newlines as well.
Well, you could use
import re
list = re.sub(r'[.!,;?]', ' ', string).split()
Note that list is the name of a built-in type and string is the name of a standard module, so you probably don't want to use those as your variable names.
Inspired by @mtrw's answer, but improved to strip out punctuation at word boundaries only:
import re
import string
def extract_words(s):
    return [re.sub('^[{0}]+|[{0}]+$'.format(string.punctuation), '', w) for w in s.split()]
>>> str = 'This is a string, with words!'
>>> extract_words(str)
['This', 'is', 'a', 'string', 'with', 'words']
>>> str = '''I'm a custom-built sentence with "tricky" words like https://stackoverflow.com/.'''
>>> extract_words(str)
["I'm", 'a', 'custom-built', 'sentence', 'with', 'tricky', 'words', 'like', 'https://stackoverflow.com']
Personally, I think this is slightly cleaner than the answers provided
import re

def split_to_words(sentence):
    # use sentence.lower() here, if needed
    return list(filter(lambda w: len(w) > 0, re.split(r'\W+', sentence)))
A regular expression for words would give you the most control. You would want to carefully consider how to deal with words with dashes or apostrophes, like "I'm".
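For instance, a pattern that keeps internal hyphens and apostrophes might look like this (a sketch; tune the character class to your data):
>>> import re
>>> re.findall(r"\w+(?:['-]\w+)*", "I'm a custom-built sentence")
["I'm", 'a', 'custom-built', 'sentence']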
list = mystr.split(" ", mystr.count(" "))
This way you eliminate every special char outside of the alphabet:
def wordsToList(strn):
    L = strn.split()
    cleanL = []
    abc = 'abcdefghijklmnopqrstuvwxyz'
    ABC = abc.upper()
    letters = abc + ABC
    for e in L:
        word = ''
        for c in e:
            if c in letters:
                word += c
        if word != '':
            cleanL.append(word)
    return cleanL
s = 'She loves you, yea yea yea! '
L = wordsToList(s)
print(L) # ['She', 'loves', 'you', 'yea', 'yea', 'yea']
I'm not sure if this is fast or optimal or even the right way to program.
def split_string(string):
    return string.split()
This function will return the list of words of a given string.
In this case, if we call the function as follows,
string = 'This is a string, with words!'
split_string(string)
The return output of the function would be
['This', 'is', 'a', 'string,', 'with', 'words!']
This is from my attempt at a coding challenge that couldn't use regex:
outputList = "".join((c if c.isalnum() or c == "'" else ' ') for c in inputStr).split()
Handling the apostrophe is the interesting part.
Probably not very elegant, but at least you know what's going on.
my_str = "Simple sample, test! is, olny".lower()
my_lst = []
temp = ""
for ch in my_str:
    if ch in [',', '.', '!', '(', ')', ':', ';', '-']:
        pass  # drop punctuation
    elif ch == ' ':
        # if you want longer than 3 char words
        if len(temp) > 3:
            my_lst.append(temp)
        temp = ""
    else:
        temp = temp + ch
my_lst.append(temp)
print(my_lst)
You can try and do this:
import string

tryTrans = string.maketrans(",!", "  ")  # both arguments must be the same length
s = "This is a string, with words!"
s = s.translate(tryTrans)
listOfWords = s.split()
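On Python 3 the same idea is written with str.maketrans, whose three-argument form lists characters to delete (a sketch):
table = str.maketrans("", "", ",!")  # third argument: characters to delete
s = "This is a string, with words!"
listOfWords = s.translate(table).split()
# ['This', 'is', 'a', 'string', 'with', 'words']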

Python regex string to list of words (including words with hyphens)

I would like to parse a string to obtain a list including all words (hyphenated words, too). Current code is:
s = '-this is. A - sentence;one-word'
re.compile(r"\W+", re.UNICODE).split(s)
returns:
['', 'this', 'is', 'A', 'sentence', 'one', 'word']
and I would like it to return:
['', 'this', 'is', 'A', 'sentence', 'one-word']
If you don't need the leading empty string, you could use the pattern \w(?:[-\w]*\w)? for matching:
>>> import re
>>> s = '-this is. A - sentence;one-word'
>>> rx = re.compile(r'\w(?:[-\w]*\w)?')
>>> rx.findall(s)
['this', 'is', 'A', 'sentence', 'one-word']
Note that it won't match words with apostrophes like won't.
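If you need apostrophes as well, one option is to widen the inner class (a sketch; it will also accept forms like rock'n'roll):
>>> rx = re.compile(r"\w(?:[-'\w]*\w)?")
>>> rx.findall("it won't re-do that")
['it', "won't", 're-do', 'that']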
Here is my traditional "why use a regexp language when you can use Python" alternative:
import string
s = "-this is. A - sentence;one-word what's"
s = filter(None, [word.strip(string.punctuation)
                  for word in s.replace(';', '; ').split()])
print s
""" Output:
['this', 'is', 'A', 'sentence', 'one-word', "what's"]
"""
You could use "[^\w-]+" instead.
s = "-this is. A - sentence;one-word what's"
re.findall("\w+-\w+|[\w']+",s)
result:
['this', 'is', 'A', 'sentence', 'one-word', "what's"]
Make sure you notice the ordering: the alternative for hyphenated words must come first!
You can try the NLTK library:
>>> import nltk
>>> s = '-this is a - sentence;one-word'
>>> hyphen = r'(\w+\-\s?\w+)'
>>> wordr = r'(\w+)'
>>> r = "|".join([ hyphen, wordr])
>>> tokens = nltk.tokenize.regexp_tokenize(s,r)
>>> print tokens
['this', 'is', 'a', 'sentence', 'one-word']
I found it here: http://www.cs.oberlin.edu/~jdonalds/333/lecture03.html Hope it helps.
