I am attempting to chunk sentences with a regex at the word 'but' (or any other coordinating conjunction). It's not working...
import nltk
from nltk.tokenize import word_tokenize

sentence = nltk.pos_tag(word_tokenize("There are no large collections present but there is spinal canal stenosis."))
DigDug = nltk.RegexpParser(r'CHUNK: {.*<CC>.*}')
for subtree in DigDug.parse(sentence).subtrees():
    if subtree.label() == 'CHUNK':
        print(subtree.node())
I need to split the sentence "There are no large collections present but there is spinal canal stenosis." into two:
1. "There are no large collections present"
2. "there is spinal canal stenosis."
I also wish to use the same code to split sentences at 'and' and other coordinating conjunction (CC) words. But my code isn't working. Please help.
I think you can simply do
import re
result = re.split(r"\s+(?:but|and)\s+", sentence)
where
`\s+` matches one or more whitespace characters (spaces, tabs, line breaks, etc.), as many as possible
`(?:` starts a non-capturing group
`but` matches the characters "but" literally
`|` or, if that fails to match,
`and` matches the characters "and" literally
`)` ends the group
`\s+` matches one or more whitespace characters again
You can add more conjunction words in there, separated by a pipe character `|`.
Take care, though, that these words do not contain characters that have special meaning in regex. If in doubt, escape them first with `re.escape(word)`.
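If the list of conjunctions grows, you can build the pattern programmatically; a small sketch (this word list is just an example):

import re

conjunctions = ['but', 'and', 'yet', 'so']  # extend as needed
pattern = r'\s+(?:' + '|'.join(re.escape(w) for w in conjunctions) + r')\s+'

sentence = "There are no large collections present but there is spinal canal stenosis."
print(re.split(pattern, sentence))
# ['There are no large collections present', 'there is spinal canal stenosis.']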
If you want to avoid hardcoding conjunction words like 'but' and 'and', try chinking along with chunking:
import nltk

Digdug = nltk.RegexpParser(r"""
    CHUNK_AND_CHINK:
    {<.*>+}    # Chunk everything
    }<CC>+{    # Chink sequences of CC
    """)
sentence = nltk.pos_tag(nltk.word_tokenize("There are no large collections present but there is spinal canal stenosis."))
result = Digdug.parse(sentence)

for subtree in result.subtrees(filter=lambda t: t.label() == 'CHUNK_AND_CHINK'):
    print(subtree)
Chinking basically excludes what we don't need from a chunk phrase: 'but' in this case.
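If you need the chunks back as plain strings rather than trees, one way (a small sketch based on the code above) is to join the leaves of each chunk:

for subtree in result.subtrees(filter=lambda t: t.label() == 'CHUNK_AND_CHINK'):
    print(' '.join(word for word, tag in subtree.leaves()))
# There are no large collections present
# there is spinal canal stenosis .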
For more details, see: http://www.nltk.org/book/ch07.html
Using the code below, I imported a few .csv files into Python; they contain sentences like the following:
df = pd.concat((pd.read_csv(f) for f in path), ignore_index=True)
Sample sentence:
I WANT TO UNDERSTAND WHERE TH\nERE ARE\nSOME \n NEW RESTAURANTS. \n
While I have no problem removing the newline characters surrounded by spaces, in the middle of words, or at the end of the string, I don't know what to do with the newline characters separating words.
The output I want is as follows:
Goal sentence:
I WANT TO UNDERSTAND WHERE THERE ARE SOME NEW RESTAURANTS.
Is there a way for me to indicate in my code that the newline character is surrounded by two distinct words? Or is this classic garbage in, garbage out?
For now, I'm simply dropping the rows that contain a newline:
df = df[~df['Sentence'].str.contains("\n")]
After doing some digging, I came up with two solutions.
1. The textwrap package: Though it seems that the textwrap package is normally used for visual formatting (i.e. telling a UI when to show "..." to signify a long string), it successfully identified the \n patterns I was having issues with. Though it's still necessary to remove extra whitespace of other kinds, this package got me 90% of the way there.
import textwrap
sample = 'I WANT TO UNDERSTAND WHERE TH\nERE ARE\nSOME \n NEW RESTAURANTS. \n'
sample_wrap = textwrap.wrap(sample)
print(sample_wrap)
'I WANT TO UNDERSTAND WHERE THERE ARE SOME NEW RESTAURANTS. '
2. Function to ID different \n appearance patterns: The 'boil the ocean' solution I came up with before learning about textwrap, and it doesn't work as well. This function finds matches defined as a newline character surrounded by two word (alphanumeric) characters. For all matches, the function searches NLTK's words.words() list for each string surrounding the newline character. If at least one of the two strings is a word in that list, it's considered to be two separate words.
This doesn't take into consideration domain-specific words, which have to be added to the wordlist, or words like "about", which would be incorrectly categorized by this function if the newline character appeared as "ab\nout". I'd recommend textwrap for this reason, but still thought I'd share.
import re
from nltk.corpus import words

wordlist = set(words.words())  # the NLTK word list mentioned above

carriage = re.compile(r'(\n+)')
wordword = re.compile(r'((\w+)\n+(\w+))')

def carriage_return(sentence):
    if carriage.search(sentence):
        if not wordword.search(sentence):
            # No newline sits between two word characters; strip them all
            sentence = re.sub(carriage, '', sentence)
        else:
            matches = re.findall(wordword, sentence)
            for match in matches:
                word1 = match[1].lower()
                word2 = match[2].lower()
                if word1 in wordlist or word2 in wordlist or word1.isdigit() or word2.isdigit():
                    # At least one side is a known word: treat as two words
                    # (note the replacement is lowercased)
                    sentence = sentence.replace(match[0], word1 + ' ' + word2)
                else:
                    # Neither side is a known word: assume one word split by the newline
                    sentence = sentence.replace(match[0], word1 + word2)
            sentence = re.sub(carriage, '', sentence)
    display(sentence)  # display() is IPython/Jupyter-only; use print() elsewhere
    return sentence
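For reference, it might be called on the sample string from the question like this (the exact output depends on the contents of NLTK's word list, and extra whitespace still has to be collapsed afterwards):

sample = 'I WANT TO UNDERSTAND WHERE TH\nERE ARE\nSOME \n NEW RESTAURANTS. \n'
cleaned = carriage_return(sample)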
I'm working on a sentencizer and tokenizer for a tutorial. This means splitting a document string into sentences and sentences into words. Examples:
#Sentencizing
"This is a sentence. This is another sentence! A third..."=>["This is a sentence.", "This is another sentence!", "A third..."]
#Tokenization
"Tokens are 'individual' bits of a sentence."=>["Tokens", "are", "'individual'", "bits", "of", "a", "sentence", "."]
As seen, there's a need for something more than just string.split(). I'm using re.sub() to append a 'special' tag to each match (and later splitting on this tag), first for sentences and then for tokens.
So far it works great, but there's a problem: how do I write a regex that splits at dots, but not at ellipses (...) or inside numbers (3.14)?
I've been trying options with lookarounds (I need to match the group and then be able to recall it for appending), but none works:
#Do a negative look behind for preceding numbers or dots, central capture group is a dot, do the same as first for a look ahead.
(?![\d\.])(\.)(?<![\d\.])
The application is:
sentence = re.sub(pattern, r'\g<0>' + special_tag, raw_sentence)
I used the following to find the periods that looked relevant:
import re
m = re.compile(r'[0-9]\.[^0-9.]|[^0-9]\.[^0-9.]|[!?]')
st = "This is a sentence. This is another sentence! A third... Pi is 3.14. This is 1984. Hello?"
m.findall(st)
# if you want to use lookahead, you can use something like this:
m = re.compile(r'(?<=[0-9])\.(?=[^0-9.])|(?<=[^0-9])\.(?=[^0-9.])|[!?]')
It's not particularly elegant, but I also tried to deal with the case of "We have a .1% chance of success."
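Plugged into the tag-and-split approach from the question (special_tag here is a stand-in name), the lookahead version might be used like this:

import re

special_tag = '<SPLIT>'  # stand-in for the question's special tag
m = re.compile(r'(?<=[0-9])\.(?=[^0-9.])|(?<=[^0-9])\.(?=[^0-9.])|[!?]')

st = "This is a sentence. This is another sentence! A third... Pi is 3.14. This is 1984. Hello?"
tagged = m.sub(r'\g<0>' + special_tag, st)
print([s.strip() for s in tagged.split(special_tag) if s.strip()])
# ['This is a sentence.', 'This is another sentence!', 'A third...', 'Pi is 3.14.', 'This is 1984.', 'Hello?']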
Good luck!
This might be overkill, or need a bit of cleanup, but here is the best regex I could come up with:
((([^\.\n ]+|(\.+\d+))\b[^\.]? ?)+)([\.?!\)\"]+)
To break it down:
[^\.\n ]+ // Matches 1+ times any char that isn't a dot, newline or space.
(\.+\d+) // Captures the special case of decimal numbers
\b[^\.]? ? // \b is a word boundary. This may be optionally
// followed by any non-dot character, and optionally a space.
All of the preceding parts are matched one or more times. To determine that a sentence is finished, we use the following:
[\.?!\)\"]+ // Matches any of the common sentence terminators, 1+ times
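As a quick sketch of how it might be applied (group 1 holds the sentence body and group 5 the terminator):

import re

pattern = re.compile(r'((([^\.\n ]+|(\.+\d+))\b[^\.]? ?)+)([\.?!\)\"]+)')
st = "This is a sentence. This is another sentence! A third... Pi is 3.14."
for match in pattern.finditer(st):
    print(match.group(1) + match.group(5))
# This is a sentence.
# This is another sentence!
# A third...
# Pi is 3.14.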
Try it out!
reso- lution
sug- gest
evolu- tion
are all words that contain hyphens due to limited line width in a piece of text, e.g.:
Analysis of two high reso- lution nucleosome maps revealed strong
signals that even though they do not constitute a definite proof are
at least consistent with such a view. Taken together, all these
findings sug- gest the intriguing possibility that nucleosome
positions are the product of a mechanical evolu- tion of DNA
molecules.
I would like to replace with their natural forms i.e.
resolution
suggest
evolution
How can I do this in a text with python?
Make sure there is a lowercase letter before the - and a lowercase letter after the - plus space, capture the letters, and use backreferences to put them back in the replacement:
([a-z])- ([a-z])
See the regex demo (replace with the \1\2 backreference sequence). Note that you may adjust the number of spaces with a {1,max} quantifier (say, if there can be one or two spaces between the parts of the word, use ([a-z])- {1,2}([a-z])). If there can be any whitespace, use \s rather than the literal space.
Python code:
import re
s = 'Analysis of two high reso- lution nucleosome maps revealed strong signals that even though they do not constitute a definite proof are at least consistent with such a view. Taken together, all these findings sug- gest the intriguing possibility that nucleosome positions are the product of a mechanical evolu- tion of DNA molecules.'
s = re.sub(r'([a-z])- ([a-z])', r'\1\2', s)
print(s)
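As mentioned above, if the parts of the word can also be separated by a line break rather than a single space, a \s-based variant might look like:

s = re.sub(r'([a-z])-\s+([a-z])', r'\1\2', s)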
Use str.replace() to replace "- " with "". For example:
>>> my_text = 'reso- lution'
>>> my_text = my_text.replace('- ', '')
>>> my_text # Updated value without "- "
'resolution'
I have a script that gives me sentences that contain one of a specified list of key words. A sentence is defined as anything between 2 periods.
Now I want it to select the whole of a sentence like 'Put 1.5 grams of powder in', so that if 'powder' were a key word it would return the entire sentence and not just '5 grams of powder'.
I am trying to figure out how to express that a sentence lies between two sequences of period-then-space. My new filter is:
from itertools import ifilter, imap  # Python 2
from re import finditer

def iterphrases(text):
    return ifilter(None, imap(lambda m: m.group(1), finditer(r'([^\.\s]+)', text)))
However now I no longer print any sentences just pieces/phrases of words (including my key word). I am very confused as to what I am doing wrong.
If you don't HAVE to use an iterator, re.split would be a bit simpler for your use case (custom definition of a sentence):
re.split(r'\.\s', text)
Note that the last element will either keep its trailing . or be empty (if the text ends with whitespace after the last period); to fix that:
re.split(r'\.\s', re.sub(r'\.\s*$', '', text))
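For example (note that the delimiter, the period plus space, is consumed by the split, so the periods are dropped):

import re

text = "First sentence. Second sentence. "
print(re.split(r'\.\s', re.sub(r'\.\s*$', '', text)))
# ['First sentence', 'Second sentence']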
also have a look at a bit more general case in the answer for Python - RegEx for splitting text into sentences (sentence-tokenizing)
and for a completely general solution you would need a proper sentence tokenizer, such as nltk.tokenize
nltk.tokenize.sent_tokenize(text)
Here you get it as an iterator. It works with my test cases. It considers a sentence to be anything (non-greedy) up to a period that is followed by either a space or the end of the line.
import re

sentence = re.compile(r"\w.*?\.(?= |$)", re.MULTILINE)

def iterphrases(text):
    return (match.group(0) for match in sentence.finditer(text))
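For example:

print(list(iterphrases("This is one. Here is 3.14 in another. Last one.")))
# ['This is one.', 'Here is 3.14 in another.', 'Last one.']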
If you are sure that . is used for nothing besides sentence delimiters and that every relevant sentence ends with a period, then the following may be useful:
matches = re.finditer(r'([^.]*?(powder|keyword2|keyword3).*?)\.', text)
result = [m.group() for m in matches]
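For instance, assuming periods only ever end sentences:

import re

text = "Add the powder to the flask. Then stir. Seal it."
matches = re.finditer(r'([^.]*?(powder|keyword2|keyword3).*?)\.', text)
print([m.group() for m in matches])
# ['Add the powder to the flask.']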
Using Python: how do I get the regex to continue only if a positive lookahead has matched at least once?
I'm trying to match:
Clinton-Orfalea-Brittingham Fellowship Program
Here's the code I'm using now:
dp2= r'[A-Z][a-z]+(?:-\w+|\s[A-Z][a-z]+)+'
print np.unique(re.findall(dp2, tt))
I'm matching the word, but it's also matching a bunch of other extraneous words.
My thought was that I'd like the \s[A-Z][a-z]+ to kick in ONLY IF -\w+ has been hit at least once (or maybe twice). I'd appreciate any thoughts.
To clarify: I'm not aiming to match specifically this set of words, but to be able to generically match Proper noun- Proper noun- (indefinite number of times) and then a non-hyphenated Proper noun.
eg.
Noun-Noun-Noun Noun Noun
Noun-Noun Noun
Noun-Noun-Noun Noun
THE LATEST ITERATION:
dp5= r'(?:[A-Z][a-z]+-?){2,3}(?:\s\w+){2,4}'
The {m,n} notation can be used to force the regex to match ONLY if the preceding expression occurs between m and n times. Maybe something like
(?:[A-Z][a-z]+-?){2,3}\s\w+\s\w+ # matches 'Clinton-Orfalea-Brittingham Fellowship Program'
If you're SPECIFICALLY looking for "Clinton-Orfalea-Brittingham Fellowship Program", why are you using regex to find it? Just use `word in string`. If you're looking for things of the form Name-Name-Name Noun Noun, this should work, but be aware that Name-Name-Name-Name Noun Noun won't match, nor will Name-Name-Name Noun Noun Noun. (In fact, something like "Alice-Bob-Catherine Program" will match not only that but whatever word comes after it!)
# Explanation
RE = r"""(?:          # Begin the group so we can repeat it
       [A-Z][a-z]+    # Match one capital letter, then any number of lowercase letters
       -?             # Allow a hyphen at the end of the word without requiring it
       ){2,3}         # End the group and require it to match 2 or 3 times in a row
       \s\w+          # Match a space and the next word
       \s\w+          # Do so again
                      # (those last two lines could just as easily be (?:\s\w+){2})
       """
RE = re.compile(RE, re.VERBOSE)  # compiles the expression as written
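A quick check against a sample string (the variable tt from the question is assumed to contain something like this):

import re

dp = r'(?:[A-Z][a-z]+-?){2,3}\s\w+\s\w+'
tt = "He won the Clinton-Orfalea-Brittingham Fellowship Program award."
print(re.findall(dp, tt))
# ['Clinton-Orfalea-Brittingham Fellowship Program']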
If you're looking specifically for hyphenated proper nouns followed by non-hyphenated proper nouns, I would do this:
[A-Z][a-z]+-(?:[A-Z][a-z]+(?:-|\s))+
# Explanation
RE = r"""[A-Z][a-z]+-    # Capital letter + lowercase letters, ending with a hyphen
       (?:               # Start a non-capturing group so we can repeat it
         [A-Z][a-z]+     # As before, but without requiring a hyphen
         (?:
           -|\s          # ...but if there's no hyphen, there MUST be a space
         )               # (this group just gives precedence to the |)
       )+                # Can match multiple of these
       """