Python regex: Match ALL consecutive capitalized words

Short question:
I have a string:
title="Announcing Elasticsearch.js For Node.js And The Browser"
I want to find all pairs of words where each word is properly capitalized.
So, expected output should be:
['Announcing Elasticsearch.js', 'Elasticsearch.js For', 'For Node.js', 'Node.js And', 'And The', 'The Browser']
What I have right now is this:
'[A-Z][a-z]+[\s-][A-Z][a-z.]*'
This gives me the output:
['Announcing Elasticsearch.js', 'For Node.js', 'And The']
How can I change my regex to give desired output?

You can use this:
#!/usr/bin/python
import re
title="Announcing Elasticsearch.js For Node.js And The Browser TEst"
pattern = r'(?=((?<![A-Za-z.])[A-Z][a-z.]*[\s-][A-Z][a-z.]*))'
print(re.findall(pattern, title))
A "normal" pattern can't match overlapping substrings, all characters are founded once for all. However, a lookahead (?=..) (i.e. "followed by") is only a check and match nothing. It can parse the string several times. Thus if you put a capturing group inside the lookahead, you can obtain overlapping substrings.

There's probably a more efficient way to do this, but you could use a regex like this:
(\b[A-Z][a-z.-]+\b)
Then iterate through the matches, testing each adjacent pair with the regex (^[A-Z][a-z.-]+$) to ensure that both the current match and the next one are properly capitalized.
Working example:
import re
title = "Announcing Elasticsearch.js For Node.js And The Browser"
matchlist = []
m = re.findall(r"(\b[A-Z][a-z.-]+\b)", title)
if m:
    # start at 1 so m[i - 1] is the previous word instead of wrapping around to m[-1]
    for i in range(1, len(m)):
        if re.match(r"(^[A-Z][a-z.-]+$)", m[i - 1]) and re.match(r"(^[A-Z][a-z.-]+$)", m[i]):
            matchlist.append([m[i - 1], m[i]])
print(matchlist)
Output:
[
['Announcing', 'Elasticsearch.js'],
['Elasticsearch.js', 'For'],
['For', 'Node.js'],
['Node.js', 'And'],
['And', 'The'],
['The', 'Browser']
]
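For what it's worth, the same pairing can be obtained without a second regex pass by zipping the word list against itself shifted by one (a sketch building on the answer's word pattern):
import re
title = "Announcing Elasticsearch.js For Node.js And The Browser"
words = re.findall(r"\b[A-Z][a-z.-]+\b", title)
# Pair each word with its successor; zip stops at the shorter sequence,
# so there is no wrap-around pair.
pairs = [[a, b] for a, b in zip(words, words[1:])]
print(pairs)
# [['Announcing', 'Elasticsearch.js'], ['Elasticsearch.js', 'For'],
#  ['For', 'Node.js'], ['Node.js', 'And'], ['And', 'The'], ['The', 'Browser']]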

If your Python code at the moment is this
title="Announcing Elasticsearch.js For Node.js And The Browser"
results = re.findall(r"[A-Z][a-z]+[\s-][A-Z][a-z.]*", title)
then your program is skipping the odd-numbered pairs. An easy solution is to re-run the search after skipping the first word, like this:
m = re.match(r"[A-Z][a-z]+[\s-]", title)
title_without_first_word = title[m.end():]
results2 = re.findall(r"[A-Z][a-z]+[\s-][A-Z][a-z.]*", title_without_first_word)
Now just combine results and results2; one way to merge them back into document order is sketched below.
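A sketch of the whole idea, assuming the pair pattern is loosened to [A-Z][a-z.]* so that first words containing dots (like 'Node.js') can also start a pair; the two passes then interleave back into document order:
import re
from itertools import chain, zip_longest
title = "Announcing Elasticsearch.js For Node.js And The Browser"
pattern = r"[A-Z][a-z.]*[\s-][A-Z][a-z.]*"  # first word may contain dots too
results = re.findall(pattern, title)                 # pairs 1, 3, 5, ...
first = re.match(r"[A-Z][a-z.]*[\s-]", title)
results2 = re.findall(pattern, title[first.end():])  # pairs 2, 4, 6, ...
# The two lists alternate in the original string, so interleaving them
# restores document order; zip_longest pads the shorter list with None.
combined = [p for p in chain.from_iterable(zip_longest(results, results2)) if p]
print(combined)
# ['Announcing Elasticsearch.js', 'Elasticsearch.js For', 'For Node.js',
#  'Node.js And', 'And The', 'The Browser']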

Related

How to put all words and phrases in list into a search expression (Python)

I have this list of lists:
groups = [['|FOOD|','shrimps','chicken wok','bowl of rice'],['|DRINK|','water','cranberry juice','tea']]
I'm trying to get the output to be:
[['|FOOD|',
'[lemma="shrimps"]',
'[lemma="chicken"][lemma="wok"]',
'[lemma="bowl"][lemma="of"][lemma="rice"]'],
['|DRINK|',
'[lemma="water"]',
'[lemma="cranberry"][lemma="juice"]',
'[lemma="tea"]']]
So, basically I need every word lemmatized for a corpus search. Some entries, though, are not single words but phrases. So far I've only figured out the code for single words; here it is:
import re
groups = [[f'[lemma="{word}"]' if not ' ' in word and not re.search(r'\|.*\|', word) else word for word in group] for group in groups]
This returns groups as:
[['|FOOD|',
'[lemma="shrimps"]',
'chicken wok',
'bowl of rice'],
['|DRINK|',
'[lemma="water"]',
'cranberry juice',
'[lemma="tea"]']]
So I made it skip the entries containing whitespace (phrases), plus the topic words. What code would deal with these phrases and make them look like I typed above?
I'm a beginner, so if you know a better way to organise all this data, let me know.
You do not really need a regex here; you may use if not word.startswith("|") and not word.endswith("|") to check that the entry has no pipes at either end:
groups = [[''.join([r"""[lemma="{}"]""".format(w) for w in word.split()]) if not word.startswith("|") and not word.endswith("|") else word for word in group] for group in groups]
Output:
[['|FOOD|',
'[lemma="shrimps"]',
'[lemma="chicken"][lemma="wok"]',
'[lemma="bowl"][lemma="of"][lemma="rice"]'],
['|DRINK|',
'[lemma="water"]',
'[lemma="cranberry"][lemma="juice"]',
'[lemma="tea"]']
]
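The nested comprehension is dense; the same logic unrolled into a helper function (purely a readability sketch) may be easier to maintain:
def lemmatize(entry):
    # Leave topic markers like '|FOOD|' untouched.
    if entry.startswith("|") and entry.endswith("|"):
        return entry
    # Wrap every word of the (possibly multi-word) entry in [lemma="..."].
    return ''.join('[lemma="{}"]'.format(word) for word in entry.split())

groups = [[lemmatize(entry) for entry in group] for group in groups]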

How can I look for specific bigrams in a text example? (Python)

I am interested in finding how often (as a percentage) a set of words, as in n_grams, appears in a sentence.
example_txt= ["order intake is strong for Q4"]
def find_ngrams(text):
    text = re.findall('[A-z]+', text)
    content = [w for w in text if w.lower() in n_grams]  # you can calculate %stopwords using "in"
    return round(float(len(content)) / float(len(text)), 5)
#the goal is for the above procedure to work on a pandas dataframe, but for now let's use 'text' as an example.
#full_MD['n_grams'] = [find_ngrams(x) for x in list(full_MD.loc[:,'text_no_stopwords'])]
Below you see two examples. The first one works, the last doesn't.
n_grams= ['order']
res = [find_ngrams(x) for x in list(example_txt)]
print(res)
Output:
[0.16667]
n_grams= ['order intake']
res = [find_ngrams(x) for x in list(example_txt)]
print(res)
Output:
[0.0]
How can I make the find_ngrams() function process bigrams, so the last example from above works?
Edit: Any other ideas?
You can use spaCy's Matcher:
import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
# Add match ID "orderintake" with no callback and one pattern
# (spaCy 2.x signature; spaCy 3.x uses matcher.add("orderintake", [pattern]))
pattern = [{"LOWER": "order"}, {"LOWER": "intake"}]
matcher.add("orderintake", None, pattern)
doc = nlp("order intake is strong for Q4")
matches = matcher(doc)
print(len(matches)) #Number of times the bi-gram appears in text
Maybe you have already explored this option, but why not use a simple .count combined with len:
(example_txt[0].count(n_grams[0]) * len(n_grams[0])) / len(example_txt[0])
Or, if you do not want the spaces to count as part of the calculation, you can use the following:
(example_txt[0].count(n_grams[0])* len(n_grams[0])) / len(example_txt[0].replace(' ',''))
Of course, you can use these in a list comprehension; this was just for demonstration purposes.
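One caveat with str.count: it counts substrings, so 'order' would also be found inside 'reorder'. If that matters, a word-boundary regex avoids the overcount:
import re
# Counts only whole-word occurrences of the n-gram.
hits = len(re.findall(r'\b' + re.escape(n_grams[0]) + r'\b', example_txt[0]))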
The line
re.findall('[A-z]+', text)
returns
['order', 'intake', 'is', 'strong', 'for', 'Q'].
For this reason, the string 'order intake' will never be matched in your loop here:
content = [w for w in text if w.lower() in n_grams]
If you want it to match, you'll need to build a single string from each bigram. Instead, you should probably use this to find bigrams.
For n-grams, have a look at this answer.
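Putting that together, here is one way to make find_ngrams bigram-aware (a sketch; note that [A-z] also matches the ASCII punctuation between 'Z' and 'a', so [A-Za-z0-9]+ is used instead):
import re

def find_ngrams(text, n_grams):
    words = re.findall(r'[A-Za-z0-9]+', text.lower())
    # Build every consecutive word pair present in the sentence.
    bigrams = {' '.join(pair) for pair in zip(words, words[1:])}
    hits = [g for g in n_grams if g.lower() in bigrams]
    # Keep the question's original word-count denominator.
    return round(len(hits) / len(words), 5)

print(find_ngrams("order intake is strong for Q4", ['order intake']))  # 0.16667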

Python - How to use re.finditer with multiple patterns

I want to search for 3 words in a string and put them in a list, something like:
sentence = "Tom once got a bike which he had left outside in the rain so it got rusty"
pattern = ['had', 'which', 'got' ]
and the answer should look like:
['got', 'which','had','got']
I haven't found a way to use re.finditer in such a way. Sadly, I'm required to use finditer rather than findall.
You can build the pattern from your list of searched words, then build your output list with a list comprehension from the matches returned by finditer:
import re
sentence = "Tom once got a bike which he had left outside in the rain so it got rusty"
pattern = ['had', 'which', 'got' ]
regex = re.compile(r'\b(' + '|'.join(pattern) + r')\b')
# the regex will be r'\b(had|which|got)\b'
out = [m.group() for m in regex.finditer(sentence)]
print(out)
# ['got', 'which', 'had', 'got']
The idea is to combine the entries of the pattern list into a single regular expression using alternation (|).
Then, you can use the following code fragment:
import re
sentence = 'Tom once got a bike which he had left outside in the rain so it got rusty. ' \
'Luckily, Margot and Chad saved money for him to buy a new one.'
pattern = ['had', 'which', 'got']
regex = re.compile(r'\b({})\b'.format('|'.join(pattern)))
# regex = re.compile(r'\b(had|which|got)\b')
results = [match.group(1) for match in regex.finditer(sentence)]
print(results)
The result is ['got', 'which', 'had', 'got'].
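A small hardening of either snippet: if the searched words could ever contain regex metacharacters, escaping them first keeps the alternation literal:
regex = re.compile(r'\b(' + '|'.join(map(re.escape, pattern)) + r')\b')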

Using regular expressions in python to extract location mentions in a sentence

I am writing code in Python to extract the name of a road, street, or highway. For example, given a sentence like "There is an accident along Uhuru Highway", I want my code to extract the name of the highway mentioned. I have written the code below.
sentence="there is an accident along uhuru highway"
listw=[word for word in sentence.lower().split()]
for i in range(len(listw)):
if listw[i] == "highway":
print listw[i-1] + " "+ listw[i]
I can achieve this, but my code is not optimized. I am thinking of using regular expressions; any help please?
'uhuru highway' can be found as follows
import re
m = re.search(r'\S+ highway', sentence) # non-white-space followed by ' highway'
print(m.group())
# 'uhuru highway'
If the location you want to extract will always have highway after it, you can use:
>>> sentence = "there is an accident along uhuru highway"
>>> a = re.search(r'.* ([\w\s\d\-\_]+) highway', sentence)
>>> print(a.group(1))
uhuru
You can do the following without using regexes:
sentence.split("highway")[0].strip().split(' ')[-1]
First split according to "highway". You'll get:
['there is an accident along uhuru', '']
And now you can easily extract the last word from the first part.
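Since the question mentions roads and streets as well as highways, the search-based answer above generalizes with an alternation (a sketch; the keyword list is an assumption):
import re
sentence = "there is an accident along uhuru highway"
# A non-whitespace word followed by any of the (assumed) location keywords.
m = re.search(r'\S+\s+(?:highway|road|street)', sentence, re.IGNORECASE)
if m:
    print(m.group())  # 'uhuru highway'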

python, re.search / re.split for phrases which look like a title, i.e. starting with an upper case

I have a list of phrases (input by the user) that I'd like to locate in a text, for example:
titles = ['Blue Team', 'Final Match', 'Best Player',]
text = 'In today Final match, The Best player is Joe from the Blue Team and the second best player is Jack from the Red team.'
1./ I can find all the occurrences of these phrases like so
titre = re.compile(r'(?P<title>%s)' % '|'.join(titles), re.M)
list = [ t for t in titre.split(text) if titre.search(t) ]
(For simplicity, I am assuming a perfect spacing.)
2./ I can also find variants of these phrases, e.g. 'Blue team', 'final Match', 'best player' ... using re.I, if they ever appear in the text.
But I want to restrict the search to variants of the input phrases whose first letter is upper-cased, e.g. 'Blue team' in the text, regardless of how they were entered as input, e.g. 'bluE tEAm'.
Is it possible to write something to "block" the re.I flag for a portion of a phrase? In pseudo-code, I imagine generating something like '[B]lue Team|[F]inal Match'.
Note: My primary goal is not, for example, calculating frequency of the input phrases in the text but extracting and analyzing the text fragments between or around them.
I would use re.I and modify the list-comp to:
l = [ t for t in titre.split(text) if titre.search(t) and t[0].isupper() ]
Regular expressions in older Pythons won't let you apply the ignore-case flag to just a region of the pattern (Python 3.6+ added scoped inline flags such as (?i:...); see the sketch below). However, you can generate a new version of the text in which every character is lower-cased except the first one of each word:
new_text = ' '.join([word[0] + word[1:].lower() for word in text.split()])
This way, a regular expression without the ignore flag will match while taking casing into account only for the first character of each word.
How about modifying the input so that it is in the correct case before you use it in the regular expression?
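For completeness, the OP's pseudo-code idea can be built directly: keep each word's first letter case-sensitive and wrap the remainder in a scoped inline flag (?i:...), which Python's re supports since 3.6. A sketch:
import re

titles = ['Blue Team', 'Final Match', 'Best Player']
text = ('In today Final match, The Best player is Joe from the Blue Team '
        'and the second best player is Jack from the Red team.')

def title_pattern(phrase):
    # The first letter of each word must be uppercase in the text; the rest
    # of the word matches case-insensitively via (?i:...), so it does not
    # matter how the user cased the input (e.g. 'bluE tEAm').
    parts = [word[0].upper() + '(?i:' + re.escape(word[1:]) + ')'
             for word in phrase.split()]
    return r'\s+'.join(parts)

titre = re.compile('|'.join(title_pattern(t) for t in titles))
print(titre.findall(text))  # ['Blue Team'] -- 'Final match' and 'Best player' are rejected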
