Sentiment analysis Python tokenization

My problem is the following: I want to do a sentiment analysis on Italian tweets, and I would like to tokenise and lemmatise the Italian text in order to find new analysis dimensions for my thesis. The problem is that I would like to tokenise my hashtags, splitting the compound ones as well. For example, if I have #nogreenpass, I would like to have it also without the # symbol, because the sentiment of the phrase is better understood with all the words of the text. How could I do this? I tried with spaCy, but I got no results. I created a function to clean my text, but I can't get the hashtags the way I want. I'm using this code:
import re
import spacy
from spacy.tokenizer import Tokenizer
nlp = spacy.load('it_core_news_lg')
# Clean_text function
def clean_text(text):
    text = str(text).lower()
    doc = nlp(text)
    text = re.sub(r'#[a-z0-9]+', str(' '.join(t in nlp(doc))), str(text))  # broken attempt to re-tokenise hashtags
    text = re.sub(r'\n', ' ', str(text))  # Remove \n
    text = re.sub(r'@[A-Za-z0-9]+', '<user>', str(text))  # Remove and replace @mention
    text = re.sub(r'RT[\s]+', '', str(text))  # Remove RT
    text = re.sub(r'https?:\/\/\S+', '<url>', str(text))  # Remove and replace links
    return text
For example, here I don't know how to add the first < and the last > to replace the # symbol, and the tokenisation process doesn't work as I would like. Thank you for the time spent on me and for the patience. I hope to become stronger in Jupyter analysis and Python coding so I can help with your problems too. Thank you guys!

You can tweak your current clean_text with
def clean_text(text):
    text = str(text).lower()
    text = re.sub(r'#(\w+)', r'<\1>', text)
    text = re.sub(r'\n', ' ', text)  # Remove \n
    text = re.sub(r'@[A-Za-z0-9]+', '<user>', text)  # Remove and replace @mention
    text = re.sub(r'RT\s+', '', text)  # Remove RT
    text = re.sub(r'https?://\S+\b/?', '<url>', text)  # Remove and replace links
    return text
See the Python demo online.
The following line of code:
print(clean_text("@Marcorossi hanno ragione I #novax htt"+"p://www.asfag.com/"))
will yield
<user> hanno ragione i <novax> <url>
Note there is no easy way to split a glued string into its constituent words. See How to split text without spaces into list of words for ideas on how to do that.
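As a rough illustration of the dictionary-based idea, here is a minimal sketch; the tiny word list is made up for the example, and a real run would need a proper Italian lexicon (or a package such as wordninja, which ships an English word list):

KNOWN_WORDS = {'no', 'green', 'pass', 'governo', 'ladro'}  # toy lexicon, not a real word list

def split_hashtag(tag, words=KNOWN_WORDS, max_len=20):
    tag = tag.lstrip('#').lower()
    # dp[i] holds a list of words covering tag[:i], or None if no split was found
    dp = [None] * (len(tag) + 1)
    dp[0] = []
    for i in range(1, len(tag) + 1):
        for j in range(max(0, i - max_len), i):
            if dp[j] is not None and tag[j:i] in words:
                dp[i] = dp[j] + [tag[j:i]]
                break
    return dp[len(tag)] or [tag]  # fall back to the whole tag if no split exists

print(split_hashtag('#nogreenpass'))  # ['no', 'green', 'pass']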

Related

Python to Find-replace a string and Create Two Paragraphs Before String in Word Document

I have a VBA Macro. In that, I have
.Find Text = 'Pollution'
.Replacement Text = '^p^pChemical'
Here, '^p^pChemical' means: replace the word Pollution with Chemical and create two empty paragraphs before the word Chemical.
(Before and after screenshots omitted.)
Have you noticed that the word Pollution has been replaced with Chemical and two empty paragraphs precede it? This is how I want it in Python.
My Code so far:
import docx
from docx import Document

document = Document('Example.docx')
for Paragraph in document.paragraphs:
    if 'Pollution' in paragraph:
        replace(Pollution, Chemical)
        document.add_paragraph(before('Chemical'))
        document.add_paragraph(before('Chemical'))
I want to open a word document to find the word, replace it with another word, and create two empty paragraphs before the replaced word.
You can search through each paragraph to find the word of interest, and call insert_paragraph_before to add the new elements:
def replace(doc, target, replacement):
    for par in list(doc.paragraphs):
        text = par.text
        while (index := text.find(target)) != -1:
            par.insert_paragraph_before(text[:index].rstrip())
            par.insert_paragraph_before('')
            text = replacement + text[index + len(target):]
            par.text = text
list(doc.paragraphs) makes a copy of the list, so that the iteration is not thrown off when you insert elements.
Call this function as many times as you need to replace whatever words you have.
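For example, a possible way to call it on the question's Example.docx (the output file name here is just a placeholder):

from docx import Document

doc = Document('Example.docx')
replace(doc, 'Pollution', 'Chemical')  # inserts two paragraphs before each replaced word
doc.save('Example_edited.docx')        # save under a new name to keep the original intact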
This will take the text from your document, replace the instances of the word Pollution with Chemical and add paragraphs in between, but it doesn't change the first document; instead, it creates a copy. This is probably the safer route to go anyway.
import re
from docx import Document

ref = {"Pollution": "Chemicals", "Ocean": "Sea", "Speaker": "Magnet"}

def get_old_text():
    doc1 = Document('demo.docx')
    fullText = []
    for para in doc1.paragraphs:
        fullText.append(para.text)
    text = '\n'.join(fullText)
    return text

def create_new_document(ref, text):
    doc2 = Document()
    lines = text.split('\n')
    for line in lines:
        for k in ref:
            if k.lower() in line.lower():
                parts = re.split(f'{k}', line, flags=re.I)
                doc2.add_paragraph(parts[0])
                for part in parts[1:]:
                    doc2.add_paragraph('')
                    doc2.add_paragraph('')
                    doc2.add_paragraph(ref[k] + " " + part)
    doc2.save('demo_new.docx')  # save as a copy so the original demo.docx stays unchanged

text = get_old_text()
create_new_document(ref, text)
You need to use \n for new line. Using re should work like so:
import re
before = "The term Pollution means the manifestation of any unsolicited foregin substance in something. When we talk about pollution on earth, we refer to the contamination that is happening of the natural resources by various pollutants"
pattern = re.compile("pollution", re.IGNORECASE)
after = pattern.sub("\n\nChemical", before)
print(after)
Which will output:
The term

Chemical means the manifestation of any unsolicited foregin substance in something. When we talk about

Chemical on earth, we refer to the contamination that is happening of the natural resources by various pollutants

Scraping a sentence across many lines | Recursive error unresolved

Goal: if pdf line contains sub-string, then copy entire sentence (across multiple lines).
I am able to print() the line the phrase appears in.
Now, once I find this line, I want to iterate backwards until I find a sentence terminator (. ! ?) from the previous sentence, and then iterate forwards again until the next sentence terminator.
This is so that I can print() the entire sentence the phrase belongs to.
However, I get a RecursionError, with scrape_sentence() stuck running infinitely.
Jupyter Notebook:
# pip install PyPDF2
# pip install pdfplumber
# ---
# import re
import glob
import PyPDF2
import pdfplumber
# ---
phrase = "Responsible Care Company"
# SENTENCE_REGEX = re.pattern('^[A-Z][^?!.]*[?.!]$')
def scrape_sentence(sentence, lines, index, phrase):
    if '.' in lines[index] or '!' in lines[index] or '?' in lines[index]:
        return sentence.replace('\n', '').strip()
    sentence = scrape_sentence(lines[index-1] + sentence, lines, index-1, phrase)  # previous line
    sentence = scrape_sentence(sentence + lines[index+1], lines, index+1, phrase)  # following line
    sentence = sentence.replace('!', '.')
    sentence = sentence.replace('?', '.')
    sentence = sentence.split('.')
    sentence = [s for s in sentence if phrase in s]
    sentence = sentence[0]  # first occurrence
    print(sentence)
    return sentence
# ---
with pdfplumber.open('../data/gri/reports/GPIC_Sustainability_Report_2020__-_40_Years_of_Sustainable_Success.pdf') as opened_pdf:
    for page in opened_pdf.pages:
        text = page.extract_text()
        lines = text.split('\n')
        i = 0
        sentence = ''
        while i < len(lines):
            if 'and Knowledge of Individuals; Behaviours; Attitudes, Perception ' in lines[i]:
                sentence = scrape_sentence('', lines, i) # !
                print(sentence) # !
            i += 1
Output:
connection and the linkage to the relevant UN’s 17 SDGs.and Leadership. We have long realized and recognized that there
Phrase:
Responsible Care Company
Sentence (across multiple lines):
"GPIC is a Responsible Care Company certified for RC 14001
since July 2010."
PDF (pg. 2).
Please let me know if there is anything else I can add to the post.
I solved this problem here by removing any recursion from scrape_sentence().
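For reference, a rough non-recursive sketch of that idea (the function and variable names here are mine, not from the original post): walk backwards to the line holding the previous terminator, forwards to the line holding the next one, then keep the sentence containing the phrase:

import re

def scrape_sentence(lines, index, phrase):
    # Walk backwards until the previous line contains a sentence terminator.
    start = index
    while start > 0 and not any(t in lines[start - 1] for t in '.!?'):
        start -= 1
    # Walk forwards until the current line contains a sentence terminator.
    end = index
    while end < len(lines) - 1 and not any(t in lines[end] for t in '.!?'):
        end += 1
    block = ' '.join(lines[max(start - 1, 0):end + 1])
    # Split into sentences and keep the one that holds the phrase.
    for s in re.split(r'[.!?]', block):
        if phrase in s:
            return ' '.join(s.split())
    return ''

Inside the page loop this would be called as sentence = scrape_sentence(lines, i, phrase) instead of the recursive version.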

Regex not specific enough

So I wrote a program for my Kindle e-reader that searches my highlights and deletes repetitive text (it's usually information about the book title, author, page number, etc.). I thought it was functional, but sometimes there would randomly be periods (.) on certain lines of the output. At first I thought the program was just buggy, but then I realized that the regex I'm using to match the book's title and author was also matching any sentence that ended in brackets.
This is the code for the regex that I'm using to detect the book's title and author
titleRegex = re.compile('(.+)\((.+)\)')
Example
Desired book title and author match: Book title (Author name)
What would also get matched: I like apples because they are green (they are sometimes red as well).
In this case it would delete everything and leave just the period at the end of the sentence. This is obviously not ideal because it deletes the text I highlighted
Here is the unformatted text file that goes into my program
The program works by finding all of the matches for the regexes I wrote, looping through those matches and one by one replacing them with empty strings.
Would there be any way to make my title regex more specific so that it only picks up book titles and authors, and not full sentences that end in brackets? If not, what steps would I have to take to restructure this program?
I've attached my code to the bottom of this post. I would greatly appreciate any help as I'm a total coding newbie. Thanks :)
import re
titleRegex = re.compile('(.+)\((.+)\)')
titleRegex2 = re.compile(r'\ufeff (.+)\((.+)\)')
infoRegex = re.compile(r'(.) ([a-zA-Z]+) (Highlight|Bookmark|Note) ([a-zA-Z]+) ([a-zA-Z]+) ([0-9]+) (\|)')
locationRegex = re.compile(r' Location (\d+)(-\d+)? (\|)')
dateRegex = re.compile(r'([a-zA-Z]+) ([a-zA-Z]+) ([a-zA-Z]+), ([a-zA-Z]+) ([0-9]+), ([0-9]+)')
timeRegex = re.compile(r'([0-9]+):([0-9]+):([0-9]+) (AM|PM)')
newlineRegex = re.compile(r'\n')
sepRegex = re.compile('==========')
regexList = [titleRegex, titleRegex2, infoRegex, locationRegex, dateRegex, timeRegex, sepRegex, newlineRegex]
string = open("/Users/devinnagami/myclippings.txt").read()
for x in range(len(regexList)):
    newString = re.sub(regexList[x], ' ', string)
    string = newString
finalText = newString.split(' ')
with open('booknotes.txt', 'w') as f:
    for item in finalText:
        f.write('%s\n' % item)
There isn't enough information to tell if "Book title (Book Author)" is different from something like "I like Books (Good Ones)" without context. Thankfully, the text you showed has plenty of context. Instead of creating several different regular expressions, you can combine them into one expression to encode that context.
For instance:
quoteInfoRegex = re.compile(
    r"^=+\n(?P<title>.*?) \((?P<author>.*?)\)\n" +
    r"- Your Highlight on page (?P<page>[\d]+) \| Location (?P<location>[\d-]+) \| Added on (?P<added>.*?)\n" +
    r"\n" +
    r"(?P<quote>.*?)\n", flags=re.MULTILINE)

for m in quoteInfoRegex.finditer(data):
    print(m.groupdict())
This will pull out each line of the text, and parse it, knowing that the book title is the first line after the equals, and the quote itself is below that.
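A rough usage sketch, reusing the file names from the question (writing only the quote text to booknotes.txt is an assumption about the desired output):

data = open('/Users/devinnagami/myclippings.txt').read()

with open('booknotes.txt', 'w') as f:
    for m in quoteInfoRegex.finditer(data):
        f.write(m.group('quote') + '\n')  # keep just the highlighted text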

Searching text for multiple phrases python

I am currently attempting to search through multiple pdfs for certain pieces of equipment. I have figured out how to parse the pdf files in python along with the equipment list. I am currently having trouble with the actual search function. The best way I found to do it online was to tokenize the text and then search through it with the keywords (code below), but unfortunately some of the names of the equipment are multiple words long, causing those names to be tokenized into meaningless words like "blue" and "evaporate", which are found many times in the text and thus saturate the returns. The only way I have thought of to deal with this is to only look for unique words in the names of the equipment and remove the more common ones, but I was wondering if there was a more elegant solution, as even the unique words have a tendency to produce multiple false returns per document.
Mainly, I am looking for a way to search through a text file for phrases of words such as "Blue Transmitter 3", without parsing that phrase into ["Blue", "Transmitter", "3"].
here is what I have so far
import PyPDF2
import nltk
from nltk import word_tokenize
from nltk.corpus import stopwords
import re
#open up pdf and get text
pdfName = 'example.pdf'
read_pdf = PyPDF2.PdfFileReader(pdfName)
text = ""
for i in range(read_pdf.getNumPages()):
    page = read_pdf.getPage(i)
    text += "Page No - " + str(1 + read_pdf.getPageNumber(page)) + "\n"
    page_content = page.extractText()
    text += page_content + "\n"
#tokenize pdf text
tokens = word_tokenize(text)
punctuations = ['(',')',';',':','[',']',',','.']
stop_words = stopwords.words('english')
keywords = [word for word in tokens if not word in stop_words and not word in punctuations]
#take out the endline symbol and join the whole equipment data set into one long string
lines = [line.rstrip('\n') for line in open('equipment.txt')]
totalEquip = " ".join(lines)
tokens = word_tokenize(totalEquip)
trash = ['Black', 'furnace', 'Evaporation', 'Evaporator', '500', 'Chamber', 'A']
searchWords = [word for word in tokens if not word in stop_words and not word in punctuations and not word in trash]
for i in searchWords:
    for word in splitKeys:
        if i.lower() in word.lower():
            print(i)
            print(word + "\n")
Any help or ideas y'all might have would be much appreciated.
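One possible direction, sketched below: skip the tokenization step and search the raw text for each full equipment name with a regex built from the phrase (text is the full PDF text assembled earlier in the question's code; the word-boundary and whitespace handling are assumptions):

import re

# Equipment names, one per line, as in the question's equipment.txt
with open('equipment.txt') as f:
    equipment = [line.strip() for line in f if line.strip()]

for name in equipment:
    # \b keeps "Blue Transmitter 3" from matching inside longer words;
    # \s+ tolerates line breaks and repeated spaces between the words.
    pattern = re.compile(r'\b' + r'\s+'.join(map(re.escape, name.split())) + r'\b', re.IGNORECASE)
    hits = pattern.findall(text)
    if hits:
        print(name, '-', len(hits), 'occurrence(s)')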

Python to extract the @user and url link in twitter text data with regex

There is a list of twitter text strings, for example the following data (actually there is a large amount of text, not just these items). I want to extract all the user names after @ and the url links in the twitter text, for example: galaxy5univ and the url link.
tweet_text = ['@galaxy5univ I like you',
              "RT @BestOfGalaxies: Let's sit under the stars ...",
              '@jonghyun__bot .........((thanks)',
              'RT @yosizo: thanks.ddddd <https://yahoo.com>',
              'RT @LDH_3_yui: #fam, ccccc https://msn.news.com']
My code:
import re
pu = re.compile(r'http\S+')
pn = re.compile(r'@(\S+)')
for row in tweet_text:
    text = pu.findall(row)
    name = pn.findall(row)
    print("url: ", text)
    print("name: ", name)
After testing the code on a large amount of twitter data, I have found that my two patterns for the url and the name are both wrong (although they are right for a few of the twitter texts). Do you have any documents or links about extracting names and urls from twitter text in the case of large amounts of twitter data?
If you have any advice about extracting names and urls from twitter data, please tell me, thanks!
Note that your pn = re.compile(r'@(\S+)') regex will capture any 1+ non-whitespace characters after @.
To exclude matching :, you need to convert the shorthand \S class to [^\s] negated character class equivalent, and add : to it:
pn = re.compile(r'@([^\s:]+)')
Now, it will stop capturing non-whitespace symbols before the first :. See the regex demo.
If you need to capture until the last :, you can just add : after the capturing group: pn = re.compile(r'@(\S+):').
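For instance, a quick illustrative check (not from the original answer):

import re
print(re.findall(r'@([^\s:]+)', 'RT @yosizo: thanks'))  # ['yosizo']
print(re.findall(r'@(\S+):', 'RT @yosizo: thanks'))     # ['yosizo']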
As for a URL matching regex, there are many on the Web, just choose the one that works best for you.
Here is an example code:
import re
p = re.compile(r'@([^\s:]+)')
test_str = "@galaxy5univ I like you\nRT @BestOfGalaxies: Let's sit under the stars ...\n@jonghyun__bot .........((thanks)\nRT @yosizo: thanks.ddddd <https://yahoo.com>\nRT @LDH_3_yui: #fam, ccccc https://msn.news.com"
print(p.findall(test_str))
p2 = re.compile(r'(?:http|ftp|https)://(?:[\w_-]+(?:(?:\.[\w_-]+)+))(?:[\w.,#?^=%&:/~+#-]*[\w#?^=%&/~+#-])?')
print(p2.findall(test_str))
# => ['galaxy5univ', 'BestOfGalaxies', 'jonghyun__bot', 'yosizo', 'LDH_3_yui']
# => ['https://yahoo.com', 'https://msn.news.com']
If the usernames don't contain special chars, you can use:
@([\w]+)
See Live demo
