I have a list of very long strings, and I want to know if there's any way to break each string after a certain number of words, like in the image at the bottom.
List = ["End extreme poverty in all forms by 2030",
        "End hunger, achieve food security and improved nutrition and promote sustainable agriculture",
        "Ensure healthy lives and promote well-being for all at all ages",
        "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all",
        "Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all"]
print(List)
Out:
["End extreme poverty in all forms by 2030",
 "End hunger, achieve food security and improved
 nutrition and promote sustainable agriculture",
 "Ensure healthy lives and promote well-being for all at all ages",
 "Ensure inclusive and equitable quality education and promote
 lifelong learning opportunities for all",
 "Promote sustained, inclusive and sustainable economic growth,
 full and productive employment and decent work for all"]
The final result should look roughly like the image.
You can use textwrap:
import textwrap
print(textwrap.fill(List[1], 50))
End hunger, achieve food security and improved
nutrition and promote sustainable agriculture
If you want to break it in terms of number of words, you can:
Split the string into a list of words:
my_list = my_string.split()
Then you can break it into K chunks using numpy:
import numpy as np
my_chunks = np.array_split(my_list, K)  # K is the number of chunks
Then, to get a chunk back into a string, you can do:
my_string = " ".join(my_chunk[0])
Or to get all of them to strings, you can do:
my_strings = list(map(" ".join, my_chunks))
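Putting those pieces together, here's a small sketch that wraps every string in the list after a fixed number of words; the wrap_by_words helper and the 8-words-per-line choice are my own, not part of any library:

import numpy as np

def wrap_by_words(text, words_per_line=8):
    # Split into words, chunk with numpy, then rejoin each chunk on its own line.
    words = text.split()
    n_chunks = max(1, -(-len(words) // words_per_line))  # ceiling division
    chunks = np.array_split(words, n_chunks)
    return "\n".join(" ".join(chunk) for chunk in chunks)

# List here is the list of strings from the question.
for sentence in List:
    print(wrap_by_words(sentence, 8), end="\n\n")

np.array_split balances the chunks, so each line ends up with at most words_per_line words, spread roughly evenly across lines.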
I want to create a deduplication process for my database.
I want to measure cosine similarity scores with Python's sklearn library between new texts and texts that are already in the database.
I want to add only documents that have a cosine similarity score of less than 0.90.
This is my code:
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
list_of_texts_in_database = ["More now on the UK prime minister’s plan to impose sanctions against Russia, after it sent troops into eastern Ukraine.",
"UK ministers say sanctions could target companies and individuals linked to the Russian government.",
"Boris Johnson also says the UK could limit Russian firms ability to raise capital on London's markets.",
"He has suggested Western allies are looking at stopping Russian companies trading in pounds and dollars.",
"Other measures Western nations could impose include restricting exports to Russia, or excluding it from the Swift financial messaging service.",
"The rebels and Ukrainian military have been locked for years in a bitter stalemate, along a frontline called the line of control",
"A big question in the coming days, is going to be whether Russia also recognises as independent some of the Donetsk and Luhansk regions that are still under Ukrainian government control",
"That could lead to a major escalation in conflict."]
list_of_new_texts = ["This is a totaly new document that needs to be added into the database one way or another.",
"Boris Johnson also says the UK could limit Russian firm ability to raise capital on London's market.",
"Other measure Western nation can impose include restricting export to Russia, or excluding from the Swift financial messaging services.",
"UK minister say sanctions could target companies and individuals linked to the Russian government.",
"That could lead to a major escalation in conflict."]
vectorizer = TfidfVectorizer(lowercase=True, analyzer='word', stop_words = None, ngram_range=(1, 1))
list_of_texts_in_database_tfidf = vectorizer.fit_transform(list_of_texts_in_database)
list_of_new_texts_tfidf = vectorizer.transform(list_of_new_texts)
cosineSimilarities = cosine_similarity(list_of_new_texts_tfidf, list_of_texts_in_database_tfidf)
print(cosineSimilarities)
This code works well, but I do not know how to map the results (how to get the texts that have a similarity score of less than 0.90).
My suggestion would be as follows: you only add those texts with a score less than (or equal to) 0.9.
import numpy as np
idx = np.where((cosineSimilarities <= 0.9).all(axis=1))
Then you have the indices of the new texts in list_of_new_texts that do not have a corresponding text with a score of > 0.9 in the already existing list list_of_texts_in_database.
Combining them you can do as follows (although somebody else might have a cleaner method for this...)
print(
    list_of_texts_in_database + list(np.array(list_of_new_texts)[idx[0]])
)
Output:
['More now on the UK prime minister’s plan to impose sanctions against Russia, after it sent troops into eastern Ukraine.',
'UK ministers say sanctions could target companies and individuals linked to the Russian government.',
"Boris Johnson also says the UK could limit Russian firms ability to raise capital on London's markets.",
'He has suggested Western allies are looking at stopping Russian companies trading in pounds and dollars.',
'Other measures Western nations could impose include restricting exports to Russia, or excluding it from the Swift financial messaging service.',
'The rebels and Ukrainian military have been locked for years in a bitter stalemate, along a frontline called the line of control',
'A big question in the coming days, is going to be whether Russia also recognises as independent some of the Donetsk and Luhansk regions that are still under Ukrainian government control',
'That could lead to a major escalation in conflict.',
'This is a totaly new document that needs to be added into the database one way or another.',
'Other measure Western nation can impose include restricting export to Russia, or excluding from the Swift financial messaging services.',
'UK minister say sanctions could target companies and individuals linked to the Russian government.']
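If you also want to see why a particular new text was kept or rejected, you can map each one to its closest match in the database. A minimal sketch using the variables above (the 0.9 threshold matches the question):

# For each new text, find the most similar existing text and its score.
best_db_idx = cosineSimilarities.argmax(axis=1)
best_score = cosineSimilarities.max(axis=1)

for text, db_idx, score in zip(list_of_new_texts, best_db_idx, best_score):
    status = "duplicate" if score > 0.9 else "added"
    print(f"{status} (score={score:.2f}): {text}")
    print(f"    closest existing text: {list_of_texts_in_database[db_idx]}")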
Why don't you work within a DataFrame?
import pandas as pd
d = {'old_text': list_of_texts_in_database[:5],
     'new_text': list_of_new_texts,
     'old_emb': list_of_texts_in_database_tfidf[:5],
     'new_emb': list_of_new_texts_tfidf}
df = pd.DataFrame(data=d)
df['score'] = df.apply(lambda row: cosine_similarity(row['old_emb'], row['new_emb'])[0][0], axis=1)
df = df.loc[df.score > 0.9, 'score']
df.head()
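Note that this pairs the texts positionally, so it only scores new_text[i] against old_text[i]. If you want every new text scored against every database text while staying in pandas, one possible sketch is to wrap the full similarity matrix from the question (the labels here are my own choice):

import pandas as pd

# Rows are new texts, columns are database texts.
sim_df = pd.DataFrame(cosineSimilarities,
                      index=list_of_new_texts,
                      columns=list_of_texts_in_database)

# Keep only the new texts whose best match is still below the threshold.
texts_to_add = sim_df[sim_df.max(axis=1) < 0.9].index.tolist()
print(texts_to_add)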
I have a pandas DataFrame with a column of text data. I want to extract all unique acronyms and abbreviations in that text column.
So far I have a function to extract all acronyms and abbreviations from a given text.
import re

def extract_acronyms_abbreviations(text):
    eaa = {}
    for match in re.finditer(r"\((.*?)\)", text):
        start_index = match.start()
        abbr = match.group(1)
        size = len(abbr)
        # Take as many words before the parentheses as there are letters in the acronym.
        words = text[:start_index].split()[-size:]
        definition = " ".join(words)
        eaa[abbr] = definition
    return eaa
extract_acronyms_abbreviations(a)
{'FHH': 'family health history', 'NP': 'nurse practitioner'}
I want to apply/extract all unique acronyms and abbreviations from the text column.
Sample Data:
s = """The MLCommons Association, an open engineering consortium dedicated to improving machine learning for everyone, today announced the general availability of the People's Speech Dataset and the Multilingual Spoken Words Corpus (MSWC). This trail-blazing and permissively licensed datasets advance innovation in machine learning research and commercial applications. Also today, the MLCommons Association is issuing a call for participation in the new DataPerf benchmark suite, which measures and encourages innovation in data-centric AI."""
k = """The MLCommons Association is a firm proponent of Data-Centric AI (DCAI), the discipline of systematically engineering the data for AI systems by developing efficient software tools and engineering practices to make dataset creation and curation easier. Our open datasets and tools like DataPerf concretely support the DCAI movement and drive machine learning innovation."""
j = """The key global provider of sustainable packaging solutions has now taken a significant step towards reaching these ambitions by signing two 10-year virtual Power Purchase Agreements (VPPA) with global renewable energy developer BayWa r.e covering its operations in Europe. The agreements form the largest solar VPPA for the packaging industry in Europe, as well as the first major solar VPPA by a Finnish company."""
a = """Although family health history (FHH) is commonly accepted as an important risk factor for common, chronic diseases, it is rarely considered by a nurse practitioner (NP)."""
import pandas as pd
data = {"text":[s,k,j,a,s,k,j]}
df = pd.DataFrame(data)
Desired output
{'MSWC': 'Multilingual Spoken Words Corpus',
'DCAI': 'proponent of Data-Centric AI',
'VPPA': 'virtual Power Purchase Agreements',
'NP': 'nurse practitioner',
'FHH': 'family health history'}
Let's say df['text'] contains the text data you want to work with.
df["acronyms"] = df.apply(extract_acronyms_abbreviations)
# It will create a new columns containing dictionary return by your function.
Now create a master dict like this:
master_dict = dict()
for d in df["acronyms"].values:
master_dict.update(d)
print(master_dict)
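Equivalently, you can skip the intermediate column and build the master dict in one pass; this is just a compact variant of the loop above:

master_dict = {abbr: definition
               for d in df["text"].apply(extract_acronyms_abbreviations)
               for abbr, definition in d.items()}
print(master_dict)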
I want to detect multiple types of flag patterns in stock market data; the first one I want to identify is the Bull Flag Pattern. I have tried some formulas, but they all missed the point and gave me a lot of stock names that did not have the pattern.
My most recent attempt was to find the continuous rise and then check that the following values lie around the mean of that rise.
I'm also wondering whether plotting this data with matplotlib or plotly and then applying machine learning to it would be a viable solution.
The code to get the data is below:
from pprint import pprint
from nsepy import get_history
from datetime import date
from datetime import datetime, timedelta
import matplotlib
from nsetools import Nse

nse = Nse()
old_date = date.today() - timedelta(days=30)
for stock in nse.get_stock_codes():
    print("Stock", stock)
    stock_data = get_history(symbol=stock,
                             start=date(old_date.year, old_date.month, old_date.day),
                             end=date(datetime.now().year, datetime.now().month, datetime.now().day))
Any help will be useful. Thanks in advance.
Bull flag pattern matcher for your pattern day trading code
Pseudocode:
Get the difference between the min() and max() close price over the last n=20 time periods, here called flag_width. Get the difference between the min() and max() close price over the last m=30 time periods, here called pole_height.
When the relative percentage gain between pole_height and flag_width is over some large threshold, like thresh=0.90, you can say there is a tight flag over the last n=20 time periods and a tall flagpole over periods -30 to -20.
Another name for this could be: "price data elbow" or "hockey stick shape".
MACD does a kind of 12/26 variation on this approach, but using 9-, 12- and 26-day exponential moving averages.
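For reference, the MACD line mentioned here is easy to compute with pandas exponential moving averages; a sketch, assuming close is a pandas Series of closing prices:

import pandas as pd

def macd(close: pd.Series):
    # Classic 12/26 EMA difference with a 9-period signal line.
    ema_fast = close.ewm(span=12, adjust=False).mean()
    ema_slow = close.ewm(span=26, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal_line = macd_line.ewm(span=9, adjust=False).mean()
    return macd_line, signal_line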
Code gist:
# movingMax returns the maximum price over the last t time periods,
# movingMin the minimum (hypothetical helpers; see the runnable sketch below).
highest_close_of_flag = movingMax(closeVector, 20)
lowest_close_of_flag = movingMin(closeVector, 20)
highest_close_of_pole = movingMax(closeVector, 30)
lowest_close_of_pole = movingMin(closeVector, 30)

# We want the pole to be much taller than the flag is wide.
flag_width = highest_close_of_flag - lowest_close_of_flag
pole_height = highest_close_of_pole - lowest_close_of_pole

# ((new - old) / old) yields the percentage gain between them.
bull_flag = (pole_height - flag_width) / flag_width
# Filter out bull flags whose flagpole tops go too high over the flapping flag.
bull_flag -= (highest_close_of_pole - highest_close_of_flag) / highest_close_of_flag

thresh = 0.9
bull_flag_trigger = bull_flag > thresh
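Here is roughly what that pseudocode looks like as runnable pandas, assuming close is a Series of closing prices; movingMax/movingMin become rolling max/min, and the window sizes mirror the pseudocode:

import pandas as pd

def bull_flag_trigger(close: pd.Series, flag_n=20, pole_m=30, thresh=0.9) -> pd.Series:
    # Tight flag: close-price range over the last flag_n periods.
    flag_width = close.rolling(flag_n).max() - close.rolling(flag_n).min()
    # Tall pole: close-price range over the last pole_m periods.
    pole_height = close.rolling(pole_m).max() - close.rolling(pole_m).min()

    # Relative gain of the pole over the flag's width.
    # Note: flag_width can be zero for a perfectly flat flag; guard against
    # division by zero if your data can contain that.
    bull_flag = (pole_height - flag_width) / flag_width
    # Penalise poles whose top sits far above the flapping flag.
    bull_flag -= (close.rolling(pole_m).max() - close.rolling(flag_n).max()) \
                 / close.rolling(flag_n).max()
    return bull_flag > thresh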
bull_flag_trigger interpretation
A whale (relatively speaking) bought a green-candle flagpole by aggressively hitting bids, or covering a prior naked short position via market orders, from time period -30 to -20. The fish then fought over the new, higher price in a narrow horizontal band from time period -20 to 0, at the top of the range.
The bull flag pattern is one of the most popular false flags precisely because it's so good: the other end of your trade spent money to paint that structure so that you would happily scoop up their unwanted distribution of scrip at a much higher price, and now you're bag-holding a scrip at an unrealistic price that nobody wants to buy. You are playing a game of Chess/Checkers against very strong AI designed by corporations that pull trillions of dollars per year out of this constant-sum game, and your losses are their gains.
Drawbacks to this approach:
This approach doesn't take into account other desirable properties of a bull flag, such as the straightness of the pole, high volume in the pole or flag, the gain in trading range in the pole, the squareness/triangularness/resonant sloped-ness of the flag, or ten other variations on what causes people with lots of money to pattern day trade on the appearance of this arbitrary structure. This is financial advice: you will lose all of your money to other people in skyscrapers who can write better AI code and who get $1*10^9 annual pay packages in exchange for isolated alpha with math proofs and code demos.
The Bull Flag Pattern is a pattern that is visible after plotting the chart; it is hard to find at the data level.
In my opinion, finding the pattern in an image using deep learning (object detection) is a better choice. By doing so you can also find other types of patterns, such as the Bearish Flag.
Here is my code
import json

data = []
with open("review.json") as f:
    for line in f:
        data.append(json.loads(line))

lst_string = []
lst_num = []
for i in range(len(data)):
    if data[i]["stars"] == 5.0:
        x = data[i]["text"]
        for word in x.split():
            if word in lst_string:
                lst_num[lst_string.index(word)] += 1
            else:
                lst_string.append(word)
                lst_num.append(1)

result = set(zip(lst_string, lst_num))
print(result)
with open("set.txt", "w") as g:
    g.write(str(result))
I'm trying to build a set of all words in reviews that were given 5 stars, pulled from a JSON file formatted like:
{"review_id":"Q1sbwvVQXV2734tPgoKj4Q","user_id":"hG7b0MtEbXx5QzbzE6C_VA","business_id":"ujmEBvifdJM6h6RLv4wQIg","stars":1.0,"useful":6,"funny":1,"cool":0,"text":"Total bill for this horrible service? Over $8Gs. These crooks actually had the nerve to charge us $69 for 3 pills. I checked online the pills can be had for 19 cents EACH! Avoid Hospital ERs at all costs.","date":"2013-05-07 04:34:36"}
{"review_id":"GJXCdrto3ASJOqKeVWPi6Q","user_id":"yXQM5uF2jS6es16SJzNHfg","business_id":"NZnhc2sEQy3RmzKTZnqtwQ","stars":1.0,"useful":0,"funny":0,"cool":0,"text":"I *adore* Travis at the Hard Rock's new Kelly Cardenas Salon! I'm always a fan of a great blowout and no stranger to the chains that offer this service; however, Travis has taken the flawless blowout to a whole new level! \n\nTravis's greets you with his perfectly green swoosh in his otherwise perfectly styled black hair and a Vegas-worthy rockstar outfit. Next comes the most relaxing and incredible shampoo -- where you get a full head message that could cure even the very worst migraine in minutes --- and the scented shampoo room. Travis has freakishly strong fingers (in a good way) and use the perfect amount of pressure. That was superb! Then starts the glorious blowout... where not one, not two, but THREE people were involved in doing the best round-brush action my hair has ever seen. The team of stylists clearly gets along extremely well, as it's evident from the way they talk to and help one another that it's really genuine and not some corporate requirement. It was so much fun to be there! \n\nNext Travis started with the flat iron. The way he flipped his wrist to get volume all around without over-doing it and making me look like a Texas pagent girl was admirable. It's also worth noting that he didn't fry my hair -- something that I've had happen before with less skilled stylists. At the end of the blowout & style my hair was perfectly bouncey and looked terrific. The only thing better? That this awesome blowout lasted for days! \n\nTravis, I will see you every single time I'm out in Vegas. You make me feel beauuuutiful!","date":"2017-01-14 21:30:33"}
{"review_id":"2TzJjDVDEuAW6MR5Vuc1ug","user_id":"n6-Gk65cPZL6Uz8qRm3NYw","business_id":"WTqjgwHlXbSFevF32_DJVw","stars":1.0,"useful":3,"funny":0,"cool":0,"text":"I have to say that this office really has it together, they are so organized and friendly! Dr. J. Phillipp is a great dentist, very friendly and professional. The dental assistants that helped in my procedure were amazing, Jewel and Bailey helped me to feel comfortable! I don't have dental insurance, but they have this insurance through their office you can purchase for $80 something a year and this gave me 25% off all of my dental work, plus they helped me get signed up for care credit which I knew nothing about before this visit! I highly recommend this office for the nice synergy the whole office has!","date":"2016-11-09 20:09:03"}
{"review_id":"yi0R0Ugj_xUx_Nek0-_Qig","user_id":"dacAIZ6fTM6mqwW5uxkskg","business_id":"ikCg8xy5JIg_NGPx-MSIDA","stars":1.0,"useful":0,"funny":0,"cool":0,"text":"Went in for a lunch. Steak sandwich was delicious, and the Caesar salad had an absolutely delicious dressing, with a perfect amount of dressing, and distributed perfectly across each leaf. I know I'm going on about the salad ... But it was perfect.\n\nDrink prices were pretty good.\n\nThe Server, Dawn, was friendly and accommodating. Very happy with her.\n\nIn summation, a great pub experience. Would go again!","date":"2018-01-09 20:56:38"}
{"review_id":"yi0R0Ugj_xUx_Nek0-_Qig","user_id":"dacAIZ6fTM6mqwW5uxkskg","business_id":"ikCg8xy5JIg_NGPx-MSIDA","stars":5.0,"useful":0,"funny":0,"cool":0,"text":"a b aa bb a b","date":"2018-01-09 20:56:38"}
but it uses all the memory on my computer before it can write the output to a text file. How can I do this in a less memory-intensive way?
Only get text where stars == 5:
Data:
Based on the question, the data is a file containing rows of dicts.
Get the text into a list:
Given the data from the Yelp Challenge, getting the 5-star text into a list doesn't take that much memory.
The Windows resource manager showed an increase of about 1.3GB, but the object size of text_list was about 25MB.
import json

text_list = list()
with open("review.json", encoding="utf8") as f:
    for line in f:
        line = json.loads(line)
        if line['stars'] == 5:
            text_list.append(line['text'])
print(text_list)
>>> ['Test text, example 1!', 'Test text, example 2!']
Extra:
Everything after loading the data seems to require a lot of memory that isn't being released.
When cleaning the text, the Windows resource manager went up by 16GB, though the final size of clean_text was also only about 25MB.
Interestingly, deleting clean_text does not release the 16GB of memory.
In Jupyter Lab, restarting the Kernel will release the memory
In PyCharm, stopping the process also releases the memory
I tried manually running the garbage collector, but that didn't release the memory
Clean text_list:
import string

def clean_string(value: str) -> list:
    value = value.lower()
    value = value.translate(str.maketrans('', '', string.punctuation))
    value = value.split()
    return value
clean_text = [clean_string(item) for item in text_list]
print(clean_text)
>>> [['test', 'text', 'example', '1'], ['test', 'text', 'example', '2']]
Count words in clean_text:
from collections import Counter

words = Counter()
for item in clean_text:
    words.update(item)
print(words)
>>> Counter({'test': 2, 'text': 2, 'example': 2, '1': 1, '2': 1})
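If memory is the main constraint, you can also stream the file once and update the Counter directly, never holding all of the review texts in memory at the same time. A sketch combining the steps above:

import json
import string
from collections import Counter

words = Counter()
table = str.maketrans('', '', string.punctuation)

with open("review.json", encoding="utf8") as f:
    for line in f:
        review = json.loads(line)
        if review["stars"] == 5:
            # Clean and count in one pass; only the Counter stays in memory.
            words.update(review["text"].lower().translate(table).split())

with open("set.txt", "w", encoding="utf8") as g:
    g.write(str(words))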
The title for this one was quite tricky.
I'm trying to solve the following scenario.
Imagine a survey was sent out to XXXXX people, asking them what their favourite football club is.
From the responses, it's obvious that while many favour the same club, they all "expressed" it in different ways.
For example,
For Manchester United, some variations include...
Man U
Man Utd.
Man Utd.
Manchester U
Manchester Utd
All are obviously the same club; however, with a simple technique of just looking for an exact string match, each would be counted as a separate result.
Now, to complicate the scenario further: because of the sheer volume of different clubs (e.g. Man City appearing as M. City, Manchester City, etc.), each plagued with the same problem, it's impossible to manually "enter" these variances and use them to build a custom filter that converts Man U -> Manchester United, Man Utd. -> Manchester United, and so on. Instead, we want to automate this filter to look for the most likely match and convert the data accordingly.
I'm trying to do this in Python (from a .csv file), but I welcome any pseudocode answers that outline a good approach to solving this.
Edit: Some additional information
This isn't working off a set list of clubs; the idea is to "cluster" the ones we have together.
The assumption is there are no spelling mistakes.
There is no assumed number of clubs.
And the survey list is long, long enough that it doesn't warrant doing this manually (1000s of queries).
Google Refine does just this, but I'll assume you want to roll your own.
Note that difflib is built into Python and has lots of features (including eliminating junk elements). I'd start with that.
You probably don't want to do it in a completely automated fashion. I'd do something like this:
# load corrections file, mapping user input -> output
# load survey
import difflib

possible_values = list(corrections.values())

for answer in survey:
    output = corrections.get(answer, None)
    if output is None:
        likely_outputs = difflib.get_close_matches(answer, possible_values)
        output = get_user_to_select_output_or_add_new(likely_outputs)
        corrections[answer] = output
        possible_values.append(output)

# save corrections as csv
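If you just want a feel for what difflib.get_close_matches does before wiring it into the correction loop above, here is a tiny self-contained example; the canonical names and the cutoff are assumptions, not part of difflib:

import difflib

canonical = ["manchester united", "manchester city", "tottenham hotspur", "arsenal"]

for answer in ["man utd", "m. city", "manchester u", "spurs"]:
    match = difflib.get_close_matches(answer.lower(), canonical, n=1, cutoff=0.4)
    print(answer, "->", match)

Nicknames like "spurs" may fall below the cutoff and return an empty list, which is exactly the case where you'd fall back to asking the user (or a manual corrections file) to resolve it.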
Please edit your question with answers to the following:
You say "we want to automate this filter, to look for the most likely match" -- match to what?? Do you have a list of the standard names of all of the possible football clubs, or do the many variations of each name need to be clustered to create such a list?
How many clubs?
How many survey responses?
After doing very light normalisation (replace . by space, strip leading/trailing whitespace, replace runs of whitespace by a single space, convert to lower case [in that order]) and counting, how many unique responses do you have?
Your focus seems to be on abbreviations of the standard name. Do you need to cope with nicknames e.g. Gunners -> Arsenal, Spurs -> Tottenham Hotspur? Acronyms (WBA -> West Bromwich Albion)? What about spelling mistakes, keyboard mistakes, SMS-dialect, ...? In general, what studies of your data have you done and what were the results?
You say """its impossible to manually "enter" these variances""" -- is it possible/permissible to "enter" some "variances" e.g. to cope with nicknames as above?
What are your criteria for success in this exercise, and how will you measure it?
It seems to me that you could convert many of these into a standard form by taking the string, lower-casing it, removing all punctuation, then comparing the start of each word.
If you had a list of all the actual club names, you could compare directly against that as well; and for strings which don't match the first n letters of any actual team, you could try a lexicographical comparison against any of the returned strings which actually do match.
It's not perfect, but it should get you 99% of the way there.
import string

def words(s):
    s = s.lower().strip(string.punctuation)
    return s.split()

def bestMatchingWord(word, matchWords):
    score, best = 0., ''
    for matchWord in matchWords:
        matchScore = sum(w == m for w, m in zip(word, matchWord)) / (len(word) + 0.01)
        if matchScore > score:
            score, best = matchScore, matchWord
    return score, best

def bestMatchingSentence(wordList, matchSentences):
    score, best = 0., []
    for matchSentence in matchSentences:
        total, words = 0., []
        for word in wordList:
            s, w = bestMatchingWord(word, matchSentence)
            total += s
            words.append(w)
        if total > score:
            score, best = total, words
    return score, best

def main():
    data = (
        "Man U",
        "Man. Utd.",
        "Manch Utd",
        "Manchester U",
        "Manchester Utd"
    )
    teamList = (
        ('arsenal',),
        ('aston', 'villa'),
        ('birmingham', 'city', 'bham'),
        ('blackburn', 'rovers', 'bburn'),
        ('blackpool', 'bpool'),
        ('bolton', 'wanderers'),
        ('chelsea',),
        ('everton',),
        ('fulham',),
        ('liverpool',),
        ('manchester', 'city', 'cty'),
        ('manchester', 'united', 'utd'),
        ('newcastle', 'united', 'utd'),
        ('stoke', 'city'),
        ('sunderland',),
        ('tottenham', 'hotspur'),
        ('west', 'bromwich', 'albion'),
        ('west', 'ham', 'united', 'utd'),
        ('wigan', 'athletic'),
        ('wolverhampton', 'wanderers')
    )
    for d in data:
        print("{0:20} {1}".format(d, bestMatchingSentence(words(d), teamList)))

if __name__ == "__main__":
    main()
Run on the sample data, this gets you:
Man U (1.9867767507647776, ['manchester', 'united'])
Man. Utd. (1.7448074166742613, ['manchester', 'utd'])
Manch Utd (1.9946817328797555, ['manchester', 'utd'])
Manchester U (1.989100008901989, ['manchester', 'united'])
Manchester Utd (1.9956787398647866, ['manchester', 'utd'])