I have a list that contains pairs of keywords ('k1', 'k2'). Here's a sample:
print (word_pairs)
--->[('salaire', 'dépense'), ('gratuité', 'argent'), ('causesmwedemwelamwemort', 'cadres'), ('caractèresmwedumwedispositif', 'historique'), ('psychomotricienmwediplôme', 'infirmier'), ('impôtmwesurmwelesmweréunionsmwesportives', 'compensation'), ('affichage', 'affichagemweopinion'), ('délaimweprorogation', 'défaillance'), ('créancemwenotion', 'généralités')]
I have a text file r_isa.txt (205 MB) that contains words sharing an "isa" relationship. Here's a sample, where \t represents a literal tab character:
égalité de Parseval\tformule_0.9333\tégalité_1.0
filiation illégitime\tfiliation_1.0
Loi reconnaissant l'égalité\tloi_1.0
égalité entre les sexes\tégalité_1.0
liberté égalité fraternité\tliberté_1.0
This basically means that "égalité de Parseval" isa "formule" with a score of 0.9333, and isa "égalité" with a score of 1.0. And so on.
Based on the r_isa file, I want to know whether the keyword k1 isa k2, and whether k2 isa k1. In the output file, I want to save on each line each pair of words that does have the isa relationship.
Here's what I did:
import ast

# Reading data as a list
keywords = [line for line in open('version_final_PMI_espace.txt', encoding='utf8')]
keywords = ast.literal_eval(keywords[0])

word_pairs = []
for k, v in keywords.items():
    if v:
        word_pairs.append((k, v[0][0]))
len(list(set(word_pairs)))
#####
with open("r_isa.txt", encoding="utf-8") as readfile, open('Hyperonymy_file_pair.txt', 'w') as writefile:
    for line in readfile:
        firstfield = line.split('\t')[0].lower()
        for w in word_pairs:
            if w[0] == firstfield:
                if w[1] in line:
                    writefile.write("".join(w[0]) + "\t" + "".join(w[1]) + "\n")
This returns random pairs to me, for example:
salaire\targent
dépense\tcadres
instead of (in the case of an existing isa relationship):
salaire\tdépense
causesmwedemwelamwemort\tcadres
Where did I go wrong ?
Updated Answer
The statement if w[1] in line: is highly suspect. See the following code for what I believe the logic should be. Since I don't have access to your files, I have turned readfile into a list of strings for testing purposes and instead of writing output to writefile, I am just printing some results. I have added some values to word_pairs and readfile so that I get some results. Also note that if you are converting the input file to lower case, then your word pairs must also be lower case.
This code checks if k1 isa k2 and if not, then checks if k2 isa k1.
word_pairs = [('égalité de parseval', 'égalité'), ('salaire', 'dépense'), ('gratuité', 'argent'), ('causesmwedemwelamwemort', 'cadres'), ('caractèresmwedumwedispositif', 'historique'), ('psychomotricienmwediplôme', 'infirmier'), ('impôtmwesurmwelesmweréunionsmwesportives', 'compensation'), ('affichage', 'affichagemweopinion'), ('délaimweprorogation', 'défaillance'), ('créancemwenotion', 'généralités')]
word_pairs2 = [(pair[1], pair[0]) for pair in word_pairs]  # reverse the words
word_dict = dict(word_pairs)  # create a dictionary for fast searching
word_dict2 = dict(word_pairs2)

readfile = [
    'égalité de Parseval\tformule_0.9333\tégalité_1.0',
    'filiation illégitime\tfiliation_1.0',
    'Loi reconnaissant l\'égalité\tloi_1.0',
    'égalité entre les sexes\tégalité_1.0',
    'liberté égalité fraternité\tliberté_1.0',
    'dépense\tsalaire_.9'
]

for line in readfile:
    fields = line.lower().split('\t')
    first_word = fields.pop(0)
    isa_word = word_dict.get(first_word, word_dict2.get(first_word))  # check k2 isa k1 if k1 isa k2 is false
    if isa_word is not None:
        for field in fields:  # check each one
            fields2 = field.split('_')
            second_word, score = fields2
            if second_word == isa_word:
                print(first_word, second_word, score)
Prints:
égalité de parseval égalité 1.0
dépense salaire .9
If it is possible that k1 isa k2 and k2 isa k1, then you need the more general (but more complicated) code:
word_pairs = [('égalité de parseval', 'égalité'), ('salaire', 'dépense'), ('gratuité', 'argent'), ('causesmwedemwelamwemort', 'cadres'), ('caractèresmwedumwedispositif', 'historique'), ('psychomotricienmwediplôme', 'infirmier'), ('impôtmwesurmwelesmweréunionsmwesportives', 'compensation'), ('affichage', 'affichagemweopinion'), ('délaimweprorogation', 'défaillance'), ('créancemwenotion', 'généralités')]
word_pairs2 = [(pair[1], pair[0]) for pair in word_pairs]  # reverse the words
word_dict = dict(word_pairs)  # create a dictionary for fast searching
word_dict2 = dict(word_pairs2)

readfile = [
    'égalité de Parseval\tformule_0.9333\tégalité_1.0',
    'filiation illégitime\tfiliation_1.0',
    'Loi reconnaissant l\'égalité\tloi_1.0',
    'égalité entre les sexes\tégalité_1.0',
    'liberté égalité fraternité\tliberté_1.0',
    'salaire\tdépense_1.0',
    'dépense\tsalaire_.9'
]

for line in readfile:
    fields = line.lower().split('\t')
    first_word = fields.pop(0)
    # k1 isa k2?
    isa_word = word_dict.get(first_word)
    if isa_word is not None:
        for field in fields:  # check each one
            fields2 = field.split('_')
            second_word, score = fields2
            if second_word == isa_word:
                print(first_word, second_word, score)
    # k2 isa k1?
    isa_word = word_dict2.get(first_word)
    if isa_word is not None:
        for field in fields:  # check each one
            fields2 = field.split('_')
            second_word, score = fields2
            if second_word == isa_word:
                print(first_word, second_word, score)
Prints:
égalité de parseval égalité 1.0
salaire dépense 1.0
dépense salaire .9
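If you want the matches written to the output file from the original question instead of printed, the same loop works with writefile.write. Here is a minimal sketch of that idea; io.StringIO stands in for the real r_isa.txt and Hyperonymy_file_pair.txt handles, and the sample data is abbreviated:

```python
import io

# Sketch: same matching logic, but writing tab-separated pairs instead
# of printing. StringIO objects stand in for the real file handles.
word_pairs = [('salaire', 'dépense'), ('égalité de parseval', 'égalité')]
word_dict = dict(word_pairs)                    # k1 -> k2
word_dict2 = {k2: k1 for k1, k2 in word_pairs}  # k2 -> k1

readfile = io.StringIO('égalité de Parseval\tformule_0.9333\tégalité_1.0\n'
                       'dépense\tsalaire_.9\n')
writefile = io.StringIO()

for line in readfile:
    fields = line.lower().rstrip('\n').split('\t')
    first_word = fields.pop(0)
    isa_word = word_dict.get(first_word, word_dict2.get(first_word))
    if isa_word is not None:
        for field in fields:
            second_word, score = field.split('_')
            if second_word == isa_word:
                writefile.write(first_word + '\t' + second_word + '\n')

print(writefile.getvalue())
```

With real files, replace the two StringIO objects with the open(...) calls from the question.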
kw = [('salaire', 'dépense'),
      ('gratuité', 'argent'),
      ('causesmwedemwelamwemort', 'cadres'),
      ('caractèresmwedumwedispositif', 'historique'),
      ('psychomotricienmwediplôme', 'infirmier'),
      ('impôtmwesurmwelesmweréunionsmwesportives', 'compensation'),
      ('affichage', 'affichagemweopinion'),
      ('délaimweprorogation', 'défaillance'),
      ('créancemwenotion', 'généralités')]

lines_from_file = ['égalité de Parseval\tformule_0.9333\tégalité_1.0',
                   'filiation illégitime\tfiliation_1.0',
                   'Loi reconnaissant l\'égalité\tloi_1.0',
                   'égalité entre les sexes\tégalité_1.0',
                   'liberté égalité fraternité\tliberté_1.0',
                   'créancemwenotion\tgénéralités_1.0',
                   'généralités\tcréancemwenotion_1.0']

who_is_who_dict = {}
for line in lines_from_file:
    words = line.split('\t')
    key = words[0]
    other_words = [w.split('_')[0] for w in words[1:]]
    if key in who_is_who_dict:
        who_is_who_dict[key] = who_is_who_dict[key] + other_words
    else:
        who_is_who_dict[key] = other_words

pairs_to_write = []
for kw1, kw2 in kw:
    if (kw1 in who_is_who_dict and kw2 in who_is_who_dict[kw1]
            and kw2 in who_is_who_dict and kw1 in who_is_who_dict[kw2]):
        pairs_to_write.append((kw1, kw2))

print(pairs_to_write)
Output:
[('créancemwenotion', 'généralités')]
I'm trying to retrieve Wikipedia pages' character counts for articles in different languages. I'm using a dictionary with the name of the page as key, and as value another dictionary with the language as key and the count as value.
The code is:
pages = ["L'arte della gioia", "Il nome della rosa"]
langs = ["it", "en"]
dicty = {}
dicto = {}
numz = 0
for x in langs:
    wikipedia.set_lang(x)
    for y in pages:
        pagelang = wikipedia.page(y)
        splittedpage = pagelang.content
        dicto[y] = dicty
        for char in splittedpage:
            numz += 1
        dicty[x] = numz
If I print dicto, I get
{"L'arte della gioia": {'it': 72226, 'en': 111647}, 'Il nome della rosa': {'it': 72226, 'en': 111647}}
The count should be different for the two pages.
Please try this code. I didn't run it because I don't have the wikipedia module.
Notes:
Since your expected result is dict[page, dict[lang, cnt]], I think iterating over pages first, then languages, is more natural. If you want to iterate over languages first for performance reasons, please comment.
The character count of a text can simply be len(text); there is no need to iterate and sum.
Use descriptive variable names; you will soon get lost among variables named x and y.
import wikipedia

pages = ["L'arte della gioia", "Il nome della rosa"]
langs = ["it", "en"]

dicto = {}
for page in pages:
    lang_cnt_dict = {}
    for lang in langs:
        wikipedia.set_lang(lang)
        page_lang = wikipedia.page(page)
        chars_cnt = len(page_lang.content)
        lang_cnt_dict[lang] = chars_cnt
    dicto[page] = lang_cnt_dict
print(dicto)
Update
If you want to iterate over langs first:

import wikipedia

pages = ["L'arte della gioia", "Il nome della rosa"]
langs = ["it", "en"]

dicto = {}
for lang in langs:
    wikipedia.set_lang(lang)
    for page in pages:
        page_lang = wikipedia.page(page)
        chars_cnt = len(page_lang.content)
        if page in dicto:
            dicto[page][lang] = chars_cnt
        else:
            dicto[page] = {lang: chars_cnt}
print(dicto)
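As a side note, dict.setdefault can collapse the "is the page already a key?" branch into a single line. A sketch of that variant; the counts here are hypothetical dummy values standing in for the real len(wikipedia.page(page).content) calls:

```python
# Sketch: dict.setdefault removes the if/else when building a nested dict.
# fake_counts stands in for real wikipedia API calls (dummy values).
pages = ["L'arte della gioia", "Il nome della rosa"]
langs = ["it", "en"]
fake_counts = {("L'arte della gioia", "it"): 72226,
               ("L'arte della gioia", "en"): 111647,
               ("Il nome della rosa", "it"): 65000,
               ("Il nome della rosa", "en"): 90000}

dicto = {}
for lang in langs:
    for page in pages:
        chars_cnt = fake_counts[(page, lang)]  # would be len(wikipedia.page(page).content)
        dicto.setdefault(page, {})[lang] = chars_cnt

print(dicto)
```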
I have to write a program that correlates smoking with lung cancer risk, using data from two files.
My code currently pairs up values from the same line numbers (e.g. America,23.3 with Spain,77.9 and Italy,24.2 with Russia,60.8).
How do I modify my code so that it matches the numbers for the same countries and leaves out the countries that occur in only one file (it shouldn't compute Germany, France, China, Korea, and Vietnam, because they each appear in only one file)?
Thank you so much for your help in advance :)
smoking file:
Country, Percent Cigarette Smokers Data
America,23.3
Italy,24.2
Russia,23.7
France,14.9
England,17.9
Spain,17
Germany,21.7
second file:
Cases Lung Cancer per 100000
Spain,77.9
Russia,60.8
Korea,61.3
America,73.3
China,66.8
Vietnam,64.5
Italy,43.9
and my code:
import math

def readFiles(smoking_datafile, cancer_datafile):
    '''
    Reads the data from the provided file objects smoking_datafile
    and cancer_datafile. Returns a list of the data read from each
    in a tuple of the form (smoking_datafile, cancer_datafile).
    '''
    # init
    smoking_data = []
    cancer_data = []
    empty_str = ''
    # read past file headers
    smoking_datafile.readline()
    cancer_datafile.readline()
    # read data files
    eof = False
    while not eof:
        # read a line of data from each file
        s_line = smoking_datafile.readline()
        c_line = cancer_datafile.readline()
        # check if at end-of-file of both files
        if s_line == empty_str and c_line == empty_str:
            eof = True
        # check if at end of smoking data file only
        elif s_line == empty_str:
            raise OSError('Unexpected end-of-file for smoking data file')
        # check if at end of cancer data file only
        elif c_line == empty_str:
            raise OSError('Unexpected end-of-file for cancer data file')
        # append line of data to each list
        else:
            smoking_data.append(s_line.strip().split(','))
            cancer_data.append(c_line.strip().split(','))
    # return list of data from each file
    return (smoking_data, cancer_data)

def calculateCorrelation(smoking_data, cancer_data):
    '''
    Calculates and returns the correlation value for the data
    provided in lists smoking_data and cancer_data
    '''
    # init
    sum_smoking_vals = sum_cancer_vals = 0
    sum_smoking_sqrd = sum_cancer_sqrd = 0
    sum_products = 0
    # calculate intermediate correlation values
    num_values = len(smoking_data)
    for k in range(0, num_values):
        sum_smoking_vals += float(smoking_data[k][1])
        sum_cancer_vals += float(cancer_data[k][1])
        sum_smoking_sqrd += float(smoking_data[k][1]) ** 2
        sum_cancer_sqrd += float(cancer_data[k][1]) ** 2
        sum_products += float(smoking_data[k][1]) * float(cancer_data[k][1])
    # calculate and return the correlation value
    numer = (num_values * sum_products) - (sum_smoking_vals * sum_cancer_vals)
    denom = math.sqrt(abs(
        ((num_values * sum_smoking_sqrd) - (sum_smoking_vals ** 2)) *
        ((num_values * sum_cancer_sqrd) - (sum_cancer_vals ** 2))))
    return numer / denom
Let's just focus on getting the data into a format that is easy to work with. The code below will get you a dictionary of the form ...
smokers_cancer_data = {
    'America': {
        'smokers': 23.3,
        'cancer': 73.3
    },
    'Italy': {
        'smokers': 24.2,
        'cancer': 43.9
    },
    ...
}
Once you have this you can get any values you need and perform your calculations. See the code below.
def read_data(filename: str) -> dict:
    with open(filename, 'r') as file:
        next(file)  # Skip the header
        data = dict()
        for line in file:
            cleaned_line = line.rstrip()
            # Skip blank lines
            if cleaned_line:
                data_item = cleaned_line.split(',')
                data[data_item[0]] = float(data_item[1])
    return data

# Load data into python dictionaries
smokers_data = read_data('smokersData.txt')
cancer_data = read_data('lungCancerData.txt')

# Build one dictionary that is easy to work with
smokers_cancer_data = dict()
for (key, value) in smokers_data.items():
    if key in cancer_data:
        smokers_cancer_data[key] = {
            'smokers': smokers_data[key],
            'cancer': cancer_data[key]
        }
print(smokers_cancer_data)
For example, if you want to calculate the sum of the smoker and cancer values:

smokers_total = 0
cancer_total = 0
for (key, value) in smokers_cancer_data.items():
    smokers_total += value['smokers']
    cancer_total += value['cancer']
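Going one step further, the questioner's calculateCorrelation reduces to the standard Pearson formula over the two value lists pulled from the merged dictionary. A sketch, hard-coding the merged dict for the four countries both sample files share:

```python
import math

# Sketch: Pearson correlation over the merged dict built above.
# Values are the four countries present in both sample files.
smokers_cancer_data = {
    'America': {'smokers': 23.3, 'cancer': 73.3},
    'Italy':   {'smokers': 24.2, 'cancer': 43.9},
    'Russia':  {'smokers': 23.7, 'cancer': 60.8},
    'Spain':   {'smokers': 17.0, 'cancer': 77.9},
}

xs = [v['smokers'] for v in smokers_cancer_data.values()]
ys = [v['cancer'] for v in smokers_cancer_data.values()]
n = len(xs)

# Same formula as calculateCorrelation in the question
numer = n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)
denom = math.sqrt((n * sum(x * x for x in xs) - sum(xs) ** 2)
                  * (n * sum(y * y for y in ys) - sum(ys) ** 2))
correlation = numer / denom
print(correlation)
```

On this tiny sample the correlation comes out negative, which says more about having only four data points than about smoking.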
This will return a list of all the countries that have data in both files, along with the data:

l3 = []
with open('smoking.txt', 'r') as f1, open('cancer.txt', 'r') as f2:
    l1, l2 = f1.readlines(), f2.readlines()
    for s1 in l1:
        for s2 in l2:
            if s1.split(',')[0] == s2.split(',')[0]:
                cty = s1.split(',')[0]
                smk = s1.split(',')[1].strip()
                cnr = s2.split(',')[1].strip()
                l3.append(f"{cty}: smoking: {smk}, cancer: {cnr}")
print(l3)
Output:
['America: smoking: 23.3, cancer: 73.3', 'Italy: smoking: 24.2, cancer: 43.9', 'Russia: smoking: 23.7, cancer: 60.8', 'Spain: smoking: 17, cancer: 77.9']
I have this text:
/** Goodmorning
Alex
Dog
House
Red
*/
/** Goodnight
Maria
Cat
Office
Green
*/
I would like to have Alex, Dog, House and Red in one list, and Maria, Cat, Office and Green in another list.
Here is my code so far:
with open(filename) as f:
    for i in f:
        if i.startswith("/** Goodmorning"):
            # add lines to one list
        elif i.startswith("/** Goodnight"):
            # add lines to the other list

So, is there any way to write the script so it understands that Alex belongs to the part of the text that has Goodmorning?
I'd recommend using a dict, with the section name as the key:

with open(filename) as f:
    result = {}
    current_list = None
    for line in f:
        line = line.strip()
        if line.startswith("/**"):
            current_list = []
            result[line[3:].strip()] = current_list
        elif line and line != "*/":
            current_list.append(line)
Result:
{'Goodmorning': ['Alex', 'Dog', 'House', 'Red'], 'Goodnight': ['Maria', 'Cat', 'Office', 'Green']}
To find which key a value belongs to, you can use the following code:
search_value = "Alex"
for key, values in result.items():
    if search_value in values:
        print(search_value, "belongs to", key)
        break
I would recommend using regular expressions. In Python there is a module for this called re.
import re

s = """/** Goodmorning
Alex
Dog
House
Red
*/
/** Goodnight
Maria
Cat
Office
Green
*/"""

pattern = r'/\*\*([\w \n]+)\*/'
word_groups = re.findall(pattern, s, re.MULTILINE)

d = {}
for word_group in word_groups:
    words = word_group.strip().split('\n')
    d[words[0]] = words[1:]
print(d)
Output:
{'Goodmorning': ['Alex', 'Dog', 'House', 'Red'], 'Goodnight': ['Maria', 'Cat', 'Office', 'Green']}
Expanding on Olvin Roght's answer (sorry, I can't comment - not enough reputation), I would keep a second dictionary for the reverse lookup:
with open(filename) as f:
    key_to_list = {}
    name_to_key = {}
    current_list = None
    current_key = None
    for line in f:
        line = line.strip()
        if line.startswith("/**"):
            current_list = []
            current_key = line[3:].strip()
            key_to_list[current_key] = current_list
        elif line and line != "*/":
            current_name = line
            name_to_key[current_name] = current_key
            current_list.append(current_name)

print(key_to_list)
print(name_to_key['Alex'])
An alternative is to convert the dictionary afterwards (e.g. if you want to go with the regex version from ashwani):

name_to_key = {n: k for k in key_to_list for n in key_to_list[k]}

A limitation is that this only permits one membership per name.
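If names can legitimately appear in more than one section, the reverse lookup can map each name to a list of keys instead. A sketch of that variant, with a hypothetical key_to_list in which "Alex" occurs in both sections:

```python
# Sketch: reverse lookup allowing a name to belong to several sections.
# key_to_list here is hypothetical sample data with a duplicated name.
key_to_list = {'Goodmorning': ['Alex', 'Dog', 'House', 'Red'],
               'Goodnight': ['Maria', 'Cat', 'Office', 'Alex']}

name_to_keys = {}
for key, names in key_to_list.items():
    for name in names:
        name_to_keys.setdefault(name, []).append(key)

print(name_to_keys['Alex'])
```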
This is a continuation of my previous question posted here, where I was struggling with parsing a RIS file. I have now combined some code into a new parser, which correctly reads a record. Unfortunately, the code stops after the first record, and I have no idea how to differentiate between the end of the file and the double newline that separates records. Any idea?
The input file is provided here:
Record #1 of 306
ID: CN-01160769
AU: Uedo N
AU: Yao K
AU: Muto M
AU: Ishikawa H
TI: Development of an E-learning system.
SO: United European Gastroenterology Journal
YR: 2015
VL: 3
NO: 5 SUPPL. 1
PG: A490
XR: EMBASE 72267184
PT: Journal: Conference Abstract
DOI: 10.1177/2050640615601623
US: http://onlinelibrary.wiley.com/o/cochrane/clcentral/articles/769/CN-01160769/frame.html
Record #2 of 306
ID: CN-01070265
AU: Krogh LQ
AU: Bjornshave K
AU: Vestergaard LD
AU: Sharma MB
AU: Rasmussen SE
AU: Nielsen HV
AU: Thim T
AU: Lofgren B
TI: E-learning in pediatric basic life support: A randomized controlled non-inferiority study.
SO: Resuscitation
YR: 2015
VL: 90
PG: 7-12
XR: EMBASE 2015935529
PT: Journal: Article
DOI: 10.1016/j.resuscitation.2015.01.030
US: http://onlinelibrary.wiley.com/o/cochrane/clcentral/articles/265/CN-01070265/frame.html
Record #3 of 306
ID: CN-00982835
AU: Worm BS
AU: Jensen K
TI: Does peer learning or higher levels of e-learning improve learning abilities?
SO: Medical education online
YR: 2013
VL: 18
NO: 1
PG: 21877
PM: PUBMED 28166018
XR: EMBASE 24229729
PT: Journal Article; Randomized Controlled Trial
DOI: 10.3402/meo.v18i0.21877
US: http://onlinelibrary.wiley.com/o/cochrane/clcentral/articles/835/CN-00982835/frame.html
And the code is pasted below:
import re

# Function to process a single record
def read_record(infile):
    line = infile.readline()
    line = line.strip()
    if not line:
        # End of file
        return None
    if not line.startswith("Record"):
        raise TypeError("Not a proper file: %r" % line)
    # Read tags and fields
    tags = []
    fields = []
    prog = re.compile("^([A-Z][A-Z0-9][A-Z]?): (.*)")
    while 1:
        line = infile.readline().rstrip()
        if line == "":
            # Reached the end of the record or the end of the file
            break
        match = prog.match(line)
        tag = match.groups()[0]
        field = match.groups()[1]
        tags.append(tag)
        fields.append(field)
    return [tags, fields]

# Function to loop through records
def read_records(input_file):
    records = []
    while 1:
        record = read_record(input_file)
        if record is None:
            break
        records.append(record)
    return records

infile = open("test.txt")
for record in read_records(infile):
    print(record)
Learn how to iterate over a file line by line using for line in infile:. There is no need to test for end-of-file against ""; the for loop does that for you:

for line in infile:
    # remove trailing newlines, and truncate lines that
    # are all-whitespace down to just ''
    line = line.rstrip()
    if line:
        # there is something on this line
        ...
    else:
        # this is a blank line - but it is definitely NOT the end-of-file
        ...
As suggested by @PaulMcG, here is a solution which iterates over the file line by line.
import re

records = []
count_records = 0
count_newlines = 0
prog = re.compile("^([A-Z][A-Z0-9][A-Z]?): (.*)")
bom = re.compile("^\ufeff")

with open("test.ris") as infile:
    for line in infile:
        line = line.rstrip()
        if bom.match(line):
            line = re.sub("^\ufeff", "", line)
        if line:
            if line.startswith("Record"):
                print("START NEW RECORD")
                count_records += 1
                count_newlines = 0
                current_record = {}
                continue
            match = prog.match(line)
            tag = match.groups()[0]
            field = match.groups()[1]
            if tag == "AU":
                if tag in current_record:
                    current_record[tag].append(field)
                else:
                    current_record[tag] = [field]
            else:
                current_record.update({tag: field})
        else:
            count_newlines += 1
            if count_newlines > 1 and count_records > 0:
                print("# of records: ", count_records)
                print("# of newlines: ", count_newlines)
                records.append(current_record)
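Another way to sidestep the end-of-file question entirely is to read the whole text and split it on runs of blank lines, so each record becomes its own block. A sketch of that approach, reusing the tag regex from above; the sample text is abbreviated from the question's file:

```python
import re

# Sketch: split on blank-line runs instead of detecting end-of-file
# while reading. The sample text is an abbreviated version of test.ris.
text = """Record #1 of 306
ID: CN-01160769
AU: Uedo N
TI: Development of an E-learning system.

Record #2 of 306
ID: CN-01070265
AU: Krogh LQ
"""

prog = re.compile(r"^([A-Z][A-Z0-9][A-Z]?): (.*)")

records = []
for block in re.split(r"\n\s*\n", text.strip()):
    record = {}
    for line in block.splitlines():
        m = prog.match(line)
        if m:  # the "Record #N of 306" header does not match and is skipped
            tag, field = m.groups()
            record.setdefault(tag, []).append(field)
    records.append(record)

print(len(records))
```

With the real file, text would come from open("test.ris", encoding="utf-8-sig").read() (the -sig codec also strips the BOM handled manually above).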
My list is formatted like:
gymnastics_school,participant_name,all-around_points_earned
I need to divide it up by schools but keep the scores.
import collections

def main():
    names = ["gymnastics_school", "participant_name", "all_around_points_earned"]
    Data = collections.namedtuple("Data", names)
    data = []
    with open('state_meet.txt', 'r') as f:
        for line in f:
            line = line.strip()
            items = line.split(',')
            items[2] = float(items[2])
            data.append(Data(*items))
These are examples of how they're set up:
Lanier City Gymnastics,Ben W.,55.301
Lanier City Gymnastics,Alex W.,54.801
Lanier City Gymnastics,Sky T.,51.2
Lanier City Gymnastics,William G.,47.3
Carrollton Boys,Cameron M.,61.6
Carrollton Boys,Zachary W.,58.7
Carrollton Boys,Samuel B.,58.6
La Fayette Boys,Nate S.,63
La Fayette Boys,Kaden C.,62
La Fayette Boys,Cohan S.,59.1
La Fayette Boys,Cooper J.,56.101
La Fayette Boys,Avi F.,53.401
La Fayette Boys,Frederic T.,53.201
Columbus,Noah B.,50.3
Savannah Metro,Levi B.,52.801
Savannah Metro,Taylan T.,52
Savannah Metro,Jacob S.,51.5
SAAB Gymnastics,Dawson B.,58.1
SAAB Gymnastics,Dean S.,57.901
SAAB Gymnastics,William L.,57.101
SAAB Gymnastics,Lex L.,52.501
Suwanee Gymnastics,Colin K.,57.3
Suwanee Gymnastics,Matthew B.,53.201
After processing, it should look like:
Lanier City Gymnastics: participants (4)
as its own list
Carrollton Boys: participants (3)
as its own list
La Fayette Boys: participants (6)
etc.
I would recommend putting them in a dictionary:

data = {}
with open('state_meet.txt', 'r') as f:
    for line in f:
        line = line.strip()
        items = line.split(',')
        items[2] = float(items[2])
        if items[0] in data:
            data[items[0]].append(items[1:])
        else:
            data[items[0]] = [items[1:]]
Then the schools can be accessed in the following way:

>>> data['Lanier City Gymnastics']
[['Ben W.', 55.301], ['Alex W.', 54.801], ['Sky T.', 51.2], ['William G.', 47.3]]
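As a side note, collections.defaultdict removes the membership test entirely; a minimal sketch with three of the sample lines:

```python
from collections import defaultdict

# Sketch: defaultdict(list) removes the "key already in dict?" branch.
lines = ["Lanier City Gymnastics,Ben W.,55.301",
         "Carrollton Boys,Cameron M.,61.6",
         "Lanier City Gymnastics,Alex W.,54.801"]

data = defaultdict(list)
for line in lines:
    school, name, score = line.strip().split(',')
    data[school].append([name, float(score)])

print(dict(data))
```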
EDIT:
Assuming you need the whole dataset as a list first, and then want to divide it into smaller lists, you can generate the dictionary from the list:

data = []
with open('state_meet.txt', 'r') as f:
    for line in f:
        line = line.strip()
        items = line.split(',')
        items[2] = float(items[2])
        data.append(items)

# perform median or other operations on your data

nested_data = {}
for items in data:
    if items[0] in nested_data:
        nested_data[items[0]].append(items[1:])
    else:
        nested_data[items[0]] = [items[1:]]
When you need to get a subset of a list, you can use slicing:

mylist[start:stop:step]

where start, stop and step are all optional (see link for a more comprehensive introduction).
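A few slicing examples, using one school's score list from above:

```python
# Slicing examples on a plain list of scores.
scores = [55.301, 54.801, 51.2, 47.3]

print(scores[:2])    # first two scores
print(scores[1:3])   # second and third scores
print(scores[::2])   # every other score
print(scores[::-1])  # a reversed copy
```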