I'm sort of new to programming. I have created a class that uses list comprehension in its initializer. It is as follows:
class Collection_of_word_counts():
    '''this class has one instance variable, called counts, which stores a
    dictionary where the keys are words and the values are their occurrences'''
    def __init__(self: 'Collection_of_words', file_name: str) -> None:
        ''' this initializer will read in the words from the file,
        and store them in self.counts'''
        l_words = open(file_name).read().split()
        s_words = set(l_words)
        self.counts = dict([[word, l_words.count(word)]
                            for word
                            in s_words])
I think I did alright for a novice. It works! But I don't exactly understand how this would be represented in a for-loop. My guess was terribly wrong:
self.counts = []
for word in s_words:
    self.counts = [word, l_words.count(word)]
dict(self.counts)
This is what your comprehension is as a for loop:
dictlist = []
for word in s_words:
    dictlist.append([word, l_words.count(word)])
self.counts = dict(dictlist)
Your guess was not wrong at all; you just forgot to append and assign back to self.counts:
counts = []
for word in s_words:
    counts.append([word, l_words.count(word)])
self.counts = dict(counts)
That's what a list comprehension does, essentially: build a list from the loop expression.
You could also translate that to a dictionary comprehension instead:
self.counts = {word: l_words.count(word) for word in s_words}
or better still, use a collections.Counter() object and save yourself all that work:
from collections import Counter

def __init__(self: 'Collection_of_words', file_name: str) -> None:
    ''' this initializer will read in the words from the file,
    and store them in self.counts'''
    with open(file_name) as infile:
        self.counts = Counter(infile.read().split())
The Counter() object goes about counting your words a little more efficiently, and gives you additional helpful functionality such as listing the top N counts and the ability to merge counts.
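A minimal sketch of those extras (the file name words.txt and the extra word list are purely illustrative, not part of the original code):

from collections import Counter

c = Collection_of_word_counts('words.txt')      # assumes a words.txt file exists
print(c.counts.most_common(3))                  # the 3 most frequent words with their counts
c.counts.update(Counter(['extra', 'extra']))    # merging: update() adds counts rather than replacing them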
You are essentially creating a dictionary, where the key of the dictionary is the word and the value corresponding to that key is the number of times the word appears.
self.counts = {}
for word in s_words:
    self.counts[word] = l_words.count(word)
Related
I have a list of (unique) words:
words = ['store', 'worry', 'periodic', 'bucket', 'keen', 'vanish', 'bear', 'transport', 'pull', 'tame', 'rings', 'classy', 'humorous', 'tacit', 'healthy']
That I want to cross-check against two different lists of lists (of the same shape), while counting the number of hits.
l1 = [['terrible', 'worry', 'not'], ['healthy'], ['fish', 'case', 'bag']]
l2 = [['vanish', 'healthy', 'dog'], ['plant'], ['waves', 'healthy', 'bucket']]
I was thinking of using a dictionary with the word as the key, but I would need two 'values' (one for each list) for the number of hits.
So the output would be something like:
{"store": [0, 0]}
{"worry": [1, 0]}
...
{"healthy": [1, 2]}
How would something like this work?
Thank you in advance!
You can use itertools to flatten the lists and then use a dictionary comprehension:
from itertools import chain
words = ['store', 'worry', 'periodic', 'bucket', 'keen', 'vanish', 'bear', 'transport', 'pull', 'tame', 'rings', 'classy', 'humorous', 'tacit', 'healthy']
l1 = [['terrible', 'worry', 'not'], ['healthy'], ['fish', 'case', 'bag']]
l2 = [['vanish', 'healthy', 'dog'], ['plant'], ['waves', 'healthy', 'bucket']]

l1 = list(chain(*l1))
l2 = list(chain(*l2))
final_count = {i: [l1.count(i), l2.count(i)] for i in words}
For your dictionary example, you would just need to iterate over each list and add the hits to the dictionary like so:
my_dict = {}
for word in l1:
    if word in words:  # this makes sure you only work with words that are in your list of unique words
        if word not in my_dict:
            my_dict[word] = [0, 0]
        my_dict[word][0] += 1

for word in l2:
    if word in words:
        if word not in my_dict:
            my_dict[word] = [0, 0]
        my_dict[word][1] += 1
(Or you could factor that repeated code into a function that takes the list, the dictionary, and the index as parameters; that way you repeat fewer lines.)
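A minimal sketch of that helper, assuming the flat (already chained) lists used above; the name count_hits is just illustrative:

def count_hits(lst, my_dict, index):
    # tally each hit from lst into the given slot (0 for l1, 1 for l2)
    for word in lst:
        if word in words:
            if word not in my_dict:
                my_dict[word] = [0, 0]
            my_dict[word][index] += 1

my_dict = {}
count_hits(l1, my_dict, 0)
count_hits(l2, my_dict, 1)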
If your lists are 2D like in your example, then you just add an outer loop over the groups in each list.
my_dict = {}
for group in l1:
    for word in group:
        if word in words:
            if word not in my_dict:
                my_dict[word] = [0, 0]
            my_dict[word][0] += 1

for group in l2:
    for word in group:
        if word in words:
            if word not in my_dict:
                my_dict[word] = [0, 0]
            my_dict[word][1] += 1
Though if you just want to know which words are in common, sets could be an option as well, since set operations such as intersection make it easy to see all the words in common. But sets eliminate duplicates, so if the counts are necessary, a set isn't an option.
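For example, a rough sketch of that set approach (it only tells you which words appear, not how many times):

hits1 = set(words) & {w for group in l1 for w in group}  # which of the unique words appear anywhere in l1
hits2 = set(words) & {w for group in l2 for w in group}  # same for l2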
I wrote this function:
def make_upper(words):
    for word in words:
        ind = words.index(word)
        words[ind] = word.upper()
I also wrote a function that counts the frequency of occurrences of each letter:
def letter_cnt(word, freq):
    for let in word:
        if let == 'A': freq[0] += 1
        elif let == 'B': freq[1] += 1
        elif let == 'C': freq[2] += 1
        elif let == 'D': freq[3] += 1
        elif let == 'E': freq[4] += 1
Counting letter frequency would be much more efficient with a dictionary, yes. Note that you are manually lining up each letter with a number ("A" with 0, et cetera). Wouldn't it be easier if we could have a data type that directly associated a letter with the number of times it occurs, without adding an extra set of numbers in between?
Consider the code:
freq = {"A":0, "B":0, "C":0, "D":0, ... ..., "Z":0}
for letter in text:
freq[letter] += 1
This dictionary is used to count frequencies much more efficiently than your current code does. You just add one to an entry for a given letter each time you see it.
I will also mention that you can count frequencies effectively with certain libraries. If you are interested in analyzing frequencies, look into collections.Counter() and possibly the collections.Counter.most_common() method.
Whether or not you decide to just use collections.Counter(), I would attempt to learn why dictionaries are useful in this context.
One final note: I personally found typing out the values for the "freq" dictionary to be tedious. If you want you could construct an empty dictionary of alphabet letters on-the-fly with this code:
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
freq = {letter:0 for letter in alphabet}
If you want to convert strings in the list to upper case using lambda, you may use it with map() as:
>>> words = ["Hello", "World"]
>>> map(lambda word: word.upper(), words) # In Python 2
['HELLO', 'WORLD']
# In Python 3, use it as: list(map(...))
As per the map() documentation:
map(function, iterable, ...)
Apply function to every item of iterable and return a list of the results.
To find the frequency of each character in a word, you may use collections.Counter() (a dict subclass) like this:
>>> from collections import Counter
>>> my_word = "hello world"
>>> c = Counter(my_word)
# where c holds dictionary as:
# {'l': 3,
# 'o': 2,
# ' ': 1,
# 'e': 1,
# 'd': 1,
# 'h': 1,
# 'r': 1,
# 'w': 1}
As per the Counter documentation:
A Counter is a dict subclass for counting hashable objects. It is an unordered collection where elements are stored as dictionary keys and their counts are stored as dictionary values.
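For instance, calling most_common() on the Counter built above returns the highest counts first:

>>> c.most_common(2)
[('l', 3), ('o', 2)]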
For the letter counting, don't reinvent the wheel: use collections.Counter.
A Counter is a dict subclass for counting hashable objects. It is an unordered collection where elements are stored as dictionary keys and their counts are stored as dictionary values. Counts are allowed to be any integer value including zero or negative counts. The Counter class is similar to bags or multisets in other languages.
def punc_remove(words):
    for word in words:
        if word.isalnum() == False:
            charl = []
            for char in word:
                if char.isalnum() == True:
                    charl.append(char)
            ind = words.index(word)
            delimiter = ""
            words[ind] = delimiter.join(charl)

def letter_cnt_dic(word, freq_d):
    for let in word:
        freq_d[let] += 1

import string

def letter_freq(fname):
    fhand = open(fname)
    freqs = dict()
    alpha = list(string.ascii_uppercase)  # string.uppercase exists only in Python 2
    for let in alpha: freqs[let] = freqs.get(let, 0)
    for line in fhand:
        line = line.rstrip()
        words = line.split()
        punc_remove(words)
        # map(lambda word: word.upper(), words)
        words = [word.upper() for word in words]
        for word in words:
            letter_cnt_dic(word, freqs)
    fhand.close()
    return freqs.values()
You can read the docs about the Counter and the List Comprehensions or run this as a small demo:
from collections import Counter

words = ["acdefg", "abcdefg", "abcdfg"]
# list comprehension, no need for lambda or map
new_words = [word.upper() for word in words]
print(new_words)

# Let's create a dict and a Counter
letters = {}
letters_counter = Counter()
for word in words:
    # The Counter counts the letters and adds the deltas.
    letters_counter += Counter(word)
    # We can also do it by hand with a plain dict
    for letter in word:
        letters[letter] = letters.get(letter, 0) + 1

print(letters_counter)
print(letters)
I'm trying to create a function that will read a text file that has one word on each line, like
afd
asmv
adsasd
It will take words of a user-given length and construct a Python dictionary where the key is a string of the word's letters in sorted order. The values will be a set of all words that have the same key. So far I have:
def setdict():
    wordfile = argv[1]
    open(wordfile, "r")
    setdict = {}
    for line in wordfile:
        words = line.split()
        for word in words:
            word = word.rstrip("\n")
            if word == wordlength:
                key = str(sorted(word))
I'm a little lost on how to create the sets with words that have the same key and put them in the dictionary. Any help would be appreciated.
collections.defaultdict is useful here:
from collections import defaultdict
from pprint import pprint

words = defaultdict(set)
with open('input.txt') as input_file:
    for line in input_file:
        for word in line.split():
            sorted_list = sorted(word)
            sorted_str = ''.join(sorted_list)
            words[sorted_str].add(word)

pprint(words)
Of course, anything you can do with defaultdict, you can also do with dict.setdefault():
words = dict()
with open('input.txt') as input_file:
    for line in input_file:
        for word in line.split():
            sorted_list = sorted(word)
            sorted_str = ''.join(sorted_list)
            words.setdefault(sorted_str, set()).add(word)
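As a quick illustration (input.txt is hypothetical here), if the file contained the single line pots stop opts tops, either version would end up with one key, the sorted string 'opst', mapped to the set of all four words (set ordering may vary):

words['opst']
# -> {'opts', 'pots', 'stop', 'tops'}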
Start with something simple:
words = ["hello","python","world"]
my_dict = {}
for word in words:
try:
my_dict[sorted(word)].append(word)
except KeyError:
my_dict[sorted(word)] = [word]
Now, instead of using predefined words, read them from a file:
words = open("some_word_file.txt").read().split()
The key here is to access the dictionary in a way that makes the value list available for manipulation. You can solve your problem by reading the file line by line (readline) and doing the following for each word:
key = ''.join(sorted(word))
for existing_key, value in my_dict.items():
    if key == existing_key:
        value.append(word)
        break
else:
    # no existing key matched, so start a new entry
    my_dict[key] = [word]
I am learning python from an introductory Python textbook and I am stuck on the following problem:
You will implement function index() that takes as input the name of a text file and a list of words. For every word in the list, your function will find the lines in the text file where the word occurs and print the corresponding line numbers.
Ex:
>>> index('raven.txt', ['raven', 'mortal', 'dying', 'ghost', 'ghastly', 'evil', 'demon'])
ghost 9
dying 9
demon 122
evil 99, 106
ghastly 82
mortal 30
raven 44, 53, 55, 64, 78, 97, 104, 111, 118, 120
Here is my attempt at the problem:
def index(filename, lst):
    infile = open(filename, 'r')
    lines = infile.readlines()
    lst = []
    dic = {}
    for line in lines:
        words = line.split()
        lst.append(words)
    for i in range(len(lst)):
        for j in range(len(lst[i])):
            if lst[i][j] in lst:
                dic[lst[i][j]] = i
    return dic
When I run the function, I get back an empty dictionary. I do not understand why I am getting an empty dictionary. So what is wrong with my function? Thanks.
You are overwriting the value of lst. You use it both as a parameter to the function (in which case it is a list of strings) and as the list of words in the file (in which case it's a list of lists of strings). When you do:
if lst[i][j] in lst
The comparison always returns False because lst[i][j] is a str, but lst contains only lists of strings, not strings themselves. This means that the assignment to dic is never executed and you get an empty dict as a result.
To avoid this you should use a different name for the list in which you store the words, for example:
In [4]: !echo 'a b c\nd e f' > test.txt
In [5]: def index(filename, lst):
...: infile = open(filename, 'r')
...: lines = infile.readlines()
...: words = []
...: dic = {}
...: for line in lines:
...: line_words = line.split()
...: words.append(line_words)
...: for i in range(len(words)):
...: for j in range(len(words[i])):
...: if words[i][j] in lst:
...: dic[words[i][j]] = i
...: return dic
...:
In [6]: index('test.txt', ['a', 'b', 'c'])
Out[6]: {'a': 0, 'c': 0, 'b': 0}
There are also a lot of things you can change.
When you want to iterate a list you don't have to explicitly use indexes. If you need the index you can use enumerate:
for i, line_words in enumerate(words):
    for word in line_words:
        if word in lst: dic[word] = i
You can also iterate directly on a file (refer to Reading and Writing Files section of the python tutorial for a bit more information):
# use the with statement to make sure that the file gets closed
with open('test.txt') as infile:
    for i, line in enumerate(infile):
        print('Line {}: {}'.format(i, line))
In fact I don't see why you would first build that words list of lists. Just iterate over the file directly while building the dictionary:
def index(filename, lst):
    with open(filename, 'r') as infile:
        dic = {}
        for i, line in enumerate(infile):
            for word in line.split():
                if word in lst:
                    dic[word] = i
        return dic
Your dic values should be lists, since more than one line can contain the same word. As it stands your dic would only store the last line where a word is found:
from collections import defaultdict

def index(filename, words):
    # make the later `in` membership check faster
    words = frozenset(words)
    with open(filename) as infile:
        dic = defaultdict(list)
        for i, line in enumerate(infile):
            for word in line.split():
                if word in words:
                    dic[word].append(i)
        return dic
If you don't want to use the collections.defaultdict you can replace dic = defaultdict(list) with dic = {} and then change the:
dic[word].append(i)
With:
if word in dic:
    dic[word].append(i)
else:
    dic[word] = [i]
Or, alternatively, you can use dict.setdefault:
dic.setdefault(word, []).append(i)
although this last way is a bit slower than the original code.
Note that all these solutions have the property that if a word isn't found in the file it will not appear in the result at all. However, you may want it in the result with an empty list as its value. In that case it's simplest to initialize the dict with empty lists before starting the loop, as in:
dic = {word: [] for word in words}
for i, line in enumerate(infile):
    for word in line.split():
        if word in words:
            dic[word].append(i)
Refer to the documentation about List Comprehensions and Dictionaries to understand the first line.
You can also iterate over words instead of the line, like this:
dic = {word: [] for word in words}
for i, line in enumerate(infile):
    for word in words:
        if word in line.split():
            dic[word].append(i)
Note however that this is going to be slower because:
line.split() returns a list, so word in line.split() will have to scan the whole list.
You are repeating the computation of line.split().
You can try to solve these two problems doing:
dic = {word: [] for word in words}
for i, line in enumerate(infile):
    line_words = frozenset(line.split())
    for word in words:
        if word in line_words:
            dic[word].append(i)
Note that here we are iterating once over line.split() to build the set and also over words. Depending on the sizes of the two collections this may be slower or faster than the original version (iterating over line.split()).
However at this point it's probably faster to intersect the sets:
dic = {word: [] for word in words}
for i, line in enumerate(infile):
    line_words = frozenset(line.split())
    for word in words & line_words:  # & stands for set intersection
        dic[word].append(i)
Try this,
def index(filename, lst):
    dic = {w: [] for w in lst}
    for n, line in enumerate(open(filename, 'r')):
        for word in lst:
            if word in line.split(' '):
                dic[word].append(n + 1)
    return dic
There are some features of the language introduced here that you should be aware of because they will make life a lot easier in the long run.
The first is a dictionary comprehension. It basically initializes a dictionary using the words in lst as keys and an empty list [] as the value for each key.
Next, the enumerate() function. This allows us to iterate over the items in a sequence while also getting the index of each item. In this case, because we passed a file object to enumerate, it will loop over the lines. For each iteration, n will be the 0-based index of the line and line will be the line itself. Next we iterate over the words in lst.
Notice that we don't need any indices here. Python encourages looping over the objects in a sequence rather than looping over indices and then accessing the objects by index (for example, it discourages for i in range(len(lst)): do_something_with(lst[i])).
Finally, the in operator is a very straightforward way to test membership for many types of objects and the syntax is very intuitive. In this case, we are asking is the current word from lst in the current line.
Note that we use line.split(' ') to get a list of the words in the line. If we don't do this, 'the' in 'there was a ghost' would return True, since 'the' is a substring of one of the words.
On the other hand 'the' in ['there', 'was', 'a', 'ghost'] would return False. If the conditional returns True, we append it to the list associated to the key in our dictionary.
That might be a lot to chew on, but these concepts make problems like this more straightforward.
First, your function parameter holding the words is named lst, and the list where you put all the words from the file is also named lst, so you are not keeping the words passed to your function: on line 4 you rebind the name to a new empty list.
Second, you are iterating over each line in the file (the first for) and collecting the words of that line, so after that loop lst holds one list of words per line. Then, in the for i ... for j loops, you are iterating over every word of every line, and the if asks "is this single word in the list of word lists?", which it never is, so the dict never gets filled up.
for i in range(len(lst)):
    if words[i] in lst:
        dic[words[i]] = dic[words[i]] + i  # To count repetitions
You need to rethink the problem; even my answer will fail, because the key won't already exist in the dict and that raises a KeyError, but you get the point. Good luck!
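As a minimal sketch of one way around that KeyError, you could use dict.get() with a default of 0, assuming words here is a flat list of the words read from the file (recording line numbers, as the exercise asks, is covered in the other answers):

for word in words:
    if word in lst:
        dic[word] = dic.get(word, 0) + 1  # count how many times each word from lst appears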
I am trying to get the difference between 2 containers, but the containers have an awkward structure, so I don't know what's the best way to perform a difference on them. One container's type and structure I cannot alter, but the other's I can (the variable delims).
delims = ['on','with','to','and','in','the','from','or']
words = collections.Counter(s.split()).most_common()
# words results in [("the",2), ("a",9), ("diplomacy", 1)]
# I want to perform a 'difference' operation on words to remove all the delims words
descriptive_words = set(words) - set(delims)
# because of the unique structure of words (a list of tuples) it's hard to perform a difference
# on it. What would be the best way to perform a difference? Maybe...
delims = [('on',0),('with',0),('to',0),('and',0),('in',0),('the',0),('from',0),('or',0)]
words = collections.Counter(s.split()).most_common()
descriptive_words = set(words) - set(delims)
# Or maybe
words = collections.Counter(s.split()).most_common()
n_words = []
for w in words:
    n_words.append(w[0])
delims = ['on','with','to','and','in','the','from','or']
descriptive_words = set(n_words) - set(delims)
How about just modifying words by removing all the delimiters?
words = collections.Counter(s.split())
for delim in delims:
    del words[delim]
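Based on the sample counts shown in the question, words would then be left as roughly:

Counter({'a': 9, 'diplomacy': 1})

Deleting a key that isn't present is harmless here, since Counter's del does not raise a KeyError for missing keys.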
This is how I would do it:
delims = set(['on','with','to','and','in','the','from','or'])
# ...
descriptive_words = filter(lambda x: x[0] not in delims, words)
Using the filter() function. A viable alternative would be:
delims = set(['on','with','to','and','in','the','from','or'])
# ...
descriptive_words = [(word, count) for word, count in words if word not in delims]
Making sure that the delims are in a set to allow for O(1) lookup.
The simplest answer is to do:
import collections
s = "the a a a a the a a a a a diplomacy"
delims = {'on','with','to','and','in','the','from','or'}
# For older versions of Python without set literals:
# delims = set(['on','with','to','and','in','the','from','or'])
words = collections.Counter(s.split())
not_delims = {key: value for (key, value) in words.items() if key not in delims}
# For older versions of Python without dict comprehensions:
# not_delims = dict(((key, value) for (key, value) in words.items() if key not in delims))
Which gives us:
{'a': 9, 'diplomacy': 1}
An alternative option is to do it pre-emptively:
import collections
s = "the a a a a the a a a a a diplomacy"
delims = {'on','with','to','and','in','the','from','or'}
counted_words = collections.Counter((word for word in s.split() if word not in delims))
Here you apply the filtering on the list of words before you give it to the counter, and this gives the same result.
If you're iterating through it anyway why bother converting them to sets?
dwords = [delim[0] for delim in delims]
words = [word for word in words if word[0] not in dwords]
Alternatively, you can use filter() with a lambda (in Python 3, wrap it in list() if you need a list):
filter(lambda word: word[0] not in delims, words)