I'm doing a course in bioinformatics. We were supposed to create a function that takes a list of strings like this:
Motifs = [
    "AACGTA",
    "CCCGTT",
    "CACCTT",
    "GGATTA",
    "TTCCGG"]
and turns it into a count matrix that counts the occurrences of the nucleotides (the letters A, C, G, and T) in each column and adds a pseudocount of 1, represented by a dictionary with a list of values for each key, like this:
count = {
    'A': [2, 3, 2, 1, 1, 3],
    'C': [3, 2, 5, 3, 1, 1],
    'G': [2, 2, 1, 3, 2, 2],
    'T': [2, 2, 1, 2, 5, 3]}
For example, A occurs 1 + 1 pseudocount = 2 times in the first column, and C occurs 2 + 1 pseudocount = 3 times in the fourth column.
Here is my solution:
def CountWithPseudocounts(Motifs):
    t = len(Motifs)
    k = len(Motifs[0])
    count = {}
    for symbol in "ACGT":
        count[symbol] = [1 for j in range(k)]
    for i in range(t):
        for j in range(k):
            symbol = Motifs[i][j]
            count[symbol][j] += 1
    return count
The first for loop generates a dictionary with the keys A, C, G, T and an initial value of 1 for each column, like this:
count = {
    'A': [1, 1, 1, 1, 1, 1],
    'C': [1, 1, 1, 1, 1, 1],
    'G': [1, 1, 1, 1, 1, 1],
    'T': [1, 1, 1, 1, 1, 1]}
The second, nested pair of for loops counts the occurrences of the nucleotides and adds them to the values of the existing dictionary, as seen above.
This works and does its job, but I want to know how to further compress both for loops using dict comprehensions.
NOTE:
I am fully aware that there are a multitude of modules and libraries like Biopython, SciPy, and NumPy that could probably turn the entire function into a one-liner. The problem with modules is that their output format often doesn't match what the automated solution check from the course is expecting.
This
count = {}
for symbol in "ACGT":
    count[symbol] = [1 for j in range(k)]
can be changed to a dict comprehension as follows:
count = {symbol:[1 for j in range(k)] for symbol in "ACGT"}
and then further simplified, using Python's ability to multiply a list by an integer, to
count = {symbol:[1]*k for symbol in "ACGT"}
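To put that in context, here's a minimal sketch of your full function with the comprehension dropped in (the counting loops stay exactly as in your original):
def CountWithPseudocounts(Motifs):
    t = len(Motifs)
    k = len(Motifs[0])
    # initialize the pseudocounts with a dict comprehension
    count = {symbol: [1] * k for symbol in "ACGT"}
    for i in range(t):
        for j in range(k):
            count[Motifs[i][j]][j] += 1
    return count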
Compressing the first loop:
count = {symbol: [1 for j in range(k)] for symbol in "ACGT"}
This construct is called a dict comprehension: it generates a dict using a for loop.
I'm not sure you can compress the second (nested) loop, since it's not generating anything but rather modifying the existing dict.
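That said, here is a sketch of one way to fold both loops into a single comprehension by iterating over the columns with zip(*Motifs) and counting each symbol per column (an alternative formulation, not a compression of the nested loop itself):
def CountWithPseudocounts(Motifs):
    # zip(*Motifs) yields one tuple of characters per column;
    # tuple.count(symbol) gives the raw count, and + 1 adds the pseudocount
    return {symbol: [column.count(symbol) + 1 for column in zip(*Motifs)]
            for symbol in "ACGT"}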
You can compress your code a lot using collections.Counter and collections.defaultdict:
from collections import Counter, defaultdict

out = defaultdict(list)
bases = 'ACGT'
for m in zip(*Motifs):
    c = Counter(m)
    for b in bases:
        out[b].append(c[b] + 1)
dict(out)
Output:
{'A': [2, 3, 2, 1, 1, 3],
'C': [3, 2, 5, 3, 1, 1],
'G': [2, 2, 1, 3, 2, 2],
'T': [2, 2, 1, 2, 5, 3]}
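If the course grader expects a function with the original signature, the snippet above can be wrapped up accordingly (a sketch, keeping the same approach):
from collections import Counter, defaultdict

def CountWithPseudocounts(Motifs):
    out = defaultdict(list)
    for m in zip(*Motifs):           # one tuple per column
        c = Counter(m)
        for b in 'ACGT':
            out[b].append(c[b] + 1)  # pseudocount of 1
    return dict(out)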
You can use collections.Counter:
from collections import Counter
m = ['AACGTA', 'CCCGTT', 'CACCTT', 'GGATTA', 'TTCCGG']
d = [Counter(i) for i in zip(*m)]
r = {a:[j.get(a, 0)+1 for j in d] for a in 'ACGT'}
Output:
{'A': [2, 3, 2, 1, 1, 3], 'C': [3, 2, 5, 3, 1, 1], 'G': [2, 2, 1, 3, 2, 2], 'T': [2, 2, 1, 2, 5, 3]}
I have the following problem:
Given an array nums of n integers, are there elements a, b in nums such that a + b = 10? Find all unique couples in the array which sum to the target.
Note:
The solution set must not contain duplicate couples.
Example:
Given nums = [4, 7, 6, 3, 5], target = 10
because 4 + 6 = 7 + 3 = 10
return [[4, 6], [7, 3]]
My solution:
class SolutionAll:  # Single Pass Approach
    def twoSum(self, nums, target) -> List[List[int]]:
        """
        :type nums: List[int]
        :type target: int
        """
        nums.sort()
        nums_d: dict = {}
        couples = []
        if len(nums) < 2:
            return []
        for i in range(len(nums)):
            if i > 0 and nums[i] == nums[i-1]: continue  # skip the duplicates
            complement = target - nums[i]
            if nums_d.get(complement) != None:
                couples.append([nums[i], complement])
            nums_d[nums[i]] = i
        return couples
TestCase Results:
target: 9
nums: [4, 7, 6, 3, 5]
DEBUG complement: 6
DEBUG nums_d: {3: 0}
DEBUG couples: []
DEBUG complement: 5
DEBUG nums_d: {3: 0, 4: 1}
DEBUG couples: []
DEBUG complement: 4
DEBUG nums_d: {3: 0, 4: 1, 5: 2}
DEBUG couples: [[5, 4]]
DEBUG complement: 3
DEBUG nums_d: {3: 0, 4: 1, 5: 2, 6: 3}
DEBUG couples: [[5, 4], [6, 3]]
DEBUG complement: 2
DEBUG nums_d: {3: 0, 4: 1, 5: 2, 6: 3, 7: 4}
DEBUG couples: [[5, 4], [6, 3]]
result: [[5, 4], [6, 3]]
.
target: 2
nums: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
DEBUG complement: 2
DEBUG nums_d: {0: 0}
DEBUG couples: []
DEBUG complement: 1
DEBUG nums_d: {0: 0, 1: 9}
DEBUG couples: []
result: []
The solution works with [4, 7, 6, 3, 5] but fails with [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1].
I tried to remove the duplicates but got unexpected results.
How can I solve the problem with this single-pass solution?
The problem with your code is that it skips duplicate numbers, not duplicate pairs. Because of the
if i > 0 and nums[i] == nums[i-1]: continue #skip the duplicates
your code never tries to sum 1 + 1 = 2.
Here's a working solution with O(n) complexity:
from collections import Counter

def two_sum(nums, target):
    nums = Counter(nums)  # count how many times each number occurs
    for num in list(nums):  # iterate over a copy because we'll delete elements
        complement = target - num
        if complement not in nums:
            continue
        # if the number is its own complement, check if it
        # occurred at least twice in the input
        if num == complement and nums[num] < 2:
            continue
        yield (num, complement)
        # delete the number from the dict so that we won't
        # output any duplicate pairs
        del nums[num]
>>> list(two_sum([4, 7, 6, 3, 5], 10))
[(4, 6), (7, 3)]
>>> list(two_sum([0, 0, 0, 1, 1, 1], 2))
[(1, 1)]
See also:
collections.Counter
What does the "yield" keyword do?
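As a side note, if you would rather stay close to your original single-pass structure, a minimal sketch (with a hypothetical name two_sum_pairs, not from your code) is to de-duplicate the pairs themselves instead of skipping repeated numbers:
def two_sum_pairs(nums, target):
    seen = set()    # values encountered so far
    pairs = set()   # unique pairs, stored sorted so (a, b) and (b, a) collapse
    for x in nums:
        complement = target - x
        if complement in seen:
            pairs.add(tuple(sorted((x, complement))))
        seen.add(x)
    return [list(p) for p in pairs]
With nums = [0]*9 + [1]*6 and target = 2 this yields [[1, 1]].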
Not sure what's wrong with your solution (and not sure what's right with it either), but you can achieve this easily in a "pretty-pythonic" manner:
def func(nums, target):
    return [(a, b) for a in nums for b in nums if a + b == target]
It assumes that two tuples which differ only in the order of their elements are distinct, and that an element can be used twice in the same tuple. If the question defines things otherwise, you can filter those tuples out of the returned value.
edited: see discussion below.
from itertools import combinations
list(set([(a,b) for a,b in combinations(sorted(nums),2) if a+b == target]))
This will remove duplicates as well.
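For example, with the inputs from the question (the ordering inside the set may vary):
>>> nums = [4, 7, 6, 3, 5]
>>> target = 10
>>> from itertools import combinations
>>> list(set([(a, b) for a, b in combinations(sorted(nums), 2) if a + b == target]))
[(3, 7), (4, 6)]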
Another version:
>>> nums = [4, 7, 6, 3, 5]
>>> target = 9
>>> set((a, target-a) for a in nums if target-a in set(nums))
{(4, 5), (5, 4), (3, 6), (6, 3)}
For every element a of nums, if target-a is also in nums, we have:
a + target-a = target (obvious);
a and target-a are both in nums.
Since we loop over every a, we get all the solutions.
To get rid of the duplicates (x, y) and (y, x):
>>> set((a, target-a) for a in nums if 2*a<=target and target-a in set(nums))
{(4, 5), (3, 6)}
This works because 2*a <= target is equivalent to a <= target-a. When a > target-a and the requested conditions are met, we already have a previous b = target-a such that (b, target-b) is a solution.
I'd like to take the Cartesian product of multiple dicts, based on their keys, then sum the produced tuples, and return that as a dict. Keys that don't exist in one dict should be ignored (this constraint is ideal, but not necessary; i.e. you may assume all keys exist in all dicts if needed). Below is basically what I'm trying to achieve (example shown with two dicts). Is there a simpler way to do this, and with N dicts?
import itertools
from collections import defaultdict

def doProdSum(inp1, inp2):
    prod = defaultdict(lambda: 0)
    for key in set(list(inp1.keys()) + list(inp2.keys())):
        if key not in prod:
            prod[key] = []
        if key not in inp1 or key not in inp2:
            prod[key] = inp1[key] if key in inp1 else inp2[key]
            continue
        for values in itertools.product(inp1[key], inp2[key]):
            prod[key].append(values[0] + values[1])
    return prod
x = doProdSum({"a":[0,1,2],"b":[10],"c":[1,2,3,4]}, {"a":[1,1,1],"b":[1,2,3,4,5]})
print(x)
Output (as expected):
{'c': [1, 2, 3, 4], 'b': [11, 12, 13, 14, 15], 'a': [1, 1, 1, 2, 2, 2, 3, 3, 3]}
You can do it like this, by first reorganizing your data by key:
from collections import defaultdict
from itertools import product

def doProdSum(list_of_dicts):
    # We reorganize the data by key
    lists_by_key = defaultdict(list)
    for d in list_of_dicts:
        for k, v in d.items():
            lists_by_key[k].append(v)
    # lists_by_key looks like {'a': [[0, 1, 2], [1, 1, 1]], 'b': [[10], [1, 2, 3, 4, 5]], 'c': [[1, 2, 3, 4]]}

    # Then we generate the output
    out = {}
    for key, lists in lists_by_key.items():
        out[key] = [sum(prod) for prod in product(*lists)]
    return out
Example output:
list_of_dicts = [{"a":[0,1,2],"b":[10],"c":[1,2,3,4]}, {"a":[1,1,1],"b":[1,2,3,4,5]}]
doProdSum(list_of_dicts)
# {'a': [1, 1, 1, 2, 2, 2, 3, 3, 3],
# 'b': [11, 12, 13, 14, 15],
# 'c': [1, 2, 3, 4]}
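Since the function takes a list of dicts, it extends to any number of inputs. For example, with a hypothetical third dict added:
list_of_dicts = [{"a": [0, 1, 2], "b": [10], "c": [1, 2, 3, 4]},
                 {"a": [1, 1, 1], "b": [1, 2, 3, 4, 5]},
                 {"a": [10]}]
doProdSum(list_of_dicts)
# {'a': [11, 11, 11, 12, 12, 12, 13, 13, 13],
#  'b': [11, 12, 13, 14, 15],
#  'c': [1, 2, 3, 4]}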
Given an unordered list of values like
a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2]
How can I get the frequency of each value that appears in the list, like so?
# `a` has 4 instances of `1`, 4 of `2`, 2 of `3`, 1 of `4`, 2 of `5`
b = [4, 4, 2, 1, 2] # expected output
In Python 2.7 (or newer), you can use collections.Counter:
>>> import collections
>>> a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2]
>>> counter = collections.Counter(a)
>>> counter
Counter({1: 4, 2: 4, 5: 2, 3: 2, 4: 1})
>>> counter.values()
dict_values([2, 4, 4, 1, 2])
>>> counter.keys()
dict_keys([5, 1, 2, 4, 3])
>>> counter.most_common(3)
[(1, 4), (2, 4), (5, 2)]
>>> dict(counter)
{5: 2, 1: 4, 2: 4, 4: 1, 3: 2}
>>> # Get the counts in order matching the original specification,
>>> # by iterating over keys in sorted order
>>> [counter[x] for x in sorted(counter.keys())]
[4, 4, 2, 1, 2]
If you are using Python 2.6 or older, you can download an implementation here.
If the list is sorted, you can use groupby from the itertools standard library (if it isn't, you can just sort it first, although this takes O(n lg n) time):
from itertools import groupby
a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2]
[len(list(group)) for key, group in groupby(sorted(a))]
Output:
[4, 4, 2, 1, 2]
Python 2.7+ introduced dictionary comprehensions. Building the dictionary from the list will get you the counts as well as get rid of duplicates.
>>> a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
>>> d = {x:a.count(x) for x in a}
>>> d
{1: 4, 2: 4, 3: 2, 4: 1, 5: 2}
>>> a, b = d.keys(), d.values()
>>> a
[1, 2, 3, 4, 5]
>>> b
[4, 4, 2, 1, 2]
Count the number of appearances manually by iterating through the list and counting them up, using a collections.defaultdict to track what has been seen so far:
from collections import defaultdict

appearances = defaultdict(int)
for curr in a:
    appearances[curr] += 1
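To get the b list asked for in the question (counts ordered by value), you can then do something like:
b = [appearances[x] for x in sorted(appearances)]
# with a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2] this gives [4, 4, 2, 1, 2]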
In Python 2.7+, you could use collections.Counter to count items
>>> a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
>>>
>>> from collections import Counter
>>> c=Counter(a)
>>>
>>> c.values()
[4, 4, 2, 1, 2]
>>>
>>> c.keys()
[1, 2, 3, 4, 5]
Counting the frequency of elements is probably best done with a dictionary:
b = {}
for item in a:
    b[item] = b.get(item, 0) + 1
To remove the duplicates, use a set:
a = list(set(a))
You can do this:
import numpy as np
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
np.unique(a, return_counts=True)
Output:
(array([1, 2, 3, 4, 5]), array([4, 4, 2, 1, 2], dtype=int64))
The first array is values, and the second array is the number of elements with these values.
So if you want to get just the array with the counts, you should use this:
np.unique(a, return_counts=True)[1]
Here's another succinct alternative using itertools.groupby, which also works for unordered input:
from itertools import groupby
items = [5, 1, 1, 2, 2, 1, 1, 2, 2, 3, 4, 3, 5]
results = {value: len(list(freq)) for value, freq in groupby(sorted(items))}
results
format: {value: number_of_occurrences}
{1: 4, 2: 4, 3: 2, 4: 1, 5: 2}
I would simply use scipy.stats.itemfreq in the following manner:
from scipy.stats import itemfreq
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
freq = itemfreq(a)
a = freq[:,0]
b = freq[:,1]
you may check the documentation here: http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.stats.itemfreq.html
import numpy as np
import pandas as pd
from collections import Counter

a = ["E","D","C","G","B","A","B","F","D","D","C","A","G","A","C","B","F","C","B"]
counter = Counter(a)
kk = [list(counter.keys()), list(counter.values())]
pd.DataFrame(np.array(kk).T, columns=['Letter', 'Count'])
seta = set(a)
b = [a.count(el) for el in seta]
a = list(seta) #Only if you really want it.
Suppose we have a list:
fruits = ['banana', 'banana', 'apple', 'banana']
We can find out how many of each fruit we have in the list like so:
import numpy as np
(unique, counts) = np.unique(fruits, return_counts=True)
{x:y for x,y in zip(unique, counts)}
Result:
{'banana': 3, 'apple': 1}
This answer is more explicit
a = [1,1,1,1,2,2,2,2,3,3,3,4,4]
d = {}
for item in a:
    if item in d:
        d[item] = d.get(item) + 1
    else:
        d[item] = 1
for k, v in d.items():
    print(str(k) + ':' + str(v))
# output
# 1:4
# 2:4
# 3:3
# 4:2

# remove dups
d = set(a)
print(d)
# {1, 2, 3, 4}
For your first question, iterate the list and use a dictionary to keep track of how often each element has been seen.
For your second question, just use the set() constructor, as sketched below.
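A minimal sketch of both steps (the variable names are just for illustration):
a = [5, 1, 2, 2, 4, 3, 1, 2, 3, 1, 1, 5, 2]

# first question: count occurrences while iterating
counts = {}
for item in a:
    counts[item] = counts.get(item, 0) + 1  # counts maps each value to its frequency

# second question: remove duplicates with the set() constructor
unique_values = list(set(a))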
def frequencyDistribution(data):
    return {i: data.count(i) for i in data}

print(frequencyDistribution([1, 2, 3, 4]))
...
{1: 1, 2: 1, 3: 1, 4: 1}  # originalNumber: count
I am quite late, but this will also work, and will help others:
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
freq_list = []
a_l = list(set(a))
for x in a_l:
    freq_list.append(a.count(x))

print('Freq', freq_list)
print('number', a_l)
will produce this:
Freq [4, 4, 2, 1, 2]
number [1, 2, 3, 4, 5]
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
counts = dict.fromkeys(a, 0)
for el in a: counts[el] += 1
print(counts)
# {1: 4, 2: 4, 3: 2, 4: 1, 5: 2}
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]

# 1. Get counts and store in another list
output = []
for i in set(a):
    output.append(a.count(i))
print(output)

# 2. Remove duplicates using set constructor
a = list(set(a))
print(a)
A set does not allow duplicates, so passing a list to the set() constructor gives an iterable of unique objects. The count() function returns the number of times an object occurs in a list. With that, the unique objects are counted and each count value is stored by appending it to the empty list output.
The list() constructor is then used to convert set(a) back into a list, referred to by the same variable a.
Output
[4, 4, 2, 1, 2]
[1, 2, 3, 4, 5]
Simple solution using a dictionary.
def frequency(l):
    d = {}
    for i in l:
        if i in d.keys():
            d[i] += 1
        else:
            d[i] = 1
    for k, v in d.items():
        if v == max(d.values()):
            return k, d.keys()

print(frequency([10, 10, 10, 10, 20, 20, 20, 20, 40, 40, 50, 50, 30]))
#!/usr/bin/python

def frq(words):
    freq = {}
    for w in words:
        if w in freq:
            freq[w] = freq.get(w) + 1
        else:
            freq[w] = 1
    return freq

fp = open("poem", "r")
text = fp.read()    # renamed from 'list' to avoid shadowing the built-in
fp.close()

words = text.split()
print(words)

d = frq(words)
print("frequency of input:")
print(d)

fp1 = open("output.txt", "w+")
for k, v in d.items():
    fp1.write(str(k) + ':' + str(v) + "\n")
fp1.close()
from collections import OrderedDict

a = [1,1,1,1,2,2,2,2,3,3,4,5,5]

def get_count(lists):
    dictionary = OrderedDict()
    for val in lists:
        dictionary.setdefault(val, []).append(1)
    return [sum(val) for val in dictionary.values()]

print(get_count(a))
# [4, 4, 2, 1, 2]
To remove duplicates and maintain order:
list(dict.fromkeys(get_count(a)))
# [4, 2, 1]
I'm using Counter to generate a frequency dict from the words in a text file in one line of code:
import re
from collections import Counter

def _fileIndex(fh):
    '''Create a dict using Counter of a flat list of words
    (re.findall(re.compile(r"[a-zA-Z]+"), lines)) in (lines in file -> for lines in fh)
    '''
    return Counter(
        [wrd.lower() for wrdList in
            [words for words in
                [re.findall(re.compile(r'[a-zA-Z]+'), lines) for lines in fh]]
         for wrd in wrdList])
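Hypothetical usage, assuming a text file named "poem" as in the answer above:
with open("poem") as fh:
    word_freq = _fileIndex(fh)
print(word_freq.most_common(5))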
For the record, a functional answer:
>>> L = [1,1,1,1,2,2,2,2,3,3,4,5,5]
>>> import functools
>>> functools.reduce(lambda acc, e: [v+(i==e) for i, v in enumerate(acc,1)] if e<=len(acc) else acc+[0 for _ in range(e-len(acc)-1)]+[1], L, [])
[4, 4, 2, 1, 2]
It's cleaner if you count zeroes too:
>>> functools.reduce(lambda acc, e: [v+(i==e) for i, v in enumerate(acc)] if e<len(acc) else acc+[0 for _ in range(e-len(acc))]+[1], L, [])
[0, 4, 4, 2, 1, 2]
An explanation:
we start with an empty acc list;
if the next element e of L is lower than the size of acc, we just update this element: v+(i==e) means v+1 if the index i of acc is the current element e, otherwise the previous value v;
if the next element e of L is greater than or equal to the size of acc, we have to expand acc to host the new 1.
Unlike with itertools.groupby, the elements do not have to be sorted. But you'll get weird results if you have negative numbers.
Another approach, albeit one using a heavier but powerful library: NLTK.
import nltk
fdist = nltk.FreqDist(a)
fdist.values()
fdist.most_common()
Found another way of doing this, using sets.
# ar is the list of elements
# convert ar to a set to get the unique elements
sock_set = set(ar)

# create a dictionary of the frequency of socks
sock_dict = {}
for sock in sock_set:
    sock_dict[sock] = ar.count(sock)
For an unordered list you should use:
[a.count(el) for el in set(a)]
The output is
[4, 4, 2, 1, 2]
Yet another solution with another algorithm without using collections:
def countFreq(A):
    n = len(A)
    count = [0] * (max(A) + 1)      # a new list of zeros, indexed by value (assumes non-negative integers)
    for i in range(n):
        count[A[i]] += 1            # increase occurrence for value A[i]
    return [x for x in count if x]  # return non-zero counts
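For example:
>>> countFreq([1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 4, 5, 5])
[4, 4, 2, 1, 2]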
num = [3, 2, 3, 5, 5, 3, 7, 6, 4, 6, 7, 2]
print('\nelements are:\t', num)

count_dict = {}
for elements in num:
    count_dict[elements] = num.count(elements)

print('\nfrequency:\t', count_dict)
You can use the built-in list method provided in Python:
l.count(l[i])
# l is the input list
d = []
for i in range(len(l)):
    if l[i] not in d:
        d.append(l[i])
        print(l.count(l[i]))
The above code automatically removes duplicates from the list and also prints the frequency of each element of the original list.
Two birds with one stone!
This approach can be tried if you don't want to use any library and keep it simple and short!
a = [1,1,1,1,2,2,2,2,3,3,4,5,5]
marked = []
b = [(a.count(i), marked.append(i))[0] for i in a if i not in marked]
print(b)
Output:
[4, 4, 2, 1, 2]