How to vectorize dict_keys to list conversion? - python

I have a 2D array of Counter objects and want to convert it to a 3D array.
Y = array([[Counter({'Q': 1, 'S': 1}),
            Counter({'Z': 2, 'P': 2, 'L': 1})],
           [Counter({'I': 2}),
            Counter({'R': 2, 'Q': 1})]], dtype=object)
I have vectorKeys, which gives me
vectorKeys(Y) = array([[dict_keys(['Q', 'S']),
                        dict_keys(['Z', 'P', 'L'])],
                       [dict_keys(['I']),
                        dict_keys(['R', 'Q'])]], dtype=object)
and I want to get
Y = array([[['Q', 'S'],
            ['Z', 'P', 'L']],
           [['I'],
            ['R', 'Q']]], dtype=object)
I tried to get it with
vectorList = np.vectorize(list)
vectorNPArray = np.vectorize(np.array)
but neither works.
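One approach worth sketching here (my own addition, not from the original thread): np.vectorize tries to infer an output dtype and build a new array from the per-element results, which mangles list-valued results, while np.frompyfunc always returns an object array that keeps each list intact.

```python
import numpy as np
from collections import Counter

Y = np.array([[Counter({'Q': 1, 'S': 1}), Counter({'Z': 2, 'P': 2, 'L': 1})],
              [Counter({'I': 2}), Counter({'R': 2, 'Q': 1})]], dtype=object)

# frompyfunc(func, nin, nout) wraps a Python function as a ufunc that
# always produces object arrays, so list-valued cells survive intact.
to_key_list = np.frompyfunc(lambda c: list(c.keys()), 1, 1)
result = to_key_list(Y)
print(result)  # 2x2 object array whose cells are lists of keys
```

The same effect is possible with np.vectorize by passing otypes=[object], but frompyfunc avoids having to specify it.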

Related

Python, how to find patterns of different lengths and sum the number of matches

I have a list like this:
hg = [['A1'], ['A1b'], ['A1b1a1a2a1a~'], ['BT'], ['CF'], ['CT'], ['F'], ['GHIJK'], ['I'], ['I1a2a1a1d2a1a~'], ['I2'], ['I2~'], ['I2a'], ['I2a1'], ['I2a1a'], ['I2a1a2'], ['I2a1a2~'], ['IJ'], ['IJK'], ['L1a2']]
For example, if we look at ['A1'], ['A1b'] and ['A1b1a1a2a1a~'],
I want to count how many times the patterns 'A1', 'A1b' and 'A1b1a1a2a1a~' occur.
Basically, A1 appears 3 times (A1 itself, A1 in A1b, and A1 in A1b1a1a2a1a), A1b two times (A1b itself and A1b in A1b1a1a2a1a), and A1b1a1a2a1a one time. Obviously, I want to do that for the entire list.
However, if the list contains, for example, E1b1a1, I don't want to count a match of A1 in E1b1a1.
So what I did is:
dic_test = {}
for i in hg:
    for j in hg:
        if ''.join(i) in ''.join(j):
            if ''.join(i) not in dic_test.keys():
                dic_test[''.join(i)] = 1
            else:
                dic_test[''.join(i)] += 1
print(dic_test)
output:
{'A1': 3, 'A1b': 2, 'A1b1a1a2a1a~': 1, 'BT': 1, 'CF': 1, 'CT': 1, 'F': 2, 'GHIJK': 1, 'I': 12, 'I1a2a1a1d2a1a~': 1, 'I2': 7, 'I2~': 1, 'I2a': 5, 'I2a1': 4, 'I2a1a': 3, 'I2a1a2': 2, 'I2a1a2~': 1, 'IJ': 3, 'IJK': 2, 'L1a2': 1}
However, as explained above, there is one issue: for example, F should equal 1, not 2. The reason is that with the code above, I look for F anywhere in the list. But I don't know how to correct that!
There is a second thing that I don't know how to do:
Based on the output:
{'A1': 3, 'A1b': 2, 'A1b1a1a2a1a~': 1, 'BT': 1, 'CF': 1, 'CT': 1, 'F': 2, 'GHIJK': 1, 'I': 12, 'I1a2a1a1d2a1a~': 1, 'I2': 7, 'I2~': 1, 'I2a': 5, 'I2a1': 4, 'I2a1a': 3, 'I2a1a2': 2, 'I2a1a2~': 1, 'IJ': 3, 'IJK': 2, 'L1a2': 1}
I would like to sum the values of the dict based on shared patterns.
Example of the desired output:
{'A1b1a1a2a1a~': 6, 'BT': 1, 'CF': 1, 'CT': 1, 'F': 1, 'GHIJK': 1, 'I1a2a1a1d2a1a~': 13, 'I2a1a2': 35, 'IJK': 5}
For example, A1b1a1a2a1a = 6 because it is made up of A1, which has a value of 3, A1b with a value of 2, and A1b1a1a2a1a itself with a value of 1.
I don't know how to do that.
Any helps will be much appreciated!
Thanks
You count 'F' twice because you are iterating over the product of hg with itself, so the condition if ''.join(i) in ''.join(j) fires twice for 'F'. I solved that by checking the indexes.
You mentioned in a comment that the pattern should be at the beginning of the string, so in doesn't work here. You can use .startswith() for that.
I first created a dictionary from the items, but sorted (that's important for your second question about summing the values). They all start with the value of 1. Then I iterated over the items and increased the value only if they are not in the same position.
For the second part of your question, because the keys are sorted, only the previous items can be at the beginning of the next items. So I got the pairs with .popitem(), which hands back the last pair (in Python 3.7 and above), and checked its previous ones until the dictionary was empty.
hg = [['A1'], ['A1b'], ['A1b1a1a2a1a~'], ['BT'], ['CF'], ['CT'], ['F'], ['GHIJK'], ['I'], ['I1a2a1a1d2a1a~'], ['I2'], ['I2~'], ['I2a'], ['I2a1'], ['I2a1a'], ['I2a1a2'], ['I2a1a2~'], ['IJ'], ['IJK'], ['L1a2']]

# Create a sorted dictionary of all items, each with the value of 1.
d = dict.fromkeys((item[0] for item in sorted(hg)), 1)
for idx1, (k, v) in enumerate(d.items()):
    for idx2, item in enumerate(hg):
        if idx1 != idx2 and item[0].startswith(k):
            d[k] += 1
print(d)
print("-----------------------------------")

# Last pair in `d`.
k, v = d.popitem()
result = {k: v}
while d:
    # Pop the last pair in `d`.
    k1, v1 = d.popitem()
    # Get the last pair in `result`.
    k2, v2 = next(reversed(result.items()))
    if k2.startswith(k1):
        result[k2] += v1
    else:
        result[k1] = v1
print({k: result[k] for k in reversed(result)})
output:
{'A1': 3, 'A1b': 2, 'A1b1a1a2a1a~': 1, 'BT': 1, 'CF': 1, 'CT': 1, 'F': 1, 'GHIJK': 1, 'I': 11, 'I1a2a1a1d2a1a~': 1, 'I2': 7, 'I2a': 6, 'I2a1': 5, 'I2a1a': 4, 'I2a1a2': 3, 'I2a1a2~': 2, 'I2~': 2, 'IJ': 2, 'IJK': 1, 'L1a2': 1}
-----------------------------------
{'A1b1a1a2a1a~': 6, 'BT': 1, 'CF': 1, 'CT': 1, 'F': 1, 'GHIJK': 1, 'I1a2a1a1d2a1a~': 12, 'I2a1a2~': 27, 'I2~': 2, 'IJK': 3, 'L1a2': 1}
I think you made a mistake in your expected result and it should be like this, but let me know if mine is wrong.
@S.B helped me to better understand what I wanted to do, so I made some modifications to the second part of the script.
I converted the dictionary "d" (re-named "hg_d") into a list of list:
hg_d_to_list = list(map(list, hg_d.items()))
Then, I created a dictionary where the keys are the words and the values are the lists of words that match with startswith(), like:
from collections import defaultdict

nested_HGs = defaultdict(list)
for i in range(len(hg_d_to_list)):
    for j in range(i + 1, len(hg_d_to_list)):
        if hg_d_to_list[j][0].startswith(hg_d_to_list[i][0]):
            nested_HGs[hg_d_to_list[j][0]].append(hg_d_to_list[i][0])
nested_HGs is now:
defaultdict(<class 'list'>, {'A1b': ['A1'], 'A1b1a1a2a1a': ['A1', 'A1b'], 'I1a2a1a1d2a1a~': ['I'], 'I2': ['I'], 'I2a': ['I', 'I2'], 'I2a1': ['I', 'I2', 'I2a'], 'I2a1a': ['I', 'I2', 'I2a', 'I2a1'], 'I2a1a2': ['I', 'I2', 'I2a', 'I2a1', 'I2a1a'], 'I2a1a2~': ['I', 'I2', 'I2a', 'I2a1', 'I2a1a', 'I2a1a2'], 'I2~': ['I', 'I2'], 'IJ': ['I'], 'IJK': ['I', 'IJ']})
Then, I summed each key and its associated value(s) in the dictionary "nested_HGs" based on the values of the dictionary "hg_d", like:
HGs_score = {}
for key, val in hg_d.items():
    for key2, val2 in nested_HGs.items():
        if key in val2 or key in key2:
            if key2 not in HGs_score.keys():
                HGs_score[key2] = val
            else:
                HGs_score[key2] += val
HGs_score is now:
{'A1b': 5, 'A1b1a1a2a1a': 6, 'I1a2a1a1d2a1a~': 12, 'I2': 18, 'I2a': 24, 'I2a1': 29, 'I2a1a': 33, 'I2a1a2': 36, 'I2a1a2~': 38, 'I2~': 20, 'IJ': 13, 'IJK': 14}
Here, I realized that I don't care about keys with a value of 1.
To finish, I get the key of the dictionary that has the highest value:
final_HG_classification = max(HGs_score, key=HGs_score.get)
which gives final_HG_classification = 'I2a1a2~'.
It looks like it's working! Any suggestions or improvements are more than welcome.
Thanks in advance.

How to perform letter frequency analysis?

This problem requires me to find the frequency analysis of a .txt file.
This is my code so far:
This finds the frequency of the words, but how would I get the frequency of the actual letters?
f = open('cipher.txt', 'r')
word_count = []
for c in f:
    word_count.append(c)
word_count.sort()
decoding = {}
for i in word_count:
    decoding[i] = word_count.count(i)
for n in decoding:
    print(decoding)
This outputs (as a short example, since the txt file is pretty long):
{'\n': 12, 'vlvf zev jvg jrgs gvzef\n': 1, 'z uvfgriv sbhfv bu wboof!\n': 1, "gsv yrewf zoo nbhea zaw urfsvf'\n": 1, 'xbhow ube gsv avj bjave yv\n': 1, ' gsv fcerat rf czffrat -\n': 1, 'viva gsrf tezff shg\n': 1, 'bph ab sbfbnrxsr (azeebj ebzw gb gsv wvvc abegs)\n': 1, 'cbfg rafrwv gsv shg.\n': 1, 'fb gszg lvze -- gsv fvxbaw lvze bu tvaebph [1689] -- r szw fhwwvaol gzpva\n': 1, 'fb r czgxsvw hc nl gebhfvef, chg avj xbewf ra nl fgezj szg, zaw\n': 1, 'fcrergf bu gsv ebzw yvxpbavw nv, zaw r xbhow abg xbaxvagezgv ba zalgsrat.\n': 1, 'fgbbw zg gsv xebffebzwf bu czegrat, r jvcg tbbwylv.\n': 1,
It gives me the words, but how would I get the letters, such as how many "a"'s there are, or how many "b"'s there are?
Counter is quite a useful class native to Python, which can be used to solve your problem elegantly.
# count the letter frequency
from collections import Counter

with open('cipher.txt', 'r') as f:
    s = f.read()
c = Counter(s)  # the type of c is collections.Counter
# if you want dict as your output type
decoding = dict(c)
print(decoding)
If you put "every parting from you is like a little eternity" to your cipher.txt, you'll get the following result with the code above:
{'e': 6, 'v': 1, 'r': 4, 'y': 3, ' ': 8, 'p': 1, 'a': 2, 't': 5, 'i': 5, 'n': 2, 'g': 1, 'f': 1, 'o': 2, 'm': 1, 'u': 1, 's': 1, 'l': 3, 'k': 1}
However, if you want to implement the counting by yourself, here's a possible solution, providing the same result as using Counter.
# count the letter frequency, manually, without using collections.Counter
with open('cipher.txt', 'r') as f:
    s = f.read()
decoding = {}
for c in s:
    if c in decoding:
        decoding[c] += 1
    else:
        decoding[c] = 1
print(decoding)
You can use a Counter from the collections standard library, it'll generate a dictionary of results:
from collections import Counter

s = """
This problem requires me to find the frequency analysis of a .txt file.
This is my code so far: This finds the frequency of the words, but how would I get the frequency of the actual letters?"""
c = Counter(s)
print(c.most_common(5))
This will print:
[(' ', 35), ('e', 20), ('t', 13), ('s', 11), ('o', 10)]
EDIT: Without using a Counter, we can use a dictionary and keep incrementing the count:
c = {}
for character in s:
    try:
        c[character] += 1
    except KeyError:
        c[character] = 1
print(c)
This will print:
{'\n': 4, 'T': 3, 'h': 9, 'i': 9, 's': 11, ' ': 35, 'p': 1, 'r': 9, 'o': 10, 'b': 2, 'l': 6, 'e': 20, 'm': 3, 'q': 4, 'u': 7, 't': 13, 'f': 10, 'n': 6, 'd': 5, 'c': 5, 'y': 5, 'a': 6, '.': 2, 'x': 1, ':': 1, 'w': 3, ',': 1, 'I': 1, 'g': 1, '?': 1}
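As a side note (my own addition, not from the original answers): both outputs above include whitespace and punctuation. If you only want letter frequencies, you can filter before counting:

```python
from collections import Counter

s = "This is my code so far: This finds the frequency of the words."
# Keep only alphabetic characters, case-folded, so spaces, newlines
# and punctuation never appear in the tally.
letter_counts = Counter(ch for ch in s.lower() if ch.isalpha())
print(letter_counts)
```

Counter returns 0 for any character it never saw, so lookups like letter_counts['z'] are safe even for absent letters.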

How do I work out the checksum of a Singapore car license plate with Python?

I have researched and searched the internet for the checksum of the Singapore car license plate. For the license plate SBA 1234, I need to convert all the characters excluding the S to numbers: A being 1, B being 2, and so on. SBA 1234 is a string in text format. How do I convert B and A to numbers for the checksum calculation while making sure that the values B and A themselves do not change? The conversion of B and A to numbers is only for the calculation.
How do I do this conversion with Python? Please help out. Thank you.
There are multiple ways to create a dictionary mapping A through Z to the values 1 through 26. One simple way:
value = dict(zip("ABCDEFGHIJKLMNOPQRSTUVWXYZ", range(1, 27)))
An alternative uses the ord() function: ord('A') is 65, so you can build the same dictionary with a comprehension like this.
atoz = {chr(i): i - 64 for i in range(ord("A"), ord("A") + 26)}
This produces the dictionary
{'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7, 'H': 8, 'I': 9, 'J': 10, 'K': 11, 'L': 12, 'M': 13, 'N': 14, 'O': 15, 'P': 16, 'Q': 17, 'R': 18, 'S': 19, 'T': 20, 'U': 21, 'V': 22, 'W': 23, 'X': 24, 'Y': 25, 'Z': 26}
You can look up a character in the dictionary to get 1 through 26.
Alternatively, you can use ord(x) - 64 directly to get the value of a letter: if x is 'A' you get 1, and if x is 'Z' the value is 26.
So you can write the code directly to calculate the values for the Singapore number plate as:
snp = 'SBA 1234'
then you can get the values (note that all four digits are needed, including snp[7]):
snp_num = [ord(snp[1]) - 64, ord(snp[2]) - 64, int(snp[4]), int(snp[5]), int(snp[6]), int(snp[7])]
This results in
[2, 1, 1, 2, 3, 4]
I hope one of these options works for you. Then use the checksum function to do your calculation. I hope this is what you are looking for.
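For completeness, here is a sketch of the full checksum as it is commonly described in public write-ups of the scheme. The weights 9, 4, 5, 4, 3, 2 and the 19-letter lookup table are assumptions taken from those descriptions, not from an official source or from the question itself:

```python
def plate_checksum(plate):
    """Checksum letter for a plate like 'SBA 1234'.

    Assumes the commonly described scheme: the last two prefix letters
    (A=1 .. Z=26), the digits padded to four places, fixed weights
    9, 4, 5, 4, 3, 2, and a 19-letter table indexed by the sum mod 19.
    """
    letters = ''.join(ch for ch in plate if ch.isalpha())
    digits = ''.join(ch for ch in plate if ch.isdigit())
    # Only the last two prefix letters count; '@' (ord 64) pads to value 0
    # for single-letter prefixes.
    letter_vals = [ord(ch) - 64 for ch in letters[-2:].rjust(2, '@')]
    digit_vals = [int(d) for d in digits.rjust(4, '0')]
    weights = [9, 4, 5, 4, 3, 2]
    total = sum(v * w for v, w in zip(letter_vals + digit_vals, weights))
    return 'AZYXUTSRPMLKJHGEDCB'[total % 19]

print(plate_checksum('SBA 1234'))  # -> G
```

For 'SBA 1234' the weighted sum is 2*9 + 1*4 + 1*5 + 2*4 + 3*3 + 4*2 = 52, and 52 mod 19 = 14, which indexes 'G' in the table.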

Simpler method for performing replacement on multiple columns at a time?

import pandas as pd
w=pd.read_csv('w.csv')
This takes sections of a CSV and adds them up. Two columns require numerical conversion.
w["Social Media Use Score"]=w.iloc[:,[6,7,8,9,10,11,12,13,14,15,16]].sum(axis=1)
This switches Yes or No in this section to 1 or 0 and adds them up; the other section switches ABCD to 1234 and sums.
w['Q1'], w['Q3'], w['Q6'] = w['Q1'].map({'No': 1, 'Yes': 0}), \
    w['Q3'].map({'No': 1, 'Yes': 0}), \
    w['Q6'].map({'No': 1, 'Yes': 0})
w['Q2'], w['Q4'], w['Q5'], w['Q7'], w['Q8'], w['Q9'], w['Q10'] = \
    w['Q2'].map({'Yes': 1, 'No': 0}), \
    w['Q4'].map({'Yes': 1, 'No': 0}), \
    w['Q5'].map({'Yes': 1, 'No': 0}), \
    w['Q7'].map({'Yes': 1, 'No': 0}), \
    w['Q8'].map({'Yes': 1, 'No': 0}), \
    w['Q9'].map({'Yes': 1, 'No': 0}), \
    w['Q10'].map({'Yes': 1, 'No': 0})
w["Anxiety Score"]=w.iloc[:,[17,18,19,20,21,22,23,24,25,26]].sum(axis=1)
w['d1'], w['d2'], w['d3'], w['d4'], w['d5'], w['d6'], w['d7'], w['d8'], w['d9'], w['d10'] = \
    w['d1'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4}), \
    w['d2'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4}), \
    w['d3'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4}), \
    w['d4'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4}), \
    w['d5'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4}), \
    w['d6'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4}), \
    w['d7'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4}), \
    w['d8'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4}), \
    w['d9'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4}), \
    w['d10'].map({'A': 1, 'B': 2, 'C': 3, 'D': 4})
w['Depression Score']=w.iloc[:,[27,28,29,30,31,32,33,34,35,36]].sum(axis=1)
w.to_csv("foranal.csv")
If you want to perform replacement on multiple columns simultaneously, you should use df.replace (it is slower than map, so use it only if you can afford to).
# Mapping for replacement.
repl_dict = {'A':1, 'B':2,'C':3, 'D':4}
repl_dict.update({'Yes':1, 'No':0})
# Generate the list of columns to perform replace on.
cols = [f'{x}{y}' for x in ('Q','d') for y in range(1, 11)]
w[cols] = w[cols].replace(repl_dict)
# Fix values for special columns.
w.loc[:, ['Q1', 'Q3', 'Q6']] = 1 - w.loc[:, ['Q1', 'Q3', 'Q6']]
"Social Media Use Score" and "Anxiety Score" are fine.

Converting list to aggregated list

So I have a list like this:
date, type
29-5-2017, x
30-5-2017, x
31-5-2017, y
1-6-2017, z
2-6-2017, z
3-6-2017, y
28-5-2017, y
29-5-2017, z
30-5-2017, z
31-5-2017, y
1-6-2017, z
2-6-2017, z
3-6-2017, x
29-5-2017, x
30-5-2017, z
31-5-2017, z
1-6-2017, y
2-6-2017, x
3-6-2017, z
4-6-2017, y
How would I create an aggregated version of this list? So I get each date only once, and see how many of each type there are on a given date.
Like this:
date, no._of_x, no._of_y, no._of_z
28-5-2017, 0, 1, 0
29-5-2017, 2, 0, 1
30-5-2017, 1, 0, 2
31-5-2017, 0, 2, 1
1-6-2017, 0, 1, 2
2-6-2017, 1, 0, 2
3-6-2017, 1, 1, 1
4-6-2017, 0, 1, 0
Assuming that your list is a list of lists, with each sublist containing a date string and one of x, y or z, you can first group your list by dates. Just create a dictionary, or collections.defaultdict, mapping dates to lists of x/y/z.
import collections

dates = collections.defaultdict(list)
for date, xyz in data:
    dates[date].append(xyz)
Next, you can create another dictionary, mapping each of those x/y/z lists to a Counter dict:
counts = {date: collections.Counter(xyz) for date, xyz in dates.items()}
Afterwards, counts is this:
{'2-6-2017,': Counter({'z': 2, 'x': 1}),
'29-5-2017,': Counter({'x': 2, 'z': 1}),
'3-6-2017,': Counter({'x': 1, 'y': 1, 'z': 1}),
'28-5-2017,': Counter({'y': 1}),
'1-6-2017,': Counter({'z': 2, 'y': 1}),
'4-6-2017,': Counter({'y': 1}),
'31-5-2017,': Counter({'y': 2, 'z': 1}),
'30-5-2017,': Counter({'z': 2, 'x': 1})}
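To get from counts to the aggregated table in the question, one possible final step (my own sketch, using a small sample in place of the full data) is to print one row per date with a fixed column order; since Counter returns 0 for missing keys, absent types come out as 0 automatically:

```python
from collections import defaultdict, Counter

# Small sample in the same shape as the question's data: (date, type) pairs.
data = [('29-5-2017', 'x'), ('30-5-2017', 'x'), ('31-5-2017', 'y'),
        ('29-5-2017', 'z'), ('30-5-2017', 'z'), ('31-5-2017', 'y')]

dates = defaultdict(list)
for date, xyz in data:
    dates[date].append(xyz)
counts = {date: Counter(xyz) for date, xyz in dates.items()}

# One row per date; Counter's zero-default fills in the missing types.
rows = [(date, c['x'], c['y'], c['z']) for date, c in counts.items()]
print('date, no._of_x, no._of_y, no._of_z')
for row in rows:
    print('{}, {}, {}, {}'.format(*row))
```

Sorting the rows by parsed date (e.g. with datetime.strptime) would reproduce the chronological order shown in the desired output.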
