I have a table like this:

Group  Item
A      a, b, c
B      b, c, d
And I want to convert it to this:

Item  Group
a     A
b     A, B
c     A, B
d     B
What is the best way to achieve this?
Thank you!!
If you are working in pandas, you can use explode to unpack the items, then group by item and collect the groups back into lists with a tolist lambda.
Here is the documentation for the explode method: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html
import pandas as pd
df = pd.DataFrame(data={'Group': ['A', 'B'], 'Item': [['a','b','c'], ['b','c','d']]})
Exploding
df.explode('Item').reset_index(drop=True).to_dict(orient='records')
[{'Group': 'A', 'Item': 'a'},
 {'Group': 'A', 'Item': 'b'},
 {'Group': 'A', 'Item': 'c'},
 {'Group': 'B', 'Item': 'b'},
 {'Group': 'B', 'Item': 'c'},
 {'Group': 'B', 'Item': 'd'}]
Exploding and then aggregating with a tolist lambda
df.explode('Item').groupby('Item')['Group'].apply(lambda x: x.tolist()).reset_index().to_dict(orient='records')
[{'Item': 'a', 'Group': ['A']},
 {'Item': 'b', 'Group': ['A', 'B']},
 {'Item': 'c', 'Group': ['A', 'B']},
 {'Item': 'd', 'Group': ['B']}]
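Note that groupby aggregation also accepts list directly, so the lambda is not strictly needed:

df.explode('Item').groupby('Item')['Group'].agg(list).reset_index().to_dict(orient='records')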
Not the most efficient (it rescans the whole table once per distinct item), but very short:
>>> table = {'A': ['a', 'b', 'c'], 'B': ['b', 'c', 'd']}
>>> reversed_table = {v: [k for k, vs in table.items() if v in vs] for v in set(v for vs in table.values() for v in vs)}
>>> print(reversed_table)
{'b': ['A', 'B'], 'c': ['A', 'B'], 'd': ['B'], 'a': ['A']}
With dictionaries, you would typically approach it like this:
table = {'A': ['a', 'b', 'c'], 'B': ['b', 'c', 'd']}
revtable = dict()
for group, items in table.items():
    for item in items:
        revtable.setdefault(item, []).append(group)

print(revtable)
# {'a': ['A'], 'b': ['A', 'B'], 'c': ['A', 'B'], 'd': ['B']}
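The same reversal reads slightly cleaner with collections.defaultdict, which removes the setdefault call (a minimal variant of the loop above):

from collections import defaultdict

table = {'A': ['a', 'b', 'c'], 'B': ['b', 'c', 'd']}
revtable = defaultdict(list)
for group, items in table.items():
    for item in items:
        revtable[item].append(group)

print(dict(revtable))
# {'a': ['A'], 'b': ['A', 'B'], 'c': ['A', 'B'], 'd': ['B']}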
Assuming that your tables are in the form of a pandas dataframe, you could try something like this:
import pandas as pd
import numpy as np
# Create initial dataframe
data = {'Group': ['A', 'B'], 'Item': [['a','b','c'], ['b','c','d']]}
df = pd.DataFrame(data=data)
  Group       Item
0     A  [a, b, c]
1     B  [b, c, d]
# Expand number of rows based on list column ("Item") contents
list_col = 'Item'
df = pd.DataFrame({
    col: np.repeat(df[col].values, df[list_col].str.len())
    for col in df.columns.drop(list_col)
}).assign(**{list_col: np.concatenate(df[list_col].values)})[df.columns]
  Group Item
0     A    a
1     A    b
2     A    c
3     B    b
4     B    c
5     B    d
*The snippet above is taken from here, which includes a more detailed explanation of the code.
# Perform groupby operation
df = df.groupby('Item')['Group'].apply(list).reset_index(name='Group')
  Item   Group
0    a     [A]
1    b  [A, B]
2    c  [A, B]
3    d     [B]
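Note: on pandas 0.25 or newer, the row-expansion step above can be replaced by the built-in explode method, which shortens the whole approach to:

df = pd.DataFrame(data=data)
df = df.explode('Item').reset_index(drop=True)
df = df.groupby('Item')['Group'].apply(list).reset_index(name='Group')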
From a dataframe structured like this:
   A  B
0  1  2
1  3  4
I need to get a list like this:
[{"A": 1, "B": 2}, {"A": 3, "B": 4}]
It looks like you want each row as a record; df.values.tolist() gives the row values as lists.
example:
df = pd.DataFrame([['A', 'B', 'C'],
                   ['D', 'E', 'F']])
df.values.tolist()
output:
[['A', 'B', 'C'],
 ['D', 'E', 'F']]
other options
df.T.to_dict('list')
{0: ['A', 'B', 'C'],
 1: ['D', 'E', 'F']}
df.to_dict('records')
[{0: 'A', 1: 'B', 2: 'C'},
 {0: 'D', 1: 'E', 2: 'F'}]
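Note that with named columns, to_dict('records') is the variant that produces exactly the list of dicts you asked for, i.e. [{'A': 1, 'B': 2}, {'A': 3, 'B': 4}] for your original frame.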
I have a df after read_excel where some of the values in one string column are split across several rows. How can I merge them back?
for example:
the df I have:
{'CODE': ['A', None, 'B', None, None, 'C'],
 'TEXT': ['A', 'a', 'B', 'b', 'b', 'C'],
 'NUMBER': ['1', None, '2', None, None, '3']}
the df I want:
{'CODE': ['A', 'B', 'C'],
 'TEXT': ['Aa', 'Bbb', 'C'],
 'NUMBER': ['1', '2', '3']}
I can't find the right solution. I tried importing the data in different ways, but that did not help either.
You can forward fill the missing values (Nones) in CODE to form groups, then aggregate TEXT with join and take the first non-None value for the NUMBER column:
import pandas as pd

d = {'CODE': ['A', None, 'B', None, None, 'C'],
     'TEXT': ['A', 'a', 'B', 'b', 'b', 'C'],
     'NUMBER': ['1', None, '2', None, None, '3']}
df = pd.DataFrame(d)

df1 = df.groupby(df['CODE'].ffill()).agg({'TEXT': ''.join, 'NUMBER': 'first'}).reset_index()
print(df1)
  CODE TEXT NUMBER
0    A   Aa      1
1    B  Bbb      2
2    C    C      3
If there are more columns, you can generate the aggregation dictionary programmatically:
cols = df.columns.difference(['CODE'])
d1 = dict.fromkeys(cols, 'first')
d1['TEXT'] = ''.join
df1 = df.groupby(df['CODE'].ffill()).agg(d1).reset_index()
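This builds the same mapping as before ({'NUMBER': 'first', 'TEXT': ''.join}) while scaling to any number of columns that only need their first non-None value kept.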
I am grouping and counting a set of data.
import numpy as np
import pandas as pd
df = pd.DataFrame({'key': ['A', 'B', 'A'],
                   'data': np.ones(3,)})
df.groupby('key').count()
outputs
     data
key
A       2
B       1
The code above works, but I wonder if there is something simpler; 'data': np.ones(3,) seems to be just a placeholder, yet it appears to be indispensable.
pd.DataFrame(['A', 'B', 'A']).groupby(0).count()
outputs only the group labels, since there is no remaining column to count:

A
B
My question is: is there a simpler way to produce the counts of 'A' and 'B' without a placeholder like 'data': np.ones(3,)? It doesn't have to be a pandas method; numpy or native Python functions are also appreciated.
Use a Series instead.
>>> import pandas as pd
>>>
>>> data = ['A', 'A', 'A', 'B', 'C', 'C', 'D', 'D', 'D', 'D', 'D']
>>>
>>> pd.Series(data).value_counts()
D 5
A 3
C 2
B 1
dtype: int64
Use a defaultdict:

from collections import defaultdict

data = ['A', 'A', 'B', 'A', 'C', 'C', 'A']
d = defaultdict(int)
for element in data:
    d[element] += 1

d  # output: defaultdict(int, {'A': 4, 'B': 1, 'C': 2})
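A defaultdict is a dict subclass, so it can be used anywhere a dict is expected, or converted explicitly with dict(d).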
There isn't any grouping here, just counting, so you can use collections.Counter:

from collections import Counter
Counter(['A', 'B', 'A'])
# Counter({'A': 2, 'B': 1})
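Counter can also hand you the counts in descending order via most_common:

from collections import Counter
Counter(['A', 'B', 'A']).most_common()
# [('A', 2), ('B', 1)]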
I want to convert a CSV file into an HDF5 file with a nested structure. I am using pandas. Is there a simple way to do that?
You can use nested dictionaries via collections.defaultdict for this:
from collections import defaultdict
import pandas as pd
# read csv file
# df = pd.read_csv('input.csv', header=None)
df = pd.DataFrame([['A', 'a', 'a1'],
                   ['A', 'a', 'a2'],
                   ['A', 'b', 'b1'],
                   ['A', 'b', 'b2'],
                   ['A', 'c', 'c1'],
                   ['A', 'c', 'c2']],
                  columns=['col1', 'col2', 'col3'])

d = defaultdict(lambda: defaultdict(list))
for row in df.itertuples():
    # row is (Index, col1, col2, col3)
    d[row[1]][row[2]].append(row[3])
Result
defaultdict(<function __main__.<lambda>>,
            {'A': defaultdict(list,
                              {'a': ['a1', 'a2'],
                               'b': ['b1', 'b2'],
                               'c': ['c1', 'c2']})})
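If the end goal really is an HDF5 file, one way to finish (a minimal sketch, assuming the h5py library, fixed-length byte strings, and out.h5 as a placeholder filename) is to mirror the nested dict as HDF5 groups and datasets:

import h5py
import numpy as np

# d is the nested defaultdict built above: {'A': {'a': ['a1', 'a2'], ...}}
with h5py.File('out.h5', 'w') as f:
    for top, sub in d.items():
        grp = f.create_group(top)  # e.g. the group "/A"
        for name, values in sub.items():
            # each leaf list becomes a dataset of byte strings, e.g. "/A/a"
            grp.create_dataset(name, data=np.array(values, dtype='S'))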
Thanks, I will check out defaultdict. My solution is probably more hacky, but in case someone needs something customizable:
import pandas as pd
df = pd.DataFrame([['A', 'a', 'a1'],
                   ['A', 'a', 'a2'],
                   ['A', 'b', 'b1'],
                   ['A', 'b', 'b2'],
                   ['A', 'c', 'c1'],
                   ['A', 'c', 'c2']],
                  columns=['col1', 'col2', 'col3'])

cols = ['col1', 'col2', 'col3']
children = {p: {} for p in cols}
parent = {p: {} for p in cols}

for x in df.iterrows():
    for i in range(len(cols) - 1):
        _parent = x[1][cols[i]]
        _child = x[1][cols[i + 1]]
        parent[cols[i + 1]].update({_child: _parent})
        if _parent in children[cols[i]]:
            children_list = children[cols[i]][_parent]
            children_list.add(_child)
            children[cols[i]].update({_parent: children_list})
        else:
            children[cols[i]].update({_parent: set([_child])})
Result:
parent =
{'col1': {},
 'col2': {'a': 'A', 'b': 'A', 'c': 'A'},
 'col3': {'a1': 'a', 'a2': 'a', 'b1': 'b', 'b2': 'b', 'c1': 'c', 'c2': 'c'}}
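and the children mapping comes out as:

children =
{'col1': {'A': {'a', 'b', 'c'}},
 'col2': {'a': {'a1', 'a2'}, 'b': {'b1', 'b2'}, 'c': {'c1', 'c2'}},
 'col3': {}}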
Then you can walk up and down your hierarchy.
I have a dictionary:

d = {'A': ['a', 'b'], 'B': ['c', 'b', 'a'], 'C': ['d', 'c']}

What is an easy way to find the values shared between the keys of the dictionary?
output:
A&B : 'a', 'b'
A&C : None
B&C : 'c'
How can this be achieved?
In [1]: dct = {'A':['a', 'b'], 'B':['c', 'b', 'a'], 'C':['d', 'c'], }
In [2]: set(dct['A']).intersection(dct['B'])
Out[2]: {'a', 'b'}
In [3]: set(dct['A']).intersection(dct['C'])
Out[3]: set()
In [4]: set(dct['B']).intersection(dct['C'])
Out[4]: {'c'}
Using the set & other_set operator (or set.intersection) together with itertools.combinations:
>>> import itertools
>>>
>>> d = {'A':['a', 'b'], 'B':['c', 'b', 'a'], 'C':['d', 'c'], }
>>> for a, b in itertools.combinations(d, 2):
...     common = set(d[a]) & set(d[b])
...     print('{}&{}: {}'.format(a, b, common))
...
A&C: set()
A&B: {'b', 'a'}
C&B: {'c'}
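Note: the pair ordering above comes from an older Python where dicts were unordered; on Python 3.7+, where dicts preserve insertion order, the output comes out as A&B, A&C, B&C.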