Convert N by N Dataframe to 3 Column Dataframe - python

I am using Python 2.7 with Pandas on a Windows 10 machine.
I have an n by n DataFrame where:
1) The index contains people's names
2) The column headers are the same people's names in the same order
3) Each cell of the DataFrame is the average number of times they email each other each day.
How would I transform that DataFrame into a DataFrame with 3 columns, where:
1) Column 1 would be the index of the n by n DataFrame
2) Column 2 would be the column headers of the n by n DataFrame
3) Column 3 would be the cell value corresponding to that index, column header combination from the n by n DataFrame
Edit
Apologies for not providing an example of what I am looking for. I would like to take df1 and turn it into rel_df, using the code below.
import pandas as pd
from itertools import permutations
df1 = pd.DataFrame()
df1['index'] = ['a', 'b','c','d','e']
df1.set_index('index', inplace = True)
df1['a'] = [0,1,2,3,4]
df1['b'] = [1,0,2,3,4]
df1['c'] = [4,1,0,3,4]
df1['d'] = [5,1,2,0,4]
df1['e'] = [7,1,2,3,0]
##df of all relationships to build
flds = pd.Series(df1.index)
combos = []
for L in range(0, len(flds) + 1):
    for subset in permutations(flds, L):
        if len(subset) == 2:
            combos.append(subset)
        if len(subset) > 2:
            break
rel_df = pd.DataFrame.from_records(data = combos, columns = ['fld1','fld2'])
rel_df['value'] = [1,4,5,7,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4]
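(As an aside, the nested loop above can be collapsed to a single call, since permutations with r=2 already yields exactly the ordered pairs:)
combos = list(permutations(flds, 2))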
>>> print df1
       a  b  c  d  e
index
a      0  1  4  5  7
b      1  0  1  1  1
c      2  2  0  2  2
d      3  3  3  0  3
e      4  4  4  4  0
>>> print rel_df
fld1 fld2 value
0 a b 1
1 a c 4
2 a d 5
3 a e 7
4 b a 1
5 b c 1
6 b d 1
7 b e 1
8 c a 2
9 c b 2
10 c d 2
11 c e 2
12 d a 3
13 d b 3
14 d c 3
15 d e 3
16 e a 4
17 e b 4
18 e c 4
19 e d 4

Use melt:
df1 = df1.reset_index()
pd.melt(df1, id_vars='index', value_vars=df1.columns.tolist()[1:])
(If in your actual code you're explicitly setting the index as you do here, just skip that step rather than doing the reset_index; melt doesn't work on an index.)
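Continuing from the reset_index above, a short follow-up sketch (not part of the original answer) that renames the melted columns, drops the diagonal self-pairs, and sorts to match rel_df exactly:
out = pd.melt(df1, id_vars='index', value_vars=df1.columns.tolist()[1:])
out.columns = ['fld1', 'fld2', 'value']
out = out[out.fld1 != out.fld2].sort_values(['fld1', 'fld2']).reset_index(drop=True)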

# Flatten your dataframe.
df = df1.stack().reset_index()
# Remove duplicates (e.g. fld1 = 'a' and fld2 = 'a').
df = df.loc[df.iloc[:, 0] != df.iloc[:, 1]]
# Rename columns.
df.columns = ['fld1', 'fld2', 'value']
>>> df
fld1 fld2 value
1 a b 1
2 a c 4
3 a d 5
4 a e 7
5 b a 1
7 b c 1
8 b d 1
9 b e 1
10 c a 2
11 c b 2
13 c d 2
14 c e 2
15 d a 3
16 d b 3
17 d c 3
19 d e 3
20 e a 4
21 e b 4
22 e c 4
23 e d 4

Related

Join an array to every row in the pandas dataframe

I have a data frame and an array as follows:
df = pd.DataFrame({'x': range(0,5), 'y' : range(1,6)})
s = np.array(['a', 'b', 'c'])
I would like to attach the array to every row of the data frame, such that I get a data frame as follows:
   x  y  a  b  c
0  0  1  a  b  c
1  1  2  a  b  c
2  2  3  a  b  c
3  3  4  a  b  c
4  4  5  a  b  c
What would be the most efficient way to do this?
Just plain assignment:
# replace the first `s` with your desired column names
df[s] = [s]*len(df)
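A minimal runnable sketch of this approach, using the df and s from the question:
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': range(0, 5), 'y': range(1, 6)})
s = np.array(['a', 'b', 'c'])

# The right-hand side acts as a (len(df), len(s)) block,
# so every row receives a copy of s under the new columns a, b, c.
df[s] = [s] * len(df)
print(df)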
Try this:
for i in s:
    df[i] = i
Output:
x y a b c
0 0 1 a b c
1 1 2 a b c
2 2 3 a b c
3 3 4 a b c
4 4 5 a b c
You could use pandas.concat:
pd.concat([df, pd.DataFrame(s).T], axis=1).ffill()
output:
x y 0 1 2
0 0 1 a b c
1 1 2 a b c
2 2 3 a b c
3 3 4 a b c
4 4 5 a b c
You can try using df.loc here.
df.loc[:, s] = s
print(df)
x y a b c
0 0 1 a b c
1 1 2 a b c
2 2 3 a b c
3 3 4 a b c
4 4 5 a b c

Pandas row replication python

So my dataframe has multiple columns; one of them, named "multiple", contains only boolean 1s and 0s. Now I want to replicate the rows four extra times, but only for df.loc[df.multiple==1], so each such row appears five times in total (see the expected output). How can I do that? (I don't want to replicate indexes.)
example input:
df=
index strings multiple
0 A 0
1 B 1
2 C 1
3 D 0
4 E 1
Expected output:
index strings multiple
0 A 0
1 B 1
2 B 1
3 B 1
4 B 1
5 B 1
6 C 1
7 C 1
8 C 1
9 C 1
10 C 1
11 D 0
12 E 1
13 E 1
14 E 1
15 E 1
16 E 1
Here is another alternative, based on @Vinzent's answer below.
It uses the same approach to construct the repeats but doesn't require reconstructing the full dataframe; it is based on indexing instead. This solution is ~30% faster on the provided dataset and on larger ones.
df.loc[np.repeat(df.multiple, df.multiple.values*4+1).index].reset_index(drop=True)
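Essentially the same idea can be written with Index.repeat, which skips the detour through np.repeat (a sketch, not from the original answer):
df.loc[df.index.repeat(df['multiple'] * 4 + 1)].reset_index(drop=True)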
This is what numpy.repeat is for:
import pandas as pd
import numpy as np
df = pd.DataFrame([['A', 0],
                   ['B', 1],
                   ['C', 1],
                   ['D', 0],
                   ['E', 1]],
                  columns=['strings', 'multiple'])
df = pd.DataFrame(np.repeat(df.values, df['multiple']*4+1, axis=0), columns=df.columns)
print(df)
# strings multiple
# 0 A 0
# 1 B 1
# 2 B 1
# 3 B 1
# 4 B 1
# 5 B 1
# 6 C 1
# 7 C 1
# 8 C 1
# 9 C 1
# 10 C 1
# 11 D 0
# 12 E 1
# 13 E 1
# 14 E 1
# 15 E 1
# 16 E 1
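One caveat worth adding (an observation, not part of the original answer): round-tripping through df.values on a mixed-type frame upcasts everything to object dtype. infer_objects() can restore the numeric columns afterwards:
df = df.infer_objects()  # e.g. converts 'multiple' back from object to int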
You can do it with pandas (x.name is the group key, so only the multiple == 1 group is replicated; note it takes five copies to match the expected output):
(df.groupby('multiple')
   .apply(lambda x: pd.concat([x]*5) if x.name else x)
   .droplevel(level=0)
   .sort_index()
   .reset_index(drop=True)
)

Rearranging table structure based on number of rows and columns pandas

I have the following data frame table. The table has the columns Id, columns, rows, 1, 2, 3, 4, 5, 6, 7, 8, and 9.
Id  columns  rows  1  2  3  4  5  6  7  8  9
1   3        3     A  B  C  D  E  F  G  H  Z
2   3        2     I  J  K
By considering Id, the number of rows, and columns I would like to restructure the table as follows.
Id columns rows col_1 col_2 col_3
1 3 3 A B C
1 3 3 D E F
1 3 3 G H Z
2 3 2 I J K
2 3 2 - - -
Can anyone help to do this in Python Pandas?
Here's a solution using MultiIndex and .iterrows():
df
Id columns rows 1 2 3 4 5 6 7 8 9
0 1 3 3 A B C D E F G H Z
1 2 3 2 I J K None None None None None None
You can set n to any length, in your case 3:
n = 3
df = df.set_index(['Id', 'columns', 'rows'])
new_index = []
new_rows = []
for index, row in df.iterrows():
    max_rows = index[-1] * (len(index)-1)  # read amount of rows
    for i in range(0, len(row), n):
        if i > max_rows:  # max rows reached, stop appending
            continue
        new_index.append(index)
        new_rows.append(row.values[i:i+n])
df2 = pd.DataFrame(new_rows, index=pd.MultiIndex.from_tuples(new_index))
df2
0 1 2
1 3 3 A B C
3 D E F
3 G H Z
2 3 2 I J K
2 None None None
And if you are keen on getting your old index and headers back:
new_headers = ['Id', 'columns', 'rows'] + list(range(1, n+1))
df2.reset_index().set_axis(new_headers, axis=1)
Id columns rows 1 2 3
0 1 3 3 A B C
1 1 3 3 D E F
2 1 3 3 G H Z
3 2 3 2 I J K
4 2 3 2 None None None
Using melt and str.split with floor division against your index to create groups of 3.
s = pd.melt(df, id_vars=['Id', 'columns', 'rows'])
s1 = (
    s.sort_values(["Id", "variable"])
    .assign(idx=s.index // 3)
    .fillna("-")
    .groupby(["idx", "Id"])
    .agg(
        columns=("columns", "first"), rows=("rows", "first"), value=("value", ",".join)
    )
)
s2 = s1["value"].str.split(",", expand=True).rename(
    columns=dict(zip(s1["value"].str.split(",", expand=True).columns,
                     [f'col_{i+1}' for i in range(s1["value"].str.split(',').apply(len).max())]
                     ))
)
df1 = pd.concat([s1.drop('value', axis=1), s2], axis=1)
print(df1)
columns rows col_1 col_2 col_3
idx Id
0 1 3 3 A B C
1 1 3 3 D E F
2 1 3 3 G H Z
3 2 3 2 I J K
4 2 3 2 - - -
5 2 3 2 - - -
I modified unutbu's solution to create an array for each row with the length expected from its rows and columns values, then build a DataFrame per row in a list comprehension and join them together with concat:
def f(x):
    c, r = x.name[1], x.name[2]
    #print (c, r)
    arr = np.empty(c * r, dtype='O')
    vals = x.iloc[:len(arr)]
    arr[:len(vals)] = vals
    idx = pd.MultiIndex.from_tuples([x.name] * r, names=df.columns[:3])
    cols = [f'col_{i+1}' for i in range(c)]
    return pd.DataFrame(arr.reshape((r, c)), index=idx, columns=cols).fillna('-')

df1 = (pd.concat([x for x in df.set_index(['Id', 'columns', 'rows'])
                              .apply(f, axis=1)])
         .reset_index())
print (df1)
Id columns rows col_1 col_2 col_3
0 1 3 3 A B C
1 1 3 3 D E F
2 1 3 3 G H Z
3 2 3 2 I J K
4 2 3 2 - - -

Using isin() to determine what should be printed

Right now I have two dataframes (data1 and data2)
I would like to print a column of string values in the dataframe called data1, based on whether the ID exists in both data2 and data1.
What I am doing now gives me a boolean list (True or False for whether each ID exists in both dataframes), but not the column of strings.
print(data2['id'].isin(data1.id).to_string())
yields
0 True
1 True
2 True
3 True
4 True
5 True
Any ideas would be appreciated.
Here is a sample of data1
'user_id', 'id', 'rating', 'unix_timestamp'
196 242 3 881250949
186 302 3 891717742
22 377 1 878887116
And data2 contains something like this
'id', 'title', 'release_date',
'video_release_date', 'imdb_url'
37|Nadja (1994)|01-Jan-1994||http://us.imdb.com/M/title-exact?Nadja%20(1994)|0|0|0|0|0|0|0|0|1|0|0|0|0|0|0|0|0|0|0
38|Net, The (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?Net,%20The%20(1995)|0|0|0|0|0|0|0|0|0|0|0|0|0|0|0|1|1|0|0
39|Strange Days (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?Strange%20Days%20(1995)|0|1|0|0|0|0|1|0|0|0|0|0|0|0|0|1|0|0|0
If all values of ids are unique:
I think you need merge with an inner join. From data2 select only the id column; the on parameter can be omitted, because the join then uses all columns common to both frames, which here is only id:
df = pd.merge(data1, data2[['id']])
Sample:
data1 = pd.DataFrame({'id':list('abcdef'),
                      'B':[4,5,4,5,5,4],
                      'C':[7,8,9,4,2,3]})
print (data1)
B C id
0 4 7 a
1 5 8 b
2 4 9 c
3 5 4 d
4 5 2 e
5 4 3 f
data2 = pd.DataFrame({'id':list('frcdeg'),
                      'D':[1,3,5,7,1,0],
                      'E':[5,3,6,9,2,4],})
print (data2)
D E id
0 1 5 f
1 3 3 r
2 5 6 c
3 7 9 d
4 1 2 e
5 0 4 g
df = pd.merge(data1, data2[['id']])
print (df)
B C id
0 4 9 c
1 5 4 d
2 5 2 e
3 4 3 f
If ids are duplicated in one or the other DataFrame, use one of these similar solutions instead:
df = data1[data1['id'].isin(set(data1['id']) & set(data2['id']))]
ids = set(data1['id']) & set(data2['id'])
df = data2.query('id in @ids')
df = data1[np.in1d(data1['id'], np.intersect1d(data1['id'], data2['id']))]
Sample:
data1 = pd.DataFrame({'id':list('abcdef'),
                      'B':[4,5,4,5,5,4],
                      'C':[7,8,9,4,2,3]})
print (data1)
B C id
0 4 7 a
1 5 8 b
2 4 9 c
3 5 4 d
4 5 2 e
5 4 3 f
data2 = pd.DataFrame({'id':list('fecdef'),
                      'D':[1,3,5,7,1,0],
                      'E':[5,3,6,9,2,4],})
print (data2)
D E id
0 1 5 f
1 3 3 e
2 5 6 c
3 7 9 d
4 1 2 e
5 0 4 f
df = data1[data1['id'].isin(set(data1['id']) & set(data2['id']))]
print (df)
B C id
2 4 9 c
3 5 4 d
4 5 2 e
5 4 3 f
EDIT:
You can use:
df = data2.loc[data2['id'].isin(set(data1['id']) & set(data2['id'])), ['title']]
ids = set(data1['id']) & set(data2['id'])
df = data2.query('id in @ids')[['title']]
df = data2.loc[np.in1d(data2['id'], np.intersect1d(data1['id'], data2['id'])), ['title']]
You can compute the set intersection of the two columns -
ids = set(data1['id']).intersection(data2['id'])
Or,
ids = np.intersect1d(data1['id'], data2['id'])
Next, query/filter out relevant rows.
data1.loc[data1['id'].isin(ids), 'id']
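Putting the pieces together on miniature stand-in frames (the ids and titles below are invented for illustration):
import pandas as pd

data1 = pd.DataFrame({'user_id': [196, 186, 22],
                      'id': [242, 302, 377],
                      'rating': [3, 3, 1]})
data2 = pd.DataFrame({'id': [242, 377, 500],
                      'title': ['Movie A (1994)', 'Movie B (1995)', 'Movie C (1995)']})

ids = set(data1['id']).intersection(data2['id'])
print(data2.loc[data2['id'].isin(ids), 'title'])
# 0    Movie A (1994)
# 1    Movie B (1995)
# Name: title, dtype: object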

insert a list as row in a dataframe at a specific position

I have a list l = ['a', 'b', 'c']
and a dataframe with columns d, e, f whose values are all numbers.
How can I insert the list l into my dataframe just below the column headers?
Setup
df = pd.DataFrame(np.ones((2, 3), dtype=int), columns=list('def'))
l = list('abc')
df
d e f
0 1 1 1
1 1 1 1
Option 1
I'd accomplish this task by adding a level to the columns object
df.columns = pd.MultiIndex.from_tuples(list(zip(df.columns, l)))
df
d e f
a b c
0 1 1 1
1 1 1 1
Option 2
Use a dictionary comprehension passed to the dataframe constructor
pd.DataFrame({(i, j): df[i] for i, j in zip(df, l)})
d e f
a b c
0 1 1 1
1 1 1 1
But if you insist on putting it in the dataframe proper... (keep in mind, this turns the dataframe into dtype object, and we lose significant computational efficiency.)
Alternative 1
pd.DataFrame([l], columns=df.columns).append(df, ignore_index=True)
d e f
0 a b c
1 1 1 1
2 1 1 1
Alternative 2
pd.DataFrame([l] + df.values.tolist(), columns=df.columns)
d e f
0 a b c
1 1 1 1
2 1 1 1
Use pd.concat
In [1112]: df
Out[1112]:
d e f
0 0.517243 0.731847 0.259034
1 0.318821 0.551298 0.773115
2 0.194192 0.707525 0.804102
3 0.945842 0.614033 0.757389
In [1113]: pd.concat([pd.DataFrame([l], columns=df.columns), df], ignore_index=True)
Out[1113]:
d e f
0 a b c
1 0.517243 0.731847 0.259034
2 0.318821 0.551298 0.773115
3 0.194192 0.707525 0.804102
4 0.945842 0.614033 0.757389
Are you looking for append? I.e.:
df = pd.DataFrame([[1,2,3]], columns=list('def'))
l = ['a','b','c']
ndf = df.append(pd.Series(l, index=df.columns.tolist()), ignore_index=True)
Output:
d e f
0 1 2 3
1 a b c
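A note beyond the original answer: DataFrame.append was removed in pandas 2.0, so on current pandas the equivalent is pd.concat:
ndf = pd.concat([df, pd.DataFrame([l], columns=df.columns)], ignore_index=True)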
If you want to add the list to the columns as a MultiIndex level:
df.columns = [df.columns, l]
print (df)
d e f
a b c
0 4 7 1
1 5 8 3
2 4 9 5
3 5 4 7
4 5 2 1
5 4 3 0
print (df.columns)
MultiIndex(levels=[['d', 'e', 'f'], ['a', 'b', 'c']],
labels=[[0, 1, 2], [0, 1, 2]])
If you want to add the list at a specific position pos:
pos = 0
df1 = pd.DataFrame([l], columns=df.columns)
print (df1)
d e f
0 a b c
df = pd.concat([df.iloc[:pos], df1, df.iloc[pos:]], ignore_index=True)
print (df)
d e f
0 a b c
1 4 7 1
2 5 8 3
3 4 9 5
4 5 4 7
5 5 2 1
6 4 3 0
But if you append this list to a numeric dataframe, you get mixed types (numbers and strings), so some pandas functions may fail.
Setup:
df = pd.DataFrame({'d':[4,5,4,5,5,4],
                   'e':[7,8,9,4,2,3],
                   'f':[1,3,5,7,1,0]})
print (df)
