Input column values from lists

Consider I have the following data.
import pandas as pd
age = [[1,2,3],[2,1],[4,2,3,1],[2,1,3]]
frame = {'age': age }
result = pd.DataFrame(frame)
ver = pd.DataFrame(result.age.values.tolist(), index=result.index)
listado = pd.unique(ver.values.ravel('K'))
cleanedList = [x for x in listado if str(x) != 'nan']
for col in cleanedList:
    result[col] = 0
#Return values
age 1.0 2.0 4.0 3.0
[1, 2, 3] 0 0 0 0
[2, 1] 0 0 0 0
[4, 2, 3, 1] 0 0 0 0
[2, 1, 3] 0 0 0 0
How can I impute 1 in the columns corresponding to each list in the age column? The final output would be:
age 1.0 2.0 4.0 3.0
[1, 2, 3] 1 1 0 1
[2, 1] 1 1 0 0
[4, 2, 3, 1] 1 1 1 1
[2, 1, 3] 1 1 1 0
Consider that the number of elements in the age column is dynamic (as an example I put 4 numbers, but in reality there can be many more).

Check with sklearn's MultiLabelBinarizer:
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
s=pd.DataFrame(mlb.fit_transform(result['age']),columns=mlb.classes_, index=result.index)
s
1 2 3 4
0 1 1 1 0
1 1 1 0 0
2 1 1 1 1
3 1 1 1 0
# result = result.join(s)
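If you prefer to stay within pandas, here is a minimal sketch of the same idea using explode and crosstab (assuming the result frame built above; explode needs pandas 0.25+, and clip guards against duplicate values inside a list):
import pandas as pd

age = [[1, 2, 3], [2, 1], [4, 2, 3, 1], [2, 1, 3]]
result = pd.DataFrame({'age': age})

# One row per list element, then tabulate the original row index against the element value.
exploded = result['age'].explode()
dummies = pd.crosstab(exploded.index, exploded).clip(upper=1)
dummies.columns.name = None  # crosstab labels the columns 'age'; drop that
out = result.join(dummies)
print(out)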

Related

Expand Pandas series into dataframe by unique values

I would like to expand a series or dataframe into a sparse matrix based on the unique values of the series. It's a bit hard to explain verbally but an example should be clearer.
First, simpler version - if I start with this:
Idx Tag
0 A
1 B
2 A
3 C
4 B
I'd like to get something like this, where the unique values in the starting series are the column values here (could be 1s and 0s, Boolean, etc.):
Idx A B C
0 1 0 0
1 0 1 0
2 1 0 0
3 0 0 1
4 0 1 0
Second, more advanced version - if I have values associated with each entry, preserving those and filling the rest of the matrix with a placeholder (0, NaN, something else), e.g. starting from this:
Idx Tag Val
0 A 5
1 B 2
2 A 3
3 C 7
4 B 1
And ending up with this:
Idx A B C
0 5 0 0
1 0 2 0
2 3 0 0
3 0 0 7
4 0 1 0
What's a Pythonic way to do this?
Here's how to do it, using pandas.get_dummies(), which was designed specifically for this (often called "one-hot encoding" in ML). I've done it step-by-step so you can see how it's done ;)
>>> df
Idx Tag Val
0 0 A 5
1 1 B 2
2 2 A 3
3 3 C 7
4 4 B 1
>>> pd.get_dummies(df['Tag'])
A B C
0 1 0 0
1 0 1 0
2 1 0 0
3 0 0 1
4 0 1 0
>>> pd.concat([df[['Idx']], pd.get_dummies(df['Tag'])], axis=1)
Idx A B C
0 0 1 0 0
1 1 0 1 0
2 2 1 0 0
3 3 0 0 1
4 4 0 1 0
>>> pd.get_dummies(df['Tag']).to_numpy()
array([[1, 0, 0],
[0, 1, 0],
[1, 0, 0],
[0, 0, 1],
[0, 1, 0]], dtype=uint8)
>>> df[['Val']].to_numpy()
array([[5],
[2],
[3],
[7],
[1]])
>>> pd.get_dummies(df['Tag']).to_numpy() * df[['Val']].to_numpy()
array([[5, 0, 0],
[0, 2, 0],
[3, 0, 0],
[0, 0, 7],
[0, 1, 0]])
>>> pd.DataFrame(pd.get_dummies(df['Tag']).to_numpy() * df[['Val']].to_numpy(), columns=df['Tag'].unique())
A B C
0 5 0 0
1 0 2 0
2 3 0 0
3 0 0 7
4 0 1 0
>>> pd.concat([df, pd.DataFrame(pd.get_dummies(df['Tag']).to_numpy() * df[['Val']].to_numpy(), columns=df['Tag'].unique())], axis=1)
Idx Tag Val A B C
0 0 A 5 5 0 0
1 1 B 2 0 2 0
2 2 A 3 3 0 0
3 3 C 7 0 0 7
4 4 B 1 0 1 0
Based on user17242583's answer, I found a pretty simple way to do it using pd.get_dummies combined with DataFrame.multiply:
>>> df
Tag Val
0 A 5
1 B 2
2 A 3
3 C 7
4 B 1
>>> pd.get_dummies(df['Tag'])
A B C
0 1 0 0
1 0 1 0
2 1 0 0
3 0 0 1
4 0 1 0
>>> pd.get_dummies(df['Tag']).multiply(df['Val'], axis=0)
A B C
0 5 0 0
1 0 2 0
2 3 0 0
3 0 0 7
4 0 1 0
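A further option for the valued version is pivot_table; this is only a sketch, assuming the same Tag/Val frame, and it relies on each (row, Tag) pair being unique so the default mean aggregation just passes Val through:
import pandas as pd

df = pd.DataFrame({'Tag': list('ABACB'), 'Val': [5, 2, 3, 7, 1]})

# Spread Tag into columns, keep Val as the cell value, fill the gaps with 0.
wide = df.pivot_table(index=df.index, columns='Tag', values='Val', fill_value=0)
print(wide)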

Iteration over a Pandas DataFrame to extract data

I have a DataFrame that contains hour intervals in the columns and employee IDs in the rows.
I want to iterate over each column (hourly interval) and extract it to a list ONLY if the column contains the number 1 (1 means they are available in that hour, 0 means they are not).
I've tried iterrows() and iteritems(), and neither gives me what I want from this DataFrame, which is a new list like
available = [0800, 0900, 1000, 1100]
from which I can then extract the min and max values to create a schedule.
Apologies if this is somewhat vague; I'm pretty new to Python 3 and Pandas.
You don't need to iterate.
Suppose you have a dataframe like this:
0 1 2 3 4 5 6 7 8 9
0 0 0 0 0 0 1 0 1 1 0
1 1 0 1 1 1 1 1 1 0 1
2 1 1 1 0 0 0 0 0 0 0
3 0 1 1 0 1 1 0 0 1 1
4 1 0 1 0 1 0 1 0 0 0
5 0 1 1 0 0 0 0 0 0 0
6 1 0 0 0 1 1 1 1 0 0
7 0 1 0 1 0 1 1 1 1 1
8 0 0 1 0 1 1 1 0 0 0
9 1 0 0 1 0 0 1 1 1 1
You can just use this code to get the column names of all the columns where the value is 1:
df['available'] = df.apply(lambda row: row[row == 1].index.tolist(), axis=1)
0 1 2 3 4 5 6 7 8 9 available
0 0 0 0 0 0 1 0 1 1 0 [5, 7, 8]
1 1 0 1 1 1 1 1 1 0 1 [0, 2, 3, 4, 5, 6, 7, 9]
2 1 1 1 0 0 0 0 0 0 0 [0, 1, 2]
3 0 1 1 0 1 1 0 0 1 1 [1, 2, 4, 5, 8, 9]
4 1 0 1 0 1 0 1 0 0 0 [0, 2, 4, 6]
5 0 1 1 0 0 0 0 0 0 0 [1, 2]
6 1 0 0 0 1 1 1 1 0 0 [0, 4, 5, 6, 7]
7 0 1 0 1 0 1 1 1 1 1 [1, 3, 5, 6, 7, 8, 9]
8 0 0 1 0 1 1 1 0 0 0 [2, 4, 5, 6]
9 1 0 0 1 0 0 1 1 1 1 [0, 3, 6, 7, 8, 9]
And if you want the min/max from this, you can use:
df['min_max'] = df['available'].apply(lambda x: (min(x), max(x)))
available min_max
0 [5, 7, 8] (5, 8)
1 [0, 2, 3, 4, 5, 6, 7, 9] (0, 9)
2 [0, 1, 2] (0, 2)
3 [1, 2, 4, 5, 8, 9] (1, 9)
4 [0, 2, 4, 6] (0, 6)
5 [1, 2] (1, 2)
6 [0, 4, 5, 6, 7] (0, 7)
7 [1, 3, 5, 6, 7, 8, 9] (1, 9)
8 [2, 4, 5, 6] (2, 6)
9 [0, 3, 6, 7, 8, 9] (0, 9)
You can simply do
available = df.columns[df.T.any(axis=1)].tolist()
In general it is not advisable to iterate over Pandas DataFrames unless they are small, as AFAIK this does not use vectorized functions and is thus slower.
Can you show the rest of your code?
Assuming only 0s and 1s are in the dataframe, the following conditional selection should work (if I'm correctly interpreting what you want; it seems more likely that you want what Shubham Periwal posted):
filtered_df = df[df != 0]
lists = filtered_df.values.tolist()
Or in 1 line:
lists = df[df != 0].values.tolist()
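If apply is too slow on a large frame, here is a vectorized sketch over the underlying numpy array (the hour labels are hypothetical, just to mirror the question):
import numpy as np
import pandas as pd

# Hypothetical availability grid: one row per employee, one column per hour.
hours = ['0800', '0900', '1000', '1100']
df = pd.DataFrame(np.random.randint(0, 2, size=(3, 4)), columns=hours)

# For each row, keep the column labels where the value is 1.
cols = df.columns.to_numpy()
df['available'] = [cols[row == 1].tolist() for row in df.to_numpy()]
print(df)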

Concatenate multiple values in same row into a list

I have a df where I want to do multi-label classification. One of the ways that was suggested to me was to calculate the probability vector. Here's an example of my DF with what would represent the training data.
id ABC DEF GHI
1 0 0 0 1
2 1 0 1 0
3 2 1 0 0
4 3 0 1 1
5 4 0 0 0
6 5 0 1 1
7 6 1 1 1
8 7 1 0 1
9 8 1 1 0
And I would like to concatenate columns ABC, DEF, GHI into a new column. I will also have to do this with more than 3 columns, so I want to do it relatively cleanly using a column list or something similar:
col_list = ['ABC','DEF','GHI']
The result I am looking for would be something like:
id ABC DEF GHI Conc
1 0 0 0 1 [0,0,1]
2 1 0 1 0 [0,1,0]
3 2 1 0 0 [1,0,0]
4 3 0 1 1 [0,1,1]
5 4 0 0 0 [0,0,0]
6 5 0 1 1 [0,1,1]
7 6 1 1 1 [1,1,1]
8 7 1 0 1 [1,0,1]
9 8 1 1 0 [1,1,0]
Try:
col_list = ['ABC','DEF','GHI']
df['agg_lst'] = df.apply(lambda x: [x[col] for col in col_list], axis=1)
You can use agg with the list function:
df[col_list].agg(list, axis=1)
1 [0, 0, 1]
2 [0, 1, 0]
3 [1, 0, 0]
4 [0, 1, 1]
5 [0, 0, 0]
6 [0, 1, 1]
7 [1, 1, 1]
8 [1, 0, 1]
9 [1, 1, 0]
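For larger frames, a sketch that skips apply entirely by going through the underlying array (assuming the same df and col_list as above):
col_list = ['ABC', 'DEF', 'GHI']
# values.tolist() produces one Python list per row in a single pass.
df['Conc'] = df[col_list].values.tolist()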

multiple same categorical variables into one hot encoded columns efficiently

How do I encode the table below efficiently?
e.g.
import numpy as np
import pandas as pd
df = pd.DataFrame(np.array([[1, 2, 3], [2, 3, 4], [1, 3, 4]]), columns=['col_1', 'col_2', 'col_3'])
col_1 col_2 col_3
0 1 2 3
1 2 3 4
2 1 3 4
to
1 2 3 4
0 1 1 1 0
1 0 1 1 1
2 1 0 1 1
Here's one way -
def hotencode(df):
    unq, idx = np.unique(df, return_inverse=1)
    col_idx = idx.reshape(df.shape)
    out = np.zeros((len(col_idx), col_idx.max()+1), dtype=int)
    out[np.arange(len(col_idx))[:,None], col_idx] = 1
    return pd.DataFrame(out, columns=unq, index=df.index)
Another way with broadcasting would be -
unq = np.unique(df)
out = (df.values[...,None] == unq).any(1).astype(int)
Sample run -
In [81]: df
Out[81]:
col_1 col_2 col_3
0 1 2 3
1 2 3 4
2 1 3 4
In [82]: hotencode(df)
Out[82]:
1 2 3 4
0 1 1 1 0
1 0 1 1 1
2 1 0 1 1
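A pandas-only sketch of the same encoding: stack the frame into one long column, one-hot encode it, and collapse back to one row per original index (assuming the df above):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([[1, 2, 3], [2, 3, 4], [1, 3, 4]]),
                  columns=['col_1', 'col_2', 'col_3'])

# Stack to a single column, one-hot encode, then take the row-wise max
# so repeated values within a row still collapse to a single 1.
encoded = pd.get_dummies(df.stack()).groupby(level=0).max().astype(int)
print(encoded)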

Split a MultiIndex DataFrame base on another DataFrame

Say you have a multiindex DataFrame
x y z
a 1 0 1 2
2 3 4 5
b 1 0 1 2
2 3 4 5
3 6 7 8
c 1 0 1 2
2 0 4 6
Now you have another DataFrame which is
col1 col2
0 a 1
1 b 1
2 b 3
3 c 1
4 c 2
How do you split the multiindex DataFrame based on the one above?
Use loc by tuples:
df = df1.loc[df2.set_index(['col1','col2']).index.tolist()]
print (df)
x y z
a 1 0 1 2
b 1 0 1 2
3 6 7 8
c 1 0 1 2
2 0 4 6
Or build the tuples with a list comprehension:
df = df1.loc[[tuple(x) for x in df2.values.tolist()]]
print (df)
x y z
a 1 0 1 2
b 1 0 1 2
3 6 7 8
c 1 0 1 2
2 0 4 6
Or join:
df = df2.join(df1, on=['col1','col2']).set_index(['col1','col2'])
print (df)
x y z
col1 col2
a 1 0 1 2
b 1 0 1 2
3 6 7 8
c 1 0 1 2
2 0 4 6
Simply using isin:
df[df.index.isin(list(zip(df2['col1'],df2['col2'])))]
Out[342]:
0 1 2 3
index1 index2
a 1 1 0 1 2
b 1 1 0 1 2
3 3 6 7 8
c 1 1 0 1 2
2 2 0 4 6
You can also do this using the MultiIndex reindex method https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html
## Recreate your dataframes
tuples = [('a', 1), ('a', 2),
          ('b', 1), ('b', 2),
          ('b', 3), ('c', 1),
          ('c', 2)]
data = [[1, 0, 1, 2],
        [2, 3, 4, 5],
        [1, 0, 1, 2],
        [2, 3, 4, 5],
        [3, 6, 7, 8],
        [1, 0, 1, 2],
        [2, 0, 4, 6]]
idx = pd.MultiIndex.from_tuples(tuples, names=['index1', 'index2'])
df = pd.DataFrame(data=data, index=idx)
df2 = pd.DataFrame([['a', 1],
                    ['b', 1],
                    ['b', 3],
                    ['c', 1],
                    ['c', 2]])
# Answer Question
idx_subset = pd.MultiIndex.from_tuples([(a, b) for a, b in df2.values], names=['index1', 'index2'])
out = df.reindex(idx_subset)
print(out)
0 1 2 3
index1 index2
a 1 1 0 1 2
b 1 1 0 1 2
3 3 6 7 8
c 1 1 0 1 2
2 2 0 4 6
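One more sketch along the same lines, reusing df and df2 from the reindex example: build the subset index straight from df2 with MultiIndex.from_frame (pandas 0.24+) and select with loc:
# Rename df2's columns to match the index level names, then select those rows.
keys = pd.MultiIndex.from_frame(df2.set_axis(['index1', 'index2'], axis=1))
print(df.loc[keys])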
