Python pivot DataFrame without index columns

It is introducing nulls in the resulting DataFrame:
df.pivot(columns='colname', values='value')
Initial DF:
colname value
0 bathrooms 1.0
1 bathrooms 2.0
2 bathrooms 1.0
3 bathrooms 2.0
4 property_id 82671.0
Result:
colname addr_street bathrooms bedrooms lat lng parking_space property_id
0 NaN 1.0 NaN NaN NaN NaN NaN
1 NaN 2.0 NaN NaN NaN NaN NaN
2 NaN 1.0 NaN NaN NaN NaN NaN
I just want a DataFrame where the unique values of 'colname' in the initial df become the columns, each filled with its corresponding values (as already happens for bathrooms).

If I understand correctly, you want a groupby and concatenation, not pivot:
df = pd.concat(
    {k: g.reset_index(drop=True)   # one column per unique colname
     for k, g in df.groupby('colname')['value']}, axis=1)
df
bathrooms property_id
0 1.0 82671.0
1 2.0 NaN
2 1.0 NaN
3 2.0 NaN
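An alternative that stays with pivot (a sketch, not from the original answer): number the rows within each colname group via cumcount and pivot on that helper index, so values of the same colname stack vertically instead of spreading across rows:
# hypothetical helper column 'idx' numbers rows within each colname group
out = (df.assign(idx=df.groupby('colname').cumcount())
         .pivot(index='idx', columns='colname', values='value'))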


Creating non-existent columns in a MultiIndex DataFrame

Let's say we have a DataFrame like this:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "metric": ["1", "2", "1", "1", "2"],
    "group1": ["o", "x", "x", "o", "x"],
    "group2": ["a", "b", "a", "a", "b"],
    "value": range(5),
    "value2": np.array(range(5)) * 2})
df
metric group1 group2 value value2
0 1 o a 0 0
1 2 x b 1 2
2 1 x a 2 4
3 1 o a 3 6
4 2 x b 4 8
Then I want it in pivot format:
df['g'] = df.groupby(['group1', 'group2'])['group2'].cumcount()
df1 = (df.pivot(index=['g', 'metric'], columns=['group1', 'group2'],
                values=['value', 'value2'])
         .sort_index(axis=1)
         .rename_axis(columns={'g': None}))
value value2
group1 o x o x
group2 a a b a a b
g metric
0 1 0.0 2.0 NaN 0.0 4.0 NaN
2 NaN NaN 1.0 NaN NaN 2.0
1 1 3.0 NaN NaN 6.0 NaN NaN
2 NaN NaN 4.0 NaN NaN 8.0
From here we can see that ("value","o","b") and ("value2","o","b") do not exist after pivoting, but I need those columns to be present with NA values.
So I tried:
cols = [('value','x','a'), ('value','o','a'),('value','o','b')]
df1.assign(**{col : "NA" for col in np.setdiff1d(cols, df1.columns.values)})
which does not give the expected result.
Expected output:
value value2
group1 o x o x
group2 a b a b a b a b
g metric
0 1 0.0 NaN 2.0 NaN 0.0 NaN 4.0 NaN
2 NaN NaN NaN 1.0 NaN NaN NaN 2.0
1 1 3.0 NaN NaN NaN 6.0 NaN NaN NaN
2 NaN NaN NaN 4.0 NaN NaN NaN 8.0
One corner case with this: if b does not exist in the data at all, how do I create that column?
value value2
group1 o x o x
group2 a a a a
g metric
0 1 0.0 2.0 0.0 4.0
2 NaN NaN NaN NaN
1 1 3.0 NaN 6.0 NaN
2 NaN NaN NaN NaN
Use DataFrame.stack with DataFrame.unstack. Stacking with dropna=False materializes the full product of the stacked level values, so unstacking brings the missing combinations back as all-NaN columns:
df1 = df1.stack([1,2],dropna=False).unstack([2,3])
print (df1)
value value2
group1 o x o x
group2 a b a b a b a b
g metric
0 1 0.0 NaN 2.0 NaN 0.0 NaN 4.0 NaN
2 NaN NaN NaN 1.0 NaN NaN NaN 2.0
1 1 3.0 NaN NaN NaN 6.0 NaN NaN NaN
2 NaN NaN NaN 4.0 NaN NaN NaN 8.0
Or select the same levels counted from the end (the second-to-last and last):
df1 = df1.stack([-2,-1],dropna=False).unstack([-2,-1])
Another idea:
df1 = df1.reindex(pd.MultiIndex.from_product(df1.columns.levels), axis=1)
print (df1)
value value2
group1 o x o x
group2 a b a b a b a b
g metric
0 1 0.0 NaN 2.0 NaN 0.0 NaN 4.0 NaN
2 NaN NaN NaN 1.0 NaN NaN NaN 2.0
1 1 3.0 NaN NaN NaN 6.0 NaN NaN NaN
2 NaN NaN NaN 4.0 NaN NaN NaN 8.0
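Why from_product works here (a note, not part of the original answer): MultiIndex.levels remembers every value observed in each level, even when some combinations never co-occur, so the product rebuilds the full ('value', 'value2') x ('o', 'x') x ('a', 'b') cross. A quick check with the sample data:
print(df1.columns.levels)
# expected: [['value', 'value2'], ['o', 'x'], ['a', 'b']]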
EDIT:
If you need to add new columns given as a list of tuples:
cols = [('value','x','a'), ('value','o','a'),('value','o','b')]
df = df1.reindex(pd.MultiIndex.from_tuples(cols).union(df1.columns), axis=1)
print (df)
value value2
o x o x
a b a b a a b
g metric
0 1 0.0 NaN 2.0 NaN 0.0 4.0 NaN
2 NaN NaN NaN 1.0 NaN NaN 2.0
1 1 3.0 NaN NaN NaN 6.0 NaN NaN
2 NaN NaN NaN 4.0 NaN NaN 8.0
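For the corner case raised in the question, where a value such as b never appears in the data at all, columns.levels cannot supply it. A sketch, assuming the expected categories are known in advance and written out explicitly:
# build the full expected column MultiIndex by hand, then reindex
cols = pd.MultiIndex.from_product(
    [['value', 'value2'], ['o', 'x'], ['a', 'b']],
    names=[None, 'group1', 'group2'])
df1 = df1.reindex(cols, axis=1)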

Get a subset of columns by row value in pandas

I have a DataFrame of users and their ratings for movies:
userId movie1 movie2 movie3 movie4 movie5 movie6
0 4.1 NaN 1.0 NaN 2.1 NaN
1 3.1 1.1 3.4 1.4 NaN NaN
2 2.8 NaN 1.7 NaN 3.0 NaN
3 NaN 5.0 NaN 2.3 NaN 2.1
4 NaN NaN NaN NaN NaN NaN
5 2.3 NaN 2.0 4.0 NaN NaN
There isn't actually a userId column in the DataFrame; it's just being used as the index.
From this DataFrame, I'm trying to make another DataFrame that only contains movies rated by a specific user. For example, if I wanted to make a new DataFrame of movies rated by the user with userId == 0, the output would be a DataFrame with:
userId movie1 movie3 movie5
0 4.1 1.0 2.1
1 3.1 3.4 NaN
2 2.8 1.7 3.0
3 NaN NaN NaN
4 NaN NaN NaN
5 2.3 2.0 NaN
I know how to iterate over the columns, but I don't know how to select the columns I want by checking a row value.
You can use the .loc accessor to select the row for the given userId, then use notna to create a boolean mask marking the columns that do not contain NaN for that user, and finally use this mask to filter the columns:
userId = 0 # specify the userid here
df_user = df.loc[:, df.loc[userId].notna()]
Details:
>>> df.loc[userId].notna()
movie1 True
movie2 False
movie3 True
movie4 False
movie5 True
movie6 False
Name: 0, dtype: bool
>>> df.loc[:, df.loc[userId].notna()]
movie1 movie3 movie5
userId
0 4.1 1.0 2.1
1 3.1 3.4 NaN
2 2.8 1.7 3.0
3 NaN NaN NaN
4 NaN NaN NaN
5 2.3 2.0 NaN
Another approach:
import pandas as pd

user0 = df.iloc[0, :]      # select the first row
flags = user0.notna()      # flag the non-NaN values
flags = flags.tolist()     # convert to a list instead of a Series
newdf = df.iloc[:, flags]  # all rows, only the columns where flags are True
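The same idea condensed to one line (a sketch; note that iloc[0] selects the first row by position, not by userId label):
newdf = df.loc[:, df.iloc[0].notna()]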
Select the row for the userId of interest into a new df, keeping only the relevant columns. Then pd.concat it with the remaining rows, restricted to the columns (movies) of the userId you selected:
user = 0  # set your userId
a = df.loc[[user]].dropna(axis=1)
b = pd.concat([a, df.drop(a.index)[a.columns]])
Which prints:
b
movie1 movie3 movie5
userId
0 4.10 1.00 2.10
1 3.10 3.40 NaN
2 2.80 1.70 3.00
3 NaN NaN NaN
4 NaN NaN NaN
5 2.30 2.00 NaN
Note that I have set the index to be userId as you specified.

Sort DataFrame columns by a given list and add empty columns for the missing ones

I have a DataFrame as below.
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "code": ["AA", "BB", "CC", "DD"],
        "YA": [2, 1, 1, np.nan],
        "YD": [1, np.nan, np.nan, 1],
        "ZB": [1, np.nan, np.nan, np.nan],
        "ZD": [1, np.nan, np.nan, 1],
    }
)
Also, I have a sorting list.
sort_list = ['YD','YA', 'ZD', 'YB', 'ZA', 'ZB']
I am trying to add the missing columns based on the sort list and sort the DataFrame.
expected output:
code YD YA ZD YB ZA ZB
0 AA 1.0 2.0 1.0 NaN NaN 1.0
1 BB NaN 1.0 NaN NaN NaN NaN
2 CC NaN 1.0 NaN NaN NaN NaN
3 DD 1.0 NaN 1.0 NaN NaN NaN
I can get the result using the code below. Is there another (simpler) way to do this?
My code:
col_list = list(set(sort_list) - set(df.columns.to_list()))
df1 = pd.DataFrame(index=df.index, columns=col_list)
df1 = df1.fillna(np.nan)
df2 = df.join(df1, how='left')
df2 = df2.set_index('code')
df2 = df2[sort_list]
df2 = df2.reset_index()
df2
Try using reindex:
df = df.reindex(columns=['code'] + sort_list)
df:
code YD YA ZD YB ZA ZB
0 AA 1.0 2.0 1.0 NaN NaN 1.0
1 BB NaN 1.0 NaN NaN NaN NaN
2 CC NaN 1.0 NaN NaN NaN NaN
3 DD 1.0 NaN 1.0 NaN NaN NaN
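One thing to keep in mind (not part of the original answer): reindex keeps only the listed columns, and newly created ones are filled with NaN by default; pass fill_value if you prefer something else:
df = df.reindex(columns=['code'] + sort_list, fill_value=0)  # fill new columns with 0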

Remove NaN from the dataset

Given the sample data (p must be a NumPy array for the mask assignment to work):
p = np.array([[1.234, 1], [2.2134, 1.2365], [1.1234, 2.5432]])
q = [[2, 2], [0, 1], [2, 4]]
p[p == 22] = np.nan
I am able to remove NaN values from p by doing:
p = np.array([i for i in p if np.any(np.isfinite(i))], np.float64)
q = np.array(q, np.float64)
Can I write a loop to check whether there is a NaN and remove it?
But this works for just one pair of arrays. What if I have a dataset like the one below (the real data is much bigger, about (106, 1900))?
df =
1 1.1 2 2.1 3 3.1 4 4.1 5 5.1
0 43.1024 6.7498 NaN NaN NaN NaN NaN NaN NaN NaN
1 46.0595 1.6829 25.0695 3.7463 NaN NaN NaN NaN NaN NaN
2 25.0695 5.5454 44.9727 8.6660 41.9726 2.6666 84.9566 3.8484 44.9566 1.8484
3 35.0281 7.7525 45.0322 3.7465 14.0369 3.7463 NaN NaN NaN NaN
4 35.0292 7.5616 45.0292 4.5616 23.0292 3.5616 45.0292 NaN NaN
Try, for instance (to fill all NaNs with 0s):
df = df.fillna(0)  # fillna returns a new DataFrame by default; assign it back
Ref: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
You can also use the mean of each column to fill its NaN values:
df = df.fillna(df.mean())
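Since the title says remove, a complementary option (a sketch, not from the original answers) is dropna, which drops NaNs instead of filling them:
df_rows = df.dropna()        # drop every row containing any NaN
df_cols = df.dropna(axis=1)  # or drop every column containing any NaN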

How to drop columns according to NaN percentage in a DataFrame?

For certain columns of df, 80% of the column is NaN. What's the simplest code to drop such columns?
You can use isnull with mean to get the NaN fraction per column, then keep columns with boolean indexing via loc (selecting along the column axis). Note the condition is inverted: keeping columns with a fraction < .8 removes all columns that are >= 0.8 NaN:
df = df.loc[:, df.isnull().mean() < .8]
Sample:
np.random.seed(100)
df = pd.DataFrame(np.random.random((100,5)), columns=list('ABCDE'))
df.loc[:80, 'A'] = np.nan
df.loc[:5, 'C'] = np.nan
df.loc[20:, 'D'] = np.nan
print (df.isnull().mean())
A 0.81
B 0.00
C 0.06
D 0.80
E 0.00
dtype: float64
df = df.loc[:, df.isnull().mean() < .8]
print (df.head())
B C E
0 0.278369 NaN 0.004719
1 0.670749 NaN 0.575093
2 0.209202 NaN 0.219697
3 0.811683 NaN 0.274074
4 0.940030 NaN 0.175410
If you want to remove columns by a minimum count of non-NaN values, dropna works nicely with the thresh parameter and axis=1:
np.random.seed(1997)
df = pd.DataFrame(np.random.choice([np.nan, 1], p=(0.8, 0.2), size=(10, 10)))
print (df)
0 1 2 3 4 5 6 7 8 9
0 NaN NaN NaN 1.0 1.0 NaN NaN NaN NaN NaN
1 1.0 NaN 1.0 NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN 1.0 1.0 NaN NaN NaN
3 NaN NaN NaN NaN 1.0 NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN 1.0 NaN NaN NaN 1.0
5 NaN NaN NaN 1.0 1.0 NaN NaN 1.0 NaN 1.0
6 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
7 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
8 NaN NaN NaN NaN NaN NaN NaN 1.0 NaN NaN
9 1.0 NaN NaN NaN 1.0 NaN NaN 1.0 NaN NaN
df1 = df.dropna(thresh=2, axis=1)
print (df1)
0 3 4 5 7 9
0 NaN 1.0 1.0 NaN NaN NaN
1 1.0 NaN NaN NaN NaN NaN
2 NaN NaN NaN 1.0 NaN NaN
3 NaN NaN 1.0 NaN NaN NaN
4 NaN NaN NaN 1.0 NaN 1.0
5 NaN 1.0 1.0 NaN 1.0 1.0
6 NaN NaN NaN NaN NaN NaN
7 NaN NaN NaN NaN NaN NaN
8 NaN NaN NaN NaN 1.0 NaN
9 1.0 NaN 1.0 NaN 1.0 NaN
EDIT: For non-boolean data, the total number of NaN entries in a column must be less than 80% of the total entries:
df = df.loc[:, df.isnull().sum() < 0.8 * df.shape[0]]
Another option (note that without axis=1 this drops rows, not columns; np.int has been removed from recent NumPy, so the built-in int is used):
df.dropna(thresh=int((100 - percent_NA_cols_required) * (len(df.columns) / 100)), inplace=True)
Basically, dropna's thresh is the number (int) of non-NaN values a row must have in order to be kept; rows with fewer are removed.
You can use pandas dropna. For example:
df.dropna(axis=1, thresh=int(0.2 * df.shape[0]), inplace=True)
Notice that we used 0.2, which is 1 - 0.8, since thresh refers to the number of non-NaN values.
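One edge case worth noting (an observation, not from the original answer): thresh keeps a column with at least that many non-NaN values, so a column that is exactly 80% NaN still has int(0.2 * df.shape[0]) non-NaN values and survives. To drop the exactly-80% case as well, require strictly more:
df.dropna(axis=1, thresh=int(0.2 * df.shape[0]) + 1, inplace=True)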
As suggested in comments, if you use sum() on a boolean test, you can get the number of occurrences.
Code:
def get_nan_cols(df, nan_percent=0.8):
    threshold = len(df.index) * nan_percent
    return [c for c in df.columns if sum(df[c].isnull()) >= threshold]
Used as (df.drop instead of del, since del df[...] only accepts a single column label):
df = df.drop(columns=get_nan_cols(df, 0.8))
