I am trying to find a way to create a pandas Series which is based on values within another DataFrame. A simplified example would be:
df_idx = pd.DataFrame([0, 2, 2, 3, 1, 3])
df_lookup = pd.DataFrame([10.0, 20.0, 30.0, 40.0])
where I wish to generate a new pandas series of values drawn from df_lookup based on the indices in df_idx, i.e.:
df_target = pd.DataFrame([10.0, 30.0, 30.0, 40.0, 20.0, 40.0])
Clearly, it is desirable to do this without looping for speed.
Any help greatly appreciated.
This is what reindex is for:
df_idx = pd.DataFrame([0, 2, 2, 3, 1, 3])
df_lookup = pd.DataFrame([10.0, 20.0, 30.0, 40.0])
df_lookup.reindex(df_idx[0])
Output:
      0
0
0  10.0
2  30.0
2  30.0
3  40.0
1  20.0
3  40.0
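A small usage note: the result above carries the lookup labels as its index. If you want a plain 0..n index aligned with df_idx, resetting it afterwards works:

df_target = df_lookup.reindex(df_idx[0]).reset_index(drop=True)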
This is precisely the use case for iloc:
import pandas as pd
df = pd.DataFrame([10.0, 20.0, 30.0, 40.0])
idx_lst = pd.Series([0, 2, 2, 3, 1, 3])
res = df.iloc[idx_lst]
See here for more on indexing by position.
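For reference, res keeps the positions used for the lookup as its index, with the values requested:

res
      0
0  10.0
2  30.0
2  30.0
3  40.0
1  20.0
3  40.0

If a clean sequential index is preferred, res.reset_index(drop=True) provides it, same as with reindex above.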
Is there a way to find the maximum length of contiguous periods without data for each column?
df.isna().sum() gives me the total number of NaNs per column, but in the example below I am looking for the length of the longest NaN run in each column, i.e. A=3 and B=2:
import pandas as pd
import numpy as np
i = pd.date_range('2018-04-09', periods=8, freq='1D')
df = pd.DataFrame({'A': [1, 5, np.nan, np.nan, np.nan, 2, 5, np.nan],
                   'B': [np.nan, 2, 3, np.nan, np.nan, 6, np.nan, 8]}, index=i)
df
For one Series you can make groups of consecutive NaNs, using the non-NaN entries as group boundaries. Then count the NaNs per group and take the max:
s = df['A'].isna()
s.groupby((~s).cumsum()).sum().max()
Output: 3
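To see why this works, here are the intermediate values for column A (each non-NaN value increments the counter, so every NaN run shares the counter value of the non-NaN just before it; a leading NaN run would fall in group 0 and still be counted):

s.tolist()              # [False, False, True, True, True, False, False, True]
(~s).cumsum().tolist()  # [1, 2, 2, 2, 2, 3, 4, 4]
# per-group sums of s: 0, 3, 0, 1  ->  max is 3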
Now do this for all columns:
def max_na_stretch(s):
    s = s.isna()
    return s.groupby((~s).cumsum()).sum().max()

df.apply(max_na_stretch)
Output:
A    3
B    2
dtype: int64
I am trying to create in df1 the column Factor based on the lookup table df2. However, the Code columns used for mapping are not exactly the same, and the lookup only contains the Code strings partially.
import pandas as pd
df1 = pd.DataFrame({
    'Date': ['2021-01-01', '2021-01-01', '2021-01-01', '2021-01-02', '2021-01-02', '2021-01-02', '2021-01-02', '2021-01-03'],
    'Ratings': [9.0, 8.0, 5.0, 3.0, 2, 3, 6, 5],
    'Code': ['R:EST 5R', 'R:EKG EK', 'R:EKG EK', 'R:EST 5R', 'R:EKGP', 'R:EST 5R', 'R:OID_P', 'R:OID_P']})
df2 = pd.DataFrame({
    'Code': ['R:EST', 'R:EKG', 'R:OID'],
    'Factor': [1, 1.3, 0.9]})
So far I haven't been able to map the data frames correctly, because the Code values are not exactly the same. The Code column does not necessarily start with "R:".
df1['Factor'] = df1['Code'].map(df2.set_index('Code')['Factor'])
This is what the preferred output would look like:
df3 = pd.DataFrame({
    'Date': ['2021-01-01', '2021-01-01', '2021-01-01', '2021-01-02', '2021-01-02', '2021-01-02', '2021-01-02', '2021-01-03'],
    'Ratings': [9.0, 8.0, 5.0, 3.0, 2, 3, 6, 5],
    'Code': ['R:EST 5R', 'R:EKG EK', 'R:EKG EK', 'R:EST 5R', 'R:EKGP', 'R:EST 5R', 'R:OID_P', 'R:OID_P'],
    'Factor': [1, 1.3, 1.3, 1, 1.3, 1, 0.9, 0.9]})
Thanks a lot!
>>> df1['Code'].str[:5].map(df2.set_index('Code')['Factor'])
0    1.0
1    1.3
2    1.3
3    1.0
4    1.3
5    1.0
6    0.9
7    0.9
Name: Code, dtype: float64
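To write this back as the Factor column (this assumes, as in the sample data, that the meaningful prefix is always exactly 5 characters):

df1['Factor'] = df1['Code'].str[:5].map(df2.set_index('Code')['Factor'])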
>>> (df2.Code
...      .apply(lambda x: df1.Code.str.contains(x))
...      .T
...      .idxmax(axis=1)
...      .apply(lambda x: df2.Factor.iloc[x])
... )
0    1.0
1    1.3
2    1.3
3    1.0
4    1.3
5    1.0
6    0.9
7    0.9
dtype: float64
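If the prefixes were not all the same length, one alternative sketch (not from the answers above) is to build a regex from df2['Code'] and extract the matching prefix before mapping; re.escape guards against regex metacharacters in the codes:

import re
pattern = '(' + '|'.join(re.escape(c) for c in df2['Code']) + ')'
df1['Factor'] = (df1['Code'].str.extract(pattern, expand=False)
                            .map(df2.set_index('Code')['Factor']))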
I am collecting heart rate values over the course of time. Each subject varies in the length of time for which data was collected. I would like to make a table of the last 3 seconds of collected data.
import pandas as pd
import numpy as np
# example data (rows of unequal length are padded with NaN by pandas)
example_s = [["4/20/21 4:20", 302, 0, 0, 1, 2, 3],
             ["2/17/21 9:20", 135, 1, 1.4, 8, 10, np.nan, np.nan],
             ["2/17/21 9:20", 111, 5, 5, 1, np.nan, np.nan, np.nan, np.nan]]
example_s_table = pd.DataFrame(example_s, columns=['Date_Time', 'CID', 0, 1, 2, 3, 4, 5, 6])
desired_outcome = [["4/20/21 4:20", 302, 1, 2, 3],
                   ["2/17/21 9:20", 135, 1.4, 8, 10],
                   ["2/17/21 9:20", 111, 5, 5, 1]]
desired_outcome_table = pd.DataFrame(desired_outcome, columns=['Date_Time', 'CID', 'Second 1', 'Second 2', 'Second 3'])
I can see how to collect a single instance of the data from the example shown here, but would like to know how to quickly add multiple values to my table:
desired_outcome_table["Last Second"]=example_s_table.iloc[:,1:].ffill(axis=1).iloc[:, -1]
Python Dataframe Get Value of Last Non Null Column for Each Row
Try:
df = example_s_table.copy()
df = df.set_index(['Date_Time', 'CID'])
df_out = df.mask(df.eq(0)) \
           .apply(lambda x: pd.Series(x.dropna().tail(3).values), axis=1) \
           .rename(columns=lambda x: f'Second {x+1}')
df_out['Last Second'] = df_out['Second 3']
print(df_out.reset_index())
Output:
      Date_Time  CID  Second 1  Second 2  Second 3  Last Second
0  4/20/21 4:20  302       1.0       2.0       3.0          3.0
1  2/17/21 9:20  135       1.4       8.0      10.0         10.0
2  2/17/21 9:20  111       5.0       5.0       1.0          1.0
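A note on the design: df.mask(df.eq(0)) converts zeros to NaN so they are skipped along with the real gaps. If zeros are valid heart-rate readings in your data (an assumption to check), the same sketch without that step is:

df_out = df.apply(lambda x: pd.Series(x.dropna().tail(3).values), axis=1) \
           .rename(columns=lambda x: f'Second {x+1}')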
I wish to find the 5 largest numbers in a DataFrame and store the index name and column name for each of these 5 values.
I am trying to use the nlargest() and idxmax() methods but failing to achieve what I want. My code is as below:
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
df = DataFrame({'a': [1, 10, 8, 11, -1], 'b': [1.0, 2.0, 6, 3.0, 4.0], 'c': [1.0, 2.0, 6, 3.0, 4.0]})
Can you kindly let me know how I can achieve this? Thank you.
Use stack and nlargest:
max_vals = df.stack().nlargest(5)
This will give you a Series with a multiindex, where the first level is the original DataFrame's index, and the second level is the column name for the given value. Here's what max_vals looks like:
3  a    11.0
1  a    10.0
2  a     8.0
   b     6.0
   c     6.0
dtype: float64
To explicitly get the index and column names, use get_level_values on the index of max_vals:
max_idx = max_vals.index.get_level_values(0)
max_cols = max_vals.index.get_level_values(1)
The result of max_idx:
Int64Index([3, 1, 2, 2, 2], dtype='int64')
The result of max_cols:
Index(['a', 'a', 'a', 'b', 'c'], dtype='object')
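If you want the row/column pairs together, the MultiIndex of max_vals already holds them as tuples:

list(max_vals.index)
# [(3, 'a'), (1, 'a'), (2, 'a'), (2, 'b'), (2, 'c')]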
Suppose I use df.isnull().sum() and get a count of the NA values in each column of the dataframe df. I want to remove any column whose NA count is at or above some threshold K.
For example:
import pandas as pd
import numpy as np

df = pd.DataFrame({'A': [1, 2.1, np.nan, 4.7, 5.6, 6.8],
                   'B': [0, np.nan, np.nan, 0, 0, 0],
                   'C': [0, 0, 0, 0, 0, 0.0],
                   'D': [5, 5, np.nan, np.nan, 5.6, 6.8],
                   'E': [0, np.nan, np.nan, np.nan, np.nan, np.nan]})
df.isnull().sum()
A    1
B    2
C    0
D    2
E    5
dtype: int64
Suppose I want to remove columns that have 2 or more NA values. How would one approach this problem? My output should be:
df.columns
A,C
Can anybody help me in doing this?
Thanks
Call dropna and pass axis=1 to drop column-wise, and pass thresh=len(df) - K + 1. What thresh does is set the minimum number of non-NaN values a column needs in order to be kept, so any column with K or more NaNs is dropped. For K=2 on these 6 rows, that is thresh=len(df)-1:
In [22]:
df.dropna(axis=1, thresh=len(df)-1)
Out[22]:
     A  C
0  1.0  0
1  2.1  0
2  NaN  0
3  4.7  0
4  5.6  0
5  6.8  0
If you just want the columns:
In [23]:
df.dropna(axis=1, thresh=len(df)-1).columns
Out[23]:
Index(['A', 'C'], dtype='object')
Or simply mask the counts output against the columns:
In [28]:
df.columns[df.isnull().sum() < 2]
Out[28]:
Index(['A', 'C'], dtype='object')
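Either form generalises to an arbitrary cutoff; K here is just a variable you set yourself (drop columns with K or more NaNs):

K = 2
df.dropna(axis=1, thresh=len(df) - K + 1)  # keeps columns with fewer than K NaNs
df.columns[df.isnull().sum() < K]          # or just the surviving column labels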
Could do something like:
df = df.reindex(columns=[x for x in df.columns.values if df[x].isnull().sum() < threshold])
This builds a list of columns that meet your requirement (fewer than threshold nulls), then uses that list to reindex the dataframe. So if you set threshold to 1:
threshold = 1
df = pd.DataFrame({'A': [1, 2.1, np.nan, 4.7, 5.6, 6.8],
                   'B': [0, np.nan, np.nan, 0, 0, 0],
                   'C': [0, 0, 0, 0, 0, 0.0],
                   'D': [5, 5, np.nan, np.nan, 5.6, 6.8],
                   'E': ['NA', 'NA', 'NA', 'NA', 'NA', 'NA']})  # 'E' holds the string 'NA', which isnull() does not count
df = df.reindex(columns=[x for x in df.columns.values if df[x].isnull().sum() < threshold])
df.count()
Will yield:
C    6
E    6
dtype: int64
The dropna() function has a thresh argument that lets you specify the number of non-NaN values you require, so this gives your desired output:
df.dropna(axis=1, thresh=5).count()
A    5
C    6
E    6
dtype: int64
If you wanted just C & E, you'd have to change thresh to 6 in this case.
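For completeness, thresh=6 requires all 6 rows to be non-NaN, which keeps only the fully populated columns of this modified frame:

df.dropna(axis=1, thresh=6).count()
C    6
E    6
dtype: int64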