Adding to Pandas DataFrame using timestamps for index creates new columns - python

I have a script that reads data from a CSV, and I want to append new data to the DF as it becomes available. Unfortunately, when I do that, I always end up with new columns. The DF from the CSV looks like this when I print() it:
df = pd.read_csv(filename, index_col=0, parse_dates=True)
Temp RH
Time
2021-05-17 11:08:34 51.08 77.9
2021-05-17 11:10:30 51.08 77.0
2021-05-17 11:10:35 50.72 71.9
2021-05-17 11:10:41 50.72 71.8
2021-05-17 11:12:19 50.72 71.6
... ... ...
2021-05-24 17:13:57 55.22 70.2
2021-05-24 17:14:02 55.22 69.6
2021-05-24 17:14:08 55.22 68.1
2021-05-24 17:14:18 54.86 66.9
2021-05-24 17:14:29 54.68 69.3
I use the following to create a fake new df for testing
timeStamp = datetime.now()
timeStamp = timeStamp.strftime("%m-%d-%Y %H:%M:%S")
t = 51.06
h = 69.3
data = {'Temp': t, 'RH': h}
newDF = pd.DataFrame(data, index = pd.to_datetime([timeStamp]) )
print(newDF)
which gives me
Temp RH
2021-05-24 17:28:32 51.06 69.3
Here is the output when I call append()
print(df.append([df, pd.DataFrame(newDF)], ignore_index = False))
Temp RH Temp RH
2021-05-17 11:08:34 51.08 77.9 NaN NaN
2021-05-17 11:10:30 51.08 77.0 NaN NaN
2021-05-17 11:10:35 50.72 71.9 NaN NaN
2021-05-17 11:10:41 50.72 71.8 NaN NaN
2021-05-17 11:12:19 50.72 71.6 NaN NaN
... ... ... ... ...
2021-05-24 17:14:02 55.22 69.6 NaN NaN
2021-05-24 17:14:08 55.22 68.1 NaN NaN
2021-05-24 17:14:18 54.86 66.9 NaN NaN
2021-05-24 17:14:29 54.68 69.3 NaN NaN
2021-05-24 17:28:32 NaN NaN 51.06 69.3
[223293 rows x 4 columns]
and concat()
df1 = pd.concat([df, newDF], ignore_index=False)
print(df1)
Temp RH Temp RH
2021-05-17 11:08:34 51.08 77.9 NaN NaN
2021-05-17 11:10:30 51.08 77.0 NaN NaN
2021-05-17 11:10:35 50.72 71.9 NaN NaN
2021-05-17 11:10:41 50.72 71.8 NaN NaN
2021-05-17 11:12:19 50.72 71.6 NaN NaN
... ... ... ... ...
2021-05-24 17:14:02 55.22 69.6 NaN NaN
2021-05-24 17:14:08 55.22 68.1 NaN NaN
2021-05-24 17:14:18 54.86 66.9 NaN NaN
2021-05-24 17:14:29 54.68 69.3 NaN NaN
2021-05-24 17:28:32 NaN NaN 51.06 69.3
[111647 rows x 4 columns]

Instead of
print(df.append([df, pd.DataFrame(newDF)], ignore_index=False))
which I believe keeps the columns of each individual dataframe, just call append on the original dataframe itself.
Try
df = df.append(newDF, ignore_index=False)
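Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so pd.concat along the rows is the forward-compatible way to do the same thing. A minimal sketch, assuming the same df and newDF as above:
import pandas as pd

# Row-wise append with concat; both frames keep their DatetimeIndex.
df = pd.concat([df, newDF])
df = df.sort_index()  # optional: keep the index in chronological order
If new columns still appear after this, compare df.columns and newDF.columns character by character (stray whitespace in the CSV header is a common culprit), since concat only aligns columns whose labels match exactly.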

Related

How should I combine the rows of similar time in a Dataframe?

I'm processing a MIMIC dataset. I want to combine the data in rows whose time difference (delta time) is below 10 minutes. How can I do that?
The original data:
charttime hadm_id age is_male HR RR SPO2 Systolic_BP Diastolic_BP MAP PEEP PO2
0 2119-07-20 17:54:00 26270240 NaN NaN NaN NaN NaN 103.0 66.0 81.0 NaN NaN
1 2119-07-20 17:55:00 26270240 68.0 1.0 113.0 26.0 NaN NaN NaN NaN NaN NaN
2 2119-07-20 17:57:00 26270240 NaN NaN NaN NaN 92.0 NaN NaN NaN NaN NaN
3 2119-07-20 18:00:00 26270240 68.0 1.0 114.0 28.0 NaN 85.0 45.0 62.0 16.0 NaN
4 2119-07-20 18:01:00 26270240 NaN NaN NaN NaN 91.0 NaN NaN NaN NaN NaN
5 2119-07-30 21:00:00 26270240 68.0 1.0 90.0 16.0 93.0 NaN NaN NaN NaN NaN
6 2119-07-30 21:00:00 26270240 68.0 1.0 89.0 9.0 94.0 NaN NaN NaN NaN NaN
7 2119-07-30 21:01:00 26270240 68.0 1.0 89.0 10.0 93.0 NaN NaN NaN NaN NaN
8 2119-07-30 21:05:00 26270240 NaN NaN NaN NaN NaN 109.0 42.0 56.0 NaN NaN
9 2119-07-30 21:10:00 26270240 68.0 1.0 90.0 10.0 93.0 NaN NaN NaN NaN NaN
After combining the rows whose delta time is less than 10 min, the output I want:
(when there is duplicate data in the same column among the rows being grouped, just take the first one)
charttime hadm_id age is_male HR RR SPO2 Systolic_BP Diastolic_BP MAP PEEP PO2
0 2119-07-20 17:55:00 26270240 68.0 1.0 113.0 26.0 92.0 103.0 66.0 81.0 16.0 NaN
2119-07-30 20:00:00 26270240 68.0 1.0 90.0 16.0 93.0 NaN NaN NaN NaN NaN
1 2119-07-30 21:00:00 26270240 68.0 1.0 89.0 9.0 94.0 109.0 42.0 56.0 NaN NaN
How can I do this?
First, I would round the timestamp column to 10 minutes:
df['charttime'] = pd.to_datetime(df['charttime']).dt.floor('10T').dt.time
Then, I would drop the duplicates, based on the columns you want to compare (for example, hadm_id and charttime):
df.drop_duplicates(subset=['charttime', 'hadm_id'], keep='first', inplace=True)
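Since drop_duplicates discards the later rows entirely, here is a sketch of a variant that actually merges each 10-minute bucket instead, using GroupBy.first(), which takes the first non-null value per column (matching "just take the first one"). Column names are taken from the question; not tested against the real MIMIC extract:
import pandas as pd

# Floor each charttime to a 10-minute bucket, then merge rows within each bucket.
df['charttime'] = pd.to_datetime(df['charttime']).dt.floor('10T')

# GroupBy.first() returns the first non-null value of every other column,
# so readings scattered across several rows are combined into one row.
combined = df.groupby(['hadm_id', 'charttime'], as_index=False).first()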

np.select instead of for while loop

I am aiming to dramatically speed up my code, which I think can be done using np.select, although I don't know how.
Here is the current output when my code is executed:
date starting_temp average_high average_low limit_temp observation_date Date_Limit_reached
2019-12-03 22:30:00 NaN 13.0 14.8 NaN nan
2019-12-03 23:00:00 NaN 14.7 14.9 NaN nan
2019-12-03 23:30:00 NaN 13.0 13.9 NaN nan
2019-12-04 00:00:00 13.2 13.0 14.7 NaN 2019-12-04 10:00:00
2019-12-04 00:30:00 NaN 14.0 13.8 NaN nan
2019-12-04 01:00:00 NaN 13.9 13.8 NaN nan
2019-12-04 01:30:00 NaN 13.6 14.8 NaN nan
2019-12-04 02:00:00 NaN 13.1 14.5 NaN nan
2019-12-04 02:30:00 NaN 14.9 13.7 NaN nan
2019-12-04 03:00:00 NaN 14.2 14.1 NaN nan
2019-12-04 03:30:00 NaN 13.4 14.1 NaN nan
2019-12-04 04:00:00 NaN 14.3 13.0 NaN nan
2019-12-04 04:30:00 NaN 13.5 14.1 NaN nan
2019-12-04 05:00:00 NaN 13.6 13.4 NaN nan
2019-12-04 05:30:00 NaN 14.5 13.9 NaN nan
2019-12-04 06:00:00 NaN 14.4 14.5 NaN nan
2019-12-04 06:30:00 NaN 13.7 14.2 NaN nan
2019-12-04 07:00:00 NaN 13.7 14.2 NaN nan
2019-12-04 07:30:00 NaN 13.2 14.4 NaN nan
2019-12-04 08:00:00 NaN 13.9 13.1 NaN nan
2019-12-04 08:30:00 NaN 13.9 14.4 NaN nan
2019-12-04 09:00:00 NaN 14.4 13.9 NaN nan
2019-12-04 09:30:00 NaN 14.4 13.8 NaN nan
2019-12-04 10:00:00 NaN 15.0 14.0 NaN nan
2019-12-04 10:30:00 NaN 13.2 13.2 NaN nan
2019-12-04 11:00:00 NaN 14.0 13.3 NaN nan
2019-12-04 11:30:00 NaN 14.2 13.4 NaN nan
2019-12-04 12:00:00 NaN 14.2 13.4 NaN nan
2019-12-04 12:30:00 NaN 13.7 13.6 NaN nan
2019-12-04 13:00:00 NaN 14.1 13.3 NaN nan
2019-12-04 13:30:00 NaN 13.1 14.1 NaN nan
2019-12-04 14:00:00 NaN 13.2 14.3 NaN nan
2019-12-04 14:30:00 NaN 13.7 13.8 NaN nan
The code that produces the final df['Date_Limit_reached'] column is far too slow; I have added it below. I would like to change its structure to np.select if possible:
new_col = []
df_size = len(df)
# Loop over the dataframe
for ind in df.index:
    if not math.isnan(df['starting_temp'][ind]):
        entry_price_val = df['starting_temp'][ind]
        count = 0
        hasValue = False
        while count < df_size:
            if df['starting_temp'][ind] > df['limit_temp'][ind] and df['limit_temp'][ind] >= df['asklow'][count] and df['date'][count] >= df['observation_date'][ind]:
                new_col.append(df['date'][count])
                hasValue = True
                break  # Break the loop if a matching value is found
            elif df['starting_temp'][ind] < df['limit_temp'][ind] and df['limit_temp'][ind] <= df['average_high'][count] and df['date'][count] >= df['observation_date'][ind]:
                new_col.append(df['date'][count])
                hasValue = True
                break  # Break the loop if a matching value is found
            count += 1
        # If no matching value is found, append a NaN value to the column
        if not hasValue:
            new_col.append(float('nan'))
    else:
        new_col.append(float('nan'))
df['Date_Limit_reached'] = new_col
Since I can't run the code due to the lack of a df, here are my suggestions:
Use fewer flags and concrete values instead; it makes the code more readable (hasValue --> val).
You will have a problem if there is an entry with df['starting_temp'][ind] == df['limit_temp'][ind], because none of your cases will fire. Maybe this is the problem with the slow code.
You can pre-calculate the first boolean expression in the while loops. This might solve the issue from the point above.
You don't use entry_price_val.
For further improvement, vectorize your data; this is possible in all of the loops (not shown in my code since I can't test it).
Here is my suggested code:
new_col = []
df_size = len(df)
for ind in df.index:
    val = float('nan')  # use data instead of flags
    if not math.isnan(df['starting_temp'][ind]):
        count = 0
        if df['starting_temp'][ind] > df['limit_temp'][ind]:
            while count < df_size:
                if df['limit_temp'][ind] >= df['asklow'][count] and df['date'][count] >= df['observation_date'][ind]:
                    val = df['date'][count]
                    break  # Break the loop if a matching value is found
                count += 1
        elif df['starting_temp'][ind] < df['limit_temp'][ind]:
            while count < df_size:
                if df['limit_temp'][ind] <= df['average_high'][count] and df['date'][count] >= df['observation_date'][ind]:
                    val = df['date'][count]
                    break  # Break the loop if a matching value is found
                count += 1
    new_col.append(val)
df['Date_Limit_reached'] = new_col
The code snippets were not tested; checking them for correctness is required. Further improvements are possible (hints on request).
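For reference, since the question asks about np.select: it takes a list of boolean condition arrays and a list of aligned choice arrays and returns, per row, the choice of the first condition that is True. It can therefore replace the outer if/elif branching, although the inner "first later row that crosses the limit" search still needs its own vectorization. A sketch of the pattern only, where date_if_above and date_if_below are hypothetical per-row result arrays standing in for that search:
import numpy as np
import pandas as pd

# np.select: per-row branching without a Python loop.
conditions = [
    df['starting_temp'] > df['limit_temp'],
    df['starting_temp'] < df['limit_temp'],
]
choices = [
    date_if_above,  # hypothetical: first later date where asklow falls to limit_temp
    date_if_below,  # hypothetical: first later date where average_high reaches limit_temp
]
df['Date_Limit_reached'] = np.select(conditions, choices, default=pd.NaT)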

How to group by level 0 and describe in a multi index and level dataframe (pandas)?

Here is (file) a multi-index, multi-level dataframe. Loading the dataframe from a CSV:
import pandas as pd
df = pd.read_csv('./enviar/only-bh-extreme-events-satellite.csv'
,index_col=[0,1,2,3,4]
,header=[0,1,2,3]
,skipinitialspace=True
,tupleize_cols=True
)
df.columns = pd.MultiIndex.from_tuples(df.columns)
print(df)
ci \
1
1
00h 06h 12h 18h
wsid lat lon start prcp_24
329 -43.969397 -19.883945 2007-03-18 10:00:00 72.0 NaN NaN NaN NaN
2007-03-20 10:00:00 104.4 NaN NaN NaN NaN
2007-10-18 23:00:00 92.8 NaN NaN NaN NaN
2007-12-21 00:00:00 60.4 NaN NaN NaN NaN
2008-01-19 18:00:00 53.0 NaN NaN NaN NaN
2008-04-05 01:00:00 80.8 0.0 0.0 0.0 0.0
2008-10-31 17:00:00 101.8 NaN NaN NaN NaN
2008-11-01 04:00:00 82.0 NaN NaN NaN NaN
2008-12-29 00:00:00 57.8 NaN NaN NaN NaN
2009-03-28 10:00:00 72.4 NaN NaN NaN NaN
2009-10-07 02:00:00 57.8 NaN NaN NaN NaN
2009-10-08 00:00:00 83.8 NaN NaN NaN NaN
2009-11-28 16:00:00 84.4 NaN NaN NaN NaN
2009-12-18 04:00:00 51.8 NaN NaN NaN NaN
2009-12-28 00:00:00 96.4 NaN NaN NaN NaN
2010-01-06 05:00:00 74.2 NaN NaN NaN NaN
2011-12-18 00:00:00 113.6 NaN NaN NaN NaN
2011-12-19 00:00:00 90.6 NaN NaN NaN NaN
2012-11-15 07:00:00 85.8 NaN NaN NaN NaN
2013-10-17 00:00:00 52.4 NaN NaN NaN NaN
2014-04-01 22:00:00 72.0 0.0 0.0 0.0 0.0
2014-10-20 06:00:00 56.6 NaN NaN NaN NaN
2014-12-13 09:00:00 104.4 NaN NaN NaN NaN
2015-02-09 00:00:00 62.0 NaN NaN NaN NaN
2015-02-16 19:00:00 56.8 NaN NaN NaN NaN
2015-05-06 17:00:00 50.8 0.0 0.0 0.0 0.0
2016-02-26 00:00:00 52.2 NaN NaN NaN NaN
343 -44.416883 -19.885398 2008-08-30 21:00:00 50.4 0.0 0.0 0.0 0.0
2009-02-01 01:00:00 53.8 NaN NaN NaN NaN
2010-03-22 00:00:00 51.4 NaN NaN NaN NaN
2011-11-12 21:00:00 57.8 NaN NaN NaN NaN
2011-11-25 22:00:00 107.6 NaN NaN NaN NaN
2012-12-28 20:00:00 94.0 NaN NaN NaN NaN
2013-10-16 22:00:00 50.8 NaN NaN NaN NaN
2014-11-06 21:00:00 55.2 NaN NaN NaN NaN
2015-01-24 00:00:00 80.0 NaN NaN NaN NaN
2015-01-27 00:00:00 52.8 NaN NaN NaN NaN
370 -43.958651 -19.980034 2015-01-28 23:00:00 50.4 NaN NaN NaN NaN
2015-01-29 00:00:00 50.6 NaN NaN NaN NaN
I'm trying to describe, grouping by level 0 (the variables ci, d, r, z...). I'd like to get the count, max, min, std, etc.
When I tried df.describe(), it did not group by level 0. This is what I expected:
ci cc z r -> Level 0
count 39.000000 39.000000 39.000000 39.000000
mean 422577.032051 422025.595353 421672.402244 422449.004808
std 144740.869473 144550.040108 144425.167173 144692.422425
min 0.000000 0.000000 0.000000 0.000000
25% 467962.437500 467512.156250 467915.437500 468552.750000
50% 470644.687500 469924.468750 469772.312500 470947.468750
75% 472557.875000 471953.828125 471156.250000 472279.937500
max 473988.062500 473269.187500 472358.125000 473675.812500
I had created this helper function:
def format_percentiles(percentiles):
    percentiles = np.asarray(percentiles)
    percentiles = 100 * percentiles
    int_idx = (percentiles.astype(int) == percentiles)
    if np.all(int_idx):
        out = percentiles.astype(int).astype(str)
        return [i + '%' for i in out]
And this is my own describe function:
import numpy as np
from functools import reduce

def describe_customized(df):
    _df = pd.DataFrame()
    data = []
    variables = list(set(df.columns.get_level_values(0)))
    variables.sort()
    for var in variables:
        idx = pd.IndexSlice
        values = df.loc[:, idx[[var]]].values.tolist()  # get all values for a specific variable
        z = reduce(lambda x, y: x + y, values)  # flatten a list of lists
        data.append(pd.Series(z, name=var))
    #return data
    for series in data:
        percentiles = np.array([0.25, 0.5, 0.75])
        formatted_percentiles = format_percentiles(percentiles)
        stat_index = (['count', 'mean', 'std', 'min'] + formatted_percentiles + ['max'])
        d = ([series.count(), series.mean(), series.std(), series.min()] +
             [series.quantile(x) for x in percentiles] + [series.max()])
        s = pd.Series(d, index=stat_index, name=series.name)
        _df = pd.concat([_df, s], axis=1)
    return _df

dd = describe_customized(df)
Result:
al asn cc chnk ci ciwc \
25% 0.130846 0.849998 0.000000 0.018000 0.0 0.000000e+00
50% 0.131369 0.849999 0.000000 0.018000 0.0 0.000000e+00
75% 0.134000 0.849999 0.000000 0.018000 0.0 0.000000e+00
count 624.000000 624.000000 23088.000000 624.000000 64.0 2.308800e+04
max 0.137495 0.849999 1.000000 0.018006 0.0 5.576574e-04
mean 0.119082 0.762819 0.022013 0.016154 0.0 8.247306e-07
min 0.000000 0.000000 0.000000 0.000000 0.0 0.000000e+00
std 0.040338 0.258087 0.098553 0.005465 0.0 8.969210e-06
I created a function that returns a new dataframe with the statistics of the variables for a level of your choice:
def describe_levels(df, level):
    df_des = pd.DataFrame(
        index=df.columns.levels[0],
        columns=['count', 'mean', 'std', 'min', '25', '50', '75', 'max']
    )
    for index in df_des.index:
        df_des.loc[index, 'count'] = len(df[index]['1'][level])
        df_des.loc[index, 'mean'] = df[index]['1'][level].mean().mean()
        df_des.loc[index, 'std'] = df[index]['1'][level].std().mean()
        df_des.loc[index, 'min'] = df[index]['1'][level].min().mean()
        df_des.loc[index, 'max'] = df[index]['1'][level].max().mean()
        df_des.loc[index, '25'] = df[index]['1'][level].quantile(q=0.25).mean()
        df_des.loc[index, '50'] = df[index]['1'][level].quantile(q=0.5).mean()
        df_des.loc[index, '75'] = df[index]['1'][level].quantile(q=0.75).mean()
    return df_des
For example, I called:
describe_levels(df,'1').T
See here the result for pressure level 1:
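A possibly simpler alternative (a sketch, not tested against this exact file): stacking the lower column levels into the row index leaves only the level-0 variable names as columns, so a plain describe() then pools every sub-column of each variable:
# Move column levels 1-3 into the row index so that only level 0
# (ci, cc, z, r, ...) remains as columns, then describe each variable
# over all of its pooled sub-columns.
pooled = df.stack(level=[1, 2, 3])
summary = pooled.describe()  # count, mean, std, min, 25%, 50%, 75%, max per variable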

Pandas converting timestamp and monthly summary

I have several .csv files which I am importing via Pandas, and then I work out a summary of the data (min, max, mean), ideally as weekly and monthly reports. I have the following code, but I just do not seem to get the monthly summary to work; I am sure the problem is with the timestamp conversion.
What am I doing wrong?
import pandas as pd
import numpy as np
#Format of the data that is been imported
#2017-05-11 18:29:14+00:00,264.0,987.99,26.5,23.70,512.0,11.763,52.31
df = pd.read_csv('data.csv')
df['timestamp'] = pd.to_datetime(df['time'], format='%Y-%m-%d %H:%M:%S')
print 'month info'
print [g for n, g in df.groupby(pd.Grouper(key='timestamp',freq='M'))]
print(data.groupby('timestamp')['light'].mean())
IIUC, you almost have it, and your datetime conversion is fine. Here is an example:
Starting from a dataframe like this (which is your example row, duplicated with slight modifications):
>>> df
time x y z a b c d
0 2017-05-11 18:29:14+00:00 264.0 947.99 24.5 53.7 511.0 11.463 12.31
1 2017-05-15 18:29:14+00:00 265.0 957.99 25.5 43.7 512.0 11.563 22.31
2 2017-05-21 18:29:14+00:00 266.0 967.99 26.5 33.7 513.0 11.663 32.31
3 2017-06-11 18:29:14+00:00 267.0 977.99 26.5 23.7 514.0 11.763 42.31
4 2017-06-22 18:29:14+00:00 268.0 997.99 27.5 13.7 515.0 11.800 52.31
You can do what you did before with your datetime:
df['timestamp'] = pd.to_datetime(df['time'], format='%Y-%m-%d %H:%M:%S')
And then get your summaries either separately:
monthly_mean = df.groupby(pd.Grouper(key='timestamp',freq='M')).mean()
monthly_max = df.groupby(pd.Grouper(key='timestamp',freq='M')).max()
monthly_min = df.groupby(pd.Grouper(key='timestamp',freq='M')).min()
weekly_mean = df.groupby(pd.Grouper(key='timestamp',freq='W')).mean()
weekly_min = df.groupby(pd.Grouper(key='timestamp',freq='W')).min()
weekly_max = df.groupby(pd.Grouper(key='timestamp',freq='W')).max()
# Examples:
>>> monthly_mean
x y z a b c d
timestamp
2017-05-31 265.0 957.99 25.5 43.7 512.0 11.5630 22.31
2017-06-30 267.5 987.99 27.0 18.7 514.5 11.7815 47.31
>>> weekly_mean
x y z a b c d
timestamp
2017-05-14 264.0 947.99 24.5 53.7 511.0 11.463 12.31
2017-05-21 265.5 962.99 26.0 38.7 512.5 11.613 27.31
2017-05-28 NaN NaN NaN NaN NaN NaN NaN
2017-06-04 NaN NaN NaN NaN NaN NaN NaN
2017-06-11 267.0 977.99 26.5 23.7 514.0 11.763 42.31
2017-06-18 NaN NaN NaN NaN NaN NaN NaN
2017-06-25 268.0 997.99 27.5 13.7 515.0 11.800 52.31
Or aggregate them all together to get a multi-indexed dataframe with your summaries:
monthly_summary = df.groupby(pd.Grouper(key='timestamp',freq='M')).agg(['mean', 'min', 'max'])
weekly_summary = df.groupby(pd.Grouper(key='timestamp',freq='W')).agg(['mean', 'min', 'max'])
# Example of summary of column 'x':
>>> monthly_summary['x']
mean min max
timestamp
2017-05-31 265.0 264.0 266.0
2017-06-30 267.5 267.0 268.0
>>> weekly_summary['x']
mean min max
timestamp
2017-05-14 264.0 264.0 264.0
2017-05-21 265.5 265.0 266.0
2017-05-28 NaN NaN NaN
2017-06-04 NaN NaN NaN
2017-06-11 267.0 267.0 267.0
2017-06-18 NaN NaN NaN
2017-06-25 268.0 268.0 268.0
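Equivalently, once the timestamp is the index you can use resample instead of pd.Grouper; a short sketch under the same column names as above (dropping the original string time column so only numeric columns are aggregated):
# Same monthly/weekly summaries via resample on a DatetimeIndex.
df = df.set_index('timestamp').drop(columns=['time'])
monthly_summary = df.resample('M').agg(['mean', 'min', 'max'])
weekly_summary = df.resample('W').agg(['mean', 'min', 'max'])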

pandas interpolate doesn't fill null values

I have this code in which I load my data into a dataframe, and I try to fill the NaN values using .interpolate instead of replacing them with 0.
My dataframe looks like this:
weight height wc hc FBS HBA1C
0 NaN NaN NaN NaN NaN NaN
1 55.6 151.0 NaN NaN 126.0 NaN
2 42.8 151.0 73.0 79.0 NaN NaN
3 60.8 155.0 NaN NaN 201.0 NaN
4 NaN NaN NaN NaN NaN NaN
5 60.0 NaN 87.0 92.0 NaN NaN
6 NaN NaN NaN NaN NaN NaN
7 NaN NaN NaN NaN NaN NaN
8 NaN NaN NaN NaN 194.0 NaN
9 57.0 158.0 95.0 90.0 NaN NaN
10 46.0 NaN 83.0 91.0 223.0 NaN
11 NaN NaN NaN NaN NaN NaN
12 NaN NaN NaN NaN NaN NaN
13 58.5 164.0 NaN NaN NaN NaN
14 62.0 154.0 80.5 100.0 NaN NaN
15 NaN NaN NaN NaN NaN NaN
16 57.0 152.0 NaN NaN NaN NaN
17 62.4 153.0 88.0 99.0 NaN NaN
18 NaN NaN NaN NaN NaN NaN
19 48.0 146.0 NaN NaN NaN NaN
20 68.7 NaN NaN NaN NaN NaN
21 49.0 146.0 NaN NaN NaN NaN
22 NaN NaN NaN NaN NaN NaN
23 NaN NaN NaN NaN NaN NaN
24 70.2 161.0 NaN NaN NaN NaN
25 70.4 161.0 93.0 68.0 NaN NaN
26 61.8 143.0 91.0 98.0 NaN NaN
27 70.4 NaN NaN NaN NaN NaN
28 70.1 144.0 100.0 103.0 NaN NaN
29 NaN NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
318 49.0 146.0 92.0 89.0 NaN NaN
319 64.7 145.0 87.0 107.0 NaN NaN
320 55.5 149.0 81.0 101.0 NaN NaN
321 55.4 145.0 87.0 96.0 NaN NaN
322 53.1 153.0 83.0 96.0 NaN NaN
323 52.1 147.0 89.0 92.0 NaN NaN
324 68.9 167.0 96.0 100.0 NaN NaN
325 NaN NaN NaN NaN NaN NaN
326 57.0 142.0 100.0 101.0 NaN NaN
327 72.5 163.0 98.0 95.0 NaN NaN
328 73.5 157.0 94.0 114.0 NaN NaN
329 61.0 160.0 90.0 89.5 NaN NaN
330 49.0 150.0 80.0 90.0 NaN NaN
331 50.0 150.0 83.0 90.0 NaN NaN
332 67.6 155.0 92.0 103.0 NaN NaN
333 NaN NaN NaN NaN NaN NaN
334 78.7 162.0 99.0 101.0 NaN NaN
335 74.5 155.0 98.0 110.0 NaN NaN
336 68.0 152.0 85.0 93.0 NaN NaN
337 67.0 152.0 NaN NaN 179.1 NaN
338 NaN NaN NaN NaN 315.0 NaN
339 38.0 145.0 66.0 NaN 196.0 NaN
340 50.0 148.0 NaN NaN 133.0 NaN
341 73.5 NaN NaN NaN NaN NaN
342 74.5 NaN NaN NaN NaN NaN
343 NaN NaN NaN NaN NaN NaN
344 67.0 152.0 106.0 NaN NaN NaN
345 52.0 145.0 94.0 NaN NaN NaN
346 52.0 159.0 89.0 NaN NaN NaN
347 67.0 153.0 92.0 91.0 NaN NaN
my code:
import pandas as pd
df = pd.read_csv('final_dataset_3.csv')
import numpy as np
df['weight'].replace(0,np.nan, inplace=True)
df['height'].replace(0,np.nan, inplace=True)
df['wc'].replace(0,np.nan, inplace=True)
df['hc'].replace(0,np.nan, inplace=True)
df['FBS'].replace(0,np.nan, inplace=True)
df['HBA1C'].replace(0,np.nan, inplace=True)
df1 = df.interpolate()
df1
df1 looks like this
weight height wc hc FBS HBA1C
0 NaN NaN NaN NaN NaN NaN
1 55.600000 151.0 NaN NaN 126.000000 NaN
2 42.800000 151.0 73.000000 79.000000 163.500000 NaN
3 60.800000 155.0 77.666667 83.333333 201.000000 NaN
4 60.400000 155.5 82.333333 87.666667 199.600000 NaN
5 60.000000 156.0 87.000000 92.000000 198.200000 NaN
6 59.250000 156.5 89.000000 91.500000 196.800000 NaN
After running the code, it didn't replace the NaN values with a value; instead it replaced them with values that have more decimal places.
Looking at this data leads me to believe that interpolating the values would be improper. Each row represents some attributes for different people. You cannot base a missing value of, say, weight on adjacent rows. I understand that you need to deal with the NaN's because much of the data will be useless when building many types of models.
Instead maybe you should fill with the mean() or median(). Here's a simple dataframe with some missing values.
df
Out[58]:
height weight
0 54.0 113.0
1 61.0 133.0
2 NaN 129.0
3 48.0 NaN
4 60.0 107.0
5 51.0 114.0
6 NaN 165.0
7 51.0 NaN
8 53.0 147.0
9 NaN 124.0
To replace missing values with the mean() of the column:
df.fillna(df.mean())
Out[59]:
height weight
0 54.0 113.0
1 61.0 133.0
2 54.0 129.0
3 48.0 129.0
4 60.0 107.0
5 51.0 114.0
6 54.0 165.0
7 51.0 129.0
8 53.0 147.0
9 54.0 124.0
Of course, you could easily use median() or some other method that makes sense for your data.
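For example, a median fill on the same toy frame (a small sketch; numeric_only keeps any non-numeric columns out of the calculation):
# Median is more robust to outliers than the mean for skewed columns.
df_filled = df.fillna(df.median(numeric_only=True))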
