pandas: fill missing data in data frame columns - python

I have the following pandas data frame:
import numpy as np
import pandas as pd
timestamps = [1, 14, 30]
data = dict(quantities=[1, 4, 9], e_quantities=[1, 2, 3])
df = pd.DataFrame(data=data, columns=data.keys(), index=timestamps)
which looks like this:
    quantities  e_quantities
1            1             1
14           4             2
30           9             3
However, the timestamps should run from 1 to 52:
index = pd.RangeIndex(1, 53)
The following line provides the timestamps that are missing:
series_fill = pd.Series(np.nan, index=index.difference(df.index)).sort_index()
How can I get the quantities and e_quantities columns to have NaN values at these missing timestamps?
I've tried:
df = pd.concat([df, series_fill]).sort_index()
but it adds another column (0) and swaps the order of the original data frame:
     0  e_quantities  quantities
1  NaN           1.0         1.0
2  NaN           NaN         NaN
3  NaN           NaN         NaN
Thanks for any help here.

I think you are looking for reindex:
df = df.reindex(index)
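A quick runnable sketch of what reindex produces for the question's setup: the existing rows are kept and every missing timestamp gets a NaN row in both columns.

```python
import numpy as np
import pandas as pd

# Rebuild the question's DataFrame
timestamps = [1, 14, 30]
data = dict(quantities=[1, 4, 9], e_quantities=[1, 2, 3])
df = pd.DataFrame(data=data, index=timestamps)

# Reindex against the full 1..52 range; missing timestamps become NaN rows
index = pd.RangeIndex(1, 53)
df = df.reindex(index)

print(df.shape)   # (52, 2)
print(df.loc[2])  # all NaN
print(df.loc[14]) # original row preserved
```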

Related

Merge two dataframes of different lengths with matching ID and fill NaN values of main dataframe on two columns

I have two dataframes. The main dataframe has two columns for Lat and Long, some of which have values and some of which are NaN. I have another dataframe that is a subset of this main dataframe, with Lat and Long filled in with values. I'd like to fill in the main DataFrame with these values based on a matching ID.
Main DataFrame:
ID Lat Long
0 9547507704 33.853682 -80.369867
1 9777677704 32.942332 -80.066165
2 5791407702 47.636067 -122.302559
3 6223567700 34.224719 -117.372550
4 9662437702 42.521828 -82.913680
... ... ... ...
968552 4395967002 NaN NaN
968553 6985647108 NaN NaN
968554 7996438405 NaN NaN
968555 9054647103 NaN NaN
968556 9184687004 NaN NaN
DataFrame to fill:
ID Lat Long
0 2392497107 36.824257 -76.272486
1 2649457102 37.633918 -77.507746
2 2952437110 37.511077 -77.528711
3 3379937304 39.119430 -77.569008
4 3773127208 36.909731 -76.070420
... ... ... ...
23263 9512327001 37.371059 -79.194838
23264 9677417002 38.406665 -78.913133
23265 9715167306 38.761194 -77.454184
23266 9767568404 37.022287 -76.319882
23267 9872047407 38.823017 -77.057818
The two dataframes are of different lengths.
EDIT for clarification: I need to replace the NaN in the Lat & Long columns of the main DataFrame with the Lat & Long from the subset if ID matches in both DataFrames. My DataFrames are both >60 columns, I am only trying to replace the NaN for those two columns.
Edit:
I went with this mapping solution, although it isn't exactly what I'm looking for; I suspect there is a much simpler one.
# mapping coordinates to NaN values in main
m = dict(zip(fill_df.ID, fill_df.Lat))
main_df.Lat = main_df.Lat.fillna(main_df.ID.map(m))
n = dict(zip(fill_df.ID, fill_df.Long))
main_df.Long = main_df.Long.fillna(main_df.ID.map(n))
new_df = pd.merge(main_df, sub_df, how='left', on='ID')
I guess the left join will do the job.
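A sketch of that left-join idea on toy data (the frame names follow the question, but the values and the 'other' column are made up; the real frames have >60 columns, and only Lat and Long are touched):

```python
import numpy as np
import pandas as pd

main_df = pd.DataFrame({'ID': [1, 2, 3],
                        'Lat': [10.0, np.nan, np.nan],
                        'Long': [20.0, np.nan, np.nan],
                        'other': ['a', 'b', 'c']})
fill_df = pd.DataFrame({'ID': [2, 3],
                        'Lat': [11.0, 12.0],
                        'Long': [21.0, 22.0]})

# Left-join the filler coordinates, then use them only where main is NaN
merged = main_df.merge(fill_df, on='ID', how='left', suffixes=('', '_fill'))
for col in ('Lat', 'Long'):
    main_df[col] = merged[col].fillna(merged[col + '_fill'])

print(main_df)
```

Because the suffixes keep the main frame's column names unchanged, the other 60-odd columns pass through untouched.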
One approach is to use DataFrame.combine_first. This method aligns DataFrames on index and columns, so you need to set ID as the index of each DataFrame, call df_main.combine_first(df_filler), then reset ID back into a column. (Seems awkward; there's probably a more elegant approach.)
Assuming your main DataFrame is named df_main and your DataFrame to fill is named df_filler:
df_main.set_index('ID').combine_first(df_filler.set_index('ID')).reset_index()
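A minimal runnable sketch of that pattern (frame names follow the answer; the values are made up):

```python
import numpy as np
import pandas as pd

df_main = pd.DataFrame({'ID': [1, 2, 3],
                        'Lat': [4.0, np.nan, np.nan],
                        'Long': [7.0, np.nan, np.nan]})
df_filler = pd.DataFrame({'ID': [2, 3],
                          'Lat': [5.0, 6.0],
                          'Long': [8.0, 9.0]})

# combine_first aligns on index and columns: NaNs in df_main are filled
# from df_filler wherever the ID-based index matches
result = df_main.set_index('ID').combine_first(df_filler.set_index('ID')).reset_index()
print(result)
```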
This should do the trick:
import math
import pandas as pd
A = pd.DataFrame({'ID': [1, 2, 3], 'Lat': [4, 5, 6], 'Long': [7, 8, float('nan')]})
B = pd.DataFrame({'ID': [2, 3], 'Lat': [5, 6], 'Long': [8, 9]})
print('Old table:')
print(A)
print('Fix table:')
print(B)
for i in A.index.to_list():
    for j in B.index.to_list():
        if not A['ID'][i] == B['ID'][j]:
            continue
        if math.isnan(A['Lat'][i]):
            A.at[i, 'Lat'] = B['Lat'][j]
        if math.isnan(A['Long'][i]):
            A.at[i, 'Long'] = B['Long'][j]
print('New table:')
print(A)
Returns:
Old table:
ID Lat Long
0 1 4 7.0
1 2 5 8.0
2 3 6 NaN
Fix table:
ID Lat Long
0 2 5 8
1 3 6 9
New table:
ID Lat Long
0 1 4 7.0
1 2 5 8.0
2 3 6 9.0
Not very elegant but gets the job done :)
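For large frames the nested loop scales as O(len(A)·len(B)); a vectorized sketch of the same fill, relying on DataFrame.fillna aligning on index and columns when it is given another DataFrame:

```python
import pandas as pd

A = pd.DataFrame({'ID': [1, 2, 3], 'Lat': [4, 5, 6], 'Long': [7, 8, float('nan')]})
B = pd.DataFrame({'ID': [2, 3], 'Lat': [5, 6], 'Long': [8, 9]})

# Index both frames by ID so fillna can align rows, then fill A's NaNs from B
A_filled = A.set_index('ID').fillna(B.set_index('ID')).reset_index()
print(A_filled)
```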

How do I fill na values in a column with the average of previous non-na and next non-na value in pandas?

Raw table:
Column A
5
nan
nan
15
New table:
Column A
5
10
10
15
One option might be the following: fill the gaps twice, once forward and once backward, and then average the two fills:
import pandas as pd
import numpy as np
df = pd.DataFrame({'x': [np.nan, 5, np.nan, np.nan, 15]})
filled_series = [df['x'].ffill(), df['x'].bfill()]
print(pd.concat(filled_series, axis=1).mean(axis=1))
# 0 5.0
# 1 5.0
# 2 10.0
# 3 10.0
# 4 15.0
As you can see, this works even if nan happens at the beginning or at the end.
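Note that this is not the same as Series.interpolate, which weights by distance: over a run of several NaNs the two methods disagree. A sketch of the difference on data like the question's:

```python
import numpy as np
import pandas as pd

s = pd.Series([5, np.nan, np.nan, 15], dtype=float)

# Average of the previous and next non-NaN value (the approach above)
avg_fill = pd.concat([s.ffill(), s.bfill()], axis=1).mean(axis=1)
print(avg_fill.tolist())  # [5.0, 10.0, 10.0, 15.0]

# Linear interpolation instead weights by position in the gap
lin = s.interpolate()
print(lin.tolist())  # roughly [5.0, 8.33, 11.67, 15.0]
```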

select range of values for all columns in pandas dataframe

I have a dataframe 'DF', part of which looks like this:
I want to select only the values between 0 and 0.01, to form a new dataframe (with blanks where the value was over 0.01).
To do this, I tried:
similarity = []
for x in DF:
    similarity.append([DF[DF.between(0, 0.01).any(axis=1)]])
simdf = pd.DataFrame(similarity)
simdf.to_csv("similarity.csv")
However, I get the error AttributeError: 'DataFrame' object has no attribute 'between'.
How do I select a range of values and create a new data frame with these?
Just do the two comparisons:
df_new = df[(df>0) & (df<0.01)]
Example:
import pandas as pd
df = pd.DataFrame({"a":[0,2,4,54,56,4],"b":[4,5,7,12,3,4]})
print(df[(df>5) & (df<33)])
     a     b
0  NaN   NaN
1  NaN   NaN
2  NaN   7.0
3  NaN  12.0
4  NaN   NaN
5  NaN   NaN
If you want blank strings instead of NaN:
df[(df>5) & (df<33)].fillna("")
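The AttributeError in the question comes from between being a Series method rather than a DataFrame method. If you'd rather keep between, a sketch that applies it column by column (note that between is inclusive on both ends, unlike the strict comparisons above):

```python
import pandas as pd

df = pd.DataFrame({"a": [0, 2, 4, 54, 56, 4], "b": [4, 5, 7, 12, 3, 4]})

# Series.between is inclusive on both ends by default
mask = df.apply(lambda col: col.between(5, 33))
out = df[mask]
print(out)
```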

Need to combine multiple rows based on index

I have a dataframe with values like
     0    1    2
a    5  NaN    6
a  NaN    2  NaN
I need to combine the two rows based on the index 'a', which is the same in both rows.
I also need to add up the columns and output the result as a single column.
The expected output is below (the value is 13, from adding 5, 2 and 6):
    0
a  13
I'm trying this with the concat function but getting errors.
How about using pandas DataFrame.sum()?
import pandas as pd
import numpy as np
data = pd.DataFrame({"0": [5, np.nan], "1": [np.nan, 2], "2": [6, np.nan]})
row_total = data.sum(axis=1, skipna=True)
row_total.sum(axis=0)
result:
13.0
EDIT: #Chris comment (did not see it while writing my answer) shows how to do it in one line, if all rows have same index.
data:
data = pd.DataFrame({"0": [5, np.nan],
                     "1": [np.nan, 2],
                     "2": [6, np.nan]},
                    index=['a', 'a'])
gives:
     0    1    2
a  5.0  NaN  6.0
a  NaN  2.0  NaN
Then
data.groupby(data.index).sum().sum(axis=1)
Returns
a    13.0
dtype: float64

Pandas how to place an array in a single dataframe cell?

So I currently have a dataframe that looks like:
And I want to add a completely new column called "Predictors" with only one cell that contains an array.
So [0, 'Predictors'] should contain an array and everything below that cell in the same column should be empty.
Here's my attempt: I tried to create a separate dataframe that just contained the "Predictors" column and append it to the current dataframe, but I get: 'Length mismatch: Expected axis has 3 elements, new values have 4 elements.'
How do I append a single cell containing an array to my dataframe?
# create a list and dataframe to hold the names of predictors
dataframe = dataframe.drop(['price', 'Date'], axis=1)
predictorsList = dataframe.columns.tolist()
predictorsList = np.array(predictorsList, dtype=object)
# Combine actual and forecasted lists into one dataframe
combinedResults = pd.DataFrame({'Actual': actual, 'Forecasted': forecasted})
predictorsDF = pd.DataFrame({'Predictors': [predictorsList]})
# Add Predictors to dataframe
#combinedResults.at[0, 'Predictors'] = predictorsList
pd.concat([combinedResults, predictorsDF], ignore_index=True, axis=1)
You could fill the rest of the cells in the desired column with NaN, but they will not be "empty". To do that, use pd.merge on both indexes:
Setup
import pandas as pd
import numpy as np
df = pd.DataFrame({
    'Actual': [18.442, 15.4233, 20.6217, 16.7, 18.185],
    'Forecasted': [19.6377, 13.1665, 19.3992, 17.4557, 14.0053]
})
arr = np.zeros(3)
df_arr = pd.DataFrame({'Predictors': [arr]})
Merging df and df_arr
result = pd.merge(
    df,
    df_arr,
    how='left',
    left_index=True,  # Merge on both indexes, since right only has 0...
    right_index=True  # all the other rows will be NaN
)
Results
>>> print(result)
Actual Forecasted Predictors
0 18.4420 19.6377 [0.0, 0.0, 0.0]
1 15.4233 13.1665 NaN
2 20.6217 19.3992 NaN
3 16.7000 17.4557 NaN
4 18.1850 14.0053 NaN
>>> result.loc[0, 'Predictors']
array([0., 0., 0.])
>>> result.loc[1, 'Predictors'] # actually contains a NaN value
nan
You need to change the dtype of the column (in your case Predictors) to object first:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(20).reshape(5, 4), columns=list('abcd'))
df = df.astype(object)  # this line allows the assignment of the array
df.iloc[1, 2] = np.array([99, 99, 99])
print(df)
gives
    a   b             c   d
0   0   1             2   3
1   4   5  [99, 99, 99]   7
2   8   9            10  11
3  12  13            14  15
4  16  17            18  19
