I have a dataframe with 5 columns that contain missing values.
How do I fill the missing values with the average of the previous two columns' values?
Here is the sample code:
import pandas as pd

coh0 = [0.5, 0.3, 0.1, 0.2, 0.2]
coh1 = [0.4, 0.3, 0.6, 0.5]
coh2 = [0.2, 0.2, 0.3]
coh3 = [0.8, 0.8]
coh4 = [0.5]

df = pd.DataFrame({'coh0': pd.Series(coh0), 'coh1': pd.Series(coh1),
                   'coh2': pd.Series(coh2), 'coh3': pd.Series(coh3),
                   'coh4': pd.Series(coh4)})
df
Here is the sample output
   coh0  coh1  coh2  coh3  coh4
0   0.5   0.4   0.2   0.8   0.5
1   0.3   0.3   0.2   0.8   NaN
2   0.1   0.6   0.3   NaN   NaN
3   0.2   0.5   NaN   NaN   NaN
4   0.2   NaN   NaN   NaN   NaN
Here is the desired result i am looking for.
The NaN values in each column should be replaced by the average of the previous two columns' values at the same position. However, the first NaN value in the second column should take the last value of the first column as a default.
The sample desired output would be like below.
For the exception you named, the first NaN, you can do
df.iloc[1, -1] = df.iloc[0, -1]
though it doesn't make a difference in this case as the mean of .2 and .8 is .5, anyway.
Either way, the rest is something like a rolling window calculation, except it has to be computed incrementally. Normally, you want to vectorize your operations and avoid iterating over the dataframe, but IMHO this is one of the rarer cases where it's actually appropriate to loop over the columns (cf. this excellent post), i.e.,
compute the row-wise (axis=1) mean of up to two columns left of the current one (df.iloc[:, max(0, i-2):i]),
and fill its NaN values from the resulting series.
for i in range(1, df.shape[1]):
    # row-wise mean of up to two columns to the left of column i
    mean_df = df.iloc[:, max(0, i-2):i].mean(axis=1)
    # fill the NaNs in column i from that mean
    df.iloc[:, i] = df.iloc[:, i].fillna(mean_df)
which results in
coh0 coh1 coh2 coh3 coh4
0 0.5 0.4 0.20 0.800 0.5000
1 0.3 0.3 0.20 0.800 0.5000
2 0.1 0.6 0.30 0.450 0.3750
3 0.2 0.5 0.35 0.425 0.3875
4 0.2 0.2 0.20 0.200 0.2000
Related
I have a dataframe df where the APerc columns range from APerc0 to APerc60:
ID FID APerc0 ... APerc60
0 X 0.2 ... 0.5
1 Z 0.1 ... 0.3
2 Y 0.4 ... 0.9
3 X 0.2 ... 0.3
4 Z 0.9 ... 0.1
5 Z 0.1 ... 0.2
6 Y 0.8 ... 0.3
7 W 0.5 ... 0.4
8 X 0.6 ... 0.3
I want to calculate the cosine similarity of the values across all APerc columns between each pair of rows. So the result for the above should be:
ID CosSim
1 0,2,4 0.997
2 1,8,7 0.514
1 3,5,6 0.925
I know how to generate cosine similarity for the whole df:
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(df)
But I want to find the similarity between rows within each ID and group them together (or create a separate df). How can I do this fast for a big dataset?
One possible solution could be to get the particular rows you want to use for the cosine similarity computation and do the following.
Here, combinations is the list of row-index pairs you want to consider for the computation.
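If it helps, one hedged way to build such a list is to pair up the row indices within each FID group (the FID grouping column is taken from the sample df above):

# Hypothetical sketch: build all row-index pairs within each FID group
from itertools import combinations as index_pairs

combinations = [
    pair
    for _, group in df.groupby('FID')        # group rows by FID (X, Y, Z, W)
    for pair in index_pairs(group.index, 2)  # every unordered pair of row indices in a group
]

With that in place, the loop below computes the similarity for each pair.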
import torch
import torch.nn as nn

cos = nn.CosineSimilarity(dim=0)
for i in range(len(combinations)):
    # select the APerc columns (assumed to start at position 2) for each row of the pair
    row1 = torch.tensor(df.iloc[combinations[i][0], 2:].to_numpy(dtype=float))
    row2 = torch.tensor(df.iloc[combinations[i][1], 2:].to_numpy(dtype=float))
    sim = cos(row1, row2)
    print(sim)
You can then use the result in whatever way you need.
Create a function for the calculation, then use df.apply(cosine_similarity_function); it has been reported that using apply can perform hundreds of times faster than going row by row.
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html
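As a rough sketch of that idea, using groupby + apply (the FID grouping column and the APerc value columns are assumptions based on the sample df in the question):

from sklearn.metrics.pairwise import cosine_similarity

def group_cosine_similarity(group):
    # pairwise cosine similarity matrix for all rows sharing the same FID
    return cosine_similarity(group.filter(like='APerc'))

# one similarity matrix per FID value
sims = df.groupby('FID').apply(group_cosine_similarity)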
I have a dataframe like the one shown below:
import numpy as np
import pandas as pd
np.random.seed(100)
df = pd.DataFrame({'grade': np.random.choice(list('ABCD'),size=(20)),
'dash': np.random.choice(list('PQRS'),size=(20)),
'dumeel': np.random.choice(list('QWER'),size=(20)),
'dumma': np.random.choice((1234),size=(20)),
'target': np.random.choice([0,1],size=(20))
})
I would like to do the below
a) event rate - Compute the % occurrence of 1s (from the target column) for each unique value in each of the input categorical columns
b) non event rate - Compute the % occurrence of 0s (from target column) for each unique value in each of the input categorical columns
I tried the below
input_category_columns = df.select_dtypes(include='object')
df_rate_calc = pd.DataFrame()
for ip in input_category_columns:
    feature, target = ip, 'target'
    df_rate_calc['col_name'] = (pd.crosstab(df[feature], df[target], normalize='columns'))
I would like to do this on a million rows, so an efficient approach would be really helpful.
I expect my output to be as shown below. I have shown it for only two columns, but I want to produce this output for all categorical columns.
Here is one approach:
Select the categorical columns (cols)
Melt the dataframe with target as id variable and cols as value variables
Group the dataframe and use value_counts to calculate frequency
Unstack to reshape the dataframe
cols = df.select_dtypes('object')
df_out = (
df.melt('target', cols)
.groupby(['variable', 'target'])['value']
.value_counts(normalize=True)
.unstack(1, fill_value=0)
)
print(df_out)
target 0 1
variable value
dash P 0.4 0.3
Q 0.2 0.3
R 0.2 0.1
S 0.2 0.3
dumeel E 0.2 0.2
Q 0.1 0.0
R 0.4 0.6
W 0.3 0.2
grade A 0.4 0.2
B 0.0 0.2
C 0.4 0.3
D 0.2 0.3
I'm trying to sample a DataFrame based on a given Minimum Sample Interval on the "timestamp" column. Each extracted value should be the first value that is at least Minimum Sample Interval larger than the previously extracted one. So, for the table given below and Minimum Sample Interval = 0.2:
   A         timestamp
1  0.000000  0.10
2  3.162278  0.15
3  7.211103  0.45
4  7.071068  0.55
Here, we would extract indexes:
1, no last value yet so why not
Not 2, because it is only 0.05 larger than last value
3, because it is 0.35 larger than last value
Not 4, because it is only 0.1 larger than last value.
I've found a way to do this with iterrows, but I would like to avoid iterating over it if possible.
The closest I can think of is integer-dividing the timestamp column with floordiv, using the interval as the divisor, and finding the rows where the resulting value changes. But for a case like [0.01, 0.21, 0.55, 0.61, 0.75, 0.41], I would be selecting 0.61, which is only 0.06 larger than 0.55, instead of 0.75, the first value at least 0.2 larger.
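For reference, the incremental rule I'm describing looks roughly like this as a plain loop (a sketch of what my iterrows version does, with Minimum Sample Interval = 0.2):

selected = []
last = None
for idx, ts in df['timestamp'].items():
    # keep a row only if nothing was kept yet, or it is at least 0.2 above the last kept timestamp
    if last is None or ts - last >= 0.2:
        selected.append(idx)
        last = ts
sample = df.loc[selected]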
You can use pandas.Series.diff to compute the difference between each value and the previous one:
sample = df[df['timestamp'].diff().fillna(1) > 0.2]
Output:
>>> sample
   A         timestamp
1  0.000000  0.10
3  7.211103  0.45
I have a dataframe as follows:
                     100  105  110
timestamp
2020-11-01 12:00:00  0.2  0.5  0.1
2020-11-01 12:01:00  0.3  0.8  0.2
2020-11-01 12:02:00  0.8  0.9  0.4
2020-11-01 12:03:00  1.0  0.0  0.4
2020-11-01 12:04:00  0.0  1.0  0.5
2020-11-01 12:05:00  0.5  1.0  0.2
I want to select the columns of the dataframe where the values are greater than or equal to 0.5 and less than or equal to 1, and I want the index/timestamp at which these occurrences happened. Each column could have multiple such occurrences. So, 100 can be between 0.5 and 1 from 12:00 to 12:03, and then again from 12:20 to 12:30. It needs to reset when it hits 0. The column names are variable.
I also want the time difference during which the column value was between 0.5 and 1, so from the above it was 3 minutes and 10 minutes.
The expected output would be as below, with a dict for the ranges in which the indexes appeared:
                     100  105  110
timestamp
2020-11-01 12:00:00  NaN  0.5  NaN
2020-11-01 12:01:00  NaN  0.8  NaN
2020-11-01 12:02:00  0.8  0.9  NaN
2020-11-01 12:03:00  1.0  NaN  NaN
2020-11-01 12:04:00  NaN  1.0  0.5
2020-11-01 12:05:00  0.5  1.0  NaN
and probably a way to calculate the minutes which could be in a dict/list of dicts:
["105":
[{"from": "2020-11-0112:00:00", "to":"2020-11-0112:02:00"},
{"from": "2020-11-0112:04:00", "to":"2020-11-0112:05:00"}]
...
]
Essentially, I want to end up with those dicts at the end.
Basically, it would be best if you got the ordered sequence of timestamps; then, you can manipulate it to get the differences. If the question is only about Pandas slicing and not about timestamp operations, then you need to do the following operation:
df[df["100"] >= 0.5][df["100"] <= 1]["timestamp"].values
Pandas data frame comparison operations
For Pandas, data frames, normal comparison operations are overridden. If you do dataframe_instance >= 0.5, the result is a sequence of boolean values. An individual value in the sequence results from comparing an individual data frame value to 0.5.
Pandas data frame slicing
This sequence can be used to filter a subsequence from your data frame. It is possible because Pandas slicing is overridden and implemented as a filtering operation over the boolean sequence.
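Putting the two together, a minimal sketch (assuming a column named "100" and the timestamps in the index, as in the question):

mask = (df["100"] >= 0.5) & (df["100"] <= 1)   # element-wise comparison gives a boolean Series
selected = df[mask]                            # boolean slicing keeps only the matching rows
timestamps = selected.index                    # the timestamps at which the condition held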
This is my df:
NAME DEPTH A1 A2 A3 AA4 AA5 AI4 AC5 Surface
0 Ron 2800.04 8440.53 1330.99 466.77 70.19 56.79 175.96 77.83 C
1 Ron 2801.04 6084.15 997.13 383.31 64.68 51.09 154.59 73.88 C
2 Ron 2802.04 4496.09 819.93 224.12 62.18 47.61 108.25 63.86 C
3 Ben 2803.04 5766.04 927.69 228.41 65.51 49.94 106.02 62.61 L
4 Ron 2804.04 6782.89 863.88 223.79 63.68 47.69 101.95 61.83 L
... ... ... ... ... ... ... ... ... ... ...
So, my first problem has been answered here:
Find percentile in pandas dataframe based on groups
Using:
df.groupby('Surface')['DEPTH'].quantile([.1, .9])
I can get the percentiles [.1,.9] from DEPTH grouped by Surface, which is what I need:
Surface
C 0.1 2800.24
0.9 2801.84
L 0.1 3799.74
0.9 3960.36
N 0.1 2818.24
0.9 2972.86
P 0.1 3834.94
0.9 4001.16
Q 0.1 3970.64
0.9 3978.62
R 0.1 3946.14
0.9 4115.96
S 0.1 3902.03
0.9 4073.26
T 0.1 3858.14
0.9 4029.96
U 0.1 3583.01
0.9 3843.76
V 0.1 3286.01
0.9 3551.06
Y 0.1 2917.00
0.9 3135.86
X 0.1 3100.01
0.9 3345.76
Z 0.1 4128.56
0.9 4132.56
Name: DEPTH, dtype: float64
Now, I believe that was already the hardest part. What is left is subsetting the original df to include only the values in between those DEPTH percentiles .1 & .9. So for example: DEPTH values in Surface group "Z" have to be greater than 4128.56 and less than 4132.56.
Note that I need df again, not df.groupby("Surface"): the final df would be exactly the same, but the rows whose depths are outside the borders should be dropped.
This seems so easy ... any ideas?
Thanks!
When you need to filter rows within groups it's often simpler and faster to use groupby + transform to broadcast the result to every row within a group and then filter the original DataFrame. In this case we can check if 'DEPTH' is between those two quantiles.
Sample Data
import pandas as pd
import numpy as np
np.random.seed(42)
df = pd.DataFrame({'DEPTH': np.random.normal(0,1,100),
'Surface': np.random.choice(list('abcde'), 100)})
Code
gp = df.groupby('Surface')['DEPTH']
df1 = df[df['DEPTH'].between(gp.transform('quantile', 0.1),
gp.transform('quantile', 0.9))]
For clarity, here you can see that transform will broadcast the scalar result to every row that belongs to the group, in this case defined by 'Surface'
pd.concat([df['Surface'], gp.transform('quantile', 0.1).rename('q = 0.1')], axis=1)
# Surface q = 0.1
#0 a -1.164557
#1 e -0.967809
#2 a -1.164557
#3 c -1.426986
#4 b -1.544816
#.. ... ...
#95 a -1.164557
#96 e -0.967809
#97 b -1.544816
#98 b -1.544816
#99 b -1.544816
#
#[100 rows x 2 columns]