I have 7 dataframes (df_1, df_2, df_3, ..., df_7), all with the same columns but different lengths; some rows may contain the same values.
I'd like to concatenate all 7 dataframes under the condition that (in pseudocode):
if df_n.iloc[row_i] != df_n+1.iloc[row_i] and df_n.iloc[row_i][0] < df_n+1.iloc[row_i][0]:
    pd.concat([df_n.iloc[row_i], df_n+1.iloc[row_i], df_n+2.iloc[row_i],
               ..., df_n+6.iloc[row_i]])
where df_n.iloc[row_i] is the ith row of the nth dataframe and df_n.iloc[row_i][0] is the first column of the ith row.
For example, if we only had 2 dataframes with len(df_1) < len(df_2), and we applied the conditions above, the input would be:
df_1 df_2
index 0 1 2 index 0 1 2
0 12.12 11.0 31 0 12.2 12.6 30
1 12.3 12.1 33 1 12.3 12.1 33
2 10 9.1 33 2 13 12.1 23
3 16 12.1 33 3 13.1 12.1 27
4 14.4 13.1 27
5 15.2 13.2 28
And the output would be:
conditions -> pd.concat([df_1, df_2]):
index 0 1 2 3 4 5
0 12.12 11.0 31 12.2 12.6 30
2 10 9.1 33 13 12.1 23
4 nan nan nan 14.4 13.1 27
5 nan nan nan 15.2 13.2 28
Is there an easy way to do this?
IIUC: concat first, then groupby by column label to get the differences, and then just implement your condition:
s = pd.concat([df1, df2], axis=1)
# Per-column difference between the two frames (NaN where df1 has no row)
s1 = s.groupby(level=0, axis=1).apply(lambda x: x.iloc[:, 0] - x.iloc[:, 1])
# Keep rows that differ with df1's first column smaller, or rows missing from df1
yourdf = s[(s1.ne(0).any(axis=1) & s1.iloc[:, 0].lt(0)) | s1.iloc[:, 0].isnull()]
Out[487]:
0 1 2 0 1 2
index
0 12.12 11.0 31.0 12.2 12.6 30
2 10.00 9.1 33.0 13.0 12.1 23
4 NaN NaN NaN 14.4 13.1 27
5 NaN NaN NaN 15.2 13.2 28
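On recent pandas (2.1+), groupby with axis=1 is deprecated; below is a minimal sketch of the same idea using index alignment instead, with the two example frames rebuilt from the question's tables:
import pandas as pd

df_1 = pd.DataFrame({0: [12.12, 12.3, 10, 16],
                     1: [11.0, 12.1, 9.1, 12.1],
                     2: [31, 33, 33, 33]})
df_2 = pd.DataFrame({0: [12.2, 12.3, 13, 13.1, 14.4, 15.2],
                     1: [12.6, 12.1, 12.1, 12.1, 13.1, 13.2],
                     2: [30, 33, 23, 27, 27, 28]})

s = pd.concat([df_1, df_2], axis=1)
# Row-wise difference; rows missing from df_1 become NaN after reindexing
d = df_1.reindex(df_2.index) - df_2
keep = (d.ne(0).any(axis=1) & d.iloc[:, 0].lt(0)) | d.iloc[:, 0].isna()
print(s[keep])   # rows 0, 2, 4, 5 as in Out[487] above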
Related
I have a dataframe with 5 columns: M1, M2, M3, M4 and M5. Each column contains floating-point values. Now I want to combine the data of the 5 columns into one column.
I tried
cols = list(df.columns)
df_new['Total'] = []
df_new['Total'] = [df_new['Total'].append(df[i], ignore_index=True) for i in cols]
But I'm getting an error.
I'm using Python 3.8.5 and Pandas 1.1.2.
Here's a part of my df
M1 M2 M3 M4 M5
0 5 12 20 26
0.5 5.5 12.5 20.5 26.5
1 6 13 21 27
1.5 6.5 13.5 21.5 27.5
2 7 14 22 28
2.5 7.5 14.5 22.5 28.5
10 15 22 30 36
10.5 15.5 22.5 30.5 36.5
11 16 23 31 37
11.5 16.5 23.5 31.5 37.5
12 17 24 32 38
12.5 17.5 24.5 32.5 38.5
And this is what I'm expecting
0
0.5
1
1.5
2
2.5
10
10.5
11
11.5
12
12.5
5
5.5
6
6.5
7
7.5
15
15.5
16
16.5
17
17.5
12
12.5
13
13.5
14
14.5
22
22.5
23
23.5
24
24.5
20
20.5
21
21.5
22
22.5
30
30.5
31
31.5
32
32.5
26
26.5
27
27.5
28
28.5
36
36.5
37
37.5
38
38.5
Just make use of the concat() method with a generator expression:
import pandas as pd

result = pd.concat((df[x] for x in df.columns), ignore_index=True)
Now if you print result, you will get your desired output.
Performance (concat() vs. unstack()): unstack() produces the same result, and the two approaches can be timed against each other (the original benchmark figures are not reproduced here).
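A minimal sketch for comparing them, assuming a made-up frame shaped like the question's df:
import numpy as np
import pandas as pd

# Made-up data in the shape of the question's df (five float columns)
df = pd.DataFrame(np.random.rand(100_000, 5),
                  columns=['M1', 'M2', 'M3', 'M4', 'M5'])

# concat() approach from the answer above
r1 = pd.concat((df[x] for x in df.columns), ignore_index=True)

# unstack() walks the frame column by column, giving the same order
r2 = df.unstack().reset_index(drop=True)

assert (r1.to_numpy() == r2.to_numpy()).all()
# In IPython: %timeit each expression to compare their speed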
Using NumPy, I tried creating an array from one of the columns of a dataframe. However, the array's shape is (48,), where 48 is the number of rows, instead of the (48, 1) I expected. Why is this the case? I thought any array created from a pandas dataframe had to have a defined number of rows and columns.
Below is the relevant code, output, and dataset represented by df
y = df.iloc[:, -1]
a = y.shape  # Output is (48,)
00 0 1
0 1 0.0 45.0
1 1 0.0 48.0
2 1 0.5 67.0
3 1 1.5 59.5
4 1 1.5 62.4
5 1 1.5 84.4
6 1 1.5 82.0
7 1 1.5 79.5
8 1 3.0 64.8
9 1 3.0 67.4
10 1 3.0 82.6
11 1 3.0 78.2
12 1 3.0 80.4
13 1 3.5 71.3
14 1 3.5 70.5
15 1 3.5 75.0
16 1 3.5 80.9
17 1 3.5 83.2
18 1 4.0 78.4
19 1 4.0 74.2
20 1 4.0 81.5
21 1 4.0 68.9
22 1 4.5 68.3
23 1 4.5 78.5
24 1 4.5 75.9
25 1 4.5 81.6
26 1 4.5 83.2
27 1 4.5 86.1
28 1 4.5 87.4
29 1 5.0 72.8
30 1 5.0 75.0
31 1 5.0 75.6
32 1 5.0 79.3
33 1 5.0 82.4
34 1 5.0 86.3
35 1 5.0 90.2
36 1 5.0 93.4
37 1 5.5 79.5
38 1 5.5 81.4
39 1 5.5 83.2
40 1 5.5 85.7
41 1 5.5 91.4
42 1 5.5 98.5
43 1 5.5 94.3
44 1 6.0 81.2
45 1 6.0 85.4
46 1 6.0 91.0
47 1 6.0 94.3
The result is a 1D array. If its length is N, it could be represented as an N-by-1 or a 1-by-N vector, as we were taught in linear algebra class. But that approach has issues we don't want to deal with in code.
Issue 1. We would need to choose whether the result is an N-by-1 or a 1-by-N vector and stick to that choice; whenever the other orientation is preferred, extra conversions are required.
Issue 2. If the shape of an array is (1, N) or (N, 1), we must access its elements using two indexes, for example arr[0, N-1] or arr[N-1, 0]. That is confusing: it is really a 1D vector, and a single index should suffice, as in arr[N-1]. In linear-algebra notation the shape would be (N), which looks awkward because a shape is a tuple, and a one-element tuple is written with a trailing comma: (N,).
This twist solves both issues: the vector can take part in multiplication from either the right or the left side, and its elements are accessed with a single index.
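If a genuine column vector is needed, you can ask for one explicitly. A minimal sketch, reusing the df and y from the question:
y = df.iloc[:, -1]          # pandas Series -> 1D, shape (48,)
a = y.to_numpy()            # still 1D: shape (48,)

col = a.reshape(-1, 1)      # explicit column vector, shape (48, 1)

Y = df.iloc[:, [-1]]        # a list selector keeps a 2D DataFrame,
print(Y.to_numpy().shape)   # so this prints (48, 1)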
I have the following NFL tracking data:
Event PlayId FrameId x-coord y-coord
0 Start 1 1 20.2 20.0
1 NaN 1 2 21.0 19.1
2 NaN 1 3 21.3 18.3
3 NaN 1 4 22.0 17.5
4 End 1 5 22.5 17.2
4 NaN 1 6 22.5 17.2
4 NaN 1 7 22.5 17.2
4 NaN 1 8 22.5 17.2
4 NaN 1 9 22.5 17.2
4 NaN 1 10 22.5 17.2
5 NaN 2 1 23.0 16.9
6 Start 2 2 23.6 16.7
7 End 2 3 25.1 34.1
8 NaN 2 4 25.9 34.2
10 NaN 3 1 22.7 34.2
11 Nan 3 2 21.5 34.5
12 NaN 3 3 21.1 37.3
13 Start 3 4 21.2 44.3
14 NaN 3 5 20.4 44.6
15 End 3 6 21.9 42.7
How can I filter this list to only get the rows in between the "Start" and "End" values for the Event column? To clarify, this is the data I want to filter for:
Event PlayId FrameId x-coord y-coord
0 Start 1 1 20.2 20.0
1 NaN 1 2 21.0 19.1
2 NaN 1 3 21.3 18.3
3 NaN 1 4 22.0 17.5
4 End 1 5 22.5 17.2
6 Start 2 2 23.6 16.7
7 End 2 3 25.1 34.1
13 Start 3 4 21.2 44.3
14 NaN 3 5 20.4 44.6
15 End 3 6 21.9 42.7
An explicit solution will not work because the actual dataset is very large and there is no way to predict where the Start and End values fall.
Do this by slicing with ffill, then concat back. Also, you have 'Nan' in your df; should it be NaN?
df1 = df.copy()
# Rows whose forward-filled Event is 'Start' (the Start row and everything
# after it, up to the next non-null Event), plus the 'End' rows themselves
newdf = pd.concat([df1[df.Event.ffill() == 'Start'], df1[df.Event == 'End']]).sort_index()
newdf
Event PlayId FrameId x-coord y-coord
0 Start 1 1 20.2 20.0
1 NaN 1 2 21.0 19.1
2 NaN 1 3 21.3 18.3
3 NaN 1 4 22.0 17.5
4 End 1 5 22.5 17.2
6 Start 2 2 23.6 16.7
7 End 2 3 25.1 34.1
13 Start 3 4 21.2 44.3
14 NaN 3 5 20.4 44.6
15 End 3 6 21.9 42.7
Or
# Drop NaN-event rows whose forward-filled Event is 'End' (i.e. rows after an End)
newdf = df[~((df.Event.ffill() == 'End') & (df.Event.isna()))]
newdf  # same output as above
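A third sketch (assuming Start/End markers alternate cleanly within the frame) tracks a running inside-a-play flag with cumsum; since it matches only exact 'Start'/'End' strings, it also sidesteps the stray 'Nan' literal:
# True from a 'Start' row up to (but excluding) the following 'End' row
inside = df.Event.eq('Start').cumsum() > df.Event.eq('End').cumsum()
newdf = df[inside | df.Event.eq('End')]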
I have a data set consisting of 135 columns. I am trying to drop the columns in which more than 60% of the data is empty; there are approximately 40 such columns. I wrote a function to drop these empty columns, but I am getting a "not contained in axis" error. Could someone help me solve this, or suggest another way to drop these 40 columns at once?
My function:
list_drop = df.isnull().sum() / len(df)
def empty(df):
    if list_drop > 0.5:
        df.drop(list_drop, axis=1, inplace=True)
    return df
Another method I tried:
df.drop(df.count()/len(df)<0.5,axis=1,inplace=True)
You could use isnull + sum and then use the resulting boolean mask to filter df.columns:
m = df.isnull().sum(axis=0) / len(df) < 0.6
df = df[df.columns[m]]  # equivalently: df.loc[:, m]
Demo
df
A B C
0 29.0 NaN 26.6
1 NaN NaN 23.3
2 23.0 94.0 28.1
3 35.0 168.0 43.1
4 NaN NaN 25.6
5 32.0 88.0 31.0
6 NaN NaN 35.3
7 45.0 543.0 30.5
8 NaN NaN NaN
9 NaN NaN 37.6
10 NaN NaN 38.0
11 NaN NaN 27.1
12 23.0 846.0 30.1
13 19.0 175.0 25.8
14 NaN NaN 30.0
15 47.0 230.0 45.8
16 NaN NaN 29.6
17 38.0 83.0 43.3
18 30.0 96.0 34.6
m = df.isnull().sum(axis=0) / len(df) < 0.3  # 0.3 as an example
m
A False
B False
C True
dtype: bool
df[df.columns[m]]
C
0 26.6
1 23.3
2 28.1
3 43.1
4 25.6
5 31.0
6 35.3
7 30.5
8 NaN
9 37.6
10 38.0
11 27.1
12 30.1
13 25.8
14 30.0
15 45.8
16 29.6
17 43.3
18 34.6
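An alternative sketch uses dropna with thresh, the minimum number of non-null values a column needs in order to be kept; keeping columns that are at least 40% non-null is the same as dropping those with more than 60% missing:
import math

# Keep columns with at least 40% non-null values
df = df.dropna(axis=1, thresh=math.ceil(0.4 * len(df)))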
I am trying to do data analysis of some rainfall data. An example of the data looks like this:
10 18/05/2016 26.9 40 20.8 34 52.2 20.8 46.5 45
11 19/05/2016 25.5 32 0.3 41.6 42 0.3 56.3 65.2
12 20/05/2016 8.5 29 18.4 9 36 18.4 28.6 46
13 21/05/2016 24.5 18 TRACE 3.5 17 TRACE 4.4 40
14 22/05/2016 0.6 18 0 6.5 14 0 8.6 20
15 23/05/2016 3.5 9 0.6 4.3 14 0.6 7 15
16 24/05/2016 3.6 25 T 3 12 T 14.9 9
17 25/05/2016 25 21 2.2 25.6 50 2.2 25 9
The rainfall data contain the specific strings 'TRACE' and 'T' (both meaning a non-measurable rainfall amount). For analysis, I would like to convert these strings to 1.0 (float). My desired data should look like this, so I can plot the values as a line diagram:
10 18/05/2016 26.9 40 20.8 34 52.2 20.8 46.5 45
11 19/05/2016 25.5 32 0.3 41.6 42 0.3 56.3 65.2
12 20/05/2016 8.5 29 18.4 9 36 18.4 28.6 46
13 21/05/2016 24.5 18 1.0 3.5 17 1.0 4.4 40
14 22/05/2016 0.6 18 0 6.5 14 0 8.6 20
15 23/05/2016 3.5 9 0.6 4.3 14 0.6 7 15
16 24/05/2016 3.6 25 1.0 3 12 1.0 14.9 9
17 25/05/2016 25 21 2.2 25.6 50 2.2 25 9
Can someone point me in the right direction?
You can use df.replace, and then convert the columns to float using df.astype (otherwise the datatype stays object, and any operations on these columns would suffer from performance issues):
df = df.replace('^T(RACE)?$', 1.0, regex=True)  # full-match 'T' or 'TRACE'
df.iloc[:, 1:] = df.iloc[:, 1:].astype(float)   # all columns after the date are numeric
This will replace all T or TRACE elements with 1.0.
Output:
10 18/05/2016 26.9 40 20.8 34.0 52.2 20.8 46.5 45.0
11 19/05/2016 25.5 32 0.3 41.6 42.0 0.3 56.3 65.2
12 20/05/2016 8.5 29 18.4 9.0 36.0 18.4 28.6 46.0
13 21/05/2016 24.5 18 1 3.5 17.0 1 4.4 40.0
14 22/05/2016 0.6 18 0 6.5 14.0 0 8.6 20.0
15 23/05/2016 3.5 9 0.6 4.3 14.0 0.6 7.0 15.0
16 24/05/2016 3.6 25 1 3.0 12.0 1 14.9 9.0
17 25/05/2016 25.0 21 2.2 25.6 50.0 2.2 25.0 9.0
Use replace with a dict:
df = df.replace({'T': 1.0, 'TRACE': 1.0})
And then, if necessary, convert the columns to float:
cols = df.columns.difference(['Date', 'other columns that need no conversion'])
df[cols] = df[cols].astype(float)
df = df.replace({'T': 1.0, 'TRACE': 1.0})
cols = df.columns.difference(['Date', 'a'])
df[cols] = df[cols].astype(float)
print(df)
a Date 2 3 4 5 6 7 8 9
0 10 18/05/2016 26.9 40.0 20.8 34.0 52.2 20.8 46.5 45.0
1 11 19/05/2016 25.5 32.0 0.3 41.6 42.0 0.3 56.3 65.2
2 12 20/05/2016 8.5 29.0 18.4 9.0 36.0 18.4 28.6 46.0
3 13 21/05/2016 24.5 18.0 1.0 3.5 17.0 1.0 4.4 40.0
4 14 22/05/2016 0.6 18.0 0.0 6.5 14.0 0.0 8.6 20.0
5 15 23/05/2016 3.5 9.0 0.6 4.3 14.0 0.6 7.0 15.0
6 16 24/05/2016 3.6 25.0 1.0 3.0 12.0 1.0 14.9 9.0
7 17 25/05/2016 25.0 21.0 2.2 25.6 50.0 2.2 25.0 9.0
print(df.dtypes)
a int64
Date object
2 float64
3 float64
4 float64
5 float64
6 float64
7 float64
8 float64
9 float64
dtype: object
Extending the answer from @jezrael, you can replace and convert to floats in a single statement (this assumes the first column is Date and the remaining columns are the desired numeric ones):
df.iloc[:, 1:] = df.iloc[:, 1:].replace({'T': 1.0, 'TRACE': 1.0}).astype(float)
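As a further sketch (assuming a 'Date' column as in the demo above), pd.to_numeric can stand in for astype(float) and will raise a clear error if any stray strings remain after the replacement:
num_cols = df.columns.difference(['Date'])
df[num_cols] = (df[num_cols]
                .replace({'T': 1.0, 'TRACE': 1.0})
                .apply(pd.to_numeric))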