filtering a pandas dataframe and avoiding NaN - python

Ok I have this pandas dataframe
import pandas
dfp=pandas.DataFrame([5,10,1,7,13,4,5,7,8,10,11,3])
And I want to create a second dataframe containing only the rows with a value greater than 5, like so:
dfp2=dfp[dfp>5]
My problem is that I obtain this result:
0
0 NaN
1 10
2 NaN
3 7
4 13
5 NaN
6 NaN
7 7
8 8
9 10
10 11
11 NaN
And what I want is this other result:
0
0 10
1 7
2 13
3 7
4 8
5 10
6 11
What is wrong with my code?
Thanks a lot

You're indexing with the mask generated from the comparison, so where the mask is False it returns NaN. To get rid of those rows, call dropna:
In [32]:
dfp[dfp > 5].dropna()
Out[32]:
0
1 10
3 7
4 13
7 7
8 8
9 10
10 11
The mask here:
In [33]:
dfp > 5
Out[33]:
0
0 False
1 True
2 False
3 True
4 True
5 False
6 False
7 True
8 True
9 True
10 True
11 False
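As a side note (not from the original answer): because dfp has a single column, you can filter rows directly with a 1d boolean Series instead of a whole-DataFrame mask, which avoids producing NaN in the first place. A minimal sketch:
import pandas

dfp = pandas.DataFrame([5, 10, 1, 7, 13, 4, 5, 7, 8, 10, 11, 3])

# Compare the single column (labelled 0) to get a 1d boolean Series,
# so indexing keeps only the matching rows and introduces no NaN.
dfp2 = dfp[dfp[0] > 5]

# Optionally renumber the index 0..n-1 to match the desired output.
dfp2 = dfp2.reset_index(drop=True)
print(dfp2)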

Related

why can't I use np.isnan to filter a dataframe?

I have some dataframes which contain a lot of NaN.
I want to build a mask from the first dataframe, then keep only those columns which contain no np.nan in the first dataframe.
Let me give an example:
In [69]: df = pd.DataFrame(np.reshape(range(25), (5,5)))
In [70]: df
Out[70]:
0 1 2 3 4
0 0 1 2 3 4
1 5 6 7 8 9
2 10 11 12 13 14
3 15 16 17 18 19
4 20 21 22 23 24
In [71]: df[5] = np.nan
In [72]: df
Out[72]:
0 1 2 3 4 5
0 0 1 2 3 4 NaN
1 5 6 7 8 9 NaN
2 10 11 12 13 14 NaN
3 15 16 17 18 19 NaN
4 20 21 22 23 24 NaN
### the following is the mask
In [73]: np.isnan(df)
Out[73]:
0 1 2 3 4 5
0 False False False False False True
1 False False False False False True
2 False False False False False True
3 False False False False False True
4 False False False False False True
In [74]: df[~np.isnan(df)]
Out[74]:
0 1 2 3 4 5
0 0 1 2 3 4 NaN
1 5 6 7 8 9 NaN
2 10 11 12 13 14 NaN
3 15 16 17 18 19 NaN
4 20 21 22 23 24 NaN
You can see that I use np.isnan to create a mask,
then use df[mask] to filter.
But it seems to have failed: the output still contains column 5. Did I use something incorrectly?
EDIT:
If none of the solutions below work, it means there are no real missing values, only the string 'nan' rather than np.nan.
So a possible solution is to replace them first:
df = df.replace('nan', np.nan)
You can build that mask, but you cannot filter rows with a 2d mask directly; you need a Series (a 1d mask). Add DataFrame.any to test whether any value is missing per row (and ~ to invert the mask).
So to filter rows with no NaNs, use:
df[~np.isnan(df).any(axis=1)]
Btw, in pandas it is simpler to remove all rows with at least one NaN:
df = df.dropna()
If you need to filter rows with at least one NaN:
df[np.isnan(df).any(axis=1)]
Because you cannot filter element-wise with a 2d mask, you can only remove whole rows or columns:
df[~np.isnan(df).all(axis=1)]
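The original question actually asked to keep only the columns with no NaN, so for completeness here is a minimal sketch of the column-wise version (my addition, reusing the example df from the question):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.reshape(range(25), (5, 5)))
df[5] = np.nan

# 1d mask over the columns: True where the column contains no NaN.
col_mask = ~np.isnan(df).any(axis=0)
df_clean = df.loc[:, col_mask]

# Equivalent pandas shortcut: drop every column that has at least one NaN.
df_clean = df.dropna(axis=1)
print(df_clean)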

how to dynamically add up the values of some of the columns in a dataframe?

dataframe in the image
year= 2020 (MAX COLUMN)
lastFifthYear = year - 4
input = '2001509-00'
I want to add up all the values between year (2020) and lastFifthYear (2016) where INPUT_PARTNO equals input,
so for that input value I should get 4+6+2+3+2 (2016+2017+2018+2019+2020), i.e. 17.
Please give me some code.
Here is some code that should work, but you definitely need to improve the way you ask questions here :-)
Considering df is the table you pasted as an image above:
>>> year = 2016
>>> df_new=df.query('INPUT_PARTNO == "2001509-00"').melt(['ACTUAL_GI_YEAR', 'INPUT_PARTNO'], var_name='year', value_name='number')
>>> df_new.year=df_new.year.astype(int)
>>> df_new[df_new.year >= year].groupby(['ACTUAL_GI_YEAR','INPUT_PARTNO']).agg({'number' : sum})
number
ACTUAL_GI_YEAR INPUT_PARTNO
0 2001509-00 17
Example Setup
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0, 10, (10, 10)),
                  columns=list('ab') + list(range(2, 10)))
Solved
# sum columns between 3 and 6 by row, only where a == 9
df['number'] = df.loc[df['a'].eq(9),
                      pd.to_numeric(df.columns, errors='coerce')
                        .to_series()
                        .between(3, 6)
                        .values].sum(axis=1)
print(df)
a b 2 3 4 5 6 7 8 9 number
0 1 9 9 2 6 0 6 1 4 2 NaN
1 2 3 4 8 7 2 4 0 0 6 NaN
2 2 2 7 4 9 6 7 1 0 0 NaN
3 0 3 5 3 0 4 2 7 2 6 NaN
4 7 7 1 4 7 7 9 7 4 2 NaN
5 9 9 9 0 3 3 3 8 7 7 9.0
6 9 0 5 5 7 9 6 6 5 7 27.0
7 2 1 9 1 9 3 3 4 4 9 NaN
8 4 0 5 9 6 7 3 9 1 6 NaN
9 5 5 0 8 6 4 5 4 7 4 NaN
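Applied back to the original question, a minimal sketch along the same lines (assuming, since the dataframe was only posted as an image, that the year columns are literally named 2016..2020 and the part-number column is called INPUT_PARTNO; adjust the names to the real data):
year = 2020
lastFifthYear = year - 4  # 2016
part = '2001509-00'

# Keep only the columns whose name parses as a year in [lastFifthYear, year].
year_cols = [c for c in df.columns
             if str(c).isdigit() and lastFifthYear <= int(c) <= year]

# Sum those columns for the rows matching the part number.
total = df.loc[df['INPUT_PARTNO'] == part, year_cols].sum(axis=1).sum()
print(total)  # 4 + 6 + 2 + 3 + 2 = 17 for the sample values in the question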

Using Certain Conditions to Find Certain Parts of a Dataframe

I have a dataframe that looks like this:
>>> data = {'Count':[15, 21, 1, 7, 6, 1, 25, 8, 56, 0, 5, 9, 0, 12, 12, 8, 7, 12, 0, 8]}
>>> df = pd.DataFrame(data)
>>> df
Count
0 15
1 21
2 1
3 7
4 6
5 1
6 25
7 8
8 56
9 0
10 5
11 9
12 0
13 12
14 12
15 8
16 7
17 12
18 0
19 8
I need to add two columns to this df to detect "floods". A "flood" is defined as starting at the row where 'Count' goes above 10 and lasting until 'Count' drops below 5.
So, in this case, I want this as a result:
Count Flood FloodNumber
0 15 True 1
1 21 True 1
2 1 False 0
3 7 False 0
4 6 False 0
5 1 False 0
6 25 True 2
7 8 True 2
8 56 True 2
9 0 False 0
10 5 False 0
11 9 False 0
12 0 False 0
13 12 True 3
14 12 True 3
15 8 True 3
16 7 True 3
17 12 True 3
18 0 False 0
19 8 False 0
I managed to add my 'Flood' column with a simple loop like this:
df.loc[0, 'Flood'] = (df.loc[0, 'Count'] > 10)
for index in range(1, len(df)):
    df.loc[index, 'Flood'] = ((df.loc[index, 'Count'] > 10) | ((df.loc[index-1, 'Flood']) & (df.loc[index, 'Count'] > 5)))
but this seems like an extremely slow and stupid way of doing this. Is there any "proper" way of doing it using pandas functions rather than loops?
To find Flood flags, we can play with masks and ffill().
df['Flood'] = ((df.Count > 10).where(df.Count > 10)
               .fillna((df.Count > 5)
                       .where(df.Count < 5))
               .ffill()
               .astype(bool))
To get the FloodNumber, let's ignore all rows which are False in the Flood column and use groupby + cumsum:
s = df.Flood.where(df.Flood)
df.loc[:, 'FloodNumber'] = s.dropna().groupby((s != s.shift(1)).cumsum()).ngroup().add(1)
Outputs
Count Flood FloodNumber
0 15 True 1.0
1 21 True 1.0
2 1 False NaN
3 7 False NaN
4 6 False NaN
5 1 False NaN
6 25 True 2.0
7 8 True 2.0
8 56 True 2.0
9 0 False NaN
10 5 False NaN
11 9 False NaN
12 0 False NaN
13 12 True 3.0
14 12 True 3.0
15 8 True 3.0
16 7 True 3.0
17 12 True 3.0
18 0 False NaN
19 8 False NaN
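If you need the output to match the requested format exactly (an integer FloodNumber with 0 for non-flood rows), one extra step should do it (my addition, not part of the original answer):
# Replace the NaN group numbers with 0 and cast back to integers.
df['FloodNumber'] = df['FloodNumber'].fillna(0).astype(int)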

Pandas: Filter a dataframe and assign values to the top n rows

import pandas as pd
df = pd.DataFrame({'col1':[1,2,3,4,2,5,6,7,1,8,9,2], 'city':[1,2,3,4,2,5,6,7,1,8,9,2]})
# The following code creates a boolean filter
filter = df.city==2
# Assigns True to all rows where filter is True
df.loc[filter,'selected']= True
What I need is a change in the code so that it assigns True to a given number n of rows.
The actual dataframe has more than 3 million rows. Sometimes I would want
df.loc[filter, 'selected'] = True for only 100 rows [the actual count could be more or less than 100].
I believe you need to filter by values defined in a list first with isin, and then take the top 2 rows per value with GroupBy.head:
cities= [2,3]
df = df1[df1.city.isin(cities)].groupby('city').head(2)
print (df)
col1 city
1 2 2
2 3 3
4 2 2
If you need to assign True in a new column:
cities= [2,3]
idx = df1[df1.city.isin(cities)].groupby('city').head(2).index
df1.loc[idx, 'selected'] = True
print (df1)
col1 city selected
0 1 1 NaN
1 2 2 True
2 3 3 True
3 4 4 NaN
4 2 2 True
5 5 5 NaN
6 6 6 NaN
7 7 7 NaN
8 1 1 NaN
9 8 8 NaN
10 9 9 NaN
11 2 2 NaN
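If instead you want to keep the original single filter from the question (df.city == 2) but flag at most n rows, a minimal sketch (assuming "n rows" means the first n matches in dataframe order):
n = 100  # flag at most this many matching rows
filter = df.city == 2

# Take the index labels of the first n rows that satisfy the filter
# and assign True only to those rows.
idx = df[filter].head(n).index
df.loc[idx, 'selected'] = True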
Define a list of elements to be checked and test the city column against it, creating a new column of True/False booleans:
>>> check
[2, 3]
>>> df['Citis'] = df.city.isin(check)
>>> df
col1 city Citis
0 1 1 False
1 2 2 True
2 3 3 True
3 4 4 False
4 2 2 True
5 5 5 False
6 6 6 False
7 7 7 False
8 1 1 False
9 8 8 False
10 9 9 False
11 2 2 True
OR
>>> df['Citis'] = df['city'].apply(lambda x: x in check)
>>> df
col1 city Citis
0 1 1 False
1 2 2 True
2 3 3 True
3 4 4 False
4 2 2 True
5 5 5 False
6 6 6 False
7 7 7 False
8 1 1 False
9 8 8 False
10 9 9 False
11 2 2 True
If, in fact, you only need this evaluated for the starting rows (let's say the first 5 values):
df['Citis'] = df.city.isin(check).head(5)
OR
df['Citis'] = df['city'].apply(lambda x: x in check).head(5)

Pandas: Weighted median of grouped observations

I have a dataframe that contains number of observations per group of income:
INCAGG
1 6.561681e+08
3 9.712955e+08
5 1.658043e+09
7 1.710781e+09
9 2.356979e+09
I would like to compute the median income group. What do I mean?
Let's start with a simpler series:
INCAGG
1 6
3 9
5 16
7 17
9 23
It represents this set of numbers:
1 1 1 1 1 1
3 3 3 3 3 3 3 3 3
5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7
9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
Which I can reorder to
1 1 1 1 1 1 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 7 7 7 7 7
7 7 7 7 7 7 7 7 7 7 7 7 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
which visually is what I mean - the median here would be 7.
After glancing at a numpy example here, I think cumsum() provides a good approach. Assuming your column of counts is called 'wt', here's a simple solution that will work most of the time (and see below for a more general solution):
df = df.sort_values('incagg')
df['tmp'] = df.wt.cumsum() < ( df.wt.sum() / 2. )
df['med_grp'] = (df.tmp==False) & (df.tmp.shift()==True)
The second code line above divides the rows into those above and below the median. The median observation will be in the first False group.
incagg wt tmp med_grp
0 1 656168100 True False
1 3 971295500 True False
2 5 1658043000 True False
3 7 1710781000 False True
4 9 2356979000 False False
df.loc[df.med_grp, 'incagg']
3 7
Name: incagg, dtype: int64
This will work fine when the median is unique and often when it isn't. The problem can only occur if the median is non-unique AND it falls on the edge of a group. In this case (with 5 groups and weights in the millions/billions), it's really not a concern but nevertheless here's a more general solution:
df['tmp1'] = df.wt.cumsum() == (df.wt.sum() / 2.)
df['tmp2'] = df.wt.cumsum() < (df.wt.sum() / 2.)
df['med_grp'] = (df.tmp2==False) & (df.tmp2.shift()==True)
df['med_grp'] = df.med_grp | df.tmp1.shift()
incagg wt tmp1 tmp2 med_grp
0 1 1 False True False
1 3 1 False True False
2 5 1 True False True
3 7 2 False False True
4 9 1 False False False
df.loc[df.med_grp, 'incagg']
2 5
3 7
df.loc[df.med_grp, 'incagg'].mean()
6.0
You can use chain from itertools. I used list comprehension to get a list of the aggregation group repeated the appropriate number of times, and then used chain to put it into a single list. Finally, I converted it to a Series and calculated the median:
from itertools import chain
df = pd.DataFrame([6, 9, 16, 17, 23], index=[1, 3, 5, 7, 9], columns=['counts'])
median = pd.Series([i for i in chain(*[[k] * v for k, v in zip(df.index, df.counts)])]).median()
>>> median
7.0
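A more memory-friendly variant of the same idea (my addition, not from either answer) is to let numpy expand the group labels with np.repeat instead of building nested Python lists; note that for counts in the billions, as in the original data, the cumsum approach above is still the practical one:
import numpy as np
import pandas as pd

df = pd.DataFrame([6, 9, 16, 17, 23], index=[1, 3, 5, 7, 9], columns=['counts'])

# Repeat each income-group label by its count, then take the median of the expanded series.
expanded = pd.Series(np.repeat(df.index.values, df['counts'].values))
print(expanded.median())  # 7.0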
