How to select columns in a dataframe based on a lookup in another? - python

I'm working with a dataset with a large number of predictors, and want to easily test different composite variable groupings by using a control file. For starters, the control file would indicate whether or not to include a variable. Here's an example:
control = pd.DataFrame({'Variable': ['Var1', 'Var2', 'Var3'],
                        'Include': [1, 0, 1]})
control
Out[48]:
   Include Variable
0        1     Var1
1        0     Var2
2        1     Var3
data = pd.DataFrame({'Sample': ['a', 'b', 'c'],
                     'Var1': [1, 0, 0],
                     'Var2': [0, 1, 0],
                     'Var3': [0, 0, 1]})
data
Out[50]:
  Sample  Var1  Var2  Var3
0      a     1     0     0
1      b     0     1     0
2      c     0     0     1
So the result after processing should be a new dataframe which looks like data, but drops the Var2 column:
data2
Out[51]:
  Sample  Var1  Var3
0      a     1     0
1      b     0     0
2      c     0     1
I can get this to work by selectively dropping columns using .iterrows():
data2 = data.copy()
for index, row in control.iterrows():
    if row['Include'] != 1:
        z = row['Variable']
        data2.drop(z, axis=1, inplace=True)
This works, but it seems there should be a way to do this on the whole dataframe at once. Something like:
data2 = data[control['Include'] == 1]
However, this filters out rows based on the 'Include' value, not columns.
Any suggestions appreciated.

Select the necessary headers from the control frame and use smart selection from the data:
headers = control[control['Include'] == 1]['Variable']
all_headers = ['Sample'] + list(headers)
data[all_headers]
#   Sample  Var1  Var3
# 0      a     1     0
# 1      b     0     0
# 2      c     0     1
A side note: Consider using boolean True and False instead of 0s and 1s in the Include column, if possible.
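For illustration, a minimal sketch of that boolean variant (the astype(bool) conversion and the control_bool name are assumptions, not part of the original answer):
# convert the 0/1 flags to real booleans, then use them directly as a mask
control_bool = control.assign(Include=control['Include'].astype(bool))
headers = control_bool.loc[control_bool['Include'], 'Variable']
data[['Sample'] + list(headers)]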

This should be a pretty fast solution using numpy and reconstruction:
import numpy as np
import pandas as pd

# get data column values, which is a numpy array
dcol = data.columns.values
# find the positions where control Include is non-zero
non0 = np.nonzero(control.Include.values)
# slice control.Variable to get the names of the Variables to include
icld = control.Variable.values[non0]
# search the column names of data for the included Variables
# and the Sample column to get the positions to slice
srch = dcol.searchsorted(np.append('Sample', icld))
# reconstruct the dataframe using the srch slice we've created
pd.DataFrame(data.values[:, srch], data.index, dcol[srch])
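One caveat: searchsorted assumes the column labels are sorted, which happens to hold for ['Sample', 'Var1', 'Var2', 'Var3']. A sketch of a position-based variant that does not rely on sorted labels, using Index.get_indexer (an addition, not part of the original answer):
# get_indexer returns the integer positions of the requested labels regardless of ordering
srch = data.columns.get_indexer(np.append('Sample', icld))
pd.DataFrame(data.values[:, srch], data.index, data.columns[srch])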

Related

Python dataframe if conditions are true column value=1, otherwise 0

I have a dataframe and I want to make a new column that is 0 if any one of about 20 conditions are true, and 1 otherwise. My current code is:
conditions = [(lastYear_sale['Low_Balance_Flag'==0]) & (lastYear_sale.another_Flag==0),
              (lastYear_sale['Low_Balance_Flag'==1]) & (lastYear_sale.another_Flag==1)]
choices = [1, 0]
lastYear_sale['eligible'] = np.select(conditions, choices, default=0)
So here is a simplified version of the dataframe I have, that looks a little like:
data = {'ID': ['a', 'b', 'c', 'd'],
        'Low_Balance_Flag': [1, 0, 1, 0],
        'another_Flag': [0, 0, 1, 1]}
dfr = pd.DataFrame(data)
I would like to add a column called eligible that is 0 if Low_Balance_Flag or another_Flag is 1, but if all the other columns are 0 then eligible should be 1. My attempt gives an error that just says KeyError: False, but I can't see what the problem is. Thanks for any suggestions! :)
Edit: so the output I'd be looking for in this case would be:
ID  Low_Balance  another_Flag  Eligible
 a            1             0         0
 b            0             0         1
 c            1             1         0
 d            0             1         0
So conditions is basically what you need; you just need the proper conditions. I assume what you have are two condition clauses separated by a comma. If you have a data frame lastYear_sale (which I believe is supposed to be dfr), then:
conditions = ((lastYear_sale['Low_Balance_Flag'] == 0) & (lastYear_sale.another_Flag == 0)) | \
             ((lastYear_sale['Low_Balance_Flag'] == 1) & (lastYear_sale.another_Flag == 1))
print((~conditions).astype(int))
0    1
1    0
2    0
3    0
dtype: int64
If your conditions are somewhat dynamic and you need to build them within code, you can use pandas.DataFrame.eval (or query) to evaluate a string expression you have built.
Edit: I still assume dfr is the same as lastYear_sale. Also, the data in dfr does not match the data in the expected output.
# Use either of these
conditions = ~((lastYear_sale['Low_Balance_Flag'] == 1) | (lastYear_sale.another_Flag == 1))
conditions = ~lastYear_sale.eval('Low_Balance_Flag == 1 | another_Flag == 1')
dfr['eligible'] = conditions.astype(int)
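For the dynamic case mentioned above, a minimal sketch of building the expression string in code (the flag_cols list is an assumption about which columns carry flags):
# assemble the expression from whatever flag columns are in play
flag_cols = ['Low_Balance_Flag', 'another_Flag']
expr = ' | '.join(f'({c} == 1)' for c in flag_cols)
dfr['eligible'] = (~dfr.eval(expr)).astype(int)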

add/combine columns after searching in a DataFrame

I'm trying to copy data from different columns to a particular column in the same DataFrame.
(Source table: columns Index, col1A, col2A, colB, list, CT, CW, CH, with values such as 1, b, 2 and 3d spread across the col1A/col2A/colB columns; the original table layout did not survive formatting.)
But prior to that, I wanted to check whether those columns (col1A, col2A, colB) exist in the DataFrame, group the ones that are present, and move the grouped data to the relevant columns (CT, CH, etc.), like:
(Desired table: columns CH, CW and CT, with the values 1, b, 2 and 3d grouped into them for rows 0 to 3; this table layout also did not survive formatting.)
I did:
col_list1 = ['ColA', 'ColB', 'ColC']
test1 = any([i in df.columns for i in col_list1])
if test1 == True:
    df['CH'] = df['Col1A'] + df['Col2A']
    df['CT'] = df['ColB']
This code is throwing me a KeyError. I want it to ignore columns that are not present and add only those that are present.
IIUC, you can use a Python set or Series.isin to find the common columns:
cols = list(set(col_list1) & set(df.columns))
# or
cols = df.columns[df.columns.isin(col_list1)]
df['CH'] = df[cols].sum(axis=1)
Instead of just concatenating the columns with +, collect the ones that are present into a list and sum them (note that np.sum over a list of column Series needs axis=0 to give one value per row):
df['CH'] = np.sum([df[c] for c in col_list1 if c in df], axis=0)
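A minimal runnable sketch of the idea (the frame and column names are hypothetical; 'ColC' is deliberately absent and is silently skipped):
import numpy as np
import pandas as pd

df = pd.DataFrame({'ColA': [1, 2], 'ColB': [3, 4]})
col_list1 = ['ColA', 'ColB', 'ColC']
# only the columns actually present contribute to the row-wise sum
df['CH'] = np.sum([df[c] for c in col_list1 if c in df], axis=0)
print(df)
#    ColA  ColB  CH
# 0     1     3   4
# 1     2     4   6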

Appending two dataframes with MultiIndex rows?

I have two dataframes:
The first one looks like this:
                variable
entry subentry
0     1                X
      2                Y
      3                Z
and the second one looks like:
                variable
entry subentry
0     1                A
      2                B
I would like to merge the two dataframes such that I get:
                variable
entry subentry
0     1                X
      2                Y
      3                Z
1     1                A
      2                B
Simply using df1.append(df2, ignore_index=True) gives
  variable
0        X
1        Y
2        Z
3        A
4        B
In other words, it collapses the MultiIndex into a single index. Is there a way around this?
Edit: Here is a code snippet that will reproduce the problem:
arrays = [
    np.array([0, 0, 0]),
    np.array([0, 1, 2]),
]
arrays_2 = [
    np.array([0, 0]),
    np.array([0, 1]),
]
df1 = pd.DataFrame(np.random.randn(3, 1), index=arrays)
df2 = pd.DataFrame(np.random.randn(2, 1), index=arrays_2)
df = df1.append(df2, ignore_index=True)
print(df)
Edit: In practice, I am looking to combine N dataframes, each with a different number of "entry" rows. So I am looking for an approach that does not rely on me knowing the exact number of dataframes I am combining.
One way to try:
pd.concat([df1, df2], keys=[0,1]).droplevel(1)
Output:
            0
0 0 -0.439749
  1 -0.478744
  2  0.719870
1 0 -1.055648
  1 -2.007242
Use pd.concat to concatenate the dataframes, and since entry is the same in both, use the keys parameter to create a new level with the naming you want. Finally, drop the old index level (where the value was the same).
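Since the edit mentions combining N dataframes, the same idea could be extended by generating the keys (a sketch, assuming the frames are collected in a list):
dfs = [df1, df2]  # any number of frames built like the ones above
# one key per frame; droplevel(1) discards the old, duplicated "entry" level
combined = pd.concat(dfs, keys=range(len(dfs))).droplevel(1)
print(combined)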

Data cleaning: Remove 0 value from my dataset having a header and index_col

I have the dataset shown below.
What I would like to do is three things.
Step 1: AA to CC is an index; however, I'm happy to keep it in the dataset for future purposes.
Step 2: Count 0 value to each row.
Step 3: If 0 makes up more than 20% of the row, which means more than 2 zeroes in this case because DD to MM is 10 columns, remove the row.
So I took a stupid way to achieve the above three steps.
df = pd.read_csv("dataset.csv", header=None)
df_bool = (df == "0")
print(df_bool.sum(axis=1))
Then I got the expected result shown below.
0 0
1 0
2 1
3 0
4 1
5 8
6 1
7 0
So I removed row #5 as indicated below.
df2 = df.drop([5], axis=0)
print(df2)
This works well even though it is not elegant, kind of a stupid way to go. However, if I import my dataset with header=0, then this approach does not work at all.
df = pd.read_csv("dataset.csv", header=0)
0 0
1 0
2 0
3 0
4 0
5 0
6 0
7 0
How come this happens?
Also, if I wanted to write the code with loop, count and drop functions, what would it look like?
You can just continue using boolean indexing:
First we calculate the number of columns and the number of zeroes per row:
n_columns = len(df.columns) # or df.shape[1]
zeroes = (df == "0").sum(axis=1)
We then select only rows that have less than 20 % zeroes.
proportion_zeroes = zeroes / n_columns
max_20 = proportion_zeroes < 0.20
df[max_20] # This will contain only rows that have less than 20 % zeroes
One liner:
df[((df == "0").sum(axis=1) / len(df.columns)) < 0.2]
It would have been great if you could have posted how the dataframe looks in pandas rather than a picture of an Excel file. However, constructing a dummy df:
df = pd.DataFrame({'index1': ['a', 'b', 'c'], 'index2': ['b', 'g', 'f'], 'index3': ['w', 'q', 'z'],
                   'Col1': [0, 1, 0], 'Col2': [1, 1, 0], 'Col3': [1, 1, 1], 'Col4': [2, 2, 0]})
Step 1, assigning the index, can be done using the .set_index() method as per below:
df.set_index(['index1','index2','index3'],inplace=True)
Instead of doing everything manually when it comes to filtering out rows, you can use the result of df_bool.sum(axis=1) directly when filtering the dataframe, as per below:
df.loc[(df == 0).sum(axis=1) / df.shape[1] > 0.6]
                      Col1  Col2  Col3  Col4
index1 index2 index3
c      f      z          0     0     1     0
and using that you can drop those rows; assuming 20%, you would use
df = df.loc[(df == 0).sum(axis=1) / df.shape[1] < 0.2]
When it comes to the header issue, it's a bit difficult to answer without seeing what the file or dataframe looks like.

Reference values in the previous row with map or apply

Given a dataframe df, I would like to generate a new variable/column for each row based on the values in the previous row. df is sorted so that the order of the rows is meaningful.
Normally, we can use either map or apply, but it seems that neither of them allows access to values in the previous row.
For example, given existing rows a b c, I want to generate a new column d, which is based on some calculation using the value of c in the previous row.
How should I do it in pandas?
If you just want to do a calculation based on the previous row, you can calculate and then shift:
In [2]: df = pd.DataFrame({'a':[0,1,2], 'b':[0,10,20]})
In [3]: df
Out[3]:
   a   b
0  0   0
1  1  10
2  2  20
# a calculation based on other column
In [4]: df['c'] = df['b'] + 1
# shift the column
In [5]: df['c'] = df['c'].shift()
In [6]: df
Out[6]:
   a   b    c
0  0   0  NaN
1  1  10    1
2  2  20   11
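Applied to the question's own terms, d based on the previous row's c could look like this (the frame and the calculation are placeholders, not from the original answer):
df2 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})
df2['d'] = df2['c'].shift() + df2['a']   # previous row's c combined with the current row's a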
If you want to do a calculation based on multiple rows, you could look at the rolling_apply function (http://pandas.pydata.org/pandas-docs/stable/computation.html#moving-rolling-statistics-moments and http://pandas.pydata.org/pandas-docs/stable/generated/pandas.rolling_apply.html#pandas.rolling_apply)
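In more recent pandas versions, rolling_apply has been folded into the .rolling().apply() method; a minimal sketch (window size and function are arbitrary choices here):
# sum over a sliding window of 2 rows of column b
df['b_roll'] = df['b'].rolling(window=2).apply(lambda w: w.sum(), raw=False)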
You can use the dataframe 'apply' function and leverage the unused 'kwargs' parameter to store the previous row.
import pandas as pd
df = pd.DataFrame({'a':[0,1,2], 'b':[0,10,20]})
new_col = 'c'
def apply_func_decorator(func):
    prev_row = {}
    def wrapper(curr_row, **kwargs):
        val = func(curr_row, prev_row)
        prev_row.update(curr_row)
        prev_row[new_col] = val
        return val
    return wrapper

@apply_func_decorator
def running_total(curr_row, prev_row):
    return curr_row['a'] + curr_row['b'] + prev_row.get('c', 0)

df[new_col] = df.apply(running_total, axis=1)
print(df)
# Output will be:
#    a   b   c
# 0  0   0   0
# 1  1  10  11
# 2  2  20  33
This example uses a decorator to store the previous row in a dictionary and then pass it to the function when Pandas calls it on the next row.
Disclaimer 1: The 'prev_row' variable starts off empty for the first row so when using it in the apply function I had to supply a default value to avoid a 'KeyError'.
Disclaimer 2: I am fairly certain this will be slower than a plain apply operation, but I did not run any tests to figure out by how much.
