np.where multiple return values - python

Using pandas and numpy I am trying to process a column in a dataframe, and want to create a new column with values relating to it. So if column x contains the value 1, the new column would contain 'a'; for the value 2 it would be 'b', etc.
I can do this for single conditions, i.e.:
df['new_col'] = np.where(df['col_1'] == 1, 'a', np.nan)
And I can find examples of multiple conditions, i.e. if x == 3 or x == 4 the value should be 'a'; but not for something like: if x == 3 the value should be 'a' and if x == 4 the value should be 'c'.
I tried simply running two lines of code such as:
df['new_col'] = np.where(df['col_1'] == 1, 'a', np.nan)
df['new_col'] = np.where(df['col_1'] == 2, 'b', np.nan)
But obviously the second line overwrites the first. Am I missing something crucial?

I think you can use loc:
df.loc[df['col_1'] == 1, 'new_col'] = 'a'
df.loc[df['col_1'] == 2, 'new_col'] = 'b'
Or:
df['new_col'] = np.where(df['col_1'] == 1, 'a', np.where(df['col_1'] == 2, 'b', np.nan))
Or numpy.select:
df['new_col'] = np.select([df['col_1'] == 1, df['col_1'] == 2], ['a', 'b'], default=None)
Or use Series.map; if there is no match, you get NaN by default:
d = {1: 'a', 2: 'b'}
df['new_col'] = df['col_1'].map(d)
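Putting two of the approaches above into a self-contained, runnable sketch (column name and values taken from the question; a default of None is used with np.select so unmatched rows stay empty rather than becoming the string 'nan'):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col_1': [1, 2, 3, 1]})

# np.select: one condition/choice pair per value, default for everything else
df['select_col'] = np.select([df['col_1'] == 1, df['col_1'] == 2],
                             ['a', 'b'], default=None)

# Series.map: unmatched values become NaN automatically
df['map_col'] = df['col_1'].map({1: 'a', 2: 'b'})
print(df)
```

Both scale to any number of value-to-label pairs without nesting np.where calls.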

I think numpy.choose() is the best option for you.
import numpy as np
choices = 'abcde'
N = 10
np.random.seed(0)
data = np.random.randint(1, len(choices) + 1, size=N)
print(data)
print(np.choose(data - 1, choices))
Output:
[5 1 4 4 4 2 4 3 5 1]
['e' 'a' 'd' 'd' 'd' 'b' 'd' 'c' 'e' 'a']

You could define a dict with your desired transformations, then loop through the DataFrame column and fill a new one. There may be more elegant ways, but this will work:
import numpy as np
import pandas as pd

# create a dummy DataFrame
df = pd.DataFrame(np.random.randint(2, size=(6, 4)),
                  columns=['col_1', 'col_2', 'col_3', 'col_4'],
                  index=range(6))
# create a dict with your desired substitutions:
swap_dict = {0: 'a',
             1: 'b',
             999: 'zzz'}
# introduce a new column and fill it with the swapped information:
for i in df.index:
    df.loc[i, 'new_col'] = swap_dict[df.loc[i, 'col_1']]
print(df)
returns something like:
col_1 col_2 col_3 col_4 new_col
0 1 1 1 1 b
1 1 1 1 1 b
2 0 1 1 0 a
3 0 1 0 0 a
4 0 0 1 1 a
5 0 0 1 0 a
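For comparison, the row-by-row loop above can be replaced with a single vectorized lookup over the same dict (a sketch reusing swap_dict; with randint(2) only the keys 0 and 1 are ever hit):

```python
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.randint(2, size=(6, 4)),
                  columns=['col_1', 'col_2', 'col_3', 'col_4'])
swap_dict = {0: 'a', 1: 'b', 999: 'zzz'}

# one vectorized lookup instead of a per-row loop
df['new_col'] = df['col_1'].map(swap_dict)
print(df)
```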

Use the pandas Series.map instead of where.
import pandas as pd
df = pd.DataFrame({'col_1' : [1,2,4,2]})
print(df)
def ab_ify(v):
    if v == 1:
        return 'a'
    elif v == 2:
        return 'b'
    else:
        return None
df['new_col'] = df['col_1'].map(ab_ify)
print(df)
# output:
#
# col_1
# 0 1
# 1 2
# 2 4
# 3 2
# col_1 new_col
# 0 1 a
# 1 2 b
# 2 4 None
# 3 2 b

Related

Subtract values in a column in blocks

Suppose there is the following dataframe:
import pandas as pd
df = pd.DataFrame({'Group': ['A', 'A', 'B', 'B', 'C', 'C'], 'Value': [1, 2, 3, 4, 5, 6]})
I would like to subtract the values from group B and C with those of group A and make a new column with the difference. That is, I would like to do something like this:
df[df['Group'] == 'B']['Value'].reset_index() - df[df['Group'] == 'A']['Value'].reset_index()
df[df['Group'] == 'C']['Value'].reset_index() - df[df['Group'] == 'A']['Value'].reset_index()
and place the result in a new column. Is there a way of doing it without a for loop?
Assuming you want to subtract the first A from the first B/C, the second A from the second B/C, etc., the easiest might be to reshape:
df2 = (df
       .assign(cnt=df.groupby('Group').cumcount())
       .pivot(index='cnt', columns='Group', values='Value')
)
# Group A B C
# cnt
# 0 1 3 5
# 1 2 4 6
df['new_col'] = df2.sub(df2['A'], axis=0).melt()['value']
A variant:
df['new_col'] = (df
                 .assign(cnt=df.groupby('Group').cumcount())
                 .groupby('cnt', group_keys=False)
                 .apply(lambda d: d['Value'].sub(d.loc[d['Group'].eq('A'), 'Value'].iloc[0]))
)
output:
Group Value new_col
0 A 1 0
1 A 2 0
2 B 3 2
3 B 4 2
4 C 5 4
5 C 6 4
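The pairing idea can also be expressed without the pivot/melt round-trip, as a sketch assuming every group has the same number of rows in the same order and that group A supplies the reference values:

```python
import pandas as pd

df = pd.DataFrame({'Group': ['A', 'A', 'B', 'B', 'C', 'C'],
                   'Value': [1, 2, 3, 4, 5, 6]})

# each row's position within its group pairs it with the matching A row
cnt = df.groupby('Group').cumcount()
a_vals = df.loc[df['Group'] == 'A', 'Value'].to_numpy()
df['new_col'] = df['Value'] - a_vals[cnt]
print(df)
```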

Groupby contains two specific values - pandas

I'm aiming to return rows in a pandas df that contain two specific values grouped by a separate column. Using below, I'm grouping by Num and aiming to return rows where B is present but not A for each unique group.
If neither A nor B is assigned to a grouped value then continue. I only want to return the rows where B is present but not A.
import pandas as pd
df = pd.DataFrame({
    'Num':   [1, 1, 2, 2, 2, 2, 3, 3, 4, 4, 4, 4],
    'Label': ['X', 'Y', 'X', 'B', 'B', 'B', 'A', 'B', 'B', 'A', 'B', 'X'],
})
df = df.loc[(df['Label'] == 'A') | (df['Label'] == 'B')]
df = df.groupby('Num').filter(lambda x: any(x['Label'] == 'A'))
df = df.groupby('Num').filter(lambda x: any(x['Label'] == 'B'))
intended output:
Num Label
2 2 B
3 2 B
4 2 B
5 2 B
You can keep groups in which all remaining values are B by using GroupBy.transform with all:
df1 = df.loc[(df['Label'] == 'A') | (df['Label'] == 'B')]
df1 = df1[(df1['Label'] == 'B').groupby(df1['Num']).transform('all')]
print (df1)
Num Label
3 2 B
4 2 B
5 2 B
If you need to filter the original df by those Num values, use:
df = df[df['Num'].isin(df1['Num'])]
print (df)
Num Label
2 2 X
3 2 B
4 2 B
5 2 B
Another approach is to filter with numpy.setdiff1d:
num = np.setdiff1d(df.loc[df['Label'] == 'B', 'Num'],
                   df.loc[df['Label'] == 'A', 'Num'])
df = df[df['Num'].isin(num)]
print (df)
Num Label
2 2 X
3 2 B
4 2 B
5 2 B
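The same "B present, A absent" logic can also be written with two boolean transforms, which keeps the whole pipeline in pandas (a sketch using the question's data):

```python
import pandas as pd

df = pd.DataFrame({
    'Num':   [1, 1, 2, 2, 2, 2, 3, 3, 4, 4, 4, 4],
    'Label': ['X', 'Y', 'X', 'B', 'B', 'B', 'A', 'B', 'B', 'A', 'B', 'X'],
})

# per group: does any row carry 'B'? does any row carry 'A'?
has_b = df['Label'].eq('B').groupby(df['Num']).transform('any')
has_a = df['Label'].eq('A').groupby(df['Num']).transform('any')

# keep the 'B' rows of groups that contain B but not A
out = df[df['Label'].eq('B') & has_b & ~has_a]
print(out)
```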

How do I delete a column that contains only zeros from a given row in pandas

I've found how to remove columns that contain only zeros across all rows using the command df.loc[:, (df != 0).any(axis=0)], and I need to do the same but for a given row number.
For example, for the following df
In [75]: df = pd.DataFrame([[1,1,0,0], [1,0,1,0]], columns=['a','b','c','d'])
In [76]: df
Out[76]:
a b c d
0 1 1 0 0
1 1 0 1 0
Give me the columns with non-zero values for row 0 and I would expect the result:
a b
0 1 1
And for the row 1 get:
a c
1 1 1
I tried a lot of combinations of commands but I couldn't find a solution.
UPDATE:
I have a 300x300 matrix, I need to better visualize its result.
Below is pseudo-code trying to show what I need:
for i in range(len(df[rows])):
    _df = df.iloc[i]
    _df = _df.filter(remove_zeros_columns)
    print('Row: ', i)
    print(_df)
Result:
Row: 0
a b
0 1 1
Row: 1
a c f
1 1 5 10
Row: 2
e
2 20
Best Regards.
Kleyson Rios.
You can change data structure:
df = df.reset_index().melt('index', var_name='columns').query('value != 0')
print (df)
index columns value
0 0 a 1
1 1 a 1
2 0 b 1
5 1 c 1
If you need a new column with the non-zero column names joined by ', ', compare values for not equal with DataFrame.ne and use matrix multiplication with DataFrame.dot:
df['new'] = df.ne(0).dot(df.columns + ', ').str.rstrip(', ')
print (df)
a b c d new
0 1 1 0 0 a, b
1 1 0 1 0 a, c
EDIT:
for i in df.index:
    row = df.loc[[i]]
    a = row.loc[:, (row != 0).any()]
    print('Row {}'.format(i))
    print(a)
Or:
def f(x):
    print('Row {}'.format(x.name))
    print(x[x != 0].to_frame().T)

df.apply(f, axis=1)
Row 0
a b
0 1 1
Row 1
a c
1 1 1
df = pd.DataFrame([[1, 1, 0, 0], [1, 0, 1, 0]], columns=['a', 'b', 'c', 'd'])

def get(row):
    return list(df.columns[row.ne(0)])

df['non zero column'] = df.apply(lambda x: get(x), axis=1)
print(df)
Also, if you want a one-liner, use this:
df['non zero column'] = [list(df.columns[i]) for i in df.ne(0).values]
output
a b c d non zero column
0 1 1 0 0 [a, b]
1 1 0 1 0 [a, c]
I think this answers your question more strictly.
Just change the value of given_row as needed.
given_row = 1
mask_all_rows = df != 0
mask_row = mask_all_rows.loc[given_row]
cols_to_keep = mask_row.index[mask_row].tolist()
df_filtered = df[cols_to_keep]
# And if you only want to keep the given row
df_filtered = df_filtered[df_filtered.index == given_row]
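The row-masking idea above can be wrapped into a small helper (the function name is illustrative):

```python
import pandas as pd

df = pd.DataFrame([[1, 1, 0, 0], [1, 0, 1, 0]], columns=['a', 'b', 'c', 'd'])

def nonzero_columns(frame, row_label):
    """Return the given row as a one-row frame, keeping only its non-zero columns."""
    row = frame.loc[[row_label]]          # double brackets keep a DataFrame
    return row.loc[:, (row != 0).any()]

print(nonzero_columns(df, 0))
print(nonzero_columns(df, 1))
```

This makes it easy to loop over all 300 rows of a larger matrix and print each one's surviving columns.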

print columns names if row equal to value in python

How can I iterate over the rows and print the column names in one column where the value equals 1?
mydata = [{'a' : '0', 'b': 1, 'c': 0}, {'a' : 1, 'b': 0, 'c':1}, {'a' : '0', 'b': 1, 'c':1}]
df = pd.DataFrame(mydata)
a b c Result
0 1 0 b
1 0 1 a , c
0 1 1 b , c
The result only shows the columns name that are equal to 1
Using dot
df['New'] = df.astype(int).dot(df.columns + ',').str[:-1]
df
Out[44]:
a b c New
0 0 1 0 b
1 1 0 1 a,c
2 0 1 1 b,c
Use boolean indexing to index the row, and join column names
df['new'] = df.eq(1).apply(lambda x: ', '.join(x[x].index), axis = 1)
a b c new
0 0 1 0 b
1 1 0 1 a, c
2 0 1 1 b, c
You can also do this (DataFrame.set_value has been removed from modern pandas, so loc is used here):
for i in range(len(df)):
    df.loc[i, 'Result'] = ', '.join(df.columns[(df == 1).iloc[i]])
A more beginner-friendly solution would be:
for index, row in df.iterrows():
    for key in row.keys():
        if row.get(key) == 1:
            print(key)

Most efficient way to return Column name in a pandas df

I have a pandas df that contains 4 different columns. For every row there is one value of importance, and I want to return the name of the column where that value is found. So for the df below, I want to return the column name wherever the value 2 appears.
d = {'A': [2, 0, 0, 2],
     'B': [0, 0, 2, 0],
     'C': [0, 2, 0, 0],
     'D': [0, 0, 0, 0]}
df = pd.DataFrame(data=d)
Output:
A B C D
0 2 0 0 0
1 0 0 2 0
2 0 2 0 0
3 2 0 0 0
So it would be A,C,B,A
I'm doing this via
m = (df == 2).idxmax(axis=1)[0]
And then changing the row. But this isn't very efficient.
I'm also hoping to produce the output as a Series from pandas df
Use DataFrame.dot:
df.astype(bool).dot(df.columns).str.cat(sep=',')
Or,
','.join(df.astype(bool).dot(df.columns))
'A,C,B,A'
Or, as a list:
df.astype(bool).dot(df.columns).tolist()
['A', 'C', 'B', 'A']
...or a Series:
df.astype(bool).dot(df.columns)
0 A
1 C
2 B
3 A
dtype: object
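If you only need the column labels rather than a joined string, the question's own idxmax idea already vectorizes over all rows and returns a Series directly (note it assumes every row contains the value; on an all-False row idxmax would silently return the first column):

```python
import pandas as pd

d = {'A': [2, 0, 0, 2],
     'B': [0, 0, 2, 0],
     'C': [0, 2, 0, 0],
     'D': [0, 0, 0, 0]}
df = pd.DataFrame(data=d)

# idxmax(axis=1) returns the first column label where the mask is True, per row
s = (df == 2).idxmax(axis=1)
print(s.tolist())
```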
