Pandas: replace a word in a string

Given a dataframe like:
A B C
1 a yes
2 b yes
3 a no
I would like to change the dataframe to:
A B C
1 a yes
2 b no
3 a no
which means that if column B has the value 'b', I want to change column C to 'no'. This can be expressed as df[df['B']=='b']['C'].str.replace('yes','no'), but using it does not change the dataframe df itself. Even df[df['B']=='b']['C'] = df[df['B']=='b']['C'].str.replace('yes','no') didn't work. I am wondering how to solve this problem.
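For reference, here is a minimal setup matching the tables above (assumed, since the question does not include constructor code):

import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3],
                   'B': ['a', 'b', 'a'],
                   'C': ['yes', 'yes', 'no']})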

Solutions that set values by mask:
df.loc[df.B == 'b', 'C'] = 'no'
print (df)
A B C
0 1 a yes
1 2 b no
2 3 a no
df['C'] = df['C'].mask(df.B == 'b','no')
print (df)
A B C
0 1 a yes
1 2 b no
2 3 a no
Solutions that replace only the 'yes' string:
df.loc[df.B == 'b', 'C'] = df['C'].replace('yes', 'no')
print (df)
A B C
0 1 a yes
1 2 b no
2 3 a no
df['C'] = df['C'].mask(df.B == 'b', df['C'].replace('yes', 'no'))
print (df)
A B C
0 1 a yes
1 2 b no
2 3 a no
The difference is easier to see on a modified df: the mask solutions set 'no' for every row where B == 'b', while the replace solutions only change values that are exactly 'yes':
print (df)
A B C
0 1 a yes
1 2 b yes
2 3 b another
3 4 a no
df['C_set'] = df['C'].mask(df.B == 'b','no')
df['C_replace'] = df['C'].mask(df.B == 'b', df['C'].replace('yes', 'no'))
print (df)
A B C C_set C_replace
0 1 a yes yes yes
1 2 b yes no no
2 3 b another no another
3 4 a no no no
EDIT:
In your solution it is only necessary to add loc:
df.loc[df['B']=='b', 'C'] = df.loc[df['B']=='b', 'C'].str.replace('yes','no')
print (df)
A B C
0 1 a yes
1 2 b no
2 3 b another
3 4 a no
EDIT1:
I was really curious which method is fastest:
#[40000 rows x 3 columns]
df = pd.concat([df]*10000).reset_index(drop=True)
print (df)
In [37]: %timeit df.loc[df['B']=='b', 'C'] = df['C'].str.replace('yes','no')
10 loops, best of 3: 79.5 ms per loop
In [38]: %timeit df.loc[df['B']=='b', 'C'] = df.loc[df['B']=='b','C'].str.replace('yes','no')
10 loops, best of 3: 48.4 ms per loop
In [39]: %timeit df.loc[df['B']=='b', 'C'] = df.loc[df['B']=='b', 'C'].replace('yes','no')
100 loops, best of 3: 14.1 ms per loop
In [40]: %timeit df['C'] = df['C'].mask(df.B == 'b', df['C'].replace('yes', 'no'))
100 loops, best of 3: 10.1 ms per loop
# piRSquared solution with replace
In [53]: %timeit df.C = np.where(df.B.values == 'b', df.C.replace('yes', 'no'), df.C.values)
100 loops, best of 3: 4.74 ms per loop
EDIT2:
It is better to tighten the condition - add df.C == 'yes' (or df.C.values == 'yes') if you need the fastest solution, so only rows that actually contain 'yes' are rewritten:
df.loc[(df.B == 'b') & (df.C == 'yes'), 'C'] = 'no'
df.C = np.where((df.B.values == 'b') & (df.C.values == 'yes'), 'no', df.C.values)

np.where
df.C = np.where(df.B == 'b', 'no', df.C)
Or
df.C = np.where(df.B.values == 'b', 'no', df.C.values)
pd.Series.mask
df.C = df.C.mask(df.B == 'b', 'no')
All of these change df in place and yield
A B C
0 1 a yes
1 2 b no
2 3 a no
timing
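A sketch of how such a timing comparison could be rerun (the dataframe and scale factor are assumed, following the EDIT1 setup above):

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3],
                   'B': ['a', 'b', 'a'],
                   'C': ['yes', 'yes', 'no']})
df = pd.concat([df] * 10000).reset_index(drop=True)  # 30000 rows

%timeit df.C.mask(df.B == 'b', 'no')
%timeit np.where(df.B.values == 'b', 'no', df.C.values)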

Related

pandas dataframe index match

I'm wondering if there is a more efficient way to do an "index & match" type operation, as is popular in Excel. For example - given two pandas DataFrames, update df_1 with information found in df_2:
import pandas as pd
df_1 = pd.DataFrame({'num_a': [1, 2, 3, 4, 5],
                     'num_b': [2, 4, 1, 2, 3]})
df_2 = pd.DataFrame({'num': [1, 2, 3, 4, 5],
                     'name': ['a', 'b', 'c', 'd', 'e']})
I'm working with data sets that have ~80,000 rows in both df_1 and df_2 and my goal is to create two new columns in df_1, "name_a" and "name_b".
Below is the most efficient method that I could come up with. There has to be a better way!
name_a = []
name_b = []
for i in range(len(df_1)):
    name_a.append(df_2.name.iloc[df_2[
        df_2.num == df_1.num_a.iloc[i]].index[0]])
    name_b.append(df_2.name.iloc[df_2[
        df_2.num == df_1.num_b.iloc[i]].index[0]])
df_1['name_a'] = name_a
df_1['name_b'] = name_b
Resulting in:
>>> df_1.head()
num_a num_b name_a name_b
0 1 2 a b
1 2 4 b d
2 3 1 c a
3 4 2 d b
4 5 3 e c
High Level
Create a dictionary to use in a replace
replace, rename columns, and join
m = dict(zip(
    df_2.num.values.tolist(),
    df_2.name.values.tolist()
))
df_1.join(
    df_1.replace(m).rename(
        columns=lambda x: x.replace('num', 'name')
    )
)
num_a num_b name_a name_b
0 1 2 a b
1 2 4 b d
2 3 1 c a
3 4 2 d b
4 5 3 5 c
Breakdown
replace with a dictionary should be pretty quick. There are a bunch of ways to build a dictionary from df_2. As a matter of fact we could have used a pd.Series. I chose to build with dict and zip because I find that it's faster.
Building m
Option 1
m = df_2.set_index('num').name
Option 2
m = df_2.set_index('num').name.to_dict()
Option 3
m = dict(zip(df_2.num, df_2.name))
Option 4 (My Choice)
m = dict(zip(df_2.num.values.tolist(), df_2.name.values.tolist()))
m build times
%timeit df_2.set_index('num').name
1000 loops, best of 3: 325 µs per loop
%timeit df_2.set_index('num').name.to_dict()
1000 loops, best of 3: 376 µs per loop
%timeit dict(zip(df_2.num, df_2.name))
10000 loops, best of 3: 32.9 µs per loop
%timeit dict(zip(df_2.num.values.tolist(), df_2.name.values.tolist()))
100000 loops, best of 3: 10.4 µs per loop
Replacing num
Again, we have choices, here are a few and their times.
%timeit df_1.replace(m)
1000 loops, best of 3: 792 µs per loop
%timeit df_1.applymap(lambda x: m.get(x, x))
1000 loops, best of 3: 959 µs per loop
%timeit df_1.stack().map(lambda x: m.get(x, x)).unstack()
1000 loops, best of 3: 925 µs per loop
I choose...
df_1.replace(m)
num_a num_b
0 a b
1 b d
2 c a
3 d b
4 5 c
Rename columns
df_1.replace(m).rename(columns=lambda x: x.replace('num', 'name'))
name_a name_b <-- note the column name change
0 a b
1 b d
2 c a
3 d b
4 5 c
Join
df_1.join(df_1.replace(m).rename(columns=lambda x: x.replace('num', 'name')))
num_a num_b name_a name_b
0 1 2 a b
1 2 4 b d
2 3 1 c a
3 4 2 d b
4 5 3 5 c
I think there's a more straightforward solution than those already offered. Since you mentioned Excel, this is a basic VLOOKUP. You can simulate it in pandas by using Series.map.
name_map = dict(df_2.set_index('num').name)
df_1['name_a'] = df_1.num_a.map(name_map)
df_1['name_b'] = df_1.num_b.map(name_map)
df_1
num_a num_b name_a name_b
0 1 2 a b
1 2 4 b d
2 3 1 c a
3 4 2 d b
4 5 3 e c
All we do is convert df_2 to a dict with 'num' as the keys. The map function looks up each value from a df_1 column in the dict and returns the corresponding letter. No complicated indexing required.
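As a small variation (not part of the original answer), map also accepts a Series directly, so the intermediate dict can be skipped:

s = df_2.set_index('num')['name']  # index: num, values: name
df_1['name_a'] = df_1.num_a.map(s)
df_1['name_b'] = df_1.num_b.map(s)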
Just try direct positional indexing:
import pandas as pd
import numpy as np
df_1 = pd.DataFrame({'num_a': [1, 2, 3, 4, 5],
                     'num_b': [2, 4, 1, 2, 3]})
df_2 = pd.DataFrame({'num': [1, 2, 3, 4, 5],
                     'name': ['a', 'b', 'c', 'd', 'e']})
df_1["name_a"] = df_2["name"]  # num_a is 1..5 in order, so names align positionally
df_1["name_b"] = np.array(df_1["name_a"][df_1["num_b"] - 1])
print(df_1)
num_a num_b name_a name_b
0 1 2 a b
1 2 4 b d
2 3 1 c a
3 4 2 d b
4 5 3 e c
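Note that this trick relies on df_2.num being exactly 1..N in sorted order, so that position num_b - 1 lines up with the right name; with unsorted or non-contiguous keys the map-based answers above are the safe choice. A quick guard for the assumption:

# fails loudly if num is not the contiguous sequence 1..N
assert (df_2.num == list(range(1, len(df_2) + 1))).all()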

adding column in pandas dataframe containing the same value

I have a pandas dataframe A of size (1500,5) and a dictionary D containing:
D
Out[121]:
{'newcol1': 'a',
 'newcol2': 2,
 'newcol3': 1}
For each key in the dictionary I would like to create a new column in the dataframe A with the value from the dictionary (the same value for all rows of the column).
At the end, A should be of size (1500,8).
Is there a pythonic way to do this? Thanks!
You can use concat with DataFrame constructor:
D = {'newcol1': 'a',
     'newcol2': 2,
     'newcol3': 1}
df = pd.DataFrame({'A': [1, 2],
                   'B': [4, 5],
                   'C': [7, 8]})
print (df)
A B C
0 1 4 7
1 2 5 8
print (pd.concat([df, pd.DataFrame(D, index=df.index)], axis=1))
A B C newcol1 newcol2 newcol3
0 1 4 7 a 2 1
1 2 5 8 a 2 1
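As an aside, for a handful of columns a plain loop over the dictionary is also a readable option (a sketch, not one of the timed solutions below):

for col, val in D.items():
    df[col] = val  # a scalar value broadcasts to every row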
Timings:
import numpy as np

D = {'newcol1': 'a',
     'newcol2': 2,
     'newcol3': 1}
df = pd.DataFrame(np.random.rand(10000000, 5), columns=list('abcde'))
In [37]: %timeit pd.concat([df, pd.DataFrame(D, index=df.index)], axis=1)
The slowest run took 18.06 times longer than the fastest. This could mean that an intermediate result is being cached.
1 loop, best of 3: 875 ms per loop
In [38]: %timeit df.assign(**D)
1 loop, best of 3: 1.22 s per loop
setup
A = pd.DataFrame(np.random.rand(10, 5), columns=list('abcde'))
d = {
    'newcol1': 'a',
    'newcol2': 2,
    'newcol3': 1
}
solution
Use assign
A.assign(**d)
a b c d e newcol1 newcol2 newcol3
0 0.709249 0.275538 0.135320 0.939448 0.549480 a 2 1
1 0.396744 0.513155 0.063207 0.198566 0.487991 a 2 1
2 0.230201 0.787672 0.520359 0.165768 0.616619 a 2 1
3 0.300799 0.554233 0.838353 0.637597 0.031772 a 2 1
4 0.003613 0.387557 0.913648 0.997261 0.862380 a 2 1
5 0.504135 0.847019 0.645900 0.312022 0.715668 a 2 1
6 0.857009 0.313477 0.030833 0.952409 0.875613 a 2 1
7 0.488076 0.732990 0.648718 0.389069 0.301857 a 2 1
8 0.187888 0.177057 0.813054 0.700724 0.653442 a 2 1
9 0.003675 0.082438 0.706903 0.386046 0.973804 a 2 1
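Keep in mind that assign returns a new DataFrame rather than modifying A in place, so assign the result back if A itself should change:

A = A.assign(**d)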

Split a pandas dataframe into two by columns

I have a dataframe and I want to split it into two dataframes, one that has all the columns beginning with foo and one with the rest of the columns.
Is there a quick way of doing this?
You can use list comprehensions to select the column names:
df = pd.DataFrame({'fooA': [1, 2, 3],
                   'fooB': [4, 5, 6],
                   'fooC': [7, 8, 9],
                   'D': [1, 3, 5],
                   'E': [5, 3, 6],
                   'F': [7, 4, 3]})
print (df)
D E F fooA fooB fooC
0 1 5 7 1 4 7
1 3 3 4 2 5 8
2 5 6 3 3 6 9
foo = [col for col in df.columns if col.startswith('foo')]
print (foo)
['fooA', 'fooB', 'fooC']
other = [col for col in df.columns if not col.startswith('foo')]
print (other)
['D', 'E', 'F']
print (df[foo])
fooA fooB fooC
0 1 4 7
1 2 5 8
2 3 6 9
print (df[other])
D E F
0 1 5 7
1 3 3 4
2 5 6 3
Another solution with filter and difference:
df1 = df.filter(regex='^foo')
print (df1)
fooA fooB fooC
0 1 4 7
1 2 5 8
2 3 6 9
print (df.columns.difference(df1.columns))
Index(['D', 'E', 'F'], dtype='object')
print (df[df.columns.difference(df1.columns)])
D E F
0 1 5 7
1 3 3 4
2 5 6 3
Timings:
df3 = df.copy()
df4 = df.copy()

def a(df):
    df1 = df.filter(regex='^foo')
    df2 = df[df.columns.difference(df1.columns)]
    return df1, df2

def b(df):
    df1 = df[[col for col in df.columns if col.startswith('foo')]]
    df2 = df[[col for col in df.columns if not col.startswith('foo')]]
    return df1, df2

def c(df):
    df1 = df[df.columns[df.columns.str.startswith('foo')]]
    df2 = df[df.columns[~df.columns.str.startswith('foo')]]
    return df1, df2

In [123]: %timeit a(df)
1000 loops, best of 3: 1.06 ms per loop
In [124]: %timeit b(df3)
1000 loops, best of 3: 1.04 ms per loop
In [125]: %timeit c(df4)
1000 loops, best of 3: 1.41 ms per loop
df1, df2 = a(df)
print (df1)
print (df2)
df1, df2 = b(df3)
print (df1)
print (df2)
df1, df2 = c(df4)
print (df1)
print (df2)
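If you prefer two filter calls over computing the column difference, a negative-lookahead regex should also work (a variation on the filter answer; the lookahead is plain Python regex syntax, not an extra filter feature):

df1 = df.filter(regex='^foo')      # columns starting with 'foo'
df2 = df.filter(regex='^(?!foo)')  # columns NOT starting with 'foo'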

Replace certain column with `filter(like = "")` in Pandas

Sometimes I need to manipulate some columns of a dataframe and then write the result back.
For example, one dataframe df has 6 columns like this:
A, B1, B2, B3, C, D
I want to change the values in the columns (B1, B2, B3) to (B1*A, B2*A, B3*A).
Rather than a slow loop over the columns, df.filter(like = 'B') speeds things up a lot, and df.filter(like = "B").mul(df.A, axis = 0) produces the right answer. But I can't change the B-like columns in df using:
df.filter(like = "B") = df.filter(like = "B").mul(df.A, axis = 0)
How can I achieve this? I know that creating a new dataframe with pd.concat would get it done, but when the number of columns is huge that may be inefficient. What I want is to assign new values to the columns that already exist.
Any advice would be appreciated!
Use str.contains with boolean indexing:
cols = df.columns[df.columns.str.contains('B')]
df[cols] = df[cols].mul(df.A, axis = 0)
Sample:
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3],
                   'B1': [4, 5, 6],
                   'B2': [7, 8, 9],
                   'B3': [1, 3, 5],
                   'C': [5, 3, 6],
                   'D': [7, 4, 3]})
print (df)
A B1 B2 B3 C D
0 1 4 7 1 5 7
1 2 5 8 3 3 4
2 3 6 9 5 6 3
cols = df.columns[df.columns.str.contains('B')]
print (cols)
Index(['B1', 'B2', 'B3'], dtype='object')
df[cols] = df[cols].mul(df.A, axis = 0)
print (df)
A B1 B2 B3 C D
0 1 4 7 1 5 7
1 2 10 16 6 3 4
2 3 18 27 15 6 3
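One caveat to this solution: str.contains('B') matches a 'B' anywhere in the column name, so a hypothetical column named e.g. 'SUB' would be selected too. If only a leading B should match, startswith (or the regex '^B', as in the answer below) is the safer test:

cols = df.columns[df.columns.str.startswith('B')]
df[cols] = df[cols].mul(df.A, axis=0)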
Timings:
len(df)=3:
In [17]: %timeit (a(df))
1000 loops, best of 3: 1.36 ms per loop
In [18]: %timeit (b(df1))
100 loops, best of 3: 2.39 ms per loop
len(df)=30k:
In [14]: %timeit (a(df))
100 loops, best of 3: 2.89 ms per loop
In [15]: %timeit (b(df1))
100 loops, best of 3: 4.71 ms per loop
Code:
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3],
                   'B1': [4, 5, 6],
                   'B2': [7, 8, 9],
                   'B3': [1, 3, 5],
                   'C': [5, 3, 6],
                   'D': [7, 4, 3]})
print (df)
df = pd.concat([df]*10000).reset_index(drop=True)
df1 = df.copy()
def a(df):
    cols = df.columns[df.columns.str.contains('B')]
    df[cols] = df[cols].mul(df.A, axis=0)
    return df

def b(df):
    df.loc[:, df.filter(regex=r'^B').columns] = df.loc[:, df.filter(regex=r'^B').columns].mul(df.A, axis=0)
    return df
print (a(df))
print (b(df1))
You have almost done it:
In [136]: df.loc[:, df.filter(regex=r'^B').columns] = df.loc[:, df.filter(regex=r'^B').columns].mul(df.A, axis=0)
In [137]: df
Out[137]:
A B1 B2 B3 B4 F
0 1 4 7 1 5 7
1 2 10 16 6 6 4
2 3 18 27 15 18 3

Change values conditionally in Pandas DF with multilevel columns

Given the following DF with multilevel columns:
import numpy as np
import pandas as pd

arrays = [['foo', 'foo', 'bar', 'bar'],
          ['A', 'B', 'C', 'D']]
tuples = list(zip(*arrays))
columnValues = pd.MultiIndex.from_tuples(tuples)
df = pd.DataFrame(np.random.rand(6,4), columns = columnValues)
df['txt'] = 'aaa'
print(df)
yields:
foo bar txt
A B C D
0 0.080029 0.710943 0.157265 0.774827 aaa
1 0.276949 0.923369 0.550799 0.758707 aaa
2 0.416714 0.440659 0.835736 0.130818 aaa
3 0.935763 0.908967 0.502363 0.677957 aaa
4 0.191245 0.291017 0.014355 0.762976 aaa
5 0.365464 0.286350 0.450263 0.509556 aaa
Question: how do I efficiently change values in the foo sub-columns to 100 if their value is < 0.5, for a huge DF?
The following works:
In [41]: df.foo < 0.5
Out[41]:
A B
0 True False
1 True False
2 True True
3 False False
4 True True
5 True True
In [42]: df.foo[df.foo < 0.5]
Out[42]:
A B
0 0.080029 NaN
1 0.276949 NaN
2 0.416714 0.440659
3 NaN NaN
4 0.191245 0.291017
5 0.365464 0.286350
but if I try to change the values it throws:
In [45]: df.foo[df.foo < 0.5] = 100
C:\Users\USER\AppData\Local\Programs\Python35\Scripts\ipython:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
If I try to use indexers:
In [46]: df.foo.loc[df.foo < 0.5] = 100
...
ValueError: cannot copy sequence with size 2 to array axis with dimension 6
The same error occurs for df.foo.loc[df.foo < 0.5, 'foo'] = 100.
If I try:
df.loc[df.foo < 0.5, 'foo']
I get:
KeyError: 'None of [ A B\n0 True False\n1 True False\n2 True True\n3 False False\n4 True True\n5 True True] are in the [index]'
Solutions - timeit comparison against DF with 10M rows:
In [19]: %timeit df.foo.applymap(lambda x: x if x >= 0.5 else 100)
1 loop, best of 3: 29.4 s per loop
In [20]: %timeit df.foo[df.foo >= 0.5].fillna(100)
1 loop, best of 3: 1.55 s per loop
John Galt:
In [21]: %timeit df.foo.where(df.foo < 0.5, 100)
1 loop, best of 3: 1.12 s per loop
B. M.:
In [5]: %timeit u=df['foo'].values;u[u<.5]=100
1 loop, best of 3: 628 ms per loop
Here's one way using where -- df['foo'] = df['foo'].where(df['foo'] < 0.5, 100)
In [96]: df
Out[96]:
foo bar txt
A B C D
0 0.255309 0.237892 0.491065 0.930555 aaa
1 0.859998 0.008269 0.376213 0.984806 aaa
2 0.479928 0.761266 0.993970 0.266486 aaa
3 0.078284 0.009748 0.461687 0.653085 aaa
4 0.923293 0.642398 0.629140 0.561777 aaa
5 0.936824 0.526626 0.413250 0.732074 aaa
In [97]: df['foo'] = df['foo'].where(df['foo'] < 0.5, 100)
In [98]: df
Out[98]:
foo bar txt
A B C D
0 0.255309 0.237892 0.491065 0.930555 aaa
1 100.000000 0.008269 0.376213 0.984806 aaa
2 0.479928 100.000000 0.993970 0.266486 aaa
3 0.078284 0.009748 0.461687 0.653085 aaa
4 100.000000 100.000000 0.629140 0.561777 aaa
5 100.000000 100.000000 0.413250 0.732074 aaa
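Since where keeps the values where the condition is True, the same update can be written with mask, which replaces them there instead (an equivalent formulation with the condition inverted):

df['foo'] = df['foo'].mask(df['foo'] >= 0.5, 100)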
