How Do I Input Message Data Into a DataFrame Using pandas? [duplicate] - python

I have the following DataFrame:
from pandas import *
df = DataFrame({'foo':['a','b','c'], 'bar':[1, 2, 3]})
It looks like this:
   bar foo
0    1   a
1    2   b
2    3   c
Now I want to have something like:
      bar
0  1 is a
1  2 is b
2  3 is c
How can I achieve this?
I tried the following:
df['foo'] = '%s is %s' % (df['bar'], df['foo'])
but it gives me a wrong result:
>>>print df.ix[0]
bar a
foo 0 a
1 b
2 c
Name: bar is 0 1
1 2
2
Name: 0
Sorry for a dumb question, but this one, pandas: combine two columns in a DataFrame, wasn't helpful for me.

df['bar'] = df.bar.map(str) + " is " + df.foo
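For readers pasting this into a fresh session, a self-contained sketch of that one-liner with the question's sample data:

```python
import pandas as pd

df = pd.DataFrame({'foo': ['a', 'b', 'c'], 'bar': [1, 2, 3]})

# map(str) casts each integer to str; plain Series addition then concatenates.
df['bar'] = df.bar.map(str) + " is " + df.foo
print(df['bar'].tolist())  # ['1 is a', '2 is b', '3 is c']
```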

This question has already been answered, but I believe it would be good to throw some useful methods not previously discussed into the mix, and compare all methods proposed thus far in terms of performance.
Here are some useful solutions to this problem, in increasing order of performance.
DataFrame.agg
This is a simple str.format-based approach.
df['baz'] = df.agg('{0[bar]} is {0[foo]}'.format, axis=1)
df
  foo  bar     baz
0   a    1  1 is a
1   b    2  2 is b
2   c    3  3 is c
You can also use f-string formatting here:
df['baz'] = df.agg(lambda x: f"{x['bar']} is {x['foo']}", axis=1)
df
  foo  bar     baz
0   a    1  1 is a
1   b    2  2 is b
2   c    3  3 is c
char.array-based Concatenation
Convert the columns to concatenate as chararrays, then add them together (this assumes numpy has been imported as np):
a = np.char.array(df['bar'].values)
b = np.char.array(df['foo'].values)
df['baz'] = (a + b' is ' + b).astype(str)
df
  foo  bar     baz
0   a    1  1 is a
1   b    2  2 is b
2   c    3  3 is c
List Comprehension with zip
I cannot overstate how underrated list comprehensions are in pandas.
df['baz'] = [str(x) + ' is ' + y for x, y in zip(df['bar'], df['foo'])]
Alternatively, using str.join to concatenate (which will also scale better):
df['baz'] = [
    ' '.join([str(x), 'is', y]) for x, y in zip(df['bar'], df['foo'])
]
df
  foo  bar     baz
0   a    1  1 is a
1   b    2  2 is b
2   c    3  3 is c
List comprehensions excel in string manipulation, because string operations are inherently hard to vectorize, and most pandas "vectorised" functions are basically wrappers around loops. I have written extensively about this topic in For loops with pandas - When should I care?. In general, if you don't have to worry about index alignment, use a list comprehension when dealing with string and regex operations.
The list comp above by default does not handle NaNs. However, you could always write a function wrapping a try-except if you needed to handle it.
def try_concat(x, y):
    try:
        return str(x) + ' is ' + y
    except (ValueError, TypeError):
        return np.nan

df['baz'] = [try_concat(x, y) for x, y in zip(df['bar'], df['foo'])]
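As a quick sanity check, here is a sketch of that wrapper on plain lists containing a NaN (np.nan is a float, so the string concatenation raises TypeError and the wrapper returns NaN):

```python
import numpy as np

def try_concat(x, y):
    try:
        return str(x) + ' is ' + y
    except (ValueError, TypeError):
        return np.nan

bar = [1, 2, 3]
foo = ['a', np.nan, 'c']  # np.nan is a float, not a str
baz = [try_concat(x, y) for x, y in zip(bar, foo)]
print(baz)  # ['1 is a', nan, '3 is c']
```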
perfplot Performance Measurements
Graph generated using perfplot. Here's the complete code listing.
Functions
def brenbarn(df):
    return df.assign(baz=df.bar.map(str) + " is " + df.foo)

def danielvelkov(df):
    return df.assign(baz=df.apply(
        lambda x: '%s is %s' % (x['bar'], x['foo']), axis=1))

def chrimuelle(df):
    return df.assign(
        baz=df['bar'].astype(str).str.cat(df['foo'].values, sep=' is '))

def vladimiryashin(df):
    return df.assign(baz=df.astype(str).apply(lambda x: ' is '.join(x), axis=1))

def erickfis(df):
    return df.assign(
        baz=df.apply(lambda x: f"{x['bar']} is {x['foo']}", axis=1))

def cs1_format(df):
    return df.assign(baz=df.agg('{0[bar]} is {0[foo]}'.format, axis=1))

def cs1_fstrings(df):
    return df.assign(baz=df.agg(lambda x: f"{x['bar']} is {x['foo']}", axis=1))

def cs2(df):
    a = np.char.array(df['bar'].values)
    b = np.char.array(df['foo'].values)
    return df.assign(baz=(a + b' is ' + b).astype(str))

def cs3(df):
    return df.assign(
        baz=[str(x) + ' is ' + y for x, y in zip(df['bar'], df['foo'])])

The problem in your code is that you want to apply the operation to every row, but the way you've written it takes the whole 'bar' and 'foo' columns, converts them to strings, and gives you back one big string. You can write it like:
df.apply(lambda x:'%s is %s' % (x['bar'],x['foo']),axis=1)
It's longer than the other answer but is more generic (can be used with values that are not strings).
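To illustrate the "more generic" point, a small sketch where one column holds floats rather than strings; the same apply works unchanged:

```python
import pandas as pd

# 'bar' holds floats here, not strings; apply still formats each row.
df = pd.DataFrame({'foo': ['a', 'b'], 'bar': [1.5, 2.5]})
out = df.apply(lambda x: '%s is %s' % (x['bar'], x['foo']), axis=1)
print(out.tolist())  # ['1.5 is a', '2.5 is b']
```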

You could also use
df['bar'] = df['bar'].str.cat(df['foo'].values.astype(str), sep=' is ')

df.astype(str).apply(lambda x: ' is '.join(x), axis=1)

0    1 is a
1    2 is b
2    3 is c
dtype: object

Series.str.cat is the most flexible way to approach this problem:
For df = pd.DataFrame({'foo':['a','b','c'], 'bar':[1, 2, 3]})
>>> df.foo.str.cat(df.bar.astype(str), sep=' is ')
0    a is 1
1    b is 2
2    c is 3
Name: foo, dtype: object
OR
>>> df.bar.astype(str).str.cat(df.foo, sep=' is ')
0    1 is a
1    2 is b
2    3 is c
Name: bar, dtype: object
Unlike .join() (which is for joining lists contained in a single Series), this method is for joining two Series together. It also allows you to ignore or replace NaN values as desired.
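For example, str.cat's na_rep parameter replaces missing values with a placeholder instead of propagating NaN (a minimal sketch with a NaN in the data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'foo': ['a', np.nan, 'c'], 'bar': [1, 2, 3]})

# na_rep substitutes a placeholder for missing values
# instead of dropping or propagating NaN.
out = df.bar.astype(str).str.cat(df.foo, sep=' is ', na_rep='?')
print(out.tolist())  # ['1 is a', '2 is ?', '3 is c']
```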

@DanielVelkov's answer is the proper one, BUT
using f-string literals is faster:
# Daniel's
%timeit df.apply(lambda x:'%s is %s' % (x['bar'],x['foo']),axis=1)
## 963 µs ± 157 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# String literals - python 3
%timeit df.apply(lambda x: f"{x['bar']} is {x['foo']}", axis=1)
## 849 µs ± 4.28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

I think the most concise solution for arbitrary numbers of columns is a short-form version of this answer:
df.astype(str).apply(' is '.join, axis=1)
You can shave off two more characters with df.agg(), but it's slower:
df.astype(str).agg(' is '.join, axis=1)

It's been 10 years, and no one has proposed the simplest and most intuitive way, which is 50% faster than all the examples proposed over those 10 years.
df.bar.astype(str) + ' is ' + df.foo
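A small sketch of this approach; note that plain Series addition propagates NaN, which may or may not be what you want:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'foo': ['a', np.nan, 'c'], 'bar': [1, 2, 3]})

# Plain Series addition: a NaN on either side propagates to that result row.
out = df.bar.astype(str) + ' is ' + df.foo
print(out.tolist())  # ['1 is a', nan, '3 is c']
```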

I encountered a specific case with 10^11 rows in my dataframe, for which none of the proposed solutions was appropriate. I used categories, and this should work well in all cases where the number of unique strings is not too large. This is easily done in R with XxY on factors, but I could not find any other way to do it in Python (I'm new to Python). If anyone knows a place where this is implemented, I'd be glad to know.
import itertools
import pandas as pd

def Create_Interaction_var(df, Varnames):
    '''
    :df data frame
    :Varnames list of 2 column names, say "X" and "Y".
        The two columns should be strings or categories.
    Converts the string columns to categories and adds a column with the
    "interaction of X and Y" (X x Y), named "Interaction-X-Y".
    '''
    df.loc[:, Varnames[0]] = df.loc[:, Varnames[0]].astype("category")
    df.loc[:, Varnames[1]] = df.loc[:, Varnames[1]].astype("category")
    CatVar = "Interaction-" + "-".join(Varnames)
    Var0Levels = pd.DataFrame(enumerate(df.loc[:, Varnames[0]].cat.categories)).rename(columns={0: "code0", 1: "name0"})
    Var1Levels = pd.DataFrame(enumerate(df.loc[:, Varnames[1]].cat.categories)).rename(columns={0: "code1", 1: "name1"})
    NbLevels = len(Var0Levels)
    names = pd.DataFrame(
        list(itertools.product(dict(enumerate(df.loc[:, Varnames[0]].cat.categories)),
                               dict(enumerate(df.loc[:, Varnames[1]].cat.categories)))),
        columns=['code0', 'code1']).merge(Var0Levels, on="code0").merge(Var1Levels, on="code1")
    names = names.assign(Interaction=[str(x) + '_' + y for x, y in zip(names["name0"], names["name1"])])
    names["code01"] = names["code0"] + NbLevels * names["code1"]
    df.loc[:, CatVar] = df.loc[:, Varnames[0]].cat.codes + NbLevels * df.loc[:, Varnames[1]].cat.codes
    df.loc[:, CatVar] = df[[CatVar]].replace(names.set_index("code01")[["Interaction"]].to_dict()['Interaction'])[CatVar]
    df.loc[:, CatVar] = df.loc[:, CatVar].astype("category")
    return df

from pandas import *
x = DataFrame({'foo': ['a', 'b', 'c'], 'bar': [1, 2, 3]})
x['bar'] = x.bar.astype("str") + " " + "is" + " " + x.foo
x = x.drop(['foo'], axis=1)  # assign back, since drop does not modify in place
x

Related

How to add a value in one column to the end of another value in a different column? [duplicate]


Python str() function applied to dataframe column [duplicate]


Python what is the fastest way to join (values) two dataframe columns [duplicate]


Combining dataframe values to a new dataframe

I am combining two dataframe columns from an Excel file into a new column, but the combined values change to decimal numbers. Here is my code:
My dataframe that I wish to combine:
cable_block  pair
          1    10
          1    11
          3   123
          3   222
I insert a new column to have those two combined with a delimiter of /, so here is my code:
df['new_col'] = df[['cable_block', 'pair']].apply(lambda x: '/'.join(x.astype(str)), axis=1)
The result I get is:
cable_block  pair    new_col
          1    10   1.0/10.0
          1    11   1.0/11.0
          3   123  3.0/123.0
          3   222  3.0/222.0
After searching, I found good answers here by Psidom and Skirrebattie. So I tried:
df['new_col'] = df['new_col'].applymap(str)
and
df['new_col'] = df['new_col'].astype(str)
But it doesn't work the way it should. Looking at the code, it should work, and I find it weird that it doesn't.
Is there another workaround?
First, to remove the trailing .0, ensure that the data is int:
df = df.astype(int)
Then you can do:
df['cable_block'].astype(str) + '/' + df['pair'].astype(str)

0     1/10
1     1/11
2    3/123
3    3/222
dtype: object
Another option to ensure a correct formatting could be:
df.apply(lambda x: "%d/%d" % (x['cable_block'], x['pair']), axis=1)

0     1/10
1     1/11
2    3/123
3    3/222
dtype: object
Why not use astype?
df.astype(str).apply('/'.join, 1)

Out[604]:
0     1/10
1     1/11
2    3/123
3    3/222
dtype: object
df['cable_block'].astype(int).astype(str) + '/' + df['pair'].astype(int).astype(str)
The data in your dataframe is probably floats, not ints.
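One way to sketch that fix without a plain int cast (which fails on NaN) is the nullable Int64 dtype, available since pandas 0.24; this assumes the floats really are whole numbers:

```python
import pandas as pd

# Floats as they might come back from Excel.
df = pd.DataFrame({'cable_block': [1.0, 1.0, 3.0, 3.0],
                   'pair': [10.0, 11.0, 123.0, 222.0]})

# The nullable Int64 dtype keeps whole numbers integral even when NaNs
# are present, so the str conversion has no trailing ".0".
df['new_col'] = (df['cable_block'].astype('Int64').astype(str)
                 + '/' + df['pair'].astype('Int64').astype(str))
print(df['new_col'].tolist())  # ['1/10', '1/11', '3/123', '3/222']
```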
You can use a list comprehension and f-strings:
df['new_col'] = [f'{cable_block}/{pair}' for cable_block, pair in df.values]
print(df)

   cable_block  pair new_col
0            1    10    1/10
1            1    11    1/11
2            3   123   3/123
3            3   222   3/222
The approach compares reasonably well versus the alternatives:
df = pd.concat([df]*10000, ignore_index=True)
%timeit df['cable_block'].astype(str) + '/' + df['pair'].astype(str) # 62.8 ms
%timeit [f'{cable_block}/{pair}' for cable_block, pair in df.values] # 85.1 ms
%timeit list(map('/'.join, map(list, df.values.astype(str)))) # 157 ms
%timeit df.astype(str).apply('/'.join,1) # 1.11 s

Pandas str.count

Consider the following dataframe. I want to count the number of '$' characters that appear in each string. I use the str.count function in pandas (http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.count.html).
>>> import pandas as pd
>>> df = pd.DataFrame(['$$a', '$$b', '$c'], columns=['A'])
>>> df['A'].str.count('$')
0    1
1    1
2    1
Name: A, dtype: int64
I was expecting the result to be [2,2,1]. What am I doing wrong?
In Python, the count function in the string module returns the correct result.
>>> a = "$$$$abcd"
>>> a.count('$')
4
>>> a = '$abcd$dsf$'
>>> a.count('$')
3
$ has a special meaning in RegEx - it's end-of-line, so try this:
In [21]: df.A.str.count(r'\$')
Out[21]:
0    2
1    2
2    1
Name: A, dtype: int64
As the other answers have noted, the issue here is that $ denotes the end of the line. If you do not intend to use regular expressions, you may find that using str.count (that is, the method from the built-in type str) is faster than its pandas counterpart:
In [39]: df['A'].apply(lambda x: x.count('$'))
Out[39]:
0    2
1    2
2    1
Name: A, dtype: int64
In [40]: %timeit df['A'].str.count(r'\$')
1000 loops, best of 3: 243 µs per loop
In [41]: %timeit df['A'].apply(lambda x: x.count('$'))
1000 loops, best of 3: 202 µs per loop
Try the pattern [$] so that $ is not treated as end-of-line (see this cheatsheet): if you place it in square brackets [], it is treated as a literal character:
In [3]: df = pd.DataFrame(['$$a', '$$b', '$c'], columns=['A'])
   ...: df['A'].str.count('[$]')
Out[3]:
0    2
1    2
2    1
Name: A, dtype: int64
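Along the same lines, if the literal character isn't known in advance, re.escape builds the escaped pattern for you (a small sketch):

```python
import re
import pandas as pd

df = pd.DataFrame(['$$a', '$$b', '$c'], columns=['A'])

ch = '$'  # any single literal character
counts = df['A'].str.count(re.escape(ch))  # re.escape('$') -> '\\$'
print(counts.tolist())  # [2, 2, 1]
```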
Taking a cue from @fuglede:
pd.Series([x.count('$') for x in df.A.values.tolist()], df.index)
As pointed out by @jezrael, the above fails when there is a null value, so...
def tc(x):
    try:
        return x.count('$')
    except:
        return 0

pd.Series([tc(x) for x in df.A.values.tolist()], df.index)
timings

np.random.seed([3, 1415])
df = pd.Series(np.random.randint(0, 100, 100000)) \
       .apply(lambda x: '$' * x).to_frame('A')
df.A.replace('', np.nan, inplace=True)

def tc(x):
    try:
        return x.count('$')
    except:
        return 0
