Pandas - Modify string values in each cell - python

I have a pandas DataFrame and I need to modify all values in a given string column. Each cell in the column contains a string of the same length. The user provides the slice they want replaced in each value,
for example [1:3], and the replacement value "AAA".
This would replace the characters from position 1 to 3 of every string with the value AAA.
How can I use the applymap(), map() or apply() function to get this done?
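For reference, a minimal apply-based sketch of the requested operation (my own example data, using the same [1:3] slice convention as the accepted answer below):
import pandas as pd
# hypothetical data; 'B' stands in for the user's string column
df = pd.DataFrame({'B': ['abcdefg', 'bbbbbbb']})
df['B'] = df['B'].apply(lambda s: s[:1] + 'AAA' + s[3:])  # -> ['aAAAdefg', 'bAAAbbbb']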
SOLUTION: Here is the final solution I went with, based on the answer marked below:
import pandas as pd

df = pd.DataFrame({'A': ['ffgghh', 'ffrtss', 'ffrtds'],
                   # 'B': ['ffrtss', 'ssgghh', 'd'],
                   'C': ['qqttss', ' 44', 'f']})
print(df)

old = ['g', 'r', 'z']
new = ['y', 'b', 'c']
vals = dict(zip(old, new))
pos = 2
for old, new in vals.items():
    # wherever the character at pos matches, replace the slice starting at pos
    df.loc[df['A'].str[pos] == old, 'A'] = df['A'].str.slice_replace(pos, pos + len(new), new)
print(df)

Use str.slice_replace:
df['B'] = df['B'].str.slice_replace(1, 3, 'AAA')
Sample Input:
   A         B
0  w   abcdefg
1  x   bbbbbbb
2  y   ccccccc
3  z  zzzzzzzz
Sample Output:
   A          B
0  w   aAAAdefg
1  x   bAAAbbbb
2  y   cAAAcccc
3  z  zAAAzzzzz
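Note that str.slice_replace degrades gracefully on strings shorter than the slice; it simply appends the replacement. A quick sketch (my own example):
import pandas as pd
s = pd.Series(['ab', 'f'])
s.str.slice_replace(1, 3, 'AAA')
# 0    aAAA
# 1    fAAA
# dtype: object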

IMO the most straightforward solution:
In [7]: df
Out[7]:
        col
0   abcdefg
1   bbbbbbb
2   ccccccc
3  zzzzzzzz

In [9]: df.col = df.col.str[:1] + 'AAA' + df.col.str[4:]

In [10]: df
Out[10]:
        col
0   aAAAefg
1   bAAAbbb
2   cAAAccc
3  zAAAzzzz
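If the positions come from user input, the same idea parameterizes naturally (a sketch; start, stop and repl are hypothetical names of my own):
start, stop, repl = 1, 4, 'AAA'
df.col = df.col.str[:start] + repl + df.col.str[stop:]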

Related

How to give duplicated columns distinct names in Pandas [duplicate]

I have several columns with the same name in a df. I need to rename them, but the problem is that the df.rename method renames them all the same way. How can I rename the blah(s) below to blah1, blah4, blah5?
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(2 * 5).reshape(2, 5))
df.columns = ['blah', 'blah2', 'blah3', 'blah', 'blah']
df
#    blah  blah2  blah3  blah  blah
# 0     0      1      2     3     4
# 1     5      6      7     8     9
Here is what happens when using the df.rename method:
df.rename(columns={'blah': 'blah1'})
#    blah1  blah2  blah3  blah1  blah1
# 0      0      1      2      3      4
# 1      5      6      7      8      9
Starting with pandas 0.19.0, pd.read_csv() has improved support for duplicate column names, so we can try to use the internal method:
In [137]: pd.io.parsers.ParserBase({'names':df.columns})._maybe_dedup_names(df.columns)
Out[137]: ['blah', 'blah2', 'blah3', 'blah.1', 'blah.2']
Since Pandas 1.3.0:
pd.io.parsers.base_parser.ParserBase({'names':df.columns, 'usecols':None})._maybe_dedup_names(df.columns)
This is the "magic" function:
def _maybe_dedup_names(self, names):
    # see gh-7160 and gh-9424: this helps to provide
    # immediate alleviation of the duplicate names
    # issue and appears to be satisfactory to users,
    # but ultimately, not needing to butcher the names
    # would be nice!
    if self.mangle_dupe_cols:
        names = list(names)  # so we can index
        counts = {}
        for i, col in enumerate(names):
            cur_count = counts.get(col, 0)
            if cur_count > 0:
                names[i] = '%s.%d' % (col, cur_count)
            counts[col] = cur_count + 1
    return names
I was looking for a solution within pandas more than a general Python solution.
Column's get_loc() function returns a boolean mask if it finds duplicates, with True values pointing to the locations where duplicates are found. I then use the mask to assign new values into those locations. In my case, I know ahead of time how many dups I'm going to get and what I'm going to assign to them, but it looks like df.columns.get_duplicates() would return a list of all dups, and you can then use that list in conjunction with get_loc() if you need a more generic dup-weeding action.
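A quick illustration of that mask behaviour (my own example, not from the original answer):
import pandas as pd
idx = pd.Index(['blah', 'blah2', 'blah3', 'blah', 'blah'])
idx.get_loc('blah')
# array([ True, False, False,  True,  True])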
'''UPDATED AS-OF SEPT 2020'''
cols = pd.Series(df.columns)
for dup in df.columns[df.columns.duplicated(keep=False)]:
    cols[df.columns.get_loc(dup)] = ([dup + '.' + str(d_idx)
                                      if d_idx != 0
                                      else dup
                                      for d_idx in range(df.columns.get_loc(dup).sum())])
df.columns = cols

   blah  blah2  blah3  blah.1  blah.2
0     0      1      2       3       4
1     5      6      7       8       9
New Better Method (Update 03Dec2019)
The code below is better than the code above. Copied from another answer below (#SatishSK):
# sample df with duplicated blah columns
df = pd.DataFrame(np.arange(2 * 5).reshape(2, 5))
df.columns = ['blah', 'blah2', 'blah3', 'blah', 'blah']
df

# you just need the following 4 lines to rename duplicates
# df is the dataframe whose duplicated columns you want to rename
cols = pd.Series(df.columns)
for dup in cols[cols.duplicated()].unique():
    cols[cols[cols == dup].index.values.tolist()] = [dup + '.' + str(i) if i != 0 else dup
                                                     for i in range(sum(cols == dup))]
# rename the columns with the cols list
df.columns = cols
df
Output:
   blah  blah2  blah3  blah.1  blah.2
0     0      1      2       3       4
1     5      6      7       8       9
You could use this:
def df_column_uniquify(df):
    df_columns = df.columns
    new_columns = []
    for item in df_columns:
        counter = 0
        newitem = item
        while newitem in new_columns:
            counter += 1
            newitem = "{}_{}".format(item, counter)
        new_columns.append(newitem)
    df.columns = new_columns
    return df
Then
import numpy as np
import pandas as pd
df = pd.DataFrame(np.arange(2 * 5).reshape(2, 5))
df.columns = ['blah', 'blah2', 'blah3', 'blah', 'blah']
so that df:
   blah  blah2  blah3  blah  blah
0     0      1      2     3     4
1     5      6      7     8     9
then
df = df_column_uniquify(df)
so that df:
   blah  blah2  blah3  blah_1  blah_2
0     0      1      2       3       4
1     5      6      7       8       9
You could assign directly to the columns:
In [12]:
df.columns = ['blah','blah2','blah3','blah4','blah5']
df
Out[12]:
   blah  blah2  blah3  blah4  blah5
0     0      1      2      3      4
1     5      6      7      8      9

[2 rows x 5 columns]
If you just want to rename the duplicate columns dynamically, you could do something like the following (code taken from answer 2: Index of duplicates items in a python list):
In [25]:
import collections

dups = collections.defaultdict(list)
dup_indices = []
col_list = list(df.columns)
for i, e in enumerate(list(df.columns)):
    dups[e].append(i)
for k, v in sorted(dups.items()):
    if len(v) >= 2:
        dup_indices = v
        for i in dup_indices:
            col_list[i] = col_list[i] + ' ' + str(i)
col_list
Out[25]:
['blah 0', 'blah2', 'blah3', 'blah 3', 'blah 4']
You could then use this to assign back. You could also have a function to generate a unique name that is not present in the columns prior to renaming, as sketched below.
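Such a helper might look like this (a hypothetical sketch of my own, not from the original answer):
def unique_name(base, existing):
    # append an increasing counter until the name is not taken
    name, counter = base, 0
    while name in existing:
        counter += 1
        name = '{}_{}'.format(base, counter)
    return name

unique_name('blah', ['blah', 'blah_1'])  # -> 'blah_2'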
duplicated_idx = dataset.columns.duplicated()
duplicated = dataset.columns[duplicated_idx].unique()

rename_cols = []
counts = {}  # per-name counter so every occurrence gets its own suffix
for col in dataset.columns:
    if col in duplicated:
        counts[col] = counts.get(col, 0) + 1
        rename_cols.append(col + '_' + str(counts[col]))
    else:
        rename_cols.append(col)
dataset.columns = rename_cols
Thank you #Lamakaha for the solution. Your idea gave me a chance to modify it and make it workable in all cases.
I am using Python 3.7.3.
I tried your piece of code on my data set, which had only one duplicated column, i.e. two columns with the same name. Unfortunately, the column names remained as-is without being renamed. On top of that I got a warning that "get_duplicates() is deprecated and will be removed in a future version". I used duplicated() coupled with unique() in place of get_duplicates(), which did not yield the expected result.
I have modified your piece of code a little bit, and it now works for my data set as well as in other general cases.
Here are the code runs with and without the modification on the example data set mentioned in the question, along with results:
df = pd.DataFrame(np.arange(2 * 5).reshape(2, 5))
df.columns = ['blah', 'blah2', 'blah3', 'blah', 'blah']
df

cols = pd.Series(df.columns)
for dup in df.columns.get_duplicates():
    cols[df.columns.get_loc(dup)] = [dup + '.' + str(d_idx) if d_idx != 0 else dup
                                     for d_idx in range(df.columns.get_loc(dup).sum())]
df.columns = cols
df
f:\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: FutureWarning:
'get_duplicates' is deprecated and will be removed in a future
release. You can use idx[idx.duplicated()].unique() instead
Output:
   blah  blah2  blah3  blah  blah.1
0     0      1      2     3       4
1     5      6      7     8       9
Two of the three "blah"(s) are not renamed properly.
Modified code
df = pd.DataFrame(np.arange(2 * 5).reshape(2, 5))
df.columns = ['blah', 'blah2', 'blah3', 'blah', 'blah']
df

cols = pd.Series(df.columns)
for dup in cols[cols.duplicated()].unique():
    cols[cols[cols == dup].index.values.tolist()] = [dup + '.' + str(i) if i != 0 else dup
                                                     for i in range(sum(cols == dup))]
df.columns = cols
df
Output:
   blah  blah2  blah3  blah.1  blah.2
0     0      1      2       3       4
1     5      6      7       8       9
Here is a run of the modified code on another example:
cols = pd.Series(['X', 'Y', 'Z', 'A', 'B', 'C', 'A', 'A', 'L', 'M', 'A', 'Y', 'M'])
for dup in cols[cols.duplicated()].unique():
    cols[cols[cols == dup].index.values.tolist()] = [dup + '_' + str(i) if i != 0 else dup
                                                     for i in range(sum(cols == dup))]
cols
Output:
0       X
1       Y
2       Z
3       A
4       B
5       C
6     A_1
7     A_2
8       L
9       M
10    A_3
11    Y_1
12    M_1
dtype: object
Hope this helps anybody who is seeking an answer to the aforementioned question.
Since the accepted answer (by Lamakaha) does not work with recent versions of pandas, and because the other suggestions looked a bit clumsy, I worked out my own solution:
def dedupIndex(idx, fmt=None, ignoreFirst=True):
    # fmt: A string format that receives two arguments:
    #      name and a counter. By default: fmt='%s.%03d'
    # ignoreFirst: Disable/enable postfixing of first element.
    idx = pd.Series(idx)
    duplicates = idx[idx.duplicated()].unique()
    fmt = '%s.%03d' if fmt is None else fmt
    for name in duplicates:
        dups = idx == name
        ret = [fmt % (name, i) if (i != 0 or not ignoreFirst) else name
               for i in range(dups.sum())]
        idx.loc[dups] = ret
    return pd.Index(idx)
Use the function as follows:
df.columns = dedupIndex(df.columns)
# Result: ['blah', 'blah2', 'blah3', 'blah.001', 'blah.002']
df.columns = dedupIndex(df.columns, fmt='%s #%d', ignoreFirst=False)
# Result: ['blah #0', 'blah2', 'blah3', 'blah #1', 'blah #2']
Here's a solution that also works for multi-indexes
# Take a df and rename duplicate columns by appending number suffixes
def rename_duplicates(df):
    import copy
    new_columns = list(df.columns.values)  # a list, so the entries are mutable
    suffix = {key: 2 for key in set(new_columns)}
    dup = pd.Series(new_columns).duplicated()
    if type(df.columns) == pd.core.indexes.multi.MultiIndex:
        # Need to be mutable, make them lists instead of tuples
        for i in range(len(new_columns)):
            new_columns[i] = list(new_columns[i])
        for ix, item in enumerate(new_columns):
            item_orig = copy.copy(item)
            if dup[ix]:
                for level in range(len(new_columns[ix])):
                    new_columns[ix][level] = new_columns[ix][level] + f"_{suffix[tuple(item_orig)]}"
                suffix[tuple(item_orig)] += 1
        for i in range(len(new_columns)):
            new_columns[i] = tuple(new_columns[i])
        df.columns = pd.MultiIndex.from_tuples(new_columns)
    # Not a MultiIndex
    else:
        for ix, item in enumerate(new_columns):
            if dup[ix]:
                new_columns[ix] = item + f"_{suffix[item]}"
                suffix[item] += 1
        df.columns = new_columns
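A quick demo on a small MultiIndex frame (my own example data, assuming the function above):
import pandas as pd
df = pd.DataFrame([[1, 2, 3]],
                  columns=pd.MultiIndex.from_tuples([('x', 'a'), ('x', 'a'), ('y', 'b')]))
rename_duplicates(df)
df.columns.tolist()  # -> [('x', 'a'), ('x_2', 'a_2'), ('y', 'b')]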
I just wrote this code; it uses a list comprehension to update all duplicated names.
df.columns = [x[1] if x[1] not in df.columns[:x[0]] else f"{x[1]}_{list(df.columns[:x[0]]).count(x[1])}" for x in enumerate(df.columns)]
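Applied to the question's frame, this produces (my own trace of the comprehension):
# before: ['blah', 'blah2', 'blah3', 'blah', 'blah']
# after:  ['blah', 'blah2', 'blah3', 'blah_1', 'blah_2']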
Created a function with some tests, so it should be drop-in ready; this is a little different from Lamakaha's excellent solution since it renames the first appearance of a duplicate column as well:
from collections import defaultdict
from typing import Dict, List, Set

import pandas as pd


def rename_duplicate_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Rename column headers to ensure no header names are duplicated.

    Args:
        df (pd.DataFrame): A dataframe with a single index of columns

    Returns:
        pd.DataFrame: The dataframe with headers renamed; inplace
    """
    if not df.columns.has_duplicates:
        return df
    duplicates: Set[str] = set(df.columns[df.columns.duplicated()].tolist())
    indexes: Dict[str, int] = defaultdict(lambda: 0)
    new_cols: List[str] = []
    for col in df.columns:
        if col in duplicates:
            indexes[col] += 1
            new_cols.append(f"{col}.{indexes[col]}")
        else:
            new_cols.append(col)
    df.columns = new_cols
    return df
def test_rename_duplicate_columns():
    df = pd.DataFrame(data=[[1, 2]], columns=["a", "b"])
    assert rename_duplicate_columns(df).columns.tolist() == ["a", "b"]

    df = pd.DataFrame(data=[[1, 2]], columns=["a", "a"])
    assert rename_duplicate_columns(df).columns.tolist() == ["a.1", "a.2"]

    df = pd.DataFrame(data=[[1, 2, 3]], columns=["a", "b", "a"])
    assert rename_duplicate_columns(df).columns.tolist() == ["a.1", "b", "a.2"]
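Applied to the question's frame, it renames every occurrence, first included (my own trace):
df = pd.DataFrame(data=[[0, 1, 2, 3, 4]],
                  columns=['blah', 'blah2', 'blah3', 'blah', 'blah'])
rename_duplicate_columns(df)
df.columns.tolist()  # -> ['blah.1', 'blah2', 'blah3', 'blah.2', 'blah.3']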
We can just assign each column a different name.
Suppose the duplicated column names are ['a', 'b', 'c', 'd', 'd', 'c'].
Then just create a list of the names you want to assign:
cols = ['a', 'b', 'c', 'd', 'd1', 'c1']
df.columns = cols
This works for me.
This is my solution:
cols = []  # for tracking whether we have already seen a name
new_cols = []
for col in df.columns:
    cols.append(col)
    count = cols.count(col)
    if count > 1:
        new_cols.append(f'{col}_{count}')
    else:
        new_cols.append(col)
df.columns = new_cols
Here's an elegant solution:
Isolate a dataframe with only the repeated columns (it looks like it will be a Series, but it will be a dataframe if there is more than one column with that name):
df1 = df['blah']
For each "blah" column, give it a unique number
df1.columns = ['blah_' + str(int(x)) for x in range(len(df1.columns))]
Isolate a dataframe with all but the repeated columns:
df2 = df[[x for x in df.columns if x != 'blah']]
Merge back together on indices:
df3 = pd.merge(df1, df2, left_index=True, right_index=True)
Et voila:
   blah_0  blah_1  blah_2  blah2  blah3
0       0       3       4      1      2
1       5       8       9      6      7

Pandas: setting an element of a new column as a list (iterable) raises "ValueError: setting an array element with a sequence"

I want to create a new column in a pandas dataframe and, at the same time, set its first value to a list.
I want to transform this dataframe
df = pd.DataFrame.from_dict({'a':[1,2],'b':[3,4]})
   a  b
0  1  3
1  2  4
into this one
   a  b       c
0  1  3  [2, 3]
1  2  4     NaN
I tried :
df.loc[0, 'c'] = [2,3]
df.loc[0, 'c'] = np.array([2,3])
df.loc[0, 'c'] = [[2,3]]
df.at[0,'c'] = [2,3]
df.at[0,'d'] = [[2,3]]
None of these work.
How should I proceed?
If the first element of a series is a list, then the series must be of type object (not the most efficient for numerical computations). This should work, however:
df = df.assign(c=None)
df.loc[0, 'c'] = [2, 3]
>>> df
   a  b       c
0  1  3  [2, 3]
1  2  4    None
If you really need the remaining values of column c to be NaNs instead of None, use this:
df.loc[1:, 'c'] = np.nan
The problem seems to have something to do with the dtype of the c column. If you convert it to object, you can use iat, loc, or set_value to set a cell to a list.
df2 = (
    df.assign(c=np.nan)
      .assign(c=lambda x: x.c.astype(object))
)
df2.set_value(0, 'c', [2, 3])
Out[86]:
   a  b       c
0  1  3  [2, 3]
1  2  4     NaN
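Note that set_value was deprecated in pandas 0.21 and removed in 1.0; on recent versions the same idea works with .at (a sketch):
import numpy as np
df2 = df.copy()
df2['c'] = np.nan
df2['c'] = df2['c'].astype(object)  # object dtype so a single cell can hold a list
df2.at[0, 'c'] = [2, 3]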

Slicing a DataFrameGroupBy object

Is there a way to slice a DataFrameGroupBy object?
For example, if I have:
df = pd.DataFrame({'A': [2, 1, 1, 3, 3], 'B': ['x', 'y', 'z', 'r', 'p']})
   A  B
0  2  x
1  1  y
2  1  z
3  3  r
4  3  p
dfg = df.groupby('A')
Now, the returned GroupBy object is indexed by values from A, and I would like to select a subset of it, e.g. to perform aggregation. It could be something like
dfg.loc[1:2].agg(...)
or, for a specific column,
dfg['B'].loc[1:2].agg(...)
EDIT: To make it clearer: by slicing the GroupBy object I mean accessing only a subset of groups. In the above example, the GroupBy object contains 3 groups, for A = 1, A = 2, and A = 3. For some reason, I may only be interested in the groups for A = 1 and A = 2.
It seems you need a custom function with iloc; note that if you use agg, the function must return an aggregated value:
df = df.groupby('A')['B'].agg(lambda x: ','.join(x.iloc[0:3]))
print (df)
A
1    y,z
2      x
3    r,p
Name: B, dtype: object
df = df.groupby('A')['B'].agg(lambda x: ','.join(x.iloc[1:3]))
print (df)
A
1    z
2
3    p
Name: B, dtype: object
For multiple columns:
df = pd.DataFrame({'A': [2, 1, 1, 3, 3],
                   'B': ['x', 'y', 'z', 'r', 'p'],
                   'C': ['g', 'y', 'y', 'u', 'k']})
print (df)
   A  B  C
0  2  x  g
1  1  y  y
2  1  z  y
3  3  r  u
4  3  p  k
df = df.groupby('A').agg(lambda x: ','.join(x.iloc[1:3]))
print (df)
   B  C
A
1  z  y
2
3  p  k
If I understand correctly, you only want some of the groups, but those groups should be returned completely:
   A  B
1  1  y
2  1  z
0  2  x
You can solve your problem by extracting the keys and then selecting groups based on those keys.
Assuming you already know the groups:
pd.concat([dfg.get_group(1),dfg.get_group(2)])
If you don't know the group names and just want the first n groups, this might work:
pd.concat([dfg.get_group(n) for n in list(dict(list(dfg)).keys())[:2]])
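The dict(list(dfg)) detour can be avoided, since a GroupBy exposes its group keys directly through .groups; an equivalent sketch:
pd.concat([dfg.get_group(n) for n in list(dfg.groups)[:2]])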
The output in both cases is a normal DataFrame, not a DataFrameGroupBy object, so it might be smarter to first filter your DataFrame and only aggregate afterwards:
df[df['A'].isin([1,2])].groupby('A')
The same for unknown groups:
df[df['A'].isin(list(set(df['A']))[:2])].groupby('A')
I believe there are some Stack Overflow answers referring to this, like How to access pandas groupby dataframe by key

What is the dataset return from dataframe.stack()

I am trying to work on a dataframe on which I have used the .stack() function:
df = pd.read_csv('test.csv', usecols =['firstround','secondround','thirdround','fourthround','fifthround'])
sortedArray = df.stack().value_counts()
sortedArray = sortedArray.sort_index()
I need to retrieve the first index column's values and the second index column's values from sortedArray, meaning I need the x and y values from the sorted array.
Any idea how I can do it?
I think you need Series.iloc, because the output from stack is a Series:
x = sortedArray.iloc[0]
y = sortedArray.iloc[1]
Sample:
df = pd.DataFrame({'A': ['a', 'a', 's'],
                   'B': ['a', 's', 'a'],
                   'C': ['s', 'd', 'a']})
print (df)
   A  B  C
0  a  a  s
1  a  s  d
2  s  a  a

sortedArray = df.stack().value_counts()
print (sortedArray)
a    5
s    3
d    1
dtype: int64

sortedArray = sortedArray.sort_index()
print (sortedArray)
a    5
d    1
s    3
dtype: int64
x = sortedArray.iloc[0]
y = sortedArray.iloc[1]
print (x)
5
print (y)
1
print (sortedArray.tolist())
[5, 1, 3]
print (sortedArray.index.tolist())
['a', 'd', 's']

pandas if then statement without looping

All I'm trying to do is: add columns data1 and data2 if, in the same row, letters is 'a'; subtract if it is 'c'; multiply if it is 'b'. Here is my code.
import pandas as pd

a = [['Date', 'letters', 'data1', 'data2'],
     ['1/2/2014', 'a', 6, 1],
     ['1/2/2014', 'a', 3, 1],
     ['1/3/2014', 'c', 1, 3],
     ['1/3/2014', 'b', 3, 5]]
df = pd.DataFrame.from_records(a[1:], columns=a[0])
df['result'] = df['data1']
for i in range(0, len(df)):
    if df['letters'][i] == 'a':
        df['result'][i] = df['data1'][i] + df['data2'][i]
    if df['letters'][i] == 'b':
        df['result'][i] = df['data1'][i] * df['data2'][i]
    if df['letters'][i] == 'c':
        df['result'][i] = df['data1'][i] - df['data2'][i]
>>> df
       Date letters  data1  data2  result
0  1/2/2014       a      6      1       7
1  1/2/2014       a      3      1       4
2  1/3/2014       c      1      3      -2
3  1/3/2014       b      3      5      15
My question: is there a way to do it in one line, without looping? Something in the spirit of (pseudocode, not valid Python):
df['result'] = df['result'].map(lambda x: df['data1'][i] + df['data2'][i] if x == 'a'
                                else df['data1'][i] - df['data2'][i] if x == 'c' else x)
You can use df.apply in combination with a lambda function. You have to use the keyword argument axis=1 to ensure you work on rows as opposed to the columns.
import pandas as pd
from operator import add, sub, mul

a = [['Date', 'letters', 'data1', 'data2'],
     ['1/2/2014', 'a', 6, 1],
     ['1/2/2014', 'a', 3, 1],
     ['1/3/2014', 'c', 1, 3]]
df = pd.DataFrame.from_records(a[1:], columns=a[0])

d = dict(a=add, b=mul, c=sub)
df['result'] = df.apply(lambda r: d[r['letters']](r['data1'], r['data2']), axis=1)
This will use the dictionary d to get the function you wish to use (add, sub, or mul).
Original solution below
df['result'] = df.apply(lambda r: r['data1'] + r['data2'] if r['letters'] == 'a'
                        else r['data1'] - r['data2'] if r['letters'] == 'c'
                        else r['data1'] * r['data2'], axis=1)
print(df)
       Date letters  data1  data2  result
0  1/2/2014       a      6      1       7
1  1/2/2014       a      3      1       4
2  1/3/2014       c      1      3      -2
The lambda function is a bit complex now, so I'll go into it in a bit more detail.
The lambda function uses a so-called ternary operator to express boolean conditions in one line; a typical ternary expression is of the form
a if b else c
Unfortunately you can't have an elif in a ternary expression, but what you can do is place another one inside the else branch, so it becomes
a if b else c if d else e
You can use the .where method:
where(cond, other=nan, inplace=False, axis=None, level=None, try_cast=False, raise_on_error=True) method of pandas.core.series.Series instance
Return an object of same shape as self and whose corresponding entries are from self where cond is True and otherwise are from other.
as in:
>>> df['data1'] + df['data2'].where(df['letters'] == 'a', -df['data2'])
0    7
1    4
2   -2
dtype: int64
alternatively, numpy.where:
>>> df['data1'] + np.where(df['letters'] == 'a', 1, -1) * df['data2']
0    7
1    4
2   -2
dtype: int64
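For the full three-operator case from the question (add for 'a', subtract for 'c', multiply for 'b'), numpy.select keeps things vectorized; a sketch of my own, not from the original answers:
import numpy as np
conditions = [df['letters'] == 'a', df['letters'] == 'c', df['letters'] == 'b']
choices = [df['data1'] + df['data2'],
           df['data1'] - df['data2'],
           df['data1'] * df['data2']]
df['result'] = np.select(conditions, choices)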
