Remove duplicate pairs from a list in a column in pandas - python

I would like to remove duplicate pairs from the list in a column while maintaining the order.
For example, the input is:
cola. colb
1. [sitea,siteb,sitea,siteb;sitec,sited,sitec,sited]
The expected output keeps only the unique elements within each ';'-separated group:
cola. colb
1. [sitea,siteb;sitec,sited]
I tried splitting the column on the ';' symbol and then creating a set from the list, but it didn't work:
df['test'] = df.e2etrail.str.split(';').map(lambda x : ','.join(sorted(set(x),key=x.index)))
I also tried the following
df['test'] = df['e2etrail'].apply(lambda x: list(pd.unique(x)))
Any idea on how to make it work?

You can remove the [] with strip, then split by , or ;, and then use your solution:
print (df.e2etrail.str.strip('[]').str.split('[;,]'))
0    [sitea, siteb, sitea, siteb, sitec, sited, sit...
Name: e2etrail, dtype: object
f = lambda x : ','.join(sorted(set(x),key=x.index))
df['test'] = df.e2etrail.str.strip('[]').str.split('[;,]').map(f)
print (df)
   cola.                                           e2etrail  \
0    1.0  [sitea,siteb,sitea,siteb;sitec,sited,sitec,sited]

                       test
0  sitea,siteb,sitec,sited
If you need the output as a list:
f = lambda x : sorted(set(x),key=x.index)
df['test'] = df.e2etrail.str.strip('[]').str.split('[;,]').map(f)
print (df)
   cola.                                           e2etrail  \
0    1.0  [sitea,siteb,sitea,siteb;sitec,sited,sitec,sited]

                           test
0  [sitea, siteb, sitec, sited]

Eventually I did it by converting the list into a Series, dropping the duplicates, and joining the Series back together, as follows:
df['e2etrails'] = df['e2etrails'].str.split(';')
df['e2etrails'] = df['e2etrails'].apply(lambda row: ';'.join(pd.Series(row).str.split(',').map(lambda x: ','.join(sorted(set(x), key=x.index)))))
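For reference, here is a minimal end-to-end sketch of that per-group de-duplication, using a made-up one-row frame shaped like the example above (column name e2etrails as in the snippet):
import pandas as pd

# illustrative data matching the example in the question
df = pd.DataFrame({'e2etrails': ['sitea,siteb,sitea,siteb;sitec,sited,sitec,sited']})

# unique elements in first-appearance order
dedupe = lambda items: ','.join(sorted(set(items), key=items.index))

df['e2etrails'] = df['e2etrails'].apply(
    lambda s: ';'.join(dedupe(group.split(',')) for group in s.split(';'))
)
print(df)
#                  e2etrails
# 0  sitea,siteb;sitec,sited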

Related

How to find and replace substrings at the end of column headers

I have the following columns, among others, in my dataframe: 'dom_pop', 'an_dom_n', 'an_dom_ncmplt'. Equivalent columns exist in multiple dataframes, with the suffix changing. For example, in another dataframe they may be called 'pa_pop', 'an_pa_n', 'an_pa_ncmplt'. I want to append '_kwh' to these cols across all my dataframes.
I wrote the following code:
cols = ['_n$', '_ncmplt', '_pop']  # the $ is added to indicate a string ending in _n
filterfuel = 'kwh'
for c in cols:
    dfdom.columns = [col.replace(f'{c}', f'{c}_{filterfuel}') for col in dfdom.columns]
    dfpa.columns = [col.replace(f'{c}', f'{c}_{filterfuel}') for col in dfpa.columns]
    dfsw.columns = [col.replace(f'{c}', f'{c}_{filterfuel}') for col in dfsw.columns]
kwh gets appended to the _ncmplt and _pop cols, but not the _n column. If I remove the $, _n gets appended, but then _ncmplt ends up looking like 'an_dom_n_kwh_cmplt'.
For dfdom the corrected names should look like 'dom_pop_kwh', 'an_dom_n_kwh', 'an_dom_ncmplt_kwh'.
Why is $ not being recognised as an end-of-string anchor?
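The short answer is that str.replace on a plain Python string matches literally, so '_n$' is never found; an explicit regex substitution treats $ as an anchor. A small sketch of the difference:
import re

col = 'an_dom_n'
print(col.replace('_n$', '_n_kwh'))   # 'an_dom_n'     -> '$' is searched for literally, no match
print(re.sub(r'_n$', '_n_kwh', col))  # 'an_dom_n_kwh' -> '$' anchors the match to the end of the string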
You can use np.where with a regex
cols = ['_n$', '_ncmplt', '_pop']
filterfuel = 'kwh'
pattern = fr"(?:{'|'.join(cols)})"
for df in [dfdom, dfpa, dfsw]:
    df.columns = np.where(df.columns.str.contains(pattern, regex=True),
                          df.columns + f"_{filterfuel}", df.columns)
Output:
>>> pattern
'(?:_n$|_ncmplt|_pop)'
# dfdom = pd.DataFrame([[0]*4], columns=['dom_pop', 'an_dom_n', 'an_dom_ncmplt', 'hello'])
# After:
>>> dfdom
   dom_pop_kwh  an_dom_n_kwh  an_dom_ncmplt_kwh  hello
0            0             0                  0      0
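If you prefer to keep everything in the original replace-style loop, a list comprehension with re.sub also works (a sketch under the same column-name assumptions; the capture group keeps the matched suffix and appends the fuel tag):
import re

pattern = r'(_n$|_ncmplt|_pop)'
for df in [dfdom, dfpa, dfsw]:
    df.columns = [re.sub(pattern, rf'\1_{filterfuel}', col) for col in df.columns]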

How to add a prefix to a string if it ends with a particular character (Pandas) i.e. add '-' to string given it ends with '-'

For a particular column (dtype = object), how can I add '-' to the start of the string, given that it ends with '-'?
i.e. convert 'MAY500-' to '-MAY500-'
(I need to do this for every element in the column.)
Try something like this:
#setup
df = pd.DataFrame({'col':['aaaa','bbbb-','cc-','dddddddd-']})
# mask is a (row condition, column label) pair that can be passed straight to .loc
mask = df.col.str.endswith('-'), 'col'
df.loc[mask] = '-' + df.loc[mask]
Output
df
          col
0        aaaa
1      -bbbb-
2        -cc-
3  -dddddddd-
You can use np.select
Given a dataframe like this:
df
  values
0  abcd-
1  a-bcd
2   efg-
You can use np.select as follows:
df['values'] = np.select([df['values'].str.endswith('-')], ['-' + df['values']], df['values'])
output:
df
   values
0  -abcd-
1   a-bcd
2   -efg-
def add_prefix(text):
    # If text is null or an empty string, the -1 index would raise an IndexError
    if text and text[-1] == "-":
        return "-" + text
    return text

df = pd.DataFrame(data={'A': ["MAY500", "MAY500-", "", None, np.nan]})
# Change the column to string dtype first
df['A'] = df['A'].astype(str)
df['A'] = df['A'].apply(add_prefix)
0      MAY500
1    -MAY500-
2
3        None
4         nan
Name: A, dtype: object
I tend to use apply with lambda functions a lot; it makes the code much easier to read.
df['value'] = df['value'].apply(lambda x: '-'+str(x) if str(x).endswith('-') else x)
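For a vectorised variant without apply, Series.mask can do the same conditional prefixing (a sketch with made-up sample data):
import pandas as pd

df = pd.DataFrame({'value': ['MAY500', 'MAY500-', 'ABC-']})  # illustrative data
ends_with_dash = df['value'].str.endswith('-')
# where the condition is True, replace the value with '-' + value
df['value'] = df['value'].mask(ends_with_dash, '-' + df['value'])
print(df)
#       value
# 0    MAY500
# 1  -MAY500-
# 2     -ABC-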

Eliminate duplicates for a column value in a Dataframe - the column holds multiple URLs

So I have a column called "URL" in my DataFrame Pd1:
URL
row 1 : url1,url1,url2
row 2 : url2,url2,url3
The expected output:
URL
row 1 : url1,url2
row 2 : url2,url3
I assume that your column contains only the URL list.
One possible solution is to apply a function to the URL column that performs the following steps:
- split the source string on each comma (the result is a list of fragments),
- create a set from this list (thus eliminating repetitions),
- join the keys from this set with a comma,
- save the result back into the source column.
Something like:
df.URL = df.URL.apply(lambda x: ','.join(set(re.split(',', x))))
As this code uses the re module, you have to import re first.
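A self-contained version of that approach with made-up data (note that a plain set does not preserve the original order of the URLs):
import re
import pandas as pd

df = pd.DataFrame({'URL': ['url1,url1,url2', 'url2,url2,url3']})
df.URL = df.URL.apply(lambda x: ','.join(set(re.split(',', x))))
print(df)
# the ordering inside each cell may vary, because sets are unordered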
Split and apply set:
d = {"url": ["url1,url1,url2",
"url2,url2,url3"]}
df = pd.DataFrame(d)
df.url.str.split(",").apply(set)
df['URL'] = df.URL.str.split(':').apply(lambda x: [x[0],','.join(sorted(set(x[1].split(','))))]).apply(' : '.join)
                 URL
0  row 1 : url1,url2
1  row 2 : url2,url3
If the data is:
              URL
0  url1,url1,url2
1  url2,url2,url3
then:
df['URL'] = df.URL.str.split(',').apply(lambda x: ','.join(sorted(set(x))))
# print(df)
         URL
0  url1,url2
1  url2,url3
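If the original first-appearance order matters rather than alphabetical order, dict.fromkeys keeps it (a small sketch on the same data):
df['URL'] = df.URL.str.split(',').apply(lambda x: ','.join(dict.fromkeys(x)))
# print(df)
#          URL
# 0  url1,url2
# 1  url2,url3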

Replace string in pandas dataframe if it contains specific substring

I have a dataframe generated from a .csv (I use Python 3.5). The df['category'] column contains only strings. What I want is to check this column and, if a string contains a specific substring (I'm not really interested in where it is in the string, as long as it exists), replace it. I am using this script:
import pandas as pd
df=pd.read_csv('lastfile.csv')
df.dropna(inplace=True)
g='Drugs'
z='Weapons'
c='Flowers'
df.category = df.category.str.lower().apply(lambda x: g if ('mdma' or 'xanax' or 'kamagra' or 'weed' or 'tabs' or 'lsd' or 'heroin' or 'morphine' or 'hci' or 'cap' or 'mda' or 'hash' or 'kush' or 'wax'or 'klonop'or\
'dextro'or'zepam'or'amphetamine'or'ketamine'or 'speed' or 'xtc' or 'XTC' or 'SPEED' or 'crystal' or 'meth' or 'marijuana' or 'powder' or 'afghan'or'cocaine'or'haze'or'pollen'or\
'sativa'or'indica'or'valium'or'diazepam'or'tablet'or'codeine'or \
'mg' or 'dmt'or'diclazepam'or'zepam'or 'heroin' ) in x else(z if ('weapon'or'milit'or'gun'or'grenades'or'submachine'or'rifle'or'ak47')in x else c) )
print(df['category'])
My problem is that some records, though they contain some of the substrings I defined, do not get replaced. Is it a regex-related problem?
Thank you in advance.
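It is not a regex problem; the chained or expressions are. In Python, 'mdma' or 'xanax' or ... evaluates to the first truthy operand, so the whole parenthesised expression collapses to a single string and only that one substring is ever checked. A tiny illustration:
print('mdma' or 'xanax' or 'kamagra')  # -> 'mdma'
# the condition therefore behaves like:  if 'mdma' in x
# rows that only contain the other substrings are never matched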
Create a dictionary of substring lists keyed by the replacement strings, loop over it and join each list's values with | for regex OR; then you can check the column with contains and set the matched rows with loc:
df = pd.DataFrame({'category':['sss mdma df','milit ss aa','aa ss']})
a = ['mdma', 'xanax' , 'kamagra']
b = ['weapon','milit','gun']
g='Drugs'
z='Weapons'
c='Flowers'
d = {g:a, z:b}
df['new_category'] = c
for k, v in d.items():
    pat = '|'.join(v)
    mask = df.category.str.contains(pat, case=False)
    df.loc[mask, 'new_category'] = k

print (df)
      category new_category
0  sss mdma df        Drugs
1  milit ss aa      Weapons
2        aa ss      Flowers
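An equivalent one-liner with np.select, reusing the same dictionary d and default c from above (a sketch, not from the original answer):
import numpy as np

conditions = [df.category.str.contains('|'.join(v), case=False) for v in d.values()]
df['new_category'] = np.select(conditions, list(d.keys()), default=c)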

create names of dataframes in a loop

I need to give names to previously defined dataframes.
I have a list of dataframes :
liste_verif = ( dffreesurfer,total,qcschizo)
And I would like to give them a name by doing something like:
for h in liste_verif:
    h.name = str(h)
Would that be possible ?
When I test this code, it doesn't work: instead of considering h as a dataframe, Python considers each column of my dataframe.
I would like the names of my dataframes to be 'dffreesurfer', 'total', etc.
You can use a dict comprehension and map the DataFrames to the names in list L:
dffreesurfer = pd.DataFrame({'col1': [7,8]})
total = pd.DataFrame({'col2': [1,5]})
qcschizo = pd.DataFrame({'col2': [8,9]})
liste_verif = (dffreesurfer,total,qcschizo)
L = ['dffreesurfer','total','qcschizo']
dfs = {L[i]:x for i,x in enumerate(liste_verif)}
print (dfs['dffreesurfer'])
   col1
0     7
1     8
print (dfs['total'])
   col2
0     1
1     5
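With the dict in place you can then iterate over the name/DataFrame pairs explicitly, for example:
for name, frame in dfs.items():
    print(name, frame.shape)
# dffreesurfer (2, 1)
# total (2, 1)
# qcschizo (2, 1)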
