Convert a string column to numbers in a DataFrame - Python

I'm trying to convert a column in my DataFrame to numbers. The input is email domains extracted from email addresses. Sample:
>>> data['emailDomain']
0    [gmail]
1    [gmail]
2    [gmail]
3      [aol]
4    [yahoo]
5    [yahoo]
I want to create a new column where, if the domain is gmail or aol, the entry is 1, and 0 otherwise.
I created a method which goes like this:
def convertToNumber(row):
    try:
        if row['emailDomain'] == '[gmail]':
            return 1
        elif row['emailDomain'] == '[aol]':
            return 1
        elif row['emailDomain'] == '[outlook]':
            return 1
        elif row['emailDomain'] == '[hotmail]':
            return 1
        elif row['emailDomain'] == '[yahoo]':
            return 1
        else:
            return 0
    except TypeError:
        print("TypeError")
and used it like:
data['validEmailDomain'] = data.apply(convertToNumber, axis=1)
However, my output column is all 0s, even though I know there are gmail and aol addresses present in the input column.
Any idea what could be going wrong?
Also, I think this usage of conditional statements might not be the most efficient way to tackle this problem. Is there any other approach to getting this done?

You can use Series.isin:
providers = {'gmail', 'aol', 'yahoo', 'hotmail', 'outlook'}
data['emailDomain'].isin(providers)
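A minimal runnable sketch of this, assuming the column holds plain strings rather than single-element lists (the sample data here is hypothetical):

import pandas as pd

data = pd.DataFrame({'emailDomain': ['gmail', 'gmail', 'aol', 'yahoo', 'msn']})
providers = {'gmail', 'aol', 'yahoo', 'hotmail', 'outlook'}

# isin returns a boolean Series; astype(int) turns True/False into 1/0
data['validEmailDomain'] = data['emailDomain'].isin(providers).astype(int)
print(data['validEmailDomain'].tolist())  # [1, 1, 1, 1, 0]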
Searching for the provider
Instead of applying a regex to each email row by row, you can use the Series.str methods to do it on a whole column at a time:
pattern2 = r'(?<=@)([^.]+)(?=\.)'
df['email'].str.extract(pattern2, expand=False)
So this becomes something like this:
pattern2 = r'(?<=@)([^.]+)(?=\.)'
providers = {'gmail', 'aol', 'yahoo', 'hotmail', 'outlook'}
df = pd.DataFrame(data={'email': ['test.1@gmail.com', 'test.2@aol.com', 'test3@something.eu']})
provider_serie = df['email'].str.extract(pattern2, expand=False)
0        gmail
1          aol
2    something
Name: email, dtype: object
interested_providers = df['email'].str.extract(pattern2, expand=False).isin(providers)
0     True
1     True
2    False
Name: email, dtype: bool
If you really want 0s and 1s, you can add a .astype(int), as below.
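For example, reusing interested_providers from above:

df['validEmailDomain'] = interested_providers.astype(int)  # 1, 1, 0 for the sample frame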

Your code would work if your series contained strings. Since it doesn't, the values likely are lists, in which case you need to extract the first element. I would also use pd.Series.map instead of any row-wise logic. Below is a complete example:
df = pd.DataFrame({'emailDomain': [['gmail'], ['gmail'], ['gmail'], ['aol'],
                                   ['yahoo'], ['yahoo'], ['else']]})
domains = {'gmail', 'aol', 'outlook', 'hotmail', 'yahoo'}
df['validEmailDomain'] = df['emailDomain'].map(lambda x: x[0]).isin(domains)\
                                          .astype(int)
print(df)
#   emailDomain  validEmailDomain
# 0     [gmail]                 1
# 1     [gmail]                 1
# 2     [gmail]                 1
# 3       [aol]                 1
# 4     [yahoo]                 1
# 5     [yahoo]                 1
# 6      [else]                 0
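As a side note (not in the original answer), the .str accessor also indexes list elements, so the lambda can be avoided:

df['validEmailDomain'] = df['emailDomain'].str[0].isin(domains).astype(int)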

You could sum up the occurrence checks of every provider via a list comprehension and write the resulting list into data['validEmailDomain']:
providers = ['gmail', 'aol', 'outlook', 'hotmail', 'yahoo']
data['validEmailDomain'] = [np.sum([p in e for p in providers]) for e in data['emailDomain'].values]
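A quick self-contained check of what this produces, assuming the column holds single-element lists as in the question:

import numpy as np
import pandas as pd

data = pd.DataFrame({'emailDomain': [['gmail'], ['aol'], ['yahoo'], ['msn']]})
providers = ['gmail', 'aol', 'outlook', 'hotmail', 'yahoo']
# each row gets the number of providers found in its list: here 1, 1, 1, 0
data['validEmailDomain'] = [np.sum([p in e for p in providers]) for e in data['emailDomain'].values]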

retrieve cell string values in a column between two unknown indexes based on substrings location

I need to locate the first location where the word 'then' appears in the Words table. I'm trying to get code to consolidate all strings in the 'text' column from this location until the first text containing the substring '666' or '999' (in this case a combination of their, stoma22, fe156, sligh334, pain666; the desired subtrings_output = 'theirstoma22fe156sligh334pain666').
I've tried:
their_loc = np.where(words['text'].str.contains(r'their', na =True))[0][0]
666_999_loc = np.where(words['text'].str.contains(r'666', na =True))[0][0]
subtrings_output = Words['text'].loc[Words.index[their_loc:666_999_loc]]
As you can see, I'm not sure how to extend the condition of 666_999_loc to include the substring 666 or 999; also, slicing the index between two variables raises an error. Many thanks.
Words table:
page no  text      font
1        they      0
1        ate       0
1        apples    0
2        and       0
2        then      1
2        their     0
2        stoma22   0
2        fe156     1
2        sligh334  0
2        pain666   1
2        given     0
2        the       1
3        fruit     0
You just need to add one to the end of the slice, and add an OR condition to the np.where for 666_or_999_loc using the | operator.
text_col = words['text']
their_loc = np.where(text_col.str.contains(r'their', na=True))[0][0]
contains_666_or_999_loc = np.where(text_col.str.contains('666', na=True) |
                                   text_col.str.contains('999', na=True))[0][0]
subtrings_output = ''.join(text_col.loc[words.index[their_loc:contains_666_or_999_loc + 1]])
print(subtrings_output)
Output:
theirstoma22fe156sligh334pain666
IIUC, use pandas.Series.idxmax with "".join().
Series.idxmax(axis=0, skipna=True, *args, **kwargs)
Return the row label of the maximum value.
If multiple values equal the maximum, the first row label with that
value is returned.
So, assuming Words is your DataFrame, try this:
their_loc = Words["text"].str.contains("their").idxmax()
_666_999_loc = Words["text"].str.contains("666").idxmax()
subtrings_output = "".join(Words["text"].loc[Words.index[their_loc:_666_999_loc+1]])
Output :
print(subtrings_output)
#theirstoma22fe156sligh334pain666
#their stoma22 fe156 sligh334 pain666 # <- with " ".join()
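Note that the lookup above only searches for "666". To stop at whichever of "666" or "999" appears first, the same idxmax pattern works with a regex alternation (a small extension, not in the original answer):

_666_999_loc = Words["text"].str.contains("666|999").idxmax()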

How to change the iterrows method to apply

I have this code running on around 60k rows. It takes around 4 hours to complete the whole process, which is not feasible, so I want to use apply instead of iterrows because of the time constraints.
Here is the code,
all_merged_k = pd.DataFrame(columns=all_merged_f.columns)
for index, row in all_merged_f.iterrows():
    if row['route_count'] == 0:
        all_merged_k = all_merged_k.append(row)
    else:
        for i in range(row['route_count']):
            row1 = row.copy()
            row['Route Number'] = i
            row['Route_Broken'] = row1['routes'][i]
            all_merged_k = all_merged_k.append(row)
Basically, what the code does is: if the route count is 0, append the row unchanged; otherwise, append that many rows, all with the same values except the routes column (which contains a nested list), breaking it across multiple rows and filling the new columns Route_Broken and Route Number.
Sample of data:
routes                 route_count
[[CHN-IND]]            1
[[CHN-IND],[IND-KOR]]  2
O/P data:
routes                 route_count  Broken_Route  Route Number
[[CHN-IND]]            1            [CHN-IND]     1
[[CHN-IND],[IND-KOR]]  2            [CHN-IND]     1
[[CHN-IND],[IND-KOR]]  2            [IND-KOR]     2
Is this possible using apply? 4 hours is far too slow to put into production. Any help is much appreciated.
So the code below doesn't work:
df.join(df['routes'].explode().rename('Broken_Route')) \
  .assign(**{'Route Number': lambda x: x.groupby(level=0).cumcount().add(1)})
or
(df.assign(Broken_Route=df['routes'],
           count=df['routes'].str.len().apply(range))
   .explode(['Broken_Route', 'count'])
)
It doesn't work if the index matches: as we can see in the last row, Route Number should be 1.
Do you expect something like this?
>>> df.join(df['routes'].explode().rename('Broken_Route')) \
      .assign(**{'Route Number': lambda x: x.groupby(level=0).cumcount().add(1)})
                   routes  route_count Broken_Route  Route Number
0             [[CHN-IND]]            1    [CHN-IND]             1
1  [[CHN-IND], [IND-KOR]]            2    [CHN-IND]             1
1  [[CHN-IND], [IND-KOR]]            2    [IND-KOR]             2
2                                    0                          1
Setup:
data = {'routes': [[['CHN-IND']], [['CHN-IND'], ['IND-KOR']], ''],
        'route_count': [1, 2, 0]}
df = pd.DataFrame(data)
Update 1: added a record with route_count=0 and routes=''.
You can assign the routes and counts and explode:
(df.assign(Broken_Route=df['routes'],
           count=df['routes'].str.len().apply(range))
   .explode(['Broken_Route', 'count'])
)
NB: multi-column explode requires pandas ≥ 1.3.0; for older versions, a fallback sketch follows the output below.
output:
                   routes  route_count Broken_Route count
0             [[CHN-IND]]            1    [CHN-IND]     0
1  [[CHN-IND], [IND-KOR]]            2    [CHN-IND]     0
1  [[CHN-IND], [IND-KOR]]            2    [IND-KOR]     1
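For pandas older than 1.3, a rough equivalent (a sketch, not from the original answer) is to explode the single column and rebuild the counter with a per-index cumcount:

# explode only Broken_Route, then number the copies of each original row
tmp = df.assign(Broken_Route=df['routes']).explode('Broken_Route')
tmp['count'] = tmp.groupby(level=0).cumcount()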

Filtering inside the dataframe pandas

I want to extract the first two symbols in case the first three symbols match a certain pattern (the first two symbols should be any of those inside the brackets [ptkbdgG_fvsSxzZhmnNJlrwj], and the third symbol should be any of those inside the brackets [IEAOYye|aouKLM#)3*<!(#0~q^LMOEK]).
The first two lines work correctly.
The last lines do not work, and I do not understand why. The code doesn't give any errors; it just does nothing for those.
# extract the first three symbols and save them in a new column
df['first_three_symbols'] = df['ITEM'].str[0:3]

# create a boolean column on the condition whether the first three symbols match
df["ccv"] = df["first_three_symbols"].str.contains('[ptkbdgG_fvsSxzZhmnNJlrwj][ptkbdgG_fvsSxzZhmnNJlrwj][IEAOYye|aouKLM#)3*<!(#0~q^LMOEK]')

# create another column for True values in the previous column
if df["ccv"].item == True:
    df['first_two_symbols'] = df["ITEM"].str[0:2]
Here is my output:
          ID            ITEM  FREQ first_three_symbols    ccv
0          0               a   563                   a  False
1          1       OlrMndmEn     1                 Olr  False
2          2  OlrMndSpOrtl#r     0                 Olr  False
3          3            AG#l    74                 AG#  False
4          4         AG#lbMm    24                 AG#  False
...      ...             ...   ...                 ...    ...
51723  51723         zytzWt#     8                 zyt  False
51724  51724       zytzytOst     0                 zyt  False
51725  51725          zYxtIx     5                 zYx  False
51726  51726       zYxtIxkWt     0                 zYx  False
51727  51727            zyZe     4                 zyZ  False
[51728 rows x 5 columns]
You can either create a function and use the apply method:
def f(row):
    if row["ccv"] == True:
        # row["ITEM"] is a plain string here, so slice it directly (no .str)
        return row["ITEM"][0:2]
    else:
        return None

df['first_two_symbols'] = df.apply(f, axis=1)
or you can use the np.where function from the numpy package, as sketched below.
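A short sketch of that np.where variant (vectorized, no row-wise apply):

import numpy as np

# take the first two characters where ccv is True, otherwise None
df['first_two_symbols'] = np.where(df['ccv'], df['ITEM'].str[0:2], None)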

Reading values from dataframe.iloc is too slow, and a problem with dataframe.values

I use Python and I have data of 35,000 rows. I need to change values in a loop, but it takes too much time.
PS: I have columns named Succes_1, Succes_2, Succes_5, Succes_7, ..., Succes_120, so I get the name of the column in the inner loop; the values depend on the other column.
Example:
SK_1  SK_2  SK_5  ...  SK_120  Succes_1  Succes_2  ...  Succes_120
   1     0     1            0         1         0                0
   1     1     0            1         2         1                1
for i in range(len(data_jeux)):
    for d in range(len(succ_len)):
        ids = succ_len[d]
        if data_jeux['SK_%s' % ids][i] == 1:
            data_jeux.iloc[i]['Succes_%s' % ids] = 1 + i
Is there a faster way to execute this? I tried:
data_jeux.values[i, ('Succes_%s' % ids)] = 1+i
but it returns an error; maybe it doesn't accept a string index.
You can select the columns up front and then use a boolean mask to increment. It's not clear whether your columns are naturally ordered; if they aren't, you can use sorted with a custom key function, since plain string sorting puts '20' before '100'.
def splitter(x):
    return int(x.rsplit('_', maxsplit=1)[-1])

cols = df.columns
sk_cols = sorted(cols[cols.str.startswith('SK')], key=splitter)
succ_cols = sorted(cols[cols.str.startswith('Succes')], key=splitter)

# bump every Succes_x cell whose matching SK_x cell equals 1
mask = (df[sk_cols] == 1).to_numpy()
df[succ_cols] = df[succ_cols].mask(mask, df[succ_cols] + 1)
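A quick check on a toy frame (hypothetical values, two suffixes only):

import pandas as pd

df = pd.DataFrame({'SK_1': [1, 1], 'SK_2': [0, 1],
                   'Succes_1': [1, 2], 'Succes_2': [0, 1]})
# after running the snippet above, Succes_1 becomes [2, 3] and
# Succes_2 becomes [0, 2]: only cells whose SK twin equals 1 were bumped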

Counting Values in Columns, Ignoring Alphanumeric Values

First post here. I am trying to find the total count of values in an Excel file. After importing the file, I need to run a condition which counts all the values except 0, and wherever it finds 0, makes that cell blank.
df6 = df5.append(df5.ne(0).sum().rename('Final Value'))
I tried the above, but it is not working properly: it is counting the column names as well, and I only need to count the float values.
Demo DataFrame:
                0       1       2       3
ID_REF     1007_s  1053_a  117_at  121_at
GSM95473  0.08277 0.00874 0.00363 0.01877
GSM95474  0.09503 0.00592 0.00352       0
GSM95475  0.08486 0.00678 0.00386 0.01973
GSM95476  0.08105 0.00913 0.00306 0.01801
GSM95477  0.00000 0.00812 0.00428       0
GSM95478  0.07615 0.00777 0.00438 0.01799
GSM95479        0 0.00508       1       0
GSM95480  0.08499 0.00442 0.00298 0.01897
GSM95481  0.08893 0.00734 0.00204       0
The header row (0 1 2 3) and the ID_REF row are the column names and index values, which need to be ignored when counting.
The output should look like this after counting:
Final           8       9       9       5
If you just need the count, and don't mind changing the values in your DataFrame, you could apply a function to each cell with the applymap method. First create a function to check for a float:
def floatcheck(value):
    if isinstance(value, float):
        return 1
    else:
        return 0
Then apply it to your dataframe:
df6 = df5.applymap(floatcheck)
This will create a dataframe with a 1 if the value is a float and a 0 if not. Then you can apply your sum method:
df7 = df6.append(df6.sum().rename("Final Value"))
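On recent pandas this needs two small changes: applymap was renamed to DataFrame.map (pandas ≥ 2.1), and DataFrame.append was removed in 2.0, so pd.concat takes its place. A sketch of the same idea:

import pandas as pd

df6 = df5.map(floatcheck)                   # pandas >= 2.1 spelling of applymap
final = df6.sum().rename("Final Value")
df7 = pd.concat([df6, final.to_frame().T])  # replaces the removed df.append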
I was able to solve the issue, so here it is:
df5 = df4.append(pd.DataFrame(dict(((df4[1:] != 1) & (df4[1:] != 0)).sum()), index=['Final']))
df5.columns = df4.columns
went = df5.to_csv("output3.csv")
What I did was change the starting index so I didn't count the first row (which was alphanumeric), and then I just compared.
Thanks for your response.
