I have an issue with applying a function to a column in pandas. Please see the code below:
import pandas as pd
#create a dict as below
data_dic = {
    "text": ['hello', 1, 'how are you?', 4],
    "odd": [0, 2, 4, 6],
    "even": [1, 3, 5, 7]
}
#create a DataFrame
df = pd.DataFrame(data_dic)
#define functions
def checktext(str1):
    if isinstance(str1, str):
        return str1.upper()

def checknum(str1):
    if isinstance(str1, int):
        return str1 + 1
df['new'] = df['text'].apply(lambda x: checktext(x))
df['new'].head()
My df now looks like this:
text odd even new
0 hello 0 1 HELLO
1 1 2 3 None
2 how are you? 4 5 HOW ARE YOU?
3 4 6 7 None
I would like to apply the function checknum to the two cells in column 'new' that have a None value. Can someone assist with this? Thank you.
IIUC, you can use vectorized code:
# make string UPPER
s = df['text'].str.upper()
# where there was no string, get number + 1 instead
df['new'] = s.fillna(df['text'].where(s.isna())+1)
output:
text odd even new
0 hello 0 1 HELLO
1 1 2 3 2
2 how are you? 4 5 HOW ARE YOU?
3 4 6 7 5
That said, for the sake of argument, your two functions could be combined into one:
def check(str1):
    if isinstance(str1, str):
        return str1.upper()
    elif isinstance(str1, int):
        return str1 + 1
df['new'] = df['text'].apply(check)
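Applied to the example frame, this produces the same new column as the vectorized version above (HELLO, 2, HOW ARE YOU?, 5).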
Your function:
def checktext(str1):
    if isinstance(str1, str):
        return str1.upper()
will return None if the if statement is false (i.e., str1 is not a string). To return the value unchanged by default instead:
def checktext(str1):
    if isinstance(str1, str):
        return str1.upper()
    return str1
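Note that this returns the non-string values unchanged rather than incremented; to also get the number + 1 behaviour, combine the two checks into one function as shown above.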
First, you could use the StringMethods accessor to convert to upper case without any loop. Once you have done that, you can easily process the rows where the result is NaN:
df['new'] = df['text'].str.upper()
mask = df['new'].isna()
df.loc[mask, 'new'] = df.loc[mask, 'text'] + 1
This directly gives:
text odd even new
0 hello 0 1 HELLO
1 1 2 3 2
2 how are you? 4 5 HOW ARE YOU?
3 4 6 7 5
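(This works because the rows where new is NaN hold numbers in text, so the + 1 on the masked rows is well defined.)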
Related
Let df be a dataframe of boolean values with a two-column (id, Week) index. For every id, I want to count how many consecutive True values have occurred up to each week, resetting at every False. For example, this is how it would look in this specific case.
value consecutive
id Week
1 1 True 1
1 2 True 2
1 3 False 0
1 4 True 1
1 5 True 2
2 1 False 0
2 2 False 0
2 3 True 1
This is my solution:
def func(id, week):
    M = df.loc[id][:week + 1]
    consecutive_list = list()
    S = 0
    for index, row in M.iterrows():
        if row['value']:
            S += 1
        else:
            S = 0
        consecutive_list.append(S)
    return consecutive_list[-1]
Then we generate the column "consecutive" as a list in the following way:
Consecutive_list = list()
for k in df.index:
    id = k[0]
    week = k[1]
    Consecutive_list.append(func(id, week))
df['consecutive'] = Consecutive_list
I would like to know if there is a more Pythonic way to do this.
EDIT: I wrote the "consecutive" column in order to show what I expect this to be.
If you are trying to add the consecutive column to the df, this should work:
df.assign(consecutive = df['value'].groupby(df['value'].diff().ne(0).cumsum()).cumsum())
Output:
value consecutive
1 a True 1
b True 2
2 a False 0
b True 1
3 a True 2
b False 0
4 a False 0
b True 1
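Note that this labels runs over the whole value column; if the counter should also reset at each id boundary, as in the question's (id, Week) index, here is a minimal sketch of the same idea that shifts within the id groups (toy data assumed):

import pandas as pd

# toy frame with an (id, Week) MultiIndex like the question's
df = pd.DataFrame(
    {'value': [True, True, False, True, True, False, False, True]},
    index=pd.MultiIndex.from_tuples(
        [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5),
         (2, 1), (2, 2), (2, 3)],
        names=['id', 'Week']))

# a run starts where the value changes; shifting per id also breaks runs
# at id boundaries, so the counter resets for each id
new_run = df['value'].ne(df.groupby(level='id')['value'].shift())
run_id = new_run.cumsum()

# cumulative count inside each run; False rows stay at 0
df['consecutive'] = df['value'].astype(int).groupby(run_id).cumsum()
print(df)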
I have an Excel sheet which consists of 2 columns. The first is keywords and the second is Url.
I am writing a script to extract groups of keywords that share the same 3 URLs or more.
I wrote the code below, but it takes around an hour to process the main loop on a huge Excel sheet.
import pandas as pd
import numpy as np
import time

loop = 1
numerator = 0
continuee = []
df_list = []
for index in list(df.sort_values('Url').set_index('Url').index.unique()):
    if len(df.sort_values('Url').set_index('Url').loc[index].values) == 1:
        list1 = list(df.sort_values('Url').set_index('Url').loc[index].values)
    elif len(df.sort_values('Url').set_index('Url').loc[index].keywords.values) > 1:
        list1 = list(df.sort_values('Url').set_index('Url').loc[index].keywords.values)
    df1 = df[df.keywords.isin(list1)]
    df1 = df1[df1.Url.duplicated(keep=False)]
    df1 = df1.groupby('Url').filter(lambda x: x.Url.value_counts() == df1.keywords.nunique())
    df1 = df1.groupby('keywords').filter(lambda x: x.keywords.value_counts() >= 3)
    df1 = df1.groupby('Url').filter(lambda x: x.Url.value_counts() == df1.keywords.nunique())
    if df1.keywords.nunique() > 1:
        silos = list(df1.keywords.unique())
        df_list.append({numerator: silos})
        word = word[~(word.isin(silos))]
        numerator += 1
    else:
        singles = list(word[word.keywords.isin(list1)].keywords.unique())
        df_list.append({"single": singles})
        word = word[~(word.isin(singles))]
    print(loop)
    loop += 1

trial = pd.DataFrame(df_list)
if 'single' in list(trial.columns):
    for i in list(word.keywords.unique()):
        if i not in list(trial.single):
            df_list.append({"single": i})
else:
    for i in list(word.keywords.unique()):
        df_list.append({"single": i})
trial = pd.DataFrame(df_list)
I have tried many times to use multiprocessing, but I failed because I don't really understand how it works with pandas. Is there a way to help me, please? Also, if I wanted to pass in a couple more functions, how would I do that? Many thanks in advance.
From what I can gather, this should be your solution:
by_size = df.groupby(df.columns.tolist()).size().reset_index()
three_or_more = by_size[by_size[0] >= 3].iloc[:, :-1]
Example:
>>> df
keyword url
0 2 2
1 4 3
2 2 1
3 4 3
4 1 1
5 2 1
6 4 1
7 2 1
8 1 1
9 3 3
>>> by_size = df.groupby(df.columns.tolist()).size().reset_index()
>>> by_size
keyword url 0
0 1 1 2
1 2 1 3
2 2 2 1
3 3 3 1
4 4 1 1
5 4 3 2
>>> three_or_more=by_size[by_size[0]>=3].iloc[:,:-1]
>>> three_or_more
keyword url
1 2 1
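As a side note, .size() can name its output column directly, which avoids indexing the count column as by_size[0]; a variant with the same result:

by_size = df.groupby(df.columns.tolist()).size().reset_index(name='n')
three_or_more = by_size[by_size['n'] >= 3].drop(columns='n')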
I have a dataframe with one column called label which has the values [0,1,2,3,4,5,6,8,9].
I would like to make dummy columns out of this, but I would like some labels to be joined together, so for example I want dummy_012 to be 1 if the observation has either label 0, 1 or 2.
If I use the command df2 = pd.get_dummies(df, columns=['label']), it creates 9 columns, one for each label.
I know I can use df2['dummy_012'] = df2['dummy_0'] + df2['dummy_1'] + df2['dummy_2'] afterwards to turn them into one joint column, but I want to know if there is a more pythonic way of doing it (or some function where I can just change the parameters of the joins).
Maybe this approach can give you an idea:
groups = ['012', '345', '6789']
for gp in groups:
    df.loc[df['Label'].isin([int(x) for x in gp]), 'Label_Group'] = f'dummies_{gp}'
Output:
Label Label_Group
0 0 dummies_012
1 1 dummies_012
2 2 dummies_012
3 3 dummies_345
4 4 dummies_345
5 5 dummies_345
6 6 dummies_6789
7 8 dummies_6789
8 9 dummies_6789
And then create the dummies:
df_dummies = pd.get_dummies(df['Label_Group'])
dummies_012 dummies_345 dummies_6789
0 1 0 0
1 1 0 0
2 1 0 0
3 0 1 0
4 0 1 0
5 0 1 0
6 0 0 1
7 0 0 1
8 0 0 1
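If, as here, the groups are contiguous ranges of labels, pd.cut with explicit bin edges is an alternative to the isin loop (a sketch, assuming the same Label column):

df['Label_Group'] = pd.cut(df['Label'], bins=[-1, 2, 5, 9],
                           labels=['dummies_012', 'dummies_345', 'dummies_6789'])
df_dummies = pd.get_dummies(df['Label_Group'])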
I don't know that this is pythonic, because a more elegant solution might exist, but it does allow you to change parameters and it's vectorized. I've read that get_dummies() can be a bit slow with large amounts of data, and vectorizing pandas is good practice in general. So I vectorized this function and had it do its calculations with numpy arrays. It should give you a performance boost as the dataset increases in size, compared to similar functions.
This function will take your dataframe and a list of numbers as strings and will return your dataframe with the column you wanted.
def get_dummy(df, column_nos):
    new_col_name = 'dummy_' + ''.join([i for i in column_nos])
    vector_sum = sum([df['dummy_' + i].values for i in column_nos])
    df[new_col_name] = [1 if i > 0 else 0 for i in vector_sum]
    return df
In case you'd rather the input be integers instead of strings, you can tweak the above function to look like below.
def get_dummy(df, column_nos):
    column_names = ['dummy_' + str(i) for i in column_nos]
    new_col_name = 'dummy_' + ''.join([str(i) for i in sorted(column_nos)])
    vector_sum = sum([df[i].values for i in column_names])
    df[new_col_name] = [1 if i > 0 else 0 for i in vector_sum]
    return df
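A quick usage sketch for the integer version, assuming the plain dummies are created with a dummy prefix so the columns are named dummy_0, dummy_1, ...:

import pandas as pd

df = pd.DataFrame({'label': [0, 1, 2, 3, 5, 8]})
df = pd.get_dummies(df, columns=['label'], prefix='dummy')  # dummy_0, dummy_1, ...
df = get_dummy(df, [0, 1, 2])  # adds the combined dummy_012 column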
I am a beginner and this is my first project. I searched for the answer, but it still isn't clear.
I have imported a worksheet from Excel using pandas:
Rabbit Class:
Num Behavior Speaking Listening
0 1 3 1 1
1 2 1 1 1
2 3 3 1 1
3 4 1 1 1
4 5 3 2 2
5 6 3 2 3
6 7 3 3 1
7 8 3 3 3
8 9 2 3 2
What I want to do is create if conditions, e.g. if a student's behavior is a "1", I want it to print one string, else print a different string. How can I reference a particular cell of the worksheet to set up such a condition? I tried val = df.at(1, "Behavior") but that clearly isn't working.
Here is the code I have so far:
import os
import pandas as pd
from pandas import ExcelWriter
from pandas import ExcelFile

path = r"C:\Users\USER\Desktop\Python\rabbit_class.xls"
df = pd.read_excel(path)

print("Rabbit Class:")
print(df)
Also, you can do:
dff = df.loc[df['Behavior'] == 1]
if not dff.empty:
    # do something
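As an aside, the original attempt failed because .at is an indexer, not a method, so it is used with square brackets:

val = df.at[1, "Behavior"]  # not df.at(1, "Behavior")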
What you want is to find rows where df.Behavior is equal to 1. Use any of the following three methods.
# Method-1
df[df["Behavior"]==1]
# Method-2
df.loc[df["Behavior"]==1]
# Method-3
df.query("Behavior==1")
Output:
Num Behavior Speaking Listening LastColumn
0 0 1 3 1 1
Note: Dummy Data
Your sample data does not have a header for its last column, so I named it LastColumn and read the data in as a dataframe.
# Dummy Data
import re
import numpy as np
import pandas as pd

s = """
Num Behavior Speaking Listening LastColumn
0 1 3 1 1
1 2 1 1 1
2 3 3 1 1
3 4 1 1 1
4 5 3 2 2
5 6 3 2 3
6 7 3 3 1
7 8 3 3 3
8 9 2 3 2
"""

# Make Dataframe
ss = re.sub(r'\s+', ',', s)
ss = ss[1:-1]
sa = np.array(ss.split(',')).reshape(-1, 5)
df = pd.DataFrame(dict((k, v) for k, v in zip(sa[0, :], sa[1:, :].T)))
df = df.astype(int)
df
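As an aside, the same block can be parsed by feeding it to read_csv through a StringIO with a whitespace separator (a sketch reusing the s string above):

import io
import pandas as pd

df = pd.read_csv(io.StringIO(s), sep=r'\s+')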
Hope the example below will help you:
import pandas as pd

df = pd.read_excel(r"D:\test_stackoverflow.xlsx")
print(df.columns)

def _filter(col, filter_):
    return df[df[col] == filter_]

print(_filter('Behavior', 1))
Thank you all for your answers. I finally figured out what I was trying to do using the following code:
for i in df.index:
    student_number = df["Student Number"][i]
    print(student_number)
    student_name = student_list[int(student_number) - 1]
    behavior = df["Behavior"][i]
    if behavior == 1:
        print("%s's behavior is good" % student_name)
    elif behavior == 2:
        print("%s's behavior is average." % student_name)
    else:
        print("%s's behavior is poor" % student_name)
    speaking = df["Speaking"][i]
I have a pandas dataframe with a text column.
I'd like to create a new column in which values are conditional on the start of the text string from the text column.
So if the first 30 characters of the text column:
== 'xxx...xxx' then return value 1
== 'yyy...yyy' then return value 2
== 'zzz...zzz' then return value 3
if none of the above return 0
It is possible to use multiple numpy.where calls, but with more conditions it is better to use apply.
To select the start of each string, use indexing with str:
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['xxxss', 'yyyee', 'zzzswee', 'sss'],
                   'B': [4, 5, 6, 8]})
print (df)
A B
0 xxxss 4
1 yyyee 5
2 zzzswee 6
3 sss 8
#check first 3 values
a = df.A.str[:3]
df['new'] = np.where(a == 'xxx', 1,
                     np.where(a == 'yyy', 2,
                              np.where(a == 'zzz', 3, 0)))
print (df)
A B new
0 xxxss 4 1
1 yyyee 5 2
2 zzzswee 6 3
3 sss 8 0
def f(x):
    if x == 'xxx':
        return 1
    elif x == 'yyy':
        return 2
    elif x == 'zzz':
        return 3
    else:
        return 0
df['new'] = df.A.str[:3].apply(f)
print (df)
A B new
0 xxxss 4 1
1 yyyee 5 2
2 zzzswee 6 3
3 sss 8 0
EDIT:
If the lengths differ, you only need:
df['new'] = np.where(df.A.str[:3] == 'xxx', 1,
                     np.where(df.A.str[:2] == 'yy', 2,
                              np.where(df.A.str[:1] == 'z', 3, 0)))
print (df)
A B new
0 xxxss 4 1
1 yyyee 5 2
2 zzzswee 6 3
3 sss 8 0
EDIT1:
Thanks to Quickbeam2k1 for the idea to use str.startswith to check the start of each string:
df['new'] = np.where(df.A.str.startswith('xxx'), 1,
                     np.where(df.A.str.startswith('yy'), 2,
                              np.where(df.A.str.startswith('z'), 3, 0)))
print (df)
A B new
0 xxxss 4 1
1 yyyee 5 2
2 zzzswee 6 3
3 sss 8 0
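With more than two or three branches, np.select keeps the conditions and choices in flat lists instead of nested np.where calls; a sketch of the same mapping:

conditions = [df.A.str.startswith('xxx'),
              df.A.str.startswith('yy'),
              df.A.str.startswith('z')]
df['new'] = np.select(conditions, [1, 2, 3], default=0)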
A different and slower solution: however, its advantage is that the mapping from patterns to values is a function parameter (with an implicit default value of 0).
def map_starts_with(pat_map):
    def map_string(t):
        # take the value of the "first" matching pattern, if at least one pattern is found
        pats = [pat for pat in pat_map.keys() if t.startswith(pat)]
        return pat_map.get(pats[0]) if len(pats) > 0 else 0
    return map_string
df = pd.DataFrame({'col': ['xx', 'aaaaaa', 'c']})
col
0 xx
1 aaaaaa
2 c
mapping = { 'aaa':4 ,'c':3}
df.col.apply(map_starts_with(mapping))
0 0
1 4
2 3
Note that we also used currying here. I'm wondering whether this approach could be implemented using additional pandas or numpy functionality.
Note that the "first" pattern match may depend on the traversal order of the dict keys. This is irrelephant if there is no overlap in the keys. (Jezrael's solution, or its direct generalization thereof, will also choose one element for the match, but in a more predictable manner)