Python Pandas replicate rows in dataframe

If the dataframe looks like:
Store,Dept,Date,Weekly_Sales,IsHoliday
1,1,2010-02-05,24924.5,FALSE
1,1,2010-02-12,46039.49,TRUE
1,1,2010-02-19,41595.55,FALSE
1,1,2010-02-26,19403.54,FALSE
1,1,2010-03-05,21827.9,FALSE
1,1,2010-03-12,21043.39,FALSE
1,1,2010-03-19,22136.64,FALSE
1,1,2010-03-26,26229.21,FALSE
1,1,2010-04-02,57258.43,FALSE
And I want to duplicate the rows where IsHoliday is TRUE, I can do:
is_hol = df['IsHoliday'] == True
df_try = df[is_hol]
df = df.append(df_try)
But is there a better way to do this? I need to duplicate the holiday rows 5 times, and with the above approach I have to call append 5 times.

You can put df_try inside a list and then do what you have in mind:
>>> df.append([df_try]*5, ignore_index=True)
Store Dept Date Weekly_Sales IsHoliday
0 1 1 2010-02-05 24924.50 False
1 1 1 2010-02-12 46039.49 True
2 1 1 2010-02-19 41595.55 False
3 1 1 2010-02-26 19403.54 False
4 1 1 2010-03-05 21827.90 False
5 1 1 2010-03-12 21043.39 False
6 1 1 2010-03-19 22136.64 False
7 1 1 2010-03-26 26229.21 False
8 1 1 2010-04-02 57258.43 False
9 1 1 2010-02-12 46039.49 True
10 1 1 2010-02-12 46039.49 True
11 1 1 2010-02-12 46039.49 True
12 1 1 2010-02-12 46039.49 True
13 1 1 2010-02-12 46039.49 True
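Note that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on current versions the same append-a-list idea has to go through concat. A minimal sketch of the equivalent:
import pandas as pd
# same result as df.append([df_try]*5, ignore_index=True) on pandas >= 2.0
df = pd.concat([df] + [df_try] * 5, ignore_index=True)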

Another way is to use the concat() function:
import pandas as pd
In [603]: df = pd.DataFrame({'col1':list("abc"),'col2':range(3)},index = range(3))
In [604]: df
Out[604]:
col1 col2
0 a 0
1 b 1
2 c 2
In [605]: pd.concat([df]*3, ignore_index=True) # Ignores the index
Out[605]:
col1 col2
0 a 0
1 b 1
2 c 2
3 a 0
4 b 1
5 c 2
6 a 0
7 b 1
8 c 2
In [606]: pd.concat([df]*3)
Out[606]:
col1 col2
0 a 0
1 b 1
2 c 2
0 a 0
1 b 1
2 c 2
0 a 0
1 b 1
2 c 2

This is an old question, but since it still comes up at the top of my results in Google, here's another way.
import pandas as pd
import numpy as np
df = pd.DataFrame({'col1':list("abc"),'col2':range(3)},index = range(3))
Say you want to replicate the rows where col1="b".
reps = [3 if val=="b" else 1 for val in df.col1]
df.loc[np.repeat(df.index.values, reps)]
You could replace the 3 if val=="b" else 1 in the list comprehension with another function that returns, say, 3 if val=="b", 4 if val=="c", and so on, so it's pretty flexible.
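For instance, a dict-based sketch of that idea (rep_map is a made-up name; values not listed default to one copy):
rep_map = {"b": 3, "c": 4}
reps = [rep_map.get(val, 1) for val in df.col1]
df.loc[np.repeat(df.index.values, reps)]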

Appending and concatenating are usually slow in Pandas, so I recommend just building a new list of the rows and turning that into a dataframe (unless you are appending a single row or concatenating a few dataframes).
import pandas as pd
df = pd.DataFrame([
    [1, 1, '2010-02-05', 24924.5, False],
    [1, 1, '2010-02-12', 46039.49, True],
    [1, 1, '2010-02-19', 41595.55, False],
    [1, 1, '2010-02-26', 19403.54, False],
    [1, 1, '2010-03-05', 21827.9, False],
    [1, 1, '2010-03-12', 21043.39, False],
    [1, 1, '2010-03-19', 22136.64, False],
    [1, 1, '2010-03-26', 26229.21, False],
    [1, 1, '2010-04-02', 57258.43, False]
], columns=['Store', 'Dept', 'Date', 'Weekly_Sales', 'IsHoliday'])
temp_df = []
for row in df.itertuples(index=False):
    if row.IsHoliday:
        temp_df.extend([list(row)] * 5)
    else:
        temp_df.append(list(row))
df = pd.DataFrame(temp_df, columns=df.columns)
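If you prefer a vectorized route instead, a sketch using Index.repeat (np.where picks 5 copies for holiday rows and 1 otherwise):
import numpy as np
df = df.loc[df.index.repeat(np.where(df['IsHoliday'], 5, 1))].reset_index(drop=True)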

You can do it in one line:
df.append([df[df['IsHoliday'] == True]] * 5, ignore_index=True)
or
df.append([df[df['IsHoliday']]] * 5, ignore_index=True)

Another alternative to append() is to first replace the values of a column by a list of entries and then explode() (either using ignore_index=True or not, depending on what you want):
df['IsHoliday'] = df['IsHoliday'].apply(lambda x: 5*[x] if (x == True) else x)
df.explode('IsHoliday', ignore_index=True)
The nice thing about this one is that you can already use the list in the apply() call to build copies of rows with modified values in a column, in case you want to do that later anyway. A sketch of that idea follows.
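Here, copy_id is a hypothetical helper column that numbers each replica while exploding, starting again from the original df:
# holiday rows get the list [0, 1, 2, 3, 4]; other rows get [0]
df['copy_id'] = df['IsHoliday'].apply(lambda x: list(range(5)) if x else [0])
df = df.explode('copy_id', ignore_index=True)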

Related

Return entries in DataFrame where logic statement passes

How can boolean statements be used to filter column data?
E.g. Get index and value of rows in dataframe['score'] where value != 0
Where dataframe['score'] contains data in the following format
0 0
1 1
2 0
3 3
4 0
..
100 0
101 9
102 7
I am a big fan of readability. Use pd.DataFrame.query:
df.query('score != 0')
Let's try a boolean mask, using the .ne (not-equal) method:
df[df.score.ne(0)]
score
1 1
3 3
101 9
102 7
Here is some sample code:
Code
import pandas as pd
import random
d = {'score': [random.randint(0,1) for x in range(10)]}
df = pd.DataFrame(d)
print(df)
print(df[df.score != 0])
Input
score
0 0
1 0
2 1
3 0
4 1
5 0
6 0
7 1
8 1
9 0
Output
score
2 1
4 1
7 1
8 1
You can also use
>>> import pandas as pd
>>> dataframe = pd.DataFrame({'score':[0,1,3,0]})
>>> dataframe[dataframe['score']!=0]
score
1 1
2 3
Notes:
dataframe['score']!=0 creates a boolean mask with one value per row: True where the score is not equal to 0, False otherwise.
dataframe[...] keeps only the rows where the mask is True.
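To see the mask itself from that example:
>>> dataframe['score'] != 0
0    False
1     True
2     True
3    False
Name: score, dtype: bool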

How can I operate with the output of a DataFrame?

I have a DataFrame object and I'm grouping by some keys and counting the results. The problem is that I want to replace one of the resulting columns with a ratio between the counts.
df.groupby(['A','B', 'C'])['C'].count().apply(f).reset_index()
I'm looking for an f that replaces the column C by the value of #timesC==1 / #timesC==0 for each value of A and B.
Is this what you want?
import pandas as pd
import numpy as np
df = pd.DataFrame(
    {'A': [1, 2, 3, 1, 2, 3],
     'B': [2, 0, 1, 2, 0, 1],
     'C': [1, 1, 0, 1, 1, 1]})
print(df)
def f(x):
    if np.count_nonzero(x == 0) == 0:
        return np.nan
    else:
        return np.count_nonzero(x == 1) / np.count_nonzero(x == 0)
result = df.groupby(['A','B'])['C'].apply(f).reset_index()
print(result)
Result:
#df
A B C
0 1 2 1
1 2 0 1
2 3 1 0
3 1 2 1
4 2 0 1
5 3 1 1
#result
A B C
0 1 2 NaN
1 2 0 NaN
2 3 1 1.0
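A hypothetical vectorized variant of the same ratio, counting the 1s and 0s per group with value_counts and unstack. This sketch assumes C takes both values 0 and 1 somewhere in the data, so that both unstacked columns exist:
# rows: (A, B) groups; columns: the distinct values of C (0 and 1)
counts = df.groupby(['A', 'B'])['C'].value_counts().unstack(fill_value=0)
# replace a zero denominator with NaN to match f above
result = (counts[1] / counts[0].replace(0, np.nan)).reset_index(name='C')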

pandas add a column with only one row

This sounds a bit weird, but it's exactly what I need right now:
I got several pandas dataframes that contains columns with float numbers, for example:
a b c
0 0 1 2
1 3 4 5
2 6 7 8
Now I want to add a column with only one row, whose value equals the average of column 'a', in this case 3.0. So the new dataframe will look like this:
a b c average
0 0 1 2 3.0
1 3 4 5
2 6 7 8
And all the rows below are empty.
I've tried things like df['average'] = np.mean(df['a']) but that gives me a whole column of 3.0. Any help will be appreciated.
Assign a Series; this is cleaner:
df['average'] = pd.Series(df['a'].mean(), index=df.index[[0]])
Or, even better, assign with loc:
df.loc[df.index[0], 'average'] = df['a'].mean().item()
Filling NaNs is straightforward, you can do
df['average'] = df['average'].fillna('')
df
a b c average
0 0 1 2 3
1 3 4 5
2 6 7 8
You can do something like:
df['average'] = [np.mean(df['a'])]+['']*(len(df)-1)
Here is a full example:
import pandas as pd
import numpy as np
df = pd.DataFrame(
    [(0, 1, 2), (3, 4, 5), (6, 7, 8)],
    columns=['a', 'b', 'c'])
print(df)
a b c
0 0 1 2
1 3 4 5
2 6 7 8
df['average'] = ''  # note: empty strings make this column dtype object
df.loc[0, 'average'] = df['a'].mean()  # .loc avoids the chained-assignment warning
print(df)
a b c average
0 0 1 2 3
1 3 4 5
2 6 7 8

Retain only duplicated rows in a pandas dataframe

I have a dataframe with two columns: "Agent" and "Client"
Each row corresponds to an interaction between an Agent and a client.
I want to keep only the rows if a client had interactions with at least 2 agents.
How can I do that?
Worth adding that you can now use df.duplicated(); keep=False marks every duplicated row as True instead of sparing the first or last occurrence:
df = df.loc[df.duplicated(subset='Agent', keep=False)]
Use groupby and transform by value_counts.
df[df.Agent.groupby(df.Agent).transform('value_counts') > 1]
Note that, as mentioned here, you might have one agent interacting with the same client multiple times, which would be retained as a false positive. If you do not want this, add a drop_duplicates call before filtering:
df = df.drop_duplicates()
df = df[df.Agent.groupby(df.Agent).transform('value_counts') > 1]
print(df)
A B
0 1 2
1 2 5
2 3 1
3 4 1
4 5 5
5 6 1
mask = df.B.groupby(df.B).transform('value_counts') > 1
print(mask)
0 False
1 True
2 True
3 True
4 True
5 True
Name: B, dtype: bool
df = df[mask]
print(df)
A B
1 2 5
2 3 1
3 4 1
4 5 5
5 6 1
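If you want to match the question as stated exactly (keep rows for clients who interacted with at least 2 distinct agents), a sketch with transform('nunique'), assuming the columns are literally named Agent and Client:
# number of distinct agents seen per client, broadcast back to each row
counts = df.groupby('Client')['Agent'].transform('nunique')
df = df[counts >= 2]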

Set value of first item in slice in python pandas

So I would like to make a slice of a dataframe and then set the value of the first item in that slice, without copying the dataframe. For example:
df = pandas.DataFrame(numpy.random.rand(3,1))
df[df[0]>0][0] = 0
The slice here is irrelevant and just for the example (it will return the whole dataframe again). The point is that doing it as in the example gives you a SettingWithCopyWarning (understandably). I have also tried slicing first and then using iloc/ix/loc, and using iloc twice, i.e. something like:
df.iloc[df[0]>0,:][0] = 0
df[df[0]>0,:].iloc[0] = 0
And neither of these works. Again, I don't want to make a copy of the dataframe, even if it is just the sliced version.
EDIT:
It seems there are two ways: using a mask or idxmax. The idxmax method seems to work if your index is unique, and the mask method if not. In my case the index is not unique, which I forgot to mention in the initial post.
I think you can use idxmax to get the index of the first True value and then set it with loc (idxmax returns the label of the first maximal value, and for a boolean Series the maximum is True):
np.random.seed(1)
df = pd.DataFrame(np.random.randint(4, size=(5,1)))
print (df)
0
0 1
1 3
2 0
3 0
4 3
print ((df[0] == 0).idxmax())
2
df.loc[(df[0] == 0).idxmax(), 0] = 100
print (df)
0
0 1
1 3
2 100
3 0
4 3
df.loc[(df[0] == 3).idxmax(), 0] = 200
print (df)
0
0 1
1 200
2 0
3 0
4 3
EDIT:
Solution with not unique index:
np.random.seed(1)
df = pd.DataFrame(np.random.randint(4, size=(5,1)), index=[1,2,2,3,4])
print (df)
0
1 1
2 3
2 0
3 0
4 3
df = df.reset_index()
df.loc[(df[0] == 3).idxmax(), 0] = 200
df = df.set_index('index')
df.index.name = None
print (df)
0
1 1
2 200
2 0
3 0
4 3
EDIT1:
Solution with MultiIndex:
np.random.seed(1)
df = pd.DataFrame(np.random.randint(4, size=(5,1)), index=[1,2,2,3,4])
print (df)
0
1 1
2 3
2 0
3 0
4 3
df.index = [np.arange(len(df.index)), df.index]
print (df)
0
0 1 1
1 2 3
2 2 0
3 3 0
4 4 3
df.loc[(df[0] == 3).idxmax(), 0] = 200
df = df.reset_index(level=0, drop=True)
print (df)
0
1 1
2 200
2 0
3 0
4 3
EDIT2:
Solution with a double cumsum (the first cumsum is 1 from the first match onward, and the second then increases strictly after it, so mask == 1 matches exactly one row even with a duplicate index):
np.random.seed(1)
df = pd.DataFrame([4,0,4,7,4], index=[1,2,2,3,4])
print (df)
0
1 4
2 0
2 4
3 7
4 4
mask = (df[0] == 0).cumsum().cumsum()
print (mask)
1 0
2 1
2 2
3 3
4 4
Name: 0, dtype: int32
df.loc[mask == 1, 0] = 200
print (df)
0
1 4
2 200
2 4
3 7
4 4
Consider the dataframe df
df = pd.DataFrame(dict(A=[1, 2, 3, 4, 5]))
print(df)
A
0 1
1 2
2 3
3 4
4 5
Create some arbitrary slice slc
slc = df[df.A > 2]
print(slc)
A
2 3
3 4
4 5
Access the first row of slc within df by using index[0] and loc (note that with a non-unique index this would set every row sharing that label):
df.loc[slc.index[0]] = 0
print(df)
A
0 1
1 2
2 0
3 4
4 5
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(6,1),index=[1,2,2,3,3,3])
df[1] = 0
df.columns = ['a', 'b']
df.loc[df['a'] >= 0.5, 'b'] = 1  # .loc avoids the chained-assignment warning
df = df.sort_values(['b', 'a'], ascending=[False, True])  # df.sort() was removed; use sort_values
df.loc[df[df['b'] == 0].index.tolist()[0], 'a'] = 0
In this method no extra copy of the dataframe is created, but an extra column is introduced, which can be dropped after processing. To change any nth item in the slice instead of the first one, change the last line as follows:
df.loc[df[df['b'] == 0].index.tolist()[n], 'a'] = 0
df
a
1 0.111089
2 0.255633
2 0.332682
3 0.434527
3 0.730548
3 0.844724
df after slicing and labelling them
a b
1 0.111089 0
2 0.255633 0
2 0.332682 0
3 0.434527 0
3 0.730548 1
3 0.844724 1
After changing the value of the first item in the slice (labelled 0) to 0:
a b
3 0.730548 1
3 0.844724 1
1 0.000000 0
2 0.255633 0
2 0.332682 0
3 0.434527 0
So, using some of the answers, I managed to find a one-liner way to do this:
np.random.seed(1)
df = pd.DataFrame(np.random.randint(4, size=(5,1)))
print(df)
0
0 1
1 3
2 0
3 0
4 3
df.loc[(df[0] == 0).cumsum()==1,0] = 1
0
0 1
1 3
2 1
3 0
4 3
Essentially this uses the mask inline with a cumsum. One caveat: cumsum() == 1 is also True for any non-zero rows that fall between the first and second zero, so in general you may want to combine it with the original mask.
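A sketch of that combination:
# AND with the mask itself so only the first zero row is selected
first = (df[0] == 0) & ((df[0] == 0).cumsum() == 1)
df.loc[first, 0] = 1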
