dataframe iloc works unexpectedly in pandas - python

I am creating a DataFrame like this:
import numpy as np
import pandas as pd

np.random.seed(2)
df = pd.DataFrame(np.random.randint(1, 6, (6, 6)))
Out[]:
   0  1  2  3  4  5
0  1  1  4  3  4  1
1  3  2  4  3  5  5
2  5  4  5  3  4  4
3  3  2  3  5  4  1
4  5  4  2  3  1  5
5  5  3  5  3  2  1
Splitting the DataFrame into 3x3 matrices like below, there will be 16 matrices:
dfs = []
for col in range(df.shape[1] - 2):
    for row in range(df.shape[0] - 2):
        dfs.append(df.iloc[row:row+3, col:col+3])
Let's print:
dfs[0]
1 1 4
3 2 4
5 4 5
dfs[1]
3 2 4
5 4 5
3 2 3
.
.
.
dfs[15]
5 4 1
3 1 5
3 2 1
Writing a function to change the values at locations [1,0] and [1,2] of each matrix to zero, so that my output looks like:
dfs[0]
1 1 4
0 2 0
5 4 5
def process(x):
    new = []
    for d in x:
        d.iloc[1, 0] = 0
        d.iloc[1, 2] = 0
        new.append(d)
        print(d)
    return new
dfs = process(dfs.copy())
My expected output is:
dfs[0]
1 1 4
0 2 0
5 4 5
but what my function returns is:
dfs[0]
1 1 4
0 0 0
0 0 0
dfs[1]
0 0 0
0 0 0
0 0 0
It produces more zeros in all the matrices. I don't know why it is behaving unexpectedly or what I am doing wrong in my process function. Please help. Thanks.

Long story short, you are a victim of chained indexing, which can lead to bad things happening.
When you slice the original DataFrame, you get overlapping views.
Modifying one changes the others too, since the second row of one chunk is the first row of another, the third row of the first chunk is the first row of yet another, and so on. This is why you see non-zero values only at the "edges": those positions are unique to a single chunk.
You can make copies of each slice, like this:
def process(x):
    new = []
    for d in x:
        d = d.copy()  # each one is now an independent copy
        d.iloc[1, 0] = 0
        d.iloc[1, 2] = 0
        new.append(d)
    return new
Lastly, note that dfs = process(dfs) is actually fine; you don't need to make a copy of the enclosing list.
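If you want to confirm the overlap yourself, NumPy can check whether two arrays share memory. A minimal sketch (np.shares_memory is a NumPy utility, not part of the answer above; exact results can differ under pandas' copy-on-write mode):

import numpy as np

# Uncopied slices share their buffer with the parent frame, so a write to
# one chunk shows up in the others.
print(np.shares_memory(dfs[0].values, df.values))      # True for a view
print(np.shares_memory(dfs[0].values, dfs[1].values))  # overlapping chunks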

Change your slicing loop and the process call as below to get your required output. I used copy() inside the for loop so that each subset DataFrame is independent of future changes; in your version, the changes propagate to the original df and are reflected as all zeros across the other entries of dfs:
for col in range(df.shape[1] - 2):
    for row in range(df.shape[0] - 2):
        dfs.append(df.iloc[row:row+3, col:col+3].copy())
dfs = process(dfs)
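As a quick sanity check (a sketch assuming the loop above, with .copy(), has been rerun), the original frame should now survive processing untouched:

before = df.copy()
dfs = process(dfs)        # zero out [1, 0] and [1, 2] in every chunk
assert df.equals(before)  # df is unchanged: each chunk owns its own data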

Related

Duplicate a single row at index?

In the past hour I was searching here and couldn't find a very simple thing I need to do: duplicate a single row at index x and put it at index x+1.
df
a b
0 3 8
1 2 4
2 9 0
3 5 1
Copy index 2 and insert it as-is in the next row:
a b
0 3 8
1 2 4
2 9 0
3 9 0 # new row
4 5 1
What I tried is concat (with my own column names), which makes a mess:
line = pd.DataFrame({"date": date, "event": None}, index=[index+1])
return pd.concat([df.iloc[:index], line, df.iloc[index:]]).reset_index(drop=True)
How do I simply duplicate a full row at a given index?
You can use repeat(). Fill in the dictionary with the index as the key and how many extra rows you would like to add as the value. This works for multiple values too.
d = {2: 1}
df.loc[df.index.repeat((df.index.map(d).fillna(0) + 1).astype(int))].reset_index()
Output:
index a b
0 0 3 8
1 1 2 4
2 2 9 0
3 2 9 0
4 3 5 1
Got it:
df.loc[index + 0.5] = df.loc[index].values
return df.sort_index().reset_index(drop=True)
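Applied to the sample frame, that trick looks like this (a minimal sketch with index = 2 hard-coded for illustration):

import pandas as pd

df = pd.DataFrame({'a': [3, 2, 9, 5], 'b': [8, 4, 0, 1]})
index = 2
df.loc[index + 0.5] = df.loc[index].values   # insert "between" labels 2 and 3
df = df.sort_index().reset_index(drop=True)  # labels 0, 1, 2, 2.5, 3 -> 0..4
print(df)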

How to drop row with bracket in Pandas

I would like to drop the rows containing [] in a given df:
df = pd.DataFrame(dict(a=[1, 2, 4, [], 5]))
Such that the expected output will be
a
0 1
1 2
2 4
3 5
Edit:
Or, to make things more interesting, what if we have two columns and some of the cells contain [] to be dropped?
df = pd.DataFrame(dict(a=[1, 2, 4, [], 5], b=[2, [], 1, [], 6]))
One way is to get the string repr and filter:
df = df[df['a'].map(repr) != '[]']
Output:
a
0 1
1 2
2 4
4 5
For multiple columns, we could apply the above:
out = df[df.apply(lambda c: c.map(repr)).ne('[]').all(axis=1)]
Output:
a b
0 1 2
2 4 1
4 5 6
You can't use equality directly as pandas will try to align a Series and a list, but you can use isin:
df[~df['a'].isin([[]])]
output:
a
0 1
1 2
2 4
4 5
To act on all columns:
df[~df.isin([[]]).any(axis=1)]
output:
a b
0 1 2
2 4 1
4 5 6
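A different angle, offered as my own sketch rather than part of the answers above: since the offending cells are genuine list objects, you can test element types directly instead of comparing string representations. Note that this drops rows containing any list-valued cell, empty or not:

# Boolean frame: True wherever a cell is a list
is_list = df.apply(lambda c: c.map(lambda v: isinstance(v, list)))
out = df[~is_list.any(axis=1)]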

How to create new column in Pandas dataframe where each row is product of previous rows

I have the following DataFrame dt:
a
0 1
1 2
2 3
3 4
4 5
How do I create a new column where each row is a function of previous rows?
For instance, say the formula is:
B_row(t) = A_row(t-1) + A_row(t-2) + 3
Such that:
a b
0 1 /
1 2 /
2 3 6
3 4 8
4 5 10
Also, I hear a lot that we shouldn't loop through rows in pandas; however, it seems to me that I would have to loop through each row in a sort of recursive loop, as I would in regular Python.
For the running product in the title, you could use cumprod:
dt['b'] = dt['a'].cumprod()
Output:
a b
0 1 1
1 2 2
2 3 6
3 4 24
4 5 120
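For the exact formula in the question, b(t) = a(t-1) + a(t-2) + 3, no loop is needed either; shift keeps it vectorized (a sketch based on the question's example, not part of the answer above):

dt['b'] = dt['a'].shift(1) + dt['a'].shift(2) + 3
#    a     b
# 0  1   NaN
# 1  2   NaN
# 2  3   6.0
# 3  4   8.0
# 4  5  10.0

A genuinely recursive formula, where b(t) depends on earlier values of b itself, is the one case where a plain Python loop (or numba) is usually unavoidable.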

Function for DataFrame operation using variables in the list with Python

I have a list list = ['OUT', 'IN'] where every element of the list is a variable name in the DataFrame with the suffixes _3M, _6M, _9M, _15M attached to it.
List:
list = ['OUT', 'IN']
Input_df:
ID OUT_3M OUT_6M OUT_9M OUT_15M IN_3M IN_6M IN_9M IN_15M
A 2 3 4 6 2 3 4 6
B 3 3 5 7 3 3 5 7
C 2 3 6 6 2 3 6 6
D 3 3 7 7 3 3 7 7
What I am trying to do is subtract:
1. OUT_3M from OUT_6M, entering the result into a separate column as Out_3M-6M
2. OUT_6M from OUT_9M, entering the result into a separate column as Out_6M-9M
3. OUT_9M from OUT_15M, entering the result into a separate column as Out_9M-15M
The same repeats for every element in the list, while keeping the OUT_3M and IN_3M columns, as shown in the sample Output_df below.
Output_df:
ID Out_3M Out_3M-6M Out_6M-9M Out_9M-15M IN_3M IN_3M-6M IN_6M-9M IN_9M-15M
A 2 1 1 2 2 1 1 2
B 3 0 2 2 3 0 2 2
C 2 1 3 0 2 1 3 0
D 3 0 4 0 3 0 4 0
There are many elements in the list on which I need to perform this operation. Is there any way I could solve this by writing a function? Thanks!
I'm not sure what you mean by writing a function; aren't a couple of for loops enough for what you want to do? Something like:
postfixes = ['3M', '6M', '9M', '15M']
prefixes = ['IN', 'OUT']

# Allocate the space, while also copying the _3M columns
output_df = input_df.copy()

# Rename, e.g. OUT_6M -> OUT_3M-6M
output_df.rename(columns={'_'.join((prefix, postfixes[i])): '_'.join((prefix, postfixes[i-1] + '-' + postfixes[i]))
                          for prefix in prefixes for i in range(1, len(postfixes))},
                 inplace=True)

# Compute the differences: each horizon minus the one before it
for prefix in prefixes:
    for i in range(1, len(postfixes)):
        postfix = postfixes[i-1] + '-' + postfixes[i]
        output_df['_'.join((prefix, postfix))] = (input_df['_'.join((prefix, postfixes[i]))].values
                                                  - input_df['_'.join((prefix, postfixes[i-1]))].values)
The output_df starts as a copy of input_df, both to avoid dealing with the _3M case separately and to pre-allocate the DataFrame instead of creating the columns one at a time (it doesn't matter at this size, but with thousands of columns the latter would waste time moving data around in memory).
Also, you should avoid calling a list "list", or you're going to get some nasty-to-find bugs along the way, for example when you later try to convert a tuple to a list!
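For reference, here is a self-contained sketch of the same idea on the question's sample data; input_df is reconstructed from the table above, and the loop is a condensed variant of the code in this answer:

import pandas as pd

input_df = pd.DataFrame({
    'ID': ['A', 'B', 'C', 'D'],
    'OUT_3M': [2, 3, 2, 3], 'OUT_6M': [3, 3, 3, 3],
    'OUT_9M': [4, 5, 6, 7], 'OUT_15M': [6, 7, 6, 7],
    'IN_3M': [2, 3, 2, 3], 'IN_6M': [3, 3, 3, 3],
    'IN_9M': [4, 5, 6, 7], 'IN_15M': [6, 7, 6, 7],
})

postfixes = ['3M', '6M', '9M', '15M']
prefixes = ['OUT', 'IN']

# Keep ID and the _3M columns, then add one difference column per adjacent pair
output_df = input_df[['ID'] + [f'{p}_3M' for p in prefixes]].copy()
for p in prefixes:
    for a, b in zip(postfixes, postfixes[1:]):
        # later horizon minus the one before it, e.g. OUT_6M - OUT_3M -> OUT_3M-6M
        output_df[f'{p}_{a}-{b}'] = input_df[f'{p}_{b}'] - input_df[f'{p}_{a}']
print(output_df)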

computing sum of pandas dataframes

I have two dataframes that I want to add bin-wise. That is, given
dfc1 = pd.DataFrame(list(zip(range(10),np.zeros(10))), columns=['bin', 'count'])
dfc2 = pd.DataFrame(list(zip(range(0,10,2), np.ones(5))), columns=['bin', 'count'])
which gives me this
dfc1:
bin count
0 0 0
1 1 0
2 2 0
3 3 0
4 4 0
5 5 0
6 6 0
7 7 0
8 8 0
9 9 0
dfc2:
bin count
0 0 1
1 2 1
2 4 1
3 6 1
4 8 1
I want to generate this:
bin count
0 0 1
1 1 0
2 2 1
3 3 0
4 4 1
5 5 0
6 6 1
7 7 0
8 8 1
9 9 0
where I've added the count columns where the bin columns matched.
In fact, it turns out that I only ever add 1 (that is, count in dfc2 is always 1). So an alternate version of the question is "given an array of bin values (dfc2.bin), how can I add one to each of their corresponding count values in dfc1?"
My only solution thus far feels grossly inefficient (and slightly unreadable): doing an outer join between the two bin columns, creating a third DataFrame on which I do the computation, and then projecting out the unneeded column.
Suggestions?
First set bin to be the index in both DataFrames; then you can use add. fill_value is needed to specify that zero should be used when a bin is missing from one of the DataFrames:
dfc1 = dfc1.set_index('bin')
dfc2 = dfc2.set_index('bin')
result = dfc1.add(dfc2, fill_value=0)
Pandas automatically sums up rows with equal index.
By the way, if you need to perform such an operation frequently, I strongly recommend numpy.bincount, which even allows repeated bin indices within one DataFrame. See the sketch below.
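A sketch of that approach, assuming integer bins 0..N-1 laid out in order with a default RangeIndex, as in the example:

import numpy as np

# Count occurrences of each bin in dfc2, then add them to dfc1 positionally.
counts = np.bincount(dfc2['bin'], minlength=len(dfc1))
dfc1['count'] += counts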
Since the dfc1 index is the same as your "bin" value, you could simply do the following:
dfc1.loc[dfc2['bin'], 'cnt'] += 1
Notice that I renamed your "count" column to "cnt", since count is a pandas built-in, which can cause confusion and errors! Note also the single .loc call with both the row labels and the column: a chained version like dfc1.iloc[dfc2.bin].cnt += 1 would modify a temporary copy and leave dfc1 untouched.
As an alternative to @Alleo's answer, you can use the method combineAdd to simply add the two DataFrames together while calling set_index at the same time, provided that their indexes are matched by bin (note that combineAdd was removed in later pandas versions; add with fill_value=0, as above, is the modern equivalent):
dfc1.set_index('bin').combineAdd(dfc2.set_index('bin')).reset_index()
bin count
0 0 1
1 1 0
2 2 1
3 3 0
4 4 1
5 5 0
6 6 1
7 7 0
8 8 1
9 9 0
