Pandas Lag over multiple columns and set number of iterations - python

I have a dataframe like below:
df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
I would like to apply the pandas shift function to shift each column 4 times and create a new row for each shift:
col1  col1.lag0  col1.lag1  col1.lag2  col1.lag3  col2  col2.lag0  col2.lag1  col2.lag2  col2.lag3
   1          0          0          0          0     3          0          0          0          0
   2          1          0          0          0     4          3          0          0          0
   0          2          1          0          0     0          4          3          0          0
   0          0          2          1          0     0          0          4          3          0
   0          0          0          2          1     0          0          0          4          3
I have tried a few solutions with shift, like df['col1'].shift().fillna(0); however, I am not sure how to iterate the solution, nor how to ensure the correct number of rows is added to the dataframe.

First I extend the given DataFrame by the correct number of rows with zeros. Then I iterate over the columns and the number of shifts to create the desired columns.
import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
n_shifts = 4

# append n_shifts rows of zeros (pd.concat, since df.append was removed in pandas 2.0)
zero_rows = pd.DataFrame(0, index=pd.RangeIndex(n_shifts), columns=df.columns)
df = pd.concat([df, zero_rows]).reset_index(drop=True)

for col in list(df.columns):
    for shift_amount in range(1, n_shifts + 1):
        df[f"{col}.lag{shift_amount}"] = df[col].shift(shift_amount)

df = df.fillna(0).astype(int)
As pointed out by Ben.T, the outer loop can be avoided, since shift can be applied at once to the whole DataFrame. An alternative to the double loop would be
shifts = df
for shift_amount in range(1, n_shifts + 1):
    columns = df.columns + ".lag" + str(shift_amount)
    shifted = pd.DataFrame(df.shift(shift_amount).to_numpy(), columns=columns)
    shifts = shifts.join(shifted)
shifts = shifts.fillna(0).astype(int)

Related

Split a column into multiple columns that has value as list

I have a problem splitting a column into multiple columns.
Column B contains lists of values.
I want to split the values of column B into separate columns, as in the expected output below, where each cell holds the number of occurrences of that value in the row's list.
input:
A B
a [1, 2]
b [3, 4, 5]
c [1, 5]
expected output:
A 1 2 3 4 5
a 1 1 0 0 0
b 0 0 1 1 1
c 1 0 0 0 1
You can explode the column of lists and use crosstab:
df2 = df.explode('B')
out = pd.crosstab(df2['A'], df2['B']).reset_index().rename_axis(columns=None)
output:
A 1 2 3 4 5
0 a 1 1 0 0 0
1 b 0 0 1 1 1
2 c 1 0 0 0 1
used input:
df = pd.DataFrame({'A': list('abc'), 'B': [[1,2], [3,4,5], [1,5]]})
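If crosstab feels opaque, a pivot_table over the exploded frame gives the same counts. A sketch with the same input df as above, where the helper column n is just an occurrence marker:

```python
import pandas as pd

df = pd.DataFrame({'A': list('abc'), 'B': [[1, 2], [3, 4, 5], [1, 5]]})

# Explode the lists, mark each occurrence with 1, then pivot to a wide count table
out = (df.explode('B')
         .assign(n=1)
         .pivot_table(index='A', columns='B', values='n',
                      aggfunc='sum', fill_value=0)
         .reset_index()
         .rename_axis(columns=None))
print(out)
```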

Pandas DataFrame replace negative values with latest preceding positive value

Consider a DataFrame such as
df = pd.DataFrame({'a': [1, -2, 0, 3, -1, 2],
                   'b': [-1, -2, -5, -7, -1, -1],
                   'c': [-1, -2, -5, 4, 5, 3]})
For each column, how can I replace any negative value with the last positive value or zero? "Last" here refers to reading from top to bottom within each column. The closest solution I have found is, for instance, df[df < 0] = 0.
The expected result would be a DataFrame such as
df_res = pd.DataFrame({'a': [1, 1, 0, 3, 3, 2],
                       'b': [0, 0, 0, 0, 0, 0],
                       'c': [0, 0, 0, 4, 5, 3]})
You can use DataFrame.mask to convert all values < 0 to NaN then use ffill and fillna:
df = df.mask(df.lt(0)).ffill().fillna(0).convert_dtypes()
a b c
0 1 0 0
1 1 0 0
2 0 0 0
3 3 0 4
4 3 0 5
5 2 0 3
Use pandas where. Note that gt(0) also masks zeros, so a literal 0 is forward-filled over (compare row 2 of column a with the previous output):
df.where(df.gt(0)).ffill().fillna(0).astype(int)
a b c
0 1 0 0
1 1 0 0
2 1 0 0
3 3 0 4
4 3 0 5
5 2 0 3
The expected result may be obtained with these manipulations:
import numpy as np

mask = df >= 0                    # boolean mask for non-negative values
df_res = (df.where(mask, np.nan)  # replace negative values with NaN
            .ffill()              # forward-fill over the NaNs
            .fillna(0))           # fill the remaining NaNs with zeros

Create dummy variable of multiple columns with python

I am working with a dataframe containing two columns of ID numbers. For further research I want to make a sort of dummy variables from these ID numbers, combining both columns. My code, however, does not merge the indicator columns produced for the two ID columns. How can I merge them and create the dummy variables?
Dataframe
import pandas as pd
import numpy as np
d = {'ID1': [1,2,3], 'ID2': [2,3,4]}
df = pd.DataFrame(data=d)
Current code
pd.get_dummies(df, prefix = ['ID1', 'ID2'], columns=['ID1', 'ID2'])
Desired output
p = {'1': [1,0,0], '2': [1,1,0], '3': [0,1,1], '4': [0,0,1]}
df2 = pd.DataFrame(data=p)
df2
If you need indicators in the output use max, and if you need counts use sum, after get_dummies with different parameters and casting the values to strings:
df = (pd.get_dummies(df.astype(str), prefix='', prefix_sep='', dtype=int)
        .T.groupby(level=0).max().T)
#count alternative
#df = (pd.get_dummies(df.astype(str), prefix='', prefix_sep='', dtype=int)
#        .T.groupby(level=0).sum().T)
print (df)
   1  2  3  4
0  1  1  0  0
1  0  1  1  0
2  0  0  1  1
Different ways of skinning a cat; here's how I'd do it, using an additional groupby on the column labels:
# pd.get_dummies(df.astype(str), dtype=int).T.groupby(lambda x: x.split('_')[1]).sum().T
pd.get_dummies(df.astype(str), dtype=int).T.groupby(lambda x: x.split('_')[1]).max().T
   1  2  3  4
0  1  1  0  0
1  0  1  1  0
2  0  0  1  1
Another option is stacking, if you like conciseness:
# pd.get_dummies(df.stack(), dtype=int).groupby(level=0).sum()
pd.get_dummies(df.stack(), dtype=int).groupby(level=0).max()
   1  2  3  4
0  1  1  0  0
1  0  1  1  0
2  0  0  1  1
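Yet another route, sketched for the same assumed input: stack the two ID columns into one long Series and cross-tabulate the values against the original row positions:

```python
import pandas as pd

df = pd.DataFrame({'ID1': [1, 2, 3], 'ID2': [2, 3, 4]})

# One long Series of IDs, keyed by the original row position (index level 0)
s = df.stack()
out = (pd.crosstab(s.index.get_level_values(0), s)
         .rename_axis(index=None, columns=None))
print(out)
```

crosstab counts occurrences, which here coincides with the 0/1 indicators since no ID repeats within a row.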

Extracting data from two dataframes to create a third

I am using Python Pandas for the following. I have three dataframes, df1, df2 and df3. Each has the same dimensions, index and column labels. I would like to create a fourth dataframe that takes elements from df1 or df2 depending on the values in df3:
df1 = pd.DataFrame(np.random.randn(4, 2), index=list('0123'), columns=['A', 'B'])
df1
Out[67]:
A B
0 1.335314 1.888983
1 1.000579 -0.300271
2 -0.280658 0.448829
3 0.977791 0.804459
df2 = pd.DataFrame(np.random.randn(4, 2), index=list('0123'), columns=['A', 'B'])
df2
Out[68]:
A B
0 0.689721 0.871065
1 0.699274 -1.061822
2 0.634909 1.044284
3 0.166307 -0.699048
df3 = pd.DataFrame({'A': [1, 0, 0, 1], 'B': [1, 0, 1, 0]})
df3
Out[69]:
A B
0 1 1
1 0 0
2 0 1
3 1 0
The new dataframe, df4, has the same index and column labels and takes an element from df1 if the corresponding value in df3 is 1. It takes an element from df2 if the corresponding value in df3 is a 0.
I need a solution that uses generic references (e.g. ix or iloc) rather than actual column labels and index values because my dataset has fifty columns and four hundred rows.
As your DataFrames happen to be numeric, and the selector matrix happens to consist of indicator variables, you can do the following:
>>> pd.DataFrame(
...     df1.to_numpy() * df3.to_numpy() + df2.to_numpy() * (1 - df3.to_numpy()),
...     index=df1.index,
...     columns=df1.columns)
I tried it and it works. Strangely enough, @Yakym Pirozhenko's answer, which I think is superior, does not work for me.
df4 = df1.where(df3.astype(bool), df2) should do it.
import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.random.randint(10, size = (4,2)))
df2 = pd.DataFrame(np.random.randint(10, size = (4,2)))
df3 = pd.DataFrame(np.random.randint(2, size = (4,2)))
df4 = df1.where(df3.astype(bool), df2)
print(df1, '\n')
print(df2, '\n')
print(df3, '\n')
print(df4, '\n')
Output:
0 1
0 0 3
1 8 8
2 7 4
3 1 2
0 1
0 7 9
1 4 4
2 0 5
3 7 2
0 1
0 0 0
1 1 0
2 1 1
3 1 0
0 1
0 7 9
1 8 4
2 7 4
3 1 2
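The same selection can also be written with NumPy directly. A sketch with hypothetical numeric frames, assuming df1, df2, and df3 share shape and labels as in the question:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'A': [1.0, 2.0, 3.0, 4.0], 'B': [5.0, 6.0, 7.0, 8.0]})
df2 = pd.DataFrame({'A': [-1.0, -2.0, -3.0, -4.0], 'B': [-5.0, -6.0, -7.0, -8.0]})
df3 = pd.DataFrame({'A': [1, 0, 0, 1], 'B': [1, 0, 1, 0]})

# Pick from df1 where the indicator is 1, otherwise from df2
df4 = pd.DataFrame(np.where(df3.astype(bool), df1, df2),
                   index=df1.index, columns=df1.columns)
print(df4)
```

Unlike the multiplication trick, np.where also works for non-numeric frames.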

Pandas: set the value of a column in a row to be the value stored in a different df at the index of its other rows

>>> df
0 1
0 0 0
1 1 1
2 2 1
>>> df1
0 1 2
0 A B C
1 D E F
>>> crazy_magic()
>>> df
0 1 3
0 0 0 A #df1[0][0]
1 1 1 E #df1[1][1]
2 2 1 F #df1[2][1]
Is there a way to achieve this without a for loop?
import pandas as pd
df = pd.DataFrame([[0,0],[1,1],[2,1]])
df1 = pd.DataFrame([['A', 'B', 'C'],['D', 'E', 'F']])
df2 = df1.reset_index(drop=False)
# index 0 1 2
# 0 0 A B C
# 1 1 D E F
df3 = pd.melt(df2, id_vars=['index'])
# index variable value
# 0 0 0 A
# 1 1 0 D
# 2 0 1 B
# 3 1 1 E
# 4 0 2 C
# 5 1 2 F
result = pd.merge(df, df3, left_on=[0,1], right_on=['variable', 'index'])
result = result[[0, 1, 'value']]
print(result)
yields
0 1 value
0 0 0 A
1 1 1 E
2 2 1 F
My reasoning goes as follows:
We want to use two columns of df as coordinates.
The word "coordinates" reminds me of pivot, since
if you have two columns whose values represent "coordinates" and a third
column representing values, and you want to convert that to a grid, then
pivot is the tool to use.
But df does not have a third column of values. The values are in df1. In fact df1 looks like the result of a pivot operation. So instead of pivoting df, we want to unpivot df1.
pd.melt is the function to use when you want to unpivot.
So I tried melting df1. Comparison with other uses of pd.melt led me to conclude df1 needed the index as a column. That's the reason for defining df2. So we melt df2.
Once you get that far, visually comparing df3 to df leads you naturally to the use of pd.merge.
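For completeness, the same lookup can be done without melt/merge via NumPy fancy indexing: since df1[c][r] is the value at row r, column c, index df1's array with df's column 1 as the row coordinate and column 0 as the column coordinate (same assumed frames as above):

```python
import pandas as pd

df = pd.DataFrame([[0, 0], [1, 1], [2, 1]])
df1 = pd.DataFrame([['A', 'B', 'C'], ['D', 'E', 'F']])

# df1[c][r] is df1.iloc[r, c], so rows come from df[1] and columns from df[0]
df['value'] = df1.to_numpy()[df[1].to_numpy(), df[0].to_numpy()]
print(df)
```

This avoids building the intermediate melted frame, at the cost of assuming the coordinates in df are all valid positions in df1.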
