Shifting down rows of specific columns from a specific index in Python

I am scraping multiple tables from multiple pages of a website. The issue is that there is a row missing from the initial table. Basically, this is how the two dataframes look (first page, then a subsequent page):

             mar2018  feb2018  jan2018  dec2017  nov2017
balls faced      345      561      295        0      645
runs scored      156      281      183        0      389
strike rate     52.3     42.6     61.1        0     52.2
dot balls        223      387      173        0      476
fours              8       12       19        0       22
doubles           20       38       16        0       36
notout             2        0        0        0        4

             oct2017  sep2017  aug2017
balls faced      200       58        0
runs scored       50       20        0
strike rate       25       34        0
dot balls        125       34        0
sixes              2        0        0
fours              4        2        0
doubles            2        0        0
notout             4        2        0
The row 'sixes' is missing from the first page but present on the subsequent pages. So I am trying to move the rows from 'fours' through 'notout' down by one position and leave NaNs in row 4 of the first table's five columns (mar2018 through nov2017).
I tried the following code, but it isn't working; it moves the values horizontally rather than vertically downward.
df.iloc[4][0:6] = df.iloc[4][0:6].shift(1)
and also
df2 = pd.DataFrame(index=[4])
df = pd.concat([df.iloc[:4], df2, df.iloc[4:]]).reset_index(drop=True)
did not work.
df['mar2018'] = df['mar2018'].shift(1)
But this moves all the values of that column down by 1 row.
So, I was wondering if it is possible to shift down rows of specific columns from a specific index?

I think you need to reindex by the union of all index values, using numpy.union1d:
import numpy as np

idx = np.union1d(df1.index, df2.index)
df1 = df1.reindex(idx)
df2 = df2.reindex(idx)
print(df1)
mar2018 feb2018 jan2018 dec2017 nov2017
balls faced 345.0 561.0 295.0 0.0 645.0
dot balls 223.0 387.0 173.0 0.0 476.0
doubles 20.0 38.0 16.0 0.0 36.0
fours 8.0 12.0 19.0 0.0 22.0
notout 2.0 0.0 0.0 0.0 4.0
runs scored 156.0 281.0 183.0 0.0 389.0
sixes NaN NaN NaN NaN NaN
strike rate 52.3 42.6 61.1 0.0 52.2
print(df2)
oct2017 sep2017 aug2017
balls faced 200 58 0
dot balls 125 34 0
doubles 2 0 0
fours 4 2 0
notout 4 2 0
runs scored 50 20 0
sixes 2 0 0
strike rate 25 34 0
If you have multiple DataFrames in a list, you can use functools.reduce with a list comprehension:
from functools import reduce
dfs = [df1, df2]
idx = reduce(np.union1d, [x.index for x in dfs])
dfs1 = [df.reindex(idx) for df in dfs]
print(dfs1)
[ mar2018 feb2018 jan2018 dec2017 nov2017
balls faced 345.0 561.0 295.0 0.0 645.0
dot balls 223.0 387.0 173.0 0.0 476.0
doubles 20.0 38.0 16.0 0.0 36.0
fours 8.0 12.0 19.0 0.0 22.0
notout 2.0 0.0 0.0 0.0 4.0
runs scored 156.0 281.0 183.0 0.0 389.0
sixes NaN NaN NaN NaN NaN
strike rate 52.3 42.6 61.1 0.0 52.2, oct2017 sep2017 aug2017
balls faced 200 58 0
dot balls 125 34 0
doubles 2 0 0
fours 4 2 0
notout 4 2 0
runs scored 50 20 0
sixes 2 0 0
strike rate 25 34 0]
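Note that np.union1d returns the labels sorted alphabetically, which is why 'sixes' lands between 'runs scored' and 'strike rate' above. If you would rather keep the original row order, and df2 already contains every label, a simpler option is to align df1 directly to df2's index. A sketch, not part of the original answer:

# df2's index is a superset of df1's and is already in the desired order,
# so reindexing inserts the missing 'sixes' row as NaN without re-sorting
df1 = df1.reindex(df2.index)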

Related

Pandas - new column based on other row

I want to create a new column in my dataframe with the value of another row.
DataFrame
TimeStamp Event Value
0 1603822620000 1 102.0
1 1603822680000 1 108.0
2 1603822740000 1 107.0
3 1603822800000 2 1
4 1603823040000 1 106.0
5 1603823100000 2 0
6 1603823160000 2 1
7 1603823220000 1 105.0
I would like to add a new column with the previous value where Event = 1.
TimeStamp Event Value PrevValue
0 1603822620000 1 102.0 NaN
1 1603822680000 1 108.0 102.0
2 1603822740000 1 107.0 108.0
3 1603822800000 2 1 107.0
4 1603823040000 1 106.0 107.0
5 1603823100000 2 0 106.0
6 1603823160000 2 1 106.0
7 1603823220000 1 105.0 106.0
So I can't simply use shift(1), and groupby('Event').shift(1) doesn't work either.
Current solution
df["PrevValue"] =df.timestamp.apply(lambda ts: (df[(df.Event == 1) & (df.timestamp < ts)].iloc[-1].value))
But I guess, that's not the best solution.
Is there something like shiftUntilCondition(condition)?
Thanks a lot!
Try with
df['new'] = df['Value'].where(df['Event']==1).ffill().shift()
Out[83]:
0 NaN
1 102.0
2 108.0
3 107.0
4 107.0
5 106.0
6 106.0
7 106.0
Name: Value, dtype: float64
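For context, the chain works in three steps: where() keeps Value only on rows where Event is 1 and blanks the rest, ffill() carries the last such value forward, and shift() moves everything down one row so each row sees the previous value. A minimal, self-contained reproduction (data taken from the question):

import pandas as pd

df = pd.DataFrame({
    'TimeStamp': [1603822620000, 1603822680000, 1603822740000, 1603822800000,
                  1603823040000, 1603823100000, 1603823160000, 1603823220000],
    'Event': [1, 1, 1, 2, 1, 2, 2, 1],
    'Value': [102.0, 108.0, 107.0, 1, 106.0, 0, 1, 105.0],
})

# keep Value only where Event == 1, carry it forward, then shift down one row
df['PrevValue'] = df['Value'].where(df['Event'] == 1).ffill().shift()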

Forward fill missing values by group after condition is met in pandas

I'm having a bit of trouble with this. My dataframe looks like this:
id amount dummy
1 130 0
1 120 0
1 110 1
1 nan nan
1 nan nan
2 nan 0
2 50 0
2 20 1
2 nan nan
2 nan nan
So, what I need to do is, after the dummy gets value = 1, I need to fill the amount variable with zeroes for each id, like this:
id amount dummy
1 130 0
1 120 0
1 110 1
1 0 nan
1 0 nan
2 nan 0
2 50 0
2 20 1
2 0 nan
2 0 nan
I'm guessing I'll need some combination of groupby('id'), fillna(method='ffill'), maybe a .loc or a shift(), but everything I tried has had some problem or is very slow. Any suggestions?
The way I would use:
s = df.groupby('id')['dummy'].ffill().eq(1)
df.loc[s & df['dummy'].isna(), 'amount'] = 0
You can do this much more easily:
data.loc[data['dummy'].isna(), 'amount'] = 0
This selects all the rows where dummy is NaN and fills the amount column with 0. (Note that the chained form data[data['dummy'].isna()]['amount'] = 0 assigns to a temporary copy and leaves data unchanged.)
IIUC, ffill() and then mask the values that are still NaN:
s = df.groupby('id')['amount'].ffill().notnull()
df.loc[df['amount'].isna() & s, 'amount'] = 0
Output:
id amount dummy
0 1 130.0 0.0
1 1 120.0 0.0
2 1 110.0 1.0
3 1 0.0 NaN
4 1 0.0 NaN
5 2 NaN 0.0
6 2 50.0 0.0
7 2 20.0 1.0
8 2 0.0 NaN
9 2 0.0 NaN
Could you please try the following.
df.loc[df['dummy'].isnull(), 'amount'] = 0
df
Output will be as follows.
id amount dummy
0 1 130.0 0.0
1 1 120.0 0.0
2 1 110.0 1.0
3 1 0.0 NaN
4 1 0.0 NaN
5 2 NaN 0.0
6 2 50.0 0.0
7 2 20.0 1.0
8 2 0.0 NaN
9 2 0.0 NaN
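One caveat about the simpler dummy-based answers: they rely on dummy being NaN exactly on the rows that should be zeroed. That holds for this example (row 5 has amount NaN but dummy 0, so it is correctly left alone), but the groupby version is the safer choice if dummy can be missing for other reasons.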

Python split a column into multiple columns based on ranges

I have a column in my dataframe and it has values between 2100 and 8000. I want to split this column into multiple columns of intervals of 500. So let me show you by example:
column
2100
2105
2119
.
8000
I want to split it like this, with each column holding one interval's worth of values:
column1  column2  column3  ...  columnN
2100     2601     3102     ...
2101     2602     3103     ...
...      ...      ...      ...
2600     3101     3602     ...  8000
Please suggest a solution.
Here's one approach using pd.cut and DataFrame.pivot:
df = pd.DataFrame(list(range(2100, 8000+1)), columns=['column'])
# create the bins to be used in pd.cut
bins = list(range(df.column.min(), df.column.max()+50, 50))
# [2100, 2150, 2200, 2250, 2300, ...]
# Create the labels for pd.cut, which will be used as column names
labels = [f'column{i}' for i in range(len(bins)-1)]
# ['column0', 'column1', 'column2', 'column3', 'column4', ...
df['bins'] = pd.cut(df.column, bins, labels=labels, include_lowest=True)
Which will give you:
column bins
0 2100 column0
1 2101 column0
2 2102 column0
3 2103 column0
4 2104 column0
5 2105 column0
6 2106 column0
7 2107 column0
8 2108 column0
And now use pivot to obtain the final result. groupby('bins').cumcount() numbers the values within each bin (0, 1, 2, ...), and using it as the pivot index is what lines the bins up as equal-length columns:
ix = df.groupby('bins').column.cumcount()
df.pivot(columns='bins', index=ix).fillna(0)
bins column0 column1 column2 column3 column4 column5 column6 column7 column8 ...
0 2100.0 2151.0 2201.0 2251.0 2301.0 2351.0 2401.0 2451.0 2501.0
1 2101.0 2152.0 2202.0 2252.0 2302.0 2352.0 2402.0 2452.0 2502.0
2 2102.0 2153.0 2203.0 2253.0 2303.0 2353.0 2403.0 2453.0 2503.0
3 2103.0 2154.0 2204.0 2254.0 2304.0 2354.0 2404.0 2454.0 2504.0
4 2104.0 2155.0 2205.0 2255.0 2305.0 2355.0 2405.0 2455.0 2505.0
5 2105.0 2156.0 2206.0 2256.0 2306.0 2356.0 2406.0 2456.0 2506.0
6 2106.0 2157.0 2207.0 2257.0 2307.0 2357.0 2407.0 2457.0 2507.0
7 2107.0 2158.0 2208.0 2258.0 2308.0 2358.0 2408.0 2458.0 2508.0
8 2108.0 2159.0 2209.0 2259.0 2309.0 2359.0 2409.0 2459.0 2509.0
9 2109.0 2160.0 2210.0 2260.0 2310.0 2360.0 2410.0 2460.0 2510.0
10 2110.0 2161.0 2211.0 2261.0 2311.0 2361.0 2411.0 2461.0 2511.0
...
Let's encapsulate it all in a function, and try it with a simpler example to better see how this works:
def binning_and_pivot(df, bin_size):
    bins = list(range(df.column.min(), df.column.max() + bin_size, bin_size))
    labels = [f'column{i}' for i in range(len(bins) - 1)]
    df['bins'] = pd.cut(df.column, bins, labels=labels, include_lowest=True)
    ix = df.groupby('bins').column.cumcount()
    return df.pivot(columns='bins', index=ix).fillna(0)
df = pd.DataFrame(list(range(100+1)), columns=['column'])
df = df.sample(frac=0.7).reset_index(drop=True)
binning_and_pivot(df, bin_size=10)
bins column0 column1 column2 column3 column4 column5 column6 column7 column8
0 2.0 16.0 32.0 39.0 45.0 55.0 69.0 81.0 87.0
1 6.0 21.0 29.0 42.0 46.0 59.0 72.0 76.0 92.0
2 3.0 13.0 31.0 36.0 49.0 61.0 68.0 74.0 91.0
3 12.0 20.0 25.0 41.0 52.0 56.0 70.0 78.0 86.0
4 8.0 17.0 30.0 37.0 43.0 62.0 64.0 73.0 89.0
5 7.0 19.0 27.0 38.0 50.0 53.0 71.0 77.0 83.0
6 0.0 22.0 28.0 0.0 0.0 54.0 65.0 82.0 90.0
7 0.0 18.0 24.0 0.0 0.0 60.0 63.0 80.0 0.0
8 0.0 14.0 26.0 0.0 0.0 0.0 0.0 75.0 0.0
bins column9
0 95.0
1 100.0
2 96.0
3 0.0
4 0.0
5 0.0
6 0.0
7 0.0
8 0.0
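To match the intervals of 500 asked for in the question, you would call binning_and_pivot(df, bin_size=500) on the original data.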
Here's my choice of action. I did it for intervals of 4. NOTE: the number of rows must be evenly divisible by the interval size.
import pandas as pd

df = pd.read_csv(r'Z:\Path\neww.txt', delim_whitespace=True)
didi = df.to_dict()
num = 4  # interval size
dd = {}
for i in range(int(len(didi['column'].items()) / num)):
    dd['col' + str(i)] = dict(list(didi['column'].items())[i*num:num*(i+1)])
print(pd.DataFrame(dd).apply(lambda x: pd.Series(x.dropna().values)))
Input:
column
2100
2100
2100
2100
2100
2100
2100
2100
2100
2100
8000
8000
8000
8000
8000
8000
8000
8000
80
8000
Output:
col0 col1 col2 col3 col4
0 2100.0 2100.0 2100.0 8000.0 8000.0
1 2100.0 2100.0 2100.0 8000.0 8000.0
2 2100.0 2100.0 8000.0 8000.0 80.0
3 2100.0 2100.0 8000.0 8000.0 8000.0
See the documentation of numpy.reshape.
Suppose you extract the data of interest into a NumPy array, say data, whose length is a multiple of 500. Here's a possible solution:
newdata = data.reshape((500, -1), order='F')
With order='F' the array is filled column by column, so each resulting column holds 500 consecutive values.
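If the length of data doesn't divide evenly by 500, one option is to pad it first. A minimal sketch, assuming zero-padding is acceptable (the names data, n, and result are illustrative, not from the original answer):

import numpy as np
import pandas as pd

data = df['column'].to_numpy()
n = 500                                   # values per column
pad = (-len(data)) % n                    # zeros needed to reach a multiple of n
padded = np.append(data, np.zeros(pad))
newdata = padded.reshape((-1, n)).T       # each column holds n consecutive values
result = pd.DataFrame(newdata, columns=[f'column{i+1}' for i in range(newdata.shape[1])])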

How to find last index in Pandas Data Frame row and count backwards using column information?

For example:
If I have a data frame like this:
20 40 60 80 100 120 140
1 1 1 1 NaN NaN NaN NaN
2 1 1 1 1 1 NaN NaN
3 1 1 1 1 NaN NaN NaN
4 1 1 NaN NaN 1 1 1
How do I find the last index in each row and then count the difference in columns elapsed so I get something like this?
20 40 60 80 100 120 140
1 40 20 0 NaN NaN NaN NaN
2 80 60 40 20 0 NaN NaN
3 60 40 20 0 NaN NaN NaN
4 20 0 NaN NaN 40 20 0
You can try reversing each row, cumulatively counting the runs of non-null values, and scaling by the column step of 20. It's a bit of a complex procedure:
def fill_values(row):
    # work backwards from the last column
    row = row[::-1]
    a = row == 1
    b = a.cumsum()
    # cumulative count within each run of 1s, reset at every gap
    return (b - b.mask(a).ffill().fillna(0).astype(int))[::-1] * 20

df.apply(fill_values, axis=1).replace(0, np.nan) - 20
Out:
20 40 60 80 100 120 140
1 40.0 20.0 0.0 NaN NaN NaN NaN
2 80.0 60.0 40.0 20.0 0.0 NaN NaN
3 60.0 40.0 20.0 0.0 NaN NaN NaN
4 20.0 0.0 NaN NaN 40.0 20.0 0.0
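For reference, a minimal setup that reproduces the example frame (values taken from the question):

import numpy as np
import pandas as pd

df = pd.DataFrame(
    [[1, 1, 1, np.nan, np.nan, np.nan, np.nan],
     [1, 1, 1, 1, 1, np.nan, np.nan],
     [1, 1, 1, 1, np.nan, np.nan, np.nan],
     [1, 1, np.nan, np.nan, 1, 1, 1]],
    index=[1, 2, 3, 4],
    columns=[20, 40, 60, 80, 100, 120, 140],
)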

Filling missing rows in groups after groupby

I've got some SQL data that I'm grouping and performing some aggregation on. It works nicely:
import numpy

grouped = df.groupby(['a', 'b'])
agged = grouped.aggregate({
    'c': [numpy.sum, numpy.mean, numpy.size],
    'd': [numpy.sum, numpy.mean, numpy.size],
})
which gives:
c d
sum mean size sum mean size
a b
25 20 107.0 0.804511 133.0 5328000 40060.150376 133
21 110.0 0.774648 142.0 6031000 42471.830986 142
23 126.0 0.792453 159.0 8795000 55314.465409 159
24 72.0 0.947368 76.0 2920000 38421.052632 76
25 54.0 0.818182 66.0 2570000 38939.393939 66
26 23 126.0 0.792453 159.0 8795000 55314.465409 159
but I want to fill all of the rows that are in a=25 but not in a=26 with zeros. In other words, something like:
c d
sum mean size sum mean size
a b
25 20 107.0 0.804511 133.0 5328000 40060.150376 133
21 110.0 0.774648 142.0 6031000 42471.830986 142
23 126.0 0.792453 159.0 8795000 55314.465409 159
24 72.0 0.947368 76.0 2920000 38421.052632 76
25 54.0 0.818182 66.0 2570000 38939.393939 66
26 20 0 0 0 0 0 0
21 0 0 0 0 0 0
23 126.0 0.792453 159.0 8795000 55314.465409 159
24 0 0 0 0 0 0
25 0 0 0 0 0 0
How can I do this?
Consider the dataframe df
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.random.randint(10, size=(6, 6)),
    pd.MultiIndex.from_tuples(
        [(25, 20), (25, 21), (25, 23), (25, 24), (25, 25), (26, 23)],
        names=['a', 'b']
    ),
    pd.MultiIndex.from_product(
        [['c', 'd'], ['sum', 'mean', 'size']]
    )
)
c d
sum mean size sum mean size
a b
25 20 8 3 5 5 0 2
21 3 7 8 9 2 7
23 2 1 3 2 5 4
24 9 0 1 7 1 6
25 1 9 3 5 8 8
26 23 8 8 4 8 0 5
You can quickly recover all missing rows from the cartesian product with unstack(fill_value=0) followed by stack
df.unstack(fill_value=0).stack()
c d
mean size sum mean size sum
a b
25 20 3 5 8 0 2 5
21 7 8 3 2 7 9
23 1 3 2 5 4 2
24 0 1 9 1 6 7
25 9 3 1 8 8 5
26 20 0 0 0 0 0 0
21 0 0 0 0 0 0
23 8 4 8 0 5 8
24 0 0 0 0 0 0
25 0 0 0 0 0 0
Note: Using fill_value=0 preserves the dtype int. Without it, when unstacked, the gaps get filled with NaN and dtypes get converted to float
print(df)
c d
sum mean size sum mean size
a b
25 20 107.0 0.804511 133.0 5328000 40060.150376 133
21 110.0 0.774648 142.0 6031000 42471.830986 142
23 126.0 0.792453 159.0 8795000 55314.465409 159
24 72.0 0.947368 76.0 2920000 38421.052632 76
25 54.0 0.818182 66.0 2570000 38939.393939 66
26 23 126.0 0.792453 159.0 8795000 55314.465409 159
I like:
df = df.unstack().replace(np.nan, 0).stack(-1)
print(df)
c d
mean size sum mean size sum
a b
25 20 0.804511 133.0 107.0 40060.150376 133.0 5328000.0
21 0.774648 142.0 110.0 42471.830986 142.0 6031000.0
23 0.792453 159.0 126.0 55314.465409 159.0 8795000.0
24 0.947368 76.0 72.0 38421.052632 76.0 2920000.0
25 0.818182 66.0 54.0 38939.393939 66.0 2570000.0
26 20 0.000000 0.0 0.0 0.000000 0.0 0.0
21 0.000000 0.0 0.0 0.000000 0.0 0.0
23 0.792453 159.0 126.0 55314.465409 159.0 8795000.0
24 0.000000 0.0 0.0 0.000000 0.0 0.0
25 0.000000 0.0 0.0 0.000000 0.0 0.0
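An alternative to the unstack/stack round trip, assuming the same MultiIndex layout: reindex against the full cartesian product of the index levels. This is a sketch, not from the original answers:

import pandas as pd

# build the complete (a, b) product from the levels already present in the index
full = pd.MultiIndex.from_product(df.index.levels, names=df.index.names)
# align to it, filling the newly created rows with 0
df = df.reindex(full, fill_value=0)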
