Keep inner values of numpy array - python

I have a TIFF with lakes, which I converted into a 2D array. I would like to keep only the outline of the lakes in a 2D array.
import rasterio
import numpy as np
with rasterio.open('myfile.tif') as dtm:
    array = dtm.read(1)
array[array > 0] = 1
array = array.astype(float)
array[array == 0] = np.nan
My array now looks like this; a lake can be seen in the upper right corner:
[[ nan nan nan ... 2888.001 **2877.458 2867.5798**]
[ nan nan nan ... 2890.188 **2879.2876 2869.0415**]
[ nan nan nan ... 2892.2622 2880.9907 2870.4985]
...
[ nan nan nan ... nan nan nan]
[ nan nan nan ... nan nan nan]
[ nan nan nan ... nan nan nan]]
To keep only the outline of the lakes, I have to set all values that are NOT located next to a NaN (examples marked in bold above) to NaN.
I have tried:
array[1:-1, 1:-1] = np.nan
However, this converts ALL inner values of the entire array to nan, not just the inner values of the lakes.
If you know of a completely different way to keep the outline of the lakes (maybe with rasterio), I would also be grateful.
I hope I have made clear what I mean by the inner values of the lakes.
Tim
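A minimal sketch of one possible approach (not from the original post), assuming SciPy is available and that the lakes are the non-NaN cells: keep only the cells that touch at least one NaN neighbour, which is exactly the lake outline.
import numpy as np
from scipy import ndimage

# True where there is no lake (NaN), False inside the lakes
nan_mask = np.isnan(array)
# grow the NaN mask by one cell; lake cells that touch a NaN become True
touches_nan = ndimage.binary_dilation(nan_mask) & ~nan_mask
# keep only the outline cells and set the inner lake cells to NaN
outline = np.where(touches_nan, array, np.nan)
Here array is the array from the snippet above; binary_dilation uses 4-connectivity by default, so diagonal neighbours are not considered.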


Pandas dataframe merge row by addition

I want to create a dataframe from census data. I want to calculate the number of people that filed a tax return for each specific earnings group.
For now, I wrote this:
census_df = pd.read_csv('../zip code data/19zpallagi.csv')
sub_census_df = census_df[['zipcode', 'agi_stub', 'N02650', 'A02650', 'ELDERLY', 'A07180']].copy()
num_of_returns = ['Number_of_returns_1_25000', 'Number_of_returns_25000_50000', 'Number_of_returns_50000_75000',
                  'Number_of_returns_75000_100000', 'Number_of_returns_100000_200000', 'Number_of_returns_200000_more']
for i, column_name in zip(range(1, 7), num_of_returns):
    sub_census_df[column_name] = sub_census_df[sub_census_df['agi_stub'] == i]['N02650']
I have 6 groups attached to each zip code. I want to get one row per zip code, with the number of returns for each group appearing just once as a column. I already tried changing the NaNs to 0 and using groupby('zipcode').sum(), but I get about 50 million returns summed for zip code 0, where it seems that only around 800k should exist.
Here is the dataframe that I currently get:
zipcode agi_stub N02650 A02650 ELDERLY A07180 Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more Amount_1_25000 Amount_25000_50000 Amount_50000_75000 Amount_75000_100000 Amount_100000_200000 Amount_200000_more
0 0 1 778140.0 10311099.0 144610.0 2076.0 778140.0 NaN NaN NaN NaN NaN 10311099.0 NaN NaN NaN NaN NaN
1 0 2 525940.0 19145621.0 113810.0 17784.0 NaN 525940.0 NaN NaN NaN NaN NaN 19145621.0 NaN NaN NaN NaN
2 0 3 285700.0 17690402.0 82410.0 9521.0 NaN NaN 285700.0 NaN NaN NaN NaN NaN 17690402.0 NaN NaN NaN
3 0 4 179070.0 15670456.0 57970.0 8072.0 NaN NaN NaN 179070.0 NaN NaN NaN NaN NaN 15670456.0 NaN NaN
4 0 5 257010.0 35286228.0 85030.0 14872.0 NaN NaN NaN NaN 257010.0 NaN NaN NaN NaN NaN 35286228.0 NaN
And here is what I want to get:
zipcode Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more
0 0 778140.0 525940.0 285700.0 179070.0 257010.0 850.0
Here is one way to do it, using groupby and summing the desired columns:
num_of_returns = ['Number_of_returns_1_25000', 'Number_of_returns_25000_50000', 'Number_of_returns_50000_75000',
                  'Number_of_returns_75000_100000', 'Number_of_returns_100000_200000', 'Number_of_returns_200000_more']
df.groupby('zipcode', as_index=False)[num_of_returns].sum()
zipcode Number_of_returns_1_25000 Number_of_returns_25000_50000 Number_of_returns_50000_75000 Number_of_returns_75000_100000 Number_of_returns_100000_200000 Number_of_returns_200000_more
0 0 778140.0 525940.0 285700.0 179070.0 257010.0 0.0
This question needs more information to give a proper answer. For example, you leave out what certain columns in your data frame mean:
- `N1: Number of returns`
- `agi_stub: Size of adjusted gross income`
According to the IRS, this has the following levels.
Size of adjusted gross income:
"0 = No AGI Stub
1 = 'Under $1'
2 = '$1 under $10,000'
3 = '$10,000 under $25,000'
4 = '$25,000 under $50,000'
5 = '$50,000 under $75,000'
6 = '$75,000 under $100,000'
7 = '$100,000 under $200,000'
8 = '$200,000 under $500,000'
9 = '$500,000 under $1,000,000'
10 = '$1,000,000 or more'"
I got the above from https://www.irs.gov/pub/irs-soi/16incmdocguide.doc
With this information, I think what you want to find is the number of people who filed a tax return for each of the income levels of agi_stub. If that is what you mean, then this can be achieved by:
import pandas as pd
data = pd.read_csv("./data/19zpallagi.csv")
## select only the desired columns
data = data[['zipcode', 'agi_stub', 'N1']]
## solution to your problem?
df = data.pivot_table(
    index='zipcode',
    values='N1',
    columns='agi_stub',
    aggfunc=['sum']
)
## bit of cleaning up.
PREFIX = 'agi_stub_level_'
df.columns = [PREFIX + level for level in df.columns.get_level_values(1).astype(str)]
Here's the output.
In [77]: df
Out[77]:
agi_stub_level_1 agi_stub_level_2 ... agi_stub_level_5 agi_stub_level_6
zipcode ...
0 50061850.0 37566510.0 ... 21938920.0 8859370.0
1001 2550.0 2230.0 ... 1420.0 230.0
1002 2850.0 1830.0 ... 1840.0 990.0
1005 650.0 570.0 ... 450.0 60.0
1007 1980.0 1530.0 ... 1830.0 460.0
... ... ... ... ... ...
99827 470.0 360.0 ... 170.0 40.0
99833 550.0 380.0 ... 290.0 80.0
99835 1250.0 1130.0 ... 730.0 190.0
99901 1960.0 1520.0 ... 1030.0 290.0
99999 868450.0 644160.0 ... 319880.0 142960.0
[27595 rows x 6 columns]

Rolling window produces no effect on dataframe

So I have to apply a rolling window to a set of rows inside a dataframe. The problem is that when I do full_df = full_df.rolling(window=5).mean(), the output of full_df.head(2000) shows all NaN values. Does anyone know why this happens? I have to perform a time series exercise with this.
This is the dataset: https://github.com/plotly/datasets/blob/master/all_stocks_5yr.csv
This is what I have:
import pandas as pd

df = pd.read_csv('all_stocks_5yr.csv', usecols=["date", "close", "Name"])
gp = df.groupby("Name")
my_dict = {key: group['close'].to_numpy() for key, group in gp}
full_df = pd.DataFrame.from_dict(my_dict, orient='index')
for i in full_df:
    full_df = full_df.rolling(window=5).mean()
An image of the output (not reproduced here) showed the dataframe filled with NaN values.
First off, your loop for i in full_df is not doing what you think; instead of computing the rolling mean within each row, you're running it over and over again on the whole dataframe, averaging down the columns.
If we just do the rolling average once, the way you're implementing it:
full_df = full_df.rolling(window=5).mean()
print(full_df)
0 1 2 3 ... 1255 1256 1257 1258
A NaN NaN NaN NaN ... NaN NaN NaN NaN
AAL NaN NaN NaN NaN ... NaN NaN NaN NaN
AAP NaN NaN NaN NaN ... NaN NaN NaN NaN
AAPL NaN NaN NaN NaN ... NaN NaN NaN NaN
ABBV 48.56684 48.37228 47.95056 48.07312 ... 102.590 98.768 101.212 100.510
... ... ... ... ... ... ... ... ... ...
XYL 45.58400 45.60000 45.74000 45.96200 ... 64.504 61.854 61.596 61.036
YUM 51.14200 51.01800 51.17400 51.28400 ... 66.902 64.420 63.914 63.668
ZBH 48.59000 48.49200 48.57000 48.75000 ... 75.154 73.112 72.704 72.436
ZION 44.84400 44.76600 44.89400 45.08200 ... 73.972 71.734 71.516 71.580
ZTS 45.08600 45.02600 45.27400 45.39200 ... 83.002 80.224 80.000 80.116
[505 rows x 1259 columns]
The first four rows are all NaN because the rolling mean isn't defined for fewer than 5 rows.
If we do it again (making a total of two times):
full_df = full_df.rolling(window=5).mean()
print(full_df.head(9))
0 1 2 ... 1256 1257 1258
A NaN NaN NaN ... NaN NaN NaN
AAL NaN NaN NaN ... NaN NaN NaN
AAP NaN NaN NaN ... NaN NaN NaN
AAPL NaN NaN NaN ... NaN NaN NaN
ABBV NaN NaN NaN ... NaN NaN NaN
ABC NaN NaN NaN ... NaN NaN NaN
ABT NaN NaN NaN ... NaN NaN NaN
ACN NaN NaN NaN ... NaN NaN NaN
ADBE 49.619072 49.471424 49.192048 ... 108.3420 110.4848 110.4976
You can see the first 8 rows are all NaN, since any window that still overlaps the four NaN rows produced by the first pass yields NaN; the ninth row (ADBE) is the first whose 5-row window contains no NaN. Given the size of your data frame (505 rows), if you ran the rolling mean 127 times, the entire df would be filled with NaN values, and your for loop runs it even more times than that, which is why your df is filled with NaN values.
Also, note that you're averaging across different stock tickers, which doesn't make sense. What I believe you want is to average along the rows, not down the columns, in which case you simply need to do:
full_df = full_df.rolling(axis = 'columns', window=5).mean()
print(full_df)
0 1 2 3 4 5 ... 1253 1254 1255 1256 1257 1258
A NaN NaN NaN NaN 44.72600 44.1600 ... 73.926 73.720 73.006 71.744 70.836 69.762
AAL NaN NaN NaN NaN 14.42600 14.3760 ... 53.142 53.308 53.114 52.530 52.248 51.664
AAP NaN NaN NaN NaN 78.74000 78.7600 ... 120.742 120.016 118.074 115.468 114.054 112.642
AAPL NaN NaN NaN NaN 67.32592 66.9025 ... 168.996 168.330 166.128 163.834 163.046 161.468
ABBV NaN NaN NaN NaN 35.87200 36.1380 ... 116.384 117.992 116.384 113.824 112.888 113.168
... ... ... ... ... ... ... ... ... ... ... ... ... ...
XYL NaN NaN NaN NaN 27.84600 28.0840 ... 73.278 73.598 73.848 73.698 73.350 73.256
YUM NaN NaN NaN NaN 64.58000 64.3180 ... 85.504 85.168 84.454 83.118 82.316 81.424
ZBH NaN NaN NaN NaN 75.85600 75.8660 ... 126.284 126.974 126.886 126.044 125.316 124.048
ZION NaN NaN NaN NaN 24.44200 24.4820 ... 53.838 54.230 54.256 53.748 53.466 53.464
ZTS NaN NaN NaN NaN 33.37400 33.5600 ... 78.720 78.434 77.772 76.702 75.686 75.112
Again, the first four columns are still NaN here.
To correct for that, we add one more argument:
full_df = full_df.rolling(axis = 'columns', window=5, min_periods = 1).mean()
print(full_df)
0 1 2 3 4 5 ... 1253 1254 1255 1256 1257 1258
A 45.0800 44.8400 44.766667 44.7625 44.72600 44.1600 ... 73.926 73.720 73.006 71.744 70.836 69.762
AAL 14.7500 14.6050 14.493333 14.5350 14.42600 14.3760 ... 53.142 53.308 53.114 52.530 52.248 51.664
AAP 78.9000 78.6450 78.630000 78.7150 78.74000 78.7600 ... 120.742 120.016 118.074 115.468 114.054 112.642
AAPL 67.8542 68.2078 67.752800 67.4935 67.32592 66.9025 ... 168.996 168.330 166.128 163.834 163.046 161.468
ABBV 36.2500 36.0500 35.840000 35.6975 35.87200 36.1380 ... 116.384 117.992 116.384 113.824 112.888 113.168
... ... ... ... ... ... ... ... ... ... ... ... ... ...
XYL 27.0900 27.2750 27.500000 27.6900 27.84600 28.0840 ... 73.278 73.598 73.848 73.698 73.350 73.256
YUM 65.3000 64.9250 64.866667 64.7525 64.58000 64.3180 ... 85.504 85.168 84.454 83.118 82.316 81.424
ZBH 75.8500 75.7500 75.646667 75.7350 75.85600 75.8660 ... 126.284 126.974 126.886 126.044 125.316 124.048
ZION 24.1400 24.1750 24.280000 24.3950 24.44200 24.4820 ... 53.838 54.230 54.256 53.748 53.466 53.464
ZTS 33.0500 33.1550 33.350000 33.4000 33.37400 33.5600 ... 78.720 78.434 77.772 76.702 75.686 75.112
In the above data frame the first column is just the value at time 0, the second is the average of times 0 and 1, the third is the average of times 0, 1, and 2, etc. The window size continues growing until you get to your value of window=5, at which point the window moves along with your rolling average. Note that you can also center the rolling mean if you want to rather than have a trailing window. You can see the documentation here.
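A minimal toy sketch (not part of the original answer) that makes the min_periods behaviour easy to see:
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
# the window grows from 1 up to 5 values, then slides
print(s.rolling(window=5, min_periods=1).mean())
# 0    1.0   (mean of [1])
# 1    1.5   (mean of [1, 2])
# 2    2.0   (mean of [1, 2, 3])
# 3    2.5   (mean of [1, 2, 3, 4])
# 4    3.0   (mean of [1, 2, 3, 4, 5])
# 5    4.0   (mean of [2, 3, 4, 5, 6])
# a centered window instead of a trailing one
print(s.rolling(window=5, min_periods=1, center=True).mean())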
I'm not quite sure what you are trying to do. Could you explain in more detail what the goal of your operation is? I assume you are trying to build a moving (rolling) average over a 5-day window for each asset and calculate the mean price for each window.
But first, let me answer why you see all the NaNs:
What you are doing with the code below is just running the same operation over and over again, and its result is always NaN. That is because you are doing something odd with the dict, and the first rows all have NaNs, so the average will also be NaN. And since you overwrite the variable full_df with the result of this computation, your dataframe shows only NaNs.
for i in full_df:
    full_df = full_df.rolling(window=5).mean()
Let me explain in more detail. You were (probably) trying to iterate over the dataframe (using a window of 5 days) and compute the mean. The function full_df.rolling(window=5).mean() already does exactly that, and the output is a new dataframe with the mean of each window over the entire dataframe full_df. By running this function in a loop, without additional indexing, you are just running the same function on the entire dataframe over and over again.
Maybe this will get you what you want:
import pandas as pd
df = pd.read_csv("all_stocks_5yr.csv", index_col=[0,6])
means = df.rolling(window=5).mean()
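A hedged alternative sketch (not from either answer above), assuming the goal is a 5-day rolling mean of the closing price computed separately for each ticker, so that windows never cross asset boundaries:
import pandas as pd

df = pd.read_csv("all_stocks_5yr.csv", usecols=["date", "close", "Name"])
# 5-day rolling mean of 'close' within each ticker, aligned back to the original rows
df["close_5d_mean"] = df.groupby("Name")["close"].transform(lambda s: s.rolling(window=5).mean())
print(df.head(10))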

How to only pass values where there are x amount of consecutive not-null values in Pandas?

I have a time series of monthly temperature anomaly data that is 60 years long. I would like to only pass temperature values where there are 6 or more consecutive months in the time series where temperature anomalies are greater than 0.5. Although I find it easy enough to replace the values < 0.5 with NaN, I'm not sure how to replace values where the temperature is > 0.5 but there are only 2 or 3 consecutive values that are greater than 0.5. Snippet below:
time = [1950.04167, 1950.125 , 1950.20833, 1950.29167, 1950.375 ,
1950.45833, 1950.54167, 1950.625 , 1950.70833, 1950.79167,
1950.875 , 1950.95833, 1951.04167, 1951.125 , 1951.20833,
1951.29167, 1951.375 , 1951.45833, 1951.54167, 1951.625 ,
1951.70833, 1951.79167, 1951.875 , 1951.95833, 1952.04167,
1952.125 , 1952.20833, 1952.29167, 1952.375 , 1952.45833,
1952.54167, 1952.625 , 1952.70833, 1952.79167, 1952.875 ,
1952.95833, 1953.04167, 1953.125 , 1953.20833, 1953.29167,
1953.375 , 1953.45833, 1953.54167, 1953.625 , 1953.70833,
1953.79167, 1953.875 , 1953.95833, 1954.04167, 1954.125 ,
1954.20833, 1954.29167, 1954.375 , 1954.45833, 1954.54167,
1954.625 , 1954.70833, 1954.79167, 1954.875 , 1954.95833]
sst = [-1.67623 , -1.685853, -1.69083 , -1.61898 , -1.40235 ,
-1.097773, -0.835867, -0.718727, -0.694087, -0.785423,
-0.9312 , -1.01925 , -0.8868 , -0.48022 , -0.007597,
0.448647, 0.66546 , 0.852427, 0.922443, 1.14481 ,
1.291153, 1.338903, 0.993053, 0.68006, 0.493597,
0.500197, 0.528363, 0.515583, 0.418493, 0.168387,
-0.003403, 0.033933, 0.15759 , 0.113847, 0.019967,
0.111413, 0.372967, 0.623067, 0.763903, 0.909743,
0.990287, 1.01288 , 0.969407, 0.985817, 0.982607,
1.01244 , 1.039917, 1.11755, 1.044333, 0.799593,
0.3769 , 0.105033, -0.070743, -0.281483, -0.59861,
-0.875743, -0.88768 , -0.642517, -0.548043, -0.547057]
series = pd.Series(index=time, data=sst)
greater = series.where(cond=(series >= 0.5))
So for example, I'd like to be able to 'pass' the SST values that correspond to the time spans of 1951.375 to 1951.95833 and 1953.125 to 1954.125 where SST is greater than 0.5 for 8 and 13 consecutive values respectively, but replace the SST values with NaN for the SST values corresponding to 1952.125 to 1952.29167 where there are only 3 consecutive values that are > 0.5.
Any suggestions? TIA!
You can find the length of each run of values > 0.5 with series.groupby(series.le(0.5).cumsum()) and then use .apply() to replace the values in runs that are too short.
Each group produced by the .groupby also lumps in the preceding <= 0.5 value, so we keep only groups of 5 or more values and replace the first value of each kept group with np.nan.
In [61]: (
    series
    .groupby(series.le(0.5).cumsum())
    .apply(lambda x: pd.Series(np.nan if len(x) < 5 else [np.nan] + list(x)[1:], x.index))
)
Out[61]:
1950.04167 NaN
1950.12500 NaN
1950.20833 NaN
1950.29167 NaN
1950.37500 NaN
1950.45833 NaN
1950.54167 NaN
1950.62500 NaN
1950.70833 NaN
1950.79167 NaN
1950.87500 NaN
1950.95833 NaN
1951.04167 NaN
1951.12500 NaN
1951.20833 NaN
1951.29167 NaN
1951.37500 0.665460
1951.45833 0.852427
1951.54167 0.922443
1951.62500 1.144810
1951.70833 1.291153
1951.79167 1.338903
1951.87500 0.993053
1951.95833 0.680060
1952.04167 NaN
1952.12500 NaN
1952.20833 NaN
1952.29167 NaN
1952.37500 NaN
1952.45833 NaN
1952.54167 NaN
1952.62500 NaN
1952.70833 NaN
1952.79167 NaN
1952.87500 NaN
1952.95833 NaN
1953.04167 NaN
1953.12500 0.623067
1953.20833 0.763903
1953.29167 0.909743
1953.37500 0.990287
1953.45833 1.012880
1953.54167 0.969407
1953.62500 0.985817
1953.70833 0.982607
1953.79167 1.012440
1953.87500 1.039917
1953.95833 1.117550
1954.04167 1.044333
1954.12500 0.799593
1954.20833 NaN
1954.29167 NaN
1954.37500 NaN
1954.45833 NaN
1954.54167 NaN
1954.62500 NaN
1954.70833 NaN
1954.79167 NaN
1954.87500 NaN
1954.95833 NaN
dtype: float64
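A hedged alternative sketch (not from the original answer), using the question's threshold of at least 6 consecutive values above 0.5 and masking everything else with NaN; it assumes the series built in the question:
import pandas as pd

above = series.gt(0.5)                              # True where the anomaly exceeds 0.5
run_id = (above != above.shift()).cumsum()          # label each run of identical True/False values
run_len = above.groupby(run_id).transform('size')   # length of the run each value belongs to
result = series.where(above & (run_len >= 6))       # keep only runs of 6+ values above 0.5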

Dataframe interpolate not working in Pandas + multidimensional interpolation

Multidimensional interpolation with a dataframe is not working.
import pandas as pd
import numpy as np
raw_data = {'CCY_CODE': ['SGD','USD','USD','USD','USD','USD','USD','EUR','EUR','EUR','EUR','EUR','EUR','USD'],
            'END_DATE': ['16/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018',
                         '17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018','17/03/2018'],
            'STRIKE': [0.005,0.01,0.015,0.02,0.025,0.03,0.035,0.04,0.045,0.05,0.55,0.06,0.065,0.07],
            'VOLATILITY': [np.nan,np.nan,0.3424,np.nan,0.2617,0.2414,np.nan,np.nan,0.215,0.212,0.2103,np.nan,0.2092,np.nan]
            }
df_volsurface = pd.DataFrame(raw_data,columns = ['CCY_CODE','END_DATE','STRIKE','VOLATILITY'])
df_volsurface['END_DATE'] = pd.to_datetime(df_volsurface['END_DATE'])
df_volsurface.interpolate(method='akima',limit_direction='both')
Output:
    CCY_CODE   END_DATE  STRIKE  VOLATILITY
0        SGD  3/16/2018   0.005         NaN
1        USD  3/17/2018    0.01         NaN
2        USD  3/17/2018   0.015      0.3424
3        USD  3/17/2018    0.02    0.296358
4        USD  3/17/2018   0.025      0.2617
5        USD  3/17/2018    0.03      0.2414
6        USD  3/17/2018   0.035    0.230295
7        EUR  3/17/2018    0.04    0.220911
8        EUR  3/17/2018   0.045       0.215
9        EUR  3/17/2018    0.05       0.212
10       EUR  3/17/2018    0.55      0.2103
11       EUR  3/17/2018    0.06    0.209471
12       EUR  3/17/2018   0.065      0.2092
13       USD  3/17/2018    0.07         NaN
Expected Result:
    CCY_CODE   END_DATE  STRIKE  VOLATILITY
0        SGD  3/16/2018   0.005  NaN
1        USD  3/17/2018    0.01  Expected some logical value
2        USD  3/17/2018   0.015  0.3424
3        USD  3/17/2018    0.02  0.296358
4        USD  3/17/2018   0.025  0.2617
5        USD  3/17/2018    0.03  0.2414
6        USD  3/17/2018   0.035  0.230295
7        EUR  3/17/2018    0.04  0.220911
8        EUR  3/17/2018   0.045  0.215
9        EUR  3/17/2018    0.05  0.212
10       EUR  3/17/2018    0.55  0.2103
11       EUR  3/17/2018    0.06  0.209471
12       EUR  3/17/2018   0.065  0.2092
13       USD  3/17/2018    0.07  Expected some logical value
The linear interpolation method just copies the last available value backward and forward to the missing values, without considering CCY_CODE:
df_volsurface.interpolate(method='linear',limit_direction='both')
Output:
    CCY_CODE   END_DATE  STRIKE  VOLATILITY
0        SGD  3/16/2018   0.005      0.3424
1        USD  3/17/2018    0.01      0.3424
2        USD  3/17/2018   0.015      0.3424
3        USD  3/17/2018    0.02     0.30205
4        USD  3/17/2018   0.025      0.2617
5        USD  3/17/2018    0.03      0.2414
6        USD  3/17/2018   0.035      0.2326
7        EUR  3/17/2018    0.04      0.2238
8        EUR  3/17/2018   0.045       0.215
9        EUR  3/17/2018    0.05       0.212
10       EUR  3/17/2018    0.55      0.2103
11       EUR  3/17/2018    0.06     0.20975
12       EUR  3/17/2018   0.065      0.2092
13       USD  3/17/2018    0.07      0.2092
Any help is appreciated! Thanks!
I'd like to point out that this is still one-dimensional interpolation. We have one independent variable ('STRIKE') and one dependent variable ('VOLATILITY'). The interpolation is simply done under different conditions, e.g. for each day, each currency, each scenario, etc. The following is an example of how the interpolation can be done based on 'END_DATE' and 'CCY_CODE'.
import scipy.interpolate

# set all the conditions as index
df_volsurface.set_index(['END_DATE', 'CCY_CODE', 'STRIKE'], inplace=True)
df_volsurface.sort_index(level=['END_DATE', 'CCY_CODE', 'STRIKE'], inplace=True)
# create separate columns for all criteria except the independent variable
df_volsurface = df_volsurface.unstack(level=['END_DATE', 'CCY_CODE'])
for ccy in df_volsurface:
    indices = df_volsurface[ccy].notna()
    if not any(indices):
        continue  # we are not interested in a column with only NaN
    x = df_volsurface.index.get_level_values(level='STRIKE')  # independent var
    y = df_volsurface[ccy]  # dependent var
    # create interpolation function
    f_interp = scipy.interpolate.interp1d(x[indices], y[indices], kind='linear',
                                          bounds_error=False, fill_value='extrapolate')
    df_volsurface['VOL_INTERP', ccy[1], ccy[2]] = f_interp(x)
print(df_volsurface)
The interpolation for the other conditions should work analogously. This is the resulting DataFrame:
VOLATILITY VOL_INTERP
END_DATE 2018-03-16 2018-03-17 2018-03-17
CCY_CODE SGD EUR USD EUR USD
STRIKE
0.005 NaN NaN NaN 0.23900 0.42310
0.010 NaN NaN NaN 0.23600 0.38275
0.015 NaN NaN 0.3424 0.23300 0.34240
0.020 NaN NaN NaN 0.23000 0.30205
0.025 NaN NaN 0.2617 0.22700 0.26170
0.030 NaN NaN 0.2414 0.22400 0.24140
0.035 NaN NaN NaN 0.22100 0.22110
0.040 NaN NaN NaN 0.21800 0.20080
0.045 NaN 0.2150 NaN 0.21500 0.18050
0.050 NaN 0.2120 NaN 0.21200 0.16020
0.055 NaN 0.2103 NaN 0.21030 0.13990
0.060 NaN NaN NaN 0.20975 0.11960
0.065 NaN 0.2092 NaN 0.20920 0.09930
0.070 NaN NaN NaN 0.20865 0.07900
Use df_volsurface.stack() to return to a multiindex of your choice. There are also several pandas interpolation methods to choose from. However, I have not found a satisfactory solution for your problem using method='akima' because it only interpolates between the given data points, but does not seem to extrapolate beyond.
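For example, a minimal sketch (assuming the df_volsurface built above) of going back to a long, MultiIndexed layout:
# stack the END_DATE and CCY_CODE column levels back into the row index
df_long = df_volsurface.stack(level=['END_DATE', 'CCY_CODE'])
print(df_long.head())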

How to get the rolling(window = 3).max() on a numpy.ndarray?

I have a numpy.ndarray as follows. It is the output from talib.RSI and is of type numpy.ndarray. I want to get the list of rolling(window=3).max() and rolling(window=3).min() values.
How to do that?
[ nan nan nan nan nan
nan nan nan nan nan
nan nan nan nan 56.50118203
60.05461743 56.99068148 55.70899949 59.2299361 64.19044898
60.62186599 53.96346826 44.06538636 52.04519976 51.32884016
58.65240379 60.44789401 58.94743634 59.75308787 53.56534397
54.22091468 47.22502341 51.5425848 50.0923126 49.80608264
45.69087847 50.07778871 54.21701441 58.79268406 63.59307774
66.08195696 65.49255218 65.11035657 68.47403716 70.70530564
73.21955929 76.57474822 65.89852612 66.51497688 72.42658468
73.80944844 69.56561001]
If you can afford to add a new dependency, I would rather do this with Pandas.
import numpy
import pandas
x = numpy.array([0, 1, 2, 3, 4])
s = pandas.Series(x)
print(s.rolling(3).min())
print(s.rolling(3).max())
print(s.rolling(3).mean())
print(s.rolling(3).std())
Note that converting your NumPy array to a Pandas series does not create a copy of the array, as Pandas uses NumPy arrays internally for its series.
You can use np.lib.stride_tricks.as_strided:
# a smaller example
import numpy.random as npr
npr.seed(123)
arr = npr.randn(10)
arr[:4] = np.nan
# each of the 8 windows views 3 consecutive values; 8 bytes is the stride of a float64 element
windows = np.lib.stride_tricks.as_strided(arr, shape=(8, 3), strides=(8, 8))
print(windows.max(axis=1))
print(windows.sum(axis=1))
[ nan nan nan nan 1.65143654 1.65143654
1.26593626 1.26593626]
[ nan nan nan nan -1.35384296 -1.20415534
-1.58965561 -0.02971677]
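A hedged side note (not part of the original answers), assuming NumPy >= 1.20 is available: np.lib.stride_tricks.sliding_window_view builds the same windows without hand-computing strides.
import numpy as np

arr = np.array([np.nan, np.nan, 1.0, 3.0, 2.0, 5.0, 4.0])
windows = np.lib.stride_tricks.sliding_window_view(arr, window_shape=3)
print(windows.max(axis=1))  # rolling max over windows of 3
print(windows.min(axis=1))  # rolling min over windows of 3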
