Apply a function to a numpy array - python

I have a numpy array with data from Yahoo Finance that I got like this:
!pip install yfinance
import yfinance
tickers = yfinance.Tickers('GCV22.CMX CLV22.NYM')
So for each symbol I have open, low, high, and close prices as well as volume, on a daily basis:
Open High Low Close Volume Dividends Stock Splits
Date
2021-09-20 1752.000000 1766.000000 1740.500000 1761.800049 3656 0 0
2021-09-21 1763.400024 1780.800049 1756.300049 1776.099976 11490 0 0
2021-09-22 1773.099976 1785.900024 1762.800049 1776.699951 6343 0 0
2021-09-23 1766.900024 1774.500000 1736.300049 1747.699951 10630 0 0
2021-09-24 1741.300049 1755.599976 1738.300049 1749.699951 10630 0 0
I found this function in a paper (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2422183) and I would like to apply it to my dataset, but I can't work out how to call it:
import numpy as np
from pykalman import KalmanFilter

def fitKCA(t,z,q,fwd=0):
    '''
    Inputs:
        t: Iterable with time indices
        z: Iterable with measurements
        q: Scalar that multiplies the seed states covariance
        fwd: number of steps to forecast (optional, default=0)
    Output:
        x[0]: smoothed state means of position velocity and acceleration
        x[1]: smoothed state covar of position velocity and acceleration
    Dependencies: numpy, pykalman
    '''
    #1) Set up matrices A,H and a seed for Q
    h=(t[-1]-t[0])/t.shape[0]
    A=np.array([[1,h,.5*h**2],
                [0,1,h],
                [0,0,1]])
    Q=q*np.eye(A.shape[0])
    #2) Apply the filter
    kf=KalmanFilter(transition_matrices=A,transition_covariance=Q)
    #3) EM estimates
    kf=kf.em(z)
    #4) Smooth
    x_mean,x_covar=kf.smooth(z)
    #5) Forecast
    for fwd_ in range(fwd):
        x_mean_,x_covar_=kf.filter_update(filtered_state_mean=x_mean[-1],
                                          filtered_state_covariance=x_covar[-1])
        x_mean=np.append(x_mean,x_mean_.reshape(1,-1),axis=0)
        x_covar_=np.expand_dims(x_covar_,axis=0)
        x_covar=np.append(x_covar,x_covar_,axis=0)
    #6) Std series
    x_std=(x_covar[:,0,0]**.5).reshape(-1,1)
    for i in range(1,x_covar.shape[1]):
        x_std_=x_covar[:,i,i]**.5
        x_std=np.append(x_std,x_std_.reshape(-1,1),axis=1)
    return x_mean,x_std,x_covar
In the paper they say: "Numpy array t conveys the index of observations. Numpy array z passes the observations. Scalar q provides a seed value for initializing the EM estimation of the states covariance."
How can I call this function with my data? I understand t should be the index column of each symbol (that is, the date column), z should be the close price for each symbol of my numpy array, and q a random seed, but I can't make it work.

The function in the paper states that you need:
t: Iterable with time indices
z: Iterable with measurements
q: Scalar that multiplies the seed states covariance
Here is how you would compute them:
import numpy as np
import yfinance
from random import random

tickers = yfinance.Ticker('MSFT')
history = tickers.history()
# t is the row index (0, 1, 2, ...) as a numpy array,
# since fitKCA uses t.shape and t[-1]-t[0]
t = np.arange(history.shape[0])
# z is the measurement, here choosing the open price
z = history['Open'].values
# q is a scalar seed that multiplies the state covariance
q = random()
# finally call the function
x_mean, x_std, x_covar = fitKCA(t, z, q)
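If you want to see what the filter gives back, a minimal follow-up sketch (my own addition, assuming matplotlib is installed and the call above) plots the smoothed "position" state against the raw series:
import matplotlib.pyplot as plt

# x_mean has one row per observation and three columns:
# position, velocity and acceleration of the smoothed state
plt.plot(t, z, label='Open price')
plt.plot(t, x_mean[:, 0], label='KCA smoothed position')
plt.legend()
plt.show()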

Related

Optimization function yields wrong results

I'm trying to replicate some code from Yuxing Yan's Python for Finance.
I am at a roadblock because I am getting very large minimized figures (in this case stock weights, which can be both positive (long) and negative (short)) after optimization with fmin().
Can anyone help me with a fresh pair of eyes? I have seen some suggestions about avoiding passing negative or complex figures to fmin(), but I can't avoid that, as it's vital to my code.
#Let's import our modules
from scipy.optimize import fmin #to minimise our negative sharpe ratio
import numpy as np #deals with numbers in python
from datetime import datetime #handles date objects
import pandas_datareader.data as pdr #to read/download equity data
import pandas as pd #for reading and accessing tables etc
import scipy as sp
from scipy.stats import norm
import scipy.stats as stats
from scipy.optimize import fminbound

assets=('AAPL',
        'IBM',
        'GOOG',
        'BP',
        'XOM',
        'COST',
        'GS')
#start and end date to be downloaded
startdate='2016-01-01'
enddate='2016-01-31'
rf_rate=0.0003
n=len(assets)

#_______________________________________________
#This function takes the assets, start and end dates and
#returns the portfolio returns
#__________________________________________________
def port_returns(assets,startdate,enddate):
    #We use adjusted closing prices of the specified dates of assets
    #as we will only be interested in returns
    data = pdr.get_data_yahoo(assets, start=startdate, end=enddate)['Adj Close']
    #We calculate the percentage change of our returns
    #using the pct_change function in pandas
    returns=data.pct_change()
    return returns

def portfolio_variance(returns,weight):
    #finding the correlation of our returns by
    #dropping the nan values and transposing
    correlation_coefficient = np.corrcoef(returns.dropna().T)
    #standard deviation of our returns
    std=np.std(returns,axis=0)
    #initialising our variance
    port_var = 0.0
    #creating a nested loop to calculate our portfolio variance
    #where the variance is w1^2*σ1^2 + w2^2*σ2^2 + 2*w1*w2*Cov(1,2)
    #and the correlation coefficient is the covariance between two assets
    #divided by the product of their standard deviations
    for i in range(n):
        for j in range(n):
            #we sum the contribution of each pair of assets,
            #multiplying weights, standard deviations and corrcoef
            port_var += weight[i]*weight[j]*std[i]*std[j]*correlation_coefficient[i, j]
    return port_var

def sharpe_ratio(returns,weights):
    #call our variance function
    variance=portfolio_variance(returns,weights)
    avg_return=np.mean(returns,axis=0)
    #turn our returns into an array
    returns_array = np.array(avg_return)
    #Our sharpe ratio uses the expected return obtained by multiplying weights and returns
    #and the standard deviation obtained by square-rooting our variance
    #https://en.wikipedia.org/wiki/Sharpe_ratio
    return (np.dot(weights,returns_array) - rf_rate)/np.sqrt(variance)

def negate_sharpe_ratio(weights):
    #returns=port_returns(assets,startdate,enddate)
    #creating an array with our weights by
    #appending 1 minus the sum of the n-1 supplied weights as the final weight
    weights_new=np.append(weights,1-sum(weights))
    #returning a negative sharpe ratio
    return -(sharpe_ratio(returns_data,weights_new))

returns_data=port_returns(assets,startdate,enddate)
# for n stocks, we only choose n-1 free weights
ones_weights_array= (np.ones(n-1, dtype=float) * 1.0 )/n
weight_1 = fmin(negate_sharpe_ratio,ones_weights_array)
final_weight = np.append(weight_1, 1 - sum(weight_1))
final_sharpe_ratio = sharpe_ratio(returns_data,final_weight)
print ('Optimal weights are ')
print (final_weight)
print ('final Sharpe ratio is ')
print(final_sharpe_ratio)
A few things are causing your code not to work as written:
Is assets the list of items in ticker?
Should startdate be set equal to begdate?
Your call to port_returns() is looking for both assets and startdate, which are never defined.
The function sharpe_ratio() is looking for a variable called rf_rate which is never defined. I assume this is the risk-free rate and the value assigned to rf at the beginning of the script, so should rf be renamed to rf_rate instead?
After changing rf to rf_rate, begdate to startdate, and setting assets = list(ticker), it appears that this will work as written.
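As a side note on the nested loop in portfolio_variance(): the same quantity can be written as a quadratic form with the covariance matrix. A rough equivalent sketch (my own illustration, assuming returns is the DataFrame produced by port_returns(); it matches the loop up to the degrees-of-freedom convention, since np.cov uses ddof=1 while np.std defaults to ddof=0):
import numpy as np

def portfolio_variance_matrix(returns, weights):
    # covariance matrix of the asset returns (n x n)
    cov = np.cov(returns.dropna().T)
    # w' * Cov * w, the same sum as the double loop over
    # weight[i]*weight[j]*std[i]*std[j]*corr[i, j]
    return np.dot(weights, np.dot(cov, weights))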

Python: plotting across time with a single column file

I must write a function that allows me to find the local max and min from a series of values.
Data for function is x, y of each "peak".
Output are 4 vectors that contain x, y max and min "peaks".
To find peaks, I must "stand" on each data point and check whether it is greater or smaller than both of its neighbors in order to decide whether it is a peak (saved as a max/min peak).
Points on both ends have only 1 neighbor, so do not consider those for this analysis.
Then write a program to read a data file and invoke the function to calculate the peaks. The program must generate a graph showing the entered data with the calculated peaks.
The 1st file is an array of float64 of shape (2001,). All data is in column 0. This file represents the amplitude of a signal in time; the sampling frequency is 200 Hz. Assume the initial time is 0.
Graph should look like this
The program must also generate an .xls file that shows 2 tables: one with min peaks, and another with max peaks. Each table must be titled and consist of 2 columns, one with the time at which each peak occurs, and the other with the amplitude of each peak.
No Pandas allowed.
The first file is a .txt file with a single column, 2001 rows total:
0
0.0188425
0.0376428
0.0563589
0.0749497
0.0933749
0.111596
0.129575
0.147277
0.164669
0.18172
...
Current attempt:
import numpy as np
import matplotlib.pyplot as plt
filename = 'location/file_name.txt'
T = np.loadtxt(filename,comments='#',delimiter='\n')
x = T[::1] # all the files of column 0 are x vales
a = np.empty(x, dtype=array)
y = np.linspace[::1/200]
X, Y = np.meshgrid(x,y)
This does what you ask. I had to generate random data, since you didn't share yours. You can surely build your spreadsheet from the minima and maxima values.
import numpy as np
import matplotlib.pyplot as plt

#filename = 'location/file_name.txt'
#T = np.loadtxt(filename,comments='#',delimiter='\n')
#
#y = T[::1] # all the values of column 0 are y values
y = np.random.random(200) * 2.0

minima = []
maxima = []
# skip the first and last points, which have only one neighbour
for i in range(1, y.shape[0]-1):
    if y[i-1] < y[i] and y[i+1] < y[i]:
        maxima.append( (i/200, y[i]) )
    if y[i-1] > y[i] and y[i+1] > y[i]:
        minima.append( (i/200, y[i]) )

minima = np.array(minima)
maxima = np.array(maxima)
print(minima)
print(maxima)

x = np.linspace(0, 1, 200)
plt.plot( x, y )
plt.scatter( maxima[:,0], maxima[:,1] )
plt.show()
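For the .xls part of the assignment (no pandas), a minimal sketch using the xlwt package is one option; this is my own addition, assuming xlwt is installed (pip install xlwt) and that minima and maxima are the (time, amplitude) arrays built above:
import xlwt

wb = xlwt.Workbook()
ws = wb.add_sheet('Peaks')

# table of max peaks: title row, header row, then one row per peak
ws.write(0, 0, 'Max peaks')
ws.write(1, 0, 'Time (s)')
ws.write(1, 1, 'Amplitude')
for r, (t_val, amp) in enumerate(maxima, start=2):
    ws.write(r, 0, float(t_val))
    ws.write(r, 1, float(amp))

# table of min peaks, placed two columns to the right
ws.write(0, 3, 'Min peaks')
ws.write(1, 3, 'Time (s)')
ws.write(1, 4, 'Amplitude')
for r, (t_val, amp) in enumerate(minima, start=2):
    ws.write(r, 3, float(t_val))
    ws.write(r, 4, float(amp))

wb.save('peaks.xls')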

Johansen Test Is Producing An Incorrect Eigenvector

I'm attempting to replicate Ernie Chan's example 2.7, outlined in his seminal book Algorithmic Trading (page 55), in Python. There isn't much pertinent material to be found online, but the statsmodels library is very helpful. However, the eigenvector my code produces looks incorrect in that the values do not properly correspond to the test data. Here's the code in several steps:
import pandas as pd
import yfinance as yf
from datetime import datetime
from dateutil.relativedelta import relativedelta
years = 5
today = datetime.today().strftime('%Y-%m-%d')
lastyeartoday = (datetime.today() - relativedelta(years=years)).strftime('%Y-%m-%d')
symbols = ['BTC-USD', 'BCH-USD','ETH-USD']
df = yf.download(symbols,
                 start=lastyeartoday,
                 end=today,
                 progress=False)
df = df.dropna()

data = pd.DataFrame()
for symbol in symbols:
    data[symbol] = df['Close'][symbol]
data.tail()
This produces the following output:
Let's plot the three series:
# Plot the prices series
import matplotlib.pyplot as plt
%matplotlib inline
for symbol in symbols:
    data[symbol].plot(figsize=(10,8))
plt.show()
Graph:
Now we run the cointegrated Johansen test on the dataset:
import numpy as np
import pandas as pd
import statsmodels.api as sm
# data = pd.read_csv("http://web.pdx.edu/~crkl/ceR/data/usyc87.txt",index_col='YEAR',sep='\s+',nrows=66)
# y = data['Y']
# c = data['C']
from statsmodels.tsa.vector_ar.vecm import coint_johansen
"""
Johansen cointegration test of the cointegration rank of a VECM
Parameters
----------
endog : array_like (nobs_tot x neqs)
Data to test
det_order : int
* -1 - no deterministic terms - model1
* 0 - constant term - model3
* 1 - linear trend
k_ar_diff : int, nonnegative
Number of lagged differences in the model.
Returns
-------
result: Holder
An object containing the results which can be accessed using dot-notation. The object’s attributes are
eig: (neqs) - Eigenvalues.
evec: (neqs x neqs) - Eigenvectors.
lr1: (neqs) - Trace statistic.
lr2: (neqs) - Maximum eigenvalue statistic.
cvt: (neqs x 3) - Critical values (90%, 95%, 99%) for trace statistic.
cvm: (neqs x 3) - Critical values (90%, 95%, 99%) for maximum eigenvalue statistic.
method: str “johansen”
r0t: (nobs x neqs) - Residuals for Δ𝑌.
rkt: (nobs x neqs) - Residuals for 𝑌−1.
ind: (neqs) - Order of eigenvalues.
"""
def joh_output(res):
    output = pd.DataFrame([res.lr2,res.lr1],
                          index=['max_eig_stat',"trace_stat"])
    print(output.T,'\n')
    print("Critical values(90%, 95%, 99%) of max_eig_stat\n",res.cvm,'\n')
    print("Critical values(90%, 95%, 99%) of trace_stat\n",res.cvt,'\n')
# model with constant/trend (deterministic) term with lags set to 1
joh_model = coint_johansen(data,0,1) # k_ar_diff +1 = K
joh_output(joh_model)
As the test statistics are far greater than the critical values, we can reject the null hypothesis and conclude that the three crypto pairs are strongly cointegrated.
Now let's print the eigenvalues:
array([0.02903038, 0.01993949, 0.00584357])
The first row of our eigenvectors should be considered the strongest in that it has the shortest half-life for mean reversion:
print('Eigenvector in scientific notation:\n{0}\n'.format(joh_model.evec[0]))
print('Eigenvector in decimal notation:')
i = 0
for val in joh_model.evec[0]:
    print('{0}: {1:.10f}'.format(i, val))
    i += 1
Result:
Eigenvector in scientific notation:
[ 2.21531848e-04 -1.70103937e-04 -9.40374745e-05]
Eigenvector in decimal notation:
0: 0.0002215318
1: -0.0001701039
2: -0.0000940375
And here's the problem I mentioned in my introduction. Per Ernie's description these values should correspond to the hedge ratios for each of the crosses. However they are a) way too small, b) two of them are negative (obviously incorrect for these three crypto pairs), and c) seemingly completely uncorrelated with the test data (e.g. BTC is obviously trading at a massive premium and should be the smallest value).
Now I'm no math genius and there's a good chance that I messed up somewhere, which is why I provided all the code/steps involved for replication. Any pointers and insights would be much appreciated. Many thanks in advance.
UPDATE: Based on MilTom's suggestion I converted my dataset to percent returns and here's the result:
max_eig_stat trace_stat
0 127.076209 133.963475
1 6.581045 6.887266
2 0.306221 0.306221
Critical values(90%, 95%, 99%) of max_eig_stat
[[18.8928 21.1314 25.865 ]
[12.2971 14.2639 18.52 ]
[ 2.7055 3.8415 6.6349]]
Critical values(90%, 95%, 99%) of trace_stat
[[27.0669 29.7961 35.4628]
[13.4294 15.4943 19.9349]
[ 2.7055 3.8415 6.6349]]
Eigenvector in scientific notation:
[ 0.00400041 -0.01952632 -0.0133122 ]
Eigenvector in decimal notation:
0: 0.0040004070
1: -0.0195263209
2: -0.0133122020
This looks more appropriate, but it seems that the Johansen test fails to reject the null hypothesis in rows 1 and 2, given the low values there. Apparently there is no cointegration beyond the first relation, at least that's how I'm reading the result.
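For reference on how the eigenvector is meant to be used as hedge ratios in Chan's example (an illustrative sketch of the concept, not a resolution of the discrepancy above): the statsmodels docs describe evec as the (neqs x neqs) eigenvector matrix, and by the usual convention each column is one eigenvector, so the vector paired with the largest eigenvalue would be evec[:, 0]; weighting the price series with it gives the candidate mean-reverting portfolio.
import matplotlib.pyplot as plt

# assumes `data` (price DataFrame) and `joh_model` from the code above
weights = joh_model.evec[:, 0]        # candidate hedge ratios
portfolio = data.dot(weights)         # weighted combination of the three series
portfolio.plot(figsize=(10, 6))
plt.title('Portfolio built from the first Johansen eigenvector')
plt.show()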

Nan values when using np.linspace() as input

I am trying to calculate the probability of transmission for an electron through a series of potential wells. When looping through energy values from np.linspace() I get a return of nan for any value under 15. I understand this for values of 0 and 15, since they return a value of zero in the denominator for the k and q values. If I simply call getT(5), for example, I get a real value. However, when getT(5) gets called from the loop using np.linspace(0,30,2001), it returns nan. Shouldn't it return either nan or a value in both cases?
import numpy as np
import matplotlib.pyplot as plt
def getT(Ein):
    #constants
    hbar=1.055e-34 #J-s
    m=9.109e-31 #mass of electron kg
    N=10 #number of cells
    a=1e-10 #meters
    b=2e-10 #meters
    #convert energy and potential to Joules
    conv_J=1.602e-19
    E_eV=Ein
    V_eV=15
    E=conv_J*E_eV
    V=conv_J*V_eV
    #calculate values for k and q
    k=(2*m*E/hbar**2)**.5
    q=(2*m*(E-V)/hbar**2)**.5
    #create M1, M2 Matrices
    M1=np.matrix([[((q+k)/(2*q))*np.exp(1j*k*b),((q-k)/(2*q))*np.exp(-1j*k*b)],
                  [((q-k)/(2*q))*np.exp(1j*k*b),((q+k)/(2*q))*np.exp(-1j*k*b)]])
    M2=np.matrix([[((q+k)/(2*k))*np.exp(1j*q*a),((k-q)/(2*k))*np.exp(-1j*q*a)],
                  [((k-q)/(2*k))*np.exp(1j*q*a),((q+k)/(2*k))*np.exp(-1j*q*a)]])
    #calculate M_Cell
    M_Cell=M1*M2
    #calculate M for N cells
    M=M_Cell**N
    #get items in M_Cell
    M11=M.item(0,0)
    M12=M.item(0,1)
    M21=M.item(1,0)
    M22=M.item(1,1)
    #calculate r and t values
    r=-M21/M22
    t=M11-M12*M21/M22
    #calculate final T value
    T=abs(t)**2
    return Ein,T

#create empty list for data to plot
data=[]
#Calculate T for 2001 values of E between 0 and 30 eV
for i in np.linspace(0,30,2001):
    data.append(getT(i))
data=np.transpose(data)
#generate plot
fig, (ax1)=plt.subplots(1)
ax1.set_xlim([0,30])
ax1.set_xlabel('Energy (eV)',fontsize=32)
ax1.set_ylabel('T',fontsize=32)
ax1.grid()
plt.tick_params(labelsize=32)
plt.plot(data[0],data[1],lw=6)
plt.draw()
plt.show()
I think the difference comes from the line
q=(2*m*(E-V)/hbar**2)**.5
When testing with single values between 0 and 15, you're taking the square root of a negative number (because E-V is negative), which with a plain Python float yields a complex number, for example:
(-2)**0.5
>> (8.659560562354934e-17+1.4142135623730951j)
But when using np.linspace, the energies are NumPy floats, and raising a negative NumPy value to the power 0.5 results in nan (and a warning):
np.array(-2)**0.5
>> RuntimeWarning: invalid value encountered in power
>> nan
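If the complex-valued behaviour of the single-value calls is what you actually want inside the loop, one option (my suggestion, not part of the original answer) is to force complex arithmetic explicitly, e.g. by casting to complex or using np.emath.sqrt; applying the same idea inside getT (adding + 0j before taking ** .5) makes the loop behave like the scalar calls:
import numpy as np

E_minus_V = np.linspace(-15.0, 15.0, 5)      # includes negative values
print(E_minus_V ** 0.5)                      # nan (plus a warning) for negative entries
print(E_minus_V.astype(complex) ** 0.5)      # complex results instead of nan
print(np.emath.sqrt(E_minus_V))              # also keeps the complex branch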

Rolling average pairwise correlation in Python

I have daily returns from three markets (GLD, SPY, and USO). My goal is to calculate the average pairwise correlation from a correlation matrix on a rolling basis of 130 days.
My starting point was:
import numpy as np
import pandas as pd
import os as os
import pandas.io.data as web
import datetime as datetime
from pandas.io.data import DataReader
stocks = ['spy', 'gld', 'uso']
start = datetime.datetime(2010,1,1)
end = datetime.datetime(2016,1,1)
df = web.DataReader(stocks, 'yahoo', start, end)
adj_close_df = df['Adj Close']
returns = adj_close_df.pct_change(1).dropna()
returns = returns.dropna()
rollingcor = returns.rolling(130).corr()
This creates a panel of correlation matrices. However, extracting the lower (or upper) triangles, removing the diagonals, and then calculating the average for each observation is where I've drawn a blank. Ideally I would like the output for each date to be in a Series that I can then index by the dates.
Maybe I've started from the wrong place but any help would be appreciated.
To get the average pairwise correlation, you can find the sum of the correlation matrix, subtract n (the ones on the diagonal), divide by 2 (symmetry), and finally divide by the number of distinct pairs, n(n-1)/2 (which here happens to equal n, since n = 3). I think this should do it:
>>> n = len(stocks)
>>> ((rollingcor.sum(skipna=0).sum(skipna=0) - n) / 2) / n
Date
2010-01-05 NaN
2010-01-06 NaN
2010-01-07 NaN
...
2015-12-29 0.164356
2015-12-30 0.168102
2015-12-31 0.166462
dtype: float64
You could use numpy's tril to access the lower triangle of the dataframe.
def tril_sum(df):
    # -1 ensures we skip the diagonal
    return np.tril(df.unstack().values, -1).sum()
This calculates the sum of the lower triangle of the matrix. Notice the unstack() in the middle of it; I'm expecting a MultiIndex series that I'll need to pivot to a dataframe.
Then apply it to your panel:
n = len(stocks)
avg_cor = rollingcor.dropna().to_frame().apply(tril_sum) / ((n ** 2 - n) / 2)
Looks like:
print avg_cor.head()
Date
2010-07-12 0.398973
2010-07-13 0.403664
2010-07-14 0.402483
2010-07-15 0.403252
2010-07-16 0.407769
dtype: float64
This answer skips the diagonals.
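As an aside, pandas.io.data and the Panel type used above have since been removed from pandas. A rough modern equivalent of the same computation (an illustrative sketch, assuming a returns DataFrame with one column per ticker, e.g. built from data downloaded with yfinance):
import numpy as np
import pandas as pd

def avg_pairwise_corr(returns, window=130):
    # mean of the off-diagonal correlations for each rolling window
    n = returns.shape[1]
    rolling = returns.rolling(window).corr()   # MultiIndex (date, ticker) x ticker
    # for each date: sum the n x n matrix, drop the n ones on the diagonal,
    # and divide by the number of off-diagonal cells
    return rolling.groupby(level=0).apply(
        lambda c: (c.values.sum() - n) / (n * (n - 1))
    )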
