I'm attempting to replicate Ernie Chan's Example 2.7 from his seminal book Algorithmic Trading (page 55) in Python. There isn't much pertinent material online, but the statsmodels library is very helpful. However, the eigenvector my code produces looks incorrect in that its values don't relate sensibly to the test data. Here's the code in several steps:
import pandas as pd
import yfinance as yf
from datetime import datetime
from dateutil.relativedelta import relativedelta
years = 5
today = datetime.today().strftime('%Y-%m-%d')
lastyeartoday = (datetime.today() - relativedelta(years=years)).strftime('%Y-%m-%d')
symbols = ['BTC-USD', 'BCH-USD','ETH-USD']
df = yf.download(symbols,
                 start=lastyeartoday,
                 end=today,
                 progress=False)
df = df.dropna()

data = pd.DataFrame()
for symbol in symbols:
    data[symbol] = df['Close'][symbol]
data.tail()
This produces the following output:
Let's plot the three series:
# Plot the price series
import matplotlib.pyplot as plt
%matplotlib inline

for symbol in symbols:
    data[symbol].plot(figsize=(10,8))
plt.show()
Graph:
Now we run the Johansen cointegration test on the dataset:
import numpy as np
import pandas as pd
import statsmodels.api as sm
# data = pd.read_csv("http://web.pdx.edu/~crkl/ceR/data/usyc87.txt",index_col='YEAR',sep='\s+',nrows=66)
# y = data['Y']
# c = data['C']
from statsmodels.tsa.vector_ar.vecm import coint_johansen
"""
Johansen cointegration test of the cointegration rank of a VECM
Parameters
----------
endog : array_like (nobs_tot x neqs)
Data to test
det_order : int
* -1 - no deterministic terms - model1
* 0 - constant term - model3
* 1 - linear trend
k_ar_diff : int, nonnegative
Number of lagged differences in the model.
Returns
-------
result: Holder
An object containing the results which can be accessed using dot-notation. The object’s attributes are
eig: (neqs) - Eigenvalues.
evec: (neqs x neqs) - Eigenvectors.
lr1: (neqs) - Trace statistic.
lr2: (neqs) - Maximum eigenvalue statistic.
cvt: (neqs x 3) - Critical values (90%, 95%, 99%) for trace statistic.
cvm: (neqs x 3) - Critical values (90%, 95%, 99%) for maximum eigenvalue statistic.
method: str “johansen”
r0t: (nobs x neqs) - Residuals for ΔY.
rkt: (nobs x neqs) - Residuals for Y_{-1}.
ind: (neqs) - Order of eigenvalues.
"""
def joh_output(res):
    output = pd.DataFrame([res.lr2, res.lr1],
                          index=['max_eig_stat', "trace_stat"])
    print(output.T, '\n')
    print("Critical values(90%, 95%, 99%) of max_eig_stat\n", res.cvm, '\n')
    print("Critical values(90%, 95%, 99%) of trace_stat\n", res.cvt, '\n')
# model with constant/trend (deterministic) term with lags set to 1
joh_model = coint_johansen(data,0,1) # k_ar_diff +1 = K
joh_output(joh_model)
As the test statistics are far greater than the critical values, we can reject the null hypothesis and conclude that there is strong evidence of cointegration between the three crypto pairs.
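Since the docstring above documents lr1 (trace statistics) and cvt (critical values), this comparison can also be done programmatically. A minimal sketch (the helper johansen_rank below is my own, not part of statsmodels):
def johansen_rank(res, cl_index=1):
    # cl_index selects the critical-value column: 0 -> 90%, 1 -> 95%, 2 -> 99%
    rejected = res.lr1 > res.cvt[:, cl_index]
    # the cointegration rank is the number of consecutive rejections starting at r = 0
    rank = 0
    for r in rejected:
        if not r:
            break
        rank += 1
    return rank

print('Estimated cointegration rank (95% level):', johansen_rank(joh_model))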
Now let's print the eigenvalues:
array([0.02903038, 0.01993949, 0.00584357])
The first row of our eigenvectors should be considered the strongest in that it has the shortest half-life for mean reversion:
print('Eigenvector in scientific notation:\n{0}\n'.format(joh_model.evec[0]))
print('Eigenvector in decimal notation:')
i = 0
for val in joh_model.evec[0]:
    print('{0}: {1:.10f}'.format(i, val))
    i += 1
Result:
Eigenvector in scientific notation:
[ 2.21531848e-04 -1.70103937e-04 -9.40374745e-05]
Eigenvector in decimal notation:
0: 0.0002215318
1: -0.0001701039
2: -0.0000940375
And here's the problem I mentioned in my introduction. Per Ernie's description these values should correspond to the hedge ratios for each of the crosses. However they are a) way too small, b) two of them are negative (obviously incorrect for these three crypto pairs), and c) seemingly unrelated to the test data (e.g. BTC is obviously trading at a massive premium and should have the smallest value).
Now I'm no math genius and there's a good chance that I messed up somewhere, which is why I provided all the code/steps involved for replication. Any pointers and insights would be much appreciated. Many thanks in advance.
UPDATE: Based on MilTom's suggestion I converted my dataset to percent returns and here's the result:
max_eig_stat trace_stat
0 127.076209 133.963475
1 6.581045 6.887266
2 0.306221 0.306221
Critical values(90%, 95%, 99%) of max_eig_stat
[[18.8928 21.1314 25.865 ]
[12.2971 14.2639 18.52 ]
[ 2.7055 3.8415 6.6349]]
Critical values(90%, 95%, 99%) of trace_stat
[[27.0669 29.7961 35.4628]
[13.4294 15.4943 19.9349]
[ 2.7055 3.8415 6.6349]]
Eigenvector in scientific notation:
[ 0.00400041 -0.01952632 -0.0133122 ]
Eigenvector in decimal notation:
0: 0.0040004070
1: -0.0195263209
2: -0.0133122020
This looks more appropriate, but it seems that the Johansen test fails to reject the null hypothesis, given the low values in rows 1 and 2. Apparently there is no cointegration there, at least that's how I'm reading the result.
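For reference, here is a rough sketch (my own, not taken from the book) of the step the question is aiming at: using a cointegrating eigenvector as hedge ratios to form a portfolio and estimating its mean-reversion half-life. It assumes data and joh_model from the code above, and it uses evec[:, 0] on the assumption that statsmodels stores the eigenvectors column-wise (so the first column, not the first row, belongs to the largest eigenvalue); that is worth double-checking for your version.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hedge ratios from the first cointegrating eigenvector (column 0, by assumption)
hedge_ratios = joh_model.evec[:, 0]
port = pd.Series(data.values @ hedge_ratios, index=data.index, name='port')

# half-life from the AR(1)-style regression dy_t = c + lam * y_{t-1} + e_t
dy = port.diff().dropna()
y_lag = port.shift(1).dropna().loc[dy.index]
ols = sm.OLS(dy, sm.add_constant(y_lag)).fit()
lam = ols.params['port']
halflife = -np.log(2) / lam
print('Half-life (in bars):', round(halflife, 1))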
I'm trying to replicate some code from Yuxing Yan's Python for Finance.
I've hit a roadblock because I am getting very high minimized figures (in this case stock weights, which can be both +(long) and -(short)) after optimization with fmin().
Can anyone help me with a fresh pair of eyes? I have seen some suggestions about avoiding passing negative or complex figures to fmin(), but I can't avoid that as it's vital to my code.
# Let's import our modules
from scipy.optimize import fmin        # to minimise our negative Sharpe ratio
import numpy as np                     # numerical arrays
from datetime import datetime          # handles date objects
import pandas_datareader.data as pdr   # to download equity data
import pandas as pd                    # for reading and accessing tables etc.
import scipy as sp
from scipy.stats import norm
import scipy.stats as stats
from scipy.optimize import fminbound
assets = ('AAPL',
          'IBM',
          'GOOG',
          'BP',
          'XOM',
          'COST',
          'GS')
# start and end date to be downloaded
startdate = '2016-01-01'
enddate = '2016-01-31'
rf_rate = 0.0003
n = len(assets)

# ______________________________________________________
# This function takes the assets, start and end dates and
# returns the portfolio returns
# ______________________________________________________
def port_returns(assets, startdate, enddate):
    # We use adjusted closing prices for the specified dates of the assets,
    # as we are only interested in returns
    data = pdr.get_data_yahoo(assets, start=startdate, end=enddate)['Adj Close']
    # We calculate the percentage change of our prices
    # using the pct_change function
    returns = data.pct_change()
    return returns

def portfolio_variance(returns, weight):
    # finding the correlation of our returns by
    # dropping the nan values and transposing
    correlation_coefficient = np.corrcoef(returns.dropna().T)
    # standard deviation of our returns
    std = np.std(returns, axis=0)
    # initialising our variance
    port_var = 0.0
    # creating a nested loop to calculate our portfolio variance,
    # where the variance is w1^2*sigma1^2 + w2^2*sigma2^2 + 2*w1*w2*Cov(1,2)
    # and the correlation coefficient is the covariance between two assets
    # divided by the product of their standard deviations
    for i in range(n):
        for j in range(n):
            # sum up the variance contribution of each pair of assets,
            # multiplying the weights by the std's and the correlation coefficient
            port_var += weight[i]*weight[j]*std[i]*std[j]*correlation_coefficient[i, j]
    return port_var

def sharpe_ratio(returns, weights):
    # call our variance function
    variance = portfolio_variance(returns, weights)
    avg_return = np.mean(returns, axis=0)
    # turn our returns into an array
    returns_array = np.array(avg_return)
    # The Sharpe ratio uses the expected return (weights dotted with the mean returns)
    # and the standard deviation (square root of the variance)
    # https://en.wikipedia.org/wiki/Sharpe_ratio
    return (np.dot(weights, returns_array) - rf_rate)/np.sqrt(variance)

def negate_sharpe_ratio(weights):
    # returns=port_returns(assets,startdate,enddate)
    # build the full weight vector by appending the last weight,
    # 1 minus the sum of the n-1 optimised weights
    weights_new = np.append(weights, 1 - sum(weights))
    # return a negative Sharpe ratio
    return -(sharpe_ratio(returns_data, weights_new))

returns_data = port_returns(assets, startdate, enddate)
# for n stocks, we can only choose n-1 weights freely
ones_weights_array = (np.ones(n-1, dtype=float) * 1.0)/n
weight_1 = fmin(negate_sharpe_ratio, ones_weights_array)
final_weight = np.append(weight_1, 1 - sum(weight_1))
final_sharpe_ratio = sharpe_ratio(returns_data, final_weight)
print('Optimal weights are')
print(final_weight)
print('Final Sharpe ratio is')
print(final_sharpe_ratio)
A few things are causing your code not to work as written:
* Is assets the list of items in ticker?
* Should startdate be set equal to begdate?
* Your call to port_returns() is looking for both assets and startdate, which are never defined.
* The function sharpe_ratio() is looking for a variable called rf_rate, which is never defined. I assume this is the risk-free rate and the value assigned to rf at the beginning of the script, so should rf be called rf_rate instead?
After changing rf to rf_rate, begdate to startdate, and setting assets = list(ticker), it appears that this will work as written.
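Not part of the original answer, but for completeness: the n-1 weight trick can be avoided altogether by handing the optimizer an explicit sum-to-one constraint. A sketch using scipy.optimize.minimize (SLSQP) and a vectorized Sharpe ratio, assuming returns_data, n and rf_rate as defined in the question's code:
import numpy as np
from scipy.optimize import minimize

mu = returns_data.mean().values     # average per-period returns
sigma = returns_data.cov().values   # sample covariance matrix

def neg_sharpe(w):
    # negative Sharpe ratio: -(w'mu - rf) / sqrt(w' Sigma w)
    return -(w @ mu - rf_rate) / np.sqrt(w @ sigma @ w)

w0 = np.full(n, 1.0 / n)
res = minimize(
    neg_sharpe, w0, method='SLSQP',
    constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}],
)
print('Optimal weights:', res.x)
print('Sharpe ratio:', -res.fun)
Here w @ sigma @ w is essentially the same portfolio variance the nested loop computes, since std[i]*std[j]*corr[i, j] is the covariance of assets i and j (up to the ddof convention of the covariance estimate).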
Suppose I have a sample which I have reason to believe follows an exponential distribution. I want to estimate the distribution's parameter (lambda) and some indication of the confidence: either the confidence interval or the standard error would be fine. Sadly, scipy.stats.expon.fit does not seem to permit that. Here's an example; I will use lambda = 1/120 = 0.008333 for the test data:
""" Generate test data"""
import scipy.stats
test_data = scipy.stats.expon(scale=120).rvs(size=3000)
""" Scipy.stats fit"""
fit = scipy.stats.expon.fit(test_data)
print("\nExponential parameters:", fit, " Specifically lambda: ", 1/fit[1], "\n")
# Exponential parameters: (0.0066790678905608875, 116.8376079908356) Specifically lambda: 0.008558887991599736
The answer to a similar question about gamma-distributed data suggests using GenericLikelihoodModel from the statsmodels module. While I can confirm that this works nicely for gamma-distributed data, it does not for exponential distributions, because the optimization apparently results in a non-invertible Hessian matrix. This results either from non-finite elements in the Hessian or from np.linalg.eigh producing non-positive eigenvalues for the Hessian. (Source code here; HessianInversionWarning is raised in the fit method of the LikelihoodModel class.).
""" Statsmodel fit"""
from statsmodels.base.model import GenericLikelihoodModel
class Expon(GenericLikelihoodModel):
nparams = 2
def loglike(self, params):
return scipy.stats.expon.logpdf(self.endog, *params).sum()
res = Expon(test_data).fit(start_params=fit)
res.df_model = len(fit)
res.df_resid = len(test_data) - len(fit)
print(res.summary())
#Optimization terminated successfully.
# Current function value: 5.760785
# Iterations: 38
# Function evaluations: 76
#/usr/lib/python3.8/site-packages/statsmodels/tools/numdiff.py:352: RuntimeWarning: invalid value encountered in double_scalars
# hess[i, j] = (f(*((x + ee[i, :] + ee[j, :],) + args), **kwargs)
#/usr/lib/python3.8/site-packages/statsmodels/base/model.py:547: HessianInversionWarning: Inverting hessian failed, no bse or cov_params available
# warn('Inverting hessian failed, no bse or cov_params '
#/usr/lib/python3.8/site-packages/scipy/stats/_distn_infrastructure.py:903: RuntimeWarning: invalid value encountered in greater
# return (a < x) & (x < b)
#/usr/lib/python3.8/site-packages/scipy/stats/_distn_infrastructure.py:903: RuntimeWarning: invalid value encountered in less
# return (a < x) & (x < b)
#/usr/lib/python3.8/site-packages/scipy/stats/_distn_infrastructure.py:1912: RuntimeWarning: invalid value encountered in less_equal
# cond2 = cond0 & (x <= _a)
# Expon Results
#==============================================================================
#Dep. Variable: y Log-Likelihood: -17282.
#Model: Expon AIC: 3.457e+04
#Method: Maximum Likelihood BIC: 3.459e+04
#Date: Thu, 06 Aug 2020
#Time: 13:55:24
#No. Observations: 3000
#Df Residuals: 2998
#Df Model: 2
#==============================================================================
# coef std err z P>|z| [0.025 0.975]
#------------------------------------------------------------------------------
#par0 0.0067 nan nan nan nan nan
#par1 116.8376 nan nan nan nan nan
#==============================================================================
This seems to happen every single time, so it might be something related to exponentially-distributed data.
Is there some other possible approach? Or am I maybe missing something or doing something wrong here?
Edit: It turns out I was doing something wrong. Namely, I wrongly had
test_data = scipy.stats.expon(120).rvs(size=3000)
instead of
test_data = scipy.stats.expon(scale=120).rvs(size=3000)
and was correspondingly looking at the first element of the fit tuple when I should have been looking at the second.
As a result, the two other options I considered (manually computing the fit and confidence intervals following the standard procedure described on Wikipedia, and using scikits.bootstrap as suggested in this answer) actually do work and are part of the solution I'll add in a minute, not part of the problem.
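For anyone hitting the same pitfall: in scipy.stats, the first positional argument of expon is loc, not scale, which is why the two calls above behave so differently. A quick check:
import scipy.stats
# expon(120) means loc=120, scale=1: a unit-rate exponential shifted to start at x=120
# expon(scale=120) is the intended distribution with mean 120, i.e. lambda = 1/120
print(scipy.stats.expon(120).mean())        # 121.0
print(scipy.stats.expon(scale=120).mean())  # 120.0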
As mentioned in the edited question, part of the problem was that I was looking at the wrong parameter when creating the sample and again in the fit.
What remains is that scipy.stats.expon.fit does not offer the possibility to compute confidence or errors and that using GenericLikelihoodModel from the statsmodels module as suggested here fails because of a malformed Hessian.
There are, however, three approaches that do work:
1. Using the simple inference procedure for confidence intervals in exponential data as given in the wikipedia article
""" Maximum likelihood"""
import numpy as np
ML_lambda = 1 / np.mean(test_data)
print("\nML lambda: {0:8f}".format(ML_lambda))
#ML lambda: 0.008558
""" Bias corrected ML"""
ML_BC_lambda = ML_lambda - ML_lambda / (len(test_data) - 1)
print("\nML bias-corrected lambda: {0:8f}".format(ML_BC_lambda))
#ML bias-corrected lambda: 0.008556
Computation of the confidence intervals:
""" Maximum likelihood 95% confidence"""
CI_distance = ML_BC_lambda * 1.96/(len(test_data)**0.5)
print("\nLambda with confidence intervals: {0:8f} +/- {1:8f}".format(ML_BC_lambda, CI_distance))
print("Confidence intervals: ({0:8f}, {1:9f})".format(ML_BC_lambda - CI_distance, ML_BC_lambda + CI_distance))
#Lambda with confidence intervals: 0.008556 +/- 0.000306
#Confidence intervals: (0.008249, 0.008862)
A second option along these lines: the confidence interval equation should also be valid for a lambda estimate produced by a different method, such as the one from scipy.stats.expon.fit. (I thought the fitting procedure in scipy.stats.expon.fit was more reliable, but it turns out it is actually the same, just without the bias correction (see above).)
""" Maximum likelihood 95% confidence based on scipy.stats fit"""
scipy_stats_lambda = 1 / fit[1]
scipy_stats_CI_distance = scipy_stats_lambda * 1.96/(len(test_data)**0.5)
print("\nOr, based on scipy.stats fit:")
print("Lambda with confidence intervals: {0:8f} +/- {1:8f}".format(scipy_stats_lambda, scipy_stats_CI_distance))
print("Confidence intervals: ({0:8f}, {1:9f})".format(scipy_stats_lambda - scipy_stats_CI_distance,
scipy_stats_lambda + scipy_stats_CI_distance))
#Or, based on scipy.stats fit:
#Lambda with confidence intervals: 0.008559 +/- 0.000306
#Confidence intervals: (0.008253, 0.008865)
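As a cross-check (my addition, not in the original answer): for i.i.d. exponential data there is also an exact interval, based on the fact that 2*lambda*sum(x_i) follows a chi-square distribution with 2n degrees of freedom. A sketch using the test_data from above:
""" Exact chi-square based 95% confidence interval"""
import numpy as np
from scipy.stats import chi2

n_obs = len(test_data)
total = np.sum(test_data)
alpha = 0.05
ci_low = chi2.ppf(alpha / 2, 2 * n_obs) / (2 * total)
ci_high = chi2.ppf(1 - alpha / 2, 2 * n_obs) / (2 * total)
print("Exact confidence intervals: ({0:8f}, {1:9f})".format(ci_low, ci_high))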
2. Bootstrapping with scikits.bootstrap following the suggestion in this answer
This yields an InstabilityWarning: Some values were NaN; results are probably unstable (all values were probably equal), so this should be treated a bit skeptically.
""" Bootstrapping with scikits"""
print("\n")
import scikits.bootstrap as boot
bootstrap_result = boot.ci(test_data, scipy.stats.expon.fit)
print(bootstrap_result)
#tmp/expon_fit_test.py:53: InstabilityWarning: Some values were NaN; results are probably unstable (all values were probably equal)
# bootstrap_result = boot.ci(test_data, scipy.stats.expon.fit)
#[[6.67906789e-03 1.12615588e+02]
# [6.67906789e-03 1.21127091e+02]]
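For what it is worth (my addition): newer SciPy versions (1.7+) ship scipy.stats.bootstrap, which can do the same resampling without the extra dependency. The choice of statistic (1/mean, i.e. the ML estimate of lambda) and of the percentile method are my own:
""" Bootstrapping with scipy.stats.bootstrap (SciPy >= 1.7)"""
import numpy as np
from scipy.stats import bootstrap

boot_res = bootstrap(
    (test_data,),                   # data is passed as a sequence of samples
    lambda x: 1.0 / np.mean(x),     # statistic evaluated on each resample
    vectorized=False,
    confidence_level=0.95,
    n_resamples=2000,
    method='percentile',
)
print(boot_res.confidence_interval)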
3. Using rpy2
""" Using r modules with rpy2"""
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr
MASS = importr('MASS')
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
rpy_fit = MASS.fitdistr(test_data, "exponential")
rpy_estimate = rpy_fit.rx("estimate")[0][0]
rpy_sd = rpy_fit.rx("sd")[0][0]
rpy_lower = rpy_estimate - 2*rpy_sd
rpy_upper = rpy_estimate + 2*rpy_sd
print("\nrpy2 fit: \nLambda={0:8f} +/- {1:8f}, CI: ({2:8f}, {3:8f})".format(rpy_estimate, rpy_sd, rpy_lower, rpy_upper))
#rpy2 fit:
#Lambda=0.008558 +/- 0.000156, CI: (0.008246, 0.008871)
You have already found a solution to your problem, but here is a solution based on OpenTURNS. I think it relies on bootstrapping under the hood.
OpenTURNS requires you to reshape the data so that it is clear we are dealing with 3000 one-dimensional points and not a single 3000-dimensional point.
test_data = test_data.reshape(-1, 1)
The rest is rather straightforward.
import openturns as ot
confidence_level = 0.9
params_ci = ot.ExponentialFactory().buildEstimator(test_data).getParameterDistribution().computeBilateralConfidenceInterval(confidence_level)
lambda_ci = [params_ci.getLowerBound()[0], params_ci.getUpperBound()[0]]
# the index 0 means we are interested in the CI on lambda
print(lambda_ci)
I got the following output (but it depends on the random seed):
[0.008076302149561718, 0.008688296487447742]
I did not find answers covering my whole problem; each existing question deals with just part of it. After a few days of trying I decided to post a question.
I am doing biomechanical research involving computing the maximum velocity of kicks. Three kicks are captured in each file, so simply finding the maximum value won't do; I need to find the maximum value of each kick. With some help I managed to do it using Matlab/Octave, but for future work I decided to stick with Python for data processing.
The point is that I have time, x, y, z data of a specific marker, and I need to compute its velocity for each registered frame and pick the maximum velocity of each kick.
This is a code in octave:
pkg load signal
txyz=importdata('295ltoe.txt',',',8); % read the text file
txyz=txyz.data; % all data in array time,x,y,z
dxyz=diff(txyz); % first differences of all columns
vxyz=dxyz(:,2:end)./dxyz(:,1)/1000; % compute velocity components
v=sqrt(sum(vxyz.^2,2)); % and the total velocity
[pks,locs]=findpeaks(v,'minpeakheight',6 )
I tried to convert it to pandas with this code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import find_peaks
data1 = pd.read_csv("B0264_dollyo_air_P_T01 rtoe.txt")  # example of a txt file imported from a c3d file
df = data1.diff()
dfx = (df['X'] /1000) / df['T']
dfy = (df['Y'] /1000) / df['T']
dfz = (df['Z'] /1000) / df['T']
dfx1 = dfx**2
dfy1 = dfy**2
dfz1 = dfz**2
v = (dfx1 + dfy1 + dfz1)**1/2
peaks, _ = find_peaks(v, height=6)
plt.plot(v)
plt.plot(peaks, v[peaks], "x")
plt.show()
The problem is that the velocity gets values like these:
0 NaN
1 6.450000e-07
2 8.237500e-07
3 1.159062e-06
4 1.250312e-06
5 1.657500e-06
instead of the correct values, which should be above 60. I am attaching the plots I obtained vs. the correct plot from Excel (computing this in Excel is time-consuming due to constant copy-pasting).
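One thing worth checking (an observation from my side, not part of the original post, and possibly not the only issue since the units of the time column also matter): in Python, ** binds tighter than /, so (dfx1 + dfy1 + dfz1)**1/2 evaluates as ((dfx1 + dfy1 + dfz1)**1) / 2, i.e. half the squared speed rather than its square root. A sketch of the Octave line v = sqrt(sum(vxyz.^2, 2)) with an explicit square root, reusing dfx, dfy, dfz from the code above:
import numpy as np

# speed magnitude per frame, matching v = sqrt(sum(vxyz.^2, 2)) in Octave
v = np.sqrt(dfx**2 + dfy**2 + dfz**2)
# equivalently: v = (dfx**2 + dfy**2 + dfz**2)**0.5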
My overall aim is to get the 3 maximum peaks and 3 minimum peaks in order to compute the execution time of each kick, but I do not know how to obtain them.
For now, if anyone is willing to help me, I can provide the files I have used.
I am trying to learn how to sample truncated distributions. To begin with I decided to try a simple example I found here.
I didn't really understand the division by the CDF, so I decided to tweak the algorithm a bit. The distribution being sampled is an exponential distribution for values x > 0. Here is an example Python code:
# Sample exponential distribution for the case x>0
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def pdf(x):
    return x*np.exp(-x)

xvec = np.zeros(1000000)
x = 1.
for i in range(1000000):
    a = x + np.random.normal()
    xs = x
    if a > 0.:
        xs = a
    A = pdf(xs)/pdf(x)
    if np.random.uniform() < A:
        x = xs
    xvec[i] = x

x = np.linspace(0, 15, 1000)
plt.plot(x, pdf(x))
plt.hist([x for x in xvec if x != 0], bins=150, normed=True)
plt.show()
And the output is:
The code above seems to work fine only when using the condition if a > 0.:, i.e. positive x; choosing another condition (e.g. if a > 0.5:) produces wrong results.
Since my final goal was to sample a 2D Gaussian pdf on a truncated interval, I tried extending the simple example using the exponential distribution (see the code below). Unfortunately, since the simple case didn't work, I assume that the code given below would also yield wrong results.
I assume that all this can be done using the advanced tools of Python. However, since my primary aim was to understand the principle behind it, I would greatly appreciate your help in understanding my mistake.
Thank you for your help.
EDIT:
# code updated according to the answer of CrazyIvan
from scipy.stats import multivariate_normal
RANGE=100000
a=2.06072E-02
b=1.10011E+00
a_range=[0.001,0.5]
b_range=[0.01, 2.5]
cov=[[3.1313994E-05, 1.8013737E-03],[ 1.8013737E-03, 1.0421529E-01]]
x=a
y=b
j=0
for i in range(RANGE):
a_t,b_t=np.random.multivariate_normal([a,b],cov)
# accept if within bounds - all that is neded to truncate
if a_range[0]<a_t and a_t<a_range[1] and b_range[0]<b_t and b_t<b_range[1]:
print(dx,dy)
EDIT:
I changed the code by normalizing the analytic pdf according to this scheme, following the answers given by @CrazyIvan and @LeandroCaniglia, for the case where the bottom of the pdf is removed: that is, dividing by (1 - CDF(0.5)), since my accept condition is x > 0.5. This again seems to show some discrepancies. Again the mystery prevails...
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def pdf(x):
    return x*np.exp(-x)

# included the corresponding cdf
def cdf(x):
    return 1. - np.exp(-x) - x*np.exp(-x)

xvec = np.zeros(1000000)
x = 1.
for i in range(1000000):
    a = x + np.random.normal()
    xs = x
    if a > 0.5:
        xs = a
    A = pdf(xs)/pdf(x)
    if np.random.uniform() < A:
        x = xs
    xvec[i] = x

x = np.linspace(0, 15, 1000)
# new part: norm the analytic pdf to fix the area
plt.plot(x, pdf(x)/(1. - cdf(0.5)))
plt.hist([x for x in xvec if x != 0], bins=200, normed=True)
plt.savefig("test_exp.png")
plt.show()
It seems that this can be cured by choosing a larger shift size:
shift = 15.
a = x + np.random.normal()*shift
which is in general an issue with the Metropolis-Hastings algorithm. See the graph below:
I also checked shift = 150.
The bottom line is that changing the shift size definitely improves the convergence. The mystery is why, since the Gaussian is unbounded.
You say you want to learn the basic idea of sampling a truncated distribution, but your source is a blog post about the Metropolis–Hastings algorithm? Do you actually need this "method for obtaining a sequence of random samples from a probability distribution for which direct sampling is difficult"? Taking this as your starting point is like learning English by reading Shakespeare.
Truncated normal
For truncated normal, basic rejection sampling is all you need: generate samples from the original distribution, reject those outside of the bounds. As Leandro Caniglia noted, you should not expect the truncated distribution to have the same PDF except on a shorter interval — this is plain impossible because the area under the graph of a PDF is always 1. If you cut off stuff from the sides, there has to be more in the middle; the PDF gets rescaled.
It's quite inefficient to gather samples one by one, when you need 100000. I would grab 100000 normal samples at once, accept only those that fit; then repeat until I have enough. Example of sampling truncated normal between amin and amax:
import numpy as np

n_samples = 100000
amin, amax = -1, 2

samples = np.zeros((0,))    # empty for now
while samples.shape[0] < n_samples:
    s = np.random.normal(0, 1, size=(n_samples,))
    accepted = s[(s >= amin) & (s <= amax)]
    samples = np.concatenate((samples, accepted), axis=0)
samples = samples[:n_samples]   # we probably got more than needed, so discard extra ones
And here is the comparison with the PDF curve, rescaled by division by cdf(amax) - cdf(amin) as explained above.
import matplotlib.pyplot as plt
from scipy.stats import norm

_ = plt.hist(samples, bins=50, density=True)
t = np.linspace(-2, 3, 500)
plt.plot(t, norm.pdf(t)/(norm.cdf(amax) - norm.cdf(amin)), 'r')
plt.show()
Truncated multivariate normal
Now we want to keep the first coordinate between amin and amax, and the second between bmin and bmax. Same story, except there will be a 2-column array and the comparison with bounds is done in a relatively sneaky way:
(np.min(s - [amin, bmin], axis=1) >= 0) & (np.max(s - [amax, bmax], axis=1) <= 0)
This means: subtract amin, bmin from each row and keep only the rows where both results are nonnegative (meaning we had a >= amin and b >= bmin). Also do a similar thing with amax, bmax. Accept only the rows that meet both criteria.
n_samples = 10
amin, amax = -1, 2
bmin, bmax = 0.2, 2.4
mean = [0.3, 0.5]
cov = [[2, 1.1], [1.1, 2]]

samples = np.zeros((0, 2))   # 2 columns now
while samples.shape[0] < n_samples:
    s = np.random.multivariate_normal(mean, cov, size=(n_samples,))
    accepted = s[(np.min(s - [amin, bmin], axis=1) >= 0) & (np.max(s - [amax, bmax], axis=1) <= 0)]
    samples = np.concatenate((samples, accepted), axis=0)
samples = samples[:n_samples, :]
Not going to plot, but here are some values: naturally, within bounds.
array([[ 0.43150033, 1.55775629],
[ 0.62339265, 1.63506963],
[-0.6723598 , 1.58053835],
[-0.53347361, 0.53513105],
[ 1.70524439, 2.08226558],
[ 0.37474842, 0.2512812 ],
[-0.40986396, 0.58783193],
[ 0.65967087, 0.59755193],
[ 0.33383214, 2.37651975],
[ 1.7513789 , 1.24469918]])
To compute the truncated density function pdf_t from the entire density function pdf, do the following:
Let [a, b] be the truncation interval; (x axis)
Let A := cdf(a) and B := cdf(b); (cdf = non-truncated cumulative distribution function)
Then pdf_t(x) := pdf(x) / (B - A) if x in [a, b] and 0 elsewhere.
In cases where a = -infinity (resp. b = +infinity), take A := 0 (resp. B := 1).
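A minimal sketch of this recipe applied to the x*exp(-x) density from the question, truncated at a = 0.5 (so b = +infinity and B = 1), using the cdf already given in the question's code:
import numpy as np
import matplotlib.pyplot as plt

def pdf(x):
    return x*np.exp(-x)

def cdf(x):
    return 1. - np.exp(-x) - x*np.exp(-x)

a = 0.5
A, B = cdf(a), 1.0                               # b = +infinity, hence B = 1

x = np.linspace(0, 15, 1000)
pdf_t = np.where(x >= a, pdf(x) / (B - A), 0.)   # the "0 elsewhere" part
plt.plot(x, pdf_t)
plt.show()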
As for the "mystery" you see
please note that your blue curve is wrong. It is not the pdf of your truncated distribution, it is just the pdf of the non-truncated one, scaled by the correct amount (division by 1-cdf(0.5)). The actual truncated pdf curve starts with a vertical line on x = 0.5 which goes up until it reaches your current blue curve. In other words, you only scaled the curve but forgot to truncate it, in this case to the left. Such a truncation corresponds to the "0 elsewhere" part of step 3 in the algorithm above.
For a series of angle values in the (-pi, pi) range, I make a histogram. Is there an effective way to calculate the mean and modal (most probable) value? Consider the following examples:
import numpy as N, cmath
deg = N.pi/180.
d = N.array([-175., 170, 175, 179, -179])*deg
i = N.sum(N.exp(1j*d))
ave = cmath.phase(i)
i /= float(d.size)
stdev = -2. * N.log(N.sqrt(i.real**2 + i.imag**2))
print ave/deg, stdev/deg
Now, let's have a histogram:
counts, bins = N.histogram(data, N.linspace(-N.pi, N.pi, 360))
Is it possible to calculate the mean and mode having only counts and bins? For non-periodic data, calculating the mean is straightforward:
ave = sum(counts*bins[:-1])
Calculating the modal value requires more effort, and actually I'm not sure my code below is correct: first I identify the bins which occur most frequently, and then I calculate an arithmetic mean of them:
cmax = bins[N.argmax(counts)]
mode = N.mean(N.take(bins, N.nonzero(counts == cmax)[0]))
I have no idea how to calculate the standard deviation from such data, though. One obvious solution to all my problems (at least those described above) is to convert the histogram data back to a data series and then use it in the calculations. This is neither elegant nor efficient, however.
Any hints will be much appreciated.
This is the partial solution I wrote.
import numpy as N, cmath
import scipy.stats as ST
d = [-175, 170.2, 175.57, 179, -179, 170.2, 175.57, 170.2]
deg = N.pi/180.
data = N.array(d)*deg
i = N.sum(N.exp(1j*data))
ave = cmath.phase(i) # correct and exact mean for periodic data
wrong_ave = N.mean(d)
i /= float(data.size)
stdev = -2. * N.log(N.sqrt(i.real**2 + i.imag**2))
wrong_stdev = N.std(d)
bins = N.linspace(-N.pi, N.pi, 360)
counts, bins = N.histogram(data, bins, normed=False)
# consider it weighted vector addition
nz = N.nonzero(counts)[0]
weight = counts[nz]
i = N.sum(weight * N.exp(1j*bins[nz])/len(nz))
pave = cmath.phase(i) # correct and approximated mean for periodic data
i /= sum(weight)/float(len(nz))
pstdev = -2. * N.log(N.sqrt(i.real**2 + i.imag**2))
print
print 'scipy: %12.3f (mean) %12.3f (stdev)' % (ST.circmean(data)/deg, \
ST.circstd(data)/deg)
When run, it gives the following results:
mean: 175.840 85.843 175.360
stdev: 0.472 151.785 0.430
scipy: 175.840 (mean) 3.673 (stdev)
A few comments now: the first column gives the mean/stdev calculated directly. As can be seen, the mean agrees well with scipy.stats.circmean (thanks JoeKington for pointing it out). Unfortunately the stdev differs; I will look at it later. The second column gives completely wrong results (the non-periodic mean/std from numpy obviously do not work here). The third column gives what I wanted to obtain from the histogram data (@JoeKington: my raw data won't fit in my computer's memory; @dmytro: thanks for your input: of course the bin size will influence the result, but in my application I don't have much choice, i.e. I have to reduce the data somehow). As can be seen, the mean (third column) is properly calculated; the stdev needs further attention :)
Have a look at scipy.stats.circmean and scipy.stats.circstd.
Or do you only have the histogram counts, and not the "raw" data? If so, you could fit a Von Mises distribution to your histogram counts and approximate the mean and stddev in that way.
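Not part of the original answer, but one rough way to do that fit when only counts are left: rebuild pseudo-samples by repeating each bin centre according to its count (which is exactly the "convert the histogram back to a series" idea from the question, and only as accurate as the binning), then hand them to scipy.stats.vonmises.fit with the scale fixed to 1. The 1/sqrt(kappa) stddev approximation holds for large kappa.
import numpy as N
from scipy.stats import vonmises, circmean, circstd

deg = N.pi/180.
centers = (bins[:-1] + bins[1:]) / 2.
pseudo = N.repeat(centers, counts)   # one pseudo-sample per count, placed at the bin centre

kappa, loc, scale = vonmises.fit(pseudo, fscale=1)
print('von Mises mean (deg):', loc/deg)
print('approx. stddev (deg):', 1./N.sqrt(kappa)/deg)

# cross-check against the circular statistics of the same pseudo-samples
print('circmean (deg):', circmean(pseudo, high=N.pi, low=-N.pi)/deg)
print('circstd  (deg):', circstd(pseudo, high=N.pi, low=-N.pi)/deg)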
Here's how to get an approximation.
Since Var(x) = <x^2> - <x>^2, we have:
meanX = N.sum(counts * bins[:-1]) / N.sum(counts)
meanX2 = N.sum(counts * bins[:-1]**2) / N.sum(counts)
std = N.sqrt(meanX2 - meanX**2)
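A circular analogue of the same idea (my addition, not part of the answer above) reads the mean resultant vector straight off the histogram by using the counts as weights on the bin centres; the stddev formula sqrt(-2*ln R) is the one scipy.stats.circstd uses:
import numpy as N

centers = (bins[:-1] + bins[1:]) / 2.
R = N.sum(counts * N.exp(1j*centers)) / N.sum(counts)   # mean resultant vector

circ_mean = N.angle(R)                      # circular mean, in radians
circ_std = N.sqrt(-2. * N.log(N.abs(R)))    # circular standard deviation
print(circ_mean, circ_std)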