scipy.optimize with SLSQP: 'Singular matrix C in LSQ subproblem' (Python)

I'm trying to maximize a dot product of two vectors (by minimizing its negative), but it doesn't work and I have no idea why. Can someone please help me?
I have a matrix c of this form:
c = [[c11, c12, c13, c14, c15],
     [c21, c22, c23, c24, c25]]
I want to get a matrix p of this form:
p = [[p11, p12, p13, p14, p15],
     [p21, p22, p23, p24, p25]]
I want to maximize this value:
c11*p11 + c12*p12 + c13*p13 + c14*p14 + c15*p15 + c21*p21 + c22*p22 + c23*p23 + c24*p24 + c25*p25
To get that, I convert c and p to 1-D vectors and take the dot product, so the function to maximize is:
f(p) = c.dot(p)
The constraints are:
p11 + p12 + p13 + p14 + p15 = 1
p21 + p22 + p23 + p24 + p25 = 1
and every element in p must be between 0.01 and 0.99.
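For concreteness, here is a minimal sketch (my own illustration, with made-up numbers) of the flattening described above:
import numpy as np

c = np.array([[1., 2., 3., 4., 5.],
              [5., 4., 3., 2., 1.]])   # made-up coefficient matrix
p = np.full((2, 5), 0.2)               # every row sums to 1

value = c.ravel().dot(p.ravel())       # the objective f(p) = c.dot(p)
print(p.sum(axis=1))                   # [1. 1.] -> row constraints hold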
I have tried scipy.optimize.linprog and it works:
import numpy as np
from scipy.optimize import linprog

c = np.array([0., 0., 0., 0., 0.,
              0., 20094.21019108, 4624.08079143, 6625.51724138, 3834.81081081])
A_eq = np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]])
b_eq = np.array([1, 1])
res = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0.01, 0.99))
res
Out[561]:
     fun: -19441.285871873002
 message: 'Optimization terminated successfully.'
     nit: 13
   slack: array([0.03, 0.98, 0.98, 0.98, 0.98, 0.98, 0.03, 0.98, 0.98, 0.98,
                 0.  , 0.  , 0.95, 0.  , 0.  , 0.  , 0.  , 0.  , 0.  , 0.  ])
  status: 0
 success: True
       x: array([0.96, 0.01, 0.01, 0.01, 0.01, 0.01, 0.96, 0.01, 0.01, 0.01])
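As a side note (my addition, not part of the original post): on recent SciPy versions the same linear program can be written with explicit per-variable bounds and the default HiGHS solver:
res = linprog(-c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.01, 0.99)] * c.size, method='highs')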
But I'm trying to use scipy.optimize.minimize with SLSQP instead, and that's where I get 'Singular matrix C in LSQ subproblem'. Here is what I've done:
import numpy as np
from scipy.optimize import minimize

def build_objective(ck, sign=-1.00):
    """
    Builds the objective function for matrix ck.
    """
    # Here I turn my c matrix into a 1-D vector
    ck = np.concatenate(ck)
    def objective(P):
        return sign*(ck.dot(P))
    return objective
def build_constraint_rows(ck):
    """
    Builds the constraint functions that specify that the sum of the
    proportions for each bin equals 1.
    """
    ncol = ck.shape[1]
    nrow = ck.shape[0]
    constrain_dict = []
    for i in range(nrow):
        vector = np.zeros((nrow, ncol))
        vector[i, :] = 1
        vector = np.concatenate(vector)
        def row_constrain(P):
            return 1 - vector.dot(P)
        constrain_dict.append({'type': 'eq', 'fun': row_constrain})
    return constrain_dict
# Matrix: notice that this is not in vector form yet
c = np.array([[0., 0., 0., 0., 0.],
              [0., 20094.21019108, 4624.08079143, 6625.51724138, 3834.81081081]])
# I need some initial p matrix for 'minimize'. I find the highest value in
# each row, assign it a proportion of 0.96, and give the rest 0.01 so that
# each row sums to 1.
P_initial = np.ones(c.shape)*0.01
nrow = c.shape[0]
for i in range(nrow):
    index = np.where(c[i, ] == np.max(c[i, ]))[0]
    if index.shape[0] > 1:
        index = int(np.random.choice(index, size=1))
    else:
        index = int(index)
    P_initial[i, index] = 0.96
# I turn P_initial into vector form
P_initial = np.concatenate(P_initial)
# These are the bounds of each p value
b = (0.01, 0.99)
bnds = (b,)*c.size
# I then use my previous functions
objective_fun = build_objective(c)
cons = build_constraint_rows(c)
res = minimize(objective_fun, P_initial, method='SLSQP',
               bounds=bnds, constraints=cons)
This is my final result:
res
Out[546]:
     fun: -19434.501741138763
     jac: array([     0.,      0.,      0.,      0.,      0.,      0.,
                 -20094.21020508, -4624.08056641, -6625.51708984, -3834.81079102])
 message: 'Singular matrix C in LSQ subproblem'
    nfev: 24
     nit: 2
    njev: 2
  status: 6
 success: False
       x: array([0.96      , 0.01      , 0.01      , 0.01      , 0.01      ,
                 0.01020202, 0.95962502, 0.01006926, 0.01001178, 0.01009192])
Please help me understand what I'm doing wrong.
Thank you in advance,
Karol
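Editorial note (not part of the original question, but the usual diagnosis for this error): build_constraint_rows suffers from Python's late-binding closures. Every row_constrain closes over the same vector variable, so by the time SLSQP evaluates the constraints they all use the last row's vector. The duplicated equality constraints make the constraint Jacobian rank-deficient, which SLSQP reports as 'Singular matrix C in LSQ subproblem'. A minimal sketch of the fix binds each row's vector at definition time via a default argument:
def build_constraint_rows(ck):
    nrow, ncol = ck.shape
    constrain_dict = []
    for i in range(nrow):
        vector = np.zeros((nrow, ncol))
        vector[i, :] = 1
        vector = np.concatenate(vector)
        # vector=vector freezes this row's vector inside the closure
        def row_constrain(P, vector=vector):
            return 1 - vector.dot(P)
        constrain_dict.append({'type': 'eq', 'fun': row_constrain})
    return constrain_dict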

Related

Changing the values of a matrix above a threshold in python

I have a matrix:
matrix = np.array([[[0,0.5,0.6],[0.9,1.2,0]],[[0,0.5,0.6],[0.9,1.2,0]]])
I want to replace all the values 0.55 < x < 0.95 by 0.55.
PS: My question is similar to an existing question, but the answer there does not work in my case.
You can use np.where:
matrix = np.array([[[0,0.5,0.6],[0.9,1.2,0]],[[0,0.5,0.6],[0.9,1.2,0]]])
matrix[np.where((matrix > 0.55) & (matrix < 0.95))] = 0.55
# Or
# matrix[(matrix > 0.55) & (matrix < 0.95)] = 0.55
Output:
>>> matrix
array([[[0.  , 0.5 , 0.55],
        [0.55, 1.2 , 0.  ]],

       [[0.  , 0.5 , 0.55],
        [0.55, 1.2 , 0.  ]]])
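A non-destructive variant (my addition) uses the three-argument form of np.where to build a new array instead of modifying matrix in place:
clipped = np.where((matrix > 0.55) & (matrix < 0.95), 0.55, matrix)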

Drop NaN in a for loop for each column (Longstaff Schwartz Monte Carlo)

I will try to explain my problem. I have two DataFrames, Df1 and Df2.
Each of them has 3 columns and 4 rows.
I fit a quadratic function with np.polyfit:
M = 3
for t in range(M-1, 0, -1):
    regs = np.polyfit(Df1[:, t], Df2[:, t+1], 2)
    C = np.polyval(regs, Df1[:, t])
But I want to use only the values which are smaller than 1.1:
Df1[Df1 < 1.1]
Now I have something like this:
[1.  , 1.09, 1.08,  NaN]
[1.  , 1.  , 1.07, 1.04]
[1.  ,  NaN, 1.01,  NaN]
[1.  , 0.78,  NaN, 0.95]
And my Df2 looks like
[0.1 , 0., 0.08, 0.]
[0.1 , 0.11, 0., 0.09]
[0.1 , 0.33, 0.22, 0.]
[0.1 , 0.09, 0.108, 0.]
So what I want to do is: for each column of Df1, wherever there is a NaN,
skip that entry in the calculation.
For example, for the third column this would give:
X = [1.08, 1.07, 1.01]
Y = [0., 0.09, 0.]
I tried this:
S = [[1., 1.09, 1.08, 1.34], [1., 1.16, 1.26, 1.54], [1., 1.22, 1.07, 1.03], [1., 0.93, 0.97, 0.92],
     [1., 1.11, 1.56, 1.52], [1., 0.76, 0.77, 0.9], [1., 0.92, 0.84, 1.01], [1., 0.88, 1.22, 1.34]]
K = 1.1
Sn = np.asarray(S)
r = 0.06
T = 1
M = 3
dt = T/M
h = np.maximum(K - Sn, 0)
V = np.copy(h)
disk = np.exp(-r*dt)
for i in range(M-1, 0, -1):
    reg = np.polyfit(Sn[:, i], V[:, i+1]*disk, 2)
    C = np.polyval(reg, Sn[:, i])
    V[:, i] = np.where(C > h[:, i], V[:, i+1]*disk, h[:, i])
C0 = disk * 1/8 * np.sum(V[:, 1])
And my result for C0 is 0.11973...
This is the Longstaff-Schwartz Monte Carlo algorithm for pricing American options.
But in the paper by Longstaff and Schwartz they get a slightly different result:
https://people.math.ethz.ch/~hjfurrer/teaching/LongstaffSchwartzAmericanOptionsLeastSquareMonteCarlo.pdf
(Page120)
They get 0.114, but I don't see my mistake.
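A sketch of one way to do the per-column NaN filtering (my addition, assuming Df1 and Df2 are NumPy arrays of the same shape): mask each column with np.isnan before fitting.
import numpy as np

for t in range(M-1, 0, -1):
    x = Df1[:, t]
    y = Df2[:, t+1]
    mask = ~np.isnan(x)                    # rows where Df1 has a valid value
    regs = np.polyfit(x[mask], y[mask], 2)
    C = np.polyval(regs, x[mask])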

Linear least-squares solution for 3d inputs

Problem
Say I have two arrays with the following shapes:
y.shape is (z, b). Picture this as a collection of z (b,) y vectors.
x.shape is (z, b, c). Picture this as a collection of z (b, c) multivariate x matrices.
My intent is to find the z independent vectors of least-squares coefficient solutions. I.e. the first solution is from regressing y[0] on x[0], where those inputs have shape (b, ) and (b, c) respectively. (b observations, c features.) The result would be shape (z, c).
Some example data
np.random.seed(123)
x = np.random.randn(190, 20, 3)
y = np.random.randn(190, 20) # Assumes no intercept term
# First vector of coefficients
np.linalg.lstsq(x[0], y[0])[0]
# array([-0.12823781, -0.3055392 , 0.11602805])
# Last vector of coefficients
np.linalg.lstsq(x[-1], y[-1])[0]
# array([-0.02777503, -0.20425779, 0.22874169])
NumPy's least-squares solver lstsq can't operate on these. (With my intended result being shape (190, 3), or 190 vectors of 3 coefficients each. Each (3,) vector is one coefficient set from regressions with n=20.)
Is there a workaround to get to the coefficient matrices wrapped into one result array? I'm thinking possibly of the matrix formulation:
For a 1d y and 2d x this would just be:
def coefs(y, x):
    return np.dot(np.linalg.inv(np.dot(x.T, x)), np.dot(x.T, y))
but I'm having trouble getting this to accept a 2d y and 3d x as above.
Lastly, I'm curious as to why lstsq has trouble here. Is there a simple answer as to why the inputs must be at most 2d?
Here is a demo to illustrate:
- the problems mentioned in my comments
- a mostly empirical analysis of looped-lstsq vs. one-step-embedded-lstsq
(with a somewhat surprising result at the end, which is to be taken with a grain of salt):
Code
import numpy as np
import scipy.sparse as sp
from sklearn.datasets import make_regression
from time import perf_counter as pc

np.set_printoptions(edgeitems=3, infstr='inf',
                    linewidth=160, nanstr='nan', precision=1,
                    suppress=False, threshold=1000, formatter=None)

""" Create task """
Z, B, C = 4, 3, 2
Zs = []
Bs = []
for i in range(Z):
    X, y = make_regression(n_samples=B, n_features=C, random_state=i)
    Zs.append(X)
    Bs.append(y)
Zs = np.array(Zs)
Bs = np.array(Bs)

""" Independent looping """
print('LOOPED CALLS')
start = pc()
result = np.empty((Z, C))
for z in range(Z):
    result[z] = np.linalg.lstsq(Zs[z], Bs[z])[0]
end = pc()
print('lhs-shape: ', Zs.shape)
print('lhs-dense-fill-ratio: ', np.count_nonzero(Zs) / np.prod(Zs.shape))
print('used time: ', end-start)
print(result)

""" Embedding in one """
print('EMBEDDING INTO ONE CALL')
Zs_ = sp.block_diag([Zs[i] for i in range(Z)]).todense()  # convenient to use scipy.sparse
# oops: there is a dense one too: -> scipy.linalg.block_diag
Bs_ = Bs.flatten()
start = pc()  # one could argue if the transform above should be timed too!
result_ = np.linalg.lstsq(Zs_, Bs_)[0]
end = pc()
print('lhs-shape: ', Zs_.shape)
print('lhs-dense-fill-ratio: ', np.count_nonzero(Zs_) / np.prod(Zs_.shape))
print('used time: ', end-start)
print(result_)
Output
LOOPED CALLS
lhs-shape: (4, 3, 2)
lhs-dense-fill-ratio: 1.0
used time: 0.0005415275241778155
[[ 89.2 43.8]
[ 68.5 41.9]
[ 61.9 20.5]
[ 5.1 44.1]]
EMBEDDING INTO ONE CALL
lhs-shape: (12, 8)
lhs-dense-fill-ratio: 0.25
used time: 0.00015907748341232328
[ 89.2 43.8 68.5 41.9 61.9 20.5 5.1 44.1]
lstsq problem-dimensions for each case
While the original data looks like:
[[[ 2.2 1. ]
[-1. 1.9]
[ 0.4 1.8]]
[[-1.1 -0.5]
[-2.3 0.9]
[-0.6 1.6]]
[[ 1.6 -2.1]
[-0.1 -0.4]
[-0.8 -1.8]]
[[-0.3 -0.4]
[ 0.1 -1.9]
[ 1.8 0.4]]]
[[ 242.7 -5.4 112.9]
[ -95.7 -121.4 26.2]
[ 57.9 -12. -88.8]
[ -17.1 -81.6 28.4]]
and each solve looks like:
LHS
[[ 2.2 1. ]
[-1. 1.9]
[ 0.4 1.8]]
RHS
[ 242.7 -5.4 112.9]
the embedded problem (one solving-step) looks like:
LHS
[[ 2.2 1. 0. 0. 0. 0. 0. 0. ]
[-1. 1.9 0. 0. 0. 0. 0. 0. ]
[ 0.4 1.8 0. 0. 0. 0. 0. 0. ]
[ 0. 0. -1.1 -0.5 0. 0. 0. 0. ]
[ 0. 0. -2.3 0.9 0. 0. 0. 0. ]
[ 0. 0. -0.6 1.6 0. 0. 0. 0. ]
[ 0. 0. 0. 0. 1.6 -2.1 0. 0. ]
[ 0. 0. 0. 0. -0.1 -0.4 0. 0. ]
[ 0. 0. 0. 0. -0.8 -1.8 0. 0. ]
[ 0. 0. 0. 0. 0. 0. -0.3 -0.4]
[ 0. 0. 0. 0. 0. 0. 0.1 -1.9]
[ 0. 0. 0. 0. 0. 0. 1.8 0.4]]
RHS
[ 242.7 -5.4 112.9 -95.7 -121.4 26.2 57.9 -12. -88.8 -17.1 -81.6 28.4]
There is no way, given the assumptions / standard form of lstsq, to embed this independence assumption without introducing a lot of zeros!
lstsq is:
- not able to exploit sparsity, as the core algorithm is dense
  (take a look at the transformed shape: this will be heavy in terms of memory and computation!)
- not able to use information from fit 0 to speed up something in fit 1
  (they are independent after all; no information gain in theory)
- able to vectorize a lot (but that's not helping in general)
Your example-shapes
Trimmed output for your specific shapes, this time also testing a sparse solver:
Added code (at the end)
print('EMBEDDING -> sparse-solver')
Zs_ = sp.csc_matrix(Zs_) # sparse!
start = pc()
result__ = sp.linalg.lsmr(Zs_, Bs_)[0]
end = pc()
print('lhs-shape: ', Zs_.shape)
print('lhs-dense-fill-ratio: ', Zs_.nnz / np.prod(Zs_.shape))
print('used time: ', end-start)
print(result__)
Output
LOOPED CALLS
lhs-shape: (190, 20, 3)
lhs-dense-fill-ratio: 1.0
used time: 0.01716980329027777
[ 11.9 31.8 29.6]
...
[ 44.8 28.2 62.3]]
EMBEDDING INTO ONE CALL
lhs-shape: (3800, 570)
lhs-dense-fill-ratio: 0.00526315789474
used time: 0.6774500271820254
[ 11.9 31.8 29.6 ... 44.8 28.2 62.3]
EMBEDDING -> sparse-solver
lhs-shape: (3800, 570)
lhs-dense-fill-ratio: 0.00526315789474
used time: 0.0038423098412817547 # a bit of a surprise
[ 11.9 31.8 29.6 ... 44.8 28.2 62.3]
Conclusion
In general: solve independently!
In some cases, the task above will be solved faster with the sparse-solver approach, but analysis here is hard because we are comparing two completely different algorithms (direct vs. iterative), and the results might change dramatically for other data.
Here is the linear algebra solution, with speed right on par with @sascha's looped version for smaller arrays.
print('Matrix formulation')
start = pc()
result = np.squeeze(np.matmul(np.linalg.inv(np.matmul(Zs.swapaxes(1, 2), Zs)),
                              np.matmul(Zs.swapaxes(1, 2), np.atleast_3d(Bs))))
end = pc()
print('used time: ', end-start)
print(result)
Output:
Matrix formulation
used time: 0.00015713176480858237
[[ 89.2 43.8]
[ 68.5 41.9]
[ 61.9 20.5]
[ 5.1 44.1]]
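A side note of mine (not in the original answer): np.linalg.solve broadcasts over the leading axis just like np.matmul, so the explicit inverse can be avoided, which is generally better conditioned:
# Sketch: batched normal equations without forming an explicit inverse.
XtX = np.matmul(Zs.swapaxes(1, 2), Zs)                  # shape (Z, C, C)
Xty = np.matmul(Zs.swapaxes(1, 2), np.atleast_3d(Bs))   # shape (Z, C, 1)
result = np.squeeze(np.linalg.solve(XtX, Xty))          # shape (Z, C)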
However, @sascha's answer wins out easily for much larger inputs, especially as the size of the third dimension grows (number of exogenous variables/features).
Z, B, C = 400, 300, 20
Zs = []
Bs = []
for i in range(Z):
    X, y = make_regression(n_samples=B, n_features=C, random_state=i)
    Zs.append(X)
    Bs.append(y)
Zs = np.array(Zs)
Bs = np.array(Bs)
# --------
print('Matrix formulation')
start = pc()
result = np.squeeze(np.matmul(np.linalg.inv(np.matmul(Zs.swapaxes(1, 2), Zs)),
                              np.matmul(Zs.swapaxes(1, 2), np.atleast_3d(Bs))))
end = pc()
print('used time: ', end-start)
print(result)
# --------
print('Looped calls')
start = pc()
result = np.empty((Z, C))
for z in range(Z):
    result[z] = np.linalg.lstsq(Zs[z], Bs[z])[0]
end = pc()
print('used time: ', end-start)
print(result)
Output:
Matrix formulation
used time: 0.24000779996413257
[[ 1.2e+01 1.3e-15 6.3e+01 ..., -8.9e-15 5.3e-15 -1.1e-14]
[ 5.8e+01 2.7e-14 -4.8e-15 ..., 8.5e+01 -1.5e-14 1.8e-14]
[ 1.2e+01 -1.2e-14 4.4e-16 ..., 6.0e-15 8.6e+01 6.0e+01]
...,
[ 2.9e-15 6.6e+01 1.1e-15 ..., 9.8e+01 -2.9e-14 8.4e+01]
[ 2.8e+01 6.1e+01 -1.2e-14 ..., -2.5e-14 6.3e+01 5.9e+01]
[ 7.0e+01 3.3e-16 8.4e+00 ..., 4.1e+01 -6.2e-15 5.8e+01]]
Looped calls
used time: 0.17400113389658145
[[ 1.2e+01 7.1e-15 6.3e+01 ..., -2.8e-14 1.1e-14 -4.8e-14]
[ 5.8e+01 -5.7e-14 -4.9e-14 ..., 8.5e+01 -5.3e-15 6.8e-14]
[ 1.2e+01 3.6e-14 4.5e-14 ..., -3.6e-15 8.6e+01 6.0e+01]
...,
[ 6.3e-14 6.6e+01 -1.4e-13 ..., 9.8e+01 2.8e-14 8.4e+01]
[ 2.8e+01 6.1e+01 -2.1e-14 ..., -1.4e-14 6.3e+01 5.9e+01]
[ 7.0e+01 -1.1e-13 8.4e+00 ..., 4.1e+01 -9.4e-14 5.8e+01]]

Add headings to rows and columns of matrices in Python

I have 2 matrices and I want to add headings to them. How can I add the headings and refer to each cell by the heading of its row and the heading of its column? I also have to add each element of a row of the first matrix to the corresponding element of a column of the second matrix. I did that below, but is there a better way (a loop) to do it instead of what I did? It should return the row and column headings together with the sum (for example: xy: 0.022).
Hint: the headings of the rows must be different from the headings of the columns.
And I prefer not to use pandas.
Here is my code:
import numpy as np
T = np.array([0,0.012,0.054,0,1,0.03,0.08,0.14,0.02]).reshape(3,3)
#print (T)
#print ('----------------------------')
W = np.array([1,0,0.03,0.01,0.099,0.020,2,0,0.05]).reshape(3,3)
#print (W)
x = T[0][0] + W[0][0]
y = T[0][1] + W[1][0]
z = T[0][2] + W[2][0]
v = T[1][0] + W[0][1]
n = T[1][1] + W[1][1]
m = T[1][2] + W[2][1]
s = T[2][0] + W[0][2]
g = T[2][1] + W[1][2]
k = T[2][2] + W[2][2]
print (x, y, z, v, n, m, s, g, k)
This is one way to do it (updated to Python 3):
>>> import itertools
>>> import numpy as np
>>> T = np.array([0,0.012,0.054,0,1,0.03,0.08,0.14,0.02]).reshape(3,3)
>>> T
array([[0.   , 0.012, 0.054],
       [0.   , 1.   , 0.03 ],
       [0.08 , 0.14 , 0.02 ]])
>>> W = np.array([1,0,0.03,0.01,0.099,0.020,2,0,0.05]).reshape(3,3)
>>> W
array([[1.   , 0.   , 0.03 ],
       [0.01 , 0.099, 0.02 ],
       [2.   , 0.   , 0.05 ]])
>>> Y = T + W.T  # The additions you do can be expressed like this.
>>> Y
array([[1.   , 0.022, 2.054],
       [0.   , 1.099, 0.03 ],
       [0.11 , 0.16 , 0.07 ]])
>>> labels = ["".join(p) for p in itertools.product(["x","y","z"], ["a","b","c"])]
>>> labels
['xa', 'xb', 'xc', 'ya', 'yb', 'yc', 'za', 'zb', 'zc']
>>> labels = np.array(labels).reshape((3,3))
>>> labels
array([['xa', 'xb', 'xc'],
       ['ya', 'yb', 'yc'],
       ['za', 'zb', 'zc']], dtype='<U2')
>>> for (i, j), value in np.ndenumerate(Y):
...     print(labels[i, j], value)
...
...
xa 1.0
xb 0.022
xc 2.054
ya 0.0
yb 1.099
yc 0.03
za 0.11
zb 0.16
zc 0.07
I am not sure on which format you want the headings, but I think this method makes sense.
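As a small extension of my own (not part of the original answer), the labelled sums can be collected into a dict so each cell is addressable by its heading:
>>> cells = {labels[i, j]: Y[i, j] for i, j in np.ndindex(*Y.shape)}
>>> cells['xb']   # -> 0.022, i.e. the T[0][1] + W[1][0] sum from the question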

Fastest way to compute upper-triangular matrix of geometric series (Python)

Thanks in advance for the help.
Using Python (mostly numpy), I am trying to compute an upper-triangular matrix where each row "j" is the first j-terms of a geometric series, all rows using the same parameter.
For example, if my parameter is B (where abs(B) <= 1, i.e. B in [-1, 1]), then row 1 would be [1 B B^2 B^3 ... B^(N-1)], row 2 would be [0 1 B B^2 ... B^(N-2)], ..., and row N would be [0 0 0 ... 1].
This computation is key to a Bayesian Metropolis-Gibbs sampler, and so needs to be done thousands of times for new values of "B".
I have currently tried this two ways:
Method 1 - Mostly Vectorized:
B_Matrix = np.triu(np.dot(np.reshape(B**(-1*np.array(range(N))),(N,1)),np.reshape(B**(np.array(range(N))),(1,N))))
Essentially, this is the upper triangle part of a product of an Nx1 and 1xN set of matrices:
upper triangle ([1 B^(-1) B^(-2) ... B^(-(N-1))]' * [1 B B^2 B^3 ... B^(N-1)])
This works great for small N (algebraically it is correct), but for large N it errors out, and it also errors for B = 0 (which should be allowed). I believe this stems from B^(-N) ~ inf for small B and large N.
Method 2:
B_Matrix = np.zeros((N, N))
B_Row_1 = B**np.array(range(N))
for n in range(N):
    B_Matrix[n, n:] = B_Row_1[0:N-n]
So that just fills in the matrix row by row, but uses a loop which slows things down.
I was wondering if anyone had run into this before, or had any better ideas on how to compute this matrix in a faster way.
I've never posted on stackoverflow before, but didn't see this question anywhere, and thought I'd ask.
Let me know if there's a better place to ask this, and if I should provide anymore detail.
You could use scipy.linalg.toeplitz (from scipy.linalg import toeplitz):
In [12]: n = 5
In [13]: b = 0.5
In [14]: toeplitz(b**np.arange(n), np.zeros(n)).T
Out[14]:
array([[ 1. , 0.5 , 0.25 , 0.125 , 0.0625],
[ 0. , 1. , 0.5 , 0.25 , 0.125 ],
[ 0. , 0. , 1. , 0.5 , 0.25 ],
[ 0. , 0. , 0. , 1. , 0.5 ],
[ 0. , 0. , 0. , 0. , 1. ]])
If your use of the array is strictly "read only", you can play tricks with numpy strides to quickly create an array that uses only 2*n-1 elements (instead of n^2):
In [55]: from numpy.lib.stride_tricks import as_strided
In [56]: def make_array(b, n):
....: vals = np.zeros(2*n - 1)
....: vals[n-1:] = b**np.arange(n)
....: a = as_strided(vals[n-1:], shape=(n, n), strides=(-vals.strides[0], vals.strides[0]))
....: return a
....:
In [57]: make_array(0.5, 4)
Out[57]:
array([[ 1. , 0.5 , 0.25 , 0.125],
[ 0. , 1. , 0.5 , 0.25 ],
[ 0. , 0. , 1. , 0.5 ],
[ 0. , 0. , 0. , 1. ]])
If you will modify the array in-place, make a copy of the result returned by make_array(b, n). That is, arr = make_array(b, n).copy().
The function make_array2 incorporates the suggestion @Jaime made in the comments:
In [30]: def make_array2(b, n):
....: vals = np.zeros(2*n-1)
....: vals[n-1] = 1
....: vals[n:] = b
....: np.cumproduct(vals[n:], out=vals[n:])
....: a = as_strided(vals[n-1:], shape=(n, n), strides=(-vals.strides[0], vals.strides[0]))
....: return a
....:
In [31]: make_array2(0.5, 4)
Out[31]:
array([[ 1. , 0.5 , 0.25 , 0.125],
[ 0. , 1. , 0.5 , 0.25 ],
[ 0. , 0. , 1. , 0.5 ],
[ 0. , 0. , 0. , 1. ]])
make_array2 is more than twice as fast as make_array:
In [35]: %timeit make_array(0.99, 600)
10000 loops, best of 3: 23.4 µs per loop
In [36]: %timeit make_array2(0.99, 600)
100000 loops, best of 3: 10.7 µs per loop
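For completeness, here is another broadcasting-based variant (my sketch, not from the original answers) that avoids Method 1's negative powers and therefore also works for B = 0:
import numpy as np

def geo_upper(b, n):
    # exponents[i, j] = j - i, clipped at 0 so no negative powers occur;
    # np.triu then zeroes out the strict lower triangle.
    idx = np.arange(n)
    return np.triu(b ** np.clip(idx - idx[:, None], 0, None))

geo_upper(0.5, 5)   # same matrix as the toeplitz example above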
