I have a transition matrix for which I want to calculate a steady state vector. The code I'm using is adapted from this question, and it works well for matrices of normal size:
def steady_state(matrix):
    dim = matrix.shape[0]
    q = (matrix - np.eye(dim))
    ones = np.ones(dim)
    q = np.c_[q, ones]
    qtq = np.dot(q, q.T)
    bqt = np.ones(dim)
    return np.linalg.solve(qtq, bqt)
However, the matrix I'm working with has about 1.5 million rows and columns. It isn't a sparse matrix either; most entries are small but non-zero. Of course, just trying to build that matrix throws a memory error.
How can I modify the above code to work with huge matrices? I've heard of solutions like PyTables, but I'm not sure how to apply them, and I don't know if they would work for tasks like np.linalg.solve.
Being very new to numpy and very inexperienced with linear algebra, I'd very much appreciate an example of what to do in my case. I'm open to using something other than numpy, and even something other than Python if needed.
Here are some ideas to start with:
We can use the fact that any initial probability vector will converge on the steady state under time evolution (assuming the chain is ergodic, aperiodic, regular, etc.).
For small matrices we could use
def steady_state(matrix):
    dim = matrix.shape[0]
    prob = np.ones(dim) / dim
    other = np.zeros(dim)
    while np.linalg.norm(prob - other) > 1e-3:
        other = prob.copy()
        prob = other @ matrix
    return prob
(I think the convention assumed by the function in the question is that distributions go in rows.)
Now we can use the fact that matrix multiplication and norm can be done chunk by chunk:
def steady_state_chunk(matrix, block_in=100, block_out=10):
    dim = matrix.shape[0]
    prob = np.ones(dim) / dim
    error = 1.
    while error > 1e-3:
        error = 0.
        other = prob.copy()
        for i in range(0, dim, block_out):
            outs = np.s_[i:i+block_out]
            vec_out = np.zeros(block_out)
            for j in range(0, dim, block_in):
                ins = np.s_[j:j+block_in]
                vec_out += other[ins] @ matrix[ins, outs]
            error += np.linalg.norm(vec_out - prob[outs])**2
            prob[outs] = vec_out
        error = np.sqrt(error)
    return prob
This should use less memory for temporaries, though you could do better by using the out parameter of np.matmul.
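For example, here's a rough, untested sketch of what the inner accumulation in steady_state_chunk might look like with a preallocated output buffer, so each block product is written into the same temporary instead of allocating a new one each pass:

tmp = np.empty((1, block_out))  # reusable output buffer, allocated once per outer block
for j in range(0, dim, block_in):
    ins = np.s_[j:j+block_in]
    # write the block product straight into tmp rather than allocating a fresh temporary
    np.matmul(other[None, ins], matrix[ins, outs], out=tmp)
    vec_out += tmp[0]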
I should add something to deal with the last slice in each loop, in case dim isn't divisible by block_*, but I hope you get the idea.
For arrays that don't fit in memory to start with, you can apply the tools from the links in the comments above.
I have a set of 46 years worth of rainfall data. It's in the form of 46 numpy arrays each with a shape of 145, 192, so each year is a different array of maximum rainfall data at each lat and lon coordinate in the given model.
I need to create a global map of tau values by doing an M-K test (Mann-Kendall) for each coordinate over the 46 years.
I'm still learning python, so I've been having trouble finding a way to go through all the data in a simple way that doesn't involve me making 27840 new arrays, one for each coordinate.
So far I've looked into how to use scipy.stats.kendalltau and using the definition from here: https://github.com/mps9506/Mann-Kendall-Trend
EDIT:
To clarify and add a little more detail, I need to perform a test for each coordinate and not just for each file individually. For example, for the first M-K test, I would want my x=46 and I would want y=data1[0,0],data2[0,0],data3[0,0]...data46[0,0]. Then I would repeat this process for every single coordinate in each array. In total the M-K test would be done 27840 times, leaving me with 27840 tau values that I can then plot on a global map.
EDIT 2:
I'm now running into a different problem. Going off of the suggested code, I have the following:
for i in range(145):
    for j in range(192):
        out[i,j] = mk_test(yrmax[:,i,j], alpha=0.05)
print out
I used numpy.stack to stack all 46 arrays into a single array (yrmax) with shape (46L, 145L, 192L). I've tested it out and it calculates p and tau correctly if I change the code from out[i,j] to just out. However, doing this messes up the for loop so it only keeps the results from the last coordinate instead of all of them. And if I leave the code as it is above, I get the error: TypeError: list indices must be integers, not tuple
My first guess was that it has to do with mk_test and how the information is supposed to be returned in the definition. So I've tried altering the code from the link above to change how the data is returned, but I keep getting errors relating back to tuples. So now I'm not sure where it's going wrong and how to fix it.
EDIT 3:
One more clarification I thought I should add. I've already modified the definition in the link so it returns only the two number values I want for creating maps, p and z.
I don't think this is as big an ask as you may imagine. From your description it sounds like you don't actually want the scipy kendalltau, but the function in the repository you posted. Here is a little example I set up:
from time import time

import numpy as np

from mk_test import mk_test

data = np.array([np.random.rand(145, 192) for _ in range(46)])
mk_res = np.empty((145, 192), dtype=object)

start = time()
for i in range(145):
    for j in range(192):
        mk_res[i, j] = mk_test(data[:, i, j], alpha=0.05)
print(f'Elapsed Time: {time() - start} s')
Elapsed Time: 35.21990394592285 s
My system is a MacBook Pro with a 2.7 GHz Intel Core i7 and 16 GB of RAM, so nothing special.
Each entry in the mk_res array (shape 145, 192) corresponds to one of your coordinate points and contains an entry like so:
array(['no trend', 'False', '0.894546014835', '0.132554125342'], dtype='<U14')
One thing that might be useful would be to modify the code in mk_test.py to return all numerical values. So instead of 'no trend'/'positive'/'negative' you could return 0/1/-1, and 1/0 for True/False and then you wouldn't have to worry about the whole object array type. I don't know what kind of analysis you might want to do downstream but I imagine that would preemptively circumvent any headaches.
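For instance, a small hypothetical wrapper along those lines, which leaves mk_test itself alone and just maps its outputs to numbers (assuming it returns trend, h, p, z in that order, as the printed entry above suggests):

def mk_test_numeric(x, alpha=0.05):
    # hypothetical helper: convert mk_test's string/bool outputs to plain numbers
    trend, h, p, z = mk_test(x, alpha=alpha)
    trend_code = {'no trend': 0, 'positive': 1, 'negative': -1}.get(trend, 0)
    return trend_code, int(h), float(p), float(z)

The result could then live in an ordinary float array of shape (145, 192, 4) instead of an object array.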
Thanks to the answers provided and some work I was able to work out a solution that I'll provide here for anyone else that needs to use the Mann-Kendall test for data analysis.
The first thing I needed to do was flatten the original array I had into a 1D array. I know there is probably an easier way to go about doing this, but I ultimately used the following code based on code Grr suggested using.
x = 46
out1 = np.empty(x)
out = np.empty((0))
for i in range(145):
    for j in range(192):
        out1 = yrmax[:,i,j]
        out = np.append(out, out1, axis=0)
Then I reshaped the resulting array (out) as follows:
out2 = np.reshape(out,(27840,46))
I did this so my data would be in a format compatible with scipy.stats.kendalltau. 27840 is the total number of coordinates on my map (i.e. it's just 145*192) and 46 is the number of years the data spans.
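For reference, a shortcut that should produce the same (27840, 46) array directly from the stacked yrmax, assuming it has shape (46, 145, 192), is:

out2 = yrmax.transpose(1, 2, 0).reshape(27840, 46)  # one row per coordinate, one column per year

but the loop above gives the same layout.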
I then used the following loop, which I modified from Grr's code, to find Kendall's tau and its respective p-value at each latitude and longitude over the 46 year period.
x = range(46)
y = np.zeros((0))
for j in range(27840):
    b = sc.stats.kendalltau(x, out2[j,:])
    y = np.append(y, b, axis=0)
Finally, I reshaped the data one more time, as shown: newdata = np.reshape(y,(145,192,2)), so the final array is in a suitable format to be used to create a global map of both tau and p-values.
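In case it helps anyone, the two maps can then be pulled apart for plotting; since scipy's kendalltau returns (tau, p-value) in that order, that would be:

tau_map = newdata[:, :, 0]  # Kendall's tau at each (lat, lon)
p_map = newdata[:, :, 1]    # corresponding p-values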
Thanks everyone for the assistance!
Depending on your situation, it might just be easiest to make the arrays.
You won't really need them all in memory at once (not that it sounds like a terrible amount of data). Something like this only has to deal with one "copied out" coordinate trend at once:
SIZE = (145,192)

year_matrices = load_years()  # list of one 145x192 array per year
result_matrix = numpy.zeros(SIZE)

for x in range(SIZE[0]):
    for y in range(SIZE[1]):
        coord_trend = map(lambda d: d[x][y], year_matrices)
        result_matrix[x][y] = analyze_trend(coord_trend)

print result_matrix
Now, there are things like itertools.izip that could help you if you really want to avoid actually copying the data.
Here's a concrete example of how Python's "zip" might work with data like yours (although as if you'd used ndarray.flatten on each year):
year_arrays = [
    ['y0_coord0_val', 'y0_coord1_val', 'y0_coord2_val', 'y0_coord3_val'],
    ['y1_coord0_val', 'y1_coord1_val', 'y1_coord2_val', 'y1_coord3_val'],
    ['y2_coord0_val', 'y2_coord1_val', 'y2_coord2_val', 'y2_coord3_val'],
]
assert len(year_arrays) == 3
assert len(year_arrays[0]) == 4

coord_arrays = zip(*year_arrays)  # i.e. `zip(year_arrays[0], year_arrays[1], year_arrays[2])`
# original data is essentially transposed
assert len(coord_arrays) == 4
assert len(coord_arrays[0]) == 3

assert coord_arrays[0] == ('y0_coord0_val', 'y1_coord0_val', 'y2_coord0_val')
assert coord_arrays[1] == ('y0_coord1_val', 'y1_coord1_val', 'y2_coord1_val')
assert coord_arrays[2] == ('y0_coord2_val', 'y1_coord2_val', 'y2_coord2_val')
assert coord_arrays[3] == ('y0_coord3_val', 'y1_coord3_val', 'y2_coord3_val')

flat_result = map(analyze_trend, coord_arrays)
The example above still copies the data (and all at once, rather than a coordinate at a time!) but hopefully shows what's going on.
Now, if you replace zip with itertools.izip and map with itertools.imap, the copies needn't occur — itertools wraps the original arrays and keeps track of where it should be fetching values from internally.
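A minimal sketch of that substitution (Python 2 only, since izip and imap were folded into the builtins in Python 3), using the names from the example above:

from itertools import izip, imap

coord_iters = izip(*year_arrays)                      # lazily yields one coordinate tuple at a time
flat_result = list(imap(analyze_trend, coord_iters))  # values are fetched only as each trend is computed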
There's a catch, though: to take advantage of itertools you need to access the data only sequentially (i.e. through iteration). In your case, it looks like the code at https://github.com/mps9506/Mann-Kendall-Trend/blob/master/mk_test.py might not be compatible with that. (I haven't reviewed the algorithm itself to see if it could be.)
Also please note that in the example I've glossed over the numpy ndarray stuff and just shown flat coordinate arrays. It looks like numpy has some of its own options for handling this instead of itertools, e.g. this answer says "Taking the transpose of an array does not make a copy". Your question was somewhat general, so I've tried to give some general tips as to ways one might deal with larger data in Python.
I ran into the same task and have managed to come up with a vectorized solution using numpy and scipy.
The formulas are the same as on this page: https://vsp.pnnl.gov/help/Vsample/Design_Trend_Mann_Kendall.htm.
The trickiest part is to work out the adjustment for the tied values. I modified the code as in this answer to compute the number of tied values for each record, in a vectorized manner.
Below are the 2 functions:
import copy
import numpy as np
from scipy.stats import norm

def countTies(x):
    '''Count number of ties in rows of a 2D matrix

    Args:
        x (ndarray): 2d matrix.
    Returns:
        result (ndarray): 2d matrix with same shape as <x>. In each
            row, the number of ties are inserted at (not really) arbitrary
            locations.
            The locations of the tie counts are not important, since
            they will be subsequently put into a formula of sum(t*(t-1)*(2t+5)).

    Inspired by: https://stackoverflow.com/a/24892274/2005415.
    '''
    if np.ndim(x) != 2:
        raise Exception("<x> should be 2D.")

    m, n = x.shape
    pad0 = np.zeros([m, 1]).astype('int')

    x = copy.deepcopy(x)
    x.sort(axis=1)
    diff = np.diff(x, axis=1)
    cated = np.concatenate([pad0, np.where(diff==0, 1, 0), pad0], axis=1)
    absdiff = np.abs(np.diff(cated, axis=1))

    rows, cols = np.where(absdiff==1)
    rows = rows.reshape(-1, 2)[:, 0]
    cols = cols.reshape(-1, 2)
    counts = np.diff(cols, axis=1)+1
    result = np.zeros(x.shape).astype('int')
    result[rows, cols[:,1]] = counts.flatten()

    return result
def MannKendallTrend2D(data, tails=2, axis=0, verbose=True):
    '''Vectorized Mann-Kendall tests on 2D matrix rows/columns

    Args:
        data (ndarray): 2d array with shape (m, n).
    Keyword Args:
        tails (int): 1 for 1-tail, 2 for 2-tail test.
        axis (int): 0: test trend in each column. 1: test trend in each
            row.
    Returns:
        z (ndarray): If <axis> = 0, 1d array with length <n>, standard scores
            corresponding to data in each column of <data>.
            If <axis> = 1, 1d array with length <m>, standard scores
            corresponding to data in each row of <data>.
        p (ndarray): p-values corresponding to <z>.
    '''
    if np.ndim(data) != 2:
        raise Exception("<data> should be 2D.")

    # always put records in rows and do M-K test on each row
    if axis == 0:
        data = data.T

    m, n = data.shape
    mask = np.triu(np.ones([n, n])).astype('int')
    mask = np.repeat(mask[None,...], m, axis=0)
    s = np.sign(data[:,None,:] - data[:,:,None]).astype('int')
    s = (s * mask).sum(axis=(1,2))

    #--------------------Count ties--------------------
    counts = countTies(data)
    tt = counts * (counts - 1) * (2*counts + 5)
    tt = tt.sum(axis=1)

    #-----------------Sample Gaussian-----------------
    var = (n * (n-1) * (2*n+5) - tt) / 18.
    eps = 1e-8  # avoid dividing by 0
    z = (s - np.sign(s)) / (np.sqrt(var) + eps)
    p = norm.cdf(z)
    p = np.where(p>0.5, 1-p, p)

    if tails == 2:
        p = p*2

    return z, p
I assume your data come in the layout of (time, latitude, longitude), and you are examining the temporal trend for each lat/lon cell.
To simulate this task, I synthesized a sample data array of shape (50, 145, 192). The 50 time points are taken from Example 5.9 of the book Wilks 2011, Statistical methods in the atmospheric sciences. And then I simply duplicated the same time series 27840 times to make it (50, 145, 192).
Below is the computation:
x = np.array([0.44,1.18,2.69,2.08,3.66,1.72,2.82,0.72,1.46,1.30,1.35,0.54,\
    2.74,1.13,2.50,1.72,2.27,2.82,1.98,2.44,2.53,2.00,1.12,2.13,1.36,\
    4.9,2.94,1.75,1.69,1.88,1.31,1.76,2.17,2.38,1.16,1.39,1.36,\
    1.03,1.11,1.35,1.44,1.84,1.69,3.,1.36,6.37,4.55,0.52,0.87,1.51])

# create a big cube with shape: (T, Y, X)
arr = np.zeros([len(x), 145, 192])
for i in range(arr.shape[1]):
    for j in range(arr.shape[2]):
        arr[:, i, j] = x
print(arr.shape)

# re-arrange into tabular layout: (Y*X, T)
arr = np.transpose(arr, [1, 2, 0])
arr = arr.reshape(-1, len(x))
print(arr.shape)

import time
t1 = time.time()
z, p = MannKendallTrend2D(arr, tails=2, axis=1)
p = p.reshape(145, 192)
t2 = time.time()
print('time =', t2 - t1)
The p-value for that sample time series is 0.63341565, which I have validated against the pymannkendall module result. Since arr contains merely duplicated copies of x, the resultant p is a 2d array of size (145, 192) with all values equal to 0.63341565.
And it took me only 1.28 seconds to compute that.
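For anyone who wants to reproduce that cross-check, a minimal sketch (assuming the pymannkendall package is installed and its original_test function is the relevant reference here):

import pymannkendall as mk

res = mk.original_test(x)   # x is the 50-point series defined above
print(res.p, res.z)         # compare against p[0, 0] and z[0] from MannKendallTrend2D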
I would like to optimize over all 30 by 30 matrices with entries that are 0 or 1. My objective function is the determinant. One way to do this would be some sort of stochastic gradient descent or simulated annealing.
I looked at scipy.optimize but it doesn't seem to support this sort of optimization as far as I can tell. scipy.optimize.basinhopping looked very tempting but it seems to require continuous variables.
Are there any tools in Python for this sort of general discrete optimization?
I think a genetic algorithm might work quite well in this case. Here's a quick example thrown together using deap, based loosely on their example here:
import numpy as np
import deap
from deap import algorithms, base, tools
import imp


class GeneticDetMinimizer(object):

    def __init__(self, N=30, popsize=500):

        # we want the creator module to be local to this instance, since
        # creator.create() directly adds new classes to the module's globals()
        # (yuck!)
        cr = imp.load_module('cr', *imp.find_module('creator', deap.__path__))
        self._cr = cr

        self._cr.create("FitnessMin", base.Fitness, weights=(-1.0,))
        self._cr.create("Individual", np.ndarray, fitness=self._cr.FitnessMin)

        self._tb = base.Toolbox()

        # an 'individual' consists of an (N^2,) flat numpy array of 0s and 1s
        self.N = N
        self.indiv_size = N * N

        self._tb.register("attr_bool", np.random.random_integers, 0, 1)
        self._tb.register("individual", tools.initRepeat, self._cr.Individual,
                          self._tb.attr_bool, n=self.indiv_size)

        # the 'population' consists of a list of such individuals
        self._tb.register("population", tools.initRepeat, list,
                          self._tb.individual)

        self._tb.register("evaluate", self.fitness)
        self._tb.register("mate", self.crossover)
        self._tb.register("mutate", tools.mutFlipBit, indpb=0.025)
        self._tb.register("select", tools.selTournament, tournsize=3)

        # create an initial population, and initialize a hall-of-fame to store
        # the best individual
        self.pop = self._tb.population(n=popsize)
        self.hof = tools.HallOfFame(1, similar=np.array_equal)

        # print summary statistics for the population on each iteration
        self.stats = tools.Statistics(lambda ind: ind.fitness.values)
        self.stats.register("avg", np.mean)
        self.stats.register("std", np.std)
        self.stats.register("min", np.min)
        self.stats.register("max", np.max)

    def fitness(self, individual):
        """
        assigns a fitness value to each individual, based on the determinant
        """
        return np.linalg.det(individual.reshape(self.N, self.N)),

    def crossover(self, ind1, ind2):
        """
        randomly swaps a subset of array values between two individuals
        """
        size = self.indiv_size
        cx1 = np.random.random_integers(0, size - 2)
        cx2 = np.random.random_integers(cx1, size - 1)
        ind1[cx1:cx2], ind2[cx1:cx2] = (
            ind2[cx1:cx2].copy(), ind1[cx1:cx2].copy())
        return ind1, ind2

    def run(self, ngen=int(1E6), mutation_rate=0.3, crossover_rate=0.7):
        pop, log = algorithms.eaSimple(self.pop, self._tb,
                                       cxpb=crossover_rate,
                                       mutpb=mutation_rate,
                                       ngen=ngen,
                                       stats=self.stats,
                                       halloffame=self.hof)
        self.log = log
        return self.hof[0].reshape(self.N, self.N), log


if __name__ == "__main__":
    np.random.seed(0)
    gd = GeneticDetMinimizer()
    best, log = gd.run()
It takes about 40 seconds to run 1000 generations on my laptop, which gets me from a minimum determinant value of about -5.7845x10^8 to -6.41504x10^11. I haven't really played around much with the meta-parameters (population size, mutation rate, crossover rate etc.), so I'm sure it's possible to do a lot better.
Here's a greatly improved version that implements a much smarter crossover function that swaps blocks of rows or columns across individuals, and uses a cachetools.LRUCache to guarantee that each mutation step produces a novel configuration, and to skip evaluation of the determinant for configurations that have already been tried:
import numpy as np
import deap
from deap import algorithms, base, tools
import imp
from cachetools import LRUCache

# used to control the size of the cache so that it doesn't exceed system memory
MAX_MEM_BYTES = 11E9


class GeneticDetMinimizer(object):

    def __init__(self, N=30, popsize=500, cachesize=None, seed=0):

        # an 'individual' consists of an (N^2,) flat numpy array of 0s and 1s
        self.N = N
        self.indiv_size = N * N

        if cachesize is None:
            cachesize = int(np.ceil(8 * MAX_MEM_BYTES / self.indiv_size))

        self._gen = np.random.RandomState(seed)

        # we want the creator module to be local to this instance, since
        # creator.create() directly adds new classes to the module's globals()
        # (yuck!)
        cr = imp.load_module('cr', *imp.find_module('creator', deap.__path__))
        self._cr = cr

        self._cr.create("FitnessMin", base.Fitness, weights=(-1.0,))
        self._cr.create("Individual", np.ndarray, fitness=self._cr.FitnessMin)

        self._tb = base.Toolbox()
        self._tb.register("attr_bool", self.random_bool)
        self._tb.register("individual", tools.initRepeat, self._cr.Individual,
                          self._tb.attr_bool, n=self.indiv_size)

        # the 'population' consists of a list of such individuals
        self._tb.register("population", tools.initRepeat, list,
                          self._tb.individual)

        self._tb.register("evaluate", self.fitness)
        self._tb.register("mate", self.crossover)
        self._tb.register("mutate", self.mutate, rate=0.002)
        self._tb.register("select", tools.selTournament, tournsize=3)

        # create an initial population, and initialize a hall-of-fame to store
        # the best individual
        self.pop = self._tb.population(n=popsize)
        self.hof = tools.HallOfFame(1, similar=np.array_equal)

        # print summary statistics for the population on each iteration
        self.stats = tools.Statistics(lambda ind: ind.fitness.values)
        self.stats.register("avg", np.mean)
        self.stats.register("std", np.std)
        self.stats.register("min", np.min)
        self.stats.register("max", np.max)

        # keep track of configurations that have already been visited
        self.tabu = LRUCache(cachesize)

    def random_bool(self, *args):
        return self._gen.rand(*args) < 0.5

    def mutate(self, ind, rate=1E-3):
        """
        mutate an individual by bit-flipping one or more randomly chosen
        elements
        """
        # ensure that each mutation always introduces a novel configuration
        while np.packbits(ind.astype(np.uint8)).tostring() in self.tabu:
            n_flip = self._gen.binomial(self.indiv_size, rate)
            if not n_flip:
                continue
            idx = self._gen.random_integers(0, self.indiv_size - 1, n_flip)
            ind[idx] = ~ind[idx]
        return ind,

    def fitness(self, individual):
        """
        assigns a fitness value to each individual, based on the determinant
        """
        h = np.packbits(individual.astype(np.uint8)).tostring()
        # look up the fitness for this configuration if it has already been
        # encountered
        if h not in self.tabu:
            fitness = np.linalg.det(individual.reshape(self.N, self.N))
            self.tabu.update({h: fitness})
        else:
            fitness = self.tabu[h]
        return fitness,

    def crossover(self, ind1, ind2):
        """
        randomly swaps a block of rows or columns between two individuals
        """
        cx1 = self._gen.random_integers(0, self.N - 2)
        cx2 = self._gen.random_integers(cx1, self.N - 1)
        ind1.shape = ind2.shape = self.N, self.N

        if self._gen.rand() < 0.5:
            # row swap
            ind1[cx1:cx2, :], ind2[cx1:cx2, :] = (
                ind2[cx1:cx2, :].copy(), ind1[cx1:cx2, :].copy())
        else:
            # column swap
            ind1[:, cx1:cx2], ind2[:, cx1:cx2] = (
                ind2[:, cx1:cx2].copy(), ind1[:, cx1:cx2].copy())

        ind1.shape = ind2.shape = self.indiv_size,
        return ind1, ind2

    def run(self, ngen=int(1E6), mutation_rate=0.3, crossover_rate=0.7):
        pop, log = algorithms.eaSimple(self.pop, self._tb,
                                       cxpb=crossover_rate,
                                       mutpb=mutation_rate,
                                       ngen=ngen,
                                       stats=self.stats,
                                       halloffame=self.hof)
        self.log = log
        return self.hof[0].reshape(self.N, self.N), log
if __name__ == "__main__":
    np.random.seed(0)
    gd = GeneticDetMinimizer(seed=0)
    best, log = gd.run()
My best score thus far is about -3.92366x10^13 after 1000 generations, which takes about 45 seconds on my machine.
Based on the solution cthonicdaemon linked to in the comments, the maximum determinant for a 31x31 Hadamard matrix must be at least 75960984159088 x 2^30 ~= 8.1562x10^22 (it's not yet proven whether that solution is optimal). The maximum determinant for an (n-1 x n-1) binary matrix is 2^(1-n) times the value for an (n x n) Hadamard matrix, i.e. 8.1562x10^22 x 2^-30 ~= 7.5961x10^13, so the genetic algorithm gets within an order of magnitude of the current best known solution.
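Spelling that conversion out as a quick sanity check of the numbers: 2^-30 x (75960984159088 x 2^30) = 75960984159088 ~= 7.5961x10^13, which is where the bound quoted above for 30x30 {0, 1} matrices comes from.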
However, the fitness function seems to plateau around here, and I'm having a hard time breaking -4x10^13. Since it's a heuristic search there is no guarantee that it will eventually find the global optimum.
I don't know of any straightforward method for discrete optimization in scipy. One alternative is using the simanneal package from pip or GitHub, which allows you to introduce your own move function, so that you can restrict moves to within your domain:
import random
import numpy as np
import simanneal

class BinaryAnnealer(simanneal.Annealer):

    def move(self):
        # choose a random entry in the matrix
        i = random.randrange(self.state.size)
        # flip the entry 0 <=> 1
        self.state.flat[i] = 1 - self.state.flat[i]

    def energy(self):
        # evaluate the function to minimize
        return -np.linalg.det(self.state)

matrix = np.zeros((5, 5))
opt = BinaryAnnealer(matrix)
print(opt.anneal())
I have looked into this a bit.
A couple of things first off: 1) 56 million is the max value when the size of the matrix is 21x21, not 30x30: https://en.wikipedia.org/wiki/Hadamard%27s_maximal_determinant_problem#Connection_of_the_maximal_determinant_problems_for_.7B1.2C.C2.A0.E2.88.921.7D_and_.7B0.2C.C2.A01.7D_matrices.
But that is also an upper bound on -1, 1 matrices, not 1,0.
EDIT: Reading more carefully from that link:
The maximal determinants of {1, −1} matrices up to size n = 21 are given in the following table. Size 22 is the smallest open case. In the table, D(n) represents the maximal determinant divided by 2^(n−1). Equivalently, D(n) represents the maximal determinant of a {0, 1} matrix of size n−1.
So that table can be used for upper bounds, but remember they're divided by 2^(n−1). Also note that 22 is the smallest open case, so trying to find the maximum of a 30x30 matrix has not been done, and is not even close to being done just yet.
2) The reason David Zwicker's code gives an answer of 30 million is probably due to the fact that he's minimising. Not maximising.
return -np.linalg.det(self.state)
See how he's got the minus sign there?
3) Also, the solution space for this problem is very big. I calculate the number of different matrices to be 2^(30*30), i.e. on the order of 10^270. So looking at every matrix is simply impossible, and even looking at most of them is too.
I have a bit of code here (adapted from David Zwicker's code) that runs, but I have no idea how close it is to the actual maximum. It takes around 45 mins to do 10 million iterations on my PC, or only about 2 mins for 1 million iterations. I get a max value of around 3.4 billion. But again, I have no idea how close this is to the theoretical maximum.
import numpy as np
import random
import time

MATRIX_SIZE = 30

def Main():

    startTime = time.time()

    mat = np.zeros((MATRIX_SIZE, MATRIX_SIZE), dtype=int)
    for i in range(MATRIX_SIZE):
        for j in range(MATRIX_SIZE):
            mat[i,j] = random.randrange(2)

    print("Starting matrix:\n", mat)

    maxDeterminant = 0

    for i in range(1000000):
        # choose a random entry in the matrix
        x = random.randrange(MATRIX_SIZE)
        y = random.randrange(MATRIX_SIZE)
        mat[x,y] = 1 - mat[x,y]
        #print(mat)
        detValue = np.linalg.det(mat)
        if detValue > maxDeterminant:
            maxDeterminant = detValue

    timeTakenStr = "\nTotal time to complete: " + str(round(time.time() - startTime, 4)) + " seconds"
    print(timeTakenStr)
    print(maxDeterminant)

Main()
Does this help?
I am trying to implement a very simple example of the law of large numbers using PyMC. The goal is to generate many sample averages of samples of different sizes. For example, in the code below, I'm repeatedly taking groups of 5 samples (samples_to_average = 5), calculating their mean, and then finding the 95% CI of the resulting trace.
The code below runs, but what I'd like to do is modify samples_to_average to be a list, so that I can calculate confidence intervals for a range of different sample sizes in a single pass.
import scipy.misc
import numpy as np
import pymc as mc

samples_to_average = 5
list_of_samples = mc.DiscreteUniform("response", lower=1, upper=10, size=1000)

@mc.deterministic
def sample_average(x=list_of_samples, n=samples_to_average):
    samples = int(n)
    selected = x[0:samples]
    total = np.sum(selected)
    sample_average = float(total) / samples
    return sample_average
def getConfidenceInterval():
    responseModel = mc.Model([samples_to_average, list_of_samples, sample_average])
    mapRes = mc.MAP(responseModel)
    mapRes.fit()
    mcmc = mc.MCMC(responseModel)
    mcmc.sample(10000, 5000)
    upper = np.percentile(mcmc.trace('sample_average')[:], 95)
    lower = np.percentile(mcmc.trace('sample_average')[:], 5)
    return (lower, upper)

print getConfidenceInterval()
Most examples I've seen using the deterministic decorator use global stochastic variables. However, to achieve my aim, I think what I need to do is create a stochastic variable (of the correct length) in getConfidenceInterval(), and pass this to sample_average (rather than supplying sample_average using globals / default parameter).
How can a variable created in getConfidenceInterval() be passed into sample_average(), or alternatively, what is another way that I can evaluate multiple models using different values of samples_to_average? I'd like to avoid globals if possible.
Before addressing your question, I would like to simplify the way sample_average is written so that it is more compact and easier to understand.
sample_average = mc.Lambda('sample_average', lambda x=list_of_samples, n=samples_to_average: np.mean(x[:n]))
Now you can generalize this to the case where samples_to_average is an array of parameters:
samples_to_average = np.arange(5, 25, 5)
sample_average = mc.Lambda('sample_average', lambda x=list_of_samples, n=samples_to_average: [np.mean(x[:t]) for t in n])
The getConfidenceInterval function would also have to be changed as shown below:
def getConfidenceInterval():
    responseModel = mc.Model([samples_to_average, list_of_samples, sample_average])
    mapRes = mc.MAP(responseModel)
    mapRes.fit()
    mcmc = mc.MCMC(responseModel)
    mcmc.sample(10000, 5000)
    average = np.vstack((t for t in mcmc.trace('sample_average')))
    upper = np.percentile(average, 95, axis=0)
    lower = np.percentile(average, 5, axis=0)
    return (lower, upper)
I used vstack to aggregate the sample averages into a 2D array and then used the axis option in Numpy's percentile function to compute percentiles along each column.
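As a tiny standalone illustration of that percentile-along-columns idea (toy numbers, not from the model):

import numpy as np

traces = np.arange(12).reshape(3, 4)      # pretend: 3 MCMC samples of 4 sample-averages each
print(np.percentile(traces, 95, axis=0))  # one 95th-percentile value per column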
I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me!
However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure?
I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these?
import numpy as np
import time

def reshape_vector(v):
    b = np.empty((3,1))
    for i in range(3):
        b[i][0] = v[i]
    return b

def unit_vectors(r):
    return r / np.sqrt((r*r).sum(0))

def calculate_dipole(mu, r_i, mom_i):
    relative = mu - r_i
    r_unit = unit_vectors(relative)
    A = 1e-7

    num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i)
    den = np.sqrt(np.sum(relative*relative, 0))**3

    B = np.sum(num/den, 1)
    return B

N = 20000                       # number of dipoles
r_i = np.random.random((3,N))   # positions of dipoles
mom_i = np.random.random((3,N)) # moments of dipoles
a = np.random.random((3,3))     # three basis vectors for this crystal
n = [10,10,10]                  # points at which to evaluate sum
gamma_mu = 135.5                # a constant
t_start = time.clock()
for i in range(n[0]):
    r_frac_x = np.float(i)/np.float(n[0])
    r_test_x = r_frac_x * a[0]
    for j in range(n[1]):
        r_frac_y = np.float(j)/np.float(n[1])
        r_test_y = r_frac_y * a[1]
        for k in range(n[2]):
            r_frac_z = np.float(k)/np.float(n[2])
            r_test = r_test_x + r_test_y + r_frac_z * a[2]
            r_test_fast = reshape_vector(r_test)
            B = calculate_dipole(r_test_fast, r_i, mom_i)
            omega = gamma_mu*np.sqrt(np.dot(B,B))
            # write r_test, B and omega to a file
    frac_done = np.float(i+1)/(n[0]+1)
    t_elapsed = (time.clock()-t_start)
    t_remain = (1-frac_done)*t_elapsed/frac_done
    print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
One obvious thing you can do is replace the line
r_test_fast = reshape_vector(r_test)
with
r_test_fast = r_test.reshape((3,1))
Probably won't make any big difference in performance, but in any case it makes sense to use the numpy builtins instead of reinventing the wheel.
Generally speaking, as you have probably noticed by now, the trick with optimizing numpy is to express the algorithm with the help of numpy whole-array operations, or at least with slices, instead of iterating over each element in Python code. What tends to prevent this kind of "vectorization" are so-called loop-carried dependencies, i.e. loops where each iteration depends on the result of a previous iteration. Looking briefly at your code, you have no such thing, and it should be possible to vectorize your code just fine.
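To make "loop-carried dependency" concrete, here is a toy illustration (not from the question): the first loop can't be reordered because each step needs the previous result, while the second has independent iterations and collapses to a single whole-array expression.

import numpy as np

a = np.random.random(1000)

# loop-carried: b[i] depends on b[i-1], so the iterations must run in order
b = np.empty_like(a)
b[0] = a[0]
for i in range(1, len(a)):
    b[i] = 0.5 * b[i - 1] + a[i]

# no dependency between iterations: vectorizes directly
c = 2 * a + 1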
EDIT: One solution
I haven't verified this is correct, but should give you an idea of how to approach it.
First, take the cartesian() function, which we'll use.
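(The original answer pulls cartesian() in from elsewhere; a minimal stand-in with the behaviour assumed here, building every combination of the input arrays as one row per grid point, could be:)

import itertools
import numpy as np

def cartesian(arrays):
    # one row per combination of one value taken from each input array
    return np.array(list(itertools.product(*arrays)))

With that in place, the vectorized version is: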
def calculate_dipole_vect(mus, r_i, mom_i):
    # Treat each mu sequentially
    Bs = []
    omega = []
    for mu in mus:
        rel = mu - r_i
        r_norm = np.sqrt((rel * rel).sum(1))
        r_unit = rel / r_norm[:, np.newaxis]
        A = 1e-7

        # per-dipole dot product: sum over the component axis (axis 1 in this layout)
        num = A*(3*np.sum(mom_i * r_unit, 1)[:, np.newaxis]*r_unit - mom_i)
        den = r_norm ** 3

        B = np.sum(num / den[:, np.newaxis], 0)
        Bs.append(B)
        omega.append(gamma_mu * np.sqrt(np.dot(B, B)))
    return Bs, omega
# Transpose to get more "natural" ordering with row-major numpy
r_i = r_i.T
mom_i = mom_i.T

t_start = time.clock()
r_frac = cartesian((np.arange(n[0]) / float(n[0]),
                    np.arange(n[1]) / float(n[1]),
                    np.arange(n[2]) / float(n[2])))
r_test = np.dot(r_frac, a)
B, omega = calculate_dipole_vect(r_test, r_i, mom_i)
print 'Total time for vectorized: %f s' % (time.clock() - t_start)
Well, in my testing, this is in fact slightly slower than the loop-based approach I started from. The thing is, in the original version in the question, it was already vectorized with whole-array operations over arrays of shape (20000, 3), so any further vectorization doesn't really bring much further benefit. In fact, it may worsen the performance, as above, maybe due to big temporary arrays.
If you profile your code, you'll see that 99% of the running time is in calculate_dipole, so reducing the time for this looping really won't give a noticeable reduction in execution time. You still need to focus on calculate_dipole if you want to make this faster. I tried my Cython code for calculate_dipole on this and got a reduction of about a factor of 2 in the overall time. There might be other ways to improve the Cython code too.