Defining a cost/loss function in TensorFlow

I am working on a graph network problem where I would like to leverage the power of TensorFlow.
I am having trouble implementing the cost function in TensorFlow correctly, though.
My cost function is given as:
sum_{i>j} A_ij*log(pi_ij) + (1 - A_ij)*log(1 - pi_ij)
where pi_ij = sigmoid(-|z_i - z_j| + beta)
Here |.| is the Euclidean distance, pi_ij denotes the probability of a link between i and j, and A_ij = 1 if there is a link and 0 otherwise (a simple adjacency matrix); both are matrices of the same size. I have already solved this optimization problem manually in Python using plain SGD. I calculate the cost function as follows:
import tensorflow as tf
import numpy as np
import scipy.sparse.csgraph as csg
from scipy.spatial import distance
Y = np.array([[0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 1., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 1., 0., 0., 1., 0., 0., 0.]])
# removing all-zero rows and columns (nodes with no links)
temp = Y[~np.all(Y == 0, axis=1)]
temp = temp[:,~np.all(Y == 0, axis=1)]
Y = temp
n = np.shape(Y)[0]
k = 2
# finding shortest paths and applying classical multidimensional scaling
D = csg.shortest_path(Y, directed=True)
# cmdscale performs classical MDS; its definition appears in the PyTorch question below
Z = cmdscale(D)[0][:, 0:k]
Z = Z - Z.mean(axis=0, keepdims=True)
# calculating cost
euclideanZ = distance.cdist(Z, Z, 'euclidean')
sigmoid = lambda x: 1 / (1 + np.exp(-x))
vectorSigmoid = np.vectorize(sigmoid)
pi = vectorSigmoid(euclideanZ)
cost = np.sum(Y*np.log(pi)+(1-Y)*np.log(1-pi))
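As an aside, the formula sums over pairs with i > j only, while the snippet above sums over the full matrix, double-counting the symmetric pairs and including the diagonal. A minimal sketch of the restricted sum (reusing Y, pi and n from above):
il = np.tril_indices(n, k=-1)  # strict lower triangle, i.e. i > j
cost_lower = np.sum(Y[il]*np.log(pi[il]) + (1 - Y[il])*np.log(1 - pi[il]))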
How could I define such a loss function in TensorFlow? Is it even possible? Any help or nudge in the right direction would be greatly appreciated.
EDIT
I got this far in TensorFlow:
tfY = tf.placeholder(shape=(15, 15), dtype=tf.float32)

with tf.variable_scope('test'):
    shape = []  # shape [] means that we're using a scalar variable
    B = tf.Variable(tf.zeros(shape))
    tfZ = tf.Variable(tf.zeros(shape=(15, 2)))

def loss():
    r = tf.reduce_sum(tfZ*tfZ, 1)
    r = tf.reshape(r, [-1, 1])
    D = tf.sqrt(r - 2*tf.matmul(tfZ, tf.transpose(tfZ)) + tf.transpose(r))
    return tf.reduce_sum(tfY*tf.log(tf.sigmoid(D + B)) + (1 - tfY)*tf.log(1 - tf.sigmoid(D + B)))

LOSS = loss()
GRADIENT = tf.gradients(LOSS, [B, tfZ])

sess = tf.Session()
sess.run(tf.global_variables_initializer())
tot_loss = sess.run(LOSS, feed_dict={tfZ: Z, tfY: Y})
print(tot_loss)
loss_grad = sess.run(GRADIENT, feed_dict={tfZ: Z, tfY: Y})
print(loss_grad)
which prints the following:
-487.9079
[-152.56271, array([[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan],
[nan, nan]], dtype=float32)]
My beta gets a sensible gradient, and updating it with the gradient times the learning rate improves the score, but the gradient for my tfZ vector is all nans. I am obviously doing something wrong; if anyone can spot what it is, I would be grateful.

Just change this:
D = tf.sqrt(r - 2*tf.matmul(tfZ, tf.transpose(tfZ)) + tf.transpose(r) + 1e-8)  # adding a small constant
The distance matrix has zeros on its diagonal, and the gradient of sqrt cannot be computed (it is infinite) when its argument is zero, so the small constant keeps the argument strictly positive.
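To see why, note that d/dx sqrt(x) = 1/(2*sqrt(x)), which diverges at x = 0. A minimal sketch reproducing the problem with the same TF1-style API as above:
import tensorflow as tf

x = tf.Variable([0.0, 1.0])
g = tf.gradients(tf.reduce_sum(tf.sqrt(x)), x)  # gradient is inf at the zero entry
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(g))  # [array([inf, 0.5], dtype=float32)]
Once such an inf meets a zero elsewhere in the chain rule it becomes nan, which is exactly what shows up in the tfZ gradient.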

Related

Is there an efficient way of representing a 2D numpy array for the purpose of fitting a GMM to it?

I have been using Gaussian Mixture Models (GMM) to model a set of peaks in a 2D numpy array (a).
a = np.array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 100., 1000., 100., 2., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 1., 100., 100., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 2., 1., 2., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 1., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
The problem is that, in order to fit a GMM to my data with sklearn, I first have to generate a density_array, which holds a huge number of data points depending on the height of the peaks in a.
import numpy as np
from sklearn import mixture

def convert_to_density_array(array):
    """
    Convert an array to a density array by repeating each (i, j)
    coordinate as many times as its integer value.
    """
    density_list = []
    # iterate over each (i, j) coordinate in the array
    for (i, j), value in np.ndenumerate(array):
        for x in range(int(value)):
            density_list.append((i, j))
    return np.array(density_list)

density_array = convert_to_density_array(a)
gmm = mixture.GaussianMixture(n_components=2, covariance_type='full').fit(density_array)
Is there an efficient way of representing a 2D numpy array for the purpose of fitting a GMM to it?
You can store the data at lower precision by adding dtype=np.float32 to your np.array call, which is fine as long as roughly 7 significant digits instead of 15-16 is acceptable (it clearly is in your case), but that is the only way to keep the same data in memory with a smaller footprint and still pass it to the GMM.
What you are trying to do is curve fitting, not density modelling, so you can use scipy's curve_fit on the original data without building density_array at all: pass it a function that is the sum of two Gaussians and, in a loop, perturb the initial estimate randomly until you get the smallest error. Since writing that code takes some time, consider this approach only if you cannot fit your data in memory any other way.
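For instance, a minimal curve-fitting sketch along those lines, assuming two isotropic 2D Gaussians and hand-picked initial guesses (p0) read off from the peaks in a:
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(coords, a1, r1, c1, s1, a2, r2, c2, s2):
    # sum of two isotropic 2D Gaussians evaluated on flattened (row, col) coordinates
    r, c = coords
    g1 = a1*np.exp(-((r - r1)**2 + (c - c1)**2)/(2*s1**2))
    g2 = a2*np.exp(-((r - r2)**2 + (c - c2)**2)/(2*s2**2))
    return g1 + g2

rows, cols = np.indices(a.shape)
p0 = [1000, 0, 24, 1, 100, 1, 24, 2]  # amplitude, row, col, sigma for each peak
popt, _ = curve_fit(two_gaussians, (rows.ravel(), cols.ravel()), a.ravel(), p0=p0)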

Reordering block matrix

I have a multi-level indexed square matrix that needs to be reordered.
Say I have a two-level indexing system x and y, and the square matrix M has the shape (len(x)*len(y), len(x)*len(y)).
M is sorted by the x index and I want to transform it to be sorted by the y index. Here is an example that constructs an arbitrary square matrix M:
import numpy as np

nx = 4  # equal to len(x), arbitrary
ny = 3  # equal to len(y), arbitrary
A = np.ones((ny, ny))                 # arbitrary
B = np.ones((ny, ny))*2               # arbitrary
C = np.ones((ny, ny))*3               # arbitrary
D = np.ones((ny, ny))*4               # arbitrary
E = np.arange(ny*ny).reshape(ny, ny)  # arbitrary
Z0 = np.zeros((ny, ny))               # shorthand for the zero block
M = np.block([[A,  Z0, E,  Z0],
              [Z0, B,  Z0, Z0],
              [Z0, Z0, C,  Z0],
              [Z0, Z0, Z0, D]])
and the resulting matrix M may look like this
array([[1., 1., 1., 0., 0., 0., 0., 1., 2., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 3., 4., 5., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 6., 7., 8., 0., 0., 0.],
[0., 0., 0., 2., 2., 2., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 2., 2., 2., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 2., 2., 2., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 3., 3., 3., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 3., 3., 3., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 3., 3., 3., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 4., 4., 4.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 4., 4., 4.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 4., 4., 4.]])
Now I want to transform M into M_transformed, which looks like this:
array([[1., 0., 0., 0., 1., 0., 1., 0., 1., 0., 2., 0.],
[0., 2., 0., 0., 0., 2., 0., 0., 0., 2., 0., 0.],
[0., 0., 3., 0., 0., 0., 3., 0., 0., 0., 3., 0.],
[0., 0., 0., 4., 0., 0., 0., 4., 0., 0., 0., 4.],
[1., 0., 3., 0., 1., 0., 4., 0., 1., 0., 5., 0.],
[0., 2., 0., 0., 0., 2., 0., 0., 0., 2., 0., 0.],
[0., 0., 3., 0., 0., 0., 3., 0., 0., 0., 3., 0.],
[0., 0., 0., 4., 0., 0., 0., 4., 0., 0., 0., 4.],
[1., 0., 6., 0., 1., 0., 7., 0., 1., 0., 8., 0.],
[0., 2., 0., 0., 0., 2., 0., 0., 0., 2., 0., 0.],
[0., 0., 3., 0., 0., 0., 3., 0., 0., 0., 3., 0.],
[0., 0., 0., 4., 0., 0., 0., 4., 0., 0., 0., 4.]])
I solve this with a very elementary approach of four nested for loops, and I believe there must be a more straightforward way (perhaps a library function), since M can grow very large depending on the lengths of x and y (nx and ny):
M_transformed = np.zeros(M.shape)
for i in range(nx):
    for j in range(nx):
        for k in range(ny):
            for l in range(ny):
                M_transformed[k*nx + i, l*nx + j] = M[i*ny + k, j*ny + l]
I did it with no explicit index calculations, just borrowing some ideas from how max-pooling is implemented and experimenting a lot with swaps of axes. This is my solution:
w = (ny, ny)  # size of the inner blocks, (3, 3) here
initial_shape = M.shape
M = M.reshape((M.shape[0]//w[0], w[0], M.shape[1]//w[1], w[1]))  # split into blocks
M = M.swapaxes(0, 1)  # exchange the outer and inner row indices
M = M.swapaxes(2, 3)  # exchange the outer and inner column indices
M = M.reshape(initial_shape)
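A quick sanity check (a sketch, where M_original is a copy of M saved before the in-place snippet above) that this agrees with the quadruple loop:
reordered = (M_original.reshape(nx, ny, nx, ny)
                       .swapaxes(0, 1)
                       .swapaxes(2, 3)
                       .reshape(M_original.shape))
print(np.allclose(reordered, M_transformed))  # True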

Exploding loss in PyTorch

I am trying to train a latent space model in PyTorch. The model is relatively simple and just requires me to minimize my loss function, but I am seeing odd behaviour: after running for a short while, the loss suddenly explodes upwards.
import numpy as np
import scipy.sparse.csgraph as csg
import torch
from torch.autograd import Variable
import torch.autograd as autograd
import matplotlib.pyplot as plt
%matplotlib inline
def cmdscale(D):
    # Number of points
    n = len(D)
    # Centering matrix
    H = np.eye(n) - np.ones((n, n))/n
    # YY^T
    B = -H.dot(D**2).dot(H)/2
    # Diagonalize
    evals, evecs = np.linalg.eigh(B)
    # Sort by eigenvalue in descending order
    idx = np.argsort(evals)[::-1]
    evals = evals[idx]
    evecs = evecs[:, idx]
    # Compute the coordinates using positive-eigenvalued components only
    w, = np.where(evals > 0)
    L = np.diag(np.sqrt(evals[w]))
    V = evecs[:, w]
    Y = V.dot(L)
    return Y, evals
Y = np.array([[0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 1., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 1.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 1., 0., 0., 1., 0., 0., 0.]])
temp = Y[~np.all(Y == 0, axis=1)]
temp = temp[:,~np.all(Y == 0, axis=1)]
Y = temp
n = np.shape(Y)[0]
k = 2
D = csg.shortest_path(Y, directed=True)
Z = cmdscale(D)[0][:,0:k]
Z = Z - Z.mean(axis=0, keepdims=True)
tZ = autograd.Variable(torch.Tensor(Z), requires_grad=True)
B = autograd.Variable(torch.Tensor([0]), requires_grad=True)
tY = torch.autograd.Variable(torch.Tensor(Y), requires_grad=False)
# calculating pairwise euclidean distance
def distMatrix(m):
    n = m.size(0)
    d = m.size(1)
    x = m.unsqueeze(1).expand(n, n, d)
    y = m.unsqueeze(0).expand(n, n, d)
    # the small constant keeps sqrt away from zero, where its gradient is infinite
    return torch.sqrt(torch.pow(x - y, 2).sum(2) + 1e-4)

def loss(tY):
    d = -distMatrix(tZ) + B
    sigmoidD = torch.sigmoid(d)
    reduce = tY*torch.log(sigmoidD) + (1 - tY)*torch.log(1 - sigmoidD)
    # removing the diagonal
    reduce[torch.eye(n).byte()] = 0
    return -reduce.sum()
losses = []
learning_rate = 1e-4
l = loss(tY)
stepSize = 1000
for i in range(stepSize):
    l.backward(retain_graph=True)
    losses.append(float(loss(tY)))
    tZ.data = tZ.data - learning_rate * tZ.grad.data
    B.data = B.data - learning_rate * B.grad.data
    tZ.grad.data.zero_()
    B.grad.data.zero_()
plt.subplot(122)
plt.plot(losses)
plt.title('Loss')
plt.xlabel('Iteration')
plt.ylabel('loss')
plt.show()
Shouldn't the loss keep going down, or at least converge to some point? I must have done something wrong. I am new to PyTorch; any hints or nudges in the right direction would be highly appreciated!
The issue was that I defined my loss
l = loss(tY)
outside of the loop that ran and updated my gradients. I am not entirely sure why it had the effect that it did (presumably because, with retain_graph=True, each backward() call kept differentiating the graph of that first forward pass, so the gradients never reflected the updated parameters), but moving the loss computation inside the loop solved the problem, and the loss then decreased smoothly.
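For reference, a minimal sketch of the corrected loop, recomputing the loss on every iteration so that backward() differentiates the current forward pass:
losses = []
learning_rate = 1e-4
for i in range(1000):
    l = loss(tY)                       # fresh forward pass with the current tZ and B
    losses.append(float(l))
    l.backward()                       # gradients of the current loss
    tZ.data -= learning_rate * tZ.grad.data
    B.data -= learning_rate * B.grad.data
    tZ.grad.data.zero_()
    B.grad.data.zero_()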

How to generate a matrix with circle of ones in numpy/scipy

There are some signal-generation helper functions in Python's scipy, but they are only for one-dimensional signals.
I want to generate a 2-D ideal bandpass filter, i.e. a matrix of all zeros with a circle of ones, to remove some periodic noise from my image.
I am currently doing it with:
import math
import numpy as np

def unit_circle(r):
    def distance(x1, y1, x2, y2):
        return math.sqrt((x1 - x2)**2 + (y1 - y2)**2)
    d = 2*r + 1
    mat = np.zeros((d, d))
    rx, ry = d//2, d//2  # centre of the matrix
    for row in range(d):
        for col in range(d):
            dist = distance(rx, ry, row, col)
            if abs(dist - r) < 0.5:
                mat[row, col] = 1
    return mat
result:
In [18]: unit_circle(6)
Out[18]:
array([[ 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0.],
[ 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0.],
[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0.],
[ 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0., 0.]])
Is there a more direct way to generate a matrix of circle of ones, all else zeros?
Edit:
Python 2.7.12
Here's a vectorized approach -
def unit_circle_vectorized(r):
    A = np.arange(-r, r + 1)**2
    dists = np.sqrt(A[:, None] + A)
    return (np.abs(dists - r) < 0.5).astype(int)
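A quick equivalence check (a sketch, assuming both functions are defined):
print(np.array_equal(unit_circle_vectorized(6), unit_circle(6).astype(int)))  # True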
Runtime test -
In [165]: %timeit unit_circle(100) # Original soln
10 loops, best of 3: 31.1 ms per loop
In [166]: %timeit my_unit_circle(100)  # @Eli Korvigo's soln
100 loops, best of 3: 2.68 ms per loop
In [167]: %timeit unit_circle_vectorized(100)
1000 loops, best of 3: 582 µs per loop
Here is a pure NumPy alternative that should run significantly faster and looks cleaner, imho. Basically, we vectorise your code by replacing the built-in sqrt and abs with their NumPy alternatives and working on matrices of indices.
Updated to compute the distance with np.hypot (courtesy of James K).
In [5]: import numpy as np
In [6]: def my_unit_circle(r):
   ...:     d = 2*r + 1
   ...:     rx, ry = d/2, d/2
   ...:     x, y = np.indices((d, d))
   ...:     return (np.abs(np.hypot(rx - x, ry - y) - r) < 0.5).astype(int)
   ...:
In [7]: my_unit_circle(6)
Out[7]:
array([[ 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0.],
[ 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0.],
[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0.],
[ 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0., 0.]])
Benchmarks
In [12]: %timeit unit_circle(100)
100 loops, best of 3: 17.7 ms per loop
In [13]: %timeit my_unit_circle(100)
1000 loops, best of 3: 480 µs per loop
import copy
import numpy as np
import matplotlib.pyplot as plt

def gen_circle(img: np.ndarray, center: tuple, diameter: int) -> np.ndarray:
    """
    Creates a matrix of ones filling a circle.
    """
    # radius of the circle
    radius = diameter//2
    # row and column of the circle's centre
    row, col = center
    # theta vector sweeping the angle in one-degree steps
    theta = np.arange(0, 360)*(np.pi/180)
    # column indexes
    y = (radius*np.sin(theta)).astype("int32")
    # row indexes
    x = (radius*np.cos(theta)).astype("int32")
    # with:
    #   img[x, y] = 1
    # you would draw only the border of the circle
    # instead of the border plus the inner part
    # centre the circle at the input center
    rows = x + row
    cols = y + col
    # number of rows and columns, used to cut the execution in half
    nrows = rows.shape[0]
    ncols = cols.shape[0]
    # make a copy of the image
    img_copy = copy.deepcopy(img)
    # use the symmetry in our favour: reflect on the
    # horizontal and the vertical axes
    for row_down, row_up, col1, col2 in zip(rows[:nrows//4],
                                            np.flip(rows[nrows//4:nrows//2]),
                                            cols[:ncols//4],
                                            cols[nrows//2:3*ncols//4]):
        img_copy[row_up:row_down, col2:col1] = 1
    return img_copy

center = (30, 40)
ones = np.zeros((center[0]*2, center[1]*2))
diameter = 30
circle = gen_circle(ones, center, diameter)
plt.imshow(circle)
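For completeness, a filled disc (rather than just the ring) can also be produced with the same distance-grid idea used in the earlier answers (a sketch following the unit_circle_vectorized pattern):
def filled_circle(r):
    A = np.arange(-r, r + 1)**2
    dists = np.sqrt(A[:, None] + A)
    return (dists <= r).astype(int)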

Solving Matrix Differential Equation in Python using Scipy/Numpy- NDSolve equivalent?

I have two numpy arrays, 9x9 and 9x1. I'd like to solve the differential equation at discrete time points, but I am having trouble getting odeint to work, and I am unsure whether I'm even doing the right thing.
With Mathematica, the equation is:
Solution = {A[t]} /. NDSolve[{A'[t] == Ab.A[t] && A[0] == A0}, {A[t]}, {t, 0, .5}, MaxSteps -> \[Infinity]];
time = 0.25;
increment = 0.05;
MA = Table[Solution, {t, 0, time, increment}];
where Ab is the 9x9 matrix and A0 is the 9x1 matrix of initial values. Here I solve over the time points, and life is good.
In my Python implementation I have the following code, which gives me the wrong answer:
from scipy.integrate import odeint
from numpy import array, dot, pi

def deriv(A, t, Ab):
    return dot(Ab, A)

def MatrixBM3(k12, k21, k13, k31, k23, k32, delta1, delta2, delta3,
              w1, R1, R2):
    K = array([[-k21 - k23, k12, k32, 0., 0., 0., 0., 0., 0.],
               [k21, -k12 - k13, k31, 0., 0., 0., 0., 0., 0.],
               [k23, k13, -k31 - k32, 0., 0., 0., 0., 0., 0.],
               [0., 0., 0., -k21 - k23, k12, k32, 0., 0., 0.],
               [0., 0., 0., k21, -k12 - k13, k31, 0., 0., 0.],
               [0., 0., 0., k23, k13, -k31 - k32, 0., 0., 0.],
               [0., 0., 0., 0., 0., 0., -k21 - k23, k12, k32],
               [0., 0., 0., 0., 0., 0., k21, -k12 - k13, k31],
               [0., 0., 0., 0., 0., 0., k23, k13, -k31 - k32]])
    Der = array([[0., 0., 0., -delta2, 0., 0., 0., 0., 0.],
                 [0., 0., 0., 0., -delta1, 0., 0., 0., 0.],
                 [0., 0., 0., 0., 0., -delta3, 0., 0., 0.],
                 [delta2, 0., 0., 0., 0., 0., 0., 0., 0.],
                 [0., delta1, 0., 0., 0., 0., 0., 0., 0.],
                 [0., 0., delta3, 0., 0., 0., 0., 0., 0.],
                 [0., 0., 0., 0., 0., 0., 0., 0., 0.],
                 [0., 0., 0., 0., 0., 0., 0., 0., 0.],
                 [0., 0., 0., 0., 0., 0., 0., 0., 0.]])
    W = array([[0., 0., 0., 0., 0., 0., 0., 0., 0.],
               [0., 0., 0., 0., 0., 0., 0., 0., 0.],
               [0., 0., 0., 0., 0., 0., 0., 0., 0.],
               [0., 0., 0., 0., 0., 0., w1, 0., 0.],
               [0., 0., 0., 0., 0., 0., 0., w1, 0.],
               [0., 0., 0., 0., 0., 0., 0., 0., w1],
               [0., 0., 0., w1, 0., 0., 0., 0., 0.],
               [0., 0., 0., 0., w1, 0., 0., 0., 0.],
               [0., 0., 0., 0., 0., w1, 0., 0., 0.]])*2*pi
    R = array([[-R2, 0., 0., 0., 0., 0., 0., 0., 0.],
               [0., -R2, 0., 0., 0., 0., 0., 0., 0.],
               [0., 0., -R2, 0., 0., 0., 0., 0., 0.],
               [0., 0., 0., -R2, 0., 0., 0., 0., 0.],
               [0., 0., 0., 0., -R2, 0., 0., 0., 0.],
               [0., 0., 0., 0., 0., -R2, 0., 0., 0.],
               [0., 0., 0., 0., 0., 0., -R1, 0., 0.],
               [0., 0., 0., 0., 0., 0., 0., -R1, 0.],
               [0., 0., 0., 0., 0., 0., 0., 0., -R1]])
    return K + Der + W + R

Ab = MatrixBM3(21.218791062154633, 17653.497151475527, 40.50203461096454,
               93956.36617049483, 0.0, 0.0, -646.4238856161137,
               6727.748368359598, 20919.132768439955, 200.0, 2.36787, 5.39681)
A0 = array([-0.001071585381162955, -0.89153191708755708, -0.00038431516707591748,
            0.0, 0.0, 0.0, 0.00054009700135979673, 0.4493470361764082,
            0.00019370128872934646])
time = array([0.0, 0.05, 0.1, 0.15, 0.2, 0.25])
MA = odeint(deriv, A0, time, args=(Ab,), mxstep=2000)  # note: odeint's keyword is mxstep, not maxsteps
Output is:
[[ -1.07158538e-003 -8.91531917e-001 -3.84315167e-004 0.00000000e+000
0.00000000e+000 0.00000000e+000 5.40097001e-004 4.49347036e-001
1.93701289e-004]
[ 3.09311322e+019 9.45061860e+022 2.35327270e+019 2.11901406e+020
1.63784238e+023 7.60569684e+019 2.29098804e+020 1.89766602e+023
8.18752241e+019]
[ 9.84409730e+042 3.00774018e+046 7.48949158e+042 6.74394343e+043
5.21257342e+046 2.42057805e+043 7.29126532e+043 6.03948436e+046
2.60574901e+043]
[ 3.13296814e+066 9.57239028e+069 2.38359473e+066 2.14631766e+067
1.65894606e+070 7.70369662e+066 2.32050753e+067 1.92211754e+070
8.29301904e+066]
[ 9.97093898e+089 3.04649506e+093 7.58599405e+089 6.83083947e+090
5.27973769e+093 2.45176732e+090 7.38521364e+090 6.11730342e+093
2.63932422e+090]
[ 3.17333659e+113 9.69573101e+116 2.41430747e+113 2.17397307e+114
1.68032166e+117 7.80295913e+113 2.35040739e+114 1.94688412e+117
8.39987500e+113]]
But the correct answer should be:
{-0.0010733126291998989, -0.8929689437405254, -0.0003849346301906338, 0., 0., 0., 0.0005366563145999495, 0.4464844718702628, 0.00019246731509531696}
{-0.000591095648651598, -0.570032546156741, -0.00023381082725213798, -0.00024790706920038567, 0.00010389803046880286, -0.00005361569187144767, 0.0003273277204077012, 0.2870035216110215, 0.00012300339326137006}
{-0.0003770535829276868, -0.364106358478121, -0.0001492324135668267, -0.0001596072774600538, -0.0011479989178276948, -0.000034744485507007025, 0.00020965172928479557, 0.18378613639965447, 0.00007876820247280559}
{-0.00024100792803807562, -0.23298939195213314, -0.00009543704274825206, -0.00010271831380730501, -0.0013205519868311284, -0.000022472380871477824, 0.00013326471695185768, 0.11685506361394844, 0.00005008078740423007}
{-0.00015437993249587976, -0.1491438843823813, -0.00006111736454518403, -0.00006545797627466387, -0.0005705018939767294, -0.000014272382451480663, 0.00008455890984798549, 0.0741820536557778, 0.00003179071165818503}
{-0.00009882799610556456, -0.09529950309336405, -0.00003909275555213336, -0.00004138741286392128, 0.00006303116741431477, -8.944610716890746*^-6, 0.00005406263888971806, 0.04743157303933772, 0.00002032674776723143}
Can anyone point me to what I may be doing wrong?
In the call to odeint, try changing tuple(array[Ab]) to (array(Ab),), or even just (Ab,). That is, use
MA = odeint(deriv, A0, time, (Ab,))
Without seeing how you defined A0 and Ab, I can't be sure that this will fix the problem, but the following variation of your code will work. I used a 3x3 array instead of 9x9.
import numpy as np
from scipy.integrate import odeint

def deriv(A, t, Ab):
    return np.dot(Ab, A)

Ab = np.array([[-0.25, 0,    0],
               [ 0.25, -0.2, 0],
               [ 0,     0.2, -0.1]])

time = np.linspace(0, 25, 101)
A0 = np.array([10, 20, 30])

MA = odeint(deriv, A0, time, args=(Ab,))
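Since A'(t) = Ab.A(t) is linear with constant coefficients, the odeint result can also be cross-checked against the closed-form solution A(t) = expm(Ab*t).A0 (a sketch using scipy.linalg.expm):
from scipy.linalg import expm

for ti, ai in zip(time, MA):
    print(np.allclose(ai, expm(Ab*ti).dot(A0), rtol=1e-4))  # should print True for every step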
