How to implement STDP in TensorFlow? - python

I'm trying to implement STDP (Spike-Timing Dependent Plasticity) in TensorFlow. It's a bit complicated. Any ideas on how to get it running entirely within a TensorFlow graph?
It works like this: say I have 2 input neurons, and they connect to 3 output neurons, via this matrix: [[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]] (input neuron 0 connects to output neurons 0 and 1...).
Say I have these spikes for the input neurons (2 neurons, 7 timesteps):
Input Spikes:
[[0, 0, 1, 1, 0, 1, 0],
 [1, 1, 0, 0, 0, 0, 1]]
And these spikes for the output neurons (3 neurons, 7 timesteps):
Output Spikes:
[[0, 0, 0, 1, 0, 0, 1],
 [1, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 1, 1, 1]]
Now, for each non-zero weight, I want to compute a dw. For instance, for input neuron 0 connecting to output neuron 0:
The time stamps of the spikes for input neuron 0 are [2, 3, 5], and the timestamps for output neuron 0 are [3, 6]. Now, I compute all the delta times:
Delta Times = [ 2-3, 2-6, 3-3, 3-6, 5-3, 5-6 ] = [ -1, -4, 0, -3, 2, -1 ]
Then, I compute some function (the actual STDP function, which isn't important for this question - some exponential thing)
dw = SUM [ F(-1), F(-4), F(0), F(-3), F(2), F(-1) ]
And that's the dw for the weight connecting input neuron 0 to output neuron 0. Repeat for all non-zero weights.
So I can do all this in numpy, but I'd like to be able to do it entirely within a single tensorflow graph. In particular, I'm stuck on computing the delta times. And how to do all this for all non-zero weights, in parallel.
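For reference, a minimal NumPy sketch of the per-weight computation described above (the helper names are illustrative, and the kernel here is just a placeholder for the real STDP function given below):

import numpy as np

def stdp_f(x):
    # placeholder kernel; the actual exponential STDP function is shown below
    return x

def dw_for_pair(in_spikes, out_spikes):
    # all pairwise (input time - output time) differences, then sum the kernel
    t_in = np.flatnonzero(in_spikes)          # e.g. [2, 3, 5]
    t_out = np.flatnonzero(out_spikes)        # e.g. [3, 6]
    deltas = t_in[:, None] - t_out[None, :]   # [[-1, -4], [0, -3], [2, -1]]
    return stdp_f(deltas).sum()

input_spikes = np.array([[0, 0, 1, 1, 0, 1, 0],
                         [1, 1, 0, 0, 0, 0, 1]])
output_spikes = np.array([[0, 0, 0, 1, 0, 0, 1],
                          [1, 0, 0, 0, 0, 0, 0],
                          [0, 0, 0, 0, 1, 1, 1]])
print(dw_for_pair(input_spikes[0], output_spikes[0]))  # -7 with the identity kernel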
This is the actual stdp function, btw (the constants can be parameters):
def stdp_f(x):
    return tf.where(
        x == 0, np.zeros(x.shape), tf.where(
            x > 0, 1.0 * tf.exp(-1.0 * x / 10.0), -1.0 * 1.0 * tf.exp(x / 10.0)))
A note on performance: the method given by @jdehesa below is both correct and clever, but it also turns out to be slow. In particular, for a real neural network of 784 input neurons feeding into 400 neurons, over 500 time steps, the spike_match step performs a broadcast multiplication of (784, 1, 500, 1) and (1, 400, 1, 500) tensors.
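To quantify that note (my arithmetic, not from the original post): the broadcast materializes a (784, 400, 500, 500) tensor, which explains why this approach becomes slow at that scale.

elements = 784 * 400 * 500 * 500   # 78,400,000,000 elements
print(elements * 4 / 1e9)          # roughly 313.6 GB if stored as float32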

I am not familiar with STDP, so I hope I understood correctly what you meant. I think this does what you describe:
import tensorflow as tf

def f(x):
    # STDP function
    return x * 1

def stdp(input_spikes, output_spikes):
    input_shape = tf.shape(input_spikes)
    t = input_shape[-1]
    # Compute STDP function for all possible time difference values
    stdp_values = f(tf.cast(tf.range(-t + 1, t), dtype=input_spikes.dtype))
    # Arrange in matrix such that position [i, j] contains f(i - j)
    matrix_idx = tf.expand_dims(tf.range(t - 1, 2 * t - 1), 1) + tf.range(0, -t, -1)
    stdp_matrix = tf.gather(stdp_values, matrix_idx)
    # Find spike matches
    spike_match = (input_spikes[:, tf.newaxis, :, tf.newaxis] *
                   output_spikes[tf.newaxis, :, tf.newaxis, :])
    # Sum values where there are spike matches
    return tf.reduce_sum(spike_match * stdp_matrix, axis=(2, 3))

# Test
input_spikes = [[0, 0, 1, 1, 0, 1, 0],
                [1, 1, 0, 0, 0, 0, 1]]
output_spikes = [[0, 0, 0, 1, 0, 0, 1],
                 [1, 0, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 1, 1, 1]]
with tf.Graph().as_default(), tf.Session() as sess:
    ins = tf.placeholder(tf.float32, [None, None])
    outs = tf.placeholder(tf.float32, [None, None])
    res = stdp(ins, outs)
    res_val = sess.run(res, feed_dict={ins: input_spikes, outs: output_spikes})
    print(res_val)
    # [[ -7.  10. -15.]
    #  [-13.   7. -24.]]
Here I assume that f is probably expensive (and that its value is the same for every pair of neurons), so I compute it only once for every possible time delta and then redistribute the computed values in a matrix, so I can multiply at the pairs of coordinates where the input and output spikes happen.
I used the identity function for f as a placeholder, so the resulting values are actually just the sum of time differences in this case.
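To make that layout concrete, here is a tiny NumPy sketch of the same index construction (my addition, with the identity standing in for f and t = 4); entry [i, j] ends up holding f(i - j), i.e. input time minus output time:

import numpy as np

t = 4
values = np.arange(-t + 1, t)   # f(delta) for every possible delta, with f = identity
idx = np.arange(t - 1, 2 * t - 1)[:, None] + np.arange(0, -t, -1)
print(values[idx])
# [[ 0 -1 -2 -3]
#  [ 1  0 -1 -2]
#  [ 2  1  0 -1]
#  [ 3  2  1  0]]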
EDIT: Just for reference, replacing f with the STDP function you included:
def f(x):
    return tf.where(x == 0,
                    tf.zeros_like(x),
                    tf.where(x > 0,
                             1.0 * tf.exp(-1.0 * x / 10.0),
                             -1.0 * 1.0 * tf.exp(x / 10.0)))
The result is:
[[-3.4020822 2.1660795 -5.694256 ]
[-2.974073 0.45364904 -3.1197631 ]]

Related

how to aggregate tensors based on another tensor in Pytorch

Assume we have two input tensors named emb and mention; we need an algorithm to aggregate emb based on mention. Here is an example:
# Inputs:
emb = tc.tensor([[float(i)] * 4 for i in range(1, 17)], requires_grad=True)
mention = tc.tensor([2., 0, 0, 2, 1, 0, 0, 0, 2, 1, 1, 0, 0, 1, 1, 1], requires_grad=True)
# Expected output:
out = tc.tensor([[1, 1, 1, 1],          # equals emb[0:1].mean(axis=0)
                 [4.5, 4.5, 4.5, 4.5],  # equals emb[3:5].mean(axis=0)
                 [10, 10, 10, 10],      # equals emb[8:11].mean(axis=0)
                 ])
In the mention tensor, the data can be grouped into 3 parts: [0:1], [3:5], and [8:11]. The grouping rules are as follows (see the sketch after this list):
a position whose value equals 2 starts a new group;
a position with value 1 stays in the current group if it directly continues a sequence started by a 2;
positions with value 1 that have no such preceding 2 are ignored.
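A rough, loop-based sketch of these rules (my reading of them; the helper name aggregate is illustrative, and this version is not differentiable with respect to mention):

import torch as tc

def aggregate(emb, mention):
    # A 2 opens a new group, following 1s extend it, any other value closes it;
    # 1s that do not continue an open group are ignored.
    groups, current = [], None
    for i, v in enumerate(mention.tolist()):
        if v == 2:
            if current:
                groups.append(current)
            current = [i]
        elif v == 1 and current is not None:
            current.append(i)
        else:
            if current:
                groups.append(current)
            current = None
    if current:
        groups.append(current)
    return tc.stack([emb[idx].mean(dim=0) for idx in groups])

With the example inputs this yields the groups [0:1], [3:5], [8:11] and the expected means.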

Python. Recursive matrix product with result as a set of vectors

As an application of Euler's method, I'm trying to implement code that computes the recurrence Yn = Yn-1 + A(Yn-1)*dt, where Y is a vector and A is a matrix such that the product is defined. This is the code I currently have:
import numpy as np

def f(A, y):
    return A.dot(y)

def euler(f, t0, y0, T, dt):
    t = np.arange(t0, T + dt, dt)
    y = [0, 0, 0, 0] * len(t)
    y[0] = y0
    for i in range(1, len(t)):
        y[i] = y[i - 1] + f(A, y[i - 1]) * dt
        return t, y

# Define problem specific values
A = np.array([[ 0,  0, 1, 0],
              [ 0,  0, 0, 1],
              [-2, -3, 0, 0],
              [-3, -2, 0, 0]])
y1_0 = 1
y2_0 = 2
y3_0 = 0
y4_0 = 0
y0 = [y1_0, y2_0, y3_0, y4_0]
t, y = euler(f, 0, y0, 2, 1)
print(t, y)
For example, the result for points in the range t0 = 0, T = 2 should be the vectors Y1 and Y2. Instead I have
[0 1 2] [[1, 2, 0, 0], array([ 1, 2, -8, -7]), 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Something is wrong here. While Y1 = [1, 2, -8, -7] does show up, there is a lot of unnecessary stuff, and Y2 is not printed at all. I suspect this is due to how I define the variable y: for every point in the range of t, I need a vector of 4 zeros, which is then filled in by the euler function, I think. How should I correct this?
The computer always does what you tell it to do. In your case y is constructed by repeating 4 zeros len(t) times, giving a list of 12 zeros. The first list entry is replaced by the list y0, and the second is replaced by the result of the numpy operations, which is a numpy array. Then the return statement, which sits at the indentation level of the loop body, breaks out of the loop after the first iteration and returns the t and y objects. y still contains the 10 zeros from its construction that were never replaced.
So construct
y = np.zeros([len(t), len(y0)])
and move the return statement out of the loop body (repair its indentation level).
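Putting both fixes together, a corrected sketch might look like this (A is passed in explicitly here, a small departure from the original code):

import numpy as np

def f(A, y):
    return A.dot(y)

def euler(f, A, t0, y0, T, dt):
    # preallocate a 2-D array and return only after the loop has finished
    t = np.arange(t0, T + dt, dt)
    y = np.zeros([len(t), len(y0)])
    y[0] = y0
    for i in range(1, len(t)):
        y[i] = y[i - 1] + f(A, y[i - 1]) * dt
    return t, y

A = np.array([[ 0,  0, 1, 0],
              [ 0,  0, 0, 1],
              [-2, -3, 0, 0],
              [-3, -2, 0, 0]])
t, y = euler(f, A, 0, [1, 2, 0, 0], 2, 1)
print(t)  # [0 1 2]
print(y)  # one row per time point: Y0, Y1, Y2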

calculate sum of Nth column of numpy array entry grouped by the indices in first two columns?

I would like to loop over the following check_matrix so that the code recognizes whether the first and second elements are 1 and 1, 1 and 2, etc. Then, for each separate class of pair, i.e. (1,1), (1,2), (2,2), the code should store in new matrices the sum of the last element (which in this case has index 8) times exp(-i*q.(check_matrix[k][2:5]-check_matrix[k][5:8])), where i is the imaginary unit, k is the running index over check_matrix, and q is a vector defined below. There are 20 q vectors.
import numpy as np

q = []
for i in np.linspace(0, 10, 20):
    q.append(np.array((0, 0, i)))
q = np.array(q)

check_matrix = np.array([[1, 1, 0, 0, 0,  0,   0,       -0.7977, -0.243293],
                         [1, 1, 0, 0, 0,  0,   0,        1.5954,  0.004567],
                         [1, 2, 0, 0, 0, -1,   0,        0,       1.126557],
                         [2, 1, 0, 0, 0,  0.5, 0.86603,  1.5954,  0.038934],
                         [2, 1, 0, 0, 0,  2,   0,       -0.7977, -0.015192],
                         [2, 2, 0, 0, 0, -0.5, 0.86603,  1.5954,  0.21394]])
This means that in principle I will have 20 matrices of shape 2x2, one corresponding to each q vector.
At the moment my code gives only one matrix, which appears to be the last one, even though I am appending to Matrices. My code looks like this:
for i in range(2):
    i = i + 1
    for j in range(2):
        j = j + 1
        j_list = []
        Matrices = []
        for k in range(len(check_matrix)):
            if check_matrix[k][0] == i and check_matrix[k][1] == j:
                j_list.append(check_matrix[k][8] * np.exp(-1J * np.dot(q, (np.subtract(check_matrix[k][2:5], check_matrix[k][5:8])))))
        j_11 = np.sum(j_list)
        I_matrix[i-1][j-1] = j_11
        Matrices.append(I_matrix)
I_matrix is defined as below:
I_matrix= np.zeros((2,2),dtype=np.complex_)
At the moment I get following output.
Matrices = [array([[-0.66071446-0.77603624j, -0.29038112+2.34855023j], [-0.31387562-0.08116629j, 4.2788 +0.j ]])]
But I want a matrix corresponding to each q value, meaning that in total there should be 20 matrices in this case, where each 2x2 matrix contains the sums for the (1,1), (1,2), (2,1), and (2,2) pairs in the following arrangement:
array([[11., 12.],
       [21., 22.]])
I shall highly appreciate your suggestion to correct it. Thanks in advance!
I am pretty sure you can solve this problem in an easier way and I am not 100% sure that I understood you correctly, but here is some code that does what I think you want. If you have a possibility to check if the results are valid, I would suggest you do so.
import numpy as np

n = 20
q = np.zeros((20, 3))
q[:, -1] = np.linspace(0, 10, n)

check_matrix = np.array([[1, 1, 0, 0, 0,  0,   0,       -0.7977, -0.243293],
                         [1, 1, 0, 0, 0,  0,   0,        1.5954,  0.004567],
                         [1, 2, 0, 0, 0, -1,   0,        0,       1.126557],
                         [2, 1, 0, 0, 0,  0.5, 0.86603,  1.5954,  0.038934],
                         [2, 1, 0, 0, 0,  2,   0,       -0.7977, -0.015192],
                         [2, 2, 0, 0, 0, -0.5, 0.86603,  1.5954,  0.21394]])

check_matrix[:, :2] -= 1  # python indexing is zero based

matrices = np.zeros((n, 2, 2), dtype=np.complex_)
for i in range(2):
    for j in range(2):
        k_list = []
        for k in range(len(check_matrix)):
            if check_matrix[k][0] == i and check_matrix[k][1] == j:
                k_list.append(check_matrix[k][8] *
                              np.exp(-1J * np.dot(q, check_matrix[k][2:5]
                                                 - check_matrix[k][5:8])))
        matrices[:, i, j] = np.sum(k_list, axis=0)
NOTE: I changed your indices to use consistent zero-based indexing.
Here is another approach where I replaced the k-loop with a vectorized version:
for i in range(2):
    for j in range(2):
        k = np.logical_and(check_matrix[:, 0] == i, check_matrix[:, 1] == j)
        temp = np.dot(check_matrix[k, 2:5] - check_matrix[k, 5:8], q[:, :, np.newaxis])[..., 0]
        temp = check_matrix[k, 8:] * np.exp(-1J * temp)
        matrices[:, i, j] = np.sum(temp, axis=0)
3 line solution
You asked for an efficient solution in your original title, so how about this three-liner that avoids nested loops and if statements and is thus hopefully faster?
fac=2*(check_matrix[:,0]-1)+(check_matrix[:,1]-1)
grp=np.split(check_matrix[:,8], np.cumsum(np.unique(fac,return_counts=True)[1])[:-1])
[np.sum(x) for x in grp]
output:
[-0.23872600000000002, 1.126557, 0.023742000000000003, 0.21394]
How does it work?
I combine the first two columns into a single index, treating each as "bits" (i.e. base 2)
fac=2*(check_matrix[:,0]-1)+(check_matrix[:,1]-1)
(If you have indices that exceed 2, you can still use this technique, but you will need a different base to combine the columns, i.e. if your indices go from 1 to 18, you would need to multiply column 0 by a number equal to or larger than 18 instead of 2.)
So the result of the first line is
array([0., 0., 1., 2., 2., 3.])
Note as well that this assumes the data is ordered so that one column changes fastest; if that is not the case, you will need an extra step to sort the index together with the original check_matrix. In your example the data is ordered.
The next step groups the data according to the index, and uses the solution posted here.
np.split(check_matrix[:,8], np.cumsum(np.unique(fac,return_counts=True)[1])[:-1])
[array([-0.243293, 0.004567]), array([1.126557]), array([ 0.038934, -0.015192]), array([0.21394])]
i.e. it splits column 8 of check_matrix according to the grouping of fac.
Then the last line simply sums those. Knowing how the first two columns were combined into the single index lets you map the result back (a sketch of that mapping follows below), or you could simply append it to check_matrix as an extra column if you wanted.
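For illustration, here is one way to do that mapping back into a 2x2 layout (my addition, assuming the original 1-based check_matrix from the question; the stable sort also covers the case where the rows are not already ordered):

import numpy as np

fac = 2 * (check_matrix[:, 0] - 1) + (check_matrix[:, 1] - 1)
order = np.argsort(fac, kind='stable')                     # sort step for unordered data
uniq, counts = np.unique(fac[order], return_counts=True)
grp = np.split(check_matrix[order, 8], np.cumsum(counts)[:-1])
sums = np.array([np.sum(x) for x in grp])
result = np.zeros((2, 2))
result[(uniq // 2).astype(int), (uniq % 2).astype(int)] = sums
print(result)
# [[-0.238726  1.126557]
#  [ 0.023742  0.21394 ]]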

Programming absolute deviation as linear program

I am attempting to convert a sum of absolute deviations to a linear programming problem so that I can utilize CPLEX (or other solver). I am stuck on how the matrices are to be set up. The problem is as follows:
minimize abs(x1 - 5) + abs(x2 - 3)
s.t. x1 + x2 = 10
I have the following constraints set up to transform the problem into a linear form:
x1 - 5 <= t1
-(x1 - 5) <= t1 and
x2 - 3 <= t2
-(x2 - 3) <= t2
I've set up the objective function as
c = [0,0,1,1]
but I am lost on how to set up
Ax <= b
in matrix form. What I have so far is:
A = [[ 1, -1,  0,  0],
     [-1, -1,  0,  0],
     [ 0,  0,  1, -1],
     [ 0,  0, -1, -1]]
b = [5, -5, 3, -3]
I have set up the other constraint in matrix form as:
B = [1, 1, 0, 0]
b2 = [10]
When I run the following:
linprog(c,A_ub=A,b_ub=b,A_eq=B,b_eq=b2,bounds=[(0,None),(0,None)])
I get the following error message back:
ValueError: Invalid input for linprog: A_eq must have exactly two dimensions, and the number of columns in A_eq must be equal to the size of c
I know there is a solution because when I use scipy.optimize.minimize it solves to [6,4]. I'm sure the issue is I am not formulating the input matrices correctly but I am not sure how to set them up so that it runs.
Edit - here is the code that does not run:
import numpy as np
from scipy.optimize import linprog, minimize

c = np.block([np.zeros(2), np.ones(2)])
print("c =>", c)
A = [[ 1, -1,  0,  0],
     [-1, -1,  0,  0],
     [ 0,  0,  1, -1],
     [ 0,  0, -1, -1]]
b = [[5, -5, 3, -3]]
print(A)
print(np.multiply(A, b))
B = [1, 1, 0, 0]
b2 = [10]
print(np.multiply(B, b2))
linprog(c, A_ub=A, b_ub=b, A_eq=B, b_eq=b2, bounds=[(0, None), (0, None)],
        options={'disp': True})
I think the message is quite good. B should be 2-dimensional matrix instead of a 1-dimensional vector. So:
B = [[1, 1, 0, 0]]
Secondly, the bounds array is too short.
Thirdly, your ordering of variables is inconsistent. The columns in A are x1,t1,x2,t2 while the columns in B (and c) seem to be x1,x2,t1,t2. They need to follow the same scheme.
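Putting the three fixes together, a sketch with a consistent column order of x1, x2, t1, t2 (my ordering, chosen to match c = [0, 0, 1, 1]) could look like this:

import numpy as np
from scipy.optimize import linprog

c = [0, 0, 1, 1]
A_ub = [[ 1,  0, -1,  0],   #  x1 - t1 <=  5   (from  x1 - 5 <= t1)
        [-1,  0, -1,  0],   # -x1 - t1 <= -5   (from -(x1 - 5) <= t1)
        [ 0,  1,  0, -1],   #  x2 - t2 <=  3
        [ 0, -1,  0, -1]]   # -x2 - t2 <= -3
b_ub = [5, -5, 3, -3]
A_eq = [[1, 1, 0, 0]]       # two-dimensional, as noted above: x1 + x2 = 10
b_eq = [10]
bounds = [(0, None)] * 4    # one (low, high) pair per variable
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)  # 2.0, the minimal total absolute deviation
print(res.x)    # one optimum; any x1 in [5, 7] with x2 = 10 - x1 achieves it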

Solving linear regression equation for a sparse matrix

I'd like to solve a multivariate linear regression equation for a vector X with m elements, given n observations Y, assuming the measurements have Gaussian random errors. How can I solve this problem using Python? My problem looks like this:
A simple example of W, when m=5, is given as follows:
P.S. I would like to consider the effect of errors; specifically, I want to measure the standard deviation of the errors.
You can do it like this:
import numpy as np
from numpy.linalg import pinv

def myreg(W, Y):
    m, n = Y.shape
    k = W.shape[1]
    X = pinv(W.T.dot(W)).dot(W.T).dot(Y)
    Y_hat = W.dot(X)
    Residuals = Y_hat - Y
    MSE = np.square(Residuals).sum(axis=0) / (m - 2)
    X_var = (MSE[:, None] * pinv(W.T.dot(W)).flatten()).reshape(n, k, k)
    Tstat = X / np.sqrt(X_var.diagonal(axis1=1, axis2=2)).T
    return X, Tstat
demo
W = np.array([
    [1, -1,  0, 0,  1],
    [0, -1,  1, 0,  0],
    [1,  0,  0, 1,  0],
    [0,  1, -1, 0,  1],
    [1, -1,  0, 0, -1],
])
Y = np.array([2, 4, 1, 5, 3])[:, None]
X, V = myreg(W, Y)
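As a quick sanity check (my addition, not part of the original answer), the pseudo-inverse solution X should agree with NumPy's least-squares routine on the demo data:

import numpy as np

X_ls, *_ = np.linalg.lstsq(W, Y, rcond=None)
print(np.allclose(X, X_ls))  # expected: True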
