A model that I am working on has to predict a large number of variables simultaneously (>1000). Therefore I would like to have a small neural network at the end of the network for each output.
In order to do this compactly, I would like to find a way to create a sparse trainable connection between two layers in the neural network within the TensorFlow framework.
Only a small portion of the connection matrix should be trainable: only the parameters that are part of the block diagonal.
For example:
The connection matrix is the following:
The trainable parameters should be in the place of the 1's.
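As an illustration only (this is not the exact matrix from the post), a block-diagonal pattern with 2x2 blocks could look like:
1 1 0 0 0 0
1 1 0 0 0 0
0 0 1 1 0 0
0 0 1 1 0 0
0 0 0 0 1 1
0 0 0 0 1 1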
I have written exactly such a layer:
https://github.com/ArnovanHilten/GenNet/blob/master/GenNet_utils/LocallyDirectedConnected_tf2.py
It takes a sparse matrix as an input and lets you decide how to connect between layers. The layer uses sparse tensors and matrix multiplications.
Edit
The comment was: "Is this a trainable object though?"
The answer: No. You currently cannot make a sparse matrix trainable. Instead you can use a mask matrix (see the end of this answer).
But if you need to use a sparse matrix, you just have to use tf.sparse.sparse_dense_matmul() or tf.sparse_tensor_to_dense() wherever your sparse tensor interacts with a dense matrix. I have taken a simple XOR example from here and replaced the dense weight matrix with a sparse one:
#Declaring necessary modules
import tensorflow as tf
import numpy as np
"""
A simple numpy implementation of a XOR gate to understand the backpropagation
algorithm
"""
x = tf.placeholder(tf.float32,shape = [4,2],name = "x")
#declaring a place holder for input x
y = tf.placeholder(tf.float32,shape = [4,1],name = "y")
#declaring a place holder for desired output y
m = np.shape(x)[0]#number of training examples
n = np.shape(x)[1]#number of features
hidden_s = 2 #number of nodes in the hidden layer
l_r = 1#learning rate initialization
theta1 = tf.SparseTensor(indices=[[0, 0],[0, 1], [1, 1]], values=[0.1, 0.2, 0.1], dense_shape=[3, 2])
#theta1 = tf.cast(tf.Variable(tf.random_normal([3,hidden_s]),name = "theta1"),tf.float64)
theta2 = tf.cast(tf.Variable(tf.random_normal([hidden_s+1,1]),name = "theta2"),tf.float32)
#conducting forward propagation
a1 = tf.concat([np.c_[np.ones(x.shape[0])],x],1)
#the weights of the first layer are multiplied by the input of the first layer
#z1 = tf.sparse_tensor_dense_matmul(theta1, a1)
z1 = tf.matmul(a1,tf.sparse_tensor_to_dense(theta1))
#the input of the second layer is the output of the first layer, passed through the activation function
a2 = tf.concat([np.c_[np.ones(x.shape[0])],tf.sigmoid(z1)],1)
#the input of the second layer is multiplied by the weights
z3 = tf.matmul(a2,theta2)
#the output is passed through the activation function to obtain the final probability
h3 = tf.sigmoid(z3)
cost_func = -tf.reduce_sum(y*tf.log(h3)+(1-y)*tf.log(1-h3),axis = 1) #binary cross-entropy cost
#built-in tensorflow optimizer that conducts gradient descent using the specified learning rate
optimiser = tf.train.GradientDescentOptimizer(learning_rate = l_r).minimize(cost_func)
#setting required X and Y values to perform XOR operation
X = [[0,0],[0,1],[1,0],[1,1]]
Y = [[0],[1],[1],[0]]
#initializing all variables, creating a session and running a tensorflow session
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
#running gradient descent for each iteration
for i in range(200):
    sess.run(optimiser, feed_dict = {x:X,y:Y}) #setting placeholder values using feed_dict
    if i%100==0:
        print("Epoch:",i)
        print(sess.run(theta1))
and the output is:
Epoch: 0
SparseTensorValue(indices=array([[0, 0],
[0, 1],
[1, 1]]), values=array([0.1, 0.2, 0.1], dtype=float32), dense_shape=array([3, 2]))
Epoch: 100
SparseTensorValue(indices=array([[0, 0],
[0, 1],
[1, 1]]), values=array([0.1, 0.2, 0.1], dtype=float32), dense_shape=array([3, 2]))
So the only way is to use a mask matrix. You can apply it by multiplication or with tf.where:
1) Multiplication: you can create a mask matrix of the desired shape and multiply it element-wise with your weight matrix:
mask = tf.Variable([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]], name='mask', trainable=False)
weight = tf.Variable(tf.random_normal([3, 3]), name='weight')
desired_tensor = tf.multiply(weight, mask)  # element-wise product zeroes out the off-diagonal weights
2) tf.where
mask = tf.Variable([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]], name='mask', trainable=False)
weight = tf.Variable(tf.random_normal([3, 3]), name='weight')
desired_tensor = tf.where(mask > 0, weight, tf.zeros_like(weight))  # keep weights where the mask is 1, zeros elsewhere
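As a rough sketch of how the masked weights could then be used as a layer (my own illustration, assuming TF1 graph mode like the rest of this answer), only the unmasked entries ever receive non-zero gradients:
import tensorflow as tf

# Sketch only: a batch of 3-dimensional inputs passed through block-masked weights.
x = tf.placeholder(tf.float32, shape=[None, 3])
mask = tf.constant([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
weight = tf.Variable(tf.random_normal([3, 3]), name='weight')
masked_weight = tf.multiply(weight, mask)   # off-diagonal weights are always zero
out = tf.matmul(x, masked_weight)

# The gradient w.r.t. `weight` is multiplied by the same mask,
# so the off-diagonal entries are never updated.
loss = tf.reduce_sum(out)
grad = tf.gradients(loss, weight)[0]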
Hope it helps
You can do that by using sparse tensors like so:
SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
which corresponds to the dense matrix:
[[1, 0, 0, 0]
[0, 0, 2, 0]
[0, 0, 0, 0]]
You can read more in the documentation for sparse tensors here:
https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor
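If useful, here is a minimal check (my addition, assuming TF 1.x with a session) that the sparse tensor above expands to that dense matrix:
import tensorflow as tf

st = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
dense = tf.sparse.to_dense(st)
with tf.Session() as sess:
    print(sess.run(dense))
# [[1 0 0 0]
#  [0 0 2 0]
#  [0 0 0 0]]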
Hope it helps!
Related
Here I am trying to understand neural networks by coding one from scratch (in numpy only). I did the forward pass (using dot products) successfully, but I have no idea how I should proceed with the backward pass (partial derivatives with respect to each trainable parameter and the update using the SGD equation). The loss can be the mean squared error, for example.
Here is my code so far, I added comments below the code describing what is left.
'''
I want to design a NN that has :
input layer I of 4 neurons
hidden layer H1 of 3 neurons
hidden layer H2 of 3 neurons
output layer O of 1 neuron
'''
import numpy as np
inputs = [1, 2, 3, 2.5]
# -------------- Hidden layers ---------------------------
wh1 = [[0.2, 0.8, -0.5, 1],
[0.5, -0.91, 0.26, -0.5],
[-0.26, -0.27, 0.17, 0.87]]
bh1 = [2, 3, 0.5]
wh2 = [[0.1, -0.14, 0.5],
[-0.5, 0.12, -0.33],
[-0.44, 0.73, -0.13]]
bh2 = [-1, 2, -0.5]
layer1_outputs = np.dot(wh1, np.array(inputs)) + bh1
layer2_outputs = np.dot(wh2, layer1_outputs) + bh2
# ------------ output layer ------------------------------
who = [0.1, -0.14, 0.5]
bho = [4]
layer_out = np.dot(who, layer2_outputs) + bho
# --------------------------------------------------------
print(layer_out)
true_outputs = np.sin(inputs)
# compute RMSE
# compute partial derivatives
# update weights
Architecture of the NN: (figure not included)
Backpropagation in a neural network uses the chain rule of derivatives; if you wish to implement backpropagation yourself, you have to apply that rule layer by layer.
Here is my suggestion.
Create a class for your neural network, so you can create a separate function for each task.
Use a loop to walk through your network from the output layer back to the input layer, and use the chain rule to calculate the partial derivatives at each level.
Here is sample code from my old work; refer to the GitHub repo for the full code.
https://github.com/akash-agni/DeepLearning/blob/main/Neural_Network_From_Scratch_using_Numpy.ipynb
def backpropogate(self, X, y):
    delta = list()  # empty list to store derivatives
    delta_w = [0 for _ in range(len(self.layers))]  # stores weight updates
    delta_b = [0 for _ in range(len(self.layers))]  # stores bias updates
    error_o = (self.layers[-1].z - y.T)  # calculate the error at the output layer
    for i in reversed(range(len(self.layers) - 1)):
        error_i = np.multiply(self.layers[i+1].weights.T.dot(error_o), self.layers[i].activation_grad())  # multiply the error by the transposed weights to propagate it backwards
        delta_w[i+1] = error_o.dot(self.layers[i].a.T)/len(y)  # store gradient for weights
        delta_b[i+1] = np.sum(error_o, axis=1, keepdims=True)/len(y)  # store gradient for biases
        error_o = error_i  # the previous layer's error becomes the current error, and the process repeats
    delta_w[0] = error_o.dot(X)  # gradients for the first layer
    delta_b[0] = np.sum(error_o, axis=1, keepdims=True)/len(y)
    return (delta_w, delta_b)  # return gradients
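The question also asked about the SGD update itself. As a rough sketch (my addition; it assumes each layer stores its parameters in weights and bias attributes, which are hypothetical names here, so check the repo for the actual ones), the returned gradients could be applied like this:
def apply_sgd_update(self, delta_w, delta_b, lr=0.01):
    # Plain SGD: move each layer's parameters against its gradient.
    for i, layer in enumerate(self.layers):
        layer.weights = layer.weights - lr * delta_w[i]  # hypothetical attribute names
        layer.bias = layer.bias - lr * delta_b[i]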
I want to element-wise multiply a dense tensor with shape [n, n, k] with a sparse tensor that has the shape [n, n, 1]. I want the values from the sparse tensor to repeat along the axis of size k, as they would if I used a dense tensor instead and relied on implicit broadcasting.
However, the SparseTensor.__mul__ operation does not support broadcasting the sparse operand, and I didn't find an operator to explicitly broadcast the sparse tensor. How could I achieve this?
If you do not want to just convert the sparse tensor to dense, you can select the right values from the dense tensor to build a sparse result directly, something like this:
import tensorflow as tf
import numpy as np
with tf.Graph().as_default(), tf.Session() as sess:
    # Input data
    x = tf.placeholder(tf.float32, shape=[None, None, None])
    y = tf.sparse.placeholder(tf.float32, shape=[None, None, 1])
    # Indices of sparse tensor without third index coordinate
    indices2 = y.indices[:, :-1]
    # Values of dense tensor corresponding to sparse tensor values
    x_sp = tf.gather_nd(x, indices2)
    # Values of the resulting sparse tensor
    res_vals = tf.reshape(x_sp * tf.expand_dims(y.values, 1), [-1])
    # Shape of the resulting sparse tensor
    res_shape = tf.shape(x, out_type=tf.int64)
    # Make sparse tensor indices
    k = res_shape[2]
    v = tf.size(y.values)
    # Add third coordinate to existing sparse tensor coordinates
    idx1 = tf.tile(tf.expand_dims(indices2, 1), [1, k, 1])
    idx2 = tf.tile(tf.range(k), [v])
    res_idx = tf.concat([tf.reshape(idx1, [-1, 2]), tf.expand_dims(idx2, 1)], axis=1)
    # Make sparse result
    res = tf.SparseTensor(res_idx, res_vals, res_shape)
    # Dense value for testing
    res_dense = tf.sparse.to_dense(res)
    # Dense operation for testing
    res_dense2 = x * tf.sparse.to_dense(y)
    # Test
    x_val = np.arange(48).reshape(4, 4, 3)
    y_val = tf.SparseTensorValue([[0, 0, 0], [2, 3, 0], [3, 1, 0]], [1, 2, 3], [4, 4, 1])
    res_dense_val, res_dense2_val = sess.run((res_dense, res_dense2),
                                             feed_dict={x: x_val, y: y_val})
    print(np.allclose(res_dense_val, res_dense2_val))
    # True
For any 2D tensor like
[[2,5,4,7],
[7,5,6,8]],
I want to apply softmax only to the top k elements in each row and then construct a new tensor with all the other elements set to 0.
The result should be the softmax of the top k (here k=2) elements of each row, [[7,5],[8,7]],
which is thus
[[0.880797,0.11920291],
[0.7310586,0.26894143]]
and then a new tensor should be reconstructed according to the indices of the top k elements in the original tensor, so the final result should be
[[0,0.11920291,0,0.880797],
[0.26894143,0,0,0.7310586]].
Is it possible to implement this kind of masked softmax in tensorflow? Many thanks in advance!
Here is how you can do that:
import tensorflow as tf
# Input data
a = tf.placeholder(tf.float32, [None, None])
num_top = tf.placeholder(tf.int32, [])
# Find top elements
a_top, a_top_idx = tf.nn.top_k(a, num_top, sorted=False)
# Apply softmax
a_top_sm = tf.nn.softmax(a_top)
# Reconstruct into original shape
a_shape = tf.shape(a)
a_row_idx = tf.tile(tf.range(a_shape[0])[:, tf.newaxis], (1, num_top))
scatter_idx = tf.stack([a_row_idx, a_top_idx], axis=-1)
result = tf.scatter_nd(scatter_idx, a_top_sm, a_shape)
# Test
with tf.Session() as sess:
    result_val = sess.run(result, feed_dict={a: [[2, 5, 4, 7], [7, 5, 6, 8]], num_top: 2})
    print(result_val)
Output:
[[0. 0.11920291 0. 0.880797 ]
[0.26894143 0. 0. 0.7310586 ]]
EDIT:
Actually, there is a function that more closely does what you intend, tf.sparse.softmax. However, it requires a SparseTensor as input, and I'm not sure it would be faster, since it has to figure out which sparse values go together in each softmax. The good thing about this function is that you could have a different number of elements to softmax in each row, but in your case that does not seem to be important. Anyway, here is an implementation with it, in case you find it useful.
import tensorflow as tf
a = tf.placeholder(tf.float32, [None, None])
num_top = tf.placeholder(tf.int32, [])
# Find top elements
a_top, a_top_idx = tf.nn.top_k(a, num_top, sorted=False)
# Flatten values
sparse_values = tf.reshape(a_top, [-1])
# Make sparse indices
shape = tf.cast(tf.shape(a), tf.int64)
a_row_idx = tf.tile(tf.range(shape[0])[:, tf.newaxis], (1, num_top))
sparse_idx = tf.stack([a_row_idx, tf.cast(a_top_idx, tf.int64)], axis=-1)
sparse_idx = tf.reshape(sparse_idx, [-1, 2])
# Make sparse tensor
a_top_sparse = tf.SparseTensor(sparse_idx, sparse_values, shape)
# Reorder sparse tensor
a_top_sparse = tf.sparse.reorder(a_top_sparse)
# Softmax
result_sparse = tf.sparse.softmax(a_top_sparse)
# Convert back to dense (or you can keep working with the sparse tensor)
result = tf.sparse.to_dense(result_sparse)
# Test
with tf.Session() as sess:
    result_val = sess.run(result, feed_dict={a: [[2, 5, 4, 7], [7, 5, 6, 8]], num_top: 2})
    print(result_val)
# Same as before
Let's say you have a weights tensor w with shape (None, N).
Find the minimum value among the top k elements:
top_kw = tf.math.top_k(w, k=10, sorted=False)[0]
min_w = tf.reduce_min(top_kw, axis=1, keepdims=True)
Generate a boolean mask for the weights tensor:
mask_w = tf.greater_equal(w, min_w)
mask_w = tf.cast(mask_w, tf.float32)
Compute a custom softmax using the mask:
w = tf.multiply(tf.exp(w), mask_w) / tf.reduce_sum(tf.multiply(tf.exp(w), mask_w), axis=1, keepdims=True)
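Putting those pieces together, a minimal end-to-end sketch (my addition, TF 1.x graph mode, with k fixed to 2 for the example rows used above):
import tensorflow as tf

w = tf.placeholder(tf.float32, [None, None])
# Minimum of the top-k values in each row
top_kw = tf.math.top_k(w, k=2, sorted=False)[0]
min_w = tf.reduce_min(top_kw, axis=1, keepdims=True)
# Mask selecting the top-k entries (note: ties can select more than k)
mask_w = tf.cast(tf.greater_equal(w, min_w), tf.float32)
# Masked softmax
exp_w = tf.multiply(tf.exp(w), mask_w)
soft = exp_w / tf.reduce_sum(exp_w, axis=1, keepdims=True)

with tf.Session() as sess:
    print(sess.run(soft, feed_dict={w: [[2, 5, 4, 7], [7, 5, 6, 8]]}))
# Should match the earlier results, e.g. [[0, 0.1192, 0, 0.8808], [0.2689, 0, 0, 0.7311]]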
I am trying to implement a simple autoencoder like the one below.
The number of input features is 2, and I want to build a sparse autoencoder that reduces the dimension to 1 feature. I selected 2 (input), 8 (hidden), 1 (reduced feature), 8 (hidden), 2 (output) nodes to add some more complexity compared to using only (2, 1, 2) nodes. The number of samples N is around 10000.
'DATA' is just a 2x10000 matrix containing integer values.
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[None, 2])
w1 = tf.Variable(tf.random_normal(shape=[2, 8]))
w2 = tf.Variable(tf.random_normal(shape=[8, 1]))
h1 = tf.nn.relu(tf.matmul(x, w1))
encoded = tf.matmul(h1, w2)
h2 = tf.nn.relu(encoded)
h3 = tf.nn.relu(tf.matmul(h2, tf.transpose(w2)))
y = tf.matmul(h3, tf.transpose(w1))
mse = tf.reduce_mean(tf.squared_difference(x, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(mse)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
fd = {x: DATA}
loss_value, reduced_feature = sess.run([mse, encoded], feed_dict=fd)
I have 2 questions about the implementation, as the result was quite different from what I expected.
Is this implementation correct? Will the variable 'reduced_feature' contain the reduced (1-D) feature derived from the 2 input features?
Should I add some sparsity condition if I want to use more hidden nodes than inputs? If yes, can you show some sample code for this task?
I have two datasets, which look like this:
input:
array([[[ 0.99309823],
...
[ 0. ]]])
shape : (1, 2501)
output:
array([[0, 0, 0, ..., 0, 0, 1],
...,
[0, 0, 0, ..., 0, 0, 0]])
shape : (2501, 9)
And I processed them with TFLearn as follows:
input_layer = tflearn.input_data(shape=[None,2501])
hidden1 = tflearn.fully_connected(input_layer,1205,activation='ReLU', regularizer='L2', weight_decay=0.001)
dropout1 = tflearn.dropout(hidden1,0.8)
hidden2 = tflearn.fully_connected(dropout1,1205,activation='ReLU', regularizer='L2', weight_decay=0.001)
dropout2 = tflearn.dropout(hidden2,0.8)
softmax = tflearn.fully_connected(dropout2,9,activation='softmax')
# Regression with SGD
sgd = tflearn.SGD(learning_rate=0.1,lr_decay=0.96, decay_step=1000)
top_k=tflearn.metrics.Top_k(3)
net = tflearn.regression(softmax,optimizer=sgd,metric=top_k,loss='categorical_crossentropy')
model = tflearn.DNN(net)
model.fit(input,output,n_epoch=10,show_metric=True, run_id='dense_model')
It works, but not the way I want. It's a DNN model. What I want is that when I enter 0.95, the model gives me the corresponding prediction, for example [0,0,0,0,0,0,0,0,1]. However, when I try to enter 0.95, it says:
ValueError: Cannot feed value of shape (1,) for Tensor 'InputData/X:0', which has shape '(?, 2501)'
When I tried to understand the error, I realised that my (wrongly structured) model needs data of shape (1, 2501) to make a prediction.
What I want is, for every element in the input, to predict the corresponding element in the output. As you can see in the example dataset,
for [0.99309823] the corresponding output is [0,0,0,0,0,0,0,0,1]. I want tflearn to train itself like this.
I may have wrongly structured data or a wrong model (probably the dataset). I have explained everything; I need help, I'm really at a loss.
Your input data should be Nx1-dimensional (N = number of samples) to achieve this transformation ([0.99309823] --> [0,0,0,0,0,0,0,0,1]). According to your input data shape, it looks more like 1 sample with 2501 dimensions.
ValueError: Cannot feed value of shape (1,) for Tensor 'InputData/X:0', which has shape '(?, 2501)' This error means that TensorFlow expects you to provide a vector with shape (?, 2501), but you are feeding the network a vector with shape (1,).
Example modified code with dummy data:
import numpy as np
import tflearn
#creating dummy data
input_data = np.random.rand(1, 2501)
input_data = np.transpose(input_data) # now shape is (2501,1)
output_data = np.random.randint(8, size=2501)
n_values = 9
output_data = np.eye(n_values)[output_data]
# checking the shapes
print(input_data.shape)  # (2501, 1)
print(output_data.shape)  # (2501, 9)
input_layer = tflearn.input_data(shape=[None,1]) # now network is expecting ( Nx1 )
hidden1 = tflearn.fully_connected(input_layer,1205,activation='ReLU', regularizer='L2', weight_decay=0.001)
dropout1 = tflearn.dropout(hidden1,0.8)
hidden2 = tflearn.fully_connected(dropout1,1205,activation='ReLU', regularizer='L2', weight_decay=0.001)
dropout2 = tflearn.dropout(hidden2,0.8)
softmax = tflearn.fully_connected(dropout2,9,activation='softmax')
# Regression with SGD
sgd = tflearn.SGD(learning_rate=0.1,lr_decay=0.96, decay_step=1000)
top_k=tflearn.metrics.Top_k(3)
net = tflearn.regression(softmax,optimizer=sgd,metric=top_k,loss='categorical_crossentropy')
model = tflearn.DNN(net)
model.fit(input_data, output_data, n_epoch=10,show_metric=True, run_id='dense_model')
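After fitting, the original goal (feed a single value such as 0.95 and get a one-hot-style prediction) would, under this reshaped setup, presumably look something like this (my addition, not part of the original answer):
# Hypothetical usage: predict for one input value; the input must have shape (N, 1)
pred = model.predict([[0.95]])
print(pred)  # a (1, 9) array of class probabilities; take the argmax for the predicted class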
Also, my friend warned me about the same thing as rcmalli. He says:
reshape:
input = np.reshape(input, (2501, 1))
change
input_layer = tflearn.input_data(shape=[None,2501])
to
input_layer = tflearn.input_data(shape=[None, 1])
The variable dimension must be None. In your wrong case, 2501 is the size (or something like that; I translated this from another language, but you get the idea) of your dataset, while 1 is the fixed input dimension.