User defined activation function in CNTK - python

Is there a way to provide a user-defined activation function for layers in CNTK (Python API), instead of only primitive ones like tanh, relu, etc.? Something like this:
def f(x):
    return x * x

LSTM(number_of_cells, activation=f)

Yes, what you wrote should work as is.
This tutorial might be useful to you:
https://www.cntk.ai/Tutorials/CVPR2017/CVPR_2017_Tutorial_final.pdf
Also, CNTK has a number of tutorials and manuals:
https://github.com/Microsoft/CNTK/tree/master/Tutorials
https://github.com/Microsoft/CNTK/tree/master/Manual

What you wrote should work; you can use any CNTK expression to compose a more complex activation function.
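For example, a quick sketch (the cell size and function name here are just illustrative):

import cntk as C

# Any expression built from CNTK ops can serve as an activation,
# since arithmetic operators are overloaded on CNTK variables.
def square_activation(x):
    return x * x

# Hypothetical usage: a recurrence over an LSTM with the custom activation.
model = C.layers.Recurrence(C.layers.LSTM(128, activation=square_activation))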

Related

Chainer: custom sigmoid activation function

I want to implement the following sigmoid function with a custom slope parameter k.
y = f(x) = 1 / (1 + exp(-k * x))
gradient: gy = k * f(x) * (1 - f(x))
I want to use this in my autoencoder. How do I implement this in Chainer?
If k is constant (i.e., a hyperparameter), F.sigmoid(k * x) should just work.
If k is a parameter that should be learned in the same way as other weights, you may want to subclass a link like L.PReLU and use it just like other links, e.g. L.Linear and L.Convolution2D. You can still implement the forward method of the link with the simple expression above.
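A minimal sketch of that approach, assuming a recent Chainer version where arithmetic ops broadcast (the class name and 0-d parameter shape are illustrative choices):

import numpy as np
import chainer
import chainer.functions as F

class ScaledSigmoid(chainer.Link):
    """Sigmoid with a learnable slope k, analogous to L.PReLU."""
    def __init__(self, k_init=1.0):
        super(ScaledSigmoid, self).__init__()
        with self.init_scope():
            # Registered as a parameter, so the optimizer updates k.
            self.k = chainer.Parameter(np.array(k_init, dtype=np.float32))

    def __call__(self, x):
        # The gradient w.r.t. k is handled by Chainer's autograd.
        return F.sigmoid(self.k * x)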
An activation function should be a subclass of chainer.FunctionNode (FunctionNode docs). An example of this is the Swish function provided by the Chainer library. You can observe its source here and clone it (or any other function, such as tanh) to make the necessary changes to its forward and backward declarations to fit your needs.

What are the operations allowed in tensorflow loss function definition?

I learned that we need to use tf.OPERATIONS to define the computation graph, but I found that sometimes using + or = is just fine, without tf.add or tf.assign (see here).
My question is: what operations are allowed in a TensorFlow loss function definition without using "tf.OPERATIONS"? In other words, other than + and =, what else? Can we use, for example, * or ^2 on variables?
PS: I just do not understand why x*x is ok but x^2 is not ...
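One likely explanation, sketched below: in Python, ^ is the bitwise XOR operator, not exponentiation, so it is not overloaded for float tensors, while +, -, * and ** are mapped to the corresponding tf ops:

import tensorflow as tf

x = tf.constant([2.0, 3.0])

a = x + x   # resolves to tf.add
b = x * x   # resolves to tf.multiply
c = x ** 2  # resolves to tf.pow
# x ^ 2 raises an error: '^' is bitwise XOR, undefined for float tensors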

What does the function control_dependencies do?

I would like to have an example illustrating the use of the function tf.control_dependencies. For example, I want to create two tensors X and Y and, if they are equal, do or print something.
import tensorflow as tf

session = tf.Session()
X = tf.constant(5)
Y = tf.constant(50)
with tf.control_dependencies([tf.assert_equal(X, Y)]):
    print('X and Y are equal!')
In the code above, X is clearly not equal to Y. What is tf.control_dependencies doing in this case?
control_dependencies is not a conditional. It is a mechanism to add dependencies to whatever ops you create in the with block. More specifically, what you specify in the argument to control_dependencies is ensured to be evaluated before anything you define in the with block.
In your example, you don't add any (TensorFlow) operations in the with block, so the block does nothing.
This answer has an example of how to use control_dependencies, where it is used to make sure the assignments happen before the batchnorm operations are evaluated.
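To illustrate, a minimal sketch in TF 1.x style (the variable names are invented) of the pattern that answer describes, forcing an assignment to run before a dependent op:

import tensorflow as tf

x = tf.Variable(0.0)
assign_op = tf.assign(x, 1.0)

# tf.identity creates an op *inside* the block, so it depends on assign_op.
with tf.control_dependencies([assign_op]):
    y = tf.identity(x) * 2.0

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y))  # 2.0 -- the assignment ran before y was evaluated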

Implementing a function with tensorflow

I am new to programming, especially programming with TensorFlow, and I am making toy problems to understand it.
I want to build a function like softmax, where the denominator is not the sum over all classes, but a sum over some sampled classes.
In Python, using numpy, it would look like this:
import numpy as np
from random import randint

def my_softmax(X, W, num_of_samples):
    K = 4
    S = np.zeros(np.dot(X, np.transpose(W)).shape)
    for line in range(X.shape[0]):
        XW = np.dot(X[line], np.transpose(W))
        m = np.max(XW)
        samples_sum = 0
        for s in range(num_of_samples):
            r = randint(0, K - 1)
            samples_sum += np.exp(XW[r] - m)
        S[line] = np.exp(XW - m) / samples_sum
    return S
How could this be implemented in TensorFlow?
More generally, is there a way to create new "custom" functions like that?
You can wrap Python/numpy functions as TensorFlow operators. See tf.py_func:
https://www.tensorflow.org/versions/r0.9/api_docs/python/script_ops.html
However, it is better not to use it in a production setting, as performance will be (significantly) impacted. For most np.* functions you will find corresponding tf.* functions you can use. Try to represent all your computation in terms of matrix/vector operations instead of for loops.
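A minimal sketch of the tf.py_func route (the function and placeholder names are illustrative; the API shown is the TF 1.x / r0.x one from the link above):

import numpy as np
import tensorflow as tf

def my_func(x):
    # Plain numpy code, executed by the Python interpreter at run time.
    return np.square(x).astype(np.float32)

inp = tf.placeholder(tf.float32, [None])
out = tf.py_func(my_func, [inp], tf.float32)

with tf.Session() as sess:
    print(sess.run(out, feed_dict={inp: [1.0, 2.0, 3.0]}))  # [1. 4. 9.]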
Also see
https://www.tensorflow.org/versions/r0.11/api_docs/python/constant_op.html

What's the difference between tf.sub and the plain minus operation in TensorFlow?

I am trying to use TensorFlow. Here is a very simple piece of code.
train = tf.placeholder(tf.float32, [1], name="train")
W1 = tf.Variable(tf.truncated_normal([1], stddev=0.1), name="W1")
loss = tf.pow(tf.sub(train, W1), 2)
step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
Just ignore the optimization part (4th line). It will take a floating-point number and train W1 so as to reduce the squared difference.
My question is simple. If I use just a minus sign instead of "tf.sub", as below, what is different? Will it cause a wrong result?
loss = tf.pow(train-W1, 2)
When I replace it, the result looks the same. If they are the same, why do we need the "tf.add/tf.sub" things at all?
Can the built-in backpropagation calculation only be done with the "tf.*" ops?
Yes, - and + resolve to tf.sub and tf.add. If you look at the TensorFlow code you will see that these operators on tf.Variable are overloaded with the tf.* methods.
As to why both exist: I assume the tf.* ones exist for consistency, so that sub and, say, the matmul operation can be used in the same way, while the operator overloading is for convenience.
(tf.sub appears to have been replaced with tf.subtract)
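A quick sanity check of the equivalence, sketched with the newer tf.subtract name:

import tensorflow as tf

a = tf.constant(3.0)
b = tf.constant(1.0)

c1 = a - b              # operator form
c2 = tf.subtract(a, b)  # explicit form; both create a 'Sub' op in the graph

with tf.Session() as sess:
    print(sess.run([c1, c2]))  # [2.0, 2.0]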
The only advantage I see is that you can specify a name for the operation, as in:
tf.subtract(train, W1, name='foofoo')
This helps identify the operation causing an error as the name you provide is also shown:
ValueError: Dimensions must be equal, but are 28 and 40 for 'foofoo' (op: 'Sub') with input shapes
It may also help when reading the graph in TensorBoard. It might be overkill for most people, as Python also shows the line number that triggered the error.
