How to get output layer values during training in TensorFlow (Python)

Is it possible to get the output layer values during training in order to build a custom loss function?
To be more specific, I want to get the output value and compute the loss using an external method.
My problem is that I can't call tf.eval() before the variables are initialized using tf.global_variables_initializer().
def run_command(im_path, p):
    s = 'cmd' + p.eval()
    os.system(s)
    im = imread(im_path)
    return im

def cross_corr(y_true, y_pred):
    path1 = 'path_to_input_image'
    true_image = run_command(path1, y_pred)
    path2 = 'path_to_predicted_image'
    predicted_image = run_command(path2, y_true)
    pearson_r, update_op = tf.contrib.metrics.streaming_pearson_correlation(predicted_image, true_image, name='pearson_r')
    loss = 1 - tf.math.square(pearson_r)
    return loss
***
***
# Create the network
***
tf.global_variables_initializer()
***
# run training
with tf.Session() as sess:
    ***

If your model has a single output value, you can subclass tf.keras.losses.Loss. For example, a trivial custom loss implementation (which wouldn't be very good in training):
import tensorflow as tf

class LossCustom(tf.keras.losses.Loss):
    def __init__(self, some_arg):
        super(LossCustom, self).__init__()
        self.param_some_arg = some_arg

    def get_config(self):
        config = super(LossCustom, self).get_config()
        config.update({
            "some_arg": self.param_some_arg,
        })
        return config

    def call(self, y_true, y_pred):
        result = y_pred - y_true
        return tf.math.reduce_sum(result)
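You can then pass an instance of it to model.compile like any built-in loss. A minimal sketch (the one-layer model is a placeholder of mine, not from the original answer):
# Hypothetical toy model, just to show where the custom loss plugs in.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss=LossCustom(some_arg=1.0))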
However, if you have multiple outputs and want to run them through the same loss function, you need to artificially combine them into a single value.
I do this in one of my models with a dummy layer at the end of the model, like this:
import tensorflow as tf

class LayerCheeseMultipleOut(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(LayerCheeseMultipleOut, self).__init__(**kwargs)

    def call(self, inputs):
        return tf.stack(inputs, axis=1)  # [batch_size, OUTPUTS, ...]
Then, in your custom loss function, unstack again like so:
output_a, output_b = tf.unstack(y_pred, axis=1)
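For instance, a minimal sketch of such a loss (my own illustration, assuming exactly two stacked outputs with targets stacked the same way; the squared-error terms are placeholders, not from the original answer):
def loss_stacked(y_true, y_pred):
    # Undo the stacking performed by LayerCheeseMultipleOut.
    output_a, output_b = tf.unstack(y_pred, axis=1)
    target_a, target_b = tf.unstack(y_true, axis=1)
    # Placeholder per-output losses, combined into a single scalar.
    loss_a = tf.reduce_mean(tf.square(output_a - target_a))
    loss_b = tf.reduce_mean(tf.square(output_b - target_b))
    return loss_a + loss_b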

Related

pytorch_lightning.utilities.exceptions.MisconfigurationException when training in pytorch lightning

I am training a sample model with dummy data and I get the following error when I start training: No `configure_optimizers()` method defined. Lightning `Trainer` expects as minimum a `training_step()`, `train_dataloader()` and `configure_optimizers()` to be defined. I have set everything up properly, but I still get this error. Is the problem the way I feed the dummy data into the network, or is there some other reason?
import torch
from torch import nn, optim
import pytorch_lightning as pl
from torch.utils.data import DataLoader

class ImageClassifier(pl.LightningModule):
    def __init__(self, learning_rate=0.001):
        super().__init__()
        self.learning_rate = learning_rate
        self.conv_layer1 = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        output = self.conv_layer1(x)
        print(output.shape)
        return output

    def training_step(self, batch, batch_idx):
        inputs, targets = batch
        output = self(inputs)
        accuracy = self.binary_accuracy(output, targets)
        loss = self.loss(output, targets)
        self.log('train_accuracy', accuracy, prog_bar=True)
        self.log('train_loss', loss)
        return {'loss': loss, "training_accuracy": accuracy}

    def test_step(self, batch, batch_idx):
        inputs, targets = batch
        outputs = self.inputs(inputs)
        accuracy = self.binary_accuracy(outputs, targets)
        loss = self.loss(outputs, targets)
        self.log('test_accuracy', accuracy)
        return {"test_loss": loss, "test_accuracy": accuracy}

    def configure_optimizer(self):
        params = self.parameters()
        optimizer = optim.Adam(params=params, lr=self.learning_rate)
        return optimizer

    def binary_accuracy(self, outputs, targets):
        _, outputs = torch.max(outputs, 1)
        correct_results_sum = (outputs == targets).sum().float()
        acc = correct_results_sum / targets.shape[0]
        return acc

model = ImageClassifier()
Input = DataLoader(torch.randn(1, 3, 28, 28))
trainer = pl.Trainer(max_epochs=10, progress_bar_refresh_rate=1)
trainer.fit(model, train_dataloader=Input)
In your code, the method is named configure_optimizer(), so no configure_optimizers() method is defined. It looks like a typo in the method name.
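For example, a minimal sketch of the fix (mirroring the optimizer setup from the question):
# Lightning looks for `configure_optimizers` (note the trailing "s").
def configure_optimizers(self):
    return optim.Adam(params=self.parameters(), lr=self.learning_rate)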
I had the same problem and then realized that a wrongly named method can lead to this error. Just make sure you spell method names correctly and import and use packages appropriately.

Why does tf.executing_eagerly() return False in TensorFlow 2?

Let me explain my set up. I am using TensorFlow 2.1, the Keras version shipped with TF, and TensorFlow Probability 0.9.
I have a function get_model that creates (with the functional API) and returns a model using Keras and custom layers. In the __init__ method of these custom layers A, I call a method A.m, which executes the statement print(tf.executing_eagerly()), but it prints False. Why?
To be more precise, this is roughly my setup:
def get_model():
    inp = Input(...)
    x = A(...)(inp)
    x = A(...)(x)
    ...
    model = Model(inp, out)
    model.compile(...)
    return model

class A(tfp.layers.DenseFlipout):  # TensorFlow Probability
    def __init__(...):
        self.m()

    def m(self):
        print(tf.executing_eagerly())  # Prints False
The documentation of tf.executing_eagerly says
Eager execution is enabled by default and this API returns True in most of cases. However, this API might return False in the following use cases.
Executing inside tf.function, unless under tf.init_scope or tf.config.experimental_run_functions_eagerly(True) is previously called.
Executing inside a transformation function for tf.dataset.
tf.compat.v1.disable_eager_execution() is called.
But none of these cases apply to mine, so tf.executing_eagerly() should return True, yet it doesn't. Why?
Here's a simple complete example (in TF 2.1) that illustrates the problem.
import tensorflow as tf

class MyLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        tf.print("tf.executing_eagerly() =", tf.executing_eagerly())
        return inputs

def get_model():
    inp = tf.keras.layers.Input(shape=(1,))
    out = MyLayer(8)(inp)
    model = tf.keras.Model(inputs=inp, outputs=out)
    model.summary()
    return model

def train():
    model = get_model()
    model.compile(optimizer="adam", loss="mae")
    x_train = [2, 3, 4, 1, 2, 6]
    y_train = [1, 0, 1, 0, 1, 1]
    model.fit(x_train, y_train)

if __name__ == '__main__':
    train()
This example prints tf.executing_eagerly() = False.
See the related Github issue.
As far as I know, when the input to a custom layer is a symbolic input, the layer is executed in graph (non-eager) mode; if the input is an eager tensor (as in example #1 below), the layer is executed in eager mode. Since the Keras functional API feeds your layer symbolic inputs, your model's output tf.executing_eagerly() = False is expected.
Example #1
import tensorflow as tf
from tensorflow.keras import layers

class Linear(layers.Layer):
    def __init__(self, units=32, input_dim=32):
        super(Linear, self).__init__()
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(initial_value=w_init(shape=(input_dim, units), dtype='float32'),
                             trainable=True)
        b_init = tf.zeros_initializer()
        self.b = tf.Variable(initial_value=b_init(shape=(units,), dtype='float32'),
                             trainable=True)

    def call(self, inputs):
        print("tf.executing_eagerly() =", tf.executing_eagerly())
        return tf.matmul(inputs, self.w) + self.b

x = tf.ones((1, 2))                      # prints tf.executing_eagerly() = True
# x = tf.keras.layers.Input(shape=(2,))  # prints tf.executing_eagerly() = False
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
# Output in graph mode: Tensor("linear_9/Identity:0", shape=(None, 4), dtype=float32)
# Output in eager mode: tf.Tensor([[-0.03011466  0.02563028  0.01234017  0.02272708]], shape=(1, 4), dtype=float32)
Here is another example with the Keras functional API where a custom layer is used (similar to yours). This model is executed in graph mode and prints tf.executing_eagerly() = False, as in your case.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class CustomDense(layers.Layer):
    def __init__(self, units=32):
        super(CustomDense, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='random_normal',
                                 trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='random_normal',
                                 trainable=True)

    def call(self, inputs):
        print("tf.executing_eagerly() =", tf.executing_eagerly())
        return tf.matmul(inputs, self.w) + self.b

inputs = keras.Input((4,))
outputs = CustomDense(10)(inputs)
model = keras.Model(inputs, outputs)
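If you need the prints inside call() to run eagerly for debugging anyway, one option (my suggestion, not part of the original answer) is the switch mentioned in the docs quoted above:
import tensorflow as tf

# Makes code that would normally be traced into a graph by tf.function
# execute eagerly instead, so tf.executing_eagerly() returns True inside it.
# Expect a noticeable slowdown; use for debugging only.
tf.config.experimental_run_functions_eagerly(True)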
You might be running in Colab. If so, try the following immediately after importing TensorFlow:
tf.compat.v1.enable_v2_behavior()
More generally, see the docs at https://www.tensorflow.org/api_docs/python/tf/executing_eagerly for more information on eager execution.

tf.function input_signature for distributed dataset in tensorflow 2.0

I am trying to build a distributed custom training loop in TensorFlow 2.0, but I can't figure out how to annotate the tf.function input signature in order to avoid retracing.
I have tried DatasetSpec and various combinations of TensorSpec tuples, but I get all sorts of errors.
My question
Is it possible to specify a tf.function input signature that accepts batched distributed datasets?
Minimal reproducing code
import tensorflow as tf
from tensorflow import keras
import numpy as np

class SimpleModel(keras.layers.Layer):
    def __init__(self, name='simple_model', **kwargs):
        super(SimpleModel, self).__init__(name=name, **kwargs)
        self.w = self.add_weight(shape=(1, 1),
                                 initializer=tf.constant_initializer(5.0),
                                 trainable=True,
                                 dtype=np.float32,
                                 name='w')

    def call(self, x):
        return tf.matmul(x, self.w)

class Trainer:
    def __init__(self):
        self.mirrored_strategy = tf.distribute.MirroredStrategy()
        with self.mirrored_strategy.scope():
            self.simple_model = SimpleModel()
            self.optimizer = tf.optimizers.Adam(learning_rate=0.01)

    def train_batches(self, dataset):
        dataset_dist = self.mirrored_strategy.experimental_distribute_dataset(dataset)
        with self.mirrored_strategy.scope():
            loss = self.train_batches_dist(dataset_dist)
        return loss.numpy()

    @tf.function(input_signature=(tf.data.DatasetSpec(element_spec=tf.TensorSpec(shape=(None, 1), dtype=tf.float32)),))
    def train_batches_dist(self, dataset_dist):
        total_loss = 0.0
        for batch in dataset_dist:
            losses = self.mirrored_strategy.experimental_run_v2(
                Trainer.train_batch, args=(self, batch)
            )
            mean_loss = self.mirrored_strategy.reduce(tf.distribute.ReduceOp.MEAN, losses, axis=0)
            total_loss += mean_loss
        return total_loss

    def train_batch(self, batch):
        with tf.GradientTape() as tape:
            losses = tf.square(2 * batch - self.simple_model(batch))
        gradients = tape.gradient(losses, self.simple_model.trainable_weights)
        self.optimizer.apply_gradients(zip(gradients, self.simple_model.trainable_weights))
        return losses

def main():
    values = np.random.sample((100, 1)).astype(np.float32)
    dataset = tf.data.Dataset.from_tensor_slices(values)
    dataset = dataset.batch(10)
    trainer = Trainer()
    for epoch in range(0, 100):
        loss = trainer.train_batches(dataset)
        print(loss / 10.0)

if __name__ == '__main__':
    main()
Error message
TypeError: If shallow structure is a sequence, input must also be a sequence. Input has type: <class 'tensorflow.python.distribute.input_lib.DistributedDataset'>

TensorFlow Estimator : model_fn has following not expected args: ['self']

I'm using the TensorFlow (1.1) high-level Estimator API to create my neural net, but I'm using it inside a class, and I have to access an instance of my class to generate the model of the neural network (here self.a).
class NeuralNetwork(object):
    def __init__(self):
        """Create neural net"""
        regressor = tf.estimator.Estimator(model_fn=self.my_model_fn,
                                           model_dir="/tmp/data")
        # ...

    def my_model_fn(self, features, labels, mode):
        """Generate neural net model"""
        self.a = a
        predictions = ...
        loss = ...
        train_op = ...
        return tf.estimator.EstimatorSpec(
            mode=mode,
            predictions=predictions,
            loss=loss,
            train_op=train_op)
But I get the error :
ValueError: model_fn [...] has following not expected args: ['self'].
I tried to remove self from the args of my model_fn, but got another error: TypeError: … got multiple values for keyword argument.
Is there any way to use EstimatorSpec inside a class?
It looks like the Estimator's argument checking is a bit overzealous. As a workaround, you can wrap the member-function model_fn in a lambda like so:
import tensorflow as tf

class ModelClass(object):
    def __init__(self):
        self._constant = 2.
        self.regressor = tf.estimator.Estimator(
            model_fn=lambda features, labels, mode: self._model_fn(
                features, labels, mode))

    def _model_fn(self, features, labels, mode):
        loss = tf.constant(self._constant)
        train_op = tf.no_op()
        return tf.estimator.EstimatorSpec(
            mode=mode,
            loss=loss,
            train_op=train_op)

ModelClass()
However, this is rather annoying. Would you mind filing a feature request on Github to relax this argument checking for member functions?
Update: Should be fixed in TensorFlow 1.3+. Thanks, Yuan!
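For reference, a minimal sketch of the same class in TensorFlow 1.3+, where (per the update above) the bound method can be passed directly:
import tensorflow as tf

class ModelClass(object):
    def __init__(self):
        self._constant = 2.
        # TF 1.3+ no longer rejects the implicit `self` argument,
        # so the lambda wrapper is unnecessary.
        self.regressor = tf.estimator.Estimator(model_fn=self._model_fn)

    def _model_fn(self, features, labels, mode):
        loss = tf.constant(self._constant)
        train_op = tf.no_op()
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)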

Using make_template() in TensorFlow

I am trying to use make_template() to avoid passing a reuse flag throughout my model. But it seems that make_template() doesn't work correctly when it is used inside a Python class. I pasted my model code and the error I am getting below. It is a simple MLP to train on the MNIST dataset.
Since the code is kind of long, the main part here is the _weights() function. I try to wrap it using make_template() and then use get_variable() inside it to create and reuse weights throughout my model. _weights() is used by _create_dense_layer(), and that in turn is used by _create_model() to create the graph. The train() function accepts tensors that I get from a data reader.
Model
class MLP(object):
    def __init__(self, hidden=[], biases=False, activation=tf.nn.relu):
        self.graph = tf.get_default_graph()
        self.hidden = hidden
        self.activation = activation
        self.biases = biases
        self.n_features = 784
        self.n_classes = 10
        self.bsize = 100
        self.l2 = 0.1

    def _real_weights(self, shape):
        initializer = tf.truncated_normal_initializer(stddev=0.1)
        weights = tf.get_variable('weights', shape, initializer=initializer)
        return weights
    # use make_template to make variable reuse transparent
    _weights = tf.make_template('_weights', _real_weights)

    def _real_biases(self, shape):
        initializer = tf.constant_initializer(0.0)
        return tf.get_variable('biases', shape, initializer=initializer)
    # use make_template to make variable reuse transparent
    _biases = tf.make_template('_biases', _real_biases)

    def _create_dense_layer(self, name, inputs, n_in, n_out, activation=True):
        with tf.variable_scope(name):
            weights = self._weights([n_in, n_out])
            layer = tf.matmul(inputs, weights)
            if self.biases:
                biases = self._biases([n_out])
                layer = layer + biases
            if activation:
                layer = self.activation(layer)
            return layer

    def _create_model(self, inputs):
        n_in = self.n_features
        for i in range(len(self.hidden)):
            n_out = self.hidden[i]
            name = 'hidden%d' % (i)
            inputs = self._create_dense_layer(name, inputs, n_in, n_out)
            n_in = n_out
        output = self._create_dense_layer('output', inputs, n_in, self.n_classes, activation=False)
        return output

    def _create_loss_op(self, logits, labels):
        cent = tf.nn.softmax_cross_entropy_with_logits(logits, labels)
        weights = self.graph.get_collection('weights')
        l2 = (self.l2 / self.bsize) * tf.reduce_sum([tf.reduce_sum(tf.square(w)) for w in weights])
        return tf.reduce_mean(cent, name='loss') + l2

    def _create_train_op(self, loss):
        optimizer = tf.train.AdamOptimizer()
        return optimizer.minimize(loss)

    def _create_accuracy_op(self, logits, labels):
        predictions = tf.nn.softmax(logits)
        errors = tf.equal(tf.argmax(predictions, 1), tf.argmax(labels, 1))
        return tf.reduce_mean(tf.cast(errors, tf.float32))

    def train(self, images, labels):
        logits = model._create_model(images)
        loss = model._create_loss_op(logits, labels)
        return model._create_train_op(loss)

    def accuracy(self, images, labels):
        logits = model._create_model(images)
        return model._create_accuracy_op(logits, labels)

    def predict(self, images):
        return model._create_model(images)
The error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in ()
25 model = MLP(hidden=[128])
26 # define ops
---> 27 train = model.train(images, labels)
28 accuracy = model.accuracy(eval_images, eval_labels)
29 # load test data and create a prediction op
in train(self, images, labels)
60
61 def train(self, images, labels):
---> 62 logits = model._create_model(images)
63 loss = model._create_loss_op(logits, labels)
64 return model._create_train_op(loss)
in _create_model(self, inputs)
39 n_out = self.hidden[i]
40 name = 'hidden%d' % (i)
---> 41 inputs = self._create_dense_layer(name, inputs, n_in, n_out)
42 n_in = n_out
43 output = self._create_dense_layer('output', inputs, n_in, self.n_classes, activation=False)
in _create_dense_layer(self, name, inputs, n_in, n_out, activation)
25 def _create_dense_layer(self, name, inputs, n_in, n_out, activation=True):
26 with tf.variable_scope(name):
---> 27 weights = self._weights([n_in, n_out])
28 layer = tf.matmul(inputs, weights)
29 if self.biases:
/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/template.py in __call__(self, *args, **kwargs)
265 self._unique_name, self._name) as vs:
266 self._var_scope = vs
--> 267 return self._call_func(args, kwargs, check_for_new_variables=False)
268
269 @property
/usr/local/lib/python3.5/site-packages/tensorflow/python/ops/template.py in _call_func(self, args, kwargs, check_for_new_variables)
206 ops.get_collection(ops.GraphKeys.TRAINABLE_VARIABLES))
207
--> 208 result = self._func(*args, **kwargs)
209 if check_for_new_variables:
210 trainable_variables = ops.get_collection(
TypeError: _real_weights() missing 1 required positional argument: 'shape'
originally defined at:
File "", line 1, in
class MLP(object):
File "", line 17, in MLP
_weights = tf.make_template('_weights', _real_weights)
There are multiple problems with this code as posted here, e.g. the model references in the train, accuracy and predict methods. I assume this is due to cutting the code out of its natural habitat.
The reason for the TypeError you mention,
TypeError: _real_weights() missing 1 required positional argument: 'shape'
most likely comes from the fact that _real_weights itself is an instance method of the MLP class, not a regular function or static method. As such, the first parameter to the function is always the self reference pointing to the instance of the class at the time of the call (an explicit version of the this pointer in C-like languages), as can be seen in the function declaration:
def _real_weights(self, shape):
    initializer = tf.truncated_normal_initializer(stddev=0.1)
    weights = tf.get_variable('weights', shape, initializer=initializer)
    return weights
Note that even though you don't use the argument, it's still required in this case. Thus, when creating a template of the function using
tf.make_template('_weights', _real_weights)
you basically state that the _weights template you create should take two positional arguments: self and shape (as does the _real_weights method). Consequently, when you call the function created from the template as
weights = self._weights([n_in, n_out])
you pass the array to the self argument, leaving the (required) shape argument unspecified.
From the looks of it, you have two options here. You could either make _real_weights a regular function outside of the MLP class, so that
def _real_weights(shape):
    initializer = tf.truncated_normal_initializer(stddev=0.1)
    weights = tf.get_variable('weights', shape, initializer=initializer)
    return weights

class MLP():
    # etc.
which is probably not what you want, given that you already created a class for the model, or you could explicitly make it a static method of the MLP class, so that
class MLP():
    @staticmethod
    def _real_weights(shape):
        initializer = tf.truncated_normal_initializer(stddev=0.1)
        weights = tf.get_variable('weights', shape, initializer=initializer)
        return weights
Since static methods by definition do not operate on a class instance, you can (and have to) omit the self reference.
You would then create the template as
tf.make_template('_weights', _real_weights)
in the first case, and as
tf.make_template('_weights', MLP._real_weights)
in the second case, explicitly qualifying the static method with the MLP class name. Either way, the _real_weights function/method and the _weights template now take only one argument, the shape of the variable to create.
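Putting the second option together, a minimal sketch (my own consolidation of the above; it creates the template in __init__ rather than in the class body, so the class name is already defined when the template is built):
import tensorflow as tf

class MLP(object):
    def __init__(self, hidden=[], biases=False, activation=tf.nn.relu):
        self.hidden = hidden
        # MLP._real_weights resolves to the plain function, since static
        # methods carry no implicit `self`; the template takes only `shape`.
        self._weights = tf.make_template('_weights', MLP._real_weights)

    @staticmethod
    def _real_weights(shape):
        initializer = tf.truncated_normal_initializer(stddev=0.1)
        return tf.get_variable('weights', shape, initializer=initializer)
Note that creating the template per instance also means each MLP instance reuses its own variables rather than sharing them across instances.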
