I am trying to generate image summaries to be displayed in TensorBoard. This worked in an eager execution environment.
Now I am trying to use eval_metric_ops, returning a dict of operations that compute metrics during execution of the computation graph. For this, I rely on tf.py_func to do my metric computations and plots. Its signature is
tf.py_func(
    func,
    inp,
    Tout,
    stateful=True,
    name=None
)
where Tout is the return type of the function. I managed to make this work for simple metrics (float values). As far as I understand, I need to declare a string return type for my summaries, which will later be parsed to rebuild my images.
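For the simple float case, the wiring looked roughly like this (compute_metric, labels and predictions are hypothetical stand-ins, not my actual code):

import numpy as np

def compute_metric(labels, predictions):
    # Hypothetical metric working on numpy arrays; the return value must be a
    # numpy float32 to match the declared Tout.
    return np.float32(np.mean(labels == predictions))

metric_value = tf.py_func(compute_metric, [labels, predictions], tf.float32)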
Here is the blocking point.
I build my Summary with:
summ = tf.Summary(value=[
    tf.Summary.Value(
        tag=metric_name,
        image=tf.Summary.Image(
            encoded_image_string=encode_image_array_as_png_str(
                self._last_metrics[metric_name])))])
Returning it as is, I get: W tensorflow/core/framework/op_kernel.cc:1306] Unimplemented: Unsupported object type Summary
Returning str(summ) gives: WARNING:tensorflow:Skipping summary for ..., cannot parse string to Summary.
I also tried to build it with:
tf.summary.image(
    name,
    tensor,
    max_outputs=3,
    collections=None,
    family=None
)
But this gives: W tensorflow/core/framework/op_kernel.cc:1306] Unimplemented: Unsupported object type Tensor
Do you know how to serialize a Summary to a string, a bytes iterable, or anything else that can be interpreted as a string tensor, in such a way that it can then be parsed back into an image Summary?
Thanks.
Shame on me.
Like many other classes in TensorFlow, Summary is defined by a Protocol Buffer message and thus implements SerializeToString().
Hence, just returning summ.SerializeToString() works!
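Putting it together, a minimal sketch of the py_func side (reusing the names from my question, with last_metrics standing in for self._last_metrics):

def build_image_summary():
    summ = tf.Summary(value=[
        tf.Summary.Value(
            tag=metric_name,
            image=tf.Summary.Image(
                encoded_image_string=encode_image_array_as_png_str(
                    last_metrics[metric_name])))])
    # Serialize the protobuf to bytes so py_func can return it as a string tensor.
    return summ.SerializeToString()

# Tout=tf.string: the serialized Summary comes back as a string tensor that can
# later be parsed back into an image Summary.
summary_bytes = tf.py_func(build_image_summary, [], tf.string)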
I've been following Apple's coremltools docs for converting PyTorch segmentation models to CoreML.
While it works fine when loading a remote PyTorch model, I have yet to figure out a working Python script for converting local/already-downloaded PyTorch models.
The following piece of code throws a TypeError: 'dict' object is not callable
# This works fine:
# model = torch.hub.load('pytorch/vision:v0.6.0', 'deeplabv3_resnet101', pretrained=True).eval()
model = torch.load('local_model_file.pth')

input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)

with torch.no_grad():
    output = model(input_batch)['out'][0]  # error here
torch_predictions = output.argmax(0)
There is an SO answer that offers a solution by initialising the model class and loading the state_dict, but I wonder what the concrete solution is when we don't have access to the PyTorch model?
In your code, model is a state dict, which is a dictionary mapping parameter names to parameter tensor values. As the linked answer states, the right way to load a state dict is by (a) creating the model object that the state dict belongs to and then (b) using nn.Module.load_state_dict to load it. To do (a), you need access to the model's class definition. If you don't have that access, then unfortunately I don't see any reliable way to load the state dict.
You might be able to guess what the class's __init__ looks like by looking at the parameter names in the state dict (e.g., 'module.stage1.rebnconvin.conv_s1.weight' looks like a convolution). However, even if the guess is correct and the state dict can be loaded, you still need to define the forward method, because the state dict only stores the parameters.
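For completeness, here is a minimal sketch of steps (a) and (b) when you do have the class definition, assuming the checkpoint is a torchvision deeplabv3_resnet101 state dict (adjust to your actual architecture):

import torch
from torchvision.models.segmentation import deeplabv3_resnet101

# (a) create the model object the state dict belongs to
model = deeplabv3_resnet101(num_classes=21)

# (b) load the parameters from the local file
state_dict = torch.load('local_model_file.pth', map_location='cpu')
model.load_state_dict(state_dict)
model.eval()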
I'd like to use pre-trained sentence embeddings in my TensorFlow graph execution model. The embeddings are available dynamically from a function call, which takes in an array of sentences and outputs an array of sentence embeddings. This function uses a pre-trained PyTorch model, so it has to remain separate from the TensorFlow model I'm training:
def get_pretrained_embeddings(sentences):
    return pretrained_pytorch_model.encode(sentences)
My tensorflow model looks like this:
class SentenceModel(tf.keras.Model):
    def __init__(self):
        super().__init__()

    def call(self, sentences):
        embedding_layer = tf.keras.layers.Embedding(
            10_000,
            256,
            embeddings_initializer=tf.keras.initializers.Constant(get_pretrained_embeddings(sentences)),
            trainable=False,
        )
        sentence_text_embedding = tf.keras.Sequential([
            embedding_layer,
            tf.keras.layers.GlobalAveragePooling1D(),
        ])
        return sentence_text_embedding,
But when I try to train this model using
cached_train = train.shuffle(100_000).batch(1024)
model.fit(cached_train)
my embeddings_initializer call gets the error:
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
I assume this is because tensorflow is trying to compile the graph using symbolic data. How can I get my external function, which relies on the current training data batch, to work with tensorflow's graph training?
TensorFlow compiles models to an execution graph before performing the actual training process. The obvious side effect that clues us into this: a regular Python print() statement in, say, our call() method only gets executed once, as TensorFlow runs through your code to construct the execution graph, which it later converts to native code.
The other side effect of this is that you cannot use anything that isn't a tensor of some description when training. By 'tensor' here, all of the following can be considered a tensor:
The input value of your call() method (obviously)
A tf.keras.Sequential
A tf.keras.Model/tf.keras.layers.Layer subclass
A SparseTensor
A tf.constant()
....probably more I haven't listed here.
To this end, you would need to convert your PyTorch model to a Tensorflow one to be able to reference it in a subclass of tf.keras.Model/tf.keras.layers.Layer.
As a side note, if you do find you need to iterate over a tensor, you should just be able to iterate over its 1st dimension (i.e. the batch size) like so:
for part in some_tensor:
    pass
If you want to iterate over some other dimension, I recommend doing tf.unstack(some_tensor, axis=AXIS_NUMBER_HERE) first and iterating over the result.
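For example, a minimal sketch of the unstack approach, assuming some_tensor has a known static size along the chosen axis:

import tensorflow as tf

some_tensor = tf.zeros((4, 3, 8))             # hypothetical (batch, time, features) tensor
for step in tf.unstack(some_tensor, axis=1):  # yields 3 tensors of shape (4, 8)
    print(step.shape)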
I am sub-classing tensorflow.keras.Model to implement a certain model. Expected behavior:
Training (fitting) time: returns a list of tensors including the final output and auxiliary output;
Inferring (predicting) time: returns a single output tensor.
And the code is:
class SomeModel(tensorflow.keras.Model):
    # ......
    def call(self, x, training=True):
        # ......
        return [aux1, aux2, net] if training else net
This is how I use it:
model = SomeModel(...)
model.compile(...,
              loss=keras.losses.SparseCategoricalCrossentropy(),
              loss_weights=[0.4, 0.4, 1], ...)
# ......
model.fit(data, [labels, labels, labels])
And got:
AssertionError: in converted code:
    ipython-input-33-862e679ab098:140 call  *
        return [aux1, aux2, net] if training else net
    ...\tensorflow_core\python\autograph\operators\control_flow.py:918 if_stmt
So the problem is that the if statement gets converted into the computation graph, which of course causes the error. The full stack trace is long and unhelpful, so it's not included here.
So, is there any way to make TensorFlow generate different graph based on training or not?
Which TensorFlow version are you using? In TensorFlow 2.2 you can override the behaviour of the .fit, .predict and .evaluate methods, which would generate different graphs for these methods (I assume) and potentially work for your use case.
The problem with earlier versions is that subclassed models get created by tracing the call method. This means Python conditionals become TensorFlow conditionals and face several limitations during graph creation and execution.
First, both branches (if-else) have to be defined, and for Python collections (e.g. lists) both branches have to have the same structure (e.g. the same number of elements). You can read about the limitations and effects of AutoGraph here and here.
(Also, a conditional may not get evaluated at every run, if the condition is based on a Python variable and not a tensor.)
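For reference, here is a minimal sketch of the TF 2.2 route (overriding train_step) so that fit() trains on all three heads while predict() only ever sees the final output. The layers, helper name and loss weights below are illustrative stand-ins for your model, not a drop-in implementation:

import tensorflow as tf

class SomeModel(tf.keras.Model):
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = tf.keras.layers.Dense(32, activation='relu')
        self.aux1_head = tf.keras.layers.Dense(num_classes)
        self.aux2_head = tf.keras.layers.Dense(num_classes)
        self.net_head = tf.keras.layers.Dense(num_classes)
        self.loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    def forward_with_aux(self, x):
        h = self.backbone(x)
        return self.aux1_head(h), self.aux2_head(h), self.net_head(h)

    def call(self, x, training=False):
        # predict()/evaluate() only ever see the final output
        return self.forward_with_aux(x)[-1]

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            aux1, aux2, net = self.forward_with_aux(x)
            loss = (0.4 * self.loss_fn(y, aux1)
                    + 0.4 * self.loss_fn(y, aux2)
                    + 1.0 * self.loss_fn(y, net))
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {'loss': loss}

After model.compile(optimizer='adam'), model.fit(x, y) uses this train_step, while model.predict(x) only runs call() and returns the single output.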
I am training an image classifier using .fit_generator() or .fit() and passing a dictionary to the class_weight= argument.
I never got errors in TF1.x but in 2.1 I get the following output when starting training:
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
What does it mean to coerce something from ... to ['...']?
The source for this warning in TensorFlow's repository is here; the comment placed there reads:
Attempt to coerce sample_weight_modes to the target structure. This implicitly depends on the fact that Model flattens outputs for its internal representation.
This seems like a bogus message. I get the same warning message after upgrading to TensorFlow 2.1, but I do not use any class weights or sample weights at all. I do use a generator that returns a tuple like this:
return inputs, targets
And now I just changed it to the following to make the warning go away:
return inputs, targets, [None]
I don't know if this is relevant, but my model uses 3 inputs, so my inputs variable is actually a list of 3 numpy arrays. targets is just a single numpy array.
In any case, it's just a warning. The training works fine either way.
Edit for TensorFlow 2.2:
This bug seems to have been fixed in TensorFlow 2.2, which is great. However the fix above will fail in TF 2.2, because it will try to get the shape of the sample weights, which will obviously fail with AttributeError: 'NoneType' object has no attribute 'shape'. So undo the above fix when upgrading to 2.2.
I believe this is a bug with tensorflow that will happen when you call model.compile() with default parameter sample_weight_mode=None and then call model.fit() with specified sample_weight or class_weight.
From the tensorflow repos:
fit() eventually calls _process_training_inputs()
_process_training_inputs() sets sample_weight_modes = [None] based on model.sample_weight_mode = None and then creates a DataAdapter with sample_weight_modes = [None]
the DataAdapter calls broadcast_sample_weight_modes() with sample_weight_modes = [None] during initialization
broadcast_sample_weight_modes() seems to expect sample_weight_modes = None but receives [None]
it asserts that [None] is a different structure from sample_weight / class_weight, overwrites it back to None by fitting to the structure of sample_weight / class_weight and outputs a warning
Warning aside this has no effect on fit() as sample_weight_modes in the DataAdapter is set back to None.
Note that the TensorFlow documentation states that sample_weight must be a NumPy array. If you call fit() with sample_weight.tolist() instead, you will not get a warning, but sample_weight is silently overwritten to None when _process_numpy_inputs() is called in preprocessing and receives an input of length greater than one.
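As a small illustration of that last point (model, x_train and y_train are placeholders):

import numpy as np

# Keep sample_weight as a NumPy array; passing a Python list is silently ignored.
sample_weight = np.ones(len(x_train), dtype=np.float32)
model.fit(x_train, y_train, sample_weight=sample_weight)            # weights applied
# model.fit(x_train, y_train, sample_weight=sample_weight.tolist()) # weights silently dropped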
I took your Gist and installed TensorFlow 2.0 instead of TFA, and it worked without any such warning.
Here is the Gist of the complete code. The command for installing TensorFlow is shown below:
!pip install tensorflow==2.0
Update: This bug is fixed in Tensorflow Version 2.2.
Instead of providing a dictionary
weights = {'0': 42.0, '1': 1.0}
I tried a list
weights = [42.0, 1.0]
and the warning disappeared.
I'd like to create a tf.data.Dataset.from_generator(...) dataset. I need to pass in a Python generator.
I would like to pass in a property of a previous dataset to the generator like so:
dataset = dataset.interleave(
    map_func=lambda x: tf.data.Dataset.from_generator(
        generator=lambda: gen(x), output_types=tf.int64),
    cycle_length=2
)
where I define gen(...) to take a value (a pointer to some data, such as a filename, which gen knows how to access).
This fails because gen receives a tensor object, not a python/numpy value.
Is there a way to resolve the tensor object to a value inside of gen(...)?
The reason for interleaving the generators is so I can manipulate the list of data-pointers/filenames with other dataset operations such as .shuffle() and .repeat() without the need to bake those into the gen(...) function, which would be necessary if I started with the generator directly from the list of data-pointers/filenames.
I want to use the generator because a large number of data values will be generated per data-pointer/filename.
TensorFlow now supports passing tensor arguments to the generator:
def map_func(tensor):
    dataset = tf.data.Dataset.from_generator(generator, tf.float32, args=(tensor,))
    return dataset
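A minimal sketch of the full pattern, where gen stands in for your generator (the file-reading part is stubbed out) and the filename dataset is shuffled and repeated before interleaving, as described in the question:

import tensorflow as tf

def gen(filename):
    # Because it is passed through args=(...), `filename` arrives here as a
    # numpy bytes object rather than a symbolic tensor.
    filename = filename.decode('utf-8')
    for value in range(3):  # stand-in for reading records from `filename`
        yield value

filenames = tf.data.Dataset.from_tensor_slices(['a.dat', 'b.dat'])
dataset = (filenames
           .shuffle(buffer_size=2)
           .repeat()
           .interleave(
               lambda fname: tf.data.Dataset.from_generator(gen, tf.int64, args=(fname,)),
               cycle_length=2))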
The answer is indeed no. Here are references to a couple of relevant GitHub issues (open as of this writing) for further developments on the question:
https://github.com/tensorflow/tensorflow/issues/13101
https://github.com/tensorflow/tensorflow/issues/16343