tf.placeholder() is not compatible with eager execution [duplicate]

I upgraded my TF1 code to TF2 with tf_upgrade_v2. I'm a noob with both. I got the following error:
RuntimeError: tf.placeholder() is not compatible with eager execution.
I have several tf.compat.v1.placeholder() calls:
self.temperature = tf.compat.v1.placeholder_with_default(1., shape=())
self.edges_labels = tf.compat.v1.placeholder(dtype=tf.int64, shape=(None, vertexes, vertexes))
self.nodes_labels = tf.compat.v1.placeholder(dtype=tf.int64, shape=(None, vertexes))
self.embeddings = tf.compat.v1.placeholder(dtype=tf.float32, shape=(None, embedding_dim))
Could you give me any advice on how to proceed? Any "fast" solutions, or should I recode this?

I found an easy solution here: disable Tensorflow eager execution
Basically it is:
tf.compat.v1.disable_eager_execution()
With this, you disable eager execution (which is enabled by default in TF2), and you don't need to touch the code much otherwise.
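For instance, here is a minimal sketch (not the asker's full model) of where the call goes: run disable_eager_execution() before building anything, and the upgraded tf.compat.v1 placeholder/Session code works unchanged. The doubled op is illustrative.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # must run before any ops are created

temperature = tf.compat.v1.placeholder_with_default(1., shape=())
doubled = temperature * 2.0

with tf.compat.v1.Session() as sess:
    print(sess.run(doubled))                                # default value: 2.0
    print(sess.run(doubled, feed_dict={temperature: 3.0}))  # fed value: 6.0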

tf.placeholder() is meant to be fed through a session: when the session runs, it receives values from the feed dict and performs the required operations.
Generally, you would create a Session() with the 'with' keyword and run it. But this does not suit all situations; sometimes you want immediate execution. That is called eager execution.
Example:
Generally, this is the procedure to run a Session:
import tensorflow as tf

def square(num):
    return tf.square(num)

p = tf.placeholder(tf.float32)
q = square(p)

with tf.Session() as sess:
    print(sess.run(q, feed_dict={p: 10}))
But with eager execution enabled, we run it as:
import tensorflow as tf

tf.enable_eager_execution()

def square(num):
    return tf.square(num)

print(square(10))
Therefore we need not run it inside a session explicitly, which is more intuitive in most cases and gives a more interactive style of execution.
For further details visit:
https://www.tensorflow.org/guide/eager
If you are converting code from TensorFlow v1 to v2, use tf.compat.v1: the placeholder is available as tf.compat.v1.placeholder, but it can only be used with eager execution disabled.
tf.compat.v1.disable_eager_execution()
TensorFlow introduced eager execution mode, in which each node is executed immediately after definition. Statements using tf.placeholder are thus no longer valid.

In TensorFlow 1.x, placeholders are created and meant to be fed with actual values when a tf.Session is instantiated. From TensorFlow 2.0 onwards, however, eager execution is enabled by default, so the notion of a "placeholder" no longer makes sense: operations are computed immediately rather than deferred as in the old paradigm.
Also see Functions, not Sessions:
# TensorFlow 1.X
outputs = session.run(f(placeholder), feed_dict={placeholder: input})
# TensorFlow 2.0
outputs = f(input)
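As a concrete sketch of that pattern (the function f and its input here are illustrative), the placeholder/feed_dict pair simply becomes a function argument in TF2:
import tensorflow as tf

@tf.function
def f(x):
    return tf.square(x)

outputs = f(tf.constant(10.0))  # executes immediately, no Session needed
print(outputs.numpy())          # 100.0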

If you are getting this error while doing object detection with a TensorFlow model, use exporter_main_v2.py instead of export_inference_graph.py for exporting the model. This is the right method; if you just turn off eager execution, it will silence this error but generate others.
Also note that there are some parameter changes: for example, you now specify the path to the checkpoint directory instead of the path to a single checkpoint. Refer to this document for how to do object detection with TensorFlow v2.

To solve this, you have to disable eager execution, which is active by default. So add the following line of code.
tf.compat.v1.disable_eager_execution() #<--- Disable eager execution
[Screenshots in the original answer show the output before and after the fix.]

Related

Tf.mirrored strategy enabling eager execution of code

I am using TensorFlow 2.6, and my code requires the lines below at startup because I use a symbolic Keras tensor in a partial loss in my model:
from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()
At the same time I also want to train on multiple GPUs, so I used MirroredStrategy, but the issue is that MirroredStrategy requires eager execution, which conflicts with disabling it above. Please help me if there is another way to train on multiple GPUs.
I have tried calling my code with the line below, but it got stuck after warning about significant overhead.
tf.config.run_functions_eagerly(True)
But I believe this is wrong since, as I mentioned, I need eager mode disabled.
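For reference, this is a hedged sketch of the standard MirroredStrategy setup in TF 2.x, which assumes eager execution stays enabled; it is exactly the pattern that conflicts with the disable_eager_execution() call above. The model and loss are placeholders for the asker's real ones.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # one replica per visible GPU
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")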

Tensorflow 2 getting "WARNING:tensorflow:x out of the last x calls to <function> triggered tf.function retracing."

I'm working on a project where I have trained a series of binary classifiers with Keras, with Tensorflow as the backend engine. The input data I have is a series of images, where each binary classifier must make the prediction on the images, later I save the predictions on a CSV file.
The problem I have is when I get the predictions from the first series of binary classifiers there isn't any warning, but when the 5th or 6th binary classifier calls the method predict on the input data I get the following warning:
WARNING:tensorflow:5 out of the last 5 calls to <function Model.make_predict_function.<locals>.predict_function at 0x2b280ff5c158> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
To answer each point in the parentheses:
1. The predict method is called inside a for loop.
2. I don't pass tensors but a list of NumPy arrays of grayscale images, all of them with the same width and height. The only thing that can change is the batch size, because the list can hold one image or more.
3. As I wrote in point 2, I pass a list of NumPy arrays.
I have debugged my program and found that this warning always happens when the method predict is called. To summarize the code I have written is the following:
import cv2 as cv
import tensorflow as tf
from tensorflow.keras.models import load_model

# Load the models
binary_classifiers = [load_model(path) for path in path2models]
# Get the images
images = [...]  # Load the images with OpenCV
# Apply the resizing and reshapes on the images.
my_list = list()
for image in images:
    image_reworked = ...  # Apply the resizing and reshaping on images
    my_list.append(image_reworked)
# Get the prediction from each model
# This is where I get the warning
predictions = [model.predict(x=my_list, verbose=0) for model in binary_classifiers]
What I have tried
I have defined a function decorated with @tf.function and put the prediction code inside it, like this:
@tf.function
def testing(models, faces):
    return [model.predict(x=faces, verbose=0) for model in models]
But I ended up getting the following error:
RuntimeError: Detected a call to Model.predict inside a tf.function. Model.predict is a high-level endpoint that manages its own tf.function. Please move the call to Model.predict outside of all enclosing tf.functions. Note that you can call a Model directly on Tensors inside a tf.function like: model(x).
So the predict method is basically already a tf.function, and it's useless to wrap it in another tf.function when the warning comes from that method itself.
I have also checked those other two questions:
Tensorflow 2: Getting "WARNING:tensorflow:9 out of the last 9 calls to triggered tf.function retracing. Tracing is expensive"
Loading multiple saved tensorflow/keras models for prediction
But neither of the two questions answers my question about how to avoid this warning. Plus I have also checked the links in the warning message but I couldn't solve my problem.
What I want
I simply want to avoid this warning. While I'm still getting predictions from the models, I've noticed that the Python program takes far too much time to run predictions on a list of images.
What I'm using
Python 3.6.13
Tensorflow 2.3.0
Solution
After several attempts to suppress the warning from the predict method, I checked the TensorFlow documentation: one of the first tutorials explains that, by default, TensorFlow executes eagerly, which is useful for testing and debugging network models. Since I had already tested my models many times, all that was required was to disable eager mode with this single line of Python:
tf.compat.v1.disable_eager_execution()
Now the warning doesn't show up anymore.
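An alternative sketch, following the hint in the RuntimeError quoted above (untested here; binary_classifiers and my_list are the names from the question's code), is to call each model directly on a tensor instead of using predict:
import numpy as np
import tensorflow as tf

# Calling the model directly avoids building a new predict_function per model.
batch = tf.convert_to_tensor(np.stack(my_list).astype(np.float32))
predictions = [model(batch, training=False).numpy() for model in binary_classifiers]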
For the benefit of the community, restating the solution above (paraphrased from Simone): disable eager mode with tf.compat.v1.disable_eager_execution(), and the warning no longer appears.
tf.compat.v1.disable_eager_execution() can only be called before any Graphs, Ops, or Tensors have been created. It can be used at the beginning of the program for migration projects from TensorFlow 1.x to 2.x.
For more details, refer to the Eager execution guide.
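As a small sketch of that ordering constraint (the tensors here are illustrative), the call must be the first TensorFlow statement in the program:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # must come before any graph, op, or tensor

x = tf.constant([1.0, 2.0])  # now created in graph mode
with tf.compat.v1.Session() as sess:
    print(sess.run(tf.square(x)))  # [1. 4.]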

Tensorflow - Unable to use a BasicLSTMCell with a MirroredStrategy distribution in Estimator

I have a tf.nn.rnn_cell.BasicLSTMCell as part of my NN architecture. I use a for loop because it recurses over the input for a fixed number of time steps. Something like this:
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=lstm_dimensionality, name="forward_lstm")
_, (lstm_memory, lstm_hidden) = lstm_cell(input_m, state=[lstm_memory, lstm_hidden])
for i in range(3):
    # HERE is where the error is thrown
    _, (lstm_memory, lstm_hidden) = lstm_cell(input_m, state=[lstm_memory, lstm_hidden])
It works quite well locally on a single device. It also works fine in Google ML Engine on a single GPU. However, when I try distributing to 4 GPUs using tf.distribute.MirroredStrategy, it throws an exception
ValueError: At least one of name (None) and default_name (None) must be provided.
The lstm_cell callable doesn't even take a name parameter so it's confusing.
There isn't much room for details here, so I've created a toy example in this Github repo to reproduce the bug in ML Engine. It is specifically on this line where the error is thrown.
Tensorflow: 1.13.1
ML Engine: --runtime-version 1.13
In your code here you use a scope in the function compute_initial_lstm_state.
Then you reuse the two returned values here.
You use a scope to generate the values, but you assign them without a scope.
This is likely the root cause: with a single GPU the scope can be deduced automatically, but with multiple GPUs it cannot, and it fails.
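A hedged sketch of that suggestion (the function signature follows the repo's compute_initial_lstm_state; the zero initial state is illustrative): give the state an explicit, reusable scope so the replicas never have to deduce one.
import tensorflow as tf

def compute_initial_lstm_state(inputs, lstm_dimensionality):
    # Explicit scope name, reusable across MirroredStrategy replicas.
    with tf.variable_scope("initial_lstm_state", reuse=tf.AUTO_REUSE):
        batch_size = tf.shape(inputs)[0]
        lstm_memory = tf.zeros((batch_size, lstm_dimensionality))
        lstm_hidden = tf.zeros((batch_size, lstm_dimensionality))
    return lstm_memory, lstm_hidden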

Tensorflow eager no keras

Can you do eager execution in TensorFlow with NO Keras? I have a non-neural-network model in TensorFlow graph code to move to eager: a low-rank matrix factorization for a recommender system.
Python language.
Thank you.
I request that answerers demonstrate working code, and state explicitly if an answer includes speculation.
Yes, you can certainly use eager execution without Keras. Keras is built on top of the lower level operations that support eager execution.
For example:
import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

W = tf.contrib.eager.Variable(tf.random_normal((10, 10)))

def model(x):
    return tf.matmul(x, W)

data = np.random.randn(3, 10).astype(np.float32)
print(model(data))
You can see some more detailed tutorials at https://www.tensorflow.org/tutorials/eager/
That said, there are various corner cases/errors you might hit if trying to run arbitrary code written to construct a graph with eager execution enabled, and slight refactoring may be needed. Those would depend on the details of how the code is structured.
The reverse (i.e., writing code that works with eager execution enabled) generally works out well to construct the equivalent graph when eager execution is not enabled.
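Since the question mentions low-rank matrix factorization, here is a hedged sketch of that use case in eager mode without Keras, using the same TF 1.x-era APIs as the example above; shapes and hyperparameters are illustrative:
import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

# Factor a ratings matrix R as U @ V^T and fit it by gradient descent.
R = tf.constant(np.random.rand(20, 15).astype(np.float32))
rank = 4
U = tf.contrib.eager.Variable(tf.random_normal((20, rank), stddev=0.1))
V = tf.contrib.eager.Variable(tf.random_normal((15, rank), stddev=0.1))
optimizer = tf.train.AdamOptimizer(0.05)

for step in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(R - tf.matmul(U, V, transpose_b=True)))
    grads = tape.gradient(loss, [U, V])
    optimizer.apply_gradients(zip(grads, [U, V]))

print(loss.numpy())  # reconstruction error after training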
Hope that helps.

How to load in TensorFlow C++ a custom op

I've designed a custom op and it works as expected in Python. I call it with the usual lines of code:
import tensorflow as tf

custom_mod = tf.load_op_library('/path/to/.so')

with tf.device("/gpu:0"):
    with tf.Session() as sess:
        # (note: the function name .exec() depends on the kernel code)
        custom_mod_in_graph = custom_mod.exec()
        sess.run(custom_mod_in_graph)
However, I need to run the same op with the TF C++ API; unfortunately I can't find any corresponding calling procedure (i.e., no equivalent of tf.load_op_library).
Are they missing or is there a different way of integrating the ops there?
