TF 1: can't use tf.Tensor as a bool condition

I am learning TF 2, but for a project I need to use TF 1. I ran into a problem when using a tensor as a bool condition in an if statement.
Here is a minimal example to demonstrate:
import tensorflow as tf
a = tf.constant(2.0)
if a:
    print('hi')
And the error:
using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
The value actually is a float; do you have any idea how to make this work?

As the traceback shows, using a `tf.Tensor` as a Python `bool` is not allowed in graph execution.
To execute this code you may need to upgrade TensorFlow to TF 2.x (which runs eagerly by default) using one of the commands below:
!pip install --upgrade tensorflow
!pip install tensorflow==2.1.0 # to install a specific TF version
The code runs successfully in the latest TF version:
import tensorflow as tf
print(tf.__version__)

a = tf.constant(2.0)
if a:
    print('hi')
print(a)
Output:
2.7.0
hi
tf.Tensor(2.0, shape=(), dtype=float32)
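If you have to stay on TF 1.x, a minimal workaround sketch (assuming graph mode) is to either evaluate the tensor before branching in Python, or keep the branch inside the graph with tf.cond:
import tensorflow as tf

a = tf.constant(2.0)

# Option 1: pull the value out of the graph first, then branch in plain Python.
with tf.Session() as sess:
    if sess.run(a):  # sess.run returns an ordinary numpy float
        print('hi')

# Option 2: keep the branch inside the graph with tf.cond.
b = tf.cond(tf.not_equal(a, 0.0),
            lambda: tf.constant('hi'),
            lambda: tf.constant('bye'))
with tf.Session() as sess:
    print(sess.run(b))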

Related

How to define dynamic-shape variable when building computational graph with Tensorflow 1.15

System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
TensorFlow installed from (source or binary): Conda repo
TensorFlow version (use command below): 1.15
Python version: 3.7.7
Bazel version (if compiling from source):
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version: 10.1
GPU model and memory: Tesla V100-SMX3-32GB
Describe the current behavior
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [] rhs shape= [1,1]
[[{{node Variable/Assign}}]]
Describe the expected behavior
No error
Standalone code to reproduce the issue
import tensorflow as tf
import numpy as np
import os
os.environ["CUDA_VISIBLE_DEVICES"]='0'
with tf.Session() as sess:
    v = tf.Variable(np.zeros(shape=[1,1]), shape=tf.TensorShape(None))
    sess.run(tf.global_variables_initializer())
Observation:
The error does not appear when I enable eager execution.
Code:
tf.enable_eager_execution()
v = tf.Variable(np.zeros([1,1]),shape=tf.TensorShape(None))
tf.print(v)
v.assign(np.ones([2,2]))
tf.print(v)
Output:
[[0]]
[[1 1]
[1 1]]
Link to a MWE: https://colab.research.google.com/gist/amahendrakar/3fe8345db4092d520246205be4b97948/41620.ipynb
You just have to enable resource variables, as the dynamic-shape behavior is only available for this 'updated' Variable class.
import tensorflow as tf
import numpy as np
import os
os.environ["CUDA_VISIBLE_DEVICES"]='0'
tf.compat.v1.enable_resource_variables()
with tf.Session() as sess:
    v = tf.Variable(np.zeros(shape=[1,1]), shape=tf.TensorShape(None))
    sess.run(tf.global_variables_initializer())
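As a follow-up sketch (the values and shapes here are illustrative), with resource variables enabled the dynamic-shape assign from the eager example should also go through in graph mode:
import tensorflow as tf
import numpy as np

tf.compat.v1.enable_resource_variables()

# shape=None tells TF the variable's shape may change between assigns.
v = tf.Variable(np.zeros(shape=[1, 1]), shape=tf.TensorShape(None))
assign_op = v.assign(np.ones([2, 2]))  # a different shape is now accepted

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(assign_op))  # [[1. 1.] [1. 1.]]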

Keras Import Error when loading a pre-trained model

I am trying to use a pre-trained model in TensorFlow. I am using the following code:
import tensorflow as tf
from tensorflow import keras
from keras.applications import mobilenet_v2
I get the following error:
ModuleNotFoundError: No module named 'keras'
However, the following codes do work:
from tensorflow.keras.applications import mobilenet_v2
OR
from keras_applications import mobilenet_v2
The two imports above work, but the first one (from keras.applications) doesn't. Why does this happen?
I've solved this problem by downgrading TensorFlow to version 2.0 using this command:
pip install tensorflow==2.0
I hope it helps you.
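Alternatively, a minimal sketch that sticks with the tf.keras path the question already reports as working, so no separate keras package (or downgrade) is needed; it assumes the ImageNet weights can be downloaded:
import tensorflow as tf
from tensorflow.keras.applications import mobilenet_v2

# Downloads the pre-trained ImageNet weights on first use.
model = mobilenet_v2.MobileNetV2(weights='imagenet')
print(model.name, len(model.layers))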

Seq2Seq Training helper module in tensorflow

In the TF 2.0 update, TensorFlow removed the contrib module from the framework. Although most of its functionality is covered by tf.compat.v1, I couldn't find substitutes for functions such as tf.contrib.seq2seq.BasicDecoder, tf.contrib.seq2seq.dynamic_decode, tf.contrib.seq2seq.GreedyEmbeddingHelper and tf.contrib.seq2seq.TrainingHelper.
How to use these functions in my model?
Check out TensorFlow Addons:
https://github.com/tensorflow/addons
https://www.tensorflow.org/addons/api_docs/python/tfa/seq2seq?hl=en
The functions you are looking for are all there.
pip install tensorflow-addons
Then:
import tensorflow as tf
import tensorflow_addons as tfa
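As a rough sketch of how the old helper-based API maps onto tfa.seq2seq (the class names come from the tfa docs linked above; the layer sizes below are purely illustrative):
import tensorflow as tf
import tensorflow_addons as tfa

# Approximate mapping:
#   tf.contrib.seq2seq.TrainingHelper        -> tfa.seq2seq.TrainingSampler
#   tf.contrib.seq2seq.GreedyEmbeddingHelper -> tfa.seq2seq.GreedyEmbeddingSampler
#   tf.contrib.seq2seq.BasicDecoder          -> tfa.seq2seq.BasicDecoder
#   tf.contrib.seq2seq.dynamic_decode        -> tfa.seq2seq.dynamic_decode

vocab_size, embed_dim, units, batch_size, max_len = 100, 16, 32, 4, 7

embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)
cell = tf.keras.layers.LSTMCell(units)
output_layer = tf.keras.layers.Dense(vocab_size)

# Training-time decoding: feed the ground-truth tokens at every step.
sampler = tfa.seq2seq.TrainingSampler()
decoder = tfa.seq2seq.BasicDecoder(cell, sampler, output_layer=output_layer)

decoder_inputs = embedding(tf.zeros([batch_size, max_len], dtype=tf.int32))
initial_state = cell.get_initial_state(batch_size=batch_size, dtype=tf.float32)
outputs, _, _ = decoder(decoder_inputs,
                        initial_state=initial_state,
                        sequence_length=tf.fill([batch_size], max_len))
print(outputs.rnn_output.shape)  # (4, 7, 100)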

ImportError: cannot import name normalize_data_format

I am very new to GitHub. I have installed Git on Ubuntu 16.04, along with Python 2.7.12, TensorFlow 1.9, and Keras. I want to use my own custom activation function and optimizer in a Keras RNN. I searched the web and learned that I need to install the keras-contrib package to use advanced and custom activation functions.
So I installed keras-contrib from GitHub, but I don't know how to work with it or how to run a program that uses it.
I tried the following commands:
git clone https://www.github.com/keras-team/keras-contrib.git
cd keras-contrib
python setup.py install
Then I tried the following code:
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
from keras_contrib.layers.advanced_activations import PELU
It shows the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "keras_contrib/__init__.py", line 4, in <module>
from . import layers
File "keras_contrib/layers/__init__.py", line 3, in <module>
from .convolutional import *
File "keras_contrib/layers/convolutional.py", line 15, in <module>
from keras.utils.conv_utils import normalize_data_format
ImportError: cannot import name normalize_data_format
Could anyone please check this error and help me sort it out?
I updated the keras-contrib source code installed on my Linux machine. Follow the changes here:
https://github.com/ekholabs/keras-contrib/commit/0dac2da8a19f34946448121c6b9c8535bfb22ce2
Now it works well.
I had the same problem. I installed Keras version 2.2.2 using the following command and the problem was solved.
pip install -q keras==2.2.2
Refer to this PR:
https://github.com/keras-team/keras-contrib/pull/292
Had the same issue. The problem is that the normalize_data_format function was moved from keras.utils.conv_utils to keras.backend.common in later versions of Keras. You can use
import keras
and then in your code use
keras.backend.normalize_data_format
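As a minimal shim sketch (assuming Keras >= 2.2, where the function lives under keras.backend), you can alias it back onto the old path before importing keras_contrib so the old import keeps working:
import keras
import keras.utils.conv_utils as conv_utils

# keras_contrib still does `from keras.utils.conv_utils import normalize_data_format`,
# so put the relocated function back where it expects to find it.
if not hasattr(conv_utils, 'normalize_data_format'):
    conv_utils.normalize_data_format = keras.backend.normalize_data_format

from keras_contrib.layers.advanced_activations import PELU  # should now import cleanly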
I found that in Keras 2.6.0 the normalize function is not gone, it is just "stored" in np_utils.py, so all we need to do is change
from keras.utils import normalize
to
from keras.utils.np_utils import normalize
It must be because the keras_contrib you have downloaded is not compatible with the updated version of Keras. Check this link: https://github.com/keras-team/keras/blob/master/keras/utils/conv_utils.py
There is no function named normalize_data_format there, which is why the import throws an error.
It does not work...
This bug is reported and fixed here: https://github.com/keras-team/keras-contrib/issues/291
On my Windows 10 system and in Colaboratory, using Python 3.7, I solved this problem by updating Keras and installing the git version of keras-contrib:
pip install -q keras==2.2.2
pip install git+https://www.github.com/keras-team/keras-contrib.git
Check your Keras version with:
import keras
print(keras.__version__)
I had the same problem. I solved it by using this:
from tensorflow.keras.utils import normalize
instead of :
from keras.utils import normalize

AttributeError: module 'tensorflow' has no attribute 'feature_column'

So I am new to machine learning and was trying out the TensorFlow Linear Model Tutorial given here:
https://www.tensorflow.org/tutorials/wide
I literally just downloaded their tutorial and tried to run it on my computer, but I got the error:
AttributeError: module 'tensorflow' has no attribute 'feature_column'
I searched online and learned that this can happen with older versions of TensorFlow, but I am running the latest version: 1.3.0.
So why am I getting this error, and how do I fix it?
TensorFlow 1.3 should support feature_column well. You might have accidentally used an old version. Try the following code to verify your version:
import tensorflow as tf
print(tf.__version__)
print(dir(tf.feature_column))
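As a quick sanity check (the column names below are only illustrative), a minimal sketch like this should run on any TF >= 1.3 where feature_column is really available:
import tensorflow as tf

# Define one numeric and one categorical column, the building blocks the
# linear-model tutorial relies on.
age = tf.feature_column.numeric_column('age')
education = tf.feature_column.categorical_column_with_vocabulary_list(
    'education', ['Bachelors', 'Masters', 'Doctorate'])
print(age)
print(education)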
If you're importing TensorFlow in a project that uses Keras, import the Keras modules first, then TensorFlow. That solved the problem for me.
Do this (notice the order):
from keras.backend.tensorflow_backend import set_session
from keras.models import Sequential
from keras import applications
import tensorflow as tf
Do not do this:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
from keras.models import Sequential
from keras import applications
Upgrading your tensorflow might help.
pip install --upgrade tensorflow
I faced a similar error while running a session using the TensorFlow 2.0 beta. I used the following form for running a session:
import tensorflow as tf
constant = tf.constant([[1, 2, 3],[4, 5, 6],[7, 8, 9]])
with tf.compat.v1.Session() as sess:
    print(sess.run(constant))
instead of:
import tensorflow as tf
constant = tf.constant([[1, 2, 3],[4, 5, 6],[7, 8, 9]])
with tf.Session() as sess:
    print(sess.run(constant))
Also,
tf.compat.v1.Session()
is backward compatible. You might face a similar error when using other functions in TensorFlow 2.0 beta, such as print, get_variable, etc. Use a similar form, as shown in the example above.
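For example, a hedged sketch applying the same compat.v1 pattern to get_variable (this assumes TF 2.x with the v1 compatibility layer; the variable name and shape are illustrative):
import tensorflow as tf

# get_variable needs graph mode, so switch eager execution off first.
tf.compat.v1.disable_eager_execution()

w = tf.compat.v1.get_variable('w', shape=[2, 2],
                              initializer=tf.compat.v1.zeros_initializer())
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(w))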
