'tensorflow' has no attribute 'Session' - python

I am trying to convert a Tensor to a NumPy array.
The tensor I have has the following shape:
LastDenseLayer.output.shape
TensorShape([None, 128])
When I run the code below,
with tf.Session() as sess:
    LastLayer = LastDenseLayer.output.eval()
I get the following error:
module 'tensorflow' has no attribute 'Session'
I am running a Keras model and trying to get the values of a specific layer out of it.
I am unable to understand what is wrong here.
Regards
Sachin

TensorFlow 2.x removed tf.Session because eager execution is now the default. Please refer to the TensorFlow migration guide for more information.

It's recommended to update your code to meet TensorFlow 2.x requirements. As a quick solution you can use tf.compat.v1.Session() instead of tf.Session():
with tf.compat.v1.Session() as sess:
    LastLayer = LastDenseLayer.output.eval()
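If you do not need graph mode at all, a TF 2.x-native alternative is to wrap the layer in a small feature-extractor model and call it eagerly. A minimal sketch; the model variable, the input size of 784, and the batch of random data are assumptions for illustration:
import numpy as np
import tensorflow as tf

# Hypothetical: 'model' is the trained Keras model that contains LastDenseLayer.
feature_extractor = tf.keras.Model(inputs=model.input,
                                   outputs=LastDenseLayer.output)

# With eager execution (the TF 2.x default) the call returns an EagerTensor,
# which converts to NumPy directly; no Session is needed.
dummy_batch = np.random.rand(4, 784).astype("float32")  # assumed input shape
LastLayer = feature_extractor(dummy_batch).numpy()       # shape (4, 128)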

Related

'tensorflow' has no attribute 'to_int32'

I am trying to implement CTC loss for audio files, but I get the following error:
TensorFlow has no attribute 'to_int32'
I'm running TensorFlow version 2.0.0.
I think it's related to the version I'm currently using, as the error is thrown inside the package's own tensorflow_backend.py code.
I have imported the packages as tensorflow.keras.<class_name> with the backend imported as K.
You can cast the tensor in TensorFlow 2 as follows:
tf.cast(my_tensor, tf.int32)
You can read the documentation of the method at https://www.tensorflow.org/api_docs/python/tf/cast
You can also see that to_int32 is deprecated and only existed in TensorFlow 1:
https://www.tensorflow.org/api_docs/python/tf/compat/v1/to_int32
Alternatively, right after the import, just write
tf.to_int32 = lambda x: tf.cast(x, tf.int32)
This replicates the behaviour of tf.to_int32 everywhere in the code, so you don't have to manually edit TF 1.x code.
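A minimal sketch combining both suggestions (the labels tensor is just a placeholder example):
import tensorflow as tf

# Option 1: call tf.cast directly wherever to_int32 was used.
labels = tf.constant([1.0, 2.0, 3.0])          # example tensor
labels_int = tf.cast(labels, tf.int32)

# Option 2: shim the removed TF1 symbol so third-party code that still
# calls tf.to_int32 keeps working (a workaround, not an official API).
tf.to_int32 = lambda x, name="ToInt32": tf.cast(x, tf.int32, name=name)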

AttributeError: module 'tensorflow_core._api.v2.image' has no attribute 'resize_images'

I want to resize images from 28x28 to 32x32. I used tf.image.resize_images(x_train, (32, 32)), and it returns AttributeError: module 'tensorflow_core._api.v2.image' has no attribute 'resize_images'. My TensorFlow version is 2.0.0. How can I fix it?
It should be tf.image.resize. See the updated doc: https://www.tensorflow.org/api_docs/python/tf/image/resize
The problem
The tf.image.resize_images function is no longer supported, and when you execute code like the following:
import tensorflow as tf
img_final = tf.image.resize_images(img_tensor, [192, 192])
You get the following exception:
AttributeError: module 'tensorflow._api.v2.image' has no attribute 'resize_images'
The solution
The function has been renamed to resize. You should change your code as shown below:
import tensorflow as tf
img_final = tf.image.resize(img_tensor, [192, 192])
For more info please check here:
https://www.google.com/amp/s/better-coding.com/solved-tensorflow-attributeerror-module-tensorflow-_api-v2-image-has-no-attribute-resize_images/amp/
tf.image.resize(trainX, size=(32,32))
More info on https://www.tensorflow.org/api_docs/python/tf/image/resize
Note that trainX should be a 4D or 3D tensor
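For example, a minimal sketch assuming trainX holds MNIST-style (num_samples, 28, 28) grayscale images:
import tensorflow as tf

(x_train, _), _ = tf.keras.datasets.mnist.load_data()   # (60000, 28, 28)

# tf.image.resize expects a 3-D or 4-D tensor, so add a channel axis first.
x_train = tf.expand_dims(x_train, axis=-1)               # (60000, 28, 28, 1)
x_resized = tf.image.resize(x_train, size=(32, 32))      # (60000, 32, 32, 1)
print(x_resized.shape)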
According to the TensorFlow Core v2.7.0 docs, you should use tf.compat.v1.image.resize_bilinear instead of tf.image.resize_bilinear:
https://github.com/JiahuiYu/neuralgym/issues/16
The function has been renamed to resize. You should change your code as shown below; it worked for me.
import tensorflow as tf
img_final = tf.image.resize(img_tensor, [192, 192])
The function has been renamed to resize.
The code can be changed as follows:
import tensorflow as tf
img_A=tf.image.resize(................)

Tensorflow 2.0 - AttributeError: module 'tensorflow' has no attribute 'Session'

When I execute the command sess = tf.Session() in a TensorFlow 2.0 environment, I get the error message below:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'tensorflow' has no attribute 'Session'
System Information:
OS Platform and Distribution: Windows 10
Python Version: 3.7.1
Tensorflow Version: 2.0.0-alpha0 (installed with pip)
Steps to reproduce:
Installation:
pip install --upgrade pip
pip install tensorflow==2.0.0-alpha0
pip install keras
pip install numpy==1.16.2
Execution:
Execute command: import tensorflow as tf
Execute command: sess = tf.Session()
According to TF 1:1 Symbols Map, in TF 2.0 you should use tf.compat.v1.Session() instead of tf.Session()
https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0
To get TF 1.x like behaviour in TF 2.0 one can run
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
but then one cannot benefit from many of the improvements made in TF 2.0. For more details, please refer to the migration guide:
https://www.tensorflow.org/guide/migrate
TF2 runs Eager Execution by default, thus removing the need for Sessions. If you want to run static graphs, the more proper way is to use tf.function() in TF2. While Session can still be accessed via tf.compat.v1.Session() in TF2, I would discourage using it. It may be helpful to demonstrate this by comparing the hello-world programs in both:
TF1.x hello world:
import tensorflow as tf
msg = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(msg))
TF2.x hello world:
import tensorflow as tf
msg = tf.constant('Hello, TensorFlow!')
tf.print(msg)
For more info, see Effective TensorFlow 2
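For completeness, a minimal sketch of what the tf.function route looks like (the function and constants are just illustrative):
import tensorflow as tf

# tf.function traces the Python function into a graph on its first call,
# giving graph execution without an explicit Session.
@tf.function
def add_and_scale(x, y):
    return (x + y) * 2.0

result = add_and_scale(tf.constant(3.0), tf.constant(4.0))
tf.print(result)  # 14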
I faced this problem when I first tried Python after installing Windows 10 + Python 3.7 (64-bit) + Anaconda3 + Jupyter Notebook.
I solved it by referring to https://vispud.blogspot.com/2019/05/tensorflow200a0-attributeerror-module.html
I agree that "Session()" has been removed with TF 2.0.
I inserted two lines. One is tf.compat.v1.disable_eager_execution() and the other is sess = tf.compat.v1.Session()
My Hello.py is as follows:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
hello = tf.constant('Hello, TensorFlow!')
sess = tf.compat.v1.Session()
print(sess.run(hello))
For TF 2.x, you can do it like this:
import tensorflow as tf
with tf.compat.v1.Session() as sess:
    hello = tf.constant('hello world')
    print(sess.run(hello))
b'hello world'
If this is your code, the correct solution is to rewrite it to not use Session(), since that's no longer necessary in TensorFlow 2
If this is just code you're running, you can downgrade to TensorFlow 1 by running
pip3 install --upgrade --force-reinstall tensorflow-gpu==1.15.0
(or whatever the latest version of TensorFlow 1 is)
TensorFlow 2.x supports eager execution by default, hence Session is not supported.
For TensorFlow 2.0 and later, try this:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
a = tf.constant(5)
b = tf.constant(6)
c = tf.constant(7)
d = tf.multiply(a,b)
e = tf.add(c,d)
f = tf.subtract(a,c)
with tf.compat.v1.Session() as sess:
    outs = sess.run(f)
    print(outs)
import tensorflow as tf
sess = tf.Session()
This code will raise an AttributeError on version 2.x.
To use version 1.x code in version 2.x, try this:
import tensorflow.compat.v1 as tf
sess = tf.Session()
use this:
sess = tf.compat.v1.Session()
If there is an error, use the following:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
I also faced the same problem when I first tried Google Colab after updating Windows 10. Then I inserted these two lines:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
As a result, everything works OK.
import tensorflow._api.v2.compat.v1 as tf
tf.disable_v2_behavior()
Using Anaconda + Spyder (Python 3.7)
[code]
import tensorflow as tf
valor1 = tf.constant(2)
valor2 = tf.constant(3)
type(valor1)
print(valor1)
soma=valor1+valor2
type(soma)
print(soma)
sess = tf.compat.v1.Session()
with sess:
    print(sess.run(soma))
[console]
import tensorflow as tf
valor1 = tf.constant(2)
valor2 = tf.constant(3)
type(valor1)
print(valor1)
soma=valor1+valor2
type(soma)
Tensor("Const_8:0", shape=(), dtype=int32)
Out[18]: tensorflow.python.framework.ops.Tensor
print(soma)
Tensor("add_4:0", shape=(), dtype=int32)
sess = tf.compat.v1.Session()
with sess:
    print(sess.run(soma))
5
TF v2.0 supports Eager mode, as opposed to the Graph mode of v1.0. Hence, tf.Session() is not supported in v2.0, and I would suggest rewriting your code to work in Eager mode.
The same problem occurred for me:
import tensorflow as tf
hello = tf.constant('Hello World ')
sess = tf.compat.v1.Session()  # I got the error on this step when I used tf.Session()
sess.run(hello)
Try replacing tf.Session() with tf.compat.v1.Session().
If you're doing it alongside imports like these:
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
Then I suggest you follow these steps. NOTE: this is for TensorFlow 2 and CPU-only processing.
Step 1: Tell your code to act as if the compiler is TF1 and disable TF2 behavior, using the following code:
import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
Step 2: While importing libraries, remind your code that it has to act like TF1, every time.
tf.disable_v2_behavior()
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
Conclusion: This should work; let me know if something goes wrong. If you are on GPU, you will also need to add backend code for Keras. Also, TF2 does not support Session directly; there is a separate explanation for that on the TensorFlow site, linked here:
TensorFlow Page for using Sessions in TF2
Other major TF2 changes are covered in the link below; it is long, but please go through it (use Ctrl+F for assistance):
Effective TensorFlow 2 Page Link
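As an alternative worth considering, TF 2 already bundles Keras, so importing the same pieces from tensorflow.keras often avoids the need for disable_v2_behavior altogether. A minimal sketch (the image path is a placeholder, and downloading the ImageNet weights is assumed to be acceptable):
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

model = VGG16(weights="imagenet")

# "elephant.jpg" is a placeholder path; use any local image.
img = image.load_img("elephant.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(x)
print(preds.shape)  # (1, 1000)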
It is not as easy as you think. Running TF 1.x code in a TF 2.x environment, I found some errors and needed to review some variable usages when I fixed problems in neural networks found on the Internet. Transforming the code to TF 2.x is the better idea (easier and more adaptable).
TF 2.X
while not done:
    next_obs, reward, done, info = env.step(action)
    env.render()
    img = tf.keras.preprocessing.image.array_to_img(
        img,
        data_format=None,
        scale=True
    )
    img_array = tf.keras.preprocessing.image.img_to_array(img)
    predictions = model_self_1.predict(img_array)  ### Prediction
    ### Training: history_highscores = model_highscores.fit(batched_features, epochs=1, validation_data=(dataset.shuffle(10)))  # epochs=500  # , callbacks=[cp_callback, tb_callback]
TF 1.X
with tf.compat.v1.Session() as sess:
    saver = tf.compat.v1.train.Saver()
    saver.restore(sess, tf.train.latest_checkpoint(savedir + '\\invader_001'))
    train_loss, _ = sess.run([loss, training_op], feed_dict={X: o_obs, y: y_batch, X_action: o_act})
    for layer in mainQ_outputs:
        model.add(layer)
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(6, activation=tf.nn.softmax))
    predictions = model.predict(obs)  ### Prediction
    ### Training: summ = sess.run(summaries, feed_dict={X: o_obs, y: y_batch, X_action: o_act})

Keras + Tensorflow: 'ConvLSTM2D' object has no attribute 'outbound_nodes'

I'm trying to have a ConvLSTM as part of my functioning TensorFlow network. Because I had some issues with the TensorFlow ConvLSTM implementation, I settled on using the Keras ConvLSTM2D layer instead.
To make Keras available in my TensorFlow session I followed the suggestion from this blog post (I'm using the TensorFlow backend):
https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
A snippet of my code (the part that causes the issue):
# state has a shape of [1, 75, 32, 32] with batchsize=1
state = tf.concat([screen, screen2, non_spatial], axis=1)
# Reshaping state to get time=1 to have the right shape for the ConvLSTM
state_reshaped = tf.reshape(state, [1, 1, 75, 32, 32])
# Keras ConvLSTM2D Layer
# I tried leaving out the batch_size for the input_shape but it didn't make a difference for the error and it seems to be fine
lstm_layer = ConvLSTM2D(filters=5, kernel_size=(3, 3), input_shape=(1, 1, 75, 32, 32), data_format='channels_first', stateful=True)(state_reshaped)
fc1 = layers.fully_connected(inputs=layers.flatten(lstm_layer), num_outputs=256, activation_fn=tf.nn.relu)
This gives me the following error:
AttributeError: 'ConvLSTM2D' object has no attribute 'outbound_nodes'
I have no idea what this means. I thought it might have to do with mixing the Keras ConvLSTM and TensorFlow's flatten. So I tried using Keras Flatten() instead, like this:
# lstm_layer shape is (5, 5, 30, 30)
lstm_layer = Flatten(data_format='channels_first')(lstm_layer)
fc1 = layers.fully_connected(inputs=lstm_layer, num_outputs=256, activation_fn=tf.nn.relu)
and got the following error: ValueError: The last dimension of the inputs to 'Dense' should be defined. Found 'None'.
This error is caused by Flatten(), for whatever reason, having an output shape of (?, ?), while the fully connected layer needs a defined shape for the last dimension, but I don't understand why it would be undefined. It was defined before.
Using Reshape((4500,))(lstm_layer) instead gives me the same no attribute 'outbound_nodes' error.
I googled the issue and I seem to not be the only one but I couldn't find a solution.
How can I solve this issue?
Is the unknown output shape of Flatten() a bug or wanted behavior, if so why?
I encountered the same problem and had a bit of a dig into the tensorflow code. The problem is that there was some refactoring done for Keras 2.2.0 and tf.keras hasn't yet been updated to this new API.
The 'outbound_nodes' attribute was renamed to '_outbound_nodes' in Keras 2.2.0. It's pretty easy to fix; there are two references in base.py you need to update:
/site-packages/tensorflow/python/layers/base.py
After updating it works fine for me.
In my case I was getting the error on a custom subclass, but the following solution can be applied nonetheless, if you subclass ConvLSTM2D and add this to your new class:
@property
def outbound_nodes(self):
    if hasattr(self, '_outbound_nodes'):
        print("outbound_nodes called but _outbound_nodes found")
    return getattr(self, '_outbound_nodes', [])
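Under the same assumption (an older tf.keras/tf.layers build that still reads outbound_nodes), the override above would live in a thin subclass along these lines; PatchedConvLSTM2D is a made-up name:
from keras.layers import ConvLSTM2D

class PatchedConvLSTM2D(ConvLSTM2D):
    # Expose the attribute renamed in Keras 2.2.0 under its old name so
    # older tf.layers code that still looks it up keeps working.
    @property
    def outbound_nodes(self):
        return getattr(self, '_outbound_nodes', [])

# Used exactly like the original layer, e.g.:
# lstm_layer = PatchedConvLSTM2D(filters=5, kernel_size=(3, 3),
#                                data_format='channels_first')(state_reshaped)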
I found the solution, even though I don't know why it works.
Currently I'm using TensorFlow 1.8 and Keras 2.2. If you downgrade Keras to ~2.1.1 it works without any problems and you can easily use Keras layers together with TensorFlow. This fixed AttributeError: 'ConvLSTM2D' object has no attribute 'outbound_nodes', and then I just used layers.flatten(lstm_layer) and everything worked.
As others have pointed out, this is because of a mismatch between your installed tensorflow and keras libraries.
Their solutions work, but in my opinion, the cleanest and easiest way to solve this is by using the keras layers contained within the tensorflow package itself rather than by using the keras library directly.
i.e., replace
from keras.layers import ConvLSTM2D
by
from tensorflow.python.keras.layers import ConvLSTM2D
This will ensure that your tensorflow and keras function calls / objects are always compatible, and solved this issue for me.
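Applied to the snippet from the question (in its TF 1.x setup), only the import changes; a minimal sketch where the placeholder stands in for the reshaped state tensor:
import tensorflow as tf
# Use the Keras implementation bundled with TensorFlow instead of standalone keras.
from tensorflow.python.keras.layers import ConvLSTM2D

# Same shape as the reshaped state in the question: (batch, time, C, H, W).
state_reshaped = tf.placeholder(tf.float32, [1, 1, 75, 32, 32])
lstm_layer = ConvLSTM2D(filters=5, kernel_size=(3, 3),
                        data_format='channels_first')(state_reshaped)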

Keras + TensorFlow: "module 'tensorflow' has no attribute 'merge_all_summaries'"

Very similar to the question "Keras + TensorFlow gives the error 'no attribute control_flow_ops'": with the convolutional autoencoder example from https://blog.keras.io/building-autoencoders-in-keras.html I get the error
[...]lib/python3.5/site-packages/keras/callbacks.py in _set_model(self, model)
478 tf.histogram_summary('{}_out'.format(layer),
479 layer.output)
--> 480 self.merged = tf.merge_all_summaries()
481 if self.write_graph:
482 if parse_version(tf.__version__) >= parse_version('0.8.0'):
AttributeError: module 'tensorflow' has no attribute 'merge_all_summaries'
I tried
import tensorflow as tf
tf.merge_all_summaries = tf
but that did not work. What should I do?
The error is mentioned in "AttributeError: 'module' object has no attribute 'merge_all_summaries'". I also have version 1.0.0. But what is the solution? I don't want to downgrade TensorFlow.
Make42 is absolutely correct that the changes they describe in their answer must be made in order to migrate a codebase to work with TensorFlow 1.0. However, the errors you are seeing are in the Keras library itself. Fortunately, these errors have been fixed in the Keras codebase since January 2017, so upgrading to Keras 1.2.2 or later will fix the error for you.
The answer is to migrate as appropriate. Check out https://www.tensorflow.org/install/migration. There you see that
- tf.merge_summary should be renamed to tf.summary.merge
- tf.train.SummaryWriter should be renamed to tf.summary.FileWriter
(Actually SummaryWriter has also been changed.) So instead of
import tensorflow as tf
tf.merge_all_summaries = tf
you should write
import tensorflow as tf
tf.merge_all_summaries = tf.summary.merge_all
tf.train.SummaryWriter = tf.summary.FileWriter
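A minimal TF 1.0-era sketch of the renamed summary API (the scalar value and the log directory are arbitrary):
import tensorflow as tf  # TensorFlow 1.x

x = tf.constant(3.0)
tf.summary.scalar('x_value', x)              # replaces tf.scalar_summary

merged = tf.summary.merge_all()              # replaces tf.merge_all_summaries
writer = tf.summary.FileWriter('/tmp/logs')  # replaces tf.train.SummaryWriter

with tf.Session() as sess:
    summary = sess.run(merged)
    writer.add_summary(summary, global_step=0)
    writer.close()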
