Tensorflow gives different prediction than Keras

I have a model trained in Keras with a TensorFlow 1.10 backend, and I want to make inferences using TensorFlow 2.4.
I converted the .h5 model to SavedModel format:
import tensorflow as tf
from tensorflow.keras.models import load_model
from tensorflow.python.saved_model import builder
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
from tensorflow.python.saved_model import tag_constants

def export_h5_to_pb(path_to_h5, export_path):
    if tf.executing_eagerly():
        tf.compat.v1.disable_eager_execution()
    loaded_model = load_model(path_to_h5)
    b = builder.SavedModelBuilder(export_path)
    signature = predict_signature_def(inputs={"inputs": loaded_model.input},
                                      outputs={"score": loaded_model.output})
    session = tf.compat.v1.Session()
    init_op = tf.compat.v1.global_variables_initializer()
    session.run(init_op)
    b.add_meta_graph_and_variables(
        sess=session, tags=[tag_constants.SERVING],
        signature_def_map={"serving_default": signature})
    b.save()

export_h5_to_pb('./trained_nework_VGG3_5comp.h5', './export/Servo/1')
The TensorFlow prediction gives me:
import tensorflow as tf
imported = tf.saved_model.load('./export/Servo/1', tags='serve')
f = imported.signatures["serving_default"]
f(inputs=tf.constant(test_payload))
> {'score': <tf.Tensor: shape=(1, 6), dtype=float32, numpy=array([[0.16693498, 0.16678475, 0.16666655, 0.16653116, 0.16678214,
0.16630043]], dtype=float32)>}
While the original (correct) Keras prediction gives:
from keras.models import load_model
model = load_model('./trained_nework_VGG3_5comp.h5')
model.predict(test_payload)
> array([[1.0000000e+00, 3.0078113e-09, 2.0143587e-10, 5.7580127e-09, 1.9100479e-09, 4.1776910e-10]], dtype=float32)
What am I doing wrong?

I had a very similar problem, which you replied to here, and I'll share what worked for me. If you are doing this for SageMaker/AWS (which, judging by the file directory path and the use of the word "payload", I assume you are), then the problem is caused by a discrepancy in TensorFlow versions.
In all of the blogs I found (e.g. this one), framework_version 1.12 was used when loading the model with TensorflowModel. Because of this, I reinstalled TensorFlow 1.12 in the SageMaker Jupyter instance, retrained my model with 1.12, and changed framework_version to 1.12, and this worked for me. If you aren't using the model on AWS this may well not apply, but if you are, it is a potential solution. Good luck!
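For illustration, a minimal sketch of what pinning the version can look like with the v1-era SageMaker Python SDK (the S3 path, role, and entry point below are placeholders, not values from the question):
from sagemaker.tensorflow.model import TensorFlowModel

# Placeholders throughout; framework_version should match the TensorFlow
# version the model was trained with (1.12 in my case).
sagemaker_model = TensorFlowModel(model_data='s3://my-bucket/model.tar.gz',
                                  role='my-sagemaker-execution-role',
                                  entry_point='train.py',
                                  framework_version='1.12')
predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')
print(predictor.predict(test_payload))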

Related

Issue with opening h5 file with Python code in MATLAB environment

I have an issue with calling Python code in MATLAB. My Python code predicts the battery state of charge using an LSTM-with-attention ANN, based on inputs sent from MATLAB; the prediction is then sent back to MATLAB. I already have previously trained weights and biases saved in an h5 file, which is loaded and used in the Python code. Below is the Python code:
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras import optimizers
from tensorflow.keras import backend as K
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
from tensorflow.keras.layers import LSTM, Dense, Dropout, InputLayer
from tensorflow.keras.models import load_model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import h5py
#to create sequential data
def create_inout_sequences(input_data, tw):
    inout_seq = []
    L = len(input_data)
    for i in range(L-tw):
        train_seq = input_data[i:i+tw]
        #train_out = output_data[i:i+tw]
        inout_seq.append(train_seq)
    return inout_seq

def search(inputs):
    class attention(Layer):
        def __init__(self, return_sequences=True, **kwargs):
            self.return_sequences = return_sequences
            super(attention, self).__init__()

        def build(self, input_shape):
            self.W = self.add_weight(name="att_weight", shape=(input_shape[-1], 1),
                                     initializer="normal")
            self.b = self.add_weight(name="att_bias", shape=(input_shape[1], 1),
                                     initializer="zeros")
            super(attention, self).build(input_shape)

        def call(self, x):
            e = (K.dot(x, self.W) + self.b)
            a = K.softmax(e, axis=1)
            output = x * a
            if self.return_sequences:
                return output
            return K.sum(output, axis=1)

        def get_config(self):
            # For serialization with 'custom_objects'
            config = super().get_config()
            config['return_sequences'] = self.return_sequences
            return config

    #convert test data to sequential form
    inputs = np.array(inputs)
    inputs = np.tile(inputs, (36, 1))
    inputs_new = create_inout_sequences(inputs, 35)
    inputs_new = np.array(inputs_new)

    model1 = Sequential()
    model1.add(InputLayer(input_shape=(35, 5)))
    model1.add((LSTM(22, return_sequences=True)))
    model1.add(attention(return_sequences=False))
    model1.add(Dense(104, activation="relu"))
    model1.add(Dropout(0.2))
    model1.add(Dense(1, activation="sigmoid"))

    lr_schedule = keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.01,
        decay_steps=10000,
        decay_rate=0.99)
    model1.compile(optimizer=tf.keras.optimizers.Adam(epsilon=1e-08, learning_rate=lr_schedule), loss='mse')

    #call previously trained weights
    model1.load_weights('SOC_weights.h5')
    x = float(model1.predict(inputs_new, batch_size=100, verbose=0))
    return x  # send prediction to Matlab
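For reference, this is how the function can be exercised from plain Python before calling it from MATLAB (the input values here are hypothetical, just for a smoke test):
# Standalone check of testingSOC.py from Python (hypothetical inputs).
import testingSOC

test_inputs = [0.5556, 0.4351, 0.6831, 0.5000, 0.2412]
print(testingSOC.search(test_inputs))  # should print a single float prediction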
Note: I am using Python 3.6, tensorflow version: 2.5, keras version: 2.4.3, h5py version: 3.1.0, cython version: 0.28
I am able to run this code without any error in Python, but I have issues when it is used in MATLAB 2020a. Below is my MATLAB code:
pyenv('Version','3.6');
py.importlib.import_module('tensorflow');
py.importlib.import_module('testingSOC'); % file containing the Python codes
inputs=[0.555555556,0.435139205,0.68313128,0.499987472,0.241225578];% test inputs
SOC_output=py.testingSOC.search(inputs)
Below is the error received in MATLAB:
Error using training>load_weights (line 2312)
Python Error: ImportError: `load_weights` requires h5py when loading weights from HDF5.
Error in testingSOC>search (line 87)
The error looks like h5py is not being recognized by MATLAB, so I tried reinstalling h5py from the command prompt (I am using Windows 10):
pip uninstall h5py
pip install h5py
but nothing changed...
I have also tried tensorflow version 2.2, keras version 2.4.3, h5py version 2.10, and cython version 0.29, but I still get the same error.
I would really appreciate any insight into solving this issue, and whether there are any fundamental parts I have missed. I would be glad to share more details if required.
Thanks!
Thanks to @TimRoberts for pointing out that including py.importlib.import_module('h5py') helped me resolve this issue. Below is my solution, for those who would like to refer to it:
When I included py.importlib.import_module('h5py') in my MATLAB code, I received the following error:
Error using h5>init h5py.h5 (line 1)
Python Error: ImportError: DLL load failed: The specified procedure could not be found.
It looks like the Python environment uses MATLAB's HDF5 library in my case, which does not have the same features as Python's HDF5 library... I found that there is an option to run Python code as a separate process, which seems to be working for me (as seen in this link):
https://www.mathworks.com/help/matlab/matlab_external/out-of-process-execution-of-python-functionality.html?searchHighlight=out%20of%20process%20python&s_tid=srchtitle

Are we able to prune a pre-trained model? Example: MobileNetV2

I'm trying to prune a pre-trained model (MobileNetV2) and I got the error below. I tried searching online but couldn't understand it. I'm running on Google Colab.
These are my imports.
import tensorflow as tf
import tensorflow_model_optimization as tfmot
import tensorflow_datasets as tfds
from tensorflow import keras
import os
import numpy as np
import matplotlib.pyplot as plt
import tempfile
import zipfile
This is my code.
model_1 = keras.Sequential([
    basemodel,  # pre-trained MobileNetV2 base, defined earlier
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1)
])

model_1.compile(optimizer='adam',
                loss=keras.losses.BinaryCrossentropy(from_logits=True),
                metrics=['accuracy'])

model_1.fit(train_batches,
            epochs=5,
            validation_data=valid_batches)

prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude

pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
                                                             final_sparsity=0.80,
                                                             begin_step=0,
                                                             end_step=end_step)
}

model_2 = prune_low_magnitude(model_1, **pruning_params)

model_2.compile(optimizer='adam',
                loss=keras.losses.BinaryCrossentropy(from_logits=True),
                metrics=['accuracy'])
This is the error I get.
---> 12 model_2 = prune_low_magnitude(model, **pruning_params)
ValueError: Please initialize `Prune` with a supported layer. Layers should either be a `PrunableLayer` instance, or should be supported by the PruneRegistry. You passed: <class 'tensorflow.python.keras.engine.training.Model'>
I believe you are following the Pruning in Keras example and jumped into the "Fine-tune pre-trained model with pruning" section without setting your prunable layers. You have to reinstantiate the model and set the layers you wish to make prunable, as in the sketch below. Follow this guide for further information on how to set prunable layers:
https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide.md
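For instance, a minimal sketch of that approach, reusing model_1 and pruning_params from the question and wrapping only the Dense layers via clone_model (an illustration of the guide's pattern, not the only valid choice of layers):
import tensorflow as tf
import tensorflow_model_optimization as tfmot

def apply_pruning_to_dense(layer):
    # Wrap only Dense layers for pruning; return everything else unchanged.
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.sparsity.keras.prune_low_magnitude(layer, **pruning_params)
    return layer

model_for_pruning = tf.keras.models.clone_model(
    model_1,
    clone_function=apply_pruning_to_dense,
)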
I faced the same issue with:
tensorflow version: 2.2.0
Just updating TensorFlow to version 2.3.0 solved the issue; I think TensorFlow added support for this in 2.3.0.
One thing I found is that the experimental preprocessing I had added to my model was throwing this error. I had it at the beginning of my model to help generate more training samples, but the Keras pruning code doesn't like subclassed models like this. Similarly, it doesn't like the experimental preprocessing I use for centering the image. Removing the preprocessing from the model solved the issue for me.
def classificationModel(trainImgs, testImgs):
    L2_lambda = 0.01
    data_augmentation = tf.keras.Sequential([
        layers.experimental.preprocessing.RandomFlip("horizontal", input_shape=IM_DIMS),
        layers.experimental.preprocessing.RandomRotation(0.1),
        layers.experimental.preprocessing.RandomZoom(0.1),
    ])

    model = tf.keras.Sequential()
    model.add(data_augmentation)
    model.add(layers.experimental.preprocessing.Rescaling(1./255, input_shape=IM_DIMS))
    ...
Saving the model as below and reloading it worked for me.
_, keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
print('Saved baseline model to:', keras_file)
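For completeness, the reload half of that round trip might look like this (assuming the same tfmot import and pruning parameters as in the question):
# Reload the saved baseline, then wrap it for pruning (sketch).
model = tf.keras.models.load_model(keras_file)
model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)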
I had the same problem today, with the same error. If you don't want a layer to be pruned or don't care about it, you can use this code to prune only the prunable layers in a model:
from tensorflow_model_optimization.python.core.sparsity.keras import prunable_layer
from tensorflow_model_optimization.python.core.sparsity.keras import prune_registry

def apply_pruning_to_prunable_layers(layer):
    if (isinstance(layer, prunable_layer.PrunableLayer)
            or hasattr(layer, 'get_prunable_weights')
            or prune_registry.PruneRegistry.supports(layer)):
        return tfmot.sparsity.keras.prune_low_magnitude(layer)
    print("Not Prunable: ", layer)
    return layer

model_for_pruning = tf.keras.models.clone_model(
    base_model,
    clone_function=apply_pruning_to_prunable_layers
)

I can't load my trained h5 model with load_model(), how do I fix this error?

I think tensorflow.keras and the independent keras packages are in conflict, and I can't load my model, which I made with transfer learning.
Imports in the CNN ipynb:
!pip install tensorflow-gpu==2.0.0b1
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
Loading the pretrained model:
base_model = keras.applications.xception.Xception(weights="imagenet",
                                                  include_top=False)
avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
output = keras.layers.Dense(n_classes, activation="softmax")(avg)
model = keras.models.Model(inputs=base_model.input, outputs=output)
Saving with:
model.save('Leavesnet Model 2.h5')
Then, in the new ipynb for the already trained model (the imports are the same as in the CNN ipynb):
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
I get the error:
AttributeError Traceback (most recent call last)
<ipython-input-4-77ca5a1f5f24> in <module>()
2 from keras.models import load_model
3
----> 4 model =load_model('Leavesnet Model.h5')
13 frames
/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py in placeholder(shape, ndim, dtype, sparse, name)
539 x = tf.sparse_placeholder(dtype, shape=shape, name=name)
540 else:
--> 541 x = tf.placeholder(dtype, shape=shape, name=name)
542 x._keras_shape = shape
543 x._uses_learning_phase = False
AttributeError: module 'tensorflow' has no attribute 'placeholder'
I think there might be a conflict between tf.keras and the standalone keras; can someone help me out?
Yes, there is a conflict between tf.keras and the keras packages: you trained the model using tf.keras, but then you are loading it with the standalone keras package. That is not supported; you should use only one of these packages.
The specific problem is that you are using TensorFlow 2.0, but the standalone keras package does not support TensorFlow 2.0 yet.
Try to replace
from keras.models import load_model
model =load_model('Leavesnet Model.h5')
with
model = tf.keras.models.load_model(model_path)
It works for me, and I am using:
tensorflow version: 2.0.0
keras version: 2.3.1
You can check the following:
https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model?version=stable
I don't think downgrading your keras or tensorflow is very suitable, because you would need to retrain your model due to the various dependencies. Why not load the weights instead of loading the model?
Here is a piece of code:
import tensorflow as tf
{your code here}

# save the weights
model.save_weights('model_checkpoint')

# initialise the model again (example: MobileNetV2)
encoder = MobileNetV2(input_tensor=inputs, weights="imagenet", include_top=False, alpha=0.35)

# load the weights
encoder.load_weights('model_checkpoint')

and you are good to go.

TensorFlow Hub error when Saving model as H5 or SavedModel

I want to use this TF Hub asset:
https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/3
Versions:
Version: 1.15.0-dev20190726
Eager mode: False
Hub version: 0.5.0
GPU is available
Code
feature_extractor_url = "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/3"
feature_extractor_layer = hub.KerasLayer(module,
                                         input_shape=(HEIGHT, WIDTH, CHANNELS))
I get:
ValueError: Importing a SavedModel with tf.saved_model.load requires a 'tags=' argument if there is more than one MetaGraph. Got 'tags=None', but there are 2 MetaGraphs in the SavedModel with tag sets [[], ['train']]. Pass a 'tags=' argument to load this SavedModel.
I tried:
module = hub.Module("https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/3",
                    tags={"train"})
feature_extractor_layer = hub.KerasLayer(module,
                                         input_shape=(HEIGHT, WIDTH, CHANNELS))
But when I try to save the model I get:
tf.keras.experimental.export_saved_model(model, tf_model_path)
# model.save(h5_model_path) # Same error
NotImplementedError: Can only generate a valid config for `hub.KerasLayer(handle, ...)`that uses a string `handle`.
Got `type(handle)`: <class 'tensorflow_hub.module.Module'>
Tutorial here
It's been a while, but assuming you have migrated to TF2, this can easily be accomplished with the most recent model version as follows:
import tensorflow as tf
import tensorflow_hub as hub

num_classes = 10  # For example

m = tf.keras.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/5",
                   trainable=True),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])
m.build([None, 224, 224, 3])  # Batch input shape.

# train as needed

m.save("/some/output/path")
Please update this question if that doesn't work for you. I believe your issue arose from mixing hub.Module with hub.KerasLayer. The model version you were using was in TF1 Hub format, so within TF1 it is meant to be used exclusively with hub.Module, and not mixed with hub.KerasLayer. Within TF2, hub.KerasLayer can load TF1 Hub format models directly from their URL for composition in larger models, but they cannot be fine-tuned.
Please refer to this compatibility guide for more information
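For reference, a minimal sketch of loading the original TF1 Hub format asset (the /3 version) in TF2 by URL; trainable is left at its default of False because TF1 Hub format models cannot be fine-tuned through hub.KerasLayer (the standard 224x224x3 input shape is an assumption here):
import tensorflow as tf
import tensorflow_hub as hub

# TF1 Hub format model, loaded directly by URL in TF2; inference only.
feature_extractor_layer = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/resnet_v1_50/feature_vector/3",
    input_shape=(224, 224, 3))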
You should use tf.keras.models.save_model(model, 'NeuralNetworkModel').
You will get the saved model in a folder that can be used later in your sequential network.

Tensorflow 2.0 - AttributeError: module 'tensorflow' has no attribute 'Session'

When I execute the command sess = tf.Session() in a TensorFlow 2.0 environment, I get the error message below:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'tensorflow' has no attribute 'Session'
System Information:
OS Platform and Distribution: Windows 10
Python Version: 3.7.1
Tensorflow Version: 2.0.0-alpha0 (installed with pip)
Steps to reproduce:
Installation:
pip install --upgrade pip
pip install tensorflow==2.0.0-alpha0
pip install keras
pip install numpy==1.16.2
Execution:
Execute command: import tensorflow as tf
Execute command: sess = tf.Session()
According to TF 1:1 Symbols Map, in TF 2.0 you should use tf.compat.v1.Session() instead of tf.Session()
https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0
To get TF 1.x like behaviour in TF 2.0 one can run
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
but then one cannot benefit of many improvements made in TF 2.0. For more details please refer to the migration guide
https://www.tensorflow.org/guide/migrate
TF2 runs Eager Execution by default, thus removing the need for Sessions. If you want to run static graphs, the more proper way is to use tf.function() in TF2. While Session can still be accessed via tf.compat.v1.Session() in TF2, I would discourage using it. It may be helpful to demonstrate this difference by comparing the difference in hello worlds:
TF1.x hello world:
import tensorflow as tf
msg = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(msg))
TF2.x hello world:
import tensorflow as tf
msg = tf.constant('Hello, TensorFlow!')
tf.print(msg)
For more info, see Effective TensorFlow 2
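To make the tf.function point concrete, here is a minimal sketch of wrapping a computation as a static graph in TF2 (a toy example, not from the original answer):
import tensorflow as tf

@tf.function  # traces the Python function into a reusable static graph
def multiply(a, b):
    return tf.matmul(a, b)

x = tf.constant([[1.0, 2.0]])
y = tf.constant([[3.0], [4.0]])
print(multiply(x, y))  # tf.Tensor([[11.]], shape=(1, 1), dtype=float32)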
I faced this problem when I first tried Python after installing Windows 10 + Python 3.7 (64-bit) + Anaconda3 + Jupyter Notebook.
I solved it by referring to "https://vispud.blogspot.com/2019/05/tensorflow200a0-attributeerror-module.html"
I agree that Session() has been removed with TF 2.0.
I inserted two lines: one is tf.compat.v1.disable_eager_execution() and the other is sess = tf.compat.v1.Session().
My Hello.py is as follows:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
hello = tf.constant('Hello, TensorFlow!')
sess = tf.compat.v1.Session()
print(sess.run(hello))
For TF2.x, you can do it like this:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()  # needed so sess.run() can evaluate graph tensors
with tf.compat.v1.Session() as sess:
    hello = tf.constant('hello world')
    print(sess.run(hello))
>>> b'hello world'
If this is your code, the correct solution is to rewrite it to not use Session(), since that's no longer necessary in TensorFlow 2
If this is just code you're running, you can downgrade to TensorFlow 1 by running
pip3 install --upgrade --force-reinstall tensorflow-gpu==1.15.0
(or whatever the latest version of TensorFlow 1 is)
TensorFlow 2.x supports Eager Execution by default, hence Session is not supported.
For Tensorflow 2.0 and later, try this.
import tensorflow as tf
tf.compat.v1.disable_eager_execution()

a = tf.constant(5)
b = tf.constant(6)
c = tf.constant(7)
d = tf.multiply(a, b)
e = tf.add(c, d)
f = tf.subtract(a, c)

with tf.compat.v1.Session() as sess:
    outs = sess.run(f)
    print(outs)
import tensorflow as tf
sess = tf.Session()
This code will show an AttributeError on version 2.x.
To use version 1.x code in version 2.x, try this:
import tensorflow.compat.v1 as tf
sess = tf.Session()
Use this:
sess = tf.compat.v1.Session()
If there is an error, use the following:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
I also faced the same problem when I first tried Google Colab after updating Windows 10. I then inserted these two lines:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
As a result, everything went OK.
import tensorflow._api.v2.compat.v1 as tf
tf.disable_v2_behavior()
Using Anaconda + Spyder (Python 3.7)
[code]
import tensorflow as tf
valor1 = tf.constant(2)
valor2 = tf.constant(3)
type(valor1)
print(valor1)
soma=valor1+valor2
type(soma)
print(soma)
sess = tf.compat.v1.Session()
with sess:
    print(sess.run(soma))
[console]
import tensorflow as tf
valor1 = tf.constant(2)
valor2 = tf.constant(3)
type(valor1)
print(valor1)
soma=valor1+valor2
type(soma)
Tensor("Const_8:0", shape=(), dtype=int32)
Out[18]: tensorflow.python.framework.ops.Tensor
print(soma)
Tensor("add_4:0", shape=(), dtype=int32)
sess = tf.compat.v1.Session()
with sess:
    print(sess.run(soma))
5
TF v2.0 supports Eager mode, as opposed to the Graph mode of v1.0. Hence, tf.Session() is not supported in v2.0. I would suggest you rewrite your code to work in Eager mode.
The same problem occurred for me:
import tensorflow as tf
hello = tf.constant('Hello World ')
sess = tf.compat.v1.Session()  # I got the error on this step when I used tf.Session()
sess.run(hello)
Try replacing it with tf.compat.v1.Session().
If you are doing this alongside imports like:
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
Then I suggest you follow these steps. NOTE: this is for TensorFlow 2 and for CPU processing only.
Step 1: Tell your code to act as if the compiler were TF1 and disable TF2 behavior, using the following code:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
Step 2: While importing libraries, remind your code that it has to act like TF1, every time:
tf.disable_v2_behavior()
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
Conclusion: This should work; let me know if something goes wrong. Also, if you are using a GPU, remember to add backend code for Keras. Note that TF2 does not support Session; there is a separate explanation for that on TensorFlow's site:
TensorFlow Page for using Sessions in TF2
Other major TF2 changes are covered in the following link; it is long, but please go through it, using Ctrl+F for assistance:
Effective TensorFlow 2 Page Link
It is not as easy as you might think: running TF 1.x code in a TF 2.x environment, I found some errors and needed to review how some variables were used when I fixed problems with neural networks from the Internet. Transforming the code to TF 2.x is the better idea (easier and more adaptive).
TF 2.X
while not done:
    next_obs, reward, done, info = env.step(action)
    env.render()
    img = tf.keras.preprocessing.image.array_to_img(
        img,
        data_format=None,
        scale=True
    )
    img_array = tf.keras.preprocessing.image.img_to_array(img)
    predictions = model_self_1.predict(img_array)  ### Prediction
    ### Training: history_highscores = model_highscores.fit(batched_features, epochs=1, validation_data=(dataset.shuffle(10)))  # epochs=500  # , callbacks=[cp_callback, tb_callback]
TF 1.X
with tf.compat.v1.Session() as sess:
    saver = tf.compat.v1.train.Saver()
    saver.restore(sess, tf.train.latest_checkpoint(savedir + '\\invader_001'))
    train_loss, _ = sess.run([loss, training_op], feed_dict={X: o_obs, y: y_batch, X_action: o_act})
    for layer in mainQ_outputs:
        model.add(layer)
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(6, activation=tf.nn.softmax))
    predictions = model.predict(obs)  ### Prediction
    ### Training: summ = sess.run(summaries, feed_dict={X: o_obs, y: y_batch, X_action: o_act})
