Keras (Tensorflow) - name array_ops not defined - python

I'm having an issue with Keras/Tensorflow deserializing a model. Basically this is an implementation of a convolutional neural network on text, which requires a dimension to be added at an early stage. The error message is this:
File "/usr/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/backend.py", line 2231, in expand_dims
NameError: name 'array_ops' is not defined
The code causing this error message:
import numpy as np
from docopt import docopt
import tensorflow as tf
from vdcnn import utils

if __name__ == '__main__':
    arguments = docopt(__doc__, version='1.0')
    model = tf.keras.models.load_model(arguments["--checkpoint"])
    print(type(model))
    proc = utils.Preprocessor(padding_size=256)
    data, labels, test_data, test_labels = proc.process_document(arguments["--data"])
    for i in range(len(test_data)):
        test_vec = test_data[i]
        prediction = model.predict(x=test_vec[np.newaxis])
        predlabel = utils.labels_in_order[np.argmax(prediction)]
        truthlabel = utils.labels_in_order[np.argmax(test_labels[i])]
        print("Truth: {} \t Predicted: {}".format(truthlabel, predlabel))
The code that calls "expand_dims" uses a Keras Lambda wrapper around the TensorFlow function:
...
inputs = tf.keras.Input(shape=(self.sequence_max_length,), dtype='int32', name='inputs')
embedding = tf.keras.layers.Embedding(self.num_quantized_chars, self.embedding_size, input_length=self.sequence_max_length)(inputs)
embedding = tf.keras.layers.Lambda(tf.expand_dims, arguments={'axis' : -1, 'name' : 'embedding_expanded'})(embedding)
conv0 = tf.keras.layers.Conv2D(filters=64, kernel_size=3, strides=[1, self.embedding_size], padding='same', kernel_initializer='he_normal')(embedding)
conv0 = tf.keras.layers.Activation('relu')(conv0)
...
And, just for kicks, the line it's referencing in the tensorflow libs:
from tensorflow.python.ops import array_ops

[two thousand lines of crap]

def expand_dims(x, axis=-1):
  """Adds a 1-sized dimension at index "axis".

  Arguments:
      x: A tensor or variable.
      axis: Position where to add a new axis.

  Returns:
      A tensor with expanded dimensions.
  """
  return array_ops.expand_dims(x, axis)
I'm using Python 3.6 and TensorFlow 1.5, and this error occurs on both OS X 10.11.6 and RHEL 7. I've tried various permutations of tf.keras, tf.keras.backend, and plain keras without tf; all of them end up calling essentially the same code, although sometimes it complains about "gen_array_ops" instead of "array_ops", with the same underlying problem.
Anyone have any thoughts?

The issue was this: https://github.com/keras-team/keras/issues/8123#issuecomment-354857044
On top of that, it required reinstalling everything on all machines and using keras directly instead of tf.keras to get the proper error message, apparently because of how Keras serializes objects and how Python tracebacks work.
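For anyone hitting the same thing, one common workaround (a sketch only, not the fix from the linked issue; expand_dims_last and build_model are hypothetical names, not from the code above) is to avoid handing a raw TF op to the Lambda layer and instead use a named, module-level function that the deserializer can resolve:

import tensorflow as tf

def expand_dims_last(x):
    # Named, importable function instead of passing tf.expand_dims directly to Lambda.
    return tf.expand_dims(x, axis=-1)

# In the model definition:
# embedding = tf.keras.layers.Lambda(expand_dims_last, name='embedding_expanded')(embedding)

# At load time, point the deserializer at the function:
model = tf.keras.models.load_model(arguments["--checkpoint"],
                                   custom_objects={'expand_dims_last': expand_dims_last})

# Or, more robustly, rebuild the architecture in code and load only the weights:
# model = build_model()
# model.load_weights(arguments["--checkpoint"])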

Related

Eager execution inside lambda layer in Tensorflow

I have TensorFlow 2.9.1 installed on my laptop, and according to the documentation eager execution should be enabled by default. I have a problem while trying to convert a Tensor object to a NumPy array inside a model; I keep getting 'Tensor' object has no attribute 'numpy'.
I wanted to have some Lambda layers inside my model and do some operations using NumPy, but eager execution seems to be disabled inside the model. I tried to run tf.executing_eagerly() inside the model and it returned False. On the other hand, when I tried to run tf.executing_eagerly() outside the model, I got True.
Could someone clear my confusion here?
import keras
import tensorflow as tf
from keras import layers, models
import numpy as np
import matplotlib.pyplot as plt

tf.config.run_functions_eagerly(True)

def do_something(input_tensor):
    a = add_one(input_tensor.numpy())
    b = minus_one(a)
    c = tf.convert_to_tensor(b, dtype=tf.float32)
    return c

def add_one(input):
    return input + 1.0

def minus_one(input):
    return input - 1.0

encoding_dim = 32
input_img = layers.Input(shape=(784,))
encoded = layers.Dense(encoding_dim, activation='relu')(input_img)
simulation_layer = layers.Lambda(do_something, name="channel_simulation")(encoded)
decoded = layers.Dense(784, activation='sigmoid')(encoded)

autoencoder = models.Model(input_img, decoded)
encoder = models.Model(input_img, encoded)

encoded_input = layers.Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = models.Model(encoded_input, decoder_layer(encoded_input))

autoencoder.compile(optimizer='adam', loss='binary_crossentropy', run_eagerly=True)
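For what it's worth, a minimal sketch of the check described above (my own reproduction, not code from the question): the Lambda's function is called with a symbolic tensor while the functional graph is being built, so tf.executing_eagerly() reports False inside it even though it reports True at the top level, which is why .numpy() is unavailable there.

import tensorflow as tf
from tensorflow.keras import layers, models

print(tf.executing_eagerly())            # True at the top level in TF 2.x

def check_eager(x):
    # Called with a symbolic tensor during functional-model construction.
    print("inside Lambda:", tf.executing_eagerly())
    return x

inp = layers.Input(shape=(4,))
out = layers.Lambda(check_eager)(inp)    # prints "inside Lambda: False" while tracing
model = models.Model(inp, out)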

ONNX model checker fails while ONNX runtime works fine when `tf.function` is used to decorate member function with loop

When a TensorFlow model contains a tf.function-decorated function with a for loop in it, the tf->onnx conversion yields these warnings:
WARNING:tensorflow:From /Users/amit/Programs/lammps/kim/kliff/venv/lib/python3.7/site-packages/tf2onnx/tf_loader.py:706: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
Cannot infer shape for model/ex_layer/PartitionedCall/while: model/ex_layer/PartitionedCall/while:3
Cannot infer shape for model/ex_layer/PartitionedCall/Identity: model/ex_layer/PartitionedCall/Identity:0
Cannot infer shape for Func/model/ex_layer/PartitionedCall/output/_3: Func/model/ex_layer/PartitionedCall/output/_3:0
Cannot infer shape for Identity: Identity:0
missing output shape for while/Identity_3:0
missing output shape for while/Identity_3:0
missing output shape for while/Identity_3:0
missing output shape for while/Identity_3:0
...
The obtained model runs fine through onnxruntime, but the model checker gives the following error:
Traceback (most recent call last):
File "failed_example.py", line 85, in <module>
onnx.checker.check_model(onnx.load("tmp.onnx"))
File "venv/lib/python3.7/site-packages/onnx/checker.py", line 106, in check_model
C.check_model(protobuf_string)
onnx.onnx_cpp2py_export.checker.ValidationError: Field 'shape' of type is required but missing.
Netron does not show any appreciable difference between the model with the decorated function and the one without it. I guess the error comes from the fact that the for loop is converted to a separate while-loop graph whose input shape is not defined, yet it works perfectly without the tf.function decorator. I am putting minimal replication code below.
I think it is related to following issues:
https://github.com/onnx/onnx/issues/2932
https://github.com/onnx/onnx/issues/2492
https://github.com/onnx/onnx/pull/2937
Code to replicate:
import tensorflow as tf
import numpy as np
import sys
import onnx
import onnxruntime
import tf2onnx

# =============================================================================
# Layer and its helper functions

# COMMENT IT OUT TO PASS ONNX CHECK
@tf.function(
    input_signature=[
        tf.TensorSpec(shape=[None, None], dtype=tf.int32),
        tf.TensorSpec(shape=[None, None], dtype=tf.float32),
        tf.TensorSpec(shape=None, dtype=tf.float32),
    ])
def extra_function(
        list1,
        list2,
        accum_var
        ):
    some_num = 4
    num_iter = tf.size(list1) // some_num
    for i in range(num_iter):
        xyz_i = list2[0, i * 3:(i + 1) * 3]
        accum_var += tf.reduce_sum(xyz_i)
    return accum_var

class ExLayer(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()

    # Doesn't tf.function also create graphs out of called functions?
    # However it does not seem to do that if the `call` function is decorated.
    # @tf.function(
    #     input_signature=[
    #         tf.TensorSpec(shape=[None, None], dtype=tf.float32),
    #         tf.TensorSpec(shape=[None, None], dtype=tf.int32),
    #     ])
    def call(self, list2, list1):
        accum_var = tf.constant(0.0)
        accum_var = extra_function(list1, list2, accum_var)
        return accum_var
# =============================================================================

# =============================================================================
# Example implementation
layer1 = tf.keras.layers.Input(shape=(1,))
layer2 = tf.keras.layers.Input(shape=(1,), dtype=tf.int32)
EL = ExLayer()(layer1, layer2)
model = tf.keras.models.Model(inputs=[layer1, layer2], outputs=EL)

# Define input data
list2_tf = tf.constant([[0., 0., 0., 1., 1., 1., 2., 2., 2., 3., 3., 3.]], dtype=tf.float32)
list1_tf = tf.constant([[0, 1, 2, -1, 1, 0, 2, -1, 2, 0, 1, -1]], dtype=tf.int32)
list2_np = np.array([[0., 0., 0., 1., 1., 1., 2., 2., 2., 3., 3., 3.]], dtype=np.float32)
list1_np = np.array([[0, 1, 2, -1, 1, 0, 2, -1, 2, 0, 1, -1]], dtype=np.int32)

# Save to onnx
model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model,
    input_signature=[
        tf.TensorSpec(shape=[None, None], dtype=tf.float32, name="list2"),
        tf.TensorSpec(shape=[None, None], dtype=tf.int32, name="list1")
    ],
    opset=11,
    output_path="tmp.onnx")

# Load onnx runtime session
ort_session = onnxruntime.InferenceSession("tmp.onnx")
inputs = {"list2": list2_np, "list1": list1_np}

print("===================================================")
print("Original model evaluation:")
print(model([list2_tf, list1_tf]))
print("ORT session evaluation")
print(ort_session.run(None, inputs))
print("===================================================")

# Check with model checker
onnx.checker.check_model(onnx.load("tmp.onnx"))
ONNX version: 1.10.2
Python version: 3.7.7
TF version: 2.7.0
Related github issues I submitted:
https://github.com/onnx/onnx/issues/3909
https://github.com/onnx/tensorflow-onnx/issues/1812
The problem is in the way you specified the shape of accum_var.
In the input signature you have tf.TensorSpec(shape=None, dtype=tf.float32). Reading the code, I see that you are passing a scalar tensor. A scalar tensor is a 0-dimensional tensor, so you should use shape=[] instead of shape=None.
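For illustration (a small sketch, not part of the original answer), the difference can be checked directly:

import tensorflow as tf

accum_var = tf.constant(0.0)
print(accum_var.shape)                                                           # () - a 0-dimensional (scalar) tensor
print(tf.TensorSpec(shape=[], dtype=tf.float32).is_compatible_with(accum_var))   # True
# shape=None, by contrast, means "shape unspecified", which is what leaves the
# exported while-loop input without a concrete shape.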
It runs here without warnings after annotating extra_function with
@tf.function(
    input_signature=[
        tf.TensorSpec(shape=[None, None], dtype=tf.int32),
        tf.TensorSpec(shape=[None, None], dtype=tf.float32),
        tf.TensorSpec(shape=[], dtype=tf.float32),
    ])

Keras functional API and TensorFlow Hub

I'm trying to use a Universal Sentence Encoder from TF Hub as a Keras layer in a functional way. I would like to use hub.KerasLayer with the Keras Functional API, but I'm not sure how to achieve that; so far I've only seen examples of hub.KerasLayer with the Sequential API.
import numpy as np
import tensorflow_hub as hub
import tensorflow as tf
from tensorflow.keras import layers
import tf_sentencepiece

use_url = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/1'
english_sentences = ["dog", "Puppies are nice.", "I enjoy taking long walks along the beach with my dog."]
english_sentences = np.array(english_sentences, dtype=object)[:, np.newaxis]

seq = layers.Input(shape=(None,), name='sentence', dtype=tf.string)
module = hub.KerasLayer(hub.Module(use_url))(seq)
model = tf.keras.models.Model(inputs=[seq], outputs=[module])
model.summary()

x = model.predict(english_sentences)
print(x)
The code above runs into this error when passing the input layer to the embedding: TypeError: Can't convert 'inputs': Shape TensorShape([Dimension(None), Dimension(None)]) is incompatible with TensorShape([Dimension(None)])
Is it possible to use hub.KerasLayer with the Keras functional API in TensorFlow 1.x? If it can be done, how?
Try This
sentence_encoding_layer = hub.KerasLayer("https://tfhub.dev/google/universal-sentence-encoder/4",
                                         trainable=False,
                                         input_shape=[],
                                         dtype=tf.string,
                                         name='U.S.E')

inputs = tf.keras.layers.Input(shape=(), dtype='string', name='input_layer')
x = sentence_encoding_layer(inputs)
x = tf.keras.layers.Dense(64, activation='relu')(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid', name='output_layer')(x)

model = tf.keras.Model(inputs, outputs, name='Transfer_learning_USE')
model.summary()
model.predict([sentence])
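For example (a usage sketch; sample_sentences is just an illustrative list, and the untrained model returns arbitrary sigmoid scores):

sample_sentences = ["dog", "Puppies are nice."]
scores = model.predict(sample_sentences)
print(scores.shape)   # (2, 1): one sigmoid output per input sentence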
If you use v3 of the same universal sentence encoder with tf 1.15, you can do this by changing the lines
import tf_sentencepiece
use_url = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/1'
module = hub.KerasLayer(hub.Module(use_url))(seq)
to
import tensorflow_text
use_url = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3'
module = hub.KerasLayer(use_url)(seq)
The first shape is what you are passing into the model, TensorShape([Dimension(None), Dimension(None)]). The second shape is what it is expecting, TensorShape([Dimension(None)]). So this error is telling you it expects a shape of ()...
Or
If you are expecting to do batches of text, perhaps use a TimeDistributed layer, like so...
module = tf.keras.layers.TimeDistributed(hub.KerasLayer(hub.Module(use_url)))(seq)
However, you may be forced to use a specific size for the text length...

Bool value of Tensor with more than one value is ambiguous in Pytorch

I want to create a model in PyTorch, but I can't compute the loss. It always returns Bool value of Tensor with more than one value is ambiguous.
Actually, when I run the example code, it works:
loss = CrossEntropyLoss()
input = torch.randn(8, 5)
input
target = torch.empty(8,dtype=torch.long).random_(5)
target
output = loss(input, target)
Here is my code,
################################################################################
##
##
import torch
from torch.nn import Conv2d, MaxPool2d, Linear, CrossEntropyLoss, MultiLabelSoftMarginLoss
from torch.nn.functional import relu, conv2d, max_pool2d, linear, softmax
from torch.optim import adadelta
##
##
## Train
Train = {}
Train["Image"] = torch.rand(2000, 3, 76, 76)
Train["Variable"] = torch.rand(2000, 6)
Train["Label"] = torch.empty(2000, dtype=torch.long).random_(2)
##
##
## Valid
Valid = {}
Valid["Image"] = torch.rand(150, 3, 76, 76)
Valid["Variable"] = torch.rand(150, 6)
Valid["Label"] = torch.empty(150, dtype=torch.long).random_(2)
################################################################################
##
##
## Model
ImageTerm = Train["Image"]
VariableTerm = Train["Variable"]
Pip = Conv2d(in_channels=3, out_channels=32, kernel_size=(3,3), stride=1, padding=0)(ImageTerm)
Pip = MaxPool2d(kernel_size=(2,2), stride=None, padding=0)(Pip)
Pip = Conv2d(in_channels=32, out_channels=64, kernel_size=(3,3), stride=1, padding=0)(Pip)
Pip = MaxPool2d(kernel_size=(2,2), stride=None, padding=0)(Pip)
Pip = Pip.view(2000, -1)
Pip = torch.cat([Pip, VariableTerm], 1)
Pip = Linear(in_features=18502, out_features=1000 , bias=True)(Pip)
Pip = Linear(in_features=1000, out_features=2 , bias=True)(Pip)
##
##
## Loss
Loss = CrossEntropyLoss(Pip, Train["Label"])
The error is on Loss = CrossEntropyLoss(Pip, Train["Label"]),
thanks.
In your minimal example, you create an object "loss" of the class "CrossEntropyLoss". This object is able to compute your loss as
loss(input, target)
However, in your actual code, you try to create the object "Loss", while passing Pip and the labels to the "CrossEntropyLoss" class constructor.
Instead, try the following:
loss = CrossEntropyLoss()
loss(Pip, Train["Label"])
Edit (explanation of the error message): The error Message Bool value of Tensor with more than one value is ambiguous appears when you try to cast a tensor into a bool value. This happens most commonly when passing the tensor to an if condition, e.g.
input = torch.randn(8, 5)
if input:
    some_code()
The second argument of the CrossEntropyLoss class constructor expects a boolean. Thus, in the line
Loss = CrossEntropyLoss(Pip, Train["Label"])
the constructor will at some point try to use the passed tensor Train["Label"] as a boolean, which throws the mentioned error message.
You cannot use the class CrossEntropyLoss directly; you should instantiate it before using it.
original code:
loss = CrossEntropyLoss(Pip, Train["Label"])
should be replaced by:
loss = CrossEntropyLoss()
loss(Pip, Train["Label"])
First instantiate the loss:
L = CrossEntropyLoss()
Then compute the loss:
L(y_pred, y_true)
This will fix the error.
If you landed on this page because pyplot is not displaying your tensor image properly, use plt.imshow() instead of plt.show().
for example, instead of
plt.show(images[0].permute(1,2,0))
use
plt.imshow(images[0].permute(1,2,0))

How to show TensorBoard's CPU/memory usage (RunMetadata) for Keras

I want to view the CPU/memory usage in TensorBoard with Keras.
For this purpose, I need to call the add_run_metadata method.
But I cannot find a way to pass the add_run_metadata call into Keras's TensorBoard callback.
Is there a good way to get CPU/memory usage reporting for Keras?
Reference
See the "Runtime Statistics" section of the TensorFlow graph visualization guide:
https://www.tensorflow.org/programmers_guide/graph_viz
add_run_metadata is defined in the following location (in TensorFlow):
https://github.com/tensorflow/tensorflow/blob/v1.5.0/tensorflow/python/summary/writer/writer.py#L248
TensorBoard callback in Keras is defined here
https://github.com/keras-team/keras/blob/2.1.3/keras/callbacks.py#L587
EDIT: I encountered the same problem. I'm editing to share how I attempted to approach this.
I changed the Keras source for callbacks.py, replacing this line in on_epoch_end() with:
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
result = self.sess.run([self.merged], feed_dict=feed_dict, options=run_options, run_metadata=run_metadata)
self.writer.add_run_metadata(run_metadata, 'epoch%d_step%d' % (epoch, i))
However I end up with the following error:
...\tensorflow\stream_executor\dso_loader.cc:141] Couldn't open CUDA library cupti64_90.dll
...\tensorflow/stream_executor/lib/statusor.h:212] Non-OK-status: status_ status: Failed precondition: could not dlopen DSO: cupti64_90.dll; dlerror: cupti64_90.dll not found
Which is puzzling to me as it seems to be related to the proper installation of cuda and not related in any obvious (to me) way to the change I made.
I'm using keras version 2.1.6 and tensorflow version 1.6.0
The solution is to run the Keras model in a TF session, and is based on the blog post keras-as-a-simplified-interface-to-tensorflow#calling-keras-layers-on-tensorflow-tensors. Below is a detailed, minimal working example.
First of all, dummy data generation:
data.py
import numpy as np

def load_data(n=1000):
    x = np.random.rand(n, 100)
    y = np.sum(x, axis=1, keepdims=True)
    return x, y
The core idea is to run the model in a TF session, so the main code is pure TF and only the model itself is defined with Keras. For this to work (following the above-mentioned tutorial):
The model needs to be built on top of a tf.placeholder, instead of keras.layers.Input.
It must remain a tensor, and not be compiled into a keras.models.Model.
model.py
from keras.layers import Dense

def load_network(input_tensor):
    x = Dense(100, activation='relu')(input_tensor)
    x = Dense(100, activation='relu')(x)
    x = Dense(1, activation='sigmoid')(x)
    return x
And the TF session that runs the keras model (a clean, but full, version of the TensorBoard tutorial):
run_runtime_stats.py
import tensorflow as tf
sess = tf.Session()

from keras import backend as K
from keras.objectives import mean_squared_error
K.set_session(sess)

from model import load_network
from data import load_data

# load your keras model as a tf.Tensor
input = tf.placeholder(tf.float32, shape=(None, 100))  # is passed as input to our keras layers
labels = tf.placeholder(tf.float32, shape=(None, 1))
net = load_network(input)  # type(net) == tf.Tensor

loss = tf.reduce_mean(mean_squared_error(labels, net))
opt = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

writer = tf.summary.FileWriter(r'./logs', sess.graph)

sess.run(tf.global_variables_initializer())
with sess.as_default():
    x, y = load_data(64)

    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()

    sess.run([opt],
             feed_dict={input: x, labels: y},
             options=run_options,
             run_metadata=run_metadata)
    writer.add_run_metadata(run_metadata, 'runtime-statistics')

writer.close()
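Once this has run, the recorded metadata can be inspected by starting TensorBoard on the log directory (tensorboard --logdir ./logs): in the Graph tab, select the 'runtime-statistics' tag from the session runs selector and color the nodes by compute time or memory, as described in the "Runtime Statistics" guide linked in the question.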
