I'm trying to create an N x N tensor using tf.while_loop in my custom Keras layer. Here, N (timesteps in the code) is a Keras symbolic tensor (an integer scalar). The code below is the __call__ method of my custom Keras layer in a Functional model.
import tensorflow as tf
from keras import backend as K
# timesteps = tf.constant(7) ## This makes this code work!!
timesteps = K.shape(inputs)[1] ## Or equivalently provided by timesteps = keras.layers.Input(shape= (), batch_size= 1, name= "timesteps")
# timesteps = tf.convert_to_tensor(timesteps) ## Does not work.
idx_outer = tf.constant(0)
timesteps_mixed_outer = tf.reshape(tf.Variable([]), (0, timesteps))
# timesteps_mixed_outer = Lambda(lambda timesteps : tf.reshape(tf.Variable([]), (0, timesteps)))(timesteps) ## Does not work
def body_inner(idx_inner, idx_outer, timesteps_mixed_inner):
    timesteps_mixed_inner = tf.concat([timesteps_mixed_inner, [tf.cond(idx_inner == idx_outer, lambda : True, lambda : False)]], axis = 0)
    return idx_inner + 1, idx_outer, timesteps_mixed_inner

def body_outer(idx_outer, timesteps_mixed_outer):
    timesteps_mixed_inner = tf.Variable([])
    idx_inner = tf.constant(0)
    idx_inner, idx_outer, timesteps_mixed_inner = tf.while_loop(lambda idx_inner, idx_outer, timesteps_mixed_inner: K.less(idx_inner, timesteps), body_inner, [idx_inner, idx_outer, timesteps_mixed_inner], shape_invariants= [idx_inner.get_shape(), idx_outer.get_shape(), tf.TensorShape([None])])
    timesteps_mixed_outer = tf.concat([timesteps_mixed_outer, [timesteps_mixed_inner]], axis = 0)
    return idx_outer + 1, timesteps_mixed_outer
idx_outer, timesteps_mixed_outer = tf.while_loop(lambda idx_outer, timesteps_mixed_outer: K.less(idx_outer, timesteps), body_outer, [idx_outer, timesteps_mixed_outer], shape_invariants= [idx_outer.get_shape(), tf.TensorShape([None, None])]) ## Here raises error
The last line of the above code raises the following error:
Exception has occurred: TypeError
Could not build a TypeSpec for <KerasTensor: shape=(0, None) dtype=float32 (created by layer 'tf.reshape')> with type KerasTensor
What I have tried:
I suspected the problem came from the Keras symbolic tensor input timesteps, so I changed it to timesteps = tf.constant(7) for experimental purposes. Then the code works and timesteps_mixed_outer has the desired values:
<tf.Tensor: shape=(7, 7), dtype=float32, numpy=
array([[1., 0., 0., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0.],
       [0., 0., 0., 0., 1., 0., 0.],
       [0., 0., 0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 0., 0., 1.]], dtype=float32)>
I suspected the problem comes from using the Keras symbolic tensor timesteps in the tf.reshape function, so I initialized timesteps_mixed_outer = tf.reshape(tf.Variable([]), (0, 7)) and left timesteps = K.shape(inputs)[1]. Then a new error occurs:
Exception has occurred: TypeError
Keras symbolic inputs/outputs do not implement `__len__`. You may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model. This error will also get raised if you try asserting a symbolic input/output directly.
I have also tried to wrap tf.reshape following the two solutions suggested in "TypeError: Could not build a TypeSpec for <KerasTensor> when using tf.map_fn and keras functional model", but both raise the same error.
My environment is as follows:
MacOS 12.0.1
Python 3.7.3
keras-preprocessing [installed: 1.1.2]
keras.__version__ == 2.4.3
tensorflow [installed: 2.4.1]
tensorflow-estimator [installed: 2.4.0]
EDIT
This error is raised when I build the Keras model, before feeding actual NumPy values. timesteps = K.shape(inputs)[1] varies across inputs, so it is set to None, like a batch dimension.
timesteps = K.shape(inputs)[1]
==
<KerasTensor: shape=() dtype=int32 inferred_value=[None] (created by layer 'tf.__operators__.getitem_6')>
==
dtype:tf.int32
is_tensor_like:True
name:'tf.__operators__.getitem_6/strided_slice:0'
op:'Traceback (most recent call last):\n File "/Users/imgspoints/.vscode/extensions/ms-python.python-2022.2.1924087327/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py", line 193, in _get_py_dictionary\n attr = getattr(var, name)\n File "/Users/imgspoints/.local/share/virtualenvs/experiments-m6CLaaa4/lib/python3.7/site-packages/tensorflow/python/keras/engine/keras_tensor.py", line 251, in op\n raise TypeError(\'Keras symbolic inputs/outputs do not \'\nTypeError: Keras symbolic inputs/outputs do not implement `op`. You may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.\n'
shape:TensorShape([])
type_spec:TensorSpec(shape=(), dtype=tf.int32, name=None)
_inferred_value:[None]
_keras_history:KerasHistory(layer=<tensorflow.python.keras.layers.core.SlicingOpLambda object at 0x1774fac88>, node_index=0, tensor_index=0)
_name:'tf.__operators__.getitem_6/strided_slice:0'
_overload_all_operators:<bound method KerasTensor._overload_all_operators of <class 'tensorflow.python.keras.engine.keras_tensor.KerasTensor'>>
_overload_operator:<bound method KerasTensor._overload_operator of <class 'tensorflow.python.keras.engine.keras_tensor.KerasTensor'>>
_to_placeholder:<bound method KerasTensor._to_placeholder of <KerasTensor: shape=() dtype=int32 inferred_value=[None] (created by layer 'tf.__operators__.getitem_6')>>
_type_spec:TensorSpec(shape=(), dtype=tf.int32, name=None)
When the error is raised, K.less(idx_outer, timesteps) can be evaluated successfully:
K.less(idx_outer, timesteps) == <KerasTensor: shape=() dtype=bool (created by layer 'tf.math.less')>
So I believe the error comes from tf.concat, and I'm now trying to replace tf.concat with another operation (e.g. a Keras Concatenate layer).
Simpler Example
The following code works when end = tf.constant(7) but raises the
Keras symbolic inputs/outputs do not implement `__len__`. You may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model. This error will also get raised if you try asserting a symbolic input/output directly.
error at _, final_output = tf.while_loop(cond, body, loop_vars=[step, output]) when end = Input(shape= (), batch_size= 1, name= "timesteps", dtype= tf.int32).
import tensorflow as tf
from keras.layers import Input
# end = Input(shape= (), batch_size= 1, name= "timesteps", dtype= tf.int32) ## doesn't work :(
end = tf.constant(7) ## works :)
array = tf.Variable([1., 1., 1., 1., 1., 1., 1.])
step = tf.constant(0)
output = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
def cond(step, output):
    return step < end

def body(step, output):
    output = output.write(step, tf.gather(array, step))
    return step + 1, output
_, final_output = tf.while_loop(cond, body, loop_vars=[step, output])
Try wrapping your logic in a custom layer and using tf operations:
import tensorflow as tf
class CustomLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomLayer, self).__init__()

    def call(self, inputs):
        input_shape = tf.shape(inputs)
        end = input_shape[-1]
        array = tf.ones((input_shape[-1],))
        step = tf.constant(0)
        output = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)

        def cond(step, output):
            return step < end

        def body(step, output):
            output = output.write(step, tf.gather(array, step))
            return step + 1, output

        _, final_output = tf.while_loop(cond, body, loop_vars=[step, output])
        return tf.reshape(final_output.stack(), input_shape)
inputs = tf.keras.layers.Input(shape= (None, ), batch_size= 1, name= "timesteps", dtype= tf.int32)
cl = CustomLayer()
outputs = cl(inputs)
model = tf.keras.Model(inputs, outputs)
random_data = tf.random.uniform((1, 7), dtype=tf.int32, maxval=50)
print(model(random_data))
tf.Tensor([1. 1. 1. 1. 1. 1. 1.], shape=(7,), dtype=float32)
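This works because inside a layer's call the inputs arrive as regular TF tensors rather than KerasTensors, so tf.shape, tf.while_loop and tf.concat can consume them directly; the KerasTensor wrapper only intercepts ops applied to symbolic Functional-API tensors outside of layers.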
timesteps_mixed_outer = tf.concat([timesteps_mixed_outer, [timesteps_mixed_inner]], axis = 0)
You have to check the shapes of timesteps_mixed_outer and timesteps_mixed_inner, and try changing the axis value.
Or try this:
timesteps_mixed_outer = tf.concat([timesteps_mixed_outer.numpy(), timesteps_mixed_inner.numpy()], axis = 0)
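For reference, tf.concat requires both operands to have the same rank and matching sizes on every axis except the concatenation axis. A minimal sketch of the wrapping trick used above:

import tensorflow as tf

a = tf.zeros((0, 7))              # rank 2, shape (0, 7)
b = tf.ones((7,))                 # rank 1, shape (7,)
ok = tf.concat([a, [b]], axis=0)  # [b] has shape (1, 7), so ranks match; result is (1, 7)
# tf.concat([a, b], axis=0)       # would fail: ranks differ (2 vs 1)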
I am observing a dimension mismatch in Keras to ONNX conversion.
I saved my model as a .h5 file.
It can successfully be saved and loaded again.
However, when converting it to an ONNX model, I get different output dimensions.
I think I'm experiencing this due to the 2D output, because one of my output dimensions has simply disappeared.
Loading Keras model...
>>> keras_model = load_model('model_checkpoints/DGCNN_modelbest_with_noise.h5')
>>> keras_output = keras_model.output
>>> keras_output
<tf.Tensor 'dense_2/truediv_5:0' shape=(None, 432, 5) dtype=float32>
Converting Keras model to ONNX...
>>> input_keras_model = 'model_checkpoints/DGCNN_modelbest_with_noise.h5'
>>> output_onnx_model = 'model_checkpoints/DGCNN_modelbest_with_noise.onnx'
>>> keras_model = load_model(input_keras_model)
>>> onnx_model = onnxmltools.convert_keras(keras_model)
>>> onnxmltools.utils.save_model(onnx_model, output_onnx_model)
Loading ONNX model...
>>> model = onnx.load("model_checkpoints/DGCNN_modelbest_with_noise.onnx")
>>> for _output in model.graph.output:
...     m_dict = MessageToDict(_output)
...     dim_info = m_dict.get("type").get("tensorType").get("shape").get("dim")
...     output_shape = [d.get("dimValue") for d in dim_info]
...     print(m_dict["name"])
...     print(output_shape)
...
dense_2
[None, None, '5']
Any suggestions?
What am I doing wrong?
I don't see many examples for multidimensional output layers. Is this the reason?
Thank you for your time.
I have no problem following the example; I tried loading and running it and got the same results, but I am using the pb format.
The pb (SavedModel) format includes the model structure and comes from model.save( ... ).
### ( 1 ) : Save and convert
import tensorflow as tf
import tf2onnx
import onnx
model = tf.keras.Sequential()
#model.add(tf.keras.layers.InputLayer(input_shape=(1, 100, 100, 3), name='DekDee Input'))
model.add(tf.keras.layers.Dense(4, activation="relu", name='output1'))
# The input name and dtype are significant
input_signature = [tf.TensorSpec([3, 3], tf.float32, name='input1')]
#Use from_function for tf functions
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=13)
onnx.save(onnx_model, "F:\\models\\onnx\\model.onnx")
OR
model.save("F:\\models\\onnx\\modelpb")
Command: python -m tf2onnx.convert --saved-model "F:\models\onnx\modelpb" --output "F:\models\onnx\model_2.onnx" --opset 13
### ( 2 ) : Load and run
import onnxruntime as ort
import numpy as np
import tensorflow as tf
# Change shapes and types to match the model
input1 = np.zeros((3, 3), np.float32)
sess = ort.InferenceSession("F:\\models\\onnx\\model.onnx", providers=["CUDAExecutionProvider"])
results_ort = sess.run(["output1"], {"input1": input1})
F:\temp\Python\tf_onnx>python onnx_verification_test_2.py
[array([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]], dtype=float32)]
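As a side note on the None output dimension in the question: in an ONNX graph a dynamic dimension is stored as dim_param (a symbolic name) rather than dim_value, so d.get("dimValue") returns None even though the dimension still exists. A small tweak to the question's loop (a sketch reusing the same MessageToDict dictionaries) makes this visible:

# dynamic dims show up as their symbolic names instead of None
output_shape = [d.get("dimValue") or d.get("dimParam") for d in dim_info]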
Problem description
I have inputs x that are indicator variables, and outputs y, where each row is a random one-hot vector that depends on the values of x (data sample shown below).
I want to train a model that essentially learns the probabilistic relationship between x and y in the form of per-column weights. The model must "choose" one, and only one, indicator to output. My current approach is to sample a categorical random variable and produce a one-hot vector as a prediction.
The issue is that I'm getting an error ValueError: An operation has `None` for gradient when I try to train my Keras model.
I find this error odd, because I've trained mixture networks using Keras and Tensorflow, which use tf.contrib.distributions.Categorical, and I did not run into any gradient-related issues.
Code
Experiment
import tensorflow as tf
import tensorflow.contrib.distributions as tfd
import numpy as np
from keras import backend as K
from keras.layers import Layer
from keras.models import Sequential
from keras.utils import to_categorical
def make_xy_prob(rng, size=10000):
    rng = np.random.RandomState(rng) if isinstance(rng, int) else rng
    cols = 3
    weights = np.array([[1, 2, 3]])

    # generate data and drop zeros for now
    x = rng.choice(2, (size, cols))
    is_zeros = x.sum(axis=1) == 0
    x = x[~is_zeros]

    # use weights to create probabilities for determining y
    weighted_x = x * weights
    prob_x = weighted_x / weighted_x.sum(axis=1, keepdims=True)
    y = np.row_stack([to_categorical(rng.choice(cols, p=p), cols) for p in prob_x])

    # add zeros back and shuffle
    zeros = np.zeros(((size - len(x), cols)))
    x = np.row_stack([x, zeros])
    y = np.row_stack([y, zeros])
    shuffle_idx = rng.permutation(size)
    x = x[shuffle_idx]
    y = y[shuffle_idx]
    return x, y

class OneHotGate(Layer):
    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel', shape=(1, input_shape[1]), initializer='ones')

    def call(self, x):
        zero_cond = x < 1
        x_shape = tf.shape(x)

        # weight indicators so that more probability is assigned to more likely columns
        weighted_x = x * self.kernel
        # fill zeros with -inf so that zero probability is assigned to that column
        ninf_fill = tf.fill(x_shape, -np.inf)
        masked_x = tf.where(zero_cond, ninf_fill, weighted_x)
        onehot_gate = tf.squeeze(tfd.OneHotCategorical(logits=masked_x, dtype=x.dtype).sample(1))

        # fill gate with zeros where input was originally zero
        zeros_fill = tf.fill(x_shape, 0.0)
        masked_gate = tf.where(zero_cond, zeros_fill, onehot_gate)
        return masked_gate

def experiment(epochs=10):
    K.clear_session()
    rng = np.random.RandomState(2)
    X, y = make_xy_prob(rng)
    input_shape = (X.shape[1], )

    model = Sequential()
    gate_layer = OneHotGate(input_shape=input_shape)
    model.add(gate_layer)
    model.compile('adam', 'categorical_crossentropy')
    model.fit(X, y, 64, epochs, verbose=1)
Data sample
>>> x
array([[1., 1., 1.],
       [0., 1., 0.],
       [1., 0., 1.],
       ...,
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 0.]])
>>> y
array([[0., 0., 1.],
       [0., 1., 0.],
       [1., 0., 0.],
       ...,
       [0., 0., 1.],
       [1., 0., 0.],
       [1., 0., 0.]])
Error
ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
The problem lies in the fact that OneHotCategorical performs discontinuous sampling, which causes gradient computation to fail. To replace this discontinuous sampling with a continuous (relaxed) version, one may try RelaxedOneHotCategorical (which is based on the interesting Gumbel-Softmax technique).
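For example, the sampling line in OneHotGate.call could be replaced with a relaxed sample. A minimal sketch, where the temperature value is an arbitrary starting point that usually needs tuning (lower temperatures give samples closer to one-hot but noisier gradients):

temperature = 0.5  # hypothetical value; tune or anneal it
# differentiable, approximately one-hot sample instead of the hard sample
relaxed_gate = tfd.RelaxedOneHotCategorical(temperature, logits=masked_x).sample()
masked_gate = tf.where(zero_cond, zeros_fill, relaxed_gate)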
I want to feed a batch_size integer as a placeholder in Tensorflow. But it does not act as an integer. Consider the following example:
import tensorflow as tf
max_length = 5
batch_size = 3
batch_size_placeholder = tf.placeholder(dtype=tf.int32)
mask_0 = tf.one_hot(indices=[0]*batch_size_placeholder, depth=max_length, on_value=0., off_value=1.)
mask_1 = tf.one_hot(indices=[0]*batch_size, depth=max_length, on_value=0., off_value=1.)
# new session
with tf.Session() as sess:
    feed = {batch_size_placeholder : 3}
    batch, mask0, mask1 = sess.run([
        batch_size_placeholder, mask_0, mask_1
    ], feed_dict=feed)
When I print the values of batch, mask0 and mask1 I have the following:
print(batch)
>>> array(3, dtype=int32)
print(mask0)
>>> array([[0., 1., 1., 1., 1.]], dtype=float32)
print(mask1)
>>> array([[0., 1., 1., 1., 1.],
           [0., 1., 1., 1., 1.],
           [0., 1., 1., 1., 1.]], dtype=float32)
Indeed, I thought mask0 and mask1 must be the same, but it seems that Tensorflow does not treat batch_size_placeholder as an integer. I believe it is a tensor, but is there any way I can use it as an integer in my computations?
Is there any way I can fix this problem? Just FYI, I used tf.one_hot only as an example; I want to run train/validation during training in my code, where I will need a lot of other computations with different batch_size values in the training and validation steps.
Any help would be appreciated.
In pure Python, [0]*3 is [0, 0, 0]. However, batch_size_placeholder is a placeholder; during graph execution it is a tensor, and [0]*tensor is treated as tensor multiplication. In your case, it yields a 1-D tensor containing a single 0. To correctly use batch_size_placeholder, you should create a tensor whose length matches batch_size_placeholder:
mask_0 = tf.one_hot(tf.zeros(batch_size_placeholder, dtype=tf.int32), depth=max_length, on_value=0., off_value=1.)
It will have the same result as mask_1.
A simple example to show the difference.
batch_size_placeholder = tf.placeholder(dtype=tf.int32)
a = [0]*batch_size_placeholder
b = tf.zeros(batch_size_placeholder, dtype=tf.int32)
with tf.Session() as sess:
    print(sess.run([a, b], feed_dict={batch_size_placeholder : 3}))
    # [array([0], dtype=int32), array([0, 0, 0], dtype=int32)]
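If the batch size is only needed to match the leading dimension of some data tensor, you can also derive it with tf.shape instead of feeding it separately. A sketch, assuming a hypothetical data placeholder x:

x = tf.placeholder(tf.float32, shape=(None, max_length))
dynamic_batch_size = tf.shape(x)[0]  # int32 scalar tensor, known at run time
mask = tf.one_hot(tf.zeros(dynamic_batch_size, dtype=tf.int32),
                  depth=max_length, on_value=0., off_value=1.)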
I am trying to implement an OCR project with Keras, so I am learning from the Keras OCR example. I have used my own training data to train a new model and got the .h5 model file.
Now I want to test a new image to see my model's performance, so I wrote a test.py like this:
from keras.models import Model
import cv2
from keras.preprocessing.image import img_to_array
import numpy as np
from keras.models import load_model
from keras import backend as K
from allNumList import alphabet
def labels_to_text(labels):
    ret = []
    for c in labels:
        if c == len(alphabet):  # CTC Blank
            ret.append("")
        else:
            ret.append(alphabet[c])
    return "".join(ret)

def decode_predict_ctc(out, top_paths = 1):
    results = []
    beam_width = 5
    if beam_width < top_paths:
        beam_width = top_paths
    for i in range(top_paths):
        labels = K.get_value(K.ctc_decode(out, input_length=np.ones(out.shape[0])*out.shape[1],
                                          greedy=False, beam_width=beam_width, top_paths=top_paths)[0][i])[0]
        text = labels_to_text(labels)
        results.append(text)
    return results

def test(modelPath, testPicTest):
    img = cv2.imread(testPicTest)
    img = cv2.resize(img, (128, 64))
    img = img_to_array(img)
    img = np.array(img, dtype='float')/255.0
    img = np.expand_dims(img, axis=0)
    img = img.swapaxes(1, 2)
    model = load_model(modelPath, custom_objects = {'<lambda>': lambda y_true, y_pred: y_pred})
    net_out_value = model.predict(img)
    top_pred_texts = decode_predict_ctc(net_out_value)
    return top_pred_texts
result=test(r'D:\code\testAndExperiment\py\KerasOcr\weights.h5',r'D:\code\testAndExperiment\py\KerasOcr\test\avo.jpg')
print(result)
but I get an error like this:
Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 4 array(s), but instead got the following list of 1 arrays: [array([[[[1., 1., 1.], [1., 1., 1.], [1., 1., 1.], ..., [1., 1., 1.], [1., 1., 1.], [1., 1., 1.]], [[1., 1., 1.], [1., 1., 1.],...
I have referenced some material:
https://stackoverflow.com/a/49537697/10689350
https://www.dlology.com/blog/how-to-train-a-keras-model-to-recognize-variable-length-text/
How to predict the results for OCR using keras image_ocr example?
Some answers show that we should use 4 inputs [input_data, labels, input_length, label_length] in training, but apart from input_data, everything else is information used only for calculating the loss, so in testing maybe input_data alone is enough. So I just used a picture without labels, input_length, and label_length, but I get the error above.
I am confused about whether the model needs 4 inputs or 1 in testing.
It doesn't seem reasonable to require 4 inputs during the testing process. Now that I have model.h5, what should I do next?
Thanks in advance.
My code is Here:https://github.com/hqabcxyxz/KerasOCR/tree/master
Maybe I know why: in the OCR example, we build a Lambda layer to compute the CTC loss, and this layer needs 4 inputs!
The right way to test is to build a model without this Lambda layer for inference, then load the model weights by name and run inference. After we get the inference result, just CTC-decode it!
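A minimal sketch of that idea, assuming the softmax layer before the CTC Lambda is named 'softmax' as in the Keras image_ocr example (adjust the layer name to your model):

from keras.models import Model, load_model

training_model = load_model(r'D:\code\testAndExperiment\py\KerasOcr\weights.h5',
                            custom_objects={'<lambda>': lambda y_true, y_pred: y_pred})
# keep only the image input and cut the graph right before the CTC loss Lambda
inference_model = Model(inputs=training_model.inputs[0],
                        outputs=training_model.get_layer('softmax').output)
net_out_value = inference_model.predict(img)  # now only the image is needed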
I will update my code on GitHub later...