Keras deep variational autoencoder - python

I'm trying to adapt the Keras VAE example into a deeper network by adding one more layer.
Original code: Original VAE code
CHANGES:
batch_size = 200
original_dim = 784
latent_dim = 2
intermediate_dim_deep = 384 # <<<<<<<
intermediate_dim = 256
nb_epoch = 20
#
x = Input(batch_shape=(batch_size, original_dim))
x = Dense(intermediate_dim_deep, activation='relu')(x) # NEW LAYER <<<<<<
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)
#
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.)
    return z_mean + K.exp(z_log_var / 2) * epsilon
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
#
# we instantiate these layers separately so as to reuse them later
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_d = Dense(intermediate_dim_deep, activation='rely') # NEW LAYER <<<<<<
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
d_decoded = decoder_d(h_decoded) # ADDED ONE MORE STEP HERE <<<<<<<
x_decoded_mean = decoder_mean(d_decoded)
#
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss
#
vae = Model(x, x_decoded_mean)
vae.compile(optimizer='rmsprop', loss=vae_loss)
#####
Compiling gives me this error:
/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py:1615: UserWarning: Model inputs must come from a Keras Input layer, they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "model_1" was not an Input tensor, it was generated by layer dense_1.
Note that input tensors are instantiated via `tensor = Input(shape)`.
The tensor that caused the issue was: None
str(x.name))
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-8-c9010948cdee> in <module>()
----> 1 vae = Model(x, x_decoded_mean)
2 vae.compile(optimizer='rmsprop', loss=vae_loss)
/usr/local/lib/python2.7/dist-packages/keras/engine/topology.pyc in __init__(self, input, output, name)
1788 'The following previous layers '
1789 'were accessed without issue: ' +
-> 1790 str(layers_with_complete_input))
1791 for x in node.output_tensors:
1792 computable_tensors.append(x)
Exception: Graph disconnected: cannot obtain value for tensor input_1 at layer "input_1". The following previous layers were accessed without issue: []
I have looked at the other examples in the repo and this seems like a valid way to do it.
Am I missing something?

When adding the new hidden layer you're overwriting the x variable, so you're left without an Input layer. Also, is 'rely' a valid activation option?
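A minimal sketch of that fix, reusing the sampling and vae_loss functions defined above and otherwise keeping the original example unchanged (only the renamed variable and the corrected activation string differ):
from keras.layers import Input, Dense, Lambda
from keras.models import Model
#
x = Input(batch_shape=(batch_size, original_dim))
d = Dense(intermediate_dim_deep, activation='relu')(x)   # new layer no longer overwrites x
h = Dense(intermediate_dim, activation='relu')(d)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
#
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_d = Dense(intermediate_dim_deep, activation='relu')  # 'relu', not 'rely'
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
d_decoded = decoder_d(h_decoded)
x_decoded_mean = decoder_mean(d_decoded)
#
vae = Model(x, x_decoded_mean)   # x is still the Input tensor, so the graph is connected
vae.compile(optimizer='rmsprop', loss=vae_loss)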

Related

ValueError: Graph disconnected: cannot obtain value for tensor in VAE with concat

I am trying to extend an existing VAE network for an autoregressive time series problem by leaking the input into the decoder. For this I've used an LSTM block, and the model works if I keep a standard network without a Concatenate or Add layer. However, accuracy is not great, so I'd like to make it autoregressive in the sense that, when synthesizing the time series, the decoder always takes mu and sigma along with the previous samples, as shown in the image.
Therefore, the following code adds the input through an additional layer to match the dimension.
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras import layers
class Sampling(layers.Layer):
    """Uses (z_mean, z_log_var) to sample z."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch = tf.shape(z_mean)[0]
        dim = tf.shape(z_mean)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim),
                                                 mean=0., stddev=1.)
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon
# Build encoder and decoder
latent_dim = 2
input_shape = (4, 96)
encoder_input = keras.Input(shape=input_shape)
enc = layers.LSTM(64)(encoder_input)
z_mean = layers.Dense(latent_dim, name="z_mean")(enc)
z_log_sigma = layers.Dense(latent_dim, name="z_log_var")(enc)
z = Sampling()([z_mean, z_log_sigma])
encoder = keras.Model(encoder_input, [z_mean, z_log_sigma, z], name="encoder")
encoder.summary()
# Extra layers for leaky input to decoder
leak_input_dec_ip = keras.Input(shape=input_shape)
leak_input_dec = layers.LSTM(2)(leak_input_dec_ip)
# Decoder where we have inputs from leaky input and sampling layer
inp_z = keras.Input(shape=(latent_dim,))
dec = layers.Concatenate()([leak_input_dec, inp_z]) # or Add
dec = layers.RepeatVector(96)(inp_z)
dec = layers.LSTM(64, return_sequences=True)(dec)
out = layers.TimeDistributed(layers.Dense(96))(dec)
decoder = keras.Model([leak_input_dec, inp_z], out)
decoder.summary()
But, it keeps giving the error ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_39:0", shape=(None, 4, 96), dtype=float32) at layer "lstm_29". The following previous layers were accessed without issue: ['repeat_vector_11', 'lstm_30', 'time_distributed_8']
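Reading the error, input_39 is the leak Input tensor feeding lstm_29, yet it is not among the tensors passed to keras.Model, and the Concatenate result dec is immediately overwritten by RepeatVector(96)(inp_z). A minimal sketch of one possible rewiring, assuming the intent is to merge the leak path with z (this is a guess at the intent, not code from the original post):
# Hypothetical rewiring: give Model the actual Input layers and keep the Concatenate output in the graph
leak_input_dec_ip = keras.Input(shape=input_shape)        # (4, 96) leak sequence
leak_input_dec = layers.LSTM(2)(leak_input_dec_ip)
inp_z = keras.Input(shape=(latent_dim,))
dec = layers.Concatenate()([leak_input_dec, inp_z])       # or Add
dec = layers.RepeatVector(96)(dec)                        # repeat the merged vector, not inp_z
dec = layers.LSTM(64, return_sequences=True)(dec)
out = layers.TimeDistributed(layers.Dense(96))(dec)
decoder = keras.Model([leak_input_dec_ip, inp_z], out)    # Input layers, not the LSTM output
decoder.summary()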

Why is my neural network predicting -0 (PYTHON - backpropagation XOR)?

I'm working on developing a neural network from scratch. The issue may be with my ReLU back-propagation. When I train the model it sometimes outputs -0 and sometimes outputs (relatively) good predictions. Can someone tell me if I'm doing my back-propagation incorrectly, or if there's a reason why my ReLU would predict -0?
--
[edit]
Fixed the issue of predicting -0, but now it just predicts 0 for all inputs for the XOR. Can someone look over my backpropagation?
import numpy as np

# Each layer in our neural network
class NeuralLayer:
    def __init__(self, input_neurons, output_neurons):
        self.weights = np.random.randn(input_neurons, output_neurons) * np.sqrt(2. / input_neurons)
        self.bias = np.ones((1, output_neurons)) * 0.5

    # Two different activations, sigmoid by default
    def sigmoid(self, neurons):
        self.act = 1.0 / (1.0 + np.exp(-neurons))
        return self.act

    def sigmoidBackward(self, grad):
        return grad * self.act * (1 - self.act)

    def relu(self, neurons):
        self.act = (neurons > 0)
        return neurons * self.act

    def reluBackward(self, grad):
        return grad * self.act

    # Forward pass for this layer
    def forward(self, input, activation):
        self.input = np.atleast_2d(input)
        if activation == 'sigmoid':
            return self.sigmoid(input @ self.weights + self.bias)
        else:
            return self.relu(input @ self.weights + self.bias)

    # Backward pass for this layer
    def backward(self, grad, activation):
        if activation == 'sigmoid':
            grad = self.sigmoidBackward(np.atleast_2d(grad))
        else:
            grad = self.reluBackward(np.atleast_2d(grad))
        self.grad_weights = np.matmul(self.input.T, grad)
        self.grad_bias = grad.sum()
        return grad @ self.weights.T

    def step(self, step_size):
        self.weights -= step_size * self.grad_weights
        self.bias -= step_size * self.grad_bias

# Our neural net
class NeuralNetwork:
    # Dynamically create all layers
    def __init__(self, input_neurons, hidden_neurons, layer_count, activation, output_neurons=1):
        self.activation = activation
        # Used to ensure input neurons match inputted data
        self.neuron_safety = input_neurons
        assert layer_count >= 2 and output_neurons >= 1
        # Input layer
        self.layers = [NeuralLayer(input_neurons, hidden_neurons)]
        # Hidden layers
        for i in range(layer_count - 2):
            self.layers.append(NeuralLayer(hidden_neurons, hidden_neurons))
        # Output layer
        self.layers.append(NeuralLayer(hidden_neurons, output_neurons))

    # Forward pass for each layer
    def forward(self, inp):
        assert inp.shape[0] == self.neuron_safety
        for layer in self.layers:
            inp = layer.forward(inp, self.activation)
        return inp

    def backward(self, grad):
        for layer in reversed(self.layers):
            grad = layer.backward(grad, self.activation)

    def step(self, step_size=0.01):
        for layer in self.layers:
            layer.step(step_size)

    # Loss function - only 1 output neuron
    def meanSquaredError(self, preds, labels):
        self.labels = labels
        self.preds = preds
        return (self.preds - self.labels) ** 2

    def meanSquaredErrorGrad(self):
        return 2 * (self.preds - self.labels)

# Create a neural network with 2 inputs, 16 hidden neurons in each layer, and 4 layers
net = NeuralNetwork(2, 16, 4, 'relu')
epochs = 5000
# Input data (A,B) for XOR
X = np.array([[0, 0], [1, 1], [1, 0], [0, 1]])
# Expected output data
Y = np.array([[0], [0], [1], [1]])
for i in range(epochs):
    preds = []
    for idx, x in enumerate(X):
        predictions = net.forward(x)
        preds.append(predictions)
        loss = net.meanSquaredError(predictions, Y[idx])
        loss_grad = net.meanSquaredErrorGrad()
        net.backward(loss_grad)
        net.step()
print("Model predicted: {}\nactual values: {} ".format(preds, Y.T))
Output:
Model predicted: [array([[-0.]]), array([[-0.]]), array([[1.]]), array([[-0.]])]
actual values: [[0 0 1 1]]
Sometimes the predictions are perfect, but most of the time at least one prediction will be -0
The bias gradient is incorrect. You are using self.grad_bias = grad.sum(). This will compute the sum of the entire matrix. It needs to be self.grad_bias = grad.sum(axis=0, keepdims=True) to compute a 1 x output_neurons array that will properly update the bias vector. Otherwise, grad.sum() provides a single number that you are using to update all of your biases, which is not correct.
Also, make sure you update your forward pass for your ReLU to np.maximum(neurons, 0) as described in the comments.
def relu(self, neurons):
    self.act = (neurons > 0)
    return np.maximum(neurons, 0)
The gradient of the activations will be 0 or 1 depending on which parts of the inputs were positive.
Finally, for the XOR problem you typically do not use ReLU as the activation for the output layer because it is not bounded between 0 and 1, which the XOR targets require. The reason why you got good results with the sigmoid activation function is that its dynamic range suits the XOR problem well. As an experiment, you can make the output layer sigmoid and the hidden layers ReLU. If you do this, you should get performance just as good as using sigmoid all the way through.
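A rough sketch of that experiment, as a hypothetical modification of the NeuralNetwork methods above (not code from the original post): keep ReLU in the hidden layers and force the output layer to sigmoid.
    # Hypothetical replacements for NeuralNetwork.forward / NeuralNetwork.backward:
    # ReLU for hidden layers, sigmoid for the output layer.
    def forward(self, inp):
        assert inp.shape[0] == self.neuron_safety
        for layer in self.layers[:-1]:
            inp = layer.forward(inp, 'relu')
        return self.layers[-1].forward(inp, 'sigmoid')

    def backward(self, grad):
        grad = self.layers[-1].backward(grad, 'sigmoid')
        for layer in reversed(self.layers[:-1]):
            grad = layer.backward(grad, 'relu')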

Keras custom layer to Conv2D input channels error, ValueError: number of input channels does not match corresponding dimension of filter, 50 != 3200

I am trying to create a model with a normalized cross-correlation custom layer; the code is taken from here:
from keras import backend as K
from keras.layers import Conv2D, MaxPooling2D, Dense, Input, Flatten
from keras.models import Model, Sequential
from keras.engine import InputSpec, Layer
from keras import regularizers
from keras.optimizers import SGD, Adam
from keras.utils.conv_utils import conv_output_length
from keras import activations
import numpy as np
class Normalized_Correlation_Layer(Layer):
    # create a class inherited from keras.engine.Layer.
    def __init__(self, patch_size=(5, 5),
                 dim_ordering='tf',
                 border_mode='same',
                 stride=(1, 1),
                 activation=None,
                 **kwargs):
        if border_mode != 'same':
            raise ValueError('Invalid border mode for Correlation Layer '
                             '(only "same" is supported as of now):', border_mode)
        self.kernel_size = patch_size
        self.subsample = stride
        self.dim_ordering = dim_ordering
        self.border_mode = border_mode
        self.activation = activations.get(activation)
        super(Normalized_Correlation_Layer, self).__init__(**kwargs)

    def compute_output_shape(self, input_shape):
        return (input_shape[0][0], input_shape[0][1], input_shape[0][2],
                self.kernel_size[0] * input_shape[0][2] * input_shape[0][-1])

    def get_config(self):
        config = {'patch_size': self.kernel_size,
                  'activation': self.activation.__name__,
                  'border_mode': self.border_mode,
                  'stride': self.subsample,
                  'dim_ordering': self.dim_ordering}
        base_config = super(Correlation_Layer, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def call(self, x, mask=None):
        input_1, input_2 = x
        stride_row, stride_col = self.subsample
        inp_shape = input_1._keras_shape
        output_shape = self.compute_output_shape([inp_shape, inp_shape])
        padding_row = (int(self.kernel_size[0] / 2), int(self.kernel_size[0] / 2))
        padding_col = (int(self.kernel_size[1] / 2), int(self.kernel_size[1] / 2))
        input_1 = K.spatial_2d_padding(input_1, padding=(padding_row, padding_col))
        input_2 = K.spatial_2d_padding(input_2, padding=((padding_row[0] * 2, padding_row[1] * 2), padding_col))
        output_row = output_shape[1]
        output_col = output_shape[2]
        output = []
        for k in range(inp_shape[-1]):
            xc_1 = []
            xc_2 = []
            # print("here")
            for i in range(padding_row[0]):
                for j in range(output_col):
                    xc_2.append(K.reshape(input_2[:, i:i + self.kernel_size[0], j:j + self.kernel_size[1], k],
                                          (-1, 1, self.kernel_size[0] * self.kernel_size[1])))
            for i in range(output_row):
                slice_row = slice(i, i + self.kernel_size[0])
                slice_row2 = slice(i + padding_row[0], i + self.kernel_size[0] + padding_row[0])
                # print("dfg")
                for j in range(output_col):
                    slice_col = slice(j, j + self.kernel_size[1])
                    xc_2.append(K.reshape(input_2[:, slice_row2, slice_col, k],
                                          (-1, 1, self.kernel_size[0] * self.kernel_size[1])))
                    xc_1.append(K.reshape(input_1[:, slice_row, slice_col, k],
                                          (-1, 1, self.kernel_size[0] * self.kernel_size[1])))
            for i in range(output_row, output_row + padding_row[1]):
                for j in range(output_col):
                    xc_2.append(K.reshape(input_2[:, i:i + self.kernel_size[0], j:j + self.kernel_size[1], k],
                                          (-1, 1, self.kernel_size[0] * self.kernel_size[1])))
            xc_1_aggregate = K.concatenate(xc_1, axis=1)
            xc_1_mean = K.mean(xc_1_aggregate, axis=-1, keepdims=True)
            xc_1_std = K.std(xc_1_aggregate, axis=-1, keepdims=True)
            xc_1_aggregate = (xc_1_aggregate - xc_1_mean) / xc_1_std
            xc_2_aggregate = K.concatenate(xc_2, axis=1)
            xc_2_mean = K.mean(xc_2_aggregate, axis=-1, keepdims=True)
            xc_2_std = K.std(xc_2_aggregate, axis=-1, keepdims=True)
            xc_2_aggregate = (xc_2_aggregate - xc_2_mean) / xc_2_std
            xc_1_aggregate = K.permute_dimensions(xc_1_aggregate, (0, 2, 1))
            block = []
            len_xc_1 = len(xc_1)
            print("asdf")
            for i in range(len_xc_1):
                # This for loop is to compute the product of a given patch of feature map 1 and the feature maps on which it is supposed to
                sl1 = slice(int(i / inp_shape[2]) * inp_shape[2],
                            int(i / inp_shape[2]) * inp_shape[2] + inp_shape[2] * self.kernel_size[0])
                # This calculates which are the patches of feature map 2 to be considered for a given patch of first feature map.
                block.append(K.reshape(K.batch_dot(xc_2_aggregate[:, sl1, :],
                                                   xc_1_aggregate[:, :, i]),
                                       (-1, 1, 1, inp_shape[2] * self.kernel_size[0])))
            block = K.concatenate(block, axis=1)
            # print("zxcv")
            block = K.reshape(block, (-1, output_row, output_col, inp_shape[2] * self.kernel_size[0]))
            output.append(block)
        output = self.activation(output)
        print(output)
        return output
My model is a combination of cross correlation and Conv2D layers,
dt = 'float32'
def create_model():
    ip = keras.layers.Input((50, 50, 1))
    ncx1_1 = Normalized_Correlation_Layer(patch_size=(1, 1))([ip, ip])
    ncn1_1 = keras.layers.Conv2D(64, (1, 1), activation='relu', dtype=dt)(ip)
    ncn2_1 = keras.layers.Conv2D(64, (1, 1), activation='relu', dtype=dt)(ncx1_1)
    ncx2_1 = Normalized_Correlation_Layer(patch_size=(1, 1), dtype=dt)([ncn1_1, ncn2_1])
    # ncx2_1 = keras.layers.Reshape((50, 50, 3200))(ncx2_1)
    # Problem occurs here
    ncn3 = keras.layers.Conv2D(filters=64, kernel_size=(1, 1), activation='relu', dtype=dt)(ncx2_1)
    ncn4 = keras.layers.Conv2D(12, (1, 1), activation='sigmoid', dtype=dt)(ncn3)
    model = keras.models.Model(ip, ncn4)
    return model
The model up to the last cross-correlation layer is created successfully, but I get a problem at the ncn3 layer:
ValueError: number of input channels does not match corresponding dimension of filter, 50 != 3200
While creating the model, the output shape of the ncx2_1 layer is printed as (?, 50, 50, 50),
both when I print ncx2_1.shape and in the outputs returned from the layer's call function ([<tf.Tensor 'normalized__correlation__layer_4/Reshape_10000:0' shape=(?, 50, 50, 50) dtype=float32>]).
But the model summary shows it as (?, 50, 50, 3200) when I create the model only up to that layer, i.e. model = keras.models.Model(ip, ncx2_1).
When I reshape the layer using ncx2_1 = keras.layers.Reshape((50, 50, 3200))(ncx2_1), I can create the model successfully, but when I try to fit data to it, I get:
InvalidArgumentError: Input to reshape is a tensor with 6250000 values, but the requested shape has 400000000
[[node reshape_1/Reshape (defined at /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1781) ]]
[[node loss/mul (defined at /usr/local/lib/python3.6/dist-packages/keras/engine/training.py:865) ]]
Here my batch size is 50, so for a layer with (B, H, W, C) inputs of (50, 50, 50, 50) the size should be 6,250,000, but for (50, 50, 50, 3200) it should be 400,000,000, which means the output of the cross-correlation layer actually has 50 channels.
I am either interpreting this wrong, or I have made a mistake somewhere that I would like to know about.
I am using Keras 2.1.2 with TensorFlow 1.13.1 (that is the version the custom layer was written for, and I was getting other problems with the latest version).
I am also using a custom generator, in case that is relevant, and I am calling fit using md.fit_generator(train_gen, verbose=1). I can add any other details if necessary.

python tensorflow 2.0 build a simple LSTM network without using Keras

I'm trying to build a tensorflow LSTM network without using Keras API. The model is very simple:
an input sequence of 4 word indices
an embedding lookup into 100-dim word vectors
an LSTM layer
a dense layer producing an output sequence of 4 words
The loss function is sequence loss.
I have the following code:
# input
input_placeholder = tf.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Input')
labels_placeholder = tf.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Target')
# embedding
embedding = tf.get_variable('Embedding', initializer=embedding_matrix, trainable=False)
inputs = tf.nn.embedding_lookup(embedding, input_placeholder)
inputs = [tf.squeeze(x, axis=1) for x in tf.split(inputs, config.num_steps, axis=1)]
# LSTM
initial_state = tf.zeros([config.batch_size, config.hidden_size])
lstm_cell = tf.nn.rnn_cell.LSTMCell(config.hidden_size)
output, _ = tf.keras.layers.RNN(lstm_cell, inputs, dtype=tf.float32, unroll=True)
# loss op
all_ones = tf.ones([config.batch_size, config.num_steps])
cross_entropy = tfa.seq2seq.sequence_loss(output, labels_placeholder, all_ones, vocab_size)
tf.add_to_collection('total_loss', cross_entropy)
loss = tf.add_n(tf.get_collection('total_loss'))
# projection (dense)
proj_U = tf.get_variable('Matrix', [config.hidden_size, vocab_size])
proj_b = tf.get_variable('Bias', [vocab_size])
outputs = [tf.matmul(o, proj_U) + proj_b for o in output]
The problem I have is at the LSTM part now:
# tensorflow 1.x
output, _ = tf.contrib.rnn.static_rnn(
lstm_cell, inputs, dtype = tf.float32,
sequence_length = [config.num_steps]*config.batch_size)
I'm having problems converting this to TensorFlow 2. With the above code, I'm getting the following error:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
in
----> 1 outputs, _ = tf.keras.layers.RNN(lstm_cell, inputs, dtype=tf.float32, unroll=True)
TypeError: cannot unpack non-iterable RNN object
The below code should work for TensorFlow 2.X.
import tensorflow as tf
# input
input_placeholder = tf.compat.v1.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Input')
labels_placeholder = tf.compat.v1.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Target')
# embedding
embedding = tf.compat.v1.get_variable('Embedding', initializer=embedding_matrix, trainable=False)
inputs = tf.nn.embedding_lookup(params=embedding, ids=input_placeholder)
inputs = [tf.squeeze(x, axis=1) for x in tf.split(inputs, config.num_steps, axis=1)]
# LSTM
initial_state = tf.zeros([config.batch_size, config.hidden_size])
lstm_cell = tf.compat.v1.nn.rnn_cell.LSTMCell(config.hidden_size)
output, _ = tf.keras.layers.RNN(lstm_cell, inputs, dtype=tf.float32, unroll=True)
# loss op
all_ones = tf.ones([config.batch_size, config.num_steps])
cross_entropy = tfa.seq2seq.sequence_loss(output, labels_placeholder, all_ones, vocab_size)
tf.compat.v1.add_to_collection('total_loss', cross_entropy)
loss = tf.add_n(tf.compat.v1.get_collection('total_loss'))
# projection (dense)
proj_U = tf.compat.v1.get_variable('Matrix', [config.hidden_size, vocab_size])
proj_b = tf.compat.v1.get_variable('Bias', [vocab_size])
outputs = [tf.matmul(o, proj_U) + proj_b for o in output]
# tensorflow 1.x
output, _ = tf.compat.v1.nn.static_rnn(
lstm_cell, inputs, dtype = tf.float32,
sequence_length = [config.num_steps]*config.batch_size)
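For reference, the TypeError in the question comes from treating tf.keras.layers.RNN as if it returned output tensors: the constructor only wraps the cell into a layer object, which then has to be called on a tensor. A minimal sketch of that call pattern (an assumption for illustration, using a Keras LSTMCell and stacking the per-step inputs back into one tensor; it is not part of the answer above):
# Hypothetical call pattern for the RNN layer
lstm_cell = tf.keras.layers.LSTMCell(config.hidden_size)
rnn_layer = tf.keras.layers.RNN(lstm_cell, return_sequences=True,
                                return_state=True, unroll=True)
stacked_inputs = tf.stack(inputs, axis=1)   # (batch_size, num_steps, embedding_dim)
output, state_h, state_c = rnn_layer(stacked_inputs)
# output: (batch_size, num_steps, hidden_size), ready for the projection above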

Keras ctc_decode shape must be rank 1 but is rank 2

I am implementing an OCR with Keras, Tensorflow backend.
I want to use keras.backend.ctc_decode implementation.
I have a model class :
import keras

def ctc_lambda_func(args):
    y_pred, y_true, input_x_width, input_y_width = args
    # the 2 is critical here since the first couple outputs of the RNN
    # tend to be garbage:
    # y_pred = y_pred[:, 2:, :]
    return keras.backend.ctc_batch_cost(y_true, y_pred, input_x_width, input_y_width)

class ModelOcropy(keras.Model):
    def __init__(self, alphabet: str):
        self.img_height = 48
        self.lstm_size = 100
        self.alphabet_size = len(alphabet)
        # check backend input shape (channel first/last)
        if keras.backend.image_data_format() == "channels_first":
            input_shape = (1, None, self.img_height)
        else:
            input_shape = (None, self.img_height, 1)
        # data input
        input_x = keras.layers.Input(input_shape, name='x')
        # training inputs
        input_y = keras.layers.Input((None,), name='y')
        input_x_widths = keras.layers.Input([1], name='x_widths')
        input_y_widths = keras.layers.Input([1], name='y_widths')
        # network
        flattened_input_x = keras.layers.Reshape((-1, self.img_height))(input_x)
        bidirectional_lstm = keras.layers.Bidirectional(
            keras.layers.LSTM(self.lstm_size, return_sequences=True, name='lstm'),
            name='bidirectional_lstm'
        )(flattened_input_x)
        dense = keras.layers.Dense(self.alphabet_size, activation='relu')(bidirectional_lstm)
        y_pred = keras.layers.Softmax(name='y_pred')(dense)
        # ctc loss
        ctc = keras.layers.Lambda(ctc_lambda_func, output_shape=[1], name='ctc')(
            [dense, input_y, input_x_widths, input_y_widths]
        )
        # init keras model
        super().__init__(inputs=[input_x, input_x_widths, input_y, input_y_widths], outputs=[y_pred, ctc])
        # ctc decoder
        top_k_decoded, _ = keras.backend.ctc_decode(y_pred, input_x_widths)
        self.decoder = keras.backend.function([input_x, input_x_widths], [top_k_decoded[0]])
        # decoded_sequences = self.decoder([test_input_data, test_input_lengths])
My use of ctc_decode comes from another post: Keras using Lambda layers error with K.ctc_decode
I get an error:
ValueError: Shape must be rank 1 but is rank 2 for 'CTCGreedyDecoder' (op: 'CTCGreedyDecoder') with input shapes: [?,?,7], [?,1].
I guess I have to squeeze my input_x_widths, but Keras does not seem to have such a function (it always outputs something like (batch_size, 1)).
Indeed, the function is expecting a 1D tensor, and you've got a 2D tensor.
Keras does have the keras.backend.squeeze(x, axis=-1) function.
And you can also use keras.backend.reshape(x, (-1,))
If you need to go back to the old shape after the operation, you can use either:
keras.backend.expand_dims(x)
keras.backend.reshape(x,(-1,1))
Complete fix:
# ctc decoder
flattened_input_x_width = keras.backend.reshape(input_x_widths, (-1,))
top_k_decoded, _ = keras.backend.ctc_decode(y_pred, flattened_input_x_width)
self.decoder = keras.backend.function([input_x, flattened_input_x_width], [top_k_decoded[0]])
# decoded_sequences = self.decoder([input_x, flattened_input_x_width])
