I'm creating a model that uses Keras's functional API. The model takes 2 inputs, hence I'm using:
video_input = Input(shape=(16, 112, 112, 3))
image_input = Input(shape=(112, 112, 3))
Model(inputs=[video_input, image_input], outputs=merge_model)
So as you can see, the model expects a list of 2 arrays, the first of shape (16, 112, 112, 3) and the second of shape (112, 112, 3).
I'm using a class I created, which inherits from keras.utils.Sequence, to provide generated batches of data.
The problem comes after generating a batch: when TensorFlow attempts to feed the model, the input has changed from a list of 2 arrays into a list containing 1 array whose single element holds both. For example, instead of the expected
[array(...), array(...)]
it receives
[array([array(...), array(...)])]
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[array([[[[-76.87925 , -81.45539 , -82.91122 ],
[-76.90526 , -81.45103 , -83.00473 ],
[-76.77082 , -81.259674, -82.92529 ],
...,
[-76.17821 , -80.61866 , -8...
I tried making the data holder in the Sequence generator a Python list that I append data to and then convert to a NumPy array, but I got the error above.
Somehow Keras wraps everything into 1 array before returning it to the model.
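To make the wrapping concrete, here is a small sketch (my own illustration, not from the original post) of what happens when per-sample [video, image] pairs are collected and converted with np.array:

import numpy as np

v = np.zeros((16, 112, 112, 3))   # one video sample
im = np.zeros((112, 112, 3))      # one image sample

# Collecting [video, image] pairs per sample and converting with np.array
# yields ONE ragged object array wrapping both inputs (without dtype=object,
# newer NumPy versions raise instead) -- not the two arrays the model expects:
X_bad = np.array([[v, im], [v, im]], dtype=object)   # shape (2, 2), dtype=object

# What the model actually wants is a list of two batched arrays:
X_ok = [np.stack([v, v]), np.stack([im, im])]        # shapes (2,16,112,112,3) and (2,112,112,3)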
This is the data generation method:
def __data_generation(self, list_IDs_temp):
    'Generates data containing batch_size samples'  # X : (n_samples, *dim, n_channels)
    # Initialization
    X = []
    y = np.empty((self.batch_size), dtype=int)

    # Generate data
    for i, ID in enumerate(list_IDs_temp):
        # Store sample
        print(ID)
        frame_data = input_data.get_frames_data(
            self.work_directory + ID, self.num_of_frames, self.crop_size)
        image_index = random.randint(0, len(frame_data) - 1)
        im = frame_data[image_index]
        X.append([frame_data, im])
        # Store class
        y[i] = self.labels[ID]

    return np.array(X), keras.utils.to_categorical(
        y, num_classes=self.n_classes)
Edited function that works:
def __data_generation(self, list_IDs_temp):
    'Generates data containing batch_size samples'  # X : (n_samples, *dim, n_channels)
    # Initialization
    vX = np.empty((self.batch_size, *self.c3d_dim))
    iX = np.empty((self.batch_size, *self.static_dim))
    y = np.empty((self.batch_size), dtype=int)

    # Generate data
    for i, ID in enumerate(list_IDs_temp):
        # Store sample
        print(ID)
        frame_data = input_data.get_frames_data(
            self.work_directory + ID, self.num_of_frames, self.crop_size)
        image_index = random.randint(0, len(frame_data) - 1)
        im = frame_data[image_index]
        vX[i, ] = frame_data
        iX[i, ] = im
        # Store class
        y[i] = self.labels[ID]

    return vX, iX, keras.utils.to_categorical(
        y, num_classes=self.n_classes)
As I remember, you should feed each input as an independent array. For example, if you have 2 input images, you should not have an array of the form [[image_1, image_2], [image_3, image_4], [image_5, image_6], ...]; instead you should have something like [[image_1, image_3, image_5, ...], [image_2, image_4, image_6, ...]]. As you can see, the first array holds every sample for the first input and the second array holds every sample for the second input. This applies to your case as well: just store the inputs in different arrays and combine them when you call fit. It should be something like [video_frames, images].
Hope it helps.
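To make the combination concrete, here is a minimal sketch (my own illustration, not the poster's exact class; TwoInputSequence is a hypothetical name) of a Sequence whose __getitem__ returns the two inputs as a list:

import numpy as np
import keras

class TwoInputSequence(keras.utils.Sequence):
    def __init__(self, videos, images, labels, batch_size=4):
        # videos: (N, 16, 112, 112, 3), images: (N, 112, 112, 3)
        self.videos, self.images, self.labels = videos, images, labels
        self.batch_size = batch_size

    def __len__(self):
        return len(self.labels) // self.batch_size

    def __getitem__(self, idx):
        s = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        # A list of two arrays, one per model input -- never a single
        # object array that wraps both.
        return [self.videos[s], self.images[s]], self.labels[s]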
Related
I have a data generator that produces batches of input data (X) and targets (Y), and also a mask (batch_mask) to be applied to the model output (the same mask applies to all the datapoints in a batch; different batches get different masks, and the data generator takes care of this).
As a result, the first dimension of batch_mask could have size 1 or batch_size (by repeating the same mask batch_size times along the first dimension). I was expecting Keras to accept either, and I wanted to simply create masks with size 1 on the first dimension.
However, when I tried this, I got the error:
ValueError: Data cardinality is ambiguous:
x sizes: 128, 1
y sizes: 128
Make sure all arrays contain the same number of samples.
Why won't Keras broadcast along the first dimension? It seems like this should not be complicated.
Here's some minimal example code to observe this behavior:
import tensorflow.keras as tfk
import numpy as np
#######################
# 1. model definition #
#######################
# model parameters
nfeatures_in = 6
target_size = 8
# model inputs
input = tfk.layers.Input(nfeatures_in)
input_mask = tfk.layers.Input(target_size)
# model graph
out = tfk.layers.Dense(target_size)(input)
out_masked = tfk.layers.Multiply()((out, input_mask))  # multiply all model outputs in the batch by the same mask
model = tfk.Model(inputs=(input, input_mask), outputs=out_masked)
##########################
# 2. dummy data creation #
##########################
batch_size = 32
# create masks for the batch
zeros_vector = np.zeros((1,target_size)) # "batch_size"==1
zeros_vector[0,:6] = 1
batch_mask = zeros_vector
# dummy data creation
X = np.random.randn(batch_size, 6)
Y = np.random.randn(batch_size, target_size)*batch_mask # the target is masked by design in each batch
############################
# 3. compile model and fit #
############################
model.compile(optimizer="Adam", loss="mse")
model.fit((X, batch_mask), Y, batch_size=batch_size)
I know I could make this work by either:
repeating the mask to make the first dimension of batch_mask the size of the first dimension of X (instead of 1); a sketch of this follows below.
using pure TensorFlow (but I feel like broadcasting along the batch dimension should not be a problem for Keras).
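For reference, a minimal sketch of the repetition workaround (my own illustration, not part of the original question):

batch_mask_full = np.repeat(batch_mask, batch_size, axis=0)  # (1, target_size) -> (batch_size, target_size)
model.fit((X, batch_mask_full), Y, batch_size=batch_size)    # x sizes now match the y size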
How can I make this work with Keras?
Thank you!
You can create an IdentityLayer that receives the batch_mask as an external parameter and returns it as a tensor.
import tensorflow as tf  # needed for tf.convert_to_tensor

class IdentityLayer(tfk.layers.Layer):
    def __init__(self, my_mask, **kwargs):
        super(IdentityLayer, self).__init__()
        self.my_mask = my_mask

    def call(self, _):
        my_mask = tf.convert_to_tensor(self.my_mask, dtype=tf.float32)
        return my_mask

    def get_config(self):
        config = super().get_config()
        config.update({
            "my_mask": self.my_mask,
        })
        return config
The usage of IdentityLayer in a model is straightforward:
# model inputs
input = tfk.layers.Input(nfeatures_in)
input_mask = IdentityLayer(batch_mask)(input)
# model graph
out = tfk.layers.Dense(target_size)(input)
out_masked = tfk.layers.Multiply()((out, input_mask))
model = tfk.Model(inputs=input, outputs=out_masked)
Where batch_mask is a numpy array created as you reported:
zeros_vector = np.zeros((1,target_size)) # "batch_size"==1
zeros_vector[0,:6] = 1
batch_mask = zeros_vector
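With the mask baked into the graph, the model has a single input, so training no longer passes the mask at all (a sketch, assuming the same X and Y as in the question):

model.compile(optimizer="Adam", loss="mse")
model.fit(X, Y, batch_size=batch_size)  # no mask input, so no cardinality mismatch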
The solution is to (properly) use a DataGenerator.
See the gist with the working code: https://gist.github.com/iranroman/2aaecf5b5621051df6b1b6b5394e5ef3
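For completeness, a minimal sketch of the generator idea (my own illustration; the gist contains the actual working code), where the stored (1, target_size) mask is repeated up to the batch size in __getitem__:

import numpy as np
import tensorflow.keras as tfk

class MaskedDataGenerator(tfk.utils.Sequence):
    def __init__(self, X, Y, batch_mask, batch_size):
        self.X, self.Y = X, Y
        self.batch_mask = batch_mask          # shape (1, target_size)
        self.batch_size = batch_size

    def __len__(self):
        return len(self.X) // self.batch_size

    def __getitem__(self, idx):
        s = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        x, y = self.X[s], self.Y[s]
        mask = np.repeat(self.batch_mask, len(x), axis=0)  # match the batch cardinality
        return (x, mask), y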
Thank you @Marco Cerliani for the discussion that led to figuring out the solution.
I'm using TF 1.15 and trying to do a regression task on a signal.
First of all, I load my signals into the pipeline. I have several files; here I simulate the loading with np.zeros to make the code runnable for you.
Every file has shape (?, 75000, 3), where ? is a random number of elements, 75000 is the number of samples per element, and 3 is the number of signals.
Using tf.data I unpack them and get a dataset that outputs signals of shape (75000,), which I use in my Keras model.
Everything should be fine until I create the Keras model. I copied my actual input pipeline here because during my tests I got different errors with a generic tf.data.Dataset than with the dataset built this way.
import numpy as np
import tensorflow as tf

# called in the dataset pipeline
def my_func(x):
    p = np.zeros([86, 75000, 3])
    x = p[:, :, 0]
    y = p[:, :, 1]
    z = p[:, :, 2]
    return x, y, z

# called in the dataset pipeline
def load_sign(path):
    func = tf.compat.v1.numpy_function(my_func, [path], [tf.float64, tf.float64, tf.float64])
    return func

# Dataset pipeline
s = [1, 2]  # here I have the file paths; I simulate them with numbers
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds = tf.data.Dataset.from_tensor_slices(s)
# ds = ds.map(load_sign, num_parallel_calls=AUTOTUNE)
ds = ds.map(load_sign, num_parallel_calls=AUTOTUNE).unbatch()
itera = tf.data.make_one_shot_iterator(ds)
ABP, ECG, PLETH = itera.get_next()

# Until here everything should be fine

# Here I create my convolutional network
signal = tf.keras.layers.Input(shape=(None, 75000), dtype='float32')
x = tf.compat.v1.keras.layers.Conv1D(64, (1), strides=1, padding='same')(signal)
x = tf.keras.layers.Dense(75000)(x)
model = tf.keras.Model(inputs=signal, outputs=x, name='resnet18')

# And finally I try to feed my signal into the model
logits = model(PLETH)
I get this error:
ValueError: Input 0 of layer conv1d is incompatible with the layer: its rank is undefined, but the layer requires a defined rank.
Why? And how can I make it work?
Also, the input size of my net should be this, according to the documentation:
3D tensor with shape: (batch_size, steps, input_dim)
What is steps? In my case I assume it should be (batch_size, 1, 75000), right?
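For reference, a minimal sketch of the (batch_size, steps, input_dim) convention (my own illustration, not from the original post): steps is the time axis, so a 75000-sample signal with a single channel would be (batch_size, 75000, 1) rather than (batch_size, 1, 75000):

import tensorflow as tf

signal = tf.keras.layers.Input(shape=(75000, 1), dtype='float32')             # steps=75000, input_dim=1
x = tf.keras.layers.Conv1D(64, 1, strides=1, padding='same')(signal)          # -> (None, 75000, 64)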
When I use a BiLSTM model for an NLP problem, I get an error on session.run(). Searching on Google suggests that a bad feed_dict causes this kind of error. I printed the input x shape: it is (100,), but I defined it as [100, None, 256].
How can I solve the error?
This is my environment:
Python: 3.6
TensorFlow: 1.0.0
Task: every description has some tags (like Stack Overflow, where one question has some tags). I need to build a model that predicts tags for questions. My training input x is [batch_size, None, word_embedding_size]: a batch of question descriptions, where each description has some words and each word is expressed as a vector of length 256. Input y is [batch_size, n_classes].
This is my model code:
self.X_inputs = tf.placeholder(tf.float32, [self.n_steps, None, self.n_inputs])
self.targets = tf.placeholder(tf.float32, [None, self.n_classes])

# transpose the input x
x = tf.transpose(self.X_inputs, [1, 0, 2])
x = tf.reshape(x, [-1, self.n_inputs])
x = tf.split(x, self.n_steps)

# lstm cells
lstm_cell_fw = tf.contrib.rnn.BasicLSTMCell(self.hidden_dim)
lstm_cell_bw = tf.contrib.rnn.BasicLSTMCell(self.hidden_dim)

# dropout
if is_training:
    lstm_cell_fw = tf.contrib.rnn.DropoutWrapper(lstm_cell_fw, output_keep_prob=(1 - self.dropout_rate))
    lstm_cell_bw = tf.contrib.rnn.DropoutWrapper(lstm_cell_bw, output_keep_prob=(1 - self.dropout_rate))

lstm_cell_fw = tf.contrib.rnn.MultiRNNCell([lstm_cell_fw] * self.num_layers)
lstm_cell_bw = tf.contrib.rnn.MultiRNNCell([lstm_cell_bw] * self.num_layers)

# forward and backward
self.outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(
    lstm_cell_fw,
    lstm_cell_bw,
    x,
    dtype=tf.float32
)
The feed_dict is like this:
feed_dict = {
    self.X_inputs: X_train_batch,
    self.targets: y_train_batch
}
X_train_batch is a batch of sentences with shape [100, None, 256]; the 'None' means the input sentences do not all have the same length (from 10 to 1500), so I just keep the real length. Maybe that causes the error?
My question is: do you pad the sentences to the same length, or reshape the inputs, when doing such NLP work?
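For context, a common approach (my own sketch, not from the original post; pad_batch is a hypothetical helper) is to pad or truncate every sentence to a fixed n_steps before feeding the placeholder:

import numpy as np

def pad_batch(batch, n_steps, emb_size=256):
    # batch: list of (sentence_length, emb_size) arrays with varying lengths
    out = np.zeros((len(batch), n_steps, emb_size), dtype=np.float32)
    for i, sent in enumerate(batch):
        length = min(len(sent), n_steps)
        out[i, :length] = sent[:length]  # zero-pad short sentences, truncate long ones
    return out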
I am trying to use the predict function of the Keras library, but it accepts the input as a 4-dimensional array, whereas my data generator produces the input as a 2-dimensional array. To be more specific:
expected input ---> [n, channel, width, height]
my input ---> [n, width*height]
I need to handle this without changing my data generator, but I couldn't figure out how to override the predict function of the Keras library. Or is there any other way to handle this?
My data generator:
def genData(n=1000, max_digs=2, width=60):
    capgen = ImageCaptcha()
    data = []
    target = []
    for i in range(n):
        x = np.random.randint(0, 10 ** max_digs)
        img = misc.imread(capgen.generate(str(x)))
        img = np.mean(img, axis=2)[:, :width]
        data.append(img.flatten())
        target.append(x)
    return np.array(data), np.array(target)
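For illustration, one way to bridge the gap without touching the generator itself (my own sketch; model is assumed to be the trained Keras model, and the channel axis gets size 1 because the images are grayscale) is to reshape the flat batch just before calling predict:

data, target = genData(n=1000, max_digs=2, width=60)
height = data.shape[1] // 60           # recover the image height from the flat length (width=60 here)
X = data.reshape(-1, 1, height, 60)    # (n, width*height) -> (n, 1, height, width); transpose if your model wants (width, height)
preds = model.predict(X)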
Getting output classification with Lasagne/Theano
I am migrating my code from pure Theano to Lasagne.
I had some code from a tutorial to get prediction results on some data, from which I would generate a CSV file to send to Kaggle.
But with Lasagne, it doesn't work.
I have tried several things but they all give errors.
I would love if anyone could help me figure what's wrong!
I pasted the whole code here:
http://pastebin.com/e7ry3280
test_data = np.loadtxt("../inputData/test.csv", dtype=np.uint8, delimiter=',', skiprows=1)

# The inputs are vectors now, we reshape them to monochrome 2D images,
# following the shape convention: (examples, channels, rows, columns)
data = data.reshape(-1, 1, 28, 28)
test_data = test_data.reshape(-1, 1, 28, 28)

index = T.lscalar()  # index to a [mini]batch
preds = []
for it in range(len(test_data)):
    test_data = test_data[it]
    N = len(test_data)
    # print "N : ", N
    test_data = theano.shared(np.asarray(test_data, dtype=theano.config.floatX))
    test_labels = T.cast(theano.shared(np.asarray(np.zeros(batch_size), dtype=theano.config.floatX)), 'uint8')

    ### target_var
    # y = T.ivector('y')  # the labels are presented as 1D vector of [int] labels
    # index = T.lscalar()  # index to a [mini]batch
    ppm = theano.function([index], lasagne.layers.get_output(network, deterministic=True),
                          givens={
                              input_var: test_data[index * batch_size: (index + 1) * batch_size],
                              target_var: test_labels
                          }, on_unused_input='warn')

    p = [ppm(ii) for ii in range(N // batch_size)]
    p = np.array(p).reshape((N, 10))
    print(p)
    p = np.argmax(p, axis=1)
    p = p.astype(int)
    preds.append(p)

subm = np.empty((len(preds), 2))
subm[:, 0] = np.arange(1, len(preds) + 1)
subm[:, 1] = preds
np.savetxt('submission.csv', subm, fmt='%d', delimiter=',', header='ImageId,Label', comments='')
return preds
The code fails on the line that starts with ppm = theano.function...:
TypeError: Cannot convert Type TensorType(float32, 3D) (of Variable Subtensor{int64:int64:}.0) into Type TensorType(float32, 4D). You can try to manually convert Subtensor{int64:int64:}.0 into a TensorType(float32, 4D).
I'm just trying to input the test data to the CNN and get the results into a CSV file. How can I do it? I know I must use minibatches because the whole test data won't fit on the GPU.
As pointed out by the error message and Daniel Renshaw in the comments, the problem is a mismatch of dimensions between test_data and input_var. On the first line of the loop, you write:
test_data = test_data[it]
This turns the 4D array test_data into a 3D array with the same name (which is why reusing one variable name for different types is never recommended :) ). After that you encapsulate it in a shared variable, which doesn't change the number of dimensions, and then you slice it to assign it to input_var, which again doesn't change the number of dimensions.
If I understand your code, I think you should just remove that first line. That way test_data remains a list of examples, and you can slice it to make a batch.
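As a minimal sketch of that fix (my own illustration, reusing the question's variable names, untested): keep test_data 4D and let the givens slice mini-batches out of it, with no per-example loop:

N = len(test_data)  # test_data keeps its (N, 1, 28, 28) shape
test_data_shared = theano.shared(np.asarray(test_data, dtype=theano.config.floatX))
ppm = theano.function(
    [index],
    lasagne.layers.get_output(network, deterministic=True),
    givens={input_var: test_data_shared[index * batch_size: (index + 1) * batch_size]},
    on_unused_input='warn')
p = np.concatenate([ppm(ii) for ii in range(N // batch_size)])  # (N, 10) of class scores
preds = np.argmax(p, axis=1)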