Essentially, I want to propagate data through a Keras model without training it first. I tried both predict() and calling the model directly on raw tensors.
The data is a 2D Numpy float64 array with shape (3, 3), filled entirely with zeros.
The model itself is outlined below:
inputs = keras.Input(shape=(3,), batch_size=1)
FFNNlayer1 = keras.layers.Dense(100, activation='relu')(inputs)
FFNNlayer2 = keras.layers.Dense(100, activation='relu')(FFNNlayer1)
numericalOutput = keras.layers.Dense(3, activation='sigmoid')(FFNNlayer2)
categoricalOutput = keras.layers.Dense(9, activation='softmax')(FFNNlayer2)
outputs = keras.layers.concatenate([numericalOutput, categoricalOutput])
hyperparameters = keras.Model(inputs=inputs, outputs=outputs, name="hyperparameters")
hyperparameters.summary()
The model needed two different activation functions in its output layer, which is why I used the Functional API.
I first attempted to use hyperparameters.predict(data[0]), but kept getting the following error:
WARNING:tensorflow:Model was constructed with shape (1, 3) for input KerasTensor(type_spec=TensorSpec(shape=(1, 3), dtype=tf.float32, name='input_15'), name='input_15', description="created by layer 'input_15'"), but it was called on an input with incompatible shape (None,).
Traceback (most recent call last):
File "<ipython-input-144-4c4a629eaefa>", line 1, in <module>
mainNet.hyperparameters.predict([dataset_info[0]])
File "C:\Users\hudso\anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\hudso\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\func_graph.py", line 1129, in autograph_handler
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
File "C:\Users\hudso\anaconda3\lib\site-packages\keras\engine\training.py", line 1621, in predict_function *
return step_function(self, iterator)
File "C:\Users\hudso\anaconda3\lib\site-packages\keras\engine\training.py", line 1611, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Users\hudso\anaconda3\lib\site-packages\keras\engine\training.py", line 1604, in run_step **
outputs = model.predict_step(data)
File "C:\Users\hudso\anaconda3\lib\site-packages\keras\engine\training.py", line 1572, in predict_step
return self(x, training=False)
File "C:\Users\hudso\anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\hudso\anaconda3\lib\site-packages\keras\engine\input_spec.py", line 227, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" '
ValueError: Exception encountered when calling layer "hyperparameters" (type Functional).
Input 0 of layer "dense_20" is incompatible with the layer: expected min_ndim=2, found ndim=1. Full shape received: (None,)
Call arguments received:
• inputs=('tf.Tensor(shape=(None,), dtype=float32)',)
• training=False
• mask=None
I fiddled around with array dimensions a bit, but the model continued to give the same error. I then tried feeding raw tensors into the model, with the following code:
tensorflow_dataset_info = tf.data.Dataset.from_tensor_slices([dataset_info[0]]).batch(1)
aaaaa = enumerate(tensorflow_dataset_info)
predictions = mainNet.hyperparameters(aaaaa)
This code continued to give the following error:
Traceback (most recent call last):
File "<ipython-input-143-df51fe8fd203>", line 1, in <module>
hyperparameters = mainNet.hyperparameters(enumerate(tensorflow_dataset_info))
File "C:\Users\hudso\anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\hudso\anaconda3\lib\site-packages\keras\engine\input_spec.py", line 196, in assert_input_compatibility
raise TypeError(f'Inputs to a layer should be tensors. Got: {x}')
TypeError: Inputs to a layer should be tensors. Got: <enumerate object at 0x000001F60081EA40>
I've looked online for a while, and I've searched through the tf.data documentation, but I'm still not sure how to fix this. Again, I've tried multiple variations of this code, and I continue to get mostly the same errors.
If data.shape = (3, 3), then passing data[0] to model.predict() actually sends a vector of shape (3,), but your model expects shape (1, 3), i.e. 1 example of size 3.
Try slicing your data instead:
model.predict(data[:1])
This way your tensor will have shape (1, 3).
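For illustration, here is a minimal end-to-end sketch (assuming the hyperparameters model defined above is in scope and NumPy is imported as np); np.expand_dims is an equivalent way to add the batch axis:
import numpy as np
data = np.zeros((3, 3))  # the all-zeros float64 array from the question
x = np.expand_dims(data[0], axis=0)  # shape (1, 3): one example of size 3
predictions = hyperparameters.predict(x)
print(predictions.shape)  # (1, 12): the 3 sigmoid and 9 softmax outputs, concatenated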
One way is to slice: model.predict(data[:1])
Another way is to rebuild the row as a batch of one: model.predict(np.array([list(data[0])]))
Related
I am building a prediction model for sequence data using the Conv1D layer provided by Keras. This is how I did it:
def create_cnn_model():
    input_layer = Input(shape=(500,))
    layer = Conv1D(128, 5, activation="relu")(input_layer)
    layer = MaxPooling1D(pool_size=2)(layer)
    layer = Flatten()(layer)
    layer = Dense(128, activation='relu')(layer)
    output_layer = Dense(10, activation='softmax')(layer)
    classifier = Model(input_layer, output_layer)
    classifier.summary()
    classifier.compile(optimizer=optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return classifier
However, I am facing the following error:
Traceback (most recent call last):
File "train.py", line 71, in <module>
classifier = create_cnn_model()
File "train.py", line 60, in create_cnn_model
layer = Conv1D(128,5, activation="relu")(input_layer)
File "C:\Python368\lib\site-packages\keras\backend\tensorflow_backend.py", line 75, in symbolic_fn
_wrapper
return func(*args, **kwargs)
File "C:\Python368\lib\site-packages\keras\engine\base_layer.py", line 446, in __call__
self.assert_input_compatibility(inputs)
File "C:\Python368\lib\site-packages\keras\engine\base_layer.py", line 342, in assert_input_compat
ibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=2
I think the input_shape in the first layer is not set up right. How do I set it up?
Right, conv layers need three-dimensional input.
I am assuming you have a univariate time series with 500 samples.
You need to write a function to split the time series into steps.
For example:
x = [t-n, ..., t-2, t-1],  y = t
So you are basically using the last n values to predict the next value in your series.
Then your input shape will be [len(x), n, 1]
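For example, a minimal sketch of such a split function (the name split_series and the window length n=10 are illustrative, not from the question):
import numpy as np

def split_series(series, n):
    # Each window of the last n values predicts the value right after it.
    x, y = [], []
    for i in range(len(series) - n):
        x.append(series[i:i + n])
        y.append(series[i + n])
    # Conv1D expects (samples, timesteps, channels), hence the trailing 1.
    return np.array(x).reshape(-1, n, 1), np.array(y)

x, y = split_series(np.arange(500, dtype='float32'), n=10)
print(x.shape, y.shape)  # (490, 10, 1) (490,)
The input layer would then be Input(shape=(n, 1)) rather than Input(shape=(500,)).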
I can't get my RNN classifier to work with my input data. I am using the TF 2.0 pre-release with a sliding window.
I am trying to build an RNN that is fed 5 timesteps with 6 features each and produces the 6th timestep as the target. When I run my code, it gives me an error saying that the input is (None, 6), whereas when I print out my training data it clearly says the shape is (5, 6). I am very confused as to how to fix this.
Error:
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 734, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 224, in fit
distribution_strategy=strategy)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 547, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_inputs
steps=steps)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2384, in _standardize_user_data
all_inputs, y_input, dict_inputs = self._build_model_with_inputs(x, y)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2587, in _build_model_with_inputs
self._set_inputs(cast_inputs)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2674, in _set_inputs
outputs = self(inputs, **kwargs)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 772, in __call__
self.name)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\input_spec.py", line 177, in assert_input_compatibility
str(x.shape.as_list()))
ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 6]
Print readout:
********************tf.Tensor(
[[0.07812838 0.08639083 0.07809999 0.08601701 0.6974719 0.6974719 ]
[0.06794664 0.06995372 0.06220453 0.06934043 0.70064694 0.70064694]
[0.08323035 0.08651368 0.07691107 0.08147305 0.69750804 0.69750804]
[0.09781507 0.10009027 0.08847085 0.08919457 0.6944895 0.6944895 ]
[0.12235662 0.12269666 0.11316498 0.11738694 0.6868 0.6868 ]], shape=(5, 6), dtype=float32)********************tf.Tensor([[0.08238748 0.09074993 0.07986343 0.09017278 0.6965872 0.6965872 ]], shape=(1, 6), dtype=float32)********************
data comes in as an array of shape [737, 6]
train=tf.data.Dataset.from_tensor_slices(features).window(6,1,1,drop_remainder=True).flat_map(lambda x: x.batch(6)).map(lambda window: (window[:-1],window[-1:]))
valid=train.take(200).shuffle(1000).repeat()
train=train.shuffle(3000).repeat()
for x, y in valid:
    print('*'*20 + str(x) + "*"*20 + str(y) + "*"*20)
print(train)
model = tf.keras.Sequential()
model.add(layers.SimpleRNN(128,batch_size=10))
model.add(layers.Dense(124,kernel_initializer='he_uniform',activation='softmax'))
model.compile(optimizer='adagrad', batch_size=10,step_size=.01, loss=tf.keras.losses.MeanAbsoluteError(), metrics=['accuracy'])
history = model.fit(train,epochs=100, validation_data=valid,steps_per_epoch=3000,validation_steps=1000)
The model expects an input with rank 3, but is passed an input with rank 2.
The first layer is a SimpleRNN, which expects data in the form (batch_size, timesteps, features), i.e. rank 3. The shape of the data passed by the user is (5, 6), i.e. rank 2.
Passing rank-3 data (including the batch dimension) will fix the issue.
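For instance, a sketch of that fix on the tf.data pipeline from the question; it addresses only the input rank, and the batch size of 10 and the step counts are illustrative. Batching the windowed dataset adds the missing batch dimension, so each element becomes (batch, timesteps, features) = (10, 5, 6):
train_batched = train.batch(10)  # x: (10, 5, 6), y: (10, 1, 6)
valid_batched = valid.batch(10)
# Both pipelines call repeat(), so explicit step counts are still required.
history = model.fit(train_batched, epochs=100, steps_per_epoch=300,
                    validation_data=valid_batched, validation_steps=100)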
You need to reshape your data or train variable into 3 dimensions, i.e. [batch, timesteps, features].
The model is expecting 3-dimensional input and your data is 2-dimensional.
You can reshape your data like this:
data = tf.reshape(data, [-1,5,6])
And it should solve your issue.
I am using Keras to build an LSTM model.
def LSTM_model_1(X_train, Y_train, Dropout, hidden_units):
    model = Sequential()
    model.add(Masking(mask_value=666, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(LSTM(hidden_units, activation='tanh', return_sequences=True, dropout=Dropout))
    model.add(LSTM(hidden_units, return_sequences=True))
    model.add(LSTM(hidden_units, return_sequences=True))
    model.add(Dense(Y_train.shape[-1], activation='softmax'))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['categorical_accuracy'])
    return model
The input data is of shape
X_train.shape=(77,100,34); Y_Train.shape=(77,100,7)
The Y data is one-hot encoded. Both input tensors are zero-padded for the last list entry. The padded values in Y_train are 0, so no state gets a value of 1 at the padded end. dropout=0 and hidden_units=2, which seems unrelated to the following error.
Unfortunately, I get the following error, which I think is connected with the shape of Y, but I cannot put my finger on it. The error happens when the first LSTM layer is initialized/added.
ValueError: Initializer for variable lstm_58/kernel/ is from inside a control-flow construct, such as a loop or conditional. When creating a variable inside a loop or conditional, use a lambda as the initializer.
If I follow the error, I notice that it comes down to this:
dtype: If set, initial_value will be converted to the given type. If None, either the datatype will be kept (if initial_value is a Tensor), or convert_to_tensor will decide.
"convert to tensor' creates an object which is then None and leads to the error. Apparently, the LSTM tries to convert the input into a tensor... But if I look at my input, it is already a tensor.
Does any of you have an idea what went wrong or how to use lambda as an initializer? Thanks
Edit: the stack trace:
File "C:\Users\310122653\Documents\GitHub\DNN\build_model.py", line
44, in LSTM_model_1
model.add(LSTM(hidden_units, activation='tanh', return_sequences=True, dropout=Dropout))
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\models.py",
line 492, in add
output_tensor = layer(self.outputs[0])
File
"C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\recurrent.py",
line 499, in call
return super(RNN, self).call(inputs, **kwargs)
File
"C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py",
line 592, in call
self.build(input_shapes[0])
File
"C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\recurrent.py",
line 461, in build
self.cell.build(step_input_shape)
File
"C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\recurrent.py",
line 1838, in build
constraint=self.kernel_constraint)
File
"C:\ProgramData\Anaconda3\lib\site-packages\keras\legacy\interfaces.py",
line 91, in wrapper
return func(*args, **kwargs)
File
"C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py",
line 416, in add_weight
constraint=constraint)
File
"C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py",
line 395, in variable
v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
File
"C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variables.py",
line 235, in init
constraint=constraint)
File
"C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variables.py",
line 356, in _init_from_args
"initializer." % name)
The solution, in this case, was to restart the kernel. Thanks to Daniel Möller.
I want to use Keras to build a CNN-LSTM network. However, I have trouble finding the right shape for the first layer's input_shape parameter.
My train_data is a ndarray of the shape (1433, 32, 32); 1433 pictures of size 32x32.
As found in this example, I tried using input_shape=train_data.shape[1:], which results in the same error as input_shape=train_data.shape:
IndexError: list index out of range
The relevant code is:
train_data, train_labels = get_training_data()
# train_data = train_data.reshape(train_data.shape + (1,))
model = Sequential()
model.add(TimeDistributed(Conv2D(
    CONV_FILTER_SIZE[0],
    CONV_KERNEL_SIZE,
    activation="relu",
    padding="same"),
    input_shape=train_data.shape[1:]))
All the results I found for this error were produced under different circumstances, not through input_shape. So how do I have to shape my input? Or do I have to look for the error somewhere completely different?
Update:
Complete error:
Traceback (most recent call last):
File "trajecgen_keras.py", line 131, in <module>
tf.app.run()
File "/home/.../lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 124, in run
_sys.exit(main(argv))
File "trajecgen_keras.py", line 85, in main
input_shape=train_data.shape))
File "/home/.../lib/python3.5/site-packages/keras/models.py", line 467, in add
layer(x)
File "/home/.../lib/python3.5/site-packages/keras/engine/topology.py", line 619, in __call__
output = self.call(inputs, **kwargs)
File "/home/.../lib/python3.5/site-packages/keras/layers/wrappers.py", line 211, in call
y = self.layer.call(inputs, **kwargs)
File "/home/.../lib/python3.5/site-packages/keras/layers/convolutional.py", line 168, in call
dilation_rate=self.dilation_rate)
File "/home/.../lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 3335, in conv2d
data_format=tf_data_format)
File "/home/.../lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 753, in convolution
name=name, data_format=data_format)
File "/home/.../lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 799, in __init__
input_channels_dim = input_shape[num_spatial_dims + 1]
File "/home/../lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 521, in __getitem__
return self._dims[key]
IndexError: list index out of range
When using a TimeDistributed layer combined with a Conv2D layer, it seems that input_shape requires a tuple of length 4 at least: input_shape = (number_of_timesteps, height, width, number_of_channels).
You could try to modify your code like this for example:
model = Sequential()
model.add(TimeDistributed(Conv2D(
    CONV_FILTER_SIZE[0],
    CONV_KERNEL_SIZE,
    activation="relu",
    padding="same"),
    input_shape=(None, 32, 32, 1)))
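As a follow-up on the commented-out reshape in the question, a hedged sketch of matching the data to that input_shape (treating each picture as a sequence of length one, which is purely illustrative):
# (1433, 32, 32) -> (1433, 1, 32, 32, 1): batch, timesteps, height, width, channels
train_data = train_data.reshape(-1, 1, 32, 32, 1)
print(train_data.shape)  # (1433, 1, 32, 32, 1)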
I'm trying to get into machine learning and I've decided on using tflearn for a start.
I used tflearn's quickstart guide to get the basics and tried using that neural network for a task I've set myself:
Predicting the age of abalones from their dimensions. For this I downloaded the corresponding dataset as a .csv from the UCI repository. The table is in this format:
SEX|LENGTH|DIAMETER|HEIGHT|WHOLE WEIGHT|SHUCKED WEIGHT|VISCERA WEIGHT|SHELL WEIGHT|RINGS
Since the age is the same as the number of rings, I imported the .csv like this:
data, labels = load_csv("abalone.csv", categorical_labels=False, has_header=False)
The task is to predict the number of rings based on the data, so I set up my input layer like this:
net = tflearn.input_data(shape=[None, 8])
Added four hidden layers with the default linear activation function:
net = tflearn.fully_connected(net, 320)
net = tflearn.fully_connected(net, 200)
net = tflearn.fully_connected(net, 200)
net = tflearn.fully_connected(net, 320)
And an output layer with one node since there is only one result (no. of rings):
net = tflearn.fully_connected(net, 1, activation="sigmoid")
net = tflearn.regression(net)
Now I initialize the model, but during training the following error occurs:
model = tflearn.DNN(net)
model.fit(data, labels, n_epoch=1000, show_metric=True, batch_size=1600)
The entire exception:
Traceback (most recent call last):
File "D:\OneDrive\tensornet.py", line 34, in <module>
model.fit(data, labels, n_epoch=1000, show_metric=True, batch_size=1600)
File "C:\Python3\lib\site-packages\tflearn\models\dnn.py", line 215, in fit
callbacks=callbacks)
File "C:\Python3\lib\site-packages\tflearn\helpers\trainer.py", line 333, in fit
show_metric)
File "C:\Python3\lib\site-packages\tflearn\helpers\trainer.py", line 774, in _train
feed_batch)
File "C:\Python3\lib\site-packages\tensorflow\python\client\session.py", line 767, in run
run_metadata_ptr)
File "C:\Python3\lib\site-packages\tensorflow\python\client\session.py", line 944, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1600,) for Tensor 'TargetsData/Y:0', which has shape '(?, 1)'
From what I understand, the exception occurs when trying to fit my labels (which are a 1600x1 Tensor) with my output layer. But I don't know how to fix this.
You need to add another axis to the labels so they'll have a (1600,1) shape instead of (1600,)
The simplest way to do it is like this:
labels = labels[:, np.newaxis]
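A short usage sketch (the zeros array is only a stand-in for the labels returned by load_csv):
import numpy as np
labels = np.zeros(1600)  # stand-in: shape (1600,)
labels = labels[:, np.newaxis]  # shape (1600, 1), matching TargetsData/Y:0
print(labels.shape)  # (1600, 1)
An equivalent spelling is labels = np.reshape(labels, (-1, 1)).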