Inserting data into regression network in keras - python

I am currently struggling to understand how I should train my regression network using Keras. I am not sure how I should pass my input data to the network.
Both the input data and the output data are stored as a list of numpy arrays.
Each input numpy array is a matrix with 400 rows and x columns.
Each output numpy array is a matrix with x rows and 13 columns.
So the input dimension is 400 and the output dimension is 13.
But how do I pass each of these sets within the list to the training?
# Multilayer Perceptron
model = Sequential() # Feedforward
model.add(Dense(3, input_dim=400))
model.add(Activation('tanh'))
model.add(Dense(1))
model.compile('sgd', 'mse')
Just passing the data in as-is gives me this error message:
Traceback (most recent call last):
File "tensorflow_datapreprocess_mfcc_extraction_rnn.py", line 167, in <module>
model.fit(train_set_data,train_set_output,verbose=1)
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 620, in fit
sample_weight=sample_weight)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1034, in fit
batch_size=batch_size)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 961, in _standardize_user_data
exception_prefix='model input')
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 51, in standardize_input_data
'...')
Exception: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 270 arrays: [array([[ -1.52587891e-04, 3.05175781e-05, -1.52587891e-04,
-5.18798828e-04, 3.05175781e-05, -3.96728516e-04,
1.52587891e-04, 3.35693359e-04, -9.15527344e-05,
3.3...
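Keras expects a single numpy array per model input, shaped (samples, features), rather than a list of differently sized arrays. A minimal sketch of one way to prepare the data, assuming each input array is (400, x) and each output array is (x, 13), so each column of an input matrix corresponds to one training frame (the variable names are taken from the traceback; the final Dense layer would also need 13 units to match the 13-dimensional targets):
import numpy as np
# transpose each (400, x) input to (x, 400), then stack everything frame-wise
X = np.concatenate([a.T for a in train_set_data], axis=0)   # shape (total_frames, 400)
Y = np.concatenate(train_set_output, axis=0)                # shape (total_frames, 13)
model.fit(X, Y, verbose=1)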

Related

Error while training CNN for text classification in keras "ValueError: Input 0 is incompatible with layer"

I am building a prediction model for sequence data using the Conv1D layer provided by Keras. This is how I did it:
input_layer = Input(shape=(500,))
layer = Conv1D(128,5,activation="relu")(input_layer)
layer = MaxPooling1D(pool_size=2)(layer)
layer = Flatten()(layer)
layer = Dense(128, activation='relu')(layer)
output_layer = Dense(10, activation='softmax')(layer)
classifier = Model(input_layer, output_layer)
classifier.summary()
classifier.compile(optimizer=optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
return classifier
However, I am facing the following error:
Traceback (most recent call last):
File "train.py", line 71, in <module>
classifier = create_cnn_model()
File "train.py", line 60, in create_cnn_model
layer = Conv1D(128,5, activation="relu")(input_layer)
File "C:\Python368\lib\site-packages\keras\backend\tensorflow_backend.py", line 75, in symbolic_fn
_wrapper
return func(*args, **kwargs)
File "C:\Python368\lib\site-packages\keras\engine\base_layer.py", line 446, in __call__
self.assert_input_compatibility(inputs)
File "C:\Python368\lib\site-packages\keras\engine\base_layer.py", line 342, in assert_input_compat
ibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=2
I think the input_shape in the first layer is not set up right. How do I set it up?
Right, conv layers need 3-dimensional input.
I am assuming you have a univariate time series with 500 samples.
You need to write a function to split the time series into steps.
For example:
x y
[t-n,...,t-2,t-1] t
So you are basically using the last n values to predict the next value in your series.
Then your input shape will be [len(x), n, 1]
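A minimal sketch of such a split function, assuming a 1-D numpy series and a window length n of your choosing:
import numpy as np

def make_windows(series, n):
    # use the previous n values to predict the next one
    x, y = [], []
    for i in range(n, len(series)):
        x.append(series[i - n:i])
        y.append(series[i])
    # reshape to (samples, timesteps, features) so Conv1D receives rank-3 input
    return np.array(x).reshape(-1, n, 1), np.array(y)

The first layer would then use Input(shape=(n, 1)) instead of Input(shape=(500,)).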

Tensor Shape Not Recognized in Tensorflow 2.0

I can't get my RNN classifier to work with my input data. I am using the TF 2.0 pre-release with a sliding window.
I am trying to build an RNN that is fed 5 timesteps with 6 features each and produces the 6th timestep as the target. When I run my code I get an error saying that the input is (None, 6), whereas when I print out my training data it clearly says the shape is (5, 6). I am very confused as to how to fix this.
Error:
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 734, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 224, in fit
distribution_strategy=strategy)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 547, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_inputs
steps=steps)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2384, in _standardize_user_data
all_inputs, y_input, dict_inputs = self._build_model_with_inputs(x, y)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2587, in _build_model_with_inputs
self._set_inputs(cast_inputs)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2674, in _set_inputs
outputs = self(inputs, **kwargs)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 772, in __call__
self.name)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\input_spec.py", line 177, in assert_input_compatibility
str(x.shape.as_list()))
ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 6]
Print readout:
********************tf.Tensor(
[[0.07812838 0.08639083 0.07809999 0.08601701 0.6974719 0.6974719 ]
[0.06794664 0.06995372 0.06220453 0.06934043 0.70064694 0.70064694]
[0.08323035 0.08651368 0.07691107 0.08147305 0.69750804 0.69750804]
[0.09781507 0.10009027 0.08847085 0.08919457 0.6944895 0.6944895 ]
[0.12235662 0.12269666 0.11316498 0.11738694 0.6868 0.6868 ]], shape=(5, 6), dtype=float32)********************tf.Tensor([[0.08238748 0.09074993 0.07986343 0.09017278 0.6965872 0.6965872 ]], shape=(1, 6), dtype=float32)********************
The data comes in as an array of shape [737, 6]:
train=tf.data.Dataset.from_tensor_slices(features).window(6,1,1,drop_remainder=True).flat_map(lambda x: x.batch(6)).map(lambda window: (window[:-1],window[-1:]))
valid=train.take(200).shuffle(1000).repeat()
train=train.shuffle(3000).repeat()
for x,y in valid:
    print('*'*20+str(x)+"*"*20+str(y)+"*"*20)
print(train)
model = tf.keras.Sequential()
model.add(layers.SimpleRNN(128,batch_size=10))
model.add(layers.Dense(124,kernel_initializer='he_uniform',activation='softmax'))
model.compile(optimizer='adagrad', batch_size=10,step_size=.01, loss=tf.keras.losses.MeanAbsoluteError(), metrics=['accuracy'])
history = model.fit(train,epochs=100, validation_data=valid,steps_per_epoch=3000,validation_steps=1000)
The model expects an input with rank 3, but is passed an input with rank 2.
The first layer is a SimpleRNN, which expects data in the form (batch_size, timesteps, features), i.e. rank 3. The shape of the data passed by the user is (5, 6), i.e. rank 2.
Passing rank-3 data (including the batch dimension) will fix the issue.
You need to reshape your data or train variable into 3 dimensions, i.e. [batch, timesteps, features].
The model is expecting 3-dimensional input and your data is 2-dimensional.
You can reshape your data like this :
data = tf.reshape(data, [-1,5,6])
And it should solve your issue.
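Since the data here comes from a tf.data pipeline rather than a plain array, an equivalent fix is to batch the dataset so each element already carries the batch dimension. A minimal sketch, applied after the existing shuffle/repeat calls (the batch size of 10 is taken from the question's compile call and is an assumption):
train = train.batch(10)   # elements become (10, 5, 6), i.e. rank 3
valid = valid.batch(10)
Note this only addresses the rank error; the Dense output layer and the (1, 6) targets would still need to agree for training to make sense.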

How to fix "TypeError: The added layer must be an instance of class Layer." in Python

I have written a little hello-world kind of neural network. The problem is that I constantly get this error:
"Traceback (most recent call last):
File "C:/Users/Pigeonnn/PycharmProjects/Noss/Network.py", line 21, in <module>
model.add(keras.layers.InputLayer(input_shape))
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\training\checkpointable\base.py", line 442, in _method_wrapper
method(self, *args, **kwargs)
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\sequential.py", line 145, in add
'Found: ' + str(layer))
TypeError: The added layer must be an instance of class Layer. Found: <keras.engine.input_layer.InputLayer object at 0x0000015EDB394DA0>"
Here's my code:
import keras
import numpy as np
from sklearn.model_selection import train_test_split
import pandas as pd
from sklearn.utils import shuffle
import tensorflow as tf
seed = 10
np.random.seed(seed)
dataset = np.loadtxt("dataset2.csv",delimiter=',',skiprows=1)
dataset = shuffle(dataset)
X = dataset[:,2:]
Y = dataset[:,1]
(X_train,X_test,Y_train,Y_test) = train_test_split(X, Y, test_size=0.15, random_state=seed)
input_shape = (13,)
model = tf.keras.models.Sequential()
model.add(keras.layers.InputLayer(input_shape))
model.add(keras.layers.core.Dense(128, activation='relu'))
model.add(keras.layers.core.Dense(128, activation='relu'))
model.add(keras.layers.core.Dense(4, activation='sigmoid'))
model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])
model.fit(X_train,Y_train,epochs=20)
EDIT: after some tweaks (changing the loss function, removing the tf model), I get another error, this time it's:
Traceback (most recent call last):
File "C:/Users/Pigeonnn/PycharmProjects/Noss/Network.py", line 28, in
model.fit(X_train,Y_train,epochs=20)
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 952, in fit
batch_size=batch_size)
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 789, in _standardize_user_data
exception_prefix='target')
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training_utils.py", line 138, in standardize_input_data
str(data_shape))
ValueError: Error when checking target: expected dense_3 to have shape (4,) but got array with shape (1,)
You are using both tf.keras and keras modules, which are not compatible. Use only one and be consistent.
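For example, a minimal sketch of the same model built consistently with tf.keras only (layer sizes, activations, loss and optimizer copied from the question):
import tensorflow as tf
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(13,)))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(4, activation='sigmoid'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=20)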

Minimal DNNRegressor example with TensorFlow

I'm new to Python and TensorFlow and I'm trying to build a simple working example with fake data in TensorFlow. My goal is to use the DNNRegressor estimator to predict a real value from a multidimensional input. This is the code I wrote:
import pandas as pd
import tensorflow as tf
import numpy as np
# Amount of train samples
m_train = 1000
# Amount of test samples
m_test = 100
# Dimensions for each sample
n = 10
def from_dataset(ds):
    return lambda: ds.make_one_shot_iterator().get_next()
# Create random samples with numpy
train_data = (np.random.sample((m_train,n)), np.random.sample((m_train,1)))
test_data = (np.random.sample((m_test,n)), np.random.sample((m_test,1)))
# Create two datasets, one for training and the other for testing
train_dataset = tf.data.Dataset.from_tensor_slices(train_data)
test_dataset = tf.data.Dataset.from_tensor_slices(test_data)
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=n)]
model = tf.estimator.DNNRegressor(hidden_units=[20, 20], feature_columns=feature_columns)
# Train the model
model.train(input_fn=from_dataset(train_dataset), steps=1000)
# Evaluate the unseen samples
eval_result = model.evaluate(input_fn=from_dataset(test_dataset))
And this is the error I get:
$ python fake.py
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp1j5irF
Traceback (most recent call last):
File "fake.py", line 28, in <module>
model.train(input_fn=from_dataset(train_dataset), steps=1000)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 314, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 743, in _train_model
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 725, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/canned/dnn.py", line 448, in _model_fn
config=config)
File "/usr/lib/python2.7/site-packages/tensorflow/python/estimator/canned/dnn.py", line 153, in _dnn_model_fn
'Given type: {}'.format(type(features)))
ValueError: features should be a dictionary of `Tensor`s. Given type: <class 'tensorflow.python.framework.ops.Tensor'>
I suppose I have to use a dictionary of Tensors, but I'm just beginning with Python and I don't know how to do it.
You need to return the iterator returned by get_next(), rather than a lambda function that returns the iterator. Check out https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/examples/get_started/regression/dnn_regression.py
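The error message itself ("features should be a dictionary of Tensors") can also be addressed by making the input function yield the features as a dict keyed by a named feature column. A minimal sketch under that assumption, swapping the contrib column for tf.feature_column.numeric_column (the key "x" and the batch size are made up for illustration):
def make_input_fn(data, batch_size=32):
    features, labels = data
    def input_fn():
        # wrap the feature matrix in a dict keyed by the feature column name
        ds = tf.data.Dataset.from_tensor_slices(({"x": features}, labels))
        return ds.batch(batch_size).make_one_shot_iterator().get_next()
    return input_fn

feature_columns = [tf.feature_column.numeric_column("x", shape=[n])]
model = tf.estimator.DNNRegressor(hidden_units=[20, 20], feature_columns=feature_columns)
model.train(input_fn=make_input_fn(train_data), steps=1000)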

Shaping data for linear regression with TFlearn

I'm trying to expand the tflearn example for linear regression by increasing the number of columns to 21.
from trafficdata import X,Y
import tflearn
print(X.shape) #(1054, 21)
print(Y.shape) #(1054,)
# Linear Regression graph
input_ = tflearn.input_data(shape=[None,21])
linear = tflearn.single_unit(input_)
regression = tflearn.regression(linear, optimizer='sgd', loss='mean_square',
metric='R2', learning_rate=0.01)
m = tflearn.DNN(regression)
m.fit(X, Y, n_epoch=1000, show_metric=True, snapshot_epoch=False)
print("\nRegression result:")
print("Y = " + str(m.get_weights(linear.W)) +
"*X + " + str(m.get_weights(linear.b)))
However, tflearn complains:
Traceback (most recent call last):
File "linearregression.py", line 16, in <module>
m.fit(X, Y, n_epoch=1000, show_metric=True, snapshot_epoch=False)
File "/usr/local/lib/python3.5/dist-packages/tflearn/models/dnn.py", line 216, in fit
callbacks=callbacks)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 339, in fit
show_metric)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/trainer.py", line 818, in _train
feed_batch)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (64,) for Tensor 'TargetsData/Y:0', which has shape '(21,)'
I found the shape (64, ) comes from the default batch size of tflearn.regression().
Do I need to transform the labels (Y)? In what way?
Thanks!
I tried to do the same. I made these changes to get it to work:
# linear = tflearn.single_unit(input_)
linear = tflearn.fully_connected(input_, 1, activation='linear')
My guess is that with more than one feature you cannot use tflearn.single_unit(). You can add additional fully_connected layers, but the last one must have only 1 neuron because Y.shape is (?, 1).
You have 21 features. Therefore, you cannot use linear = tflearn.single_unit(input_)
Instead try this: linear = tflearn.fully_connected(input_, 21, activation='linear')
The error you get is because your labels, i.e. Y, have a shape of (1054,).
You have to preprocess them first.
Try using the code given below before the # Linear Regression graph comment:
Y = np.expand_dims(Y,-1)
Now, before the regression = tflearn.regression(linear, optimizer='sgd', loss='mean_square', metric='R2', learning_rate=0.01) line, add the code below:
linear = tflearn.fully_connected(linear, 1, activation='linear')
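Combining the single-output fully_connected layer from the first answer with the label reshape from the second, a minimal sketch of the adjusted graph (the numpy import is an addition):
import numpy as np
Y = np.expand_dims(Y, -1)   # (1054,) -> (1054, 1) so targets match the single output unit
input_ = tflearn.input_data(shape=[None, 21])
linear = tflearn.fully_connected(input_, 1, activation='linear')   # replaces tflearn.single_unit
regression = tflearn.regression(linear, optimizer='sgd', loss='mean_square',
                                metric='R2', learning_rate=0.01)
m = tflearn.DNN(regression)
m.fit(X, Y, n_epoch=1000, show_metric=True, snapshot_epoch=False)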
