I have the following code:
max_features = 5000
maxlen = 140
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, activation = 'sigmoid', inner_activation = 'hard_sigmoid', return_sequences = False))
model.add(Dense(input_dim = 128, output_dim = 2, activation = 'softmax'))
optimizer = Adam(lr = 0.001, beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-8)
model.compile(loss = 'categorical_crossentropy', optimizer = optimizer)
model.fit(x_train, y_train, batch_size = 64, nb_epoch = 10, verbose = 2)
y_test_pred = model.predict_classes(x_test)
But every time I run it, I get an error at the line
model.compile(loss = 'categorical_crossentropy', optimizer = optimizer)
which states:
AssertionError: The number of inputs given to the inner function of scan does not match the number of inputs given to scan.
Does anyone know what that means?
Answer by OP:
I have fixed this problem; it turned out to be caused by an outdated Theano version. So if you are experiencing this problem, update your Theano module!
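For reference, a minimal sketch of how to check which Theano version is installed (the exact version that fixes the issue depends on your setup, and the upgrade command assumes a pip-managed environment):

import theano

# Print the currently installed Theano version
print(theano.__version__)

# To upgrade, run in a terminal:
#   pip install --upgrade theano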
Related
It gives another error:
The first argument to Layer.call must always be passed.
I cannot solve the problem. input_dim cannot be set to a constant, because PCA and SelectKBest reduce the number of input features. I would also be very grateful for help with getting the output of the results from the pipeline.
Here is a link to the data: https://1drv.ms/u/s!AlHgQsqCKEIPiIxzdyWE0BfBHNocTQ?e=cxuSuo
def modelReg(inpt, opt = 'adam', kInitializer = 'glorot_uniform', dropout = 0.05):
    model = Sequential()
    model.add(Dense(1024, activation='relu', input_dim = inpt, kernel_initializer=kInitializer))
    model.add(Dense(1024, activation='relu', kernel_initializer=kInitializer))
    model.add(Dense(512, activation='relu', kernel_initializer=kInitializer))
    model.add(layers.Dropout(dropout))
    model.add(Dense(1, activation='sigmoid', kernel_initializer=kInitializer))
    model.compile(loss='mse', optimizer=opt, metrics=["mse", "mae"])
    return model
features = []
features.append(('pca', PCA(n_components=10)))
features.append(('select_best', SelectKBest(k=10)))
feature_union = FeatureUnion(features)
regressor = KerasRegressor(build_fn = modelReg(inpt), epochs = 3, batch_size = 500, verbose = 1)
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('feature_union', feature_union))
estimators.append(('regressor', regressor))
model = Pipeline(estimators)
model.fit(allData.drop(['VancouverH'], axis = 1), allData['VancouverH'])
To pass arguments to the model-building function used by KerasRegressor, supply them as keyword arguments to KerasRegressor itself, and pass the function without calling it:
kearsEstimator = ('kR', KerasRegressor(createModel, inpt = trainDataX.shape[1],
epochs = 5, batch_size = 180, verbose = 1))
like above, not like this:
kearsEstimator = ('kR', KerasRegressor(createModel(inpt),
epochs = 5, batch_size = 180, verbose = 1))
Then pass the pipeline to GridSearchCV. Note that the parameter names in the grid must be written with the step-name prefix:
estimators = []
estimators.append((kearsEstimator))
param_grid = {
'kR__optimizer':['adam'] #'RMSprop', 'Adam', 'Adamax', 'sgd'
}
grid = GridSearchCV(Pipeline(estimators), param_grid, cv = 5)
grid.fit(trainDataX, trainDataY)
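Putting it together, here is a minimal sketch of the corrected pipeline from the question (assumptions: modelReg and allData are defined as above, and inpt = 20 because PCA(n_components=10) plus SelectKBest(k=10) yield 20 features after the FeatureUnion):

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from keras.wrappers.scikit_learn import KerasRegressor

feature_union = FeatureUnion([('pca', PCA(n_components=10)),
                              ('select_best', SelectKBest(k=10))])

# Pass modelReg itself (uncalled); inpt is forwarded to it by KerasRegressor
regressor = KerasRegressor(build_fn = modelReg, inpt = 20,
                           epochs = 3, batch_size = 500, verbose = 1)

model = Pipeline([('standardize', StandardScaler()),
                  ('feature_union', feature_union),
                  ('regressor', regressor)])

model.fit(allData.drop(['VancouverH'], axis = 1), allData['VancouverH'])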
Good morning, I'm a Python beginner and I'm trying to build my first neural network. Is there a way to plot the evolution of R2 vs epochs? I evaluate R2 in the following way: r2_score(y_test_pred, y_test). I have built a fully connected neural network in this way:
optimizer = tf.keras.optimizers.Adam(lr=0.001)
model = Sequential()
# ,kernel_regularizer=l2(c), bias_regularizer=l2(c)
model.add(Dense(100, input_shape = (X_train.shape[1],), activation = 'relu',kernel_initializer='glorot_uniform'))
model.add(Dropout(0.2))
model.add(Dense(100, activation = 'relu',kernel_initializer='glorot_uniform'))
model.add(Dropout(0.2))
model.add(Dense(100, activation = 'relu',kernel_initializer='glorot_uniform'))
model.add(Dense(1,activation = 'linear',kernel_initializer='glorot_uniform'))
model.compile(loss = 'mse', optimizer = optimizer, metrics = ['mse'])
history = model.fit(X_train, y_train, epochs = 100,
validation_split = 0.1, shuffle=False, batch_size=250
)
history_dict = history.history
The dataset is composed of 18 features and 1 label; it is a regression task.
To track R2 during training, you just have to add it to your compile line:
model.compile(loss = 'mse', optimizer = optimizer, metrics = ['mse', r2_score])
For that to work, you first need to define an R2 metric that Keras can understand:
from tensorflow.keras import backend as K
def r2_score(y_true, y_pred):
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return (1 - SS_res / (SS_tot + K.epsilon()))
The code is taken from Kaggle.
Sorry, I forgot to add the TensorBoard part.
If you want to see the evolution of the loss/metrics during training, you can use TensorBoard, as described in the docs:
from datetime import datetime

logdir = "logs/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
history = model.fit(X_train, y_train, epochs = 100,
validation_split = 0.1, shuffle=False, batch_size=250, callbacks=[tensorboard_callback])
Then launch TensorBoard using this line in a terminal:
tensorboard --logdir logs
You can then access TensorBoard in your browser by going to localhost:6006.
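Alternatively, once the custom r2_score metric is added to the compile call, the History object returned by model.fit already holds its value for every epoch, so you can plot it directly. A minimal sketch, assuming matplotlib is available and that Keras names the metric keys after the function ('r2_score' and 'val_r2_score'; the exact key names can vary between Keras versions):

import matplotlib.pyplot as plt

history_dict = history.history
epochs = range(1, len(history_dict['r2_score']) + 1)

# Plot training and validation R2 against the epoch number
plt.plot(epochs, history_dict['r2_score'], label='train R2')
plt.plot(epochs, history_dict['val_r2_score'], label='validation R2')
plt.xlabel('Epoch')
plt.ylabel('R2')
plt.legend()
plt.show()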
I'm pretty new to Keras and LSTMs. I've been trying to train my model on sequences to predict the future price of a stock with the code below, but the error above kept popping up.
I have tried changing the dtypes of both x_data and y_data with .astype(np.float16). However, every time I am returned the same TypeError stating that I have a float32 type.
If it helps, here are the shapes of my data:
xtrain.shape: (32, 24, 67), ytrain.shape: (32, 24, 1), xtest.shape: (38, 67), ytest.shape: (38, 1)
Does anyone have any idea what might be wrong? I've been stuck on this for a while. It would be great if someone could give me a hint.
y_data = y_data.to_numpy().astype(np.float32)
x_data = main_df.to_numpy().astype(np.float32)
num_x_signals = x_data.shape[1]
num_y_signals = y_data.shape[1]
# SPLIT TRAIN TEST DATA
ratio = 0.85
train_ratio = int(ratio * len(x_data))
x_train = x_data[0:train_ratio]
x_test = x_data[train_ratio:]
y_train = y_data[0:train_ratio]
y_test = y_data[train_ratio:]
# GENERATE RANDOM SEQUENCES
batch_size = 32
sequence_length = 24
EPOCHS = 50
def batch_generator(x_train, y_train, batch_size, sequence_length, num_x_signals, num_y_signals, num_train):
    while True:
        x_shape = (batch_size, sequence_length, num_x_signals)
        x_batch = np.zeros(shape = x_shape).astype(np.float32)
        y_shape = (batch_size, sequence_length, num_y_signals)
        y_batch = np.zeros(shape = y_shape).astype(np.float32)
        for i in range(batch_size):
            idx = np.random.randint(num_train - sequence_length)
            x_batch[i] = x_train[idx:idx+sequence_length]
            y_batch[i] = y_train[idx:idx+sequence_length]
        yield (x_batch, y_batch)
generator = batch_generator(x_train, y_train, batch_size, sequence_length, num_x_signals, num_y_signals, train_ratio)
xtrain, ytrain = next(generator)
xtest, ytest = (np.expand_dims(x_test, axis=0),
np.expand_dims(y_test, axis=0))
# LSTM MODEL
model = Sequential()
model.add(LSTM(32, input_shape = (None, num_x_signals,), return_sequences = True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(LSTM(128, return_sequences = True))
model.add(Dropout(0.15))
model.add(BatchNormalization())
model.add(LSTM(128))
model.add(Dropout(0.18))
model.add(BatchNormalization())
model.add(Dense(32, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation = 'softmax'))
opt = tf.keras.optimizers.Adam(lr = 0.001, decay = 1e-6)
model.compile(
loss = 'sparse_categorical_crossentropy',
optimizer = opt,
metrics = ['accuracy']
)
name_of_file = f"{to_predict}-{sequence_length}-{future_predict}-{int(time.time())}"
tensorboard = TensorBoard(log_dir = "logs/{}".format(name_of_file))
filepath = "LSTM_Final-{epoch:02d}-{val_acc:.3f}"
checkpoint = ModelCheckpoint("models/{}.model".format(filepath), monitor = 'val_acc', verbose = 1, save_best_only = True, mode = 'max') # saves only the best ones
history = model.fit(
xtrain, ytrain,
epochs = EPOCHS,
validation_data = (xtest, ytest),
callbacks = [tensorboard, checkpoint]
)
score = model.evaluate(xtest, ytest, verbose = 0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
model.save("models/{}".format(name_of_file))
I found this issue had to do with the loss function specified.
My code:
import tensorflow as tf
from tensorflow import keras
model = tf.keras.Sequential([
keras.layers.Dense(64, activation=tf.nn.relu, input_shape=[3]),
keras.layers.Dense(64, activation=tf.nn.relu),
keras.layers.Dense(1)
])
# I changed the loss function from 'sparse_categorical_crossentropy' to 'mean_squared_error'
model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])
X = train_dataset.to_numpy()
y = train_labels.to_numpy()
model.fit(X,y, epochs=5)
X shape was (920,3) and dtype = float64
y shape was (920,1) and dtype = float64
My problem was the loss function passed to model.compile: I had taken 'sparse_categorical_crossentropy' from an image recognition example, while what I was building here was a neural network for house price prediction.
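Applying the same idea to the stock-price model in the original question is left as a sketch (assuming the target is a single continuous price value, which makes it a regression problem rather than a classification one): replace the 1-unit softmax output and the classification loss with a linear output and a regression loss.

model.add(Dense(1, activation = 'linear'))  # regression output instead of a 1-unit softmax

model.compile(
    loss = 'mse',          # regression loss instead of sparse_categorical_crossentropy
    optimizer = opt,
    metrics = ['mae']
)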
I am new to Keras and neural networks. I am trying to tune the hyperparameters of a simple neural network using GridSearchCV from scikit-learn with Keras in Python. Below is example code for reference.
def base_model(input_layer_nodes = 150, optimizer = 'adam', kernel_initializer = 'normal', dropout_rate = 0.2):
    model = Sequential()
    model.add(Dense(units = input_layer_nodes, input_dim = 107, kernel_initializer = kernel_initializer, activation='relu'))
    Dropout(dropout_rate)
    model.add(Dense(units = 1, kernel_initializer = kernel_initializer, activation='sigmoid'))
    # Compile model
    model.compile(loss = 'binary_crossentropy', optimizer = optimizer, metrics = ['accuracy'])
    return model
# Defining parameters for performing GridSearch
# optimizer = ['sgd', 'rmsprop', 'adam']
# dropout_rate = [0.1, 0.2, 0.3, 0.4, 0.5]
# input_layer_nodes = [50, 107, 150, 200]
kernel_initializer = ['uniform', 'normal']
param_grid = dict(kernel_initializer = kernel_initializer)
model = KerasClassifier(build_fn = base_model, epochs = 10, batch_size = 128, verbose = 2)
grid = GridSearchCV(estimator = model, param_grid=param_grid, n_jobs = 1, cv = 5)
grid.fit(X_train, y_train)
# View hyperparameters of best neural network
print("\nBest Training Parameters: ", grid.best_params_)
print("Best Training Accuracy: ", grid.best_score_)
When I execute the above code, I get the below error.
ValueError: ('Some keys in session_kwargs are not supported at this time: %s', dict_keys(['kernel_initializer']))
I am able to tune some of the other parameters of the network, like dropout_rate, optimizer, and epochs. If the same code works for the other parameters, why is the kernel_initializer part not working? I am using Keras 2.2.2, TensorFlow 1.9.0-gpu, and Python 3.6.6. My OS is Windows 10 x64. Any help on this would be appreciated.
A neural network trained on the iris dataset using [4, 4] hidden layers, created separately in TensorFlow and Keras, gives different results.
While the TensorFlow model gives 96.6% accuracy on the test set, the Keras model gives only around 50%. The various hyperparameters like learning rate, optimizer, and mini-batch size were the same in both cases.
Keras model
model = Sequential()
model.add(Dense(units = 4, activation = 'relu', input_dim = 4))
model.add(Dropout(0.25))
model.add(Dense(units = 4, activation = 'relu'))
model.add(Dropout(0.25))
model.add(Dense(units = 3, activation = 'softmax'))
adam = Adam(epsilon = 10**(-6), lr = 0.01)
model.compile(optimizer = 'adagrad', loss = 'categorical_crossentropy', metrics = ['accuracy'])
one_hot_labels = keras.utils.to_categorical(y_train, num_classes = 3)
model.fit(X_train, one_hot_labels, epochs = 50, batch_size = 40)
Tensorflow model
feature_columns = [tf.feature_column.numeric_column(key = name,
shape = (1),
dtype = tf.float32) for name in list(X_train.columns)]
classifier = tf.estimator.DNNClassifier(hidden_units = [4, 4],
feature_columns = feature_columns,
n_classes = 3,
dropout = 0.25,
model_dir = './DNN_model')
train_input_fn = tf.estimator.inputs.pandas_input_fn(x = X_train,
y = y_train,
batch_size = 40,
num_epochs = 50,
shuffle = False)
classifier.train(input_fn = train_input_fn, steps = None)
For the Keras model, I did try changing the learning rate, increasing the number of epochs, using different optimisers, etc., but the accuracy remained poor. Clearly, the two models are doing different things, but on the surface they seem identical to me in all the key aspects.
Any help is appreciated.
They have the same architecture, and that's all.
The difference in performance is coming from one or more of these factors:
You use Dropout, so the two networks start off behaving differently on every run (check how Dropout works);
Weight initialization: which method are you using in Keras, and which in TensorFlow?
Check all parameters of the optimizer. A sketch for aligning these settings follows below.
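For a fairer comparison, one option (a sketch of my own, not the original poster's code) is to remove these sources of difference on the Keras side: drop the Dropout layers (or keep identical dropout in both frameworks), pin the initializer explicitly, and actually compile with the Adam instance that the question defines, since the Keras code compiles with 'adagrad' even though an Adam optimizer is created:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
import keras

model = Sequential()
# glorot_uniform is Keras' default Dense initializer; pinning it makes the choice explicit
model.add(Dense(units = 4, activation = 'relu', input_dim = 4, kernel_initializer = 'glorot_uniform'))
model.add(Dense(units = 4, activation = 'relu', kernel_initializer = 'glorot_uniform'))
model.add(Dense(units = 3, activation = 'softmax', kernel_initializer = 'glorot_uniform'))

# Use the Adam instance that was defined, with the same settings as in the question
adam = Adam(lr = 0.01, epsilon = 1e-6)
model.compile(optimizer = adam, loss = 'categorical_crossentropy', metrics = ['accuracy'])

one_hot_labels = keras.utils.to_categorical(y_train, num_classes = 3)
model.fit(X_train, one_hot_labels, epochs = 50, batch_size = 40)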