Simple Linear Regression using Keras - python

I have been trying to implement a simple linear regression model using neural networks in Keras, in hopes of understanding how to work with the Keras library. Unfortunately, I end up with a very bad model. Here is the implementation:
from pylab import *
from keras.models import Sequential
from keras.layers import Dense

# Generate dummy data
data = linspace(1,2,100).reshape(-1,1)
y = data*5

# Define the model
def baseline_model():
    model = Sequential()
    model.add(Dense(1, activation = 'linear', input_dim = 1))
    model.compile(optimizer = 'rmsprop', loss = 'mean_squared_error', metrics = ['accuracy'])
    return model

# Use the model
regr = baseline_model()
regr.fit(data, y, epochs = 200, batch_size = 32)
plot(data, regr.predict(data), 'b', data, y, 'k.')
The generated plot is as follows:
Can somebody point out the flaw in the above definition of the model (which could ensure a better fit)?

You should increase the learning rate of the optimizer. The default learning rate of the RMSprop optimizer is 0.001, so the model takes a few hundred epochs to converge to a final solution (you have probably noticed yourself that the loss decreases slowly in the training log). To set the learning rate, import the optimizers module:
from keras import optimizers
# ...
model.compile(optimizer=optimizers.RMSprop(lr=0.1), loss='mean_squared_error', metrics=['mae'])
Either 0.01 or 0.1 should work fine. After this modification you may not need to train the model for 200 epochs; even 5, 10 or 20 epochs may be enough.
Also note that you are performing a regression task (i.e. predicting real numbers), whereas 'accuracy' is a metric for classification tasks (i.e. predicting discrete labels such as the category of an image). Therefore, as you can see above, I have replaced it with mae (mean absolute error), which is also much more interpretable than the loss value (mean squared error) used here.
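For reference, here is a minimal end-to-end sketch of the corrected script; it assumes a recent tf.keras where the RMSprop argument is named learning_rate (older standalone Keras used lr):
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, optimizers

# Dummy data: y = 5 * x on [1, 2]
data = np.linspace(1, 2, 100).reshape(-1, 1)
y = data * 5

# A single linear neuron is enough for a linear relationship
model = keras.Sequential([layers.Dense(1, activation='linear', input_shape=(1,))])
model.compile(optimizer=optimizers.RMSprop(learning_rate=0.1),
              loss='mean_squared_error',
              metrics=['mae'])

model.fit(data, y, epochs=20, batch_size=32, verbose=0)
print(model.get_weights())  # weight and bias should be close to 5 and 0; train longer if not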

The code below gives a better fit for your data. Take a look:
from pylab import *
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
%matplotlib inline

# Generate dummy data
data = linspace(1,2,100).reshape(-1,1)
y = data*5

# Define the model
def baseline_model():
    global num_neurons
    model = Sequential()
    model.add(Dense(num_neurons, activation = 'linear', input_dim = 1))
    model.add(Dense(1, activation = 'linear'))
    model.compile(optimizer = 'rmsprop', loss = 'mean_squared_error')
    return model

# Set num_neurons in the first dense layer (you may change it later)
num_neurons = 17

# Use the model
regr = baseline_model()
regr.fit(data, y, epochs = 200, verbose = 0)
plot(data, regr.predict(data), 'bo', data, y, 'k-')
The first plot, with num_neurons = 17, is already a good fit, but we can explore further. Click the links below to see the plots:
Plot for num_neurons = 12
Plot for num_neurons = 17
Plot for num_neurons = 19
Plot for num_neurons = 20
You can see that as we increase the number of neurons, the model fits the data more and more closely. I hope this helps.
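As a side note, here is a small sketch (not part of the original answer) that loops over several num_neurons values to reproduce the comparison above, assuming the same dummy data:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

data = np.linspace(1, 2, 100).reshape(-1, 1)
y = data * 5

for num_neurons in (12, 17, 19, 20):
    model = keras.Sequential([
        layers.Dense(num_neurons, activation='linear', input_shape=(1,)),
        layers.Dense(1, activation='linear'),
    ])
    model.compile(optimizer='rmsprop', loss='mean_squared_error')
    model.fit(data, y, epochs=200, verbose=0)
    plt.plot(data, model.predict(data, verbose=0), label=f'num_neurons={num_neurons}')

plt.plot(data, y, 'k.', label='data')
plt.legend()
plt.show()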

Interesting problem, so I plugged the dataset into a model-builder framework I wrote. The framework has two callbacks: an EarlyStopping callback on 'loss' and TensorBoard for analysis. I removed the 'metrics' argument from the model compile - it is unnecessary here, and should be 'mae' anyway.
The #mathnoob123 model as written, with learning rate (lr) = 0.01, reached loss=1.2623e-06 in 2968 epochs.
But the same model with the RMSprop optimizer replaced by Adam was most accurate, with loss=1.22e-11 in 2387 epochs.
The best compromise I found was the #mathnoob123 model with lr=0.01, which gives loss=1.5052e-4 in 230 epochs. This is better than #Kunam's model with 20 nodes in the first dense layer and lr=0.001: loss=3.1824e-4 in 226 epochs.
Now I'm going to look at randomizing the label dataset (y) and see what happens....
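The model-builder framework mentioned above is not shown, so here is only a rough sketch of that setup (Adam plus an EarlyStopping callback on 'loss') on the same dummy data:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, callbacks

data = np.linspace(1, 2, 100).reshape(-1, 1)
y = data * 5

model = keras.Sequential([layers.Dense(1, activation='linear', input_shape=(1,))])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01),
              loss='mean_squared_error',
              metrics=['mae'])

# Stop when the training loss plateaus instead of running a fixed epoch budget
early_stop = callbacks.EarlyStopping(monitor='loss', min_delta=1e-9, patience=50,
                                     restore_best_weights=True)
history = model.fit(data, y, epochs=3000, batch_size=32, verbose=0, callbacks=[early_stop])
print(len(history.history['loss']), history.history['loss'][-1])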

Related

How can I get constant value val accuracy and val loss in keras [duplicate]

This question already has answers here:
How to get reproducible results in keras
I'm a newbie in neural networks and I'm trying to do MLP text classification using Keras. Every time I run the code I get a different val loss and val accuracy: the val loss goes up and the val accuracy goes down each time I re-run it. The code I'm using is as follows:
from sklearn import model_selection
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_extraction.text import TfidfVectorizer
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping

#Split data training and testing (80:20)
Train_X2, Test_X2, Train_Y2, Test_Y2 = model_selection.train_test_split(dataset['review'],dataset['sentiment'],test_size=0.2, random_state=1)
Encoder = LabelEncoder()
Train_Y2 = Encoder.fit_transform(Train_Y2)
Test_Y2 = Encoder.fit_transform(Test_Y2)
Tfidf_vect2 = TfidfVectorizer(max_features=None)
Tfidf_vect2.fit(dataset['review'])
Train_X2_Tfidf = Tfidf_vect2.transform(Train_X2)
Test_X2_Tfidf = Tfidf_vect2.transform(Test_X2)
#Model
model = Sequential()
model.add(Dense(100, input_dim= 1148, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
opt = Adam (learning_rate=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
model.summary()
from keras.backend import clear_session
clear_session()
es = EarlyStopping(monitor="val_loss",mode='min',patience=10)
history = model.fit(arr_Train_X2_Tfidf, Train_Y2, epochs=100,verbose=1, validation_split=0.2,validation_data=(arr_Test_X2_Tfidf, Test_Y2), batch_size=32, callbacks =[es])
I tried using clear_session() so that the model does not start off with the computed weights from the previous training, but I still get different values. How can I fix this? Thank you.
How can I get constant value val accuracy and val loss in keras
I guess what you want is reproducible training runs. For that you will have to seed the random number generators. Getting reproducible results with a seed is tricky on the GPU because some GPU operations are non-deterministic. However, with the model architecture you are using it is not a problem.
make model not start off with the computed weights from the previous training.
That is not the issue: you are creating the model anew every time, and the Dense layers you are using get initialised from the glorot_uniform distribution.
Sample:
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import optimizers
from tensorflow.keras import callbacks
import matplotlib.pyplot as plt
import os
import numpy as np
import random as python_random

def seed():
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    os.environ["CUDA_VISIBLE_DEVICES"] = ""
    np.random.seed(123)
    python_random.seed(123)
    tf.random.set_seed(1234)

def train(set_seed):
    if set_seed:
        seed()
    dataset = {
        'review': [
            'This is the first document.',
            'This document is the second document.',
            'And this is the third one.',
            'Is this the first document?',
        ],
        'sentiment': [0,1,0,1]
    }
    Train_X2, Test_X2, Train_Y2, Test_Y2 = train_test_split(
        dataset['review'], dataset['sentiment'], test_size=0.2, random_state=1)
    Encoder = LabelEncoder()
    Train_Y2 = Encoder.fit_transform(Train_Y2)
    Test_Y2 = Encoder.fit_transform(Test_Y2)
    Tfidf_vect2 = TfidfVectorizer(max_features=None)
    Tfidf_vect2.fit(dataset['review'])
    Train_X2_Tfidf = Tfidf_vect2.transform(Train_X2).toarray()
    Test_X2_Tfidf = Tfidf_vect2.transform(Test_X2).toarray()
    # Model
    model = keras.Sequential()
    model.add(layers.Dense(100, input_dim=9, activation='sigmoid'))
    model.add(layers.Dense(1, activation='sigmoid'))
    opt = optimizers.Adam(learning_rate=0.01)
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
    #model.summary()
    history = model.fit(Train_X2_Tfidf, Train_Y2, epochs=10, verbose=0,
                        validation_data=(Test_X2_Tfidf, Test_Y2), batch_size=32)
    return history

def run(set_seed=False):
    plt.figure(figsize=(7,7))
    for i in range(5):
        history = train(set_seed)
        plt.plot(history.history['val_loss'], label=f"{i+1}")
    plt.legend()
    plt.title("With Seed" if set_seed else "Without Seed")

run()
run(True)
Output:
[plots of val_loss over five runs, with and without the seed]
You can see how the val_loss differs between runs without the seed (it depends on the initial values of the Dense layers and on other places where random number generation is used), and how the val_loss is exactly the same with the seed, which ensures that the initial values of the Dense layers (and the other randomly generated values) are the same between runs.
During the training process, several different sources of randomness exist. You would need to eliminate all of those to get perfectly reproducible results. Here are a few of those:
The random weight and bias initialization leads to different training runs each time.
During training, mini-batching is used, which influences the trajectory of the training process differently each time: any training process that uses Stochastic Gradient Descent (SGD) as the optimization technique will observe a different ordering of inputs.
Randomness induced by regularization techniques such as Dropout.
Some of these sources of randomness can be overcome by setting a fixed seed; how this is done is described in this older question, and a compact version for recent TensorFlow releases is sketched below.
This paper names several other possible sources of bias (such as data augmentation or hardware differences).
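As a hedged sketch, assuming a recent TensorFlow (roughly 2.7+ for set_random_seed and 2.8+ for enable_op_determinism), the seeding boilerplate can be reduced to:
import tensorflow as tf

SEED = 123

# Seeds Python's random module, NumPy and TensorFlow in one call
tf.keras.utils.set_random_seed(SEED)

# Optionally force deterministic (but slower) GPU kernels
tf.config.experimental.enable_op_determinism()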

CNN-LSTM model for sentiment analysis has low validation accuracy

I am working on a project to implement CNN-LSTM sentiment analysis. Below is the code
from keras.models import Sequential
from keras import regularizers
from keras import backend as K
from keras.callbacks import ModelCheckpoint
from keras.layers import Dense, Conv1D, MaxPool1D, Flatten, Dropout
from keras.layers import Embedding, LSTM, Bidirectional
from keras.layers import BatchNormalization
model7 = Sequential()
model7.add(Embedding(max_words, 40,input_length=max_len)) #The embedding layer
model7.add(Conv1D(20, 5, activation='relu', kernel_regularizer = regularizers.l2(l = 0.0001), bias_regularizer=regularizers.l2(0.01)))
model7.add(Dropout(0.5))
model7.add(Bidirectional(LSTM(20,dropout=0.5, kernel_regularizer=regularizers.l2(0.01), recurrent_regularizer=regularizers.l2(0.01), bias_regularizer=regularizers.l2(0.01))))
model7.add(Dense(1,activation='sigmoid'))
model7.compile(optimizer='adam',loss='binary_crossentropy', metrics=['accuracy'])
checkpoint7 = ModelCheckpoint("best_model7.hdf5", monitor='val_accuracy', verbose=1,save_best_only=True, mode='auto', period=1,save_weights_only=False)
history = model7.fit(X_train_padded, y_train, epochs=10,validation_data=(X_test_padded, y_test),callbacks=[checkpoint7])
Even after adding regularizers and dropout, my model has very high validation loss and low accuracy.
Epoch 3: val_accuracy improved from 0.54517 to 0.57010, saving model to best_model7.hdf5
2188/2188 [==============================] - 290s 132ms/step - loss: 0.4241 - accuracy: 0.8301 - val_loss: 0.9713 - val_accuracy: 0.5701
My train and test data:
train: (70000, 7)
test: (30000, 7)
train['sentiment'].value_counts()
1 41044
0 28956
test['sentiment'].value_counts()
1 17591
0 12409
Can anyone please let me know how to reduce the overfitting?
Since your code works, I believe that your network is failing silently by 'not learning' a lot from the data. Here's a list of some of the things you can generally check:
Is your textual data well transformed into numerical data? Is it well represented using TF-IDF, bag of words, or any other method that returns a numerical representation?
I see that you imported batch normalization but you do not apply it. Batch norm actually helps and, most importantly, does the job of the regularizers, since each input to each layer is normalized using the mini-batch the network has seen. So maybe remove your L2 regularization from all layers and apply a simple batch norm instead, which should reduce overfitting (also, use it without dropout, since some empirical studies show that the two should not be combined).
Your embedding output is currently set to 40, that is, 40 numerical elements to represent a text drawn from a vocabulary that may contain more than 10,000 elements. That seems a bit low; try something more 'standard' such as 128 or 256 instead of 40.
Lastly, you set the Adam optimizer with all the default parameters. However, the learning rate can have a big impact on how your loss evolves. As I am sure you know, the gradient step uses this learning rate to scale the update applied to each weight; the default is learning_rate=0.001. So try the following code and increase the learning rate a bit (for example to 0.01 or even 0.1).
A simple example:
from tensorflow import keras
from tensorflow.keras.layers import LSTM, BatchNormalization, Dense

# define model
model = keras.Sequential()
model.add(LSTM(32))  # or a CNN block
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))
# define optimizer with a larger learning rate
optimizer = keras.optimizers.Adam(0.01)
# define loss function
loss = keras.losses.binary_crossentropy
# define metric to optimize
metric = [keras.metrics.BinaryAccuracy(name='accuracy')]  # you can add more
# compile model
model.compile(optimizer=optimizer, loss=loss, metrics=metric)
Final thought: I see that you went for a combination of CNN and LSTM, which has great merit. However, it is always recommended to try a simple MLP network first to establish a baseline score that you can later try to beat. Does a simple MLP with 1 or 2 layers and not a lot of units also produce a low accuracy score? If it performs better, then maybe the problem lies in the implementation or in the hyperparameters you chose for the layers (or is even theoretical). A minimal baseline sketch follows below.
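For what it's worth, here is a minimal MLP baseline sketch along those lines; the random X and y below are placeholders for your own TF-IDF (or bag-of-words) matrix and binary labels:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data: replace with your own feature matrix and labels
X = np.random.rand(1000, 500).astype('float32')
y = np.random.randint(0, 2, size=(1000,))

baseline = keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(X.shape[1],)),
    layers.Dense(1, activation='sigmoid'),
])
baseline.compile(optimizer=keras.optimizers.Adam(0.01),
                 loss='binary_crossentropy',
                 metrics=['accuracy'])
baseline.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=1)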
I hope this answer helps and cheers!

Tensorflow NN not giving any reasonable output

I want to train a network on the isolet dataset, consisting of 6238 samples with 300 features each.
This is my code so far:
import tensorflow as tf
import sklearn.preprocessing as prep
import numpy as np
import matplotlib.pyplot as plt

def main():
    X, C, Xtst, Ctst = load_isolet()

    #normalize
    #X = (X - np.mean(X, axis = 1)[:, np.newaxis]) / np.std(X, axis = 1)[:, np.newaxis]
    #Xtst = (Xtst - np.mean(Xtst, axis = 1)[:, np.newaxis]) / np.std(Xtst, axis = 1)[:, np.newaxis]
    scaler = prep.MinMaxScaler(feature_range=(0,1))
    scaledX = scaler.fit_transform(X)
    scaledXtst = scaler.transform(Xtst)

    # Build the tf.keras.Sequential model by stacking layers. Choose an optimizer and loss function for training:
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(X.shape[1], activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(26, activation='softmax')
    ])

    ES_callback = tf.keras.callbacks.EarlyStopping(monitor='loss', min_delta=1e-2, patience=10, verbose=1)
    initial_learning_rate = 0.01
    lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate, decay_steps=100000, decay_rate=0.9999, staircase=True)
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
    model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

    history = model.fit(scaledX, C, epochs=100, callbacks=[ES_callback], batch_size=32)

    plt.figure(1)
    plt.plot(range(len(history.history['loss'])), history.history['loss'])
    plt.plot(range(len(history.history['accuracy'])), history.history['accuracy'])
    plt.show()
Up to now, I have pretty much turned every knob I know:
different number of layers
different sizes of layers
different activation functions
different learning rates
different optimizers (we are supposed to test with 'adam' and stochastic gradient descent)
different batch sizes
different data preparations (the feature values range from -1 to 1; I tried normalizing along the feature axes, standardizing (z_i = (x_i - mean) / std(x_i)) and, as seen in the code above, scaling the values to the range 0 to 1, since I guess the 'relu' activation won't work well with negative input values)
Pretty much everything I tried gives weird outputs with extremely high loss values (depending on the learning rate) and very low accuracies during learning. The loss increases over the epochs pretty much all of the time, but it seems to be independent of the accuracy values.
For the code I followed the tutorials I was provided with; however, something is very off, since I am supposed to find the best hyperparameters but I am not able to find any good ones at all.
I'd be very glad to get some pointers on where I got the code wrong or where I need to preprocess the data differently.
Edit: Using loss='categorical_crossentropy' was given, so at least this one should be correct.
First of all: your convergence problems may be due to an "incorrect" loss function. tf.keras supports a variety of losses whose choice depends on the shape of your input labels.
Try the different possibilities:
tf.keras.losses.CategoricalCrossentropy if your labels are one-hot vectors,
tf.keras.losses.SparseCategoricalCrossentropy if your labels are integer class indices (1, 2, 3, ...),
or tf.keras.losses.BinaryCrossentropy if your labels are just 0/1.
Honestly, this part of tf.keras is a bit tricky and some settings like that might need tuning.
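A small sketch of how the label shape maps to the loss choice, using made-up three-class labels purely for illustration:
import numpy as np
import tensorflow as tf

# Integer class indices: shape (n,), values in {0, ..., num_classes-1}
y_int = np.array([0, 2, 1, 2])
loss_sparse = tf.keras.losses.SparseCategoricalCrossentropy()

# One-hot labels: shape (n, num_classes)
y_onehot = tf.keras.utils.to_categorical(y_int, num_classes=3)
loss_categorical = tf.keras.losses.CategoricalCrossentropy()

# With softmax outputs of shape (n, num_classes), both losses give the same value
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])
print(float(loss_sparse(y_int, probs)), float(loss_categorical(y_onehot, probs)))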
Second of all - this part:
scaler = prep.MinMaxScaler(feature_range=(0,1))
scaledX = scaler.fit_transform(X)
scaledXtst = scaler.fit_transform(Xtst)
Assuming Xtst is your test set, you want to scale it based on your training set, so the correct scaling would be just:
scaledXtst = scaler.transform(Xtst)
Hope this helps!

How can I improve my CNN's accuracy evolution?

So, I'm trying to create a CNN that can predict whether there are any "support devices" in a thorax x-ray image, but when training my model it seems it's not learning anything.
I'm using a dataset called "CheXpert", which has over 200,000 images. After some "cleaning", the final dataset ended up with 100,000 images.
As far as the model is concerned, I imported the convolutional base of the pretrained VGG16 model and added 2 fully connected layers myself. Then I froze the whole convolutional base and made only the fully connected layers trainable. Here's the code:
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Dropout, GlobalAveragePooling2D
from keras.models import Model
pretrained_model = VGG16(weights='imagenet', include_top=False)
pretrained_model.summary()
for layer in pretrained_model.layers:
layer.trainable = False
x = pretrained_model.output
x = GlobalAveragePooling2D()(x)
dropout = Dropout(0.25)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
x = dropout(x)
x = Dense(1024, activation = 'relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
final_model = Model(inputs=pretrained_model.input, outputs=predictions)
final_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
As far as I know, the normal behavior would be for the accuracy to start low and then grow with the epochs, but here it only oscillates between the same values (0.93 and 0.95). I'm sorry I cannot upload images to show you the graphs.
To sum up, I want to know whether that little variation in the accuracy means that the model is not learning anything.
I have a hypothesis: of the 100,000 images in the dataset, 95,000 have the label "1" and only 5,000 have the label "0". I think that if I reduced the number of images labeled "1" to match the number labeled "0", the results would change.
The lack of images labeled "0" certainly doesn't help the CNN. I also suggest lowering the learning rate and playing around with the batch size to see if something changes.
I hope this helps.
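As a rough sketch of that suggestion, assuming the final_model from the question and a recent Keras where the optimizer argument is named learning_rate (older versions used lr); the value 1e-4 is only an example to tune:
from keras import optimizers

# Recompile with a smaller, explicit learning rate instead of the 'rmsprop' string
final_model.compile(loss='binary_crossentropy',
                    optimizer=optimizers.RMSprop(learning_rate=1e-4),
                    metrics=['accuracy'])
# then fit as before, also varying the batch size to see if something changes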
Because of the imbalanced training data, I suggest setting "class_weight" during the training step: the more data a class has, the lower the class_weight you set for it.
class_weight = {0: 1.5, 1: 0.5}
model.fit(X, Y, class_weight=class_weight)
You can check the class_weight argument in the Keras documentation:
class_weight: Optional dictionary mapping class indices (integers) to
a weight (float) value, used for weighting the loss function (during
training only). This can be useful to tell the model to "pay more
attention" to samples from an under-represented class.

Extracting weights from best Neural Network in Tensorflow/Keras - multiple epochs

I am working on a 1-hidden-layer neural network with 2000 neurons and 8 + constant input neurons for a regression problem.
In particular, as the optimizer I am using RMSprop with learning rate = 0.001, ReLU activation from the input to the hidden layer and linear activation from the hidden layer to the output. I am also using mini-batch gradient descent (32 observations) and running the model for 2000 epochs.
My goal is, after training, to extract the weights from the best network out of the 2000 epochs (where, after many trials, the best one is never the last, and by best I mean the one that leads to the smallest MSE).
Using save_weights('my_model_2.h5', save_format='h5') actually works, but to my understanding it extracts the weights from the last epoch, while I want those from the epoch in which the NN performed best. Please find the code I have written:
def build_first_NN():
    model = keras.Sequential([
        layers.Dense(2000, activation=tf.nn.relu, input_shape=[len(X_34.keys())]),
        layers.Dense(1)
    ])
    optimizer = tf.keras.optimizers.RMSprop(0.001)
    model.compile(loss='mean_squared_error',
                  optimizer=optimizer,
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    return model

first_NN = build_first_NN()

history_firstNN_all_nocv = first_NN.fit(X_34,
                                        y_34,
                                        epochs = 2000)

first_NN.save_weights('my_model_2.h5', save_format='h5')

trained_weights_path = 'C:/Users/Myname/Desktop/otherfolder/Data/my_model_2.h5'
trained_weights = h5py.File(trained_weights_path, 'r')
weights_0 = pd.DataFrame(trained_weights['dense/dense/kernel:0'][:])
weights_1 = pd.DataFrame(trained_weights['dense_1/dense_1/kernel:0'][:])
The weights extracted this way should be those from the last of the 2000 epochs; how can I instead get those from the epoch in which the MSE was smallest?
Looking forward to any comments.
EDIT: SOLVED
Building on the suggestions received, for general interest, this is how I have updated my code to meet my goal:
# build_first_NN() as defined before
first_NN = build_first_NN()

trained_weights_path = 'C:/Users/Myname/Desktop/otherfolder/Data/my_model_2.h5'

checkpoint = ModelCheckpoint(trained_weights_path,
                             monitor='mean_squared_error',
                             verbose=1,
                             save_best_only=True,
                             mode='min')

history_firstNN_all_nocv = first_NN.fit(X_34,
                                        y_34,
                                        epochs = 2000,
                                        callbacks = [checkpoint])

trained_weights = h5py.File(trained_weights_path, 'r')
weights_0 = pd.DataFrame(trained_weights['model_weights/dense/dense/kernel:0'][:])
weights_1 = pd.DataFrame(trained_weights['model_weights/dense_1/dense_1/kernel:0'][:])
Use the ModelCheckpoint callback from Keras.
from keras.callbacks import ModelCheckpoint
checkpoint = ModelCheckpoint(filepath, monitor='val_mean_squared_error', verbose=1, save_best_only=True, mode='min')
Use this as a callback in your model.fit(). This will always save the model with the best validation score (i.e. the lowest MSE on the validation set) at the location specified by filepath.
You can find the documentation here.
Of course, you need validation data during training for this. Otherwise, I think you can track the lowest training MSE by writing a callback function yourself; a sketch of such a callback follows below.
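A hedged sketch of that do-it-yourself option: a custom callback that keeps an in-memory copy of the weights from the epoch with the lowest training loss (the class and attribute names below are my own):
import tensorflow as tf

class BestTrainLossWeights(tf.keras.callbacks.Callback):
    """Remember the weights from the epoch with the lowest training loss."""

    def on_train_begin(self, logs=None):
        self.best_loss = float('inf')
        self.best_weights = None

    def on_epoch_end(self, epoch, logs=None):
        current = (logs or {}).get('loss')
        if current is not None and current < self.best_loss:
            self.best_loss = current
            self.best_weights = self.model.get_weights()

    def on_train_end(self, logs=None):
        if self.best_weights is not None:
            self.model.set_weights(self.best_weights)  # restore the best epoch

# usage, with build_first_NN(), X_34 and y_34 from the question:
# best_cb = BestTrainLossWeights()
# first_NN.fit(X_34, y_34, epochs=2000, callbacks=[best_cb])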
