Why is the accuracy of my DNN less than 1%? - python

I'm new to deep learning. I have an X with 34 dimensions, which are stock technical-indicator data, and a label Y, which is binary (1, -1) and represents whether the stock is in an uptrend or a downtrend. Here is my code:
import pandas as pd
import numpy as np
data = pd.read_csv('data/data_week.csv')
data.dropna(inplace=True)
x = data.loc[:, 'bbands_upperband':'turn_std_5']
y = data['label']
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(512, activation='relu', input_dim=34))
model.add(Dense(200, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='relu'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x, y, epochs=2, batch_size=200)
231200/1041021 [=====>........................] - ETA: 59s - loss: 0.7098 - acc: 0.0086
232000/1041021 [=====>........................] - ETA: 59s - loss: 0.7087 - acc: 0.0086
However, the accuracy is less than 1%. I think something must be wrong.
If you know what it is, please tell me. Thank you very much!

For a binary classification model, you should use a sigmoid activation function in your last dense layer:
model.add(Dense(1, activation='sigmoid'))
Also, your classes must be (0, 1), not (-1, 1).
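Putting both fixes together, a minimal sketch (assuming x and y are the same DataFrame and Series as in the question) could look like this:

# Minimal sketch: remap the labels and use a sigmoid output layer.
y = y.replace({-1: 0})                      # classes must be 0/1 for binary_crossentropy

model = Sequential()
model.add(Dense(512, activation='relu', input_dim=34))
model.add(Dense(200, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))   # sigmoid instead of relu on the last layer

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x, y, epochs=2, batch_size=200)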

Related

Keras loss: 0.0000e+00 and accuracy stays constant

I have 101 folders from 0-100 containing synthetic training images.
This is my code:
dataset = tf.keras.utils.image_dataset_from_directory(
    'Pictures/synthdataset5', labels='inferred', label_mode='int',
    class_names=None, color_mode='rgb', batch_size=32, image_size=(128,128),
    shuffle=True, seed=None, validation_split=None, subset=None,
    interpolation='bilinear', follow_links=False, crop_to_aspect_ratio=False
)
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten
model = Sequential()
model.add(Conv2D(32, kernel_size=5, activation='relu', input_shape=(128,128,3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=5, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(dataset,epochs=75)
And I always get the same result for every epoch:
Epoch 1/75
469/469 [==============================] - 632s 1s/step - loss: 0.0000e+00 - accuracy: 0.0098
What's wrong???
So it turns out your loss might be the problem after all.
If you use SparseCategoricalCrossentropy as the loss instead, it should work.
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
After this you should adjust the last layer to:
model.add(Dense(101, activation='softmax'))
Also, don't forget to add import tensorflow as tf.
Let me know if this solves the issue.
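Putting the pieces together, a rough sketch of the adjusted tail of the model (the convolutional layers stay as in the question) could be:

import tensorflow as tf

# ...convolutional and pooling layers as in the question...
model.add(Flatten())
model.add(Dense(101))   # one unit per class folder; raw logits, no activation

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(dataset, epochs=75)

One detail: from_logits=True expects raw logits, so the last layer has no activation here; alternatively, keep activation='softmax' on the last layer and set from_logits=False.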

improve Keras model accuracy Conv1D and LSTM

I have a dataset where I need to predict a target that is 0 or 1.
It is also useful for me to know how close the prediction is to 0 (e.g. 0.20) or to 1 (e.g. 0.89), and so on.
My model structure is this:
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=1, strides=1))
model.add(LSTM(128, return_sequences=True, recurrent_dropout=0.2,activation='relu'))
model.add(Dense(128, activation="relu",
kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
bias_regularizer=regularizers.l2(1e-4),
activity_regularizer=regularizers.l2(1e-5)))
model.add(Dropout(0.4))
model.add(Conv1D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=1, strides=1))
model.add(LSTM(64, return_sequences=True,activation='relu'))
model.add(Dense(64, activation="relu",kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
bias_regularizer=regularizers.l2(1e-4),
activity_regularizer=regularizers.l2(1e-5)))
model.add(Dropout(0.4))
model.add(Conv1D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=1, strides=1))
model.add(LSTM(32, return_sequences=True, recurrent_dropout=0.2, activation='relu'))
model.add(Dense(32, activation="relu",kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
bias_regularizer=regularizers.l2(1e-4),
activity_regularizer=regularizers.l2(1e-5)))
model.add(Dropout(0.4))
model.add(BatchNormalization())
model.add(Dense(1, activation='linear'))
from keras.metrics import categorical_accuracy
model.compile(optimizer='rmsprop',loss="mse",metrics=['accuracy'])
model.fit(X_train,y_train,epochs=1000, batch_size=16, verbose=1, validation_split=0.1, callbacks=callback)
Summary of model is here: https://pastebin.com/Ba6ErEzj
The training output is:
Epoch 58/1000
277/277 [==============================] - 1s 5ms/step - loss: 0.2510 - accuracy: 0.4937 - val_loss: 0.2523 - val_accuracy: 0.4878
Epoch 59/1000
277/277 [==============================] - 1s 5ms/step - loss: 0.2515 - accuracy: 0.4941 - val_loss: 0.2504 - val_accuracy: 0.5122
How can I improve this? An accuracy around 0.50 on a 0/1 output is useless.
This is my Colab code.
To wrap up the suggestions (some already offered in the comments), with some justification:
Mistakes. You are in a binary classification setting, so:
Using MSE is wrong; you should use loss='binary_crossentropy'
In your last single-node layer, you should use activation='sigmoid'.
Best practices. Things like dropout, batch normalization, and kernel & bias regularizers are used for regularization, i.e. (roughly speaking) to avoid overfitting. They should not be used by default, and doing so is well known to prevent learning (as seems to be the case here):
Remove all dropout layers
Remove all batch normalization layers
Remove all kernel, bias, and activity regularizers.
You can consider adding some of these back step by step later, but only if you see signs of overfitting.
General advice. Nowadays, usually the first choice for an optimizer is Adam, so change to optimizer='adam' as a first approach.
That said, at the end of the day, everything depends on your data (both their quantity & quality) and the particular problem to be addressed. Experimentation is king (but keeping in mind the general principles stated above).
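As a rough illustration only (layer sizes are borrowed from the question, and the exact architecture still depends on your data shapes), a stripped-down baseline following the points above could look like:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

# Baseline: no dropout, no batch normalization, no regularizers.
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(64))                          # last recurrent layer returns one vector per sample
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))    # sigmoid output for binary classification

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])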

Issue with TensorFlow 2.2 and model.fit / model.evaluate

When I want to train my model in TF, it doesn't seem to pick up the right values (see screenshot).
I expect to see 21759, not 680.
It has been happening since I changed OS (Fedora 30 XFCE -> Fedora 32 GNOME); other laptops don't have this issue.
I am using TF 2.2.
My dataset is made of some CSVs created by tshark (see the screenshot of my dataset).
Here are a few lines of my code:
My model:
model = Sequential()
model.add(LSTM(9, input_shape=dataset[0].shape, activation='relu', return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(9, input_shape=dataset[0].shape, activation='relu', return_sequences=True))
model.add(Dropout(0.3))
model.add(Dense(9, activation='relu'))
model.add(Flatten())
model.add(Dense(2, activation='softmax'))
opt = tf.keras.optimizers.Adam(lr=1e-4, decay=1e-5)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
Do you have any ideas?
PS: It also happens with this .py:
import tensorflow as tf
dataset = [[1, 1],[2, 2]] * 50
label = [0, 1] * 50
print(len(dataset))
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(2, activation="softmax")
])
model.compile(
    loss="sparse_categorical_crossentropy",
    optimizer="sgd",
    metrics=["accuracy"]
)
history = model.fit(dataset, label, epochs=1)
Output:
100
4/4 [==============================] - 0s 611us/step - loss: 0.6578 - accuracy: 0.5000
As Koralp Catalsakal said, it was just a "configuration difference" issue.
So I just had to set the batch_size manually.
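For context, the number shown in the progress bar in TF 2.2 is the number of batches (steps) per epoch, not the number of samples: 21759 samples with the default batch_size of 32 is 680 steps, and the 100-sample example above is 4 steps. Passing batch_size explicitly makes the relationship obvious, e.g.:

# With batch_size=1, the steps per epoch equal the number of samples (100 here).
history = model.fit(dataset, label, epochs=1, batch_size=1)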

How to get the loss of each output in multi-output regression?

In order to analyze the data, I need the loss for each output dimension; instead I get only one loss, which I suspect is the mean of the losses over all output dimensions.
Any help in understanding what loss I am getting, and how to get a separate loss for each output, is appreciated:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from scipy import stats
from keras import models
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import optimizers
from keras.optimizers import SGD
from sklearn.model_selection import KFold
from sklearn.preprocessing import MinMaxScaler
siz=100000
inp0=np.random.randint(100, 1000000 , size=(siz,3))
rand0=np.random.randint(-100, 100 , size=(siz,2))
a1=0.2;a2=0.8;a3=2.5;a4=2.6;a5=1.2;a6=0.3
oup1=np.dot(inp0[:,0],a1)+np.dot(inp0[:,1],a2)+np.dot(inp0[:,2],a3)\
+rand0[:,0]
oup2=np.dot(inp0[:,0],a4)+np.dot(inp0[:,1],a5)+np.dot(inp0[:,2],a6)\
+rand0[:,1]
oup_tot=np.concatenate((oup1.reshape(siz,1), oup2.reshape(siz,1)),\
axis=1)
normzer_inp = MinMaxScaler()
inp_norm = normzer_inp.fit_transform(inp0)
normzer_oup = MinMaxScaler()
oup_norm = normzer_oup.fit_transform(oup_tot)
X=inp_norm
Y=oup_norm
kfold = KFold(n_splits=2, random_state=None, shuffle=False)
opti_SGD = SGD(lr=0.01, momentum=0.9)
model1 = Sequential()
for train, test in kfold.split(X, Y):
    model = Sequential()
    model.add(Dense(64, input_dim=X.shape[1], activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(Y.shape[1], activation='linear'))
    model.compile(loss='mean_squared_error', optimizer=opti_SGD)
    history = model.fit(X[train], Y[train],
                        validation_data=(X[test], Y[test]),
                        epochs=100, batch_size=2048, verbose=2)
I get:
Epoch 1/100
- 0s - loss: 0.0864 - val_loss: 0.0248
Epoch 2/100
- 0s - loss: 0.0218 - val_loss: 0.0160
Epoch 3/100
- 0s - loss: 0.0125 - val_loss: 0.0091
I would like to know what the loss I get now represents, and how to get losses for each output dimension.
Pass a list of functions to the metrics argument in the compile function. See here: https://keras.io/metrics/#custom-metrics
import keras.backend as K
...
def loss_first_dim(y_true, y_pred):
    # MSE restricted to the first output dimension
    return K.mean(K.square(y_pred[:, 0] - y_true[:, 0]))

model.compile(optimizer=opti_SGD,
              loss='mean_squared_error',
              metrics=[loss_first_dim])
...
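For the two-output regression in the question, the same idea extends to one metric per output dimension (a sketch; with loss='mean_squared_error', the single reported loss is then the mean of the two per-dimension values):

import keras.backend as K

def loss_dim0(y_true, y_pred):
    # MSE restricted to the first output column
    return K.mean(K.square(y_pred[:, 0] - y_true[:, 0]))

def loss_dim1(y_true, y_pred):
    # MSE restricted to the second output column
    return K.mean(K.square(y_pred[:, 1] - y_true[:, 1]))

model.compile(loss='mean_squared_error', optimizer=opti_SGD,
              metrics=[loss_dim0, loss_dim1])

After fitting, history.history will contain 'loss_dim0' and 'loss_dim1' entries (plus 'val_loss_dim0' and 'val_loss_dim1' when validation data is passed).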

Convolutional Neural Network with Non-Existent Loss and Accuracy of 0

I am attempting to train a simple convolutional neural network shown below.
model= Sequential()
model.add(Conv1D(32, 3, padding='same', input_shape=(700,7)))
model.add(Activation('relu'))
model.add(Conv1D(32,3))
model.add(Activation('relu'))
model.add(MaxPooling1D())
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
I fit it using 100 epochs and a validation split of 0.2 on input data shaped [1000L, 700L, 7L]. Every single one of my epochs displayed the following:
loss: nan - acc:0.0000e+00 - val_loss: nan - val_acc: 0.0000e+00
So my question is: what went wrong, and how do I fix it? Is the problem with the network, or with how my data is being fed into and fitted to the model?
