Issue with TensorFlow 2.2 model.fit and model.evaluate - python

When I want to train my model in TF, it doesn't seem to pick up the right number of samples (cf. screenshot).
I expect to see 21759, not 680.
This has been happening since I changed OS (Fedora 30 XFCE -> Fedora 32 GNOME); other laptops don't have this issue.
I am using TF 2.2.
My dataset is made of some CSV files created by tshark (see the screenshot of my dataset).
Here are a few lines of my code:
My model:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense, Flatten

model = Sequential()
model.add(LSTM(9, input_shape=dataset[0].shape, activation='relu', return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(9, input_shape=dataset[0].shape, activation='relu', return_sequences=True))
model.add(Dropout(0.3))
model.add(Dense(9, activation='relu'))
model.add(Flatten())
model.add(Dense(2, activation='softmax'))
opt = tf.keras.optimizers.Adam(lr=1e-4, decay=1e-5)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
Do you have any ideas?
PS: It also happens with this .py file:
import tensorflow as tf

dataset = [[1, 1], [2, 2]] * 50
label = [0, 1] * 50
print(len(dataset))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(2, activation="softmax")
])
model.compile(
    loss="sparse_categorical_crossentropy",
    optimizer="sgd",
    metrics=["accuracy"]
)
history = model.fit(dataset, label, epochs=1)
Output:
100
4/4 [==============================] - 0s 611us/step - loss: 0.6578 - accuracy: 0.5000

As Koralp Catalsakal said, it was just a "configuration difference" issue: in TF 2.2 the progress bar counts batches (steps per epoch) rather than samples, so 21759 samples at the default batch_size of 32 show up as 680 steps.
So I just had to set the batch_size manually.
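For context, here is a minimal sketch based on the toy script above, showing what the progress bar is counting (the batch sizes here are just illustrative):
import numpy as np
import tensorflow as tf

x = np.array([[1, 1], [2, 2]] * 50, dtype="float32")  # 100 samples
y = np.array([0, 1] * 50)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(2, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
# The progress bar reports steps per epoch, not samples: ceil(100 / 32) = 4, hence "4/4".
model.fit(x, y, epochs=1)
# Passing batch_size explicitly changes the step count: ceil(100 / 10) = 10 -> "10/10".
model.fit(x, y, epochs=1, batch_size=10)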

Related

Keras loss: 0.0000e+00 and accuracy stays constant

I have 101 folders from 0-100 containing synthetic training images.
This is my code:
dataset = tf.keras.utils.image_dataset_from_directory(
    'Pictures/synthdataset5', labels='inferred', label_mode='int', class_names=None,
    color_mode='rgb', batch_size=32, image_size=(128, 128), shuffle=True, seed=None,
    validation_split=None, subset=None, interpolation='bilinear', follow_links=False,
    crop_to_aspect_ratio=False
)
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten
model = Sequential()
model.add(Conv2D(32, kernel_size=5, activation='relu', input_shape=(128,128,3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=5, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, kernel_size=3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(dataset,epochs=75)
And I always get the same result for every epoch:
Epoch 1/75
469/469 [==============================] - 632s 1s/step - loss: 0.0000e+00 - accuracy: 0.0098
What's wrong???
So it turns out your loss might be the problem after all.
If you use SparseCategoricalCrossentropy as the loss instead, it should work.
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
After this you should adjust the last layer; since the loss is built with from_logits=True, it should output one raw logit per class and let the loss apply the softmax:
model.add(Dense(101))
Also don't forget to add import tensorflow as tf at the top.
Let me know if this solves the issue.
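As a quick sanity check (a minimal sketch, assuming the same directory layout as in the question), you can confirm how many classes the dataset inferred; the final Dense layer must have that many units:
import tensorflow as tf

dataset = tf.keras.utils.image_dataset_from_directory(
    'Pictures/synthdataset5', labels='inferred', label_mode='int',
    image_size=(128, 128), batch_size=32)
# class_names is populated from the folder names; its length must match the output layer size.
print(len(dataset.class_names))  # expected: 101 for folders named 0-100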

What should my input to a keras conv1D layer be and what should the input_shape be?

Note: First time posting. I've tried to be thorough in my description
I've been trying to set up what I thought would be a very simple CNN by following this tutorial:
https://machinelearningmastery.com/cnn-models-for-human-activity-recognition-time-series-classification/
My Xtrain dataset is a time series stored as a numpy array with 34396 rows (samples) and 600 columns (time steps). My Ytrain dataset is just an array containing the labels 0, 1, or 2 (as ints). I'm just trying to use the CNN to perform multi-class classification.
I'm running into errors like
Input 0 is incompatible with layer conv1d_39: expected ndim=3, found ndim=4
when input_shape=(n_timesteps, n_features, n_outputs), or
Error when checking input: expected conv1d_40_input to have 3 dimensions, but got array with shape (34396, 600)
when input_shape=(n_timesteps, n_features).
I've been searching online for hours now but I can't seem to find a solution to my problem. I think it's a simple problem with my data format and the input_shape values, but I haven't been able to fix it.
I've tried setting input_shape to
(None, 600, 1)
(34396,600, 1)
(34396,600)
(None,600)
among various other combinations.
import pandas as pd
from keras.models import Sequential
from keras.layers import Conv1D, Dropout, MaxPooling1D, Flatten, Dense

train_df = pd.read_csv('training.csv')
test_df = pd.read_csv('test.csv')
x_train = train_df.iloc[:, 2:].values
y_train = train_df.iloc[:, 1].values
x_test = train_df.iloc[:, 2:].values
y_test = train_df.iloc[:, 1].values
n_rows = len(x_train)
n_cols = len(x_train[0])

def evaluate_model(trainX, trainy, testX, testy):
    verbose, epochs, batch_size = 0, 10, 32
    n_timesteps, n_features, n_outputs = trainX.shape[0], trainX.shape[1], 3
    print(n_timesteps, n_features, n_outputs)
    model = Sequential()
    model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features, n_outputs)))
    model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit network
    model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)
    # evaluate model
    _, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
    return accuracy

evaluate_model(x_train, y_train, x_test, y_test)
As given in the Keras docs for Conv1D, input_shape=(10, 128) corresponds, for example, to time series sequences of 10 time steps with 128 features per step.
So in your case, since you have 600 timesteps of 1 feature each, it should be input_shape=(600, 1).
You also have to feed your labels y as one-hot encoded vectors.
Working code
import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, Dropout, MaxPooling1D, Flatten, Dense
from keras.utils import to_categorical

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(600, 1)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam', metrics=['accuracy'])

x = np.random.randn(100, 600)
y = np.random.randint(0, 10, size=(100))
# Reshape x to (n_samples, time_steps, 1) and convert y to one-hot encoding
model.fit(x.reshape(100, 600, 1), to_categorical(y))
# Same as model.fit(np.expand_dims(x, 2), to_categorical(y))
Output:
Epoch 1/1
100/100 [===========================] - 0s 382us/step - loss: 2.3245 - acc: 0.0800

Why is the accuracy of my DNN less than 1%?

I'm new to deep learning. I have an X with 34 dimensions, which are stock technical indicator data, and a Y label, which is binary (1, -1) and represents whether the stock is in an uptrend or a downtrend. Here is my code.
import pandas as pd
import numpy as np

data = pd.read_csv('data/data_week.csv')
data.dropna(inplace=True)
x = data.loc[:, 'bbands_upperband':'turn_std_5']
y = data['label']

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(512, activation='relu', input_dim=34))
model.add(Dense(200, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='relu'))
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x, y, epochs=2, batch_size=200)
231200/1041021 [=====>........................] - ETA: 59s - loss: 0.7098 - acc: 0.0086
232000/1041021 [=====>........................] - ETA: 59s - loss: 0.7087 - acc: 0.0086
However, the accuracy is less than 1%, so I think something must be wrong here.
If you know what it is, please tell me. Thank you very much!
For a binary classification model you should use a sigmoid activation function in your last dense layer:
model.add(Dense(1, activation='sigmoid'))
Also, your labels must be (0, 1), not (-1, 1).
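For example, a minimal sketch of the label remapping, assuming y holds the -1/1 labels from the question:
import numpy as np

# Map -1 -> 0 and 1 -> 1 so the targets match the sigmoid output used with binary_crossentropy.
y = np.where(np.asarray(y) == -1, 0, 1)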

Keras - Deep learning model not training

I have recently written and run this code to train a CNN with Theano and Keras:
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense, Dropout

#Building the model
model = Sequential()
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,8,182)))
model.add(Convolution2D(32, 3, 3, activation='relu'))
model.add(Convolution2D(32, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
#Compiling the CNN
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['binary_accuracy'])
#Fitting data and training the model
model.fit(X_train, y_train, batch_size=32, nb_epoch=100, verbose=1)
#Saving weights
model.save_weights('trained_cnn.h5', overwrite=True)
I tested it on my CPU, and each epoch took about 3 minutes. A sample output for the first epoch is this:
Epoch 1/100
72000/72000 [==============================] - 204s - loss: 0.6935 - binary_accuracy: 0.5059
Now, I have migrated to an Nvidia Titan X GPU. I have also been forced to move to Keras 2 and have therefore updated my code as follows, implementing the necessary changes:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

#Building the model
model = Sequential()
model.add(Conv2D(32, 3, activation='relu', input_shape=(1,8,182)))
model.add(Conv2D(32, 3, activation='relu'))
model.add(Conv2D(32, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
#Compiling the CNN
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['binary_accuracy'])
#Fitting data and training the model
model.fit(X_train, y_train, batch_size=32,epochs=100, verbose=2)
#Saving weights
model.save_weights('trained_cnn_b_2.h5', overwrite=True)
Now whenever I run on my GPU, the program just gets stuck saying
epoch 1/100
and nothing happens after this, even when I wait for more than 10 minutes.
Why is this happening and how can I fix it? I haven't changed any of my code besides the Keras calls. No errors are thrown. Where am I going wrong? Is there something wrong with the verbose argument that is stopping the program from executing?
Edit 1: I have left my setup running overnight, but there is still no execution after that line.
Edit 2: I am using CUDA 7.5.17
Edit 3: This program from here works perfectly fine. It completes execution in less than 10 seconds, as expected.
# Create your first MLP in Keras
from keras.models import Sequential
from keras.layers import Dense
import numpy
# fix random seed for reproducibility
numpy.random.seed(7)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10)
# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
NOTE: I have verified that my GPU is working completely fine.
EDIT: I migrated to TensorFlow, and it had no problems with the CUDA version being below 9. With TensorFlow, the program executes perfectly.
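On the note above about verifying the GPU: here is a minimal sketch of how to check that the backend actually sees it before training (the exact call depends on the TensorFlow version; newer releases prefer tf.config.list_physical_devices('GPU')):
import tensorflow as tf

# Prints True if TensorFlow can see a usable GPU (deprecated in TF 2.x in favour of
# tf.config.list_physical_devices('GPU'), but available across versions).
print(tf.test.is_gpu_available())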

Convolutional Neural Network with Non-Existent Loss and Accuracy of 0

I am attempting to train a simple convolutional neural network shown below.
from keras.models import Sequential
from keras.layers import Conv1D, Activation, MaxPooling1D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv1D(32, 3, padding='same', input_shape=(700, 7)))
model.add(Activation('relu'))
model.add(Conv1D(32,3))
model.add(Activation('relu'))
model.add(MaxPooling1D())
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
I fit it using 100 epochs and a validation split of 0.2 on input data shaped [1000L, 700L, 7L]. Every single one of my epochs displayed the following:
loss: nan - acc: 0.0000e+00 - val_loss: nan - val_acc: 0.0000e+00
So my question is, what went wrong and how do I fix it? Is the problem with the network, or with how my data is being input and fitted to the model?
