Keras predicting classes method - Python

So, I have this little project going on about predicting the 2019 NBA champion, but it seems that my code is not clear enough for Keras to understand what I want. I have passed a list of past champions in my dataset and made it the output class used to get the current champion.
I'm using a dataset of team stats from the 2014 to 2018 regular seasons, and I'm assuming that I should have the 2019 stats to do it. I have encoded my dataset carefully for my NN to understand, applying one-hot encoding to every feature I think is useful.
x = pd.concat([df.drop(['Unnamed: 0','Team','Game','Date','Opponent','LastSeasonChamp'], axis = 1), df_ohc], axis = 1)
y = df['LastSeasonChamp']
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.35)
x_train = tf.keras.utils.normalize(x_train.values, axis = 1)
x_test = tf.keras.utils.normalize(x_test.values, axis = 1)
n_classes = 30
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(x_train.shape[1], input_shape = (x_train.shape[0],x_train.shape[1]), activation = tf.nn.relu))
model.add(tf.keras.layers.Dense(np.mean([x_train.shape[1], n_classes], dtype = int), activation = tf.nn.relu))
model.add(tf.keras.layers.Dense(n_classes, activation = tf.nn.softmax))
model.compile(optimizer = 'adagrad' , loss = 'sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train.values, epochs = 3)
model.evaluate(x_test, y_test)
model.save('nba_champ_2019')
new_model = tf.keras.models.load_model('nba_champ_2019')
pred = new_model.predict(x_test)
y_pred = to_categorical(pred)
So, I would expect my y_pred to be a column of 0s and 1s, but all I get is a column full of 1s.

The to_categorical function converts a list of class IDs to a one-hot matrix; you don't need it here, because model.predict already returns one probability per class. You should get the output you expect by removing that call.
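If what you want instead is a single predicted class per sample rather than 30 probabilities, a minimal sketch (using np, which the question's code already relies on) is:
pred = new_model.predict(x_test)   # shape (n_samples, 30): one probability per class
y_pred = np.argmax(pred, axis=1)   # integer class ID per sample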

Related

LSTM time series output does not match the actual data

I use an LSTM model in Keras to predict time series data. I use MinMaxScaler to normalize the data and create the model as in this code.
sc = MinMaxScaler()
train_sc = sc.fit_transform(train)
test_sc = sc.transform(test)
X_train = train_sc[:-1]
y_train = train_sc[1:]
X_test = test_sc[:-1]
y_test = test_sc[1:]
X_train_t = X_train[:, None]
X_test_t = X_test[:, None]
K.clear_session()
model = Sequential()
model.add(LSTM(12, input_shape=(1, 1)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train_t, y_train,epochs=200, batch_size=1, verbose=1)
y_pred = model.predict(X_test_t)
real_pred = sc.inverse_transform(y_pred)
real_test = sc.inverse_transform(y_test)
The output looks like this image. When I put a test value into model.predict(), it shows the predicted value, so I think each predicted value should be compared to the next test value, as in this table.
The picture shows that the predicted value does not match the actual value; it looks like the output just copies the value from the input. How can I fix this?
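One way to inspect the lag the question describes is to line each prediction up against the actual value it targets and against the previous actual value; a minimal sketch, assuming real_pred and real_test from the code above:
import numpy as np
# Columns: prediction for step t, actual value at step t, actual value at step t-1.
# If the predictions track the third column rather than the second, the model
# has merely learned to copy its input (a one-step lag).
print(np.column_stack([real_pred[1:].ravel(),
                       real_test[1:].ravel(),
                       real_test[:-1].ravel()])[:10])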

Python Keras Model -- ValueError: Layer sequential expects 1 input(s), but it received 16 input tensors

I have seen that many others on stackoverflow have posted about this same problem, but I haven't been able to figure out how to apply those solutions to my example.
I have been working on creating a model to predict an outcome of either 0 or 1 based on a dataset which contains 16 features. Everything has seemed to work fine (accuracy evaluation, epoch completion, etc.).
As mentioned, my training features include 16 different variables, but when I pass in a list that contains 16 unique values separate from the training dataset in order to try and make an individual prediction (of either 0 or 1), I get this error:
ValueError: Layer sequential_11 expects 1 input(s), but it received 16 input tensors.
Here is my code -
y = datas.Result
X = datas.drop(columns = ['Date', 'home_team', 'away_team', 'home_pitcher', 'away_pitcher', 'Result'])
X = X.values.astype('float32')
y = y.values.astype('float32')
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2)
X_train, X_validation, y_train, y_validation = train_test_split(X, y, test_size = 0.2)
model = keras.Sequential([
    keras.layers.Dense(32, input_shape=(16,)),
    keras.layers.Dense(20, activation=tf.nn.relu),
    keras.layers.Dense(2, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])
history = model.fit(X_train,y_train,epochs=20, validation_data=(X_validation, y_validation))
#all variables within features list are single values, ex: .351, 11, .991, etc.
features = [t1_pqm,t2_pqm,t1_elo,t2_elo,t1_era,t2_era,t1_bb9,t2_bb9,t1_fip,t2_fip,t1_ba,t2_ba,t1_ops,t2_ops,t1_so,t2_so]
prediction = model.predict(features)
The model expects an input of shape (None, 16), but features has shape (16,) (a 1D list). The easiest solution is to make it a NumPy array with the right shape, (1, 16):
features = np.array([[t1_pqm,t2_pqm,t1_elo,t2_elo,t1_era,t2_era,t1_bb9,t2_bb9,t1_fip,t2_fip,t1_ba,t2_ba,t1_ops,t2_ops,t1_so,t2_so]])
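After the reshape, predict returns a single row of two class probabilities; a short usage sketch (assuming numpy is imported as np):
prediction = model.predict(features)                  # shape (1, 2)
predicted_class = np.argmax(prediction, axis=1)[0]    # 0 or 1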

Why does TensorFlow show inaccurate loss?

I'm using Tensorflow to train a network to predict the third item in a list of numbers.
When I train, the network appears to train quite well and do well on both the training and test set. However, when I evaluate its performance myself, it seems to be doing quite poorly.
For example, at the end of training, TensorFlow says that the validation loss is 2.1 x 10^(-5). However, when I compute it myself, I get about 0.17. What am I doing wrong?
Here's code that can be run on Google Colab:
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
def create_dataset(k=5, n=2, example_amount=200):
    '''Create a dataset of numbers where the goal is to always output the nth number'''
    # UPGRADE: this could be done better with numpy to just generate all the examples at once
    example_amount = 1000
    x = []
    y = []
    ans = [x, y]
    for i in range(example_amount):
        example_x = np.random.rand(k)
        example_y = example_x[n]
        x.append(example_x)
        y.append(example_y)
    return ans

def tensorize(tensor_like) -> tf.Tensor:
    '''Turn stuff into tensors'''
    return tf.convert_to_tensor(tensor_like, dtype=tf.float32)

def split_dataset(dataset, train_split=0.8, random_state=42):
    '''
    Takes in a list (or tuple) where index 0 contains the inputs and index 1 contains the outputs
    outputs x_train, x_test, y_train, y_test, train_indexes, test_indexes all as tf.Tensor
    '''
    indices = np.arange(len(dataset[0]))
    return tuple([tensorize(data) for data in train_test_split(dataset[0], dataset[1], indices, train_size=train_split, random_state=random_state)])
# how many numbers in each example
K = 5
# the index of the solution
N = 2
# how many examples
EXAMPLE_AMOUNT = 20000
# what percentage of the examples are in the training set
TRAIN_SPLIT = 0.5
# how long to train for
epochs = 50
dataset = create_dataset(K, N, EXAMPLE_AMOUNT)
x_train, x_test, y_train, y_test, train_indexes, test_indexes = split_dataset(dataset, train_split=TRAIN_SPLIT)
model_input = tf.keras.layers.Input(shape=(K,), name="input")
model_dense1 = tf.keras.layers.Dense(10, name="dense1")(model_input)
model_dense2 = tf.keras.layers.Dense(10, name="dense2")(model_dense1)
model_output = tf.keras.layers.Dense(1, name="output")(model_dense2)
model = tf.keras.Model(inputs=model_input, outputs=model_output)
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
history = model.fit(x=x_train, y=y_train, validation_data=(x_test, y_test), epochs=epochs)
# the validation loss as Tensorflow computes it
print(history.history["val_loss"][-1]) # 2.1036579710198566e-05
# the validation loss as I compute it
val_loss = tf.math.reduce_mean(tf.keras.losses.MSE(y_test, model.predict(x_test))).numpy()
print(val_loss) # 0.1655631
What you are missing is the shape of y_test:
y_test.numpy().shape
(500,) <-- causing the behaviour
Because y_test has shape (500,) while model.predict(x_test) returns shape (500, 1), the subtraction inside MSE broadcasts the two into a (500, 500) matrix, so the loss is averaged over every prediction/target pair instead of element-wise.
Simply reshape it like:
val_loss = tf.math.reduce_mean(tf.keras.losses.MSE(y_test.numpy().reshape(-1,1), model.predict(x_test))).numpy()
print(val_loss) # 1.1548506e-05
Also:
history.history["val_loss"][-1] # 1.1548506336112041e-05
Or you can flatten() both arrays when computing it:
val_loss = tf.math.reduce_mean(tf.keras.losses.MSE(y_test.numpy().flatten(), model.predict(x_test).flatten())).numpy()
print(val_loss) # 1.1548506e-05
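The pitfall is easy to reproduce in isolation; a minimal standalone sketch of the broadcasting behaviour:
import numpy as np
y_true = np.random.rand(500)       # shape (500,), like y_test
y_pred = np.random.rand(500, 1)    # shape (500, 1), like model.predict(x_test)
# The element-wise difference silently broadcasts to a full matrix, so the
# MSE ends up averaging over every prediction/target pair:
print((y_true - y_pred).shape)     # (500, 500)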

Error checking input: expected embedding_1 input to have shape but got shape

I have successfully created my Keras sequential model and trained it for a while. Now I am trying to make some predictions, but it fails even when using the same data as was used in the training phase.
I am getting this error: {ValueError} Error when checking input: expected embedding_1_input to have shape (2139,) but got array with shape (1,)
However, when I check the input I am trying to use, its shape is (2139,). I would like to know if anyone knows what might be causing this.
df = pd.read_csv('../../data/parsed-data/data.csv')
df = ModelUtil().remove_entries_based_on_threshold(df, 'Author', 2)
#show_column_distribution(df, 'Author')
y = df.pop('Author')
le = LabelEncoder()
le.fit(y)
encoded_Y = le.transform(y)
tokenizer, padded_sentences, max_sentence_len \
= PortugueseTextualProcessing().convert_corpus_to_number(df)
ModelUtil().save_tokenizer(tokenizer)
vocab_len = len(tokenizer.word_index) + 1
glove_embedding = PortugueseTextualProcessing().load_vector(tokenizer)
embedded_matrix = PortugueseTextualProcessing().build_embedding_matrix(glove_embedding, vocab_len, tokenizer)
cv_scores = []
kfold = StratifiedKFold(n_splits=4, shuffle=True, random_state=7)
models = []
nn = NeuralNetwork()
nn.build_baseline_model(embedded_matrix, max_sentence_len, vocab_len, len(np_utils.to_categorical(encoded_Y)[0]))
# Separate some validation samples
val_data, X, Y = ModelUtil().extract_validation_data(padded_sentences, encoded_Y)
for train_index, test_index in kfold.split(X, Y):
    # convert integers to dummy variables (i.e. one hot encoded)
    dummy_y = np_utils.to_categorical(Y)
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = dummy_y[train_index], dummy_y[test_index]
    nn.train(X_train, y_train, 100)
    scores = nn.evaluate_model(X_test, y_test)
    cv_scores.append(scores[1] * 100)
    models.append(nn)
print("%.2f%% (+/- %.2f%%)" % (np.mean(cv_scores), np.std(cv_scores)))
best_model = models[cv_scores.index(max(cv_scores))]
best_model.save_model()
best_model.predict_entries(X[0])
Methods that perform the model creation and the prediction:
def build_baseline_model(self, emd_matrix, long_sent_size, vocab_len, number_of_classes):
    self.model = Sequential()
    embedding_layer = Embedding(vocab_len, 100, weights=[emd_matrix], input_length=long_sent_size,
                                trainable=False)
    self.model.add(embedding_layer)
    self.model.add(Dropout(0.2))
    self.model.add(Flatten())
    # softmax performing better than relu
    self.model.add(Dense(number_of_classes, activation='softmax'))
    self.model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
    return self.model

def predict_entries(self, entry):
    predictions = self.model.predict_classes(entry)
    # show the inputs and predicted outputs
    print("X=%s, Predicted=%s" % (entry, predictions[0]))
    return predictions
X[0].shape evaluates to (2139,).
In your case you should apply a reshape so that you pass an array with a single element (a batch of one) containing the sentence: the model expects input of shape (batch_size, 2139), but X[0] has shape (2139,).
X_reshape = X[0].reshape(1, 2139)
best_model.predict_entries(X_reshape)
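An equivalent way to add the batch dimension, if numpy is available as np:
X_reshape = np.expand_dims(X[0], axis=0)   # shape (1, 2139): a batch of one sentence
best_model.predict_entries(X_reshape)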

Labels for Keras Model Predicting Multi-Classification Problem

If I have a set of targets (a.k.a. y's) such as [1,0,9,9,7,5,4,0,4,1] and I use model.predict(X), Keras returns a 6-item array for each of the 10 samples. It returns 6 items because there are 6 possible targets (0, 1, 4, 5, 7, 9), and Keras returns a decimal/float for each label representing the likelihood of that label being the correct target. For the first sample, for example, where y=1, Keras returns an array that looks like this: [.1, .4, .003, .001, .5, .003].
I want to know which value matches which target (does .1 refer to 1 because it's first in the dataset, or 0 because it's the lowest number, or 9 because it's the last number, etc.). How does Keras order its predictions? The documentation does not seem to articulate this; it only says
"Generates output predictions for the input samples."
So I'm not sure how to match the labels to the prediction results.
EDIT:
Here is my model and training code:
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.25, random_state=42)
Y_train = to_categorical(y_train)
Y_test = to_categorical(y_test)
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(64, 5, activation='relu')(embedded_sequences)
x = MaxPooling1D(4)(x)
x = Conv1D(64, 5, activation='relu')(x)
x = MaxPooling1D(4)(x)
x = Conv1D(64, 5, activation='relu')(x)
x = MaxPooling1D(4)(x) # global max pooling
x = Flatten()(x)
x = Dense(64, activation='relu')(x)
preds = Dense(labels_Index, activation='softmax')(x)
model = Model(sequence_input, preds)
model.fit(X_train, Y_train, epochs=10, verbose = 1)
Keras doesn't order anything; it all depends on how the classes in the data you used to train the model are defined and one-hot encoded.
You can usually recover the integer class label by taking the argmax of the class probability array for each sample.
From your example, 0.1 is class 0, 0.4 is class 1, 0.003 is class 2, 0.001 is class 3, 0.5 is class 4, and 0.003 is class 5 (6 classes in total).
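A minimal sketch of the argmax step, assuming the one-hot targets follow to_categorical's convention (column i corresponds to integer label i) and that numpy is imported as np:
probs = model.predict(X_test)            # shape (n_samples, n_classes)
pred_labels = np.argmax(probs, axis=1)   # column index = integer class label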
