Keras doesn't predict multi-output correctly - python

I have a dataset with two features, and I want to predict those same two features. Here is an example of the data:
raw = {'one': ['41.392953', '41.392889', '41.392825', '41.392761', '41.392697'],
       'two': ['2.163917', '2.163995', '2.164072', '2.164150', '2.164229']}
When I use Keras (my code is below):
# example of making predictions for a regression problem
from keras.models import Sequential
from keras.layers import Dense
X = raw[:-1]
y = raw[1:]
# define and fit the final model
model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X[0:len(X)-1], y[0:len(y)-1], epochs=1000, verbose=0)
# make a prediction
Xnew=X[len(X)-1:len(X)]
ynew = model.predict(Xnew)
# show the inputs and predicted outputs
print("X=%s, Predicted=%s" % (Xnew, ynew))
However, the output is different from what I expect: it should contain two values with magnitudes similar to the inputs.
X= latitude longitude
55740 41.392052 2.164564, Predicted=[[21.778254]]

If you want to have two outputs, you have to explicitly specify them in your output layer. For example:
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
X = tf.random.normal((341, 2))
Y = tf.random.normal((341, 2))
# define and fit the final model
model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X, Y, epochs=1, verbose=0)
# make a prediction
Xnew=tf.random.normal((1, 2))
ynew = model.predict(Xnew)
# show the inputs and predicted outputs
print("X=%s, Predicted=%s" % (Xnew, ynew))
# X=tf.Tensor([[-0.8087067 0.5405918]], shape=(1, 2), dtype=float32), Predicted=[[-0.02120915 -0.0466493 ]]

I think the problem is your input format. Why not use 4 input dimensions?
I tried a different format (NumPy), and the output is quite good.
import numpy as np
raw = np.array([[41.392953, 41.392889, 41.392825, 41.392761, 41.392697],
                [2.163917, 2.163995, 2.164072, 2.164150, 2.164229]])
# example of making predictions for a regression problem
from keras.models import Sequential
from keras.layers import Dense
X = raw[:,:-1]
y = raw[:,-1]
# define and fit the final model
model = Sequential()
model.add(Dense(4, input_dim=4, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=1000, verbose=0)
# make a prediction
Xnew=X[len(X)-1:len(X)]
ynew = model.predict(Xnew)
# show the inputs and predicted outputs
print("X=%s, Predicted=%s" % (Xnew, ynew))
Outputs:
X=[[2.163917 2.163995 2.164072 2.16415 ]], Predicted=[[2.3935468]]
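Putting the two answers together for the original latitude/longitude data, a minimal two-input, two-output sketch could look like this (assuming each point should predict the next one; the names points, Xnew, and ynew are illustrative):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
# one row per point: [latitude, longitude]
points = np.array([[41.392953, 2.163917],
                   [41.392889, 2.163995],
                   [41.392825, 2.164072],
                   [41.392761, 2.164150],
                   [41.392697, 2.164229]])
X = points[:-1]  # each input is a point
y = points[1:]   # the target is the next point
model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='linear'))  # two outputs: next latitude and longitude
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=1000, verbose=0)
# predict the point after the last known one
Xnew = points[-1:]
ynew = model.predict(Xnew)  # shape (1, 2)
print("X=%s, Predicted=%s" % (Xnew, ynew))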

Related

TensorFlow and Keras // 2-dimensional input / 1st day beginner

It is my first day with tf and keras. I did a quick tutorial, which worked fine but left me with a lot of questions.
Can someone show me how to use two data inputs instead of one?
import keras
import numpy as np
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd',loss='mean_squared_error')
xs = np.array([1,2,3,4,5,6,7], dtype=int) # input data 1
ys = np.array([8,11,14,17,20,23,26], dtype=int)
# formula is: 3*x+5
model.fit(xs, ys, epochs=500)
print(model.predict([10.0]))
Add a few hidden layers for feature detection. If you want multiple features, you will need to change the shape of X and the input shape:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Input

xs = np.array([1, 2, 3, 4, 5, 6, 7], dtype=int)
ys = np.array(list(map(lambda x: 3 * x + 5, xs)))  # formula is: 3*x+5
plt.plot(xs, ys)
X_train, X_test, y_train, y_test = train_test_split(xs.reshape(-1, 1), ys, test_size=0.3)
model = Sequential()
model.add(Input(shape=(1,), name='main_input'))
model.add(Dense(200, activation='tanh'))
model.add(Dense(100, activation='tanh'))
model.add(Dense(32, activation='tanh'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
history = model.fit(X_train, y_train, epochs=1000, verbose=0)
# plot the test predictions against the true line
results = model.predict(X_test).flatten()
for value, result in zip(X_test[:, 0], results):
    plt.scatter(value, result)
plt.plot(xs, ys)
plt.show()
Going off of Golden's answer, here is an example of adding another feature "x". You should just mess around with the layers and sizes.
import keras
import numpy as np
xs = np.array([1,2,3,4,5,6,7], dtype=int) # input data 1
x = np.array([3,5,7,9,11,13,15], dtype=int) # input data 2
ys = np.array([3,10,21,36,55,78,105], dtype=int)
# formula is: ys = xs * x
input_data = np.column_stack([xs, x])  # shape (7, 2): one row per sample
model = keras.models.Sequential()
model.add(keras.Input(shape=input_data.shape[1:]))
model.add(keras.layers.Dense(500, activation='tanh'))
model.add(keras.layers.Dense(200, activation='tanh'))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
model.fit(input_data, ys, epochs=500)
print(model(np.array([[10, 21]])).numpy())
See https://www.pyimagesearch.com/2019/02/04/keras-multiple-inputs-and-mixed-data/ for how to create a simple feedforward neural network with 10 inputs:
model = Sequential()
model.add(Dense(8, input_shape=(10,), activation="relu"))
model.add(Dense(4, activation="relu"))
model.add(Dense(1, activation="linear"))
This network is a simple feedforward neural network with 10 inputs, a first hidden layer with 8 nodes, a second hidden layer with 4 nodes, and a final output layer used for regression.
Keras also allows you to create multiple networks with separate inputs and concatenate them into a dense layer with one or more outputs.
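A minimal sketch of that pattern using the functional API (the input sizes 4 and 6 here are arbitrary placeholders):
from keras.models import Model
from keras.layers import Input, Dense, Concatenate

# two branches, each with its own input
input_a = Input(shape=(4,))
branch_a = Dense(8, activation='relu')(input_a)
input_b = Input(shape=(6,))
branch_b = Dense(8, activation='relu')(input_b)

# concatenate the branches into a shared dense head
merged = Concatenate()([branch_a, branch_b])
output = Dense(1, activation='linear')(merged)

model = Model(inputs=[input_a, input_b], outputs=output)
model.compile(optimizer='adam', loss='mse')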

Evaluation of keras model returns loss:0 and accuracy:0

I am trying to learn Keras. As a tutorial I used https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
Why does model.evaluate(X) return loss: 0 and accuracy: 0?
# first neural network with keras make predictions
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
# load the dataset
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:,0:8]
y = dataset[:,8]
# define the keras model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X, y, epochs=150, batch_size=10)
# make class predictions with the model
predictions = (model.predict(X) > 0.5).astype(int)
# summarize the first 5 cases
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))
print(model.evaluate(X))
print(model.predict(X[-5:]))
I forgot to pass the target output to model.evaluate().
print(model.evaluate(X, y))
This works fine!

Setting output variable in deep learning

I have this code:
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:,0:8]
y = dataset[:,8]
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=150, batch_size=10)
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
I need to change the output column so that it predicts/learns from a score (for instance, 1 to a million) instead of 0 or 1 (sigmoid).
For your case, you need to use relu as the activation function in the last (output) layer instead of sigmoid, since the range of relu is [0, inf).
In that case you also need to use MSE as your loss.
Conceptually, the problem you are trying to solve is a regression problem.
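A minimal sketch of those changes, assuming the score replaces the last column of the dataset:
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense

dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:, 0:8]
y = dataset[:, 8]  # assumed to hold the score (e.g. 1 to a million) instead of 0/1

model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='relu'))  # relu output covers [0, inf)
model.compile(loss='mse', optimizer='adam', metrics=['mae'])  # regression loss and metric
model.fit(X, y, epochs=150, batch_size=10)
_, mae = model.evaluate(X, y)
print('MAE: %.2f' % mae)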

ValueError : Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 18]

I'm new to Keras and I'm trying to build a model for personal use/future learning. I've just started with Python and I came up with this code (with the help of videos and tutorials). I have a dataset of 16324 instances; each instance consists of 18 features and 1 dependent variable.
import pandas as pd
import os
import time
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint
EPOCHS = 10
BATCH_SIZE = 64
NAME = f"-TEST-{int(time.time())}"
df = pd.read_csv("EntryData.csv", names=['1SH5', '1SHA', '1SA5', '1SAA', '1WH5', '1WHA', '2SA5', '2SAA', '2SH5', '2SHA', '2WA5', '2WAA', '3R1', '3R2', '3R3', '3R4', '3R5', '3R6', 'Target'])
df_val = 14554
validation_df = df[df.index > df_val]
df = df[df.index <= df_val]
train_x = df.drop(columns=['Target'])
train_y = df[['Target']]
validation_x = validation_df.drop(columns=['Target'])
validation_y = validation_df[['Target']]
model = Sequential()
model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(LSTM(128, return_sequences=True))
model.add(Dropout(0.1))
model.add(BatchNormalization())
model.add(LSTM(128))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax'))
opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
tensorboard = TensorBoard(log_dir=f'logs/{NAME}')
filepath = "RNN_Final-{epoch:02d}-{val_acc:.3f}"
checkpoint = ModelCheckpoint("models/{}.model".format(filepath), monitor='val_acc', verbose=1, save_best_only=True, mode='max')  # saves only the best ones
history = model.fit(
    train_x, train_y,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=(validation_x, validation_y),
    callbacks=[tensorboard, checkpoint],
)
score = model.evaluate(validation_x, validation_y, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
model.save("models/{}".format(NAME))
The line
model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True))
throws an error:
ValueError: Input 0 of layer lstm is incompatible with the layer:
expected ndim=3, found ndim=2. Full shape received: [None, 18]
I have been searching for a solution on this site and on Google for a few hours now, and I was not able to find a proper answer or to implement the solutions for similar problems.
Thank you for any tips.
An LSTM layer expects three-dimensional input in this format:
(n_samples, time_steps, features)
There are two main ways this can be a problem:
1. Your input is 2D.
2. You have stacked (multiple) LSTM layers.
1. Your input is 2D
You need to reshape your input to 3D:
x = x.reshape(len(x), 1, x.shape[1])
# or
x = np.expand_dims(x, 1)
Then, specify the right input shape in the first layer:
LSTM(64, input_shape=(x.shape[1:]))
2. You have stacked LSTM layers
By default, LSTM layers do not return sequences, i.e., they return 2D output. This means the second LSTM layer will not get the 3D input it needs. To fix this, set return_sequences=True on the earlier layers:
tf.keras.layers.LSTM(8, return_sequences=True),
tf.keras.layers.LSTM(8)
Here's how to reproduce and solve the 2D input problem:
import tensorflow as tf
import numpy as np
x = np.random.rand(100, 10)
# x = np.expand_dims(x, 1) # uncomment to solve the problem
y = np.random.randint(0, 2, 100)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit(x, y, validation_split=0.1)
Here's how to reproduce and solve the stacked LSTM layers problem:
import tensorflow as tf
import numpy as np
x = np.random.rand(100, 1, 10)
y = np.random.randint(0, 2, 100)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8),  # use return_sequences=True to solve the problem
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
history = model.fit(x, y, validation_split=0.1)
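Applied to the question's data, a sketch of the 2D-input fix could look like this (assuming each row is treated as a single time step):
import numpy as np
# train_x and validation_x are the (n, 18) frames from the question
train_x = np.expand_dims(train_x.to_numpy(), 1)            # -> (n, 1, 18)
validation_x = np.expand_dims(validation_x.to_numpy(), 1)  # -> (m, 1, 18)
# the first layer then matches the new shape
model.add(LSTM(128, input_shape=train_x.shape[1:], return_sequences=True))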

python keras neural network prediction not working (outputs 0 or 1)

I have created a neural network with Keras for predicting addition.
I have 2 inputs and 1 output (the result of adding the 2 inputs).
I trained my neural network with TensorFlow and then tried to predict addition, but the program returns 0 or 1, not 3, 4, 5, etc.
This is my code:
from keras.models import Sequential
from keras.layers import Dense
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
dataset = numpy.loadtxt("data.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:2]
Y = dataset[:,2]
# create model
model = Sequential()
model.add(Dense(12, input_dim=2, init='uniform', activation='relu'))
model.add(Dense(2, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=2)
# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)
And my file data.csv:
1,2,3
3,3,6
4,5,9
10,8,18
1,3,4
5,3,8
For example:
1+2=3
3+3=6
4+5=9
...etc.
But I get this as output: 0,1,0,0,1,0,1...
Why didn't I get the output as 3,6,9...?
I updated the code to use another loss function, but I have the same problem:
from keras.models import Sequential
from keras.layers import Dense
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("data.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:2]
Y = dataset[:,2]
# create model
model = Sequential()
model.add(Dense(12, input_dim=2, init='uniform', activation='relu'))
model.add(Dense(2, init='uniform', activation='relu'))
#model.add(Dense(1, init='uniform', activation='sigmoid'))
model.add(Dense(1, input_dim=2, init='uniform', activation='linear'))
# Compile model
#model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=2)
# calculate predictions
predictions = model.predict(X)
# round predictions
rounded = [round(x[0]) for x in predictions]
print(rounded)
Output: 1,1,1,3,1,1,...etc.
As @ebeneditos mentioned, you need to change the activation function in your last layer to something other than sigmoid. You can try changing it to linear:
model.add(Dense(1, init='uniform', activation='linear'))
You should also change your loss function to something like mean squared error, as your problem is more of a regression problem than a classification one (binary_crossentropy is used as a loss function for binary classification problems):
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
This is due to the sigmoid function you have in the last layer. As it is defined, sigmoid(x) = 1 / (1 + exp(-x)), it can only take values between 0 and 1, so you should change the last layer's activation function.
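For intuition, a quick numeric check of that range (a small illustrative snippet):
import numpy as np
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))
print(sigmoid(np.array([-10.0, 0.0, 10.0])))
# [4.5397869e-05 5.0000000e-01 9.9995460e-01] -- always strictly between 0 and 1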
You can try this instead (with Dense(8) instead of Dense(2)):
# Create model
model = Sequential()
model.add(Dense(12, input_dim=2, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='linear'))
# Compile model
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=2)
