How to get rid of ridiculously high loss when data is already normalized? - python

I have 500 samples of housing data, which I have converted entirely to numbers. It has 12 feature columns which are used to predict one price column.
However, when I run the model, the loss is massive (14 digits). I have normalized the data, but this had no effect. As a result, the program's predictions are very far off, on the order of 100x. What can I do to fix this? Here is the code:
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import pandas as pd
from sklearn.preprocessing import StandardScaler, MinMaxScaler
data = pd.read_csv('Housing.csv')
valdata = pd.read_csv('val.csv')
mapping1 = {'yes': 1, 'no': 0}
mapping2 = {'furnished': 2, 'semi-furnished': 1, "unfurnished": 0}
cols_to_convert = ['mainroad', 'guestroom', "basement", "hotwaterheating", "airconditioning", "prefarea"]
for col in cols_to_convert:
    data[col] = data[col].map(mapping1)
    valdata[col] = valdata[col].map(mapping1)
data["furnishingstatus"] = data["furnishingstatus"].map(mapping2)
valdata["furnishingstatus"] = valdata["furnishingstatus"].map(mapping2)
x_train = np.array(data.drop("price", axis=1))
y_train = np.array(data["price"])
scaler = MinMaxScaler()
x_train = scaler.fit_transform(x_train)
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
input_shape = x_train[0].shape
inputs = Input(shape=input_shape)
# Define layers
dense1 = Dense(8, activation='relu')(inputs)
dense2 = Dense(1, activation='linear')(dense1)
model = Model(inputs=inputs, outputs=dense2)
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train, epochs=100, validation_split=0.2)
x_new = np.array([[7420, 4, 2, 3, 1, 0, 0, 0, 1, 2, 1, 2]])
y_new = model.predict(x_new)
print(y_new)
And here is roughly what the CSV file looks like before mapping the strings to numbers.
[Image of the CSV file]

You can try to improve your model by increasing the number of hidden layers (say, from 1 to 2) and/or increasing the number of units in the hidden Dense layers (say, to 256 and 64).
But you might actually get better results using other ML algorithms, such as Random Forests or XGBoost, instead of a DNN. Please check the following article, which shows that Random Forests can outperform DNNs on tabular data like yours:
https://www.kdnuggets.com/2019/06/random-forest-vs-neural-network.html
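For illustration, here is a minimal sketch of the wider two-hidden-layer network suggested above; the 256/64 unit sizes are only the example values from this answer, not tuned hyper-parameters, and it reuses the x_train and y_train arrays prepared in the question's code:
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
# two hidden layers (256 and 64 units) instead of a single 8-unit layer
inputs = Input(shape=x_train[0].shape)
hidden1 = Dense(256, activation='relu')(inputs)
hidden2 = Dense(64, activation='relu')(hidden1)
outputs = Dense(1, activation='linear')(hidden2)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train, epochs=100, validation_split=0.2)
And a quick Random Forest baseline on the same arrays (n_estimators=100 is only an illustrative choice, not a tuned value):
from sklearn.ensemble import RandomForestRegressor
# tree ensembles often work well on small tabular datasets without feature scaling
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(x_train, y_train)
print(rf.predict(x_train[:5]))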

Related

ResNet for text data

Hi, I want to use ResNet for text data. I tried to look at some code examples for a lot of other data, and in the end I wrote the following code. But I'm not sure whether it's the correct way to do ResNet or not.
NOTE: this part is optional; if I receive an opinion on it, that would be great, but I'm going to try it once the code above is corrected. If it is the correct way, then I want to implement it like this ----> the ResNet should contain 18 layers in total, divided into four stages, and each stage should consist of two convolutional blocks. Each convolutional block should contain two convolutional layers with batch normalization and ReLU non-linearity in between. Then, the ResNet should pass the output of the convolutional layers to two fully-connected layers that use the reduced data to classify the initial data into a given website class. Last but not least, you should use the Adam optimizer and categorical cross-entropy (typically used for multi-class classification problems). Make sure to identify and use the optimal hyper-parameters for your ResNet. (A rough sketch of one such convolutional block is shown after the code below.)
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
class ResNet_class():
    def __init__(self):
        # Cross-Validate
        self.no_of_folds = int(input('enter no of K_fold: '))
        self.kf = KFold(self.no_of_folds, shuffle=True, random_state=42)  # Use for KFold classification
        self.EPOCHS = int(input('enter no of epochs: '))

    def check_test(self):
        df = pd.read_csv(
            "https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
            na_values=['NA', '?'])
        df = pd.concat([df, pd.get_dummies(df['job'], prefix="job")], axis=1)
        df.drop('job', axis=1, inplace=True)
        df = pd.concat([df, pd.get_dummies(df['area'], prefix="area")], axis=1)
        df.drop('area', axis=1, inplace=True)
        df = pd.concat([df, pd.get_dummies(df['product'], prefix="product")], axis=1)
        df.drop('product', axis=1, inplace=True)
        med = df['income'].median()
        df['income'] = df['income'].fillna(med)
        df['income'] = zscore(df['income'])
        df['aspect'] = zscore(df['aspect'])
        df['save_rate'] = zscore(df['save_rate'])
        df['subscriptions'] = zscore(df['subscriptions'])
        x_columns = df.columns.drop('age').drop('id')
        x = df[x_columns].values
        y = df['age'].values
        oos_y = []
        oos_pred = []
        fold = 0
        for train, test in self.kf.split(x):
            fold += 1
            print(f"Fold #{fold}")
            x_train = x[train]
            y_train = y[train]
            x_test = x[test]
            y_test = y[test]
            model = Sequential()
            model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
            model.add(Dense(10, activation='relu'))
            model.add(Dense(1))
            model.compile(loss='mean_squared_error', optimizer='adam')
            model.fit(x_train, y_train, validation_data=(x_test, y_test), verbose=0,
                      epochs=self.EPOCHS)
            pred = model.predict(x_test)
            oos_y.append(y_test)
            oos_pred.append(pred)
            score = np.sqrt(metrics.mean_squared_error(pred, y_test))
            print(f"Fold score (RMSE): {score}")
        oos_y = np.concatenate(oos_y)
        oos_pred = np.concatenate(oos_pred)
        score = np.sqrt(metrics.mean_squared_error(oos_pred, oos_y))
        print(f"Final, out of sample score (RMSE): {score}")
        oos_y = pd.DataFrame(oos_y)
        oos_pred = pd.DataFrame(oos_pred)
        oosDF = pd.concat([df, oos_y, oos_pred], axis=1)
resnet = ResNet_class()
resnet.check_test()
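For reference, here is a minimal sketch of one residual block along the lines described in the note above. It assumes Conv1D layers over the text features; the filter counts, sequence length, and class count are purely illustrative assumptions, not a prescribed implementation:
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization, Activation,
                                     Add, GlobalAveragePooling1D, Dense)
from tensorflow.keras.models import Model

def residual_block(x, filters):
    # two Conv1D layers with batch normalization and ReLU in between,
    # plus a skip connection from the block input to the block output
    shortcut = Conv1D(filters, 1, padding='same')(x)  # match channel count for the add
    y = Conv1D(filters, 3, padding='same')(x)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv1D(filters, 3, padding='same')(y)
    y = BatchNormalization()(y)
    y = Add()([shortcut, y])
    return Activation('relu')(y)

# toy usage: sequence length (128) and number of classes (4) are placeholders
inputs = Input(shape=(128, 1))
x = residual_block(inputs, 16)
x = GlobalAveragePooling1D()(x)
outputs = Dense(4, activation='softmax')(x)
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
A full 18-layer version would stack such blocks in four stages with growing filter counts and finish with two fully-connected layers, as the note describes.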

ValueError: Error when checking target: expected dense_4 to have shape (1,) but got array with shape (6,)

I am doing a prediction model using a chronic kidney disease dataset.
However, the shape of my X_train value doesn't seem to be valid.
I have tried to change it, but I got a tuple error.
# import libraries
import glob
from keras.models import Sequential, load_model
import numpy as np
import pandas as pd
from keras.layers import Dense
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
import matplotlib.pyplot as plt
import keras as k
from sklearn.model_selection import train_test_split
# load the data
from google.colab import files
uploaded = files.upload()
df = pd.read_csv('kidney_disease.csv')
#print the first 5 rows of data
df.head(5)
# create a list of column names to keep
columns_to_retain = ['sg', 'al', 'sc', 'hemo', 'pcv', 'wbcc', 'htn', 'classification']
# drop the unneccessary columns
df = df.drop( [col for col in df.columns if not col in columns_to_retain], axis=1)
#drop the rows with na or missing values
df = df.dropna(axis=0)
# transform the non-numeric data in the columns
for column in df.columns:
    if df[column].dtype == np.number:
        continue
    df[column] = LabelEncoder().fit_transform(df[column])
# split the data into independent (X) dataset and dependent (y) dataset
X = df.drop(['classification'], axis=1)
y = df['classification']
# feature scaling
#min-max scaler method scales the dataset in order that all features lies between 0 and 1
X_scaler = MinMaxScaler()
X_scaler.fit(X)
column_names = X.columns
X[column_names] = X_scaler.transform(X)
# split the data into 80% training & 20% testing
X_train, y_train, X_test, y_test = train_test_split(X, y, test_size = 0.2, shuffle=True)
# build the model
model = Sequential()
model.add( Dense(256, input_dim= len(X.columns), kernel_initializer=k.initializers.random_normal(seed=13), activation ='relu') )
model.add( Dense(1, activation = 'hard_sigmoid') )
# compiling the model (the loss function measures how well the model does in training
# & tries to improve on it using the optimizer)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# train the model
history = model.fit(X_train, y_train, epochs = 2000, batch_size= X_train.shape[0])
#print(X_train[0:1].shape)
Do you guys have any idea what the root of this problem is, and could you explain it to me?
Thank you in advance!

Input Shape Keras RNN

I'm working with time-series data that has shape 2000x1001, where 2000 is the number of cases and 1000 columns represent the data in the time domain: displacements in the X direction during a 1 sec period, meaning the timestep is 0.001. The last column represents the speed, the output value that I need to predict based on the displacements during that 1 sec. How should the input data be shaped for an RNN in Keras? I've gone through some tutorials, but I'm still confused about the input shape for an RNN. Thanks in advance.
# load training data
import numpy as np
from numpy import loadtxt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
dataset = loadtxt("Data.csv", delimiter=",")
x = dataset[:,:1000]
y = dataset[:,1000]
#Create train and test dataset with an 80:20 split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
#input scaling
scaler = StandardScaler()
x_train_s =scaler.fit_transform(x_train)
x_test_s = scaler.transform(x_test)
num_samples = x_train_s.shape[0] ## Number of samples
num_vals = x_train_s.shape[1] # Number of elements in each sample
x_train_s = np.reshape(x_train_s, (num_samples, num_vals, 1))
#create model
model = Sequential()
model.add(LSTM(100, input_shape=(num_vals, 1)))
model.add(Dense(1, activation='relu'))
model.compile(loss='mae', optimizer='adam',metrics = ['mape'])
model.summary()
#training
history = model.fit(x_train_s, y_train,epochs=10, verbose = 1, batch_size =64)
Look at this code: it tries to predict the next 4 values based on the previous 6 values.
Follow the comments within the code to see how a very simple input is manipulated for use as input to an RNN/LSTM.
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras import Model
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import RNN, LSTM
"""
creating a toy dataset
lets use this below ```input_sequence``` as the sequence to make data points.
as per the question, we will use 6 points to predict next 4 points
"""
input_sequence = [1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10]
X_train = []
y_train = []
# the first 6 points will be our input data points and the next 4 points will be the data label.
# we then shift by 1 and make such data-point/label pairs
for i in range(len(input_sequence)-9):
    X_train.append(input_sequence[i:i+6])
    y_train.append(input_sequence[i+6:i+10])
X_train = np.array(X_train, dtype=np.float32)
y_train = np.array(y_train, dtype=np.int32)
# X_test for the predictions (contains 6 points)
X_test = np.array([[8,9,10,1,2,3]], dtype=np.float32)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
# this basic LSTM accepts input shaped as [num_inputs, time_steps, data_points], so we reshape accordingly
# 1. num_inputs = how many sequences of 6 points you want to use, i.e. rows (we use X_train.shape[0])
# 2. time_steps = number of timesteps per sample (here each sequence is treated as a single timestep, i.e. 1)
# 3. data_points = number of points per sequence (in our case 6)
X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
x_points = X_train.shape[-1]
print("one input contains {} points".format(x_points))
model = Sequential()
model.add(LSTM(4, input_shape=(1, x_points)))
model.add(Dense(4))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
model.fit(X_train, y_train, epochs=500, batch_size=5, verbose=2)
output = list(map(np.ceil, model.predict(X_test)))
print(output)
Hope it helps. Please ask if you have any doubts.
As explained in the docs, Keras expects the following shape for an RNN:
(batch_size, timesteps, input_dim)
batch_size is the number of samples you feed before a backprop step
timesteps is the number of timesteps for each sample
input_dim is the number of features for each timestep
EDIT more details:
In your case you should go for
batch_input_shape = (batch_size, timesteps, 1)
With batch_size and timesteps selected as you wish.
What about the timesteps?
Let's say you take one of your 2000 samples, and let's say that your sample has 10 elements instead of 1000, for example:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Then, if we choose timesteps=3, you get a batch of length 8:
[[[0], [1], [2]],
[[1], [2], [3]],
[[2], [3], [4]],
[[3], [4], [5]],
[[4], [5], [6]],
[[5], [6], [7]],
[[6], [7], [8]],
[[7], [8], [9]]]
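For illustration, here is a minimal numpy sketch of building that kind of windowed batch from one sample; the 10-element sample and timesteps=3 are just the toy values used above, and the variable names are illustrative:
import numpy as np

sample = np.arange(10)   # the toy sample [0, 1, ..., 9] from above
timesteps = 3

# slide a window of length `timesteps` over the sample; each window becomes
# one entry of shape (timesteps, 1), giving a batch of shape (8, 3, 1)
windows = np.array([sample[i:i + timesteps] for i in range(len(sample) - timesteps + 1)])
windows = windows.reshape(-1, timesteps, 1)
print(windows.shape)  # (8, 3, 1)
The same idea applied to your 2000x1000 displacement matrix would produce one such windowed array per case, with input_dim=1 since each timestep carries a single displacement value.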

InvalidArgumentError: Incompatible shapes with Keras LSTM Net

I want to predict the pressure of a machine. I have 18 input values and the pressure as the output, so I have 19 columns and 7657 rows, as the dataset consists of 7657 time steps of 1 sec each.
I have a problem with the following code:
import tensorflow as tf
import pandas as pd
from matplotlib import pyplot
from sklearn.preprocessing import MinMaxScaler
from sklearn import linear_model
from keras.models import Sequential
from keras.layers import Dense #Standard neural network layer
from keras.layers import LSTM
from keras.layers import Activation
from keras.layers import Dropout
df = pd.read_csv('Testdaten_2_Test.csv',delimiter=';')
feature_col_names=['LSDI','LZT1I', ..... ,'LZT5I']
predicted_class_names = ['LMDI']
x = df[feature_col_names].values
y = df[predicted_class_names].values
x_train_size = 6400
x_train, x_test = x[0:x_train_size], x[x_train_size:len(x)]
y_train_size = 6400
y_train, y_test = y[0:y_train_size], y[y_train_size:len(y)]
nb_model = linear_model.LinearRegression()
nb_model.fit(X=x_train, y=y_train)
nb_predict_train = nb_model.predict(x_test)
from sklearn import metrics
def scale(x, y):
    # fit scaler
    x_scaler = MinMaxScaler(feature_range=(-1, 1))
    x_scaler = x_scaler.fit(x)
    x_scaled = x_scaler.transform(x)
    # fit scaler
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = y_scaler.fit(y)
    y_scaled = y_scaler.transform(y)
    return x_scaler, y_scaler, x_scaled, y_scaled
x_scaler, y_scaler, x_scaled, y_scaled = scale(x, y)
x_train, x_test = x_scaled[0:x_train_size], x_scaled[x_train_size:len(x)]
y_train, y_test = y_scaled[0:y_train_size], y_scaled[y_train_size:len(y)]
x_train=x_train.reshape(x_train_size,1,18)
y_train=y_train.reshape(y_train_size,1,1)
model = Sequential()
model.add(LSTM(10, return_sequences=True,batch_input_shape=(32,1,18)))
model.add(LSTM(10,return_sequences=True))
model.add(LSTM(1,return_sequences=True, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10,batch_size=32)
score = model.evaluate(x_test, y_test,batch_size=32)
predicted = model.predict(x_test)
predicted = y_scaler.inverse_transform(predicted)
predicted = [x if x > 0 else 0 for x in predicted]
correct_values = y_scaler.inverse_transform(y_test)
correct_values = [x if x > 0 else 0 for x in correct_values]
print(nb_predict_train)
I get the following error after the last line of code:
ValueError: Error when checking input: expected lstm_1_input to have 3
dimensions, but got array with shape (1257, 18)
I also tried to reshape the test data, but then I get a very similar error.
I think I'm missing something very easy or basic, but I can't figure it out at the moment, as I'm just a beginner at coding neural networks.
I need this for my master's thesis, so I would be very thankful if anyone could help me out.
The problem is that your model's batch_input_shape is fixed. Your test set has length 1257, which is not divisible by 32. It should be changed as follows:
model.add(LSTM(10, return_sequences=True,batch_input_shape=(None,1,18)))
You should also reshape the test data before the model evaluates it:
x_test= x_test.reshape(len(x)-x_train_size,1,18)
y_test= y_test.reshape(len(y)-x_train_size,1,1)
score = model.evaluate(x_test, y_test,batch_size=32)
Of course, you have to reshape predicted and y_test before calling inverse_transform:
predicted = model.predict(x_test)
predicted= predicted.reshape(len(y)-x_train_size,1)
y_test= y_test.reshape(len(y)-x_train_size,1)

k-fold cross validation won't terminate, stuck at cross_val_score

I am trying to run k-fold cross validation, but for some reason it gets stuck and won't terminate at this line:
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
I can't understand what the problem is or how to fix it.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:13].values
y = dataset.iloc[:, 13].values
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
onehotencoder = OneHotEncoder(categorical_features = [1])
X = onehotencoder.fit_transform(X).toarray()
X = X[:,1:]
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
import keras
from keras.models import Sequential #Required to initialize the ANN
from keras.layers import Dense #Build layers of ANN
from keras.layers import Dropout
# Evaluating the ANN
import keras
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from keras.models import Sequential #Required to initialize the ANN
from keras.layers import Dense #Build layers of ANN
def build_classifier():  # Builds the architecture, i.e. the classifier
    classifier = Sequential()
    classifier.add(Dense(activation = 'relu', input_dim = 11, units = 6, kernel_initializer = 'uniform'))  # add layers
    classifier.add(Dense(activation = 'relu', units = 6, kernel_initializer = 'uniform'))  # add layers
    classifier.add(Dense(activation = 'sigmoid', units = 1, kernel_initializer = 'uniform'))
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return classifier
classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, nb_epoch = 100)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
mean = accuracies.mean()
variance = accuracies.std()
Edit:
I'm on Windows 10, using Anaconda with Python 3.6.
Dataset: Drive link for dataset
It works perfectly when I set n_jobs = 1 but not when n_jobs = -1.
Since you have set n_jobs = -1, all the CPUs are being utilised, as per the documentation mentioned here. However, you must understand that utilising all the CPUs does not necessarily lead to a reduction in execution time, because:
There is overhead involved in creating and allocating resources to new threads.
There might also be other bottlenecks, such as data being too large to be broadcast to all threads at the same time, thread pre-emption over RAM (or other resources), how data is pushed into each thread, etc.
Also, multithreading in Python has various shortcomings; see here and here.
You can check out a similar issue with GridSearchCV and parallelization in this answer.
Also, as mentioned by @ncfith, there is currently no solution for this problem.
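As a practical fallback, based on the edit in the question that the run completes with a single job, a minimal sketch is simply the same call with n_jobs = 1; this only avoids spawning worker processes rather than fixing the underlying parallelism issue:
# identical call, but with n_jobs = 1 so no worker processes are spawned
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = 1)
print(accuracies.mean(), accuracies.std())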
References
Why do I sometime get a crash/freeze with n_jobs > 1 under OSX or Linux?
Similar issue with numpy on MacOS
