Linear Regression Neural Network Tensorflow Keras Python program

I wrote a small "Linear Regression Neural Network Tensorflow Keras Python program".
The input dataset is straight-line data, y = mx + c.
The predicted y values are not correct: they come out as a roughly horizontal line instead of a line with the expected slope.
I ran this program on a Windows laptop with TensorFlow, Keras and Jupyter Notebook.
What should I do to fix this program?
Thanks and best regards,
SSJ
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
n2 = 50
count = 20
n4 = n2 + count
p = 100
m = 10
c = 5
x = np.linspace(n2, n4, p)
y = m * x + c
x
y
plt.scatter(x,y)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
x_normalizer = preprocessing.Normalization(input_shape=[1,])
x_normalizer.adapt(x)
x_normalized = x_normalizer(x)
y_normalizer = preprocessing.Normalization(input_shape=[1,])
y_normalizer.adapt(y)
y_normalized = x_normalizer(y)
y_model = tf.keras.Sequential([
    y_normalizer,
    layers.Dense(1)
])
y_model.compile(optimizer='rmsprop', loss='mse', metrics = ['mae'])
y_hist = y_model.fit(x, y, epochs=100, verbose=0, validation_split = 0.2)
hist = pd.DataFrame(y_hist.history)
hist['epoch'] = y_hist.epoch
hist.head()
hist.tail()
xin = [51,53,59,64]
ypred = y_model.predict(xin)
ypred
plt.scatter(x, y)
plt.scatter(xin, ypred, color = 'r')
plt.grid(linestyle = '--')

Use StandardScaler instead of Normalization
Normalizer acts row-wise, while StandardScaler acts column-wise.
Normalizer does not remove the mean and scale by the standard deviation; instead it scales each whole row to unit norm.
Found here: Difference between StandardScaler and Normalizer
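For intuition, a tiny sketch (using sklearn, which the linked answer is about) contrasting the two on the same array:
import numpy as np
from sklearn.preprocessing import Normalizer, StandardScaler

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# Normalizer scales each ROW to unit L2 norm: [1, 2] -> [0.447, 0.894]
print(Normalizer().fit_transform(a))
# StandardScaler standardizes each COLUMN to zero mean and unit variance
print(StandardScaler().fit_transform(a))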
This is how you can process the data:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from sklearn.preprocessing import StandardScaler
x = np.linspace(50, 70, 100).reshape(-1, 1)
y = 10 * x + 5
x_standard_scaler = StandardScaler().fit(x)
y_standard_scaler = StandardScaler().fit(y)
x_scaled = x_standard_scaler.transform(x)
y_scaled = y_standard_scaler.transform(y)
Remember that you need two separate scalers for x and y, so don't use the same object for both. Also, if you want to use the scaler to process new data at test time, save it in a variable. A good practice is not to refit the scaler on test data.
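If the fitted scalers also need to survive the session (an extra step beyond this answer), joblib can persist them:
import joblib

joblib.dump(x_standard_scaler, 'x_scaler.joblib')   # save the fitted scaler to disk
x_standard_scaler = joblib.load('x_scaler.joblib')  # reload it later to transform new data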
model = Sequential([
    Dense(1, input_dim=1, activation='linear'),
])
model.compile(optimizer='rmsprop', loss='mse')
history = model.fit(x_scaled, y_scaled, epochs=1000, verbose=0, validation_split = 0.2).history
pd.DataFrame(history).plot()
plt.show()
As you can see, the model is converging. It's worth plotting the loss history, which helps to tell whether your model is learning or not.
x_test = np.linspace(20, 100, 10).reshape(-1, 1)
y_test = 10 * x_test + 5
x_test_scaled = x_standard_scaler.transform(x_test)
y_test_scaled = y_standard_scaler.transform(y_test)
If you have test data that you want to use for validation, or just want to predict on, remember to apply the standard scaler again, but without fitting. In most cases it should be fitted on the training data only.
y_test_pred_scaled = model.predict(x_test_scaled)
y_test_pred = y_standard_scaler.inverse_transform(y_test_pred_scaled)
plt.scatter(x_test, y_test, s=30, label='true')
plt.scatter(x_test, y_test_pred, s=15, label='pred')
plt.legend()
plt.show()
If you want your prediction rescaled back to its original range, use inverse_transform. Notice that the prediction on x_test, after rescaling, is very close to y_test.

Related

Predicting the square root of a number using Machine Learning

I am trying to create a program in Python that uses machine learning to predict the square root of a number. Here is what I have done in my program:
created a csv file with numbers and their squares
extracted the data from csv into suitable variables (X stores squares, y stores numbers)
scaled the data using sklearn's, StandardScaler
built the ANN with two hidden layers each of 6 units (no activation functions)
compiled the ANN using SGD as the optimizer and mean squared error as the loss function
trained the model. Loss was around 0.063
tried predicting but the result is something else.
My actual code:
import numpy as np
import tensorflow as tf
import pandas as pd
df = pd.read_csv('CSV/SQUARE-ROOT.csv')
X = df.iloc[:, 1].values
X = X.reshape(-1, 1)
y = df.iloc[:, 0].values
y = y.reshape(-1, 1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.2)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_test_sc = sc.fit_transform(X_test)
X_train_sc = sc.fit_transform(X_train)
sc1 = StandardScaler()
y_test_sc1 = sc1.fit_transform(y_test)
y_train_sc1 = sc1.fit_transform(y_train)
ann = tf.keras.models.Sequential()
ann.add(tf.keras.layers.Dense(units=6))
ann.add(tf.keras.layers.Dense(units=6))
ann.add(tf.keras.layers.Dense(units=1))
ann.compile(optimizer='SGD', loss=tf.keras.losses.MeanSquaredError())
ann.fit(x = X_train_sc, y = y_train_sc1, batch_size=5, epochs = 100)
print(sc.inverse_transform(ann.predict(sc.fit_transform([[144]]))))
OUTPUT: array([[143.99747]], dtype=float32)
Shouldn't the output be 12? Why is it giving me the wrong result?
I am attaching the csv file I used to train my model as well: SQUARE-ROOT.csv
TL;DR: You really need those nonlinearities.
The reason behind it not working could be one (or a combination) of several causes, like bad input data range, flaws in your data, over/underfitting, etc.
However, in this specific case the model you built literally can't learn the function you're trying to approximate: having no nonlinearities makes it a purely linear model, and a linear model can't accurately approximate nonlinear functions.
A Dense layer is implemented as follows:
x_res = activ_func(w*x + b)
where x is the layer input, w the weights, b the bias vector and activ_func the activation function (if one is defined).
Your model, then, mathematically becomes (I'm using indices 1 to 3 for the three Dense layers):
pred = w3 * (w2 * (w1 * x + b1) + b2) + b3
     = w3*w2*w1*x + w3*w2*b1 + w3*b2 + b3
As you see, the resulting model is still linear.
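To check this numerically, here is a small sketch (toy random weights, numpy only; not from the original answer) showing that three stacked linear Dense layers collapse to a single affine map:
import numpy as np

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(1, 6)), rng.normal(size=6)
w2, b2 = rng.normal(size=(6, 6)), rng.normal(size=6)
w3, b3 = rng.normal(size=(6, 1)), rng.normal(size=1)

x = np.array([[2.0]])
h1 = x @ w1 + b1          # first Dense layer, no activation
h2 = h1 @ w2 + b2         # second Dense layer, no activation
deep = h2 @ w3 + b3       # output layer

# The equivalent single affine layer:
w = w1 @ w2 @ w3
b = b1 @ w2 @ w3 + b2 @ w3 + b3
print(np.allclose(deep, x @ w + b))  # True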
Add activation functions and your model becomes capable of learning nonlinear functions too. From there, experiment with the hyperparameters and see how the performance of your model changes.
The reason your code does not work is that you apply fit_transform to your test set, which is wrong. You can fix it by replacing fit_transform(test) with transform(test). Although I don't think StandardScaler is necessary here, please try this:
import numpy as np
import tensorflow as tf
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
N = 10000
X = np.arange(1, N).reshape(-1, 1)
y = np.sqrt(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.2)
sc = StandardScaler()
X_train_sc = sc.fit_transform(X_train)
#X_test_sc = sc.fit_transform(X_test) # wrong!!!
X_test_sc = sc.transform(X_test)
sc1 = StandardScaler()
y_train_sc1 = sc1.fit_transform(y_train)
#y_test_sc1 = sc1.fit_transform(y_test) # wrong!!!
y_test_sc1 = sc1.transform(y_test)
ann = tf.keras.models.Sequential()
ann.add(tf.keras.layers.Dense(units=32, activation='relu')) # with 10,000 samples, a somewhat deeper network may help
ann.add(tf.keras.layers.Dense(units=32, activation='relu'))
ann.add(tf.keras.layers.Dense(units=32, activation='relu'))
ann.add(tf.keras.layers.Dense(units=1))
ann.compile(optimizer='SGD', loss='MSE')
ann.fit(x=X_train_sc, y=y_train_sc1, batch_size=32, epochs=100, validation_data=(X_test_sc, y_test_sc1))
#print(sc.inverse_transform(ann.predict(sc.fit_transform([[144]])))) # wrong!!!
print(sc1.inverse_transform(ann.predict(sc.transform([[144]]))))

Rebuilding LSTM tensorflow model using Keras

Hello, I am new to building models in Python, and I am trying to learn because I need to train a model in Python and then extract its weights and biases to rebuild the model on an FPGA.
I was following this tutorial:
https://medium.com/@curiousily/human-activity-recognition-using-lstms-on-android-tensorflow-for-hackers-part-vi-492da5adef64
I tried to implement the same model from that link using Keras. However, when I trained the Keras model the accuracy was 0.0905, even though it has the same structure as the TensorFlow model.
import keras.layers
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd
from scipy import stats
from sklearn import metrics
import seaborn as sns
from keras.utils.vis_utils import plot_model
import pydot as py
RANDOM_SEED = 42
#Reading Dataset
columns = ['user', 'activity', 'timestamp', 'x_axis', 'y_axis', 'z_axis']
df = pd.read_csv('WISDM_ar_v1.1_raw.txt', header=None, names=columns)
df = df.dropna()
#data_preprocessing
N_TIME_STEPS = 200
N_FEATURES = 3
step = 20
segments = []
labels = []
for i in range(0, len(df) - N_TIME_STEPS, step):
    xs = df['x_axis'].values[i:i + N_TIME_STEPS]
    ys = df['y_axis'].values[i:i + N_TIME_STEPS]
    zs = df['z_axis'].values[i:i + N_TIME_STEPS]
    # Note that we take the most common activity and assign it as the label for the sequence.
    label = stats.mode(df['activity'][i:i + N_TIME_STEPS])[0][0]
    segments.append([xs, ys, zs])
    labels.append(label)
#print(np.array(segments).shape)
#(54901,3,200)
reshaped_segments = np.array(segments, dtype=np.float32).reshape(-1, N_TIME_STEPS, N_FEATURES)
#print(reshaped_segments.shape)
#(54901,200,3)
# Labels one hot encoding
labels = np.array(pd.get_dummies(labels), dtype=np.float32)
#print(labels.shape)
#(54901,6)
X_train, X_test, y_train, y_test = train_test_split(reshaped_segments, labels, test_size=0.2, random_state=RANDOM_SEED)
N_CLASSES = 6
N_HIDDEN_UNITS = 64
model = Sequential()
model.add(LSTM(N_HIDDEN_UNITS, input_shape=(N_TIME_STEPS, N_FEATURES), return_sequences=True, recurrent_activation='relu'))
model.add(LSTM(labels.shape[1], return_sequences=False, recurrent_activation='relu'))
print(model.summary())
opt = keras.optimizers.Adam(learning_rate=0.0025)
model.compile(loss= 'categorical_crossentropy',optimizer=opt,metrics=['categorical_accuracy'])
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
history = model.fit(X_train,y_train,epochs=50,batch_size=1024)
print(model.get_weights())
predictions = model.predict(X_test)
plt.plot(history.history['loss'])
plt.show()
categories = ['Downstairs', 'Jogging', 'Sitting', 'Standing', 'Upstairs', 'Walking']
max_test = np.argmax(y_test, axis=1)
max_predictions = np.argmax(predictions, axis=1)
confusion_matrix = metrics.confusion_matrix(max_test, max_predictions)
plt.figure(figsize=(16, 14))
sns.heatmap(confusion_matrix, xticklabels=categories, yticklabels=categories, annot=True, fmt="d");
plt.title("Confusion matrix")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
model.save('mymodel')
This is my Keras implementation. If someone can guide me on what the difference between the two models is, or whether I am missing something, I would be very grateful.
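Since the end goal is exporting weights and biases for the FPGA, here is a small sketch (standard Keras behavior, not part of the original post) of how a trained LSTM layer exposes its parameters; Keras concatenates the four gate blocks along the last axis in the order input, forget, cell, output:
from keras.layers import LSTM

for layer in model.layers:
    if isinstance(layer, LSTM):
        kernel, recurrent_kernel, bias = layer.get_weights()
        # kernel: (input_dim, 4*units); recurrent_kernel: (units, 4*units); bias: (4*units,)
        print(layer.name, kernel.shape, recurrent_kernel.shape, bias.shape)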

Poor accuarcy score for Semi-Supervised Support Vector machine

I am using a semi-supervised approach for a Support Vector Machine in Python, for image classification on the PASCAL VOC 2007 data.
I have tried the default parameters from the libraries and also tuned them, but I get an extremely bad accuracy of only about 2%.
Below is my code:
import pandas as pd
import numpy as np
from numpy import concatenate
from sklearn import datasets, decomposition, metrics, svm
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings("ignore")
color_layout_features = pd.read_pickle("color_layout_descriptor.pkl")
bow_surf = pd.read_pickle("bow_surf.pkl")
color_hist_features = pd.read_pickle("hist.pkl")
labels = pd.read_pickle("labels.pkl")
# Feat. Scaling
def scale(X, x_min, x_max):
    nom = (X - X.min(axis=0)) * (x_max - x_min)
    denom = X.max(axis=0) - X.min(axis=0)
    denom[denom == 0] = 1
    return x_min + nom / denom
# normalization
def normalize(x):
    return (x - np.min(x)) / (np.max(x) - np.min(x))
color_layout_features_scaled = scale(color_layout_features, 0, 1)
color_hist_features_scaled = scale(color_hist_features, 0, 1)
bow_surf_scaled = scale(bow_surf, 0, 1)
features = np.hstack([color_layout_features_scaled, color_hist_features_scaled, bow_surf_scaled])
# define dataset
X, Y = features, labels
X = normalize(X)
pca = decomposition.PCA(n_components=100)
pca.fit(X)
X = pca.transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.30, random_state=1, stratify=Y)
# split train into labeled and unlabeled
X_train_lab, X_test_unlab, y_train_lab, y_test_unlab = train_test_split(X_train, y_train, test_size=0.30, random_state=1, stratify=y_train)
# create the training dataset input
X_train_mixed = concatenate((X_train_lab, X_test_unlab))
# create "no label" for unlabeled data
nolabel = [-1 for _ in range(len(y_test_unlab))]
# recombine training dataset labels
y_train_mixed = concatenate((y_train_lab, nolabel))
from semisupervised import S3VM
model = S3VM(kernel="Linear", C = 1e-2, gamma = 0.5, lamU = 1.0, probability=True)
#model.fit(X_train_mixed, _train_mixed)
model.fit(np.vstack((X_train_lab, X_test_unlab)), np.append(y_train_lab, nolabel))
#model.fit(np.vstack((label_X_train, unlabel_X_train)), np.append(label_y_train, unlabel_y))
# predict
predict = model.predict(X_test)
acc = metrics.accuracy_score(y_test, predict)
# metric
print("accuracy", acc*100)
accuracy 2.6692291266282298
I am using a transductive version of SVM (TSVM) from the semisupervised library, but I am not sure what I am doing wrong: even after tweaking the parameters I still get the same result. I used https://github.com/rosefun/SemiSupervised/blob/master/semisupervised/TSVM.py as the reference for my implementation. Any inputs would be helpful.
Please consider that, according to the linked documentation, "The unlabeled samples should be labeled as -1".
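For comparison, scikit-learn's built-in semi-supervised estimators use the same -1 convention for unlabeled samples; a minimal sketch on toy data (not the PASCAL features) wrapping an SVC in SelfTrainingClassifier:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=1)
y_mixed = y.copy()
rng = np.random.default_rng(1)
y_mixed[rng.random(len(y)) < 0.7] = -1  # mark ~70% of the samples as unlabeled

clf = SelfTrainingClassifier(SVC(probability=True))
clf.fit(X, y_mixed)
print(clf.score(X, y))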

MLPRegressor working but results don't make any sense

I am building a neural network with my research data in two ways: with a statistical program (SPSS) and with Python.
I am using scikit-learn's MLPRegressor. The problem I have is that, although my code is apparently well written (because it runs), the results do not make sense. The R2 score should be around 0.70, but it is -4147.64; the correlation represented in the graph should be almost linear, but it is just a straight line at a constant distance from the x axis. Also, the x and y axes should have values ranging from 0 to 180, which is not the case (x goes from 20 to 100, y from -4100 to -3500).
If any of you can give a hand I would really appreciate it.
Thank you!
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import neighbors, datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
vhdata = pd.read_csv('vhrawdata.csv')
vhdata.head()
X = vhdata[['PA NH4', 'PH NH4', 'PA K', 'PH K', 'PA NH4 + PA K', 'PH NH4 + PH K', 'PA IS', 'PH IS']]
y = vhdata['PMI']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
from sklearn.preprocessing import Normalizer
scaler = Normalizer().fit(X_train)
X_train_norm = scaler.transform(X_train)
X_test_norm = scaler.transform(X_test)
nnref = MLPRegressor(hidden_layer_sizes=[4], activation='logistic', solver='sgd', alpha=1,
                     learning_rate='constant', learning_rate_init=0.6, max_iter=40000,
                     momentum=0.3).fit(X_train, y_train)
y_predictions= nnref.predict(X_test)
print('Accuracy of NN classifier on training set (R2 score): {:.2f}'.format(nnref.score(X_train_norm, y_train)))
print('Accuracy of NN classifier on test set (R2 score): {:.2f}'.format(nnref.score(X_test_norm, y_test)))
plt.figure()
plt.scatter(y_test,y_predictions, marker = 'o', color='red')
plt.xlabel('PMI expected (hrs)')
plt.ylabel('PMI predicted (hrs)')
plt.title('Correlation of PMI predicted by MLP regressor and the actual PMI')
plt.show()
You have a couple of issues. First, it is important to use the right scaler or normalization when working with an MLP. Neural networks generally train best on inputs in a small range such as 0 to 1, so consider using sklearn's MinMaxScaler to accomplish this.
So:
from sklearn.preprocessing import Normalizer
scaler = Normalizer().fit(X_train)
X_train_norm = scaler.transform(X_train)
X_test_norm = scaler.transform(X_test)
Should be:
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train_norm = scaler.fit_transform(X_train)
X_test_norm = scaler.transform(X_test)  # fit on the training data only, then transform the test set
Next, you are training and testing on the unscaled data, but then performing your scores on the scaled data. Meaning:
nnref = MLPRegressor(hidden_layer_sizes=[4], activation='logistic', solver='sgd', alpha=1,
                     learning_rate='constant', learning_rate_init=0.6, max_iter=40000,
                     momentum=0.3).fit(X_train, y_train)
should be:
nnref = MLPRegressor(hidden_layer_sizes=[4], activation='logistic', solver='sgd', alpha=1,
                     learning_rate='constant', learning_rate_init=0.6, max_iter=40000,
                     momentum=0.3).fit(X_train_norm, y_train)
And...
y_predictions= nnref.predict(X_test)
Should be:
y_predictions= nnref.predict(X_test_norm)
Additional notes:
Scoring the model on its own training data tells you little about real performance: it is evaluated on the same data it learned from, so a high training score can simply reflect overfitting rather than predictive power.
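Putting those fixes together, a minimal sketch of the corrected flow on toy stand-in data (the question's CSV is not reproduced here):
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# Toy stand-in for the question's data
rng = np.random.default_rng(0)
X = rng.random((200, 8)) * 100
y = X @ np.arange(1, 9) + rng.normal(size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = MinMaxScaler().fit(X_train)      # fit on the training split only
X_train_norm = scaler.transform(X_train)
X_test_norm = scaler.transform(X_test)

nnref = MLPRegressor(hidden_layer_sizes=[8], max_iter=4000, random_state=0).fit(X_train_norm, y_train)
print(nnref.score(X_test_norm, y_test))   # R2 on data the model never saw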
Well, I found a mistake: you train the model on samples that weren't normalized:
nnref = MLPRegressor(...).fit(X_train, y_train)
But later you're trying to predict values from normalized samples:
nnref.score(X_train_norm, y_train)
Also, the x and y axes should have values ranging from 0 to 180, which is not the case (x goes from 20 to 100, y from -4100 to -3500).
Scikit-learn does not change values by itself. If X is not in the range you expect, it means that you've changed it somehow, or your assumption about the X values is incorrect.

Learning Sine function seems to take excessive amount of parameters in an ANN (Keras)

I have been trying to do a little research on different function approximation methods, and the first one I tried is an ANN (Artificial Neural Network). The code is below.
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from tensorflow.keras.layers import Input, Dense, Flatten, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model
from sklearn.preprocessing import MinMaxScaler
X = np.linspace(0.0 , 2.0 * np.pi, 20000).reshape(-1, 1)
Y = np.sin(X)
x_scaler = MinMaxScaler()
y_scaler = MinMaxScaler()
X = x_scaler.fit_transform(X)
Y = y_scaler.fit_transform(Y)
plt.plot(X, Y)
plt.show()
inp = Input(shape=(20000, 1))
x = Dense(32, activation='relu')(inp)
x = Dense(64, activation='relu')(x)
x = Dense(128, activation='relu')(x)
x = Dense(256, activation='relu')(x)
predictions = Dense(1, activation='linear')(x)
model = Model(inp, predictions)
model.compile(loss='mse', optimizer='adam')
model.summary()
X = X.reshape((-1, 20000, 1))
Y = Y.reshape((-1, 20000, 1))
history = model.fit(X, Y, epochs=500, batch_size=32, verbose=2)
X_test = np.linspace(0.0 , 2.0 * np.pi, 20000).reshape(-1, 1)
X_test.shape
X_test = x_scaler.transform(X_test)
X_test = X_test.reshape((-1, 20000, 1))
res = model.predict(X_test, batch_size=32)
res = res.reshape((20000, 1))
res_rscl = y_scaler.inverse_transform(res)
Y_rscl = y_scaler.inverse_transform(Y.reshape(20000, 1))
plt.subplot(211)
plt.plot(res_rscl, label='ann')
plt.plot(Y_rscl, label='train')
plt.xlabel('#')
plt.ylabel('value [arb.]')
plt.legend()
plt.subplot(212)
plt.plot(Y_rscl - res_rscl, label='diff')
plt.legend()
plt.show()
The plots (omitted here) show the predicted curve ('ann') overlapping the training curve ('train'), with a small residual in the 'diff' subplot.
As we can see, it indeed approximates the sine curve very well with this architecture. However, I am not really sure I am doing the right thing. It looks strange to me that I need 43,777 parameters to fit the sine curve. Maybe I am wrong. However, looking at this R code (I do not know R at all, but I am guessing that its ANN is much smaller than mine) makes me wonder even more.
My question: Is my approach right? Should I change something so that the number of parameters becomes smaller? Or is it normal that sine is a difficult function and that an ANN needs a good number of parameters to approximate it?
It may be a somewhat open-ended question, but I would really appreciate any direction you can point me to, and any mistake I am making that you can show me.
Note: This question suggests that the cyclic nature of the data is the hard part for an ANN. I would also like to know whether this is really the case, and whether that is why the ANN takes so many parameters.
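One direction, offered as a sketch rather than a definitive answer: Dense layers act on the last axis, so the model above is already applied pointwise to each of the 20,000 values, and the parameter count comes entirely from the 32-64-128-256 layer widths. A much narrower network, with each x treated as its own sample of shape (1,), can also fit one period of sine:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

X = np.linspace(0.0, 1.0, 2000).reshape(-1, 1)   # x pre-scaled to [0, 1]
Y = np.sin(2.0 * np.pi * X)

inp = Input(shape=(1,))                  # one scalar per sample
x = Dense(16, activation='relu')(inp)
x = Dense(16, activation='relu')(x)
out = Dense(1, activation='linear')(x)

model = Model(inp, out)
model.compile(loss='mse', optimizer='adam')
model.summary()                          # 321 parameters in total
model.fit(X, Y, epochs=300, batch_size=32, verbose=0)
Whether 16 units per layer meets your accuracy target is something to experiment with, but it suggests the parameter count is driven by layer width, not by sine being inherently hard.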
