How to convert an LSTM model to linear regression with Keras? - python

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(512, input_shape=(None, 1), return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(512))  # input_shape is only needed on the first layer
model.add(Dropout(0.3))
model.add(Dense(1))
# note: accuracy is not meaningful for regression, so only the MSE loss is tracked
model.compile(loss='mean_squared_error', optimizer='rmsprop')
model.summary()
hist = model.fit(x_train, y_train, epochs=10, batch_size=16, verbose=2)
p = model.predict(x_test)
I have this code, and p holds the model's predictions on x_test. I want to convert the predicted time series p into a linear regression model (time series p -> linear regression model).
How can I accomplish this?
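One way to read this (a sketch, not the only interpretation): fit an ordinary least-squares line to the predicted series, using the time index as the single regressor. A minimal example with scikit-learn, assuming p is the array of predictions from above:
import numpy as np
from sklearn.linear_model import LinearRegression

p_flat = p.ravel()                         # LSTM predictions as a flat series
t = np.arange(len(p_flat)).reshape(-1, 1)  # time index as the single feature

reg = LinearRegression().fit(t, p_flat)
trend = reg.predict(t)                     # fitted linear trend of the series
print("slope:", reg.coef_[0], "intercept:", reg.intercept_)
If you instead want a regression from the original inputs to p, pass x_test reshaped to 2-D in place of t.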

Related

Evaluate a deep learning model

I want to evaluate the following deep learning model.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.metrics import classification_report

lst = []
for i in range(10):
    model = Sequential()
    model.add(Dense(64, kernel_initializer='he_normal', input_dim=X_train.shape[1], activation='relu'))
    model.add(Dropout(0.7))
    model.add(Dense(64, kernel_initializer='he_normal', activation='relu'))
    model.add(Dropout(0.7))
    model.add(Dense(1, activation='sigmoid'))
    model.summary()
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['Recall'])
    early_stopping = EarlyStopping(monitor='val_loss', patience=10)
    history = model.fit(X_train, y_train, epochs=150, batch_size=128,
                        validation_data=(X_val, y_val), callbacks=[early_stopping])
    # generate predictions on the test set
    y_pred = model.predict(X_test)
    y_pred = (y_pred > 0.5)
    clsf = classification_report(y_test, y_pred)
    lst.append(clsf)
    print(clsf)
With this, I run the model 10 times and then take the average of the recall metric.
I am wondering whether this procedure is right, or whether there is a better way to do it.
Any suggestions? Thanks.
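One possible refinement (a sketch with the same loop structure, not the only valid procedure): classification_report returns a string, so averaging recall from the stored reports means re-parsing text. Collecting the number directly with sklearn.metrics.recall_score keeps the average and spread easy to compute:
import numpy as np
from sklearn.metrics import recall_score

recalls = []
for i in range(10):
    # ... build, compile and fit the model exactly as above ...
    y_pred = (model.predict(X_test) > 0.5)
    recalls.append(recall_score(y_test, y_pred))

print("mean recall: {:.3f} (+/- {:.3f})".format(np.mean(recalls), np.std(recalls)))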

I'd like to convert this Keras model to PyTorch, but I don't know how to build the neural network

The HAR dataset should be analyzed using an LSTM and a 1D CNN.
I need to plot the change in loss during training and compute the confusion matrix.
I don't know how to write the __init__ and forward functions in PyTorch.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, Dropout, Flatten, Dense

# define model
model = Sequential()
model.add(ConvLSTM2D(filters=64, kernel_size=(1, 3), activation='relu', input_shape=(n_steps, 1, n_length, n_features)))
model.add(Dropout(0.1))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network
hist = model.fit(X_train, Y_train, epochs=epochs, validation_data=(X_test, Y_test), batch_size=batch_size, verbose=verbose)
# evaluate model
(loss, accuracy) = model.evaluate(X_test, Y_test, batch_size=batch_size, verbose=verbose)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss, accuracy * 100))
The above is an LSTM model implemented in Keras.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', padding='same'))
model.add(Dropout(0.3))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit network (checkpoint is a ModelCheckpoint callback defined elsewhere)
history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test),
                    epochs=epochs, batch_size=batch_size, callbacks=[checkpoint], verbose=verbose)
# evaluate model
(loss, accuracy) = model.evaluate(X_test, Y_test, batch_size=batch_size, verbose=verbose)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss, accuracy * 100))
The above is a 1D CNN model implemented in Keras.
I started deep learning a few months ago, so I don't know how to translate these to PyTorch. Please help.
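For the 1D CNN half, a minimal PyTorch sketch of the __init__/forward structure could look like the following. It assumes inputs shaped (batch, n_timesteps, n_features) as in Keras; PyTorch's Conv1d expects channels first, and nn.CrossEntropyLoss applies the softmax itself, so the model returns raw logits:
import torch
import torch.nn as nn

class CNN1D(nn.Module):
    # rough PyTorch counterpart of the Keras 1D CNN above (a sketch)
    def __init__(self, n_timesteps, n_features, n_outputs):
        super().__init__()
        self.conv1 = nn.Conv1d(n_features, 64, kernel_size=3)     # 'valid' padding
        self.conv2 = nn.Conv1d(64, 64, kernel_size=3, padding=1)  # 'same' padding
        self.drop = nn.Dropout(0.3)
        self.pool = nn.MaxPool1d(2)
        # the 'valid' conv shortens the sequence by 2; pooling halves it
        self.fc1 = nn.Linear(64 * ((n_timesteps - 2) // 2), 100)
        self.fc2 = nn.Linear(100, n_outputs)

    def forward(self, x):
        x = x.permute(0, 2, 1)         # (batch, time, feat) -> (batch, feat, time)
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x = self.pool(self.drop(x))
        x = x.flatten(1)               # same role as Keras Flatten()
        x = torch.relu(self.fc1(x))
        return self.fc2(x)             # logits; pair with nn.CrossEntropyLoss
The training loop, loss curve, and confusion matrix then live in plain PyTorch code: one optimizer step per batch, accumulating loss.item() per epoch for the plot.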

How to extend an LSTM model to a probabilistic Bayesian LSTM model?

I have an LSTM model for regression in Python and I want to extend it to a probabilistic Bayesian LSTM. In effect, I want to learn the probability distribution of the outputs. If I map the output to a Normal distribution, is it possible to provide confidence intervals for both the mean and the variance?
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(100, activation='relu', return_sequences=True, input_shape=(n_steps_in, n_features)))
model.add(LSTM(100, activation='relu'))
model.add(Dense(n_steps_out))
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(X, y, epochs=100, verbose=1)
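A fully Bayesian LSTM is one route (e.g. TensorFlow Probability layers), but a common lightweight approximation is Monte Carlo dropout: keep dropout active at prediction time and treat repeated stochastic forward passes as samples from the predictive distribution. A sketch with the Keras functional API (the 0.2 dropout rate and the 100 samples are assumptions):
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(n_steps_in, n_features))
x = layers.LSTM(100, activation='relu', return_sequences=True)(inputs)
x = layers.Dropout(0.2)(x, training=True)  # dropout stays on at inference
x = layers.LSTM(100, activation='relu')(x)
x = layers.Dropout(0.2)(x, training=True)
outputs = layers.Dense(n_steps_out)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=100, verbose=1)

# draw 100 stochastic passes, then summarize the mean and a 95% interval
samples = np.stack([model.predict(X, verbose=0) for _ in range(100)])
mean = samples.mean(axis=0)
lower, upper = np.percentile(samples, [2.5, 97.5], axis=0)
This gives an interval on the mean prediction; to model the output variance too, a common extension is a Dense head that predicts a mean and a log-variance, trained with a Gaussian negative log-likelihood loss.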

How can I extract Flatten Layer Output for each epoch?

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense
from tensorflow.keras.callbacks import ModelCheckpoint

model = Sequential()
model.add(Conv2D(50, (5, 5), activation='relu', input_shape=(5, 5, 1), kernel_initializer='he_normal'))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.summary()
# compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# save_freq='epoch' saves once per epoch (save_freq=1 would save every batch)
model_checkpoint = ModelCheckpoint(
    r'C:\Users\globo\Desktop\Test_CNN\Results\Kernel5x5\Weights' + '\\' + test + r'\model_test{epoch:02d}.h5',
    save_freq='epoch', save_weights_only=True)
# fit the model
history = model.fit(X_train, Y_train, epochs=10, batch_size=32, verbose=1, callbacks=[model_checkpoint], shuffle=True, validation_split=0.5)
I'm already extracting the weights for each epoch with ModelCheckpoint, but how can I extract the flatten layer's output for each epoch and save it?
Doing this with Sequential models is not really feasible; you should use the functional API instead:
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense

inp = Input((5, 5, 1))
x = Conv2D(50, (5, 5), activation='relu', kernel_initializer='he_normal')(inp)
xflatten = Flatten()(x)
out = Dense(1, activation='sigmoid')(xflatten)

main_model = Model(inp, out)  # this works the same as your Sequential model
# flatten_model shares weights with main_model but outputs the flatten layer;
# it never trains, so there is no need to compile it
flatten_model = Model(inp, xflatten)

main_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = main_model.fit(X_train, Y_train, epochs=10, batch_size=32, verbose=1, callbacks=[model_checkpoint], shuffle=True, validation_split=0.5)
To see the flatten layer's output:
flatten_model.predict(X)
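To get the flatten output once per epoch rather than once at the end, one option is a small custom callback (a hypothetical helper, not a built-in; the "features" output directory is also an assumption) that runs flatten_model at the end of every epoch. Because flatten_model shares weights with main_model, it always reflects the current training state:
import numpy as np
from tensorflow.keras.callbacks import Callback

class FlattenSaver(Callback):
    # hypothetical helper: dump the Flatten activations after every epoch
    def __init__(self, flatten_model, data, out_dir):
        super().__init__()
        self.flatten_model = flatten_model
        self.data = data
        self.out_dir = out_dir

    def on_epoch_end(self, epoch, logs=None):
        feats = self.flatten_model.predict(self.data, verbose=0)
        np.save("{}/flatten_epoch{:02d}.npy".format(self.out_dir, epoch), feats)

history = main_model.fit(X_train, Y_train, epochs=10, batch_size=32, verbose=1,
                         callbacks=[model_checkpoint, FlattenSaver(flatten_model, X_train, "features")],
                         shuffle=True, validation_split=0.5)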

Creating an RNN with Keras Python

I'm new to machine learning and Keras. I made a neural network with Keras for regression that looks like this:
model = Sequential()
model.add(Dense(57, input_dim=44, kernel_initializer='normal', activation='relu'))
model.add(Dense(45, activation='relu'))
model.add(Dense(35, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(18, activation='relu'))
model.add(Dense(15, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1, activation='linear'))
My data has 44 dimensions, so could you please give me an example of how I could make an RNN? I'm trying this:
model = Sequential()
model.add(LSTM(44, input_shape=(6900, 44), ))
model.add(Dense(1))
model.compile(loss='mape', optimizer='adam', metrics=['mse', 'mae', 'mape'])
model.fit(X_train, y_train, epochs=100, batch_size=10, verbose=1)
But I get this error:
Error when checking input: expected lstm_13_input to have 3 dimensions, but got array with shape (6900, 44)
As far as I understand, your data is 44-dimensional and not a time series. An RNN computes over a sequence, i.e. each sample is a 2D tensor (steps x features) rather than a 1D vector. But you can still use an RNN on 1D vectors by interpreting each one not as a single n-dimensional vector but as a time series of n steps, each holding a 1-dimensional value.
model = Sequential()
# reinterpret each 44-dimensional sample as 44 time steps of 1 feature
model.add(Reshape((44, 1), input_shape=(44,)))
model.add(LSTM(44))
model.add(Dense(1))
model.compile(loss='mape', optimizer='adam', metrics=['mse', 'mae', 'mape'])
model.fit(X_train, y_train, epochs=100, batch_size=10, verbose=1)
