My data shape is the same; I just generated random numbers here. In reality the data are floats in the range -6 to 6, and I scaled them as well. The input layer size and encoding dimension have to remain the same. When I train, the loss starts at 0.631 and stays there the whole time. I changed the learning rate manually. I am new to Python and do not know how to implement a grid search in this code to find the right parameters. What else can I do to tune my network?
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras import optimizers
#Train data
x_train = np.random.rand(2666000)
x_train = (x_train - x_train.min()) / (x_train.max() - x_train.min())  # min-max scaling
x_train = x_train.reshape(-1, 2000)
x_test = []  # empty; testing comes later
#Enc Dimension
encoding_dim=100
#Input shape
input_dim = Input(shape=(2000,))
#Encoding Layer
encoded = Dense(encoding_dim, activation='relu')(input_dim)
#Decoding Layer
decoded = Dense(2000, activation='sigmoid')(encoded)
#Model AE
autoencoder = Model(input_dim, decoded)
#Model Encoder
encoder = Model(input_dim, encoded)
#Encoding
encoded_input = Input(shape=(encoding_dim,))
#Decoding
decoder_layer = autoencoder.layers[-1]
#Model Decoder
decoder = Model(encoded_input, decoder_layer(encoded_input))
optimizer = optimizers.Adadelta(lr=0.1, rho=0.95, epsilon=None, decay=0.0)  # assign it so compile can use it
autoencoder.compile(optimizer=optimizer, loss='binary_crossentropy',
                    metrics=['accuracy'])
#Train and test
epochs = 20  # the original snippet left this undefined; pick any value
autoencoder_train = autoencoder.fit(x_train, x_train,
                                    epochs=epochs, shuffle=False, batch_size=2048)
I suggest adding more hidden layers. If your loss stays the same it means at least one of two things:
Your data is more or less random and there are no relationships to be drawn
Your model is not complex enough to learn meaningful relationships from your data
A rule of thumb for me is that a model should be powerful enough to overfit the data given enough training iterations.
Unfortunately, there is a fine line between sufficiently complex and too complex. You have to play around with the number of hidden layers, the number of units in each layer, and the number of epochs you train for. Since you only have two Dense layers, a good starting point would be to increase model complexity, as in the sketch below.
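For example, a deeper stacked version of the autoencoder above, keeping the 2000-dimensional input and the 100-dimensional code (a sketch; the 512-unit intermediate size is an arbitrary choice):
from keras.layers import Input, Dense
from keras.models import Model

input_dim = Input(shape=(2000,))
# Compress gradually towards the 100-dimensional code
encoded = Dense(512, activation='relu')(input_dim)
encoded = Dense(100, activation='relu')(encoded)
# Mirror the encoder on the way back up
decoded = Dense(512, activation='relu')(encoded)
decoded = Dense(2000, activation='sigmoid')(decoded)
autoencoder = Model(input_dim, decoded)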
If you insist on using a grid search, Keras has a wrapper for scikit-learn, and scikit-learn has a grid search module. A toy example:
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
# The build function must accept the hyperparameters you want to search over
# as keyword arguments with defaults, so GridSearchCV can pass them in.
def create_model(activation='relu', learn_rate=0.1, init='uniform', optimizer='SGD'):
    <return a compiled but untrained keras model built from these arguments>
model = KerasClassifier(build_fn=create_model, batch_size=1000, epochs=10)
#now write out all the parameter values you want to try out for the grid search
activation = ['relu', 'tanh', 'sigmoid']  # extend as needed
learn_rate = [0.1, 0.2]
init = ['uniform', 'normal', 'zero']
optimizer = ['SGD', 'Adam']
param_grid = dict(activation=activation, learn_rate=learn_rate, init=init, optimizer=optimizer)
grid = GridSearchCV(estimator=model, param_grid=param_grid)
result = grid.fit(X, y)
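After fitting, the best parameter combination and its cross-validated score are available on the result object:
print(result.best_params_)
print(result.best_score_)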
I am used to working in PyTorch but now have to learn Tensorflow for my job. I am trying to get up to speed by creating a simple dense network and training it on the MNIST dataset, but I cannot get it to train. My super simple code:
import tensorflow as tf
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.utils import to_categorical
# Load mnist data from keras
(train_data, train_label), (test_data, test_label) = tf.keras.datasets.mnist.load_data(path="mnist.npz")
train_label, test_label = to_categorical(train_label), to_categorical(test_label)
train_data, train_label, test_data, test_label = Flatten()(train_data), Flatten()(train_label), Flatten()(test_data), Flatten()(test_label)
# Create generic SGD optimizer (no learning schedule)
optimizer = SGD(learning_rate = 0.01)
# Define function to build and compile model
def build_mnist_model(input_shape, batch_size = 30):
input_img = Input(shape = input_shape, batch_size = batch_size)
# Pass through dense layer
x = Dense(200, activation = 'relu', use_bias = True)(input_img)
x = Dense(400, activation = 'relu', use_bias = True)(x)
scores = Dense(10, activation = 'softmax', use_bias = True)(x)
# Create and compile tf model
mnist_model = Model(input_img, scores)
mnist_model.compile(optimizer = optimizer, loss = 'categorical_crossentropy')
return mnist_model
# Build the model
mnist_model = build_mnist_model(train_data[0].shape)
# Train the model
mnist_model.fit(
x = train_data,
y = train_label,
batch_size = 30,
epochs = 20,
verbose = 2,
shuffle = True,
# steps_per_epoch = 200
)
When I run this I get
ValueError: When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.
This does not really make sense to me because my train_data and train_label are just regular tensors and per the Tensorflow documentation in this case it should default to the number of samples in the dataset divided by the batch size (which would be 200 in my case).
At any rate, I tried specifying steps_per_epoch = 200 when I call mnist_model.fit() but then I get a different error:
InvalidArgumentError: Incompatible shapes: [60000,10] vs. [30,1]
[[{{node training_4/SGD/gradients/gradients/loss_5/dense_17_loss/softmax_cross_entropy_with_logits_grad/mul}}]]
I can't seem to discern where a size mismatch would come from. In PyTorch, I am used to manually creating batches (by subindexing my data and label tensors) but in Tensorflow this seems to happen automatically. As such, this leaves me quite confused about what batch has the wrong size, how it got the wrong size, etc. I hope this simple model is way easier than I am making it and I just do not know the Tensorflow tricks yet.
Thanks for the help.
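A likely culprit, judging from the error message alone (an assumption, not something confirmed in the thread): Flatten is a Keras layer, so Flatten()(train_data) turns your NumPy arrays into symbolic tensors, and fit then treats its inputs as data tensors that require steps_per_epoch. Flattening with plain NumPy avoids this; a minimal sketch:
# Flatten the images with NumPy instead of the Keras Flatten layer.
# The one-hot labels are already 2-D and need no flattening at all.
train_data = train_data.reshape(-1, 784).astype('float32') / 255.0
test_data = test_data.reshape(-1, 784).astype('float32') / 255.0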
ValueError: Can not squeeze dim[1], expected a dimension of 1, got 3 for 'metrics/sparse_categorical_accuracy/Squeeze' (op: 'Squeeze') with input shapes: [?,3].
The Iris dataset
In this assignment, you will use the Iris dataset. It consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. For a reference, see the following paper:
R. A. Fisher. "The use of multiple measurements in taxonomic problems". Annals of Eugenics. 7 (2): 179–188, 1936.
Your goal is to construct a neural network that classifies each sample into the correct class, as well as applying validation and regularisation techniques.
Load and preprocess the data
First read in the Iris dataset using datasets.load_iris(), and split the dataset into training and test sets.
You can now construct a model to fit to the data. Using the Sequential API, build your model according to the following specifications:
The model should use the input_shape in the function argument to set the input size in the first layer.
The first layer should be a dense layer with 64 units.
The weights of the first layer should be initialised with the He uniform initializer.
The biases of the first layer should be all initially equal to one.
There should then be a further four dense layers, each with 128 units.
This should be followed with four dense layers, each with 64 units.
All of these Dense layers should use the ReLU activation function.
The output Dense layer should have 3 units and the softmax activation function.
In total, the network should have 10 layers.
from numpy.random import seed
seed(8)
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, model_selection
get_ipython().run_line_magic('matplotlib', 'inline')
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Softmax
def read_in_and_split_data(iris_data):
return model_selection.train_test_split(
iris_data["data"],
iris_data["target"],
test_size=0.1
)
# Run your function to generate the test and training data.
iris_data = datasets.load_iris()
(train_data, test_data,
train_targets, test_targets) = read_in_and_split_data(iris_data)
# We will now convert the training and test targets using a one hot encoder.
# Convert targets to a one-hot encoding
train_targets = tf.keras.utils.to_categorical(np.array(train_targets))
test_targets = tf.keras.utils.to_categorical(np.array(test_targets))
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_model(input_shape):
"""
This function should build a Sequential model according to
the above specification. Ensure the weights are initialised
by providing the input_shape argument in the first layer, given by the
function argument.
Your function should return the model.
"""
model = Sequential([
        Dense(64, activation = "relu",
kernel_initializer='he_uniform',
bias_initializer='ones',
input_shape=input_shape),
Dense(128, activation = "relu"),
Dense(128, activation = "relu"),
Dense(128, activation = "relu"),
Dense(128, activation = "relu"),
Dense(64, activation = "relu"),
Dense(64, activation = "relu"),
Dense(64, activation = "relu"),
Dense(64, activation = "relu"),
Dense(3, activation = "softmax"),
])
return model
# Run your function to get the model
model = get_model(train_data[0].shape)
Compile the model
You should now compile the model using the `compile` method. Remember that you need to specify an optimizer, a loss function and a metric to judge the performance of your model.
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def compile_model(model):
#model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
acc = tf.keras.metrics.SparseCategoricalAccuracy()
model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=[acc] )
# Run your function to compile the model
compile_model(model)
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def train_model(model, train_data, train_targets, epochs):
"""
This function should train the model for the given number of epochs on the
train_data and train_targets.
Your function should return the training history, as returned by model.fit.
"""
    return model.fit(train_data, train_targets, epochs=epochs)  # pass epochs by keyword; the third positional argument of fit is batch_size
# Run the following cell to run the training for 800 epochs.
# Run your function to train the model
history = train_model(model, train_data, train_targets, epochs=800)
This is because you have the wrong loss function. Your targets are one-hot encoded, so you should not use 'sparse_categorical_crossentropy'; use 'categorical_crossentropy' instead.
The same goes for acc = tf.keras.metrics.SparseCategoricalAccuracy(): it should be acc = tf.keras.metrics.CategoricalAccuracy().
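Putting both fixes together, compile_model becomes:
def compile_model(model):
    opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
    acc = tf.keras.metrics.CategoricalAccuracy()  # matches the one-hot targets
    model.compile(optimizer=opt,
                  loss='categorical_crossentropy',
                  metrics=[acc])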
Okay, so I'm pretty new to deep learning and have a very basic doubt. I have input data as an array containing 255 data points (array shape (255,)) in epochs_data and their corresponding labels in new_labels (array shape (255,)).
I split the data using the following code:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(epochs_data, new_labels, test_size = 0.2, random_state=30)
I'm using a sequential model:
from keras.models import Sequential
from keras import layers
from keras.layers import Dense, Activation, Flatten
model = Sequential()
I know how to code for the hidden layers and output layer:
model.add(Dense(500, activation='relu')) #Hidden Layer
model.add(Dense(2, activation='softmax')) #Output Layer
But I don't know how to code the layer for the input with the input_shape specified. X_train is the input; it's an array of shape (180,). Also, tell me how to code model.fit() for the same. Any help is appreciated.
You have to add this line before the hidden layer, with whatever activation function you want. Note that this single line represents both the input layer and the first hidden layer (you have to choose the number of neurons; I put 100):
model.add(Dense(100, input_shape = (X_train.shape[1],)))
EDIT:
Before fitting your model you have to configure your model with this line:
model.compile(loss = 'mse', optimizer = 'Adam', metrics = ['mse'])
So you have to choose a loss and a metric (in this case both are mean squared error) and an optimizer like Adam, Adamax, etc.
Then you can fit your model, choosing the data (X, y), the number of epochs, the validation split and the batch size.
history = model.fit(X_train, y_train, epochs = 200,
validation_split = 0.1, batch_size=250)
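Putting the pieces together for this question (a sketch; the 100 and 500 unit counts are the ones chosen above, and since the output is a 2-unit softmax, 'categorical_crossentropy' is the more conventional loss than the 'mse' suggested above, assuming y_train is one-hot encoded):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=(X_train.shape[1],)))  # input + first hidden layer
model.add(Dense(500, activation='relu'))  # hidden layer
model.add(Dense(2, activation='softmax'))  # output layer
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=200, validation_split=0.1, batch_size=250)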
I'm just trying to explore keras and tensorflow with the famous MNIST dataset.
I already applied some basic neural networks, but when it comes to tuning some hyperparameters, especially the number of layers, with the sklearn wrapper GridSearchCV, I get the error below:
Parameter values for parameter (hidden_layers) need to be a sequence(but not a string) or np.ndarray.
So you can have a better view, I post the main parts of my code below.
Data preparation
# Extract label
X_train=train.drop(labels = ["label"],axis = 1,inplace=False)
Y_train=train['label']
del train
# Reshape to fit MLP
X_train = X_train.values.reshape(X_train.shape[0],784).astype('float32')
X_train = X_train / 255
# Label format
from keras.utils import np_utils
Y_train = np_utils.to_categorical(Y_train, num_classes = 10)  # use the np_utils imported above
num_classes = Y_train.shape[1]
Keras part
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
# Function with hyperparameters to optimize
def create_model(optimizer='adam', activation = 'sigmoid', hidden_layers=2):
# Initialize the constructor
model = Sequential()
# Add an input layer
    model.add(Dense(32, activation=activation, input_shape=(784,)))  # input_shape must be a tuple
for i in range(hidden_layers):
# Add one hidden layer
model.add(Dense(16, activation=activation))
# Add an output layer
model.add(Dense(num_classes, activation='softmax'))
#compile model
    model.compile(loss='categorical_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])
return model
# Model which will be the input for the GridSearchCV function
modelCV = KerasClassifier(build_fn=create_model, verbose=0)
GridSearchCV
from keras.activations import relu, sigmoid
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import Dropout
from keras.utils import np_utils
activations = [sigmoid, relu]
param_grid = dict(hidden_layers=3,activation=activations, batch_size = [256], epochs=[30])
grid = GridSearchCV(estimator=modelCV, param_grid=param_grid, scoring='accuracy')
grid_result = grid.fit(X_train, Y_train)
I just want to let you know that the same kind of question has already been asked here Grid Search the number of hidden layers with keras but the answer is not complete at all and I can't add a comment to reply to the answerer.
Thank you!
You should add:
for i in range(int(hidden_layers)):
# Add one hidden layer
model.add(Dense(16, activation=activation))
Try to add the values of param_grid as lists :
param_grid = {"hidden_layers": [3]}
When you set hidden_layers=3, it is passed as a single scalar rather than a sequence of candidate values, and that is why the error is thrown. GridSearchCV expects a sequence (but not a string) for each parameter, which is exactly what your error message says.
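Concretely, wrapping every value in param_grid in a list makes the original snippet run (a sketch based on the code above):
# Every entry must be a sequence of candidate values, even when there is only one
param_grid = dict(hidden_layers=[3], activation=activations,
                  batch_size=[256], epochs=[30])
grid = GridSearchCV(estimator=modelCV, param_grid=param_grid, scoring='accuracy')
grid_result = grid.fit(X_train, Y_train)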
I am trying to build an LSTM autoencoder to predict time series data. Since I am new to Python, I have mistakes in the decoding part. I tried to build it up like here and in Keras, but I could not understand the difference between the given examples at all. The code that I have right now looks like this:
Question 1: How do I choose the batch_size and input dimension when each sample has 2000 values?
Question 2: How do I get this LSTM autoencoder working (the model and the prediction)? This is just the model; how do I predict with it, say, from sample 10 onward to the end of the data?
My data has 1500 samples in total, I would go with 10 time steps (or more if better), and each sample has 2000 values. If you need more information, I can include it later.
import numpy as np
from keras.models import Model
from keras.layers import Input, LSTM, RepeatVector

trainX = np.reshape(data, (1500, 10, 2000))
# Parameters
timesteps = 10
input_dim = 2000
units = 100  # unit number chosen arbitrarily
batch_size = 2000
epochs = 20
# Model
inpE = Input((timesteps,input_dim))
outE = LSTM(units = units, return_sequences=False)(inpE)
encoder = Model(inpE,outE)
inpD = RepeatVector(timesteps)(outE)
outD = LSTM(input_dim, return_sequences=True)(inpD)
decoder = Model(inpD,outD)
autoencoder = Model(inpE, outD)
autoencoder.compile(loss='mean_squared_error',
optimizer='rmsprop',
metrics=['accuracy'])
autoencoder.fit(trainX, trainX,
batch_size=batch_size,
epochs=epochs)
encoderPredictions = encoder.predict(trainX)
The LSTM model that I use is this one:
def get_model(n_dimensions):
inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(n_dimensions, return_sequences=False, name="encoder")(inputs)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True, name='decoder')(decoded)
autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
return autoencoder, encoder
autoencoder, encoder = get_model(n_dimensions)
autoencoder.compile(optimizer='rmsprop', loss='mse',
metrics=['acc', 'cosine_proximity'])
history = autoencoder.fit(x, x, batch_size=100, epochs=100)
encoded = encoder.predict(x)
It works with the data that I have; x is of size (3000, 180, 40), that is, 3000 samples, timesteps=180 and input_dim=40.
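Adapted to the shapes in your question, training and prediction would look roughly like this (a sketch; it assumes trainX has shape (1500, 10, 2000) and reuses the 100 units from your code as n_dimensions):
timesteps, input_dim = 10, 2000
autoencoder, encoder = get_model(n_dimensions=100)
autoencoder.compile(optimizer='rmsprop', loss='mse')
autoencoder.fit(trainX, trainX, batch_size=100, epochs=20)
# The autoencoder reconstructs full sequences; the encoder returns the
# compressed codes. To predict from sample 10 onward, slice the input array.
reconstructed = autoencoder.predict(trainX[10:])
codes = encoder.predict(trainX[10:])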