I have several separate training datasets that crash the Jupyter kernel when read as one. As a workaround, I read them separately in sequence and call fit() on the same model object.
To get accuracy metrics I am only grabbing the final history object, but does it also represent all the previous fit() calls?
By default, a fresh History callback is created every time you call fit(), unless you provide an alternative. One way to do so is to pass the model's existing history from one fit() call to the next as a callback:
model.fit(x, y, batch_size, epochs, callbacks=[model.history])
This way the new values are appended to the previously accumulated ones, so you get statistics over multiple runs of fit().
If you need something more specialized, save and process the History object from each fit() call, or write a custom callback with memory.
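For context, a minimal sketch of how the chunked-training loop might look with this trick; load_chunk() and dataset_paths are hypothetical stand-ins for however the separate datasets are read:

for path in dataset_paths:
    x, y = load_chunk(path)                        # read one dataset at a time
    # model.history only exists after the first fit() call, so guard for it
    cbs = [model.history] if getattr(model, "history", None) else []
    model.fit(x, y, batch_size=32, epochs=5, callbacks=cbs)

# model.history.history now accumulates metrics across all fit() calls
print(len(model.history.history["loss"]))          # total epochs across all calls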
I'm using TensorFlow 2.0 and trying to write a tf.keras.callbacks.Callback that reads both the inputs and outputs of my model for each batch.
I expected to be able to override on_batch_end and access model.inputs and model.outputs, but they are not EagerTensors with values I can access. Is there any way to access the actual tensor values that were involved in a batch?
This has many practical uses, such as outputting these tensors to TensorBoard for debugging or serializing them for other purposes. I am aware that I could just run the whole model again using model.predict, but that would force me to run every input through the network twice (and I might also have a non-deterministic data generator). Any idea how to achieve this?
No, there is no way to access the actual values for input and output in a callback. That's just not part of the design of callbacks. Callbacks only have access to the model, the arguments to fit, the epoch number and some metric values. As you found, model.input and model.output only point to the symbolic KerasTensors, not actual values.
To do what you want, you could take the input, stack it (maybe with a RaggedTensor) with the output you care about, and make that an extra output of your model. Then implement your functionality as a custom metric that only reads y_pred. Inside your metric, unstack y_pred to recover the input and output, and then visualize / serialize / etc. (see the Keras Metrics guide).
Another way might be to implement a custom Layer that uses py_function to call back into Python. This will be very slow during serious training, but may be enough for diagnostics / debugging; a rough sketch follows.
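A minimal sketch of that idea, assuming TF 2.x eager execution; the layer name and the _peek helper are illustrative, not part of any existing API:

import tensorflow as tf

def _peek(x):
    # Inside tf.py_function, x is an EagerTensor, so its values are available here.
    tf.print("batch shape:", tf.shape(x), "mean:", tf.reduce_mean(x))
    return x

class DebugLayer(tf.keras.layers.Layer):
    # Identity layer that drops into eager Python so batch values can be inspected.
    def call(self, inputs):
        out = tf.py_function(_peek, [inputs], Tout=inputs.dtype)
        out.set_shape(inputs.shape)   # py_function loses static shape info
        return out

You would insert DebugLayer() right after the model's input (or just before its output) to see the corresponding values for every batch.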
I have the following code where I want to use k-fold cross validation for a Linear Regression model:
kf = KFold(n_splits=100)
predi = cross_val_predict(model, train[columns], train[target], cv = kf)
predi = pandas.Series(predi)
model.fit(data[columns], data[target])
pred_test = model.predict(test[columns])
print(mean_squared_error(pred_test, test[target]))
However, I am not sure whether the code does what I would like it to do. Specifically, I am not sure about the model.fit part. Does it even use the cross-validation?
The reason I am not sure is that calculating it like this yields worse results than without cross-validation.
No. Cross-validation is just for checking the performance of the model on the data (or rather, on different parts of it).
When you call fit(), it fits on all the data supplied at that time, whereas cross-validation only uses parts of the data (leaving one fold out in each iteration). That difference in data may cause the estimator to perform better or worse.
model.fit doesn't have any functionality to divide the data. It just solves the cost-function minimization problem and produces a model (i.e., finds the parameters).
Also, if you think that looping over the folds and calling model.fit again and again will give you a more generalized model, that won't work: calling fit a second time on a linear regression model object makes it forget the old data.
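For illustration, a minimal sketch of the usual split between "estimate performance with CV" and "fit the final model on all the data"; the dataset here is synthetic and the variable names are mine:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

X = np.random.rand(500, 3)                              # synthetic features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(500)

model = LinearRegression()

# 1) Cross-validation: only estimates how well this kind of model generalizes
kf = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=kf, scoring="neg_mean_squared_error")
print("CV MSE estimate:", -scores.mean())

# 2) Final model: fit once on all the training data (independent of the CV above)
model.fit(X, y)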
I am using Keras to train a network. Let's say that after 20 epochs I want to stop the training to check if everything is fine, then continue from the 21st epoch. Does calling the model.fit method a second time reinitialize the already trained weights?
Does calling the model.fit method for a second time reinitialize the already trained weights ?
No, it will use the preexisting weights your model had and perform updates on them. This means you can do consecutive calls to fit if you want to and manage it properly.
This is also consistent with the fact that in Keras you can save a model (with the save and load_model methods), load it back, and call fit on it again. For more info on that, check this question.
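A minimal sketch of pausing and resuming (tf.keras shown; standalone Keras is analogous, and x_train / y_train and the file name are placeholders):

from tensorflow.keras.models import load_model

model.fit(x_train, y_train, epochs=20)         # epochs 1-20
model.save("after_20_epochs.h5")               # optional: persist the state

# ... inspect metrics, weights, predictions here ...

model = load_model("after_20_epochs.h5")       # or keep using the same object
model.fit(x_train, y_train, epochs=1)          # continues from the trained weights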
Another option you have is to use the train_on_batch method instead:
train_on_batch(self, x, y, sample_weight=None, class_weight=None)
Runs a single gradient update on a single batch of data.
This way I think you may have more control between the updates of your model, where you can check whether everything is fine with the training, and then continue to the next gradient update.
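A rough sketch of such a manual loop; batch_iter() is a hypothetical generator yielding (x_batch, y_batch) pairs:

for step, (x_batch, y_batch) in enumerate(batch_iter()):
    loss = model.train_on_batch(x_batch, y_batch)
    if step % 100 == 0:
        print("step", step, "loss", loss)   # inspect here, adjust or stop if needed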
I'm using Keras with Theano to train a basic logistic regression model.
Say I've got a training set of 1 million entries; it's too large for my system to use the standard model.fit() without running out of memory.
I decide to use a Python generator function and fit my model using model.fit_generator().
My generator function returns batch sized chunks of the 1M training examples (they come from a DB table, so I only pull enough records at a time to satisfy each batch request, keeping memory usage in check).
It's an endlessly looping generator: once it reaches the end of the 1 million records, it wraps around and continues over the set.
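A rough sketch of such a generator, assuming hypothetical fetch_rows(offset, limit) and rows_to_arrays() helpers for the DB table:

def endless_batches(total_rows, batch_size):
    # Yield (x, y) batches forever, wrapping around at the end of the table.
    offset = 0
    while True:
        rows = fetch_rows(offset, batch_size)    # pull only one batch from the DB
        yield rows_to_arrays(rows)               # convert to numpy arrays
        offset += batch_size
        if offset >= total_rows:
            offset = 0                           # wrap around and keep going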
There is a mandatory argument in fit_generator() to specify samples_per_epoch. The documentation indicates
samples_per_epoch: integer, number of samples to process before going to the next epoch.
I'm assuming that fit_generator() doesn't reset the generator each time an epoch runs, hence the need for an infinitely running generator.
I typically set the samples_per_epoch to be the size of the training set the generator is looping over.
However, if samples_per_epoch is smaller than the size of the training set the generator is working from and nb_epoch > 1:
Will you get odd/adverse/unexpected training results, since each epoch will see a different set of training examples?
If so, do you 'fast-forward' your generator somehow?
I'm dealing with something similar right now. I want to make my epochs shorter so I can record more information about the loss or adjust my learning rate more often.
Without diving into the code, I think the fact that .fit_generator works with the randomly augmented/shuffled data produced by the Keras built-in ImageDataGenerator supports your suspicion that it doesn't reset the generator per epoch. So I believe you should be fine: as long as the model is eventually exposed to your whole training set, it shouldn't matter whether some of it is seen in a different epoch.
If you're still worried, you could try writing a generator that randomly samples your training set, for example:
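A minimal sketch, assuming the data can be held in (or indexed from) arrays x_all / y_all; the names are mine:

import numpy as np

def random_batches(x_all, y_all, batch_size):
    n = len(x_all)
    while True:                                          # endless, as fit_generator expects
        idx = np.random.randint(0, n, size=batch_size)   # sample with replacement
        yield x_all[idx], y_all[idx]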
I'm using Keras to predict a time series. As standard I'm using 20 epochs. I want to know what my neural network predicted after each of the 20 epochs.
By using model.predict I get only the last prediction. However, I want all predictions, or at least the last 10 (which have acceptable error levels).
To get at that, I'm trying the ModelCheckpoint callback from Keras; however, I'm having trouble accessing it afterwards. I'm using the following code:
model=Sequential()
model.add(GRU(input_dim=col,init='uniform',output_dim=20))
model.add(Dense(10))
model.add(Dense(5))
model.add(Activation("softmax"))
model.add(Dense(1))
model.compile(loss="mae", optimizer="RMSprop")
checkpoint=ModelCheckpoint(filepath='/Users/Alex/checkpoint.hdf5')
model.fit(X=predictor_train, y=target_train, nb_epoch=20, batch_size=batch,validation_split=0.1) #best validation split at 0.1
model.evaluate(X=predictor_train, y=target_train,batch_size=batch,show_accuracy=True)
print checkpoint
Objectively, my questions are:
I expected that after running the code I would find a file named checkpoint.hdf5 inside the folder /Users/Alex; however, I didn't. What am I missing?
When I print checkpoint, what I get is a keras.callbacks.ModelCheckpoint object at 0x117471290. Is there a way to print what I want? What would the code look like?
Your help is very much appreciated :)
There are two problems in this code:
You are not passing the callback to the model's fit method. This is done with the keyword argument "callbacks".
The filepath should contain placeholders (like "{epoch:02d}-{val_loss:.2f}") that Keras fills in with str.format in order to save each epoch to a different file.
So the correct version should be something like:
checkpoint = ModelCheckpoint(filepath='/Users/Alex/checkpoint-{epoch:02d}-{val_loss:.2f}.hdf5')
model.fit(X=predictor_train, y=target_train, nb_epoch=20,
batch_size=batch,validation_split=0.1, callbacks=[checkpoint])
You can also add other kinds of callbacks in the list that is assigned to that keyword.
Unfortunately, the callback object itself doesn't store the history information, so it cannot be recovered from it.
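To actually get a prediction per epoch (the original goal), one option is to load each saved checkpoint afterwards and call predict on it. A sketch, assuming a Keras version that provides keras.models.load_model and the checkpoint file pattern used above:

import glob
from keras.models import load_model

predictions_per_epoch = {}
for path in sorted(glob.glob('/Users/Alex/checkpoint-*.hdf5')):
    m = load_model(path)                     # ModelCheckpoint saves the full model by default
    predictions_per_epoch[path] = m.predict(predictor_train)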