I am training my model using model.train_on_batch(). I dug into the Keras code to see how to use train_on_batch properly, and I noticed that before calling this function, Keras calls several routines, including model._make_train_function().
But I found many GitHub sources that use model.train_on_batch without calling _make_train_function() first.
So my question is: does _make_train_function() have to be called before train_on_batch? I am using Keras 2.2.4 and TensorFlow 1.12.0.
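For example, the typical pattern from those GitHub sources looks like this (a minimal sketch; the model architecture and data shapes are made up for illustration):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy model; architecture and shapes are illustrative only.
model = Sequential([Dense(8, activation='relu', input_shape=(4,)), Dense(1)])
model.compile(optimizer='adam', loss='mse')

# train_on_batch is called directly after compile, with no explicit
# model._make_train_function() call beforehand.
for _ in range(10):
    x_batch = np.random.rand(32, 4)
    y_batch = np.random.rand(32, 1)
    loss = model.train_on_batch(x_batch, y_batch)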
Thanks.
I have an ensemble model that combines both TensorFlow and scikit-learn, and I would like to save this ensemble model as a single box that I can feed data into and get output from. My code is as below:
def model_base_LSTM(***):
    ***

model = model_base_LSTM(***)

# BaggingRegressor comes from scikit-learn
from sklearn.ensemble import BaggingRegressor

ensem_model = BaggingRegressor(base_estimator=model, n_estimators=15)
ensem_model.fit(x_train, y_train)
bag_mod_pred = ensem_model.predict(x_test_bag)

# Try to serialize the whole ensemble to disk
from joblib import dump, load
dump(ensem_model, 'LSTM_Ensemble.joblib')
The dump call fails with:
TypeError: can't pickle _thread._local objects
How can I solve this problem?
You can save your TensorFlow (and even PyTorch) models with Scikit-Learn, but only if you use Neuraxle and its saving mechanics.
Neuraxle is an extension of Scikit-Learn to make it more compatible with all deep learning libraries.
The trick is performed by using Neuraxle-TensorFlow or Neuraxle-PyTorch.
Why so?
Using Neuraxle-TensorFlow or Neuraxle-PyTorch provides you with a saver that allows your model to be serialized correctly. You want correct serialization to ensure compatibility between scikit-learn and your deep learning framework when it comes time to save or parallelize things. You can read how Neuraxle solves this with savers here.
Code Examples
Here is a full project example from A to Z where TensorFlow is used with Neuraxle as if it were used with scikit-learn.
Here is another practical example where TensorFlow is used within a scikit-learn-like pipeline.
I'm trying to use tf.contrib.layers.fully_connected() in one of my projects, but it has been deprecated in TensorFlow 2.0. Is there an equivalent function, or should I just keep TensorFlow v1.x in my virtual environment for this project?
In TensorFlow 2.0 the tf.contrib package has been removed (and this was a good choice, since the whole package was a huge mix of different projects all placed inside the same box), so you can't use it.
In TensorFlow 2.0 you need to use tf.keras.layers.Dense to create a fully connected layer, but, more importantly, you have to migrate your codebase to Keras. In fact, you can't define a layer and use it without creating a tf.keras.Model object that uses it.
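For instance, a minimal sketch of the TensorFlow 2.0 equivalent (the layer sizes and input shape are made up for illustration):

import tensorflow as tf

# A fully connected layer now lives inside a tf.keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(128,)),
    # Equivalent of fully_connected(..., activation_fn=None):
    tf.keras.layers.Dense(10, activation=None),
])
model.compile(optimizer='adam', loss='mse')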
tf-slim, as a standalone package, already includes tf.contrib.layers. You can install it with pip install tf-slim and call it with from tf_slim.layers import layers as _layers; _layers.fully_connected(..). It is the same as the original, so it is easy to replace.
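A minimal sketch of that replacement (assuming tf-slim installed alongside TensorFlow 2.x; tf-slim's v1-style layers need graph mode, hence eager execution is disabled):

import tensorflow as tf
import tf_slim as slim

# tf-slim layers are v1-style, so run in graph mode.
tf.compat.v1.disable_eager_execution()

F = tf.compat.v1.placeholder(tf.float32, shape=[None, 128])
# Drop-in replacement for tf.contrib.layers.fully_connected:
Z = slim.fully_connected(F, 10, activation_fn=None)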
Use tf.compat.v1.layers.dense.
For example, instead of
Z = tf.contrib.layers.fully_connected(F, num_outputs, activation_fn=None)
you can replace it with:
Z = tf.compat.v1.layers.dense(F, num_outputs, activation=None)
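Put into a complete minimal sketch (assuming TensorFlow 2.x; tf.compat.v1.layers does not run eagerly, so graph mode is enabled, and the shapes are made up for illustration):

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # v1 layers need graph mode

F = tf.compat.v1.placeholder(tf.float32, shape=[None, 128])
num_outputs = 10
Z = tf.compat.v1.layers.dense(F, num_outputs, activation=None)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    out = sess.run(Z, feed_dict={F: np.random.rand(4, 128)})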
tf.contrib.layers.fully_connected() is a perfect mess. It is a very old historical artifact (a legacy of prehistoric DNN code). Google has completely deprecated the function because they disliked it. There is no direct replacement for it in TensorFlow 2.x, so it is not worth spending time getting to know it.
I'm using Keras with CERN ROOT and its analysis package TMVA. The way it works is that I use Keras to initialize the NN and save it to a file; TMVA then loads that file in. The problem is that I am using custom metrics when setting up the neural network, and in that case Keras wants you to do something like
models.load_model(model_path, custom_objects={"my_object":my_object})
Unfortunately, the way that TMVA takes arguments requires that I supply only the filename of the model file being used. However, based on the error messages I am getting, it is clear that TMVA is simply using Keras to load the model. My question is: how do I force Keras to automatically load my custom objects without having to use the above line, since that is incompatible with the package I'm trying to use?
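For context, one commonly suggested workaround (not part of the original question) is to register the custom objects globally before the model is loaded, so a plain load_model(model_path) call can resolve them by name. A minimal sketch, assuming Keras 2.x and a hypothetical custom metric my_object:

import keras
from keras.utils.generic_utils import get_custom_objects

# Hypothetical custom metric standing in for the real my_object.
def my_object(y_true, y_pred):
    return keras.backend.mean(keras.backend.abs(y_true - y_pred))

# Register globally so load_model() can resolve "my_object" by name,
# without an explicit custom_objects argument at load time.
get_custom_objects().update({"my_object": my_object})

model = keras.models.load_model("model.h5")  # hypothetical model path

Whether this helps depends on the registration code running in the same Python process before TMVA triggers the load.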
I have been implementing some deep nets in Keras, but have eventually gotten frustrated with some of its limitations: for example, setting floatx to float16 fails on batch normalization layers, and the only way to fix it is to actually edit the Keras source; implementing custom layers requires coding them in backend code, which destroys the ability to switch backends; there appear to be no parallel training mechanisms (unlike tf.Estimator); and even vanilla programs run 30% slower in Keras than in TF (if one is to trust the interwebs). I was grumbling about moving to TensorFlow, but was pleased to discover that TensorFlow (especially if you use the tf.layers stuff) is not actually any longer, in code, than Keras for anything imaginable you might want to do. Is this a failure of my imagination, or is tf.layers basically a backporting of Keras into core TensorFlow, and is there any actual use case left for Keras?
Keras used to have the upper hand on TensorFlow, but ever since its author became affiliated with Google, all the features that made it attractive are being implemented in TensorFlow (you can check version 1.8). As you rightfully pointed out, tf.layers is one such example.
Is there any documentation on whether these are the same or different, and on what the differences between them are?
keras.layers.Conv2D
keras.layers.Convolution2D
keras.layers.convolutional.Conv2D
keras.layers.convolutional.Convolution2D
They're aliases for the same functionality. The reason behind this is the Keras 2 API, which tries to give users some time to migrate their code to the new, shorter names (which also take different parameters). The other aliases will eventually be deprecated, since using the old Keras API already shows warnings.
See Keras code
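A quick sanity check in an interpreter (assuming Keras 2.x) confirms the aliasing:

import keras

# All four names resolve to the same class object.
print(keras.layers.Conv2D is keras.layers.Convolution2D)                # True
print(keras.layers.Conv2D is keras.layers.convolutional.Conv2D)        # True
print(keras.layers.Conv2D is keras.layers.convolutional.Convolution2D) # True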