Keras plot_model() function: More elaborate output - python

I'm using the Keras plot_model() function to visualize my machine learning models. Apart from the issue that the first node of my output is always simply a very large number, there is another thing annoying me about this function: it does not provide very elaborate output. For example, I would like to be able to see more information about the loss function used, the batch size, the number of epochs, the optimizer used, etc.
Is there any way I can retrieve this information from a model I previously saved to the disk and loaded again with the model_from_json() function?
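(For reference, a minimal sketch of that save/load round-trip, with placeholder file names; model_from_json() restores only the architecture, so an uncompiled model comes back and the loss, optimizer, batch size, and epoch count are not recoverable from the JSON alone.)
from keras.models import model_from_json

# Save just the architecture as JSON (weights would go to a separate HDF5 file).
with open("model.json", "w") as f:   # "model.json" is a placeholder path
    f.write(model.to_json())         # model: your existing Keras model

# Load it back: this returns an *uncompiled* model, so no loss/optimizer info.
with open("model.json") as f:
    restored = model_from_json(f.read())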

How about the TensorBoard callback? If you use TensorFlow as your backend, it will create interactive graphs of your model that you can explore.
You just need to add it as a callback to your fit() function and make sure write_graph=True is set (which it is by default). If you want a shortcut, you can invoke its methods directly instead of passing it as a callback:
from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='./logs', write_graph=True)
tensorboard.set_model(model)    # your model here; writes the graph etc.
tensorboard.on_train_end(None)  # closes the summary writer
Then just run tensorboard --logdir=./logs to start the server.
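If you go the usual route of passing it to fit() instead, a minimal sketch (model, x_train, and y_train are placeholders for your own model and data):
from keras.callbacks import TensorBoard

tensorboard = TensorBoard(log_dir='./logs', write_graph=True)
model.fit(x_train, y_train,         # your model and training data
          batch_size=32, epochs=10,
          callbacks=[tensorboard])  # graph and metrics end up in ./logs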

Related

Tensorflow 2.0: Accessing a batch's tensors from a callback

I'm using TensorFlow 2.0 and trying to write a tf.keras.callbacks.Callback that reads both the inputs and the outputs of my model for the current batch.
I expected to be able to override on_batch_end and access model.inputs and model.outputs, but they are not EagerTensors with a value I could access. Is there any way to access the actual tensor values that were involved in a batch?
This has many practical uses, such as outputting these tensors to TensorBoard for debugging, or serializing them for other purposes. I am aware that I could just run the whole model again using model.predict, but that would force me to run every input through the network twice (and I might also have a non-deterministic data generator). Any idea how to achieve this?
No, there is no way to access the actual input and output values in a callback; that's simply not part of what callbacks were designed for. Callbacks only have access to the model, the arguments to fit, the epoch number, and some metric values. As you found, model.input and model.output only point to symbolic KerasTensors, not to actual values.
To do what you want, you could take the input, stack it (maybe with a RaggedTensor) with the output you care about, and make it an extra output of your model. Then implement your functionality as a custom metric that only reads y_pred. Inside your metric, unstack y_pred to get the input and output back, and then visualize / serialize / etc. (see the Keras metrics documentation).
Another way might be to implement a custom Layer that uses py_function to call a function back into Python. This will be very slow during serious training, but may be enough for diagnostics / debugging.
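A minimal sketch of that py_function idea (the layer and function names are my own; slow, but usually fine for debugging):
import tensorflow as tf

def dump_batch(x):
    # Runs eagerly, so x is a concrete tensor value for the current batch.
    tf.print("batch values:", x, summarize=3)
    return x

class TapLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        out = tf.py_function(dump_batch, [inputs], Tout=inputs.dtype)
        out.set_shape(inputs.shape)  # py_function drops the static shape
        return out

Dropping a TapLayer() anywhere into the model then prints (or serializes) whatever flows through that point on every batch.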

Accessing Estimator evaluation results via SessionRunHooks

I'm trying to modify a program that uses the Estimator class in TensorFlow (v1.10) and I would like to access the evaluation metric results every time evaluation occurs so that I can copy the checkpoint files only when a new maximum has been achieved.
One idea I had was to create a class inheriting from SessionRunHook and do the work I want in the after_run method. According to the documentation I can specify what is passed to after_run using before_run; however, I cannot find a way to access the evaluation metric results I want from the information passed in to before_run.
I looked into the Estimator code and it appears that it is writing the results to a summary file so another idea I had was to read this back in the after_run method, but the summary api doesn't seem to provide any read operations.
Are there any other ways I can achieve what I want to do? Not using the Estimator class is not an option as that would involve drastic changes to the code I'm working with.
Checkpoints are not the same as exporting. Checkpoints are about fault-recovery and involve saving the complete training state (weights, global step number, etc.).
In your case I would recommend exporting. The exported model will be written to a directory called "exporter", and the serving input function specifies what the end user will be expected to provide to the prediction service.
You can use the BestExporter class to export only the models that are performing best:
https://www.tensorflow.org/api_docs/python/tf/estimator/BestExporter
This class exports the serving graph and checkpoints of the best models.
It also performs an export every time the new model is better than any existing model.
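A minimal sketch of wiring BestExporter into train_and_evaluate (estimator, the input functions, and serving_input_receiver_fn are placeholders for what you already have):
import tensorflow as tf

exporter = tf.estimator.BestExporter(
    name="best_exporter",
    serving_input_receiver_fn=serving_input_receiver_fn,  # your serving input fn
    exports_to_keep=5)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10000)
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    exporters=exporter,   # exports only when the eval result beats previous ones
    throttle_secs=60)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)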

How to make Keras automatically load custom metrics

I'm using Keras with CERN ROOT and its analysis package TMVA. The way it works is that I use Keras to initialize the NN, then save it to a file, and TMVA then loads that file in. The problem is that I am using custom metrics when setting up the neural network, and in that case Keras wants you to do something like
models.load_model(model_path, custom_objects={"my_object":my_object})
Unfortunately, the way that TMVA takes arguments requires that I only supply the filename of the model file being used. However, based on the error messages I am getting, it is clear that it is simply using Keras to load the model. My question is: how do I force Keras to load my custom objects automatically, without having to use the line above, since that call is incompatible with the package I'm trying to use?
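One thing worth noting (not from this thread, and assuming the TMVA-triggered load runs in the same Python process where you can execute setup code first): Keras keeps a global registry of custom objects, so registering the metric there lets load_model() resolve it by name without the custom_objects argument. A sketch with a hypothetical metric:
import keras.backend as K
from keras.utils import get_custom_objects

def my_metric(y_true, y_pred):              # hypothetical custom metric
    return K.mean(K.abs(y_true - y_pred))

# Register it once, before anything calls load_model();
# Keras can then resolve "my_metric" by name automatically.
get_custom_objects().update({"my_metric": my_metric})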

Multiple sessions in the code when using skflow

I am using skflow for simple classification. What I noticed is that I end up with multiple sessions/graphs. For example, the error that I get is
ValueError: Cannot execute operation using Run(): No default session is registered. Use 'with default_session(sess)' or pass an explicit session to Run(session=sess)
When I tried to set a tf.Session().as_default() in the main function, I realized that there is another session and graph created by skflow. After some research this turned out to be true; indeed, skflow creates it here.
classifier = skflow.TensorFlowEstimator(
    model_fn=model_fn, n_classes=2,
    steps=100, optimizer='Adam',
    learning_rate=0.01, continue_training=True)
My problem is that I want to print some variables that are used during training, for example the word embeddings matrix. In my model_fn I save the word embeddings matrix, since I have access to it there. But when I try to print it, it seems that the session was closed and I get the error mentioned above. So I am not sure how I can set one default session, why skflow creates another one, how I can print a variable that is used inside the skflow classifier, and also why the TensorBoard graph written by SummaryWriter only shows the main graph and not the one created by skflow.
I might be very wrong, so any help would be appreciated!

Multiple networks in Theano

I'd like to have 2 separate networks running in Theano at the same time, where the first network trains on the results of the second. I could embed both networks in the same structure, but that would be a real mess throughout the entire forward pass (and probably wouldn't even work because of the shared variables, etc.).
The problem is that when I define a Theano function I don't specify which model it applies to, meaning that if I have a predict and a train function, they'll both work on the first model I define.
Is there a way to overcome that issue?
I've managed to find a rather simple solution. The trick was to create one model and define its function first, and only then create the other model and define the second function. Works like a charm.
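A minimal sketch of that ordering with two toy single-layer "models" (all names are placeholders):
import numpy as np
import theano
import theano.tensor as T

# Model 1: build its graph and compile its function first.
x1 = T.matrix('x1')
w1 = theano.shared(np.random.randn(10, 5).astype(theano.config.floatX), name='w1')
predict1 = theano.function([x1], T.nnet.sigmoid(T.dot(x1, w1)))

# Model 2: created only after model 1's function is compiled,
# so its own function is bound to its own variables.
x2 = T.matrix('x2')
w2 = theano.shared(np.random.randn(5, 3).astype(theano.config.floatX), name='w2')
predict2 = theano.function([x2], T.nnet.sigmoid(T.dot(x2, w2)))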
