I understand that to visualize my model I have to follow two steps: 1) preprocessing of the pre-trained model (let's assume it's called my_model.h5) and 2) creation of the interactive model.
Further, I have created a JSON file of my model as described in the instructions (Model Preprocessing): https://tensorspace.org/html/docs/preKeras.html
I have Node.js installed and I installed TensorSpace via npm install tensorspace. However, I'm not able to call the TensorSpace API. Does anyone know if I missed something?
I am trying to load a spaCy text classification model that I trained previously. After training, the model was saved into the en_textcat_demo-0.0.0.tar.gz file.
I want to use this model in a Jupyter notebook, but when I do
import spacy
spacy.load("spacy_files/en_textcat_demo-0.0.0.tar.gz")
I get
OSError: [E053] Could not read meta.json from spacy_files/en_textcat_demo-0.0.0.tar.gz
What is the correct way to load my model here?
You need to either unzip the tar.gz file or install it with pip.
If you unzip it, you will get a directory, and you can pass the directory path to spacy.load.
If you install it with pip, it will be placed alongside your other libraries, and you can load it by the model name, just as you would a pretrained spaCy model.
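For the unzip route, a sketch (the inner directory names below follow the standard spaCy package layout for this model name and version, so verify them against the actual archive contents):

```python
import tarfile
import spacy

def load_packaged_model(archive="spacy_files/en_textcat_demo-0.0.0.tar.gz"):
    """Extract a packaged spaCy model and load it; spacy.load cannot read .tar.gz directly."""
    with tarfile.open(archive) as tar:
        tar.extractall("spacy_files/")
    # A spaCy package usually extracts to
    #   <name>-<version>/<name>/<name>-<version>/
    # and the innermost directory is the one containing meta.json --
    # that is the directory spacy.load needs.
    return spacy.load(
        "spacy_files/en_textcat_demo-0.0.0/en_textcat_demo/en_textcat_demo-0.0.0"
    )
```

The pip route is simply `pip install spacy_files/en_textcat_demo-0.0.0.tar.gz` followed by `spacy.load("en_textcat_demo")`.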
Description
I had a setup for training using the Object Detection API that worked really well. However, I have had to upgrade from TF 1.15 to TF2, so instead of model_main.py I am now using model_main_tf2.py, with the MobileNet SSD 320x320 pipeline to transfer-train a new model.
When training my model in TF1.15 it would display a whole heap of scalars as well as detection box image samples. It was fantastic.
In TF2 training I get no such data, just loss scalars and 3 input images, and yet the event files are huge (gigabytes), whereas they were in the hundreds of megabytes with TF 1.15.
The thing is, there is nowhere to specify what data is presented. I have not changed anything other than which model_main .py file I use to run the training. I added num_visualizations: to the pipeline config file, but no visualizations of detection boxes appear.
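For context, the field I added sits in the eval_config block of the pipeline config; this is roughly what it looks like (the other values here are placeholders, not my exact settings):

```
eval_config {
  num_visualizations: 10
  metrics_set: "coco_detection_metrics"
}
```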
Can someone please explain what is going on? I need to be able to see what's happening throughout training!
Thank You
I am training on a PC in a virtual environment before performing TRT optimization on Linux, but I think that is irrelevant here.
Environment
GPU Type: P220
Operating System + Version: Win10 Pro
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2
Relevant Files
TF1.15 vs TF2 screenshots:
TF1 (model_main.py) Tensorboard Results
TF2 (model_main_tf2.py) Tensorboard Results
Steps To Reproduce
The repo I am working with GitHub Object Detection API
Model
Pipeline Config File
UPDATE: I have investigated further and discovered that the TensorBoard settings are set in Object Detection Trainer 1 for TF 1.15 and Object Detection Trainer 2 for TF2.
So if someone who knows more about this than I do could work out what the difference is, and what I need to do to get the same TensorBoard results with v2 as I do with the first one, that would be amazing and save me an enormous headache. It seems that, even though it is documented as being for TF2, it is not actually following TF2 syntax, but I could be wrong.
Excuse me, I have a question. I'm working with a distributed version of TensorFlow and Keras, and I have succeeded in making a sample deep learning network train across multiple programs (Python scripts) working together, but I don't know how to save the final single model on one of the hosts; currently each script saves its own model as a separate checkpoint.
Thanks.
Source code I used to develop my program:
Distributed tensorflow / keras code sample on github
For example, in the source code above the final model save path is not set; could you tell me how to set it?
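A common pattern for this, sketched below under the assumption that training uses tf.distribute.MultiWorkerMirroredStrategy with a TF_CONFIG environment variable (the function and path names here are illustrative, not from the linked sample): every worker calls save, but only the chief (worker index 0) writes to the path you actually keep.

```python
import json
import os

def is_chief():
    """Return True if this process is the chief worker according to TF_CONFIG."""
    tf_config = json.loads(os.environ.get("TF_CONFIG", "{}"))
    task = tf_config.get("task", {})
    if not task:
        # No TF_CONFIG: single-process run, treat it as the chief.
        return True
    return task.get("type") == "chief" or (
        task.get("type") == "worker" and task.get("index") == 0
    )

def save_final_model(model, final_dir="final_model"):
    # Every worker must call save() -- under MultiWorkerMirroredStrategy the
    # save itself runs collective ops -- but only the chief writes to the
    # directory you will actually use; other workers get a scratch path.
    path = final_dir if is_chief() else os.path.join(
        final_dir, "scratch", "worker_%d" % os.getpid()
    )
    model.save(path)
    return path
```

The scratch directories written by non-chief workers can be deleted after training; only the chief's copy is the final model.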
I'm running my Python project, which trains a neural network with TensorFlow, in PyCharm, and I find that my network appears already well-trained from the second restart of the project onward.
I have no command in my project to restore any trained model (I do have commands for saving models). Does anyone know anything about my problem?
Is it possible that TensorFlow or PyCharm has default settings to save and restore?
Great thanks!
I have read through some of the PFA documentation and understand that a PFA model can be imported and used in production deployments (I have followed a few examples on GitHub). However, it is not clear to me how the PFA model is exported/generated. Is it possible to export a Python scikit-learn model as a PFA model? Is it possible to export a TensorFlow model built in Python as a PFA model? Could you please provide guidelines on the export process?
There is no compatibility for that at the moment; you can check their wiki on GitHub: https://github.com/opendatagroup/hadrian/wiki
It is possible to translate some SKLearn models to PFA code using the SKompiler library:
from skompiler import skompile
skompile(model.predict).to('pfa/yaml')