Load custom trained spaCy model - python

I am trying to load a spaCy text classification model that I trained previously. After training, the model was saved into the en_textcat_demo-0.0.0.tar.gz file.
I want to use this model in a Jupyter notebook, but when I do
import spacy
spacy.load("spacy_files/en_textcat_demo-0.0.0.tar.gz")
I get
OSError: [E053] Could not read meta.json from spacy_files/en_textcat_demo-0.0.0.tar.gz
What is the correct way to load my model here?

You need to either unzip the tar.gz file or install it with pip.
If you unzip it, you get a directory, and you can pass the path to that directory to spacy.load.
If you install it with pip, the package is placed alongside your other libraries, and you can load it by its model name, just like a pretrained spaCy model.
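For example, a rough sketch of both options (the extracted directory layout and the package name below are assumptions based on spaCy's default packaging; use whichever extracted folder actually contains meta.json):
import spacy

# Option 1: extract the archive and point spacy.load at the package directory
nlp = spacy.load("spacy_files/en_textcat_demo-0.0.0/en_textcat_demo/en_textcat_demo-0.0.0")

# Option 2: install the archive with pip, then load it by name
#   pip install spacy_files/en_textcat_demo-0.0.0.tar.gz
nlp = spacy.load("en_textcat_demo")

doc = nlp("Some text to classify")
print(doc.cats)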

Related

Loading keras model into tensorspace

I understand that to visualize my model I have to follow two steps: 1) preprocessing the pre-trained model (let's assume it's called my_model.h5) and 2) creating the interactive model.
Further, I have created a JSON file of my model as described in the instructions (Model Preprocessing): https://tensorspace.org/html/docs/preKeras.html
I have Node.js installed and I installed tensorspace via npm install tensorspace. However, I'm not able to call the TensorSpace API. Does anyone know if I missed something?

How to use an exported model from google colab in Pycharm

I have an LSTM Keras/TensorFlow model trained and exported in .h5 (HDF5) format.
My local machine does not support Keras/TensorFlow; I tried installing it, but it does not work.
Therefore, I used Google Colab and exported the model.
I would like to know how I can use the exported model in PyCharm.
Edit: I have just now installed TensorFlow on my machine.
Thanks in advance.
Found the answer:
I've exported the model as follows:
model.save('/content/drive/My Drive/Colab Notebooks/model.h5')
Then I downloaded the file and saved it in the folder where my other code is. I have installed TensorFlow.
Next I loaded the model and ran a prediction with it as follows:
import keras
# load the downloaded .h5 file from the local project folder
model = keras.models.load_model('model.h5')
# run a prediction on a preprocessed input (instance is a placeholder for your input array)
model.predict(instance)
You still need keras and tensorflow to use the model.
The accepted answer is correct, but it misses that you first need to mount "/content/drive":
from google.colab import drive
drive.mount('/content/drive')
Then you can save the weights of the model:
model.save_weights('my_model_weights.h5')
...or even save the whole model:
model.save('my_model.h5')
Once done, unmount the drive using:
drive.flush_and_unmount()

How to convert tflite model to quantized model without pb file

I'm trying to convert a tflite model (the pose estimation model multi_person_mobilenet_v1_075_float.tflite hosted here) to its quantized version.
I therefore installed the tflite_convert command line tool, recommended here, but the examples do not fit my case, where I only have a *.tflite file and no corresponding frozen_graph.pb file.
Thus when I just call
tflite_convert --output_file multi_person_quant.tflite --saved_model_dir ./
from within the directory containing multi_person_mobilenet_v1_075_float.tflite, I get an error message:
IOError: SavedModel file does not exist at: .//{saved_model.pbtxt|saved_model.pb}
I guess I need a .pb file for whatever I want to do... Any idea how to generate it from the *.tflite file?
Any other advice would also be helpful.

How to generate HDF5/H5 file in Keras with existing Python project

I am trying to get an HDF5/H5 file from an existing Keras project.
This attention_ocr project is an OCR model written in Python. I would like to generate an HDF5/H5 file so I can convert it with tensorflowjs_converter [ref] and use it in the browser.
Reference:
How to import a TensorFlow SavedModel into TensorFlow.js
Importing a Keras model into TensorFlow.js
I am looking to set up a Keras environment and generate the HDF5/H5 file.
Once your model is trained in Keras, saving it as an HDF5 file is a single line:
my_model.save('my_filename.h5')
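As a minimal sketch of the end-to-end flow (the model below is a trivial placeholder, not the attention_ocr network, and the converter command assumes the tensorflowjs Python package is installed):
from tensorflow import keras

# placeholder model; substitute the actual attention_ocr Keras model here
model = keras.Sequential([keras.layers.Dense(10, activation='softmax', input_shape=(784,))])

# save the trained model to a single HDF5 file
model.save('my_filename.h5')

# then, from a shell, convert it for use with TensorFlow.js in the browser:
#   tensorflowjs_converter --input_format keras my_filename.h5 ./tfjs_model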

Tensorflow with poets

I have a question about why TensorFlow for Poets was not able to classify the image I want. I am using Ubuntu 14.04 with TensorFlow installed using Docker. Here is my story:
After a successful retraining on the flower categories following the link here, I wanted to train on my own categories as well. I have 10 classes of images and they are well organized according to the tutorial. My photos were also stored in the tf_files directory, and following the guide I retrained the Inception model on my categories.
Everything in the retraining went well. However, when I tried to classify the image I want, I was unable to do so and got this error. I also tried to look for the py file at /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py, but my dist-packages were empty! Can someone help me with this? Where can I find the py files? Thank you!
Your error indicates that it is unable to locate the file. I would suggest executing the following command in the directory where you have the graph and label files:
python label_image.py exact_path_to_your_testimage_file.jpg
