import pickle
import streamlit as st
from streamlit_option_menu import option_menu
# loading the models
breast_cancer_model = pickle.load(open('C:/Users/Jakub/Desktop/webap/breast_cancer_classification_nn_model.sav', 'rb'))  # this line raises the error: zipfile.BadZipFile
wine_quality_model = pickle.load(open('wine_nn_model.sav', 'rb'))  # BadZipFile here as well
Since it isn't a zip file, I tried zipping it and moving it to a different location, but nothing I could think of worked.
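One way to narrow this down (a diagnostic sketch, not from the original post; the filename is assumed): pickle.load raising BadZipFile usually means the file was not written by pickle.dump at all, for example a Keras model saved with model.save(). Peeking at the file's first bytes shows what you actually have:
# check the magic bytes to identify the real file format (path assumed)
with open('breast_cancer_classification_nn_model.sav', 'rb') as f:
    magic = f.read(8)
if magic.startswith(b'PK'):
    print('zip archive - e.g. a Keras model saved with model.save()')
elif magic.startswith(b'\x89HDF'):
    print('HDF5 file - try tf.keras.models.load_model() instead of pickle')
elif magic[:1] == b'\x80':
    print('pickle (protocol 2+) - pickle.load should work')
else:
    print('unrecognized format:', magic)
If the file turns out to be a zip or HDF5 archive, loading it with tf.keras.models.load_model() instead of pickle is the likely fix.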
I am trying to convert my saved model to a TFLite model. The model is saved on my desktop, but when I try to run the code below
I get an error:
OSError: SavedModel file does not exist at: C:/Users/Omar/Desktop/model00000014.h5/{saved_model.pbtxt|saved_model.pb}.
I'm not sure what the problem is.
import tensorflow as tf
saved_model_dir = r"C:/Users/Omar/Desktop/model00000014.h5"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
If you're trying to convert a .h5 Keras model to a TFLite model, make sure you use the TFLiteConverter.from_keras_model() method, as described in the docs:
model = tf.keras.models.load_model( "C:/Users/Omar/Desktop/model00000014.h5" )
converter = tf.lite.TFLiteConverter.from_keras_model( model )
open( 'model.tflite' , 'wb' ).write( converter.convert() )
In the case of a SavedModel, use TFLiteConverter.from_saved_model() and pass the path of the SavedModel directory:
saved_model_dir = 'path/to/savedModelDir'
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
In your code you're passing a Keras .h5 file to the TFLiteConverter.from_saved_model() method, which is what causes the OSError above.
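Putting the pieces together, a minimal end-to-end sketch for the .h5 case (using the path from the question):
import tensorflow as tf
# load the Keras model from the .h5 file
model = tf.keras.models.load_model(r"C:/Users/Omar/Desktop/model00000014.h5")
# convert the in-memory Keras model rather than a SavedModel directory
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# write the serialized FlatBuffer to disk
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)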
I have a trained model saved as "model.json" and I want to use it in my Python code. How can I load the "model.json" file in Python so I can use the model?
You must, of course, also have saved the model weights in .h5 format, since the JSON file only contains the architecture.
If you want to load the model from JSON, do this:
from keras.models import model_from_json

# read the architecture from the JSON file
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load the weights into the new model
loaded_model.load_weights("model.h5")
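For completeness, a sketch of the save side that would have produced those two files (assuming model is an existing Keras model; this part is not from the original answer):
# serialize the architecture to JSON
with open('model.json', 'w') as json_file:
    json_file.write(model.to_json())
# serialize the weights to HDF5
model.save_weights('model.h5')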
From your code I read that you are loading a dict, so try this:
from keras.models import model_from_config
model = model_from_config(model_dict)
Here, model_dict is the parsed JSON.
For the placeholder problem, try:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
Let me know if you've solved it.
# Saving the best model with pickle (neural net, 83.43% accuracy)
import pickle
from sklearn.metrics import accuracy_score
pickle.dump(classifier, open("NeuralNews", 'wb'))
loading = pickle.load(open("NeuralNews", 'rb'))
predictionPickleNeural = loading.predict(testResult2)
predictionPickleNeural = (predictionPickleNeural > 0.5)
acScorePickleNeural = accuracy_score(lb.fit_transform(testDataForComparison), predictionPickleNeural)
print("Accuracy Pickle Neural : " + str(acScorePickleNeural))
I can't find the 'NeuralNews' file that I created anywhere on Google Drive.
Is there a way to find out where it is?
It's inside the current directory of the Google Cloud VM. You can try:
import os
os.listdir('.')
If you get some output like,
['.config', 'sample_data']
then you can also get a listing by issuing a command like the one below,
!ls sample_data
to look inside the sample_data folder. In any case, you can save the file to your Google Drive or download it to your local machine.
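If you are on Colab, a short sketch of persisting the file to Drive (assuming the standard Colab environment; the destination folder is just an example):
import shutil
from google.colab import drive
# mount Google Drive into the VM's filesystem
drive.mount('/content/drive')
# copy the pickled model into Drive so it survives the VM being recycled
shutil.copy('NeuralNews', '/content/drive/MyDrive/NeuralNews')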
I am working on code using gensim and I am having a tough time troubleshooting a ValueError. I finally managed to unzip the GoogleNews-vectors-negative300.bin.gz file so I could use it in my model; I also tried gzip, but that was unsuccessful. The error occurs in the last line of the code. What can be done to fix it? Is there any workaround? Finally, is there a website I could reference?
Thank you respectfully for your assistance!
import gensim
from keras import backend
from keras.layers import Dense, Input, Lambda, LSTM, TimeDistributed
from keras.layers.merge import concatenate
from keras.layers.embeddings import Embedding
from keras.models import Model
pretrained_embeddings_path = "GoogleNews-vectors-negative300.bin"
word2vec = gensim.models.KeyedVectors.load_word2vec_format(pretrained_embeddings_path, binary=True)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-3-23bd96c1d6ab> in <module>()
      1 pretrained_embeddings_path = "GoogleNews-vectors-negative300.bin"
----> 2 word2vec = gensim.models.KeyedVectors.load_word2vec_format(pretrained_embeddings_path, binary=True)

C:\Users\green\Anaconda3\envs\py35\lib\site-packages\gensim\models\keyedvectors.py in load_word2vec_format(cls, fname, fvocab, binary, encoding, unicode_errors, limit, datatype)
    244                     word.append(ch)
    245                 word = utils.to_unicode(b''.join(word), encoding=encoding, errors=unicode_errors)
--> 246                 weights = fromstring(fin.read(binary_len), dtype=REAL)
    247                 add_word(word, weights)
    248             else:

ValueError: string size must be a multiple of element size
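Before trying workarounds, a quick diagnostic sketch (not from the original thread): this particular ValueError typically means the .bin file on disk is truncated or still gzip-compressed, which you can check directly:
import os
path = 'GoogleNews-vectors-negative300.bin'
print(os.path.getsize(path))  # the fully decompressed file is roughly 3.6 GB
with open(path, 'rb') as f:
    magic = f.read(2)
# gzip streams start with bytes 1f 8b; a real word2vec .bin starts with
# an ASCII header such as b'3000000 300'
print('still gzip-compressed' if magic == b'\x1f\x8b' else 'not gzip')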
Edit: The S3 URL has stopped working. You can download the data from Kaggle or use this Google Drive link (be careful when downloading files from Google Drive).
The commands below no longer work:
brew install wget
wget -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
This downloads the gzip-compressed file, which you can uncompress using:
gzip -d GoogleNews-vectors-negative300.bin.gz
You can then use the command below to load the word vectors:
from gensim import models
w = models.KeyedVectors.load_word2vec_format(
'../GoogleNews-vectors-negative300.bin', binary=True)
You have to write the complete path.
Use this URL:
https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz
Try this:
import gensim.downloader as api
wv = api.load('word2vec-google-news-300')
vec_king = wv['king']
also, visit this link : https://radimrehurek.com/gensim/auto_examples/tutorials/run_word2vec.html#sphx-glr-auto-examples-tutorials-run-word2vec-py
Here is what worked for me. I loaded only part of the model, not the entire thing, since it's huge.
!pip install wget
import wget
url = 'https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz'
filename = wget.download(url)
import gzip

# decompress the downloaded archive to get the raw .bin file
with gzip.open('GoogleNews-vectors-negative300.bin.gz', 'rb') as f_in, \
        open('GoogleNews-vectors-negative300.bin', 'wb') as f_out:
    f_out.writelines(f_in)
import gensim
from gensim.models import Word2Vec, KeyedVectors
from sklearn.decomposition import PCA
model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True, limit=100000)
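A quick way to sanity-check the loaded vectors (a usage sketch; 'king' is just an example word):
# look up one vector and its nearest neighbours by cosine similarity
vec_king = model['king']
print(vec_king.shape)  # (300,)
print(model.most_similar('king', topn=3))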
You can use this URL that points to Google Drive's download of the bin.gz file:
https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?resourcekey=0-wjGZdNAUop6WykTtMip30g
Alternative mirrors (including the S3 link mentioned here) seem to be broken.