How to get text prediction accuracy from a saved model - Python

I am new to TensorFlow and HTML, and badly stuck on text classification.
I am trying to detect the positive and negative polarity of text. I trained the model in the browser on manually filtered sentences for both negative and positive classes, and saved it as a .json and a .bin file.
async function saveFile() {
  const saveResults = await model.save('downloads://my-model-1');
}
I loaded the files back from user input:
async function loadFile() {
  const jsonUpload = document.getElementById('json-upload');
  const weightsUpload = document.getElementById('weights-upload');

  const model = await tf.loadModel(tf.io.browserFiles([jsonUpload.files[0], weightsUpload.files[0]]));
  model.compile({loss: "categoricalCrossentropy", optimizer: "adam", metrics: 'accuracy'});
  model.summary();
}
This prints the summary of the model that was trained on a small portion of the data, saved, and loaded back.
I am stuck re-creating the model (ERROR: Uncaught TypeError: Sequential model cannot be built: model is empty. Add some layers first.).
What I need to do is load the model and have it predict the polarity of user-input text as negative/positive, along with the detection accuracy.
Can anyone help in a bit more detail? I am learning, but I have not been able to get this working from the tutorials at https://www.tensorflow.org/js
Model
// Define a model
model = tf.sequential();
console.log(sequence_length);
// Add layers to the model
model.add(tf.layers.embedding({
  inputDim: vocabulary_size,
  outputDim: embedding_dim,
  inputLength: sequence_length,
  trainable: true
}));
addCLayers();
model.add(tf.layers.dropout({rate: 0.2}));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({units: 100, activation: 'sigmoid'}));
model.add(tf.layers.dense({units: 1000, activation: 'sigmoid'}));
model.add(tf.layers.dense({units: 100, activation: 'sigmoid'}));
model.add(tf.layers.dense({units: 2, activation: 'softmax'}));

I believe I see two issues.
Firstly, you're using loadModel, which has been deprecated. You'll want to switch to loadLayersModel soon; it also accepts an IOHandler, just like the one you're currently using.
Secondly, you need to compile the model before saving. I see you're loading and then trying to compile; there should be no need to compile after loading. You can, however, load a layerless model.
Make sure your model is in good standing before you save it. That seems to be where the problem is.

Related

Azure Machine Learning unable to load PyTorch model from the outputs folder

I am unable to load a saved PyTorch model from the outputs folder in my other scripts.
I am using the following lines of code to save the model:
os.makedirs("./outputs/model", exist_ok=True)
torch.save({
    'model_state_dict': copy.deepcopy(model.state_dict()),
    'optimizer_state_dict': optimizer.state_dict()
}, './outputs/model/best-model.pth')
new_run.upload_file("outputs/model/best-model.pth", "outputs/model/best-model.pth")
saved_model = new_run.register_model(model_name='pytorch-model', model_path='outputs/model/best-model.pth')
and using the following code to access it:
global model
best_model_path = 'outputs/model/best-model.pth'
model_checkpoint = torch.load(best_model_path)
model.load_state_dict(model_checkpoint['model_state_dict'], strict = False)
but when I run the above code, I get this error: No such file or directory: './outputs/model/best-model.pth'
Also, is there a way to get the saved model from Azure Models? I have tried to get it using the following lines of code:
from azureml.core.model import Model
model = Model(ws, "Pytorch-model")
but it returns a Model-type object, which raises an error on model.eval() (error: Model has no such attribute eval()).
There is no global outputs folder. If you want to use a model in a new script, you need to give the script the model as an input, or register the model and download it from the new script.
The Model object from azureml.core.model import Model is not your PyTorch model.
You can use Model.register(...) to register your model and model.download(...) to download it. Then you can use PyTorch to load it.
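As a rough illustration (not tested here), a minimal sketch of that register/download/load flow with the Azure ML SDK v1; ws is assumed to be your Workspace and create_model() is a hypothetical function that rebuilds your network architecture:

from azureml.core import Workspace
from azureml.core.model import Model
import torch

ws = Workspace.from_config()  # assumes a config.json describing your workspace

# Register the checkpoint file that the training run wrote under ./outputs
registered = Model.register(workspace=ws,
                            model_name='pytorch-model',
                            model_path='outputs/model/best-model.pth')

# In another script: look up the registered model and download the file locally
azure_model = Model(ws, name='pytorch-model')
local_path = azure_model.download(target_dir='.', exist_ok=True)

# Rebuild the architecture (hypothetical create_model()) and load the weights with PyTorch
model = create_model()
checkpoint = torch.load(local_path, map_location='cpu')
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()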

How to use evaluate and predict functions in keras implementation of SincNet?

Thanks for your attention. I'm developing an automatic speaker recognition system using SincNet:
Ravanelli, M., & Bengio, Y. (2018, December). Speaker recognition from raw waveform with SincNet. In 2018 IEEE Spoken Language Technology Workshop (SLT) (pp. 1021-1028). IEEE.
Since the network is coded in PyTorch, I searched for and found a Keras implementation here: https://github.com/grausof/keras-sincnet. I adapted the train.py code to train a SincNet with my own data in TensorFlow 2.0, and it worked fine. I saved only the weights of my trained network. My training data has shape (128, 3200, 1) for inputs and (128,) for labels per batch.
# Creates a Sincnet model with input_size=3200 (wlen), num_classes=40, fs=16000
redsinc = create_model(wlen, num_classes, fs)

# Saves only weights and stopearly callback
checkpointer = ModelCheckpoint(filepath='checkpoints/SincNetBiomex3.hdf5', verbose=1,
                               save_best_only=True, monitor='val_accuracy', save_weights_only=True)
stopearly = EarlyStopping(monitor='val_accuracy', patience=3, verbose=1)
callbacks = [checkpointer, stopearly]

# optimizer = RMSprop(lr=learnrate, rho=0.9, epsilon=1e-8)
optimizer = Adam(learning_rate=learnrate)

# Creates generator of training batches
train_generator = batchGenerator(batch_size, train_inputs, train_labels, wlen)
validinputs, validlabels = create_batches_rnd(validation_labels.shape[0],
                                              validation_inputs, validation_labels, wlen)

# Compiling model and training with fit_generator
redsinc.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
history = redsinc.fit_generator(train_generator, steps_per_epoch=N_batches, epochs=epochs,
                                verbose=1, callbacks=callbacks, validation_data=(validinputs, validlabels))
The problem came when I tried to evaluate the network. I didn't use the code found in test.py; I only loaded the weights I previously saved and used the evaluate function. My test data had shape (1200, 3200, 1) for the inputs and (1200,) for the labels.
# Create a Sincnet model and load previously saved weights
redsinc = create_model(wlen, num_clases, fs)
redsinc.load_weights('checkpoints/SincNetBiomex3.hdf5')
test_loss, test_accuracy = redsinc.evaluate(x=eval_in, y=eval_lab)

RuntimeError: You must compile your model before training/testing. Use `model.compile(optimizer, loss)`.
Then I added the same compile code I used for training:
optimizer = Adam(learning_rate=0.001)
redsinc.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
Then I reran the test code and got this:
WARNING:tensorflow:From C:\Users\atenc\Anaconda3\envs\py3.7-tf2.0gpu\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
ValueError: A tf.Variable created inside your tf.function has been garbage-collected. Your code needs to keep Python references to variables created inside `tf.function`s.
A common way to raise this error is to create and return a variable only referenced inside your function:

@tf.function
def f():
    v = tf.Variable(1.0)
    return v

v = f()  # Crashes with this error message!

The reason this crashes is that @tf.function annotated function returns a **`tf.Tensor`** with the **value** of the variable when the function is called rather than the variable instance itself. As such there is no code holding a reference to the `v` created inside the function and Python garbage collects it.
The simplest way to fix this issue is to create variables outside the function and capture them:

v = tf.Variable(1.0)

@tf.function
def f():
    return v

f()  # <tf.Tensor: ... numpy=1.>
v.assign_add(1.)
f()  # <tf.Tensor: ... numpy=2.>
I don't understand the error, since I've evaluated other networks with the same function and never had any problems. I then decided to use the predict function to match the predicted labels against the correct labels and compute all metrics with my own code, but I got another error.
# Create a Sincnet model and load previously saved weights
redsinc = create_model(wlen,num_clases,fs)
redsinc.load_weights('checkpoints/SincNetBiomex3.hdf5')
print('Model loaded')
#Predict labels with test data
predict_labels = redsinc.predict(eval_in)
Error while reading resource variable _AnonymousVar212 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar212/class tensorflow::Var does not exist.
[[node sinc_conv1d/concat_104/ReadVariableOp (defined at \Users\atenc\Anaconda3\envs\py3.7-tf2.0gpu\lib\site-packages\tensorflow_core\python\framework\ops.py:1751) ]] [Op:__inference_keras_scratch_graph_13649]
Function call stack:
keras_scratch_graph
I hope someone can tell me what these errors mean and how to solve them. I've searched for solutions, but most of the ones I've found don't seem related to my problem, so I can't apply them. I'm guessing the errors are caused by the SincNet layer code, because it is a custom-coded layer; the code for the SincNet layer can be found in the GitHub repository in the file sincnet.py.
I appreciate all the help I can get. Again, thank you for your attention.
You should downgrade your tf and keras versions; it worked for me when I faced the same problem.
Try keras==2.1.6 and tensorflow-gpu==1.13.1.
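For completeness, a minimal sketch (untested) of the evaluate/predict flow the question is after, reusing the names from the question (create_model, wlen, num_classes, fs, eval_in, eval_lab) and the question's TF 2.0-style imports; it compiles before evaluate() and computes accuracy by hand from predict():

import numpy as np
from tensorflow.keras.optimizers import Adam

# Rebuild the architecture and load the previously saved weights (names from the question)
redsinc = create_model(wlen, num_classes, fs)
redsinc.load_weights('checkpoints/SincNetBiomex3.hdf5')

# evaluate() requires a compiled model, so compile with the same loss/metrics as training
redsinc.compile(loss='sparse_categorical_crossentropy',
                optimizer=Adam(learning_rate=0.001),
                metrics=['accuracy'])
test_loss, test_accuracy = redsinc.evaluate(x=eval_in, y=eval_lab)

# predict() returns class probabilities; compare the argmax against the integer labels
probs = redsinc.predict(eval_in)
predicted = np.argmax(probs, axis=1)
manual_accuracy = np.mean(predicted == eval_lab)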

Alternative for Lambda layer in yolo3 Keras

My Goal
I want to train a custom object detection model in TensorFlow (Python) and use it with TensorFlow.js. After digging through a lot of examples I found this one, which is widely popular.
What I have done
I have written (with help from online examples) the TensorFlow.js part to load a model locally and get the predictions. I used it with the COCO pretrained model and it works fine (so I am not adding that code here).
What is my problem
I am very new to Python and TensorFlow.
The example for training the model, qqwweee/keras-yolo3, is in Python, and it uses Lambda from Keras (from keras.layers import Input, Lambda) in these places:
model.compile(optimizer=Adam(lr=1e-3), loss={
    # use custom yolo_loss Lambda layer.
    'yolo_loss': lambda y_true, y_pred: y_pred})

And

model.compile(optimizer=Adam(lr=1e-4), loss={'yolo_loss': lambda y_true, y_pred: y_pred})  # recompile to apply the change

And

model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
                    arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
    [*model_body.output, *y_true])
model = Model([model_body.input, *y_true], model_loss)
So, from what I understand so far, Lambda is mainly used for calculating the loss function, and this causes the main problem in TFJS because the Lambda layer is not implemented there yet. I want to use some alternative instead of the Lambda layer.
This is the error I am getting while using the trained model in TFJS:
Error loading layer ValueError: Unknown layer: Lambda. This may be due to one of the following reasons:
1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom layer is defined in JavaScript, but is not registered properly with tf.serialization.registerClass().
A similar question was asked here: "Unknown layer: Lambda" in tensorflowjs on browser. It talks about writing a custom layer, but the example is not enough to do that and ultimately leads to a dead end.
What I want
Is there any way to use another loss function instead of Lambda? How?
Is there any example of writing a custom layer for Lambda?
Where is my understanding wrong?
P.S. I spent a lot of time looking for the solution; any help will be appreciated. Thanks in advance.
After adding the empty Lambda layer given by @edkeveked (thanks!), the error Error loading layer ValueError: Unknown layer: Lambda is gone, but I ran into something else.
Check the model summary here
Now, the model warmup itself is throwing this error.
code for warmup
let zero = tfNode.zeros([1, 416, 416, 3]);
const result = await this.model.predict(zero);
result.map(async (t) => await t.data());
result.map(async (t) => t.dispose());

code for image prediction
batched = tfNode.tidy(() => {
    if (!(img instanceof tfNode.Tensor)) {
        img = tfNode.browser.fromPixels(img);
    }
    return img.expandDims(0);
});
result = await this.model.predict(batched);
Error I am getting
"Error: Error when checking model: the Array of Tensors that you are passing to your model is not the size the model expected. Expected to see 4 Tensor(s), but instead got 1 Tensor(s).
at new ValueError (XXX\node_modules\@tensorflow\tfjs-layers\dist\errors.js:68:28)
at checkInputData (XXX\node_modules\@tensorflow\tfjs-layers\dist\engine\training.js:316:19)
at LayersModel.predict (XXX\node_modules\@tensorflow\tfjs-layers\dist\engine\training.js:981:9)
at ObjectDetection.warmUp (XXX\tensorflow_predownloaded_model.js:47:45)
at XXX\tensorflow_predownloaded_model.js:38:18"
Since the Lambda layer is not yet supported, it needs to be provided for the conversion to work.
Moreover, the loaded layer is not used for training, so the Lambda layer can be empty. (Code not tried.)
class Lambda extends tf.layers.Layer {
    constructor() {
        super({});
    }
    static get className() {
        return 'Lambda';
    }
}
tf.serialization.SerializationMap.register(Lambda);

Description of TF Lite's Toco converter args for quantization aware training

These days I am trying to track down an error concerning the deployment of a TF model with TPU support.
I can get a model without TPU support running, but as soon as I enable quantization, I get lost.
I am in the following situation:
Created a model and trained it
Created an eval graph of the model
Froze the model and saved the result as protocol buffer
Successfully converted and deployed it without TPU support
For the last point, I used the TFLiteConverter's Python API. The script that produces a functional tflite model is
import tensorflow as tf
graph_def_file = 'frozen_model.pb'
inputs = ['dense_input']
outputs = ['dense/BiasAdd']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, inputs, outputs)
converter.inference_type = tf.lite.constants.FLOAT
input_arrays = converter.get_input_arrays()
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
open('model.tflite', 'wb').write(tflite_model)
This tells me that my approach seems to be ok up to this point. Now, if I want to utilize the Coral TPU stick, I have to quantize my model (I took that into account during training). All I have to do is to modify my converter script. I figured that I have to change it to
import tensorflow as tf
graph_def_file = 'frozen_model.pb'
inputs = ['dense_input']
outputs = ['dense/BiasAdd']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, inputs, outputs)
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8 ## Indicates TPU compatibility
input_arrays = converter.get_input_arrays()
converter.quantized_input_stats = {input_arrays[0]: (0., 1.)} ## mean, std_dev
converter.default_ranges_stats = (-128, 127) ## min, max values for quantization (?)
converter.allow_custom_ops = True ## not sure if this is needed
## REMOVED THE OPTIMIZATIONS ALTOGETHER TO MAKE IT WORK
tflite_model = converter.convert()
open('model.tflite', 'wb').write(tflite_model)
This tflite model produces results when loaded with the Python API of the interpreter, but I am not able to understand their meaning. Also, there is no (or if there is, it is hidden well) documentation on how to choose mean, std_dev and the min/max ranges. Also, after compiling this with the edgetpu_compiler and deploying it (loading it with the C++ API), I receive an error:
INFO: Initialized TensorFlow Lite runtime.
ERROR: Failed to prepare for TPU. generic::failed_precondition: Custom op already assigned to a different TPU.
ERROR: Node number 0 (edgetpu-custom-op) failed to prepare.
Segmentation fault
I suppose I missed a flag or something during the conversion process. But as the documentation is also lacking here, I can't say for sure.
In short:
What do the parameters mean, std_dev and min/max do, and how do they interact?
What am I doing wrong during the conversion?
I am grateful for any help or guidance!
EDIT: I have opened a github issue with the full test code. Feel free to play around with this.
You should never need to manually set the quantization stats.
Have you tried the post-training-quantization tutorials?
https://www.tensorflow.org/lite/performance/post_training_integer_quant
Basically they set the quantization options:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
Then they pass a "representative dataset" to the converter, so that the converter can run the model a few batches to gather the necessary statistics:
def representative_data_gen():
    for input_value in mnist_ds.take(100):
        yield [input_value]

converter.representative_dataset = representative_data_gen
While there are options for quantized training, it's always easier to do post-training quantization.
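Roughly, the full post-training integer quantization recipe from that tutorial looks like the sketch below (the TF 2.x Keras path rather than the frozen-graph path from the question); model and calibration_images are placeholders for your trained Keras model and a small batch of representative input data:

import numpy as np
import tensorflow as tf

# Placeholder calibration data shaped like the model input (replace with real samples)
calibration_images = np.random.rand(100, 28, 28, 1).astype(np.float32)

def representative_data_gen():
    # Yield one sample at a time so the converter can measure activation ranges
    for i in range(100):
        yield [calibration_images[i:i + 1]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # `model` is your trained Keras model
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force full-integer quantization so the Edge TPU compiler accepts the result
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_model)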

Lasagne/Theano, problems loading pickled model

I think I'm losing my mind at this point.
I'm using Lasagne for a small convolutional neural network. It trains perfectly, and I can compute the error on the training and validation sets as well, but I cannot save the trained model to disk. More precisely, I can save it and load it, but I cannot use it to predict on new data.
This is what I do after training
model = {'network': network, 'params': get_all_params(network), 'params_values': get_all_param_values(network)}
pickle.dump(model, open('models/model_1.pkl', 'wb'), protocol=pickle.HIGHEST_PROTOCOL)
And this is what I do to load the model
with open('models/model.pkl', 'rb') as pickle_file:
    model = pickle.load(pickle_file)

network = model['network']
values = model['params_values']
set_all_param_values(network, values)

T_input = T.tensor4('input', dtype='float32')
T_target = T.ivector('target')

predictions = get_output(network, deterministic=True)
loss = (cross_entropy(predictions, T_target)).mean()
acc = T.mean(T.eq(T.argmax(predictions, axis=1), T_target), dtype=config.floatX)
test_fn = function([T_input, T_target], [loss, acc])
I cannot even pass in the real numpy input, because I get this error first:
theano.compile.function_module.UnusedInputError: theano.function was asked to create a
function computing outputs given certain inputs, but the provided input variable at index 0
is not part of the computational graph needed to compute the outputs: input.
To make this error into a warning, you can pass the parameter
on_unused_input='warn' to theano.function. To disable it completely, use
on_unused_input='ignore'.
I then tried setting the parameter on_unused_input='warn', and this is the result:
theano.gof.fg.MissingInputError: An input of the graph, used to compute (..)
was not provided and not given a value.Use the Theano flag
exception_verbosity='high',for more information on this error.
The problem is that your T_input is not tied to the input layer, and hence Theano can't compile it. Use the input variable of the network's input layer instead:
T_input = lasagne.layers.get_all_layers(network)[0].input_var
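In context, the loading code would look roughly like this (a sketch, assuming network is the unpickled Lasagne network and X_test / y_test are your NumPy test arrays; categorical_crossentropy stands in for the question's cross_entropy):

import theano
import theano.tensor as T
from lasagne.layers import get_all_layers, get_output
from lasagne.objectives import categorical_crossentropy

# Reuse the symbolic input variable the network was built with, instead of a fresh T.tensor4
T_input = get_all_layers(network)[0].input_var
T_target = T.ivector('target')

predictions = get_output(network, deterministic=True)
loss = categorical_crossentropy(predictions, T_target).mean()
acc = T.mean(T.eq(T.argmax(predictions, axis=1), T_target),
             dtype=theano.config.floatX)

# Both inputs are now part of the computational graph, so compilation succeeds
test_fn = theano.function([T_input, T_target], [loss, acc])
test_loss, test_acc = test_fn(X_test, y_test)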
