I have been trying to train a neural net with Keras on some .tfrecord files I have already generated. I pass the file names in as command-line arguments and store them in a TensorFlow dataset, which I then use to fit the model. However, when I run the code I get the following error:
ValueError: Please provide either inputs and targets or inputs, targets, and sample_weights.
It seems Keras is unhappy that I am not passing separate input and label tensors, but I was led to believe you can pass the dataset as a single argument instead. The code is shown below:
import tensorflow as tf
import sys
import tensorflow.data
from tensorflow import keras
from tensorflow.keras import layers
tf.enable_eager_execution()
inputList = []
for file in sys.argv[1:]:
    inputList.append(file)
filenames = tf.Variable(inputList, tf.string)
dataset = tf.data.TFRecordDataset(filenames)
dataset.shuffle(1600000)
model = tf.keras.Sequential()
model.add(layers.Dense(13, input_shape=(13,), activation='relu'))
model.add(layers.Dense(20, activation='relu'))
model.add(layers.Dense(20, activation='relu'))
model.add(layers.Dense(10, activation='relu'))
model.add(layers.Dense(2, activation='relu'))
model.compile(optimizer=tf.train.AdamOptimizer(0.001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(dataset, epochs=10, steps_per_epoch=30)
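For reference, a TFRecordDataset yields serialized tf.Example records, so fit cannot split them into inputs and targets on its own; a parsing map is usually applied first. Below is a minimal sketch using the tf.io parsing API (available in recent TF 1.x and in TF 2.x), assuming each record stores 13 float features and a length-2 one-hot label; the feature names 'features' and 'label' are hypothetical and must match however the files were written:

# Hypothetical feature spec -- adjust names and shapes to match how the records were written.
feature_spec = {
    'features': tf.io.FixedLenFeature([13], tf.float32),
    'label': tf.io.FixedLenFeature([2], tf.float32),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    return parsed['features'], parsed['label']

# Build a dataset of (inputs, targets) pairs that model.fit can consume.
# A plain Python list of file paths works directly here.
dataset = (tf.data.TFRecordDataset(inputList)
           .map(parse_example)
           .shuffle(10000)
           .batch(32)
           .repeat())

model.fit(dataset, epochs=10, steps_per_epoch=30)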
I am using TensorFlow 2 on Windows and the CPU utilization is extremely low. Looking at the task manager, only one core is working. I searched the internet and added the following code at the top of my script, but it makes no difference:
tf.config.threading.set_intra_op_parallelism_threads(48)
tf.config.threading.set_inter_op_parallelism_threads(48)
How can I fix it? The following is my code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import *
tf.config.threading.set_intra_op_parallelism_threads(48)
tf.config.threading.set_inter_op_parallelism_threads(48)
def build_model(layer1=10, layer2=10):
    model = keras.Sequential()
    model.add(Dense(layer1, kernel_initializer='he_normal', input_shape=[11]))
    model.add(Dense(layer2, kernel_initializer='he_normal'))
    model.add(Dense(10, kernel_initializer='he_normal', activation='softmax'))
    optimizer = keras.optimizers.Adam(learning_rate=0.0015)
    model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer=optimizer)
    return model
model = build_model()
model.fit(x, y)
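For reference, one way to check whether the thread settings were actually applied (a diagnostic sketch, not a fix; in TensorFlow 2 these settings generally have to run before any other TensorFlow work):

print(tf.config.threading.get_intra_op_parallelism_threads())
print(tf.config.threading.get_inter_op_parallelism_threads())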
I used TensorFlow Lite 2.1.1-ALPHA-PRECOMPILED for the Arduino Nano 33 BLE with headers.
Import
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout, Conv1D, MaxPooling1D
Model Definition
def get_model(n_timesteps, n_features, n_outputs):
    model = Sequential()
    model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features)))
    model.add(Conv1D(filters=32, kernel_size=3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Conv1D(filters=16, kernel_size=5, activation='relu'))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', tf.keras.metrics.Precision()])
    # fit network
    return model
model = get_model(128, 6, n_outputs=4)
Model Summary
TF Lite converter works but adds an ExpandDims operation
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
def representative_dataset():
    for _, samp in enumerate(trainX):
        yield [samp.astype(np.float32).reshape(1, 128, 6)]
# Set the optimization flag.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce integer only quantization
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Provide a representative dataset to ensure we quantize correctly.
converter.representative_dataset = representative_dataset
model_tflite = converter.convert()
# Save the model to disk
open('default_tf/model0_1.tflite', "wb").write(model_tflite)
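For reference, the quantized input and output types can be checked from Python with the TFLite interpreter (a minimal sketch, assuming the file path above):

interpreter = tf.lite.Interpreter(model_path='default_tf/model0_1.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['dtype'])   # expected: int8
print(interpreter.get_output_details()[0]['dtype'])  # expected: int8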
When I check the tflite structure via Netron, I find that the ExpandDims operation is included, as shown in the following image.
I already tried adding
#include "tensorflow/lite/micro/all_ops_resolver.h"
to my sketch, but that did not resolve the problem. I also tried including
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/kernels/micro_ops.h"
static tflite::MicroMutableOpResolver<1> micro_op_resolver;
void setup() {
    micro_op_resolver.AddExpandDims();
}
In this case I get an error:
micro_op_resolver.AddExpandDims();
^~~~~~~~~~~~~
exit status 1
'class tflite::MicroMutableOpResolver<1>' has no member named 'AddExpandDims'
I resolved the problem by downgrading the Python version of TensorFlow from 2.3.0 to 2.1.1 and training the model there. In 2.1.1 the TensorFlow Lite converter uses a Reshape op when converting the model, which in 2.3.0 was replaced with the ExpandDims op.
We ran into the same problem but found a solution that avoids downgrading Tensorflow (so we can use the latest Tensorflow 2.4.1). The Tensorflow 1D operations, e.g., Conv1D, are specialized versions of their higher order counterparts, e.g., Conv2D. Therefore, you can "upgrade" your 1D operations without sacrificing accuracy. The Tensorflow Lite Converter does the same thing but replaces the Conv1D with Conv2D AND an additional ExpandDims layer before that - hence the problem.
Here is the procedure:
Adjust the input_shape for your network by adding another dimension of size 1, e.g., input_shape=(3,3,1) instead of input_shape=(3,3)
Replace the 1D operations with their 2D counterpart (Conv1D to Conv2D, MaxPooling1D to MaxPooling2D)
Adjust the kernel_size for your Conv2D layers, e.g., kernel_size=(3,1) instead of kernel_size=3
Adjust the pool_size for the pooling layers (such as MaxPooling2D or GlobalAveragePooling2D), e.g., pool_size=(2,1) instead of pool_size=2
Retrain the network
Convert your model to Tensorflow Lite Micro
Your model after these adjustments
def get_model(n_timesteps, n_features, n_outputs):
    model = Sequential()
    model.add(Conv2D(filters=32, kernel_size=(3,1), activation='relu', input_shape=(n_timesteps, n_features, 1)))
    model.add(Conv2D(filters=32, kernel_size=(3,1), activation='relu'))
    model.add(Dropout(0.5))
    model.add(MaxPooling2D(pool_size=(2,1)))
    model.add(Conv2D(filters=16, kernel_size=(5,1), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,1)))
    model.add(Flatten())
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', tf.keras.metrics.Precision()])
    # fit network
    return model
Downsides
You have to adjust your training data to account for the extra dimension
You have to adjust the input data for your network whenever you run inference
You have to retrain your network
We tried to avoid the adjustments of the training data and inference inputs by adding a Reshape layer right after the Input layer of the model. Unfortunately, this led to another problem after pruning, int8 quantizing, and converting our model to Tensorflow Lite Micro. Without the reshaping everything works just fine.
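For reference, the data adjustments from the first two downsides amount to appending an axis of size 1; a minimal sketch, where trainX and sample are placeholders for your own arrays of shape (num_samples, 128, 6) and (128, 6):

import numpy as np

# Training data: (num_samples, 128, 6) -> (num_samples, 128, 6, 1)
trainX = np.expand_dims(trainX, axis=-1)

# Inference input: (128, 6) -> (1, 128, 6, 1)
sample = sample.reshape(1, 128, 6, 1)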
I had a similar issue using the Arduino IDE. I was able to install the latest version of TFLite Micro and it resolved all issues: https://github.com/tensorflow/tflite-micro-arduino-examples
I am trying to save my ANN model using SavedModel format. The command that I used was:
model.save("my_model")
It is supposed to give me a folder named "my_model" that contains saved_model.pb, variables, and assets; instead it gives me an HDF5 file named my_model. I am using Keras v2.3.1 and TensorFlow v2.2.0.
Here is a bit of my code:
from keras import optimizers
from keras import backend
from keras.models import Sequential
from keras.layers import Dense
from keras.activations import relu,tanh,sigmoid
network_layout = []
for i in range(3):
    network_layout.append(8)
model = Sequential()
#Adding input layer and first hidden layer
model.add(Dense(network_layout[0],
                name="Input",
                input_dim=inputdim,
                kernel_initializer='he_normal',
                activation=activation))
#Adding the rest of hidden layer
for numneurons in network_layout[1:]:
    model.add(Dense(numneurons,
                    kernel_initializer='he_normal',
                    activation=activation))
#Adding the output layer
model.add(Dense(outputdim,
                name="Output",
                kernel_initializer="he_normal",
                activation="relu"))
#Compiling the model
model.compile(optimizer=opt,loss='mse',metrics=['mse','mae','mape'])
model.summary()
#Training the model
history = model.fit(x=Xtrain,y=ytrain,validation_data=(Xtest,ytest),batch_size=32,epochs=epochs)
model.save('my_model')
I have read the API documentation on the TensorFlow website and did what it said, using model.save("my_model") without any file extension, but I can't get it right.
Your help will be very appreciated. Thanks a bunch!
If you would like to use the TensorFlow SavedModel format, then use:
tms_model = tf.saved_model.save(model,"export/1")
This will create a folder export and a subfolder 1 inside that. Inside the 1 folder you can see the assets, variables and .pb file.
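For reference, the exported folder can later be loaded back with the SavedModel API (a minimal sketch on TensorFlow 2.x):

loaded = tf.saved_model.load("export/1")
print(list(loaded.signatures.keys()))  # typically includes 'serving_default'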
Hope this will help you out.
Make sure to change your imports to use tf.keras, like this:
from tensorflow.keras import optimizers
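For reference, a minimal sketch of the switched imports, assuming the rest of the script stays the same; with tf.keras on TensorFlow 2.x, model.save('my_model') should then write a SavedModel directory by default:

from tensorflow.keras import optimizers
from tensorflow.keras import backend
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.activations import relu, tanh, sigmoid

# ... build, compile and fit the model exactly as before ...

model.save('my_model')  # creates my_model/ containing saved_model.pb, variables/ and assets/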
I have issues with saving a sequential model produced by Keras to SavedModel format.
As stated in https://www.tensorflow.org/guide/keras/save_and_serialize#export_to_savedmodel,
to save a Keras model to a format that can be used by TensorFlow, I need to use model.save() and pass save_format='tf', but what I get is:
Traceback (most recent call last):
File "load_file2.py", line 14, in <module>
classifier.save('/tmp/keras-model.pb', save_format='tf')
My code example is:
import pandas as pd
import tensorflow as tf;
import keras;
from keras import Sequential
from keras.layers import Dense
import json;
import numpy as np;
classifier = Sequential()
classifier.add(Dense(4, activation='relu', kernel_initializer='random_normal', input_dim=4))
classifier.add(Dense(1, activation='sigmoid', kernel_initializer='random_normal'))
classifier.compile(optimizer ='adam',loss='binary_crossentropy', metrics = ['accuracy'])
classifier.save('/tmp/keras-model.pb', save_format='tf')
My python is 3.6.10.
My tensorflow is 1.14 and 2.0 (I tested on both, my result is the same).
My keras is 2.3.1.
What is wrong there or what should I change to make my model saved and then used by tensorflow?
Or, maybe, there is another way of saving models from Keras with TensorFlow2 as backend?
Thanks.
I ran your code. With TensorFlow 1.15 I got a TypeError saying save_format is not a known parameter. With TensorFlow 2 I got the suggestion to use tf.keras instead of native Keras, so I tried tf.keras instead of keras. This time the code ran with no error.
Also, I don't see a fit method before saving the model.
With TF2.0:
import pandas as pd
import tensorflow as tf;
##Change.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
import json;
import numpy as np;
classifier = Sequential()
classifier.add(Dense(4, activation='relu', kernel_initializer='random_normal', input_dim=4))
classifier.add(Dense(1, activation='sigmoid', kernel_initializer='random_normal'))
classifier.compile(optimizer ='adam',loss='binary_crossentropy', metrics = ['accuracy'])
classifier.save('/tmp/keras-model.pb', save_format='tf')
Result:
INFO:tensorflow:Assets written to: /tmp/keras-model.pb/assets
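Regarding the missing fit, here is a minimal sketch of training on placeholder data before saving (the data below is purely hypothetical):

import numpy as np
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=(100, 1))
classifier.fit(X, y, epochs=5, batch_size=16)
classifier.save('/tmp/keras-model.pb', save_format='tf')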
I am trying to follow this Keras tutorial, but I encounter the following error when I run the script with python3 test.py:
Traceback (most recent call last):
File "test.py", line 13, in <module>
layers.Dense(64, activation='sigmoid')
NameError: name 'layers' is not defined
My code is as follows:
import tensorflow as tf
from tensorflow import keras
model = keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(keras.layers.Dense(64, activation='relu'))
# Add another:
model.add(keras.layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(keras.layers.Dense(10, activation='softmax'))
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
Python version: 3.6.6
Operating System: MacOS High Sierra
I am also doing this all in the command line (tensorflow)$ environment.
What is wrong
First of all, Python is signalling that no object named layers exists in the scope of the script.
The actual issue is that the code was copied out of TensorFlow's Keras documentation, where the second part of the code only serves to illustrate the kinds of layers that can be instantiated inside a model.add(...) call.
So just drop all the code that begins with layers, as it is only there as an explanation.
import tensorflow as tf
from tensorflow import keras
model = keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(keras.layers.Dense(64, activation='relu'))
# Add another:
model.add(keras.layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(keras.layers.Dense(10, activation='softmax'))
Further reading
You should consider learning more about Keras from the Keras documentation.
For me importing layers with from tensorflow.keras import layers did the job.
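With that import the tutorial snippets can be used directly inside model.add(...); a minimal sketch:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='sigmoid'))
model.add(layers.Dense(10, activation='softmax'))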