What I want is to get the output of the encoder (the compressed data) and then run face_recognition on it.
After training this autoencoder I want to use the trained encoder on its own.
When I try to run the code I get this error:
ValueError: Error when checking target: expected max_pooling2d_3 to have shape (8, 8, 64) but got array with shape (64, 64, 3)
How can I solve this and extract only the trained encoder part of the autoencoder model?
What's happening is that your model's output is the encoder part (shape (8, 8, 64)), while you are providing the image to be encoded as the target (shape (64, 64, 3)). Using the input image as the target is correct for an autoencoder, but then the model has to end in the decoder. What you need to do is define
autoencoder = Model(input_img, decoded)
to train it, and then use a separate encoder-only model with the .predict method to obtain the reduced representation.
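A minimal sketch of that two-model setup, assuming input_img, encoded and decoded are the tensors from your model definition (x_train is a placeholder for your training images):

from tensorflow.keras.models import Model

# Full autoencoder: trained with the input images as targets
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(x_train, x_train, epochs=10, batch_size=32)

# Encoder-only model sharing the trained layers; its output is the compressed data
encoder = Model(input_img, encoded)
compressed = encoder.predict(x_train)  # shape (n_samples, 8, 8, 64)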
I'm trying to create a Keras model to train with a group of images, taken from a list of paths.
I know that the method tf.keras.utils.image_dataset_from_directory exists, but it doesn't meet my needs because I want to learn the correct way to handle images myself and because I need to do regression, not classification.
Every approach I tried failed one way or another, mostly because the type of the x_train variable is wrong.
The most promising function I used to load a single image is:
import tensorflow as tf

def encode_image(img_path):
    img = tf.keras.preprocessing.image.load_img(img_path)
    img_array = tf.keras.preprocessing.image.img_to_array(img)
    img_array = tf.expand_dims(img_array, 0)  # add a batch dimension
    return img_array
x_train = df['filename'].apply(lambda i: encode_image(i))
This doesn't work because, when I call the .fit() method this way:
history = model.fit(x_train, y_train, epochs=1)
I receive the following error:
Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray)
This makes me understand that I'm passing the data in the wrong format.
Can someone provide a basic example of creating an (x_train, y_train) pair to feed a model for training on a set of images?
Thank you very much
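For what it's worth, a minimal sketch of one common way to build such a pair: stack the individual image arrays into a single numpy array instead of keeping them in a pandas Series (the target_size value and the df column names here are placeholders; every image has to end up with the same shape):

import numpy as np
import tensorflow as tf

def load_image(img_path, target_size=(64, 64)):
    # Resize while loading so every sample has the same shape
    img = tf.keras.preprocessing.image.load_img(img_path, target_size=target_size)
    return tf.keras.preprocessing.image.img_to_array(img) / 255.0

x_train = np.stack([load_image(p) for p in df['filename']])  # shape (N, 64, 64, 3)
y_train = df['target'].to_numpy(dtype=np.float32)            # regression targets

history = model.fit(x_train, y_train, epochs=1)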
I am attempting to run inference on my .onnx model, converted from the Keras multi-label text classification model at https://keras.io/examples/nlp/multi_label_classification/. This is a text classification model that takes in text and predicts the relevant categories.
I am following the tutorial at https://github.com/onnx/keras-onnx/blob/master/tutorial/TensorFlow_Keras_MNIST.ipynb, but I am not sure what I am missing with regard to the format of 'feed'.
The keras model looks like this:
def make_model():
    shallow_mlp_model = keras.Sequential(
        [
            layers.Dense(512, activation="relu"),
            layers.Dense(256, activation="relu"),
            layers.Dense(lookup.vocabulary_size(), activation="sigmoid"),
        ]
    )
    return shallow_mlp_model
The feed is a dictionary mapping input names to data. In the original tutorial, the data for the input named 'dense_input' was created with:
data = [digit_image.astype(np.float32)]
The data needs to be a numpy array, as ONNX Runtime knows nothing about BatchDataset (based on the output in your question, that's the type returned by make_dataset).
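A minimal sketch of that with onnxruntime, assuming the exported file is "model.onnx" and that test_dataset is the BatchDataset returned by make_dataset (both names are placeholders):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name  # e.g. 'dense_input'

# ONNX Runtime only accepts plain numpy arrays, so pull a batch
# out of the tf.data pipeline and convert it before building the feed.
for features, _ in test_dataset.take(1):
    feed = {input_name: features.numpy().astype(np.float32)}
    preds = sess.run(None, feed)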
I want to fine-tune the TensorFlow Universal Sentence Encoder, but when I try to use my own data as input to a Keras layer wrapping the encoder, I get the error below.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
model = hub.load(module_url)
hub_layer = hub.KerasLayer(model, output_shape=[512], dtype=tf.string)
hub_layer(np.array('test sentence'))
InvalidArgumentError: input must be a vector, got shape: []
I tried different variants of input data (strings, numpy arrays), but it didn't work. Does anybody know what this model takes as input and how I can adapt my own data to it?
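The error message points at the shape: np.array('test sentence') is a 0-d (scalar) array, while the layer expects a vector, i.e. a 1-D batch of strings. A minimal sketch of passing a batch instead (trainable=True is only needed if you actually want to fine-tune the encoder weights):

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

hub_layer = hub.KerasLayer(
    "https://tfhub.dev/google/universal-sentence-encoder/4",
    input_shape=[], dtype=tf.string, trainable=True)

embeddings = hub_layer(np.array(['test sentence', 'another sentence']))
print(embeddings.shape)  # (2, 512)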
I have exported my PyTorch model to ONNX. Now, is there a way for me to obtain the input layer from that ONNX model?
Exporting PyTorch model to ONNX
import torch.onnx

# 'model' and 'df_X' are defined earlier in the training script
checkpoint = torch.load("./saved_pytorch_model.pth")
model.load_state_dict(checkpoint['state_dict'])
input = torch.tensor(df_X.values).float()
torch.onnx.export(model, input, "onnx_model.onnx")
Loading ONNX model
import onnx

onnx_model = onnx.load('onnx_model.onnx')
I want to be able to somehow obtain the input layer from onnx_model. Is this possible?
The ONNX model is a protobuf structure, as defined here: https://github.com/onnx/onnx/blob/master/onnx/onnx.in.proto. You can work with it using the standard protobuf methods generated for Python (see https://developers.google.com/protocol-buffers/docs/reference/python-generated). I'm not sure exactly what you want to extract, but you can iterate through the nodes that make up the graph (model.graph.node). The first node in the graph may or may not correspond to what you would consider the first layer (it depends on how the translation was done). You can also get the inputs of the model (model.graph.input).
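A minimal sketch of that iteration (the file name is a placeholder):

import onnx

model = onnx.load('onnx_model.onnx')

# Walk the graph nodes in topological order
for node in model.graph.node:
    print(node.op_type, node.name, list(node.input), list(node.output))

# Graph-level inputs (with some exporters this list also contains the initializers)
for inp in model.graph.input:
    print(inp.name)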
The onnx library also provides APIs to extract the names and shapes of all the inputs, as follows:

import onnx

model = onnx.load(onnx_model)  # here onnx_model is the path to the .onnx file
inputs = {}
for inp in model.graph.input:
    # dim is a repeated field; pull the integer dimensions out of its string form
    shape = str(inp.type.tensor_type.shape.dim)
    inputs[inp.name] = [int(s) for s in shape.split() if s.isdigit()]
I'm trying to optimize a saved graph for inference, so I can use it in Android.
My first attempt at using the optimize_for_inference script failed with
google.protobuf.message.DecodeError: Truncated message
So my question is whether the input/output nodes are wrong or whether the script cannot handle SavedModels (even though it has the same extension as a frozen graph, .pb).
Regarding the first: since with Estimators we provide an input_fn instead of the data itself, what should be considered the input? The first TF operation on it? Like:
x = x_dict['gestures']
# Data input is a 1-D vector of x_dim * y_dim features ("pixels")
# Reshape to match format [Height x Width x Channel]
# Tensor input become 4-D: [Batch Size, Height, Width, Channel]
x = tf.reshape(x, shape=[-1, x_dim, y_dim, 1], name='input')
(...)
pred_probs = tf.nn.softmax(logits, name='output')
BTW: if there is something different about loading a SavedModel in Android, I'd like to know that too.
Thank you in advance!
Update: There are good instructions at https://www.tensorflow.org/mobile/prepare_models which include an explanation of what to do with SavedModels. You can freeze your SavedModel by passing --input_saved_model_dir to freeze_graph.py.
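For example, something along these lines (the paths are placeholders, and the output node name matches the name='output' given to the softmax above):

python -m tensorflow.python.tools.freeze_graph \
    --input_saved_model_dir=/path/to/saved_model \
    --output_node_names=output \
    --output_graph=/tmp/frozen_graph.pb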
They're both protocol buffers (.pb), but unfortunately they're different messages (i.e. different file formats). Theoretically you could first extract a MetaGraph from the SavedModel, then "freeze" the MetaGraph's GraphDef (move variables into constants), then run this script on the frozen GraphDef. In that case you'd want your input_fn to be just placeholders.
You could also add a plus-one emoji on one of the "SavedModel support for Android" GitHub issues. Medium-term we'd like to standardize on SavedModel; sorry you've run into this!