Cannot convert PyTorch model to TorchScript version - python

I'm trying to convert a PyTorch model to TorchScript and then convert it to Core ML using coremltools.
While tracing the model, the code below keeps failing with the following error: Dictionary inputs to traced functions must have consistent type. Found Tensor and Tuple[Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor], Tuple[Tensor, Tensor, Tensor, Tensor]]
My code:
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("Seungjun/t5-small-finetuned-xsum")
model.eval()

# `inputs` holds the tokenized source text; `output` holds the tokenized target ids
decoder_input_ids = output
traced_model = torch.jit.trace(model, (inputs['input_ids'], inputs['attention_mask'], decoder_input_ids))
out = traced_model(inputs['input_ids'])
But all three arguments passed to torch.jit.trace have the same type, as shown below:
inputs['input_ids'].shape       # torch.Size([1, 219])
inputs['attention_mask'].shape  # torch.Size([1, 219])
output.shape                    # torch.Size([1, 23])
Does anyone know why this is happening or is there something wrong with my code?

One thing looks suspicious: the model was traced with three inputs (ids, mask, and decoder ids), but inference was then called with only one input.
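Also, the tuple structure in the error message looks like the past_key_values cache that Hugging Face seq2seq models return alongside the logits, which the tracer cannot flatten consistently. A minimal sketch of a workaround, assuming the same inputs and output tensors as above: wrap the model so that tracing only ever sees plain tensors.
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "Seungjun/t5-small-finetuned-xsum", torchscript=True
)
model.eval()

class TraceableSeq2Seq(torch.nn.Module):
    """Wrapper so the tracer sees only plain tensors, never dicts or nested tuples."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask, decoder_input_ids):
        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            use_cache=False,    # drop the past_key_values tuples
            return_dict=False,  # return a plain tuple instead of a dict
        )
        return outputs[0]       # logits tensor only

wrapper = TraceableSeq2Seq(model)
wrapper.eval()
traced_model = torch.jit.trace(
    wrapper, (inputs['input_ids'], inputs['attention_mask'], output)
)
# Call the traced model with all three inputs, matching how it was traced:
logits = traced_model(inputs['input_ids'], inputs['attention_mask'], output)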

Related

Keras Functional API embedding layer output to LSTM

When passing the output of my embedding layer to the LSTM layer, I'm running into a ValueError that I cannot figure out. My model is:
def lstm_mod(self, n_cells, batch_size):
    input = tf.keras.Input((self.n_seq, self.n_features))
    embedding = tf.keras.layers.Embedding(batch_size, self.n_seq, input_length=self.n_clusters)(input)
    x = tf.keras.layers.LSTM(n_cells)(embedding)
    out = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(input, out, name="LSTM")
    model.compile(loss='mse', optimizer='Adam')
    return model
The error is:
ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 128, 7, 128]
Given that the dimensions passed to the model input and the embedding layer are consistent through the arguments of the model I'm puzzled by this. Any guidance is appreciated.
Keras adds an additional dimension (None) when you feed data through your model, because it processes the data in batches.
In this line:
input = tf.keras.Input((self.n_seq, self.n_features))
you've defined a 2-dimensional per-sample input, and Keras adds a 3rd dimension (the batch), hence expected ndim=3.
However, the tensor reaching the LSTM is 4-dimensional: the Embedding layer appends an embedding dimension to its input, so a (None, n_seq, n_features) tensor becomes (None, n_seq, n_features, output_dim), hence found ndim=4.
To fix this, you need to either reshape the data so the embedding output is 3-D, or adjust the input shape.
Print out the values of self.n_seq and self.n_features and compare them against the shape [None, 128, 7, 128]; that should tell you which dimension is unexpected.
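For illustration, a minimal sketch of the usual Embedding -> LSTM pattern, assuming each sample is a 1-D sequence of integer ids (all names and sizes below are assumed, not taken from the question):
import tensorflow as tf

n_seq, n_cells = 7, 32           # assumed sequence length and LSTM units
vocab_size, embed_dim = 128, 16  # assumed vocabulary size and embedding width

inp = tf.keras.Input(shape=(n_seq,))                         # per-sample shape (7,); batch dim added by Keras
emb = tf.keras.layers.Embedding(vocab_size, embed_dim)(inp)  # -> (None, 7, 16), the 3-D tensor LSTM expects
x = tf.keras.layers.LSTM(n_cells)(emb)
out = tf.keras.layers.Dense(1)(x)

model = tf.keras.Model(inp, out, name="LSTM")
model.compile(loss="mse", optimizer="adam")
model.summary()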

Tensorflow input for a series of (1, 512) tensors

I have a pandas dataset with a column of tensors of shape TensorShape([1, 512]), which are the result of tf.hub BERT embeddings. I know I can use an embedding layer directly in TensorFlow, but is there a way to feed the data as-is into the Input layer?
I've tried a (1, 512)-shaped input layer, but I get the error "Failed to convert a NumPy array to a Tensor". I've also tried feeding it as an np.array instead of a series, but it's not working...
I would guess it's a shape problem, but I don't see how to solve it!
Edit: I used USE from tf.hub: https://tfhub.dev/google/universal-sentence-encoder-large/5
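One common cause of "Failed to convert a NumPy array to a Tensor" is that a pandas column of tensors becomes an object-dtype array. A minimal sketch of stacking it into a dense (n, 512) array first, assuming a hypothetical column df['embedding'] holding the (1, 512) tensors:
import numpy as np
import tensorflow as tf

# Stack the per-row (1, 512) tensors into one float32 array of shape (n_rows, 512)
X = np.vstack([t.numpy() for t in df['embedding']]).astype("float32")

inp = tf.keras.Input(shape=(512,))  # one 512-d sentence embedding per sample
out = tf.keras.layers.Dense(1, activation="sigmoid")(inp)
model = tf.keras.Model(inp, out)
model.compile(loss="binary_crossentropy", optimizer="adam")
# model.fit(X, y, ...)  # y is whatever target the task defines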

How to correctly create a multi input neural network

I'm building a NN that takes two car images as input and classifies whether they are the same make and model. My problem is in the fit method of Keras, because I get this error:
ValueError: Error when checking target: expected dense_3 to have shape (1,) but got array with shape (2,)
The network architecture is the following:
input1 = Input((150,200,3))
model1 = InceptionV3(include_top=False, weights='imagenet', input_tensor=input1)
model1.layers.pop()
input2 = Input((150,200,3))
model2 = InceptionV3(include_top=False, weights='imagenet', input_tensor=input2)
model2.layers.pop()
for layer in model2.layers:
    layer.name = "custom_layer_" + layer.name
concat = concatenate([model1.layers[-1].output, model2.layers[-1].output])
flat = Flatten()(concat)
dense1 = Dense(100, activation='relu')(flat)
do1 = Dropout(0.25)(dense1)
dense2 = Dense(50, activation='relu')(do1)
do2 = Dropout(0.25)(dense2)
dense3 = Dense(1, activation='softmax')(do2)
model = Model(inputs=[model1.input, model2.input], outputs=dense3)
My idea is that the error is due to the to_categorical method that I called on the array which stores, as 0 or 1, whether the two cars have the same make and model. Any suggestion?
Since you are doing binary classification with one-hot encoded labels, you should change this line:
dense3=Dense(1, activation='softmax')(do2)
To:
dense3=Dense(2, activation='softmax')(do2)
Softmax with a single neuron makes no sense: its output is always 1, so the model can never learn. Use two neurons for binary classification with softmax activation.
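Continuing from the question's code above, a sketch of the two equivalent output setups:
# Option A: two-neuron softmax with one-hot labels from to_categorical, shape (n, 2)
dense3 = Dense(2, activation='softmax')(do2)
# ... compile with loss='categorical_crossentropy'

# Option B: single sigmoid neuron with plain 0/1 labels, shape (n,)
# dense3 = Dense(1, activation='sigmoid')(do2)
# ... compile with loss='binary_crossentropy'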

Cannot predict single instance in Keras using a loop

I'm playing around with deep learning, and Keras has been my choice due to its simplicity.
I've built a simple multilayer perceptron model for binary classification and fitted it on input data (the same data I'm using for other ML models, which work fine).
The model summary was attached as a picture (not reproduced here). The first dense layer was defined as follows:
model.add(Dense(18, input_dim=len(X_encoded.columns), activation="relu", kernel_initializer="uniform"))
When I attempt to predict in a loop like so:
for vals in X_encoded.values:
    print("Survives?", model.predict([vals], batch_size=1))
I get the following error:
ValueError: Error when checking input: expected dense_90_input to have shape (35,) but got array with shape (1,)
These are my variable sizes:
print("Shape of vals:", vals.shape, "Number of Columns and First Layer Dimension:", len(X_encoded.columns))
Result:
Shape of vals: (35,) Number of Columns and First Layer Dimension: 35
As you can see, these match, so the input size is what the layer expects.
What is going on? When I pass the entire dataframe to predict it works correctly, but not when I pass a single value...
You need an array, not a list; a list is only used for multiple input tensors.
model.predict(np.array([vals]), batch_size=1)
But why not simply:
model.predict(X_encoded.values, batch_size=1)
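Alternatively, reshaping the single row also gives it the batch dimension predict expects (a sketch, assuming vals is the 1-D NumPy row from the loop above):
# (35,) -> (1, 35): a batch containing one sample
print("Survives?", model.predict(vals.reshape(1, -1), batch_size=1))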

Keras Convolutional NN Input Shape

I have a pandas dataframe (its printout was attached as an image, not reproduced here).
I want to feed this into a Keras convolutional neural network as input; however, its shape is (20, 4). To give it to Keras, the shape must be (n_samples, 4, 4, 1). How can I reshape it into this? Is it possible to set the major axis as the index?
It was created from a flattened pandas.Panel, if that makes an easier conversion possible.
Thanks,
Stephen
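A minimal sketch of the reshape, assuming df is the dataframe above and its 20 rows are 5 stacked 4x4 samples (i.e. the major axis runs along the rows):
import numpy as np

X = df.values.reshape(-1, 4, 4, 1).astype("float32")  # (20, 4) -> (5, 4, 4, 1)
print(X.shape)  # (5, 4, 4, 1), i.e. (n_samples, height, width, channels)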
