TensorFlow Hub shape not defined - Python

I am trying to load a pre-trained embedding model, but no matter what shape I give, I cannot get the input shape right. The model seems like a pretty popular choice, yet I cannot find any indication on its TensorFlow Hub page of what input shape to use. The input sequences are supposed to be variable-length, so I use an input shape of None (Keras automatically provides the batch size):
embedding_url = 'https://tfhub.dev/google/universal-sentence-encoder-large/5'
embedding_layer = hub.KerasLayer(embedding_url)
premises = tf.keras.Input(shape=(None,))
conclusion = tf.keras.Input(shape=(None,))
x1 = embedding_layer(premises)
x2 = embedding_layer(conclusion)
model = tf.keras.Model(inputs=[premises, conclusion], outputs=[x1, x2])
Here is the error I get:
ValueError: Python inputs incompatible with input_signature:
inputs: (
Tensor("input_5:0", shape=(None, None), dtype=float32))
input_signature: (
TensorSpec(shape=<unknown>, dtype=tf.string, name=None))

You can use the Functional API with hub.KerasLayer by setting the input shape to an empty tuple and the dtype to tf.string, since the encoder expects string tensors rather than token IDs.
Code:
import tensorflow_hub as hub
import tensorflow as tf
embedding_url = 'https://tfhub.dev/google/universal-sentence-encoder-large/5'
premises = tf.keras.layers.Input(shape=(), name="Input1", dtype=tf.string)
conclusion = tf.keras.layers.Input(shape=(), name="Input2", dtype=tf.string)
embedding_layer = hub.KerasLayer(embedding_url)
x1 = embedding_layer(premises)
x2 = embedding_layer(conclusion)
model = tf.keras.Model(inputs=[premises, conclusion], outputs=[x1, x2])
tf.keras.utils.plot_model(model, 'my_first_model.png', show_shapes=True)
The generated plot (my_first_model.png) shows the two string inputs feeding the shared embedding layer.
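You can then feed batches of raw strings straight into the model. A quick check (a minimal sketch; the example sentences are made up):
premise_batch = tf.constant(["The cat sat on the mat."])
conclusion_batch = tf.constant(["A cat is sitting down."])
x1_out, x2_out = model.predict([premise_batch, conclusion_batch])
print(x1_out.shape)  # (1, 512): USE v5 maps each string to a 512-dimensional embedding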


How to change the input shape of a model in Keras

I have a model that I load this way:
def YOLOv3_pretrained(n_classes=12, n_bbox=3):
    yolo3 = tf.keras.models.load_model("yolov3/yolo3.h5")
    yolo3.trainable = False
    l3 = yolo3.get_layer('leaky_re_lu_71').output
    l3_flat = tf.keras.layers.Flatten()(l3)
    out3 = tf.keras.layers.Dense(100*(4+1+n_classes))(l3_flat)
    out3 = Reshape((100, (4+1+n_classes)), input_shape=(12,))(out3)
    yolo3 = Model(inputs=yolo3.input, outputs=[out3])
    return yolo3
I want to add a Dense layer at the end of it, but since the network takes an input of shape (None, 416, 416, 3), it doesn't let me do it and returns an error:
ValueError: The last dimension of the inputs to a Dense layer should be defined. Found None. Full input shape received: (None, None)
I also tried this way with a Sequential (I want to use just the last output of yolo):
def YOLOv3_Dense(n_classes=12):
    yolo3 = tf.keras.models.load_model("yolov3/yolo3.h5")
    model = Sequential()
    model.add(yolo3)
    model.add(Flatten())
    model.add(Dense(100*(4+1+n_classes)))
    model.add(Reshape((100, (4+1+n_classes)), input_shape=(413,413,3)))
    return model
But it returns another error:
ValueError: All layers in a Sequential model should have a single output tensor. For multi-output layers, use the functional API.
Is there a way to add the final Dense layer?
The problem is that you are trying to flatten an output with multiple None dimensions, which will not work if you want to feed the result into another layer: Flatten cannot produce a defined output size from unknown spatial dimensions. You can use GlobalAveragePooling2D or GlobalMaxPooling2D instead, which collapse the unknown spatial dimensions and leave only the known channel dimension:
import tensorflow as tf
yolo3 = tf.keras.models.load_model("yolo3.h5")
yolo3.trainable = False
l3 = yolo3.get_layer('leaky_re_lu_71').output
l3_flat = tf.keras.layers.GlobalMaxPooling2D()(l3)
out3 = tf.keras.layers.Dense(100*(4+1+12))(l3_flat)
out3 = tf.keras.layers.Reshape((100, (4+1+12)), input_shape=(12,))(out3)
yolo3 = tf.keras.Model(inputs=yolo3.input, outputs=[out3])
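As a quick sanity check (assuming the .h5 file and the layer name match your saved model), the rebuilt model now exposes a fully defined output shape, which is exactly what the final Dense layer needed:
print(yolo3.output_shape)  # (None, 100, 17): 100*(4+1+12) = 1700 values reshaped to (100, 17)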

Incompatible shape sizes using PyGAD

I'm trying to follow the tutorial given here.
This tutorial trains a Keras model using a genetic algorithm, with the PyGAD package. I'm interested in the binary classification case. My input matrix is of dimension 10000x20. Hence, I've created the following model using Keras:
input_layer = tensorflow.keras.layers.Input(20)
dense_layer1 = tensorflow.keras.layers.Dense(500, activation="relu")(input_layer)
dense_layer2 = tensorflow.keras.layers.Dense(500, activation="relu")(dense_layer1)
output_layer = tensorflow.keras.layers.Dense(1, activation="softmax")(dense_layer2)
model = tensorflow.keras.Model(inputs=input_layer, outputs=output_layer)
keras_ga = pygad.kerasga.KerasGA(model=model,
                                 num_solutions=10)
However, when I go to run the algorithm, using ga_instance.run(), I get the error:
ValueError: Shapes (10000,) and (10000, 1) are incompatible
I can't figure out why I'm getting this error. I want my Keras model to have 2 hidden layers, each with 500 hidden nodes, and 1 output node.
I think the problem is related to how each output is represented in the array. If you have a single output per instance, the output array should be two-dimensional, with shape (num_samples, 1) rather than (num_samples,). Here is an example of preparing output data that works with PyGAD; its shape is (1000, 1):
numpy.random.uniform(0, 1, (1000, 1))
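So if your labels currently come as a flat vector of shape (10000,), a minimal fix (assuming a NumPy array, hypothetically named labels here) is to add the trailing axis explicitly:
data_outputs = labels.reshape(-1, 1)  # (10000,) -> (10000, 1)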
Here is code that works, though with a simpler network architecture, because with the fitness function you used the fitness sometimes comes out as NaN.
As I do not have the same data you used, I generated the input/output data randomly.
import tensorflow.keras
import pygad.kerasga
import numpy
import pygad

def fitness_func(solution, sol_idx):
    global data_inputs, data_outputs, keras_ga, model
    model_weights_matrix = pygad.kerasga.model_weights_as_matrix(model=model,
                                                                 weights_vector=solution)
    model.set_weights(weights=model_weights_matrix)
    predictions = model.predict(data_inputs)
    cce = tensorflow.keras.losses.CategoricalCrossentropy()
    solution_fitness = 1.0 / (cce(data_outputs, predictions).numpy() + 0.00000001)
    # print("solution_fitness", cce(data_outputs, predictions).numpy(), solution_fitness)
    return solution_fitness

def callback_generation(ga_instance):
    print("Generation = {generation}".format(generation=ga_instance.generations_completed))
    print("Fitness = {fitness}".format(fitness=ga_instance.best_solution(ga_instance.last_generation_fitness)[1]))

data_inputs = numpy.random.uniform(0, 1, (1000, 20))
data_outputs = numpy.random.uniform(0, 1, (1000, 1))

# create model
from tensorflow.keras.layers import Dense, Dropout
l1_rate = 1e-6
l2_rate = 1e-6
input_layer = tensorflow.keras.layers.InputLayer(20)
dense_layer1 = tensorflow.keras.layers.Dense(10, activation="relu",
                                             kernel_regularizer=tensorflow.keras.regularizers.l1_l2(l1=l1_rate, l2=l2_rate))
output_layer = tensorflow.keras.layers.Dense(1, activation="sigmoid")

model = tensorflow.keras.Sequential()
model.add(input_layer)
model.add(dense_layer1)
model.add(Dropout(0.2))
model.add(output_layer)

keras_ga = pygad.kerasga.KerasGA(model=model,
                                 num_solutions=10)

# Run pygad
num_generations = 30
num_parents_mating = 5
initial_population = keras_ga.population_weights

ga_instance = pygad.GA(num_generations=num_generations,
                       num_parents_mating=num_parents_mating,
                       initial_population=initial_population,
                       fitness_func=fitness_func,
                       on_generation=callback_generation)
ga_instance.run()
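A side note on the loss: since the model ends in a single sigmoid unit, binary cross-entropy is arguably the more natural choice inside the fitness function. A minimal variant of the two relevant lines:
bce = tensorflow.keras.losses.BinaryCrossentropy()
solution_fitness = 1.0 / (bce(data_outputs, predictions).numpy() + 0.00000001)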
Thanks for using PyGAD!

Keras functional API and TensorFlow Hub

I'm trying to use a Universal Sentence Encoder from TF Hub as a Keras layer in a functional way. I would like to use hub.KerasLayer with the Keras Functional API, but I'm not sure how to achieve that; so far I've only seen examples of hub.KerasLayer with the Sequential API.
import numpy as np
import tensorflow_hub as hub
import tensorflow as tf
from tensorflow.keras import layers
import tf_sentencepiece
use_url = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/1'
english_sentences = ["dog", "Puppies are nice.", "I enjoy taking long walks along the beach with my dog."]
english_sentences = np.array(english_sentences, dtype=object)[:, np.newaxis]
seq = layers.Input(shape=(None, ), name='sentence', dtype=tf.string)
module = hub.KerasLayer(hub.Module(use_url))(seq)
model = tf.keras.models.Model(inputs=[seq], outputs=[module])
model.summary()
x = model.predict(english_sentences)
print(x)
The code above runs into this error when passing the input layer to the embedding: TypeError: Can't convert 'inputs': Shape TensorShape([Dimension(None), Dimension(None)]) is incompatible with TensorShape([Dimension(None)])
Is it possible to use hub.KerasLayer with the Keras Functional API in TensorFlow 1.x? If so, how?
Try this:
sentence_encoding_layer = hub.KerasLayer("https://tfhub.dev/google/universal-sentence-encoder/4",
                                         trainable=False,
                                         input_shape=[],
                                         dtype=tf.string,
                                         name='U.S.E')
inputs = tf.keras.layers.Input(shape = (), dtype = 'string',name = 'input_layer')
x = sentence_encoding_layer(inputs)
x = tf.keras.layers.Dense(64,activation = 'relu')(x)
outputs = tf.keras.layers.Dense(1,activation = 'sigmoid',name = 'output_layer')(x)
model = tf.keras.Model(inputs,outputs,name = 'Transfer_learning_USE')
model.summary()
model.predict([sentence])  # where sentence stands in for your own list of input strings
If you use v3 of the same Universal Sentence Encoder with TF 1.15, you can do this by replacing these lines:
import tf_sentencepiece
use_url = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/1'
module = hub.KerasLayer(hub.Module(use_url))(seq)
with:
import tensorflow_text
use_url = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3'
module = hub.KerasLayer(use_url)(seq)
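Putting it together, a minimal sketch (assuming TF 1.15 with recent tensorflow_hub and tensorflow_text installed; note the scalar string input shape, as the next answer explains):
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # registers the custom ops the multilingual model needs

use_url = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3'
seq = tf.keras.layers.Input(shape=(), name='sentence', dtype=tf.string)  # one string per example
module = hub.KerasLayer(use_url)(seq)
model = tf.keras.models.Model(inputs=[seq], outputs=[module])

english_sentences = np.array(["dog", "Puppies are nice."], dtype=object)
print(model.predict(english_sentences).shape)  # (2, 512): one 512-dim embedding per sentence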
The first shape is what you are passing into the model: TensorShape([Dimension(None), Dimension(None)]). The second shape is what the layer expects: TensorShape([Dimension(None)]). So the error is telling you that it expects an input of shape ()...
Or
If you want to process batches of text, you could use a TimeDistributed layer, like so...
module = tf.keras.layers.TimeDistributed(hub.KerasLayer(hub.Module(use_url)))(seq)
However, you may be forced to use a specific size for the text length...

Convert a tensor from (128, 128, 3) to (129, 128, 3), where the (1, 128, 3) values padded onto that tensor arrive later

This is the part of my GAN code where the model is initialized; everything is working, and only the code relevant to the problem is shown here:
z = Input(shape=(100+384,))
img = self.generator(z)
print("before: ",img) #128x128x3 shape, dtype=tf.float32
temp = tf.get_variable("temp", [1, 128, 3],dtype=tf.float32)
img=tf.concat(img,temp)
print("after: ",img) #error ValueError: Incompatible type conversion requested to type 'int32' for variable of type 'float32_ref'
valid = self.discriminator(img)
self.combined = Model(z, valid)
I generate 128x128x3 images; what I want is to give 129x128x3 images to the discriminator, where a 1x128x3 text-embedding matrix is concatenated with the image during training. But I have to specify at the start the tensor shapes and input values that each model (the generator and the discriminator) will receive. The generator takes the 100-dim noise plus the 384-dim embedding and produces a 128x128x3 image, which is then augmented with the 1x128x3 embedding and fed to the discriminator. So my first question is whether this approach is correct. And if it is, how can I specify the shapes up front so that I do not get incompatible-shape errors, given that at the start I have to write these lines:
z = Input(shape=(100+384,))
img = self.generator(z) #128x128x3
valid = self.discriminator(img) #should be 129x128x3
self.combined = Model(z, valid)
But img is 128x128x3 and is only changed to 129x128x3 later, during training, by concatenating the embedding matrix. So how can I change img from (128, 128, 3) to (129, 128, 3) in the above code, either by padding or by appending another tensor? (Simply reshaping is of course not possible.) Any help will be much appreciated. Thanks.
The first argument of tf.concat should be the list of tensors, while the second is the axis along which to concatenate. You could concatenate the img and temp tensors as follows:
import tensorflow as tf
img = tf.ones(shape=(128, 128, 3))
temp = tf.get_variable("temp", [1, 128, 3], dtype=tf.float32)
img = tf.concat([img, temp], axis=0)
with tf.Session() as sess:
    print(sess.run(tf.shape(img)))
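Running this prints [129 128 3]: concatenating along axis=0 stacks the extra (1, 128, 3) row onto the image, producing exactly the (129, 128, 3) shape you are after.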
UPDATE: Here you have a minimal example showing why you get the error "AttributeError: 'Tensor' object has no attribute '_keras_history'". This error pops up in the following snippet:
from keras.layers import Input, Lambda, Dense
from keras.models import Model
import tensorflow as tf
img = Input(shape=(128, 128, 3)) # Shape=(batch_size, 128, 128, 3)
temp = Input(shape=(1, 128, 3)) # Shape=(batch_size, 1, 128, 3)
concat = tf.concat([img, temp], axis=1)
print(concat.get_shape())
dense = Dense(1)(concat)
model = Model(inputs=[img, temp], outputs=dense)
This happens because the tensor concat is not a Keras tensor, and therefore some of the typical Keras tensor attributes (such as _keras_history) are missing. To overcome this problem, you need to encapsulate all TensorFlow tensors in a Keras Lambda layer:
from keras.layers import Input, Lambda, Dense
from keras.models import Model
import tensorflow as tf
img = Input(shape=(128, 128, 3)) # Shape=(batch_size, 128, 128, 3)
temp = Input(shape=(1, 128, 3)) # Shape=(batch_size, 1, 128, 3)
concat = Lambda(lambda x: tf.concat([x[0], x[1]], axis=1))([img, temp])
print(concat.get_shape())
dense = Dense(1)(concat)
model = Model(inputs=[img, temp], outputs=dense)
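Here concat.get_shape() yields (?, 129, 128, 3): the Lambda wrapper performs the same tf.concat along axis=1, but the result is now a proper Keras tensor carrying the _keras_history metadata that Model needs.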

Keras + TensorFlow: prepend a preprocessing layer to a trained network

I'm trying to prepend a preprocessing layer to a pre-trained network. This is the code I'm working on:
orig_model = applications.vgg16.VGG16(include_top=True, weights=None, input_tensor=None, input_shape=None, pooling=None, classes=1000)
orig_model.load_weights(weights_path)
preproc_layer = Lambda(preprocess, input_shape=(3,224,224), output_shape=(3,224,224))
model = Sequential()
model.add(preproc_layer)
all_layers = orig_model.layers
for l in all_layers:
    config = l.get_config()
    copy = layers.deserialize({'class_name': l.__class__.__name__, 'config': config})
    weights = l.get_weights()
    copy.set_weights(weights)
    model.add(copy)
Where preprocess is:
def preprocess(x):
    x = x[::-1, ...]
    x = K.bias_add(x, vgg_mean, data_format='channels_first')
    return x
It works for the first InputLayer but throws me an error at copy.set_weights(weights) for the second (Conv2D) layer:
You called `set_weights(weights)` on layer "block1_conv1" with a weight list of length 2, but the layer was expecting 0 weights.
I found something similar on Google: https://github.com/keras-team/keras/issues/4812. Here they suggest setting trainable = True for the layer, but this doesn't work in my case.
Do you have any suggestions? Keras version is 2.1.5, TensorFlow 1.6.0.
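One detail worth checking, offered as a hedged sketch rather than a verified fix: a layer freshly created by layers.deserialize has no weight variables until it is built, which matches the "expecting 0 weights" message. Adding the copy to the model first (which builds it against the previous layer's output shape) and only then copying the weights avoids the problem:
model = Sequential()
model.add(preproc_layer)
for l in orig_model.layers:
    config = l.get_config()
    copy = layers.deserialize({'class_name': l.__class__.__name__, 'config': config})
    model.add(copy)                    # adding builds the layer, so its weight variables now exist
    copy.set_weights(l.get_weights())  # shapes now match the built layer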
