How can I use multiple outputs for Keras DL training? - python

I want to make a network that uses multiple outputs.
For example, I want to feed it an input list with the shape:
[ 8, 128, 128, 3]
Here, 8 is the number of images in one input set, and 128 x 128 x 3 is the shape of a color image.
And I want my output to be:
[ 8, 128, 128]
Here, 8 is the number of images in one output set, and 128 x 128 is the shape of a gray image.
So I wrote my code as follows:
f_list = []
for i in range(v):
    img_in = M.Input((h, w, 3))  # Input
    feature = spm.encoder(img_in)
    f_list.append(feature)

fuse_ave, fuse_max, fuse_min = spm.fusion(f_list)

decode_list = []
for i in range(v):
    ffuse = spm.decoder(f_list[i], fuse_ave, fuse_max)
    decode_list.append(ffuse)

dl_con = L.concatenate(decode_list, 0)
print(dl_con.shape)

epsnet = M.Model(inputs=img_in, outputs=dl_con)
epsnet.compile(optimizer=O.Adam(lr=0.0001, decay=0.000001),
               loss='mean_squared_error', metrics=['accuracy'])
epsnet.fit(iml, np.array(gml), batch_size=5, epochs=50,
           verbose=1, shuffle=True, validation_split=0.1)
Here, the function decoder is as follows:
def decoder(input, fuse_a, fuse_M):  # input: encoded feature
    infu = L.Concatenate(-1)([input, fuse_a, fuse_M])
    f = 128
    x = L.Conv2DTranspose(filters=f, kernel_size=(5, 5), strides=2, padding='same')(infu)
    ...  # further Conv2DTranspose blocks elided in the original, ending in x__
    x__ = L.BatchNormalization()(x__)
    x__ = L.Activation('relu')(x__)

    def sq(x):
        x_sq = B.squeeze(x, -1)
        return x_sq

    xq = L.Lambda(sq)(x__)
    return xq
Here I get the following error message:
ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer` (thus holding past layer metadata). Found: Tensor("concatenate_7/concat:0", shape=(?, ?, ?), dtype=float32)
I have tried several ways, but I still get the same error message. Please give me a breakthrough, and thank you very much.
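For reference, a minimal sketch of how the wiring could look, assuming spm.encoder, spm.fusion, and spm.decoder behave as described in the question (this is an illustration, not a verified fix): keep every Input tensor and pass the full list to Model(), and build the stacked output through a Keras layer so it retains layer metadata.

import tensorflow as tf
from tensorflow.keras import layers as L, models as M

v, h, w = 8, 128, 128

# Collect *all* input tensors; passing only the last img_in to Model()
# leaves the other branches disconnected.
inputs, features = [], []
for i in range(v):
    img_in = L.Input((h, w, 3))
    inputs.append(img_in)
    features.append(spm.encoder(img_in))   # spm.* as defined in the question

fuse_ave, fuse_max, fuse_min = spm.fusion(features)
decoded = [spm.decoder(features[i], fuse_ave, fuse_max) for i in range(v)]

# Stack the v gray images into a (batch, v, h, w) output via a Lambda
# layer, so Model() receives a tensor with Keras layer metadata.
out = L.Lambda(lambda ts: tf.stack(ts, axis=1))(decoded)

epsnet = M.Model(inputs=inputs, outputs=out)

The targets would then be shaped (batch, v, h, w), matching the v gray images per sample.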

Related

Dimensions of tensors don't fit

I am replicating a PyTorch model in Keras and have trouble seeing where the extra dimension comes from.
This is how my code looks so far:
class Attention(tf.keras.Model):
    def __init__(self, input_shape):
        super(Attention, self).__init__()
        in_features = input_shape[-1]
        small_in_features = max(math.floor(in_features / 10), 1)
        self.d_k = small_in_features

        query = tf.keras.models.Sequential()
        query.add(tf.keras.layers.Dense(in_features))
        query.add(tf.keras.layers.Dense(small_in_features, activation="tanh"))
        self.query = query

        self.key = tf.keras.layers.Dense(small_in_features)

    def call(self, inp):
        # inp.shape should be (B, N, C)
        q = self.query(inp)  # (B, N, C/10)
        k = self.key(inp)    # (B, N, C/10)
        k = tf.transpose(k)
        print(q)
        print(k)
        x = tf.linalg.matmul(q, k) / math.sqrt(self.d_k)  # (B, N, N)
        x = tf.nn.softmax(x)  # over rows
        x = tf.transpose(x)
        x = tf.linalg.matmul(x, inp)  # (B, N, C)
        return x
But if I want to add it to my Sequential model, I get this error:
ValueError: Dimensions must be equal, but are 1 and 256 for '{{node attention_19/MatMul}} = BatchMatMulV2[T=DT_FLOAT, adj_x=false, adj_y=false](attention_19/sequential_36/Identity, attention_19/transpose)' with input shapes: [?,256,1], [1,256,?].
I have now printed my q and k, and they print as follows:
Tensor("attention_19/sequential_36/Identity:0", shape=(None, 256, 1), dtype=float32)
Tensor("attention_19/transpose:0", shape=(1, 256, None), dtype=float32)
So they are 3-dimensional, with one dimension left unfilled. I don't quite understand why this happens.
How can I "remove" the extra dimension or get this custom layer to work?
Note: the original code seems to use 3-dimensional input, but I want 2-dimensional input.
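No answer is posted here, but one plausible fix (an assumption on my part, not from the thread): tf.transpose without a perm argument reverses all axes, including the batch axis, which is what turns (B, N, C/10) into (C/10, N, B). Swapping only the last two axes keeps the batched matmul consistent:

# Inside call(): transpose only the last two axes so the batch
# dimension (B) stays in front.
k = tf.transpose(k, perm=[0, 2, 1])               # (B, C/10, N)
x = tf.linalg.matmul(q, k) / math.sqrt(self.d_k)  # (B, N, N)
# The later tf.transpose(x) would need the same perm treatment.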

How to set the appropriate input shape of a model in Keras

I'm a newbie to Keras. I'm playing around with Keras to get some intuition and got stuck here.
input_image = tf.keras.Input(shape=(16, 16, 3))
x = tf.keras.layers.Conv2D(32, (3, 3), padding='same')(input_image)
model = tf.keras.Model(input_image, x)
model.compile(optimizer='Adam', loss='MSE')
inputs = np.random.normal(size=(16, 16, 3))
outputs = np.random.normal(size=(16, 16, 32))
model.fit(x=inputs, y=outputs)
I just wanted to see the output shape, which model.summary() reports as (None, 16, 16, 32). But now I have two questions: one about the output shape, and another about why my code doesn't work. I hope someone can tell me what I'm missing. Thanks~
inputs = np.random.normal(size=(1, 16, 16, 3))    # <---- here
outputs = np.random.normal(size=(1, 16, 16, 32))  # <--- here
They should be 4D, not 3D. You need to include the batch dimension as well.
(batch_size, w, h, c) <---- 4D
You are missing the batch_size.
In tf.keras.layers.Conv2D(32, (3,3), padding='same')(input_image) you have 32 filters, so the channel depth of the output will be 32. And since you used padding='same', the output will have the same spatial dimensions as the input; it only differs in depth.
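Putting the corrected lines back into the original script gives a runnable version of the fix described above:

import numpy as np
import tensorflow as tf

input_image = tf.keras.Input(shape=(16, 16, 3))
x = tf.keras.layers.Conv2D(32, (3, 3), padding='same')(input_image)
model = tf.keras.Model(input_image, x)
model.compile(optimizer='Adam', loss='MSE')

# 4D: (batch_size, h, w, c) -- here a batch of a single sample.
inputs = np.random.normal(size=(1, 16, 16, 3))
outputs = np.random.normal(size=(1, 16, 16, 32))
model.fit(x=inputs, y=outputs)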

Multiple Input Keras Model

I want to train a Keras model where the input is a tensor of shape (20, 300).
But the problem is that I also need to feed the model a fixed list of vectors that should be used at each training step.
The list of vectors is fixed for all training examples. So here's what I've tried.
def create_model(num_filters=64, embedding_dim=300, seq_len=20):
    # input1 shape: (?, 20, 300)
    input1 = Input(shape=(seq_len, embedding_dim,), dtype='float32')  # Input1: taken from the model input
    # input2 shape: (5, 20, 300)
    input2 = get_input2()  # Input2: taken from outside the model

    # CNN encoding of input1
    convs = []
    filter_sizes = [1, 2, 3]
    for fsz in filter_sizes:
        x = Conv1D(num_filters, fsz, activation='relu', padding='same')(input1)
        x = MaxPooling1D()(x)
        convs.append(x)
    output1 = Concatenate(axis=-1)(convs)
    output1 = Flatten()(output1)

    # CNN encoding of input2
    convs1 = []
    filter_sizes = [1, 2, 3]
    for fsz in filter_sizes:
        x1 = Conv1D(num_filters, fsz, activation='relu', padding='same')(input2)
        x1 = MaxPooling1D()(x1)
        convs1.append(x1)
    output2 = Concatenate(axis=-1)(convs1)
    output2 = Flatten()(output2)
However, this implementation throws a ValueError:
"ValueError: Layer conv1d_60 was called with an input that isn't a symbolic tensor. Received type: ."
How can this be done in Keras?
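No answer is posted here, but one common way to handle a constant side input (an assumption, not a confirmed fix for this code) is to wrap the fixed array in an Input(tensor=...), so that downstream Conv1D layers receive a symbolic Keras tensor rather than a raw ndarray:

from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input

# get_input2() returns the fixed (5, 20, 300) ndarray, as in the question.
fixed_vectors = get_input2()

# A tensor-backed Input is symbolic but needs no data at fit() time;
# it should still be listed in Model(inputs=[input1, input2], ...).
input2 = Input(tensor=K.constant(fixed_vectors))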

Dimension out of range when applying L2 normalization in PyTorch

I'm getting a runtime error:
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
and can't figure out how to fix it.
The error appears to refer to the line:
i_enc = F.normalize(input =i_batch, p=2, dim=1, eps=1e-12) # (batch, K, feat_dim)
I'm trying to encode image features (batch x 36 x 2048) by applying an L2 norm. Below is the full code for the section.
def forward(self, q_batch, i_batch):
    # batch size = 512
    # q -> 512 (batch) x 14 (length)
    # i -> 512 (batch) x 36 (K) x 2048 (f_dim)

    # one-hot -> glove
    emb = self.embed(q_batch)
    output, hn = self.gru(emb.permute(1, 0, 2))
    q_enc = hn.view(-1, self.h_dim)

    # image encoding with l2 norm
    i_enc = F.normalize(input=i_batch, p=2, dim=1, eps=1e-12)  # (batch, K, feat_dim)

    q_enc_copy = q_enc.repeat(1, self.K).view(-1, self.K, self.h_dim)
    q_i_concat = torch.cat((i_enc, q_enc_copy), -1)
    q_i_concat = self.non_linear(q_i_concat, self.td_W, self.td_W2)  # 512 x 36 x 512
    i_attention = self.att_w(q_i_concat)  # 512 x 36 x 1
    i_attention = F.softmax(i_attention.squeeze(), 1)

    # weighted sum
    i_enc = torch.bmm(i_attention.unsqueeze(1), i_enc).squeeze()  # (batch, feat_dim)

    # element-wise multiplication
    q = self.non_linear(q_enc, self.q_W, self.q_W2)
    i = self.non_linear(i_enc, self.i_W, self.i_W2)
    h = torch.mul(q, i)  # (batch, hid_dim)

    # output classifier
    # BCE with logits loss
    score = self.c_Wo(self.non_linear(h, self.c_W, self.c_W2))
    return score
I would appreciate any help.
Thanks
I would suggest checking the shape of i_batch (e.g. print(i_batch.shape)), as I suspect i_batch has only one dimension (e.g. shape [N]).
This would explain why PyTorch complains that you can only normalize over dimension 0, while you are asking for the operation to be done over dimension 1 (cf. dim=1).
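A minimal reproduction of the suspected mismatch (hypothetical shapes, just to illustrate the diagnosis):

import torch
import torch.nn.functional as F

v = torch.rand(512)                  # 1-D: only dims 0 and -1 exist
# F.normalize(v, p=2, dim=1)         # -> RuntimeError: Dimension out of range

i_batch = torch.rand(512, 36, 2048)  # the expected (batch, K, feat_dim)
i_enc = F.normalize(i_batch, p=2, dim=1, eps=1e-12)  # valid: normalizes over K
print(i_enc.shape)                   # torch.Size([512, 36, 2048])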

Set input shape of model in Keras

I have seen other similar questions on TensorFlow, but they didn't match my problem.
Model:
# picture size
img_row = 128
img_col = 647
input_shape = (img_row, img_col)
img = Input(input_shape)
...
(The resulting model summary screenshot from the original post is not included.)
Data:
There are 1000 samples, each with shape (128, 647), stored as a column of the DataFrame df.
(The size output and data preview screenshots are not included.)
Problem
The problem is: when I pass the data to the model, a size error occurs.
train_history = model.fit(x=df["data"],
                          y=df["genre_idx"],
                          validation_split=0.1,
                          epochs=30,
                          batch_size=200,
                          verbose=2)
And the error message is as follows:
Error when checking input: expected input_79 to have 3 dimensions, but got array with shape (1000, 1)
It might be a basic question, but I can't figure out what the main problem is in this situation or how to solve it.
You need to pass it as a single ndarray, which you can extract using the .values property of the DataFrame. The expected input shape is (1000, 128, 647).
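For instance, a brief sketch (assuming each cell of df["data"] holds a (128, 647) array): df["data"].values alone yields an object array of shape (1000,), so stacking it into one ndarray produces the shape Keras expects.

import numpy as np

x = np.stack(df["data"].values)  # (1000, 128, 647)
y = df["genre_idx"].values       # (1000,)

train_history = model.fit(x=x, y=y, validation_split=0.1,
                          epochs=30, batch_size=200, verbose=2)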
