I have two inputs into a Lambda layer, one of shape (2,3,) and the other of shape (3,). The Lambda layer should return an output of size 2; however, when the multiply layer is executed, the following error occurs:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 2 and 3 for 'multiply_1/mul' (op: 'Mul') with input shapes: [?,2], [?,3].
The relevant code is below; any help would be much appreciated, thanks:
import numpy as np
from keras.models import Model
from keras import backend as K
from keras.engine.topology import Layer
from keras.layers import Dense,Input,concatenate,Lambda,multiply,add
import tensorflow as tf
import time
def weights_Fx(x):
    j = x[0][:, 0]
    k = x[1][0]
    y = j - k
    return y

def sum_layer(x):
    x = tf.reduce_sum(x)
    return x
type1_2 = Dense(units=1, activation='relu', name="one")
type1_3 = Dense(units=1, activation='relu', name="two")
in1 = Input(shape=(1,))
in2 = Input(shape=(1,))
n1 = type1_2(in1)
n2 = type1_3(in2)
model = concatenate([n1, n2], axis=-1, name='merge_predictions')
coords_in = Input(shape=(2, 3,))
coords_target = Input(shape=(3,))
model2 = Lambda(weights_Fx, output_shape=(2,), name='weightsFx')([coords_in, coords_target])
model = multiply([model, model2])
model = Lambda(sum_layer)(model)
model = Model(inputs=[in1, in2, coords_in, coords_target], outputs=[model])
The issue was with how I was indexing the array. It is important to remember that although the data is of shape (2,3), Keras will create a tensor of shape (None,2,3); therefore, to perform the operation as desired, the following is needed:
y = x[0][:,:,0]-x[1][:,0]
Furthermore, in sum_layer, the following is required to prevent the rank (number of dimensions) of the tensor from being reduced by 1:
y = K.sum(x,axis=1,keepdims=True)
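Putting both fixes together, here is a minimal sketch of the corrected functions, assuming the same (None,2,3) and (None,3) inputs as above (the 0:1 slice on the target keeps its axis so the subtraction broadcasts cleanly against (None, 2)):

def weights_Fx(x):
    # x[0]: (None, 2, 3), x[1]: (None, 3)
    return x[0][:, :, 0] - x[1][:, 0:1]   # (None, 2) - (None, 1) -> (None, 2)

def sum_layer(x):
    # keepdims=True preserves the rank: (None, 2) -> (None, 1)
    return K.sum(x, axis=1, keepdims=True)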
Your Lambda layer is not returning an output of shape 2; it is returning an output of shape 3.
model2 has shape (None, 3), not (None, 2), which is what causes the error in the multiply of model and model2.
Take a look at your coords_in and coords_target shapes.
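A quick way to see where the shape 3 comes from (a sketch reusing the question's tensors, with the resulting shapes as comments):

j = coords_in[:, 0]      # (None, 3): this slices the rows axis, not the coordinates
k = coords_target[0]     # (3,): this slices the batch axis
# j - k therefore has shape (None, 3), which is what multiply receives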
I'm trying to follow the tutorial given here.
This tutorial trains a Keras model using a genetic algorithm, with the PyGAD package. I'm interested in the binary classification case. My input matrix is of dimension 10000x20. Hence, I've created the following model using Keras:
input_layer = tensorflow.keras.layers.Input(20)
dense_layer1 = tensorflow.keras.layers.Dense(500, activation="relu")(input_layer)
dense_layer2 = tensorflow.keras.layers.Dense(500, activation="relu")(dense_layer1)
output_layer = tensorflow.keras.layers.Dense(1, activation="softmax")(dense_layer2)
model = tensorflow.keras.Model(inputs=input_layer, outputs=output_layer)
keras_ga = pygad.kerasga.KerasGA(model=model,
                                 num_solutions=10)
However, when I go to run the algorithm, using ga_instance.run(), I get the error:
ValueError: Shapes (10000,) and (10000, 1) are incompatible
I can't figure out why I'm getting this error. I want my Keras model to have 2 hidden layers, each with 500 hidden nodes, and 1 output node.
I think the problem is related to how each output is represented in the array. If you have a single output per instance, then this is an example of preparing output data that works with PyGAD; its shape is (1000, 1):
numpy.random.uniform(0, 1, (1000, 1))
Here is code that works, but with a simpler network architecture, because with the fitness function you used the fitness sometimes comes out as NaN.
As I do not have the same data you used, I generated the input/output data randomly.
import tensorflow.keras
import pygad.kerasga
import numpy
import pygad
def fitness_func(solution, sol_idx):
    global data_inputs, data_outputs, keras_ga, model

    model_weights_matrix = pygad.kerasga.model_weights_as_matrix(model=model,
                                                                 weights_vector=solution)
    model.set_weights(weights=model_weights_matrix)

    predictions = model.predict(data_inputs)
    cce = tensorflow.keras.losses.CategoricalCrossentropy()
    solution_fitness = 1.0 / (cce(data_outputs, predictions).numpy() + 0.00000001)
    # print("solution_fitness", cce(data_outputs, predictions).numpy(), solution_fitness)
    return solution_fitness

def callback_generation(ga_instance):
    print("Generation = {generation}".format(generation=ga_instance.generations_completed))
    print("Fitness = {fitness}".format(fitness=ga_instance.best_solution(ga_instance.last_generation_fitness)[1]))
data_inputs = numpy.random.uniform(0, 1, (1000, 20))
data_outputs = numpy.random.uniform(0, 1, (1000, 1))
# create model
from tensorflow.keras.layers import Dense, Dropout
l1_rate=1e-6
l2_rate = 1e-6
input_layer = tensorflow.keras.layers.InputLayer(20)
dense_layer1 = tensorflow.keras.layers.Dense(10, activation="relu",kernel_regularizer=tensorflow.keras.regularizers.l1_l2(l1=l1_rate, l2=l2_rate))
output_layer = tensorflow.keras.layers.Dense(1, activation="sigmoid")
model = tensorflow.keras.Sequential()
model.add(input_layer)
model.add(dense_layer1)
model.add(Dropout(0.2))
model.add(output_layer)
keras_ga = pygad.kerasga.KerasGA(model=model,
                                 num_solutions=10)
# Run pygad
num_generations = 30
num_parents_mating = 5
initial_population = keras_ga.population_weights
ga_instance = pygad.GA(num_generations=num_generations,
                       num_parents_mating=num_parents_mating,
                       initial_population=initial_population,
                       fitness_func=fitness_func,
                       on_generation=callback_generation)
ga_instance.run()
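Side note: since the question describes binary classification with a single sigmoid output, BinaryCrossentropy is arguably a better fit than CategoricalCrossentropy in the fitness function. A minimal sketch of that variant (fitness_func_bce is a hypothetical name, reusing the same random stand-in data as above):

def fitness_func_bce(solution, sol_idx):
    global data_inputs, data_outputs, model
    model_weights_matrix = pygad.kerasga.model_weights_as_matrix(model=model,
                                                                 weights_vector=solution)
    model.set_weights(weights=model_weights_matrix)
    predictions = model.predict(data_inputs)
    bce = tensorflow.keras.losses.BinaryCrossentropy()
    # higher fitness for lower loss; the small constant avoids division by zero
    return 1.0 / (bce(data_outputs, predictions).numpy() + 0.00000001)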
Thanks for using PyGAD!
I want to train my regression model with 3D arrays; how can I do it in Python? Can you please guide me? Actually, I want to predict a regression value from an input of multiple 3D arrays. Is it possible to predict just a single float number from multiple 3D arrays? Thanks.
train.model((x1, x2, x3, ..., xN), y_value)
where x1, x2, ..., xN are 3D arrays and y is the output, just a single float number.
The key point is to reshape your 3D samples to flat 1D samples. The following example code uses tf.reshape to reshape the input before feeding it to a regular dense network that regresses to a single-value output via tf.identity (no activation).
%tensorflow_version 2.x
%reset -f
import tensorflow as tf
from tensorflow.keras import *
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras.callbacks import *
class regression_model(Model):
    def __init__(self):
        super(regression_model, self).__init__()
        self.dense1 = Dense(units=300, activation=tf.keras.activations.relu)
        self.dense2 = Dense(units=200, activation=tf.keras.activations.relu)
        self.dense3 = Dense(units=1, activation=tf.identity)

    @tf.function
    def call(self, x):
        h1 = self.dense1(x)
        h2 = self.dense2(h1)
        u = self.dense3(h2)  # Output
        return u
if __name__ == "__main__":
    inp = [[[1],[2],[3],[4]], [[3],[3],[3],[3]]]  # 2 samples of whatever shape
    exp = [[10], [12]]                            # Regress to sums, for example
    inp = tf.constant(inp, dtype=tf.float32)
    exp = tf.constant(exp, dtype=tf.float32)

    NUM_SAMPLES = 2
    NUM_VALUES_IN_1SAMPLE = 4
    inp = tf.reshape(inp, (NUM_SAMPLES, NUM_VALUES_IN_1SAMPLE))  # flatten each sample to 1D

    model = regression_model()
    model.compile(loss=tf.losses.MeanSquaredError(),
                  optimizer=tf.optimizers.Adam(1e-3))
    model.fit(x=inp, y=exp, batch_size=len(inp), epochs=100)

    print(f"\nPrediction from {inp}, will be:")
    print(model.predict(x=inp, batch_size=len(inp), steps=1))
# EOF
I faced a problem when I tried to fit the model. Here are my model-building code and the shapes of my train and test data:
import keras
def buildModel(dataLength, labelLength):
    price = Input(shape=(dataLength, 51), name='price')
    # price = Input(shape=(dataLength, 1), name='price')
    sentiment = Input(shape=(dataLength, 51), name='sentiment')

    priceLayers = LSTM(64, return_sequences=False)(price)
    sentimentLayers = LSTM(64, return_sequences=False)(sentiment)

    output = keras.layers.concatenate(
        [priceLayers, sentimentLayers]
    )
    output = Dense(labelLength, activation='linear', name='output')(output)

    model = Model(
        inputs=[price, sentiment],
        outputs=[output]
    )
    model.compile(optimizer='rmsprop', loss='mse')
    return model
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
lstm = buildModel(22234,1)
lstm.fit([trainX, trainS], [trainY],
         validation_data=([testX, testS], [testY]),
         epochs=10)
trainX.shape = (1, 22234, 51)
testX.shape = (1, 9500, 51)
trainY.shape = (22234,)
testY.shape = (9500,)
trainS.shape = (1, 22234, 51)
testS.shape = (1, 9500, 51)
The error shows:
ValueError Traceback (most recent call last)
<ipython-input-40-4d75b702c980> in <module>()
5 lstm.fit([trainX,trainS],[trainY],validation_data=(
6 [testX,testS],
----> 7 [testY]),epochs = 10
8 )
ValueError: Input arrays should have the same number of samples as target arrays. Found 1 input samples and 22234 target samples.
But I don't understand why it says my input and target samples have different sizes. Is it because X and S have 3 dimensions but Y has only 1? My thought was: the input has to be 3D, so I reshaped X and S; however, Y is the label, and it does not need to be reshaped.
The batch dimension for inputs and targets needs to be the same, and it needs to be dimension 0.
trainX.shape = (1, 22234, 51)
needs to be: (22234, 51, 1) when the outputs are shaped as (22234,).
Do not try to use 1 as the time_steps dimension, since an LSTM with a single time step doesn't make sense.
The input dimensions should be (batch_size, time_steps, n_features). There is no need to specify batch_size when building the model, so the shapes you declare should be (time_steps, n_features). For a sequence of N measurements, N is time_steps and n_features is the number of values measured at each step.
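Concretely, a minimal sketch of that reshape, using random stand-ins for the real arrays:

import numpy as np

trainX = np.random.rand(1, 22234, 51)   # shape from the question
trainY = np.random.rand(22234)

# drop the dummy leading axis, then add a trailing feature axis:
# (1, 22234, 51) -> (22234, 51) -> (22234, 51, 1)
trainX = trainX.squeeze(axis=0)[..., np.newaxis]
print(trainX.shape)   # (22234, 51, 1)
# the model's Input would then be Input(shape=(51, 1)): 51 time steps, 1 feature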
In follow-up to [this question], a few notes on what we're looking to accomplish:
We have two inputs X and Y of different sample sizes n and m, and a boolean vector z of size n*m. Each element of z denotes whether two items in X and Y are some sort of match.
We want to use embedding layers (perhaps the same, perhaps different) on X and Y before pairwise concatenating the output of these embedding layers to derive input to an output layer.
Here's one example of what a simple network could look like:
The linked answer and a few other resources helped me get as far as this example, which builds a model but throws this error at fit time: ValueError: All input arrays (x) should have the same number of samples. Got array shapes: [(10, 2), (12, 2)].
Recent versions of Keras allow skipping the dimension check; can this check be skipped in TensorFlow? I'd also be happy to use Keras, but I'm not sure how to perform the reshape and concatenate in Keras in the middle of a model.
Or, is this simply not possible in either? Is the only option to expand and pairwise concatenate prior to input?
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
k = 2
N = 10
M = 12
x = np.random.randint(2, size = 2 * N).reshape((-1,2))
y = np.random.randint(2, size = 2 * M).reshape((-1,2))
x_rep = np.tile(x, (1, M)).reshape((-1,2))
y_rep = np.tile(y, (N, 1))
xy = np.concatenate((x_rep, y_rep), axis=1)
xy = xy.astype(bool)
z = (xy[:,0] == xy[:,2]) * (xy[:,1] ^ xy[:,3])
print(z[:20])
xy = xy.astype(int)
z = z.astype(int)
first = keras.Input(shape=(k,))
second = keras.Input(shape=(k,))
shared_dense = layers.Dense(k)
first_dense = shared_dense(first)
second_dense = shared_dense(second)
first_tiled = layers.Lambda(tf.tile, arguments={'multiples':[1, M]}, name='first_expanded' )(first_dense) #keras.backend.tile(first_dense, [1, M])
second_tiled = layers.Lambda(tf.tile, arguments={'multiples':[N,1]}, name='second_expanded')(second_dense) #keras.backend.tile(first_dense, [1, M])
first_reshaped = layers.Reshape((k,))(first_tiled)
concatenated = layers.Concatenate()([first_reshaped, second_tiled])
out = layers.Dense(1)(concatenated)
model = keras.Model([first, second], out)
keras.utils.plot_model(model, 'tf_nw.png', show_shapes=True)
model.compile('Adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit([x, y], z)
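For reference, the fallback the question mentions (expanding pairwise before input) does fit cleanly, since x_rep, y_rep, and z from the code above all share the N*M sample count. A sketch with a plain two-input model and no tiling inside (the sigmoid on the output is an addition here, so binary_crossentropy receives probabilities):

first = keras.Input(shape=(k,))
second = keras.Input(shape=(k,))
shared_dense = layers.Dense(k)

# no Lambda/tile layers: the pairing was already done in numpy above
concatenated = layers.Concatenate()([shared_dense(first), shared_dense(second)])
out = layers.Dense(1, activation='sigmoid')(concatenated)

pairwise_model = keras.Model([first, second], out)
pairwise_model.compile('Adam', loss='binary_crossentropy', metrics=['accuracy'])
# x_rep, y_rep, and z all have N*M = 120 samples, so the sample check passes
pairwise_model.fit([x_rep, y_rep], z)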
I would like to extract single values from a tensor and manipulate it while retaining backpropagation. My current implementation:
import keras
from keras import backend as K
from keras.models import Model
from keras.layers import Dense, Activation, Input, Reshape
import tensorflow as tf
input = Input(shape=(100,1), dtype='float32')
x = Dense(100)(input)
x = Activation('relu')(x)
x = Dense(5)(x)
x = Activation('tanh')(x)
start_pad = 40.0 + 5.0 * x[0] # important line
# ...
zs = K.arange(0.0, 1000, step=1.0)
zs = K.relu( zs - start_pad )
# ...
out = zs # + ...
out = Reshape( (trace_length,1) )(out)
model = Model(inputs = input, outputs = out)
However, start_pad seems to be a tensor with the dimensions of x. Running the code above gives this error:
ValueError: Dimensions must be equal, but are 1000 and 5 for 'sub' (op: 'Sub') with input shapes: [1000], [100,5].
where start_pad object is <tf.Tensor 'add_1:0' shape=(100, 5) dtype=float32>.
I would like to have a scalar-like value for start_pad and subtract it from zs with broadcasting. How do I achieve this with TensorFlow/Keras?
OK, the solution I found is
x = tf.unstack(x, axis=1)
which returns a list of TF tensors.
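To illustrate how this restores broadcasting, a minimal standalone sketch (with a random stand-in for the Dense(5)/tanh output, not the asker's full model):

import tensorflow as tf

x = tf.random.normal((100, 5))           # stand-in for the (100, 5) layer output
cols = tf.unstack(x, axis=1)             # list of 5 tensors, each of shape (100,)
start_pad = 40.0 + 5.0 * cols[0]         # shape (100,)

zs = tf.range(0.0, 1000.0, delta=1.0)    # shape (1000,)
# add an axis so (100, 1) broadcasts against (1000,) -> (100, 1000)
zs = tf.nn.relu(zs - start_pad[:, None])
print(zs.shape)                          # (100, 1000)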