Trying to understand the behaviour of Keras Dense input shape and ndarrays - python

I'm trying to fit my simple Keras model for 5-class classification:
model = Sequential()
model.add(Dense(64, input_shape=(6,), activation="relu"))
model.add(Dense(5, activation="softmax"))
Also, I have the data with format:
>print(features)
[array([155, 22, 159, 57, 247, 88], dtype=uint8),
array([184, 165, 127, 49, 190, 0], dtype=uint8),
...
array([35, 136, 32, 255, 114, 137], dtype=uint8)]
But when I try to fit the model, I get the following error:
Error when checking input: expected input_layer_input to have shape (6,) but got array with shape (1,)
I can't understand the reason for this error. Could you please help me understand it?
Some additional information:
>type(features)
numpy.ndarray
>features.shape
(108885,)
>type(features[0])
numpy.ndarray
>features[0].shape
(6,)

You could change the input data to be a 2-dimensional NumPy array, or you could just change the input_shape to (1,), depending on what you want to do. Right now you have an array of arrays (a 1-D object array whose elements are (6,)-shaped arrays); Keras doesn't accept that.
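A minimal sketch of the first option, assuming features is the 1-D object array shown in the question (labels is a hypothetical name for your 5-class targets):
import numpy as np

# stack the 108885 arrays of shape (6,) into one 2-D array of shape (108885, 6)
X = np.stack(features).astype("float32")
print(X.shape)  # (108885, 6)

model.fit(X, labels)  # each row now matches input_shape=(6,)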

Related

any way to rescale and stack feature maps with different shapes in tensorflow?

I intend to use the concept of skip connections in my experiment. Basically, in my pipeline, feature maps that come after Conv2D are going to be stacked or concatenated. But the feature maps have different shapes, and trying to stack them together into one tensor gave me an error. Does anyone know a possible way of doing this correctly in tensorflow? Any thoughts or ideas to make this happen? Thanks
idea flowchart
here is the pipeline flowchart of what I want to do (image not shown):
my case is a little different because I have an extra building block used after Conv2D, and its output is now feature maps of 15x15x64 and so on. I want to stack those feature maps into one, then use it with Conv2D again.
my attempt:
this is my reproducible attempt:
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, Activation, Conv2D, Flatten, MaxPooling2D, BatchNormalization
inputs = tf.keras.Input(shape=(32, 32, 3))
x = inputs
x = Conv2D(32, (3, 3), input_shape=(32,32,3))(x)
x = BatchNormalization(axis=-1)(x)
x = Activation('relu')(x)
fm1 = MaxPooling2D(pool_size=(2,2))(x)
x = Conv2D(32,(3, 3), input_shape=(15,15,32))(fm1)
x = BatchNormalization(axis=-1)(x)
x = Activation('relu')(x)
fm2 = MaxPooling2D(pool_size=(2,2))(x)
concatted = tf.keras.layers.Concatenate(axis=1)([fm1, fm2])
but this way I ended up with the following error:
ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 15, 15, 32), (None, 6, 6, 32)]
I am not sure what the correct way to stack feature maps with different shapes would be. How can we make this right? Any thoughts?
desired output
in my actual model, the shapes of the feature maps are TensorShape([None, 15, 15, 128]) and TensorShape([None, 6, 6, 128]). I need to find a way to merge or stack them into one. Ideally, the shape of the concatenated or stacked feature maps would be [None, 21, 21, 128]. Is there any way of stacking them into one? Any ideas?
What you're trying to achieve doesn't work mathematically. Let me illustrate. Take a simple 1D problem (like 1D convolution). You have a (None, 64, 128)-sized output (fm1) and a (None, 32, 128)-sized output (fm2) that you want to concatenate. Then,
concatted = tf.keras.layers.Concatenate(axis=1)([fm1, fm2])
works totally fine, giving you an output of size (None, 96, 128).
Let's come to the 2D problem. Now you have two tensors, (None, 15, 15, 128) and (None, 6, 6, 128), and want to end up with a (None, 21, 21, 128)-sized output. Well, the math doesn't work here. To understand why, reduce this to the 1D format. Then you get
fm1 -> (None, 225, 128)
fm2 -> (None, 36, 128)
By concatenating, you get
concatted -> (None, 261, 128)
If the math worked out, you would get (None, 441, 128), which is reshape-able to (None, 21, 21, 128). So this cannot be achieved unless you pad the reshaped smaller tensor with 441 - 261 = 180 along the middle axis, and then reshape the result to the desired shape. The following is an example of how you can do it:
# requires: from tensorflow.keras import backend as K
concatted = tf.keras.layers.Lambda(
    lambda x: K.reshape(
        K.concatenate(
            [K.reshape(x[0], (-1, 225, 128)),
             tf.pad(
                 K.reshape(x[1], (-1, 36, 128)), [(0, 0), (0, 180), (0, 0)]
             )],
            axis=1
        ),
        (-1, 21, 21, 128)
    )
)([fm1, fm2])
Important: I can't guarantee the performance of your model; this just solves your problem mathematically. From a machine learning perspective, I wouldn't advise it. The best way would be to make sure the outputs have compatible sizes for concatenation. A few ways would be:
Not reducing the size of the convolution outputs (strides=1 and padding='same')
Using a transpose convolution operation to upsample the smaller one, as sketched below
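A minimal sketch of that second option, assuming fm1 is (None, 15, 15, 128) and fm2 is (None, 6, 6, 128) as above; the kernel size and stride are illustrative choices that happen to produce a 15x15 output:
# with padding='valid', output size = (in - 1) * stride + kernel = (6 - 1) * 2 + 5 = 15
up_fm2 = tf.keras.layers.Conv2DTranspose(128, kernel_size=5, strides=2, padding='valid')(fm2)
# both tensors are now (None, 15, 15, 128), so concatenate along the channel axis
concatted = tf.keras.layers.Concatenate(axis=-1)([fm1, up_fm2])  # (None, 15, 15, 256)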

Landsat 8 data with keras autoencoder

I am trying to input a 7 band .tif image to a keras input layer.
input_img = Input(shape=(246, 177, 7))
The tif image is 246 x 177 and has 7 bands.
I am using rasterio to read the image as follows:
with rio.open(landsat_post_fire_path) as src:
landsat_post_fire = src.read()
This gives me 7 arrays of shape (177, 246), i.e. a single array of shape (7, 177, 246).
Since I have only one image, I am adding a batch dimension:
x_train = np.expand_dims(landsat_post_fire, axis=0)
My complete autoencoder looks as follows:
input_size = 43542 #246 x 177
hidden_size = 20000
code_size = 5000
input_img = Input(shape=(246, 177, 7))
hidden_1 = Dense(hidden_size, activation='relu')(input_img)
code = Dense(code_size, activation='relu', activity_regularizer=l1(10e-6))(hidden_1)
hidden_3 = Dense(hidden_size, activation='relu')(code)
output_img = Dense(input_size, activation='sigmoid')(hidden_3)
autoencoder = Model(input_img, output_img)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train, epochs=1)
When I finally run the code, I get the following error:
Error when checking input: expected input_1 to have shape (246, 177, 7) but got array with shape (7, 177, 246)
Am I missing how the input shape works?
I don't understand what the issue is here. I would be grateful if someone could help explain how the Input understands the dimensions.
EDIT
I added the following line after reading the image:
landsat_post_fire=np.moveaxis(landsat_post_fire,0,-1)
But now the error is:
Error when checking target: expected dense_4 to have shape (177, 246, 43542) but got array with shape (177, 246, 7)
I think this is not the right way to tell the input layer about the number of bands in the image?
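For reference, a minimal sketch of the reshaping steps described above, assuming rasterio's band-first layout (the zeros array is a stand-in for src.read()); note that this yields rows/cols of (177, 246), while the Input above declares (246, 177):
import numpy as np

landsat_post_fire = np.zeros((7, 177, 246))  # stand-in for src.read(): (bands, rows, cols)
img = np.moveaxis(landsat_post_fire, 0, -1)  # -> (177, 246, 7), channels last
x_train = np.expand_dims(img, axis=0)        # -> (1, 177, 246, 7), batch of one
print(x_train.shape)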

Tensor Shape Not Recognized in Tensorflow 2.0

I can't get my RNN classifier to work with my input data. I am using the TF 2.0 pre-release with a sliding window.
I am trying to build an RNN that I feed 5 timesteps with 6 features each, having it produce the 6th timestep as the target. When I run my code, it gives me an error saying that the input is (None, 6), whereas when I print out my training data it clearly says the shape is (5, 6). I am very confused as to how to fix this.
Error:
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 734, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 224, in fit
distribution_strategy=strategy)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 547, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_inputs
steps=steps)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2384, in _standardize_user_data
all_inputs, y_input, dict_inputs = self._build_model_with_inputs(x, y)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2587, in _build_model_with_inputs
self._set_inputs(cast_inputs)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2674, in _set_inputs
outputs = self(inputs, **kwargs)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 772, in __call__
self.name)
File "C:\Users\employee\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\keras\engine\input_spec.py", line 177, in assert_input_compatibility
str(x.shape.as_list()))
ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 6]
Print readout:
********************tf.Tensor(
[[0.07812838 0.08639083 0.07809999 0.08601701 0.6974719 0.6974719 ]
[0.06794664 0.06995372 0.06220453 0.06934043 0.70064694 0.70064694]
[0.08323035 0.08651368 0.07691107 0.08147305 0.69750804 0.69750804]
[0.09781507 0.10009027 0.08847085 0.08919457 0.6944895 0.6944895 ]
[0.12235662 0.12269666 0.11316498 0.11738694 0.6868 0.6868 ]], shape=(5, 6), dtype=float32)********************tf.Tensor([[0.08238748 0.09074993 0.07986343 0.09017278 0.6965872 0.6965872 ]], shape=(1, 6), dtype=float32)********************
data comes in as an array of shape [737, 6]:
train = tf.data.Dataset.from_tensor_slices(features).window(6, 1, 1, drop_remainder=True).flat_map(lambda x: x.batch(6)).map(lambda window: (window[:-1], window[-1:]))
valid = train.take(200).shuffle(1000).repeat()
train = train.shuffle(3000).repeat()
for x, y in valid:
    print('*' * 20 + str(x) + "*" * 20 + str(y) + "*" * 20)
print(train)
model = tf.keras.Sequential()
model.add(layers.SimpleRNN(128,batch_size=10))
model.add(layers.Dense(124,kernel_initializer='he_uniform',activation='softmax'))
model.compile(optimizer='adagrad', batch_size=10,step_size=.01, loss=tf.keras.losses.MeanAbsoluteError(), metrics=['accuracy'])
history = model.fit(train,epochs=100, validation_data=valid,steps_per_epoch=3000,validation_steps=1000)
The model expects an input with rank 3, but is passed an input with rank 2.
The first layer is a SimpleRNN, which expects data in the form (batch_size, timesteps, features), i.e. rank 3. The shape of the data passed by the user is (5, 6), i.e. rank 2.
Passing rank-3 data (including the batch dimension) will fix the issue.
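Since the input here comes from a tf.data pipeline, one way to supply that batch dimension is to batch the dataset before fitting; a sketch assuming the window pipeline above (the batch size of 10 is illustrative, and this addresses only the ndim error):
train = train.batch(10)  # elements become (10, 5, 6): (batch, timesteps, features)
valid = valid.batch(10)
history = model.fit(train, epochs=100, validation_data=valid, steps_per_epoch=3000, validation_steps=1000)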
You need to reshape your data or train variable into 3 dimensions, i.e. [batch, timesteps, features].
The model is expecting 3-dimensional input and your data is 2-dimensional.
You can reshape your data like this:
data = tf.reshape(data, [-1,5,6])
And it should solve your issue.

Input image data to tensorflow placeholder

I'm working with the keras.datasets.fashion_mnist dataset, which contains 28 x 28 grayscale images. I've built a pretty simple convolutional neural network that accepts a placeholder of images defined as:
X = tf.placeholder(tf.float32, [None, 28, 28, INPUT_CHANNELS], name='X_placeholder')
I'm starting out with a <type 'numpy.ndarray'> of shape (100, 28, 28). 100 here represents the batch size that I've chosen to train with.
Obviously, the dimensionality doesn't line up here. The graph I've built should work with RGB images as well, hence the INPUT_CHANNELS dimension. As expected, when I try to train, I get the following error:
ValueError: Cannot feed value of shape (100, 28, 28) for Tensor u'X_placeholder:0', which has shape '(?, 28, 28, 1)'
Being relatively new to TF and numpy, I'm failing to see how to add that extra dimension. Having pieced together my code from various sources, I can't say that I chose the placeholder input shape [None, 28, 28, INPUT_CHANNELS], but I want to stick with it rather than work around it.
Question
How can I reshape my training data to match the expected placeholder dimensionality?
In numpy:
You can use np.newaxis, np.expand_dims, or reshape() to add a dimension.
import numpy as np
train_data = np.random.normal(size=(100,28,28))
print(train_data.shape)
new_a = train_data[...,np.newaxis]
print(new_a.shape)
new_a = np.expand_dims(train_data,axis=-1)
print(new_a.shape)
new_a = train_data.reshape(100,28,28,1)
print(new_a.shape)
(100, 28, 28)
(100, 28, 28, 1)
(100, 28, 28, 1)
(100, 28, 28, 1)
In tensorflow:
You can use tf.newaxis, tf.expand_dims, or tf.reshape to add a dimension.
import tensorflow as tf
train_data = tf.placeholder(shape=(None,28,28),dtype=tf.float64)
print(train_data.shape)
new_a = train_data[...,tf.newaxis]
print(new_a.shape)
new_a = tf.reshape(train_data,shape=(-1,28,28,1))
print(new_a.shape)
new_a = tf.expand_dims(train_data,axis=-1)
print(new_a.shape)
(?, 28, 28)
(?, 28, 28, 1)
(?, 28, 28, 1)
(?, 28, 28, 1)

Keras, 2 similar datasets: 1 works, the other raises ValueError

I'm learning deep learning and trying to write some models, but I've gotten stuck on the dataset. When I use ready-made code and a dataset from GitHub, it works normally, but when I try my dataset with the same code, it doesn't work. However, both datasets have the same type and shape:
working dataset:
Shape of train: (5000, 32, 32, 3)
Type of train: <class 'numpy.ndarray'>
Shape of train labels: (5000,)
Shape of valid: (500, 32, 32, 3)
Shape of valid labels: (500,)
my dataset:
Shape of train: (31368, 32, 32, 3)
Type of train: <class 'numpy.ndarray'>
Shape of train labels: (31368,)
Shape of valid: (7841, 32, 32, 3)
Shape of valid labels: (7841, 32, 32, 3)
Shape of train_pixels[0]: (32, 32, 3)
Error I got:
ValueError: Error when checking model input: the list of Numpy arrays
that you are passing to your model is not the size the model expected.
Expected to see 1 arrays but instead got the following list of 7841
arrays: [array([[[186, 182, 255],
[179, 177, 255],
[163, 161, 244],...
There is one similar question I found here, but I couldn't use it; I got other errors. This solution doesn't work either.
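One discrepancy is visible in the printouts above: in the working dataset the validation labels have shape (500,), while here they have shape (7841, 32, 32, 3), the same as the validation images themselves. A hedged sanity check (variable names are hypothetical):
# labels for integer class targets should be 1-D, one entry per sample
assert train_labels.shape == (train_images.shape[0],)
assert valid_labels.shape == (valid_images.shape[0],)  # fails for (7841, 32, 32, 3)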
