X has 232 features, but StandardScaler is expecting 241 features as input - python

I want to make a prediction using KNN, and I have the following lines of code:
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

def knn(trainImages, trainLabels, testImages, testLabels):
    max = 0
    for i in range(len(trainImages)):
        if len(trainImages[i]) > max:
            max = len(trainImages[i])
    for i in range(len(trainImages)):
        aux = np.array(trainImages[i])
        aux.resize(max)
        trainImages[i] = aux
    max = 0
    for i in range(len(testImages)):
        if len(testImages[i]) > max:
            max = len(testImages[i])
    for i in range(len(testImages)):
        aux = np.array(testImages[i])
        aux.resize(max)
        testImages[i] = aux
    scaler = StandardScaler()
    scaler.fit(list(trainImages))
    trainImages = scaler.transform(list(trainImages))
    testImages = scaler.transform(list(testImages))
    classifier = KNeighborsClassifier(n_neighbors=5)
    classifier.fit(trainImages, trainLabels)
    pred = classifier.predict(testImages)
    print(classification_report(testLabels, pred))
I get the error at testImages = scaler.transform(list(testImages)). I understand that it's caused by a mismatch between the feature counts of the arrays. How can I solve it?

StandardScaler in scikit-learn expects input of shape (n_samples, n_features).
If the second dimension of your train and test sets is not equal, then it is not just incorrect in sklearn and bound to raise an error; it also makes no sense in theory. The n_features dimension of the test and train sets must be equal, but the first dimension can differ, since it counts samples and you can have any number of samples in the train and test sets.
When you execute scaler.transform(test), it expects test to have the same number of features as the train set on which you executed scaler.fit(train). So all your images should be the same size.
For example, if you have 100 images, train_images shape should be something like (90,224,224,3) and test_images shape should be like (10,224,224,3) (only first dimension is different).
So, try to resize your images like this:
import cv2
resized_image = cv2.resize(image, (224,224)) #don't include channel dimension
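To make this concrete, here is a minimal sketch of the whole pipeline, under the assumption that trainImages and testImages are lists of numpy image arrays with the same number of channels; TARGET_SIZE and the helper to_feature_matrix are my own illustrative choices, not part of the original code:
import cv2
import numpy as np
from sklearn.preprocessing import StandardScaler

TARGET_SIZE = (224, 224)  # one fixed size for ALL images, train and test

def to_feature_matrix(images):
    # Resize every image to the same size, then flatten, so that every
    # row of the matrix has an identical number of features.
    return np.stack([cv2.resize(img, TARGET_SIZE).ravel() for img in images])

trainX = to_feature_matrix(trainImages)
testX = to_feature_matrix(testImages)

scaler = StandardScaler()
trainX = scaler.fit_transform(trainX)  # fit on the training set only
testX = scaler.transform(testX)        # same feature count as train
Because both matrices are built with the same TARGET_SIZE, transform no longer sees a feature-count mismatch.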


Keras won't broadcast-multiply the model output with a mask designed for the entire mini batch

I have a data generator that produces batches of input data (X) and targets (Y), and also a mask (batch_mask) to be applied to the model output (the same mask applies to all the datapoints in the batch; there are different masks for different batches, and the data generator takes care of this).
As a result, the first dimension of batch_mask could have size 1 or batch_size (by repeating the same mask along the first dimension batch_size times). I was expecting Keras to accept either, and I wanted to simply create masks with a size of 1 on the first dimension.
However, when I tried this, I got the error:
ValueError: Data cardinality is ambiguous:
x sizes: 128, 1
y sizes: 128
Make sure all arrays contain the same number of samples.
Why won't Keras broadcast along the first dimension? It seems like this should not be complicated.
Here's some minimal example code to observe this behavior:
import tensorflow.keras as tfk
import numpy as np
#######################
# 1. model definition #
#######################
# model parameters
nfeatures_in = 6
target_size = 8
# model inputs
input = tfk.layers.Input(nfeatures_in)
input_mask = tfk.layers.Input(target_size)
# model graph
out = tfk.layers.Dense(target_size)(input)
out_masked = tfk.layers.Multiply()((out,input_mask)) # multiply all model outputs in the batch by the same mask
model = tfk.Model(inputs=(input, input_mask), outputs=out_masked)
##########################
# 2. dummy data creation #
##########################
batch_size = 32
# create masks for the batch
zeros_vector = np.zeros((1,target_size)) # "batch_size"==1
zeros_vector[0,:6] = 1
batch_mask = zeros_vector
# dummy data creation
X = np.random.randn(batch_size, 6)
Y = np.random.randn(batch_size, target_size)*batch_mask # the target is masked by design in each batch
############################
# 3. compile model and fit #
############################
model.compile(optimizer="Adam", loss="mse")
model.fit((X, batch_mask),Y, batch_size=batch_size)
I know I could make this work by either:
repeating the mask to make the first dimension of batch_mask be the size of the first dimension of X (instead of 1), as sketched right after this list, or
using pure tensorflow (but I feel like broadcasting along the batch dimension should not be a problem for Keras).
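For reference, the first workaround is a one-liner with np.repeat (my sketch, reusing the arrays defined above), at the cost of storing batch_size identical copies of the mask:
batch_mask_full = np.repeat(batch_mask, batch_size, axis=0)  # shape (32, target_size)
model.fit((X, batch_mask_full), Y, batch_size=batch_size)    # cardinalities now match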
How can I make this work with Keras?
Thank you!
You can create an IdentityLayer which receives the batch_mask as an external parameter and returns it as a tensor.
import tensorflow as tf

class IdentityLayer(tfk.layers.Layer):
    def __init__(self, my_mask, **kwargs):
        super(IdentityLayer, self).__init__(**kwargs)
        self.my_mask = my_mask

    def call(self, _):
        my_mask = tf.convert_to_tensor(self.my_mask, dtype=tf.float32)
        return my_mask

    def get_config(self):
        config = super().get_config()
        config.update({
            "my_mask": self.my_mask,
        })
        return config
The usage of IdentityLayer in a model is straightforward:
# model inputs
input = tfk.layers.Input(nfeatures_in)
input_mask = IdentityLayer(batch_mask)(input)
# model graph
out = tfk.layers.Dense(target_size)(input)
out_masked = tfk.layers.Multiply()((out,input_mask))
model = tfk.Model(inputs=input, outputs=out_masked)
Where batch_mask is a numpy array created as you reported:
zeros_vector = np.zeros((1,target_size)) # "batch_size"==1
zeros_vector[0,:6] = 1
batch_mask = zeros_vector
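With the mask baked into the graph this way, fit no longer receives batch_mask as a second input; a quick sketch, reusing X, Y, and batch_size from the question:
model.compile(optimizer="Adam", loss="mse")
model.fit(X, Y, batch_size=batch_size)  # no mask input needed: it lives inside IdentityLayer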
The solution is to (properly) use a DataGenerator. See the gist with the working code: https://gist.github.com/iranroman/2aaecf5b5621051df6b1b6b5394e5ef3
Thank you @Marco Cerliani for the discussion that led to figuring out the solution.

In model.fit in tf.keras, is there a way to pass each sample in a batch n times?

I am trying to write a custom loss function for a model that uses Monte Carlo (MC) dropout. I want the model to run each sample in a batch through the network n times before feeding the predictions to the loss function. A toy version of the current code is shown below. The model has 24 inputs and 10 outputs, with 5000 training samples.
import numpy as np
import tensorflow as tf

X = np.random.rand(5000,24)
y = np.random.rand(5000,10)

def MC_Loss(y_true, y_pred):
    mu = tf.math.reduce_mean(y_pred, axis=0)
    #error = tf.square(y_true-mu)
    error = tf.square(y_true - y_pred)
    var = tf.math.reduce_variance(y_pred, axis=0)
    return tf.math.divide(error, var)/2 + tf.math.log(var)/2 + tf.math.log(2*np.pi)/2

input_layer = tf.keras.layers.Input(shape=(X.shape[1],))
hidden_layer = tf.keras.layers.Dense(units=100, activation='elu')(input_layer)
do_layer = tf.keras.layers.Dropout(rate=0.20)(hidden_layer, training=True)
output_layer = tf.keras.layers.Dense(units=10, activation='sigmoid')(do_layer)
model = tf.keras.models.Model(input_layer, output_layer)
model.compile(loss=MC_Loss, optimizer='Adam')
model.fit(X, y, epochs=100, batch_size=128, shuffle=True)
The current shapes of y_true and y_pred are (None,10), with None being the batch_size. I want to have n values for each sample in the batch, so I can get the mean and standard deviation of each sample to use in the loss function. I want these values because the mean and standard deviation should be unique to each sample, not taken across all samples in a batch. The current shape of mu and var is (10,), and I would want them to be (None,10), which would mean y_true and y_pred have the shape (None,n,10).
How can I accomplish this?
I believe I found the solution after some experimentation. The modified code is shown below.
import numpy as np
import tensorflow as tf

n = 100
X = np.random.rand(5000,24)
X1 = np.concatenate(([X.reshape(X.shape[0],1,X.shape[1]) for _ in range(n)]), axis=1)
y = np.random.rand(5000,10)
y1 = np.concatenate(([y.reshape(y.shape[0],1,y.shape[1]) for _ in range(n)]), axis=1)

def MC_Loss(y_true, y_pred):
    mu = tf.math.reduce_mean(y_pred, axis=1)
    obs = tf.math.reduce_mean(y_true, axis=1)
    error = tf.square(obs - mu)
    var = tf.math.reduce_variance(y_pred, axis=1)
    return tf.math.divide(error, var)/2 + tf.math.log(var)/2 + tf.math.log(2*np.pi)/2

input_layer = tf.keras.layers.Input(shape=(X.shape[1],))
hidden_layer = tf.keras.layers.Dense(units=100, activation='elu')(input_layer)
do_layer = tf.keras.layers.Dropout(rate=0.20)(hidden_layer, training=True)
output_layer = tf.keras.layers.Dense(units=10, activation='sigmoid')(do_layer)
model = tf.keras.models.Model(input_layer, output_layer)
model.compile(loss=MC_Loss, optimizer='Adam')
model.fit(X1, y1, epochs=100, batch_size=128, shuffle=True)
So what I am now doing is stacking the inputs and outputs along an intermediate axis, creating n identical copies of every input and output sample. TensorFlow shows a warning because the model is created without knowledge of this intermediate axis, but it still trains with no issues, and the shapes are as expected.
Note: since y_true now has the shape (None,n,10), you have to take the mean along the intermediate axis, which gives you the true value, since all n copies are identical.
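As an aside, the same stacking can be written more compactly with np.repeat (a sketch equivalent to the concatenate-of-reshapes above):
import numpy as np

n = 100
X1 = np.repeat(X[:, np.newaxis, :], n, axis=1)  # shape (5000, n, 24)
y1 = np.repeat(y[:, np.newaxis, :], n, axis=1)  # shape (5000, n, 10)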

Keras Multi Input Network, using Images and structured data : How do I build the correct input data?

I am building a multi-input network using the Keras functional API, but I struggle to find and understand the right format for my input data through the network.
I have two main inputs:
One is an image, which goes through a fine-tuned ResNet50 CNN.
The second is a simple numpy array (X_train) containing metadata about the image (position and size of the image). This one goes through a simple dense network.
I load the images from a dataframe containing the metadata and the filepath to the corresponding image.
I use ImageDataGenerator and the flow_from_dataframe method to load my images:
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_flow = datagen.flow_from_dataframe(
    dataframe=df_train,
    x_col="cropped_img_filepath",
    y_col="category",
    batch_size=batch_size,
    shuffle=False,
    class_mode="categorical",
    target_size=(224,224)
)
I can train the two networks separately using their own data; no problems so far.
The outputs of the two distinct networks are then combined by a dense network that outputs a probability vector over the 10 digit classes:
# Create the input for the final dense network using the output of both the dense MLP and CNN
combinedInput = concatenate([cnn.output, mlp.output])
x = Dense(512, activation="relu")(combinedInput)
x = Dense(256, activation="relu")(x)
x = Dense(128, activation="relu")(x)
x = Dense(32, activation="relu")(x)
x = Dense(10, activation="softmax")(x)
model = Model(inputs=[cnn.input, mlp.input], outputs=x)

# Compile the model
opt = Adam(lr=1e-3, decay=1e-3 / 200)
model.compile(loss="categorical_crossentropy",
              metrics=['accuracy'],
              optimizer=opt)

# Train the model
model_history = model.fit(x=(train_flow, X_train),
                          y=y_train,
                          epochs=1,
                          batch_size=batch_size)
However, I cannot train the overall network; I get the following error:
ValueError: Failed to find data adapter that can handle input: (<class 'tuple'> containing values of types {"<class 'keras_preprocessing.image.dataframe_iterator.DataFrameIterator'>", "<class 'numpy.ndarray'>"}), <class 'pandas.core.series.Series'>
I understand I am not using the correct format for my input data.
I can train my CNN with train_flow, and my dense network with X_train, so I was hoping this would work.
Do you have any idea how to combine image data and a numpy array into a multi-input array?
Thank you for all the information you can give me!
I finally found out how to do it, drawing inspiration from the approach @Nima Aghli proposed.
Here is how I did it:
First, instantiate the preprocessing function (for me, the one used for ResNet50):
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
import numpy as np

def preprocess_function(x):
    if x.ndim == 3:
        x = x[np.newaxis, :, :, :]
    return preprocess_input(x)

# Initializing the datagen, using the above function:
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
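Note that, as written, the generator is handed preprocess_input directly. If the intent is to route images through the wrapper defined just above (my assumption, since it is otherwise unused), the call would instead be:
datagen = ImageDataGenerator(preprocessing_function=preprocess_function)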
Then define the custom data generator that will yield randomly sampled arrays coupling images & metadata, while making sure never to run out of data (so that you can run for whichever number of epochs you want):
def createGenerator(dff, verif=False, batch_size=BATCH_SIZE):
    # Shuffles the dataframe, and so the batches as well
    dff = dff.sample(frac=1)
    # Shuffle=False is EXTREMELY important to keep order of image and coord
    flow = datagen.flow_from_dataframe(
        dataframe=dff,
        directory=None,
        x_col="cropped_img_filepath",
        y_col="category",
        batch_size=batch_size,
        shuffle=False,
        class_mode="categorical",
        target_size=(224,224),
        seed=42
    )
    idx = 0
    n = len(dff) - batch_size
    batch = 0
    while True:
        # Get next batch of images
        X1 = flow.next()
        # idx to reach
        end = idx + X1[0].shape[0]
        # get next batch of lines from df
        X2 = dff[["x", "y", "w", "h"]][idx:end].to_numpy()
        dff_verif = dff[idx:end]
        # Updates the idx for the next batch
        idx = end
        # print("batch nb : ", batch, ", batch_size : ", X1[0].shape[0])
        batch += 1
        # Checks if we are at the end of the dataframe
        if idx == len(dff):
            # print("END OF THE DATAFRAME\n")
            idx = 0
        # Yields the image, metadata & target batches
        if verif == True:
            yield [X1[0], X2], X1[1], dff_verif
        else:
            yield [X1[0], X2], X1[1]  # Yield both images, metadata and their mutual label
I deliberately kept the comments, as they help in grasping all the operations that are performed.
The main point/problem is to get images from the whole dataframe without ever running short on images, while keeping batches of a constant size.
We also have to be careful with the order of the images/metadata, so that the right info is connected to the right image in the returned array. A usage sketch follows below.
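For completeness, a hypothetical sketch of how this generator feeds the combined model from the question (steps_per_epoch and the epoch count are my assumptions):
train_gen = createGenerator(df_train, batch_size=BATCH_SIZE)
model_history = model.fit(
    train_gen,
    steps_per_epoch=len(df_train) // BATCH_SIZE,  # one pass over the dataframe per epoch
    epochs=10,
)
Each yielded element already has the form ([images, metadata], labels), which matches a model built with Model(inputs=[cnn.input, mlp.input], outputs=x).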

reshape Error/ValueError: total size of new array must be unchanged

I have code for image classification using a CNN, so there are a training dataset and a testing dataset. When I run the system, I get this error:
ValueError Traceback (most recent call last)
<ipython-input-44-cb7ec1a13881> in <module>()
1 optimize(num_iterations=1)
2
----> 3 print_validation_accuracy()
<ipython-input-43-7f1a17e48e41> in print_validation_accuracy(show_example_errors, show_confusion_matrix)
21
22 # Get the images from the test-set between index i and j.
---> 23 images = data.valid.images[i:j, :].reshape(batch_size, img_size_flat)
24 #images = data.valid.images[i:j, :].reshape(1, 128)
25
ValueError: total size of new array must be unchanged
and the steps of the code that preceded this error are:
def print_validation_accuracy(show_example_errors=False,
                              show_confusion_matrix=False):
    # Number of images in the test-set.
    num_test = len(data.valid.images)

    # Allocate an array for the predicted classes which
    # will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_test, dtype=np.int)

    # Now calculate the predicted classes for the batches.
    # We will just iterate through all the batches.
    # There might be a more clever and Pythonic way of doing this.

    # The starting index for the next batch is denoted i.
    i = 0
    while i < num_test:
        # The ending index for the next batch is denoted j.
        j = min(i + batch_size, num_test)

        # Get the images from the test-set between index i and j.
        images = data.valid.images[i:j, :].reshape(batch_size, img_size_flat)

        # Get the associated labels.
        labels = data.valid.labels[i:j, :]

        # Create a feed-dict with these images and labels.
        feed_dict = {x: images,
                     y_true: labels}

        # Calculate the predicted class using TensorFlow.
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)

        # Set the start-index for the next batch to the
        # end-index of the current batch.
        i = j

    cls_true = np.array(data.valid.cls)
    cls_pred = np.array([classes[x] for x in cls_pred])

    # Create a boolean array whether each image is correctly classified.
    correct = (cls_true == cls_pred)

    # Calculate the number of correctly classified images.
    # When summing a boolean array, False means 0 and True means 1.
    correct_sum = correct.sum()

    # Classification accuracy is the number of correctly classified
    # images divided by the total number of images in the test-set.
    acc = float(correct_sum) / num_test

    # Print the accuracy.
    msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
    print(msg.format(acc, correct_sum, num_test))

    # Plot some examples of mis-classifications, if desired.
    if show_example_errors:
        print("Example errors:")
        plot_example_errors(cls_pred=cls_pred, correct=correct)

    # Plot the confusion matrix, if desired.
    if show_confusion_matrix:
        print("Confusion Matrix:")
        plot_confusion_matrix(cls_pred=cls_pred)
Can anyone help me please?
As the error message shows, there is a mismatch in the reshape in this statement:
images = data.valid.images[i:j, :].reshape(batch_size, img_size_flat)
What is happening is that the two sides of this equation are not equal, i.e.
(j - i) * (column_size of data.valid.images) is not equal to batch_size * img_size_flat.
Make them equal and the problem will be solved.
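The most common cause is that the last batch is shorter than batch_size (when the test-set size is not a multiple of batch_size). A minimal sketch of that fix, using the variables from the question, is to reshape to the actual slice length instead of batch_size:
# reshape to the true length of this batch, j - i, instead of batch_size
images = data.valid.images[i:j, :].reshape(j - i, img_size_flat)
# equivalently, let numpy infer the batch dimension
images = data.valid.images[i:j, :].reshape(-1, img_size_flat)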

Keras wrong image size

I want to test the accuracy of my CNN model on the test images. The following is the code for converting ground-truth images in mha format to png format:
def save_labels(fns):
    '''
    INPUT list 'fns': filepaths to all labels
    '''
    progress.currval = 0
    for label_idx in progress(xrange(len(fns))):
        slices = io.imread(fns[label_idx], plugin='simpleitk')
        for slice_idx in xrange(len(slices)):
            '''
            commented code in order to reshape the image slices. I tried reshaping but it did not work
            strip = slices[slice_idx].reshape(1200,240)
            if np.max(strip) != 0:
                strip /= np.max(strip)
            if np.min(strip) <= -1:
                strip /= abs(np.min(strip))
            '''
            io.imsave('Labels2/{}_{}L.png'.format(label_idx, slice_idx), slices[slice_idx])
This code produces 240 x 240 images in png format; however, most of them are low contrast or completely blackened. Moving on, I now pass these images to a function that predicts the class of each labelled image.
def predict_image(self, test_img, show=False):
    '''
    predicts classes of input image
    INPUT (1) str 'test_image': filepath to image to predict on
          (2) bool 'show': True to show the results of prediction, False to return prediction
    OUTPUT (1) if show == False: array of predicted pixel classes for the center 208 x 208 pixels
           (2) if show == True: displays segmentation results
    '''
    imgs = io.imread(test_img, plugin='simpleitk').astype('float').reshape(5,240,240)
    plist = []
    # create patches from an entire slice
    for img in imgs[:-1]:
        if np.max(img) != 0:
            img /= np.max(img)
        p = extract_patches_2d(img, (33,33))
        plist.append(p)
    patches = np.array(zip(np.array(plist[0]), np.array(plist[1]), np.array(plist[2]), np.array(plist[3])))
    # predict classes of each pixel based on model
    full_pred = keras.utils.np_utils.probas_to_classes(self.model_comp.predict(patches))
    fp1 = full_pred.reshape(208,208)
    if show:
        io.imshow(fp1)
        plt.show()
    else:
        return fp1
I am getting ValueError: cannot reshape array of size 172800 into shape (5,240,240). I changed 5 to 3 so that 3 x 240 x 240 = 172800, but then there is a new problem: ValueError: Error when checking : expected convolution2d_input_1 to have 4 dimensions, but got array with shape (43264, 33, 33).
My model looks like this:
single = Sequential()
single.add(Convolution2D(self.n_filters[0], self.k_dims[0], self.k_dims[0], border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg), input_shape=(self.n_chan,33,33)))
single.add(Activation(self.activation))
single.add(BatchNormalization(mode=0, axis=1))
single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
single.add(Dropout(0.5))
single.add(Convolution2D(self.n_filters[1], self.k_dims[1], self.k_dims[1], activation=self.activation, border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg)))
single.add(BatchNormalization(mode=0, axis=1))
single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
single.add(Dropout(0.5))
single.add(Convolution2D(self.n_filters[2], self.k_dims[2], self.k_dims[2], activation=self.activation, border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg)))
single.add(BatchNormalization(mode=0, axis=1))
single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
single.add(Dropout(0.5))
single.add(Convolution2D(self.n_filters[3], self.k_dims[3], self.k_dims[3], activation=self.activation, border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg)))
single.add(Dropout(0.25))
single.add(Flatten())
single.add(Dense(5))
single.add(Activation('softmax'))
sgd = SGD(lr=0.001, decay=0.01, momentum=0.9)
single.compile(loss='categorical_crossentropy', optimizer=sgd)  # use the SGD instance configured above, not the string 'sgd'
print 'Done.'
return single
I am using Keras 1.2.2. Please refer here and here (is it due to this change in full_pred in the above code?) for my previous posts with background information. Please refer to this for why the specific sizes like 33,33 are used.
You should check the shape of the patches array. It should have 4 dimensions: (nrBatches, nrChannels, Width, Height). According to your error message there are only 3 dimensions, so it seems you merged your channel dimension into your batch dimension.
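A minimal sketch of one way to restore the missing axis, assuming plist holds the four per-channel patch arrays built in predict_image (this would replace the np.array(zip(...)) line, which can collapse dimensions):
import numpy as np

# Stack the four per-channel patch arrays along a new axis 1, giving
# (n_patches, 4, 33, 33): batch, channels, height, width.
patches = np.stack([plist[0], plist[1], plist[2], plist[3]], axis=1)
print(patches.shape)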
