Use CAM with pretrained VGG16 to make a heatmap for pneumonia - python

I'm trying to make a heatmap.
I'm almost done.
The problem is that when I use decode_predictions, there is nothing about pneumonia or anything like that.
Here is my code.
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input, decode_predictions
import numpy as np
# pretrained VGG16 with ImageNet weights
model = VGG16(weights='imagenet')
img_path = 'path.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
print('Predicted:', decode_predictions(preds, top=20)[0])
That img_path points to a chest X-ray image, and the output is not related to pneumonia or chest X-rays at all. I want to know why, and how to get the right parameters for decode_predictions.
Cheers guys.
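For reference, decode_predictions only maps VGG16's ImageNet output onto the 1,000 ImageNet class names, and ImageNet contains no medical categories, so a pneumonia label can never appear there; to score pneumonia you would have to replace the classifier head and fine-tune on a chest X-ray dataset. The heatmap itself is usually built from the last convolutional layer. Below is a minimal Grad-CAM-style sketch of that mechanic (the layer name 'block5_conv3' and the use of the plain ImageNet model are assumptions; with a fine-tuned model you would point it at your own output instead):
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
model = VGG16(weights='imagenet')
last_conv = model.get_layer('block5_conv3')
# maps the input image to the last conv activations and the predictions
grad_model = tf.keras.models.Model(model.inputs, [last_conv.output, model.output])
img = image.load_img('path.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
with tf.GradientTape() as tape:
    conv_out, preds = grad_model(x)
    top_class = tf.argmax(preds[0])
    class_score = preds[:, top_class]
# gradient of the class score w.r.t. the conv feature map, averaged per channel
grads = tape.gradient(class_score, conv_out)
weights = tf.reduce_mean(grads, axis=(0, 1, 2))
heatmap = tf.maximum(tf.reduce_sum(conv_out[0] * weights, axis=-1), 0)
heatmap = (heatmap / tf.reduce_max(heatmap)).numpy()  # normalized map to overlay on the X-ray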

Related

How to get prediction label along with percentage

I am using transfer learning to retrain a VGG16 model on the Fruits360 dataset using Keras. I have already trained the model and generated the model.h5 file. Now, to test the model, I wrote the separate code shown below, loaded the model.h5 file and the input image, and predicted using the model.predict() function. I got an array of predicted values as output, but I am not able to get the label for the output.
How do I also get the labels after predicting?
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.applications.vgg16 import preprocess_input
from keras.applications.vgg16 import decode_predictions
from keras.preprocessing import image
from keras.models import load_model
import os
my_list = os.listdir('./fruits-360/Training')
labels = sorted(my_list)
#print(len(labels))
saved_model = load_model("output.h5")
# load an image from file
image = load_img('apple.jpg', target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
# predict the probability across all output classes
yhat = saved_model.predict(image)
print(yhat)
The output that I got from this:
[[7.17755854e-02 1.64420519e-04 7.11962930e-04 1.09639345e-03
3.65487649e-03 1.30461820e-03 1.71189080e-03 8.44106398e-05
2.32845647e-04 2.93225341e-04 5.51751134e-09 8.36079926e-06
2.45124284e-07 4.89534505e-05 3.62677121e-04 3.77899994e-07
1.04390840e-09 2.77215719e-07 1.48338046e-07 1.58574392e-06
1.85948572e-08 2.35122825e-05 1.40991315e-05 1.53142121e-09
4.20618314e-08 9.00860164e-10 8.37871852e-08 1.38314470e-04
2.33362043e-05 1.02217612e-07 1.56784572e-05 1.45486838e-05
1.35744230e-07 7.53441327e-07 8.10141572e-08 9.25831589e-09
1.17044747e-05 7.80909737e-09 1.17813433e-05 1.39052809e-05
1.33823562e-06 8.83602411e-07 5.22362086e-07 3.12003103e-04
3.63733534e-07 3.09960592e-06 7.83494880e-10 2.16209537e-06
1.09540458e-07 1.00488634e-07 5.04332002e-06 3.11387431e-08
1.43967145e-06 3.70907003e-08 9.72185060e-02 7.17791181e-07
8.50022047e-07 1.09006250e-11 8.06401147e-07 2.94776954e-04
1.42594319e-04 6.57663213e-06 2.22632690e-09 1.33982932e-04
7.27764191e-03 1.76724559e-03 4.58840788e-07 2.83163081e-05
1.27739793e-06 1.51839274e-07 6.35151446e-01 1.49872008e-04
1.69212143e-07 6.46130411e-06 8.09798095e-09 1.33023859e-04
3.11768084e-10 6.82332274e-03 2.72009001e-05 1.36803810e-05
3.21909931e-04 2.18727801e-05 4.89347076e-06 1.65353231e-05
8.18530396e-02 2.71601088e-08 3.78919160e-03 1.93472511e-06
2.28390039e-04 9.45829204e-04 8.07484355e-08 2.39097773e-07
3.94911304e-08 6.42228715e-10 1.27851049e-10 2.42364536e-06
6.91388919e-08 5.50304435e-07 5.60582407e-08 6.93544493e-08
2.04468861e-07 1.82402204e-07 1.29191315e-08 1.40132336e-03
7.21434930e-08 1.26103216e-04 7.80344158e-02 6.98078452e-07
6.39117275e-07 4.86231899e-09 6.67545173e-05 1.98491052e-05
3.82679382e-08 4.00836188e-06 1.76605427e-05 5.99655250e-05
1.41588691e-06 6.29748298e-09 1.60603679e-03 2.18801666e-04
1.52924549e-05 2.39897645e-07 5.80409534e-08 1.40595137e-06
4.33732907e-07 9.40148311e-06 6.87087507e-08 9.42246814e-08
4.06775257e-07 1.12163532e-08 8.79949056e-08]]
I tried a lot of the different options that were answered in other questions about this, but I was not able to find a solution. Can anyone help?
If you want, I can also provide the retraining code for reference.
Thanks in advance!
You can get the labels from whatever object you used to build the training dataset.
If you trained with tf.data (image_dataset_from_directory), the class names are stored on the dataset:
class_names = your_train_ds.class_names
If you trained with an ImageDataGenerator flow (v_datagen below), use its class_indices dictionary:
labels = v_datagen.class_indices
# this returns a python dictionary in the order label_name:index
# We need to switch this order to index:label_name,
# so that we can access the label name using the index as key
labels = dict((val, ky) for ky, val in labels.items())
Now, get the prediction index as mentioned in the comments and look up its label:
pred_ind = int(np.argmax(yhat, axis=1)[0])
print(labels[pred_ind])
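Alternatively, since the question already builds labels = sorted(os.listdir('./fruits-360/Training')) and Keras's flow_from_directory assigns class indices in the same alphanumeric order of the subfolder names, that sorted list can be indexed directly. A minimal sketch under that assumption (it breaks if the training code used a different class ordering), which also prints the percentage asked for in the title:
import numpy as np
# labels = sorted(os.listdir('./fruits-360/Training')) from the question,
# assumed to match the class ordering used during training
pred_index = int(np.argmax(yhat, axis=1)[0])  # index of the highest probability
confidence = float(np.max(yhat))              # the probability itself
print('%s (%.2f%%)' % (labels[pred_index], confidence * 100))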

How can I solve this "CUDA 2D conv problem"?

This is my code, but the predict function does not work.
Error:
from keras.applications.vgg16 import VGG16
from keras.utils.vis_utils import plot_model
#Keras will download the weight files from the Internet and store them in the ~/.keras/models directory.
model = VGG16()
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.applications.vgg16 import preprocess_input
# load an image from file
image = load_img('output.png', target_size=(224, 224))
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
# prepare the image for the VGG model
image = preprocess_input(image)
from keras.applications.vgg16 import decode_predictions
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
# predict the probability across all output classes
yhat = model.predict(image)
# convert the probabilities to class labels
label = decode_predictions(yhat)
# retrieve the most likely result, e.g. highest probability
label = label[0][0]
# print the classification
print('%s (%.2f%%)' % (label[1], label[2]*100))
img = Image.open('output.png')
plt.imshow(img)
What should I do?
The error means that TensorFlow failed to get the convolution algorithm. I'm not sure of all the things that can cause this, but when I get this error it is because I have more than one instance of Python running that uses TensorFlow. So in Jupyter, shut down the kernels of all open notebooks except the one you want to run.
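If closing the other kernels does not help, this error is commonly a GPU memory problem: cuDNN cannot initialize because the memory has already been claimed. A frequently suggested workaround for TensorFlow 2.x, sketched here (whether it applies depends on your setup), is to enable memory growth before building any model:
import tensorflow as tf
# allocate GPU memory on demand instead of grabbing it all up front;
# this often avoids "Failed to get convolution algorithm" errors caused
# by cuDNN running out of memory at initialization time
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)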

Can't load custom dataset into pretrained CNN for feature extraction

Hello, I am a newbie to all this. I am trying to feed the pretrained CNN VGG16 with a custom dataset of mine and then extract features for every image with NumPy, but I am getting this error: 'numpy.ndarray' object has no attribute 'load_img'. Any help is really appreciated, thanks.
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
import matplotlib.pyplot as plt
import os
model = VGG16(weights='imagenet', include_top=False)
dir_images = "C:/Users/.../Desktop/db"
imgs = os.listdir(dir_images)
for imgnm in imgs:
    image = plt.imread(os.path.join(dir_images, imgnm))
    img = image.load_img(image, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    features = model.predict(x)
    #np.save('features.csv', features)
You are overriding the module image from keras.preprocessing with your own actual image loaded with matplotlib.
So just change the line
image = plt.imread(os.path.join(dir_images, imgnm))
into something else like
arr_image = plt.imread(os.path.join(dir_images, imgnm))
and then this error will be gone.
But note that image.load_img takes a path as input, not an actual image of type ndarray, so you should instead call load_img with the file path inside the loop and remove the matplotlib loading entirely.
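Putting that together, a sketch of the corrected loop (keeping the names and the directory path from the question) would be:
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image
import numpy as np
import os
model = VGG16(weights='imagenet', include_top=False)
dir_images = "C:/Users/.../Desktop/db"
for imgnm in os.listdir(dir_images):
    # load_img wants the file path, not a numpy array
    img = image.load_img(os.path.join(dir_images, imgnm), target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    features = model.predict(x)  # shape (1, 7, 7, 512) for a 224x224 input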

What is the solution for this error: "ValueError: If evaluating from data tensors, you should specify the `steps` argument"?

I am testing my deep learning model. I wrote this code:
from keras.models import load_model
classifier = load_model('Trained_model.h5')
classifier.evaluate()
Prediction of single image
import numpy as np
from keras.preprocessing import image
img_name = input('Enter Image Name: ')
image_path = './predicting_data/test_set/{}'.format(img_name)
print('')
After running it, I get this error:
ValueError: If evaluating from data tensors, you should specify the `steps` argument.
NOTE: ./predicting_data/test_set is the path of my test dataset, which has subfolders A, B, C, ... Z containing images.
The working code to predict the class of an image by loading the saved model is shown below:
import os
import tensorflow as tf
from tensorflow.keras.preprocessing import image
Test_Dir = '/Dogs_Vs_Cats_Small/test/cats'
New_Model = tf.keras.models.load_model('Dogs_Vs_Cats.h5')
New_Model.summary()
Image_Path = os.path.join(Test_Dir, 'cat.1500.jpg')
Img = image.load_img(Image_Path, target_size = (150,150))
Img_Array = image.img_to_array(Img)
Img_Array = Img_Array/255.0
Img_Array = tf.reshape(Img_Array, (-1,150,150,3))
Predictions = New_Model.predict(Img_Array)
Label = tf.argmax(Predictions, axis = 1)
Label.numpy()[0]
The final line gives the respective class index for our image.
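As for the original ValueError: it is raised because classifier.evaluate() is called with no data at all, so Keras assumes you are evaluating from symbolic data tensors and asks for a steps argument. A minimal sketch of evaluating on the directory of test images instead (the target size, batch size, and rescaling are assumptions that must match how the model was trained, and the model is assumed to have been compiled with an accuracy metric):
from keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator
classifier = load_model('Trained_model.h5')
# build a generator over the class subfolders (A, B, ..., Z) of the test set
test_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    './predicting_data/test_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    shuffle=False)
loss, accuracy = classifier.evaluate_generator(test_gen, steps=len(test_gen))
print('Test accuracy: %.2f%%' % (accuracy * 100))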

Resizing an input image in a Keras Lambda layer

I would like my keras model to resize the input image using OpenCV or similar.
I have seen the use of ImageDataGenerator, but I would prefer to write my own generator and simply resize the image in the first layer with keras.layers.core.Lambda.
How would I do this?
If you are using the TensorFlow backend, then you can use the tf.image.resize_images() function to resize the images in a Lambda layer.
Here is a small example to demonstrate the same:
import numpy as np
import scipy.ndimage
import matplotlib.pyplot as plt
from keras.layers import Lambda, Input
from keras.models import Model
from keras.backend import tf as ktf
# 3 channel images of arbitrary shape
inp = Input(shape=(None, None, 3))
try:
    out = Lambda(lambda image: ktf.image.resize_images(image, (128, 128)))(inp)
except:
    # if you have an older version of tensorflow
    out = Lambda(lambda image: ktf.image.resize_images(image, 128, 128))(inp)
model = Model(inputs=inp, outputs=out)
model.summary()
X = scipy.ndimage.imread('test.jpg')
out = model.predict(X[np.newaxis, ...])
fig, Axes = plt.subplots(nrows=1, ncols=2)
Axes[0].imshow(X)
Axes[1].imshow(np.int8(out[0,...]))
plt.show()
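Note that this answer targets older Keras/TensorFlow versions: keras.backend no longer exposes tf, and tf.image.resize_images has since been renamed. On TF 2.x, roughly the same idea might look like the sketch below, using tf.image.resize inside a Lambda layer or the built-in Resizing layer (availability of the latter depends on your TF version):
import tensorflow as tf
from tensorflow.keras import layers, Model
# 3 channel images of arbitrary shape
inp = layers.Input(shape=(None, None, 3))
# option 1: a Lambda layer around tf.image.resize
out = layers.Lambda(lambda img: tf.image.resize(img, (128, 128)))(inp)
# option 2 (TF >= 2.6): the dedicated preprocessing layer
# out = layers.Resizing(128, 128)(inp)
model = Model(inputs=inp, outputs=out)
model.summary()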
