How can I draw a cv2 rectangle (bounding box) around a detected object using the model I trained?
My project is about object detection using the Random Forest algorithm.
I used the VGG16 model to extract features and then classified the extracted features with a Random Forest:
from keras.applications.vgg16 import VGG16
from keras.models import Model
from sklearn.ensemble import RandomForestClassifier

# use VGG16 without its final classification layer as the feature extractor (fc2 output)
feature_extractor = VGG16()
feature_extractor = Model(inputs=feature_extractor.inputs, outputs=feature_extractor.layers[-2].output)

# extract features for the training images and fit a Random Forest on them
X_for_RF = feature_extractor.predict(x_train, verbose=1)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_for_RF, y_train)
Example: if I have apple images and I trained a model on them, I now need the model to draw a rectangle around the apple when I give it a new image.
I need to do detection (a bounding box/contour), not just classification.
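For illustration, here is a minimal sliding-window sketch of that idea, assuming the feature_extractor and model trained above and assuming the Random Forest was also shown negative/background examples; the window size, stride, probability threshold, and target_class argument are made-up values, not part of the original code:

import cv2
import numpy as np
from keras.applications.vgg16 import preprocess_input

def detect_and_draw(img_path, feature_extractor, model, target_class, win=224, stride=112, min_proba=0.9):
    """Slide a window over the image, classify each crop with the Random
    Forest, and draw a cv2 rectangle around windows predicted as target_class."""
    image = cv2.imread(img_path)
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            crop = cv2.cvtColor(image[y:y + win, x:x + win], cv2.COLOR_BGR2RGB)
            crop = cv2.resize(crop, (224, 224))                     # VGG16 input size
            batch = preprocess_input(np.expand_dims(crop.astype('float32'), axis=0))
            features = feature_extractor.predict(batch, verbose=0)  # same features as in training
            proba = model.predict_proba(features)[0]
            if model.classes_[proba.argmax()] == target_class and proba.max() >= min_proba:
                cv2.rectangle(image, (x, y), (x + win, y + win), (0, 255, 0), 2)
    return image

A region-proposal step (e.g. selective search) would normally replace the brute-force grid, but the drawing step stays the same cv2.rectangle call.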
I have several images and I want to know if there is any aircraft in the images or not.
I used the CLIP code shown below, but the output is [[1.0]] even though the image is a human face. I think this is because it uses softmax.
I tried to use logits_per_image, but the value, tensor([[20.03]]), is not meaningful to me.
Is there any way to tell, as a percentage or something similar, how related an image is to a word?
Can I use object detection in my problem to see if there are any aircraft in my image?
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = Image.open('image_4.jpg')
inputs = processor(text=['aircraft'], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
probs.tolist()
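With a single text prompt, the softmax is taken over one logit and is always 1.0 regardless of the image. A minimal sketch of one common workaround, assuming a second contrasting prompt such as 'a photo of something else' (the prompt wording is an assumption):

from PIL import Image
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open('image_4.jpg')
# compare the target prompt against at least one contrasting prompt,
# so the softmax spreads probability over several options
texts = ['a photo of an aircraft', 'a photo of something else']
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)   # shape (1, 2)
print(dict(zip(texts, probs[0].tolist())))

The raw logits_per_image values are scaled cosine similarities, which is why a single number such as 20.03 is hard to interpret on its own.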
I am asking a lot but I am very stuck on this one...
I have this piece of code that I used to extract features with SIFT, and I am trying to adapt it to extract features based on a VGG16 model.
No matter how hard I try, I can't get it to work and I always run into errors.
So if anyone can help me get the features in a form I can use for clustering afterwards, I'd appreciate it.
Here is the code with SIFT:
# identification of key points and associated descriptors
import time, cv2
import numpy as np

sift_keypoints = []
temps1 = time.time()
sift = cv2.xfeatures2d.SIFT_create(500)

for image_num in range(len(list_photos)):
    if image_num % 100 == 0:
        print(image_num)
    image = cv2.imread(path + list_photos[image_num], 0)          # read as grayscale
    image = cv2.GaussianBlur(image, (7, 7), cv2.BORDER_DEFAULT)   # apply Gaussian blur filter
    res = cv2.equalizeHist(image)                                 # equalize image histogram
    kp, des = sift.detectAndCompute(res, None)
    sift_keypoints.append(des)

sift_keypoints_by_img = np.asarray(sift_keypoints)
sift_keypoints_all = np.concatenate(sift_keypoints_by_img, axis=0)
And here is how I use it for my clustering:
from sklearn import cluster, metrics

# determine the number of clusters
k = int(round(np.sqrt(len(sift_keypoints_all)), 0))
print("Estimated number of clusters: ", k)
print("Creating", k, "descriptor clusters ...")

# clustering
kmeans = cluster.MiniBatchKMeans(n_clusters=k, init_size=3*k, random_state=0)
kmeans.fit(sift_keypoints_all)
What should I do to be able to extract features with a VGG model?
Thanks
There is an example regarding feature extraction with VGG16 in the official Keras documentation [1].
Note that the layers of a convolutional network are successive representations of your picture at varying dimensions. Depending on the layer you choose as the output, the clustering results may be very different.
[1] https://keras.io/api/applications/
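For concreteness, a minimal sketch along the lines of the Keras applications example, assuming the same list_photos/path variables as the SIFT loop above and using global average pooling so each image yields one fixed-length vector:

import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image as keras_image

# include_top=False drops the classifier; pooling='avg' gives one fixed-length
# 512-dimensional vector per image instead of a variable number of SIFT descriptors
feature_extractor = VGG16(weights='imagenet', include_top=False, pooling='avg')

vgg_features = []
for photo in list_photos:                           # same file list as the SIFT loop
    img = keras_image.load_img(path + photo, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(keras_image.img_to_array(img), axis=0))
    vgg_features.append(feature_extractor.predict(x, verbose=0)[0])

vgg_features = np.asarray(vgg_features)             # shape (n_images, 512)
# one vector per image, so these can go straight into MiniBatchKMeans or other clustering

Choosing an earlier block (for example block3_pool) instead of the pooled top features changes the representation, and with it the clusters you get.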
I am trying to implement myself a bag of words classifier to classify a dataset I have. To be certain that my implementation is correct, I used just two classes from the Caltech dataset (http://www.vision.caltech.edu/Image_Datasets/Caltech101/) to test my implementation: elephant and electric guitar. As they are totally different visually, I believe that a correct implementation of Bag Of Visual Words (BOVW) classification could classify these images accurately.
From my understanding (please correct me if I am wrong), the correct BOVW classification happens in three steps:
1. Detect SIFT 128-dimensional descriptors in the training images and cluster them with k-means.
2. Assign the SIFT descriptors of the training and testing images to the k-means clusters (trained in step 1) and build a histogram of cluster assignments for each image.
3. Use these histograms as feature vectors for SVM classification.
As I explained before, I tried to solve a very simple problem of classifying two very distinct classes. I read the training and testing file lists from text files, use the training images' SIFT descriptors to train a k-means classifier, use the training and testing images to build histograms of cluster assignments, and finally use those as feature vectors for classification.
The source code of my solution follows:
import cv2
import numpy as np
from sklearn import svm
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import accuracy_score

#this function will get SIFT descriptors from training images and
#train a k-means classifier
def read_and_clusterize(file_images, num_cluster):
    sift_keypoints = []
    with open(file_images) as f:
        images_names = f.readlines()
        images_names = [a.strip() for a in images_names]
        for line in images_names:
            print(line)
            #read image
            image = cv2.imread(line, 1)
            #convert it to grayscale
            image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            #SIFT extraction
            sift = cv2.xfeatures2d.SIFT_create()
            kp, descriptors = sift.detectAndCompute(image, None)
            #append the descriptors to a list of descriptors
            sift_keypoints.append(descriptors)

    sift_keypoints = np.asarray(sift_keypoints)
    sift_keypoints = np.concatenate(sift_keypoints, axis=0)
    #with the descriptors detected, cluster them
    print("Training kmeans")
    kmeans = MiniBatchKMeans(n_clusters=num_cluster, random_state=0).fit(sift_keypoints)
    #return the learned model
    return kmeans
#with the k-means model found, this code generates the feature vectors
#by building a histogram of classified keypoints in the kmeans classifier
def calculate_centroids_histogram(file_images, model):
    feature_vectors = []
    class_vectors = []
    with open(file_images) as f:
        images_names = f.readlines()
        images_names = [a.strip() for a in images_names]
        for line in images_names:
            print(line)
            #read image
            image = cv2.imread(line, 1)
            #convert it to grayscale
            image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            #SIFT extraction
            sift = cv2.xfeatures2d.SIFT_create()
            kp, descriptors = sift.detectAndCompute(image, None)
            #classification of all descriptors in the model
            predict_kmeans = model.predict(descriptors)
            #calculates the histogram
            hist, bin_edges = np.histogram(predict_kmeans)
            #the histogram is the feature vector
            feature_vectors.append(hist)
            #define the class of the image (elephant or electric guitar)
            class_sample = define_class(line)
            class_vectors.append(class_sample)

    feature_vectors = np.asarray(feature_vectors)
    class_vectors = np.asarray(class_vectors)
    #return the vectors and classes we want to classify
    return class_vectors, feature_vectors
def define_class(img_patchname):
    #print(img_patchname)
    print(img_patchname.split('/')[4])
    if img_patchname.split('/')[4] == "electric_guitar":
        class_image = 0
    if img_patchname.split('/')[4] == "elephant":
        class_image = 1
    return class_image
def main(train_images_list, test_images_list, num_clusters):
    #step 1: read and detect SIFT keypoints over the input images (train images) and cluster them via k-means
    print("Step 1: Calculating Kmeans classifier")
    model = read_and_clusterize(train_images_list, num_clusters)

    print("Step 2: Extracting histograms of training and testing images")
    print("Training")
    [train_class, train_featvec] = calculate_centroids_histogram(train_images_list, model)
    print("Testing")
    [test_class, test_featvec] = calculate_centroids_histogram(test_images_list, model)

    #use the training vectors to train the classifier
    print("Step 3: Training the SVM classifier")
    clf = svm.SVC()
    clf.fit(train_featvec, train_class)

    print("Step 4: Testing the SVM classifier")
    predict = clf.predict(test_featvec)

    score = accuracy_score(np.asarray(test_class), predict)

    file_object = open("results.txt", "a")
    file_object.write("%f\n" % score)
    file_object.close()

    print("Accuracy:" + str(score))
if __name__ == "__main__":
    main("train.txt", "test.txt", 1000)
    main("train.txt", "test.txt", 2000)
    main("train.txt", "test.txt", 3000)
    main("train.txt", "test.txt", 4000)
    main("train.txt", "test.txt", 5000)
As you can see, I tried varying the number of clusters in the k-means classifier quite a lot. However, no matter what I try, the accuracy is always 53.62%, which is terrible considering that the image classes are quite different.
So, is there any problem with my understanding or implementation of BOVW? What have I gotten wrong here?
The solution is simpler than I thought.
In this line:
hist, bin_edges=np.histogram(predict_kmeans)
the number of bins is numpy's default (I believe it is 10). By changing it to this:
hist, bin_edges=np.histogram(predict_kmeans, bins=num_clusters)
The accuracy increased from the 53.62% I reported to 78.26%, using 1000 clusters and therefore 1000-dimensional feature vectors.
It looks like you are creating clusters and histograms for each image. But in order to make it work you have to aggregate the SIFT features of all images, then cluster these, and use these common clusters to create the histograms. Also check out https://github.com/shackenberg/Minimal-Bag-of-Visual-Words-Image-Classifier
I am making a project on handwritten digit recognition using the MNIST database. I trained it on the 60,000 images in the data set, tested it on the 10,000 test images, and got about 99% accuracy.
Now I want to input an external image to see whether my handwritten digit is recognized by the CNN or not. So I scanned my own handwritten image, converted it to grayscale and a numpy array, and fed it into the CNN, but the predicted result is always 8 (as a one-hot encoded numpy vector).
import numpy as np
from keras.preprocessing import image

test_image = image.load_img('six.jpg', target_size=(28, 28))
test_image = image.img_to_array(test_image, data_format=None)
# keep a single channel so the shape matches the (28, 28, 1) MNIST input
test_image = np.delete(test_image, np.s_[::2], 2)
test_image = np.expand_dims(test_image, axis=0)

predicted_dig = digit_recogniser.predict(test_image, batch_size=32)
# take the class with the highest probability along the class axis
predicted_digit = np.argmax(predicted_dig, axis=1)
Can you please help me figure out what the problem with the code is, and how I can successfully predict digits from my own scanned/external images? My CNN is fully trained on the MNIST data set. This is the kind of single prediction I want to make, with reasonable accuracy, on random handwritten images of my choice.
Do you match the training data preprocessing during testing?
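That hint matters here: MNIST digits are white on a black background, centered, and typically scaled to [0, 1], whereas a scanned photo is usually dark ink on white paper. A minimal sketch of preprocessing that mirrors those conventions; the inversion and scaling are assumptions about how the training data were prepared:

import numpy as np
from keras.preprocessing import image

def load_like_mnist(path):
    """Load an external digit photo so it resembles an MNIST sample:
    grayscale, 28x28, white digit on a black background, values in [0, 1]."""
    # older Keras versions use grayscale=True instead of color_mode='grayscale'
    img = image.load_img(path, color_mode='grayscale', target_size=(28, 28))
    arr = image.img_to_array(img)        # shape (28, 28, 1)
    arr = 255.0 - arr                    # invert: scans are usually dark ink on white paper
    arr /= 255.0                         # apply the same scaling used on the training data
    return np.expand_dims(arr, axis=0)   # shape (1, 28, 28, 1)

# usage, assuming the trained model is called digit_recogniser:
# prediction = digit_recogniser.predict(load_like_mnist('six.jpg'))
# print(np.argmax(prediction, axis=1))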
So, I am trying to make a model that can predict doodles. I am using Google's Quick, Draw! data: https://console.cloud.google.com/storage/browser/quickdraw_dataset/full/numpy_bitmap, which consists of images rendered into 28x28 grayscale bitmap numpy arrays. I only chose 10 classes and took 60,000 images to train/evaluate. I get a test accuracy of 91%. When I make predictions with samples from the test data, it works. But when I make a drawing in Paint and convert it into 28x28, it doesn't make good predictions. What sort of data do I need to have? What kind of preprocessing does the image need?
This is how I preprocessed the data from Google's .npy files:
def load_set(name, path, resultx, resulty, label):
    loaded_set = np.load(path + name + ".npy")
    loaded_set = loaded_set.reshape(loaded_set.shape[0], 1, 28, 28)
    # print(name, loaded_set.shape)
    # keep only the first 6000 samples (only the first axis slice has any effect here)
    loaded_set = loaded_set[0:6000, 0:6000, 0:6000, 0:6000]
    resultx = np.append(resultx, loaded_set, axis=0)
    resulty = createLabelArray(label, loaded_set.shape[0], resulty)
    print("loaded " + name)
    return resultx, resulty

def createLabelArray(label, size, result):
    for i in range(0, size):
        result = np.append(result, [[label]], axis=0)
    return result
where label is the label I want for that category.
I shuffle them afterwards, and so on.
And this is how I am trying to process new images (drawings made by me):
print("[INFO] loading and preprocessing image...")
image = image_utils.load_img(os.path.join(path, name), grayscale=True,target_size=(28, 28))
image = image_utils.img_to_array(image)
print(image.shape)
image = np.expand_dims(image, axis=0)
print(image.shape)
image = image.astype('float32')
image /= 255
return image
Please help, I've been stuck on this for a while now. Thank you
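One detail worth checking here (an assumption on my part, not something stated above): the Quick, Draw! numpy bitmaps store white strokes on a black background, and the training arrays were reshaped to channels-first (N, 1, 28, 28), while the loading code above produces a channels-last, dark-on-light image. A rough sketch of matching a drawing to that training layout:

import numpy as np
from keras.preprocessing import image as image_utils

def drawing_to_sample(filepath):
    """Convert a Paint drawing to the training layout: white strokes on a
    black background, shape (1, 1, 28, 28)."""
    img = image_utils.load_img(filepath, grayscale=True, target_size=(28, 28))
    arr = image_utils.img_to_array(img)   # (28, 28, 1), dark strokes on a light background
    arr = 255.0 - arr                      # invert so strokes are bright, as in the .npy files
    arr /= 255.0                           # scale only if the training arrays were scaled the same way
    return arr.reshape(1, 1, 28, 28)       # channels-first, matching the training reshape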
This seems to be a typical case of overfitting.
Please try 10-fold cross-validation to get a more reliable estimate of the model's accuracy.
Additionally, use regularization and dropout in Keras to prevent overfitting.
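A minimal sketch of what dropout and L2 regularization might look like in a small Keras CNN for 28x28 grayscale input; the architecture, rates, and channels-last input shape are assumptions, not the poster's original model:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.regularizers import l2

# example only: layer sizes, dropout rates, and input shape are assumptions
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1), kernel_regularizer=l2(1e-4)),
    MaxPooling2D((2, 2)),
    Dropout(0.25),                     # drop 25% of activations during training
    Conv2D(64, (3, 3), activation='relu', kernel_regularizer=l2(1e-4)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),                      # heavier dropout before the classifier
    Dense(10, activation='softmax'),   # 10 doodle classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

For the 10-fold cross-validation suggestion, the usual approach with Keras is to split the data with sklearn.model_selection.KFold and retrain the model on each fold.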