Can't oversample my image data using SMOTE - python

I'm new to machine learning, and I have been working on a project for early dementia detection using a CNN.
I am facing an issue oversampling my data. The data is MRI images imported from Kaggle, with train and test sets each having 4 sub-classes (nondemented, milddemented, ...). The train set has around 5120 images and the test set around 1200, at an original size of 176×258 which I have resized to 176×176.
import numpy as np
from imblearn.over_sampling import SMOTE

images = []
for x, y in train_data:
    images.append(x)

images = np.concatenate(images)
train_images = images.reshape(len(images), 176 * 176 * 3)
sm = SMOTE(sampling_strategy='minority', random_state=42)
train_images = sm.fit_resample(train_images)   # <-- this line raises the error
This is the code; I applied the same procedure to the test data as well, up to the reshaping step. The last line raises an error. I know fit_resample takes two arguments, the second one being the labels, but in my case where I just have images, what should I pass as the second argument? Should it be my test_data? I have no clue, please help me.
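For reference, a minimal sketch of how the labels can be collected and passed as the required second argument, assuming train_data yields (image_batch, label_batch) pairs (for example a Keras image_dataset_from_directory dataset) and that imbalanced-learn is installed:

import numpy as np
from imblearn.over_sampling import SMOTE

images, labels = [], []
for x, y in train_data:
    images.append(x)
    labels.append(y)
images = np.concatenate(images)
labels = np.concatenate(labels)
# if the dataset yields one-hot labels, convert them first: labels = np.argmax(labels, axis=1)

# SMOTE works on a 2-D feature matrix, so flatten each 176x176x3 image into one row
X = images.reshape(len(images), 176 * 176 * 3)

sm = SMOTE(sampling_strategy='minority', random_state=42)
X_res, y_res = sm.fit_resample(X, labels)   # the class labels are the second argument

# reshape back to image tensors for the CNN
train_images = X_res.reshape(-1, 176, 176, 3)
train_labels = y_res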

Related

How can I perform a stratified split on my dataset for an instance segmentation task?

I am working on training a deep learning model to detect objects in X-ray images using a segmentation approach, as part of my master's thesis. I have around 3000 images that differ in the number and shapes of the objects that need to be detected. For this reason, I want to split the data into training, validation, and testing sets so that the distribution of object counts and shapes is maintained. I have tried using splitfolders for that, but it splits the data based on class labels, which cannot be applied in my case. How can I tackle this problem?
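One common workaround (a hedged sketch, not from the original post) is to derive a coarse per-image descriptor, for example a bin over the number of annotated objects, and stratify a scikit-learn split on that; object_counts and the bin edges below are hypothetical:

import numpy as np
from sklearn.model_selection import train_test_split

# object_counts: number of annotated objects per image (hypothetical array of length N)
object_counts = np.asarray(object_counts)
bins = np.digitize(object_counts, bins=[2, 4, 8])   # coarse strata: <2, 2-3, 4-7, >=8 objects

idx = np.arange(len(object_counts))
train_idx, rest_idx = train_test_split(idx, test_size=0.3, stratify=bins, random_state=42)
val_idx, test_idx = train_test_split(rest_idx, test_size=0.5, stratify=bins[rest_idx], random_state=42)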

How do I have to process an image to test it in a CNN?

I have trained my CNN in TensorFlow using the MNIST dataset; when I tested it, it worked very well on the test data. To evaluate the model further, I made another set by randomly taking images from the train and test sets; those images were removed at the same time, so they were never given to the model during training. It worked very well on that set too, but with an image downloaded from Google it doesn't classify well, so my question is: should I apply any filter to that image before I give it to the prediction part?
I resized the image and converted it to grayscale beforehand.
MNIST is an easy dataset. Your model (CNN) structure may do quite well for MNIST, but there is no guarantee that it does well for more complex images too. You can add some more layers and try different activation functions (like ReLU, ELU, etc.). Normalizing your image pixel values to a small range, e.g. between -1 and 1, may help too.
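Beyond that, the downloaded image usually has to be preprocessed to match the training data exactly. A minimal sketch, assuming the model expects 28x28 MNIST-style inputs (light digit on a dark background, pixel values scaled the same way as during training):

import numpy as np
from PIL import Image, ImageOps

img = Image.open("digit.png").convert("L")     # grayscale, like MNIST
img = ImageOps.invert(img)                     # MNIST digits are light on a dark background
img = img.resize((28, 28), Image.LANCZOS)
x = np.asarray(img, dtype=np.float32) / 255.0  # use the same scaling as the training data
x = x.reshape(1, 28, 28, 1)                    # add batch and channel dimensions
prediction = model.predict(x)                  # 'model' is the trained CNN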

YOLO V3 Not Learning Well From Data

I tried training YOLO V3 on a signature dataset, but the trained model after 2000 iterations couldn't produce any detection.
The dataset consisted of around 5000 images with signatures on them. The document pages are black and white. The images are labeled accurately, since I generated the pages myself by placing signatures on them.
I used the default YOLOv3 architecture but trained from scratch. I tried using darknet53.conv.74 and fine-tuning it, but it didn't work, which I assume is because that network was trained on photographic data while my data are documents. Training from scratch, I trained on a GPU AWS machine for 2000 iterations. During training, the output looked like the following:
It went from:
2: 3249.269043, 3238.557373 avg, 0.000000 rate, 10.226817 seconds, 128 images Loaded: 0.000073 seconds
To:
2032: 0.667013, 0.644689 avg, 0.001000 rate, 22.906654 seconds, 130048 images Loaded: 0.000103 seconds
So the training loss has decreased significantly and has been hovering around 0.6 for at least a couple of hundred iterations.
The only part I'm not 100% sure about is how to start the training process; I used the command below to train it.
./darknet detector train params/darknet.data params/darknet-yolov3.cfg
darknet.data is as follows:
classes = 1
train = ./params/data_train.txt
valid = ./params/data_test.txt
names = ./params/classes.names
backup = ./params/weights/
And darknet-yolov3.cfg is the exact same as yolov3.cfg.
I tried testing the model on a couple of different images with signatures on them, and they are rather simple cases, but the trained model failed to detect any signature in any of these test images.
If anyone has any suggestions on what I should try or test, it would be greatly appreciated! Thanks!
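One thing worth double-checking regarding the cfg mentioned above (a hedged aside, not confirmed by the post): a stock yolov3.cfg is configured for 80 classes, and single-class training normally requires editing every [yolo] block and the [convolutional] layer directly above it, roughly like this:

# example single-class settings; filters = (classes + 5) * 3 = 18 when classes = 1
[convolutional]
filters=18

[yolo]
classes=1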

Following a TensorFlow tutorial - questions around IMG_SIZE and image size consistency

I am following a tutorial on TensorFlow image classification.
In the first CreateData section of the tutorial there is a parameter:
# The size of the images that your neural network will use
IMG_SIZE = 50
I am wondering what this is. Is this a 50x50 pixel image? In my case I would like to train my model and have it predict on differently sized photos; they won't all have the same resolution or size. How would I alter the code in this tutorial to cater for that?
Also, the bit of code used to run a prediction against a test piece of data has the same IMG_SIZE parameter, so the same question applies there: in my use case the test data will come in different sizes, so how can we cater for this?
Additionally, how many photos should I have for each of my trained classes at a minimum? Right now I have about 22 data points for each of my 3 classes, and the results are very poor, but that could equally be down to the IMG_SIZE issue above.
Any and all advice greatly appreciated.
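For what it's worth, the usual approach is to resize every photo, at both training and prediction time, to the same IMG_SIZE before it reaches the network. A minimal sketch, assuming OpenCV is available and a grayscale 50x50 input like the tutorial's; the prepare helper name is hypothetical:

import cv2
import numpy as np

IMG_SIZE = 50   # every image is resized to 50x50 pixels before entering the network

def prepare(filepath):
    img = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))        # differently sized photos all end up 50x50
    return img.reshape(-1, IMG_SIZE, IMG_SIZE, 1) / 255.0

prediction = model.predict(prepare("some_photo.jpg"))  # 'model' is the trained classifier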

Deep learning - splitting the image dataset into train and test

I have 3000 images for both training and testing in one folder, and I also have the image labels in a label.csv file with five class categories. Can anyone help me split this dataset into train and test data so that I can classify the images using a convolutional neural network? My dataset looks like the attached image after linking the CSV with the images.
First, you need an association between images and labels (some kind of knowledge of which label belongs to which image); otherwise it will not work properly. After that you can split your dataset. Here is a toy example, assuming full_dataset contains the whole dataset and SIZE_OF_DATASET is its size:
full_dataset = full_dataset.shuffle(SIZE_OF_DATASET, reshuffle_each_iteration=False)
train_size = int(0.8 * SIZE_OF_DATASET)
train_dataset = full_dataset.take(train_size)
test_dataset = full_dataset.skip(train_size)   # the remaining 20%
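To build the image-label association mentioned above from label.csv, something along these lines can work (a sketch; the 'filename' and 'label' column names and the images/ folder are assumptions):

import pandas as pd
import tensorflow as tf

df = pd.read_csv("label.csv")
paths = "images/" + df["filename"]                         # image folder is an assumption
labels = df["label"].astype("category").cat.codes.values   # five classes -> integers 0..4

def load(path, label):
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    img = tf.image.resize(img, (128, 128)) / 255.0
    return img, label

SIZE_OF_DATASET = len(df)
full_dataset = tf.data.Dataset.from_tensor_slices((paths.values, labels)).map(load)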
