How to create a CIFAR-10 subset? - python

I would like to train a deep neural network using fewer training samples to reduce the time needed to test my code. I wanted to know how to subset the CIFAR-10 dataset using Keras/TensorFlow. I have the following code, which trains on the complete CIFAR-10 dataset.
#load and prepare data
import numpy as np
import tensorflow

# WhichDataSet is set elsewhere in my script to either 'CIFAR10' or 'CIFAR100'
if WhichDataSet == 'CIFAR10':
    (x_train, y_train), (x_test, y_test) = tensorflow.keras.datasets.cifar10.load_data()
else:
    (x_train, y_train), (x_test, y_test) = tensorflow.keras.datasets.cifar100.load_data()
num_classes = np.unique(y_train).shape[0]
K_train = x_train.shape[0]
input_shape = x_train.shape[1:]
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
y_train = tensorflow.keras.utils.to_categorical(y_train, num_classes)
y_test = tensorflow.keras.utils.to_categorical(y_test, num_classes)

Create a subset based on labels
To create a subset that keeps only some of the class labels, filter on the raw integer labels (i.e. before to_categorical is applied). For example, to build a new train set with only the first five classes you can use the code below:
subset_x_train = x_train[np.isin(y_train, [0,1,2,3,4]).flatten()]
subset_y_train = y_train[np.isin(y_train, [0,1,2,3,4]).flatten()]
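A minimal end-to-end sketch of that flow (my assumption: the variable names match the question's snippet, and the first five classes are just an example choice): filter the raw labels first, then one-hot encode the filtered subset.
import numpy as np
import tensorflow
(x_train, y_train), (x_test, y_test) = tensorflow.keras.datasets.cifar10.load_data()
keep = np.isin(y_train, [0, 1, 2, 3, 4]).flatten()
# Keep only images whose label is in 0..4, then scale and one-hot encode
subset_x_train = x_train[keep].astype('float32')
subset_y_train = tensorflow.keras.utils.to_categorical(y_train[keep].flatten(), 5)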
Create a subset irrespective of labels
To create a 10% subset of the training data you can use the code below:
# Shuffle the indices first (optional)
idx = np.arange(len(x_train))
np.random.shuffle(idx)
# take the first 10% of the shuffled indices
subset_idx = idx[:int(0.10 * len(idx))]
subset_x_train = x_train[subset_idx]
subset_y_train = y_train[subset_idx]
Repeat the same for x_test and y_test to get a subset of test data.
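Alternatively (my addition, not part of the original answer), scikit-learn's train_test_split can draw a stratified 10% sample so that every class keeps roughly the same proportion. This assumes y_train still holds the raw integer labels, i.e. it is applied before to_categorical:
from sklearn.model_selection import train_test_split
# train_size=0.10 keeps 10% of the data; stratify preserves the class balance
subset_x_train, _, subset_y_train, _ = train_test_split(
    x_train, y_train, train_size=0.10, stratify=y_train, random_state=0)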

Use the pandas module to create a data frame and sample it accordingly.
import pandas as pd
import numpy as np
from tensorflow.keras import datasets
(train_images1, train_labels), (test_images1, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images1 / 255.0, test_images1 / 255.0
# creating the validation set from the training set
df = pd.DataFrame(list(zip(train_images, train_labels)), columns=['Image', 'label'])
val = df.sample(frac=0.2)
X_train = np.array([i for i in list(val['Image'])])
y_train = np.array([[i[0]] for i in list(val['label'])])
The line val = df.sample(frac=0.2) samples 20% of the total rows.
You can use val = df.sample(n=5000) if you want a specific number of records, by setting n accordingly.
You can pass random_state=0 if you want the same sample every time you run the code, e.g.:
val = df.sample(n=5000,random_state = 0)
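If you also need the complementary 80% as the actual training set (my addition, not in the original answer), drop the sampled rows from the frame:
val = df.sample(frac=0.2, random_state=0)   # 20% validation sample
rest = df.drop(val.index)                   # remaining 80% for training
X_val = np.array(list(val['Image']))
y_val = np.array([[i[0]] for i in list(val['label'])])
X_train = np.array(list(rest['Image']))
y_train = np.array([[i[0]] for i in list(rest['label'])])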

Related

Too much RAM is required for loading dataset

I’m working on a neural network and my dataset has 42000 images, all of which I have to load. I’m using Google Colab for that, but every time I load the dataset the RAM is insufficient.
I am putting everything into a NumPy array, because I tried to use the ImageDataGenerator approach and it didn’t work. I’m using the following code to load the data:
import glob
import numpy as np
import tensorflow as tf

# note: 'class' is a reserved word in Python, so the list of paths needs another name
image_paths = glob.glob(r"/content/drive/MyDrive/DATASET/class/*.*")
data = []
labels = []
for i in image_paths:
    image = tf.keras.preprocessing.image.load_img(i, color_mode='rgb',
                                                  target_size=(336, 336))
    image = np.array(image)
    data.append(image)
    labels.append(0)
data = np.array(data)
labels = np.array(labels)
As ImageDataGenerator is deprecated, you can use a custom Keras Sequence class to load images only when they are needed.
The strategy here is to create a pandas DataFrame with the path and class of every image, then turn the class into a numeric label with pd.factorize. Once you have X (paths) and y (labels), you can use train_test_split to extract three subsets: train, test and validation. The last step is to wrap these collections in datasets compatible with TensorFlow.
Each time TensorFlow processes a batch, the Sequence loads that batch of images into memory, and so on.
Step 0: Imports and constants
import tensorflow as tf
import pandas as pd
import numpy as np
import pathlib
from sklearn.model_selection import train_test_split
INPUT_SHAPE = (336, 336, 3)
BATCH_SIZE = 32
DATA_DIR = pathlib.Path('/content/drive/MyDrive/DATASET/')
Step 1: Load all image paths to a Pandas DataFrame:
# Find images of dataset
data = []
for file in DATA_DIR.glob('**/*.jpg'):
    d = {'class': file.parent.name,
         'path': file}
    data.append(d)
# Create dataframe and select columns
df = pd.DataFrame(data)
df['label'] = pd.factorize(df['class'])[0]
X = df['path']
y = df['label']
# Split into 3 datasets: train, test and validation
X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.2, random_state=2023)
X_train, X_valid, y_train, y_valid = \
    train_test_split(X_train, y_train, test_size=0.2, random_state=2023)
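If the classes are imbalanced and you want all three subsets to keep the same class proportions, train_test_split also accepts a stratify argument (my addition, not part of the original answer):
X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.2, stratify=y, random_state=2023)
X_train, X_valid, y_train, y_valid = \
    train_test_split(X_train, y_train, test_size=0.2, stratify=y_train, random_state=2023)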
Step 2: Create a custom data Sequence
class ImgDataSequence(tf.keras.utils.Sequence):
    """
    Check documentation here: https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence
    """
    def __init__(self, image_set, label_set, batch_size=32, image_size=(256, 256)):
        self.image_set = np.array(image_set)
        self.label_set = np.array(label_set)
        self.batch_size = batch_size
        self.image_size = image_size

    def __get_image(self, image):
        # Load one image from disk and convert it to a float array
        image = tf.keras.preprocessing.image.load_img(image, color_mode='rgb', target_size=self.image_size)
        image_arr = tf.keras.preprocessing.image.img_to_array(image)
        return image_arr

    def __get_data(self, images, labels):
        image_batch = np.asarray([self.__get_image(img) for img in images])
        label_batch = np.asarray(labels)
        return image_batch, label_batch

    def __getitem__(self, index):
        # Only the images of batch `index` are loaded into memory here
        images = self.image_set[index * self.batch_size:(index + 1) * self.batch_size]
        labels = self.label_set[index * self.batch_size:(index + 1) * self.batch_size]
        images, labels = self.__get_data(images, labels)
        return images, labels

    def __len__(self):
        # Number of batches per epoch (the last, partial batch counts too)
        return len(self.image_set) // self.batch_size + (len(self.image_set) % self.batch_size > 0)
Step 3: Create datasets
train_ds = ImgDataSequence(X_train, y_train, image_size=INPUT_SHAPE[:2], batch_size=BATCH_SIZE)
valid_ds = ImgDataSequence(X_valid, y_valid, image_size=INPUT_SHAPE[:2], batch_size=BATCH_SIZE)
test_ds = ImgDataSequence(X_test, y_test, image_size=INPUT_SHAPE[:2], batch_size=BATCH_SIZE)
Test the new datasets:
# Take the first batch of our train dataset
>>> imgs, labels = train_ds[0]
# Check the length (BATCH_SIZE)
>>> len(labels)
32
# Check the dimension of one image
>>> imgs[0].shape
(336, 336, 3)
How to use it with TensorFlow?
# train_ds & valid_ds to fit
history = model.fit(train_ds, epochs=10, validation_data=valid_ds)
# test_ds to evaluate
loss, *metrics = model.evaluate(test_ds)
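As a side note (my addition, not part of the original answer), the same lazy-loading idea can be expressed with the tf.data API, which also gives you parallel decoding and prefetching:
def load_image(path, label):
    # Decode one image on the fly; files stay on disk until this runs
    img = tf.io.read_file(path)
    img = tf.io.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, INPUT_SHAPE[:2])
    return img, label

train_paths = [str(p) for p in X_train]
train_tfds = (tf.data.Dataset.from_tensor_slices((train_paths, y_train.values))
              .map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
              .batch(BATCH_SIZE)
              .prefetch(tf.data.AUTOTUNE))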

How to standardize categorical data that is skewed to 1 value for predictions using keras or sklearn

I am currently using Keras to build a sequential model for my data, but I think my data is too skewed because one of my categorical features contains 7 values and one of those accounts for 85% of the data. I am thinking I need to standardize the columns by adjusting the weights. Will the preprocessing functions from sklearn be able to help with this?
Below is the current code I have so far:
# load the dataset as a pandas DataFrame
data = read_csv(filename)
data = pd.DataFrame(data,columns= ['A','B','C'])
datas = data
# retrieve numpy array
dataset = data.values
# split into input (X) and output (y) variables
X = dataset[:, :-1]
y = dataset[:,-1]
# format all fields as string
X = X.astype(str)
# reshape target to be a 2d array
y = y.reshape((len(y), 1))
# load the dataset
def load_dataset(filename):
    # load the dataset as a pandas DataFrame
    data = read_csv(filename, header=None)
    # retrieve numpy array
    dataset = data.values
    # split into input (X) and output (y) variables
    X = dataset[:, :-1]
    y = dataset[:, -1]
    # format all fields as string
    X = X.astype(str)
    # reshape target to be a 2d array
    y = y.reshape((len(y), 1))
    return X, y
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
stdscaler = preprocessing.StandardScaler().fit(X_train)
X_scaled = stdscaler.transform(X)
X_train_scaled = stdscaler.transform(X_train)
X_test_scaled = stdscaler.transform(X_test)
print('Train', X_train.shape, y_train.shape)
print('Test', X_test.shape, y_test.shape)
# prepare input data
def prepare_inputs(X_train, X_test):
    ohe = OneHotEncoder(handle_unknown='ignore')
    ohe.fit(X_train)
    X_train_enc = ohe.transform(X_train)
    X_test_enc = ohe.transform(X_test)
    return X_train_enc, X_test_enc
# prepare target
def prepare_targets(y_train, y_test):
    le = LabelEncoder()
    le.fit(y_train)
    y_train_enc = le.transform(y_train)
    y_test_enc = le.transform(y_test)
    return y_train_enc, y_test_enc
# prepare input data
X_train_enc, X_test_enc = prepare_inputs(X_train_scaled, X_test_scaled)
# prepare output data
y_train_enc, y_test_enc = prepare_targets(y_train, y_test)
# define the model
model = Sequential()
model.add(Dense(10, input_dim=X_train_enc.shape[1], activation='relu', kernel_initializer='he_normal'))
model.add(Dense(1, activation='sigmoid'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(X_train_enc, y_train_enc, epochs=100, batch_size=16, verbose=2)
# evaluate the keras model
_, accuracy = model.evaluate(X_test_enc, y_test_enc, verbose=0)
print('Accuracy: %.2f' % (accuracy*100))
I highly suggest going through the relevant example in the Keras documentation. You are facing a class imbalance problem.
You need to use the class_weight argument when fitting your model.
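A minimal sketch of how that can look, assuming your encoded labels are in y_train_enc as in the code above (sklearn's compute_class_weight derives the weights from the class frequencies):
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Weights inversely proportional to class frequency
classes = np.unique(y_train_enc)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train_enc)
class_weight = {int(c): w for c, w in zip(classes, weights)}

# Rare classes now contribute more to the loss
model.fit(X_train_enc, y_train_enc, epochs=100, batch_size=16,
          class_weight=class_weight, verbose=2)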

How to pass different sets of data to train and test without splitting a dataframe (python)?

I have gone through multiple questions that explain how to divide a dataframe into train and test sets, with scikit-learn and without.
But my question is: I have 2 different CSVs (2 different dataframes from different years). I want to use one as train and the other as test.
How do I do that for LinearRegression or any other model?
Load the datasets individually.
Make sure they are in the same format of rows and columns (features).
Use the train set to fit the model.
Use the test set to predict the output after training.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Load the data
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
# Split features and target
# when trying to predict column "target"
X_train, y_train = train.drop(columns="target"), train["target"]
X_test, y_test = test.drop(columns="target"), test["target"]
# Fit (train) model
reg = LinearRegression()
reg.fit(X_train, y_train)
# Predict
pred = reg.predict(X_test)
# Score (R^2 for LinearRegression)
score = reg.score(X_test, y_test)
please skillsmuggler, what about X_train and X_test, how can I define them? Because when I try to do that it says NameError: name 'X_train' is not defined
I couldn't edit the first answer, which is almost there. There is some code missing though...
# Load the data
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
# if y is the first (and only target) column
y_train = train.iloc[:, 0]
X_train = train.iloc[:, 1:]
y_test = test.iloc[:, 0]
X_test = test.iloc[:, 1:]
# Fit (train) model
reg = LinearRegression()
reg.fit(X_train, y_train)
# Predict
pred = reg.predict(X_test)
# Score
score = reg.score(X_test, y_test)
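Since LinearRegression.score returns the R² coefficient rather than a classification accuracy, you may also want explicit regression metrics (my addition, assuming the variables above):
from sklearn.metrics import mean_squared_error, r2_score

pred = reg.predict(X_test)
print('MSE:', mean_squared_error(y_test, pred))
print('R^2:', r2_score(y_test, pred))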

Run trained Machine Learning model on a different dataset

I am new to Machine Learning and am in the process of trying to run a simple classification model that I trained and saved using pickle, on another dataset of the same format. I have the following python code.
Code
#Training set
features = pd.read_csv('../Data/Train_sop_Computed.csv')
#Testing set
testFeatures = pd.read_csv('../Data/Test_sop_Computed.csv')
print(colored('\nThe shape of our features is:','green'), features.shape)
print(colored('\nThe shape of our Test features is:','green'), testFeatures.shape)
features = pd.get_dummies(features)
testFeatures = pd.get_dummies(testFeatures)
features.iloc[:,5:].head(5)
testFeatures.iloc[:,5].head(5)
labels = np.array(features['Truth'])
testlabels = np.array(testFeatures['Truth'])
features= features.drop('Truth', axis = 1)
testFeatures = testFeatures.drop('Truth', axis = 1)
feature_list = list(features.columns)
testFeature_list = list(testFeatures.columns)
def add_missing_dummy_columns(d, columns):
    missing_cols = set(columns) - set(d.columns)
    for c in missing_cols:
        d[c] = 0

def fix_columns(d, columns):
    add_missing_dummy_columns(d, columns)
    # make sure we have all the columns we need
    assert (set(columns) - set(d.columns) == set())
    extra_cols = set(d.columns) - set(columns)
    if extra_cols: print("extra columns:", extra_cols)
    d = d[columns]
    return d
testFeatures = fix_columns(testFeatures, features.columns)
features = np.array(features)
testFeatures = np.array(testFeatures)
train_samples = 100
X_train, X_test, y_train, y_test = model_selection.train_test_split(features, labels, test_size = 0.25, random_state = 42)
testX_train, textX_test, testy_train, testy_test = model_selection.train_test_split(testFeatures, testlabels, test_size= 0.25, random_state = 42)
print(colored('\n TRAINING SET','yellow'))
print(colored('\nTraining Features Shape:','magenta'), X_train.shape)
print(colored('Training Labels Shape:','magenta'), X_test.shape)
print(colored('Testing Features Shape:','magenta'), y_train.shape)
print(colored('Testing Labels Shape:','magenta'), y_test.shape)
print(colored('\n TESTING SETS','yellow'))
print(colored('\nTraining Features Shape:','magenta'), testX_train.shape)
print(colored('Training Labels Shape:','magenta'), textX_test.shape)
print(colored('Testing Features Shape:','magenta'), testy_train.shape)
print(colored('Testing Labels Shape:','magenta'), testy_test.shape)
from sklearn.metrics import precision_recall_fscore_support
import pickle
loaded_model_RFC = pickle.load(open('../other/SOPmodel_RFC', 'rb'))
result_RFC = loaded_model_RFC.score(textX_test, testy_test)
print(colored('Random Forest Classifier: ','magenta'),result_RFC)
loaded_model_SVC = pickle.load(open('../other/SOPmodel_SVC', 'rb'))
result_SVC = loaded_model_SVC.score(textX_test, testy_test)
print(colored('Support Vector Classifier: ','magenta'),result_SVC)
loaded_model_GPC = pickle.load(open('../other/SOPmodel_Gaussian', 'rb'))
result_GPC = loaded_model_GPC.score(textX_test, testy_test)
print(colored('Gaussian Process Classifier: ','magenta'),result_GPC)
loaded_model_SGD = pickle.load(open('../other/SOPmodel_SGD', 'rb'))
result_SGD = loaded_model_SGD.score(textX_test, testy_test)
print(colored('Stocastic Gradient Descent: ','magenta'),result_SGD)
I am able to get the results for the test set.
But the problem I am facing is that I need to run the model on the entire Test_sop_Computed.csv dataset. But it is only being run on the test dataset that I've split.
I would sincerely appreciate if anyone could provide any suggestions on how I can run the loaded model on the entire dataset. I know that I'm going wrong with the following line of code.
testX_train, textX_test, testy_train, testy_test = model_selection.train_test_split(testFeatures, testlabels, test_size= 0.25, random_state = 42)
Both the train and test datasets have the Subject, Predicate, Object, Computed and Truth features, with Truth being the class to predict. The test dataset has the actual values for this Truth column; I drop it using testFeatures = testFeatures.drop('Truth', axis = 1) and intend to use the various loaded classifiers to predict this Truth as 0 or 1 for the entire dataset and then get the predictions as an array.
I have done this so far. But I think that I am splitting my test dataset as well. Is there a way to pass the entire test dataset even if it is in another file?
This test dataset is in the same format as the training set. I have checked the shape of the two and I get the following.
Confirming the Features and Shape
Shape of the Train features is: (1860, 5)
Shape of the Test features is: (1386, 5)
TRAINING SET
Training Features Shape: (1395, 1045)
Training Labels Shape: (465, 1045)
Testing Features Shape: (1395,)
Testing Labels Shape: (465,)
TEST SETS
Training Features Shape: (1039, 1045)
Training Labels Shape: (347, 1045)
Testing Features Shape: (1039,)
Testing Labels Shape: (347,)
Any suggestions in this regard will be highly appreciated.
Your question is a bit unclear, but as I understand it, you want to run your model on testX_train as well as on textX_test (which are just testFeatures split into two subsets).
So either you can score your model on testX_train the same way you did for textX_test, e.g.:
result_RFC_train = loaded_model_RFC.score(testX_train, testy_train)
or you can just remove the following line:
testX_train, textX_test, testy_train, testy_test = model_selection.train_test_split(testFeatures, testlabels, test_size= 0.25, random_state = 42)
so that you don't split your data at all and score on the full dataset:
result_RFC_full = loaded_model_RFC.score(testFeatures, testlabels)
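And since you mention wanting the predictions themselves as an array, predict (rather than score) gives you that; a short sketch assuming the variables above:
# 0/1 predictions for every row of the full test file
predictions_RFC = loaded_model_RFC.predict(testFeatures)
print(predictions_RFC.shape, predictions_RFC[:10])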

Randomize the splitting of data for training and testing for this function

I wrote a function to split numpy ndarrays x_data and y_data into training and test data based on a percentage of the total size.
Here is the function:
def split_data_into_training_testing(x_data, y_data, percentage_split):
    number_of_samples = x_data.shape[0]
    p = int(number_of_samples * percentage_split)
    x_train = x_data[0:p]
    y_train = y_data[0:p]
    x_test = x_data[p:]
    y_test = y_data[p:]
    return x_train, y_train, x_test, y_test
In this function, the top part of the data goes to the training set and the bottom part goes to the testing set, based on percentage_split. How can this split be made more random before the data is fed to the machine learning model?
Assuming there's a reason you're implementing this yourself instead of using sklearn's train_test_split, you can shuffle an array of indices (this leaves the original data untouched) and index with that.
def split_data_into_training_testing(x_data, y_data, split, shuffle=True):
    idx = np.arange(len(x_data))
    if shuffle:
        np.random.shuffle(idx)
    p = int(len(x_data) * split)
    x_train = x_data[idx[:p]]
    x_test = x_data[idx[p:]]
    y_train = y_data[idx[:p]]  # similarly for the labels
    y_test = y_data[idx[p:]]
    return x_train, x_test, y_train, y_test
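For example, a hypothetical call (assuming x_data and y_data are NumPy arrays of equal length):
x_train, x_test, y_train, y_test = split_data_into_training_testing(x_data, y_data, 0.8)
print(x_train.shape, x_test.shape)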
You can build index arrays with p randomly selected positions and index the arrays that way. I would create them by shuffling an array of the available indices:
ind = np.arange(number_of_samples)
np.random.shuffle(ind)
ind_train = np.sort(ind[:p])
ind_test = np.sort(ind[p:])
x_train = x_data[ind_train]
y_train = y_data[ind_train]
x_test = x_data[ind_test]
y_test = y_data[ind_test]
Sorting the indices is only necessary if your original data is monotonically increasing or decreasing in x and you'd like to keep it that way. Otherwise, ind_train = ind[:p] is just fine.
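If you want the shuffle to be reproducible between runs (my addition, not part of the original answers), you can use a seeded NumPy generator:
rng = np.random.default_rng(seed=0)  # fixed seed -> same permutation every run
ind = rng.permutation(number_of_samples)
ind_train = np.sort(ind[:p])
ind_test = np.sort(ind[p:])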
