Imbalanced Dataset Using Keras - python

I am building a classification ANN with Python and the Keras library. I am training the NN on an imbalanced dataset with 3 different classes. Class 1 is about 7.5 times as prevalent as Classes 2 and 3. As a remedy, I took the advice of this Stack Overflow answer and set my class weights as such:
class_weight = {0: 1,
                1: 6.5,
                2: 7.5}
However, here is the problem: The ANN is predicting the 3 classes at equal rates!
This is not useful because the dataset is imbalanced, and predicting the outcomes as each having a 33% chance is inaccurate.
Here is the question: How do I deal with an imbalanced dataset so that the ANN does not predict Class 1 every time, but also so that the ANN does not predict the classes with equal probability?
Here is my code I am working with:
class_weight = {0: 1,
                1: 6.5,
                2: 7.5}
# Making the ANN
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
classifier = Sequential()
# Adding the input layer and the first hidden layer with dropout
classifier.add(Dense(activation = 'relu',
                     input_dim = 5,
                     units = 3,
                     kernel_initializer = 'uniform'))
# Randomly drops 10% (rate = 0.1) of the neurons in the layer.
classifier.add(Dropout(rate = 0.1))
# Adding the second hidden layer
classifier.add(Dense(activation = 'relu',
                     units = 3,
                     kernel_initializer = 'uniform'))
# Randomly drops 10% (rate = 0.1) of the neurons in the layer.
classifier.add(Dropout(rate = 0.1))
# Adding the output layer
classifier.add(Dense(activation = 'sigmoid',
                     units = 2,
                     kernel_initializer = 'uniform'))
# Compiling the ANN
classifier.compile(optimizer = 'adam',
loss = 'binary_crossentropy',
metrics = ['accuracy'])
# Fitting the ANN to the training set
classifier.fit(X_train, y_train, batch_size = 100, epochs = 100, class_weight = class_weight)

The most evident problem that I see with your model is that it is not properly structured for classification.
If your samples can belong to only one class at a time, then you should not overlook this fact by having a sigmoid activation as your last layer.
Ideally, the last layer of a classifier should output the probability of a sample belonging to each class, i.e. (in your case) an array [a, b, c] where a + b + c == 1.
If you use a sigmoid output, then the output [1, 1, 1] is a possible one, although it is not what you are after. This is also the reason why your model is not generalizing properly: given that you're not specifically training it to prefer "unbalanced" outputs (like [1, 0, 0]), it will default to predicting the average values that it sees during training, accounting for the reweighting.
Try changing the activation of your last layer to 'softmax' and the loss to 'categorical_crossentropy':
# Adding the output layer (one unit per class, so 3 units for your 3 classes)
classifier.add(Dense(activation='softmax',
                     units=3,
                     kernel_initializer='uniform'))
# Compiling the ANN
classifier.compile(optimizer='adam',
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
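Note that 'categorical_crossentropy' expects one-hot encoded targets, so if your y_train currently holds integer labels (0, 1, 2) you would convert it first. A minimal sketch, assuming y_train is an integer-encoded array:
from keras.utils import to_categorical

# Assuming y_train holds integer class labels 0, 1, 2
y_train_onehot = to_categorical(y_train, num_classes=3)

# Then fit with the one-hot targets (same class_weight dict as before)
classifier.fit(X_train, y_train_onehot,
               batch_size=100, epochs=100, class_weight=class_weight)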
If this doesn't work, see my other comment and get back to me with that info, but I'm pretty confident that this is the main problem.
Cheers

Imbalanced datasets (where classes are uneven or unequally distributed) are a prevalent problem in classification: one class label has a very high number of observations while the others have very few. Significant causes of data imbalance include:
Faulty data collection
Domain peculiarity – some domains naturally produce imbalanced data.
Imbalanced datasets can create many problems in classification, hence the need to rebalance the data to build robust models and improve performance.
Here are several methods to bring balance to imbalanced datasets:
Undersampling – works by resampling the majority class points in a dataset to match or make them equal to the minority class points. It brings equilibrium between the majority and minority classes so that the classifier gives equal importance to both. However, it’s important to note that undersampling discards data and may cause some loss of information, hence potentially weaker results.
Oversampling – Also known as upsampling, oversampling resamples the minority class to equal the total number of majority class points. It replicates the observations from minority class points to balance datasets.
Synthetic Minority Oversampling Technique – As the name suggests, the SMOTE technique uses oversampling to create artificial data points for minority classes. It synthesizes new instances by interpolating between existing minority-class points.
Searching for an optimal threshold – This technique involves predicting probabilities for a class label and then searching for the optimal decision threshold that maps those probabilities to the correct class label.
Using the BalancedBaggingClassifier – The BalancedBaggingClassifier lets you resample each bootstrap subset of the dataset before training each estimator of the ensemble, so the ensemble is trained on balanced data (see the sketch after this list).
Use different algorithms – Some algorithms are more sensitive to class imbalance than others. Sometimes it’s wise to try different algorithms to stand a better chance of handling the imbalance and improving performance. For instance, you can employ regularization or penalized models to punish wrong predictions on the minority class.
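For reference, a minimal sketch of two of these approaches using the imbalanced-learn package (assuming X_train and y_train hold your features and integer labels; adjust to your own data):
from imblearn.over_sampling import SMOTE
from imblearn.ensemble import BalancedBaggingClassifier

# SMOTE: synthesize new minority-class points before training any model
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X_train, y_train)

# BalancedBaggingClassifier: resamples each bootstrap subset internally
clf = BalancedBaggingClassifier(random_state=42)
clf.fit(X_train, y_train)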
The effects of imbalanced datasets can be significant. Hopefully, one of the approaches above can help you get in the right direction.
To test which approach works best for you, I’d suggest using deepchecks, an open-source Python package for validating data and models quickly.

What can I do to help make my TensorFlow network overfit a large dataset?

The reason I am trying to overfit specifically is because I am following the steps for designing a network from "Deep Learning with Python" by François Chollet. This is important as this is for my final project in my degree.
At this stage, I need to make a network large enough to overfit my data in order to determine a maximal capacity, an upper bound for the size of the networks that I will optimise.
However, as the title suggests, I am struggling to make my network overfit. Perhaps my approach is naïve, but let me explain my model:
I am using this dataset to train a model to classify stars. Each star must be classified along two dimensions at once: its spectral class (100 classes) and its luminosity class (10 classes).
For example, our sun is a 'G2V': its spectral class is 'G2' and its luminosity class is 'V'.
To this end, I have built a double-headed network; it takes this input data:
DataFrame containing input data
It then splits into two parallel networks.
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Model

# Create our input layer:
input = keras.Input(shape=(3,), name='observation_data')
# Build our spectral class branch
s_class_branch = layers.Dense(100000, activation='relu', name='s_class_branch_dense_1')(input)
s_class_branch = layers.Dense(500, activation='relu', name='s_class_branch_dense_2')(s_class_branch)
# Spectral class prediction
s_class_prediction = layers.Dense(100,
                                  activation='softmax',
                                  name='s_class_prediction')(s_class_branch)
# Build our luminosity class branch
l_class_branch = layers.Dense(100000, activation='relu', name='l_class_branch_dense_1')(input)
l_class_branch = layers.Dense(500, activation='relu', name='l_class_branch_dense_2')(l_class_branch)
# Luminosity class prediction
l_class_prediction = layers.Dense(10,
                                  activation='softmax',
                                  name='l_class_prediction')(l_class_branch)
# Now we instantiate our model using the layer setup above
scaled_model = Model(input, [s_class_prediction, l_class_prediction])
optimizer = keras.optimizers.RMSprop(learning_rate=0.004)
scaled_model.compile(optimizer=optimizer,
                     loss={'s_class_prediction': 'categorical_crossentropy',
                           'l_class_prediction': 'categorical_crossentropy'},
                     metrics=['accuracy'])
logdir = os.path.join("logs", "2raw100k")
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
scaled_model.fit(
    input_data,
    {'s_class_prediction': spectral_targets,
     'l_class_prediction': luminosity_targets},
    epochs=20,
    batch_size=1000,
    validation_split=0.0,
    callbacks=[tensorboard_callback])
In the code above you can see me attempting a model with two hidden layers in both branches, one layer with a shape of 100 000, following into another layer with 500, before going to the output layer. The training targets are one-hot encoded, so there is one node for every class.
I have tried a wide range of sizes with one to four hidden layers, ranging from a shape of 500 to 100 000, only stopping because I ran out of RAM. I have only used dense layers, with the exception of trying a normalisation layer to no effect.
Graph of losses
They will all happily train and slowly lower the loss, but they never seem to overfit. I have run networks out to 100 epochs and they still will not overfit.
What can I do to make my network fit the data better? I am fairly new to machine learning, having only been doing this for a year now, so I am sure there is something that I am missing. I really appreciate any help and would be happy to provide the logs shown in the graph.
After a lot more training I think I have this answered. Basically, the network did not have adequate capacity and needed more layers. I had tried more layers earlier but because I was not comparing it to validation data the overfitting was not apparent!
The proof is in the pudding:
So thank you to @Aryagm for their comment, because that let me work it out. As you can see, the overfitting is clearly visible in the validation curves (grey and blue), while the training curves (green and orange) do not show it.
If anything, this goes to show why a separate validation set is so important and I am a fool for not having used it in the first place! Lesson learned.
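A minimal sketch of that change, using the model from the question, where a validation split is the only addition needed to make the overfitting visible:
# Hold out 20% of the training data as a validation set, so overfitting
# shows up as a growing gap between loss and val_loss in TensorBoard.
history = scaled_model.fit(
    input_data,
    {'s_class_prediction': spectral_targets,
     'l_class_prediction': luminosity_targets},
    epochs=100,
    batch_size=1000,
    validation_split=0.2,
    callbacks=[tensorboard_callback])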

Multi-label classification implementation

So far I have used Keras and TensorFlow to model image processing, NLP, and time series prediction. Usually, even when there were multiple possible categories, the task was always just to predict which single class a sample belongs to. So for example the list of possible classes was [car, human, airplane, flower, building], and the final prediction was which class the sample belongs to, giving probabilities for each class. Usually, for a very confident prediction, one class had a very high probability and the others very low.
Now I came across this Kaggle challenge: Toxic Comment Classification Challenge, and in particular this implementation. I thought that this is a multi-label classification problem, as one sample can belong to different classes. And indeed, when I check the final prediction:
I can see that the first sample prediction has a very high probability for both toxic and obscene. With my knowledge so far, when I applied a standard model to predict a class, I would have predicted the probability of the sample belonging to one of these classes, so either class 1 or 2 or ...; in case of a confident prediction there would be a high probability for class toxic and low for the others, or in case of an unconfident prediction 0.4x for toxic, 0.4x for obscene and small probabilities for the rest.
Now I was surprised by how the implementation was done, and I do not understand the following:
How is a multi-label classification done (as opposed to the "usual" model)?
When checking the code I see the following model:
inp = Input(shape=(maxlen,))
x = Embedding(max_features, embed_size, weights=[embedding_matrix])(inp)
x = Bidirectional(LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(x)
x = GlobalMaxPool1D()(x)
x = Dense(50, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(6, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
I understand that x = Dense(6, activation="sigmoid") results from having to predict 6 classes; that matches my knowledge so far. However, why does this then produce probabilities suitable for a multi-label classification? Where is the implementation difference between multi-label classification and just predicting one label out of different choices?
Is it simply the difference of using binary crossentropy instead of (sparse) categorical crossentropy along with 6 classes? So that tells the model we have a binary problem for each of the classes, and it handles these 6 classes separately, giving a probability for each class that the sample belongs to it, and therefore it can have a high probability of belonging to several classes?
The loss function to be used is indeed the binary_crossentropy with a sigmoid activation.
The categorical_crossentropy is not suitable for multi-label problems, because in case of the multi-label problems, the labels are not mutually exclusive. Repeat the last sentence: the labels are not mutually exclusive.
This means that a label vector like [1,0,1,0,0,0] is perfectly valid. categorical_crossentropy with softmax will always tend to favour one specific class, but that is not what we want here: just as you saw, a comment can be both toxic and obscene.
Now imagine photos with cats and dogs inside them. What happens if we have 2 dogs and 2 cats inside a photo? Is it a dog picture or a cat picture? It is actually a "both" picture! We definitely need a way to specify that multiple labels pertain to one photo/comment.
The rationale for using binary_crossentropy and sigmoid for multi-label classification lies in the mathematical properties: each output needs to be treated as an independent Bernoulli distribution.
Therefore, the only correct solution is BCE + 'sigmoid'.
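To make the difference concrete, here is a minimal sketch (with a hypothetical trained model and comment batch X_batch, which are assumptions, not part of the kernel code) of how the 6 sigmoid outputs are read as independent per-label probabilities and thresholded separately, so one sample can carry several labels at once:
labels = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']

# model and X_batch are assumed to exist (the trained model above, preprocessed comments)
probs = model.predict(X_batch)   # shape (n_samples, 6), each value in [0, 1]

# Each column is an independent Bernoulli probability, so threshold per label
predicted = probs >= 0.5         # boolean matrix; multiple True values per row are allowed

for row in predicted[:3]:
    print([label for label, flag in zip(labels, row) if flag])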
As you already found out, this is not a "classic" classification problem. For the classification problems you described in your text, softmax activation is commonly used to achieve the effect with high and low confidences which sum up to 1.
If you want to predict a binary problem, like for example "credit card fraud", you can choose between a softmax activation with 2 output neurons (fraud <-> non-fraud) and a regression-style model with 1 output neuron. In the latter, the single neuron outputs values in the range [0, 1] and a threshold is chosen, for example 0.5: all outputs < 0.5 belong to class 0 and all >= 0.5 to class 1.
If you want to build a model which is able to predict multiple classes for one input, you should use the regression approach (the latter one) with a sigmoid activation function. This enables outputs like the one in your image.
To be honest, I am not sure if the "binary-crossentropy" is the correct loss for a problem like this.

By which technique adapted to time-series can I replace cross-validation in my Keras MLP regression model in Python

I'm currently working with a time series dataset of 46 rows of meteorological measurements, taken approximately every 3 hours over one week. My explanatory variables (X) consist of 26 variables, and some variables have different units of measurement (degrees, millimeters, g/m3, etc.). My variable to explain (y) is a single variable, temperature.
My goal is to predict temperature (y) over a 12h-24h window using the full set of variables (X).
For that I used Keras/TensorFlow in Python, with an MLP regressor model:
X = df_forcast_cap.loc[:, ~df_forcast_cap.columns.str.startswith('l')]
X = X.drop(['temperature_Y'],axis=1)
y = df_forcast_cap['temperature_Y']
y = pd.DataFrame(data=y)
# normalize the dataset X
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit_transform(X)
normalized = scaler.transform(X)
# normalize the dataset y
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit_transform(y)
normalized = scaler.transform(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# define base model
def norm_model():
    # create model
    model = Sequential()
    model.add(Dense(26, input_dim=26, kernel_initializer='normal', activation='relu'))  # 26 is the number of neurons in this layer
    #model.add(Dense(6, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# evaluate model with standardized dataset
estimator = KerasRegressor(build_fn=norm_model, epochs=(100), batch_size=5, verbose=1)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(estimator, X, y, cv=kfold)
print(results)
[-0.00454741 -0.00323181 -0.00345096 -0.00847261 -0.00390925 -0.00334816
-0.00239754 -0.00681044 -0.02098541 -0.00140129]
# invert predictions
X_train = scaler.inverse_transform(X_train)
y_train = scaler.inverse_transform(y_train)
X_test = scaler.inverse_transform(X_test)
y_test = scaler.inverse_transform(y_test)
results = scaler.inverse_transform(results)
print("Results: %.2f (%.2f) MSE" % (results.mean(), results.std()))
Results: -0.01 (0.01) MSE
(1) I read that cross-validation is not adapted for time series prediction. So, I'm wondering which other techniques exist and which one is better adapted to time series.
(2) Secondly, I decided to normalize my data because my X dataset is composed of different metrics (degrees, millimeters, g/m3, etc.) and my variable to explain y is in degrees. In this way, I know that I have to deal with a more complicated interpretation of the MSE because its result won't be in the same unit as my y variable. But for the next step of my study I need to save the y values predicted by the MLP model, and I need these values to be in degrees. So I tried to invert the normalization, but without success: when I print my results, the predicted values are still in normalized format (see my code above). Does anyone see my mistake(s)?
The model that you present above is looking at a single instance of 26 measurements to make a prediction. From your description it seems that you would like to make predictions from a sequence of these measurements. I'm not sure if I fully understood the description but I'll assume that you have a sequence of 46 measurements, each with 26 values that you believe should be good predictors of the temperature. If that is the case, the input shape of your model should be (46, 26,). The 46 here is called time_steps, 26 is the number of features.
For a time series you need to select a model design. There are 2 approaches: a recurrent network or a convolutional network (or a mixture of the two). A convolutional network is typically used to detect patterns in the input data which may be located somewhere in the data. For instance, suppose you want to detect a given shape in an image: convolutional networks are a good starting point. Recurrent networks update their internal state after each time step. They can detect patterns as well as a convolutional network, but you can think of them as being less position independent.
Simple example of a convolutional approach.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import *
from tensorflow.keras.models import Sequential, Model
average_tmp = 0.0
model = Sequential([
    InputLayer(input_shape=(46, 26,)),
    Conv1D(16, 4),
    Conv1D(32, 4),
    Conv1D(64, 2),
    Conv1D(128, 4),
    MaxPooling1D(),
    Flatten(),
    Dense(256, activation='relu'),
    Dense(1, bias_initializer=keras.initializers.Constant(average_tmp)),
])
model.compile('adam', 'mse')
model.summary()
A mixed approach would replace the `Flatten` layer above with an LSTM layer. That would probably be a reasonable starting point for experimenting.
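A minimal sketch of that mixed variant, under the same assumptions as the convolutional example above (same imports and the placeholder average_tmp):
# Same stack of Conv1D layers, but the sequence output is summarised
# by an LSTM instead of being flattened.
mixed_model = Sequential([
    InputLayer(input_shape=(46, 26,)),
    Conv1D(16, 4),
    Conv1D(32, 4),
    Conv1D(64, 2),
    Conv1D(128, 4),
    MaxPooling1D(),
    LSTM(64),   # consumes the remaining time dimension
    Dense(256, activation='relu'),
    Dense(1, bias_initializer=keras.initializers.Constant(average_tmp)),
])
mixed_model.compile('adam', 'mse')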
(1) I read that cross-validation is not adapted for time series prediction. So, I'm wondering which other techniques exist and which one is better adapted to time series.
Cross-validation is a technique that is very well suited to this problem. If you try the example model above, I can almost guarantee that it will overfit your dataset very significantly. Cross-validation can help you determine the right regularisation parameters for your model in order to avoid overfitting.
Examples of regularisation techniques that you probably want to consider (a minimal sketch follows this list):
Saving the model weights at the epoch with the lowest validation loss.
Dropout and/or BatchNormalization.
Kernel regularisation.
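A minimal sketch of those three, adapted to the convolutional model above (the file path and regularisation strengths are illustrative assumptions):
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras import regularizers

# 1. Keep the weights from the epoch with the lowest validation loss
checkpoint = ModelCheckpoint('best_weights.h5', monitor='val_loss',
                             save_best_only=True, save_weights_only=True)

# 2./3. Dropout, BatchNormalization and kernel regularisation inside the model
regularised_model = Sequential([
    InputLayer(input_shape=(46, 26,)),
    Conv1D(32, 4, kernel_regularizer=regularizers.l2(1e-4)),
    BatchNormalization(),
    MaxPooling1D(),
    Flatten(),
    Dense(64, activation='relu', kernel_regularizer=regularizers.l2(1e-4)),
    Dropout(0.3),
    Dense(1),
])
regularised_model.compile('adam', 'mse')
# regularised_model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[checkpoint])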
(2) Secondly, I decided to normalize my data because my X dataset is composed of different metrics (degrees, millimeters, g/m3, etc.) and my variable to explain y is in degrees.
Good call. It will avoid training cycles where your model is trying to discover the bias at very high values from the random initialisation.
In this way, I know that I have to deal with a more complicated interpretation of the MSE because its result won't be in the same unit as my y variable.
This is orthogonal. The inputs are not assumed to be in the same unit as y. In a DNN we assume that we can create a combination of linear transformations of the weights (plus non-linear activations). There is no implicit assumption of units.
But for the next step of my study I need to save the y values predicted by the MLP model, and I need these values to be in degrees. So I tried to invert the normalization, but without success: when I print my results, the predicted values are still in normalized format (see my code above). Does anyone see my mistake(s)?
scaler.inverse_transform(results) should do the trick.
It doesn't make sense to inverse transform the inputs X_ and Y_. And it would probably help you keep your code straight to not use the same variable name for both the X and Y scalers.
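A minimal sketch of that workflow, with one scaler per variable set (the names x_scaler, y_scaler and model are illustrative, not from your code):
from sklearn.preprocessing import MinMaxScaler

x_scaler = MinMaxScaler(feature_range=(0, 1))
y_scaler = MinMaxScaler(feature_range=(0, 1))

X_scaled = x_scaler.fit_transform(X)
y_scaled = y_scaler.fit_transform(y)   # y must be 2-D, e.g. a (n, 1) DataFrame

# ... train the model on X_scaled / y_scaled ...

y_pred_scaled = model.predict(X_scaled)                      # predictions in normalized space
y_pred_degrees = y_scaler.inverse_transform(y_pred_scaled)   # back to degrees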
It is also possible to refrain from scaling Y. If you choose to do so, I'd suggest that you initialise the output layer bias with the mean of the Ys.

Tensorflow neural network has very high error with simple regression

I'm trying to build a NN to do regression with Keras in Tensorflow.
I'm trying to predict the chart ranking of a song based on a set of features. I've identified a strong correlation between having a low feature 1, a high feature 2 and a high feature 3, and having a high position on the chart (a low output ranking, e.g. position 1).
However after training my model, the MAE is coming out at about 3500 (very very high) on both the training and testing set. Throwing some values in, it seems to give the lowest output rankings for observations with low values in all 3 features.
I think this could be something to do with the way I'm normalising my data. After bringing it into a pandas dataframe with a column for each feature, I use the following code to normalise:
def normalise_dataset(df):
return df-(df.mean(axis=0))/df.std()
I'm using a sequential model with one Dense input layer with 64 neurons and one dense output layer with one neuron. Here is the definition code for that:
model = keras.Sequential([
    keras.layers.Dense(64, activation=tf.nn.relu, input_dim=3),
    keras.layers.Dense(1)
])
optimizer = tf.train.RMSPropOptimizer(0.001)
model.compile(loss='mse', optimizer=optimizer, metrics=['mae'])
I'm a software engineer, not a data scientist, so I don't know if this model set-up is the correct configuration for my problem; I'm very open to advice on how to make it better fit my use case.
Thanks
EDIT: Here's the first few entries of my training data; there are ~100,000 entries. The final column (finalPos) contains the labels, the field I'm trying to predict.
chartposition,tagcount,artistScore,finalPos
256,191,119179,4625
256,191,5902650,292
256,191,212156,606
205,1480523,5442
256,195,5675757,179
256,195,933171,7745
The first obvious thing is that you are normalizing your data in the wrong way. The correct way is
return (df - df.mean(axis=0))/df.std()
I just changed the bracket, but basically it is (data - mean) divided by standard deviation, whereas you are dividing the mean by the standard deviation.
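Put back into your helper, the corrected function would look like this (a sketch; the function name follows your snippet and the sample values are taken from the rows you posted):
import pandas as pd

def normalise_dataset(df):
    # Standardize every column: subtract the column mean, then divide by the column std
    return (df - df.mean(axis=0)) / df.std()

# Example: each feature ends up with mean ~0 and comparable scale, so the
# network no longer has to cope with wildly different ranges per feature.
df = pd.DataFrame({'chartposition': [256, 256, 205],
                   'tagcount': [191, 195, 148],
                   'artistScore': [119179, 5902650, 212156]})
print(normalise_dataset(df).describe())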

Appropriate Deep Learning Structure for multi-class classification

I have the following data
feat_1 feat_2 ... feat_n label
gene_1 100.33 10.2 ... 90.23 great
gene_2 13.32 87.9 ... 77.18 soso
....
gene_m 213.32 63.2 ... 12.23 quitegood
The size of M is large ~30K rows, and N is much smaller ~10 columns.
My question is: what is an appropriate deep learning structure to learn and test on data like the above?
At the end of the day, the user will give a vector of genes with expression.
gene_1 989.00
gene_2 77.10
...
gene_N 100.10
And the system will determine which label applies to each gene, e.g. great or soso, etc...
By structure I mean one of these:
Convolutional Neural Network (CNN)
Autoencoder
Deep Belief Network (DBN)
Restricted Boltzmann Machine
To expand a little on @sung-kim's comment:
CNNs are used primarily for problems in computer vision, such as classifying images. They are modelled on the animal visual cortex; they basically have a connection structure with tiles of features that have some overlap. Typically they require a lot of data, more than 30k examples.
Autoencoders are used for feature generation and dimensionality reduction. They start with many neurons per layer, reduce that number towards the middle, and then increase it again. The network is trained to reconstruct each input from itself, so the middle layers (with few neurons) provide a meaningful low-dimensional projection of the feature space.
While I don't know much about DBNs, they appear to be a supervised extension of the autoencoder, with lots of parameters to train.
Again, I don't know much about Boltzmann machines, but they aren't widely used for this sort of problem (to my knowledge).
As with all modelling problems though, I would suggest starting from the most basic model to look for signal. Perhaps a good place to start is Logistic Regression before you worry about deep learning.
If you have got to the point where you want to try deep learning, for whatever reason, then for this type of data a basic feed-forward network is the best place to start. In terms of deep learning, 30k data points is not a large number, so it's always best to start out with a small network (1-3 hidden layers, 5-10 neurons) and then grow it. Make sure you have a decent validation set when performing parameter optimisation, though. If you're a fan of the scikit-learn API, I suggest that Keras is a good place to start.
One further comment, you will want to use a OneHotEncoder on your class labels before you do any training.
EDIT
I see from the bounty and the comments that you want to see a bit more about how these networks work. Please see the example below of how to build a feed-forward model and do some simple parameter optimisation.
import numpy as np
from sklearn import preprocessing
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout

# Create some random data
np.random.seed(42)
X = np.random.random((10, 50))

# Similar labels
labels = ['good', 'bad', 'soso', 'amazeballs', 'good']
labels += labels
labels = np.array(labels)
np.random.shuffle(labels)

# Change the labels to the required format
numericalLabels = preprocessing.LabelEncoder().fit_transform(labels)
numericalLabels = numericalLabels.reshape(-1, 1)
y = preprocessing.OneHotEncoder(sparse=False).fit_transform(numericalLabels)  # use sparse_output=False on scikit-learn >= 1.2

# Simple Keras model builder
def buildModel(nFeatures, nClasses, nLayers=3, nNeurons=10, dropout=0.2):
    model = Sequential()
    model.add(Dense(nNeurons, input_dim=nFeatures))
    model.add(Activation('sigmoid'))
    model.add(Dropout(dropout))
    for i in range(nLayers - 1):
        model.add(Dense(nNeurons))
        model.add(Activation('sigmoid'))
        model.add(Dropout(dropout))
    model.add(Dense(nClasses))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='sgd')
    return model

# Do an exhaustive search over a given parameter space
for nLayers in range(2, 4):
    for nNeurons in range(5, 8):
        model = buildModel(X.shape[1], y.shape[1], nLayers, nNeurons)
        modelHist = model.fit(X, y, batch_size=32, epochs=10,
                              validation_split=0.3, shuffle=True, verbose=0)
        minLoss = min(modelHist.history['val_loss'])
        epochNum = modelHist.history['val_loss'].index(minLoss)
        print('{0} layers, {1} neurons best validation at'.format(nLayers, nNeurons),
              'epoch {0} loss = {1:.2f}'.format(epochNum, minLoss))
Which outputs
2 layers, 5 neurons best validation at epoch 0 loss = 1.18
2 layers, 6 neurons best validation at epoch 0 loss = 1.21
2 layers, 7 neurons best validation at epoch 8 loss = 1.49
3 layers, 5 neurons best validation at epoch 9 loss = 1.83
3 layers, 6 neurons best validation at epoch 9 loss = 1.91
3 layers, 7 neurons best validation at epoch 9 loss = 1.65
A deep learning structure would be recommended if you were dealing with raw data and wanted to automatically find features that work towards your classification goal. But based on the names of your columns and their number (only 10), it seems that your features are already engineered.
For this reason you could just go with a standard multi-layer neural network and use supervised learning (back propagation). Such a network would have a number of inputs matching the number of your columns (10), followed by a number of hidden layers, and then an output layer with the number of neurons matching the number of your labels. You could experiment with different numbers of hidden layers and neurons, different neuron types (sigmoid, tanh, rectified linear, etc.) and so on.
Alternatively you could use the raw data (if it's available) and then go with DBNs (they're known to be robust and achieve good results across different problems) or auto-encoders.
If you expect the output to be thought of as scores for each label (as I understood from your question), try a supervised multi-class logistic regression classifier (the highest score takes the label); a minimal sketch follows at the end of this answer.
If you're bound to use deep learning:
A simple feed-forward ANN should do, with supervised learning through backpropagation: an input layer with N neurons, plus one or two hidden layers, not more than that. There is no need to go 'deep' and add more layers for this data; with more layers you risk overfitting the data easily, it can be tricky to figure out what the problem is, and the test accuracy will suffer greatly.
Simply plotting or visualizing the data, e.g. with t-SNE, can be a good start if you need to figure out which features are important (or any correlation that may exist).
You can then play with higher powers of those feature dimensions, or add increased weight to their scores.
For problems like this, deep learning probably isn't very well suited, but a simpler ANN architecture like this should work well, depending on the data.
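For reference, a minimal sketch of the logistic-regression baseline suggested above, using scikit-learn (the DataFrame name df and its column layout are assumptions based on the table in the question):
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assumed: df has columns feat_1 ... feat_n plus a 'label' column, one row per gene
X = df.drop(columns=['label'])
y = df['label']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = LogisticRegression(max_iter=1000)  # multi-class labels are handled automatically
clf.fit(X_train, y_train)
print('Test accuracy:', clf.score(X_test, y_test))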
