Deep learning with electroencephalography (EEG) data - Python

I am building a convolutional network model to classify EEG data. The data come from an experiment in which participants are presented with images from 3 different classes, each with 2 subclasses. To give a brief idea of the dataset size: each subclass has roughly 300 epochs per participant (this applies to all subclasses).
Object
Color
Number
Now my question is:
I have 5 participants in my training dataset. I took 15% of each participant's data and put it in the testing dataset. Can I consider that 15% unseen data, even though the same participants' data was also used to train the model?
Any input is welcome!

It depends on what you want to test. A test set is used to estimate generalization, i.e. performance on unseen data. So the question is:
Do you want to estimate generalization to unseen data from the same participants (whose data was used to train the classifier)?
Or do you want to estimate generalization to unseen participants (the general population)?
This really depends on your goal or the claim you are trying to make. I can think of situations for both approaches:
Think of BCIs, which need to be retrained for every user. Here, you would test on data from the same individual.
On the other hand, if you make a very general claim (e.g. "I can decode some relevant signal from a certain brain region across the population"), then a test set consisting of participants who were not included in the training set lends much stronger support to your claim. (The question is whether this works in practice, though.)
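For illustration, here is a minimal sketch of the two split strategies with scikit-learn. The array names and shapes are placeholders, not your actual data:

# Within-participant vs. participant-wise splits (placeholder data).
import numpy as np
from sklearn.model_selection import train_test_split, GroupShuffleSplit

rng = np.random.RandomState(0)
X = rng.randn(1500, 64, 256)             # epochs x channels x time samples
y = rng.randint(0, 6, size=1500)         # 3 classes x 2 subclasses = 6 labels
subjects = rng.randint(0, 5, size=1500)  # 5 participants

# (a) Within-participant split: ~15% of every participant's epochs go to test.
# Estimates generalization to new epochs from the SAME participants.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.15, stratify=subjects, random_state=0)

# (b) Participant-wise split: whole participants are held out.
# Estimates generalization to UNSEEN participants.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=subjects))
X_tr2, X_te2, y_tr2, y_te2 = X[train_idx], X[test_idx], y[train_idx], y[test_idx]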

Related

Is it necessary to mitigate the class imbalance problem in multiclass text classification?

I am performing multi-class text classification using BERT in Python. The dataset that I am using for retraining my model is highly imbalanced. Now, I am well aware that class imbalance can lead to a poor model and that one should balance the training set by undersampling, oversampling, etc. before training.
However, it is also a fact that the distribution of the training set should be similar to the distribution of the production data.
Now, if I am sure that the data thrown at me in the production environment will also be imbalanced, i.e. the samples to be classified will be more likely to belong to some classes than to others, should I balance my training set?
OR
Should I keep the training set as it is, since I know that its distribution is similar to the distribution of the data I will encounter in production?
Please give me some ideas, or point me to some blogs or papers, for understanding this problem.
Class imbalance is not a problem by itself; the problem is that having too few minority-class samples makes it harder to describe that class's statistical distribution, which is especially true for high-dimensional data (and BERT embeddings have 768 dimensions, IIRC).
Additionally, the logistic function tends to underestimate the probability of rare events (see e.g. https://gking.harvard.edu/files/gking/files/0s.pdf for the mechanics), which can be offset by selecting a classification threshold as well as by resampling.
There are quite a few discussions on CrossValidated regarding this (like https://stats.stackexchange.com/questions/357466). TL;DR:
while having too few samples of a class may degrade prediction quality, resampling is not guaranteed to give an overall improvement; at the very least, there is no universal recipe for a perfect resampling proportion, so you will have to test it for yourself;
however, real-life tasks often weigh classification errors unequally: resampling may help improve a particular class's metrics at the cost of overall accuracy. The same applies to classification-threshold selection, however.
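To make the threshold-selection option concrete, here is a hedged toy sketch on a synthetic binary problem; the data, the model, and the threshold grid are illustrative only, not a recipe for your BERT setup:

# Tune the decision threshold on a validation set instead of resampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_val)[:, 1]

# Sweep thresholds and keep the one with the best minority-class F1.
thresholds = np.linspace(0.05, 0.95, 19)
best = max(thresholds, key=lambda t: f1_score(y_val, (proba >= t).astype(int)))
print("F1 at default 0.5 :", f1_score(y_val, (proba >= 0.5).astype(int)))
print(f"F1 at tuned {best:.2f}:", f1_score(y_val, (proba >= best).astype(int)))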
This depends on the goal of your classification:
Do you want a high probability that a random sample is classified correctly? -> Do not balance your training set.
Do you want a high probability that a random sample from a rare class is classified correctly? -> Balance your training set, or apply weighting during training that increases the weights of the rare classes (see the sketch below).
For example, in web applications seen by clients it is important that most samples are classified correctly, disregarding rare classes, whereas in anomaly detection/classification it is very important that rare classes are classified correctly.
Keep in mind that a model trained on a highly imbalanced dataset tends to always predict the majority class, so increasing the number or weights of rare-class samples can be a good idea, even without perfectly balancing the training set.
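As a rough illustration of the class-weighting option, here is a minimal sketch with scikit-learn; the label counts are made up, and passing the weights on to your actual framework is only hinted at in the comments:

# Compute 'balanced' class weights instead of resampling the data.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_train = np.array([0] * 900 + [1] * 80 + [2] * 20)   # imbalanced toy labels
classes = np.unique(y_train)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
print(dict(zip(classes, weights)))   # rare classes receive larger weights

# These weights can then be handed to most frameworks, e.g. as the `weight`
# argument of torch.nn.CrossEntropyLoss or via class_weight=... in Keras fit().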
P(label | sample) is not the same as P(label).
P(label | sample) is your training goal.
In the case of gradient-based learning with mini-batches on models with a large parameter space, rare labels leave only a small footprint on training, so your model ends up fitting P(label).
To avoid fitting P(label), you can balance the batches.
Over all the batches of an epoch, the data then looks like an up-sampled minority class. The goal is a loss function whose gradients move the parameters toward the actual classification goal.
UPDATE
I don't have any proof to show here, and it is perhaps not a precise statement. With enough training data (relative to the complexity of the features) and enough training steps, you may not need balancing. But most language tasks are quite complex and there is not enough data for training; that is the situation I had in mind in the statements above.
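As an illustration of batch balancing, here is a minimal PyTorch sketch using WeightedRandomSampler; the feature tensor and label counts are placeholders:

# Draw class-balanced mini-batches by oversampling the rare class.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

X = torch.randn(1000, 768)                       # e.g. pooled BERT features
y = torch.cat([torch.zeros(950), torch.ones(50)]).long()

class_counts = torch.bincount(y)
sample_weights = 1.0 / class_counts[y].float()   # rare-class samples drawn more often
sampler = WeightedRandomSampler(sample_weights, num_samples=len(y), replacement=True)

loader = DataLoader(TensorDataset(X, y), batch_size=32, sampler=sampler)
# Over an epoch, the batches now look like an up-sampled minority class,
# so the gradient signal is no longer dominated by P(label).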

Different results from infer_vector() of Doc2Vec after saving to disk and loading

I am using the Doc2Vec model from the gensim (4.1.2) Python library.
I trained the model on my corpus of documents and used infer_vector(). Then I saved the model and tried infer_vector() on the same text, but I get a totally different vector. What is wrong?
Here is an example of the code:
doc2vec_model.infer_vector(["system", "response"])
array([-1.02667394e-03, -2.73817539e-04, -2.08510624e-04, 1.01583987e-03,
-4.99124289e-04, 4.82861622e-04, -9.00296785e-04, 9.18195175e-04,
....
doc2vec_model.save('model/doc2vec')
If I load the saved model:
fname = "model/model_doc2vec"
model = Doc2Vec.load(fname)
model.infer_vector(["system", "response"])
array([-1.07945153e-03, 2.80674692e-04, 4.65555902e-04, 6.55420765e-04,
7.65898672e-04, -9.16261168e-04, 9.15124183e-05, -5.18970715e-04,
....
First, there's a natural amount of variance from one run of infer_vector() to another; that's inherent to how the algorithm works. The vector will be at least a little different every time you run it, even without the save/load in between. For more details, see:
Q12: I've used Doc2Vec infer_vector() on a single text, but the resulting vector is different each time. Is there a bug or have I made a mistake? (doc2vec inference non-determinism)
Second, a 2-word text is a minimal corner case on which Doc2Vec is less likely to work very well. It works better on texts that are at least dozens of words long. In particular, both training and inference are processes that work in proportion to the number of words in a text. So a 100-word text that goes through inference to find a new vector will get 50x more 'adjustment nudges' than a mere 2-word text, and will thus tend to be somewhat more stable, run-to-run, than a tiny text. (As mentioned in the FAQ item linked above, increasing the epochs may help a bit, making a small text behave a little more like a longer text, but I would still expect any small text to be more at the mercy of the vagaries of the random initialization, and of the random sampling during incremental adjustment, than a longer text.)
Finally, other problems in the model, like insufficient training data, overfitting (especially when the model is too large for the amount of training data), or other suboptimal parameters or errors during training, can make a model that is especially inconsistent from inference to inference.
The vectors from repeated inferences will never be identical, but they should be fairly close when the parameters are good and training is sufficient. (In fact, one indirect way to test whether a model is doing anything useful is to check, at the end of training, how often a re-inferred vector for a training text is the top neighbor, or one of the few top neighbors, of the same text's vector from bulk training.)
One possible error is too few epochs: the default of 5, inherited from Word2Vec, is often too few, with 10 or 20 often being better. (Or, if you're struggling with minimal amounts of data, even more epochs can help eke out some results, though really this algorithm needs lots of training data. Published results typically use at least tens of thousands, if not millions, of separate training docs, each at least dozens, but ideally hundreds or in some cases thousands, of words long. With less data, and possibly too many vector_size dimensions for a tiny training set, models will be 'looser' or more arbitrary when modeling new data.)
Another very common error is to follow some of the bad tutorials online that call .train() many times in your own training loop, (mis-)managing the training alpha manually. This is almost never a good idea. See this other answer for more details on this common error:
My Doc2Vec code, after many loops/epochs of training, isn't giving good results. What might be wrong?
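For reference, here is a hedged sketch of the re-inference sanity check mentioned above, assuming a trained gensim 4.x Doc2Vec object named model and the original train_corpus of TaggedDocument objects (both names are placeholders):

# Check how often a re-inferred training text ranks as its own nearest neighbor.
import collections

ranks = []
for doc in train_corpus:
    inferred = model.infer_vector(doc.words)
    sims = model.dv.most_similar([inferred], topn=len(model.dv))
    rank = [tag for tag, _ in sims].index(doc.tags[0])
    ranks.append(rank)

# A healthy model usually puts the bulk-trained vector of each document at
# (or very near) rank 0 for its own re-inferred vector.
print(collections.Counter(ranks).most_common(5))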

Text classification - is it overfitting? How can I prove it?

I have a multi-class classification problem and my data consists of sequences of letters. It is labelled data (I used a label encoder to convert the string labels to numeric ones). There can be partial strings for the same class; many strings match, but some are just slightly different.
I am preparing my data with k-mers and a CountVectorizer (fitted on the train data, then used to transform both the train and test data). Depending on the combination of k-mer size and n-gram sizes, the dimension (feature size) varies between 8000+ and 35000+. I do not think there is any leakage of test information into the training of the model.
I fit different algorithms on the train data and evaluate on the test data to review generalisation. The test scores (accuracy, F1-score, precision and recall) come out pretty high (more than 99%). Even though this is on the test set, do you think the model could be overfitting due to high dimensionality (the curse of dimensionality)? I understand that if the training score is high and the model generalises poorly then it is overfitting, but here the test scores are very high too. It is not about the models, since different algorithms give similar results; it is certainly about the data.
If I apply PCA to get 10 components, which cover 99% of the variance, the test score is high too. If I use SelectKBest to select just the 10 best features, the scores come down.
I am really looking for your thoughts on how I can show that this is not overfitting. Should I always go for a reduced feature size (through selection or PCA) with such a high dimension? Thanks.
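For illustration, a minimal sketch of the kind of k-mer + CountVectorizer preparation I mean (the toy sequences, k and ngram_range are placeholders, not my actual settings):

# Turn letter sequences into k-mer 'documents' and vectorize them.
from sklearn.feature_extraction.text import CountVectorizer

def to_kmers(seq, k=3):
    """Break a letter sequence into overlapping k-mers joined by spaces."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

train_seqs = ["ABCDEFG", "ABCDEGG"]   # toy letter sequences
test_seqs = ["ABCDEFF"]

vectorizer = CountVectorizer(ngram_range=(1, 2))
X_train = vectorizer.fit_transform(to_kmers(s) for s in train_seqs)  # fit on train only
X_test = vectorizer.transform(to_kmers(s) for s in test_seqs)        # transform test
print(X_train.shape, X_test.shape)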
If your test score is high, then below are the possibilities:
Overlap between test and train data: this can happen if you have duplicate records and, while splitting, one falls into train and the other into test.
Data leak: the class label information is somehow encoded in the features. This can be easily verified: the training score will be almost 100% even with basic models. Check this resource to understand what a data leak is.
You really have succeeded in building a good model.
I suggest checking the first two possibilities (a quick sketch follows below) and then trying out k-fold cross-validation.
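As a rough illustration of those checks, here is a hedged sketch; train_texts, test_texts, train_labels and pipeline (an sklearn Pipeline that includes the vectorizer) are placeholder names:

# 1. Exact overlap between train and test sequences.
from sklearn.model_selection import cross_val_score

overlap = set(train_texts) & set(test_texts)
print(f"{len(overlap)} test sequences also appear verbatim in the training set")

# 2. K-fold cross-validation on the training data only, to see whether the
#    ~99% score is stable across folds rather than an artefact of one split.
scores = cross_val_score(pipeline, train_texts, train_labels, cv=5, scoring="f1_macro")
print(scores.mean(), scores.std())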

Training a CNN on small subsets to select an architecture

Do you know if it is possible to use a very small subset of my training data (only 100 or 500 instances, for example) to quickly train a very rough CNN in order to compare different architectures, and then select the best-performing one?
When I say "possible", I mean: is there evidence that this kind of selection strategy works, and that the selected network will consistently outperform the others for this specific task?
Thank you,
For information, the project in question would consist of a two-stage CNN to classify multichannel time series. The first CNN would forecast the input data over the next period of time, and the second CNN would use this forecast and classify the results into two categories.
The procedure you are talking about is actually used in practice. When tuning hyperparameters, a lot of people select a subset of the whole dataset to do this.
Is the best architecture on the subset necessarily the best on the full dataset? NO! However, it's the best guess you have and that's why it's useful.
A couple of things to note on your question:
100-500 instances is extremely low! The CNN still needs to be trained. When we say subset we usually mean tens of thousands of images (out of the millions of the dataset). If your dataset is under 50000 images then why do you need a subset? Train on the whole dataset.
Contrary to what a lot of people believe, the details of the architecture are of little importance to the classification performance. Some of the hyperparameters you mention (e.g. kernel size) are of secondary importance. The key things you should focus on are depth, the size of the layers, and the use of pooling/skip connections/batch norm/dropout, etc.
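To make the subset-selection workflow concrete, here is a hedged sketch; evaluate_on_subset, train_and_validate and the build_model_* factories are placeholders for whatever training loop and candidate architectures you actually use:

# Compare candidate architectures on a random subset before a full run.
import numpy as np

def evaluate_on_subset(build_fn, X, y, n_subset=20000, seed=0):
    """Train a freshly built model on a random subset and return a validation score."""
    rng = np.random.RandomState(seed)
    idx = rng.choice(len(X), size=min(n_subset, len(X)), replace=False)
    model = build_fn()
    return train_and_validate(model, X[idx], y[idx])  # placeholder training routine

# scores = {name: evaluate_on_subset(fn, X_train, y_train)
#           for name, fn in [("arch_a", build_model_a), ("arch_b", build_model_b)]}
# Pick the best-scoring architecture, then retrain it on the full dataset.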

Training, testing, and validation sets for bidirectional LSTM (BLSTM)

When it comes to normal ANNs, or any of the standard machine learning techniques, I understand what the training, testing, and validation sets should be (both conceptually and in terms of the rule-of-thumb ratios). However, for a bidirectional LSTM (BLSTM) net, how to split the data is confusing me.
I am trying to improve prediction on individual subject data that consists of monitored health values. In the simplest case, for each subject, there is one long time series of values (>20k values), and contiguous parts of that time series are labeled from a set of categories, depending on the current health of the subject. For a BLSTM, the net is trained on all of the data going forwards and backwards simultaneously. The problem then is, how does one split a time series for one subject?
I can't just take the last 2,000 values (for example), because they might all fall into a single category.
And I can't chop the time series up randomly, because then both the learning and testing phases would be made of disjointed chunks.
Finally, each of the subjects (as far as I can tell) has slightly different (but similar) characteristics. So maybe, since I have thousands of subjects, I should train on some, test on some, and validate on others? However, since there are inter-subject differences, how would I set up the tests if I were only considering one subject to start with?
I think this has more to do with your particular dataset than with Bi-LSTMs in general.
You're confusing splitting a dataset for training/testing with splitting the sequence within a particular sample. It seems you have many different subjects, each of which constitutes a separate sample. For a standard training/testing split, you would split your dataset between subjects, as you suggested in the last paragraph.
For any sort of RNN application, you do NOT split along your temporal sequence; you input your entire sequence as a single sample to your Bi-LSTM. So the question really becomes whether such a model is well-suited to your problem, which has multiple labels at specific points in the sequence. You can use a sequence-to-sequence variant of the LSTM model to predict which label each time point in the sequence belongs to, but again you would NOT be splitting the sequence into multiple parts.
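For illustration, here is a minimal Keras sketch of such a sequence-to-sequence Bi-LSTM that predicts a label for every time step; the shapes, layer sizes and number of classes are placeholders, not recommendations:

# Per-timestep classification with a bidirectional LSTM.
from tensorflow.keras import layers, models

n_timesteps, n_features, n_classes = 20000, 4, 5

inputs = layers.Input(shape=(n_timesteps, n_features))
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
outputs = layers.TimeDistributed(layers.Dense(n_classes, activation="softmax"))(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Each subject's full series is ONE sample of shape (n_timesteps, n_features);
# the train/test split is then made across subjects, not along the time axis.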
