TPOT taking too long to train - python

I've been trying to use TPOT for the first time on a dataset that has approximately 7,000 rows. When trying to train TPOT on the training dataset, which is 25% of the dataset as a whole, TPOT takes too long. I've been running the code for approximately 45 minutes on Google Colab and the optimization progress is still at 4%. I've just been trying to use the example shown at http://epistasislab.github.io/tpot/examples/. Is it typical for TPOT to take this long? Because so far I don't think it's worth even trying to use it.

TPOT can take quite a long time depending on the dataset you have. You have to consider what TPOT is doing: TPOT is evaluating thousands of analysis pipelines and fitting thousands of ML models on your dataset in the background, and if you have a large dataset, then all that fitting can take a long time, especially if you're running it on a less powerful computer.
If you'd like faster results, you have a few options (a combined sketch follows this list):
Use the "TPOT light" configuration, which uses simpler models and will run faster.
Set the n_jobs parameter to -1 or a number greater than 1, which will allow TPOT to evaluate pipelines in parallel. -1 will use all of the available cores and speed things up significantly if you have a multicore machine.
Subsample the data using the subsample parameter. The default is 1.0, corresponding to using 100% of your training data. You can subsample to lower percentages of the data and TPOT will run faster.
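Putting those options together, here is a minimal sketch of a faster TPOT run, assuming the classic TPOTClassifier API; the generations and population_size values are just illustrative, and load_digits is a stand-in for your own data:

    from tpot import TPOTClassifier
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.75, test_size=0.25, random_state=42
    )

    tpot = TPOTClassifier(
        generations=5,
        population_size=20,
        config_dict="TPOT light",  # simpler, faster operators
        n_jobs=-1,                 # evaluate pipelines on all available cores
        subsample=0.5,             # fit each pipeline on 50% of the training data
        verbosity=2,
    )
    tpot.fit(X_train, y_train)
    print(tpot.score(X_test, y_test))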

Related

Is there a way we can calculate estimated model training time when applying machine learning with scikit-learn or otherwise?

Many times, especially when the dataset is large or has many features, it takes ages (long hours) for scikit-learn to train a model. Since training occupies the machine's computational resources, working on other things on the same machine during that time becomes exceptionally slow, reducing overall productivity.
Is there a way to estimate the time required to train a model? It doesn't have to be known beforehand; it could be estimated once training has started.
I have tried scitime, but that's a very invasive method. I would prefer a method that is more tightly coupled with sklearn functionality.

SKLearn VotingRegressor - why so slow?

I'm trying to work with scikit-learn's VotingRegressor, but I find the experience quite frustrating due to the apparent overhead this class adds.
All it should be doing according to the documentation is
...fits several base regressors, each on the whole dataset. Then it averages the individual predictions to form a final prediction.
But by doing this, I find it somehow increases the runtime enormously. Why?
For example, if I import 6 different regressors and train them individually, the training amounts to around 5 minutes on my computer. Based on the description, the only additional step the VotingRegressor takes is to average each predictor's prediction. However, when I pass the same 6 regressors to a VotingRegressor and start training, the training keeps running well past the 20-minute mark.
For computing an average, I wouldn't expect an over 5-fold increase in runtime (a training run I started over 30 minutes ago still hasn't finished). What is the overhead that VotingRegressor adds? Keep in mind this is happening with a roughly 30,000 x 150 dataset.
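For what it's worth, a minimal sketch to reproduce the comparison, with stand-in regressors and synthetic data since the original setup isn't shown; note that VotingRegressor refits a fresh clone of every estimator inside fit, and its n_jobs parameter controls whether those clone fits run in parallel:

    import time
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, VotingRegressor
    from sklearn.linear_model import Ridge

    # Synthetic stand-in for the ~30,000 x 150 dataset described above
    X, y = make_regression(n_samples=30_000, n_features=150, random_state=0)

    regressors = [
        ("ridge", Ridge()),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("gbr", GradientBoostingRegressor(random_state=0)),
    ]

    # Fit the estimators individually
    start = time.perf_counter()
    for _, reg in regressors:
        reg.fit(X, y)
    print(f"individual fits: {time.perf_counter() - start:.1f}s")

    # Same estimators inside a VotingRegressor; n_jobs=-1 fits the clones in parallel
    start = time.perf_counter()
    VotingRegressor(regressors, n_jobs=-1).fit(X, y)
    print(f"voting fit:      {time.perf_counter() - start:.1f}s")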

how to do hyperparameter optimization in large data?

I have almost finished my time series model and collected enough data, but now I am stuck on hyperparameter optimization.
After lots of googling I found a new and promising library called ultraopt, but the problem is: how large a fraction of my total data (~150 GB) should I use for hyperparameter tuning? I want to try lots of algorithms and combinations; is there a faster and easier way?
Or is there any math involved, something like:

    my_data = 100% of the dataset
    optimize hyperparameters on a 5% sample of my_data
    train with the optimized hyperparameters on the remaining 95%

so as to get a similar result to optimizing on the full data at once? Is there a shortcut for this?
I am using Python 3.7,
CPU: AMD Ryzen 5 3400G,
GPU: AMD Vega 11,
RAM: 16 GB
Hyperparameter tuning is typically done on the validation set of a train-val-test split, where each split will have something along the lines of 70%, 10%, and 20% of the entire dataset respectively. As a baseline, random search can be used while Bayesian optimization with Gaussian processes has been shown to be more compute efficient. scikit-optimize is a good package for this.
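As a rough sketch of the scikit-optimize route; the estimator and search space here are placeholders rather than recommendations for your time-series model, and you would run this on a subsample of your data:

    from skopt import BayesSearchCV
    from skopt.space import Integer, Real
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    # Small synthetic stand-in for a subsample of the real data
    X, y = make_regression(n_samples=5_000, n_features=20, random_state=0)

    opt = BayesSearchCV(
        RandomForestRegressor(random_state=0),
        {
            "n_estimators": Integer(50, 500),
            "max_depth": Integer(3, 20),
            "max_features": Real(0.1, 1.0),
        },
        n_iter=25,   # evaluation budget for the Bayesian search
        cv=3,
        n_jobs=-1,
        random_state=0,
    )
    opt.fit(X, y)
    print(opt.best_params_, opt.best_score_)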
A good Python library for hyperparameter tuning is Keras Tuner. You can use several different tuners from this library, but for large data, as you've mentioned, Hyperband optimization can be a state-of-the-art and appropriate choice.
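A minimal Hyperband sketch with Keras Tuner, assuming a simple dense model and toy data; replace build_model and the arrays with your own model-building function and dataset:

    import numpy as np
    import keras_tuner as kt
    from tensorflow import keras

    # Toy stand-in data; substitute your own features and targets
    x = np.random.rand(1_000, 10).astype("float32")
    y = np.random.rand(1_000, 1).astype("float32")

    def build_model(hp):
        # Search over layer width and learning rate
        model = keras.Sequential([
            keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
            keras.layers.Dense(1),
        ])
        model.compile(
            optimizer=keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
            loss="mse",
        )
        return model

    # Hyperband gives many configurations a small budget, then promotes the best
    tuner = kt.Hyperband(build_model, objective="val_loss", max_epochs=20)
    tuner.search(x, y, validation_split=0.2, verbose=0)
    print(tuner.get_best_hyperparameters(1)[0].values)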

SciKit One-class SVM classifier training time increases exponentially with size of training data

I am using the Python scikit-learn OneClassSVM classifier to detect outliers in lines of text. The text is converted to numerical features first using bag of words and TF-IDF.
When I train (fit) the classifier on my computer, the time seems to increase exponentially with the number of items in the training set:
Number of items in training data vs. training time taken:
10K: 1 sec
15K: 2 sec
20K: 8 sec
25K: 12 sec
30K: 16 sec
45K: 44 sec
Is there anything I can do to reduce the training time, and avoid it becoming prohibitively long when the training data grows to a couple of hundred thousand items?
Well, scikit-learn's SVM is a high-level implementation, so there is only so much you can do. In terms of speed, from their website: "SVMs do not directly provide probability estimates, these are calculated using an expensive five-fold cross-validation."
You can increase the kernel cache size (the cache_size parameter) based on your available RAM, but the gain is limited.
You can try changing the kernel, though the resulting model might not fit your data as well.
Here is some advice from http://scikit-learn.org/stable/modules/svm.html#tips-on-practical-use: Scale your data.
Otherwise, don't use scikit-learn and implement it yourself, for example using neural nets.
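A small sketch combining the scaling tip and a larger kernel cache; the data here is a dense stand-in, and with sparse TF-IDF features you would use StandardScaler(with_mean=False) or MaxAbsScaler instead:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM

    X = np.random.rand(10_000, 50)  # stand-in for your extracted features

    model = make_pipeline(
        StandardScaler(),                                   # scale your data
        OneClassSVM(kernel="rbf", nu=0.1, cache_size=2000)  # kernel cache in MB
    )
    model.fit(X)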
Hope I'm not too late. OCSVM, like SVM in general, is resource hungry, and the relationship between training-set size and training time is quadratic (the numbers you show follow this trend). If you can, see whether Isolation Forest or Local Outlier Factor works for you. If you're considering applying this to a lengthier dataset, I would suggest creating a manual anomaly-detection model that closely resembles the approach of these off-the-shelf solutions; by doing this, you should be able to work either in parallel or with threads.
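For instance, a quick Isolation Forest sketch on stand-in data; unlike the kernelized OCSVM, it scales well with the number of samples and can use all cores:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    X = np.random.rand(100_000, 20)  # stand-in feature matrix

    clf = IsolationForest(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(X)
    labels = clf.predict(X)  # +1 for inliers, -1 for outliers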
For anyone coming here from Google, sklearn has implemented SGDOneClassSVM, which "has a linear complexity in the number of training samples". It should be faster for large datasets.
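A minimal sketch with SGDOneClassSVM on a toy text corpus (requires scikit-learn 1.0+; the corpus and nu value are placeholders):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import SGDOneClassSVM
    from sklearn.pipeline import make_pipeline

    # Tiny stand-in corpus; the real use case is hundreds of thousands of lines
    texts = [f"normal log line number {i}" for i in range(1000)]

    # Linear one-class SVM trained with SGD: linear in the number of samples
    model = make_pipeline(TfidfVectorizer(), SGDOneClassSVM(nu=0.05, random_state=0))
    model.fit(texts)
    print(model.predict(["normal log line number 7", "strange never seen text"]))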

Is scikit-learn suitable for big data tasks?

I'm working on a TREC task involving use of machine learning techniques, where the dataset consists of more than 5 terabytes of web documents, from which bag-of-words vectors are planned to be extracted. scikit-learn has a nice set of functionalities that seems to fit my need, but I don't know whether it is going to scale well to handle big data. For example, is HashingVectorizer able to handle 5 terabytes of documents, and is it feasible to parallelize it? Moreover, what are some alternatives out there for large-scale machine learning tasks?
HashingVectorizer will work if you iteratively chunk your data into batches of, for instance, 10k or 100k documents that fit in memory.
You can then pass the batch of transformed documents to a linear classifier that supports the partial_fit method (e.g. SGDClassifier or PassiveAggressiveClassifier) and then iterate on new batches.
You can start scoring the model on a held-out validation set (e.g. 10k documents) as you go, to monitor the accuracy of the partially trained model without waiting to have seen all the samples.
You can also do this in parallel on several machines on partitions of the data and then average the resulting coef_ and intercept_ attributes to get a final linear model for the whole dataset.
I discuss this in this talk I gave in March 2013 at PyData: http://vimeo.com/63269736
There is also sample code in this tutorial on parallelizing scikit-learn with IPython.parallel, taken from: https://github.com/ogrisel/parallel_ml_tutorial
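A minimal out-of-core sketch of the loop described above; iter_batches is a hypothetical generator standing in for however you stream documents off disk:

    import numpy as np
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    vectorizer = HashingVectorizer(n_features=2**20)  # stateless, so no fitting pass is needed
    clf = SGDClassifier()
    classes = np.array([0, 1])  # all labels must be declared up front for partial_fit

    def iter_batches():
        # Hypothetical stand-in: yield (texts, labels) chunks of ~10k documents from disk
        for _ in range(100):
            yield ["example document"] * 10_000, np.random.randint(0, 2, 10_000)

    for texts, labels in iter_batches():
        X = vectorizer.transform(texts)              # constant-memory hashing transform
        clf.partial_fit(X, labels, classes=classes)  # incremental model update

    # Across several machines, average clf.coef_ and clf.intercept_ to combine the models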
