Custom metric across rows in TensorFlow - python

I'm trying to calculate metrics for my TensorFlow model across rows with a common key -- specifically precision at k for an information retrieval task -- and I'm finding this extremely nontrivial. My data include a field that indicates the session ID of each row, and there is a variable number of rows per session ID (but small, under 100). My task is to train across the rows as independent observations, so I don't want to group on the session ID and train per-session, as that would bias the model. The entire point of the model is to train and predict on individual items independently of context, but to evaluate the quality of those predictions within context, grouped by session ID.
As a side note, part of the challenge I'm concerned about is data locality, since I'm performing distributed training. However, it seems that there may be only a single evaluator? ("Evaluator is a special task that is not part of the training cluster.")
Once I realized that tf.metrics.precision_at_k calculates precision at k within the class predictions for a given row / data point, and not across rows, I considered writing a custom metric function, called from within the Estimator train_and_evaluate flow, that keeps an internal dict mapping session ID to tuples of labels and predictions, and transforms these into Tensors to feed to tf.metrics.precision_at_k.
Caveats:
I don't know if I can store this dict, as I don't think I can put it in the computational graph. Can / should I try to store its state in the metric method itself? Will that even be retained after the graph is created, and will it be correctly accessed on subsequent calls to the method?
I don't know how or if I can group items with the same session ID onto the same executor -- even if a method like group_by_window or group_by_reducer on the Dataset works, how does this affect locality in a distributed context?
I could reduce my eval set size to fit into memory but I don't know how to force this to run on only one executor.
I haven't had much luck finding any examples or more information online about anything like this, and the TF code and docs can be somewhat unhelpful, so I'd appreciate any advice! Thanks!
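For concreteness, here is a rough numpy sketch of the bookkeeping described above, done entirely outside the graph after the fact (the arrays session_ids, labels, and scores are assumed to have been collected from the model's per-row predictions, e.g. via estimator.predict()):

    import numpy as np
    from collections import defaultdict

    def precision_at_k_by_session(session_ids, labels, scores, k=5):
        """Average precision@k over sessions, grouping rows by session ID."""
        by_session = defaultdict(list)
        for sid, y, s in zip(session_ids, labels, scores):
            by_session[sid].append((y, s))

        precisions = []
        for rows in by_session.values():
            rows.sort(key=lambda r: r[1], reverse=True)   # rank the session's rows by score
            top_k = rows[:k]
            precisions.append(sum(y for y, _ in top_k) / float(len(top_k)))
        return float(np.mean(precisions))

Doing it this way sidesteps the graph-state and locality questions entirely, at the cost of a second pass over the eval set.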

Related

LightGBM: train() vs update() vs refit()

I'm implementing LightGBM (Python) into a continuous learning pipeline. My goal is to train an initial model and update the model (e.g. every day) with newly available data.
Most examples load an already trained model and apply train() once again:
updated_model = lightgbm.train(params=last_model_params, train_set=new_data, init_model=last_model)
However, I'm wondering whether this is actually the correct way to approach continuous learning within the LightGBM library, since the number of fitted trees (num_trees()) grows by n_estimators with every application of train(). To my understanding, a model update should take an initial model definition (under a given set of model parameters) and refine it without ever growing the number of trees / the size of the model definition.
I find the documentation regarding train(), update() and refit() not particularly helpful. What would be considered the right approach to implement continuous learning with LightGBM?
In lightgbm (the Python package for LightGBM), the entrypoints you've mentioned do have different purposes.
The main lightgbm model object is a Booster. A fitted Booster is produced by training on input data. Given an initial trained Booster...
Booster.refit() does not change the structure of an already-trained model. It just updates the leaf counts and leaf values based on the new data. It will not add any trees to the model.
Booster.update() will perform exactly 1 additional round of gradient boosting on an existing Booster. It will add at most 1 tree to the model.
train() with an init_model will perform gradient boosting for num_iterations additional rounds. It also allows for lots of other functionality, like custom callbacks (e.g. to change the learning rate from iteration-to-iteration) and early stopping (to stop adding trees if performance on a validation set fails to improve). It will add up to num_iterations trees to the model.
What would be considered the right approach to implement continuous learning with LightGBM?
There are trade-offs involved in this choice and no one of these is the globally "right" way to achieve the goal "modify an existing model based on newly-arrived data".
Booster.refit() is the only one of these approaches that meets your definition of "refine [the model] without ever growing the number of trees / the size of the model definition". But it could lead to drastic changes in the predictions produced by the model, especially if the batch of newly-arrived data is much smaller than the original training data, or if the distribution of the target is very different.
Booster.update() is the simplest interface for this, but a single iteration might not be enough to get most of the information from the newly-arrived data into the model. For example, if you're using fairly shallow trees (say, num_leaves=7) and a very small learning rate, even newly-arrived data that is very different from the original training data might not change the model's predictions by much.
train(init_model=previous_model) is the most flexible and powerful option, but it also introduces more parameters and choices. If you choose it, pay attention to the parameters num_iterations and learning_rate: lower values will decrease the impact of newly-arrived data on the trained model, while higher values will allow a larger change to the model. Finding the right balance between those is a concern for your evaluation framework.
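To make the differences concrete, here is a minimal sketch of the three entrypoints on toy data (the data and parameter values are placeholders, not recommendations, and update() behaviour with a new Dataset can vary between LightGBM versions):

    import numpy as np
    import lightgbm as lgb

    # toy stand-ins for the original batch and a newly-arrived batch
    X_old, y_old = np.random.rand(1000, 10), np.random.rand(1000)
    X_new, y_new = np.random.rand(200, 10), np.random.rand(200)

    params = {"objective": "regression", "num_leaves": 7, "learning_rate": 0.05}
    booster = lgb.train(params, lgb.Dataset(X_old, label=y_old), num_boost_round=100)

    # refit(): keeps the tree structure, re-estimates leaf values on the new data
    refitted = booster.refit(X_new, y_new)               # still 100 trees

    # update(): exactly one additional boosting round (at most one new tree)
    booster.update(train_set=lgb.Dataset(X_new, label=y_new))

    # train(init_model=...): up to num_boost_round additional trees, plus callbacks etc.
    continued = lgb.train(
        params,
        lgb.Dataset(X_new, label=y_new),
        num_boost_round=10,
        init_model=booster,
    )

    print(refitted.num_trees(), continued.num_trees())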

Can I store my sklearn Isolation Forest estimator values and build a new model later with those values?

I am able to build an Isolation Forest for anomaly detection. However, due to storage limitations, I cannot store all the data I used to train it. I would also like to input more data later.
I was wondering whether it's possible to get the estimator values when I originally train it and save those. Then, a week later when I want to retrain the model with some newly acquired data, could I first restore my old model from those stored estimator values (so I don't need access to the old data) and then have the model adapt to the newly added values?
The reason I have chosen to resort to this is that I couldn't find any algorithms for anomaly detection that learn iteratively (so a free, open-source suggestion in that department would work great too!).
Any help with this would be deeply appreciated!
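For the "store the estimator values instead of the data" part, persisting the fitted estimator itself (for example with joblib) is usually enough; note that IsolationForest has no partial_fit, so adapting to new data still means fitting again. A minimal sketch, with a placeholder file name and toy data:

    import joblib
    import numpy as np
    from sklearn.ensemble import IsolationForest

    X_old = np.random.rand(10_000, 5)                    # stands in for the original data
    model = IsolationForest(n_estimators=100, random_state=0).fit(X_old)

    # persist the fitted estimator (all its trees), not the training data
    joblib.dump(model, "isolation_forest.joblib")

    # a week later: restore and score newly-arrived data without touching the old data
    restored = joblib.load("isolation_forest.joblib")
    X_new = np.random.rand(1_000, 5)
    scores = restored.decision_function(X_new)           # anomaly scores for the new points

    # incorporating X_new into the model itself requires refitting,
    # e.g. on X_new plus a retained subsample of the old data.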

NN: outputting a probability density function instead of a single value

This might sound silly, but I'm wondering about the possibility of modifying a neural network to obtain a probability density function rather than a single value when trying to predict a scalar. I know that when you are classifying images or words you can get a probability for each class, so I'm thinking there might be a way to do something similar with a continuous value and plot it (similar to the posterior plot in Bayesian optimisation).
Such details could be interesting when deploying a model for prediction and could provide more flexibility than a single value.
Does anyone know a way to obtain such an output?
Thanks!
OK, so I found a solution to this issue, though it adds a lot of overhead.
Initially I thought a Keras callback could be of use, since it provides the flexibility I wanted (i.e. running only on the test data, or only on a subset, and not for every test), but it seems that callbacks are only given summary data from the logs.
So the first step was to create a custom metric that does the same calculation as any other metric with the two arrays (the true values and the predicted values) and, once those calculations are done, writes them out to a file for later use.
Once we have a way to gather all the data for every sample, the next step is to implement a method that gives a good measure of error. I'm currently implementing a handful of methods, but the most fitting one seems to be Bayesian bootstrapping (user lmc2179 has a great Python implementation). I also implemented ensemble methods and Gaussian processes as alternatives or as additional metrics, plus some other Bayesian methods.
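For reference, the Bayesian bootstrap step is small enough to sketch with plain numpy: reweight the per-sample errors with flat Dirichlet weights and look at the distribution of the weighted mean (y_true and y_pred below are assumed to be the arrays written out by the custom metric above):

    import numpy as np

    def bayesian_bootstrap_mean_error(y_true, y_pred, n_draws=2000, seed=0):
        """Posterior samples of the mean absolute error via the Bayesian bootstrap."""
        rng = np.random.default_rng(seed)
        errors = np.abs(np.asarray(y_true) - np.asarray(y_pred))
        # each row is one Dirichlet draw of weights over the observed errors
        weights = rng.dirichlet(np.ones(len(errors)), size=n_draws)
        return weights @ errors                 # one weighted mean error per draw

    # e.g. a 95% credible interval for the mean absolute error:
    # samples = bayesian_bootstrap_mean_error(y_true, y_pred)
    # np.percentile(samples, [2.5, 97.5])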
I'll try to find out whether there are internals in Keras that are set during the training and testing phases, to see if I can set a trigger for my metric. The main issue with using all the data is that you obtain a lot of unreliable data points at the start, since the network is not yet optimized. Some data filtering could be useful to remove a good number of those points and improve the results of the error predictors.
I'll update if I find anything interesting.

Where does a machine learning algorithm store the result?

I think this is kind of "blasphemy" for someone who comes from the AI world, but since I come from the world where we program and get a result, and where there is the concept of storing something in memory, here is my question:
Machine learning works by iterations: the more iterations there are, the better our algorithm becomes. But after those iterations, is there a result stored somewhere? Because if I think as a programmer, when I re-run the program I must have stored the previous results somewhere, or they will be overwritten; or I need to use an array, for example, to store my results.
For example, if I train my image recognition algorithm with a bunch of cat-picture data sets, what are the variables I need to add to my algorithm so that, when I use it on an image library, it succeeds every time it finds a cat? And what will I actually use, since there is nothing saved for my next step?
All the videos and tutorials I have seen only draw a graph to visualize the decision making, and never apply something that can be used in a future program.
For example, in this example kNN is used to show how to detect a handwritten digit, but where is the explicit value to use?
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/2_BasicModels/nearest_neighbor.py
NB: to the people clicking on close requests or downvoting, please at least give a reason.
the more iterations there are, the better our algorithm becomes, but after those iterations, is there a result stored somewhere
What you're alluding to here is the optimization part.
However, to optimize a model, we first have to represent it.
For example, if I'm creating a very simple linear model to predict a house's price from its surface in square meters, I might go for this model:
price = a * surface + b
That's the representation.
Now that you have represented the model, you want to optimize it, i.e. find the params a and b that minimize the prediction error.
is there a result stored somewhere?
In the above, we say that we have learned the params or weights a and b.
That's what you keep: the weights, which come from optimization (also called training), and of course the model itself.
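As a toy illustration (the numbers are made up), the only things that survive training and need to be kept are a and b:

    import numpy as np

    surface = np.array([30.0, 50.0, 70.0, 100.0])   # square meters
    price = np.array([90.0, 150.0, 210.0, 300.0])   # e.g. in thousands

    # optimization / training: find a and b that minimize the squared prediction error
    a, b = np.polyfit(surface, price, deg=1)
    print(a, b)          # the learned weights a and b: this is what gets stored
    print(a * 80 + b)    # reuse them later to predict the price of a new 80 m^2 house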
I think there is some confusion. Let's clear it up.
Machine learning models usually have parameters, and these parameters are trainable. This means a training algorithm finds the "right" values of these parameters so that the model works properly for a given task.
This is the learning part. The actual parameter values are "inferred" from training data.
What you would call the result of the training process is a model. The model is represented by formulas with parameters, and these parameters must be stored. Typically, when you use an ML/DL framework (like scikit-learn or Keras), the parameters are stored alongside some information about the type of model, so that it can be reconstructed at runtime.
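For example, a minimal scikit-learn sketch of that last point (the file name and data are placeholders): the dump contains both the model type and its learned parameters, so a later program can reconstruct it without retraining.

    import joblib
    from sklearn.linear_model import LinearRegression

    # "training": the parameters are inferred from the data
    model = LinearRegression().fit([[30], [50], [70]], [90, 150, 210])
    print(model.coef_, model.intercept_)            # the learned parameters

    joblib.dump(model, "house_price_model.joblib")  # model type + parameters saved to disk

    # in a later run / another program:
    restored = joblib.load("house_price_model.joblib")
    print(restored.predict([[80]]))                 # reuse without retraining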

How to use TF model for inference while training

I'm adapting the Tensorflow tutorial for sequence to sequence modeling for my project. Specifically, I am basing my code off of translate.py.
The tutorial computes the perplexity on the dev set every n training steps. I'd instead like to calculate BLEU score on the dev set.
The problem I'm facing is that when creating a model, you specify whether it is forward-only or not. Looking through the code, it seems that if it is not forward-only (which is the case during training), at each step the network will not calculate the final output for the input sequence, but will calculate gradients. When it is forward-only (as in the decoding function later in the tutorial), it applies the loop function, which feeds the output back into the input of the RNN and allows the output sequence to be generated; however, it doesn't compute the gradients. So as far as I understand it, you can construct a model either for training (i.e. gradients) or for testing (i.e. performing full inference on it).
Since I want to compute the BLEU score, I need some sequence produced by the model which corresponds to an input sequence in the dev set. Because of how the models are constructed, I would need both types of models (forward-only and not forward-only). However, trying to do this (even with a new session and a new variable scope), I can't seem to load the model for inference while I also have a model created for training. Without a new session/variable scope, I get errors about duplicated variables. It would be nice if there were a way to switch the model from not forward-only to forward-only.
In this case, is there any way to perform inference (run the full RNN) while I am also in the scope of training it?
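Not an authoritative answer, but the usual workaround in TF1-era code is to build both variants inside the same variable scope and set reuse=True for the second one, so they share weights. A stripped-down sketch, where build_model is a hypothetical stand-in for the tutorial's Seq2SeqModel (whose many constructor arguments are omitted here):

    import tensorflow as tf  # TF1-style graph code, matching the era of translate.py

    def build_model(inputs, forward_only):
        """Hypothetical stand-in for Seq2SeqModel: same weights, two behaviours."""
        w = tf.get_variable("w", shape=[4, 4])
        outputs = tf.matmul(inputs, w)
        if forward_only:
            return outputs                               # inference: outputs only
        loss = tf.reduce_mean(tf.square(outputs))
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
        return outputs, train_op                         # training: outputs plus gradient step

    inputs = tf.placeholder(tf.float32, [None, 4])

    with tf.variable_scope("model"):
        train_outputs, train_op = build_model(inputs, forward_only=False)
    with tf.variable_scope("model", reuse=True):
        infer_outputs = build_model(inputs, forward_only=True)  # same variables, no duplicates

    # train with train_op, and every n steps run infer_outputs on the dev set to decode
    # sequences for BLEU, all within one session and one set of weights.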
