Batch Norm - Extract Running Mean & Running Variance in TensorFlow (Python)

I am trying to look at the running mean and running variance of a trained TensorFlow model that was exported via GCMLE (saved_model.pb, assets/* & variables/*). Where are these values kept in the graph? I can access the gamma/beta values from tf.GraphKeys.TRAINABLE_VARIABLES, but I have not been able to find the running mean and running variance in any of the tf.GraphKeys.MODEL_VARIABLES. Are the running mean and running variance stored somewhere else?
I know that at test time (i.e. Modes.EVAL) the running mean and running variance are used to normalize the incoming data, and the normalized data is then scaled and shifted using gamma and beta. I am trying to look at all of the variables that I need at inference time, but I cannot find the running mean and running variance. Are these only used at test time and not at inference time (Modes.PREDICT)? If so, that would explain why I can't find them in the exported model, but I am expecting them to be there.
Based on tf.GraphKeys I have tried other things like tf.GraphKeys.MOVING_AVERAGE_VARIABLES, but they are also empty. I also saw this line in the batch_normalization documentation: "Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be added as a dependency to the train_op." So I then tried looking at tf.GraphKeys.UPDATE_OPS from my saved model, and they do contain an assign op batch_normalization/AssignMovingAvg:0, but it is still not clear where I would get the value from.

It appears that the moving mean and moving variance are stored within tf.GraphKeys.GLOBAL_VARIABLES. The reason nothing showed up in MODEL_VARIABLES seems to be that variables only end up in that collection if you use tf.contrib.framework.local_variable.
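A minimal sketch of reading the values out of the export (TF 1.x APIs; "serve" is the standard serving tag and the directory path is a placeholder):
import tensorflow as tf
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], "path/to/saved_model_dir")
    # The moving statistics live in GLOBAL_VARIABLES, so filter them by name.
    for var in tf.global_variables():
        if "moving_mean" in var.name or "moving_variance" in var.name:
            print(var.name, sess.run(var))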

In addition to @reese0106's answer, if you'd like to extract the moving_mean and moving_variance for batch norm, you can filter them by name as follows.
vars = tf.global_variables()  # every variable in the graph
vars_moving_mean_variance = []
for var in vars:
    if ("moving_mean" in var.name) or ("moving_variance" in var.name):
        vars_moving_mean_variance.append(var)
print(vars_moving_mean_variance)
P.S. Thanks for the question and the answer; I solved my own problem too.

Related

How to get action_probability() in stable baselines 3

I am just getting started self-studying reinforcement learning with Stable Baselines 3. My long-term goal is to train an agent to play a specific turn-based board game. Currently I am quite overwhelmed by all the new stuff, though.
I have implemented a gym environment that I can use to play my game manually or by having it pick random actions.
Currently I am stuck trying to get the model to hand me actions in response to an observation. The action space of my environment is Discrete(256). I create the model with the environment as model = PPO('MlpPolicy', env, verbose=1). When I later call model.predict(observation) I do get back a number that looks like an action. When run repeatedly I get different numbers, which I assume is to be expected for an untrained model.
Unfortunately, in my game most of the actions are illegal in most states, so I would like to filter them out and pick the best legal one, or simply dump the output for all actions to get insight into what's happening.
In browsing other people's code I have seen references to model.action_probability(observation). Unfortunately, this method is not part of Stable Baselines 3 as far as I can tell. The guide for migrating from Stable Baselines 2 to v3 only mentions that it is not implemented [1].
Can you give me a hint on how to go on?
In case anyone comes across this post in the future, this is how you do it for PPO.
from stable_baselines3.common.utils import obs_as_tensor  # obs_as_tensor lives in common.utils
def predict_proba(model, state):
    # Convert the (batched) observation to a tensor on the policy's device.
    obs = obs_as_tensor(state, model.policy.device)
    # Get the action distribution the policy assigns to this observation.
    dis = model.policy.get_distribution(obs)
    probs = dis.distribution.probs
    # Detach and move to CPU before converting to numpy (in case the policy is on GPU).
    probs_np = probs.detach().cpu().numpy()
    return probs_np
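For example, a hypothetical call (env and model as in the question; note that the observation needs a leading batch dimension):
obs = env.reset()
probs = predict_proba(model, obs[None])  # shape (1, 256) for a Discrete(256) action space
print(probs)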
About this point.
when I later call model.predict(observation) I do get back a number that looks like an action.
You can prevent that behavior with the following line
model.predict(observation, deterministic=True)
With deterministic=True, the predicted action is always the one with the highest probability, instead of being sampled from the distribution.
Just to give you an example, let's suppose you have the following probabilities:
25% of action A
75% of action B
If you don't use deterministic=True, the model samples from those probabilities to return a prediction.
If you use deterministic=True, the model will always return action B.
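Back to the original goal of filtering illegal actions, a hedged sketch building on predict_proba above (legal_mask is a hypothetical boolean array of length 256 that you would build from your game state):
import numpy as np
probs = predict_proba(model, obs[None])[0]
probs[~legal_mask] = 0.0  # zero out illegal actions
best_legal_action = int(np.argmax(probs))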

Keeping variable names when exporting Pyomo into a .mps file

So, I'm currently working with a Pyomo model with multiple instances that are being solved in parallel. The issue is that solving them takes Pyomo quite a long time (2 to 3 seconds, even though the solving part in Gurobi takes about 0.08 s). I've found that, by exporting a Pyomo instance into an .mps file and then giving it to gurobipy, I can get an overall speed increase of about 30%.
The problem comes later, when I want to work with the variables of the solved model, because I've noticed that, when exporting the original instance from Pyomo into an .mps file, the variable names get lost; they all get named "x" (so, for example, model.Delta, model.Pg, model.Alpha, etc. get turned into x1, x2, ..., x9999 instead of Delta[0], Delta[1], ..., Alpha[99,99]).
Is there a way to keep the original variable name when exporting the model?
Managed to solve it!
To anyone who might find this useful: I passed a dictionary with "symbolic_solver_labels" as the io_options argument of the write method, like this:
instance.write(filename=str(es_) + ".mps", io_options={"symbolic_solver_labels": True})
Now my variables are correctly labeled in the .mps file!
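As a follow-up, a hedged sketch of reading the exported file back with gurobipy (the filename and variable label are placeholders; Pyomo's file writers typically render an indexed variable like Delta[0] as the label Delta(0)):
import gurobipy as gp
m = gp.read("instance.mps")
m.optimize()
v = m.getVarByName("Delta(0)")  # look up a variable by its preserved label
print(v.VarName, v.X)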

Why does scaler.inverse_transform return different values for the same inputs in sklearn?

My data set has one output that I call Y and 5 inputs called X. I read the outputs and inputs from my system in Python and store them in an array called FinalArray.
Later on, I use StandardScaler from sklearn to scale my data set as follows:
scaler = StandardScaler()
scaler.fit(FinalArray)
FinalArrayScaled = scaler.transform(FinalArray)
FinalArrayScaled is later divided into train and test sets, as is usually recommended for regression techniques. Since I am using surrogate models (more specifically Kriging), I am implementing infill to add more points to my sample domain and improve the confidence in my model (RMSE and r^2). The infill method returns the values it needs in scaled form. Remember that the input used for the surrogate model has been scaled previously.
Here is a short example of what the output looks like for 4 samples and 5 features (Image 1).
The first column (0) represents the output of the system and the other columns represent the inputs, so each row represents a specific experiment.
In order to get the values back in the appropriate dimensions, I applied scaler.inverse_transform to the output of the method. However, the output seems weird: once I apply scaler.inverse_transform, I obtain very different values for the same input; refer to Figure 2.
Notice the elements (0,1) and (0,2) from Figure 1. Although they are exactly the same, they lead to totally different values in Figure 2. The same applies to many others. Why is this?
I found a bug in my source code. For the sake of completeness, I am sharing the final result here.
I had an error in a for loop. I wanted to delete the question to avoid confusion in the future, but I was not able to do it.
Thanks
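For completeness, a minimal sanity check (assuming the same 1-output, 5-input layout as in the question) shows that inverse_transform itself is deterministic, so a mismatch like the one above has to come from the surrounding code:
import numpy as np
from sklearn.preprocessing import StandardScaler
rng = np.random.default_rng(0)
FinalArray = rng.normal(size=(4, 6))  # 1 output column + 5 input columns
scaler = StandardScaler()
scaler.fit(FinalArray)
FinalArrayScaled = scaler.transform(FinalArray)
row = FinalArrayScaled[:1]
# Applying inverse_transform twice to the same row gives identical results.
assert np.allclose(scaler.inverse_transform(row), scaler.inverse_transform(row))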

Find probability of failure using logistic regression in Python

I am trying to get some insights from a dataset using logistic regression. The starting dataframe holds the information on whether something was removed unscheduled (1 = Yes, 0 = No) along with some data that was recorded with the removal.
This looks like this:
This data is then 'dummified' using pandas.get_dummies, and the result is as expected.
Then I normalized the fitted coefficients (from coef_) to put everything on the same scale. I put this in a dataframe with a column 'Parameter' (the column names of the dummy dataframe) and a column 'Value' (the normalized coefficient values).
Now I will get the following result.
Now this result shows that the On-wing Time is the biggest contributor in the unscheduled removal.
Now the question: how can I predict the chance that there will be an unscheduled removal for this reason (this column)? So, what is the chance that I will get another unscheduled removal caused by the On-wing Time?
Note that these parameters can change since this data is fake data and data may be added later on. So when the biggest contributor changes, the prediction should also focus on the new biggest contributor.
I hope you understand the question.
EDIT
The complete code and dataset (fake one) can be found here: 1drv.ms/u/s!AjkQWQ6EO_fMiSEfu3vYgSTBR0PZ
Ganesh
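A hedged sketch of what predicting that chance could look like with scikit-learn (df and the column name 'Unscheduled' are placeholders for the question's data):
import pandas as pd
from sklearn.linear_model import LogisticRegression
X = pd.get_dummies(df.drop(columns=["Unscheduled"]))  # dummified features
y = df["Unscheduled"]
clf = LogisticRegression(max_iter=1000).fit(X, y)
# predict_proba returns [P(no removal), P(unscheduled removal)] per row;
# column 1 is the chance of an unscheduled removal for that row's parameters.
print(clf.predict_proba(X.iloc[:5])[:, 1])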

Python OpenCV Random trees parameters

I am trying to create an image classifier based on RandomTrees from OpenCV (version 2.4). Right now, I initialize everything like this:
self.model = cv.RTrees()
max_num_trees = 10
max_error = 1
max_d = 5
criteria = cv.TERM_CRITERIA_MAX_ITER + cv.TERM_CRITERIA_EPS
parameters = dict(max_depth=max_d, min_sample_count=5, use_surrogates=False,
                  nactive_vars=0, term_crit=(criteria, max_num_trees, max_error))
self.model.train(dataset, cv.CV_ROW_SAMPLE, responses, params=parameters)
I did it by looking at this question. The only problem is that, whatever I change in the parameters, the classification always remains the same (and wrong). Since the Python documentation on this is very scarce, I have no choice but to ask here what to do and how to check what I am doing. How do I get the number of trees it generates and all the other things that are documented for C++ but not for Python, like the train error? For example, I tried:
self.model.tree_count
self.model.get_tree_count()
but got an error every time. Also, am I doing the termination criteria initialization correctly?
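Not from the thread, but for comparison: in the newer OpenCV ml module (3.x and later) these parameters have explicit setters and getters, which makes them much easier to inspect; a hedged sketch with dummy data:
import cv2
import numpy as np
model = cv2.ml.RTrees_create()
model.setMaxDepth(5)
model.setMinSampleCount(5)
model.setUseSurrogates(False)
model.setActiveVarCount(0)
# Term criteria tuple: (type, max number of trees, forest accuracy).
model.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 10, 1))
samples = np.random.rand(100, 4).astype(np.float32)            # dummy training data
responses = np.random.randint(0, 2, (100, 1)).astype(np.int32)  # dummy class labels
model.train(samples, cv2.ml.ROW_SAMPLE, responses)
print(model.getMaxDepth(), model.getTermCriteria())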
