In Tensorflow what is GraphKeys.INIT_OP for? - python

Looking at the documentation for GraphKeys: https://www.tensorflow.org/api_docs/python/tf/GraphKeys
There is a GraphKeys.INIT_OP listed that has no documentation.
What is this collection for exactly?
I'm looking for the best way to add a few necessary assign OPs to the graph such that they will be run once at initialization time only. My initial thought was to add them to GraphKeys.GLOBAL_VARIABLES which are run at the time sess.run(tf.global_variables_initializer()) is run. When I saw GraphKeys.INIT_OP I wondered if it perhaps offered a more robust option?

INIT_OP should contain the global variable initialization op. By default it contains an op that, when run, runs these two:
variables.global_variables_initializer()
resources.initialize_resources(resources.shared_resources())
LOCAL_INIT_OP should contain the local variable initialization op. By default it contains an op that, when run, runs these three:
variables.local_variables_initializer()
lookup_ops.tables_initializer()
resources.initialize_resources(resources.local_resources())
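For the original question (running a few assign ops exactly once at initialization), a minimal sketch in TF 1.x graph mode is to hang the extra assigns off the global initializer with a control dependency and run the combined op once; the variable names and values below are placeholders, not anything prescribed by the API:
import tensorflow as tf

v = tf.get_variable('v', shape=[2], initializer=tf.zeros_initializer())
w = tf.get_variable('w', shape=[2], initializer=tf.zeros_initializer())

init_vars = tf.global_variables_initializer()
with tf.control_dependencies([init_vars]):
    # Created under the control dependency so they run only after the
    # regular initializers have finished.
    init_op = tf.group(tf.assign(v, [1.0, 2.0]),
                       tf.assign(w, [3.0, 4.0]))

with tf.Session() as sess:
    sess.run(init_op)  # run exactly once, before any training steps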

Related

TensorFlow How to Initialize Global Step

So I'm trying to run a training session, and when I do I get this error when trying to run my algorithm (when I use tf.train.get_global_step()):
ValueError: global_step is required for exponential_decay.
For some reason, tf.train.get_or_create_global_step() doesn't exist for me; I'm not sure whether that's because the method was removed or something else. I updated TensorFlow, and everything says I'm up to date.
I've dug around the documentation and there's nothing about it. To run I'm using tf.app.run() with a main function.
Is there another way to initialize the global step variable?
Although tf.train.get_or_create_global_step() is perfectly fine, here is another solution:
g_step = tf.get_variable('global_step', trainable=False, initializer=0)
learning_rate = tf.train.exponential_decay(0.1, g_step, decay_steps=1000,
                                           decay_rate=0.96)  # decay values are examples
tf.train.AdamOptimizer(learning_rate).minimize(loss=loss, global_step=g_step)
Create a non-trainable variable that is initialized with zero and pass it to the optimizer.
If you need global_step later, use tf.train.global_step():
sess = tf.Session()
# Initialize the variable
sess.run(g_step.initializer)
print('global_step: %s' % tf.train.global_step(sess, g_step))
So, the reason this function wasn't showing up was because I actually hadn't been on the newest version of TensorFlow even though it was telling me I was completely up to date.
So all I did to fix it was uninstall TensorFlow and then install it from the actual link. I don't have the link anymore, but a quick Google search should turn it up.
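For completeness, once a version that ships the function is installed, the built-in route mentioned above is simply (the decay values are placeholder examples, not anything from the question):
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(0.1, global_step,
                                           decay_steps=1000, decay_rate=0.96)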

Returning results of a Pyomo optimisation from a function

I have set up a problem with Pyomo that optimises the control strategy of a CHP unit, which is laid out (very roughly) in the following way:
class Problem:
    def OptiControl(self):
        # <Pyomo ConcreteModel formulation>
        return model.obj.value()
The Problem class is there because there are several control-strategy methods that I'm investigating, so I call them with (for example) b = problem1.OptiControl().
The problem is that if I try to return the value of the objective, my script gets stuck, and when I Ctrl+C out of it I get (' Signal', 2, 'recieved, but no process queued'). Also, if I call model.write() the script ends normally but nothing is displayed in IPython. Trying to print model.obj.value() also doesn't work.
I assume this has something to do with the fact that I'm calling Pyomo in a function, because the model worked successfully before, but I don't know how to get around this.
EDIT: writing the values to a file also doesn't work. If it helps, this is the excerpt of my code where I solve the model:
opt = SolverFactory("glpk")  # Choose solver
solution = opt.solve(model)  # Solve model
model.write()
with open('ffs.txt', 'w') as f:
    model.obj.expr()
    for t in model.P:
        f.write(model.f["CHP", t].value)
I ended up working around this problem by re-writing my model as an AbstractModel, writing my data to a .dat file, and reading that in (clunky, but I needed results now, so it had to be done). For some reason it works now and does everything I expect it to.
It also turned out my problem was infeasible, which might have added to my troubles, but at least with this method I could use results.write() and find that out, which wasn't possible before.
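For reference, a minimal sketch of that workaround under assumed names (a toy AbstractModel and a hypothetical problem.dat data file, not the original CHP model):
from pyomo.environ import (AbstractModel, Set, Param, Var,
                           Objective, SolverFactory)

model = AbstractModel()
model.P = Set()                      # time periods
model.price = Param(model.P)         # hypothetical data, read from the .dat file
model.f = Var(model.P, bounds=(0, 1))
model.obj = Objective(rule=lambda m: sum(m.price[t] * m.f[t] for t in m.P))

instance = model.create_instance('problem.dat')   # build a concrete instance from the data file
results = SolverFactory('glpk').solve(instance)
results.write()                                    # reports solver/termination status
for t in instance.P:
    print(instance.f[t].value)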

What is the alternative of tf.Variable.ref() in Tensorflow version 0.12?

I'm trying to run the open-source code of the A3C reinforcement learning algorithm in order to learn A3C: A3C code.
However, I got several errors, and I could fix all of them except one.
In the code, ref(), which is a member function of tf.Variable, is used (1, 2), but in the recent TensorFlow version 0.12rc that function seems to be deprecated.
So I don't know what the best way to replace it is (I don't understand exactly why the author used ref()). When I just changed it to the variable itself (for example v.ref() to v), there was no error, but the reward does not change. It seems it cannot learn, and I guess that is because the variables are not properly updated.
Please advise me on the proper way to modify the code so it works.
The new method tf.Variable.read_value() is the replacement for tf.Variable.ref() in TensorFlow 0.12 and later.
The use case for this method is slightly tricky to explain, and is motivated by some caching behavior that causes multiple uses of a remote variable on a different device to use a cached value. Let's say you have the following code:
with tf.device("/cpu:0")
v = tf.Variable([[1.]])
with tf.device("/gpu:0")
# The value of `v` will be captured at this point and cached until `m2`
# is computed.
m1 = tf.matmul(v, ...)
with tf.control_dependencies([m1])
# The assign happens (on the GPU) after `m1`, but before `m2` is computed.
assign_op = v.assign([[2.]])
with tf.control_dependencies([assign_op]):
with tf.device("/gpu:0"):
# The initially read value of `v` (i.e. [[1.]]) will be used here,
# even though `m2` is computed after the assign.
m2 = tf.matmul(v, ...)
sess.run(m2)
You can use tf.Variable.read_value() to force TensorFlow to read the variable again later, and it will be subject to whatever control dependencies are in place. So if you wanted to see the result of the assign when computing m2, you'd modify the last block of the program as follows:
with tf.control_dependencies([assign_op]):
    with tf.device("/gpu:0"):
        # The `read_value()` call will cause TensorFlow to transfer the
        # new value of `v` from the CPU to the GPU before computing `m2`.
        m2 = tf.matmul(v.read_value(), ...)
(Note that, currently, if all of the ops were on the same device, you wouldn't need to use read_value(), because TensorFlow doesn't make a copy of the variable when it is used as the input to an op on the same device. This can cause a lot of confusion—for example when you enqueue a variable to a queue!—and it's one of the reasons that we're working on enhancing the memory model for variables.)

Tensorflow session returns as 'closed'

I have successfully ported the CIFAR-10 ConvNet tutorial code for my own images and am able to train on my data and generate Tensorboard outputs etc.
My next step was to implement an evaluation of new data against the model I built. I am trying now to use cifar10_eval.py as a starting point however am running into some difficulty.
I should point out that the original tutorial code runs entirely without a problem, including cifar10_eval.py. However, when moving this particular code to my application, I get the following error message (last line).
RuntimeError: Attempted to use a closed Session.
I found this error is thrown by TF's session.py
# Check session.
if self._closed:
    raise RuntimeError('Attempted to use a closed Session.')
I have checked the directories in which all files should reside and be created, and all seems exactly as it should (they mirror perfectly those created by running the original tutorial code). They include a train, eval and data folders, containing checkpoints/events files, events file, and data binaries respectively.
I wonder if you could help point out how I can debug this, as I suspect something in the data flow got disrupted when transitioning the code. Unfortunately, despite digging deep and comparing to the original, I can't find the source, as the two are essentially identical apart from trivial changes in file names and destination directories.
EDIT_01:
Debugging step by step, it seems the line that actually throws the error is #106 in the original cifar10_eval.py:
def eval_once(args etc):
    ...
    with tf.Session() as sess:
        ...
        summary = tf.Summary()
        summary.ParseFromString(sess.run(summary_op))  # <========== line 106
summary_op is created in def evaluate of this same script and passed as an arg to def eval_once.
summary_op = tf.merge_all_summaries()
...
while True:
    eval_once(saver, summary_writer, top_k_op, summary_op)
From the documentation on Session, a session can be closed with the .close() method or when it is used as a context manager in a with block. I ran find tensorflow/models/image/cifar10 | xargs grep "sess" and didn't see any sess.close, so it must be the latter.
I.e., you'll get this error if you do something like this:
with tf.Session() as sess:
    sess.run(...)
sess.run(...)  # Attempted to use a closed Session.
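The fix, for comparison, is simply to keep every sess.run inside the with block:
with tf.Session() as sess:
    sess.run(...)
    sess.run(...)  # fine: the session is still open here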
It was a simple (but humbling) error in indentation.
summary = tf.Summary()
summary.ParseFromString(sess.run(summary_op))
summary.value.add(tag='Precision # 1', simple_value=precision)
summary_writer.add_summary(summary, global_step)
was outside of the try: block, and of course, no session could be found.
Sigh.
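For reference, a minimal sketch of the corrected structure (assuming the surrounding eval_once code from cifar10_eval.py; the point is that these lines must live inside the with tf.Session() block so sess is still open):
with tf.Session() as sess:
    # ... restore the checkpoint and run the evaluation loop ...
    summary = tf.Summary()
    summary.ParseFromString(sess.run(summary_op))
    summary.value.add(tag='Precision # 1', simple_value=precision)
    summary_writer.add_summary(summary, global_step)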

FMU FMI simulation, no modification of results when setting certain type of parameter

For this example I developed a simple Modelica model based on the fluid library of the MSL. I connected a MassFlowSource to a pipe and a Boundary_PT as a sink, as in the picture below:
http://www.casimages.com/img.php?i=14061806120359130.png
I generated an FMU package with OpenModelica (in model-exchange mode).
I manage this FMU package from Python with the code below:
import pyfmi, os
from pyfmi import load_fmu
myModel = load_fmu('PathToFolder\\test3.fmu')
res1 = myModel.simulate() # First simulation with m_flow in source set to [1] Kg/s
x = myModel.get('boundary1.m_flow') # Mass flow rate of the source
y = myModel.get('pipe.port_a.m_flow') # Mass flow rate in pipe
print x, y
myModel.set('boundary1.m_flow', 2)
option = myModel.simulate_options()
option['initialize'] = False # No need to re-initialize for the second simulation
res2 = myModel.simulate(options = option) # Second simulation with m_flow in source set to [2] Kg/s
x = myModel.get('boundary1.m_flow') # Mass flow rate of the source
y = myModel.get('pipe.port_a.m_flow') # Mass flow rate in pipe
print x, y
os.system('pause')
The objective is to show a problem that appears when you change a parameter in the model, here the "m_flow" variable of the source component. Setting it to "2" should change the "m_flow" in the pipe, but it does not.
Results: In the first simulation both "m_flow" values come out as "1", which is expected because that is how the model is set up. In the second simulation I set the parameter to "2" in the source, but the pipe "m_flow" stays at "1" (it should be "2").
http://www.casimages.com/img.php?i=140618060905759619.png
The Modelica model of the fluid source is the following (only the relevant part):
equation
  if not use_m_flow_in then
    m_flow_in_internal = m_flow;
  end if;
  connect(m_flow_in, m_flow_in_internal);
I think the FMU doesn't take a parameter into account when it appears inside an if-condition. For me this is a problem, because I need to manage FMUs and be sure that if I set a parameter, the simulation will actually use the new value. How can I be sure that FMU/FMI works correctly? Is there an exhaustive list of the kinds of parameters that can't be managed through an FMU?
I already know that parameters which change the number of equations can't be handled in FMU management (the same goes for variables which change the index of the DAEs).
Note that OpenModelica has a concept of structural parameters and the Evaluate=true annotation. For example, if a parameter is used as an array dimension, it might be evaluated to an Integer value. All uses of that parameter will use the evaluated value, as if it was a constant.
Rather than including a picture of the diagram, the Modelica source code would have been easier to look at in order to find out what OpenModelica did to the system.
I suspect a parameter was evaluated. If you generate non-FMU code, you could inspect the modelName_init.xml generated by OpenModelica and find the entry for a parameter and look for the property isValueChangeable.
You could also use OMEdit to debug the system and view the initial equation (generate the executable including debug information). File->Open Transformations File, then select the modelName_info.xml file. Search for the variable you tried to change and go to the initial equation that defined it. It could very well be that a start-value (set by PyFMI) is ignored because it is not needed to produce a solution.
Whenever you try to set new values for the parameter, follow these steps (sketched below):
1. Reset the model.
2. Set the new values for the parameter.
3. Simulate the model.
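In PyFMI terms, a minimal sketch of those steps might look like this (assuming the same FMU as above; reset() puts the model back into a state where the new value is picked up during initialization):
from pyfmi import load_fmu

myModel = load_fmu('PathToFolder\\test3.fmu')
res1 = myModel.simulate()               # first run, m_flow = 1

myModel.reset()                         # 1. reset the model
myModel.set('boundary1.m_flow', 2)      # 2. set the new value
res2 = myModel.simulate()               # 3. simulate again (re-initializes)

print(res2['pipe.port_a.m_flow'][-1])   # should now reflect the new setting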
I am not familiar with PyFMI, but I have encountered a similar situation before. You could try a few things:
Try to terminate/free the instance after your first simulation.
Since most parameters cannot be changed after initialization, you could turn that parameter into an input connector, so that this specific value can be changed at any time.
(In an FMU from Dymola) I also found that if that parameter is involved in your initial nonlinear system of equations, you will get an error "the model could not be initialized" if you try to initialize the model on the same instance.
