I have a pipeline that reads training and validation datasets from TFRecords.
I build batches using tf.train.batch. During training I want to switch between training and evaluation on the validation dataset.
Here is a simplified snippet of how I implement it now:
is_training_pl = tf.placeholder(tf.bool)

images_train, labels_train = tf.train.batch([img_train, label_train], batch_size=batch_size)
images_val, labels_val = tf.train.batch([img_val, label_val], batch_size=batch_size)

data = tf.cond(is_training_pl,
               lambda: [images_train, labels_train],
               lambda: [images_val, labels_val])
loss = my_model(input=data)
I know that one can do it with tf.cond, but the problem is that both the train and the validation batch ops would be executed whenever tf.cond is evaluated.
On GitHub, ebrevdo said (link to the comment) that it's possible to use tf.train.maybe_batch for this purpose instead, which is more efficient.
Can anyone give an example of how to use tf.train.maybe_batch in my case, please?
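For reference, here is a rough, untested sketch of how I understand the maybe_batch idea (the batch_size and the keep_input wiring are my own assumptions, not taken from the comment):

is_training = tf.Variable(True, trainable=False)
set_train_mode = tf.assign(is_training, True)
set_val_mode = tf.assign(is_training, False)

# Each pipeline only enqueues its input during the matching phase.
# A boolean variable is used instead of a placeholder, since the
# queue-runner threads that execute the enqueue ops receive no feed_dict.
images_train, labels_train = tf.train.maybe_batch(
    [img_train, label_train], keep_input=is_training, batch_size=32)
images_val, labels_val = tf.train.maybe_batch(
    [img_val, label_val], keep_input=tf.logical_not(is_training), batch_size=32)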
After checking the official documentation and examples, I am still confused: is the test data passed to the setup function completely unseen by the model?
from pycaret.datasets import get_data
from pycaret.internal.pycaret_experiment import TimeSeriesExperiment

# get data
y = get_data('airline', verbose=False)

# number of future steps to forecast
fh = 12  # or alternatively fh = np.arange(1, 13)
fold = 3

# setup
exp = TimeSeriesExperiment()
exp.setup(data=y, fh=fh, fold=fold)
exp.models()
which gives a description of the experiment setup.
Also, looking at the CV graph, we can conclude that the test data set is not used during cross-validation. But since this is not explicitly stated anywhere, I need concrete evidence.
Train-test split (plot)
Train CV splits (plot)
If you look at the CV splits, they do not use the test data at all. So any step that uses cross-validation, such as create_model, tune_model, blend_models, or compare_models, will not use the test data for training.
Once you are happy with the models from these steps, you can finalize the model using finalize_model. In this case, whatever model you pass to finalize_model is trained on the complete dataset (train + test) so that you can make true future predictions.
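As a minimal sketch of that workflow, continuing the experiment from the question (the exact return values may differ across PyCaret versions):

# cross-validated selection uses only the train split
best = exp.compare_models()

# finalize_model refits the chosen model on train + test
final = exp.finalize_model(best)

# forecast fh steps beyond the end of the full dataset
predictions = exp.predict_model(final)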
I'm training a simple VAE model on 64x64 images, and I would like to see the images generated after every epoch, or every couple of batches, to monitor progress.
Currently, when I train the model, I wait until training is done and then look at the results.
I tried to write a custom callback in Keras that generates an image and saves it, but I couldn't get it to work. Is it even possible? I couldn't find anything like it.
It would be awesome if you could refer me to a source that explains how to do this, or show me an example.
Note: I'm interested in a clean keras.callbacks solution, not manually iterating over every epoch, training, and generating the sample.
If you still need it, you can define a custom callback in Keras as a subclass of keras.callbacks.Callback:
import matplotlib.pyplot as plt
from tensorflow import keras

class CustomCallback(keras.callbacks.Callback):
    def __init__(self, save_path, VAE, image):
        super().__init__()
        self.save_path = save_path
        self.VAE = VAE
        self.image = image  # a batch of images to reconstruct each epoch

    def on_epoch_end(self, epoch, logs=None):
        # encode the image, then decode the latent vector back to pixels
        latent_space = self.VAE.encoder.predict(self.image)
        reconstruction = self.VAE.decoder.predict(latent_space)
        # plot and save the reconstructed image
        plt.imshow(reconstruction[0])
        plt.savefig('%s/epoch_%03d.png' % (self.save_path, epoch + 1))
        plt.close()
Then instantiate the callback as image_callback = CustomCallback(...)
and place image_callback in the list of callbacks passed to model.fit().
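A hypothetical usage, assuming vae, x_train and sample_batch come from your own code:

image_callback = CustomCallback(save_path='./reconstructions', VAE=vae, image=sample_batch)
vae.fit(x_train, x_train, epochs=10, callbacks=[image_callback])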
Yeah, it's actually possible, but I always use matplotlib and a self-defined function for that. For example, something like this:
for steps in range(epochs):
    Train, Test = YourDataGenerator()  # load your images for one loop
    model.fit(Train, Test, batch_size=...)
    result = model.predict(Test_image)
    plt.imshow(result[0, :, :, :])  # Keras always returns [batch, height, width, channels]
    filename1 = '/content/runde2/generated_plot_%06d.png' % (steps + 1)
    plt.savefig(filename1)
    plt.close()
I think there is also a clean keras.callbacks version, but I always used this approach because you can use other libraries for easier data augmentation per loop. That's just my opinion; I hope I could help you at least a bit.
I'm quite familiar with TensorFlow 1.x, and I'm considering switching to TensorFlow 2 for an upcoming project. I'm having some trouble understanding how to write scalars to TensorBoard logs with eager execution, using a custom training loop.
Problem description
In tf1 you would create some summary ops (one op for each thing you would want to store), which you would then merge into a single op, run that merged op inside a session and then write this to a file using a FileWriter object. Assuming sess is our tf.Session(), an example of how this worked can be seen below:
# While defining our computation graph, define summary ops:
# ... some ops ...
tf.summary.scalar('scalar_1', scalar_1)
# ... some more ops ...
tf.summary.scalar('scalar_2', scalar_2)
# ... etc.
# Merge all these summaries into a single op:
merged = tf.summary.merge_all()
# Define a FileWriter (i.e. an object that writes summaries to files):
writer = tf.summary.FileWriter(log_dir, sess.graph)
# Inside the training loop run the op and write the results to a file:
for i in range(num_iters):
    summary, ... = sess.run([merged, ...], ...)
    writer.add_summary(summary, i)
The problem is that sessions no longer exist in tf2, and I would prefer not to disable eager execution to make this work. The official documentation is written for tf1, and all references I can find suggest using the TensorBoard Keras callback. However, as far as I know, this only works if you train the model through model.fit(...), not through a custom training loop.
What I've tried
The tf1 versions of the tf.summary functions, outside of a session. Obviously, any combination of these functions fails, as FileWriters, merge ops, etc. don't even exist in tf2.
This Medium post states that there has been a "cleanup" in some TensorFlow APIs, including tf.summary(). It suggests importing from tensorflow.python.ops.summary_ops_v2, which doesn't seem to work. This implies using record_summaries_every_n_global_steps; more on this later.
A series of other posts 1, 2, 3, suggest using tf.contrib.summary and tf.contrib.FileWriter. However, tf.contrib has been removed from the core TensorFlow repository and build process.
A TensorFlow v2 showcase from the official repo, which again uses the tf.contrib summaries along with the record_summaries_every_n_global_steps mentioned previously. I couldn't make this work either (even without using the contrib library).
tl;dr
My questions are:
Is there a way to properly use tf.summary in TensorFlow 2?
If not, is there another way to write TensorBoard logs in TensorFlow 2, when using a custom training loop (not model.fit())?
Yes, there is a simpler and more elegant way to use summaries in TensorFlow v2.
First, create a file writer that stores the logs (e.g. in a directory named log_dir):
writer = tf.summary.create_file_writer(log_dir)
Anywhere you want to write something to the log file (e.g. a scalar), use your good old tf.summary.scalar inside a context created by the writer. Suppose you want to store the value of scalar_1 for step i:
with writer.as_default():
    tf.summary.scalar('scalar_1', scalar_1, step=i)
You can open as many of these contexts as you like inside or outside of your training loop.
Example:
# create the file writer object
writer = tf.summary.create_file_writer(log_dir)

for i, (x, y) in enumerate(train_set):
    with tf.GradientTape() as tape:
        y_ = model(x)
        loss = loss_func(y, y_)

    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

    # write the loss value
    with writer.as_default():
        tf.summary.scalar('training loss', loss, step=i+1)
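You can then monitor the run while it trains by launching TensorBoard with tensorboard --logdir pointed at the same log_dir and opening the page it prints (usually localhost:6006).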
I am training a neural network with TensorFlow (1.12) in a supervised fashion, and I'd like to train only on specific examples. The examples are created on the fly by cutting out subsequences, hence I want to do the conditioning within TensorFlow.
This is the relevant part of my original code:
train_step, gvs = minimize_clipped(optimizer, loss,
                                   clip_value=FLAGS.gradient_clip,
                                   return_gvs=True)

gradients = [g for (g, v) in gvs]
gradient_norm = tf.global_norm(gradients)
tf.summary.scalar('gradients/norm', gradient_norm)

eval_losses = {'loss1': loss1,
               'loss2': loss2}
The training step is later executed as:
batch_eval, _ = sess.run([eval_losses, train_step])
I was thinking about inserting something like
train_step_fake = ????
eval_losses_fake = tf.zeros_like(tensor)
train_step_new = tf.cond(my_cond, train_step, train_step_fake)
eval_losses_new = tf.cond(my_cond, eval_losses, eval_losses_fake)
and then doing
batch_eval, _ = sess.run([eval_losses_new, train_step_new])
However, I am not sure how to create a fake train_step.
Also, is this a good idea in general or is there a smoother way of doing this? I am using a tfrecords pipeline, but no other high-level modules (like keras, tf.estimator, eager execution etc.).
Any help is obviously greatly appreciated!
Answering the specific question first: it's certainly possible to perform your training step only based on the tf.cond outcome. Note that the 2nd and 3rd parameters are callables (lambdas), though, so it's more like:
train_step_new = tf.cond(my_cond, lambda: train_step, lambda: train_step_fake)
eval_losses_new = tf.cond(my_cond, lambda: eval_losses, lambda: eval_losses_fake)
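As for creating the fake train step itself (this part is my suggestion, not spelled out in the original answer): since both branches of tf.cond must return tensors rather than bare ops, one option is to attach the real train op as a control dependency in the "true" branch and return a dummy tensor of the same type from both branches:

def do_train():
    # run the real train op, then return a dummy value
    with tf.control_dependencies([train_step]):
        return tf.constant(True)

def skip_train():
    # no-op branch: return a dummy value of the same type
    return tf.constant(False)

train_step_new = tf.cond(my_cond, do_train, skip_train)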
Your instinct that this may not be the right thing to do is correct though.
It's much more preferable (both in terms of efficiency and in terms of reading and reasoning about your code) to filter out the data you want to ignore before it gets to your model in the first place.
This is something you could achieve using the Dataset API, which has a really useful filter() method. If you are using the Dataset API to read your TFRecords right now, then this should be as simple as adding something along the lines of:
dataset = dataset.filter(lambda x: {whatever op you were going to use in tf.cond})
If you are not yet using the Dataset API, now is probably the time to read up on it and consider it, rather than butchering the model with that tf.cond() acting as a filter.
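A rough sketch of what such a pipeline could look like (parse_example and the label-based predicate are illustrative assumptions, not part of the original answer):

dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(parse_example)  # your existing parsing function
# keep only the examples you actually want to train on
dataset = dataset.filter(lambda image, label: tf.greater(label, 0))
dataset = dataset.batch(batch_size)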
I know this question has been asked more than once, but I couldn't understand the code or the logic behind it.
On my data set, I first created a hidden layer with a sigmoid activation, then connected it to the output layer, where I used a softmax function.
fl = tf.layers.dense(x, 10, activation=tf.sigmoid)
output = tf.layers.dense(fl, 2, activation=tf.nn.softmax)
I've created the loss and accuracy ops, initialized the variables, set up the optimizer and train op, and then started running on my data:
loss = tf.losses.softmax_cross_entropy(onehot_labels=y, logits=output)
accuracy = tf.metrics.accuracy(tf.argmax(y_train, 1), tf.argmax(output, 1))

# inits
init_local = tf.local_variables_initializer()
init_global = tf.global_variables_initializer()
sess.run(init_global)
sess.run(init_local)

optimizer = tf.train.GradientDescentOptimizer(rate)
train = optimizer.minimize(loss)

for i in range(1000):
    _, lv = sess.run((train, loss))
    if i % 5 == 0:
        print("L: " + str(lv))
        print("Accuracy: " + str(sess.run(accuracy)))
I can see that my loss value decreases every time I run on the training set, and my accuracy is ~0.93.
The problem is that, from here, I don't know how to test this model with real data.
Also, how can I draw a histogram of my real data? I have the correct labels for the real data as well.
I will assume that you use Dataset to feed your training data and you want to run on test data immediately after training (since you don't have checkpoints in your code).
When using Dataset, you would create an iterator and call get_next() on it. Then, you would use the return values of get_next() as inputs to your model.
To run your model on the test data, you can use two high-level approaches:
If your test data has the same format as your train data, create a Dataset that reads your test data. Then, create another copy (sometimes called a "tower") of your model (the operations will be new but the variables will be shared) that uses the test Dataset. Then, use sess.run() similarly to how you use it for training; you might not need to compute loss or train, only accuracy.
If your test data has a different format, you can feed it directly by using the feed_dict argument to sess.run(). You would feed your test data as values for the tensors returned from get_next(). Usually, one feeds placeholders, but TensorFlow allows you to feed any tensor.
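A minimal sketch of that second approach, assuming x and y are the tensors returned by get_next() and x_test and y_test are NumPy arrays:

x, y = iterator.get_next()  # the same tensors the model was built on

# feed test data directly into the iterator's output tensors
test_accuracy = sess.run(accuracy, feed_dict={x: x_test, y: y_test})
print("Test accuracy:", test_accuracy)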
As for histograms, TensorBoard has a nice way of visualizing them: https://www.tensorflow.org/programmers_guide/tensorboard_histograms
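For example, a hedged sketch of logging a histogram of the predicted probabilities with tf1-style summaries (the log directory and feed values are illustrative):

# log the distribution of the model's output probabilities
tf.summary.histogram('predicted_probabilities', output)
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('logs/')

summary = sess.run(merged, feed_dict={x: x_test, y: y_test})
writer.add_summary(summary, global_step=0)
writer.close()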