I'm learning to use Detectron2. I've followed this link to create a custom object detector.
My training code -
# training Detectron2
from detectron2.engine import DefaultTrainer
from detectron2.config import get_cfg
import os
cfg = get_cfg()
cfg.merge_from_file("./detectron2_repo/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("pedestrian",)
cfg.DATASETS.TEST = () # no metrics implemented for this dataset
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = "detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl" # initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.02
cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough, but you can certainly train longer
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this dataset
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
It saves a log file in the output directory, so I can use TensorBoard to show the training accuracy -
%load_ext tensorboard
%tensorboard --logdir output
It works fine and I can see my model's training accuracy. But when testing/validating the model -
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set the testing threshold for this model
cfg.DATASETS.TEST = ("pedestrian_day", )
predictor = DefaultPredictor(cfg)
From the Detectron2 tutorial I've got -
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
evaluator = COCOEvaluator("pedestrian_day", cfg, False, output_dir="./output/")
val_loader = build_detection_test_loader(cfg, "pedestrian_day", mapper=None)
inference_on_dataset(trainer.model, val_loader, evaluator)
But this only gives the AP, AP50, AP75, APs, APm, and APl for both training and testing.
My question is: how can I see the testing accuracy in TensorBoard, like the training one?
By default, evaluation during training is disabled.
If you would like to enable it, you have to set the parameter below to the evaluation interval (in iterations):
# set eval step interval
cfg.TEST.EVAL_PERIOD = <eval_interval_in_iterations>
But for evaluation to work, you also have to provide a build_evaluator function, either by modifying DefaultTrainer in detectron2/engine/defaults.py or, more simply, by subclassing it.
An example of a build_evaluator function is provided in the tools/train_net.py script of the https://github.com/facebookresearch/detectron2 repo.
There is also an issue in detectron2 that discusses creating a custom LossEvalHook to monitor the evaluation loss, which sounds like a good approach to try.
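For reference, here is a minimal sketch of that pattern; the subclass name and the EVAL_PERIOD value of 100 are just illustrative, and it reuses the cfg and dataset names from the question:
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator
class TrainerWithEval(DefaultTrainer):
    # DefaultTrainer.build_evaluator is not implemented out of the box, so override it
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        if output_folder is None:
            output_folder = cfg.OUTPUT_DIR
        return COCOEvaluator(dataset_name, cfg, False, output_dir=output_folder)
cfg.DATASETS.TEST = ("pedestrian_day",)
cfg.TEST.EVAL_PERIOD = 100  # illustrative value: run evaluation every 100 iterations
trainer = TrainerWithEval(cfg)
trainer.resume_or_load(resume=False)
trainer.train()  # evaluation results are logged alongside the training metrics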
I am using DistilBERT to do sentiment analysis on my dataset. The dataset contains text and a label for each row, which identifies whether the text is a positive or negative movie review (e.g. 1 = positive and 0 = negative). Here is the code from the huggingface documentation (https://huggingface.co/transformers/custom_datasets.html?highlight=imdb)
#This dataset can be explored in the Hugging Face model hub (IMDb), and can be alternatively downloaded with the 🤗 Datasets library with load_dataset("imdb").
wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
tar -xf aclImdb_v1.tar.gz
#This data is organized into pos and neg folders with one text file per example. Let’s write a function that can read this in.
from pathlib import Path
def read_imdb_split(split_dir):
    split_dir = Path(split_dir)
    texts = []
    labels = []
    for label_dir in ["pos", "neg"]:
        for text_file in (split_dir/label_dir).iterdir():
            texts.append(text_file.read_text())
            labels.append(0 if label_dir == "neg" else 1)  # compare strings with ==, not "is"
    return texts, labels
train_texts, train_labels = read_imdb_split('aclImdb/train')
test_texts, test_labels = read_imdb_split('aclImdb/test')
from sklearn.model_selection import train_test_split
train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)
import torch
class IMDbDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels
    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item
    def __len__(self):
        return len(self.labels)
train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
test_dataset = IMDbDataset(test_encodings, test_labels)
#Now that our datasets are ready, we can fine-tune a model either with the 🤗 Trainer/TFTrainer or with native PyTorch/TensorFlow. See training.
#Fine-tuning with Trainer
#The steps above prepared the datasets in the way that the trainer expects. Now all we need to do is create a model to fine-tune, define the TrainingArguments/TFTrainingArguments and instantiate a Trainer/TFTrainer.
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
)
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
trainer = Trainer(
    model=model,                 # the instantiated 🤗 Transformers model to be trained
    args=training_args,          # training arguments, defined above
    train_dataset=train_dataset, # training dataset
    eval_dataset=val_dataset     # evaluation dataset
)
trainer.train()
#We can also train with Pytorch/Tensorflow
from torch.utils.data import DataLoader
from transformers import DistilBertForSequenceClassification, AdamW
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
model.to(device)
model.train()
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)
optim = AdamW(model.parameters(), lr=5e-5)
for epoch in range(3):
    for batch in train_loader:
        optim.zero_grad()
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)
        outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
        loss = outputs[0]
        loss.backward()
        optim.step()
model.eval()
I want to test this model on a new piece of data. I have a dataframe which contains a piece of text/review in each row, and I want to predict its label. Does anyone know how I would go about doing that? I apologize, I am very new to this and would greatly appreciate any help! I tried taking in the text, cleaning it, and then doing
prediction = model.predict(text)
and I got an error saying DistilBERT has no attribute .predict.
If you just want to use the model, you can use the corresponding pipeline:
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
Then you can use it:
classifier("I hate this book")
The code that you've shared from the documentation essentially covers the training and evaluation loop. Beware that your shared code contains two ways of fine-tuning: once with the Trainer, which also includes evaluation, and once with native PyTorch/TF, which contains just the training portion and not the evaluation portion.
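If you stick with the Trainer, you can get test-set predictions without writing a loop yourself; a minimal sketch, assuming the test_dataset from your code:
import numpy as np
# Trainer.predict returns an object with .predictions (logits) and .label_ids
output = trainer.predict(test_dataset)
preds = np.argmax(output.predictions, axis=1)
print((preds == output.label_ids).mean())  # test accuracy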
Here is how the native method can be tweaked to generate predictions on the test set:
# Put model in evaluation mode
model.eval()
# Wrap the test set in a DataLoader so the model receives proper batches
test_loader = DataLoader(test_dataset, batch_size=64)
# Tracking variables for storing ground truth and predictions
predictions, true_labels = [], []
# Prediction loop
for batch in test_loader:
    # Unpack the inputs from our dataloader and move to GPU/accelerator
    input_ids = batch['input_ids'].to(device)
    attention_mask = batch['attention_mask'].to(device)
    labels = batch['labels'].to(device)
    # Telling the model not to compute or store gradients, saving memory and
    # speeding up prediction
    with torch.no_grad():
        # Forward pass; without passing labels, the first output is the logits
        outputs = model(input_ids, attention_mask=attention_mask)
        logits = outputs[0]
    # Move logits and labels to CPU
    logits = logits.detach().cpu().numpy()
    label_ids = labels.to('cpu').numpy()
    # Store predictions and true labels
    predictions.append(logits)
    true_labels.append(label_ids)
After this loop runs, predictions will contain the logits, i.e., the raw scores from the model before any normalization (such as a softmax).
You can use the following to pick the label with the maximum score from the logits and produce a classification report:
import numpy as np
from sklearn.metrics import classification_report, accuracy_score
# Combine the results across all batches.
flat_predictions = np.concatenate(predictions, axis=0)
# For each sample, pick the label (0 or 1) with the higher score.
flat_predictions = np.argmax(flat_predictions, axis=1).flatten()
# Combine the correct labels for each batch into a single list.
flat_true_labels = np.concatenate(true_labels, axis=0)
# Accuracy
print(accuracy_score(flat_true_labels, flat_predictions))
# Classification Report
report = classification_report(flat_true_labels, flat_predictions)
print(report)
For a more elegant way of performing predictions, you can create a BERTModel Class that would contain different methods and variables for handling the tokenization, creation of dataloader, running the predictions, etc.
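As a rough illustration of that idea, here is a minimal sketch of such a wrapper; the class name, its methods, and the example column name are hypothetical, and it reuses the fine-tuned model, tokenizer, and device from above:
class SentimentClassifier:
    """Hypothetical wrapper around the fine-tuned model and tokenizer."""
    def __init__(self, model, tokenizer, device):
        self.model = model.to(device).eval()
        self.tokenizer = tokenizer
        self.device = device
    def predict(self, texts, batch_size=32):
        # texts: a list of strings, e.g. df['review'].tolist() from your dataframe
        preds = []
        for i in range(0, len(texts), batch_size):
            enc = self.tokenizer(texts[i:i + batch_size], truncation=True, padding=True, return_tensors='pt')
            enc = {k: v.to(self.device) for k, v in enc.items()}
            with torch.no_grad():
                logits = self.model(**enc)[0]
            preds.extend(logits.argmax(dim=1).cpu().tolist())
        return preds
clf = SentimentClassifier(model, tokenizer, device)
print(clf.predict(["I hate this book", "I loved this movie"]))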
You can try code like this example: Link-BERT
Arrange the dataset according to the BERT model; in section D of that link you can just change the model name and your dataset.
I am new to machine learning programming. I want to plot the training accuracy, training loss, validation accuracy, and validation loss in the following program.
I used some tutorials to do this and training works fine, but I want to see these metrics plotted as a graph.
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
# Commented out IPython magic to ensure Python compatibility.
# install
!pip install pytorch-pretrained-bert pytorch-nlp
!pip install awscli awsebcli botocore==1.18.18 --upgrade
# BERT imports
import torch
import keras
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from pytorch_pretrained_bert import BertTokenizer, BertConfig
from pytorch_pretrained_bert import BertAdam, BertForSequenceClassification
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
import numpy
from tqdm import tqdm, trange
import pandas as pd
import io
import numpy as np
import matplotlib.pyplot as plt
# % matplotlib inline
# specify GPU device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count()
torch.cuda.get_device_name(0)
# Upload the train file from your local drive
from google.colab import files
uploaded = files.upload()
df = pd.read_csv("text.tsv", delimiter='\t', header=None, names=['sentence_source', 'sentence', 'label', 'label_notes'])
df.shape
df.sample(19)
# Create sentence and label lists
sentences = df.sentence.values
# We need to add special tokens at the beginning and end of each sentence for BERT to work properly
sentences = ["[CLS] " + sentence + " [SEP]" for sentence in sentences]
labels = df.label.values
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
print ("Tokenize the first sentence:")
print (tokenized_texts[0])
# Set the maximum sequence length. The longest sequence in our training set is 47, but we'll leave room on the end anyway.
# In the original paper, the authors used a length of 512.
MAX_LEN = 128
# Use the BERT tokenizer to convert the tokens to their index numbers in the BERT vocabulary
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
# Pad our input tokens
input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
# Create attention masks
attention_masks = []
# Create a mask of 1s for each token followed by 0s for padding
for seq in input_ids:
    seq_mask = [float(i>0) for i in seq]
    attention_masks.append(seq_mask)
# Use train_test_split to split our data into train and validation sets for training
train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(
    input_ids, labels, random_state=2018, test_size=0.1, stratify=labels)
train_masks, validation_masks, _, _ = train_test_split(
    attention_masks, input_ids, random_state=2018, test_size=0.1, stratify=labels)
#stratify
# Convert all of our data into torch tensors, the required datatype for our model
train_inputs = torch.tensor(train_inputs)
validation_inputs = torch.tensor(validation_inputs)
train_labels = torch.tensor(train_labels)
validation_labels = torch.tensor(validation_labels)
train_masks = torch.tensor(train_masks)
validation_masks = torch.tensor(validation_masks)
# Select a batch size for training. For fine-tuning BERT on a specific task, the authors recommend a batch size of 16 or 32
batch_size = 32
# Create an iterator of our data with torch DataLoader. This helps save on memory during training because, unlike a for loop,
# with an iterator the entire dataset does not need to be loaded into memory
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)
"""### **TRAIN**"""
# Load BertForSequenceClassification, the pretrained BERT model with a single linear classification layer on top.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)
model.cuda()
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
    {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
     'weight_decay_rate': 0.01},
    {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
     'weight_decay_rate': 0.0}
]
# This variable contains all of the hyperparameter information our training loop needs
optimizer = BertAdam(optimizer_grouped_parameters,
                     lr=2e-5,
                     warmup=.1)
# Function to calculate the accuracy of our predictions vs labels
def flat_accuracy(preds, labels):
    pred_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    return np.sum(pred_flat == labels_flat) / len(labels_flat)
t = []
# Store our loss and accuracy for plotting
train_loss_set = []
# Number of training epochs (authors recommend between 2 and 4)
epochs = 1
# trange is a tqdm wrapper around the normal python range
for _ in trange(epochs, desc="Epoch"):
    # Training
    # Set our model to training mode (as opposed to evaluation mode)
    model.train()
    # Tracking variables
    tr_loss = 0
    nb_tr_examples, nb_tr_steps = 0, 0
    # Train the data for one epoch
    for step, batch in enumerate(train_dataloader):
        # Add batch to GPU
        batch = tuple(t.to(device) for t in batch)
        # Unpack the inputs from our dataloader
        b_input_ids, b_input_mask, b_labels = batch
        # Clear out the gradients (by default they accumulate)
        optimizer.zero_grad()
        # Forward pass
        loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
        train_loss_set.append(loss.item())
        # Backward pass
        loss.backward()
        # Update parameters and take a step using the computed gradient
        optimizer.step()
        # Update tracking variables
        tr_loss += loss.item()
        nb_tr_examples += b_input_ids.size(0)
        nb_tr_steps += 1
    print("Train loss: {}".format(tr_loss/nb_tr_steps))
    # Validation
    # Put model in evaluation mode to evaluate loss on the validation set
    model.eval()
    # Tracking variables
    eval_loss, eval_accuracy = 0, 0
    nb_eval_steps, nb_eval_examples = 0, 0
    # Evaluate data for one epoch
    for batch in validation_dataloader:
        # Add batch to GPU
        batch = tuple(t.to(device) for t in batch)
        # Unpack the inputs from our dataloader
        b_input_ids, b_input_mask, b_labels = batch
        # Telling the model not to compute or store gradients, saving memory and speeding up validation
        with torch.no_grad():
            # Forward pass, calculate logit predictions
            logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
        # Move logits and labels to CPU
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()
        tmp_eval_accuracy = flat_accuracy(logits, label_ids)
        eval_accuracy += tmp_eval_accuracy
        nb_eval_steps += 1
    print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
# plot training performance
plt.figure(figsize=(15,8))
plt.title("Training loss")
plt.xlabel("Batch")
plt.ylabel("Loss")
plt.plot(train_loss_set)
plt.show()
You can use tensorboard within Google Colab
#this loads Tensorboard notebooks extension so it displays inline
%load_ext tensorboard
...
#This will show tensorboard before training begins and it will update as training continues
%tensorboard --logdir logs
...
#training code
Google's official guidelines are here: https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/tensorboard_in_notebooks.ipynb
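Since the training loop in the question is plain PyTorch, TensorBoard will only have something to display if you log the metrics yourself. A minimal sketch using torch.utils.tensorboard; the log directory and tag names are just illustrative, and global_step would be a batch counter you increment yourself:
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(log_dir='logs')  # matches %tensorboard --logdir logs
# inside the batch loop, after computing the loss:
#     writer.add_scalar('Loss/train', loss.item(), global_step)
# after the validation loop of each epoch:
#     writer.add_scalar('Accuracy/validation', eval_accuracy / nb_eval_steps, epoch)
writer.close()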
Hello, I'm still a beginner with TensorFlow; this is my code.
I'm trying to run a text classification DNN, and until now everything is fine.
I want to save my model and import it so I can use it to predict new values, but I don't have any idea how to do it.
To give you a general idea of what I'm trying to do:
I have 2 folders (training & test)
each folder has 4 folders (the classification categories)
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
import logging
print("Loading all files from directory ...")
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
    data = {}
    data["sentence"] = []
    data["tnemitnes"] = []
    print("getting in a loop")
    for file_path in os.listdir(directory):
        with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
            print("directory : ", directory)
            print("file path : ", file_path)
            data["sentence"].append(f.read())
            data["tnemitnes"].append(re.match("(\d+)\.txt", file_path).group(1))
    return pd.DataFrame.from_dict(data)
print("merging all files in the training set ...")
# Merge all type of emails examples, add a polarity column and shuffle.
def load_dataset(directory):
    pos_df = load_directory_data(os.path.join(directory, "train/br"))
    neg_df = load_directory_data(os.path.join(directory, "train/mi"))
    dos_df = load_directory_data(os.path.join(directory, "train/Brouillons")) #dsd
    nos_df = load_directory_data(os.path.join(directory, "train/favoris")) #dsd
    pos_df["polarity"] = 3
    neg_df["polarity"] = 2
    dos_df["polarity"] = 1
    nos_df["polarity"] = 0
    return pd.concat([pos_df, neg_df, dos_df, nos_df]).sample(frac=1).reset_index(drop=True)
print("Getting the data from files ...")
# Download and process the dataset files.
def download_and_load_datasets():
    train_df = load_dataset(os.path.dirname("train"))
    test_df = load_dataset(os.path.dirname("test"))
    return train_df, test_df
print("configurring all logging output ...")
# Reduce logging output. ERROR
#logging.set_verbosity(tf.logging.INFO)
logging.getLogger().setLevel(logging.INFO)
print("Setting Up the data for the trainning ...")
train_df, test_df = download_and_load_datasets()
train_df.head()
print("Setting Up a Training input on the whole training set with no limit on training epochs ...")
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.estimator.inputs.pandas_input_fn(train_df, train_df["polarity"], num_epochs=None, shuffle=True)
print("Setting Up a Prediction on the whole training set ...")
# Prediction on the whole training set.
predict_train_input_fn = tf.estimator.inputs.pandas_input_fn(train_df, train_df["polarity"], shuffle=False)
print("Setting Up a Prediction on the test set ...")
# Prediction on the test set.
predict_test_input_fn = tf.estimator.inputs.pandas_input_fn(test_df, test_df["polarity"], shuffle=False)
print("Removal of punctuation and splitting on spaces from the data ...")
#The module is responsible for preprocessing of sentences (e.g. removal of punctuation and splitting on spaces).
embedded_text_feature_column = hub.text_embedding_column(key="sentence", module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")
print("Setting Up The Classifier ...")
#Estimator : For classification I did use a DNN Classifier
estimator = tf.estimator.DNNClassifier(
    hidden_units=[10, 20],
    feature_columns=[embedded_text_feature_column],
    n_classes=4,
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.003))
print("Starting the Training ...")
# Training for 20 steps with the default batch size.
estimator.train(input_fn=train_input_fn, steps=20);
print("the Training had ended...")
print("setting Up the results ...")
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
print("Showing the results ...")
print("Training set accuracy: {accuracy}".format(**train_eval_result))
print("Test set accuracy: {accuracy}".format(**test_eval_result))
#this is where I'm having trouble !!! <====
tf.estimator.export(
    os.path.dirname("Model"),
    serving_input_fn,
    default_output_alternative_key=None,
    assets_extra=None,
    as_text=False,
    checkpoint_path=None,
    graph_rewrite_specs=(GraphRewriteSpec((tag_constants.SERVING,), ()),),
    strip_default_attrs=False
)
Now, after I have added the estimator export function, it asks me to give a serving_input_fn and, to be honest, I found it hard to understand how to create one.
If there is an easier way, it would be better.
You can easily get a serving_input_fn with tf.estimator.export.build_parsing_serving_input_receiver_fn (link).
In your case do something like this (it expects a feature spec, which you can build from your feature columns):
feature_spec = tf.feature_column.make_parse_example_spec([embedded_text_feature_column])
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
If you expect to pass tensors directly there's also build_raw_serving_input_receiver_fn in the same package.
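For completeness, a minimal sketch of what the export call could then look like with the TF 1.x Estimator API; the 'Model' directory name is just an example:
# serving_input_fn built as above
export_dir = estimator.export_savedmodel('Model', serving_input_fn)
print("SavedModel written to:", export_dir)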
All I had to do was add model_dir=os.path.join(os.getcwd(), 'Model') to the estimator (os.path.join avoids backslash-escaping problems in the path) -
model_dir=os.path.join(os.getcwd(), 'Model')
This is the new code; I have created a new folder and named it Model.
estimator = tf.estimator.DNNClassifier(
    hidden_units=[10, 20],
    feature_columns=[embedded_text_feature_column],
    n_classes=4,
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.003),
    model_dir=os.path.join(os.getcwd(), 'Model'))
You might want to read this first:
Tensorflow: how to save/restore a model?
A serving_input_receiver_fn should be defined.
https://www.tensorflow.org/api_docs/python/tf/estimator/export/build_parsing_serving_input_receiver_fn
This document introduces a valuable method to build the serving_input_receiver_fn.
Here is the example:
# first you should prepare feature_spec; it includes the specification of your feature columns.
feature_spec = tf.feature_column.make_parse_example_spec(my_feature_columns)
print(feature_spec)
serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
export_model = classifier.export_savedmodel('./iris/', serving_input_receiver_fn)
I have created a model with 3 hidden layers and trained it with a specific data-set.
How can I visualize the model, with the neuron connections and weights, at each iteration?
Here is the snippet of the Python code:
#<ALL IMPORT STATEMENTS>
MODEL_DIR = <model_name>
def make_estimator(model_dir):
    config = run_config.RunConfig(model_dir=model_dir)
    feat_cols = [tf.feature_column.numeric_column("x", shape=<number_of_feat_cols>)]
    return estimator.DNNClassifier(config=config, hidden_units=[<>,<>,<>], feature_columns=feat_cols, n_classes=2,
                                   optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.001))
data = pd.read_csv(<csv_file>)
feat_data = data.drop('Type',axis=1)
feat_data_matrix = feat_data.as_matrix()
labels = data['Type']
labels_matrix = labels.as_matrix()
deep_model = make_estimator(MODEL_DIR)
input_fn = estimator.inputs.numpy_input_fn(x={'x':feat_data_matrix}, y=labels_matrix, shuffle=True, batch_size=10, num_epochs=1000)
tr_steps = <step_size>
deep_model.train(input_fn=input_fn,steps=tr_steps)
print ("Training Done")
In the code above I have not created any TensorFlow session; without one, where can I use the TensorBoard APIs to visualize the model?
Using the Python API, simply call the method tf.summary.FileWriter.
Then, if you load the file written by the SummaryWriter into TensorBoard, the graph is shown.
You have to write the graph like this:
# Launch the graph.
init = tf.global_variables_initializer()
current_session = tf.Session()
current_session.run(init)
# Create a summary writer, add the 'graph' to the event file.
writer = tf.summary.FileWriter(<some-directory>, current_session.graph)
See here.
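As a side note, with the Estimator API you usually don't need to manage a session yourself: DNNClassifier already writes its graph and summary events to the model_dir passed via RunConfig, so you can simply point TensorBoard at that directory (using the MODEL_DIR placeholder from the question):
# from a terminal, after or during training:
tensorboard --logdir=<model_name>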
I'm trying to fine-tune Inception models and validate them with test data. But all the examples given on the TensorFlow-Slim web page show either fine-tuning or testing; there is no example that does both in the same graph and session.
Basically I want to do this:
with tf.Graph().as_default():
    image, image_raw, label, image_name, label_name = dut.distorted_inputs(params, is_training=is_training)
    test_image, test_image_raw, test_label, test_image_name, test_label_name = dut.distorted_inputs(params, is_training=False)
    # I'm creating it as suggested at the github slim page:
    logits, _ = inception.inception_v2(image, num_classes=N, is_training=True)
    tf.get_variable_scope().reuse_variables()
    test_logits, _ = inception.inception_v2(test_image, num_classes=N, is_training=False)
    err = tf.sub(logits, label)
    losses = tf.reduce_mean(tf.reduce_sum(tf.square(err)))
    # total_loss = model_loss+losses
    total_loss = losses + slim.losses.get_total_loss()
    test_err = tf.sub(test_logits, test_label)
    test_loss = tf.reduce_mean(tf.reduce_sum(tf.square(test_err)))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
    train_op = slim.learning.create_train_op(total_loss, optimizer)
    final_loss = slim.learning.train(
        train_op,
        logdir=params["cp_file"],
        init_fn=ut.get_init_fn(slim, params),
        number_of_steps=2,
        summary_writer=summary_writer
    )
This code fails. As you can see, I don't have a separate loop to run my test model; I want to test my model on my test data every 10th batch.
Does calling train with number_of_steps=10 and then using the evaluation code work?
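In other words, something like this minimal sketch of the alternating pattern; it assumes checkpoints end up in params["cp_file"], that num_rounds is however many train/eval cycles you want, and that a fresh session with a Saver can be used for evaluation:
saver = tf.train.Saver()
for round_idx in range(1, num_rounds + 1):
    # slim.learning.train resumes from the latest checkpoint in logdir,
    # so raising number_of_steps by 10 each round trains 10 more steps.
    slim.learning.train(
        train_op,
        logdir=params["cp_file"],
        init_fn=ut.get_init_fn(slim, params),
        number_of_steps=10 * round_idx,
        summary_writer=summary_writer)
    # Evaluate the latest checkpoint on the test inputs.
    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint(params["cp_file"]))
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        print("test loss after %d steps: %f" % (10 * round_idx, sess.run(test_loss)))
        coord.request_stop()
        coord.join(threads)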