I am implementing a GAN to generate fake tweets. I would like to visualize my model using TensorBoard, but it's not displaying anything. This is my first time using TensorBoard, and I followed a tutorial on YouTube (https://gist.github.com/dandelionmane/4f02ab8f1451e276fea1f165a20336f1#file-mnist-py). When I run tensorboard --logdir=/path_to_dir it gives me a port, and that port takes me to TensorBoard, but nothing is displayed. Below is my code. Thank you!
code deleted
It's pretty long, so please Ctrl+F to find the lines related to TensorBoard.
You need to add the following line after you have defined your graph:
writer.add_graph(sess.graph)
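For context, here is a minimal sketch of how the writer is usually wired up in TF 1.x session-style code (the 'logs/gan' path is just a placeholder and has to match the --logdir you pass to tensorboard):
import tensorflow as tf

with tf.Session() as sess:
    # 'logs/gan' is a placeholder; point it at the same directory you pass to --logdir
    writer = tf.summary.FileWriter('logs/gan')
    writer.add_graph(sess.graph)
    # ... run your training loop and any writer.add_summary(...) calls here ...
    writer.close()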
Look at the documentation here.
Look at this question:
How to create a Tensorflow Tensorboard Empty Graph
I'm new to this topic, so forgive my lack of knowledge. There is a very good model called Inception ResNet v2 that basically works like this: the input is an image, and the output is a list of predictions with their positions and bounding rectangles. I find this very useful, and I thought of reusing this already trained model to recognize things it currently can't (for example, whether a human is wearing a mask or not). In other words, I wanted to add a new recognition class to the model.
import tensorflow as tf
import tensorflow_hub as hub
mod = hub.load("https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1")
mod is an object of type tensorflow.python.training.tracking.tracking.AutoTrackable. Reading the documentation (which was only available in the source code) was hard to understand without context,
so I tried to inspect some of its properties to see if I could figure it out by myself.
Well, I couldn't. How can I see the network, the layers, the weights, the fit methods? Is it all abstracted away? Can I convert it to Keras? I want to experiment with it, see if I can modify it, and see if I could export the model to another representation, for example PyTorch.
I wanted to do this because I thought it would be better to modify an already working model than to create one from scratch, and also because I'm not good at training models myself.
I've run into this issue too. The TensorFlow Hub guide says:
This error frequently arises when loading models in TF1 Hub format with the hub.load() API in TF2. Adding the correct signature should fix this problem.
mod = hub.load(handle).signatures['default']
As an example, you can see this notebook.
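For what it's worth, here is a minimal sketch of loading the model through its 'default' signature and running a single image through it (the image path is a placeholder, and the output keys listed in the comment are the ones this particular detector is documented to return):
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load(
    "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"
).signatures['default']

# 'example.jpg' is a placeholder path; the model expects a float image in [0, 1]
img = tf.io.read_file("example.jpg")
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]

result = detector(img)
# result is a dict of tensors, e.g. 'detection_boxes', 'detection_scores',
# 'detection_class_entities' for this detector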
You can dir the loaded model object to see what's defined on it:
m = hub.load(handle)
dir(m)
As mentioned in the other answer, you can also look at the signatures with print(m.signatures)
Hub models are SavedModel assets and do not have a Keras .fit method on them. If you want to train the model from scratch, you'll need to go to the source code.
Some models have more extensive exported interfaces including access to individual layers, but this model does not.
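To make that concrete, here is a small sketch of poking at what the SavedModel actually exposes (handle stands in for the Hub URL from the question):
import tensorflow_hub as hub

handle = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"
m = hub.load(handle)

print(dir(m))                          # attributes tracked on the loaded object
print(list(m.signatures))              # names of the exported signatures
sig = m.signatures['default']
print(sig.structured_input_signature)  # what the signature expects as input
print(sig.structured_outputs)          # the output tensor specs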
I am using TensorFlow's eager execution and I would like to visualize embeddings in TensorBoard. I use the following code to set up the visualization:
self._writer = tf.contrib.summary.create_file_writer('path')
embedding_config = projector.ProjectorConfig()
embedding = embedding_config.embeddings.add()
embedding.tensor_name = self._word_embeddings.name
embedding.metadata_path = 'metadata.tsv'
projector.visualize_embeddings(self._writer, embedding_config)
where self._word_embeddings is my variable for the embeddings. However, when executing this script TensorFlow throws the following error message:
logdir = summary_writer.get_logdir()
AttributeError: 'SummaryWriter' object has no attribute 'get_logdir'
Has anybody experienced something similar and has an idea how to get the embedding visualization to run in eager mode?
I am using TensorFlow 1.10.0.
Any kind of help is greatly appreciated!
If you only care about visualization, and since you are working in eager mode, things can be much simpler.
As far as I can see, you already have your metadata.tsv file set up. The only thing left is to write your embedding matrix to a TSV file: just a for loop over the matrix rows, with the values tab-separated.
As a last step, you can use the TensorBoard projector online, without installing anything, at http://projector.tensorflow.org/ and upload your data. You upload the embedding file and the metadata file separately, in two simple steps.
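For illustration, here is a minimal sketch of dumping the matrix and the labels to the two TSV files (the array, the vocabulary and the file names are all placeholders for your own data):
import numpy as np

embeddings = np.random.rand(100, 64)              # placeholder (vocab_size, dim) matrix
words = ["word_%d" % i for i in range(100)]       # placeholder vocabulary

with open("embeddings.tsv", "w") as f:
    for row in embeddings:
        f.write("\t".join(str(x) for x in row) + "\n")

with open("metadata.tsv", "w") as f:              # one label per line, same order as the rows
    for w in words:
        f.write(w + "\n")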
I'm super new to tensorflow, and I'm following the tutorials on its webpage.
I already understood the code for the MNIST dataset tutorial, but I would like to save the model so I can load it afterwards and test it against my own image set.
I've tried many ways of saving it, but I keep failing.
I'm talking about this tutorial.
Any help will be appreciated!
Edit: Everywhere I go, I see a Session variable, but in this example I don't, and that confuses me...
Do you know how can I save the model from the tutorial and reuse it?
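In case it helps, here is a minimal sketch assuming the tutorial builds a tf.keras model (which would explain why no Session appears); the architecture and file name below are placeholders, not the tutorial's exact code:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# ... model.fit(...) as in the tutorial ...

model.save('my_mnist_model.h5')                            # placeholder file name
restored = tf.keras.models.load_model('my_mnist_model.h5')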
I'm a tensorflow beginner, so excuse my question if it is stupid.
I checked a GitHub example implementing a CNN on the MNIST data with tensorflow.
Here is the link:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py
However, I need to save the model generated by this code, but I don't know how to do it. Since this code does not involve the use of sessions, how do I incorporate a session into it?
Would appreciate your response.
The linked code is using tf.estimator.Estimator to train the model. Its documentation includes how to save the model using export_savedmodel. A saved model can be imported by specifying its location through the model_dir argument of the tf.estimator.Estimator initialiser.
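As a hedged sketch of what that export could look like (the 'images' feature name and the 784-dimensional shape are assumptions based on the linked example, and the paths are placeholders):
import tensorflow as tf

def serving_input_receiver_fn():
    # the feature key and shape must match what the model_fn expects;
    # 'images' with flattened 28x28 inputs is an assumption from the linked example
    features = {'images': tf.placeholder(tf.float32, shape=[None, 784], name='images')}
    return tf.estimator.export.ServingInputReceiver(features, features)

# model = tf.estimator.Estimator(model_fn, model_dir='/tmp/convnet_model')  # as in the example
# model.export_savedmodel('/tmp/exported_model', serving_input_receiver_fn)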
I'm having an issue I can't manage to solve.
I'm just approaching Super Resolution Images in Python and I found this on GitHub: https://github.com/titu1994/Image-Super-Resolution
I think this is exactly what I need for my project.
So I just installed everything I need to run it, and I run it with this:
python main.py (path)t1.bmp
t1.bmp is an image stored in the "input-images" directory, so my command is this:
python main.py C:\Users\cecilia....\t1.bmp
The error I get is this:
http://imgur.com/X3ssj08
http://imgur.com/rRSdyUb
Can you please help me solve this? (The code I'm using is the one in the GitHub repo I linked.)
Thanks in advance
The very first line of the README in the GitHub link that you give says that the code is designed for Theano only. Yet your traceback shows that you are using tensorflow as the backend...
The error you are having is typical of using the wrong image format for the backend in use. You have to know that, for convolutional networks, Theano and tensorflow use different conventions: Theano expects the dimensions in the order (batch, channels, nb_rows, nb_cols), while tensorflow expects (batch, nb_rows, nb_cols, channels). The first is known as "channels_first" and the other as "channels_last". So what happens is that the code you are trying to run (which is explicitly said to be designed for Theano) organises the data to match the channels_first format, which causes tensorflow to crash because the dimensions don't match what it expects.
Bottom line: use theano, or change the code appropriately to make it work on tensorflow.
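To make the difference concrete, here is a small illustration of the two layouts (this is not a patch for that repository, just a demonstration):
import numpy as np
from tensorflow import keras

batch_first = np.zeros((8, 3, 64, 64))                 # channels_first: (batch, channels, rows, cols)
batch_last = np.transpose(batch_first, (0, 2, 3, 1))   # channels_last: (batch, rows, cols, channels)
print(batch_last.shape)                                # (8, 64, 64, 3)

# what your Keras install currently expects; it can be changed in
# ~/.keras/keras.json via the "image_data_format" key (older Keras versions
# used "image_dim_ordering" instead)
print(keras.backend.image_data_format())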