I am new to TensorFlow. While reading the existing documentation, I found the term tensor really confusing. I'd like to clarify the following questions:
What is the relationship between tensor and Variable, tensor vs. tf.constant, and tensor vs. tf.placeholder?
Are they all types of tensors?
TensorFlow doesn't have first-class Tensor objects, meaning that there is no notion of a Tensor in the underlying graph that's executed by the runtime. Instead, the graph consists of op nodes connected to each other, representing operations. An operation allocates memory for its outputs, which are available on endpoints :0, :1, etc., and you can think of each of these endpoints as a Tensor. If you have a tensor corresponding to nodename:0, you can fetch its value as sess.run(tensor) or sess.run('nodename:0'). Execution granularity happens at the operation level, so the run method will execute the op, which will compute all of its endpoints, not just the :0 endpoint. It's possible to have an op node with no outputs (like tf.group), in which case there are no tensors associated with it. It is not possible to have tensors without an underlying op node.
You can examine what happens in the underlying graph by doing something like this:
tf.reset_default_graph()
value = tf.constant(1)
print(tf.get_default_graph().as_graph_def())
So with tf.constant you get a single operation node, and you can fetch it using sess.run("Const:0") or sess.run(value).
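For example (a minimal sketch, assuming the TF 1.x session API), both fetches below return the same value:
import tensorflow as tf
tf.reset_default_graph()
value = tf.constant(1)              # creates an op named "Const" with a single output endpoint
with tf.Session() as sess:
    print(sess.run(value))          # fetch via the Python Tensor object -> 1
    print(sess.run("Const:0"))      # fetch via the "opname:index" string -> 1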
Similarly, value = tf.placeholder(tf.int32) creates a regular node with the name Placeholder, and you can feed it as feed_dict={"Placeholder:0": 2} or feed_dict={value: 2}. You cannot feed and fetch the same placeholder in the same session.run call, but you can see the result by attaching a tf.identity node on top and fetching that.
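A minimal sketch of that, again assuming the TF 1.x session API:
import tensorflow as tf
tf.reset_default_graph()
value = tf.placeholder(tf.int32)     # op named "Placeholder", endpoint Placeholder:0
echo = tf.identity(value)            # extra node so the fed value can be fetched back
with tf.Session() as sess:
    print(sess.run(echo, feed_dict={value: 2}))             # -> 2
    print(sess.run(echo, feed_dict={"Placeholder:0": 2}))   # same thing, fed by name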
For a variable:
tf.reset_default_graph()
value = tf.Variable(tf.ones_initializer()(()))
value2 = value+3
print(tf.get_default_graph().as_graph_def())
You'll see that it creates two nodes, Variable and Variable/read; the :0 endpoint is a valid value to fetch on both of these nodes. However, Variable:0 has a special ref type, meaning it can be used as an input to mutating operations. The result of the Python call tf.Variable is a Python Variable object, and there's some Python magic to substitute Variable/read:0 or Variable:0 depending on whether mutation is necessary. Since most ops have only one endpoint, :0 is dropped. Another example is a Queue -- its close() method will create a new Close op node that connects to the Queue op. To summarize: operations on Python objects like Variable and Queue map to different underlying TensorFlow op nodes depending on usage.
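A small sketch of fetching both endpoints (and mutating through the ref endpoint), assuming TF 1.x:
import tensorflow as tf
tf.reset_default_graph()
value = tf.Variable(tf.ones_initializer()(()))
assign_op = value.assign(5.0)            # mutation is wired to the ref endpoint Variable:0
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run("Variable/read:0"))   # read endpoint -> 1.0
    sess.run(assign_op)
    print(sess.run("Variable:0"))        # ref endpoint also fetches the current value -> 5.0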
For ops like tf.split or tf.nn.top_k, which create nodes with multiple endpoints, Python's session.run call automatically wraps the output in a tuple or collections.namedtuple of Tensor objects, which can be fetched individually.
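For instance (a minimal TF 1.x sketch), tf.nn.top_k returns a namedtuple of two Tensors, one per endpoint:
import tensorflow as tf
tf.reset_default_graph()
values, indices = tf.nn.top_k(tf.constant([1, 3, 2]), k=2)   # one op node, two endpoints (:0 and :1)
with tf.Session() as sess:
    print(sess.run(values))              # -> [3 2]
    print(sess.run(indices))             # -> [1 2]
    print(sess.run([values, indices]))   # both endpoints fetched in one call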
From the glossary:
A Tensor is a typed multi-dimensional array. For example, a 4-D array of floating point numbers representing a mini-batch of images with dimensions [batch, height, width, channel].
Basically, all data in TensorFlow is a Tensor (hence the name):
placeholders are Tensors to which you can feed a value (with the feed_dict argument in sess.run())
Variables are Tensors which you can update (with var.assign()). Technically speaking, tf.Variable is not a subclass of tf.Tensor though
tf.constant is just the most basic Tensor, which contains a fixed value given when you create it
However, in the graph, every node is an operation, which can have Tensors as inputs or outputs.
As already mentioned by others, yes they are all tensors.
The way I understood these was to first visualize and understand 1D, 2D, 3D, 4D, 5D, and 6D tensors, as in the picture below (source: knoldus).
Now, in the context of TensorFlow, you can imagine a computation graph like the one below.
Here, the ops take two tensors a and b as input, multiply each tensor by itself, and then add the results of these multiplications to produce the result tensor t3. These multiplication and addition ops happen at the nodes of the computation graph.
These tensors a and b can be constant tensors, Variable tensors, or placeholders. It doesn't matter, as long as they have the same data type and compatible (or broadcastable) shapes for the operations involved.
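Here is a minimal TF 1.x sketch of that graph, with a as a constant and b as a placeholder (any mix of constant, Variable, or placeholder of compatible shape works the same way):
import tensorflow as tf
tf.reset_default_graph()
a = tf.constant([1.0, 2.0])                  # constant tensor
b = tf.placeholder(tf.float32, shape=[2])    # placeholder tensor of compatible shape
t1 = a * a                                   # Mul node: a squared element-wise
t2 = b * b                                   # Mul node: b squared element-wise
t3 = t1 + t2                                 # Add node producing the result tensor
with tf.Session() as sess:
    print(sess.run(t3, feed_dict={b: [3.0, 4.0]}))   # -> [10. 20.]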
Data is stored in matrices. A 28x28 pixel grayscale image fits into a 28x28 two-dimensional matrix. But for a color image, we need more dimensions. There are 3 color values per pixel (Red, Green, Blue), so a three-dimensional table will be needed with dimensions [28, 28, 3]. And to store a batch of 128 color images, a four-dimensional table is needed with dimensions [128, 28, 28, 3].
These multi-dimensional tables are called "tensors" and the list of their dimensions is their "shape".
Source
TensorFlow's central data type is the tensor. Tensors are the underlying components of computation and a fundamental data structure in TensorFlow. Without using complex mathematical interpretations, we can say a tensor (in TensorFlow) describes a multidimensional numerical array: a collection of data with zero or more dimensions, characterized by rank, shape, and type. Read more: What is tensors in TensorFlow?
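A minimal sketch of rank, shape, and type for the batch-of-images tensor described above:
import tensorflow as tf
images = tf.zeros([128, 28, 28, 3])   # a batch of 128 color images, 28x28 pixels, 3 channels
print(images.shape)                   # (128, 28, 28, 3)   -- the shape
print(images.shape.ndims)             # 4                  -- the rank
print(images.dtype)                   # <dtype: 'float32'> -- the type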
Related
I have built a custom 1-dimensional KMeans module in tensorflow. When executing in pure python everything works as specified. I.e. after training, I can call km.predict(tensor) where tensor is of shape (n, 1) and it will return a (n, 1) tensor of cluster assignments.
The problem is when I try to save this model and then load it. Saving/loading TensorFlow functions (tf.function) requires that I specify an input shape. However, when I do that, it forces me to only make predict calls against the shape that I specified when saving. So e.g. if I want to find the associated clusters for a tensor of shape (m, 1), TensorFlow can't find the associated concrete function (since m != n) and fails.
(I've tried using None as the input shape; that doesn't work.)
This forces me to save function signatures of shape (1, 1) and then apply those functions across vectors, which kills performance.
Am I missing something here, or is it actually impossible to save a TensorFlow model that accepts variable-length inputs?
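For reference, this is how a variable-length first dimension is usually declared when saving (the question reports that using None did not work in their setup; the KMeans class and predict body below are hypothetical stand-ins, not the asker's code):
import tensorflow as tf

class KMeans(tf.Module):
    # hypothetical stand-in; the real module would hold centroids and training logic
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 1], dtype=tf.float32)])
    def predict(self, points):
        return points             # placeholder body; should return (n, 1) cluster assignments

km = KMeans()
tf.saved_model.save(km, "/tmp/kmeans")
loaded = tf.saved_model.load("/tmp/kmeans")
print(loaded.predict(tf.zeros([5, 1])).shape)   # accepts any n when the signature uses None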
Suppose we have a batch of images as a 4D tensor of shape (batches, height, width, channels). And suppose we have some image-transforming functions f0(img, ...), f1(img, ...), ..., where each function processes just one image (not a batch) represented as a 3D tensor. The output tensors from these functions may differ in shape from the input tensors; some functions may even produce more than one tensor, and functions may take some extra arguments besides the image.
Also suppose that we are in non-eager execution mode, meaning that we first build a graph into a .pb file and the graph is later executed, mostly on a GPU, for efficiency.
As is common for TF, the size of the first (batch) dimension might be unknown, which is signified by the value None. The real size of this dimension is only known at graph evaluation time from the input data (the user may try to feed batches of varying sizes).
We're given a sequence of required image transformations, specified as a list of functions with extra arguments. The task is to somehow process the input batch of images through these functions. Moreover, it needs to be done in the most efficient way: tf.py_functions are not allowed, meaning that if all transforming functions are implemented and run on the GPU side only, intermediate results must not go back and forth between GPU and CPU as inputs to py_functions.
The main difficulty is that the batch count is not known (it is None). Hence we can't use a Python loop to process each image through a function. Of course, we could fix some maximum possible batch size and create a loop up to this maximum, and for iterations where there is no input we could conditionally skip processing and pass an empty tensor forward. Another problem is that at each stage the input and output list lengths may differ, e.g. when an image transformation splits the input image into 3-4 tensors of unequal shape or does multi-crop with different windows.
I think the right approach would be to use tf.while_loop somehow, but I haven't understood how it works, nor how to apply it to the case where each stage has a different number of tensors of different shapes.
There is tf.map_fn, which is perfect for the case when all inputs have the same shape and all outputs do too; then we can pass a single 4D tensor down the transformation path. But this is not my case: the inputs can differ in shape, and so can the outputs.
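(For reference, a minimal sketch of the uniform-shape case that tf.map_fn does handle, with a hypothetical per-image transform that preserves the output shape:)
import tensorflow as tf
def per_image(img):                       # hypothetical transform: same output shape per image
    return tf.image.flip_left_right(img)
batch = tf.placeholder(tf.float32, shape=[None, 28, 28, 3])   # batch size unknown (None)
flipped = tf.map_fn(per_image, batch)     # applies per_image to every image in the batch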
Maybe there is something like Python's list, but on the TF side? Meaning a list that keeps tensors of different shapes, but TF-only, so it doesn't leave the GPU the way a Python list does. If such a list exists, along with an analogue of tf.map_fn, then we could use that mapping to process a list of tensors of different shapes by applying one function. That would mostly solve my task, or at least help me for sure.
I'm implementing a text classifier with a CNN similar to Kim 2014 with Tensorflow. Tensorflow provides tf.nn.embedding_lookup_sparse, which allows you to provide the word IDs as a sparse tensor. This is nice, especially for enabling variable length sequences. However, this function requires a "combination" step after the lookup, such as "mean" or "sum". This coerces it back to the dense tensor space. I don't want to do any combination. I want to keep my vectors in the sparse representation, so I can do other convolutions afterwards. Is this possible in TF?
EDIT: I want to avoid padding the input prior to the embedding lookup. This is because TensorFlow's embedding lookup generates vectors for the pad value, and it's a kludge trying to mask it with zeros (see here).
I think there are two points of confusion in the question. Firstly, the combiner operation happens across the set of embedding IDs for each row of the sparse indices input sp_ids. So if sp_ids has a shape of N x 1, then you are "combining" just one embedding vector per each row of sp_ids, which will just retrieve that embedding vector (which is I think what you are saying you want).
Secondly though, the return value is the embedding vector for each row of input. The embedding vector itself is a dense vector, by very definition of what the embedding is and what the TensorFlow embedding operations calculate. So this return result will always be dense, and that's what you want. A sparse matrix representation would be horribly inefficient, since the matrix truly will be dense (full of dense embeddings), regardless of whether any 'combiner' operation happens or not.
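A minimal sketch of the N x 1 case (the embedding matrix and IDs here are made up for illustration):
import tensorflow as tf
params = tf.constant([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])   # 3 x 2 embedding matrix
# sp_ids with shape 2 x 1: one word ID per row
sp_ids = tf.SparseTensor(indices=[[0, 0], [1, 0]],
                         values=tf.constant([2, 1], dtype=tf.int64),
                         dense_shape=[2, 1])
looked_up = tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights=None, combiner="mean")
with tf.Session() as sess:
    print(sess.run(looked_up))   # "mean" over a single ID per row just returns that row's embedding:
                                 # [[2. 2.]
                                 #  [1. 1.]]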
The research paper you linked does not seem to be doing any type of special methodology that would result in a special case of a sparse embedding vector, so I don't see a reason here for expecting or desiring sparse outputs.
Maybe I am incorrect, can you provide more details about why you expect the embedding vectors themselves to be sparse vectors? That would be a highly unusual situation if so.
I have a TF model which was trained with quantization, frozen, converted to tflite with TOCO, and now I have the TFLite HTML Graph Model and json.
I can see that each of the tensors in my graph has quantization attributes (min, max, scale, zero-point), and I'm trying to determine how each of these attributes applies to each tensor.
For instance, I understand the representation of quantized data, and I can understand that taking the quantized weights/biases, multiplying by scale and adding the minimum value returns the original weights/biases (almost).
What I don't understand:
Why do some tensors have quantization attributes (e.g. Relu, Sigmoid) but no intrinsic parameters (like weights and biases do)? Is it because they are output tensors and the quantization is applied before the data is fed into the next operation?
At what points (if any) is quantization applied during the dataflow through the model? For example, say there is an image tensor of floats passed to a conv2d operation -- where and how are the quantization attributes of the weights/bias/relu used to get the output of the conv2d operation?
Essentially, if I parsed the TFLite model's data into a NumPy array, what are all the things I'd need to know about the flow of the data through the network (with respect to quantization) in order to recreate the model for inference from scratch?
I can't seem to find any documentation regarding this. Any help would be appreciated.
The convolution inner loop does multiply-accumulates (MACs) of uint8 values. There is also a smaller outer loop for computing the zero-point offset portions of the MAC. At the end of each kernel convolution you will need to downscale from the int32 accumulator to the 8-bit uint8 range, using a downscale multiplier of input_scale * kernel_scale / output_scale. Those three scale values were learned during training, and are in the tflite inference file. This paper explains the operations:
http://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf
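A rough NumPy sketch of that arithmetic (all scales, zero points, and values here are made up for illustration):
import numpy as np
input_scale, input_zp = 0.02, 128       # illustrative quantization parameters
kernel_scale, kernel_zp = 0.005, 122
output_scale, output_zp = 0.05, 127
q_input = np.array([130, 140], dtype=np.uint8)    # quantized activations
q_kernel = np.array([120, 125], dtype=np.uint8)   # quantized weights
# inner loop: multiply-accumulate (q - zero_point) terms in an int32 accumulator
acc = np.sum((q_input.astype(np.int32) - input_zp) *
             (q_kernel.astype(np.int32) - kernel_zp))
# downscale from int32 back to uint8 with input_scale * kernel_scale / output_scale
multiplier = input_scale * kernel_scale / output_scale
q_output = np.clip(np.round(acc * multiplier) + output_zp, 0, 255).astype(np.uint8)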
In the TensorFlow documentation at GitHub, there is this following code:
# Reshape non-sparse elements just once:
for k in self._keys_to_features:
  v = self._keys_to_features[k]
  if isinstance(v, parsing_ops.FixedLenFeature):
    example[k] = array_ops.reshape(example[k], v.shape)
I am wondering why there is a need to reshape a FixedLenFeature tensor after parsing it from a TFRecord file.
In fact, what is the difference between a FixedLenFeature and VarLenFeature and what is their relevance to a Tensor? I am loading images in this case, so why would all of them be classified as a FixedLenFeature? What is an example of a VarLenFeature?
Tensors are stored on disk without shape information in an Example protocol buffer format (TFRecord files are collections of Examples). The documentation in the .proto file describes things fairly well, but the basic point is that Tensor entries are stored in row-major order with no shape information, so that must be provided when the Tensors are read. Note that the situation is similar for storing Tensors in memory: the shape information is kept separately, and just reshaping a Tensor changes only metadata (things like transpose, on the other hand, can be expensive).
VarLenFeatures are sequences such as sentences which would be difficult to batch together as regular Tensors, since the resulting shape would be ragged. The parse_example documentation has some good examples. Images are fixed length in that, if you load a batch of them, they'll all have the same shape (e.g. they're all 32x32 pixels, so a batch of 10 can have shape 10x32x32).
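A minimal TF 1.x sketch of both feature types in tf.parse_example (the feature names and shapes are illustrative):
import tensorflow as tf
serialized_batch = tf.placeholder(tf.string, shape=[None])   # a batch of serialized Examples
features = {
    # fixed-length: every Example stores exactly 28*28*3 values; shape restores the layout
    "image": tf.FixedLenFeature(shape=[28, 28, 3], dtype=tf.float32),
    # variable-length: e.g. a sentence of word IDs, parsed into a SparseTensor
    "word_ids": tf.VarLenFeature(dtype=tf.int64),
}
parsed = tf.parse_example(serialized_batch, features)
# parsed["image"] is a dense Tensor of shape [batch, 28, 28, 3]
# parsed["word_ids"] is a SparseTensor whose rows can have different lengths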