Why does a TensorFlow tensor behave differently in NumPy math functions than it does in Keras math functions?
NumPy arrays seem to behave normally when put in the same situation as the TensorFlow tensor.
This example shows that a NumPy array is handled correctly by both NumPy and Keras functions:
import numpy as np
from keras import backend as K
arr = np.random.rand(19, 19, 5, 80)
np_argmax = np.argmax(arr, axis=-1)
np_max = np.max(arr, axis=-1)
k_argmax = K.argmax(arr, axis=-1)
k_max = K.max(arr, axis=-1)
print('np_argmax shape: ', np_argmax.shape)
print('np_max shape: ', np_max.shape)
print('k_argmax shape: ', k_argmax.shape)
print('k_max shape: ', k_max.shape)
Outputs this (as expected)
np_argmax shape: (19, 19, 5)
np_max shape: (19, 19, 5)
k_argmax shape: (19, 19, 5)
k_max shape: (19, 19, 5)
As opposed to this example
import numpy as np
from keras import backend as K
import tensorflow as tf
arr = tf.constant(np.random.rand(19, 19, 5, 80))
np_argmax = np.argmax(arr, axis=-1)
np_max = np.max(arr, axis=-1)
k_argmax = K.argmax(arr, axis=-1)
k_max = K.max(arr, axis=-1)
print('np_argmax shape: ', np_argmax.shape)
print('np_max shape: ', np_max.shape)
print('k_argmax shape: ', k_argmax.shape)
print('k_max shape: ', k_max.shape)
which outputs
np_argmax shape: ()
np_max shape: (19, 19, 5, 80)
k_argmax shape: (19, 19, 5)
k_max shape: (19, 19, 5)
You need to execute/run the code (say, under a TF session) to have tensors evaluated. Until then, the tensor is only a symbolic node in the graph, and its values and shapes are not evaluated.
TF docs say:
Each element in the Tensor has the same data type, and the data type is always known. The shape (that is, the number of dimensions it has and the size of each dimension) might be only partially known. Most operations produce tensors of fully-known shapes if the shapes of their inputs are also fully known, but in some cases it's only possible to find the shape of a tensor at graph execution time.
Why don't you try the following code for the 2nd example:
import numpy as np
from keras import backend as K
import tensorflow as tf
arr = tf.constant(np.random.rand(19, 19, 5, 80))
with tf.Session() as sess:
    arr = sess.run(arr)
np_argmax = np.argmax(arr, axis=-1)
np_max = np.max(arr, axis=-1)
k_argmax = K.argmax(arr, axis=-1)
k_max = K.max(arr, axis=-1)
print('np_argmax shape: ', np_argmax.shape)
print('np_max shape: ', np_max.shape)
print('k_argmax shape: ', k_argmax.shape)
print('k_max shape: ', k_max.shape)
After arr = tf.constant(np.random.rand(19, 19, 5, 80)), the type of arr is tf.Tensor, but after running arr = sess.run(arr) its type changes to numpy.ndarray, so the NumPy functions see an ordinary array.
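As a minimal sketch under the same setup, the Keras backend can do the evaluation for you: K.eval runs the tensor in a session and returns a numpy.ndarray, after which the NumPy calls behave as in the first example.
arr = tf.constant(np.random.rand(19, 19, 5, 80))
arr = K.eval(arr)                      # evaluates the tensor; arr is now a numpy.ndarray
print(np.argmax(arr, axis=-1).shape)   # (19, 19, 5)
print(np.max(arr, axis=-1).shape)      # (19, 19, 5)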
Related
I am trying to apply self-attention using the following code (the issue is reproducible with it); the expectation is that attention is applied along the 2nd dimension of the (4, 5, 20, 64) embeddings:
import numpy as np
import tensorflow as tf
from keras import layers as tfl
class Encoder(tfl.Layer):
    def __init__(self):
        super().__init__()
        self.embed_layer = tfl.Embedding(4500, 64, mask_zero=True)
        self.attn_layer = tfl.MultiHeadAttention(num_heads=2,
                                                 attention_axes=2,
                                                 key_dim=16)
        return

    def call(self, x):
        # Input shape: (4, 5, 20) (Batch size: 4)
        x = self.embed_layer(x)                       # Output: (4, 5, 20, 64)
        x = self.attn_layer(query=x, key=x, value=x)  # Output: (4, 5, 20, 64)
        return x
eg_input = tf.constant(np.random.randint(0, 150, (4, 5, 20)))
enc = Encoder()
enc(eg_input)
However, the layer defined above throws the following error. Could someone please explain why this is happening and how to fix it?
{{function_node __wrapped__AddV2_device_/job:localhost/replica:0/task:0/device:CPU:0}} Incompatible shapes: [4,5,2,20,20] vs. [4,5,1,5,20] [Op:AddV2]
Call arguments received by layer 'softmax_2' (type Softmax):
• inputs=tf.Tensor(shape=(4, 5, 2, 20, 20), dtype=float32)
• mask=tf.Tensor(shape=(4, 5, 1, 5, 20), dtype=bool)
PS: If I set mask_zero=False when defining the embedding layer, the code runs as expected without any issues.
Just pass the embeddings through tf.concat along axis=0 before the attention layer. Because tf.concat is a plain TensorFlow op rather than a Keras layer, the mask produced by mask_zero=True is not propagated through it, so the attention layer no longer tries to combine its scores with an incompatibly shaped mask (effectively the same as mask_zero=False for the attention step).
import numpy as np
import tensorflow as tf
from keras import layers as tfl
class Encoder(tfl.Layer):
    def __init__(self):
        super().__init__()
        self.embed_layer = tfl.Embedding(4500, 64, mask_zero=True)
        self.attn_layer = tfl.MultiHeadAttention(num_heads=2,
                                                 key_dim=16,
                                                 attention_axes=2)

    def call(self, x):
        x = self.embed_layer(x)    # Output: (4, 5, 20, 64)
        x = tf.concat(x, axis=0)   # drops the Keras mask; shape unchanged
        x, attention_scores = self.attn_layer(query=x, key=x, value=x,
                                              return_attention_scores=True)  # Output: (4, 5, 20, 64)
        return x, attention_scores
eg_input = tf.constant(np.random.randint(0, 150, (4, 5, 20)))
enc = Encoder()
output, attention_scores = enc(eg_input)
output.shape, attention_scores.shape
# (TensorShape([4, 5, 20, 64]), TensorShape([4, 5, 2, 20, 20]))
I am trying to use the tensorflow.keras.layers.Flatten layer outside of a model to flatten a 4x4 tensor. I can't figure out why the Flatten layer isn't actually flattening my array.
Here is my code:
import tensorflow as tf
import numpy as np
flayer = tf.keras.layers.Flatten()
X = tf.constant(np.random.random((4,4)),dtype=tf.float32)
Xf = flayer(X)
print(Xf)
and print(Xf) shows
tf.Tensor(
[[0.9866459 0.52488756 0.86211777 0.06254051]
[0.32552275 0.23201537 0.8646714 0.80754006]
[0.55823076 0.51929855 0.538077 0.4111973 ]
[0.95845264 0.14468837 0.30223057 0.09648433]], shape=(4, 4), dtype=float32)
Why doesn't my flatten layer output a 16x1 tensor?
That's because the Flatten() layer assumes that the first dimension is the batch (number of samples), so it keeps it and flattens only the remaining dimensions, returning 4 already-flat rows. You have 4 observations, each of which is already 1D. It would behave differently with data of shape (32, 28, 28, 1), for example, where each sample has higher dimensionality:
import tensorflow as tf
import numpy as np
flayer = tf.keras.layers.Flatten()
X = tf.constant(np.random.random((32, 28, 28, 1)),dtype=tf.float32)
Xf = flayer(X)
print(Xf.shape)
(32, 784)
If you meant to flatten one observation with shape (4, 4), you should add a batch dimension for it to work:
X = tf.constant(np.random.random((1, 4, 4)),dtype=tf.float32)
Xf = flayer(X)
print(Xf.shape)
(1, 16)
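If you really did want all 16 values in a single flat vector with no batch axis at all, a plain tf.reshape (rather than the Flatten layer) does that:
X = tf.constant(np.random.random((4, 4)), dtype=tf.float32)
Xflat = tf.reshape(X, [-1])   # shape (16,)
print(Xflat.shape)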
I'm trying to use the MNIST dataset with AlexNet in Keras, so I need to change the dimensions (MNIST is grayscale, while AlexNet expects 227x227 RGB input). So far I get numpy_imgs with shape (10, 227, 227, 1), but I need it to be (10, 227, 227, 3). You can see what I've done so far in my code below,
thank you.
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
batch=mnist.train.next_batch(10)
X_batch = batch[0]
batch_tensor = tf.reshape(X_batch, [10, 28, 28, 1])
resized_images = tf.image.resize_images(batch_tensor, [227,227])
with tf.Session() as sess:
    numpy_imgs = resized_images.eval(session=sess)  # mnist images converted to numpy array

r2 = []
t = list(numpy_imgs)
dim = np.zeros((227, 227))
for i in range(0, 10):
    R = np.stack((t[i], dim, dim), axis=2)
    R = list(R)
    r2.append(R)
y3 = np.asarray(r2)
I tried the loop above, but got an error like "ValueError: all input arrays must have the same shape". How can I fix it?
Take a look at tf.tile, which repeats a tensor along its dimensions:
y3 = tf.tile(numpy_imgs, (1, 1, 1, 3))
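Since numpy_imgs is already a NumPy array at that point, the NumPy equivalent works the same way (a small sketch; it repeats the single grayscale channel three times so R = G = B):
y3 = np.tile(numpy_imgs, (1, 1, 1, 3))
print(y3.shape)   # (10, 227, 227, 3)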
If you instead want to fill the extra channels with zeros, you should use tf.concat (or np.concatenate) instead of np.stack:
dim = np.zeros((227, 227, 2))
for i in range(0, 10):
    R = np.concatenate((t[i], dim), axis=2)
    ...
You can do it even more concisely, treating the whole batch at once:
dim = np.zeros((10, 227, 227, 2))
y3 = np.concatenate((numpy_imgs, dim), axis=3)
Here is a more general example:
import numpy as np

def main():
    i = np.random.random((10, 227, 227, 1))
    dim = np.zeros((10, 227, 227, 2))
    print(i.shape)
    print(dim.shape)
    print(np.concatenate((i, dim), axis=3).shape)

if __name__ == '__main__':
    main()
(10, 227, 227, 1)
(10, 227, 227, 2)
(10, 227, 227, 3)
I'm working with the keras.datasets.fashion_mnist dataset, which contains 28 x 28 grayscale images. I've built a pretty simple convolutional neural network that accepts a placeholder of images defined as:
X = tf.placeholder(tf.float32, [None, 28, 28, INPUT_CHANNELS], name='X_placeholder')
I'm starting out with a <type 'numpy.ndarray'> of shape (100, 28, 28). 100 here represents the batch size that I've chosen to train with.
Obviously, the dimensionality doesn't line up here. The graph I've built should work with RGB images as well, hence the INPUT_CHANNELS dimension. As expected, when I try to train, I get the following error:
ValueError: Cannot feed value of shape (100, 28, 28) for Tensor u'X_placeholder:0', which has shape '(?, 28, 28, 1)'
Being relatively new to TF and NumPy, I'm failing to see how to add in that extra dimension. Having pieced my code together from various sources, I can't say I deliberately chose the placeholder input shape [None, 28, 28, INPUT_CHANNELS], but I want to stick with it rather than work around it.
Question
How can I reshape my training data to match the expected placeholder dimensionality?
In numpy:
You can use np.newaxis, np.expand_dims, or reshape() to add a dimension.
import numpy as np
train_data = np.random.normal(size=(100,28,28))
print(train_data.shape)
new_a = train_data[...,np.newaxis]
print(new_a.shape)
new_a = np.expand_dims(train_data,axis=-1)
print(new_a.shape)
new_a = train_data.reshape(100,28,28,1)
print(new_a.shape)
(100, 28, 28)
(100, 28, 28, 1)
(100, 28, 28, 1)
(100, 28, 28, 1)
In tensorflow:
You can use tf.newaxis, tf.expand_dims, or tf.reshape to add a dimension.
import tensorflow as tf
train_data = tf.placeholder(shape=(None,28,28),dtype=tf.float64)
print(train_data.shape)
new_a = train_data[...,tf.newaxis]
print(new_a.shape)
new_a = tf.reshape(train_data,shape=(-1,28,28,1))
print(new_a.shape)
new_a = tf.expand_dims(train_data,axis=-1)
print(new_a.shape)
(?, 28, 28)
(?, 28, 28, 1)
(?, 28, 28, 1)
(?, 28, 28, 1)
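With the channel axis added on the NumPy side, the batch can be fed straight into the placeholder from the question. A minimal sketch (train_op and sess stand in for whatever training op and session you already have):
batch = np.random.normal(size=(100, 28, 28)).astype(np.float32)
feed = {X: batch[..., np.newaxis]}   # (100, 28, 28, 1) matches (?, 28, 28, 1)
# sess.run(train_op, feed_dict=feed)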
I can't really wrap my head around this... and I'm not sure if stacking is the right term to use here.
A.shape = (28,28,1)
B.shape = (28,28,1)
If I want to merge/add/stack these arrays to this format:
C.shape = (2,28,28,1)
How do I do this? And is there a += version of this, where I can add new arrays of shape (28,28,1) to the existing stack to get (3,28,28,1)?
EDIT
I have this array of 100 grayscale images: (100, 784) which I guess I can reshape to (100,28,28,1) with tf.reshape.
I want to standardize all pixel values of the 100 images with tf.image.per_image_standardization (doc), but this function only accepts an input of shape (h, w, ch), i.e. (28, 28, 1).
Any suggestions on how to optimize this?
CODE
for i in range(epochs):
    for j in range(samples/batch_size):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)  # (100, 784)
        batch_xsr = tf.reshape(batch_xs, [-1, 28, 28, 1])         # (100, 28, 28, 1)
        ...
        # somehow use tf.image.per_image_standardization (input shape =
        # (28, 28, 1)) on each of the 100 images, and end up with
        # shape (100, 28, 28, 1) again.
        ...
        _, loss = sess.run([train, loss_op], feed_dict={x: batch_xs, y: batch_ys})
Note to self: TensorFlow needs np.array in feed dict.
You could go like this...
import numpy as np
A = np.zeros(shape=(28, 28, 1))
B = np.zeros(shape=(28, 28, 1))
A.shape # (28, 28, 1)
B.shape # (28, 28, 1)
C = np.array([A, B])
C.shape # (2, 28, 28, 1)
Then use this to add more, assuming 'new' here is the same shape as A or B.
def add_another(C, new):
    return np.array(list(C) + [new])
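For example, continuing with C of shape (2, 28, 28, 1) from above:
D = np.zeros(shape=(28, 28, 1))
C = add_another(C, D)
C.shape  # (3, 28, 28, 1)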
You can use NumPy's stack and concatenate functions:
import numpy as np
A = np.zeros((28, 28, 1))
B = np.zeros((28, 28, 1))
C = np.stack((A, B), axis=0)
print (C.shape)
>>> (2L, 28L, 28L, 1L)
Append further arrays of shape (28, 28, 1) to an array of shape (x, 28, 28, 1) by concatenating along axis=0:
D = np.ones((28,28,1))
C = np.concatenate([C, [D]], axis=0)
#C = np.append(C, [D], axis=0) # equivalent using np.append which is wrapper around np.concatenate
print (C.shape)
>>> (3L, 28L, 28L, 1L)
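Wrapping D in a list gives it the leading batch axis that concatenate needs; indexing with np.newaxis does the same thing explicitly:
C = np.concatenate([C, D[np.newaxis]], axis=0)
print (C.shape)
>>> (4L, 28L, 28L, 1L)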
EDIT
I'm not familiar with TensorFlow, but try this to standardize your images:
for i in range(epochs):
    for j in range(samples/batch_size):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)  # (100, 784)
        batch_xsr = tf.reshape(batch_xs, [-1, 28, 28, 1])         # (100, 28, 28, 1)
        # tensors don't support item assignment, so standardize each image
        # separately and stack the results back into a (100, 28, 28, 1) batch
        standardized = [tf.image.per_image_standardization(batch_xsr[i_image])
                        for i_image in range(batch_size)]
        batch_xsr = tf.stack(standardized)
        _, loss = sess.run([train, loss_op], feed_dict={x: batch_xs, y: batch_ys})
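If the Python loop feels clunky, tf.map_fn applies the per-image op across the whole batch in one call (a sketch under the same assumptions as the loop above):
batch_xsr = tf.map_fn(tf.image.per_image_standardization, batch_xsr)   # (100, 28, 28, 1)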