I am generally struggling with indexing tensors in TensorFlow.
I have image data and additional scalar data. I can only use a single placeholder to input all the data to a neural network.
The images (img) are numpy arrays with shape (84,84,3) and I have data a with shape (2) and b with shape (1).
Now I create a single sample
sample = np.reshape(np.array([img,a,b]),(3,1)) #shape (3,1)
The placeholder is
input = tf.placeholder(dtype=tf.float32,shape=[None] + list(sample.shape))
Now when TF reads a batch of samples, I would like to retrieve the batch of images, the batch of a, and the batch of b, because they need to be fed into different locations in the neural network.
Here is a minimal example:
import tensorflow as tf
from tensorflow.contrib import layers
import numpy as np
#Numpy
img = np.random.rand(84,84,3)
a = np.random.rand(2)
b = np.random.rand(1)
sample = np.reshape(np.array([img,a,b]),(3,1)) #shape (3,1)
batch = np.repeat(np.expand_dims(sample,axis=0),32,axis=0) #shape (32,3,1)
#TF
input = tf.placeholder(dtype=tf.float32,shape=[None] + list(sample.shape))
#TODO:
tf_img = tf.#get image batch from input
tf_a = tf.#get a batch from input
tf_b = tf.#get b batch from input
out = layers.convolution2d(tf_img,num_outputs=64,kernel_size=8,stride=2,activation_fn=tf.nn.relu)
out = layers.flatten(out)
out = tf.concat([out,tf_a,tf_b],axis=1)
out = layers.fully_connected(out,10,activation_fn=tf.nn.relu)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    _ = sess.run(out, feed_dict={input: batch})
How can I extract the individual parts of the input from a tensor with shape (?,3,1), use the image data to create an embedding, and concatenate the other two parts to that output embedding?
Is there a better way to input the data? My only constraint is that it has to be a single placeholder.
Here's a complete example for my comment above:
import numpy as np
import tensorflow as tf
im_height = 84
im_width = 84
im_channels = 3
a_len = 2
b_len = 1
np_img = np.random.rand(im_height, im_width, im_channels)
np_a = np.random.rand(a_len)
np_b = np.random.rand(b_len)
# flatten the input and concatenate to a single 1D numpy array
np_sample = np.concatenate((np_img.reshape(-1), np_a.reshape(-1), np_b.reshape(-1)), axis=0)
# construct a pseudo batch
np_batch = np.repeat(np_sample[np.newaxis, :], 32, axis=0)
tf_batch = tf.placeholder(shape=(None, im_height*im_width*im_channels + a_len + b_len), dtype=tf.float32)
img_stop = im_height*im_width*im_channels
a_stop = img_stop+a_len
# you could also use tf.slice(...) here
tf_img = tf.reshape(tf_batch[:, 0:img_stop], (-1, im_height, im_width, im_channels))
tf_a = tf.reshape(tf_batch[:, img_stop:a_stop], (-1, a_len))
tf_b = tf.reshape(tf_batch[:, a_stop:], (-1, b_len))
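# equivalent slicing with tf.slice, for illustration (begin/size given per dimension);
# tf_a_alt is a hypothetical name, not used below:
# tf_a_alt = tf.slice(tf_batch, begin=[0, img_stop], size=[-1, a_len])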
with tf.Session() as sess:
    fetch_dict = {'img': tf_img, 'a': tf_a, 'b': tf_b}
    feed_dict = {tf_batch: np_batch}
    res = sess.run(fetch_dict, feed_dict=feed_dict)
    assert np.isclose(res['img'][0, ...], np_img).all()
    assert np.isclose(res['a'][0, :], np_a).all()
    assert np.isclose(res['b'][0, :], np_b).all()
However, this is at least as invasive as adding appropriate placeholders to the code. Additionally, it's much less readable, in my opinion.
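For comparison, the multi-placeholder variant would look roughly like this (a sketch; np_batch_img, np_batch_a and np_batch_b stand for the three per-part batches you would keep as separate arrays instead of building np_batch):
tf_img = tf.placeholder(tf.float32, shape=(None, im_height, im_width, im_channels))
tf_a = tf.placeholder(tf.float32, shape=(None, a_len))
tf_b = tf.placeholder(tf.float32, shape=(None, b_len))
with tf.Session() as sess:
    res = sess.run({'img': tf_img, 'a': tf_a, 'b': tf_b},
                   feed_dict={tf_img: np_batch_img, tf_a: np_batch_a, tf_b: np_batch_b})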
I have a custom loss where I want to apply a Gaussian filter to a predicted label to manipulate it a little. Using max or average pooling is simple, as they are predefined in Keras, but I had to make my own class for Gaussian pooling:
import numpy as np
from keras.layers import DepthwiseConv2D
from keras.layers import Input
from keras.models import Model
import tensorflow as tf
class Gaussian():
    def __init__(self, shape, f=3):
        self.filt = f
        self.g = self.gaussFilter(shape)

    def doFilter(self, data):
        return self.g.predict(data, steps=1)  # steps is for predicting on a const tensor; I change it when predicting on predictions

    def gauss2D(self, shape=(3,3), sigma=0.5):
        m, n = [(ss-1.)/2. for ss in shape]
        y, x = np.ogrid[-m:m+1, -n:n+1]
        h = np.exp(-(x*x + y*y) / (2.*sigma*sigma))
        h[h < np.finfo(h.dtype).eps*h.max()] = 0
        sumh = h.sum()
        if sumh != 0:
            h /= sumh
        return h

    def gaussFilter(self, size=256):
        kernel_weights = self.gauss2D(shape=(self.filt, self.filt))
        in_channels = 1  # the number of input channels
        kernel_weights = np.expand_dims(kernel_weights, axis=-1)
        kernel_weights = np.repeat(kernel_weights, in_channels, axis=-1)  # apply the same filter on all the input channels
        kernel_weights = np.expand_dims(kernel_weights, axis=-1)  # for shape compatibility reasons
        inp = Input(shape=(size, size, 1))
        g_layer = DepthwiseConv2D(self.filt, use_bias=False, padding='same')(inp)
        model_network = Model(input=inp, output=g_layer)
        print(model_network.summary())
        model_network.layers[1].set_weights([kernel_weights])
        model_network.trainable = False
        return model_network
This works as expected when feeding a constant tensor to the doFilter function. Here is an example with simple data:
a = np.array([[[1, 2, 3], [4, 5, 6], [4, 5, 6]]])
filt = Gaussian(3)
print(filt.doFilter(tf.constant(a.reshape(1,3,3,1))))
However, if I try to use this in a custom loss :
def custom_loss_no_true(input_tensor, length):
    def loss(y_true, y_pred):
        gaus_pooler = Gaussian(256, length//8)
        a = gaus_pooler.doFilter(y_pred)
        ...more stuff comes after
I get an error:
ValueError: When feeding symbolic tensors to a model, we expect the tensors to have a static batch size. Got tensor with shape: (None, 256, 256, 1)
As I have found, this is caused by the fact that I am feeding a tensor that is the output of another model, i.e. symbolic data, not actual values (source). So I need to change the logic of my approach, because evaluating the tensor to feed my class would break the graph and prevent gradient propagation within the loss (or am I incorrect?). How can I apply such a convolution operation to a tensor that is the output of another model? Is it even possible? Or maybe there is a way to use it without adding the layer to the model, as with MaxPooling?
You don't really need a complex Keras Model or a Keras Layer if all you want to do is convolve your input with a Gaussian kernel. Here is a port of your code using simple TensorFlow ops:
import tensorflow as tf
def get_gaussian_kernel(shape=(3,3), sigma=0.5):
    """Build the Gaussian filter"""
    m, n = [(ss-1.)/2. for ss in shape]
    x = tf.expand_dims(tf.range(-n, n+1, dtype=tf.float32), 1)
    y = tf.expand_dims(tf.range(-m, m+1, dtype=tf.float32), 0)
    h = tf.exp(tf.math.divide_no_nan(-((x*x) + (y*y)), 2*sigma*sigma))
    h = tf.math.divide_no_nan(h, tf.reduce_sum(h))
    return h

def gaussian_blur(inp, shape=(3,3), sigma=0.5):
    """Convolve using tf.nn.depthwise_conv2d"""
    in_channel = tf.shape(inp)[-1]
    k = get_gaussian_kernel(shape, sigma)
    k = tf.expand_dims(k, axis=-1)
    k = tf.repeat(k, in_channel, axis=-1)
    k = tf.reshape(k, (*shape, in_channel, 1))
    # using padding SAME to preserve the size (H,W) of the input
    conv = tf.nn.depthwise_conv2d(inp, k, strides=[1,1,1,1], padding="SAME")
    return conv
You can use it directly in your custom loss (assuming a 4D y_pred of shape [batch, height, width, channels]):
a = gaussian_blur(y_pred)
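For instance, a minimal sketch of how it could slot into the custom loss above (the final reduction is a stand-in for the elided "...more stuff comes after" part, not the original logic):
def custom_loss_no_true(input_tensor, length):
    def loss(y_true, y_pred):
        # plain TF ops keep the graph differentiable, so gradients
        # propagate through the blur back to the model
        a = gaussian_blur(y_pred, shape=(length//8, length//8), sigma=0.5)
        return tf.reduce_mean(tf.square(a - y_true))  # stand-in reduction, an assumption
    return loss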
I want to resize 3D images with a dynamic shape, for instance to go from shape (64,64,64,1) to (128,128,128,1). The idea is to unstack the image along one axis, then use tf.image.resize_images and stack them again.
My issue is that tf.unstack cannot handle variable-sized inputs. If I run my code I get "ValueError: Cannot infer num from shape (?, ?, ?, 1)".
I have considered using tf.split instead, but it expects an integer input. Does anybody know a workaround?
Here is an example:
import tensorflow as tf
import numpy as np
def resize_by_axis(image, dim_1, dim_2, ax):
    resized_list = []
    # Unstack along the axis to obtain 2D images
    unstack_img_depth_list = tf.unstack(image, axis=ax)
    # Resize the 2D images
    for i in unstack_img_depth_list:
        resized_list.append(tf.image.resize_images(i, [dim_1, dim_2], method=1, align_corners=True))
    # Stack back to 3D
    stack_img = tf.stack(resized_list, axis=ax)
    return stack_img
#X = tf.placeholder(tf.float32, shape=[64,64,64,1])
X = tf.placeholder(tf.float32, shape=[None,None,None,1])
# Get new shape
shape = tf.cast(tf.shape(X), dtype=tf.float32) * tf.constant(2, dtype=tf.float32)
x_new = tf.cast(shape[0], dtype=tf.int32)
y_new = tf.cast(shape[1], dtype=tf.int32)
z_new = tf.cast(shape[2], dtype=tf.int32)
# Reshape
X_reshaped_along_xy = resize_by_axis(X, dim_1=x_new, dim_2=y_new, ax=2)
X_reshaped_along_xyz= resize_by_axis(X_reshaped_along_xy, dim_1=x_new, dim_2=z_new, ax=1)
init = tf.global_variables_initializer()
# Run
with tf.Session() as sess:
    sess.run(init)
    result = X_reshaped_along_xyz.eval(feed_dict={X: np.zeros((64,64,64,1))})
    print(result.shape)
tf.image.resize_images can resize multiple images at the same time, but it does not allow you to pick the batch axis. However, you can manipulate the dimensions of the tensor to put the axis that you want first, so it is used as batch dimension, and then put it back after resizing:
import tensorflow as tf
def resize_by_axis(image, dim_1, dim_2, ax):
    # Make a permutation of the dimensions to put ax first
    dims = tf.range(tf.rank(image))
    perm1 = tf.concat([[ax], dims[:ax], dims[ax + 1:]], axis=0)
    # Transpose to put the ax dimension first
    image_tr = tf.transpose(image, perm1)
    # Resize
    resized_tr = tf.image.resize_images(image_tr, [dim_1, dim_2],
                                        method=1, align_corners=True)
    # Make the inverse permutation to put ax back in its place
    perm2 = tf.concat([dims[:ax] + 1, [0], dims[ax + 1:]], axis=0)
    # Transpose to put ax back in its place
    resized = tf.transpose(resized_tr, perm2)
    return resized
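To make the permutations concrete: for a rank-4 input and ax=2, perm1 evaluates to [2, 0, 1, 3] (axis 2 moved to the front) and perm2 to [1, 2, 0, 3], its inverse. A quick sanity check, with illustrative shapes:
x = tf.zeros((4, 5, 6, 1))
y = resize_by_axis(x, dim_1=10, dim_2=12, ax=2)
with tf.Session() as sess:
    print(sess.run(tf.shape(y)))  # [10 12  6  1]: axes 0 and 1 resized, axis 2 untouched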
In your example:
import tensorflow as tf
import numpy as np
X = tf.placeholder(tf.float32, shape=[None, None, None, 1])
# Get new shape
shape = tf.cast(tf.shape(X), dtype=tf.float32) * tf.constant(2, dtype=tf.float32)
x_new = tf.cast(shape[0], dtype=tf.int32)
y_new = tf.cast(shape[1], dtype=tf.int32)
z_new = tf.cast(shape[2], dtype=tf.int32)
# Reshape
X_reshaped_along_xy = resize_by_axis(X, dim_1=x_new, dim_2=y_new, ax=2)
X_reshaped_along_xyz = resize_by_axis(X_reshaped_along_xy, dim_1=x_new, dim_2=z_new, ax=1)
init = tf.global_variables_initializer()
# Run
with tf.Session() as sess:
    sess.run(init)
    result = X_reshaped_along_xyz.eval(feed_dict={X: np.zeros((64, 64, 64, 1))})
    print(result.shape)
    # (128, 128, 128, 1)
Recently I have been studying GANs, and I am using one to generate MNIST images; my environment is Ubuntu 16.04, TensorFlow, Python 3.
The code runs without any error, but the result shows the network learns nothing: after training, the output image is still a noisy image.
First I designed a generator network: the input is 784-dimensional noise data, which passes through a hidden layer with a leaky ReLU and generates a 784-dimensional image.
Then I designed a discriminator network: the input is a real image and a fake image; after a hidden layer with a leaky ReLU, the output is 1-dimensional logits.
Then I defined the generator_loss and discriminator_loss and trained the generator and discriminator. It runs, but the result shows the network learns nothing and the loss does not converge.
import tensorflow as tf
import numpy as np
import tensorflow.contrib.slim as slim
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/home/zyw/data/tensor_mnist-master/MNIST_data/",one_hot=True)
batch_size = 100
G_in = tf.placeholder(tf.float32,[None,784])
G_h1 = tf.layers.dense(G_in, 128)
G_h1 = tf.maximum(0.01 * G_h1, G_h1)
G_out = tf.tanh(tf.layers.dense(G_h1, 784))
real = tf.placeholder(tf.float32,[None,784])
Dl0 = tf.layers.dense(G_out, 128)
Dl0 = tf.maximum(0.01 * Dl0, Dl0)
p0 = tf.layers.dense(Dl0, 1)
Dl1 = tf.layers.dense(real, 128)
Dl1 = tf.maximum(0.01 * Dl1, Dl1)
p1 = tf.layers.dense(Dl1, 1)
G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits =p0,labels=tf.ones_like(p0)*0.9))
D_real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits =p1,labels=tf.ones_like(p1)*0.9))
D_fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits =p0,labels=tf.zeros_like(p0)))
D_total_loss = tf.add(D_fake_loss,D_real_loss)
G_train = tf.train.AdamOptimizer(0.01).minimize(G_loss)
D_train = tf.train.AdamOptimizer(0.01).minimize(D_total_loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        mnist_data, _ = mnist.train.next_batch(batch_size)
        # noise_org = tf.random_normal([batch_size,784],stddev = 0.1,dtype = tf.float32)
        noise_org = np.random.randn(batch_size, 784)
        a, b, dloss = sess.run([D_real_loss, D_fake_loss, D_total_loss, G_train, D_train], feed_dict={G_in: noise_org, real: mnist_data})[:3]
        if i % 100 == 0:
            print(a, b, dloss)

    # test_generative_image
    noise_org = np.random.randn(1, 784)
    image = sess.run(G_out, feed_dict={G_in: noise_org})
    outimage = tf.reshape(image, [28, 28])
    plt.imshow(outimage.eval(), cmap='gray')
    plt.show()
print('ok')
the result is:
0.80509 0.63548 1.44057
0.33512 0.20223 0.53735
0.332536 0.97737 1.30991
0.328048 0.814452 1.1425
0.326688 0.411907 0.738596
0.325864 0.570807 0.896671
0.325575 0.970406 1.29598
0.325421 1.02487 1.35029
0.325222 1.34089 1.66612
0.325217 0.747129 1.07235
I have added the modified code with comments where I made changes, and I have described the changes below.
import tensorflow as tf
import numpy as np
import tensorflow.contrib.slim as slim
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/home/zyw/data/tensor_mnist-master/MNIST_data/",one_hot=True)
batch_size = 100
#Define the generator function
def generator(input):
    G_h1 = tf.layers.dense(input, 128)
    # G_h1 = tf.maximum(0.01 * G_h1, G_h1)
    G_out = tf.sigmoid(tf.layers.dense(G_h1, 784))  # sigmoid function added
    return G_out

#Define the discriminator function
def discriminator(input):
    Dl0 = tf.layers.dense(input, 128)
    # Dl0 = tf.maximum(0.01 * Dl0, Dl0)
    p0 = tf.layers.dense(Dl0, 1)
    return p0

#Generator
with tf.variable_scope('G'):
    G_in = tf.placeholder(tf.float32, [None, 784])
    G_out = generator(G_in)

real = tf.placeholder(tf.float32, [None, 784])

#Discriminator that takes the real data
with tf.variable_scope('D'):
    D1 = discriminator(real)

#Discriminator that takes the fake data
with tf.variable_scope('D', reuse=True):  # need to use the same copy of the discriminator
    D2 = discriminator(G_out)

G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D2, labels=tf.ones_like(D2)))
D_real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D1, labels=tf.ones_like(D1)))
D_fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D2, labels=tf.zeros_like(D2)))
D_total_loss = tf.add(D_fake_loss, D_real_loss)

vars = tf.trainable_variables()  # all trainable variables
d_training_vars = [v for v in vars if v.name.startswith('D/')]  # variables associated with the discriminator
g_training_vars = [v for v in vars if v.name.startswith('G/')]  # variables associated with the generator

G_train = tf.train.AdamOptimizer(0.001).minimize(G_loss, var_list=g_training_vars)  # only train the variables associated with the generator
D_train = tf.train.AdamOptimizer(0.001).minimize(D_total_loss, var_list=d_training_vars)  # only train the variables associated with the discriminator

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        mnist_data, _ = mnist.train.next_batch(batch_size)
        # noise_org = tf.random_normal([batch_size,784],stddev = 0.1,dtype = tf.float32)
        noise_org = np.random.randn(batch_size, 784)
        a, b, dloss = sess.run([D_real_loss, D_fake_loss, D_total_loss, G_train, D_train], feed_dict={G_in: noise_org, real: mnist_data})[:3]
        if i % 100 == 0:
            print(a, b, dloss)

    # test_generative_image
    noise_org = np.random.randn(1, 784)
    image = sess.run(G_out, feed_dict={G_in: noise_org})
    outimage = tf.reshape(image, [28, 28])
    plt.imshow(outimage.eval(), cmap='gray')
    plt.show()
print('ok')
A few points you should note when implementing a GAN:
1. You need to use the same copies of the discriminator (i.e., share the same weights) when implementing the discriminator loss (in your case, Dl0 and Dl1 should share the same parameters).
2. The generator activation function should be sigmoid, not tanh, since the output of the generator should only vary between 0 and 1 (since it is an image).
3. When training the discriminator, you should only train the variables associated with the discriminator. Likewise, when training the generator, you should only train the variables associated with the generator.
4. Sometimes it is important to make sure that the discriminator is more powerful than the generator, as otherwise it would not have sufficient capacity to learn to distinguish accurately between generated and real samples (see the sketch after this list).
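As an illustration of the last point, one common way to keep the discriminator ahead is to run several discriminator updates per generator update. A minimal sketch against the training loop above, where k_d is a hypothetical hyperparameter (not part of the original code):
k_d = 3  # hypothetical: discriminator steps per generator step
for i in range(1000):
    for _ in range(k_d):
        mnist_data, _ = mnist.train.next_batch(batch_size)
        noise_org = np.random.randn(batch_size, 784)
        sess.run(D_train, feed_dict={G_in: noise_org, real: mnist_data})
    # one generator step after k_d discriminator steps
    noise_org = np.random.randn(batch_size, 784)
    sess.run(G_train, feed_dict={G_in: noise_org})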
These are only the basics of GANs; there are many other aspects you should consider when developing one. You can get a good introductory idea of GANs by reading the following two articles.
http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/
http://blog.evjang.com/2016/06/generative-adversarial-nets-in.html
Hope this helps.
This is the main part of my code.
I'm confused about the function shuffle_batch and about feed_dict.
In my code below, the features and labels I put into the function are lists. (I also tried arrays before, but it doesn't seem to matter.)
What I want to do is batch my testing data (6144,26) and training data (1024,13) into batches of shape (100,26) and (100,13), then set them as the feed_dict for the placeholders.
My questions are:
1. The outputs of the function tf.train.shuffle_batch are Tensors, but I cannot put Tensors in the feed_dict, right?
2. When I run the last two lines, the error says: got shape [6144, 26], but wanted [6144]. I know it may be a dimension error, but how can I fix it?
Thanks a lot.
import tensorflow as tf
import scipy.io as sio
#import signal matfile
#[('label', (8192, 13), 'double'), ('clipped_DMT', (8192, 26), 'double')]
file = sio.loadmat('DMTsignal.mat')
#get array(clipped_DMT)
data_cDMT = file['clipped_DMT']
#get array(label)
data_label = file['label']
with tf.variable_scope('split_cDMT'):
    cDMT_test_list = []
    cDMT_training_list = []
    for i in range(0,8192):
        if i % 4 == 0:
            cDMT_test_list.append(data_cDMT[i])
        else:
            cDMT_training_list.append(data_cDMT[i])

with tf.variable_scope('split_label'):
    label_test_list = []
    label_training_list = []
    for i in range(0,8192):
        if i % 4 == 0:
            label_test_list.append(data_label[i])
        else:
            label_training_list.append(data_label[i])
#set parameters
n_features = data_cDMT.shape[1]  # number of feature columns (the lists above have no .shape attribute)
n_labels = data_label.shape[1]
learning_rate = 0.8
hidden_1 = 256
hidden_2 = 128
training_steps = 1000
BATCH_SIZE = 100
#set Graph input
with tf.variable_scope('cDMT_Inputs'):
    X = tf.placeholder(tf.float32, [None, n_features], name='Input_Data')
with tf.variable_scope('labels_Inputs'):
    Y = tf.placeholder(tf.float32, [None, n_labels], name='Label_Data')
#set variables
#Initialize both W and b as tensors full of zeros
with tf.variable_scope('layerWeights'):
    h1 = tf.Variable(tf.random_normal([n_features,hidden_1]))
    h2 = tf.Variable(tf.random_normal([hidden_1,hidden_2]))
    w_out = tf.Variable(tf.random_normal([hidden_2,n_labels]))
with tf.variable_scope('layerBias'):
    b1 = tf.Variable(tf.random_normal([hidden_1]))
    b2 = tf.Variable(tf.random_normal([hidden_2]))
    b_out = tf.Variable(tf.random_normal([n_labels]))
#create model
def neural_net(x):
    layer_1 = tf.add(tf.matmul(x,h1),b1)
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1,h2),b2))
    out_layer = tf.add(tf.matmul(layer_2,w_out),b_out)
    return out_layer
nn_out = neural_net(X)
#loss and optimizer
with tf.variable_scope('Loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits=nn_out, labels=Y)))
with tf.name_scope('Train'):
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
with tf.name_scope('Accuracy'):
    correct_prediction = tf.equal(tf.argmax(nn_out,1), tf.argmax(Y,1))
    #correct_prediction = tf.metrics.accuracy(labels=Y, predictions=nn_out)
    acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Initialize
init = tf.global_variables_initializer()
# start computing & training
with tf.Session() as sess:
    sess.run(init)
    for step in range(training_steps):
        #set batch
        cmt_train_bat, label_train_bat = sess.run(tf.train.shuffle_batch([cDMT_training_list, label_training_list], batch_size=BATCH_SIZE, capacity=50000, min_after_dequeue=10000))
        cmt_test_bat, label_test_bat = sess.run(tf.train.shuffle_batch([cDMT_test_list, label_test_list], batch_size=BATCH_SIZE, capacity=50000, min_after_dequeue=10000))
From the Session.run doc:
The optional feed_dict argument allows the caller to override the value of tensors in the graph. Each key in feed_dict can be one of the following types:
If the key is a tf.Tensor, the value may be a Python scalar, string, list, or numpy ndarray that can be converted to the same dtype as that tensor. Additionally, if the key is a tf.placeholder, the shape of the value will be checked for compatibility with the placeholder.
...
So you are right: for X and Y (which are placeholders) you can't feed a tensor, and tf.train.shuffle_batch is not designed to work with placeholders.
You can follow one of two ways:
1. Get rid of the placeholders and use tf.TFRecordReader in combination with tf.train.shuffle_batch, as suggested here. This way you'll have only tensors in your model and you won't need to "feed" anything additionally.
2. Batch and shuffle the data yourself in numpy and feed it into the placeholders (see the sketch below). This takes just several lines of code, so I find it easier, though both paths are valid.
Take also into account performance considerations.
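A minimal sketch of the second option, reusing the names from the code above (the np.asarray conversion and the use of optimizer as the training op are assumptions):
import numpy as np

cDMT_training = np.asarray(cDMT_training_list)
label_training = np.asarray(label_training_list)

with tf.Session() as sess:
    sess.run(init)
    for step in range(training_steps):
        # sample a random batch of row indices and slice both arrays with it
        idx = np.random.choice(len(cDMT_training), BATCH_SIZE, replace=False)
        cmt_train_bat = cDMT_training[idx]
        label_train_bat = label_training[idx]
        sess.run(optimizer, feed_dict={X: cmt_train_bat, Y: label_train_bat})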
I'm trying to build a softmax regression model for CIFAR classification. At first, when I tried to pass my images and labels into the feed dictionary, I got an error saying that feed dictionaries do not accept Tensors. I then converted them into numpy arrays using .eval(), but the program hangs at the .eval() line and does not continue any further. How can I pass this data into the feed_dict?
CIFARIMAGELOADING.PY
import tensorflow as tf
import os
import tensorflow.models.image.cifar10 as cf
IMAGE_SIZE = 24
BATCH_SIZE = 128
def loadimagesandlabels(size):
    # Load the images from the CIFAR data directory
    FLAGS = tf.app.flags.FLAGS
    data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin')
    filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i) for i in xrange(1, 6)]
    filename_queue = tf.train.string_input_producer(filenames)
    read_input = cf.cifar10_input.read_cifar10(filename_queue)

    # Reshape and crop the image
    height = IMAGE_SIZE
    width = IMAGE_SIZE
    reshaped_image = tf.cast(read_input.uint8image, tf.float32)
    cropped_image = tf.random_crop(reshaped_image, [height, width, 3])

    # Generate a batch of images and labels by building up a queue of examples
    print('Filling queue with CIFAR images')
    num_preprocess_threads = 16
    min_fraction_of_examples_in_queue = 0.4
    min_queue_examples = int(BATCH_SIZE * min_fraction_of_examples_in_queue)
    images, label_batch = tf.train.batch([cropped_image, read_input.label], batch_size=BATCH_SIZE, num_threads=num_preprocess_threads, capacity=min_queue_examples + 3*BATCH_SIZE)
    print(images)
    print(label_batch)
    return images, tf.reshape(label_batch, [BATCH_SIZE])
CIFAR.PY
#Set up placeholder vectors for image and labels
x = tf.placeholder(tf.float32, shape = [None, 1728])
y_ = tf.placeholder(tf.float32, shape = [None,10])
W = tf.Variable(tf.zeros([1728,10]))
b = tf.Variable(tf.zeros([10]))
#Implement regression model. Multiply input images x by weight matrix W, add the bias b
#Compute the softmax probabilities that are assigned to each class
y = tf.nn.softmax(tf.matmul(x,W) + b)
#Define cross entropy
#tf.reduce sum sums across all classes and tf.reduce_mean takes the average over these sums
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_*tf.log(y), reduction_indices = [1]))
#Train the model
#Each training iteration we load 128 training examples. We then run the train_step operation
#using feed_dict to replace the placeholder tensors x and y_ with the training examples
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
#Open up a Session
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1000):
    images, labels = CIFARImageLoading.loadimagesandlabels(size=BATCH_SIZE)
    unrolled_images = tf.reshape(images, (1728, BATCH_SIZE))
    #convert labels to their one_hot representations
    # should produce [[1,0,0,...],[0,1,0,...],...]
    one_hot_labels = tf.one_hot(indices=labels, depth=NUM_CLASSES, on_value=1.0, off_value=0.0, axis=-1)
    print(unrolled_images)
    print(one_hot_labels)
    images_numpy, labels_numpy = unrolled_images.eval(session=sess), one_hot_labels.eval(session=sess)
    sess.run(train_step, feed_dict={x: images_numpy, y_: labels_numpy})
#Evaluate the model
#.equal returns a tensor of booleans, we want to cast these as floats and then take their mean
#to get percent correctness (accuracy)
print("evaluating")
test_images, test_labels = CIFARImageLoading.loadimagesandlabels(TEST_SIZE)
test_images_unrolled = tf.reshape(test_images, (1728, TEST_SIZE))
test_images_one_hot = tf.one_hot(indices= test_labels, depth=NUM_CLASSES, on_value=1.0, off_value= 0.0, axis=-1)
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: test_images_unrolled.eval(), y_: test_images_one_hot.eval()}))
There are a couple of things that you are not understanding really well. Throughout your graph you work with Tensors. You define Tensors either by using tf.placeholder and feeding them in session.run(feed_dict={}), or by using tf.Variable and initializing it with session.run(tf.initialize_all_variables()). You must feed your input this way, and it should be numpy arrays in the same shape as you expect in the placeholders. Here's a simple example:
images = tf.placeholder(type, [1728, BATCH_SIZE])
labels = tf.placeholder(type, [size])

'''
Build your network here so you have the variable: Output
'''

images_feed, labels_feed = CIFARImageLoading.loadimagesandlabels(size=BATCH_SIZE)
# here you can see your output
print(sess.run(Output, feed_dict={images: images_feed, labels: labels_feed}))
You do not feed tf functions with numpy arrays; you always feed them with Tensors. The feed_dict, on the other hand, is always fed with numpy arrays. The thing is: you never have to convert tensors to numpy arrays for the input; that does not make sense. Your input must be numpy arrays. If it's a list, you can use np.asarray(list); if it's a tensor, you are doing it wrong.
I do not know what CIFARImageLoading.loadimagesandlabels returns to you, but I imagine it's not a Tensor; it's probably a numpy array already, so just get rid of this .eval().
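One caveat worth adding (an observation about the queue-based loader above, not something the asker confirmed): loadimagesandlabels builds its batch with tf.train.string_input_producer and tf.train.batch, and those queues only produce data once the queue runners are started. That is the usual reason an .eval() on such a tensor hangs forever. A minimal sketch, assuming sess and the images tensor from the code above:
# start the queue runners before evaluating queue-fed tensors;
# without them, tf.train.batch blocks waiting for data
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

images_np = images.eval(session=sess)  # now returns instead of hanging

# shut the queues down cleanly when done
coord.request_stop()
coord.join(threads)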