TensorFlow running error - Python

I want to know why this error occurred.
The input is a set of image files (24*375*3 (width, height, channels), *.png) and the output is a label file (.csv) containing Boolean (0 or 1) labels.
Here is my GitHub:
https://github.com/dldudwo0805/DeepLearningPractice
Please give me advice.
The error message is:
The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles.

y_data = tf.reshape(y_data, [50, 1])
tf.reshape returns a tf.Tensor, so after this line y_data is a tensor and is no longer a valid feed value. Try np.reshape rather than tf.reshape.
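A minimal self-contained sketch of that fix (TensorFlow 1.x); the placeholders X, Y, the loss, and the data here are illustrative, not taken from the linked repository:
import numpy as np
import tensorflow as tf

# Illustrative graph; the real model lives in the linked repository.
X = tf.placeholder(tf.float32, [None, 1])
Y = tf.placeholder(tf.float32, [None, 1])
loss = tf.reduce_mean(tf.square(X - Y))

x_data = np.random.rand(50, 1).astype(np.float32)
y_data = np.random.randint(0, 2, size=50).astype(np.float32)

# Reshape with numpy so y_data stays an ndarray (a valid feed value);
# tf.reshape would turn it into a tf.Tensor, which cannot be fed.
y_data = np.reshape(y_data, (50, 1))

with tf.Session() as sess:
    print(sess.run(loss, feed_dict={X: x_data, Y: y_data}))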


How to fix "ValueError: Expected 2D array, got 1D array instead"?

When running:
scaler = StandardScaler().fit(train)
I am getting this error:
ValueError: Expected 2D array, got 1D array instead
After that I tried:
train = train.array.reshape(-1, 1)
And I got:
AttributeError: 'list' object has no attribute 'array'
How can I reshape my data to fix the ValueError?
Try train = np.array(train).reshape(-1, 1).
It appears train is originally a list, so you must first convert it into an array using np.array(). Then you can use the .reshape() method to change the dimensions. .reshape(-1, 1) asks NumPy to create a 2-dimensional array with a single column; the -1 tells NumPy to infer the length of the first dimension (the number of rows), since you only want one column. Please see this answer for a more complete explanation.
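A minimal sketch of the fix, assuming train starts out as a plain Python list (the data values are illustrative):
import numpy as np
from sklearn.preprocessing import StandardScaler

train = [1.0, 2.0, 3.0, 4.0]             # illustrative data; originally a list
train = np.array(train).reshape(-1, 1)   # now a 2D array with one column, shape (4, 1)
scaler = StandardScaler().fit(train)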

Slice Variable via another Variable Tensorflow

I have numpy code for a project and wanted to convert it to TensorFlow.
I have a 2D tensor, e.g. x = [[0,1],[1,2],[2,3]], and I want to use it to slice a 3D tensor y, e.g. y[x[:,0], x[:,1], :], but it doesn't work. The error is:
ValueError: Shape must be rank - but is rank - for 'strided_slice_?' (op: 'StridedSlice') with input shapes: [-], [-], [-], [-].
Can anyone please help? Thanks.
Strided slicing needs scalar (rank-0) indices, so you can't index into y with rank-1 tensors like x[:, 0].
Try y[x[0, 0], x[0, 1], :] for a quick test.
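A minimal sketch of that quick test (TensorFlow 1.x; the shape and contents of y are illustrative):
import tensorflow as tf

x = tf.constant([[0, 1], [1, 2], [2, 3]])   # 2D index tensor
y = tf.random_uniform([4, 4, 7])            # illustrative 3D tensor

# x[0, 0] and x[0, 1] are rank-0 (scalar) tensors, so strided slicing accepts them:
row = y[x[0, 0], x[0, 1], :]                # shape [7]

with tf.Session() as sess:
    print(sess.run(row))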

How to use tf.data.Dataset.padded_batch with a nested shape?

I am building a dataset where each element consists of two tensors of shape [batch,width,height,3] and [batch,class]. For simplicity let's say class = 5.
What shape do you pass to dataset.padded_batch(1000, shapes) so that the image is padded along the width/height/3 axes?
I have tried the following:
tf.TensorShape([[None,None,None,3],[None,5]])
[tf.TensorShape([None,None,None,3]),tf.TensorShape([None,5])]
[[None,None,None,3],[None,5]]
([None,None,None,3],[None,5])
(tf.TensorShape([None,None,None,3]),tf.TensorShape([None,5]))
Each raises a TypeError.
The docs state:
padded_shapes: A nested structure of tf.TensorShape or tf.int64 vector
tensor-like objects representing the shape to which the respective
component of each input element should be padded prior to batching.
Any unknown dimensions (e.g. tf.Dimension(None) in a tf.TensorShape or
-1 in a tensor-like object) will be padded to the maximum size of that dimension in each batch.
The relevant code:
dataset = tf.data.Dataset.from_generator(generator,tf.float32)
shapes = (tf.TensorShape([None,None,None,3]),tf.TensorShape([None,5]))
batch = dataset.padded_batch(1,shapes)
Thanks to mrry for finding the solution. It turns out that the output types passed to from_generator have to match the number of tensors in each element the generator yields.
New code:
dataset = tf.data.Dataset.from_generator(generator,(tf.float32,tf.float32))
shapes = (tf.TensorShape([None,None,None,3]),tf.TensorShape([None,5]))
batch = dataset.padded_batch(1,shapes)
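For reference, a minimal self-contained sketch of that structure (the generator, its sizes, and the batch size here are illustrative):
import numpy as np
import tensorflow as tf

def generator():
    # Yield (image, label) pairs whose spatial size varies between elements.
    for _ in range(10):
        h, w = np.random.randint(8, 16, size=2)
        image = np.random.rand(1, h, w, 3).astype(np.float32)   # [batch, height, width, 3]
        label = np.random.rand(1, 5).astype(np.float32)         # [batch, class]
        yield image, label

dataset = tf.data.Dataset.from_generator(generator, (tf.float32, tf.float32))
shapes = (tf.TensorShape([None, None, None, 3]), tf.TensorShape([None, 5]))
# Unknown (None) dimensions are padded to the largest size in each batch.
batch = dataset.padded_batch(2, shapes)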
TensorShape doesn't accept nested lists. tf.TensorShape([None, None, None, 3, None, 5]) and TensorShape(None) (note no []) are legal.
Combining these two tensors sounds odd to me, though. I'm not sure what you're trying to accomplish, but I'd recommend trying to do it without combining tensors of different dimensions.

Regarding the proper way of broadcasting an input array from one shape into another shape

I got the following error message while calling the function below.
The function is supposed to resize a given image set and put the transformed images into a new set, imgs_p.
For instance, the input imgs has shape (5635,1,420,580) and I want to transform it to (5635,64,80,1). This is what I did, but I got the error ValueError: could not broadcast input array from shape (80,64) into shape (80,1).
How can I solve this problem? Thanks.
def preprocess(imgs):
    imgs_p = np.ndarray((imgs.shape[0], img_rows, img_cols, imgs.shape[1]), dtype=np.uint8)
    print('imgs_p: ', imgs_p.shape)
    for i in range(imgs.shape[0]):
        print('imgs[i,0]: ', imgs[i,0].shape)
        imgs_p[i,0] = resize(imgs[i,0], (img_rows, img_cols))
    return imgs_p
I presume you want to move the "1" (channel) dimension to the last position:
z = np.moveaxis(z, 1, -1)   # shape becomes (5635, 420, 580, 1)
after which you can run a for-loop over each image and resize it, using either skimage or scipy.ndimage.
Be careful with downsampling! You probably want to apply a Gaussian blur first to make sure all the data is taken into account.
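A minimal sketch of that approach, assuming skimage.transform.resize and the 64x80 target from the question (the anti_aliasing flag is one way to get the suggested blur before downsampling):
import numpy as np
from skimage.transform import resize

def preprocess(imgs, img_rows=64, img_cols=80):
    # (N, 1, 420, 580) -> (N, 420, 580, 1): move the channel axis to the end.
    imgs = np.moveaxis(imgs, 1, -1)
    imgs_p = np.ndarray((imgs.shape[0], img_rows, img_cols, 1), dtype=np.uint8)
    for i in range(imgs.shape[0]):
        # anti_aliasing applies a Gaussian blur before downsampling.
        imgs_p[i, :, :, 0] = resize(imgs[i, :, :, 0], (img_rows, img_cols),
                                    preserve_range=True, anti_aliasing=True)
    return imgs_p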

TypeError from Theano while using a 3D numpy array

I am trying something similar to the code below:
datax=theano.shared(value=rng.rand(5,500,45))
x=T.dmatrix('x')
i=T.lscalar('i')
W=theano.shared(value=rng.rand(90,45,500))
Hb=theano.shared(value=np.zeros(90))
w_v_bias=T.dot(W,x).sum(axis=2).sum(axis=1)+Hb
z=theano.function([i],w_v_bias,givens={x:datax[i*5:(i+1)*5]})
z(0)
Theano is giving me a TypeError with the message:
Cannot convert Type TensorType(float64, 3D) (of Variable Subtensor{int64:int64:}.0) into Type TensorType(float64, matrix). You can try to manually convert Subtensor{int64:int64:}.0 into a TensorType(float64, matrix)
What I am doing wrong here?
Edit
As mentioned by daniel, changing x to dtensor3 results in another error:
ValueError: Input dimension mis-match. (input[0].shape[1] = 5, input[1].shape[1] = 90)
Apply node that caused the error: Elemwise{add,no_inplace}(Sum{axis=[1], acc_dtype=float64}.0, DimShuffle{x,0}.0)
Another way is to modify my train function, but then I won't be able to do batch learning:
z=theano.function([x],w_v_bias)
z(datax[0])
I am trying to implement RBM with integer values for visible units.
The problem is that datax is a 3D tensor and datax[index*5:(index+1)*5] is also a 3D tensor, but you're trying to assign it to x, which is a 2D tensor (i.e. a matrix).
Changing
x = T.dmatrix('x')
to
x = T.dtensor3('x')
solves this problem but creates a new one because the dimensions of W and x don't match up to perform the dot product. It's unclear what the desired outcome is.
Solved it after a few rounds of trial and error.
What I needed was to change
x=T.dmatrix('x')
w_v_bias=T.dot(W,x).sum(axis=2).sum(axis=1)+Hb
to
x=T.dtensor3('x')
w_v_bias=T.dot(x,W).sum(axis=3).sum(axis=1)+Hb
Now it produces a (5, 90) array after adding Hb elementwise to each of the five dot-product vectors.
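For reference, a runnable sketch assembling the question's setup with that fix (shapes as stated above):
import numpy as np
import theano
import theano.tensor as T

rng = np.random.RandomState(0)

datax = theano.shared(value=rng.rand(5, 500, 45))
x = T.dtensor3('x')
i = T.lscalar('i')
W = theano.shared(value=rng.rand(90, 45, 500))
Hb = theano.shared(value=np.zeros(90))

# T.dot contracts the last axis of x (45) with the second-to-last axis of W (45),
# giving shape (5, 500, 90, 500); the two sums reduce it to (5, 90).
w_v_bias = T.dot(x, W).sum(axis=3).sum(axis=1) + Hb

z = theano.function([i], w_v_bias, givens={x: datax[i * 5:(i + 1) * 5]})
print(z(0).shape)   # (5, 90)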
