I am trying to convert an array of 3D tensors (images) into a single 4D tensor, so I can pass it to model.fit, which does not seem to accept arrays of Tensor3D.
The idea would be
4dTensor = tf.tensor4d(batch)
I am actually using javascript, but either a python or js solution would probably work as the Tensorflow API is similar.
The error this produces is:
Argument of type 'Tensor4D' is not assignable to parameter of type 'Tensor3D[]'.
Type 'Tensor<Rank.R4>' is missing the following properties from type 'Tensor3D[]': length, pop, push, join, and 26 more.ts(2345)
You may want to use tf.stack():
4dTensor = tf.stack(batch)
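Since the question says a Python solution would work as well, here is a minimal Python sketch of the same idea (the 28x28x3 image size and the batch of three tensors are made up for illustration):

import tensorflow as tf

batch = [tf.zeros((28, 28, 3)) for _ in range(3)]  # three 3D image tensors
batch_4d = tf.stack(batch)                         # stacks along a new leading axis
print(batch_4d.shape)                              # (3, 28, 28, 3)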
I'm actually trying to build a Keras model. As I understand it, a Keras model needs a list of np.arrays (or a single NumPy array) as x. In my case, x looks like this:
print(training.dtype)
object
print(training.shape)
(406,)
print(training[0].dtype)
float64
print(training[0].shape)
(5140, 5)
This is the shape of my training data (x). When I try to train the model, I get this error:
return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.
That's why I think I prepared the data wrong. If I try to convert it to float32 with .astype, I get the same error.
Thanks for your help!
If the entries in train2 do not all have the same size, you will need to pad them. As this is something that needs to be done quite regularly, Keras offers a function for this: pad_sequences
Once they all are the same size, np.array(train2) will create one single numpy array that you can pass to model.fit().
Depending on your model, the extra data you are adding this way may or may not be an issue. A common way to deal with this is Masking. Use this to generate a mask that will automatically be passed down the model so that certain values (the values you added via padding) are ignored. Note however, that not all layers support masking, so maybe this is not an option for you.
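A minimal sketch of how the two could fit together, assuming the object array from the question is still called training and that a sequence model (here an LSTM, chosen purely for illustration) is being trained; y stands for the corresponding targets:

import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import layers, models

# pad every sample with zeros to the length of the longest one
padded = pad_sequences(training, padding='post', dtype='float32', value=0.0)
print(padded.shape)  # (406, max_len, 5) -- now a single float32 array

model = models.Sequential([
    # Masking: timesteps whose features are all 0.0 (the padding) are ignored
    # by downstream layers that support masking
    layers.Masking(mask_value=0.0, input_shape=padded.shape[1:]),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
# model.fit(padded, y, ...)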
The issue is not the type conversion. The issue is that the samples in the batch are not all the same size, so no single np array can be created. You can solve this by using padding, as mentioned in the comments. Have a look at Keras' pad_sequences and at "What does Keras.io.preprocessing.sequence.pad_sequences do?".
Why does numpy.zeros need a double set of parentheses around the shape? From what I read online, I understand that it is because the shape is a tuple, but the definition of a tuple as just a collection of objects does not make sense to me in this context. Have I come across the wrong reason? If not, could someone elaborate on this with an example?
I am using numpy.
w = numpy.zeros((2,2))
The error message I get when using single set of parentheses is:
"TypeError: data type not understood."
From the numpy documentation (https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.zeros.html):
numpy.zeros(shape, dtype=float, order='C')
The first argument is the shape of the array and the second is the data type. When you call it with only a single set of parentheses, numpy.zeros(2, 2), the first 2 is taken as the shape and the second 2 as the dtype, which is why it complains about not recognizing a data type (2 is not a data type).
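A short illustration of the difference (the values are arbitrary):

import numpy as np

w = np.zeros((2, 2))   # shape passed as a single tuple -> a 2x2 array of zeros
v = np.zeros(5)        # a plain integer is fine for a 1-D shape
# np.zeros(2, 2)       # TypeError: the second positional argument is read as
                       # the dtype, and 2 is not a data type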
I have a tensorflow placeholder defined as:
fs = tf.placeholder(tf.float32, shape=(nn, mm))
Further on in the code, I want to feed it.
I obtain a numpy array "features" with shape = (nn, mm) and write:
feed_dict.update({fs, features})
However, I get the error:
TypeError: unhashable type: 'numpy.ndarray'
I had already been able to feed a list of length nn to a placeholder with shape = (nn,), so before feeding the numpy array to the placeholder I wrote:
features = features.tolist()  # convert to a list instead of a numpy array
Again, I got a similar error:
TypeError: unhashable type: 'list'
So, I was wondering how can I feed a 2d numpy array into a 2d tensorflow placeholder?
I have also checked that both use np.float32 and tf.float32 dtypes.
I am using python3 with tensorflow version 1.1
There is a minor typo in your code. Where you wrote:
feed_dict.update({fs, features})
you should have written:
feed_dict.update({fs: features})
Note that the comma is replaced by a colon.
What's going on
In your code you accidentally tried to create a set containing fs and features, where what you meant to do was create a dictionary. In order to be placed in a set, a python object must implement a method called __hash__. Not all objects implement this method (for good reasons) and that includes lists and numpy arrays. So the reason you got the error message about an "unhashable type" is because you inadvertently tried to create a set containing features.
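The same distinction can be seen with plain Python, no TensorFlow needed (a minimal sketch; the string "fs" merely stands in for the placeholder):

import numpy as np

features = np.zeros((3, 4))
key = "fs"                    # any hashable object behaves the same way

as_dict = {key: features}     # dict literal: the value does not have to be hashable

try:
    as_set = {key, features}  # set literal: every element must be hashable
except TypeError as err:
    print(err)                # unhashable type: 'numpy.ndarray'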
I have a 100x200 input, and a 1x100 target matrix that I am using to run a gridsearch and create a classifier in python. However, I get errors that my training set of target data is not an array. I've tried:
target=np.asarray(matTarget)
where matTarget is just my target imported from Matlab using scipy.io.loadmat.
My exact error is
len() of unsized object
When I try target.size I get a blank size as well.
If I do not do the array conversion, then I get
Expected array-like (array or non-string sequence), got {'__header__': b'Matlab matfile ... Array([[1],[1]...)}
I still have the original matrix in Matlab and have also tried using np.array instead of asarray.
If I do print(matTarget.keys()) then I get dict_keys(['__header__', '__version__', '__globals__', 'y_train'])
y_train is also the name of the .mat file itself.
According to the documentation of scipy.io.loadmat it returns a dictionary where the values are the contained matrices.
Returns: mat_dict : dict
dictionary with variable names as keys, and loaded matrices as values.
So you need to select your matrix by its name before using it with numpy:
matrix = matTarget['name of matrix']
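Putting it together, a sketch assuming both the file and the contained variable are named y_train, as described in the question:

import numpy as np
from scipy.io import loadmat

matTarget = loadmat('y_train.mat')
print(matTarget.keys())                    # __header__, __version__, __globals__, y_train

target = np.asarray(matTarget['y_train'])  # now a real numeric array
print(target.shape, target.dtype)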
How can I visualize a TensorType(float64, matrix) as an image using imshow? I cannot use imshow directly on the tensor, since it gives me this error:
mat is not a numpy array, neither a scalar
When I try to convert it to an array using numpy.asarray, I get
mat data type = 17 is not supported
Is there any way to convert to uint8 datatype?
Theano tensors are symbolic variables, with no value attached to them. If you want to plot something it is probably a shared variable (the weights of your network), in which case you can grab the values with my_shared.get_value(), or the output of the network when you pass an input, in which case I believe it should already be a numpy array.
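A minimal sketch of the shared-variable case (the 28x28 weight matrix is made up for illustration):

import numpy as np
import matplotlib.pyplot as plt
import theano

W = theano.shared(np.random.randn(28, 28), name='W')  # a hypothetical weight matrix

values = W.get_value()           # a plain numpy array (float64), fine for imshow
plt.imshow(values, cmap='gray')
plt.show()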