I have two images, image1 of dimension (32,43,3) and image2 of dimension (67,86,3). How can I store these in a numpy array? Whenever I try to append to the array:
image = cv2.imread(image1, 0)
image = cv2.resize(image, (32, 43))
x_train = np.array(image.flatten())
x_train = x_train.reshape(-1, 3, 32, 43)
X_train = np.append(X_train, x_train)   # X_train is my array
image = cv2.imread(image2, 0)
image = cv2.resize(image, (67, 86))
x_train = np.array(image.flatten())
x_train = x_train.reshape(-1, 3, 67, 86)
X_train = np.append(X_train, x_train)
ValueError: total size of new array must be unchanged.
I want X_train in shape (-1, depth, height, width) so that I can feed it into my neural network. Is there any way to store images of different dimensions in an array and feed them into a neural network?
Don't use np.append. If you must join arrays, start with np.concatenate. It'll force you to pay more attention to the compatibility of dimensions.
You can't join 2 arrays with shapes (32,43,3) and (67,86,3) to make a larger array of some compatible shape. The only dimension they share is the last.
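To see the difference, here is a small sketch with placeholder arrays of the question's shapes (np.append silently flattens, while np.concatenate refuses the mismatched shapes):
a = np.zeros((32, 43, 3))
b = np.zeros((67, 86, 3))
np.append(a, b).shape            # (21414,) -- both arrays are flattened and joined
np.concatenate([a, b], axis=0)   # ValueError: sizes on the non-concatenation axes differ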
These reshapes don't make sense either: (-1,3,32,43), (-1,3,67,86).
It may run, but it also messes up the 'image': you aren't just adding a 4th dimension. It looks like you want to do some axis swapping or a transpose as well. Practice with some small arrays so you can see what's happening, e.g. (2,4,3).
What final shape do you expect for X_train?
You can put these two images in an object dtype array, which is basically the same as the list [image1, image2]. But I doubt your neural net can do anything practical with that.
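A minimal sketch of that object-dtype option, where image1 and image2 stand for the two loaded arrays:
X_train = np.empty(2, dtype=object)
X_train[0] = image1   # shape (32, 43, 3)
X_train[1] = image2   # shape (67, 86, 3)
# X_train.shape is (2,); each element keeps its own shape, just like the list [image1, image2]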
If you reshaped the (32,43,3) array to (16,86,3) you could concatenate that with (67,86,3) on axis=0 to produce a (83,86,3) array. If you needed the 3 to be first, I'd use np.transpose(..., (2,0,1)).
Conversely, you could reshape (67,86,3) to (2*67,43,3), i.e. (134,43,3), and concatenate that with the (32,43,3) array.
Padding the (32,43,3) array out to (32,86,3) is another option.
Joining them on a new 4th dimension requires that the number of 'rows' match as well as the number of 'columns'.
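A sketch of the reshape-and-concatenate route, assuming img_a is the (32,43,3) array and img_b the (67,86,3) one:
img_a2 = img_a.reshape(16, 86, 3)                  # legal because 32*43 == 16*86
joined = np.concatenate([img_a2, img_b], axis=0)   # (83, 86, 3)
channels_first = np.transpose(joined, (2, 0, 1))   # (3, 83, 86), if the 3 has to come first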
I have a two-dimensional np.array, where cells are filled with floats or 1d arrays.
In the two-dimensional array, the first dimension is samples and the second dimension is sample descriptions from different sources. Each cell is either a float or a string represented as an ASCII-encoded array.
Example:
array([[3.2, array([1,2,5,1]), array([1,6,9]), array([1,2])],
       [2.1, array([1,2,9]), array([8,3,5,8]), array([1,3])],
       [1.2, array([1,1]), array([4,2,6,4,5]), array([2,2,4])]])
The first three columns are my inputs, the fourth is my output.
I want to feed a seq2seq LSTM in TensorFlow with this data.
As a first approach, I tried to convert each 1d array in the cells to a Tensor, but I get an error:
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object
type tensorflow.python.framework.ops.EagerTensor).
I'm wondering if it is necessary to unpack the 1d arrays in the cells into a new dimension. How can that be done, considering that the 1d arrays in the cells have different lengths?
Somewhere I've read that with batch_size=1 it is possible to feed an LSTM with arrays of different dimensions. Does anyone have experience with that?
Thanks for your help.
I have a tensor of shape "torch.Size([2, 2, 3])" and another tensor of shape "torch.Size([2, 1, 3])". I want a concatenated tensor of shape "torch.Size([2, 2, 6])".
For example:
a=torch.tensor([[[2,3,5],[12,13,15]],[[20,30,50],[120,130,150]]])
b=torch.tensor([[[99,99,99]],[[999,999,999]]])
I want the output as: [[[99,99,99,2,3,5],[99,99,99,12,13,15]],[[999,999,999,20,30,50],[999,999,999,120,130,150]]]
I have written an O(n²) solution using two for loops, but it is taking a lot of time with millions of calculations. Can anyone help me do this efficiently? Maybe with some matrix calculation trick for tensors?
To exactly match the example you have provided:
c = torch.cat([b.repeat([1, a.shape[1] // b.shape[1], 1]), a], 2)
The reasoning behind this is that the concatenate operation in pytorch (and numpy and other libraries) will complain if the dimensions of the two tensors in the non-specified axes (here 0 and 1) do not match. Therefore, you have to repeat the tensor along the mismatched axis (axis 1, hence the second element of the repeat list) in order to make the dimensions align. Note that this solution only works if the middle dimension of a is evenly divisible by the middle dimension of b.
In newer versions of pytorch, this can also be done using the torch.tile() function.
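A sketch of the torch.tile() variant, under the same assumption that a.shape[1] is evenly divisible by b.shape[1]:
import torch
a = torch.tensor([[[2, 3, 5], [12, 13, 15]], [[20, 30, 50], [120, 130, 150]]])
b = torch.tensor([[[99, 99, 99]], [[999, 999, 999]]])
c = torch.cat([b.tile((1, a.shape[1] // b.shape[1], 1)), a], dim=2)
# c.shape == torch.Size([2, 2, 6])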
I've got a numpy array of size (3275412, 50, 22), which represents my data reshaped for LSTM purposes, and I've got a target vector of shape (3275412,).
I want to balance my data so that there is approximately the same number of data with target 0 and 1.
The way I prepared the data means I cannot do this balancing operation before reshaping.
Firstly, I wanted to apply the make_imbalance function (see this link for details), but I couldn't apply it to my data since it expects a 2-D array (I got an error).
My question is: what's the most efficient way to do it for a 3D array?
My thought was to first "flatten" the 3-D array into a 2-D array by merging the second and third dimensions (but I don't know how, so please tell me), then apply make_imbalance, and then reshape the result back to a 3-D array (again, I don't know how). It seems a little bit tricky, however...
So any help would be appreciated, either with another balancing method or with reshaping 3D->2D and back.
You can use np.reshape with -1 for unknown dimension size.
data2d = data3d.reshape(data3d.shape[0], -1)
will give you a 2d array of shape (n_samples, n_features), with the second and the third dimensions merged.
data2d_new, y_new = make_imbalance(data2d, y)
After the make_imbalance call, you will get a 2d array with shape (n_samples_new, n_features), where the number of rows is "unknown" but you still know the other two 'feature' dimensions of the original 3d array, so
data3d_new = data2d_new.reshape(-1, data3d.shape[1], data3d.shape[2])
will give you back the balanced 3d dataset.
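Putting the pieces together, a sketch where data3d is the (3275412, 50, 22) array, y is the target vector, make_imbalance comes from imbalanced-learn, and the sampling_strategy dict is only a placeholder:
from imblearn.datasets import make_imbalance
data2d = data3d.reshape(data3d.shape[0], -1)        # (n_samples, 50*22)
data2d_new, y_new = make_imbalance(data2d, y,
                                   sampling_strategy={0: 5000, 1: 5000})
data3d_new = data2d_new.reshape(-1, data3d.shape[1], data3d.shape[2])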
Hello, I am a newbie with TensorFlow and currently I am working with colour images and their PCAs.
I have extracted the PCAs for the "Red", "Green" and "Blue" components and also computed the weights associated with each of those components.
After doing all of the above, I want to combine the three 2D matrices into a single 3D matrix.
For TensorFlow that would be a 3D tensor.
def multi(h0, ppca, mu, i, scope=None):
    with tf.variable_scope(scope or "multi"):
        return (tf.matmul(ppca[:, :, 0], h0[i, :, :, 0]) + tf.reshape(mu[:, 0], [4096, 1]),
                tf.matmul(ppca[:, :, 1], h0[i, :, :, 1]) + tf.reshape(mu[:, 1], [4096, 1]),
                tf.matmul(ppca[:, :, 2], h0[i, :, :, 2]) + tf.reshape(mu[:, 2], [4096, 1]))
So from the above function I get three different 2D tensors, and I want to combine them into a single 3D tensor with dimensions [4096,1,3].
How can I do that?
Any help is highly appreciated.
You need to concat them like this:
three_d_image = tf.concat(0, [[r], [g], [b]])
This tells TensorFlow to concatenate them along the first dimension and to treat each tensor as a matrix (note that the axis-first argument order used here is from older TensorFlow versions; from TensorFlow 1.0 on, the call is tf.concat(values, axis)).
Doing the same without the additional brackets around the r, g, b tensors will try to concatenate them into one large 2D matrix.
A clean, easy way to do it is the tf.stack operation (tf.pack in older versions of TensorFlow); it stacks all tensors along a new dimension. If you want the new dimension to come after all the existing ones, set the axis argument to the number of dimensions of your tensors.
three_d_image = tf.stack([r,g,b], axis=2)
One solution is to add one more dimension to each of your 2D tensors, so you have three 3D tensors of shape [4096,1,1]; concatenating those three along axis 2 (tf.concat(2, matrices) in the old argument order) then gives you [4096,1,3], as sketched below.
A second solution is to concatenate along axis 1 (tf.concat(1, matrices)) and then reshape the result to 3D.
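A minimal sketch of the first (expand-then-concatenate) option above, written with the values-first argument order of TensorFlow 1.0+ and assuming r, g and b are the three [4096, 1] tensors:
r3 = tf.expand_dims(r, axis=2)   # [4096, 1, 1]
g3 = tf.expand_dims(g, axis=2)
b3 = tf.expand_dims(b, axis=2)
three_d_image = tf.concat([r3, g3, b3], axis=2)   # [4096, 1, 3]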
At the link
http://scipy-lectures.github.io/advanced/advanced_numpy/
there is the statement:
there is no way to represent the array c given one stride and the block of memory for a. Therefore, the reshape operation needs to make a copy here.
How does this copy work inside numpy? What is its time complexity?
How does numpy know that the new array can't be represented using the same block of memory?
If you have two dimensions of an array, with shapes sh0 and sh1 and strides st0 and st1, that you want to merge into a single dimension of shape sh0*sh1, the condition to be able to do so without a copy is that st0 == sh1*st1. Notice that the order the dimensions come up in the shape is relevant, and in a C order array with positive strides, dimension 0 precedes dimension 1.
There may be situations that require a more subtle analysis, about which stride and shape to multiply to compare to which stride, if you have Fortran order arrays, or negative strides, but the basic premise is still mostly the same.
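To see the condition in action, here is a small sketch using np.shares_memory; when the condition fails, reshape has to copy every element, so the copy is linear in the array size:
import numpy as np
a = np.arange(24).reshape(2, 3, 4)   # C order; the first two dims satisfy st0 == sh1*st1
flat_a = a.reshape(6, 4)
print(np.shares_memory(a, flat_a))   # True -- no copy needed
b = a.transpose(1, 0, 2)             # same memory, swapped strides
flat_b = b.reshape(6, 4)
print(np.shares_memory(b, flat_b))   # False -- reshape had to make a copy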