Matrix multiplication (element-wise) from numpy to Pytorch - python

I have two numpy arrays (an image and an environment map),
MatA
MatB
both with shape (256, 512, 3).
When I did the multiplication (element-wise) with numpy:
prod = np.multiply(MatA,MatB)
I got the desired result (visualized via Pillow after converting back to an Image).
But when I did it using PyTorch, I got a really strange result (not even close to the one above).
I did it with the following code:
MatATensor = transforms.ToTensor()(MatA)
MatBTensor = transforms.ToTensor()(MatB)
prodTensor = MatATensor * MatBTensor
For some reason, the shape of both MatATensor and MatBTensor is
torch.Size([3, 256, 512])
The same goes for prodTensor.
When I tried to reshape to (256, 512, 3), I got an error.
Is there a way to get the same result?
I am new to pytorch, so any help would be appreciated.

If you read the documentation of transforms.ToTensor() you'll see that this transformation not only converts a numpy array to a torch.FloatTensor (scaling uint8 values to the range [0, 1]), but also transposes its dimensions from HxWx3 to 3xHxW.
To "undo" this you'll need to
prodAsNp = (prodTensor.permute(1, 2, 0) * 255).to(torch.uint8).numpy()
See permute for more information.
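Putting it together, a minimal round trip might look like this (a sketch, assuming MatA and MatB are uint8 HxWx3 arrays; the random arrays below just stand in for the real image and environment map):
import numpy as np
import torch
from torchvision import transforms
MatA = np.random.randint(0, 256, (256, 512, 3), dtype=np.uint8)  # stand-in image
MatB = np.random.randint(0, 256, (256, 512, 3), dtype=np.uint8)  # stand-in environment map
a = transforms.ToTensor()(MatA)  # scales to [0, 1] and transposes to [3, 256, 512]
b = transforms.ToTensor()(MatB)
prod = a * b  # element-wise product, shape [3, 256, 512]
prodAsNp = (prod.permute(1, 2, 0) * 255).to(torch.uint8).numpy()  # back to (256, 512, 3), values in [0, 255]
print(prodAsNp.shape)  # (256, 512, 3)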

I suggest you use torch.from_numpy, which will easily convert your ndarrays to torch tensors. As in:
In[1]: MatA = np.random.rand(256, 512, 3)
In[2]: MatB = np.random.rand(256, 512, 3)
In[3]: MatA_torch = torch.from_numpy(MatA)
In[4]: MatB_torch = torch.from_numpy(MatB)
In[5]: mul_np = np.multiply(MatA, MatB)
In[6]: mul_torch = MatA_torch * MatB_torch
In[7]: torch.equal(torch.from_numpy(mul_np), mul_torch)
Out[7]: True
In[8]: mul_torch.shape
Out[8]: torch.Size([256, 512, 3])
If you want it back to numpy, just do:
mul_torch.numpy()

Related

Cannot reshape numpy array to vector

I am trying to reshape an (N, 1) array d to an (N,) vector. According to this solution and my own experience with numpy, the following code should convert it to a vector:
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.datasets import make_circles
X, labels = make_circles(n_samples=150, noise=0.1, factor=0.2)
A = kneighbors_graph(X, n_neighbors=5)
d = np.sum(A, axis=1)
d = d.reshape(-1)
However, d.shape gives (1, 150)
The same happens when I exactly replicate the code from the linked solution. Why is the numpy array not reshaping?
The issue is that the sklearn function returns the nearest neighbor graph as a scipy.sparse.csr_matrix. Applying np.sum to it returns a numpy.matrix, a data type that (in my opinion) should no longer exist. numpy.matrix objects are incompatible with just about everything, and numpy operations on them return unexpected results.
The solution was converting the scipy.sparse.csr_matrix to a numpy.ndarray:
A = kneighbors_graph(X, n_neighbors=5)
A = A.toarray()
d = np.sum(A, axis=1)
d = d.reshape(-1)
Now we have d.shape = (150,)
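As an aside, if the graph is too large to densify, you can also sum the sparse matrix directly and convert the resulting numpy.matrix to a plain array (a sketch, reusing A from above):
d = np.asarray(A.sum(axis=1)).reshape(-1)  # A.sum returns a numpy.matrix; asarray makes it a plain ndarray
d.shape  # (150,)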

How To ReShape a Numpy Array in Python

I have a numpy array of images with the shape of (5879,). Inside every index of the numpy array, I have the Pixels of the image with a shape of (640,640,3).
I want to reshape the complete array in such a way that the shape of the numpy array becomes (5879,640,640,3).
Please check whether the code below works for you:
import numpy as np
b = np.array([5879])
b.shape
output (1,)
a = np.array([[640],[640],[3]])
a = a.reshape((a.shape[0], 1))
a.shape
output (3, 1)
c = np.concatenate((a,b[:,None]),axis=0)
c.shape
Output:
(4, 1)
np.concatenate((a,b[:,None]),axis=0)
output
array([[ 640],
[ 640],
[ 3],
[5879]])
You want to stack your images along the first axis, into a 4D array. However, your images are all 3D.
So, first you need to add a leading singleton dimension to all images, and then concatenate them along this axis:
imgs = [i_[None, ...] for i_ in orig_images] # add singleton dim to all images
x = np.concatenate(imgs, axis=0) # stack along the first axis
Edit:
Based on Mad Physicist's comment, it seems like using np.stack is more appropriate here: np.stack takes care of adding the leading singleton dimension for you:
x = np.stack(orig_images, axis=0)
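For example, with a few dummy images standing in for the real ones:
import numpy as np
orig_images = [np.zeros((640, 640, 3)) for _ in range(4)]  # stand-ins for the 5879 real images
x = np.stack(orig_images, axis=0)
x.shape  # (4, 640, 640, 3) -- (5879, 640, 640, 3) with the full set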

Numpy: creating batch of numpy arrays within another numpy array (reshaping)

I have a numpy array batch of shape (32,5). Each element of the batch consists of a numpy array batch_elem = [s,_,_,_,_] where s = [img,val1,val2] is a 3-dimensional numpy array and _ are simply scalar values.
img is an image (numpy array) with dimensions (84,84,3)
I would like to create a numpy array with the shape (32,84,84,3). Basically I want to extract the image information within each batch and transform it into a 4-dimensional array.
I tried the following:
b = np.vstack(batch[:,0]) #this yields a b with shape (32,3), type: <class 'numpy.ndarray'>
Now I would like to access the images (first index in second dimension)
img_batch = b[:,0] # this returns an array of shape (32,), type: <class 'numpy.ndarray'>
How can I best access the image data and get a shape (32,84,84,3)?
Note:
s = b[0] #first s of the 32 in batch: shape (3,) , type: <class 'numpy.ndarray'>
Edit:
This should be a minimal example:
img = np.zeros([5,5,3])
s = np.array([img, 1, 1], dtype=object)  # dtype=object is needed for ragged contents
batch_elem = np.array([s, 1, 1, 1, 1], dtype=object)
batch = np.array([batch_elem for _ in range(32)])
Assuming I understand the problem correctly, you can stack twice, taking the first entry at each level:
res = np.stack(np.stack(batch[:,0])[...,0])
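On the minimal example above this should give res.shape == (32, 5, 5, 3); with the real data it would be (32, 84, 84, 3).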
import numpy as np
# fabricate some data
batch = np.empty(32, dtype=object)
for i in range(len(batch)):
    batch[i] = [np.random.rand(84, 84, 3), None, None]
# select images
result = np.array([img for img, _, _ in batch])
# double check!
for i in range(len(batch)):
    assert np.all(result[i, :, :, :] == batch[i][0])

Transform ndarray shape

I'm using Python / NumPy to store small images in an ndarray.
I'm stuck trying to transform an ndarray from shape (32, 32, 1) to (1, 32, 32, 1). Any help? Thanks
You need to expand the dimensions of the numpy array. Use np.expand_dims.
arr = np.expand_dims(arr, axis=0)
arr[np.newaxis, :, :, :] will work
Instead of adding an axis explicitly, you could also reshape the array to include it:
>>> import numpy as np
>>> arr = np.ones((32, 32, 1)) # just ones for demonstration purposes
>>> reshaped = arr.reshape(1, *arr.shape)
>>> reshaped.shape
(1, 32, 32, 1)
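All three approaches produce the same (1, 32, 32, 1) result; a quick sanity check (a sketch with a dummy array):
import numpy as np
arr = np.ones((32, 32, 1))
a = np.expand_dims(arr, axis=0)
b = arr[np.newaxis, :, :, :]
c = arr.reshape(1, *arr.shape)
print(a.shape, b.shape, c.shape)  # (1, 32, 32, 1) three times
print(np.array_equal(a, b) and np.array_equal(b, c))  # True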

Numpy flatten RGB image array

I have 1,000 RGB images (64x64) which I want to convert to an (m, n) array.
I use this:
import numpy as np
from skdata.mnist.views import OfficialImageClassification
from matplotlib import pyplot as plt
from PIL import Image
import glob
import cv2
x_data = np.array( [np.array(cv2.imread(imagePath[i])) for i in range(len(imagePath))] )
print(x_data.shape)
Which gives me: (1000, 64, 64, 3)
Now if I do:
pixels = x_data.flatten()
print(pixels.shape)
I get: (12288000,)
However, I require an array with these dimensions: (1000, 12288)
How can I achieve that?
Apply the numpy reshape() method to the result of flatten():
x_data = np.array( [np.array(cv2.imread(imagePath[i])) for i in range(len(imagePath))] )
pixels = x_data.flatten().reshape(1000, 12288)
print(pixels.shape)
Try this:
d1, d2, d3, d4 = x_data.shape
then use numpy.reshape():
x_data_reshaped = x_data.reshape((d1, d2*d3*d4))
or
x_data_reshaped = x_data.reshape((d1, -1))
(NumPy infers the value to use in place of -1 from the original length and the defined dimension d1.)
You can iterate over your images array and flatten each row independently.
numImages = x_data.shape[0]
flattened = np.array([x_data[i].flatten() for i in range(numImages)])
You could also use this:
Here X is your image array, and -1 simply means an unknown dimension that numpy figures out from the length of the array and the remaining dimensions (see What does -1 mean in numpy reshape?). The trailing .T transposes the result, so each column holds one flattened image (https://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html); drop the .T if you want shape (1000, 12288) rather than (12288, 1000).
X_flatten = X.reshape(X.shape[0], -1).T
Assuming you have an array image_array you can use the reshape() method.
image_array = image_array.reshape(1000, 12288)
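To tie the answers together, here is a quick check with synthetic data standing in for the loaded images (a sketch; cv2.imread would supply the real pixels):
import numpy as np
x_data = np.random.randint(0, 256, (1000, 64, 64, 3), dtype=np.uint8)  # stand-in for the 1,000 loaded images
pixels = x_data.reshape(x_data.shape[0], -1)
print(pixels.shape)  # (1000, 12288)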
