Let's say I have a 4x4 numpy array and I want to reduce it to 2x2 by averaging each 2x2 block, so in theory do something like the attempt below.
Is this possible without using any loop, and in a way that works not only on a 4x4 but also on, say, a 500x500 array?
#input:
x_4= np.array([[1, 2, 4, 5], [3, 4, 6, 8], [5, 3, 1, -1], [2, 3, 5, 0]])
# thinking it would work with something like this:
new = x_4[:2, :2]/4 + x_4[:2, -2:]/4 + x_4[-2:, :2]/4 + x_4[-2:, -2:]/4
new
# output: array([[2.75, 2.25], [4.  , 3.75]])
#Expected output: array([[2.5, 5.75], [3.25, 1.25]])
NumPy version:
You can reshape the array and take the mean over two axes to get the desired result:
import numpy as np
blocksize = 500
Mat = np.random.rand(blocksize,blocksize)
# reshape into a (blocksize//2, blocksize//2) grid of 2x2 blocks
blocks = Mat.reshape(blocksize//2, 2, blocksize//2, 2)
# average over each 2x2 block (axes 1 and 3)
block_mean = np.mean(blocks, axis=(1, -1))
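As a quick check with the x_4 array from the question, the same reshape-and-mean trick reproduces the expected block means:
import numpy as np
x_4 = np.array([[1, 2, 4, 5], [3, 4, 6, 8], [5, 3, 1, -1], [2, 3, 5, 0]])
# split the 4x4 array into a 2x2 grid of 2x2 blocks, then average each block
print(x_4.reshape(2, 2, 2, 2).mean(axis=(1, -1)))
# [[2.5  5.75]
#  [3.25 1.25]]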
As was pointed out in the comments, you can use pooling, which is e.g. available in the scikit-image package:
import skimage.measure
shape = (2, 2)
skimage.measure.block_reduce(x_4, shape, np.mean)
Here, shape gives you the dimensions of your pooling blocks.
This operation is called average pooling; it is used in CNNs and image processing to reduce the dimensions of an image.
You can use TensorFlow or PyTorch. For PyTorch to work, you first need to reshape the image to (batch_size, channels, rows, columns):
import numpy as np
import torch
from torch import nn
m = nn.AvgPool2d(2, stride=2)  # 2x2 average pooling with stride 2
x_4 = np.array([[1, 2, 4, 5], [3, 4, 6, 8], [5, 3, 1, -1], [2, 3, 5, 0]])
x_4 = x_4[None, None, :, :]  # add batch and channel dimensions
x_4 = torch.as_tensor(x_4, dtype=torch.float64)
x_4.shape  # torch.Size([1, 1, 4, 4])
m(x_4).numpy()
Output
array([[[[2.5 , 5.75],
[3.25, 1.25]]]])
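Since the answer also mentions TensorFlow, here is a minimal sketch of the same average pooling with tf.nn.avg_pool2d; note that TensorFlow defaults to the channels-last layout, i.e. (batch_size, rows, columns, channels):
import numpy as np
import tensorflow as tf
x_4 = np.array([[1, 2, 4, 5], [3, 4, 6, 8], [5, 3, 1, -1], [2, 3, 5, 0]], dtype=np.float64)
x_4 = x_4[None, :, :, None]  # add batch and channel dimensions (NHWC)
pooled = tf.nn.avg_pool2d(x_4, ksize=2, strides=2, padding="VALID")
print(pooled.numpy()[0, :, :, 0])
# [[2.5  5.75]
#  [3.25 1.25]]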
I'm trying to figure out how to split my tensor of sequential data into multiple parts based on runs of consecutive equal values in a binary mask.
I've read the official documentation.
However, I can't find any function that handles this easily.
Are there any helpful ways to do this in Python?
I have tried tf.ragged.boolean_mask, but it doesn't seem to fit my case.
A visualized example of what I mean:
inputs:
# both are tensors, NOT data.
data_tensor = ([3,5,6,2,6,1,3,9,5])
mask_tensor = ([0,1,1,1,0,0,1,1,0])
expected output:
output_tensor = ([[3],[5,6,2],[6,1],[3,9],[5]])
Thank you.
I recently discovered a method to do it in a very clean way in this answer by @AloneTogether:
import tensorflow as tf
data_tensor = tf.constant([3,5,6,2,6,1,3,9,5])
mask_tensor = tf.constant([0,1,1,1,0,0,1,1,0])
# Index where the mask changes.
change_idx = tf.concat([tf.where(mask_tensor[:-1] != mask_tensor[1:])[:, 0], [tf.shape(mask_tensor, out_type=tf.int64)[0] - 1]], axis=0)
# Ranges of indices to gather.
ragged_idx = tf.ragged.range(tf.concat([[0], change_idx[:-1] + 1], axis=0), change_idx + 1)
# Gather ranges into ragged tensor.
output_tensor = tf.gather(data_tensor, ragged_idx)
print(output_tensor)
<tf.RaggedTensor [[3], [5, 6, 2], [6, 1], [3, 9], [5]]>
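To see why this works for the example above: the mask changes value after indices 0, 3, 5 and 7, so change_idx is [0, 3, 5, 7, 8] (with the final index 8 appended), ragged_idx becomes the ragged ranges [[0], [1, 2, 3], [4, 5], [6, 7], [8]], and gathering those index groups from data_tensor yields exactly the segments shown.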
I have a numpy array of shape [batch_size, timesteps_per_samples, width, height], where width and height refer to a 2D grid. The values in this array can be interpreted as an elevation at a certain location that changes over time.
I want to know the elevation over time for various paths within this array. Therefore I have a second array of shape [batch_size, paths_per_batch_sample, timesteps_per_path, coordinates] (coordinates = 2, for x and y in the 2D plane).
The resulting array should be of shape [batch_size, paths_per_batch_sample, timesteps_per_path] containing the elevation over time for each sample within the batch.
The following two examples both work. The first one is very slow and just serves to show what I am trying to do. I think the second one does what I want, but I have no idea why it works, nor whether it may break under certain circumstances.
Code for the problem setup:
import numpy as np
batch_size=32
paths_per_batch_sample=10
timesteps_per_path=4
width=64
height=64
elevation = np.arange(0, batch_size*timesteps_per_path*width*height, 1)
elevation = elevation.reshape(batch_size, timesteps_per_path, width, height)
paths = np.random.randint(0, high=width-1, size=(batch_size, paths_per_batch_sample, timesteps_per_path, 2))
range_batch = range(batch_size)
range_paths = range(paths_per_batch_sample)
range_timesteps = range(timesteps_per_path)
The following code works but is very slow:
elevation_per_time = np.zeros((batch_size, paths_per_batch_sample, timesteps_per_path))
for s in range_batch:
    for k in range_paths:
        for t in range_timesteps:
            x_co, y_co = paths[s,k,t,:].astype(int)
            elevation_per_time[s,k,t] = elevation[s,t,x_co,y_co]
The following code works too (and is even fast), but I can't understand why and how:
elevation_per_time_fast = elevation[
    :,
    range_timesteps,
    paths[:, :, range_timesteps, 0].astype(int),
    paths[:, :, range_timesteps, 1].astype(int),
][range_batch, range_batch, :, :]
Proof that the results are equal:
check = (elevation_per_time == elevation_per_time_fast)
print(np.all(check))
Can somebody explain how I can index an nd-array with multiple other arrays?
In particular, I don't understand how numpy knows that range_timesteps has to run in step for the indices in axes 1, 2 and 3.
Thanks in advance!
Let's take a quick look at slicing a numpy array first:
a = np.arange(0,9,1).reshape([3,3])
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
Numpy has two ways of slicing an array: full sections with start:stop, and by index from a list [index1, index2, ...]. The output will still be an array with the shape of your slice:
a[0:2,:]
array([[0, 1, 2],
[3, 4, 5]])
a[:,[0,2]]
array([[0, 2],
[3, 5],
[6, 8]])
The second part is that, since the returned array has the same number of dimensions, you can easily chain any number of slices as long as you don't try to directly access an index outside of the array.
a[:][:][:][:][:][:][:][[0,2]][:,[0,2]]
array([[0, 2],
[6, 8]])
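To tie this back to the original question: when you pass an integer index array for each of several axes, numpy broadcasts those index arrays against each other and pairs them up elementwise, which is why range_timesteps advances in step across every axis it is used for. A minimal sketch of that behaviour (arrays chosen just for illustration):
import numpy as np
a = np.arange(9).reshape(3, 3)
# one index array per axis: the arrays are paired elementwise,
# so this picks a[0, 0], a[1, 1], a[2, 2] (the diagonal), not a 3x3 block
print(a[[0, 1, 2], [0, 1, 2]])  # [0 4 8]
# mixing a slice with index arrays keeps the sliced axis intact
# while the indexed axes still advance together
b = np.arange(24).reshape(2, 3, 4)
print(b[:, [0, 1, 2], [0, 1, 2]].shape)  # (2, 3)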
I encountered a problem in python programming.
I was working on feature extraction in deep learning. I would like to collect several 2D arrays into a 3D array inside a for loop. I can achieve this with the simple approach shown below, but that method is not realistic for a large number of samples.
In my situation, the function returns a 2D array per sample (shape (41, 4)), the samples come from a loop (30 samples), and the result I would like to obtain is a 3D array (shape (30, 41, 4)).
I didn't find any related information and I'm really stuck here; I hope someone can help me.
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
b = np.array([[2,2,3],[4,5,6]])
c = np.array([[3,2,3],[4,5,6]])
print(a)
print(a.shape)
com = np.array([a,b,c])
print(com)
print(com.shape)
You can use np.stack:
>>> arr = np.stack((a,b,c))
>>> arr
array([[[1, 2, 3],
[4, 5, 6]],
[[2, 2, 3],
[4, 5, 6]],
[[3, 2, 3],
[4, 5, 6]]])
>>> arr.shape
(3, 2, 3)
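For the looping scenario described in the question (30 samples, each of shape (41, 4)), a common pattern is to collect the 2D arrays in a Python list inside the loop and stack them once at the end. A minimal sketch, where extract_features is just a hypothetical stand-in for the real per-sample computation:
import numpy as np

def extract_features(sample_index):
    # hypothetical placeholder returning one sample's features of shape (41, 4)
    return np.random.rand(41, 4)

features = [extract_features(i) for i in range(30)]  # 30 arrays of shape (41, 4)
com = np.stack(features)
print(com.shape)  # (30, 41, 4)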
I'm trying to convert a piece of MATLAB code, and this is a line I'm struggling with:
f = 0
wlab = reshape(bsxfun(@times,cat(3,1-f,f/2,f/2),lab),[],3)
I've come up with
wlab = lab*(np.concatenate((3,1-f,f/2,f/2)))
How do I reshape it now?
I'm not going to do it for your code, but more as general knowledge:
bsxfun is a function that fills a gap in MATLAB that Python doesn't need to fill: broadcasting.
Broadcasting means that if an array being multiplied/added/etc. is not the same size as the other operand, it is implicitly repeated to match.
So in Python, if you have a 3D array A and you want to multiply every 2D slice of it with a 2D matrix B, you don't need anything else: Python will broadcast B for you, repeating it again and again, and A*B will suffice. However, in MATLAB that would raise a matrix dimension mismatch error. To overcome that, you'd use bsxfun as bsxfun(@times,A,B), which broadcasts (repeats) B over the 3rd dimension of A.
This means that converting bsxfun to Python generally requires nothing special.
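For example, here is a minimal sketch of the 3D-times-2D case described above (shapes chosen purely for illustration):
import numpy as np
A = np.arange(24).reshape(2, 3, 4)  # a stack of two 3x4 slices
B = np.arange(12).reshape(3, 4)     # a single 3x4 matrix
# B is broadcast over the leading axis of A: every 3x4 slice of A
# is multiplied elementwise by B, no bsxfun needed
C = A * B
print(C.shape)  # (2, 3, 4)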
MATLAB
reshape(x,[],3)
is the equivalent of numpy
np.reshape(x,(-1,3))
The [] and -1 are placeholders for "fill in the correct shape here".
===============
I just tried the MATLAB expression in Octave - it's on a different machine, so I'll just summarize the action.
For lab=1:6 (6 elements) the bsxfun produces a (1,6,3) matrix; the reshape turns it into (6,3), i.e. just removes the first dimension. The cat produces a (1,1,3) matrix.
np.reshape(np.array([1-f,f/2,f/2])[None,None,:]*lab[None,:,None],(-1,3))
For lab with shape (n,m), the bsxfun produces a (n,m,3) matrix; the reshape would make it (n*m,3)
So for a 2d lab, the numpy needs to be
np.array([1-f,f/2,f/2])[None,None,:]*lab[:,:,None]
(In MATLAB the lab will always be 2d (or larger), so this 2nd case is closer to its action even if n is 1.)
=======================
np.array([1-f,f/2,f/2])*lab[...,None]
would handle any shaped lab
If I make the Octave lab (4,2,3), the bsxfun result is also (4,2,3).
The matching numpy expression would be
In [94]: (np.array([1-f,f/2,f/2])*lab).shape
Out[94]: (4, 2, 3)
numpy adds dimensions to the start of the (3,) array to match the dimensions of lab, effectively
(np.array([1-f,f/2,f/2])[None,None,:]*lab) # for 3d lab
If f=0, then the array is [1,0,0], so this has the effect of zeroing values on the last dimension of lab. In effect, changing the 'color'.
It is equivalent to
import numpy as np
wlab = np.kron([1-f,f/2,f/2],lab.reshape(-1,1))
In Python, if you use numpy you do not need to handle broadcasting yourself, as it is done automatically for you.
For instance, looking at the following code should make it clearer:
>>> import numpy as np
>>> a = np.array([[1, 2, 3], [3, 4, 5], [6, 7, 8], [9, 10, 100]])
>>> b = np.array([1, 2, 3])
>>>
>>> a
array([[ 1, 2, 3],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 100]])
>>> b
array([1, 2, 3])
>>>
>>> a - b
array([[ 0, 0, 0],
[ 2, 2, 2],
[ 5, 5, 5],
[ 8, 8, 97]])
>>>
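Putting the pieces together, a minimal sketch of the whole MATLAB line in numpy could look like this (lab here is just a hypothetical 2D example; note that numpy flattens in row-major order while MATLAB's reshape is column-major, so the rows of wlab may come out in a different order):
import numpy as np

f = 0
lab = np.arange(6, dtype=float).reshape(2, 3)  # hypothetical example input

weights = np.array([1 - f, f / 2, f / 2])
# broadcast the three weights over the last axis of lab, then flatten to (-1, 3)
wlab = (lab[..., None] * weights).reshape(-1, 3)
print(wlab.shape)  # (6, 3)
# to reproduce MATLAB's column-major row order exactly:
wlab_colmajor = lab.T.reshape(-1, 1) * weights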