Index the middle of a numpy array? - python

To index the middle points of a numpy array, you can do this:
x = np.arange(10)
middle = x[len(x)//4 : len(x)*3//4]
Is there a shorthand for indexing the middle of the array? e.g., the n or 2n elements closest to len(x)/2? Is there a nice n-dimensional version of this?

As cge said, the simplest way is by turning it into a lambda function, like so:
x = np.arange(10)
middle = lambda x: x[len(x)//4 : len(x)*3//4]
or the n-dimensional way is:
middle = lambda x: x[tuple(slice(int(np.floor(d/4.)), int(np.ceil(3*d/4.))) for d in x.shape)]
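A quick check of the n-dimensional version (a sketch; the int()/tuple wrapping assumes a recent NumPy, where slice bounds must be integers and multi-axis indices must be tuples):
import numpy as np

middle = lambda x: x[tuple(slice(int(np.floor(d / 4)), int(np.ceil(3 * d / 4))) for d in x.shape)]
print(middle(np.arange(10)))                        # [2 3 4 5 6 7]
print(middle(np.arange(16).reshape(4, 4)).shape)    # (2, 2)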

Late, but for everyone else running into this issue:
A much smoother way is to use numpy's take or put.
To address the middle of an array you can use put to index an n-dimensional array with a single (flat) index. The same goes for getting values from an array with take.
Assuming your array has an odd number of elements, the middle of the array will be at half its size. By using integer division (// instead of /) you won't get any problems here.
import numpy as np
arr = np.array([[0, 1, 2],
                [3, 4, 5],
                [6, 7, 8]])
# put a value to the center
np.put(arr, arr.size // 2, 999)
print(arr)
# take a value from the center
center = np.take(arr, arr.size // 2)
print(center)
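With the 3x3 array above, arr.size // 2 is 4, the flat index of the central element, so the two prints should show something like:
[[  0   1   2]
 [  3 999   5]
 [  6   7   8]]
999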

Related

What is the fastest way to apply np.linalg.norm() (python) to each element of a 2d numpy array and a given value?

I want to compute the L2 norm between a given value x and each cell of a 2d array arr (which is currently of size 1000 x 1000). My current approach:
for k in range(0, 999):
    for l in range(0, 999):
        distance = np.linalg.norm([x - arr[k][l]], ord=2)
x and arr[k][l] are both scalars. I actually want to compute the pairwise distance of each array cell to the given value x. In the end I need 1000x1000 distances for 1000x1000 values.
Unfortunately, the approach above is a bottleneck when it comes to the time it takes to finish, which is why I am searching for a way to speed this up. I am grateful for any advice.
A reproducible example (as asked for):
arr = [[1, 2, 4, 4], [5, 6, 7, 8]]
x = 2
for k in range(0, 3):
    for l in range(0, 1):
        distance = np.linalg.norm([x - arr[k][l]], ord=2)
Please note, that the real arr is much bigger. This is merely a toy example.
Actually, I am not bound to use np.linalg.norm(). I simply want the l2 norm for all of these array cells with the given value x. If you know any function which is more suitable, I would be willing to try it.
You can do the following:
Subtract x from the array arr, then compute the norm. Since each cell of arr is a scalar, the elementwise L2 norm of the difference is simply its absolute value:
diff = arr - x
distance = np.abs(diff)   # equivalent to np.linalg.norm(diff[..., None], ord=2, axis=-1)
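A quick check against the loop from the question, on the toy example (a sketch; assumes the goal is the per-cell distance to the scalar x):
import numpy as np

arr = np.array([[1, 2, 4, 4], [5, 6, 7, 8]])
x = 2
vectorized = np.abs(arr - x)
looped = np.array([[np.linalg.norm([x - arr[k][l]], ord=2) for l in range(arr.shape[1])]
                   for k in range(arr.shape[0])])
print(np.allclose(vectorized, looped))   # True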

Alternative to loop for for boolean / nonzero indexing of numpy array

I need to select only the non-zero 3d portions of a 3d binary array (or alternatively the true values of a boolean array). Currently I am able to do so with a series of 'for' loops that use np.any; this works, but it seems awkward and slow, so I am investigating a more direct way to accomplish the task.
I am rather new to numpy, so the approaches that I have tried include a) using
np.nonzero, which returns indices that I am at a loss to understand what to do with for my purposes, b) boolean array indexing, and c) boolean masks. I can generally understand each of those approaches for simple 2d arrays, but am struggling to understand the differences between the approaches, and cannot get them to return the right values for a 3d array.
Here is my current function that returns a 3D array with nonzero values:
def real_size(arr3):
    true_0 = []
    true_1 = []
    true_2 = []
    print(f'The input array shape is: {arr3.shape}')
    for zero_ in range(0, arr3.shape[0]):
        if arr3[zero_].any() == True:
            true_0.append(zero_)
    for one_ in range(0, arr3.shape[1]):
        if arr3[:, one_, :].any() == True:
            true_1.append(one_)
    for two_ in range(0, arr3.shape[2]):
        if arr3[:, :, two_].any() == True:
            true_2.append(two_)
    arr4 = arr3[min(true_0):max(true_0) + 1, min(true_1):max(true_1) + 1, min(true_2):max(true_2) + 1]
    print(f'The nonzero area is: {arr4.shape}')
    return arr4
# Then use it on a small test array:
test_array = np.zeros([2, 3, 4], dtype = int)
test_array[0:2, 0:2, 0:2] = 1
#The function call works and prints out as expected:
non_zero = real_size(test_array)
>> The input array shape is: (2, 3, 4)
>> The nonzero area is: (2, 2, 2)
# So, the array is correct, but likely not the best way to get there:
non_zero
>> array([[[1, 1],
        [1, 1]],
       [[1, 1],
        [1, 1]]])
The code works appropriately, but I am using this on much larger and more complex arrays, and don't think this is an appropriate approach. Any thoughts on a more direct method to make this work would be greatly appreciated. I am also concerned about errors and the results if the input array has for example two separate non-zero 3d areas within the original array.
To clarify the problem, I need to return one or more 3D portions as one or more 3d arrays beginning with an original larger array. The returned arrays should not include extraneous zeros (or false values) in any given exterior plane in three dimensional space. Just getting the indices of the nonzero values (or vice versa) doesn't by itself solve the problem.
Assuming you want to eliminate all rows, columns, etc. that contain only zeros, you could do the following:
nz = (test_array != 0)
non_zero = test_array[nz.any(axis=(1, 2))][:, nz.any(axis=(0, 2))][:, :, nz.any(axis=(0, 1))]
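A quick check on the test_array from the question (a sketch):
import numpy as np

test_array = np.zeros([2, 3, 4], dtype=int)
test_array[0:2, 0:2, 0:2] = 1

nz = (test_array != 0)
non_zero = test_array[nz.any(axis=(1, 2))][:, nz.any(axis=(0, 2))][:, :, nz.any(axis=(0, 1))]
print(non_zero.shape)   # (2, 2, 2)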
An alternative solution using np.nonzero:
i = [np.unique(_) for _ in np.nonzero(test_array)]
non_zero = test_array[i[0]][:, i[1]][:, :, i[2]]
This can also be generalized to arbitrary dimensions, but requires a bit more work (only showing the first approach here):
def real_size(arr):
    nz = (arr != 0)
    result = arr
    axes = np.arange(arr.ndim)
    for axis in range(arr.ndim):
        zeros = nz.any(axis=tuple(np.delete(axes, axis)))
        result = result[(slice(None),) * axis + (zeros,)]
    return result
non_zero = real_size(test_array)

filter numpy array into separate arrays based on value, for contour plotting

I have numpy data which I am trying to turn into contour plot data. I realize this can be done through matplotlib, but I am trying to do this with just numpy if possible.
So, say I have an array of numbers 1-10, and I want to divide the array according to contour "levels". I want to turn the input array into an array of boolean arrays, each of those being the size of the input, with a 1/True for any data point in that contour level and 0/False everywhere else.
For example, suppose the input is:
[1.2,2.3,3.4,2.5]
And the levels are [1,2,3,4],
then the return should be:
[[1,0,0,0],[0,1,0,1],[0,0,1,0]]
So here is the start of an example I whipped up:
import numpy as np
a = np.random.rand(3,3)*10
print(a)
b = np.zeros(54).reshape((6,3,3))
levs = np.arange(6)
#This is as far as I've gotten:
bins = np.digitize(a, levs)
print(bins)
I can use np.digitize to find out which level each value in a should belong to, but that's as far as I get. I'm fairly new to numpy and this really has me scratching my head. Any help would be greatly appreciated, thanks.
We could gather the indices from the np.digitize output; these tell us, for each element, where along the new leading axis of the output the True value should be set. So, we could either use indexing after setting up the output array, or use an outer range comparison to achieve the same thing by leveraging broadcasting.
Hence, with broadcasting one that covers generic n-dim arrays -
idx = np.digitize(a, levs)-1
out = idx==(np.arange(idx.max()+1)).reshape([-1,]+[1]*idx.ndim)
With the indexing-based one, re-using idx from the previous method, it would be -
# https://stackoverflow.com/a/46103129/ #Divakar
def all_idx(idx, axis):
    grid = np.ogrid[tuple(map(slice, idx.shape))]
    grid.insert(axis, idx)
    return tuple(grid)
out = np.zeros((idx.max()+1,) + idx.shape, dtype=int)  # dtype=bool for a boolean array
out[all_idx(idx, axis=0)] = 1
Sample run -
In [77]: a = np.array([1.2,2.3,3.4,2.5])
In [78]: levs = np.array([1,2,3,4])
In [79]: idx = np.digitize(a, levs)-1
...: out = idx==(np.arange(idx.max()+1)).reshape([-1,]+[1]*idx.ndim)
In [80]: out.astype(int)
Out[80]:
array([[1, 0, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]])
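Applied to the random 3x3 example from the question, the same recipe yields one boolean mask per occupied level (a sketch; the seed is hypothetical, only there to make the run repeatable):
import numpy as np

np.random.seed(0)
a = np.random.rand(3, 3) * 10
levs = np.arange(6)
idx = np.digitize(a, levs) - 1
out = idx == np.arange(idx.max() + 1).reshape([-1] + [1] * idx.ndim)
print(out.shape)   # (idx.max() + 1, 3, 3)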

My numpy array always ends in zero?

I think I missed something somewhere. I filled a numpy array using two for loops (x and y) and a function based on the x,y position. The only problem is that the array always ends in zero, regardless of the size of the array.
thetamap = numpy.zeros(36, dtype=float)
thetamap.shape = (6, 6)
for y in range(0, 5):
    for x in range(0, 5):
        thetamap[x][y] = x + y
print thetamap
range(0, 5) produces 0, 1, 2, 3, 4. The endpoint is always omitted. You want simply range(6).
Better yet, use the awesome power of NumPy to make the array in one line:
thetamap = np.arange(6) + np.arange(6)[:,None]
This makes a row vector and a column vector, then adds them together using NumPy broadcasting to make a matrix.
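A quick look at the result (a sketch): entry (y, x) holds x + y, so the last row and column are no longer stuck at zero.
import numpy as np

thetamap = np.arange(6) + np.arange(6)[:, None]
print(thetamap[0])    # [0 1 2 3 4 5]
print(thetamap[5])    # [ 5  6  7  8  9 10]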

Convert a 1D array to a 2D array in numpy

I want to convert a 1-dimensional array into a 2-dimensional array by specifying the number of columns in the 2D array. Something that would work like this:
> import numpy as np
> A = np.array([1,2,3,4,5,6])
> B = vec2matrix(A,ncol=2)
> B
array([[1, 2],
       [3, 4],
       [5, 6]])
Does numpy have a function that works like my made-up function "vec2matrix"? (I understand that you can index a 1D array like a 2D array, but that isn't an option in the code I have - I need to make this conversion.)
You want to reshape the array.
B = np.reshape(A, (-1, 2))
where -1 infers the size of the new dimension from the size of the input array.
You have two options:
If you no longer want the original shape, the easiest is just to assign a new shape to the array
a.shape = (a.size//ncols, ncols)
You can replace the a.size//ncols by -1 to compute the proper shape automatically. Make sure that a.shape[0]*a.shape[1] == a.size, else you'll run into problems.
You can get a new array with the np.reshape function, that works mostly like the version presented above
new = np.reshape(a, (-1, ncols))
When it's possible, new will be just a view of the initial array a, meaning that the data are shared. In some cases, though, the new array will be a copy instead. Note that np.reshape also accepts an optional keyword order that lets you switch from row-major C order to column-major Fortran order. np.reshape is the function version of the a.reshape method.
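A small sketch of the view behaviour (assuming a contiguous input, so reshape can avoid a copy):
import numpy as np

a = np.arange(6)
new = np.reshape(a, (-1, 2))
new[0, 0] = 99        # writing through the view...
print(a[0])           # ...also changes a: prints 99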
If you can't respect the requirement a.shape[0]*a.shape[1] == a.size, you're stuck with having to create a new array. You can use the np.resize function, mixing it with np.reshape, such as
>>> a = np.arange(9)
>>> np.resize(a, 10).reshape(5,2)
Try something like:
B = np.reshape(A,(-1,ncols))
You'll need to make sure that you can divide the number of elements in your array by ncols though. You can also play with the order in which the numbers are pulled into B using the order keyword.
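For instance, order='F' fills the result column by column instead of row by row (a sketch):
import numpy as np

A = np.array([1, 2, 3, 4, 5, 6])
print(np.reshape(A, (-1, 2), order='F'))
# [[1 4]
#  [2 5]
#  [3 6]]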
If your sole purpose is to convert a 1d array X to a 2d array just do:
X = np.reshape(X,(1, X.size))
Convert a 1-dimensional array into a 2-dimensional array by adding a new axis:
a = np.array([10, 20, 30, 40, 50, 60])
b = a[:, np.newaxis]   # this converts it to two dimensions
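A short sketch of what the new axis does to the shape:
import numpy as np

a = np.array([10, 20, 30, 40, 50, 60])
print(a.shape)                 # (6,)
print(a[:, np.newaxis].shape)  # (6, 1)
print(a[np.newaxis, :].shape)  # (1, 6)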
There is a simple way as well, we can use the reshape function in a different way:
A_reshape = A.reshape(No_of_rows, No_of_columns)
You can use flatten() from the numpy package.
import numpy as np
a = np.array([[1, 2],
              [3, 4],
              [5, 6]])
a_flat = a.flatten()
print(f"original array: {a} \nflattened array = {a_flat}")
Output:
original array: [[1 2]
 [3 4]
 [5 6]]
flattened array = [1 2 3 4 5 6]
some_array.shape = (1,) + some_array.shape
or get a new one:
another_array = numpy.reshape(some_array, (1,) + some_array.shape)
This makes the number of dimensions one higher, which equals adding a bracket at the outermost level.
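For example (a sketch):
import numpy

some_array = numpy.arange(6)
some_array.shape = (1,) + some_array.shape
print(some_array.shape)   # (1, 6)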
Change 1D array into 2D array without using Numpy.
l = [i for i in range(1, 21)]
part = 3
new = []
start, end = 0, part
while end <= len(l):
    temp = []
    for i in range(start, end):
        temp.append(l[i])
    new.append(temp)
    start += part
    end += part
print("new values: ", new)
# for uneven cases
temp = []
while start < len(l):
    temp.append(l[start])
    start += 1
new.append(temp)
print("new values for uneven cases: ", new)
import numpy as np
array = np.arange(8)
print("Original array : \n", array)
array = np.arange(8).reshape(2, 4)
print("New array : \n", array)
