I have a 2D array containing the residual sum of squares of a given fit (unimportant here).
RSS[i,j] = np.sum((spectrum_theo - sp_exp_int) ** 2)
I would like to find the matrix element with the minimum value AND its position (i,j) in the matrix. Finding the minimum element is OK:
RSS_min = RSS[RSS != 0].min()
but for the index, I've tried:
ij_min = np.where(RSS == RSS_min)
which gives me:
ij_min = (array([3]), array([20]))
I would like to obtain instead:
ij_min = (3,20)
If I try:
ij_min = RSS.argmin()
I obtain:
ij_min = 0,
which is a wrong result.
Is there a function, in SciPy or elsewhere, that can do this? I've searched the web, but I've only found answers dealing with 1D arrays, not 2D or N-D arrays.
Thanks!
The easiest fix based on what you have right now would just be to extract the elements from the array as a final step:
# ij_min = (array([3]), array([20]))
ij_min = np.where(RSS == RSS_min)
ij_min = tuple([i.item() for i in ij_min])
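For instance, a quick end-to-end check of that approach on a small made-up array (the values here are illustrative only):

import numpy as np

RSS = np.array([[5., 2., 9.],
                [7., 0., 1.]])            # 0 marks entries to ignore
RSS_min = RSS[RSS != 0].min()             # 1.0
ij_min = np.where(RSS == RSS_min)         # (array([1]), array([2]))
ij_min = tuple(i.item() for i in ij_min)  # (1, 2)
print(ij_min)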
Does this work for you?
import numpy as np
array = np.random.rand(1000).reshape(10, 10, 10)
print(np.array(np.where(array == array.min())).flatten())
In the case of multiple minima, you could try something like:
import numpy as np
array = np.array([[1, 1, 2, 3], [1, 1, 4, 5]])
print(list(zip(*np.where(array == array.min()))))
You can combine argmin with unravel_index.
For example, here's an array RSS:
In [123]: np.random.seed(123456)
In [124]: RSS = np.random.randint(0, 99, size=(5, 8))
In [125]: RSS
Out[125]:
array([[65, 49, 56, 43, 43, 91, 32, 87],
[36, 8, 74, 10, 12, 75, 20, 47],
[50, 86, 34, 14, 70, 42, 66, 47],
[68, 94, 45, 87, 84, 84, 45, 69],
[87, 36, 75, 35, 93, 39, 16, 60]])
Use argmin (which returns an integer that is the index in the flattened array), and then pass that to unravel_index along with the shape of RSS to convert the index of the flattened array into the indices of the 2D array:
In [126]: ij_min = np.unravel_index(RSS.argmin(), RSS.shape)
In [127]: ij_min
Out[127]: (1, 1)
ij_min itself can be used as an index into RSS to get the minimum value:
In [128]: RSS_min = RSS[ij_min]
In [129]: RSS_min
Out[129]: 8
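Note that if you need to skip the zero entries, as in your RSS[RSS != 0].min() step, argmin on the raw array is not enough, because it could return the position of a zero. One option (a sketch, assuming zeros are the only values to exclude) is to push them out of contention first:

masked = np.where(RSS == 0, np.inf, RSS)  # replace zeros with infinity
ij_min = np.unravel_index(masked.argmin(), RSS.shape)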
Related
Playing around with numpy:
import numpy as np
l = [39, 54, 72, 46, 89, 53, 96, 64, 2, 75]
nl = np.array(l.append(3))
>> array(None, dtype=object)
Now, if I look at l, I get the list: [39, 54, 72, 46, 89, 53, 96, 64, 2, 75, 3]
My question is, why doesn't numpy create that list as an array?
If I do something like this:
nl = np.array(l.extend([45])) I get the same thing.
But if I concatenate without a method, nl = np.array(l + [45]), it works.
What is causing this behaviour?
The append function will always return None. You must do this in two different lines of code:
import numpy as np
l = [39, 54, 72, 46, 89, 53, 96, 64, 2, 75]
l.append(3)
nl = np.array(l)
append and extend are in-place methods and return None.
print(l.append(3)) # None
print(l.extend([3])) # None
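If the goal is simply to end up with a NumPy array, another option is np.append, which returns a new array rather than mutating in place (note it copies the data on every call, so keep it out of tight loops):

import numpy as np

l = [39, 54, 72, 46, 89, 53, 96, 64, 2, 75]
nl = np.append(np.array(l), 3)  # new array ending in 3; l is unchanged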
I want to make a function that trims the Arrays and Arrays2 arrays so that their values sum to val. The function should modify the last kept element so that the sum of all values in the returned array is exactly val. How will I be able to get the expected output?
import numpy as np
Arrays = np.array([50, 30, 25, 87, 44, 68, 45])
Arrays2 = np.array([320])
val = 300
Expected output:
[50, 30, 25, 87, 44, 64]
[300]
Something like this?
import numpy as np

Arrays = np.array([50, 30, 25, 87, 44, 68, 45])
Arrays2 = np.array([320])
val = 300

def thisRareFunction(arr):
    outArrays = []
    acum = 0
    for x in arr:
        acum += x
        if acum <= val:
            outArrays.append(x)
        else:
            # trim the element that crosses val so the total is exactly val
            outArrays.append(x - (acum - val))
            break
    return outArrays

print(thisRareFunction(Arrays))
print(thisRareFunction(Arrays2))
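A vectorized alternative, as a sketch (it assumes the running total actually reaches val somewhere in the array, as in both examples above), using np.cumsum:

def clip_to_sum(arr, val):
    acum = np.cumsum(arr)          # running totals
    n_keep = np.sum(acum < val)    # elements fully below the target
    out = arr[:n_keep + 1].copy()  # include the element that crosses val
    out[-1] -= acum[n_keep] - val  # trim it so the total is exactly val
    return out

print(clip_to_sum(Arrays, val))   # [50 30 25 87 44 64]
print(clip_to_sum(Arrays2, val))  # [300]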
I came across a line of code used to reduce a 3D Tensor to a 2D Tensor in PyTorch. The 3D tensor x is of size torch.Size([500, 50, 1]) and this line of code:
x = x[lengths - 1, range(len(lengths))]
was used to reduce x to a 2D tensor of size torch.Size([50, 1]). lengths is also a tensor, of shape torch.Size([50]), containing integer index values.
Please can anyone explain how this works? Thank you.
After being quite stumped by the behavior, I did some more digging into this, and found that it is consistent behavior with the indexing of multi-dimensional NumPy arrays. What makes this counter-intuitive is the less obvious fact that both arrays have to have the same length, i.e. in this case len(lengths).
In fact, it works as follows:
* lengths determines the order in which you access the first dimension. I.e., if you have a 1D array a = [1, 2, ..., 500] and access it with the list b = [300, 200, 100], then the result is a[b] = [301, 201, 101] (this also explains the lengths - 1 operator, which simply causes the accessed values to be the same as the indices used in b, or lengths, respectively).
* range(len(lengths)) then simply chooses the i-th element in the i-th row. If you have a square matrix, you can interpret this as the diagonal of the matrix. Since you only access a single element for each position along the first two dimensions, this can be stored in a single dimension (thus reducing your 3D tensor to 2D). The last dimension is simply kept "as is".
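A minimal NumPy illustration of this paired indexing (the arrays here are made up for demonstration):

import numpy as np

a = np.arange(12).reshape(3, 4)
rows = np.array([2, 0, 1])  # which row to take for each output element
cols = np.array([0, 1, 2])  # which column, paired element-wise with rows
print(a[rows, cols])        # [a[2,0], a[0,1], a[1,2]] -> [8 1 6]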
If you want to play around with this, I strongly recommend changing the range() value to something longer or shorter, which will result in the following error:
IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (x,) (y,)
where x and y are your specific length values.
To write this accessing method out in the long form and understand what happens "under the hood", consider the example below:
import torch

x = torch.randint(0, 10, (500, 50, 1))  # same shape as in the question
lengths = torch.tensor([2, 30, 1, 4])   # random examples to explore
diag = list(range(len(lengths)))        # [0, 1, 2, 3]

result = []
for i, row in enumerate(lengths):
    temp_tensor = x[row, :, :]          # temp_tensor.shape = [50, 1]
    temp_tensor = temp_tensor[diag[i]]  # temp_tensor.shape = [1]
    result.append(temp_tensor)

# back to a single tensor
result = torch.stack(result)
result.shape  # [4, 1]
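As a sanity check, the loop should reproduce the one-line fancy indexing from the question (without the - 1 here, since this lengths is arbitrary):

direct = x[lengths, range(len(lengths))]
print(torch.equal(result, direct))  # True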
The key feature here is passing the values of the tensor lengths as indices into x.
Here is a simplified example; I swapped the dimensions of the container so that the index dimension goes first:
container = torch.arange(0, 50)
container = container.reshape((5, 10))
>>>tensor([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49]])
indices = torch.arange(2, 7, dtype=torch.long)
>>>tensor([2, 3, 4, 5, 6])
print(container[range(len(indices)), indices])
>>>tensor([ 2, 13, 24, 35, 46])
Note: we take one element from each row (range(len(indices)) generates sequential row numbers), with the column number given by indices[row_number].
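Applied back to the question's shapes, the same pattern looks like this (a sketch with made-up data):

import torch

x = torch.randn(500, 50, 1)
lengths = torch.randint(1, 501, (50,))     # one 1-based index per column
out = x[lengths - 1, range(len(lengths))]  # out[i] = x[lengths[i] - 1, i]
print(out.shape)                           # torch.Size([50, 1])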
I have the two lists of arrays
splocations = [array([1,2,3]),array([4,5,6]),array([7,8,9])]
eviddisp = [array([10,11,12]), array([13,14,15])]
which I would like to multiply such that each array in the first list is multiplied element-wise with each array in the second list. Here I would get a 3x2 matrix where each element is a vector, so the matrix element [0,0] would be
array([10, 22, 36]) = array([1,2,3]) * array([10,11,12])
So this matrix would be in fact a tensor of shape 3x2x3. How can I get this tensor/matrix?
I get that I need to use array(splocations) and array(eviddisp) somehow. I was looking for a solution with numpy's tensordot, but I can't get it right. How do I proceed?
I think this is what you want, taking automatic broadcasting into account:
from numpy import array
splocations = [array([1,2,3]),array([4,5,6]),array([7,8,9])]
eviddisp = [array([10,11,12]), array([13,14,15])]
splocations = array(splocations)
eviddisp = array(eviddisp)
result = splocations[:, None, :]*eviddisp
result
array([[[ 10, 22, 36],
[ 13, 28, 45]],
[[ 40, 55, 72],
[ 52, 70, 90]],
[[ 70, 88, 108],
[ 91, 112, 135]]])
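Since you mentioned tensordot: this product keeps the last axis aligned instead of summing over it, so tensordot does not apply directly, but np.einsum can express the same thing as the broadcasting above:

import numpy as np

result = np.einsum('ik,jk->ijk', splocations, eviddisp)  # result[i,j,k] = splocations[i,k] * eviddisp[j,k]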
Does Numpy have any built in functions to randomly select values from a 1D numpy array with a higher weighting given to values at the end of the array? Is there an easier way to do this than defining a skewed distribution and sampling from it to get array indices?
You can pass weights to np.random.choice via its p argument, as shown:
import numpy as np

a = np.arange(100)          # an array to draw from (integers, matching the sample output below)
n = 10 # number of values to draw
i = np.arange(a.size) # an array of the index value for weighting
w = np.exp(i/10.) # higher weights for larger index values
w /= w.sum() # weight must be normalized
Now, access your values with:
np.random.choice(a, size=n, p=w)
Clearly you can change your weight array however you want; I used exponential weighting from the end with a decay length of 10. Increase that decay length for a wider selection.
For w = np.exp(i/50.):
In [38]: np.random.choice(a, size=n, p=w)
Out[38]: array([37, 53, 45, 22, 88, 69, 56, 86, 96, 24])
For w = np.exp(i):
In [41]: np.random.choice(a, size=n, p=w)
Out[41]: array([99, 99, 98, 99, 99, 99, 99, 97, 99, 98])
If you only want to be able to get each value once, be sure to set replace=False, otherwise you can get the same value several times (especially if it's highly weighted, as in the second example above). See this example:
In [33]: np.random.choice(a, size=n, replace=False, p=w)
Out[33]: array([99, 84, 86, 91, 87, 81, 96, 89, 97, 95])
In [34]: np.random.choice(a, size=n, replace=True, p=w)
Out[34]: array([94, 98, 99, 98, 97, 99, 91, 96, 97, 93])
My original answer was:
If the form of the distribution doesn't really matter, you could do something like a Poisson distribution of indices:
idx = np.random.poisson(size=10)  # lam defaults to 1.0, so these are small offsets
Your sample:
a[-idx-1]
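Putting that together, a minimal sketch (assuming the same a as above; the clip guards against an offset running past the front of the array):

idx = np.random.poisson(lam=1.0, size=10)  # small offsets, mostly 0-3
idx = np.clip(idx, 0, a.size - 1)          # keep the offsets inside the array
sample = a[-idx - 1]                       # -1 is the last element, -2 the next, ...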