Python/Numpy - Get Index into Main Array from Subset - python

Say I have a 100-element numpy array. I perform some calculation on a subset of this array - maybe 20 elements where some condition is met. Then I pick an index in this subset; how can I (efficiently) recover the corresponding index in the original array? I don't want to perform the calculation on all values of a because it is expensive; I only want to perform it where it is required (where that condition is met).
Here is some pseudocode to demonstrate what I mean (the 'condition' here is the list comprehension):
a = np.arange(100) # size = 100
b = some_function(a[[i for i in range(0,100,5)]]) # size = 20
Index = np.argmax(b)
# Index gives the index of the maximum value in b,
# but what I really want is the index of the element
# in a
EDIT:
I wasn't being very clear, so I've provided a fuller example. I hope this makes my goal clearer. I feel like there is some clever and efficient way to do this, without loops or lookups.
CODE:
import numpy as np
def some_function(arr):
    return arr * 2.0
a = np.arange(100)*2. # size = 100
b = some_function(a[[i for i in range(0,100,5)]]) # size = 20
Index = np.argmax(b)
print(Index)
# Index gives the index of the maximum value in b, but what I really want is
# the index of the element in a
# In this specific case, Index will be 19. So b[19] is the largest value
# in a. In this specific case, Index will be 19; b[19] is the largest value
# in b. Now, what I REALLY want is the index in a. In this case, that would
# be 95, because some_function(a[95]) is what made the largest value in b.
print(b[Index])
print(some_function(a[95]))
# It is important to note that I do NOT want to change a. I will perform
# several calculations on SOME values of a, then return the indices of 'a' where
# all calculations meet some condition.

I am not sure I understand your question, so correct me if I am wrong.
Let's say you have something like
a = np.arange(100)
condition = (a % 5 == 0) & (a % 7 == 0)
b = a[condition]
index = np.argmax(b)
# Note: this gives the maximum *value* itself; for the
# index back in 'a', see the sketch below
a[condition][index]
Or if you don't want to work with masks:
a = np.arange(100)
b_indices = np.where(a % 5 == 0)
b = a[b_indices]
index = np.argmax(b)
# Get the value of 'a' corresponding to 'index'; the index
# itself in 'a' is b_indices[0][index]
a[b_indices][index]
Is this what you want?
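Note that a[condition][index] evaluates to the maximum value itself. If you need the index back in a, a small sketch of one way to recover it (np.flatnonzero lists the positions where the condition holds, so the subset index maps straight back):

import numpy as np

a = np.arange(100)
condition = (a % 5 == 0) & (a % 7 == 0)
b = a[condition]
index = np.argmax(b)

# positions in 'a' where the condition holds, in order
index_in_a = np.flatnonzero(condition)[index]
assert a[index_in_a] == b[index]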

Use a secondary array, a_index, which just holds the indices of the elements of a, so that a_index[3,5] == (3,5). Then you can get the original index as a_index[condition][Index] (comparing condition == True is redundant); see the sketch below.
If you can guarantee that b is a view on a, you can use the memory layout information of the two arrays to find a translation between b's and a's indices.
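A minimal sketch of the secondary-array suggestion, assuming a 2-D array (np.indices builds the coordinate grid; the shapes and threshold here are illustrative):

import numpy as np

a = np.random.random((4, 6))
condition = a > 0.5

# a_index[i, j] == [i, j] for every cell of a
a_index = np.stack(np.indices(a.shape), axis=-1)

local = np.argmax(a[condition])       # index within the masked subset
row, col = a_index[condition][local]  # index back in a
assert a[row, col] == a[condition].max()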

Does something like this work?
mask = S == 1                     # S is the condition array, X holds the data
ind_local = np.argmax(X[mask])    # argmax within the masked subset
G = np.ravel_multi_index(np.where(mask), mask.shape)  # flat indices of the masked cells
ind_global = np.unravel_index(G[ind_local], mask.shape)
return ind_global
This returns the global (multi-dimensional) index of the argmax.
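For reference, a self-contained version of the snippet with the undefined names stubbed in (X and S are assumptions standing in for the data and condition arrays; the mask must select at least one element or argmax fails):

import numpy as np

X = np.random.random((5, 7))               # data
S = np.random.randint(0, 2, size=X.shape)  # condition array

mask = S == 1
ind_local = np.argmax(X[mask])                        # argmax inside the subset
G = np.ravel_multi_index(np.where(mask), mask.shape)  # flat indices of masked cells
ind_global = np.unravel_index(G[ind_local], mask.shape)
assert X[ind_global] == X[mask].max()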

Normally you'd store the indices derived from the condition before making any changes to the array; you then use those indices to make the changes.
If a is your array:
>>> a = np.random.random((10,5))
>>> a
array([[ 0.22481885, 0.80522855, 0.1081426 , 0.42528799, 0.64471832],
[ 0.28044374, 0.16202575, 0.4023426 , 0.25480368, 0.87047212],
[ 0.84764143, 0.30580141, 0.16324907, 0.20751965, 0.15903343],
[ 0.55861168, 0.64368466, 0.67676172, 0.67871825, 0.01849056],
[ 0.90980614, 0.95897292, 0.15649259, 0.39134528, 0.96317126],
[ 0.20172827, 0.9815932 , 0.85661944, 0.23273944, 0.86819205],
[ 0.98363954, 0.00219531, 0.91348196, 0.38197302, 0.16002007],
[ 0.48069675, 0.46057327, 0.67085243, 0.05212357, 0.44870942],
[ 0.7031601 , 0.50889065, 0.30199446, 0.8022497 , 0.82347358],
[ 0.57058441, 0.38748261, 0.76947605, 0.48145936, 0.26650583]])
And b is your subarray:
>>> b = a[2:4,2:7]
>>> b
array([[ 0.16324907, 0.20751965, 0.15903343],
[ 0.67676172, 0.67871825, 0.01849056]])
It can be shown that a still owns the data in b:
>>> b.base
array([[ 0.22481885, 0.80522855, 0.1081426 , 0.42528799, 0.64471832],
[ 0.28044374, 0.16202575, 0.4023426 , 0.25480368, 0.87047212],
[ 0.84764143, 0.30580141, 0.16324907, 0.20751965, 0.15903343],
[ 0.55861168, 0.64368466, 0.67676172, 0.67871825, 0.01849056],
[ 0.90980614, 0.95897292, 0.15649259, 0.39134528, 0.96317126],
[ 0.20172827, 0.9815932 , 0.85661944, 0.23273944, 0.86819205],
[ 0.98363954, 0.00219531, 0.91348196, 0.38197302, 0.16002007],
[ 0.48069675, 0.46057327, 0.67085243, 0.05212357, 0.44870942],
[ 0.7031601 , 0.50889065, 0.30199446, 0.8022497 , 0.82347358],
[ 0.57058441, 0.38748261, 0.76947605, 0.48145936, 0.26650583]])
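(A quicker check than eyeballing the printout: for a basic slice, the view's base is the original array.)
>>> b.base is a
True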
You can make changes to both a and b in two ways:
>>> b += 1
>>> b
array([[ 1.16324907, 1.20751965, 1.15903343],
[ 1.67676172, 1.67871825, 1.01849056]])
>>> a
array([[ 0.22481885, 0.80522855, 0.1081426 , 0.42528799, 0.64471832],
[ 0.28044374, 0.16202575, 0.4023426 , 0.25480368, 0.87047212],
[ 0.84764143, 0.30580141, 1.16324907, 1.20751965, 1.15903343],
[ 0.55861168, 0.64368466, 1.67676172, 1.67871825, 1.01849056],
[ 0.90980614, 0.95897292, 0.15649259, 0.39134528, 0.96317126],
[ 0.20172827, 0.9815932 , 0.85661944, 0.23273944, 0.86819205],
[ 0.98363954, 0.00219531, 0.91348196, 0.38197302, 0.16002007],
[ 0.48069675, 0.46057327, 0.67085243, 0.05212357, 0.44870942],
[ 0.7031601 , 0.50889065, 0.30199446, 0.8022497 , 0.82347358],
[ 0.57058441, 0.38748261, 0.76947605, 0.48145936, 0.26650583]])
Or:
>>> a[2:4, 2:7] += 1
>>> a
array([[ 0.22481885, 0.80522855, 0.1081426 , 0.42528799, 0.64471832],
[ 0.28044374, 0.16202575, 0.4023426 , 0.25480368, 0.87047212],
[ 0.84764143, 0.30580141, 1.16324907, 1.20751965, 1.15903343],
[ 0.55861168, 0.64368466, 1.67676172, 1.67871825, 1.01849056],
[ 0.90980614, 0.95897292, 0.15649259, 0.39134528, 0.96317126],
[ 0.20172827, 0.9815932 , 0.85661944, 0.23273944, 0.86819205],
[ 0.98363954, 0.00219531, 0.91348196, 0.38197302, 0.16002007],
[ 0.48069675, 0.46057327, 0.67085243, 0.05212357, 0.44870942],
[ 0.7031601 , 0.50889065, 0.30199446, 0.8022497 , 0.82347358],
[ 0.57058441, 0.38748261, 0.76947605, 0.48145936, 0.26650583]])
>>> b
array([[ 1.16324907, 1.20751965, 1.15903343],
[ 1.67676172, 1.67871825, 1.01849056]])
Both are equivalent and neither is more expensive than the other. Therefore as long as you retain the indices that created b from a, you can always view the changed data in the base array. Often it is not even necessary to create a subarray when doing operations on slices.
Edit
This assumes some_func returns the indices in the subarray where some condition is true.
I think when a function returns indices and you only want to feed that function a subarray, you still need to store the indices of that subarray and use them to get the base array indices. For example:
>>> def some_func(a):
...     return np.where(a > .8)
>>> a = np.random.random((10,4))
>>> a
array([[ 0.94495378, 0.55532342, 0.70112911, 0.4385163 ],
[ 0.12006191, 0.93091941, 0.85617421, 0.50429453],
[ 0.46246102, 0.89810859, 0.31841396, 0.56627419],
[ 0.79524739, 0.20768512, 0.39718061, 0.51593312],
[ 0.08526902, 0.56109783, 0.00560285, 0.18993636],
[ 0.77943988, 0.96168229, 0.10491335, 0.39681643],
[ 0.15817781, 0.17227806, 0.17493879, 0.93961027],
[ 0.05003535, 0.61873245, 0.55165992, 0.85543841],
[ 0.93542227, 0.68104872, 0.84750821, 0.34979704],
[ 0.06888627, 0.97947905, 0.08523711, 0.06184216]])
>>> i_off, j_off = 3,2
>>> b = a[i_off:, j_off:]  # the subarray b
>>> i = some_func(b)  # indices in b
>>> i
(array([3, 4, 5]), array([1, 1, 0]))
>>> map(sum, zip(i, (i_off, j_off)))  # indices in a
[array([6, 7, 8]), array([3, 3, 2])]
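(This transcript is Python 2, where map returns a list. Under Python 3, map is lazy, so an equivalent giving the same result would be a comprehension:)
>>> [idx + off for idx, off in zip(i, (i_off, j_off))]
[array([6, 7, 8]), array([3, 3, 2])]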
Edit 2
This assumes some_func returns a modified copy of the subarray b.
Your example would look something like this:
import numpy as np
def some_function(arr):
    return arr * 2.0
a = np.arange(100)*2. # size = 100
idx = np.arange(0, 100, 5)
b = some_function(a[idx]) # size = 20
b_idx = np.argmax(b)
a_idx = idx[b_idx] # indices in a translated from indices in b
print(b_idx, a_idx)
print(b[b_idx], a[a_idx])
assert b[b_idx] == 2 * a[a_idx]  # true!

Related

How to slice an array around its minimum

I am trying to define a function that finds the minimum value of an array and slices it around that value (plus or minus 5 positions). My array looks something like this:
[[ 0. 9.57705087]
[ 0.0433 9.58249315]
[ 0.0866 9.59745942]
[ 0.1299 9.62194967]
[ 0.1732 9.65324278]
[ 0.2165 9.68725702]
[ 0.2598 9.72263184]
[ 0.3031 9.75256437]
[ 0.3464 9.77025178]
[ 0.3897 9.76889121]
[ 0.433 9.74167982]
[ 0.4763 9.68589645]
[ 0.5196 9.59881999]
[ 0.5629 9.48861383]
[ 0.6062 9.3593597 ]]
However, I am dealing with much larger sets and need a function that can do this automatically, without me having to find the minimum manually and then slice the array around it. I want to find the minimum of the array[:,1] values and then apply the slicing to the whole array.
Use np.argmin() to get the index of the minimum value. This will do it using the second column only (you haven't specified if it's the minimum value across columns or not).
your_array[:np.argmin(your_array[:, 1]), :]
To slice it 5 values further than the minimum, use:
your_array[:np.argmin(your_array[:, 1]) + 5, :]
Given your example array:
import numpy as np
anarray = np.array([[ 0., 9.57705087],
[ 0.0433, 9.58249315],
[ 0.0866, 9.59745942],
[ 0.1299, 9.62194967],
[ 0.1732, 9.65324278],
[ 0.2165, 9.68725702],
[ 0.2598, 9.72263184],
[ 0.3031, 9.75256437],
[ 0.3464, 9.77025178],
[ 0.3897, 9.76889121],
[ 0.433, 9.74167982],
[ 0.4763, 9.68589645],
[ 0.5196, 9.59881999],
[ 0.5629, 0.48861383],
[ 0.6062, 9.3593597]])
This function will do the job:
def slice_by_five(array):
    argmin = np.argmin(array[:, 1])
    if argmin < 5:
        return array[:argmin + 6, :]
    return array[argmin - 5:argmin + 6, :]
check = slice_by_five(anarray)
print(check)
Output:
[[0.3897 9.76889121]
[0.433 9.74167982]
[0.4763 9.68589645]
[0.5196 9.59881999]
[0.5629 9.48861383]
[0.6062 9.3593597 ]]
The function can certainly be generalized to account for any neighborhood of size n:
def slice_by_n(array, n):
    argmin = np.argmin(array[:, 1])
    if argmin < n:
        return array[:argmin + n + 1, :]
    return array[argmin - n:argmin + n + 1, :]
check = slice_by_n(anarray, 2)
print(check)
Output:
[[0.5196 9.59881999]
[0.5629 9.48861383]
[0.6062 9.3593597 ]]

Filter numpy ndarray with another ndarray, row by row

I have 2 numpy ndarrays.
The first contains x and y values:
xy_arr = [[ 736190.125 1130. ]
[ 736190.16666667 1130. ]
[ 736190.20833333 1130. ]
...,
[ 736190.375 1140. ]
[ 736190.41666667 1140. ]
[ 736190.45833333 1140. ]
[ 736190.5 1140. ]]
The second has x, y and index values and is much bigger than the first:
xyind_arr = [[ 7.35964000e+05 1.02000000e+03 0.00000000e+00]
[ 7.35964042e+05 1.02000000e+03 1.00000000e+00]
[ 7.35964083e+05 1.02000000e+03 2.00000000e+00]
...,
[ 7.36613397e+05 1.09500000e+03 3.07730000e+04]
[ 7.36613404e+05 1.10000000e+03 3.07740000e+04]
[ 7.36613411e+05 1.10500000e+03 3.07750000e+04]]
I want to keep all rows of xyind_arr whose (x, y) values match a row of xy_arr, like:
(xyind_arr[:,0] == xy_arr[:,0]) and (xyind_arr[:,1] == xy_arr[:,1])
My code:
sub_array = xyind_arr[((xyind_arr[:, 0] == xy_arr[:, 0]) &
                       (xyind_arr[:, 1] == xy_arr[:, 1]))]
This only works if xy_arr has exactly one row.
For example:
import numpy as np
xy_arr = np.array([[56, 400]])
xyind_arr = np.array([[5, 6, 0],[8, 12, 1],[9, 17, 2],[56, 400, 3],[23, 89, 4]])
sub_array = xyind_arr[((xyind_arr[:, 0] == xy_arr[:, 0]) &
                       (xyind_arr[:, 1] == xy_arr[:, 1]))]
print(sub_array)
The result is OK:
[[ 56 400 3]]
But with
xy_arr = np.array([[5, 6],[8, 12],[23, 89]])
The result is
[]
And I expected
[[5, 6, 0],[8, 12, 1],[23, 89, 4]]
Is there any clean numpy method to obtain this filtered sub-array?
Edit:
Finally I gave up on the numpy solution and used Python's set():
xy_arr_set = set(map(tuple, xy_arr))
xyind_arr_set = set(map(tuple, xyind_arr))
for x, y, ind in xyind_arr_set:
    if (x, y) in xy_arr_set:
        pass  # do what I need
There is numpy.isin, but it tests membership element-wise; there is no tuple comparison in it. You could use it to find all rows of xyind_arr whose 0th-column entry appears somewhere in the 0th column of xy_arr, and whose 1st-column entry appears somewhere in the 1st column. But this is different from your task, because there is no guarantee that both entries were found in the same row of xy_arr.
Since xyind_arr is much larger, I think it should be acceptable to loop over the smaller array xy_arr, applying one xy_arr filter at a time and concatenating the results. For this to work, the rows of xy_arr must be unique, so better check that first:
xy_arr = np.unique(xy_arr, axis=0)
sub_array = np.concatenate([xyind_arr[(xyind_arr[:, 0] == xy_arr[k, 0]) &
                                      (xyind_arr[:, 1] == xy_arr[k, 1])]
                            for k in np.arange(xy_arr.shape[0])], axis=0)
Note: the order of rows will not be preserved.
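If you would rather stay in NumPy and preserve row order, broadcasting can compare every row of xyind_arr against every row of xy_arr at once. A sketch, with the caveat that the intermediate boolean array has shape len(xyind_arr) x len(xy_arr), so it assumes the two arrays are not both huge:

import numpy as np

xy_arr = np.array([[5, 6], [8, 12], [23, 89]])
xyind_arr = np.array([[5, 6, 0], [8, 12, 1], [9, 17, 2], [56, 400, 3], [23, 89, 4]])

# (N, 1, 2) == (1, M, 2) -> (N, M, 2); a row matches when both coordinates agree
matches = (xyind_arr[:, None, :2] == xy_arr[None, :, :]).all(axis=-1)
sub_array = xyind_arr[matches.any(axis=1)]
print(sub_array)  # [[ 5  6  0] [ 8 12  1] [23 89  4]]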

List of List of List slicing in Python

I have simulated 10000 scenarios for 4 variables over 120 months.
Hence, I have a scenarios list of lists of lists; to get an element I would have to use scenarios[1][1][1], for example, and this would give me a float.
I want to slice this in two along the second level, which means I want to keep the 10000 scenarios for 4 variables, but only the first 60 months.
How would I go about doing this?
My intuition would tell me to do
scenarios[:][0:60]
but this does not work. Instead of cutting the second list, it cuts the first. What is wrong?
Example:
Q = data.cov().as_matrix()  # monthly covariance matrix Q
r = [0.00565, 0.00206, 0.00368, 0.00021]  # monthly returns
scenarios = [[]] * 10000
for i in range(10000):
    scenarios[i] = np.random.multivariate_normal(r, Q, size=120)  # monthly scenarios
In my case, Q=
2.167748064990633258e-03 -8.736421379048196659e-05 1.457397098602368978e-04 2.799384719379381381e-06
-8.736421379048196659e-05 9.035930360181909865e-04 3.196576120840064102e-04 3.197146643002681875e-06
1.457397098602368978e-04 3.196576120840064102e-04 2.390042779951682440e-04 2.312645986876262622e-06
2.799384719379381381e-06 3.197146643002681875e-06 2.312645986876262622e-06 4.365866475269951553e-06
Use a list comprehension:
early_scenarios = [x[:60] for x in scenarios]
So, you are trying to use multidimensional slicing on Python list objects, but fundamentally, list objects do not have dimensions. They have no inherent knowledge of their contents, other than the total number of them. But you shouldn't be working with list objects at all! Instead, replace this:
scenarios = [[]] * 10000
for i in range(10000):
    scenarios[i] = np.random.multivariate_normal(r, Q, size=120)  # monthly scenarios
With this:
scenarios = np.random.multivariate_normal(r, Q, size=(1000, 120))
In a REPL:
>>> scenarios = np.random.multivariate_normal(r, Q, size=(1000, 120))
>>> scenarios.shape
(1000, 120, 4)
Then, you can slice to your heart's content in N dimensions using:
scenarios[:, 0:60]
Or, a more wieldy slice:
>>> scenarios[500:520, 0:60]
array([[[-0.05785267, 0.01122828, 0.00786622, -0.00204875],
[ 0.01682276, 0.00163375, 0.00439909, -0.0022255 ],
[ 0.02821342, -0.01634708, 0.01175085, -0.00194007],
...,
[ 0.04918003, -0.02146014, 0.00071328, -0.00222226],
[-0.03782566, -0.00685615, -0.00837397, -0.00095019],
[-0.06164655, 0.02817698, 0.01001757, -0.00149662]],
[[ 0.00071181, -0.00487313, -0.01471801, -0.00180559],
[ 0.05826763, 0.00978292, 0.02442642, -0.00039461],
[ 0.04382627, -0.00804489, 0.00046985, 0.00086524],
...,
[ 0.01231702, 0.01872649, 0.01534518, -0.0022179 ],
[ 0.04212831, -0.05289387, -0.03184881, -0.00078165],
[-0.04361605, -0.01297212, 0.00135886, 0.0057856 ]],
[[ 0.00232622, 0.01773357, 0.00795682, 0.00016406],
[-0.04367355, -0.02387383, -0.00448453, 0.0008559 ],
[ 0.01256918, 0.06565425, 0.05170755, 0.00046948],
...,
[ 0.04457427, -0.01816762, 0.00068176, 0.00186112],
[ 0.00220281, -0.01119046, 0.0103347 , -0.00089715],
[ 0.02178122, 0.03183001, 0.00959293, -0.00057862]],
...,
[[ 0.06338153, 0.01641472, 0.01962643, -0.00256244],
[ 0.07537754, -0.0442643 , -0.00362656, 0.00153777],
[ 0.0505006 , 0.0070783 , 0.01756948, 0.0029576 ],
...,
[ 0.03524508, -0.03547517, -0.00664972, -0.00095385],
[-0.03699107, 0.02256328, 0.00300107, 0.00253193],
[-0.0199608 , -0.00536222, 0.01370301, -0.00131981]],
[[ 0.08601913, -0.00364473, 0.00946769, 0.00045275],
[ 0.01943327, 0.07420857, 0.00109217, -0.00183334],
[-0.04481884, -0.02515305, -0.02357894, -0.00198166],
...,
[-0.01221928, -0.01241903, 0.00928084, 0.00066379],
[ 0.10871802, -0.01264407, 0.00601223, 0.00090526],
[-0.02603179, -0.00413112, -0.006037 , 0.00522712]],
[[-0.02929114, 0.02188803, -0.00427137, 0.00250174],
[ 0.02479416, -0.01470632, -0.01355196, 0.00338125],
[-0.01915726, -0.00869161, 0.01451885, -0.00137969],
...,
[ 0.05398784, -0.00834729, -0.00437888, 0.00081602],
[ 0.00626345, -0.0261016 , -0.01484753, 0.00060499],
[ 0.05427697, 0.04006612, 0.03371313, -0.00203731]]])
You need to explicitly slice each secondary list, either in a loop or in a list comprehension. I built a 10x10 set of lists, so you'll have to change the indexing to fit your problem:
x = []
for a in range(10):
    x.append([10 * a + n for n in range(10)])
# x is now a list of 10 lists, each of which has 10 elements
print(x)
x1 = [a[:5] for a in x]
# x1 is a list containing the low elements of the secondary lists
x2 = [a[5:] for a in x]
# x2 is a list containing the high elements of the secondary lists
print(x1, x2)
Python slicing doesn't consider all dimensions like this. Your expression makes a copy of the entire list, scenarios[:], and then takes the first 60 elements of the copy. You need to write a comprehension to grab the elements you want.
Perhaps
[scenarios[x][y][z]
for x in range(len(scenarios))
for y in range(60)
for z in range(len(scenarios[0][0])) ]
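If the list of lists already exists, a middle ground is to stack it into a single 3-D array once and slice from then on (the zeros below are just a stand-in for the simulated data):

import numpy as np

scenarios = [np.zeros((120, 4)) for _ in range(10000)]  # stand-in for the simulation
arr = np.asarray(scenarios)  # shape (10000, 120, 4)
first_half = arr[:, :60]     # all scenarios, first 60 months, all 4 variables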

Python - can't figure out how to multiply 2 panels (3D arrays) without looping

I have a panel like this:
values = ['count1','count2','price1','price2']
fruit = ['apple','orange']
days = ['d1','d2']
dictx = {}
list_count = []
list_price = []
for v in values:
    if 'count' in v:
        dictx[v] = pd.DataFrame(np.random.randint(0, 10, size=(len(days), len(fruit))), columns=fruit)
        list_count.append(v)
    else:
        dictx[v] = pd.DataFrame(np.random.rand(len(days), len(fruit)), columns=fruit)
        list_price.append(v)
pan = pd.Panel.from_dict(dictx)
Without looping (pretend I had "count" and "price" items 1-100000), I am trying to achieve this:
(pan.ix['count1', :, :] * pan.ix['price1', :, :])
+ (pan.ix['count2', :, :] * pan.ix['price2', :, :])
+ ...
+ (pan.ix['countn', :, :] * pan.ix['pricen', :, :])
I created list_count and list_price as I thought they could be used like this: pan.ix[list_count,:,:] & pan.ix[list_price,:,:]
So I have one more dimension than I know how to deal with at the moment. I hope there is some wonderful 3D array or panel function that can swing this.
Thanks!
If I understand correctly you might be looking for this:
sum(pan.ix[list_count].values * pan.ix[list_price].values)
The .values attribute is a numpy array which then lets you do your elementwise multiplication, addition, etc.
Example
>>> pan.ix[list_count].values
array([[[8, 9],
[2, 0]],
[[5, 1],
[8, 3]]])
>>> pan.ix[list_price].values
array([[[ 0.57644595, 0.52264882],
[ 0.82041129, 0.16165434]],
[[ 0.08450438, 0.58036628],
[ 0.90809822, 0.77834048]]])
>>> pan.ix[list_count].values * pan.ix[list_price].values
array([[[ 4.6115676 , 4.7038394 ],
[ 1.64082259, 0. ]],
[[ 0.42252192, 0.58036628],
[ 7.26478578, 2.33502144]]])
>>> sum(pan.ix[list_count].values * pan.ix[list_price].values)
array([[ 5.03408952, 5.28420568],
[ 8.90560837, 2.33502144]])
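As an aside, pd.Panel (and .ix indexing) has since been deprecated and removed from pandas, but the computation itself is just an elementwise multiply followed by a sum over the first axis, which works on plain 3-D NumPy arrays; a sketch with made-up shapes:

import numpy as np

n, days, fruits = 3, 2, 2
counts = np.random.randint(0, 10, size=(n, days, fruits))  # count1..countn
prices = np.random.rand(n, days, fruits)                   # price1..pricen

revenue = (counts * prices).sum(axis=0)  # sum_k counts[k] * prices[k], per day and fruit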

Slice np.array, rankdata and return ranks to parent of slice

I started with the following array: a set of 3 values for 3 fields, which I need to rank for the 3 objects with ids 123, 124 and 126. Ultimately, in a report, I will look up the values and ranks by object_id.
ha = np.array(
    [
        (123, 5, 3, 4),
        (124, 4, 999, 3),
        (126, 6, 5, 999)
    ], dtype=[
        ('object_id', 'int8'), ('val1', 'int16'),
        ('val2', 'int16'), ('val3', 'int16')])
I am not sure exactly how best to rank them and store that data. My plan was to make a copy of this array and use scipy.stats.rankdata to rank each field, storing the values.
from scipy.stats import rankdata

ra = np.copy(ha)
ra['val1'] = rankdata(ha['val1'], method='min').astype(int)
This works except for the case when an object doesn't have a value for a specific field: it defaults to 999, and such objects need to be removed from the ranking. This is what my code looks like now:
ra = np.copy(ha)
subset = ha[np.where(ha['val1'] < 999)]
ranks = rankdata(subset['val1'], method='min').astype(int)
My problem now is how to get the rank values back into my ra array in the correct positions. They come from a subset of ha, which means the result is no longer the same size as ha or ra.
EDIT:
This is the result I need to end up with, after taking subsets of the first array and ranking the values < 999 from lowest to highest.
ra = np.array(
    [
        (123, 2, 1, 2),
        (124, 1, 0, 1),
        (126, 3, 2, 0)
    ], dtype=[
        ('object_id', 'int8'), ('val1', 'int16'),
        ('val2', 'int16'), ('val3', 'int16')])
SOLUTION
>>> ha = np.array(
...     [
...         (123, 5, 3, 4),
...         (124, 4, 999, 3),
...         (126, 6, 5, 999)
...     ], dtype=[
...         ('object_id', 'int8'), ('val1', 'int16'),
...         ('val2', 'int16'), ('val3', 'int16')])
>>> c = np.copy(ha)
>>> i = ha['val2']<999
>>> c['val2'] = 0
>>> c['val2'][i] = rankdata(ha['val2'][i], method='max').astype(int)
>>> c['val2']
array([1, 0, 2], dtype=int16)
Is this the sort of thing you want (using a simpler sort on a 1d array)?
In [14]: x=np.array([1,0,999,3,2])
In [15]: i=x<999
In [16]: np.sort(x[i])
Out[16]: array([0, 1, 2, 3])
In [17]: y=x.copy()
In [18]: y[i]=np.sort(x[i])
In [19]: y
Out[19]: array([ 0, 1, 999, 2, 3])
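To round this out, a sketch that applies the same masking idea to all three value fields of the structured array from the question, assuming 999 is the only sentinel and 0 should mark missing values:

import numpy as np
from scipy.stats import rankdata

ra = np.copy(ha)
for field in ('val1', 'val2', 'val3'):
    valid = ha[field] < 999  # rows that actually have a value
    ra[field] = 0            # 0 marks "no value to rank"
    ra[field][valid] = rankdata(ha[field][valid], method='min')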
