I have an unknown number n of variables that can range from 0 to 1 with some known step s, under the constraint that they sum to 1. I want to create a matrix of all combinations. For example, if n=3 and s=0.33333, the grid will be (the order is not important):
0.00, 0.00, 1.00
0.00, 0.33, 0.67
0.00, 0.67, 0.33
0.00, 1.00, 0.00
0.33, 0.00, 0.67
0.33, 0.33, 0.33
0.33, 0.67, 0.00
0.67, 0.00, 0.33
0.67, 0.33, 0.00
1.00, 0.00, 0.00
How can I do that for an arbitrary n?
Here is a direct method using itertools.combinations:
>>> import itertools as it
>>> import numpy as np
>>>
>>> # k is 1/s
>>> n, k = 3, 3
>>>
>>> combs = np.array((*it.combinations(range(n+k-1), n-1),), int)
>>> (np.diff(np.c_[np.full((len(combs),), -1), combs, np.full((len(combs),), n+k-1)]) - 1) / k
array([[0.        , 0.        , 1.        ],
       [0.        , 0.33333333, 0.66666667],
       [0.        , 0.66666667, 0.33333333],
       [0.        , 1.        , 0.        ],
       [0.33333333, 0.        , 0.66666667],
       [0.33333333, 0.33333333, 0.33333333],
       [0.33333333, 0.66666667, 0.        ],
       [0.66666667, 0.        , 0.33333333],
       [0.66666667, 0.33333333, 0.        ],
       [1.        , 0.        , 0.        ]])
If speed is a concern, itertools.combinations can be replaced by a numpy implementation.
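For convenience, the one-liner can be wrapped in a small function (a sketch; simplex_grid is a name introduced here, not from the original post):
import itertools as it
import numpy as np

def simplex_grid(n, k):
    """All length-n compositions of k nonnegative steps, scaled to sum to 1."""
    # Stars and bars: choose positions for the n-1 "bars" among n+k-1 slots;
    # the gap sizes between consecutive bars are the step counts per variable.
    combs = np.array(list(it.combinations(range(n + k - 1), n - 1)), int)
    bounded = np.c_[np.full(len(combs), -1), combs, np.full(len(combs), n + k - 1)]
    return (np.diff(bounded) - 1) / k

simplex_grid(3, 3)  # same 10x3 grid as above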
EDIT
Here is a better solution. It partitions the number of steps among the variables, generating only the valid combinations:
def partitions(n, k):
    if n < 0:
        return -partitions(-n, k)
    if k <= 0:
        raise ValueError('Number of partitions must be positive')
    if k == 1:
        return np.array([[n]])
    # dtype=object allows the ragged rows [0], [0, 1], [0, 1, 2], ...
    ranges = np.array([np.arange(i + 1) for i in range(n + 1)], dtype=object)
    parts = ranges[-1].reshape((-1, 1))
    s = ranges[-1]
    for _ in range(1, k - 1):
        d = n - s  # how much is still left to distribute per partial row
        new_col = np.concatenate(ranges[d])
        parts = np.repeat(parts, d + 1, axis=0)
        s = np.repeat(s, d + 1) + new_col
        parts = np.append(parts, new_col.reshape((-1, 1)), axis=1)
    # the last column takes whatever remains
    return np.append(parts, (n - s).reshape((-1, 1)), axis=1)

def make_grid_part(n, step):
    num_steps = round(1.0 / step)
    return partitions(num_steps, n) / float(num_steps)
print(make_grid_part(3, 0.33333))
Output:
array([[ 0.        ,  0.        ,  1.        ],
       [ 0.        ,  0.33333333,  0.66666667],
       [ 0.        ,  0.66666667,  0.33333333],
       [ 0.        ,  1.        ,  0.        ],
       [ 0.33333333,  0.        ,  0.66666667],
       [ 0.33333333,  0.33333333,  0.33333333],
       [ 0.33333333,  0.66666667,  0.        ],
       [ 0.66666667,  0.        ,  0.33333333],
       [ 0.66666667,  0.33333333,  0.        ],
       [ 1.        ,  0.        ,  0.        ]])
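A quick sanity check (a sketch): the expected row count is the stars-and-bars count C(n+k-1, n-1), and every row should sum to 1.
from math import comb

grid = make_grid_part(5, 0.1)                  # 5 variables, 10 steps
assert len(grid) == comb(5 + 10 - 1, 5 - 1)    # 1001 rows
assert np.allclose(grid.sum(axis=1), 1.0)      # each row sums to 1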
For comparison:
%timeit make_grid_part(5, .1)
>>> 338 µs ± 2.25 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit make_grid_simple(5, .1)
>>> 26.4 ms ± 806 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
make_grid_simple actually runs out of memory if you push it just a bit further.
Here is one simple way:
def make_grid_simple(n, step):
    num_steps = round(1.0 / step)
    vs = np.meshgrid(*([np.linspace(0, 1, num_steps + 1)] * n))
    all_combs = np.stack([v.flatten() for v in vs], axis=1)
    return all_combs[np.isclose(all_combs.sum(axis=1), 1)]
print(make_grid_simple(3, 0.33333))
Output:
[[ 0.          0.          1.        ]
 [ 0.33333333  0.          0.66666667]
 [ 0.66666667  0.          0.33333333]
 [ 1.          0.          0.        ]
 [ 0.          0.33333333  0.66666667]
 [ 0.33333333  0.33333333  0.33333333]
 [ 0.66666667  0.33333333  0.        ]
 [ 0.          0.66666667  0.33333333]
 [ 0.33333333  0.66666667  0.        ]
 [ 0.          1.          0.        ]]
However, this is not the most efficient approach: it generates all possible combinations and then keeps only the ones that add up to 1, instead of generating only the right ones in the first place. For small step sizes it may incur too high a memory cost.
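To see how fast the waste grows, compare the full grid size (num_steps+1)^n with the number of valid rows C(n+k-1, n-1) (a small sketch):
from math import comb

n, k = 5, 10                      # 5 variables, step 0.1
full = (k + 1) ** n               # points the meshgrid materializes
valid = comb(n + k - 1, n - 1)    # points that actually sum to 1
print(full, valid)                # 161051 vs 1001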
Assuming that they always add up to 1, as you said:
import itertools
def make_grid(n):
    # setup all possible values in one position
    p = [(float(1)/n)*i for i in range(n+1)]
    # combine values, filter by sum()==1
    return [x for x in itertools.product(p, repeat=n) if sum(x) == 1]
print(make_grid(n=3))
#[(0.0, 0.0, 1.0),
# (0.0, 0.3333333333333333, 0.6666666666666666),
# (0.0, 0.6666666666666666, 0.3333333333333333),
# (0.0, 1.0, 0.0),
# (0.3333333333333333, 0.0, 0.6666666666666666),
# (0.3333333333333333, 0.3333333333333333, 0.3333333333333333),
# (0.3333333333333333, 0.6666666666666666, 0.0),
# (0.6666666666666666, 0.0, 0.3333333333333333),
# (0.6666666666666666, 0.3333333333333333, 0.0),
# (1.0, 0.0, 0.0)]
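One caveat: sum(x) == 1 is an exact float comparison, which happens to hold for thirds but is fragile in general; a tolerance-based variant (a sketch) is safer:
import itertools
import math

def make_grid(n):
    p = [i / float(n) for i in range(n + 1)]
    # compare with a tolerance instead of exact float equality
    return [x for x in itertools.product(p, repeat=n)
            if math.isclose(sum(x), 1.0)]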
We can think of this as the problem of dividing a fixed number of items (1/s here, passed as the sum_left parameter) among a given number of bins (n here). The most efficient way I can think of is a recursion:
In [31]: arr = []

In [32]: def fun(n, sum_left, arr_till_now):
    ...:     if n==1:
    ...:         n_arr = list(arr_till_now)
    ...:         n_arr.append(sum_left)
    ...:         arr.append(n_arr)
    ...:     else:
    ...:         for i in range(sum_left+1):
    ...:             n_arr = list(arr_till_now)
    ...:             n_arr.append(i)
    ...:             fun(n-1, sum_left-i, n_arr)
This would give an output like:
In [36]: fun(n, n, [])
In [37]: arr
Out[37]:
[[0, 0, 3],
 [0, 1, 2],
 [0, 2, 1],
 [0, 3, 0],
 [1, 0, 2],
 [1, 1, 1],
 [1, 2, 0],
 [2, 0, 1],
 [2, 1, 0],
 [3, 0, 0]]
And now I can convert it to a numpy array to do an elementwise multiplication:
In [39]: s = 0.33333333
In [40]: arr_np = np.array(arr)
In [41]: arr_np * s
Out[41]:
array([[ 0.        ,  0.        ,  0.99999999],
       [ 0.        ,  0.33333333,  0.66666666],
       [ 0.        ,  0.66666666,  0.33333333],
       [ 0.        ,  0.99999999,  0.        ],
       [ 0.33333333,  0.        ,  0.66666666],
       [ 0.33333333,  0.33333333,  0.33333333],
       [ 0.33333333,  0.66666666,  0.        ],
       [ 0.66666666,  0.        ,  0.33333333],
       [ 0.66666666,  0.33333333,  0.        ],
       [ 0.99999999,  0.        ,  0.        ]])
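Note the 0.99999999 artifacts that come from multiplying by a truncated s; dividing the integer counts by the number of steps avoids the drift (a sketch using the names above):
num_steps = 3          # round(1 / s)
arr_np / num_steps     # rows now sum to 1.0, up to normal float rounding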
This method will also work for an arbitrary sum (total):
import numpy as np
import itertools as it
import scipy.special
n = 3
s = 1/3.
total = 1.00
interval = round(total/s)  # number of steps; round() guards against float truncation
n_combs = scipy.special.comb(n+interval-1, interval, exact=True)
counts = np.zeros((n_combs, n), dtype=int)
def count_elements(elements, n):
    count = np.zeros(n, dtype=int)
    for elem in elements:
        count[elem] += 1
    return count

for i, comb in enumerate(it.combinations_with_replacement(range(n), interval)):
    counts[i] = count_elements(comb, n)
ratios = counts*s
print(ratios)
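As an aside, the hand-rolled count_elements loop can likely be replaced by np.bincount, which counts the occurrences of each value directly (a sketch):
for i, comb in enumerate(it.combinations_with_replacement(range(n), interval)):
    counts[i] = np.bincount(comb, minlength=n)  # occurrences of each bin index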
Related
I have the following code to plot scalar x vs scalar f(x) where there is some matrix multiplication inside the function:
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import matrix_power
P = np.array([
    [0, 0, 0.5, 0, 0.5],
    [0, 0, 1, 0, 0],
    [.25, .25, 0, .25, .25],
    [0, 0, .5, 0, .5],
    [0, 0, 0, 0, 1],
])
t = np.array([0, 1, 0, 0, 0])
ones = np.array([1, 1, 1, 1, 0])

def f(x):
    return t.dot(matrix_power(P, x)).dot(ones)

x = np.arange(1, 20)
plt.plot(x, f(x))
Now, the function by itself works fine.
>>> f(1)
1.0
>>> f(2)
0.75
But the plotting raises "TypeError: exponent must be an integer".
To put it another way, how do I evaluate this function upon an array? e.g.
f(np.array([1,2]))
I tried replacing the plot line with
plt.plot(x, map(f,x))
But this didn't help.
How can I fix this?
In [1]: P = np.array([
   ...:     [0, 0, 0.5, 0, 0.5],
   ...:     [0, 0, 1, 0, 0],
   ...:     [.25, .25, 0, .25, .25],
   ...:     [0, 0, .5, 0, .5],
   ...:     [0, 0, 0, 0, 1],
   ...: ])
In [2]: P
Out[2]:
array([[0.  , 0.  , 0.5 , 0.  , 0.5 ],
       [0.  , 0.  , 1.  , 0.  , 0.  ],
       [0.25, 0.25, 0.  , 0.25, 0.25],
       [0.  , 0.  , 0.5 , 0.  , 0.5 ],
       [0.  , 0.  , 0.  , 0.  , 1.  ]])

In [4]: np.linalg.matrix_power(P,3)
Out[4]:
array([[0.   , 0.   , 0.25 , 0.   , 0.75 ],
       [0.   , 0.   , 0.5  , 0.   , 0.5  ],
       [0.125, 0.125, 0.   , 0.125, 0.625],
       [0.   , 0.   , 0.25 , 0.   , 0.75 ],
       [0.   , 0.   , 0.   , 0.   , 1.   ]])
In [5]: np.linalg.matrix_power(P,np.arange(0,4))
---------------------------------------------------------------------------
TypeError: exponent must be an integer
So just give it the integer that it wants:
In [10]: [f(i) for i in range(4)]
Out[10]: [1.0, 1.0, 0.75, 0.5]
pylab.plot(np.arange(25), [f(i) for i in np.arange(25)])
From the matrix_power code:
a = asanyarray(a)
_assertRankAtLeast2(a)
_assertNdSquareness(a)
try:
    n = operator.index(n)
except TypeError:
    raise TypeError("exponent must be an integer")
....
Here's what it does for n=3:
In [5]: x = np.arange(9).reshape(3,3)

In [6]: np.linalg.matrix_power(x,3)
Out[6]:
array([[ 180,  234,  288],
       [ 558,  720,  882],
       [ 936, 1206, 1476]])

In [7]: x @ x @ x
Out[7]:
array([[ 180,  234,  288],
       [ 558,  720,  882],
       [ 936, 1206, 1476]])
You could define a matrix_power function that accepts an array of powers:
def matrix_power(P, x):
    return np.array([np.linalg.matrix_power(P, i) for i in x])
With this matrix_power(P,np.arange(25)) would produce a (25,5,5) array. And your f(x) actually does work with that, returning a (25,) shape array. But I wonder, was that just fortuitous, or was it intentional? Did you write f with a 3d power array in mind?
t.dot(matrix_power(P,x)).dot(ones)
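Putting it together, a minimal end-to-end sketch reusing the question's P, t, and ones (the list comprehension sidesteps the integer-exponent restriction):
import numpy as np
import matplotlib.pyplot as plt

P = np.array([[0, 0, .5, 0, .5],
              [0, 0, 1, 0, 0],
              [.25, .25, 0, .25, .25],
              [0, 0, .5, 0, .5],
              [0, 0, 0, 0, 1]])
t = np.array([0, 1, 0, 0, 0])
ones = np.array([1, 1, 1, 1, 0])

x = np.arange(1, 20)
# one integer power at a time, then the same two dot products as f(x)
y = [t.dot(np.linalg.matrix_power(P, int(i))).dot(ones) for i in x]
plt.plot(x, y)
plt.show()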
I would like to calculate the sum of every two columns in a matrix (the sum of columns 0 and 1, of columns 2 and 3, and so on).
I tried nested for loops, but I never get the right result.
For example:
c = np.array([[0,0,0.25,0.5],[0,0.5,0.25,0],[0.5,0,0,0]],float)
freq=np.zeros(6,float).reshape((3, 2))
# I calculate the sum of the first and second columns, and of the third and fourth columns
for i in range(0,4,2):
    for j in range(1,4,2):
        for p in range(0,2):
            freq[:,p]=(c[:,i]+c[:,j])
But the result is:
print freq
array([[ 0.75,  0.75],
       [ 0.25,  0.25],
       [ 0.  ,  0.  ]])
Normally the correct result should be (0., 0.5, 0.5) and (0.75, 0.25, 0.), so I think the problem is in the nested for loops.
How can I calculate the sum of every two columns? My actual matrix has 400 columns.
You can simply reshape to split the last dimension into two dimensions, with the last dimension of length 2 and then sum along it, like so -
freq = c.reshape(c.shape[0],-1,2).sum(2).T
Reshaping only creates a view into the array, so effectively we are just paying for the summing operation here, which should be efficient.
Sample run -
In [17]: c
Out[17]:
array([[ 0.  ,  0.  ,  0.25,  0.5 ],
       [ 0.  ,  0.5 ,  0.25,  0.  ],
       [ 0.5 ,  0.  ,  0.  ,  0.  ]])

In [18]: c.reshape(c.shape[0],-1,2).sum(2).T
Out[18]:
array([[ 0.  ,  0.5 ,  0.5 ],
       [ 0.75,  0.25,  0.  ]])
Add the slices c[:, ::2] and c[:, 1::2]:
In [62]: c
Out[62]:
array([[ 0.  ,  0.  ,  0.25,  0.5 ],
       [ 0.  ,  0.5 ,  0.25,  0.  ],
       [ 0.5 ,  0.  ,  0.  ,  0.  ]])

In [63]: c[:, ::2] + c[:, 1::2]
Out[63]:
array([[ 0.  ,  0.75],
       [ 0.5 ,  0.25],
       [ 0.5 ,  0.  ]])
Here is one way using np.split():
In [36]: np.array(np.split(c, np.arange(2, c.shape[1], 2), axis=1)).sum(axis=-1)
Out[36]:
array([[ 0.  ,  0.5 ,  0.5 ],
       [ 0.75,  0.25,  0.  ]])
Or, as a more general way that works even for an odd number of columns:
In [87]: def vertical_adder(array):
    ...:     return np.column_stack([np.sum(arr, axis=1) for arr in
    ...:                             np.array_split(array, np.arange(2, array.shape[1], 2), axis=1)])

In [88]: vertical_adder(c)
Out[88]:
array([[ 0.  ,  0.75],
       [ 0.5 ,  0.25],
       [ 0.5 ,  0.  ]])
In [94]: a
Out[94]:
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14]])

In [95]: vertical_adder(a)
Out[95]:
array([[ 1,  5,  4],
       [11, 15,  9],
       [21, 25, 14]])
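As a quick cross-check (a sketch), the reshape-based and slice-based answers agree on the pairwise sums:
import numpy as np

c = np.array([[0, 0, .25, .5],
              [0, .5, .25, 0],
              [.5, 0, 0, 0]])

a1 = c.reshape(c.shape[0], -1, 2).sum(2)  # reshape, then sum each pair
a2 = c[:, ::2] + c[:, 1::2]               # add even and odd column slices
assert np.allclose(a1, a2)                # one output column per input pair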
I am porting some matlab code to python using numpy and I have the following matlab command:
[xgrid,ygrid]=meshgrid(linspace(-0.5,0.5, GridSize-1), ...
linspace(-0.5,0.5, GridSize-1));
Now, this is fine in 2D, but I would like to extend this to n dimensions. Depending on the input data, GridSize can be a vector of length 2, 3 or 4. In 2D this would be:
[xgrid, ygrid] = np.meshgrid(np.linspace(-0.5, 0.5, GridSize[0]),
                             np.linspace(-0.5, 0.5, GridSize[1]))
However, I do not know the dimensionality of the input beforehand, so is it possible to rewrite this expression so that it can generate grids with an arbitrary number of dimensions?
You could use a list comprehension to generate all the 1D arrays and then call np.meshgrid on them with the * operator, which unpacks the argument list (the equivalent of MATLAB's comma-separated lists), like so -
allG = [np.linspace(-0.5,0.5, G) for G in GridSize]
out = np.meshgrid(*allG)
Sample runs
1) 2D Case :
In [27]: GridSize = [3,4]

In [28]: allG = [np.linspace(-0.5,0.5, G) for G in GridSize]
    ...: out = np.meshgrid(*allG)
    ...:

In [29]: out[0]
Out[29]:
array([[-0.5,  0. ,  0.5],
       [-0.5,  0. ,  0.5],
       [-0.5,  0. ,  0.5],
       [-0.5,  0. ,  0.5]])

In [30]: out[1]
Out[30]:
array([[-0.5       , -0.5       , -0.5       ],
       [-0.16666667, -0.16666667, -0.16666667],
       [ 0.16666667,  0.16666667,  0.16666667],
       [ 0.5       ,  0.5       ,  0.5       ]])
2) 3D Case :
In [51]: GridSize = [3,4,2]

In [52]: allG = [np.linspace(-0.5,0.5, G) for G in GridSize]
    ...: out = np.meshgrid(*allG)
    ...:

In [53]: out[0]
Out[53]:
array([[[-0.5, -0.5],
        [ 0. ,  0. ],
        [ 0.5,  0.5]], ...
       [[-0.5, -0.5],
        [ 0. ,  0. ],
        [ 0.5,  0.5]]])

In [54]: out[1]
Out[54]:
array([[[-0.5       , -0.5       ], ...
       [[ 0.16666667,  0.16666667],
        [ 0.16666667,  0.16666667],
        [ 0.16666667,  0.16666667]],
       [[ 0.5       ,  0.5       ],
        [ 0.5       ,  0.5       ],
        [ 0.5       ,  0.5       ]]])

In [55]: out[2]
Out[55]:
array([[[-0.5,  0.5], ....
       [[-0.5,  0.5],
        [-0.5,  0.5],
        [-0.5,  0.5]]])
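One caveat worth knowing (an aside, not from the original answer): np.meshgrid defaults to indexing='xy' like MATLAB's meshgrid; if you want MATLAB ndgrid-style axis ordering in n dimensions, pass indexing='ij' (a sketch):
import numpy as np

GridSize = [3, 4, 2]
allG = [np.linspace(-0.5, 0.5, G) for G in GridSize]
out = np.meshgrid(*allG, indexing='ij')  # ndgrid-style axis ordering
print(out[0].shape)                      # (3, 4, 2)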
I have a matrix that should have ones on the diagonal but the columns are mixed up.
But I don't know how, without the obvious for loop, to efficiently interchange rows to get unity on the diagonals. I'm not even sure what key I would pass to sort on.
Any suggestions?
You can use numpy's argmax to determine the goal column ordering and reorder your matrix using the argmax results as column indices:
>>> import numpy
>>> z = numpy.array([[ 0.1 ,  0.1 ,  1.  ],
...                  [ 1.  ,  0.1 ,  0.09],
...                  [ 0.1 ,  1.  ,  0.2 ]])
>>> numpy.argmax(z, axis=1)
array([2, 0, 1])  # goal column indices
>>> z[:, numpy.argmax(z, axis=1)]
array([[ 1.  ,  0.1 ,  0.1 ],
       [ 0.09,  1.  ,  0.1 ],
       [ 0.2 ,  0.1 ,  1.  ]])
>>> import numpy as np
>>> a = np.array([[ 1. ,  0.5,  0.5,  0. ],
...               [ 0.5,  0.5,  1. ,  0. ],
...               [ 0. ,  1. ,  0. ,  0.5],
...               [ 0. ,  0.5,  0.5,  1. ]])
>>> np.array(sorted(a, key=lambda row: list(row).index(1)))
array([[ 1. ,  0.5,  0.5,  0. ],
       [ 0. ,  1. ,  0. ,  0.5],
       [ 0.5,  0.5,  1. ,  0. ],
       [ 0. ,  0.5,  0.5,  1. ]])
It actually sorts rows, not columns (but the diagonal still ends up with ones). It works by sorting the rows by the index of the column their 1 is in.
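The same row sort can be done entirely in numpy (a sketch; np.argmax finds each row's 1 and np.argsort orders the rows by it):
import numpy as np

a = np.array([[ 1. ,  0.5,  0.5,  0. ],
              [ 0.5,  0.5,  1. ,  0. ],
              [ 0. ,  1. ,  0. ,  0.5],
              [ 0. ,  0.5,  0.5,  1. ]])

# reorder rows so that row i has its 1 in column i
a[np.argsort(np.argmax(a, axis=1))]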