[[ 0, 0, 0, 0, 255, 0, 0, 0, 0],
[ 0, 0, 255, 255, 255, 255, 255, 0, 0],
[ 0, 255, 255, 255, 255, 255, 255, 255, 0],
[ 0, 255, 255, 255, 255, 255, 255, 255, 0],
[255, 255, 255, 255, 255, 255, 255, 255, 255],
[ 0, 255, 255, 255, 255, 255, 255, 255, 0],
[ 0, 255, 255, 255, 255, 255, 255, 255, 0],
[ 0, 0, 255, 255, 255, 255, 255, 0, 0],
[ 0, 0, 0, 0, 255, 0, 0, 0, 0]]
I have a mask array like the one above. I would like to get the x and y coordinates belonging to the perimeter of the mask. The perimeter points are the ones shown in the array below:
[[ 0, 0, 0, 0, 255, 0, 0, 0, 0],
[ 0, 0, 255, 255, 0, 255, 255, 0, 0],
[ 0, 255, 0, 0, 0, 0, 0, 255, 0],
[ 0, 255, 0, 0, 0, 0, 0, 255, 0],
[255, 0, 0, 0, 0, 0, 0, 0, 255],
[ 0, 255, 0, 0, 0, 0, 0, 255, 0],
[ 0, 255, 0, 0, 0, 0, 0, 255, 0],
[ 0, 0, 255, 255, 0, 255, 255, 0, 0],
[ 0, 0, 0, 0, 255, 0, 0, 0, 0]]
In the array above, I could just use numpy.nonzero(), but I was unable to apply this logic to the original array, because it returns a tuple of arrays, one with the row indices and one with the column indices of all non-zero elements, without distinguishing perimeter points from interior ones.
I wrote the code below which works but seems inefficient:
height = mask.shape[0]
width = mask.shape[1]
y_coords = []
x_coords = []
for y in range(1, height - 1):
    for x in range(0, width - 1):
        val = mask[y, x]
        prev_val = mask[y, x - 1]
        next_val = mask[y, x + 1]
        top_val = mask[y - 1, x]
        bot_val = mask[y + 1, x]
        if val != 0 and (prev_val == 0 or next_val == 0 or top_val == 0 or bot_val == 0):
            y_coords.append(y)
            x_coords.append(x)
I am new to python and would like to learn a better way to do this. Perhaps using Numpy?
I played a bit with your problem and realized you can use convolutions to count the number of neighboring 255s for each cell, and then filter the points based on the appropriate neighbor counts.
I am giving a detailed explanation below, although one part was trial and error and you could potentially skip it and get directly to the code if you understand that convolutions can count neighbors in binary images.
First observation: When does a point belong to the perimeter of the mask?
Well, that point has to have a value of 255 and "around" it, there must be at least one (and possibly more) 0.
Next: What is the definition of "around"?
We could consider all four cardinal (i.e. North, East, South, West) neighbors. In this case, a point of the perimeter must have at least one cardinal neighbor which is 0.
You have already done that, and truly, I cannot think of a faster way under this definition.
What if we extended the definition of "around"?
Now, let's consider the neighbors of a point at (i,j) all points along an N x N square centered on (i,j). I.e. all points (x,y) such that i-N/2 <= x <= i+N/2 and j-N/2 <= y <= j+N/2 (where N is odd and ignoring out of bounds for the moment).
This is more useful from a performance point of view, because the operation of sliding "windows" along a 2D array is called a "convolution", and there are built-in functions that perform convolutions on numpy arrays really fast; scipy.ndimage.convolve works great.
I won't attempt to fully explain convolutions here (the internet is full of nice visuals), but the main idea is that a convolution replaces the value of each cell with a weighted sum of the values of all its neighboring cells. Depending on the weight matrix (or kernel) you specify, the convolution does different things.
Now, if your mask were 1s and 0s, counting the neighboring 1s around a cell would just need a kernel of all 1s (the weighted sum then simply adds up the 1s and cancels the 0s). So we will first scale the values from [0, 255] down to [0, 1].
Great, we know how to quickly count the neighbors of a point within an area, but the two questions are
What area size should we choose?
How many neighbors do the points in the perimeter have, now that we are including diagonal and more faraway neighbors?
I suppose there is an explicit answer to that, but I did some trial and error. It turns out we need N=5, in which case the number of neighboring 1s for each point of the original mask is the following:
[[ 3 5 8 10 11 10 8 5 3]
[ 5 8 12 15 16 15 12 8 5]
[ 8 12 17 20 21 20 17 12 8]
[10 15 20 24 25 24 20 15 10]
[11 16 21 25 25 25 21 16 11]
[10 15 20 24 25 24 20 15 10]
[ 8 12 17 20 21 20 17 12 8]
[ 5 8 12 15 16 15 12 8 5]
[ 3 5 8 10 11 10 8 5 3]]
Comparing that matrix with your original mask, the points on the perimeter are the ones having values between 11 and 15 (inclusive) [1]. So we simply filter out the rest using np.where().
A final caveat: We need to explicitly tell the convolve function how to treat points near the edges, where an N x N window won't fit. In those cases, we tell it to treat out-of-bounds values as 0s.
The full code is following:
import numpy as np
from scipy import ndimage as ndi

mask //= 255  # scale [0, 255] down to [0, 1]
kernel = np.ones((5, 5))
C = ndi.convolve(mask, kernel, mode='constant', cval=0)
#print(C) # The C matrix contains the number of neighbors for each cell.
outer = np.where((C >= 11) & (C <= 15), 255, 0)
print(outer)
[[ 0 0 0 0 255 0 0 0 0]
[ 0 0 255 255 0 255 255 0 0]
[ 0 255 0 0 0 0 0 255 0]
[ 0 255 0 0 0 0 0 255 0]
[255 0 0 0 0 0 0 0 255]
[ 0 255 0 0 0 0 0 255 0]
[ 0 255 0 0 0 0 0 255 0]
[ 0 0 255 255 0 255 255 0 0]
[ 0 0 0 0 255 0 0 0 0]]
[1] Note that we are also counting the point itself as one of its own neighbors. That's alright.
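As an aside, the original 4-neighbor definition can also be vectorized with numpy alone, by comparing the mask against shifted copies of itself. This is only a sketch (`perimeter_mask` is a made-up helper name), but it reproduces the expected output, including the border behavior:

```python
import numpy as np

def perimeter_mask(mask):
    # A foreground cell is on the perimeter if at least one of its four
    # cardinal neighbors is 0 (out-of-bounds neighbors count as 0).
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    # A cell is interior only if all four cardinal neighbors are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return m & ~interior

# The coordinates, as in the original loop:
# y_coords, x_coords = np.nonzero(perimeter_mask(mask))
```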
I think this would work. I edited my answer based on my new understanding of the problem: you want the outer pixels of the 255 "circle".
What I did here is get the coordinates of all pixels that are 255 (print result to see them), and then select the first and last occurrence in each row, that's it.
import numpy as np

result = np.where(pixel == 255)   # `pixel` is the mask array
items = list(zip(result[0], result[1]))
unique = []
perimeter = []
for index in range(len(items) - 1):
    if items[index][0] != items[index + 1][0] or items[index][0] not in unique:
        unique.append(items[index][0])
        perimeter.append(items[index])
perimeter.append(items[-1])
Output
[(0, 4),
(1, 2),
(1, 6),
(2, 1),
(2, 7),
(3, 1),
(3, 7),
(4, 0),
(4, 8),
(5, 1),
(5, 7),
(6, 1),
(6, 7),
(7, 2),
(7, 6),
(8, 4)]
I would like to extend a 2D array in Python in some way.
No loops.
E.g. if it is:
[[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]
I would say I want to extend it by a factor of 2 and get this:
[[255, 0, 255, 0, 255, 0],
[0, 0, 0, 0, 0, 0],
[255, 0, 255, 0, 255, 0],
[0, 0, 0, 0, 0, 0],
[255, 0, 255, 0, 255, 0],
[0, 0, 0, 0, 0, 0]]
and so on for a factor of 4.
Is there any function?
Here is a solution using numpy. You didn't provide an example for factor 4, so I guessed:
import numpy as np
arr = np.array([[255, 255, 255], [255, 255, 255], [255, 255, 255]])
factor = 4
print(arr)
nx, ny = arr.shape
if nx != ny:
raise Exception("Array is not square")
step = 2 + factor//2 - 1
stop = nx * step
print('stop:', stop)
print('step:', step)
for x in range(1, stop, step):
    print()
    nx, ny = arr.shape
    print('x:', x)
    value = [[0]*nx]*(factor//2)
    print('Inserting columns:', value)
    arr = np.insert(arr, x, value, axis=1)
    nx, ny = arr.shape
    print(arr)
    value = [[0]*ny]*(factor//2)
    print('Inserting rows:', value)
    arr = np.insert(arr, x, value, axis=0)
    print(arr)
print(arr)
[[255 255 255]
[255 255 255]
[255 255 255]]
stop: 9
step: 3
x: 1
Inserting columns: [[0, 0, 0], [0, 0, 0]]
[[255 0 0 255 255]
[255 0 0 255 255]
[255 0 0 255 255]]
Inserting rows: [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
[[255 0 0 255 255]
[ 0 0 0 0 0]
[ 0 0 0 0 0]
[255 0 0 255 255]
[255 0 0 255 255]]
x: 4
Inserting columns: [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
[[255 0 0 255 0 0 255]
[ 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0]
[255 0 0 255 0 0 255]
[255 0 0 255 0 0 255]]
Inserting rows: [[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]]
[[255 0 0 255 0 0 255]
[ 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0]
[255 0 0 255 0 0 255]
[ 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0]
[255 0 0 255 0 0 255]]
x: 7
Inserting columns: [[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]]
[[255 0 0 255 0 0 255 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[255 0 0 255 0 0 255 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[255 0 0 255 0 0 255 0 0]]
Inserting rows: [[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]
[[255 0 0 255 0 0 255 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[255 0 0 255 0 0 255 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[255 0 0 255 0 0 255 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]]
You can create a new, bigger array filled with zeros
factor = 2
h, w = arr.shape[:2]
new_w = w*factor
new_h = h*factor
new_arr = np.zeros((new_h, new_w), np.uint8)
and then use for-loops with zip() and range(0, new_h, factor) to get each value and its new position
for row, y in zip(arr, range(0, new_h, factor)):
    for value, x in zip(row, range(0, new_w, factor)):
        new_arr[y, x] = value
gives
[[255 0 255 0 255 0]
[ 0 0 0 0 0 0]
[255 0 255 0 255 0]
[ 0 0 0 0 0 0]
[255 0 255 0 255 0]
[ 0 0 0 0 0 0]]
If you use a different start value instead of 0 in range() then you get an offset
offset_y = 1
offset_x = 1
for row, y in zip(arr, range(offset_y, new_h, factor)):
    for value, x in zip(row, range(offset_x, new_w, factor)):
        new_arr[y, x] = value
gives:
[[ 0 0 0 0 0 0]
[ 0 255 0 255 0 255]
[ 0 0 0 0 0 0]
[ 0 255 0 255 0 255]
[ 0 0 0 0 0 0]
[ 0 255 0 255 0 255]]
Working code
import numpy as np
arr = np.array([[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]
)
factor = 2
h, w = arr.shape[:2]
new_w = w*factor
new_h = h*factor
new_arr = np.zeros((new_h, new_w), np.uint8)
offset_x = 0
offset_y = 0
for row, y in zip(arr, range(offset_y, new_h, factor)):
    #print(row, y)
    for value, x in zip(row, range(offset_x, new_w, factor)):
        #print(y, x, value)
        new_arr[y, x] = value
print(new_arr)
BTW: You could even use factor_x, factor_y with different values.
for example
factor_x = 4
factor_y = 2
in code
import numpy as np
arr = np.array([[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]
)
factor_x = 4
factor_y = 2
h, w = arr.shape[:2]
new_w = w*factor_x
new_h = h*factor_y
new_arr = np.zeros((new_h, new_w), np.uint8)
offset_x = 0
offset_y = 0
for row, y in zip(arr, range(offset_y, new_h, factor_y)):
    #print(row, y)
    for value, x in zip(row, range(offset_x, new_w, factor_x)):
        #print(y, x, value)
        new_arr[y, x] = value
print(new_arr)
gives
[[255 0 0 0 255 0 0 0 255 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0]
[255 0 0 0 255 0 0 0 255 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0]
[255 0 0 0 255 0 0 0 255 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0]]
Extending is simple with slicing:
array = np.array([[255, 255, 255], [255, 255, 255], [255, 255, 255]])
factor = 2
extended_array = np.zeros((array.shape[0]*factor, array.shape[1]*factor))
extended_array[::factor,::factor] = array
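If you prefer a one-liner, np.kron with a block that has a 1 in its top-left corner produces the same interleaved result; each element of the array gets replaced by element times the block:

```python
import numpy as np

array = np.array([[255, 255, 255], [255, 255, 255], [255, 255, 255]])
factor = 2

# Block with the value in the top-left corner and zeros elsewhere.
block = np.zeros((factor, factor), dtype=array.dtype)
block[0, 0] = 1

# Kronecker product: each element e becomes the sub-block e * block.
extended = np.kron(array, block)
```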
You can do this without numpy just with list comprehension:
lst = [[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]
extend_list = [[lst[j // 2][i // 2] if j % 2 == 0 and i % 2 == 0 else 0
                for i in range(2 * len(lst[0]))]
               for j in range(2 * len(lst))]
print(extend_list)
the output:
[[255, 0, 255, 0, 255, 0], [0, 0, 0, 0, 0, 0], [255, 0, 255, 0, 255, 0], [0, 0, 0, 0, 0, 0], [255, 0, 255, 0, 255, 0], [0, 0, 0, 0, 0, 0]]
how can I combine a binary mask image array (this_mask - shape:4,4) with a predefined color array (mask_color, shape:3)
this_mask = np.array([
[0,1,0,0],
[0,0,0,0],
[0,0,0,0],
[0,0,0,0],
])
this_mask.shape # (4,4)
mask_color = np.array([128, 128, 64])
mask_color.shape # (3)
to get a new color mask image array (this_mask_colored, shape:4,4,3)?
this_mask_colored = # do something with `this_mask` and `mask_color`
# [
# [
# [0,128,0],
# [0,0,0],
# [0,0,0],
# [0,0,0]
# ],
# [
# [0,128,0],
# [0,0,0],
# [0,0,0],
# [0,0,0]
# ],
# [
# [0,64,0],
# [0,0,0],
# [0,0,0],
# [0,0,0]
# ],
# ]
this_mask_colored.shape # (4,4,3)
I tried looping through pixel by pixel, but it is slow when the image is 225x225. What is the best way to do this?
For each image, I have multiple layers of mask, and each mask layer needs to have a different predefine color.
This might work:
this_mask = np.array([
[0,1,0,0],
[0,0,0,0],
[0,0,0,0],
[0,0,0,0],
])
mask_color = np.array([128, 128, 64])
res = []
for row in this_mask:
    tmp = []
    for col in row:
        tmp.append(np.array([1, 1, 1]) * col)
    res.append(np.array(tmp))
res = res * mask_color
For each entry, 1 is converted to [1, 1, 1] and 0 to [0, 0, 0].
I do this to take advantage of element-wise multiplication with the * operator.
This works:
test = np.array([[0, 0, 0],
[1, 1, 1],
[0, 0, 0],
[0, 0, 0]])
test * np.array([128, 128, 64])
We'll get
array([[ 0, 0, 0],
[128, 128, 64],
[ 0, 0, 0],
[ 0, 0, 0]])
And we want to leave all the heavy calculation on numpy's side. So we loop through the array just for the conversion and let numpy do the rest.
This takes about 0.2 s for a 255x255 mask with one mask_color, and about 2 s for 1000x1000.
The following function should do what you want.
def apply_mask_color(mask, mask_color):
    return np.concatenate([mask[..., np.newaxis] * color for color in mask_color], axis=2)
Given the following code:
this_mask = np.array([
[0,1,0,0],
[0,0,0,0],
[0,0,0,0],
[0,0,0,0],
])
mask_color = np.array([128, 128, 64])
applied = apply_mask_color(this_mask, mask_color)
print(applied.shape) #(4, 4, 3)
It is important to note that the output isn't QUITE what you expected. Rather, every element is now a length-3 array holding the R, G, B values detailed in mask_color.
print(applied)
Output:
[[[ 0 0 0]
[128 128 64]
[ 0 0 0]
[ 0 0 0]]
[[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]]
[[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]]
[[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]]]
I think this is more what you're looking for.
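The same (4, 4, 3) result can be obtained in one broadcasting step, with no list comprehension, if that suits your use better:

```python
import numpy as np

this_mask = np.array([
    [0, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
])
mask_color = np.array([128, 128, 64])

# (4, 4, 1) * (3,) broadcasts to (4, 4, 3): every 1 becomes the RGB triple,
# every 0 becomes [0, 0, 0].
this_mask_colored = this_mask[:, :, np.newaxis] * mask_color
print(this_mask_colored.shape)  # (4, 4, 3)
```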
So I'm trying to generate a list of possible adjacent movements within a 3D array (preferably n-dimensional).
What I have works as it's supposed to, but I was wondering if there's a more numpythonic way to do so.
def adjacents(loc, bounds):
    adj = []
    bounds = np.array(bounds) - 1
    if loc[0] > 0:
        adj.append((-1, 0, 0))
    if loc[1] > 0:
        adj.append((0, -1, 0))
    if loc[2] > 0:
        adj.append((0, 0, -1))
    if loc[0] < bounds[0]:
        adj.append((1, 0, 0))
    if loc[1] < bounds[1]:
        adj.append((0, 1, 0))
    if loc[2] < bounds[2]:
        adj.append((0, 0, 1))
    return np.array(adj)
Here are some example outputs:
adjacents((0, 0, 0), (10, 10, 10))
= [[1 0 0]
[0 1 0]
[0 0 1]]
adjacents((9, 9, 9), (10, 10, 10))
= [[-1 0 0]
[ 0 -1 0]
[ 0 0 -1]]
adjacents((5, 5, 5), (10, 10, 10))
= [[-1 0 0]
[ 0 -1 0]
[ 0 0 -1]
[ 1 0 0]
[ 0 1 0]
[ 0 0 1]]
Here's an alternative which is vectorized and uses a constant, prepopulated array:
# all possible moves
_moves = np.array([
[-1, 0, 0],
[ 0,-1, 0],
[ 0, 0,-1],
[ 1, 0, 0],
[ 0, 1, 0],
[ 0, 0, 1]])
def adjacents(loc, bounds):
    loc = np.asarray(loc)
    bounds = np.asarray(bounds)
    mask = np.concatenate((loc > 0, loc < bounds - 1))
    return _moves[mask]
This uses asarray() instead of array() because it avoids copying if the input happens to be an array already. Then mask is constructed as an array of six bools corresponding to the original six if conditions. Finally, the appropriate rows of the constant data _moves are returned.
But what about performance?
The vectorized approach above, while it will appeal to some, actually runs only half as fast as the original. If it's performance you're after, the best simple change you can make is to remove the line bounds = np.array(bounds) - 1 and subtract 1 inside each of the last three if conditions. That gives you a 2x speedup (because it avoids creating an unnecessary array).
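Since you mentioned preferring an n-dimensional version: the same mask trick generalizes if you build the move table from an identity matrix instead of writing it out by hand. A sketch (`adjacents_nd` is a made-up name):

```python
import numpy as np

def adjacents_nd(loc, bounds):
    # Unit moves along each axis: the n negative moves first, then the n
    # positive ones, matching the order of the concatenated mask below.
    loc = np.asarray(loc)
    bounds = np.asarray(bounds)
    eye = np.eye(loc.size, dtype=int)
    moves = np.concatenate((-eye, eye))
    mask = np.concatenate((loc > 0, loc < bounds - 1))
    return moves[mask]
```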
In a desperate attempt to switch from Matlab to python, I am encountering the following problem:
In Matlab, I am able to define a matrix like:
N = [1 0 0 0 -1 -1 -1 0 0 0;% A
0 1 0 0 1 0 0 -1 -1 0;% B
0 0 0 0 0 1 0 1 0 -1;% C
0 0 0 0 0 0 1 0 0 -1;% D
0 0 0 -1 0 0 0 0 0 1;% E
0 0 -1 0 0 0 0 0 1 1]% F
The rational basis nullspace (kernel) can then be calculated by:
K_nur= null(N,'r')
And the orthonormal basis like:
K_nuo= null(N)
This outputs the following:
N =
1 0 0 0 -1 -1 -1 0 0 0
0 1 0 0 1 0 0 -1 -1 0
0 0 0 0 0 1 0 1 0 -1
0 0 0 0 0 0 1 0 0 -1
0 0 0 -1 0 0 0 0 0 1
0 0 -1 0 0 0 0 0 1 1
K_nur =
1 -1 0 2
-1 1 1 0
0 0 1 1
0 0 0 1
1 0 0 0
0 -1 0 1
0 0 0 1
0 1 0 0
0 0 1 0
0 0 0 1
K_nuo =
0.5933 0.1332 0.3070 -0.3218
-0.0930 0.0433 0.2029 0.7120
0.1415 0.0084 0.5719 0.2220
0.3589 0.1682 -0.0620 0.1682
-0.1628 0.4518 0.3389 -0.4617
0.3972 -0.4867 0.0301 -0.0283
0.3589 0.1682 -0.0620 0.1682
-0.0383 0.6549 -0.0921 0.1965
-0.2174 -0.1598 0.6339 0.0538
0.3589 0.1682 -0.0620 0.1682
I have been trying to replicate this in Python SAGE, but so far, I have had no success. My code looks like this:
st1= matrix([
[ 1, 0, 0, 0,-1,-1,-1, 0, 0, 0],
[ 0, 1, 0, 0, 1, 0, 0,-1,-1, 0],
[ 0, 0, 0, 0, 0, 1, 0, 1, 0,-1],
[ 0, 0, 0, 0, 0, 0, 1, 0, 0,-1],
[ 0, 0, 0,-1, 0, 0, 0, 0, 0, 1],
[ 0, 0,-1, 0, 0, 0, 0, 0, 1, 1]])
print st1
null2_or= transpose(st1).kernel()
null2_ra= transpose(st1).kernel().basis()
print "nullr2_or"
print null2_or
print "nullr2_ra"
print null2_ra
Note: The transpose was introduced after reading through some tutorials on this and has to do with the nature of SAGE automatically computing the kernel from the left (which in this case yields no result at all).
The problem with this now is: It DOES print me something... But not the right thing.
The output is as follows:
sage: load stochiometric.py
[ 1 0 0 0 -1 -1 -1 0 0 0]
[ 0 1 0 0 1 0 0 -1 -1 0]
[ 0 0 0 0 0 1 0 1 0 -1]
[ 0 0 0 0 0 0 1 0 0 -1]
[ 0 0 0 -1 0 0 0 0 0 1]
[ 0 0 -1 0 0 0 0 0 1 1]
nullr2_or
Free module of degree 10 and rank 4 over Integer Ring
Echelon basis matrix:
[ 1 0 0 1 0 0 1 1 -1 1]
[ 0 1 0 1 0 -1 1 2 -1 1]
[ 0 0 1 -1 0 1 -1 -2 2 -1]
[ 0 0 0 0 1 -1 0 1 0 0]
nullr2_ra
[
(1, 0, 0, 1, 0, 0, 1, 1, -1, 1),
(0, 1, 0, 1, 0, -1, 1, 2, -1, 1),
(0, 0, 1, -1, 0, 1, -1, -2, 2, -1),
(0, 0, 0, 0, 1, -1, 0, 1, 0, 0)
]
Upon closer inspection, you can see that the resulting kernel matrix (nullspace) looks similar, but is not the same.
Does anyone know what I need to do to get the same result as in Matlab and, if possible, how to obtain the orthonormal result (called K_nuo in Matlab)?
I have tried to look through the tutorials, documentation etc., but so far, no luck.
There might be a way do this with SAGE builtin functions; I'm not sure.
However, if a numpy/python-based solution will do, then:
import numpy as np
def null(A, eps=1e-15):
    """
    http://mail.scipy.org/pipermail/scipy-user/2005-June/004650.html
    """
    u, s, vh = np.linalg.svd(A)
    n = A.shape[1]  # the number of columns of A
    if len(s) < n:
        expanded_s = np.zeros(n, dtype=s.dtype)
        expanded_s[:len(s)] = s
        s = expanded_s
    null_mask = (s <= eps)
    null_space = np.compress(null_mask, vh, axis=0)
    return np.transpose(null_space)
st1 = np.matrix([
[ 1, 0, 0, 0,-1,-1,-1, 0, 0, 0],
[ 0, 1, 0, 0, 1, 0, 0,-1,-1, 0],
[ 0, 0, 0, 0, 0, 1, 0, 1, 0,-1],
[ 0, 0, 0, 0, 0, 0, 1, 0, 0,-1],
[ 0, 0, 0,-1, 0, 0, 0, 0, 0, 1],
[ 0, 0,-1, 0, 0, 0, 0, 0, 1, 1]])
K = null(st1)
print(K)
yields the orthonormal null space:
[[ 0.59330559 0.13320203 0.30701044 -0.32180406]
[-0.09297005 0.04333798 0.20286425 0.71195719]
[ 0.14147329 0.00837169 0.5718718 0.22197807]
[ 0.35886225 0.16816832 -0.06199711 0.16817506]
[-0.16275558 0.45177747 0.33887617 -0.46165922]
[ 0.39719892 -0.48674377 0.03013138 -0.0283199 ]
[ 0.35886225 0.16816832 -0.06199711 0.16817506]
[-0.03833668 0.65491209 -0.09212849 0.19649496]
[-0.21738895 -0.15979664 0.63386891 0.05380301]
[ 0.35886225 0.16816832 -0.06199711 0.16817506]]
this confirms the columns have the null space property:
print(np.allclose(st1*K, 0))
# True
and this confirms that K is orthonormal:
print(np.allclose(K.T*K, np.eye(4)))
# True
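In more recent SciPy versions (1.1 and later), this SVD-based construction ships as scipy.linalg.null_space, so the helper above can be replaced by a single call:

```python
import numpy as np
from scipy.linalg import null_space

st1 = np.array([
    [ 1, 0, 0, 0,-1,-1,-1, 0, 0, 0],
    [ 0, 1, 0, 0, 1, 0, 0,-1,-1, 0],
    [ 0, 0, 0, 0, 0, 1, 0, 1, 0,-1],
    [ 0, 0, 0, 0, 0, 0, 1, 0, 0,-1],
    [ 0, 0, 0,-1, 0, 0, 0, 0, 0, 1],
    [ 0, 0,-1, 0, 0, 0, 0, 0, 1, 1]])

# Orthonormal basis of the null space; columns span the same space
# as Matlab's null(N).
K = null_space(st1)
```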
Something like this should work:
sage: st1= matrix([
[ 1, 0, 0, 0,-1,-1,-1, 0, 0, 0],
[ 0, 1, 0, 0, 1, 0, 0,-1,-1, 0],
[ 0, 0, 0, 0, 0, 1, 0, 1, 0,-1],
[ 0, 0, 0, 0, 0, 0, 1, 0, 0,-1],
[ 0, 0, 0,-1, 0, 0, 0, 0, 0, 1],
[ 0, 0,-1, 0, 0, 0, 0, 0, 1, 1]])
sage: K = st1.right_kernel(); K
Free module of degree 10 and rank 4 over Integer Ring
Echelon basis matrix:
[ 1 0 0 1 0 0 1 1 -1 1]
[ 0 1 0 1 0 -1 1 2 -1 1]
[ 0 0 1 -1 0 1 -1 -2 2 -1]
[ 0 0 0 0 1 -1 0 1 0 0]
sage: M = K.basis_matrix()
The gram_schmidt method gives a pair of matrices. Type M.gram_schmidt? to see the documentation.
sage: M.gram_schmidt() # rows are orthogonal, not orthonormal
(
[ 1 0 0 1 0 0 1 1 -1 1]
[ -1 1 0 0 0 -1 0 1 0 0]
[ 5/12 3/4 1 1/6 0 1/4 1/6 -1/12 5/6 1/6]
[ 12/31 -25/62 4/31 -9/62 1 -29/62 -9/62 10/31 17/62 -9/62],
[ 1 0 0 0]
[ 1 1 0 0]
[ -7/6 -3/4 1 0]
[ 1/6 1/2 -4/31 1]
)
sage: M.gram_schmidt()[0] # rows are orthogonal, not orthonormal
[ 1 0 0 1 0 0 1 1 -1 1]
[ -1 1 0 0 0 -1 0 1 0 0]
[ 5/12 3/4 1 1/6 0 1/4 1/6 -1/12 5/6 1/6]
[ 12/31 -25/62 4/31 -9/62 1 -29/62 -9/62 10/31 17/62 -9/62]
sage: M.change_ring(RDF).gram_schmidt()[0] # orthonormal
[ 0.408248290464 0.0 0.0 0.408248290464 0.0 0.0 0.408248290464 0.408248290464 -0.408248290464 0.408248290464]
[ -0.5 0.5 0.0 0.0 0.0 -0.5 0.0 0.5 0.0 0.0]
[ 0.259237923683 0.466628262629 0.622171016838 0.103695169473 0.0 0.15554275421 0.103695169473 -0.0518475847365 0.518475847365 0.103695169473]
[ 0.289303646409 -0.30135796501 0.0964345488031 -0.108488867403 0.747367753224 -0.349575239411 -0.108488867403 0.241086372008 0.204923416206 -0.108488867403]
The matrix st1 has integer entries, so Sage treats it as a matrix of integers, and tries to do as much as possible with integer arithmetic, and failing that, rational arithmetic. Because of this, Gram-Schmidt orthonormalization will fail, since it involves taking square roots. This is why the method change_ring(RDF) is there: RDF stands for Real Double Field. You could instead just change one entry of st1 from 1 to 1.0, and then it will treat st1 as a matrix over RDF from the start and you won't need to do this change_ring anywhere.
To expand on John's great answer, I think you just have two different bases for the same vector space. Note his use of right_kernel.
sage: st1= matrix([
....: [ 1, 0, 0, 0,-1,-1,-1, 0, 0, 0],
....: [ 0, 1, 0, 0, 1, 0, 0,-1,-1, 0],
....: [ 0, 0, 0, 0, 0, 1, 0, 1, 0,-1],
....: [ 0, 0, 0, 0, 0, 0, 1, 0, 0,-1],
....: [ 0, 0, 0,-1, 0, 0, 0, 0, 0, 1],
....: [ 0, 0,-1, 0, 0, 0, 0, 0, 1, 1]])
sage: st2 = matrix([[1,-1, 0, 2],
....: [-1, 1, 1, 0],
....: [ 0, 0, 1, 1],
....: [ 0, 0, 0, 1],
....: [ 1, 0, 0, 0],
....: [ 0,-1, 0, 1],
....: [ 0, 0, 0, 1],
....: [ 0, 1, 0, 0],
....: [ 0, 0, 1, 0],
....: [ 0, 0, 0, 1]])
sage: st2 = st2.transpose()
sage: st2
[ 1 -1 0 0 1 0 0 0 0 0]
[-1 1 0 0 0 -1 0 1 0 0]
[ 0 1 1 0 0 0 0 0 1 0]
[ 2 0 1 1 0 1 1 0 0 1]
sage: st1.right_kernel()
Free module of degree 10 and rank 4 over Integer Ring
Echelon basis matrix:
[ 1 0 0 1 0 0 1 1 -1 1]
[ 0 1 0 1 0 -1 1 2 -1 1]
[ 0 0 1 -1 0 1 -1 -2 2 -1]
[ 0 0 0 0 1 -1 0 1 0 0]
sage: st2.row_space()
Free module of degree 10 and rank 4 over Integer Ring
Echelon basis matrix:
[ 1 0 0 1 0 0 1 1 -1 1]
[ 0 1 0 1 0 -1 1 2 -1 1]
[ 0 0 1 -1 0 1 -1 -2 2 -1]
[ 0 0 0 0 1 -1 0 1 0 0]
Your spaces are the same, just different bases in Sage and Matlab.
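A quick numpy sanity check of that claim: every column of the Matlab rational basis K_nur (copied from the question) is annihilated by N, so both answers describe the same 4-dimensional null space, just in different bases:

```python
import numpy as np

N = np.array([
    [ 1, 0, 0, 0,-1,-1,-1, 0, 0, 0],
    [ 0, 1, 0, 0, 1, 0, 0,-1,-1, 0],
    [ 0, 0, 0, 0, 0, 1, 0, 1, 0,-1],
    [ 0, 0, 0, 0, 0, 0, 1, 0, 0,-1],
    [ 0, 0, 0,-1, 0, 0, 0, 0, 0, 1],
    [ 0, 0,-1, 0, 0, 0, 0, 0, 1, 1]])

K_nur = np.array([
    [ 1,-1, 0, 2],
    [-1, 1, 1, 0],
    [ 0, 0, 1, 1],
    [ 0, 0, 0, 1],
    [ 1, 0, 0, 0],
    [ 0,-1, 0, 1],
    [ 0, 0, 0, 1],
    [ 0, 1, 0, 0],
    [ 0, 0, 1, 0],
    [ 0, 0, 0, 1]])

# N @ K_nur is the 6x4 zero matrix, and K_nur has full column rank,
# so its columns form another basis of the same rank-4 null space.
print(np.count_nonzero(N @ K_nur))  # 0
```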