Generate Parity-check matrix from Generator matrix - python

Is there a function in NumPy, or some other well-tested library, to calculate a parity-check matrix (https://en.wikipedia.org/wiki/Parity-check_matrix) from a generator matrix?
P.S. I did not find a solution on this site.

If my understanding is correct, the parity-check matrix is the null space of the generator matrix modulo 2. SciPy has a null-space routine, but it returns a non-integer (floating-point) null space. You can use SymPy instead, but it can be slow for big matrices.
"""
>>> np.set_string_function(str)
>>> h
[[0 1 1 1 1 0 0]
[1 0 1 1 0 1 0]
[1 1 0 1 0 0 1]]
>>> (g # h.T) % 2
[[0 0 0]
[0 0 0]
[0 0 0]
[0 0 0]]
"""
import sympy
import numpy as np
g = np.array([[1, 1, 1, 0, 0, 0, 0],
[1, 0, 0, 1, 1, 0, 0],
[0, 1, 0, 1, 0, 1, 0],
[1, 1, 0, 1, 0, 0, 1]])
h = np.array(sympy.Matrix(g).nullspace()) % 2
where h is parity-check matrix.
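If SymPy is too slow for big matrices, plain Gaussian elimination over GF(2) in NumPy avoids rational arithmetic entirely. Below is a minimal sketch (the function name gf2_nullspace is mine, not a library routine) that row-reduces g modulo 2 and reads a null-space basis off the free columns; for the g above it reproduces h:
import numpy as np

def gf2_nullspace(G):
    """Null-space basis of G over GF(2), via row reduction mod 2."""
    R = np.array(G, dtype=np.uint8) % 2
    k, n = R.shape
    pivot_cols = []
    row = 0
    for col in range(n):
        hits = np.nonzero(R[row:, col])[0]  # find a 1 at or below `row`
        if len(hits) == 0:
            continue                        # no pivot: `col` is a free column
        R[[row, row + hits[0]]] = R[[row + hits[0], row]]  # swap pivot row up
        for r in range(k):                  # clear the column elsewhere (XOR = subtraction mod 2)
            if r != row and R[r, col]:
                R[r] ^= R[row]
        pivot_cols.append(col)
        row += 1
        if row == k:
            break
    # one basis vector per free column: set it to 1, read the pivots off the RREF
    basis = []
    for fc in (c for c in range(n) if c not in pivot_cols):
        v = np.zeros(n, dtype=np.uint8)
        v[fc] = 1
        for r, pc in enumerate(pivot_cols):
            v[pc] = R[r, fc]
        basis.append(v)
    return np.array(basis)

h = gf2_nullspace(g)
assert not ((g @ h.T) % 2).any()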

Related

All possible combinations in a binary image

I'm trying to create all possible combinations of 0 and 1 in an array of shape (n, 10). For example, given an arbitrary combination like np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 0]), how can I generate all possible combinations (which will result in 2^10 = 1024 arrays)?
Yes, you can use itertools.product() with the repeat parameter to generate the desired output:
import numpy as np
from itertools import product
np.array(list(product([0, 1], repeat=10)))
This outputs:
[[0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 1]
 [0 0 0 ... 0 1 0]
 ...
 [1 1 1 ... 1 0 1]
 [1 1 1 ... 1 1 0]
 [1 1 1 ... 1 1 1]]
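An equivalent NumPy-only construction (a sketch of mine, avoiding itertools) reads the same 1024 rows directly off the binary representations of 0..1023:
import numpy as np

n = 10
# shift each number right by 9, 8, ..., 0 bits and mask the lowest bit (MSB first)
combos = (np.arange(2 ** n)[:, None] >> np.arange(n - 1, -1, -1)) & 1
print(combos.shape)  # (1024, 10), same row order as the product() version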
You can also use permutations from the itertools module, though note that this rearranges the elements of the given array rather than enumerating all 2^10 combinations, and equal elements are treated as distinct, so it yields many duplicate arrangements:
import numpy as np
import itertools

arr = np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 0])  # renamed from `list` to avoid shadowing the built-in
combs = itertools.permutations(arr)
for i in combs:
    print(i)
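If you go the permutation route anyway, the duplicates can be stripped; a small sketch (iterating all 10! = 3,628,800 tuples takes a moment, so this is only workable for short arrays):
import numpy as np
from itertools import permutations

arr = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
# a set drops duplicate arrangements; C(10, 4) = 210 distinct ones remain
unique_perms = np.array(sorted(set(permutations(arr))))
print(unique_perms.shape)  # (210, 10)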

Efficient and Pythonic way to calculate Euclidean distance to the nearest nonzero element, for each nonzero element in NumPy 2D array

I have a two-dimensional NumPy array of shape M × N with many values set to 0 and others ≠ 0.
The following is an example of the aforesaid matrix:
A = np.array([[0, 0, 0, 1, 0, 2, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 3, 0], [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 6, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0]])
And here it is nicely formatted:
A = [[0 0 0 1 0 2 0 0]
     [0 0 0 0 0 0 0 0]
     [0 0 0 0 1 0 0 0]
     [0 1 0 0 0 0 0 0]
     [0 0 0 1 0 0 3 0]
     [1 0 0 0 0 0 0 0]
     [0 0 0 0 6 0 0 0]
     [0 0 1 0 0 0 0 0]]
My task is to find, for each nonzero element (e.g. 1, 2, 1, 1, 1, 3, 1, 6 and 1) in the 2D array (A), the distance to the nearest nonzero element (except itself) by means of Euclidean distance, and then create a list (L) with the calculated distances.
The following invariant must hold:
if np.count_nonzero(A) < 2:
    assert len(L) == 0
else:
    assert np.count_nonzero(A) == len(L)
The calculations for array A are the following:
Nearest nonzero element for A[0, 3] = 1 is A[0, 5] = 2 at distance = 2
Nearest nonzero element for A[0, 5] = 2 is A[0, 3] = 1 at distance = 2
Nearest nonzero element for A[2, 4] = 1 is A[0, 3] = 1 at distance = 2.24
Nearest nonzero element for A[3, 1] = 1 is A[4, 3] = 1 at distance = 2.24
Nearest nonzero element for A[4, 3] = 1 is A[2, 4] = 1 at distance = 2.24
Nearest nonzero element for A[4, 6] = 3 is A[2, 4] = 1 at distance = 2.83
Nearest nonzero element for A[5, 0] = 1 is A[3, 1] = 1 at distance = 2.24
Nearest nonzero element for A[6, 4] = 6 is A[4, 3] = 1 at distance = 2.24
Nearest nonzero element for A[7, 2] = 1 is A[6, 4] = 6 at distance = 2.24
The list L is then L = [2, 2, 2.24, 2.24, 2.24, 2.83, 2.24, 2.24, 2.24].
I wrote the following code to solve the problem, and I think it works correctly, but it has two problems: it's a naive, brute-force, non-vectorized solution with 𝒪(M² × N²) time complexity, and it's not very clear, concise, or succinct; that is, it's not Pythonic.
import math
import numpy as np
import scipy.spatial

def get_distance_list(A):
    L = []
    for (m, n), a_mn in np.ndenumerate(A):
        # skip this element if its value is 0
        if a_mn == 0:
            continue
        d_min = math.inf
        for (k, l), a_kl in np.ndenumerate(A):
            # skip this element if its value is 0 or if it's me
            if a_kl == 0 or (m, n) == (k, l):
                continue
            d = scipy.spatial.distance.euclidean((m, n), (k, l))
            d_min = min(d_min, d)
        # in case there are less than two nonzero values in the matrix,
        # the returned list must be empty, so only add the distance
        # if it's different than the default value of +inf
        if d_min != math.inf:
            L.append(d_min)
    return L
Do you know if there is a built-in function (maybe in NumPy, SciPy, SciKit, etc.) which can replace the one I wrote, or if there is a faster/vectorized and more Pythonic way to solve the problem?
I think using scipy.spatial.KDTree is perfect for this.
import numpy as np
from scipy.spatial import KDTree

nonzeros = np.transpose(np.nonzero(A))
t = KDTree(nonzeros)
dists, nns = t.query(nonzeros, 2)
for (i, j), d in zip(nns, dists[:, 1]):
    print(nonzeros[i], "is closest to", nonzeros[j], "with distance", d)
Result:
[0 3] is closest to [0 5] with distance 2.0
[0 5] is closest to [0 3] with distance 2.0
[2 4] is closest to [0 5] with distance 2.23606797749979
[3 1] is closest to [4 3] with distance 2.23606797749979
[4 3] is closest to [3 1] with distance 2.23606797749979
[4 6] is closest to [2 4] with distance 2.8284271247461903
[5 0] is closest to [3 1] with distance 2.23606797749979
[6 4] is closest to [4 3] with distance 2.23606797749979
[7 2] is closest to [6 4] with distance 2.23606797749979
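The requested list L is then just the second column of dists (the first column is each point's zero distance to itself). One caveat of my own: query(nonzeros, 2) needs at least two points, so a guard for the degenerate case seems warranted:
L = dists[:, 1].tolist() if len(nonzeros) >= 2 else []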
This uses NumPy; there could well be other functions that streamline it.
import numpy as np

A = np.array([[0, 0, 0, 1, 0, 2, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0, 0, 3, 0],
              [1, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 6, 0, 0, 0],
              [0, 0, 1, 0, 0, 0, 0, 0]])

x, y = np.where(A != 0)
print(x, '\n', y)
# [0 0 2 3 4 4 5 6 7] # x coords
# [3 5 4 1 3 6 0 4 2] # y coords

diff_x = np.subtract.outer(x, x)  # differences of all x from each x
diff_y = np.subtract.outer(y, y)  # all y from each y
distance = np.sqrt(diff_x * diff_x + diff_y * diff_y)

# Or using complex numbers (np.complex is removed in recent NumPy; use 1j):
# point = x + y * 1j                               # x and y in the complex plane
# distance = abs(np.subtract.outer(point, point))  # Euclidean distances point to all points

distance[distance == 0] = distance.max()  # or use np.diagonal to remove zeroes
ind = np.argmin(distance, axis=1)  # indices of minimums
for i, ix in enumerate(ind):
    print(x[i], y[i], 'close to', x[ix], y[ix], 'distance = ', distance[i, ix])
This produces:
0 3 close to 0 5 distance = 2.0
0 5 close to 0 3 distance = 2.0
2 4 close to 0 3 distance = 2.23606797749979
3 1 close to 4 3 distance = 2.23606797749979
4 3 close to 2 4 distance = 2.23606797749979
4 6 close to 2 4 distance = 2.8284271247461903
5 0 close to 3 1 distance = 2.23606797749979
6 4 close to 4 3 distance = 2.23606797749979
7 2 close to 6 4 distance = 2.23606797749979
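A small design note on the distance[distance == 0] = distance.max() line: since distinct nonzero cells never coincide, the only zeros in distance sit on the diagonal, so they can equally be masked out explicitly (a minor variant, not from the original answer):
np.fill_diagonal(distance, np.inf)  # exclude each point's distance to itself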

Python - numpy arrays - Abelian sandpile

I'm trying to implement the Abelian sandpile model using a simple NumPy array.
When a 'pile' is >= 4, it collapses onto its neighbors.
I understand how the "gravity" thing works, but I can't think of a way to implement it.
Here's the code to make my array:
import numpy as np
spile = np.zeros((5, 5), dtype=np.uint32)
spile[2, 2] = 16
Which gives me the following :
array([[ 0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0],
       [ 0,  0, 16,  0,  0],
       [ 0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0]], dtype=uint32)
Now, I need the "gravity" code that performs these calculation steps:
array([[0, 0, 0, 0, 0],
       [0, 0, 4, 0, 0],
       [0, 4, 0, 4, 0],
       [0, 0, 4, 0, 0],
       [0, 0, 0, 0, 0]], dtype=uint32)
array([[0, 0, 1, 0, 0],
       [0, 2, 1, 2, 0],
       [1, 1, 0, 1, 1],
       [0, 2, 1, 2, 0],
       [0, 0, 1, 0, 0]], dtype=uint32)
The last array is the final result I'm trying to get.
I'm not trying to make you guys code for me; I just need some ideas, as I've never done anything like this before (but feel free to provide code if you're that kind :p).
Use np.divmod to identify where the cells tumble and how much tumbles. Then use array slicing to shift the amounts tumbled and add back into the sandpile.
import numpy as np

spile = np.zeros((5, 5), dtype=np.uint32)
spile[2, 2] = 16

def do_add(spile, tumbled):
    """ Updates spile in place """
    spile[:-1, :] += tumbled[1:, :]   # Shift N and add
    spile[1:, :] += tumbled[:-1, :]   # Shift S
    spile[:, :-1] += tumbled[:, 1:]   # Shift W
    spile[:, 1:] += tumbled[:, :-1]   # Shift E

def tumble(spile):
    while (spile > 3).any():
        tumbled, spile = np.divmod(spile, 4)
        do_add(spile, tumbled)
        # print(spile, '\n')  # Uncomment to print steps
    return spile

print(tumble(spile))
# or tumble(spile); print(spile)
# [[0 0 1 0 0]
#  [0 2 1 2 0]
#  [1 1 0 1 1]
#  [0 2 1 2 0]
#  [0 0 1 0 0]]
Uncommenting the print statement prints these intermediate steps:
[[0 0 0 0 0]
 [0 0 4 0 0]
 [0 4 0 4 0]
 [0 0 4 0 0]
 [0 0 0 0 0]]
[[0 0 1 0 0]
 [0 2 0 2 0]
 [1 0 4 0 1]
 [0 2 0 2 0]
 [0 0 1 0 0]]
[[0 0 1 0 0]
 [0 2 1 2 0]
 [1 1 0 1 1]
 [0 2 1 2 0]
 [0 0 1 0 0]]
http://rosettacode.org/wiki/Abelian_sandpile_model
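An equivalent formulation (a sketch of my own, not from the answer above) replaces the four shift-and-add slices with a single 2D convolution against a cross-shaped kernel; grains falling off the edges are still lost, matching the slicing version:
import numpy as np
from scipy.signal import convolve2d

# each toppling cell sends one grain to each of its four neighbours
KERNEL = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]])

def tumble_conv(spile):
    spile = spile.astype(int)  # plain ints keep the integer convolution exact
    while (spile > 3).any():
        tumbled, spile = np.divmod(spile, 4)
        spile += convolve2d(tumbled, KERNEL, mode='same')
    return spile

spile = np.zeros((5, 5), dtype=int)
spile[2, 2] = 16
print(tumble_conv(spile))  # same final configuration as tumble()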

How to convert a boolean array into a matrix?

I am a beginner, and I want to know whether it is possible to convert a boolean array into a matrix in NumPy.
For example, we have a boolean array a like this:
a = [[False],
     [True],
     [True],
     [False],
     [True]]
And, we turn it into the following matrix:
m = [[0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 1]]
That is, I want the array to become the diagonal of the matrix.
You can use np.diagflat which creates a two-dimensional array with the flattened input as a diagonal:
np.diagflat(np.array(a, dtype=int))
#[[0 0 0 0 0]
# [0 1 0 0 0]
# [0 0 1 0 0]
# [0 0 0 0 0]
# [0 0 0 0 1]]
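Since NumPy casts booleans to integers automatically, the dtype conversion can equally happen after building the matrix; a tiny equivalent variant:
import numpy as np

a = [[False], [True], [True], [False], [True]]
m = np.diagflat(a).astype(int)  # boolean diagonal matrix, then True/False -> 1/0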

Do you have any advice about signal processing on binary time series?

I have a binary time series with some ASK-modulated signals at different frequencies inside of it.
Let's say it's something like this: x = [0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0, ...]
What matters to me is having all the '1' and '0' in intervals of 4 samples or more, but sometimes the '0' and '1' change places, like x1 = [0,0,0,1,1,1,1,1] when it should be x2 = [0,0,0,0,1,1,1,1].
There are also noise spikes, as in n1 = [0,0,0,0,0,0,1,1,0,0,0,0,0], which should be only zeros.
I've already tried a moving average, but it introduced a lag to the signal that wasn't acceptable for my application.
Do you have any advice about signal processing on binary time series?
The following code finds the indices of all continuous sequences shorter than 4 samples (min_cont_length). It also gives you the lengths of the problematic sectors, so you can decide how to handle them.
import numpy as np

def find_index_of_err(signal, min_cont_length=4):
    # pad sides to detect problems at the edges
    signal = np.concatenate(([1 - signal[0]], signal, [1 - signal[-1]]))
    # calculate differences from 1 element to the next
    delta = np.concatenate(([0], np.diff(signal, 1)))
    # detect discontinuities
    discontinuity = np.where(delta != 0)[0]
    # select discontinuities with matching length (< min_cont_length)
    err_idx = discontinuity[:-1][np.diff(discontinuity) < min_cont_length] - 1
    # get also the size of the gap
    err_val = np.diff(discontinuity)[np.argwhere(np.diff(discontinuity) < min_cont_length).flatten()]
    return err_idx, err_val
# some test signals
signals = np.array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]])

for sig in signals:
    index, value = find_index_of_err(sig)
    print(sig, index, value)
# Output:
# [1 0 0 0 0 0 0 0 0 0 0] [0] [1]
# [0 0 1 0 0 0 0 0 0 0 0] [0 2] [2 1]
# [0 0 0 0 1 0 0 0 0 0 0] [4] [1]
# [0 0 0 0 0 0 1 1 0 0 0] [6 8] [2 3]
# [0 0 0 0 0 0 1 1 1 1 1] [] []
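One possible way to act on those indices (my own sketch, not part of the answer above): flip each flagged run to the opposite value so it merges with its neighbours, which removes isolated spikes like n1. Note that the edge padding in find_index_of_err also flags short runs at the boundaries, which may or may not be real errors in your data:
def remove_short_runs(signal, min_cont_length=4):
    signal = np.asarray(signal).copy()
    err_idx, err_val = find_index_of_err(signal, min_cont_length)
    for start, length in zip(err_idx, err_val):
        # invert the short run in place
        signal[start:start + length] = 1 - signal[start]
    return signal

n1 = np.array([0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
print(remove_short_runs(n1))  # [0 0 0 0 0 0 0 0 0 0 0 0 0]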
