I am trying to implement the Hill cipher in Python, but I have run into some problems.
I am working with matrices whose coefficients are integers modulo 26.
How do I write a program that calculates the determinant of such a matrix?
And a second question:
How do I find the inverse of such a matrix (if it exists)?
Regards
If you can use Sagemath (run your code in Sage or import Sage into Python), you can use:
M = Matrix(Zmod(26), your_numpy_matrix)
determinant = M.det()
inverse = M.inverse()
In principle you can compute the whole integer determinant and then apply the modulo, but for larger matrices the intermediate values blow up and this leads to problems.
I tried sympy, but did not manage to get a working solution for larger dimensions.
import sympy

M = sympy.Matrix(your_numpy_matrix)
determinant = M.det() % 26   # huge intermediate results / crashes for larger matrices
inverse = M.inv_mod(26)
numpy will not work here, as you will run into floating-point/numerical problems.
PS: I am aware the question is old, but I wanted to document possible solutions.
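For completeness, here is a minimal self-contained sketch of the sympy route on a small key matrix (the 2x2 matrix below is just an illustration, not taken from the question):

import sympy

# a small Hill-cipher key matrix; entries are treated as integers mod 26
K = sympy.Matrix([[3, 3],
                  [2, 5]])

det_mod26 = K.det() % 26   # determinant reduced mod 26
K_inv = K.inv_mod(26)      # inverse over Z/26Z; raises an error if det shares a factor with 26

print(det_mod26)           # 9
print(K_inv)               # Matrix([[15, 17], [20, 9]])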
Consider a case where, given an MxM matrix A and a vector b, I want to solve something of the form inv(A @ A.T) @ b (where I know A is invertible).
As far as I know, it is always faster to use solve_* rather than inv. There are also variants for solving more efficiently with PSD matrices (which A @ A.T must be), using a Cholesky factorization.
My question: since I'm constructing the matrix A @ A.T just to immediately throw it away, is there a more specialized procedure for solving linear equations with the Gram matrix of A without having to construct it?
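For concreteness, the baseline I would like to avoid looks roughly like this (toy sizes, only to fix notation; cho_factor/cho_solve already replace the explicit inv):

import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))   # stand-in for my invertible MxM matrix
b = rng.standard_normal(6)

G = A @ A.T                       # the Gram matrix I would rather not build
c, low = cho_factor(G)            # Cholesky factorization (G is symmetric positive definite)
x = cho_solve((c, low), b)        # solves G x = b without ever calling inv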
You can compute the factorization of A and then use that to solve your system.
Assume we want to solve
A A^T x = b
for x.
Compute the factorization A = LU.
Writing y = A^T x, first solve Ay = b for y.
Then solve A^T x = y for x.
This way you don't have to form the matrix A A^T at all.
Note that once you have a factorization A = LU, you can solve Ax = b as well as A^T x = b efficiently.
This is because A^T = U^T L^T, which is again a product of a lower and an upper triangular matrix.
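A minimal sketch of this idea with scipy's LU routines (the trans=1 flag reuses the same factorization for the transposed solve):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
M = 6
A = rng.standard_normal((M, M))       # assumed invertible
b = rng.standard_normal(M)

lu, piv = lu_factor(A)                # factor A once
y = lu_solve((lu, piv), b)            # solve A y = b
x = lu_solve((lu, piv), y, trans=1)   # solve A^T x = y with the same factors

print(np.allclose(A @ A.T @ x, b))    # True: x solves (A A^T) x = b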
I am trying to apply Colamd (column approximate minimum degree permutation) to a scipy sparse matrix.
An incomplete solution can be found in the scipy docs:
from scipy.sparse import csc_matrix, linalg as sla

A = csc_matrix([[1, 2, 0, 4], [1, 0, 0, 1], [1, 0, 2, 1], [2, 2, 1, 0.]])
lu = sla.splu(A, permc_spec='COLAMD')
print(lu.perm_c)
However, if A is a singular matrix like the one below, scipy raises RuntimeError: Factor is exactly singular.
A = csc_matrix([[0,1,0], [0,2,1], [0,3,0]])
To sum up, I am looking for colamd code or an algorithm in Python that works like MATLAB's colamd(S) does (i.e. also with a singular matrix).
I cannot call MATLAB code from Python.
Thank you
Not sure if this will give you the exact same solution (since it's not clear to me how stable AMD/COLAMD is), but I was able to get a solution by adding a small epsilon times the identity matrix to make A non-singular. As you make epsilon small (e.g. machine epsilon), I expect to get some reasonable solution.
import numpy as np
from scipy.sparse import csc_matrix, linalg as sla

epsilon = 1e-6
A = csc_matrix([[0, 1, 0], [0, 2, 1], [0, 3, 0]]) + epsilon * csc_matrix(np.eye(3))
lu = sla.splu(A, permc_spec='COLAMD')
print(lu.perm_c)
> [0 1 2]
(Note: for your example problem, pretty much any positive epsilon other than 1.0 gave this permutation.)
I am a newbie when it comes to using Python libraries for numerical tasks. I was reading a paper on LexRank and wanted to know how to compute the eigenvectors of a transition matrix. I used the eigvals function and got a result that I have a hard time interpreting:
import numpy
from numpy import linalg as LA

a = numpy.array([[0.333, 0.333, 0.0,   0.333],
                 [0.25,  0.25,  0.25,  0.25 ],
                 [0.5,   0.0,   0.0,   0.5  ],
                 [0.0,   0.333, 0.333, 0.333]])
print(LA.eigvals(a))
and the eigenvalues are:
[ 0.99943032+0.j
-0.13278637+0.24189178j
-0.13278637-0.24189178j
0.18214242+0.j ]
Can anyone please explain what j is doing here? Isn't an eigenvalue supposed to be a scalar quantity? How should I interpret this result?
j is the imaginary unit, the square root of minus one. In math it is often denoted by i; in engineering, and in Python, it is denoted by j.
A single eigenvalue is a scalar quantity, but an (m, m) matrix has m eigenvalues (and m eigenvectors). The Wikipedia page on eigenvalues and eigenvectors has some examples that might help you get your head around the concepts.
As @unutbu mentions, j denotes the imaginary unit in Python. In general, a matrix may have complex eigenvalues (i.e. with real and imaginary components) even if it contains only real values (see here, for example). Symmetric real-valued matrices are an exception, in that they are guaranteed to have only real eigenvalues.
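Since the question comes from LexRank, the vector of interest is usually the stationary distribution, i.e. the left eigenvector belonging to the eigenvalue closest to 1. A rough sketch (reusing the matrix a from the question; the rows only sum to roughly 1 because of the 0.333 rounding):

import numpy as np

a = np.array([[0.333, 0.333, 0.0,   0.333],
              [0.25,  0.25,  0.25,  0.25 ],
              [0.5,   0.0,   0.0,   0.5  ],
              [0.0,   0.333, 0.333, 0.333]])

vals, vecs = np.linalg.eig(a.T)        # left eigenvectors of a
idx = np.argmin(np.abs(vals - 1.0))    # the eigenvalue closest to 1 (about 0.9994 here)
stationary = np.real(vecs[:, idx])
stationary /= stationary.sum()         # normalize to a probability vector
print(stationary)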
In my attempt to perform a Cholesky decomposition of a variance-covariance matrix for a 2D array with periodic boundary conditions, I always get LinAlgError: Matrix is not positive definite - Cholesky decomposition cannot be computed under certain parameter combinations. I am not sure whether it is a numpy.linalg issue or an implementation issue, as the script is straightforward:
import numpy as np

sigma = 3.
U = 4

def FromListToGrid(l_):
    i = np.floor(l_/U)
    j = l_ - i*U
    return np.array((i, j))

Ulist = range(U**2)
Cov = []
for l in Ulist:
    di = np.array([np.abs(FromListToGrid(l)[0] - FromListToGrid(i)[0]) for i, x in enumerate(Ulist)])
    di = np.minimum(di, U - di)
    dj = np.array([np.abs(FromListToGrid(l)[1] - FromListToGrid(i)[1]) for i, x in enumerate(Ulist)])
    dj = np.minimum(dj, U - dj)
    d = np.sqrt(di**2 + dj**2)
    Cov.append(np.exp(-d/sigma))
Cov = np.vstack(Cov)
W = np.linalg.cholesky(Cov)
Attempts to remove potential singularities also failed to resolve the problem. Any help is much appreciated.
Digging a bit deeper into the problem, I tried printing the eigenvalues of the Cov matrix.
print(np.linalg.eigvalsh(Cov))
And the answer turns out to be this
[-0.0801339 -0.0801339 0.12653595 0.12653595 0.12653595 0.12653595 0.14847999 0.36269785 0.36269785 0.36269785 0.36269785 1.09439988 1.09439988 1.09439988 1.09439988 9.6772531 ]
Aha! Notice the first two negative eigenvalues? A (symmetric) matrix is positive definite if and only if all its eigenvalues are positive. So the problem with the matrix is not that it is close to 'zero', but that it is 'negative'. To extend @duffymo's analogy, this is the linear-algebra equivalent of trying to take the square root of a negative number.
Now, let's try to perform the same operation, but this time with scipy.
scipy.linalg.cholesky(Cov, lower=True)
And that fails with a somewhat more specific message:
numpy.linalg.linalg.LinAlgError: 12-th leading minor not positive definite
That tells you a bit more (though I couldn't really say why it complains about the 12-th leading minor).
Bottom line: the matrix is not 'close to zero', it is genuinely 'negative' (indefinite).
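If you just need some usable factor for sampling, one common workaround (a heuristic sketch, not part of the diagnosis above) is to clip the negative eigenvalues and add a tiny diagonal jitter before calling cholesky:

import numpy as np

def nearest_psd_cholesky(C, jitter=1e-10):
    """Symmetrize C, clip negative eigenvalues to zero, add a small diagonal
    jitter, and factor the repaired matrix. This is an approximation of C."""
    C = (C + C.T) / 2.0
    vals, vecs = np.linalg.eigh(C)
    C_psd = (vecs * np.clip(vals, 0.0, None)) @ vecs.T
    return np.linalg.cholesky(C_psd + jitter * np.eye(C.shape[0]))

# usage with the Cov matrix built in the question:
# W = nearest_psd_cholesky(Cov)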
The problem is the data you're feeding to it. The matrix is singular, according to the solver. That means a zero or near-zero diagonal element, so inversion is impossible.
It'd be easier to diagnose if you could provide a small version of the matrix.
Zero diagonals aren't the only way to create a singularity. If two rows are proportional to each other, then you don't need both in the solution; they're redundant. It's more complex than just looking for zeroes on the diagonal.
If your matrix is correct, you have a non-empty null space. You'll need to change algorithms to something like SVD.
See my comment below.
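A minimal sketch of the SVD-based route, with a deliberately rank-deficient placeholder matrix (np.linalg.lstsq uses an SVD internally and returns a minimum-norm least-squares solution):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # proportional to the first row, so A is singular
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(rank)   # 2, confirming the non-empty null space
print(x)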
I am working with neuroimaging data, and because of the large amount of data, I would like to use sparse matrices in my code (scipy.sparse.lil_matrix or csr_matrix).
In particular, I will need to compute the pseudo-inverse of my matrix to solve a least-square problem.
I have found the method scipy.sparse.linalg.lsqr, but it is not very efficient. Is there a method to compute the Moore-Penrose pseudo-inverse (corresponding to pinv for dense matrices)?
The size of my matrix A is about 600,000 x 2,000, and in every row of the matrix I will have from 0 up to 4 non-zero values. The size of A is given by voxels x fiber bundles (white matter fiber tracts), and we expect a maximum of 4 tracts to cross in a voxel. In most of the white-matter voxels we expect to have at least 1 tract, but I would say that around 20% of the rows could be all zeros.
The vector b is not sparse; b contains the measurement for each voxel, which is in general not zero.
I would need to minimize the error, but there are also some conditions on the vector x. As I tried the model on smaller matrices, I never needed to constrain the system in order to satisfy these conditions (in general 0
Is that of any help? Is there a way to avoid taking the pseudo-inverse of A?
Thanks
Update, 1 June:
Thanks again for the help.
I can't really show you anything about my data, because the code in Python is giving me some problems. However, in order to understand how I could choose a good k, I tried to create a test function in MATLAB.
The code is as follows:
F = zeros(100000, 1000);
for k = 1:150000
    p = rand(1);
    a = 0;
    b = 0;
    while a <= 0 || b <= 0
        a = random('Binomial', 100000, p);
        b = random('Binomial', 1000, p);
    end
    F(a, b) = rand(1);
end
solution = repmat([0.5, 0.5, 0.8, 0.7, 0.9, 0.4, 0.7, 0.7, 0.9, 0.6], 1, 100);
size(solution)
solution = solution';
measure = F*solution;
%check = pinvF*measure;
k = 250;
F = sparse(F);
[U, S, V] = svds(F, k);
s = svds(F, k);
plot(s)
max(max(U*S*V' - F))
for s = 1:k
    if S(s, s) ~= 0
        S(s, s) = 1/S(s, s);
    end
end
inv = V*S'*U';
inv*measure
max(inv*measure - solution)
Do you have any idea of what k should be, compared to the size of F? I have taken 250 (out of 1000) and the results are not satisfactory (the waiting time is acceptable, but not short).
Now I can also compare the results with the known solution, but how could one choose k in general?
I also attached a plot of the 250 singular values that I get and of their normalized squares. I don't know exactly how best to do a scree plot in MATLAB. I am now proceeding with a bigger k to see whether the values suddenly become much smaller.
Thanks again,
Jennifer
You could study more of the alternatives offered in scipy.sparse.linalg.
Anyway, please note that the pseudo-inverse of a sparse matrix is most likely a (very) dense one, so it is not really a fruitful avenue (in general) to follow when solving sparse linear systems.
You may like to describe your particular problem (dot(A, x) = b + e) in a slightly more detailed manner. At least specify:
'typical' size of A
'typical' percentage of nonzero entries in A
least squares implies that norm(e) is minimized, but please indicate whether your main interest is in x_hat or in b_hat, where e = b - b_hat and b_hat = dot(A, x_hat)
Update: If you have some idea of the rank of A (and it is much smaller than the number of columns), you could try the total least squares method. Here is a simple implementation, where k is the number of leading singular values and vectors to use (i.e. the 'effective' rank).
from scipy.sparse import hstack
from scipy.sparse.linalg import svds

def tls(A, b, k=6):
    """A TLS solution of Ax = b, for sparse A."""
    # b should be a column of shape (n, 1) so it can be stacked next to A
    u, s, v = svds(hstack([A, b]), k)
    return v[-1, :-1] / -v[-1, -1]
Regardless of the answer to my comment, I would think you could accomplish this fairly easily using the Moore-Penrose SVD representation: find the SVD with scipy.sparse.linalg.svds, replace Sigma by its pseudo-inverse, and then multiply V * Sigma_pinv * U' to find the pseudo-inverse of your original matrix.
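A rough sketch of that recipe with scipy (sizes, density and k below are placeholders; in practice you would apply the factors to b directly rather than forming a dense pseudo-inverse):

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

A = sparse_random(500, 40, density=0.01, format='csr', random_state=0)  # stand-in for the voxel x tract matrix
b = np.random.default_rng(0).standard_normal(500)

k = 20                                     # number of singular triplets to keep
u, s, vt = svds(A, k=k)                    # truncated SVD: A ~ u @ diag(s) @ vt
s_inv = np.where(s > 1e-10, 1.0 / s, 0.0)  # pseudo-invert the singular values

# x = V Sigma^+ U^T b, computed without ever forming the dense pseudo-inverse
x = vt.T @ (s_inv * (u.T @ b))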