I am trying to compute the eigenvalues of a matrix built as the product M^{-1}K.
I know M and K and have initialized them properly. I therefore try to compute the inverse of M:
M_inv = np.linalg.inv(M)
with np.printoptions(threshold=np.inf, precision=10, suppress=True, linewidth=20000):
    print(np.matrix(M_inv * M))
That should print the identity, but I get:
which is clearly not the identity. I need to find the eigenvalues of M_inv * K, but if M_inv is this inaccurate I won't get anything useful. What do I do?
This is the matrix:
And this is my initialization code:
import numpy as np

def mij(i, j, h):
    if i == j:
        return 2.0 * h / 3.0
    else:
        return h / 6.0

def kij(i, j, h):
    if i == j:
        return 2.0 / h
    else:
        return -1.0 / h

n = 500
size = n + 1
h = 1 / n
t = np.linspace(0, 1, n)

# Assemble the tridiagonal matrices M and K
M = np.zeros((n, n))
K = np.zeros((n, n))
for i in range(0, n):
    M[i, i] = mij(i, i, h)
    if i + 1 < n:
        M[i, i+1] = mij(i, i+1, h)
    if i - 1 >= 0:
        M[i, i-1] = mij(i, i-1, h)
    K[i, i] = kij(i, i, h)
    if i + 1 < n:
        K[i, i+1] = kij(i, i+1, h)
    if i - 1 >= 0:
        K[i, i-1] = kij(i, i-1, h)
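One thing worth checking before blaming np.linalg.inv: it returns a plain ndarray, and * between ndarrays is elementwise multiplication, so M_inv * M above is not a matrix product. A minimal residual check with a true matrix product (a sketch of mine reusing the M built above, not part of the original post):

M_inv = np.linalg.inv(M)
residual = np.linalg.norm(M_inv @ M - np.eye(n))
print("||M_inv @ M - I|| =", residual)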
Try to compute the inverse column by column using this:
c1 = numpy.linalg.solve(M, [1, 0, ..., 0])
cn = numpy.linalg.solve(M, [0, ..., 0, 1])
Here is an example with a tridiagonal matrix:
import numpy as np
M = np.array([[1,2,0],[1,4,9],[0,8,27]])
I = np.identity(3)
print(M)
#using inv
Minv1 = np.linalg.inv(M)
#using solve
Minv2 = list()
for i in range(3):
    Minv2.append(np.linalg.solve(M, I[i]))
Minv2 = np.array([list(column) for column in zip(*Minv2)])
#same as:
Minv3 = np.linalg.solve(M, I)
print(Minv1)
print(Minv2)
print(Minv3)
Generated output:
[[ 1 2 0]
[ 1 4 9]
[ 0 8 27]]
[[-2. 3. -1. ]
[ 1.5 -1.5 0.5 ]
[-0.44444444 0.44444444 -0.11111111]]
[[-2. 3. -1. ]
[ 1.5 -1.5 0.5 ]
[-0.44444444 0.44444444 -0.11111111]]
[[-2. 3. -1. ]
[ 1.5 -1.5 0.5 ]
[-0.44444444 0.44444444 -0.11111111]]
numpy.linalg.solve is generally more accurate than forming an explicit inverse with numpy.linalg.inv.
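One way to quantify that with the variables from the snippet above (a small check of mine, not part of the original answer):

# residual of each reconstruction of the inverse
print(np.linalg.norm(M @ Minv1 - I))  # inverse from np.linalg.inv
print(np.linalg.norm(M @ Minv3 - I))  # inverse from np.linalg.solve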
With n=5:
M = np.array([[1,2,0,0,0],[1,4,9,0,0],[0,8,27,1,0],[0,0,81,1,2],[0,0,0,1,23]])
I = np.identity(len(M))
print(M)
#using inv
Minv1 = np.linalg.inv(M)
#using solve
Minv2 = list()
for i in range(len(M)):
    Minv2.append(np.linalg.solve(M, I[i]))
Minv2 = np.array([list(column) for column in zip(*Minv2)])
#same as:
Minv3 = np.linalg.solve(M, I)
print(Minv1)
print(Minv2)
print(Minv3)
Generated output:
[[ 1 2 0 0 0]
[ 1 4 9 0 0]
[ 0 8 27 1 0]
[ 0 0 81 1 2]
[ 0 0 0 1 23]]
[[ 1.63157895e+00 -6.31578947e-01 -9.21052632e-02 1.00877193e-01
-8.77192982e-03]
[-3.15789474e-01 3.15789474e-01 4.60526316e-02 -5.04385965e-02
4.38596491e-03]
[-4.09356725e-02 4.09356725e-02 -1.02339181e-02 1.12085770e-02
-9.74658869e-04]
[ 3.63157895e+00 -3.63157895e+00 9.07894737e-01 1.00877193e-01
-8.77192982e-03]
[-1.57894737e-01 1.57894737e-01 -3.94736842e-02 -4.38596491e-03
4.38596491e-02]]
[[ 1.63157895e+00 -6.31578947e-01 -9.21052632e-02 1.00877193e-01
-8.77192982e-03]
[-3.15789474e-01 3.15789474e-01 4.60526316e-02 -5.04385965e-02
4.38596491e-03]
[-4.09356725e-02 4.09356725e-02 -1.02339181e-02 1.12085770e-02
-9.74658869e-04]
[ 3.63157895e+00 -3.63157895e+00 9.07894737e-01 1.00877193e-01
-8.77192982e-03]
[-1.57894737e-01 1.57894737e-01 -3.94736842e-02 -4.38596491e-03
4.38596491e-02]]
[[ 1.63157895e+00 -6.31578947e-01 -9.21052632e-02 1.00877193e-01
-8.77192982e-03]
[-3.15789474e-01 3.15789474e-01 4.60526316e-02 -5.04385965e-02
4.38596491e-03]
[-4.09356725e-02 4.09356725e-02 -1.02339181e-02 1.12085770e-02
...
[ 3.63157895e+00 -3.63157895e+00 9.07894737e-01 1.00877193e-01
-8.77192982e-03]
[-1.57894737e-01 1.57894737e-01 -3.94736842e-02 -4.38596491e-03
4.38596491e-02]]
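Back to the original problem: the eigenvalues of M^{-1}K are exactly the solutions of the generalized eigenvalue problem K v = λ M v, so the inverse never has to be formed at all. A sketch using SciPy (assuming the symmetric, positive-definite M and the symmetric K built in the question):

from scipy.linalg import eigh

# K v = lambda M v  <=>  eigenvalues of M^{-1} K, without computing M^{-1}
eigenvalues = eigh(K, M, eigvals_only=True)
print(eigenvalues[:10])  # smallest eigenvalues first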
Draw 10,000 points on the plane so that both the x and y coordinates of each point are in the range [-1, 1].
Print the coordinates of only those points which are contained in a circle with radius r0 = 1.0.
def nextRandom(seed):
    m = 233280  # modulus
    a = 9301    # multiplier
    c = 49297   # increment
    x0 = seed   # start value
    return 2 * (((a * x0 + c) % m) / m) - 1  # between [-1, 1]

N = 10
x = [0] * N
y = [0] * N
p = [0] * N
x0 = 1
y0 = 0
r = 1.0

for i in range(1, N, 1):
    x[i] = nextRandom(x0)
    y[i] = nextRandom(x[i])
    p[i] = x[i] * x[i] + y[i] * y[i]
    if p[i] <= r * r:
        print(i, "(", "{0:.2f}, ".format(x[i]), "{0:.2f}".format(y[i]), ")")

import matplotlib.pyplot as plt
plt.scatter(x, y)
plt.show()
Output
In [33]: runfile('C:/Users/pc/Desktop/temp.py', wdir='C:/Users/pc/Desktop/')
1 ( -0.50, -0.62 )
2 ( -0.50, -0.62 )
3 ( -0.50, -0.62 )
4 ( -0.50, -0.62 )
5 ( -0.50, -0.62 )
6 ( -0.50, -0.62 )
7 ( -0.50, -0.62 )
8 ( -0.50, -0.62 )
9 ( -0.50, -0.62 )
Why is this source code plotting only two points?
Edit: modified the code as follows:
for i in range(1, N, 1):
    x[i] = nextRandom(x0)
    x0 = x[i]  # <========================= added this line
    y[i] = nextRandom(x[i])
    p[i] = x[i] * x[i] + y[i] * y[i]
    if p[i] <= r * r:
        print(i, "(", "{0:.2f}, ".format(x[i]), "{0:.2f}".format(y[i]), ")")
Output
1 ( -0.50, -0.62 )
2 ( -0.62, -0.63 )
3 ( -0.63, -0.63 )
4 ( -0.63, -0.63 )
5 ( -0.63, -0.63 )
6 ( -0.63, -0.63 )
7 ( -0.63, -0.63 )
8 ( -0.63, -0.63 )
9 ( -0.63, -0.63 )
I am not seeing much improvement.
This looks like an issue with the proposed random number generation scheme: once the rescaled float in [-1, 1] is fed back in as the seed, the recurrence quickly settles near a fixed point (around -0.63), which is why the points cluster. Instead of dividing by m inside nextRandom, you can generate a bunch of pseudorandom integers between 0 and m, then rescale and plot.
# output ints!
def nextRandom(seed):
    m = 233280  # modulus
    a = 9301    # multiplier
    c = 49297   # increment
    x0 = seed   # start value
    return (a * x0 + c) % m
# generate (hopefully) random ints
m = 233280
# initialize integer arrays to store iterative applications
# of nextRandom. Random seed for x is 0, random seed for y is 1
rx, ry = [0], [1]
for i in range(500):
    rx.append(nextRandom(rx[-1]))
    ry.append(nextRandom(ry[-1]))
# rescale to the 2x2 square around the origin
xs = [2*x/m-1 for x in rx]
ys = [2*y/m-1 for y in ry]
# different colors based on distance to the origin
color = ['red' if x**2 + y**2 < 1 else 'blue' for x, y in zip(xs, ys)]
from matplotlib import pyplot as plt
plt.scatter(xs, ys, c=color)
plt.show()
Results look like this:
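For comparison, here is a quick sketch of mine (not part of the original answer) that draws the same picture with NumPy's built-in generator, which makes it easy to judge how the homemade LCG measures up:

import numpy as np
from matplotlib import pyplot as plt

rng = np.random.default_rng(0)  # seeded only for reproducibility
xs = rng.uniform(-1, 1, size=500)
ys = rng.uniform(-1, 1, size=500)
color = ['red' if x**2 + y**2 < 1 else 'blue' for x, y in zip(xs, ys)]
plt.scatter(xs, ys, c=color)
plt.show()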
import numpy as np
import pandas as pd
from scipy.spatial.distance import directed_hausdorff
df:
1 1.1 2 2.1 3 3.1 4 4.1
45.13 7.98 45.10 7.75 45.16 7.73 NaN NaN
45.35 7.29 45.05 7.68 45.03 7.96 45.05 7.65
Calculated distance for one pair:
x = df['3']
y = df['3.1']
P = np.array([x, y])
q = df['4']
w = df['4.1']
Q = np.array([q, w])
Q_final = list(zip(Q[0], Q[1]))
P_final = list(zip(P[0], P[1]))
directed_hausdorff(P_final, Q_final)[0]
Desired output:
The same process, with a for loop, over the whole dataset:
distance from a['0'] to a['0'] is 0
from a['0'] to a['1'] is 0.234 (some number)
from a['0'] to a['2'] is .. ...
From [0] to all, then from [1] to all, and so on.
Finally I should get a matrix with 0s on the diagonal.
I have tried:
space = list(df.index)
dist = []
for j in space:
    for k in space:
        if k != j:
            dist.append((j, k, directed_hausdorff(P_final, Q_final)[0]))
But I keep getting the same value, the distance between [3] and [4].
I am not entirely sure what you are trying to do, but note that your loop keeps reusing the same P_final and Q_final, which is why every distance comes out the same. Based on how you calculated the first pair, here is a possible solution:
import pandas as pd
import numpy as np
from scipy.spatial.distance import directed_hausdorff

df = pd.read_csv('something.csv')

# group the columns into coordinate pairs: ('1', '1.1'), ('2', '2.1'), ...
groupby = lambda l, n: [tuple(l[i:i+n]) for i in range(0, len(l), n)]
values = groupby(df.columns.values, 2)

matrix = np.zeros((len(values), len(values)))
for Ps in values:
    x = df[str(Ps[0])]
    y = df[str(Ps[1])]
    P = np.array([x, y])
    for Qs in values:
        q = df[str(Qs[0])]
        w = df[str(Qs[1])]
        Q = np.array([q, w])
        Q_final = list(zip(Q[0], Q[1]))
        P_final = list(zip(P[0], P[1]))
        matrix[values.index(Ps), values.index(Qs)] = directed_hausdorff(P_final, Q_final)[0]

print(matrix)
Output:
[[0. 0.49203658 0.47927028 0.46861498]
[0.31048349 0. 0.12083046 0.1118034 ]
[0.25179357 0.22135944 0. 0.31064449]
[0.33955854 0.03 0.13601471 0. ]]
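One caveat of my own (not something the answer addresses): the sample df has NaN in columns 4/4.1, and directed_hausdorff has no special handling for missing values, so it is safer to drop incomplete rows first. A minimal sketch that cleans each column pair before computing the distances, reusing the values and matrix from the answer above (pair_points is a hypothetical helper introduced for illustration):

def pair_points(df, cols):
    # keep only rows where both coordinates of this pair are present
    sub = df[list(cols)].dropna()
    return list(zip(sub.iloc[:, 0], sub.iloc[:, 1]))

for Ps in values:
    for Qs in values:
        P_final = pair_points(df, Ps)
        Q_final = pair_points(df, Qs)
        matrix[values.index(Ps), values.index(Qs)] = directed_hausdorff(P_final, Q_final)[0]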
I'm trying to compute the singular values of a matrix with many zeros using SLEPc's Lanczos-type SVD solver, from Python/Cython.
The matrix I use is a PETSc matrix:
[[ 0.00648130+0.32060635j 0 0 0 0 0 ]
[ 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 ]
[ 0 0 0 0 0 -0.00668978-0.31948359j ]]
When I invoke the SVD solver with the code below,
size = Matrix.getSize()
S = SLEPc.SVD()
S.create()
S.setOperator(Matrix)
S.setType(SLEPc.SVD.Type.LANCZOS)
S.setDimensions(min(size))
S.solve()
I get the error:
/usr/local/lib/python2.7/dist-packages/slepc4py/lib/linux-gnu-cxx-complex/SLEPc.so in slepc4py.SLEPc.SVD.solve (src/slepc4py.SLEPc.c:35357)()
Error: error code 76
[0] SVDSolve() line 111 in /home/fremling/slepc-3.7.2/src/svd/interface/svdsolve.c
[0] SVDSolve_Lanczos() line 229 in /home/fremling/slepc-3.7.2/src/svd/impls/lanczos/gklanczos.c
[0] DSSolve() line 543 in /home/fremling/slepc-3.7.2/src/sys/classes/ds/interface/dsops.c
[0] DSSolve_SVD_DC() line 255 in /home/fremling/slepc-3.7.2/src/sys/classes/ds/impls/svd/dssvd.c
[0] Error in external library
[0] Error in Lapack xBDSDC 5
I realize some of the singular values will be zero, but that should not be a reason for the crash, right?
I should mention that most of the time the code runs without problem, but when there are many zeros, these crashes happen.
The complete code example below works with the given matrix for all SLEPc SVD methods except SLEPc.SVD.Type.CROSS. Tests were run with version 3.7.0 of slepc4py and petsc4py.
import numpy as np
import slepc4py.SLEPc as SLEPc
import petsc4py.PETSc as PETSc

# numpy version
A = np.array([[0.00648130+0.32060635j, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, -0.00668978-0.31948359j]])
u, s, d = np.linalg.svd(A)
print('Singular values: ', s)

# SLEPc version
Ap = PETSc.Mat()
Ap.create()
Ap.setSizes(A.shape)
Ap.setUp()
for row in range(A.shape[0]):
    for col in range(A.shape[1]):
        Ap.setValue(row, col, A[row, col])
Ap.assemble()

#for stype in [SLEPc.SVD.Type.CROSS, SLEPc.SVD.Type.CYCLIC, SLEPc.SVD.Type.LANCZOS, SLEPc.SVD.Type.LAPACK, SLEPc.SVD.Type.TRLANCZOS]:
for stype in [SLEPc.SVD.Type.CYCLIC, SLEPc.SVD.Type.LANCZOS, SLEPc.SVD.Type.LAPACK, SLEPc.SVD.Type.TRLANCZOS]:
    S = SLEPc.SVD()
    S.create()
    S.setOperator(Ap)
    S.setType(stype)
    S.setDimensions(A.shape[0])
    S.solve()

    s_slepc = []
    i = 0
    while i < S.getConverged():
        s_slepc.append(S.getValue(i))
        i += 1
    print('Singular values (SLEPc %s): ' % S.getType(), s_slepc)
Produces output:
('Singular values: ', array([ 0.32067186, 0.31955362, 0. , 0. , 0. , 0. ]))
('Singular values (SLEPc cyclic): ', [0.3206718555003113, 0.31955362216025096, 5.558046393682893e-17, 1.5567126663969806e-34, 1.1955235065555233e-34, 8.758810386256485e-36])
('Singular values (SLEPc lanczos): ', [0.32067185550031124, 0.31955362216025107, 7.598620143277e-17, 9.80035376111015e-18, 8.135560423584465e-18, 4.5426042596528355e-18])
('Singular values (SLEPc lapack): ', [0.32067185550031124, 0.31955362216025107, 0.0, 0.0, 0.0, 0.0])
('Singular values (SLEPc trlanczos): ', [0.32067185550031124, 0.31955362216025107, 1.4803092323093608e-09, 9.80035376111015e-18, 8.135560423584465e-18, 4.5426042596528355e-18])
Suppose we have an N x M matrix and we want to reduce its dimensions while preserving the values, by summing first neighbors.
Suppose the matrix A is a 4x4 matrix:
A =
3 4 5 6
2 3 4 5
2 2 0 1
5 2 2 3
we want to reduce it to a 2x2 matrix as follows:
A1 =
12 20
11 6
In particular, my matrix represents the number of incident cases in an x-y plane. My matrix is 103x159; if I plot it I get:
What I want to do is aggregate those data over bigger areas, such as:
Assuming you're using a numpy.matrix:
import numpy as np

A = np.matrix([
    [3, 4, 5, 6],
    [2, 3, 4, 5],
    [2, 2, 0, 1],
    [5, 2, 2, 3]
])

N, M = A.shape
assert N % 2 == 0
assert M % 2 == 0

A1 = np.empty((N//2, M//2))
for i in range(N//2):
    for j in range(M//2):
        A1[i, j] = A[2*i:2*i+2, 2*j:2*j+2].sum()
Though these loops can probably be optimized away by proper numpy functions.
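For instance, here is a sketch (using a plain ndarray rather than numpy.matrix) that computes the 2x2 block sums with a single reshape and sum:

import numpy as np

A = np.array([[3, 4, 5, 6],
              [2, 3, 4, 5],
              [2, 2, 0, 1],
              [5, 2, 2, 3]])
N, M = A.shape
A1 = A.reshape(N//2, 2, M//2, 2).sum(axis=(1, 3))
print(A1)  # [[12 20]
           #  [11  6]]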
I see that there is already a solution using numpy.matrix; maybe you can test my solution too and give feedback.
It works for an a x b matrix when a and b are even; it may fail when a or b is odd.
Here is my solution:
v = [
    [3, 4, 5, 6],
    [2, 3, 4, 5],
    [2, 2, 0, 1],
    [5, 2, 2, 3]
]

def shape(v):
    return len(v), len(v[0])

def chunks(v, step):
    """
    Chunk list step per step and sum
    Example: step = 2
    [3,4,5,6] => [7,11]
    [2,3,4,5] => [5,9]
    [2,2,0,1] => [4,1]
    [5,2,2,3] => [7,5]
    """
    for i in v:
        for k in range(0, len(i), step):
            yield sum(j for j in i[k:k+step])

def sum_chunks(k, step):
    """
    Sum near values with step
    Example: step = 2
    [
     [7,11],       [
     [5,9],    =>   [12, 11],
     [4,1],         [20, 6]
     [7,5]         ]
    ]
    """
    a, c = [k[i::step] for i in range(step)], []
    print(a)
    for m in a:
        # sum near values
        c.append([sum(m[j:j+2]) for j in range(0, len(m), 2)])
    return c

rows, columns = shape(v)
chunk_list = list(chunks(v, columns // 2))
final_sum = sum_chunks(chunk_list, rows // 2)
print(final_sum)
Output:
[[12, 11], [20, 6]]
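A small observation of mine: this result is the transpose of the A1 asked for in the question; one extra line flips the orientation:

# transpose the nested-list result so rows/columns match the question's A1
final_sum = [list(row) for row in zip(*final_sum)]
print(final_sum)  # [[12, 20], [11, 6]]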
I have an array A of the form:
1.005 1.405 1.501 1.635
2.020 2.100 2.804 2.067
3.045 3.080 3.209 3.627
4.080 4.005 4.816 4.002
5.125 5.020 5.025 5.307
6.180 6.045 6.036 6.015
7.245 7.320 7.049 7.807
8.320 8.125 8.064 8.042
9.405 9.180 9.581 9.060
10.500 10.245 10.100 10.082
and B of the form:
10
9
8
7
6
5
4
3
2
1
I would like to add to or subtract from each of the entries a number less than a particular value, in this case 0.5, so that certain conditions are met, e.g. the sum of (Bi - Aij)^2 is minimized, much like an optimization problem. As an example, take A23, which has the value 2.804: I need to vary it in the range 2.304 < A23 < 3.304 so that, for a particular value in this range, the sum of (Bi - Aij)^2 is minimized. And then for A24 I vary it between 1.567 < A24 < 2.567 so that D is minimized.
Reproducible code
import numpy as np

A = np.array([[1.005, 1.405, 1.501, 1.635],
              [2.020, 2.100, 2.804, 2.067],
              [3.045, 3.080, 3.209, 3.627],
              [4.080, 4.005, 4.816, 4.002],
              [5.125, 5.020, 5.025, 5.307],
              [6.180, 6.045, 6.036, 6.015],
              [7.245, 7.320, 7.049, 7.807],
              [8.320, 8.125, 8.064, 8.042],
              [9.405, 9.180, 9.581, 9.060],
              [10.500, 10.245, 10.100, 10.082]])
B = np.array([10, 9, 8, 7, 6, 5, 4, 3, 2, 1])

C = np.empty(shape=(A.shape[0], A.shape[1]))
D = np.empty(shape=(A.shape[0], ))

m, n = A.shape
for i in range(m):
    for j in range(n):
        C[i, j] = np.sum((B[i] - A[i, j]) ** 2)
D = np.sum(C, axis=0)
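Since each term of the sum depends on a single entry of A, the constrained minimizer has a closed form: within A[i, j] ± 0.5, the best value is B[i] clipped to that interval. A minimal sketch of that reading of the problem (my interpretation, not necessarily the full intent):

# move each entry of A toward the corresponding B[i], but by at most 0.5
shift = np.clip(B[:, None] - A, -0.5, 0.5)
A_opt = A + shift
D_opt = np.sum((B[:, None] - A_opt) ** 2, axis=0)  # one value per column, like D above
print(A_opt)
print(D_opt)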