sympy big matrix need help in realising some forgotten operation - python

Given a big SymPy matrix, how can I access the off-diagonal blocks of the big matrix? I can't post images by the looks of it, so do run the code and try to see what I mean.
If the code is run, the output matrix, which I currently call H_QN (a 16x16 matrix), has the correct 4x4 diagonal blocks. Underneath the first 4x4 block I want to add some elements (some J terms, which I have written into a 4x4 matrix called Cqubit) in the adjacent 4x4 block. How do I do that?
#if you can imagine the layout below as a submatrix of the big matrix:
#it is an 8x8 matrix, where each 'o' represents a 4x4 block and so does each 'x'.
#I want to add the 'x' blocks to this submatrix and consequently to the big one.
|ox|
|xo|
Here is the jupyter notebook pastebin
https://paste.pythondiscord.com/vuwoqemuce
and the raw code:
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
import sympy as sp
from sympy.physics.quantum import TensorProduct
sp.init_printing()
# In[5]:
H_single = sp.Matrix.diag([0,1,2])
I = sp.eye(2) #2+2 system
def Id(n):
    return sp.eye(n)
H_single, Id(4)
# In[6]:
TensorProduct(I,H_single)
# In[7]:
TensorProduct(H_single,I) #think of it as changing the position of the basis vectors, like 01 and 10
# $$
# \begin{equation}
# Rqubit = R_{qu1} \otimes I + I \otimes R_{qu2} + K(\hat{a}^\dagger \otimes \hat{a} + \hat{a} \otimes \hat{a}^\dagger) + P^*\hat{a} + P\hat{a}^\dagger
# \end{equation}
# $$
#
# In[8]:
E1, E2, K, P1, P2 = sp.symbols("E_1 E_2 K_{21} P_1 P_2")
a_dag = sp.Matrix([[0,0],[1,0]])
a_dag,a_dag.T, E1, E2, K, P1, P2
# In[9]:
Rqu1 = sp.Matrix([[0,0],[0,E1]])
Rqu2 = sp.Matrix([[0,0],[0,E2]])
Rqu1, Rqu2
# In[10]:
#2+2 system
a_21 = (TensorProduct(a_dag, a_dag.T))
a_21
# In[11]:
R = TensorProduct(Rqu1, Id(2)) + TensorProduct(Id(2),Rqu2)
R
# In[12]:
R = TensorProduct(Rqu1, Id(2)) + TensorProduct(Id(2),Rqu2) + K*a_21
R
# In[13]:
a_21 = K*(TensorProduct(a_dag, a_dag.T))
# In[14]:
R = TensorProduct(Rqu1, Id(2)) + TensorProduct(Id(2),Rqu2) + a_21.T + a_21
R
# In[15]:
Rqu1 = sp.Matrix([[0,P1.conjugate()],[P1,E1]])
Rqu2 = sp.Matrix([[0,P2.conjugate()],[P2,E2]])
Rqu1, Rqu2
# In[16]:
R = TensorProduct(Rqu1, Id(2)) + TensorProduct(Id(2),Rqu2) + (a_21.H + a_21)
R
# In[18]:
c1 = sp.Matrix.diag([0,0]) #perhaps qubit 1 empty state
c2 = sp.Matrix.diag([0,0])
ch = TensorProduct(c1, I) + TensorProduct(I,c2)
ch
# In[19]:
I4 = TensorProduct(I, I)
H_QN = TensorProduct(ch, I4)+TensorProduct(I4, R)
H_QN
#now repeating process but adding the J terms
# In[20]:
E1, E2, K, P1, P2, J11, J12, J21, J22 = sp.symbols("E_1 E_2 K_{21} P_1 P_2 J_{11} J_{12} J_{21} J_{22}")
a_dag = sp.Matrix([[0,0],[1,0]])
a_dag,a_dag.T, E1, E2, K, P1, P2, J11, J12, J21, J22
#more code below (max characters reached)
# In[21]:
Rqu1 = sp.Matrix([[0,P1.conjugate()],[P1,E1]])
Rqu2 = sp.Matrix([[0,P2.conjugate()],[P2,E2]])
Rqu1, Rqu2
# $$
# \begin{equation}
# Cqubit = C_{qu1} \otimes I + I \otimes C_{qu2} + (J_{kl}^*\hat{a}_k \otimes \hat{a}_l + J_{kl}\hat{a}_l^\dagger \otimes \hat{a}_k^\dagger)
# \end{equation}
# $$
# In[22]:
Cqu1 = sp.Matrix([[0,J22.conjugate()],
[0,0]])
Cqu2 = sp.Matrix([[0,J21.conjugate()],
[0,0]])
Cqu1, Cqu2
# In[24]:
Cqubit_ = TensorProduct(Cqu2, I) + TensorProduct(I, Cqu1)
Cqubit = Cqubit_.T
Cqubit,Cqubit_
# In[35]:
C = TensorProduct(I4, ch)+TensorProduct(I4, Cqubit_)+TensorProduct(I4, Cqubit)
C
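To answer the block-placement question: SymPy's mutable matrices support slice indexing, so a 4x4 off-diagonal block of the 16x16 matrix can be read or written directly. A minimal sketch (the J block below is a hypothetical stand-in, not the actual Cqubit built above):
import sympy as sp
J = sp.symbols("J")
block = J * sp.eye(4)   # hypothetical 4x4 stand-in for Cqubit
H = sp.zeros(16, 16)
H[4:8, 0:4] = block     # the 4x4 block directly below the first diagonal block
H[0:4, 4:8] = block.H   # mirror it above the diagonal to keep H Hermitian
H[4:8, 0:4]             # slicing also reads a block back out
For the |ox| / |xo| picture, sp.Matrix(sp.BlockMatrix([[o, x], [x, o]])) also builds the 8x8 submatrix directly from 4x4 pieces o and x.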

Is there any Python function that works exactly like MATLAB's built-in image-to-column function "im2col"?

Can we convert the following MATLAB functions to a corresponding Python version?
x1 = im2col(original_image, [block_size block_size], 'distinct');
r1 = col2im(x1, [block_size block_size],[Hight Width], 'distinct');
I tried to write the following two functions, but my output is not the same as that of the MATLAB functions: it cannot recover the same image back again. Please check my code below.
def im2col(mtx, block_size):
    mtx_shape = mtx.shape
    m = mtx.shape[0]
    n = mtx.shape[1]
    mblocks = np.ceil(m/block_size)
    nblocks = np.ceil(n/block_size)
    sx = int(mblocks)
    sy = int(nblocks)
    result = np.empty((block_size * block_size, sx * sy))
    # Move along the columns (outer i) before moving down the rows (inner j)
    for i in range(sy):
        for j in range(sx):
            result[:, i * sx + j] = mtx[j:j + block_size, i:i + block_size].ravel(order='F')
    return result
def col2im(mtx, block_size, image_size):
    p = block_size[0]
    q = block_size[1]
    mblocks = np.ceil(image_size[0]/p)
    nblocks = np.ceil(image_size[1]/q)
    sx = int(mblocks)
    sy = int(nblocks)
    result = np.zeros(image_size)
    weight = np.zeros(image_size)  # records how many times each cell has been written
    col = 0
    # Move along the columns (outer i) before moving down the rows (inner j)
    for i in range(sy):
        for j in range(sx):
            #result[j:j + p, i:i + q] += mtx[:, col].reshape(block_size, order='F')
            result[j:j + p, i:i + q] = result[j:j + p, i:i + q] + mtx[:, col].reshape(block_size)
            weight[j:j + p, i:i + q] = weight[j:j + p, i:i + q] + np.ones(block_size)
            col = col + 1
    #return np.divide(result, weight), result
    #return result
    return np.divide(result, weight, out=np.zeros_like(result), where=weight!=0)
To run them:
image = cv2.imread('my_image.jpg', cv2.IMREAD_UNCHANGED)
hh = image.shape[0]
wh = image.shape[1]
blk_size =32
x1 = im2col(image, blk_size)
res1 = col2im(x1, (blk_size, blk_size), (hh, wh))
I would be thankful if anyone provides an exact Python version of MATLAB's im2col with the 'distinct' criterion.
The functions available online (link) can return the same image, but they fail to produce the exact im2col output matrix that MATLAB does; the dimensions are different. MATLAB gives x1 dimensions of 1024x64, while the Python version gives 1024x50625, which is not wanted.
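One likely cause of the mismatch: the loops above index by pixel (j:j + block_size) rather than by block (j*block_size:(j+1)*block_size), so overlapping windows are extracted instead of distinct tiles. A sketch of distinct-block behaviour, assuming the image dimensions are exact multiples of the block size:
import numpy as np

def im2col_distinct(img, b):
    m, n = img.shape
    cols = []
    for i in range(n // b):        # blocks left-to-right (outer), ...
        for j in range(m // b):    # ... top-to-bottom (inner), as MATLAB orders them
            cols.append(img[j*b:(j+1)*b, i*b:(i+1)*b].ravel(order='F'))
    return np.stack(cols, axis=1)  # shape (b*b, number_of_blocks), e.g. 1024 x 64

def col2im_distinct(cols, b, shape):
    m, n = shape
    out = np.empty(shape, dtype=cols.dtype)
    col = 0
    for i in range(n // b):
        for j in range(m // b):
            out[j*b:(j+1)*b, i*b:(i+1)*b] = cols[:, col].reshape((b, b), order='F')
            col += 1
    return out
A quick round trip on random data, e.g. x = np.random.rand(256, 256) followed by np.array_equal(col2im_distinct(im2col_distinct(x, 32), 32, x.shape), x), should return True, and the im2col output has shape 1024x64 as in MATLAB.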

Multigrid Poisson Solver

I am trying to make my own CFD solver and one of the most computationally expensive parts is solving for the pressure term. One way to solve Poisson differential equations faster is by using a multigrid method. The basic recursive algorithm for this is:
function phi = V_Cycle(phi,f,h)
    % Recursive V-Cycle Multigrid for solving the Poisson equation (\nabla^2 phi = f) on a uniform grid of spacing h
    % Pre-Smoothing
    phi = smoothing(phi,f,h);
    % Compute Residual Errors
    r = residual(phi,f,h);
    % Restriction
    rhs = restriction(r);
    eps = zeros(size(rhs));
    % stop recursion at smallest grid size, otherwise continue recursion
    if smallest_grid_size_is_achieved
        eps = smoothing(eps,rhs,2*h);
    else
        eps = V_Cycle(eps,rhs,2*h);
    end
    % Prolongation and Correction
    phi = phi + prolongation(eps);
    % Post-Smoothing
    phi = smoothing(phi,f,h);
end
I've attempted to implement this algorithm myself (also at the end of this question), however it is very slow and doesn't give good results, so evidently it is doing something wrong. I've been trying to find out why for too long, and I think it's worthwhile seeing if anyone can help me.
If I use a grid size of 2^5 by 2^5 points, then it can solve it and gives reasonable results. However, as soon as I go above this it takes exponentially longer to solve and basically gets stuck at some level of inaccuracy, no matter how many V-cycles are performed. At 2^7 by 2^7 points, the code takes way too long to be useful.
I think my main issue is that my implementation of the Jacobi iteration uses linear algebra to calculate the update at each step. This should, in general, be fast; however, the update matrix A is an (n*m) x (n*m) matrix, and calculating a matrix-vector product at that size is expensive. As most of the cells are just zeros, should I calculate the result using a different method?
If anyone has any experience with multigrid methods, I would appreciate any advice!
Thanks
my code:
# -*- coding: utf-8 -*-
"""
Created on Tue Dec 29 16:24:16 2020
#author: mclea
"""
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d
from mpl_toolkits.mplot3d import Axes3D
from scipy.interpolate import griddata
from matplotlib import cm
def restrict(A):
    """
    Creates a new grid of points which is half the size of the original
    grid in each dimension.
    """
    n = A.shape[0]
    m = A.shape[1]
    new_n = int((n-2)/2+2)
    new_m = int((m-2)/2+2)
    new_array = np.zeros((new_n, new_m))
    for i in range(1, new_n-1):
        for j in range(1, new_m-1):
            ii = int((i-1)*2)+1
            jj = int((j-1)*2)+1
            # print(i, j, ii, jj)
            new_array[i,j] = np.average(A[ii:ii+2, jj:jj+2])
    new_array = set_BC(new_array)
    return new_array
def interpolate_array(A):
    """
    Creates a grid of points which is double the size of the original
    grid in each dimension. Uses linear interpolation between grid points.
    """
    n = A.shape[0]
    m = A.shape[1]
    new_n = int((n-2)*2 + 2)
    new_m = int((m-2)*2 + 2)
    new_array = np.zeros((new_n, new_m))
    i = (np.indices(A.shape)[0]/(A.shape[0]-1)).flatten()
    j = (np.indices(A.shape)[1]/(A.shape[1]-1)).flatten()
    A = A.flatten()
    new_i = np.linspace(0, 1, new_n)
    new_j = np.linspace(0, 1, new_m)
    new_ii, new_jj = np.meshgrid(new_i, new_j)
    new_array = griddata((i, j), A, (new_jj, new_ii), method="linear")
    return new_array
def adjacency_matrix(rows, cols):
    """
    Creates the adjacency matrix for an n by m shaped grid
    """
    n = rows*cols
    M = np.zeros((n,n))
    for r in range(rows):
        for c in range(cols):
            i = r*cols + c
            # Two inner diagonals
            if c > 0: M[i-1,i] = M[i,i-1] = 1
            # Two outer diagonals
            if r > 0: M[i-cols,i] = M[i,i-cols] = 1
    return M
def create_differences_matrix(rows, cols):
    """
    Creates the central differences matrix A for an n by m shaped grid
    """
    n = rows*cols
    M = np.zeros((n,n))
    for r in range(rows):
        for c in range(cols):
            i = r*cols + c
            # Two inner diagonals
            if c > 0: M[i-1,i] = M[i,i-1] = -1
            # Two outer diagonals
            if r > 0: M[i-cols,i] = M[i,i-cols] = -1
    np.fill_diagonal(M, 4)
    return M
def set_BC(A):
    """
    Sets the boundary conditions of the field
    """
    A[:, 0] = A[:, 1]
    A[:, -1] = A[:, -2]
    A[0, :] = A[1, :]
    A[-1, :] = A[-2, :]
    return A
def create_A(n,m):
    """
    Creates all the components required for the Jacobi update function
    for an n by m shaped grid
    """
    LaddU = adjacency_matrix(n,m)
    A = create_differences_matrix(n,m)
    invD = np.zeros((n*m, n*m))
    np.fill_diagonal(invD, 1/4)
    return A, LaddU, invD
def calc_RJ(rows, cols):
    """
    Calculates the Jacobi update matrix Rj for an n by m shaped grid
    """
    n = int(rows*cols)
    M = np.zeros((n,n))
    for r in range(rows):
        for c in range(cols):
            i = r*cols + c
            # Two inner diagonals
            if c > 0: M[i-1,i] = M[i,i-1] = 0.25
            # Two outer diagonals
            if r > 0: M[i-cols,i] = M[i,i-cols] = 0.25
    return M
def jacobi_update(v, f, nsteps=1, max_err=1e-3):
    """
    Uses a Jacobi update matrix to solve nabla(v) = f
    """
    f_inner = f[1:-1, 1:-1].flatten()
    n = v.shape[0]
    m = v.shape[1]
    A, LaddU, invD = create_A(n-2, m-2)
    Rj = calc_RJ(n-2, m-2)
    update = True
    step = 0
    while update:
        v_old = v.copy()
        step += 1
        vt = v_old[1:-1, 1:-1].flatten()
        vt = np.dot(Rj, vt) + np.dot(invD, f_inner)
        v[1:-1, 1:-1] = vt.reshape((n-2), (m-2))
        err = v - v_old
        if step == nsteps or np.abs(err).max() < max_err:
            update = False
    return v, (step, np.abs(err).max())
def MGV(f, v):
    """
    Solves for nabla(v) = f using a multigrid method
    """
    # global A, r
    n = v.shape[0]
    m = v.shape[1]
    # If on the smallest grid size, compute the exact solution
    if n <= 6 or m <= 6:
        v, info = jacobi_update(v, f, nsteps=1000)
        return v
    else:
        # smoothing
        v, info = jacobi_update(v, f, nsteps=10, max_err=1e-1)
        A = create_A(n, m)[0]
        # calculate residual
        r = np.dot(A, v.flatten()) - f.flatten()
        r = r.reshape(n, m)
        # downsample residual error
        r = restrict(r)
        zero_array = np.zeros(r.shape)
        # interpolate the correction computed on a coarser grid
        d = interpolate_array(MGV(r, zero_array))
        # Add the prolongated coarser-grid correction onto the finer grid
        v = v - d
        v, info = jacobi_update(v, f, nsteps=10, max_err=1e-6)
        return v
sigma = 0
# Setting up the grid
k = 6
n = 2**k + 2
m = 2**k + 2
hx = 1/n
hy = 1/m
L = 1
H = 1
x = np.linspace(0, L, n)
y = np.linspace(0, H, m)
XX, YY = np.meshgrid(x, y)
# Setting up the initial conditions
f = np.ones((n, m))
v = np.zeros((n, m))
# How many V-cycles to perform
err = 1
n_cycles = 10
loop = True
cycle = 0
# Perform V-cycles until converged or the maximum
# number of cycles is reached
while loop:
    cycle += 1
    v_new = MGV(f, v)
    if np.abs(v - v_new).max() < err:
        loop = False
    if cycle == n_cycles:
        loop = False
    v = v_new
print("Number of cycles " + str(cycle))
plt.contourf(v)
I realize that I'm not answering your question directly, but I do note that you have quite a few loops that will contribute some overhead cost. When optimizing code, I have found the following thread useful, particularly the line-profiler answer. That way you can focus on the high-time-cost lines and then ask more specific questions about opportunities to optimize.
How do I get time of a Python program's execution?
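On the sparsity question raised in the post: yes, building the operator with scipy.sparse makes each Jacobi sweep cost O(number of nonzeros) rather than O((n*m)^2). A sketch of a sparse drop-in for the dense update, assuming the same interior-grid layout and the same scale-free 5-point stencil (diagonal 4, off-diagonals -1) as the question's create_differences_matrix:
import numpy as np
import scipy.sparse as sparse

def jacobi_update_sparse(v, f, nsteps=10):
    # interior grid size, matching the question's (n-2) x (m-2) layout
    n, m = v.shape[0] - 2, v.shape[1] - 2
    # 5-point Laplacian as a Kronecker sum; row-major flattening (index = i*m + j)
    T_n = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    T_m = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
    A = (sparse.kron(T_n, sparse.identity(m)) + sparse.kron(sparse.identity(n), T_m)).tocsr()
    d_inv = 1.0 / A.diagonal()          # Jacobi scaling; all entries are 1/4 here
    x = v[1:-1, 1:-1].ravel()
    b = f[1:-1, 1:-1].ravel()
    for _ in range(nsteps):
        x = x + d_inv * (b - A @ x)     # sparse matvec: O(nnz) per sweep
    v[1:-1, 1:-1] = x.reshape(n, m)
    return v
Even without restructuring the multigrid itself, swapping the dense np.dot updates for a sparse matvec like this should remove the worst of the scaling with grid size.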

Fourier transform and filter given data set

Overall, I want to calculate a Fourier transform of a given data set and filter out some of the frequencies with the biggest absolute values. So:
1) given a data array D with accompanying times t, 2) find the k biggest Fourier coefficients and 3) remove those coefficients from the data, in order to filter out certain signals from the original data.
Something goes wrong at the end when plotting the filtered data set over the given times. I'm not exactly sure where the error is. The final 'filtered data' plot doesn't look even slightly smoothed, and it somehow shifts position compared with the original data. Is my code completely bad?
Part 1):
import numpy as np

n = 1000
limit_low = 0
limit_high = 0.48
D = np.random.normal(0, 0.5, n) + np.abs(np.random.normal(0, 2, n) * np.sin(np.linspace(0, 3*np.pi, n))) + np.sin(np.linspace(0, 5*np.pi, n))**2 + np.sin(np.linspace(1, 6*np.pi, n))**2
scaling = (limit_high - limit_low) / (max(D) - min(D))
D = D * scaling
D = D + (limit_low - min(D))  # given data
t = np.linspace(0, D.size-1, D.size)  # times
Part 2):
from scipy import fft

D_f = fft.fft(D)  # FFT of the D dataset
# --- extract the k biggest coefficients out of D_f ---
k = 15
I, bigvals = [], []
for i in np.argsort(-D_f):
    if D_f[i] not in bigvals:
        bigvals.append(D_f[i])
        I.append(i)
        if len(I) == k:
            break
bigcofs = np.zeros(len(D_f))
bigcofs[I] = D_f[I]  # array of zeros everywhere except the k maximal coefficients
Part 3):
import matplotlib.pyplot as plt

D_filter = fft.ifft(bigcofs)
D_new = D - D_filter
p1 = plt.plot(t, D, 'r')
p2 = plt.plot(t, D_new, 'b')
plt.legend((p1[0], p2[0]), ('original data', 'filtered data'))
I appreciate your help, thanks in advance.
There were two issues I noticed:
1) You likely want the components with the largest absolute value, so np.argsort(-np.abs(D_f)) instead of np.argsort(-D_f).
2) More subtly: bigcofs = np.zeros(len(D_f)) is of type float64 and discards the imaginary part at the line bigcofs[I] = D_f[I]. You can fix this with bigcofs = np.zeros(len(D_f), dtype=complex).
I've improved your code slightly below to get desired results:
import numpy as np
from scipy import fft
import matplotlib.pyplot as plt
n = 1000
limit_low = 0
limit_high = 0.48
N_THRESH = 10
D = 0.5*np.random.normal(0, 0.5, n) + 0.5*np.abs(np.random.normal(0, 2, n) * np.sin(np.linspace(0, 3*np.pi, n))) + np.sin(np.linspace(0, 5*np.pi, n))**2 + np.sin(np.linspace(1, 6*np.pi, n))**2
scaling = (limit_high - limit_low) / (max(D) - min(D))
D = D * scaling
D = D + (limit_low - min(D)) # given data
t = np.linspace(0,D.size-1,D.size) # times
# transformed data
D_fft = fft.fft(D)
# Create boolean mask for N largest indices
idx_sorted = np.argsort(-np.abs(D_fft))
idx = idx_sorted[0:N_THRESH]
mask = np.zeros(D_fft.shape).astype(bool)
mask[idx] = True
# Split fft above, below N_THRESH points:
D_below = D_fft.copy()
D_below[mask] = 0
D_above = D_fft.copy()
D_above[~mask] = 0
#inverse separated functions
D_above = fft.ifft(D_above)
D_below = fft.ifft(D_below)
# plot
plt.ion()
f, (ax1, ax2, ax3) = plt.subplots(3,1)
l1, = ax1.plot(t, D, c="r", label="original")
l2, = ax2.plot(t, D_above, c="g", label="top {} coeff. signal".format(N_THRESH))
l3, = ax3.plot(t, D_below, c="b", label="remaining signal")
f.legend(handles=[l1,l2,l3])
plt.show()
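To see the second issue in isolation, here is a minimal demonstration (with made-up values) of how a float64 target silently drops the imaginary part on array assignment:
import numpy as np
c = np.array([1 + 2j, 3 + 4j])
a = np.zeros(2)                  # float64: assignment casts with a ComplexWarning
a[:] = c                         # a is now [1., 3.] -- imaginary parts lost
b = np.zeros(2, dtype=complex)   # complex target keeps both parts
b[:] = c                         # b is [1.+2.j, 3.+4.j]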

Elegant way to translate set of weighted sums to python

Given a set of terms $\left\lVert p_i' - \sum_j w_{ji}\,(R_j\,p_i + v_j)\right\rVert^2$, where $\lVert\cdot\rVert^2$ denotes the squared norm, I want to efficiently set up an array (or a list) in Python filled with these terms. $p_i'$, $p_i$ and $v_j$ are three-dimensional vectors, and $R_j$ is a 3x3 matrix.
I've already tried this but I don't know how to incorporate the sum over j.
new_points = r_mesh.points()   # p', returns an Nx3 array
old_points = avg_mesh.points() # p
n_joints = 3
rv = np.arange(n_joints * 15)  # R_j and v_j are stored in rv
weights = np.random.rand(n_joints, len(new_points))  # w
func = [[np.linalg.norm(
    new_points[i] - (weights[j, i] * ((np.array(rv[j * 15:j * 15 + 9]).reshape(3, 3) @ old_points[i])
                                      + np.array(rv[j * 9 + 9: j * 9 + 12]))))
         for j in range(n_joints)] for i in range(len(new_points))]
To make things clearer: the equation above is the original one (posted as an image, not reproduced here), which I transformed into a non-linear function in order to feed it to the Levenberg-Marquardt method.
The simplest ("auto pilot", no actual thinking required) method would be np.einsum:
# set up example:
n_i, n_j = 20, 30
p = np.random.random((n_i, 3))
pp = np.random.random((n_i, 3))
R = np.random.random((n_j, 3, 3))
w = np.random.random((n_j, n_i))
v = np.random.random((n_j, 3))
# now just tell einsum which index is where and let it
# do its magic
# R_j p_i
Rp = np.einsum('jkl,il', R,p)
# by Einstein convention this will sum over l,
# so Rp has indices ijk
# w_ji (Rp_ij + v_j)
wRpv = np.einsum('ji,ijk->ik', w,Rp+v)
# pure Einstein convention would sum over i and j,
# we override this by passing explicit output indices
# ik to keep i alive
# squared norm
d = pp - wRpv
result = np.einsum('ik,ik', d,d)
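If the individual terms are needed (for example, as residuals for Levenberg-Marquardt) rather than their total, pass explicit output indices to keep i alive in the final contraction as well:
# one squared norm per point i instead of a single scalar
terms = np.einsum('ik,ik->i', d, d)
assert np.isclose(terms.sum(), result)  # consistent with the total above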

How can I avoid using a loop in this specific snippet of python code?

I have a specific Python issue that desperately needs to be sped up by avoiding the use of a loop, yet I am at a loss as to how to do this. I need to read in a FITS image, convert it to a NumPy array (roughly 2000 x 2000 elements in size), then for each element compute the statistics of a ring of elements around it.
As my code stands now, the statistics of the ring around each element are computed with a function using masks. This is fast but, of course, I call this function 2000 x 2000 times (the slow part).
I am relatively new to Python. I think that using the mask function is clever, but I cannot find a way around addressing each element individually. Many thanks for any help you can provide.
# First, the function computing the statistics within a ring around the central pixel:
# flux = image intensity at pixel (i,j)
# rad1, rad2 = inner and outer radii
# array = image array
def snr(flux, i, j, rad1, rad2, array):
    a, b = i, j
    nx, ny = array.shape
    y, x = np.ogrid[-a:nx-a, -b:ny-b]
    mask = (x*x + y*y >= rad1*rad1) & (x*x + y*y <= rad2*rad2)
    Nmask = np.count_nonzero(mask)
    noise = 0.6052697 * abs(Nmask * flux - sum(array[mask]))
    return noise
# Now, the call to snr for each pixel in the array data1:
frame1 = fits.open(in_frame, mode='readonly')  # read in the FITS file
data1 = frame1[ext].data  # convert to np array
ny, nx = data1.shape  # array dimensions
noise1 = np.zeros((ny, nx), float)  # empty array
r1 = 5  # inner radius (pixels)
r2 = 7  # outer radius (pixels)
# The function is fast, but calling it 2k x 2k times is not:
for j in range(ny):
    for i in range(nx):
        noise1[i,j] = snr(data1[i,j], i, j, r1, r2, data1)
The operation that you are trying to do can be expressed as an image convolution. Try something like this:
import numpy as np
import scipy.ndimage
from astropy.io import fits
def make_kernel(inner_radius, outer_radius):
    if inner_radius > outer_radius:
        raise ValueError
    x, y = np.ogrid[-outer_radius:outer_radius + 1, -outer_radius:outer_radius + 1]
    r2 = x * x + y * y
    kernel = (r2 >= inner_radius * inner_radius) & (r2 <= outer_radius * outer_radius)
    return kernel
in_frame = '<file path>'
ext = '...'
frame1 = fits.open(in_frame, mode='readonly')
data1 = frame1[ext].data
inner_radius = 5
outer_radius = 7
kernel = make_kernel(inner_radius, outer_radius)
n_kernel = np.count_nonzero(kernel)
conv = scipy.ndimage.convolve(data1, kernel, mode='constant')
noise1 = 0.6052697 * np.abs(n_kernel * data1 - conv)
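As a quick sanity check on a small synthetic array (away from the edges, where mode='constant' zero padding differs from the question's adaptive mask), the convolution reproduces the per-pixel loop:
rng = np.random.default_rng(0)
small = rng.random((16, 16))
k = make_kernel(2, 4)
conv_s = scipy.ndimage.convolve(small, k.astype(float), mode='constant')
noise_s = 0.6052697 * np.abs(np.count_nonzero(k) * small - conv_s)
# interior pixels of noise_s match snr(small[i, j], i, j, 2, 4, small)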
