Issue in numpy array loop for central difference - python

Input array for reference:
u = array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 1., 1., 1., 0.],
[ 0., 0., 0., 0., 0.]])
Python function using a for loop:
import numpy as np

u = np.zeros((5, 5))
u[1:-1, 1:-1] = 1

def cds(n):
    for i in range(1, 4):
        for j in range(1, 4):
            u[i, j] = u[i, j+1] + u[i, j-1] + u[i+1, j] + u[i-1, j]
    return u
The function call cds(5) above produces the following result with the for loop:
u=array([[ 0., 0., 0., 0., 0.],
[ 0., 2., 4., 5., 0.],
[ 0., 4., 10., 16., 0.],
[ 0., 5., 16., 32., 0.],
[ 0., 0., 0., 0., 0.]])
The same function using NumPy slicing:
def cds(n):
    u[1:-1, 1:-1] = u[1:-1, 2:] + u[1:-1, :-2] + u[2:, 1:-1] + u[:-2, 1:-1]
    return u
But for the same input array u, cds(5) using NumPy produces a different result:
u=array([[ 0., 0., 0., 0., 0.],
[ 0., 2., 3., 2., 0.],
[ 0., 3., 4., 3., 0.],
[ 0., 2., 3., 2., 0.],
[ 0., 0., 0., 0., 0.]])
The reason for this difference is that the Python for loop updates each u[i, j] in the existing u array as it loops, while the NumPy version does not.
I want the NumPy version to give the same result as the for loop.
Is there any way to achieve this in NumPy? Please help me. Thanks in advance.
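For what it's worth, the two versions implement different update schemes: the slice assignment reads every neighbour from the array as it was before the sweep (a Jacobi-style update), while the loop reuses values already overwritten in the same sweep (Gauss-Seidel style). A minimal sketch of the distinction; because each u[i, j] depends on u[i, j-1] and u[i-1, j] written earlier in the same sweep, the loop's exact result cannot be expressed as a single slice assignment:

import numpy as np

u = np.zeros((5, 5))
u[1:-1, 1:-1] = 1

# Jacobi-style: equivalent to the slice version, reading from a snapshot.
u_old = u.copy()
u_jacobi = u.copy()
u_jacobi[1:-1, 1:-1] = (u_old[1:-1, 2:] + u_old[1:-1, :-2]
                        + u_old[2:, 1:-1] + u_old[:-2, 1:-1])

# Gauss-Seidel style: the original loop, which reuses freshly written
# values. To reproduce its output exactly, keep the sequential loop
# (or compile it, e.g. with numba, if speed is the concern).
for i in range(1, 4):
    for j in range(1, 4):
        u[i, j] = u[i, j+1] + u[i, j-1] + u[i+1, j] + u[i-1, j]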


Reordering block matrix

I have a multi-level indexed square matrix, that needs to be reordered.
Say I have a two-level indexing system x and y and the square matrix M has the shape (len(x)*len(y), len(x)*len(y)).
M is sorted by the x index and I want to transform it to be sorted by the y index. Here is an example to construct an arbitrary square matrix M:
import numpy as np

nx = 4  # equal to len(x), arbitrary
ny = 3  # equal to len(y), arbitrary
A = np.ones((ny, ny))      # arbitrary
B = np.ones((ny, ny)) * 2  # arbitrary
C = np.ones((ny, ny)) * 3  # arbitrary
D = np.ones((ny, ny)) * 4  # arbitrary
E = np.arange(ny * ny).reshape(ny, ny)  # arbitrary
Z = np.zeros((ny, ny))
M = np.block([[A, Z, E, Z],
              [Z, B, Z, Z],
              [Z, Z, C, Z],
              [Z, Z, Z, D]])
and the resulting matrix M may look like this
array([[1., 1., 1., 0., 0., 0., 0., 1., 2., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 3., 4., 5., 0., 0., 0.],
[1., 1., 1., 0., 0., 0., 6., 7., 8., 0., 0., 0.],
[0., 0., 0., 2., 2., 2., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 2., 2., 2., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 2., 2., 2., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 3., 3., 3., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 3., 3., 3., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 3., 3., 3., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 4., 4., 4.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 4., 4., 4.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 4., 4., 4.]])
Now I want to transform M into M_transformed, which looks like this:
array([[1., 0., 0., 0., 1., 0., 1., 0., 1., 0., 2., 0.],
[0., 2., 0., 0., 0., 2., 0., 0., 0., 2., 0., 0.],
[0., 0., 3., 0., 0., 0., 3., 0., 0., 0., 3., 0.],
[0., 0., 0., 4., 0., 0., 0., 4., 0., 0., 0., 4.],
[1., 0., 3., 0., 1., 0., 4., 0., 1., 0., 5., 0.],
[0., 2., 0., 0., 0., 2., 0., 0., 0., 2., 0., 0.],
[0., 0., 3., 0., 0., 0., 3., 0., 0., 0., 3., 0.],
[0., 0., 0., 4., 0., 0., 0., 4., 0., 0., 0., 4.],
[1., 0., 6., 0., 1., 0., 7., 0., 1., 0., 8., 0.],
[0., 2., 0., 0., 0., 2., 0., 0., 0., 2., 0., 0.],
[0., 0., 3., 0., 0., 0., 3., 0., 0., 0., 3., 0.],
[0., 0., 0., 4., 0., 0., 0., 4., 0., 0., 0., 4.]])
I currently solve this with a very elementary approach, four nested for loops, and I believe there must be a more straightforward way (such as a library function), since M can grow very large depending on the lengths of x and y (nx and ny):
M_transformed = np.zeros(M.shape)
for i in range(nx):
    for j in range(nx):
        for k in range(ny):
            for l in range(ny):
                M_transformed[k * nx + i, l * nx + j] = M[i * ny + k, j * ny + l]
I did it with no calculations, just borrowing ideas from how maxpooling is implemented and experimenting a lot with axis swaps. This is my solution:
w = (3, 3)  # block size, i.e. (ny, ny)
initial_shape = M.shape
# view M as a (nx, ny, nx, ny) array of blocks: axes (i, k, j, l)
M = M.reshape((M.shape[0] // w[0], w[0], M.shape[1] // w[1], w[1]))
M = M.swapaxes(0, 1)  # -> (k, i, j, l)
M = M.swapaxes(2, 3)  # -> (k, i, l, j)
M = M.reshape(initial_shape)
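For reference, the same axis-swap idea can be packaged as a small function for arbitrary nx and ny (a sketch; reorder_blocks is a hypothetical name, not from the original post):

import numpy as np

def reorder_blocks(M, nx, ny):
    """Turn an x-major (nx*ny, nx*ny) matrix into a y-major one.

    Equivalent to M_transformed[k*nx + i, l*nx + j] = M[i*ny + k, j*ny + l].
    """
    return (M.reshape(nx, ny, nx, ny)   # axes (i, k, j, l)
             .transpose(1, 0, 3, 2)     # -> (k, i, l, j)
             .reshape(nx * ny, nx * ny))

Applied to the original M from the question, np.allclose(reorder_blocks(M, nx, ny), M_transformed) should hold.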

Element wise dot product of matrices and vectors [duplicate]

This question already has an answer here:
python: Multiply slice i of a matrix stack by column i of a matrix efficiently (1 answer)
There are really similar questions here, here, here, but I don't really understand how to apply them to my case precisely.
I have an array of matrices and an array of vectors, and I need their element-wise dot product. Illustration:
In [1]: matrix1 = np.eye(5)
In [2]: matrix2 = np.eye(5) * 5
In [3]: matrices = np.array((matrix1,matrix2))
In [4]: matrices
Out[4]:
array([[[ 1., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0.],
[ 0., 0., 1., 0., 0.],
[ 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 1.]],
[[ 5., 0., 0., 0., 0.],
[ 0., 5., 0., 0., 0.],
[ 0., 0., 5., 0., 0.],
[ 0., 0., 0., 5., 0.],
[ 0., 0., 0., 0., 5.]]])
In [5]: vectors = np.ones((5,2))
In [6]: vectors
Out[6]:
array([[ 1., 1.],
[ 1., 1.],
[ 1., 1.],
[ 1., 1.],
[ 1., 1.]])
In [9]: np.array([m @ v for m, v in zip(matrices, vectors.T)]).T
Out[9]:
array([[ 1., 5.],
[ 1., 5.],
[ 1., 5.],
[ 1., 5.],
[ 1., 5.]])
This last line is my desired output. Unfortunately it is very inefficient; for instance, matrices @ vectors, which computes unwanted dot products due to broadcasting (if I understand correctly, it returns the first matrix dotted with both vectors and the second matrix dotted with both vectors), is actually faster.
I guess np.einsum or np.tensordot might be helpful here, but all my attempts have failed:
In [30]: np.einsum("i,j", matrices, vectors)
ValueError: operand has more dimensions than subscripts given in einstein sum, but no '...' ellipsis provided to broadcast the extra dimensions.
In [34]: np.tensordot(matrices, vectors, axes=(0,1))
Out[34]:
array([[[ 6., 6., 6., 6., 6.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]],
[[ 0., 0., 0., 0., 0.],
[ 6., 6., 6., 6., 6.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]],
[[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 6., 6., 6., 6., 6.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]],
[[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 6., 6., 6., 6., 6.],
[ 0., 0., 0., 0., 0.]],
[[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 6., 6., 6., 6., 6.]]])
NB: my real-case scenario uses more complicated matrices than matrix1 and matrix2.
With np.einsum, you might use:
np.einsum("ijk,ki->ji", matrices, vectors)
#array([[ 1., 5.],
# [ 1., 5.],
# [ 1., 5.],
# [ 1., 5.],
# [ 1., 5.]])
You can use @ as follows:
matrices @ vectors.T[..., None]
# array([[[ 1.],
# [ 1.],
# [ 1.],
# [ 1.],
# [ 1.]],
# [[ 5.],
# [ 5.],
# [ 5.],
# [ 5.],
# [ 5.]]])
As we can see, it computes the right values but arranges them wrong. Therefore:
(matrices @ vectors.T[..., None]).squeeze().T
# array([[ 1., 5.],
# [ 1., 5.],
# [ 1., 5.],
# [ 1., 5.],
# [ 1., 5.]])
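Both answers produce the same array; a quick sanity check using the question's own setup:

import numpy as np

matrix1 = np.eye(5)
matrix2 = np.eye(5) * 5
matrices = np.array((matrix1, matrix2))
vectors = np.ones((5, 2))

out_einsum = np.einsum("ijk,ki->ji", matrices, vectors)
out_matmul = (matrices @ vectors.T[..., None]).squeeze().T
assert np.allclose(out_einsum, out_matmul)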

Most Efficient Way to Create a Numpy Array Based on Values in Multiple Cells of Another Array

I have an application where I have to process thousands of 2D arrays. Each cell of the processed array is based on half of a king's-move neighborhood around the corresponding cell in the original array. I'm trying to avoid loops if I can due to speed considerations. So, here is an example NumPy array:
array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 5., 5., 5., 5., 0., 0., 0.],
[ 0., 1., 5., 5., 1., 1., 1., 1., 1., 0., 0.],
[ 5., 5., 5., 5., 1., 5., 1., 1., 1., 1., 0.],
[ 1., 1., 1., 5., 1., 1., 5., 5., 1., 1., 0.],
[ 5., 1., 5., 1., 1., 5., 5., 5., 1., 5., 0.],
[ 0., 5., 1., 5., 1., 1., 1., 1., 1., 0., 0.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 0., 0., 0.],
[ 0., 0., 0., 1., 5., 5., 5., 0., 0., 0., 0.]])
At each element, I want the sum of the cell directly above it, the upper-right diagonal element, the cell to its immediate right, and the lower-right diagonal element. So, for the element at [6][0] I would sum 1 + 1 + 1 + 5.
Of course, I also have to handle the edge cases where one of the 4 cells is missing. I have started with padded zeros on the top and far right to manage some of that, but I'm stuck right now. Any advice would be much appreciated!
What you're doing can be viewed as performing a convolution with a particular convolution kernel. Here's a solution using SciPy's convolve2d function:
import numpy as np
import scipy as sp
import scipy.signal

x = np.random.randint(5, size=(10, 10))

# 1s mark the neighbours to sum: above, upper-right, right, lower-right
kernel = np.array([[0, 1, 1],
                   [0, 0, 1],
                   [0, 0, 1]])
# convolve2d flips the kernel, so pre-flip it to get a correlation
kernel = np.fliplr(np.flipud(kernel))
check = sp.signal.convolve2d(x, kernel, mode='same')
print(x)
print(check)
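Equivalently, scipy.ndimage.correlate applies the kernel without flipping it, so the fliplr/flipud step can be dropped (a sketch, reusing x from the answer above):

from scipy import ndimage

kernel = np.array([[0, 1, 1],
                   [0, 0, 1],
                   [0, 0, 1]])
# mode='constant' with cval=0 zero-pads the borders, matching
# convolve2d's default zero-fill boundary.
check2 = ndimage.correlate(x, kernel, mode='constant', cval=0)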

Is a TensorSharedVariable in theano initialized twice in a function?

In theano, once a shared variable is initialized in one function, it will never be initialized again even if the function is called repeatedly, am I right?
from collections import OrderedDict

import numpy as np
import theano
import theano.tensor as T

def sgd_updates_adadelta(params, cost, rho=0.95, epsilon=1e-6,
                         norm_lim=9, word_vec_name='Words'):
    updates = OrderedDict({})
    exp_sqr_grads = OrderedDict({})
    exp_sqr_ups = OrderedDict({})
    gparams = []
    for param in params:
        empty = np.zeros_like(param.get_value())
        exp_sqr_grads[param] = theano.shared(value=as_floatX(empty),
                                             name="exp_grad_%s" % param.name)
        gp = T.grad(cost, param)
        exp_sqr_ups[param] = theano.shared(value=as_floatX(empty),
                                           name="exp_grad_%s" % param.name)
        gparams.append(gp)
In the code above, will the exp_sqr_grads and exp_sqr_ups variables be initialized with zeros again when the sgd_updates_adadelta function is called a second time?
Shared variables are not static, if that is what you mean. My understanding of your code:
import numpy as np
import theano
import theano.tensor as T

global_list = []

def f():
    a = np.zeros((4, 5), dtype=theano.config.floatX)
    b = theano.shared(a)
    global_list.append(b)
Copy and paste this into an IPython session and then try:
f()
f()
print(global_list)
The list will contain two items. They are not the same object:
In [9]: global_list[0] is global_list[1]
Out[9]: False
And they don't refer to the same memory. Run:
global_list[0].set_value(np.arange(20).reshape(4, 5).astype(theano.config.floatX))
Then
In [20]: global_list[0].get_value()
Out[20]:
array([[ 0., 1., 2., 3., 4.],
[ 5., 6., 7., 8., 9.],
[ 10., 11., 12., 13., 14.],
[ 15., 16., 17., 18., 19.]])
In [21]: global_list[1].get_value()
Out[21]:
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])
Having established that initializing shared variables several times leads to different variables, here is how to update a shared variable using a function. We re-use the established shared variables:
s = global_list[1]
x = T.scalar(dtype=theano.config.floatX)
g = theano.function([x], [s], updates=[(s, T.inc_subtensor(s[0, 0], x))])
g now increments the top left value of s by x at every call:
In [7]: s.get_value()
Out[7]:
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])
In [8]: g(1)
Out[8]:
[array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])]
In [9]: s.get_value()
Out[9]:
array([[ 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])
In [10]: g(10)
Out[10]:
[array([[ 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])]
In [11]: s.get_value()
Out[11]:
array([[ 11., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.]])
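Applied to the question's code: each call to sgd_updates_adadelta therefore builds fresh zero-initialized shared variables. If the accumulators should be created only once and re-used across calls, construct them outside the function and pass them in, e.g. (a sketch with a hypothetical helper, not from the original post):

import numpy as np
import theano

def make_accumulator(param):
    # Created once per parameter; pass the result into the update
    # function instead of re-creating shared variables inside it.
    empty = np.zeros_like(param.get_value()).astype(theano.config.floatX)
    return theano.shared(value=empty, name="exp_grad_%s" % param.name)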

reduce() hstack python

I am trying to use the reduce() function to create a function hstackm() that horizontally stacks multiple arrays, like hstack(). As a simple example, let's say
>>> M = eye(4)
>>> M
array([[ 1., 0., 0., 0.],
[ 0., 1., 0., 0.],
[ 0., 0., 1., 0.],
[ 0., 0., 0., 1.]])
>>> hstack([M, M])
array([[ 1., 0., 0., 0., 1., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 1., 0., 0.],
[ 0., 0., 1., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 1., 0., 0., 0., 1.]])
This works as I want. Now I define
>>> hstackm = lambda *args: reduce(hstack, args)
and try to reproduce the hstack() call from the previous case:
>>> hstackm([M, M])
[array([[ 1., 0., 0., 0.],
[ 0., 1., 0., 0.],
[ 0., 0., 1., 0.],
[ 0., 0., 0., 1.]]),
array([[ 1., 0., 0., 0.],
[ 0., 1., 0., 0.],
[ 0., 0., 1., 0.],
[ 0., 0., 0., 1.]])]
Which is incorrect. How do I define hstackm() to obtain the proper output?
My final objective is to create an hstackm() function to stack SPARSE matrices, if possible. Something like
hstackm = lambda *args: reduce(sparse.hstack, args)
The *args would be csr_matrix or lil_matrix objects.
Thank you.
In [16]: hstackm = lambda args: reduce(lambda x, y: hstack((x, y)), args)
In [17]: hstackm([M,M])
Out[17]:
array([[ 1., 0., 0., 0., 1., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 1., 0., 0.],
[ 0., 0., 1., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 1., 0., 0., 0., 1.]])
Your function hstack takes one parameter, a list of matrices. reduce() calls it with two parameters instead, each a matrix.
Change your hstack function to accept an arbitrary number of arguments instead:
def hstack(*matrices):
    ...
instead of hstack(matrices), then call it as hstack(M, M).
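For the sparse objective, note that scipy.sparse.hstack, like np.hstack, takes a sequence of matrices, so the same pairwise-tuple trick from In [16] carries over (a sketch):

from functools import reduce

from scipy import sparse

hstackm = lambda args: reduce(lambda x, y: sparse.hstack((x, y)), args)

M = sparse.eye(4, format='csr')
result = hstackm([M, M])  # a 4x8 sparse matrix
print(result.toarray())
# Note: sparse.hstack([M, M]) alone also works, since it accepts a list.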
