I have the following two signals:
X0 = array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
4.97870684e-02, 4.82794999e-03])
I tried to convolve the two signals using np.convolve(X0, rbf_kernel, mode='same'), but the resulting convolution is shifted one step to the right, as shown below. The green, orange, and blue curves are X0, rbf_kernel, and the result of that command, respectively. I expected to see the maximum of the convolution where the two convolved signals are aligned (i.e., at point 5), but that did not happen.
The result is shifted because of the padding used for 'same' convolution. Convolution slides the flipped kernel over the input and takes a dot product at each step.
For a valid convolution the kernel must fully overlap the input at every stride, so the output size is n - m + 1 (n = len(input), m = len(kernel), assuming m <= n). For a same convolution the output size is max(m, n); to achieve that, we apply m - 1 zeros of padding to the input and then perform a valid convolution.
In your example n = m = 10, so the same-convolution output size is max(10, 10) = 10. That requires zero padding of m - 1 = 9, split as 5 zeros on the left and 4 on the right. The padded input (X0) looks like:
padded_x = [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] with length 19.
flipped kernel = [4.82794999e-03 4.97870684e-02 2.63597138e-01 7.16531311e-01
 1.00000000e+00 7.16531311e-01 2.63597138e-01 4.97870684e-02
 4.82794999e-03 2.40369476e-04]
So the convolution output is maximum at the 6th step (counting from 0).
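A quick check with np.convolve (the same call used in the question) confirms this:
import numpy as np

X0 = np.array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = np.array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
                       7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
                       4.97870684e-02, 4.82794999e-03])

out = np.convolve(X0, rbf_kernel, mode='same')
print(np.argmax(out))  # 6 -- one step to the right of the expected index 5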
Here's a sample SAME convolution code:
import numpy as np
import matplotlib.pyplot as plt

def same_conv(x, k):
    if len(k) > len(x):
        # consider the longer signal as x and the other as the kernel
        x, k = k, x
    n = x.shape[0]
    m = k.shape[0]
    padding = m - 1
    left_pad = int(np.ceil(padding / 2))
    right_pad = padding - left_pad
    x = np.pad(x, (left_pad, right_pad), 'constant')
    out = []
    # flip the kernel
    k = k[::-1]
    for i in range(n):
        out.append(np.dot(x[i: i + m], k))
    return np.array(out)

X0 = np.array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = np.array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
                       7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
                       4.97870684e-02, 4.82794999e-03])

convolved = same_conv(X0, rbf_kernel)

plt.plot(X0)
plt.plot(rbf_kernel)
plt.plot(convolved)
plt.show()
which results in the same shifted output as yours.
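As a side note (a sketch, not part of the answer above): one way to get the maximum at index 5 is to make the kernel odd-length with its peak exactly in the middle, since mode='same' then centres the kernel on each input sample:
import numpy as np

X0 = np.array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = np.array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
                       7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
                       4.97870684e-02, 4.82794999e-03])

# Drop the leading tail value so the kernel has odd length (9) and its
# maximum sits at the centre index 4.
odd_kernel = rbf_kernel[1:]
out = np.convolve(X0, odd_kernel, mode='same')
print(np.argmax(out))  # 5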
I have a scheme where I store a matrix with zeros on the diagonal as a vector. I want to later optimize over that vector, so I require gradient tracking.
My challenge is to reshape between the two.
For domain-specific reasons I want to keep the order of the data such that transposed elements of the W matrix end up next to each other in the vector form.
The size of the W matrix is subject to change, so I start by enumerating items in the top-left part of the matrix and continue outwards.
I have come up with two ways to do this. See code snippet.
import torch
import torch.sparse
w = torch.tensor([10,11,12,13,14,15],requires_grad=True,dtype=torch.float)
i = torch.LongTensor([
[0, 1,0],
[1, 0,1],
[0, 2,2],
[2, 0,3],
[1, 2,4],
[2, 1,5],
])
v = torch.FloatTensor([1, 1, 1 ,1,1,1 ])
reshaper = torch.sparse.FloatTensor(i.t(), v, torch.Size([3,3,6])).to_dense()
W_mat_with_reshaper = reshaper @ w
W_mat_directly = torch.tensor([
[0, w[0], w[2],],
[w[1], 0, w[4],],
[w[3], w[5], 0,],
])
print(W_mat_with_reshaper)
print(W_mat_directly)
and this gives output
tensor([[ 0., 10., 12.],
[11., 0., 14.],
[13., 15., 0.]], grad_fn=<UnsafeViewBackward>)
tensor([[ 0., 10., 12.],
[11., 0., 14.],
[13., 15., 0.]])
As you can see, the direct way to reshape the vector into a matrix does not have a grad function, but the multiply-with-a-reshaper-tensor does. Creating the reshaper tensor seems like a hassle, but on the other hand, manually writing out the matrix is also infeasible.
Is there a way to do arbitrary reshapes in PyTorch that keeps track of gradients?
Instead of constructing W_mat_directly from the elements of w, try assigning w into W:
W_mat_directly = torch.zeros((3, 3), dtype=w.dtype)
W_mat_directly[(0, 0, 1, 1, 2, 2), (1, 2, 0, 2, 0, 1)] = w
You'll get
tensor([[ 0., 10., 11.],
[12., 0., 13.],
[14., 15., 0.]], grad_fn=<IndexPutBackward>)
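A quick way to confirm that gradients actually flow through this assignment (a small sketch using the w from the question):
import torch

w = torch.tensor([10., 11., 12., 13., 14., 15.], requires_grad=True)

W_mat_directly = torch.zeros((3, 3), dtype=w.dtype)
W_mat_directly[(0, 0, 1, 1, 2, 2), (1, 2, 0, 2, 0, 1)] = w

# Backpropagate through the assembled matrix
W_mat_directly.sum().backward()
print(w.grad)  # tensor([1., 1., 1., 1., 1., 1.]) -- each element of w is used once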
You can use the facts that:
slicing preserves gradients while indexing doesn't;
concatenation preserves gradients while tensor creation doesn't.
tensor0 = torch.zeros(1)
W_mat_directly = torch.concatenate(
    [tensor0, w[0:1], w[2:3], w[1:2], tensor0, w[4:5], w[3:4], w[5:6], tensor0]
).reshape(3, 3)
With this approach you can apply arbitrary functions to the elements of the initial tensor w.
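For example (a minimal sketch, with squaring standing in for an arbitrary differentiable function), the graph is preserved end to end:
import torch

w = torch.tensor([10., 11., 12., 13., 14., 15.], requires_grad=True)
zero = torch.zeros(1)

# Square each entry before placing it; any differentiable function works here.
W = torch.concatenate(
    [zero, w[0:1]**2, w[2:3]**2,
     w[1:2]**2, zero, w[4:5]**2,
     w[3:4]**2, w[5:6]**2, zero]
).reshape(3, 3)

W.sum().backward()
print(w.grad)  # 2 * w, so gradients reach the original vector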
I have a very large matrix, so I don't want to sum by going through each row and column.
a = [[1,2,3],[3,4,5],[5,6,7]]

def neighbors(i, j, a):
    return [a[i][j-1], a[i][(j+1) % len(a[0])], a[i-1][j], a[(i+1) % len(a)][j]]

[[np.mean(neighbors(i, j, a)) for j in range(len(a[0]))] for i in range(len(a))]
This code works fine for a 3x3 or similarly small matrix, but for a large matrix like 2000 x 2000 it is not feasible. It also does not work if any value in the matrix is missing (NaN); if any of the neighbour values is NaN, that neighbour should be skipped when computing the average.
Shot #1
This assumes you are looking to get sliding-window average values in the input array with a 3 x 3 window, considering only the north, west, east, and south neighbourhood elements.
For such a case, signal.convolve2d with an appropriate kernel can be used. At the end, you need to divide those summations by the number of ones in the kernel, i.e. kernel.sum(), as only those elements contributed to the summations. Here's the implementation -
import numpy as np
from scipy import signal
# Inputs
a = [[1,2,3],[3,4,5],[5,6,7],[4,8,9]]
# Convert to numpy array
arr = np.asarray(a,float)
# Define kernel for convolution
kernel = np.array([[0,1,0],
[1,0,1],
[0,1,0]])
# Perform 2D convolution with input data and kernel
out = signal.convolve2d(arr, kernel, boundary='wrap', mode='same')/kernel.sum()
Shot #2
This makes the same assumptions as Shot #1, except that we are looking to find the average of the neighbourhood values only for the zero elements, with the intention of replacing them with those averages.
Approach #1: Here's one way to do it using a manual selective convolution approach -
import numpy as np
# Convert to numpy array
arr = np.asarray(a,float)
# Pad around the input array to take care of boundary conditions
arr_pad = np.lib.pad(arr, (1,1), 'wrap')
R,C = np.where(arr==0) # Row, column indices for zero elements in input array
N = arr_pad.shape[1] # Number of columns in the padded array
offset = np.array([-N, -1, 1, N])
idx = np.ravel_multi_index((R+1,C+1),arr_pad.shape)[:,None] + offset
arr_out = arr.copy()
arr_out[R,C] = arr_pad.ravel()[idx].sum(1)/4
Sample input, output -
In [587]: arr
Out[587]:
array([[ 4., 0., 3., 3., 3., 1., 3.],
[ 2., 4., 0., 0., 4., 2., 1.],
[ 0., 1., 1., 0., 1., 4., 3.],
[ 0., 3., 0., 2., 3., 0., 1.]])
In [588]: arr_out
Out[588]:
array([[ 4. , 3.5 , 3. , 3. , 3. , 1. , 3. ],
[ 2. , 4. , 2. , 1.75, 4. , 2. , 1. ],
[ 1.5 , 1. , 1. , 1. , 1. , 4. , 3. ],
[ 2. , 3. , 2.25, 2. , 3. , 2.25, 1. ]])
To take care of the boundary conditions, there are other options for padding. Look at numpy.pad for more info.
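For example (a small sketch, not from the answer above), 'wrap' treats the array as periodic while 'edge' repeats the border values:
import numpy as np

arr = np.array([[1., 2., 3.],
                [3., 4., 5.],
                [5., 6., 7.]])

print(np.pad(arr, 1, mode='wrap'))   # periodic boundary
print(np.pad(arr, 1, mode='edge'))   # replicate border values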
Approach #2: This is a modified version of the convolution-based approach listed earlier in Shot #1. It is the same as that earlier approach, except that at the end we selectively replace the zero elements with the convolution output. Here's the code -
import numpy as np
from scipy import signal
# Inputs
a = [[1,2,3],[3,4,5],[5,6,7],[4,8,9]]
# Convert to numpy array
arr = np.asarray(a,float)
# Define kernel for convolution
kernel = np.array([[0,1,0],
[1,0,1],
[0,1,0]])
# Perform 2D convolution with input data and kernel
conv_out = signal.convolve2d(arr, kernel, boundary='wrap', mode='same')/kernel.sum()
# Initialize output array as a copy of input array
arr_out = arr.copy()
# Setup a mask of zero elements in input array and
# replace those in output array with the convolution output
mask = arr==0
arr_out[mask] = conv_out[mask]
Remarks: Approach #1 is the preferred way when there are relatively few zero elements in the input array; otherwise go with Approach #2.
This is an appendix to the comments under @Divakar's answer (rather than an independent answer).
Out of curiosity I tried different 'pseudo' convolutions against the scipy convolution. The fastest one was the % (modulus) wrapping one, which surprised me: numpy evidently does something clever with its indexing here, and of course not having to pad saves time.
fn3 -> 9.5ms, fn1 -> 21ms, fn2 -> 232ms
import timeit
setup = """
import numpy as np
from scipy import signal
N = 1000
M = 750
P = 5 # i.e. small number -> bigger proportion of zeros
a = np.random.randint(0, P, M * N).reshape(M, N)
arr = np.asarray(a,float)"""
fn1 = """
arr_pad = np.lib.pad(arr, (1,1), 'wrap')
R,C = np.where(arr==0)
N = arr_pad.shape[1]
offset = np.array([-N, -1, 1, N])
idx = np.ravel_multi_index((R+1,C+1),arr_pad.shape)[:,None] + offset
arr[R,C] = arr_pad.ravel()[idx].sum(1)/4"""
fn2 = """
kernel = np.array([[0,1,0],
[1,0,1],
[0,1,0]])
conv_out = signal.convolve2d(arr, kernel, boundary='wrap', mode='same')/kernel.sum()
mask = arr == 0.0
arr[mask] = conv_out[mask]"""
fn3 = """
R,C = np.where(arr == 0.0)
arr[R, C] = (arr[(R-1)%M,C] + arr[R,(C-1)%N] + arr[R,(C+1)%N] + arr[(R+1)%M,C]) / 4.0
"""
print(timeit.timeit(fn1, setup, number = 100))
print(timeit.timeit(fn2, setup, number = 100))
print(timeit.timeit(fn3, setup, number = 100))
Using numpy and scipy.ndimage, you can apply a "footprint" that defines where you look for the neighbours of each element and apply a function to those neighbours:
import numpy as np
import scipy.ndimage as ndimage
# Getting neighbours horizontally and vertically,
# not diagonally
footprint = np.array([[0,1,0],
[1,0,1],
[0,1,0]])
a = [[1,2,3],[3,4,5],[5,6,7]]
# Need to make sure that dtype is float or the
# mean won't be calculated correctly
a_array = np.array(a, dtype=float)
# Can specify that you want neighbour selection to
# wrap around at the borders
ndimage.generic_filter(a_array, np.mean,
footprint=footprint, mode='wrap')
Out[36]:
array([[ 3.25, 3.5 , 3.75],
[ 3.75, 4. , 4.25],
[ 4.25, 4.5 , 4.75]])
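Since the question also asks to skip missing (NaN) neighbours, one option (a sketch, assuming NaNs mark the missing values) is to pass np.nanmean instead of np.mean, which ignores NaN entries when averaging:
import numpy as np
import scipy.ndimage as ndimage

footprint = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]])

a_array = np.array([[1., 2., 3.],
                    [3., np.nan, 5.],
                    [5., 6., 7.]])

# NaN neighbours are simply left out of each local average
print(ndimage.generic_filter(a_array, np.nanmean, footprint=footprint, mode='wrap'))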
I have a list of 100 values in python where each value in the list corresponds to an n-dimensional list.
For example, x = [[1, 2], [2, 3]] is a 2-d list.
I want to compute euclidean norm over all such points. Is there a standard method to do this?
I found this on scipy and this works.
If I have interpreted the question correctly, then you have a list of 100 n-dimensional vectors, and you would like a list of their (Euclidean) norms.
I think using numpy is easiest (and quickest!) here,
import numpy as np
a = np.array(x)
np.sqrt((a*a).sum(axis=1))
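Equivalently (an aside, not part of the original answer), numpy has a built-in for row-wise Euclidean norms:
import numpy as np

a = np.array([[1, 2], [2, 3]], dtype=float)
print(np.sqrt((a * a).sum(axis=1)))   # [2.23606798 3.60555128]
print(np.linalg.norm(a, axis=1))      # same result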
If the vectors do not have equal dimension, or if you want to avoid numpy, then perhaps,
[sum([i*i for i in vec])**0.5 for vec in x]
or,
import math
[math.sqrt(sum([i*i for i in vec])) for vec in x]
Edit: Not entirely sure what you were asking for. So, alternatively: it looks like you have a list, each element of which is an n-dimensional vector, and you want the Euclidean distance between each consecutive pair. With numpy (assuming n is fixed),
x = [ [1,2,3], [4,5,6], [8,9,10], [13,14,15] ] # 3D example.
import numpy as np
a = np.array(x)
sqrDiff = (a[:-1] - a[1:])**2
np.sqrt(sqrDiff.sum(axis=1))
where the last line returns,
array([ 5.19615242, 6.92820323, 8.66025404])
Try this code:
from math import sqrt

valueList = [[[1, 2], [2, 3]], [[2, 2], [3, 3]]]

def distance(valueList):
    resultList = []
    for (point1, point2) in valueList:
        resultList.append(sqrt(sum((x1 - x2) ** 2 for x1, x2 in zip(point1, point2))))
    return resultList

print(distance(valueList))
output is
[1.4142135623730951, 1.4142135623730951]
Here valueList contains 2 pairs of points, but it works just as well with 100.
You can do this to compute the euclidean norm of each row:
>>> a = np.arange(200.).reshape((100,2))
>>> a
array([[ 0., 1.],
[ 2., 3.],
[ 4., 5.],
[ 6., 7.],
[ 8., 9.],
[ 10., 11.],
...
>>> np.sum(a**2,axis=-1) ** .5
array([ 1. , 3.60555128, 6.40312424, 9.21954446,
12.04159458, 14.86606875, 17.69180601, 20.51828453,
23.34523506, 26.17250466, 29. , 31.82766093,
34.6554469 , 37.48332963, 40.31128874, 43.13930922,
45.96737974, 48.7954916 , 51.623638 , 54.45181356,
...
I'm new to python and having some problems finding the minimum and maximum values for a tuple of tuples. I need them to normalise my data. So, basically, I have a list that is a row of 13 numbers, each representing something. Each number makes a column in a list, and I need the max and min for each column. I tried indexing/iterating through but keep getting an error of
max_j = max(j)
TypeError: 'float' object is not iterable
any help would be appreciated!
The code is below (assuming data_set_tup is a tuple of tuples, e.g. ((1,3,4,5,6,7,...), (5,6,7,3,6,73,2,...), ..., (3,4,5,6,3,2,2,...))). I also want to make a new list using the normalised values.
normal_list = []
for i in data_set_tup:
    for j in i[1:]:  # first column doesn't need to be normalised
        max_j = max(j)
        min_j = min(j)
        normal_j = (j - min_j) / (max_j - min_j)
        normal_list.append(normal_j)
normal_tup = tuple(normal_list)
You can transpose rows to columns and vice versa with zip(*...). (Use list(zip(*...)) in Python 3)
cols = zip(*data_set_tup)
normal_cols = [cols[0]]  # first column doesn't need to be normalised
for j in cols[1:]:
    max_j = max(j)
    min_j = min(j)
    normal_cols.append(tuple((k - min_j) / (max_j - min_j) for k in j))
normal_list = zip(*normal_cols)
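A quick illustration with a small made-up tuple of tuples (written for Python 3, where the zip objects need to be materialised with list()):
data_set_tup = ((1, 3, 4), (5, 6, 7), (3, 4, 5))

cols = list(zip(*data_set_tup))
normal_cols = [cols[0]]  # first column left as-is
for j in cols[1:]:
    max_j, min_j = max(j), min(j)
    normal_cols.append(tuple((k - min_j) / (max_j - min_j) for k in j))

normal_list = list(zip(*normal_cols))
print(normal_list)
# [(1, 0.0, 0.0), (5, 1.0, 1.0), (3, 0.3333333333333333, 0.3333333333333333)]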
This really sounds like a job for the non-builtin numpy module, or maybe the pandas module, depending on your needs.
Adding an extra dependency on your application should not be done lightly, but if you do a lot of work on matrix-like data, then your code will likely be both faster and more readable if you use one of the above modules throughout your application.
I do not recommend converting a list of lists to a numpy array and back again just to get this single result -- it's better to use the pure Python method of Jannes' answer. Also, seeing that you're a Python beginner, numpy may be overkill right now. But I think your question deserves an answer pointing out that this is an option.
Here's a step-by-step console illustration of how this would work in numpy:
>>> import numpy as np
>>> a = np.array([[1,3,4,5,6],[5,6,7,3,6],[3,4,5,6,3]], dtype=float)
>>> a
array([[ 1., 3., 4., 5., 6.],
[ 5., 6., 7., 3., 6.],
[ 3., 4., 5., 6., 3.]])
>>> min = np.min(a, axis=0)
>>> min
array([ 1.,  3.,  4.,  3.,  3.])
>>> max = np.max(a, axis=0)
>>> max
array([ 5.,  6.,  7.,  6.,  6.])
>>> normalized = (a - min) / (max - min)
>>> normalized
array([[ 0. , 0. , 0. , 0.66666667, 1. ],
[ 1. , 1. , 1. , 0. , 1. ],
[ 0.5 , 0.33333333, 0.33333333, 1. , 0. ]])
So in actual code:
import numpy as np

def normalize_by_column(a):
    min = np.min(a, axis=0)
    max = np.max(a, axis=0)
    return (a - min) / (max - min)
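Called on the array from the console session above, this reproduces the normalized result shown there:
a = np.array([[1, 3, 4, 5, 6],
              [5, 6, 7, 3, 6],
              [3, 4, 5, 6, 3]], dtype=float)
print(normalize_by_column(a))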
We have nested_tuple = ((1, 2, 3), (4, 5, 6), (7, 8, 9)).
First of all we need to flatten it. The Pythonic way:
flat_tuple = [x for row in nested_tuple for x in row]
Output: [1, 2, 3, 4, 5, 6, 7, 8, 9] # it's a list
Convert it back to a tuple with tuple(flat_tuple); get the max value with max(flat_tuple) and the min value with min(flat_tuple).
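Putting those steps together (a minimal sketch):
nested_tuple = ((1, 2, 3), (4, 5, 6), (7, 8, 9))

flat_tuple = tuple(x for row in nested_tuple for x in row)
print(flat_tuple)       # (1, 2, 3, 4, 5, 6, 7, 8, 9)
print(max(flat_tuple))  # 9
print(min(flat_tuple))  # 1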