Weighted sum of an input inside the network - python

I have a network with multiple inputs. I split out the first 10 inputs, calculate their weighted sum, and then concatenate the result with the full input:
first = Lambda(lambda z: z[:, 0:10])(d_inputs)
wsum_first = Lambda(calcWSumF)(first)
d_input = concatenate([d_inputs, wsum_first], axis=-1)
with the function defined as:
w_vec = K.constant(np.array([range(10)] * 64).reshape(10, 64))  # batch size is 64

def calcWSumF(x):
    y = K.dot(w_vec, x)
    y = K.expand_dims(y, -1)
    return y
I want a constant vector to be used to calculate the weighted sum of the first part of the input. The concatenation doesn't work because the shapes don't match. How can I implement this correctly?

You can write this much more cleanly using K.sum and a single vector of coefficients. Further, there is no need to fix the batch size (it can be any number):
import numpy as np
from keras import backend as K
from keras.layers import Input, Lambda, concatenate
from keras.models import Model

def calcWSumF(x, idx):
    w_vec = K.constant(np.arange(idx))
    y = K.sum(x[:, 0:idx] * w_vec, axis=-1, keepdims=True)
    return y

d_inputs = Input((15,))
wsum_first = Lambda(calcWSumF, arguments={'idx': 10})(d_inputs)
d_input = concatenate([d_inputs, wsum_first], axis=-1)
model = Model(d_inputs, d_input)
model.predict(np.arange(15).reshape(1, 15))
# output:
array([[  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,
         11.,  12.,  13.,  14., 285.]], dtype=float32)
# Note: 0*0 + 1*1 + 2*2 + ... + 9*9 = 285
Note that, to make it more general, we have added another argument (idx) to the lambda function which specifies how many of the elements from the beginning we would like to consider.
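If you prefer to avoid a Lambda for the sum itself, the same fixed weighted sum can also be expressed as a frozen Dense layer. This is a sketch, not part of the original answer, assuming the same 15-dimensional input:
import numpy as np
from keras.layers import Input, Dense, Lambda, concatenate
from keras.models import Model

d_inputs = Input((15,))
first = Lambda(lambda z: z[:, :10])(d_inputs)  # slice off the first 10 features
# non-trainable Dense layer whose fixed kernel holds the coefficients 0..9
wsum_first = Dense(1, use_bias=False, trainable=False,
                   weights=[np.arange(10, dtype='float32').reshape(10, 1)])(first)
d_input = concatenate([d_inputs, wsum_first], axis=-1)
model = Model(d_inputs, d_input)
Since the layer is not trainable, the coefficients stay constant during training, just like the K.constant vector.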

Related

How can I design a grid with n dimensions in NumPy? A solution for two dimensions is available, but I want n dimensions

I need to generate a grid for an array with a general/variable number of dimensions. In the 2D case, I know I can use mgrid:
import numpy as np

# Some 2D data
N = 1000
x = np.random.uniform(0., 1., N)
y = np.random.uniform(10., 100., N)
xmin, xmax, ymin, ymax = x.min(), x.max(), y.min(), y.max()

# Obtain 2D grid
xy_grid = np.mgrid[xmin:xmax:10j, ymin:ymax:10j]
How can I scale this approach when the number of dimensions is variable? I.e., my data could be (x, y) or (x, y, z) or (x, y, z, q), etc.
The naive approach of:
# Md_data.shape = (M, N), for M dimensions
dmin, dmax = np.amin(Md_data, axis=1), np.amax(Md_data, axis=1)
Md_grid = np.mgrid[dmin:dmax:10j]
does not work.
meshgrid as a function lets us use '*' unpacking:
In [412]: dmin, dmax = np.array([1, 2, 3]), np.array([5, 6, 7])
In [423]: arr = np.linspace(dmin, dmax, 5)
In [424]: arr
Out[424]:
array([[1., 2., 3.],
       [2., 3., 4.],
       [3., 4., 5.],
       [4., 5., 6.],
       [5., 6., 7.]])
I'm using sparse for a more compact display.
In [425]: atup = np.meshgrid(*arr.T, indexing='ij', sparse=True)
In [426]: atup
Out[426]:
[array([[[1.]],
        [[2.]],
        [[3.]],
        [[4.]],
        [[5.]]]),
 array([[[2.],
         [3.],
         [4.],
         [5.],
         [6.]]]),
 array([[[3., 4., 5., 6., 7.]]])]
This ogrid expression does the same thing:
np.ogrid[1:5:5j, 2:6:5j, 3:7:5j]
Come to think of it, so does
np.ix_(arr[:, 0], arr[:, 1], arr[:, 2])
though it doesn't have a non-sparse alternative.
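Putting this together, a general n-dimensional helper might look like the following sketch (nd_grid is my name for it; num=10 mirrors the 10j step in the question, and np.linspace with array endpoints needs NumPy >= 1.16):
import numpy as np

def nd_grid(data, num=10):
    # data has shape (M, N): M dimensions, N samples
    dmin, dmax = np.amin(data, axis=1), np.amax(data, axis=1)
    edges = np.linspace(dmin, dmax, num)   # shape (num, M), one column per dimension
    return np.meshgrid(*edges.T, indexing='ij', sparse=True)

# usage with the 2D data from the question:
# grids = nd_grid(np.vstack([x, y]))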

How do I average an irregularly spaced x & y coordinate tensor into a grid with a specific cell size?

I have an algorithm that generates a tensor of irregularly spaced x and y coordinates (e.g. torch.Size([3600, 2])), and I need to average the points into grid cells of a specific size (e.g. 8 by 8). The resulting grid needs to be either an array or a tensor.
It's not required, but I would also like to be able to determine whether any of the resulting cells have fewer than a specified number of points in them.
For example, I can graph the tensor using matplotlib's plt.scatter (scatter plot omitted). In that example, 100,000 points exist, but the number of points can sometimes be in the tens of millions.
I've tried histogram approaches, and most of them use a specific number of cells rather than a specific cell size. Matplotlib can seemingly do it in a graph, but that doesn't help me get an array or tensor.
Edit:
This code might work, if it can be made to work properly.
def grid_torch(x_coords, y_coords, grid_size=(8, 8), x_extent=(0., 1.), y_extent=(0., 1.)):
    x_coords = ((x_coords - x_extent[0]) / (x_extent[1] - x_extent[0])) * grid_size[0]
    y_coords = ((y_coords - y_extent[0]) / (y_extent[1] - y_extent[0])) * grid_size[1]
    x_list = []
    for x in range(grid_size[0]):
        x = torch.ones_like(x_coords) * x
        y_list = []
        for y in range(grid_size[1]):
            y = torch.ones_like(y_coords) * y
            in_bounds_x = torch.logical_and(x <= x_coords, x_coords <= x + 1)
            in_bounds_y = torch.logical_and(y <= y_coords, y_coords <= y + 1)
            in_bounds = torch.logical_and(in_bounds_x, in_bounds_y)
            in_bounds_indices = torch.where(in_bounds)
            print(in_bounds_indices)
            y_list.append(in_bounds_indices)
        x_list.append(torch.stack(y_list))
    return torch.stack(x_list)

out = grid_torch(xy_tensor[:, 0], xy_tensor[:, 1])
print(out.shape)
def create_grid(grid_layout, activ, grid_size=(8, 8), min_density=8):
    cells = []
    for x in range(grid_size[0]):
        for y in range(grid_size[1]):
            indices = grid_layout[x, y]
            if len(indices) > min_density:
                average_activation = torch.mean(activ[indices])
                cells.append((average_activation, x, y))
                print(average_activation, x, y)
    return torch.stack(cells)

grid_test = create_grid(out, xy_tensor, grid_size=(8, 8))
I think this code would give you a good starting point.
import numpy as np
import torch

def grid_torch(x_coords, y_coords, grid_size=(8, 8), x_extent=(0., 1.), y_extent=(0., 1.)):
    # This part converts coordinates to bin numbers (like (2, 5), (7, 7), etc.)
    x_bin = (((x_coords - x_extent[0]) / (x_extent[1] - x_extent[0])) * grid_size[0]).int()
    y_bin = (((y_coords - y_extent[0]) / (y_extent[1] - y_extent[0])) * grid_size[1]).int()

    counts = torch.zeros(grid_size)
    means = torch.zeros(list(grid_size) + [2])

    for x in range(grid_size[0]):
        for y in range(grid_size[1]):
            # these tensors are 1 where (x_bin == x and y_bin == y), 0 elsewhere
            x_where = 1 * (x_bin == x)
            y_where = 1 * (y_bin == y)
            p_where = (x_where * y_where)
            cnt = p_where.sum()
            counts[x, y] = cnt

            # we'll average the x and y coords separately;
            # you can embed min_density logic here.
            if cnt > 0:
                means[x, y, 0] = (x_coords * p_where).sum() / p_where.sum()
                means[x, y, 1] = (y_coords * p_where).sum() / p_where.sum()
    return counts, means

# Generate sample points
points = torch.tensor(np.concatenate([
    np.random.normal(loc=0.2, scale=0.1, size=(1000, 2)),
    np.random.normal(loc=0.6, scale=0.1, size=(1000, 2))
]).clip(0, 1)).float()

# plt.scatter(points[:, 0], points[:, 1])
# plt.grid()

counts, means = grid_torch(points[:, 0], points[:, 1])
counts
# output:
tensor([[ 47., 114.,  75.,  10.,   0.,   0.,   0.,   0.],
        [102., 204., 141.,  27.,   0.,   0.,   0.,   0.],
        [ 60., 101.,  74.,  16.,   7.,   4.,   1.,   0.],
        [  5.,  17.,   9.,  23.,  72.,  51.,  10.,   0.],
        [  1.,   1.,   4.,  54., 186., 141.,  28.,   3.],
        [  0.,   0.,   3.,  47., 154., 117.,  14.,   0.],
        [  0.,   0.,   0.,   9.,  37.,  24.,   4.,   0.],
        [  0.,   0.,   0.,   2.,   0.,   1.,   0.,   0.]])
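If the double loop becomes a bottleneck at tens of millions of points, a vectorized variant is possible using bincount and index_add_. This is a sketch, assuming the coordinates are already normalized to [0, 1] as above (grid_mean_vectorized is a hypothetical name):
import torch

def grid_mean_vectorized(x_coords, y_coords, grid_size=(8, 8)):
    nx, ny = grid_size
    # convert coordinates to integer bin indices, clamped onto the grid
    x_bin = (x_coords * nx).long().clamp(0, nx - 1)
    y_bin = (y_coords * ny).long().clamp(0, ny - 1)
    flat = x_bin * ny + y_bin  # flatten the 2D bin index to 1D
    counts = torch.bincount(flat, minlength=nx * ny).reshape(nx, ny)
    # sum the x and y coordinates per bin, then divide by the counts
    sums = torch.zeros(nx * ny, 2)
    sums.index_add_(0, flat, torch.stack([x_coords, y_coords], dim=1))
    means = sums.reshape(nx, ny, 2) / counts.clamp(min=1).unsqueeze(-1)
    return counts.float(), means
Cells with fewer than a desired number of points can then be masked with counts < min_density.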

How to set the measurement matrix of the OpenCV Kalman filter depending on the measurement dimensions [OpenCV + Python]

I am working on a tracking application where I use the OpenCV Kalman filter to validate my current measurement of the position. I use the code from this question.
First, I calculate the velocity (v) and acceleration (a) of my moving object at (x, y). These 4 values are used as my Kalman state. I initialize the Kalman filter as follows
(np.eye(n, m) generates the identity matrix with dimensions n x m):
def initKalman(init_state, fps):
    kalman = cv.KalmanFilter(4, 4, 2)
    kalman.transitionMatrix = np.array([[1., 0., 1/fps, 0.],
                                        [0., 1., 0., 1/fps],
                                        [0., 0., 1., 0.],
                                        [0., 0., 0., 1.]])
    kalman.measurementMatrix = 1. * np.eye(2, 4)
    kalman.measurementNoiseCov = 1e-3 * np.eye(2, 2)
    kalman.processNoiseCov = 1e-5 * np.eye(4, 4)
    kalman.errorCovPost = 1e-1 * np.eye(4, 4)
    kalman.statePost = init_state.reshape(4, 1)
    return kalman

kinematics = np.array((velocity, acceleration), dtype=np.float32)
kalman_state = np.concatenate((point, kinematics))
kalman_filter = initKalman(kalman_state, fps=15)
During operation, the correction is done as follows:
def correct_kalman(kalman, state):
    measurement = (np.dot(kalman.measurementNoiseCov, np.random.randn(2, 1))).reshape(-1)
    measurement = np.dot(kalman.measurementMatrix, state) + measurement
    return kalman.correct(measurement)

kinematics = np.array((velocity, acceleration), dtype=np.float32)
kalman_state = np.concatenate((point, kinematics))
correct_kalman(kalman_filter, kalman_state)
It seems to work, which is great, but I'm trying to understand why. In my understanding it shouldn't work, because in correct_kalman() the velocity and acceleration are omitted in this line:
measurement = np.dot(kalman.measurementMatrix, state) + measurement
because the measurement matrix is just 2 x 4. (In fact, if I set acceleration and velocity to 0, the behavior of the filter does not change.)
For example, take kalman_state = np.array([10., 20., 25., 75.]) and calculate the dot product with measurementMatrix = 1. * np.eye(2, 4); then measurement = np.dot(kalman.measurementMatrix, kalman_state) is just
>>> measurement
array([10., 20.])
v and a are gone.
So I changed my measurementMatrix and my measurementNoiseCov to 4 x 4 dimensionality and adjusted my correction accordingly by using np.random.randn(4, 1), but now the Kalman filter is way too sluggish and falls behind the measurements.
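For reference, the 4 x 4 variant just described looks roughly like this (a sketch reconstructed from the description above, not the exact code that was run):
kalman.measurementMatrix = 1. * np.eye(4, 4)
kalman.measurementNoiseCov = 1e-3 * np.eye(4, 4)

def correct_kalman(kalman, state):
    # measurement noise and measurement are now both 4-dimensional
    measurement = (np.dot(kalman.measurementNoiseCov, np.random.randn(4, 1))).reshape(-1)
    measurement = np.dot(kalman.measurementMatrix, state) + measurement
    return kalman.correct(measurement)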
Why is the first approach working if v and a are not used?
How can I change the measurement matrix in a more targeted way than just iteratively adjusting the values?
Thanks for the help!

Keeping gradients while rearranging data in a tensor, with pytorch

I have a scheme where I store a matrix with zeros on the diagonal as a vector. I want to later optimize over that vector, so I require gradient tracking.
My challenge is to reshape between the two.
I want, for domain-specific reasons, to keep the order of the data in the matrix such that transposed elements of the W matrix sit next to each other in the vector form.
The size of the W matrix is subject to change, so I start by enumerating items in the top-left part of the matrix and continue outwards.
I have come up with two ways to do this. See code snippet.
import torch
import torch.sparse

w = torch.tensor([10, 11, 12, 13, 14, 15], requires_grad=True, dtype=torch.float)

i = torch.LongTensor([
    [0, 1, 0],
    [1, 0, 1],
    [0, 2, 2],
    [2, 0, 3],
    [1, 2, 4],
    [2, 1, 5],
])
v = torch.FloatTensor([1, 1, 1, 1, 1, 1])

reshaper = torch.sparse.FloatTensor(i.t(), v, torch.Size([3, 3, 6])).to_dense()
W_mat_with_reshaper = reshaper @ w

W_mat_directly = torch.tensor([
    [0, w[0], w[2]],
    [w[1], 0, w[4]],
    [w[3], w[5], 0],
])

print(W_mat_with_reshaper)
print(W_mat_directly)
and this gives the output
tensor([[ 0., 10., 12.],
        [11.,  0., 14.],
        [13., 15.,  0.]], grad_fn=<UnsafeViewBackward>)
tensor([[ 0., 10., 12.],
        [11.,  0., 14.],
        [13., 15.,  0.]])
As you can see, the direct way to reshape the vector into a matrix does not have a grad function, but the multiply-with-a-reshaper-tensor does. Creating the reshaper tensor seems like it will be a hassle, but on the other hand, manually writing out the matrix is also infeasible.
Is there a way to do arbitrary reshapes in PyTorch that keeps track of gradients?
Instead of constructing W_mat_directly from the elements of w, try assigning w into W:
W_mat_directly = torch.zeros((3, 3), dtype=w.dtype)
W_mat_directly[(0, 1, 0, 2, 1, 2), (1, 0, 2, 0, 2, 1)] = w
You'll get
tensor([[ 0., 10., 12.],
        [11.,  0., 14.],
        [13., 15.,  0.]], grad_fn=<IndexPutBackward>)
You can use the facts that:
slicing preserves gradients, while building a brand-new tensor from individual elements doesn't;
concatenation preserves gradients, while tensor creation doesn't.
tensor0 = torch.zeros(1)
W_mat_directly = torch.cat(
    [tensor0, w[0:1], w[2:3], w[1:2], tensor0, w[4:5], w[3:4], w[5:6], tensor0]
).reshape(3, 3)
With this approach you can apply arbitrary functions to the elements of the initial tensor w.
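Since the size of W is subject to change, the index lists from the first answer can also be generated programmatically, following the enumeration order described in the question (a sketch; vec_to_matrix is a hypothetical helper name):
import torch

def vec_to_matrix(w, n):
    # pairs in the question's order: (0,1),(1,0), (0,2),(2,0), (1,2),(2,1), ...
    rows, cols = [], []
    for j in range(1, n):
        for i in range(j):
            rows += [i, j]
            cols += [j, i]
    W = torch.zeros((n, n), dtype=w.dtype)
    W[rows, cols] = w  # index assignment keeps the graph (IndexPutBackward)
    return W

# vec_to_matrix(w, 3) reproduces W_mat_with_reshaper from above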

Broadcasting a function over two vectors to get a 2d numpy array

I want to broadcast a function f over a vector v so that the result is a matrix P where P[i, j] = f(v[i], v[j]).
I know that I can do it simply with:
P = np.zeros((v.shape[0], v.shape[0]))
for i in range(P.shape[0]):
    for j in range(P.shape[0]):
        P[i, j] = f(v[i, :], v[j, :])
or, more hackily:
from scipy.spatial.distance import cdist
P = cdist(v, v, metric=f)
But I am looking for the fastest and neatest way to do it.
This seems like a kind of broadcasting that NumPy should have built in.
Any suggestions?
I believe what you're searching for is numpy.vectorize. Use it like so:
import numpy

def f(x, y):
    return x + y

v = numpy.array([1, 2, 3])
# vectorize the function
vf = numpy.vectorize(f)
# "transpose" the vector by producing a view with another shape
vt = v.reshape((v.shape[0], 1))
# calculate over all combinations using broadcasting
vf(v, vt)
Output:
array([[2, 3, 4],
       [3, 4, 5],
       [4, 5, 6]])
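Worth noting: numpy.vectorize still calls f once per element pair in a Python-level loop, so it is a convenience rather than a speedup. When f is composed of NumPy ufuncs, as the f above is, plain broadcasting gives the same result without vectorize:
P = f(v[:, None], v[None, :])  # broadcasts (3, 1) against (1, 3) to shape (3, 3)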
