Through some help here, I have come up with a function that seems to apply the Sobel derivative to an image in the X direction, F(x,y) = F(x+1,y) - F(x,y).
I can't use any OpenCV functions and I need the 2D output array to be 1 column shorter than the 2D input array.
However, I can't figure out why this is still not returning an output array that is 1 column shorter. Can someone spot the issue and/or tell me if this is on the right track? Thanks much.
output = input[:-1, :]
r, c = input.shape
for i in range(0, r - 1):
    output[i] = np.abs(input[i+1] - input[i])
return output
You can use numpy's diff() function; see the numpy documentation for np.diff.
And a code snippet to illustrate its use:
import numpy as np
a = np.ones([5, 4])
b = np.diff(a, axis=1)
the result b is a (5, 3) array full of zeros.
If you want to keep your loop, you can do:
r, c = input.shape
output = np.zeros([r - 1, c])
for i in range(0, r - 1):
    output[i] = np.abs(input[i+1] - input[i])
print(output)
Edit: the 'mathematical' x corresponds to the second axis (the columns, i.e. the horizontal direction in the image), and y to the first axis (the rows). So to obtain F(x+1, y) - F(x, y), you must do:
r, c = input.shape
output = np.zeros([r, c - 1])
for j in range(0, c - 1):
    output[:, j] = np.abs(input[:, j+1] - input[:, j])
print(output)
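Equivalently, the loop can be replaced with the diff suggestion from above, applied along the second axis:

output = np.abs(np.diff(input, axis=1))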
I'm not sure if you mean to create the output array like that. You're creating output as a reference to a subarray of input, so if you modify input you also modify output or vice versa. See:
Numpy array assignment with copy
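A minimal illustration of the aliasing:

import numpy as np

input = np.ones([5, 5])
output = input[:-1, :]  # a view into input, not a copy
output[0] = 42
print(input[0])         # also changed: [42. 42. 42. 42. 42.]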
That said, running the code snippet you provided with input = np.ones([5,5]), and printing output.shape after the return, I get (4,5), which seems to be what you want?
The shape of Y[n,:,:] is (200,1), so I need Z[n,:,:]*H[n,:,:] (or something related) to be (200,1) also. But Z[n,:,:] and H[n,:,:] are both (200,6), so I need a multiplication operator that multiplies them and gets rid of the 6 to give an answer of shape (200,1). Any suggestions? The code is below.
n = 10
M = 200
D = 6
dW = np.sqrt(1/n) * randn(n, M, D)
H = cap(dW, 1/n, np.log(n))  # the generation of the Brownian motion increment
X = define_X(1, dW, 1, 1, 1)
H[1]
H.shape
Y = np.zeros((n+1, M, 1))
Z = np.zeros_like(X)
Z[n-1,:,:] = np.dot(np.transpose(Y[n,:,:]), H[n-1,:,:])
Y[n-1,:,:] = Y[n,:,:] + f(X[n-1,:,:], Y[n,:,:], Z[n-1,:,:])*(1/10) - Z[n,:,:]*H[n,:,:]
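If the intended operation is a row-wise inner product (an assumption, since the question leaves the operator open), a minimal sketch that multiplies corresponding rows and sums away the size-6 axis:

import numpy as np

M, D = 200, 6
Z_n = np.random.randn(M, D)  # stand-in for Z[n,:,:]
H_n = np.random.randn(M, D)  # stand-in for H[n,:,:]

# Elementwise product, then sum over the size-D axis; keepdims keeps shape (200, 1)
out = np.sum(Z_n * H_n, axis=1, keepdims=True)
print(out.shape)  # (200, 1)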
I am attempting to write a program which constructs a matrix and performs a singular value decomposition on it. I am evaluating the function ax^2 + bx + 1 on a grid. I then make a uniform meshgrid of a and b. The rows of the matrix correspond to different quadratic coefficients, while each column corresponds to a grid point at which the function is evaluated.
The MATLAB code is here:
% Collect data
x = linspace(-1,1,100);
[a,b] = meshgrid(0:0.1:1, 0:0.1:1);
D = zeros(numel(x), numel(a));
sz = size(D)

% Build "Dose" matrix
for i = 1:numel(a)
    D(:,i) = a(i)*x.^2 + b(i)*x + 1;
end

% Do the SVD:
[U,S,V] = svd(D, 'econ');
D_reconstructed = U*S*V';
plot(diag(S))
scatter3(a(:), b(:), V(:,1))
This is my attempt at a solution:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 100)

def f(x, a, b):
    return a*x*x + b*x + 1

a, b = np.mgrid[0:1:0.1, 0:1:0.1]
#a = b = np.arange(0,1,0.01)
D = np.zeros((x.size, a.size))
for i in range(a.size):
    D[i] = a[i]*x*x + b[i]*x + 1
U, S, V = np.linalg.svd(D)
plt.plot(np.diag(S))

fig = plt.figure()
ax = plt.axes(projection="3d")
ax.scatter(a, b, V[0])
But I always get broadcasting errors, which I am not sure how to fix.
Firstly, in MATLAB you're assigning to D(:,i), but in python you're assigning to D[i]. The latter is equivalent to D[i, ...] which is in your case D[i, :]. Instead you seem to need D[:, i].
Secondly, in MATLAB linear indexing into a 2d array (as with a(i) and b(i)) gives you single elements, as if the array had been flattened. If you index a numpy array with a single index, you get slices along the first axis instead, just as I mentioned with D[i].
You can do away with the loop by using broadcasting, getting your desired 2d array by .ravel()ing (or reshaping) your a and b arrays:
x = np.linspace(-1, 1, 100)[:, None] # inject trailing singleton for broadcasting
a, b = np.mgrid[0:1:0.1, 0:1:0.1]
D = a.ravel() * x**2 + b.ravel() * x + 1
The way this works is that x has shape (100, 1) after we inject a trailing singleton (in MATLAB trailing singletons are implied, in numpy leading ones), and both a.ravel() and b.ravel() have shape (10*10,) which is compatible with (1, 10*10), making broadcasting possible into shape (100, 10*10). You could also replace the calls to ravel with
a, b = np.mgrid[...].reshape(2, -1)
which is a trick I sometimes use, but this is harder to read if you're unfamiliar with the pattern.
Side note: it's better to use example data where dimensions end up being of different size so that you notice if something ends up being transposed.
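Putting it together, a minimal end-to-end sketch (assuming full_matrices=False as the numpy counterpart of MATLAB's 'econ', and noting that np.linalg.svd returns S as a 1-D vector and V already transposed, i.e. Vh):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 100)[:, None]      # shape (100, 1)
a, b = np.mgrid[0:1:0.1, 0:1:0.1]
D = a.ravel() * x**2 + b.ravel() * x + 1  # shape (100, 100)

U, S, Vh = np.linalg.svd(D, full_matrices=False)

plt.plot(S)  # singular values; S is already 1-D, so no diag() needed

fig = plt.figure()
ax = plt.axes(projection="3d")
ax.scatter(a.ravel(), b.ravel(), Vh[0])   # first right singular vector
plt.show()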
The sample code is as below.
I want to get dataNew(h, w, length) from data(h, w, c) and ind(h, w). Here length < c, meaning dataNew is sliced out of data along the last axis.
Here, length and ind[i, j] are guaranteed to fit the value of c (i.e. ind[i, j] + length <= c).
I have realized it with for loops, but I want the numpy way. Please help, thanks!
import numpy as np

h, w, c = 3, 4, 5
data = np.arange(60).reshape((h, w, c))
print(data)
length = 3
ind = np.random.randint(0, 3, 12).reshape(h, w)
print(ind)
dataNew = np.empty((h, w, length), np.int16)
for i in range(h):
    for j in range(w):
        st = ind[i, j]
        dataNew[i, j] = data[i, j][st : st + length]
print(dataNew)
We can leverage scikit-image's view_as_windows, which is built on np.lib.stride_tricks.as_strided, to get sliding windows.
from skimage.util.shape import view_as_windows

# Get all sliding windows along the last axis; shape (h, w, c-length+1, length)
w = view_as_windows(data, (1, 1, length))[..., 0, 0, :]

# Index into the window axis with the start indices, then slice out the singleton dim
out = np.take_along_axis(w, ind[..., None, None], axis=2)[:, :, 0]
The last step is basically advanced indexing into the windows with those start indices. This could be made a bit simpler and easier to understand; alternatively, we could do:
m, n = ind.shape
I, J = np.ogrid[:m, :n]
out = w[I, J, ind]
One way would be to create an indexing array using broadcasting, then use np.take_along_axis to index the array:
ix = ind[..., None] + np.arange(length)
out = np.take_along_axis(data, ix, -1)
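Either version can be checked against the loop-based dataNew from the question:

assert np.array_equal(out, dataNew)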
I am searching a sorted array for the proper insertion indices of new data so that it remains sorted. Although searchsorted2d by @Divakar works great for column insertions, it just cannot work along rows. Is there a way to perform the same, yet along the rows?
The first idea that comes to mind is to adapt searchsorted2d for the desired behavior. However, that does not seem as easy as it appears. Here is my attempt at adapting it, but it still does not work when axis is set to 0.
import numpy as np

# By Divakar
# See https://stackoverflow.com/a/40588862
def searchsorted2d(a, b, axis=0):
    shape = list(a.shape)
    shape[axis] = 1
    max_num = np.maximum(a.max() - a.min(), b.max() - b.min()) + 1
    r = np.ceil(max_num) * np.arange(a.shape[1-axis]).reshape(shape)
    p = np.searchsorted((a + r).ravel(), (b + r).ravel()).reshape(b.shape)
    return p  # - a.shape[axis] * np.arange(a.shape[1-axis]).reshape(shape)

axis = 0  # Operate along which axis?
n = 16    # vector size

# Initial array
a = np.random.rand(n).reshape((n, 1) if axis else (1, n))
insert_into_a = np.random.rand(n).reshape((n, 1) if axis else (1, n))
indices = searchsorted2d(a, insert_into_a, axis=axis)
a = np.insert(a, indices.ravel(), insert_into_a.ravel()).reshape(
    (n, -1) if axis else (-1, n))
assert np.all(a == np.sort(a, axis=axis)), 'Failed :('
print('Success :)')
I expect that the assertion passes in both cases (axis = 0 and axis = 1).
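One way to make both cases work, sketched here on the assumption that the row-wise (axis=1) version is the known-good one: reduce the axis=0 case to the axis=1 case by transposing, since searching sorted columns of a is the same as searching sorted rows of a.T.

import numpy as np

def searchsorted2d(a, b, axis=0):
    if axis == 0:
        # Column-wise search is row-wise search on the transpose
        return searchsorted2d(a.T, b.T, axis=1).T
    m, n = a.shape
    max_num = np.maximum(a.max() - a.min(), b.max() - b.min()) + 1
    # Offset each row into its own disjoint value range so one flat
    # searchsorted handles all rows at once, then undo the offsets
    r = max_num * np.arange(m)[:, None]
    p = np.searchsorted((a + r).ravel(), (b + r).ravel()).reshape(b.shape)
    return p - n * np.arange(m)[:, None]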
I've got a tensorflow model where the output of a layer is a 2d tensor, say t = [[1,2], [3,4]].
The next layer expects an input which consists of every row combination of this tensor. That is, I need to turn it into t_new = [[1,2,1,2], [1,2,3,4], [3,4,1,2], [3,4,3,4]].
So far I have tried:
1) tf.unstack(t, axis=0), looping over its rows and appending each combination to a buffer, then t_new = tf.stack(buffer, axis=0). This works except when the shape is unspecified, i.e. None, so...
2) I have used a tf.while_loop to generate indices idx=[[0,0], [0,1], [1,0], [1,1]], then t_new = tf.gather(t, idx).
My question here is: should I set back_prop to True or False in this tf.while_loop? I'm only generating indices inside the loop. Not sure what back_prop would even mean.
Also, do you know of a better way to achieve what I need?
Here is the while_loop:
i = tf.constant(0)
j = tf.constant(0)
idx = tf.Variable([], dtype=tf.int32)

def body(i, j, idx):
    c = tf.concat([idx, [i, j]], axis=0)
    i, j = tf.cond(tf.equal(j, sentence_len - 1),
                   lambda: (i + 1, 0),
                   lambda: (i, j + 1))
    return i, j, c

_, _, indices = tf.while_loop(lambda i, j, _: tf.less(i, sentence_len),
                              body,
                              [i, j, idx],
                              shape_invariants=[i.get_shape(),
                                                j.get_shape(),
                                                tf.TensorShape([None])])
Now I can do t_new = tf.gather(t, indices).
But I am very confused about the meaning of tf.while_loop's back_prop - in general and especially here.
In this case you are fine to set back_prop to False. There's no need to backpropagate through the computation of the indices, because that computation doesn't depend on any learned variables.
It depends on the context. If you are indexing over features produced by a differentiable function, then you want to backpropagate. However, if you are indexing over an input placeholder or input data of some type, then you can keep it as False, just as @Aaron said.
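As for a better way to build the row combinations without generating indices in a loop, a sketch using tile/reshape/concat (written in the TF1-style graph code of the question; works with a dynamic row count):

n = tf.shape(t)[0]
left = tf.reshape(tf.tile(t, [1, n]), [n * n, -1])  # each row repeated n times
right = tf.tile(t, [n, 1])                          # whole block repeated n times
t_new = tf.concat([left, right], axis=1)            # all row pairs, shape (n*n, 2*d)

For t = [[1,2],[3,4]] this yields [[1,2,1,2],[1,2,3,4],[3,4,1,2],[3,4,3,4]], matching the desired t_new.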