code:
import theano
import theano.tensor as T

a = T.vector()
b = T.vector()
loss = T.sum(a - b)
dy = T.grad(loss, a)
d2y = T.grad(loss, dy)  # this line raises the error below
f = theano.function([a, b], d2y)
print(f([.5, .5, .5], [1, 0, 1]))
output:
theano.gradient.DisconnectedInputError: grad method was asked to compute
the gradient with respect to a variable that is not part of the
computational graph of the cost, or is used only by a non-differentiable
operator: Elemwise{second}.0
How is a derivative of the graph not part of the graph? Is this why scan is used to compute the Hessian?
Here:
d2y = T.grad(loss, dy)
you are attempting to compute the gradient of the loss with respect to dy. However, the loss depends only on the values of a and b, not on dy, hence the error. It only makes sense to compute partial derivatives of the loss with respect to parameters that actually affect its value.
The easiest way to compute the Hessian in Theano is to use the theano.gradient.hessian convenience function:
d2y = theano.gradient.hessian(loss, a)
See the documentation here for an alternative manual method that uses a combination of theano.grad and theano.scan.
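As a rough sketch of that manual method, here is the scan-based construction adapted from the Theano tutorial, applied to a toy cost T.sum(x ** 2) (my choice; the question's loss has a constant gradient, which makes for a less instructive demo):
import theano
import theano.tensor as T

x = T.dvector('x')
cost = T.sum(x ** 2)
gy = T.grad(cost, x)

# Differentiate each element of the gradient w.r.t. x, producing one
# row of the Hessian per scan step.
H, updates = theano.scan(lambda i, gy, x: T.grad(gy[i], x),
                         sequences=T.arange(gy.shape[0]),
                         non_sequences=[gy, x])
f = theano.function([x], H, updates=updates)
print(f([4, 4]))  # [[2. 0.], [0. 2.]]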
In your example the Hessian will be a 3x3 matrix of zeros, since the partial derivative of the loss w.r.t. a is independent of a (it's just a vector of ones).
Related
I am trying to implement a real-valued cost function that evaluates a complex input in frequency space with PyTorch and autograd, since I am interested in the gradients of the cost function w.r.t. the input. When I compare the autograd results with the derivative that I computed by hand (with Wirtinger calculus), I get a different result. I'm not sure where I made the mistake, whether it is in my implementation or in my own derivation of the gradient.
The cost function and its derivative by hand look like this:
[Image: formula of the cost function]
My implementation is here:
import torch

def f_derivative_by_hand(f):
    # Cost and the gradient derived by hand (Wirtinger calculus).
    f = torch.tensor(f, dtype=torch.complex128)
    ftilde = torch.fft.fft(f)
    absf = torch.abs(ftilde)
    f2 = absf ** 2
    C = torch.trapz(f2).numpy()
    grads = 2 * torch.fft.ifft(ftilde).numpy()
    return C, grads

def f_derivative_autograd(f):
    # Same cost, but with the gradient computed by autograd.
    f = torch.tensor(f, dtype=torch.complex128, requires_grad=True)
    ftilde = torch.fft.fft(f)
    f2 = torch.abs(ftilde) ** 2
    C = torch.trapz(f2)
    C.backward()
    grads = f.grad
    return C.detach().numpy(), grads.detach().numpy()
When I use some data and evaluate it with both functions, the gradient from the autograd implementation is tilted in comparison (note that I normalized the plotted arrays):
[Image: autograd and derivative-by-hand comparison]
I suspect there could also be something wrong with the automatic differentiation of the FFT, though: if I remove the Fourier transform from the cost function and integrate the function in real space, both implementations match exactly except at the edges (again normalized):
[Image: no-FFT autograd and derivative-by-hand comparison]
It would be fantastic if someone could help me figure out what is wrong!
After some more investigation, I found the solution to the problem of the tilted derivatives. Apparently, the trapezoidal integration rule assumes boundary conditions that produce artifacts at the boundaries, as discussed in this PyTorch forum post.
In my original problem, the observed tilt results from integrating the Fourier-transformed signal, which is asymmetric. The boundary artifacts introduce spatial frequencies which tilt the derivative in real space.
For me, the simplest solution is just to use a sum and weight by the frequency differential. Then, everything works out.
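A minimal sketch of that fix (the dt argument and its default are my assumptions; set it to your actual sample spacing):
import torch

def f_derivative_autograd_sum(f, dt=1.0):
    # Same cost as before, but integrated with a plain sum weighted by
    # the sample spacing instead of torch.trapz, which avoids the
    # trapezoidal rule's boundary artifacts.
    f = torch.tensor(f, dtype=torch.complex128, requires_grad=True)
    ftilde = torch.fft.fft(f)
    f2 = torch.abs(ftilde) ** 2
    C = torch.sum(f2) * dt
    C.backward()
    return C.detach().numpy(), f.grad.detach().numpy()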
Let us suppose that my function is
f : C -> C
f(z) = x - i*y, where z = x + i*y
Now the real part is
u(x, y) = x
and the imaginary part is
v(x, y) = -y
so, when we get the derivatives, we find
d_u_x(x,y) = 1 # derivative of u wrt x
d_u_y(x,y) = 0
d_v_x(x, y) = 0
d_v_y(x, y) = -1
so, here,
d_u_x != d_v_y
thus it does not satisfy the Cauchy-Riemann equations.
But then comes Wirtinger calculus, which says I can rewrite my function in terms of z and z.conj(), using
x = ((x + iy) + (x - iy))/2
  = (z + z.conj())/2
y = ((x + iy) - (x - iy))/(2i)
  = (z - z.conj())/(2i)
But what comes after this? How do I find the gradient?
Also, in PyTorch, what is the correct way to specify such a function?
If I do,
import torch
a = torch.randn(1, dtype=torch.cfloat, requires_grad=True)
f = a.conj()
f.backward()
print(a.grad)
is this a correct way?
You may find the following page of interest:
When you use PyTorch to differentiate any function f(z) with complex domain and/or codomain, the gradients are computed under the assumption that the function is a part of a larger real-valued loss function g(input)=L. The gradient computed is ∂L/∂z* (note the conjugation of z), the negative of which is precisely the direction of steepest descent used in Gradient Descent algorithm. Thus, all the existing optimizers work out of the box with complex parameters.
This convention matches TensorFlow’s convention for complex differentiation, but is different from JAX (which computes ∂L/∂z).
If you have a real-to-real function which internally uses complex operations, the convention here doesn’t matter: you will always get the same result that you would have gotten if it had been implemented with only real operations.
...
For optimization problems, only real valued objective functions are used in the research community since complex numbers are not part of any ordered field and so having complex valued loss does not make much sense.
It also turns out that no interesting real-valued objective fulfills the Cauchy-Riemann equations. So the theory of holomorphic functions cannot be used for optimization, and most people therefore use Wirtinger calculus.
https://pytorch.org/docs/stable/notes/autograd.html
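A small illustration of that convention (my own toy example, not from the docs): because z.grad holds the conjugate Wirtinger derivative, a stock optimizer minimizes a real-valued loss of a complex parameter out of the box.
import torch

target = torch.tensor(2.0 - 3.0j)
z = torch.zeros((), dtype=torch.cfloat, requires_grad=True)
opt = torch.optim.SGD([z], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    loss = (z - target).abs() ** 2  # real-valued objective
    loss.backward()                 # fills z.grad with the dL/dz* gradient
    opt.step()

print(z)  # approaches 2-3j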
I am trying to use Hamiltonian Monte Carlo (HMC, from TensorFlow Probability), but my target distribution contains an intractable 1-D integral which I approximate with the trapezoidal rule. My understanding of HMC is that it calculates gradients of the target distribution to build a more efficient transition kernel. My question is: can TensorFlow work out gradients with respect to the parameters of a function, and are they meaningful?
For example this is a log-probability of the target distribution where 'A' is a model parameter:
# integrate e^At * f[t] with respect to t between 0 and t, for all t
t = tf.linspace(0., 10., 100)
f = tf.ones(100)
delta = t[1]-t[0]
sum_term = tfm.multiply(tfm.exp(A*t), f)
integrals = 0.5*delta*tfm.cumsum(sum_term[:-1] + sum_term[1:], axis=0)
pred = integrals
sq_diff = tfm.square(observed_data - pred)
sq_diff = tf.reduce_sum(sq_diff, axis=0)
log_lik = -0.5*tfm.log(2*PI*variance) - 0.5*sq_diff/variance
return log_lik
Are the gradients of this function in terms of A meaningful?
Yes, you can use TensorFlow's GradientTape to work out the gradients. I assume you have a mathematical function outputting log_lik with many inputs, one of which is A.
GradientTape to get the gradient of A
To get the gradients of log_lik with respect to A, you can use tf.GradientTape in TensorFlow.
For example:
with tf.GradientTape(persistent=True) as g:
    g.watch(A)
    t = tf.linspace(0., 10., 100)
    f = tf.ones(100)
    delta = t[1] - t[0]
    sum_term = tfm.multiply(tfm.exp(A*t), f)
    integrals = 0.5*delta*tfm.cumsum(sum_term[:-1] + sum_term[1:], axis=0)
    pred = integrals
    sq_diff = tfm.square(observed_data - pred)
    sq_diff = tf.reduce_sum(sq_diff, axis=0)
    log_lik = -0.5*tfm.log(2*PI*variance) - 0.5*sq_diff/variance
    z = log_lik

## then, you can get the gradients of log_lik with respect to A like this
dz_dA = g.gradient(z, A)
dz_dA contains all the partial derivatives of z with respect to the variables in A.
The code above just shows the idea. To make it work, you need to carry out the whole calculation with tensor operations, so modify your function to use tensor types throughout.
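For example, here is a self-contained version of the snippet above; the values of observed_data, variance, and PI are placeholders of my choosing:
import numpy as np
import tensorflow as tf
from tensorflow import math as tfm

PI = tf.constant(np.pi)
variance = tf.constant(1.0)
observed_data = tf.ones(99)  # placeholder data: one value per partial integral
A = tf.Variable(0.5)         # tf.Variables are watched by the tape automatically

with tf.GradientTape() as g:
    t = tf.linspace(0., 10., 100)
    f = tf.ones(100)
    delta = t[1] - t[0]
    sum_term = tfm.multiply(tfm.exp(A * t), f)
    integrals = 0.5 * delta * tfm.cumsum(sum_term[:-1] + sum_term[1:], axis=0)
    sq_diff = tf.reduce_sum(tfm.square(observed_data - integrals), axis=0)
    log_lik = -0.5 * tfm.log(2 * PI * variance) - 0.5 * sq_diff / variance

print(g.gradient(log_lik, A))  # scalar d(log_lik)/dA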
Another example, this time entirely in tensor operations:
x = tf.constant(3.0)
with tf.GradientTape() as g:
    g.watch(x)
    with tf.GradientTape() as gg:
        gg.watch(x)
        y = x * x
    dy_dx = gg.gradient(y, x)   # Will compute to 6.0
d2y_dx2 = g.gradient(dy_dx, x)  # Will compute to 2.0
You can see more examples in the documentation: https://www.tensorflow.org/api_docs/python/tf/GradientTape
Further discussion on "meaningfulness"
Let me translate the Python code to mathematics first:
# integrate e^At * f[t] with respect to t between 0 and t, for all t
From the comment above, you have a function; call it g(t, A) = e^(A*t) * f(t).
Then you take a definite integral of it; call it G(t, A) = the integral of g(tau, A) dtau from 0 to t.
From your code, t is not a free variable any more; it is fixed on a grid running up to 10. So we reduce to a function of a single variable, h(A).
Up to here, the function h has a definite integral inside. But since you are approximating it, we should not think of it as a true integral (dt -> 0); it is just another chain of simple maths. No mystery here.
Then the last output, log_lik, is simply some simple mathematical operations with one new input variable, observed_data, which I call y.
So the function z that computes log_lik is
z(A) = -0.5*log(2*PI*variance) - 0.5*sum((y - h(A))^2)/variance
z is no different from any other chain of maths operations in TensorFlow. Therefore dz_dA is meaningful in the sense that the gradient of z w.r.t. A gives you the direction to update A so as to minimize z.
I have an objective function from a paper that I would like to minimize with gradient descent. I have not yet had to do this "from scratch" and would like some advice as to how to code it up manually. The objective function is:
T(L) = tr(X.T L^s X) - beta * ||L||.
where L is an N x N positive semidefinite matrix to be estimated, X is an N x M matrix, beta is a regularization constant, X.T is the transpose of X, and ||.|| is the Frobenius norm.
Also, L^s is the matrix power given by the eigendecomposition L^s = F Λ^s F.T, where F is the matrix of eigenvectors of L and Λ is the diagonal matrix of its eigenvalues.
The derivative of the objective function is:
dT/dL = sum_{from r = 0 to r = s - 1} L^r (XX.T) L^(s-r-1) - 2 * beta * L
I have done very rudimentary gradient descent problems (such as matrix factorization) where optimization is done over every element of the matrix, or using packages/libraries. This kind of problem is more complex than what I am used to, and I was hoping that some of you who are much more experienced with this sort of thing could help me out.
Any general advice is much appreciated, as well as specific recommendations for how to code this up in Python or R.
Here is the link for the paper with this function:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0128136#sec016
Thank you very much for your help!
Paul
In general, it would probably be advisable to use a machine learning library such as TensorFlow or PyTorch. If you go down this route you get several advantages: 1) efficient C++ implementations of the tensor operations, 2) automatic differentiation, and 3) easy access to more sophisticated optimizers (e.g. Adam).
If you prefer to do the gradient computation yourself, you could do that by setting the gradient L.grad manually before the optimization step.
A simple implementation would look like this:
import torch
n=10
m=20
s = 3
b=1e-3
n_it=40
# L=torch.nn.Parameter(torch.rand(n,n))
F=torch.nn.Parameter(torch.rand(n,n))
D=torch.nn.Parameter(torch.rand(n))
X=torch.rand((n,m))
opt=torch.optim.SGD([F,D],lr=1e-4)
for i in range(n_it):
loss = (X.T.matmul(F.matmul((D**s).unsqueeze(1)*F.T)).matmul(X)).trace() - b * F.matmul((D**s).unsqueeze(1)*F.T).norm(2)
print(loss)
opt.zero_grad()
loss.backward()
opt.step()
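If you do want to supply the gradient yourself, a rough sketch of that route (using the question's hand-derived dT/dL as given, and, like the code above, ignoring the positive-semidefiniteness constraint on L):
import torch

n, m, s, b, lr, n_it = 10, 20, 3, 1e-3, 1e-4, 40

L = torch.nn.Parameter(torch.rand(n, n))
X = torch.rand((n, m))
XXt = X.matmul(X.T)
opt = torch.optim.SGD([L], lr=lr)

for i in range(n_it):
    with torch.no_grad():
        # dT/dL = sum_{r=0}^{s-1} L^r (X X.T) L^(s-r-1) - 2*b*L
        grad = -2 * b * L
        for r in range(s):
            grad = grad + torch.linalg.matrix_power(L, r) @ XXt @ torch.linalg.matrix_power(L, s - r - 1)
    opt.zero_grad()
    L.grad = grad  # set the gradient manually, then let SGD take the step
    opt.step()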
I am asked to write an implementation of gradient descent in Python with the signature gradient(f, P0, gamma, epsilon), where f is an unknown and possibly multivariate function, P0 is the starting point for the gradient descent, gamma is the constant step size, and epsilon the stopping criterion.
What I find tricky is how to evaluate the gradient of f at the point P0 without knowing anything about f. I know there is numpy.gradient, but I don't know how to use it when I don't know the dimensions of f. Also, numpy.gradient works on samples of the function, so how do I choose the right samples to compute the gradient at a point without any information about the function and the point?
I'm assuming here that your question, "So how can I choose a generic set of samples each time I need to compute the gradient at a given point?", means that the dimension of the function is fixed and can be deduced from your starting point.
Consider this a demo using scipy's approx_fprime, an easier-to-use wrapper for numerical differentiation that scipy's optimizers also fall back on when a Jacobian is needed but not given.
Of course you can't ignore the parameter epsilon, which can make a difference depending on the data.
(This code also ignores scipy.optimize's args parameter, which is usually a good idea to use; I'm relying on the fact that A and b are in scope here; surely not best practice.)
import numpy as np
from scipy.optimize import approx_fprime, minimize
np.random.seed(1)
# Synthetic data
A = np.random.random(size=(1000, 20))
noiseless_x = np.random.random(size=20)
b = A.dot(noiseless_x) + np.random.random(size=1000) * 0.01
# Loss function
def fun(x):
    return np.linalg.norm(A.dot(x) - b, 2)
# Optimize without any explicit jacobian
x0 = np.zeros(len(noiseless_x))
res = minimize(fun, x0)
print(res.message)
print(res.fun)
# Get numerical-gradient function
eps = np.sqrt(np.finfo(float).eps)
my_gradient = lambda x: approx_fprime(x, fun, eps)
# Optimize with our gradient
res = minimize(fun, x0, jac=my_gradient)
print(res.message)
print(res.fun)
# Eval gradient at some point
print(my_gradient(np.ones(len(noiseless_x))))
Output:
Optimization terminated successfully.
0.09272331925776327
Optimization terminated successfully.
0.09272331925776327
[15.77418041 16.43476772 15.40369129 15.79804516 15.61699104 15.52977276
15.60408688 16.29286766 16.13469887 16.29916573 15.57258797 15.75262356
16.3483305 15.40844536 16.8921814 15.18487358 15.95994091 15.45903492
16.2035532 16.68831635]
Using:
# Get numerical-gradient function with a way too big eps-value
eps = 1e-3
my_gradient = lambda x: approx_fprime(x, fun, eps)
shows that eps is a critical parameter resulting in:
Desired error not necessarily achieved due to precision loss.
0.09323354898565098
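Finally, with such a numerical gradient in hand, the gradient(f, P0, gamma, epsilon) routine the question asks for could look roughly like this (my sketch, reusing the imports above: constant step gamma, stopping once the gradient norm falls below epsilon; max_iter is my own safeguard):
def gradient(f, P0, gamma, epsilon, max_iter=10000):
    x = np.asarray(P0, dtype=float)
    fd_eps = np.sqrt(np.finfo(float).eps)  # finite-difference step, as above
    for _ in range(max_iter):
        g = approx_fprime(x, f, fd_eps)
        if np.linalg.norm(g) < epsilon:    # stopping criterion
            break
        x = x - gamma * g
    return x

print(fun(gradient(fun, x0, gamma=1e-4, epsilon=1e-6)))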