I am using this custom function to reshape my tensors in my custom loss function.
def reshape_fortran(x, shape):
    if len(x.shape) > 0:
        x = x.permute(*reversed(range(len(x.shape))))
    return x.reshape(*reversed(shape)).permute(*reversed(range(len(shape))))
However, I receive this error:
RuntimeError: _unsafe_view does not support automatic differentiation for outputs with complex dtype.
for the output of reshape_fortran.
Do you know what the problem might be? Which function is not supported by PyTorch autograd for complex numbers?
Complex autograd was in beta as of version 1.8, but it is now stable and should fully support such operations as of 1.9.
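As a quick check, here is a minimal sketch (assuming PyTorch >= 1.9 and the reshape_fortran function from the question):

import torch

x = torch.randn(2, 3, 4, dtype=torch.cfloat, requires_grad=True)
y = reshape_fortran(x, [4, 3, 2])  # Fortran-order reshape of a complex tensor
loss = y.abs().sum()               # real-valued loss over complex values
loss.backward()                    # no _unsafe_view error on >= 1.9
print(x.grad.shape)                # torch.Size([2, 3, 4])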
I was able to fix the problem by using this function instead, but I have no idea why it was not working before and why it works now:
def convert_output84_torch(input):
    shape84 = (8, 4)
    T1 = torch.transpose(input, 0, 2).permute(0, 1, 2).contiguous()
    T2 = T1.view(shape84[::-1])
    output = torch.transpose(T2, 0, 1)
    return output
I replaced reshape with view, and it works fine (for now). But since my code was working fine before, I am not sure this is a long-term solution, as I don't know the root cause.
I have programmed a framework that concatenates different (quite complicated) linear operators in an abstract manner. It overrides the operators +, *, @, and -, and chooses a path through the graph of compositions of functions. It isn't easy to debug, to say the least; however, the control flow does not depend on the data itself, and of course every operation is done with TensorFlow. I was hoping to use tf.function to compile it and obtain a (hopefully much faster) tf.function via XLA. However, I get the following error:
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
@tf.function
def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
        added = my_constant * 2
The graph tensor has name: Reshape_2:0
I do not use tf.init_scope anywhere, and there are eight (!) Google results for this error, none of which gives me any clue how to debug it.
# Initialize linear operators; these are Python objects that override __matmul__ etc.
P = ...
A = ...
# Initialize vectors; these are Python objects compatible with P and A.
x = ...
y = ...

# This function recreates the Python object from its raw TensorFlow data.
# Since it might depend on the spaces, and they also need to be set up for
# deserialization, the method is returned by a function of x.
# But since many vectors share the same spaces, I was hoping to reuse it.
deserialize = x.deserialize()

# We want to compile the action on x into a function now.
def meth(data):
    result = P @ (A.T @ A @ deserialize(data))
    # We return only the raw data.
    return result.serialize()

meth = tf.function(
    meth,
    # experimental_compile=True,
    input_signature=(x.serialize_signature,),
).get_concrete_function()

# We want to use meth on many vectors now.
# Executing this line throws the error:
meth(x1)
meth(x2)
meth(x3)
Needless to say, this works without tf.function.
Has anyone stumbled across this error and can help me understand it better? Or is the whole setup I'm attempting not suitable for TensorFlow?
Edit:
The error was caused by a local lambda in the linear operator class implicitly capturing a constant tensor. To be honest, the error message does suggest something like that; however, it was difficult to understand which line of code caused it, and in the end the bug was not easy to find.
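For anyone hitting the same message, here is a hypothetical minimal reproduction of that pattern (the class and names are invented for illustration, and the exact error text may vary across TensorFlow versions): a lambda created during one tf.function trace captures a graph tensor, and reusing that lambda in a second trace leaks the tensor across function-building contexts.

import tensorflow as tf

class Op:
    def __init__(self):
        self.scale = None

    def build(self, x):
        c = tf.reshape(tf.constant([2.0]), [])  # a graph tensor while tracing
        self.scale = lambda v: v * c            # the lambda captures c

op = Op()

@tf.function
def first(x):
    op.build(x)         # traced: c belongs to first's graph
    return op.scale(x)

@tf.function
def second(x):
    return op.scale(x)  # reuses c from first's graph

first(tf.ones([2]))
second(tf.ones([2]))    # raises the "Graph" tensor TypeError quoted above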
A GLS (and thus also OLS) regression with constraints on the parameters can readily be run using statsmodels' GLM.fit_constrained() method, as in the code below.
How can I make the GLMResults object resulting from such a statsmodels GLM.fit_constrained() regression picklable, so that the estimation result can be stored for re-use for prediction in a new session any time later?
The GLMResults object obtained from fit_constrained(), which contains the relevant estimation result, has a .save() method that would normally readily pickle the object into a file.
This .save() works for the result of a standard (unconstrained) GLM regression, sm.GLM(...).fit(). However, it does not work with the result of fit_constrained(). Instead, it throws a pickling error, seemingly because the patsy DesignMatrixBuilder is not picklable, which links to the never-resolved patsy issue referenced in the error below. This is at least the case for my Python 3.6.3 (running on Windows).
An example:
import statsmodels
import statsmodels.api as sm
import pandas as pd
import numpy as np

# Define example data & constraints:
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 5)), columns=list('ABCDF'))
y = df['A']
X = df[['B', 'C', 'D', 'F']]
constraints = ['B + C + D', 'C - F']  # Two linear constraints on the parameters: B+C+D = 0 and C-F = 0
statsmodels.genmod.families.links.identity()
OLS_from_GLM = sm.GLM(y, X)

# Unconstrained regression:
result_u = OLS_from_GLM.fit()
result_u.save('myfile_u.pickle')  # This works

# Constrained regression - save() fails:
result_c = OLS_from_GLM.fit_constrained(constraints)
result_c.save('myfile_c.pickle')
# Fails (tested in Python 3.6.3 on Windows) with:
# NotImplementedError: Sorry, pickling not yet supported. See
# https://github.com/pydata/patsy/issues/26 if you want to help.
Is there a way to readily make the result from fit_constrained() picklable, i.e. storable?
Below I suggest a first workaround; it is trivial and has worked well for me so far. I do not know, however, whether it is truly advisable, whether its risks are large, or whether a preferable alternative solution exists.
I got this to work by simply removing (commenting out) the line
res._results.constraints = lc
in the function definition of fit_constrained() within statsmodels' active generalized_linear_model.py script (in my case in the virtualenv folder \env\Lib\site-packages\statsmodels\genmod\generalized_linear_model.py).
Disabling this line seems to have created no problem for my work: I can now readily save and reload the pickled file and use it to make correct predictions based on the stored estimation; the imposed parameter constraints remain respected, and predictions made using .predict() are unchanged after reloading.
I wonder, though, whether there is any major risk attached to this procedure. I am not familiar with the inner workings of the statsmodels library, or with its GLM.fit_constrained() method in particular, and I reckon it is inadvisable to change anything in a pre-existing module one does not understand. However, it is the only way I have found to conveniently impose various constraints on my GLM parameters while still being able to save the regression results and re-use them for prediction in a later session.
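A minimal sketch of a less invasive variant of the same idea (my own assumption, not an official statsmodels API: it drops the unpicklable patsy object from the fitted instance instead of editing the library source, and it carries the same open question about downstream risk):

import statsmodels.api as sm

result_c = OLS_from_GLM.fit_constrained(constraints)
result_c._results.constraints = None  # drop the unpicklable patsy object
result_c.save('myfile_c.pickle')      # pickling now succeeds

result_loaded = sm.load('myfile_c.pickle')
print(result_loaded.predict(X))       # constrained estimates still respected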
I was recently introduced to PyTorch and began running through the library's documentation and tutorials.
In the "Creating extensions using numpy and scipy" tutorial, under "Parameter-less example", a sample function called BadFFTFunction is created using NumPy.
The description for the function states:
"This layer doesn’t particularly do anything useful or mathematically
correct.
It is aptly named BadFFTFunction"
The function and its usage are given as:
from numpy.fft import rfft2, irfft2

class BadFFTFunction(Function):
    def forward(self, input):
        numpy_input = input.numpy()
        result = abs(rfft2(numpy_input))
        return torch.FloatTensor(result)

    def backward(self, grad_output):
        numpy_go = grad_output.numpy()
        result = irfft2(numpy_go)
        return torch.FloatTensor(result)

def incorrect_fft(input):
    return BadFFTFunction()(input)

input = Variable(torch.randn(8, 8), requires_grad=True)
result = incorrect_fft(input)
print(result.data)
result.backward(torch.randn(result.size()))
print(input.grad)
Unfortunately, I was also only recently introduced to signal processing, and I am unsure where the (likely obvious) error in this function lies.
I am wondering, how might one go about fixing this function so that its forward and backward outputs are correct?
How can BadFFTFunction be fixed so that a differentiable FFT function can be used in PyTorch?
I think the errors are as follows: first, despite having FFT in its name, the function only returns the amplitudes/absolute values of the FFT output, not the full complex coefficients. Second, simply using the inverse FFT to compute the gradient of the amplitudes probably doesn't make much sense mathematically (?).
There is a package called pytorch-fft that tries to make an FFT function available in PyTorch. You can see some experimental code for autograd functionality here; also note the discussion in this issue.
As of version 1.8, PyTorch has the torch.fft module:
torch.fft.fft(input)
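A minimal sketch of why this makes a hand-written backward unnecessary (assuming PyTorch >= 1.8): the FFT ops in torch.fft are differentiable out of the box.

import torch

x = torch.randn(8, 8, requires_grad=True)
spec = torch.fft.rfft2(x)  # complex coefficients, supported by autograd
loss = spec.abs().sum()    # the amplitudes BadFFTFunction tried to produce
loss.backward()            # gradients are computed correctly
print(x.grad.shape)        # torch.Size([8, 8])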
I'm currently working on a quaternionic neural network using TensorFlow (I want to use GPUs). TensorFlow doesn't have support for quaternions, but you can represent them as 4x4 real matrices, so it might be possible to build such a neural network in TensorFlow.
Is there a simple way to add a custom operation or to do a custom operation on tensors?
For example, I can write:
output_activation = tf.nn.softmax(tf.matmul(hidden_activation, Weight_to_output))
...and that's pretty cool! All you have to do is add a loss function and then do backpropagation. However, I want to do the same thing with quaternions, for example:
output_activation = mySigmoid(myFunction(hidden_activation, Weight_to_output))
However, I need to convert the quaternions to and from tensors to make the most of GPU computation. So I need to create a function that takes some tensors as parameters and returns the transformed tensors.
I've looked at py_func, but it seems that you can't return tensors.
I tried the following, but it failed:
def layerActivation(inputTensor, WeightTensor):
    newTensor = tf.matmul(inputTensor, WeightTensor)
    return newTensor
...and in main():
x = tf.placeholder(...)
W_to_hidden = tf.Variable(...)
test = tf.py_func(layerActivation, [x, W_to_hidden], [tf.float32])

with tf.Session() as sess:
    tf.initialize_all_variables().run()
    king_return = sess.run(test, feed_dict={x: qtrain})
Error: Unimplemented: Unsupported object type Tensor
Ideally, I could use this output_activation in TensorFlow's standard backprop algorithm, but I don't know whether that's possible.
Depending on the functionality required, you might be able to implement your operation as a composition of existing TensorFlow ops, without needing to use tf.py_func().
For example, the following works and will run on a GPU:
def layer_activation(input_tensor, weight_tensor):
    return tf.matmul(input_tensor, weight_tensor)

# ...
x = tf.placeholder(...)
W_to_hidden = tf.Variable(...)
test = layer_activation(x, W_to_hidden)
# ...
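For the quaternion case in particular, the core operation can likewise be built from existing ops. A hypothetical sketch (quaternions stored as (..., 4) tensors; the function name is mine, not a TensorFlow API):

import tensorflow as tf

def quaternion_multiply(p, q):
    # Hamilton product, composed only of differentiable, GPU-capable ops.
    pw, px, py, pz = tf.unstack(p, axis=-1)
    qw, qx, qy, qz = tf.unstack(q, axis=-1)
    return tf.stack([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ], axis=-1)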
The main reason to use tf.py_func() is if your operations cannot be implemented using TensorFlow operations, and you want to inject some Python code (e.g. using NumPy) that works on the actual values of your tensor.
However, if your mySigmoid() or myFunction() operations cannot be implemented in terms of existing TensorFlow operations, and you want to implement them on GPU, then—as keveman says—you will need to add a new op.
If you want to run your custom operations on GPUs, you have to provide a GPU implementation (a kernel) in C++. Look at the documentation on how to extend TensorFlow with custom operations, and especially the section on GPU support.
I've installed ioapiTools, a Python module for managing IOAPI-format files. The module is supposed to handle files and perform operations on them, including basic arithmetic. But something is wrong: when I try to, say, multiply an array by a float or an integer, the result is a zero-valued array (even though both the array and the float/integer are non-zero).
The module in question creates a temporary variable using cdms2 according to the following syntax:
import cdms2 as cdms, cdtime, MV2 as MV, cdutil
import numpy as N

# ..........

def __mul__(self, other):
    """
    Wrapper around cdms tvariable multiply
    """
    tmpVar = cdms.tvariable.TransientVariable.__mul__(self, other)
    iotmpVar = createVariable(tmpVar, self.ioM, id=self.id,
                              attributes=self.attributes, copyFlag=False)
    return iotmpVar
But the variable returns nothing but zeros.
Any ideas?
I tried to use ioapiTools; the latest version I found was 0.3.2, from http://www2-pcmdi.llnl.gov/Members/azubrow/ioapiTools/download-source-file.
Unfortunately, the code doesn't seem to have caught up with the evolution of CDAT, which now recommends numpy instead of Numeric. An automated translation tool may resolve some of the problems, but not all. For example, the class iovar (defined in ioapiTools.py:2103) now needs to have a __new__ method, as it is a subclass of a numpy masked array (I don't know how things were in Numeric). With that, I seem to have __mul__ working. I couldn't reproduce your problem, though, because without a __new__ method defined I couldn't even get an instance of iovar.
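For illustration, here is a minimal sketch of the kind of __new__ involved (the class name is invented; this is not the actual ioapiTools code): numpy's MaskedArray allocates its data in __new__, so a subclass that lacks one is never properly constructed.

import numpy as np

class IoVar(np.ma.MaskedArray):
    def __new__(cls, data, **kwargs):
        # Build the masked array first, then view it as our subclass.
        return np.ma.masked_array(data, **kwargs).view(cls)

v = IoVar([1.0, 2.0, 3.0])
print(v * 2.0)  # [2.0 4.0 6.0] -- multiplication now yields non-zero values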
I can pass what I've got to you if you still need it, but I am sure there are more problems hiding... let me know if you need it, though.