I am encountering some problems when translating the following code from MATLAB to Python:
Matlab code snippet:
x=M_test %M_test is a 1x3 array that holds the adjustment points for the function
y=R_test %R_test is also a 1x3 array
>> M_test=[0.513,7.521,13.781]
>> R_test=[2.39,3.77,6.86]
expo3= @(b,x) b(1).*(exp(-(b(2)./x).^b(3)));
NRCF_expo3= @(b) norm(y-expo3(b,x));
B0_expo3=[fcm28;1;1];
B_expo3=fminsearch(NRCF_expo3,B0_expo3);
Data_raw.fcm_expo3=(expo3(B_expo3,Data_raw.M));
The translated (python) code:
expo3=lambda x,M_test: x[0]*(1-exp(-1*(x[1]/M_test)**x[2]))
NRCF_expo3=lambda R_test,x,M_test: np.linalg.norm(R_test-expo3,ax=1)
B_expo3=scipy.optimize.fmin(func=NRCF_expo3,x0=[fcm28,1,1],args=(x,))
For clarity: the objective function 'expo3' should pass through the adjustment points defined by M_test.
'NRCF_expo3' is the function to be minimised, which is basically the error between R_test and the fitted exponential function.
When I run the code, I obtain the following error message:
B_expo3=scipy.optimize.fmin(func=NRCF_expo3,x0=[fcm28,1,1],args=(x,))
NameError: name 'x' is not defined
There are a lot of similar questions that I have perused.
If I delete the 'args' from the optimization function, as 'numpy/scipy analog of matlab's fminsearch' seems to suggest it is not necessary, I obtain the error:
line 327, in function_wrapper
return function(*(wrapper_args+args))
TypeError: <lambda>() missing 2 required positional arguments: 'x' and 'M_test'
There are a lot of other modifications that I have tried, following examples like 'Using scipy to minimize a function that also takes non-variational parameters' or those found in open-source examples, but nothing works for me.
I expect this is probably quite obvious, but I am very new to Python and I feel like I am looking for a needle in a haystack. What am I not seeing?
Any help would be really appreciated. I can also provide more code, if that is necessary.
I think you shouldn't use lambdas in your code; instead, make a single target function with your three parameters (see PEP 8). There is a lot of missing information in your post, but from what I can infer, you want something like this:
import numpy as np
from scipy.optimize import fmin

# Define parameters
M_TEST = np.array([0.513, 7.521, 13.781])
R_TEST = np.array([2.39, 3.77, 6.86])
X0 = np.array([10, 1, 1])  # whatever your variable fcm28 is

def nrcf_exp3(x, m_test, r_test):
    # model evaluated at the adjustment points
    expo3 = x[0] * (1 - np.exp(-(x[1] / m_test) ** x[2]))
    # norm of the residual between the data and the model
    return np.linalg.norm(r_test - expo3)

fmin(nrcf_exp3, X0, args=(M_TEST, R_TEST))
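If you want to keep the lambda style from the question instead, here is a minimal sketch of a working call (with fcm28 replaced by the placeholder value 10, which is an assumption): the optimization variable must be the first argument of the function, and the fixed data arrays are passed through args.

import numpy as np
from scipy.optimize import fmin

M_TEST = np.array([0.513, 7.521, 13.781])
R_TEST = np.array([2.39, 3.77, 6.86])

# b holds the three fit parameters, m the adjustment points
expo3 = lambda b, m: b[0] * (1 - np.exp(-(b[1] / m) ** b[2]))
# fmin varies only the first argument; the data comes in through args
nrcf_expo3 = lambda b, m, r: np.linalg.norm(r - expo3(b, m))

b_opt = fmin(nrcf_expo3, x0=[10, 1, 1], args=(M_TEST, R_TEST))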
I have programmed a framework that concatenates different (quite complicated) linear operators in an abstract manner. It overrides the operators "+, *, @, -" and chooses a path through the graph of compositions of functions. It isn't easy to debug, to say the least, but the control flow doesn't depend on the data itself, and of course every operation is done with TensorFlow. I was hoping to use tf.function to compile it and get a (hopefully much faster) tf.function via XLA. However, I get the following error:
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
@tf.function
def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
        added = my_constant * 2
The graph tensor has name: Reshape_2:0
I do not use tf.init_scope anywhere, and there are only 8 (!) Google results for this error, none of which gives me any clue how to debug it.
# initialize linear operators, these are python objects that override __matmul__ etc.
P = ...
A = ...
# initialize vectors, these are compatible python objects to P and A
x = ...
y = ...
# This function recreates the python object from its raw tensorflow data.
# Since it might be dependent on the spaces, and
# they also need to be set up for deserialization, the method is returned by a function of x.
# But since many vectors share the same spaces I was hoping to reuse it.
deserialize = x.deserialize()
# We want to compile the action on x to a function now
def meth(data):
    result = P @ (A.T @ A @ deserialize(data))
    # we return only the raw data
    return result.serialize()
meth = tf.function(meth,
                   # experimental_compile=True,
                   input_signature=(x.serialize_signature,),
                   ).get_concrete_function()
# we want to use meth now for many vectors
# executing this line throws the error
meth(x1)
meth(x2)
meth(x3)
Needless to say, this works without tf.function.
Did anyone stumble across this error and can help me understand it better? Or is the whole setup I'm trying not suitable for TensorFlow?
Edit:
The error was caused by implicitly capturing a constant tensor in the linear operator class via a local lambda. To be honest, the error message does suggest something like that, but it was difficult to understand which line in the code caused it, and it wasn't easy to find the bug in the end.
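For anyone hitting the same wall, here is a minimal sketch of that failure mode, with a made-up Scale class rather than the actual framework code (the exact error text also varies between TF versions): a tensor created during the first trace is captured by a lambda cached on the object, and a later retrace then sees a Graph tensor that belongs to a foreign graph.

import tensorflow as tf

class Scale:
    def __init__(self):
        self.cached = None
    def __call__(self, v):
        if self.cached is None:
            c = tf.constant(2.0)           # created inside the first trace: a Graph tensor
            self.cached = lambda t: c * t  # BUG: the lambda captures that Graph tensor
        return self.cached(v)

op = Scale()

@tf.function
def meth(v):
    return op(v)

meth(tf.constant(1.0))         # first trace: works
meth(tf.constant([1.0, 2.0]))  # new shape forces a retrace, which reuses the leaked tensor

Creating such constants eagerly, outside any trace, or rebuilding them on every trace instead of caching them, avoids the leak.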
I am getting this error when I'm trying to plot a function. This is some of my code:
def f(x):
    return f1
xspace = np.linspace(-3, 3, 100)
plt.ylim([-3, 3])
plt.plot(xspace, f(xspace))
Where f1 is calculated previously:
line1=x**16-1
line2=x**24-1
f1=sym.cancel(line1/line2)
My question is: when I do return f1, I get the error above, yet when I write the function out longhand instead of writing f1, it works, which seems weird to me as they are both the same. Do I always have to write the function out when defining it before graphing, or can I set it to a variable? It seems very tedious to do this, especially when I have to graph the second derivative for the next part.
Thanks in advance
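A minimal sketch of one way around this, assuming f1 is a SymPy expression: return f1 hands back the symbolic expression unevaluated, regardless of the array argument, so plt.plot cannot convert it to numbers. sym.lambdify turns the expression into a function that evaluates directly on NumPy arrays.

import numpy as np
import sympy as sym
import matplotlib.pyplot as plt

x = sym.symbols('x')
line1 = x**16 - 1
line2 = x**24 - 1
f1 = sym.cancel(line1 / line2)

# convert the symbolic expression into a NumPy-callable function
f = sym.lambdify(x, f1, 'numpy')

xspace = np.linspace(-3, 3, 100)
plt.ylim([-3, 3])
plt.plot(xspace, f(xspace))
plt.show()

Derivatives can be handled the same way, e.g. sym.lambdify(x, sym.diff(f1, x, 2), 'numpy').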
I was studying the AdaDelta optimization algorithm, so I tried to implement it in Python, but there is something wrong with my code, since I get the following error:
AttributeError: 'numpy.ndarray' object has no attribute 'sqrt'
I did not find anything about what is causing that error. According to the message, it comes from this line of code:
rms_grad = np.sqrt(self.e_grad + epsilon)
This line is similar to this equation:
$RMS[g]_t = \sqrt{E[g^2]_t + \epsilon}$
I got the core equations of the algorithm in this article: http://ruder.io/optimizing-gradient-descent/index.html#adadelta
Just one more detail: I'm initializing the $E[g^2]_t$ matrix like this:
self.e_grad = (1 - mu)*np.square(nabla)
Where nabla is the gradient. Similar to this equation:
$E[g^2]_t = \gamma E[g^2]_{t-1} + (1 - \gamma)\, g_t^2$
(the first term is equal to zero in the first iteration, just like the line of code above)
So I want to know whether I'm initializing the E matrix the wrong way or doing the square root inappropriately. I tried to use the pow() function, but it doesn't work. If anyone could help me with this I would be very grateful; I've been trying this for weeks.
Additional details requested by andersource:
Here is the entire source code on github: https://github.com/pedrovbeltran/neural-networks-and-deep-learning/blob/experimental/modified-networks/network2_with_adadelta.py .
I think the problem is that self.e_grad_w is an ndarray of shape (2,) which further contains two additional ndarrays with 2d shapes, instead of directly containing data. This seems to be initialized in e_grad_initializer, in which nabla_w has the same structure. I didn't track where this comes from all the way back, but I believe once you fix this issue the problem will be resolved.
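That diagnosis matches the error text: on an object-dtype array, NumPy ufuncs such as np.sqrt fall back to calling a method of the same name on each element, and a plain ndarray has no .sqrt method. A minimal sketch that reproduces the error:

import numpy as np

# an object array holding two differently-shaped ndarrays,
# like the (2,) structure of self.e_grad_w
bad = np.empty(2, dtype=object)
bad[0] = np.ones((3, 2))
bad[1] = np.ones((2, 1))

np.sqrt(bad)  # AttributeError: 'numpy.ndarray' object has no attribute 'sqrt'

Keeping the per-layer arrays in a plain list and applying np.sqrt to each one separately avoids the object dtype.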
In the following simple matplotlib code:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0,5,0.1)
y = np.sin(1.33*x)
x1, y1 = np.meshgrid(x, y)
data = np.sin(x1) + y1**4
im = plt.imshow(data)
x = im.make_image()
...
I get the following inexplicable error in the last statement:
"TypeError: make_image() takes at least 2 arguments (1 given)"
And I get an even more ridiculous error if I use an argument, e.g.
x = im.make_image(magnification=2.0)
"TypeError: make_image() takes at least 2 arguments (2 given)".
This is one of the most ridiculous programming errors I have ever come upon!
I have found the missing ingredient: it's a renderer. E.g.
r = plt.gcf().canvas.get_renderer()
x = im.make_image(r, magnification=2.0)
This works. Meanwhile, however, I found out with the help of a commenter here that this make_image function is not of any real use and is not much supported. Image magnification must be obtained by other means, e.g. through the axes.
So I consider the question solved. Thank you.
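For reference, a minimal sketch of magnifying the displayed image through the figure and axes instead of make_image (assuming that was the intended outcome):

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 5, 0.1)
y = np.sin(1.33 * x)
x1, y1 = np.meshgrid(x, y)
data = np.sin(x1) + y1**4

# a larger figure (or a higher dpi) magnifies the rendered image
fig, ax = plt.subplots(figsize=(8, 8), dpi=100)
ax.imshow(data, interpolation='nearest')
plt.show()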
See e.g. this question for why something like
TypeError: method() takes at least n arguments (n given)
is not as ridiculous as it may sound at first sight.
Here you are calling make_image without any positional argument. The signature, however, is
make_image(renderer, magnification=1.0, unsampled=False)
So you are missing the renderer argument.
In Python 3.6 the error is a little clearer. It would say something like
TypeError: make_image() missing 1 required positional argument: 'renderer'
which makes the problem easier to spot.
Apart from that, the question remains unclear about what the desired outcome is, so that's about all one can say at this point.
I am learning the optimization functions in scipy. I want to use the BFGS algorithm, where the gradient of the function can be provided. As a basic example, I want to minimize the following function: f(x) = x^T A x, where x is a vector.
When I implement this in python (see implementation below), I get the following error:
message: 'Desired error not necessarily achieved due to precision loss.'
Tracing the error back to its source led me to the scipy function that performs a line search to determine the step length, but I have no clue why it fails on such a simple example.
My code to implement this is as follows:
# coding: utf-8
from scipy import optimize
import numpy as np

# Matrix to be used in the function definitions
A = np.array([[1., 2.], [2., 3.]])

# Objective function to be minimized: f = x^T A x
def f(x):
    return np.dot(x.T, np.dot(A, x))

# gradient of the objective function, df = 2*A*x
def fp(x):
    return 2 * np.dot(A, x)

# Initial value of x
x0 = np.array([1., 2.])

# Try with BFGS
xopt = optimize.minimize(f, x0, method='bfgs', jac=fp, options={'disp': 1})
The problem here is that you are looking for a minimum, but the value of your target function f(x) is not bounded in the negative direction.
At first sight, your problem looks like a basic example of a convex target function, but if you take a closer look, you will realize it is not.
For convexity, A has to be positive (semi-)definite. This condition is violated in your case: det(A) = 1·3 − 2·2 = −1 < 0, so A has one negative eigenvalue and f decreases without bound along the corresponding eigenvector.
If you instead pick A = np.array([[2., 2.], [2., 3.]]), everything will be fine again.
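A quick sketch that verifies this: the original A has a negative eigenvalue, while the modified A is positive definite, and BFGS then converges to the minimum at the origin.

import numpy as np
from scipy import optimize

A_bad = np.array([[1., 2.], [2., 3.]])
print(np.linalg.eigvalsh(A_bad))  # approx [-0.24, 4.24] -> indefinite, f unbounded below

A = np.array([[2., 2.], [2., 3.]])
print(np.linalg.eigvalsh(A))      # approx [0.44, 4.56] -> positive definite

def f(x):
    return x @ A @ x

def fp(x):
    return 2 * A @ x

xopt = optimize.minimize(f, np.array([1., 2.]), method='BFGS', jac=fp)
print(xopt.x)  # close to [0., 0.]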