Let's say I have the following Python code:
y = 2

def f(x, y):
    y = y**2
    return x*y

for i in range(5):
    print(f(2, y))
Is it somehow possible to make the change to y within f global while still passing it to f as an argument?
I know that
y = 2

def f(x, y):
    global y
    y = y**2
    return x*y

for i in range(5):
    print(f(2, y))
will not work because y cannot be both global and a function parameter.
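You can check this for yourself by compiling such a definition; the exact wording varies between Python versions, but CPython rejects it at compile time. A minimal check:

```python
# Declaring a function parameter as global is a compile-time error.
src = """
def f(x, y):
    global y
    y = y**2
    return x*y
"""

try:
    compile(src, "<demo>", "exec")
except SyntaxError as e:
    # CPython reports something like: name 'y' is parameter and global
    print("SyntaxError:", e.msg)
```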
The 'ugly' solution I have is simply not to pass y as an argument:
y = 2

def f(x):
    global y
    y = y**2
    return x*y

for i in range(5):
    print(f(2))
but I am not satisfied with this, as I would like to explicitly pass y to the function and basically call it by reference.
The background of this question is that I would like to use scipy's odeint, and I have to use sparse matrices in the computation of the derivative that also change with time.
If I want to avoid converting these to numpy and back to sparse at every timestep, I have to store them globally and modify them from within the function. Because the output of the function is dictated by odeint (it has to be the derivative), it is not an option to include these matrices in the output (and I don't know how that would work anyway, because I'd have to mix scalars and matrices in the output array).
It would be nice if I could somehow pass them as a parameter but make the changes to them from within the function globally permanent.
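One standard workaround, sketched here without odeint: Python passes object references, so if the state lives in a mutable container (a dict, a list, or a sparse matrix mutated in place), changes made inside the function are visible to the caller without any global statement. The names below are made up for illustration:

```python
# Sketch: keep mutable state in a container and mutate it in place.
# 'state' is a hypothetical container standing in for the sparse matrices.
state = {"y": 2}

def f(x, state):
    state["y"] = state["y"] ** 2   # in-place change, visible to the caller
    return x * state["y"]

for i in range(3):
    print(f(2, state))   # 8, 32, 512 -- the change persists between calls
```

scipy.integrate.odeint accepts extra objects for the derivative function through its args parameter, so such a container can be threaded through the solver the same way.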
Just use a different name for the formal argument to f:
y = 2

def f(x, y2):
    global y
    y = y2**2
    return x*y

for i in range(5):
    print(f(2, y))
If I understand your intent, then I believe this should work for you.
You cannot do this exactly, for the reason you have described: a name cannot be both a global and a function parameter at the same time.
However, one solution would be to do this:
y_default = 2

def f(x, y=None):
    if y is None:
        y = y_default
    y = y**2
    return x*y
This will do what you want, as you can now call f(2) or f(2,3)
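Run as-is, the two call forms behave like this (a quick check):

```python
y_default = 2

def f(x, y=None):
    if y is None:
        y = y_default   # fall back to the module-level value
    y = y**2
    return x*y

print(f(2))     # 8  (uses y_default = 2)
print(f(2, 3))  # 18 (uses the explicitly passed y = 3)
```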
Essentially the problem is that y is both global and local, as the error message suggests. You avoid the local-variable issue by introducing a local variable z; you can still pass y in as z, which yields the desired result. Note that the global declaration must come before any use of y inside the function:

y = 2

def f(x, z):
    global y
    y = z**2
    return x*y

for i in range(5):
    print(f(2, y))
My first .py file contains the function whose roots I want to find, like this:
def myfun(unknowns, a, b):
    x = unknowns[0]
    y = unknowns[1]
    eq1 = a*y + b
    eq2 = x**b
    z = x*y + y/x
    return eq1, eq2
And my second one finds the values of x and y from a starting point, given the parameter values of a and b:
a = 3
b = 2
x0 = 1
y0 = 1
x, y = scipy.optimize.fsolve(myfun, (x0, y0), args=(a, b))
My question is: I actually need the value of z after plugging in the found x and y, and I don't want to repeat z = x*y + y/x + ... again; in my real case it's an intermediate variable without an explicit expression.
However, I cannot replace the last line of the function with return eq1, eq2, z, since fsolve only finds the roots of eq1 and eq2.
The only solution I have now is to rewrite the function so it returns z, and then plug in x and y to get z.
Is there a good solution to this problem?
I believe that's the wrong approach. Since you have z as a direct function of x and y, what you need is to retrieve those two values. In the listed case, it's easy enough: given b you can derive x as the inverse of eq2; likewise, given a, you can invert eq1 to get y.
For clarity, I'm changing the names of your return variables:
ret1, ret2 = scipy.optimize.fsolve(myfun, (x0, y0), args=(a, b))
Now, invert the two functions:
# eq2 = x**b
x = ret2**(1/b)
# eq1 = a*y+b
y = (ret1 - b) / a
... and finally ...
z = x*y + y/x
Note that you should remove the z computation from your function, as it serves no purpose.
I am trying to make a complete function that takes in an expression:
def graph(formula):
    fig = plt.figure()
    ax = fig.gca(projection='3d')
    X = np.arange(-50, 50, 0.5)
    X = X[X != 0]
    Y = np.arange(-50, 50, 0.5)
    Y = Y[Y != 0]
    X, Y = np.meshgrid(X, Y)
    Z = [[0], [0]]
    expression = "Z=" + formula
    exec(expression)
Now I want to do graph("X+Y"), and then it should do Z = X + Y. It doesn't do that. I have tried doing the same with eval instead of exec, but no luck.
It sounds like you want to pass a "formula" that computes Z from X and Y. Rather than using exec or eval and running into issues with namespaces, a better way to do that is to pass in a function. As user s3cur3 commented, an easy way to do that is with a lambda expression:
def graph(func):
    # set up X and Y up here
    Z = func(X, Y)
    # do stuff with Z after computing it?

graph(lambda X, Y: X + Y)
If you need more complicated logic than you can fit in a lambda, you can write out a full function:
def my_func(x, y):  # this could be done in a lambda too, but let's pretend it couldn't
    if random.random() < 0.5:
        return x + y
    return x - y

graph(my_func)
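A self-contained sketch of the pattern (plotting left out, only the grid setup and Z computation shown; assumes numpy):

```python
import numpy as np

def graph(func):
    # Build the grid, then let the passed-in function compute Z.
    X = np.arange(-50, 50, 0.5)
    X = X[X != 0]
    Y = np.arange(-50, 50, 0.5)
    Y = Y[Y != 0]
    X, Y = np.meshgrid(X, Y)
    Z = func(X, Y)
    return Z  # a real version would plot X, Y, Z here

Z = graph(lambda X, Y: X + Y)
print(Z.shape)  # (199, 199)
```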
I assume you mean to pass to your function like so (to calculate Z),
def graph(formula):
    ...

graph(X+Y)
...
If so, why not just pass two separate values (or arrays of values)? Such as,
def graph(x, y):
    ...

graph(4, 5)
...
or,
mypoints = [[1, 3], [4, 8], [8, 1], [10, 3]]  # 2-D array

def graph(XY):
    for point in XY:
        x = point[0]
        y = point[1]
        ...  # everything else

graph(mypoints)
...
For a full example of this, check out Method: Stats.linregress( ) in this article (scroll down a bit).
Otherwise, you could:
pass the data as an array (a table of X, Y values, if you will).
if it is a super complex formula that will have a bunch of attributes and methods attached to it (such as involving complex numbers), perhaps create a Formula class.
You could also write a function using the lambda syntax. This would give you the freedom of having an object (as I suggested above) as well as a "function" (of course the two are practically synonymous here). Read more in the docs.
I am trying to define a function of n variables to fit to a data set. The function looks like this.
(image: the Kelly function, a ratio of two even polynomials in x)
I then want to find the optimal a_i's and b_j's to fit my data set using scipy.optimize.leastsq.
Here's my code so far.
from scipy.optimize import leastsq
import numpy as np

def kellyFunc(a, b, x):  # Function to fit.
    top = 0
    bot = 0
    a = [a]
    b = [b]
    for i in range(len(a)):
        top = top + a[i]*x**(2*i)
        bot = bot + b[i]*x**(2*i)
    return top/bot

def fitKelly(x, y, n):
    line = lambda params, x: kellyFunc(params[0,:], params[1,:], x)  # Lambda function to minimize
    error = lambda params, x, y: line(params, x) - y  # Kelly - dataset
    paramsInit = [[1 for x in range(n)] for y in range(2)]  # define all ai and bi = 1 for initial guess
    paramsFin, success = leastsq(error, paramsInit, args=(x, y))  # run leastsq optimization
    # line of best fit
    xx = np.linspace(x.min(), x.max(), 100)
    yy = line(paramsFin, xx)
    return paramsFin, xx, yy
At the moment it's giving me the error:
"IndexError: too many indices" because of the way I've defined my initial lambda function with params[0,:] and params[1,:].
There are a few problems with your approach that make me write a full answer.
As for your specific question: leastsq doesn't really expect multidimensional arrays as parameter input. The documentation doesn't make this clear, but parameter inputs are flattened when passed to the objective function. You can verify this by using full functions instead of lambdas:
from scipy.optimize import leastsq
import numpy as np

def kellyFunc(a, b, x):  # Function to fit.
    top = 0
    bot = 0
    for i in range(len(a)):
        top = top + a[i]*x**(2*i)
        bot = bot + b[i]*x**(2*i)
    return top/bot

def line(params, x):
    print(repr(params))  # params is 1d!
    params = params.reshape(2, -1)  # need to reshape back
    return kellyFunc(params[0,:], params[1,:], x)

def error(params, x, y):
    print(repr(params))  # params is 1d!
    return line(params, x) - y  # pass it on, reshape in line()

def fitKelly(x, y, n):
    #paramsInit = [[1 for x in range(n)] for y in range(2)]  # define all ai and bi = 1 for initial guess
    paramsInit = np.ones((2, n))  # better
    paramsFin, success = leastsq(error, paramsInit, args=(x, y))  # run leastsq optimization
    # line of best fit
    xx = np.linspace(x.min(), x.max(), 100)
    yy = line(paramsFin, xx)
    return paramsFin, xx, yy
Now, as you see, the shape of the params array is (2*n,) instead of (2, n). By doing the reshape ourselves, your code (almost) works. Of course, the print calls are only there to show you this fact; they are not needed for the code to run (and would produce a bunch of needless output in each iteration).
See my other changes, related to other errors: you had a=[a] and b=[b] in your kellyFunc, for no good reason. This turned the input arrays into lists containing arrays, which made the next loop do something very different from what you intended.
Finally, the sneakiest error: you have input variables named x, y in fitKelly, and then you use x and y as loop variables in a list comprehension. Be aware that this only works as you expect in Python 3; in Python 2 the loop variables of list comprehensions leak into the enclosing scope, overwriting your input variables named x and y.
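The flattening that leastsq performs can be reproduced with plain numpy, which makes the reshape fix easy to see in isolation:

```python
import numpy as np

# leastsq ravels whatever initial guess it is given before calling the
# objective function; reshaping recovers the (2, n) layout.
n = 3
params_init = np.ones((2, n))
flat = params_init.ravel()      # what the objective function receives
print(flat.shape)               # (6,)

params = flat.reshape(2, -1)    # recover rows a and b
print(params.shape)             # (2, 3)
```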
I am very new to object-oriented programming in Python and I am working to implement the accepted answer to this question in Python (it's originally in R).
I have a simple question - is it possible to access the output of one method for use in another method without first binding the output to self? I presume the answer is "no" - but I also imagine there is some technique that accomplishes the same task that I am not thinking of.
My start to the code is below. It works fine until you get to the kappa method. I would really like to be able to define kappa as a simple extension of curvature (since it's just the absolute value of the same), but I'm not particularly interested in adding it to the list of attributes. I may just be overthinking this too; either something like a closure is possible in Python, or adding to the attribute list is the Pythonic thing to do?
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

class Road(object):
    def __init__(self, x, y):  # x, y are lists
        # Raw data
        self.x = x
        self.y = y
        # Calculate and set cubic spline functions
        n = range(1, len(x)+1)
        fx = InterpolatedUnivariateSpline(n, x, k=3)
        fy = InterpolatedUnivariateSpline(n, y, k=3)
        self.fx = fx
        self.fy = fy

    def curvature(self, t):
        # Calculate and return the curvature
        xp = self.fx.derivative(); yp = self.fy.derivative()
        xpp = xp.derivative(); ypp = yp.derivative()
        vel = np.sqrt(xp(t)**2 + yp(t)**2)  # Velocity
        curv = (xp(t)*ypp(t) - yp(t)*xpp(t)) / (vel**3)  # Signed curvature
        return curv

    def kappa(self, t):
        return abs(curv)
Just call the other method:
class Road(object):
    ...

    def kappa(self, t):
        return abs(self.curvature(t=t))
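A self-contained toy version of the same idea, with the spline machinery stripped out so only the delegation is visible (the names and the curvature formula are illustrative, not the real ones):

```python
# Toy sketch: kappa() delegates to curvature() instead of storing the
# intermediate result on self.
class Curve(object):
    def curvature(self, t):
        return -2.0 * t  # stand-in for the real signed-curvature formula

    def kappa(self, t):
        return abs(self.curvature(t))

c = Curve()
print(c.curvature(3))  # -6.0
print(c.kappa(3))      # 6.0
```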
I hope I'm not repeating a question, but I couldn't find one...
I'm trying to run a function with the same default parameter many times. I understand why the f function changes the x0 array, but I don't really understand why the g function receives a different argument every time (y0 is constant).
I would be thankful if anyone could explain this behaviour to me and give me a tip on how to implement what I want (basically, at the end I would like to have y == np.array([0, 30, 0])).
import numpy as np

x0 = np.zeros(3)
y0 = np.zeros(3)

def f(i, x = x0):
    x[1] += i
    return x

def g(i, y = y0.copy()):
    print "y that goes to g (every time is different) \n", y
    y[1] += i
    return y

print "x0 before f \n", x0
x = f(5)
print "x0 after f is the same as x \n", x0, "\n", x

print "y0 before g \n", y0
for i in [10, 20, 30]:
    y = g(i)
print "y0 after g does not change, but y is NOT as I would expect! \n", y0, "\n", y
Default arguments to functions are evaluated only once, when the function is defined. This means that your function definition is equivalent to:
y0_ = y0.copy()

def g(i, y = y0_):
    print "y that goes to g (every time is different) \n", y
    # etc
Which explains why your y argument changes every time.
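The standard fix, if the goal is a fresh copy on every call, is the None-sentinel idiom (sketched here with plain lists instead of numpy arrays so it is dependency-free):

```python
y0 = [0, 0, 0]

def g(i, y=None):
    # Copy y0 at call time, not at definition time.
    if y is None:
        y = list(y0)
    y[1] += i
    return y

for i in [10, 20, 30]:
    y = g(i)

print(y)   # [0, 30, 0] -- each call started from a fresh copy
print(y0)  # [0, 0, 0]  -- untouched
```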
"but I don't really understand why the g function takes every time different argument"
def g(i, y = y0.copy()):
    ....
Your y0 is constant, but the copy of y0 (a different object) is created only once, when g() is defined, so you can't change y0 through the function g(). Just change
y = y0.copy()
to
y = y0