I want to solve a system of equations in Python. The first code works as expected, but the second doesn't, although it looks very much the same. The second code gives me the error: "name 'x' is not defined". I don't understand why this happens in the second code but not in the first. The approach in the first code is not general enough for me, because I need this to work for systems with different numbers of equations.
Thanks for the help!
#first code:
import scipy.optimize

eq1 = 'x[0]**2 + x[0]*x[1]-10'
eq2 = 'x[1]+3*x[0]*x[1]**2-57'

def equat(x):
    return [eval(eq1), eval(eq2)]

res = scipy.optimize.root(equat, x0=(0, 0))
res.x
#second code:
eqn = ['x[0]**2 + x[0]*x[1]-10',
       'x[1]+3*x[0]*x[1]**2-57']

def equat(x):
    return [eval(eqi) for eqi in eqn]

res = scipy.optimize.root(equat, x0=(0, 0))
res.x

NameError: name 'x' is not defined
In Python, functions are objects, and you can store them in lists, etc. as if they were any other kind of object. In particular, you can retrieve them from a list and call the result, regardless of naming.
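For example (toy names of my choosing):

def double(x):
    return 2 * x

ops = [double, abs]  # functions stored in a list like any other objects
ops[0](21)           # retrieve one from the list and call it -> 42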
eval is a crude tool that is almost never what you really want to be using in Python.
So, let's build the relevant functions:

def eq1(x):
    return x[0]**2 + x[0]*x[1] - 10

def eq2(x):
    return x[1] + 3*x[0]*x[1]**2 - 57

And put them in a list:

eqns = [eq1, eq2]

And use a list comprehension to apply the functions from the main function we hand to the solver:

def equat(x):
    return [eqn(x) for eqn in eqns]

And do the optimization as before:

res = scipy.optimize.root(equat, x0=(0, 0))
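As a quick check (my addition, not part of the original answer): this particular system has a root at x = (2, 3), so the residuals there should vanish:

print(res.x)         # should be close to [2., 3.]
print(equat(res.x))  # both residuals near zero confirm the root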
When you use eval, you should specify the scope of the variables it uses. The following code worked:
import scipy
from scipy import optimize

#second code:
eqn = ['x[0]**2 + x[0]*x[1]-10',
       'x[1]+3*x[0]*x[1]**2-57']

def equat(x):
    return [eval(eqi, {"x": x}) for eqi in eqn]

res = scipy.optimize.root(equat, x0=(0, 0))
res.x
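If you do keep the string-based interface, a further refinement (a sketch of mine, not from the original answer) is to compile each expression once, so eval does not re-parse the strings on every call the solver makes:

import scipy.optimize

eqn = ['x[0]**2 + x[0]*x[1]-10',
       'x[1]+3*x[0]*x[1]**2-57']

# compile each string once; eval of a code object skips re-parsing
compiled = [compile(e, '<equation>', 'eval') for e in eqn]

def equat(x):
    return [eval(c, {"x": x}) for c in compiled]

res = scipy.optimize.root(equat, x0=(0, 0))
print(res.x)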
I know that there are far simpler ways to calculate the square of a number and store it in an array, but for the sake of another problem I need to understand why nothing happens in this code and how its structure works (is the return(a) necessary?):
s = [1, 2, 3, 4, 5]

def square(x):
    return x*x
    def iterate(b):
        sol = []
        for b in s:
            a = square(b)
            return(a)
            sol.append(a)
        print(sol)
The goal is to store the squares in sol: sol = [1, 4, 9, 16, 25]. But the code runs without printing anything. What makes the following code work, and not the previous one?
s = [1, 2, 3, 4, 5]

def square(x):
    return x*x

sol = []
for b in s:
    a = square(b)
    sol.append(a)
print(sol)
(My problem involves curve fitting, and this structure doesn't fit my needs.)
The problem is that you define iterate within square, but you never call iterate. It would be better to make iterate a separate function that calls square:

values = [1, 2, 3, 4, 5]  # do not call your variable set: it shadows the Python built-in
def square(x):
    return x*x

def iterate(values):
    solution = []
    for value in values:
        value_squared = square(value)
        solution.append(value_squared)
    return solution
You could also skip defining iterate altogether by using a list comprehension:
[square(value) for value in values]
Edit:
To answer your other questions, here is your code:
s = [1, 2, 3, 4, 5]

def square(x):
    return x*x
    def iterate(b):
        sol = []
        for b in s:
            a = square(b)
            return(a)
            sol.append(a)
        print(sol)
In square, you never call iterate, so that part of the code never runs.

If you add a call to iterate within square, you will end up in an infinite loop, because iterate in turn calls square. Note also that you always iterate over your list s and hit return(a) on the first pass, which means that inside iterate, square(b) will only ever be square(1), and sol.append(a) and print(sol) are never reached.

Within iterate you use the global variable s, but it would be better to restructure your code so that it takes s as input.
If you are learning about inner functions, you could define iterate and, within it, define square:

values = [1, 2, 3, 4, 5]

def iterate(values):
    def _square(x):
        return x*x
    solution = []
    for value in values:
        value_squared = _square(value)
        solution.append(value_squared)
    return solution
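Calling it returns the expected squares:

print(iterate(values))  # [1, 4, 9, 16, 25]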
Updated Question
Following on from my original post, and using @Attack68's code, I have created a program that successfully evolves the function with a choice of multiplicative functions based on a random variable. However, I am now receiving an error saying that list indices must be integers (even though I'm fairly sure they are). I'm not sure what has happened. The code is as follows:
import numpy as np
import scipy.integrate as integrate

x = np.linspace(0.0, 1.0, 100)
n = 10  # iterations
d = 700.0

def f(x):
    return np.sin(x)

def g(x, list_):
    return np.cos(x)*apply(x, list_)

base = [f, g]
list_ = list()
for i in range(n):
    testvar = np.random.randint(1, 100, 1)
    if testvar > 50 and i != 0:
        func_idx = 0  # choose a random operation: 0=ten, 1=inv
    else:
        func_idx = 1
    list_.append(func_idx)

# now you have a list of indexes referencing your base functions so you can apply them:
def apply(x, list_):
    y = 1
    for i in range(len(list_)):
        y *= base[list_[i]](x)
    return y

print(list_)
#testint=integrate.quad(apply(x,list_),-d,d)[0]
#print(testint)
print(apply(list_, x))
I am now getting the error:
TypeError: list indices must be integers or slices, not numpy.float64
I am also attempting to integrate the new function after each iteration, but it seems that this form of the function is not callable by scipy's quad integrator. Any suggestions on how to integrate the evolving function on each iteration would also be appreciated.
Original:
I am creating a simulation in Python where I consider a function that evolves over a loop. This function starts off defined as:

def f(x):
    return 1.0

So it is simply a flat distribution. After each iteration of the loop, I want the function to be redefined depending on certain (random) conditions. It could be multiplied by cos(b*x), or it could be multiplied by some function A(x). The evolution will not be the same each time due to the randomness, so I cannot simply multiply by the same value each time.
The progression in one instance could be:
f(x)----> f(x)*A(x)----> f(x)*A(x)*A(x)...
but in another instance it could be:
f(x)----> f(x)*A(x)----> f(x)*A(x)*cos(x)...
or
f(x)----> f(x)*cos(x)----> f(x)*cos(x)*cos(x)...
etc.
after each of n iterations of this evolution, I have to compute an integral related to the function, so I essentially need to update the function after each iteration so that it can be called by scipy's quad integrator.

I have tried to use arrays to manipulate the distribution instead, and it works as far as the function evolution goes, but upon integration it gives an incorrect result with numpy.trapz, and I cannot work out why. scipy's quad integrator is more accurate anyway, and I had managed to get it to work previously, but only for the first iteration; it requires a function-based input, so without this function evolution I cannot use it.

If someone could show me if/how this function evolution is possible, that'd be great. If it is not possible, perhaps someone could help me understand what numpy.trapz actually does, so I can work out how to fix it?
How about this:
class MyFunction:
    def __init__(self):
        def f1(x):
            return 1.0
        self.functions = [f1]

    def append_function(self, fn):
        self.functions.append(fn)

    def __call__(self, x):
        product = 1.0
        for f in self.functions:
            product *= f(x)
        return product
This object starts off as simply returning 1.0. Later you add more functions and it returns the product of all of them.
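A minimal usage sketch (my addition, not from the original answer), showing that the object stays callable by scipy's quad after each evolution step:

import numpy as np
import scipy.integrate as integrate

fn = MyFunction()
print(integrate.quad(fn, 0, 1)[0])  # integral of 1.0 over [0, 1] -> 1.0

fn.append_function(np.cos)          # evolve: multiply by cos(x)
print(integrate.quad(fn, 0, 1)[0])  # integral of cos(x) over [0, 1] = sin(1) ~ 0.841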
Your description suggests your iterated values are combined through a product and are not in fact a composition of functions. A simple way of recording these is to have a set of base functions:
import numpy as np
import scipy.integrate as integrate  # avoid shadowing the built-in int

def two(x):
    return x*2

def inv(x):
    return 1/x

base = [two, inv]
funcs = np.random.choice(base, size=10)

def apply(x, funcs):
    y = 1
    for func in funcs:
        y *= func(x)
    return y

print('function value at 1.5 ', apply(1.5, funcs))
answer = integrate.quad(apply, 1, 2, args=(funcs,))
print('integration over [1,2]: ', answer)
I am reading through A Concise Introduction to Programming in Python by Mark J. Johnson, and I stumbled upon a piece of code that uses darts to estimate the area under a graph. The code works perfectly fine, but I am getting confused as to why you would pass a function as a parameter if you could just call the function anyway.
from random import uniform
from math import exp

def area(function, a, b, m, n=1000):  # changed parameter name for better understanding
    hits = 0
    total_area = m * (b - a)
    for i in range(n):
        x = uniform(a, b)
        y = uniform(0, m)
        if y <= function(x):
            hits += 1
    frac = hits / float(n)
    return frac * total_area

def f(x):
    return exp(-x**2)

def g(x):  # new function
    return exp(-x**2) + 2

def main():
    print(area(f, 0, 2, 1))
    print(area(g, 0, 2, 1))

main()
He states that passing a function as a parameter is 'powerful', but I can't see why.
f is but one graph function. It is not the only function that you could define to create a graph.
You can also define other functions:
def g(x):
    return 2 * x ** 2 + x + 5

and pass it into area() without having to alter that function. area() is generic enough to calculate the area under different graph functions, and all you need to do is pass in the graph function to have it calculate that area.
Had you hardcoded f instead of using a parameter, you could no longer do that.
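To make that concrete, a short sketch (my example, not from the book) reusing the same area() machinery with different functions, including an inline lambda; note that m must bound the function's maximum on [a, b]:

print(area(f, 0, 2, 1))                              # exp(-x**2) stays below 1
print(area(g, 0, 2, 3))                              # exp(-x**2) + 2 stays below 3
print(area(lambda x: 2 * x ** 2 + x + 5, 0, 2, 15))  # the polynomial above peaks at 15 on [0, 2]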
I think the answer should be obvious, especially in this case: You can write a generic function for something like calculus integration that works on any function you pass in. You can modify the function you're integrating by supplying a new function. Likewise for other operations like graphing.
How do I tell fmin_cobyla about a matrix constraint Ax-b >= 0? It won't take it as a vector constraint:
cons = lambda x: dot(A,x)-b
thanks.
Since each constraint must return a scalar value, you could dynamically define the scalar constraints like this:

constraints = []
for i in range(len(A)):
    def f(x, i=i):
        return np.dot(A[i], x) - b[i]
    constraints.append(f)

(The i=i default argument binds the current value of i at definition time; without it, every constraint would close over the same variable and see only its final value.)
For example, if we lightly modify the example from the docs:

import numpy as np
from scipy import optimize

def objective(x):
    return x[0]*x[1]

A = np.array([(1, 2), (3, 4)])
b = np.array([1, 1])

constraints = []
for i in range(len(A)):
    def f(x, i=i):
        return np.dot(A[i], x) - b[i]
    constraints.append(f)

def constr1(x):
    return 1 - (x[0]**2 + x[1]**2)

def constr2(x):
    return x[1]

x = optimize.fmin_cobyla(objective, [0.0, 0.1], constraints + [constr1, constr2],
                         rhoend=1e-7)
print(x)

yields

[-0.6 0.8]
PS. Thanks to @seberg for pointing out an earlier mistake.
Actually, the documentation says "Constraint functions"; it simply expects a list of functions, each returning a single value.

So if you want to do it all in one go, just modify the plain Python code of fmin_cobyla: you will find that it defines a wrapping function around your constraint functions, so it is easy. The Python code is really very short anyway, just a small wrapper around scipy.optimize._cobyla.minimize.
On a side note, if the function you are optimizing is linear (or quadratic), like your constraints, there are probably much better solvers out there.
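For instance, if the objective were linear too, scipy.optimize.linprog handles Ax - b >= 0 directly once you negate it into linprog's A_ub x <= b_ub form (a sketch of mine; the objective vector c here is hypothetical):

import numpy as np
from scipy.optimize import linprog

A = np.array([(1, 2), (3, 4)])
b = np.array([1, 1])
c = np.array([1.0, 1.0])  # hypothetical linear objective: minimize x0 + x1

# Ax - b >= 0  is equivalent to  (-A) x <= -b
res = linprog(c, A_ub=-A, b_ub=-b)  # linprog's default bounds keep x >= 0
print(res.x)                        # for this data: [0.  0.5]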
This is something that has bugged me for some time. I learnt Haskell before I learnt Python, so I've always been fond of thinking of many computations as a mapping onto a list. This is beautifully expressed by a list comprehension (I'm giving the pythonic version here):
result = [ f(x) for x in list ]
In many cases though, we want to execute more than a single statement on x, say:
result = [ f(g(h(x))) for x in list ]
This very quickly gets clunky, and difficult to read.
My normal solution to this is to expand this back into a for loop:
result = []
for x in list:
    x0 = h(x)
    x1 = g(x0)
    x2 = f(x1)
    result.append(x2)
One thing about this that bothers me no end is having to initialize the empty list 'result'. It's a triviality, but it makes me unhappy. I was wondering if there were any alternative, equivalent forms. One way may be to use a local function (is that what they're called in Python?):
def operation(x):
    x0 = h(x)
    x1 = g(x0)
    x2 = f(x1)
    return x2

result = [ operation(x) for x in list ]
Are there any particular advantages/disadvantages to either of the two forms above? Or is there perhaps a more elegant way?
You can easily do function composition in Python.
Here's a demonstration of one way to create a new function that is the composition of two existing functions.
>>> def comp(a, b):
...     def compose(args):
...         return a(b(args))
...     return compose
...
>>> def times2(x): return x*2
...
>>> def plus1(x): return x+1
...
>>> comp(times2, plus1)(32)
66
Here's a more complete recipe for function composition. This should make it look less clunky.
Follow the style that most matches your tastes.
I would not worry about performance; only in case you really see some issue you can try to move to a different style.
Here are some other possible suggestions, in addition to your proposals:

result = [f(
            g(
              h(x)
            )
          )
          for x in list]
Use progressive list comprehensions:
result = [h(x) for x in list]
result = [g(x) for x in result]
result = [f(x) for x in result]
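A related variant (my addition, not part of the original answer) uses generator expressions for the intermediate stages, so no intermediate lists are materialized; the pipeline only runs when the final list comprehension consumes it:

stage1 = (h(x) for x in list)    # lazy: nothing computed yet
stage2 = (g(x) for x in stage1)
result = [f(x) for x in stage2]  # evaluation happens here, one element at a time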
Again, that's only a matter of style and taste. Pick the one you prefer most, and stick with it :-)
If this is something you're doing often, and with several different sequences of functions, you could write something like:

def seriesoffncs(fncs, x):
    for f in fncs[::-1]:
        x = f(x)
    return x

where fncs is a tuple or list of functions. So seriesoffncs((f, g, h), x) would return f(g(h(x))).

This way, if later in your code you need to work out h(q(g(f(x)))), you would simply call seriesoffncs((h, q, g, f), x) rather than write a new operations function for each combination of functions.
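A quick demonstration with throwaway functions (names mine, purely illustrative):

def f(x): return x + 1
def g(x): return x * 10
def h(x): return x - 2

seriesoffncs((f, g, h), 5)  # f(g(h(5))) = f(g(3)) = f(30) = 31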
If you're only concerned with the final result, your last approach is the best: it's clear to anyone looking at it what you're doing.

I often take any code that starts to get complex and move it into a function. This basically serves as a comment for that block of code. (Any complex code probably needs a rewrite anyway, and by putting it in a function I can go back and work on it later.)
def operation(x):
    x0 = h(x)
    x1 = g(x0)
    x2 = f(x1)
    return x2

result = [ operation(x) for x in list ]
A variation of dagw.myopenid.com's function:

def chained_apply(*args):
    val = args[-1]            # the last argument is the initial value
    for f in args[-2::-1]:    # apply the functions right to left, innermost first
        val = f(val)
    return val

Instead of seriesoffncs((h, q, g, f), x), you can now call:

result = chained_apply(foo, bar, baz, x)
As far as I know there's no built-in/native syntax for composition in Python, but you can write your own function to compose stuff without too much trouble.
def compose(*f):
    return f[0] if len(f) == 1 else lambda *args: f[0](compose(*f[1:])(*args))
def f(x):
    return 'o ' + str(x)

def g(x):
    return 'hai ' + str(x)

def h(x, y):
    return 'there ' + str(x) + str(y) + '\n'

action = compose(f, g, h)
print([action("Test ", item) for item in [1, 2, 3]])

Composing outside the comprehension isn't required, of course.

print([compose(f, g, h)("Test ", item) for item in [1, 2, 3]])
This way of composing will work for any number of functions (well, up to the recursion limit) with any number of parameters for the inner function.
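As an aside (my addition, not part of the original answer), an iterative variant sidesteps the recursion-limit caveat entirely:

def compose_iter(*funcs):
    """compose_iter(f, g, h)(x, y) == f(g(h(x, y)))"""
    def composed(*args):
        result = funcs[-1](*args)   # innermost function gets the raw arguments
        for fn in funcs[-2::-1]:    # remaining functions applied right to left
            result = fn(result)
        return result
    return composed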
There are cases where it's best to go back to the for-loop, yes, but more often I prefer one of these approaches:
Use appropriate line breaks and indentation to keep it readable:
result = [blah(blah(blah(x)))
          for x in list]
Or extract (enough of) the logic into another function, as you mention. But not necessarily local; Python programmers prefer flat to nested structure, if you can see a reasonable way of factoring the functionality out.
I came to Python from the functional-programming world, too, and share your prejudice.