Find a function maximum with scipy.minimize - python

I've been assigned to find a given function's maximum with scipy.minimize using the BFGS method and a lambda. I've already figured out that in order to do that, I need to minimize the -(f) function, but I cannot change the function itself, only the way it's called in the minimize. Also, the answer must be a float.
A and B are the two functions to maximize.
Thanks in advance for the help!!
def A(x):
    return -(x-1)**2

B = lambda x: -(x+2)**4

# This is where the minimize is called
def argmax(f):
    from scipy.optimize import minimize
    return

Since the A function must be sign-inverted, create another function that returns -A(x), for example:
from scipy.optimize import minimize
res = minimize(lambda t: -A(t), 0)
print(res.x[0])
which prints
0.9999999925496535
Similarly B must be flipped so that minimize finds the argmax of B:
res = minimize(lambda t: -B(t), 0)
print(res.x[0])
which prints
-1.987949853410927
Both of these answers are close to correct (expecting 1 and -2), although the result for B has a fairly large error of about 0.012.
The one-liner form for both maximizations (via minimizations) would be
return minimize(lambda t: -A(t), 0).x[0]
(and similarly for B)
To maximize both functions at the same time, minimize over a vector:
return minimize(lambda v: -A(v[0])-B(v[1]), [0, 0]).x
which returns:
array([ 0.99999975, -1.98841637])
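Putting it together, a sketch of the complete argmax helper the question asks for (BFGS is passed explicitly and float() guarantees the required scalar return type):

```python
from scipy.optimize import minimize

def A(x):
    return -(x - 1)**2

B = lambda x: -(x + 2)**4

def argmax(f):
    # maximize f by minimizing its negation; return a plain float
    res = minimize(lambda t: -f(t), 0, method='BFGS')
    return float(res.x[0])

print(argmax(A))  # close to 1
print(argmax(B))  # close to -2
```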

Related

How to make function composition when I have a list of functions in python?

So I want to make a function composition with a list. I have written a function which composes two functions into one and calculates its value.
This function:
def zad6_1(f, g, x):
    return f(g(x))
In the next step I want to use this function to compose the functions in the list. I tried to use reduce but I think I messed up here.
This is my creation:
def zad6(l, x):
    return functools.reduce(lambda f, g: zad6_1(f, g, x), l[0:], l[1:])
My main problem is that I don't know what I should place as the two arguments after the lambda keyword.
Of course, if there is a better solution than reduce, please show me.
Here is an example:
l6 = [lambda x: x**2+1, lambda x: x**3, lambda x: x+1, lambda x: 4*x]
That is my input list; from the left, the functions are a(x), b(x), c(x), d(x).
I want to make something like this a(b(c(d(x)))).
https://www.mathsisfun.com/sets/functions-composition.html
This is what I am talking about.
PS. I cannot use loops or comprehensions.
The reduce function generally takes 2 arguments: a function (often a lambda) and a sequence. Sometimes you also want to provide a third argument, an initializer (initial value). You are providing more arguments than that.
Simple example of using reduce:
from functools import reduce
answer = reduce(lambda a, b: a+b, [0, 1, 2]) # a is the accumulated value and b the current value
print(answer) # prints 3
Here, lambda a, b: a+b is the function, and [0, 1, 2] is the sequence provided to the reduce function. Reduce will go over the sequence and execute the (lambda) function at every step. At every step the current value of the sequence and the accumulated value (from previous steps) are provided to the lambda function.
See the documentation on reduce
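For the composition question itself, reduce can build the nested call a(b(c(d(x)))) without loops or comprehensions; a sketch reusing zad6_1 and the example list:

```python
from functools import reduce

def zad6_1(f, g, x):
    return f(g(x))

def zad6(l, x):
    # fold the list of functions into a single composed function, then evaluate it;
    # each reduce step wraps the accumulated function around the next one in the list
    return reduce(lambda f, g: lambda t: zad6_1(f, g, t), l)(x)

l6 = [lambda x: x**2 + 1, lambda x: x**3, lambda x: x + 1, lambda x: 4*x]
print(zad6(l6, 1))  # a(b(c(d(1)))) = ((4*1 + 1)**3)**2 + 1 = 15626
```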

For a given function f, is it possible to find the value of x where f(x) is minimum (Python)?

I have defined the following function using lambda:
f = lambda x: x**2 + 3*x + 3
Is there any way that I use Python to find the value of x that minimizes the value f(x)?
I know that the min() function finds the minimum element in a list of values, but here I am not given a list of values; rather, I need to locate the value of x that minimizes f(x).
Thank you,
What you're trying to do can be done using Python's symbolic math package, SymPy.
There are many other ways, but I suspect this will be the most useful to you.
>>> import sympy
>>> y = sympy.sympify("x**2 + 3*x + 3")
>>> sympy.solve( sympy.diff(y) ) # minimize by solving for the derivative
[-3/2]
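If a numeric rather than symbolic answer is acceptable, scipy.optimize.minimize can locate the minimizer directly; a sketch for the same f (the exact minimum is at x = -3/2):

```python
from scipy.optimize import minimize

f = lambda x: x**2 + 3*x + 3

# minimize expects a vector argument, so index into it before calling f
res = minimize(lambda v: f(v[0]), [0.0])
print(res.x[0])  # approximately -1.5
```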

Evolving functions in python

Updated Question
Following on from my original post, with the use of @Attack68's code, I have created a program that successfully evolves the function with a choice of multiplicative functions based on a random variable. However, I am now receiving an error saying the list indices must be integers (even though I'm fairly sure they are). I'm not sure what has happened. The code is as follows:
import numpy as np
import scipy.integrate as integrate

x = np.linspace(0.0, 1.0, 100)
n = 10  # iterations
d = 700.0

def f(x):
    return np.sin(x)

def g(x, list_):
    return np.cos(x)*apply(x, list_)

base = [f, g]
list_ = list()
for i in range(n):
    testvar = np.random.randint(1, 100, 1)
    if testvar > 50 and i != 0:
        func_idx = 0  # choose a random operation: 0=ten, 1=inv
    else:
        func_idx = 1
    list_.append(func_idx)

# now you have a list of indexes referencing your base functions so you can apply them:
def apply(x, list_):
    y = 1
    for i in range(len(list_)):
        y *= base[list_[i]](x)
    return y

print(list_)
#testint = integrate.quad(apply(x, list_), -d, d)[0]
#print(testint)
print(apply(list_, x))
I am now getting the error:
TypeError: list indices must be integers or slices, not numpy.float64
I am also attempting to get this to integrate the new function after each iteration, but it seems that this form of the function is not callable by SciPy's quad integrator; any suggestions on how to integrate the evolving function on each iteration would also be appreciated.
Original:
I am creating a simulation in python where I consider a function that evolves over a loop. This function starts off defined as:
def f(x):
    return 1.0
So simply a flat distribution. After each iteration of the loop, I want the function to be redefined depending on certain (random) conditions. It could be multiplied by cos(b*x) or it could be multiplied by some function A(x), the evolution will not be the same each time due to the randomness, so I cannot simply multiply by the same value each time.
The progression in one instance could be:
f(x)----> f(x)*A(x)----> f(x)*A(x)*A(x)...
but in another instance it could be:
f(x)----> f(x)*A(x)----> f(x)*A(x)*cos(x)...
or
f(x)----> f(x)*cos(x)----> f(x)*cos(x)*cos(x)...
etc.
after each of n iterations of this evolution, I have to compute an integral related to the function, so I essentially need to update the function after each iteration so it can be called by SciPy's quad integrator.
I have tried to use arrays to manipulate the distribution instead, and it works as far as the function evolution goes, but upon integration it gives the incorrect result with numpy.trapz and I cannot work out why. SciPy's quad integrator is more accurate anyway, and I had managed to get it to work previously for the first iteration only, but it requires a function-based input, so without this function evolution I cannot use it.
If someone could show me if/how this function evolution is possible, that'd be great. If it is not possible, perhaps someone could help me understand what numpy.trapz actually does so I can work out how to fix it?
How about this:
class MyFunction:
    def __init__(self):
        def f1(x):
            return 1.0
        self.functions = [f1]

    def append_function(self, fn):
        self.functions.append(fn)

    def __call__(self, x):
        product = 1.0
        for f in self.functions:
            product *= f(x)
        return product
This object starts off as simply returning 1.0. Later you add more functions and it returns the product of all of them.
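A self-contained sketch of how this object could be used (the class is repeated here, and the cosine multiplier is just an example stand-in for A(x)): because a MyFunction instance is callable, it can be handed straight to SciPy's quad after each evolution step.

```python
import numpy as np
from scipy.integrate import quad

class MyFunction:
    def __init__(self):
        def f1(x):
            return 1.0
        self.functions = [f1]

    def append_function(self, fn):
        self.functions.append(fn)

    def __call__(self, x):
        # product of every function accumulated so far
        product = 1.0
        for f in self.functions:
            product *= f(x)
        return product

f = MyFunction()
print(quad(f, 0, np.pi)[0])   # integral of 1 over [0, pi] is pi

f.append_function(np.cos)     # evolve: f(x) -> f(x)*cos(x)
print(quad(f, 0, np.pi)[0])   # integral of cos over [0, pi] is 0
```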
Your description suggests your iterated values are combined through a product and are not in fact a composition of functions. A simple way of recording these is to have a set of base functions:
import numpy as np
import scipy.integrate as integrate

def two(x):
    return x*2

def inv(x):
    return 1/x

base = [two, inv]
funcs = np.random.choice(base, size=10)

def apply(x, funcs):
    y = 1
    for func in funcs:
        y *= func(x)
    return y

print('function value at 1.5 ', apply(1.5, funcs))
answer = integrate.quad(apply, 1, 2, args=(funcs,))
print('integration over [1,2]: ', answer)

Generating a sorting function for counter-clockwise sort

As part of a script I am making, I want to sort a series of points in a counter-clockwise order around a central point, which we will call 'a'.
I have a function that determines, for two points 'b' and 'c', if c is to the right of or left of the ray a->b. This function is right_of(a, b, c), and it is tested and works.
I want to use this function to sort a list of tuples with 2-D coordinates, e.g. [(0, 0), (0, 1), (1, 1), ...]. However, each time I sort, there will be a different point 'a' to pass to the function right_of(). What I want is a function returnSortFunction(a) that will return a function with two arguments, f(b, c), and when f(b, c) is called on each pair of coordinates as I sort, it should return the result of right_of(a, b, c) with 'a' already filled in.
I have tried to implement this using a factory, but I don't think I understand factories well enough to do it correctly, or determine if that is not what a factory is for. How can I build this feature?
You can have a function return a function, no problem. A simple way to do it is something like
def returnSortFunction(a):
    return lambda b, c: right_of(a, b, c)
You need a wrapper function around your right_of function. You could use a lambda, but I think your logic is going to be more complicated than that. Assuming you want to pass in a function as a comparator to your sorting method, it's going to look something like this:
def returnSortFunction(a):
    def comparator(p1, p2, a=a):
        if p1 == p2:
            return 0
        elif right_of(a, p1, p2):
            return 1
        else:
            return -1
    return comparator
Functions are first-class objects in Python, so you can do something like this:
def prepare_funcs(number):
    def inc(a):
        return number + a
    def mult(a):
        return number * a
    return inc, mult

inc5, mult5 = prepare_funcs(5)
inc2, mult2 = prepare_funcs(2)
inc5(2)    # Out: 7
mult2(10)  # Out: 20
For your specific context you should also check out the functools module, specifically the partial function. With it, you can 'partially' prepare arguments to your function like this:
right_of_5 = functools.partial(right_of, 5)
right_of_5(b, c)
That will work because right_of_5 will automatically fill right_of's first argument, a, with the number 5.
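Note that in Python 3 a two-argument comparator must go through functools.cmp_to_key before it can be handed to sorted. A sketch under the assumption that right_of is implemented with a cross product (the questioner's own tested version is not shown, so this implementation is hypothetical):

```python
from functools import cmp_to_key

def right_of(a, b, c):
    # assumed implementation: c is to the right of ray a->b
    # when the cross product of (b-a) and (c-a) is negative
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]) < 0

def returnSortFunction(a):
    def comparator(p1, p2):
        if p1 == p2:
            return 0
        return 1 if right_of(a, p1, p2) else -1
    return comparator

points = [(0, 1), (1, 0), (1, 1)]
# counter-clockwise around (0, 0): [(1, 0), (1, 1), (0, 1)]
print(sorted(points, key=cmp_to_key(returnSortFunction((0, 0)))))
```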

scipy.optimize with matrix constraint

How do I tell fmin_cobyla about a matrix constraint Ax-b >= 0? It won't take it as a vector constraint:
cons = lambda x: dot(A,x)-b
thanks.
Since the constraint must return a scalar value, you could dynamically define the scalar constraints like this:
constraints = []
for i in range(len(A)):
def f(x, i = i):
return np.dot(A[i],x)-b[i]
constraints.append(f)
For example, if we lightly modify the example from the docs,
import numpy as np
from scipy import optimize

def objective(x):
    return x[0]*x[1]

A = np.array([(1, 2), (3, 4)])
b = np.array([1, 1])

constraints = []
for i in range(len(A)):
    def f(x, i=i):
        return np.dot(A[i], x) - b[i]
    constraints.append(f)

def constr1(x):
    return 1 - (x[0]**2 + x[1]**2)

def constr2(x):
    return x[1]

x = optimize.fmin_cobyla(objective, [0.0, 0.1], constraints + [constr1, constr2],
                         rhoend=1e-7)
print(x)
yields
[-0.6 0.8]
PS. Thanks to @seberg for pointing out an earlier mistake.
Actually, the documentation says "Constraint functions"; it simply expects a list of functions, each returning only a single value.
So if you want to do it all in one, you could just modify the plain Python code of fmin_cobyla: you will find that it defines a wrapping function around your functions, so it is easy. And the Python code is really very short anyway, just a small wrapper around scipy.optimize._cobyla.minimize.
On a side note, if the function you are optimizing is linear (or quadratic) like your constraints, there are probably much better solvers out there.
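As another option, the higher-level scipy.optimize.minimize interface accepts vector-valued inequality constraints with SLSQP, so Ax - b >= 0 does not need to be split into scalar functions. A sketch of the same example (the result may differ slightly from fmin_cobyla's):

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([(1, 2), (3, 4)])
b = np.array([1, 1])

cons = [
    {'type': 'ineq', 'fun': lambda x: A.dot(x) - b},            # vector constraint Ax - b >= 0
    {'type': 'ineq', 'fun': lambda x: 1 - (x[0]**2 + x[1]**2)}, # stay inside the unit disk
    {'type': 'ineq', 'fun': lambda x: x[1]},                    # x[1] >= 0
]
res = minimize(lambda x: x[0]*x[1], [0.0, 0.1], method='SLSQP', constraints=cons)
print(res.x)  # close to [-0.6, 0.8]
```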
