I'm trying to integrate a piecewise function using SageMath and finding it to be impossible. My original code is below, but it's wrong due to the accidental evaluation described here.
def f(x):
    if x < 0:
        return 3 * x + 3
    else:
        return -3 * x + 3
g(x) = integrate(f(t), t, 0, x)
The fix for plotting mentioned on the website is to use f instead of f(t), but this is apparently not supported by the integrate() function, since a TypeError is raised.
Is there a fix for this that I'm unaware of?
Instead of defining a piecewise function via def, use the built-in piecewise class:
f = Piecewise([[(-infinity, 0), 3*x+3],[(0, infinity), -3*x+3]])
f.integral()
Output:
Piecewise defined function with 2 parts, [[(-Infinity, 0), x |--> 3/2*x^2 + 3*x], [(0, +Infinity), x |--> -3/2*x^2 + 3*x]]
Piecewise functions have their own methods, such as .plot(). Plotting does not support infinite intervals, though. A plot can be obtained with finite intervals:
f = Piecewise([[(-5, 0), 3*x+3],[(0, 5), -3*x+3]])
g = f.integral()
g.plot()
But you also want to subtract g(0) from g. This is not as straightforward as g-g(0), but not too bad, either: get the list of pieces with g.list(), subtract g(0) from each function, then recombine.
g0 = Piecewise([(piece[0], piece[1] - g(0)) for piece in g.list()])
g0.plot()
And there you have it:
By extending this approach, we don't even need to put finite intervals in f from the beginning. The following plots g - g(0) on a given interval [a,b], by modifying the domain:
a = -2
b = 3
g0 = Piecewise([((max(piece[0][0], a), min(piece[0][1], b)), piece[1] - g(0)) for piece in g.list()])
g0.plot()
In addition to using the Piecewise class, this can easily be fixed by defining g(x) as a Python function as well:
def f(x):
    if x < 0:
        return 3 * x + 3
    else:
        return -3 * x + 3

def g(x):
    (y, e) = integral_numerical(f, 0, x)
    return y
Then plot(g) works just fine.
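If you want to stay outside Sage altogether, here is a minimal sketch of the same numerical idea using only numpy, scipy and matplotlib (assuming those packages are available; this is not the Sage approach from above):
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

def f(x):
    # the same piecewise integrand
    return 3 * x + 3 if x < 0 else -3 * x + 3

def g(x):
    # quad returns (value, error estimate); keep only the value
    value, _ = quad(f, 0, x)
    return value

xs = np.linspace(-2, 3, 200)
plt.plot(xs, [g(x) for x in xs])
plt.show()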
I have a convex optimization problem with separable, convex, piecewise linear functions f_i(var_i), each defined by a list of points [(values, costs)], plus a couple of other terms that are also convex. I'm trying to figure out how to build those piecewise functions in CVXPY.
How do I take the below two lists of points and add them to a CVXPY objective as piecewise functions?
import cvxpy as cp
w = cp.Variable(n)
f1_points = [(-5, 10), (-2, -1), (0, 0)] # -5 <= var1 <= 0 (Convex)
f2_points = [(-4, 5), (0, 0)] # -4 <= var2 <= 0 (Linear)
f1_cost_function = ...
f2_cost_function = ...
constraints = [cp.sum(w) == 0] + ...
problem = cp.Problem(cp.Minimize(cp.sum([f1_cost_function, f2_cost_function] + ...)), constraints)
So this does not appear to be directly possible in CVXPY from the list of points. However, if the piecewise functions are rewritten as point-slope (line) functions instead of collections of points, the cvxpy maximum function can be used to build the piecewise linear function.
f1_functions = [f1_line1, f1_line2, ...]
f1 = cp.maximum(*f1_functions)
This is described with an example in the user guide.
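Here is a hedged sketch of the point-to-line conversion, assuming the points are sorted by x and describe a convex piecewise linear function; pwl_from_points is a made-up helper name, not part of CVXPY, and the bounds simply mirror the ones quoted in the question:
import cvxpy as cp

def pwl_from_points(points, var):
    # Build one affine expression per segment between consecutive points;
    # a convex piecewise linear function is the pointwise maximum of these lines.
    lines = []
    for (x1, y1), (x2, y2) in zip(points[:-1], points[1:]):
        slope = (y2 - y1) / (x2 - x1)
        lines.append(y1 + slope * (var - x1))
    return lines[0] if len(lines) == 1 else cp.maximum(*lines)

w = cp.Variable(2)
f1_cost_function = pwl_from_points([(-5, 10), (-2, -1), (0, 0)], w[0])
f2_cost_function = pwl_from_points([(-4, 5), (0, 0)], w[1])
constraints = [cp.sum(w) == 0, w[0] >= -5, w[0] <= 0, w[1] >= -4, w[1] <= 0]
problem = cp.Problem(cp.Minimize(f1_cost_function + f2_cost_function), constraints)
problem.solve()
Note that each segment line extends beyond its original interval, which is exactly why this construction only works when the point list describes a convex function.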
If your curve is simple, like in this picture, and your objective function is to minimize y, then you can do this simply by adding constraints like this:
constraints = [y <= f1, y <= f2, y <= f3, y <= f4]
objective = cp.Minimize(y)
[Picture of simple piecewise functions for a non-linear curve]
I am trying to convert this code into MATLAB, but I am not sure how to handle the subscripts (Y[i] = Y[i-1]) or the func and f_exact variables. Here's the code:
import numpy as np

def Forward_Euler(y0, t0, T, dt, f):
    t = np.arange(t0, T + dt, dt)
    Y = np.zeros(len(t))
    Y[0] = y0
    for i in range(1, len(t)):
        Y[i] = Y[i-1] + dt*f(Y[i-1], t[i-1])
    return Y, t

func = lambda y, t: y - t
f_exact = lambda t: t + 1 - 1/2*np.exp(t)
You can use anonymous functions in MATLAB:
func = @(y,t)(y - t)
f_exact = @(t)(t + 1 - exp(t)/2) % it works with any matrix t as well
These also work with matrices, as long as you respect matrix operation rules. For example, since func subtracts t from y, the dimensions of y and t must match.
Mathematica has a symbolic solver for quadratic (and maybe other) functions, e.g.:
Minimize[2 x^2 - y x + 5, {x}]
will yield the following solution:
{1/8 (40-y^2),{x->y/4}}
Is this feature supported in SymPy or a library built on top of it, or do I have to implement it myself?
Thanks a lot for your opinion!
I'm not sure about the generality of this approach, but the following code:
import sympy
from sympy.solvers import solve
x = sympy.var('x')
y = sympy.var('y')
f = 2*x**2 - y*x + 5
r = solve(f.diff(x), x)
f = f.subs(x, r[0])
print(f)
print(r)
Outputs:
-y**2/8 + 5
[y/4]
The first line of output (-y**2/8 + 5) is equivalent to Mathematica's 1/8 (40-y^2), just ordered differently.
The second line ([y/4]) is similar to Mathematica's {x->y/4} (solve returns a list of roots).
The idea is that we first take the partial derivative of f with respect to x, solve df/dx = 0 for x, and then substitute the resulting root back into the original function.
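The same derivative-and-substitute idea can be wrapped in a small helper; this is only a sketch, symbolic_min is a made-up name, and it assumes f is convex in var with a single stationary point:
import sympy

def symbolic_min(f, var):
    root = sympy.solve(f.diff(var), var)[0]  # the stationary point
    return f.subs(var, root), root           # (minimum value, minimizer)

x, y = sympy.symbols('x y')
value, argmin = symbolic_min(2*x**2 - y*x + 5, x)
print(value)   # -y**2/8 + 5
print(argmin)  # y/4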
I am trying to apply numpy to this code I wrote for trapezium rule integration:
def integral(a, b, n):
    delta = (b-a)/float(n)
    s = 0.0
    s += np.sin(a)/(a*2)
    for i in range(1, n):
        s += np.sin(a + i*delta)/(a + i*delta)
    s += np.sin(b)/(b*2.0)
    return s * delta
I am trying to get the return value of the new function to be something like this:
return delta *((2 *np.sin(x[1:-1])) +np.sin(x[0])+np.sin(x[-1]) )/2*x
I have been trying for a long time to make a breakthrough, but all my attempts have failed.
One of the things I attempted, and do not understand, is why the following code gives a "too many indices for array" error:
def integral(a, b, n):
    d = (b-a)/float(n)
    x = np.arange(a, b, d)
    J = np.where(x[:,1] < np.sin(x[:,0])/x[:,0])[0]
Every hint/advice is very much appreciated.
You forgot to sum over sin(x):
>>> def integral(a, b, n):
... x, delta = np.linspace(a, b, n+1, retstep=True)
... y = np.sin(x)
... y[0] /= 2
... y[-1] /= 2
... return delta * y.sum()
...
>>> integral(0, np.pi / 2, 10000)
0.9999999979438324
>>> integral(0, 2 * np.pi, 10000)
0.0
>>> from scipy.integrate import quad
>>> quad(np.sin, 0, np.pi / 2)
(0.9999999999999999, 1.1102230246251564e-14)
>>> quad(np.sin, 0, 2 * np.pi)
(2.221501482512777e-16, 4.3998892617845996e-14)
Meanwhile, I tried this too:
import numpy as np

def T_n(a, b, n, fun):
    delta = (b - a)/float(n)  # delta formula
    x_i = lambda a, i, delta: a + i * delta  # calculate x_i
    return 0.5 * delta * (
        2 * sum(fun(x_i(a, np.arange(0, n + 1), delta)))
        - fun(x_i(a, 0, delta))
        - fun(x_i(a, n, delta)))
I reconstructed the code using the formulas at the bottom of this page: https://matheguru.com/integralrechnung/trapezregel.html
The summation over range(0, n+1) - which gives [0, 1, ..., n] - is implemented with numpy. Usually you would collect the values with a for loop in plain Python, but numpy's vectorized behaviour can be used here: np.arange(0, n+1) gives np.array([0, 1, ..., n]), and when this array is passed to the function (here abstracted as fun), the formula is evaluated for x_0 through x_n and the results are collected in a numpy array. So fun(x_i(...)) returns a numpy array of the function values at x_0 to x_n, which is then summed up by sum().
The entire sum is multiplied by 2, and the function values at x_0 and x_n are subtracted afterwards, since in the trapezoid formula only the middle summands, but not the first and the last, are multiplied by 2. This was kind of a hack.
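For reference, the trapezoid rule being implemented here is, in plain notation:
T_n = (delta / 2) * (f(x_0) + 2*f(x_1) + ... + 2*f(x_(n-1)) + f(x_n))
    = (delta / 2) * (2 * sum(f(x_i) for i = 0..n) - f(x_0) - f(x_n))
which is exactly the rearrangement used in the return statement above.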
The linked German page uses fun(x) = x^2 + 3 as its example function, which can be nicely defined on the fly using a lambda expression:
fun = lambda x: x ** 2 + 3
a = -2
b = 3
n = 6
You could instead use a normal function definition, too: def fun(x): return x ** 2 + 3.
So I tested by typing the command:
T_n(a, b, n, fun)
Which correctly returned:
## Out[172]: 27.24537037037037
For your case, just assign np.sin to fun and pass your values for a, b, and n into this function call.
Like:
fun = np.sin  # everywhere `fun` is used inside the function,
              # it will behave as if `np.sin` stood there - this is possible
              # because Python treats its functions as first-class citizens
a = #your value
b = #your value
n = #your value
Finally, you can call:
T_n(a, b, n, fun)
And it will work!
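For example, as a quick sanity check with T_n defined as above (np.pi/2 and n = 10000 are just sample inputs, chosen to match the other answer):
import numpy as np
fun = np.sin
a, b, n = 0, np.pi / 2, 10000
print(T_n(a, b, n, fun))  # ~0.99999999794, consistent with the quad result above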
It is commonly an easy task to build an n-th order polynomial and find the roots with numpy:
import numpy
f = numpy.poly1d([1,2,3])
print numpy.roots(f)
array([-1.+1.41421356j, -1.-1.41421356j])
However, suppose you want a polynomial of the form:
f(x) = a*(x-x0)**0 + b*(x-x0)**1 + ... + n*(x-x0)**n
Is there a simple way to construct a numpy.poly1d type function and find the roots? I've tried scipy.fsolve, but it is very unstable, as it depends highly on the choice of starting values in my particular case.
Thanks in advance
Best Regards
rrrak
EDIT: Changed "polygon"(wrong) to "polynomial"(correct)
First of all, surely you mean polynomial, not polygon?
In terms of providing an answer, are you using the same value of "x0" in all the terms? If so, let y = x - x0, solve for y and get x using x = y + x0.
You could even wrap it in a lambda function if you want. Say, you want to represent
f(x) = 1 + 3(x-1) + (x-1)**2
Then,
>>> g = numpy.poly1d([1,3,1])
>>> f = lambda x:g(x-1)
>>> f(0.0)
-1.0
The roots of f are given by:
f.roots = numpy.roots(g) + 1
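A quick numerical check (just a sanity check, not part of the original answer) confirms that the shifted values are indeed roots of f:
import numpy
g = numpy.poly1d([1, 3, 1])
f = lambda x: g(x - 1)
roots = numpy.roots(g) + 1
print([f(r) for r in roots])  # both values are ~0, up to floating point noise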
In case the x0 values are different for each power, such as:
f(x) = 3*(x-0)**0 + 2*(x-2)**1 + 3*(x-1)**2 + 2*(x-2)**3
You can use polynomial operations to compute the fully expanded polynomial:
import numpy as np
import operator
from functools import reduce  # needed in Python 3, where reduce is no longer a builtin

ks = [3, 2, 3, 2]
offsets = [0, 2, 1, 2]
p = reduce(operator.add, [np.poly1d([1, -x0])**i * c for i, (c, x0) in enumerate(zip(ks, offsets))])
print(p)
The result is:
   3     2
2 x - 9 x + 20 x - 14
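Since the original question was about finding the roots, they can then be read off the expanded polynomial directly (a small follow-up to the code above, not part of the quoted answer):
print(np.roots(p))  # one real root near 1.13 plus a complex-conjugate pair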