Abstract Matrix Algebra and Calculus in SymPy (Python)

I am doing control engineering, and I often face problems of the type below. I want to know if there is a way to deal with this in SymPy.
question:
tl;dr: I want to make a MatrixSymbol depend on a scalar Symbol representing time, to allow differentiation w.r.t. time.
Actual problem: v(t) = [v1(t), v2(t), v3(t)] is a vector function of the time t, and I want to calculate the projection onto the direction of v and its time derivative. In the end I would love to get an expression in terms of v, v.diff(t), and v.T (the transpose).
attempts:
I've tried different things; here I show the closest one.
This does the algebra I need, but I cannot take derivatives w.r.t. time:

from sympy import MatrixSymbol, Identity

v = MatrixSymbol('v', 3, 1)
# Projector onto span(v), v (v.T v)^(-1) v.T, and its orthogonal complement
projection_v = v * (v.T*v).inverse() * v.T
orthogonal_v = Identity(3) - projection_v
orthogonal_v.as_explicit()
orthogonal_v shows the abstract equation form that I need. In the end, to check the result, I'd also like to make it explicit and see the expression as a function of v[0,0], v[1,0], and v[2,0]; for a MatrixSymbol, the method .as_explicit() does exactly that, beginning with SymPy version 1.10. (Thanks to Francesco Bonazzi for pointing this out.)
The problem, however, is that I cannot make these a function of t and take the derivative of projection_v w.r.t. the time t.
I also tried
from sympy import Symbol, Function, FunctionMatrix

t = Symbol('t', real=True, positive=True)
v1 = Function('v1', real=True)(t)
v2 = Function('v2', real=True)(t)
v3 = Function('v3', real=True)(t)
v_mat = FunctionMatrix(3, 1, [v1, v2, v3])
but it seems FunctionMatrix is meant to evaluate the functions directly instead of being an analog to the scalar Function.
Effectively I want to be able to calculate orthogonal_v.diff(t) and then see the component wise operations with something like orthogonal_v.diff(t).as_explicit(). Is this possible?
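One workaround (a minimal sketch, not from the original question) is to give up the abstract MatrixSymbol form and build v as an explicit Matrix of scalar Functions of t; .diff(t) then works component-wise:

from sympy import symbols, Function, Matrix, eye

t = symbols('t', real=True, positive=True)
v1, v2, v3 = (Function(n, real=True)(t) for n in ('v1', 'v2', 'v3'))
v = Matrix([v1, v2, v3])            # explicit 3x1 column of functions of t

norm2 = (v.T * v)[0, 0]             # the scalar v.T v
projection_v = v * v.T / norm2      # projector onto span(v)
orthogonal_v = eye(3) - projection_v
orthogonal_v.diff(t)                # component-wise time derivative

This shows the component-wise operations, but it does not preserve the abstract expression in terms of v, v.T, and v.diff(t).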

Related

Vector to matrix function in NumPy without accessing elements of vector

I would like to create a NumPy function that computes the Jacobian of a function at a certain point, with the Jacobian hard-coded into the function.
Say I have a vector containing two arbitrary scalars, X = np.array([[x], [y]]), and a function f(X) = np.array([[2*x*y], [3*x*y]]).
This function has the Jacobian J = np.array([[2*y, 2*x], [3*y, 3*x]]).
How can I write a function that takes in the array X and returns the Jacobian? Of course, I could do this using array indices (e.g. x = X[0,0]), but am wondering if there is a way to do this directly without accessing the individual elements of X.
I am looking for something that works like this:
def foo(x, y):
    return np.array([[2*y, 2*x], [3*y, 3*x]])

X = np.array([[3], [7]])
J = foo(X)
This is possible for 1-dimensional arrays; e.g., the following works:
def foo(x):
    return np.array([x, x, x])

X = np.array([1, 2, 3, 4])
J = foo(X)
You want the Jacobian, which is the differential of the function. Is that correct? I'm afraid NumPy is not the right tool for that.
NumPy works with fixed numbers, not with variables. That is, given some numbers, you can calculate the value of a function. The differential is a different function that has a special relationship to the original function but is not the same. You cannot just calculate the differential; you must deduce it from the functional form of the original function using differentiation rules. NumPy cannot do that.
As far as I know, you have three options:

- Use a numeric library to calculate the differential at a specific point. However, you will only get the Jacobian at a specific point (x, y), not a formula for it.
- Take a look at a Python CAS library such as SymPy. There you can define expressions in terms of variables and compute the differential with respect to those variables (a sketch follows this list).
- Use a library that performs automatic differentiation. Machine learning toolkits like PyTorch or TensorFlow have excellent support for automatic differentiation and good integration with NumPy arrays. They essentially calculate the differential by knowing the differentials of all basic operations, such as multiplication and addition. For composed functions, the chain rule is applied, and the differential can be calculated for arbitrarily complex functions.
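A minimal sketch of the SymPy route for the example above (the symbolic Jacobian is computed once, then hard-coded into a numeric function via lambdify):

from sympy import symbols, Matrix, lambdify
import numpy as np

x, y = symbols('x y')
f = Matrix([2*x*y, 3*x*y])
J = f.jacobian([x, y])              # Matrix([[2*y, 2*x], [3*y, 3*x]])

jac = lambdify((x, y), J, 'numpy')  # numeric function with the Jacobian baked in
X = np.array([[3], [7]])
jac(X[0, 0], X[1, 0])               # array([[14, 6], [21, 9]])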

Partial derivatives of a function found using interp2d in python/sagemath

I have a function of two variables, R(t,r), that has been constructed from lists of values for R, t, and r. This function cannot be written down in closed form; the values come from solving a differential equation (dR(t,r)/dt). I need to take derivatives of this function; in particular, I need dR(t,r)/dr and d^2R(t,r)/(dr dt). I have tried using this answer to do this, but I cannot seem to get an answer that makes sense. (Note that all derivatives should be partials.) Any help would be appreciated.
Edit:
My current code is below. I understand that getting anything to work without the 'Rdata.txt' file is impossible, but the file itself is 160x1001; really, any data could be made up to get the rest to work. Z_t does not return answers that look like the derivative of my original function based on what I know, so it is not differentiating my function as I expect.
If there are numerical routines for using the array of data I do not mind, I simply need some way of figuring out the derivatives.
import numpy as np
from scipy import interpolate

# data has shape (160, 1001): one row per r value, one column per t value
data = np.loadtxt('Rdata.txt')
rvals = np.linspace(1, 160, 160)
tvals = np.linspace(0, 1000, 1001)
f = interpolate.interp2d(tvals, rvals, data)
# derivative orders dx, dy must be non-negative integers (dx=0.8 is invalid)
Z_t = interpolate.bisplev(tvals, rvals, f.tck, dx=1, dy=0)
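For the specific partials asked about, here is a sketch using scipy.interpolate.RectBivariateSpline instead (same assumed grid and data file as above; derivative orders are passed as integers):

import numpy as np
from scipy.interpolate import RectBivariateSpline

data = np.loadtxt('Rdata.txt')      # assumed shape (160, 1001)
rvals = np.linspace(1, 160, 160)
tvals = np.linspace(0, 1000, 1001)

R = RectBivariateSpline(rvals, tvals, data)  # smooth surrogate for R(r, t)

dR_dr = R(rvals, tvals, dx=1, dy=0)     # approximates dR/dr on the grid
d2R_drdt = R(rvals, tvals, dx=1, dy=1)  # approximates d^2R/(dr dt)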

Changing Sinc function to regular sin in Mathematica

I have a Mathematica function whose output is a sum of Sinc (https://reference.wolfram.com/language/ref/Sinc.html) functions. I need to send this output to a coworker who uses Pyomo (https://www.pyomo.org/) for optimization. We have discovered that this optimization software doesn't understand Sinc, even though regular Python does. I need to know if there is a way to change the output so that instead of using Sinc it returns Sin(x)/x.
I have looked for a solution in the Mathematica documentation, but the function seems very limited. I have also checked questions like https://mathematica.stackexchange.com/questions/19855/simplify-sinx-x-to-sincx/19856 and https://mathematica.stackexchange.com/questions/144899/simplify-is-excluding-indeterminate-expression-from-output.
However, I haven't found a way to solve the issue.
I have attempted to define sinc by hand as sin(x)/x, but this doesn't work due to the indeterminacy at 0.
This is how I define sinc:
sinc = Sinc[Pi #] & ;
sincB = (Sin[Pi #]/(Pi #)) & ;
This is where I use the data to construct an analytic expression. The upper one is the one I used in the past and the lower one is the one that I have constructed now.
shannonIP[v_, w_] =
  Total[#3*sinc[(v - #1)/dDelta]*sinc[(w - #2)/dDelta] & @@@
    interpolatedData]
shannonIPB[v_, w_] =
  Total[#3*sincB[(v - #1)/dDelta]*sincB[(w - #2)/dDelta] & @@@
    interpolatedData]
The expression resulting from the upper code is a sum of Sincs; the expression resulting from the lower code is a sum of sin(x)/x terms, but when it is evaluated at some points I run into 1/0 errors.
Is there a way to "fix" the output of the lower code or to transform the output of the upper one to an expression readable by Pyomo?
[Figure: the function constructed using Sinc.]
[Figure: the function constructed using Sin[x]/(x + 0.0000000000000001).]
For arguments near zero, you should compute sinc(x) from the truncated Taylor series 1 - x^2/6 + x^4/120.
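A minimal sketch of that idea on the Python side (plain Python; the switch-over threshold eps is an assumption):

import math

def sinc(x, eps=1e-4):
    # sin(x)/x, with a truncated Taylor series near x = 0 to avoid 0/0
    if abs(x) < eps:
        return 1.0 - x**2/6.0 + x**4/120.0  # truncation error is O(x^6)
    return math.sin(x) / x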

Using scipy minimize with constraint on one parameter

I am using scipy.optimize.minimize, and I'd like one parameter to search only over values with two decimals.
from scipy.optimize import minimize
from sklearn.metrics import mean_squared_error

def cost(parameters, input, target):
    output = self.model(parameters=parameters, input=input)
    return mean_squared_error(target.flatten(), output.flatten())

parameters = [1, 1]  # initial parameters
res = minimize(fun=cost, x0=parameters, args=(input, target))
model_parameters = res.x
Here self.model is a function that performs some matrix manipulation based on the parameters; input and target are two matrices. The function works the way I want, except that I would like parameters[1] to have a constraint. Ideally I'd just like to give it a NumPy array, like np.arange(0, 10, 0.01). Is this possible?
In general this is very hard to do, as smoothness is one of the core assumptions of those optimizers.
Problems where some variables are discrete and some are not are hard, and they are usually tackled either by mixed-integer optimization (which works well for mixed-integer linear programming and reasonably well for mixed-integer convex programming, although there are fewer good solvers) or by global optimization (usually derivative-free).
Depending on your task details, I recommend decomposing the problem (a sketch follows at the end of this answer):

- an outer loop over an np.arange(0, 10, 0.01)-like grid that fixes the variable
- an inner loop that optimizes while this variable is fixed
- return the model with the best objective (with status=success)
This results in N inner optimizations, where N is the size of the state space of the variable you fix.
Depending on your task/data, it might be a good idea to traverse the fixing space monotonically (e.g. using np.arange) and use the solution of iteration i as the initial point for problem i+1 (potentially fewer iterations are needed if the guess is good). But this is probably not relevant here; see the next part.
If you really have only 2 parameters, as indicated, this decomposition leads to an inner problem with a single variable. In that case, don't use minimize; use minimize_scalar (faster and more robust, and it does not need an initial point).
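A minimal sketch of this decomposition with an illustrative model (model, x, and target below are hypothetical stand-ins, not the asker's self.model):

import numpy as np
from scipy.optimize import minimize_scalar

def model(p0, p1, x):
    # hypothetical stand-in for the asker's self.model
    return p0 * x + p1

def fit(x, target):
    best = None
    for p1 in np.arange(0, 10, 0.01):  # outer loop: fix the constrained parameter
        res = minimize_scalar(lambda p0: np.mean((model(p0, p1, x) - target) ** 2))
        if res.success and (best is None or res.fun < best[0]):
            best = (res.fun, res.x, round(p1, 2))
    return best                        # (mse, p0, p1)

x = np.linspace(0, 1, 50)
print(fit(x, 2.0 * x + 3.14))          # recovers p0 near 2.0 and p1 near 3.14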

Derivative of a conjugate in sympy

When I try to differentiate a symbol with SymPy, I get the following:

In : x = Symbol('x')
In : diff(x, x)
Out: 1

When I differentiate the symbol with respect to its conjugate, the result is

In : diff(x, x.conjugate())
Out: 0

However, when I try to differentiate the conjugate of the symbol, SymPy doesn't do it:

In : diff(x.conjugate(), x)
Out: Derivative(conjugate(x), x)
This is still correct, but the result should be zero. How can I make SymPy perform the derivative of a conjugate?
I'm not sure about the mathematics of whether diff(conjugate(x), x) should be zero. The fact that diff(x, x.conjugate()) gives zero has nothing to do with mathematics (and might even be considered a SymPy bug). It gives zero simply because x does not contain conjugate(x) (symbolically), so it sees it as a constant with respect to it. This is probably wrong, since x is not constant with respect to conjugate(x). The fact that SymPy lets you take derivatives with respect to defined functions is probably a bug, actually. It is supposed to allow things like diff(f(x)**2, f(x)), where f = Function('f') is an undefined function, but for defined functions it is probably mathematically incorrect (or at least not what you expect).
See http://docs.sympy.org/latest/modules/core.html?highlight=derivative#sympy.core.function.Derivative, particularly the section on derivatives with respect to non-Symbols. To paraphrase, taking derivatives with respect to a function is just a notational convenience and does not represent a mathematical chain rule. Rather, something like diff(x, conjugate(x)) should be thought of as something like diff(x.subs(conjugate(x), dummy), dummy).subs(dummy, conjugate(x)).
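That paraphrase can be made concrete with a small sketch (the substitution trick, not SymPy's actual internals):

from sympy import Symbol, Dummy, conjugate

x = Symbol('x')
d = Dummy('d')
# diff(x, conjugate(x)) behaves like this substitution dance:
expr = x.subs(conjugate(x), d).diff(d).subs(d, conjugate(x))
print(expr)  # 0, because x does not symbolically contain conjugate(x)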
Regarding conjugate(x).diff(x), this gives an unevaluated Derivative because no derivative rule is defined for conjugate. I'm not sure a closed-form answer is possible here anyway; this is probably the most useful thing SymPy could return. I can't find any good answers anywhere as to what a reasonable result should be (you should ask on Math.SE to get a better answer about it).
