Changing the Sinc function to a regular Sin in Mathematica / Python

I have a Mathematica function whose output is a sum of Sinc https://reference.wolfram.com/language/ref/Sinc.html functions. I need to send that output to a coworker who uses Pyomo https://www.pyomo.org/ for optimization. We have discovered that the optimization software doesn't understand Sinc even though regular Python does. I need to know if there is a way to change the output so that instead of using Sinc it returns Sin(x)/x.
I have looked for a solution in Mathworks, but the function seems very limited. I have also checked questions like https://mathematica.stackexchange.com/questions/19855/simplify-sinx-x-to-sincx/19856 and https://mathematica.stackexchange.com/questions/144899/simplify-is-excluding-indeterminate-expression-from-output.
However, I haven't found a way to solve the issue.
I have attempted to define sinc by hand as sin(x)/x, but this doesn't work because of the indeterminacy at 0.
This is how I define sinc:
sinc = Sinc[Pi #] &;  (* built-in Sinc: evaluates to 1 at 0 *)
sincB = (Sin[Pi #]/(Pi #)) &;  (* hand-written version: indeterminate at 0 *)
This is where I use the data to construct an analytic expression. The upper one is the one I used in the past and the lower one is the one that I have constructed now.
shannonIP[v_, w_] =
  Total[#3*sinc[(v - #1)/dDelta]*sinc[(w - #2)/dDelta] & @@@
    interpolatedData]
shannonIPB[v_, w_] =
  Total[#3*sincB[(v - #1)/dDelta]*sincB[(w - #2)/dDelta] & @@@
    interpolatedData]
The resulting expression of the upper code is a sum of Sinc terms; the resulting expression of the lower code is a sum of sin(x)/x terms, but when I evaluate it at certain points I run into a 1/0 error.
Is there a way to "fix" the output of the lower code, or to transform the output of the upper one into an expression readable by Pyomo?
This figure is the function constructed using Sinc.
This figure is the function constructed using Sin[x]/(x+0.0000000000000001)

For arguments near zero, you should compute sinc(x) from its Taylor series, 1 - x^2/6 + x^4/120; away from zero you can use sin(x)/x directly.
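A minimal plain-Python sketch of that suggestion (not Pyomo-specific; the name safe_sinc and the switching threshold are assumptions): use sin(x)/x away from zero and the series near zero, so the expression never divides by zero.

import math

def safe_sinc(x, eps=1e-6):
    # Normalized as sin(x)/x; scale x by pi first if you need Sinc[Pi x].
    if abs(x) < eps:
        # Taylor series of sin(x)/x around 0: 1 - x^2/6 + x^4/120 - ...
        return 1.0 - x**2 / 6.0 + x**4 / 120.0
    return math.sin(x) / x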


Abstract Matrix Algebra and Calculus in sympy

I am doing control engineering, and I often face problems of the type below; I want to know if there is a way to deal with this in sympy.
question:
tl;dr: I want to make a MatrixSymbol depend on a scalar Symbol representing time, to allow differentiation w.r.t. time.
Actual problem: v(t) = [v1(t), v2(t), v3(t)] is a vector function of the time t, and I want to calculate the projection onto the direction of v and its time derivative. In the end I would love to get an expression in terms of v, v.diff(t), and v.T (the transpose).
attempts:
I've tried different things; here I show the closest one:
This does the algebra I need, but I cannot take derivatives w.r.t. time
v = MatrixSymbol('v',3,1)
# here i'm building the terms I want
projection_v = v*sqrt(v.T*v).inverse()*v.T
orthogonal_v = Identity(3)-projection_v
orthogonal_v.as_explicit()
orthogonal_v shows the abstract equation form that I need. In the end, to check and see the result again, I'd also like to make it explicit and see the expression as a function of v[0,0], v[1,0], and v[2,0]; for a MatrixSymbol, the function .as_explicit() does exactly that beginning with sympy version 1.10 (thanks Francesco Bonazzi for pointing this out).
The problem, however, is that I cannot make these a function of t and take the derivative of projection_v w.r.t. the time t.
I also tried
t = Symbol('t',real=True,positive=True)
v1 = Function('v1',real=True)(t)
v2 = Function('v2',real=True)(t)
v3 = Function('v3',real=True)(t)
v_mat = FunctionMatrix(3, 1, [v1, v2, v3])
but it seems FunctionMatrix is meant to evaluate the functions directly instead of being an analog to the scalar Function.
Effectively I want to be able to calculate orthogonal_v.diff(t) and then see the component wise operations with something like orthogonal_v.diff(t).as_explicit(). Is this possible?
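A possible workaround, sketched here under the assumption that a fully explicit Matrix is acceptable (this is not taken from the question itself): build the vector as an explicit Matrix of Function entries depending on t, use the standard projection v*v.T/(v.T*v), and then .diff(t) works component-wise.

from sympy import Function, Matrix, eye, symbols

t = symbols('t', real=True, positive=True)
v1, v2, v3 = [Function(f'v{i}', real=True)(t) for i in (1, 2, 3)]
v = Matrix([v1, v2, v3])                    # explicit 3x1 column vector

norm2 = (v.T * v)[0, 0]                     # scalar v.v
projection_v = v * v.T / norm2              # standard projection onto v
orthogonal_v = eye(3) - projection_v

d_orthogonal_v = orthogonal_v.diff(t)       # component-wise time derivative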

Minimize function with trust-ncg method proposes value greater than max_trust_radius

As far as I understand the minimize function with the trust-ncg method, the "method specific" parameter max_trust_radius is the maximum value for a new optimization step.
However, I experience a weird behaviour.
I am working with my doctorate data, and I have code that invokes the minimize function (with the trust-ncg method), passing the parameters
{
'initial_trust_radius':0.1,
'max_trust_radius':1,
'eta':0.15,
'gtol':1e-5,
'disp': True
}
I invoke minimize function as:
res = minimize(bbox, x0, method='trust-ncg', jac=bbox_der, hess=bbox_hess, options=opt_par)
where
bbox is the function that evaluates the objective function,
x0 is the initial guess,
bbox_der is the gradient function,
bbox_hess is the Hessian function,
opt_par is the dictionary above with the parameters.
bbox invokes the simulation code and gets the data. It works: minimize goes back and forth, proposing new values, and bbox invokes the simulation.
Everything worked well until I hit a weird issue.
The x vector contains 8 values. I noticed that in one of the iterations the last value is greater than 1.
Per max_trust_radius, I think it should be less than 1, but it is 1.0621612802208713e+00.
This causes problems because bbox cannot accept a value of 1 or greater: it invokes a simulation program that has that constraint.
I looked through the scipy code to see whether I could find a bug or something wrong, but I could not.
My main concerns are:
Is there a bug in the scipy minimize code, given that the new value is greater than max_trust_radius?
How can I manipulate or control the values to keep them from becoming greater than 1?
Do you suggest anything to investigate the issue?
The max_trust_radius controls how large steps you are allowed to take:
max_trust_radius : float
Maximum value of the trust-region radius.
No steps that are longer than this value will be proposed.
Since you are very likely to take many steps during the minimization, each of which can be up to 1 long, it is not strange at all that you end up with ||x|| > 1 (assuming ||x0|| = 0).
If your problem is strictly bounded then you need to apply an optimization algorithm that supports bounds on the parameters.
For scipy.optimize.minimize only L-BFGS-B, TNC and SLSQP methods seem to support the bounds= keyword.
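A minimal sketch of that suggestion, reusing the poster's bbox, bbox_der, and x0 from above (the exact upper bound of 1 - 1e-6 is an assumption, chosen to keep the simulation input strictly below 1):

from scipy.optimize import minimize

bounds = [(None, 1 - 1e-6)] * len(x0)   # upper-bound every component of x

res = minimize(bbox, x0, method='L-BFGS-B', jac=bbox_der, bounds=bounds)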

Is it possible to define a function that takes a different form based on the sign of the parameter in Theano

I need to define the following function. Is it possible to do this in Theano?
UPDATE:
To clarify: I'm asking about defining a Theano symbolic variable that can take the above form. I understand that I can define two separate variables and use either of them based on the value of R. My question here is whether it is possible to define a single variable that takes the above form. The reason is that I need to take gradients of this variable as well as use it in other variables, and it would drastically simplify my solution if I could define this within a single symbolic variable.
UPDATE 2:
The proposed solution with a lambda doesn't work; it doesn't generate a symbolic variable that can later be used with Theano:
r = T.dscalar('r')
dd = lambda r: r + 1 if r > 0 else r - 1
Without knowing the specifics of Theano, I remember that one way to turn an if-else statement into a linear equation is to make your if check into a variable itself, setting it to 0 or 1. Then you can do something like:
sign = (R_t > 0) ## this is the part I don't know how exactly to do
(topEquation * sign) + (bottomEquation * (sign ^ 1))
This has the nice property that if sign is 1 (or True), bottomEquation drops out, being multiplied by 1 ^ 1, or just 0. Similarly, topEquation drops out if sign is 0/False.
One note, though maybe Theano can help with this: it will still evaluate both equations, so this could present an efficiency concern (for every single input, it runs both equations and then ignores one of them).
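If Theano's built-in conditional fits your case, a sketch along these lines (using theano.tensor.switch, which stays symbolic and differentiable) may be simpler than the 0/1 trick; the r + 1 and r - 1 branches are taken from the lambda above:

import theano
import theano.tensor as T

r = T.dscalar('r')
dd = T.switch(T.gt(r, 0), r + 1, r - 1)   # r + 1 if r > 0 else r - 1

f = theano.function([r], dd)              # numeric evaluation
grad_dd = T.grad(dd, r)                   # gradients propagate through switch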

Python: How do I find an equation's value for a given input

Say, I have an equation f(x) = x**2 + 1, I need to find the value of f(2).
The easiest way is to create a function, accept a parameter, and return the value.
But the problem is that f(x) is created dynamically, so a function cannot be written beforehand to get the value.
I am using cvxpy for an optimization problem. The equation looks something like this:
x = cvx.Variable()
Si = [cvx.square(prev[i] + cvx.sqrt(200 - cvx.square(x))) for i in range(3)]
prev is an array of numbers. There will be Si[0], Si[1], and Si[2].
How do I find the value of Si[0] for x = 20?
Basically, is there any way to substitute the said Variable and find the value of the expression when using cvxpy?
Set the value of the variables and then you can obtain the value of the expression, like so:
>>> x.value = 3
>>> Si[0].value
250.281099844341
(although it won't work for x = 20 because then you'd be taking the square root of a negative number).
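For completeness, a self-contained version of that evaluation; prev here is assumed example data, since the original values weren't given:

import cvxpy as cvx

prev = [1.0, 2.0, 3.0]   # assumed example data
x = cvx.Variable()
Si = [cvx.square(prev[i] + cvx.sqrt(200 - cvx.square(x))) for i in range(3)]

x.value = 3
print(Si[0].value)       # numeric value of the expression at x = 3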
The general solution to interpreting code on the fly in Python is to use the built-in eval(), but eval is dangerous with user-supplied input, which could do all sorts of nasty things to your system.
Fortunately, there are ways to "sandbox" eval using its additional parameters so that the expression only has access to known "safe" operations. There is an example of how to limit eval to white-listed operations and specifically deny it access to the built-ins. A quick look at that implementation looks close to correct, but I won't claim it is foolproof.
The sympy.sympify I mentioned in my comment uses eval() inside and carries the same warning.
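As a rough illustration of the sandboxing idea (not a complete sandbox; the expression string here is just an example):

import math

allowed = {'sqrt': math.sqrt, 'x': 20}             # white-listed names only
expr = "x**2 + 1"
value = eval(expr, {"__builtins__": {}}, allowed)  # -> 401, no builtins exposed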
In parallel to your cvx versions, you can use lambda to define functions on the fly:
f = [lambda x, i=j: (prev[i] + (200 - x*x)**.5)**2 for j in range(3)]  # (*)
Then you can evaluate f[0](20), f[1](20), and so on.
(*) The i=j is needed to bind each j to the associated function.

Getting an expression over a horizon for a given recursive equation in sympy/numpy

The following example is given only to define the query precisely. Consider a recursive equation x[k+1] = a*x[k], where a is some constant. Is there an easier way, or an existing method within sympy/numpy, that does the following (i.e., gives an expression over a horizon for a given recursive equation)?
from sympy import Symbol

def get_expr(init, num):
    a = Symbol('a')
    expr = init
    for i in range(num):
        expr = a*expr
    return expr

x0 = Symbol('x0')
get_expr(x0, 3)
The horizon above is 3.
I was going to suggest using SymPy's rsolve to try to find a closed-form solution to your recurrence, but it seems that, at least for this specific one, there is a bug that prevents it from working; see http://code.google.com/p/sympy/issues/detail?id=2943. If you really need this for a more complicated expression, you could still try it. For this one, the closed-form solution is just a**n*x0.
Aside from that, SymPy doesn't have any function that would do this evaluation directly, but it does have some things that can help. There are memoization decorators in sympy.utilities.memoization that are made for internal use but should work just fine for external uses. They can make your evaluation more efficient by caching the results of previous evaluations. You'll need to write get_expr recursively for this to work effectively. Or you could just write your own cacher; it's not that complicated.
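For reference, a sketch of the rsolve route (whether it succeeds may depend on your SymPy version, given the bug linked above):

from sympy import Function, rsolve, symbols

k = symbols('k', integer=True)
a, x0 = symbols('a x0')
x = Function('x')

sol = rsolve(x(k + 1) - a * x(k), x(k), {x(0): x0})
print(sol)   # expected closed form: a**k*x0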
