Is there any disadvantage of not passing sympy symbols to functions?
import sympy

def compute(alpha, a):  # or def compute(a)
    return 3*alpha + a

alpha = sympy.symbols("alpha")
expr = compute(alpha, 3)
I don't have to pass alpha to compute() (I assume because it's a global variable), and right now I think it makes the code more readable when I leave it out.
Is this considered bad design? I suppose this is a general "what to do with global variables in Python" question, as has been asked here, but the answers said it would depend on the specific use case.
I still have to create the alpha symbol before I call that function; it's just not obvious if I do not include it.
In contrast to lists (which the other question is about), SymPy treats symbols (and expressions) as immutable. Moreover, they are uniquely identified by the string you pass to Symbol or symbols. For illustration consider the following:
from sympy import Symbol
print({Symbol("a"), Symbol("a")})
# output: {a}
Therefore declaring a SymPy symbol as a global variable does not pose such a big issue – it’s like globally defining some mathematical constant. For example it cannot happen that some reasonable code using this symbol changes it.¹
So, depending on the context, it can make sense to:
define alpha globally,
pass alpha as a parameter to the function,
define alpha within the function at each call (which is possible, since symbols only depend on the associated string); see the sketch after this list.
Without further context, it’s impossible to say which applies to your situation.
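For illustration of the third option, here is a minimal sketch reusing the compute function from the question: recreating the symbol inside the function yields a symbol equal to the global one, so nothing needs to be passed in.
import sympy

def compute(a):
    alpha = sympy.symbols("alpha")  # equal to any global symbol named "alpha"
    return 3*alpha + a

alpha = sympy.symbols("alpha")
print(compute(3))               # 3*alpha + 3
print(compute(3).has(alpha))    # True: the local and global alpha coincide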
¹ Note that unreasonable code can still do this, e.g.:
from sympy.abc import a
from sympy.abc import a as a_2
a.name = "b"
print(a_2)
# output: b
Defining commonly used Symbols at the top of the file (i.e., globally) is perfectly good practice. As others have pointed out, even if it gets defined again, it won't really matter because two symbols with the same name are considered equal. One important caveat here: if you use assumptions, like real=True, then that does matter. So you should avoid using two symbols with the same name but different assumptions.
In your example, if alpha is always supposed to be the symbol Symbol('alpha'), then it doesn't make sense to have an argument to a function that never changes.
Another way you can do things, by the way, is to not have the function at all, but just use the expression with symbolic variables
from sympy import symbols

alpha, a = symbols('alpha a')
expr = 3*alpha + a
and then use subs when you want to substitute a value
expr.subs({a: 3}) # or expr.subs(a, 3)
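As a follow-up usage sketch (only if you also want plain numbers): substituting values for both symbols gives a numeric result, and lambdify turns the same expression into an ordinary Python function.
from sympy import symbols, lambdify

alpha, a = symbols('alpha a')
expr = 3*alpha + a

print(expr.subs({alpha: 2, a: 3}))   # 9
f = lambdify((alpha, a), expr)       # plain numeric function of two arguments
print(f(2, 3))                       # 9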
I'm writing a Python script that parses a user-inputted string defining a differential equation, such as 'x\' = 2*x'. My main problem is that I don't want to implement numerical solution methods myself, and instead rely on SciPy's solve_ivp method, for which a function such as
def my_de(t, x):
    return 2*x
is absolutely necessary, since solve_ivp's first argument must be a function. Currently, I'm working around this problem with the following piece of code (in a simplified version):
var = 'x'
de = '2*x'
def my_de(t, y):
    exec(f'{var} = {y}')
    return eval(de)
A quick explanation for this terribleness: I do not know what variable the user is going to use in the input. var may be theta, it may be sleepyjoe, it may be donalddump. The only thing guaranteed is that the only variable in de is var. You can forget about t for the purposes of this post.
My question is, how can I avoid using exec and eval in this context? I know using any of these is a terrible idea, and I don't want to do it. However, I'm not really seeing any other option.
I am already parsing the user input beforehand, so I can try to make this safe (prohibited variable names, etc.), but anyone who wants to abuse this will be able to anyway.
In addition to the previous comments, another possibility is to evaluate the function definition itself:
userInput="2*x + x**3" # the input you wish to implement
exec("""def test(x): return {}""".format(userInput))
print(test(1.))
This will avoid the overhead of evaluating the userInput at each call.
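If you prefer to keep raw exec/eval out of your own code, a hedged alternative sketch is to let SymPy parse the string and compile it with lambdify; note that sympify itself uses eval internally, so this only moves the trust question rather than removing it. The names var, de, and my_de follow the question; sympy and scipy are assumed to be available.
import sympy as sp
from scipy.integrate import solve_ivp

var = 'x'
de = '2*x'

x_sym = sp.Symbol(var)                # symbol named after the user's variable
rhs = sp.sympify(de)                  # parse the right-hand side string
f = sp.lambdify(x_sym, rhs, 'numpy')  # compile to a fast numeric function

def my_de(t, y):
    # solve_ivp passes the state as y; t is unused, as in the question
    return f(y)

sol = solve_ivp(my_de, (0, 1), [1.0])
print(sol.y[0, -1])                   # about 7.39 (e**2) for x' = 2*x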
It's not clear from the documentation how one might easily define a function, based on existing SymPy functions, and alias it to a particular symbol for printing.
For example, I have defined the rectangular function as follows.
import sympy as sp
def rect(t):
    return sp.Heaviside(t + 1/2) - sp.Heaviside(t - 1/2)
While this is useful for the algebra side of things, preventing me from having to define a new Function subclass with derivatives and so forth, it would be nice if I could associate it with uppercase Pi (Π) so that expressions using it would print that instead of a series of Heaviside symbols.
Your rect would have to be a class rect(Function) instead of a plain function, and it would need a custom printer. Note, too, that if you define the eval method as above (in terms of Heaviside), then your expression will no longer be the rect class; it will be an Add.
Another way to handle this would be to just use Function('\Pi')(t) in your expressions and when you are ready to do something with them, replace those instances with your definition. But what sorts of things do you want to do with the rect? Could you use a Piecewise instead?
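Here is a minimal sketch of the class-based route, assuming SymPy's _latex printing hook and the _eval_rewrite_as_* convention (so rect keeps its own head for printing, and you expand it into Heaviside terms only when needed):
import sympy as sp

class rect(sp.Function):
    def _latex(self, printer):
        # LaTeX printing hook: render rect(t) as \Pi(t)
        return r"\Pi\left(%s\right)" % printer._print(self.args[0])

    def _eval_rewrite_as_Heaviside(self, t, **kwargs):
        # used by expr.rewrite(sp.Heaviside) when the algebraic form is needed
        return sp.Heaviside(t + sp.Rational(1, 2)) - sp.Heaviside(t - sp.Rational(1, 2))

t = sp.symbols('t')
expr = rect(t)
print(sp.latex(expr))              # \Pi\left(t\right)
print(expr.rewrite(sp.Heaviside))  # expands into the Heaviside form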
I want to simplify a sympy expression where the arguments are matrix elements using the function expand_log.
As a simple example let's look at the expression log(exp(x)), which should be simplified to x.
As the tutorial explains, simplifications will only be applied if the required assumptions hold, i.e. in this case, x must be real.
If I have a scalar quantity, I can specify this assumption when creating the variable as shown here.
However, I use a matrix symbol which does not allow specifying assumptions at creation. I instead tried using the new assumptions module:
import sympy as sym
from sympy.assumptions import assuming, Q
x = sym.MatrixSymbol('x',1,2)
expr = sym.log(sym.exp(x[0,0]))
with assuming(Q.real(x[0,0])):
    display(sym.expand_log(expr))
The output still is log(exp(x[0, 0])).
So it seems to me that the expand_log function is not aware of the assumption that I specify in the assuming context manager.
Setting force=True yields the desired result but I want to avoid not checking assumptions at all.
Does anyone have an idea how to circumvent this problem?
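One possible workaround, offered only as a sketch: temporarily replace the matrix element by an ordinary Symbol carrying the old-style real=True assumption, simplify, and substitute back. The helper symbol y below is purely illustrative.
import sympy as sym

x = sym.MatrixSymbol('x', 1, 2)
expr = sym.log(sym.exp(x[0, 0]))

y = sym.Symbol('y', real=True)  # old-style assumption that expand_log understands
simplified = sym.expand_log(expr.subs(x[0, 0], y)).subs(y, x[0, 0])
print(simplified)               # x[0, 0]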
Consider the following sympy code:
import sympy as sp
from sympy import Add
from sympy.abc import x

t1 = 2 + 2*x
t2 = x
myeq = sp.UnevaluatedExpr(Add(sp.UnevaluatedExpr(t1), sp.UnevaluatedExpr(t2), evaluate=False))
# BUG! Will print: x + 2*x + 2
# Yet it should print: 2+2*x+x
print(myeq)
This code snippet was adapted from this answer. There the terms are simpler, so Add preserved the order. But how can I make Add preserve the order in this case as well?
(Remark: If we change the terms to t1=x and t2=x**2 my approach with using the sp.UnevaluatedExpr works, but the original answer that did not have those terms does not. Alas, for my specific case, not even using sp.UnevaluatedExpr works.)
This is not a bug...
... but rather a missing feature. All of it is documented.
Here is what SymPy means by unevaluated.
By unevaluated it is meant that the value inside of it will not
interact with the expressions outside of it to give simplified
outputs.
In your example, the terms 2*x and x were not simplified, as is expected.
Order of input
What you are seeing is SymPy not preserving the order in which you input your terms. This is documented under the expression tree section.
The arguments of the commutative operations Add and Mul are stored in
an arbitrary (but consistent!) order, which is independent of the
order inputted.
This should not be a problem since Add and Mul are commutative.
Although, if for some reason you want to preserve the order of input due to non-commutativity of multiplication, you can do so.
In SymPy, you can create noncommutative Symbols using Symbol('A',
commutative=False), and the order of multiplication for
noncommutative Symbols is kept the same as the input.
As for now, there does not seem to be non-commutative addition.
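A small sketch of the documented behaviour (illustrative only): noncommutative symbols keep their input order under Mul, while a plain Add collects its terms and chooses its own order.
import sympy as sp

A = sp.Symbol('A', commutative=False)
B = sp.Symbol('B', commutative=False)
print(B * A)        # B*A -- the input order is preserved

x = sp.Symbol('x')
print(2 + 2*x + x)  # 3*x + 2 -- Add collects terms and reorders them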
Say I have an equation f(x) = x**2 + 1 and I need to find the value of f(2).
The easiest way is to create a function that accepts a parameter and returns the value.
But the problem is that f(x) is created dynamically, so a function cannot be written beforehand to compute the value.
I am using cvxpy for an optimization problem. The expression would look something like the code below:
import cvxpy as cvx

x = cvx.Variable()
Si = [cvx.square(prev[i] + cvx.sqrt(200 - cvx.square(x))) for i in range(3)]
prev is an array of numbers, so there will be Si[0], Si[1], and Si[2].
How do I find the value of Si[0] for x = 20?
Basically, is there any way to substitute the Variable and find the value of the expression when using cvxpy?
Set the value of the variables and then you can obtain the value of the expression, like so:
>>> x.value = 3
>>> Si[0].value
250.281099844341
(although it won't work for x = 20 because then you'd be taking the square root of a negative number).
The general solution to interpreting code on the fly in Python is the built-in eval(), but eval is dangerous with user-supplied input, which could do all sorts of nasty things to your system.
Fortunately, there are ways to "sandbox" eval using its additional parameters to only give the expression access to known "safe" operations. There is an example of how to limit access of eval to only white-listed operations and specifically deny it access to the built-ins. A quick look at that implementation looks close to correct, but I won't claim it is foolproof.
The sympy.sympify I mentioned in my comment uses eval() inside and carries the same warning.
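As an illustration of the sandboxing idea (a sketch, not a complete security boundary): pass eval explicit globals with __builtins__ emptied so that only white-listed names are visible.
import math

expr = 'x**2 + 1'  # dynamically supplied f(x)
safe_globals = {'__builtins__': {}, 'sqrt': math.sqrt, 'exp': math.exp}

def f(x):
    # eval can only resolve the white-listed names plus x
    return eval(expr, safe_globals, {'x': x})

print(f(2))  # 5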
In parallel to your cvx versions, you can use lambda to define functions on the fly:
f = [lambda x, i=j: (prev[i] + (200 - x*x)**.5)**2 for j in range(3)]  # (*)
Then you can evaluate f[0](20), f[1](20), and so on.
(*) the i=j default argument is needed to bind each j to its associated function.
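A small illustration of why that default argument matters (the values in prev are made up for the demonstration): without i=j, every lambda closes over the same loop variable and ends up using its final value.
prev = [2.0, 5.0, 7.0]  # made-up numbers, just for the demonstration

bad = [lambda x: (prev[j] + (200 - x*x)**.5)**2 for j in range(3)]
good = [lambda x, i=j: (prev[i] + (200 - x*x)**.5)**2 for j in range(3)]

print(bad[0](10.))   # 289.0 -- uses prev[2]; all three lambdas share the final j
print(good[0](10.))  # 144.0 -- uses prev[0], as intended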