How can I evaluate the constants C1 and C2 from a solution of a differential equation that SymPy gives me? The initial conditions are f(0) = 0 and f(pi/2) = 3.
>>> from sympy import *
>>> f = Function('f')
>>> x = Symbol('x')
>>> dsolve(f(x).diff(x,2)+f(x),f(x))
f(x) == C1*sin(x) + C2*cos(x)
I tried passing initial conditions via ics, but it has no effect. Example:
>>> dsolve(f(x).diff(x,2)+f(x),f(x), ics={f(0):0, f(pi/2):3})
f(x) == C1*sin(x) + C2*cos(x)
By the way: C2 = 0 and C1 = 3.
There's a pull request implementing initial/boundary conditions, which was merged and should be released in SymPy 1.2. Meanwhile, one can solve for constants like this:
import math
sol = dsolve(f(x).diff(x, 2) + f(x), f(x)).rhs
constants = solve([sol.subs(x, 0), sol.subs(x, math.pi/2) - 3])
final_answer = sol.subs(constants)
The code returns final_answer as 3.0*sin(x).
Remarks
solve may return a list of solutions, in which case one would have to substitute constants[0], etc. To force it to return a list in any case (for consistency), use dict=True:
constants = solve([sol.subs(x,0), sol.subs(x, math.pi/2) - 3], dict=True)
final_answer = sol.subs(constants[0])
If the equation contains parameters, solve may or may not solve for the variables you want (C1 and C2). This can be ensured as follows:
constants = solve([sol.subs(x,0), sol.subs(x, math.pi/2) - 3], symbols('C1 C2'))
where again, dict=True would force the list format of the output.
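Once that release is out, the ics argument should handle this directly; a minimal sketch (assuming the merged interface supports conditions at two points):
>>> dsolve(f(x).diff(x, 2) + f(x), f(x), ics={f(0): 0, f(pi/2): 3})
which should return f(x) = 3*sin(x) with no manual solving, using SymPy's exact pi rather than math.pi.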
Related
I tried to solve this equation, but it is still running.
I defined the symbol, and the equation is "Eq((1 - (1 + x)**(-60))/x + 32*(1 + x)**(-60), 41.81)".
The way solve and solveset usually work is to split an expression into numerator and denominator, and return the solutions of the numerator that are not also solutions of the denominator.
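As a toy illustration of that splitting (my own example, not the question's equation), a numerator root that also kills the denominator is discarded:
>>> from sympy import solve
>>> from sympy.abc import x
>>> ((x**2 - 1)/(x - 1)).as_numer_denom()
(x**2 - 1, x - 1)
>>> solve((x**2 - 1)/(x - 1), x)  # x = 1 is dropped because it zeroes the denominator
[-1]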
Let's define a helper to put the real roots of an expression into a FiniteSet, plus helpers to give the final solutions:
>>> from sympy import FiniteSet, nsolve, real_roots, nsimplify, Add, Eq
>>> from sympy.abc import x
>>> rr = lambda x: FiniteSet(*[i[0] for i in real_roots(x, multiple=False)])
>>> sol = lambda n, d: list(rr(n) - rr(d))
>>> go = lambda eq: sol(*eq.rewrite(Add).as_numer_denom())
Now we try this out on your original expression:
>>> eq = Eq(32/(x + 1)**60 + (1 - 1/(x + 1)**60)/x, 41.81)
>>> fsol = go(eq) # very slow
>>> [i.n(3) for i in fsol]
[-3.33, -2.56, -1.44, -0.568, -0.228, 0.0220]
If you check those by substituting into the original equation (rewritten as an expression), you will find that only the last one is valid:
>>> expr = eq.rewrite(Add)
>>> [expr.subs(x, i).n(3) for i in fsol]
[-42.1, -42.2, 4.72e+22, 2.64e+23, 1.97e+8, 1.31e-15]
Now let's replace that Float with a Rational and get solutions:
>>> req = nsimplify(eq, rational=True); req
Eq(32/(x + 1)**60 + (1 - 1/(x + 1)**60)/x, 4181/100)
>>> rsol = go(req) # pretty fast
>>> [i.n(3) for i in rsol]
[-2.00, 0.0220]
We know the 2nd solution is right; let's check the first:
>>> req.subs(x, rsol[0]).rewrite(Add).n(3)
-0.e-114
So both solutions appear to be valid, and you don't get any spurious solutions, which (by the way) I wasn't expecting.
An exact analytic solution to this is unlikely but you can get numeric solutions e.g.:
In [18]: nsolve(eq, x, -2)
Out[18]: -1.99561339048822
Since this can be transformed into a polynomial you can find all real solutions like:
In [20]: p = Poly(nsimplify(eq).rewrite(Add).as_numer_denom()[0])
In [21]: [r[0].n() for r in p.real_roots(multiple=False)]
Out[21]: [-1.99561339048822, -1.0, 0, 0.0219988833527669]
Using as_numer_denom like this can potentially introduce spurious solutions, though, so you should check them (e.g. by plotting the function around each root). For example, 0 is not actually a root.
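A quick numeric screen (my own sketch, reusing eq, p and x from above) is to substitute each candidate back into the equation rewritten as an expression; roots coming from the denominator show up as nan, zoo or a large residual:
expr = nsimplify(eq).rewrite(Add)                  # lhs - rhs as a single expression
candidates = [r[0] for r in p.real_roots(multiple=False)]
for r in candidates:
    print(r.n(6), expr.subs(x, r).n(3))            # genuine roots give a residual near 0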
I am using sympy to solve a very simple equation symbolically, but the solution I get for the variable is an empty matrix! Here is the code:
from sympy import *
x = Symbol('x')
l_x = Symbol('l_x')
x_min = -6
x_max = 6
precision_x = 10**8
solve(((x_max-x_min)/((2**l_x)-1))/precision_x, l_x)
print(l_x)
I tried some other simple equations such as:
solve(Eq(x**2, 4), x)
And the latter works perfectly; I just do not understand why the former does not!
The expression given to solve has an assumed rhs of 0 which no value of l_x can satisfy. Try something like this instead:
from sympy import *
q, r, s, t = symbols("q r s t")
eq = (q-r)/(2**s-1)/t
solve(eq-1,s)
The output is:
[log((q - r + t)/t)/log(2)]
To explicitly create an equation object with a non-zero rhs, you can do something like:
solve(Eq(eq,1),s)
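As a quick check with the question's numbers (q = x_max = 6, r = x_min = -6, t = precision_x = 10**8; my own substitution):
>>> [e.subs({q: 6, r: -6, t: 10**8}) for e in solve(eq - 1, s)]
[log(25000003/25000000)/log(2)]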
It is simple: your equation has no solution.
The equation is 12/((2**l_x) - 1)/1e8 = 0, and that has no solution.
See what y = 12/((2**x) - 1)/1e8 looks like (plot from WolframAlpha, not reproduced here).
To compare, try solving e.g. 12/((2**l_x) - 1)/1e8 = 1 instead:
>>> solve(((x_max-x_min)/((2**l_x)-1))/precision_x - 1, l_x)
[(-log(25000000) + log(25000003))/log(2)]
Works like a charm!
I have this differential equation written in SymPy
diffeq = Eq(f(x).diff(x, x) - 2*f(x).diff(x) + f(x), sin(x))
where f is a Function symbol and x is a variable Symbol.
When I solve it with this:
expr = dsolve(diffeq, f(x))
I get
f(x) = (C1 + C2*x)*exp(x) + cos(x)/2
which is the correct solution to this equation. But now I would like to evaluate this function at several points. I know I can substitute for x with the subs function, but is there a way to substitute values for the constants C1 and C2 so I can evaluate the function?
There is an open PR for this on GitHub that would add an ics flag to dsolve.
For now, you can substitute the values manually using subs, use solve to solve for C1 and C2, and use subs to substitute the values back into the solution.
For example, if f(0) = 1 and f'(0) = 0, you'd use something like
>>> p1 = expr.subs([(x, 0), (f(0), 1)])
>>> dexpr = Eq(expr.lhs.diff(x), expr.rhs.diff(x))
>>> p2 = dexpr.subs([(x, 0), (f(x).diff(x).subs(x, 0), 0)])
>>> p1
Eq(1, C1 + 1/2)
>>> p2
Eq(0, C1 + C2)
>>> C1, C2 = symbols('C1 C2')
>>> sol = solve([p1, p2], [C1, C2])
>>> sol
{C1: 1/2, C2: -1/2}
>>> expr.subs(sol)
Eq(f(x), (-x/2 + 1/2)*exp(x) + cos(x)/2)
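Once the ics flag is available (it was slated for SymPy 1.2), the same conditions should be expressible directly; a sketch of that interface:
>>> dsolve(diffeq, f(x), ics={f(0): 1, f(x).diff(x).subs(x, 0): 0})
which should reproduce the same result without the manual solve step.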
You can get your constants from your expression. It's a bit messy, but it works:
v1 = expr.args[1].args[1].args[0].args[0]
v2 = expr.args[1].args[1].args[0].args[1].args[0]
expr.subs(v1,1).subs(v2,2)
Explanation:
Have a look at expr.args. It is a tuple of the left- and right-hand sides of the equation. Here we want the second entry of the tuple, thus index 1.
We then get a sympy.core.add.Add, which we can again decompose using args, continuing until we reach the constants.
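A less fragile way to grab the constants (my own suggestion) is to take the free symbols of the right-hand side other than x, rather than digging through .args:
>>> consts = sorted(expr.rhs.free_symbols - {x}, key=lambda s: s.name)
>>> consts
[C1, C2]
>>> expr.subs(dict(zip(consts, [1, 2])))
This substitutes C1 -> 1 and C2 -> 2 without depending on the internal ordering of the expression tree.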
SymPy can easily solve quadratic equations with short simple coefficients.
For example:
from pprint import pprint
from sympy import *
x,b,f,Lb,z = symbols('x b f Lb z')
eq31 = Eq((x*b + f)**2, 4*Lb**2*z**2*(1 - x**2))
pprint(eq31)
sol = solve(eq31, x)
pprint(sol)
But with slightly larger coefficients, it can't:
from pprint import pprint
from sympy import *
c3,b,f,Lb,z = symbols('c3 b f Lb z')
phi,Lf,r = symbols('phi Lf r')
eq23 = Eq(
(
c3 * (2*Lb*b - 2*Lb*f + 2*Lb*r*cos(phi + pi/6))
+ (Lb**2 - Lf**2 + b**2 - 2*b*f + 2*b*r*cos(phi + pi/6) + f**2 - 2*f*r*cos(phi + pi/6) + r**2 + z**2)
)**2,
4*Lb**2*z**2*(1 - c3**2)
)
pprint(eq23)
print("\n\nSolve (23) for c3:")
solutions_23 = solve(eq23, c3)
pprint(solutions_23)
Why?
This is not specific to SymPy; other programs like Maple or Mathematica suffer from the same problem: when solving an equation, solve needs to choose a proper solution strategy (see e.g. SymPy's Solvers) based on assumptions about the variables and the structure of the equation. These choices are normally heuristic and often incorrect (hence no solution is found, or false strategies are tried first). Furthermore, the assumptions on the variables are often too broad (e.g., complex instead of real).
Thus, for complex equations the solution strategy often has to be given by the user. For your example, you could use:
sol23 = roots(eq23.lhs - eq23.rhs, c3)
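Note that roots returns a {root: multiplicity} dict rather than a list, so the two quadratic-formula branches, if found, can be listed with (a sketch):
solutions_23 = list(roots(eq23.lhs - eq23.rhs, c3))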
Since symbolic solutions are supported, one thing you can do is solve the generic quadratic and substitute in your specific coefficients:
>>> eq = eq23.lhs-eq23.rhs
>>> a,b,c = Poly(eq,c3).all_coeffs()
>>> var('A:C')
(A, B, C)
>>> ans=[i.xreplace({A:a,B:b,C:c}) for i in solve(A*x**2 + B*x + C,x)]
>>> from sympy.utilities.misc import filldedent
>>> print(filldedent(ans))
...
But you can get the same result if you just shut off simplification and checking:
>>> ans=solve(eq23,c3,simplify=False,check=False)
(Those are the really expensive parts of the call to solve.)
I have a number of symbolic expressions in sympy, and I may come to realize that one of the coefficients is zero. I would think, perhaps because I am used to mathematica, that the following makes sense:
from sympy import Symbol
x = Symbol('x')
y = Symbol('y')
f = x + y
x = 0
f
Surprisingly, what is returned is x + y. Is there any way, aside from explicitly calling "subs" on every equation, for f to return just y?
I think subs is the only way to do this. It looks like a SymPy expression is something unto itself; it does not reference the pieces that made it up. That is, f only holds the expression x + y and keeps no link back to the Python variables x and y. Consider the code below:
from sympy import Symbol
x = Symbol('x')
y = Symbol('y')
z = Symbol('z')
f1 = x + y
f2 = z + f1
f1 = f1.subs(x,0)
print(f1)
print(f2)
The output from this is
y
x + y + z
So even though f1 has changed, f2 hasn't. To my knowledge, subs is the only way to do what you want.
I don't think there is a way to do that automatically (or at least not without modifying SymPy).
The following question from SymPy's FAQ explains why:
Why doesn't changing one variable change another that depends on it?
The short answer is "because it doesn't depend on it." :-) Even though
you are working with equations, you are still working with Python
objects. The equations you are typing use the values present at the
time of creation to "fill in" values, just like regular python
definitions. They are not altered by changes made afterwards. Consider
the following:
>>> a = Symbol('a') # create an object with name 'a' for variable a to point to
>>> b = a + 1; b # create another object that refers to what 'a' refers to
a + 1
>>> a = 4; a # a now points to the literal integer 4, not Symbol('a')
4
>>> b # but b is still pointing at Symbol('a')
a + 1
Changing quantity a does not change b; you are not working with a set
of simultaneous equations. It might be helpful to remember that the
string that gets printed when you print a variable referring to a sympy
object is the string that was given to it when it was created; that
string does not have to be the same as the variable that you assign it
to:
>>> r, t, d = symbols('rate time short_life')
>>> d = r*t; d
rate*time
>>> r=80; t=2; d # we haven't changed d, only r and t
rate*time
>>> d=r*t; d # now d is using the current values of r and t
160
Maybe this is not what you're looking for (as others have already explained), but this is my solution for substituting several values at once.
def GlobalSubs(exprNames, varNames, values=None):
    if not values:                                    # Get the values from the
        values = [eval(varName)                       # variables when they are
                  for varName in varNames]            # not given as an argument.
    for exprName in exprNames:                        # Create a temp copy
        expr = eval(exprName)                         # of each expression
        for i in range(len(varNames)):                # and substitute
            expr = expr.subs(varNames[i], values[i])  # each variable.
        yield expr                                    # Return each expression.
It works even for matrices!
>>> x, y, h, k = symbols('x, y, h, k')
>>> A = Matrix([[ x, -h],
... [ h, x]])
>>> B = Matrix([[ y, k],
... [-k, y]])
>>> x = 2; y = 4; h = 1; k = 3
>>> A, B = GlobalSubs(['A', 'B'], ['x', 'h', 'y', 'k'])
>>> A
Matrix([
[2, -1],
[1, 2]])
>>> B
Matrix([
[ 4, 3],
[-3, 4]])
But don't try to put this in a separate module; it won't work. It only works when the expressions, the variables and the function are all defined in the same file, so that everything is global to the function and it can access them.
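A variant that avoids eval altogether (my own sketch, assuming x, h, y, k still refer to the Symbol objects rather than plain numbers) is to pass the expressions and a substitution mapping, which also works from inside a module:
def global_subs(exprs, mapping):
    # Substitute the same mapping into every expression and return the results.
    return [e.subs(mapping) for e in exprs]

A, B = global_subs([A, B], {x: 2, h: 1, y: 4, k: 3})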