I want to solve this equation with the following parameters:
from sympy import symbols, solve

gamma = 0.1
F = 0.5
w = 0
A = symbols('A')
a = 1 + w**4 - w**2 + 4*(gamma**2)*w**2
b = 1 - w**2
sol = solve(a*A**2 + (9/16)*A**6 + (3/2)*b*A**4 - F**2)
list_A = []
for i in range(len(sol)):
    if type(sol[i]) == float:
        print(sol[i])
        list_A.append(sol[i])
However, as expected, I am getting both real and complex values, and I want to remove the complex ones and keep only the floats. But the condition I implemented does not work, because the type of sol[i] is either sympy.core.add.Add for the complex values or sympy.core.numbers.Float for the floats.
My question is: how can I modify my condition so that it keeps only the float values?
In addition, is there a way to speed it up? It is very slow when I put it in a loop over many values of omega.
This is my first time working with sympy.
When SymPy is able to validate solutions against assumptions on the symbols, it will; so if you tell SymPy that A is real then, provided it can verify the solutions, it will only show the real ones:
>>> A = symbols('A',real=True)
>>> sol = solve(a*A**2 + (9/16)*A**6 + (3/2)*b*A**4 -F**2)
>>> sol
[-0.437286658108243, 0.437286658108243]
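If you would rather keep A unrestricted, you can also filter the returned solutions by their is_real attribute. And because the expression is polynomial in A, using Poly(...).nroots() instead of solve() is usually much faster when you sweep over many values of omega. A rough sketch of both ideas (the omega values in the loop below are made up, not taken from your code):

from sympy import symbols, Poly

gamma, F = 0.1, 0.5
A = symbols('A')

real_amplitudes = {}
for w in (0, 0.5, 1.0):  # hypothetical sweep over omega values
    a = 1 + w**4 - w**2 + 4*(gamma**2)*w**2
    b = 1 - w**2
    expr = a*A**2 + (9/16)*A**6 + (3/2)*b*A**4 - F**2
    roots = Poly(expr, A).nroots()          # numeric roots, real and complex
    real_amplitudes[w] = [float(r) for r in roots if r.is_real]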
According to this graph: desmos
print(solve('x**2 + x - 1/x'))
# [-1/3 + (-1/2 - sqrt(3)*I/2)*(sqrt(69)/18 + 25/54)**(1/3) + 1/(9*(-1/2 - sqrt(3)*I/2)*(sqrt(69)/18 + 25/54)**(1/3)), -1/3 + 1/(9*(-1/2 + sqrt(3)*I/2)*(sqrt(69)/18 + 25/54)**(1/3)) + (-1/2 + sqrt(3)*I/2)*(sqrt(69)/18 + 25/54)**(1/3), -1/3 + 1/(9*(sqrt(69)/18 + 25/54)**(1/3)) + (sqrt(69)/18 + 25/54)**(1/3)]
I was expecting [0.755, 0.57], but I got something I cannot use in my future program. I want a list of floats as the result, so, referring to this post, I did the following, but the output was even weirder:
from sympy import solve, core

def solver(solved, rit=3):
    res = []
    for val in solved:
        if isinstance(val, core.numbers.Add):
            flt = val.as_two_terms()[0]
            flt = round(flt, rit)
        else:
            flt = round(val, rit)
        if not isinstance(flt, core.numbers.Add):
            res.append(flt)
    return res

print(solver(solve('x**2 + x - 1/x')))
# [-0.333, -0.333, -0.333]
Now I am really disappointed with sympy. I wonder whether there is an accurate way to get a list of floats as the result, or whether I should code my own gradient descent algorithm to find the roots and the intersection.
sym.solve solves an equation for the independent variable. If you provide an expression, it assumes the equation sym.Eq(expr, 0). But this only gives you the x values; you have to substitute those solutions back to find the y values.
Your equation has three solutions: a conjugate pair of complex solutions and a real one. The latter is where your two graphs meet.
import sympy as sym

x = sym.Symbol('x')

# better to represent it like the equation it is
eq = sym.Eq(x**2, 1/x - x)
sol = sym.solve(eq)

for s in sol:
    if s.is_real:
        s = s.evalf()
        print(s, eq.lhs.subs({x: s}))  # eq.rhs works too
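For the real solution this prints roughly 0.7549 for x and 0.5698 for y, which is probably where your expected 0.755 and 0.57 come from: they look like the x and y coordinates of the single intersection point, not two separate roots.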
There are a variety of things you can do to get the solution. If you know the approximate root location and you want a numerical answer, nsolve is simplest since it has no requirements on the type of expression:
>>> from sympy import nsolve, symbols
>>> x = symbols('x')
>>> eq = x**2 + x - 1/x
>>> nsolve(eq, 1)
0.754877666246693
You can try a guess near 0.57 but it will converge to the same solution. So is there really a second real root? You can't use real_roots on this expression because it isn't in polynomial form, but if you split it into numerator and denominator you can check for the roots of the numerator:
>>> n, d = eq.as_numer_denom()
>>> from sympy import real_roots
>>> real_roots(n)
[CRootOf(x**3 + x**2 - 1, 0)]
So there is only one real root for that expression, the one that nsolve gave you.
Note: the answer that solve gives is an exact symbolic solution of the cubic, and solve can't figure out definitively which of the three satisfy the original equation, so it returns all of them. If you evaluate them you will find that only one is real. But since you don't need the symbolic solution, just stick to nsolve.
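If you do want a plain float out of that CRootOf object, you can evaluate it numerically to whatever precision you like (a small sketch continuing the session above):

>>> r = real_roots(n)[0]
>>> r.n(15)
0.754877666246693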
I am trying to solve simultaneous equations for x and y, but I am not getting any result (the code just keeps running). I suspect the problem is related to using sqrt in the equations, but I am not sure. Can someone help me figure this out?
from __future__ import division
from sympy import Symbol,sqrt,solve
x = Symbol('x')
y = Symbol('y')
z = Symbol('z')
a = Symbol('a')
b = Symbol('b')
c = Symbol('c')
d = Symbol('d')
e = Symbol('e')
f = Symbol('f')
g = Symbol('g')
h = Symbol('h')
print (solve((sqrt((c-a)**2+(d-b)**2)+sqrt((x-c)**2+(y-d)**2)-2*sqrt((x-a)**2+(y-b)**2),(y-b)*(e-a)-(x-a)*(f-b)) ,x,y))
This is a(nother) problem where you have to rely on the A of CAS and let SymPy assist you, instead of relying on SymPy (in its current state) to do all the work. The following assumes that eqs is a list of the two equations you want to solve, as given in the OP.
Notice that the 2nd equation is linear in both symbols. Solve for y and substitute into the first equation.
>>> yis = solve(eqs[1], y)[0]
>>> eq0 = eqs[0].subs(y,yis)
This gives an expression that has a lot of symbols in it, which slows things down. It also has two sqrt terms that depend on x. Replace the arguments of those sqrts with Dummy symbols, then unrad the expression to get it into polynomial form, restore the replacements, and factor:
>>> from sympy import Dummy, Pow, S, count_ops
>>> from sympy.solvers.solvers import unrad
>>> reps = {i.base:Dummy() for i in eq0.atoms(Pow) if i.has(x) and i.exp==S.Half}
>>> ireps = {v:k for k,v in reps.items()}
>>> poly = unrad(eq0.xreplace(reps), *reps.values())[0].xreplace(ireps).factor()
Using factor is an expensive step to apply routinely, but if you know the problem is going to take a long time without it, it is worth a try. In this case a quartic reduces to a product of quadratics, which are easy to solve and don't require checking or simplification:
>>> xis = solve(poly, x)
There are three solutions for x, and each of these can be substituted into the expression for y to get the corresponding y values. The solutions are too large to show here.
>>> count_ops(xis)
386
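To get the matching y values, substitute each x solution back into the expression found for y earlier (a small sketch continuing the session above; the name xysols is mine):

>>> xysols = [(xi, yis.subs(x, xi)) for xi in xis]
>>> len(xysols)
3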
When I calculate the mean of a list of floats the following way
def mean(x):
    return sum(x) / len(x)
then I usually do not care about tiny errors in floating-point operations. However, I am currently facing an issue where I want to get all elements in a list that are equal to or above the list's average.
Again, this is usually not a problem, but in cases where all elements in the list are the same floating-point number, the mean calculated by the function above can actually be greater than every element in the list. That, in my case, is obviously an issue.
I need a workaround that does not rely on Python 3.x-only libraries (such as statistics).
Edit:
It has been suggested in the comments to use rounding. Interestingly, this made the errors rarer, but they still occur, as in this case:
[0.024484987, 0.024484987, 0.024484987, 0.024484987, ...] # x
0.024485 # mean
[] # numbers above mean
I believe you should be using math.fsum() instead of sum. For example:
>>> import math
>>> a = [0.024484987, 0.024484987, 0.024484987, 0.024484987] * 1360001
>>> math.fsum(a) / len(a)
0.024484987
This is, I believe, the answer you are looking for. It produces more consistent results, irrespective of the length of a, than the equivalent using sum().
>>> sum(a) / len(a)
0.024484987003073517
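With fsum doing the summation, the original task (keep every element at or above the average) then behaves as expected in the all-equal case; a minimal sketch:

>>> import math
>>> a = [0.024484987] * 4
>>> m = math.fsum(a) / len(a)
>>> [v for v in a if v >= m]
[0.024484987, 0.024484987, 0.024484987, 0.024484987]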
One neat solution is to use compensated summation, combined with double-double tricks to perform the division accurately:
def mean_kbn(X):
    # 1. Kahan-Babuska-Neumaier summation
    s = c = 0.0
    n = 0
    for x in X:
        t = s + x
        if abs(s) >= abs(x):
            c -= ((s-t) + x)
        else:
            c -= ((x-t) + s)
        s = t
        n += 1
    # sum is now s - c
    # 2. double-double division from Dekker (1971)
    # https://link.springer.com/article/10.1007%2FBF01397083
    u = s / n  # first guess of division
    # Python doesn't have an fma function, so do mul2 via Veltkamp splitting
    v = 1.34217729e8  # 0x1p27 + 1
    uv = u*v
    u_hi = (u - uv) + uv
    u_lo = u - u_hi
    nv = n*v
    n_hi = (n - nv) + nv
    n_lo = n - n_hi
    # r = s - u*n exactly
    r = (((s - u_hi*n_hi) - u_hi*n_lo) - u_lo*n_hi) - u_lo*n_lo
    # add correction
    return u + (r-c)/n
Here's a sample case I found, comparing with sum, math.fsum and numpy.mean:
>>> mean_kbn([0.2,0.2,0.2])
0.2
>>> sum([0.2,0.2,0.2])/3
0.20000000000000004
>>> import math
>>> math.fsum([0.2,0.2,0.2])/3
0.20000000000000004
>>> import numpy
>>> numpy.mean([0.2,0.2,0.2])
0.20000000000000004
How about not using the mean at all, and instead multiplying each element by the length of the list and comparing it directly to the sum of the original list?
I think this should do what you want without relying on division.
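A minimal sketch of that idea (the helper name is mine):

def at_or_above_mean(xs):
    # compare each element scaled by len(xs) against the same total
    total = sum(xs)
    return [x for x in xs if x * len(xs) >= total]

Note that sum(xs) is itself an accumulated floating-point sum, so combining this with math.fsum from the answer above should be safer still.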
I'm trying to perform the following integration using sympy:
from sympy import Symbol, integrate

x = Symbol('x')
expr = (x+3)**5
integrate(expr)
The answer that I'm expecting is (x + 3)**6/6 (up to a constant).
But what's being returned is x**6/6 + 3*x**5 + 45*x**4/2 + 90*x**3 + 405*x**2/2 + 243*x.
The following code works in MATLAB:
syms x
y = (x+3)^5;
int(y)
I'm unsure what I'm doing wrong in order to perform this using sympy.
This is actually a common situation seen in calculus: for these kinds of polynomial expressions you can get two different-looking answers. The coefficients for each of the powers of x match, but the two results differ by a constant.
As such, there are two methods you can use to find the indefinite integral of this expression.
The first method is to perform the substitution u = x + 3 and integrate with respect to u. The indefinite integral is then (1/6)*(x + 3)^6 + C, as you expect.
The second method is to fully expand out the polynomial and integrate each term individually.
MATLAB elects to find the integral the first way:
>> syms x;
>> out = int((x+3)^5)
out =
(x + 3)^6/6
Something to note for later is that if we expand out this polynomial expression, we get:
>> expand(out)
ans =
x^6/6 + 3*x^5 + (45*x^4)/2 + 90*x^3 + (405*x^2)/2 + 243*x + 243/2
sympy elects to find the integral the second way:
In [20]: from sympy import *
In [21]: x = Symbol('x')
In [22]: expr = (x+3)**5
In [23]: integrate(expr)
Out[23]: x**6/6 + 3*x**5 + 45*x**4/2 + 90*x**3 + 405*x**2/2 + 243*x
You'll notice that the answer is the same in both environments, except that the constant is missing. Because the constant is missing, there is no neat way to factor this back into the compact polynomial form you are expecting from your MATLAB output.
As a final note, if you would like to reproduce what sympy generates in MATLAB, expand out the polynomial first and then integrate each term:
>> syms x;
>> out = expand((x+3)^5)
out =
x^5 + 15*x^4 + 90*x^3 + 270*x^2 + 405*x + 243
>> int(out)
ans =
x^6/6 + 3*x^5 + (45*x^4)/2 + 90*x^3 + (405*x^2)/2 + 243*x
The constant, though, shouldn't worry you. In the end, what you are mostly concerned with is a definite integral, where these constants cancel on subtraction and so don't affect the final result.
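If you want to convince yourself that the two results really differ only by a constant, you can subtract them in sympy (a quick check, not part of the original answer):

>>> from sympy import Symbol, integrate, expand
>>> x = Symbol('x')
>>> expand(integrate((x + 3)**5) - (x + 3)**6/6)
-243/2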
Side Note
Thanks to DSM: if you specify the manual=True flag for integrate, it will attempt to mimic performing the integration by hand, which gives the answer you're expecting:
In [26]: from sympy import *
In [27]: x = Symbol('x')
In [28]: expr = (x+3)**5
In [29]: integrate(expr, manual=True)
Out[29]: (x + 3)**6/6
To start off, I have already solved this problem, so it's not a big deal; I'm just asking to satisfy my own curiosity. The question is how to solve a series of simultaneous equations given a set of constraints. The equations are:
tau = 62.4*d*0.0007
A = (b + 1.5*d)*d
P = b + 2*d*sqrt(1 + 1.5**2)
R = A/P
Q = (1.486/0.03)*A*(R**(2.0/3.0))*(0.0007**0.5)
and the conditions are:
tau <= 0.29, Q = 10000 +- say 3, and minimize b
As I mentioned I was already able to come up with a solution using a series of nested loops:
from numpy import linspace, sqrt

b = linspace(320, 330, 1000)
d = linspace(0.1, 6.6392, 1000)

ansQ = []
ansv = []
anstau = []
i_index = []
j_index = []

for i in range(len(b)):
    for j in range(len(d)):
        tau = 62.4*d[j]*0.0007
        A = (b[i] + 1.5*d[j])*d[j]
        P = b[i] + 2*d[j]*sqrt(1 + 1.5**2)
        R = A/P
        Q = (1.486/0.03)*A*(R**(2.0/3.0))*(0.0007**0.5)
        if Q >= 10000 and tau <= 0.29:
            ansQ.append(Q)
            ansv.append(Q/A)
            anstau.append(tau)
            i_index.append(i)
            j_index.append(j)
This takes a while, and there is something in the back of my head saying that there must be an easier/more elegant solution to this problem. Thanks (Linux Mint 13, Python 2.7.x, scipy 0.11.0)
You seem to have only two degrees of freedom here: you can rewrite everything in terms of b and d, or b and tau (pick your two favorites). Your constraint on tau directly implies a constraint on d, and you can use your constraint on Q to imply a constraint on b.
And it doesn't look to me (at least, I still haven't finished my coffee) like your code is doing anything other than evaluating some two-dimensional functions over a grid you've defined, NOT solving a system of equations. I normally understand "solving" to involve setting something equal to something else and writing one variable as a function of another.
It does appear you've only posted a snippet, though, so I'll assume you do something else with your data downstream.
OK, I see. I think this isn't really a minimization problem, it's a plotting problem. The first thing I'd do is work out the range your constraint on tau implies for d, and then use the constraint on Q to bound b. Then you can mesh those points with meshgrid (as you mentioned below) and run over all combinations.
Since you apply the constraints before you build the mesh (as opposed to after, as in your code), you will only be sampling the part of parameter space you're interested in. In your code you generate a bunch of junk you're not interested in and pick out the gems. If you apply your constraints first, you'll only be left with gems!
I'd define my functions like:
P = lambda b, d: b + 2*d*np.sqrt(1 + 1.5**2)
which works like
>>> import numpy as np
>>> P = lambda b, d: b + 2*d*np.sqrt(1 + 1.5**2)
>>> P(1,2)
8.2111025509279791
Then you can write another function to serve up b and d for you, so you can do something like:
def get_func_vals(b, d):
    pvals.append(P(b, d))
or, better yet, store b and d as tuples in a function that doesn't return but yields:
pvals = [P(b,d) for (b,d) in thing_that_yields_b_and_d_tuples]
I didn't test this last line of code, and I always screw up these parentheses, but I think it's right.
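Putting the pieces together, here is a sketch of the constraint-first approach with numpy; the grid resolution and the Q tolerance of +/- 3 come from the question, and the upper bound on d follows from tau <= 0.29:

import numpy as np

# tau = 62.4*0.0007*d <= 0.29  =>  d <= 0.29/(62.4*0.0007) ~= 6.6392
d_max = 0.29 / (62.4 * 0.0007)

b, d = np.meshgrid(np.linspace(320, 330, 1000),
                   np.linspace(0.1, d_max, 1000))

A = (b + 1.5*d) * d
P = b + 2*d*np.sqrt(1 + 1.5**2)
R = A / P
Q = (1.486/0.03) * A * R**(2.0/3.0) * 0.0007**0.5

# keep the (b, d) pairs with Q within 10000 +/- 3, then take the smallest b
mask = np.abs(Q - 10000) <= 3
if mask.any():
    k = np.argmin(b[mask])
    print(b[mask][k], d[mask][k], Q[mask][k])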