I am trying to solve for C in the following equation
I can do this with SymPy for an enumerated number of x's, e.g. x0, x1, ..., x4, but cannot seem to figure out how to do it for i = 0 to t. For example, for a limited number:
from sympy import summation, symbols, solve
x0, x1, x2, x3, x4, alpha, C = symbols('x0, x1, x2, x3, x4, alpha, C')
e1 = ((x0 + alpha * x1 + alpha**(2) * x2 + alpha**(3) * x3 + alpha**(4) * x4)
/ (1 + alpha + alpha**(2) + alpha**(3) + alpha**(4)))
e2 = (x3 + alpha * x4) / (1 + alpha)
rhs = (x0 + alpha * x1 + alpha**(2) * x2) / (1 + alpha + alpha**(2))
soln_C = solve(e1 - C*e2 - rhs, C)
Any insight would be much appreciated.
Thanks to @bryans for pointing me in the direction of Sum. Elaborating on his comment, here is one solution that seems to work. As I am fairly new to SymPy, if anyone has a more concise approach, please share.
from sympy import summation, symbols, solve, Function, Sum
alpha, C, t, i = symbols('alpha, C, t, i')
x = Function('x')
s1 = Sum(alpha**i * x(t-i), (i, 0, t)) / Sum(alpha**i, (i, 0, t))
s2 = Sum(alpha**i * x(t-3-i), (i, 0, t-3)) / Sum(alpha**i, (i, 0, t-3))
rhs = (x(0) + alpha * x(1) + alpha**(2) * x(2)) / (1 + alpha + alpha**(2))
soln_C = solve(s1 - C*s2 - rhs, C)
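As a quick sanity check (just a sketch, assuming solve returned at least one solution), you can substitute a concrete upper limit and let doit() expand the sums:
print(soln_C[0].subs(t, 4).doit())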
I'm not sure if this can be catalogued as more "concise", but it could be useful too when you know the upper limit of the summation. Let's suppose that we want to evaluate this expression:
We can express it, and solve it, in sympy as follows:
from sympy import init_session
init_session(use_latex=True)  # imports the SymPy namespace (symbols, Eq, solve, ...) into the session
n = 4
As = symbols('A_1:' + str(n+1))
x = symbols('x')
exp = 0
for i in range(n):
    exp += As[i]/(1+x)**(i+1)
Ec = Eq(exp,0)
sol = solve(Ec,x)
#print(sol)
#sol #Or, if you're working on jupyter...
I am trying to find a common tangent to two curves using Python, but I am not able to solve it.
The equations for the two curves are complicated and involve logarithms.
Is there a way in Python to compute the x coordinates of a tangent that is common to both curves in general? If I have two curves f(x) and g(x), I want to find the x-coordinates x1 and x2 on a common tangent, where x1 lies on f(x) and x2 on g(x). I am trying f'(x1) = g'(x2) and f'(x1) = (f(x1) - f(x2)) / (x1 - x2) to get x1 and x2, but I am not able to get values using nonlinsolve as the equations are too complicated.
I want to just find x-coordinates of the common tangent
Can anyone suggest a better way?
import numpy as np
import sympy
from sympy import *
from matplotlib import pyplot as plt
x = symbols('x')
a, b, c, d, e, f = -99322.50019502985, -86864.87072433547, -96876.05627516498, -89703.35055202093, -3390.863799999999, -20942.518
def func(x):
    y1_1 = a - a*x + b*x
    y1_2 = c - c*x + d*x
    c1 = (1 - x) ** (1 - x)
    c2 = (x ** x)
    y2 = 12471 * (sympy.log((c1*c2)))
    y3 = 2*f*x**3 - x**2*(e + 3*f) + x*(e + f)
    eqn1 = y1_1 + y2 + y3
    eqn2 = y1_2 + y2 + y3
    return eqn1, eqn2
val = np.linspace(0, 1)
f1 = sympy.lambdify(x, func(x)[0])(val)
f2 = sympy.lambdify(x, func(x)[1])(val)
plt.plot(val, f1)
plt.plot(val, f2)
plt.show()
I am trying this
x1, x2 = sympy.symbols('x1 x2')
fun1 = func(x1)[0]
fun2 = func(x2)[0]
diff1 = diff(fun1,x1)
diff2 = diff(fun2,x2)
eq1 = diff1 - diff2
eq2 = diff1 - ((fun1 - fun2) / (x1 - x2))
sol = nonlinsolve([eq1, eq2], [x1, x2])
The first thing that needs to be done is to reduce the formulas.
For example, the first formula is actually this:
formula = x*(1 - x)*(17551.6542 - 41885.036*x) + x*(1 - x)*(41885.036*x - 24333.3818) + 12457.6294706944*x + log((x/(1 - x))**(12000*x)*(1 - x)**12000) - 99322.5001950298
formula = (x-x^2)*(17551.6542 - 41885.036*x) + (x-x^2)*(41885.036*x - 24333.3818) + 12457.6294706944*x + log((x/(1 - x))**(12000*x)*(1 - x)**12000) - 99322.5001950298
# constants
a = 41885.036
b = 17551.6542
c = 24333.3818
d = 12457.6294706944
e = 99322.5001950298
f = 12000
formula = (x-x^2)*(b - a*x) + (x-x^2)*(a*x - c) + d*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
formula = (ax^3 -bx^2 + bx - ax^2) + (x-x^2)*(a*x - c) + d*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
formula = ax^3 -bx^2 + bx - ax^2 -ax^3 + ax^2 + cx^2 -cx + d*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
# collect x terms by power (note how the x^3 term drops out, so it's easier).
formula = (c-b)*x^2 + (b-c+d)*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
which is much cleaner and is a quadratic with a log term.
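Here is a quick sympy sketch of verifying that reduction (the constants are the ones listed above; the difference should come out as zero, up to floating-point noise from the decimal constants):
from sympy import symbols, log, expand

x = symbols('x')
a, b, c = 41885.036, 17551.6542, 24333.3818
d, e, f = 12457.6294706944, 99322.5001950298, 12000

original = (x*(1 - x)*(b - a*x) + x*(1 - x)*(a*x - c) + d*x
            + log((x/(1 - x))**(f*x)*(1 - x)**f) - e)
reduced = (c - b)*x**2 + (b - c + d)*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
print(expand(original - reduced))  # expect 0 (or a negligible float remainder)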
I expect that you can do some work on the log term too, but this is an exercise for the original poster.
Likewise, the second formula can be reduced in the same way, which is again an exercise for the original poster.
From this, both equations need to be differentiated with respect to x to find the tangent. Then set both formulas to be equal to each other (for a common tangent).
This would completely solve the question.
I actually wonder if this is a python question at all or actually a pure maths question.....
The important point to note is that, since the derivatives are monotonic, for any value of derivative of fun1, there is a solution for fun2. This can be easily seen if you plot both derivatives.
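For instance, here is a quick sketch of that check, reusing x, func, np and plt from the question's snippet (staying inside (0, 1) so the logs are defined):
d1 = sympy.lambdify(x, sympy.diff(func(x)[0], x))
d2 = sympy.lambdify(x, sympy.diff(func(x)[1], x))
vals = np.linspace(0.01, 0.99, 200)
plt.plot(vals, d1(vals), label="f1'")
plt.plot(vals, d2(vals), label="f2'")
plt.legend(); plt.grid(); plt.show()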
Thus, we want a function that, given an x1, returns an x2 that matches it. I'll use a numerical solution because the system is too cumbersome for an analytical one.
import scipy.optimize
def find_equal_value(f1, f2, x, x1):
    goal = f1.subs(x, x1)
    to_solve = sympy.lambdify(x, (f2 - goal)**2)  # Quadratic functions tend to be better behaved, and the result is the same
    sol = scipy.optimize.fmin(func=to_solve, x0=x1, ftol=1e-8, disp=False)  # x1 itself is a good starting guess
    return sol[0]
I used fmin as the solver above because it worked and I knew how to use it by heart. Maybe root_scalar can give better results.
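For reference, here is a sketch of what a root_scalar version might look like (untested against this problem, so treat it as an assumption; brentq needs a bracket with a sign change, and the endpoints below are just guesses):
def find_equal_value_bracketed(f1, f2, x, x1, bracket=(0.05, 0.95)):
    goal = float(f1.subs(x, x1))
    g = sympy.lambdify(x, f2 - goal)  # a zero of g is where f2 matches f1 evaluated at x1
    res = scipy.optimize.root_scalar(g, bracket=bracket, method='brentq')
    return res.root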
Using find_equal_value, let's get some pairs (x1, x2) where the derivatives are equal:
df1 = sympy.diff(func(x)[0])
df2 = sympy.diff(func(x)[1])
x1 = 0.25236537 # Close to the zero derivative
x2 = find_equal_value(df1, df2, x, x1)
print(f'Derivative of f1 in x1: {df1.subs(x, x1)}')
print(f'Derivative of f2 in x2: {df2.subs(x, x2)}')
print(f'Error: {df1.subs(x, x1) - df2.subs(x, x2)}')
This results in:
Derivative of f1 in x1: 0.0000768765858083498
Derivative of f2 in x2: 0.0000681969431752805
Error: 0.00000867964263306931
If you want an x2 for several x1s (beware that in some cases the solver hits a value where the logs are invalid; always check your result for validity):
x1s = np.linspace(0.2, 0.8, 50)
x2s = [find_equal_value(df1, df2, x, x1) for x1 in x1s]
plt.plot(x1s, x2s); plt.grid(); plt.show()
Can somebody please point me in the right direction...
I need to find the parameters a,b,c,d of two functions:
Y1 = ( (a * X1 + b) * p0 + (c * X2 + d) * p1 ) / (a * X1 + b + c * X2 + d)
Y2 = ( (a * X2 + b) * p2 + (c * X2 + d) * p3 ) / (a * X1 + b + c * X2 + d)
X1, X2 (independent variables) and Y1, Y2 (dependent variables) are observations, i.e. one-dimensional arrays with thousands of entries each.
p0, p1, p2, p3 are known constants (scalars).
I successfully solved the problem with the first function only, using a curve fit (see below), but how do I solve the problem for Y1 and Y2 together?
Thank you.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
X = [X1,X2]
def fitFunc(X, a,b,c,d):
    X1, X2 = X
    return ((a * X1 + b) * p0 + (c * X2 + d) * p1) / (a * X1 + b + c * X2 + d)
fitPar, fitCov = curve_fit(fitFunc, X, Y1)
print(fitPar)
One way would be to minimize both of your functions together using scipy.optimize.minimize. In the example below, the function residual is passed a, b, c, and d as initial guesses. Using these guesses, Y1 and Y2 are evaluated, then the mean squared error is computed from the data and the predicted values of the respective functions. The returned error is the mean of the two errors. The optimized set of parameters is stored in res as res.x.
import numpy as np
from scipy.optimize import minimize
#p0 = ... known
#p1 = ... known
#p2 = ... known
#p3 = ... known
def Y1(X, a,b,c,d):
    X1, X2 = X
    return ((a * X1 + b) * p0 + (c * X2 + d) * p1) / (a * X1 + b + c * X2 + d)

def Y2(X, a,b,c,d):
    X1, X2 = X
    return ((a * X1 + b) * p2 + (c * X2 + d) * p3) / (a * X1 + b + c * X2 + d)

X1 = np.array([X1])  # your X1 array
X2 = np.array([X2])  # your X2 array
X = np.array([X1, X2])
y1_data = np.array([y1_data])  # your y1 data
y2_data = np.array([y2_data])  # your y2 data

def residual(x):
    a = x[0]
    b = x[1]
    c = x[2]
    d = x[3]
    y1_pred = Y1(X,a,b,c,d)
    y2_pred = Y2(X,a,b,c,d)
    err1 = np.mean((y1_data - y1_pred)**2)
    err2 = np.mean((y2_data - y2_pred)**2)
    error = (err1 + err2) / 2
    return error
x0 = [1, 1, 1, 1] # Initial guess for a, b, c, and d respectively
res = minimize(residual, x0, method="Nelder-Mead")
print(res.x)
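Once the optimization finishes, you can sanity-check the fit, for example like this (a sketch, reusing the arrays and functions defined above):
a_fit, b_fit, c_fit, d_fit = res.x
print('RMSE Y1:', np.sqrt(np.mean((y1_data - Y1(X, a_fit, b_fit, c_fit, d_fit))**2)))
print('RMSE Y2:', np.sqrt(np.mean((y2_data - Y2(X, a_fit, b_fit, c_fit, d_fit))**2)))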
So I saw this code for RK4 on Stack Overflow and I found it very useful. However, I cannot figure out a way to plot each y value at each increment (h) of x.
def f(x,y):
    return 2*x**2-4*x+y

def RK4(x0,y0):
    while x0 < b:
        k1 = h*f(x0,y0)
        k2 = h*f(x0+0.5*h,y0+0.5*k1)
        k3 = h*f(x0+0.5*h,y0+0.5*k2)
        k4 = h*f(x0+h,y0+k3)
        y0+=(k1+2*k2+2*k3+k4)/6
        x0+=h
    return y0
b=3
h=0.001
print(RK4(1,0.7182818))
You can append each point to a list as a tuple, and then perform a line plot on the list of tuples. You can find it in the commented code below.
import matplotlib.pyplot as plt
def f(x, y):
    return 2 * x ** 2 - 4 * x + y

def RK4(x0, y0):
    pts = []  # empty list to collect the (x, y) points
    while x0 < b:
        k1 = h * f(x0, y0)
        k2 = h * f(x0 + 0.5 * h, y0 + 0.5 * k1)
        k3 = h * f(x0 + 0.5 * h, y0 + 0.5 * k2)
        k4 = h * f(x0 + h, y0 + k3)
        y0 += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x0 += h
        pts.append((x0, y0))  # appending the tuple
    plt.plot(*zip(*pts))  # plotting the list of tuples
    plt.show()
    return y0
b = 3
h = 0.001
print(RK4(1, 0.7182818))
You can see the plot as follows
From a design perspective, it would be preferable if the RK4 code and the plotting code were separated; the numerical solver should not be concerned with how its results are used afterwards.
The next decision is then about the construction of the time array: it could be passed to the RK4 method, or be constructed inside and returned; both have advantages. If speed is a concern, the arrays should be constructed explicitly in their final form (see example on math.SE); for expediency one can also construct them incrementally. Thus the code could be changed as follows:
def RK4(f,x0,y0,xb,dx):
    x, y = [x0],[y0]
    while x0 < xb:
        k1 = dx*f(x0,y0)
        k2 = dx*f(x0+0.5*dx,y0+0.5*k1)
        k3 = dx*f(x0+0.5*dx,y0+0.5*k2)
        k4 = dx*f(x0+dx,y0+k3)
        y0 += (k1+2*k2+2*k3+k4)/6
        x0 += dx
        x.append(x0); y.append(y0)  # for a vector y use y0.copy()
    return x,y
and then call as
x,y = RK4(f=f,x0=1.0,y0=0.7182818,xb=3.0,dx=1e-3)
plt.plot(x,y)
#title, axis labels
plt.grid(); plt.show()
So, I have this code below:
from sympy import Symbol, solve, nsolve
x1 = Symbol('x1')
x2 = Symbol('x2')
w1 = Symbol('w1')
w2 = Symbol('w2')
eq1 = w1 + w2
eq2 = (w1 * x1) + (w2 * x2)
eq3 = (w1 * x1**2) + (w2 * x2**2)
eq4 = (w1 * x1**3) + (w2 * x2**3)
print(nsolve((eq1, eq2, eq3, eq4), (x1, x2, w1, w2), (2, 0, 2/3, 0)))
For this question:
Which gives me this as a response:
x = findroot(f, x0, J=J, **kwargs)
File "C:\Users\Sabri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\mpmath\calculus\optimization.py", line 969, in findroot
for x, error in iterations:
File "C:\Users\Sabri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\mpmath\calculus\optimization.py", line 660, in __iter__
s = self.ctx.lu_solve(Jx, fxn)
File "C:\Users\Sabri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\mpmath\matrices\linalg.py", line 226, in lu_solve
A, p = ctx.LU_decomp(A)
File "C:\Users\Sabri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\mpmath\matrices\linalg.py", line 142, in LU_decomp
ctx.swap_row(A, j, p[j])
File "C:\Users\Sabri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\mpmath\matrices\matrices.py", line 876, in swap_row
A[i,k], A[j,k] = A[j,k], A[i,k]
File "C:\Users\Sabri\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\mpmath\matrices\matrices.py", line 490, in __getitem__
if key[0] >= self.__rows or key[1] >= self.__cols:
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
Following the documentation here:
https://docs.sympy.org/latest/modules/solvers/solvers.html
It's not very clear what is causing the issue, except one of the variables may be None. On Google, a lot of the issues with a similar error are explicitly shown, which is not the case here. Any suggestions?
Edit: I get this answer:
Using this code:
from scipy.optimize import fsolve
def func(p):
    x1, x2, w1, w2 = p
    return (w1 + w2, (w1 * x1) + (w2 * x2), (w1 * x1**2) + (w2 * x2**2), (w1 * x1**3) + (w2 * x2**3))
x1, x2, w1, w2 = fsolve(func, (2, 0, 2/3, 0))
print(x1, x2, w1, w2)
So the code should return a result, but I'm not sure why it doesn't work for Sympy. Thanks!
SymPy is able to solve sets of nonlinear equations, so if you prefer not to guess you can do:
>>> from sympy import symbols, Rational, nonlinsolve
>>> x1, x2, w1, w2 = symbols('x1 x2 w1 w2')
>>> eq1 = w1 + w2 - 2
>>> eq2 = (w1 * x1) + (w2 * x2) - 0
>>> eq3 = (w1 * x1**2) + (w2 * x2**2) - Rational(2, 3) # 2/3 gives float 0.66..
>>> eq4 = (w1 * x1**3) + (w2 * x2**3) - 0
>>> nonlinsolve((eq1, eq2, eq3, eq4), (x1, x2, w1, w2))
FiniteSet((-sqrt(3)/3, sqrt(3)/3, 1, 1), (sqrt(3)/3, -sqrt(3)/3, 1, 1))
So that gives two solutions.
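You can verify either of them by substituting back into the equations (a quick sketch):
>>> s = list(nonlinsolve((eq1, eq2, eq3, eq4), (x1, x2, w1, w2)))[0]
>>> [eq.subs(dict(zip((x1, x2, w1, w2), s))) for eq in (eq1, eq2, eq3, eq4)]
[0, 0, 0, 0]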
When using nsolve(), the right-hand sides are all zero, so the non-zero terms (2, 0, 2/3, 0) are moved to the left-hand side.
The third argument to nsolve() is the initial guess, which should be close to the solution.
An educated guess is (x1, x2, w1, w2) = (-1, 1, 1, 1): points at the ends of the interval [-1, 1] with equal weights.
I tried a few other guesses; some of them caused the same error.
Output:
w1 + w2 - 2
w1*x1 + w2*x2
w1*x1**2 + w2*x2**2 - 0.666666666666667
w1*x1**3 + w2*x2**3
Matrix([[-0.577350269189626], [0.577350269189626], [1.00000000000000], [1.00000000000000]])
The results agree with the n = 2 case in the table on Wikipedia.
Code:
from sympy import Symbol, solve, nsolve
x1 = Symbol('x1')
x2 = Symbol('x2')
w1 = Symbol('w1')
w2 = Symbol('w2')
eq1 = w1 + w2 - 2
eq2 = (w1 * x1) + (w2 * x2) - 0
eq3 = (w1 * x1**2) + (w2 * x2**2) - 2 / 3
eq4 = (w1 * x1**3) + (w2 * x2**3) - 0
print(eq1, eq2, eq3, eq4, sep='\n')
print(nsolve((eq1, eq2, eq3, eq4), (x1, x2, w1, w2), (-1, 1, 0.5, 0.5)))
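For comparison, the exact two-point Gauss–Legendre nodes are x = ±1/√3 with weights w = 1 (the Wikipedia values mentioned above); a quick numeric check of the node value:
from sympy import sqrt
print(float(sqrt(3)/3))  # ≈ 0.57735026918..., matching the x entries of the matrix above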
I'd like to use an implementation of RK4 I found online for something, but I'm having a bit of difficulty wrapping my head around the implementations I have found.
For example:
def rk4(f, x0, y0, x1, n):
    vx = [0] * (n + 1)
    vy = [0] * (n + 1)
    h = (x1 - x0) / float(n)
    vx[0] = x = x0
    vy[0] = y = y0
    for i in range(1, n + 1):
        k1 = h * f(x, y)
        k2 = h * f(x + 0.5 * h, y + 0.5 * k1)
        k3 = h * f(x + 0.5 * h, y + 0.5 * k2)
        k4 = h * f(x + h, y + k3)
        vx[i] = x = x0 + i * h
        vy[i] = y = y + (k1 + k2 + k2 + k3 + k3 + k4) / 6
    return vx, vy
Could someone please help me understand what exactly the parameters are? If possible, I'd like a more general explanation, but, if being more specific makes it easier to explain, I'm going to be using it specifically for an ideal spring system.
You are asking for the parameters here:
def rk4(f, x0, y0, x1, n):
    ...
    return vx, vy
f is the ODE function, declared as def f(x,y) for the differential equation y'(x)=f(x,y(x)),
(x0,y0) is the initial point and value,
x1 is the end of the integration interval [x0,x1]
n is the number of sub-intervals or integration steps
vx, vy are the computed sample points; vy[k] approximates y(vx[k]).
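For instance, a minimal usage sketch with the hypothetical scalar ODE y'(x) = -y on [0, 2], using the rk4 from the question:
vx, vy = rk4(lambda x, y: -y, 0.0, 1.0, 2.0, 100)
print(vy[-1])  # should be close to exp(-2) ≈ 0.1353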
You can not use this for the spring system, as that code only works for a scalar y. You would need to change it to work with numpy for vector operations.
def rk4(func, x0, y0, x1, n):
    y0 = np.array(y0)
    f = lambda x,y: np.array(func(x,y))
    vx = [0] * (n + 1)
    vy = np.zeros( (n + 1,)+y0.shape)
    h = (x1 - x0) / float(n)
    vx[0] = x = x0
    vy[0] = y = y0[:]
    for i in range(1, n + 1):
        k1 = h * f(x, y)
        k2 = h * f(x + 0.5 * h, y + 0.5 * k1)
        k3 = h * f(x + 0.5 * h, y + 0.5 * k2)
        k4 = h * f(x + h, y + k3)
        vx[i] = x = x0 + i * h
        vy[i] = y = y + (k1 + 2*(k2 + k3) + k4) / 6
    return vx, vy
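With that change, an ideal spring (an undamped harmonic oscillator, with hypothetical mass m and stiffness k of my choosing) can be integrated as a sketch like this, with the state y = [position, velocity]:
import numpy as np
import matplotlib.pyplot as plt

m, k = 1.0, 4.0  # hypothetical mass and spring constant

def spring(x, y):
    pos, vel = y
    return [vel, -(k / m) * pos]  # x plays the role of time here

vt, vy = rk4(spring, 0.0, [1.0, 0.0], 10.0, 1000)  # unit displacement, zero initial velocity
plt.plot(vt, vy[:, 0])  # position over time; should look like a cosine
plt.show()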