I made a system of equations in MATLAB for the optimization of a pro-link motorcycle, and I'm turning it into Python code because I need to load it into another piece of software. The MATLAB code is the following:
clc
clear
Lmono= 320;
Lbielletta=145;
IPS= 16.03;
Pivot= -20;
R1x=(-46.72)-((Pivot)*sind(IPS));
R1y=((180.26)-((Pivot)*cosd(IPS)));
R_1x=(-43.52)-((Pivot)*sind(IPS));
R_1y=((-151.37)-((Pivot)*cosd(IPS)));
R4=60;
R3=203.727;
Ip=100.1;
eta=36.79;
syms phi o
eqns = [(Lbielletta)^2 == ((R3*cosd(phi)-R4*sind(o)+R_1x)^2)+((-R3*sind(phi)-R4*cosd(o)-R_1y)^2)
(Lmono)^2 == ((((R3*cosd(phi))-(R4*sind(o))-((Ip*cosd(eta+o)))+(R1x))^2) + (((R3*sind(phi))+(R4*cosd(o))-((Ip*sind(eta+o)))+(R1y))^2))];
[phi ,o]=vpasolve(eqns,[phi o]);
I wrote this in Python:
import math as m
Lb = 145.0
Lm = 320.0
IPS= 16.03
Pivot= -20.0
R1x=(-46.72)-((Pivot)*m.sin(IPS))
R1y=((180.26)-((Pivot)*m.cos(IPS)))
R_1x=(-43.52)-((Pivot)*m.sin(IPS))
R_1y=((-151.37)-((Pivot)*m.cos(IPS)))
R4=60.0
R3=203.727
Ip=100.1
eta=36.79
import sympy as sym
from sympy import sin, cos
sym.init_printing()
phi,o = sym.symbols('phi,o')
f = sym.Eq(((R3*cos(phi)-R4*sin(o)+R_1x)**2)+((-R3*sin(phi)-R4*cos(o)-R_1y)**2),Lb**2)
g = sym.Eq(((((R3*cos(phi))-(R4*sin(o))-((Ip*cos(eta+o)))+(R1x))**2) + (((R3*sin(phi))+(R4*cos(o))-((Ip*sin(eta+o)))+(R1y))**2)),Lm**2)
print(sym.nonlinsolve([f,g],(phi,o)))
But when I run the code it loads for about 30 seconds (in MATLAB it takes 1-2 seconds) and then returns this:
runfile('C:/Users/Administrator/.spyder-py3/temp.py', wdir='C:/Users/Administrator/.spyder-py3')
EmptySet
Why EmptySet? Can someone help me?
Oscar's answer is "correct", but since you are new to Python there are a few things that you'd want to know.
First, in Matlab you are using sind and cosd, which take the angle in degrees. The trigonometric functions exposed by the math, numpy and sympy modules, on the other hand, take the angle in radians, so the degree values need to be converted.
NOTE: since I don't know the kind of problem you are solving, I have applied m.radians to every number. This is probably wrong (lengths, for example, should not be converted): you have to fix it!
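A quick way to see the difference (plain Python, independent of your model):
import math as m
# Matlab's sind(30) is 0.5; math.sin reads its argument as radians
print(m.sin(30))             # -0.98803..., because 30 is taken as radians
print(m.sin(m.radians(30)))  # 0.49999..., matching sind(30)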
Once we are done with that, we can use nsolve to numerically solve the system of equations by providing an initial guess.
import math as m
import sympy as sym
from sympy import sin, cos, lambdify, nsolve, Add
sym.init_printing()
phi,o = sym.symbols('phi,o')
Lb = m.radians(145.0)
Lm = m.radians(320.0)
IPS= m.radians(16.03)
Pivot= m.radians(-20.0)
R1x=m.radians(-46.72)-((Pivot)*m.sin(IPS))
R1y=m.radians(180.26)-((Pivot)*m.cos(IPS))
R_1x=m.radians(-43.52)-((Pivot)*m.sin(IPS))
R_1y=m.radians(-151.37)-((Pivot)*m.cos(IPS))
R4=m.radians(60.0)
R3=m.radians(203.727)
Ip=m.radians(100.1)
eta=m.radians(36.79)
f = sym.Eq(((R3*cos(phi)-R4*sin(o)+R_1x)**2)+((-R3*sin(phi)-R4*cos(o)-R_1y)**2),Lb**2)
g = sym.Eq(((((R3*cos(phi))-(R4*sin(o))-((Ip*cos(eta+o)))+(R1x))**2) + (((R3*sin(phi))+(R4*cos(o))-((Ip*sin(eta+o)))+(R1y))**2)),Lm**2)
print(nsolve([f, g], [phi, o], [0, 0]))
# out: Matrix([[0.560675440923978], [-0.0993239452750302]])
Since you are solving an optimization problem and the equations are non-linear, it is likely that there is more than one solution. We can create a contour plot of the two equations: the intersections between the contours represent the solutions:
# convert symbolic expression to numerical functions for evaluation
fn = lambdify([phi, o], f.rewrite(Add))
gn = lambdify([phi, o], g.rewrite(Add))
# plot the 0-level contour of fn and gn: the intersection between
# the curves are the solutions you are looking for
import numpy as np
import matplotlib.pyplot as plt
pp, oo = np.mgrid[0:2*np.pi:100j, 0:2*np.pi:100j]
fig, ax = plt.subplots()
ax.contour(pp, oo, fn(pp, oo), levels=[0], cmap="Greens_r")
ax.contour(pp, oo, gn(pp, oo), levels=[0], cmap="winter")
ax.set_xlabel("phi [rad]")
ax.set_ylabel("o [rad]")
plt.show()
From this picture you can see that there are two solutions. We can find the other solution by providing a better initial guess to nsolve:
print(nsolve([f, g], [phi, o], [0.5, 3]))
# out: Matrix([[0.451286281041857], [2.54087384334971]])
You can use SymPy's nsolve with an initial guess:
In [7]: sym.nsolve([f, g], [phi, o], [0, 0])
Out[7]:
⎡0.228868116194702⎤
⎢ ⎥
⎣0.345540167046199⎦
I'm trying to fit a curve of the equation:
y = ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
where A = (-np.log((k1/v)*Co))/k2
given to me by a supervisor, to a dataset that looks like a rough exponential that flattens to a horizontal line at its top. When I fit the equation, I receive only a straight line from the curve fit and a corresponding warning:
<ipython-input-24-7e57039f2862>:36: RuntimeWarning: overflow encountered in exp
return ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
The code I am using looks like:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
xData_forfit = [1.07683e+13, 1.16162e+13, 1.24611e+13, 1.31921e+13, 1.40400e+13, 2.65830e+13,
2.79396e+13, 2.86676e+13, 2.95155e+13, 3.03605e+13, 3.12055e+13, 3.20534e+13,
3.27814e+13, 3.36293e+13, 3.44772e+13, 3.53251e+13, 3.61730e+13, 3.77459e+13,
3.85909e+13, 3.94388e+13, 4.02838e+13, 4.11317e+13, 4.19767e+13, 4.27076e+13,
5.52477e+13, 5.64143e+13, 5.72622e+13, 5.81071e+13, 5.89550e+13, 5.98000e+13,
6.05280e+13, 6.13759e+13, 6.22209e+13, 6.30658e+13, 6.39137e+13, 6.46418e+13,
6.55101e+13, 6.63551e+13, 6.72030e+13, 6.80480e+13, 6.88929e+13, 6.97408e+13,
7.04688e+13, 7.13167e+13, 7.21617e+13, 8.50497e+13, 8.58947e+13, 8.67426e+13,
8.75876e+13, 8.83185e+13, 9.00114e+13, 9.08563e+13, 9.17013e+13]
yData_forfit = [1375.409524, 1378.095238, 1412.552381, 1382.904762, 1495.2, 1352.4,
1907.971429, 1953.52381, 1857.352381, 1873.990476, 1925.114286, 1957.085714,
2030.52381, 1989.8, 2042.733333, 2060.095238, 2134.361905, 2200.742857,
2342.72381, 2456.047619, 2604.542857, 2707.971429 ,2759.87619, 2880.52381,
3009.590476, 3118.771429, 3051.52381, 3019.771429, 3003.561905, 3083.0,
3082.885714, 2799.866667, 3012.419048, 3013.266667, 3106.714286, 3090.47619,
3216.638095, 3108.447619, 3199.304762, 3154.257143, 3112.419048, 3284.066667,
3185.942857, 3157.380952, 3158.47619, 3464.257143, 3434.67619, 3291.457143,
2851.371429, 3251.904762, 3056.152381, 3455.07619, 3386.942857]
def fnct_to_opt(t, k2, k1):
    # EXPERIMENTAL CONSTANTS
    v = 105
    Co = 1500
    A = (-np.log((k1/v)*Co))/k2
    return ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
initial_k2k1 = [100, 1*10**-3]
constants = curve_fit(fnct_to_opt, xData_forfit, yData_forfit, p0=initial_k2k1)
k2_fit = constants[0][0]
k1_fit = constants[0][1]
fit = []
for i in xData_forfit:
    fit.append(fnct_to_opt(i, k2_fit, k1_fit))
plt.plot(xData_forfit, yData_forfit, 'or', ms='2')
plt.plot(xData_forfit, fit)
This is giving me this plot as a result:
As far as I can tell, the code isn't producing useful output because the argument of the np.exp term gets too large, but I don't know how to go about diagnosing where this overflow comes from or how to fix the issue. Any help would be appreciated, thanks.
The overflow is happening exactly where the error message tells you: in the return expression of fnct_to_opt. Printing the offending values just before the error point shows the problem.
At the point of error, the values in A are in the range 1e+13 to 1e+14; t is insignificant by comparison, and k2 is a bit under -10000.0.
Thus, the values of the argument to np.exp are well out of the domain that the function can handle. Just add a line to your function and watch the results:
def fnct_to_opt(t, k2, k1):
    # EXPERIMENTAL CONSTANTS
    v = 105
    Co = 1500
    A = (-np.log((k1/v)*Co))/k2
    print("TRACE", "\nk2", k2, "\nt", t, "\nA", A, "\nother", k1, v, Co)
    return ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
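For reference, np.exp overflows float64 for arguments above roughly 709.78, so arguments on the order of your t values (e+13) are far out of range:
import numpy as np
print(np.exp(709.0))  # about 8.2e+307, close to the float64 limit but finite
print(np.exp(710.0))  # inf, plus 'RuntimeWarning: overflow encountered in exp'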
I think the problem may be in the function being optimized; there may be a mistake in it. For instance:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
xData_forfit = [1.07683e+13, 1.16162e+13, 1.24611e+13, 1.31921e+13, 1.40400e+13, 2.65830e+13,
2.79396e+13, 2.86676e+13, 2.95155e+13, 3.03605e+13, 3.12055e+13, 3.20534e+13,
3.27814e+13, 3.36293e+13, 3.44772e+13, 3.53251e+13, 3.61730e+13, 3.77459e+13,
3.85909e+13, 3.94388e+13, 4.02838e+13, 4.11317e+13, 4.19767e+13, 4.27076e+13,
5.52477e+13, 5.64143e+13, 5.72622e+13, 5.81071e+13, 5.89550e+13, 5.98000e+13,
6.05280e+13, 6.13759e+13, 6.22209e+13, 6.30658e+13, 6.39137e+13, 6.46418e+13,
6.55101e+13, 6.63551e+13, 6.72030e+13, 6.80480e+13, 6.88929e+13, 6.97408e+13,
7.04688e+13, 7.13167e+13, 7.21617e+13, 8.50497e+13, 8.58947e+13, 8.67426e+13,
8.75876e+13, 8.83185e+13, 9.00114e+13, 9.08563e+13, 9.17013e+13]
yData_forfit = [1375.409524, 1378.095238, 1412.552381, 1382.904762, 1495.2, 1352.4,
1907.971429, 1953.52381, 1857.352381, 1873.990476, 1925.114286, 1957.085714,
2030.52381, 1989.8, 2042.733333, 2060.095238, 2134.361905, 2200.742857,
2342.72381, 2456.047619, 2604.542857, 2707.971429 ,2759.87619, 2880.52381,
3009.590476, 3118.771429, 3051.52381, 3019.771429, 3003.561905, 3083.0,
3082.885714, 2799.866667, 3012.419048, 3013.266667, 3106.714286, 3090.47619,
3216.638095, 3108.447619, 3199.304762, 3154.257143, 3112.419048, 3284.066667,
3185.942857, 3157.380952, 3158.47619, 3464.257143, 3434.67619, 3291.457143,
2851.371429, 3251.904762, 3056.152381, 3455.07619, 3386.942857]
def fnct_to_opt(t, k2, k1):
    # EXPERIMENTAL CONSTANTS
    v = 105
    Co = 1500
    #A = (-np.log((k1/v)*Co))/k2
    #return ( (np.exp(-k2*(t+A))) - ((k1/v)*Co) )/ -k2
    #A = (np.log((k1/v)*Co))/k2
    return k2/np.log(t) + k1
initial_k2k1 = [10, 1]
constants = curve_fit(fnct_to_opt, xData_forfit, yData_forfit, p0=initial_k2k1)
k2_fit = constants[0][0]
k1_fit = constants[0][1]
#v_fit = constants[0][2]
#Co_fit = constants[0][3]
fit = []
for i in xData_forfit:
    fit.append(fnct_to_opt(i, k2_fit, k1_fit))
plt.plot(xData_forfit, yData_forfit, 'or', ms='2')
plt.plot(xData_forfit, fit)
So I put in a simpler function with some clearer intuition behind it. I do not think the original, with those signs and that exponential, is going to achieve the desired shape at all; it looks to me like the exponential is misplaced, so I changed it to a log and added a constant and a scale parameter. I would suggest checking the original function carefully: there is probably an issue with the derivation. I do not think it is a computational problem.
This is something closer to what could be expected.
I have an array of scalars with m rows and n columns. I have a Variable(m) and a Variable(n) that I would like to find solutions for.
The two variables represent values that need to be broadcast over the columns and rows respectively.
I was naively thinking of writing the variables as Variable((m, 1)) and Variable((1, n)), and adding them together as if they're ndarrays. However, that doesn't work, as broadcasting is not allowed.
import cvxpy as cp
import numpy as np
# Problem data.
m = 3
n = 4
np.random.seed(1)
data = np.random.randn(m, n)
# Construct the problem.
x = cp.Variable((m, 1))
y = cp.Variable((1, n))
objective = cp.Minimize(cp.sum(cp.abs(x + y + data)))
# or:
#objective = cp.Minimize(cp.sum_squares(x + y + data))
prob = cp.Problem(objective)
result = prob.solve()
print(x.value)
print(y.value)
This fails on the x + y expression: ValueError: Cannot broadcast dimensions (3, 1) (1, 4).
Now I'm wondering two things:
Is my problem indeed solvable using convex optimization?
If yes, how can I express it in a way that cvxpy understands?
I'm very new to the concept of convex optimization, as well as cvxpy, and I hope I described my problem well enough.
I offered to show you how to represent this as a linear program, so here goes. I'm using Pyomo, since I'm more familiar with that, but you could do something similar in PuLP.
To run this, you will need to first install Pyomo and a linear program solver like glpk. glpk should work for reasonable-sized problems, but if you are finding it's taking too long to solve, you could try a (much faster) commercial solver like CPLEX or Gurobi.
You can install Pyomo via pip install pyomo or conda install -c conda-forge pyomo. You can install glpk from https://www.gnu.org/software/glpk/ or via conda install glpk. (I think PuLP comes with a version of glpk built-in, so that might save you a step.)
Here's the script. Note that this calculates absolute error as a linear expression by defining one variable for the positive component of the error and another for the negative part. Then it seeks to minimize the sum of both. In this case, the solver will always set one to zero since that's an easy way to reduce the error, and then the other will be equal to the absolute error.
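In other words, for each data point the constraint data[r, c] + (ErrUp[r, c] - ErrDown[r, c]) == X[r] + Y[c] is enforced with both error variables nonnegative, and the objective minimizes the sum ErrUp + ErrDown, which at the optimum equals |X[r] + Y[c] - data[r, c]|.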
import random
import pyomo.environ as po
random.seed(1)
# ~50% sparse data set, big enough to populate every row and column
m = 10 # number of rows
n = 10 # number of cols
data = {
    (r, c): random.random()
    for r in range(m)
    for c in range(n)
    if random.random() >= 0.5
}
# define a linear program to find vectors
# x in R^m, y in R^n, such that x[r] + y[c] is close to data[r, c]
# create an optimization model object
model = po.ConcreteModel()
# create indexes for the rows and columns
model.ROWS = po.Set(initialize=range(m))
model.COLS = po.Set(initialize=range(n))
# create indexes for the dataset
model.DATAPOINTS = po.Set(dimen=2, initialize=data.keys())
# data values
model.data = po.Param(model.DATAPOINTS, initialize=data)
# create the x and y vectors
model.X = po.Var(model.ROWS, within=po.NonNegativeReals)
model.Y = po.Var(model.COLS, within=po.NonNegativeReals)
# create dummy variables to represent errors
model.ErrUp = po.Var(model.DATAPOINTS, within=po.NonNegativeReals)
model.ErrDown = po.Var(model.DATAPOINTS, within=po.NonNegativeReals)
# Force the error variables to match the error
def Calculate_Error_rule(model, r, c):
    pred = model.X[r] + model.Y[c]
    err = model.ErrUp[r, c] - model.ErrDown[r, c]
    return (model.data[r, c] + err == pred)
model.Calculate_Error = po.Constraint(
    model.DATAPOINTS, rule=Calculate_Error_rule
)
# Minimize the total error
def ClosestMatch_rule(model):
    return sum(
        model.ErrUp[r, c] + model.ErrDown[r, c]
        for (r, c) in model.DATAPOINTS
    )
model.ClosestMatch = po.Objective(
    rule=ClosestMatch_rule, sense=po.minimize
)
# Solve the model
# get a solver object
opt = po.SolverFactory("glpk")
# solve the model
# turn off "tee" if you want less verbose output
results = opt.solve(model, tee=True)
# show solution status
print(results)
# show verbose description of the model
model.pprint()
# show X and Y values in the solution
for r in model.ROWS:
    print('X[{}]: {}'.format(r, po.value(model.X[r])))
for c in model.COLS:
    print('Y[{}]: {}'.format(c, po.value(model.Y[c])))
Just to complete the story, here's a solution that's closer to your original example. It uses cvxpy, but with the sparse data approach from my solution.
I don't know the "official" way to do elementwise calculations with cvxpy, but it seems to work OK to just use the standard Python sum function with a lot of individual cp.abs(...) calculations.
This gives a solution that is very slightly worse than the linear program, but you may be able to fix that by adjusting the solution tolerance.
import cvxpy as cp
import random
random.seed(1)
# Problem data.
# ~50% sparse data set
m = 10 # number of rows
n = 10 # number of cols
data = {
    (i, j): random.random()
    for i in range(m)
    for j in range(n)
    if random.random() >= 0.5
}
# Construct the problem.
x = cp.Variable(m)
y = cp.Variable(n)
objective = cp.Minimize(
    sum(
        cp.abs(x[i] + y[j] + data[i, j])
        for (i, j) in data.keys()
    )
)
prob = cp.Problem(objective)
result = prob.solve()
print(x.value)
print(y.value)
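If the small gap versus the linear program matters, you can try tightening the solver tolerance. A sketch, continuing from the script above and assuming the ECOS solver is installed (the option names abstol/reltol are ECOS-specific and are passed through by cvxpy):
# tighter tolerances, passed through by cvxpy to the ECOS solver
result = prob.solve(solver=cp.ECOS, abstol=1e-9, reltol=1e-9)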
I did not fully get the idea, but here is some hacky stuff based on this assumption:
you want some cvxpy-equivalent to numpy's broadcasting-rules behaviour on arrays (m, 1) + (1, n)
So numpy-wise:
m = 3
n = 4
np.random.seed(1)
a = np.random.randn(m, 1)
b = np.random.randn(1, n)
a
array([[ 1.62434536],
[-0.61175641],
[-0.52817175]])
b
array([[-1.07296862, 0.86540763, -2.3015387 , 1.74481176]])
a + b
array([[ 0.55137674, 2.48975299, -0.67719333, 3.36915713],
[-1.68472504, 0.25365122, -2.91329511, 1.13305535],
[-1.60114037, 0.33723588, -2.82971045, 1.21664001]])
Let's mimic this with np.kron, which has a cvxpy-equivalent:
aLifted = np.kron(np.ones((1,n)), a)
bLifted = np.kron(np.ones((m,1)), b)
aLifted
array([[ 1.62434536, 1.62434536, 1.62434536, 1.62434536],
[-0.61175641, -0.61175641, -0.61175641, -0.61175641],
[-0.52817175, -0.52817175, -0.52817175, -0.52817175]])
bLifted
array([[-1.07296862, 0.86540763, -2.3015387 , 1.74481176],
[-1.07296862, 0.86540763, -2.3015387 , 1.74481176],
[-1.07296862, 0.86540763, -2.3015387 , 1.74481176]])
aLifted + bLifted
array([[ 0.55137674, 2.48975299, -0.67719333, 3.36915713],
[-1.68472504, 0.25365122, -2.91329511, 1.13305535],
[-1.60114037, 0.33723588, -2.82971045, 1.21664001]])
Let's check cvxpy semi-blindly (we only check dimensions; too lazy to set up a problem and fix the variables to check the output :-D):
import cvxpy as cp
x = cp.Variable((m, 1))
y = cp.Variable((1, n))
cp.kron(np.ones((1,n)), x) + cp.kron(np.ones((m, 1)), y)
# Expression(AFFINE, UNKNOWN, (3, 4))
# looks good!
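For completeness, a small sketch that actually verifies the values: it pins the variables with equality constraints and compares the expression against numpy broadcasting (the trivial feasibility problem is my own scaffolding, not part of the original question):
import cvxpy as cp
import numpy as np
m, n = 3, 4
np.random.seed(1)
a = np.random.randn(m, 1)
b = np.random.randn(1, n)
x = cp.Variable((m, 1))
y = cp.Variable((1, n))
expr = cp.kron(np.ones((1, n)), x) + cp.kron(np.ones((m, 1)), y)
# pin x and y to the numpy arrays, then compare against a + b
prob = cp.Problem(cp.Minimize(0), [x == a, y == b])
prob.solve()
print(np.allclose(expr.value, a + b))  # expected: True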
Now some caveats:
I don't know how efficiently cvxpy can reason about this matrix form internally.
It is unclear whether this is more efficient than a simple list-comprehension-based form using cp.vstack and co (it probably is); see the sketch below.
This operation itself kills all sparsity (if both vectors are dense, your matrix is dense).
cvxpy, and more or less all convex-optimization solvers, are based on some sparsity assumption, so scaling this problem up to machine-learning dimensions will not make you happy.
There is probably a much more concise mathematical theory for your problem than (sparsity-assuming, pretty general) convex optimization (the DCP rules implemented in cvxpy are a subset).
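For reference, a hypothetical vstack-based form of the same objective (dense data; the variable shapes mirror the question's setup):
import cvxpy as cp
import numpy as np
m, n = 3, 4
np.random.seed(1)
data = np.random.randn(m, n)
x = cp.Variable((m, 1))
y = cp.Variable((1, n))
# one scalar expression per cell instead of lifting with kron
terms = cp.vstack([x[i, 0] + y[0, j] + data[i, j]
                   for i in range(m)
                   for j in range(n)])
objective = cp.Minimize(cp.sum(cp.abs(terms)))
prob = cp.Problem(objective)
prob.solve()
print(x.value.ravel())
print(y.value.ravel())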
I'm using scipy.integrate.odeint to solve the equations of motion of a given system, with a script from which I selected the part most relevant to this specific problem:
# Equations of Motion function to integrate
def solveEquationsofMotion(y0, t, nRigidBodies, nCoordinates, nConstraintsByType, dataConst, Phi, dPhidq, niu, gamma, massMatrix, gVector, alpha, beta, sda_Parameters):
    # ... some calculations ...
    matA = numpy.array
    # ...
    dydt = np.hstack((qp, qpp))
    return dydt
#Integrator results
solution = odeint(solveEquationsofMotion, y0, time_span,args=(nRigidBodies, nCoordinates, nConstraintsByType, dataConst, Phi, dPhidq, niu, gamma, massMatrix, gVector, alpha, beta), full_output=0)
and it works fine.
However, I now need to multiply part of the integration result (the solution variable) by the matA variable at each timestep, and use the product as the initial conditions for the next timestep.
I've looked in the scipy.integrate.odeint documentation but I haven't seen any relevant information.
Any help would be very much appreciated.
Kind Regards
Ivo
If you have to change the solution at every step, it is more logical to use the step-by-step integrator ode. It is supposed to be used in a loop anyway, so one may as well change the conditions meanwhile. Here is an example of solving y' = -sqrt(t)*y (vector valued) where y is multiplied by matA after every step.
The time steps are determined by the array t. The main step is y[k, :] = r.integrate(t[k]), which gets the next value of the solution; then the initial condition is changed by r.set_initial_value(matA.dot(y[k, :]), t[k]).
import numpy as np
from scipy.integrate import ode
def f(t, y):
    return -np.sqrt(t)*y
matA = np.array([[0, 1], [-1, 0]])
t = np.linspace(0, 10, 20)
y_0 = [2, 3]
y = np.zeros((len(t), len(y_0)))
y[0, :] = y_0
r = ode(f)
r.set_initial_value(y[0], t[0])
for k in range(1, len(t)):
    y[k, :] = r.integrate(t[k])
    r.set_initial_value(matA.dot(y[k, :]), t[k])
The values of y thus obtained are neither monotone nor positive, as the actual solution of the ODE would be; this shows that the multiplication by matA had an effect.
[[ 2.00000000e+00 3.00000000e+00]
[ 1.55052494e+00 2.32578740e+00]
[ 1.46027833e+00 -9.73518889e-01]
[-5.32831945e-01 -7.99247918e-01]
[-3.91483887e-01 2.60989258e-01]
[ 1.16154133e-01 1.74231200e-01]
[ 7.11807536e-02 -4.74538357e-02]
[-1.79307961e-02 -2.68961942e-02]
[-9.45453427e-03 6.30302285e-03]
[ 2.07088441e-03 3.10632661e-03]
[ 9.57623940e-04 -6.38415960e-04]
[-1.85274552e-04 -2.77911827e-04]
[-7.61389508e-05 5.07593005e-05]
[ 1.31604315e-05 1.97406472e-05]
[ 4.85413044e-06 -3.23608696e-06]
[-7.56142819e-07 -1.13421423e-06]
[-2.52269779e-07 1.68179853e-07]
[ 3.56625306e-08 5.34937959e-08]
[ 1.08295735e-08 -7.21971567e-09]
[-1.39690370e-09 -2.09535555e-09]]
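For what it's worth, the same step-and-reset pattern can be written with the newer scipy.integrate.solve_ivp interface. A minimal sketch (the numbers will differ slightly from the table above because the default integrators differ):
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return -np.sqrt(t)*y

matA = np.array([[0, 1], [-1, 0]])
t = np.linspace(0, 10, 20)
y = np.zeros((len(t), 2))
y[0, :] = [2, 3]
y0 = y[0, :]
for k in range(1, len(t)):
    # integrate over one interval, then transform the endpoint by matA
    # before using it as the next initial condition
    sol = solve_ivp(f, (t[k - 1], t[k]), y0)
    y[k, :] = sol.y[:, -1]
    y0 = matA.dot(y[k, :])
print(y)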
I have a system of a linear equation and a quadratic equation that I can set up with numpy and scipy to get a graphical solution. Consider the example code:
#!/usr/bin/env python
# Python 2.7.1+
import numpy as np #
import matplotlib.pyplot as plt #
# d is a constant;
d=3
# h is variable; depends on x, which is also variable
# linear function:
# condition for h: d-2x=8h; returns h
def hcond(x):
    return (d-2*x)/8.0
# quadratic function:
# condition for h: h^2+x^2=d*x ; returns h
def hquad(x):
    return np.sqrt(d*x-x**2)
# x indices data
xi = np.arange(0,3,0.01)
# function values in respect to x indices data
hc = hcond(xi)
hq = hquad(xi)
fig = plt.figure()
sp = fig.add_subplot(111)
myplot = sp.plot(xi,hc)
myplot2 = sp.plot(xi,hq)
plt.show()
That code results in this plot:
It's clear that the two functions intersect, thus there is a solution.
How could I automatically solve for the intersection point, while keeping most of the function definitions intact?
It turns out one can use scipy.optimize.fsolve to solve this; one just needs to be careful that the functions in the OP are defined in the y=f(x) format, while fsolve needs them in the f(x)-y=0 format. Here is the fixed code:
#!/usr/bin/env python
# Python 2.7.1+
import numpy as np #
import matplotlib.pyplot as plt #
import scipy
import scipy.optimize
# d is a constant;
d=3
# h is variable; depends on x, which is also variable
# linear function:
# condition for h: d-2x=8h; returns h
def hcond(x):
    return (d-2*x)/8.0
# quadratic function:
# condition for h: h^2+x^2=d*x ; returns h
def hquad(x):
    return np.sqrt(d*x-x**2)
# for optimize.fsolve;
# note: for fsolve, the functions must be equal to 0;
# we defined h=(d-2x)/8 and h=sqrt(d*x-x^2);
# now we just rewrite them in the form (d-2x)/8-h=0 and sqrt(d*x-x^2)-h=0;
# thus, below x[0] is (guess for) x, and x[1] is (guess for) h!
def twofuncs(x):
    y = [ hcond(x[0])-x[1], hquad(x[0])-x[1] ]
    return y
# x indices data
xi = np.arange(0,3,0.01)
# function values in respect to x indices data
hc = hcond(xi)
hq = hquad(xi)
fig = plt.figure()
sp = fig.add_subplot(111)
myplot = sp.plot(xi,hc)
myplot2 = sp.plot(xi,hq)
# start from x=0 as guess for both functions
xsolv = scipy.optimize.fsolve(twofuncs, [0, 0])
print("xsolv: {0}\n".format(xsolv))
# plot solution with red marker 'o'
myplot3 = sp.plot(xsolv[0],xsolv[1],'ro')
plt.show()
... which results in:
xsolv: [ 0.04478625 0.36380344]
... or, on the plot image:
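As an aside: since both curves are plain functions of x, the intersection can also be found with a one-dimensional root finder. A sketch using scipy.optimize.brentq, continuing from the script above (the bracket [0, 1] is read off the plot):
from scipy.optimize import brentq

# the curves cross where hcond(x) - hquad(x) changes sign
x_root = brentq(lambda x: hcond(x) - hquad(x), 0.0, 1.0)
print(x_root, hcond(x_root))  # ~0.04478, ~0.36380, matching fsolve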
Refs:
Roots finding, Numerical integrations and differential equations - Scipy: Scientific Programming in Python
Is there a python module to solve linear equations? - Stack Overflow