I am trying to compute the final composition and temperature of a mixture of syn-gas and air. They enter at 300 K and 600 K, respectively. The syn-gas is a mixture of CO and H2, the proportions of which I am varying from 3:1 to 1:3. Later on, this ratio is fixed and additional nitrogen gas is introduced. Lastly, heat loss is accounted for and its effect on temperature/composition is calculated. Focusing on the first part, I am having a hard time solving the system of non-linear equations. The general equation for the chemical reaction is as follows:
a*CO + b*H2 + c*(O2 + (79/21)*N2) + d*N2 = e*CO + f*H2 + g*O2 + h*N2 + j*CO2 + k*H2O + l*NO
From conservation of species:
Carbon: a = e + j
Oxygen: a + 2*c = e + 2*g + 2*j + k + l
Hydrogen: 2*b = 2*f + 2*k
Nitrogen: 2*(79/21)*c + 2*d = 2*h + l
Since three product species are formed (CO2, H2O, and NO), there are three partial-pressure equilibrium constants, K_p. Each K_p is a function of temperature and, from empirical data, is treated as a known constant.
K_p_NO  = (X_NO * P)  / (sqrt(X_N2 * P) * sqrt(X_O2 * P))
K_p_H2O = (X_H2O * P) / (sqrt(X_O2 * P) * X_H2 * P)
K_p_CO2 = (X_CO2 * P) / (sqrt(X_O2 * P) * X_CO * P)
Here X is the mole fraction, e.g. X_H2O = k/(e + f + g + h + j + k + l).
The variables e,f,g,h,j,k,l are 7 unknowns and there are 7 equations. Variables a, b, c, and d are varied manually and are treated as known values. Using scipy, I implemented fsolve() as shown below:
from scipy.optimize import fsolve  # required library
import sympy as sp
import scipy

# known values hard coded for testing
a = 0.25
b = 0.75
c = a + b
d = 0
kp_NO = 0.00051621
kp_H2O = 0.0000000127
kp_CO2 = 0.00000001733
p = 5     # pressure
dec = 10  # decimal point precision

# Solving the system of equations
def equations(vars):
    (e, f, g, h, j, k, l) = vars
    f1 = e + j - a
    f2 = e + 2*g + 2*j + k + l - a - 2*c
    f3 = f + k - b
    f4 = 2*h + l - 2*d - (2*79/21)*c
    f5 = kp_NO - (l/sp.sqrt(c*(79/21)*c))
    f6 = kp_H2O - (k/(b*sp.sqrt((c*p)/(e + f + g + h + j + k + l))))
    f7 = kp_CO2 - (j/(a*sp.sqrt((c*p)/(e + f + g + h + j + k + l))))
    return [f1, f2, f3, f4, f5, f6, f7]

e, f, g, h, j, k, l = scipy.optimize.fsolve(
    equations, (0.00004, 0.00004, 0.49, 3.76, 0.25, 0.75, 0.01))
# CO, H2, O2, N2, CO2, H2O, NO
print(e, f, g, h, j, k, l)
The results are
0.2499999959640893 0.7499999911270915 0.999499382628763 3.761404150987935 4.0359107126181326e-09 8.872908576472292e-09 0.001001221833654118
Notice that e = 0.2499999..., which is essentially the input value a = 0.25. It seems that the chemical reaction is progressing little, if at all. Why am I getting my inputs back as results? I know something is wrong because, in some cases, the chemical coefficients come out negative.
Things I've tried:
Triple-checked my math/definitions. Tried nsolve() and nonlinsolve() from sympy, plus other miscellaneous solvers.
I do not see anything wrong in your code, albeit I would have solved it differently.
This question should be moved to Chemistry Stack Exchange: the Kp values you are using are wrong. The value for water formation at 500 K is around 7.65E22 (when pressure is expressed in bar), and I am pretty sure that the constant for CO to CO2 is also much higher.
I would have written this as a comment but I do not have enough reputation.
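For a rough sanity check on that magnitude (a minimal sketch, not from either post: the delta_G value below is an assumed tabulated figure for H2 + 1/2 O2 -> H2O(g) at 500 K, roughly -219 kJ/mol), Kp can be estimated from Kp = exp(-dG/(R*T)):

import math

# Rough magnitude check for Kp of H2 + 1/2 O2 -> H2O(g) at 500 K.
# delta_G is an assumed tabulated value (~ -219 kJ/mol); it is not from the post.
R_gas = 8.314        # J/(mol*K), universal gas constant
T = 500.0            # K
delta_G = -219.0e3   # J/mol, standard Gibbs energy of reaction (assumed)

Kp = math.exp(-delta_G / (R_gas * T))
print(Kp)            # on the order of 1e22, consistent with the ~7.65E22 quoted above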
I have created the following function, which takes two line points and outputs the three points of an arrowhead triangle. What I need to add is a length according to which the arrowhead will be positioned away from point B along the line AB. To explain it better, I would like it like this: A----->--B, where the two dashes are equal to the length.
import math

def create_arrowhead(A, B):
    """
    Calculate the arrowhead's vertex positions according to the edge direction.

    Parameters
    ----------
    A : array
        x, y starting point of edge
    B : array
        x, y ending point of edge

    Returns
    -------
    B, v1, v2 : tuple
        The point of the head, and the v1 and v2 xy points of the two base
        vertices of the arrowhead.
    """
    w = 0.005                         # half width of the triangle base
    h = w * 0.8660254037844386467637  # sqrt(3)/2
    mag = math.sqrt((B[0] - A[0])**2.0 + (B[1] - A[1])**2.0)
    u0 = (B[0] - A[0]) / mag
    u1 = (B[1] - A[1]) / mag
    U = [u0, u1]
    V = [-U[1], U[0]]
    v1 = [B[0] - h * U[0] + w * V[0], B[1] - h * U[1] + w * V[1]]
    v2 = [B[0] - h * U[0] - w * V[0], B[1] - h * U[1] - w * V[1]]
    return (B, v1, v2)
I finally found the solution: multiply a distance variable by the unit vector's components and subtract the result from B's coordinates. This creates a point on the line AB at distance d from B.
def create_arrowhead(A, B, d):
    """
    Use trigonometry to calculate the arrowhead's vertex positions according
    to the line direction.

    Parameters
    ----------
    A : array
        x, y starting point of line segment
    B : array
        x, y ending point of line segment
    d : float
        Distance of the arrowhead tip from point B, measured along AB.

    Returns
    -------
    C, v1, v2 : tuple
        The point of the head (at distance d from point B), and the v1 and v2
        xy points of the two base vertices of the arrowhead.
    """
    w = 0.003              # half of the triangle base width
    h = w / 0.26794919243  # w / tan(15 degrees)
    AB = [B[0] - A[0], B[1] - A[1]]
    mag = math.sqrt(AB[0]**2.0 + AB[1]**2.0)
    u0 = AB[0] / mag
    u1 = AB[1] / mag
    U = [u0, u1]           # unit vector of AB
    V = [-U[1], U[0]]      # unit vector perpendicular to AB
    C = [B[0] - d * u0, B[1] - d * u1]  # tip of the arrowhead, d before B
    v1 = [C[0] - h * U[0] + w * V[0], C[1] - h * U[1] + w * V[1]]
    v2 = [C[0] - h * U[0] - w * V[0], C[1] - h * U[1] - w * V[1]]
    return (C, v1, v2)
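A minimal usage sketch (the endpoints and the 0.1 offset below are made-up values, just to show the call):

A = [0.0, 0.0]
B = [1.0, 1.0]

# Arrowhead tip placed 0.1 units before B, along the line AB.
C, v1, v2 = create_arrowhead(A, B, 0.1)
print(C, v1, v2)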
(Although there are a number of questions regarding how to best fit a plane to some 3D data on SO, I couldn't find an answer for this issue.)
Given N (x, y, z) points, I need the best fit plane
a*x + b*y + c*z + d = 0
defined through the a, b, c, d coefficients that minimize the mean of the orthogonal distances from the points to the plane. The point-plane orthogonal distance (for a given (x0, y0, z0) point) is defined as:
dist = |a*x0 + b*y0 + c*z0 + d| / sqrt(a^2 + b^2 + c^2)
I set up two methods (code below):
Singular-Value Decomposition (source)
Basin-Hopping minimization of the mean orthogonal distances
As I understand it, the SVD method should produce the exact best fit plane by minimizing the orthogonal distances analytically. What I find instead is that the BH method gives better results than the supposedly exact SVD method, even for a low number of BH runs.
By "better" I mean that the final mean orthogonal distance value is smaller with the BH method, than with the SVD method.
What am I missing here?
import numpy as np
import scipy.optimize as optimize


def perp_error(params, xyz):
    """
    Mean of the absolute values of the perpendicular distances of the
    'xyz' points to the plane defined by the coefficients 'a,b,c,d'
    in 'params'.
    """
    a, b, c, d = params
    x, y, z = xyz
    length = np.sqrt(a**2 + b**2 + c**2)
    return (np.abs(a * x + b * y + c * z + d) / length).mean()
def minPerpDist(x, y, z, N_min):
    """
    Basin-Hopping method: minimize the mean absolute value of the
    orthogonal distances.
    """
    def unit_length(params):
        """
        Constrain the vector perpendicular to the plane to be of unit length.
        """
        a, b, c, d = params
        return a**2 + b**2 + c**2 - 1

    # Random initial guess for the a, b, c, d plane coefficients.
    initial_guess = np.random.uniform(-10., 10., 4)

    # Constrain the vector perpendicular to the plane to be of unit length.
    cons = ({'type': 'eq', 'fun': unit_length})
    min_kwargs = {"constraints": cons, "args": [x, y, z]}

    # Use Basin-Hopping to obtain the best fit coefficients.
    sol = optimize.basinhopping(
        perp_error, initial_guess, minimizer_kwargs=min_kwargs, niter=N_min)
    abcd = list(sol.x)

    return abcd
def SVD(X):
    """
    Singular value decomposition method.
    Source: https://gist.github.com/lambdalisue/7201028
    """
    # Find the average of the points (centroid) along the columns.
    C = np.average(X, axis=0)

    # Create the CX (centroid-to-point) matrix.
    CX = X - C

    # Singular value decomposition.
    U, S, V = np.linalg.svd(CX)

    # The last row of V is the right-singular vector associated with the
    # smallest singular value, i.e. the normal of the best fit plane.
    N = V[-1]

    # Extract the a, b, c, d coefficients.
    x0, y0, z0 = C
    a, b, c = N
    d = -(a * x0 + b * y0 + c * z0)

    return a, b, c, d
# Generate a random plane.
seed = np.random.randint(100000)
print("Seed: {}".format(seed))
np.random.seed(seed)
a, b, c, d = np.random.uniform(-10., 10., 4)
print("Orig abc(d=1): {:.3f} {:.3f} {:.3f}\n".format(a / d, b / d, c / d))
# Generate random (x, y, z) points.
N = 200
x, y = np.random.uniform(-5., 5., (2, N))
z = -(a * x + b * y + d) / c
# Add scatter in z.
z = z + np.random.uniform(-.2, .2, N)
# Solve using SVD method.
a, b, c, d = SVD(np.array([x, y, z]).T)
print("SVD abc(d=1): {:.3f} {:.3f} {:.3f}".format(a / d, b / d, c / d))
# Orthogonal mean distance
print("Perp err: {:.5f}\n".format(perp_error((a, b, c, d), (x, y, z))))
# Solve using Basin-Hopping.
abcd = minPerpDist(x, y, z, 500)
a, b, c, d = abcd
print("BH abc(d=1): {:.3f} {:.3f} {:.3f}".format(a / d, b / d, c / d))
print("Perp err: {:.5f}".format(perp_error(abcd, (x, y, z))))
I believe I found the reason for the discrepancy.
When I minimize the perpendicular distance of the points to a plane using Basin-Hopping, I am using the absolute value of the point-plane distance:
d_abs = |a*x0 + b*y0 + c*z0 + d| / sqrt(a^2 + b^2 + c^2)
The SVD method, on the other hand, apparently minimizes the squared point-plane distance:
d_sqr = (a*x0 + b*y0 + c*z0 + d)^2 / (a^2 + b^2 + c^2)
If, in the code shared in the question, I use the squared distance in the perp_error() function instead of the absolute valued distance, both methods give the exact same answer.
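For reference, a minimal sketch of that modification (the name perp_error_sqr is mine; it mirrors perp_error() from the question, with the absolute value replaced by a square):

def perp_error_sqr(params, xyz):
    """
    Mean of the squared perpendicular distances of the 'xyz' points to
    the plane defined by the coefficients 'a, b, c, d' in 'params'.
    """
    a, b, c, d = params
    x, y, z = xyz
    length_sqr = a**2 + b**2 + c**2
    return ((a * x + b * y + c * z + d)**2 / length_sqr).mean()

Passing this to basinhopping() in place of perp_error() is the change described above.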
I am trying to solve a system of polynomial equations obtained by comparing coefficients of different polynomials.
# Statement of Problem:
# We are attempting to find complex numbers a, b, c, d, e, K, u, v, r, s where
# ((a*x + c)^2)*(x^3 + (3*K)*x + 2*K) - ((b*x^2 + d*x + e)^2) = a^2*(x - r)^2*(x - s)^3 and
# ((a*x + c)^2)*(x^3 + (3*K)*x + 2*K) - ((b*x^2 + d*x + e - 1)^2) = a^2*(x - u)*(x - v)^4
R.<x> = CC['x']
a, b, c, d, e, r, s, u, v, K = var('a, b, c, d, e, r, s, u, v, K')
y2 = x^3 + (3*K)*x + 2*K
q0 = ((a*x + c)^2)*(y2) - ((b*x^2 + d*x + e)^2)
p0 = (a^2)*((x-r)^2)*((x-s)^3)
t = (b^2 - 2*a*c)/a^2
Q0 = q0.expand()
P0 = p0.expand()
P0 = P0.substitute(s = ((t - 2*r)/3))
Relations0 = []
i = 0
while i < 6:
    Relations0.append(P0.coefficient(x, n = i) - Q0.coefficient(x, n = i))
    i = i + 1

q1 = ((a*x + c)^2)*(y2) - ((b*x^2 + d*x + e - 1)^2)
p1 = (a^2)*(x-u)*((x-v)^4)
Q1 = q1.expand()
P1 = p1.expand()
P1 = P1.substitute(u = t - 4*v)
Relations1 = []
i = 0
while i < 6:
    Relations1.append(P1.coefficient(x, n = i) - Q1.coefficient(x, n = i))
    i = i + 1

Relations = Relations0 + Relations1
Telling Sage to solve the system of polynomials using solve(Relations, a,b,c,d,e,r,v,K) seems highly inefficient and has only led to Sage having its memory limit exceeded. Moreover, trying to reduce the number of equations and variables by solving for some of the variables is also inefficient and has not given any fruitful results. Since attempting to find all solutions has proven so difficult, is there any way to extract only a single solution?
Both identities are between polynomials of degree 5, which gives 12 coefficient equations. However, the degree-5 coefficients are identical on both sides, so those two equations are always satisfied. Thus you effectively have 10 or fewer equations in 10 variables.
Divide by a^2, i.e., replace c, b, d, e by c/a, b/a, d/a, e/a and introduce f=1/a to reduce the degrees of the coefficient equations.
Then the coefficient equations result from

(x + c)^2*(x^3 + 3*K*x + 2*K) - (b*x^2 + d*x + e)^2 = (x - r)^2*(x - s)^3;
(x + c)^2*(x^3 + 3*K*x + 2*K) - (b*x^2 + d*x + e - f)^2 = (x - u)*(x - v)^4;

which can be entered on http://magma.maths.usyd.edu.au/calc/ as
A<b, c, d, e, f, r, s, u, v, K> :=PolynomialRing(Rationals(),10,"glex");
P<x> := PolynomialRing(A);
eq1 := (x + c)^2*(x^3 + 3*K*x + 2*K) - (b*x^2 + d*x + e)^2 - (x - r)^2*(x - s)^3;
eq2 := (x + c)^2*(x^3 + 3*K*x + 2*K) - (b*x^2 + d*x + e - f)^2 - (x - u)*(x - v)^4;
I := ideal<A|Coefficients(eq1) cat Coefficients(eq2-eq1)>; I;
giving
Ideal of Polynomial ring of rank 10 over Rational Field
Order: Graded Lexicographical
Variables: b, c, d, e, f, r, s, u, v, K
Basis:
[
r^2*s^3 + 2*c^2*K - e^2,
-3*r^2*s^2 - 2*r*s^3 + 3*c^2*K + 4*c*K - 2*d*e,
3*r^2*s + 6*r*s^2 + s^3 - 2*b*e + 6*c*K - d^2 + 2*K,
-2*b*d + c^2 - r^2 - 6*r*s - 3*s^2 + 3*K,
-b^2 + 2*c + 2*r + 3*s,
-r^2*s^3 + u*v^4 + 2*e*f - f^2,
3*r^2*s^2 + 2*r*s^3 - 4*u*v^3 - v^4 + 2*d*f,
-3*r^2*s - 6*r*s^2 - s^3 + 6*u*v^2 + 4*v^3 + 2*b*f,
r^2 + 6*r*s + 3*s^2 - 4*u*v - 6*v^2,
-2*r - 3*s + u + 4*v
]
have degrees 5, 4, 3, 2, 2, 5, 4, 3, 2, 1, giving an upper bound of 28800 for the number of solutions. Since the commonly used Gröbner basis algorithms have a complexity bound of O(d^(n^2)) (for the better algorithms), your runtime will optimistically be characterized by the number 28800^10 (the Bézout bound in place of d^n in (d^n)^n), which is rather large, however small the constant. Even removing one variable via the linear equation will not change much in these estimates.
Thus any symbolic solution will take a long time and result in univariate polynomials of rather high degrees as part of any triangular polynomial basis.
Given 3 points in space (3D), A = (x1, y1, z1), B = (x2, y2, z2), C = (x3, y3, z3), how do I find the center and radius of the circle (arc) that passes through these three points, i.e. find the circle equation? Using Python and NumPy, here is my initial code:
import numpy as np
A = np.array([x1, y1, z1])
B = np.array([x2, y2, z2])
C = np.array([x3, y3, z3])
#Find vectors connecting the three points and the length of each vector
AB = B - A
BC = C - B
AC = C - A
# Triangle Lengths
a = np.linalg.norm(AB)
b = np.linalg.norm(BC)
c = np.linalg.norm(AC)
From the Circumradius definition, the radius can be found using:
R = (a * b * c) / np.sqrt(2.0 * a**2 * b**2 +
2.0 * b**2 * c**2 +
2.0 * c**2 * a**2 -
a**4 - b**4 - c**4)
However, I am having problems finding the Cartesian coordinates of the center. One possible solution is to use the barycentric coordinates of the triangle points to find the barycentric coordinates of the circumcenter (Circumcenter).
First (using this source) we find the circumcenter's barycentric coordinates:
# barycentric coordinates of the center
b1 = a**2 * (b**2 + c**2 - a**2)
b2 = b**2 * (c**2 + a**2 - b**2)
b3 = c**2 * (a**2 + b**2 - c**2)
Then the Cartesian coordinates of the center (P) would be:
Px = (b1 * A[0]) + (b2 * B[0]) + (b3 * C[0])
Py = (b1 * A[1]) + (b2 * B[1]) + (b3 * C[1])
Pz = (b1 * A[2]) + (b2 * B[2]) + (b3 * C[2])
However, the barycentric coordinate values above do not seem to be correct. When solved with an example of known values, the radius is correct, but the coordinates of the center are not.
Example: For these three points:
A = np.array([2.0, 1.5, 0.0])
B = np.array([6.0, 4.5, 0.0])
C = np.array([11.75, 6.25, 0.0])
The expected radius and center coordinates are:
R = 15.899002930062595
P = [13.4207317073, -9.56097560967, 0]
Any ideas on how to find the center coordinates?
There are two issues with your code.
The first is in the naming convention. For all the formulas you are using to hold, the side of length a has to be the one opposite the point A, and similarly for b and B and c and C. You can solve that by computing them as:
a = np.linalg.norm(C - B)
b = np.linalg.norm(C - A)
c = np.linalg.norm(B - A)
The second has to do with the note in your source for the barycentric coordinates of the circumcenter: not necessarily homogeneous. That is, they need not be normalized in any way, and the formula you are using to compute the Cartesian coordinates from the barycentric ones is only valid when they add up to one.
Fortunately, you only need to divide the resulting Cartesian coordinates by b1 + b2 + b3 to get the result you are after. Streamlining your code a little for efficiency, I get the results you expect:
>>> A = np.array([2.0, 1.5, 0.0])
>>> B = np.array([6.0, 4.5, 0.0])
>>> C = np.array([11.75, 6.25, 0.0])
>>> a = np.linalg.norm(C - B)
>>> b = np.linalg.norm(C - A)
>>> c = np.linalg.norm(B - A)
>>> s = (a + b + c) / 2
>>> R = a*b*c / 4 / np.sqrt(s * (s - a) * (s - b) * (s - c))
>>> b1 = a*a * (b*b + c*c - a*a)
>>> b2 = b*b * (a*a + c*c - b*b)
>>> b3 = c*c * (a*a + b*b - c*c)
>>> P = np.column_stack((A, B, C)).dot(np.hstack((b1, b2, b3)))
>>> P /= b1 + b2 + b3
>>> R
15.899002930062531
>>> P
array([ 13.42073171, -9.56097561, 0. ])
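For convenience, the same steps can be wrapped in a small helper (a sketch of my own, not from the answer; it assumes A, B, C are NumPy arrays and uses the corrected side naming plus the b1 + b2 + b3 normalization):

import numpy as np

def circumcenter_radius(A, B, C):
    """Return (center, radius) of the circle through the 3D points A, B, C."""
    # Side lengths, named opposite to their corresponding vertices.
    a = np.linalg.norm(C - B)
    b = np.linalg.norm(C - A)
    c = np.linalg.norm(B - A)

    # Circumradius via Heron's formula.
    s = (a + b + c) / 2.0
    R = a * b * c / (4.0 * np.sqrt(s * (s - a) * (s - b) * (s - c)))

    # Barycentric coordinates of the circumcenter, then normalize.
    b1 = a * a * (b * b + c * c - a * a)
    b2 = b * b * (a * a + c * c - b * b)
    b3 = c * c * (a * a + b * b - c * c)
    P = (b1 * A + b2 * B + b3 * C) / (b1 + b2 + b3)

    return P, R

With the example points above it should return the same P and R as the interactive session.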
As an extension to the original problem: now suppose we have an arc of known length (e.g., 1 unit) extending from point A as defined above, away from point B. How do we find the 3D coordinates of the end point N? The new point N lies on the same circle passing through points A, B, and C.
This can be solved by first finding the angle between the two vectors PA & PN:
# L = Arc Length
theta = L / R
Now all we need to do is rotate the vector PA (Radius) by this angle theta in the correct direction. In order to do that, we need the 3D rotation matrix. For that we use the Euler–Rodrigues formula:
def rotation_matrix_3d(axis, theta):
    axis = axis / np.linalg.norm(axis)
    a = np.cos(theta / 2.0)
    b, c, d = axis * np.sin(theta / 2.0)
    rot = np.array([[a*a+b*b-c*c-d*d, 2*(b*c-a*d),     2*(b*d+a*c)],
                    [2*(b*c+a*d),     a*a+c*c-b*b-d*d, 2*(c*d-a*b)],
                    [2*(b*d-a*c),     2*(c*d+a*b),     a*a+d*d-b*b-c*c]])
    return rot
Next, we need to find the rotation axis to rotate PA around. This can be achieved by finding the axis normal to the plane of the circle passing through A, B & C. The coordinates of point N can then be found from the center of the circle.
PA = P - A
PB = P - B
xx = np.cross(PB, PA)
r3d = rotation_matrix_3d(xx, theta)
PN = np.dot(r3d, PA)
N = P - PN
As an example, if we want the coordinates of a point N that is 3 degrees away from point A (so theta = np.radians(3)), the coordinates would be
N = (1.43676498, 0.8871264, 0.)
Which is correct! (Manually verified using a CAD program.)
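Putting the pieces together, here is a minimal end-to-end sketch of that example (it reuses the circumcenter_radius helper sketched earlier and rotation_matrix_3d from above; the only new assumption is converting the 3 degrees to radians for theta):

import numpy as np

A = np.array([2.0, 1.5, 0.0])
B = np.array([6.0, 4.5, 0.0])
C = np.array([11.75, 6.25, 0.0])

P, R = circumcenter_radius(A, B, C)  # center and radius of the circle through A, B, C

theta = np.radians(3.0)              # 3 degrees along the arc from A, away from B

PA = P - A
PB = P - B
axis = np.cross(PB, PA)              # normal to the plane of the circle

r3d = rotation_matrix_3d(axis, theta)
N = P - np.dot(r3d, PA)
print(N)                             # approximately (1.43676498, 0.8871264, 0.0), as above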