My question is: how do I write these equations as an array and solve them?
from scipy import linalg
import numpy as np
import matplotlib.pyplot as plt
x = np.array([[-23,1100,2300],[2300,1500,550],[550,1600,]])
I tried to write the array above, but I couldn't figure out how to handle 'In' and 'Vs2' from the question. Can you help me solve it?
You want to solve these equations for several voltages, which suggests a for-loop. For clarity, it's usually better to use identifiers for values, for instance R1 rather than 1100. Put R1 in the formulae and let the computer do the simple arithmetic for you.
You may be thinking of using the linalg solve function since you need to solve a square matrix of order three. The unknowns are the currents. Therefore, do the algebra so that you have expressions for the coefficients of the matrix, and for the right side of the equation, in terms of resistances and voltages.
For the matrix (as indicated in the documentation at https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve.html#scipy.linalg.solve),
a = np.array([[f1(Rs, Vs), f2(Rs, Vs), f3(Rs, Vs)], [...], [...]])
For the vector on the right side,
b = np.array([f4(Rs, Vs), f5(Rs,Vs), f6(Rs, Vs)])
Then currents = solve(a, b)
Notice that f1, f2, etc. are the functions that you have to work out algebraically.
Now put this code in a loop, more or less like this:
for vs2 in [10,15,20,25]:
    currents = solve(a, b)
Because you've got the resistances and vs2's in your algebraic expressions you'll get the corresponding currents. You'll need to collect the currents corresponding to voltages for plotting.
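A minimal sketch of that collection step (the coefficients used here are the ones the sympy derivation below produces; adapt it to your own algebra):
import numpy as np
from scipy.linalg import solve
import matplotlib.pyplot as plt

# coefficient matrix from the algebra (see the sympy derivation below)
a = np.array([[3400, -2300, 0],
              [-2300, 4350, -550],
              [0, -550, 2150]])

vs2_values = [10, 15, 20, 25]
collected = []
for vs2 in vs2_values:
    b = np.array([23, 0, -vs2])   # right-hand side depends on vs2
    collected.append(solve(a, b))

collected = np.array(collected)   # row i holds (I1, I2, I3) for vs2_values[i]
for k in range(3):
    plt.plot(vs2_values, collected[:, k], label='I' + str(k + 1))
plt.legend()
plt.show()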
Addition: [image: partial result of the algebraic manipulation]
More: How I would avoid most of the pesky algebra using the sympy library:
>>> R1, R2, R3, R4, R5, Vs1 = 1100, 2300, 1500, 550, 1600, 23
>>> from sympy import *
>>> var('I1,I2,I3,Vs2')
(I1, I2, I3, Vs2)
>>> eq1 = -Vs1 + R1*I1 + R2 * (I1-I2)
>>> eq1
3400*I1 - 2300*I2 - 23
>>> eq2 = R2*(I2-I1)+R3*I2+R4*(I2-I3)
>>> eq2
-2300*I1 + 4350*I2 - 550*I3
>>> eq3 = R4*(I3-I2)+R5*I3 + Vs2
>>> eq3
-550*I2 + 2150*I3 + Vs2
>>> from scipy import linalg
>>> import numpy as np
>>> for Vs2 in [10,15,20,25]:
... ls = np.array([[3400,-2300,0],[-2300,4350,-550],[0,-550,2150]])
... rs = np.array([23, 0, -Vs2])
... I = linalg.solve(ls, rs)
... Vs2, I
...
(10, array([ 0.01007914, 0.0048996 , -0.00339778]))
(15, array([ 0.00975305, 0.00441755, -0.00584667]))
(20, array([ 0.00942696, 0.0039355 , -0.00829557]))
(25, array([ 0.00910087, 0.00345346, -0.01074446]))
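If you'd rather not read the coefficients off by eye, sympy's linear_eq_to_matrix can extract A and b directly; a small sketch (keeping Vs2 symbolic in the right-hand side):
from sympy import symbols, linear_eq_to_matrix

I1, I2, I3, Vs2 = symbols('I1 I2 I3 Vs2')
R1, R2, R3, R4, R5, Vs1 = 1100, 2300, 1500, 550, 1600, 23

eq1 = -Vs1 + R1*I1 + R2*(I1 - I2)
eq2 = R2*(I2 - I1) + R3*I2 + R4*(I2 - I3)
eq3 = R4*(I3 - I2) + R5*I3 + Vs2

# A collects the coefficients of I1, I2, I3; b is what remains on the right
A, b = linear_eq_to_matrix([eq1, eq2, eq3], [I1, I2, I3])
print(A)   # Matrix([[3400, -2300, 0], [-2300, 4350, -550], [0, -550, 2150]])
print(b)   # Matrix([[23], [0], [-Vs2]])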
To solve a linear system of equations for an unknown vector x = In, classically written as Ax = b, you need to pass a coefficient matrix A and a right-hand-side vector b to the linalg.solve function. Based on your question, you just have to rewrite your three equations in matrix form in terms of the unknown currents to get A and b. That was done above with sympy, but it is pretty overkill here imo. Here follows an easier-to-read solution with an analytic A:
from scipy.linalg import lu_factor, lu_solve
import numpy as np
import matplotlib.pyplot as plt
# your data
R1 = 1100
R2 = 2300
R3 = 1500
R4 = 550
R5 = 1600
Vs1 = 23
# Vs2 range of interest as a list
Vs2_range = [10,15,20,25]
# construct A: the coefficient matrix of the left-hand side in terms of In = [I1, I2, I3]
A = np.array([[ R1+R2, -R2, 0],
[ -R2, R2+R3+R4, -R4],
[ 0, -R4, R4+R5]])
# pre-compute pivoted LU decomposition of A to solve Ax=b (because only b is changing here)
A_LU = lu_factor(A)
# initialize results
res = np.empty((len(Vs2_range),3))
# loop over Vs2 values
for i,Vs2 in enumerate(Vs2_range):
    # construct b: the right hand side vector for each Vs2
    b = np.array([Vs1,0,-Vs2])
    # then solve the linear system Ax=b
    In = lu_solve(A_LU,b)
    # store results as rows of the res array
    res[i,:] = In
# plot each current In (column of res) vs Vs2_range
for i in range(3):
    plt.plot(Vs2_range,res[:,i],'-+',label='I'+str(i+1))
plt.xlabel('Vs2 [V]')
plt.ylabel('I [A]')
plt.legend()
which gives: [plot of I1, I2, I3 versus Vs2]
Hope this helps.
I made a system of equations for an optimization of a pro-link motorcycle in Matlab that I'm turning into Python code because I need to load it into another piece of software. The Matlab code is the following:
clc
clear
Lmono= 320;
Lbielletta=145;
IPS= 16.03;
Pivot= -20;
R1x=(-46.72)-((Pivot)*sind(IPS));
R1y=((180.26)-((Pivot)*cosd(IPS)));
R_1x=(-43.52)-((Pivot)*sind(IPS));
R_1y=((-151.37)-((Pivot)*cosd(IPS)));
R4=60;
R3=203.727;
Ip=100.1;
eta=36.79;
syms phi o
eqns = [(Lbielletta)^2 == ((R3*cosd(phi)-R4*sind(o)+R_1x)^2)+((-R3*sind(phi)-R4*cosd(o)-R_1y)^2)
(Lmono)^2 == ((((R3*cosd(phi))-(R4*sind(o))-((Ip*cosd(eta+o)))+(R1x))^2) + (((R3*sind(phi))+(R4*cosd(o))-((Ip*sind(eta+o)))+(R1y))^2))];
[phi ,o]=vpasolve(eqns,[phi o]);
I wrote this in Python:
import math as m
Lb = 145.0
Lm = 320.0
IPS= 16.03
Pivot= -20.0
R1x=(-46.72)-((Pivot)*m.sin(IPS))
R1y=((180.26)-((Pivot)*m.cos(IPS)))
R_1x=(-43.52)-((Pivot)*m.sin(IPS))
R_1y=((-151.37)-((Pivot)*m.cos(IPS)))
R4=60.0
R3=203.727
Ip=100.1
eta=36.79
import sympy as sym
from sympy import sin, cos
sym.init_printing()
phi,o = sym.symbols('phi,o')
f = sym.Eq(((R3*cos(phi)-R4*sin(o)+R_1x)**2)+((-R3*sin(phi)-R4*cos(o)-R_1y)**2),Lb**2)
g = sym.Eq(((((R3*cos(phi))-(R4*sin(o))-((Ip*cos(eta+o)))+(R1x))**2) + (((R3*sin(phi))+(R4*cos(o))-((Ip*sin(eta+o)))+(R1y))**2)),Lm**2)
print(sym.nonlinsolve([f,g],(phi,o)))
But when I run the code it loads for about 30 seconds (in Matlab it takes 1-2 seconds) and then returns this:
runfile('C:/Users/Administrator/.spyder-py3/temp.py', wdir='C:/Users/Administrator/.spyder-py3')
EmptySet
EmptySet?
Can someone help me?
Oscar's answer is "correct", but since you are new to Python there are a few things that you'd want to know.
First, in Matlab you are using sind and cosd which require the angle in degrees. On the other hand, the trigonometric functions exposed by the math, numpy, sympy modules require the angle to be in radians. Hence, we need to convert Lb, Lm, ... to radians.
NOTE: since I don't know the kind of problem you are solving, I have applied the m.radians to every number. This is probably wrong: you have to fix it!
Once we are done with that, we can use nsolve to numerically solve the system of equations by providing an initial guess.
import math as m
import sympy as sym
from sympy import sin, cos, lambdify, nsolve, Add
sym.init_printing()
phi,o = sym.symbols('phi,o')
Lb = m.radians(145.0)
Lm = m.radians(320.0)
IPS= m.radians(16.03)
Pivot= m.radians(-20.0)
R1x=m.radians(-46.72)-((Pivot)*m.sin(IPS))
R1y=m.radians(180.26)-((Pivot)*m.cos(IPS))
R_1x=m.radians(-43.52)-((Pivot)*m.sin(IPS))
R_1y=m.radians(-151.37)-((Pivot)*m.cos(IPS))
R4=m.radians(60.0)
R3=m.radians(203.727)
Ip=m.radians(100.1)
eta=m.radians(36.79)
f = sym.Eq(((R3*cos(phi)-R4*sin(o)+R_1x)**2)+((-R3*sin(phi)-R4*cos(o)-R_1y)**2),Lb**2)
g = sym.Eq(((((R3*cos(phi))-(R4*sin(o))-((Ip*cos(eta+o)))+(R1x))**2) + (((R3*sin(phi))+(R4*cos(o))-((Ip*sin(eta+o)))+(R1y))**2)),Lm**2)
print(nsolve([f, g], [phi, o], [0, 0]))
# out: Matrix([[0.560675440923978], [-0.0993239452750302]])
Since you are solving an optimization problem and the equations are non-linear, it is likely that there is more than one solution. We can create a contour plot of the two equations: the intersections between the contours represent the solutions:
# convert symbolic expression to numerical functions for evaluation
fn = lambdify([phi, o], f.rewrite(Add))
gn = lambdify([phi, o], g.rewrite(Add))
# plot the 0-level contour of fn and gn: the intersection between
# the curves are the solutions you are looking for
import numpy as np
import matplotlib.pyplot as plt
pp, oo = np.mgrid[0:2*np.pi:100j, 0:2*np.pi:100j]
fig, ax = plt.subplots()
ax.contour(pp, oo, fn(pp, oo), levels=[0], cmap="Greens_r")
ax.contour(pp, oo, gn(pp, oo), levels=[0], cmap="winter")
ax.set_xlabel("phi [rad]")
ax.set_ylabel("o [rad]")
plt.show()
From this picture you can see that there are two solutions. We can find the other solution by providing a better initial guess to nsolve:
print(nsolve([f, g], [phi, o], [0.5, 3]))
# out: Matrix([[0.451286281041857], [2.54087384334971]])
You can use SymPy's nsolve with an initial guess:
In [7]: sym.nsolve([f, g], [phi, o], [0, 0])
Out[7]:
⎡0.228868116194702⎤
⎢ ⎥
⎣0.345540167046199⎦
I am trying to fit a quadratic function to some data, and I'm trying to do this without using numpy's polyfit function.
Mathematically I tried to follow this website https://neutrium.net/mathematics/least-squares-fitting-of-a-polynomial/ but somehow I don't think I'm doing it right. If anyone could assist me that would be great, or if you could suggest another way to do it, that would also be awesome.
What I've tried so far:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
ones = np.ones(3)
A = np.array( ((0,1),(1,1),(2,1)))
xfeature = A.T[0]
squaredfeature = A.T[0] ** 2
b = np.array( (1,2,0), ndmin=2 ).T
b = b.reshape(3)
features = np.concatenate((np.vstack(ones), np.vstack(xfeature), np.vstack(squaredfeature)), axis = 1)
featuresc = features.copy()
print(features)
m_det = np.linalg.det(features)
print(m_det)
determinants = []
for i in range(3):
    featuresc.T[i] = b
    print(featuresc)
    det = np.linalg.det(featuresc)
    determinants.append(det)
    print(det)
    featuresc = features.copy()
determinants = determinants / m_det
print(determinants)
plt.scatter(A.T[0],b)
u = np.linspace(0,3,100)
plt.plot(u, u**2*determinants[2] + u*determinants[1] + determinants[0] )
p2 = np.polyfit(A.T[0],b,2)
plt.plot(u, np.polyval(p2,u), 'b--')
plt.show()
As you can see, my curve doesn't compare well to numpy's polyfit curve.
Update:
I went through my code and removed all the stupid mistakes, and now it works when I fit over 3 points, but I have no idea how to fit over more than three points.
This is the new code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
ones = np.ones(3)
A = np.array( ((0,1),(1,1),(2,1)))
xfeature = A.T[0]
squaredfeature = A.T[0] ** 2
b = np.array( (1,2,0), ndmin=2 ).T
b = b.reshape(3)
features = np.concatenate((np.vstack(ones), np.vstack(xfeature), np.vstack(squaredfeature)), axis = 1)
featuresc = features.copy()
print(features)
m_det = np.linalg.det(features)
print(m_det)
determinants = []
for i in range(3):
    featuresc.T[i] = b
    print(featuresc)
    det = np.linalg.det(featuresc)
    determinants.append(det)
    print(det)
    featuresc = features.copy()
determinants = determinants / m_det
print(determinants)
plt.scatter(A.T[0],b)
u = np.linspace(0,3,100)
plt.plot(u, u**2*determinants[2] + u*determinants[1] + determinants[0] )
p2 = np.polyfit(A.T[0],b,2)
plt.plot(u, np.polyval(p2,u), 'r--')
plt.show()
Instead of using Cramer's Rule, solve the system using least squares. Remember that Cramer's Rule only works if the total number of points you have equals the desired order of the polynomial plus 1.
If you don't have this, then Cramer's Rule will not work as you're trying to find an exact solution to the problem. If you have more points, the method is unsuitable as we will create an overdetermined system of equations.
To adapt this to more points, numpy.linalg.lstsq is a better fit: it solves Ax = b by computing the vector x that minimizes the Euclidean norm of the residual. Therefore, remove the y values from the last column of the features matrix and use numpy.linalg.lstsq to solve for the coefficients:
import numpy as np
import matplotlib.pyplot as plt
ones = np.ones(4)
xfeature = np.asarray([0,1,2,3])
squaredfeature = xfeature ** 2
b = np.asarray([1,2,0,3])
features = np.concatenate((np.vstack(ones),np.vstack(xfeature),np.vstack(squaredfeature)), axis = 1) # Change - remove the y values
determinants = np.linalg.lstsq(features, b, rcond=None)[0] # Change - use least squares
plt.scatter(xfeature,b)
u = np.linspace(0,3,100)
plt.plot(u, u**2*determinants[2] + u*determinants[1] + determinants[0] )
plt.show()
I now get this plot, which matches the dashed curve in your graph and what numpy.polyfit gives you.
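If you want the same least-squares idea for arbitrarily many points and other degrees, np.vander builds the design matrix in one call; a small sketch:
import numpy as np

x = np.array([0, 1, 2, 3], dtype=float)
y = np.array([1, 2, 0, 3], dtype=float)

# columns are [1, x, x**2]; increasing=True puts the constant term first
V = np.vander(x, N=3, increasing=True)
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
print(coeffs)   # [c0, c1, c2] such that y ≈ c0 + c1*x + c2*x**2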
I have a set of equations of the form: Y = aA + bB
where Y is a known vector of floats (only this one is known!); a, b are unknown scalars (floats) and A, B are unknown vectors of floats. Each equation has its own Y, a, b, whereas all equations share the same unknown vectors A and B.
I have a set of such equations, so my problem is to minimize the function:
(Y-aA-bB)+(Y'-a'A-b'B)+....
I also have many inequality constraints of the type: Ai > Aj (Ai is the i-th element of vector A), Bi >= Bk, Bi > 0, a > a', ...
Is there any software or library (ideally for python) which can handle this problem?
General remarks
This is a linear problem (at least in the linear least-squares sense, continue reading)!
It's also incompletely specified, as it's not clear whether there should always be a feasible solution in your case or whether you want to minimize some given loss in general. Your text sounds like the latter, but in that case one has to choose the loss (which makes a difference in regard to possible algorithms). Let's take the euclidean norm (probably the best pick here)!
Ignoring constraints for a moment, we can view this problem as a basic least-squares solution to a linear matrix equation (euclidean norm vs. squared euclidean norm makes no difference!).
min || b - Ax ||^2
Here:
M = number of Y's
N = size of Y
b = (Y0,
Y1,
...) -> shape: M*N (flattened: Y_x = (y_x_0, y_x_1).T)
A = ((a0, 0, 0, ..., b0, 0, 0, ...),
(0, a0, 0, ..., 0, b0, 0, ...),
(0, 0, a0, ..., 0, 0, b0, ...),
...
(a1, 0, 0, ..., b1, 0, 0, ...)) -> shape: (M*N, N*2)
x = (A0, A1, A2, ... B0, B1, B2, ...) -> shape: N*2 (one for A, one for B)
What you should do
If unconstrained:
Convert to standard-form and use numpy's lstsq
If constrained:
Either use customized optimization algorithms, or:
Linear-programming (if minimizing absolute-differences / l1-norm)
I'm too lazy to formulate it for scipy's linprog
Not that hard, but l1-norm is non-trivial using scipy's API
Much easier to formulate with cvxpy (obj=cvxpy.norm(X, 1))
Quadratic-programming / Second-order-cone-programming (if minimizing euclidean norm / l2-norm)
Again, too lazy to formulate it; no special solver available in scipy yet
Could be easily formulated with cvxpy (obj=cvxpy.norm(X, 2))
Emergency: use general-purpose constrained nonlinear-optimization algorithms like SLSQP -> see code
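For the unconstrained case above, a minimal sketch of the standard-form build (my own construction following the matrix layout given earlier; random data stands in for the real Y, a and b):
import numpy as np

M, N = 4, 3                          # M equations, each Y of size N
rng = np.random.default_rng(0)
Ys = rng.uniform(size=(M, N))        # the known Y vectors
a = rng.uniform(size=M)              # the known scalars a, a', ...
b = rng.uniform(size=M)              # the known scalars b, b', ...

# stack M blocks of [a_i * I_N | b_i * I_N] as in the layout above
I = np.eye(N)
Amat = np.vstack([np.hstack([a[i]*I, b[i]*I]) for i in range(M)])   # (M*N, 2N)
rhs = Ys.ravel()                                                    # (M*N,)
x, *_ = np.linalg.lstsq(Amat, rhs, rcond=None)
A_vec, B_vec = x[:N], x[N:]          # the recovered vectors A and B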
Some hacky code (not the best approach!)
This code:
Is just a demo!
Uses general nonlinear optimization algorithms from scipy
Therefore:
easier to formulate
Less fast & robust than LP, QP, SOCP
But will achieve approximately the same result as convergence on convex optimization problems is guaranteed
Uses automatic-differentiation whenever needed
(author too lazy to add gradients)
this can really hurt if performance is important
Is really ugly in terms of np.repeat vs. broadcasting!
Code:
import numpy as np
from scipy.optimize import minimize

np.random.seed(1)

""" Fake-problem (usually the job of the question-author!) """

def get_partial(N=10):
    Y = np.random.uniform(size=N)
    a, b = np.random.uniform(size=2)
    return Y, a, b

""" Optimization """

def optimize(list_partials, N, M):
    """ General approach:
        This is a linear system of equations (with constraints)
        Basic (unconstrained) form: min || b - Ax ||^2
    """
    Y_all = np.vstack([x[0] for x in list_partials]).ravel()        # flat 1d
    a_all = np.hstack([np.repeat(x[1], N) for x in list_partials])  # repeat to be of same shape
    b_all = np.hstack([np.repeat(x[2], N) for x in list_partials])  # same as above

    def func(x):
        A = x[:N]
        B = x[N:]
        return np.linalg.norm(Y_all - a_all * np.repeat(A, M) - b_all * np.repeat(B, M))

    """ Example constraints: A >= B element-wise """
    cons = ({'type': 'ineq',
             'fun': lambda x: x[:N] - x[N:]})

    res = minimize(func, np.zeros(N*2), constraints=cons, method='SLSQP', options={'disp': True})
    print(res)
    print(Y_all - a_all * np.repeat(res.x[:N], M) - b_all * np.repeat(res.x[N:], M))

""" Test """

M = 4
N = 3
list_partials = [get_partial(N) for i in range(M)]
optimize(list_partials, N, M)
Output:
Optimization terminated successfully. (Exit mode 0)
Current function value: 0.9019356096498999
Iterations: 12
Function evaluations: 96
Gradient evaluations: 12
fun: 0.9019356096498999
jac: array([ 1.03786588e-04, 4.84041870e-04, 2.08129734e-01,
1.57609582e-04, 2.87599862e-04, -2.07959406e-01])
message: 'Optimization terminated successfully.'
nfev: 96
nit: 12
njev: 12
status: 0
success: True
x: array([ 1.82177105, 0.62803449, 0.63815278, -1.16960281, 0.03147683,
0.63815278])
[ 3.78873785e-02 3.41189867e-01 -3.79020251e-01 -2.79338679e-04
-7.98836875e-02 7.94168282e-02 -1.33155595e-01 1.32869391e-01
-3.73398306e-01 4.54460178e-01 2.01297470e-01 3.42682496e-01]
I did not check the result! If there is an error it's an implementation error, not a conceptual one (my opinion)!
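For reference, a sketch of the cvxpy route mentioned above (untested; same fake-data pattern as the demo):
import numpy as np
import cvxpy as cp

M, N = 4, 3
rng = np.random.default_rng(1)
Ys = rng.uniform(size=(M, N))        # the known Y vectors
a = rng.uniform(size=M)              # the known scalars a, a', ...
b = rng.uniform(size=M)              # the known scalars b, b', ...

A = cp.Variable(N)
B = cp.Variable(N)
residuals = cp.hstack([Ys[i] - a[i]*A - b[i]*B for i in range(M)])
constraints = [A >= B]               # example constraint, element-wise
problem = cp.Problem(cp.Minimize(cp.norm(residuals, 2)), constraints)
problem.solve()
print(A.value, B.value)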
I agree with sascha that this is a linear problem. As I do not like constraints very much, I actually prefer to make it non-linear without constraints. I do so by setting the vector A = (a1**2, a1**2+a2**2, a1**2+a2**2+a3**2, ...); this ensures that it is all positive and that A_i > A_j for i > j. That makes errors a bit problematic, as you now have to consider error propagation to get A1, A2, etc., including correlation, but I will make an important point on that at the end. The "simple" solution would look as follows:
import numpy as np
from scipy.optimize import leastsq
from random import random

np.set_printoptions(linewidth=190)

def generate_random_vector(n, sortIt=True):
    out = np.fromiter((random() for x in range(n)), float)
    if sortIt:
        out.sort()
    return out

def residuals(parameters, dataVec, dataLength, vecDims):
    aParams = parameters[:dataLength]
    bParams = parameters[dataLength:2*dataLength]
    AParams = parameters[-2*vecDims:-vecDims]
    BParams = parameters[-vecDims:]
    YList = dataVec
    AVec = [a**2 for a in AParams]  ## assures A_i > 0
    BVec = [b**2 for b in BParams]
    AAVec = np.cumsum(AVec)  ## assures A_i > A_j for i > j
    BBVec = np.cumsum(BVec)
    dist = [np.array(Y) - a*np.array(AAVec) - b*np.array(BBVec) for Y, a, b in zip(YList, aParams, bParams)]
    dist = np.ravel(dist)
    return dist

if __name__ == "__main__":
    aList = generate_random_vector(20, sortIt=False)
    bList = generate_random_vector(20, sortIt=False)
    AVec = generate_random_vector(5)
    BVec = generate_random_vector(5)
    YList = [a*AVec + b*BVec for a, b in zip(aList, bList)]
    aGuess = 20*[.2]
    bGuess = 20*[.3]
    AGuess = 5*[.4]
    BGuess = 5*[.5]
    bestFitValues, covMX, infoDict, messages, ier = leastsq(residuals, aGuess + bGuess + AGuess + BGuess, args=(YList, 20, 5), full_output=True)
    print("a")
    print(aList)
    besta = bestFitValues[:20]
    print(besta)
    print("b")
    print(bList)
    bestb = bestFitValues[20:40]
    print(bestb)
    print("A")
    print(AVec)
    bestA = bestFitValues[-2*5:-5]
    realBestA = np.cumsum([x**2 for x in bestA])
    print(realBestA)
    print("B")
    print(BVec)
    bestB = bestFitValues[-5:]
    realBestB = np.cumsum([x**2 for x in bestB])
    print(realBestB)
    print(covMX)
The problem with errors and correlation is that the solution to the problem is not unique. If Y = aA + bB is a solution and we, e.g., rotate such that A = cE + sF and B = -sE + cF, then Y = (ac-bs)E + (as+bc)F = eE + fF is also a solution. The parameter space is, hence, completely flat at "the solution", resulting in huge errors and apocalyptic correlations.
I have the following information (dataframe) in Python:
product baskets scaling_factor
12345 475 95.5
12345 108 57.7
12345 2 1.4
12345 38 21.9
12345 320 88.8
and I want to run the following non-linear regression to estimate the parameters a, b, and c.
The equation that I want to fit:
scaling_factor = a - (b*np.exp(c*baskets))
In SAS we usually run the following model (it uses the Gauss-Newton method):
proc nlin data=scaling_factors;
parms a=100 b=100 c=-0.09;
model scaling_factor = a - (b * (exp(c*baskets)));
output out=scaling_equation_parms
parms=a b c;
Is there a similar way to estimate the parameters in Python using non-linear regression? And how can I see the plot in Python?
For problems like these I always use scipy.optimize.minimize with my own least squares function. The optimization algorithms don't handle large differences between the various inputs well, so it is a good idea to scale the parameters in your function so that the parameters exposed to scipy are all on the order of 1 as I've done below.
import numpy as np
import scipy.optimize
baskets = np.array([475, 108, 2, 38, 320])
scaling_factor = np.array([95.5, 57.7, 1.4, 21.9, 88.8])
def lsq(arg):
    a = arg[0]*100
    b = arg[1]*100
    c = arg[2]*0.1
    now = a - (b*np.exp(c * baskets)) - scaling_factor
    return np.sum(now**2)
guesses = [1, 1, -0.9]
res = scipy.optimize.minimize(lsq, guesses)
print(res.message)
# 'Optimization terminated successfully.'
print(res.x)
# [ 0.97336709 0.98685365 -0.07998282]
print([lsq(guesses), lsq(res.x)])
# [7761.0093358076601, 13.055053196410928]
Of course, as with all minimization problems it is important to use good initial guesses since all of the algorithms can get trapped in a local minimum. The optimization method can be changed by using the method keyword; some of the possibilities are
‘Nelder-Mead’
‘Powell’
‘CG’
‘BFGS’
‘Newton-CG’
The default is BFGS according to the documentation.
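For example, to switch the algorithm for the snippet above (Nelder-Mead as an arbitrary pick):
res = scipy.optimize.minimize(lsq, guesses, method='Nelder-Mead')
print(res.x)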
Agreeing with Chris Mueller, I'd also use scipy, but with scipy.optimize.curve_fit.
The code looks like:
###the top two lines are required on my linux machine
import matplotlib
matplotlib.use('Qt4Agg')
import matplotlib.pyplot as plt
from matplotlib.pyplot import cm
import numpy as np
from scipy.optimize import curve_fit #we could import more, but this is what we need
###defining your fitfunction
def func(x, a, b, c):
    return a - b*np.exp(c * x)
###OP's data
baskets = np.array([475, 108, 2, 38, 320])
scaling_factor = np.array([95.5, 57.7, 1.4, 21.9, 88.8])
###let us guess some start values
initialGuess=[100, 100,-.01]
guessedFactors=[func(x,*initialGuess ) for x in baskets]
###making the actual fit
popt,pcov = curve_fit(func, baskets, scaling_factor,initialGuess)
#one may want to
print(popt)
print(pcov)
###preparing data for showing the fit
basketCont=np.linspace(min(baskets),max(baskets),50)
fittedData=[func(x, *popt) for x in basketCont]
###preparing the figure
fig1 = plt.figure(1)
ax=fig1.add_subplot(1,1,1)
###the three sets of data to plot
ax.plot(baskets,scaling_factor,linestyle='',marker='o', color='r',label="data")
ax.plot(baskets,guessedFactors,linestyle='',marker='^', color='b',label="initial guess")
ax.plot(basketCont,fittedData,linestyle='-', color='#900000',label="fit with ({0:0.2g},{1:0.2g},{2:0.2g})".format(*popt))
###beautification
ax.legend(loc=0, title="graphs", fontsize=12)
ax.set_ylabel("factor")
ax.set_xlabel("baskets")
ax.grid()
ax.set_title("$\mathrm{curve}_\mathrm{fit}$")
###putting the covariance matrix nicely
tab= [['{:.2g}'.format(j) for j in i] for i in pcov]
the_table = plt.table(cellText=tab,
colWidths = [0.2]*3,
loc='upper right', bbox=[0.483, 0.35, 0.5, 0.25] )
plt.text(250,65,'covariance:',size=12)
###putting the plot
plt.show()
###done
Eventually, giving you: [plot of the data, the initial guess, and the fit, with the covariance table inset]
I have a Python (NumPy) function which creates a uniform random quaternion. I would like to get the two quaternion multiplications as a 2-dimensional array returned from the same or another function. The two quaternion multiplications in my case are Q1*Q2 and Q2*Q1. Here, Q1 = (w0, x0, y0, z0) and Q2 = (w1, x1, y1, z1) are two quaternions. The expected output (as a 2-d returned array) should be:
return([-x1*x0 - y1*y0 - z1*z0 + w1*w0, x1*w0 + y1*z0 - z1*y0 +
w1*x0, -x1*z0 + y1*w0 + z1*x0 + w1*y0, x1*y0 - y1*x0 + z1*w0 +
w1*z0])
Can anyone help me, please? My code is here:
from numpy import random, sqrt, sin, cos, pi, array

def randQ(N):
    #Generates a uniform random quaternion
    #James J. Kuffner 2004
    #A random array 3xN
    s = random.rand(3,N)
    sigma1 = sqrt(1.0 - s[0])
    sigma2 = sqrt(s[0])
    theta1 = 2*pi*s[1]
    theta2 = 2*pi*s[2]
    w = cos(theta2)*sigma2
    x = sin(theta1)*sigma1
    y = cos(theta1)*sigma1
    z = sin(theta2)*sigma2
    return array([w, x, y, z])
I know that the question is old, but as I found it interesting, for future reference I herewith write an answer: if no special data type for quaternions is desired, then a quaternion can be written as a tuple of a real number and an ordinary vector stored as an array of floats. Thus, mathematically, based on the process mentioned here, the Hamilton product of two quaternions $\hat{q}_1=(w_1,\mathbf{v}_1)$ and $\hat{q}_2=(w_2,\mathbf{v}_2)$ would be:
$$\hat{q}_1 \hat{q}_2=(w_1 w_2-\mathbf{v}^T_1\mathbf{v}_2, w_1 \mathbf{v}_2+w_2 \mathbf{v}_1+\mathbf{v}_1\times \mathbf{v}_2)$$
Sorry for the math notation that cannot be rendered in Stack Overflow.
Thus in numpy:
import numpy as np

# w1, w2 are scalars; v1, v2 are length-3 numpy arrays (assumed defined)
q1 = (w1, v1)
q2 = (w2, v2)
q1q2 = (w1*w2 - np.matmul(v1.T, v2), w1*v2 + w2*v1 + np.cross(v1, v2))
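A quick numerical check of this formula (the same product that the numpy-quaternion example further down computes):
import numpy as np

w1, v1 = 1.0, np.array([2.0, 3.0, 4.0])
w2, v2 = 5.0, np.array([6.0, 7.0, 8.0])
w = w1*w2 - np.dot(v1, v2)
v = w1*v2 + w2*v1 + np.cross(v1, v2)
print(w, v)   # -60.0 [12. 30. 24.]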
A simple rendition of your request would be:
In [70]: def multQ(Q1,Q2):
...: w0,x0,y0,z0 = Q1 # unpack
...: w1,x1,y1,z1 = Q2
...: return([-x1*x0 - y1*y0 - z1*z0 + w1*w0, x1*w0 + y1*z0 - z1*y0 +
...: w1*x0, -x1*z0 + y1*w0 + z1*x0 + w1*y0, x1*y0 - y1*x0 + z1*w0 +
...: w1*z0])
...:
In [72]: multQ(randQ(1),randQ(2))
Out[72]:
[array([-0.37695449, 0.79178506]),
array([-0.38447116, 0.22030199]),
array([ 0.44019022, 0.56496059]),
array([ 0.71855397, 0.07323243])]
The result is a list of 4 arrays. Just wrap it in np.array() to get a 2d array:
In [73]: M=np.array(_)
In [74]: M
Out[74]:
array([[-0.37695449, 0.79178506],
[-0.38447116, 0.22030199],
[ 0.44019022, 0.56496059],
[ 0.71855397, 0.07323243]])
I haven't tried to understand or clean up your description - just rendering it as working code.
A 2-Dimensional Array is an array like this: foo[0][1]
You don't need to do that. Multiplying two quaternions yields one single quaternion. I don't see why you would need a two-dimensional array, or how you would even use one.
Just have a function that takes two arrays as arguments:
def multQuat(q1, q2):
then return the relevant array.
return array([-q2[1] * q1[1], ...])
I know the post is pretty old, but I would like to add a function using the pyquaternion library to calculate quaternion multiplication. The quaternion multiplication mentioned in the question is called the Hamilton product. You can use it like below:
from pyquaternion import Quaternion
q1 = Quaternion()
q2 = Quaternion()
q1_q2 = q1*q2
You can find more about this library here http://kieranwynn.github.io/pyquaternion/
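For instance, with concrete values (the positional arguments are w, x, y, z; this reproduces the Hamilton product shown in the numpy-quaternion example below):
from pyquaternion import Quaternion

q1 = Quaternion(1, 2, 3, 4)
q2 = Quaternion(5, 6, 7, 8)
print(q1 * q2)   # the Hamilton product: w=-60, x=12, y=30, z=24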
There is a Python module that adds a quaternion dtype to NumPy.
Please check out the documentation for the quaternion module here.
Here is an example from the documentation. It looks native to the usage of NumPy.
>>> import numpy as np
>>> import quaternion
>>> np.quaternion(1,0,0,0)
quaternion(1, 0, 0, 0)
>>> q1 = np.quaternion(1,2,3,4)
>>> q2 = np.quaternion(5,6,7,8)
>>> q1 * q2
quaternion(-60, 12, 30, 24)
>>> a = np.array([q1, q2])
>>> a
array([quaternion(1, 2, 3, 4), quaternion(5, 6, 7, 8)], dtype=quaternion)
>>> np.exp(a)
array([quaternion(1.69392, -0.78956, -1.18434, -1.57912),
quaternion(138.909, -25.6861, -29.9671, -34.2481)], dtype=quaternion)