Symbolic Calculus and Integration in Python

I am trying to numerically compute a double integral.
The issue is that (I think) I need a mix of symbolic integration and numerical integration.
The integral looks something like this:

∫_-∞^∞ ( ∫_-∞^∞ exp( -w²/(2·sigmaw) - alpha·(x-w)²/(2·sigma) ) dw )^(1/alpha) dx
I cannot use scipy.integrate's double-integral routines directly because it is not a plain double integral: the inner integral is raised to the power (1/alpha) in the middle.
I also cannot get a number for the innermost integral (to then raise to the power), because it ends up being a function of x which I would then need to integrate.
I tried with symbolic calculus, using a nested sym.integrate like this:
import sympy as sym
w, x, alpha, sigma, sigmaw = sym.symbols('w x alpha sigma sigmaw')
sym.integrate((sym.integrate(sym.exp(-(w**2)/(2*sigmaw)-alpha*((x-w)**2)/(2*sigma)),(w,-sym.oo, sym.oo)))**(1/alpha),(x,-sym.oo, sym.oo))
However, it just spits the expression back unevaluated, not a number.
I think I would need to get a symbolic expression for the inner integral to use as a function for numerical integration.
Is it even possible?
If not in python, with another language like R?
Any experience with things of this sort?

I worked with Maxima (https://maxima.sourceforge.io) since OP seems to be saying the exact system used isn't too important.
The integrand is just a product of Gaussian bumps, so its integral over the real line is not too hard. Maxima doesn't have the strongest integrator in the world, but anyway it seems to handle this problem okay.
Start by assuming all the parameters are positive; if not specified, Maxima will ask for the sign during the calculation.
(%i2) assume (alpha > 0, sigmaw > 0, sigma > 0);
(%o2) [alpha > 0, sigmaw > 0, sigma > 0]
Define the inner integrand.
(%i3) I: exp(-(w**2)/(2*sigmaw)-alpha*((x-w)**2)/(2*sigma));
(%o3) %e^(-(alpha*(x-w)^2)/(2*sigma)-w^2/(2*sigmaw))
Compute the inner integral.
(%i4) I1: integrate (I, w, minf, inf);
(%o4) (sqrt(2)*sqrt(%pi)*sqrt(sigma)*sqrt(sigmaw)*%e^-((alpha*x^2)/(2*alpha*sigmaw+2*sigma)))/sqrt(alpha*sigmaw+sigma)
Maxima's pretty-printer normally draws 2-d (ASCII art) output, which is hard to read once pasted as plain text; the 1-d representation may make more sense. grind produces the 1-d display.
(%i5) grind(%);
(sqrt(2)*sqrt(%pi)*sqrt(sigma)*sqrt(sigmaw)
*%e^-((alpha*x^2)/(2*alpha*sigmaw+2*sigma)))
/sqrt(alpha*sigmaw+sigma)$
(%o5) done
Define the outer integrand.
(%i7) I2: I1^(1/alpha);
(%o7) (2^(1/(2*alpha))*%pi^(1/(2*alpha))*sigma^(1/(2*alpha))*sigmaw^(1/(2*alpha))*%e^-(x^2/(2*alpha*sigmaw+2*sigma)))/(alpha*sigmaw+sigma)^(1/(2*alpha))
Compute the outer integral. The final result is named foo here.
(%i9) foo: integrate (I2, x, minf, inf);
(%o9) (%pi^(1/(2*alpha)+1/2)*2^(1/(2*alpha))*sigma^(1/(2*alpha))*sigmaw^(1/(2*alpha))*sqrt(2*alpha*sigmaw+2*sigma))/(alpha*sigmaw+sigma)^(1/(2*alpha))
Evaluate the outer integral for specific values of the parameters.
(%i10) ev (foo, alpha = 3, sigma = 3/7, sigmaw = 7/4);
(%o10) (2^(1/6)*3^(1/6)*7^(1/6)*159^(1/3)*%pi^(2/3))/sqrt(14)
(%i11) float(%);
(%o11) 5.790416728790489
Compute a numerical approximation. Note quad_qagi is suitable for infinite intervals.
(%i12) ev (quad_qagi (lambda([x], quad_qagi (I, w, minf, inf)[1]^(1/alpha)), x, minf, inf),
alpha = 3, sigma = 3/7, sigmaw = 7/4);
(%o12) [5.790416728790598, 7.216782674725913E-9, 270, 0]
Looks like that supports the symbolic result.
(%i13) first(%) - %o11;
(%o13) 1.092459456231154E-13
The outer integral again, in 1-d display which might be useful for copying into another program:
(%i14) grind(foo);
(%pi^(1/(2*alpha)+1/2)*2^(1/(2*alpha))*sigma^(1/(2*alpha))
*sigmaw^(1/(2*alpha))
*sqrt(2*alpha*sigmaw+2*sigma))
/(alpha*sigmaw+sigma)^(1/(2*alpha))$
(%o14) done
I recommend pretty strongly to try to get to a symbolic result if possible; numerical integration is often tricky. In the example given, if it turned out that you could only do the inner integral but not the outer one, that would still be a pretty big win. You could plug the symbolic solution for the inner integral into a numerical approximation for the outer one.
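To make that concrete, here is a hedged sketch (my addition, not part of the original answer) of the hybrid approach in Python: the closed-form inner integral found above is typed in by hand, and only the outer integral is done numerically with scipy.integrate.quad. The parameter values alpha = 3, sigma = 3/7, sigmaw = 7/4 are the ones used in the Maxima session.
import math
from scipy.integrate import quad

alpha, sigma, sigmaw = 3.0, 3.0/7.0, 7.0/4.0

def inner(x):
    # closed form of integrate(exp(-w^2/(2 sigmaw) - alpha (x-w)^2/(2 sigma)), w, -inf, inf)
    return (math.sqrt(2 * math.pi * sigma * sigmaw / (alpha * sigmaw + sigma))
            * math.exp(-alpha * x**2 / (2 * alpha * sigmaw + 2 * sigma)))

# outer integral: integrate inner(x)^(1/alpha) over the real line
result, err = quad(lambda x: inner(x) ** (1.0 / alpha), -math.inf, math.inf)
print(result)   # ~5.7904, matching the symbolic value above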

This doesn't answer your question directly, but it will surely help, and others have already pointed out useful tools.
For the integration at hand, you don't really need symbolic integration.
Numerical integration is simply summing over a finite grid: integrating over w is summing along the w axis, and the same for x.
The main problem is how to choose the integration grid, since it cannot be infinite. For Gaussians I'd extend it to at least 10 times their sigma to keep the truncation error low; as for the grid spacing, make it as small as you can afford to wait for.
So the code below is equivalent to the integral above. Make sure you don't increase the grid steps until you have a picture of how much memory it will need, or else your PC will hang.
import numpy as np
# define constants
sigmaw = 0.1
sigma = 0.1
alpha = 0.2
# define grid
max_w = 2
min_w = -max_w
min_x = -3
max_x = -min_x
steps_w = 2000 # don't increase this too much or you'll run out of memory
steps_x = 1000 # don't increase this too much or you'll run out of memory
dw = (max_w - min_w) / (steps_w - 1)  # linspace includes both endpoints, so spacing is range/(steps-1)
dx = (max_x - min_x) / (steps_x - 1)
x_vec = np.linspace(min_x, max_x, steps_x)
w_vec = np.linspace(min_w, max_w, steps_w)
x, w = np.meshgrid(x_vec, w_vec, sparse=True)
# do integration
inner_term = np.exp(-(w ** 2) / (2 * sigmaw) - alpha * ((x - w) ** 2) / (2 * sigma))
inner_integral = np.sum(inner_term, axis=0) * dw
del inner_term # to free some memory
inner_integral_powered = inner_integral ** (1 / alpha)
del inner_integral # to free some memory
outer_integral = np.sum(inner_integral_powered) * dx
print(outer_integral)
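A small refinement (my addition, not part of the original answer): NumPy's trapezoidal rule gives a slightly better approximation than a plain Riemann sum at no extra cost, and it removes the need to track dw and dx by hand. Reusing the grid variables defined above:
# np.trapz is called np.trapezoid in NumPy 2.x
inner_term = np.exp(-(w ** 2) / (2 * sigmaw) - alpha * ((x - w) ** 2) / (2 * sigma))
inner_integral = np.trapz(inner_term, x=w_vec, axis=0)           # integrate over w
outer_integral = np.trapz(inner_integral ** (1 / alpha), x=x_vec)  # integrate over x
print(outer_integral)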

Numerical integration works by sampling the integrand at some values of the argument. In particular, the Newton-Cotes formulas sample uniformly, while different flavors of Gaussian integration sample irregularly.
So in your case, to integrate over x the integrator will need to evaluate the inner integral for various values of x, each evaluation implying a numerical integration over w with x held fixed.
Note that as your domain is unbounded, you will have to use a change of variable to make it finite.
If the inner integral has an analytical expression, you can of course use it and integrate numerically on x.
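As an illustration of that last point, here is a hedged sketch using nested calls to scipy.integrate.quad, which accepts infinite limits directly (it performs the change of variable internally). The parameter values are assumptions chosen to match the Maxima check above.
import math
from scipy.integrate import quad

alpha, sigma, sigmaw = 3.0, 3.0/7.0, 7.0/4.0   # same test values as above

def inner(x):
    # inner numerical integral over w, with x held fixed
    val, _ = quad(lambda w: math.exp(-w**2/(2*sigmaw) - alpha*(x - w)**2/(2*sigma)),
                  -math.inf, math.inf)
    return val

# each evaluation of the outer integrand triggers a full inner integration
result, err = quad(lambda x: inner(x) ** (1.0 / alpha), -math.inf, math.inf)
print(result)   # ~5.7904 for these parameters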

Related

Differences between Rs deSolve and Pythons odeint

I'm currently exploring the Lorenz system with Python and R, and have noticed subtle differences in the ODE packages. odeint from Python and ode from R's deSolve both say they use lsoda to calculate their derivatives. However, using lsoda for both seems to give far different results. I have tried ode45 for the ode function in R to get something more similar to Python, but am wondering why I can't get exactly the same results:
import numpy as np
from scipy.integrate import odeint

def lorenz(x, t):
    return [
        10 * (x[1] - x[0]),
        x[0] * (28 - x[2]) - x[1],
        x[0] * x[1] - 8 / 3 * x[2],
    ]

dt = 0.001
t_train = np.arange(0, 0.1, dt)
x0_train = [-8, 7, 27]
x_train = odeint(lorenz, x0_train, t_train)
x_train[0:5, :]
array([[-8. , 7. , 27. ],
[-7.85082366, 6.98457874, 26.87275343],
[-7.70328919, 6.96834721, 26.74700467],
[-7.55738803, 6.95135316, 26.62273959],
[-7.41311133, 6.93364263, 26.49994363]])
library(deSolve)
n <- round(100, 0)
# Lorenz Parameters: sigma, rho, beta
parameters <- c(s = 10, r = 28, b = 8 / 3)
state <- c(X = -8, Y = 7, Z = 27) # Initial State
# Lorenz Function used to generate Lorenz Derivatives
lorenz <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dx <- parameters[1] * (state[2] - state[1])
    dy <- state[1] * (parameters[2] - state[3]) - state[2]
    dz <- state[1] * state[2] - parameters[3] * state[3]
    list(c(dx, dy, dz))
  })
}
times <- seq(0, ((n) - 1) * 0.001, by = 0.001)
# ODE45 used to determine Lorenz Matrix
out <- ode(y = state, times = times,
func = lorenz, parms = parameters, method = "ode45")[, -1]
out[1:nrow(out), , drop = FALSE]
X Y Z
[1,] -8.00000000 7.000000 27.00000
[2,] -7.85082366 6.984579 26.87275
[3,] -7.70328918 6.968347 26.74700
[4,] -7.55738803 6.951353 26.62274
[5,] -7.41311133 6.933643 26.49994
I had to call out[1:nrow(out), , drop = FALSE] to get all the decimal places provided; it appears that head rounds to the fifth decimal place. I understand it's incredibly subtle, but I was hoping to get exactly the same results. Does anyone know if this is something more than a rounding issue between R and Python?
Thanks in advance.
All numerical methods that solve ODEs are approximations that work up to a given precision. The precision of the deSolve solvers is set to atol=1e-6, rtol=1e-6 by default, where atol is absolute and rtol is relative tolerance. Furthermore, ode45 has some additional parameters to fine-tune the automatic step size algorithm, and it can make use of interpolation.
To tighten the tolerances, set for example:
out <- ode(y = state, times = times, func = lorenz,
parms = parameters, method = "ode45", atol = 1e-10, rtol = 1e-10)
Finally, I would recommend using an odepack solver like lsoda or vode instead of the classical ode45. More details can be found on the ode and lsoda help pages, and for ode45 on the ?rkMethod help page.
Similar parameters also exist for odeint (rtol and atol).
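For example (my sketch, not from the original answer, assuming the lorenz, x0_train and t_train definitions from the question's Python snippet):
# tighten scipy.integrate.odeint's tolerances the same way
x_train = odeint(lorenz, x0_train, t_train, rtol=1e-10, atol=1e-10)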
A final note: as Lorenz is a chaotic system, local errors will lead to diverging behaviour due to error magnification. This is an essential feature of chaotic systems, which are in theory unpredictable in the long run. So whatever you do, and however much precision you set, simulated trajectories are not "the real ones"; they just show a similar pattern.

Pure Python 3.6.3 - finding the difference in degrees between 2 3d vectors

For some ungodly reason I'm trying to make a program to display wireframe graphics in pure python 3.6.3 with the turtle library. I've got to the point where I would like to skip drawing unnecessary tris for optimisation purposes. Unnecessary tris meaning tris that should be obscured by other parts of the model - ie, the normal facing away from the 3d camera.
The model data that the program is working with is just a huge 3d array with the following formatting for each tri.
[[Vert],[Vert],[Vert],[Normal]]
My current version of the code only has one model made for it (a cube) and looks like this:
from turtle import *
Cube = [[[-50,50,-50],[-50,50,50,],[50,50,50],[0,1,0]],
[[-50,50,-50],[50,50,50,],[50,50,-50],[0,1,0]],
[[-50,50,-50],[-50,50,50],[-50,-50,50],[1,0,0]],
[[-50,50,-50],[-50,-50,-50],[-50,-50,50],[1,0,0]],
[[-50,50,50],[50,50,50],[50,-50,50],[0,0,1]],
[[-50,50,50],[50,-50,50],[-50,50,50],[0,0,1]],
[[-50,-50,-50],[-50,-50,50,],[50,-50,50],[0,-1,0]],
[[-50,-50,-50],[50,-50,50,],[50,-50,-50],[0,-1,0]],
[[50,50,-50],[50,50,50],[50,-50,50],[-1,0,0]],
[[50,50,-50],[50,-50,-50],[50,-50,50],[-1,0,0]],
[[-50,50,-50],[50,50,-50],[50,-50,-50],[0,0,-1]],
[[-50,50,-50],[50,-50,-50],[-50,50,-50],[0,0,-1]]]
CamVector = [0,1,0]
def DrawModel(Model):
    for i in range(0, len(Model)):
        goto(Model[i][0][0], Model[i][0][1])
        pd()
        goto(Model[i][1][0], Model[i][1][1])
        goto(Model[i][2][0], Model[i][2][1])
        goto(Model[i][0][0], Model[i][0][1])
        pu()
Model = Cube
DrawModel(Model)
But I would like to compare each tri's normal to the CamVector so the code ends up looking like this:
def DrawModel(Model):
    for i in range(0, len(Model)):
        AngleAwayFromCamera = *Math voodoo*
        if AngleAwayFromCamera <= 90:
            *draw tri*
If anyone has any idea on how to help that could be explained to someone with a walnut-sized brain like myself, that would be great. I've looked at a lot of documentation, but most has flown right over my head - probably because I failed GCSE maths.
Without going too much into the mathematical details, there's something called a dot product in mathematics:
Basically, it's a way of combining two vectors (call them a and b) to get a single number. This number is equal to the magnitude of a, multiplied by the magnitude of b, multiplied by the cosine of the angle between them (which we can call θ).
Thanks to this equation, by shifting things around, we can eventually get to what we want, which is θ.
Say we have a: [1, 2, 3] and b: [4, 5, 6]. We can calculate their magnitudes by squaring their elements and taking the square root of the sum. Therefore, the magnitude of a is (1 ** 2 + 2 ** 2 + 3 ** 2) ** 0.5 = 14 ** 0.5, and that of b is (4 ** 2 + 5 ** 2 + 6 ** 2) ** 0.5 = 77 ** 0.5.
Multiplying them together gives us 1078 ** 0.5. Therefore, the dot product is equal to (1078 ** 0.5) * cos θ.
It turns out that the dot product can be calculated by multiplying corresponding elements of two vectors together and summing the result. So, for a and b above, the dot product is 1 * 4 + 2 * 5 + 3 * 6 = 32.
Given these two different (but equal) expressions of the dot product, we can equate them to solve for θ, as follows (arccos is the function that turns cos θ into θ):
(1078 ** 0.5) * cos θ = 32
cos θ = 32 / (1078 ** 0.5)
θ = arccos(32 / (1078 ** 0.5))
θ ≈ 12.93 (in degrees)
Now, all that is left is to implement this in code:
from numpy import arccos

def angle_between_vectors(v1, v2):
    def magnitude(v):
        return sum(e ** 2 for e in v) ** 0.5
    dot_product = sum(e1 * e2 for e1, e2 in zip(v1, v2))
    magnitudes = magnitude(v1) * magnitude(v2)
    angle = arccos(dot_product / magnitudes)
    return angle
Applying this function to a and b above and converting from radians to degrees (divide by π and multiply by 180) gives us 12.93, as expected.
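Tying this back to the question's DrawModel, a hedged sketch of the culling version might look like this (assuming the turtle setup, Cube data and CamVector from the question; math.degrees does the radians-to-degrees conversion):
from math import acos, degrees

def angle_between_vectors_deg(v1, v2):
    magnitude = lambda v: sum(e ** 2 for e in v) ** 0.5
    return degrees(acos(sum(a * b for a, b in zip(v1, v2))
                        / (magnitude(v1) * magnitude(v2))))

def DrawModel(Model):
    for tri in Model:
        # tri[3] is the normal: draw only tris angled within 90 degrees of CamVector
        if angle_between_vectors_deg(tri[3], CamVector) <= 90:
            goto(tri[0][0], tri[0][1])
            pd()
            goto(tri[1][0], tri[1][1])
            goto(tri[2][0], tri[2][1])
            goto(tri[0][0], tri[0][1])
            pu()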

Calculating inverse trigonometric functions with formulas

I have been trying to create a custom calculator for calculating trigonometric functions. Aside from Chebyshev polynomials and/or the CORDIC algorithm, I have used Taylor series, which have been accurate to a few decimal places.
This is what i have created to calculate simple trigonometric functions without any modules:
from __future__ import division

def sqrt(n):
    ans = n ** 0.5
    return ans

def factorial(n):
    k = 1
    for i in range(1, n+1):
        k = i * k
    return k

def sin(d):
    pi = 3.14159265359
    n = 180 / int(d)  # 180 degrees = pi radians
    x = pi / n        # converting degrees to radians
    ans = x - (x ** 3 / factorial(3)) + (x ** 5 / factorial(5)) - (x ** 7 / factorial(7)) + (x ** 9 / factorial(9))
    return ans

def cos(d):
    pi = 3.14159265359
    n = 180 / int(d)
    x = pi / n
    ans = 1 - (x ** 2 / factorial(2)) + (x ** 4 / factorial(4)) - (x ** 6 / factorial(6)) + (x ** 8 / factorial(8))
    return ans

def tan(d):
    ans = sin(d) / sqrt(1 - sin(d) ** 2)
    return ans
Unfortunately I could not find any sources that would help me interpret inverse trigonometric function formulas for Python. I have also tried putting sin(x) to the power of -1 (sin(x) ** -1), which didn't work as expected.
What would be the best solution to do this in Python (by best, I mean simplest, with accuracy similar to Taylor series)? Is this possible with power series, or do I need to use the CORDIC algorithm?
The question is broad in scope, but here are some simple ideas (and code!) that might serve as a starting point for computing arctan. First, the good old Taylor series. For simplicity, we use a fixed number of terms; in practice, you might want to decide the number of terms to use dynamically based on the size of x, or introduce some kind of convergence criterion. With a fixed number of terms, we can evaluate efficiently using something akin to Horner's scheme.
def arctan_taylor(x, terms=9):
    """
    Compute arctan for small x via Taylor polynomials.

    Uses a fixed number of terms. The default of 9 should give good results for
    abs(x) < 0.1. Results will become poorer as abs(x) increases, becoming
    unusable as abs(x) approaches 1.0 (the radius of convergence of the
    series).
    """
    # Uses Horner's method for evaluation.
    t = 0.0
    for n in range(2*terms-1, 0, -2):
        t = 1.0/n - x*x*t
    return x * t
The above code gives good results for small x (say smaller than 0.1 in absolute value), but the accuracy drops off as x becomes larger, and for abs(x) > 1.0, the series never converges, no matter how many terms (or how much extra precision) we throw at it. So we need a better way to compute for larger x. One solution is to use argument reduction, via the identity arctan(x) = 2 * arctan(x / (1 + sqrt(1 + x^2))). This gives the following code, which builds on arctan_taylor to give reasonable results for a wide range of x (but beware possible overflow and underflow when computing x*x).
import math

def arctan_taylor_with_reduction(x, terms=9, threshold=0.1):
    """
    Compute arctan via argument reduction and Taylor series.

    Applies reduction steps until x is below `threshold`,
    then uses Taylor series.
    """
    reductions = 0
    while abs(x) > threshold:
        x = x / (1 + math.sqrt(1 + x*x))
        reductions += 1
    return arctan_taylor(x, terms=terms) * 2**reductions
Alternatively, given an existing implementation for tan, you could simply find a solution y to the equation tan(y) = x using traditional root-finding methods. Since arctan is already naturally bounded to lie in the interval (-pi/2, pi/2), bisection search works well:
def arctan_from_tan(x, tolerance=1e-15):
    """
    Compute arctan as the inverse of tan, via bisection search. This assumes
    that you already have a high quality tan function.
    """
    low, high = -0.5 * math.pi, 0.5 * math.pi
    while high - low > tolerance:
        mid = 0.5 * (low + high)
        if math.tan(mid) < x:
            low = mid
        else:
            high = mid
    return 0.5 * (low + high)
Finally, just for fun, here's a CORDIC-like implementation, which is really more appropriate for a low-level implementation than for Python. The idea here is that you precompute, once and for all, a table of arctan values for 1, 1/2, 1/4, etc., and then use those to compute general arctan values, essentially by computing successive approximations to the true angle. The remarkable part is that, after the precomputation step, the arctan computation involves only additions, subtractions, and multiplications by powers of 2. (Of course, those multiplications aren't any more efficient than any other multiplication at the level of Python, but closer to the hardware, this could potentially make a big difference.)
cordic_table_size = 60
cordic_table = [(2**-i, math.atan(2**-i))
                for i in range(cordic_table_size)]

def arctan_cordic(y, x=1.0):
    """
    Compute arctan(y/x), assuming x positive, via CORDIC-like method.
    """
    r = 0.0
    for t, a in cordic_table:
        if y < 0:
            r, x, y = r - a, x - t*y, y + t*x
        else:
            r, x, y = r + a, x + t*y, y - t*x
    return r
Each of the above methods has its strengths and weaknesses, and all of the above code can be improved in a myriad of ways. I encourage you to experiment and explore.
To wrap it all up, here are the results of calling the above functions on a small number of not-very-carefully-chosen test values, comparing with the output of the standard library math.atan function:
test_values = [2.314, 0.0123, -0.56, 168.9]
for value in test_values:
    print("{:20.15g} {:20.15g} {:20.15g} {:20.15g}".format(
        math.atan(value),
        arctan_taylor_with_reduction(value),
        arctan_from_tan(value),
        arctan_cordic(value),
    ))
Output on my machine:
1.16288340166519 1.16288340166519 1.16288340166519 1.16288340166519
0.0122993797673 0.0122993797673 0.0122993797673002 0.0122993797672999
-0.510488321916776 -0.510488321916776 -0.510488321916776 -0.510488321916776
1.56487573286064 1.56487573286064 1.56487573286064 1.56487573286064
The simplest way to compute any inverse function is binary search.
definitions
Assume a function
x = g(y)
and we want to code its inverse
y = f(x) = f(g(y))
on the ranges
x = <x0, x1>
y = <y0, y1>
bin search on floats
You can do it with integer math by accessing the mantissa bits directly, as in:
Any Faster RMS Value Calculation in C?
But if you do not know the exponent of the result before the computation, then you need to use floats for the binary search too.
The idea behind binary search is to change the mantissa of y from y1 to y0 bit by bit, from MSB to LSB. After each bit change, call the direct function g(y), and if the result crosses x, revert the last bit change.
When using floats, you can use a variable holding the approximate value of the targeted mantissa bit instead of accessing bits directly; that eliminates the unknown-exponent problem. So at the beginning set y = y0 and the actual bit to the MSB value, b = (y1 - y0)/2. After each iteration halve it, and do as many iterations as you have mantissa bits, n. This way you obtain the result in n iterations, to within (y1 - y0)/2^n accuracy.
If your inverse function is not monotonic, break it into monotonic intervals and handle each as a separate binary search.
Whether the function is increasing or decreasing just determines the direction of the crossing condition (use of < or >).
C++ acos example
so y = acos(x) is defined on x = <-1,+1> , y = <0,M_PI> and decreasing so:
double f64_acos(double x)
{
    const int n = 52;   // mantissa bits
    double y, y0, b;
    int i;
    // handle domain error
    if (x < -1.0) return 0;
    if (x > +1.0) return 0;
    // x = <-1,+1> , y = <0,M_PI> , decreasing
    for (y = 0.0, b = 0.5*M_PI, i = 0; i < n; i++, b *= 0.5) // y is min, b is half of max, halved each iteration
    {
        y0 = y;                 // remember original y
        y += b;                 // try to set the "bit"
        if (cos(y) < x) y = y0; // if result crosses x, return to original y; decreasing uses < and increasing uses >
    }
    return y;
}
I tested it like this:
double x0, x1, y;
for (x0 = 0.0; x0 < M_PI; x0 += M_PI*0.01)  // cycle the whole angle range <0,M_PI>
{
    y = cos(x0);        // direct function (from math.h)
    x1 = f64_acos(y);   // my inverse function
    if (fabs(x1-x0) > 1e-9)  // check result and output to log if error
        Form1->mm_log->Lines->Add(AnsiString().sprintf("acos(%8.3lf) = %8.3lf != %8.3lf", y, x0, x1));
}
Without any difference found... so the implementation works correctly. Of course, binary search on a 52-bit mantissa is usually slower than polynomial approximation... on the other hand, the implementation is very simple...
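For reference, here is the same bisection transcribed to Python (my sketch; math.cos stands in for the assumed-available direct function):
import math

def acos_bisect(x, bits=52):
    # same idea as f64_acos above: halve the step b once per mantissa bit
    if x < -1.0 or x > 1.0:
        return 0.0            # domain error handled as in the C++ version
    y, b = 0.0, 0.5 * math.pi
    for _ in range(bits):
        y0 = y
        y += b
        if math.cos(y) < x:   # cos is decreasing on <0,pi>, so the crossing test uses <
            y = y0
        b *= 0.5
    return y

print(acos_bisect(0.5), math.acos(0.5))  # both ~1.0471975...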
[Notes]
If you do not want to take care of the monotonic intervals, you can try an approximation search instead.
As you are dealing with goniometric functions, you need to handle the singularities to avoid NaN or division by zero etc ...
If you're interested in more binary-search examples (mostly on integers), Power by squaring for negative exponents contains one.

Minimizing the sum of 3 variables subject to equality and integrality constraints

I am working on a programming (using Python) problem where I have to solve the following type of linear equation in 3 variables:
x, y, z are all integers.
Equation example: 2x + 5y + 8z = 14
Condition: Minimize x + y + z
I have been trying to search for an algorithm for finding a solution to this, in an optimum way. If anybody has any idea please guide me through algorithm or code-sources.
I am just curious, what can be done if this problem is extrapolated to n variables?
I don't want to use hit & trial loops to keep checking for values. Also, there may be a scenario that equation has no solution.
UPDATE
Adding lower bounds condition:
x, y, z >= 0
x, y, z are natural
Any triple (x, y, z), with z = (14 - 2x - 5y) / 8, satisfies your constraint.
Note that x + y + (14 - 2x - 5y) / 8 is unbounded from below. This function decreases when each of x and y decrease, with no finite minimum.
You have an equality-constrained integer program (IP) in just 3 dimensions. The equality constraint 2 x + 5 y + 8 z = 14 defines a plane in 3-dimensional space. Parametrizing it,
x = 7 - 2.5 u - 4 v
y = u
z = v
we obtain an unconstrained IP in 2 dimensions. Given the integrality and nonnegativity constraints, u must be even (so that x is an integer), and we have u ∈ {0, 2} and v ∈ {0, 1}. Enumerating all four (u, v) pairs (one of which, (2, 1), gives x = -2 and is infeasible), we conclude that the minimum is 4 and that it is attained at (u, v) = (2, 0) and (u, v) = (0, 1), which correspond to (x, y, z) = (2, 2, 0) and (x, y, z) = (3, 0, 1), respectively.
Using PuLP to solve the integer program:
from pulp import *

# decision variables
x = LpVariable("x", 0, None, LpInteger)
y = LpVariable("y", 0, None, LpInteger)
z = LpVariable("z", 0, None, LpInteger)

# define integer program (IP)
prob = LpProblem("problem", LpMinimize)
prob += x + y + z               # objective function
prob += 2*x + 5*y + 8*z == 14   # equality constraint

# solve IP
prob.solve()

# print results
print(LpStatus[prob.status])
print(value(x))
print(value(y))
print(value(z))
which produces x = 3, y = 0 and z = 1.
Another tool to solve this type of problems is SCIP. There is also an easy to use Python interface available on GitHub: PySCIPOpt.
In general, (mixed) integer programming problems are very hard to solve (NP-hard), and often even simple-looking instances with only a few variables and constraints can take hours to prove optimality.
From your first equation:
x = (14 - 5y - 8z) / 2
so, you now only need to minimize
(14 - 5y - 8z) / 2 + y + z
which is
(14 - 3y - 6z) / 2
But we can ignore the ' / 2' part for minimization purposes.
Presumably, there must be some other constraints on your problem, since as described the solution is that both y and z may grow without bound.
I do not know any general fast solution for n variables that avoids hit & trial loops. But for the given specific equation 2x + 5y + 8z = 14, there may be a shortcut based on observation.
Notice that the range of any possible solution is very small:
0 <= x <= 7, 0 <= y <= 2, 0 <= z <= 1
Also, other than the single-variable solution x = 7, you have to use at least 2 variables.
(x+y+z = 7 for that case)
Let's see what we get using only 2 variables:
If you choose to use (x,z) or (y,z): since z can only be 1, x or y is then trivial.
(x+y+z = 4 for (x,z); no solution for (y,z))
If you choose to use (x,y): since x's coefficient is even and y's coefficient is odd, you must choose an even value of y to reach an even R.H.S. (14). That means y must be 2, and x is then trivial.
(x+y+z = 4 for this case)
Let's see what we get using all 3 variables:
Similarly, z must be 1, so it reduces to using 2 variables (x,y) to reach 14 - 8 = 6, which is even.
By the same parity argument, y must be even and nonzero, so y = 2; but then 5y + 8z = 18 > 14 already, which means there is no solution using all 3 variables.
Therefore, simply by reducing the equation to 1 or 2 variables, we find that the minimum x+y+z is 4 (x=3, y=0, z=1 or x=2, y=2, z=0).
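Since the bounds above are tiny, a brute-force check (my addition, just to confirm the reasoning) is instant:
# enumerate all (x, y, z) with 0 <= x <= 7, 0 <= y <= 2, 0 <= z <= 1
solutions = [(x, y, z)
             for x in range(8) for y in range(3) for z in range(2)
             if 2*x + 5*y + 8*z == 14]
print(sorted(solutions, key=sum))  # [(2, 2, 0), (3, 0, 1), (7, 0, 0)] -> minimum x+y+z is 4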

Evenly distributing n points on a sphere

I need an algorithm that can give me positions around a sphere for N points (less than 20, probably) that vaguely spreads them out. There's no need for "perfection", but I just need it so none of them are bunched together.
This question provided good code, but I couldn't find a way to make this uniform, as this seemed 100% randomized.
This blog post that was recommended had two methods allowing input of the number of points on the sphere, but the Saff and Kuijlaars algorithm is exactly the kind of pseudocode I could transcribe, and the code example I found contained "node[k]", which I couldn't see explained, and that ruined that possibility. The second blog example was the Golden Section Spiral, which gave me strange, bunched up results, with no clear way to define a constant radius.
This algorithm from this question seems like it could possibly work, but I can't piece together what's on that page into pseudocode or anything.
A few other question threads I came across spoke of randomized uniform distribution, which adds a level of complexity I'm not concerned about. I apologize that this is such a silly question, but I wanted to show that I've truly looked hard and still come up short.
So, what I'm looking for is simple pseudocode to evenly distribute N points around a unit sphere, that either returns in spherical or Cartesian coordinates. Even better if it can even distribute with a bit of randomization (think planets around a star, decently spread out, but with room for leeway).
The Fibonacci sphere algorithm is great for this. It is fast and gives results that at a glance will easily fool the human eye. You can see an example done with processing which will show the result over time as points are added. Here's another great interactive example made by #gman. And here's a simple implementation in python.
import math

def fibonacci_sphere(samples=1000):
    points = []
    phi = math.pi * (3. - math.sqrt(5.))  # golden angle in radians
    for i in range(samples):
        y = 1 - (i / float(samples - 1)) * 2  # y goes from 1 to -1
        radius = math.sqrt(1 - y * y)  # radius at y
        theta = phi * i  # golden angle increment
        x = math.cos(theta) * radius
        z = math.sin(theta) * radius
        points.append((x, y, z))
    return points
1000 samples gives you this:
The golden spiral method
You said you couldn’t get the golden spiral method to work and that’s a shame because it’s really, really good. I would like to give you a complete understanding of it so that maybe you can understand how to keep this away from being “bunched up.”
So here’s a fast, non-random way to create a lattice that is approximately correct; as discussed above, no lattice will be perfect, but this may be good enough. It is compared to other methods e.g. at BendWavy.org but it just has a nice and pretty look as well as a guarantee about even spacing in the limit.
Primer: sunflower spirals on the unit disk
To understand this algorithm, I first invite you to look at the 2D sunflower spiral algorithm. This is based on the fact that the most irrational number is the golden ratio (1 + sqrt(5))/2 and if one emits points by the approach “stand at the center, turn a golden ratio of whole turns, then emit another point in that direction,” one naturally constructs a spiral which, as you get to higher and higher numbers of points, nevertheless refuses to have well-defined ‘bars’ that the points line up on.(Note 1.)
The algorithm for even spacing on a disk is,
from numpy import pi, cos, sin, sqrt, arange
import matplotlib.pyplot as pp
num_pts = 100
indices = arange(0, num_pts, dtype=float) + 0.5
r = sqrt(indices/num_pts)
theta = pi * (1 + 5**0.5) * indices
pp.scatter(r*cos(theta), r*sin(theta))
pp.show()
and it produces results that look like (n=100 and n=1000):
Spacing the points radially
The key strange thing is the formula r = sqrt(indices / num_pts); how did I come to that one? (Note 2.)
Well, I am using the square root here because I want these to have even-area spacing around the disk. That is the same as saying that in the limit of large N I want a little region R ∈ (r, r + dr), Θ ∈ (θ, θ + dθ) to contain a number of points proportional to its area, which is r dr dθ. Now if we pretend that we are talking about a random variable here, this has a straightforward interpretation as saying that the joint probability density for (R, Θ) is just c r for some constant c. Normalization on the unit disk would then force c = 1/π.
Now let me introduce a trick. It comes from probability theory where it’s known as sampling the inverse CDF: suppose you wanted to generate a random variable with a probability density f(z) and you have a random variable U ~ Uniform(0, 1), just like comes out of random() in most programming languages. How do you do this?
First, turn your density into a cumulative distribution function or CDF, which we will call F(z). A CDF, remember, increases monotonically from 0 to 1 with derivative f(z).
Then calculate the CDF’s inverse function F⁻¹(z).
You will find that Z = F⁻¹(U) is distributed according to the target density. (Note 3.)
Now the golden-ratio spiral trick spaces the points out in a nicely even pattern for θ, so let’s integrate that out; for the unit disk we are left with F(r) = r². So the inverse function is F⁻¹(u) = u^(1/2), and therefore we would generate random points on the disk in polar coordinates with r = sqrt(random()); theta = 2 * pi * random().
Now instead of randomly sampling this inverse function we’re uniformly sampling it, and the nice thing about uniform sampling is that our results about how points are spread out in the limit of large N will behave as if we had randomly sampled it. This combination is the trick. Instead of random() we use (arange(0, num_pts, dtype=float) + 0.5)/num_pts, so that, say, if we want to sample 10 points they are r = 0.05, 0.15, 0.25, ... 0.95. We uniformly sample r to get equal-area spacing, and we use the sunflower increment to avoid awful “bars” of points in the output.
Now doing the sunflower on a sphere
The changes that we need to make to dot the sphere with points merely involve switching out the polar coordinates for spherical coordinates. The radial coordinate of course doesn't enter into this because we're on a unit sphere. To keep things a little more consistent here, even though I was trained as a physicist I'll use mathematicians' coordinates where 0 ≤ φ ≤ π is latitude coming down from the pole and 0 ≤ θ ≤ 2π is longitude. So the difference from above is that we are basically replacing the variable r with φ.
Our area element, which was r dr dθ, now becomes the not-much-more-complicated sin(φ) dφ dθ. So our joint density for uniform spacing is sin(φ)/4π. Integrating out θ, we find f(φ) = sin(φ)/2, thus F(φ) = (1 − cos(φ))/2. Inverting this we can see that a uniform random variable would look like acos(1 - 2 u), but we sample uniformly instead of randomly, so we instead use φ_k = acos(1 − 2 (k + 0.5)/N). And the rest of the algorithm is just projecting this onto the x, y, and z coordinates:
from numpy import pi, cos, sin, arccos, arange
import mpl_toolkits.mplot3d
import matplotlib.pyplot as pp
num_pts = 1000
indices = arange(0, num_pts, dtype=float) + 0.5
phi = arccos(1 - 2*indices/num_pts)
theta = pi * (1 + 5**0.5) * indices
x, y, z = cos(theta) * sin(phi), sin(theta) * sin(phi), cos(phi);
pp.figure().add_subplot(111, projection='3d').scatter(x, y, z);
pp.show()
Again for n=100 and n=1000 the results look like:
Further research
I wanted to give a shout out to Martin Roberts’s blog. Note that above I created an offset of my indices by adding 0.5 to each index. This was just visually appealing to me, but it turns out that the choice of offset matters a lot and is not constant over the interval and can mean getting as much as 8% better accuracy in packing if chosen correctly. There should also be a way to get his R2 sequence to cover a sphere and it would be interesting to see if this also produced a nice even covering, perhaps as-is but perhaps needing to be, say, taken from only a half of the unit square cut diagonally or so and stretched around to get a circle.
Notes
Those “bars” are formed by rational approximations to a number, and the best rational approximations to a number come from its continued fraction expression, z + 1/(n_1 + 1/(n_2 + 1/(n_3 + ...))) where z is an integer and n_1, n_2, n_3, ... is either a finite or infinite sequence of positive integers:
from math import floor

def continued_fraction(r):
    while r != 0:
        n = floor(r)
        yield n
        r = 1/(r - n)
Since the fraction part 1/(...) is always between zero and one, a large integer in the continued fraction allows for a particularly good rational approximation: “one divided by something between 100 and 101” is better than “one divided by something between 1 and 2.” The most irrational number is therefore the one which is 1 + 1/(1 + 1/(1 + ...)) and has no particularly good rational approximations; one can solve φ = 1 + 1/φ by multiplying through by φ to get the formula for the golden ratio.
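For instance (my addition), running the generator above on the golden ratio shows the all-ones expansion, at least until floating-point roundoff eventually corrupts the tail:
from itertools import islice

golden = (1 + 5 ** 0.5) / 2
print(list(islice(continued_fraction(golden), 10)))  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]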
For folks who are not so familiar with NumPy -- all of the functions are “vectorized,” so that sqrt(array) is the same as what other languages might write map(sqrt, array). So this is a component-by-component sqrt application. The same also holds for division by a scalar or addition with scalars -- those apply to all components in parallel.
The proof is simple once you know that this is the result. If you ask what's the probability that z < Z < z + dz, this is the same as asking what's the probability that z < F⁻¹(U) < z + dz. Apply F to all three expressions, noting that it is a monotonically increasing function, hence F(z) < U < F(z + dz). Expand the right-hand side to find F(z) + f(z) dz, and since U is uniform this probability is just f(z) dz, as promised.
This is known as packing points on a sphere, and there is no (known) general, perfect solution. However, there are plenty of imperfect solutions. The three most popular seem to be:
Create a simulation. Treat each point as an electron constrained to a sphere, then run a simulation for a certain number of steps. The electrons' repulsion will naturally tend the system to a more stable state, where the points are about as far away from each other as they can get.
Hypercube rejection. This fancy-sounding method is actually really simple: you uniformly choose points (much more than n of them) inside of the cube surrounding the sphere, then reject the points outside of the sphere. Treat the remaining points as vectors, and normalize them. These are your "samples" - choose n of them using some method (randomly, greedy, etc).
Spiral approximations. You trace a spiral around a sphere, and evenly-distribute the points around the spiral. Because of the mathematics involved, these are more complicated to understand than the simulation, but much faster (and probably involving less code). The most popular seems to be by Saff, et al.
A lot more information about this problem can be found here
In this example code node[k] is just the kth node. You are generating an array of N points and node[k] is the kth (from 0 to N-1). If that is all that was confusing you, hopefully you can use it now.
(In other words, node is an array of size N, defined before the code fragment starts, which contains the list of points.)
Alternatively, building on the other answer here (and using Python):
> cat ll.py
from math import asin
nx = 4; ny = 5
for x in range(nx):
    lon = 360 * ((x+0.5) / nx)
    for y in range(ny):
        midpt = (y+0.5) / ny
        lat = 180 * asin(2*((y+0.5)/ny-0.5))
        print lon,lat
> python2.7 ll.py
45.0 -166.91313924
45.0 -74.0730322921
45.0 0.0
45.0 74.0730322921
45.0 166.91313924
135.0 -166.91313924
135.0 -74.0730322921
135.0 0.0
135.0 74.0730322921
135.0 166.91313924
225.0 -166.91313924
225.0 -74.0730322921
225.0 0.0
225.0 74.0730322921
225.0 166.91313924
315.0 -166.91313924
315.0 -74.0730322921
315.0 0.0
315.0 74.0730322921
315.0 166.91313924
If you plot that, you'll see that the vertical spacing is larger near the poles so that each point is situated in about the same total area of space (near the poles there's less space "horizontally", so it gives more "vertically").
This isn't the same as all points having about the same distance to their neighbours (which is what I think your links are talking about), but it may be sufficient for what you want and improves on simply making a uniform lat/lon grid.
What you are looking for is called a spherical covering. The spherical covering problem is very hard and solutions are unknown except for small numbers of points. One thing that is known for sure is that, given n points on a sphere, there always exist two points at distance d = sqrt(4 − csc²(π n / (6(n − 2)))) or closer.
If you want a probabilistic method for generating points uniformly distributed on a sphere, it's easy: generate points in space uniformly by Gaussian distribution (it's built into Java, not hard to find the code for other languages). So in 3-dimensional space, you need something like
Random r = new Random();
double[] p = { r.nextGaussian(), r.nextGaussian(), r.nextGaussian() };
Then project the point onto the sphere by normalizing its distance from the origin
double norm = Math.sqrt( p[0]*p[0] + p[1]*p[1] + p[2]*p[2] ); // note: ^ is XOR in Java, so use explicit products
double[] sphereRandomPoint = { p[0]/norm, p[1]/norm, p[2]/norm };
The Gaussian distribution in n dimensions is spherically symmetric so the projection onto the sphere is uniform.
Of course, there's no guarantee that the distance between any two points in a collection of uniformly generated points will be bounded below, so you can use rejection to enforce any such conditions that you might have: probably it's best to generate the whole collection and then reject the whole collection if necessary. (Or use "early rejection" to reject the whole collection you've generated so far; just don't keep some points and drop others.) You can use the formula for d given above, minus some slack, to determine the min distance between points below which you will reject a set of points. You'll have to calculate n choose 2 distances, and the probability of rejection will depend on the slack; it's hard to say how, so run a simulation to get a feel for the relevant statistics.
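The same trick in Python/NumPy (my sketch, using numpy.random.normal in place of Java's nextGaussian):
import numpy as np

def random_sphere_points(n):
    p = np.random.normal(size=(n, 3))                    # isotropic Gaussian samples
    return p / np.linalg.norm(p, axis=1, keepdims=True)  # project onto the unit sphere

print(random_sphere_points(5))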
This answer is based on the same 'theory' that is outlined well by this answer
I'm adding this answer because:
- None of the other options fit the 'uniformity' need 'spot-on' (or not obviously-clearly so). (To get the planet-like distribution behavior particularly wanted in the original ask, you just reject at random from the finite list of the k uniformly created points, with respect to the index count in the k items.)
- The closest other implementation forced you to decide the 'N' per angular axis, rather than a single value of N across both angular axes (at low counts of N it is very tricky to know which may or may not matter; e.g. you want '5' points -- have fun).
- Furthermore, it's very hard to 'grok' how to differentiate between the other options without any imagery, so here's what this option looks like (below), and the ready-to-run implementation that goes with it.
with N at 20:
and then N at 80:
here's the ready-to-run python3 code, where the emulation is that same source: " http://web.archive.org/web/20120421191837/http://www.cgafaq.info/wiki/Evenly_distributed_points_on_sphere " found by others. ( The plotting I've included, that fires when run as 'main,' is taken from: http://www.scipy.org/Cookbook/Matplotlib/mplot3D )
from math import cos, sin, pi, sqrt

def GetPointsEquiAngularlyDistancedOnSphere(numberOfPoints=45):
    """ each point you get will be of form 'x, y, z'; in cartesian coordinates
        eg. the 'l2 distance' from the origin [0., 0., 0.] for each point will be 1.0
        ------------
        converted from: http://web.archive.org/web/20120421191837/http://www.cgafaq.info/wiki/Evenly_distributed_points_on_sphere )
    """
    dlong = pi*(3.0-sqrt(5.0))  # ~2.39996323
    dz = 2.0/numberOfPoints
    long = 0.0
    z = 1.0 - dz/2.0
    ptsOnSphere = []
    for k in range(0, numberOfPoints):
        r = sqrt(1.0-z*z)
        ptNew = (cos(long)*r, sin(long)*r, z)
        ptsOnSphere.append(ptNew)
        z = z - dz
        long = long + dlong
    return ptsOnSphere

if __name__ == '__main__':
    ptsOnSphere = GetPointsEquiAngularlyDistancedOnSphere(80)

    # toggle True/False to print them
    if True:
        for pt in ptsOnSphere: print(pt)

    # toggle True/False to plot them
    if True:
        from numpy import *
        import pylab as p
        import mpl_toolkits.mplot3d.axes3d as p3

        fig = p.figure()
        ax = p3.Axes3D(fig)

        x_s = []; y_s = []; z_s = []
        for pt in ptsOnSphere:
            x_s.append(pt[0]); y_s.append(pt[1]); z_s.append(pt[2])

        ax.scatter3D(array(x_s), array(y_s), array(z_s))
        ax.set_xlabel('X'); ax.set_ylabel('Y'); ax.set_zlabel('Z')
        p.show()
    # end
tested at low counts (N in 2, 5, 7, 13, etc) and seems to work 'nice'
Try:
function sphere(N: float, k: int): Vector3 {
    var inc = Mathf.PI * (3 - Mathf.Sqrt(5));
    var off = 2 / N;
    var y = k * off - 1 + (off / 2);
    var r = Mathf.Sqrt(1 - y*y);
    var phi = k * inc;
    return Vector3((Mathf.Cos(phi)*r), y, Mathf.Sin(phi)*r);
};
The above function should run in a loop, with N the total number of iterations and k the current iteration.
It is based on a sunflower seeds pattern, except the sunflower seeds are curved around into a half dome, and again into a sphere.
Here is a picture, except I put the camera half way inside the sphere so it looks 2d instead of 3d because the camera is same distance from all points.
http://3.bp.blogspot.com/-9lbPHLccQHA/USXf88_bvVI/AAAAAAAAADY/j7qhQsSZsA8/s640/sphere.jpg
Healpix solves a closely related problem (pixelating the sphere with equal area pixels):
http://healpix.sourceforge.net/
It's probably overkill, but maybe after looking at it you'll realize some of its other nice properties are interesting to you. It's way more than just a function that outputs a point cloud.
I landed here trying to find it again; the name "healpix" doesn't exactly evoke spheres...
edit: This does not answer the question the OP meant to ask, leaving it here in case people find it useful somehow.
We use the multiplication rule of probability, combined with infinitesimals. This results in 2 lines of code to achieve your desired result:
longitude: φ = uniform([0,2pi))
azimuth: θ = -arcsin(1 - 2*uniform([0,1]))
(defined in the following coordinate system:)
Your language typically has a uniform random number primitive. For example in python you can use random.random() to return a number in the range [0,1). You can multiply this number by k to get a random number in the range [0,k). Thus in python, uniform([0,2pi)) would mean random.random()*2*math.pi.
Proof
Now we can't assign θ uniformly, otherwise we'd get clumping at the poles. We wish to assign probabilities proportional to the surface area of the spherical wedge (the θ in this diagram is actually φ):
An angular displacement dφ at the equator will result in a displacement of dφ*r. What will that displacement be at an arbitrary azimuth θ? Well, the radius from the z-axis is r*sin(θ), so the arclength of that "latitude" intersecting the wedge is dφ * r*sin(θ). Thus we calculate the cumulative distribution of the area to sample from it, by integrating the area of the slice from the south pole to the north pole.
cumArea(θ) = ∫ from -π/2 to θ of (dφ · r) · r cos(t) dt = dφ · r² · (sin(θ) + 1)
We will now attempt to get the inverse of the CDF to sample from it: http://en.wikipedia.org/wiki/Inverse_transform_sampling
First we normalize by dividing our almost-CDF by its maximum value. This has the side-effect of cancelling out the dφ and r.
azimuthalCDF: cumProb = (sin(θ)+1)/2 from -pi/2 to pi/2
inverseCDF: θ = -sin^(-1)(1 - 2*cumProb)
Thus:
let x be a random float in range [0,1]
θ = -arcsin(1 - 2*x)
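In Python that boils down to (my sketch of the two-liner above):
import math, random

phi = random.random() * 2 * math.pi          # longitude: uniform on [0, 2*pi)
theta = -math.asin(1 - 2 * random.random())  # azimuth: arcsin-distributed, in [-pi/2, pi/2]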
With small numbers of points you could run a simulation:
from random import random, randint

r = 10
n = 20
best_closest_d = 0
best_points = []
points = [(r, 0, 0) for i in range(n)]
for simulation in range(10000):
    x = random()*r
    y = random()*r
    z = r - (x**2 + y**2)**0.5
    if randint(0, 1):
        x = -x
    if randint(0, 1):
        y = -y
    if randint(0, 1):
        z = -z
    closest_dist = (2*r)**2
    closest_index = None
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            p1, p2 = points[i], points[j]
            x1, y1, z1 = p1
            x2, y2, z2 = p2
            d = (x1-x2)**2 + (y1-y2)**2 + (z1-z2)**2
            if d < closest_dist:
                closest_dist = d
                closest_index = i
    if simulation % 100 == 0:
        print(simulation, closest_dist)
    if closest_dist > best_closest_d:
        best_closest_d = closest_dist
        best_points = points[:]
    points[closest_index] = (x, y, z)

print(best_points)
>>> best_points
[(9.921692138442777, -9.930808529773849, 4.037839326088124),
(5.141893371460546, 1.7274947332807744, -4.575674650522637),
(-4.917695758662436, -1.090127967097737, -4.9629263893193745),
(3.6164803265540666, 7.004158551438312, -2.1172868271109184),
(-9.550655088997003, -9.580386054762917, 3.5277052594769422),
(-0.062238110294250415, 6.803105171979587, 3.1966101417463655),
(-9.600996012203195, 9.488067284474834, -3.498242301168819),
(-8.601522086624803, 4.519484132245867, -0.2834204048792728),
(-1.1198210500791472, -2.2916581379035694, 7.44937337008726),
(7.981831370440529, 8.539378431788634, 1.6889099589074377),
(0.513546008372332, -2.974333486904779, -6.981657873262494),
(-4.13615438946178, -6.707488383678717, 2.1197605651446807),
(2.2859494919024326, -8.14336582650039, 1.5418694699275672),
(-7.241410895247996, 9.907335206038226, 2.271647103735541),
(-9.433349952523232, -7.999106443463781, -2.3682575660694347),
(3.704772125650199, 1.0526567864085812, 6.148581714099761),
(-3.5710511242327048, 5.512552040316693, -3.4318468250897647),
(-7.483466337225052, -1.506434920354559, 2.36641535124918),
(7.73363824231576, -8.460241422163824, -1.4623228616326003),
(10, 0, 0)]
Take the two largest factors of your N. If N == 20, then the two largest factors are {5,4}, or, more generally, {a,b}. Calculate
dlat = 180/(a+1)
dlong = 360/(b+1)
Put your first point at {90-dlat/2, (dlong/2)-180}, your second at {90-dlat/2, (3*dlong/2)-180}, your 3rd at {90-dlat/2, (5*dlong/2)-180}, until you've tripped round the world once, by which time you've got to about {75,150}, when you go next to {90-3*dlat/2, (dlong/2)-180}.
Obviously I'm working this in degrees on the surface of the spherical earth, with the usual conventions for translating +/- to N/S or E/W. And obviously this gives you a completely non-random distribution, but it is uniform and the points are not bunched together.
To add some degree of randomness, you could generate 2 normally-distributed offsets (with mean 0 and std dev of {dlat/3, dlong/3} as appropriate) and add them to your uniformly distributed points.
OR... to place 20 points, compute the centers of the icosahedronal faces. For 12 points, find the vertices of the icosahedron. For 30 points, the mid point of the edges of the icosahedron. you can do the same thing with the tetrahedron, cube, dodecahedron and octahedrons: one set of points is on the vertices, another on the center of the face and another on the center of the edges. They cannot be mixed, however.
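For the 12-point case, a hedged sketch (my addition; the standard construction, not code from this answer): the icosahedron's vertices are the cyclic permutations of (0, ±1, ±φ) with φ the golden ratio, normalized onto the unit sphere.
from itertools import product
from math import sqrt

phi = (1 + sqrt(5)) / 2
verts = []
for s1, s2 in product((-1, 1), repeat=2):
    # cyclic permutations of (0, s1, s2*phi), scaled to unit length
    for v in [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]:
        verts.append(tuple(c / sqrt(1 + phi**2) for c in v))
print(len(verts))  # 12 unit vectors, pairwise well separated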
Based on fnord's answer, here is a Unity3D version with added ranges :
Code :
// golden angle in radians
static float Phi = Mathf.PI * ( 3f - Mathf.Sqrt( 5f ) );
static float Pi2 = Mathf.PI * 2;

public static Vector3 Point( float radius , int index , int total , float min = 0f, float max = 1f , float angleStartDeg = 0f, float angleRangeDeg = 360 )
{
    // y goes from min (-) to max (+)
    var y = ( ( index / ( total - 1f ) ) * ( max - min ) + min ) * 2f - 1f;

    // golden angle increment
    var theta = Phi * index;

    if( angleStartDeg != 0 || angleRangeDeg != 360 )
    {
        theta = theta % Pi2;
        theta = theta < 0 ? theta + Pi2 : theta;
        var a1 = angleStartDeg * Mathf.Deg2Rad;
        var a2 = angleRangeDeg * Mathf.Deg2Rad;
        theta = theta * a2 / Pi2 + a1;
    }

    // https://stackoverflow.com/a/26127012/2496170
    // radius at y
    var rY = Mathf.Sqrt( 1 - y * y );
    var x = Mathf.Cos( theta ) * rY;
    var z = Mathf.Sin( theta ) * rY;

    return new Vector3( x, y, z ) * radius;
}
Gist : https://gist.github.com/nukadelic/7449f0872f708065bc1afeb19df666f7/edit
Preview:
# create uniform spiral grid
from numpy import arange, arccos, array, cos, pi, sin, sqrt, zeros

def uniform_spiral_grid(numOfPoints):
    # hypothetical wrapper name; the original fragment read numOfPoints from varargin[0]
    vxyz = zeros((numOfPoints, 3), dtype=float)
    sq0 = 0.00033333333**2
    sq2 = 0.9999998**2
    sumsq = 2*sq0 + sq2
    vxyz[numOfPoints - 1] = array([sqrt(sq0/sumsq),
                                   sqrt(sq0/sumsq),
                                   -sqrt(sq2/sumsq)])
    vxyz[0] = -vxyz[numOfPoints - 1]
    phi2 = sqrt(5)*0.5 + 2.5
    rootCnt = sqrt(numOfPoints)
    prevLongitude = 0
    for index in range(1, numOfPoints - 1):
        zInc = (2*index)/numOfPoints - 1
        radius = sqrt(1 - zInc**2)
        longitude = phi2/(rootCnt*radius)
        longitude = longitude + prevLongitude
        while longitude > 2*pi:
            longitude = longitude - 2*pi
        prevLongitude = longitude
        if longitude > pi:
            longitude = longitude - 2*pi
        latitude = arccos(zInc) - pi/2
        vxyz[index] = array([cos(latitude) * cos(longitude),
                             cos(latitude) * sin(longitude),
                             sin(latitude)])
    return vxyz
#robert king It's a really nice solution but has some sloppy bugs in it. I know it helped me a lot though, so never mind the sloppiness. :)
Here is a cleaned up version....
from math import pi, asin, sin, degrees

halfpi, twopi = .5 * pi, 2 * pi

sphere_area = lambda R=1.0: 4 * pi * R ** 2

lat_dist = lambda lat, R=1.0: R*(1-sin(lat))

#A = 2*pi*R^2(1-sin(lat))
def sphere_latarea(lat, R=1.0):
    if -halfpi > lat or lat > halfpi:
        raise ValueError("lat must be between -halfpi and halfpi")
    return 2 * pi * R ** 2 * (1-sin(lat))

sphere_lonarea = lambda lon, R=1.0: \
        4 * pi * R ** 2 * lon / twopi

#A = 2*pi*R^2 |sin(lat1)-sin(lat2)| |lon1-lon2|/360
#  = (pi/180)R^2 |sin(lat1)-sin(lat2)| |lon1-lon2|
sphere_rectarea = lambda lat0, lat1, lon0, lon1, R=1.0: \
        (sphere_latarea(lat0, R)-sphere_latarea(lat1, R)) * (lon1-lon0) / twopi

def test_sphere(n_lats=10, n_lons=19, radius=540.0):
    total_area = 0.0
    for i_lons in range(n_lons):
        lon0 = twopi * float(i_lons) / n_lons
        lon1 = twopi * float(i_lons+1) / n_lons
        for i_lats in range(n_lats):
            lat0 = asin(2 * float(i_lats) / n_lats - 1)
            lat1 = asin(2 * float(i_lats+1)/n_lats - 1)
            area = sphere_rectarea(lat0, lat1, lon0, lon1, radius)
            print("{:} {:}: {:9.4f} to {:9.4f}, {:9.4f} to {:9.4f} => area {:10.4f}"
                  .format(i_lats, i_lons
                        , degrees(lat0), degrees(lat1)
                        , degrees(lon0), degrees(lon1)
                        , area))
            total_area += area
    print("total_area = {:10.4f} (difference of {:10.4f})"
          .format(total_area, abs(total_area) - sphere_area(radius)))

test_sphere()
This works and it's deadly simple. As many points as you want:
private function moveTweets():void {
    var newScale:Number = Scale(meshes.length, 50, 500, 6, 2);
    trace("new scale:" + newScale);
    var l:Number = this.meshes.length;
    var tweetMeshInstance:TweetMesh;
    var destx:Number;
    var desty:Number;
    var destz:Number;
    for (var i:Number = 0; i < this.meshes.length; i++) {
        tweetMeshInstance = meshes[i];
        var phi:Number = Math.acos(-1 + (2 * i) / l);
        var theta:Number = Math.sqrt(l * Math.PI) * phi;
        tweetMeshInstance.origX = (sphereRadius + 5) * Math.cos(theta) * Math.sin(phi);
        tweetMeshInstance.origY = (sphereRadius + 5) * Math.sin(theta) * Math.sin(phi);
        tweetMeshInstance.origZ = (sphereRadius + 5) * Math.cos(phi);
        destx = sphereRadius * Math.cos(theta) * Math.sin(phi);
        desty = sphereRadius * Math.sin(theta) * Math.sin(phi);
        destz = sphereRadius * Math.cos(phi);
        tweetMeshInstance.lookAt(new Vector3D());
        TweenMax.to(tweetMeshInstance, 1, {scaleX:newScale, scaleY:newScale, x:destx, y:desty, z:destz, onUpdate:onLookAtTween, onUpdateParams:[tweetMeshInstance]});
    }
}
private function onLookAtTween(theMesh:TweetMesh):void {
    theMesh.lookAt(new Vector3D());
}
