Scipy optimize raises ValueError despite x0 being within bounds

I'm trying to fit a sigmoid curve onto a small set of points, basically generating a probability curve from a set of observations. I'm using scipy.optimize.curve_fit, with a slightly modified logistic function (so as to be bound completely within [0,1]). Currently I have had the greatest success with the dogbox method, and an exact tr_solver.
When I attempt to run the code, for certain data points it will raise:
ValueError: `x0` violates bound constraints.
I did not run into this issue (using the same code and data) until I updated to the most recent versions of numpy/scipy (numpy 1.17.0, scipy 1.3.1), so I believe the update is responsible. I cannot downgrade, as other libraries required for other aspects of this project depend on these versions.
I'm running this on a large dataset (N ~15000), and for very specific values the curve fit fails, claiming that the initial guess is outside of the bound constraints. This is not the case, and even checking quickly via the print statement before the curve fit in the provided example confirms this.
At first I had thought that it was a numpy precision error and that a value this small was considered to be out of bounds, but altering it slightly or providing a new, arbitrary number of a similar magnitude does not cause a ValueError. Additionally, other failed values are as big as ~1e-10, so I assume it must be something else.
Here is an example that fails for me every time:
import numpy as np
import scipy as sp
from scipy.special import expit, logit
import scipy.optimize

def f(x, x0, g, c, k):
    # Logistic curve scaled by c and offset by g*(1-c), so y stays within [0, 1]
    y = c*expit(k*10.*(x - x0)) + g*(1. - c)
    return y

#                x0                     g                      c                      k
p0 = np.array([8.841357069490852e-01, 4.492363462957287e-19, 5.547073496706608e-01, 7.435378446218519e+00])
bounds = np.array([[-1., 1.], [0., 1.], [0., 1.], [0., 20.]])
x = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.8911796599834791, 1.0, 1.0, 1.0, 0.33232919909076103, 1.0])
y = np.array([0.999, 0.999, 0.999, 0.999, 0.999, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001])
s = np.array([0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9])

# Sanity check: every entry of p0 lies within its bounds
print([b[0] <= pval <= b[1] for pval, b in zip(p0, bounds)])

fit, cov = sp.optimize.curve_fit(f, x, y, p0=p0, sigma=s,
                                 bounds=([b[0] for b in bounds], [b[1] for b in bounds]),
                                 method='dogbox', tr_solver='exact')
print(fit)
print(cov)
Here is the specific error stack (everything after the above call to curve_fit):
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\minpack.py", line 763, in curve_fit
**kwargs)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_lsq\least_squares.py", line 927, in least_squares
tr_solver, tr_options, verbose)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_lsq\dogbox.py", line 310, in dogbox
J = jac(x, f)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_lsq\least_squares.py", line 874, in jac_wrapped
kwargs=kwargs, sparsity=jac_sparsity)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_numdiff.py", line 362, in approx_derivative
raise ValueError("`x0` violates bound constraints.")
ValueError: `x0` violates bound constraints.
If anyone has any insight as to what may be causing this, I would greatly appreciate the help! I did some searching and couldn't find any answers that may relate to this scenario, so I decided to open this question up. Thanks!
EDIT 9/9/19:
np.__version__ is 1.17.2 and sp.__version__ is 1.3.1, when I originally posted this I was on numpy 1.17.0 but upgrading has not fixed the issue. I'm running this on Python 3.6.6 on 64-bit Windows 10.
If I change either the second or fourth bound to +/-np.inf (or change both), then the code does in fact complete -- but I am still unsure how my x0 is invalid, and I still need the fit bounded to these values.
EDIT: 1/22/20
Upgraded np.__version__ to 1.18.1 and sp.__version__ to 1.4.1, to no avail. I have opened an issue on the scipy GitHub repository for this error. However, it seems that they are also unable to reproduce the issue and therefore cannot address it.

Horrible hack. Do not do it at home :) But if you just need to get work done, at your own risk:
In
C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\scipy\optimize\_numdiff.py
Find:
if np.any((x0 < lb) | (x0 > ub)):
    raise ValueError("`x0` violates bound constraints.")
Replace with:
if np.any(((x0 - lb) < -1e-12) | (x0 > ub)):
    raise ValueError("`x0` violates bound constraints.")
Here -1e-12 is whatever violation of the lower bound constraint ((x0 - lb) < 0) you think your case can tolerate, where x0 is the current guess and lb is the lower bound.
I do not know what numerical horrors might result from this hack. But if you just want to get going...
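A less invasive workaround, if you'd rather not edit scipy's source (a sketch only; whether it avoids the error will depend on your data): nudge the initial guess strictly inside the bounds by a small margin before calling curve_fit, so the strict check in approx_derivative sees a point clearly inside the box. This reuses the arrays from the example above; the value of eps is an assumption, not a tested constant:
import numpy as np
import scipy.optimize

# eps is an assumption: small enough not to disturb the fit,
# large enough to keep x0 clear of the bound after rounding.
eps = 1e-10
lb = np.array([b[0] for b in bounds])
ub = np.array([b[1] for b in bounds])
p0_safe = np.clip(p0, lb + eps, ub - eps)

fit, cov = scipy.optimize.curve_fit(f, x, y, p0=p0_safe, sigma=s,
                                    bounds=(lb, ub), method='dogbox',
                                    tr_solver='exact')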

Related

Scipy curve fit doesn't perform a fit and raises "Covariance of the parameters could not be estimated" error

I am trying to do a simple linear curve fit with scipy; normally this method works fine for me. This time, however, for a reason unknown to me, it doesn't work.
(I suspect that maybe the numbers are so big that it reaches the limit of what can be stored under a given data type.)
Regardless of the reason, the idea is to make a plot like one I have made before (image not included here), where the numbers on the axes are of a common order of magnitude. This time, however, I tried to fit much bigger data points, on the order of 1E10. For this I used the following code (here I present only the code for making a scatter plot and then fitting a single data set).
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

ucrt_T = 2/np.sqrt(3)
ucrt_U = 0.1/np.sqrt(3)
T = [314.1, 325.1, 335.1, 345.1, 355.1, 365.1, 374.1, 384.1, 393.1]
T_to_4th = [9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31]
ucrt_T_lst = [143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17]
UBlack = [1.9, 3.1, 4.4, 5.6, 7.0, 8.7, 10.2, 11.8, 13.4]

def lin_function(x, a, b):
    return a*x + b

def line_fit_2():
    # Add the remaining points to the plot
    plt.scatter(UBlack, T_to_4th, color='blue')
    plt.errorbar(UBlack, T_to_4th, yerr=ucrt_T, fmt='o')
    # BLACK series
    VltBlack = np.array(UBlack)
    Tt4 = np.array(T_to_4th)
    popt, pcov = curve_fit(lin_function, VltBlack, Tt4, absolute_sigma=False)
    perr = np.sqrt(np.diag(pcov))
    y = lin_function(VltBlack, *popt)
    # Plot styling and appearance
    #plt.plot(Pressure1, y, '--', color = 'g', label="fit with: $a={:.3f}\pm{:.3f}$, $b={:.3f}\pm{:.3f}$" .format(popt[0], perr[0], popt[1], perr[1]))
    plt.plot(VltBlack, y, '--', color='green')
    plt.ylabel(r'$T^4$ in $[K^4]$')
    plt.xlabel(r'Thermometer voltage U in [mV]')
    plt.legend(['Fit', 'Data points'])
    plt.grid()
    plt.show()

line_fit_2()
If you run it you will find that the scatter plot is created, but the fit isn't executed properly; only a horizontal line is added. Additionally, a warning is raised: OptimizeWarning: Covariance of the parameters could not be estimated.
I would be very happy to know what I am doing wrong or how to resolve this problem. All help is appreciated!
You've pretty much already answered your question, so I'll just confirm your suspicion: the OptimizeWarning is raised because the underlying optimization algorithm doesn't work properly (it diverges) for such large parameter values.
The solution is very simple: just scale your input data before using the fitting tool, and keep the scaling in mind when you add labels to your x/y axes:
T_to_4th = np.array([9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31])/10e6
ucrt_T_lst = np.array([143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17])/10e6
What I did is just divide the lists with big numbers by 10e6 (note that in Python 10e6 is 1e7, not 1e6). The values are then simply in a rescaled unit; for example, a value in kPa divided by 1e6 would be in GPa.
To divide the entire list by the same value, first convert it to a numpy array.
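Putting it together, a minimal sketch of the rescaled fit (the scale factor of 1e10 is my choice; any factor that brings the values near 1 should behave similarly):
import numpy as np
from scipy.optimize import curve_fit

def lin_function(x, a, b):
    return a*x + b

UBlack = np.array([1.9, 3.1, 4.4, 5.6, 7.0, 8.7, 10.2, 11.8, 13.4])
T_to_4th = np.array([9733560790.61, 11170378213.80, 12609495509.84,
                     14183383217.88, 15900203737.92, 17768359469.96,
                     19586229219.65, 21765930026.49, 23878782252.31])

scale = 1e10                 # chosen so the y-values are O(1)
popt, pcov = curve_fit(lin_function, UBlack, T_to_4th / scale)
a, b = popt * scale          # undo the scaling on the fitted parameters
print(a, b)                  # slope and intercept in the original units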
Hope this helps :)

How to Use NumPy 1.4 Polynomial Class to Fit Values

How do you use the new Polynomials sub-package in numpy to give it new x values and get an output of y values?
https://numpy.org/doc/stable/reference/routines.polynomials.package.html
In prior versions of numpy it went something like this:
poly = np.poly1d(np.polyfit(x, y, 3))
new_x = np.linspace(0, 100)
new_y = poly(new_x)
With the new version I am struggling to give it x values and get the y values back:
from numpy.polynomial import Polynomial
poly = Polynomial(Polynomial.fit(x, y, 3))
When I give it an array of x it just returns the coefficients.
You can directly call the resulting series to evaluate it:
from numpy.polynomial import Polynomial
poly = Polynomial.fit(x, y, 3)
new_y = poly(new_x)
Check this page of the documentation; it has several examples.
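For example, with some made-up sample data (the cubic and its values below are mine, purely for illustration):
import numpy as np
from numpy.polynomial import Polynomial

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 11.0, 31.0, 69.0])   # y = x**3 + x + 1

poly = Polynomial.fit(x, y, 3)   # fit a cubic
new_x = np.linspace(0, 4)
new_y = poly(new_x)              # calling the series evaluates it; new_x is
                                 # mapped through the fit's domain automatically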
Unfortunately, the answer by @Joan Charmant and the supportive comment by @rh109019 do not work.
The intuitive way suggested by @Joan Charmant is, basically, what the question's about: it doesn't work.
Evidently, there is a new method introduced in numpy.polynomial.polynomial devoted specifically to evaluating polynomials. See here.
Here's my code where I'm comparing the two approaches.
import numpy as np

Pgauge = np.asarray([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
NIST = np.asarray([1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1])

calibrationCurve = np.polynomial.polynomial.Polynomial.fit(Pgauge, NIST, deg=1)
print("The polynomial: {}".format(calibrationCurve))

x = np.asarray([0, 1])       # values of x to evaluate the polynomial at
c = calibrationCurve.coef    # coefficients of the polynomial
print("The intuitive (wrong) way: {}".format(calibrationCurve(x)))
print("The correct way: {}".format(np.polynomial.polynomial.polyval(x, c)))
The first print command prints out the polynomial: 4.6 + 3.5x.
If we want to evaluate it at the points 0 and 1 (x = np.asarray([0, 1])), we expect to get 4.6 and 8.1 respectively.
The second print command (the one that reads "The intuitive (wrong) way") uses the method suggested by @Joan Charmant. It gives [0.1, 1.1] as the result, which I took to be wrong. Seemingly, it looks ok: it gives two numbers as expected, but I did not know how these numbers were calculated. And if I had a bigger series of data, I wouldn't go through it with a calculator and assume I'd got a correct result.
The last print command makes use of the polyval method suggested in the user manual that I cited above. It works as I expected, giving [4.6, 8.1] as the result.
It so happens that my answer is wrong as well (see all the comments below by @user2357112 supports Monica).
But still, I'll leave it here for the folks who, like me, fell victim to the confusing new numpy.polynomial library.
FIRST: why is my code wrong?
Everything's ok with it. But the line print("The polynomial: {}".format(calibrationCurve)) doesn't give me what I thought it must give me. It takes the correct polynomial, changes its coefficients (they are stored in the fit's scaled window) and prints out a polynomial with the changed coefficients. Still, it does store the correct polynomial in its memory, and the thing suggested by @Joan Charmant may give you the correct answer if you ask it properly.
SECOND: how to use the new numpy.polynomial library in order to get a correct result?
Due to that peculiarity, you have to introduce one more line of code: do the Polynomial.fit() and immediately afterwards call the .convert() method. Then work with the converted polynomial only.
Here's my code that works correctly now.
import numpy as np

Pgauge = np.asarray([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
NIST = np.asarray([1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1])

calibrationCurveMessedUp = np.polynomial.polynomial.Polynomial.fit(Pgauge, NIST, deg=1)
calibrationCurve = calibrationCurveMessedUp.convert()   # map back to the data domain

print("The polynomial: {}".format(calibrationCurve))
print("The rounded polynomial coefficients: {}".format(calibrationCurve.coef))

x = np.asarray([0, 1])   # values of x to evaluate the polynomial at
print(calibrationCurve(x))
THIRD: a little note.
Apparently, there is a way to get the correct polynomial without the additional line of code. Probably you have to give the correct window and domain parameters to the Polynomial.fit() function. Or maybe there is another way.
If anybody knows such a way, you're welcome to edit my current answer and add your code.
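A sketch of such a way (added per the invitation above; treat it as an assumption, verified only on this toy data): passing identical domain and window to Polynomial.fit() should make the internal rescaling the identity map, so the stored coefficients are already in the data's own units:
import numpy as np
from numpy.polynomial import Polynomial

Pgauge = np.asarray([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
NIST = np.asarray([1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1])

# With domain == window, fit() skips the usual mapping onto [-1, 1],
# so coef should be directly interpretable (here: intercept 0.1, slope 1.0).
d = [Pgauge.min(), Pgauge.max()]
calibrationCurve = Polynomial.fit(Pgauge, NIST, deg=1, domain=d, window=d)
print(calibrationCurve.coef)   # expected: approximately [0.1, 1.0]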

"RuntimeWarning: invalid value encountered in power" Using Scipy's ODR

I'm attempting to fit a function using Scipy's Orthogonal distance regression (odr) package and I keep getting the following error:
"RuntimeWarning: invalid value encountered in power"
This happened when I used scipy's curve_fit function as well, but there I could always safely ignore the warning. Now it seems it is causing a numerical error that halts the fitting. I have based my code on the example I found here:
python scipy.odrpack.odr example (with sample input / output)?
Here is my code:
import numpy as np
import scipy.odr.odrpack as odrpack

def divergence(x, xDiv):
    return (1 - (x/xDiv))**(-2.4)

xValues = np.linspace(.25, .37, 12)
yValues = np.array([ 6.94970607,  9.12475506, 10.65969954, 12.30241672,
                    14.44154148, 16.00261267, 19.98693664, 25.93076421,
                    30.89483997, 35.27106466, 50.81645983, 68.06009144])
xErrors = .0005*np.ones(len(xValues))
yErrors = np.array([0.31905094, 0.37956865, 0.24837562, 0.68320078, 1.25915789,
                    1.40241088, 0.33305157, 1.37165251, 0.32658393, 0.52253429,
                    1.04506858, 1.30633573])

wcModel = odrpack.Model(divergence)
mydata = odrpack.RealData(xValues, yValues, sx=xErrors, sy=yErrors)
myodr = odrpack.ODR(mydata, wcModel, beta0=[.8])
myoutput = myodr.run()
myoutput.pprint()
From looking at previous questions about this error I found here:
NumPy, RuntimeWarning: invalid value encountered in power
I suspected that the problem is that I'm raising a negative value to a fractional power. But what I'm raising to the power -2.4, namely (1 - x/xDiv), isn't negative (at least around the initial guess of xDiv=.8). And when I try to make my y-values complex I get a new error:
"ValueError: y could not be made into a suitable array"
from the line with the command
myoutput = myodr.run().
The only examples I can find that use this odr package are fitting to polynomials so I suspect that might be the problem?
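One observation that may explain this (my own reading of the code, so treat it as a hypothesis): scipy.odr calls the model function as fcn(beta, x), parameters first. As written, divergence(x, xDiv) therefore receives the parameter vector in x and the x-data in xDiv, so the quantity actually exponentiated is 1 - 0.8/x, which is negative everywhere on [0.25, 0.37]. A negative base raised to a fractional power is nan in NumPy:
import numpy as np

beta0 = 0.8
x = np.linspace(.25, .37, 12)

# With the arguments swapped, the base of the power is negative everywhere:
base = 1 - beta0/x
print(base.max())            # about -1.16, i.e. < 0 on the whole interval

# ...and NumPy yields nan (plus the RuntimeWarning) for such a power:
print(np.power(-0.5, -2.4))  # nan, "invalid value encountered in power"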

Resistivity non linear fit in Python

I'm trying to fit the Einstein approximation of resistivity in a solid to a set of experimental data.
I have resistivity vs temperature (from 200 K down to 4 K).
import xlrd as xd
import matplotlib.pyplot as plt
import numpy as np
import pylab as pl
import scipy as sp
from scipy.optimize import curve_fit

# retrieve data from file
data = pl.loadtxt('salita.txt')
Temp = data[:, 1]
Res = data[:, 2]

# define fitting function
def einstein_func(T, ro0, AE, TE):
    nl = np.sinh(TE/(2*T))
    return ro0 + AE*nl*T

p0 = sp.array([1, 1, 1])
coeffs, cov = curve_fit(einstein_func, Temp, Res, p0)
But I get these warnings
crio.py:14: RuntimeWarning: divide by zero encountered in divide
nl = np.sinh(TE/(2*T))
crio.py:14: RuntimeWarning: overflow encountered in sinh
nl = np.sinh(TE/(2*T))
crio.py:15: RuntimeWarning: divide by zero encountered in divide
return ro0 + AE*np.sinh(TE/(2*T))*T
crio.py:15: RuntimeWarning: overflow encountered in sinh
return ro0 + AE*np.sinh(TE/(2*T))*T
crio.py:15: RuntimeWarning: invalid value encountered in multiply
return ro0 + AE*np.sinh(TE/(2*T))*T
Traceback (most recent call last):
File "crio.py", line 19, in <module>
coeffs, cov = curve_fit(einstein_func, Temp, Res, p0)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/optimize/minpack.py", line 511, in curve_fit
raise RuntimeError(msg)
RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 800.
I don't understand why it keeps saying that there is a divide by zero in sinh, since I have strictly positive values. Varying my starting guess has no effect on it.
EDIT: My dataset is organized like this:
4.39531E+0 1.16083E-7
4.39555E+0 -5.92258E-8
4.39554E+0 -3.79045E-8
4.39525E+0 -2.13213E-8
4.39619E+0 -4.02736E-8
4.43130E+0 -1.42142E-8
4.45900E+0 -2.60594E-8
4.46129E+0 -9.00232E-8
4.46181E+0 1.42142E-7
4.46195E+0 -2.13213E-8
4.46225E+0 4.26426E-8
4.46864E+0 -2.60594E-8
4.47628E+0 1.37404E-7
4.47747E+0 9.47612E-9
4.48008E+0 2.84284E-8
4.48795E+0 1.35035E-7
4.49804E+0 1.39773E-7
4.51151E+0 -1.75308E-7
4.54916E+0 -1.63463E-7
4.59176E+0 -2.36902E-9
where the first column is temperature and the second one is resistivity (the negative values are due to noise in the trial current, since the sample is a PbIn alloy which becomes superconductive at temperatures below 6.7-6.9 K; here we are at 4.5 K).
The arguments I'm providing to sinh are NumPy arrays, and with a linear function ro0 + AE*T my code works. I've tried scipy.optimize.minimize, but the result is the same.
Now I see that I have almost nine hundred values in my file; may that be the problem?
I have edited my dataset removing some lines and now the only warning showing is
RuntimeWarning: overflow encountered in sinh
How can I work around it?
Here are a couple of observations that could help:
- You could try the least-squares fit directly with leastsq, providing the Jacobian, which might help tame it.
- I'm guessing you don't want the superconducting temperatures in your data set at all if you're fitting to an Einstein model (do you have a source for this eqn, btw?).
- Make sure your initial guesses are as good as they possibly can be (ro0=AE=TE=1 probably won't cut it); see the sketch at the end of this answer.
- Plot your data and make sure there aren't any weird artefacts.
- You seem to be indexing your data array the wrong way in your code example: if the data is structured as you say, you want:
Temp = data[:, 0]
Res = data[:, 1]
(Python indexes start at 0).
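A sketch pulling these suggestions together (the column indices follow the dataset as described; the 7 K cutoff, the sinh clipping, and the initial guesses are my assumptions, not tested against the actual file):
import numpy as np
import pylab as pl
from scipy.optimize import curve_fit

data = pl.loadtxt('salita.txt')
Temp = data[:, 0]   # first column: temperature [K]
Res = data[:, 1]    # second column: resistivity

# Drop the superconducting region (assumed cutoff: 7 K, per the question)
mask = Temp > 7.0
Temp, Res = Temp[mask], Res[mask]

def einstein_func(T, ro0, AE, TE):
    # Clip the argument so sinh cannot overflow while the optimizer explores
    arg = np.clip(TE/(2*T), None, 700.0)
    return ro0 + AE*np.sinh(arg)*T

# Rough initial guesses (assumptions): baseline from the data, TE of order 100 K
p0 = [Res.min(), (Res.max() - Res.min())/Temp.max(), 100.0]
coeffs, cov = curve_fit(einstein_func, Temp, Res, p0=p0)
print(coeffs)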

Python Numerical Integration for Volume of Region

For a program, I need an algorithm to very quickly compute the volume of a solid. This shape is specified by a function that, given a point P(x,y,z), returns 1 if P is a point of the solid and 0 if P is not a point of the solid.
I have tried using numpy using the following test:
import numpy
from scipy.integrate import tplquad

def integrand(x, y, z):
    if x**2. + y**2. + z**2. <= 1.:
        return 1.
    else:
        return 0.

g = lambda x: -2.
f = lambda x: 2.
q = lambda x, y: -2.
r = lambda x, y: 2.
I = tplquad(integrand, -2., 2., g, f, q, r)
print I
but it fails giving me the following errors:
Warning (from warnings module):
File "C:\Python27\lib\site-packages\scipy\integrate\quadpack.py", line 321
warnings.warn(msg, IntegrationWarning)
IntegrationWarning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will
probably gain from splitting up the interval and calling the integrator
on the subranges. Perhaps a special-purpose integrator should be used.
Warning (from warnings module):
File "C:\Python27\lib\site-packages\scipy\integrate\quadpack.py", line 321
warnings.warn(msg, IntegrationWarning)
IntegrationWarning: The algorithm does not converge. Roundoff error is detected
in the extrapolation table. It is assumed that the requested tolerance
cannot be achieved, and that the returned result (if full_output = 1) is
the best which can be obtained.
Warning (from warnings module):
File "C:\Python27\lib\site-packages\scipy\integrate\quadpack.py", line 321
warnings.warn(msg, IntegrationWarning)
IntegrationWarning: The occurrence of roundoff error is detected, which prevents
the requested tolerance from being achieved. The error may be
underestimated.
Warning (from warnings module):
File "C:\Python27\lib\site-packages\scipy\integrate\quadpack.py", line 321
warnings.warn(msg, IntegrationWarning)
IntegrationWarning: The integral is probably divergent, or slowly convergent.
So, naturally, I looked for "special-purpose integrators", but could not find any that would do what I needed.
Then, I tried writing my own integration using the Monte Carlo method and tested it with the same shape:
import random

# Monte Carlo method
def get_volume(f, (x0, x1), (y0, y1), (z0, z1), prec=0.001, init_sample=5000):
    xr = (x0, x1)
    yr = (y0, y1)
    zr = (z0, z1)
    vdomain = (x1-x0)*(y1-y0)*(z1-z0)
    def rand((p0, p1)):
        return p0 + random.random()*(p1-p0)
    vol = 0.
    points = 0.
    s = 0.   # sum part of variance of f
    err = 0.
    percent = 0
    while err > prec or points < init_sample:
        p = (rand(xr), rand(yr), rand(zr))
        rpoint = f(p)
        vol += rpoint
        points += 1
        s += (rpoint - vol/points)**2
        if points > 1:
            err = vdomain*(((1./(points-1.))*s)**0.5)/(points**0.5)
        if err > 0:
            if int(100.*prec/err) >= percent + 1:
                percent = int(100.*prec/err)
                print percent, '% complete\n error:', err
    print int(points), 'points used.'
    return vdomain*vol/points

f = lambda (x, y, z): ((x**2)+(y**2) <= 4.) and ((z**2) <= 9.) and ((x**2)+(y**2) >= 0.25)
print get_volume(f, (-2., 2.), (-2., 2.), (-2., 2.))
but this works too slowly. For this program I will be using this numerical integration about 100 times or so, and I will also be doing it on larger shapes, which will take minutes if not an hour or two at the rate it goes now. Not to mention that I want better precision than 2 decimal places.
I have tried implementing a MISER Monte Carlo method, but was having some difficulties and I'm still unsure how much faster it would be.
So, I am asking if there are any libraries that can do what I am asking, or if there are any better algorithms which work several times faster (for the same accuracy). Any suggestions are welcome, as I've been working on this for quite a while now.
EDIT:
If I cannot get this working in Python, I am open to switching to any other language that is both compilable and has relatively easy GUI functionality. Any suggestions are welcome.
As the others have already noted, finding the volume of a domain that's given by a Boolean function is hard. You could use pygalmesh (a small project of mine), which sits on top of CGAL and gives you back a tetrahedral mesh. This:
import numpy
import pygalmesh
import meshplex

class Custom(pygalmesh.DomainBase):
    def __init__(self):
        super(Custom, self).__init__()

    def eval(self, x):
        # Negative inside the unit ball, positive outside
        return (x[0]**2 + x[1]**2 + x[2]**2) - 1.0

    def get_bounding_sphere_squared_radius(self):
        return 2.0

mesh = pygalmesh.generate_mesh(Custom(), cell_size=1.0e-1)
gives you a tetrahedral mesh of the ball (image of the rendered mesh omitted here).
From there on out, you can use a variety of packages for extracting the volume. One possibility: meshplex (another one out of my zoo):
import meshplex
mp = meshplex.MeshTetra(mesh.points, mesh.cells["tetra"])
print(numpy.sum(mp.cell_volumes))
gives
4.161777
which is close enough to the true value 4/3 pi = 4.18879020478.... If you want more precision, decrease the cell_size in the mesh generation above.
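For instance, a quick convergence check might look like this (reusing the Custom domain class from above):
import numpy
import pygalmesh
import meshplex

# Watch the volume approach 4/3*pi as the mesh is refined
for cell_size in [4.0e-1, 2.0e-1, 1.0e-1, 5.0e-2]:
    mesh = pygalmesh.generate_mesh(Custom(), cell_size=cell_size)
    mp = meshplex.MeshTetra(mesh.points, mesh.cells["tetra"])
    print(cell_size, numpy.sum(mp.cell_volumes))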
Your function is not continuous, so I think it's difficult to do the integration with adaptive quadrature.
How about:
import numpy as np

def sphere(x, y, z):
    return x**2 + y**2 + z**2 <= 1

x, y, z = np.random.uniform(-2, 2, (3, 2000000))
sphere(x, y, z).mean() * (4**3), 4/3.0*np.pi
output:
(4.1930560000000003, 4.1887902047863905)
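Since each sample is a Bernoulli trial, you can also attach a standard error to that estimate (a small addition of mine, not from the original answer):
import numpy as np

n = 2000000
x, y, z = np.random.uniform(-2, 2, (3, n))
hits = (x**2 + y**2 + z**2 <= 1)
p = hits.mean()
volume = p * 4**3                      # hit fraction of the 4x4x4 box
stderr = 4**3 * np.sqrt(p*(1 - p)/n)   # binomial standard error
print(volume, "+/-", stderr)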
Or VTK:
from tvtk.api import tvtk

n = 151
r = 2.0
x0, x1 = -r, r
y0, y1 = -r, r
z0, z1 = -r, r
X, Y, Z = np.mgrid[x0:x1:n*1j, y0:y1:n*1j, z0:z1:n*1j]
s = sphere(X, Y, Z)

# Sample the indicator function on a regular grid...
img = tvtk.ImageData(spacing=((x1-x0)/(n-1), (y1-y0)/(n-1), (z1-z0)/(n-1)),
                     origin=(x0, y0, z0), dimensions=(n, n, n))
img.point_data.scalars = s.astype(float).ravel()

# ...smooth it, extract the 0.5 isosurface, and measure the enclosed volume
blur = tvtk.ImageGaussianSmooth(input=img)
blur.set_standard_deviation(1)
contours = tvtk.ContourFilter(input=blur.output)
contours.set_value(0, 0.5)
mp = tvtk.MassProperties(input=contours.output)
mp.volume, mp.surface_area
output:
4.186006622559839, 12.621690438955586
It seems to be very difficult without giving a little hint on the boundary:
import numpy as np
from scipy.integrate import tplquad

def integrand(z, y, x):
    return 1. if x**2 + y**2 + z**2 <= 1. else 0.

g = lambda x: -2
h = lambda x: 2
# Hint the inner integration limits at the sphere's boundary:
q = lambda x, y: -np.sqrt(max(0, 1 - x**2 - y**2))
r = lambda x, y: np.sqrt(max(0, 1 - x**2 - y**2))
I = tplquad(integrand, -2., 2., g, h, q, r)
print I
