Simple iteration in Python

I need some help with this code. What am I doing wrong?
My goal is to find a root of an equation by an iterative method to a precision of ε = 10^-5 (i.e. ε = 0.00001).
Equation: 2.056x^43 + 3x^31 + 4x^12 + x^8 - 3.478 = 0.
Code:
# -*- coding: utf-8 -*-
import math

# Definition of the function
def phi(x):
    return 2.056*(x**43)+3*(x**31)+4*(x**12)+(x**8)-3.478

# Recursive search function
def findRoot(f, x, q, epsilon):
    fx = f(x)
    # Check the stopping condition
    if (1 / (1-q) * abs(fx-x) < epsilon):
        print 'Root value', fx
        print '1 / (1-q) * abs (fx-x)=', 1 / (1-q) * abs(fx-x)
    else:
        print 'Current approximation', fx
        print '1 / (1-q) * abs (fx-x)=', 1 / (1-q) * abs(fx-x)
        findRoot(f, fx, q, epsilon)

findRoot(phi, 0.5, 0.5, 0.00001)
Execution
Current approximation -3.4731171861
1 / (1-q) * abs (fx-x)= 7.94623437221
Current approximation -3.66403074312e+23
1 / (1-q) * abs (fx-x)= 7.32806148624e+23
Traceback (most recent call last):
File "Zavd1f.py", line 17, in <module>
findRoot(phi, 0.5, 0.5, 0.00001)
File "Zavd1f.py", line 16, in findRoot
findRoot (f, fx, q, epsilon)
File "Zavd1f.py", line 16, in findRoot
findRoot (f, fx, q, epsilon)
File "Zavd1f.py", line 8, in findRoot
fx=f(x)
File "Zavd1f.py", line 5, in phi
return 2.056*(x**43)+3*(x**31)+4*(x**12)+(x**8)-3.478
OverflowError: (34, 'Numerical result out of range')

This iterative method simply applies the functional value repeatedly as the argument. This works only when your initial guess is within the radius of convergence. Yours is not. You need to implement an algorithm that gets closer with each iteration.
The current algorithm has an initial guess at 0.5; this raised to high powers (8 is high enough) is close to 0, so we get a result very close to the constant term.
f(0.5) => -3.47...
f(-3.47) => -3.66...e+23 (really big)
f(really_big) => out of bounds
So ... it's either your starting value, or your algorithm, or your implementation of that algorithm.
I played with your code a little; I think you might be trying to implement a bisection algorithm (from q=0.5) or Newton's method. In either case, you've neglected to code the next guess.
You simply use f(x) as the next guess for the root, an x value. This is incorrect; you need to remember that this is a y value; you use it to compute a better guess for your next x value. f(x) is not, itself, that next guess.
Since you didn't post your algorithm, I'm not sure how twice the error (the expression you hard-coded three times) is supposed to relate to the iterative process.
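For comparison, here is a minimal bisection sketch for the same equation (my own example, not necessarily the method you were asked to implement). It only relies on phi changing sign on the starting interval, which it does on [0, 1] because phi(0) < 0 < phi(1):

def phi(x):
    return 2.056*(x**43) + 3*(x**31) + 4*(x**12) + (x**8) - 3.478

def bisect(f, lo, hi, epsilon=0.00001):
    # Requires f(lo) and f(hi) to have opposite signs; the bracket is halved
    # every step, so the error shrinks regardless of how steep f is.
    flo = f(lo)
    while hi - lo > epsilon:
        mid = (lo + hi) / 2.0
        if f(mid) * flo <= 0:
            hi = mid                   # sign change lies in [lo, mid]
        else:
            lo, flo = mid, f(mid)      # sign change lies in [mid, hi]
    return (lo + hi) / 2.0

print(bisect(phi, 0.0, 1.0))

Unlike the fixed-point form above, each new guess here is guaranteed to stay inside a shrinking bracket around the root.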

Related

Force variable to be non-negative in FiPy

I am currently using a sweep loop to solve a differential equation (eq0) with respect to my cell variable phi using FiPy in python. Because my equation is non-linear, I am using a sweep loop as shown in an extract of my code below.
while res0 > resphi_tol:
    res0 = eq0.sweep(var=phi, dt=dt)
But I keep getting the following error:
C:\Python27\lib\site-packages\fipy\variables\variable.py:1100: RuntimeWarning: invalid value encountered in power
  return self._BinaryOperatorVariable(lambda a,b: pow(a,b), other, value1mattersForUnit=True)
C:\Python27\lib\site-packages\fipy\variables\variable.py:1186: RuntimeWarning: invalid value encountered in less_equal
  return self._BinaryOperatorVariable(lambda a,b: a<=b, other)
Traceback (most recent call last):
  File "SBM_sphere3.py", line 59, in <module>
    res0 = eq0.sweep(var=phi, dt=dt)
  File "C:\Python27\lib\site-packages\fipy\terms\term.py", line 207, in sweep
    solver._solve()
  File "C:\Python27\lib\site-packages\fipy\solvers\pysparse\pysparseSolver.py", line 68, in _solve
    self.solve(self.matrix, array, self.RHSvector)
  File "C:\Python27\lib\site-packages\fipy\solvers\pysparse\linearLUSolver.py", line 53, in _solve_
    LU = superlu.factorize(L.matrix.to_csr())
  File "C:\Python27\lib\site-packages\pysparse\misc\__init__.py", line 29, in newFunc
    return func(*args, **kwargs)
  File "C:\Python27\lib\site-packages\pysparse\__init__.py", line 47, in factorize
    return self.factorizeFnc(*args, **kwargs)
RuntimeError: Factor is exactly singular
I am pretty sure this error is due to the term phi^(2/3) present in eq0. If I replace this term with abs(phi)^(2/3), the error goes away.
I assume the sweep loop returns a negative value for a few cells in phi at some point, resulting in an error, since we can't raise a negative value to a non-integer power.
So my question is: is there a way to force sweep to avoid negative solutions?
I have tried to include a line that sets all negative values to 0 before sweeping:
while res0 > resphi_tol:
    phi.setValue(0., where=phi<0.)
    res0 = eq0.sweep(var=phi, dt=dt)
The error is still there (because sweep tries to calculate the new matrix of coefficients just after solving the linearized system?).
Edit: I'm using Python 2.7.14 with FiPy 3.2. I'm sharing below the parts of my code which I think are relevant to the question. The entire code is quite long.
Some context: I'm solving balance equations for suspension flow. eq0 corresponds to the mass balance equation for the particle phase, and phi is the volume fraction of particles.
from pylab import *
from fipy import *
from fipy.tools import numerix
from scipy import misc
import osmotic_pressure_functions as opf

kic = 96.91
lic = 0.049
dt = 1.e-2
steps = 10
tol = 1.e-6

Nx = 8
Ny = 4
Lx = Nx/Ny
dL = 1./Ny

mesh = PeriodicGrid2DTopBottom(nx=Nx, ny=Ny, dx=dL, dy=dL)
x, y = mesh.cellCenters

phi = CellVariable(mesh=mesh, hasOld=True, value=0., name='Volume fraction')
phi.constrain(0.01, mesh.facesLeft)
phi.constrain(0., mesh.facesRight)

rad = 0.1
var1 = DistanceVariable(name='distance to center', mesh=mesh, value=numerix.sqrt((x-Nx*dL/2.)**2+(y-Ny*dL/2.)**2))

pi_ci = CellVariable(mesh=mesh, value=0., name='Colloid-interface energy map')
pi_ci.setValue(kic*exp(-1.*(var1-rad)/(1.*lic)), where=(var1 > rad))
pi_ci.setValue(kic, where=(var1 <= rad))

def pi_cc_entr(x):
    return opf.vantHoff(x)

def pi_cc_vdw(x):
    return opf.van_der_waals(x, 0.74, 0.1)

def pi_cc(x):
    return pi_cc_entr(x) + pi_cc_vdw(x)

diffusioncoeff = misc.derivative(pi_cc, phi, dx=1.e-6)

eq0 = TransientTerm() + ConvectionTerm(-pi_ci.faceGrad) == DiffusionTerm(coeff=diffusioncoeff)

step = 0
t = 0.

for step in range(steps):
    print 'Step ', step
    phi.updateOld()
    res0 = 1e+10
    while res0 > tol:
        phi.setValue(0., where=phi<0)
        res0 = eq0.sweep(var=phi, dt=dt)  # ERROR HAPPENS HERE
The functions vantHoff and van_der_waals are defined in a separate file:
def vantHoff(phi):
    return phi

def van_der_waals(phi, phi_cp, nd_v):
    return (nd_v*phi**3) / ((phi_cp-(phi_cp)**(1./3.)*(phi)**(2./3.))**2)
The error arises because the coefficient of the DiffusionTerm is all nan. This, in turn, is because the diffusion coefficient is defined as
(((((((Volume fraction + -1e-06) + (((pow((Volume fraction + -1e-06), 3)) * 0.1) / (pow((0.74 - ((pow((Volume fraction + -1e-06), 0.6666666666666666)) * 0.9045041696510275)), 2)))) * -0.5) + 0.0) + (((Volume fraction + 0.0) + (((pow((Volume fraction + 0.0), 3)) * 0.1) / (pow((0.74 - ((pow((Volume fraction + 0.0), 0.6666666666666666)) * 0.9045041696510275)), 2)))) * 0.0)) + (((Volume fraction + 1e-06) + (((pow((Volume fraction + 1e-06), 3)) * 0.1) / (pow((0.74 - ((pow((Volume fraction + 1e-06), 0.6666666666666666)) * 0.9045041696510275)), 2)))) * 0.5)) / 1e-06)
and Volume fraction (phi) is all zero, so -1e-06 is being raised to fractional powers, which is undefined.
The factors of -1e-06 arise from your use of scipy.misc.derivative() to (apparently) calculate a symbolic derivative. I don't believe it's intended for that; you'll likely have better luck with SymPy.
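As a rough sketch of the SymPy route (my own example; it hard-codes the vantHoff and van_der_waals expressions from the question's helper file with phi_cp = 0.74 and nd_v = 0.1), you can differentiate the osmotic pressure symbolically and compile the result into an ordinary numeric function:

import sympy as sp

phi_s = sp.symbols('phi', positive=True)

# pi_cc(phi) = vantHoff(phi) + van_der_waals(phi, 0.74, 0.1), written out symbolically
pi_cc_sym = phi_s + (0.1 * phi_s**3) / (0.74 - 0.74**sp.Rational(1, 3) * phi_s**sp.Rational(2, 3))**2

# Exact derivative d(pi_cc)/d(phi), turned into a NumPy-aware function
dpi_dphi = sp.lambdify(phi_s, sp.diff(pi_cc_sym, phi_s), 'numpy')

# Roughly: diffusioncoeff = dpi_dphi(phi) (or dpi_dphi(phi.value), depending on how
# you want to couple it back into the FiPy DiffusionTerm)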

SymPy dsolve returns different results for mathematically equivalent differential equations

Here is the content of my script:
from sympy import *
x = symbols('x')
init_printing(use_unicode=True)
f = symbols('f', cls=Function)
diffeq = Eq(x**2 * f(x).diff(x, x) + x * f(x).diff(x) - f(x) , 1/((1+x**2)**(3)) )
print dsolve(diffeq, f(x))
This program returns the following output:
Eq(f(x), (C1*x**2 + C1 + C2*x**4 + C2*x**2 - 15*x**4*atan(x) - 15*x**3 - 18*x**2*atan(x) - 13*x - 3*atan(x))/(16*x*(x**2 + 1)))
But when I define the variable diffeq like this:
diffeq = Eq(f(x).diff(x, x) + f(x).diff(x)/x - f(x)/x**(2) , 1 / ((1+x**2)**(3) * x**(2)) )
then I receive the output:
Traceback (most recent call last):
File "/home/foo/odeSympyTrial01.py", line 12, in <module>
print dsolve(diffeq, f(x))
File "/usr/lib/python2.7/dist-packages/sympy/solvers/ode.py", line 625, in dsolve
x0=x0, n=n, **kwargs)
File "/usr/lib/python2.7/dist-packages/sympy/solvers/deutils.py", line 235, in _desolve
raise NotImplementedError(dummy + "solve" + ": Cannot solve " + str(eq))
NotImplementedError: solve: Cannot solve Derivative(f(x), x, x) + Derivative(f(x), x)/x - f(x)/x**2 - 1/(x**2*(x**2 + 1)**3)
And when I define the variable diffeq like this:
diffeq = Eq(f(x).diff(x, x) * x**(2) + f(x).diff(x) * x**(2) /x - f(x) * x**(2) /x**(2) , 1* x**(2)/((1+x**2)**(3) * x**(2)) )
then I receive the output:
Eq(f(x), (C1*x**2 + C1 + C2*x**4 + C2*x**2 - 15*x**4*atan(x) - 15*x**3 - 18*x**2*atan(x) - 13*x - 3*atan(x))/(16*x*(x**2 + 1)))
In each of these cases the differential equation diffeq is mathematically the same, so in my opinion dsolve() should return the same output every time. Could somebody please help me understand why dsolve() raises an error in the second case? How should a nonhomogeneous linear ordinary differential equation be expressed so that dsolve() does not fail?
Short explanation: the logic of SymPy ODE module is often naive and sometimes incorrect.
As written originally, with
x**2 * f(x).diff(x, x) + x * f(x).diff(x) - f(x)
this matches the form of a Cauchy–Euler equation (also known as Euler's equation): the power of x in each coefficient is the order of the derivative. SymPy detects this structure and applies an appropriate method. But if you divide by x**2,
f(x).diff(x, x) + f(x).diff(x)/x - f(x)/x**(2)
this is no longer the case: the second derivative does not have the power of x**2 so the match fails. A more careful check could detect the latent Cauchy-Euler structure here, but that's not implemented, as one can see by looking at the source.
You can check that this is indeed what's going on with
classify_ode(diffeq, f(x))
which will return 'nth_linear_euler_eq_nonhomogeneous_variation_of_parameters' in the first case but not the second.
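For instance, something along these lines (a small self-contained check built from the question's two forms) shows the Euler hints disappearing after the division by x**2:

from sympy import symbols, Function, Eq, classify_ode

x = symbols('x')
f = symbols('f', cls=Function)

euler_form = Eq(x**2*f(x).diff(x, x) + x*f(x).diff(x) - f(x), 1/(1 + x**2)**3)
divided_form = Eq(f(x).diff(x, x) + f(x).diff(x)/x - f(x)/x**2, 1/((1 + x**2)**3 * x**2))

print(classify_ode(euler_form, f(x)))    # includes the nth_linear_euler_eq_* hints
print(classify_ode(divided_form, f(x)))  # the Euler hints are missing here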
While looking at the source, one can also see an example of wrong logic:
if coeff.is_Mul:
    if coeff.has(f(x)):
        return False
    return x**order in coeff.args
For example, x**2*sin(x) will pass this check with order=2, which means that SymPy will mistake x**2*sin(x)*f(x).diff(x, x) - f(x) = 0 for Euler's equation. And indeed,
dsolve(x**2*sin(x)*f(x).diff(x, x) - f(x), f(x))
"solves" the equation incorrectly. Do not trust ODE solutions from SymPy.

I'm getting the error "atan2() takes exactly 2 arguments (1 given)"

I'm new to programming, so I have no idea what went wrong. Please help.
from math import atan2, pi
x = int(input("value of x"))
y = int(input("value of y"))
r = (x**2 + y**2) ** 0.5
ang = atan2(y/x)
print("Hypotenuse is", r, "angle is", ang)
In Python, there are two arctangent functions: atan is simply the inverse of tan, while atan2 takes 2 arguments. In your case, since you know both catheti, you might as well use the 2-argument function atan2:
ang = atan2(y, x)
Alternatively, you might write
ang = atan(y / x)
The rationale for atan2 is that it works correctly even if x is 0, whereas atan(y / x) would raise ZeroDivisionError: float division by zero.
Additionally, atan can only give an angle between -π/2 and +π/2, whereas atan2 knows the signs of both y and x and thus which of the 4 quadrants the angle falls in; its value ranges from -π to +π. Though, of course, you wouldn't have a triangle with negative width or height...
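A quick illustration of the quadrant issue (my own example):

from math import atan, atan2, degrees

print(degrees(atan(-1.0 / 1.0)))   # -45.0  (point (1, -1): 4th quadrant, correct)
print(degrees(atan(1.0 / -1.0)))   # -45.0  (point (-1, 1) is actually in the 2nd quadrant)
print(degrees(atan2(-1.0, 1.0)))   # -45.0
print(degrees(atan2(1.0, -1.0)))   # 135.0  (atan2 recovers the correct quadrant)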
The reason for that error is that atan2 requires two arguments. Observe:
>>> from math import atan, atan2
>>> atan(2)
1.1071487177940904
>>> atan2(4, 2)
1.1071487177940904
Note that atan(y/x) does not work if x is zero but atan2(y, x) will continue to work just fine:
>>> atan(4/0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
>>> atan2(4, 0)
1.5707963267948966

Fitting a variable Sinc function in python

I would like to fit a sinc function to a bunch of data lines.
Using a Gaussian, the fit itself does work, but the data does not seem to be sufficiently Gaussian, so I figured I could just switch to sinc.
I tried to put together a short piece of self-running code, but realized that I probably do not fully understand how arrays are handled when they are handed over to a function, which could be part of the reason why I get error messages when calling my program.
So my code currently looks as follows:
from numpy import exp
from scipy.optimize import curve_fit
from math import sin, pi

def gauss(x, *p):
    print(p)
    A, mu, sigma = p
    return A*exp(-1*(x[:]-mu)*(x[:]-mu)/sigma/sigma)

def sincSquare_mod(x, *p):
    A, mu, sigma = p
    return A * (sin(pi*(x[:]-mu)*sigma) / (pi*(x[:]-mu)*sigma))**2

p0 = [1., 30., 5.]
xpos = range(100)
fitdata = gauss(xpos, p0)
p1, var_matrix = curve_fit(sincSquare_mod, xpos, fitdata, p0)
What I get is:
Traceback (most recent call last):
File "orthogonal_fit_test.py", line 18, in <module>
fitdata = gauss(xpos,p0)
File "orthogonal_fit_test.py", line 7, in gauss
A, mu, sigma = p
ValueError: need more than 1 value to unpack
From my understanding, p is not handed over correctly, which is odd, because it does work in my actual code. I then get a similar message from the sincSquare function when it is fitted, which is probably the same type of error. I am fairly new to the star operator, so there might be a glitch hidden...
Anybody some ideas? :)
Thanks!
You need to make three changes:

import numpy as np

def gauss(x, A, mu, sigma):
    return A*exp(-1*(x[:]-mu)*(x[:]-mu)/sigma/sigma)

def sincSquare_mod(x, A, mu, sigma):
    x = np.array(x)
    return A * (np.sin(pi*(x[:]-mu)*sigma) / (pi*(x[:]-mu)*sigma))**2

fitdata = gauss(xpos, *p0)
1. See the curve_fit documentation: the model function takes the independent variable as its first argument and each parameter as a separate remaining argument.
2. Replace sin with the NumPy version so it broadcasts over arrays.
3. Straightforward, right? :P
Note: I think you are looking for p1, var_matrix = curve_fit(gauss, ...) rather than the call in the OP, which does not appear to have a solution.
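Put together, a minimal self-running version with those three changes applied could look like this (a sketch: I've made xpos a NumPy array so the arithmetic in gauss broadcasts, and I fit gauss rather than the sinc model, per the note above):

import numpy as np
from numpy import exp
from math import pi
from scipy.optimize import curve_fit

def gauss(x, A, mu, sigma):
    return A * exp(-1 * (x - mu) * (x - mu) / sigma / sigma)

def sincSquare_mod(x, A, mu, sigma):
    x = np.array(x)
    return A * (np.sin(pi * (x - mu) * sigma) / (pi * (x - mu) * sigma)) ** 2

p0 = [1., 30., 5.]
xpos = np.arange(100, dtype=float)
fitdata = gauss(xpos, *p0)

p1, var_matrix = curve_fit(gauss, xpos, fitdata, p0)
print(p1)   # should recover approximately [1., 30., 5.]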
Also worth noting is that you will get rounding errors as x*Pi gets close to zero, and these might get magnified. You can approximate sinc as demonstrated below for better results (VB.NET, sorry):
Private Function sinc(x As Double) As Double
    x = (x * Math.PI)
    'The Taylor series expansion of Sin(x)/x is used to limit rounding errors for small values of x
    If x < 0.01 And x > -0.01 Then
        Return 1.0 - x ^ 2 / 6.0 + x ^ 4 / 120.0
    End If
    Return Math.Sin(x) / x
End Function
http://www.wolframalpha.com/input/?i=taylor+series+sin+%28x%29+%2F+x&dataset=&equal=Submit
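A rough NumPy port of the same idea, in case you want to stay in Python (my own sketch; note that NumPy also ships np.sinc, which uses the same normalised sin(pi*x)/(pi*x) definition):

import numpy as np

def sinc_taylor(x):
    # sin(pi*x)/(pi*x), with a Taylor expansion near zero to limit rounding errors
    x = np.atleast_1d(np.asarray(x, dtype=float)) * np.pi
    out = np.empty_like(x)
    small = np.abs(x) < 0.01
    out[small] = 1.0 - x[small]**2 / 6.0 + x[small]**4 / 120.0
    out[~small] = np.sin(x[~small]) / x[~small]
    return out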

Given f, is there an automatic way to calculate fprime for Newton's method?

The following was ported from the pseudo-code from the Wikipedia article on Newton's method:
#! /usr/bin/env python3
# https://en.wikipedia.org/wiki/Newton's_method
import sys

x0 = 1
f = lambda x: x ** 2 - 2
fprime = lambda x: 2 * x
tolerance = 1e-10
epsilon = sys.float_info.epsilon
maxIterations = 20

for i in range(maxIterations):
    denominator = fprime(x0)
    if abs(denominator) < epsilon:
        print('WARNING: Denominator is too small')
        break
    newtonX = x0 - f(x0) / denominator
    if abs(newtonX - x0) < tolerance:
        print('The root is', newtonX)
        break
    x0 = newtonX
else:
    print('WARNING: Not able to find solution within the desired tolerance of', tolerance)
    print('The last computed approximate root was', newtonX)
Question
Is there an automated way to calculate some form of fprime given some form of f in Python 3.x?
A common way of approximating the derivative of f at x is using a finite difference:
f'(x) = (f(x+h) - f(x))/h Forward difference
f'(x) = (f(x+h) - f(x-h))/2h Symmetric
The best choice of h depends on x and f: mathematically the difference approaches the derivative as h tends to 0, but the method suffers from loss of accuracy due to catastrophic cancellation if h is too small. Also x+h should be distinct from x. Something like h = x*1e-15 might be appropriate for your application. See also implementing the derivative in C/C++.
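As a sketch, the symmetric version drops straight into the loop from the question (my own example; the default step size is a guess and may need tuning, as discussed above):

def fprime_fd(f, x, h=1e-6):
    # Symmetric (central) finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2 - 2
fprime = lambda x: fprime_fd(f, x)   # replaces the hand-written fprime = lambda x: 2 * x
print(fprime(1.0))                   # approximately 2.0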
You can avoid approximating f' by using the secant method. It doesn't converge as fast as Newton's, but it's computationally cheaper and you avoid the problem of having to calculate the derivative.
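A minimal secant sketch (my own example), reusing the same f and tolerance as the question:

def secant(f, x0, x1, tolerance=1e-10, maxIterations=50):
    # Like Newton's method, but the derivative is replaced by the slope
    # through the two most recent iterates.
    for i in range(maxIterations):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break   # flat secant line; stop rather than divide by zero
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tolerance:
            break
    return x1

print(secant(lambda x: x ** 2 - 2, 1, 2))   # ~1.4142135623730951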
You can approximate fprime any number of ways. One of the simplest would be something like:
fprime = lambda x, dx=0.1: (f(x+dx) - f(x-dx))/(2*dx)
The idea here is to sample f around the point x. The sampling region (determined by dx) should be small enough that the variation in f over that region is approximately linear. The scheme I've used is known as the midpoint method. You could get more accuracy by using higher-order polynomial fits for most functions, but that would be more expensive to calculate.
Of course, you'll always be more accurate and efficient if you know the analytical derivative.
Answer
Define the functions formula and derivative as the following directly after your import.
def formula(*array):
    calculate = lambda x: sum(c * x ** p for p, c in enumerate(array))
    calculate.coefficients = array
    return calculate

def derivative(function):
    return (p * c for p, c in enumerate(function.coefficients[1:], 1))
Redefine f using formula by plugging in the function's coefficients in order of increasing power.
f = formula(-2, 0, 1)
Redefine fprime so that it is automatically created using functions derivative and formula.
fprime = formula(*derivative(f))
That should solve your requirement to automatically calculate fprime from f in Python 3.x.
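As a quick sanity check (my own addition), f = formula(-2, 0, 1) encodes f(x) = x**2 - 2, and its derivative comes out with coefficients (0, 2), i.e. fprime(x) = 2*x:

f = formula(-2, 0, 1)
fprime = formula(*derivative(f))
print(f(3), fprime(3))   # prints: 7 6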
Summary
This is the final solution that produces the original answer while automatically calculating fprime.
#! /usr/bin/env python3
# https://en.wikipedia.org/wiki/Newton's_method
import sys

def formula(*array):
    calculate = lambda x: sum(c * x ** p for p, c in enumerate(array))
    calculate.coefficients = array
    return calculate

def derivative(function):
    return (p * c for p, c in enumerate(function.coefficients[1:], 1))

x0 = 1
f = formula(-2, 0, 1)
fprime = formula(*derivative(f))
tolerance = 1e-10
epsilon = sys.float_info.epsilon
maxIterations = 20

for i in range(maxIterations):
    denominator = fprime(x0)
    if abs(denominator) < epsilon:
        print('WARNING: Denominator is too small')
        break
    newtonX = x0 - f(x0) / denominator
    if abs(newtonX - x0) < tolerance:
        print('The root is', newtonX)
        break
    x0 = newtonX
else:
    print('WARNING: Not able to find solution within the desired tolerance of', tolerance)
    print('The last computed approximate root was', newtonX)
