Python: How to integrate a math function using only the math module?

Just started learning Python, and was asked to define a Python function that integrates a math function.
We were instructed that the Python function must be in the following form (for example, to calculate the area of y = 2x + 3 between x=1 and x=2):
integrate( 2 * x + 3, 1, 2 )
(it should return the area below)
and we are not allowed to use/import any libraries other than math (and the built-in integration tool is not allowed either).
Any idea how I should go about it? When I wrote the program, I always got an error that x is undefined, but if I define x as a value (let's say 0), then the 2*x+3 part in the parameters is taken as a value instead of a math expression, so I can't really use it inside the function.
It would be very helpful, not just for this assignment but for many in the future, to know how a Python function can take a math equation as a parameter, so thanks a lot.

Let's say your integration function looks like this:
def integrate(func, lo_x, hi_x):
    # ... stuff to perform the integral, which will need to evaluate
    # the passed function for various values of x, like this:
    y = func(x)
    # ... more stuff
    return value
Then you can call it like this:
value = integrate(lambda x: 2 * x + 3, 1, 2)
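For instance, here is a minimal sketch of what the body might look like, using a midpoint Riemann sum (the 1000-step default is an arbitrary choice for this sketch):

def integrate(func, lo_x, hi_x, steps=1000):
    # Approximate the area under func between lo_x and hi_x
    # with a midpoint Riemann sum over `steps` equal slices.
    width = (hi_x - lo_x) / steps
    total = 0.0
    for i in range(steps):
        midpoint = lo_x + (i + 0.5) * width
        total += func(midpoint) * width
    return total

value = integrate(lambda x: 2 * x + 3, 1, 2)
print(value)  # approximately 6.0 for y = 2x + 3 between x = 1 and x = 2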
edit
However, if the call to the integration function has to look exactly like
integrate( 2 * x + 3, 1, 2 )
then things are a bit trickier. If you know that the function is only going to be called with a polynomial function you could do it by making x an instance of a polynomial class, as suggested by M. Arthur Vaïsse in his answer.
Or, if the integrate( 2 * x + 3, 1, 2 ) comes from a string, eg from a command line argument or a raw_input() call, then you could extract the 2 * x + 3 (or whatever) from the string using standard Python string methods and then build a lambda function from that using exec.
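For instance, a minimal sketch of that last idea, using eval (a close relative of the exec approach mentioned above) to turn the string into a one-argument function; only do this with input you trust:

expression = "2 * x + 3"                # e.g. taken from raw_input() / input()
func = eval("lambda x: " + expression)  # build a callable from the string
print(integrate(func, 1, 2))            # reuse the integrate() sketched above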

Here is an implementation that should fill the needs, I think. It allows you to define mathematical functions such as 2x+3, and it proposes a step-based implementation of integral calculation as described here: http://en.wikipedia.org/wiki/Darboux_integral
import math


class PolynomialEquation():
    """ Allow to create functions that are polynomials. """

    def __init__(self, coef):
        """
        coef: coefficients of the polynomial.
        An equation initialized with [1, 2, 3] as parameters is equivalent to:
        y = 1 + 2X + 3X²
        """
        self.coef = coef

    def __call__(self, x):
        """
        Make the object callable like a function.
        Return the value of the equation for x.
        """
        return sum([self.coef[i] * (x ** i) for i in range(len(self.coef))])


def step_integration(function, start, end, steps=100):
    """
    Proceed to a step integration of the function.
    The more steps there are, the better the approximation.
    """
    step_size = (end - start) / steps
    values = [start + i * step_size for i in range(1, steps + 1)]
    return sum([math.fabs(function(value) * step_size) for value in values])


if __name__ == "__main__":
    # Check that PolynomialEquation objects evaluate properly.
    # assert makes the program crash if the test fails.
    # y = 2x + 3 -> y = 3 + 2x -> PolynomialEquation([3, 2])
    eq = PolynomialEquation([3, 2])
    assert eq(0) == 3
    assert eq(1) == 5
    assert eq(2) == 7
    # y = 1 + 2X + 3X² -> PolynomialEquation([1, 2, 3])
    eq2 = PolynomialEquation([1, 2, 3])
    assert eq2(0) == 1
    assert eq2(1) == 6
    assert eq2(2) == 17
    print(step_integration(eq, 0, 10))
    print(step_integration(math.sin, 0, 10))
EDIT: in truth this implementation is only the upper Darboux sum. The true Darboux integral could be computed, if really needed, by also computing the lower Darboux sum (replacing range(1, steps+1) with range(steps) in step_integration gives you the lower sum), and then increasing the steps parameter while the difference between the two sums is greater than a small value chosen for your precision needs (0.001, for example). In practice, a 100-step integration should already give you a decent approximation of the integral value.
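A minimal sketch of that refinement idea (the helper names and the 0.001 default tolerance are illustrative choices, not part of the code above):

import math

def upper_sum(function, start, end, steps):
    step_size = (end - start) / steps
    values = [start + i * step_size for i in range(1, steps + 1)]  # right endpoints
    return sum(math.fabs(function(v) * step_size) for v in values)

def lower_sum(function, start, end, steps):
    step_size = (end - start) / steps
    values = [start + i * step_size for i in range(steps)]         # left endpoints
    return sum(math.fabs(function(v) * step_size) for v in values)

def refined_integration(function, start, end, tolerance=0.001, steps=100):
    # Double the number of steps until the two sums agree within the tolerance.
    while abs(upper_sum(function, start, end, steps)
              - lower_sum(function, start, end, steps)) > tolerance:
        steps *= 2
    return upper_sum(function, start, end, steps)

print(refined_integration(lambda x: 2 * x + 3, 1, 2))  # about 6.0003, approaching 6.0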

Related

Implementing a function defined by double infinite sums in python

I want to implement a python function that evaluates a mathematical function (of four variables) defined by a double infinite sum at a particular point.
I've done this by using
import numpy as np
import torch

def myfunction(x):
    value = 0.0
    for m in range(1, 100):
        for n in range(1, 100):
            value = value + 4.0*(1.0/((n + m)*np.pi**5)) * np.sin(n*x.detach().numpy()[:, 0:1]) * np.sin(m**2 * x.detach().numpy()[:, 1:2]) * np.sin(n*np.pi*x.detach().numpy()[:, 2:3]) * np.cos(m*2*x.detach().numpy()[:, 3:4])
    return torch.from_numpy(value)
Is there a better (more efficient) way to do this? When I call this function, it runs very slowly (and I'm only summing indices up to 100).
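One possible speed-up, sketched under the assumption that x is a 2-D tensor with at least four columns (as the slicing above suggests): convert x to NumPy once outside the loops, precompute the n- and m-dependent factors, and combine them with the 1/(n + m) coupling through an einsum instead of re-evaluating the trigonometric terms inside the double loop.

import numpy as np
import torch

def myfunction_vectorized(x, M=99, N=99):
    xv = x.detach().numpy()            # shape (batch, 4), converted once
    n = np.arange(1, N + 1)            # shape (N,)
    m = np.arange(1, M + 1)            # shape (M,)
    # factors that depend only on n: sin(n*x0) * sin(n*pi*x2), shape (N, batch)
    A = np.sin(np.outer(n, xv[:, 0])) * np.sin(np.outer(n * np.pi, xv[:, 2]))
    # factors that depend only on m: sin(m**2*x1) * cos(2*m*x3), shape (M, batch)
    B = np.sin(np.outer(m ** 2, xv[:, 1])) * np.cos(np.outer(2 * m, xv[:, 3]))
    # coupling weights 4 / ((n + m) * pi**5), shape (N, M)
    W = 4.0 / ((n[:, None] + m[None, :]) * np.pi ** 5)
    value = np.einsum('nm,nb,mb->b', W, A, B)   # sum over n and m, shape (batch,)
    return torch.from_numpy(value[:, None])     # keep the (batch, 1) shape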

How to calculate a sigmoid function without using an exp() function in Python?

I'm working in somewhat of a limited development environment. I'm writing a neural network in Python. I don't have access to numpy and as it is I can't even import the math module. So my options are limited. I need to calculate the sigmoid function, however I'm not sure how the exp() function works under the hood. I understand exponents and that I can use code like:
base = .57
exp = base ** exponent
However, I'm not sure what exponent should be. How do functions like numpy.exp() calculate the exponent? This is what I need to replicate.
The exponential function exp(a) is equivalent to e ** a, where e is Euler's number.
>>> e = 2.718281828459045
>>> def exp(a):
... return e ** a
...
>>> import math # accuracy test
>>> [math.exp(i) - exp(i) for i in range(1, 12, 3)]
[0.0, 7.105427357601002e-15, 2.2737367544323206e-13, 1.4551915228366852e-11]
def sigmoid(z):
    # This is the formula for the sigmoid in pure Python,
    # where z = hypothesis. You have to find the value of hypothesis.
    e = 2.718281828459
    return 1.0/(1.0 + e**(-1.0*z))
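A quick sanity check of that function (the expected values are approximate):

print(sigmoid(0))    # 0.5
print(sigmoid(2))    # about 0.8808
print(sigmoid(-2))   # about 0.1192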
You can use ** just fine for your use case; it will work with both float and integer input:
print(2**3)
8
print(2**0.5 )
1.4142135623730951
If you really need a drop-in replacement for numpy.exp(), you can just make a function that behaves like what is described in the docs: https://numpy.org/doc/stable/reference/generated/numpy.exp.html
from typing import List, Union

def not_numpy_exp(x: Union[List[float], float]):
    e = 2.718281828459045  # close enough
    if type(x) == list:
        return [e ** _x for _x in x]
    else:
        return e**x
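Called with a float or with a list, it behaves like this:

print(not_numpy_exp(1.0))         # 2.718281828459045
print(not_numpy_exp([0.0, 1.0]))  # [1.0, 2.718281828459045]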
how the exp() function works under the hood
If you mean math.exp from the built-in math module, in this place it simply does
exp(x, /)
Return e raised to the power of x.
where e should be understood as math.e (2.718281828459045). If import math is not allowed, you might do
pow(2.718281828459045, x)
instead of exp(x)
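Putting that together with the original goal, a minimal sketch of the sigmoid with no imports at all (the hard-coded constant stands in for math.e):

E = 2.718281828459045  # Euler's number, hard-coded because math is unavailable

def sigmoid(z):
    return 1.0 / (1.0 + pow(E, -z))

print(sigmoid(0))  # 0.5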

Why does this function return different values than it prints?

Beginner question - I'm trying to solve CodeAbbey's Problem #174, "Calculation of Pi", and so far I have written a function that accurately calculates the side lengths of a regular polygon with 6*N corners, thus approximating a circle.
In the code below, the function x(R,d) prints the correct values for "h" and "side" (compare the values given in the example on CodeAbbey), but when I ran my code through pythontutor, I saw that it returns slightly different values, for example 866025403784438700 instead of 866025403784438646 for the first value of h.
Can someone help me understand why this is?
As you can probably tell, I'm an amateur. I took the isqrt function from here, as the math.sqrt(x) method seems to give very imprecise results for large values of x
def isqrt(x):
    # Returns the integer square root. This seems to be unproblematic.
    if x < 0:
        raise ValueError('square root not defined for negative numbers')
    n = int(x)
    if n == 0:
        return 0
    a, b = divmod(n.bit_length(), 2)
    x = 2**(a+b)
    while True:
        y = (x + n//x)//2
        if y >= x:
            return x
        x = y

def x(R, d):
    # Given the radius and the side length of the initial polygon,
    # this should return the side length of the new polygon.
    h = isqrt(R**2 - d**2)
    side = isqrt(d**2 + (R-h)**2)
    print(h, side)    # the values in this line are slightly
    return (h, side)  # different than the ones here. Why?

def approximate_pi(K, N):
    R = int(10**K)
    d = R // 2
    for i in range(N):
        d = (x(R, d)[1] // 2)
    return int(6 * 2**N * d)

print(approximate_pi(18, 4))
That's an artifact of Python Tutor. It's not something actually happening in your code.
From a very brief look at the Python Tutor source code, it looks like the Python execution backend is a slightly hacked-up, mostly standard CPython instance with debug instrumentation through bdb, but the visualization is in Javascript. The printed output comes from the Python standard output, but the visualization goes through Javascript, and the Python integers get converted to Javascript Number values, losing precision because Number is 64-bit floating point.
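You can reproduce the precision loss in plain Python by pushing the first h value through a 64-bit float, which is the representation Javascript's Number uses:

h = 866025403784438646
print(float(h) == h)  # False: h is not exactly representable as a 64-bit float
print(int(float(h)))  # 866025403784438656, the nearest representable value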
It has to do with rounding to the nearest integer.
In your function, in divmod(n.bit_length(), 2), try changing 2 to 2.0; it will give a similar value to what you saw on their platform.

Getting the error "cannot assign function to call" when computing derivatives with SymPy

This code computes the derivatives for a Taylor expansion that is 5 derivatives long. So ds(i) is supposed to replace its zero-valued entries with the new values (the derivative values). I keep getting the error "cannot assign function to call".
def derivatives(f, x, a, n):
    f = f(x)
    x = var
    a = 1.0
    n = 5
    ds = np.zeros(n)
    exp = f(x)
    for i in range(n):
        exp = sp.diff(exp, x)
        ds(i) = exp.replace(x, a)
    return ds
You probably meant ds[i], not ds(i). Square brackets for indexing vs round parentheses for function calls. That said, the code has other issues, from undefined var to using a NumPy array (?) to store SymPy objects. In general, it's advisable to keep in mind that SymPy works primarily with expressions, not with functions. Expressions do not "take arguments"; they are not like callable functions in Python.
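For completeness, a sketch of what the corrected helper might look like, using a plain Python list instead of a NumPy array (the names here are just illustrative):

import sympy as sp

def derivatives(f_expr, x, a, n):
    # f_expr is a SymPy expression in x, not a Python function.
    ds = []
    expr = f_expr
    for _ in range(n):
        expr = sp.diff(expr, x)
        ds.append(expr.subs(x, a))  # square brackets / append, not ds(i) = ...
    return ds

x = sp.symbols('x')
print(derivatives(sp.exp(2*x), x, a=0, n=5))  # [2, 4, 8, 16, 32]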
And all of this is unnecessary, because SymPy computes the n-th derivative on its own. For example, the 5th derivative of exp(2*x) at 0:
import sympy as sp

x = sp.symbols('x')
f = sp.exp(2*x)  # an expression, not a function
n = 5
a = 0
print(f.diff(x, n).subs(x, a))  # take the derivative n times, then plug a in for x
prints 32. Or, if you want a Taylor expansion up to and including x**n:
print(f.series(x, a, n + 1))
prints 1 + 2*x + 2*x**2 + 4*x**3/3 + 2*x**4/3 + 4*x**5/15 + O(x**6).

Python sympy - simplify nonzero factors from an equation

I am using the sympy library for python3, and I am handling equations, such as the following one:
from sympy import symbols, Eq

a, b = symbols('a b', positive=True)
my_equation = Eq((2 * a + b) * (a - b) / 2, 0)
my_equation gets printed exactly as I have defined it ((2 * a + b) * (a - b) / 2 == 0, that is), and I am unable to reduce it even using simplify or similar functions.
What I am trying to achieve is simplifying the nonzero factors from the equation (2 * a + b and 1 / 2); ideally, I'd want to be able to simplify a - b as well, if I am sure that a != b.
Is there any way I can reach this goal?
The point is that simplify() is not capable (yet) of complex reasoning about assumptions. I tested the same equation with Wolfram Mathematica's Simplify, and it works there. It looks like it's a missing feature in SymPy.
Anyway, I propose a function to do what you're looking for.
Define this function:
def simplify_eq_with_assumptions(eq):
    assert eq.rhs == 0          # assert that the right-hand side is zero
    assert type(eq.lhs) == Mul  # assert that the left-hand side is a multiplication
    newargs = []                # list of the multiplication factors to keep
    for arg in eq.lhs.args:
        if arg.is_positive:
            continue            # arg is positive (hence nonzero), skip it
        newargs.append(arg)
    # rebuild the equality with the remaining factors:
    return Eq(eq.lhs.func(*newargs), 0)
Now you can call:
In [5]: simplify_eq_with_assumptions(my_equation)
Out[5]: a - b = 0
You can easily adapt this function to your needs. Hopefully, in some future version of SymPy it will be sufficient to call simplify.
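For instance, one hedged sketch of such an adaptation (drop_nonzero_factors and its nonzero parameter are illustrative names, not SymPy API) that also removes factors the caller knows are nonzero, such as a - b when a != b:

from sympy import Eq, Mul, symbols

def drop_nonzero_factors(eq, nonzero=()):
    # Like simplify_eq_with_assumptions, but also drops any factor the caller
    # explicitly lists as nonzero. If every factor is known nonzero, the result
    # collapses to Eq(1, 0), i.e. False: the equation has no solution.
    assert eq.rhs == 0
    assert isinstance(eq.lhs, Mul)
    newargs = [arg for arg in eq.lhs.args
               if not (arg.is_positive or arg in nonzero)]
    return Eq(Mul(*newargs), 0)

a, b = symbols('a b', positive=True)
my_equation = Eq((2 * a + b) * (a - b) / 2, 0)
print(drop_nonzero_factors(my_equation))                   # Eq(a - b, 0)
print(drop_nonzero_factors(my_equation, nonzero=[a - b]))  # False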
