I have created a class which takes a distribution and fits it. The method has the option of choosing between a few predefined functions.
As part of printing the class, I print the result of the fit in the form of an equation, where the fit results and their errors are displayed over the figure.
My question is: is there a tidy way to handle a negative constant, so that the string for printing is formed as "y = mx - c" and not "y = mx + -c"?
I developed this with a linear fit, where I simply assess the sign of the constant, and form the string in one of two ways:
def fit_result_string(self, results, errors):
    if self.fit_model is utl.linear:
        if results[1] > 0:
            fit_str = r"y = {:.3}($\pm${:.3})x + {:.3}($\pm${:.3})".format(
                results[0],
                errors[0],
                results[1],
                errors[1])
        else:
            fit_str = r"y = {:.3}($\pm${:.3})x - {:.3}($\pm${:.3})".format(
                results[0],
                errors[0],
                abs(results[1]),
                errors[1])
    return fit_str
I now want to build this up to also be able to form a string containing the results if the fit model is changed to a 2nd, 3rd, or 4th degree polynomial, while handling the sign of each coefficient.
Is there a better way to do this than using a whole bunch of if-else statements?
Thanks in advance!
Define a function which returns '+' or '-' according to the given number, and call it inside an f-string.
def plus_minus_string(n):
    return '+' if n >= 0 else '-'

print(f"y = {m}x {plus_minus_string(c)} {abs(c)}")
Examples:
>>> m = 2
>>> c = 5
>>> print(f"y = {m}x {plus_minus_string(c)} {abs(c)}")
y = 2x + 5
>>> c = -4
>>> print(f"y = {m}x {plus_minus_string(c)} {abs(c)}")
y = 2x - 4
You will need to change it a bit to fit your code, but it's quite straightforward, I hope.
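To extend this to higher-degree polynomial fits without stacking if-else branches, here is one rough sketch (not your actual class; it assumes results and errors are ordered from the highest-degree coefficient down to the constant, which is an assumption about your fit output):

def plus_minus_string(n):
    return '+' if n >= 0 else '-'

def fit_result_string(results, errors):
    # works for any polynomial degree: len(results) == degree + 1
    degree = len(results) - 1
    terms = []
    for i, (res, err) in enumerate(zip(results, errors)):
        power = degree - i
        x_part = "" if power == 0 else ("x" if power == 1 else f"x^{power}")
        coeff = rf"{abs(res):.3}($\pm${err:.3}){x_part}"
        if i == 0:
            # the leading coefficient keeps its own sign attached to the number
            terms.append(("-" if res < 0 else "") + coeff)
        else:
            terms.append(f"{plus_minus_string(res)} {coeff}")
    return "y = " + " ".join(terms)

# e.g. fit_result_string([2.0, -0.5], [0.1, 0.05]) -> 'y = 2.0($\pm$0.1)x - 0.5($\pm$0.05)'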
I'm using Sympy to make a custom function which converts complex square roots into their complex numbers. When I input -sqrt(-2 + 2*sqrt(3)*I) I get the expected output of -1 - sqrt(3)*I, however, inputting -sqrt(-2.0 + 2*sqrt(3)*I) (has a -2.0 instead of -2), I get the output -1.0 - 0.707106781186547*sqrt(6)*I.
I've tried to convert the input expression to a string, gotten rid of the '.0 ' and then executed a piece of code to return it to the type sympy.core.add.Mul, which usually works with other strings, but the variable expression is still a string.
expression = str(input_expression).replace('.0 ', '')
exec(f'expression = {expression}')
How do I get rid of the redundant use of floats in my expression, while maintaining its type of sympy.core.add.Mul, so that my function will give a nice output?
P.S. The number 0.707106781186547 is an approximation of 1/sqrt(2). The fact that this number is present in the second output means that my function is running properly, it just isn't outputting in the desired way.
Edit:
For whatever reason, if I unindent the code and get rid of the function wrapper entirely, running it as its own program gives the expected output. It's only when the code is in function form that it doesn't work.
Code as Requested:
from IPython.display import display, Math
from sympy.abc import *
from sympy import *
def imaginary_square_root(x, y):
    return(sqrt((x + sqrt(x**2 + y**2)) / (2)) + I*((y*sqrt(2)) / (2*sqrt(x + sqrt(x**2 + y**2))))) # calculates the square root of a complex number

def find_imaginary_square_root(polynomial): # 'polynomial' used because this function is meant to change expressions including variables such as 'x'
    polynomial = str(polynomial).replace('.0 ', ' ')
    exec(f'polynomial = {polynomial}')
    list_of_square_roots = [] # list of string instances of square roots and their contents
    list_of_square_root_indexes = [] # list of indexes at which the square roots can be found in the string
    polynomial_string = str(polynomial)
    temp_polynomial_string = polynomial_string # string used and chopped up, hence the prefix 'temp_...'
    current_count = 0 # counter variable used for two separate jobs
    while 'sqrt' in temp_polynomial_string: # gets indexes of every instance of 'sqrt'
        list_of_square_root_indexes.append(temp_polynomial_string.index('sqrt') + current_count)
        temp_polynomial_string = temp_polynomial_string[list_of_square_root_indexes[-1] + 4:]
        current_count += list_of_square_root_indexes[-1] + 4
    for square_root_location in list_of_square_root_indexes:
        current_count = 1 # second job for 'current_count'
        for index, char in enumerate(polynomial_string[square_root_location + 5:]):
            if char == '(':
                current_count += 1
            elif char == ')':
                current_count -= 1
            if not current_count: # when current_count == 0, we know that the end of the sqrt contents have been reached
                list_of_square_roots.append(polynomial_string[square_root_location:square_root_location + index + 6]) # adds the square root with contents to a list
                break
    for individual_square_root in list_of_square_roots:
        if individual_square_root in str(polynomial):
            evaluate = individual_square_root[5:-1]
            x = re(evaluate)
            y = im(evaluate)
            polynomial = polynomial.replace(eval(individual_square_root), imaginary_square_root(x, y)) # replace function used here is Sympy's replace function for polynomials
    return polynomial
poly = str(-sqrt(-2.0 + 2*sqrt(3)*I))
display(Math(latex(find_imaginary_square_root(poly))))
What exactly are you trying to accomplish? I still do not understand. You have a whole chunk of code. Try this out:
from sympy import *
def parse(expr): print(simplify(expr).evalf().nsimplify())
parse(-sqrt(-2.0 + 2*sqrt(3)*I))
-1 - sqrt(3)*I
I think everything that you're fighting to do here can be made easier with what sympy has built in. First, assuming that you're taking in user-given strings, I'd recommend using sympy's built-in parsers. Second, sympy will do this exact calculation for you, although with a caveat.
from sympy.parsing.sympy_parser import parse_expr
def simplify_string(polynomial_str):
    polynomial = parse_expr(polynomial_str)
    return polynomial.powsimp().evalf()
Usage examples:
>>> simplify_string('-sqrt(-2 + 2*sqrt(3)*I)')
-1.0 - 1.73205080756888*I
>>> simplify_string('sqrt(sqrt(1 + sqrt(2)*I) + I*sqrt(3 - I*sqrt(5)))')
1.54878147282944 + 0.78803305913*I
>>> simplify_string('sqrt((3 + sqrt(2 + sqrt(3)*I)*I)*x**2 + (3 + sqrt(5)*I)*x + I*4)')
(x**2*(3.0 + I*(2.0 + 1.73205080756888*I)**0.5) + x*(3.0 + 2.23606797749979*I) + 4.0*I)**0.5
The problem is that sympy will either work in floats or exactly. If you want sympy to calculate out the numerical value of a square root, it's going to display what could be an int as a float for clarity. You can't fix the typecasting, but sympy has a lot of the work you're trying to do built in under the hood.
Edit
You can use .nsimplify() on the polynomial to bring things back to nice-looking numbers if possible, but you won't be able to have both evaluated roots and nice displays in the same form.
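For instance, a small sketch reusing the simplify_string helper above (nsimplify will only succeed when it can recognize the evaluated floats as nice closed forms):

>>> from sympy import nsimplify
>>> nsimplify(simplify_string('-sqrt(-2 + 2*sqrt(3)*I)'))
-1 - sqrt(3)*I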
The sqrtdenest batteries are already included. If you replace ints expressed as floats it will work:
>>> from sympy import sqrtdenest, sqrt, Float
>>> eq = -sqrt(-2.0 + 2*sqrt(3)*I)
Define a function that will extract Floats that are equal to ints
>>> intfloats = lambda x: dict([(i,int(i)) for i in x.atoms(Float) if i==int(i)])
Use it to transform eq and then apply the sqrtdenest
>>> eq.xreplace(intfloats(eq))
-sqrt(-2 + 2*sqrt(3)*I)
>>> sqrtdenest(_)
-1 + sqrt(3)
A problem with using nsimplify (or any mass simplification) is that it may do more than you want. It's best to use the most specific transformation possible to limit the impact (and work).
/!\ sqrtdenest appears to have a problem that I will report: it is dropping the I
I'm using scipy.integrate's odeint function to find solutions to the equation
$$ \dot x = -\frac{f(x)}{g(x)}, $$
where $f$ and $g$ are both functions of $x$. $f,g$ are given by series of the form
$$ f(x) = x(1 + \sum_k b_k x^{k/2}) $$
$$ g(x) = 1 + \sum_k a_k (1 + k/2) x^{k/2}. $$
All positive initial values for $x$ should result in the solution blowing up in time, but they don't... well, not always.
The coefficients $a_n, b_n$ are long polynomials, where $b_n$ is dependent on $x$ in a certain way, and $a_n$ is dependent on several terms being held constant.
Depending on the way I compute $g(x)$, I get very different behavior.
The first way I tried is as follows. 'a' and 'b' are 1x8 and 1x9 numpy arrays. Note that in the function g(x, a), a is multiplied by gterms in line 3, and does not appear in line 2.
def g(x, a):
    gterms = [(0.5*k + 1.) * x**(0.5*k) for k in range( len(a) )]
    return 1. + np.sum(a*gterms)

def rhs(u,t):
    x = u
    a, b = An(), Bn(x) #An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)
t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
The second way was this:
def g(x, a):
    gterms = [(0.5*k + 1.) * a[k] * x**(0.5*k) for k in range( len(a) )]
    return 1. + np.sum(gterms)

def rhs(u,t):
    x = u
    a, b = An(), Bn(x) #An() and Bn(x) are functions that return an array of coefficients
    return -f(x, b)/g(x, a)
t = np.linspace(.,.,.)
solution = odeint(rhs, <some initial value>, t)
Note the difference: using the first method, I stuck the array 'a' into the sum in line 3, whereas using the second method, I stuck the values of 'a' into the list 'gterms' in line 2 instead.
The first method gives the expected behavior: solutions blow up for positive x. However, the second method does not do this. The second method gives a bifurcation for some x0 > 0 that acts as a source. For initial conditions greater than x0, solutions blow up as expected, but initial conditions less than x0 have the solutions tending to 0 very slowly.
Something else of note: in the rhs function, if I change it from
def rhs(u,t):
    x = u
    ...
    return .
to
def rhs(u,t):
    x = u[0]
    ...
    return .
the same exact change occurs
So my question is: what is the difference between the two different methods I used? I can't tell for the life of me what is actually going on here. Sorry for being so verbose.
I have a very long math formula (just to put you in context: it has 293095 characters) which in practice will be the body of a python function. This function has 15 input parameters as in:
def math_func(t,X,P,n1,n2,R,r):
    x,y,z = X
    a,b,c = P
    u1,v1,w1 = n1
    u2,v2,w2 = n2
    return <long math formula>
The formula uses simple math operations + - * ** / and one function call to arctan. Here an extract of it:
r*((-16*(r**6*t*u1**6 - 6*r**6*u1**5*u2 - 15*r**6*t*u1**4*u2**2 +
20*r**6*u1**3*u2**3 + 15*r**6*t*u1**2*u2**4 - 6*r**6*u1*u2**5 -
r**6*t*u2**6 + 3*r**6*t*u1**4*v1**2 - 12*r**6*u1**3*u2*v1**2 -
18*r**6*t*u1**2*u2**2*v1**2 + 12*r**6*u1*u2**3*v1**2 +
3*r**6*t*u2**4*v1**2 + 3*r**6*t*u1**2*v1**4 - 6*r**6*u1*u2*v1**4 -
3*r**6*t*u2**2*v1**4 + r**6*t*v1**6 - 6*r**6*u1**4*v1*v2 -
24*r**6*t*u1**3*u2*v1*v2 + 36*r**6*u1**2*u2**2*v1*v2 +
24*r**6*t*u1*u2**3*v1*v2 - 6*r**6*u2**4*v1*v2 -
12*r**6*u1**2*v1**3*v2 - 24*r**6*t*u1*u2*v1**3*v2 +
12*r**6*u2**2*v1**3*v2 - 6*r**6*v1**5*v2 - 3*r**6*t*u1**4*v2**2 + ...
Now the point is that in practice the bulk evaluation of this function will be done for fixed values of P, n1, n2, R and r, which reduces the set of free variables to only four, and "in theory" the formula with fewer parameters should be faster.
So the question is: How can I implement this optimization in Python?
I know I can put everything in a string and do some sort of replace, compile and eval, as in
formula = formula.replace('r','1').replace('R','2')....
code = compile(formula,'formula-name','eval')
math_func = lambda t,x,y,z: eval(code)
It would be good if some operations (like power) were substituted by their value; for example, 18*r**6*t*u1**2*u2**2*v1**2 should become 18*t for r=u1=u2=v1=1. I think compile should do so, but in any case I'm not sure. Does compile actually perform this optimization?
My solution speeds up the computation but if I can squeeze it more it will be great. Note: preferable within standard Python (I could try Cython later).
In general I'm interested in a pythonic way to accomplish my goal, maybe with some extra libraries: what is a reasonably good way of doing this? Is my solution a good approach?
EDIT: (To give more context)
The huge expression is the output of a symbolic line integral over an arc of circle. The arc is given in space by the radius r, two ortho-normal vectors (like the x and y axis in a 2D version) n1=(u1,v1,w1),n2=(u2,v2,w2) and the center P=(a,b,c). The rest is the point over which I'm performing the integration X=(x,y,z) and a parameter R for the function I'm integrating.
Sympy and Maple just take ages to compute this, the actual output is from Mathematica.
If you are curious about the formula here it is (pseudo-pseudo-code):
G(u) = P + r*(1-u**2)/(1+u**2)*n1 + r*2*u/(1+u**2)*n2
integral of (1-|X-G(t)|^2/R^2)^3 over t
You could use Sympy:
>>> from sympy import symbols
>>> x,y,z,a,b,c,u1,v1,w1,u2,v2,w2,t,r = symbols("x,y,z,a,b,c,u1,v1,w1,u2,v2,w2,t,r")
>>> r=u1=u2=v1=1
>>> a = 18*r**6*t*u1**2*u2**2*v1**2
>>> a
18*t
Then you can create a Python function like this:
>>> from sympy import lambdify
>>> f = lambdify(t, a)
>>> f(1)
18
And that f function is indeed simply 18*t:
>>> import dis
>>> dis.dis(f)
  1           0 LOAD_CONST               1 (18)
              3 LOAD_FAST                0 (_Dummy_18)
              6 BINARY_MULTIPLY
              7 RETURN_VALUE
If you want to compile the resulting code into machine code, you can try a JIT compiler such as Numba, Theano, or Parakeet.
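For the full formula, the same lambdify approach should give a fast vectorized function once the fixed parameters are substituted. A sketch (full_expr and the fixed values below are placeholders, not the real 293095-character expression):

import numpy as np
from sympy import lambdify, symbols

t, x, y, z, a, b, c, u1, v1, w1, u2, v2, w2, r, R = symbols(
    "t x y z a b c u1 v1 w1 u2 v2 w2 r R")

# full_expr is assumed to be the complete formula as a SymPy expression in these symbols
reduced = full_expr.subs({a: 0.0, b: 0.0, c: 0.0,
                          u1: 1.0, v1: 0.0, w1: 0.0,
                          u2: 0.0, v2: 1.0, w2: 0.0,
                          r: 1.0, R: 2.0})              # placeholder fixed values

fast = lambdify((t, x, y, z), reduced, modules="numpy")  # vectorized numpy function
ts = np.linspace(0.0, 1.0, 10000)
values = fast(ts, 0.1, 0.2, 0.3)                         # bulk evaluation for many t at once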
Here's how I would approach this problem:
compile() your function to an AST (Abstract Syntax Tree) instead of a normal bytecode function - see the standard ast module for details.
Traverse the AST, replacing all references to the fixed parameters with their fixed value. There are libraries such as macropy that may be useful for this; I don't have any specific recommendation.
Traverse the AST again, performing whatever optimizations this might enable, such as Mult(1, X) => X. You don't have to worry about operations between two constants, as Python (since 2.6) optimizes that already.
compile() the AST into a normal function. Call it, and hope that the speed was increased by a sufficient amount to justify all the pre-optimization.
Note that Python will never optimize things like 1*X on its own, as it cannot know what type X will be at runtime - it could be an instance of a class that implements the multiplication operation in an arbitrary way, so the result is not necessarily X. Only your knowledge that all the variables are ordinary numbers, obeying the usual rules of arithmetic, makes this optimization valid.
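A minimal sketch of the parse/replace/compile steps above with the standard ast module (illustrative only; the fixed values and the tiny formula are placeholders, the constant-folding pass is not shown, and ast.Constant assumes Python 3.8+):

import ast

FIXED = {"r": 1.0, "u1": 1.0, "u2": 1.0, "v1": 1.0}   # placeholder fixed parameters

class FixParams(ast.NodeTransformer):
    """Replace references to fixed parameters with their constant values."""
    def visit_Name(self, node):
        if node.id in FIXED:
            return ast.copy_location(ast.Constant(FIXED[node.id]), node)
        return node

tree = ast.parse("18*r**6*t*u1**2*u2**2*v1**2", mode="eval")
tree = ast.fix_missing_locations(FixParams().visit(tree))
code = compile(tree, "<formula>", "eval")
print(eval(code, {"t": 2.0}))   # 36.0, i.e. 18*t with t = 2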
The "right way" to solve a problem like this is one or more of:
Find a more efficient formulation
Symbolically simplify and reduce terms
Use vectorization (e.g. NumPy)
Punt to low-level libraries that are already optimized (e.g. in languages like C or Fortran that implicitly do strong expression optimization, rather than Python, which does nada).
Let's say for a moment, though, that approaches 1, 3, and 4 are not available, and you have to do this in Python. Then simplifying and "hoisting" common subexpressions is your primary tool.
The good news is, there are a lot of opportunities. The expression r**6, for example, is repeated 26 times. You could save 25 computations by simply assigning r_6 = r ** 6 once, then replacing r**6 every time it occurs.
When you start looking for common expressions here, you'll find them everywhere. It'd be nice to mechanize that process, right? In general, that requires a full expression parser (e.g. from the ast module) and is an exponential-time optimization problem. But your expression is a bit of a special case. While long and varied, it's not especially complicated. It has few internal parenthetical groupings, so we can get away with a quicker and dirtier approach.
Before the how, the resulting code is:
sa = r**6 # 26 occurrences
sb = u1**2 # 5 occurrences
sc = u2**2 # 5 occurrences
sd = v1**2 # 5 occurrences
se = u1**4 # 4 occurrences
sf = u2**3 # 3 occurrences
sg = u1**3 # 3 occurrences
sh = v1**4 # 3 occurrences
si = u2**4 # 3 occurrences
sj = v1**3 # 3 occurrences
sk = v2**2 # 1 occurrence
sl = v1**6 # 1 occurrence
sm = v1**5 # 1 occurrence
sn = u1**6 # 1 occurrence
so = u1**5 # 1 occurrence
sp = u2**6 # 1 occurrence
sq = u2**5 # 1 occurrence
sr = 6*sa # 6 occurrences
ss = 3*sa # 5 occurrences
st = ss*t # 5 occurrences
su = 12*sa # 4 occurrences
sv = sa*t # 3 occurrences
sw = v1*v2 # 5 occurrences
sx = sj*v2 # 3 occurrences
sy = 24*sv # 3 occurrences
sz = 15*sv # 2 occurrences
sA = sr*u1 # 2 occurrences
sB = sy*u1 # 2 occurrences
sC = sb*sc # 2 occurrences
sD = st*se # 2 occurrences
# revised formula
sv*sn - sr*so*u2 - sz*se*sc +
20*sa*sg*sf + sz*sb*si - sA*sq -
sv*sp + sD*sd - su*sg*u2*sd -
18*sv*sC*sd + su*u1*sf*sd +
st*si*sd + st*sb*sh - sA*u2*sh -
st*sc*sh + sv*sl - sr*se*sw -
sy*sg*u2*sw + 36*sa*sC*sw +
sB*sf*sw - sr*si*sw -
su*sb*sx - sB*u2*sx +
su*sc*sx - sr*sm*v2 - sD*sk
That avoids 81 computations. It's just a rough cut. Even the result could be further improved. The subexpressions sr*sw and su*sd for example, could be pre-computed as well. But we'll leave that next level for another day.
Note that this doesn't include the starting r*((-16*(. The majority of the simplification can be (and needs to be) done on the core of the expression, not on its outer terms. So I stripped those away for now; they can be added back once the common core is computed.
How do you do this?
f = """
r**6*t*u1**6 - 6*r**6*u1**5*u2 - 15*r**6*t*u1**4*u2**2 +
20*r**6*u1**3*u2**3 + 15*r**6*t*u1**2*u2**4 - 6*r**6*u1*u2**5 -
r**6*t*u2**6 + 3*r**6*t*u1**4*v1**2 - 12*r**6*u1**3*u2*v1**2 -
18*r**6*t*u1**2*u2**2*v1**2 + 12*r**6*u1*u2**3*v1**2 +
3*r**6*t*u2**4*v1**2 + 3*r**6*t*u1**2*v1**4 - 6*r**6*u1*u2*v1**4 -
3*r**6*t*u2**2*v1**4 + r**6*t*v1**6 - 6*r**6*u1**4*v1*v2 -
24*r**6*t*u1**3*u2*v1*v2 + 36*r**6*u1**2*u2**2*v1*v2 +
24*r**6*t*u1*u2**3*v1*v2 - 6*r**6*u2**4*v1*v2 -
12*r**6*u1**2*v1**3*v2 - 24*r**6*t*u1*u2*v1**3*v2 +
12*r**6*u2**2*v1**3*v2 - 6*r**6*v1**5*v2 - 3*r**6*t*u1**4*v2**2
""".strip()
from collections import Counter
import re
expre = re.compile('(?<!\w)\w+\*\*\d+')
multre = re.compile('(?<!\w)\w+\*\w+')
expr_saved = 0
stmts = []
secache = {}
seindex = 0
def subexpr(e):
    global seindex
    cached = secache.get(e)
    if cached:
        return cached
    base = ord('a') if seindex < 26 else ord('A') - 26
    name = 's' + chr(seindex + base)
    seindex += 1
    secache[e] = name
    return name
def hoist(e, flat, c):
    """
    Hoist the expression e into name defined by flat.
    c is the count of how many times seen in incoming
    formula.
    """
    global expr_saved
    assign = "{} = {}".format(flat, e)
    s = "{:30} # {} occurrence{}".format(assign, c, '' if c == 1 else 's')
    stmts.append(s)
    print "{} needless computations quashed with {}".format(c-1, flat)
    expr_saved += c - 1
def common_exp(form):
    """
    Replace ALL exponentiation operations with a hoisted
    sub-expression.
    """
    # find the exponentiation operations
    exponents = re.findall(expre, form)
    # find and count exponentiation operations
    expcount = Counter(re.findall(expre, form))
    # for each exponentiation, create a hoisted sub-expression
    for e, c in expcount.most_common():
        hoist(e, subexpr(e), c)
    # replace all exponentiation operations with their sub-expressions
    form = re.sub(expre, lambda x: subexpr(x.group(0)), form)
    return form
def common_mult(f):
    """
    Replace multiplication operations with a hoisted
    sub-expression if they occur > 1 time. Also, only
    replaces one sub-expression at a time (the most common)
    because it may affect further expressions
    """
    mults = re.findall(multre, f)
    for e, c in Counter(mults).most_common():
        # unlike exponents, only replace if >1 occurrence
        if c == 1:
            return f
        # occurs >1 time, so hoist
        hoist(e, subexpr(e), c)
        # replace in loop and return
        return re.sub('(?<!\w)' + re.escape(e), subexpr(e), f)
        # return f.replace(e, flat(e))
    return f
# fix all exponents
form = common_exp(f)
# fix selected multiplies
prev = form
while True:
    form = common_mult(form)
    if form == prev:
        # have converged; no more replacements possible
        break
    prev = form
print "--"
mults = re.split(r'\s*[+-]\s*', form)
smults = ['*'.join(sorted(terms.split('*'))) for terms in mults]
print smults
# print the hoisted statements and the revised expression
print '\n'.join(stmts)
print
print "# revised formula"
print form
Parsing with regular expressions is dicey business. That journey is prone to error, sorrow, and regret. I guarded against bad outcomes by hoisting some exponentiations that didn't strictly need to be, and by plugging random values into both the before and after formulas to make sure they both give the same results. I recommend the "punt to C" strategy if this is production code. But if you can't...
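As an aside, if SymPy is acceptable (it appears in the other answers), its cse() helper mechanizes exactly this kind of hoisting. A small sketch on a toy slice of the formula rather than the real thing:

from sympy import cse, sympify

expr = sympify("r**6*t*u1**6 - 6*r**6*u1**5*u2 + 3*r**6*t*u1**4*v1**2")
replacements, reduced = cse(expr)
# replacements: a list of (new_symbol, sub_expression) pairs, e.g. something like (x0, r**6)
# reduced:      a list containing the expression rewritten in terms of those symbols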
I was using this question to help me create a Scientific Notation function; however, instead of 4.08E+10 I wanted this: 4.08 x 10^10. So I made a working function like so:
def SciNotation(num,sig):
    x='%.2e' %num #<-- Instead of 2, input sig here
    x= x.split('e')
    if (x[1])[0] == "-":
        return x[0]+" x 10^"+ x[1].lstrip('0')
    else:
        return x[0]+" x 10^"+ (x[1])[1:].lstrip('0')
num = float(raw_input("Enter number: "))
sig = raw_input("Enter significant figures: ")
print SciNotation(num,2)
This function, when given an input of 99999, will print an output of 1.00 x 10^5 (2 significant figures). However, I need to make use of my sig variable (# of significant figures inputted by the user). I know I have to input the sig variable into Line 2 of my code but I can't seem to get it to work.
So far I have tried (with inputs num=99999, sig=2):
x='%.%de' %(num,sig)
TypeError: not all arguments converted during string formatting
x='%d.%de' %(num,sig)
x = 99999.2e (incorrect output)
x='{0}.{1}e'.format(num,sig)
x = 99999.0.2e (incorrect output)
Any help would be appreciated!
If you must do this, then the easiest way will be to just use the built-in formatting, and then just replace the e+05 or e-12 with whatever you'd rather have:
def sci_notation(number, sig_fig=2):
    ret_string = "{0:.{1:d}e}".format(number, sig_fig)
    a, b = ret_string.split("e")
    # remove leading "+" and strip leading zeros
    b = int(b)
    return a + " * 10^" + str(b)
print sci_notation(10000, sig_fig=4)
# 1.0000 * 10^4
Use the new string formatting. The old style you're using is deprecated anyway:
In [1]: "{0:.{1}e}".format(3.0, 5)
Out[1]: '3.00000e+00'
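Applied to the numbers from the question, combining this with the split-and-rebuild approach above (a quick sketch in the same session style):

In [2]: num, sig = 99999, 2

In [3]: mantissa, exp = "{0:.{1}e}".format(num, sig).split("e")

In [4]: "{0} x 10^{1}".format(mantissa, int(exp))
Out[4]: '1.00 x 10^5'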
I have to write a function, s(x) = x * sin(3/x) in python that is capable of taking single values or vectors/arrays, but I'm having a little trouble handling the cases when x is zero (or has an element that's zero). This is what I have so far:
def s(x):
    result = zeros(size(x))
    for a in range(0,size(x)):
        if (x[a] == 0):
            result[a] = 0
        else:
            result[a] = float(x[a] * sin(3.0/x[a]))
    return result
Which...doesn't work for x = 0. And it's kinda messy. Even worse, I'm unable to use sympy's integrate function on it, or use it in my own simpson/trapezoidal rule code. Any ideas?
When I use integrate() on this function, I get the following error message: "Symbol" object does not support indexing.
This takes about 30 seconds per integrate call:
import sympy as sp
x = sp.Symbol('x')
int2 = sp.integrate(x*sp.sin(3./x),(x,0.000001,2)).evalf(8)
print int2
int1 = sp.integrate(x*sp.sin(3./x),(x,0,2)).evalf(8)
print int1
The results are:
1.0996940
-4.5*Si(zoo) + 8.1682775
Clearly you want to start the integration from a small positive number to avoid the problem at x = 0.
You can also assign x*sin(3./x) to a variable, e.g.:
s = x*sin(3./x)
int1 = sp.integrate(s, (x, 0.00001, 2))
My original answer using scipy to compute the integral:
import scipy.integrate
import math
def s(x):
    if abs(x) < 0.00001:
        return 0
    else:
        return x*math.sin(3.0/x)
s_exact = scipy.integrate.quad(s, 0, 2)
print s_exact
See the scipy docs for more integration options.
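For the array-handling part of the question (accepting scalars or arrays without the explicit loop), one possible NumPy sketch, not taken from the answers above:

import numpy as np

def s_vec(x):
    x = np.asarray(x, dtype=float)
    # np.where evaluates both branches, so give the division a harmless dummy value where x == 0
    safe = np.where(x == 0.0, 1.0, x)
    return np.where(x == 0.0, 0.0, x * np.sin(3.0 / safe))

print(s_vec([0.0, 0.5, 1.0]))   # approximately [ 0.      -0.1397   0.1411]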
If you want to use SymPy's integrate, you need a symbolic function. A wrong value at a point doesn't really matter for integration (at least mathematically), so you shouldn't worry about it.
It seems there is a bug in SymPy that gives an answer in terms of zoo at 0, because it isn't using limit correctly. You'll need to compute the limits manually. For example, the integral from 0 to 1:
In [14]: res = integrate(x*sin(3/x), x)
In [15]: ans = limit(res, x, 1) - limit(res, x, 0)
In [16]: ans
Out[16]:
  9⋅π   3⋅cos(3)   sin(3)   9⋅Si(3)
- ─── + ──────── + ────── + ───────
   4       2         2         2
In [17]: ans.evalf()
Out[17]: -0.164075835450162
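The same manual-limit trick covers the (0, 2) interval from the other answer. Since |x*sin(3/x)| <= |x|, truncating the lower limit at 0.000001 there changes the value by less than 1e-12, so this sketch should reproduce its 1.0996940:

In [18]: (limit(res, x, 2) - limit(res, x, 0)).evalf(8)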