For uni I'm doing an assignment where I have to approximate the difference between the sine function and its n-th Taylor approximation. When I run the code that plots these two functions, I get the following error:
TypeError: ufunc 'add' output (typecode 'O') could not be coerced to provided output parameter (typecode 'd') according to the casting rule ''same_kind''
The weird thing (in my opinion) is that the program works fine for n <= 20, but when I choose anything above that, it throws this error.
Does anyone know where in my code the problem may lie? Thanks in advance.
import matplotlib.pyplot as plt
import numpy as np
import math

def constant(n, x):
    return np.full(x.shape, (2*math.pi)**(n+1) / math.factorial(n+1))

def taylor_n(n, x):
    val = 0
    for i in range(1, n+1):
        if i % 2 == 1:
            val += (-1)**((i-1)/2) * x**i / math.factorial(i)
    return val

N = [1, 5, 10, 20, 50]
x = np.linspace(0, 2*math.pi, 100)

for n in N:
    plt.plot(x, abs(np.sin(x) - taylor_n(n, x)))
    plt.plot(x, constant(n, x))
Looks like overflow: the factorials grow too big for NumPy's numeric types. If you cast to Decimal it works:
import decimal
...

def taylor_n(n, x):
    val = 0
    for i in range(1, n+1):
        if i % 2 == 1:
            val += np.array((-1)**((i - 1)/2) * x**i / math.factorial(i),
                            dtype=np.dtype(decimal.Decimal))
    return val
Inside the taylor_n function, the expression (-1)**((i - 1)/2) * x**i / math.factorial(i) is float64 for small i, but once i exceeds 20, math.factorial(i) no longer fits into NumPy's int64 (factorial(21) is about 5.1e19, larger than 2**63 - 1), so NumPy falls back to an object array. That object-typed result (typecode 'O' in the error message) cannot be added into the float64 accumulator under the 'same_kind' casting rule.
The problem only occurred for n = 50 (in fact, for any n > 20); the other values were calculated correctly. (Plot for n = 50 omitted.)
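An alternative fix that stays in float64 (a minimal sketch of my own, not from either answer above): convert the factorial to a float before NumPy sees it, so no object arrays appear. This works up to i = 170; beyond that, float(math.factorial(i)) itself overflows.

import math
import numpy as np

def taylor_n(n, x):
    val = np.zeros_like(x)
    for i in range(1, n + 1, 2):                    # sin(x) only has odd powers
        sign = -1.0 if ((i - 1) // 2) % 2 else 1.0  # alternating signs
        # float() keeps the huge integer out of NumPy, so the sum stays float64
        val += sign * x**i / float(math.factorial(i))
    return val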
I wrote a function phix that takes an x and N as input and returns a summation x_new.
def phix(x, N):
    lam = np.floor(N**(1/3))
    x_new = 0
    for i in range(0, int(lam)):
        if (i+1)*N**(-1/3) > x > i*N**(-1/3):
            x_new = x_new + (2*i+1)/(2*N**(1/3))
    return x_new
Here is an error:

from collections import Counter

N = 1
x = np.random.random_sample(N)
x_new = phix(x, N)
x_val = np.array(list(Counter(x_new).keys()))

TypeError: 'float' object is not iterable
In fact, I want to input N=2 or N=3 and have phix(x, N) return a vector of x_new values, not a single value. I am not sure how to do that. I tried the following code, but it does not seem to work.
def phix(x, N):
    lam = np.floor(N**(1/3))
    x_new = np.zeros(N)
    for i in range(0, N):
        x_newa = 0
        for j in range(0, int(lam)):
            if (j+1)*N**(-1/3) > x[i] > j*N**(-1/3):
                x_newa = x_newa + (2*j+1)/(2*N**(1/3))
        x_new[i] = x_newa
    return x_new
I ran the following example:

N = 3
x = np.random.random_sample(N)
x_new = phix(x, N)
print(x_new)

But it returns all the same values:
[0.34668064 0.34668064 0.34668064]
OK, I don't know why you have two loops in there. There only needs to be one loop, running from 0 to cube-root(N). The function should accept one value (x) and return one value. The value being added does not depend on x at all; x is only used to decide whether an element of the summation is included or not.
So, I believe this produces the result you want. For each of your random values of x, there is one result. And as I said, when N is 3 the inner loop only runs once, so the result can ONLY be 0 or 0.34668. When N=1000 there is a bit more variation, but there are still only about ten possible results.
import numpy as np

def phix(x, N):
    ncr = N**(1/3)
    lam = int(ncr)
    sumx = 0
    for i in range(lam):
        if i/ncr < x < (i+1)/ncr:
            sumx += (i+i+1) / (2*ncr)
    return sumx

N = 1000
x = np.random.random_sample(N)
for x1 in x:
    print(x1, phix(x1, N))
Output (truncated):
0.16252465361984203 0.15000000000000002
0.6527047022599177 0.6500000000000001
0.7733129495624551 0.7500000000000001
0.03800607206261242 0.05000000000000001
0.7116353720754358 0.7500000000000001
0.01845039536391846 0.05000000000000001
0.3398936159178093 0.3500000000000001
0.44312359112375477 0.45000000000000007
0.3010799287710728 0.3500000000000001
0.37401793425303764 0.3500000000000001
0.7049621859196674 0.7500000000000001
0.5044002562214386 0.55
0.30073336035132303 0.3500000000000001
0.31630770524340746 0.3500000000000001
0.8465422342801152 0.8500000000000002
0.39679187879066746 0.3500000000000001
0.10910213537513935 0.15000000000000002
0.8932112016365839 0.8500000000000002
0.9858585971124458 0
0.49024772936880123 0.45000000000000007
(993 more)
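If you want to skip the Python-level loop entirely, here is a vectorized sketch (my own variant, not part of the answer above); unlike the strict inequalities in the loop, it assigns boundary points to a bucket:

import numpy as np

def phix_vec(x, N):
    ncr = N ** (1 / 3)
    lam = int(ncr)
    i = np.floor(x * ncr).astype(int)    # bucket index of each x
    out = (2 * i + 1) / (2 * ncr)        # midpoint of that bucket
    return np.where(i < lam, out, 0.0)   # past the last bucket -> 0

N = 1000
x = np.random.random_sample(N)
print(phix_vec(x, N))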
I have a function and I would like to find its maximum and minimum values. My function is this:
import math

def function(x, y):
    exp = (math.pow(x, 2) + math.pow(y, 2)) * -1
    return math.exp(exp) * math.cos(x * y) * math.sin(x * y)
I have an interval for x [-1, 1] and y [-1, 1]. I would like to find a way, limited to this interval, to discover the max and min values of this function.
Using, for instance, scipy's fmin (which contains an implementation of the Nelder-Mead algorithm), you can try this:
import numpy as np
from scipy.optimize import fmin
import math

def f(x):
    exp = (math.pow(x[0], 2) + math.pow(x[1], 2)) * -1
    return math.exp(exp) * math.cos(x[0] * x[1]) * math.sin(x[0] * x[1])

fmin(f, np.array([0, 0]))
which yields the following output:
Optimization terminated successfully.
Current function value: -0.161198
Iterations: 60
Function evaluations: 113
array([ 0.62665701, -0.62663095])
Please keep in mind that:
1) with scipy you need to convert your function into one accepting an array (I showed how to do it in the example above);
2) fmin uses, like most of its peers, an iterative algorithm, so you must provide a starting point (in my example, (0,0)). You can provide different starting points to obtain different minima/maxima; a sketch for the maximum follows below.
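To find the maximum as well, the usual trick (a sketch continuing the snippet above, not part of the original answer) is to minimize the negated function; note that fmin does not enforce the [-1, 1] bounds, so check that the result stays inside your interval:

# maximize f by minimizing -f; the starting point (0.5, 0.5) is a guess
x_max = fmin(lambda x: -f(x), np.array([0.5, 0.5]))
print(x_max, f(x_max))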
Here is something which gives fairly close estimates (not exact).
import math
import random

def function(x, y):
    exp = (math.pow(x, 2) + math.pow(y, 2)) * -1
    return math.exp(exp) * math.cos(x * y) * math.sin(x * y)

max_func = float('-inf')
min_func = float('inf')
maximal_x, maximal_y = None, None
minimal_x, minimal_y = None, None

for i in range(1000000):
    randx = random.random() * 2 - 1
    randy = random.random() * 2 - 1
    result = function(randx, randy)
    if result > max_func:
        max_func = result
        maximal_x, maximal_y = randx, randy
    if result < min_func:
        min_func = result
        minimal_x, minimal_y = randx, randy

print("Maximal (x, y):", (maximal_x, maximal_y))
print("Max func value:", max_func, '\n')
print("Minimal (x, y):", (minimal_x, minimal_y))
print("Min func value:", min_func)
For a scalar variable x, we know how to write down a numerically stable sigmoid function in Python:

import numpy as np

def sigmoid(x):
    if x >= 0:
        return 1. / (1. + np.exp(-x))
    else:
        return np.exp(x) / (1. + np.exp(x))
For a list of scalars, say z = [x_1, x_2, x_3, ...], and suppose we don't know the sign of each x_i beforehand, we could generalize the above definition and try:
def sigmoid(z):
    result = []
    for x in z:
        if x >= 0:
            result.append(1. / (1. + np.exp(-x)))
        else:
            result.append(np.exp(x) / (1. + np.exp(x)))
    return result
This seems to work. However, I feel this is perhaps not the most pythonic way. How should I improve the definition in terms of 'cleanness'? Say, is there a way to use comprehension to shorten the function definition?
I'm sorry if this has been asked, because I cannot find similar questions on SO. Thank you very much for your time and help!
You are right, you can do better by using np.where, the numpy equivalent of if:
def sigmoid(x):
    return np.where(x >= 0,
                    1 / (1 + np.exp(-x)),
                    np.exp(x) / (1 + np.exp(x)))
This function takes a numpy array x and returns a numpy array, too:
data = np.arange(-5,5)
sigmoid(data)
#array([0.00669285, 0.01798621, 0.04742587, 0.11920292, 0.26894142,
# 0.5 , 0.73105858, 0.88079708, 0.95257413, 0.98201379])
A fully correct answer (no warnings) was provided by @hao peng, but the solution wasn't explained clearly. This would be too long for a comment, so I'll go for an answer.
Let's start with an analysis of a few answers (pure NumPy answers only):
@DYZ's accepted answer
This one is correct mathematically but still gives us a warning. Let's look at the code:
def sigmoid(x):
    return np.where(
        x >= 0,                       # condition
        1 / (1 + np.exp(-x)),         # for positive values
        np.exp(x) / (1 + np.exp(x)),  # for negative values
    )
As both branches are evaluated (they are arguments, so they have to be), the first branch will warn for negative values and the second for positive ones.
Although the warnings are raised, the overflowed results are never selected, hence the result is correct (a quick check follows the list of downsides below).
Downsides
unnecessary evaluation of both branches (twice as many operations as needed)
warnings are thrown
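A quick check of both points (my sketch, reusing the sigmoid defined just above): extreme inputs trigger the warnings, yet the selected values are correct:

x = np.array([-1000.0, 0.0, 1000.0])
print(sigmoid(x))   # warns (overflow / invalid value in exp), but prints [0.  0.5 1. ]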
@ynn's answer
This one is almost correct, BUT it works only on floating point values; see below:
def sigmoid(x):
    return np.piecewise(
        x,
        [x > 0],
        [lambda i: 1 / (1 + np.exp(-i)), lambda i: np.exp(i) / (1 + np.exp(i))],
    )
sigmoid(np.array([0.0, 1.0])) # [0.5 0.73105858] correct
sigmoid(np.array([0, 1])) # [0, 0] incorrect
Why? A longer answer was provided by @mhawke in another thread, but the main point is:
It seems that piecewise() converts the return values to the same type
as the input so, when an integer is input an integer conversion is
performed on the result, which is then returned.
Downsides
no automatic casting due to strange behavior of piecewise function
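The simple workaround (a quick sketch, reusing ynn's sigmoid above) is to cast the input to float before the call:

print(sigmoid(np.array([0, 1]).astype(float)))   # [0.5 0.73105858], correct again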
Improved @hao peng answer
The idea of the stable sigmoid comes from the fact that:

sigmoid(x) = 1 / (1 + exp(-x)) = exp(x) / (1 + exp(x))

Both versions are equally efficient in terms of operations if coded correctly (one exp evaluation is enough). Now:

e^x will overflow when x is positive
e^-x will overflow when x is negative

Hence we have to branch at x equal to zero. Using numpy's masking we can transform only the part of the array which is positive or negative with the matching sigmoid implementation.
See code comments for additional points:
def _positive_sigmoid(x):
    return 1 / (1 + np.exp(-x))

def _negative_sigmoid(x):
    # Cache exp so you won't have to calculate it twice
    exp = np.exp(x)
    return exp / (exp + 1)

def sigmoid(x):
    positive = x >= 0
    # Boolean array inversion is faster than another comparison
    negative = ~positive

    # empty_like contains junk, hence it is faster to allocate than
    # zeros_like, which has to zero out the array after allocation;
    # every element is overwritten below anyway.
    # (np.float was removed in NumPy 1.24; use np.float64 instead.)
    result = np.empty_like(x, dtype=np.float64)
    result[positive] = _positive_sigmoid(x[positive])
    result[negative] = _negative_sigmoid(x[negative])

    return result
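With this version the same extreme inputs pass through silently; a quick sketch:

x = np.array([-1000.0, 0.0, 1000.0])
print(sigmoid(x))   # [0.  0.5 1. ] with no RuntimeWarning, because each
                    # branch only ever sees inputs it can handle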
Time measurements
Results (the 50-iteration test case from @ynn):
289.5070939064026   # @DYZ
222.49267292022705  # @ynn
230.81086134910583  # this answer
Indeed piecewise seems faster here (I am not sure about the reasons; maybe the masking and the extra masked assignments make this version slower).
Code below was used:
import time
import numpy as np

def _positive_sigmoid(x):
    return 1 / (1 + np.exp(-x))

def _negative_sigmoid(x):
    # Cache exp so you won't have to calculate it twice
    exp = np.exp(x)
    return exp / (exp + 1)

def sigmoid(x):
    positive = x >= 0
    # Boolean array inversion is faster than another comparison
    negative = ~positive

    # empty contains junk, hence it is faster to allocate than zeros
    result = np.empty_like(x)
    result[positive] = _positive_sigmoid(x[positive])
    result[negative] = _negative_sigmoid(x[negative])

    return result

N = int(1e4)
x = np.random.uniform(size=(N, N))

start: float = time.time()
for _ in range(50):
    y1 = np.where(x > 0, 1 / (1 + np.exp(-x)), np.exp(x) / (1 + np.exp(x)))
    y1 += 1
end: float = time.time()
print(end - start)

start: float = time.time()
for _ in range(50):
    y2 = np.piecewise(
        x,
        [x > 0],
        [lambda i: 1 / (1 + np.exp(-i)), lambda i: np.exp(i) / (1 + np.exp(i))],
    )
    y2 += 1
end: float = time.time()
print(end - start)

start: float = time.time()
for _ in range(50):
    y2 = sigmoid(x)
    y2 += 1
end: float = time.time()
print(end - start)
def sigmoid(x):
    """
    A numerically stable version of the logistic sigmoid function.
    """
    pos_mask = (x >= 0)
    neg_mask = (x < 0)
    # z = exp(-|x|) everywhere, so it never overflows (it is always <= 1)
    z = np.zeros_like(x)
    z[pos_mask] = np.exp(-x[pos_mask])
    z[neg_mask] = np.exp(x[neg_mask])
    # top is 1 for x >= 0 (giving 1/(1+e^-x)) and e^x for x < 0
    # (giving e^x/(1+e^x)), the two equivalent forms of the sigmoid
    top = np.ones_like(x)
    top[neg_mask] = z[neg_mask]
    return top / (1 + z)
This piece of code comes from assignment 3 of cs231n. I don't really understand why it is calculated this way, but I know this may be the code you are looking for. Hope it helps.
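For what it's worth, SciPy already ships a numerically stable sigmoid as scipy.special.expit; a quick sanity check (my own sketch) that the snippet above agrees with it:

import numpy as np
from scipy.special import expit

x = np.linspace(-500, 500, 11)
assert np.allclose(sigmoid(x), expit(x))   # matches SciPy's stable implementation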
The accepted answer is correct but, as pointed out by this comment, it calculates both branches and is thus problematic.
Rather, you may want to use np.piecewise(). This is much faster, more meaningful (np.where is not intended for defining a piecewise function), and free of the misleading warnings caused by evaluating both branches.
Benchmark
Source Code
import numpy as np
import time

N: int = int(1e+4)
np.random.seed(0)

x: np.ndarray = np.random.random((N, N))
x *= 1e+3

start: float = time.time()
y1 = np.where(x > 0, 1 / (1 + np.exp(-x)), np.exp(x) / (1 + np.exp(x)))
end: float = time.time()
print()
print(end - start)

start: float = time.time()
y2 = np.piecewise(x, [x > 0], [lambda i: 1 / (1 + np.exp(-i)), lambda i: np.exp(i) / (1 + np.exp(i))])
end: float = time.time()
print(end - start)

assert np.array_equal(y1, y2)
Result
np.piecewise() is silent and twice as fast!
test.py:12: RuntimeWarning: overflow encountered in exp
y1 = np.where(x > 0, 1 / (1 + np.exp(-x)), np.exp(x) / (1 + np.exp(x)))
test.py:12: RuntimeWarning: invalid value encountered in true_divide
y1 = np.where(x > 0, 1 / (1 + np.exp(-x)), np.exp(x) / (1 + np.exp(x)))
6.32736349105835
3.138420343399048
Another alternative to your code is the following:

def sigmoid(z):
    return [1. / (1. + np.exp(-x)) if x >= 0 else np.exp(x) / (1. + np.exp(x)) for x in z]
I wrote one trick; I guess np.where or torch.where are implemented in the same manner to deal with binary conditions:

def sigmoid(x, max_v=1.0):
    sign = (torch.sign(x) + 3) // 3                # 1 for x >= 0, 0 for x < 0
    x = torch.abs(x)
    res = max_v / (1 + torch.exp(-x))              # stable, since -x <= 0 here
    res = res * sign + (1 - sign) * (max_v - res)  # reflect the result for negative inputs
    return res
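A quick check of the trick (my sketch): the reflection reproduces the stable sigmoid on both tails without overflow:

import torch

x = torch.tensor([-1000.0, 0.0, 1000.0])
print(sigmoid(x))   # tensor([0.0000, 0.5000, 1.0000]), no overflow on either tail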
I am trying to apply numpy to this code I wrote for trapezium rule integration:
def integral(a, b, n):
    delta = (b-a)/float(n)
    s = 0.0
    s += np.sin(a)/(a*2)
    for i in range(1, n):
        s += np.sin(a + i*delta)/(a + i*delta)
    s += np.sin(b)/(b*2.0)
    return s * delta
I am trying to get the return value of the new function to be something like this:
return delta *((2 *np.sin(x[1:-1])) +np.sin(x[0])+np.sin(x[-1]) )/2*x
I have been trying for a long time to make a breakthrough, but all my attempts have failed.
One thing I attempted, and do not understand, is why the following code gives a "too many indices for array" error:
def integral(a, b, n):
    d = (b-a)/float(n)
    x = np.arange(a, b, d)
    J = np.where(x[:,1] < np.sin(x[:,0])/x[:,0])[0]
Every hint/advice is very much appreciated.
You forgot to sum over sin(x):
>>> def integral(a, b, n):
... x, delta = np.linspace(a, b, n+1, retstep=True)
... y = np.sin(x)
... y[0] /= 2
... y[-1] /= 2
... return delta * y.sum()
...
>>> integral(0, np.pi / 2, 10000)
0.9999999979438324
>>> integral(0, 2 * np.pi, 10000)
0.0
>>> from scipy.integrate import quad
>>> quad(np.sin, 0, np.pi / 2)
(0.9999999999999999, 1.1102230246251564e-14)
>>> quad(np.sin, 0, 2 * np.pi)
(2.221501482512777e-16, 4.3998892617845996e-14)
I tried this meanwhile, too.
import numpy as np

def T_n(a, b, n, fun):
    delta = (b - a) / float(n)               # step width
    x_i = lambda a, i, delta: a + i * delta  # calculate x_i
    return 0.5 * delta * (
        2 * sum(fun(x_i(a, np.arange(0, n + 1), delta)))
        - fun(x_i(a, 0, delta))
        - fun(x_i(a, n, delta))
    )
I reconstructed the code using the formulas at the bottom of this page:
https://matheguru.com/integralrechnung/trapezregel.html
The sum over range(0, n+1), which gives [0, 1, ..., n], is implemented with numpy. Usually you would collect the values with a for loop in plain Python, but numpy's vectorized behaviour can be used here: np.arange(0, n+1) gives np.array([0, 1, ..., n]). Passing it to the function (abstracted here as fun) evaluates the formula at x_0 through x_n and collects the results in a numpy array. So fun(x_i(...)) returns a numpy array of the function applied to x_0 through x_n, which is then summed up by sum().
The entire sum is multiplied by 2, and the function values at x_0 and x_n are subtracted afterwards, since in the trapezoid formula only the middle summands, not the first and last, are multiplied by 2. This was kind of a hack.
The linked German page uses the function fun(x) = x^2 + 3, which can be nicely defined on the fly using a lambda expression:

fun = lambda x: x ** 2 + 3
a = -2
b = 3
n = 6

You could use a normal function definition instead, too: def fun(x): return x ** 2 + 3.
So I tested by typing the command:
T_n(a, b, n, fun)
Which correctly returned:
## Out[172]: 27.24537037037037
For your case, just assign np.sin to fun, and pass your values for a, b, and n into the function call.
Like:
fun = np.sin  # everywhere `fun` appears inside the function it will behave
              # as if `np.sin` stood there; this is possible because Python
              # treats its functions as first-class citizens
a = # your value
b = # your value
n = # your value
Finally, you can call:
T_n(a, b, n, fun)
And it will work!
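For a concrete check (my own sketch, using the same interval as the other answer above):

fun = np.sin
print(T_n(0, np.pi / 2, 10000, fun))   # ~0.9999999979, matching the linspace answer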
I am doing some numerical analysis whereby I have a series of python lists of the form
listn = [1, 3.1, 4.2]
I want to transform these into functions mapped onto a domain between x_0 and x_1, so I can pass the function object to a higher-order function that I am using to analyse the data. (Outside the specified domain, the function is chosen to be zero.) The function produced needs to be continuous for my purposes, and at the moment I am just returning a piecewise linear function.
I have come up with the convoluted solution below, but surely there is a nicer way of doing this in a few lines?
import math

def to_function(array_like, x_0=0, x_1=1):
    assert x_1 > x_0, "X_0 > X_1"
    def g(s, a=array_like, lower=x_0, upper=x_1):
        if lower < s <= upper:
            scaled = (1.0*(s-lower) / (upper - lower)) * (len(a) - 1)
            dec, whole = math.modf(scaled)
            return (1.0 - dec) * a[int(whole)] + dec * a[int(whole + 1)]
        else:
            return 0
    return g

b = to_function([0, 1, 2, 3, 4, 5], x_0=0, x_1=5)
print(b(1))
print(b(2))
print(b(3))
print(b(3.4))
Will scipy's 1d interpolation functions work?
import numpy as np
from scipy.interpolate import interp1d

x = y = np.arange(5)
f = interp1d(x, y, kind="linear", fill_value=0., bounds_error=False)

print(f(1))
print(f(2))
print(f(3))
print(f(3.4))
Which gives:
1.0
2.0
3.0
3.4
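To keep the asker's exact interface (map the list onto [x_0, x_1] and return 0 outside), here is a short sketch wrapping interp1d; the to_function name and defaults are taken from the question:

import numpy as np
from scipy.interpolate import interp1d

def to_function(array_like, x_0=0, x_1=1):
    # Spread the samples evenly over [x_0, x_1], zero outside the domain
    xs = np.linspace(x_0, x_1, len(array_like))
    return interp1d(xs, array_like, kind="linear",
                    fill_value=0., bounds_error=False)

b = to_function([0, 1, 2, 3, 4, 5], x_0=0, x_1=5)
print(b(3.4))   # 3.4, same as the handwritten version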