I keep getting this error when I execute annuity_rate(5, 100, 510) or when I try negative values. How can I fix this?
It works well with large numbers but somehow not for negative and small numbers.
def pv_annuity(r, n, pmt):
    """Return the present value of an annuity of pmt to be received
    each period for n periods."""
    pv = pmt * (1 - (1 + r) ** (-n)) / r
    return pv
def annuity_rate(n, pmt, pv):
    """Return the rate of interest required to amortize the pv in n periods
    with equal periodic payments of pmt."""
    rate_low, rate_high = 0, 1
    while True:
        rate = (rate_high + rate_low) / 2
        #print('trying rate', rate)
        test_pv = pv_annuity(rate, n, pmt)
        #print(test_pv)
        if abs(pv - test_pv) <= 0.01:
            break
        if test_pv > pv:
            rate_low = (rate_high + rate_low) / 2
        if test_pv < pv:
            rate_high = (rate_high + rate_low) / 2
    return rate
Using your example of annuity_rate(5, 100, 510):
The present value of this annuity can never reach 510: pv_annuity(r, 5, 100) approaches n * pmt = 500 as the rate approaches 0 and decreases as the rate grows, so test_pv is always below pv. Only the rate_high branch ever fires, halving the rate on every iteration.
Once the rate is small enough, (1 + r) ** (-n) rounds to exactly 1.0 in float64, so the numerator of test_pv = pmt * (1 - (1 + r) ** (-n)) / r becomes 0 and test_pv collapses to zero.
From then on, the rate keeps halving with test_pv stuck at zero (rate_high = (rate_high + rate_low) / 2).
The rate finally reaches 5e-324, the smallest positive float; one more halving rounds it to exactly zero, making pmt * (1 - (1 + r) ** (-n)) / r a division by zero.
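A quick check makes the underflow concrete (illustrative values, not the poster's code):

r = 1e-17  # smaller than float64 machine epsilon (~2.2e-16)
print(1 + r == 1.0)                     # True: 1 + r rounds back to 1.0
print(100 * (1 - (1 + r) ** (-5)) / r)  # 0.0: the numerator has underflowed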
Suggested solution:
Validate the inputs before searching: a pv at or above n * pmt can never be amortized at a positive rate, so the function should reject it (or adjust pmt so the target becomes reachable) instead of bisecting the rate toward zero. A sketch of such a guard follows.
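One possible guard (an illustrative sketch, not the poster's exact fix; annuity_rate_safe is a hypothetical variant):

def annuity_rate_safe(n, pmt, pv, tol=0.01, max_iter=200):
    """Bisection with a feasibility guard; a sketch built on pv_annuity above."""
    # pv_annuity(r, n, pmt) < n * pmt for every r > 0, so larger targets are hopeless
    if pv >= n * pmt:
        raise ValueError("pv of %s is unattainable; the maximum is below %s" % (pv, n * pmt))
    rate_low, rate_high = 0.0, 1.0
    for _ in range(max_iter):
        rate = (rate_low + rate_high) / 2
        test_pv = pv_annuity(rate, n, pmt)
        if abs(pv - test_pv) <= tol:
            return rate
        if test_pv > pv:
            rate_low = rate    # PV too high -> the rate must be higher
        else:
            rate_high = rate   # PV too low -> the rate must be lower
    raise ArithmeticError("no rate found within %d iterations" % max_iter)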
I have the following simple function to evaluate.
def f0(wt):
    term1 = (1 + np.cos(wt)**2) * (1 / 3 - 2 / (wt)**2)
    term2 = np.sin(wt)**2
    term3 = 4 / (wt)**3 * np.cos(wt) * np.sin(wt)
    return 0.5 * (term1 + term2 + term3)
For small values of wt (order 1e-4 and below), I seem to have numerical problems in the evaluation of the function: term1 and term3 take very large, almost exactly opposite values, while term2 is very small.
I think I improved things slightly by splitting the sum of the three terms into two parts, as shown here:
def f1(wt):
    # Split the calculation to hopefully gain stability
    term1 = (1 + np.cos(wt)**2) * (1 / 3 - 2 / (wt)**2)
    term2 = np.sin(wt)**2
    term3 = 4 / (wt)**3 * np.cos(wt) * np.sin(wt)
    partial = term1 + term3
    return 0.5 * (partial + term2)
However, for very small but positive values of wt, I think there are still numerical problems. I expect this function to be smooth for any positive value of wt, but, as you can see from the attached plot, there are wild artifacts at values below 1e-3.
My question is: how can I improve the numerical precision of NumPy if I am already using the float64 data type?
Note: I am on a 64-bit Windows 10 machine. I have read on other Stack Overflow threads that the class np.float128 is not available there.
Full code snippet
import numpy as np
import matplotlib.pyplot as plt

wt = np.logspace(-6, 1, 1000)

def f0(wt):
    term1 = (1 + np.cos(wt)**2) * (1 / 3 - 2 / (wt)**2)
    term2 = np.sin(wt)**2
    term3 = 4 / (wt)**3 * np.cos(wt) * np.sin(wt)
    return 0.5 * (term1 + term2 + term3)

def f1(wt):
    # Split the calculation to hopefully gain stability
    term1 = (1 + np.cos(wt)**2) * (1 / 3 - 2 / (wt)**2)
    term2 = np.sin(wt)**2
    term3 = 4 / (wt)**3 * np.cos(wt) * np.sin(wt)
    partial = term1 + term3
    return 0.5 * (partial + term2)

plt.figure()
plt.loglog(wt, f0(wt), label='f0')
plt.loglog(wt, f1(wt), label='f1')
plt.grid()
plt.legend()
plt.xlabel('wt')
plt.show()
How about replacing the sin and cosine with the first few terms of their Taylor series? Then sympy is able to give you a simple result that is hopefully better suited numerically.
First I slightly change your function so it gives me a sympy expression.
import sympy
from sympy import *

t = symbols('t')

def f0(wt):
    term1 = (1 + sympy.cos(wt)**2) * (sympy.Rational(1, 3) - 2 / (wt)**2)
    term2 = sympy.sin(wt)**2
    term3 = 4 / (wt)**3 * sympy.cos(wt) * sympy.sin(wt)
    return sympy.Rational(1, 2) * (term1 + term2 + term3)

expr = f0(t)
expr
Now I replace sin and cos with their Taylor polynomials.
def taylor(f, n):
    return sum(t**i / factorial(i) * f(t).diff(t, i).subs(t, 0) for i in range(n))

tsin = taylor(sin, 7)
tcos = taylor(cos, 7)

expr2 = simplify(expr.subs(sin(t), tsin).subs(cos(t), tcos))
f1 = lambdify(t, expr2, 'numpy')
expr2
And finally I plot it using exactly your code. Notice that I am using sympy's option to make a NumPy ufunc.
wt = np.logspace(-6, 1, 1000)
plt.figure()
plt.loglog(wt, f0(wt), label='f0')
plt.loglog(wt, f1(wt), label='f1')
plt.grid()
plt.legend()
plt.xlabel('wt')
plt.show()
Obviously this expansion is only good around zero; for values between 1 and 10 you should take the original function. But in case you need convincing, and don't mind that the Taylor-substituted function is no longer tidy, you can crank the degree up to 25, making it visually agree with your function at least up until 10.
And you can combine the two functions, so that values around zero are calculated with my function and the rest with yours, like this:
def f2(wt):
    cond = np.abs(wt) > 1/10
    return np.piecewise(wt, [cond, ~cond], [f0, f1])
The problem you are facing is catastrophic cancellation, and it should not be addressed with higher precision, as doing so generally just postpones the actual problem. The root of the problem, a numerical instability, must be solved by reformulating the mathematical expression.
Note that f1 is a bit better than f0, but the cancellation issue lies in term1 + term3.
By transforming the expression with simple expansion/factorization operations and trigonometric identities, one can get the following function:
def f2(wt):
    sw = np.sin(wt)
    sw2 = np.sin(2*wt)
    return (sw/wt)**2 + 1/3 + (sw2 / wt - 2) / wt**2 + sw**2 / 3
This function is a bit more accurate but still contains a cancellation causing the same issue. This happens because of the expression E = (sw2 / wt - 2) / wt**2 which is the root of the problem. Indeed, np.sin(2*wt) tends towards 2 when wt is near 0. Thus sw2 / wt - 2 is close to 0 and the expression E is numerically unstable because of a close-to-zero value divided by another close-to-zero value. If one can reformulate analytically E to remove the singularity, then the resulting expression will likely be numerically stable. For more information you can look at the sinc function and how to compute an approximation of this function (also available in Numpy).
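To see the cancellation concretely (a quick illustration with an arbitrary value):

import numpy as np

wt = 1e-4
sw2 = np.sin(2 * wt)
# sin(2*wt)/wt equals 2 minus a tiny correction; subtracting 2 leaves a number
# around -6.7e-9 whose absolute error is still ~2e-16, so about half of its
# significant digits are already lost, and the loss worsens as wt shrinks
print(sw2 / wt - 2)             # about -6.67e-9
print((sw2 / wt - 2) / wt**2)   # about -0.667, amplified by the division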
One simple way to solve this is to use numerical tools like Taylor series. A Taylor series can approximate the expression of E close to zero accurately (because of its derivatives). Actually, one can use a Taylor series to compute the whole expression and not only E. However, using Taylor series for values close to 1 gives inaccurate results; in fact, the accuracy of the method drops very quickly above 1. One solution is to use the Taylor series only for small values.
Here is the resulting implementation:
def f3(wt):
    sw = np.sin(wt)
    sw2 = np.sin(2*wt)
    reference = (sw/wt)**2 + 1/3 + (sw2 / wt - 2) / wt**2 + sw**2 / 3
    # O(13) Taylor series, used only for near-zero values
    taylor = (  (  4. /        15.) * wt**2
              - ( 29. /       315.) * wt**4
              + ( 37. /      2835.) * wt**6
              - (151. /    155925.) * wt**8
              + (268. /   6081075.) * wt**10
              - (866. / 638512875.) * wt**12)
    # Select the best implementation
    return np.where(np.logical_and(wt >= -0.2, wt <= 0.2), taylor, reference)
This implementation appears to be very accurate in practice (>= 12 digits of precision) while remaining relatively fast.
I'm practicing using the Binet formula to compute Fibonacci numbers. Following the formula, I came up with the code below, and it passed the test cases on LeetCode:
class Solution(object):
    def fib(self, N):
        goldenratio = (1 + 5 ** 0.5) / 2
        ratio2 = (1 - 5 ** 0.5) / 2
        return int((goldenratio**N - ratio2**N) / (5**0.5))
But I don't understand the solution given by leetcode (gives correct results of course):
class Solution:
    def fib(self, N):
        golden_ratio = (1 + 5 ** 0.5) / 2
        return int((golden_ratio ** N + 1) / 5 ** 0.5)
My question about the LeetCode solution is: why do they add 1 after golden_ratio ** N? According to the Binet formula, I think my code is correct, but I want to know why LeetCode uses another way and still gets correct results.
Here is the link for Binet formula:
https://artofproblemsolving.com/wiki/index.php/Binet%27s_Formula
Your code is a floating-point rendering of the exact formula (φ^n − ψ^n)/√5; it is correct up to the precision limits of the machine representation, but fails once the result grows beyond that point.
The given solution is a reasonable attempt to correct that fault: instead of subtracting the precise correction term ψ^n/√5, whose magnitude is easily shown to be less than 1/2, it merely adds a small constant and truncates to the floor integer, yielding correct results further out than your "exact" implementation.
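In symbols: F(n) = (φ^n − ψ^n)/√5 with φ = (1+√5)/2 ≈ 1.618 and ψ = (1−√5)/2 ≈ −0.618. Since |ψ| ≤ 1, the quantity (ψ^n + 1)/√5 lies between 0 and 2/√5 < 1 for every n ≥ 0, so floor((φ^n + 1)/√5) = floor(F(n) + (ψ^n + 1)/√5) = F(n): adding 1 before truncating absorbs the dropped ψ^n term exactly.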
Try generating some results:
def fib_exact(n):
    goldenratio = (1 + 5 ** 0.5) / 2
    ratio2 = (1 - 5 ** 0.5) / 2
    return int((goldenratio**n - ratio2**n) / (5**0.5))

def fib_trunc(n):
    golden_ratio = (1 + 5 ** 0.5) / 2
    return int((golden_ratio ** n + 1) / 5 ** 0.5)

for n in range(100):
    a = fib_trunc(n)
    b = fib_exact(n)
    print(n, a - b, a, b)
I have been trying to create a custom calculator for trigonometric functions. Aside from Chebyshev polynomials and/or the CORDIC algorithm, I have used Taylor series, which have been accurate to a few decimal places.
This is what I have created to calculate simple trigonometric functions without any modules:
from __future__ import division

def sqrt(n):
    ans = n ** 0.5
    return ans

def factorial(n):
    k = 1
    for i in range(1, n+1):
        k = i * k
    return k

def sin(d):
    pi = 3.14159265359
    n = 180 / int(d)  # 180 degrees = pi radians
    x = pi / n  # Converting degrees to radians
    ans = x - (x ** 3 / factorial(3)) + (x ** 5 / factorial(5)) - (x ** 7 / factorial(7)) + (x ** 9 / factorial(9))
    return ans

def cos(d):
    pi = 3.14159265359
    n = 180 / int(d)
    x = pi / n
    ans = 1 - (x ** 2 / factorial(2)) + (x ** 4 / factorial(4)) - (x ** 6 / factorial(6)) + (x ** 8 / factorial(8))
    return ans

def tan(d):
    ans = sin(d) / sqrt(1 - sin(d) ** 2)
    return ans
Unfortunately I could not find any sources that would help me implement the inverse trigonometric functions in Python. I have also tried raising sin(x) to the power of -1 (sin(x) ** -1), which didn't work as expected.
What could be the best solution to do this in Python (by best I mean simplest, with accuracy similar to the Taylor series)? Is this possible with a power series, or do I need the CORDIC algorithm?
The question is broad in scope, but here are some simple ideas (and code!) that might serve as a starting point for computing arctan. First, the good old Taylor series. For simplicity, we use a fixed number of terms; in practice, you might want to decide the number of terms to use dynamically based on the size of x, or introduce some kind of convergence criterion. With a fixed number of terms, we can evaluate efficiently using something akin to Horner's scheme.
def arctan_taylor(x, terms=9):
    """
    Compute arctan for small x via Taylor polynomials.

    Uses a fixed number of terms. The default of 9 should give good results
    for abs(x) < 0.1. Results will become poorer as abs(x) increases,
    becoming unusable as abs(x) approaches 1.0 (the radius of convergence
    of the series).
    """
    # Uses Horner's method for evaluation.
    t = 0.0
    for n in range(2*terms - 1, 0, -2):
        t = 1.0/n - x*x*t
    return x * t
The above code gives good results for small x (say smaller than 0.1 in absolute value), but the accuracy drops off as x becomes larger, and for abs(x) > 1.0, the series never converges, no matter how many terms (or how much extra precision) we throw at it. So we need a better way to compute for larger x. One solution is to use argument reduction, via the identity arctan(x) = 2 * arctan(x / (1 + sqrt(1 + x^2))). This gives the following code, which builds on arctan_taylor to give reasonable results for a wide range of x (but beware possible overflow and underflow when computing x*x).
import math

def arctan_taylor_with_reduction(x, terms=9, threshold=0.1):
    """
    Compute arctan via argument reduction and Taylor series.

    Applies reduction steps until x is below `threshold`,
    then uses the Taylor series.
    """
    reductions = 0
    while abs(x) > threshold:
        x = x / (1 + math.sqrt(1 + x*x))
        reductions += 1
    return arctan_taylor(x, terms=terms) * 2**reductions
Alternatively, given an existing implementation for tan, you could simply find a solution y to the equation tan(y) = x using traditional root-finding methods. Since arctan is already naturally bounded to lie in the interval (-pi/2, pi/2), bisection search works well:
def arctan_from_tan(x, tolerance=1e-15):
    """
    Compute arctan as the inverse of tan, via bisection search. This assumes
    that you already have a high-quality tan function.
    """
    low, high = -0.5 * math.pi, 0.5 * math.pi
    while high - low > tolerance:
        mid = 0.5 * (low + high)
        if math.tan(mid) < x:
            low = mid
        else:
            high = mid
    return 0.5 * (low + high)
Finally, just for fun, here's a CORDIC-like implementation, which is really more appropriate for a low-level implementation than for Python. The idea here is that you precompute, once and for all, a table of arctan values for 1, 1/2, 1/4, etc., and then use those to compute general arctan values, essentially by computing successive approximations to the true angle. The remarkable part is that, after the precomputation step, the arctan computation involves only additions, subtractions, and multiplications by powers of 2. (Of course, those multiplications aren't any more efficient than any other multiplication at the level of Python, but closer to the hardware, this could potentially make a big difference.)
cordic_table_size = 60
cordic_table = [(2**-i, math.atan(2**-i))
                for i in range(cordic_table_size)]

def arctan_cordic(y, x=1.0):
    """
    Compute arctan(y/x), assuming x positive, via a CORDIC-like method.
    """
    r = 0.0
    for t, a in cordic_table:
        if y < 0:
            r, x, y = r - a, x - t*y, y + t*x
        else:
            r, x, y = r + a, x + t*y, y - t*x
    return r
Each of the above methods has its strengths and weaknesses, and all of the above code can be improved in a myriad of ways. I encourage you to experiment and explore.
To wrap it all up, here are the results of calling the above functions on a small number of not-very-carefully-chosen test values, comparing with the output of the standard library math.atan function:
test_values = [2.314, 0.0123, -0.56, 168.9]
for value in test_values:
    print("{:20.15g} {:20.15g} {:20.15g} {:20.15g}".format(
        math.atan(value),
        arctan_taylor_with_reduction(value),
        arctan_from_tan(value),
        arctan_cordic(value),
    ))
Output on my machine:
1.16288340166519 1.16288340166519 1.16288340166519 1.16288340166519
0.0122993797673 0.0122993797673 0.0122993797673002 0.0122993797672999
-0.510488321916776 -0.510488321916776 -0.510488321916776 -0.510488321916776
1.56487573286064 1.56487573286064 1.56487573286064 1.56487573286064
The simplest way to do any inverse function is to use binary search.
Definitions
Let's assume a function
x = g(y)
and we want to code its inverse:
y = f(x) = f(g(y))
x = <x0, x1>
y = <y0, y1>
Binary search on floats
You can do this in integer math by accessing the mantissa bits directly, as in Any Faster RMS Value Calculation in C?, but if you do not know the exponent of the result prior to computation, then you need to use floats for the binary search too.
The idea behind the binary search is to build the mantissa of y bit by bit, from MSB to LSB. Set each candidate bit, call the direct function g(y), and if the result crosses x, revert the last bit change.
When using floats, you can instead keep the approximate value of the currently targeted mantissa bit in a variable, which eliminates the unknown-exponent problem. At the beginning set y = y0 and the current bit value to b = (y1 - y0) / 2. After each iteration halve b, and do as many iterations as you have mantissa bits, n. This way you obtain the result in n iterations, to within (y1 - y0) / 2^n accuracy.
If your inverse function is not monotonic, break it into monotonic intervals and handle each as a separate binary search.
Whether the function is increasing or decreasing just determines the direction of the crossing condition (use of < or >).
C++ acos example
So y = acos(x) is defined on x = <-1, +1>, y = <0, M_PI>, and is decreasing, so:
double f64_acos(double x)
    {
    const int n=52;     // mantissa bits
    double y,y0,b;
    int i;
    // handle domain error
    if (x<-1.0) return 0;
    if (x>+1.0) return 0;
    // x = <-1,+1> , y = <0,M_PI> , decreasing
    for (y=0.0,b=0.5*M_PI,i=0;i<n;i++,b*=0.5)   // y is min, b is half of max, halved each iteration
        {
        y0=y;                   // remember original y
        y+=b;                   // try to set the "bit"
        if (cos(y)<x) y=y0;     // if the result crosses x, return to the original y
                                // (decreasing uses <, increasing would use >)
        }
    return y;
    }
I tested it like this:
double x0,x1,y;
for (x0=0.0;x0<M_PI;x0+=M_PI*0.01)  // cycle the whole angle range <0,M_PI>
    {
    y=cos(x0);                      // direct function (from math.h)
    x1=f64_acos(y);                 // my inverse function
    if (fabs(x1-x0)>1e-9)           // check the result and log any error
        Form1->mm_log->Lines->Add(AnsiString().sprintf("acos(%8.3lf) = %8.3lf != %8.3lf",y,x0,x1));
    }
No difference was found, so the implementation works correctly. Of course, a binary search on a 52-bit mantissa is usually slower than a polynomial approximation... on the other hand, the implementation is wonderfully simple...
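For comparison, here is a rough Python transliteration of the same bit-halving idea (a sketch, using math.cos from the standard library as the direct function):

import math

def acos_bisect(x, n=52):
    """Approximate acos(x) on <-1,+1> by bit-by-bit bisection; a sketch."""
    if x < -1.0 or x > 1.0:
        return 0.0                  # crude domain handling, as in the C++ version
    y = 0.0
    b = 0.5 * math.pi               # value of the current "bit", halved each pass
    for _ in range(n):
        y0 = y
        y += b                      # try to set this "bit"
        if math.cos(y) < x:         # cos is decreasing: overshoot, so revert
            y = y0
        b *= 0.5
    return y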
[Notes]
If you do not want to take care of the monotonic intervals, you can try an approximation search instead.
As you are dealing with trigonometric functions, you need to handle the singularities to avoid NaN or division by zero, etc.
If you're interested, there are more binary-search examples (mostly on integers) in Power by squaring for negative exponents.
Line of code in question:
summing += yval * np.log(sigmoid(np.dot(w.transpose(), xi.transpose()))) \
           + (1 - yval) * np.log(max(0.001, 1 - sigmoid(np.dot(w.transpose(), xi.transpose()))))
Error:
File "classify.py", line 67, in sigmoid
return 1/(1+ math.exp(-gamma))
OverflowError: math range error
The sigmoid function is just 1/(1+ math.exp(-gamma)).
I'm getting a math range error. Does anyone see why?
You can avoid this problem by using different cases for positive and negative gamma:
import math

def sigmoid(gamma):
    if gamma < 0:
        return 1 - 1/(1 + math.exp(gamma))
    else:
        return 1/(1 + math.exp(-gamma))
The math range error is likely because your gamma argument is a large negative value, so you are calling exp() with a large positive value. It is very easy to exceed your floating point range that way.
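The threshold is easy to demonstrate (a quick illustration):

import math

math.exp(709)   # about 8.2e307, still within float64 range
math.exp(710)   # raises OverflowError: math range error (float64 max is ~1.8e308)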
The problem is that, when gamma becomes large, math.exp(gamma) overflows. You can avoid this problem by noticing that
sigmoid(x) = 1 / (1 + exp(-x))
           = exp(x) / (exp(x) + 1)
           = 1 - 1 / (1 + exp(x))
           = 1 - sigmoid(-x)
This gives you a numerically stable implementation of sigmoid which guarantees you never even call math.exp with a positive value:
import math

def sigmoid(gamma):
    if gamma < 0:
        return 1 - 1 / (1 + math.exp(gamma))
    return 1 / (1 + math.exp(-gamma))
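A quick sanity check of the guarded version (illustrative values):

print(sigmoid(-1000))   # 0.0 instead of an OverflowError
print(sigmoid(1000))    # 1.0
print(sigmoid(0))       # 0.5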
SciPy/Numpy seems to support many filters, but not the root-raised cosine filter. Is there a trick to easily create one rather than calculating the transfer function? An approximation would be fine as well.
The commpy package has several filters included with it. The order of return variables was switched in an earlier version (as of this edit, the current version is 0.7.0). To install, follow the instructions here or here.
Here's a use example for 1024 symbols of QAM16:
import numpy as np
from commpy.modulation import QAMModem
from commpy.filters import rrcosfilter

N = 1024         # number of symbols
over_sample = 8  # oversampling factor

# Create modulation. QAM16 makes 4 bits/symbol
mod1 = QAMModem(16)

# Generate the bit stream for N symbols
sB = np.random.randint(0, 2, N * mod1.num_bits_symbol)

# Generate N complex-integer valued symbols
sQ = mod1.modulate(sB)
sQ_upsampled = np.zeros(over_sample * (len(sQ) - 1) + 1, dtype=np.complex64)
sQ_upsampled[::over_sample] = sQ

# Create a filter with limited bandwidth. Parameters:
#   N: filter length in samples
#   alpha=0.8: roll-off factor
#   Ts=1: symbol period in time units
#   Fs=over_sample: sample rate in 1/time units
sPSF = rrcosfilter(N, alpha=0.8, Ts=1, Fs=over_sample)[1]

# The analog signal has N/2 leading and trailing near-zero samples
qW = np.convolve(sPSF, sQ_upsampled)
Here's some explanation of the parameters. N is the number of baud samples. You need 4 times as many bits as symbols in the case of QAM16. I made the sPSF array return with N elements so we can see the signal with leading and trailing samples. See the Wikipedia Root-raised-cosine filter page for an explanation of the parameter alpha. Ts is the symbol period in seconds and Fs is the number of filter samples per Ts. I like to pretend Ts=1 to keep things simple (unit symbol rate); then Fs is the number of complex waveform samples per baud point.
If you use return element 0 from rrcosfilter to get the sample time indexes, you need to insert the correct symbol period and filter sample rate in Ts and Fs for the index values to be correctly scaled.
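For example (a sketch; the tuple order shown, time indexes first and taps second, matches current commpy):

time_idx, sPSF = rrcosfilter(N, alpha=0.8, Ts=1, Fs=over_sample)
# with Ts=1 and Fs=over_sample the indexes advance by 1/over_sample of a
# symbol period; pass real-world Ts and Fs to get indexes in real time units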
It would be nice to have the root-raised cosine filter standardized in a common package. In the meantime, here is my implementation, based on commpy. It is vectorized with numpy and normalized without consideration of the symbol rate.
def raised_root_cosine(upsample, num_positive_lobes, alpha):
    """
    Root-raised cosine (RRC) filter (FIR) impulse response.

    upsample: number of samples per symbol
    num_positive_lobes: number of positive overlapping symbols;
        the filter is upsample * (2 * num_positive_lobes + 1) samples long
    alpha: roll-off factor
    """
    N = upsample * (num_positive_lobes * 2 + 1)
    t = (np.arange(N) - N / 2) / upsample

    # result vector
    h_rrc = np.zeros(t.size, dtype=float)

    # index for special cases
    sample_i = np.zeros(t.size, dtype=bool)

    # deal with special cases
    subi = t == 0
    sample_i = np.bitwise_or(sample_i, subi)
    h_rrc[subi] = 1.0 - alpha + (4 * alpha / np.pi)

    subi = np.abs(t) == 1 / (4 * alpha)
    sample_i = np.bitwise_or(sample_i, subi)
    h_rrc[subi] = (alpha / np.sqrt(2)) \
                  * (((1 + 2 / np.pi) * (np.sin(np.pi / (4 * alpha))))
                     + ((1 - 2 / np.pi) * (np.cos(np.pi / (4 * alpha)))))

    # base case
    sample_i = np.bitwise_not(sample_i)
    ti = t[sample_i]
    h_rrc[sample_i] = np.sin(np.pi * ti * (1 - alpha)) \
                      + 4 * alpha * ti * np.cos(np.pi * ti * (1 + alpha))
    h_rrc[sample_i] /= (np.pi * ti * (1 - (4 * alpha * ti) ** 2))

    return h_rrc
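For example, generating taps for 8 samples per symbol spanning 5 symbols on each side (parameter values are arbitrary):

h = raised_root_cosine(upsample=8, num_positive_lobes=5, alpha=0.35)
print(len(h))   # 88 = upsample * (2 * num_positive_lobes + 1)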
commpy doesn't seem to be released yet. But here is my nugget of knowledge.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

beta = 0.20              # roll-off factor
Tsample = 1.0            # sampling period; the sample rate should be at least twice the symbol rate
oversampling_rate = 8    # oversampling of the bit stream; samples per symbol,
                         # must be at least 2x the bit rate
Tsymbol = oversampling_rate * Tsample  # pulse duration, should be at least 2 * Ts
span = 50                # number of symbols to span, must be even
n = span * oversampling_rate  # length of the filter = samples per symbol * symbol span

# t_step must run from -span/2 to +span/2 symbols.
# Each symbol has 'sps' samples per second.
t_step = Tsample * np.linspace(-n/2, n/2, n+1)  # n+1 to include 0 time
BW = (1 + beta) / Tsymbol
a = np.zeros_like(t_step)

for i, t in enumerate(t_step):
    # t is n*Ts
    if (1 - (2.0 * beta * t / Tsymbol) ** 2) == 0:
        a[i] = np.pi / 4 * np.sinc(t / Tsymbol)
        print('i = %d' % i)
    elif t == 0:
        a[i] = np.cos(beta * np.pi * t / Tsymbol) / (1 - (2.0 * beta * t / Tsymbol) ** 2)
        print('t = 0 captured')
        print('i = %d' % i)
    else:
        # np.sinc is the normalized sinc, sin(pi*x)/(pi*x), so pass t/Tsymbol directly
        numerator = np.sinc(t / Tsymbol) * np.cos(np.pi * beta * t / Tsymbol)
        denominator = (1.0 - (2.0 * beta * t / Tsymbol) ** 2)
        a[i] = numerator / denominator

#a = a/sum(a)  # normalize total power

plot_filter = 0
if plot_filter == 1:
    w, h = signal.freqz(a)

    fig = plt.figure()
    plt.subplot(2, 1, 1)
    plt.title('Digital filter (raised cosine) frequency response')
    ax1 = fig.add_subplot(211)
    plt.plot(w/np.pi, 20 * np.log10(abs(h)), 'b')
    #plt.plot(w/np.pi, abs(h), 'b')
    plt.ylabel('Amplitude (dB)', color='b')
    plt.xlabel(r'Normalized Frequency ($\pi$ rad/sample)')

    ax2 = ax1.twinx()
    angles = np.unwrap(np.angle(h))
    plt.plot(w/np.pi, angles, 'g')
    plt.ylabel('Angle (radians)', color='g')
    plt.grid()
    plt.axis('tight')
    plt.show()

    plt.subplot(2, 1, 2)
    plt.stem(a)
    plt.show()
I think the correct approach is to generate the desired impulse response. For a raised cosine filter the function is
h(n) = (sinc(n/T) * cos(pi * alpha * n / T)) / (1 - 4 * (alpha * n / T)**2)
Select the number of points for your filter and generate the weights.
output = scipy.signal.convolve(signal_in, h)
This is basically the same function as in CommPy but much smaller in code:
def rcosfilter(N, beta, Ts, Fs):
    t = (np.arange(N) - N / 2) / Fs
    return np.where(np.abs(2*t) == Ts / beta,
                    np.pi / 4 * np.sinc(t/Ts),
                    np.sinc(t/Ts) * np.cos(np.pi*beta*t/Ts) / (1 - (2*beta*t/Ts) ** 2))
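A minimal usage sketch (parameter values are arbitrary; a toy BPSK stream stands in for real data):

import numpy as np
from scipy import signal

taps = rcosfilter(N=129, beta=0.25, Ts=1.0, Fs=8)
symbols = np.random.choice([-1.0, 1.0], size=256)
upsampled = np.zeros(8 * len(symbols))
upsampled[::8] = symbols              # 8 samples per symbol
shaped = signal.convolve(upsampled, taps, mode='same')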
SciPy will support any filter. Just calculate the impulse response and use any of the appropriate scipy.signal filter/convolve functions.