I am attempting to implement the FFT algorithm in Python, and I'm hitting a bug that I can't seem to figure out.
Here is my code:
import math
import numpy as np

def FFT(co, inverse=False):
    if len(co) <= 1:
        return co
    even = FFT(co[::2], inverse)
    odd = FFT(co[1::2], inverse)
    y = [0] * len(co)
    z = -1 if inverse else 1
    for k in range(0, len(co)//2):
        # Twiddle factor, rounded to suppress floating point noise
        w = np.round(math.e**(z*1j*math.pi*(2*k / len(co))), decimals=10)
        y[k] = even[k] + w*odd[k]
        y[k + len(co)//2] = even[k] - w*odd[k]
    return y
when I run
x1 = FFT([1, 1, 2, 0])
print(x1)
print(np.fft.fft([1, 1, 2, 0]))
I get:
[(4+0j), (-1+1j), (2+0j), (-1-1j)]
[ 4.+0.j -1.-1.j 2.+0.j -1.+1.j]
So for index 1 and index 3, it's the complex conjugate. Any ideas?
The definition of the forward Discrete Fourier Transform used by np.fft.fft is given by:

A_k = sum_{m=0}^{n-1} a_m exp(-2*pi*i*m*k/n),   k = 0, ..., n-1
You should notice the negative sign in the argument to the complex exponential.
In your implementation, on the other hand, you are using a positive sign for the forward transform, and such an inversion of the sign of the argument to the complex exponential conjugates the frequency spectrum.
So, for your implementation to yield the same results as np.fft.fft you simply have to invert the sign of the forward and backward transforms with:
z = +1 if inverse else -1
(instead of z = -1 if inverse else 1).
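As a quick check (a minimal sketch, reusing the question's FFT and numpy import), the two should now agree up to the rounding introduced by np.round:

co = [1, 1, 2, 0]
print(np.allclose(FFT(co), np.fft.fft(co)))  # True after the sign change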
I actually have two solutions to this problem; both are applied below to a test case. The thing is that neither of them is perfect: the first one only takes the two end points into account, and the other cannot be made "arbitrarily smooth": there is a limit to the amount of smoothness one can achieve (the one I am showing).
I am sure there is a better solution, one that goes from the first solution toward the other and all the way to no smoothing at all. It may already be implemented somewhere. Maybe solving a minimization problem with an arbitrary number of equidistributed splines?
Thank you very much for your help
PS: the seed used is a challenging one
import sys
import matplotlib.pyplot as plt
from scipy import interpolate
from scipy.signal import savgol_filter
import numpy as np
import random
def scipy_bspline(cv, n=100, degree=3):
    """ Calculate n samples on a bspline
        cv :     Array of control vertices
        n  :     Number of samples to return
        degree:  Curve degree
    """
    cv = np.asarray(cv)
    count = cv.shape[0]
    degree = np.clip(degree, 1, count - 1)
    kv = np.clip(np.arange(count + degree + 1) - degree, 0, count - degree)
    # Return samples (open, non-periodic curve)
    max_param = count - degree
    spl = interpolate.BSpline(kv, cv, degree)
    return spl(np.linspace(0, max_param, n))
def round_up_to_odd(f):
    return int(np.ceil(f / 2.) * 2 + 1)
def generateRandomSignal(n=1000, seed=None):
    """
    Parameters
    ----------
    n : integer, optional
        Number of points in the signal. The default is 1000.

    Returns
    -------
    sig : numpy array
    """
    np.random.seed(seed)
    print("Seed was:", seed)
    steps = np.random.choice(a=[-1, 0, 1], size=(n-1))
    roughSig = np.concatenate([np.array([0]), steps]).cumsum(0)
    sig = savgol_filter(roughSig, round_up_to_odd(n/10), 6)
    return sig
# Generate a random signal to illustrate my point
n = 1000
t = np.linspace(0, 10, n)
seed = 45136  # Challenging seed
sig = generateRandomSignal(n=1000, seed=seed)
sigInit = np.copy(sig)

# Add noise to the signal
mean = 0
std = sig.max()/3.0
num_samples = n//5
idxMin = n//2 - 100
idxMax = idxMin + num_samples
tCut = t[idxMin+1:idxMax]
noise = np.random.normal(mean, std, size=num_samples-1) + 2*std*np.sin(2.0*np.pi*tCut/0.4)
sig[idxMin+1:idxMax] += noise

# Define filtering range enclosing the noisy area of the signal
idxMin -= 20
idxMax += 20
# Extreme filtering solution
# Spline between first and last points, the points in between have no influence
sigTrim = np.delete(sig, np.arange(idxMin,idxMax))
tTrim = np.delete(t, np.arange(idxMin,idxMax))
f = interpolate.interp1d(tTrim, sigTrim, kind='quadratic')
sigSmooth1 = f(t)
# My attempt. Not bad but not perfect because there is a limit in the maximum
# amount of smoothing we can add (degree=len(tSlice) is the maximum)
# If I could do degree=10*len(tSlice) and converging to the first solution
# I would be done!
sigSlice = sig[idxMin:idxMax]
tSlice = t[idxMin:idxMax]
cv = np.stack((tSlice, sigSlice)).T
p = scipy_bspline(cv, n=len(tSlice), degree=len(tSlice))
tSlice = p.T[0]
sigSliceSmooth = p.T[1]
sigSmooth2 = np.copy(sig)
sigSmooth2[idxMin:idxMax] = sigSliceSmooth
# Plot
plt.figure()
plt.plot(t, sig, label="Signal")
plt.plot(t, sigSmooth1, label="Solution 1")
plt.plot(t, sigSmooth2, label="Solution 2")
plt.plot(t[idxMin:idxMax], sigInit[idxMin:idxMax], label="What I'd want (kind of, smoother will be even better actually)")
plt.plot([t[idxMin],t[idxMax]], [sig[idxMin],sig[idxMax]],"o")
plt.legend()
plt.show()
sys.exit()
Yes, a minimization is a good way to approach this smoothing problem.
Least squares problem
Here is a suggestion for a least squares formulation: let s[0], ..., s[N] denote the N+1 samples of the given signal to smooth, and let L and R be the desired slopes to preserve at the left and right endpoints. Find the smoothed signal u[0], ..., u[N] as the minimizer of
min_u (1/2) sum_n (u[n] - s[n])² + (λ/2) sum_n (u[n+1] - 2 u[n] + u[n-1])²
subject to
s[0] = u[0], s[N] = u[N] (value constraints),
L = u[1] - u[0], R = u[N] - u[N-1] (slope constraints),
where in the minimization objective, the sums are over n = 1, ..., N-1 and λ is a positive parameter controlling the smoothing strength. The first term tries to keep the solution close to the original signal, and the second term penalizes u for bending to encourage a smooth solution.
The slope constraints require that
u[1] = L + u[0] = L + s[0] and u[N-1] = u[N] - R = s[N] - R. So we can consider the minimization as over only the interior samples u[2], ..., u[N-2].
Finding the minimizer
The minimizer satisfies the Euler–Lagrange equations
(u[n] - s[n]) / λ + (u[n+2] - 4 u[n+1] + 6 u[n] - 4 u[n-1] + u[n-2]) = 0
for n = 2, ..., N-2.
An easy way to find an approximate solution is by gradient descent: initialize u = np.copy(s), set u[1] = L + s[0] and u[N-1] = s[N] - R, and do 100 iterations or so of (writing lam for λ)

u[2:-2] -= 0.05 * ((u - s)[2:-2] / lam + np.convolve(u, [1, -4, 6, -4, 1])[4:-4])
But with some more work, it is possible to do better than this by solving the E–L equations directly. For each n, move the known quantities to the right-hand side: s[n] and also the endpoints u[0] = s[0], u[1] = L + s[0], u[N-1] = s[N] - R, u[N] = s[N]. Then you will have a linear system "A u = b", where matrix A has rows like
0, ..., 0, 1, -4, (6 + 1/λ), -4, 1, 0, ..., 0.
Finally, solve the linear system to find the smoothed signal u. You could use numpy.linalg.solve to do this if N is not too large, or if N is large, try an iterative method like conjugate gradients.
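For example, a minimal sketch of that direct solve might look like this (smooth_clamped is a hypothetical name; it assumes lam > 0 and a signal of at least eight samples or so, and it builds the pentadiagonal rows shown above with the known endpoint terms moved into b):

import numpy as np

def smooth_clamped(s, lam, L, R):
    # Solve A u = b for the interior samples u[2], ..., u[N-2];
    # the endpoint values and end slopes are fixed by the constraints.
    s = np.asarray(s, dtype=float)
    N = len(s) - 1
    u0, u1 = s[0], s[0] + L            # value + slope constraints on the left
    uNm1, uN = s[N] - R, s[N]          # value + slope constraints on the right
    known = {0: u0, 1: u1, N - 1: uNm1, N: uN}
    m = N - 3                          # number of unknowns
    A = np.zeros((m, m))
    b = s[2:N - 1] / lam               # right-hand side starts as s[n]/lam
    coeffs = {-2: 1.0, -1: -4.0, 0: 6.0 + 1.0/lam, 1: -4.0, 2: 1.0}
    for j in range(m):                 # row j corresponds to n = j + 2
        n = j + 2
        for off, c in coeffs.items():
            i = n + off
            if 2 <= i <= N - 2:
                A[j, i - 2] += c
            else:
                b[j] -= c * known[i]   # move known endpoint terms to the RHS
    u = np.empty(N + 1)
    u[0], u[1], u[N - 1], u[N] = u0, u1, uNm1, uN
    u[2:N - 1] = np.linalg.solve(A, b)
    return u

For large N, building A with scipy.sparse and solving with scipy.sparse.linalg.cg would be the natural replacement for the dense solve.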
You can apply a simple smoothing method and plot the smoothed curves for different smoothness values to see which one works best.
def smoothing(data, smoothness=0.5):
    # Exponential smoothing: each output blends the previous smoothed
    # value with the current sample.
    last = data[0]
    new_data = [data[0]]
    for datum in data[1:]:
        new_value = smoothness * last + (1 - smoothness) * datum
        new_data.append(new_value)
        last = new_value
    return new_data
You can plot this curve for multiple values of smoothness and pick the one that suits your needs, as in the sketch below. You can also apply this method only to a range of values in the actual curve by defining a start and an end.
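For instance, a quick comparison might look like this (a sketch assuming the noisy array sig and time axis t from the question):

import matplotlib.pyplot as plt

for s in (0.5, 0.8, 0.95):
    plt.plot(t, smoothing(sig, s), label="smoothness=%s" % s)
plt.plot(t, sig, alpha=0.3, label="raw signal")
plt.legend()
plt.show()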
I have a function f(x) which I know has two zeros within an interval, and I need to compute both x values for which the function crosses 0.
I usually use
import scipy.optimize as opt
opt.brentq(f, xmin, xmax)
But the problem is that this method only works if the function has one zero in the interval, and it is not simple to know where to split the interval in two.
The function is also costly to evaluate...
I think a good approach would be to pre-process the search by sampling f on a grid before looking for the zeros. During that pre-process, you evaluate f to detect where its sign changes.
def preprocess(f, xmin, xmax, step):
    first_sign = f(xmin) > 0  # True if f(xmin) > 0, otherwise False
    x = xmin + step
    while x <= xmax:  # This loop detects when the function changes its sign
        fstep = f(x)
        if first_sign and fstep < 0:
            return x
        elif not first_sign and fstep > 0:
            return x
        x += step
    return x  # If you ever reach here, no sign change was detected in the interval!
With this function, you can split your initial interval into several smaller intervals. For example:
import scipy.optimize as opt
step = ...
xmid = preprocess(f, xmin, xmax, step)
z0 = opt.brentq(f,xmin,xmid)
z1 = opt.brentq(f,xmid,xmax)
Depending on the function f you use, you may need to split your interval into more than two sub-intervals. Just iterate through [xmin, xmax] like this:
x_list = []
x = xmin
while x < xmax:  # This discovers where f changes its sign
    x_list.append(x)
    x = preprocess(f, x, xmax, step)
x_list.append(xmax)

z_list = []
for i in range(len(x_list) - 1):
    a, b = x_list[i], x_list[i + 1]
    if f(a) * f(b) <= 0:  # brentq needs a sign change; skip the last interval if it has none
        z_list.append(opt.brentq(f, a, b))
In the end, z_list contains all the zeros in the given interval [xmin,xmax].
Keep in mind that this algorithm is time-consuming but will do the job.
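If you are willing to evaluate f on a grid up front, the same pre-sampling idea can be written more compactly; this is a sketch, and the grid of 200 points is an arbitrary assumption that must be finer than the spacing between neighboring zeros:

import numpy as np
import scipy.optimize as opt

xs = np.linspace(xmin, xmax, 200)
fs = np.array([f(x) for x in xs])
# Indices i where f changes sign strictly between xs[i] and xs[i+1];
# a zero landing exactly on a grid point would need special handling.
brackets = np.where(np.sign(fs[:-1]) * np.sign(fs[1:]) < 0)[0]
zeros = [opt.brentq(f, xs[i], xs[i + 1]) for i in brackets]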
I am trying to calculate the derivative of a function at x = 0, but I keep getting odd answers with all the functions I have tried. For example, with f(x)=x**2 I get the derivative to be 2 at all points. My finite difference coefficients are correct; they are second order accurate for the second derivative with respect to x.
from numpy import *
from matplotlib.pyplot import *

def f1(x):
    return x**2

n = 100                        # grid points
x = zeros(n+1, dtype=float)    # array to store values of x
step = 0.02/float(n)           # step size
f = zeros(n+1, dtype=float)    # array to store values of f
df = zeros(n+1, dtype=float)   # array to store values of calculated derivative

for i in range(0, n+1):        # adds values to arrays for x and f(x)
    x[i] = -0.01 + float(i)*step
    f[i] = f1(x[i])

# have to calculate end points separately using one-sided form
df[0] = (f[2]-2*f[1]+f[0])/step**2
df[1] = (f[3]-2*f[2]+f[1])/step**2
df[n-1] = (f[n-1]-2*f[n-2]+f[n-3])/step**2
df[n] = (f[n]-2*f[n-1]+f[n-2])/step**2

for i in range(2, n-1):        # add values to array for derivative
    df[i] = (f[i+1]-2*f[i]+f[i-1])/step**2

print(df)                      # returns an array full of 2...
The second derivative of x^2 is the constant 2, and you use the central difference quotient for the second derivative, as you can also see by the square in the denominator. Your result is absolutely correct, your code does exactly what you told it to do.
To get the first derivative with a symmetric difference quotient, use
df[i] = ( f[i+1] - f[i-1] ) / ( 2*step )
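Applied to the arrays in the question, the corrected loop might look like this (a sketch reusing f, df, n, and step as defined there, with one-sided first differences at the two ends):

df[0] = (f[1] - f[0]) / step              # forward difference at the left end
df[n] = (f[n] - f[n-1]) / step            # backward difference at the right end
for i in range(1, n):
    df[i] = (f[i+1] - f[i-1]) / (2*step)  # central difference in the interior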
The first-order derivative at point x of a function f1 (for the case f1(x)=x^2) can be obtained with:

def f1(x):
    return x**2

def derivative(f, x, step=1e-6):
    # Forward difference. An extremely small step (e.g. 1e-13) loses most
    # of its accuracy to floating point rounding, so keep it moderate.
    return (f(x + step) - f(x)) / step

Hope that helps.
I have a generic question on how to solve optimization problems of the min-max type using the PICOS package in Python. I found little information on this while searching the PICOS documentation and the web.
I can imagine a simple example of the below form.
Given a matrix M, find x* = argmin_x [ max_y x^T M y ], where x > 0, y > 0, sum(x) = 1 and sum(y) = 1.
I have tried a few methods, starting with the most straightforward idea of using minimax or minmax keywords in the objective function of the PICOS Problem class. It turns out that none of these keywords is valid; see the package documentation for objective functions. Furthermore, nesting objective functions also turns out to be invalid.
In the last of my naive attempts, I have two functions, Max() and Min(), each of which solves a linear optimization problem. The outer function, Min(), should minimize the inner function Max(). So I have used Max() in the objective function of the outer optimization problem.
import numpy as np
import picos as pic
import cvxopt as cvx
def MinMax(mat):
    ## Perform a simple min-max SDP formulated as:
    ## Given a matrix M, find x* = argmin_x [ max_y x^T M y ], where x > 0, y > 0, sum(x) = sum(y) = 1.
    prob = pic.Problem()

    ## Constant parameters
    M = pic.new_param('M', cvx.matrix(mat))
    v1 = pic.new_param('v1', cvx.matrix(np.ones((mat.shape[0], 1))))

    ## Variables
    x = prob.add_variable('x', (mat.shape[0], 1), 'nonnegative')

    ## Setting the objective function
    prob.set_objective('min', Max(x, M))

    ## Constraints
    prob.add_constraint(x > 0)
    prob.add_constraint((v1 | x) == 1)

    ## Print the problem
    print("The optimization problem is formulated as follows.")
    print prob

    ## Solve the problem
    prob.solve(verbose=0)
    objVal = prob.obj_value()
    solution = np.array(x.value)
    return (objVal, solution)

def Max(xVar, M):
    ## Given a vector l, find y* such that l y* = max_y l y, where y > 0, sum(y) = 1.
    prob = pic.Problem()

    # Variables
    y = prob.add_variable('y', (M.size[1], 1), 'nonnegative')
    v2 = pic.new_param('v2', cvx.matrix(np.ones((M.size[1], 1))))

    # Setting the objective function
    prob.set_objective('max', ((xVar.H * M) * y))

    # Constraints
    prob.add_constraint(y > 0)
    prob.add_constraint((v2 | y) == 1)

    # Solve the problem
    prob.solve(verbose=0)
    sol = prob.obj_value()
    return sol

def print2Darray(arr):
    # print a 2D array in a readable (matrix like) format on the standard output
    for ridx in range(arr.shape[0]):
        for cidx in range(arr.shape[1]):
            print("%.2e \t" % arr[ridx, cidx]),
        print("")
    print("========")
    return None

if __name__ == '__main__':
    ## Testing the simple min-max SDP
    mat = np.random.rand(4, 4)
    print("## Given a matrix M, find x* = argmin_x [ max_y x^T M y ], where x > 0, y > 0, sum(x) = sum(y) = 1.")
    print("M = ")
    print2Darray(mat)
    (optval, solution) = MinMax(mat)
    print("Optimal value of the function is %.2e and it is attained by x = %s." % (optval, np.array_str(solution)))
When I run the above code, it gives me the following error message.
10:stackoverflow pavithran$ python minmaxSDP.py
## Given a matrix M, find x* = argmin_x [ max_y x^T M y ], where x > 0, y > 0, sum(x) = sum(y) = 1.
M =
1.46e-01 9.23e-01 6.50e-01 7.30e-01
6.13e-01 6.80e-01 8.35e-01 4.32e-02
5.19e-01 5.99e-01 1.45e-01 6.91e-01
6.68e-01 8.46e-01 3.67e-01 3.43e-01
========
Traceback (most recent call last):
File "minmaxSDP.py", line 80, in <module>
(optval, solution) = MinMax(mat)
File "minmaxSDP.py", line 19, in MinMax
prob.set_objective('min', Max(x, M))
File "minmaxSDP.py", line 54, in Max
prob.solve(verbose = 0)
File "/Library/Python/2.7/site-packages/picos/problem.py", line 4135, in solve
self.solver_selection()
File "/Library/Python/2.7/site-packages/picos/problem.py", line 6102, in solver_selection
raise NotAppropriateSolverError('no solver available for problem of type {0}'.format(tp))
picos.tools.NotAppropriateSolverError: no solver available for problem of type MIQP
10:stackoverflow pavithran$
At this point, I am stuck and unable to fix this problem.
Is it just that PICOS does not natively support min-max problems, or is my way of encoding the problem incorrect?
Please note: The reason I am insisting on using PICOS is that ideally, I would like to know the answer to my question in the context of solving a min-max semidefinite program (SDP). But I think the addition of semidefinite constraints is not hard, once I can figure out how to do a simple min-max problem using PICOS.
The first answer is that min-max problems are not natively supported in PICOS. However, whenever the inner maximization problem is a convex optimization problem, you can reformulate it as a minimization problem (by taking the Lagrangian dual), and so you get a min-min problem.
Your particular problem is a standard zero-sum game, and can be reformulated as (assuming M is of dimension n x m):

min_x max_{i=1,...,m} [M^T x]_i = min_{x,t} t  s.t.  [M^T x]_i <= t (for i = 1, ..., m)
In Picos:
import picos as pic
import cvxopt as cvx
n=3
m=4
M = cvx.normal(n,m) #generate a random matrix
P = pic.Problem()
x = P.add_variable('x',n,lower=0)
t = P.add_variable('t',1)
P.add_constraint(M.T*x <= t)
P.add_constraint( (1|x) == 1)
P.minimize(t)
print 'the solution is x='
print x
If you also need the optimal y, you can show that it corresponds to the dual variable of the constraint M^T x <= t:
print 'the solution of the inner max-problem is y='
print P.constraints[0].dual
Best,
Guillaume.
I need to generate numbers on a given positive interval (a,b), distributed according to an exponential distribution. Using the inverse CDF method, I made a generator for exponentially distributed numbers. But of course such a number can be any positive number, and I want it to lie in the given interval. What should I do to generate only on the interval?
The code to generate an exponentially distributed number using the inverse CDF method is, in Python:
u = random.uniform(0,1)
return (-1/L)*math.log(u)
where L is a given positive parameter.
Thanks in advance
The probability density of an outcome x would normally be L exp(-Lx). However, when we are restricted to [a,b], the density of x in [a,b] is scaled up by 1 over the fraction of the CDF that occurs between a and b: integral from a to b of L exp(-Lt) dt = exp(-La) - exp(-Lb).
Therefore, the pdf at x is
L exp(-Lx) / (exp(-La) - exp(-Lb)),
giving a cdf at x of
integral from a to x of L exp(-Lt) / (exp(-La) - exp(-Lb)) dt
= [exp(-La) - exp(-Lx)] / [exp(-La) - exp(-Lb)] = u
Now invert:
exp(-Lx) = exp(-La) - u[exp(-La) - exp(-Lb)]
-Lx = -La + log( 1 - u[1 - exp(-Lb)/exp(-La)])
x = a + (-1/L) log( 1 - u[1 - exp(-Lb)/exp(-La)])
giving code:
u = random.uniform(0,1)
return a + (-1/L)*math.log( 1 - u*(1 - math.exp(-L*b)/math.exp(-L*a)) )
Be aware: for large L or a, math.exp(-L*a) will underflow to 0, leading to a ZeroDivisionError. Computing the ratio directly as math.exp(-L*(b-a)) avoids this.
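Putting it together, a self-contained version might look like this (a sketch; truncated_exponential is a hypothetical name, and the quotient of exponentials is replaced by the mathematically equivalent math.exp(-L*(b-a)) to sidestep the underflow just mentioned):

import math
import random

def truncated_exponential(L, a, b):
    # Inverse-CDF sample from an Exp(L) distribution restricted to [a, b].
    u = random.uniform(0, 1)
    return a - (1.0 / L) * math.log(1.0 - u * (1.0 - math.exp(-L * (b - a))))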