Lowpass Filter in Python

I am trying to convert Matlab code to Python. I want to implement fdesign.lowpass() from Matlab in Python. What would be the exact substitute for this Matlab code using scipy.signal.firwin():
demod_1_a = mod_noisy * 2.*cos(2*pi*Fc*t+phi);
d = fdesign.lowpass('N,Fc', 10, 40, 1600);
Hd = design(d);
y = filter(Hd, demod_1_a);

A very basic approach would be to invoke
import scipy.signal

# spell out the args that were passed to the Matlab function
N = 10
Fc = 40
Fs = 1600

# provide them to firwin; note that Matlab's N is the filter *order*,
# so the matching number of taps is N + 1
# (on older SciPy versions pass nyq=Fs/2 instead of fs=Fs)
h = scipy.signal.firwin(numtaps=N + 1, cutoff=Fc, fs=Fs)

# 'x' is the time-series data you are filtering
y = scipy.signal.lfilter(h, 1.0, x)
This should yield a filter similar to the one that ends up being made in the Matlab code. If your goal is to obtain functionally equivalent results, this should provide a useful filter.
However, if your goal is for the Python code to produce exactly the same results, then you'll have to look under the hood of the design call (in Matlab). From my quick check, it's not trivial to parse through the Matlab calls to identify exactly what they are doing, i.e. which design method is used and so on, and how to map that into corresponding scipy calls. If you really want compatibility, and you only need to do this for a limited number of filters, you could look at the Hd.Numerator field by hand -- this array of numbers corresponds directly to the h variable in the Python code above. So if you copy those numbers into an array by hand, you'll get numerically equivalent results.
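If you do copy the coefficients over, a quick way to sanity-check the two filters against each other is to plot their frequency responses. A minimal sketch, assuming the h from above and matplotlib (repeat the freqz call with the copied Hd.Numerator values to overlay the Matlab design):
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt

# frequency response of the firwin-designed filter
w, H = scipy.signal.freqz(h, worN=2048, fs=1600)
plt.plot(w, 20 * np.log10(np.abs(H)))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude (dB)")
plt.show()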

Related

Converting MatLab for loop with array code to python

I was given code in Matlab written by someone else and was asked to convert it to Python. However, I do not know Matlab. This is the code:
for i = 1:nWind
    [input(a:b,:), t(a:b,1)] = EulerMethod(A(:,:,:,i),S(:,:,i),B(:,i),n,scale(:,i),tf,options);
    fprintf("%d\n",i);
    for j = 1:b
        vwa = generate_wind([input(j,9);input(j,10)],A(:,:,:,i),S(:,:,i),B(:,i),n,scale(:,i));
        wxa(j) = vwa(1);
        wya(j) = vwa(2);
    end
    % Pick random indexes for filtered inputs
    rand_index = randi(tf/0.01-1,1,filter_size);
    inputf(c:d,:) = input(a+rand_index,:);
    wxf(c:d,1) = wxa(1,a+rand_index);
    wyf(c:d,1) = wya(1,a+rand_index);
    wzf(c:d,1) = 0;
I am confused about what [input(a:b,:), t(a:b,1)] means, and whether wxf, wzf, wyf are part of the Matlab library or user-defined. Also, EulerMethod and generate_wind are separate classes. Can someone help me convert this code to Python?
The only thing I really changed so far is changing the for loop from:
for i = 1:nWind
to
for i in range(1,nWind):
There are several things to unpack here.
First, MATLAB indexing is 1-based, while Python indexing is 0-based. So, your for i = 1:nWind from MATLAB should translate to for i in range(0,nWind) in Python (with the zero optional). For nWind = 5, MATLAB would produce 1,2,3,4,5 while Python range would produce 0,1,2,3,4.
Second, wxf, wyf, and wzf are local variables. MATLAB is unique in that you can assign into specific indices at the same time variables are declared. These lines are assigning the first rows of wxa and wya (since their first index is 1) into the first columns of wxf and wyf (since their second index is 1). MATLAB will also expand an array if you assign past its end.
Without seeing the rest of the code, I don't really know what c and d are doing. If c is initialized to 1 before the loop and there's something like c = d+1; later, then it would be that your variables wxf, wyf, and wzf are being initialized on the first iteration of the loop and expanded on later iterations. This is a common (if frowned upon) pattern in MATLAB. If this is the case, you'd replicate it in Python by initializing to an empty array before the loop and using the array's extend() method inside the loop (though I bet it's frowned upon in Python, as well). But really, we need you to edit your question to include a, b, c, and d if you want to be sure this is really the case.
Third, EulerMethod and generate_wind are functions, not classes. EulerMethod returns two outputs, which you'd probably replicate in Python by returning a tuple.
[input(a:b,:), t(a:b,1)] = EulerMethod(...); is assigning the two outputs of EulerMethod into specific ranges of input and t. Similar concepts as in points 1 and 2 apply here.
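If it helps, here is a minimal sketch of that pattern in Python/NumPy; the names, sizes, and euler_method_stub are made up purely for illustration:
import numpy as np

def euler_method_stub(rows, cols):
    # stand-in for EulerMethod, which returns two outputs as a tuple
    return np.ones((rows, cols)), np.arange(rows, dtype=float)

a, b = 0, 4               # note: MATLAB's a:b includes b, Python's a:b excludes b
inp = np.zeros((10, 3))   # preallocate instead of growing the array on assignment
t = np.zeros(10)

inp[a:b + 1, :], t[a:b + 1] = euler_method_stub(b + 1 - a, 3)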
Those are the answers to what you expressed confusion about. Without sitting down and doing it myself, I don't have enough experience in Python to give more Python-specific recommendations.

Parallelize Python Loop: Questions

I have some code that calculates the value of a large number of discrete actions and outputs the best action and its value.
A_max = 0
for i in ...:
    A = f(i)
    if A > A_max:
        x = i
        A_max = A
I'd like to parallelize this code in order to save time. Now, my understanding is that as calculating f(i) doesn't depend on calculating f(j) first, I can just use joblib.Parallel for that part of the code and get something like:
results = Parallel(n_jobs=-1)(delayed(f)(i) for i in...)
A_max = max(results)
x = list.index(A_max)
is this correct?
My next issue is that my code contains a dictionary that the function f alters as it does it calculation. My understanding is that if the code is parallelized, each concurrent process will be altering the same dictionary. Is this correct and if so would creating copies of the dictionary at the beginning of f solve the issue?
Finally, in the documentation I see references to backends called "loky" and "threading"; what is the difference between these backends?
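For what it's worth, the pattern from the snippet above can be written as a small self-contained sketch like this, where f is just a stand-in for the real computation and the default (loky) backend is used:
from joblib import Parallel, delayed

def f(i):
    # stand-in for the real, expensive computation
    return -(i - 3) ** 2

candidates = list(range(10))
results = Parallel(n_jobs=-1)(delayed(f)(i) for i in candidates)

A_max = max(results)
x = candidates[results.index(A_max)]  # recover the action that produced the best value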

What is the best way to display numeric and symbolic expressions in python?

I need to produce calculation reports that detail step by step calculations, showing the formulas that are used and then showing how the results are achieved.
I have looked at using sympy to display symbolic equations. The problem is that a sympy symbol is stored as a variable, and therefore I cannot also store the numerical value of that symbol.
For example, for the formula σ=My/I , I need to show the value of each symbol, then the symbolic formula, then the formula with values substituted in, and finally the resolution of the formula.
M=100
y= 25
I=5
σ=My/I
σ=100*25/5
σ=500
I’m new to programming and this is something I’m struggling with. I’ve thought of perhaps building my own class, but I'm not sure how to make the distinction between the different forms. In the example above, σ is at one point a numerical value, one half of a symbolic expression, and also one half of a numerical expression.
Hopefully the following helps. This produces more or less what you want. You cannot get your fifth line of workings easily as you'll see in the code.
from sympy import *
# define all variables needed
# trying to keep things clear that symbols are different from their numeric values
M_label, y_label, l_label = ("M", "y", "l")
M_symbol, y_symbol, l_symbol = symbols(f"{M_label} {y_label} {l_label}", real=True)
M_value, y_value, l_value = (100, 25, 5)
# define the dictionary whose keys are string names
# and whose values are a tuple of symbols and numerical values
symbols_values = {M_label: (M_symbol, M_value),
                  y_label: (y_symbol, y_value),
                  l_label: (l_symbol, l_value)}

for name, symbol_value in symbols_values.items():
    print(f"{name} = {symbol_value[1]}")  # an f-string or formatted string

sigma = M_symbol * y_symbol / l_symbol
print(f"sigma = {sigma}")

# option 1
# changes `/5` to 5**(-1) since this is exactly how sympy views division
# credit for UnevaluatedExpr:
# https://stackoverflow.com/questions/49842196/substitute-in-sympy-wihout-evaluating-or-simplifying-the-expression
sigma_substituted = sigma\
    .subs(M_symbol, UnevaluatedExpr(M_value))\
    .subs(y_symbol, UnevaluatedExpr(y_value))\
    .subs(l_symbol, UnevaluatedExpr(l_value))
print(f"sigma = {sigma_substituted}")

# option 2
# using string substitution
# note this could replace words like `log`, `cos` or `exp` with something completely different,
# which is why it is inadvisable; the code above is far better for this purpose
sigma_substituted = str(sigma)\
    .replace(M_label, str(M_value))\
    .replace(y_label, str(y_value))\
    .replace(l_label, str(l_value))
print(f"sigma = {sigma_substituted}")

sigma_simplified = sigma\
    .subs(M_symbol, M_value)\
    .subs(y_symbol, y_value)\
    .subs(l_symbol, l_value)
print(f"sigma = {sigma_simplified}")
Also note that if you wanted to change the symbols_values dictionary so that the keys are the symbols and the values are the numerical values, lookups can turn into a seemingly buggy experience. That is because symbols only compare equal when both their names and their assumptions match: Symbol("x") and Symbol("x", real=True) print the same but are two different symbols to SymPy. It is far easier to use strings as keys.
If you begin to use more variables and choose to work this way, I suggest using lists and for loops instead of writing the same code over and over.
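A hedged sketch of that loop-based variant, reusing the same symbols and values as above (illustrative only):
from sympy import symbols, UnevaluatedExpr

names = ["M", "y", "l"]
values = [100, 25, 5]
syms = symbols(" ".join(names), real=True)

for name, value in zip(names, values):
    print(f"{name} = {value}")

M, y, l = syms
sigma = M * y / l
print(f"sigma = {sigma}")

# substitute without evaluating, then fully evaluate
sigma_sub = sigma
for sym, value in zip(syms, values):
    sigma_sub = sigma_sub.subs(sym, UnevaluatedExpr(value))
print(f"sigma = {sigma_sub}")
print(f"sigma = {sigma.subs(dict(zip(syms, values)))}")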

How does this class work? (Related to Quantopian, Python and Pandas)

From here: https://www.quantopian.com/posts/wsj-example-algorithm
class Reversion(CustomFactor):
    """
    Here we define a basic mean reversion factor using a CustomFactor. We
    take a ratio of the last close price to the average price over the
    last 60 days. A high ratio indicates a high price relative to the mean
    and a low ratio indicates a low price relative to the mean.
    """
    inputs = [USEquityPricing.close]
    window_length = 60

    def compute(self, today, assets, out, prices):
        out[:] = -prices[-1] / np.mean(prices, axis=0)
Reversion() seems to return a pandas.DataFrame, and I have absolutely no idea why.
For one thing, where are inputs and window_length used?
And what exactly is out[:]?
Is this specific behavior related to Quantopian in particular or Python/Pandas?
TL;DR
Reversion() doesn't return a DataFrame, it returns an instance of the
Reversion class, which you can think of as a formula for performing a
trailing window computation. You can run that formula over a particular time
period using either quantopian.algorithm.pipeline_output or
quantopian.research.run_pipeline, depending on whether you're writing a
trading algorithm or doing offline research in a notebook.
The compute method is what defines the "formula" computed by a Reversion
instance. It calculates a reduction over a 2D numpy array of prices, where
each row of the array corresponds to a day and each column of the array
corresponds to a stock. The result of that computation is a 1D array
containing a value for each stock, which is copied into out. out is also
a numpy array. The syntax out[:] = <expression> says "copy the values from
<expression> into out".
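For example, with a plain NumPy array:
import numpy as np

buffer = np.zeros(3)
view = buffer               # same underlying array
view[:] = [1.0, 2.0, 3.0]   # copies values into the existing array
print(buffer)               # [1. 2. 3.] -- the caller's array was filled in place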
compute writes its result directly into an output array instead of simply
returning because doing so allows the CustomFactor base class to ensure
that the output has the correct shape and dtype, which can be nontrivial for
more complex cases.
Having a function "return" by overwriting an input is unusual and generally
non-idiomatic Python. I wouldn't recommend implementing a similar API unless
you're very sure that there isn't a better solution.
All of the code in the linked example is open source and can be found in
Zipline, the framework on top of
which Quantopian is built. If you're interested in the implementation, the
following files are good places to start:
zipline/pipeline/engine.py
zipline/pipeline/term.py
zipline/pipeline/graph.py
zipline/pipeline/pipeline.py
zipline/pipeline/factors/factor.py
You can also find a detailed tutorial on the Pipeline API
here.
I think there are two kinds of answers to your question:
How does the Reversion class fit into the larger framework of a
Zipline/Quantopian algorithm? In other words, "how is the Reversion class
used"?
What are the expected inputs to Reversion.compute() and what computation
does it perform on those inputs? In other words, "What, concretely, does the
Reversion.compute() method do?"
It's easier to answer (2) with some context from (1).
How is the Reversion class used?
Reversion is a subclass of CustomFactor, which is part of Zipline's
Pipeline API. The primary purpose of the Pipeline API is to make it easy
for users to perform a certain special kind of computation efficiently over
many sources of data. That special kind of computation is a cross-sectional
trailing-window computation, which has the form:
Every day, for some set of data sources, fetch the last N days of data for all
known assets and apply a reduction function to produce a single value per
asset.
A very simple cross-sectional trailing-window computation would be something
like "close-to-close daily returns", which has the form:
Every day, fetch the last two days of close prices and, for each asset,
calculate the percent change between the asset's previous-day close price and
its current close price.
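In array terms that reduction is tiny. A sketch with made-up numbers:
import numpy as np

closes = np.array([[100.0, 50.0, 20.0],   # previous day, one column per asset
                   [101.0, 49.0, 22.0]])  # current day
returns = closes[-1] / closes[0] - 1.0
print(returns)  # roughly [0.01, -0.02, 0.1]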
To describe a cross-sectional trailing-window computation, we need at least
three pieces of information:
On what kinds of data (e.g. price, volume, market cap) does the computation
operate?
On how long of a trailing window of data (e.g. 1 day, 20 days, 100 days)
does the computation operate?
What reduction function does the computation perform over the data described
by (1) and (2)?
The CustomFactor class defines an API for consolidating these three pieces of
information into a single object.
The inputs attribute describes the set of inputs needed to perform a
computation. In the snippet from the question, the only input is
USEquityPricing.close, which says that we just need trailing daily close
prices. In general, however, we can ask for any number of inputs. For
example, to compute VWAP (Volume-Weighted Average Price), we would use
something like inputs = [USEquityPricing.close, USEquityPricing.volume] to
say that we want trailing close prices and trailing daily volumes.
The window_length attribute describes the number of days of trailing data
required to perform a computation. In the snippet above we're requesting 60
days of trailing close prices.
The compute method describes the trailing-window computation to be
performed. In the section below, I've outlined exactly how compute performs
its computation. For now, it's enough to know that compute is essentially a
reduction function from some number of 2-dimensional arrays to a single
1-dimensional array.
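To make those three pieces concrete, here is roughly what the VWAP factor mentioned above might look like. This is a sketch, not Quantopian's built-in implementation; the import path for USEquityPricing and the window length of 20 are assumptions on my part:
import numpy as np
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data import USEquityPricing

class VWAP(CustomFactor):
    inputs = [USEquityPricing.close, USEquityPricing.volume]
    window_length = 20

    def compute(self, today, assets, out, closes, volumes):
        # one volume-weighted average price per asset over the trailing window
        out[:] = np.sum(closes * volumes, axis=0) / np.sum(volumes, axis=0)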
You might notice that we haven't defined an actual set of dates on which we
might want to compute a Reversion factor. This is by design, since we'd like
to be able to use the same Reversion instance to perform calculations at
different points in time.
Quantopian defines two APIs for computing expressions like Reversion: an
"online" mode designed for use in actual trading algorithms, and a "batch" mode
designed for use in research and development. In both APIs, we first construct
a Pipeline object that holds all the computations we want to perform. We
then feed our pipeline object into a function that actually performs the
computations we're interested in.
In the batch API, we call run_pipeline passing our pipeline, a start date,
and an end date. A simple research notebook computing a custom factor might
look like this:
from quantopian.pipeline import Pipeline, CustomFactor
from quantopian.research import run_pipeline

class Reversion(CustomFactor):
    # Code from snippet above.

reversion = Reversion()
pipeline = Pipeline({'reversion': reversion})
result = run_pipeline(pipeline, start_date='2014-01-02', end_date='2015-01-02')
do_stuff_with(result)
In a trading algorithm, we're generally interested in the most recently
computed values from our pipeline, so there's a slightly different API: we
"attach" a pipeline to our algorithm on startup, and we request the latest
output from the pipeline at the start of each day. A simple trading algorithm
using Reversion might look something like this:
import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline, CustomFactor
class Reversion(CustomFactor):
    # Code from snippet above.

def initialize(context):
    reversion = Reversion()
    pipeline = Pipeline({'reversion': reversion})
    algo.attach_pipeline(pipeline, name='my_pipe')

def before_trading_start(context, data):
    result = algo.pipeline_output(name='my_pipe')
    do_stuff_with(result)
The most important thing to understand about the two examples above is that
simply constructing an instance of Reversion doesn't perform any
computation. In particular, the line:
reversion = Reversion()
doesn't fetch any data or call the compute method. It simply creates an
instance of the Reversion class, which knows that it needs 60 days of close
prices each day to run its compute function. Similarly,
USEquityPricing.close isn't a DataFrame or a numpy array or anything like
that: it's just a sentinel value that describes what kind of data Reversion
needs as an input.
One way to think about this is by an analogy to mathematics. An instance of
Reversion is like a formula for performing a calculation, and
USEquityPricing.close is like a variable in that formula.
Simply writing down the formula doesn't produce any values; it just gives us a
way to say "here's how to compute a result if you plug in values for all of
these variables".
We get a concrete result by actually plugging in values for our variables,
which happens when we call run_pipeline or pipeline_output.
So what, concretely, does Reversion.compute() do?
Both run_pipeline and pipeline_output ultimately boil down to calls to
PipelineEngine.run_pipeline, which is where actual computation happens.
To continue the analogy from above, if reversion is a formula, and
USEquityPricing.close is a variable in that formula, then PipelineEngine is
the grade school student whose homework assignment is to look up the value of
the variable and plug it into the formula.
When we call PipelineEngine.run_pipeline(pipeline, start_date, end_date), the
engine iterates through our requested expressions, loads the inputs for those
expressions, and then calls each expression's compute method once per trading
day between start_date and end_date with appropriate slices of the loaded
input data.
Concretely, the engine expects that each expression has a compute method with
a signature like:
def compute(self, today, assets, out, input1, input2, ..., inputN):
The first four arguments are always the same:
self is the CustomFactor instance in question (e.g. reversion in the
snippets above). This is how methods work in Python in general.
today is a pandas Timestamp representing the day on which compute is
being called.
assets is a 1-dimensional numpy array containing an integer for every
tradeable asset on today.
out is a 1-dimensional numpy array of the same shape as assets. The
contract of compute is that it should write the result of its computation
into out.
The remaining parameters are 2-D numpy arrays with shape (window_length, len(assets)).
Each of these parameters corresponds to an entry in the expression's inputs
list. In the case of Reversion, we only have a single input,
USEquityPricing.close, so there's only one extra parameter, prices, which
contains a 60 x len(assets) array containing 60 days of trailing close prices
for every asset that existed on today.
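You can reproduce that last step with synthetic data and no Quantopian imports at all; a rough sketch:
import numpy as np

window_length, n_assets = 60, 3
prices = 1.0 + np.random.rand(window_length, n_assets)  # rows are days, columns are assets
out = np.empty(n_assets)

# the same reduction Reversion.compute performs
out[:] = -prices[-1] / np.mean(prices, axis=0)
print(out)  # one value per asset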
One unusual feature of compute is that it's expected to write its computed
results into out. Having functions "return" by mutating inputs is common in
low level languages like C or Fortran, but it's rare in Python and generally
considered non-idiomatic. compute writes its outputs into out partly for
performance reasons (we can avoid extra copying of large arrays in some cases),
and partly to make it so that CustomFactor implementors don't need to worry
about constructing output arrays with correct shapes and dtypes, which can be
tricky in more complex cases where a user has more than one return value.
The way you presented it, that compute method might as well be static since it's not using anything from within the Reversion class, unless whatever out is implicitly relies on a predefined CustomFactor class when slicing/setting its elements. Also, since we can only guess how exactly the quantopian.pipeline.CustomFactor class is implemented and used internally, you won't be getting a 100% correct answer here, but we can split it into two parts and explain it using only Python natives.
The first part is assigning to a sequence slice, which is what happens within the compute() method - out is a special sequence (a Pandas or NumPy structure most likely, but we'll stick to how it basically operates) whose slice assignment (the __setitem__() magic method called with a slice; __setslice__() in legacy Python 2 code) is overridden so that it doesn't produce the expected result - the expected result here being that each element of out is replaced by the corresponding element of a given sequence, e.g.:
my_list = [1, 2, 3, 4, 5]
print(my_list) # [1, 2, 3, 4, 5]
my_list[:] = [5, 4, 3, 2, 1]
print(my_list) # [5, 4, 3, 2, 1]
But in the compute() case the right-hand side doesn't necessarily produce a sequence of the same size as out, so the override most likely does a calculation with each of the out elements and updates them in place. You can create such a list yourself (extending collections.abc.MutableSequence would arguably be cleaner than subclassing list):
class InflatingList(list):
    def __setitem__(self, index, value):
        # intercept slice assignment such as seq[:] = 5 or seq[2:4] = -3
        if isinstance(index, slice):
            for x in range(*index.indices(len(self))):
                list.__setitem__(self, x, self[x] + value)
        else:
            list.__setitem__(self, index, value)
So now when you use it, it behaves, well, in a non-standard way:
test_list = InflatingList([1, 2, 3, 4, 5])
print(test_list) # [1, 2, 3, 4, 5]
test_list[:] = 5
print(test_list) # [6, 7, 8, 9, 10]
test_list[2:4] = -3
print(test_list) # [6, 7, 5, 6, 10]
The second part depends purely on where else the Reversion class (or any other derivative of CustomFactor) is used - you don't have to use class attributes explicitly inside the class itself for them to be useful to some other internal structure. Consider:
class Factor(object):
    scale = 1.0
    correction = 0.5

    def compute(self, out, inflate=1.0):
        out[:] = inflate

class SomeClass(object):
    def __init__(self, factor, data):
        assert isinstance(factor, Factor), "`factor` must be an instance of `Factor`"
        self._factor = factor
        self._data = InflatingList(data)

    def read_state(self):
        return self._data[:]

    def update_state(self, inflate=1.0):
        self._factor.compute(self._data, self._factor.scale)
        self._data[:] = -self._factor.correction + inflate
So, while Factor doesn't directly use its scale/correction variables, some other class might. Here's what happens when you run it through its cycles:
test = SomeClass(Factor(), [1, 2, 3, 4, 5])
print(test.read_state()) # [1, 2, 3, 4, 5]
test.update_state()
print(test.read_state()) # [2.5, 3.5, 4.5, 5.5, 6.5]
test.update_state(2)
print(test.read_state()) # [5.0, 6.0, 7.0, 8.0, 9.0]
But now you get the chance to define your own Factor that SomeClass uses, so:
class CustomFactor(Factor):
    scale = 2.0
    correction = -1

    def compute(self, out, inflate=1.0):
        out[:] = -inflate  # deflate instead of inflate
This can give you different results for the same input data:
test = SomeClass(CustomFactor(), [1, 2, 3, 4, 5])
print(test.read_state()) # [1, 2, 3, 4, 5]
test.update_state()
print(test.read_state()) # [1.0, 2.0, 3.0, 4.0, 5.0]
test.update_state(2)
print(test.read_state()) # [2.0, 3.0, 4.0, 5.0, 6.0]
[Opinion time] I'd argue that this structure is badly designed. Whenever you encounter behavior that's not really expected, chances are that somebody was writing a solution in search of a problem, one that serves only to confuse users and signal how knowledgeable the writer is because they can bend the behavior of a system to their whims - in reality, the writer is most likely wasting everybody's valuable time so that they can pat themselves on the back. Both NumPy and Pandas, while great libraries on their own, are guilty of that - they're even worse offenders because a lot of people get introduced to Python through those libraries, and when they want to step outside their confines they find themselves wondering why my_list[2, 5, 12] doesn't work...

Finding an abstraction for repetitive code: Bootstrap analysis

Intro
There is a pattern that I use all the time in my Python code which analyzes
numerical data. All implementations seem overly redundant or very cumbersome or
just do not play nicely with NumPy functions. I'd like to find a better way to
abstract this pattern.
The Problem / Current State
A method of statistical error propagation is the bootstrap method. It works by
running the same analysis many times with slightly different inputs and looking
at the distribution of the final results.
To compute the actual value of ams_phys, I have the following equation:
ams_phys = (amk_phys**2 - 0.5 * ampi_phys**2) / aB - amcr
All the values that go into that equation have a statistical error associated
with it. These values are also computed from other equations. For instance
amk_phys is computed from this equation, where both numbers also have
uncertainties:
amk_phys_dist = mk_phys / a_inv
The value of mk_phys is given as (494.2 ± 0.3) in a paper. What I now do is
parametric bootstrap and generate R samples from a Gaussian distribution
with mean 494.2 and standard deviation 0.3. This is what I store in
mk_phys_dist:
mk_phys_dist = bootstrap.make_dist(494.2, 0.3, R)
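(For readers without my bootstrap module: make_dist is roughly equivalent to the following sketch, with the original central value kept at index 0 as described further down.)
import numpy as np

def make_dist(value, error, R):
    # parametric bootstrap: the central value first, then R Gaussian samples
    return np.concatenate(([value], np.random.normal(value, error, R)))

R = 1000
mk_phys_dist = make_dist(494.2, 0.3, R)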
The same is done for a_inv, which is also quoted with an error in the
literature. The above equation is then converted into a list comprehension to
yield a new distribution:
amk_phys_dist = [mk_phys / a_inv
                 for a_inv, mk_phys in zip(a_inv_dist, mk_phys_dist)]
The first equation is then also converted into a list comprehension:
ams_phys_dist = [
    (amk_phys**2 - 0.5 * ampi_phys**2) / aB - amcr
    for ampi_phys, amk_phys, aB, amcr
    in zip(ampi_phys_dist, amk_phys_dist, aB_dist, amcr_dist)]
To get the end result in terms of (Value ± Error), I then take the average and
standard deviation of this distribution of numbers:
ams_phys_val, ams_phys_avg, ams_phys_err \
= bootstrap.average_and_std_arrays(ams_phys_dist)
The actual value is supposed to be computed from the actual input values, not
from the mean of the bootstrap distribution. I used to have the code duplicated
for that; now I keep the original value at the 0th position of the _dist
arrays. The arrays therefore contain 1 + R elements, and the
bootstrap.average_and_std_arrays function separates out that element.
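(Roughly, average_and_std_arrays does the following; again a sketch rather than the exact code.)
import numpy as np

def average_and_std_arrays(dist):
    dist = np.asarray(dist)
    # element 0 is the actual value; the remaining R entries are bootstrap samples
    return dist[0], np.mean(dist[1:], axis=0), np.std(dist[1:], axis=0)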
This kind of line occurs for every number that I might want to quote in my
writing. I got annoyed by the writing and created a snippet for it:
$1_val, $1_avg, $1_err = bootstrap.average_and_std_arrays($1_dist)
The need for the snippet strongly told me that I need to do some refactoring.
Also the list comprehensions are always of the following pattern:
foo_dist = [ ... bar ...
for bar in bar_dist]
It feels bad to write bar three times there.
The Class Approach
I have tried to turn those _dist things into a Boot class, so that I would not
write ampi_dist and ampi_val but could just use ampi.val, without having
to explicitly call this average_and_std_arrays function and type a bunch of
names for it.
import numpy as np

class Boot(object):
    def __init__(self, dist):
        self.dist = dist

    def __str__(self):
        return str(self.dist)

    @property
    def cen(self):
        return self.dist[0]

    @property
    def val(self):
        x = np.array(self.dist)
        return np.mean(x[1:,], axis=0)

    @property
    def err(self):
        x = np.array(self.dist)
        return np.std(x[1:,], axis=0)
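A short usage sketch for this class (made-up numbers):
ampi = Boot([0.1234] + list(np.random.normal(0.1234, 0.0005, 500)))
print(ampi.cen, ampi.val, ampi.err)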
However, this still does not solve the problem of the list comprehensions. I
fear that I still have to repeat myself there three times. I could make the
Boot object inherit from list, such that I could at least write it like
this (without the _dist):
bar = Boot([... foo ... for foo in foo])
Magic Approach
Ideally all those list comprehensions would be gone such that I could just
write
bar = ... foo ...
where the dots mean some non-trivial operation. That can be simple arithmetic
as above, but it could also be a call to something that does not support being
called with multiple values (the way NumPy functions do).
For instance the scipy.optimize.curve_fit function needs to be called a bunch of times:
popt_dist = [op.curve_fit(linear, mpi, diff)[0]
             for mpi, diff in zip(mpi_dist, diff_dist)]
One would have to write a wrapper for that because it does not automatically loop over lists of arrays.
Question
Do you see a way to abstract this process of running every transformation with
1 + R sets of data? I would like to get rid of those patterns and the huge
number of variables in each namespace (_dist, _val, _avg, ...), as this
makes passing them to functions rather tedious.
Still I need to have a lot of freedom in the ... foo ... part where I need to
call arbitrary functions.
