How does Python's membership function work?

Can someone explain what fuzz.trimf(x, [0, 5, 10]) is taking in? The first argument is the range array, which in this case is x, but what is the [0, 5, 10] for?

The purpose of a membership function is to generalize a function using valuation.
In the case of trimf(), the membership function being created is triangularly shaped. To determine the bounds of the generalization built from the actual data, the user supplies scalars that constrain how large or small the generalization should be.
Those scalars are the second parameter of trimf() and are given as the list [0, 5, 10].
If you are familiar with the underlying math, the equation used to determine the value of the triangular membership function defined by [a, b, c] is:

μ(x) = 0                 for x <= a
μ(x) = (x - a) / (b - a) for a <= x <= b
μ(x) = (c - x) / (c - b) for b <= x <= c
μ(x) = 0                 for x >= c

Here the a would be your 0, the b would be your 5, and the c would be your 10: membership rises linearly from 0 at x = 0 to 1 at x = 5, then falls back to 0 at x = 10.
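For a concrete feel, here is a minimal runnable sketch using scikit-fuzzy (the skfuzzy package that provides fuzz.trimf); the universe x and the parameters [0, 5, 10] mirror the question:

import numpy as np
import skfuzzy as fuzz

# Universe of discourse: the values at which membership is evaluated.
x = np.arange(0, 11, 1)

# Triangular membership function with feet at 0 and 10 and peak at 5.
mf = fuzz.trimf(x, [0, 5, 10])

print(mf[0], mf[5], mf[10])  # 0.0 1.0 0.0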

Related

How to scale a second set of observations to the first set in a non-parametric way?

Imagine we have a vector of continuous transformations from a process [u_1, u_2, ..., u_n] for a set of inputs [p_1, p_2, ..., p_n], i.e. f(p_i) = u_i. We next develop another set of inputs [q_1, q_2, ..., q_m], where m does not necessarily equal n, and get observations [v_1, v_2, ..., v_m]. The problem is that the process f is somewhat stochastic, i.e. the hidden states of f may change in some unpredictable way, so two different calls on the same input may produce somewhat different result vectors. An easy way to think of this is that f computes a deterministic result but adds some random noise to that result before returning it. Is there a good non-parametric way to scale the second set of outputs so that it is comparable to the first?
One naïve approach would be to concatenate the first and second inputs and run the second call on both, so that the transformations of p and q are on the same scale and the results are comparable. But if I were doing this in a loop and the function call were expensive, I would have to figure out a good selection of inputs from earlier iterations to ensure that the transformation results from my current iteration were comparable to those from previous iterations.

Python np.sqrt(x-a)*np.heaviside(x,a)

I am trying to implement a calculation from a research paper. In that calculation, the value of a function is supposed to be
0, for x<a
sqrt(x-a)*SOMETHING_ELSE, for x>=a
In my module, x and a are 1D numpy-arrays (of the same length). In my first attempt I implemented the function as
f = np.sqrt(x-a)*SOMETHING*np.heaviside(x,a)
But for x<a, np.sqrt() returns NaN, and even though the heaviside factor is 0 there, 0 * NaN is still NaN.
I could also replace all NaN in my resulting array with 0s afterwards, but that would lead to warning output from numpy.sqrt() applied to negative values, which I would need to suppress. Another solution is to treat the argument of the square root as a complex number by adding 0j and taking the real part afterwards:
f = np.real(np.sqrt(x-a+0j)*SOMETHING*np.heaviside(x,a))
But I feel that neither solution is really elegant, and the second is unnecessarily hard to read. Is there a more elegant way to do this in Python that I am missing?
You can cheat with np.maximum in this case so that you never take the square root of a negative number.
Moreover, please note that np.heaviside does not use a as a threshold: it compares its first argument against 0, and the second argument only supplies the value returned where the first argument is exactly 0. You can use np.where instead.
Here is an example:
f = np.where(x<a, 0, np.sqrt(np.maximum(x-a, 0))*SOMETHING)
Note that in this specific case the expression can be simplified and np.where is not even needed (because np.sqrt(np.maximum(x-a, 0)) is already 0 wherever x < a). Thus, you can simply write:
f = np.sqrt(np.maximum(x-a, 0))*SOMETHING
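For completeness, here is a runnable version of the simplified form, with SOMETHING standing in as an arbitrary factor (2.0) purely for illustration:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
a = np.full_like(x, 2.0)
SOMETHING = 2.0  # placeholder for the paper-specific factor

f = np.sqrt(np.maximum(x - a, 0)) * SOMETHING
print(f)  # [0. 0. 0. 2. 2.82842712] -- zero wherever x < a, no NaNs, no warnings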

Generating an NxM array of uniformly distributed random numbers over a stated interval (not [0,1)) in numpy

I am aware of the numpy.random.rand() command, but there don't seem to be any parameters for adjusting the uniform interval from which the numbers are drawn to something other than [0,1).
I considered using a for loop, i.e. initializing a zero array of the needed size and using numpy.random.uniform(a,b,N) to generate N random numbers in the interval (a,b), then putting these into the initialized array. I am not aware of this function being able to create an array of arbitrary dimension, like rand above. This is clearly inelegant, although my main concern is the run time; I presume this method would have a much higher run time than using the appropriate random number generator from the start.
Edit and additional thought: the interval I am working in is [0, pi/8), which is less than 1. Strictly speaking, I won't be affecting the randomness of the generated numbers if I just rescale, but multiplying each generated number is clearly additional computational time, I presume proportional to the number of elements.
np.random.uniform accepts a low and a high:
In [11]: np.random.uniform(-3, 3, 7) # 7 numbers between -3 and 3
Out[11]: array([ 2.68365104, -0.97817374, 1.92815971, -2.56190434, 2.48954842, -0.16202127, -0.37050593])
numpy.random.uniform also accepts a size argument where you can just pass the size of your array as a tuple. To generate an MxN array, use
np.random.uniform(low,high, size=(M,N))
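Applied to the interval [0, pi/8) from the question (the 3x4 shape here is just for illustration):

import numpy as np

arr = np.random.uniform(low=0.0, high=np.pi / 8, size=(3, 4))
print(arr.shape)                                # (3, 4)
print(arr.min() >= 0.0, arr.max() < np.pi / 8)  # True True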

How does this class work? (Related to Quantopian, Python and Pandas)

From here: https://www.quantopian.com/posts/wsj-example-algorithm
class Reversion(CustomFactor):
    """
    Here we define a basic mean reversion factor using a CustomFactor. We
    take a ratio of the last close price to the average price over the
    last 60 days. A high ratio indicates a high price relative to the mean
    and a low ratio indicates a low price relative to the mean.
    """
    inputs = [USEquityPricing.close]
    window_length = 60

    def compute(self, today, assets, out, prices):
        out[:] = -prices[-1] / np.mean(prices, axis=0)
Reversion() seems to return a pandas.DataFrame, and I have absolutely no idea why.
For one thing, where is inputs and window_length used?
And what exactly is out[:]?
Is this specific behavior related to Quantopian in particular or Python/Pandas?
TL;DR
Reversion() doesn't return a DataFrame, it returns an instance of the
Reversion class, which you can think of as a formula for performing a
trailing window computation. You can run that formula over a particular time
period using either quantopian.algorithm.pipeline_output or
quantopian.research.run_pipeline, depending on whether you're writing a
trading algorithm or doing offline research in a notebook.
The compute method is what defines the "formula" computed by a Reversion
instance. It calculates a reduction over a 2D numpy array of prices, where
each row of the array corresponds to a day and each column of the array
corresponds to a stock. The result of that computation is a 1D array
containing a value for each stock, which is copied into out. out is also
a numpy array. The syntax out[:] = <expression> says "copy the values from
<expression> into out".
compute writes its result directly into an output array instead of simply
returning because doing so allows the CustomFactor base class to ensure
that the output has the correct shape and dtype, which can be nontrivial for
more complex cases.
Having a function "return" by overwriting an input is unusual and generally
non-idiomatic Python. I wouldn't recommend implementing a similar API unless
you're very sure that there isn't a better solution.
All of the code in the linked example is open source and can be found in
Zipline, the framework on top of
which Quantopian is built. If you're interested in the implementation, the
following files are good places to start:
zipline/pipeline/engine.py
zipline/pipeline/term.py
zipline/pipeline/graph.py
zipline/pipeline/pipeline.py
zipline/pipeline/factors/factor.py
You can also find a detailed tutorial on the Pipeline API
here.
I think there are two kinds of answers to your question:
1. How does the Reversion class fit into the larger framework of a Zipline/Quantopian algorithm? In other words, "how is the Reversion class used"?
2. What are the expected inputs to Reversion.compute() and what computation does it perform on those inputs? In other words, "what, concretely, does the Reversion.compute() method do?"
It's easier to answer (2) with some context from (1).
How is the Reversion class used?
Reversion is a subclass of CustomFactor, which is part of Zipline's
Pipeline API. The primary purpose of the Pipeline API is to make it easy
for users to perform a certain special kind of computation efficiently over
many sources of data. That special kind of computation is a cross-sectional
trailing-window computation, which has the form:
Every day, for some set of data sources, fetch the last N days of data for all
known assets and apply a reduction function to produce a single value per
asset.
A very simple cross-sectional trailing-window computation would be something
like "close-to-close daily returns", which has the form:
Every day, fetch the last two days' of close prices and, for each asset,
calculate the percent change between the asset's previous day close price and
its current close price.
To describe a cross-sectional trailing-window computation, we need at least
three pieces of information:
1. On what kinds of data (e.g. price, volume, market cap) does the computation operate?
2. Over how long a trailing window of data (e.g. 1 day, 20 days, 100 days) does the computation operate?
3. What reduction function does the computation perform over the data described by (1) and (2)?
The CustomFactor class defines an API for consolidating these three pieces of
information into a single object.
The inputs attribute describes the set of inputs needed to perform a
computation. In the snippet from the question, the only input is
USEquityPricing.close, which says that we just need trailing daily close
prices. In general, however, we can ask for any number of inputs. For
example, to compute VWAP (Volume-Weighted Average Price), we would use
something like inputs = [USEquityPricing.close, USEquityPricing.volume] to
say that we want trailing close prices and trailing daily volumes.
The window_length attribute describes the number of days of trailing data
required to perform a computation. In the snippet above we're requesting 60
days of trailing close prices.
The compute method describes the trailing-window computation to be
performed. In the section below, I've outlined exactly how compute performs
its computation. For now, it's enough to know that compute is essentially a
reduction function from some number of 2-dimensional arrays to a single
1-dimensional array.
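To make those three attributes concrete, here is a sketch of the VWAP factor mentioned above, written in the same style as Reversion. The data import path and the 20-day window are assumptions for illustration, not taken from the linked example:

import numpy as np
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data import USEquityPricing  # assumed import path

class VWAP(CustomFactor):
    inputs = [USEquityPricing.close, USEquityPricing.volume]
    window_length = 20  # arbitrary window chosen for the sketch

    def compute(self, today, assets, out, closes, volumes):
        # Volume-weighted average close over the trailing window, per asset.
        out[:] = np.sum(closes * volumes, axis=0) / np.sum(volumes, axis=0)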
You might notice that we haven't defined an actual set of dates on which we
might want to compute a Reversion factor. This is by design, since we'd like
to be able to use the same Reversion instance to perform calculations at
different points in time.
Quantopian defines two APIs for computing expressions like Reversion: an
"online" mode designed for use in actual trading algorithms, and a "batch" mode
designed for use in research and development. In both APIs, we first construct
a Pipeline object that holds all the computations we want to perform. We
then feed our pipeline object into a function that actually performs the
computations we're interested in.
In the batch API, we call run_pipeline passing our pipeline, a start date,
and an end date. A simple research notebook computing a custom factor might
look like this:
from quantopian.pipeline import Pipeline, CustomFactor
from quantopian.research import run_pipeline

class Reversion(CustomFactor):
    # Code from the snippet above.
    ...

reversion = Reversion()
pipeline = Pipeline({'reversion': reversion})
result = run_pipeline(pipeline, start_date='2014-01-02', end_date='2015-01-02')
do_stuff_with(result)
In a trading algorithm, we're generally interested in the most recently
computed values from our pipeline, so there's a slightly different API: we
"attach" a pipeline to our algorithm on startup, and we request the latest
output from the pipeline at the start of each day. A simple trading algorithm
using Reversion might look something like this:
import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline, CustomFactor

class Reversion(CustomFactor):
    # Code from the snippet above.
    ...

def initialize(context):
    reversion = Reversion()
    pipeline = Pipeline({'reversion': reversion})
    algo.attach_pipeline(pipeline, name='my_pipe')

def before_trading_start(context, data):
    result = algo.pipeline_output(name='my_pipe')
    do_stuff_with(result)
The most important thing to understand about the two examples above is that
simply constructing an instance of Reversion doesn't perform any
computation. In particular, the line:
reversion = Reversion()
doesn't fetch any data or call the compute method. It simply creates an
instance of the Reversion class, which knows that it needs 60 days of close
prices each day to run its compute function. Similarly,
USEquityPricing.close isn't a DataFrame or a numpy array or anything like
that: it's just a sentinel value that describes what kind of data Reversion
needs as an input.
One way to think about this is by an analogy to mathematics. An instance of
Reversion is like a formula for performing a calculation, and
USEquityPricing.close is like a variable in that formula.
Simply writing down the formula doesn't produce any values; it just gives us a
way to say "here's how to compute a result if you plug in values for all of
these variables".
We get a concrete result by actually plugging in values for our variables,
which happens when we call run_pipeline or pipeline_output.
So what, concretely, does Reversion.compute() do?
Both run_pipeline and pipeline_output ultimately boil down to calls to
PipelineEngine.run_pipeline, which is where actual computation happens.
To continue the analogy from above, if reversion is a formula, and
USEquityPricing.close is a variable in that formula, then PipelineEngine is
the grade school student whose homework assignment is to look up the value of
the variable and plug it into the formula.
When we call PipelineEngine.run_pipeline(pipeline, start_date, end_date), the
engine iterates through our requested expressions, loads the inputs for those
expressions, and then calls each expression's compute method once per trading
day between start_date and end_date with appropriate slices of the loaded
input data.
Concretely, the engine expects that each expression has a compute method with
a signature like:
def compute(self, today, assets, out, input1, input2, ..., inputN):
The first four arguments are always the same:
self is the CustomFactor instance in question (e.g. reversion in the
snippets above). This is how methods work in Python in general.
today is a pandas Timestamp representing the day on which compute is
being called.
assets is a 1-dimensional numpy array containing an integer for every
tradeable asset on today.
out is a 1-dimensional numpy array of the same shape as assets. The
contract of compute is that it should write the result of its computation
into out.
The remaining parameters are 2-D numpy arrays with shape (window_length, len(assets)).
Each of these parameters corresponds to an entry in the expression's inputs
list. In the case of Reversion, we only have a single input,
USEquityPricing.close, so there's only one extra parameter, prices, which
contains a 60 x len(assets) array containing 60 days of trailing close prices
for every asset that existed on today.
One unusual feature of compute is that it's expected to write its computed
results into out. Having functions "return" by mutating inputs is common in
low level languages like C or Fortran, but it's rare in Python and generally
considered non-idiomatic. compute writes its outputs into out partly for
performance reasons (we can avoid extra copying of large arrays in some cases),
and partly to make it so that CustomFactor implementors don't need to worry
about constructing output arrays with correct shapes and dtypes, which can be
tricky in more complex cases where a user has more than one return value.
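To make the calling convention concrete, here is a self-contained sketch of roughly what the engine does on a single day. The asset ids, prices, and the shortened window are made up; the real engine loads the data and allocates out itself:

import numpy as np
import pandas as pd

class Reversion:
    window_length = 3  # shortened from 60 so the example arrays stay small

    def compute(self, today, assets, out, prices):
        out[:] = -prices[-1] / np.mean(prices, axis=0)

today = pd.Timestamp('2014-01-02')
assets = np.array([101, 102, 103])        # integer asset ids
prices = np.array([[10.0, 20.0, 30.0],    # one row per day in the window,
                   [11.0, 19.0, 33.0],    # one column per asset
                   [12.0, 21.0, 36.0]])
out = np.empty(len(assets))               # engine-allocated output buffer

Reversion().compute(today, assets, out, prices)
print(out)  # -last_close / mean_close for each column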
As presented, that compute method might as well be static, since it doesn't use anything from within the Reversion class, unless whatever out is implicitly relies on a predefined CustomFactor class when slicing/setting its elements. Without digging into how exactly the quantopian.pipeline.CustomFactor class is implemented and used internally we can only guess, so you won't get a 100% complete answer here, but we can split the question into two parts and explain both using only Python natives.
The first part is assigning to a sequence slice, which is what happens inside the compute() method. out is a special sequence (a numpy array in this case, but we'll stick to how it basically operates) whose slice-assignment magic method (__setitem__ called with a slice in Python 3, __setslice__ in Python 2) is overridden so that it doesn't produce the usual result, the usual result being that each element of out gets replaced by the corresponding element of a given sequence, e.g.:
my_list = [1, 2, 3, 4, 5]
print(my_list) # [1, 2, 3, 4, 5]
my_list[:] = [5, 4, 3, 2, 1]
print(my_list) # [5, 4, 3, 2, 1]
But in that example the right-hand side isn't necessarily a sequence of the same size as out; it can be a scalar, in which case the override does a calculation with each of the elements of out and updates them in the process. You can create such a list like:
class InflatingList(list):  # extending collections.abc.MutableSequence would be cleaner
    def __setitem__(self, index, value):
        if isinstance(index, slice):  # Python 3 routes slice assignment through __setitem__
            for x in range(*index.indices(len(self))):
                super().__setitem__(x, self[x] + value)
        else:
            super().__setitem__(index, value)
Now when you use it, the behavior appears decidedly non-standard:
test_list = InflatingList([1, 2, 3, 4, 5])
print(test_list) # [1, 2, 3, 4, 5]
test_list[:] = 5
print(test_list) # [6, 7, 8, 9, 10]
test_list[2:4] = -3
print(test_list) # [6, 7, 5, 6, 10]
The second part depends purely on where else the Reversion class (or any other derivative of CustomFactor) is used; you don't have to use class properties explicitly for them to be useful to some other internal structure. Consider:
class Factor(object):
    scale = 1.0
    correction = 0.5

    def compute(self, out, inflate=1.0):
        out[:] = inflate

class SomeClass(object):
    def __init__(self, factor, data):
        assert isinstance(factor, Factor), "`factor` must be an instance of `Factor`"
        self._factor = factor
        self._data = InflatingList(data)

    def read_state(self):
        return self._data[:]

    def update_state(self, inflate=1.0):
        self._factor.compute(self._data, self._factor.scale)
        self._data[:] = -self._factor.correction + inflate
So, while Factor doesn't directly use its scale/correction variables, some other class might. Here's what happens when you run it through its cycles:
test = SomeClass(Factor(), [1, 2, 3, 4, 5])
print(test.read_state()) # [1, 2, 3, 4, 5]
test.update_state()
print(test.read_state()) # [2.5, 3.5, 4.5, 5.5, 6.5]
test.update_state(2)
print(test.read_state()) # [5.0, 6.0, 7.0, 8.0, 9.0]
But now you get the chance to define your own Factor that SomeClass uses, so:
class CustomFactor(Factor):
    scale = 2.0
    correction = -1

    def compute(self, out, inflate=1.0):
        out[:] = -inflate  # deflate instead of inflate
Can give you different results for the same input data (compare with the Factor run above):
test = SomeClass(CustomFactor(), [1, 2, 3, 4, 5])
print(test.read_state())  # [1, 2, 3, 4, 5]
test.update_state()
print(test.read_state())  # [1.0, 2.0, 3.0, 4.0, 5.0] -- compute()'s -2.0 and the +2.0 correction step cancel
test.update_state(2)
print(test.read_state())  # [2.0, 3.0, 4.0, 5.0, 6.0]
[Opinion time] I'd argue that this structure is badly designed. Whenever you encounter behavior that isn't really expected, chances are somebody was writing a solution in search of a problem, one that serves only to confuse users while signaling how knowledgeable the writer is for being able to bend a system's behavior to their whims; in reality such a writer mostly wastes everybody's valuable time so that they can pat themselves on the back. Both NumPy and Pandas, while great libraries on their own, are guilty of this, and they're all the worse as offenders because a lot of people are introduced to Python through those libraries and then, when they want to step outside their confines, find themselves wondering why my_list[2, 5, 12] doesn't work...

Get function given a list of values

Is there a way I can give Python a list of values, like [1, 3, 4.5, 1], and obtain a function that relates to those values, like y = 3x + 4 or something similar?
I don't want to plot it or anything; I just want to substitute values into that function and see what the results would be.
Edit: is there a way Python can calculate how the data are related? For example, if I give it a list containing thousands of values, it returns the function that fits those values.
Based on your comments to David Heffernan's answer,
I want is to know what the relation between the values is, I have thousands of values stored in a list and I want to know if python can tell me how they are related..
it seems like you are trying to do a regression analysis (probably a linear regression) and fit the values.
You can use NumPy for linear regression analysis in Python; there is a sample in the NumPy cookbook.
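For example, a minimal linear least-squares fit with np.linalg.lstsq, assuming (since the question only gives y-values) that the x-values are simply the indices 0, 1, 2, ...:

import numpy as np

y = np.array([1.0, 3.0, 4.5, 1.0])
x = np.arange(len(y), dtype=float)

A = np.column_stack([x, np.ones_like(x)])  # design matrix for y = m*x + c
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"y ~= {m:.3f}*x + {c:.3f}")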
Yes, the function is called map().

def y(x):
    return 3*x + 4

list(map(y, [1, 3, 4.5, 1]))

The map() function applies the function to every item; in Python 3 it returns a lazy iterator, so wrap it in list() to get the list of results.
Based on your revised question, I'm going to go ahead and add an answer. No, there is no such function. I imagine you're unlikely to find a function that comes close in any programming language. Your definitions aren't tight enough for anything to be reasonable yet. If we take a simple case with only two input integers you can have all sorts of relationships:
[10, 1]
possible relationships:
def x(y):
    return y ** 0

def x(y):
    return y / 10

def x(y):
    return y % 10 + 1

...and so on. Admittedly, some of those are arbitrary, but they are all valid relationships between the first and second values in the array you passed in. The possibilities for "solutions" become even more absurd as you ask for a relationship between 10, 15, or 35 numbers.
I assume you want to find out whether the sequences [1, 2, 3, 4] and [1, 3, 4.5, 1] (or else the pairs [(1, 1), (2, 3), (3, 4.5), (4, 1)]) are related by a (linear) function or not.
Try to plot them and see if they form something that looks like a (straight) line.
You can also look into correlation techniques; check this site with basic statistics material (scroll down to the correlation section): Basic Statistics
What you're looking for is called "statistical regression". There are many methods by which you might do this; here's a site that might help: Least Squares Regression. Ultimately, though, this is a field to which many books have been devoted. There are polynomial regressions, trig regressions, logarithmic ones... you'll have to know something about your data before you decide which model to apply; if you don't have any knowledge of what the dataset will look like before you process it, I'd suggest comparing the residuals of whatever models you fit and choosing the one with the lowest sum.
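As a sketch of that suggestion, numpy's polyfit can fit polynomials of several degrees, and the sum of squared residuals gives a crude way to compare them (the degrees tried here are arbitrary, and evenly spaced x-values are assumed):

import numpy as np

y = np.array([1.0, 3.0, 4.5, 1.0])
x = np.arange(len(y), dtype=float)

for deg in (1, 2, 3):
    model = np.poly1d(np.polyfit(x, y, deg))
    ssr = np.sum((y - model(x)) ** 2)  # sum of squared residuals
    print(deg, ssr)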
Short answer: No, no function.
