Passing parameters to deterministic variables, pymc - python

I am trying to implement a very simple example of the law of large numbers using PyMC. The goal is to generate many sample averages of samples of different sizes. For example, in the code below I repeatedly take groups of 5 samples (samples_to_average = 5), calculate their mean, and then find the 95% CI of the resulting trace.
The code below runs, but what I'd like to do is modify samples_to_average to be a list, so that I can calculate confidence intervals for a range of different sample sizes in a single pass.
import scipy.misc
import numpy as np
import pymc as mc

samples_to_average = 5
list_of_samples = mc.DiscreteUniform("response", lower=1, upper=10, size=1000)

@mc.deterministic
def sample_average(x=list_of_samples, n=samples_to_average):
    samples = int(n)
    selected = x[0:samples]
    total = np.sum(selected)
    sample_average = float(total) / samples
    return sample_average

def getConfidenceInterval():
    responseModel = mc.Model([samples_to_average, list_of_samples, sample_average])
    mapRes = mc.MAP(responseModel)
    mapRes.fit()
    mcmc = mc.MCMC(responseModel)
    mcmc.sample(10000, 5000)
    upper = np.percentile(mcmc.trace('sample_average')[:], 95)
    lower = np.percentile(mcmc.trace('sample_average')[:], 5)
    return (lower, upper)

print(getConfidenceInterval())
Most examples I've seen using the deterministic decorator use global stochastic variables. However, to achieve my aim, I think what I need to do is create a stochastic variable (of the correct length) in getConfidenceInterval(), and pass this to sample_average (rather than supplying sample_average using globals / default parameter).
How can a variable created in getConfidenceInterval() be passed into sample_average(), or alternatively, what is another way that I can evaluate multiple models using different values of samples_to_average? I'd like to avoid globals if possible.

Before addressing your question, I would like to simplify the way sample_average is written so that it is more compact and easier to understand.
sample_average = mc.Lambda('sample_average', lambda x=list_of_samples, n=samples_to_average: np.mean(x[:n]))
Now you can generalize this to the case where samples_to_average is an array of parameters:
samples_to_average = np.arange(5, 25, 5)
sample_average = mc.Lambda('sample_average', lambda x=list_of_samples, n=samples_to_average: [np.mean(x[:t]) for t in n])
The getConfidenceInterval function would also have to be changed as shown below:
def getConfidenceInterval():
    responseModel = mc.Model([samples_to_average, list_of_samples, sample_average])
    mapRes = mc.MAP(responseModel)
    mapRes.fit()
    mcmc = mc.MCMC(responseModel)
    mcmc.sample(10000, 5000)
    # stack the per-draw lists into a 2D array: rows are MCMC draws,
    # columns correspond to the entries of samples_to_average
    average = np.vstack([t for t in mcmc.trace('sample_average')])
    upper = np.percentile(average, 95, axis=0)
    lower = np.percentile(average, 5, axis=0)
    return (lower, upper)
I used vstack to aggregate the sample averages into a 2D array and then used the axis option in Numpy's percentile function to compute percentiles along each column.
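As a standalone illustration of that axis behaviour (a small sketch of my own, not tied to the model above):

import numpy as np

# rows play the role of MCMC draws, columns the different sample sizes
trace = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])
print(np.percentile(trace, 95, axis=0))  # one percentile per column: [ 2.9 29. ]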

Related

Is there a way to retrieve the nodes automatically computed by the mpmath.quad integration routine?

I am trying to calculate an integral with mpmath.quad. I basically have to calculate three moments of a distribution, call it f (pseudocode):
integrate(f(x)/x, 0, infinity)
integrate(f(x)*x, 0, infinity)
integrate(f(x)*ln(x)/x, 0, infinity)
As far as I understand, the tanh-sinh algorithm from mpmath applies a coordinate transformation to the integration interval, uses it to find suitable nodes x_k and weights w_k, and returns (pseudocode):
sum(w_k * f(x_k) for k = 1..N)
with N the number of nodes. Since in my original problem the calculation takes a long time, I was hoping to reuse the values of f calculated by the first integration for the other integrals, i.e., to compute them from discrete samples with something like scipy.integrate.trapezoid or simpson. Using a Simulator class I adapted from an answer on this forum, I managed to cache the x_k and f(x_k) values, but not the weights. I checked that the x_k from different integrations are the same.
Now, if I naively apply a standard quadrature from the scipy module to my samples, say trapezoid(fs, xs), the result I get is different from that calculated by mpmath.quad, especially if the samples are few. While this fact does not surprise me, I would like to find out how to retrieve the weights mpmath.quad uses in the first calculation, say the one for f(x)/x, so I could avoid running the time consuming algorithm thrice.
I cannot make sense of the documentation on this point. The mpmath documentation gives a lot of examples, but none concerning the retrieval of the calculated nodes, although it states several times that the nodes are "cached". Where are they? mpmath.quad only returns the integral result!
So I would like to know: is what I'm trying to achieve sensible at all? And if it is, how could I accomplish the task?
Below is code that reproduces the behaviour. Any help is much appreciated.
import numpy as np
import mpmath as mp
import matplotlib.pyplot as plt
from scipy.integrate import trapezoid, simpson
class Simulator:
    def __init__(self, func, storex: np.ndarray = np.array([]), storef: np.ndarray = np.array([])):
        self.func = func
        self.storex = storex
        self.storef = storef

    def simulate(self, x, *args):
        result = self.func(x, *args)
        self.storex = np.append(self.storex, x)
        self.storef = np.append(self.storef, result)
        return result

def lorentz(x):
    x = mp.mpf(x)
    return mp.mpf(1) / (mp.power(x - mp.mpf(1), 2) + mp.mpf(1))

simratio = simproduct = Simulator(lorentz)

integralratio = mp.quad(lambda x: simratio.simulate(x) / x, [0, 1, mp.inf])
integralproduct = mp.quad(lambda x: simproduct.simulate(x) * x, [0, 1, mp.inf])

# sort the cached evaluations by x: we get the sampled xs and f(x)s
ratiox, ratioy = np.transpose(sorted([(x, y) for x, y in zip(simratio.storex, simratio.storef)]))

integralproduct_trap = trapezoid(ratioy * ratiox, ratiox)

print("int[f(x)/x], quad: ", integralratio)
print("int[f(x)*x], quad: ", integralproduct)
print("int[f(x)*x], scipy.trapezoid: ", integralproduct_trap)

How do I force two arrays to be equal for use in pyplot?

I'm trying to plot a simple moving averages function but the resulting array is a few numbers short of the full sample size. How do I plot such a line alongside a more standard line that extends for the full sample size? The code below results in this error message:
ValueError: x and y must have same first dimension, but have shapes (96,) and (100,)
This is using standard matplotlib.pyplot. I've tried deleting X values using remove and del, switching all arrays to numpy arrays (since that's the output format of my moving-averages function), and adding an if condition to the append in the while loop, but none of these worked.
import random
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

def movingaverage(values, window):
    weights = np.repeat(1.0, window) / window
    smas = np.convolve(values, weights, 'valid')
    return smas

sampleSize = 100
min = -10
max = 10
window = 5

vX = np.array([])
vY = np.array([])

x = 0
val = 0
while x < sampleSize:
    val += (random.randint(min, max))
    vY = np.append(vY, val)
    vX = np.append(vX, x)
    x += 1

plt.plot(vX, vY)
plt.plot(vX, movingaverage(vY, window))
plt.show()
Expected results would be two lines on the same graph - one a simple moving average of the other.
Just change this line to the following:
smas = np.convolve(values, weights,'same')
The 'valid' option only returns output where the window fully overlaps the values array, so the result is shorter than the input. With 'same', the output has the same length as the input, which is what you are looking for.
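To see the difference in output length concretely (a small standalone check, not part of the original answer):

import numpy as np

values = np.arange(10.0)
weights = np.ones(5) / 5
print(np.convolve(values, weights, 'valid').shape)  # (6,)  i.e. len(values) - window + 1
print(np.convolve(values, weights, 'same').shape)   # (10,) same length as values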
Edit: using 'same', however, comes with its own issue: it acts as if there were extra data points with value 0 wherever the window does not fully sit on top of the data. You can choose to ignore this, as this solution does, or pad the array with values of your own choosing instead (see Mike Sperry's answer).
Here is how you would pad a numpy array out to the desired length with NaNs (replace np.nan with other values, or 'constant' with another mode, depending on the desired result):
https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html
import numpy as np

bob = np.asarray([1, 2, 3], dtype=float)  # float dtype so the array can hold NaN
alice = np.pad(bob, (0, 100 - len(bob)), 'constant', constant_values=(np.nan, np.nan))
So in your code it would look something like this:
import random
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

def movingaverage(values, window):
    weights = np.repeat(1.0, window) / window
    smas = np.convolve(values, weights, 'valid')
    # pad both ends with NaN so the result is as long as the input
    # (note: 100 is hard-coded here to match sampleSize below)
    shorted = int((100 - len(smas)) / 2)
    print(shorted)
    smas = np.pad(smas, (shorted, shorted), 'constant', constant_values=(np.nan, np.nan))
    return smas

sampleSize = 100
min = -10
max = 10
window = 5

vX = np.array([])
vY = np.array([])

x = 0
val = 0
while x < sampleSize:
    val += (random.randint(min, max))
    vY = np.append(vY, val)
    vX = np.append(vX, x)
    x += 1

plt.plot(vX, vY)
plt.plot(vX, movingaverage(vY, window))
plt.show()
To answer your basic question, the key is to take a slice of the x-axis appropriate to the data of the moving average. Since you have a convolution of 100 data elements with a window of size 5, the result is valid for the last 96 elements. You would plot it like this:
plt.plot(vX[window - 1:], movingaverage(vY, window))
That being said, your code could stand to have some optimization done on it. For example, numpy arrays are stored in fixed size static buffers. Any time you do append or delete on them, the entire thing gets reallocated, unlike Python lists, which have amortization built in. It is always better to preallocate if you know the array size ahead of time (which you do).
Secondly, running an explicit loop is rarely necessary. You are generally better off using the under-the-hood loops implemented at the lowest level in the numpy functions instead. This is called vectorization. Random number generation, cumulative sums and incremental arrays are all fully vectorized in numpy. In a more general sense, it's usually not very effective to mix Python and numpy computational functions, including random.
Finally, you may want to consider a different convolution method. I would suggest something based on numpy.lib.stride_tricks.as_strided. This is a somewhat arcane, but very effective way to implement a sliding window with numpy arrays. I will show it here as an alternative to the convolution method you used, but feel free to ignore this part.
All in all:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

def movingaverage(values, window):
    # this step creates a strided view into the same buffer:
    # each column is one length-`window` sliding window
    values = np.lib.stride_tricks.as_strided(values, shape=(window, values.size - window + 1), strides=values.strides * 2)
    smas = values.sum(axis=0, dtype=float)
    smas /= window  # in-place to avoid a temporary array
    return smas

sampleSize = 100
min = -10
max = 10
window = 5

v_x = np.arange(sampleSize)
v_y = np.cumsum(np.random.random_integers(min, max, sampleSize))  # deprecated in newer NumPy; see note below

plt.plot(v_x, v_y)
plt.plot(v_x[window - 1:], movingaverage(v_y, window))
plt.show()
A note on names: in Python, variable and function names are conventionally name_with_underscore. CamelCase is reserved for class names. np.random.random_integers uses inclusive bounds just like random.randint, but allows you to specify the number of samples to generate. Confusingly, np.random.randint has an exclusive upper bound, more like random.randrange.
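On recent NumPy versions, np.random.random_integers is deprecated; if you want the same inclusive-bound behaviour with the current Generator API, something like this sketch works (my addition, reusing the sizes from the example above):

import numpy as np

rng = np.random.default_rng(seed=0)
# endpoint=True makes the upper bound inclusive, mirroring random.randint
v_y = np.cumsum(rng.integers(-10, 10, size=100, endpoint=True))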

Calculating cross-correlation with fft returning backwards output

I'm trying to cross-correlate two sets of data by taking the Fourier transform of both, multiplying the conjugate of the first FFT by the second FFT, and then transforming back to time space. To test my code, I am comparing the output with the output of numpy.correlate. However, when I plot the results (restricted to a certain window), the two signals seem to go in opposite directions / are mirrored about zero.
This is what my output looks like
My code:
import numpy as np
import matplotlib.pyplot as plt

dt = 0.1  # sample spacing (taken from the arange step below)
phl_data = np.sin(np.arange(0, 10, dt))
mlac_data = np.cos(np.arange(0, 10, dt))
N = phl_data.size
zeroes = np.zeros(N - 1)
phl_data = np.append(phl_data, zeroes)
mlac_data = np.append(mlac_data, zeroes)

# cross-correlate x = phl_data, y = mlac_data:
# take FFTs:
phl_fft = np.fft.fft(phl_data)
mlac_fft = np.fft.fft(mlac_data)
# fft of cross-correlation
Cw = np.conj(phl_fft) * mlac_fft
#Cw = np.fft.fftshift(Cw)
# transform back to time space:
Cxy = np.fft.fftshift(np.fft.ifft(Cw))
times = np.arange(-N + 1, N) * dt  # 2N-1 lags, matching len(Cxy)
plt.plot(times, Cxy.real)
plt.xlim(-250, 250)

# test against correlating directly:
c = np.correlate(phl_data, mlac_data, mode='same')
plt.plot(times, c)
plt.show()
(both data sets have been padded with N-1 zeroes)
The documentation for numpy.correlate explains this:
This function computes the correlation as generally defined in signal processing texts:
c_{av}[k] = sum_n a[n+k] * conj(v[n])
and:
Notes
The definition of correlation above is not unique and sometimes correlation may be defined differently. Another common definition is:
c'_{av}[k] = sum_n a[n] conj(v[n+k])
which is related to c_{av}[k] by c'_{av}[k] = c_{av}[-k].
Thus, there is no unique definition; the two common conventions differ by a time reversal, which is why your FFT-based result appears mirrored relative to np.correlate's output.
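In practice, one way to make the FFT result follow the same convention as np.correlate is to conjugate the second spectrum instead of the first. A sketch against the variable names in the question (my suggestion, not from the original answer; for real-valued data the change simply flips the lag axis):

# conjugate the second FFT rather than the first so the lag convention matches
# numpy.correlate's c[k] = sum_n a[n+k] * conj(v[n])
Cw = phl_fft * np.conj(mlac_fft)
Cxy = np.fft.fftshift(np.fft.ifft(Cw)).real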

Normally distributed sub-sampling from a numpy array in python

I have a numpy array whose values are distributed in the following manner
From this array I need to get a random sub-sample which is normally distributed.
I need to get rid of the values that lie above the red line in the picture, i.e. remove some occurrences of certain values so that the distribution is smoothed out once the abrupt peaks are removed.
And my array's distribution should become like this:
Can this be achieved in Python without manually looking for the entries corresponding to the peaks and removing some occurrences of them? Is there a simpler way?
The following kind of works, though it is rather aggressive:
It works by ordering the samples, transforming them to uniform, and then trying to select a regular, grid-like subsample. If you feel it is too aggressive, you can increase ns, which is essentially the number of samples kept.
Also, please note that it requires knowledge of the true distribution. In the case of a normal distribution you should be fine using the sample mean and the unbiased variance estimate (the one with n-1).
Code (without plotting):
import scipy.stats as ss
import numpy as np
a = ss.norm.rvs(size=1000)
b = ss.uniform.rvs(size=1000)<0.4
a[b] += 0.1*np.sin(10*a[b])
def smooth(a, gran=25):
o = np.argsort(a)
s = ss.norm.cdf(a[o])
ns = int(gran / np.max(s[gran:] - s[:-gran]))
grid, dp = np.linspace(0, 1, ns, endpoint=False, retstep=True)
grid += dp/2
idx = np.searchsorted(s, grid)
c = np.flatnonzero(idx[1:] <= idx[:-1])
while c.size > 0:
idx[c+1] = idx[c] + 1
c = np.flatnonzero(idx[1:] <= idx[:-1])
idx = idx[:np.searchsorted(idx, len(a))]
return o[idx]
ap = a[smooth(a)]
c, b = np.histogram(a, 40)
cp, _ = np.histogram(ap, b)
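To eyeball the effect, a minimal plotting snippet (my addition; it reuses a, ap, b, c and cp from the code above):

import matplotlib.pyplot as plt

centers = (b[:-1] + b[1:]) / 2
plt.step(centers, c, label='original')      # histogram counts of a
plt.step(centers, cp, label='sub-sampled')  # histogram counts of ap, same bins
plt.legend()
plt.show()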

Convolution computations in Numpy/Scipy

Profiling some computational work I'm doing showed me that one bottleneck in my program was a function that basically did this (np is numpy, sp is scipy):
def mix1(signal1, signal2):
    spec1 = np.fft.fft(signal1, axis=1)
    spec2 = np.fft.fft(signal2, axis=1)
    return np.fft.ifft(spec1*spec2, axis=1)
Both signals have shape (C, N) where C is the number of sets of data (usually less than 20) and N is the number of samples in each set (around 5000). The computation for each set (row) is completely independent of any other set.
I figured that this was just a simple convolution, so I tried to replace it with:
def mix2(signal1, signal2):
    outputs = np.empty_like(signal1)
    for idx, row in enumerate(outputs):
        outputs[idx] = sp.signal.convolve(signal1[idx], signal2[idx], mode='same')
    return outputs
...just to see if I got the same results. But I didn't, and my questions are:
Why not?
Is there a better way to compute the equivalent of mix1()?
(I realise that mix2 probably wouldn't have been faster as-is, but it might have been a good starting point for parallelisation.)
Here's the full script I used to quickly check this:
import numpy as np
import scipy as sp
import scipy.signal

N = 4680
C = 6

def mix1(signal1, signal2):
    spec1 = np.fft.fft(signal1, axis=1)
    spec2 = np.fft.fft(signal2, axis=1)
    return np.fft.ifft(spec1*spec2, axis=1)

def mix2(signal1, signal2):
    outputs = np.empty_like(signal1)
    for idx, row in enumerate(outputs):
        outputs[idx] = sp.signal.convolve(signal1[idx], signal2[idx], mode='same')
    return outputs

def test(num, chans):
    sig1 = np.random.randn(chans, num)
    sig2 = np.random.randn(chans, num)
    res1 = mix1(sig1, sig2)
    res2 = mix2(sig1, sig2)
    np.testing.assert_almost_equal(res1, res2)

if __name__ == "__main__":
    np.random.seed(0x1234ABCD)
    test(N, C)
So I tested this out and can now confirm a few things:
1) numpy.convolve (like the scipy.signal.convolve used in mix2) is not circular, whereas the FFT code gives you a circular convolution. That is why the results differ.
2) FFT does not internally pad to a power of 2. Compare the vastly different speeds of the following operations:
x1 = np.random.uniform(size=2**17-1)
x2 = np.random.uniform(size=2**17)
np.fft.fft(x1)
np.fft.fft(x2)
3) Normalization is not the difference: if you do a naive circular convolution by summing a(k)*b(i-k) over k, you will reproduce the result of the FFT code.
The catch is that padding to a power of 2 changes the answer. I've heard that there are ways to deal with this by cleverly using prime factors of the length (mentioned, but not coded, in Numerical Recipes), but I've never seen anyone actually do it.
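If the goal is instead to make the FFT route reproduce the linear convolution that scipy.signal.convolve computes, the usual trick is to zero-pad each row to the full linear output length before transforming. A minimal sketch of my own (not from the answers here), assuming real-valued inputs; it returns the 'full'-length result, so you would still need to slice out the central part to compare against mode='same':

import numpy as np

def mix1_linear(signal1, signal2):
    # pad to the full linear-convolution length so the circular wrap-around
    # introduced by the FFT never overlaps the signal
    n = signal1.shape[1] + signal2.shape[1] - 1
    spec1 = np.fft.fft(signal1, n=n, axis=1)
    spec2 = np.fft.fft(signal2, n=n, axis=1)
    return np.fft.ifft(spec1 * spec2, axis=1).real  # real inputs -> real output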
scipy.signal.fftconvolve performs the convolution via FFT and is written in Python, so you can study its source code and correct your mix1 function.
As mentioned before, the scipy.signal.convolve function does not perform a circular convolution. If you want a circular convolution performed in real space (instead of via FFTs), I suggest the scipy.ndimage.convolve function. It has a mode parameter which can be set to 'wrap', making it a circular convolution.
for idx, row in enumerate(outputs):
    outputs[idx] = sp.ndimage.convolve(signal1[idx], signal2[idx], mode='wrap')
