Signal generator specific noise modulation - Python

I am simulating a signal (see below) and later add noise to it.
However, I don't want to add noise to every single sample point. Is there a way to set a defined number of entries in my noise array to zero, with the zeros positioned randomly? nse_kappa and noise_mu are parameters of the noise distribution.
x = np.zeros((n_epochs, times.size))
for i in range(0, n_epochs):
    print("Signal simulation of Epoch %s complete" % (i + 1))
    for j in range(0, freqs0.size):
        x[i] = x[i] + (np.sqrt(Rxx0[j]) * np.sin(1 * np.pi * freqs0[j] * times))
    x[i] = x[i] + np.random.normal(loc=noise_mu, scale=nse_kappa, size=times.size)

Where you add the noise, check if it should be added.
import random
for x in range(10):
    if random.uniform(0, 1) > 0.5:
        print("Add noise")
    else:
        print("Do nothing")

Use random.sample to generate non-repeating random integers as indices, and use them to set your array values to zero.
import random
# your code
N = 100 # number of indices set to zero
ind = random.sample(range(len(x)),N)
x[ind] = 0.
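An equivalent numpy-only sketch (the array size and N are illustrative); np.random.choice with replace=False gives non-repeating indices directly:
import numpy as np

noise = np.random.normal(loc=0.0, scale=1.0, size=1000)  # illustrative noise vector
N = 100  # number of entries to set to zero
ind = np.random.choice(noise.size, size=N, replace=False)  # unique random indices
noise[ind] = 0.0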

Related

How to include numbers we need in a list which is generated by some stochastic algorithm

I need to implement a stochastic algorithm that provides as output the times and the states of a dynamic system at the corresponding time points. We include randomness in defining the time points by drawing a random number from the uniform distribution. What I want to do is find the state at time points 0, 1, 2, ..., 24. Given the randomness of the algorithm, the time points 1, 2, 3, ..., 24 are not necessarily hit. We may include rounding at two decimal places, but even with rounding I cannot find/insert all of these time points. The question is: how do I change the code so that the list of time points includes the numbers 1, 2, ..., 24 while preserving the stochasticity of the algorithm? Thanks for any suggestion.
import numpy as np
import random
import math as m

np.random.seed(seed=5)
# Stoichiometric matrix
S = np.array([(-1, 0), (1, -1)])
# Reaction parameters
ke = 0.3; ka = 0.5
k = [ke, ka]
# Initial state vector at time t0
X1 = [200]; X2 = [0]
# We will update it for each time.
X = [X1, X2]
# Initial time is t0 = 0, which we will update.
t = [0]
# End time
tfinal = 24

# The propensity vector R concerning the last/updated value of time
def ReactionRates(k, X1, X2):
    R = np.zeros((2, 1))
    R[0] = k[1] * X1[-1]
    R[1] = k[0] * X2[-1]
    return R

# We implement the Gillespie (SSA) algorithm
while True:
    # Reaction propensities/rates
    R = ReactionRates(k, X1, X2)
    propensities = R
    propensities_sum = sum(R)[0]
    if propensities_sum == 0:
        break
    # we include randomness
    u1 = np.random.uniform(0, 1)
    delta_t = (1 / propensities_sum) * m.log(1 / u1)
    if t[-1] + delta_t > tfinal:
        break
    t.append(t[-1] + delta_t)
    b = [0, R[0], R[1]]
    u2 = np.random.uniform(0, 1)
    # Choose j
    lambda_u2 = propensities_sum * u2
    for j in range(len(b)):
        if sum(b[0:j]) < lambda_u2 <= sum(b[1:j + 1]):
            break  # out of for j
    # make j zero based
    j -= 1
    # We update the state vector
    X1.append(X1[-1] + S.T[j][0])
    X2.append(X2[-1] + S.T[j][1])

# round t values
t = [round(tt, 2) for tt in t]
print("The time steps:", t)
print("The second component of the state vector:", X2)
After playing with your model, I conclude that interpolation works fine.
Basically, just append the following lines to your code:
ts = np.arange(tfinal+1)
xs = np.interp(ts, t, X2)
and if you have matplotlib installed, you can visualize using
import matplotlib.pyplot as plt
plt.plot(t, X2)
plt.plot(ts, xs)
plt.show()
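Since the SSA state is piecewise constant between reaction events, another option is to sample the last event state at or before each integer time instead of interpolating linearly. A sketch, assuming t, X2 and tfinal from the code above are in scope:
import numpy as np

ts = np.arange(tfinal + 1)
# Index of the last reaction event at or before each requested time (zero-order hold).
idx = np.searchsorted(t, ts, side='right') - 1
idx = np.clip(idx, 0, len(X2) - 1)
xs_hold = np.array(X2)[idx]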

Appending values in sub-arrays of an array in python

The goal here is to construct the one-particle distribution function of a system evolving under Brownian dynamics; at every step one has to produce a random number drawn from a Gaussian distribution. To construct this quantity, I am thinking of running several simulations, saving the distances of each particle from the center of the 2D square at specific times in each simulation, and only at the end creating a histogram of all the values.
My problem is that in each simulation, time begins at zero and advances with a certain time-step, at each of which the particles move randomly. So the distances to be saved have to be labeled correctly with their corresponding times.
My thought was to create an array that has, in each row, 5 sub-arrays, one for each time at which I want to save the distances of the particles from the center of the square. I am trying to do this with numpy, but with no success. For each simulation, and for specific times, I create an array with all the distances, and I try to append it with numpy.append to the specific sub-array, but this doesn't work correctly; as I understand it, the problem is that I don't know how to index the sub-arrays properly (and across all the simulations).
Beyond that, I think the approach is not the best: either I will have to abandon numpy and figure out how to index the array properly with two indices, or figure out a way to use numpy more effectively.
So, to the point, the general question is how I could add/append values to specific sub-arrays of an array (either pre-constructed with numpy, or treated as a list).
The alternative would be for someone to mention a more efficient way of creating the one-particle distribution function of a Brownian motion problem, which would be really helpful.
I am adding the relevant code below. Thank you all in advance.
Code:
import random
import math
import matplotlib.pyplot as plt
import numpy as np

# def dump(particles, step, n):
#     fileoutput = open('coord.txt', 'a')
#     fileoutput.write("ITEM: TIMESTEP \n")
#     fileoutput.write("%i \n" % step)
#     fileoutput.write("ITEM: NUMBER OF ATOMS \n")
#     fileoutput.write("%i \n" % n)
#     fileoutput.write("ITEM: BOX BOUNDS \n")
#     fileoutput.write("%e %e xlo xhi \n" % (0.0, 100))
#     fileoutput.write("%e %e xlo xhi \n" % (0.0, 100))
#     fileoutput.write("%e %e xlo xhi \n" % (-0.25, 0.25))
#     fileoutput.write("ITEM: ATOMS id type x y z \n")
#     i = 0
#     while i < n:
#         x = particles[i][0]
#         y = particles[i][1]
#         # fileoutput.write("%i %i %f %f %f \n" % (i, 1, x*1e10, y*1e10, z*1e10))
#         fileoutput.write("%i %i %f %f %f \n" % (i, 1, x, y, 0))
#         i += 1
#     fileoutput.close()

num_sims = 2
N = 49
L = 10
meanz = 0
varz = 1
sigma = 1
# tau = sigma**2*ksi/(kT)
# Starting time
t_0 = 0
# Time increments
dt = 10**(-4)  # dt/tau
# Ending time
T = 10**2  # T/tau
# Produce random particles and avoid overlap:
particles = np.full((N, 2), L/2)
times = np.arange(t_0, T, dt)
check = 0
distances = np.empty([50*num_sims, 5])
for sim in range(0, num_sims):
    step = 0
    t_index = 0
    for t in times:
        r = []
        for i in range(0, N):
            z = np.random.normal(meanz, varz)
            particles[i][0] = particles[i][0] + ((2*dt*sigma**2)**(1/2))*z
            z = random.gauss(meanz, varz)
            particles[i][1] = particles[i][1] + ((2*dt*sigma**2)**(1/2))*z
        if (t % (2*(10**5)*dt) == 0):
            for j in range(0, N):
                rj = ((particles[j][0]-L/2)**2 + (particles[j][1]-L/2)**2)**(1/2)
                r.append(rj)
            distances[t_index] = np.append(distances[t_index], r)
            t_index += 1
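One possible way to organize the bookkeeping (a sketch, not a definitive answer): preallocate a 3-D array with one slot per (simulation, save time, particle), so no appending is needed. The names n_save_times and all_distances, and the placeholder distances, are illustrative:
import numpy as np

num_sims = 2
N = 49              # particles per simulation
n_save_times = 5    # number of times per simulation at which distances are saved

all_distances = np.empty((num_sims, n_save_times, N))

for sim in range(num_sims):
    for t_index in range(n_save_times):
        # ... advance the dynamics to this save time ...
        r = np.random.rand(N)              # placeholder for the real distances
        all_distances[sim, t_index] = r    # no append: write into the preallocated slot

# Histogram input: all distances saved at the third save time, across all simulations.
values = all_distances[:, 2, :].ravel()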

Coupled map lattice in Python

I am attempting to plot a bifurcation diagram for the following one-dimensional, spatially extended system with boundary conditions:
x[i,n+1] = (1-eps)*(r*x[i,n]*(1-x[i,n])) + 0.5*eps*( r*x[i-1,n]*(1-x[i-1,n]) + r*x[i+1,n]*(1-x[i+1,n])) + p
I am having trouble getting the desired output figure, maybe because of the number of transients I am using. Can someone cross-check my code and tell me what value of nTransients I should choose, i.e. how many transients to ignore?
My Python code is as follows:
import numpy as np
from numpy import *
from pylab import *

L = 60    # no. of lattice sites
eps = 0.6 # diffusive coupling strength
r = 4.0   # control parameter r
np.random.seed(1010)
ic = np.random.uniform(0.1, 0.9, L)  # random initial condition betn. (0,1)
nTransients = 900  # The iterates we'll throw away
nIterates = 1000   # This sets how much the attractor is filled in
nSteps = 400       # This sets how dense the bifurcation diagram will be
pLow = -0.4
pHigh = 0.0
pInc = (pHigh - pLow) / float(nSteps)

def LM(p, x):
    x_new = []
    for i in range(L):
        if i == 0:
            x_new.append((1-eps)*(r*x[i]*(1-x[i])) + 0.5*eps*(r*x[L-1]*(1-x[L-1]) + r*x[i+1]*(1-x[i+1])) + p)
        elif i == L-1:
            x_new.append((1-eps)*(r*x[i]*(1-x[i])) + 0.5*eps*(r*x[i-1]*(1-x[i-1]) + r*x[0]*(1-x[0])) + p)
        elif i > 0 and i < L-1:
            x_new.append((1-eps)*(r*x[i]*(1-x[i])) + 0.5*eps*(r*x[i-1]*(1-x[i-1]) + r*x[i+1]*(1-x[i+1])) + p)
    return x_new

for p in arange(pLow, pHigh, pInc):
    # set initial conditions
    state = ic
    # throw away the transient iterations
    for i in range(nTransients):
        state = LM(p, state)
    # now store the next batch of iterates
    psweep = []  # store p values
    x = []       # store iterates
    for i in range(nIterates):
        state = LM(p, state)
        psweep.append(p)
        x.append(state[L//2 - 1])
    plot(psweep, x, 'k,')  # Plot the list of (r,x) pairs as pixels

xlabel('Pinning Strength p')
ylabel('X(L/2)')
# Display plot in window
show()
Can someone also tell me whether the figure displayed by pylab at the end uses dots or lines as markers, and if it is lines, how to get a plot with dots?
This is my output image for reference, after using pixels:
It still isn't clear exactly what your desired output is, but I'm guessing you're aiming for something that looks like this image from Wikipedia:
Going with that assumption, I gave it my best shot, but I'm guessing your equations (with the boundary conditions and so on) give you something that simply doesn't look quite that pretty. Here's my result:
This plot by itself may not look like the best thing ever, however, if you zoom in, you can really see some beautiful detail (this is right from the center of the plot, where the two arms of the bifurcation come down, meet, and then branch away again):
Note that I have used horizontal lines, with alpha=0.1 (originally you were using solid, vertical lines, which was why the result didn't look good).
The code!
I essentially modified your program a little to make it vectorized: I removed the for loop over p, which made the whole thing run almost instantaneously. This enabled me to use a much denser sampling for p, and allowed me to plot horizontal lines.
from __future__ import print_function, division
import numpy as np
import matplotlib.pyplot as plt

L = 60    # no. of lattice sites
eps = 0.6 # diffusive coupling strength
r = 4.0   # control parameter r
np.random.seed(1010)
ic = np.random.uniform(0.1, 0.9, L)  # random initial condition betn. (0,1)
nTransients = 100  # The iterates we'll throw away
nIterates = 100    # This sets how much the attractor is filled in
nSteps = 4000      # This sets how dense the bifurcation diagram will be
pLow = -0.4
pHigh = 0.0
pInc = (pHigh - pLow) / nSteps

def LM(p, x):
    x_new = np.empty(x.shape)
    for i in range(L):
        if i == 0:
            x_new[i] = ((1 - eps) * (r * x[i] * (1 - x[i])) + 0.5 * eps * (r * x[L - 1] * (1 - x[L - 1]) + r * x[i + 1] * (1 - x[i + 1])) + p)
        elif i == L - 1:
            x_new[i] = ((1 - eps) * (r * x[i] * (1 - x[i])) + 0.5 * eps * (r * x[i - 1] * (1 - x[i - 1]) + r * x[0] * (1 - x[0])) + p)
        elif i > 0 and i < L - 1:
            x_new[i] = ((1 - eps) * (r * x[i] * (1 - x[i])) + 0.5 * eps * (r * x[i - 1] * (1 - x[i - 1]) + r * x[i + 1] * (1 - x[i + 1])) + p)
    return x_new

p = np.arange(pLow, pHigh, pInc)
# set initial conditions
state = np.tile(ic[:, np.newaxis], (1, p.size))
# throw away the transient iterations
for i in range(nTransients):
    state = LM(p, state)
# now store the next batch of iterates
x = np.empty((p.size, nIterates))  # store iterates
for i in range(nIterates):
    state = LM(p, state)
    x[:, i] = state[L // 2 - 1]
# Plot the list of (r,x) pairs as pixels
plt.plot(p, x, c=(0, 0, 0, 0.1))
plt.xlabel('Pinning Strength p')
plt.ylabel('X(L/2)')
# Display plot in window
plt.show()
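To answer the marker question directly: in matplotlib, ',' is the single-pixel marker and '.' is a small round dot, so switching the format string gives dots instead of lines or pixels. A sketch using the p and x arrays from the code above (markersize is an illustrative value):
import matplotlib.pyplot as plt

plt.plot(p, x, 'k.', markersize=0.5)  # dot markers; no connecting lines are drawn
plt.xlabel('Pinning Strength p')
plt.ylabel('X(L/2)')
plt.show()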
I don't want to try explaining the whole program to you: I've used a few standard numpy tricks, including broadcasting, but otherwise, I have not modified much. I've not modified your LM function at all.
Please don't hesitate to ask me in the comments if you have any questions! I'm happy to explain specifics that you want explained.
A note on transients and iterates: hopefully, now that the program runs much faster, you can try playing with these yourself. To me, the number of transients seemed to determine how long the plot remained "deterministic-looking". The number of iterates just increases the density of plot lines, so increasing it beyond a point didn't seem to make sense to me.
I tried increasing the number of transients all the way up till 10,000. Here's my result from that experiment, for your reference:

Make a number more probable to result from random

I'm using x = numpy.random.rand(1) to generate a random number between 0 and 1. How do I make it so that x > .5 is 2 times more probable than x < .5?
Just do a little manipulation of the inputs. First set x to be in the range from 0 to 1.5.
x = numpy.random.uniform(0, 1.5)
x has a 2/3 chance of being greater than 0.5 and a 1/3 chance of being smaller. Then, if x is greater than or equal to 1.0, subtract 0.5 from it:
if x >= 1.0:
    x = x - 0.5
This is overkill for you, but it's good to know an actual method for generating a random number with any probability density function (pdf).
You can do that by subclassing scipy.stats.rv_continuous, provided you do it correctly. You will have to have a normalized pdf (so that its integral is 1). If you don't, numpy will automatically adjust the range for you. In this case, your pdf has a value of 2/3 for x < 0.5 and 4/3 for x > 0.5, with a support of [0, 1) (the support is the interval over which it's nonzero):
import scipy.stats as spst
import numpy as np
import matplotlib.pyplot as plt

def pdf_shape(x, k):
    if x < 0.5:
        return 2/3.
    elif 0.5 <= x and x < 1:
        return 4/3.
    else:
        return 0.

class custom_pdf(spst.rv_continuous):
    def _pdf(self, x, k):
        return pdf_shape(x, k)

instance = custom_pdf(a=0, b=1)
samps = instance.rvs(k=1, size=10000)
plt.hist(samps, bins=20)
plt.show()
from random import random
tmp = random()
if tmp < 0.5:
    tmp = random()
is a pretty easy way to do it.
Ehh, I guess this is actually 3x as likely... that's what I get for sleeping through that class, I guess.
from random import random, uniform

def rand1():
    tmp = random()
    if tmp < 0.5:
        tmp = random()
    return tmp

def rand2():
    tmp = uniform(0, 1.5)
    return tmp if tmp <= 1.0 else tmp - 0.5

sample1 = []
sample2 = []
for i in range(10000):
    sample1.append(rand1() >= 0.5)
    sample2.append(rand2() >= 0.5)
print(sample1.count(True))  # ~75%
print(sample2.count(True))  # ~66% <- desired, I believe :)
First off, numpy.random.rand(1) doesn't return a value in the [0,1) range (half-open: it includes zero but not one); it returns an array of size one containing values in that range, and the argument passed in has nothing to do with the upper end of the range.
The function you're probably after is the uniform distribution one, numpy.random.uniform() since this will allow an arbitrary upper range.
And, to make the upper half twice as likely is a relatively simple matter.
Take, for example, a random number generator r(n) which returns a uniformly distributed integer in the range [0,n). All you need to do is adjust the values to change the distribution:
x = r(3)  # 0, 1 or 2, each with probability 1/3
if x == 2:
    x = 1  # Now either 0 (probability 1/3) or 1 (probability 2/3)
Now the chances of getting zero are 1/3 while the chances of getting one are 2/3, basically what you're trying to achieve with your floating point values.
So I would simply get a random number in the range [0,1.5), then subtract 0.5 if it's greater than or equal to one.
x = numpy.random.uniform(high=1.5)
if x >= 1: x -= 0.5
Since the original distribution should be even across the [0,1.5) range, the subtraction should make [0.5,1.0) twice as likely (and [1.0,1.5) impossible), while keeping the distribution even within each section ([0,0.5) and [0.5,1)):
[0.0,0.5) [0.5,1.0) [1.0,1.5) before
<---------><---------><--------->
[0.0,0.5) [0.5,1.0) [0.5,1.0) after
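A quick empirical check of the folding approach (a sketch; the sample size is arbitrary):
import numpy as np

samples = np.random.uniform(high=1.5, size=1_000_000)
samples = np.where(samples >= 1.0, samples - 0.5, samples)  # fold [1.0,1.5) down onto [0.5,1.0)
print((samples > 0.5).mean())  # should be close to 2/3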
You could take a "mixture model" approach where you split the process into two steps: first, decide whether to take option A or B, where B is twice as likely as A; then, if you chose A, return a random number between 0.0 and 0.5, else if you chose B, return one between 0.5 and 1.0.
In the example below, randint randomly returns 0, 1, or 2, so the else branch is twice as likely as the if branch.
m = numpy.random.randint(3)
if m == 0:
    x = numpy.random.uniform(0.0, 0.5)
else:
    x = numpy.random.uniform(0.5, 1.0)
This is a little more expensive (two random draws instead of one) but it can generalize to more complicated distributions in a fairly straightforward way.
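A sketch of that generalization as a mixture of uniform components; the intervals and weights here are illustrative, not from the answer:
import numpy as np

edges = [(0.0, 0.5), (0.5, 1.0)]   # component intervals
weights = np.array([1.0, 2.0])     # relative probabilities: the second interval is twice as likely
weights = weights / weights.sum()

idx = np.random.choice(len(edges), p=weights)  # step 1: pick a component by its weight
low, high = edges[idx]
x = np.random.uniform(low, high)               # step 2: sample uniformly within that component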
If you want a smoother, non-uniform bias, you can square the output of the random function and subtract the square from 1 to make x > 0.5 more probable instead of x < 0.5 (note this gives x > 0.5 with probability sqrt(0.5) ≈ 0.71, not exactly 2/3):
x = 1 - numpy.random.rand(1)**2

Least squares regression on 2d array

The numpy.linalg.lstsq(a,b) function accepts an array a with size nx2 and a 1-dimensional array b which is the dependent variable.
How would I go about doing a least squares regression where the data points are presented as a 2d array generated from an image file? The array looks something like this:
[[0, 0, 0, 0, e]
[0, 0, c, d, 0]
[b, a, f, 0, 0]]
where a, b, c, d, e, f are positive integer values.
I want to fit a line to these points. Can I use np.linalg.lstsq (and if so, how) or is there something which may make more sense (and if so, how)?
Thanks very much.
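One way to read the question (a sketch, not an authoritative answer): treat every nonzero pixel as a data point, with the column index as x, the row index as y, and optionally the pixel value as a weight, then fit with np.polyfit or np.linalg.lstsq. The example array and the use of the values as weights are assumptions:
import numpy as np

img = np.array([[0, 0, 0, 0, 5],
                [0, 0, 3, 4, 0],
                [2, 1, 6, 0, 0]], dtype=float)

rows, cols = np.nonzero(img)   # coordinates of the nonzero pixels
weights = img[rows, cols]      # pixel values, used here as fit weights

# Fit y (row) as a linear function of x (column), weighting by pixel value.
slope, intercept = np.polyfit(cols, rows, deg=1, w=weights)

# Equivalent unweighted fit with np.linalg.lstsq:
A = np.column_stack([cols, np.ones_like(cols)])
coef, residuals, rank, sv = np.linalg.lstsq(A, rows, rcond=None)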
A while ago I saw a similar Python program:
# Prac 2 for Monte Carlo methods in a nutshell
# Richard Chopping, ANU RSES and Geoscience Australia, October 2012
# Usage
# python prac_q2.py [number of bootstrap runs]
# e.g. python prac_q2.py 10000
# would execute this and perform 10 000 bootstrap runs.
# Default is 100 runs.
# sys cause I need to access the arguments the script was called with
import sys
# math cause it's handy for scalar maths
import math
# time cause I want to benchmark how long things take
import time
# numpy cause it gives us awesome array / matrix manipulation stuff
import numpy
# scipy just in case
import scipy
# scipy.stats to make life simpler statistically speaking
import scipy.stats as stats
def main():
print "Prac 2 solution: no graphs"
true_model = numpy.array([17.0, 10.0, 1.96])
# Here's a nifty way to write out numpy arrays.
# Unlike the data table in the prac handouts, I've got time first
# and height second.
# You can mix up the order but you need to change a lot of calculations
# to deal with this change.
data = numpy.array([[1.0, 26.94],
[2.0, 33.45],
[3.0, 40.72],
[4.0, 42.32],
[5.0, 44.30],
[6.0, 47.19],
[7.0, 43.33],
[8.0, 40.13]])
# Perform the least squares regression to find the best fit solution
best_fit = regression(data)
# Nifty way to get out elements from an array
m1,m2,m3 = best_fit
print "Best fit solution:"
print "m1 is", m1, "and m2 is", m2, "and m3 is", m3
# Calculate residuals from the best fit solution
best_fit_resid = residuals(data, best_fit)
print "The residuals from the best fit solution are:"
print best_fit_resid
print ""
# Bootstrap part
# --------------
# Number of bootstraps to run. 100 is a minimum and our default number.
num_booties = 100
# If we have an argument to the python script, use this as the
# number of bootstrap runs
if len(sys.argv) > 1:
num_booties = int(sys.argv[1])
# preallocate an array to store the results.
ensemble = numpy.zeros((num_booties, 3))
print "Starting up the bootstrap routine"
# How to do timing within a Python script - here I start a stopwatch running
start_time = time.clock()
for index in range(num_booties):
# Print every 10 % so we know where we're up to in long runs
if print_progress(index, num_booties):
percent = (float(index) / float(num_booties)) * 100.0
print "Have completed", percent, "percent"
# For each iteration of the bootstrap algorithm,
# first calculate mixed up residuals...
resamp_resid = resamp_with_replace(best_fit_resid)
# ... then generate new data...
new_data = calc_new_data(data, best_fit, resamp_resid)
# ... then perform another regression to generate a new set of m1, m2, m3
bootstrap_model = regression(new_data)
ensemble[index] = (bootstrap_model[0], bootstrap_model[1], bootstrap_model[2])
# Done with the loop
# Calculate the time the run took - what's the current time, minus when we started.
loop_time = time.clock() - start_time
print ""
print "Ensemble calculated based on", num_booties, "bootstrap runs."
print "Bootstrap runs took", loop_time, "seconds."
print ""
# Stats on the ensemble time
# --------------------------
B = num_booties
# Mean is pretty simple, 1.0/B to force it to use floating points
# This gives us an array of the means of the 3 model parameters
mean = 1.0/B * numpy.sum(ensemble, axis=0)
print "Mean is ([m1 m2 m3]):", mean
# Variance
var2 = 1.0/B * numpy.sum(((ensemble - mean)**2), axis=0)
print "Variance squared is ([m1 m2 m3]):", var2
# Bias
bias = mean - best_fit
print "Bias is ([m1 m2 m3]):", bias
bias_corr = best_fit - bias
print "Bias corrected solution is ([m1 m2 m3]):", bias_corr
print "The original solution was ([m1 m2 m3]):", best_fit
print "And the true solution is ([m1 m2 m3]):", true_model
print ""
# Confidence intervals
# ---------------------
# Sort column 1 to calculate confidence intervals
# Sorting in numpy sucks.
# Need to declare what the fields are (so it knows how to sort it)
# f8 => numpy's floating point number
# Then need to delcare what we sort it on
# Here we sort on the first column, then the second, then the third.
# f0,f1,f2 field 0, then field 1, then field 2.
# Then we make sure we sort it by column (axis = 0)
# Then we take a view of that data as a float64 so it works properly
sorted_m1 = numpy.sort(ensemble.view('f8,f8,f8'), order=['f0','f1','f2'], axis=0).view(numpy.float64)
# stats is my name for scipy.stats
# This has a wonderful function that calculates percentiles, including performing interpolation
# (important for low numbers of bootstrap runs)
m1_perc0p5 = stats.scoreatpercentile(sorted_m1,0.5)[0]
m1_perc2p5 = stats.scoreatpercentile(sorted_m1,2.5)[0]
m1_perc16 = stats.scoreatpercentile(sorted_m1,16)[0]
m1_perc84 = stats.scoreatpercentile(sorted_m1,84)[0]
m1_perc97p5 = stats.scoreatpercentile(sorted_m1,97.5)[0]
m1_perc99p5 = stats.scoreatpercentile(sorted_m1,99.5)[0]
print "m1 68% confidence interval is from", m1_perc16, "to", m1_perc84
print "m1 95% confidence interval is from", m1_perc2p5, "to", m1_perc97p5
print "m1 99% confidence interval is from", m1_perc0p5, "to", m1_perc99p5
print ""
# Now column 2, sort it...
sorted_m2 = numpy.sort(ensemble.view('f8,f8,f8'), order=['f1','f0','f2'], axis=0).view(numpy.float64)
# ... and do stats.
m2_perc0p5 = stats.scoreatpercentile(sorted_m2,0.5)[1]
m2_perc2p5 = stats.scoreatpercentile(sorted_m2,2.5)[1]
m2_perc16 = stats.scoreatpercentile(sorted_m2,16)[1]
m2_perc84 = stats.scoreatpercentile(sorted_m2,84)[1]
m2_perc97p5 = stats.scoreatpercentile(sorted_m2,97.5)[1]
m2_perc99p5 = stats.scoreatpercentile(sorted_m2,99.5)[1]
print "m2 68% confidence interval is from", m2_perc16, "to", m2_perc84
print "m2 95% confidence interval is from", m2_perc2p5, "to", m2_perc97p5
print "m2 99% confidence interval is from", m2_perc0p5, "to", m2_perc99p5
print ""
# and finally column 3, again, sort it..
sorted_m3 = numpy.sort(ensemble.view('f8,f8,f8'), order=['f2','f1','f0'], axis=0).view(numpy.float64)
# ... and do stats.
m3_perc0p5 = stats.scoreatpercentile(sorted_m3,0.5)[1]
m3_perc2p5 = stats.scoreatpercentile(sorted_m3,2.5)[1]
m3_perc16 = stats.scoreatpercentile(sorted_m3,16)[1]
m3_perc84 = stats.scoreatpercentile(sorted_m3,84)[1]
m3_perc97p5 = stats.scoreatpercentile(sorted_m3,97.5)[1]
m3_perc99p5 = stats.scoreatpercentile(sorted_m3,99.5)[1]
print "m3 68% confidence interval is from", m3_perc16, "to", m3_perc84
print "m3 95% confidence interval is from", m3_perc2p5, "to", m3_perc97p5
print "m3 99% confidence interval is from", m3_perc0p5, "to", m3_perc99p5
print ""
# End of the main function
#
#
# Helper functions go down here
#
#
# regression
# This takes a 2D numpy array and performs a least-squares regression
# using the formula on the practical sheet, page 3
# Stored in the top are the real values
# Returns an array of m1, m2 and m3.
def regression(data):
    # While testing, just return the real values
    # real_values = numpy.array([17.0, 10.0, 1.96])
    # Creating the G matrix
    # ---------------------
    # Because I'm using numpy arrays here, we need
    # to learn some notation.
    # data[:,0] is the FIRST column
    # Length of this = number of time samples in data
    N = len(data[:,0])
    # numpy.sum adds up all data in a row or column.
    # Axis = 0 implies add up each column. [0] at end
    # returns the sum of the first column
    # This is the sum of Ti for i = 1..N
    sum_Ti = numpy.sum(data, axis=0)[0]
    # numpy.power takes each element of an array and raises them to a given power
    # In this one call we also take the sum of the columns (as above) after they have
    # been squared, and then just take the t column
    sum_Ti2 = numpy.sum(numpy.power(data, 2), axis=0)[0]
    # Now we need to get the cube of Ti, then sum that result
    sum_Ti3 = numpy.sum(numpy.power(data, 3), axis=0)[0]
    # Finally we need the quartic of Ti, then sum that result
    sum_Ti4 = numpy.sum(numpy.power(data, 4), axis=0)[0]
    # Now we can construct the G matrix
    G = numpy.array([[N, sum_Ti, -0.5 * sum_Ti2],
                     [sum_Ti, sum_Ti2, -0.5 * sum_Ti3],
                     [-0.5 * sum_Ti2, -0.5 * sum_Ti3, 0.25 * sum_Ti4]])
    # We also need to take the inverse of the G matrix
    G_inv = numpy.linalg.inv(G)
    # Creating the d matrix
    # ---------------------
    # Hello numpy.sum, my old friend...
    sum_Yi = numpy.sum(data, axis=0)[1]
    # numpy.prod multiplies the values in an array.
    # We need to do the products along axis 1 (i.e. row by row)
    # Then sum all the elements
    sum_TiYi = numpy.sum(numpy.prod(data, axis=1))
    # The final element we need is a bit tricky.
    # We need the product as above
    TiYi = numpy.prod(data, axis=1)
    # Then we get tricky. * works how we need it here,
    # remember that the Ti column is referenced by data[:,0] as above
    Ti2Yi = TiYi * data[:,0]
    # Then we sum
    sum_Ti2Yi = numpy.sum(Ti2Yi)
    # With all the elements, we make the d matrix
    d = numpy.array([sum_Yi,
                     sum_TiYi,
                     -0.5 * sum_Ti2Yi])
    # Do the linear algebra stuff
    # To multiply numpy arrays in a matrix style,
    # we need to use numpy.dot()
    # Not the most useful notation, but there you go.
    # To help out the Matlab users: http://www.scipy.org/NumPy_for_Matlab_Users
    result = G_inv.dot(d)
    # Return this result
    return result
# residuals:
# Takes in a data array, and an array of best fit parameters
# calculates the difference between the observed and predicted data
# and returns an array
def residuals(data, best_fit):
    # Extract ti from the data array
    ti = data[:,0]
    # We also need an array of the square of ti
    ti2 = numpy.power(ti, 2)
    # Extract yi
    yi = data[:,1]
    # Calculate residual (data minus predicted)
    result = yi - best_fit[0] - (best_fit[1] * ti) + (0.5 * best_fit[2] * ti2)
    return result
# resamp_with_replace:
# Perform a dataset resampling with replacement on parameter set.
# Uses numpy.random to generate the random numbers to pick the indices to look up.
# So for item 0, ... N, we look up a random index from the set and put that in
# our resampled data.
def resamp_with_replace(set):
    # How many things do we need to do this for?
    N = len(set)
    # Preallocate our result array
    result = numpy.zeros(N)
    # Generate N random integers between 0 and N-1 inclusive
    # (numpy.random.randint excludes its upper bound, so the bound is N)
    indices = numpy.random.randint(0, N, N)
    # For i from the set 0...N-1 (that's what the range() command gives us),
    # our result for that i is given by the index we randomly generated above
    for i in range(N):
        result[i] = set[indices[i]]
    return result
# calc_new_data:
# Given a set of resampled residuals, use the model parameters to derive
# new data. This is used for bootstrapping the residuals.
# true_data is a numpy array of rows of ti, yi. We only need the ti column though.
# model is an array of three parameters, corresponding to m1, m2, m3.
# residuals is an array of our residuals
def calc_new_data(true_data, model, residuals):
    # Extract the time information from the new data array
    ti = true_data[:,0]
    # Calculate new data using array maths
    # This goes through and does the sums etc for each element of the array
    # Nice and compact way to represent it.
    y_new = residuals + model[0] + (model[1] * ti) - (0.5 * model[2] * ti**2)
    # Our result needs to be an array of ti, y_new, so we need to combine them using
    # the numpy.column_stack routine
    result = numpy.column_stack((ti, y_new))
    # Return this combined array
    return result
# print_progress:
# Just a quick thing that returns true if we want to print for this index
# and false otherwise
def print_progress(index, total):
    index = float(index)
    total = float(total)
    result = False
    # Floating point maths is irritating
    # We want to print at the start, every 10%, and at the end.
    # This works up to index = 100,000
    # Would also be lovely if Python had a switch statement
    if (((index / total) * 100) <= 0.00001):
        result = True
    elif (((index / total) * 100) >= 9.99999) and (((index / total) * 100) <= 10.00001):
        result = True
    elif (((index / total) * 100) >= 19.99999) and (((index / total) * 100) <= 20.00001):
        result = True
    elif (((index / total) * 100) >= 29.99999) and (((index / total) * 100) <= 30.00001):
        result = True
    elif (((index / total) * 100) >= 39.99999) and (((index / total) * 100) <= 40.00001):
        result = True
    elif (((index / total) * 100) >= 49.99999) and (((index / total) * 100) <= 50.00001):
        result = True
    elif (((index / total) * 100) >= 59.99999) and (((index / total) * 100) <= 60.00001):
        result = True
    elif (((index / total) * 100) >= 69.99999) and (((index / total) * 100) <= 70.00001):
        result = True
    elif (((index / total) * 100) >= 79.99999) and (((index / total) * 100) <= 80.00001):
        result = True
    elif (((index / total) * 100) >= 89.99999) and (((index / total) * 100) <= 90.00001):
        result = True
    elif ((((index + 1) / total) * 100) > 99.99999):
        result = True
    else:
        result = False
    return result
#
#
# End of helper functions
#
#
# So we can easily execute our script
if __name__ == "__main__":
    main()
I guess you can take a look; here is the link to the complete information.
Use sklearn instead of numpy (sklearn is built on top of numpy and is much better suited to this kind of calculation):
from sklearn import linear_model

clf = linear_model.LinearRegression()
clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
# LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
clf.coef_
# array([ 0.5,  0.5])
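To connect this back to the original question, the 2-D image array still has to be converted into (x, y) samples before calling fit; a sketch reusing the illustrative array from the earlier sketch (the weighting by pixel value is an assumption):
import numpy as np
from sklearn import linear_model

img = np.array([[0, 0, 0, 0, 5],
                [0, 0, 3, 4, 0],
                [2, 1, 6, 0, 0]], dtype=float)

rows, cols = np.nonzero(img)
reg = linear_model.LinearRegression()
# Fit the row index as a linear function of the column index, weighting by pixel value.
reg.fit(cols.reshape(-1, 1), rows, sample_weight=img[rows, cols])
print(reg.coef_, reg.intercept_)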
