Appending values to sub-arrays of an array in Python

The goal here is to construct the one-particle distribution function of a system evolving under Brownian dynamics, where at every time step each particle is displaced by a random number drawn from a Gaussian distribution. To construct this quantity, I am thinking of running several simulations, saving at specific times in each simulation the distance of every particle from the center of the 2D square, and only at the end creating a histogram of all the saved values.
My problem is that in each simulation time starts from zero and advances with a fixed time step, during each of which the particles move randomly, so the saved distances have to be labeled correctly with their corresponding times.
My thought was therefore to create an array with 5 sub-arrays in each row, one for each time at which I want to save the distances of the particles from the center of the square. I am trying to do this with numpy, but with no success: for each simulation, at the specific times, I build an array with all the distances and try to append it with numpy.append to the corresponding sub-array, but this does not work correctly. As I understand it, the problem is that I don't know how to index the sub-arrays properly (and across all simulations).
Beyond that, I suspect the approach itself is not the best: either I have to abandon numpy and figure out how to index the array with two indices, or find a way to use numpy more effectively.
So, to the point: the general question is how to add/append values to specific sub-arrays of an array (whether pre-constructed with numpy or treated as a plain list).
Alternatively, a more efficient way of creating the one-particle distribution function for a Brownian-motion problem would be really helpful.
I am adding the relevant code below. Thank you all in advance.
Code:
import random
import math
import matplotlib.pyplot as plt
import numpy as np
# def dump(particles, step, n):
#     fileoutput = open('coord.txt', 'a')
#     fileoutput.write("ITEM: TIMESTEP \n")
#     fileoutput.write("%i \n" % step)
#     fileoutput.write("ITEM: NUMBER OF ATOMS \n")
#     fileoutput.write("%i \n" % n)
#     fileoutput.write("ITEM: BOX BOUNDS \n")
#     fileoutput.write("%e %e xlo xhi \n" % (0.0, 100))
#     fileoutput.write("%e %e xlo xhi \n" % (0.0, 100))
#     fileoutput.write("%e %e xlo xhi \n" % (-0.25, 0.25))
#     fileoutput.write("ITEM: ATOMS id type x y z \n")
#     i = 0
#     while i < n:
#         x = particles[i][0]
#         y = particles[i][1]
#         # fileoutput.write("%i %i %f %f %f \n" % (i, 1, x*1e10, y*1e10, z*1e10))
#         fileoutput.write("%i %i %f %f %f \n" % (i, 1, x, y, 0))
#         i += 1
#     fileoutput.close()
num_sims = 2
N = 49
L = 10
meanz = 0
varz = 1
sigma = 1
# tau = sigma**2*ksi/(kT)
# Starting time
t_0 = 0
# Time increments
dt = 10**(-4) # dt/tau
# Ending time
T = 10**2 # T/tau
# Produce random particles and avoid overlap:
particles = np.full((N, 2), L/2)
times = np.arange(t_0, T, dt)
check = 0
distances = np.empty([50*num_sims, 5])
for sim in range(0, num_sims):
    step = 0
    t_index = 0
    for t in times:
        r = []
        for i in range(0, N):
            z = np.random.normal(meanz, varz)
            particles[i][0] = particles[i][0] + ((2*dt*sigma**2)**(1/2))*z
            z = random.gauss(meanz, varz)
            particles[i][1] = particles[i][1] + ((2*dt*sigma**2)**(1/2))*z
        if (t%(2*(10**5)*dt) == 0):
            for j in range(0, N):
                rj = ((particles[j][0]-L/2)**2 + (particles[j][1]-L/2)**2)**(1/2)
                r.append(rj)
            distances[t_index] = np.append(distances[t_index], r)
            t_index += 1
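For reference, here is a minimal sketch of one way to do the bookkeeping with a plain list of lists, one inner list per save time, shared across all simulations; num_save_times and save_every are placeholder names (save_every is shrunk from the question's 2*10**5 purely to keep the sketch quick), and a histogram of each inner list at the end gives the distribution at that time:
import numpy as np

num_sims = 2
N = 49
L = 10
sigma = 1
dt = 1e-4
num_save_times = 5      # hypothetical: number of save times per simulation
save_every = 2000       # hypothetical: steps between saves, reduced for speed

distances = [[] for _ in range(num_save_times)]   # one inner list per save time

for sim in range(num_sims):
    particles = np.full((N, 2), L / 2.0)
    t_index = 0
    for step in range(1, num_save_times * save_every + 1):
        particles += np.sqrt(2 * dt * sigma**2) * np.random.normal(size=(N, 2))
        if step % save_every == 0:
            r = np.hypot(particles[:, 0] - L / 2, particles[:, 1] - L / 2)
            distances[t_index].extend(r)          # append to the sub-list for this save time
            t_index += 1

for t_index, d in enumerate(distances):           # one histogram per save time, pooled over sims
    counts, edges = np.histogram(d, bins=50)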

Related

How to include the time points we need in a list generated by a stochastic algorithm

I need to implement a stochastic algorithm that outputs the times and the states of a dynamic system at the corresponding time points. Randomness enters in defining the time points, by drawing a random number from the uniform distribution. What I want is the state at the time points 0, 1, 2, ..., 24. Given the randomness of the algorithm, the time points 1, 2, 3, ..., 24 are not necessarily hit. We may include rounding to two decimal places, but even with rounding I cannot find/insert all of these time points. The question is: how can the code be changed so that the time points 1, 2, ..., 24 are included in the list of time points while preserving the stochasticity of the algorithm? Thanks for any suggestion.
import numpy as np
import random
import math as m
np.random.seed(seed = 5)
# Stoichiometric matrix
S = np.array([(-1, 0), (1, -1)])
# Reaction parameters
ke = 0.3; ka = 0.5
k = [ke, ka]
# Initial state vector at time t0
X1 = [200]; X2 = [0]
# We will update it for each time.
X = [X1, X2]
# Initial time is t0 = 0, which we will update.
t = [0]
# End time
tfinal = 24
# The propensity vector R concerning the last/updated value of time
def ReactionRates(k, X1, X2):
    R = np.zeros((2,1))
    R[0] = k[1] * X1[-1]
    R[1] = k[0] * X2[-1]
    return R
# We implement the Gillespie (SSA) algorithm
while True:
    # Reaction propensities/rates
    R = ReactionRates(k, X1, X2)
    propensities = R
    propensities_sum = sum(R)[0]
    if propensities_sum == 0:
        break
    # we include randomness
    u1 = np.random.uniform(0, 1)
    delta_t = (1/propensities_sum) * m.log(1/u1)
    if t[-1] + delta_t > tfinal:
        break
    t.append(t[-1] + delta_t)
    b = [0, R[0], R[1]]
    u2 = np.random.uniform(0, 1)
    # Choose j
    lambda_u2 = propensities_sum * u2
    for j in range(len(b)):
        if sum(b[0:j-1+1]) < lambda_u2 <= sum(b[1:j+1]):
            break  # out of for j
    # make j zero based
    j -= 1
    # We update the state vector
    X1.append(X1[-1] + S.T[j][0])
    X2.append(X2[-1] + S.T[j][1])
# round t values
t = [round(tt,2) for tt in t]
print("The time steps:", t)
print("The second component of the state vector:", X2)
After playing with your model, I conclude that interpolation works fine.
Basically, just append the following lines to your code:
ts = np.arange(tfinal+1)
xs = np.interp(ts, t, X2)
and if you have matplotlib installed, you can visualize using
import matplotlib.pyplot as plt
plt.plot(t, X2)
plt.plot(ts, xs)
plt.show()
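Since the SSA trajectory is piecewise constant between reaction events, another option (not in the original answer) is a zero-order hold, reading off the state that was in force at each integer time instead of interpolating linearly; a minimal sketch, using t and X2 as produced by the loop above:
ts = np.arange(tfinal + 1)
# index of the last event at or before each requested time (t is sorted)
idx = np.searchsorted(t, ts, side='right') - 1
xs_hold = [X2[i] for i in idx]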

Population Monte Carlo implementation

I am trying to implement the Population Monte Carlo algorithm as described in this paper (see page 78 Fig.3) for a simple model (see function model()) with one parameter using Python. Unfortunately, the algorithm doesn't work and I can't figure out what's wrong. See my implementation below. The actual function is called abc(). All other functions can be seen as helper-functions and seem to work fine.
To check whether the algorithm works, I first generate observed data with the only parameter of the model set to param = 8. Therefore, the posterior resulting from the ABC algorithm should be centered around 8. This is not the case and I'm wondering why.
I would appreciate any help or comments.
# imports
from math import exp
from math import log
from math import sqrt
import numpy as np
import random
from scipy.stats import norm
# globals
N = 300 # sample size
N_PARTICLE = 300 # number of particles
ITERS = 5 # number of decreasing thresholds
M = 10 # number of words to remember
MEAN = 7 # prior mean of parameter
SD = 2 # prior sd of parameter
def model(param):
    recall_prob_all = 1/(1 + np.exp(M - param))
    recall_prob_one_item = np.exp(np.log(recall_prob_all) / float(M))
    return sum([1 if random.random() < recall_prob_one_item else 0 for item in range(M)])
## example
print "Output of model function: \n" + str(model(10)) + "\n"
# generate data from model
def generate(param):
    out = np.empty(N)
    for i in range(N):
        out[i] = model(param)
    return out
## example
print "Output of generate function: \n" + str(generate(10)) + "\n"
# distance function (sum of squared error)
def distance(obsData, simData):
    out = 0.0
    for i in range(len(obsData)):
        out += (obsData[i] - simData[i]) * (obsData[i] - simData[i])
    return out
## example
print "Output of distance function: \n" + str(distance([1,2,3],[4,5,6])) + "\n"
# sample new particles based on weights
def sample(particles, weights):
    return np.random.choice(particles, 1, p=weights)
## example
print "Output of sample function: \n" + str(sample([1,2,3],[0.1,0.1,0.8])) + "\n"
# perturbance function
def perturb(variance):
    return np.random.normal(0, sqrt(variance), 1)[0]
## example
print "Output of perturb function: \n" + str(perturb(1)) + "\n"
# compute new weight
def computeWeight(prevWeights, prevParticles, prevVariance, currentParticle):
    denom = 0.0
    proposal = norm(currentParticle, sqrt(prevVariance))
    prior = norm(MEAN, SD)
    for i in range(len(prevParticles)):
        denom += prevWeights[i] * proposal.pdf(prevParticles[i])
    return prior.pdf(currentParticle)/denom
## example
prevWeights = [0.2,0.3,0.5]
prevParticles = [1,2,3]
prevVariance = 1
currentParticle = 2.5
print "Output of computeWeight function: \n" + str(computeWeight(prevWeights,prevParticles,prevVariance,currentParticle)) + "\n"
# normalize weights
def normalize(weights):
    return weights/np.sum(weights)
## example
print "Output of normalize function: \n" + str(normalize([3.,5.,9.])) + "\n"
# sampling from prior distribution
def rprior():
    return np.random.normal(MEAN, SD, 1)[0]
## example
print "Output of rprior function: \n" + str(rprior()) + "\n"
# ABC using Population Monte Carlo sampling
def abc(obsData, eps):
    draw = 0
    Distance = 1e9
    variance = np.empty(ITERS)
    simData = np.empty(N)
    particles = np.empty([ITERS, N_PARTICLE])
    weights = np.empty([ITERS, N_PARTICLE])
    for t in range(ITERS):
        if t == 0:
            for i in range(N_PARTICLE):
                while (Distance > eps[t]):
                    draw = rprior()
                    simData = generate(draw)
                    Distance = distance(obsData, simData)
                Distance = 1e9
                particles[t][i] = draw
                weights[t][i] = 1./N_PARTICLE
            variance[t] = 2 * np.var(particles[t])
            continue
        for i in range(N_PARTICLE):
            while (Distance > eps[t]):
                draw = sample(particles[t-1], weights[t-1])
                draw += perturb(variance[t-1])
                simData = generate(draw)
                Distance = distance(obsData, simData)
            Distance = 1e9
            particles[t][i] = draw
            weights[t][i] = computeWeight(weights[t-1], particles[t-1], variance[t-1], particles[t][i])
        weights[t] = normalize(weights[t])
        variance[t] = 2 * np.var(particles[t])
    return particles[ITERS-1]
true_param = 9
obsData = generate(true_param)
eps = [15000,10000,8000,6000,3000]
posterior = abc(obsData,eps)
#print posterior
I stumbled upon this question as I was looking for pythonic implementations of PMC algorithms, since, quite coincidentally, I'm currently in the process of applying the techniques in this exact paper to my own research.
Can you post the results you're getting? My guess is that 1) you're using a poor choice of distance function (and/or similarity thresholds), or 2) you're not using enough particles. I may be wrong here (I'm not very well-versed in sample statistics), but your distance function implicitly suggests to me that the ordering of your random draws matters. I'd have to think about this more to determine whether it actually has any effect on the convergence properties (it may not), but why don't you simply use the mean or median as your sample statistic?
I ran your code with 1000 particles and a true parameter value of 8, while using the absolute difference between sample means as my distance function, for three iterations with epsilons of [0.5, 0.3, 0.1]; the peak of my estimated posterior distribution seems to be approaching 8 just like it should on each iteration, alongside a reduction in the population variance. Note that there is still a noticeable rightward bias, but this is because of the asymmetry of your model (parameter values of 8 or less can never result in more than 8 observed successes, while all parameter values greater than 8 can, leading to a rightward skewness in the distribution).
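For concreteness, a minimal sketch of the distance function described above (absolute difference of sample means, replacing the sum-of-squared-errors version in the question; the thresholds then have to live on the same scale, e.g. eps = [0.5, 0.3, 0.1]):
import numpy as np

def distance(obsData, simData):
    # compare a summary statistic of the two samples instead of the ordered raw draws
    return abs(np.mean(obsData) - np.mean(simData))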

Signal generator specific noise modulation

I am simulating a signal (see below) and I later add noise to it.
But I don't want to add noise to every single sample point. So is there a way to set a defined amount of my noise array to zero, with the zeros positioned randomly? nse_kappa and noise_mu are the parameters of the noise distribution.
x = np.zeros((n_epochs, times.size))
for i in range(0, n_epochs):
    print("Signal simulation of Epoch %s complete") % (i+1)
    for j in range(0, freqs0.size):
        x[i] = x[i] + (np.sqrt(Rxx0[j]) * np.sin(1 * np.pi * freqs0[j] * times))
    x[i] = x[i] + np.random.normal(loc=noise_mu, scale=nse_kappa, size=times.size)
Where you add the noise, check if it should be added.
import random
for x in range(10):
    if random.uniform(0,1) > 0.5:
        print "Add noise"
    else:
        print "Do nothing"
Use random.sample to generate non-repeating random integers as indices, and use them to set those array values to zero.
import random
# your code
N = 100 # number of indices set to zero
ind = random.sample(range(len(x)),N)
x[ind] = 0.
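Putting the two suggestions together, one possible sketch is to draw the full noise vector and zero a random subset of its entries before adding it; keep_frac is a made-up parameter for the fraction of samples that actually receive noise, and noise_mu/nse_kappa are the question's parameters:
import numpy as np

def sparse_noise(n_samples, noise_mu, nse_kappa, keep_frac=0.5):
    # Gaussian noise in which a random (1 - keep_frac) fraction of entries is set to zero
    noise = np.random.normal(loc=noise_mu, scale=nse_kappa, size=n_samples)
    mask = np.random.rand(n_samples) < keep_frac   # True where the noise is kept
    noise[~mask] = 0.0
    return noise

# inside the epoch loop of the question this would be used as:
# x[i] = x[i] + sparse_noise(times.size, noise_mu, nse_kappa, keep_frac=0.5)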

Least squares regression on 2d array

The numpy.linalg.lstsq(a, b) function accepts a coefficient array a (for a straight-line fit, of shape n x 2) and a 1-dimensional array b containing the dependent variable.
How would I go about doing a least squares regression where the data points are presented as a 2d array generated from an image file? The array looks something like this:
[[0, 0, 0, 0, e]
[0, 0, c, d, 0]
[b, a, f, 0, 0]]
where a, b, c, d, e, f are positive integer values.
I want to fit a line to these points. Can I use np.linalg.lstsq (and if so, how) or is there something which may make more sense (and if so, how)?
Thanks very much.
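As an aside, a minimal sketch of one direct route, assuming the points to be fitted are simply the nonzero pixels of the array (with their values usable as optional weights); it extracts the row/column coordinates with np.nonzero and hands them to np.linalg.lstsq:
import numpy as np

img = np.array([[0, 0, 0, 0, 5],
                [0, 0, 3, 4, 0],
                [2, 1, 6, 0, 0]])

rows, cols = np.nonzero(img)     # pixel coordinates of the points
weights = img[rows, cols]        # optional: weight each point by its pixel value

# fit rows ~ slope * cols + intercept, weighted by sqrt(pixel value)
A = np.column_stack([cols, np.ones_like(cols)])
w = np.sqrt(weights)
coeffs = np.linalg.lstsq(A * w[:, None], rows * w, rcond=None)[0]
slope, intercept = coeffs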
A while ago I came across a similar Python program, reproduced below:
# Prac 2 for Monte Carlo methods in a nutshell
# Richard Chopping, ANU RSES and Geoscience Australia, October 2012
# Usage
# python prac_q2.py [number of bootstrap runs]
# e.g. python prac_q2.py 10000
# would execute this and perform 10 000 bootstrap runs.
# Default is 100 runs.
# sys cause I need to access the arguments the script was called with
import sys
# math cause it's handy for scalar maths
import math
# time cause I want to benchmark how long things take
import time
# numpy cause it gives us awesome array / matrix manipulation stuff
import numpy
# scipy just in case
import scipy
# scipy.stats to make life simpler statistically speaking
import scipy.stats as stats
def main():
    print "Prac 2 solution: no graphs"
    true_model = numpy.array([17.0, 10.0, 1.96])
    # Here's a nifty way to write out numpy arrays.
    # Unlike the data table in the prac handouts, I've got time first
    # and height second.
    # You can mix up the order but you need to change a lot of calculations
    # to deal with this change.
    data = numpy.array([[1.0, 26.94],
                        [2.0, 33.45],
                        [3.0, 40.72],
                        [4.0, 42.32],
                        [5.0, 44.30],
                        [6.0, 47.19],
                        [7.0, 43.33],
                        [8.0, 40.13]])
    # Perform the least squares regression to find the best fit solution
    best_fit = regression(data)
    # Nifty way to get out elements from an array
    m1, m2, m3 = best_fit
    print "Best fit solution:"
    print "m1 is", m1, "and m2 is", m2, "and m3 is", m3
    # Calculate residuals from the best fit solution
    best_fit_resid = residuals(data, best_fit)
    print "The residuals from the best fit solution are:"
    print best_fit_resid
    print ""
    # Bootstrap part
    # --------------
    # Number of bootstraps to run. 100 is a minimum and our default number.
    num_booties = 100
    # If we have an argument to the python script, use this as the
    # number of bootstrap runs
    if len(sys.argv) > 1:
        num_booties = int(sys.argv[1])
    # preallocate an array to store the results.
    ensemble = numpy.zeros((num_booties, 3))
    print "Starting up the bootstrap routine"
    # How to do timing within a Python script - here I start a stopwatch running
    start_time = time.clock()
    for index in range(num_booties):
        # Print every 10 % so we know where we're up to in long runs
        if print_progress(index, num_booties):
            percent = (float(index) / float(num_booties)) * 100.0
            print "Have completed", percent, "percent"
        # For each iteration of the bootstrap algorithm,
        # first calculate mixed up residuals...
        resamp_resid = resamp_with_replace(best_fit_resid)
        # ... then generate new data...
        new_data = calc_new_data(data, best_fit, resamp_resid)
        # ... then perform another regression to generate a new set of m1, m2, m3
        bootstrap_model = regression(new_data)
        ensemble[index] = (bootstrap_model[0], bootstrap_model[1], bootstrap_model[2])
    # Done with the loop
    # Calculate the time the run took - what's the current time, minus when we started.
    loop_time = time.clock() - start_time
    print ""
    print "Ensemble calculated based on", num_booties, "bootstrap runs."
    print "Bootstrap runs took", loop_time, "seconds."
    print ""
    # Stats on the ensemble time
    # --------------------------
    B = num_booties
    # Mean is pretty simple, 1.0/B to force it to use floating points
    # This gives us an array of the means of the 3 model parameters
    mean = 1.0/B * numpy.sum(ensemble, axis=0)
    print "Mean is ([m1 m2 m3]):", mean
    # Variance
    var2 = 1.0/B * numpy.sum(((ensemble - mean)**2), axis=0)
    print "Variance squared is ([m1 m2 m3]):", var2
    # Bias
    bias = mean - best_fit
    print "Bias is ([m1 m2 m3]):", bias
    bias_corr = best_fit - bias
    print "Bias corrected solution is ([m1 m2 m3]):", bias_corr
    print "The original solution was ([m1 m2 m3]):", best_fit
    print "And the true solution is ([m1 m2 m3]):", true_model
    print ""
    # Confidence intervals
    # ---------------------
    # Sort column 1 to calculate confidence intervals
    # Sorting in numpy sucks.
    # Need to declare what the fields are (so it knows how to sort it)
    # f8 => numpy's floating point number
    # Then need to declare what we sort it on
    # Here we sort on the first column, then the second, then the third.
    # f0,f1,f2 field 0, then field 1, then field 2.
    # Then we make sure we sort it by column (axis = 0)
    # Then we take a view of that data as a float64 so it works properly
    sorted_m1 = numpy.sort(ensemble.view('f8,f8,f8'), order=['f0','f1','f2'], axis=0).view(numpy.float64)
    # stats is my name for scipy.stats
    # This has a wonderful function that calculates percentiles, including performing interpolation
    # (important for low numbers of bootstrap runs)
    m1_perc0p5 = stats.scoreatpercentile(sorted_m1, 0.5)[0]
    m1_perc2p5 = stats.scoreatpercentile(sorted_m1, 2.5)[0]
    m1_perc16 = stats.scoreatpercentile(sorted_m1, 16)[0]
    m1_perc84 = stats.scoreatpercentile(sorted_m1, 84)[0]
    m1_perc97p5 = stats.scoreatpercentile(sorted_m1, 97.5)[0]
    m1_perc99p5 = stats.scoreatpercentile(sorted_m1, 99.5)[0]
    print "m1 68% confidence interval is from", m1_perc16, "to", m1_perc84
    print "m1 95% confidence interval is from", m1_perc2p5, "to", m1_perc97p5
    print "m1 99% confidence interval is from", m1_perc0p5, "to", m1_perc99p5
    print ""
    # Now column 2, sort it...
    sorted_m2 = numpy.sort(ensemble.view('f8,f8,f8'), order=['f1','f0','f2'], axis=0).view(numpy.float64)
    # ... and do stats.
    m2_perc0p5 = stats.scoreatpercentile(sorted_m2, 0.5)[1]
    m2_perc2p5 = stats.scoreatpercentile(sorted_m2, 2.5)[1]
    m2_perc16 = stats.scoreatpercentile(sorted_m2, 16)[1]
    m2_perc84 = stats.scoreatpercentile(sorted_m2, 84)[1]
    m2_perc97p5 = stats.scoreatpercentile(sorted_m2, 97.5)[1]
    m2_perc99p5 = stats.scoreatpercentile(sorted_m2, 99.5)[1]
    print "m2 68% confidence interval is from", m2_perc16, "to", m2_perc84
    print "m2 95% confidence interval is from", m2_perc2p5, "to", m2_perc97p5
    print "m2 99% confidence interval is from", m2_perc0p5, "to", m2_perc99p5
    print ""
    # and finally column 3, again, sort it..
    sorted_m3 = numpy.sort(ensemble.view('f8,f8,f8'), order=['f2','f1','f0'], axis=0).view(numpy.float64)
    # ... and do stats.
    m3_perc0p5 = stats.scoreatpercentile(sorted_m3, 0.5)[1]
    m3_perc2p5 = stats.scoreatpercentile(sorted_m3, 2.5)[1]
    m3_perc16 = stats.scoreatpercentile(sorted_m3, 16)[1]
    m3_perc84 = stats.scoreatpercentile(sorted_m3, 84)[1]
    m3_perc97p5 = stats.scoreatpercentile(sorted_m3, 97.5)[1]
    m3_perc99p5 = stats.scoreatpercentile(sorted_m3, 99.5)[1]
    print "m3 68% confidence interval is from", m3_perc16, "to", m3_perc84
    print "m3 95% confidence interval is from", m3_perc2p5, "to", m3_perc97p5
    print "m3 99% confidence interval is from", m3_perc0p5, "to", m3_perc99p5
    print ""
# End of the main function
#
#
# Helper functions go down here
#
#
# regression
# This takes a 2D numpy array and performs a least-squares regression
# using the formula on the practical sheet, page 3
# Stored in the top are the real values
# Returns an array of m1, m2 and m3.
def regression(data):
    # While testing, just return the real values
    # real_values = numpy.array([17.0, 10.0, 1.96])
    # Creating the G matrix
    # ---------------------
    # Because I'm using numpy arrays here, we need
    # to learn some notation.
    # data[:,0] is the FIRST column
    # Length of this = number of time samples in data
    N = len(data[:,0])
    # numpy.sum adds up all data in a row or column.
    # Axis = 0 implies add up each column. [0] at end
    # returns the sum of the first column
    # This is the sum of Ti for i = 1..N
    sum_Ti = numpy.sum(data, axis=0)[0]
    # numpy.power takes each element of an array and raises them to a given power
    # In this one call we also take the sum of the columns (as above) after they have
    # been squared, and then just take the t column
    sum_Ti2 = numpy.sum(numpy.power(data, 2), axis=0)[0]
    # Now we need to get the cube of Ti, then sum that result
    sum_Ti3 = numpy.sum(numpy.power(data, 3), axis=0)[0]
    # Finally we need the quartic of Ti, then sum that result
    sum_Ti4 = numpy.sum(numpy.power(data, 4), axis=0)[0]
    # Now we can construct the G matrix
    G = numpy.array([[N, sum_Ti, -0.5 * sum_Ti2],
                     [sum_Ti, sum_Ti2, -0.5 * sum_Ti3],
                     [-0.5 * sum_Ti2, -0.5 * sum_Ti3, 0.25 * sum_Ti4]])
    # We also need to take the inverse of the G matrix
    G_inv = numpy.linalg.inv(G)
    # Creating the d matrix
    # ---------------------
    # Hello numpy.sum, my old friend...
    sum_Yi = numpy.sum(data, axis=0)[1]
    # numpy.prod multiplies the values in an array.
    # We need to do the products along axis 1 (i.e. row by row)
    # Then sum all the elements
    sum_TiYi = numpy.sum(numpy.prod(data, axis=1))
    # The final element we need is a bit tricky.
    # We need the product as above
    TiYi = numpy.prod(data, axis=1)
    # Then we get tricky. * works how we need it here,
    # remember that the Ti column is referenced by data[:,0] as above
    Ti2Yi = TiYi * data[:,0]
    # Then we sum
    sum_Ti2Yi = numpy.sum(Ti2Yi)
    # With all the elements, we make the d matrix
    d = numpy.array([sum_Yi,
                     sum_TiYi,
                     -0.5 * sum_Ti2Yi])
    # Do the linear algebra stuff
    # To multiply numpy arrays in a matrix style,
    # we need to use numpy.dot()
    # Not the most useful notation, but there you go.
    # To help out the Matlab users: http://www.scipy.org/NumPy_for_Matlab_Users
    result = G_inv.dot(d)
    # Return this result
    return result
# residuals:
# Takes in a data array, and an array of best fit paramers
# calculates the difference between the observed and predicted data
# and returns an array
def residuals(data, best_fit):
    # Extract ti from the data array
    ti = data[:,0]
    # We also need an array of the square of ti
    ti2 = numpy.power(ti, 2)
    # Extract yi
    yi = data[:,1]
    # Calculate residual (data minus predicted)
    result = yi - best_fit[0] - (best_fit[1] * ti) + (0.5 * best_fit[2] * ti2)
    return result
# resamp_with_replace:
# Perform a dataset resampling with replacement on parameter set.
# Uses numpy.random to generate the random numbers to pick the indices to look up.
# So for item 0, ... N, we look up a random index from the set and put that in
# our resampled data.
def resamp_with_replace(set):
    # How many things do we need to do this for?
    N = len(set)
    # Preallocate our result array
    result = numpy.zeros(N)
    # Generate N random integers between 0 and N-1
    indices = numpy.random.randint(0, N - 1, N)
    # For i from the set 0...N-1 (that's what the range() command gives us),
    # our result for that i is given by the index we randomly generated above
    for i in range(N):
        result[i] = set[indices[i]]
    return result
# calc_new_data:
# Given a set of resampled residuals, use the model parameters to derive
# new data. This is used for bootstrapping the residuals.
# true_data is a numpy array of rows of ti, yi. We only need the ti column though.
# model is an array of three parameters, corresponding to m1, m2, m3.
# residuals are an array of our resudials
def calc_new_data(true_data, model, residuals):
    # Extract the time information from the new data array
    ti = true_data[:,0]
    # Calculate new data using array maths
    # This goes through and does the sums etc for each element of the array
    # Nice and compact way to represent it.
    y_new = residuals + model[0] + (model[1] * ti) - (0.5 * model[2] * ti**2)
    # Our result needs to be an array of ti, y_new, so we need to combine them using
    # the numpy.column_stack routine
    result = numpy.column_stack((ti, y_new))
    # Return this combined array
    return result
# print_progress:
# Just a quick thing that returns true if we want to print for this index
# and false otherwise
def print_progress(index, total):
    index = float(index)
    total = float(total)
    result = False
    # Floating point maths is irritating
    # We want to print at the start, every 10%, and at the end.
    # This works up to index = 100,000
    # Would also be lovely if Python had a switch statement
    if (((index / total) * 100) <= 0.00001):
        result = True
    elif (((index / total) * 100) >= 9.99999) and (((index / total) * 100) <= 10.00001):
        result = True
    elif (((index / total) * 100) >= 19.99999) and (((index / total) * 100) <= 20.00001):
        result = True
    elif (((index / total) * 100) >= 29.99999) and (((index / total) * 100) <= 30.00001):
        result = True
    elif (((index / total) * 100) >= 39.99999) and (((index / total) * 100) <= 40.00001):
        result = True
    elif (((index / total) * 100) >= 49.99999) and (((index / total) * 100) <= 50.00001):
        result = True
    elif (((index / total) * 100) >= 59.99999) and (((index / total) * 100) <= 60.00001):
        result = True
    elif (((index / total) * 100) >= 69.99999) and (((index / total) * 100) <= 70.00001):
        result = True
    elif (((index / total) * 100) >= 79.99999) and (((index / total) * 100) <= 80.00001):
        result = True
    elif (((index / total) * 100) >= 89.99999) and (((index / total) * 100) <= 90.00001):
        result = True
    elif ((((index+1) / total) * 100) > 99.99999):
        result = True
    else:
        result = False
    return result
#
#
# End of helper functions
#
#
# So we can easily execute our script
if __name__ == "__main__":
    main()
I guess you can take a look; here is the link to the complete information.
Use sklearn instead of plain numpy (sklearn builds on numpy and is much better suited to this kind of calculation):
from sklearn import linear_model
clf = linear_model.LinearRegression()
clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
# LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
clf.coef_
# array([ 0.5,  0.5])

Python Fast Symmetrically Centered Image Downsampling Algorithm

I'm trying to speed up a simple symmetrically centered image downsampling algorithm in Python. I've coded it up using a naive approach as a lower-bound benchmark, but I'd like to get it to run significantly faster.
For simplicity's sake, my image is a circle at a resolution of 4608x4608 (I'll be working with resolutions of this scale), which I would like to downsample by a factor of 9 (i.e. to 512x512). Below is the code that creates the image at high resolution and downsamples it by a factor of 9.
All this basically does is map each pixel from high-res space onto one in low-res space (symmetrically around the centroid) and sum all the high-res pixels in a given area into that one low-res pixel.
import numpy as np
import matplotlib.pyplot as plt
import time
print 'rendering circle at high res'
# image dimensions.
dim = 4608
# generate high sampled image.
xx, yy = np.mgrid[:dim, :dim]
highRes = (xx - dim/2) ** 2 + (yy - dim/2) ** 2
print 'render done'
print 'downsampling'
t0 = time.time()
# center of the high sampled image.
cen = dim/2
ds = 9
# calculate offsets.
x = 0
offset = (x-cen+ds/2+dim)/ds
# calculate the downsample dimension.
x = dim-1
dsdim = 1 + (x-cen+ds/2+dim)/ds - offset
# generate a blank downsampled image.
lowRes = np.zeros((dsdim, dsdim))
for y in range(0, dim):
    yy = (y-cen+ds/2+dim)/ds - offset
    for x in range(0, dim):
        xx = (x-cen+ds/2+dim)/ds - offset
        lowRes[yy, xx] += highRes[y, x]
t1 = time.time()
total = t1-t0
print 'time taken %f seconds' % total
I have numpy with BLAS and LAPACK set up on my machine and I know a significant gain can be achieved by taking advantage of this; however, I'm a bit stuck on how to proceed. This is my progress so far.
import numpy as np
import matplotlib.pyplot as plt
import time
print 'rendering circle at high res'
# image dimensions.
dim = 4608
# generate high sampled image.
xx, yy = np.mgrid[:dim, :dim]
highRes = (xx - dim/2) ** 2 + (yy - dim/2) ** 2
print 'render done'
print 'downsampling'
t0 = time.time()
# center of the high sampled image.
cen = dim/2
ds = 9
# calculate offsets.
x = 0
offset = (x-cen+ds/2+dim)/ds
# calculate the downsample dimension.
x = dim-1
dsdim = 1 + (x-cen+ds/2+dim)/ds - offset
# generate a blank downsampled image.
lowRes = np.zeros((dsdim, dsdim))
ar = np.arange(0, dim)
x, y = np.meshgrid(ar, ar)
# calculating symettrically centriod positions.
yy = (y-cen+ds/2+dim)/ds - offset
xx = (x-cen+ds/2+dim)/ds - offset
# to-do : code to map xx, yy into lowRes
t1 = time.time()
total = t1-t0
print 'time taken %f seconds' % total
This current version is about 16x faster on my machine but it is not complete. I'm not exactly sure how to map the new downsampled pixels from the high res. image efficiently.
There might be another way to speed it up? Not sure... Thanks!
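For what it's worth, one way to complete the vectorized version is numpy's unbuffered np.add.at, which accumulates repeated indices exactly like the explicit double loop; a sketch under the same symmetric mapping (with integer division written as // so it also behaves this way on Python 3):
import numpy as np

dim, ds = 4608, 9
cen = dim // 2

rows, cols = np.mgrid[:dim, :dim]                # row/column index of every high-res pixel
highRes = (rows - cen) ** 2 + (cols - cen) ** 2

offset = (0 - cen + ds // 2 + dim) // ds
dsdim = 1 + (dim - 1 - cen + ds // 2 + dim) // ds - offset
row_map = (rows - cen + ds // 2 + dim) // ds - offset   # low-res row of each pixel
col_map = (cols - cen + ds // 2 + dim) // ds - offset   # low-res column of each pixel

lowRes = np.zeros((dsdim, dsdim))
# unbuffered accumulation, equivalent to the loop's lowRes[yy, xx] += highRes[y, x]
np.add.at(lowRes, (row_map, col_map), highRes)
np.add.at is exact but not the fastest route; if the slightly asymmetric edge bins of this mapping are not essential, a plain reshape to (512, 9, 512, 9) followed by summing over the two block axes is usually much quicker, since 4608 = 9 * 512.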
