I am writing code to sum a Fourier series whose index runs over [-n, n]. However, I'm having trouble when the iteration reaches n = 0. I wrote an 'if' statement inside my while loop so that case is handled separately, but it doesn't seem to be taking effect. Here's my code:
from __future__ import division
import numpy as np
import math
import matplotlib.pyplot as plt
#initial values
ni = -10
nf = 10
ti = -3
tf = 3
dt = 0.01
yi = 0 #initial f(t) value
j = complex(0,1)
#initialization
tarray = [ti]
yarray = [yi]
t = ti
n = ni
y = yi
cn = 1/(8*(np.pi)**3*n**3*j**3)*(j*4*np.pi*n) #part (b)
#iterating loop
while t<tf:
    n = ni
    y = yi
    while n<nf:
        if n == 0:
            cn = 1/6
            y += cn
            n += 1
        else:
            y += cn*np.exp(j*np.pi*n*t)
            n += 1
    yarray.append(y)
    t+=dt
    tarray.append(t)
#converting list-array
tarray = np.array(tarray)
yarray = np.array(yarray)
#plotting
plt.plot(tarray,yarray, linewidth = 1)
plt.axis("tight")
plt.xlabel('t')
plt.ylabel('f(t) up to n partial sums')
plt.title('Fourier Series for n terms')
plt.legend()
plt.show()
I want it to iterate and build an array of y-values for n ranging from some negative number to some positive number (say n from [-10, 10]), but as soon as it hits n = 0 it seems to plug that value into the 'else' clause even though I want it to use the 'if' clause, giving me "ZeroDivisionError: complex division by zero". How do I fix this?
Edit: Put the entire code block here so you can see the context.
This is far from the most elegant way, but try this:
while t<tf:
    n = ni
    y = yi
    while n<nf:
        try:
            1/n   # raises ZeroDivisionError only when n == 0
            y += cn*np.exp(j*np.pi*n*t) #1/n*np.sin(n*t)
            n += 1
        except ZeroDivisionError:
            cn = 1/6   # the n = 0 term
            y += cn
            n += 1
    yarray.append(y)
    t+=dt
    tarray.append(t)
The coefficient cn is a function of n and should be recomputed on every pass through the loop. You made it a constant (and, once n passes 0, it even stays equal to 1/6 for the positive n's).
The inner loop could look like
y = 1/6 # starting with n = 0
for n in range(1,nf):
    y -= 1/(2*np.pi*n)**2 * np.sin(np.pi*n*t) # see below
Corresponding coefficients for positive and negative n's are equal and exp(ix) - exp(-ix) = 2i sin(x), so it nicely reduces. (Double check the calculation.)
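For completeness, here is a minimal sketch of the same inner loop kept in complex-exponential form, with cn recomputed from the question's own formula for every n and the n = 0 term handled separately (double-check the 1/6 constant against your derivation):
y = 1/6                                 # contribution of the n = 0 term
for n in range(ni, nf + 1):
    if n == 0:
        continue                        # already accounted for above
    cn = 1/(8*np.pi**3*n**3*j**3)*(j*4*np.pi*n)   # now depends on the current n
    y += cn*np.exp(j*np.pi*n*t)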
I need to implement a stochastic algorithm that outputs the times and the corresponding states of a dynamic system. We include randomness in defining the time points by drawing a random number from the uniform distribution. What I want to do is find the state at time points 0, 1, 2, ..., 24. Given the randomness of the algorithm, the time points 1, 2, 3, ..., 24 are not necessarily hit. We may include rounding at two decimal places, but even with rounding I cannot find/insert all of these time points. The question is how to change the code so that the list of time points includes the numbers 1, 2, ..., 24 while preserving the stochasticity of the algorithm. Thanks for any suggestion.
import numpy as np
import random
import math as m
np.random.seed(seed = 5)
# Stoichiometric matrix
S = np.array([(-1, 0), (1, -1)])
# Reaction parameters
ke = 0.3; ka = 0.5
k = [ke, ka]
# Initial state vector at time t0
X1 = [200]; X2 = [0]
# We will update it for each time.
X = [X1, X2]
# Initial time is t0 = 0, which we will update.
t = [0]
# End time
tfinal = 24
# The propensity vector R concerning the last/updated value of time
def ReactionRates(k, X1, X2):
    R = np.zeros((2,1))
    R[0] = k[1] * X1[-1]
    R[1] = k[0] * X2[-1]
    return R
# We implement the Gillespie (SSA) algorithm
while True:
    # Reaction propensities/rates
    R = ReactionRates(k,X1,X2)
    propensities = R
    propensities_sum = sum(R)[0]
    if propensities_sum == 0:
        break
    # we include randomness
    u1 = np.random.uniform(0,1)
    delta_t = (1/propensities_sum) * m.log(1/u1)
    if t[-1] + delta_t > tfinal:
        break
    t.append(t[-1] + delta_t)
    b = [0, R[0], R[1]]
    u2 = np.random.uniform(0,1)
    # Choose j
    lambda_u2 = propensities_sum * u2
    for j in range(len(b)):
        if sum(b[0:j-1+1]) < lambda_u2 <= sum(b[1:j+1]):
            break # out of for j
    # make j zero based
    j -= 1
    # We update the state vector
    X1.append(X1[-1] + S.T[j][0])
    X2.append(X2[-1] + S.T[j][1])
# round t values
t = [round(tt,2) for tt in t]
print("The time steps:", t)
print("The second component of the state vector:", X2)
After playing with your model, I conclude that interpolation works fine.
Basically, just append the following lines to your code:
ts = np.arange(tfinal+1)
xs = np.interp(ts, t, X2)
and if you have matplotlib installed, you can visualize using
import matplotlib.pyplot as plt
plt.plot(t, X2)
plt.plot(ts, xs)
plt.show()
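One design note as an aside (np.interp is the answer's suggestion; what follows is an assumption about what you might want instead): np.interp draws straight lines between events, whereas the SSA trajectory is piecewise constant. If you want the exact state that was in force at each integer time, you can take the most recent event instead:
ts = np.arange(tfinal + 1)
idx = np.searchsorted(t, ts, side="right") - 1   # index of the last event time <= each ts
idx = np.clip(idx, 0, len(X2) - 1)
xs_step = np.array(X2)[idx]                      # zero-order-hold sample of X2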
I have a list of lists m which I need to modify.
I need the sum of each row to be greater than A and the sum of each column to be less than B.
I have something like this:
x = 5 #or other number, not relevant
rows = len(m)
cols = len(m[0])
for r in range(rows):
    while sum(m[r]) < A:
        c = randint(0, cols-1)
        m[r][c] += x
for c in range(cols):
    cant = sum([m[r][c] for r in range(rows)])
    while cant > B:
        r = randint(0, rows-1)
        if m[r][c] >= x: #I don't want negatives
            m[r][c] -= x
My problem is that I need to satisfy both conditions, and with this approach, after the second for loop I can't be sure the first condition is still met.
Any suggestions on how to satisfy both conditions, ideally with good performance? I could definitely consider using numpy.
Edit (an example)
#input
m = [[0,0,0],
[0,0,0]]
A = 20
B = 25
# one desired output (since it chooses random positions)
m = [[10,0,15],
[15,0,5]]
I should probably add:
This is for generating the random initial population of a genetic algorithm; the restrictions make each individual a feasible solution, and I would need to run this about 80 times to get different possible solutions.
Something like this should do the trick:
import numpy
from scipy.optimize import linprog
A = 10
B = 20
m = 2
n = m * m
# the coefficients of a linear function to minimize.
# setting this to all ones minimizes the sum of all variable
# values in the matrix, which solves the problem, but see below.
c = numpy.ones(n)
# the constraint matrix.
# This is matrix-multiplied with the current solution candidate
# to form the left hand side of a set of normalized
# linear inequality constraint equations, i.e.
#
# x_0 * A_ub[0][0] + x_1 * A_ub[0][1] <= b_0
# x_0 * A_ub[1][0] + x_1 * A_ub[1][1] <= b_1
# ...
A_ub = numpy.zeros((2 * m, n))
# row sums. Since the <= inequality is a fixed component,
# we just multiply everything by (-1), i.e. we demand that
# the negative sums are smaller than the negative limit -A.
#
# Assign row ranges all at once, because numpy can do this.
for r in xrange(0, m):
    A_ub[r][r * m:(r + 1) * m] = -1
# We want that the sum of the x in each (flattened)
# column is smaller than B
#
# The manual stepping for the column sums in row-major encoding
# is a little bit annoying here.
for r in xrange(0, m):
    for j in xrange(0, m):
        A_ub[r + m][r + m * j] = 1
# the actual upper limits for the normalized inequalities.
b_ub = [-A] * m + [B] * m
# hand the linear program to scipy
solution = linprog(c, A_ub=A_ub, b_ub=b_ub)
# bring the solution into the desired matrix form
print numpy.reshape(solution.x, (m, m))
Caveats
I use <=, not < as stated in your question, because that's what linprog supports.
This minimizes the total sum of all values in the target vector.
For your use case, you probably want to minimize the distance
to the original sample, which the linear program cannot handle, since neither the squared error nor the absolute difference can be expressed using a linear combination (which is what c stands for). For that, you will probably need to go to full minimize().
Still, this should give you a rough idea.
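As a rough illustration of that last point (my addition, a sketch only): minimize the squared distance to an original matrix m0 with scipy.optimize.minimize, subject to the same row/column constraints; m0, A and B below are placeholders.
import numpy as np
from scipy.optimize import minimize, LinearConstraint

m0 = np.zeros((2, 3), dtype=float)               # original sample (placeholder)
A, B = 20, 25
rows, cols = m0.shape
n = rows * cols

# Row i of the flattened (row-major) vector occupies positions [i*cols : (i+1)*cols].
row_mat = np.kron(np.eye(rows), np.ones(cols))   # (rows, n): one row sum per line
col_mat = np.kron(np.ones(rows), np.eye(cols))   # (cols, n): one column sum per line

res = minimize(
    lambda v: np.sum((v - m0.ravel())**2),       # stay close to the original sample
    x0=np.full(n, float(A) / cols),              # simple feasible-ish starting point
    method="trust-constr",
    constraints=[
        LinearConstraint(row_mat, lb=A, ub=np.inf),    # row sums >= A
        LinearConstraint(col_mat, lb=-np.inf, ub=B),   # column sums <= B
    ],
    bounds=[(0, None)] * n,                      # no negative entries
)
print(res.x.reshape(rows, cols))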
A NumPy solution:
import numpy as np
val = B / len(m) # column sums <= B
assert val * len(m[0]) >= A # row sums >= A
# create array shaped like m, filled with val
arr = np.empty_like(m)
arr[:] = val
I chose to ignore the original content of m - it's all zero in your example anyway.
from random import *
m = [[0,0,0],
[0,0,0]]
A = 20
B = 25
x = 1 #or other number, not relevant
rows = len(m)
cols = len(m[0])
def runner(list1, a1, b1, x1):
    list1_backup = [row[:] for row in list1]   # copy the inner lists too, so a retry really restarts
    rows = len(list1)
    cols = len(list1[0])
    for r in range(rows):
        while sum(list1[r]) <= a1:
            c = randint(0, cols-1)
            list1[r][c] += x1
    for c in range(cols):
        cant = sum([list1[r][c] for r in range(rows)])
        while cant >= b1:
            r = randint(0, rows-1)
            if list1[r][c] >= x1: #I don't want negatives
                list1[r][c] -= x1
            cant = sum([list1[r][c] for r in range(rows)])   # recompute so the loop can terminate
    good_a_int = 0
    for r in range(rows):
        test1 = sum(list1[r]) > a1
        good_a_int += 0 if test1 else 1
    if good_a_int == 0:
        return list1
    else:
        return runner(list1=list1_backup, a1=a1, b1=b1, x1=x1)
m2 = runner(m, A, B, x)
for row in m2:
    print ','.join(map(lambda x: "{:>3}".format(x), row))
I am running the code below to build one-dimensional arrays z and t. At the moment I am trying to make their sizes match, so that each has a length of 501.
import numpy as np
#constants & parameters
omega = 1.
eps = 1.
c = 3.*(10.**8.)
hbar = 1.
eta = 0.01
nn = 10.**7.
n = eta*nn
lambdaOH = c/(1612.*10.**(6.))
gamma = 1.282*(10.**(-11.))
Tsp = 1./gamma
TR = 604800.
L = (Tsp/TR)*(np.pi)/((3.*(lambdaOH**2.))*n)
#time
Ngridt = 500.
tmax = 1.
dt = tmax/Ngridt
intervalt = tmax/dt + 1
t = np.linspace(0.01,tmax,intervalt)
#z space
Ngridz = 500.
zmax = L
dz = zmax/Ngridz
intervalz = zmax/dz + 1
z = np.linspace(0.01,zmax,intervalz)
When running the code, both intervalt and intervalz equal 501.0, but when checking the length of both z and t, len(z) = 500 while len(t) = 501. I have played around with the code above to yield len(z) = 501 by modifying certain parts. For example, if I insert the code
zmax = int(zmax)
then len(z) = 501. But why does the initial code, exactly as written, not yield an array z of length 501?
(I am using Python 2.7.)
It is a rounding problem. If you subtract 501 from intervalz you will find a very small negative number, about -5.68e-14, i.e. intervalz is just below 501; linspace simply truncates it to its integer part, that is 500, and returns a 500-element array.
Notice two other problems with your code:
dt does not give the actual spacing of the array, because the initial value 0.01 is not subtracted from tmax (same for dz)
Ngridt and Ngridz are conceptually integers, while you initialize them as floats. Just remove the dot at the end.
I think that your code could be simplified by writing (notice that Ngridt and Ngridz are initialized to 501)
#time
Ngridt = 501
tmax = 1.
t, dt = np.linspace(0.01,tmax,Ngridt,retstep=True)
#z space
Ngridz = 501
zmax = L
z, dz = np.linspace(0.01,zmax,Ngridz,retstep=True)
This is related to floating-point arithmetic inaccuracy. It so happens that the formula for intervalz yields 500.99999999999994. This is the kind of floating-point accuracy issue you can find all over SO. np.linspace then truncates this number to 500 rather than 501.
Since linspace expects an int, it is better to make sure you give it one.
BTW: mathematically speaking I don't see why you don't set
intervalz = Ngridz + 1
since intervalz = zmax/dz + 1 = zmax/(zmax/Ngridz) + 1 = Ngridz + 1
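For example (a minimal sketch of that suggestion, using the question's own variable names):
Ngridz = 500                              # an int, not 500.
z = np.linspace(0.01, zmax, Ngridz + 1)   # exactly 501 points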
I am trying to implement a finite difference approximation to solve the Heat Equation, u_t = k * u_{xx}, in Python using NumPy.
Here is a copy of the code I am running:
## This program is to implement a Finite Difference method approximation
## to solve the Heat Equation, u_t = k * u_xx,
## in 1D w/out sources & on a finite interval 0 < x < L. The PDE
## is subject to B.C: u(0,t) = u(L,t) = 0,
## and the I.C: u(x,0) = f(x).
import numpy as np
import matplotlib.pyplot as plt
# parameters
L = 1 # length of the rod
T = 10 # terminal time
N = 10
M = 100
s = 0.25
# uniform mesh
x_init = 0
x_end = L
dx = float(x_end - x_init) / N
x = np.arange(x_init, x_end, dx)
x[0] = x_init
# time discretization
t_init = 0
t_end = T
dt = float(t_end - t_init) / M
t = np.arange(t_init, t_end, dt)
t[0] = t_init
# Boundary Conditions
for m in xrange(0, M):
    t[m] = m * dt
# Initial Conditions
for j in xrange(0, N):
    x[j] = j * dx
# definition of solution u(x,t) to u_t = k * u_xx
u = np.zeros((N, M+1)) # array to store values of the solution
# Finite Difference Scheme:
u[:,0] = x**2 #initial condition
for m in xrange(0, M):
    for j in xrange(1, N-1):
        if j == 1:
            u[j-1,m] = 0 # Boundary condition
        elif j == N-1:
            u[j+1,m] = 0
        else:
            u[j,m+1] = u[j,m] + s * ( u[j+1,m] -
                                      2 * u[j,m] + u[j-1,m] )
print u, #t, x
plt.plot(u, t)
#plt.show()
I think my code is working properly and it produces output. I want to plot the solution u versus t (my time vector). If I can plot the graph, I can check whether my numerical approximation agrees with the expected behaviour of the Heat Equation. However, I am getting the error "x and y must have same first dimension". How can I correct this issue?
An additional question: am I better off making an animation with matplotlib.animation instead of using matplotlib.pyplot?
Thanks so much for any and all help! It is very greatly appreciated!
Okay, so I had a "brain dump" and tried plotting u vs. t, forgetting that u, being the solution to the Heat Equation (u_t = k * u_{xx}), is defined as u(x,t), so it already carries the time dependence. I made the following correction to my code:
print u #t, x
plt.plot(u)
plt.show()
And now my program is finally displaying an image. Here it is:
It is absolutely beautiful, isn't it?
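If you do want physically labelled axes, one option (a sketch, assuming u keeps the shape (N, M+1) used above) is to plot a few time slices of u against x:
# Each column u[:, m] is the spatial profile at time step m, so plot it against x.
for m in (0, M // 4, M // 2, M - 1):
    plt.plot(x, u[:, m], label="t = %.1f" % t[m])
plt.xlabel("x")
plt.ylabel("u(x, t)")
plt.legend()
plt.show()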
I currently have quite a large Python program, and when I run it, it takes about 3 minutes to complete the full calculation. Eventually, I want to increase my N to about 400 and raise m in the for loop to an even larger number - this would probably take hours to compute, which I want to cut down.
It's steps 1-6 that take a long time.
When attempting to run this with Cython (i.e. importing pyximport and then importing my file), I get the following errors:
FDC.pyx:49:19: 'range' not a valid cython language construct
FDC.pyx:49:19: 'range' not a valid cython attribute or is being used incorrectly
from physics import *
from operator import add, sub
import pylab
################ PRODUCING CHARGES AT RANDOM IN r #############
N=11 #Number of point charges
x = zeros(N,float) #grid
y = zeros(N,float)
i=0
while i < N: #code to produce values of x and y within r
    x[i] = random.uniform(0,1)
    y[i] = random.uniform(0,1)
    if x[i] ** 2 + y[i] ** 2 <= 1:
        i+=1
print x, y
def r(x,y): #distance between particles
    return sqrt(x**2 + y**2)
o = 0; k = 0; W=0 #sum of energy for initial charges
for o in range(0, N):
    for k in range(0, N):
        if o==k:
            continue
        xdist=x[o]-x[k]
        ydist=y[o]-y[k]
        W+= 0.5/(r(xdist,ydist))
print "Initial Energy:", W
##################### STEPS 1-6 ######################
d=0.01 #fixed change in length
charge=(x,y)
l=0; m=0; n=0
prevsW = 0.
T=100
for q in range(0,100):
    T=0.9*T
    for m in range(0, 4000): #steps 1 - 6 in notes looped over
        xRef = random.randint(0,1) #Choosing x or y
        yRef = random.randint(0,N-1) #choosing the element of xRef
        j = charge[xRef][yRef] #Chooses specific axis of a charge and stores it as 'j'
        prevops = None #assigns prevops as having no variable
        while True: #code to randomly change charge positions and ensure they do not leave the disc
            ops =(add, sub); op=random.choice(ops)
            tempJ = op(j, d)
            #print xRef, yRef, n, tempJ
            charge[xRef][yRef] = tempJ
            ret = r(charge[0][yRef],charge[1][yRef])
            if ret<=1.0:
                j=tempJ
                #print "working", n
                break
            elif prevops != ops and prevops != None: #!= is 'not equal to' so that if both addition and subtraction operations dont work the code breaks
                break
            prevops = ops #####
        o = 0; k = 0; sW=0 #New energy with altered x coordinate
        for o in range(0, N):
            for k in range(0, N):
                if o==k:
                    continue
                xdist = x[o] - x[k]
                ydist = y[o] - y[k]
                sW+=0.5/(r( xdist , ydist ))
        difference = sW - prevsW
        prevsW = sW
        #Conditions:
        p=0
        if difference < 0: #accept change
            charge[xRef][yRef] = j
            #print 'step 5'
        randomnum = random.uniform(0,1) #r
        if difference > 0: #acceptance with a probability
            p = exp( -difference / T )
            #print 'step 6', p
            if randomnum >= p:
                charge[xRef][yRef] = op(tempJ, -d) #revert coordinate to original if r>p
                #print charge[xRef][yRef], 'r>p'
        #print m, charge, difference
o = 0; k = 0; DW=0 #sum of energy for the final charge configuration
for o in range(0, N):
    for k in range(0, N):
        if o==k:
            continue
        xdist=x[o]-x[k]
        ydist=y[o]-y[k]
        DW+= 0.5/(r(xdist,ydist))
print charge
print 'Final Energy:', DW
################### plotting circle ###################
# use radians instead of degrees
list_radians = [0]
for i in range(0,360):
    float_div = 180.0/(i+1)
    list_radians.append(pi/float_div)
# list of coordinates for each point
list_x2_axis = []
list_y2_axis = []
# calculate coordinates
# and append to above list
for a in list_radians:
    list_x2_axis.append(cos(a))
    list_y2_axis.append(sin(a))
# plot the coordinates
pylab.plot(list_x2_axis,list_y2_axis,c='r')
########################################################
pylab.title('Distribution of Charges on a Disc')
pylab.scatter(x,y)
pylab.show()
What is taking time seems to be this:
for q in range(0,100):
    ...
    for m in range(0, 4000): #steps 1 - 6 in notes looped over
        while True: #code to randomly change charge positions and ensure they do not leave the disc
            ....
        for o in range(0, N): # <----- N will be brought up to 400
            for k in range(0, N):
                ....
            ....
        ....
    ....
100 x 4000 x (while loop) + 100 x 4000 x 400 x 400 = [400,000 x while loop] + [64,000,000,000]
Before looking into a faster language, maybe there is a better way to build your simulation?
Other than that, you will likely see immediate performance gains if you:
- shift to numpy arrays instead of Python lists.
- use xrange instead of range.
[edit to try to answer question in the comments]:
import numpy as np, random
N=11 #Number of point charges
x = np.random.uniform(0,1,N)
y = np.random.uniform(0,1,N)
z = np.zeros(N)
z = np.sqrt(x**2 + y**2) # <--- this could maybe replace r(x,y) (called quite often in your code)
print x, y, z
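Along the same lines, the O(N^2) energy double loop is what will dominate once N is around 400, and it can be vectorized with broadcasting; a sketch (my addition, reusing the x and y arrays above):
dx = x[:, None] - x[None, :]        # all pairwise x-differences, shape (N, N)
dy = y[:, None] - y[None, :]
dist = np.sqrt(dx**2 + dy**2)
np.fill_diagonal(dist, np.inf)      # drop the o == k self-terms
W = np.sum(0.5 / dist)
print W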
You could also look at all the variables that are assigned or recalculated many, many times inside your main loop (the one described above), and pull them outside the loop(s) so they are not repeatedly assigned or recalculated.
For instance,
ops =(add, sub); op=random.choice(ops)
could perhaps be replaced by
op = random.choice((add, sub))   # random.choice takes a single sequence
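Or, going further and actually hoisting the constant tuple out of the loops as suggested (a small sketch using the names from the question's code):
ops = (add, sub)             # built once, before the q and m loops start
...
op = random.choice(ops)      # only the random choice remains inside the inner while loop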
Lastly, and here I am out on a limb because I've never used it myself, it might be a little simpler for you to use a package like Numba (with its @jit decorator) instead of Cython; it lets you decorate the critical part of your code and have it compiled just in time, with no or only very minor changes.
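For instance, a minimal sketch (my addition, assuming Numba is installed; untested against the full script) of what that decoration might look like for the pairwise-energy loop:
import numpy as np
from numba import jit

@jit(nopython=True)
def total_energy(x, y):
    """Sum of 0.5 / distance over all ordered pairs (o, k) with o != k."""
    N = x.shape[0]
    W = 0.0
    for o in range(N):
        for k in range(N):
            if o == k:
                continue
            W += 0.5 / np.sqrt((x[o] - x[k])**2 + (y[o] - y[k])**2)
    return W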