I have a problem. I'm working on an assignment for my classes and doing my best, but the teacher's hints don't seem to help, so I have to track down the problem myself.
To begin with, I had to translate the task from my native language into English, so there may be some misunderstandings, as it was hard to express the mathematical details.
"There is a Mandelbrot set which is created by points defined on surface by complex numbers so as following recurrence equation does not go to infinity:
{ z0 = 0 // zn+1 = zn^2 + c
You have to make two-dimensional array T with a size of 600x600. Each element from that array T[k][l] will represent a complex number ckl = (−2 + k * 0.005, (−1.5 + l * 0.005)ˆi). Next for each of element from T[k][l] calculate first n_max = 100 numbers from series and save that value as n, for |zn| > 2."
Yeah, that was painful to translate. I googled a lot, but I couldn't adapt the other Python scripts I found to this task. I ended up using this: LINK, and wrote this:
import matplotlib.pyplot as plt

k = 600
l = 600
T = [0][0]
for i in range(k):
    for j in range(k):
        z = 0 + 0j
        c = complex(-2 + i * 0.005, -1.5 + j * 0.005)
        for g in range(100):
            z = z * z + c
            if abs(z) > 2.0:
                T = i
                break

plt.figure(dpi=200)
plt.imshow(T, cmap="hot")
plt.show()
And it is really bad; I don't really understand it. On top of that I get an error: TypeError: Invalid shape () for image data. I found another solution a couple of weeks ago and submitted it, but the task has to be done the way described above. Please help, and thanks for your time in advance. If you need any other screenshots of the task, I'll provide them.
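For reference, here is a minimal corrected sketch of the code above. It assumes, per the task statement, that T[i][j] should store the iteration count g at which |z| first exceeds 2; the TypeError comes from T = [0][0] evaluating to the scalar 0 rather than a 600x600 array, and T = i then overwrites it with a plain int.

import matplotlib.pyplot as plt

k = 600
l = 600
T = [[0] * l for _ in range(k)]  # an actual 600x600 nested list of escape counts
for i in range(k):
    for j in range(l):
        z = 0 + 0j
        c = complex(-2 + i * 0.005, -1.5 + j * 0.005)
        for g in range(100):
            z = z * z + c
            if abs(z) > 2.0:
                T[i][j] = g  # save the escape iteration for this point
                break

plt.figure(dpi=200)
plt.imshow(T, cmap="hot")
plt.show()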
I'm new to Python, so have mercy if this question is rather trivial. The problem I'm struggling with is plotting a function consisting of many functions that differ only by their 'mode' numbers, which I called i and k. I think the most efficient approach for problems like this is a for-loop.
My first guess was to use code like this:
# Define vectors and lists (order, R, L, c, N are physical parameters defined elsewhere)
import itertools
import numpy as np
import scipy.special as sc
import matplotlib.pyplot as plt
from numpy import pi

chi = sc.jn_zeros(0, order)
om = np.zeros((order, order))
P = [[0]*order for _ in range(order)]  # independent rows ([[0]*order]*order would alias one row)
for i, k in itertools.product(range(order), range(order)):
    om[i][k] = c*(np.sqrt(chi[i]/R)**2 + (pi*k/L))
    P[i][k] = lambda t: (N * np.sin(om[i][k] * t / 2)**2) / (2*om[i][k])**2
print()
# sum up list of functions to one function
def p(t):
    return sum(sum(P, []))
# plot
t1 = np.arange(0.0, 5.0 * 10**(-9), 0.001*10**(-9))
plt.figure()
plt.plot(t1, p(t1))
plt.show()
However, I always get the error:
in p return sum(sum(P, []))
TypeError: unsupported operand type(s) for +: 'int' and 'function'
My second guess was to avoid the list entirely and instead build the function up in a for-loop:
def p(t):
    return 0*t
for i, k in itertools.product(range(order), range(order)):
    om[i][k] = c*(np.sqrt(chi[i]/R)**2 + (pi*k/L))
    def p(t):
        return p(t) + np.sin(om[i][k] * t)**2 / (2*om[i][k])**2
print()
t1 = np.arange(0.0, 5.0 * 10**(-9), 0.001*10**(-9))
plt.figure()
plt.plot(t1, p(t1))
plt.show()
Nevertheless the code does not produce a plot. There is only the output:
Process finished with exit code -1073741571 (0xC00000FD)
I haven't found existing questions about summing more than two functions. Hence I would really appreciate it if anyone could point me to a similar problem, or share some information about the errors occurring in my code. I have already checked that the variable vectors (omega, chi, ...) are calculated correctly. Thanks in advance.
There's no way to combine a set of functions like that. Your first attempt fails because sum() ends up adding the function objects themselves to its integer start value; your second defines p in terms of itself, so calling it recurses until the stack overflows (exit code 0xC00000FD is Windows' STATUS_STACK_OVERFLOW). You'll just need to run the loops every time; even if you did combine the functions, the net effect would still be the same.
# Define vectors and lists
chi = sc.jn_zeros(0, order)
om = np.zeros((order, order))
for i, k in itertools.product(range(order), range(order)):
    om[i][k] = c*(np.sqrt(chi[i]/R)**2 + (pi*k/L))
print()

def p(t):
    return sum(
        np.sin(om[i][k] * t / 2)**2 / (2*om[i][k])**2
        for i, k in itertools.product(range(order), range(order))
    )
# plot
t1 = np.arange(0.0, 5.0 * 10**(-9), 0.001*10**(-9))
pt = [p(t) for t in t1]
plt.figure()
plt.plot(t1, pt)
plt.show()
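Note that since np.sin broadcasts over arrays, the p defined above also accepts the whole t1 array at once, so the per-element list comprehension is optional:

plt.plot(t1, p(t1))  # equivalent to building pt element by element, but vectorized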
After watching this video on the fast Fourier transform, https://www.youtube.com/watch?v=h7apO7q16V0,
I analysed the pseudocode and implemented it in Python, only to find that it produces output different from that of many FFT calculator sites. All my values seem to be there; it's just odd that their order is out of place. Does anyone know why? Is it a different kind of algorithm or implementation?
import cmath
import math

def FFT(P):
    n = len(P)
    if n == 1:
        return P
    omega = cmath.exp((2 * cmath.pi * 1j)/n)
    p_even = P[::2]
    p_odd = P[1::2]
    y_even = FFT(p_even)
    y_odd = FFT(p_odd)
    y = [0] * n
    for i in range(n//2):
        y[i] = y_even[i] + omega**i*y_odd[i]
        y[i+n//2] = y_even[i] - omega**i*y_odd[i]
    return y

poly = [0,1,2,3]
print(FFT(poly))
The site I tested it against was https://tonysader.github.io/FFT_Calculator/?
and I input 0,1,2,3 into that site and obtained: 6, -2+2j, -2, -2-2j,
whilst my Python program output: 6, -2-2j, -2, -2+2j.
The pseudocode I followed is the one presented in the video (image omitted).
I think the program you're running is executing the inverse FFT. Try
omega = cmath.exp((-2 * cmath.pi * 1j)/n). Note the minus sign.
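A quick way to check the convention is to apply that sign flip and compare against numpy.fft.fft, which uses the forward-transform convention. A minimal sketch of that check (same algorithm as above, only omega changed):

import cmath
import numpy as np

def FFT(P):
    n = len(P)
    if n == 1:
        return P
    # minus sign: forward DFT convention, matching numpy.fft and the calculator site
    omega = cmath.exp((-2 * cmath.pi * 1j) / n)
    y_even = FFT(P[::2])
    y_odd = FFT(P[1::2])
    y = [0] * n
    for i in range(n // 2):
        y[i] = y_even[i] + omega**i * y_odd[i]
        y[i + n // 2] = y_even[i] - omega**i * y_odd[i]
    return y

print(FFT([0, 1, 2, 3]))         # [6, -2+2j, -2, -2-2j], up to float rounding
print(np.fft.fft([0, 1, 2, 3]))  # the same values, for comparison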
I am developing code that uses the Platen method to solve stochastic differential equations. I must then solve that stochastic differential equation many times (on the order of 10,000) and average all the results. My code is:
import numpy as np
import random
import numba

@numba.jit(nopython=True)
def integrador2(y,t,h): #this is the integrator of the function that solves the SDE
    m = 6.6551079E-26 #parameters
    gamma=0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b=np.sqrt(2*m*gamma*T*k_b)
    c=np.sqrt(h)
    for i in range(len(t)):
        dW=c*random.gauss(0,1)
        A=np.array([y[i,-1]/m,-gamma*y[i,-1]]) #this is the Platen method that is applied at
        B_dW=np.array([0,b*dW])                #each time step
        z=y[i]+A*h+B_dW
        Az=np.array([z[-1]/m,-gamma*z[-1]])
        y[i+1]=y[i]+1/2*(Az+A)*h+B_dW
    return y
def media(args): #args is a tuple with the parameters
    y = args[0]
    t = args[1]
    k = args[2]
    x=0
    p=0
    for n in range(k): #k = number of trajectories
        y=integrador2(y,t,h) #h comes from the enclosing scope
        x=(1./(n+1))*(n*x+y[:,0]) #I do the average like this so as not to have to keep all the
        p=(1./(n+1))*(n*p+y[:,1]) #solutions in memory
    return x,p
The variables y, t and h are:
y0 = np.array([initial position, initial moment]) #initial conditions
t = np.linspace(initial time, final time, number of time intervals) #time array
y = np.zeros((len(t)+1,len(y0))) #array of positions and moments
y[0,:]=np.array(y0) #I keep the initial condition
h = (final time-initial time)/(number of time intervals) #time increment
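Purely for concreteness, a hypothetical initialization matching those definitions (all numbers made up) might look like:

import numpy as np

t0, tf, nt = 0.0, 1.0E-6, 10**6  # made-up initial time, final time, number of intervals
y0 = np.array([0.0, 1.0E-24])    # made-up initial position and moment
t = np.linspace(t0, tf, nt)      # time array
y = np.zeros((len(t)+1, len(y0)))
y[0,:] = y0                      # keep the initial condition
h = (tf - t0)/nt                 # time increment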
I need to run the program for 10**7 time intervals and solve it 10**4 times (k = 10**4).
I feel I have reached a dead end: I already accelerate the integrator with Numba, and (although I do not show it here) I parallelize the "media" function across the four cores my computer has. Even with all this, the program takes an hour and a half for 10**6 time intervals and k = 10**4, and I have not had the courage to run it for 10**7 intervals because my intuition tells me it would take more than ten hours.
I would really appreciate it if someone could advise me on making parts of the code faster.
Finally, I apologize if I have not expressed myself entirely correctly in any part of the question; I am a physicist, not a computer scientist, and my English is far from perfect.
I can save about 75% of compute time by simplifying the math in the loop:
def integrador2(y,t,h): #this is the integrator of the function that solves the SDE
    m = 6.6551079E-26 #parameters
    gamma=0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b=np.sqrt(2*m*gamma*T*k_b)
    c=np.sqrt(h)
    h = h * 1.
    coeff0 = h/m - gamma*h**2/(2.*m)
    coeff1 = (1. - gamma*h + gamma**2*h**2/2.)
    coeffd = c*b*(1. - gamma*h/2.)
    for i in range(len(t)):
        dW=np.random.normal()
        # Method 2
        y[i+1] = np.array([y[i][0] + y[i][1]*coeff0, y[i][1]*coeff1 + dW*coeffd])
    return y
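For context, here is how I read that simplification (worth checking against your own derivation): with A(y) = (p/m, -gamma*p), substituting the predictor z = y + A*h + B_dW into the corrector y + (A(z)+A(y))*h/2 + B_dW gives, for the momentum component,

p_next = p*(1 - gamma*h + gamma**2*h**2/2) + b*dW*(1 - gamma*h/2)

which is exactly coeff1 and coeffd above (the sqrt(h) factor from dW is folded into coeffd, so dW can be drawn as a standard normal). The position update similarly collapses to x + p*coeff0, plus a noise term b*dW*h/(2*m) that the code above drops as higher order in h.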
Here's a method using filters with scipy, which I don't think is compatible with Numba, but is slightly faster than the solution above:
from scipy import signal

# @numba.jit(nopython=True)  # lfilter is not Numba-compatible
def integrador2(y,t,h): #this is the integrator of the function that solves the SDE
    m = 6.6551079E-26 #parameters
    gamma=0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b=np.sqrt(2*m*gamma*T*k_b)
    c=np.sqrt(h)
    h = h * 1.
    coeff0a = 1.
    coeff0b = h/m - gamma*h**2/(2.*m)
    coeff1 = (1. - gamma*h + gamma**2*h**2/2.)
    coeffd = c*b*(1. - gamma*h/2.)
    noise = np.zeros(y.shape[0])
    noise[1:] = np.random.normal(0.,coeffd*1.,y.shape[0]-1)
    noise[0] = y[0,1]
    a = [1, -coeff1]
    b = [1]
    y[1:,1] = signal.lfilter(b,a,noise)[1:]
    a = [1, -coeff0a]
    b = [coeff0b]
    y[1:,0] = signal.lfilter(b,a,y[:,1])[1:]
    return y
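To see why lfilter applies here: with b = [1] and a = [1, -coeff1], scipy.signal.lfilter computes the recurrence out[n] = noise[n] + coeff1*out[n-1], which is exactly the momentum update from the previous version, with the noise pre-scaled by coeffd. A tiny standalone check (the coefficient value is made up purely for illustration):

import numpy as np
from scipy import signal

coeff1 = 0.9                       # made-up value, just for the check
noise = np.random.normal(size=6)

# the recurrence computed by hand
out = np.empty_like(noise)
out[0] = noise[0]
for n in range(1, len(noise)):
    out[n] = coeff1 * out[n-1] + noise[n]

# the same recurrence via the filter
assert np.allclose(out, signal.lfilter([1], [1, -coeff1], noise))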
I've been using numpy.linalg.solve(A,B) to solve a linear equation. In my case: A is about 10,000x10,000 and B is around 10,000x5. If I initialize A and B randomly using:
A = numpy.random.rand(10000,10000)
B = numpy.random.rand(10000,5)
Then the computation time is <3 seconds. However, in my program that needs to solve this equation, the computation time is consistently about 14 seconds. This code is iterated over in a loop, so a speedup of almost 5x is a big deal. Shouldn't the run time of linalg.solve() be roughly constant for arrays of constant size?
Both implementations use float64, and there is definitely enough RAM (128 GB). I tried updating the BLAS libraries on the computer (Ubuntu 16.04, numpy installed with conda); that improved the timings for the randomly generated data, but not for the data in my program.
If anyone is looking for more specifics about the code, this is from line 27 of the .py file found at [1]. This program is doing registration of point clouds.
Any thoughts or help would be greatly appreciated.
[1]https://github.com/siavashk/pycpd/blob/master/pycpd/deformable_registration.py
Edit:
To make this more reproducible, I've written some code that gets me to the suspect np.linalg.solve() line:
import numpy as np
import time
def gaussian_kernel(Y, beta):
    diff = Y[None,:,:] - Y[:,None,:]
    diff = diff**2
    diff = np.sum(diff, axis=2)
    return np.exp(-diff / (2 * beta**2))

def initialize_sigma2(X, Y):
    diff = X[None,:,:] - Y[:,None,:]
    err = diff**2
    return np.sum(err) / (X.shape[0] * Y.shape[0] * X.shape[1])
alpha = 0.1
beta = 3
X = np.random.rand(10000,5) * 100
Y = X + X*0.1
N, D = X.shape
M, _ = Y.shape
G = gaussian_kernel(Y, beta)
sigma2 = initialize_sigma2(X,Y)
TY = Y + np.random.rand(10000,5)
P = np.sum((X[None,:,:] - TY[:,None,:])**2, axis=2)
P /= np.sum(P,axis=0)
P1 = np.sum(P, axis=1)
Np = np.sum(P1)
A = np.dot(np.diag(P1), G) + alpha * sigma2 * np.eye(M)
B = np.dot(P, X) - np.dot(np.diag(P1), Y)
%time W = np.linalg.solve(A,B)
However, this code does not reproduce the time lag. Everything in it is the same as in the current script... except for the actual creation of the X and Y arrays. These should be two point clouds that are roughly close to one another in 3D space, which is why I have created one based on the other.
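Since the randomly generated data is fast and the real data is slow, one thing worth ruling out (a guess, not a diagnosis) is that the real A contains subnormal floats, or reaches LAPACK in a dtype or memory layout that forces a copy. A quick inspection might look like:

import numpy as np

def inspect_solve_input(A, B):
    # print properties that commonly affect np.linalg.solve speed
    print("dtypes:", A.dtype, B.dtype)  # non-float64 dtypes take different code paths
    print("contiguous:", A.flags['C_CONTIGUOUS'], B.flags['C_CONTIGUOUS'])
    print("all finite:", np.isfinite(A).all())
    nz = np.abs(A[A != 0])
    if nz.size:
        # values below ~2.2e-308 are subnormal and can be drastically slower on many CPUs
        print("smallest nonzero magnitude:", nz.min())

inspect_solve_input(A, B)  # A and B from the snippet above
np.show_config()           # shows which BLAS/LAPACK backend numpy is using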
I have a system of equations that I am trying to integrate with integrate.odeint.
Within that system are terms like XiXj[i,j]*XiXj[j,k]/X[j]. It is possible for the denominator to be zero (particularly in the initial condition). However, both terms in the numerator should go to zero there, and the whole term should be interpreted as zero (and values going into the equations are nonnegative).
What I have done to handle this is to replace 1/X[j] with Xinv[j], which is set to 1/X[j] when X[j] > eps for some smallish eps, and to 1 when X[j] is smaller than eps.
However, when I integrate, odeint sits for a very long time calculating derivatives at what appears to be a single value of t. I think this is because it is trying to evaluate things near the discontinuity I have introduced.
Larger values of eps, or assigning a minimum step size, reduce the probability of encountering this issue, but it can still happen.
I'm hoping to figure out how to prevent this from happening. Unfortunately, I haven't been able to force it to happen in a simple example, so I've stripped my example down as far as I can.
Some background on what's being solved: I'm thinking about nodes in a networkx network (so running this code will require networkx). Nodes are susceptible, infected or recovered. I have equations for the derivatives of the susceptible (X) and infected (Y) states of each node. As correlations form across edges, I am also tracking the probability each edge is between S and I (XY) or S and S (XX) nodes.
Here's the system of equations (equations omitted here). I've set g_{ij} to 1, and gamma_i is simply gamma in the code below.
It should be possible to just copy and paste the code below and run it. As it runs, the derivative function prints the time and a simple measure of the derivatives. You'll notice that it pauses for a long time at particular times. Sometimes it starts up again; sometimes I give up waiting.
(Note: the network it solves is random, so if by some unlikely chance you don't see the behavior, run it again. You may want to pickle the graph and reuse it to keep things consistent between tests; a minimal sketch of that follows.)
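For reference, a minimal way to do that pickling:

import pickle
import networkx as nx

G = nx.fast_gnp_random_graph(100, 0.02)
with open('test_graph.pickle', 'wb') as f:
    pickle.dump(G, f)   # save one problematic instance...
with open('test_graph.pickle', 'rb') as f:
    G = pickle.load(f)  # ...and reload it for every test run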
from scipy import integrate
import numpy as np
import networkx as nx

def _dSIR_pair_based_(V, t, G, nodelist, index_of_node, tau, gamma):
    #V is a vector of states
    #t is time
    #G is a networkx Graph
    #nodelist is a list of nodes in G
    #index_of_node is a dict: index_of_node[u] is i such that nodelist[i]=u
    #    {u:i for i, u in enumerate(nodelist)}
    #tau is the transmission rate
    #gamma is the recovery rate
    N = G.order()
    X = V[0:N] #probability node is susceptible
    eps = 0.1**(5)
    #There are places where we divide by X[i], which may == 0. In those cases the
    #numerator is (very) 0, so it's easier to set this up as multiplication by an
    #inverse, with a dummy value where it would be 1/0.
    Xinv = np.array([1/v if v>eps else 1 for v in X])
    Y = V[N:2*N] #probability node is infected
    Yinv = np.array([1/v if v>eps else 1 for v in Y])
    XY = V[2*N: 2*N+N**2] #probability edge exists and is from susceptible to infected
    XX = V[2*N+N**2:] #probability edge exists and is between two susceptibles
    #print(X.shape, Y.shape, XY.shape, XX.shape, N)
    XY.shape = (N,N) #get them into the right shape
    XX.shape = (N,N)
    YX = XY.T #not really needed, but helps keep consistent with the equations as written in the text
    dX = np.zeros(N)
    dY = np.zeros(N)
    dXY = np.zeros((N,N))
    dXX = np.zeros((N,N))
    #I could make the below more efficient, but I think this sequence of for loops
    #is easier to read, or at least understand. I expect it to run quickly
    #regardless. Will avoid (premature) optimization for now.
    for u in nodelist:
        i = index_of_node[u]
        dY[i] += -gamma*Y[i]
        for v in G.neighbors(u):
            j = index_of_node[v]
            dX[i] += -tau*XY[i,j]
            dY[i] += tau*XY[i,j]
            dXY[i,j] += -(tau+gamma)*XY[i,j]
            for w in G.neighbors(u):
                if w == v: #skip these
                    continue
                #so w != v
                k = index_of_node[w]
                dXY[i,j] += tau * XX[i,j] * XY[j,k]*Xinv[j] - tau * YX[k,i] * XY[i,j]*Xinv[i]
                dXX[i,j] += -tau * XX[i,j] * XY[j,k]*Xinv[j] - tau * YX[k,i] * XX[i,j]*Xinv[i]
    dXY.shape = (N**2,1)
    dXX.shape = (N**2,1)
    dV = np.concatenate((dX[:, None], dY[:,None], dXY, dXX), axis=0).T[0]
    print(t, sum(dV))
    return dV

def SIR_pair_based(G, nodelist, Y0, tau, gamma, tmin = 0, tmax = 100, tcount = 1001):
    times = np.linspace(tmin, tmax, tcount)
    X0 = 1-Y0 #if not initially infected, then initially susceptible
    N = len(Y0)
    XY0 = X0[:,None]*Y0[None,:]
    XX0 = X0[:,None]*X0[None,:]
    XY0.shape = (N**2,1)
    XX0.shape = (N**2,1)
    V0 = np.concatenate((X0[:,None], Y0[:,None], XY0, XX0), axis=0).T[0]
    index_of_node = {node:i for i, node in enumerate(nodelist)}
    V = integrate.odeint(_dSIR_pair_based_, V0, times, args = (G, nodelist, index_of_node, tau, gamma))#, mxstep=10)#(times[1]-times[0])/1000)

def test_SIR_pair_based():
    G = nx.fast_gnp_random_graph(100, 0.02)
    nodelist = list(G.nodes())
    Y0 = np.array([1 if node<10 else 0 for node in nodelist])
    SIR_pair_based(G, nodelist, Y0, 1, 0.5, tmax = 10)

test_SIR_pair_based()
If I plot sum(X), sum(Y) and 1-sum(X+Y), I get the following for a larger value of eps = 0.01, when the random graph is such that the run does get through the process (sometimes it still gets stuck): [plot omitted]
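One way to avoid the hard switch at eps entirely (a suggestion, not what the question does) is to replace the piecewise 1/X[i] with a smooth regularization such as X[i]/(X[i]**2 + eps**2). This agrees with 1/X[i] when X[i] >> eps but goes smoothly to 0 as X[i] -> 0, so odeint's step-size control never meets a discontinuous derivative:

import numpy as np

eps = 0.1**5

def smooth_inv(X, eps=eps):
    # smooth stand-in for 1/X: ~1/X for X >> eps, bounded and differentiable near 0
    X = np.asarray(X, dtype=float)
    return X / (X**2 + eps**2)

# drop-in replacement for the list comprehensions in _dSIR_pair_based_:
# Xinv = smooth_inv(X)
# Yinv = smooth_inv(Y)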