I'm new to Python, so have mercy if this question is quite trivial. The problem I'm struggling with is plotting a function consisting of many functions which all differ by their 'mode' numbers, which I called i and k. I think the most efficient way to handle problems like this is a for-loop.
My first guess was to use a code like this:
# Define vectors and lists
chi = sc.jn_zeros(0, order)
om = np.zeros((order, order))
P = [[0]*order]*order
for i, k in itertools.product(range(order), range(order)):
    om[i][k] = c*(np.sqrt(chi[i]/R)**2 + (pi*k/L))
    P[i][k] = lambda t: N * (np.sin(om[i][k] * t / 2)**2) / (2*om[i][k])**2
print()
# sum up list of functions to one function
def p(t):
    return sum(sum(P, []))
# plot
t1 = np.arange(0.0, 5.0 * 10**(-9), 0.001*10**(-9))
plt.figure()
plt.plot(t1, p(t1))
plt.show()
However, I always get the error:
in p return sum(sum(P, []))
TypeError: unsupported operand type(s) for +: 'int' and 'function'
My second guess is to avoid using a list and instead using a for-loop like:
def p(t):
    return 0*t
for i, k in itertools.product(range(order), range(order)):
    om[i][k] = c*(np.sqrt(chi[i]/R)**2 + (pi*k/L))
    def p(t):
        return p(t) + np.sin(om[i][k] * t)**2 / (2*(om[i][k])**2)
print()
t1 = np.arange(0.0, 5.0 * 10**(-9), 0.001*10**(-9))
plt.figure()
plt.plot(t1, p(t1))
plt.show()
Nevertheless the code does not produce a plot. There is only the output:
Process finished with exit code -1073741571 (0xC00000FD)
I haven't found questions about summing more than two functions yet. Hence I would really appreciate it if anyone could refer me to a similar problem, or share some information about the errors occurring in my code. I already checked that the variable vectors (omega, chi, ...) are calculated correctly. Thanks in advance.
There's no way to combine a set of functions like that. You'll just need to run the loops every time. Even if you did combine the functions, the net effect would still be the same.
# Define vectors and lists
chi = sc.jn_zeros(0, order)
om = np.zeros((order, order))
for i, k in itertools.product(range(order), range(order)):
    om[i][k] = c*(np.sqrt(chi[i]/R)**2 + (pi*k/L))
print()
def p(t):
    return sum(
        (np.sin(om[i][k] * t / 2)**2) / (2*om[i][k])**2
        for i, k in itertools.product(range(order), range(order))
    )
# plot
t1 = np.arange(0.0, 5.0 * 10**(-9), 0.001*10**(-9))
pt = [p(t) for t in t1]
plt.figure()
plt.plot(t1, pt)
plt.show()
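Since np.sin broadcasts over arrays, the same double sum can also be evaluated for the whole t1 array at once by giving om an extra time axis. This is just a sketch of the broadcasting idea (p_vec is my name, not part of the answer above):
# broadcast om (shape (order, order)) against t1 (shape (N,)) so both the
# explicit loop over i, k and the loop over t disappear
def p_vec(t):
    t = np.asarray(t)
    terms = np.sin(om[..., None] * t / 2)**2 / (2*om[..., None])**2
    return terms.sum(axis=(0, 1))

pt = p_vec(t1)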
I am trying to solve an ODE in python using the Euler method but I am getting the
"TypeError: 'float' object is not subscriptable" error when I call the function. Here's my code:
##Parameters
g = 9.8 #in m/s^2
l = 0.5 #in m
omega0 = np.sqrt(g/l)
def euler(theta0, w0, deltat, t_end):
    t0 = 0 #in s
    ##Constructing the arrays
    t_arr = np.arange(t0, t_end + deltat, deltat)
    w = np.zeros(len(t_arr)) #angular velocity in rad/s
    theta = np.zeros(len(t_arr))
    ##Setting up our initial conditions
    w[0] = w0
    theta = theta0
    ##Performing the Euler method for both small and large angles
    for i in range(len(t_arr) - 1):
        w[i + 1] = w[i] - ((omega0)**2)*np.sin(theta[i])*deltat
        theta[i + 1] = theta[i] + w[i]*deltat
    return theta
euler(0.07, 0, 0.05, 5)
Output:
TypeError: 'float' object is not subscriptable
What am I doing wrong? Any help will be much appreciated.
The two lines
theta = np.zeros(len(t_arr))
...
theta = theta0
close to each other do not make sense and indicate a misunderstanding of their meaning, because both assign a value to the variable theta. That means the first one has no effect; the second line is the only one that is relevant.
I assume that the variable theta is supposed to hold a numpy array. But what you pass in the final call as the value for the parameter theta0 is a mere float value, which contradicts that.
I guess that you might want to put the theta0 into the first field of the just created array theta:
theta[0] = theta0
Probably then it works.
You meant to set theta[0] = theta0 when you set the initial conditions, but instead you redefined theta to be a float by just writing theta = theta0, instead of keeping it as the np.zeros array you had previously created. Change that and it will work.
The 'not subscriptable' TypeError is not a particularly easy-to-understand way of explaining the error. Read this question if you want more details about what that means. The most up-voted answer is:
It basically means that the object implements the __getitem__() method. In other words, it describes objects that are "containers", meaning they contain other objects. This includes strings, lists, tuples, and dictionaries.
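For example, a quick illustration of the difference (my own snippet, not from the linked answer):
import numpy as np
theta = np.zeros(5)   # a numpy array is subscriptable
theta[0] = 0.07       # fine
theta = 0.07          # now theta is a plain float
theta[0]              # TypeError: 'float' object is not subscriptable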
I have a problem. I'm doing a task for my lessons and I'm doing my best, but the teacher's clues do not seem to help, so I need to find the problem myself.
To begin with, I had to translate the task from my native language to English, so there may be some misunderstandings, as it was hard to explain from a mathematical point of view.
"There is a Mandelbrot set which is created by points defined on surface by complex numbers so as following recurrence equation does not go to infinity:
{ z0 = 0 // zn+1 = zn^2 + c
You have to make two-dimensional array T with a size of 600x600. Each element from that array T[k][l] will represent a complex number ckl = (−2 + k * 0.005, (−1.5 + l * 0.005)ˆi). Next for each of element from T[k][l] calculate first n_max = 100 numbers from series and save that value as n, for |zn| > 2."
Yeah, that was painful. I googled a lot about it, but I could not adapt other Python scripts to solve the task. I ended up using this: LINK, and made this:
import matplotlib.pyplot as plt
k = 600
l = 600
T = [0][0]
for i in range(k):
    for j in range(k):
        z = 0 + 0j
        c = complex(-2 + i * 0.005, -1.5 + j * 0.005)
        for g in range(100):
            z = z * z + c
            if abs(z) > 2.0:
                T = i
                break
plt.figure(dpi=200)
plt.imshow(T, cmap="hot")
plt.show()
And it is really bad, I really don't understand this. Plus I get an error: TypeError: Invalid shape () for image data. I found another solution a couple of weeks ago and submitted it, but it has to be done the way described in the script. Please help, and thanks for your time in advance. If you need any other screenshots of the task, I'll provide them.
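For reference, a minimal sketch of the escape-time computation the quoted task describes might look like the following (my own illustration, using the 600x600 grid and n_max = 100 from the task text; T[k][l] stores the first n with |z_n| > 2):
import numpy as np
import matplotlib.pyplot as plt

n_max = 100
T = np.zeros((600, 600), dtype=int)   # escape iteration for each grid point

for k in range(600):
    for l in range(600):
        c = complex(-2 + k * 0.005, -1.5 + l * 0.005)
        z = 0 + 0j
        for n in range(n_max):
            z = z * z + c
            if abs(z) > 2.0:
                T[k][l] = n   # save the first n for which |z_n| > 2
                break

plt.figure(dpi=200)
plt.imshow(T, cmap="hot")
plt.show()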
The problem is that I would like to be able to integrate the differential equations starting for each point of the grid at once instead of having to loop over the scipy integrator for each coordinate. (I'm sure there's an easy way)
As background, I'm trying to solve for the trajectories of a Couette flow that alternates the direction of the velocity every certain period, which is a well-known dynamical system that produces chaos. I don't think the rest of the code really matters as much as the integration with scipy and my usage of numpy's meshgrid function.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, writers
from scipy.integrate import solve_ivp
start_T = 100
L = 1
V = 1
total_run_time = 10*3
grid_points = 10
T_list = np.arange(start_T, 1, -1)
x = np.linspace(0, L, grid_points)
y = np.linspace(0, L, grid_points)
X, Y = np.meshgrid(x, y)
condition = True
totals = np.zeros((start_T, total_run_time, 2))
alphas = np.zeros(start_T)
i = 0
for T in T_list:
    alphas[i] = L / (V * T)
    solution = np.array([X, Y])
    for steps in range(int(total_run_time/T)):
        t = steps*T
        if condition:
            def eq(t, x):
                return V * np.sin(2 * np.pi * x[1] / L), 0.0
            condition = False
        else:
            def eq(t, x):
                return 0.0, V * np.sin(2 * np.pi * x[1] / L)
            condition = True
        time_steps = np.arange(t, t + T)
        xt = solve_ivp(eq, time_steps, solution)
        solution = np.array([xt.y[0], xt.y[1]])
        totals[i][t: t + T][0] = solution[0]
        totals[i][t: t + T][1] = solution[1]
    i += 1
np.save('alphas.npy', alphas)
np.save('totals.npy', totals)
The error given is :
ValueError: y0 must be 1-dimensional.
It comes from scipy's 'solve_ivp' function, because it doesn't accept the output format of numpy's meshgrid. I know I could run some loops and get around it, but I'm assuming there must be a 'good' way to do it using numpy and scipy. I welcome advice on the rest of the code too.
Yes, you can do that, in several variants. The question remains if it is advisable.
To implement a generally usable ODE integrator, it needs to be abstracted from the models. Most implementations do that by taking the state space to be a flat-array vector space; some allow a vector-space engine to be passed as a parameter, so that structured state spaces can be used. The scipy integrators are not of this type.
So you need to translate the states to flat vectors for the integrator, and back to the structured state for the model.
def encode(X,Y): return np.concatenate([X.flatten(),Y.flatten()])
def decode(U): return U.reshape([2,grid_points,grid_points])
Then you can implement the ODE function as
def eq(t, U):
    X, Y = decode(U)
    Vec = V * np.sin(2 * np.pi * Y / L)
    if int(t/T) % 2 == 0:
        return encode(Vec, np.zeros(Vec.shape))
    else:
        return encode(np.zeros(Vec.shape), Vec)
with initial value
U0 = encode(X,Y)
Then this can be directly integrated over the whole time span.
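The call itself would then be something like the following sketch (the time span and the final reshape are my assumptions based on the variables defined above):
sol = solve_ivp(eq, (0, total_run_time), U0)

# each column of sol.y is one flattened state; decode the last one back to grid shape
X_final, Y_final = decode(sol.y[:, -1])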
Why this might not be such a good idea: thinking of each grid point and its trajectory separately, each trajectory has its own sequence of adapted time steps for the given error level. When integrating all of them simultaneously, the adapted step size is the minimum over all trajectories at the given time. Thus, while an individual trajectory might have only short intervals with very small step sizes amid long intervals with sparse time steps, these intervals can overlap across the ensemble and result in very small step sizes everywhere.
If you go beyond the testing stage, switch to a more compiled solver implementation: odeint is Fortran code with wrappers, so half a solution. JITcode translates to C code and links with the compiled solver behind odeint. Leaving Python, you get sundials, the diffeq module of julia-lang, or boost::odeint.
TL;DR
I don't think you can "integrate the differential equations starting for each point of the grid at once".
MWE
Please try to provide an MWE that reproduces your problem. You said "I don't think the rest of the code really matters", but posting it all without a reduced example makes it harder for people to understand your problem.
Understanding how to talk to the solver
Before answering your question, there are several things that seem to be misunderstood:
by defining time_steps = np.arange(t, t + T) and then calling solve_ivp(eq, time_steps, solution): the second argument of solve_ivp is the time span you want the solution for, i.e. the "start" and "stop" times as a 2-tuple. Here your time_steps is 30-long (for the first loop), so I would probably replace it with (t, t + T). Look for t_span in the doc.
from what I understand, it seems like you want to control each iteration of the numerical resolution: that's not how solve_ivp works. Moreover, I think you want to switch the function "eq" at each iteration. Since you have to pass the "right hand side" of the equation, you would need to wrap this behavior inside a function. It would not work (see right after), but conceptually it would be something like this:
def RHS(t, x):
    # unwrap your variables, condition is like an additional variable of your problem,
    # with a very simple differential equation
    x0, x1, condition = x
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x[1] / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x[1] / L)
    # compute new result for condition
    condition_out = not(condition)
    return [x0_out, x1_out, condition_out]
This would not work because the evolution of condition doesn't satisfy the mathematical properties of continuity/differentiability that the solver requires. So condition is more like a boolean switch that parametrizes the model, and we can use global to control the state of this boolean:
condition = True
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition==1 else 1
    return [x0_out, x1_out]
Finally, and this is the ValueError you mentioned in your post: you define solution = np.array([X, Y]), which is actually the initial condition and is supposed to be "y0: array_like, shape (n,)", where n is the number of variables of the problem (in the case of [x0_out, x1_out] that would be 2)
An MWE for a single initial condition
All that being said, let's start with a simple MWE for a single starting point (0.5, 0.5), so we have a clear view of how to use the solver:
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
# initial conditions for x0, x1, and condition
initial = [0.5, 0.5]
condition = True
# time span
t_span = (0, 100)
# constants
V = 1
L = 1
# define the "model", ie the set of equations of t
def RHS_eq(t, y):
    global condition
    x0, x1 = y
    # compute new results for x0 and x1
    if condition:
        x0_out, x1_out = V * np.sin(2 * np.pi * x1 / L), 0.0
    else:
        x0_out, x1_out = 0.0, V * np.sin(2 * np.pi * x1 / L)
    # update condition
    condition = 0 if condition==1 else 1
    return [x0_out, x1_out]

solution = solve_ivp(RHS_eq,   # right-hand side of the equation(s)
                     t_span,   # time span, a 2-tuple
                     initial,  # initial conditions
                     )

fig, ax = plt.subplots()
ax.plot(solution.t,
        solution.y[0],
        label="x0")
ax.plot(solution.t,
        solution.y[1],
        label="x1")
ax.legend()
Final answer
Now, what we want is to do the exact same thing but for various initial conditions, and from what I understand, we can't: again, quoting the doc:
y0: array_like, shape (n,): Initial state. The solver's initial condition only allows one starting-point vector.
So to answer the initial question : I don't think you can "integrate the differential equations starting for each point of the grid at once".
def testing(min_quadReq, stepsize, max_quadReq, S):
    y = np.arange(min_quadReq, max_quadReq, stepsize)
    print("Y", y)
    I_avg = np.zeros(len(y))
    Q_avg = np.zeros(len(y))
    x = np.arange(0, (len(S)))
    debugger = 0
    for i in range(0, len(y)):
        I = np.array(S * (np.cos(2 * np.pi * y[i] * x)))
        Q = np.array(S * (np.sin(2 * np.pi * y[i] * x)))
        I_avg[i] = np.sum(I, 0)
        Q_avg[i] = np.sum(Q, 0)
        debugger += 1
    D = [I_avg**2 + Q_avg**2]
    maxIndex = np.argmax(D)
    #maxValue = D.max()
    # in python is arctan2(b,a) compared to matlab's atan2(a,b)
    phaseOut = np.arctan2(Q_avg[maxIndex], I_avg[maxIndex])
    # returns the out value and the phase
    out = min_quadReq + ((maxIndex + 1) - 1) * stepsize
    return out, phaseOut
I'm working on a project that uses DSP to process a signal and get out the relevant data. The code above is from the inner function of a quadrature modulation. From what I have seen, this is the part of the code with the biggest potential for optimization. For example, the two sum functions are called about 92k times each, and the quadrature function itself 2696 times. I'm not that familiar with Python, so if anyone has a suggestion for a more efficient way of writing it, or some good documentation, it would be lovely.
The signal S is the input source and it's an array of shape [481][251]. The outer shell of the quadrature is called via quadReq(cavSig[j, :]); just some extra information to show how it's called and how many times.
def randomnumber():
    s = np.random.random_sample((1, 251))
    print(s)
    return s
randomnumber()
Edit: Added some more information
Your loop produces one I_avg value for each element of y. For compactness I could write it as a list comprehension.
In [61]: x=np.arange(4)
In [62]: y=np.arange(0,1,.2)
In [63]: [np.cos(2*np.pi*y[i]*x).sum() for i in range(y.shape[0])]
Out[63]:
[4.0,
-0.30901699437494745,
0.80901699437494767,
0.80901699437494834,
-0.30901699437494756]
But that y[i]*x part is just an outer product of y and x, which can be written with np.outer, or just as easily with broadcasting, y[:,None]*x.
In [64]: np.cos(2*np.pi*y[:,None]*x).sum(axis=1)
Out[64]: array([ 4. , -0.30901699, 0.80901699, 0.80901699, -0.30901699])
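Applied to the original function, the same broadcasting idea might look roughly like this (a sketch, assuming S is a 1-D array as in the loop version):
def testing_vectorized(min_quadReq, stepsize, max_quadReq, S):
    y = np.arange(min_quadReq, max_quadReq, stepsize)
    x = np.arange(len(S))
    phase = 2 * np.pi * y[:, None] * x          # shape (len(y), len(S)) via broadcasting
    I_avg = (S * np.cos(phase)).sum(axis=1)     # one array pass replaces the Python loop
    Q_avg = (S * np.sin(phase)).sum(axis=1)
    D = I_avg**2 + Q_avg**2
    maxIndex = np.argmax(D)
    phaseOut = np.arctan2(Q_avg[maxIndex], I_avg[maxIndex])
    out = min_quadReq + maxIndex * stepsize
    return out, phaseOut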
Get on to an interactive Python session, and play around with expressions like this. The best way to learn is by doing and seeing immediate results. Keep the test expressions and arrays small so you can see immediately what is happening.
Hoping to get some help here with parallelising my Python code. I've been struggling with it for a while and run into several errors whichever way I try; currently the code takes about 2-3 hours to complete a run. The code is given below:
import numpy as np
from scipy.constants import Boltzmann, elementary_charge as kb, e
import multiprocessing
from functools import partial
Tc = 9.2
x = []
g= []
def Delta(T):
    '''
    Delta(T) takes a temperature as an input and calculates a
    temperature dependent variable based on Tc which is defined as a
    global parameter
    '''
    d0 = (np.pi/1.78)*kb*Tc
    D0 = d0*(np.sqrt(1-(T**2/Tc**2)))
    return D0

def element_in_sum(T, n, phi):
    D = Delta(T)
    matsubara_frequency = (np.pi * kb * T) * (2*n + 1)
    factor_d = np.sqrt((D**2 * np.cos(phi/2)**2) + matsubara_frequency**2)
    element = ((2 * D * np.cos(phi/2))/ factor_d) * np.arctan((D * np.sin(phi/2))/factor_d)
    return element

def sum_elements(T, M, phi):
    '''
    sum_elements(T,M,phi) is the most computationally heavy part
    of the calculations, the larger the M value the more accurate the
    results are.
    T: temperature
    M: number of steps for matrix calculation the larger the more accurate the calculation
    phi: The phase of the system can be between 0 - pi
    '''
    X = list(np.arange(0,M,1))
    Y = [element_in_sum(T, n, phi) for n in X]
    return sum(Y)

def KO_1(M, T, phi):
    Iko1Rn = (2 * np.pi * kb * T /e) * sum_elements(T, M, phi)
    return Iko1Rn

def main():
    for j in range(1, 92):
        T = 0.1*j
        for i in range(1, 314):
            phi = 0.01*i
            pool = multiprocessing.Pool()
            result = pool.apply_async(KO_1, args=(26000, T, phi,))
            g.append(result)
        pool.close()
        pool.join()
        A = max(g)
        x.append(A)
        del g[:]
My approach was to try to send the KO_1 function into a multiprocessing pool, but I either get a pickling error or a "too many files open" error. Any help is greatly appreciated, and if multiprocessing is the wrong approach, I would love any guidance.
I haven't tested your code, but you can do several things to improve it.
First of all, don't create arrays unnecessarily. sum_elements creates three array-like objects when it could use just one generator. First, np.arange creates a numpy array, then the list function creates a list object, and then the list comprehension creates another list. The function does 4 times the work it should.
The correct way to implement it (in python3) would be:
def sum_elements(T, M, phi):
    return sum(element_in_sum(T, n, phi) for n in range(0, M, 1))
If you use python2, replace range with xrange.
This tip will probably help you in any python script you'll write.
Also, try to utilize multiprocessing better. It seems what you need to do is to create a multiprocessing.Pool object once, and use the pool.map function.
The main function should look like this:
def job(args):
    i, j = args
    T = 0.1*j
    phi = 0.01*i
    return KO_1(26000, T, phi)

def main():
    pool = multiprocessing.Pool(processes=4) # You can change this number
    x = [max(pool.imap(job, ((i, j) for i in range(1, 314)))) for j in range(1, 92)]
Notice that I used a tuple in order to pass multiple arguments to job.
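If the one-liner is hard to read, the same idea can be written out explicitly (a sketch; the with-statement and the __main__ guard are my additions, not part of the suggestion above):
def main():
    x = []
    with multiprocessing.Pool(processes=4) as pool:   # create the pool once
        for j in range(1, 92):
            # evaluate all phi values for this temperature in parallel
            results = pool.imap(job, ((i, j) for i in range(1, 314)))
            x.append(max(results))
    return x

if __name__ == '__main__':
    x = main()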
This is not an answer to the question, but if I may, I would propose how to speed up the code using simple numpy array operations. Have a look at the following code:
import numpy as np
from scipy.constants import Boltzmann, elementary_charge as kb, e
import time
Tc = 9.2
RAM = 4*1024**2 # 4GB
def Delta(T):
    '''
    Delta(T) takes a temperature as an input and calculates a
    temperature dependent variable based on Tc which is defined as a
    global parameter
    '''
    d0 = (np.pi/1.78)*kb*Tc
    D0 = d0*(np.sqrt(1-(T**2/Tc**2)))
    return D0

def element_in_sum(T, n, phi):
    D = Delta(T)
    matsubara_frequency = (np.pi * kb * T) * (2*n + 1)
    factor_d = np.sqrt((D**2 * np.cos(phi/2)**2) + matsubara_frequency**2)
    element = ((2 * D * np.cos(phi/2))/ factor_d) * np.arctan((D * np.sin(phi/2))/factor_d)
    return element

def KO_1(M, T, phi):
    X = np.arange(M)[:,np.newaxis,np.newaxis]
    sizeX = int((float(RAM) / sum(T.shape))/sum(phi.shape)/8)  # 8 byte
    i0 = 0
    Iko1Rn = 0. * T * phi
    while (i0+sizeX) <= M:
        print "X = %i"%i0
        indices = slice(i0, i0+sizeX)
        Iko1Rn += (2 * np.pi * kb * T /e) * element_in_sum(T, X[indices], phi).sum(0)
        i0 += sizeX
    return Iko1Rn

def main():
    T = np.arange(0.1,9.2,0.1)[:,np.newaxis]
    phi = np.linspace(0,np.pi, 361)
    M = 26000
    result = KO_1(M, T, phi)
    return result, result.max()
T0 = time.time()
r, rmax = main()
print time.time() - T0
It runs in a bit more than 20 seconds on my PC. One has to be careful not to use too much memory; that is why there is still a loop, with a slightly complicated construction, that uses only pieces of X at a time. If enough memory is available, it is not necessary.
One should also note that this is just the first step of speeding things up. Much more improvement could still be reached using e.g. just-in-time compilation or Cython.
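As a rough illustration of the just-in-time idea (my own sketch with numba, not part of the answer above; it rewrites the scalar element_in_sum with D, phi and the k_B*T product passed in explicitly):
import numpy as np
from numba import njit

@njit
def element_in_sum_jit(D, n, phi, kbT):
    # same formula as element_in_sum, compiled to machine code on first call
    matsubara_frequency = np.pi * kbT * (2*n + 1)
    factor_d = np.sqrt((D**2 * np.cos(phi/2)**2) + matsubara_frequency**2)
    return ((2 * D * np.cos(phi/2)) / factor_d) * np.arctan((D * np.sin(phi/2)) / factor_d)

@njit
def sum_elements_jit(D, M, phi, kbT):
    # the heavy summation loop now runs in compiled code instead of Python
    total = 0.0
    for n in range(M):
        total += element_in_sum_jit(D, n, phi, kbT)
    return total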