I have a numpy array (a potential) and I would like to compute the electromagnetic field from it. Right now this is the bottleneck of my program.
I have an array V of dimension (n+2, m+2) and I want to create an array E of dimension (n, m). The calculation for each cell is roughly:
sqrt((Cell_left-Cell_right)^2+(Cell_top-Cell_bottom)^2)
I would like to know if there is a way to apply a function to the whole array at once, to avoid the expensive "for loop" computation :)
Right now my code is:
def set_e(self):
    for i in range(n):
        for j in range(m):
            self.E[i, j] = self.get_local_e(i, j)

def get_local_e(self, i, j):
    return (
        ((self.solution[i + 2, j + 1] - self.solution[i, j + 1]) / unt_y) ** 2
        + ((self.solution[i + 1, j + 2] - self.solution[i + 1, j]) / unt_x) ** 2
    ) ** 0.5
Thanks
For those interested in this issue, it is possible to do the array calculation this way:
def set_e(self):
    y_tmp = np.power((self.solution[:-2, 1:-1] - self.solution[2:, 1:-1]) / unt_y, 2)
    x_tmp = np.power((self.solution[1:-1, :-2] - self.solution[1:-1, 2:]) / unt_x, 2)
    self.E = np.power(x_tmp + y_tmp, 0.5)
It solved my issue
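As a side note, np.hypot computes sqrt(x**2 + y**2) elementwise in a single call, so the same vectorized method can also be written a bit more directly (a minor variant of the code above):

def set_e(self):
    # np.hypot(a, b) == sqrt(a**2 + b**2), applied elementwise
    dy = (self.solution[:-2, 1:-1] - self.solution[2:, 1:-1]) / unt_y
    dx = (self.solution[1:-1, :-2] - self.solution[1:-1, 2:]) / unt_x
    self.E = np.hypot(dx, dy)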
There is something strange about your equation, as each term only computes the gradient along one direction; see y_tmp. The np.gradient function calculates along all rows and columns, which is why the shape of its output is the same as that of the input.
import numpy as np

solution = np.array([[1.0, 2.0, 3.0, 4.0],
                     [3.0, 5.0, 7.0, 9.0],
                     [5.0, 8.0, 11.0, 14.0],
                     [7.0, 11.0, 15.0, 19.0]])

unt_y = 1
unt_x = 1

# gradient along axis 0 (rows) and axis 1 (columns)
g = np.gradient(solution, unt_y, unt_x)
print(g)

a, b = g
c = np.sqrt(a**2 + b**2)  # field magnitude from the two gradient components
print(c)

def set_e():
    y_tmp = np.power((solution[:-2, 1:-1] - solution[2:, 1:-1]) / unt_y, 2)
    print('y_tmp', y_tmp)
    x_tmp = np.power((solution[1:-1, :-2] - solution[1:-1, 2:]) / unt_x, 2)
    E = np.power(x_tmp + y_tmp, 0.5)
    print(E)

set_e()
I am currently new to SymPy, and I am trying to reproduce the Mathematica example in the attached image in Python. My attempt is written below, but it returns an empty list.
import sympy
m, n, D_star, a, j = sympy.symbols('m n D_star a j')
s1 = sympy.Sum(a**(j-1),(j, 1, m-1))
rhs = 6 * sympy.sqrt((D_star * (1 + a)*(n - 1))/2)
expand_expr = sympy.solve(s1 - rhs, m)
temp = sympy.lambdify((a, n, D_star), expand_expr, 'numpy')
n = 100
a = 1.2
D_star = 2.0
ms = temp(1.2, 100, 2.0)
ms
# what I get is an empty list []
# expected answer using Mma FindRoot function is 17.0652
Adding .doit() to expand the sum seems to help. It gives Piecewise((m - 1, Eq(a, 1)), ((a - a**m)/(1 - a), True))/a for the sum in s1.
from sympy import symbols, Eq, Sum, sqrt, solve, lambdify
m, n, j, a, D_star = symbols('m n j a D_star')
s1 = Sum(a**(j - 1), (j, 1, m - 1)).doit()
rhs = 6 * sqrt((D_star * (1 + a) * (n - 1)) / 2)
expand_expr = solve(Eq(s1, rhs), m)
temp = lambdify((a, n, D_star), expand_expr, 'numpy')
n = 100
a = 1.2
D_star = 2.0
ms = temp(1.2, 100, 2.0)
This gives for expand_expr:
[Piecewise((log(a*(3*sqrt(2)*a*sqrt(D_star*(a*n - a + n - 1)) - 3*sqrt(2)*sqrt(D_star*(a*n - a + n - 1)) + 1))/log(a), Ne(a, 1)), (nan, True)),
Piecewise((3*sqrt(2)*a*sqrt(D_star*(a*n - a + n - 1)) + 1, Eq(a, 1)), (nan, True))]
which separates into a != 1 and a == 1.
The result ms is [array(17.06524172), array(nan)], which again separates out the hypothetical a == 1 case in a somewhat awkward way.
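If only the numeric branch is needed, one way to pick it out (a small sketch, given the two-element list above) is to filter out the NaN entry:

import numpy as np
ms = temp(1.2, 100, 2.0)
finite = [float(v) for v in ms if np.isfinite(v)]
print(finite)  # [17.065241...]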
I need to compute this formula (Figure 1):

sum over i = 1..k of binom(k, i) * (-1)^(i+1) * sigma / ((alpha - 1) * i - 1)

It is an approximation of an integral (Figure 2), but that doesn't matter; actually I just want to compute the value of Figure 1 with Python, which is what this topic concerns.
k, alpha and sigma are fixed values within a single computation, usually:
0 <= k <= 99;
alpha = 3;
sigma = 2.
Below is how I am trying to compute this summation in Python:
import decimal
from scipy.special import binom
def residual_time_mean(alpha, sigma=2, k=1):
    prev_prec = decimal.getcontext().prec
    D = decimal.Decimal
    decimal.getcontext().prec = 128
    a = float(alpha)
    s = float(sigma)
    sum1 = 0.0
    sum2 = 0.0
    sumD1 = D(0.0)
    sumD2 = D(0.0)
    for i in range(1, k + 1):
        sum1 += binom(k, i) * ((-1) ** (i + 1)) * (s / ((a - 1) * i - 1.0))
        sum2 += binom(k, i) * ((-1) ** (i + 1)) * s / ((a - 1) * i - 1.0)
        sumD1 += D(binom(k, i)) * (D(-1.0) ** (D(i) + D(1.0))) * (D(s) / ((D(a) - D(1.0)) * D(i) - D(1.0)))
        sumD2 += D(binom(k, i)) * (D(-1.0) ** (D(i) + D(1.0))) * D(s) / ((D(a) - D(1.0)) * D(i) - D(1.0))
    decimal.getcontext().prec = prev_prec
    return sum1, sum2, float(sumD1), float(sumD2)
Running
for k in [0, 1, 2, 4, 8, 20, 50, 99]:
    print("k={} -> {}".format(k, residual_time_mean(3, 2, k)))
the outcome is:
k=0 -> (0.0, 0.0, 0.0, 0.0)
k=1 -> (2.0, 2.0, 2.0, 2.0)
k=2 -> (3.3333333333333335, 3.3333333333333335, 3.3333333333333335, 3.3333333333333335)
k=4 -> (5.314285714285714, 5.314285714285714, 5.314285714285714, 5.314285714285714)
k=8 -> (8.184304584304588, 8.184304584304583, 8.184304584304584, 8.184304584304584)
k=20 -> (13.952692275798238, 13.952692275795965, 13.95269227579524, 13.95269227579524)
k=50 -> (23.134878809207617, 23.13390225415814, 23.134078892910786, 23.134078892910786)
k=99 -> (265412075330.96634, 179529505602.9507, 17667813427.20196, 17667813427.20196)
You can see that starting from k=8 the results differ.
Performing the multiplication before the division makes the results of sum1 and sum2 diverge a lot, for k=99 for instance:
sum1 += binom(k, i) * ((-1) ** (i + 1)) * (s / ((a - 1) * i - 1.0))
sum2 += binom(k, i) * ((-1) ** (i + 1)) * s / ((a - 1) * i - 1.0)
With Decimal this problem doesn't occur, but the result is not correct at all.
Computing the summation on WolframAlpha:
For k = 99
(Here is the link for the computation on WolframAlpha.) It gives 33.3159488(...), while my Python function gives 17667813427.20196. I trust WolframAlpha since it performs symbolic computation; indeed, it also returns the exact value in the form of a fraction.
For other k
Approximation problems (e.g. the value computed by WolframAlpha differing from the one computed in Python by an order of magnitude of 10^0 or more) start occurring from k ≈ 60.
Moreover, computing the integral (Figure 2) with scipy.integrate leads to similar approximation errors.
The question:
Do you have any suggestions for handling this computation? Increasing the decimal precision doesn't seem to help.
I've discovered the problem myself:
Executing scipy.special.binom(99,50) gives
5.044567227278209e+28
while calculating binomial (99,50) on WolframAlpha gives
5.0445672272782096667406248628e+28
There is an absolute difference on the order of 10^12.
That's why the results of the Python function are unreliable for high values of k. So I need to change how the binomial is computed.
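As a quick sanity check (assuming Python 3.8+), the standard library's math.comb returns the binomial coefficient as an exact integer, unlike scipy.special.binom, which returns a rounded float:

import math
from scipy.special import binom

print(math.comb(99, 50))  # 50445672272782096667406248628 (exact integer)
print(binom(99, 50))      # 5.044567227278209e+28 (float, rounded)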
I don't understand why you involve a SciPy function here, or why you are converting to float objects. Really, for this formula, if your inputs are always integers, then simply stick with int and fractions.Fraction and your answers will always be exact. It is easy enough to implement your own binom function:
In [8]: from math import factorial
   ...: def binom(n, k):
   ...:     return (
   ...:         factorial(n)
   ...:         // (factorial(k) * factorial(n - k))
   ...:     )
   ...:
Note, I used integer division: //. And finally, your summation:
In [9]: from fractions import Fraction
   ...: def F(k, a, s):
   ...:     result = Fraction(0, 1)
   ...:     for i in range(1, k + 1):
   ...:         b = binom(k, i) * pow(-1, i + 1)
   ...:         x = Fraction(s, (a - 1) * i - 1)
   ...:         result += b * x
   ...:     return result
   ...:
And the results:
In [10]: F(99, 3, 2)
Out[10]: Fraction(47372953498165579239913571130715220654368322523193013011418, 1421930192463933435386372127473055337225260516259631545875)
This seems correct based on WolframAlpha.
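For a quick numeric check, the exact fraction converts to a float that matches the WolframAlpha value quoted above:

In [11]: float(F(99, 3, 2))   # ≈ 33.3159488...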
Note: if, say, alpha can be a non-integer, you could use decimal.Decimal for arbitrary-precision floating-point operations:
In [17]: from decimal import Decimal
    ...: def F(k, a, s):
    ...:     result = Decimal('0')
    ...:     for i in range(1, k + 1):
    ...:         b = binom(k, i) * pow(-1, i + 1)
    ...:         x = Decimal(s) / Decimal((a - 1) * i - 1)
    ...:         result += b * x
    ...:     return result
    ...:
In [18]: F(99, 3, 2)
Out[18]: Decimal('33.72169506311642881389682714')
Let's up the precision:
In [20]: import decimal
In [21]: decimal.getcontext().prec
Out[21]: 28
In [22]: decimal.getcontext().prec = 100
In [23]: F(99, 3, 2)
Out[23]: Decimal('33.31594880623309576443774363783112352607484321721989160481537847749994248174570647797323789728798446')
Given several vectors:
x1 = [3 4 6]
x2 = [2 8 1]
x3 = [5 5 4]
x4 = [6 2 1]
I want to find weights w1, w2, w3 for each component and get the weighted sum of each vector: y_i = w1*x_i1 + w2*x_i2 + w3*x_i3. For example, y1 = 3*w1 + 4*w2 + 6*w3.
The goal is to minimize the variance of these values (y1, y2, y3, y4).
Note: w1, w2, w3 should be > 0, and w1 + w2 + w3 = 1.
I don't know what kind of problem this is, or how to solve it in Python or MATLAB.
You can start by building a loss function consisting of the variance plus the constraints on the w's. The mean is m = (1/4)*(y1 + y2 + y3 + y4), the variance is (1/4)*((y1-m)^2 + (y2-m)^2 + (y3-m)^2 + (y4-m)^2), and the constraint term is a*(w1 + w2 + w3 - 1), where a is a Lagrange multiplier. This looks to me like convex optimisation with convex constraints, since the loss function is quadratic in the target variables (w1, w2, w3) and the constraints are linear. You can look for projected gradient descent algorithms that respect the provided constraints; take a look at http://www.ifp.illinois.edu/~angelia/L5_exist_optimality.pdf. In general there are no straightforward analytic solutions to this kind of problem.
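For a concrete starting point, here is a minimal sketch using scipy.optimize.minimize with the SLSQP solver, which handles the linear equality constraint and the positivity bounds directly; the vectors are the ones from the question, and this is an illustration rather than the only way to set it up:

import numpy as np
from scipy.optimize import minimize

# Rows of X are the vectors x1..x4 from the question.
X = np.array([[3, 4, 6],
              [2, 8, 1],
              [5, 5, 4],
              [6, 2, 1]], dtype=float)

def variance_of_weighted_sums(w):
    # y_i = X[i] . w; the objective is the variance of the y_i.
    return np.var(X @ w)

constraints = {'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}  # w1 + w2 + w3 = 1
bounds = [(0.0, 1.0)] * X.shape[1]                              # w_i >= 0
w0 = np.full(X.shape[1], 1.0 / X.shape[1])                      # uniform starting point

res = minimize(variance_of_weighted_sums, w0, method='SLSQP',
               bounds=bounds, constraints=constraints)
print(res.x, variance_of_weighted_sums(res.x))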
w = [5, 6, 7]
x1 = [3, 4, 6]
x2 = [2, 8, 1]
x3 = [5, 5, 4]
y1, y2, y3 = 0, 0, 0
for index, i in enumerate(w):
    y1 = y1 + i * x1[index]
    y2 = y2 + i * x2[index]
    y3 = y3 + i * x3[index]
print(min(y1, y2, y3))
I think I understand the purpose of your problem, but if you want to find the smallest value, I hope this can help you.
I just made the values fixed; you can wrap this in a def once you see that it is one way to approach your question.
I don't know much about optimization problems, but I get the idea of gradient descent, so I tried to shift weight from the max score to the min score; my script is below:
# coding: utf-8
import numpy as np

# target scores: 7.72, 7.6, 8.26

def get_max(alist):
    max_score = max(alist)
    idx = alist.index(max_score)
    return max_score, idx

def get_min(alist):
    min_score = min(alist)
    idx = alist.index(min_score)
    return min_score, idx

def get_weighted(alist, aweight):
    res = []
    for i in range(len(alist)):
        res.append(alist[i] * aweight[i])
    return res

def get_sub(list1, list2):
    res = []
    for i in range(len(list1)):
        res.append(list1[i] - list2[i])
    return res

def grad_dec(w, dist, st=0.001):
    # move a small amount of weight from the max-score item to the min-score item
    max_item, max_item_idx = get_max(dist)
    min_item, min_item_idx = get_min(dist)
    w[max_item_idx] = w[max_item_idx] - st
    w[min_item_idx] = w[min_item_idx] + st

def cal_score(w, x):
    score = []
    print('weight', w, x)
    for i in range(len(x)):
        score_i = 0
        for j in range(5):
            score_i = w[j] * x[i][j] + score_i
        score.append(score_i)
    # check whether the variance is small enough
    print('score', score)
    return score

if __name__ == "__main__":
    init_w = [0.2, 0.2, 0.2, 0.2, 0.2]  # one weight per component
    x = [[7.3, 10, 8.3, 8.8, 4.2], [6.8, 8.9, 8.4, 9.7, 4.2], [6.9, 9.9, 9.7, 8.1, 6.7]]
    score = cal_score(init_w, x)
    variance = np.var(score)
    for _ in range(100):
        if variance < 0.012:
            print('ok')
            break
        max_score, idx = get_max(score)
        min_score, idx2 = get_min(score)
        weighted_1 = get_weighted(x[idx], init_w)
        weighted_2 = get_weighted(x[idx2], init_w)
        dist = get_sub(weighted_1, weighted_2)
        # print(max_score, idx, min_score, idx2, dist)
        grad_dec(init_w, dist)
        score = cal_score(init_w, x)
        variance = np.var(score)
        print('variance', variance)
    print(score)
In my tests it really does reduce the variance. I am very glad, but I don't know whether my solution is mathematically sound.
My full solution can be viewed in the PDF.
The trick is to put the vectors x_i as the columns of a matrix X.
The problem then becomes a convex problem, with the constraint that the solution lie on the unit simplex.
I solved it using the Projected Subgradient Method.
I calculated the gradient of the objective function and created a projection onto the unit simplex.
Now all that's needed is to iterate them.
I validated my solution using CVX.
% StackOverflow 44984132
% How to calculate weight to minimize variance?
% Release Notes
% - 1.0.000 08/07/2017
% * First release.
%% General Parameters
run('InitScript.m');
figureIdx = 0; %<! Continue from Question 1
figureCounterSpec = '%04d';
generateFigures = OFF;
%% Simulation Parameters
dimOrder = 3;
numSamples = 4;
mX = randi([1, 10], [dimOrder, numSamples]);
vE = ones([dimOrder, 1]);
%% Solve Using CVX
cvx_begin('quiet')
cvx_precision('best');
variable vW(numSamples)
minimize( (0.5 * sum_square_abs( mX * vW - (1 / numSamples) * (vE.' * mX * vW) * vE )) )
subject to
sum(vW) == 1;
vW >= 0;
cvx_end
disp([' ']);
disp(['CVX Solution - [ ', num2str(vW.'), ' ]']);
%% Solve Using Projected Sub Gradient
numIterations = 20000;
stepSize = 0.001;
simplexRadius = 1; %<! Unit Simplex Radius
stopThr = 1e-6;
hKernelFun = @(vW) ((mX * vW) - ((1 / numSamples) * ((vE.' * mX * vW) * vE)));
hObjFun = @(vW) 0.5 * sum(hKernelFun(vW) .^ 2);
hGradFun = @(vW) (mX.' * hKernelFun(vW)) - ((1 / numSamples) * vE.' * (hKernelFun(vW)) * mX.' * vE);
vW = rand([numSamples, 1]);
vW = vW(:) / sum(vW);
for ii = 1:numIterations
vGradW = hGradFun(vW);
vW = vW - (stepSize * vGradW);
% Projecting onto the Unit Simplex
% sum(vW) == 1, vW >= 0.
vW = ProjectSimplex(vW, simplexRadius, stopThr);
end
disp([' ']);
disp(['Projected Sub Gradient Solution - [ ', num2str(vW.'), ' ]']);
%% Restore Defaults
% set(0, 'DefaultFigureWindowStyle', 'normal');
% set(0, 'DefaultAxesLooseInset', defaultLoosInset);
You can see the full code in StackOverflow Q44984132 (PDF is available as well).
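For readers working in Python, here is a sketch of the projection step (the ProjectSimplex call above), using the standard sort-and-threshold algorithm for Euclidean projection onto the unit simplex; it is not necessarily the exact routine from the linked code:

import numpy as np

def project_simplex(v, radius=1.0):
    # Euclidean projection of v onto {w : w >= 0, sum(w) == radius}.
    u = np.sort(v)[::-1]                  # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)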
I am trying to integrate this function: x^4 - 2x + 1 from 0 to 2
I wrote this program:
def f(x):
    return (x**4) - (2*x) + 1

N = 10
a = 0.0
b = 2.0
h = (b - a) / N
s = f(a) + f(b)
for k in range(1, N/2):
    s += 4 * f(a + (2*k - 1) * h)
for k in range(1, N/(2-1)):
    s1 += f(a + (2*k*h)
M = (s) + (2*s1)
print((1/3.0)*h)*(3)
But I got this error:
File "<ipython-input-29-6107592420b6>", line 17
M=(s)+(2*s1):
^
SyntaxError: invalid syntax
I tried writing it in different forms but I always get an error in M
You forgot a closing parenthesis in your second for loop here: s1 += f(a+(2*k*h). It should be:
s1 += f(a + (2 * k * h)) # <<< here it is
For future reference you might also think about using scipy.integrate.
Look here for some methods which might have better accuracy depending on the nature and resolution of your data set.
A code might look like this:
import scipy.integrate as si

x = [ii / 10. for ii in range(21)]
y = [xi**4 - 2*xi + 1 for xi in x]
tahdah = si.simps(y, x, even='avg')
print(tahdah)
This yields an answer of 4.4, which you can confirm with pencil and paper.
Have you seen the code example of Simpsons Rule on Wikipedia (written in python)? I will repost it here for the benefit of future readers.
#!/usr/bin/env python3
from __future__ import division  # Python 2 compatibility

def simpson(f, a, b, n):
    """Approximates the definite integral of f from a to b by the
    composite Simpson's rule, using n subintervals (with n even)"""
    if n % 2:
        raise ValueError("n must be even (received n=%d)" % n)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n, 2):
        s += 4 * f(a + i * h)
    for i in range(2, n - 1, 2):
        s += 2 * f(a + i * h)
    return s * h / 3

# Demonstrate that the method is exact for polynomials up to 3rd order
print(simpson(lambda x: x**3, 0.0, 10.0, 2))       # 2500.0
print(simpson(lambda x: x**3, 0.0, 10.0, 100000))  # 2500.0
print(simpson(lambda x: x**4, 0.0, 10.0, 2))       # 20833.3333333
print(simpson(lambda x: x**4, 0.0, 10.0, 100000))  # 20000.0
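Applied to the integral from the question, with the same N = 10 as there:

print(simpson(lambda x: x**4 - 2*x + 1, 0.0, 2.0, 10))  # ≈ 4.4004, exact value is 4.4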
I have a von Neumann equation, which looks like:
dr/dt = -i[H, r], where r and H are square matrices of complex numbers, and I need to find r(t) using a Python script.
Are there any standard tools to integrate such equations?
When I was solving another equation with a vector as the initial value, like the Schrödinger equation:
dy/dt = -i H y, I used the scipy.integrate.ode function ('zvode'), but trying to use the same function for the von Neumann equation gives me the following error:
scipy/integrate/_ode.py:869: UserWarning: zvode: Illegal input detected. (See printed message.)
ZVODE-- ZWORK length needed, LENZW (=I1), exceeds LZW (=I2)
self.messages.get(istate, 'Unexpected istate=%s' % istate))
In the above message, I1 = 72 and I2 = 24.
Here is the code:
def integrate(r, t0, t1, dt):
    e = linspace(t0, t1, (t1 - t0) / dt + 10)
    g = linspace(t0, t1, (t1 - t0) / dt + 10)
    u = linspace(t0, t1, (t1 - t0) / dt + 10)
    while r.successful() and r.t < t1:
        r.integrate(r.t + dt)
        e[r.t / dt] = abs(r.y[0][0]) ** 2
        g[r.t / dt] = abs(r.y[1][1]) ** 2
        u[r.t / dt] = abs(r.y[2][2]) ** 2
    return e, g, u

# von Neumann equation
def right_part(t, rho):
    hamiltonian = (h / 2) * array(
        [[delta, omega_s, omega_p / 2.0 * sin(t * w_p)],
         [omega_s, 0.0, 0.0],
         [omega_p / 2.0 * sin(t * w_p), 0.0, 0.0]],
        dtype=complex128)
    return (dot(hamiltonian, rho) - dot(rho, hamiltonian)) / (1j * h)

def create_integrator():
    r = ode(right_part).set_integrator('zvode', method='bdf', with_jacobian=False)
    psi_init = array([[1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]], dtype=complex128)
    t0 = 0
    r.set_initial_value(psi_init, t0)
    return r, t0

def main():
    r, t0 = create_integrator()
    t1 = 10 ** -6
    dt = 10 ** -11
    e, g, u = integrate(r, t0, t1, dt)

main()
I have created a wrapper of scipy.integrate.odeint called odeintw that can handle complex matrix equations such as this. See How to plot the Eigenvalues when solving matrix coupled differential equations in PYTHON? for another question involving a matrix differential equation.
Here's a simplified version of your code that shows how you could use it. (For simplicity, I got rid of most of the constants from your example).
import numpy as np
from odeintw import odeintw

def right_part(rho, t, w_p):
    hamiltonian = (1. / 2) * np.array(
        [[0.1, 0.01, 1.0 / 2.0 * np.sin(t * w_p)],
         [0.01, 0.0, 0.0],
         [1.0 / 2.0 * np.sin(t * w_p), 0.0, 0.0]],
        dtype=np.complex128)
    return (np.dot(hamiltonian, rho) - np.dot(rho, hamiltonian)) / (1j)

psi_init = np.array([[1.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0]], dtype=np.complex128)

t = np.linspace(0, 10, 101)
sol = odeintw(right_part, psi_init, t, args=(0.25,))
sol will be a complex numpy array with shape (101, 3, 3), holding the solution rho(t). The first index is the time index, and the other two indices are the 3x3 matrix.
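For example, to look at the evolution of the diagonal entries (the populations), something like this works (assuming matplotlib is available):

import matplotlib.pyplot as plt

# rho[k, k] is real for a density matrix; .real drops the zero imaginary part.
for k in range(3):
    plt.plot(t, sol[:, k, k].real, label='rho[%d, %d]' % (k, k))
plt.legend()
plt.show()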
QuTiP has some nice integrators for doing just this, using things like Master equations and Lindblad damping terms. QuTiP itself is only a thin wrapper around scipy.odeint, but it makes a lot of the mechanics much nicer, particularly since it has great documentation.
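For completeness, here is a rough sketch of how the simplified example above could look in QuTiP with mesolve; the operators and the w_p value just mirror the odeintw example and are assumptions, and with an empty list of collapse operators this reduces to plain von Neumann evolution:

import numpy as np
import qutip

# Static and driven parts of the Hamiltonian from the simplified example.
H0 = qutip.Qobj(0.5 * np.array([[0.1, 0.01, 0.0],
                                [0.01, 0.0, 0.0],
                                [0.0, 0.0, 0.0]]))
H1 = qutip.Qobj(0.5 * np.array([[0.0, 0.0, 0.5],
                                [0.0, 0.0, 0.0],
                                [0.5, 0.0, 0.0]]))
H = [H0, [H1, 'sin(w_p * t)']]  # QuTiP's list format for H(t) = H0 + sin(w_p*t) * H1

rho0 = qutip.Qobj(np.diag([1.0, 0.0, 0.0]))  # same initial density matrix
tlist = np.linspace(0, 10, 101)

# No collapse operators: pure unitary (von Neumann) evolution.
# Adding c_ops would turn this into a Lindblad master equation.
result = qutip.mesolve(H, rho0, tlist, c_ops=[], e_ops=[], args={'w_p': 0.25})
print(result.states[-1])  # rho at the final time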