How could the energy and cos function be in different shapes? - python

I am trying to write code that calculates an integral from zero to pi, but it gives an error that I do not understand how to fix. Thank you for your time.
import numpy as np
from math import pi,cos
vtheta=np.linspace(0.0,pi,1000)
def my_function(x):
    Energy = np.arange(2.1,300.1,0.1)
    return ((1.0)/(Energy-1+np.cos(x)))
print (my_function(vtheta).sum())

As is pointed out in the top comment:
Energy.shape == (2980,), but x.shape == (1000,)
so reduce the number of elements in Energy or increase the number of elements in np.cos(x).
Since Energy is just a NumPy array, I reduced it to size 1000. In order to fix this they need to be the same size, so this, for example, works:
import numpy as np
from math import pi,cos
vtheta=np.linspace(0.0,pi,1000)
def my_function(x):
    Energy = np.arange(2.1,102.1,0.1) #<-- changed 300.1 to 102.1
    return ((1.0)/(Energy-1+np.cos(x)))
print (my_function(vtheta).sum())
This is the result (with the above):
39.39900748229355
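If the goal is a separate value of the integral for every energy (that is my reading of the question; the answer above only fixes the shape mismatch), the two arrays do not have to be the same length at all: broadcasting with a reshaped Energy produces one row per energy, and np.trapz then integrates each row over theta from 0 to pi. A minimal sketch under that assumption:
import numpy as np

vtheta = np.linspace(0.0, np.pi, 1000)     # integration grid for theta
Energy = np.arange(2.1, 300.1, 0.1)        # keep the original 2980 energies

# Energy[:, None] has shape (2980, 1) and np.cos(vtheta) has shape (1000,),
# so the quotient broadcasts to shape (2980, 1000): one row per energy.
integrand = 1.0 / (Energy[:, None] - 1 + np.cos(vtheta))

# Trapezoidal rule along the theta axis gives one integral per energy.
integrals = np.trapz(integrand, vtheta, axis=1)
print(integrals.shape)   # (2980,)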

Related

Evaluate/calculate math equation with every entry in array

I have an array and an equation.
I want to plug all of the array values into the equation and save the result.
What I've tried so far:
import math
import numpy as np
Z_F0=376.73
Epsilon=3.66
wl_range = [np.arange(0.1, 50, 0.1)]
wl_array = np.array(wl_range)
multiplied_array = 6+(2*math.pi*6)*math.exp(-1*(30.666/wl_array)**0.7528)
print(multiplied_array)
I've also tried multiplied_array = np.vectorize(6+(2*math.pi*6)...), but I get the
only size-1 arrays can be converted to Python scalars
error.
You don't need math; NumPy has pi and exp. math.pi would have worked, as it is just a constant, but the argument of your exponential is a vector, so you need NumPy for that.
There are situations where math is faster (when you don't vectorize), as NumPy has more overhead from checking the dimensions of its input.
import numpy as np
Z_F0=376.73
Epsilon=3.66
wl_range = [np.arange(0.1, 50, 0.1)]
wl_array = np.array(wl_range)
multiplied_array = 6+(2*np.pi*6)*np.exp(-1*(30.666/wl_array)**0.7528)
print(multiplied_array)
output:
[ 6. 6. 6. 6. 6.00000001 6.00000015
6.00000127 6.00000656 6.00002459 6.00007283 6.00018111 6.00039369
6.0007699 6.00138329 6.00231952 6.00367362 6.00554684 6.00804354
6.01126831 6.01532348 6.020307 6.02631085 6.03341981 6.04171066
6.0512517 6.06210249 6.07431393 6.08792837 6.10297998 6.11949516
6.13749297 6.15698571 6.17797939 6.20047433 6.22446568 6.24994393
6.27689541 6.30530278 6.33514546 6.36640006 6.39904075 6.43303963
6.46836702 6.50499181 6.54288169 6.58200341 6.62232297 6.66380586
6.70641722 6.75012196 6.79488494 6.84067111 6.88744554 6.93517359
6.98382097 7.03335377 7.08373859 7.13494255 7.18693331 7.23967918
7.29314908 7.34731259 7.40213999 7.45760222 7.51367097 7.5703186 ...
math.exp() only works with a scalar argument x. If you use numpy.exp() then the equation should work.
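If you would rather keep math.exp, np.vectorize can also work, but it has to wrap a function of a scalar rather than an already-evaluated expression, which is why the attempt in the question failed. A small sketch of that alternative, reusing the constants from the question with a plain 1-D wl_array (this is usually slower than the pure-NumPy version above):
import math
import numpy as np

wl_array = np.arange(0.1, 50, 0.1)   # 1-D array, without the extra list wrapping

# np.vectorize needs a callable that works on one scalar at a time.
def scalar_formula(wl):
    return 6 + (2 * math.pi * 6) * math.exp(-1 * (30.666 / wl) ** 0.7528)

multiplied_array = np.vectorize(scalar_formula)(wl_array)
print(multiplied_array)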

How to code an integration formula using Python

I have an integration equation to calculate the key rate and need to convert it into Python.
The equation to calculate the key rate is given by:
where R(n) is:
and p(n)dn is:
The key rate should be plotted like this:
I have successfully plotted the static model of the graph using the following code:
import numpy as np
import math
from math import pi,e,log
import matplotlib.pyplot as plt
n1=np.arange(10, 55, 1)
n=10**(-n1/10)
Y0=1*(10**-5)
nd=0.25
ed=0.03
nsys=nd*n
QBER=((1/2*Y0)+(ed*nsys))/(Y0+nsys)
H2=-QBER*np.log2(QBER)-(1-QBER)*np.log2(1-QBER)
Rsp=np.log10((Y0+nsys)*(1-(2*H2)))
print (Rsp)
plt.plot(n1,Rsp)
plt.xlabel('Loss (dB)')
plt.ylabel('log10(Rate)')
plt.show()
However, I failed to plot the R^ratewise model. This is my code:
import numpy as np
import matplotlib.pyplot as plt
def h2(x):
    return -x*np.log2(x)-(1-x)*np.log2(1-x)
e0=0.5
ed=0.03
Y0=1e-5
nd=0.25
nt=np.linspace(0.1,0.00001,1000)
y=np.zeros(np.size(nt))
Rate=np.zeros(np.size(nt))
eta_0=0.0015
for (i,eta) in enumerate(nt):
    nsys=eta*nd
    sigma=0.9
    y[i]=1/(eta*sigma*np.sqrt(2*np.pi))*np.exp(-(np.log(eta/eta_0)+(1/2*sigma*sigma))**2/(2*sigma*sigma))
    Rate[i]=(max(0.0,(Y0+nsys)*(1-2*h2(min(0.5,(e0*Y0+ed*nsys)/(Y0+nsys))))))*y[i]
plt.plot(nt,np.log10(Rate))
plt.xlabel('eta')
plt.ylabel('Rate')
plt.show()
I hope someone can help me code the key rate with the integration over p(n)dn as stated above. This is the paper for reference:
key rate
Thank you.
I copied & ran your second code block as-is, and it generated a plot. Is that what you wanted?
Using y as the p(n) in the equation, and the Rsp as the R(n), you should be able to use
NumPy's trapz function
to approximate the integral from the sampled p(n) and R(n):
n = np.linspace(0, 1, no_of_samples)
# ...generate y & Rst from n...
R_rate = np.trapz(y * Rst, n)
However, you'll have to change your code to sample y & Rst using the same n, spanning from 0 to 1.
P.S. there's no need for the loop in your second code block; it can be condensed by removing the i's, swapping eta for nt, and using NumPy's minimum and maximum functions, like so:
nsys=nt*nd
sigma=0.9
y=1/(nt*sigma*np.sqrt(2*np.pi))*np.exp(-(np.log(nt/eta_0)+(1/2*sigma*sigma))**2/(2*sigma*sigma))
Rate=(np.maximum(0.0,(Y0+nsys)*(1-2*h2(np.minimum(0.5,(e0*Y0+ed*nsys)/(Y0+nsys))))))*y
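Putting the two parts together, the vectorized y and Rate plus np.trapz, the integrated rate could be approximated as below. This is only a sketch: the parameter values and the p(eta) expression are taken from the question, and treating the sampled interval 1e-5 to 0.1 as the integration range is an assumption, not something from the paper.
import numpy as np

def h2(x):
    return -x*np.log2(x)-(1-x)*np.log2(1-x)

e0, ed, Y0, nd = 0.5, 0.03, 1e-5, 0.25
eta_0, sigma = 0.0015, 0.9

# Sample eta in increasing order so np.trapz returns a positive value.
nt = np.linspace(1e-5, 0.1, 1000)
nsys = nt*nd

# p(eta): the weight from the question.
y = 1/(nt*sigma*np.sqrt(2*np.pi))*np.exp(-(np.log(nt/eta_0)+(1/2*sigma*sigma))**2/(2*sigma*sigma))

# R(eta): the clipped rate expression from the question.
Rate = np.maximum(0.0, (Y0+nsys)*(1-2*h2(np.minimum(0.5, (e0*Y0+ed*nsys)/(Y0+nsys)))))

# Trapezoidal approximation of the integral of R(eta)*p(eta) d(eta).
R_rate = np.trapz(Rate*y, nt)
print(R_rate)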

scipy.optimize gets trapped in local minima. What can I do?

from numpy import *; from scipy.optimize import *; from math import *
def f(X):
    x=X[0]; y=X[1]
    return x**4-3.5*x**3-2*x**2+12*x+y**2-2*y
bnds = ((1,5), (0, 2))
min_test = minimize(f,[1,0.1], bounds = bnds)
print(min_test.x)
My function f(X) has a local minimum at x=2.557, y=1 which I should be able to find.
The code shown above only gives a result where x=1. I have tried different tolerances and all three methods: L-BFGS-B, TNC and SLSQP.
This is the thread I have been looking at so far:
Scipy.optimize: how to restrict argument values
How can I fix this?
I am using Spyder(Python 3.6).
You just encountered the problem with local optimization: it strongly depends on the start (initial) values you pass in. If you supply [2, 1] it will find the correct minimum.
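A quick check of that claim, using the setup from the question with explicit imports instead of star imports:
from scipy.optimize import minimize

def f(X):
    x, y = X
    return x**4 - 3.5*x**3 - 2*x**2 + 12*x + y**2 - 2*y

bnds = ((1, 5), (0, 2))

# Start near the interior minimum instead of at the x = 1 boundary.
min_test = minimize(f, [2, 1], bounds=bnds)
print(min_test.x, min_test.fun)   # lands near x = 2.557, y = 1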
Common solutions are:
use your optimization in a loop with random starting points inside your boundaries
import numpy as np
from scipy.optimize import minimize
def f(X):
    x=X[0]; y=X[1]
    return x**4-3.5*x**3-2*x**2+12*x+y**2-2*y
bnds = ((1,3), (0, 2))
for i in range(100):
    x_init = np.random.uniform(low=bnds[0][0], high=bnds[0][1])
    y_init = np.random.uniform(low=bnds[1][0], high=bnds[1][1])
    min_test = minimize(f,[x_init, y_init], bounds = bnds)
    print(min_test.x, min_test.fun)
use an algorithm that can break free of local minima; I can recommend scipy's basinhopping()
use a global optimization algorithm and use its result as the initial value for a local algorithm. Recommendations are NLopt's DIRECT or the MADS algorithms (e.g. NOMAD). There is also another one in scipy, shgo, that I have not tried yet (a minimal sketch of calling it follows below).
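For completeness, calling shgo on this problem might look like the sketch below. Treat it as a sketch only, since (as said above) the original answer had not tried shgo on this function:
from scipy.optimize import shgo

def f(X):
    x, y = X
    return x**4 - 3.5*x**3 - 2*x**2 + 12*x + y**2 - 2*y

bnds = ((1, 5), (0, 2))

# shgo samples the whole box and then refines the candidate minima locally.
res = shgo(f, bounds=bnds)
print(res.x, res.fun)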
Try scipy.optimize.basinhopping. It simply repeats your minimize procedure multiple times from perturbed starting points and collects the local minima it finds; the smallest of these is taken as the global minimum.
minimizer_kwargs = {"method": "L-BFGS-B"}
res=optimize.basinhopping(nethedge,guess,niter=100,minimizer_kwargs=minimizer_kwargs)
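Adapted to the f and bounds from this question (nethedge and guess in the snippet above are placeholders from the answerer's own problem), a runnable version might look like:
from scipy.optimize import basinhopping

def f(X):
    x, y = X
    return x**4 - 3.5*x**3 - 2*x**2 + 12*x + y**2 - 2*y

bnds = ((1, 5), (0, 2))

# Each basin-hopping iteration perturbs the point, then runs the bounded local minimizer.
minimizer_kwargs = {"method": "L-BFGS-B", "bounds": bnds}
res = basinhopping(f, [1, 0.1], niter=100, minimizer_kwargs=minimizer_kwargs)
print(res.x, res.fun)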

Python fmin(find minimum) for a vector function

I would like to find the minimum of the 3D-Var cost function defined as:
J(x) = (x - x_b) B^{-1} (x - x_b)^T + (y - Hx) R^{-1} (y - Hx)^T
with B, H, R, x_b, y given.
I would like to find argmin(J(x)). However, it seems fmin in Python does not work (the function J itself works correctly).
Here is my code:
import numpy as np
from scipy.optimize import fmin
import math
def dvar_3(x):
    B=np.eye(5)
    H=np.ones((3,5))
    R=np.eye(3)
    xb=np.ones(5)
    Y=np.ones(3)
    Y.shape=(Y.size,1)
    xb.shape=(xb.size,1)
    value=np.dot(np.dot(np.transpose(x-xb),(np.linalg.inv(B))),(x-xb)) +np.dot(np.dot(np.transpose(Y-np.dot(H,x)),(np.linalg.inv(R))),(Y-np.dot(H,x)))
    return value[0][0]
ini=np.ones(5) #
ini.shape=(ini.size,1) #change initial to vertical vector
fmin(dvar_3,ini) #start at initial vector
I receive this error:
ValueError: operands could not be broadcast together with shapes (5,5) (3,3)
How can I solve this problem? Thank you in advance.
Reshape the argument x inside the function dvar_3: fmin() works with a one-dimensional array (it flattens the initial guess), so the x passed to your function has shape (5,) and has to be turned back into a column vector.
import numpy as np
from scipy.optimize import fmin
import math
def dvar_3(x):
    x = x[:, None]   # turn the flat (5,) array back into a (5,1) column vector
    B=np.eye(5)
    H=np.ones((3,5))
    R=np.eye(3)
    xb=np.ones(5)
    Y=np.ones(3)
    Y.shape=(Y.size,1)
    xb.shape=(xb.size,1)
    value=np.dot(np.dot(np.transpose(x-xb),(np.linalg.inv(B))),(x-xb)) +np.dot(np.dot(np.transpose(Y-np.dot(H,x)),(np.linalg.inv(R))),(Y-np.dot(H,x)))
    return value[0][0]
ini=np.ones(5)
fmin(dvar_3,ini) #start at the initial vector
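To actually see the optimum rather than discarding fmin's return value, the call above can be captured; full_output=True additionally returns the objective value and some bookkeeping (the variable names here are just illustrative, reusing dvar_3 and ini from the snippet above):
# Capture fmin's result instead of discarding it.
xopt, fopt, n_iter, n_calls, warnflag = fmin(dvar_3, ini, full_output=True)
print("argmin(J):", xopt)
print("J at the minimum:", fopt)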

How to use scipy.optimize.fmin well

Hello, I'm trying to use scipy.optimize.fmin to minimize a function, but things aren't going well: my computation seems to diverge instead of converging and I get an error. I tried to set a tolerance but it is not working.
Here is my code (Main program):
import sys,os
import numpy as np
from math import exp
import scipy
from scipy.optimize import fmin
from carlo import *
A=real()
x_r=0.11245
x_i=0.14587
#C=A.minim
part_real=0.532
part_imag=1.2
R_0 = fmin(A.minim,[part_real,part_imag],xtol=0.0001)
And the class:
import sys,os
import numpy as np
import random, math
import matplotlib.pyplot as plt
import cmath
#import pdb
#pdb.set_trace()
class real:
    def __init__(self):
        self.nmodes = 4
        self.L_ch = 1
        self.w = 2
    def minim(self,p):
        x_r=p[0]
        x_i=p[1]
        x=complex(x_r,x_i)
        self.a=complex(3,4)*(3*np.exp(1j*self.L_ch))
        self.T=np.array([[0.0,2.0*self.a],[(0.00645+(x)**2), 4.3*x**2]])
        self.Id=np.array([[1,0],[0,1]])
        self.disp=np.linalg.det(self.T-self.Id)
        print self.disp
        return self.disp
The error is:
(-2.16124712985-8.13819476595j)
/usr/local/lib/python2.7/site-packages/scipy/optimize/optimize.py:438: ComplexWarning: Casting complex values to real discards the imaginary part
fsim[0] = func(x0)
(-1.85751684826-8.95377303768j)
/usr/local/lib/python2.7/site-packages/scipy/optimize/optimize.py:450: ComplexWarning: Casting complex values to real discards the imaginary part
fsim[k + 1] = f
(-2.79592712985-8.13819476595j)
(-3.08484130014-7.36240080015j)
(-3.68788935914-6.62639114029j)
/usr/local/lib/python2.7/site-packages/scipy/optimize/optimize.py:475: ComplexWarning: Casting complex values to real discards the imaginary part
fsim[-1] = fxe
(-2.62046851255e+87-1.45013007728e+88j)
(-4.037931857e+87-2.2345341712e+88j)
(-7.45017628087e+87-4.12282179854e+88j)
(-1.14801242605e+88-6.35293780534e+88j)
(-2.11813751435e+88-1.17214723347e+89j)
Warning: Maximum number of function evaluations has been exceeded.
Actually, I don't understand why the computation is diverging; maybe I have to use something other than fmin for minimizing?
Does someone have an idea?
Thank you very much.
Try to optimize the absolute value instead of the complex value. That gave a decent result for me.
f = lambda x: abs(A.minim(x))
R_0 = fmin(f,[part_real,part_imag],xtol=0.0001)
I guess fmin doesn't work well with complex values: it casts the objective to real and silently drops the imaginary part (hence the ComplexWarning), whereas abs(...) gives a real objective that is zero exactly when det(T - Id) = 0.
