I'm trying to plot a multi-equation ODE system from this paper and I'm having trouble figuring out how to express the dependencies between the functions. For example, equation 3.1 uses both z1 and z2, as well as its own previous state x. I've checked the SciPy documentation but it doesn't seem to cover the functionality I'm trying to achieve.
How do I express, e.g., equation 3.1 with SciPy?
You have to implement the whole system as a single ODE function:
def sys(t, u):
    x, y, z1, z2 = u
    v1 = z1/(p1*z2) - 1
    v2 = 1 - y/p6
    return p0*v1, p2*(x/p3 - 1) + p4*v1, p7*z1*v2, (p7 - p5*z2/z1)*z2*v2
#integrate to jump
sol1 = solve_ivp(sys,(t0,td),u0,**options)
# implement the jump
x,y,z1,z2 = sol1.y[:,-1]
fac = (2 - λ)/(2 + λ)
x*=fac; y*=fac
ud = [x,y,z1,z2]
# restart integration
sol2 = solve_ivp(sys,(td,tf),ud,**options)
Here options is a dictionary with the method, tolerances, etc.; see the solve_ivp documentation. Then evaluate the two solutions.
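For instance (the method and tolerance choices here are only an illustration, not something the problem prescribes; dense_output=True is what lets you evaluate the solution at arbitrary times afterwards):
options = dict(method="Radau", rtol=1e-8, atol=1e-10, dense_output=True)
# ... run the two solve_ivp calls above, then sample the first piece for plotting:
t_plot = np.linspace(t0, td, 400)
u_plot = sol1.sol(t_plot)   # shape (4, 400); rows are x, y, z1, z2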
Thank you Lutz Lehmann, I solved it based on your answer.
On a side note, equation 3.3 isn't redundant, I should have clarified that it's a term for the amount of pure silver in each coin. So it's an independent value (weird way of defining it in the paper, I agree).
Here is the solution I ended up using:
def dSdt(t, S):
    x, y, z1, z2, z1_z2 = S
    xn = p0*((z1_z2/p1) - 1)
    yn = p2*((x/p3) - 1) + p4*((z1_z2/p1) - 1)
    z1n = p7*z1*(1 - (y/p6))
    z2n = z2*(1 - (y/p6))*(p7 - p5*z1_z2)
    z1_z2n = p5*(1 - (y/p6))
    return [xn, yn, z1n, z2n, z1_z2n]
# Before td
t1 = np.linspace(t0,395,abs(t0)+395,dtype=int)
S0_1 = [x_t0,y_t0,z1_t0,z2_t0,z1_div_z2_t0]
sol1 = odeint(dSdt,y0=S0_1,t=t1,tfirst=True)
# Handle division at td
sol1[-1][0] *= λ
sol1[-1][1] *= λ
S0_2 = sol1.T[:,-1]
# After td
t2 = np.linspace(start=396,stop=500,num=500-396,dtype=int)
sol2 = odeint(dSdt,y0=S0_2,t=t2,tfirst=True)
sol = np.concatenate((sol1,sol2))
t = np.concatenate((t1,t2))
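To plot the stitched-together solution, a minimal matplotlib sketch (assuming matplotlib is installed; the labels simply follow the state order used in dSdt) would be:
import matplotlib.pyplot as plt

labels = ["x", "y", "z1", "z2", "z1/z2"]
for i, lab in enumerate(labels):
    plt.plot(t, sol[:, i], label=lab)
plt.axvline(395, linestyle="--", color="grey")  # the division at td
plt.xlabel("t")
plt.legend()
plt.show()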
I was wondering if there is a way to define a function that is the derivative of another function. I'm new to Python so I don't know much; I tried looking up stuff that might be similar but nothing has worked so far. This is what I have for my code right now.
import sympy as sp
import math
x = sp.Symbol('x')
W = 15 #kN/m
E = 70 # Gpa
I = 52.9*10**(-6) #m**4
L = 3 #m
e = 0.01
xi = 1.8
y = 9
def f(x):
    return ( ( y*3*(math.pi**4)*E*I/(W*L) ) - ( 48*(L**3)*math.cos(math.pi*x/(2*L)) ) + ( 48*(L**3) ) + ( (math.pi**3)*(x**3) ) )/(3*L*(math.pi**3))**(1/2)

def derv(f,x):
    return sp.diff(f)
print (derv(f,x))
Also, I don't understand what x = sp.Symbol('x') does, so if someone could explain that, that would be awesome.
Any help is appreciated.
You are conflating two different things: Python functions like f, and math functions, which you can express with sympy like y = π*x/3. sympy lets you stay in the world of symbolic math by defining variables like x = sp.Symbol('x'); when f is written in terms of such a symbol, calling f(x) produces a symbolic expression.
You can use sympy to find the derivative of the symbolic function returned by f() but you need to define it with the sympy versions of the cos() function (and sp.pi if you want to keep it symbolic).
For example:
import sympy as sp
x = sp.Symbol('x')
W = 15 #kN/m
E = 70 # Gpa
I = 52.9*10**(-6) #m**4
L = 3 #m
e = 0.01
xi = 1.8
y = 9
def f(x):
    return ( ( y*3*(sp.pi**4)*E*I/(W*L) ) - ( 48*(L**3)*sp.cos(sp.pi*x/(2*L)) ) + ( 48*(L**3) ) + ( (sp.pi**3)*(x**3) ) )/(3*L*(sp.pi**3))**(1/2)

def derv(f,x):
    return sp.diff(f(x))  # pass the result of f(x), which is a sympy expression
derv(f,x)
You've programmed the function. It appears to be a simple function of two independent variables x and y.
It could be that x = sp.Symbol('x') is how SymPy defines the independent variable x. I don't know whether you also need one for y.
You know enough about calculus to know that you need a derivative. Do you know how to differentiate a function of a single independent variable? It helps to know the answer before you start coding.
( ( y*3*(math.pi**4)*E*I/(W*L) ) - ( 48*(L**3)*math.cos(math.pi*x/(2*L)) ) + ( 48*(L**3) ) + ( (math.pi**3)*(x**3) ) )/(3*L*(math.pi**3))**(1/2)
Looks simple.
There's only one term with y in it. The partial derivative w.r.t. y leaves you with 3*(math.pi**4)*E*I/(W*L).
There's only one term with Cx**3 in it. That's easy to differentiate: 3C*x**2.
What's so hard? What's the problem?
In traditional programming, each function you write is translated to a series of commands that are sent to the CPU, and the result of the calculation is returned. Symbolic manipulation, like what we humans do with algebra and calculus, therefore doesn't make any sense to the computer. Sympy gets around this by overriding Python's normal arithmetic operators, allowing you to generate algebraic expressions that can be manipulated similarly to how we humans do math. That's what sp.Symbol('x') is doing: providing you with a symbolic variable you can work with (you're also naming it 'x' within sympy).
If you want to evaluate your derivative numerically, call evalf with a subs dictionary giving the value you want to assign to x.
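For example, with the derv() function from the answer above (the evaluation point 1.8 is arbitrary, only for illustration):
dfdx = derv(f, x)                  # the symbolic derivative returned by sp.diff
print(dfdx)                        # the expression itself
print(dfdx.evalf(subs={x: 1.8}))   # numerical value of the derivative at x = 1.8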
While going through an article, I came across a situation that requires solving the polynomial equation below.
For reference, this is the equation:
15446 = 537.06/(1+r) + 612.25/(1+r)**2 + 697.86/(1+r)**3 + 795.67/(1+r)**4 + 907.07/(1+r)**5
These are discounted cash flow values from a time series, which we use in finance to get the present value of future cash flows after applying the appropriate discount rate.
So from the above equation, I need to calculate the variable r in a Python programming environment. I do hope there is some library which can be used to solve such equations.
To solve this, I thought of using the numpy.npv API.
import numpy as np
presentValue = 15446
futureValueList = [537.06, 612.25, 697.86,795.67, 907.07]
# I know it is not possible to get r from below. I just put
# it like this to describe my intention.
presentValue = np.npv(r, futureValueList)
print(r)
You can multiply your NPV formula by the highest power of (1+r) and then find the roots of the resulting polynomial with polyroots (just take the only real root and disregard the complex ones):
import numpy as np
presentValue = 15446
futureValueList = [537.06, 612.25, 697.86,795.67, 907.07]
roots = np.polynomial.polynomial.polyroots(futureValueList[::-1]+[-presentValue])
r = roots[np.argwhere(roots.imag==0)].real[0,0] - 1
print(r)
#-0.3332398877886278
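You can sanity-check that root by plugging it back into the cash-flow sum (plain Python, using only the lists already defined):
check = sum(cf/(1 + r)**k for k, cf in enumerate(futureValueList, start=1))
print(check)   # should come out very close to presentValue = 15446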
As it turns out, the formula given is incomplete; see p. 14 of the linked article. The correct equation can be solved with standard root-finding procedures, e.g. optimize.root with a sensible initial guess:
from scipy import optimize
def fun(r):
    r1 = 1 + r
    return 537.06/r1 + 612.25/r1**2 + 697.86/r1**3 + 795.67/r1**4 + 907.07/r1**5 * (1 + 1.0676/(r - .0676)) - 15446
roots = optimize.root(fun, [.1])
print(roots.x if roots.success else roots.message)
#[0.11177762]
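Since fun() is a scalar function of a single variable, a bracketing root finder is an alternative; the bracket below is my assumption that the root lies between 0.07 (just above the pole at r = 0.0676) and 1:
r = optimize.brentq(fun, 0.07, 1.0)
print(r)   # ~0.1118, consistent with optimize.root above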
I am writing a scientific code in python to calculate the energy of a system.
Here is my function: cte1, cte2, cte3, cte4 are constants computed beforehand; pii is np.pi (also precomputed, since looking it up inside the loop slows things down). I calculate the 3 components of the total energy, then sum them up.
def calc_energy(diam):
    Energy1 = cte2*((pii*diam**2/4)*t)
    Energy2 = cte4*(pii*diam)*t
    d = diam/t
    u = np.sqrt((d)**2/(1+d**2))
    cc = u**2
    E = sp.special.ellipe(cc)
    K = sp.special.ellipk(cc)
    Id = cte3*d*(d**2+(1-d**2)*E/u-K/u)
    Energy3 = cte*t**3*Id
    total_energy = Energy1+Energy2+Energy3
    return (total_energy, Energy1)
My first idea was to simply loop over all values of the diameter :
start_diam, stop_diam, step_diam = 1e-10, 500e-6, 1e-9 #Diametre
diametres = np.arange(start_diam,stop_diam,step_diam)
totalEnergy, Energy1 = [], []
for d in diametres:
    res1, res2 = calc_energy(d)
    totalEnergy.append(res1)
    Energy1.append(res2)
In an attempt to speed up calculations, I decided to use numpy to vectorize, as shown below :
diams = diametres.reshape(-1,1) #If not reshaped, calculations won't run
r1 = np.apply_along_axis(calc_energy,1,diams)
However, the "vectorized" solution does not work properly: when timing it, I get 5 seconds for the first solution and 18 seconds for the second one.
I guess I'm doing something the wrong way but can't figure out what.
With your current approach, you're applying a Python function to each element of your array, which carries additional overhead. Instead, you can pass the whole array to your function and get an array of answers back. Your existing function appears to work fine without any modification.
import numpy as np
from scipy import special
cte = 2
cte1 = 2
cte2 = 2
cte3 = 2
cte4 = 2
pii = np.pi
t = 2
def calc_energy(diam):
    Energy1 = cte2*((pii*diam**2/4)*t)
    Energy2 = cte4*(pii*diam)*t
    d = diam/t
    u = np.sqrt((d)**2/(1+d**2))
    cc = u**2
    E = special.ellipe(cc)
    K = special.ellipk(cc)
    Id = cte3*d*(d**2+(1-d**2)*E/u-K/u)
    Energy3 = cte*t**3*Id
    total_energy = Energy1+Energy2+Energy3
    return (total_energy, Energy1)
start_diam, stop_diam, step_diam = 1e-10, 500e-6, 1e-9 #Diametre
diametres = np.arange(start_diam,stop_diam,step_diam)
a = calc_energy(diametres) # Pass the whole array
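calc_energy then returns a tuple of two arrays, one entry per diameter, which you can unpack directly instead of appending inside a loop:
total_energy, energy1 = a
print(total_energy.shape, energy1.shape)   # one value per entry of diametres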
As I'm really struggling to get from R code to Python code, I would like to ask for some help. The code I want to use was provided to me on the mathematics forum of Stack Exchange.
https://math.stackexchange.com/questions/2205573/curve-fitting-on-dataset
I do understand what is going on, but I'm really having a hard time translating the R code, as I have never seen any of it before. I have written the function to return the sum of squares, but I'm stuck on how to use something similar to the optim function. I also don't really like the guesswork for the initial values; I would prefer to run and re-run some kind of optim function until I get the wanted result, because my needs for a nearly perfect curve fit are really high.
def model (par,x):
    n = len(x)
    res = []
    for i in range(1,n):
        A0 = par[3] + (par[4]-par[1])*par[6] + (par[5]-par[2])*par[6]**2
        if(x[i] == par[6]):
            res[i] = A0 + par[1]*x[i] + par[2]*x[i]**2
        else:
            res[i] = par[3] + par[4]*x[i] + par[5]*x[i]**2
    return res
This is my model function...
def sum_squares (par, x, y):
    ss = sum((y-model(par,x))^2)
    return ss
And this is the sum of squares
But I have no idea on how to convert this:
#I found these initial values with a few minutes of guess and check.
par0 <- c(7,-1,-395,70,-2.3,10)
sol <- optim(par= par0, fn=sqerror, x=x, y=y)$par
To Python code...
I wrote an open source Python package (BSD license) that has a genetic algorithm (Differential Evolution) front end to the scipy Levenberg-Marquardt solver, it functions similarly to what you describe in your question. The github URL is:
https://github.com/zunzun/pyeq3
It comes with a "user-defined function" example that's fairly easy to use:
https://github.com/zunzun/pyeq3/blob/master/Examples/Simple/FitUserDefinedFunction_2D.py
along with command-line, GUI, cluster, parallel, and web-based examples. You can install the package with "pip3 install pyeq3" to see if it might suit your needs.
Seems like I have been able to fix the problem.
def model (par,x):
    n = len(x)
    res = np.array([])
    for i in range(0,n):
        A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
        if(x[i] <= par[5]):
            res = np.append(res, A0 + par[0]*x[i] + par[1]*x[i]**2)
        else:
            res = np.append(res, par[2] + par[3]*x[i] + par[4]*x[i]**2)
    return res
def sum_squares (par, x, y):
    ss = sum((y-model(par,x))**2)
    print('Sum of squares = {0}'.format(ss))
    return ss
And then I used the functions as follow:
parameter = np.array([0.0,-8.0,0.0018,0.0018,0,200])
res = least_squares(sum_squares, parameter, bounds=(-360,360), args=(x1,y1),verbose = 1)
The only problem is that it doesn't produce the results I'm looking for... And that is mainly because my x values are [0,360] and the Y values only vary by about 0.2, so it's a hard nut to crack for this function, and it produces this (poor) result:
(image: the resulting fit)
I think that the range of x values [0, 360] and y values (which you say is ~0.2) is probably not the problem. Getting good initial values for the parameters is probably much more important.
In Python with numpy/scipy, you definitely do not want to loop over values of x, but do something more like:
def model(par, x):
    res = par[2] + par[3]*x + par[4]*x**2
    A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
    mask = x <= par[5]                                  # points at or left of the breakpoint par[5]
    res[mask] = A0 + par[0]*x[mask] + par[1]*x[mask]**2
    return res
It's not clear to me that that form is really what you want: why should A0 (a value independent of x added to a portion of the model) be so complicated and interdependent on the other parameters?
More importantly, your sum_of_squares() function is actually not what least_squares() wants: you should return the residual array, you should not do the sum of squares yourself. So, that should be
def sum_of_squares(par, x, y):
    return (y - model(par, x))
But most importantly, there is a conceptual problem that is probably going to plague this model: your par[5] is meant to represent a breakpoint where the model changes form. This is going to be very hard for these optimization routines to find. These routines generally make a very small change to each parameter value to estimate the derivative of the residual array with respect to that variable, in order to figure out how to change that variable. With a parameter that is essentially used as an integer, the small change in the initial value will have no effect at all, and the algorithm will not be able to determine the value for this parameter. With some of the scipy.optimize algorithms (notably, leastsq) you can specify a scale for the relative change to make. With leastsq that is called epsfcn. You may need to set this as high as 0.3 or 1.0 for fitting the breakpoint to work. Unfortunately, this cannot be set per variable, only per fit. You might need to experiment with this and other options to least_squares or leastsq.
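A rough sketch of both calls (p0 stands in for your initial guess and x1/y1 for your data; the epsfcn value is only a starting point to experiment with):
from scipy.optimize import least_squares, leastsq

p0 = np.array([0.0, -8.0, 0.0018, 0.0018, 0.0, 200.0])
fit = least_squares(sum_of_squares, p0, args=(x1, y1))              # uses the residual-returning version above
best, ier = leastsq(sum_of_squares, p0, args=(x1, y1), epsfcn=0.3)  # larger relative step, may help find the breakpoint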
First of all I must say my knowledge of Sage math is really very limited, but I really want to improve and be able to solve these problems I'm having. I have been asked to implement the following:
1 - Read in FIPS 186-4 (http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf) the definition of ECDSA and implement using Sage math with:
(a) prime elliptic curves (P-xxx)
(b) binary elliptic curves (B-xxx)
I tried solving (a) by looking around the internet a lot and ended up with the following code:
Exercise 1, a)
class ECDSA_a:
    def __init__(self):
        # Parameters for curve P-256 as stated in FIPS 186-4 D.1.2.3
        p256 = 115792089210356248762697446949407573530086143415290314195533631308867097853951
        a256 = p256 - 3
        b256 = ZZ("5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b", 16)
        ## base point values
        gx = ZZ("6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296", 16)
        gy = ZZ("4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5", 16)
        self.F = GF(p256)
        self.C = EllipticCurve([self.F(a256), self.F(b256)])
        self.G = self.C(self.F(gx), self.F(gy))
        self.N = FiniteField(self.C.order())    # how many points are in our curve
        self.d = int(self.F.random_element())   # private key
        self.pd = self.G*self.d                 # our public key
        self.e = int(self.N.random_element())   # our message

    # sign
    def sign(self):
        self.k = self.N.random_element()
        self.r = (int(self.k)*self.G).xy()[0]
        self.s = (1/self.k)*(self.e + self.N(self.r)*self.d)

    # verify
    def verify(self):
        self.w = 1/self.N(self.s)
        return self.r == (int(self.w*self.e)*self.G + int(self.N(self.r)*self.w)*self.pd).xy()[0]

    # mutate
    def mutate(self):
        s2 = self.N(self.s)*self.N(-1)
        if not (s2 != self.s):
            return False
        self.w = 1/s2
        return self.r == (int(self.w*self.e)*self.G + int(self.N(self.r)*self.w)*self.pd).xy()[0]  # sign flip mutant
#TESTING
#Exercise 1 a)
print("Exercise 1 a)\n")
print("Elliptic Curve defined by y^2 = x^3 -3x +b256*(mod p256)\n")
E = ECDSA_a()
E.sign()
print("Verify signature = {}".format(E.verify()))
print("Mutating = {}".format(E.mutate()))
But now I wonder: is this code really what I have been asked for?
I mean, I got the values for p and all that from the link mentioned above.
But is this elliptic curve I made a prime one? (Whatever that really means.)
In other words, is this code I glued together the answer? And what is the mutate function actually doing? I understand the rest but don't see why it needs to be here...
Also, what could I do about question (b)? I have looked all around the internet but I can't find a single understandable mention of binary elliptic curves in Sage...
Could I just reuse the above code and simply change the curve creation to get the answer?
(a.) Is this code really what I have been asked for?
No.
The sign() method has the wrong signature: it does not accept an argument to sign (see the sketch below).
It would be very helpful to write unit tests for your code based on published test vectors, perhaps these, cf this Secp256k1 ECDSA test examples question.
You might consider doing the verification methods described in D.5 & D.6 (pp 109 ff).
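A minimal sketch of a sign() that takes the message hash as an argument instead of generating self.e internally; the names follow the asker's code, and this is not a complete FIPS-conformant implementation:
def sign(self, e):
    # e: integer hash of the message to sign, reduced modulo the group order
    self.e = int(e) % int(self.C.order())
    self.k = self.N.random_element()
    self.r = (int(self.k)*self.G).xy()[0]
    self.s = (1/self.k)*(self.N(self.e) + self.N(self.r)*self.d)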
(b.) binary elliptic curves
The FIPS publication you cited offers some advice on implementing such curves, and yes, you could leverage your current code. But there's probably less practical advantage to implementing them compared to the P-xxx curves, as the security of B-xxx curves is on rockier footing. They have an advantage for hardware implementations such as FPGAs, but that's not relevant to your situation.
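In Sage, a binary-field curve y^2 + x*y = x^3 + a*x^2 + b is built the same way as the prime-field one, just over GF(2^m). A toy illustration (these are NOT the NIST B-xxx constants; you would substitute the field size, b value, and base point from FIPS 186-4):
F = GF(2**13, 'z')                        # toy binary field; a real B-xxx curve uses m = 163, 233, ...
z = F.gen()
E = EllipticCurve(F, [1, z, 0, 0, F(1)])  # y^2 + x*y = x^3 + z*x^2 + 1
print(E.order())                          # group order, analogous to self.C.order() above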