Replicating an Excel Solver function in Python to get required output - python

Assume I have cash flows from year 1 through year 4:
cf = [30,45,52,67]
and discount rates (zero coupon):
rt = [.02,.03,.04,.05]
Calculating the PV is straightforward in Python:
import numpy as np
import pandas as pd

cf = [30, 45, 52, 67]
rt = [.02, .03, .04, .05]
sum([x[0]/(1 + x[1])**(i+1) for i, x in enumerate(zip(cf, rt))])
which gives me the output
173.1775
Now, if I want my NPV to be 180 (hypothetically), in Excel I would simply run Solver and let it adjust "rt" (by adding a spread across the board).
How do I replicate the same in Python? I have seen/used SciPy's optimize for other purposes, but I am unsure how to use it here (or whether there is another solution).

You can solve your problem using newton (an implementation of the Newton-Raphson method) from scipy.optimize.
newton needs a starting point and a function of a single parameter that evaluates to zero when you reach your target (this is not strictly true, newton also accepts functions of more than one variable, but…). So we write a function that accepts your arguments and returns the function needed by newton, and finally we call newton with an initial value of zero:
In [25]: from scipy.optimize import newton
    ...: cf = [30, 45, 52, 67]
    ...: rt = [.02, .03, .04, .05]
    ...:
    ...: def make_fun(cf, rt, val):
    ...:     # zero when the NPV at spread d equals the target val
    ...:     def fun(d):
    ...:         return val - sum([x[0]/(1 + x[1] + d)**(i+1) for i, x in enumerate(zip(cf, rt))])
    ...:     return fun
    ...:
    ...: newton(make_fun(cf, rt, 180), 0)
Out[25]: -0.014576385759418057
Edit: of course you can choose a more descriptive name for make_fun …
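As a quick sanity check (a small sketch continuing the session above, not part of the original answer), re-pricing the cash flows with the solved spread added to every rate should reproduce the 180 target:

spread = newton(make_fun(cf, rt, 180), 0)
# re-price with the spread applied across the board
print(sum([x[0]/(1 + x[1] + spread)**(i+1) for i, x in enumerate(zip(cf, rt))]))
# ~180.0, up to floating-point tolerance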

Related

Creating vector with one unknown using a function

I have a problem that I cannot get my head around, although there must be a simple way to do so. Basically, I have this function:
Pr(d) = Pr(d_0) - 10*n*log(d/d_0)
where we can ignore (for now) the Pr(d) term. Now, I want to pass the following dataframe:
d
0 200
1 600
2 800
3 1000
with d_0 constant. I should actually pass it as an array, using df_matrix = df.to_numpy().
What I want is to create a function:
import pandas as pd
import numpy as np
from sympy import symbols, solve
from scipy.optimize import fsolve
import math

def recieved_power(pr_d0, d_0, x):
    pr_d0 - 10*n*math.log(d/d_0)
that will return a vector in the unknown variable n. It should return:
-3.0102999566398116*n
-7.781512503836435*n
-9.030899869919434*n
-10.0*n
Is that possible? I cannot just multiply by n afterwards, because there might be new factors at a later stage of the work.
Thanks for any insight.
I suggest you change
def recieved_power(pr_d0, d_0, x):
    pr_d0 - 10*n*math.log(d/d_0)
into
def recieved_power(pr_d0, d, d_0):
    return lambda n: [pr_d0 - 10*n*math.log(single_d/d_0) for single_d in d]
For example, the following code snippet takes in the parameters pr_d0, d and d_0 and returns a function which takes in n and outputs a list of numbers, each representing a single pr_d0 - 10*n*math.log(d/d_0) value:
import math

d = [200, 600, 800, 1000]

def recieved_power(pr_d0, d, d_0):
    return lambda n: [pr_d0 - 10*n*math.log(single_d/d_0) for single_d in d]

func_list = recieved_power(0, d, d_0=1)
print(func_list(3))
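If the goal is to keep n symbolic, as in the expected output above, a minimal sketch with sympy is one option. Two assumptions here that the expected numbers imply but the question does not state: the logarithm is base 10 and d_0 = 100, since -10*log10(200/100) = -3.0102999566398116:

import math
import sympy as sp

n = sp.symbols('n')
d = [200, 600, 800, 1000]

def recieved_power_symbolic(pr_d0, d, d_0):
    # numeric coefficient times the symbolic unknown n
    return [pr_d0 - 10*math.log10(single_d/d_0)*n for single_d in d]

print(recieved_power_symbolic(0, d, 100))
# reproduces the expected vector: -3.0102999566398116*n, ..., -10.0*n

Because the entries are sympy expressions, additional factors can still be multiplied in later, before substituting a value for n.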

Error evaluating a derivative in Python (with .subs, .evalf and .lambdify)

I am trying to separately compute the elements of a Taylor expansion and did not obtain the results I was supposed to. The function to approximate is x**321, and the first three elements of its Taylor expansion around x=1 should be:
1 + 321*(x-1) + 51360*(x-1)**2
For some reason, the code associated with the second term is not working.
See my code below.
import sympy as sy
import numpy as np
import math
import matplotlib.pyplot as plt
x = sy.Symbol('x')
f = x**321
x0 = 1
func0 = f.diff(x, 0).subs(x, x0)*((x - x0)**0/math.factorial(0))
print(func0)
func1 = f.diff(x, 1).subs(x, x0)*((x - x0)**1/math.factorial(1))
print(func1)
func2 = f.diff(x, 2).subs(x, x0)*((x - x0)**2/math.factorial(2))
print(func2)
The prints I obtain running this code are
1
321*x - 321
51360*(x - 1)**2
I also used .evalf and .lambdify but the results were the same. I can't understand where the error is coming from.
f = x**321
x = sy.Symbol('x')

def fprime(x):
    return sy.diff(f, x)

DerivativeOfF = sy.lambdify((x), fprime(x), "numpy")
print(DerivativeOfF(1)*((x - x0)**1/math.factorial(1)))
321*x - 321
I'm obviously just starting with the language, so thank you for your help.
I found a beginner's guide to Taylor expansions in Python. Check it out, perhaps all your questions are answered there:
http://firsttimeprogrammer.blogspot.com/2015/03/taylor-series-with-python-and-sympy.html
I tested your code and it works fine. As Bazingaa pointed out in the comments, it is just a matter of how sympy stores the expressions internally. One could argue that for a computer it takes less RAM to store 321*x - 321 than 321*(x - 1)**1.
In your first output line it likewise gives you 1 instead of (x - 1)**0.
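A quick way to convince yourself (a small sketch, not part of the original answer) is to ask sympy whether the printed form and the expected form are the same expression:

import sympy as sy

x = sy.Symbol('x')
func1 = 321*x - 321
print(sy.factor(func1))                   # 321*(x - 1)
print(sy.simplify(func1 - 321*(x - 1)))   # 0, so the two forms are identical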

Python/Numba - Custom class object as input type

I'm starting with numba and my first goal is to try and accelerate a not-so-complicated function with a nested loop.
Given the following class:
class TestA:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def get_mult(self):
        return self.a * self.b
and a numpy ndarray that contains TestA objects, of shape (N,) where N is usually around 3 million.
Now given the following function:
def test_no_jit(custom_class_obj_container):
    container_length = len(custom_class_obj_container)
    sum = 0
    for i in range(container_length):
        for j in range(i + 1, container_length):
            obj_i = custom_class_obj_container[i]
            obj_j = custom_class_obj_container[j]
            sum += (obj_i.get_mult() + obj_j.get_mult())
    return sum
I've tried to play around with numba to get it to work with the function above, however I cannot seem to get it to work with the nopython=True flag, and if it's set to False, the runtime is higher than for the no-jit function.
Here is my latest attempt at jitting the function (also using nb.prange):
@nb.jit(nopython=False, parallel=True)
def test_jit(custom_class_obj_container):
    container_length = len(custom_class_obj_container)
    sum = 0
    for i in nb.prange(container_length):
        for j in nb.prange(i + 1, container_length):
            obj_i = custom_class_obj_container[i]
            obj_j = custom_class_obj_container[j]
            sum += (obj_i.get_mult() + obj_j.get_mult())
    return sum
I've tried to search around but cannot seem to find a tutorial on how to declare a custom class in the signature, how to accelerate a function of this sort, and how to get it to run on the GPU, possibly with the CUDA libraries, which are installed and ready to use (previously used with tensorflow). Any info regarding that would be highly appreciated.
The numba docs give an example of creating a custom type, even for nopython mode: https://numba.pydata.org/numba-doc/latest/extending/interval-example.html
In your case, though, unless this is a really slimmed-down version of what you actually want to do, it seems like the easiest approach would be to reuse existing types. Additionally, the construction of a 3M-length object array is going to be slow and produces fragmented memory (as the objects are not stored in contiguous blocks).
An example of how record arrays might be used to solve the problem:
import numba
import numpy as np

x_dt = np.dtype([('a', np.float64),
                 ('b', np.float64)])
n = 30000
buf = np.arange(n*2).reshape((n, 2)).astype(np.float64)
vec3 = np.recarray(n, dtype=x_dt, buf=buf)

@numba.njit
def mult(a):
    return a.a * a.b

@numba.jit(nopython=True, parallel=True)
def sum_of_prod(vector):
    sum = 0
    vector_len = len(vector)
    for i in numba.prange(vector_len):
        for j in numba.prange(i + 1, vector_len):
            sum += mult(vector[i]) + mult(vector[j])
    return sum

sum_of_prod(vec3)
FWIW, I'm no numba expert. I found this question when searching for how to implement a custom type in numba for non-numerical stuff. In your case, because this is highly numerical, I think a custom type is probably overkill.
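A side note, not from the original answer: for this particular reduction the double loop can be avoided entirely, because each element's product appears in exactly N - 1 of the (i, j) pairs, so the pairwise sum collapses to (N - 1) times the plain sum of products. A sketch, assuming the record-array setup above:

import numpy as np

m = vec3.a * vec3.b               # vectorized equivalent of get_mult
print((len(m) - 1) * m.sum())     # matches sum_of_prod(vec3), up to float rounding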

Python function as an argument for an R function using rpy2

I wrote a function in Python 2.7:
# Python #
def function_py(par):
    # something happens
    return(value)
and I want to use this function as an argument for another function in R. More precisely, I want to compute the Sobol' indices using the following function:
# R #
library('sensitivity')
sobol(function_py_translated, X1, X2)
where function_py_translated would be the R equivalent of function_py.
I'm trying to use the rpy2 module, and for a simple function I could make a working case:
import rpy2.rinterface as ri
import rpy2.robjects as robjects
import rpy2.robjects.numpy2ri
from rpy2.robjects.packages import importr

sensitivity = importr('sensitivity')
radd = ri.baseenv.get('+')

def costfun(X):
    a = X[0]
    b = X[1]
    return(radd(a, b))

costfunr = ri.rternalize(costfun)
X1 = robjects.r('data.frame(matrix(rnorm(2*1000), nrow = 1000))')
X2 = robjects.r('data.frame(matrix(rnorm(2*1000), nrow = 1000))')
sobinde = sensitivity.sobol(costfunr, X1, X2)
print(sobinde.__getitem__(11))
The main problem is that I had to redefine the "+". Is there a way to work around this, i.e. to pass an arbitrary Python function without prior transformation? The function I want to analyze is much more complicated.
Thank you very much for your time
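One possible direction (a hedged sketch, untested here and not part of the original post; exact indexing and conversion behaviour varies between rpy2 versions): inside an rternalize-d callback the arguments arrive as R vectors, but their elements can usually be pulled out as plain Python floats, after which arbitrary Python code can run before wrapping the result back into an R vector:

import rpy2.rinterface as ri

def costfun(X):
    # convert the incoming R vector to plain Python floats
    a, b = float(X[0]), float(X[1])
    # any Python computation can happen here
    return ri.FloatSexpVector([a + b])

costfunr = ri.rternalize(costfun)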

How to read a system of differential equations from a text file to solve the system with scipy.odeint?

I have a large (>2000 equations) system of ODEs that I want to solve with python scipy's odeint.
I have three problems that I want to solve (maybe I will have to ask 3 different questions?).
For simplicity, I will explain them here with a toy model, but please keep in mind that my system is large.
Suppose I have the following system of ODEs:
dS/dt = -beta*S
dI/dt = beta*S - gamma*I
dR/dt = gamma*I
with beta = c*p*I,
where c, p and gamma are parameters that I want to pass to odeint.
odeint is expecting a file like this:
def myODEs(y, t, params):
    c, p, gamma = params
    beta = c*p
    S = y[0]
    I = y[1]
    R = y[2]
    dydt = [-beta*S*I,
            beta*S*I - gamma*I,
            gamma*I]
    return dydt
that can then be passed to odeint like this:
myoutput = odeint(myODEs, [1000, 1, 0], np.linspace(0, 100, 50), args=([c, p, gamma],))
I generated a text file in Mathematica, say myODEs.txt, where each line of the file corresponds to the RHS of my system of ODEs, so it looks like this:
#myODEs.txt
-beta*S*I
beta*S*I - gamma*I
gamma*I
My text file looks similar to what odeint is expecting, but I am not quite there yet.
I have three main problems:
1. How can I pass my text file so that odeint understands that it contains the RHS of my system?
2. How can I define my variables in a smart, systematic way? Since there are >2000 of them, I cannot define them manually. Ideally I would define them in a separate file and read that in as well.
3. How can I pass the parameters (there are a lot of them) as a text file too?
I read this question that is close to my problems 1 and 2 and tried to copy it (I directly put values for the parameters so that I didn't have to worry about my point 3 above):
systemOfEquations = []
with open("myODEs.txt", "r") as fp:
    for line in fp:
        systemOfEquations.append(line)

def dX_dt(X, t):
    vals = dict(S=X[0], I=X[1], R=X[2], t=t)
    return [eq for eq in systemOfEquations]

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 1, 5))
but I got the error:
odepack.error: Result from function call is not a proper array of floats.
ValueError: could not convert string to float: -((12*0.01/1000)*I*S),
Edit: I modified my code to:
systemOfEquations = []
with open("SIREquationsMathematica2.txt", "r") as fp:
    for line in fp:
        # keep only what follows the "=" sign, i.e. the RHS of each equation
        pattern = regex.compile(r'.+?\s+=\s+(.+?)$')
        expressionString = regex.search(pattern, line)
        systemOfEquations.append(sympy.sympify(expressionString))

def dX_dt(X, t):
    vals = dict(S=X[0], I=X[1], R=X[2], t=t)
    return [eq for eq in systemOfEquations]

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 100, 50))
and this works (I don't quite get what the first two lines of the for loop are doing). However, I would like to make the process of defining the variables more automatic, and I still don't know how to use this solution to pass the parameters in a text file. Along the same lines, how can I define parameters (that depend on the variables) inside the dX_dt function?
Thanks in advance!
This isn't a full answer, but rather some observations/questions that are too long for comments.
dX_dt is called many times by odeint, with a 1d array y and a scalar t; the extra parameters are supplied via the args argument. y is generated by odeint and varies with each step, so dX_dt should be streamlined to run fast.
Usually an expression like [eq for eq in systemOfEquations] can be simplified to just systemOfEquations; [eq for eq ...] doesn't do anything meaningful. But there may be something about systemOfEquations that requires it.
I'd suggest you print out systemOfEquations (for this small 3-line case), both for your benefit and ours. You are using sympy to translate the strings from the file into equations; we need to see what it produces.
Note that myODEs is a function, not a file. It may be imported from a module, which of course is a file.
The point of vals = dict(S=X[0], I=X[1], R=X[2], t=t) is to produce a dictionary that the sympy expressions can work with. A more direct (and I think faster) dX_dt function would look like:
def myODEs(y, t, params):
    c, p, gamma = params
    beta = c*p
    dydt = [-beta*y[0]*y[1],
            beta*y[0]*y[1] - gamma*y[1],
            gamma*y[1]]
    return dydt
I suspect that the dX_dt that runs sympy generated expressions will be a lot slower than a 'hardcoded' one like this.
I'm going to add the sympy tag, because, as written, that is the key to translating your text file into a function that odeint can use.
I'd be inclined to put the equation variability into the parameters passed via args, rather than into a list of sympy expressions.
That is, replace:
dydt = [-beta*y[0]*y[1],
        beta*y[0]*y[1] - gamma*y[1],
        gamma*y[1]]
with something like:
arg12 = np.array([-beta, beta, 0])
arg1 = np.array([0, -gamma, gamma])
arg0 = np.array([0, 0, 0])
dydt = arg12*y[0]*y[1] + arg1*y[1] + arg0*y[0]
Once this is right, the argxx definitions can be moved outside dX_dt and passed via args. Then dX_dt is just a simple, and fast, calculation.
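A sketch of that refactor (the numeric parameter values are made up for illustration; gamma in particular is hypothetical, while beta uses the value visible in the error message above):

import numpy as np
from scipy.integrate import odeint

beta = 12*0.01/1000   # the c*p value appearing in the error message
gamma = 0.1           # hypothetical value, for illustration only

arg12 = np.array([-beta, beta, 0.0])
arg1 = np.array([0.0, -gamma, gamma])

def dX_dt(y, t, arg12, arg1):
    # pure numpy arithmetic, no sympy at integration time
    return arg12*y[0]*y[1] + arg1*y[1]

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 100, 50), args=(arg12, arg1))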
This whole sympy approach may work fine, but I'm afraid that in practice it will be slow. But someone with more sympy experience may have other insights.
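A middle ground (a sketch, not from the original answer; the symbol names and the gamma value are assumptions for illustration) is to parse the file once with sympy and lambdify the expressions, so dX_dt runs compiled numpy code instead of sympy substitution at every step:

import numpy as np
import sympy as sp
from scipy.integrate import odeint

# declare the state variables and parameters as symbols; 'I' has to be mapped
# explicitly, otherwise sympify reads it as the imaginary unit
S, I_, R, beta, gamma = sp.symbols('S I R beta gamma')
syms = {'S': S, 'I': I_, 'R': R, 'beta': beta, 'gamma': gamma}

# in practice these strings would be read from the text file, one per line
lines = ["-beta*S*I", "beta*S*I - gamma*I", "gamma*I"]
exprs = [sp.sympify(line, locals=syms) for line in lines]

# compile the whole RHS once, up front
rhs = sp.lambdify((S, I_, R, beta, gamma), exprs, "numpy")

def dX_dt(X, t, beta, gamma):
    return rhs(X[0], X[1], X[2], beta, gamma)

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 100, 50), args=(12*0.01/1000, 0.1))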
