Linear combination of function objects in Python

Problem: I want to numerically integrate a function f(t,N) that may be written as a linear combination of N other known functions g_1(t), ..., g_N(t).
My Solution I: I know the functions g_i and also the coefficients, so my initial idea was to create a row vector of coefficients and a column vector containing the lambda functions g_i, and then use np.dot for the inner product to get the function object I want. Unfortunately, you cannot just add two function objects, nor multiply a function object by a scalar.
My Solution II: Of course I can do something like this (basically defining pointwise what I want):
def f(t, N, a, g):
    """
    a = numpy array of coefficients
    g = numpy array of lambda functions corresponding to the functions g_i
    """
    res = 0
    for i in range(N):
        res += a[i] * g[i](t)
    return res
But the for loop is of course not great, especially because:
I need to evaluate this function at many, many time steps t.
I pass this function f into a numerical integration routine like scipy.integrate.quad.
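For what it's worth, a minimal sketch of how Solution I's idea can still be used (the coefficients and g_i below are hypothetical): evaluate the g_i at t first, then take np.dot of the resulting values rather than of the function objects themselves.
import numpy as np

# hypothetical coefficients and basis functions g_i
a = np.array([2.0, -1.0, 0.5])
g = [np.sin, np.cos, lambda t: t**2]

def f(t):
    # evaluate every g_i at t, then dot the values with the coefficients
    return np.dot(a, np.array([gi(t) for gi in g]))

print(f(1.5))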

Briefly:
In Cython you could speed up indexing using memoryviews.
If these equations are linear, you could superimpose them using sympy. Example:
import sympy as sy

x, y = sy.symbols('x y')
g0 = x*0.33 + 6
g1 = x*0.72 + 1.3
g2 = x*11.2 - 6.5
gn = x*3.3 - 7.3
G = [g0, g1, g2, gn]
# this is superimposition
print(sum(G).subs(x, 15.1))
print(sum(gi.subs(x, 15.1) for gi in G))
'''
output:
228.305000000000
228.305000000000
'''
If that's not what you want, give some example input and output, so that I can try it instead of guessing blind.
With little RAM available you could pass the final equation to numexpr and evaluate it with some input; otherwise it's best to work on numpy arrays.
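As an additional hedged sketch (not part of the answer above): once the combined expression exists, sympy.lambdify can turn it into a fast numerical callable that scipy.integrate.quad accepts. The basis functions and coefficients below are made up for illustration.
import sympy as sy
from scipy.integrate import quad

t = sy.symbols('t')
# hypothetical basis functions and coefficients
G = [sy.sin(t), sy.cos(t), t**2]
a = [2.0, -1.0, 0.5]

expr = sum(ai * gi for ai, gi in zip(a, G))   # the linear combination as one expression
f = sy.lambdify(t, expr, 'numpy')             # fast numerical callable

value, err = quad(f, 0.0, 1.0)
print(value)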

Related

How to assign an index-based formula to each indexed position of a tensor in sympy

Let's say that we have an IndexedBase 2-dim tensor r[i,j]. I want to assign to each indexed position a formula that uses the i and j entries of other 1-dim tensors, like this:
from sympy import symbols, IndexedBase, Idx, sqrt

N = symbols('N', integer=True)
Np = symbols('Np', integer=True)
x = IndexedBase('x', shape=(Np,))
z = IndexedBase('z', shape=(Np,))
r = IndexedBase('r', shape=(Np, N))
i = Idx('i', (1, Np))
j = Idx('j', (1, N))
r[i, j] = sqrt(x[i]**2 + z[j]**2)   # this assignment is what fails
I know this could be easily translated to numpy, but sympy does not allow item assignment on IndexedBase objects.
I need to understand how sympy treats IndexedBase variables in this case. The final objective is to use lambdify on a much more complex expression, in order to allow numpy vectors as input arguments, but the operations are all based on this type of association. How could I perform this task?
Maybe I did not understand correctly the basics of the IndexedBase variables in Sympy. Sorry if this is a dummy question.
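One hedged sketch of a possible route (the scalar symbols x_i and z_j are stand-ins introduced here, not part of the question's code): define the formula once on scalar symbols, lambdify it, and let numpy broadcasting build the full r[i, j] array.
import numpy as np
import sympy as sy

# scalar stand-ins for x[i] and z[j]
xi, zj = sy.symbols('x_i z_j')
expr = sy.sqrt(xi**2 + zj**2)

# lambdify the scalar formula, then broadcast it over numpy vectors
f = sy.lambdify((xi, zj), expr, 'numpy')
xv = np.linspace(1.0, 3.0, 4)           # plays the role of x[1..Np]
zv = np.linspace(0.0, 2.0, 5)           # plays the role of z[1..N]
r = f(xv[:, None], zv[None, :])         # r[i, j] = sqrt(x_i**2 + z_j**2), shape (Np, N)
print(r.shape)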

Something like de-lambdify

Suppose I have the following lambda function
import numpy as np
f = lambda x,t : np.cos(t)
Now I want to obtain a symbolic expression of f (the ultimate goal is to obtain a primitive of that function f). So
import sympy as sym
x_s = sym.Symbol('x')
t_s = sym.Symbol('t')
sym.integrate(f(x_s,t_s), t_s)
But that fails with the error:
TypeError: loop of ufunc does not support argument 0 of type Symbol
which has no callable cos method
I want to be able to translate any closed-form expression to sympy.
Thanks.
Edit: What I am trying to achieve in the end is to numerically solve a PDE using Dedalus. For the moment I just need a linear reaction-diffusion equation with non-constant coefficients, and that function f(x,t) is one of the coefficients. Since the equation is linear, I have the analytic solution, which depends on the primitive with respect to t of f(x,t). So I want to define f(x,t) in one place and then use it both to construct the numerical solution (where I call f(x,t)) and the analytic solution (where I call the primitive of f(x,t)).
So bottom line:
I need to define a lambda function that depends on the two arguments x and t for the following purposes:
1) compute values given x and t;
2) convert that function definition to text to be used to solve the numerical equation using Dedalus (replacing every reference to sym with np, otherwise Dedalus gives an error);
3) obtain the primitive of that function (that's why I am using sympy) and then define a lambda function of that primitive in order to compute values for the analytical solution.
You are getting the error because numpy's cos is not compatible with sympy.Symbol. To get rid of this error, rewrite your lambda like this:
import sympy as sym
f = lambda x,t : sym.cos(t)
De-lambdify is performed like this:
x_s = sym.Symbol('x')
t_s = sym.Symbol('t')
sym.integrate(f(x_s,t_s), t_s)
When you call f, it returns the sympy cos expression.
P.S. Check whether you really need the x parameter in f.
So the full code will be:
import sympy as sym
f = lambda x,t : sym.cos(t)
x_s = sym.Symbol('x')
t_s = sym.Symbol('t')
sym.integrate(f(x_s,t_s), t_s)
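If you then also need numeric values of that primitive (point 3 of the edit above), one possible follow-up, sketched here with sympy.lambdify (the sample points are hypothetical):
import numpy as np
import sympy as sym

f = lambda x, t: sym.cos(t)
x_s, t_s = sym.symbols('x t')

F_expr = sym.integrate(f(x_s, t_s), t_s)           # the primitive, sin(t)
F_num = sym.lambdify((x_s, t_s), F_expr, 'numpy')  # back to a numerical callable

print(F_num(0.0, np.linspace(0, np.pi, 5)))        # values of the primitive at sample points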

How to parallelize a computation?

I am trying to solve an ODE (ordinary differential equation) on a distance matrix, but I do not know how to parallelize my code.
from scipy.integrate import quad
from math import exp
import numpy as np
import matplotlib.pyplot as plt

# I have my distance matrix and I want to count how many points are
# at distance at most r from point i
def v(dist, r, i):
    return 1/N*(np.count_nonzero(np.select([dist[i,:]<r],[dist[i,:]]))+1)

# integral of rho from r to infinity
# (note: quad returns a (value, abserr) tuple, not a plain number)
def rho_barre(rho, r):
    return quad(rho, r, np.inf)

# integral over r of a certain integrand
def grad_F(i, j, rho, v, v_r, dist):
    return quad(lambda r: ((v(dist, r, i)+v(dist, r, j))/2-v_r)*rho_barre(rho, max(r, dist[i,j])), 0, np.inf)

# parameters
delta_T = 0.1
rho = (lambda x: exp(-x))
v_r = 0
for t in range(1000):
    for i in range(N):
        for j in range(N):
            d_matrix[i,j] = d_matrix[i,j] + delta_T*grad_F(i, j, rho, v, v_r, d_matrix)
First, I get the error can't multiply sequence by non-int of type 'float' and I don't understand why. Second, I know that three nested loops are too much in Python, and I want to know how to make this faster.
It sounds like you have a few different questions. Let me see if I can answer them more abstractly and you can piece it together.
Parallel
One very easy way to work in parallel in Python is multiprocessing.
If you apply the same function many times, instead of:
res = [myfun(arg) for arg in args]
you can do:
import multiprocessing as mp
with mp.Pool() as pool:
    res = pool.map(myfun, args)
There are limitations: both myfun and args must be pickleable (which a lambda is not, so you will want to address that in your code); see the sketch below.
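A minimal self-contained sketch of that pattern (myfun here is a hypothetical stand-in for the per-element work in your loop):
import numpy as np
import multiprocessing as mp

# a module-level function can be pickled; a lambda cannot
def myfun(x):
    return np.exp(-x) * x

if __name__ == '__main__':
    args = np.linspace(0.0, 1.0, 100)
    with mp.Pool() as pool:
        res = pool.map(myfun, args)
    print(sum(res))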
Nested Loops
In general, Python loops are slow. When working with NumPy, it is better to "vectorize" if you can.
So instead of working on each [i,j] element of d_matrix, see if you can work on all of them at the same time: compute a matrix grad_F (rather than calling a function per element) and add it. You will still need your time loop, but you may be able to update your d_matrix in a single, very fast operation, as sketched below.
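A hedged illustration of the vectorization idea (v_all is a hypothetical vectorized counterpart of the question's v; the +1 and 1/N normalization follow the question's definition):
import numpy as np

def v_all(dist, r, N):
    # for every point i at once, count the nonzero distances smaller than r
    return (np.count_nonzero(np.where(dist < r, dist, 0), axis=1) + 1) / N

# usage sketch with a random distance matrix
N = 5
d_matrix = np.random.rand(N, N)
print(v_all(d_matrix, 0.5, N))   # one value per point i, no Python loop over i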
Other tips:
Can you precompute rho_barre? Maybe use scipy.integrate.cumtrapz for that.
Also, try to write fewer one-liners. Use new functions instead of lambdas. It will make understanding your code much easier!

Is there an efficient function to calculate a product?

I'm looking for a numpy function (or a function from any other package) that would efficiently evaluate the product
prod_{i=1}^{N} f(x_i)
with f being a vector-valued function of a vector-valued input x. The product is taken to be a simple component-wise multiplication.
The issue here is that both the length of each x vector and the total number of result vectors (f of x) to be multiplied (N) are very large, on the order of millions. Therefore, it is impossible to generate all the results at once (they wouldn't fit in memory) and then multiply them afterwards using np.multiply.reduce or the like.
A toy example of the type of code I would like to replace is:
import numpy as np
x = np.ones(1000000)
prod = f(x)
for i in range(2, 1000000):
    prod *= f(i * np.ones(1000000))
with f a vector-valued function with the dimension of its output equal to the dimension of its input.
To be sure: I'm not looking for equivalent code, but for a single, highly optimized function. Is there such a thing?
For those familiar with Wolfram Mathematica: it would be the equivalent of Product. In Mathematica, I would simply write Product[f[i ConstantArray[1,1000000]],{i,1000000}].
Numpy ufuncs all have a reduce method. np.multiply is a ufunc. So it's a one-liner:
np.multiply.reduce(v)
Where v is the vector of values you compute in what is hopefully an equally efficient manner.
To compute the vector, just apply your function to the input:
v = f(x)
So with your example:
np.multiply.reduce(np.sin(x))
Alternative
A simpler way to phrase the same thing is np.prod:
np.prod(v)
You can also use the prod method directly on your vector:
v.prod()
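If, as the question describes, the N result vectors cannot all be held in memory at once, one hedged alternative (not part of the answer above) is to feed a generator to functools.reduce, so that only one f(...) result exists at a time; f and the sizes below are hypothetical:
import numpy as np
from functools import reduce

def f(x):
    return np.sin(x)                     # hypothetical vector-valued function

n = 1_000_000
# the generator yields one result vector at a time, so memory stays bounded
results = (f(i * np.ones(n)) for i in range(1, 11))
prod = reduce(np.multiply, results)      # component-wise running product
print(prod.shape)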

Derivative of a sum in Theano

So I want to calculate the gradient and Hessian of the following sum. AFAIK Theano should be able to do that; however, I can't figure out how.
sum_{i=1}^{M} [ y_i * log(1/(1+exp(-X_i*beta))) + (1 - y_i) * log(1/(1+exp(X_i*beta))) ]
X is a matrix of size M x N; y is an M-sized vector; beta is an N-sized vector.
One way to compute the sum is using the scan() function, which I did like this:
res, ups = theano.scan(lambda v, w: v*np.log(1/(1+np.exp(-1*w.dot(beta))))
                       + ((1-v)*(np.log(1/(1+np.exp(w.dot(beta)))))),
                       sequences=[y, X])
t7 = theano.function(inputs=[X, y, beta], outputs=res)
and that works fine as far as I can tell. However, I can't use this as an input to the grad() function with respect to beta.
So what I would like to know is whether there is a way to either use the scan output as input to the grad function, or a different way to compute the sum.
(I first tried in sympy, but sympy can't lambdify IndexedBase objects, so I can compute the grad but can't use it as a function; maybe that helps?)
The sum adds up a function of the dot product of a row of X with beta, while the binary vector y decides which of the two functions is used:
log(1/(1+exp(-X_i*beta)))   or   log(1/(1+exp(X_i*beta)))
Hope that helps.
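One hedged way around this, sketched here rather than taken from the thread: skip scan and write the sum directly with tensor operations, so that grad() and hessian() can be applied to a single symbolic scalar.
import theano
import theano.tensor as T

X = T.dmatrix('X')
y = T.dvector('y')
beta = T.dvector('beta')

# p_i = 1 / (1 + exp(-X_i . beta)), the quantity inside the logs above
p = T.nnet.sigmoid(T.dot(X, beta))
loglik = T.sum(y * T.log(p) + (1 - y) * T.log(1 - p))

grad = theano.grad(loglik, beta)
hess = theano.gradient.hessian(loglik, beta)
f_grad = theano.function([X, y, beta], grad)
f_hess = theano.function([X, y, beta], hess)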
