How to take element-wise logarithm of a matrix in sympy? - python

Working with a sympy Matrix or numpy array of sympy symbols, how does one take the element-wise logarithm?
For example, if I have:
m=sympy.Matrix(sympy.symbols('a b c d'))
Then np.abs(m) works fine, but np.log(m) does not work ("AttributeError: log").
Any solutions?

Use Matrix.applyfunc:
In [6]: M = sympy.Matrix(sympy.symbols('a b c d'))
In [7]: M.applyfunc(sympy.log)
Out[7]:
⎡log(a)⎤
⎢      ⎥
⎢log(b)⎥
⎢      ⎥
⎢log(c)⎥
⎢      ⎥
⎣log(d)⎦
You can't use np.log because that does a numeric log, but you want the symbolic version, i.e., sympy.log.
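If numeric values are eventually needed, one option (not part of the original answer) is to turn the symbolic result into a NumPy-callable function with sympy.lambdify. A minimal sketch, with positive=True only to keep the logs real:
import sympy

a, b, c, d = sympy.symbols('a b c d', positive=True)
M = sympy.Matrix([a, b, c, d])

logM = M.applyfunc(sympy.log)              # symbolic element-wise log
f = sympy.lambdify((a, b, c, d), logM, modules='numpy')
print(f(1.0, 2.0, 3.0, 4.0))               # 4x1 numpy array of np.log values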

If you want an elementwise logarithm, and your matrices are all going to be single-column, you should just be able to use a list comprehension:
>>> m = sympy.Matrix(sympy.symbols('a b c d'))
>>> logm = sympy.Matrix([sympy.log(x) for x in m])
>>> logm
Matrix([
[log(a)],
[log(b)],
[log(c)],
[log(d)]])
This is kind of ugly, but you could wrap it in a function for ease, e.g.:
>>> def sp_log(m):
...     return sympy.Matrix([sympy.log(x) for x in m])
>>> sp_log(m)
Matrix([
[log(a)],
[log(b)],
[log(c)],
[log(d)]])
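If the matrix is not single-column, a shape-preserving variant of the same idea (just a sketch) hands the flattened comprehension back to the Matrix constructor along with the shape:
>>> m2 = sympy.Matrix(2, 2, sympy.symbols('a b c d'))
>>> sympy.Matrix(m2.rows, m2.cols, [sympy.log(x) for x in m2])
Matrix([
[log(a), log(b)],
[log(c), log(d)]])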

How to make a symmetric matrix of symbols in python?

I'd like to create a matrix whose elements are all variables, so I tried the following:
import sympy as sp
from sympy import MatrixSymbol, Matrix
A=sp.symbols('rho0:'+str(side*(side)/2))
rho = MatrixSymbol('rho', side, side)
rho[0][0]=A[0]
count=0
for i in range(side):
    for j in range(i,side):
        rho[i][j]=A[count]
        rho[j][i]=rho[i][j]
        count+=1
Nevertheless, it seems the type of matrix I'm using doesn't support symbols. What should I do?
MatrixSymbol is used to represent an abstract matrix as a single entity/symbol, much as Symbol is used to represent a scalar. If you are just going to use generic symbols in order, perhaps you could use
>>> MatrixSymbol('rho', 2, 2).as_explicit()
Matrix([[rho[0, 0], rho[0, 1]], [rho[1, 0], rho[1, 1]]])
Notice that a normal Matrix of symbols has been produced.
But since you want the matrix to be symmetric, you would have to loop back over the matrix and assign the elements below the diagonal, as you tried in your code above.
Alternatively, you can modify what you did above by using correct Matrix indexing and a Matrix instead of a MatrixSymbol:
import sympy as sp
from sympy import zeros, Matrix

side = 2
A = sp.symbols('rho0:' + str(side*(side+1)//2))  # note (side+1) not side, and // so the count is an integer
rho = zeros(side, side)
rho[0,0] = A[0]
count = 0
for i in range(side):
    for j in range(i, side):
        rho[i,j] = A[count]
        rho[j,i] = rho[i,j]
        count += 1
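As a quick sanity check (with side = 2), the loop above fills in the mirrored entries:
>>> rho
Matrix([
[rho0, rho1],
[rho1, rho2]])
>>> rho.is_symmetric()
True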
If you first make a matrix that is not symmetric then you can make a symmetric matrix out of its elements:
In [23]: M = MatrixSymbol('M', 3, 3)
In [24]: M
Out[24]: M
In [25]: M.as_explicit()
Out[25]:
⎡M₀₀  M₀₁  M₀₂⎤
⎢             ⎥
⎢M₁₀  M₁₁  M₁₂⎥
⎢             ⎥
⎣M₂₀  M₂₁  M₂₂⎦
In [26]: M[0,0]
Out[26]: M₀₀
In [27]: Msym = Matrix(3, 3, lambda i, j: M[min(i,j),max(i,j)])
In [28]: Msym
Out[28]:
⎡M₀₀  M₀₁  M₀₂⎤
⎢             ⎥
⎢M₀₁  M₁₁  M₁₂⎥
⎢             ⎥
⎣M₀₂  M₁₂  M₂₂⎦
In [29]: Msym.is_symmetric()
Out[29]: True
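A more compact alternative (only a sketch; the rho_i_j naming is purely illustrative) passes an indexing function straight to the Matrix constructor, mirroring the Msym trick above but with plain Symbols:
import sympy as sp

side = 3
rho = sp.Matrix(side, side,
                lambda i, j: sp.Symbol('rho_%d_%d' % (min(i, j), max(i, j))))
# rho[i, j] and rho[j, i] are literally the same Symbol, so:
print(rho.is_symmetric())  # True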

Substituting variable to second order derivative in sympy

I am using sympy to derive some equations and I am experiencing some unexpected behaviour with substitution. Let's say I have a function f(x) that I differentiate by x like this:
fx = f(x).diff()
This results in the unevaluated derivative Derivative(f(x), x).
Now if I substitute x with a value such as pi, like this:
fx.subs(x, pi)
I get an unevaluated substitution, Subs(Derivative(f(x), x), x, pi).
However, if I substitute x with another variable, let say phi, like this:
fx.subs(x, phi)
I get something unexpected: the derivative is now taken with respect to phi, i.e. Derivative(f(phi), phi).
What is happening is that sympy is replacing x in the expression before differentiating; I would like it to do so after. I have seen some suggestions that I should use .doit(), but that does not give the result I want:
fx.doit().subs(x, phi)
What am I doing wrong and how can I replace the variable after the differentiation?
Use srepr to see the structure of an expression more directly:
In [48]: f(x).diff(x).subs(x, pi)
Out[48]:
⎛d       ⎞│
⎜──(f(x))⎟│
⎝dx      ⎠│x=π
In [49]: srepr(f(x).diff(x).subs(x, pi))
Out[49]: "Subs(Derivative(Function('f')(Symbol('x')), Tuple(Symbol('x'), Integer(1))), Tuple(Symbol('x')), Tuple(pi))"
So you can see that Subs is what is used to represent an unevaluated substitution:
In [50]: Subs(f(x).diff(x), x, phi)
Out[50]:
⎛d       ⎞│
⎜──(f(x))⎟│
⎝dx      ⎠│x=φ
Then doit is used to make the Subs evaluate (by performing the substitution):
In [51]: Subs(f(x).diff(x), x, phi).doit()
Out[51]:
d
──(f(φ))
dφ
https://docs.sympy.org/latest/modules/core.html#subs
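Putting the pieces together, a minimal end-to-end sketch of "differentiate first, substitute afterwards" (the sin example is only for illustration):
from sympy import Function, Subs, symbols, sin, pi

x, phi = symbols('x phi')
f = Function('f')

fx = f(x).diff(x)              # Derivative(f(x), x)
deferred = Subs(fx, x, phi)    # substitution is recorded, not applied
print(deferred.doit())         # Derivative(f(phi), phi)

# With a concrete function the same pattern evaluates all the way:
gx = sin(x).diff(x)            # cos(x)
print(Subs(gx, x, pi).doit())  # -1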

Simplifying looped np.tensordot expression

Currently, my script looks as follows:
import numpy as np
a = np.random.rand(10,5,2)
b = np.random.rand(10,5,50)
c = np.random.rand(10,2,50)
for i in range(a.shape[0]):
    c[i] = np.tensordot(a[i], b[i], axes=(0,0))
I want to replicate the same behaviour without using a for loop, since it can be done in parallel. However, I have not found a neat way yet to do this with the tensordot function. Is there any way to create a one-liner for this operation?
You can use the numpy.einsum function; in this case:
c = np.einsum('ijk,ijl->ikl', a, b)
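A quick sanity-check sketch that the one-liner matches the loop from the question:
import numpy as np

a = np.random.rand(10, 5, 2)
b = np.random.rand(10, 5, 50)

c_einsum = np.einsum('ijk,ijl->ikl', a, b)  # batched contraction over the length-5 axis

c_loop = np.empty((10, 2, 50))
for i in range(a.shape[0]):
    c_loop[i] = np.tensordot(a[i], b[i], axes=(0, 0))

print(np.allclose(c_einsum, c_loop))  # True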
An alternative to einsum is matmul/@. The first array has to be transposed so that the sum-of-products dimension is last:
In [162]: a = np.random.rand(10,5,2)
...: b = np.random.rand(10,5,50)
In [163]: c = a.transpose(0,2,1) @ b
In [164]: c.shape
Out[164]: (10, 2, 50)
In [165]: c1 = np.random.rand(10,2,50)
...:
...: for i in range(a.shape[0]):
...:     c1[i] = np.tensordot(a[i], b[i], axes=(0,0))
...:
In [166]: np.allclose(c,c1)
Out[166]: True
tensordot reshapes and transposes the arguments, reducing the task to a plain dot. So while it's fine for switching which axes get the sum-of-products, it doesn't handle batches any better than dot does. That's a big part of why matmul was added. np.einsum gives the same power (and more), but performance usually isn't quite as good (unless it's been "optimized" to the equivalent matmul).
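To see the point about batches concretely, applying tensordot directly to the 3-D arrays contracts only the chosen axes and crosses the batch dimensions, which is not what the loop computes (a small sketch):
import numpy as np

a = np.random.rand(10, 5, 2)
b = np.random.rand(10, 5, 50)

full = np.tensordot(a, b, axes=([1], [1]))  # contracts the length-5 axes only
print(full.shape)                           # (10, 2, 10, 50): batch axes are crossed, not paired

print((a.transpose(0, 2, 1) @ b).shape)     # (10, 2, 50): matmul pairs the leading batch axis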

How to remove a Coefficient of (1) from SymPy symbolic expression?

I want to remove any coefficient that is equal to 1 in a sympy symbolic expression. For example:
I want 1.0*x**2 to be x**2. Is there any way to do it?
Also, if possible, I'd like whole-number float coefficients rounded, for example 2.0*x**2 to become 2*x**2.
You can use nsimplify:
In [4]: nsimplify(2.0*x**2)
Out[4]:
   2
2⋅x
In a plain Python shell:
>>> import sympy
>>> sympy.nsimplify("1.0*x**2")
x**2
>>> sympy.nsimplify("2.0*x**2")
2*x**2
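nsimplify also works directly on an expression object rather than a string, and the optional rational=True flag makes the float-to-Rational conversion explicit; a small sketch:
import sympy

x = sympy.symbols('x')
expr = 1.0*x**2 + 2.0*x

print(sympy.nsimplify(expr))                 # x**2 + 2*x
print(sympy.nsimplify(expr, rational=True))  # x**2 + 2*x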

Understanding the runtime of numpy.where and equivalent alternatives

According to http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html, if x and y are given and input arrays are 1-D, where is equivalent to [xv if c else yv for (c,xv, yv) in zip(x!=0, 1/x, x)]. When doing runtime benchmarks, however, they have significantly different speeds:
x = np.array(range(-500, 500))
%timeit np.where(x != 0, 1/x, x)
10000 loops, best of 3: 23.9 µs per loop
%timeit [xv if c else yv for (c,xv, yv) in zip(x!=0, 1/x, x)]
1000 loops, best of 3: 232 µs per loop
Is there a way I can rewrite the second form so that it has a similar runtime to the first? The reason I ask is that I'd like to use a slightly modified version of the second case to avoid division-by-zero errors:
[1 / xv if c else xv for (c,xv) in zip(x!=0, x)]
Another question: the first case returns a numpy array while the second case returns a list. Is the most efficient way to have the second case return an array to first make a list and then convert it to an array?
np.array([xv if c else yv for (c,xv, yv) in zip(x!=0, 1/x, x)])
Thanks!
You just asked about 'delaying' the 'where':
numpy.where : how to delay evaluating parameters?
and someone else just asked about divide by zero:
Replace all elements of a matrix by their inverses
When people say that where is similar to the list comprehension, they attempt to describe the action, not the actual implementation.
np.where called with just one argument is the same as np.nonzero. This quickly (in compiled code) loops through the argument, and collects the indices of all non-zero values.
np.where, when called with 3 arguments, returns a new array, collecting values from the 2nd and 3rd arguments based on where the condition is nonzero. But it's important to realize that those arguments must be other arrays. They are not functions that it evaluates element by element.
So the where is more like:
m1 = 1/x
m2 = x
[v1 if c else v2 for (c, v1, v2) in zip(x!=0, m1, m2)]
It's easy to run this iteration in compiled code because it just involves 3 arrays of matching size (matching via broadcasting).
np.array([...]) is a reasonable way of converting a list (or list comprehension) into an array. It may be a little slower than some alternatives because np.array is a powerful general-purpose function. np.fromiter([], dtype) may be faster in some cases, because it isn't as general (you have to specify the dtype, and it only works with 1-D data).
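For the 1-D case here, the fromiter route looks roughly like this (a sketch; the dtype has to be given up front):
import numpy as np

x = np.arange(-500, 500)
# build the array straight from a generator, skipping the intermediate list
result = np.fromiter((1/v if v else 0.0 for v in x), dtype=float, count=len(x))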
There are two time-proven strategies for getting more speed in element-by-element calculations:
use packages like numba and cython to rewrite the problem as compiled code (a rough numba sketch follows this list)
rework your calculations to use existing numpy methods. The use of masking to avoid divide by zero is a good example of this.
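A rough numba sketch of the first strategy, assuming numba is installed (this is not code from the original answer):
import numpy as np
from numba import njit

@njit
def safe_reciprocal(x):
    # compiled element-by-element loop: 1/x where x != 0, else 0
    out = np.empty(x.shape[0], dtype=np.float64)
    for i in range(x.shape[0]):
        out[i] = 1.0 / x[i] if x[i] != 0 else 0.0
    return out

print(safe_reciprocal(np.arange(-500, 500))[:3])  # first few reciprocals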
=====================
np.ma.where, the version for masked arrays, is written in Python. Its code might be instructive. Note in particular this piece:
# Construct an empty array and fill it
d = np.empty(fc.shape, dtype=ndtype).view(MaskedArray)
np.copyto(d._data, xv.astype(ndtype), where=fc)
np.copyto(d._data, yv.astype(ndtype), where=notfc)
It makes a target, and then selectively copies values from the 2 input arrays, based on the condition array.
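The same selective-copy pattern can be sketched with plain (non-masked) arrays:
import numpy as np

x = np.arange(-500, 500).astype(float)
cond = x != 0

out = np.empty_like(x)
with np.errstate(divide='ignore'):   # 1/x is still evaluated at x == 0
    np.copyto(out, 1/x, where=cond)  # take 1/x where the condition holds
np.copyto(out, x, where=~cond)       # keep the original value (0.0) elsewhere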
You can avoid division by zero while maintaining performance by using advanced indexing:
x = np.arange(-500, 500)
result = np.empty(x.shape, dtype=float) # set the dtype to whatever is appropriate
nonzero = x != 0
result[nonzero] = 1/x[nonzero]
result[~nonzero] = 0
If for some reason you want to suppress the numpy error, it might be worth looking into the errstate context manager:
x = np.array(range(-500, 500))
with np.errstate(divide='ignore'):  # ignore the zero-division warning
    x = 1/x
x[~np.isfinite(x)] = 0  # convert inf and NaN to 0
Consider changing the array in place by using np.put():
In [56]: x = np.linspace(-1, 1, 5)
In [57]: x
Out[57]: array([-1. , -0.5, 0. , 0.5, 1. ])
In [58]: indices = np.argwhere(x != 0)
In [59]: indices
Out[59]:
array([[0],
       [1],
       [3],
       [4]], dtype=int64)
In [60]: np.put(x, indices, 1/x[indices])
In [61]: x
Out[61]: array([-1., -2., 0., 2., 1.])
The approach above does not create a new array, which could be very convenient if x is a large array.
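An equivalent in-place variant (just a sketch) that skips the argwhere index array entirely uses a boolean mask:
import numpy as np

x = np.linspace(-1, 1, 5)
mask = x != 0
x[mask] = 1 / x[mask]  # modifies x in place, no new full-size array is allocated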
