I have this line of code in MATLAB, written by someone else:
c=a.'/b
I need to translate it into Python. a, b, and c are all arrays. The dimensions that I am currently using to test the code are:
a: 18x1,
b: 25x18,
which gives me c with dimensions 1x25.
The arrays are not square, but I would not want the code to fail if they were. Can someone explain exactly what this line is doing (mathematically), and how to do it in Python? (i.e., what is the equivalent of MATLAB's built-in mrdivide function, if it exists in Python?)
The line
c = a.' / b
computes the solution of the equation c b = a^T for c. NumPy does not have an operator that does this directly. Instead, solve the transposed system b^T c^T = a for c^T and transpose the result:
c = numpy.linalg.lstsq(b.T, a, rcond=None)[0].T
The symbol / is the matrix right division operator in MATLAB, which calls the mrdivide function. From the documentation, matrix right division is related to matrix left division in the following way:
B/A = (A'\B')'
If A is a square matrix, B/A is roughly equal to B*inv(A) (although it's computed in a different, more robust way). Otherwise, x = B/A is the solution in the least squares sense to the under- or over-determined system of equations x*A = B. More detail about the algorithms used for solving the system of equations is given in the documentation. Typically packages like LAPACK or BLAS are used under the hood.
The NumPy package for Python contains a routine lstsq for computing the least-squares solution to a system of equations. This routine will likely give you comparable results to using the mrdivide function in MATLAB, but it is unlikely to be exact. Any differences in the underlying algorithms used by each function will likely result in answers that differ slightly from one another (i.e. one may return a value of 1.0, whereas the other may return a value of 0.999). The relative size of this error could end up being larger, depending heavily on the specific system of equations you are solving.
To use lstsq, you may have to adjust your problem slightly. It appears that you want to solve an equation of the form c B = a, where B is 25-by-18, a is 1-by-18, and c is 1-by-25. Applying a transpose to both sides gives you the equation B^T c^T = a^T, which is a more standard form (i.e. Ax = b). The arguments to lstsq should be (in this order) B^T (an 18-by-25 array) and a^T (an 18-element array). lstsq should return a 25-element array (c^T).
Note: a 1-D NumPy array of length N has no row/column orientation, while MATLAB distinguishes 1-by-N from N-by-1 arrays and will yell at you if you don't use the proper one.
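For concreteness, here is a minimal runnable sketch of the translation, using random data with the shapes from the question (the variable names are just illustrative):
import numpy as np

a = np.random.rand(18, 1)   # column vector, as in the question
b = np.random.rand(25, 18)

# c = a.'/b in MATLAB solves c b = a^T; solve the transposed
# system b^T c^T = a instead and transpose the result
c = np.linalg.lstsq(b.T, a, rcond=None)[0].T

print(c.shape)  # (1, 25)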
In MATLAB, A.' means the (non-conjugate) transpose of the matrix A. So mathematically, what the code computes is A^T/B.
How to go about implementing matrix division in Python (or any language): let's go over a simple division of the form A/B; for your example you would form A^T first and then compute A^T/B, and the transpose operation is easy to do in Python (left as an exercise :)).
You have a matrix equation
C*B = A (you want to find C as A/B)
RIGHT DIVISION (/) is as follows. Right-multiply both sides by B^T:
C*(B*B^T) = A*B^T
You then isolate C by inverting (B*B^T) (this works when B*B^T is invertible, i.e. when B has full row rank),
i.e.,
C = A*B^T*(B*B^T)^{-1} ----- [1]
Therefore, to implement matrix division in Python (or any language), you need the following three methods.
Matrix multiplication
Matrix transpose
Matrix inverse
Then apply them in combination to achieve division as in [1].
In your case you need A^T/B, so your final operation after implementing the three basic methods should be:
A^T*B^T*(B*B^T)^{-1}
Note: Don't forget the basic rules of operator precedence :)
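For what it's worth, here is a minimal NumPy sketch of formula [1]. One caveat the derivation glosses over: (B*B^T) is invertible only when B has full row rank, which is not the case for the question's 25-by-18 B; for that shape, fall back to lstsq or pinv as in the other answers.
import numpy as np

A = np.random.rand(4, 1)
B = np.random.rand(3, 4)   # full row rank, so B @ B.T is invertible

# Formula [1] applied to A^T: C = A^T * B^T * (B * B^T)^{-1}
C = A.T @ B.T @ np.linalg.inv(B @ B.T)

# This matches the least-squares solution of C*B = A^T
C_ls = np.linalg.lstsq(B.T, A, rcond=None)[0].T
print(np.allclose(C, C_ls))  # True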
You can also approach this using the pseudo-inverse of b, then post-multiplying a.T by that result. Try numpy.linalg.pinv combined with matrix multiplication via numpy.dot:
c = numpy.dot(a.T, numpy.linalg.pinv(b))
[edited] As Suvesh pointed out, I was completely wrong before. However, numpy can still easily do the procedure he gives in his post:
A = numpy.random.random((18, 1))  # as noted by others, your dimensions are off
B = numpy.random.random((25, 18))
# B @ B.T is 25x25 but has rank at most 18, so use pinv rather than a plain inverse
C = A.T @ B.T @ numpy.linalg.pinv(B @ B.T)
Related
I know that the recommendation is to not use linalg.inv but to use linalg.solve when inverting matrices. This makes sense when I have a situation like Ax = b and I want to get x, but is there a way to compute something like A - B * D^{-1} * C without using linalg.inv? Or what is the most numerically stable way to deal with the inverse in the expression?
Thanks!
Please don't use inv; it's not as bad as most people think, but there are easier ways. You mentioned that np.linalg.solve(A, b) equals A^{-1} b, but there is no requirement that b be a vector; it can be a matrix. So you can compute your expression as A - np.dot(B, np.linalg.solve(D, C)).
(Note, if you're doing blockwise matrix inversion, C is likely B.transpose(), right?)
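A minimal sketch with made-up shapes (the names follow the question; D is shifted to keep it comfortably invertible):
import numpy as np

A = np.random.rand(4, 4)
B = np.random.rand(4, 3)
C = np.random.rand(3, 4)
D = np.random.rand(3, 3) + 3 * np.eye(3)   # diagonally dominant, so invertible

# np.linalg.solve(D, C) computes D^{-1} C without forming the inverse
S = A - B @ np.linalg.solve(D, C)

# Same value as the explicit-inverse version, up to rounding
print(np.allclose(S, A - B @ np.linalg.inv(D) @ C))  # True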
I have a series with two symbols (summation indices) and I need to find its sum to two decimal places. I checked the results for some parts using sympy, and I know the answer, but I have no idea how to prove it. I don't even know how to prove convergence using sympy:
import sympy as sp

i, j = sp.symbols('i, j', integer=True)
S = sp.Sum(sp.Sum(1/(i*j) * sp.sin(sp.pi*i/2) * sp.sin(sp.pi*(2*j - 1)/2) / sp.sinh(sp.pi*i),
                  (i, 1, sp.oo)), (j, 1, sp.oo))
S.is_convergent()
returns
NotImplementedError: convergence checking for more that one symbol
containing series is not handled
In principle, you can evaluate a SymPy Sum by calling either the method .doit(), which returns a closed-form expression (if SymPy is able to find one), or the method .n(), which returns a numerical approximation of the sum (a floating point number). In this case, neither option works (I would expect at least .n() to give an answer).
As a workaround, you could try to perform a simplification of the sum and then attempt to evaluate it. In particular,
S.factor()
returns a factored expression (rendered output omitted) which transforms the double-index sum into the product of two single-index sums. Calling
S.factor().doit()
returns a partially evaluated expression (rendered output omitted), which implies that the second sum cannot be evaluated in closed form. However, .n() works now:
S.factor().n()
0.0599820444370520
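Putting the workaround together in one runnable snippet (same series as above):
import sympy as sp

i, j = sp.symbols('i, j', integer=True)
term = sp.sin(sp.pi*i/2) * sp.sin(sp.pi*(2*j - 1)/2) / (i * j * sp.sinh(sp.pi*i))
S = sp.Sum(sp.Sum(term, (i, 1, sp.oo)), (j, 1, sp.oo))

# factor() splits the separable summand into a product of two
# single-index sums, which .n() can then evaluate numerically
print(S.factor().n())   # 0.0599820444370520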
Imagine I have a numpy array in Python that has complex numbers as its elements.
I would like to know if it is possible to split any matrix of this kind into a Hermitian and an anti-Hermitian part. My intuition says that this is possible, similar to the fact that any function can be split into an even and an odd part.
If this is indeed possible, how would you do this in Python? So, I'm looking for a function that takes as input any matrix with complex elements and gives a Hermitian and an anti-Hermitian matrix as output, such that the sum of the two outputs is the input.
(I'm working with Python 3 in Jupyter Notebook.)
The Hermitian part is (A + A.T.conj())/2 and the anti-Hermitian part is (A - A.T.conj())/2 (it is quite easy to prove).
If A = B + C with B Hermitian and C anti-Hermitian, you can take the conjugate transpose (I'll denote it *) on both sides, use its linearity and obtain A* = B - C, from which the values of B and C follow easily.
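A minimal sketch of such a function (the name is just illustrative):
import numpy as np

def hermitian_split(A):
    # Hermitian part satisfies H == H.T.conj();
    # anti-Hermitian part satisfies N == -N.T.conj()
    H = (A + A.T.conj()) / 2
    N = (A - A.T.conj()) / 2
    return H, N

A = np.random.rand(4, 4) + 1j * np.random.rand(4, 4)
H, N = hermitian_split(A)
print(np.allclose(H + N, A))         # True: the parts sum to the input
print(np.allclose(H, H.T.conj()))    # True
print(np.allclose(N, -N.T.conj()))   # True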
I need to solve a large number of 3x3 symmetric, positive-definite systems with Python. So far, I did
res = numpy.zeros(n)
for k, obj in enumerate(data_array):
    # construct A, rhs, idx from obj
    res[idx] += numpy.linalg.solve(A, rhs)
This produces the correct result; however, it is also quite slow if n is large. (Well... Yeah.) Perhaps 3x3 isn't a problem size where calling solve() makes much sense.
Any hints?
In NumPy 1.8 and later, numpy.linalg.solve actually broadcasts. For numpy.linalg.solve(a, b), if b.ndim == a.ndim - 1, it will perform a broadcasted matrix-vector solve; otherwise, it'll do a broadcasted matrix-matrix solve. (This decision criterion isn't documented; I had to look at the source.)
If you can efficiently construct a stack of As and rhss, you can call solve once and avoid a Python loop.
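A minimal sketch of the stacked call (made-up SPD data; giving the right-hand sides a trailing axis of size 1 makes it an unambiguous stacked matrix-matrix solve):
import numpy as np

n = 1000
# A stack of n symmetric positive-definite 3x3 matrices (random data)
M = np.random.rand(n, 3, 3)
As = M @ M.transpose(0, 2, 1) + 3 * np.eye(3)   # SPD by construction
rhss = np.random.rand(n, 3, 1)

# One broadcasted call replaces n separate numpy.linalg.solve calls
xs = np.linalg.solve(As, rhss)
print(xs.shape)   # (1000, 3, 1)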
I have two boolean sparse square matrices of c. 80,000 x 80,000 generated from 12MB of data (and am likely to have orders of magnitude larger matrices when I use GBs of data).
I want to multiply them (which should produce a triangular matrix; however, I don't get one, since I don't limit the dot product to yield a triangular matrix).
I am wondering what the best way of multiplying them is (memory-wise and speed-wise) - I am going to do the computation on a m2.4xlarge AWS instance which has >60GB of RAM. I would prefer to keep the calc in RAM for speed reasons.
I appreciate that SciPy has sparse matrices and so does h5py, but have no experience in either.
What's the best option to go for?
Thanks in advance
UPDATE: sparsity of the boolean matrices is <0.6%
If your matrices are relatively empty it might be worthwhile encoding them as a data structure of the non-False values. Say a list of tuples describing the location of the non-False values. Or a dictionary with the tuples as the keys.
If you use e.g. a list of tuples you could use a list comprehension to find the items in the second list that can be multiplied with an element from the first list.
a = [(0, 0), (3, 7), (5, 2)]  # et cetera
b = ...  # idem

# For boolean matrices, entry (r, c) times entry (j, k) contributes
# to entry (r, k) of the product exactly when c == j
res = []
for r, c in a:
    res.extend((r, k) for j, k in b if c == j)
-- EDITED TO SATISFY BELOW COMMENT / DOWNVOTER --
You're asking how to multiply matrices fast and easy.
SOLUTION 1: This is a solved problem: use numpy. All these operations are easy in numpy, and since they are implemented in C, are rather blazingly fast.
http://www.numpy.org/
http://www.scipy.org
also see:
Very large matrices using Python and NumPy
http://docs.scipy.org/doc/scipy/reference/sparse.html
SciPy and NumPy have sparse matrices and sparse matrix multiplication. A sparse format doesn't use much memory, since it stores only the actual data points plus some index overhead, rather than the full array. And it will almost certainly be blazingly fast compared to a pure Python solution.
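A minimal sketch of the scipy.sparse route, with random data at roughly the stated density (shrunk to 8,000 x 8,000 so it runs quickly as a demo):
import scipy.sparse as sp

n = 8000            # demo size; the real matrices are ~80,000 x 80,000
density = 0.006     # < 0.6% nonzeros, as in the update

# Random sparse matrices in CSR format (a good format for products),
# cast to boolean so every stored entry counts as True
A = sp.random(n, n, density=density, format='csr').astype(bool)
B = sp.random(n, n, density=density, format='csr').astype(bool)

C = A @ B           # sparse-sparse product; the result stays sparse in memory
print(C.nnz)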
SOLUTION 2
Another answer here suggests storing values as tuples of (x, y), presuming the value is False unless it exists, then it's True. An alternative to this is a numeric matrix with (x, y, value) tuples.
REGARDLESS: Multiplying these would be nasty time-wise: find element one, decide which other array element to multiply by, then search the entire dataset for that specific tuple, and if it exists, multiply and insert the result into the result matrix.
SOLUTION 3 ( PREFERRED vs. Solution 2, IMHO )
I would prefer this because it's simpler / faster.
Represent your sparse matrix with a set of dictionaries. Matrix one is a dict with the element at (x, y) and value v being (with x1,y1, x2,y2, etc.):
matrixDictOne = { 'x1:y1' : v1, 'x2:y2': v2, ... }
matrixDictTwo = { 'x1:y1' : v1, 'x2:y2': v2, ... }
Since a Python dict lookup is O(1) on average, this is fast: it does not require searching the entire second matrix's data for element presence before multiplication. It's easy to write the multiply and easy to understand the representations.
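A minimal sketch of the dict-based multiply (using (row, col) tuples as keys rather than 'x:y' strings, which avoids string parsing; the data is made up):
from collections import defaultdict

# Sparse boolean matrices as dicts keyed by (row, col); absent keys are False
matrix_one = {(0, 0): True, (3, 7): True, (5, 2): True}
matrix_two = {(0, 4): True, (7, 1): True, (2, 9): True}

# Index matrix_two by row so each lookup during the multiply is O(1) on average
rows_two = defaultdict(list)
for (j, k) in matrix_two:
    rows_two[j].append(k)

result = {}
for (r, c) in matrix_one:
    # entry (r, c) times entry (c, k) contributes to entry (r, k)
    for k in rows_two.get(c, ()):
        result[(r, k)] = True

print(sorted(result))   # [(0, 4), (3, 1), (5, 9)]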
SOLUTION 4 (if you are a glutton for punishment)
Code this solution using a memory-mapped file. Initialize a file of the required size with null values. Compute the offsets yourself and write to the appropriate locations in the file as you do the multiplication. Linux has a VMM which will page in and out for you with little overhead or work on your part. This is a solution for very, very large matrices that are NOT SPARSE and thus won't fit in memory.
Note this solves the complaint of the below complainer that it won't fit in memory. However, the OP did say sparse, which implies very few actual datapoints spread out in giant arrays, and Numpy / SciPy handle this natively and thus nicely (lots of people at Fermilab use Numpy / SciPy regularly, I'm confident the sparse matrix code is well tested).