How can I calculate exp([[1,2,3]]) in C++, like the following in Python's numpy? Is there anything in TensorFlow or any other library that does this?
print(np.exp(np.array([[1,2,3,4],[2,3,4,66]])) )
I need a C++ solution.
The actual mathematical result:
https://www.symbolab.com/solver/matrix-exponential-calculator/e%5E%7B%5Cbegin%7Bpmatrix%7D1%260%260%5C%5C0%261%260%5C%5C0%260%261%5Cend%7Bpmatrix%7D%7D?or=ex
Possible duplicate; see Complex matrix exponential in C++.
Careful here: the matrix exponential is actually very difficult, and the general case is a current topic of research! See https://en.wikipedia.org/wiki/Matrix_exponential. What Tortellini is describing is an element-wise exponentiation, which is not the same thing.
The numpy library computes the element-wise exponential, while the math you linked appears to be the matrix exponential. For a diagonal matrix they agree on the diagonal entries (though element-wise exp turns the off-diagonal zeros into ones).
But the two don't correspond in general (See for example: https://www.wolframalpha.com/input/?i=exp%28%7B%7B1%2C0%2C1%7D%2C%7B0%2C1%2C0%7D%2C%7B0%2C0%2C1%7D%7D%29), so please make sure which one you need.
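To make the distinction concrete, here is a small Python sketch (since the question references numpy): `np.exp` exponentiates each entry independently, while `scipy.linalg.expm` computes the true matrix exponential, i.e. the power series sum of A^k / k!.

```python
import numpy as np
from scipy.linalg import expm

# Diagonal matrix: the two notions agree on the diagonal entries,
# but element-wise exp maps the off-diagonal zeros to exp(0) = 1.
D = np.diag([1.0, 2.0])
elementwise = np.exp(D)   # exponentiates every entry, including the zeros
matrix_exp = expm(D)      # true matrix exponential: sum of D^k / k!

# Non-diagonal matrix: the two results differ.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.exp(A))   # element-wise exponential
print(expm(A))     # matrix exponential
```

For the identity-matrix example linked above, `expm` returns e times the identity, whereas `np.exp` fills the off-diagonal entries with ones.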
Related
Is there a (Python) or pseudocode example of geoToH3 available? I just need this one function and would like to avoid installing the library in my target environment (AWS Glue, PySpark).
I tried to follow the JavaScript implementation, but even that uses C magic internally.
There isn't a pseudocode implementation that I'm aware of, but there's a fairly thorough explanation in the documentation. Roughly:
Select the icosahedron face (0-19) the point lies on (using squared distance to the face centers in 3D space)
Project the point into face-oriented IJK coordinates
Convert the IJK coords to an H3 index by calculating the index digits at each resolution and setting the appropriate bits
The core logic can be found here and here. It's not trivial to implement - unless there's a strong reason to avoid installing the library, that would be the far easier and more reliable option.
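As an illustration of the first step only (face selection), here is a hedged Python sketch. The `FACE_CENTERS` below are placeholder unit vectors, not the real H3 constants - the actual 20 precomputed face centers live in the C library - so this shows the shape of the computation, not a working geoToH3.

```python
import math

# Hypothetical face-center unit vectors; the real H3 library ships 20
# precomputed icosahedron face centers. These three are placeholders.
FACE_CENTERS = [
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
    # ... the real library has 20 entries here
]

def lat_lng_to_xyz(lat, lng):
    """Convert lat/lng in radians to a 3D unit vector on the sphere."""
    return (math.cos(lat) * math.cos(lng),
            math.cos(lat) * math.sin(lng),
            math.sin(lat))

def select_face(lat, lng):
    """Pick the face whose center has minimum squared distance to the point."""
    x, y, z = lat_lng_to_xyz(lat, lng)
    def sq_dist(c):
        return (x - c[0]) ** 2 + (y - c[1]) ** 2 + (z - c[2]) ** 2
    return min(range(len(FACE_CENTERS)), key=lambda i: sq_dist(FACE_CENTERS[i]))
```

Steps 2 and 3 (the IJK projection and the per-resolution digit/bit packing) are where most of the complexity lives, which is why installing the library is usually the better route.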
If I have a vector space spanned by five vectors v1,...,v5, and A = [v1, v2, ..., v5] is 5×n, should I use np.linalg.qr(A) or scipy.linalg.orth(A) to find an orthogonal basis?
Thanks in advance.
Note that sp.linalg.orth uses the SVD while np.linalg.qr uses a QR factorization. Both factorizations are obtained via wrappers for LAPACK functions.
I don't think there is a strong preference for one over the other. The SVD will be slightly more stable but also a bit slower to compute. In practice I don't think you will really see much of a difference.
You'll want to use:
scipy.linalg.orth(A)
The generally accepted rule is to prefer scipy.linalg, because it covers more functionality than np.linalg. Hope that helps!
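To see that both routes yield an orthonormal basis for the column space, here is a numpy-only sketch; the SVD route below is essentially what scipy.linalg.orth does internally (including a rank tolerance), while np.linalg.qr gives you the QR route directly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))   # five vectors stacked as columns

# Route 1: QR factorization. Q has orthonormal columns spanning col(A).
Q, R = np.linalg.qr(A)

# Route 2: SVD with a rank tolerance (the scipy.linalg.orth approach).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]
basis = U[:, s > tol]             # orthonormal basis of col(A)
```

Both `Q` and `basis` satisfy B.T @ B = I and span the same space; the SVD additionally reveals the numerical rank, which matters if your five vectors might be (nearly) linearly dependent.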
I'm trying to understand LSH implementation. I found this on Stack Overflow:
Can you suggest a good minhash implementation?
and I'm trying to follow Duhaime's implementation.
In my case, I want to apply permutations in the minhash (as in the datasketch tool), and I think that implementation isn't a good fit for me.
I'm already starting from a sparse matrix.
Can someone give me some suggestions about this technique? It isn't very widespread, so I can't find much material about Python implementations.
I hope you can help.
Don't just look for example code; try to understand the math behind it.
Obviously, maxhash should work similarly. Or you could omit zero values, but then you should double-check the math.
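As a starting point, here is a minimal sketch of datasketch-style MinHash, using random affine hash functions as "permutations". The class name and parameters are illustrative, not datasketch's actual API; it works directly on the nonzero column indices of a sparse row, which matches the "I already start from a sparse matrix" setup.

```python
import random

class MinHash:
    """Minimal MinHash sketch using random affine hash 'permutations'
    h_i(x) = (a_i * x + b_i) mod p, similar in spirit to datasketch.
    Feed it the set of nonzero feature indices of one sparse row."""

    P = (1 << 61) - 1   # large Mersenne prime modulus

    def __init__(self, num_perm=128, seed=1):
        rnd = random.Random(seed)
        self.params = [(rnd.randrange(1, self.P), rnd.randrange(0, self.P))
                       for _ in range(num_perm)]

    def signature(self, indices):
        """indices: iterable of nonzero feature indices for one item."""
        idx = list(indices)
        return [min((a * x + b) % self.P for x in idx)
                for a, b in self.params]

def jaccard_estimate(sig1, sig2):
    """The fraction of matching signature slots estimates Jaccard similarity."""
    return sum(s1 == s2 for s1, s2 in zip(sig1, sig2)) / len(sig1)
```

With more permutations (`num_perm`) the estimate concentrates around the true Jaccard similarity; LSH banding is then applied on top of these signatures.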
Assume that I have a square matrix M which I would like to invert.
I am trying to use the fractions mpq class from gmpy2 for the entries of my matrix M. If you are not familiar with these fractions, they are functionally similar to Python's built-in fractions module. The only problem is that no package will invert my matrix unless I take the entries out of fraction form, and I need both the entries and the answers as fractions. So I will have to write my own function to invert M.
There are known algorithms that I could implement, such as Gaussian elimination. However, performance is an issue, so my question is as follows:
Is there a computationally fast algorithm that I could use to calculate the inverse of a matrix M?
Is there anything else you know about these matrices? For example, for symmetric positive definite matrices, Cholesky decomposition allows you to invert faster than the standard Gauss-Jordan method you mentioned.
For general matrix inversions, the Strassen algorithm will give you a faster result than Gauss-Jordan but slower than Cholesky.
It seems like you want exact results, but if you're fine with approximate inversions, then there are algorithms which approximate the inverse much faster than the previously mentioned algorithms.
However, you might want to ask yourself if you need the entire matrix inverse for your specific application. Depending on what you are doing it might be faster to use another matrix property. In my experience computing the matrix inverse is an unnecessary step.
I hope that helps!
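As a baseline for comparison, exact Gauss-Jordan over rationals is short to write. The sketch below uses Python's built-in fractions.Fraction; gmpy2's mpq should drop in the same way since it supports the same arithmetic operators, though I have only hand-checked the Fraction version.

```python
from fractions import Fraction

def invert(M):
    """Exact Gauss-Jordan inverse of a square matrix of Fractions.
    Raises ValueError if M is singular."""
    n = len(M)
    # Build the augmented matrix [M | I].
    aug = [[Fraction(M[i][j]) for j in range(n)] +
           [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a nonzero pivot (exact arithmetic: any nonzero entry works).
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot becomes 1.
        inv_p = Fraction(1) / aug[col][col]
        aug[col] = [x * inv_p for x in aug[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [a - factor * b for a, b in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now the inverse.
    return [row[n:] for row in aug]
```

For a few hundred rows this O(n^3) loop in pure Python will be slow; if it becomes a bottleneck, a fraction-free method such as Bareiss elimination keeps intermediate numerators and denominators smaller.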
I have some integer matrices of moderate size (a few hundred rows). I need to solve equations of the form Ax = b where b is a standard basis vector and A is one of my matrices. I have been using numpy.linalg.lstsq for this purpose, but the rounding errors end up being too significant.
How can I carry out an exact symbolic computation?
(PS I don't really need the code to be efficient; I'm more concerned about ease of coding.)
If your only option is free tools written in Python, sympy might work, but it could well be simpler to use Mathematica.
Note that if you're serious about your comment that you require your solution vector to be integer, then you're looking for something called the "integer least squares problem". Which is believed to be NP-hard. There are some heuristic solvers, but it all gets very complicated.
The mpmath library has support for arbitrary-precision floating-point numbers, and supports matrix algebra: http://mpmath.googlecode.com/svn/tags/0.17/doc/build/matrices.html
Using sympy to do the computation exactly is then a second option.
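A minimal sketch of the sympy route: Matrix entries stay exact rationals throughout, so the residual check holds exactly rather than approximately. The small matrix here is just an illustration, not the asker's data.

```python
from sympy import Matrix

# A small integer matrix and a standard basis vector, as in the question.
A = Matrix([[2, 1, 1],
            [1, 3, 2],
            [1, 0, 0]])
b = Matrix([0, 0, 1])

x = A.solve(b)      # exact rational solution, no floating-point rounding
assert A * x == b   # verifies exactly, entry by entry
```

This scales to a few hundred rows without trouble for one-off computations, and since the entries are integers, the solution components come out as sympy Rationals.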