Anything like SciPy in Ruby?

Looking further into the differences between Python and Ruby, is there a Ruby equivalent to SciPy, or what other scientific math gems are available for Ruby?

There's nothing quite as mature or well done as SciPy, but check out SciRuby and Numerical Ruby.

rnum: the Ruby Numerical Library is a linear algebra package built on BLAS and LAPACK. The site (http://rnum.rubyforge.org/) has some speed comparisons.

Someone else mentioned NArray and Ara T. Howard's / #drawohara's SciRuby.
But there's also a new SciRuby project (proceeding with Ara's and Masahiro Tanaka's blessings) which includes a dense and sparse matrix gem, NMatrix. It's not finished yet, but it does basic stuff.
The SciRuby github account also has a bunch of other useful gems, including statsample (for statistics) and rubyvis (for visualization).

linalg: https://github.com/wedesoft (installation instructions: http://www.quora.com/Installation-Instructions/How-do-I-install-Ruby-linalg-library-on-Mac)

Related

Solving a large sparse linear system of equations: Python vs MATLAB [duplicate]

I want to compute the magnetic fields of some conductors using the Biot–Savart law, and I want to use a 1000x1000x1000 matrix. I previously used MATLAB, but now I want to use Python. Is Python slower than MATLAB? How can I make Python faster?
EDIT:
Maybe the best way is to compute the big array with C/C++ and then transfer it to Python. I then want to visualise it with VPython.
EDIT2: Which is better in my case: C or C++?
You might find some useful results at the bottom of this link
http://wiki.scipy.org/PerformancePython
From the introduction,
A comparison of weave with NumPy, Pyrex, Psyco, Fortran (77 and 90) and C++ for solving Laplace's equation.
It also benchmarks MATLAB, which shows speeds similar to Python with NumPy.
Of course this is only one specific example; your application might see better or worse performance. There is no harm in running the same test on both and comparing.
You can also compile NumPy with optimized libraries such as ATLAS which provides some BLAS/LAPACK routines. These should be of comparable speed to MATLAB.
I'm not sure whether the prebuilt NumPy downloads are already built against it, but ATLAS will tune its libraries to your system if you compile NumPy yourself:
http://www.scipy.org/Installing_SciPy/Windows
The link has more details on what is required under the Windows platform.
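Whichever build you end up with, it is worth confirming which BLAS/LAPACK your NumPy is actually linked against before benchmarking anything. A quick check using NumPy's own show_config:

    import numpy as np

    # Prints the BLAS/LAPACK sections of NumPy's build configuration,
    # e.g. whether it was linked against ATLAS, OpenBLAS, or MKL.
    np.show_config()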
EDIT:
If you want to find out whether C or C++ performs better, it might be worth asking a new question, although from the link above C++ has the best performance. Other solutions are quite close too, e.g. Pyrex, Python/Fortran (using f2py), and inline C++.
The only matrix algebra I have ever done in C++ was using MTL to implement an extended Kalman filter. In essence, though, it depends on which LAPACK/BLAS libraries you are using and how well optimised they are.
This link has a list of object-oriented numerical packages for many languages.
http://www.oonumerics.org/oon/
NumPy and MATLAB both use an underlying BLAS implementation for standard linear algebra operations. For some time both used ATLAS, but nowadays MATLAB apparently also ships with other implementations like Intel's Math Kernel Library (MKL). Which one is faster, and by how much, depends on the system and on how the BLAS implementation was compiled. You can also compile NumPy with MKL, and Enthought is working on MKL support for their Python distribution (see their roadmap). Here is also a recent interesting blog post about this.
On the other hand, if you need more specialized operations or data structures then both Python and MATLAB offer you various ways for optimization (like Cython, PyCUDA,...).
Edit: I corrected this answer to take into account different BLAS implementations. I hope it is now a fair representation of the current situation.
The only valid test is to benchmark it. It really depends on what your platform is, and how well the Biot-Savart Law maps to Matlab or NumPy/SciPy built-in operations.
As for making Python faster, Google's working on Unladen Swallow, a JIT compiler for Python. There are probably other projects like this as well.
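To benchmark it concretely, a minimal timing sketch like the following works on the NumPy side (sizes here are placeholders; note that a full 1000x1000x1000 float64 array is about 8 GB, so the computation will need to be blocked in any language):

    import time
    import numpy as np

    # Time one BLAS-backed operation; run the equivalent in MATLAB
    # (tic/toc around a matrix product) and compare.
    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)

    t0 = time.perf_counter()
    c = a @ b  # dispatched to the underlying BLAS dgemm
    print(f"matmul took {time.perf_counter() - t0:.3f} s")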
As per your edit 2, I recommend very strongly that you use Fortran because you can leverage the available linear algebra subroutines (Lapack and Blas) and it is way simpler than C/C++ for matrix computations.
If you prefer to go with a C/C++ approach, I would use C, because you presumably need raw performance behind a simple interface (matrix computations tend to have simple interfaces and complex algorithms).
If, however, you decide to go with C++, you can use TNT (the Template Numerical Toolkit), a C++ library of numerical array classes with accompanying linear algebra routines.
Good luck.
If you're just using Python (with NumPy), it may be slower, depending on which pieces you use, whether or not you have optimized linear algebra libraries installed, and how well you know how to take advantage of NumPy.
To make it faster, there are a few things you can do. There is a tool called Cython that lets you add type declarations to Python code and translate it into a Python extension module in C. How much benefit this gets you depends on how diligent you are with your type declarations: if you don't add any at all, you won't see much benefit. Cython also has support for NumPy types, though these are a bit more complicated than other types.
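For a flavour of what those declarations look like, here is a minimal sketch in Cython's "pure Python" mode (the function is made up for illustration; it assumes the cython package is installed so the cython shadow module imports even when running uncompiled):

    import cython

    def integrate_f(a: cython.double, b: cython.double, n: cython.int) -> cython.double:
        """Midpoint-rule integral of f(x) = x**2 - x over [a, b]."""
        dx: cython.double = (b - a) / n
        s: cython.double = 0.0
        i: cython.int
        for i in range(n):
            x: cython.double = a + (i + 0.5) * dx
            s += (x * x - x) * dx
        return s

The same file runs under plain CPython and compiles with cythonize; once compiled, the annotations let Cython drop the Python object machinery inside the loop, which is where the speedup comes from.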
If you have a good graphics card and are willing to learn a bit about GPU computing, PyCUDA can also help. (If you don't have an NVIDIA graphics card, I hear a PyOpenCL is in the works as well.) I don't know your problem domain, but if it can be mapped onto a CUDA problem then it should be able to handle your 10^9 elements nicely.
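A minimal PyCUDA sketch of the elementwise case (assumes an NVIDIA GPU and the pycuda package; the array size is a placeholder):

    import numpy as np
    import pycuda.autoinit  # creates a CUDA context on import
    import pycuda.gpuarray as gpuarray

    x = np.random.randn(10_000_000).astype(np.float32)
    d_x = gpuarray.to_gpu(x)     # copy host -> device
    d_y = 3.0 * d_x * d_x + 2.0  # elementwise kernels run on the GPU
    y = d_y.get()                # copy device -> host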
And here is an updated "comparison" between MATLAB and NumPy/MKL based on some linear algebra functions:
http://dpinte.wordpress.com/2010/03/16/numpymkl-vs-matlab-performance/
The dot product is not that slow ;-)
I couldn't find many hard numbers to answer this same question, so I went ahead and did the testing myself. The results, scripts, and data sets used are all available on my post on MATLAB vs Python speed for vibration analysis.
Long story short, the FFT function in MATLAB is better than Python's, but you can do some simple manipulation to get comparable results and speed. I also found that importing data was faster in Python than in MATLAB (even for MAT files, using scipy.io).
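One common manipulation of this kind (not necessarily the one in the linked post) is padding to an FFT-friendly length and using multiple worker threads via scipy.fft:

    import numpy as np
    from scipy.fft import fft, next_fast_len

    x = np.random.randn(1_000_000)
    n = next_fast_len(len(x))    # pad to a fast (smooth-factor) length
    X = fft(x, n=n, workers=-1)  # use all available cores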
I would also like to point out that Python (+NumPy) can easily interface with Fortran via the F2Py module, which basically nets you native Fortran speed for the pieces of code you offload to it.
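For illustration, a hypothetical f2py round trip looks like this (the module, file, and subroutine names are made up):

    # Given a Fortran file fnorm.f90 containing, for example:
    #
    #   subroutine sqsum(x, n, s)
    #     integer, intent(in) :: n
    #     real(8), intent(in) :: x(n)
    #     real(8), intent(out) :: s
    #     s = sum(x * x)
    #   end subroutine
    #
    # building it with
    #   python -m numpy.f2py -c -m fnorm fnorm.f90
    # produces an importable extension module:
    import numpy as np
    import fnorm  # the f2py-generated module from the build step above

    x = np.arange(1_000_000, dtype=np.float64)
    print(fnorm.sqsum(x))  # f2py infers n from x, so only x is passed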

Expokit implementation in Python

I am looking for a Python implementation of Expokit, a software package that provides matrix exponential routines for small dense or very large sparse matrices, real or complex, i.e. it finds
w(t) = exp(t*A)*v
The package is implemented in Fortran and MATLAB and can be found here: https://www.maths.uq.edu.au/expokit/
I have found a Python wrapper, expokitpy
https://github.com/weinbe58/expokitpy, and a Krylov subspace methods package, KryPy: https://github.com/andrenarchy/krypy. Both seem to be relevant, but neither comes with good enough documentation (for me) to do time evolution.
Does somebody have a working solution with the packages mentioned above or similar?
In case this is still useful to someone: it looks like there was an effort to incorporate Expokit within SciPy, which has now stalled and is looking for somebody to finish it. There are, though, some instructions to compile it with Fortran and then run it via Python, with good results.
It seems also to have been adopted by slepc4py, which is then used by quimb, which seems useful if you need it for quantum information (or just use its expm and expm_multiply methods).
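If SciPy alone is acceptable, scipy.sparse.linalg.expm_multiply already computes the action w(t) = exp(t*A) @ v without forming the full matrix exponential. A minimal sketch (size and density are placeholders):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import expm_multiply

    n = 1000
    A = sp.random(n, n, density=0.01, format="csr")  # large sparse matrix
    v = np.ones(n)
    t = 0.5

    w = expm_multiply(t * A, v)  # w = exp(t*A) @ v, exp(t*A) never formed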

Computing eigenvalues and eigenvectors using IronPython 2.7

This might be a frequent problem for IronPython users, but I am new to Python.
I wish to compute the eigenvalues and eigenvectors of an approximately 50x50 matrix using IronPython 2.7.
I explored the possibility of using NumPy and SciPy, but they are not supported on IronPython.
Are there any better ways to achieve what I require?
Sho for IronPython is a good environment by Microsoft that does provide what you want. I am glad it worked for you.
To quote the link itself
Sho is a playground for data: a set of libraries and an interactive environment for rapid prototyping, data analysis, and visualization.
The linear algebra classes in Sho do provide multiple functions for computing the eigenvalues, as shown in the book of Sho.

OpenGL matrix math utilities for Python?

Before I do it myself: are there any Python libraries available for OpenGL-specific/compatible matrix math on 4x4 matrices? Basically, I need roughly the feature set offered by Android's android.opengl.Matrix class.
I created the library Pyrr to provide all the maths features you need for Core OpenGL.
It features matrices, vectors and quaternions and basic support for other primitives (rectangles, rays, lines, etc).
It has both a procedural API and, more recently, an Object Oriented API which is very powerful.
It's available on PyPI (pip install pyrr) and from GitHub.
Feedback, issues and new features are welcome!
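A small sketch of the object-oriented API (method names taken from the project's documentation; verify against the version you install):

    import numpy as np
    from pyrr import Matrix44, Vector3

    proj = Matrix44.perspective_projection(45.0, 16 / 9, 0.1, 100.0)
    model = Matrix44.from_translation(Vector3([0.0, 0.0, -5.0]))
    mvp = proj * model  # Matrix44 overloads * as matrix multiplication
                        # (check Pyrr's conventions for the order)
    data = np.asarray(mvp, dtype=np.float32)  # ready for glUniformMatrix4fv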
You can use numpy to generate data that is compatible with OpenGL. Many of the PyOpenGL calls can take numpy data structures directly (assuming it's the correct type). Additionally, numpy arrays are typically well arranged in memory, and so you can do what you want with the data (and it's easy to check how they are arranged).
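For example, a perspective projection matrix takes only a few lines of plain NumPy. Here is a sketch of the standard gluPerspective formula; remember that OpenGL expects column-major data, so either transpose this row-major array or pass transpose=GL_TRUE when uploading it:

    import numpy as np

    def perspective(fovy_deg, aspect, near, far):
        # Row-major version of the classic gluPerspective matrix.
        f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
        m = np.zeros((4, 4), dtype=np.float32)
        m[0, 0] = f / aspect
        m[1, 1] = f
        m[2, 2] = (far + near) / (near - far)
        m[2, 3] = 2.0 * far * near / (near - far)
        m[3, 2] = -1.0
        return m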
PyGLM might be something to look at as well. The author claims it is 2x to 10x as fast as numpy.
It is based on the popular OpenGL Mathematics (GLM) C++ header-only library.
If you already know what you want in OpenGL, why not use PyOpenGL? I believe all the functionality you want should be there, and here are some docs on doing matrix transformations and interfacing with NumPy.

Lua Equivalent for NumPy and SciPy?

I'm thinking of learning Lua. I understand that it is a smaller language than Python and has an efficient JIT compiler implementation in the form of LuaJIT.
I would like to know whether it is possible to use Lua the way I use Python with NumPy + SciPy.
Furthermore, if Lua has a NumPy + SciPy equivalent, does it have a matplotlib equivalent?
One more thing: I found something similar to SciPy + matplotlib for Lua, GSL Shell, though I'm not sure how active its development is; the last release was in January 2011.
It leverages the GSL library and has plotting capabilities; it is more like MATLAB than SciPy + matplotlib.
There is numlua, but since it depends on BLAS/LAPACK, FFTW, and HDF5, LuaJIT will not buy you any performance gain with numlua per se.
I am authoring the Lunum project, which has no dependencies and can be used as a shared module or embedded in other C applications.
It's in active development and used in serious physics research. It supports a good subset of the NumPy semantics. Array slicing will be ready in the next release.
https://github.com/jzrake/lunum
I would also have a look at SciLua:
A complete framework for numerical computing based on LuaJIT which combines the ease of use of scripting languages (MATLAB, R, ...) with the high performance of compiled languages (C/C++, Fortran, ...).
