Scientific libraries for Lua? [closed]

Are there any scientific packages for Lua comparable to Scipy?

You should try Torch7 (github).
Torch7 has a very nice and efficient vector/matrix/tensor numerical library
with a Lua front-end. It also has a bunch of functions for computer vision
and machine learning.
It's pretty recent but getting better quickly.

One can always use Lunatic Python and access scipy inside lua.
> require("python")
> numpy = python.import("numpy")
> numpy.array ... etc ..

You have some options:
Numeric Lua - C module for Lua 5.1/5.2, provides matrices, FFT, complex numbers and others
GSL Shell - Modification of Lua (supports Lua libraries) with a nice syntax. Provides almost everything that Numeric Lua does, plus ODE solvers, plotting capabilities, and other nice things. Has great documentation.
SciLua - Pure LuaJIT module. Aims to be a complete framework for scientific computing in Lua. Provides vectors and matrices, random numbers / distributions, optimization, and more. Still in early development.
Lua Numerical Algorithms - Pure LuaJIT module (uses BLAS/LAPACK via the LuaJIT FFI). Provides matrices / linear algebra, FFT, complex numbers, optimization algorithms, an ODE solver, basic statistics (+ PCA, LDA), and more. Still in early development, but has fairly complete documentation and test suites.

There is the basis for one in Numeric Lua.

I'm not sure if it is comparable to Scipy, but there is GSL Shell, based on LuaJIT and the GNU Scientific Library, which offers many numerical algorithms and vector/matrix linear algebra operations.

There's a Numpy-like extension for Lua which runs without dependencies at
https://github.com/jzrake/lunum
In the future it will provide FFTs and linear algebra like NumPy+SciPy. Presently it supports numeric array manipulation as in NumPy.


Best language for battery modelling? [closed]

I'm interested to learn whether there's a general consensus around using one language or environment for building physics-based computational models of batteries.
The modelling typically involves mathematically representing electrochemical, mechanical and thermal phenomena, solving partial differential equations and outputting plots of different variables in two and three dimensions.
So far, I've seen various academic research groups using MATLAB, but from other questions here, I can see that Fortran and Python have been suggested for relatively generic physics modelling. (See here: https://goo.gl/3ACddi)
I have a preference for a free (as in beer & speech) environment, wherever possible, but I recognise that some proprietary environments may have built-in toolboxes that are useful. Additionally, I would like the environment to allow the code to be easily parallelized so that it can run across many cores.
This is a broad question, but I'll share what I've experienced so far. Maybe it's of some use. Keep in mind that this is all my personal opinion.
MATLAB: It's widely used in academic environments. One reason is that MathWorks follows a smart business strategy where educational licenses are very cheap compared to the retail price, so many students and professors get used to MATLAB, even if there might be something better for them out there.
MATLAB has the advantage of being very easy to code. It will often take you a short time to get the first prototype of your code running. This comes at the expense of performance (compared to C/C++ and Python, which are often a bit faster than MATLAB). One of the downsides is that MATLAB was not meant to compete with C/C++ and the like. You don't even have namespaces in MATLAB. Writing frameworks in MATLAB is therefore a whole lot more tiresome (if not impossible) and less efficient than writing one in C/C++. For instance, if you create a function in your workspace called max which does absolutely nothing, you have no way to call MATLAB's built-in max function as long as yours is in the workspace.
C++: I'm studying engineering, and here C++ is the favourite choice when it comes to physical simulations. Compared to other languages it's really fast. And since the programmer is responsible for memory management, he or she can squeeze out the last 10% of performance by writing efficient, case-specific code for handling memory. There's also a ton of open-source libraries out there, for example Eigen, a library for matrix and vector computation.
C: Some people (hello Linus) are convinced that C++ is not a good language and prefer plain C, since it is a bit faster and the library "bloat" (in C++ coming from the standard library, Boost and the like) is smaller. Further arguments against C++ are that it seduces the programmer into creating classes for every little thing and into using polymorphism out of laziness. Both can have a negative impact on performance, but whether that makes it worth refusing to work with C++ at all is up to you to decide. As a side note: the complete Linux kernel is written in C, not C++, and many tools like Git are also written in plain C.
Python: Another language suitable for rapid prototyping, since you don't need to compile much and the syntax is optimized to be easy and intuitive to use. Debuggers are often not necessary, since you can simply use the interpreter to inspect variables and their values, much like in MATLAB. But unlike MATLAB, Python also allows you to create objects with methods and everything else you would expect from a language like C++. (I know that MATLAB recently added classes, but I refuse to say it's equivalent to C++/Python.) Python is also widely used for academic purposes. There are open-source libraries for machine learning, artificial intelligence and just about everything else. There are also libraries which let you use fractions without approximation, i.e. 1/6 is stored as two numbers, numerator and denominator, rather than as a double. In the open-source community, people are putting great effort into porting many features MATLAB has over to Python, which is why you'll find many open-source enthusiasts using it.
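To illustrate the exact-fractions point, here is a tiny sketch using Python's standard-library fractions module (just one of several options):

from fractions import Fraction

x = Fraction(1, 6) + Fraction(1, 3)     # exact rational arithmetic, no rounding
print(x)                                # 1/2
print(x.numerator, x.denominator)       # stored as two integers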
You see, some languages are good for rapid prototyping, meaning scenarios where you want a proof of concept. MATLAB is useful here since you don't have to compile anything and you can quickly visualize results; Python is also worth noting for rapid prototyping. But once you need to deploy the code on actual hardware, or want to sell a finished product with a user interface and everything, you'd probably go with something like C/C++ or Python, but not MATLAB.

MatLab user thinking of learning Python [closed]

I’m considering learning Python with the idea of letting go of MatLab, although I really like MatLab. However, I’m concerned that getting all of the moving and independent pieces to fit together may be a challenge and one that may not be worth it in the end. I’ve also thought about getting into Visual Basic or Visual C++. In the end, I keep coming back to the ease of MatLab. Any thoughts or comments regarding the difficulty of getting going in Python? Is it worth it?
A good place to start is this page: SciPy Getting Started, which gives an overview of the scientific tool stack you can use to move from MatLab to Python: notably the libraries numpy, scipy, matplotlib, and the interactive working environment IPython. In particular, numpy and matplotlib are designed to be very similar to working with MatLab.
NumPy's array type augments the Python language with an efficient data structure useful for numerical work, e.g., manipulating matrices. NumPy also provides basic numerical routines, such as tools for finding eigenvectors.
For example in matlab you might write
eye(3)-diag([1 1],1)
and get back
1 -1 0
0 1 -1
0 0 1
In Python/numpy you would write
import numpy as np
np.eye(3)-np.diag([1,1],1)
and get back
array([[ 1., -1.,  0.],
       [ 0.,  1., -1.],
       [ 0.,  0.,  1.]])
With matplotlib you have full control of line styles, font properties, axes properties, etc., via an object-oriented interface or via a set of functions familiar to MATLAB users.
In MatLab for plotting you might write
x=linspace(-pi, pi, 100);
plot(x,sin(x))
In Python/numpy/matplotlib you would write
import matplotlib.pyplot as plt
x = np.linspace(-np.pi, np.pi, 100)
plt.plot(x, np.sin(x))
plt.show()
There is plenty on the web designed for people making the transition, see, e.g. NumPy for Matlab Users.
MATLAB® and NumPy/SciPy have a lot in common. But there are many differences. NumPy and SciPy were created to do numerical and scientific computing in the most natural way with Python, not to be MATLAB® clones. This page is intended to be a place to collect wisdom about the differences, mostly for the purpose of helping proficient MATLAB® users become proficient NumPy and SciPy users. NumPyProConPage is another page for curious people who are thinking of adopting Python with NumPy and SciPy instead of MATLAB® and want to see a list of pros and cons.
You might also like to consider pylab, which brings together numpy and matplotlib into a single namespace, so you don't have to bother with the np and plt prefixes I used above. See, e.g., wikipedia.
There are tags on this site that are worth looking at: e.g. numpy, scipy, matplotlib. There is also a question on Python at stats.se which you might find relevant. If you are interested in statistics, or in reading, writing and manipulating tabular data, you will be interested in pandas, Python's answer to R's data frame.
As to C++, it's a great language, but not in the same category as Python. This is not the right place to discuss their pros and cons, but in short, C++ is much closer to the machine than Python, and if you spend the time you can write highly optimized code. In Python you can get code working very quickly, gluing together independent pieces and easily reading and writing data from wherever you want, but Python code can sometimes run slowly (it's like Matlab -- if you vectorize in numpy it's fast, otherwise it's interpreted and slow). You might occasionally want to speed up slow Python code using Python's ability to call functions defined in C; see, e.g., this question. (I'll leave Visual Basic to one side as it doesn't seem relevant.)
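To make the vectorization point concrete, here is a rough sketch (timings depend on your machine); both versions compute the same sum, but only the second stays inside NumPy's compiled loops:

import time
import numpy as np

x = np.random.rand(1_000_000)

# Plain Python loop: interpreted, element by element
t0 = time.perf_counter()
total = 0.0
for value in x:
    total += value
t_loop = time.perf_counter() - t0

# Vectorized call: the loop runs in compiled code inside NumPy
t0 = time.perf_counter()
total_vec = x.sum()
t_vec = time.perf_counter() - t0

print(t_loop, t_vec)    # the vectorized version is typically far faster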
Finally, as noted in the comments, answering any specifics would involve knowing exactly what your requirements were, not just what you want to do, but who you want to do it with, and how much time and money you have available to invest.
Yes, Python is worth learning. It was my first major language, which I learned over ten years ago. It's used heavily in Linux systems, especially during system boot. The libraries for the language are also very well developed. If you want to design a game, PyGame has been around a long time and makes the process pretty easy. If you want to do network programming, Twisted is an excellent library. And the Django web framework is a beauty.
I found Python to be a very English-like language to get into.
If you'd like, you might also take a peek at Ruby. Ruby allows a lot of different styles of programming, so you can program in Ruby just as you did in other languages (I mean strictly in style). Ruby also has the most free resources for learning and getting into the language online that I've ever seen.
I love both languages. But my answer to you is "Yes!" Python is a great first language.
It all depends on what you want to do and what methods you prefer.
You can find many mathematical plotting libraries here: https://wiki.python.org/moin/NumericAndScientific/Plotting Many of which could be an excellent alternative to Matlab.
Python is great for firmware programming, for example with Arduinos. C++ is great and very powerful for programming software and applications. If you want to program hardware, go with Python. If you want to program software, go with C++. I'm learning C++ and it's great.

PyOpenCL vs Clyther vs pure OpenCL and C99: what's the best for novice? [closed]

I have a problem: solving linear systems fast (I have a lot of such systems). I'm going to solve it using the GPU and OpenCL.
I love dynamic languages such as Ruby or Python, and I have gotten out of the habit of using low-level languages like C.
So I have two simultaneous aims:
Develop an OpenCL solution for solving linear systems as quickly as I can, with as little effort as possible.
Not lose too much performance. I don't want to pay a 2-10x slowdown for convenience, but I'm ready to pay 30-50% to work in a high-level language.
The best case for me: almost-Python code that compiles to OpenCL C with almost no overhead.
The solutions I have found: pure OpenCL C, PyOpenCL, Clyther.
Which should I start with?
My opinion is that trying to shoehorn a dynamic language into OpenCL is not worth the effort. You will lose most of what you like about Python, and probably not save much time for your effort in the end.
But I am speaking only of writing OpenCL kernels in Python. There is also the host application, which prepares and submits the kernels. If you like Python, I suggest writing the host app in pure Python with a wrapper like PyOpenCL to access the OpenCL API. Then, write your kernels in pure OpenCL and have your Python app submit them as-is. I believe this will get most of what you want from Python while costing almost nothing in performance.
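As a rough sketch of that split (Python host code via PyOpenCL, kernel written in plain OpenCL C), assuming PyOpenCL is installed and an OpenCL device is available; the kernel here is just a vector add, not a linear solver:

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel stays ordinary OpenCL C, embedded as a string
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print(np.allclose(result, a + b))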
The hardest part of programming with OpenCL is parallelizing your algorithm -- and that means writing your kernels. Chances are, you will be spending the majority of your time tweaking and understanding your OpenCL C code, which AFAIK is your only choice for writing kernels.
That being the case, I say go for a pure C / OpenCL implementation. Once you have the "boilerplate" OpenCL API portion up and running, you are not likely to have to change much of it. If anything, you will be playing with things like the workgroup size you pass to clEnqueueNDRangeKernel.
If you're a novice with CL, I say keep it simple. Adding another software layer to the problem -- especially a problem as well defined as a linear solver -- only complicates your efforts.
EDIT:
I should add that you broaden your potential for online help / support when you use the standard OpenCL API. If you choose to go with one of the python bindings, you might limit your potential support to the folks from those communities.

Python GPU programming [closed]

I am currently working on a project in python, and I would like to make use of the GPU for some calculations.
At first glance it seems like there are many tools available; at second glance, I feel like I'm missing something.
Copperhead looks awesome but has not yet been released. It would appear that I'm limited to writing low-level CUDA or OpenCL kernels; no Thrust, no CUDPP. If I'd like to have something sorted, I'm going to have to do it myself.
That doesn't seem quite right to me. Am I indeed missing something? Or is GPU scripting not quite living up to the hype yet?
Edit: GPULIB seems like it might be what I need. The documentation is rudimentary, and the Python bindings are mentioned only in passing, but I'm applying for a download link right now. Does anyone have experience with it, or links to similar free-for-academic-use GPU libraries? Re-edit: OK, the Python bindings are in fact nonexistent.
Edit 2: So I guess my best bet is to write something in C/CUDA and call that from Python?
PyCUDA provides very good integration with CUDA and has several helper interfaces that make writing CUDA code easier than in the straight C API. Here is an example from the wiki which does a 2D FFT without needing any C code at all.
I will post here some information that I read on Reddit. It should be useful for people arriving without a clear idea of what the different packages do and how they connect CUDA with Python:
From: Reddit
There's a lot of confusion in this thread about what various projects aim to do and how ready they are. There is no "GPU backend for NumPy" (much less for any of SciPy's functionality). There are a few ways to write CUDA code inside of Python, and some GPU array-like objects which support subsets of NumPy's ndarray methods (but not the rest of NumPy, like linalg, fft, etc.).
PyCUDA and PyOpenCL come closest. They eliminate a lot of the plumbing surrounding launching GPU kernels (simplified array creation & memory transfer, no need for manual deallocation, etc...). For the most part, however, you're still stuck writing CUDA kernels manually, they just happen to be inside your Python file as a triple-quoted string. PyCUDA's GPUarray does include some limited NumPy-like functionality, so if you're doing something very simple you might get away without writing any kernels yourself.
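For illustration, a minimal GPUarray sketch (assuming PyCUDA is installed and a CUDA device is present); no kernel source is needed for simple elementwise work like this:

import numpy as np
import pycuda.autoinit              # creates a CUDA context on the default device
import pycuda.gpuarray as gpuarray

a_gpu = gpuarray.to_gpu(np.random.randn(4, 4).astype(np.float32))  # host -> device
doubled = (2 * a_gpu).get()         # elementwise arithmetic on the GPU, then device -> host
print(doubled)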
NumbaPro includes a "cuda.jit" decorator which lets you write CUDA kernels using Python syntax. It's not actually much of an advance over what PyCUDA does (quoted kernel source); it's just that your code now looks more Pythonic. It definitely doesn't, however, automatically run existing NumPy code on the GPU.
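NumbaPro itself was proprietary, but the same decorator style lives on in the open-source numba.cuda (see the pyculib note further down). A rough sketch, assuming Numba and a working CUDA setup are installed:

import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)                 # absolute index of this thread
    if i < x.size:
        out[i] = x[i] + y[i]

n = 100_000
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)   # NumPy arrays are copied to the device automatically
print(out[:5])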
Theano lets you construct symbolic expression trees and then compiles them to run on the GPU. It's not NumPy and only has equivalents for a small subset of NumPy's functionality.
gnumpy is a thinly documented wrapper around CudaMat. The only supported element type is float32 and only a small subset of NumPy is implemented.
I know that this thread is old, but I think I can bring some relevant information that answers the question asked.
Continuum Analytics has a package that contains libraries that handle the CUDA computing for you. Basically, you instrument the code that needs to be parallelized (within a function) with a decorator, and you need to import a library. Thus, you don't need any knowledge of CUDA instructions.
Information can be found on the NVIDIA page
https://developer.nvidia.com/anaconda-accelerate
or you can go directly to the Continuum Analytics page
https://store.continuum.io/cshop/anaconda/
There is a 30-day trial period and a free licence for academics.
I use this extensively and it accelerates my code by a factor of 10 to 50.
Theano looks like it might be what you're looking for. From what I understand, it is very capable of doing some heavy mathematical lifting with the GPU and appears to be actively maintained.
Good luck!
Check this page for an open-source library distributed with Anaconda
https://www.anaconda.com/blog/developer-blog/open-sourcing-anaconda-accelerate/
" Today, we are releasing a two new Numba sub-projects called pyculib and pyculib_sorting, which contain the NVIDIA GPU library Python wrappers and sorting functions from Accelerate. These wrappers work with NumPy arrays and Numba GPU device arrays to provide access to accelerated functions from:
cuBLAS: Linear algebra
cuFFT: Fast Fourier Transform
cuSparse: Sparse matrix operations
cuRand: Random number generation (host functions only)
Sorting: Fast sorting algorithms ported from CUB and ModernGPU
Going forward, the Numba project will take stewardship of pyculib and pyculib_sorting, releasing updates as needed when new Numba releases come out. These projects are BSD-licensed, just like Numba."
Have you taken a look at PyGPU?
http://fileadmin.cs.lth.se/cs/Personal/Calle_Lejdfors/pygpu/
I can recommend scikits.cuda, but for that you need to download the full version of CULA (free for students). Another option is CUV.
If you are looking for something better and are ready to pay for it, you can also take a look at ArrayFire. Right now I am using scikits.cuda and am quite satisfied so far.

Open source alternative to MATLAB's fmincon function? [closed]

Is there an open-source alternative to MATLAB's fmincon function for constrained linear optimization? I'm rewriting a MATLAB program to use Python / NumPy / SciPy and this is the only function I haven't found an equivalent to. A NumPy-based solution would be ideal, but any language will do.
Is your problem convex? Linear? Non-linear? I agree that scipy.optimize will probably do the job, but fmincon is a sort of bazooka for solving optimization problems, and you'll be better off if you can confine your problem to one of the categories below (in increasing order of difficulty to solve efficiently):
Linear Program (LP)
Quadratic Program (QP)
Convex Quadratically-Constrained Quadratic Program (QCQP)
Second Order Cone Program (SOCP)
Semidefinite Program (SDP)
Non-Linear Convex Problem
Non-Convex Problem
There are also combinatorial problems such as Mixed-Integer Linear Programs (MILP), but you didn't mention any sort of integrality constraints; suffice it to say that they fall into a different class of problems.
The CVXOpt package will be of great use to you if your problem is convex.
If your problem is not convex, you need to choose between finding a local solution or the global solution. Many convex solvers 'sort of' work in a non-convex domain. Finding a good approximation to the global solution would require some form of simulated annealing or a genetic algorithm. Finding the true global solution will require an enumeration of all local solutions or a combinatorial strategy such as branch and bound.
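For a concrete taste of CVXOPT, here is a minimal quadratic-program sketch with made-up data (assuming cvxopt is installed); a real problem would build P, q, G, h from your model:

from cvxopt import matrix, solvers

# minimize (1/2) x' P x + q' x   subject to   G x <= h
# Note: cvxopt.matrix is column-major, so each inner list below is a column.
P = matrix([[2.0, 0.0], [0.0, 2.0]])
q = matrix([-2.0, -5.0])
G = matrix([[-1.0, 0.0], [0.0, -1.0]])   # encodes x >= 0
h = matrix([0.0, 0.0])

sol = solvers.qp(P, q, G, h)
print(sol['x'])                          # optimum near (1.0, 2.5)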
Python optimization software:
OpenOpt http://openopt.org (this one is numpy-based as you wish, with automatic differentiation by FuncDesigner)
Pyomo https://software.sandia.gov/trac/coopr/wiki/Package/pyomo
CVXOPT http://abel.ee.ucla.edu/cvxopt/
NLPy http://nlpy.sourceforge.net/
The open-source Python package SciPy has quite a large set of optimization routines, including some for multivariable problems with constraints (which is what fmincon does, I believe). Once you have SciPy installed, type the following at the Python prompt:
import scipy.optimize
help(scipy.optimize)
The resulting document is extensive and includes the following which I believe might be of use to you.
Constrained Optimizers (multivariate)
  fmin_l_bfgs_b -- Zhu, Byrd, and Nocedal's L-BFGS-B constrained optimizer
                   (if you use this please quote their papers -- see help)
  fmin_tnc      -- Truncated Newton Code originally written by Stephen Nash and
                   adapted to C by Jean-Sebastien Roy.
  fmin_cobyla   -- Constrained Optimization BY Linear Approximation
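Those are the older standalone function names; in more recent SciPy versions the same kinds of solvers sit behind scipy.optimize.minimize. A minimal constrained sketch in that newer style, with a made-up objective and linear inequality constraints:

from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# Inequality constraints are expressed as g(x) >= 0
constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},
    {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2 * x[1] + 2},
]
bounds = [(0, None), (0, None)]

result = minimize(objective, x0=[2.0, 0.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x)    # roughly [1.4, 1.7]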
GNU Octave is another MATLAB clone that might have what you need.
For numerical optimization in Python you may take a look at OpenOpt solvers:
http://openopt.org/NLP
http://openopt.org/Problems
I don't know if it's in there, but there's a Python distribution called Enthought that might have what you're looking for. It was designed specifically for data analysis and has over 60 additional libraries.
Have a look at http://www.aemdesign.com/downloadfsqp.htm.
There you will find C code which provides the same functionality as fmincon. (However, using a different algorithm. You can read the manual if you are interested in the details.)
It's open source but not under GPL.
There is a program called Scilab that is a MATLAB clone.
I haven't used it at all, but it is open source and might have the function you are looking for.
The latest version of Octave implements an equivalent of MATLAB's fmincon function in its optimization package.
https://octave.sourceforge.io/optim/function/fmincon.html
Scilab has an implementation of fmincon (using IPOpt) which is now regularly updated:
https://atoms.scilab.org/toolboxes/fmincon
For large-scale optimization it outperforms Matlab's fmincon.
