How is scipy.special.expi implemented?

I was using scipy.special.expn when I realized I could be using expi instead, which should be much faster judging from the Cephes code I expected it to be based on. But switching from expn to expi made almost no difference in runtime.
This made me suspect that expi is implemented by an equivalent call to expn which does not take advantage of the simpler conditions in force for expi. But looking through the source code for scipy I am baffled as to how expi is implemented. I can find the C source for expn but not expi.
Can someone clarify how expi is implemented and/or where I can find the source for it?
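For reference, the comparison I timed was roughly of this shape (the array size, range, and repetition count are arbitrary):

import timeit
import numpy as np
from scipy.special import expi, expn

x = np.random.uniform(0.1, 10.0, 100_000)

# For x > 0, expn(1, x) = E1(x) = -expi(-x), so both calls do equivalent work
print(timeit.timeit(lambda: expn(1, x), number=20))
print(timeit.timeit(lambda: expi(-x), number=20))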

Computing log of integral in terms of log of integrand

This question may be half computational math, half programming.
I'm trying to estimate log[\int_0^\infty\int_0^\infty f(x,y)dxdy] [actually thousands of such integrals] in Python. The function f(x,y) involves some very large/very small numbers that are bound to cause overflow/underflow errors; so I'd really prefer to work with log[f(x,y)] instead of f(x,y).
Thus my question is two parts:
1) Is there a way to estimate log[\int_0^\infty\int_0^\infty f(x,y)dxdy] using the log of the function instead of the function itself?
2) Is there an implementation of this in Python?
Thanks
I would be surprised if the math and/or numpy libraries, or perhaps some more specialized third-party libraries, could not solve a problem like this. Here are some of their log functions:
math.log(x[, base]), math.log1p(x), math.log2(x), math.log10(x) (https://docs.python.org/3.3/library/math.html)
numpy.log, numpy.log10, numpy.log2, numpy.log1p, numpy.logaddexp, numpy.logaddexp2 (https://numpy.org/doc/stable/reference/routines.math.html#exponents-and-logarithms)
Generally, just google "logarithm python library" and try to identify similar Stack Overflow questions, which will allow you to find the right libraries and functions to try out. Once you do that, you can follow this guide so that someone can help you get from input to expected output: How to make good reproducible pandas examples
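For the specific overflow problem, though, the standard trick is to keep everything in log space and combine terms with scipy.special.logsumexp (numpy.logaddexp above is its two-argument cousin). A minimal sketch, with a toy log-integrand standing in for your log f(x, y) and the infinite domain truncated to a finite grid:

import numpy as np
from scipy.special import logsumexp

def log_f(x, y):
    # toy log-integrand: f(x, y) = exp(-x - y), whose integral is 1, so the log is 0
    return -x - y

# truncate [0, inf) to a grid on which log_f has decayed to negligible values
x = np.linspace(0.0, 50.0, 2001)
y = np.linspace(0.0, 50.0, 2001)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")

# log of a Riemann sum: log(sum(exp(log_f)) * dx * dy), computed stably
log_integral = logsumexp(log_f(X, Y)) + 2.0 * np.log(dx)
print(log_integral)  # ~0.0, up to discretization/truncation error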

Expokit implementation in Python

I am looking for a Python implementation of Expokit, a software package that provides matrix exponential routines for small dense or very large sparse matrices, real or complex, i.e. it computes
w(t) = exp(t*A)*v
The package is implemented in Fortran and Matlab and can be found here: https://www.maths.uq.edu.au/expokit/
I have found a python wrapper expokitpy
https://github.com/weinbe58/expokitpy and a Krylov subspace methods package KryPy https://github.com/andrenarchy/krypy. Both seem to be relevant; however, neither comes with good enough documentation (for me) to do time evolution.
Does somebody have a working solution with the packages mentioned above or similar?
In case this is still useful to someone: it looks like there was an effort to incorporate Expokit into scipy which has since stalled and is looking for somebody to finish it. There are, however, instructions for compiling the Fortran and then calling it from Python, with good results.
Expokit also seems to have been adopted by slepc4py, which is in turn used by quimb; that looks useful if you need it for quantum information (or you can just use its expm and expm_multiply methods).
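If all you actually need is the action w = exp(t*A)*v, scipy's built-in scipy.sparse.linalg.expm_multiply may already be enough. A minimal sketch, with a random sparse matrix standing in for your operator A:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

n = 1000
A = sp.random(n, n, density=1e-3, format="csr")  # stand-in for your operator
v = np.random.rand(n)

t = 0.5
w = expm_multiply(t * A, v)  # w = exp(t*A) @ v, without ever forming exp(t*A)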

Seasonal-Trend-Loess Method for Time Series in Python

Does anyone know if there is a Python-based procedure to decompose time series utilizing STL (Seasonal-Trend-Loess) method?
I saw references to a wrapper program to call the stl function in R, but I found that to be unstable and cumbersome from the environment set-up perspective (Python and R together). Also, the link was four years old.
Can someone point out something more recent (e.g. sklearn, scipy, etc.)?
I haven't tried STLDecompose but I took a peek at it and I believe it uses a general purpose loess smoother. This is hard to do right and tends to be inefficient. See the defunct STL-Java repo.
The pyloess package provides a Python wrapper to the same underlying Fortran that is used by the original R version. You definitely don't need to go through a bridge to R to get this functionality! The package is not actively maintained and I've occasionally had trouble getting it to build on some platforms (thus the fork here). But once built, it does work and is the fastest one you're likely to find. I've been tempted to modify it to include some new features, but just can't bring myself to modify the Fortran (which is preprocessed RATFOR, a very assembly-language-like Fortran, and I can't find a RATFOR preprocessor anywhere).
I wrote a native Java implementation, stl-decomp-4j, that can be called from python using the pyjnius package. This started as a direct port of the original Fortran, refactored to a more modern programming style. I then extended it to allow quadratic loess interpolation and to support post-decomposition smoothing of the seasonal component, features that are described in the original paper but that were not put into the Fortran/R implementation. (They apparently are in the S-plus implementation, but few of us have access to that.) The key to making this efficient is that the loess smoothing simplifies when the points are equidistant and the point-by-point smoothing is done by simply modifying the weights that one is using to do the interpolation.
The stl-decomp-4j examples include one Jupyter notebook demonstrating how to call this package from python. I should probably formalize that as a python package but haven't had time. Quite willing to accept pull requests. ;-)
I'd love to see a direct port of this approach to python/numpy. Another thing on my "if I had some spare time" list.
Here you can find an example of Seasonal-Trend decomposition using LOESS (STL), from statsmodels.
Basically it works this way:
from statsmodels.tsa.seasonal import STL

# time_series: a pandas Series with a regular DatetimeIndex (e.g. monthly data)
stl = STL(time_series, seasonal=13)  # seasonal: length of the seasonal smoother (odd)
res = stl.fit()
fig = res.plot()  # plot the observed, trend, seasonal, and residual components
There is indeed:
https://github.com/jrmontag/STLDecompose
In the repo you will find a Jupyter notebook demonstrating usage of the package.
RSTL is a Python port of R's STL: https://github.com/ericist/rstl. It works pretty well except it is 3~5 times slower than R's STL according to the author.
If you just want a lowess trend line, you can use Statsmodels' lowess function:
https://www.statsmodels.org/dev/generated/statsmodels.nonparametric.smoothers_lowess.lowess.html
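A minimal sketch of that function in use (the data here is synthetic):

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + np.random.normal(scale=0.3, size=x.size)

# frac: fraction of the data used for each local regression
smoothed = lowess(y, x, frac=0.2)  # returns an (n, 2) array of (x, fitted) pairs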

Which mathematical method is used by odeint?

I'm working with scipy.integrate.odeint and want to understand it better. For this I have two slightly related questions:
Which mathematical method is it using? Runge-Kutta? Adams-Bashforth? I found this site, but it seems to be for C++; as far as I know the Python function uses that version as well... It states that it switches automatically between implicit and explicit solvers; does anybody know how it does this?
To understand/reuse the information I would like to know at which time points it evaluates the function and how exactly it computes the solution of the ODE, but full_output does not seem to help (I wasn't able to find out how). To be more precise, an example with Runge-Kutta-Fehlberg: I want the different time points at which it evaluated f and the weights it used to multiply those evaluations.
Additional information (what for this Info is needed):
I want to reuse this information to use automatic differentiation. So I would call odeint as a black box, find out all the relevant steps it made and reuse this info to calculate the differential dx(T_end)/dx0.
If you know of any other method to solve my problem, please go ahead. Also say if another ODE solver might be more appropriate to do this.
PS: I'm new, so would it be better to split this question into two questions? I.e. separate 1. and 2.?
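For what it's worth, scipy.integrate.odeint wraps the Fortran LSODA solver from ODEPACK, which switches automatically between an Adams method (non-stiff) and BDF (stiff). You can at least see which method was used per step via full_output; a minimal sketch on a toy ODE:

import numpy as np
from scipy.integrate import odeint

def f(y, t):
    return -0.5 * y  # toy ODE: y' = -y/2

t = np.linspace(0.0, 10.0, 50)
y, info = odeint(f, [1.0], t, full_output=True)

print(info["tcur"])   # internal time reached after each output point
print(info["hu"])     # step sizes successfully used
print(info["mused"])  # method per step: 1 = Adams (non-stiff), 2 = BDF (stiff)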

When Does It Make Sense To Rewrite A Python Module in C?

In a game that I am writing, I use a 2D vector class which I have written to handle the speeds of the objects. This is called a large number of times every frame as there are a lot of objects on the screen, so any increase I can make in its speed will be useful.
It is pretty simple, consisting mostly of wrappers to the related math functions. It would be quite trivial to rewrite in C, but I am not sure whether doing so will make any significant difference as all it really does is call the underlying math functions, add, multiply or divide.
So, my question is under what circumstances does it make sense to rewrite in C? Where will you see a significant speed boost, and where can you see a reasonable speed boost without rewriting an extensive amount of the program?
If you're vector-munging, give numpy a try first. Chances are you will get speeds not far from C if you utilize numpy's vector manipulation functions wisely.
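A sketch of what that can look like: instead of one Python vector object per game object, keep every object's position and velocity in a single (N, 2) array and update them all at once:

import numpy as np

n_objects = 10_000
pos = np.zeros((n_objects, 2))
vel = np.random.uniform(-1.0, 1.0, size=(n_objects, 2))

dt = 1.0 / 60.0   # frame time in seconds
pos += vel * dt   # one vectorized update instead of n_objects method calls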
Other than that, your question is quite open-ended. If your code is too slow:
Profile it - chances are you'll be able to improve it in Python
Use the correct optimized C-based libraries (numpy in your case)
Try psyco
Try rewriting parts with cython
If all else fails, rewrite in C
First measure, then optimize
You should never optimize anything, be it in C or any other language, without timing your code before and after your optimization:
your clever optimization could in fact induce a slowdown
optimizing something that takes 1% of the total execution time will never gain you more than 1% in performance
The common approach is:
profile your code
identify a hotspot
time this hotspot
optimize it
time the hotspot again; see if it's faster. If it's not, go back to step 3.
If you can't find hotspots, it could mean that your app is already optimized, or that you are not using the right algorithm for your problem. In both cases, profiling helps you understand what your code does.
For profiling python code under Linux, you can use pyprof2calltree which works in conjunction with kcachegrind, and is totally awesome.
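Concretely, the profiling step might look like this (main() stands in for your game's entry point; the dumped .prof file is also what pyprof2calltree consumes):

import cProfile
import pstats

cProfile.run("main()", "game.prof")  # run the entry point under the profiler
stats = pstats.Stats("game.prof")
stats.sort_stats("cumulative").print_stats(10)  # ten most expensive call paths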
Common wisdom is "profile", "measure", etc. Well, maybe. Just get in the debugger and take 10 stackshots. If more than one of them terminates in your wrapper code, then it is costing roughly more than 10%, so you should consider redoing it in C to save that time. Chances are you will find other things that are costing more than that, too.
A nice profiler I use on Linux is pycallgraph; however, as your program gets bigger it starts to create much larger images which are harder to trace. I'm pretty sure you can exclude modules, though.
