I have a 3D array that I need to integrate numerically using Python. The array is a function of wavelength, depth, and time. The data were modelled numerically in another software package, so I don't have an analytical form of the function, just the 3D array it output. I need to find the triple integral of this array. In Matlab I use trapz(my_array, 3), where 3 is the dimension to integrate over. SciPy's trapz only seems to integrate along a single axis at a time.
I think I have two options, but I need some advice.
Option 1: use 3D interpolation in SciPy that returns a function handle (does this exist? the 1D version returns a function), then use scipy.integrate.tplquad to integrate the interpolated function, with the max and min values in my array as the integration limits.
Option 2: use three nested trapz calls, like this suggestion for 2D that I found on another site: sp.trapz(sp.trapz(f, y[np.newaxis, :], axis=1), x, axis=0)
I can't quite get my head around how to make either work. Any help/advice would be appreciated; I need to keep the integration error as low as possible.
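Something like this is what I have in mind for option 2, extended to 3D (wl, depth, and t below are made-up coordinate arrays standing in for my real grids):

import numpy as np
# Made-up coordinate grids and data with the same layout as my modelled output
wl = np.linspace(400e-9, 700e-9, 30)   # wavelength
depth = np.linspace(0.0, 10.0, 40)     # depth
t = np.linspace(0.0, 1.0, 50)          # time
my_array = np.random.rand(len(wl), len(depth), len(t))
# Integrate the innermost axis first, collapsing one dimension per call
result = np.trapz(np.trapz(np.trapz(my_array, t, axis=2), depth, axis=1), wl, axis=0)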
I have a 1-D numpy array of positions (basically eye-gaze data).
I want to apply Matlab's sgolay function in Python.
Though I have already seen SciPy's savgol_filter and savgol_coeffs, I am not able to understand how to implement them, as I am not well versed in the mathematics.
I have also seen that there is some confusion about whether Python's savgol_filter or savgol_coeffs is the equivalent of Matlab's sgolay function.
I want to differentiate the numpy array once to get velocity, and then differentiate the velocity array again to get acceleration.
The whole procedure is attached in this image
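From the savgol_filter documentation, something like the following looks like it should do both the smoothing and the differentiation in one step (the window length, polynomial order, and sampling interval below are guesses, not values from my data):

import numpy as np
from scipy.signal import savgol_filter
dt = 1.0 / 500.0                    # assumed sampling interval (500 Hz)
position = np.random.rand(1000)     # stand-in for the eye-gaze positions
# deriv=1 / deriv=2 return the smoothed first / second derivative directly;
# delta tells the filter the spacing between samples
velocity = savgol_filter(position, window_length=21, polyorder=3, deriv=1, delta=dt)
acceleration = savgol_filter(position, window_length=21, polyorder=3, deriv=2, delta=dt)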
I am working on a project related to gravitational lensing, for which I need to evaluate the confluent hypergeometric function 1F1(a, b, z) for an array z of ~10^8 complex points, with a = 1 + 0.48j and b = 1. I am looking for an efficient way to evaluate this on large arrays. The SciPy implementation is fast but does not accept complex arguments for a and b.
mpmath seems to be the best way to calculate 1F1 for complex parameters, but mpmath.hyp1f1 does not accept array arguments. The best workaround I found was to use np.vectorize or np.frompyfunc so that a NumPy array can be passed in. However, this is extremely slow and would take days to execute (even with gmpy2 installed); I assume mpmath functions are simply slow on large arrays.
A non-Python implementation would be fine as well, as long as I can save the result to disk and read it into my Python code. I have seen some implementations (for example https://www.math.ucla.edu/~mason/research/pearson_final.pdf) which could possibly work, but I'm not sure.
Another possible way would be to interpolate the function (consecutive points in my input array are extremely close), but I'm not sure what the best way to do that would be.
Thanks!
I was having a very similar problem to yours.
I figured out that the mpmath package has a "hidden" set of functions with (only) float precision, which you can access via the fp. prefix. This does not exist for hyp1f1, but it does for the more general hyper: fp.hyper([a], [b], z) is equivalent to hyp1f1(a, b, z), but a lot faster.
If you vectorize this with np.vectorize, it should make your calculation substantially faster.
Disclaimer: I got an error message saying that some complex value was converted to real by dropping the imaginary part when evaluating this, but so far the results I have gotten seem sensible and compatible with the hyp1f1(a, b, z) values.
Added: it seems that fp.hyper does not like numpy datatypes even when they are scalars; if a, b, or z is a numpy scalar (for example one element of a numpy array), it will simply return 1, without an error message and independent of the actual input. If you use np.vectorize, however, everything should be fine.
Either way: use at your own risk.
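A minimal sketch of what I mean, with placeholder values for z (the complex() cast sidesteps the numpy-scalar issue described above):

import numpy as np
from mpmath import fp
a, b = 1 + 0.48j, 1.0
# Cast each element to a plain Python complex before calling fp.hyper,
# since fp.hyper can silently return 1 for numpy scalar inputs
hyp1f1_fast = np.vectorize(lambda z: fp.hyper([a], [b], complex(z)))
z = np.linspace(0.0, 1.0, 1000) * (1 + 2j)  # placeholder sample points
values = hyp1f1_fast(z)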
I need to do a Fourier transform of a map in Python. Fast Fourier transforms assume periodic boundary conditions, but my input map is not periodic, so I need to apply an input filter/weight that slowly tapers the map toward zero at the edges. Are there libraries for doing this in Python?
My favorite function for apodizing a map is the generalized Gaussian (also called a 'super-Gaussian'), which is a Gaussian whose exponent is raised to a power P. By setting P to, say, 4 or 6, you get a flat-top pulse that falls off smoothly, which is good for FFT applications, where sharp edges always create ripples in conjugate space.
The generalized Gaussian is available in SciPy. Here is minimal code (Python 3) to apodize a 2D array with a generalized Gaussian. As noted in previous comments, there are dozens of window functions that would work just as well.
import numpy as np
from scipy.signal.windows import general_gaussian  # moved into scipy.signal.windows in SciPy 1.x
# A 128x128 array
array = np.random.rand(128, 128)
# Define a 2D generalized Gaussian (shape p=6, sigma=50) as the outer product of the 1D window with itself
window = np.outer(general_gaussian(128, 6, 50), general_gaussian(128, 6, 50))
# Multiply to apodize
ap_array = window * array
Such tapering is often referred to as a "window".
SciPy has many window functions.
You can use numpy.expand_dims to create the 2D window you want.
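For example (a Tukey window chosen arbitrarily here; any window from scipy.signal.windows would do):

import numpy as np
from scipy.signal import windows
w = windows.tukey(128, alpha=0.4)  # tapered cosine, 20% taper at each edge
# Broadcasting two expanded copies gives the 2D window (same result as np.outer)
window_2d = np.expand_dims(w, 0) * np.expand_dims(w, 1)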
Regarding Stefan's comment: apparently the numpy team thinks that including more than arrays was a mistake, so I would stick to SciPy for signal processing. Watch out: they moved quite a few functions around in the 1.0 release, so older documentation is, well, quite old.
As a final note: the term "filter" is typically reserved for multiplications applied in the frequency domain, not the spatial domain.
Given some coordinates in 3D (x-, y-, and z-axes), I would like to fit a polynomial (fifth order). I know how to do it in 2D (for example just in the x- and y-directions) via numpy. So my question is: is it possible to do it also with the third (z-) axis?
Sorry if I missed a question somewhere.
Thank you.
Numpy has functions for multi-variable polynomial evaluation in the polynomial package -- polyval2d, polyval3d -- the problem is getting the coefficients. For fitting, you need the polyvander2d and polyvander3d functions, which create the design matrices for the least-squares fit. The multi-variable polynomial coefficients thus determined can then be reshaped and used in the corresponding evaluation functions. See the documentation for those functions for more details.
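A minimal sketch of that recipe for the 2D case, with made-up sample data; the 3D case is identical with polyvander3d and polyval3d:

import numpy as np
from numpy.polynomial import polynomial as P
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 200))       # scattered sample points
z = 1 + 2*x - 3*x*y + y**2                # placeholder target values
deg = (5, 5)                              # fifth order in x and in y
A = P.polyvander2d(x, y, deg)             # design matrix for the least-squares fit
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
c = coef.reshape(deg[0] + 1, deg[1] + 1)  # coefficient grid for polyval2d
z_fit = P.polyval2d(x, y, c)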
Are there functions in Python that will fill in the missing values of a matrix for you using collaborative filtering (e.g., an alternating-minimization algorithm)? Or does one need to implement such functions from scratch?
[EDIT]: This isn't a matrix-completion example, but just to illustrate a similar situation: I know there is an svd() function in Matlab that takes a matrix as input and automatically outputs its singular value decomposition (SVD). I'm looking for something like that in Python, hopefully a built-in function, but even a good library would be great.
Check out numpy's linalg library for a Python SVD implementation.
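A quick example (random matrix just for illustration):

import numpy as np
A = np.random.rand(5, 3)
# Equivalent of Matlab's svd(): left vectors, singular values, right vectors (transposed)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_rec = (U * s) @ Vt          # reconstruct A from the factors
assert np.allclose(A, A_rec)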
There is a library called fancyimpute for matrix completion. scikit-learn's NMF can also be used for this.
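A rough sketch of the NMF route; the mask, the mean fill, and all parameter choices here are assumptions for illustration, not a recommended recipe:

import numpy as np
from sklearn.decomposition import NMF
X = np.abs(np.random.rand(20, 10))    # placeholder nonnegative data matrix
mask = np.random.rand(20, 10) < 0.2   # pretend ~20% of entries are missing
X_obs = X.copy()
X_obs[mask] = X_obs[~mask].mean()     # crude mean fill so NMF can run
model = NMF(n_components=5, init="nndsvda", max_iter=500)
W = model.fit_transform(X_obs)
X_hat = W @ model.components_         # low-rank reconstruction fills the gaps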