Generate MATLAB code from Python

I have a problem when using the MATLAB python engine.
I want to get approximate solutions to ODEs from Python (using something like MATLAB's ode45 function), but the problem is that solving an ODE requires specifying an ODE function, which I can't seem to create through the MATLAB Engine for Python.
It works fine calling MATLAB functions, such as isprime, from Python but there seems to be no way of specifying a MATLAB function in Python.
My question is therefore: is there any way of generating MATLAB function code from Python, or a way to specify MATLAB functions from Python?

According to the docs, the odefun passed to ode45 has to be a function handle.
Solve the ODE
y' = 2t
Use a time interval of [0,5] and the initial condition y0 = 0.
tspan = [0 5];
y0 = 0;
[t,y] = ode45(@(t,y) 2*t, tspan, y0);
@(t,y) 2*t returns a function handle to an anonymous function.
Unfortunately, function handles are listed as one of the data types unsupported in MATLAB <-> Python conversion:
Unsupported MATLAB Types
The following MATLAB data types are not supported by the MATLAB Engine API for Python:
Categorical array
char array (M-by-N)
Cell array (M-by-N)
Function handle
Sparse array
Structure array
Table
MATLAB value objects (for a discussion of handle and value classes see Comparison of Handle and Value Classes)
Non-MATLAB objects (such as Java® objects)
To sum up, it seems like there is no straightforward way of doing it.
A potential workaround may involve some combination of engine.workspace and engine.eval, as shown in the Use MATLAB Engine Workspace in Python example.
Workaround with engine.eval (first demo):
import matlab.engine
import matplotlib.pyplot as plt
e = matlab.engine.start_matlab()
tr, yr = e.eval('ode45(@(t,y) 2*t, [0 5], 0)', nargout=2)
plt.plot(tr, yr)
plt.show()
By doing so, you avoid passing a function handle across the MATLAB/Python barrier. You pass a string and let MATLAB evaluate it there; what comes back are plain numeric arrays. After that, you can operate on the result vectors, e.g. plot them.
Since passing arguments as literals would quickly become a pain, engine.workspace can be used to avoid it:
import matlab.engine
import matplotlib.pyplot as plt
e = matlab.engine.start_matlab()
e.workspace['tspan'] = matlab.double([0.0, 5.0])
e.workspace['y0'] = 0.0
tr, yr = e.eval('ode45(@(t,y) 2*t, tspan, y0)', nargout=2)
plt.plot(tr, yr)
plt.show()
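If you want the results as NumPy arrays for further processing rather than plotting the matlab.double objects directly, they can be converted first. A minimal sketch continuing the snippet above (assuming tr and yr come back as matlab.double column vectors):
import numpy as np

# matlab.double objects are nested sequences; np.array() turns them into 2-D arrays
t = np.array(tr).ravel()   # flatten the column vector to shape (n,)
y = np.array(yr).ravel()

print(t.shape, y.shape)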

Related

Python issue with fitting a custom function containing double integrals

I want to fit some data using a custom function which contains a double integral. a, b, and c are pre-defined parameters, and alpha and beta are two angles over which the function must be integrated.
import numpy as np
from scipy import integrate
x=np.linspace(0,100,100)
a=100
b=5
c=1
def custom_function(x,a,b,c):
    f = lambda alpha,beta: (np.pi/2)*(np.sin(x*a*np.sin(alpha)*np.cos(beta))/x*a*np.sin(alpha)*np.cos(beta))*(np.sin(x*b*np.sin(alpha)*np.sin(beta))/x*b*np.sin(alpha)*np.sin(beta))*(np.sin(x*c*np.cos(alpha))/x*c*np.cos(alpha))*np.sin(alpha)
    return integrate.dblquad(f, 0, np.pi/2, 0, np.pi/2)
When running the code, I get the following error:
TypeError: cannot convert the series to <class 'float'>
I've tried simplifying the function, but I still get the same issue. Can anyone help me locate the problem?
Are you sure you are not trying to multiply sinc functions, sin(x*u)/(x*u)? Currently you are multiplying terms like u * sin(x*u) / x, because there are no parentheses in the denominator.
You should be able to fit your function for small a, b, c. But with a = 100 you will need a much higher resolution, i.e. more steps.
I am assuming you are trying to fit using some local minimization method. If the function you are fitting has many maxima and minima, you are likely to get stuck. You could also try some of the non-convex optimization methods that are available.
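To make that suggestion concrete, here is a sketch of the integrand with parentheses around the denominators, so each term is sinc-like, sin(u)/u, and with the integral evaluated point by point so that dblquad always receives a scalar x (which also avoids the TypeError caused by passing a whole array/Series); the x range below is purely illustrative:
import numpy as np
from scipy import integrate

a, b, c = 100, 5, 1  # parameters from the question

def custom_function(x, a, b, c):
    # x must be a scalar here: dblquad integrates a scalar-valued integrand
    def integrand(alpha, beta):
        u = x * a * np.sin(alpha) * np.cos(beta)
        v = x * b * np.sin(alpha) * np.sin(beta)
        w = x * c * np.cos(alpha)
        # note the parentheses around the denominators
        return (np.pi / 2) * (np.sin(u) / u) * (np.sin(v) / v) * (np.sin(w) / w) * np.sin(alpha)
    val, err = integrate.dblquad(integrand, 0, np.pi / 2, 0, np.pi / 2)
    return val

xs = np.linspace(0.01, 1, 10)                    # illustrative x values, avoiding x = 0
ys = [custom_function(x, a, b, c) for x in xs]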

Vector to matrix function in NumPy without accessing elements of vector

I would like to create a NumPy function that computes the Jacobian of a function at a certain point - with the Jacobian hard coded into the function.
Say I have a vector containing two arbitrary scalars X = np.array([[x],[y]]), and a function f(X) = np.array([[2xy],[3xy]]).
This function has Jacobian J = np.array([[2y, 2x],[3y, 3x]])
How can I write a function that takes in the array X and returns the Jacobian? Of course, I could do this using array indices (e.g. x = X[0,0]), but I am wondering if there is a way to do this directly without accessing the individual elements of X.
I am looking for something that works like this:
def foo(x,y):
    return np.array([[2*y, 2*x],[3*y, 3*x]])
X = np.array([[3],[7]])
J = foo(X)
Given that this is possible on 1-dimensional arrays, e.g. the following works:
def foo(x):
    return np.array([x,x,x])
X = np.array([1,2,3,4])
J = foo(X)
You want the Jacobian, which is the differential of the function. Is that correct? I'm afraid NumPy is not the right tool for that.
NumPy works with fixed numbers, not with variables. That is, given some numbers, you can calculate the value of a function. The differential is a different function that has a special relationship to the original function but is not the same. You cannot just calculate the differential; you must deduce it from the functional form of the original function using differentiation rules. NumPy cannot do that.
As far as I know you have three options:
Use a numeric library to calculate the differential at a specific point. However, you will only get the Jacobian at a specific point (x, y), not a formula for it.
Take a look at a Python CAS library such as sympy. There you can define expressions in terms of variables and compute the differential with respect to those variables (see the sketch after this list).
Use a library that performs automatic differentiation. Machine learning toolkits like PyTorch or TensorFlow have excellent support for automatic differentiation and good integration with NumPy arrays. They essentially calculate the differential by knowing the differential of all basic operations, like multiplication or addition. For composed functions the chain rule is applied, so the differential can be calculated for arbitrarily complex functions.
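A minimal sketch of the sympy route (the second option above), using the f from the question; the symbolic Jacobian is computed once and then lambdified into a NumPy-callable function:
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([2 * x * y, 3 * x * y])   # f(X) from the question
J = f.jacobian([x, y])                  # Matrix([[2*y, 2*x], [3*y, 3*x]])

jac = sp.lambdify((x, y), J, modules='numpy')   # plain numerical function

X = np.array([[3], [7]])
print(jac(*X.ravel()))                  # [[14  6]
                                        #  [21  9]]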

Partial derivatives of a function found using interp2d in python/sagemath

I have a function of two variables, R(t,r), that has been constructed using a list of values for R, t, and r. This function cannot be written down in closed form; the values are found by solving a differential equation (for dR(t,r)/dt). I need to take derivatives of this function, in particular dR(t,r)/dr and d^2R(t,r)/drdt. I have tried using this answer to do this, but I cannot seem to get an answer that makes sense. (Note that all derivatives should be partial derivatives.) Any help would be appreciated.
Edit:
My current code is below. I understand that getting anything to work without the 'Rdata.txt' file is impossible, but the file itself is 160x1001; really, any made-up data would do to get the rest working. Z_t does not return values that look like the derivative of my original function based on what I know, so I know it is not differentiating my function as I'd expect.
If there are numerical routines that work directly on the array of data, I do not mind; I simply need some way of figuring out the derivatives.
import numpy as np
from scipy import interpolate
data = np.loadtxt('Rdata.txt')
rvals = np.linspace(1,160,160)
tvals = np.linspace(0,1000,1001)
f = interpolate.interp2d(tvals, rvals, data)
Z_t = interpolate.bisplev(tvals, rvals, f.tck, dx=0.8, dy=0)
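In case it helps, here is a hedged sketch using scipy.interpolate.RectBivariateSpline, whose evaluation accepts integer partial-derivative orders dx and dy (note that bisplev's dx and dy are also integer derivative orders, so dx=0.8 is not a valid order). This assumes the data in Rdata.txt has shape (160, 1001), i.e. rows indexed by r and columns by t, as in the question:
import numpy as np
from scipy import interpolate

data = np.loadtxt('Rdata.txt')   # assumed shape (160, 1001): rows = r, columns = t
rvals = np.linspace(1, 160, 160)
tvals = np.linspace(0, 1000, 1001)

# RectBivariateSpline(x, y, z) expects z.shape == (len(x), len(y))
spline = interpolate.RectBivariateSpline(rvals, tvals, data)

R_r  = spline(rvals, tvals, dx=1, dy=0)   # dR/dr on the full (r, t) grid
R_rt = spline(rvals, tvals, dx=1, dy=1)   # d^2R/(dr dt) on the full grid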

An issue with parallelising function broadcasting over a mesh using dask

I am looking to parallelise a function which takes multiple 1-dimensional ranges (of the form np.linspace(x,y,t)) of numerical input values (the number is variable, but let's say it takes five), creates a mesh out of these ranges, and then evaluates some (5-dimensional) cost function over this mesh. In its current form it looks something like this:
import itertools

def func_5d(a, b, c, d, e):
    return a + b + c + d + e

def range_search(a_range, b_range, c_range, d_range, e_range):
    mesh = itertools.product(a_range, b_range, c_range, d_range, e_range)
    func_eval = map(lambda x: (func_5d(*x), x), mesh)
    return func_eval
So, here I would be looking to parallelise the function range_search using dask. Ideally, this would be done by creating a dask mesh, which could then be chunked and mapped through to our cost function using either multi-threading or multi-core processing. Looking through the dask documentation, it does not appear that dask.array contains any suitable mechanism to achieve this. There is a dask.array.meshgrid function, extended from the numpy library, but this does not support chunking. Additionally, dask.array does not seem to contain a parallelised map function. There is one in dask.bag, but the documentation seems to suggest that dask.bag is used only as a module to carry out preliminary processing of raw data (in formats such as CSV, JSON, etc.). Dask.bag objects do also have a method called product() which seems to imitate itertools.product; however, this only takes one other dask.bag object as an argument, so meshing 5 arrays requires this method call to be stacked (4 times), which aside from being hideously ugly is also inefficient when the number of inputs is variable.
From here, I don't really know where to go. I have worked through the Jupyter notebooks that dask have put together, but they do not seem to hold an answer to my question. Any suggestions on the best approach to parallelising functions of the above form would be much appreciated.
I would use NumPy slicing with None (i.e. broadcasting) for this:
a[:, None, None] + b[None, :, None] + c[None, None, :]
You will want to make sure that your input vectors are chunked finely enough that the products of them will still fit comfortably in memory.
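A hedged sketch of that broadcasting approach with dask.array, using chunked 1-D inputs (array lengths, chunk sizes, and the reduction at the end are illustrative):
import numpy as np
import dask.array as da

# chunked 1-D ranges; tune chunk sizes so each 5-D block fits in memory
a = da.from_array(np.linspace(0, 1, 20), chunks=10)
b = da.from_array(np.linspace(0, 1, 20), chunks=10)
c = da.from_array(np.linspace(0, 1, 20), chunks=10)
d = da.from_array(np.linspace(0, 1, 20), chunks=10)
e = da.from_array(np.linspace(0, 1, 20), chunks=10)

# broadcasting builds the 5-D mesh lazily, one chunk at a time
cost = (a[:, None, None, None, None]
        + b[None, :, None, None, None]
        + c[None, None, :, None, None]
        + d[None, None, None, :, None]
        + e[None, None, None, None, :])

best = cost.min().compute()   # evaluated in parallel, chunk by chunk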

"Direct" numpy functions on an array vs numpy array functions

I have a question about the design of Python. I have realised that some functions are implemented directly on container classes (e.g. NumPy arrays), while other functions that act on these containers must be called from numpy itself. An example would be:
import numpy as np
y = np.array([4,7,9,1])
m1 = np.mean(y) # Ok
m2 = y.mean() # Ok
print(m1 == m2) # True
x = [2,3]
r1 = np.concatenate([x, y]) # Ok
r2 = y.concatenate(x) # AttributeError: 'numpy.ndarray' object has no attribute 'concatenate'
print(r1 == r2)
Why can the mean be calculated directly from the array, while the array has no method to concatenate another one to it? Is there a general rule for which functions can be called directly on the array and which cannot? And if both are possible, what is the Pythonic way to do it?
The overview of NumPy history gives an indication of why not everything is consistent: it has two predecessors that were developed independently. Backward compatibility requires the project to keep array methods like max. Ongoing development favors the function syntax np.fun(array). I suppose one reason for the latter is that it allows array_like input (the term used throughout NumPy documentation): anything that NumPy can turn into an ndarray.
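A quick illustration of the array_like point: the function form accepts a plain Python list (or a scalar), while the method form only exists on an actual ndarray:
import numpy as np

data = [4, 7, 9, 1]
print(np.mean(data))            # 5.25 -- the function converts the list first
print(np.array(data).mean())    # 5.25 -- the method needs an ndarray
# data.mean() would raise AttributeError: 'list' object has no attribute 'mean'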
The question of why there are both methods and functions of the same name has been discussed and links provided.
But to focus on your two examples:
mean uses just one array. Logically it can be an ndarray method.
concatenate takes a list of arrays, and doesn't give priority to any one of them.
There is an np.append function that looks superficially like the list .append method. But it just passes the task on to concatenate with a few modifications, and it causes all kinds of newbie errors: it isn't in place, it ravels, and it is slow compared to the list method.
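A small demonstration of those np.append pitfalls (toy values, purely illustrative):
import numpy as np

lst = [1, 2, 3]
lst.append([4, 5])             # in place, and it nests the argument: [1, 2, 3, [4, 5]]

arr = np.array([1, 2, 3])
out = np.append(arr, [4, 5])   # not in place: arr is unchanged, a new array is returned
print(arr)                     # [1 2 3]
print(out)                     # [1 2 3 4 5]

mat = np.ones((2, 2))
print(np.append(mat, [9]))     # [1. 1. 1. 1. 9.] -- the 2x2 input was ravelled to 1-D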
Or consider the large family of ufuncs. Those are functions; some take one array, others two. They share common ufunc functionality.
np.add(a,b) <=> a+b <=> a.__add__(b)
np.sin(a) # no a.sin()
I suspect the choice to make sin a ufunc rather than a method has been influenced by common mathematical notation.
To me, a big plus of the function approach is that it can be applied to a list or scalar. np.sin(1) works just as well as np.sin([0,.5,1]) or np.sin(np.arange(0,1,.5)).
Yes, history goes a long way toward excusing the mix of functions and methods, but many of the choices are logical.
