Embed a function from a Matlab MEX file directly in Python

I am using a proprietary Matlab MEX file to import some simulation results in Matlab (no source code available, of course!). The interface with Matlab is really simple: there is a single function that returns a Matlab struct. I would like to know if there is any way to call this function in the MEX file directly from Python, without having to use Matlab?
What I have in mind is, for example, using something like SWIG to import the C function into Python by providing a custom Matlab wrapper around it...
By the way, I know that scipy.io.loadmat can already read Matlab binary *.mat data files, but I don't know whether the data representation in a mat file is the same as Matlab's internal representation (in which case it might be useful for the MEX wrapper).
The idea would be of course to be able to use the function provided in the MEX with no Matlab installation present on the system.
Thanks.

Unless I'm misunderstanding something about how Matlab works or about your question, it is highly unlikely to be possible. From a technical point of view, any solution would need to be a full, binary-compatible, bug-for-bug, feature-for-feature reimplementation of the Matlab C library (implementing mxGetPr, mxGetN and so on), but binding to Python.
Let me edit my own answer to say the following: if you do have a MATLAB license available, there is the excellent package mlabwrap, which does at least part of what you want.

You can create standalone shared libraries from Matlab code, e.g. http://www.mathworks.com/help/toolbox/compiler/mbuild.html. These you should be able to call from Python. You do need the Matlab Compiler for this; however, it looks like a free trial version of it is available.
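For reference, building such a library typically boils down to a one-liner like this sketch (adding.m is a placeholder file name, and the exact mcc flags depend on your Compiler version, so check its documentation):
mcc -B csharedlib:libadding adding.m
This produces a shared library (plus a generated header) that exposes the Matlab function through C entry points, which still require the Matlab runtime (MCR) to be installed and initialized on the target machine.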
See also this stackoverflow topic.

You can build a library from a mex file, as Mauro pointed out.
You can use scipy.io.loadmat safely. Regarding the data representation, from
http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.loadmat.html:
Returns:
mat_dict : dict
    dictionary with variable names as keys, and loaded matrices as values
The loaded matrices are as you saved them, i.e. the data representation should be coherent.
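A minimal sketch of what that looks like in practice (results.mat is a placeholder file name):
from scipy.io import loadmat

data = loadmat('results.mat')   # placeholder file name
print data.keys()               # variable names saved in the .mat file
Matlab structs come back as NumPy record/object arrays, so the fields remain accessible from Python.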

A mex function is an API that allows Matlab (i.e. a Matlab program) to call a function written in C/C++. This function, in turn, can call Matlab's own internal functions. As such, the mex function will be linked against Matlab libraries. Thus, calling a mex function directly from a Python program without the Matlab libraries doesn't look possible (and doesn't make sense, for that matter).
Worth considering is why the mex function was created in the first place. Was it to make some non-Matlab C libraries (or C code) available to Matlab users, or was it to hide some proprietary Matlab code while still making it available to Matlab users? If it's the first case, then you could ask the owners of the mex function to provide it as a non-mex dynamic library that you can include in another C or Python program. This should be easy if the mex function doesn't depend on Matlab internal functions.
Others above have mentioned the Matlab Compiler... yes, you can include a mex function in a standalone binary callable from Unix (and thus from Python, but only as a Unix call) if you use the Matlab Compiler to produce such a binary. This requires the binary to be deployed along with Matlab's runtime environment. It is not quite the same as calling a function directly from Python -- there are no return values, for example.

Related

Matlab to Python using ctypes

We want to use functions written in Matlab in our new Python application. We want to use ctypes because then the user won't need Matlab on his machine.
We are testing this method but can't get it to work. We lack C knowledge (and much more...).
This is our simple test matlab function:
function [ z ] = adding( x,y )
z = x + y;
end
We compiled this with Matlab into a shared library (.dll). In a Python interpreter we have:
import ctypes
dl = ctypes.CDLL('adding.dll')
Now we are stuck because we can't find the command to access the function in matlab.
What should we do?
Short answer - No.
You cannot export code written in MATLAB as C in the form of a DLL, interface with it using ctypes on the Python side, and then expect a serious performance boost over the usual communication via a Unix pipe (as in mlabwrap).
The problem is that such a DLL depends on the MCR (the Matlab runtime). The DLL contains your source code in an obfuscated form. When you call an exported function, the DLL is loaded, which then unpacks the source code, creates an instance of MATLAB (an interpreter) and exchanges your code and its results with the MATLAB JIT. This functionality is called the "MATLAB Compiler toolbox". Alternatively, it can produce OS executables (which follow the same logic).
Rewrite in C/C++ (losing dependency on MATLAB)
Unless you are lucky enough to be able to code-generate your project, as shown here, consider rewriting your code in plain C or using C++ libraries such as IT++ or Armadillo.
There are many resources/tutorials available explaining how to use ctypes to call functions inside a DLL. See for example this SO question.
If I remember correctly, the Matlab Compiler should properly export all the functions from the DLL, so they should be accessible from ctypes. However, you will have to ensure the Matlab libraries/runtime are on your library path when you try to load the DLL. The Matlab site has plenty of docs for this; see for example this tutorial.
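For a plain C export, the basic ctypes pattern looks like this minimal sketch (the exported name add and its signature are hypothetical; a Compiler-generated DLL additionally needs the MCR initialized before any call):
import ctypes

lib = ctypes.CDLL('adding.dll')

# Declare the C signature so ctypes converts the arguments correctly.
lib.add.argtypes = [ctypes.c_double, ctypes.c_double]
lib.add.restype = ctypes.c_double

print lib.add(2.0, 3.0)   # -> 5.0
The actual symbols exported by the Matlab Compiler follow its own naming conventions; inspect the generated header or the DLL's export table to find them.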

Using embedded C library in Python emulation

Short Question
Which would be the easier way to emulate (in Python) a complex (SAE J1939) communication stack from an existing embedded C library:
1) Full port - meaning manually converting all of the C functions to Python modules
2) Wrap the stack in a Python wrapper - meaning calling the real C code from Python
Background Information
I have already written small portions of this stack in Python; however, they are very non-trivial to implement with 100% coverage. For this very reason, we recently purchased an off-the-shelf SAE J1939 stack for our embedded platforms. To clarify, I know that the portions touching the hardware layer will have to be re-created and mapped to the PC's CAN drivers.
I am hoping to find someone here on SO who has ported, or even looked into porting, a 5k LOC C library to Python. If there are any C-to-Python tools that work well, those would be helpful for me to look into as well.
My advice would be to wrap it.
Reasons for that:
if you convert it function by function, you'll introduce new bugs (we're only human) and this kind of stuff is pretty hard to test
wrapping for Python is done easily, using SWIG or even ctypes to load a DLL on the fly; you'll find tons of tutorials
if your lib gets updated, you have less impact in the long term.
However, you need to
check that the license you purchased allows you to do that
be aware that having the same implementation on the embedded and PC sides won't help with tracking down bugs
accept a bit less portability than a full Python implementation (not much of an issue for you anyway, as your low layer needs to be rewritten per target)
Definitely wrap it. It might be as easy as running ctypesgen.py and then using the result. Check this blog article about using ctypesgen to create a wrapper for libreadline, http://wavetossed.blogspot.com/2011/07/asynchronous-gnu-readline.html, in order to get access to the full API.
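To give a feel for the workflow, generating a wrapper usually boils down to one command like this sketch (j1939.h and the library name are placeholders for your stack):
# Generate a ctypes-based Python wrapper from the stack's public header.
python ctypesgen.py -l j1939 j1939.h -o j1939.py
After that, import j1939 exposes the header's functions and constants directly in Python.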

Are there advantages to using the Python/C interface instead of Cython?

I want to extend Python and NumPy by writing some modules in C or C++, using BLAS and LAPACK. I also want to be able to distribute the code as standalone C/C++ libraries. I would like these libraries to use both single and double precision floats. Some examples of functions I will write are conjugate gradient for solving linear systems or accelerated first-order methods. Some functions will need to call a Python function from the C/C++ code.
After playing a little with the Python/C API and the NumPy/C API, I discovered that many people advocate the use of Cython instead (see for example this question or this one). I am not an expert on Cython, but it seems that for some cases you still need to use the NumPy/C API and know how it works. Given that I already have (some little) knowledge of the Python/C API and none of Cython, I was wondering whether it makes sense to keep on using the Python/C API, and whether using this API has some advantages over Cython. In the future, I will certainly develop some stuff not involving numerical computing, so this question is not only about numpy. One of the things I like about the Python/C API is that I learn something about how the Python interpreter works.
Thanks.
The current "top answer" sounds a bit too much like FUD in my ears. For one, it is not immediately obvious that the Average Developer would write faster code in C than what NumPy+Cython gives you anyway. Quite the contrary, the time it takes to even get the necessary C code to work correctly in a Python environment is usually much better invested in writing a quick prototype in Cython, benchmarking it, optimising it, rewriting it in a faster way, benchmarking it again, and then deciding if there is anything in it that truly requires the 5-10% more performance that you may or may not get from rewriting 2% of the code in hand-tuned C and calling it from your Cython code.
I'm writing a library in Cython that currently has about 18K lines of Cython code, which translate to almost 200K lines of C code. I once managed to get a speed-up of almost 25% for a couple of very important internal base level functions, by injecting some 20 lines of hand-tuned C code in the right places. It took me a couple of hours to rewrite and optimise this tiny part. That's truly nothing compared to the huge amount of time I saved by not writing (and having to maintain) the library in plain C in the first place.
Even if you know C a lot better than Cython, if you know Python and C, you will learn Cython so quickly that it's worth the investment in any case, especially when you are into numerics. 80-95% of the code you write will benefit so much from being written in a high-level language, that you can safely lay back and invest half of the time you saved into making your code just as fast as if you had written it in a low-level language right away.
That being said, your comment that you want "to be able to distribute the code as standalone C/C++ libraries" is a valid reason to stick to plain C/C++. Cython always depends on CPython, which is quite a dependency. However, using plain C/C++ (except for the Python interface) will not allow you to take advantage of NumPy either, as that also depends on CPython. So, as usual when writing something in C, you will have to do a lot of ground work before you get to the actual functionality. You should seriously think about this twice before you start this work.
First, there is one point in your question I don't get:
[...] also want to be able to distribute the code as standalone C/C++ libraries. [...] Some functions will need to call a Python function from the C/C++ code.
How is this supposed to work?
Next, as to your actual question, there are certainly advantages to using the Python/C API directly:
Most likely, you are more familiar with writing C code than with writing Cython code.
Writing your code in C gives you maximum control. To get the same performance from Cython code as from equivalent C code, you'll have to be very careful. You'll not only need to declare the types of all variables, you'll also have to set some flags adequately -- just one example is bounds checking (see the sketch after this list). You will need intimate knowledge of how Cython works to get the best performance.
Cython code depends on Python. It does not seem to be a good idea to write code in Cython that should also be distributed as a standalone C library.
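As an illustration of that bounds-checking point, a minimal Cython sketch (the function and its types are invented for the example); without the directives and the static declarations, Cython falls back to generic Python object operations:
# cython: boundscheck=False, wraparound=False

def total(double[:] values):
    # Statically typed locals let Cython compile this loop down to plain C.
    cdef double acc = 0.0
    cdef Py_ssize_t i
    for i in range(values.shape[0]):
        acc += values[i]
    return acc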
The main disadvantage of the Python/C API is that it can be very slow if it's used in an inner loop. I'm seeing that calling a Python function takes an 80-160x hit over calling an equivalent C++ function.
If that doesn't bother your code, then you benefit from being able to write some chunks of code in Python, having access to Python libraries, and supporting callbacks written directly in Python. That also means that you can make some changes without recompiling, which makes prototyping easier.

In Python, why is a module implemented in C faster than a pure Python module, and how do I write one?

The Python documentation states that the reason cPickle is faster than pickle is that the former is implemented in C. What does that mean exactly?
I am making a module for advanced mathematics in Python, and some calculations take a significant amount of time. Does that mean that if my program is implemented in C it can be made much faster?
I wish to import this module from other Python programs, just the way I can import cPickle.
Can you explain how to implement a Python module in C?
You can write fast C code and then use it in your Python scripts, so your program will run faster. [1]
http://docs.python.org/extending/index.html#extending-index
An example is NumPy, written in C (https://numpy.org/).
Typical use is to implement the bottleneck in C (or to use a library written in C, of course ;) ) for speed, and to use Python for the remaining code.
[1] by the way, this is why cPickle is faster than pickle
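A quick way to see that difference yourself (Python 2; the exact numbers depend on your machine):
import timeit

setup = "import pickle, cPickle; data = range(1000)"
print timeit.timeit("pickle.dumps(data)", setup, number=1000)
print timeit.timeit("cPickle.dumps(data)", setup, number=1000)
The C implementation typically comes out several times faster on the same input.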
edit:
take a look at Pyrex: http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/version/Doc/About.html
'Pyrex is a language specially designed for writing Python extension modules. It's designed to bridge the gap between the nice, high-level, easy-to-use world of Python and the messy, low-level world of C.'
It's not the 'official' way, but it may be useful.
As mentioned, numpy is excellent for vector computations. (It could be better still, but the comment that it's better than anything you could write yourself without real work is definitely true.)
Not everything can be easily vectorized, though, so if you do have tight inner loops with lots of function calls (say a heavily recursive algorithm) you still have a couple of options: probably the most popular is Cython, which allows you to write modules and functions in a kind of annotated Python and get C-like speed when you need it.
Or maybe your time is all dominated by library calls to compute eigenvalues or invert matrices or evaluate special functions or divide really large integers -- many of which the Sage project handles very well, by the way, if what you're doing is more mathematical than pure crunching -- in which case the time spent in Python might not even matter. It all depends on the details of the kind of numerics you're doing.
When you write a function in Python, a new function object is created and the function code is parsed and byte-compiled (and saved in the func_code attribute), so when you call that function the interpreter reads its bytecode and executes it.
If you write the same function in C, following the C/Python API to make it available in Python, the interpreter will create the function object, but this function won't have bytecode.
When the interpreter finds a call to that function, it calls the real C function, so it executes at "machine" speed and not at "python-machine" speed.
You can verify this by checking functions written in C:
>>> map.func_code
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'builtin_function_or_method' object has no attribute 'func_code'
>>> def mymap():pass
...
>>> mymap.func_code
<code object mymap at 0xcfb5b0, file "<stdin>", line 1>
To understand how to write C code for Python, follow the guides on the official site.
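For a concrete (if toy) picture, here is a minimal sketch of such a module using the Python 2 C API to match the rest of this thread (the module name fastmath is invented; Python 3 uses PyModuleDef and PyInit_ instead):
#include <Python.h>

/* Add two doubles at C speed. */
static PyObject *fastmath_add(PyObject *self, PyObject *args)
{
    double x, y;
    if (!PyArg_ParseTuple(args, "dd", &x, &y))
        return NULL;
    return Py_BuildValue("d", x + y);
}

static PyMethodDef FastmathMethods[] = {
    {"add", fastmath_add, METH_VARARGS, "Add two numbers."},
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initfastmath(void)
{
    Py_InitModule("fastmath", FastmathMethods);
}
Once compiled (e.g. with a two-line distutils setup.py), import fastmath and fastmath.add(2, 3) work like any other module.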
Anyway, if you are simply doing N-dimensional array calculations numpy ought to be sufficient.
Besides Pyrex/Cython, already mentioned, you have other alternatives:
Shed Skin: Translates (a restricted subset of) Python to C++ and can automatically generate an extension for you. You'd create an extension like this (assuming Linux):
wget http://shedskin.googlecode.com/files/shedskin-0.7.tgz
tar -xzf shedskin-0.7.tgz
# On your code folder:
PYTHONPATH=/path/to/shedskin-0.7 python shedskin -e yourmodule.py
# The above generates a Makefile and a yourmodule.h/.cpp pair
make
# Now you can "import yourmodule" from Python and check that it's from the .so with "print yourmodule.__file__"
PyPy: A faster Python with a JIT compiler. You could simply run your code on it instead of CPython. It only supports Python 2.5 now, with 2.7 support coming soon. It can give huge speedups on math-heavy code. To install and run it (assuming 32-bit Linux):
wget http://pypy.org/download/pypy-1.4.1-linux.tar.bz2
tar -xjf pypy-1.4.1-linux.tar.bz2
sudo ln -s /path/to/pypy-1.4.1-linux/bin/pypy /usr/local/bin
# Then, instead of "python yourprogram.py" you'll just run "pypy yourprogram.py"
Weave: Allows you to write C inline, then compiles it.
Edit: If you want us to run these tools for you and benchmark, just post your code ;)

Wrapping a C library in Python: C, Cython or ctypes?

I want to call a C library from a Python application. I don't want to wrap the whole API, only the functions and datatypes that are relevant to my case. As I see it, I have three choices:
Create an actual extension module in C. Probably overkill, and I'd also like to avoid the overhead of learning extension writing.
Use Cython to expose the relevant parts from the C library to Python.
Do the whole thing in Python, using ctypes to communicate with the external library.
I'm not sure whether 2) or 3) is the better choice. The advantage of 3) is that ctypes is part of the standard library, and the resulting code would be pure Python – although I'm not sure how big that advantage actually is.
Are there more advantages / disadvantages with either choice? Which approach do you recommend?
Edit: Thanks for all your answers, they provide a good resource for anyone looking to do something similar. The decision, of course, still has to be made for each individual case; there's no one "this is the right thing" sort of answer. For my own case, I'll probably go with ctypes, but I'm also looking forward to trying out Cython in some other project.
With there being no single true answer, accepting one is somewhat arbitrary; I chose FogleBird's answer as it provides some good insight into ctypes and it is currently also the highest-voted answer. However, I suggest reading all the answers to get a good overview.
Thanks again.
Warning: a Cython core developer's opinion ahead.
I almost always recommend Cython over ctypes. The reason is that it has a much smoother upgrade path. If you use ctypes, many things will be simple at first, and it's certainly cool to write your FFI code in plain Python, without compilation, build dependencies and all that. However, at some point, you will almost certainly find that you have to call into your C library a lot, either in a loop or in a longer series of interdependent calls, and you would like to speed that up. That's the point where you'll notice that you can't do that with ctypes. Or, when you need callback functions and you find that your Python callback code becomes a bottleneck, you'd like to speed it up and/or move it down into C as well. Again, you cannot do that with ctypes. So you have to switch languages at that point and start rewriting parts of your code, potentially reverse engineering your Python/ctypes code into plain C, thus spoiling the whole benefit of writing your code in plain Python in the first place.
With Cython, OTOH, you're completely free to make the wrapping and calling code as thin or thick as you want. You can start with simple calls into your C code from regular Python code, and Cython will translate them into native C calls, without any additional calling overhead, and with an extremely low conversion overhead for Python parameters. When you notice that you need even more performance at some point where you are making too many expensive calls into your C library, you can start annotating your surrounding Python code with static types and let Cython optimise it straight down into C for you. Or, you can start rewriting parts of your C code in Cython in order to avoid calls and to specialise and tighten your loops algorithmically. And if you need a fast callback, just write a function with the appropriate signature and pass it into the C callback registry directly. Again, no overhead, and it gives you plain C calling performance. And in the much less likely case that you really cannot get your code fast enough in Cython, you can still consider rewriting the truly critical parts of it in C (or C++ or Fortran) and call it from your Cython code naturally and natively. But then, this really becomes the last resort instead of the only option.
So, ctypes is nice for doing simple things and quickly getting something running. However, as soon as things start to grow, you'll most likely come to the point where you notice that you'd have been better off using Cython right from the start.
ctypes is your best bet for getting it done quickly, and it's a pleasure to work with as you're still writing Python!
I recently wrapped an FTDI driver for communicating with a USB chip using ctypes and it was great. I had it all done and working in less than one work day. (I only implemented the functions we needed, about 15 functions).
We were previously using a third-party module, PyUSB, for the same purpose. PyUSB is an actual C/Python extension module. But PyUSB wasn't releasing the GIL when doing blocking reads/writes, which was causing problems for us. So I wrote our own module using ctypes, which does release the GIL when calling the native functions.
One thing to note is that ctypes won't know about #define constants and stuff in the library you're using, only the functions, so you'll have to redefine those constants in your own code.
Here's an example of how the code ended up looking (lots snipped out, just trying to show you the gist of it):
from ctypes import *
d2xx = WinDLL('ftd2xx')
OK = 0
INVALID_HANDLE = 1
DEVICE_NOT_FOUND = 2
DEVICE_NOT_OPENED = 3
...
def openEx(serial):
    serial = create_string_buffer(serial)
    handle = c_int()
    if d2xx.FT_OpenEx(serial, OPEN_BY_SERIAL_NUMBER, byref(handle)) == OK:
        return Handle(handle.value)
    raise D2XXException

class Handle(object):
    def __init__(self, handle):
        self.handle = handle
    ...
    def read(self, bytes):
        buffer = create_string_buffer(bytes)
        count = c_int()
        if d2xx.FT_Read(self.handle, buffer, bytes, byref(count)) == OK:
            return buffer.raw[:count.value]
        raise D2XXException
    def write(self, data):
        buffer = create_string_buffer(data)
        count = c_int()
        bytes = len(data)
        if d2xx.FT_Write(self.handle, buffer, bytes, byref(count)) == OK:
            return count.value
        raise D2XXException
Someone did some benchmarks on the various options.
I might be more hesitant if I had to wrap a C++ library with lots of classes/templates/etc. But ctypes works well with structs and can even call back into Python.
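To illustrate the callback part, here is the classic example from the ctypes documentation: sorting a C array with libc's qsort and a comparison function written in Python (the libc lookup shown is Unix-style):
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Describe the comparator signature that qsort expects.
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def py_cmp(a, b):
    return a[0] - b[0]

values = (ctypes.c_int * 5)(5, 1, 7, 33, 99)
libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_cmp))
print list(values)   # [1, 5, 7, 33, 99]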
Cython is a pretty cool tool in itself, well worth learning, and is surprisingly close to the Python syntax. If you do any scientific computing with Numpy, then Cython is the way to go because it integrates with Numpy for fast matrix operations.
Cython is a superset of the Python language. You can throw any valid Python file at it, and it will spit out a valid C program. In that case, Cython just maps the Python calls to the underlying CPython API. This results in perhaps a 50% speedup, because your code is no longer interpreted.
To get some optimizations, you have to start telling Cython additional facts about your code, such as type declarations. If you tell it enough, it can boil the code down to pure C. That is, a for loop in Python becomes a for loop in C. Here you will see massive speed gains. You can also link to external C programs here.
Using Cython is also incredibly easy. The manual makes it sound difficult, but you literally just do:
$ cython mymodule.pyx
$ gcc [some arguments here] mymodule.c -o mymodule.so
and then you can import mymodule in your Python code and forget entirely that it compiles down to C.
In any case, because Cython is so easy to set up and start using, I suggest trying it to see if it suits your needs. It won't be a waste if it turns out not to be the tool you're looking for.
For calling a C library from a Python application there is also cffi, which is a new alternative to ctypes. It brings a fresh look to FFI:
it handles the problem in a fascinating, clean way (as opposed to ctypes)
it doesn't require you to write any non-Python code (as SWIG, Cython, ... do)
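A minimal sketch of cffi's ABI mode (the library name libadding.so and the function add are placeholders):
from cffi import FFI

ffi = FFI()
ffi.cdef("double add(double x, double y);")   # declare the C signature
lib = ffi.dlopen("./libadding.so")            # placeholder library path
print lib.add(2.0, 3.0)
The signatures are plain C declarations pasted into cdef(), which is what makes the approach feel clean compared to ctypes' argtypes/restype setup.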
I'll throw another one out there: SWIG
It's easy to learn, does a lot of things right, and supports many more languages so the time spent learning it can be pretty useful.
If you use SWIG, you are creating a new python extension module, but with SWIG doing most of the heavy lifting for you.
Personally, I'd write an extension module in C. Don't be intimidated by Python C extensions -- they're not hard at all to write. The documentation is very clear and helpful. When I first wrote a C extension in Python, I think it took me about an hour to figure out how to write one -- not much time at all.
If you already have a library with a defined API, I think ctypes is the best option, as you only have to do a little initialization and then more or less call the library the way you're used to.
I think Cython or creating an extension module in C (which is not very difficult) are more useful when you need new code, e.g. calling that library to do some complex, time-consuming tasks and then passing the result to Python.
Another approach, for simple programs, is to run a separate process (compiled externally) that writes its result to standard output, and call it with the subprocess module. Sometimes it's the easiest approach.
For example, if you make a console C program that works more or less like this:
$miCcode 10
Result: 12345678
You could call it from Python
>>> import subprocess
>>> p = subprocess.Popen(['miCcode', '10'], stdout=subprocess.PIPE)
>>> std_out, std_err = p.communicate()
>>> print std_out
Result: 12345678
With a little string formatting, you can take the result any way you want. You can also capture the standard error output, so it's quite flexible.
ctypes is great when you've already got a compiled library blob to deal with (such as OS libraries). The calling overhead is severe, however, so if you'll be making a lot of calls into the library, and you're going to be writing the C code anyway (or at least compiling it), I'd say go for Cython. It's not much more work, and it'll be much faster and more pythonic to use the resulting pyd file.
I personally tend to use Cython for quick speedups of Python code (loops and integer comparisons are two areas where Cython particularly shines), and when more involved code or wrapping of other libraries is required, I'll turn to Boost.Python. Boost.Python can be finicky to set up, but once you've got it working, it makes wrapping C/C++ code straightforward.
cython is also great at wrapping numpy (which I learned from the SciPy 2009 proceedings), but I haven't used numpy, so I can't comment on that.
I know this is an old question, but it comes up on Google when you search for things like "ctypes vs cython", and most of the answers here are written by people who are already proficient in Cython or C, which might not reflect the actual time you'd need to invest to learn those and implement your solution. I am a complete beginner in both. I have never touched Cython before, and have very little experience with C/C++.
For the last two days, I was looking for a way to delegate a performance-heavy part of my code to something lower-level than Python. I implemented my code in both ctypes and Cython; it consisted basically of two simple functions.
I had a huge list of strings that needed to be processed. Note those types: list and string.
Neither type corresponds perfectly to a C type, because Python strings are unicode by default and C strings are not, and Python lists are simply NOT C arrays.
Here is my verdict: use Cython. It integrates more fluently with Python and is easier to work with in general. When something goes wrong, ctypes just gives you a segfault; Cython will at least give you compile warnings with a stack trace whenever possible, and you can return a valid Python object easily.
Here is a detailed account of how much time I needed to invest in both of them to implement the same function. I did very little C/C++ programming, by the way:
Ctypes:
About 2h researching how to transform my list of unicode strings into a C-compatible type.
About an hour on how to return a string properly from a C function. Here I actually provided my own solution to SO, once I had written the functions.
About half an hour to write the code in C and compile it into a dynamic library.
10 minutes to write test code in Python to check that the C code works.
About an hour of doing some tests and rearranging the C code.
Then I plugged the C code into the actual code base and saw that ctypes does not play well with the multiprocessing module, as its handler is not picklable by default.
About 20 minutes rearranging my code to not use the multiprocessing module, and retrying.
Then the second function in my C code generated segfaults in my code base, although it passed my test code. Well, this is probably my fault for not checking edge cases well; I was looking for a quick solution.
For about 40 minutes I tried to determine the possible causes of these segfaults.
I split my functions into two libraries and tried again. Still had segfaults for my second function.
I decided to let go of the second function and use only the first function of the C code, and at the second or third iteration of the Python loop that uses it, I got a UnicodeError about not being able to decode a byte at some position, even though I had encoded and decoded everything explicitly.
At this point, I decided to look for an alternative and turned to Cython:
Cython
10 min of reading the Cython hello world.
15 min of checking SO on how to use Cython with setuptools instead of distutils.
10 min of reading about Cython types and Python types. I learnt I can use most of the built-in Python types for static typing.
15 min of re-annotating my Python code with Cython types.
10 min of modifying my setup.py to use the compiled module in my codebase.
Plugged the module directly into the multiprocessing version of the codebase. It works.
For the record, I did not, of course, measure the exact timings of my investment. It may very well be that my perception of time was a little too attentive, due to the mental effort required while I was dealing with ctypes. But it should convey the feel of dealing with Cython versus ctypes.
There is one issue that made me use ctypes and not Cython, and it is not mentioned in the other answers.
With ctypes, the result does not depend at all on the compiler you are using. You may write a library using more or less any language that can be compiled to a native shared library. It does not matter much which system, which language and which compiler. Cython, however, is limited by its infrastructure. E.g., if you want to use the Intel compiler on Windows, it is much more tricky to make Cython work: you have to "explain" the compiler to Cython, recompile something with that exact compiler, etc. This significantly limits portability.
If you are targeting Windows and choose to wrap some proprietary C++ libraries, then you may soon discover that different versions of msvcrt***.dll (the Visual C++ runtime) are slightly incompatible.
This means that you may not be able to use Cython, since the resulting wrapper.pyd is linked against msvcr90.dll (Python 2.7) or msvcr100.dll (Python 3.x). If the library that you are wrapping is linked against a different version of the runtime, then you're out of luck.
To make things work, you'll then need to create C wrappers for the C++ libraries, link that wrapper DLL against the same version of msvcrt***.dll as your C++ library, and then use ctypes to load your hand-rolled wrapper DLL dynamically at runtime.
There are lots of small details here, which are described at great length in the following article:
"Beautiful Native Libraries (in Python)": http://lucumr.pocoo.org/2013/8/18/beautiful-native-libraries/
There's also the option of using GObject Introspection for libraries that use GLib.
