How to run mypyc-generated code without the Python interpreter?

I want to write code in Python but still end up with real-time-capable code, which means I cannot rely on the Python interpreter at run time. Mypyc looks promising for this very specific purpose, even though it is not a goal of the tool, which is only meant to accelerate Python. Would it be possible to run mypyc-generated code without the Python interpreter?
I have tried the following things, without success:
Compiling __native.c with gcc and manually pointing it at what it needs, such as Python.h (from the Python installation) and the mypyc runtime libraries.
Opening the resulting .so file from a C program with dlopen and looking up its functions with dlsym.
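For context, a mypyc build normally produces an ordinary CPython extension module: the generated C calls the CPython C API throughout, so it needs libpython and an initialized interpreter, which is why loading the .so with dlopen alone fails. A minimal sketch of the normal usage, assuming a module fib.py compiled with mypyc fib.py (the module and function names are placeholders):

# fib.py was compiled with `mypyc fib.py`, producing a C extension (.so/.pyd).
# The compiled module is loaded through CPython's import machinery, so a
# running interpreter (and libpython) is still required.
import fib  # resolves to the mypyc-built extension module, not fib.py

print(fib.fib(30))  # hypothetical function defined in fib.py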

Related

pybind with boost/dll - dual use DLL?

TL;DR
Adding pybind11 bindings to a working C++ DLL project allows me to import and use the resulting DLL in Python but breaks my ability to use it in C++ code using boost/dll machinery.
Summary
I've got a C++ library that I compile to FooLib.dll. I use boost/dll's BOOST_DLL_ALIAS and boost::dll::import_alias() to export and load a class Foo that does some work in other C++ code.
Some details omitted but it all works great, following this recipe.
I'd like to be able to call the same library code from Python to do some complicated functional testing and do comparisons to numpy/scipy prototypes without having to write so much test code in C++.
So I tried to add pybind11 bindings to the FooLib DLL project using PYBIND11_MODULE.
It compiles, and I get a FooLib.dll. I can copy and rename that to FooLib.pyd, import it as a Python module, and it all works fine. I export Foo as a Python class, and it works.
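For reference, a minimal sketch of that Python-side check, assuming FooLib.pyd sits somewhere on sys.path (the method name is an assumption; the question only names the class):

# Quick smoke test of the renamed FooLib.pyd from Python.
import FooLib

foo = FooLib.Foo()        # Foo is the class exported via PYBIND11_MODULE
print(foo.say_hello())    # hypothetical bound member function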
However, when I compile in the pybind bindings, the boost/dll import machinery can no longer load the original FooLib.dll. I verify with boost::dll::library_info() that the appropriate CreateFoo symbol is exported from the DLL. But loading with boost::dll::import_alias() fails with:
boost::dll::shared_library::load() failed: The specified module could not be found
Minimal Example
Unfortunately, something that needs a C++ and Python executable and compiled boost isn't exactly minimal, but I did my best here:
https://github.com/danzimmerman/pybind-boostdll-minimal
Direct links to the source files:
DLL Project Files
HelloSayerLib.h
HelloSayerImp.cpp
C++ Test Code
HelloSayerLibCppTest.cpp
Python Test Code
HelloSayerLibPythonTests.py
Any advice for next steps?
Is it even possible to compile to one binary that works for both C++ and Python like this?
The suggestion in n. 'pronouns' m.'s comment is correct. Simply copying the Python DLL from the Anaconda distribution I built against into the C++ program's run directory fixes the problem. It makes sense in retrospect, but it didn't occur to me.
This makes it even more likely that I should keep the builds separate, or at least set up my real project to build the pybind11 bindings only on my machine.

Alternative to .pyc for compiling Python

I am developing a project that I want to release as closed source, but it's written in Python, and anyone can open any file in a text editor to see the code, which is not ideal. I use PyInstaller to bundle the project, but that only "hides" the main file; the rest of the files are still accessible, which is not ideal at all. I know that CPython compiles imported files to bytecode, producing the .pyc files in the __pycache__ folder, but I am also aware that these files can be decompiled easily, so that isn't a good solution. Is there any way I can compile my Python packages so that they are not readable by the user but are still importable by Python?
You might want to look into Cython.
Cython can compile your Python code to C extension modules that can still be imported from Python as usual.
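For example, a minimal sketch of a setup.py that compiles a package's modules with Cython (the package name and glob are placeholders); the resulting .so/.pyd extension modules are imported in place of the original .py sources:

# setup.py -- minimal Cython build sketch (requires `pip install cython`).
# Replace the glob with the modules you actually want to compile.
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="mypackage",
    ext_modules=cythonize(
        ["mypackage/*.py"],                            # compile plain-Python sources
        compiler_directives={"language_level": "3"},   # treat sources as Python 3
    ),
)

Build with python setup.py build_ext --inplace. Note that compiled extensions raise the bar for casual inspection but are not a guarantee against determined reverse engineering.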

Statically linking Python, but still supporting external .pyd modules

I'm looking at statically linking Python with my application. The reason is that in some test cases I've seen a 10% speed increase. My application uses the Python C API heavily, and it seems that Whole Program Optimization is able to do some good optimizations. I expect Profile Guided Optimization will gain a little more too. This is all being done in MSVC 2015.
So far I've recompiled the pythoncore project (python35.dll) into a static library and linked that with my application (let's call it myapp.exe). FYI: other than changing the project type to static, the only other thing that needs doing is defining Py_NO_ENABLE_SHARED both during the static-lib compile and when compiling myapp.exe. This works fine, and it's how I was able to obtain the 10% speed improvement test result.
So the next step is continuing to support external python modules that have .pyd files (which are .dll files renamed to .pyd). These modules will have been compiled expecting to dynamically link with python35.dll, so I need to provide a workaround for that requirement, since all of the python functions are now embedded into myapp.exe.
First I use a .def file to export all of the public Python functions from myapp.exe. This works fine.
The missing piece is how to create a python35.dll that forwards all calls to the functions exported from myapp.exe.
My first attempt is using DLL forwarding. I made a custom python35.dll which has a .def file with lines such as:
PyArg_Parse=myapp.PyArg_Parse
In theory, this works. If I use Dependency Walker on socket.pyd, it correctly opens my python35.dll and shows that all the calls are being forwarded to myapp.exe.
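For what it's worth, the forwarding .def file doesn't have to be written by hand. A rough sketch of generating it from a plain list of exported names; the input file name and format are assumptions (e.g. names extracted beforehand from dumpbin /exports myapp.exe):

# make_forward_def.py -- emit a DLL-forwarding .def for the stub python35.dll.
# exports.txt is assumed to contain one exported symbol name per line.
with open("exports.txt") as f:
    names = [line.strip() for line in f if line.strip()]

with open("python35.def", "w") as out:
    out.write("LIBRARY python35\nEXPORTS\n")
    for name in names:
        # Forwarded export: python35!<name> resolves to myapp!<name>
        out.write("    {0}=myapp.{0}\n".format(name))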
However, when actually running myapp.exe and trying to import socket, it fails to load the required entry points from myapp.exe. 'import socket' in Python causes a LoadLibrary("socket.pyd") call, which loads my custom python35.dll implicitly. The failure occurs while loading python35.dll: it is unable to find the entry points for its forwards. The reason seems to be that myapp.exe is not part of the library search path. I can apparently verify this by copying myapp.exe to myapp.dll: if I do that, the python35.dll load works. However, this isn't a solution, since it would result in two copies of the Python environment (one in myapp.exe, one in myapp.dll).
Possible other avenues I've also looked into but haven't found the right solution for:
Somehow getting .exe files to be part of the library search path
Using Windows manifest/configuration to redirect the library somehow
Manually using __declspec(naked) and jmp statements to more explicitly wrap the .dll. I'm working in x64, so I don't think that's possible anymore?
I could manually do the whole Python API and wrap each function manually. This is doable if I can find a way to create the function definitions of all the exports so it's not an insane amount of manual work.
In summary, is there a way to redirect/forward calls to a .dll to functions/data exported from an .exe? Thanks!
I ended up going with the solution that martineau suggested in the comments, which was to put all of my application, including Python, into a single .dll instead of an .exe. The .exe is then just a simple stub that calls into the .dll and does nothing else.

Compiling Python to native code? [duplicate]

Closed 11 years ago as a possible duplicate of: Is it feasible to compile Python to machine code?
Is it possible to compile Python code (plus its dependencies, plus the interpreter library) into a single, native Windows executable (with nothing else bundled along with it) from a Python file? (Kind of like how the GNU compiler for Java compiles Java into a native (humongous) executable, which contains everything in true machine code.)
If so, how would I go about doing this?
(Specifically, py2exe does not do what I want -- it includes the libraries inside a separate ZIP file, and it includes the interpreter as a separate DLL.)
Note 1:
To emphasize, I'm not asking for a "self-extracting archive", an "executable packer", or some other way of 'cheating' by bundling the files inside an exe -- I'm looking for something that genuinely converts Python into a native executable, like what GCJ does for Java.
Note 2:
Only if the above isn't possible:
Is it possible to at least generate a single executable from Python code, containing the interpreter bundled along with all the library dependencies, such that the resulting executable does not need to self-extract onto the target disk before running?
In this scenario, the 'compilation' requirement is relaxed: it doesn't matter if the code is actually compiled into machine code (it could simply be embedded as a text resource into the target executable), but the result must nevertheless be a single exe file [and nothing else] that can run standalone, specifically without needing to unpack/install anything onto the target disk before running.
Shed Skin can compile Python to C++, but only a restricted subset of it. Some aspects of Python are very difficult to compile to native code.
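As a hedged illustration, this is the kind of statically typable code Shed Skin's restricted subset can translate; assuming Shed Skin is installed, it would be built with something like shedskin fib.py followed by make:

# fib.py -- deliberately restricted Python: one concrete type per variable and
# no dynamic features, so a Python-to-C++ translator can infer everything.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == '__main__':
    print(fib(30))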
The short answer is no, and that is going to go for almost any language: any program you write is going to depend on some external libraries even if just the Windows system DLLs.
If you wrote a C program and compiled it with Microsoft's compiler you would still need the C runtime libraries to be installed. Chances are they already will be on most systems but it isn't guaranteed. Likewise even if you managed to compile a C Python interpreter statically linked to its libraries you still have to get the C runtime from somewhere.
What I suspect you are really asking is whether you can compile to a single .exe that depends only on libraries which you have a reasonable expectation of already being installed. So it all depends on what you are willing to consider part of the base system. Can you assume the .NET Framework 4 or Silverlight is installed? If so, you might want to look at IronPython.
Likewise, PyPy can be built with either the Visual Studio toolchain or MinGW, but I'm pretty sure in both cases you'll still need some external libraries at runtime.

Python package with compiled code

I'm looking into releasing a Python package which includes an existing Fortran or C program. The Fortran/C program is compiled by running
./configure
make
The Python code calls the resulting binary through subprocess calls (i.e. the binary is not really wrapped as such). What I would like is that when the user types
python setup.py install
the Fortran/C program is first compiled using the ./configure and make commands, then the Python module is installed, and the binary is installed in the Python bin/ directory alongside the executables that are usually installed via the scripts= option in distutils.core.setup.
First, are there any problems with doing this? And if not, what is the best way to do it via setup.py? Are there existing functions to automate the ./configure and make, since this is pretty standard? Or should I just use os.system calls? And either way, where should those commands go in setup.py? Then should I have make output the binary to e.g. scripts/ and then have scripts=['scripts/mybinary'] in the setup() function?
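For concreteness, here is a rough sketch of one way to wire this up, overriding the build command so that ./configure and make run before the normal build/install steps. The source directory, binary name, and layout are assumptions, and error handling is minimal:

# setup.py -- sketch: build the C/Fortran tool with ./configure && make,
# then let distutils install the resulting binary via scripts=.
import os
import subprocess
from distutils.core import setup
from distutils.command.build import build


class BuildWithMake(build):
    def run(self):
        # Build the external program first (its source is assumed to live in src/).
        subprocess.check_call(["./configure"], cwd="src")
        subprocess.check_call(["make"], cwd="src")
        # Copy the produced binary where scripts= expects to find it.
        os.makedirs("scripts", exist_ok=True)
        self.copy_file("src/mybinary", "scripts/mybinary")
        # Then run the normal build (build_py, build_scripts, ...).
        build.run(self)


setup(
    name="mypackage",
    version="0.1",
    packages=["mypackage"],
    scripts=["scripts/mybinary"],
    cmdclass={"build": BuildWithMake},
)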
Don't make this too complex.
Just provide them as separate items with a README that says -- basically -- what you said in the question.
Build the Fortran/C with ./configure; make; make install.
Setup Python with python setup.py install.
It doesn't appear to be rocket science. Trying to over-simplify the installation means that you must account for every OS vagary and oddness.
It's easier to trust users to do "standard" installations so that the Fortran/C binaries end up on the system PATH, and to configure your Python script to find them there.
People who want to use your software are then free to reconfigure it to their own unique needs. They will anyway. Don't overpackage and force them to fight against you to reconfigure things.
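In that spirit, a minimal sketch of the PATH-based lookup from the Python side; the executable name and arguments are placeholders:

# Locate the separately installed C/Fortran executable on PATH and run it.
import shutil
import subprocess

exe = shutil.which("mybinary")   # hypothetical executable name
if exe is None:
    raise RuntimeError("mybinary not found on PATH; install the C/Fortran tool first")

result = subprocess.run([exe, "input.dat"], capture_output=True, text=True, check=True)
print(result.stdout)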
Consider writing a Python C extension as a wrapper for your C code, and an f2py extension as a wrapper for your Fortran code. Then you can call them from your Python code as fast native calls instead of using subprocess.
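For instance, a rough sketch of building a Fortran wrapper through numpy.distutils, which drives f2py for .f/.f90 sources; the package, module, and file names are assumptions:

# setup.py -- sketch: expose src/solver.f90 as the extension module mypackage.solver
# so Python code can call it directly instead of shelling out via subprocess.
from numpy.distutils.core import setup, Extension

setup(
    name="mypackage",
    version="0.1",
    packages=["mypackage"],
    ext_modules=[
        Extension(name="mypackage.solver", sources=["src/solver.f90"]),
    ],
)

After installation, the Fortran routines are callable directly, e.g. from mypackage import solver followed by solver.some_routine(...), where some_routine is whatever subroutine the Fortran source defines.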
