I'm trying to use distutils with a Python module that contains extensions written in C. The code is housed on a Linux server, but I sometimes upload changes from a Windows machine using the file transfer program WinSCP (editing is done in Notepad++). I've noticed that distutils often fails to pick up these changes to the C code (i.e. python setup.py build does not trigger gcc if the code was previously compiled), even though a check of the C source on the server shows that it really has been updated correctly. On the other hand, changing the code directly on the server with a text editor like vim always causes python setup.py build to recompile the changed files. Any idea why uploading changed files might not cause distutils to recompile them?
Thanks.
EDIT:
After investigating this further, I'm seeing the same problem if I just build a plain C program with a Makefile, so this does not appear to be a distutils problem after all.
Looking into the distutils source to see how it decides when to rebuild, it appears to compare file timestamps to determine whether a file is out of date.
Can you make sure the timestamp actually changes when WinSCP uploads the file? Otherwise, the build command has a "force" option that forces a rebuild no matter what.
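If you want to check what distutils is seeing, here is a small diagnostic sketch; the source and object paths are placeholders for your own files, and distutils.dep_util.newer is the same timestamp comparison the build machinery relies on:

    import os.path
    from distutils.dep_util import newer

    src = "src/mymodule.c"                               # hypothetical C source file
    obj = "build/temp.linux-x86_64-2.7/src/mymodule.o"   # hypothetical object file

    print("source mtime:", os.path.getmtime(src))
    print("object mtime:", os.path.getmtime(obj))
    print("needs rebuild:", newer(src, obj))   # True only if src is newer than obj

If the uploaded file keeps its old modification time (WinSCP has a transfer option to preserve timestamps, which would cause exactly this), either disable that option or run python setup.py build --force to rebuild regardless.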
I want to write code in Python but still have real-time capable code. This means I cannot use the Python interpreter. Mypyc looks promising for this very specific purpose, even though it is not a goal of the tool, as it is only meant to accelerate Python. Would it be possible to run mypyc generated code without the Python interpreter?
I have tried the following things, without success:
Compiling __native.c with gcc, manually pointing it at the headers it requires (such as Python.h from the Python installation) and linking it against the mypyc runtime libraries.
Opening the .so file in a C program with dlopen and importing functions with dlsym.
I've tried looking online and I'm honestly lost at this point.
I'm trying to find out if there's a way to import Python scripts and run them AFTER my Python program has been compiled.
For example, let's say I have a main.py such that:
import modules.NewModule
a = modules.NewModule.NewModuleClass()
a.showYouWork()
then I compile main.py with pyinstaller so that my directory looks like:
main.exe
modules/NewModule.py
My end goal is to make a program that can dynamically read new Python files in a folder (this will be coded in) and use them (the part I'm struggling with). I know it's possible, since that's how add-ons work in Blender 3D but I've struggled for many hours to figure this out. I think I'm just bad at choosing the correct terms in Google.
Maybe I just need to convert all of the Python files in the modules directory to .pyc files? Then, how would I use them?
Also, if this is a duplicate on here (it probably is), please let me know. I couldn't find this issue on this site either.
You may find no detailed answer simply because there is no problem. PyInstaller does not really compile Python scripts into machine-code executables. It just assembles them into a folder along with an embedded Python interpreter, or alternatively creates a compressed single-file executable that automatically unpacks itself at run time into a temporary folder with the same contents.
From then on, you have an almost standard Python environment, with normal .pyc files that can contain normal Python instructions, including calls to importlib to dynamically load other Python modules. You just have to append the directory containing the modules to sys.path before importing them. Another possible caveat is that PyInstaller only bundles the modules it detects as required, not a full Python installation, so you must either make sure that the dynamically loaded modules do not rely on missing standard modules, or be prepared to handle an ImportError.
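A minimal sketch of that approach, using the directory layout from the question (main.exe with a modules folder next to it):

    import importlib
    import os
    import sys

    # In a frozen app, sys.executable points at main.exe, so the modules
    # folder sits right next to it.
    plugin_dir = os.path.join(os.path.dirname(sys.executable), "modules")
    sys.path.append(plugin_dir)

    # Load modules/NewModule.py by name and use the class it defines.
    NewModule = importlib.import_module("NewModule")
    a = NewModule.NewModuleClass()
    a.showYouWork()

New .py files dropped into the modules folder can be discovered the same way, for example by iterating over os.listdir(plugin_dir) and importing each name found there.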
I am developing a project that I want to release as closed source, but it's written in Python, and anyone can open a file with a text editor to see the code, which is not ideal. I use PyInstaller to compile the project, but that only "hides" the main file; the rest of the files are still accessible, which is not ideal at all. I know that CPython byte-compiles the imported files (those are the .pyc files in the __pycache__ folder), but I am also aware that these files can be decompiled easily, so that isn't a good solution either. Is there any way I can compile my Python packages so that they are unreadable by the user but still importable by Python?
You might want to look into Cython.
Cython can compile your Python code into native C extension modules while keeping them importable from Python.
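For example, a minimal setup.py along these lines builds a package into extension modules that can still be imported normally (the package name is a placeholder):

    # setup.py -- minimal sketch; "mypackage" stands in for your own package.
    from setuptools import setup
    from Cython.Build import cythonize

    setup(
        name="mypackage",
        ext_modules=cythonize(
            "mypackage/*.py",   # plain .py files work, no .pyx rewrite needed
            compiler_directives={"language_level": "3"},
        ),
    )

Running python setup.py build_ext --inplace then produces .so (or .pyd on Windows) files, and the original .py sources can be left out of what you ship. Keep in mind that native code can still be reverse engineered, just with far more effort than decompiling .pyc files.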
I'm looking at statically linking Python with my application. The reason for this is that in some test cases I've seen a 10% speed increase. My application uses the Python C-API heavily, and it seems that Whole Program Optimization is able to do some good optimizations; I expect Profile Guided Optimization will gain a little more too. This is all being done in MSVC 2015.
So far I've recompiled the pythoncore project (python35.dll) into a static library and linked that with my application (let's call it myapp.exe). FYI, other than changing the project type to static, the only other thing that needs doing is defining Py_NO_ENABLE_SHARED both during the static lib compile and when compiling myapp.exe. This works fine, and it's how I obtained the 10% speed improvement test result.
So the next step is continuing to support external python modules that have .pyd files (which are .dll files renamed to .pyd). These modules will have been compiled expecting to dynamically link with python35.dll, so I need to provide a workaround for that requirement, since all of the python functions are now embedded into myapp.exe.
First I use a .def file to export all of the public Python functions from myapp.exe. This works fine.
The missing piece is how do I create a python35.dll which redirects all the calls to the functions exported from myapp.exe.
My first attempt is using DLL forwarding. I made a custom python35.dll which has a .def file with lines such as:
PyArg_Parse=myapp.PyArg_Parse
In theory, this works. If I use Dependency Walker on socket.pyd, it correctly opens my python35.dll and shows that all the calls are being forwarded to myapp.exe.
However, when actually running myapp.exe and trying to import socket, it fails to load the required entry points from myapp.exe. 'import socket' in Python causes a LoadLibrary("socket.pyd") to occur, which in turn implicitly loads my custom python35.dll. The failure occurs while loading python35.dll: it's unable to find the entry points for its forwards. The reason seems to be that myapp.exe is not part of the library search path. I can apparently verify this by copying myapp.exe to myapp.dll; if I do that, the python35.dll load works. However, this isn't a solution, since it would result in two copies of the Python environment (one in myapp.exe, one in myapp.dll).
Possible other avenues I've also looked into but haven't found the right solution for:
Somehow getting .exe files to be part of the library search path
Using Windows manifest/configuration to redirect the library somehow
Manually using __declspec(naked) and jmp statements to more explicitly wrap the .dll. I'm working in x64, so I don't think that's possible anymore?
I could manually do the whole Python API and wrap each function manually. This is doable if I can find a way to create the function definitions of all the exports so it's not an insane amount of manual work.
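One way to cut down the manual work in that last option might be to generate the per-symbol entries from an existing export list rather than writing them by hand. A rough sketch, assuming you have a .def-style file listing the exported names (the file name here is hypothetical):

    # gen_forwards.py -- rough sketch; "python35_exports.def" is a hypothetical
    # file with an EXPORTS section listing one symbol per line.
    with open("python35_exports.def") as f:
        in_exports = False
        for line in f:
            line = line.strip()
            if line.upper() == "EXPORTS":
                in_exports = True
                continue
            if in_exports and line and not line.startswith(";"):
                name = line.split()[0].split("=")[0]
                # Emit a forwarding line of the form PyArg_Parse=myapp.PyArg_Parse
                print("    {0}=myapp.{0}".format(name))

The same name list could equally be used to stamp out thin wrapper functions if forwarding turns out not to be viable.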
In summary, is there a way to redirect/forward calls to a .dll to functions/data exported from an .exe? Thanks!
I ended up going with the solution that @martineau suggested in the comments, which was to put all of my application, including Python, into a single .dll instead of an .exe. The .exe is then just a thin stub that calls into the .dll and does nothing else.
Short version
Is it possible to build a SCons environment before the SConstruct script exits?
Long version
I'm porting some software from Windows to Linux. On Windows, it builds in Visual Studio 2013, using MSVC++ and Intel Fortran. On Linux, we're building it with g++ and gfortran.
I've written a Python script that reads a Visual Studio project file (either .vcxproj for the C++ code or .vfproj for the Fortran) and executes the relevant SCons builders to create the build. My SConstruct file then basically looks like this:
def convertVSProjectFile(filename):
    ...
projects = [ 'Source/Proj1/Proj1.vcxproj',
'Source/Proj2/Proj2.vcxproj',
'Source/Proj3/Proj3.vfproj',
...
];
for p in projects:
    convertVSProjectFile(p)
In time this will be reworked to interpret the .sln file rather than listing the projects manually.
For the C++ code, this works fine. It's a problem for the Fortran code, though. The problem comes up when files in two separate projects refer to the same Fortran module. The Fortran scanner spots this and makes the module's source file a dependency of both targets. However, the FORTRANMODPATH construction variable is set differently for the two targets. SCons warns that the same target is built twice with the same builder, but then seems to just pick one of them more or less at random, making it hard to predict where the .mod file will end up.
I can think of a few ways of fixing this:
- Construct each environment separately, build it, then move on to the next one. But I don't know if there's a way of doing this.
- Set the FORTRANMODPATH for each object file rather than each project. Then the .mod file can go in the object folder for the source file instead of all the .mod files for a project going in the same folder. But I can't spot a way of doing this either. Could I achieve this by creating a new Environment for every source file?
- Anything else anyone can come up with.
It is possible to override construction variables for each target:
objs += env.Object(target=..., source=..., FORTRANMODPATH=...)
SCons will see that the second use has a different FORTRANMODPATH and will rebuild it as necessary.
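A sketch of how that can look, reusing the FORTRANMODPATH override from the line above (file and directory names are illustrative only):

    env = Environment()

    shared_src = 'Source/Shared/shared_mod.f90'

    # Give each project its own object and its own module directory, so SCons
    # never sees one target built with two different sets of settings.
    obj1 = env.Object(target='build/Proj1/shared_mod.o', source=shared_src,
                      FORTRANMODPATH='build/Proj1')
    obj3 = env.Object(target='build/Proj3/shared_mod.o', source=shared_src,
                      FORTRANMODPATH='build/Proj3')

Because each call names a distinct target and carries its own FORTRANMODPATH, the .mod file ends up in a predictable per-project folder instead of whichever location SCons happened to pick.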