I'm trying to build a simple web-based IDE in Python. For now, this IDE will support C only. I know it is possible to call gcc from Python to compile and run a single C file. But what if I would like to compile and run multiple C files from a single project (i.e. linking the .c files and their .h headers)? Is this possible? If yes, can you please tell me how?
Well, assuming you want to handle all types of projects and their dependencies (which is not easy), the best way is to have a module that generates a Makefile for the project and then uses it to compile and resolve all dependencies.
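For illustration, here is a rough, hedged sketch of that approach in Python; the helper names, the flat src_dir layout, and the gcc/make flags are assumptions, and header dependencies are only handled implicitly through the GNU make pattern rule:
import os
import subprocess

def write_makefile(src_dir, target="a.out"):
    # Collect every .c file in the project directory (flat layout assumed).
    sources = sorted(f for f in os.listdir(src_dir) if f.endswith(".c"))
    objects = [s[:-2] + ".o" for s in sources]
    lines = [
        "CC = gcc",
        "CFLAGS = -Wall -I.",
        "{}: {}".format(target, " ".join(objects)),
        "\t$(CC) -o {} {}".format(target, " ".join(objects)),  # recipe lines need a real tab
        "%.o: %.c",                                            # GNU make pattern rule
        "\t$(CC) $(CFLAGS) -c $< -o $@",
    ]
    with open(os.path.join(src_dir, "Makefile"), "w") as fh:
        fh.write("\n".join(lines) + "\n")

def build_and_run(src_dir, target="a.out"):
    write_makefile(src_dir, target)
    subprocess.check_call(["make"], cwd=src_dir)              # compile and link
    result = subprocess.run([os.path.join(src_dir, target)],  # run the built program
                            capture_output=True, text=True)
    return result.stdout, result.stderr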
I am developing a project that I want to release as closed source, but it's written in Python, and anyone can open any file with a text editor to see the code, which is not ideal. I use PyInstaller to compile the project, but that only "hides" the main file; the rest of them are still accessible, which is not ideal at all. I know that Python compiles the imported files to bytecode (the .pyc files in the __pycache__ folder), but I am also aware that these files can be decompiled easily, so that isn't a good solution. Is there any way I can compile my Python packages to make them non-readable by the user but still importable by Python?
You might want to look into Cython.
Cython can compile your Python code into native C extensions that can still be imported from Python.
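As a hedged illustration (the package name mypkg is hypothetical), a setup.py along these lines compiles a package's .py modules into native extensions; after building with python setup.py build_ext --inplace, the resulting .so/.pyd files can be shipped in place of the .py sources:
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="mypkg",
    ext_modules=cythonize(
        ["mypkg/*.py"],                              # compile every module in the package
        compiler_directives={"language_level": 3},   # treat the sources as Python 3
    ),
)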
I am working with the 3.6.4 source release of Python. I have no trouble building it with Visual Studio as a dynamic library (/MDd); I can link the Python .dll to my own code and verify its operation.
But when I build it (and my code) with (/MTd) it soon runs off the rails when I try to open a file with a Python program. A Debug assertion fails in read.cpp ("Expression: _osfile(fh) & FOPEN"). What I believe is happening is the Python .dll is linking with improper system libraries. What I can't figure out is how to get it to link with the correct ones (static libraries).
This is what I needed to do to build and use Python statically embedded in another application.
To build the static Python libraries (e.g., python36_d.lib, python36.lib):
Convert ALL projects in the Python solution (pcbuild.sln) to static. This is about 40 projects, so it may take a while. This includes setting library products to be built as 'static lib', and setting all /MD and /MDd build options to /MT and /MTd.
For at least the pythoncore project, change the preprocessor define to Py_NO_ENABLE_SHARED. This tells the project it will be looking for calls from static libraries.
By hook or by crook, find yourself a pyconfig.h file and put it in the Include area of your Python build. It is unclear how this file is built from the Windows tools, but one seems to be able to snag one from other sources and it works OK. One could probably grab the pyconfig.h from the pre-compiled version of the code you are building. [By the way, the Python I built was 3.6.5, built with Visual Studio 2015, Update 3.]
Hopefully, this should enable you to build both python36.lib and python36_d.lib. Now you need to make changes to your application project(s) so they can link with the Python library. You need to do the following:
Add the Python Include directory to the General->Include Directories list.
Add the Python Library directories to the General->Library Directories lists.
This will be ..\PCBuild\win32 and ..\PCBuild\amd64.
Add the define Py_NO_ENABLE_SHARED to the C/C++ -> Preprocessor area.
For Linker->Input, add (for release builds) python36.lib;shlwapi.lib;version.lib and (for debug builds) python36_d.lib;shlwapi.lib;version.lib.
And that should be it; it should build and run. But one more thing: in order to function, the executable needs access to the Lib directory of the Python build, so a copy of it needs to be moved to wherever the executable (containing the embedded Python) resides. Or you can add the Lib area to the execution PATH for Windows; that should work as well.
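As a hedged sanity check (not part of the original build steps), a few lines run through the embedded interpreter will show whether the copied Lib directory is the one actually being used:
import sys

# If the embedded interpreter started at all, the file-system codec was found,
# so Lib is at least partially visible; these prints show where it came from.
print(sys.prefix)   # should point at the directory that contains Lib
print(sys.path)     # the Lib directory should appear in this list

import json         # any pure-Python stdlib module confirms modules load from Lib
print(json.dumps({"embedded": True}))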
That's about all of it.
Short version
Is it possible to build a SCons environment before the SConstruct script exits?
Long version
I'm porting some software from Windows to Linux. On Windows, it builds in Visual Studio 2013, using MSVC++ and Intel Fortran. On Linux, we're building it with g++ and gfortran.
I've written a Python script that reads a Visual Studio project file (either .vcxproj for the C++ code or .vfproj for the Fortran) and executes the relevant SCons builders to create the build. My SConstruct file then basically looks like this:
def convertVSProjectFile(filename):
    ...

projects = [ 'Source/Proj1/Proj1.vcxproj',
             'Source/Proj2/Proj2.vcxproj',
             'Source/Proj3/Proj3.vfproj',
             ...
           ]

for p in projects:
    convertVSProjectFile(p)
In time this will be reworked to interpret the .sln file rather than listing the projects manually.
For the C++ code, this works fine. It's a problem for the Fortran code, though. The problem comes up where files in two separate projects refer to the same Fortran module. The Fortran scanner spots this and makes the module's source file a dependency of both targets. However, the FORTRANMODPATH construction variable is set differently for the two targets. SCons warns that the same target is built twice with the same builder, but then seems to just pick one of them more or less at random, making it hard to predict where the .mod file will end up.
I can think of a few ways of fixing this:
- Construct each environment separately, build it, then move on to the next one. But I don't know if there's a way of doing this.
- Set the FORTRANMODPATH for each object file rather than each project. Then the .mod file can go in the object folder for the source file instead of all the .mod files for a project going in the same folder. But I can't spot a way of doing this either. Could I achieve this by creating a new Environment for every source file?
- Anything else anyone can come up with.
It is possible to override construction variables for each target:
objs += env.Object(target=..., source=..., FORTRANMODPATH=...)
SCons will see that the second use has a different FORTRANMODPATH and should rebuild it as necessary.
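To make that concrete, here is a hedged sketch for an SConstruct; the source and build paths are hypothetical, and the point is only that the per-call FORTRANMODPATH override gives each object (and its .mod files) a distinct location:
env = Environment()   # Environment() is available inside an SConstruct without imports

# The same module source, compiled once per project, each with its own object
# target and its own FORTRANMODPATH, so the .mod files cannot collide.
proj2_objs = env.Object(target='build/Proj2/shared_mod.o',
                        source='Source/Shared/shared_mod.f90',
                        FORTRANMODPATH='build/Proj2')
proj3_objs = env.Object(target='build/Proj3/shared_mod.o',
                        source='Source/Shared/shared_mod.f90',
                        FORTRANMODPATH='build/Proj3')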
I have been given a project to do. One of the main requirements is that it is given to the customer to run as a single exe. It does not matter which programming language is used; however, it will be comparing a set of default files against the customer's files.
Is there any way I can do this so that I have one exe?
The py2exe library allows you to create exe files from your Python code. I've not used it, but it may do the job!
http://www.py2exe.org/
Alternatively, you can try pyinstaller.
See also: py2exe - generate single executable file
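For what it's worth, a hedged sketch of a py2exe setup.py that aims for a single exe looks roughly like this (main.py is a hypothetical entry script; build with python setup.py py2exe):
from distutils.core import setup
import py2exe  # registers the "py2exe" command with distutils

setup(
    console=['main.py'],   # use windows=['main.py'] instead for a GUI app
    options={'py2exe': {'bundle_files': 1, 'compressed': True}},  # bundle everything into the exe
    zipfile=None,          # fold the library archive into the exe as well
)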
You're in luck! You can do just that with Python using the py2exe conversion utility.
You can find it at: http://www.py2exe.org/
I'm trying to use distutils with a Python module that contains extensions written in C. The program code is housed on a Linux server, but I sometimes upload changes from a Windows machine using the file transfer program WinSCP (editing is done in Notepad++). I've noticed that distutils often does not notice these changes in the C code (i.e. python setup.py build does not trigger gcc if the code was previously compiled). A check of the C source code on the server shows that it really has been updated correctly. On the other hand, changing the code directly on the server using a text editor like vim always causes python setup.py build to recompile the changed files. Any idea why uploading changed files might not cause distutils to recompile them?
Thanks.
EDIT:
After investigating this further I am noticing the same problem if I just create a plain C program with a Makefile. Thus this problem does not look like it is a distutils problem.
Looking into the source for distutils to see how it enforces rebuilds, it looks like it checks file timestamps to determine whether a file is out of date or not.
Can you make sure the timestamp is changing when WinSCP uploads the file? Otherwise, the build command has a "force" option (python setup.py build --force) that forces a rebuild no matter what.
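As a hedged way to test the timestamp theory (the file names below are hypothetical), distutils' own helper can be asked directly whether it considers the source newer than the built object:
import os
from distutils.dep_util import newer   # distutils rebuilds only when the source is "newer"

src = "mymodule.c"                                  # the C extension source you uploaded
obj = "build/temp.linux-x86_64-3.6/mymodule.o"      # the previously built object file

print("source mtime:", os.path.getmtime(src))
print("object mtime:", os.path.getmtime(obj))
print("distutils would rebuild:", newer(src, obj))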