Cythonize a .pyc when no .pyx is available - python

I have a .pyc file but not the corresponding .py or .pyx.
Is it possible to cythonize a .pyc?
Should it be decompiled to .py first, and if so, how?

Cython is a superset of Python, and hence you can cythonize a .py file. I say this because .pyc files can be decompiled back to .py files. There are several libraries that can do this; however, I would suggest that you have a look at this question asked previously.
Although this can be done, there are no real benefits: unmodified Python code typically gains at most around a 20% speed boost from Cython.
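Before reaching for a decompiler, it is worth remembering that the interpreter can execute a .pyc directly, since it is just compiled bytecode. A minimal, self-contained sketch (the module name `mymod` and its contents are made up for illustration):

```python
import importlib.util
import pathlib
import py_compile
import tempfile
from importlib.machinery import SourcelessFileLoader

# Create a throwaway module and byte-compile it so the sketch is self-contained.
src = pathlib.Path(tempfile.mkdtemp()) / "mymod.py"
src.write_text("def answer():\n    return 42\n")
pyc_path = py_compile.compile(str(src))  # path of the generated .pyc

# Load the .pyc directly -- no .py source needed.
loader = SourcelessFileLoader("mymod", pyc_path)
spec = importlib.util.spec_from_loader("mymod", loader)
mod = importlib.util.module_from_spec(spec)
loader.exec_module(mod)
print(mod.answer())  # -> 42
```

This works only on the same interpreter version that produced the .pyc; to feed the code to Cython you would still need to decompile it back to .py first.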

Related

How can I import a module from a pyc file or so file in Pypy?

This is my scenario:
I have a Python project that runs in CPython.
The project contains some .pyc and .so files, and I don't have the source code for these files.
The project runs well in CPython.
But if I change the interpreter to PyPy, it can't import the modules contained in those .pyc and .so files.
Is there any way to solve this problem?
You would need to decompile the code to get back some semblance of *.py files. There are various projects out there to do this: search for "python decompile". Sponsoring one of the efforts would probably go a long way towards getting a working decompiler.
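One way to see why a .pyc built by one interpreter won't load under another is to compare its magic number (the first four bytes of the file) against the running interpreter's. A minimal sketch:

```python
import importlib.util
import pathlib
import py_compile
import tempfile

# Byte-compile a trivial module so we have a .pyc to inspect.
src = pathlib.Path(tempfile.mkdtemp()) / "m.py"
src.write_text("x = 1\n")
pyc_path = py_compile.compile(str(src))

# The first four bytes of a .pyc identify the bytecode format.
with open(pyc_path, "rb") as f:
    magic = f.read(4)

# A .pyc only loads if its magic matches the running interpreter.
compatible = magic == importlib.util.MAGIC_NUMBER
print(compatible)  # -> True here, since this interpreter just produced it
```

A CPython-produced .pyc will fail this check under a different interpreter or bytecode version, which is why decompiling back to source is the practical route.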

Alternative for .pyc to compile python

I am developing a project that I want to release as closed source, but it's written in Python, and anyone can open the files with a text editor to see the code, which is not ideal. I use PyInstaller to bundle the project, but that only "hides" the main file; the rest of the files are still accessible, which is not ideal at all. I know that Python byte-compiles the imported files with CPython, and those are the .pyc files in the __pycache__ folder, but I am also aware that these files can be decompiled easily, so that isn't a good solution. Is there any way I can compile my Python packages so they are not readable by the user but still importable by Python?
You might want to look into Cython.
Cython can compile your Python code into native C extensions that can still be imported from Python.
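A minimal setup.py sketch for this approach (the package path `mypackage/` is hypothetical, and it requires Cython and a C compiler to be installed):

```python
# setup.py -- compile plain .py modules into C extensions (sketch).
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="mypackage",
    ext_modules=cythonize(
        ["mypackage/*.py"],  # compile every module in the package
        compiler_directives={"language_level": "3"},
    ),
)
```

Build with `python setup.py build_ext --inplace`, then ship the resulting .so/.pyd files and leave the .py sources out of the distribution.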

Cythonize but not compile .pyx files using setup.py

I have a Cython project containing several .pyx files. To distribute my project I would like to provide my generated .c files as recommended in the Cython documentation, to minimize problems with different Cython versions.
My current work flow is to build the project using:
me#machine$ python setup.py build_ext --inplace
This cythonizes (i.e., translates .pyx files to .c/.cpp files using Cython) all .pyx files and then compiles them.
For a release I do not need the compiled (.so) files, so I basically ignore them. The problem is, I waste a lot of time with the useless compilation.
Is there a way to cythonize all .pyx files in the folder using the setup.py, without compiling them?
Edit: Why not just use $ cython my_file.pyx?
I have about 20 .pyx files which have different compiler directives. Calling cython on the command line for each file would be slower than just waiting for it to compile.
On the other hand creating a shell script to cythonize them would leave me with a second list of compiler directives that needs to be kept up to date. I would rather not do this, in order to keep my possible points of failure minimal.
One way to create the C files without compiling them is to remove the setup(...) call from setup.py and replace it with just the cythonize("*.pyx") part. Then run:
me#machine$ python setup.py
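Concretely, the trimmed-down setup.py would look something like this (a sketch; it requires Cython and assumes your per-file compiler directives can be passed centrally here or kept in the .pyx headers):

```python
# setup.py -- generate .c files only; with no setup() call, nothing is compiled.
from Cython.Build import cythonize

cythonize(
    "*.pyx",
    compiler_directives={"language_level": "3"},  # directives stay in one place
)
```

Running `python setup.py` now only translates the .pyx files to .c, skipping the slow C compilation step entirely.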

Import external file using Cython

I downloaded a .pyx and a .c file from the internet and I am just wondering how I can incorporate them into Python. The documentation on Cython is fairly vague and mainly focuses on generating .pyx/.c from .py files. It would be great if you could give me some solid examples of how to do this properly. Many thanks.
The Cython executable turns a .pyx file into a .c file. You then use your favorite C build tool to compile it into a shared library (e.g. an .so file on Linux). Then poof, you have an extension module. Note that you'll need to figure out all the extra arguments to your C compiler like the header paths for Python and numpy. These all depend very heavily on not only your OS but also the particulars of how you've installed Python and SciPy on it.
If this all sounds a bit scary, see if the .pyx file is simple enough that you can use pyximport to handle all the messy compilation for you. If external libraries are needed, you'll probably need to construct a .pyxbld file.
Cython is a compiler: it converts a .pyx into a .c, which you then build into a .so or .pyd. Take a look at the Cython docs on compilation. You will probably want to use the pyximport module while you modify the code and experiment, and then use a setup.py when you are done and need a final version of your Cython .pyx module.
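The pyximport route mentioned above looks roughly like this (a sketch; `downloaded_module` is a hypothetical name for the .pyx file you obtained, and Cython plus a C compiler must be installed):

```python
# Compile and import a .pyx on the fly, with no setup.py needed.
import pyximport

pyximport.install()  # hooks .pyx files into the import machinery

# First import triggers compilation; later imports reuse the cached build.
import downloaded_module  # hypothetical: downloaded_module.pyx in this folder
```

If the .pyx depends on external C libraries, pyximport alone won't know the right compiler flags; that is when you need a .pyxbld file or a full setup.py.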

Is there any kind of performance gain while using .pyc files in Python?

We can write a piece of Python code, put it in an already compiled .pyc file, and use it. I am wondering whether there is any gain in terms of performance, or whether it is just a modular way of grouping code.
Thanks a lot
There is no performance gain over the course of your program. It only improves the startup time.
A program doesn't run any faster when it is read from a ‘.pyc’ or
‘.pyo’ file than when it is read from a ‘.py’ file; the only thing
that's faster about ‘.pyc’ or ‘.pyo’ files is the speed with which
they are loaded.
http://www.network-theory.co.uk/docs/pytut/CompiledPythonfiles.html
And .pyo files, produced at the interpreter's higher optimization level, additionally strip docstrings from your code.
Yes, simply because the first time you execute a .py file, it is compiled to a .pyc file.
So on the first run you pay that compilation time; afterwards, the .pyc file is always used.
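The byte-compilation step described above can be triggered explicitly with the standard-library py_compile module; a minimal sketch:

```python
import pathlib
import py_compile
import tempfile

# A throwaway source file, so the example is self-contained.
src = pathlib.Path(tempfile.mkdtemp()) / "mod.py"
src.write_text("value = 40 + 2\n")

# Explicitly byte-compile: this is what the interpreter does on first import.
pyc_path = py_compile.compile(str(src))
print(pyc_path)  # path of the cached .pyc (under __pycache__ on Python 3)

# The .pyc holds bytecode, so later imports skip the parse/compile step.
assert pathlib.Path(pyc_path).exists()
```

Only this parse-and-compile step is saved; the bytecode itself runs at exactly the same speed either way.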
I'm not sure about .pyc files (the very minor gain is at least not creating the .pyc files again), but there is a '-O' flag for the Python interpreter which produces optimized bytecode (.pyo files).
You will gain the fraction of a second it took the Python runtime to parse and compile the source into bytecode, which only happens on first run/import.
You will lose portability.
See here.
A program doesn't run any faster when it is read from a ‘.pyc’ or
‘.pyo’ file than when it is read from a ‘.py’ file; the only thing
that's faster about ‘.pyc’ or ‘.pyo’ files is the speed with which
they are loaded.
Based on the accepted answer of this question: Why are main runnable Python scripts not compiled to pyc files like modules?.
When a module is loaded, the py file is "byte compiled" to pyc files.
The time stamp is recorded in pyc files. This is done not to make it
run faster but to load faster
