I'm new to all of this, so please excuse me if I did something stupid here. Treating me and explaining as if I'm a total noob would be helpful.
I have a simple function written in Python, in a file named a.pyx:
#!/usr/bin/env python
import os
import sys

def get_syspath():
    ret = sys.path
    print "Syspath:{}".format(ret)
    return ret
I want it to be usable from Tcl.
I read through the Cython page and followed it.
I ran this:
cython -o a.c a.pyx
I then ran this command to generate the object file a.o:
gcc -fpic -c a.c -I/usr/local/include -I/tools/share/python/2.7.1/linux64/include/python2.7
And then ran this to generate the shared library a.so:
gcc -shared a.o -o a.so
When I load it from a tclsh, it fails:
$ tclsh
% load ./a.so
couldn't load file "./a.so": ./a.so: undefined symbol: PyExc_RuntimeError
Am I taking the correct path here? If not, can you please explain what went wrong and what I should be doing?
Thanks in advance.
The object code needs to be linked against the libraries it depends on when you're building the loadable library. This means adding appropriate -l... options and possibly some -L... options as well. I'm guessing that the option will be something like -lpython2.7 (which links against a libpython2.7.so somewhere on your library search path; that search path is extended with the -L... options), but I don't know for sure. Paths depend a lot on exactly how your system is set up, and experimentation will likely be required on your part.
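If you're unsure what to pass to -l and -L on your machine, the interpreter itself can report where its library lives; this is a sketch using the stdlib sysconfig module (note that some builds also append an ABI suffix to the library name, so treat the output only as a starting point):

```python
import sysconfig

# Directory that contains libpythonX.Y.so -- candidate for a -L option.
libdir = sysconfig.get_config_var("LIBDIR") or ""
# X.Y version string, e.g. "2.7" -- basis for the -l option.
version = sysconfig.get_config_var("VERSION")

print("-L" + libdir)
print("-lpython" + version)
```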
It still probably won't work as a loadable library in Tcl. Tcl expects there to be a library initialisation function (in your case, it will look for A_Init) that takes a Tcl_Interp* as its only argument so that the library can install the commands it defines into the Tcl interpreter context. I would be astonished if Python made such a thing by default. It's not failing with that for you yet because the failures are still happening during the internal dlopen() call and not the dlsym() call, but I can confidently predict that you'll still face them.
The easiest way to “integrate” that sort of functionality is by running the command in a subprocess.
Here's the Python code you might use:
import os
import sys
print sys.path
And here's the Tcl code to use it:
set syspath [exec python /path/to/yourcode.py]
# If it is in the same directory as this script, use this:
#set syspath [exec python [file join [file dirname [info script]] yourcode.py]]
It's not the most efficient way, but it's super-easy to make it work since you don't have to solve the compilation and linking of the two languages. It's programmer-efficient…
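If the Tcl side wants the result as a proper list rather than one blob of text, the Python script could print one entry per line; splitting on newlines is then trivial in Tcl. A sketch, using the same stdout-based approach as above:

```python
import sys

def syspath_lines():
    # One sys.path entry per line is unambiguous to split on the Tcl side.
    return "\n".join(sys.path)

print(syspath_lines())
```

On the Tcl side, something like `set syspath [split [exec python /path/to/yourcode.py] \n]` then yields a real Tcl list.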
Maybe have a look at tclpython or libtclpy.
Both allow calling Python code from Tcl.
But if you want to wrap things in a nicer way, e.g. have nicer already-wrapped APIs, maybe you should also look at Elmer, which seems to aim at the task you're attempting.
Python is a scripting language, and it is hard to protect Python code from being copied. 100% protection isn't required, but I'd at least like to slow down those who have bad intentions. Is it possible to minify/uglify Python code the way JavaScript front-end code is minified today?
EDIT: The Python code will run on a Raspberry Pi, not a server. On a Raspberry Pi, anyone can take out the SD card and gain access to the code.
What about starting off by distributing only the .pyc files? These are files the Python interpreter creates for performance reasons (their load times are faster than .py files'), but to the casual user they are difficult to decipher.
python -m compileall .
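The same step can be scripted with the stdlib compileall module; here is a minimal sketch (the throwaway directory and the secret.py module are just for illustration):

```python
import compileall
import os
import tempfile

# Create a throwaway source tree with a single module in it.
src = tempfile.mkdtemp()
with open(os.path.join(src, "secret.py"), "w") as f:
    f.write("ANSWER = 42\n")

# Byte-compile everything under that directory, just like
# `python -m compileall .` does for the current directory.
ok = compileall.compile_dir(src, quiet=1)
print("compiled successfully:", bool(ok))
```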
Ramp up the security by using Cython to compile your Python source. To "cythonize" your code, run Cython + GCC on each module. The __init__.py files must be left intact to keep module imports working. A silly Hello world example:
$ cython helloworld.py -o helloworld.c
$ gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing -I/usr/include/python3.7 -o helloworld.so helloworld.c
YMMV using this approach; I've run into various gotchas using different modules.
I will answer my own question.
I found the following software tools that can do the job. I have not tried them, so I cannot comment on how effective they are; comments on their effectiveness are welcome.
https://liftoff.github.io/pyminifier/
https://mnfy.readthedocs.io/en/latest/
Sure, you could uglify it, but given that Python relies on indentation for syntax, you couldn't do the equivalent minification (which in JS relies largely on removing all whitespace).
Beside the point, but JS is minified to make it download faster, not obfuscate it.
Python is executed server-side. While it's sometimes fun to intentionally obfuscate code (look into Perl obfuscation ;), it should never be necessary for server-side code.
If you're trying to hide your Python from someone who already has access to the directories and files it is stored in, you have bigger problems than code obfuscation.
Nuitka (nuitka.net) is a good way to compile your Python code to object code. This makes reverse engineering and exposing your algorithms extremely hard. Nuitka can also produce a standalone executable that is very portable.
While this may be a way to preserve trade secrets, it comes with some hard limitations.
a) Some Python libs are already binary distros which are difficult to bundle in a standalone exe (e.g. xgboost, pytorch).
b) Wide pip distribution of a binary package is an exercise in deep frustration because it is linked to the CPython lib; manylinux and universal builds are a vast wasteland waiting to be mapped and documented.
As for the downvotes, please consider that 1) not all Python runs on servers (some runs on the edge), 2) non-open-source authors need to protect their intellectual property, and 3) smaller always makes for faster installs.
I'm using an IronPython script engine in my C# application. It generally works OK, but for some reason it cannot find the "math" library (i.e. I can't "import math"). I checked my DLLs (IronPython.dll, Microsoft.Scripting, Microsoft.Dynamic) and they all seem to be OK and recent versions (I copied them out of an IronPython 2.7.7.0 installation). However, when I try to execute an "import math" command, it says "No module named math". I can import "sys" and other modules OK, so why not "math"?
Here's a simplified version of my code:
pyEngine = Python.CreateEngine();
outputStream = new NotifyingMemoryStream();
outputStream.TextWritten += OutputStream_TextWritten;
outputStreamWriter = new StreamWriter(outputStream) { AutoFlush = true };
pyEngine.Runtime.IO.SetOutput(outputStream, outputStreamWriter);
pyEngine.Runtime.IO.SetErrorOutput(outputStream, outputStreamWriter);
ScriptSource source = pyEngine.CreateScriptSourceFromString("import math" + Environment.NewLine + "math.log(10)", Microsoft.Scripting.SourceCodeKind.AutoDetect);
double b = source.Execute&lt;double&gt;();
The error occurs at the "double b = source..." line. Any help would be appreciated.
Just copying the IronPython DLLs out of an IronPython installation into your project does not get you the standard library.
The reason sys works is that it's one of a handful of special modules that are not just "builtins", but literally linked or frozen into the main interpreter. Most of the stdlib will not work.
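You can see the frozen/builtin set from inside any interpreter; the exact list differs between CPython and IronPython, so take this only as an illustration of the concept:

```python
import sys

# Modules linked into the interpreter binary itself; these import even
# with no library files on disk. Everything else is located via sys.path.
print(sys.builtin_module_names)
print("sys is baked in:", "sys" in sys.builtin_module_names)
```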
The short version is: You have to either:
Also copy the standard library, or
Reference it in-place like this:
var searchPaths = pythonEngine.GetSearchPaths();
searchPaths.Add(@"D:\Absolute\Path\To\IronPython");
pythonEngine.SetSearchPaths(searchPaths);
Obviously, the latter solution won't work if you want to deploy or distribute your app.
According to this blog post and a few others, it looks like the way most people handle this is to NuGet the stdlib into your project instead of copying stuff around manually. And, while you're at it, to NuGet IronPython instead of copying and pasting the DLLs.
This still doesn't completely solve deploy/distribute, but from there, it's just a matter of configuring your build to copy whichever parts of that Lib you want into your target, basically the same way you're presumably already doing with the DLLs. If you copy some or all of the stdlib libs into your build target, they'll automatically be on your search path; if you copy them into some custom subdirectory instead, you'll of course need to add that as shown above.
If you don't plan to deploy/distribute, you still may want to configure the copy. But, if not, you can just add ..\\.. to your search path.
(By the way, all of this is definitely not what I vaguely remember doing… but then what I remember is probably horribly out of date.)
I want to make a "common" script which I will use in all my SConscripts.
This script must use some SCons functions like Object() or SharedObject().
Is there an SCons module that I can import, or maybe another useful hack?
I'm new to Python and SCons.
I have done exactly what you are explaining. If your SConstruct and/or SConscript scripts simply import your common Python code, then there is nothing special you have to do, except import the appropriate SCons modules in your Python code.
If, on the other hand, you have a Python script from which you want to invoke SCons (as opposed to launching scons from the command line), then much more effort will be needed. I originally looked into doing this, but later decided it wasn't worth the effort.
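For the first case, a common pattern is to have the shared module accept the construction environment as a parameter, so it doesn't even need to import SCons itself. A sketch (common.py and build_shared are made-up names):

```python
# common.py -- helpers imported by every SConscript.

def build_shared(env, sources):
    """Build a SharedObject for each source file, using whatever
    construction environment the calling SConscript passes in."""
    return [env.SharedObject(src) for src in sources]
```

An SConscript would then do `import common` followed by something like `objs = common.build_shared(env, ['a.c', 'b.c'])`.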
Following this recommendation, I have written a native C extension library to optimise part of a Python module via ctypes. I chose ctypes over writing a CPython-native library because it was quicker and easier (just a few functions with all tight loops inside).
I've now hit a snag. If I want my work to be easily installable using distutils via python setup.py install, then distutils needs to be able to build my shared library and install it (presumably into /usr/lib/myproject). However, this is not a Python extension module, so as far as I can tell, distutils cannot do this.
I've found a few references to other people with this problem:
Someone on numpy-discussion with a hack back in 2006.
Somebody asking on distutils-sig and not getting an answer.
Somebody asking on the main python list and being pointed to the innards of an existing project.
I am aware that I can do something native and not use distutils for the shared library, or indeed use my distribution's packaging system. My concern is that this will limit usability as not everyone will be able to install it easily.
So my question is: what is the current best way of distributing a shared library with distutils that will be used by ctypes but otherwise is OS-native and not a Python extension module?
Feel free to answer with one of the hacks linked to above if you can expand on it and justify why that is the best way. If there is nothing better, at least all the information will be in one place.
The distutils documentation here states that:
A C extension for CPython is a shared library (e.g. a .so file on Linux, .pyd on Windows), which exports an initialization function.
So the only difference regarding a plain shared library seems to be the initialization function (besides a sensible file naming convention I don't think you have any problem with). Now, if you take a look at distutils.command.build_ext you will see it defines a get_export_symbols() method that:
Return the list of symbols that a shared extension has to export. This either uses 'ext.export_symbols' or, if it's not provided, "PyInit_" + module_name. Only relevant on Windows, where the .pyd file (DLL) must export the module "PyInit_" function.
So using it for plain shared libraries should work out-of-the-box except on Windows. But it's easy to fix that too. The return value of get_export_symbols() is passed to distutils.ccompiler.CCompiler.link(), whose documentation states:
'export_symbols' is a list of symbols that the shared library will export. (This appears to be relevant only on Windows.)
So not adding the initialization function to the export symbols will do the trick. For that you just need to trivially override build_ext.get_export_symbols().
Also, you might want to simplify the module name. Here is a complete example of a build_ext subclass that can build ctypes modules as well as extension modules:
from distutils.core import setup, Extension
from distutils.command.build_ext import build_ext

class build_ext(build_ext):

    def build_extension(self, ext):
        self._ctypes = isinstance(ext, CTypes)
        return super().build_extension(ext)

    def get_export_symbols(self, ext):
        if self._ctypes:
            return ext.export_symbols
        return super().get_export_symbols(ext)

    def get_ext_filename(self, ext_name):
        if self._ctypes:
            return ext_name + '.so'
        return super().get_ext_filename(ext_name)

class CTypes(Extension):
    pass

setup(name='testct', version='1.0',
      ext_modules=[CTypes('ct', sources=['testct/ct.c']),
                   Extension('ext', sources=['testct/ext.c'])],
      cmdclass={'build_ext': build_ext})
I have setup a minimal working python package with ctypes extension here:
https://github.com/himbeles/ctypes-example
which works on Windows, Mac, Linux.
It takes the approach of memeplex above of overriding build_ext.get_export_symbols() and forcing the library extension to be the same (.so) for all operating systems.
Additionally, a compiler directive in the C/C++ source code ensures proper export of the shared library symbols on Windows vs. Unix.
As a bonus, the binary wheels are automatically compiled by a GitHub Action for all operating systems :-)
Some clarifications here:
It's not a "ctypes-based" library. It's just a standard C library, and you want to install it with distutils. Whether you use a C extension, ctypes, or Cython to wrap that library is irrelevant to the question.
Since the library apparently isn't generic but just contains optimizations for your application, the recommendation you link to doesn't apply to you; in your case it is probably easier to write a C extension or to use Cython, in which case your problem is avoided.
For the actual question: you can always use your own custom distutils command, and in fact one of the discussions linked to just such a command, the OOF2 build_shlib command, which does what you want. In this case, though, you want to install a custom library that really isn't shared, so I think you don't need to install it in /usr/lib/yourproject; you can install it into the package directory in /usr/lib/python-x.x/site-packages/yourmodule, together with your Python files. But I'm not 100% sure of that, so you'll have to try.
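If you do install the library into the package directory, loading it at runtime relative to the package's own files is straightforward with ctypes. A sketch (packaged_lib_path and _myopt.so are hypothetical names):

```python
import ctypes
import os

def packaged_lib_path(name, package_dir=None):
    # Locate a shared library installed alongside the package's .py
    # files rather than in a system-wide location such as /usr/lib.
    if package_dir is None:
        package_dir = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(package_dir, name)

# Loading it would then look like:
# lib = ctypes.CDLL(packaged_lib_path("_myopt.so"))
```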
I'm trying to create a python script that will disassemble a binary (a Windows exe to be precise) and analyze its code.
I need the ability to take a certain buffer, and extract some sort of struct containing information about the instructions in it.
I've worked with libdisasm in C before, and I found its interface quite intuitive and comfortable.
The problem is, its Python interface is available only through SWIG, and I can't get it to compile properly under Windows.
On the availability front, diStorm provides a nice out-of-the-box interface, but it provides only the mnemonic of each instruction, not a binary struct with enumerations defining the instruction type and so on.
This is quite uncomfortable for my purpose, and would require a lot of what I see as wasted time wrapping the interface to make it fit my needs.
I've also looked at BeaEngine, which does in fact provide the output I need (a struct with binary info about each instruction), but its interface is really odd and counter-intuitive, and it crashes pretty much instantly when provided with wrong arguments.
The ctypes sort of ultimate-death-to-your-Python crashes.
So, I'd be happy to hear about other solutions that are a little less time-consuming than messing around with djgcc or mingw to build SWIGed libdisasm, or writing an OOP wrapper for diStorm.
If anyone has some guidance as to how to compile SWIGed libdisasm, or better yet, a compiled binary (pyd or dll+py), I'd love to hear/have it. :)
Thanks ahead.
Well, after much meddling around, I managed to compile SWIGed libdisasm!
Unfortunately, it seems to crash python on incorrect (and sometimes correct) usage.
How I did it:
1. I compiled libdisasm.lib using Visual Studio 6. The only things you need for this are the source code from whichever libdisasm release you use, plus stdint.h and inttypes.h (the Visual C++ compatible versions; google them).
2. I SWIGed the given libdisasm_oop.i file with the following command line:
   swig -python -shadow -o x86disasm_wrap.c -outdir . libdisasm_oop.i
3. I used Cygwin to run ./configure in the libdisasm root dir. The only real thing you get from this is config.h.
4. I then created a new DLL project, added x86disasm_wrap.c to it, added the c:\PythonXX\libs and c:\PythonXX\Include folders to the corresponding variables, and set the Release configuration (important: either this, or #undef _DEBUG before including Python.h). Also, there is a chance you'll need to fix the path to config.h.
5. I compiled the DLL project and named the output _x86disasm.dll.
6. Place that in the same folder as the SWIG-generated x86disasm.py and you're done.
Any suggestions for other, less crashy disasm libs for python?
You might try using ctypes to interface directly with libdisasm instead of going through a SWIG layer. It may take more development time, but AFAIK you should be able to access the underlying functionality using ctypes.
I recommend you look at Pym's disassembly library which is also the backend for Pym's online disassembler.
You can use the distorm library: https://code.google.com/p/distorm/
Here's another build: http://breakingcode.wordpress.com/2009/08/31/using-distorm-with-python-2-6-and-python-3-x-revisited/
There's also BeaEngine: http://www.beaengine.org/
Here's a Windows installer for BeaEngine: http://breakingcode.wordpress.com/2012/04/08/quickpost-installer-for-beaenginepython/