EDIT: I have asked the opposite question here: How to embed Python3 with the standard library
A solution for Python2 is provided here: Is it possible to embed python without the standard library?
However, Python3 fails on Py_Initialize(); with:
Fatal Python error: Py_Initialize: unable to load the file system codec
ImportError: No module named 'encodings'
This makes sense, because Python 3 source files are UTF-8 by default. So it seems the interpreter requires an external module just in order to parse Python 3 source files.
So what to do?
It looks as though I need to locate the encodings module in my system Python installation, copy it into my project tree, and maybe set some environment variable like PYTHONPATH(?) so that my libpython.dylib can find it.
Is it possible to avoid this? And if not, can anyone clarify the steps I need to take? Are there going to be any more hiccups?
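As a first check, I can see where my system Python loads encodings from (it turns out to be a pure-Python package, not a compiled binary; the path will differ per machine):

import encodings
print(encodings.__file__)
# e.g. .../Python.framework/Versions/3.4/lib/python3.4/encodings/__init__.py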
NOTE: For posterity, this is how I got a stand-alone libpython.dylib linking into my project on OSX:
First I locate my system Python's library: /usr/local/Frameworks/Python.framework/Versions/3.4/Python (in my case it was installed with homebrew).
Now I:
Copy the .dylib into my project folder, creating ./Libs/libpython3.4.1_OSX.dylib
Go into Build Settings -> Linking and set Other Linker Flags to -lpython3.4.1_OSX
At this point it will appear to work. However, if you now try building it on a fresh OSX installation, it will fail. This is because:
$ otool -D ./libpython3.4.1_OSX.dylib
./libpython3.4.1_OSX.dylib:
/usr/local/Frameworks/Python.framework/Versions/3.4/Python
The .dylib is still holding onto its old location. It's really weird to me that the .dylib contains a link to its own location, as anything that uses it must know where it is in order to invoke it in the first place!
We can correct this with:
$ install_name_tool -id @rpath/libpython3.4.1_OSX.dylib libpython3.4.1_OSX.dylib
But then also in our Xcode project we must:
Go into Build Phases and add a Copy Files step that copies libpython3.4.1_OSX.dylib to Frameworks (that's the right place to put it).
Go into Build Settings -> Linking and set Runpath Search Paths to @executable_path/../Frameworks
Finally, I need to go into Edit Scheme -> Run -> Arguments -> Environment Variables and add PYTHONHOME with value ../Frameworks
I suspect that to get this working I will also need to add a PYTHONPATH
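If that works, a quick sanity check from inside the embedded interpreter (run via PyRun_SimpleString, say) should confirm the paths are actually being picked up; this is just a sketch, and the exact values will depend on PYTHONHOME:

import sys
print(sys.prefix)   # should reflect the PYTHONHOME we set
print(sys.path)     # should include wherever the stdlib/encodings were copied
import encodings    # raises ImportError if the paths are still wrong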
Links:
https://mikeash.com/pyblog/friday-qa-2009-11-06-linking-and-install-names.html
http://qin.laya.com/tech_coding_help/dylib_linking.html
https://github.com/conda/conda-build/issues/279#issuecomment-67241554
Can you please help me understand how Mach-O libraries work in Mac Os X?
http://nshipster.com/launch-arguments-and-environment-variables/
I have attempted this, and embedding Python 3 without the standard library would take more time than you want to spend.
Some modules in the library are necessary for Python 3 to run, and it would take a lot of modification for it to work without them.
Related
I am using PyDev and a virtualenv (which has already been set up successfully). How do you add QuantLib (and, for that matter, any Python wrapper plus its C++ native library) to a virtualenv?
I successfully built QuantLib and QuantLib-SWIG from source as described here. I notice that after the Boost build, /usr/local/lib contains libQuantLib.* files, which are probably the native libs.
I then tried copying libQuantLib.* to my virtualenv/lib/python2.7/site-packages, as described here, but Eclipse still complains about unresolved imports (at this point I am also externally referencing the /usr/local/lib/QuantLib-SWIG-1.4/Python/build/lib.linux-x86_64-2.7/QuantLib folder). I am not sure I had this working correctly.
I have seen this solution, but I really want everything contained in the virtualenv - both the Python wrapper and the C++ libraries, so everything is resolved when I set the project's PyDev interpreter to my virtualenv.
I am unsure what best practices are here.
I'm not familiar with the way the virtualenv is set up. However: from the fact that your Python modules are in virtualenv/lib/python2.7/site-packages, I'd guess that the native libraries should go in virtualenv/lib. The correct way to have everything set up there, though, would be to tell the build machinery where you want the library; in your case (and assuming my guess above is correct) you'd do it by building QuantLib with:
./configure --prefix=/path/to/virtualenv
make
make install
where /path/to/virtualenv is the path to your virtualenv, including the virtualenv folder (but not lib). This will put header files and native libraries in the correct place in the virtualenv. After this, build QuantLib-SWIG using the QuantLib libraries you just installed: I think the easiest way is to do it from within the virtualenv (that is, using the Python interpreter inside it). Activate the env, enter the QuantLib-SWIG/Python directory, and run:
export PATH=/path/to/virtualenv/bin:$PATH
python setup.py build
python setup.py install
where setting PATH as above might be needed to find the correct quantlib-config script. (By the way, you should end up with just a QuantLib Python module in site-packages, not the whole build/lib.linux-x86_64-2.7 thing you have now.)
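Once that finishes, a quick way to confirm everything landed inside the env (assuming the standard QuantLib module name; your exact path will differ):

import QuantLib
print(QuantLib.__file__)  # should point inside virtualenv/lib/python2.7/site-packages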
This is probably a question with a very easy and straightforward answer; however, despite having a few years of programming experience, for some reason I still don't quite get the exact concepts of what it means to "build" and then to "install". I know how to use them and have used them a lot, but I have no idea about the exact processes that happen in the background...
I have looked across the web, Wikipedia, etc., but there is no one simple answer to it, nor can I find one here.
A good example, which I tried to understand, is adding new modules to python:
http://docs.python.org/2/install/index.html#how-installation-works
It says that "the build command is responsible for putting the files to install into a build directory"
And then for the install command: "After the build command runs (whether you run it explicitly, or the install command does it for you), the work of the install command is relatively simple: all it has to do is copy everything under build/lib (or build/lib.plat) to your chosen installation directory."
So essentially what this is saying is:
1. Copy everything to the build directory and then...
2. Copy everything to the installation directory
There must be a process missing somewhere in the explanation... compilation?
I would appreciate a straightforward, not-too-techy answer, but in as much detail as possible :)
Hopefully I am not the only one who doesn't know the detailed answer to this...
Thanks!
Aivoric
Building means compiling the source code to binary in a sandbox location where it won't affect your system if something goes wrong, like a build subdirectory inside the source code directory.
Install means copying the built binaries from the build subdirectory to a place in your system path, where they become easily accessible. This is rarely done by a straight copy command, and it's often done by some package manager that can track the files created and easily uninstall them later.
Usually, a build command does all the compiling and linking needed, but Python is an interpreted language, so if there are only pure Python files in the library, there's no compiling step in the build. Indeed, everything is copied to a build directory, and then copied again to a final location. Only if the library depends on code written in other languages that needs to be compiled will you have a compiling step.
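For a concrete (if contrived) example, here is a minimal setup.py for a hypothetical pure-Python package called mypackage; with this, python setup.py build just copies mypackage/ into build/lib, and python setup.py install copies it again into site-packages, with no compilation anywhere:

# setup.py - minimal distutils script for a pure-Python package.
# "mypackage" is a made-up name; substitute your own package directory.
from distutils.core import setup

setup(
    name='mypackage',
    version='0.1',
    packages=['mypackage'],
)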
You want a new chair for your living room and you want to make it yourself. You browse through a catalog and order a pile of parts. When they arrive at your door, you can't immediately use them. You have to build the chair at your workshop. After a bit of elbow grease, you can sit down in it. Afterwards, you install the chair in your living room, in a convenient place to sit down.
The chair is a program you want to use. It arrives at your house as source code. You build it by compiling it into a runnable program. You install it by making it easier to use.
The build and install commands you are referring to come from the setup.py file, right?
Setup.py (http://docs.python.org/2/distutils/setupscript.html)
This file is created by 3rd party applications / extensions of Python. They are not part of:
Python source code (a bunch of C files, etc.)
Python libraries that come bundled with Python
When a developer makes a library for Python that he wants to share with the world, he creates a setup.py file so the library can be installed on any computer that has Python. Maybe this is the MISSING STEP
Setup.py sdist
This creates a source distribution (the tar.gz file). What this does is copy all the files used by the Python library into a folder, include a setup.py file for the module, and archive everything so the library can be built somewhere else.
Setup.py build
This builds the Python module back into a library (SPECIFICALLY FOR THIS OS).
As you may know, the computer that the Python library originally came from will be different from the computer that you are installing it on.
It might have a different version of python
It might have a different operating system
It might have a different processor / motherboard / etc
For all the reasons listed above, compiled code will not necessarily work on another computer. So setup.py sdist creates an archive with only the source files needed to rebuild the library on another computer.
What setup.py build does exactly is similar to what a makefile would do: it compiles sources, creates libraries, all that stuff.
Now we have a copy of all the files we need in the library and they will work on our computer / operating system.
Setup.py install
Great, we have all the files needed. But they won't work. Why? Well, they have to be added to Python, that's why. This is where install comes in. Now that we have a local copy of the library, we need to install it into Python so you can use it like so:
import mycustomlibrary
In order to do this we need to do several things including:
Copy files to their library folders in our version of Python.
Make sure the library can be imported using the import statement.
Run any special install instructions for this library (setting up paths, etc.).
This is the most complicated part of the task. What if our library uses BeautifulSoup? That is not part of the Python standard library. We'd have to install it in a way such that our library and any others can use BeautifulSoup without interfering with each other.
Also, what if Python was installed someplace else? What if it was installed on a server with many users?
Install handles all these problems transparently. What it does is make the library that we just built able to run. All you have to do is use the import command; install handles the rest.
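You can see the result for yourself once install finishes (mycustomlibrary being the made-up module name from above):

import mycustomlibrary               # resolves from site-packages after install
print(mycustomlibrary.__file__)      # e.g. .../site-packages/mycustomlibrary/__init__.py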
I can't seem to import any of the basic modules located in the "lib-dynload" directory. They are all there, but I get the error: "ImportError: No module named X" when trying to import them.
I checked my sys.path and it includes the directory where all of these modules are located and my PYTHONHOME environment variable is set correctly. I'm at a bit of a loss as to what the problem could be. Some background info: This is cross-compiled from Python 2.6.6 source and installed onto an ARM embedded Linux board with Angstrom.
It did have Python on there before; I had tried to BitBake it into the image, but it was missing a lot of stuff. I ended up doing my best to clean the directory tree of anything to do with the previous Python before loading on my cross-compiled version.
An strace of a simple script that just attempts to import math: http://pastebin.com/3XgJ3nPR
I see no checks in that trace for filenames like math.so or mathmodule.so which might indicate that shared-object modules are turned off entirely — that the version of Python you have compiled cannot load binary modules dynamically.
More: looking over the config.out from my most recent Python build, I see several lines where Python is investigating whether the platform will let it dynamically load binary modules that end in .so:
checking for dlopen... yes
checking DYNLOADFILE... dynload_shlib.o
checking MACHDEP_OBJS... MACHDEP_OBJS
What do these lines say on your cross-compile?
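For comparison, on a working interpreter you can tell whether a given module came in as a built-in or was dynamically loaded from lib-dynload; this quick diagnostic is not specific to your build:

import sys
import math

# Built-in modules appear in sys.builtin_module_names and have no __file__;
# dynamically loaded ones report a path under lib-dynload.
print('math' in sys.builtin_module_names)
print(getattr(math, '__file__', 'built-in'))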
I have recently run across a similar issue building Python 2.7.13, and I believe it is this bug, which is being fixed for Python 3 but not ported back to 2. The build process (setup.py) generates a list of modules to build and then subtracts the list of built-in modules (sys.builtin_module_names). However, setup.py is run (from the Makefile) using python2.7, which in my case picked up the system (Ubuntu) binary rather than the one just built. It therefore subtracts modules that are built-in for the system Python (including operator and collections) but not for the one being built, so those modules are neither built in nor built as external modules.
I was able to use a suggestion from the bug and prepend the built python, in the source directory, to the path (and add a symlink from python2.7 -> python). This worked because I was building an x86 python on a multi-arch x64 machine; if you are building for another system like ARM you may need to apply the patch from the bug to get the list of built-in modules from earlier in the build process rather than the host python.
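You can see the faulty bookkeeping by hand: the set that setup.py subtracts comes from whichever interpreter runs it. Run this under both the host interpreter and the freshly built one (just a sketch of the comparison; the names vary between builds), and any module in the host's list but not the target's is exactly what gets skipped:

import sys
print(sorted(sys.builtin_module_names))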
I have installed python-oauth on my Python 2.4 platform; however, making the python-twitter package work requires some tweaks in oauth... I am quite new to Python, but I assume I cannot alter the egg... how do I install a non-egg version, and how do I remove the egg safely?
Python eggs (like Java jar files) use the zip format. So, to answer your question on how to make your tweaks:
Find the file location
Navigate to location, make a backup copy
If the file is stored as oauth.egg, unzip it
Start modifying!
Find the egg location
Open up a python interpreter and run the following:
>>> import oauth
>>> oauth.__file__
'/usr/lib/python2.6/dist-packages/oauth/__init__.pyc'
Your path will differ, but that will tell you where to look. Often the source code will be unpacked and available in the same directory as a .py file, in this case oauth.py.
(By the way the __file__ attribute is available on all modules unless they represent linked C libraries, but that should not be your case with oauth.)
I'll skip the file navigation, backup, and unzip details, as those will depend on your system.
Removing a Python Egg Safely
I'm afraid my knowledge is lacking here. Removing the egg file is easy, but I'm not at all sure how to check for dependencies from other packages, other than running $ ack python.module.to.remove across your Python library. But here are some basic facts that may help:
Directories that contain an __init__.py file are treated as Python packages and can be imported if they are on the path. See Modules and Packages.
Python eggs will add a .pth file containing additional places to add to the path.
>>> import sys; sys.path will show every directory that Python searches for modules/packages.
The PYTHONPATH environment variable can be configured to add paths you choose to the python search path
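Putting those together, you can inspect the search path and do by hand what a .pth file does at startup (the appended directory below is just a made-up example):

import sys

# Print every directory Python searches for modules/packages.
for p in sys.path:
    print(p)

# A .pth file in site-packages is just a list of extra directories, one per
# line, which site.py appends to sys.path at startup. The runtime equivalent:
sys.path.append('/usr/lib/python2.6/dist-packages/oauth-unpacked')  # hypothetical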
PS If you are new to Python, I highly recommend finding out more about IPython. It makes the Python interpreter much nicer to deal with.
Good luck and welcome to Python!
I'm developing Python C++ extensions for use on both OSX and Linux. Currently, I can run my code with a wrapper script wrapper.sh:
#!/bin/bash
trunk=`dirname $0`
trunk=`cd $trunk; pwd`
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$trunk/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$trunk/lib/:$trunk/src/hdf5/lib/:$trunk/src/python/lib
$trunk/src/python/bin/python "$@"
which is able to set up my run like this: wrapper.sh app.py
What I would like to do is to eliminate the need for wrapper.sh, so I need alternatives for DYLD_LIBRARY_PATH and LD_LIBRARY_PATH. I cannot put my libraries in some standard location like /usr/local/lib because, on my machine, I maintain several independent instances of my libraries. That is, my libraries need to be kept somewhere relative to my installation path. I can't put these environment variables in my login script for the same reason. Currently, I need to call one of my wrapper.sh scripts to use the associated libraries. My goal is to be able to run merely app.py, which, if it lives in my installation path, should be able to find its associated Python and libraries. The purpose is to simplify execution for users, and to simplify usage of external tools like nosetests.
One alternative seems to be using rpath when I build my version of python:
./configure --enable-shared --prefix=$(CURDIR)/$(PYTHON_DIR) LDFLAGS="-Wl,-rpath,$(CURDIR)/lib/ -Wl,-rpath,$(CURDIR)/src/hdf5/lib -Wl,-rpath,$(CURDIR)/src/python/lib"
This trick seems to work fine on Linux, even though one of my libraries ended up needing to be copied directly into trunk/src/python/lib/python2.6/lib-dynload for some reason unclear to me. However, this trick is not working on OSX; it looks like I need to run install_name_tool on all my dylib libraries.
The other alternative I came up with was to do something like this:
ln -s wrapper.sh python
so that my scripts could all use #! ../python, but I'm getting Unmatched ". errors. Same thing if I use #! ../wrapper.sh. I'm not really an expert in bash...
However, these all seem so unnecessarily complicated, and surely this is something that other people have solved?? Thanks for any advice!
For python extensions, consider using PYTHONPATH: the Python interpreter will search the PYTHONPATH for .py/.pyc/.pyo/.so modules, as well as packages. See docs for Python 2.x as well as docs for Python 3.x; specifically the section named "The Module Search Path" on both pages. This also references information that seems to indicate that it is possible to update the module search path at runtime, which, if true, means that you could add all that logic to your program and it can hunt for its libraries on its own (say if it installs a copy in /usr/libexec/pkgname/... somewhere or something).
For all but the most complex of cases, though, setting PYTHONPATH and using a shell-script or native-compiled binary wrapper to start the core program is an okay approach, and one that is also used in other language environments including Mono and Java.
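If you go the runtime route for the Python side of things, the setup is only a few lines at the top of app.py. A sketch, assuming the trunk/lib layout from the question (mymodule is a stand-in name); note this helps Python locate extension modules, but it will not fix transitive shared-library dependencies, which still need rpath or the environment variables:

import os
import sys

# Prepend the installation-relative library directory to the module search
# path before importing anything that lives there.
trunk = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(trunk, 'lib'))

import mymodule  # hypothetical extension module living in trunk/lib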
Not sure if this would be an acceptable (partial) solution in your circumstances, but another way to get libraries noticed by ld on Linux is to add the path to the libraries to /etc/ld.so.conf and then run ldconfig
For the Mac I don't remember the details, but I think Apple provides some resources for distributing apps packaged as a .app, which includes some default locations (relative to the root of the .app) for libraries, or "frameworks" as they call them. It would require some googling from there - sorry I can't help further on that, but I hope you get some progress :-)