I am using PyDev and a virtualenv (which has already been set up successfully). How do you add QuantLib (and, for that matter, any Python wrapper plus its C++ native library) to a virtualenv?
I successfully built QuantLib and QuantLib-SWIG from source as described here. I notice that after the build, /usr/local/lib contains libQuantLib.* files, which are probably the native libs.
I then tried copying libQuantLib.* to my virtualenv/lib/python2.7/site-packages, as described here, but Eclipse still complains about unresolved imports (at this point I am also externally referencing the /usr/local/lib/QuantLib-SWIG-1.4/Python/build/lib.linux-x86_64-2.7/QuantLib folder). I am not sure I ever had this working correctly.
I have seen this solution, but I really want everything contained in the virtualenv (both the Python wrapper and the C++ libraries), so that everything resolves when I set the project's PyDev interpreter to my virtualenv.
I am unsure what best practices are here.
I'm not familiar with the way your virtualenv is set up. However, from the fact that your Python modules are in virtualenv/lib/python2.7/site-packages, I'd guess that the native libraries should go in virtualenv/lib. The correct way to get everything set up there, though, is to tell the build machinery where you want the library; in your case (and assuming my guess above is correct) you'd do it by building QuantLib with:
./configure --prefix=/path/to/virtualenv
make
make install
where /path/to/virtualenv is the path to your virtualenv, including the virtualenv folder itself (but not lib). This will put the header files and native libraries in the correct place inside the virtualenv. After this, build QuantLib-SWIG against the QuantLib libraries you just installed. I think the easiest way is to do it from within the virtualenv (that is, using the Python interpreter inside it). Activate the env, enter the QuantLib-SWIG/Python directory, and run:
export PATH=/path/to/virtualenv/bin:$PATH
python setup.py build
python setup.py install
where setting PATH as above might be needed so the build finds the correct quantlib-config script. (By the way, you should end up with just a QuantLib Python module in site-packages, not the whole build/lib.linux-x86_64-2.7 tree you have now.)
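As a quick sanity check (a minimal sketch; run it with the virtualenv activated), you can verify that the wrapper and the native library both resolve:
python -c "import QuantLib"
If the wrapper is missing you'll get a plain ImportError; if the native libQuantLib.* files can't be loaded, the error will mention the shared object instead.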
I'm trying to write a simple plugin with gimpfu in Python, and I tried following these instructions.
1.2. Installation
Gimp-python consists of a Python module written in C and some native python support modules. You can build pygimp with the commands:
./configure
make
make install
This will build and install gimpmodule and its supporting modules, and install the sample plugins in gimp's plugin directory.
Where do I have to execute those commands?
I tried adding my script to the plugins folder, but it seems there is no Python module called gimpfu. I believe I have to enable or install it in some way, but I can't find a solution.
EDIT: It seems like gimpfu is available in the Python-Fu console inside GIMP. It just doesn't seem to be available to my plugin scripts.
No need to install anything. In the Windows versions, Python support is built in, and the gimpfu import is available when your code is executed by Gimp.
If you don't see the plugin in the menu, it is likely a syntax error that prevents it from running its registration code. See here for some debugging techniques.
However, since you mention PyCharm, you may have another Python interpreter installed, and this complicates things because there can be conflicts depending on the order of installation (and remember, Gimp uses Python 2.7).
Now it all depends on whether you are really writing a plugin (called from the Gimp menu) or a batch (where Gimp is called from a shell script), which is somewhat different. If you are writing a batch, see this answer for an example.
You don't need to install anything; on Windows, GIMP comes with its own Python interpreter and the libraries bundled inside it.
If you want to run your script from inside GIMP, you should check this answer. You should also add the GIMP path to your system PATH environment variable (C:\Program Files\GIMP 2\bin on my system), and instead of calling gimp-console.exe you should call whatever gimp-console executable is available in that folder; the one on my system is gimp-console-2.10.exe.
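For instance, a hedged sketch of a Python-Fu batch invocation from a Windows command prompt (the executable name matches the note above; adjust it and the batch code for your setup):
gimp-console-2.10.exe -idf --batch-interpreter python-fu-eval -b "import gimpfu" -b "pdb.gimp_quit(1)"
The -idf flags run Gimp without an interface, user data, or fonts, and the final -b quits Gimp once the batch code has run.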
For testing my libraries on multiple Python versions I have a single virtual environment that I install them into, and reference them with their complete name/version (i.e. python3.7). Recently I noticed that sys.path is still referencing the source library instead of the copied library (i.e. /source/python/... instead of /source/virtualenv/lib/python3.7/...)
I've tried make install instead of make altinstall [1], and I've searched for answers; so far nothing has helped.
How do I fix this?
[1] PSA: If you use make (alt)install and you don't want to clobber your system Python, make sure to use
./configure --prefix /path/to/install_to/here
TL;DR Remove the pyvenv.cfg in the virtualenv root directory.
The issue is the interaction with the virtualenv, not make. Somewhere in Python's startup it checks whether it is running in a virtualenv and, if so, uses the libraries from its original installation (and I had created the virtualenv from my source copy).
The solution is to remove the pyvenv.cfg file in the root of the virtualenv. This will completely isolate the virtualenv from the system (so no sharing of site-packages nor dist-packages), but is exactly what I wanted for my purposes.
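Concretely, with the virtualenv path from the question (adjust to your own layout):
rm /source/virtualenv/pyvenv.cfg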
I maintain a Python utility that allows bpy to be installable as a Python module. Due to the size of the source code and the length of time it takes to download the libraries, I have chosen to provide this module as a wheel.
Unfortunately, platform differences and Blender runtime expectations make support for this tricky at times.
Currently, one of my big goals is to get the Blender addon scripts directory to install into the correct location. The directory (simply named after the Blender API version) has to exist in the same directory as the Python executable.
Unfortunately, the way that setuptools works (or at least the way I have it configured), the 2.79 directory is not always placed as a sibling of the Python executable. It fails on Windows platforms outside of virtual environments.
However, I noticed in the setuptools documentation that you can specify eager_resources, which supposedly guarantees the location of extracted files.
https://setuptools.readthedocs.io/en/latest/setuptools.html#automatic-resource-extraction
https://setuptools.readthedocs.io/en/latest/pkg_resources.html#resource-extraction
There was a lot of hand-waving and jargon in the documentation, and zero examples. I'm really confused about how to structure my setup.py file in order to guarantee the resource extraction. Currently, I just label the whole 2.79 directory as "scripts" in my setuptools Extension and ship it.
Is there a way to write my setup.py and package my module so as to guarantee that the 2.79 directory ends up alongside the currently running Python executable when someone runs
py -3.6.8-32 -m pip install bpy
Besides simply "hacking it in"? I was considering writing an install_requires module that would simply move it if possible, but that means tampering with the user's file system and is kind of hacky. However, it's the route I am going to go if this proves impossible.
Here is the original issue for anyone interested.
https://github.com/TylerGubala/blenderpy/issues/13
My build process is identical to the process described in my answer here:
https://stackoverflow.com/a/51575996/6767685
Maybe try the data_files option of distutils/setuptools.
You could start by adding data_files=[('mydata', ['setup.py'],)], to your setuptools.setup function call. Build a wheel, then install it and see if you can find mydata/setup.py somewhere in your sys.prefix.
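A minimal sketch of that experiment (the package name and version are placeholders):
from setuptools import setup

setup(
    name="mypackage",  # placeholder name for the experiment
    version="0.0.1",
    # ship setup.py itself into <prefix>/mydata/ as a data file
    data_files=[("mydata", ["setup.py"])],
)
Build it with python setup.py bdist_wheel, pip install the resulting wheel, and then look for mydata/setup.py under sys.prefix.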
In your case, the difficult part will be to compute the actual target directory (mydata in this example). It will depend on the platform (Linux, Windows, etc.), on whether it's in a virtual environment or not, on whether it's a global or local install (not actually feasible with wheels currently, see update below), and so on.
Finally of course, check that everything gets removed cleanly on uninstall. It's a bit unnecessary when working with virtual environments, but very important in case of a global installation.
Update
Looks like your use case requires a custom step at install time of your package (since the location of the Python interpreter binary relative to sys.prefix cannot be known in advance). This cannot currently be done with wheels. You have seen it yourself in this discussion.
Knowing this, my recommendation would be to follow the advice from Jan Vlcinsky in his comment on his answer to this question:
Post install script after installing a wheel.
Add an extra setuptools console entry point to your package (let's call it bpyconfigure).
Instruct the users of your package to run it immediately after installing your package (pip install bpy && bpyconfigure).
The purpose of bpyconfigure should be clearly stated (in the documentation and maybe also as a notice shown in the console right after starting bpyconfigure) since it would write into locations of the file system where pip install does not usually write.
bpyconfigure should figure out where the Python interpreter is and where to write the extra data.
The extra data to write should be packaged as package_data, so that it can be found with pkg_resources.
Of course bpyconfigure --uninstall should be available as well!
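A hedged sketch of the entry-point wiring in setup.py; only the bpyconfigure name comes from the recommendation above, while the module and function names are hypothetical:
from setuptools import setup

setup(
    name="bpy",
    version="0.0.1",
    packages=["bpy"],
    entry_points={
        "console_scripts": [
            # hypothetical module:function implementing the post-install step
            "bpyconfigure = bpy._postinstall:main",
        ],
    },
    # ship the addon scripts directory as package data so that
    # bpyconfigure can locate it later via pkg_resources
    package_data={"bpy": ["2.79/*"]},
)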
Is there a way to build and install Python 2.7.x so that it has no direct dependency whatsoever on anything under /System/Library/Frameworks? (IOW, such Python should remain functional even after sudo chmod 000 /System/Library/Frameworks.)
I thought it would be enough to omit the --enable-framework flag when running ./configure, but I was wrong: if I do this, the resulting Python still has plenty of dependencies on frameworks under /System/Library/Frameworks, including, of course, /System/Library/Frameworks/Python.framework. (IOW, one has to wonder whether there's any difference between installing with and without --enable-framework.)
Yes, --enable-framework makes a difference when building and installing Python. Without --enable-framework, Python is built as a conventional "unix-style" build, installed by default to /usr/local/; that can be changed with the --prefix= option to ./configure. --enable-framework builds a Python that is, by default, installed into /Library/Frameworks, although that can be changed by specifying another path to --enable-framework. But any Python build will be dependent on other libraries and frameworks provided by the operating system. This is normal. Why are you concerned about it?
Update: It's easy to avoid using the Apple-supplied system Pythons, i.e. those in /usr/bin whose shared components are in /System/Library/Frameworks/Python.framework, just by installing another Python 2.7 and not using /usr/bin/python2.7. But that doesn't mean you should or can avoid using other system frameworks.
That said, there is one known problematic Apple-supplied framework in OS X 10.6 through 10.8 that is used by Python: that is Tk 8.5, used by Python Tkinter applications including IDLE. Fortunately, it is pretty easy to work around that. Like Python, you can install a newer, third-party version of the Tcl 8.5 and Tk 8.5 frameworks into /Library/Frameworks and some Python distributions, like the binary installers from python.org, will use them. We recommend the ActiveTcl distribution if you are able to use it. See http://www.python.org/download/mac/tcltk/ for more information.
Also, be aware that you need to install separate versions of Distribute (or setuptools), pip (if you use it), and/or virtualenv for each instance of Python you have. Don't fall into the trap of using the Apple-supplied easy_install commands in /usr/bin/ which are for the system Pythons.
Further update: With the further refinement
"avoid all the stuff under /S/L/F/Python.framework. I already tried something like what you describe, but the resulting installation still depends on stuff under /S/L/F/Python.framework"
all I can do is reiterate that, whether you build your own Python as a "unix" build, a "shared" build, or a "framework" build, the resulting Python should be totally independent of anything in /System/Library/Frameworks/Python.framework. If not, something went wrong in the build or in how you are executing Python. More details would be needed to determine what is going wrong; at a minimum, something like:
/path/to/your/python -c "import sys, pprint; print(sys.version); print(sys.executable); pprint.pprint(sys.path)"
If you built the Python, we'd need to see the complete configure and make commands. But that would be getting into localized debugging not really appropriate for StackOverflow.
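One quick, non-authoritative check of what a given build actually links against is to list its dynamic library dependencies with otool:
otool -L /path/to/your/python/bin/python2.7
Any line mentioning /System/Library/Frameworks/Python.framework would indicate that the build is still tied to the system Python.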
Last (!) update: In a framework build, the --enable-framework=/path/to option to configure uses that path as the install "prefix" for the framework and two auxiliary directories, provided you stick to paths that end in Library/Frameworks. So, if you used:
./configure --enable-framework=/baz/quux/Library/Frameworks && make && make install
it should result in:
/baz
    quux
        Applications
            Python 2.7
                Build Applet.app
                IDLE.app
                ...
        Library
            Frameworks
                Python.framework
                    Versions
                        2.7
                            Headers/
                            Python
                            ...
                            Resources/
                            bin/
                                ...
                                2to3
                                idle2.7
                                ...
                                python
                                python2
                                python2.7
                                ...
                            include/
                            lib/
                            share/
        bin
            2to3 -> ...bin/2to3
            ...
            idle2.7 -> ...bin/idle2.7
            ...
            python -> ...bin/python
            ...
The top-level bin directory is somewhat vestigial and really just confuses matters. It contains symlinks to the executables in the framework bin directory, and it's what gets installed in /usr/local/bin by a default framework build. One problem with using it is that Distutils-installed scripts will, by default, get installed into the framework bin directory, and there won't be an alias for them in the top-level directory. That's why it is recommended that you put the framework bin directory at the head of your shell PATH and just ignore the top-level bin.
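For example, for a default framework build of 2.7 (the path follows the layout above):
export PATH="/Library/Frameworks/Python.framework/Versions/2.7/bin:$PATH"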
If --prefix=/foo/bar is added to the previous configure, it will use the prefix path as the root for the vestigial top-level bin directory. In the above example, that top-level bin directory would be installed instead at:
/foo
    bar
        bin
            2to3 -> ...bin/2to3
            ...
            idle2.7 -> ...bin/idle2.7
            ...
            python -> ...bin/python
            ...
Otherwise, it should have no effect.
I'm looking to create the following:
A portable version of Python that can be run on any system (with any previous version of Python, or no Python at all, installed) and that comes pre-configured with various Python packages (e.g. Django, lxml, pysqlite).
The closest I've found to the above is virtualenv, but this only goes so far.
If I package up a nice virtualenv for Python on one machine, it contains symlinks to a lot of the libraries it needs. I can convert those symlinks to their actual files, but if I try to move the entire directory to another machine, I get segfault after segfault.
To launch python on a different machine, I'm using:
LD_LIBRARY_PATH=lib/ ./bin/python
and in lib/ I have all of the shared libraries I copied over from the original machine. The problem is that these shared libraries might rely on other shared libraries that I'm not including, so executing this on other Linux distros does not work, probably because it falls back on older shared libraries installed on the system that do not work with what I copied over.
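One standard way to see which transitive dependencies fail to resolve is ldd, run on the target machine; for example:
LD_LIBRARY_PATH=lib/ ldd bin/python
# entries marked "not found" are the libraries still missing from lib/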
Anyone have an idea on how to get this working? Is this even possible?
EDIT:
To clarify, the desired outcome is to create a tar.gz of a Python binary and associated packages (Django, lxml, pysqlite, etc.) that can be extracted and run on any Linux-based system, e.g. Ubuntu 8.04, Red Hat 5, SUSE 11 (all 32-bit distros), where the locally installed version of Python doesn't impact what's in the tar.gz.
I just tested this and it works great.
First, get the copy of Python you want to install, untar it, and cd into the untarred folder.
Also get a copy of setuptools and untar that.
The /opt/portapy used below is of course just the name I came up with for this post; it could be any path. The full path should be tarred up, and the same path should be used on any system you put this on, because of absolute-path linking.
mkdir /opt/portapy
cd <python source dir>
./configure --prefix=/opt/portapy && make && make install
cd <setuptools source dir>
/opt/portapy/bin/python ./setup.py install
Make the virtual env folder inside the portapy folder.
mkdir /opt/portapy/virtenv
/opt/portapy/bin/virtualenv /opt/portapy/virtenv
cd /opt/portapy/virtenv
source bin/activate
Done. You are ready to install all of your libraries here and have the option of creating multiple virtual envs this way.
You can then tar up the whole /opt/portapy folder and transport it to any Linux system of the same arch, within reason I suspect.
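For example (paths per the steps above; extract to the same absolute path on the target system):
tar czf portapy.tar.gz /opt/portapy
# on the target system:
tar xzf portapy.tar.gz -C /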
I compiled 2.7.5 on CentOS 5.8 64-bit and moved the folder to a CentOS 6.9 system, and it runs perfectly.
I don't know how this would even be possible. If it were, they wouldn't need to distribute binary packages of Python for different platforms. You can't simply distribute a Python that will run on any platform. It has to be built from source for that arch. Virtualenv will expect you to tell it which system Python to use (using links).
This pretty much goes for almost any binary package that links against system libs. Again, if it were possible, we wouldn't need any platform specific binary distributions.
You can, however, achieve part of what you want: running Python on another machine that doesn't have Python installed, as long as it's the same arch. This is the same concept behind freezing, or py2exe/py2app/pyinstaller: an interpreter is bundled into a standalone environment, so the app can run on any similar platform.
Edit
I just realized that while your question speaks about "system" agnostically, your title contains the reference "linux". There are different flavors of Linux, so for it to work you would have to build it fat for multiple archs and also completely contain the standalone links. You might try building a package with pyinstaller and using that in your project.
You can try just building python from source, in your virtualenv:
$ ./configure --prefix=/path/to/virtualenv && make && make install
If you still have problems with the links to libs, you can also investigate building it statically.
I'm not sure that working solely in Python is the way to go here. You might have better luck with Puppet or Chef, which are configuration tools that can be used to create a local environment. There is plenty of code out there to install virtualenv and Python on just about any Linux, plus OS X (probably not Windows, though).
Your workflow would be to install Chef or Puppet (your choice), run a script to install the Python you want, then enter a virtualenv and pip install any packages you might need.
Sorry this isn't as easy as virtualenv alone, but it is much more robust.
Well, since I rarely accept "can't be done", there is a way to do it. Warning: it isn't pretty and you should probably look into a different scenario.
What you will need to do is determine a standard location for this top-level directory. Second, using that directory as your root, you will need to compile Python on each Linux distribution you want to run this on. For this you would use something like /usr/local/myappname/platform/ as the place to configure and compile Python to live in, in each case substituting "platform" with the name of the platform, such as /usr/local/rhel/. If memory serves, the configure option you are looking for here is --prefix.
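A sketch of one such per-platform build, using the example paths above (the source directory name depends on the Python version you target):
cd Python-2.7.x/    # your untarred Python source
./configure --prefix=/usr/local/myappname/rhel
make && make install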
Once you have each distribution compiled, you will need a script to determine which one to use, and either set environment variables or have it create symlinks to the appropriate "installation" of Python. I would then use virtualenv and bootstrap in that tree to keep the "in-use" Python libraries even more specific.
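A hedged sketch of such a selection script; the distro-detection files and platform names are assumptions:
#!/bin/sh
# Pick the Python build matching this distro; names and paths are hypothetical.
if [ -f /etc/redhat-release ]; then
    platform=rhel
elif [ -f /etc/SuSE-release ]; then
    platform=suse
else
    platform=generic
fi
exec "/usr/local/myappname/$platform/bin/python" "$@"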
I can't think of a common Linux distribution that doesn't have Python by default. As such, you could use setup.py and/or basic Python scripts to script this out, since you should be able to rely on Python being present, even if it's ye olde version as in RHEL installs. Personally I find the above method overly complicated, but it would meet your stated requirements, with the allowance for a final script. Of course, you could use shar (SHell ARchive) to tar all of this into a runnable shell script that does the installation and avoids the need for secondary scripts. If you gzip the resulting shell archive, you can decompress it on target systems and execute it to set everything up.
All that said, I would not recommend this. I would recommend determining the minimum Python version you can run on, ensuring it is installed by the distribution whenever possible, and if need be pulling it down from a repo and installing it. Then use virtualenv and bootstrap with a requirements.txt to install the necessary Python libraries and apps into the virtualenv. For that, see this documentation.
I faced the same problem, so I created PortableVirtualenv. Your question is just the definition of it.
I use it as a base for commercial multiplatform app I develop. (But PortableVirtualenv is public domain - use it freely.)
If needed, you can pip-install any package and zip the whole directory, to also distribute the packages you need.
One nice option is to make a "snap" portable Linux application. They have a Python mode which lets you specify exactly what modules you need. From https://snapcraft.io/first-snap#python :
Snaps let you distribute a dependency-isolated Python app in an app store experience for end users.
Another option is to containerize your application with something like Docker. Then, instead of executing your script directly, the user is actually running a small OS with just your application and its dependencies. https://www.infoq.com/articles/docker-executable-images/ has more about executable containers.
Container images can also be used for short lived processes: a containerized executable meant to be run on your computer. These containers execute a single task, are short lived and can generally be removed after use. We call these executable images. Examples are compilers (Golang) or build tools (Maven), presentation software (I love to hack a simple presentation in Markdown format and let a RevealJS Docker image serve that) and browsers (a fresh contained browser to follow that fishy link). A real evangelist for executable images is Docker's own Jessie Frazelle. To get some great inspiration be sure to read her blog about them or check out this presentation at DockerCon 2015.