I'm new to Python and overwhelmed with documentation regarding modules, package modules and imports.
Best I can understand, there are no "packages", only modules and import mechanisms. The officially recommended high-level API is provided by setuptools, whose Distribution class is the key element that drives what modules are added to the runtime import mechanism (sys.path?) and how, but I'm lost in the details of things.
My understanding is that setup.py is a normal Python module that indirectly uses the runtime API to add more modules, and that it is executed before the actual "user" code. As a normal module it could download more code (like an installer does), sort dependencies topologically, call the system linker, etc.
But specifically, I'm having difficulty in understanding:
How setup.py decides whether to shadow system modules (like math) or not
How setup.py configures the import mechanism so that the above doesn't happen by accident
How does -m interpreter flag work in comparison?
For conciseness, let's assume the most recent CPython and standard library only.
EDIT:
I see that packages are real, and so are the globals and level arguments as seen in the __import__ function. It appears that setuptools has control over them.
setuptools helps package the code into the right formats (example: setup.py build sdist bdist_wheel to build the archives that can be downloaded from PyPI) and then unpack this code into the right locations (example: setup.py install, or indirectly pip install), so that later on it can be picked up automatically by Python's own import mechanisms (typically yes, the location is one of the directories pointed at by sys.path).
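As a minimal illustration (all names below are placeholders, not taken from any real project), a setup.py can be as small as:

from setuptools import setup, find_packages

setup(
    name="myproject",                 # distribution name as it would appear on PyPI
    version="0.1.0",
    packages=find_packages(),         # collect the packages found next to setup.py
    install_requires=["requests"],    # example runtime dependency
)

Running python setup.py sdist bdist_wheel then produces the archives under dist/, and pip install dist/myproject-0.1.0-py3-none-any.whl (or simply pip install .) copies the code into a site-packages directory that is on sys.path.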
I might be wrong but I wouldn't say that:
setup.py decides whether to shadow system modules (like math) or not
nor that:
setup.py configures the import mechanism so that the above doesn't happen by accident
As far as I know this can very well happen; there is nothing to prevent it, and it is not necessarily a bad thing.
How does -m interpreter flag work in comparison?
Instead of instructing the Python interpreter to execute the code from a Python file by its path (example: python ./setup.py), it is possible to instruct the Python interpreter to execute a Python module (example: python -m setup, python -m pip, python -m http.server). In this case Python will look for the named module in the directories pointed at by sys.path. This is why setuptools/setup.py and pip install packages in a location that - if everything is configured right - is in sys.path.
I recommend you have a look at the values stored in sys.path by yourself, with for example the following command: python -c "import sys; print(sys.path)".
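Purely for illustration (the exact entries vary between operating systems, Python versions and virtual environments), the output might look something like:

['', '/usr/lib/python38.zip', '/usr/lib/python3.8', '/usr/lib/python3.8/lib-dynload', '/home/user/.venvs/demo/lib/python3.8/site-packages']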
I maintain a Python utility that allows bpy to be installable as a Python module. Due to the hugeness of the source code, and the length of time it takes to download the libraries, I have chosen to provide this module as a wheel.
Unfortunately, platform differences and Blender runtime expectations make support for this tricky at times.
Currently, one of my big goals is to get the Blender addon scripts directory to install into the correct location. The directory (simply named after the version of the Blender API) has to exist in the same directory as the Python executable.
Unfortunately the way that setuptools works (or at least the way that I have it configured) the 2.79 directory is not always placed as a sibling to the Python executable. It fails on Windows platforms outside of virtual environments.
However, I noticed in setuptools documentation that you can specify eager_resources that supposedly guarantees the location of extracted files.
https://setuptools.readthedocs.io/en/latest/setuptools.html#automatic-resource-extraction
https://setuptools.readthedocs.io/en/latest/pkg_resources.html#resource-extraction
There was a lot of hand waving and jargon in the documentation, and 0 examples. I'm really confused as to how to structure my setup.py file in order to guarantee the resource extraction. Currently, I just label the whole 2.79 directory as "scripts" in my setuptools Extension and ship it.
Is there a way to write my setup.py and package my module so as to guarantee that the 2.79 directory ends up in the same directory as the currently running Python executable when someone runs
py -3.6.8-32 -m pip install bpy
Besides simply "hacking it in"? I was considering writing an install_requires module that would simply move it if possible, but that means meddling with the user's file system and is kind of hacky. However, it's the route I am going to go if this proves impossible.
Here is the original issue for anyone interested.
https://github.com/TylerGubala/blenderpy/issues/13
My build process is identical to the process described in my answer here
https://stackoverflow.com/a/51575996/6767685
Maybe try the data_files option of distutils/setuptools.
You could start by adding data_files=[('mydata', ['setup.py'],)], to your setuptools.setup function call. Build a wheel, then install it and see if you can find mydata/setup.py somewhere in your sys.prefix.
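A minimal sketch of that experiment in setup.py could look like this (the distribution name is a placeholder):

from setuptools import setup

setup(
    name="mypackage",   # placeholder
    version="0.1.0",
    # Each entry is ('target directory relative to sys.prefix', [files to copy there]).
    data_files=[("mydata", ["setup.py"])],
)

After python setup.py bdist_wheel and installing the resulting wheel, look for mydata/setup.py under sys.prefix.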
In your case the difficult part will be to compute the actual target directory (mydata in this example). It will depend on the platform (Linux, Windows, etc.), if it's in a virtual environment or not, if it's a global or local install (not actually feasible with wheels currently, see update below) and so on.
Finally of course, check that everything gets removed cleanly on uninstall. It's a bit unnecessary when working with virtual environments, but very important in case of a global installation.
Update
Looks like your use case requires a custom step at install time of your package (since the location of the Python interpreter binary relative to sys.prefix cannot be known in advance). This cannot be done currently with wheels. You have seen it yourself in this discussion.
Knowing this, my recommendation would be to follow the advice from Jan Vlcinsky in his comment for his answer to this question:
Post install script after installing a wheel.
Add an extra setuptools console entry point to your package (let's call it bpyconfigure); a minimal sketch of the corresponding setup.py configuration follows this list.
Instruct the users of your package to run it immediately after installing your package (pip install bpy && bpyconfigure).
The purpose of bpyconfigure should be clearly stated (in the documentation and maybe also as a notice shown in the console right after starting bpyconfigure) since it would write into locations of the file system where pip install does not usually write.
bpyconfigure should figure out where the Python interpreter is, and where to write the extra data.
The extra data to write should be packaged as package_data, so that it can be found with pkg_resources.
Of course bpyconfigure --uninstall should be available as well!
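As a sketch, the entry point and the packaged data could be declared roughly like this in setup.py (bpyconfigure, bpy.configure:main and the data layout are placeholders following the naming above, not a tested configuration):

from setuptools import setup, find_packages

setup(
    name="bpy",
    version="2.79",
    packages=find_packages(),
    include_package_data=True,
    # Ship the extra files inside the package so pkg_resources can locate them later.
    package_data={"bpy": ["2.79/*"]},   # hypothetical layout of the packaged data
    entry_points={
        "console_scripts": [
            # Hypothetical module and function implementing the post-install step.
            "bpyconfigure = bpy.configure:main",
        ],
    },
)

bpy.configure:main would then locate the running interpreter (for example via sys.executable), work out the target directory next to it, and copy the packaged 2.79 data there; with --uninstall it would remove it again.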
I have a single Python file which is supposed to take a bunch of inputs on the command line.
For eg: python script.py "string_1" "string_2"
I also have a bunch of dependencies including pandas, datetime and even Python3.
I want to package all this code so that anyone can install the package along with the dependencies (in a directory or so) and then just call the script/module in the above manner, without having to actually go into a Python interpreter.
I tried using the python-packaging resource, but with that I would need to go into the interpreter, right ?
I found a good article today that explains quite well the procedure: https://medium.com/dreamcatcher-its-blog/making-an-stand-alone-executable-from-a-python-script-using-pyinstaller-d1df9170e263
pyinstaller --onefile <script.py> is the tl;dr on Linux. On Windows you also need py2exe.
If you can rely on a base install of Python being present already, then it's worth looking at Python's zipapp module, introduced in Python 3.5: https://docs.python.org/3/library/zipapp.html#creating-standalone-applications-with-zipapp For background info see PEP 441: https://www.python.org/dev/peps/pep-0441/
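A minimal zipapp workflow could look something like this, assuming script.py defines a main() function (the directory and file names are placeholders):

mkdir myapp
cp script.py myapp/
python -m pip install pandas --target myapp    # bundle the dependencies into the app directory
python -m zipapp myapp -m "script:main" -o myapp.pyz
python myapp.pyz "string_1" "string_2"

The resulting myapp.pyz is a single file that only needs an existing Python interpreter to run.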
Also there is a project called Shiv which adds some extra abilities to the zipapp module bundled in python3.5
https://shiv.readthedocs.io/en/latest/
Have a look at pex (https://pex.readthedocs.io/en/stable/). It wraps up your python scripts, files, dependencies, etc into a single executable. You still need the python interpreter installed, but it includes everything else.
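For example, assuming the project has a setup.py of its own and exposes a main() function in a module called myscript (both placeholders), something like this should produce one self-contained file:

pex . pandas -e myscript:main -o myapp.pex
./myapp.pex "string_1" "string_2"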
So I've put together some Python bindings to some legacy Fortran code. They work, I can install this with the usual python setup.py install, but I would like to write some nosetests to check things using e.g. travis-ci.
I'm not sure how one goes about doing this. If one installs the package, and the bindings are available, then obviously you can just import the different methods and classes and use them in tests, but what if the package isn't installed? I guess that a pointer to some examples would be worthwhile.
I have a C++ library (we'll call it Example in the following) for which I wrote Python bindings using the boost.python library. This Python-wrapped library will be called pyExample. The entire project is built using CMake and the resulting Python-wrapped library is a file named libpyExample.so.
When I use the Python bindings from a Python script located in the same directory as libpyExample.so, I simply have to write:
import libpyExample
libpyExample.hello_world()
and this executes a hello_world() function exposed by the wrapping process.
What I want to do
For convenience, I would like my pyExample library to be available from anywhere simply using the command
import pyExample
I also want pyExample to be easily installable in any virtualenv in just one command. So I thought a convenient process would be to use setuptools to make that happen. That would therefore imply:
Making libpyExample.so visible for any Python script
Changing the name under which the module is accessed
I did find many things about compiling C++ extensions with setuptools, but nothing about packaging a pre-compiled C++ extension. Is what I want to do even possible?
What I do not want to do
I don't want to build the pyExample library with setuptools, I would like to avoid modifying the existing project too much. The CMake build is just fine, I can retrieve the libpyExample.so file very easily.
If I understand your question correctly, you have the following situation:
you have an existing CMake-based build of a C++ library with Python bindings
you want to package this library with setuptools
The latter then allows you to call python setup.py install --user, which installs the lib in the site-packages directory and makes it available from every path in your system.
What you want is possible, if you overload the classes that setuptools uses to build extensions, so that those classes actually call your CMake build system. This is not trivial, but you can find a working example here, provided by the pybind11 project:
https://github.com/pybind/cmake_example
Have a look at its setup.py; you will see how the classes build_ext and Extension are inherited from and modified to execute the CMake build.
This should work for you out of the box, or with little modification if your build requires special -D flags to be set.
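In condensed form, the approach looks roughly like the sketch below (a simplified paraphrase of the cmake_example idea, not the exact upstream code; it assumes cmake is on PATH and a CMakeLists.txt sits next to setup.py):

import os
import subprocess
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext

class CMakeExtension(Extension):
    def __init__(self, name, sourcedir=""):
        # No source files listed here: the real build is delegated to CMake.
        super().__init__(name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)

class CMakeBuild(build_ext):
    def build_extension(self, ext):
        # Directory where setuptools expects the built extension to end up.
        extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
        os.makedirs(self.build_temp, exist_ok=True)
        subprocess.check_call(
            ["cmake", ext.sourcedir, "-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=" + extdir],
            cwd=self.build_temp)
        subprocess.check_call(["cmake", "--build", "."], cwd=self.build_temp)

setup(
    name="pyExample",
    version="0.1.0",
    ext_modules=[CMakeExtension("pyExample")],
    cmdclass={"build_ext": CMakeBuild},
)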
I hope this helps!
I'm using setuptools to install a Python module that I'm working on. In addition to numpy, scipy, ..., whose presence I can assure with install_requires = [...], my module also depends on a Python module - let's call it specialmodule - that is a Python interface to a program that is neither an egg, nor a single .py-file or a VCS repo (so Dependencies that aren’t in PyPI is not applicable). The program is written in C++ and has a Python interface, and can either be built from source after cloning from git, or obtained as a tar archive.
Is there a way to use setuptools to check the existence of this module (which is in PYTHONPATH), and if it can not be found, display some message to the user that the module is missing (and if possible, also some instructions on how to get it)?
Edit: Also, if there is a more elegant way to do this with a different approach than with setuptools, I'd be glad to hear! But I would really like to check directly on installation, not during runtime of my module.
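For what it's worth, one straightforward way to perform such a check at the top of setup.py, before setup() is called, is sketched below; specialmodule and the other names are placeholders:

import importlib.util
import sys

from setuptools import setup

# Abort the installation early if the special dependency cannot be found on sys.path
# (PYTHONPATH entries are part of sys.path).
if importlib.util.find_spec("specialmodule") is None:
    sys.exit(
        "Error: 'specialmodule' was not found on PYTHONPATH.\n"
        "Please build it from source or unpack the tar archive before installing this module."
    )

setup(
    name="mymodule",                     # placeholder
    version="0.1.0",
    install_requires=["numpy", "scipy"],
)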