I'm trying to write a plugin for Sublime Text 3.
I have to use several third party packages in my code. I have managed to get the code working by manually copying the packages into /home/user/.config/sublime-text-3/Packages/User/, then I used relative imports to get to the needed code. How would I distribute the plugin to the end users? Telling them to copy the needed dependencies to the appropriate location is certainly not the way to go. How are 3rd party modules supposed to be used properly with Sublime Text plugins? I can't find any documentation online; all I see is the recommendation to put the modules in the folder.
Sublime uses its own embedded Python interpreter (currently Python 3.3.6, although the next version will also support Python 3.8), and as such it completely ignores any version of Python that you may or may not have installed on your system, as well as any libraries that are installed for that version.
For that reason, if you want to use external modules (hereafter dependencies) you need to do extra work. There are a variety of ways to accomplish this, each with their own set of pros and cons.
The following lists the various ways that you can achieve this; all of them require a bit of understanding about how modules work in Python in order to follow what's going on. By and large, except for the paths involved, there's nothing too "Sublime Text" about the mechanisms at play.
NOTE: The below is accurate as of the time of this answer. However there are plans for Package Control to change how it works with dependencies that are forthcoming that may change some aspect of this.
This is related to the upcoming version of Sublime supporting multiple versions of Python (and the manner in which it supports them) which the current Package Control mechanism does not support.
It's unclear at the moment if the change will bring a new way to specify dependencies or if only the inner workings of how the dependencies are installed will change. The existing mechanism may remain in place regardless just for backwards compatibility, however.
All roads to accessing a Python dependency from a Sublime plugin involve putting the code for it in a place where the Python interpreter is going to look for it. This is similar to how standard Python would do things, except that locations that are checked are contained within the area that Sublime uses to store your configuration (referred to as the Data directory) and instead of a standalone Python interpreter, Python is running in the plugin host.
Populate the library into the Lib folder
Since version 3.0 (build 3143), Sublime will create a folder named Lib in the data directory and inside of it a directory based on the name of the Python version. If you use Preferences > Browse Packages and go up one folder level, you'll see Lib, and inside of it a folder named e.g. python3.3 (or if you're using a newer build, python33 and python38).
Those directories are directly on the Python sys.path by default, so anything placed inside of them will be immediately available to any plugin just as a normal Python library (or any of those built in) would be. You could consider these folders to be something akin to the site-packages folder in standard Python.
So, any method by which you could install a standard Python library can be used so long as the result is files ending up in this folder. You could for example install a library via pip and then manually copy the files to that location from site-packages, manually install from sources, etc.
Lib/python3.3/
|-- librarya
| `-- file1.py
|-- libraryb
| `-- file2.py
`-- singlefile.py
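As one hedged example, you could run pip with your system Python (outside of Sublime) and point it at this folder via --target; the path below is the Linux default and the requests library is purely illustrative, and the installed version must still be compatible with Sublime's Python 3.3:

import subprocess
import sys

# Sublime's Lib folder for its bundled 3.3 interpreter; adjust the path
# for your OS and build.
LIB_DIR = "/home/user/.config/sublime-text-3/Lib/python3.3"

# Ask pip to place the library's files directly into that folder.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--target", LIB_DIR, "requests"],
    check=True,
)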
Version restrictions apply here; the dependency that you want to use must support the version of Python that Sublime is using, or it won't work. This is particularly important for Python libraries with a native component (e.g. a .dll, .so or .dylib), which may require hand compiling the code.
This method is not automatic; you would need to do it to use your package locally, and anyone who wants to use your package would need to do it as well. Since Sublime is currently using an older version of Python, it can also be problematic to obtain a correct version of libraries.
In the future, Package Control will install dependencies in this location (Will added the folder specifically for this purpose during the run-up to version 3.0), but as of the time I'm writing this answer, that is not yet the case.
Vendor your dependencies directly inside of your own package
The Packages folder is on the sys.path by default as well; this is how Sublime finds and loads packages. This is true of both the physical Packages folder, as well as the "virtual" packages folder that contains the contents of sublime-package files.
For example, one can access the class that provides the exec command via:
from Default.exec import ExecCommand
This will work even though the exec.py file is actually stored in Default.sublime-package in the Sublime Text install folder and not physically present in the Packages folder.
As a result of this, you can vendor any dependencies that you require directly inside of your own package. Here this could be the User package or any other package that you're creating.
It's important to note that Sublime will treat any Python file in the top level of a package as a plugin and try to load it as one. Hence it's important that if you go this route you create a sub-folder in your package and put the library in there.
MyPackage/
|-- alibrary
| `-- code.py
`-- my_plugin.py
With this structure, you can access the module directly:
import MyPackage.alibrary
from MyPackage.alibrary import someSymbol
Not all Python modules lend themselves to this method directly without modification; some code changes in the dependency may be required in order to allow different parts of the library to see other parts of itself, for example if it doesn't use relative imports to get at sibling files. License restrictions may also get in the way of this, depending on the library that you're using.
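For instance, suppose alibrary also contained a sibling module util.py (a hypothetical name); inside MyPackage/alibrary/code.py, an absolute import of it would fail because the top-level name is now MyPackage:

# MyPackage/alibrary/code.py
# import alibrary.util           # breaks once vendored: "alibrary" is no longer top-level
from . import util               # relative import resolves against MyPackage.alibrary
from .util import some_helper    # some_helper is an assumed illustrative name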
On the other hand, this directly locks the version of the library that you're using to exactly the version that you tested with, which ensures that you won't be in for any undue surprises further on down the line.
Using this method, anything you do to distribute your package will automatically also distribute the vendored library that's contained inside. So if you're distributing by Package Control, you don't need to do anything special and it will Just Work™.
Modify the sys.path to point to a custom location
The Python that's embedded into Sublime is still standard Python, so if desired you can manually manipulate the sys.path that describes what folders to look for packages in so that it will look in a place of your choosing in addition to the standard locations that Sublime sets up automatically.
This is generally not a good idea, since if done incorrectly things can go pear-shaped quickly. It also still requires you to manually install libraries somewhere yourself first, and in that case you're better off using the Lib folder as outlined above, which is already on the sys.path.
I would consider this method an advanced solution, one you might use for testing purposes during development but otherwise not something that would be user-facing. If you plan to distribute your package via Package Control, the review of your package would likely kick back a manipulation of the sys.path with a request to use another method.
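If you do go this route during development, a minimal sketch might look like the following (the folder name is an assumption; the guard matters because plugins can be reloaded):

import os
import sys

# Hypothetical development-only library location.
DEV_LIBS = os.path.expanduser("~/sublime-dev-libs")

# Plugins can be reloaded, so avoid adding the same entry twice.
if DEV_LIBS not in sys.path:
    sys.path.insert(0, DEV_LIBS)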
Use Package Control's Dependency system (and the dependency exists)
Package Control contains a dependency mechanism that uses a combination of the two prior methods to provide a way to install a dependency automatically. There is a list of available dependencies as well, though that list may not be complete.
If the dependency that you're interested in using is already available, you're good to go. There are two different ways to declare that your package requires one or more dependencies.
NOTE: Package Control doesn't currently support dependencies of dependencies; if a dependency requires that another library also be installed, you need to explicitly mention them both yourself.
The first involves adding a dependencies key to the entry for your package in the package control channel file. This is a step that you'd take at the point where you're adding your package to Package Control, which is something that's outside the scope of this answer.
While you're developing your package (or if you decide that you don't want to distribute your package via Package Control when you're done), then you can instead add a dependencies.json file into the root of your package (an example dependencies.json file is available to illustrate this).
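As a rough sketch, a minimal dependencies.json looks like the following, where the outer key selects the Sublime Text version, the inner key selects the platform, and the list names the required dependencies (bs4 here is purely illustrative):

{
    "*": {
        "*": [
            "bs4"
        ]
    }
}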
Once you do that, you can choose Package Control: Satisfy Dependencies from the Command Palette to have Package Control download and install the dependency for you (if needed).
This step is automatic if your package is being distributed and installed by Package Control; otherwise you need to tell your users to take this step once they install the package.
Use Package Control's Dependency system (but the dependency does not exist)
The method that Package Control uses to install dependencies is, as outlined in the note at the top of this answer, subject to change at some point in the (possibly near) future. This may affect the instructions here. The overall mechanism may remain the same as far as setup is concerned, with only the locations of the installation changing, but that remains to be seen currently.
Package Control installs dependencies via a special combination of vendoring and also manipulation of the sys.path to allow things to be found. In order to do so, it requires that you lay out your dependency in a particular way and provide some extra metadata as well.
The layout for the package that contains the dependency when you're building it would have a structure similar to the following:
Packages/my_dependency/
├── .sublime-dependency
└── prefix
└── my_dependency
└── file.py
Package Control installs a dependency as a Package, and since Sublime treats every Python file in the root of a package as a plugin, the code for the dependency is not kept in the top level of the package. As seen above, the actual content of the dependency is stored inside of the folder labeled as prefix above (more on that in a second).
When the dependency is installed, Package Control adds an entry to its special 0_package_control_loader package that causes the prefix folder to be added to the sys.path, which makes everything inside of it available to import statements as normal. This is why there's an inherent duplication of the name of the library (my_dependency in this example).
Regarding the prefix folder, this is not actually named that and instead has a special name that determines what combination of Sublime Text version, platform and architecture the dependency is available on (important for libraries that contain binaries, for example).
The name of the prefix folder actually follows the form {st_version}_{os}_{arch}, {st_version}_{os}, {st_version} or all. {st_version} can be st2 or st3, {os} can be windows, linux or osx and {arch} can be x32 or x64.
Thus you could say that your dependency supports only st3, st3_linux, st3_windows_x64 or any combination thereof. For something with native code you may specify several different versions by having multiple folders, though commonly all is used when the dependency contains pure Python code that will work regardless of the Sublime version, OS or architecture.
In this example, if we assume that the prefix folder is named all because my_dependency is pure Python, then the result of installing this dependency would be that Packages/my_dependency/all would be added to the sys.path, meaning that if you import my_dependency you're getting the code from inside of that folder.
During development (or if you don't want to distribute your dependency via Package Control), you create a .sublime-dependency file in the root of the package as shown above. This should be a text file with a single line that contains a 2 digit number (e.g. 01 or 50). This controls in what order each installed dependency will get added to the sys.path. You'd typically pick a lower number if your dependency has no other dependencies and a higher value if it does (so that it gets injected after those).
Once you have the initial dependency laid out in the correct format in the Packages folder, you would use the command Package Control: Install Local Dependency from the Command Palette, and then select the name of your dependency.
This causes Package Control to "install" the dependency (i.e. update the 0_package_control_loader package) to make the dependency active. This step would normally be taken by Package Control automatically when it installs a dependency for the first time, so if you are also manually distributing your dependency you need to provide instructions to take this step.
Related
I maintain a Python utility that allows bpy to be installable as a Python module. Due to the size of the source code and the length of time it takes to download the libraries, I have chosen to provide this module as a wheel.
Unfortunately, platform differences and Blender runtime expectations makes support for this tricky at times.
Currently, one of my big goals is to get the Blender addon scripts directory to install into the correct location. The directory (simply named after the version of the Blender API) has to exist in the same directory as the Python executable.
Unfortunately, the way that setuptools works (or at least the way that I have it configured), the 2.79 directory is not always placed as a sibling to the Python executable. It fails on Windows platforms outside of virtual environments.
However, I noticed in the setuptools documentation that you can specify eager_resources, which supposedly guarantees the location of extracted files.
https://setuptools.readthedocs.io/en/latest/setuptools.html#automatic-resource-extraction
https://setuptools.readthedocs.io/en/latest/pkg_resources.html#resource-extraction
There was a lot of hand-waving and jargon in the documentation, and zero examples. I'm really confused as to how to structure my setup.py file in order to guarantee the resource extraction. Currently, I just label the whole 2.79 directory as "scripts" in my setuptools Extension and ship it.
Is there a way to write my setup.py and package my module so as to guarantee the 2.79 directory's location is the same as the currently running python executable when someone runs
py -3.6.8-32 -m pip install bpy
Besides simply "hacking it in"? I was considering writing an install_requires module that would simply move it if possible, but that is meddling with the user's file system and kind of hacky. However, it's the route I am going to go if this proves impossible.
Here is the original issue for anyone interested.
https://github.com/TylerGubala/blenderpy/issues/13
My build process is identical to the process described in my answer here
https://stackoverflow.com/a/51575996/6767685
Maybe try the data_files option of distutils/setuptools.
You could start by adding data_files=[('mydata', ['setup.py'],)], to your setuptools.setup function call. Build a wheel, then install it and see if you can find mydata/setup.py somewhere in your sys.prefix.
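As a minimal sketch of that experiment (the project name is made up, and setup.py is only used as payload here because it's guaranteed to exist in the source tree):

from setuptools import setup

setup(
    name="example-data-files",   # illustrative project name
    version="0.0.1",
    # Copy setup.py into <prefix>/mydata when the package is installed.
    data_files=[("mydata", ["setup.py"])],
)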
In your case the difficult part will be to compute the actual target directory (mydata in this example). It will depend on the platform (Linux, Windows, etc.), if it's in a virtual environment or not, if it's a global or local install (not actually feasible with wheels currently, see update below) and so on.
Finally of course, check that everything gets removed cleanly on uninstall. It's a bit unnecessary when working with virtual environments, but very important in case of a global installation.
Update
Looks like your use case requires a custom step at install time of your package (since the location of the binary for the Python interpreter relative to sys.prefix cannot be known in advance). This cannot be done currently with wheels. You have seen it yourself in this discussion.
Knowing this, my recommendation would be to follow the advice from Jan Vlcinsky in his comment for his answer to this question:
Post install script after installing a wheel.
Add an extra setuptools console entry point to your package (let's call it bpyconfigure; a sketch of the declaration follows this list).
Instruct the users of your package to run it immediately after installing your package (pip install bpy && bpyconfigure).
The purpose of bpyconfigure should be clearly stated (in the documentation and maybe also as a notice shown in the console right after starting bpyconfigure) since it would write into locations of the file system where pip install does not usually write.
bpyconfigure should figure out where the Python interpreter is, and where to write the extra data.
The extra data to write should be packaged as package_data, so that it can be found with pkg_resources.
Of course bpyconfigure --uninstall should be available as well!
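A minimal sketch of the entry point declaration from the first step might look like this (bpy_post_install and its main function are assumed names, not part of the real project):

from setuptools import setup

setup(
    name="bpy",
    version="0.0.1",
    # Expose a bpyconfigure command that runs the post-install logic.
    entry_points={
        "console_scripts": [
            "bpyconfigure = bpy_post_install:main",
        ],
    },
)

After pip install, the bpyconfigure script lands in the environment's scripts directory alongside pip itself, so it can be run directly from the command line.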
Is there a way to convert a python package, i.e. a folder of python files, into a single file that can be copied and then directly imported into a python script without needing to run any extra shell commands? I know it is possible to zip all of the files and then unzip them from python when they are needed, but I'm hoping that there is a more elegant solution.
It's not totally clear what the question is. I could interpret it two ways.
If you are looking to manage the symbols from many modules in a more organized way:
You'll want to put an __init__.py file in your directory and make it a package. In it you can define the symbols for your package, and create a graceful import packagename behavior. Details on packages.
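For example, a hypothetical layout and __init__.py might look like this (all names are illustrative):

# packagename/
#     __init__.py
#     core.py

# packagename/__init__.py
from .core import main_function   # re-export: from packagename import main_function

__all__ = ["main_function"]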
If you are looking to make your code portable to another environment:
One way or the other, the package needs to be accessible in whatever environment it is run in. That means it either needs to be installed in the python environment (likely using pip), copied into a location that is in a subdirectory relative to the running code, or in a directory that is listed in the PYTHONPATH environment variable.
The most straightforward way to package up code and make it portable is to use setuptools to create a portable package that can be installed into any python environment. The manual page for Packaging Projects gives the details of how to go about building a package archive, and optionally uploading to PyPi for public distribution. If it is for private use, the resulting archive can be passed around without uploading it to the public repository.
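A minimal sketch of such a setup.py, assuming your code lives in one or more package directories (all names illustrative):

from setuptools import setup, find_packages

setup(
    name="myproject",           # illustrative name
    version="0.1.0",
    packages=find_packages(),   # pick up every package directory automatically
)

# Build an installable archive with:
#     python setup.py sdist
# The resulting file under dist/ can be shared and pip-installed anywhere.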
My project contains:
My own custom Python files
Unique package-specific generated Python code
Resources (e.g. binaries)
Dependencies on 3rd party modules (e.g. numpy)
The generated Python code makes things tricky, and separates this use case from a typical Python package where everyone gets the same code. I may create several packages to be distributed to different clients. Each package will have different/unique generated Python code, but use identical versions of my custom Python scripts and 3rd party dependencies. For example I may make a "package builder" script, which generates the unique Python code and bundles the dependencies together, depending on the builder arguments.
I want to distribute my Python scripts, including the resources and dependencies. The receiver of this package cannot download the 3rd party dependencies using a requirements.txt and pip; all dependencies and binaries must be included in this package.
The way I envision the client using this package is that they simply unzip the archive I provide, set their PYTHONPATH to the unzipped directory, and invoke my custom Python file to start the process.
If I'm going about this the wrong way I'd appreciate suggestions.
I need to ship a collection of Python programs that use multiple packages stored in a local Library directory: the goal is to avoid having users install packages before using my programs (the packages are shipped in the Library directory). What is the best way of importing the packages contained in Library?
I tried three methods, but none of them appears perfect: is there a simpler and more robust method, or is one of these methods the best one can do?
In the first method, the Library folder is simply added to the library path:
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'Library'))
import package_from_Library
The Library folder is put at the beginning so that the packages shipped with my programs have priority over the same modules installed by the user (this way I am sure that they have the correct version to work with my programs). This method also works when the Library folder is not in the current directory, which is good. However, this approach has drawbacks. Each and every one of my programs adds a copy of the same path to sys.path, which is a waste. In addition, all programs must contain the same three path-modifying lines, which goes against the Don't Repeat Yourself principle.
An improvement over the above problems consists in trying to add the Library path only once, by doing it in an imported module:
# In module add_Library_path:
import os
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'Library'))
and then to use, in each of my programs:
import add_Library_path
import package_from_Library
This way, thanks to the caching mechanism of CPython, the module add_Library_path is only run once, and the Library path is added only once to sys.path. However, a drawback of this approach is that import add_Library_path has an invisible side effect, and that the order of the imports matters: this makes the code less legible and more fragile. Also, this forces my distribution of programs to include an add_Library_path.py program that users will not use.
Python modules from Library can also be imported by making it a package (an empty __init__.py file stored inside), which allows one to do:
from Library import module_from_Library
However, this fails for packages in Library, as they might do something like from xlutils.filter import …, which breaks because xlutils is not found in sys.path. So, this method works only when Library contains modules, not packages.
All these methods have some drawback.
Is there a better way of shipping programs with a collection of packages (that they use) stored in a local Library directory? or is one of the methods above (method 1?) the best one can do?
PS: In my case, all the packages from Library are pure Python packages, but a more general solution that works for any operating system is best.
PPS: The goal is that the user be able to use my programs without having to install anything (beyond copying the directory I ship them regularly), like in the examples above.
PPPS: More precisely, the goal is to have the flexibility of easily updating both my collection of programs and their associated third-party packages from Library by having my users do a simple copy of a directory containing my programs and the Library folder of "hidden" third-party packages. (I do frequent updates, so I prefer not forcing the users to update their Python distribution too.)
Messing around with sys.path leads to pain... The modern package template and Distribute contain a vast array of information and were in part set up to solve your problem.
What I would do is to set up setup.py to install all your packages to a specific site-packages location, or, if you can, to the system's site-packages. In the former case, the local site-packages would then be added to the PYTHONPATH of the system/user. In the latter case, nothing needs to change.
You could use a batch file to set the python path as well. Or change the python executable to point to a shell script that contains a modified PYTHONPATH and then executes the python interpreter. The latter, of course, means that you have to have access to the user's machine, which you do not. However, if your users only run scripts and do not import your own libraries, you could use your own wrapper for scripts:
#!/path/to/my/python
And the /path/to/my/python script would be something like:
#!/bin/sh
# Prepend the bundled library path, then forward all arguments to the real interpreter.
PYTHONPATH=/whatever/lib/path:$PYTHONPATH /usr/bin/python "$@"
I think you should have a look at path import hooks, which allow you to modify Python's behaviour when searching for modules.
For example you could try to do something like kde's scriptengine does for python plugins[1].
It adds a special token to sys.path (like "<plasmaXXXXXX>", with XXXXXX being a random number just to avoid name collisions), and then when Python tries to import modules and can't find them in the other paths, it will call your importer, which can deal with it.
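A minimal sketch of that idea in modern Python might look like the following, with the token, module name, and source all made up for illustration:

import importlib.abc
import importlib.util
import sys

TOKEN = "<myplugins-123456>"               # sentinel sys.path entry
SOURCES = {"plugin_mod": "VALUE = 42\n"}   # hypothetical in-memory module sources

class TokenLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None                        # use the default module object
    def exec_module(self, module):
        exec(SOURCES[module.__name__], module.__dict__)

class TokenFinder(importlib.abc.PathEntryFinder):
    def find_spec(self, fullname, target=None):
        if fullname in SOURCES:
            return importlib.util.spec_from_loader(fullname, TokenLoader())
        return None                        # let other finders handle everything else

def token_hook(path_entry):
    if path_entry == TOKEN:
        return TokenFinder()
    raise ImportError                      # not our entry; decline it

sys.path_hooks.append(token_hook)
sys.path.append(TOKEN)

import plugin_mod                          # resolved through the custom finder
print(plugin_mod.VALUE)                    # -> 42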
A simpler alternative is to have a main script used as a launcher which simply adds the path to sys.path and executes the target file (so that you can safely avoid putting the sys.path.append(...) line in every file).
Yet another alternative, which works on Python 2.6+, would be to install the library under the per-user site-packages directory.
[1] You can find the source code under /usr/share/kde4/apps/plasma_scriptengine_python in a linux installation with kde.
I am writing a program in python to be sent to other people who are running the same python version; however, there are some 3rd party modules that need to be installed to use it.
Is there a way to compile it into a .pyc (I only say pyc because it's a python compiled file) that has all the dependent modules inside it as well?
So they can run the programme without needing to install the modules separately?
Edit:
Sorry if it wasn't clear, but I am aware of things such as cx_freeze etc.; what I'm trying to get is just a single python file.
So they can just type "python myapp.py" and then it will run. No installation of anything. As if all the module code were in my .py file.
If you are on python 2.3 or later and your dependencies are pure python:
If you don't want to go the setuptools or distutils routes, you can provide a zip file with the .pyc files for your code and all of its dependencies. You will have to do a little work to make any complex pathing inside the zip file available (if the dependencies are just lying around at the root of the zip, this is not necessary). Then just add the zip location to your path and it should work just as if the dependency files had been installed.
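A hedged sketch of the zip approach (deps.zip and the module name are assumptions):

import sys

# zipimport support is built in: putting the archive on sys.path is enough,
# provided the modules sit at the root of the zip. For nested layouts you can
# use entries like "deps.zip/subdir" instead.
sys.path.insert(0, "deps.zip")

import some_vendored_module   # hypothetical module stored in the archive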
If your dependencies include .pyds or other binary dependencies you'll probably have to fall back on distutils.
You can simply include .pyc files for the libraries required, but no - a .pyc cannot work as a container for multiple files (unless you collect all the source into one .py file and then compile it).
It sounds like what you're after is the ability for your end users to run one command, e.g. install my_custom_package_and_all_required_dependencies, and have it assemble everything it needs.
This is a perfect use case for distutils, with which you can make manifests for your own code that link out to external dependencies. If your 3rd party modules are available publicly in a standard format (they should be, and if they're not, it's pretty easy to package them yourself), then this approach has the benefit of allowing you to very easily change what versions of 3rd party libraries your code runs against (see this section of the above linked doc). If you're dead set on packaging others' code with your own, you can always include the required files in the .egg you create with distutils.
Two options:
Build a package that will install the dependencies for them (I don't recommend this if the only dependencies are python packages that are installed with pip)
Use virtual environments. You use an existing python on their system but python modules are installed into the virtualenv.
Or I suppose you could just punt: create a shell script that installs them, and tell them to run it once before they run your stuff.