boost module in Python 2.7? - python

I am trying to debug a file for a project I am working on, and the first thing I made sure to do was install/build all of the modules that the file imports. This is the first line of the file:
from scitbx.array_family import flex
which in turn loads flex.py, whose first lines are:
from __future__ import division
import boost.optional # import dependency
import boost.std_pair # import dependency
import boost.python
I entered the commands in IPython individually and got stuck on importing boost.optional. Since they all come from the same module, I tried searching for a module named boost.
I found the site: http://www.boost.org/doc/libs/1_57_0/more/getting_started/unix-variants.html
and installed the related .bz2 file in the same directory as my other modules to make sure it is within sys.path. However, I still can't get IPython to import anything. Am I completely off base in my approach, or is there some other boost module that I can't find? I should mention that I am a complete novice with computers and am learning as I go along. Any advice is much appreciated!

The library you have installed is called Boost. It is a collection of C++ libraries, one of which is Boost.Python. However, this library doesn't provide Python modules that you can import directly - it doesn't provide boost.optional. Instead it enables interoperability between Python and C++: you can write a C++ library using Boost.Python that can then be used from a normal Python interpreter.
In your case boost.optional is provided by the CCTBX collection of software, which does depend on Boost and Boost.Python. So you are not too far off. This thread in the mailing list covers your error message and some potential solutions.
Essentially you need to use the custom cctbx.python command (or scitbx.python; they are equivalent) to run Python, as this sets PYTHONPATH correctly for their requirements. It's also documented on this page.
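For example, a minimal smoke test, assuming a working CCTBX installation (run it with cctbx.python rather than plain python; the file name is a placeholder):
# check_imports.py -- run with: cctbx.python check_imports.py
# The cctbx.python dispatcher sets PYTHONPATH so these imports resolve.
import boost.python                   # provided by CCTBX, not by the plain Boost C++ libs
from scitbx.array_family import flex
print(flex.double([1.0, 2.0, 3.0]))   # constructing a small flex array proves the bindings load
If the same imports fail under plain python but work under cctbx.python, the environment is set up correctly.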

Related

Python sys.path vs import

What I wish to understand is what counts as good or bad practice, and why, when it comes to imports. I want to know the community's agreed-upon view on the matter, if one exists in a PEP document or similar.
What I see normally is that people have a Python environment, use conda/pip to install packages, and all the code needs to do is use "import X" (and variants). In my current understanding this is the right way to do things.
Whenever Python interacts with C++ at my firm, though, it always ends up requiring sys.path and absolute imports (though we have some standardized paths to use as a "base" and usually define relative paths from those).
There are 2 major cases:
A Python wrapper for a C++ library (pybind/ctypes/etc.): in this case the user's Python code must use sys.path to specify where the C++ library to import is.
A project that establishes communication between Python and C++ (say a C++ server, Python clients, TCP connections, and flatbuffer serialization between the two): here the Python code lives together with the C++ code, and some Python files end up using sys.path to import Python modules from the same project that live in a different directory. Essentially we deploy the Python together with the C++ through our C++ deployment procedure.
I am not fully sure we can do better for case #1, but case #2 seems quite unnecessary, forced on us basically by the choice not to deploy the Python code through a package manager. That choice ends up forcing us to use sys.path in both the library and the user code.
This seems very bad to me, because this way of doing things doesn't let us fully manage our Python environments (we have libraries that we import even though they are not technically installed in the environment), and that is probably why I have a negative view of using sys.path for imports. But I need to find out if I'm right, and if so I need some official (or nearly official) documents to support my case if I'm to propose fixes to our procedures.
For your scenario 2, my understanding is you have some C++ and accompanying python in one place, and a separate python project wants to import that python.
Could you structure the imported python as a package and install it into your environment with pip install path/to/package? If it's a package that you'll continue to edit, you can add the -e flag to pip install so that your imports pick up the latest code whenever the package changes.
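If it helps, here is a minimal sketch of that layout; every name is a placeholder, not something from your project:
# setup.py at the root of the shared Python code
from setuptools import setup, find_packages

setup(
    name="cpp_bridge",           # hypothetical package name
    version="0.1.0",
    packages=find_packages(),    # picks up cpp_bridge/ and its subpackages
)
After that, pip install -e path/to/package makes import cpp_bridge work anywhere in the environment, with edits to the source visible immediately and no sys.path manipulation in user code.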

What issues with future 3rd-party packages could be expected if removing unneeded Python3 core modules

I have an environment with some extreme constraints that require me to reduce the size of a planned Python 3.8.1 installation. The OS is not connected to the internet, and a user will never open an interactive shell or attach a debugger.
There are of course lots of ways to do this, and one of the ways I am exploring is to remove some core modules, for example python3-email. I am concerned that there are 3rd-party packages that future developers may include in their apps that have unused but required dependencies on core Python features. For example, if python3-email is missing, what 3rd-party packages might not work that one would expect to? If a developer decides to use a logging package that contains an unreferenced EmailLogger class in a referenced module, it will break simply because import email appears at the top.
Do package design requirements or guidelines exist that address this?
It's an interesting question, but it is too broad to be cleanly answered here. In short, the Python standard library is expected to always be there, even though it is sometimes broken up into multiple parts (on Debian, for example). But as you say yourself, you don't know what your requirements are, since you don't yet know what future packages will run on this interpreter, so this is impossible to answer in general. One thing you could do is run something like modulefinder on the future code before letting it run on that constrained Python interpreter.
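For instance, a minimal modulefinder sketch (app.py is a placeholder for whatever future code you want to vet):
from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script("app.py")            # analyze the placeholder script

for name in sorted(finder.modules):    # every module the script would pull in
    print(name)
print("not found:", sorted(finder.badmodules))  # imports that failed to resolve
Comparing that list against the modules you plan to remove shows conflicts before anything runs on the stripped interpreter.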
I was able to get to a solution. The issue was best described to me as cascading imports. It is possible to stop a module from being loaded by adding an entry to sys.modules. For example, when importing the asyncio module, the ssl and _ssl modules are loaded even though nothing outside of ssl needs them. This can be stopped with the following code. The effect can be verified both by the Python process being about 3 MB smaller and by using module load hooks to watch each module as it loads:
import importhook  # third-party "importhook" package, provides import callbacks
import sys

# A None entry in sys.modules makes any later "import ssl" raise
# ImportError immediately, so asyncio's optional ssl support never loads.
sys.modules['ssl'] = None

@importhook.on_import(importhook.ANY_MODULE)
def on_any_import(module):
    print(module.__spec__.name)
    assert module.__spec__.name not in ['ssl', '_ssl']

import asyncio
For my original question about 3rd-party design guidelines, some recommend placing import statements within the class or function rather than at the module level; however, this is not routinely done.
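For illustration, a sketch of that deferred-import pattern, reusing the hypothetical EmailLogger from the question:
class EmailLogger:
    def send(self, text):
        # Deferred import: the email module is only loaded if this method
        # is actually called, so a build without python3-email can still
        # import the module that defines this class.
        from email.message import EmailMessage
        msg = EmailMessage()
        msg.set_content(text)
        return msg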

Why doesn't this package work in python3?

I'm curious, for a small custom python package.
If I run the Python file that imports and uses a function from the package in Python 2, everything works fine. If I run the file in Python 3, though, it fails to import functions from the package.
from cust_package import this_function
ImportError: cannot import name 'this_function'
The functions in the package seem to be Python 3 compatible, and I ran futurize on them just in case. Is the issue to do with some sort of labeling of the package/Python version? The package is tiny: two .py files of ~8 functions each.
Appreciate the help, thanks!
The default dir() mechanism behaves differently with different types of objects, as it attempts to produce the most relevant, rather than complete, information (from the dir() documentation).
There are existing questions covering how to list all the available functions in a module, if that is what you need.
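A quick way to check which names Python 3 can actually see in the package (cust_package is the name from the question):
import cust_package
print(dir(cust_package))  # names the interpreter can actually import
If this_function is missing from that list under Python 3 but present under Python 2, the problem is in how the package exposes its names, not in the calling code.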

central path for python modules

I am starting to convert a lot of C stuff to Python 3.
In C I defined a directory called "Toolbox", where I put all the functions I needed in different programs, as so-called libraries.
To use a specific library I just had to add the line
#include "/home/User/Toolbox/VectorFunctions.h"
into my source. So I was able to use the same library in different sources.
In Python I tried to write some Toolbox functions and pull them into the source with import VectorFunctions, which works as long as the file VectorFunctions.py is in the same directory as the source.
Is there a way (I think there must be one...) to tell Python that VectorFunctions.py is located in a different directory, e.g. /home/User/Python_Toolbox?
Thanks for any comment!
What I would do is organize these toolbox functions into one installable Python package bruno_toolbox, with its setup.py, install it in development mode to the system site packages using python setup.py develop, and then use bruno_toolbox like any other package on the system, everywhere. Then, if that package feels useful, I'd publish it to PyPI for the benefit of everyone.
You can use the Python path. Write this code at the beginning of your program:
import sys
sys.path.append('/home/User/Python_Toolbox')
If you have VectorFunctions.py in that folder you can import it:
import VectorFunctions

Is there a way to import Python3's multiprocessing module inside a Sublime Text 3 plugin?

I'm making an ST3 plugin in my OSX dev environment, but according to the unofficial docs:
Sublime Text ships with a trimmed down standard library. The Tkinter, multiprocessing and sqlite3 modules are among the missing ones.
Even if it's not bundled with ST3, is there a way I can still import multiprocessing (docs)? Is there a way I can import it as a "standalone" module inside my plugin dir?
Assuming you intend this program for wide distribution, it's probably unreasonable to expect users to install Python 3 in a standard location in order to use your work. However, depending on Python's redistribution license, you may be able to get away with just bundling multiprocessing with your plugin. If your main my_awesome_plugin.py file is in the root of your plugin directory, put the full multiprocessing directory in as a subdirectory and you should be able to import multiprocessing as mp just fine. Fortunately it's pure Python, so you don't have to worry about platform-specific distributions.
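A minimal sketch of that drop-in approach, assuming the copied package lives in a lib/ subdirectory of the plugin (the layout is hypothetical):
import os
import sys

# Make the vendored copy importable before Python falls back to the
# trimmed standard library that ships with Sublime Text.
PLUGIN_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(PLUGIN_DIR, "lib"))

import multiprocessing as mp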
EDIT
OK, my bad. After a little more exploring around the ...lib/python3.3/ hierarchy, it turns out there is a compiled _multiprocessing.so library in there to supplement the pure Python version. So, this will be a little more difficult than just dropping in the multiprocessing directory. I still think it might be doable, as you could theoretically include the compiled library for the three different ST3 platforms, then put in some logic to determine what OS the user is running. However, I'm not as up to date on the Python internals as I would need to be to properly answer this question, so I'll defer to someone else on exactly how that could work.
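A rough sketch of that per-platform selection, with hypothetical directory names for the three bundled builds:
import os
import sys

# Pick the vendored build matching the user's platform.
if sys.platform == "darwin":
    suffix = "osx"
elif sys.platform == "win32":
    suffix = "windows"
else:
    suffix = "linux"

PLUGIN_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(PLUGIN_DIR, "lib", suffix))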
