My goal is to package a fully functional Python interpreter along with all of its dependencies. A couple of quick thoughts up front:
I can't install via pip/requirements.txt on the target hosts due to firewall restrictions. I will only have access to pip install on the build system.
Maintaining an internal repo wouldn't be feasible
The code to be distributed will be a number of tools/utilities, not a single script (so it's not as straightforward to use 'freeze' utilities)
I'm attempting to avoid third-party tools that don't have a strong community.
I'm not using the version of Python packaged with the OS (Ubuntu ships 3.5; we'll likely be using 3.6)
My plan was as follows:
Create a Docker container for the target OS (Ubuntu)
Download Python source and manually build it with a prefix of /build
Use the full path to Python to install dependencies with pip. For example:
/build/bin/python3.6 -m pip install -r requirements.txt
Tar up the /build directory with the Python runtime and all dependencies
All scripts and utilities will use the absolute path to the interpreter such as:
/opt/python/bin/python3.6
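For concreteness, a minimal sketch of those build steps inside the container might look like the following; the exact 3.6.x tarball version and the apt build-dependency list are assumptions, so adjust to taste:
# Assumed build dependencies; your requirements.txt may need more dev headers
apt-get update && apt-get install -y build-essential wget libssl-dev zlib1g-dev libffi-dev
# Python 3.6.8 is a placeholder version
wget https://www.python.org/ftp/python/3.6.8/Python-3.6.8.tgz
tar xzf Python-3.6.8.tgz && cd Python-3.6.8
./configure --prefix=/build
make && make install                        # installs python3.6 and pip under /build
/build/bin/python3.6 -m pip install -r requirements.txt
tar czf python36-bundle.tar.gz -C / build   # the tarball to ship to target hosts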
Does anyone see any glaring issues with this plan? I was able to build successfully, move the package to another host, and import all of the pip-installed dependencies (requests, numpy, psutil, etc.)
Related
So I have published a conda package (link).
This package contains C extensions (generated from Cython code), which need to be compiled when the package is installed. My problem is that none of the extensions are compiled when running the install command:
conda install -c nicolashug scikit-surprise
Compiling the extensions can be done by simply running
python setup.py install
which is exactly what pip does. The package is on PyPI and works fine.
As far as I understand, this setup.py command is only called when I build the conda package using conda build: the meta.yaml file (created with conda skeleton) contains
build:
  script: python setup.py install --single-version-externally-managed --record=record.txt
But I need this to be done when the package is installed, not built.
Reading the conda docs, it looks like the install process is merely a matter of copying files:
Installing the files of a conda package into an environment can be thought of as changing the directory to an environment, and then downloading and extracting the .zip file and its dependencies
That would mean I would have to build the package for all platforms and architectures, and then upload them to Anaconda Cloud... which is impossible for me.
So, is there a way to build the package when it is installed, just like pip does?
As far as I know, there is no way to have the compilation happen on the user's machine when installing a conda package. Indeed, the whole idea of a conda package is that you do the compiling so that I don't have to on my machine, and all that's distributed is the compiled library. On Windows in particular, setting up compilers so they work properly (with Python) is a big big PITA, which is one of the biggest reasons for conda (and also wheels installed by pip).
If you don't have access to a particular OS directly, you can use Continuous Integration (CI) services such as AppVeyor (Windows), Travis CI (Linux/macOS), or CircleCI (Linux/macOS) to build packages and upload them to Anaconda Cloud (or to PyPI, for that matter). These services integrate directly with GitHub and other code-sharing services, and are generally free for FOSS projects. That way, you can build packages on each commit, on each tag, or whatever other variation you desire.
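As a rough sketch, the core of such a CI job might run the following; the recipe path and the ANACONDA_TOKEN variable are placeholders:
# Hypothetical CI build step; ./recipe is the directory holding the meta.yaml shown above
conda install -y conda-build anaconda-client
conda build ./recipe
anaconda -t $ANACONDA_TOKEN upload $(conda build ./recipe --output)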
In the end, you may save more time by setting up these services, because you won't have to provide compiler support for users who can't install a source package from PyPI.
I have a ROS package that I want to distribute. It brings in some dependencies that can't be installed via pip or a package manager; they have to be downloaded and installed manually. I wrote an installation script which works fine, but I want the whole process to be autonomous. In other words, I want all dependencies installed with rosdep if possible.
Ideal implementation:
- Create an external package which has the necessary CMakeLists file
- Run catkin_make to automatically download and install the libraries with my script (or run rosdep to install the dependencies, but I guess this is not possible)
You should be able to use wstool to set something up to accomplish this. This tutorial describes how to chain several catkin workspaces together. Your dependencies that need to be installed manually could be set up as their own catkin workspaces.
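As a hedged example of what that could look like (the dependency name and repository URL below are placeholders), the dependency workspace might be set up with:
# Hypothetical dependency workspace managed by wstool
mkdir -p ~/deps_ws/src && cd ~/deps_ws/src
wstool init .
wstool set my_dependency https://example.com/my_dependency.git --git
wstool update                        # clones the dependency sources
cd ~/deps_ws && catkin_make
source ~/deps_ws/devel/setup.bash    # chain this workspace under your own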
A little background: I am working on some Python modules that other developers on our team will use. A common theme of each module is that one or more messages will be published to Kafka. We intend at this time to use the Confluent Kafka client. We are pretty new to Python development in our organization -- we have traditionally been a .NET shop.
The complication: while the code that we create will run on Linux (RHEL 7), most of the developers will do their work on Windows.
So we need the librdkafka C library compiled on each developer machine (which has dependencies of its own, one of which is OpenSSL). Then a pip install of confluent-kafka should just work, which means a pip install of our package will work. Theoretically.
To start, I did the install on my Linux laptop (Arch). I knew I already had OpenSSL, zlib, and the other dependencies available, so this process was painless:
git clone librdkafka repo
configure, make and install per the README
pip install confluent-kafka
done
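In concrete terms, those steps amount to something like the following (the clone URL is the classic upstream repo; the project has since moved under the confluentinc organization):
git clone https://github.com/edenhill/librdkafka.git
cd librdkafka
./configure              # picks up OpenSSL and zlib if their dev headers are present
make
sudo make install        # defaults to the /usr/local prefix shown below
pip install confluent-kafka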
The install of librdkafka went into /usr/local:
/usr/local/lib/librdkafka.a
/usr/local/lib/librdkafka++.a
/usr/local/lib/librdkafka.so -> librdkafka.so.1
/usr/local/lib/librdkafka++.so -> librdkafka++.so.1
/usr/local/lib/librdkafka.so.1
/usr/local/lib/librdkafka++.so.1
/usr/local/lib/pkgconfig/rdkafka.pc
/usr/local/lib/pkgconfig/rdkafka++.pc
/usr/local/include/librdkafka/rdkafkacpp.h
/usr/local/include/librdkafka/rdkafka.h
Now the painful part, making it work on Windows:
install precompiled OpenSSL
git clone librdkafka repo
open in VS2015
install libz via NuGet
build solution
install to where???
This is where I'm stuck. What would a standard install on a Windows 7/8/10 machine look like?
I have the following from the build output, but no idea what should go where in order to make the pip install confluent-kafka "just work":
/librdkafka/win32/Release/librdkafka.dll
/librdkafka/win32/Release/librdkafka.exp
/librdkafka/win32/Release/librdkafka.lib
/librdkafka/win32/Release/librdkafkacpp.dll
/librdkafka/win32/Release/librdkafkacpp.exp
/librdkafka/win32/Release/librdkafkacpp.lib
/librdkafka/win32/Release/zlib.dll
<and the .h files back in the src>
Any recommendations on an install location?
I'm not sure where the ideal place to install on Windows would be, but I ran the following test with some success.
I copied my output and headers to C:\test\lib and C:\test\include, then ran a pip install with the following options:
pip install --global-option=build_ext --global-option="-LC:\test\lib" --global-option="-IC:\test\include" confluent-kafka
Unfortunately, this doesn't quite work because the confluent-kafka setup does not support Windows at this time: https://github.com/confluentinc/confluent-kafka-python/issues/52#issuecomment-252098462
It's an old question, but it seems there's still no easy answer yet. Also, Confluent seems too busy to work on Windows support...
I had the same headache a couple of weeks ago, and after some research I managed to make it work for me on Windows. I logged my findings and uploaded a pre-compiled library to my GitHub; please check and see if it helps. :D
https://github.com/MichaelZhangCA/confluent-kafka-python
My environment is the 64-bit version of Python 3.6, but ideally it should also work for 32-bit if you follow the same approach.
I assume you have successfully followed the instructions from MichaelZhangCA (https://github.com/MichaelZhangCA/confluent-kafka-python/) in the previous post.
If you did so, these probably were the last two commands executed:
::Install confluent-kafka
cd C:\confluent-kafka-python\confluent-kafka-python-0.11.4
python setup.py install
If that is correct, those DLLs were created under C:\confluent-kafka-python\librdkafka-reference\release\.
All you have to do is copy them to a directory already in Windows' PATH.
For example, I use Anaconda 5.2 for Windows, with Python 3.6. My Anaconda Prompt has an empty directory in PATH, so I copied those DLLs there:
::Anaconda Prompt - copy DLLs to a directory already in PATH
mkdir %CONDA_PREFIX%\Library\usr\bin
copy C:\confluent-kafka-python\librdkafka-reference\release %CONDA_PREFIX%\Library\usr\bin
If you don't use Anaconda, just copy those DLLs to any other directory in Windows' PATH. You may also leave them in C:\confluent-kafka-python\librdkafka-reference\release and add that directory to PATH.
The question
Ansible is a Python module, installable via pip. It relies on several dependencies, which are also pip packages. Is it possible to "roll up" all of those dependencies and Ansible itself into some sort of single package that can be installed offline, without root? It's highly preferable not to need pip for the install, although it will be available for package creation.
Extra background
I'm trying to install Ansible on one of our servers. The server does not have access to the internet, and there is no root access. Pip is not installed, but Python is. It is possible to get pip installed there, but it might be complicated. The only way to get anything onto the server is via an internal tar.gz package-sharing solution.
I've tried fiddling around with rpm, saving dependencies, but the absence of root access put an end to that.
Use pip on an internet-connected machine to download all the deps to a local dir with --download and -r requirements.txt, then drop that dir on the disconnected machine (which has pip installed), and install using --no-index and --find-links=(archive dir).
See https://pip.pypa.io/en/latest/user_guide/#fast-local-installs
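For example (note that on modern pip the deprecated --download option has been replaced by the pip download subcommand; the directory names here are placeholders):
# On the internet-connected machine; run with a Python matching the
# server's so you fetch compatible packages
pip download -r requirements.txt -d ./pkgs
tar czf pkgs.tar.gz pkgs       # ship via the internal tar.gz sharing solution

# On the disconnected server, after extracting pkgs.tar.gz
pip install --user --no-index --find-links=./pkgs -r requirements.txt   # --user avoids needing root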
My Python package contains a lot of files compiled by python-protobuf (python2-protobuf-2.5.0 on Arch Linux). I installed the package on an Ubuntu 12.04.3 server (which has python-protobuf-2.4.1), tried to run the code, and hit the following error:
from google.protobuf.internal import enum_type_wrapper
ImportError: cannot import name enum_type_wrapper
I think it's because the protobuf modules in my package are compiled by protobuf-2.5.0 and they do not work with protobuf-2.4.1.
I have no idea of the environments in which my code may run; the version of protobuf may vary. How can I make my package work with both protobuf 2.4 and 2.5?
(A possible way: include two different sets of protobuf libraries (one compiled by 2.4.1, the other compiled by 2.5.0) in my package, get the google.protobuf version at runtime, and select which set to import. Is that possible?)
You need to specify the version of protobuf that your package works with in your setup.py, in the install_requires list: install_requires=['protobuf>=2.5.0']. With a Python package, you can give just the name, or pin exact versions with ==. I believe you can also exclude specific versions with !=.
If you are not packaging it with a setup.py, you should set up a virtualenv and put a requirements.txt file with all the specific Python packages and versions in the root of the project.
That might look like:
$ cd ../project
$ virtualenv project_venv
$ source project_venv/bin/activate
$ cd project
$ pip install 'protobuf>=2.5.0'
$ pip freeze > ./requirements.txt
Then someone you distribute to can activate their virtualenv and do:
$ pip install -r requirements.txt
Make sure your package will work from a fresh virtualenv by installing it with that method. This is also a good check before installing via a setup.py. You want to make sure your requirements will get anyone up and running who just does a fresh sudo python setup.py install, or python setup.py install in a virtualenv context.
You can exit a virtualenv context with:
$ deactivate
Your best bet may be to include a copy of the protobuf runtime library with your package, maybe under a different package name. Then you can make sure that it matches the version of your generated code.
Another option is to invoke protoc as part of the installation process, so you get whatever version is available on the host.
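That could be as simple as an install-time step that regenerates the modules with whatever protoc the host provides; the proto/ and mypackage/ paths below are hypothetical:
# Hypothetical regeneration step, run during installation;
# the generated code then matches the host's protobuf runtime
protoc --proto_path=proto --python_out=mypackage proto/*.proto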
I don't think packaging multiple versions of your generated code sounds like a good idea -- you'll just have problems again when the next protobuf release comes out.