I have a Python script which also imports protoc-generated modules (protobuf and gRPC). I would like to package this as an RPM. Previously I would write a spec file defining the RPM, its contents, compile/link flags (if building from C/C++ sources), post/postun phases if any, and so on.
What would be the correct way to create an RPM for a Python script plus generated Python modules? I know that there is the distutils Python module, and that people normally write their own setup.py script and then do python setup.py bdist_rpm. However I'm not sure how I can plug in the protobuf generation phase.
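For concreteness, the kind of hook I have in mind would look roughly like this (an untested sketch; the package name, paths and .proto file below are made up, it assumes protoc is on the PATH, and the gRPC stubs would additionally need the gRPC plugin), though I don't know whether this is the right approach:

# setup.py -- rough sketch of plugging protoc into the build
import subprocess
from distutils.core import setup
from distutils.command.build_py import build_py

class BuildWithProtoc(build_py):
    """Generate the *_pb2.py modules before the normal Python build."""
    def run(self):
        # assumes protoc is installed and on the PATH
        subprocess.check_call([
            "protoc", "--proto_path=proto", "--python_out=mypackage",
            "proto/myservice.proto",
        ])
        build_py.run(self)

setup(
    name="mypackage",
    version="1.0",
    packages=["mypackage"],
    cmdclass={"build_py": BuildWithProtoc},
)

Since bdist_rpm rebuilds from the source distribution, I assume the .proto files would also have to be listed in MANIFEST.in so that the generation step can run during the RPM build.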
I'd appreciate helpful suggestions!
My project contains:
My own custom Python files
Unique package-specific generated Python code
Resources (e.g. binaries)
Dependencies on 3rd party modules (e.g. numpy)
The generated Python code makes things tricky, and separates this use case from a typical Python package where everyone gets the same code. I may create several packages to be distributed to different clients. Each package will have different/unique generated Python code, but use identical versions of my custom Python scripts and 3rd party dependencies. For example I may make a "package builder" script, which generates the unique Python code and bundles the dependencies together, depending on the builder arguments.
I want to distribute my Python scripts, including the resources and dependencies. The receiver of this package cannot download the 3rd party dependencies using a requirements.txt and pip; all dependencies and binaries must be included in this package.
The way I envision the client using this package is that they simply unzip the archive I provide, set their PYTHONPATH to the unzipped directory, and invoke my custom Python file to start the process.
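For context, the rough shape of the "package builder" I have in mind is something like the sketch below (names and paths are invented; it vendors the 3rd-party dependencies with pip's --target option so the client needs no network access):

# build_package.py -- rough sketch of the package builder idea
import os
import shutil
import subprocess
import sys

def build(output_dir, generated_code_dir):
    # my own custom Python files plus the client-specific generated code
    shutil.copytree("my_scripts", os.path.join(output_dir, "my_scripts"))
    shutil.copytree(generated_code_dir, os.path.join(output_dir, "generated"))
    # vendor the 3rd-party dependencies (e.g. numpy) into the bundle
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "--target", os.path.join(output_dir, "deps"),
        "numpy",
    ])
    # the client unzips this archive, points PYTHONPATH at the unzipped
    # directories and invokes my entry-point script
    shutil.make_archive(output_dir, "zip", output_dir)

if __name__ == "__main__":
    build(sys.argv[1], sys.argv[2])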
If I'm going about this the wrong way I'd appreciate suggestions.
I have a C++ library (we'll call it Example in the following) for which I wrote Python bindings using the boost.python library. This Python-wrapped library will be called pyExample. The entire project is built using CMake and the resulting Python-wrapped library is a file named libpyExample.so.
When I use the Python bindings from a Python script located in the same directory as libpyExample.so, I simply have to write:
import libpyExample
libpyExample.hello_world()
and this executes a hello_world() function exposed by the wrapping process.
What I want to do
For convenience, I would like my pyExample library to be available from anywhere simply using the command
import pyExample
I also want pyExample to be easily installable in any virtualenv in just one command. So I thought a convenient process would be to use setuptools to make that happen. That would therefore imply:
Making libpyExample.so visible for any Python script
Changing the name under which the module is accessed
I did find many things about compiling C++ extensions with setuptools, but nothing about packaging a pre-compiled C++ extension. Is what I want to do even possible?
What I do not want to do
I don't want to build the pyExample library with setuptools; I would like to avoid modifying the existing project too much. The CMake build is just fine, and I can retrieve the libpyExample.so file very easily.
If I understand your question correctly, you have the following situation:
you have an existing CMake-based build of a C++ library with Python bindings
you want to package this library with setuptools
The latter then allows you to call python setup.py install --user, which installs the lib in the site-packages directory and makes it available from every path in your system.
What you want is possible if you override the classes that setuptools uses to build extensions, so that those classes actually call your CMake build system. This is not trivial, but you can find a working example here, provided by the pybind11 project:
https://github.com/pybind/cmake_example
Have a look at its setup.py; you will see how the build_ext and Extension classes are subclassed and modified to execute the CMake build.
This should work for you out of the box, or with little modification if your build requires special -D flags to be set.
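The core of that example, heavily condensed, looks something like the sketch below (refer to the repository for the complete, robust version; the extension and package names here are placeholders):

# setup.py -- condensed sketch of the cmake_example approach
import os
import subprocess
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext

class CMakeExtension(Extension):
    def __init__(self, name, sourcedir=""):
        # no sources listed: the actual compilation is delegated to CMake
        Extension.__init__(self, name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)

class CMakeBuild(build_ext):
    def build_extension(self, ext):
        extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
        os.makedirs(self.build_temp, exist_ok=True)
        # configure and build with CMake, dropping the .so where setuptools expects it
        subprocess.check_call(
            ["cmake", ext.sourcedir,
             "-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=" + extdir],
            cwd=self.build_temp)
        subprocess.check_call(["cmake", "--build", "."], cwd=self.build_temp)

setup(
    name="pyExample",
    version="0.1",
    ext_modules=[CMakeExtension("pyExample")],
    cmdclass={"build_ext": CMakeBuild},
)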
I hope this helps!
I am writing a program in Python to be sent to other people, who are running the same Python version; however, there are some 3rd-party modules that need to be installed to use it.
Is there a way to compile it into a .pyc (I only say .pyc because it's a Python compiled file) that has all the dependent modules inside it as well?
So they can run the programme without needing to install the modules separately?
Edit:
Sorry if it wasn't clear, but I am aware of things such as cx_freeze etc.; what I'm trying to get is just a single Python file.
So they can just type "python myapp.py" and then it will run. No installation of anything. As if all the modules' code were in my .py file.
If you are on Python 2.3 or later and your dependencies are pure Python:
If you don't want to go the setuptools or distutils routes, you can provide a zip file with the .pyc files for your code and all of its dependencies. You will have to do a little work to make any complex pathing inside the zip file available (if the dependencies are just lying around at the root of the zip, this is not necessary). Then just add the zip location to your path and it should work just as if the dependency files had been installed.
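For example, on the receiving end something along these lines should work (the module and file names here are made up):

# run_app.py -- minimal sketch: import dependencies straight from a zip file
import sys

# deps.zip contains the .pyc (or .py) files of the dependencies at its root
sys.path.insert(0, "deps.zip")

import somedependency  # resolved from inside deps.zip via zipimport
somedependency.do_something()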
If your dependencies include .pyds or other binary dependencies you'll probably have to fall back on distutils.
You can simply include .pyc files for the libraries required, but no: a .pyc cannot work as a container for multiple files (unless you collect all the source into one .py file and then compile it).
It sounds like what you're after is the ability for your end users to run one command, e.g. install my_custom_package_and_all_required_dependencies, and have it assemble everything it needs.
This is a perfect use case for distutils, with which you can make manifests for your own code that link out to external dependencies. If your 3rd party modules are available publicly in a standard format (they should be, and if they're not, it's pretty easy to package them yourself), then this approach has the benefit of allowing you to very easily change what versions of 3rd party libraries your code runs against (see this section of the above linked doc). If you're dead set on packaging others' code with your own, you can always include the required files in the .egg you create with distutils.
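As a rough illustration of the dependency-declaration part, using setuptools (the package name and version specifiers below are arbitrary):

# setup.py -- minimal sketch of declaring 3rd-party dependencies
from setuptools import setup, find_packages

setup(
    name="my_custom_package",
    version="1.0",
    packages=find_packages(),
    # tightening or relaxing these specifiers is how you control which
    # versions of the 3rd-party libraries your code runs against
    install_requires=[
        "numpy>=1.5,<2.0",
    ],
)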
Two options:
Build a package that will install the dependencies for them (I don't recommend this if the only dependencies are Python packages that are installed with pip).
Use virtual environments. You use an existing Python on their system, but the Python modules are installed into the virtualenv.
Or, I suppose, you could just punt: create a shell script that installs them, and tell them to run it once before they run your stuff.
So I have been taking a few classes on Python, and the whole time I was wondering about modules. I can install them and run them with Eclipse, but if I compile a program into an executable (with an .exe extension), how would the module behave on a computer that doesn't have it installed?
Example:
Say I made some random little thing with something like pygame: I installed the pygame module on my computer, made an application with it, and compiled it into an executable. What happens on the other computer I run that file on? Or does it not work at all?
Python modules are already executable - you don't compile them. If you want to run them on another computer, you can install Python and any other dependent module such as pygame on that computer, copy the scripts over, and run them.
Python has many ways to wrap scripts up into an installer to do the work for you. It's common to use Python's distutils to write a setup.py file which handles the install. From there you can use setup.py to bundle your scripts into zip files, tarballs, executables, RPMs, etc. for other machines. You can document what the user needs to make your stuff go, or you can use something like pip or distribute to write dependency files that automatically install pygame (and so on).
There are many ways to handle this, and it's not particularly easy the first time round. For starters, read up on distutils in the standard Python docs and then search for the pip installer.
I'm looking into releasing a Python package which includes an existing Fortran or C program. The Fortran/C program is compiled by running
./configure
make
The Python code calls the resulting binary through subprocess calls (i.e. the code is not really wrapped as such). What I would like is that when the user types
python setup.py install
the Fortran/C program is first compiled using the ./configure and make commands, then I want the Python module to be installed, and the binary to be installed in the Python bin/ directory alongside executables that are usually installed via the scripts= option in distutils.core.setup.
First, are there any problems with doing this? And if not, what is the best way to do it via setup.py? Are there existing functions to automate the ./configure and make, since this is pretty standard? Or should I just use os.system calls? And either way, where should those commands go in setup.py? Then should I have make output the binary to e.g. scripts/ and then have scripts=['scripts/mybinary'] in the setup() function?
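To make the question concrete, this is roughly what I am picturing (an untested sketch using subprocess rather than os.system; the names are placeholders), though I don't know if it is the sanctioned way:

# setup.py -- rough sketch of what I am picturing
import subprocess
from distutils.core import setup
from distutils.command.build import build

class BuildWithMake(build):
    """Run ./configure and make before the normal Python build."""
    def run(self):
        subprocess.check_call(["./configure"])
        subprocess.check_call(["make"])  # assumed to put the binary in scripts/
        build.run(self)

setup(
    name="mypackage",
    version="1.0",
    packages=["mypackage"],
    scripts=["scripts/mybinary"],
    cmdclass={"build": BuildWithMake},
)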
Don't make this too complex.
Just provide them as separate items with a README that says -- basically -- what you said in the question.
Build the Fortran/C with ./configure; make; make install.
Setup Python with python setup.py install.
It doesn't appear to be rocket science. Trying to over-simplify the installation means that you must account for every OS vagary and oddness.
It's easier to trust the users to do "standard" installations so that the Fortran/C binaries are on the system PATH, and to configure your Python script to find them there.
People who want to use your software are then free to reconfigure it to their own unique needs. They will anyway. Don't overpackage and force them to fight against you to reconfigure things.
Consider writing a Python C extension as a wrapper for your C code, and an f2py extension as a wrapper for your Fortran code. Then you can just call them from your Python code as fast calls instead of using subprocess.
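A minimal sketch of the f2py half of that suggestion, using numpy.distutils (file and module names are placeholders):

# setup.py -- minimal sketch of building an f2py-wrapped Fortran module
from numpy.distutils.core import setup, Extension

setup(
    name="mypackage",
    version="0.1",
    ext_modules=[
        # f2py generates the Python wrapper around the Fortran source
        Extension(name="mypackage.fortran_core", sources=["src/core.f90"]),
    ],
)

After python setup.py build, the Fortran routines become importable as mypackage.fortran_core and can be called like ordinary Python functions.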