Here's the deal: I am developing a framework whose sole users have extremely messed-up Python installations on their (Linux) servers. They all have multiple versions of Python on their servers, and their PYTHONHOME and PYTHONPATH variables point to different versions.
Since my framework will require Python 2.6, I thought that a safe way to distribute my application might be to bundle a pre-compiled version of Python with it. To test this theory out, I downloaded ActivePython and bundled all the necessary files with my application. My main script's shebang line is #!/vendor/ActivePython2.6/bin/python.
So far I have tested the framework on different server distributions and on different people's servers, and it has worked with no problems (yet).
My question is: are there any problems with doing this, and are there any alternatives?
I'd recommend against it. You'll run into problems between 32-bit and 64-bit versions, between different libc versions, with noexec mount locations, with wrong SELinux/AppArmor profiles for custom paths, and many other potential problems...
Unless you're planning to release (and test!) a package for each separate distribution, architecture and version, I'd say you're creating problems for yourself. The alternative is of course to provide both: ship the framework alone by default, and make the static Python package available in case of problems.
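If you do end up shipping a bundled interpreter, one way to limit the damage is to prefer the system Python and only fall back to the bundle when it is unsuitable. A rough sketch, where the framework package name is an assumption for illustration:

    #!/usr/bin/env python
    # Bootstrap sketch: use the system interpreter when it is new enough,
    # otherwise re-exec this script under the bundled one.
    import os
    import sys

    REQUIRED = (2, 6)
    BUNDLED = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                           "vendor", "ActivePython2.6", "bin", "python")

    if sys.version_info[:2] >= REQUIRED:
        import framework  # hypothetical package name
        framework.main()
    elif os.path.exists(BUNDLED):
        os.execv(BUNDLED, [BUNDLED] + sys.argv)  # re-run under bundled Python
    else:
        sys.stderr.write("no suitable Python >= 2.6 found\n")
        sys.exit(1)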
I'm working in the VFX industry, and we deal with different software packages that ship with their own Python interpreter. We are running on Linux and use environment modules to manage our environments, to make sure that people use the correct version of every application depending on the project they are working on.
For months we have been trying to set up an environment that supports multiple versions of Python. What is blocking us right now are the additional Python libraries that we use in our in-house tools, like sqlalchemy, psycopg2, openpyxl, zmq, etc.
So far, for each project, we have a config file that defines the version of each package to be used, including the additional Python modules. To pick the correct build of each Python module, we look up the main Python interpreter defined in that same definition file. This works as long as the major and minor versions of all the Python interpreters line up.
But now I would like to start one application that ships with a Python 3.7 interpreter, another that ships with a Python 3.9 interpreter, and so on. All of these applications use our in-house tools, which need the additional Python modules. Of course, this fails when trying to import any additional module.
For now, the only solution I see is to install the corresponding Python modules into the site-packages of each application that comes with its own Python interpreter. That should work, but it means that we have to install all the necessary Python modules for each application version (ideally the same version of each, to avoid compatibility issues), and whenever we decide to update one of them, this has to be redone for every third-party application.
That does not sound super-efficient to me.
Do you have similar experiences, and what did you come up with to handle this? I know there are more advanced packages like rez for handling complex environment setups, but although I do not know the details of rez, I imagine the problem stays the same: it is not possible to globally populate PYTHONPATH with additional modules in a way that works across multiple Python interpreter versions.
Another solution I could imagine is to make sure that, on startup of each application that needs additional Python modules, we do our own sys.path modification depending on the interpreter version, as sketched below. That would imply some coding, but we could keep the version handling global without installing the modules everywhere.
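For what it's worth, a sketch of that idea, assuming the shared modules are built once per interpreter line and laid out under a versioned root (the path below is made up):

    import os
    import sys

    # Assumed layout: <root>/3.7/, <root>/3.9/, ... one tree per interpreter line.
    MODULE_ROOT = "/tools/python-modules"  # hypothetical path

    version_dir = "%d.%d" % sys.version_info[:2]
    candidate = os.path.join(MODULE_ROOT, version_dir)
    if os.path.isdir(candidate):
        sys.path.insert(0, candidate)  # the tree matching this interpreter wins
    else:
        raise RuntimeError("no module tree built for Python " + version_dir)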
Anyway, if you have any further hints, please let me know.
Greets,
Carlo
I would like to develop some Python 3.6 software. The problem is that the software would run on hundreds of uniquely configured build environments that may or may not have Python installed and have no access to the internet or PyPI. The machines are a mix of Windows and SUSE. It's important not to mess with the build environments, so I would like to package my software together with an isolated Python environment containing all the dependencies.
I'm finding it difficult to find a solution that would meet my criteria.
I've come across Python virtual environments, but they do not include an interpreter and are not really intended to be copied around.
Another person on Stack Overflow recommended PEX; this looks perfect, but it does not seem to be compatible with Windows.
I have also thought about making the software a statically linked binary using Cython. But again, to my knowledge this still requires the correct Python to be installed, and it has to use pure Python.
https://pyoxidizer.readthedocs.io/en/latest/comparisons.html has a comparison of various solutions in this space. It looks like, if you need a cross-platform solution that doesn't require the target systems to be pre-configured (e.g. with a particular version of Python pre-installed), your options are PyInstaller, PyOxidizer and Docker.
PyInstaller is more established, while PyOxidizer claims to have faster startup.
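For reference, PyInstaller can also be driven from Python rather than from the command line; a minimal sketch (the entry-point and output names here are assumptions):

    import PyInstaller.__main__

    PyInstaller.__main__.run([
        "--onefile",         # bundle everything into a single executable
        "--name", "mytool",  # hypothetical output name
        "main.py",           # hypothetical entry script
    ])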
I'd expect Docker to be the least problematic if you have complex dependencies. It must be preinstalled on the target systems, but the build environments will probably already have it installed. Obviously it comes with more overhead.
I'm building a Django app, which I comfortably run (test :)) on an Ubuntu Linux host. I would like to package the app without source code and distribute it to another production machine. Ideally the app could be run with a ./runapp command, which starts a CherryPy server that runs the Python/Django code.
I've discovered several ways of doing this:
Distributing the .pyc files only and building and installing all the requirements on the target machine.
Using one of the many tools to package Python apps into a distributable package.
I'm really gunning for option 2; I'd like to have my Django app contained, so it's possible to distribute it without needing to install or configure additional things. Searching the interwebs provided me with more questions than answers, and a very sour taste that Django packaging is an arcane art that everybody knows but nobody speaks about. :)
I've tried Freeze (fails), cx_Freeze (the easy_install version fails; the repository version works, but the app output fails) and read up on dbuilder.py (which is supposed to work but doesn't really, I guess). If I understand correctly, most problems originate from the way that Django imports modules (example), but I have no idea how to solve it.
I'll be more than happy if anyone can provide any pointers or good resources online regarding packaging/distributing standalone Django applications.
I suggest you base your distro on setuptools (a tool that enhances the standard Python distribution mechanism, distutils).
Using setuptools, you should be able to create a Python egg containing your application. The egg's metadata can contain a list of dependencies that will be automatically installed by easy_install (this can include Django plus any third-party modules/packages that you use).
setuptools/distutils distros can include scripts that will be installed to /usr/bin, so that's how you can include your runapp script.
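A minimal setup.py along those lines might look something like this (the project name and dependency list are placeholders, not a tested recipe):

    # Hypothetical setup.py sketch for an egg-based distribution of the app.
    from setuptools import setup, find_packages

    setup(
        name="myapp",          # placeholder project name
        version="0.1",
        packages=find_packages(),
        install_requires=[
            "Django",          # pulled in automatically by easy_install
            "CherryPy",        # the embedded web server
        ],
        scripts=["runapp"],    # installed into e.g. /usr/bin
    )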
If you're not familiar with virtualenv, I suggest you take a look at that as well. It is a way to create isolated Python environments, and it will be very useful for testing your distro.
Here's a blog post with some info on virtualenv, as well as a discussion about a couple of other nice to know tools: Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip
The --noreload option (to manage.py runserver) will stop Django auto-detecting which modules have changed. I don't know if that will fix it, but it might.
Another option (and it's not ideal) is to obscure some of your core functionality by packaging it as a DLL, which your plain-text code will call.
I want to make some Python scripts to create an "Appliance" with VirtualBox. However, I can't find any documentation anywhere on making calls to VBoxService.exe. Well, I've found stuff that works from OUTSIDE the machine, but nothing that works from inside the machine.
Does anyone know anything about this? If there's a library for another language like C I'd be okay with it, though Python would be heavily preferred.
Consider using libvirt. The VirtualBox support is bleeding-edge (not in any release, may not even be in source control yet, but is available as a set of patches on the mailing list) -- but this single API, available for C, Python and several other languages, lets you control virtual machines and images running in Qemu/KVM, Xen, LXC (Linux Containers), UML (User-Mode Linux), OpenVZ and others.
I build and administer virtual appliances (in an automated QA context) using libvirt with the qemu/KVM backend, and it meets my needs very well.
libvirt can be configured to allow remote access (such as controlling or querying VBoxService or libvirtd from within one of the VMs, which you appear to want to do -- though I question the wisdom and utility), with numerous authentication and transport options available.
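For a taste of the API, here is a minimal sketch using the libvirt Python bindings to connect to a (possibly remote) libvirtd and list the running domains; the connection URI is an assumption:

    import libvirt

    conn = libvirt.open("qemu+ssh://user@host/system")  # hypothetical URI
    try:
        for dom_id in conn.listDomainsID():  # IDs of running domains
            dom = conn.lookupByID(dom_id)
            print(dom.name(), dom.info())    # name plus state/memory/vCPU info
    finally:
        conn.close()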
[Caveat: libvirt principally targets Unix-like operating systems; it can be built for Win32, but YMMV]
I am a member of a team that is about to launch a beta of a python (Django specifically) based web site and accompanying suite of backend tools. The team itself has doubled in size from 2 to 4 over the past few weeks and we expect continued growth for the next couple of months at least. One issue that has started to plague us is getting everyone up to speed in terms of getting their development environment configured and having all the right eggs installed, etc.
I'm looking for ways to simplify this process and make it less error-prone. Both zc.buildout and virtualenv look like they would be good tools for addressing this problem, but both seem to concentrate primarily on the Python-specific issues. We have a couple of small subprojects in other languages (Java and Ruby specifically) as well as numerous Python extensions that have to be compiled natively (lxml, MySQL drivers, etc.). In fact, one of the biggest thorns in our side has been getting some of these extensions compiled against appropriate versions of the shared libraries so as to avoid segfaults, malloc errors and all sorts of similar issues. It doesn't help that out of 4 people we have 4 different development environments -- 1 Leopard on PPC, 1 Leopard on Intel, 1 Ubuntu and 1 Windows.
Ultimately, what would be ideal would be something that works roughly like this, from the DOS/Unix prompt:
$ git clone [repository url]
...
$ python setup-env.py
...
that then does what zc.buildout/virtualenv does (copy/symlink the Python interpreter, provide a clean space to install eggs), then installs all the required eggs, including any native shared-library dependencies, installs the Ruby project, the Java project, etc.
Obviously this would be useful for both getting development environments up as well as deploying on staging/production servers.
Ideally I would like for the tool that accomplishes this to be written in/extensible via python, since that is (and always will be) the lingua franca of our team, but I am open to solutions in other languages.
So, my question then is: does anyone have any suggestions for better alternatives or any experiences they can share using one of these solutions to handle larger/broader install bases?
Setuptools may be capable of more of what you're looking for than you realize: if you need a custom version of lxml to work correctly on Mac OS X, for instance, you can put a URL to an appropriate egg inside your setup.py and have setuptools download and install it inside your developers' environments as necessary; it can also be told to download and install a specific version of a dependency from revision control.
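Concretely, that looks something like this in setup.py (the URL and version pin are made up):

    from setuptools import setup

    setup(
        name="ourproject",               # placeholder
        version="0.1",
        install_requires=["lxml==2.2"],  # pinned dependency
        dependency_links=[
            # easy_install scans this page for platform-appropriate eggs:
            "http://eggs.example.com/lxml/",  # hypothetical URL
        ],
    )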
That said, I'd lean towards using a scriptably generated virtual environment. It's pretty straightforward to build a kickstart file which installs whichever packages you depend on and then boot virtual machines (or production hardware!) against it, with Puppet or similar software doing the rest of the administration (adding users, setting up services [where does your database come from?], etc.). This comes in particularly handy when your production environment includes multiple machines -- just script the generation of multiple VMs within their handy little sandboxed subnet (I use libvirt+kvm for this; while kvm isn't available on all the platforms you have developers working on, qemu certainly is, or you can do as I do and have a small number of beefy VM hosts shared by multiple developers).
This gets you out of the headaches of supporting N platforms -- you only have a single virtual platform to support -- and means that your deployment process, as defined by the kickstart file and puppet code used for setup, is source-controlled and run through your QA and review processes just like everything else.
I always create a develop.py file at the top level of the project, along with a packages directory holding all of the .tar.gz files from PyPI that I want to install, and an unpacked copy of virtualenv that is ready to run right from the checkout. All of this goes into version control. Every developer can simply check out the trunk, run develop.py, and a few moments later will have a virtual environment ready to use that includes all of our dependencies at exactly the versions the other developers are using. And it works even if PyPI is down, which is very helpful at this point in that service's history.
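A stripped-down develop.py in that spirit might look like this (the directory names are assumptions):

    import glob
    import os
    import subprocess
    import sys

    HERE = os.path.dirname(os.path.abspath(__file__))
    ENV = os.path.join(HERE, "env")

    # Bootstrap a virtualenv using the vendored, unpacked copy of virtualenv.
    subprocess.check_call([sys.executable,
                           os.path.join(HERE, "virtualenv", "virtualenv.py"),
                           ENV])

    # Install every vendored source distribution -- no network access needed.
    for sdist in sorted(glob.glob(os.path.join(HERE, "packages", "*.tar.gz"))):
        subprocess.check_call([os.path.join(ENV, "bin", "easy_install"), sdist])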
Basically, you're looking for a cross-platform software/package installer (along the lines of apt-get/yum/etc.). I'm not sure something like that exists.
An alternative might be specifying the list of packages to be installed via an OS-specific package management system, such as Fink or DarwinPorts for Mac OS X, and having a script that sets up the build environment for the in-house code?
I have continued to research this issue since I posted the question. It looks like there are some attempts to address some of the needs I outlined, e.g. Minitage and Puppet, which take different approaches but may both accomplish what I want -- although Minitage does not explicitly state that it supports Windows. Lacking any better options, I will try to make one of these (or just extensive customized use of zc.buildout) work for our needs, but I still feel like there must be better options out there.
You might consider creating virtual machine appliances with whatever production OS you are running, and all of the software dependencies pre-built. Code can be edited either remotely, or with a shared folder. It worked pretty well for me in a past life that had a fairly complicated development environment.
Puppet doesn't (easily) support the Win32 world either. If you're looking for a deployment mechanism and not just a "dev setup" tool, you might consider looking into ControlTier (http://open.controltier.com/), which has an open-source cross-platform solution.
Beyond that, you're looking at "enterprise" software such as BladeLogic or OpsWare, typically with an outrageous price tag for the functionality offered (my opinion, obviously).
A lot of folks have been using a combination of Puppet and Capistrano aggressively (even non-Rails developers) as deployment automation tools, to pretty good effect. The downside, again, is that it expects a somewhat homogeneous environment.