We have an app (a bunch of Twisted classes, actually) which runs on a specific Python version and depends on quite a few modules. This app needs to be deployed onto a Windows Server machine which has no access to the Internet.
Currently we are choosing between:
installing Python prior to everything else, plus a Python script which unpacks all the modules and runs their setup.py scripts,
making an NSIS installer which installs Python, then the modules that have .exe installers, then unpacks the smaller modules into some other directory and adds that directory to %PYTHONPATH%.
What is the accepted way of dealing with such a situation? Obviously we cannot use pip, easy_install.exe and the other blessed tools, and our approaches feel silly and inelegant.
As a third option, you can consider deploying the application as an executable using PyInstaller (http://www.pyinstaller.org). You don't need to install anything on the client machine (not even Python).
PyInstaller is a program that converts (packages) Python programs into stand-alone executables, under Windows, Linux, Mac OS X, Solaris and AIX. Its main advantages over similar tools are that PyInstaller works with any version of Python since 2.4, it builds smaller executables thanks to transparent compression, it is fully multi-platform, and uses the OS support to load the dynamic libraries, thus ensuring full compatibility.
I have used it in a project to deploy a standalone application on both Linux and Windows. It worked like a charm. My project also used Twisted.
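For reference, a minimal PyInstaller invocation looks something like this (myapp.py is a placeholder for your entry-point script; older releases shipped a pyinstaller.py script instead of the pyinstaller console command):

pyinstaller --onefile myapp.py

The bundled application ends up under dist/; on Windows it's a single .exe you can copy straight onto the offline server.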
Between your current two choices, the setup.py approach is more Pythonic. But beware: if any of your modules has a C implementation for faster performance that needs to be compiled, you won't be able to do that on your client's machine.
Try wheels. They are designed to cover exactly this case: you build the wheels once, downloading all required packages in the process, then copy the wheel archives to the target machine and install your application without downloading anything.
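A rough sketch of that workflow, assuming pip is available on both machines (requirements.txt and wheelhouse/ are placeholder names):

# on a machine with Internet access
pip wheel -r requirements.txt -w wheelhouse/
# copy wheelhouse/ to the target machine, then:
pip install --no-index --find-links=wheelhouse/ -r requirements.txt

Note that wheels containing compiled extensions must be built for the same platform and Python version as the target machine.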
I need to package my Python application, its dependencies, and Python itself into a single MSI installer for distribution to users. Desirably, the end result should be:
Python is installed in the standard location
the package and its dependencies are installed in a separate directory (possibly site-packages)
the installation directory should contain an uncompressed Python (a standalone executable is not required)
Kind of a dup of this question about how to make a Python program into an executable.
It boils down to:
py2exe on Windows, Freeze on Linux, and
py2app on Mac.
I use PyInstaller (the SVN version) to create a stand-alone version of my program that includes Python and all the dependencies. It takes a little fiddling to get it to work right and include everything (as do py2exe and other similar programs; see this question), but then it works very well.
You then need to create an installer. NSIS works great for that and is free, but it creates .exe files, not .msi. If .msi is not necessary, I highly recommend it. Otherwise, check out the answers to this question for other options.
My company uses the free InnoSetup tool. It is a moderately complex program with tons of flexibility for building installers for Windows. I believe it creates .exe and not .msi files, however. InnoSetup is not Python-specific, but we have created an installer for one of our products that installs Python along with dependencies to locations specified by the user at install time.
I've had much better results with dependencies and custom folder structures using PyInstaller, and it lets you find and specify hidden imports and hooks for larger dependencies like NumPy and SciPy. Also a PITA, though.
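For reference, hidden imports can be declared on the command line or in the .spec file; the module name below is just a placeholder for whatever PyInstaller's static analysis misses:

pyinstaller --hidden-import=some.dynamically.imported.module myapp.py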
py2exe will make Windows executables with Python bundled in.
py2exe is the best way to do this. It's a bit of a PITA to use, but the end result works very well.
OK, I have used py2exe before and it works perfectly except for one thing... it only produces executables for Windows machines. I then learned about Jython, which turns a Python script into a .jar file, which as you know is executable from any machine that has a (sufficiently recent) Java runtime installed. Which is great, because Unix, Windows, and Macs (most of the time) all run Java. That means it's executable on all of those machines, as long as they run Java. No need for "py2mac + py2exe + freeze" just to run on all operating systems. Just Jython.
For more information on how it works and how you can use it, see http://www.jython.org/.
EDIT (since I think the original post wasn't clear enough):
Is there a tool or an option to pack my Python project and all of its dependencies and sub-dependencies, the same way PyInstaller does, except that PyInstaller generates an executable binary and I'd like to keep the ability to make changes in the code after it's distributed (e.g. in a client's environment)?
Original Post
I have this Python project, with a lot of dependencies and sub-dependencies, that's currently distributed by building it with pipenv (to create a virtual environment and fetch the third-party libraries) and PyInstaller (to generate an executable), which is then used on a virtual machine with another OS (I produce an executable for the target machine's OS by building it in a Docker container running that same OS).
The thing is, I'm using some data and scripts from this virtual machine in my Python project, so I can't run it locally, and whenever there's a bug or an error I have to make changes locally, rebuild (which takes some time), and only then move the executable to the VM in order to run it.
My goal is to have all of the code packed with the dependencies, but keeping the structure of my project rather than an executable, so I can make quick changes on the VM itself.
The VM may not have an external connection, so I can't just install the dependencies on the machine.
Is there a tool that can help me do such a thing?
Note
Currently the Python version on the VM is different from the Python version the project uses; it's possible to install another version if necessary.
I need to automate a cross-platform application build. The entire build runs on a Windows machine. Part of it is written in Python and compiled for OS X; currently this part of the build is done manually on OS X.
I tried PyInstaller, but it looks like it only builds for the platform it is running on. I also tried py2app, but it did not install on Windows.
Are there any tools to compile Python script to OS X app on Windows machine?
Short answer:
Apparently, there is no simple way to do this with the standard set of tools you have mentioned. I outline a complex, improbable solution at the end that's probably too involved to consider.
End result: keep doing it manually; it's probably the best option so far.
Drawing from credible and/or official sources:
There's already a long, curated list of options in michael0x2a's excellent answer to Create a single executable from a Python project, which outlines most tools available for creating standalone distributions of a Python program. One line stands out:
Also, unless otherwise noted, all programs listed below will produce an exe specifically for the operating system it's running in.
I might have read things wrong, but there's no explicit note of cross-platform support.
Similar lists can be found in Distribution Utilities and in Freezing Your Code — The Hitchhiker's Guide to Python. From these we can take a look at the Windows-supported projects:
bbfreeze: Not maintained, no documentation on limitations; generally superseded by the rest of this list.
PyInstaller: According to their documentation:
PyInstaller is tested against Windows, Mac OS X, and Linux. However, it is not a cross-compiler: to make a Windows app you run PyInstaller in Windows; to make a Linux app you run it in Linux, etc. PyInstaller has been used successfully with AIX, Solaris, and FreeBSD, but is not tested against them.
PyInstaller once tried to support cross-compilation (from Linux -> Windows) but later dropped the idea.
cx_Freeze: Looking at the Freezing for other platforms question in their Frequently Asked Questions list:
cx_Freeze works on Windows, Mac and Linux, but on each platform it only makes an executable that runs on that platform. So if you want to freeze your program for Windows, freeze it on Windows; if you want to run it on Macs, freeze it on a Mac.
py2app: This is not an option, since py2app is supported only on OSX machines, based on their note:
NOTE: py2app must be used on OSX to build applications, it cannot create Mac applications on other platforms.
So installing it on Windows is a no-go.
This wraps up the tools available for creating standalone applications on Windows. Even outside Windows, though, there is no solution for creating an OS-agnostic tool.
The general consensus for achieving this sort of thing is via virtual machines: you create a VM image of the target OS, run the dedicated tool for that OS inside the VM, and then transfer the bundled executable to compatible machines. Since OSX is generally not easy to virtualize, from what I know, you kind of run out of luck here.
One complex possible way:
Now, the reason I said there is no simple way to do this is that there might be one, but from what I can see it is so tedious you shouldn't even consider it. The general issue we're facing here is that Windows executables are simply not compatible with OSX; heck, Linux executables aren't either. So what we need is the ability to cross-compile things.
One compiler that I've heard supports cross-compilation is clang. So, in order to remedy the incompatibility of executables, you could theoretically use clang as a cross-compiler. The problem is, this is not a simple task; it is probably way harder than what you're willing to do, since it is riddled with complex issues (see this OSX-to-Windows example).
If you do actually find a way to do that, you then need a .cpp/.c file generated from your Python scripts. Thankfully, this is something that can be done with tools like Nuitka or Cython.
The gist of both is the same: .py in, .cpp/.c out. The second tricky part might be setting up the clang compiler to be used with these. I honestly have no idea if anyone has done this.
I haven't looked into them much, but I know Nuitka can also be used to create a standalone executable when passed the --standalone flag (see: Moving to other machines). So, theoretically, you could use Nuitka with clang configured to cross-compile for OSX. Then you can rest; you've earned it.
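For reference, the Nuitka side of that idea looks roughly like the following (Nuitka also has a --clang switch; whether it can be pointed at a cross-compiling clang as described above is exactly the untested part):

python -m nuitka --standalone myapp.py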
You can use a Docker image like https://github.com/sickcodes/Docker-OSX to simulate a Mac.
From this simulated Mac you can install PyInstaller and run your build there, which will produce the desired file.
Some people create Windows executables the same way, by running PyInstaller on Linux; I don't see why this couldn't work from Windows to Mac.
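As a rough sketch of that idea (the image tag and flags here follow the Docker-OSX README and may well have changed; KVM support on the host is required, which is itself a hurdle on Windows):

docker run -it --device /dev/kvm -p 50922:10022 sickcodes/docker-osx:latest

Once the simulated Mac is up, you would SSH in, install PyInstaller, and run the build there.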
Install Fabric (a Python module, easily installed with pip) on your Windows machine so that you can run the build on your Mac as part of the same automated build process.
Fabric allows you to utilize SSH from Python. So long as you know how to access the Mac over SSH (just need the IP, username, and password), this is easy. Set it up like this:
from fabric.api import env
env.host_string = '<IP of the Mac Here>'
env.user = '<Username on the Mac>'
env.password = '<Password of the user>'
Then copy over the necessary source files from the Windows machine to the Mac like this (once fabric has been set up as above):
from fabric.operations import put
put(r'\path\to\local\source\files', '/path/to/where/you/want/them')
Now you want to run your build tool, whatever it is. You'll need to use run or sudo for that, depending on if the tool requires admin privileges or not.
from fabric.operations import run, sudo
# sudo works the same as run if you need that instead
run('/path/to/build/tool arguments for build tool')
Finally, you have the build output, which you can retrieve using get:
from fabric.operations import get
get('/path/to/dist/on/mac', r'\local\path\to\store\on\windows')  # raw string, so \t isn't read as a tab
There we go. Pretty simple. No VM needed - just automate the process that you were already doing manually. The only real requirement is that your Mac has to be available to connect to during the build process. If you don't have a readily available Mac server, consider picking up a used Mac Mini; you can get them for as little as $50 on eBay, if budget is a concern.
Background:
I'm writing a non-commercial application in Python that uses wxPython and depends on pyPortMidi and SciPy (both available on PyPI). I would like to share it with a small circle of Mac users who live in different countries.
I work on Ubuntu, and don't have access to OSX systems for testing.
What I'm looking for:
An end-user-friendly means of deploying my application, especially given the dependencies
What I've found so far:
Like Ubuntu, OSX comes with its own Python bundled
This answer suggests py2app. However, it's not clear from the documentation whether I can build an OSX app on an Ubuntu platform. Ditto for cx_Freeze.
Specific Questions:
Can I use py2app to build an OSX app on Ubuntu? And will it automagically include the above dependencies, or do I need to specify them somehow?
If not, can I write some sort of OSX script that will install the package dependencies (using easy_install, perhaps) painlessly on the end-user's system? I haven't used distutils before, and I'm unfamiliar with OSX scripting, so any pointers would be appreciated!
Apologies for the noob questions, and thanks in advance.
You can use py2exe for Windows, Freeze on Linux, and, as you say, py2app for Mac.
py2app doesn't work on non-Mac machines. As suggested by @victor-castillo-torres, have a look at Freeze, as also suggested in the linked mailing list.
Py2app only works on OSX systems; the code does not support building bundles for other platforms than the one it is running on. That is, py2app uses the currently running Python installation to build a dependency graph and copies the files mentioned in that graph to the application bundle.
From the point of view of building a script to install the dependencies for your script, OSX is just like any other Unix system, but with different GUI libraries. A script that uses easy_install to install dependencies could be made to work, although I don't know whether all your dependencies are easily available that way (in particular wxPython).
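As a minimal sketch of what such a script could look like (the package names are the ones from the question; whether easy_install can resolve all of them, especially wxPython, is exactly the caveat above):

#!/bin/sh
# hypothetical bootstrap script to run on the end-user's Mac
easy_install SciPy
easy_install pyPortMidi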
I'm looking to create the following:
A portable version of Python that can be run on any system (with any previous version of Python, or no Python at all, installed) and have it pre-configured with various Python packages (i.e. Django, lxml, pysqlite, etc.)
The closest I've found to the above is virtualenv, but this only goes so far.
If I package up a nice virtualenv for Python on one machine, it contains symlinks to a lot of the libraries it needs. I can take those symlinks and convert them to their actual files, but if I try to move this entire directory to another machine, I get segfault after segfault.
To launch Python on a different machine, I'm using:
LD_LIBRARY_PATH=lib/ ./bin/python
and in lib/ I have all of the shared libraries I copied from the original machine. The problem here is that these shared libraries might rely on other shared libraries that I'm not including, so executing this on other Linux distros does not work, probably because it falls back on older shared libraries installed on the system that do not work with what I copied over.
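For reference, one way to see which shared libraries a binary actually resolves, and from where, is ldd (paths here match the layout above):

LD_LIBRARY_PATH=lib/ ldd bin/python
LD_LIBRARY_PATH=lib/ ldd lib/*.so

Anything reported as resolving to a system path, or as "not found", is a dependency that isn't bundled.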
Anyone have an idea on how to get this working? Is this even possible?
EDIT:
To clarify, the desired outcome is to create a tar.gz of a Python binary and associated packages (Django, lxml, pysqlite, etc.) that can be extracted and run on any Linux-based system, e.g. Ubuntu 8.04, Red Hat 5, SUSE 11, etc., all 32-bit distros, where the locally installed version of Python doesn't impact what's in the tar.gz.
I just tested this and it works great.
First, get the copy of Python you want to install, untar it, and cd into the untarred folder.
Also get a copy of setuptools and untar that.
/opt/portapy used below is of course just the name I came up with for this post; it could be any path. The full path should be tarred up, and the same path should be used on any system you put this on, due to absolute-path linking.
mkdir /opt/portapy
cd <python source dir>
./configure --prefix=/opt/portapy && make && make install
cd <setuptools source dir>
/opt/portapy/bin/python ./setup.py install
Install virtualenv the same way (untar its source and run /opt/portapy/bin/python ./setup.py install) so that /opt/portapy/bin/virtualenv exists, then make the virtual env folder inside the portapy folder.
mkdir /opt/portapy/virtenv
/opt/portapy/bin/virtualenv /opt/portapy/virtenv
cd /opt/portapy/virtenv
source bin/activate
Done. You are ready to install all of your libraries here and have the option of creating multiple virtual envs this way.
You can then tar up the whole /opt/portapy folder and transport it to any Linux system of the same arch, within reason I suspect.
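A sketch of that last step (tarring with -C / so the archive stores the full opt/portapy path, matching the absolute-path linking mentioned above):

tar -czf portapy.tar.gz -C / opt/portapy
# on the target system:
tar -xzf portapy.tar.gz -C /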
I compiled 2.7.5 on CentOS 5.8 64-bit, moved the folder to a CentOS 6.9 system, and it runs perfectly.
I don't know how this is even possible. If it were, they wouldn't need to distribute binary packages of Python for different platforms. You can't simply distribute Python so that it will run on any platform; it has to be built from source for that arch. Virtualenv will expect you to tell it which system Python to use (using links).
This pretty much goes for almost any binary package that links against system libs. Again, if it were possible, we wouldn't need any platform-specific binary distributions.
You can, however, achieve part of what you want: running Python on another machine that doesn't have Python installed, as long as it's the same arch. This is the same concept behind freezing, or py2exe/py2app/PyInstaller: an interpreter is bundled into a standalone environment, so the app can run on any similar platform.
Edit
I just realized that while your question speaks about "system" agnostically, your title contains the reference "linux". There are different flavors of Linux, so in order for it to work you would have to build it fat for multiple archs and also completely contain the standalone links. You might try building a package with PyInstaller and including that in your project.
You can try just building Python from source in your virtualenv:
$ ./configure --prefix=/path/to/virtualenv && make && make install
If you still have problems with the links to libs, you can also investigate building it statically.
I'm not sure that working solely in Python is the way to go here. You might have better luck with Puppet or Chef, which are configuration tools that can be used to create a local environment. There is plenty of code out there to install virtualenv and Python on just about any Linux, plus OSX (probably not Windows, though).
Your workflow would be to install Chef or Puppet (your choice), run a script to install the Python you want, then enter a virtualenv and pip install any packages you might need.
Sorry this isn't as easy as virtualenv alone, but it is much more robust.
Well, since I rarely accept "can't be done", there is a way to do it. Warning: it isn't pretty and you should probably look into a different scenario.
What you will need to do is determine a standard location for this top-level directory. Second, using that directory as your root, you will need to compile Python on each Linux distribution you want to run this on. For this you would configure and compile Python to live in something like "/usr/local/myappname/platform/", in each case substituting "platform" with the name of the platform, such as "/usr/local/myappname/rhel/". If memory serves, the configure option you are looking for here is --prefix.
Once you have each distribution compiled, you will need a script to determine which one to use, and either set environment variables or have it create symlinks to the appropriate "installation" of Python; a sketch follows below. I would then use virtualenv and bootstrap in that tree to keep the "in-use" Python libraries even more specific.
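A hypothetical sketch of such a selector script, reusing the layout above (detection via /etc/os-release is an assumption; distros of the RHEL 5 era would need something like parsing /etc/redhat-release instead):

#!/bin/sh
# pick the Python build compiled for this distribution
PLATFORM=$(. /etc/os-release 2>/dev/null && echo "$ID")
ln -sfn "/usr/local/myappname/$PLATFORM" /usr/local/myappname/current
export PATH="/usr/local/myappname/current/bin:$PATH"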
I can't think of a common Linux distribution that doesn't have Python by default. As such, you could use setup.py and/or basic Python scripts to script this out, since you should be able to rely on Python being present, even if it's ye olde version as in RHEL installs. Personally I find the above method overly complicated, but it would meet your stated requirements, with the allowance for a final script. Of course, you could use shar (SHell ARchive) to tar all of this into a runnable shell script to do the installation and avoid the need for secondary scripts. If you gzip the resulting shell archive, then you can decompress it on target systems and execute it to set everything up.
All that said, I would not recommend this. I would recommend determining the minimum Python version you can run on, ensuring it is installed by the distribution whenever possible, and if need be pulling it down from a repo and installing it. Then use virtualenv and bootstrap with a requirements.txt to install the necessary Python libraries and apps into the virtualenv. For that, see this documentation.
I faced the same problem, so I created PortableVirtualenv. Your question is exactly what it was made for.
I use it as a base for a commercial multiplatform app I develop. (But PortableVirtualenv is public domain; use it freely.)
If needed, you can pip-install any package and zip the whole directory, so you also distribute the packages you need.
One nice option is to make a "snap" portable Linux application. They have a Python mode which lets you specify exactly what modules you need. From https://snapcraft.io/first-snap#python:
Snaps let you distribute a dependency-isolated Python app in an app store experience for end users.
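As a rough sketch of what that can look like (a hypothetical snapcraft.yaml; exact field names vary with the snapcraft version and chosen base, and this assumes your setup.py installs a console script named myapp):

name: myapp
version: '0.1'
summary: My portable Python app
description: Bundled together with its Python dependencies.
base: core18
confinement: strict
parts:
  myapp:
    plugin: python
    source: .
apps:
  myapp:
    command: bin/myapp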
Another option is to containerize your application with something like Docker. Then, instead of executing your script directly, the user actually runs a small OS with just your application and its dependencies. https://www.infoq.com/articles/docker-executable-images/ has more about executable containers.
Container images can also be used for short lived processes: a containerized executable meant to be run on your computer. These containers execute a single task, are short lived and can generally be removed after use. We call these executable images. Examples are compilers (Golang) or build tools (Maven), presentation software (I love to hack a simple presentation in Markdown format and let a RevealJS Docker image serve that) and browsers (a fresh contained browser to follow that fishy link). A real evangelist for executable images is Docker's own Jessie Frazelle. To get some great inspiration be sure to read her blog about them or check out this presentation at DockerCon 2015.
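As a minimal sketch of such an executable image (the base image, the script name myapp.py, and the tag myapp are placeholders):

# Dockerfile
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENTRYPOINT ["python", "myapp.py"]

Build it once with docker build -t myapp . and users can then invoke docker run --rm myapp, passing arguments as if it were a local command.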