I'm a developer and I love to code in Python. The problem is that most of the servers on which I need to deploy my packages are Solaris 10 and their Python is not up to date, so I configured one development server as my dev environment, and I didn't have problems following my usual steps:
CC="/usr/sfw/bin/gcc -m64" CXX="/usr/sfw/bin/gcc -m64"./configure -- prefix=$HOME/.local
/usr/sfw/bin/gmake
/usr/sfw/bin/gmake install
When necessary I just added CFLAGS, CPPFLAGS, LDFLAGS and so on. I did it this way because I'm not a privileged user.
With those steps I created a script to build all those packages. It works, but only in this environment, and it takes a lot of time (a minor problem and not the most important one). The main problem is that after running my script as another user on the same server I got tons of errors, which began to frustrate me. I suppose the problem is that the servers use different paths and libraries, and I can't inspect all of them to compare, since I'm not privileged to do that.
I'm trying to find a clear way to reuse my local dev environment and install it in any other Solaris 10 environment.
I found that packages seem to be the way to share them, reducing installation time and making sure they actually get installed; is that true?
In my experience, packages always seem to be compiled against root paths such as /usr/local/, but I don't know if that is really the case.
Recently I read that http://www.sunfreeware.com/ (discontinued) delivered their packages in $HOME/usr/local. Regrettably, I couldn't get any source from there to look at how they did it.
If I compile without a prefix in my dev environment and run gmake install with DESTDIR set to $HOME/.local/, does that preserve a staged tree I can use to build a package?
Will pkgadd then install the packages into that same $HOME/.local path?
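Something like this is what I have in mind (just a sketch):
# configure with the default prefix (/usr/local) instead of $HOME/.local
CC="/usr/sfw/bin/gcc -m64" CXX="/usr/sfw/bin/gcc -m64" ./configure
/usr/sfw/bin/gmake
# stage the install under $HOME/.local/ instead of writing to /usr/local
/usr/sfw/bin/gmake install DESTDIR=$HOME/.local/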
A CentOS 7 system, Python 2.7. The OS-installed Python has the directory
/usr/lib/python2.7/site-packages
and that is where a
python setup.py install
command would install a package. On a computing cluster I would like to install some packages so that they are referenced from that directory but which actually reside in an NFS served directory here:
/usr/common/lib/python2.7/site-packages
That is, I do not want to have to run setup.py on each of the cluster nodes to do a local install on each, duplicating the package on every machine. The packages already installed locally must not be affected, some of those are used by the OS's commands. Also the local ones must work even if the network is down for some reason. I am not trying to set up a virtual environment, I am only trying to place a common set of packages in a different directory in such a way that the OS supplied python sees them.
It isn't clear to me what the best way to do this is. It seems like such a common problem that there must be a standard or preferred way of doing it, and if possible, that is the method I would like to use.
This command
/usr/bin/python setup.py install --prefix=/usr/common
would probably install into the target directory. However the "python" command on the cluster nodes will not know this package is present, and there is no "network" python program that corresponds to the shared site-packages.
After the network install one could make symlinks from the local to the shared directory for each of the files created. That would be acceptable, assuming that is sufficient.
It looks like the PYTHONPATH environment variable might also work here, although I'm unclear about what it expects for "path" (the full path to site-packages, or just the /usr/common part).
EDIT: This does seem to work as needed, at least for the test case. The software package in question was installed using --prefix, as above. PYTHONPATH was not previously defined.
export PYTHONPATH=/usr/common/lib/python2.7/site-packages
python $PATH_TO_START_SCRIPT/start.py
ran correctly.
Thanks.
I have looked into other Python module distribution questions. My need is a bit different (I think! I am a Python newbie+).
I have a bunch of Python scripts that I need to execute on remote machines. Here is what the target environment looks like:
The machines will have base python run time installed
I will have a SSH account; I can login or execute commands remotely using ssh
I can copy files (scp) into my home dir
I am NOT allowed to install anything on the machine; the machines may not even have access to the Internet
my scripts may use some 'exotic' python modules -- most likely they won't be present in the target machine
after the audit, my home directory will be nuked from the machine (leave no trace)
So what I'd like to do is:
copy a directory structure of Python scripts + modules to the remote machine (say into /home/audituser/scripts, with modules copied into /home/audituser/scripts/python_lib)
then execute a script (say /home/audituser/scripts/myscript.py). This script will need to resolve all modules used from the 'python_lib' subdirectory.
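For example, would something like this work? (A rough sketch; 'remotehost' is just a placeholder for the target machine, and the paths are the ones above.)
# copy everything over, then run the script with the bundled modules on the import path
scp -r scripts audituser@remotehost:/home/audituser/
ssh audituser@remotehost 'PYTHONPATH=/home/audituser/scripts/python_lib python /home/audituser/scripts/myscript.py'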
Is this possible? Or is there a better way of doing this? I guess what I am looking for is a way to 'relocate' the 3rd-party modules into the scripts dir.
thanks in advance!
Are the remote machines the same as each other? And, if so, can you set up a development machine that's effectively the same as the remote machines?
If so, virtualenv makes this almost trivial. Create a virtualenv on your dev machine, use the virtualenv copy of pip to install any third-party modules into it, build your script within it, then just copy that entire environment to each remote machine.
There are three things that make it potentially non-trivial:
If the remote machines don't (and can't) have virtualenv installed, you need to do one of the following:
In many cases, just copying a --relocatable environment over just works. See the documentation section on "Making Environments Relocatable".
You can always bundle virtualenv itself, and pip install --user virtualenv (and, if they don't even have pip, a few steps before that) on each machine; see the sketch after this list. This will leave the user account in a permanently-changed state. (But fortunately, your user account is going to be nuked, so who cares?)
You can write your own manual bootstrapping. See the section on "Creating Your Own Bootstrap Scripts".
By default, you get a lot more than you need—the Python executable, the standard library, etc.
If the machines aren't identical, this may not work, or at least might not be as efficient.
Even if they are, you're still often making your bundle orders of magnitude bigger.
See the documentation sections on Using Virtualenv without bin/python, --system-site-packages, and possibly bootstrapping.
If any of the Python modules you're installing also need C libraries (e.g., libxml2 for lxml), virtualenv doesn't help with that. In fact, you will need the C libraries to be almost exactly the same (same path, compatible version).
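To illustrate the second bootstrap option above (bundling virtualenv and installing it per user), here's a rough sketch; the filenames and host are illustrative, and it assumes pip is already present on the target:
# copy the virtualenv sdist over, install it into the user site, then create the env
scp virtualenv-*.tar.gz user@remote:
# --user installs typically put the console script in ~/.local/bin
ssh user@remote 'pip install --user virtualenv-*.tar.gz && ~/.local/bin/virtualenv ~/env'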
Three other alternatives:
If your needs are simple enough (or the least-simple parts involve things that virtualenv doesn't help with, like installing libxml2), it may be easier to just bundle the .egg/.tgz/whatever files for third-party modules, and write a script that does a pip install --user and so on for each one, and then you're done.
Just because you don't need a full app-distribution system doesn't mean you can't use one. py2app, py2exe, cx_freeze, etc. aren't all that complicated, especially in simple cases, and having a click-and-go executable to copy around is even easier than having an explicit environment.
zc.buildout is an amazingly flexible and manageable tool that can do the equivalent of any of the three alternatives. The main downside is that there's a much, much steeper learning curve.
You can use virtualenv to create a self-contained environment for your project. This can house your own script, as well as any dependency libraries. Then you can make the env relocatable (--relocatable), and sync it over to the target machine, activate it, and run your scripts.
If these machines do have network access (not internet, but just local network), you can also place the virtualenv on a shared location and activate from there.
It looks something like this:
virtualenv --no-site-packages portable_proj
cd portable_proj/
source bin/activate
# install some deps
pip install xyz
virtualenv --relocatable .
Now portable_proj can be distributed to other machines.
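For example, a rough sketch of the transfer (host and script name are placeholders):
# from the directory that contains portable_proj
tar czf portable_proj.tgz portable_proj/
scp portable_proj.tgz user@target:
ssh user@target 'tar xzf portable_proj.tgz && cd portable_proj && . bin/activate && python yourscript.py'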
I'm looking to create the following:
A portable version of python that can be run on any system (with any previous version of python or no python installed) and have it pre-configured with various python packages (ie, django, lxml, pysqlite, etc)
The closest I've found to the above is virtualenv, but this only goes so far.
If I package up a nice virtualenv for Python on one machine, it contains symlinks to a lot of the libraries it needs. I can take those symlinks and convert them to their actual files, but if I try to move this entire directory to another machine, I get seg fault after seg fault.
To launch python on a different machine, I'm using:
LD_LIBRARY_PATH=lib/ ./bin/python
and in lib/ I have all of the shared libraries I copied from the original machine. The problem here is that these shared libraries might rely on other shared libraries that I'm not including, so executing this on other Linux distros does not work, probably because it falls back on older shared libraries installed on the system that do not work with what I copied over.
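For example, I assume I could check each bundled library with ldd (a rough sketch), but chasing down every indirect dependency by hand gets tedious:
# list the shared objects the interpreter and each bundled library link against
LD_LIBRARY_PATH=lib/ ldd ./bin/python
for f in lib/*.so*; do ldd "$f"; done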
Anyone have an idea on how to get this working? Is this even possible?
EDIT:
To clarify, the desired outcome is to create a tar.gz of a python binary and associated packages (django, lxml, pysqlite, etc) that can be extracted and run on any linux based system, ie (ubuntu 8.04, redhat 5, suse 11, etc), all 32bit distros, where the locally installed version of python doesn't impact what's in the tar.gz.
I just tested this and it works great.
Get the copy of Python you want to install, untar it, and cd into the untarred folder first.
Also get a copy of setuptools and untar that.
/opt/portapy used below is of course just the name I came up with for this post; it could be any path. The full path should be tarred up, and the same path should be used on any system you put this on, due to absolute path linking.
mkdir /opt/portapy
cd <python source dir>
./configure --prefix=/opt/portapy && make && make install
cd <setuptools source dir>
/opt/portapy/bin/python ./setup.py install
Make the virtual env folder inside the portapy folder.
mkdir /opt/portapy/virtenv
/opt/portapy/bin/virtualenv /opt/portapy/virtenv
cd /opt/portapy/virtenv
source bin/activate
Done. You are ready to install all of your libraries here and have the option of creating multiple virtual envs this way.
You can then tar up the whole /opt/portapy folder and transport it to any Linux system of the same arch, within reason I suspect.
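A rough sketch of that transport step (assuming you can write to /opt on the target):
# pack it up with paths relative to / so it lands in the same absolute location
tar czf portapy.tgz -C / opt/portapy
# on the target system (same arch), extract back to /
tar xzf portapy.tgz -C /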
I compiled 2.7.5 on CentOS 5.8 64-bit and moved the folder to a CentOS 6.9 system and it runs perfectly.
I don't know how this is even possible. If it were, they wouldn't need to distribute binary packages of Python for different platforms. You can't simply distribute a Python that will run on any platform. It has to be built from source for that arch. Virtualenv will expect you to tell it which system Python to use (using links).
This pretty much goes for almost any binary package that links against system libs. Again, if it were possible, we wouldn't need any platform specific binary distributions.
You can, however, achieve part of what you want. That is, running Python on another machine that doesn't have Python installed, as long as it's the same arch. This is the same concept behind freezing, or py2exe/py2app/pyinstaller. An interpreter is bundled into a standalone environment. So the app can run on any similar platform.
Edit
I just realized that while your question speaks about "system" agnostically, your title contains the reference "linux". There are different flavors of linux, so in order for it to work you would have to build it fat for multiple archs and also completely contain the standalone links. You might try building a package with pyinstaller and using that to include in your project.
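For example, a rough sketch (yourscript.py is a placeholder; the bundled executable ends up under dist/):
$ pip install pyinstaller
$ pyinstaller --onefile yourscript.py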
You can try just building python from source, in your virtualenv:
$ ./configure --prefix=/path/to/virtualenv && make && make install
If you still have problems with the links to libs, you can also investigate building it statically.
I'm not sure that working solely in Python is the way to go here. You might have better luck with Puppet or Chef, which are configuration tools that can be used to create a local environment. There is plenty of code out there to install virtualenv and Python on just about any Linux, plus OS X (probably not Windows though).
Your workflow would be to install Chef or Puppet (your choice), run a script to install the Python you want, then enter a virtualenv and pip install any packages you might need.
Sorry this isn't as easy as virtualenv alone, but it is much more robust.
Well, since I rarely accept "can't be done", there is a way to do it. Warning: it isn't pretty and you should probably look into a different scenario.
What you will need to do is determine a standard location for this top-level directory. Second, using that directory as your root, you will need to compile Python on each Linux distribution you want to run this on. For this you would use something like "/usr/local/myappname/platform/" to configure and compile Python to live in. In each case substitute "platform" with the name of the platform, such as "/usr/local/myappname/rhel/". If memory serves, the configure option you are looking for here is --prefix.
Once you have each distribution compiled you will need a script to determine which one to use and either set environment variables or have it create symlinks to the appropriate "installation" of python. I would then use virtualenv and bootstrap in that tree to keep the "in-use" python libraries even more specific.
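A rough sketch of such a selector script (the platform detection is only illustrative and would need adjusting for the distributions you actually target):
#!/bin/sh
# pick the per-distro build based on which release file exists
if [ -f /etc/redhat-release ]; then
    PLATFORM=rhel
elif [ -f /etc/SuSE-release ]; then
    PLATFORM=suse
else
    PLATFORM=generic
fi
export PATH="/usr/local/myappname/$PLATFORM/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/myappname/$PLATFORM/lib:$LD_LIBRARY_PATH"
exec "/usr/local/myappname/$PLATFORM/bin/python" "$@"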
I can't think of a common Linux distribution that doesn't have Python by default. As such you could use setup.py and/or basic Python scripts to script this out, since you should be able to rely on Python being present - even if it's ye olde version as in RHEL installs. Personally I find the above method overly complicated, but it would meet your stated requirements with the allowance for a final script. Of course, you could use shar (SHell ARchive) to tar all of this into a runnable shell script to do the installation and avoid the need for secondary scripts. If you gzip the resulting shell archive then you can decompress it on target systems and execute it to set everything up.
All that said, I would not recommend this. I would recommend determining the minimum Python version you can run on and ensuring that it is installed by the distribution whenever possible, and if need be pulling it down from a repo and installing it. Then, use virtualenv and bootstrap with a requirements.txt to install the necessary Python libraries and apps into the virtualenv. For that see this documentation.
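For example, a minimal sketch (assuming requirements.txt lists the libraries your app needs):
virtualenv --no-site-packages myenv
. myenv/bin/activate
pip install -r requirements.txt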
I faced the same problem, so I created PortableVirtualenv. Your question is just the definition of it.
I use it as a base for commercial multiplatform app I develop. (But PortableVirtualenv is public domain - use it freely.)
If needed, you can pip-install any package and zip the whole directory to distribute the packages you need along with it.
One nice option is to make a "snap" portable Linux application. They have a Python mode which lets you specify exactly what modules you need. From https://snapcraft.io/first-snap#python :
Snaps let you distribute a dependency-isolated Python app in an app store experience for end users.
Another option is to containerize your application with something like docker. Then instead of executing your script directly, the user is actually running a small OS with just your application and its dependencies. https://www.infoq.com/articles/docker-executable-images/ has more about executable containers.
Container images can also be used for short lived processes: a containerized executable meant to be run on your computer. These containers execute a single task, are short lived and can generally be removed after use. We call these executable images. Examples are compilers (Golang) or build tools (Maven), presentation software (I love to hack a simple presentation in Markdown format and let a RevealJS Docker image serve that) and browsers (a fresh contained browser to follow that fishy link). A real evangelist for executable images is Docker's own Jessie Frazelle. To get some great inspiration be sure to read her blog about them or check out this presentation at DockerCon 2015.
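As a rough sketch of the idea (this skips building a custom image and simply runs the script inside a stock Python image; the image tag and script name are placeholders):
docker run --rm -v "$PWD":/app -w /app python:2.7 python yourscript.py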
I'm developing on Snow Leopard and going through the various "how tos" to get the MySQLdb package installed and working (uphill battle). Things are a mess and I'd like to regain confidence with a fresh, clean, as close to factory install of Python 2.6.
What folders should I clean out?
What should I run?
What symbolic links should I destroy or create?
One thing you should not do is try to remove or change any of the Apple-supplied python files or links: they are in /usr/bin and /System/Library/Frameworks/Python.framework. These are part of OS X and managed by Apple. It is fine to clean up any unnecessary packages you have installed for that Python. They are in /Library/Python. If you installed a python.org Python and want to remove it, most of the files are in /Library/Frameworks/Python.framework. See here for complete instructions on how to remove them. And anything you installed into /usr/local is fair game.
Using virtualenvs is a fine idea but it's slightly less important on OS X where the concept of framework builds makes it easier to support multiple Python versions than on some other platforms.
The bigger issue, especially trying to use MySQL with Python, is getting all of the necessary non-Python libraries installed and built properly, which is non-trivial given the variety of options available on OS X. For instance, depending on which Python instance and which OS X level you are running, you may need 32-bit or 64-bit or, possibly, both versions of things like the MySQL client libraries and the MySQLdb adapter. For that reason, I highly recommend using a complete solution from MacPorts. That way you have a good chance of getting all the right components built compatibly - and easily.
If necessary, install the base MacPorts as described on the MacPorts website then:
$ sudo port selfupdate
$ sudo port install py26-mysql
and that will pull in and build everything you need and make it available in /opt/local/bin. There are also plenty of other ports available, for instance:
$ sudo port install py26-virtualenv
Virtualenv might still work for you. Install it, then create virtual python environments with the --no-site-packages option. This won't clean up your base system, but should allow you to develop in pretty good isolation from the base system.
My experience doing development on MacOSX is that the directories for libraries and installation tools are just different enough to cause a lot of problems that you end up having to fix by hand. Eventually, your computer becomes a sketchy wasteland of files and folders duplicated all over the place in an effort to solve these problems. A lot of hand-tuned configuration files, too. The thought of getting my environment set up again from scratch gives me the chills.
Then, when it's time to deploy, you've got to do it over again in reverse (unless you're deploying to an XServe, which is unlikely).
Learn from my mistake: set up a Linux VM and do your development there. At least, run your development "server" there, even if you edit the code files on your Mac.
When doing a "port selfupdate", rsync times out with rsync.macports.org. There are mirror sites available to use.
I am supporting an application with a hard dependency on python-devel 2.3.7. The application runs the python interpreter embedded, attempting to load libpython2.3.so - but since the local machine has libpython2.4.so under /usr/lib64, the application is failing.
I see that there are RPMs for python-devel (but not version 2.3.x). Another wrinkle is that I don't want to overwrite the existing Python under /usr/lib (I don't have su anyway). What I want to do is place the older libpython somewhere in my home directory (i.e. /home/noahz/lib) and use PATH and LD_LIBRARY_PATH to point to the older version for this application.
What I'm trying to find out (but can't seem to craft the right google search for) is:
1) Where do I download python-devel-2.3 or libpython2.3.so.1.0 (if either is available)
2a) If I can't download python-devel-2.3, how do I build libpython2.3.so from source (already downloaded Python-2.3.tgz and
2b) Is building libpython2.3.so.1.0 from source and pointing to it with LD_LIBRARY_PATH good enough, or am I going to run into other problems (other dependencies)
3) In general, am I approaching this problem the right way?
ADDITIONAL INFO:
I attempted to symlink (ln -s) to the later version. This caused the app to fail silently.
Distro is Red Hat Enterprise Linux 5 (RHEL5) - for x86_64
You can use the Python RPMs linked from the Python home page that ChristopheD mentioned.
You can extract the RPM's using cpio, as they are just specialized cpio archives.
Your method of extracting them to your home directory and setting LD_LIBRARY_PATH and PATH should work; I use this all the time for hand-built newer versions of projects I also have installed.
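A rough sketch of the extraction (the rpm filename and the ~/python23 directory are just examples):
mkdir -p ~/python23 && cd ~/python23
# the payload unpacks with relative usr/... paths into the current directory
rpm2cpio /path/to/python-2.3.x.rpm | cpio -idmv
export LD_LIBRARY_PATH=$HOME/python23/usr/lib:$LD_LIBRARY_PATH
export PATH=$HOME/python23/usr/bin:$PATH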
Don't focus on the -devel package though; you need the main package. You can unpack the -devel one as well, but the only thing you'll actually use from it is the libpython2.3.so symlink that points to the actual library, and you can just as well create this by hand.
Whether this is the right approach depends on what you are trying to do. If all you're trying to do is to get this one application to run for you personally, then this hack sounds fine.
If you wanted to actually distribute something to other people for running this application, and you have no way of fixing the actual application, you should consider building an rpm of the older python version that doesn't conflict with the system-installed one.
Can you use one of these rpm's?
What specific distro are you on?
http://www.python.org/download/releases/2.3.3/rpms/
http://rpm.pbone.net/index.php3/stat/4/idpl/3171326/com/python-devel-2.3-4.i586.rpm.html