I have a script that drives installation of a lot of packages. In one place, it uses pip. One of the packages requires its own special command-line argument for the build process.
pip enables install options to be passed in to the build process as follows:
pip install -U --timeout 30 $options --install-option='--hdf5=/usr/local/hdf5' tables
--install-option is an argument to pip. Its value, --hdf5=/usr/local/hdf5, will be passed on to the build process. So, the single quotes have to be there to group all of the text into one argument following the equals sign. You might say I could just use double quotes around the value of the install option. Well, at the command line I could.
But, here is the added complication. This is in a script. The parameter values for the pip command are passed to a function in an array. The array entry for this package looks like:
("tables,pip,,--install-option='--hdf5=/usr/local/hdf5'")
The receiving function parses the array entry with set as in this fragment:
IFS="," # to split apart the pkg array entries
for pkg in "${pkglist[#]}"; do
set -- ${pkg}
if [[ "$2" == "pip" ]]; then # $1 is pkg, $2 is pip, $3 is url, $4 is options
DoPip $1 $3 $4
...
So, DoPip, for this package, is seeing: DoPip tables '' --install-option='--hdf5=/usr/local/hdf5'
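To see the split in isolation, here is a minimal, self-contained sketch (the array entry is copied from above; note that the single quotes survive the expansion as literal characters):

pkg="tables,pip,,--install-option='--hdf5=/usr/local/hdf5'"
IFS=","
set -- $pkg
printf '<%s> ' "$1" "$2" "$3" "$4"; echo
# prints: <tables> <pip> <> <--install-option='--hdf5=/usr/local/hdf5'>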
The problem occurs in DoPip. I can't figure out how to expand the last argument when I need to run pip itself. I have done a bunch of debugging to see what happens. What happens is that the value of $3 is simply being dropped--it just disappears. It will echo in a string, but it will not work as part of a command.
Now look at the function DoPip. To help debug, I reassign the arguments to explicit variables. It's not necessary, but it helped make sure there weren't stupid mistakes on my part.
DoPip() {
  # run pip command to install packages
  # arguments: 1: package-name 2: optional source <URL>
  #            3: optional pip options
  pkgname=$1
  url=$2
  options=$3
Next, I set a variable source to be either the pkgname or the url, if the url is non-blank. I am skipping this fragment--it works.
To debug, I echo the reassigned arguments:
echo "1. The inbound arguments are: $pkgname $url $options"
The output LOOKS like it ought to work:
The inbound arguments are: tables --install-option='--hdf5=/usr/local/hdf5'
Here is the statement that actually runs pip with these arguments:
pip install -U --timeout 30 $options $source
With debugging on, here is what Bash actually sees and runs:
+ pip install -U --timeout 30 tables
Whoa! What happened to $options? It's GONE! In fact, immediately prior to this statement I repeat the echo to verify that no intervening part of the script caused the value to get flushed. Not a problem. I can echo the value of $options immediately prior--it's ok. Then, it's gone.
I can't figure out what is happening or how to do this. I have tried various ways of escaping the single quotes in the array where the string literal is originally created, based on reading about how very special single quotes are. Nothing works; the whole variable expansion just goes away.
I have tried doing the expansion in various ways:
pip install -U --timeout 30 "$options" $source
That doesn't work. The string in options appears, but surrounded by single quotes, so the pip command throws an error. Next, I tried:
pip install -U --timeout 30 "${options}" $source
That also fails: the single quotes and the curly braces appear and pip is unhappy again.
The --install-option argument is essential. The build fails without it.
There has to be some way to do this. Any suggestions?
$ bash -version
GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This script gave the following output:
#!/bin/bash -vx
options=--install-option='--hdf5=/usr/local/hdf5'
source=tables
pip install -U --timeout 30 $options $source
$ ./script.sh
#!/bin/bash -vx
options=--install-option='--hdf5=/usr/local/hdf5'
+ options=--install-option=--hdf5=/usr/local/hdf5
source=tables
+ source=tables
pip install -U --timeout 30 $options $source
+ pip install -U --timeout 30 --install-option=--hdf5=/usr/local/hdf5 tables
Downloading/unpacking tables
Downloading tables-3.1.1.tar.gz (6.7MB): 6.7MB downloaded
Running setup.py (path:/tmp/pip_build_ankur/tables/setup.py) egg_info for package tables
* Using Python 2.7.3 (default, Feb 27 2014, 19:58:35)
* Found numpy 1.6.1 package installed.
.. ERROR:: You need numexpr 2.0.0 or greater to run PyTables!
Complete output from command python setup.py egg_info:
* Using Python 2.7.3 (default, Feb 27 2014, 19:58:35)
* Found numpy 1.6.1 package installed.
.. ERROR:: You need numexpr 2.0.0 or greater to run PyTables!
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_ankur/tables
Storing debug log for failure in /home/ankur/.pip/pip.log
Ignore the errors; I can see the value of $options being correctly read and passed to the shell.
All of this was embedded in a very large script; I created a shorter version to focus on the problem. In the end, I adopted a different approach, relying on the environment variable HDF5_DIR rather than the command-line switch for pip.
Apparently, you can only do one or the other. In any case, in the cleaned-up code the argument was passed, but that is an error if the environment variable is present. Go figure.
I have the shorter code if anyone is interested.
When banging your head too hard, step away from the wall and find another approach. I'll call this one closed.
The installer for the python package tables (aka PyTables) finds the hdf5 libraries either with a build switch supplied to pip or via an environment variable set before starting the build. It was easier and more reliable in the bash script to simply create the environment variable immediately before building tables. There is no need to put it in .bash_profile; the environment variable only needs to exist in the shell where tables is built. Once tables is installed, there is no further need for it.
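Stripped of the script's array machinery, the equivalent for this one package is just (a minimal sketch, using the same paths as above):

export HDF5_DIR=/usr/local/hdf5
pip install -U --timeout 30 tables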
In this code fragment, pkglist is an array of modules to be installed and InstallPackages is a function that walks the array and calls the appropriate installer: pip in this case.
if [[ "$hdf5exists" == "True" ]]; then
# assert hdf libraries installed
export HDF5_DIR=/usr/local/hdf5
pkglist=("${hdf5_pkglist[#]}") # only install Python wrappers for hdf5
InstallPackages
fi
So, this is the solution I adopted.
I am trying to run some code on a server. On that server, we use Docker images to create notebooks inside directories, with commands like:
docker run -it --gpus "device=1" -p 8886:8886 -v /folder/directory:/workspace/work --name container-name --shm-size=60g --ulimit memlock=-1 --ulimit stack=67108864 --rm imageid jupyter notebook --port=8886 --ip=0.0.0.0 --allow-root --no-browser
Once the notebook is created from an image, we have two different environments with two different Python versions in the folder, designed to execute the code inside /folder/directory: venv3.6 and venv3.7.
Even though I didn't create them, I am confident that the environments worked at some point (there are checkpoints obtained from executions of the code by a colleague who worked on it before me). However, something must have been messed up at some point, maybe after some modifications to the libraries of the Docker image.
The problem is that, whenever I activate venv3.7 with source ./venv3.7/bin/activate and run a script with python script_name.py, the Python version that is executed is not 3.7 but 3.6.10. And when I go into /venv3.7/bin and try to access the python, python3 or python3.7 files, they cannot be accessed, moved or executed (i.e., if I enter /venv3.7/bin/python3.7 on the terminal, I get a "file not found" error).
When the environment is activated:
root@XXXX:/workspace/work/path# which python
/opt/conda/bin/python
root@XXXX:/workspace/work/path# source ./venv3.7/bin/activate
(venv3.7) root@XXXX:/workspace/work/path#
Following this stackoverflow post, I make the following checks:
(venv3.7) root@XXXX:/workspace/work/path# python -V
Python 3.6.10 :: Anaconda, Inc.
(venv3.7) root@XXXX:/workspace/work/path# echo $PATH
/workspace/work/path/venv3.7/bin:/usr/local/nvm/versions/node/v15.0.1/bin:/opt/conda/bin:/opt/cmake-3.14.6-Linux-x86_64/bin/:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin
(venv3.7) root@XXXX:/workspace/work/path# which python
/opt/conda/bin/python
(venv3.7) root@XXXX:/workspace/work/path# alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
This shows that the path is added correctly and there is no alias for python that could be interfering with the activation. But, still, the python command uses the version from /opt/conda/bin/python instead of /workspace/work/path/venv3.7/bin.
I also have checked that the path VIRTUAL_ENV in activate script (venv3.7/bin/activate) is correct.
I noticed that the file /venv3.7/pyvenv.cfg contains:
home = /usr/bin
include-system-site-packages = false
version = 3.7.5
And when I go to the directory /usr/bin, which should contain the Python on which the environment is based, it only has python2.7 files. Could that mean that, when the first directory in $PATH is searched, no valid version of Python is found?
My guess is that the python files (python, python3, python3.7) were symlinks that broke because the Python version in /usr/bin changed. However, I don't want to risk updating the Python version in that directory, because it would probably change the default Python in /opt/conda/bin/python instead, and I don't know much about Docker images. Do you think it would work? In that case, how would I do it?
As additional info, the python files inside venv3.6/bin seem to work well (they can be executed and copied), but maybe that is because /venv3.6/pyvenv.cfg points to the default python instead (in /opt/conda/bin/python). Also, after asking the original creator of the code, she doesn't know how to solve this issue either.
I need the environment to work, and recreating it is problematic, since many libraries were downloaded from different places (it was delicate work).
What do you suggest?
EDIT
I have tried recreating the environment with the Python version I need (3.7.5). Do you know of an easy way to install the same libraries as in the other environment, considering that I can't activate it?
I was thinking of using the library folders located in /venv3.7/lib, but it is not straightforward. Any idea on how to do it?
Also, would you recommend creating the new environment with virtualenv (to have a separate Python version) or, rather, with Anaconda?
Thank you so much for reading!
After checking the python3.7 file in the environment:
root@XXXX:/# cd workspace/work/path/venv3.7/bin
root@XXXX:/workspace/work/path/venv3.7/bin# stat python
File: python -> python3.7
Size: 9 Blocks: 0 IO Block: 4096 symbolic link
Device: XXXX Inode: XXXX Links: 1
Access: (XXXX) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-12-06 10:31:18.165001523 +0000
Modify: 2022-05-20 12:28:37.481538688 +0000
Change: 2022-05-20 12:28:37.481538688 +0000
Birth: -
root@XXXX:/workspace/work/path/venv3.7/bin# stat python3.7
File: python3.7 -> /usr/bin/python3.7
Size: 18 Blocks: 0 IO Block: 4096 symbolic link
Device: XXXX Inode: XXXX Links: 1
Access: (XXXX) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-12-06 10:31:18.165001523 +0000
Modify: 2022-05-20 12:28:37.481538688 +0000
Change: 2022-05-20 12:28:37.481538688 +0000
Birth: -
It became obvious that, as stated in the post, /usr/bin is the directory where python3.7 should be installed. That means the problem could be solved by installing it in that folder.
As I didn't know that was the default folder for the Python installation, I first tried installing Python from source, as described in several guides. However, even though the environment then found python3.7 in that folder, the source installation didn't work either.
So I just tried apt-get install python3.7. It took around 10 seconds and, when I tried the code again, it worked!
Next time your environment fails because the wrong Python version is executed, even though the aliases and $PATH are right (see this post for more details), remember to check where the python files in the environment point and verify that the Python installation they target is intact!
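A quick way to run that check (a sketch; the paths are the ones from this setup):

readlink -f /workspace/work/path/venv3.7/bin/python   # follow the whole symlink chain
ls -l /usr/bin/python3.7                              # verify the final target actually exists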
I hope this is useful for you.
I'm trying to execute a command for each file in a directory but while using their absolute path (such as /home/richi/mydir/myfile.py) instead of their relative path (such as myfile.py).
In other words, I want to execute a command on files in a directory based on their absolute path - similar to for file in *.py; do thecommand -a "$file"; done but not quite.
I'm asking this because I'm trying to implement a Travis CI script running in an Ubuntu 14.04 environment which will install and use pyminifier to recursively minify all the Python code files in a directory.
Please note that what I'm asking may be similar to this post, but it's not.
Since you're on a standard Linux distro with a full userland, you can just use the realpath command:
Print the resolved absolute file name…
For example:
$ pwd
/home/abarnert/src/test
$ touch 1
$ realpath 1
/home/abarnert/src/test/1
That's it.
If you don't know how to use that from within bash, you can run it in a command substitution using $(…) syntax:
$ echo $(realpath 1)
/home/abarnert/src/test/1
Of course you want to pass it the value of the variable file, but that's just as easy:
$ file=1
$ echo $(realpath "$file")
/home/abarnert/src/test/1
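Plugging that into the loop from the question (thecommand is the question's placeholder):

for file in *.py; do
  thecommand -a "$(realpath "$file")"
done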
I'm assuming you're using bash here. With a different sh-style shell, things will be different; with tcsh or zsh or fish or something, it may be even more different.
A really old userland, or a really stripped-down one (e.g., for an embedded system), might not include realpath. In that case, you can use readlink, since the GNU version, as usual, adds everything including a couple of kitchen sinks, and can be used as a realpath substitute.
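For example, GNU readlink's -f flag resolves a path to an absolute one much like realpath:

$ readlink -f 1
/home/abarnert/src/test/1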
Or, if worst comes to worst, Python has come with a realpath function since 2.2:
$(python -c 'import os,sys; print(os.path.realpath(sys.argv[1]))' "$file")
I usually install Python packages through pip.
For Google App Engine, I need to install packages to another target directory.
I've tried:
pip install -I flask-restful --target ./lib
but it fails with:
must supply either home or prefix/exec-prefix -- not both
How can I get this to work?
Are you using OS X and Homebrew? The Homebrew Python page https://github.com/Homebrew/brew/blob/master/docs/Homebrew-and-Python.md calls out a known issue with pip and a workaround.
Worked for me.
You can make this "empty prefix" the default by adding a
~/.pydistutils.cfg file with the following contents:
[install]
prefix=
Edit: The Homebrew page was later changed to recommend passing --prefix on the command line, as discussed in the comments below. Here is the last version which contained that text. Unfortunately this only works for sdists, not wheels.
The issue was reported to pip, which later fixed it for --user. That's probably why the section has now been removed from the Homebrew page. However, the problem still occurs when using --target as in the question above.
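For reference, the command-line form was presumably something like this (an assumption pieced together from this thread, not the page's exact text; per the note above it only helps for sdists):

pip install -I flask-restful --target ./lib --install-option="--prefix="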
I believe there is a simpler solution to this problem (Homebrew's Python on macOS) that won't break your normal pip operations.
All you have to do is to create a setup.cfg file at the root directory of your project, usually where your main __init__.py or executable py file is. So if the root folder of your project is: /path/to/my/project/, create a setup.cfg file in there and put the magic words inside:
[install]
prefix=
OK, now you should be able to run pip's commands for that folder:
pip install package -t /path/to/my/project/
This command will run gracefully for that folder only. Just copy setup.cfg to whatever other projects you might have. No need to write a .pydistutils.cfg in your home directory.
After you are done installing the modules, you may remove setup.cfg.
On OS X (Mac), assuming a project folder called /var/myproject:
cd /var/myproject
Create a file called setup.cfg and add
[install]
prefix=
Run pip install <packagename> -t .
Another solution* for Homebrew users is simply to use a virtualenv.
Of course, that may remove the need for the target directory anyway - but even if it doesn't, I've found --target works by default (as in, without creating/modifying a config file) when in a virtual environment.
*I say solution; perhaps it's just another motivation to meticulously use venvs...
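A minimal sketch of that flow (the venv path is illustrative):

virtualenv venv
source venv/bin/activate
pip install -I flask-restful --target ./lib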
I hit errors with the other recommendations around --install-option="--prefix=lib". The only thing I found that worked is using PYTHONUSERBASE as described here.
export PYTHONUSERBASE=lib
pip install -I flask-restful --user
this is not exactly the same as --target, but it does the trick for me in any case.
As others mentioned, this is a known bug with pip and Python installed via Homebrew.
If you create a ~/.pydistutils.cfg file with the "empty prefix" instruction, it will fix this problem, but it will break normal pip operations.
Until this bug is officially addressed, one of the options would be to create your own bash script that would handle this case:
#!/bin/bash
name=''
target=''
while getopts 'n:t:' flag; do
  case "${flag}" in
    n) name="${OPTARG}" ;;
    t) target="${OPTARG}" ;;
  esac
done

if [ -z "$target" ]; then
  echo "Target parameter must be provided"
  exit 1
fi

if [ -z "$name" ]; then
  echo "Name parameter must be provided"
  exit 1
fi

# current workaround for homebrew bug:
# write the "empty prefix" config before running pip
file="$HOME/.pydistutils.cfg"
touch "$file"
/bin/cat <<EOM >"$file"
[install]
prefix=
EOM
# end of current workaround for homebrew bug

pip install -I "$name" --target "$target"

# current workaround for homebrew bug: remove the config again
rm -f "$file"
# end of current workaround for homebrew bug
This script wraps your command and:
accepts name and target parameters
checks if those parameters are empty
creates ~/.pydistutils.cfg file with "empty prefix" instruction in it
executes your pip command with provided parameters
removes ~/.pydistutils.cfg file
This script can be changed and adapted to address your needs, but you get the idea. And it allows you to run your command without breaking pip. Hope it helps :)
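Assuming you saved it as pip-target.sh and made it executable, usage would look like:

./pip-target.sh -n flask-restful -t ./lib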
If you're using virtualenv*, it might be a good idea to double check which pip you're using.
If you see something like /usr/local/bin/pip you've broken out of your environment. Reactivating your virtualenv will fix this:
VirtualEnv: $ source bin/activate
VirtualFish: $ vf activate [environ]
*: I use virtualfish, but I assume this tip is relevant to both.
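The check itself is quick (paths illustrative):

$ which pip
/usr/local/bin/pip    # outside the venv: reactivate
$ source bin/activate
$ which pip
/path/to/venv/bin/pip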
I had a similar issue.
I used the --system flag to avoid the error, as I describe in another thread where I explain the specific case of my situation.
I post this here hoping it can help anyone facing the same problem.
I have just installed python 2.7 using macports as:
sudo port install py27-numpy py27-scipy py27-matplotlib py27-ipython +notebook py27-pandas py27-sympy py27-nose
during the process it found some issues, mainly broken files related to py25-hashlib, which I managed to fix. Now everything seems to be OK. I tested a few programs and they run as expected. Currently, I have two versions of python: 2.5 (the default, from when I worked at my former institution) and 2.7 (just installed):
which python
/usr/stsci/pyssg/Python-2.5.1/bin/python
which python2.7
/opt/local/bin/python2.7
The next move would be to set the new Python version 2.7 as the default:
sudo port select --set python python27
sudo port select --set ipython ipython27
My question is: is there a way to go back to 2.5 in case something goes wrong?
I know a priori, nothing has to go wrong. But I have a few data reduction and analysis routines that work perfectly with the 2.5 version and I want to make sure I don't mess up before setting the default.
If you want to revert, you can modify your .bash_profile or other login-shell initialization to fix $PATH so that it does not add "/Library/Frameworks/Python.framework/Versions/2.5/bin" to $PATH and/or does not put /usr/local/bin before /usr/bin.
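If both interpreters were installed through MacPorts, the select mechanism itself should be reversible (an assumption; it requires a python25 port to be installed):

sudo port select --list python    # see which versions are available
sudo port select --set python python25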
If you want to permanently remove the python.org installed version,
paste the following lines up to and including the chmod into a posix-
compatible shell:
tmpfile=/tmp/generate_file_list
cat <<"NOEXPAND" > "${tmpfile}"
#!/bin/sh
version="${1:-"2.5"}"
file -h /usr/local/bin/* | grep \
"symbolic link to ../../../Library/Frameworks/Python.framework/"\
"Versions/${version}" | cut -d : -f 1
echo "/Library/Frameworks/Python.framework/Versions/${version}"
echo "/Applications/Python ${version}"
set -- Applications Documentation Framework ProfileChanges \
SystemFixes UnixTools
for package do
echo "/Library/Receipts/Python${package}-${version}.pkg"
done
NOEXPAND
chmod ug+x ${tmpfile}
...excerpted from a troubleshooting question on the python website
I am using the 'rPython' package to call Python within R, but I am unable to make R refer to my Python virtual environment.
In R, I have tried using
system('. /home/username/Documents/myenv/env/bin/activate')
but after running the above my Python library path does not change (which I check via python.exec("print sys.path")). When I run
python.exec('import nltk')
I am thrown the error:
Error in python.exec("import nltk") : No module named nltk
although it is there in my virtual env.
I am using R 3.0.2, Python 2.7.4 on Ubuntu 13.04.
Also, I know I can change the python library path from within R by using
python.exec("sys.path='\your\path'")
but I don't want this to be entered manually over and over again whenever a new python package is installed.
Thanks in advance!
Use the "activate" bash script before running R, so that the R process inherits the changed environment variables.
$ source myvirtualenv/bin/activate
$ R
Now rPython should be able to use the packages in your virtualenv.
Works for me. It may behave strangely if the Python version you made the virtualenv with is different from the one rPython links into the R process.
Expanding on @PaulHarrison's answer, you can mimic what .../activate is doing directly in the environment (before starting Python from R).
Here's one method for determining what vars are modified:
$ set > pyenv-pre
$ . /path/to/venv/activate
(venvname) $ set > pyenv-post
(venvname) $ diff -uw pyenv-pre pyenv-post
This gave me something like:
--- pyenv-pre 2018-12-02 15:16:43.093203865 -0800
+++ pyenv-post 2018-12-02 15:17:34.084999718 -0800
@@ -33,10 +33,10 @@
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+PATH=/path/to/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PIPESTATUS=([0]="0")
PPID=325990
-PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
+PS1='(venvname) \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
PS2='> '
PS4='+ '
PWD=/
@@ -50,10 +50,13 @@
TERM=xterm
UID=3000019
USER='helloworld'
+VIRTUAL_ENV=/path/to/venv
XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop
XDG_RUNTIME_DIR=/run/user/3000019
XDG_SESSION_ID=27577
-_=set
+_=/path/to/venv/bin/activate
+_OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+_OLD_VIRTUAL_PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
__git_printf_supports_v=yes
__grub_script_check_program=grub-script-check
_backup_glob='@(#*#|*@(~|.@(bak|orig|rej|swp|dpkg*|rpm@(orig|new|save))))'
@@ -2390,6 +2393,31 @@
fi;
fi
}
+deactivate () ... rest of this function snipped for brevity
So it appears that the important envvars to update are:
PATH: prepend the venv bin directory to the existing paths
VIRTUAL_ENV: set to /path/to/venv
I believe the other changes (_OLD_VIRTUAL_* and deactivate () ...) are optional and really only used to back out of the venv activation.
Looking at the .../activate script verifies these are most of the steps taken. Another step is to unset PYTHONHOME if it is set, which may not show in the diff above if you didn't have it set previously.
To R-ize this:
Sys.setenv(
PATH = paste("/path/to/venv/bin", Sys.getenv("PATH"), sep = .Platform$path.sep),
VIRTUAL_ENV = "/path/to/venv"
)
Sys.unsetenv("PYTHONHOME") # works whether previously set or not
I've had luck getting scripts to use my pyenv installation by using:
#!/usr/bin/env python
So maybe try pointing R to that path (sans #!, of course).
I managed to get it working by using bash -c:
system("/bin/bash -c \"source ./pydatatable/py-pydatatable/bin/activate && python -c 'import datatable as dt; print(dt.__version__)'\"")